Friction is the force that opposes the relative motion, or the tendency toward such motion, of two surfaces in contact. It is not, however, a fundamental force, as it originates from the electromagnetic forces and exchange forces between atoms. When the surfaces in contact move relative to each other, friction converts kinetic energy into thermal energy, or heat (atomic vibrations). Friction between solid objects and fluids (gases or liquids) is called fluid friction.

Friction is an extremely important force. For example, it allows us to walk on the ground without slipping, it helps propel automobiles and other ground transport, and it is involved in holding nails, screws, and nuts. On the other hand, friction also causes wear and tear on the materials in contact.

The classical approximation of the force of friction, known as Coulomb friction (named after Charles-Augustin de Coulomb), is expressed as Ff = μR, where:
- μ is the coefficient of friction,
- R is the reaction force normal to the contact surface,
- Ff is the maximum possible force exerted by friction.

This force is exerted in the direction opposite the object's motion. The law follows mathematically from the fact that contacting surfaces make atomically close contact over only an extremely small fraction of their overall surface area, and this contact area is proportional to the load (until saturation, which occurs when the entire area is in atomic contact and the friction force increases no further). This simple (although incomplete) representation of friction is adequate for the analysis of many physical systems.

Coefficient of friction
The coefficient of friction (also known as the frictional coefficient) is a dimensionless scalar value that describes the ratio of the force of friction between two bodies to the force pressing them together. The coefficient of friction depends on the materials used; for example, ice on metal has a low coefficient of friction (they slide past each other easily), while rubber on pavement has a high coefficient of friction (they do not slide past each other easily). Coefficients of friction need not be less than 1: under good conditions, a tire on concrete may have a coefficient of friction of 1.7. Magnetically attractive surfaces can have very large friction coefficients, and, theoretically, glued or welded surfaces have infinite friction coefficients.

Sliding (kinetic) friction and static friction are distinct concepts. For sliding friction, the force of friction does not vary with the area of contact between the two objects; in other words, sliding friction does not depend on the size of the contact area. When the surfaces are adhesive, Coulomb friction becomes a very poor approximation (for example, transparent tape resists sliding even when there is no normal force, or a negative normal force). In this case, the frictional force may depend on the area of contact. Some drag racing tires are adhesive in this way.

The force of friction is always exerted in a direction that opposes movement (for kinetic friction) or potential movement (for static friction) between the two surfaces. For example, a curling stone sliding along the ice experiences a kinetic friction force slowing it down. For an example of potential movement, the drive wheels of an accelerating car experience a frictional force pointing forward; if they did not, the wheels would spin, and the rubber would slide backwards along the pavement.
Note that it is not the direction of movement of the vehicle they oppose but the direction of (potential) sliding between tire and road.

The coefficient of friction is an empirical measurement: it has to be determined experimentally and cannot be found through calculation. Rougher surfaces tend to have higher values. Most dry materials in combination give friction coefficient values from 0.3 to 0.6, and it is difficult to maintain values outside this range. A value of 0.0 would mean there is no friction at all. Rubber in contact with other surfaces can yield friction coefficients from 1.0 to 2.0. The coefficient of friction, multiplied by the normal reaction force exerted on the object by the contact surface, gives the maximum frictional force opposing sliding. However, if the force pulling on the object is less than this maximum, the force of friction is equal to the pulling force. To move the object, you must pull with a force greater than the maximum value of friction.

Types of friction
There are three types of frictional forces:
- Static friction is the friction acting on a body that is not in motion but has a force acting on it. Static friction is equal in magnitude to the applied force (because the body isn't moving). Static friction acts because the body tends to move when a force is applied to it.
- Limiting friction is the friction on a body just before it starts moving. Limiting friction is generally the highest of the three.
- Kinetic friction is the friction that acts on a body while it is moving. Kinetic friction is usually smaller than limiting friction.

The kinetic frictional force at a solid-solid interface is given by Fk = μkR, where R is the normal reaction force acting between the interface and the object and μk is the coefficient of kinetic friction. The value of the coefficient depends upon the nature of the surfaces. The limiting friction is given by Fl = μlR, where R is the normal reaction force acting between the interface and the object and μl is the coefficient of limiting friction. For a fluid, the frictional force is directly proportional to the velocity of the object.

Static friction occurs when the two objects are not moving relative to each other (like a book on a desk). The coefficient of static friction is typically denoted as μs. The initial force needed to get an object moving is often dominated by static friction, which in most cases is higher than the kinetic friction. Examples of static friction: Rolling friction occurs when one object "rolls" on another (like a car's wheels on the ground). This is classified under static friction because the patch of the tire in contact with the ground, at any instant while the tire spins, is stationary relative to the ground. The coefficient of rolling friction is typically denoted as μr. Limiting friction is the maximum value of static friction, or the force of friction that acts when a body is just on the verge of motion on a surface.

Kinetic (or dynamic) friction occurs when two objects are moving relative to each other and rub together (like a sled on the ground). The coefficient of kinetic friction is typically denoted as μk, and is usually less than the coefficient of static friction.
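These relationships can be illustrated with a small numerical sketch. The mass, coefficients, and applied force below are assumed example values (roughly in the range quoted above for dry materials), not figures from the article:

g = 9.81          # gravitational acceleration, m/s^2
mass = 5.0        # kg, block resting on a horizontal surface (assumed)
mu_s = 0.4        # assumed coefficient of static (limiting) friction
mu_k = 0.3        # assumed coefficient of kinetic friction

R = mass * g                    # normal reaction force, N
limiting_friction = mu_s * R    # maximum static friction, Fl = mu_s * R
kinetic_friction = mu_k * R     # friction once the block is sliding, Fk = mu_k * R

applied_force = 15.0            # N, horizontal pull (assumed)

if applied_force <= limiting_friction:
    # Static friction simply balances the pull; the block does not move.
    friction_force = applied_force
else:
    # The block slides, and the smaller, roughly constant kinetic friction acts.
    friction_force = kinetic_friction

print(f"Normal force:      {R:.1f} N")
print(f"Limiting friction: {limiting_friction:.1f} N")
print(f"Kinetic friction:  {kinetic_friction:.1f} N")
print(f"Friction opposing a {applied_force:.0f} N pull: {friction_force:.1f} N")

With these assumed numbers the limiting friction is about 19.6 N, so a 15 N pull is fully balanced by static friction and the block stays put; a pull above 19.6 N sets it sliding against the smaller kinetic friction of about 14.7 N.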
From the mathematical point of view, however, the difference between static and kinetic friction is of minor importance: take a coefficient of friction that depends on the sliding velocity and is such that its value at 0 (the static friction μs) is the limit of the kinetic friction μk for the velocity tending to zero. Then a solution of the contact problem with such Coulomb friction also solves the problem with the original μk and any static friction greater than that limit. Since friction is always exerted in a direction that opposes movement, kinetic friction always does negative work. Examples of kinetic friction:
- Sliding friction occurs when two objects rub against each other. Putting a book flat on a desk and moving it around is an example of sliding friction.
- Fluid friction is the friction on a solid object as it moves through a liquid or a gas. The drag of air on an airplane or of water on a swimmer are two examples of fluid friction.

Devices such as ball bearings or rollers can change sliding friction into much smaller rolling friction by reducing the points of contact on the object. One technique used by railroad engineers is to back up the train to create slack in the linkages between cars. This allows the locomotive to pull forward and take on the static friction of only one car at a time, instead of all cars at once, thus spreading the static frictional force out over time. Generally, when moving an object over a distance:
- To minimize work against static friction, the movement is performed in a single interval, if possible.
- To minimize work against kinetic friction, the movement is performed at the lowest velocity that is practical. This also minimizes frictional stress.

A common way to reduce friction is by using a lubricant, such as oil or water, that is placed between the two surfaces, often dramatically lessening the coefficient of friction. The science of friction and lubrication is called tribology. Lubricant technology is the application of science to the formulation and use of lubricants, especially for industrial or commercial objectives. Superlubricity, a recently discovered effect observed in graphite, is a substantial decrease of friction between two sliding objects to near-zero levels (a very small amount of frictional energy is still dissipated). Lubricants need not always be thin, turbulent fluids or powdery solids such as graphite and talc; acoustic lubrication actually uses sound as a lubricant.

Energy of friction
According to the law of conservation of energy, no energy is destroyed due to friction, though it may be lost to the system of concern. Energy is transformed from other forms into heat. A sliding hockey puck comes to rest due to friction as its kinetic energy changes into heat. Since heat quickly dissipates, many early philosophers, including Aristotle, wrongly concluded that moving objects lose energy without a driving force. When an object is pushed along a surface, the energy converted to heat is given by Eheat = μkRd, where:
- R is the magnitude of the normal reaction force,
- μk is the coefficient of kinetic friction,
- d is the distance traveled by the object while in contact with the surface.

Physical deformation is associated with friction. While this can be beneficial, as in polishing, it is often a problem, as the materials are worn away and may no longer hold the specified tolerances.
The work done by friction can translate into deformation and heat that in the long run may affect the surface's specification and the coefficient of friction itself. Friction can, in some cases, cause solid materials to melt.
- ↑ Office of DOE Science Education, Ask a Scientist: Wide Tires, Argonne National Laboratory Division of Educational Programs. Retrieved July 6, 2007.
- Adamson, Arthur W., and Alice P. Gast. 1997. Physical Chemistry of Surfaces, 6th ed. New York: John Wiley. ISBN 0471148733
- Tipler, Paul A., and Gene Mosca. 2003. Physics for Scientists and Engineers: Standard Version, 5th ed. New York: W. H. Freeman. ISBN 0716783398
All links retrieved July 6, 2007.
- Friction Factors, Coefficients of Friction – Tables of coefficients and many links
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), with credit due to both the New World Encyclopedia contributors and the volunteer contributors of the Wikimedia Foundation.
http://www.newworldencyclopedia.org/entry/Friction
A finite difference is a mathematical expression of the form f(x + b) − f(x + a). If a finite difference is divided by b − a, one gets a difference quotient. The approximation of derivatives by finite differences plays a central role in finite difference methods for the numerical solution of differential equations, especially boundary value problems. Recurrence relations can be written as difference equations by replacing iteration notation with finite differences.

Forward, backward, and central differences
Three forms are commonly considered: forward, backward, and central differences. A forward difference is an expression of the form
Δh[f](x) = f(x + h) − f(x).
Depending on the application, the spacing h may be variable or constant. A backward difference uses the function values at x and x − h, instead of the values at x + h and x:
∇h[f](x) = f(x) − f(x − h).
Finally, the central difference is given by
δh[f](x) = f(x + h/2) − f(x − h/2).

Relation with derivatives
The derivative of f at x is defined by the limit f'(x) = lim (h → 0) [f(x + h) − f(x)]/h. If h has a fixed (non-zero) value instead of approaching zero, then the right-hand side of the above equation would be written
[f(x + h) − f(x)]/h = Δh[f](x)/h.
Hence, the forward difference divided by h approximates the derivative when h is small. The error in this approximation can be derived from Taylor's theorem. Assuming that f is continuously differentiable, the error is
Δh[f](x)/h − f'(x) = O(h) as h → 0.
The same formula holds for the backward difference:
∇h[f](x)/h − f'(x) = O(h).
However, the central difference yields a more accurate approximation. Its error is proportional to the square of the spacing (if f is twice continuously differentiable, that is, if the second derivative f'' is continuous for all x):
δh[f](x)/h − f'(x) = O(h²).
The main problem with the central difference method, however, is that oscillating functions can yield a zero derivative. If f(nh) = 1 for n odd and f(nh) = 2 for n even, then f'(nh) = 0 if it is calculated with the central difference scheme. This is particularly troublesome if the domain of f is discrete.

Higher-order differences
In an analogous way, one can obtain finite difference approximations to higher-order derivatives and differential operators. For example, by using the above central difference formula for f'(x + h/2) and f'(x − h/2) and applying a central difference formula for the derivative of f' at x, we obtain the central difference approximation of the second derivative of f:
2nd-order central: f''(x) ≈ δh²[f](x)/h² = [f(x + h) − 2 f(x) + f(x − h)]/h².
Similarly, we can apply other differencing formulas in a recursive manner.
2nd-order forward: f''(x) ≈ Δh²[f](x)/h² = [f(x + 2h) − 2 f(x + h) + f(x)]/h².
More generally, the nth-order forward, backward, and central differences are respectively given by:
Δh^n[f](x) = Σ (i = 0 to n) (−1)^i C(n, i) f(x + (n − i)h),
∇h^n[f](x) = Σ (i = 0 to n) (−1)^i C(n, i) f(x − ih),
δh^n[f](x) = Σ (i = 0 to n) (−1)^i C(n, i) f(x + (n/2 − i)h),
where C(n, i) denotes the binomial coefficient. Note that the central difference will, for odd n, have h multiplied by non-integers. This is often a problem because it amounts to changing the interval of discretization. The problem may be remedied by taking the average of δ^n[f](x − h/2) and δ^n[f](x + h/2). The relationship of these higher-order differences with the respective derivatives is very straightforward:
d^n f/dx^n (x) = Δh^n[f](x)/h^n + O(h).
Higher-order differences can also be used to construct better approximations. As mentioned above, the first-order difference approximates the first-order derivative up to a term of order h. However, the combination
[Δh[f](x) − (1/2) Δh²[f](x)]/h
approximates f'(x) up to a term of order h². This can be proven by expanding the above expression in a Taylor series, or by using the calculus of finite differences, explained below. If necessary, the finite difference can be centered about any point by mixing forward, backward, and central differences.

Arbitrarily sized kernels
Using a little linear algebra, one can fairly easily construct approximations that sample an arbitrary number of points to the left and a (possibly different) number of points to the right of the center point, for any order of derivative.
This involves solving a linear system such that the Taylor expansion of the sum of those points, around the center point, well approximates the Taylor expansion of the desired derivative. This is useful for differentiating a function on a grid, where, as one approaches the edge of the grid, one must sample fewer and fewer points on one side.

Finite difference methods
An important application of finite differences is in numerical analysis, especially in numerical differential equations, which aim at the numerical solution of ordinary and partial differential equations respectively. The idea is to replace the derivatives appearing in the differential equation by finite differences that approximate them. The resulting methods are called finite difference methods.

n-th difference
The nth forward difference of a function f(x) is given by
Δh^n[f](x) = Σ (k = 0 to n) (−1)^(n − k) C(n, k) f(x + kh).
Forward differences may be evaluated using the Nörlund–Rice integral. The integral representation for these types of series is interesting because the integral can often be evaluated using asymptotic expansion or saddle-point techniques; by contrast, the forward difference series can be extremely hard to evaluate numerically, because the binomial coefficients grow rapidly for large n.

Newton's series
The Newton series consists of the terms of the Newton forward difference equation, named after Isaac Newton; in essence, it is the Newton interpolation formula, first published in his Principia Mathematica in 1687, namely the discrete analog of the continuum Taylor expansion,
f(x) = Σ (k = 0 to ∞) [Δ^k f(a)/k!] (x − a)k = Σ (k = 0 to ∞) C(x − a, k) Δ^k[f](a),
where C(x − a, k) = (x − a)k/k! is the binomial coefficient and (x − a)k = (x − a)(x − a − 1)⋯(x − a − k + 1) is the "falling factorial" or "lower factorial", while the empty product (x − a)0 is defined to be 1. In this particular case, there is an assumption of unit steps for the changes in the values of x, that is, h = 1 in the generalization below. To illustrate how one may use Newton's formula in actual practice, consider the first few terms of the Fibonacci sequence f = 2, 2, 4... One can find a polynomial that reproduces these values by first computing a difference table and then substituting the differences that correspond to x0 into the formula. For the case of nonuniform steps in the values of x, Newton computes the divided differences and the corresponding series of products. Carlson's theorem provides necessary and sufficient conditions for a Newton series to be unique, if it exists. However, a Newton series will not, in general, exist. The Newton series, together with the Stirling series and the Selberg series, is a special case of the general difference series, all of which are defined in terms of suitably scaled forward differences. In a compressed and slightly more general form, with equidistant nodes, the formula reads
f(x) = Σ (k = 0 to ∞) C((x − a)/h, k) Σ (j = 0 to k) (−1)^(k − j) C(k, j) f(a + jh).

Calculus of finite differences
The finite difference of higher orders can be defined in a recursive manner as Δh^n ≡ Δh(Δh^(n−1)). Another equivalent definition is Δh^n = [Th − I]^n, where Th is the shift operator with step h, defined by Th[f](x) = f(x + h), and I is the identity operator. The difference operator Δh is a linear operator, and it satisfies a special Leibniz rule, Δh(f(x)g(x)) = (Δhf(x)) g(x + h) + f(x) (Δhg(x)). Similar statements hold for the backward and central differences. Formally applying the Taylor series with respect to h yields the formula
Δh = hD + (hD)²/2! + (hD)³/3! + ⋯ = e^(hD) − I,
where D denotes the continuum derivative operator, mapping f to its derivative f'. The expansion is valid when both sides act on analytic functions, for sufficiently small h.
Thus, Th = e^(hD), and formally inverting the exponential yields
hD = ln(1 + Δh) = Δh − Δh²/2 + Δh³/3 − ⋯.
This formula holds in the sense that both operators give the same result when applied to a polynomial. Even for analytic functions, the series on the right is not guaranteed to converge; it may be an asymptotic series. However, it can be used to obtain more accurate approximations for the derivative. For instance, retaining the first two terms of the series yields the second-order approximation to f'(x) mentioned at the end of the section Higher-order differences. The analogous formulas for the backward and central difference operators are
hD = −ln(1 − ∇h) and hD = 2 arcsinh(δh/2).

The calculus of finite differences is related to the umbral calculus of combinatorics. This remarkably systematic correspondence is due to the identity of the commutators of the umbral quantities to their continuum analogs (h → 0 limits),
[Δh/h, x Th⁻¹] = [D, x] = 1.
A large number of formal differential relations of standard calculus involving functions f(x) thus map systematically to umbral finite-difference analogs involving f(xTh⁻¹). For instance, the umbral analog of a monomial x^n is a generalization of the above falling factorial (Pochhammer k-symbol),
(x)n ≡ x(x − h)(x − 2h)⋯(x − (n − 1)h),
so that
(Δh/h) (x)n = n (x)(n−1),
hence the above Newton interpolation formula (by matching coefficients in the expansion of an arbitrary function f(x) in such symbols), and so on. For example, the umbral sine is
sin(x Th⁻¹) = x − (x)3/3! + (x)5/5! − (x)7/7! + ⋯.
As in the continuum limit, the eigenfunction of Δh/h also happens to be an exponential, and hence Fourier sums of continuum functions are readily mapped to umbral Fourier sums faithfully, i.e., involving the same Fourier coefficients multiplying these umbral basis exponentials. This umbral exponential thus amounts to the exponential generating function of the Pochhammer symbols. The inverse of the forward difference operator, and so the umbral integral, is the indefinite sum or antidifference operator.

Rules for calculus of finite difference operators
Analogous to the rules for finding the derivative, we have:
- Constant rule: If c is a constant, then Δhc = 0.
- Linearity: If a and b are constants, then Δh(af + bg) = a Δhf + b Δhg.
All of the above rules apply equally well to any difference operator, including ∇ as well as Δ.
- Summation rule: Σ (n = a to b) Δf(n) = f(b + 1) − f(a).

A generalized finite difference is usually defined as
Δh^μ[f](x) = Σ (k = 0 to N) μk f(x + kh),
where μ = (μ0, …, μN) is its coefficient vector. An infinite difference is a further generalization, where the finite sum above is replaced by an infinite series. Another way of generalization is making the coefficients μk depend on the point x, μk = μk(x), thus considering a weighted finite difference. Also, one may make the step h depend on the point x: h = h(x). Such generalizations are useful for constructing different moduli of continuity.
- As a convolution operator: Via the formalism of incidence algebras, difference operators and other Möbius inversion can be represented by convolution with a function on the poset, called the Möbius function μ; for the difference operator, μ is the sequence (1, −1, 0, 0, 0, ...).

Finite difference in several variables
Finite differences can be considered in more than one variable. They are analogous to partial derivatives in several variables. Some partial derivative approximations are (using the central step method):
fx(x, y) ≈ [f(x + h, y) − f(x − h, y)]/(2h),
fy(x, y) ≈ [f(x, y + k) − f(x, y − k)]/(2k),
fxx(x, y) ≈ [f(x + h, y) − 2 f(x, y) + f(x − h, y)]/h²,
fyy(x, y) ≈ [f(x, y + k) − 2 f(x, y) + f(x, y − k)]/k²,
fxy(x, y) ≈ [f(x + h, y + k) − f(x + h, y − k) − f(x − h, y + k) + f(x − h, y − k)]/(4hk).
Alternatively, for applications in which the computation of f is the most costly step and both first and second derivatives must be computed, a more efficient formula for the last case is
fxy(x, y) ≈ [f(x + h, y + k) − f(x + h, y) − f(x, y + k) + 2 f(x, y) − f(x − h, y) − f(x, y − k) + f(x − h, y − k)]/(2hk),
since the only values to be computed which are not already needed for the previous four equations are f(x + h, y + k) and f(x − h, y − k).
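As a quick numerical check of the one-dimensional approximations above, the sketch below compares the forward and central estimates of f'(x) and the second-order central estimate of f''(x). The test function (sin) and the step sizes are assumed for illustration only:

import math

# Assumed test function and its exact derivatives (not from the article).
def f(x):
    return math.sin(x)

def f_prime(x):
    return math.cos(x)

def f_second(x):
    return -math.sin(x)

x = 1.0
for h in (0.1, 0.01, 0.001):
    forward = (f(x + h) - f(x)) / h                    # forward difference: error O(h)
    central = (f(x + h / 2) - f(x - h / 2)) / h        # central difference: error O(h^2)
    second = (f(x + h) - 2 * f(x) + f(x - h)) / h**2   # 2nd-order central estimate of f''
    print(f"h={h:g}  forward err={abs(forward - f_prime(x)):.1e}  "
          f"central err={abs(central - f_prime(x)):.1e}  "
          f"f'' err={abs(second - f_second(x)):.1e}")

Halving h roughly halves the forward-difference error but cuts the central-difference error by about a factor of four, matching the O(h) and O(h²) error terms derived above.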
See also - Finite difference coefficients - Finite difference method - Newton polynomial - Table of Newtonian series - Sheffer sequence - Umbral calculus - Taylor series - Carlson's theorem - Numerical differentiation - Five-point stencil - Divided differences - Modulus of continuity - Time scale calculus - Summation by parts - Lagrange polynomial - Gilbreath's conjecture - Nörlund–Rice integral - Newton, Isaac, (1687). Principia, Book III, Lemma V, Case 1 - Richtmeyer, D. and Morton, K.W., (1967). Difference Methods for Initial Value Problems, 2nd ed., Wiley, New York. - Boole, George, (1872). A Treatise On The Calculus of Finite Differences, 2nd ed., Macmillan and Company. On line. Also, [Dover edition 1960] - Jordan, Charles, (1939/1965). "Calculus of Finite Differences", Chelsea Publishing. On-line: - Zachos, C. (2008). "Umbral Deformations on Discrete Space-Time". International Journal of Modern Physics A 23 (13): 2005–2014. doi:10.1142/S0217751X08040548. - Levy, H.; Lessman, F. (1992). Finite Difference Equations. Dover. ISBN 0-486-67260-3. - Ames, W. F., (1977). Numerical Methods for Partial Differential Equations, Section 1.6. Academic Press, New York. ISBN 0-12-056760-1. - Hildebrand, F. B., (1968). Finite-Difference Equations and Simulations, Section 2.2, Prentice-Hall, Englewood Cliffs, New Jersey. - Flajolet, Philippe; Sedgewick, Robert (1995). "Mellin transforms and asymptotics: Finite differences and Rice's integrals". Theoretical Computer Science 144 (1–2): 101–124. doi:10.1016/0304-3975(94)00281-M. - Hazewinkel, Michiel, ed. (2001), "Finite-difference calculus", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4 - Table of useful finite difference formula generated using Mathematica - Finite Calculus: A Tutorial for Solving Nasty Sums - Discrete Second Derivative from Unevenly Spaced Points
http://en.wikipedia.org/wiki/Newton_series
Stars are often born in groups. Early in the history of the universe, many stars were born in compact clusters of hundreds of thousands or millions of stars called globular clusters. More recent star formation within the Galactic disk gives rise to open clusters of hundreds or thousands of stars. But stars are also born in smaller groupings of two or three. Many of the bright, nearby stars are members of binary or triple star systems. For instance, the brightest star in the sky, Sirius, is a member of a binary star system. The second-closest star to Earth, Rigil Kentaurus (α Centauri), has a slightly dimmer companion star. Another bright, nearby star, Procyon (α Canis Minoris), has a faint companion star. These systems are not unusual. In fact, multiple star systems of main-sequence stars are far more common than single main-sequence stars in the Galactic disk. Binary main-sequence star systems slightly outnumber single main-sequence stars, and the ratio of binary to triple to quadruple systems is 46:9:2. This means that only 34% of the main-sequence stars in the Galactic disk have no companion stars.

Generally a binary star looks like a single star to the eye. At a distance of 5 parsecs, a pair of stars separated by 200 AU would have a separation on the sky of only 40 arc seconds, which is about the angle spanned by Saturn's rings. This separation is easily resolved with a telescope. Binary stars that can be resolved with a telescope are angularly-resolved binaries. But most binary systems are too distant to resolve with a telescope. These systems betray their binary nature through their spectra. As the stars in a binary system orbit one another, their spectra are Doppler shifted, so that one sees the spectral lines of one star shifted in frequency relative to the spectral lines of the other star. Binary systems that reveal themselves in this way are called spectroscopic binaries.

Binary stars, when they are widely separated, are well described by the action of Newtonian gravity on two point masses. Each star moves in an elliptical orbit, and the motion of one star relative to the other also traces an ellipse. The relationship between the period and the semimajor axis (the average of the maximum and minimum separation of the stars) of a binary system is given by Kepler's laws: the square of the period is proportional to the cube of the semimajor axis. The physics of binary star motion is therefore very simple when the stars are far enough apart that their tidal influence on each other is negligible. This simple physics makes the binary star the best tool for weighing stars.

The size of a binary star system is more like the size of the Solar System than the separation between stars in the stellar neighborhood. The orbital periods of the majority of binary stars are between 1/3 and 300,000 years, with the median at 14 years. Only a tiny fraction of binary stars have periods shorter than 1 day or longer than 1 million years. For a binary system with a total mass of 1 solar mass, the median orbital period of 14 years corresponds to a semimajor axis of only 6 AU, which is slightly more than Jupiter's distance from the Sun. For a 1/3 year period, the semimajor axis is 0.5 AU, and for a 300,000 year period, it is 4,500 AU. These separations increase with the total mass of the system as the mass to the one-third power, so the semimajor axis of a 10-solar-mass binary system is only 2.2 times greater than that of a 1-solar-mass system with the same period.
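Those numbers follow from Kepler's third law written in solar units, a³ = M P² (a in AU, P in years, M the total system mass in solar masses). A short sketch to check them; the helper function name is just for illustration:

# Kepler's third law in solar units: a^3 = M * P^2, so a = (M * P**2) ** (1/3)
# (a in AU, P in years, M = total system mass in solar masses).

def semimajor_axis_au(period_years, total_mass_solar=1.0):
    return (total_mass_solar * period_years ** 2) ** (1.0 / 3.0)

for period in (1.0 / 3.0, 14.0, 300_000.0):
    print(f"P = {period:>9.3g} yr  ->  a = {semimajor_axis_au(period):.3g} AU")

# The mass dependence: a scales as M^(1/3), so a 10-solar-mass binary is
# 10**(1/3), or about 2.2, times larger than a 1-solar-mass binary with the same period.
print(f"Scale factor for 10 solar masses: {10 ** (1 / 3):.2f}")

Running this gives roughly 0.48 AU, 5.8 AU, and 4,500 AU for the three periods, and a scale factor of about 2.15, consistent with the figures quoted above.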
With Solar-System-like values, the semimajor axis of a binary system is tiny compared to the average separation of more than a parsec (206,000 AU) between the stars of the Galactic disk.

The eccentricities of binary star orbits fall into two classes. For binary stars with periods longer than 3 years, the orbits are generally very elliptical, with most having eccentricities e ranging between 0.3 and 0.9 (a circular orbit has e = 0, and a parabolic orbit has e = 1; Mercury, the Solar System planet with the most eccentric orbit, has e = 0.2). For periods of less than 3 years, the orbits are much more circular, with a large majority of the orbits having eccentricities between 0.15 and 0.45. This effect is attributed to the tidal dissipation of orbital energy in these tightly bound systems, which causes the orbits to become circular. In binary systems with periods of less than 1 day, the tidal dissipation of energy is so efficient that the orbits have eccentricities of 0. Besides having circular orbits, the stars in the most tightly bound systems are close enough to tidally distort and heat each other. If the stars in such a binary system are so close that each star fills its Roche lobe and the photospheres touch, the system is a contact binary star; if the stars are so close that one fills its Roche lobe, but the other does not, then the system is a semi-detached binary star; if neither star fills its Roche lobe, the system is a detached binary star. The evolution of these binary stars is complex, with some evolving into the brilliant compact binary systems that contain compact objects, such as degenerate dwarfs, neutron stars, and black holes.

One final, striking aspect of binary stars is the relative masses of the stars in a system. For binary systems with orbital periods longer than 100 years, the secondary (less massive) star tends to be of very low mass, just as the stars in the Galactic disk tend to be of very low mass, but for systems with orbital periods less than 100 years, the secondary star's mass tends to be close to the mass of the primary star. This difference in the secondary's mass with orbital period suggests either that the long-period binaries are created by a different process than the short-period binaries, or that the process that creates binary stars behaves much differently when creating a larger system than when creating a small system.

The commonness and small size of the binary stars in our Galaxy have implications for the theories of star formation. The majority of stars are members of binary systems, so binary systems form very easily within the Galactic disk. The size of a binary system is generally about the size of our Solar System; this has led astrophysicists to associate the separation between stars with the size of the cloud that gave birth to the stars in the binary system. The idea that a binary system is born when its stars are born is supported by more recent observations of binary systems that contain a T Tauri star. These stars are very young variable stars of between 0.1 and 3 solar masses that have not yet settled down onto the main sequence. They are from one million to several tens of millions of years old. T Tauri stars are found to have companion stars with the same frequency as main-sequence stars, and the distribution of their periods is similar to that of the binary stars containing main-sequence stars. A T Tauri star and its companion are of the same age.
These properties support the idea that stars are born with their companions, as it is unlikely that they could acquire companions so rapidly after birth. Binary stars therefore tell us something about how stars are born. Duquennoy, A., and Mayor, M. “Multiplicity Among Solar-Type Stars in the Solar Neighborhood: II. Distribution of the Orbital Elements in an Unbiased Sample.” Astronomy and Astrophysics 248 (1991): 485–524.
http://www.astrophysicsspectator.com/topics/stars/BinaryStars.html
We are going to make a "Guess the Number" game. In this game, the computer will think of a random number from 1 to 20, and ask you to guess the number. You only get six guesses, but the computer will tell you if your guess is too high or too low. If you guess the number within six tries, you win.

This is a good game for you to start with because it uses random numbers, loops, and input from the user in a fairly short program. As you write this game, you will learn how to convert values to different data types (and why you would need to do this). Because this program is a game, we'll call the user the player, but the word "user" would be correct too.

Here is what our game will look like to the player when the program is run; the text that the player types in is in bold. Enter this code exactly as it appears here, and then save it by clicking on the File menu and then Save As. Give it a file name like guess.py, then run it by pressing the F5 key. Don't worry if you don't understand the code now; I'll explain it step by step.

Here is the source code for our Guess the Number game; the full listing appears just below, before the line-by-line walkthrough. When you enter this code into the file editor, be sure to pay attention to the spacing at the front of some of the lines. Some lines have four or eight spaces in front of them. After you have typed in the code, save the file as guess.py. You can run the program from the file editor by pressing F5. If you see an error message, check that you have typed the program in exactly as written. If you don't want to type all this code, you can download it from this book's website at the URL http://inventwithpython.com/chapter4.

Important Note! Be sure to run this program with Python 3, and not Python 2. The programs in this book use Python 3, and you'll get errors if you try to run them with Python 2. You can find out what version of Python you have from IDLE's Help menu.

Even though we are entering our source code into a new file editor window, we can return to the shell to enter individual instructions in order to see what they do. The interactive shell is very good for experimenting with different instructions when we are not running a program. You can return to the interactive shell by clicking on its window or on its taskbar button. In Windows or Mac OS X, the taskbar or dock is on the bottom of the screen. On Linux the taskbar may be located along the top of the screen.

If the program doesn't seem to work after you've typed it, check to see if you have typed the code exactly as it appears in this book. You can also copy and paste your code to the online "diff" tool at http://inventwithpython.com/diff. The diff tool will show you how your code is different from the source code in this book. In the file editor, press Ctrl-A to "Select All" the text you've typed, then press Ctrl-C to copy the text to the clipboard. Then, paste this text by clicking in the diff tool's text field on the website and click the "Compare" button. The website will show you any differences between your code and the code in this book. There is a diff tool for each program in this book on the http://inventwithpython.com website. A video tutorial of how to use the diff tool is available from this book's website at http://inventwithpython.com/videos/.

Let's look at each line of code in turn to see how this program works.
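For reference, here is one possible run of the game followed by a reconstruction of the guess.py listing, consistent with the line-by-line walkthrough below. The player's name and guesses are only an example, and the line positions (counting blank lines) match the line numbers cited in the text.

Hello! What is your name?
Albert
Well, Albert, I am thinking of a number between 1 and 20.
Take a guess.
10
Your guess is too low.
Take a guess.
15
Your guess is too high.
Take a guess.
12
Good job, Albert! You guessed my number in 3 guesses!

# This is a guess the number game.
import random

guessesTaken = 0

print('Hello! What is your name?')
myName = input()

number = random.randint(1, 20)
print('Well, ' + myName + ', I am thinking of a number between 1 and 20.')

while guessesTaken < 6:
    print('Take a guess.') # There are four spaces in front of print.
    guess = input()
    guess = int(guess)

    guessesTaken = guessesTaken + 1

    if guess < number:
        print('Your guess is too low.') # There are eight spaces in front of print.

    if guess > number:
        print('Your guess is too high.')

    if guess == number:
        break

if guess == number:
    guessesTaken = str(guessesTaken)
    print('Good job, ' + myName + '! You guessed my number in ' + guessesTaken + ' guesses!')

if guess != number:
    number = str(number)
    print('Nope. The number I was thinking of was ' + number)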
Line 1 is a comment. Comments were introduced in our Hello World program in Chapter 3. Remember that Python will ignore everything after the # sign; this line just reminds us what this program does.

Line 2 is an import statement. Statements are not functions (notice that neither import nor random has parentheses after its name). Remember, statements are instructions that perform some action but do not evaluate to a value. You have already seen statements: assignment statements store a value into a variable (but the statement does not evaluate to anything). While Python includes many built-in functions, some functions exist in separate programs called modules. Modules are Python programs that contain additional functions. We use the functions of these modules by bringing them into our programs with the import statement. In this case, we're importing the module random. The import statement is made up of the import keyword followed by the module name. Together, the keyword and module name make up the statement. Line 2 then is an import statement that imports the module named random, which contains several functions related to random numbers. (We'll use one of these functions later to have the computer come up with a random number for us to guess.)

Line 4 creates a new variable named guessesTaken. We'll store the number of guesses the player makes in this variable. Since the player hasn't made any guesses so far, we store the integer 0 here.

Lines 6 and 7 are the same as the lines in the Hello World program that we saw in Chapter 3. Programmers often reuse code from their other programs when they need the program to do something that they've already coded before. Line 6 is a function call to the print() function. Remember that a function is like a mini-program that our program runs, and when our program calls a function it runs this mini-program. The print() function displays on the screen the string you passed it inside the parentheses. When these two lines finish executing, the string that is the player's name will be stored in the myName variable. (Remember, the string might not really be the player's name. It's just whatever string the player typed in. Computers are dumb and just follow their programs no matter what.)

In line 9 we call a new function named randint(), and then store the return value in a variable named number. Remember that function calls are expressions because they evaluate to a value. We call this value the function call's return value. Because the randint() function is provided by the random module, we precede it with random. (don't forget the period!) to tell our program that the function randint() is in the random module. The randint() function will return a random integer between (and including) the two integers we give it. Here, we give it the integers 1 and 20 between the parentheses that follow the function name (separated by a comma). The random integer that randint() returns is stored in a variable named number; this is the secret number the player is trying to guess.

Just for a moment, go back to the interactive shell and enter import random to import the random module. Then enter random.randint(1, 20) to see what the function call evaluates to. It should return an integer between 1 and 20. Now enter the same code again and the function call will probably return a different integer. This is because each time the randint() function is called, it returns some random number, just like when you roll dice you will get a random number each time. Whenever we want to add randomness to our games, we can use the randint() function. And we use randomness in most games. (Think of how many board games use dice.)
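Here is roughly what that interactive shell experiment looks like; the returned numbers below are just an example, and yours will almost certainly differ:

>>> import random
>>> random.randint(1, 20)
12
>>> random.randint(1, 20)
18
>>> random.randint(1, 20)
3
>>> random.randint(1, 20)
18
>>> random.randint(1, 20)
7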
You can also try out different ranges of numbers by changing the arguments. For example, enter random.randint(1, 4) to only get integers between 1 and 4 (including both 1 and 4). Or try random.randint(1000, 2000) to get integers between 1000 and 2000. The session above shows the kind of values the random.randint() function call returns; the results you get when you call the random.randint() function will probably be different (it is random, after all).

We can change the game's code slightly to make the game behave differently. Try changing line 9 so that random.randint(1, 20) becomes random.randint(1, 100), and line 10 so that its message says the number is between 1 and 100. And now the computer will think of an integer between 1 and 100. Changing line 9 will change the range of the random number, but remember to change line 10 so that the game also tells the player the new range instead of the old one.

By the way, be sure to enter random.randint(1, 20) and not just randint(1, 20), or the computer will not know to look in the random module for the randint() function and you'll get an error like this: NameError: name 'randint' is not defined. Remember, your program needs to run import random before it can call the random.randint() function. This is why import statements usually go at the beginning of the program.

The integer values between the parentheses in the random.randint(1, 20) function call are called arguments. Arguments are the values that are passed to a function when the function is called. Arguments tell the function how to behave. Just like the player's input changes how our program behaves, arguments are inputs for functions. Some functions require that you pass them values when you call them. For example, consider the function calls input(), print('Hello'), and random.randint(1, 20). The input() function has no arguments, but the print() function call has one, and the randint() function call has two. When we have more than one argument, we separate each with commas, as you can see in this example. Programmers say that the arguments are delimited (that is, separated) by commas. This is how the computer knows where one value ends and another begins. If you pass too many or too few arguments in a function call, Python will display an error message. For example, we could first call randint() with only one argument (too few), and then call randint() with three arguments (too many). Notice that the error message says we passed 2 and 4 arguments instead of 1 and 3. This is because Python always passes an extra, invisible argument. This argument is beyond the scope of this book, and you don't have to worry about it.

Lines 10 and 12 greet the player and tell them about the game, and then start letting the player guess the secret number. Line 10 is fairly simple, but line 12 introduces a useful concept called a loop. In line 10 the print() function welcomes the player by name, and tells them that the computer is thinking of a random number. But wait - didn't I say that the print() function takes only one string? It may look like there's more than one string there. But look at the line carefully. The plus signs concatenate the three strings to evaluate down to one string, and that is the one string the print() function prints. It might look like the commas are separating the strings, but if you look closely you see that the commas are inside the quotes, and part of the strings themselves.

Line 12 has something called a while statement, which indicates the beginning of a while loop. Loops are parts of code that are executed over and over again. But before we can learn about while loops, we need to learn a few other concepts first.
Those concepts are blocks, Booleans, comparison operators, conditions, and finally, the while statement.

A block is one or more lines of code grouped together with the same minimum amount of indentation. You can tell where a block begins and ends by looking at the line's indentation (that is, the number of spaces in front of the line). A block begins when a line is indented by four spaces. Any following line that is also indented by four spaces is part of the block. A block within a block begins when a line is indented with another four spaces (for a total of eight spaces in front of the line). The block ends when there is a line of code with the same indentation before the block started. Below is a diagram of the code with the blocks outlined and numbered. The spaces have black squares filled in to make them easier to count.

Figure 4-1: Blocks and their indentation. The black dots represent spaces.

For example, look at the code in Figure 4-1. The spaces have been replaced with dark squares to make them easier to count. Line 12 has an indentation of zero spaces and is not inside any block. Line 13 has an indentation of four spaces. Since this indentation is larger than the previous line's indentation, we can tell that a new block has started. Lines 14, 15, 17 and 19 also have four spaces for indentation. These lines have the same amount of indentation as the previous line, so we know they are in the same block. (We do not count blank lines when we look for indentation.) Line 20 has an indentation of eight spaces. Eight spaces is more than four spaces, so we know a new block has started. This is a block that is inside of another block. Line 22 only has four spaces. The line before line 22 had a larger number of spaces. Because the indentation has decreased, we know that block has ended. Line 22 is in the same block as the other lines with four spaces. Line 23 increases the indentation to eight spaces, so again a new block has started.

To recap, line 12 is not in any block. Lines 13 to 23 are all in one block (marked with the circled 1). Line 20 is in a block in a block (marked with a circled 2). And line 23 is the only line in another block in a block (marked with a circled 3). When you type code into IDLE, each letter is the same width. You can count the number of letters above or below the line to see how many spaces you have put in front of that line of code. In this figure, the lines of code inside box 1 are all in the same block, and blocks 2 and 3 are inside block 1. Block 1 is indented with at least four spaces from the left margin, and blocks 2 and 3 are indented eight spaces from the left margin. A block can contain just one line. Notice that blocks 2 and 3 are only one line each.

The Boolean data type has only two values: True and False. These values are case-sensitive and they are not string values; in other words, you do not put a ' quote character around them. We will use Boolean values (also called bools) with comparison operators to form conditions. (Explained next.)

In line 12 of our program, the line of code containing the while statement is:
while guessesTaken < 6:
The expression that follows the while keyword (guessesTaken < 6) contains two values (the value in the variable guessesTaken, and the integer value 6) connected by an operator (the < sign, the "less than" sign). The < sign is called a comparison operator. The comparison operator is used to compare two values and evaluate to a True or False Boolean value. A list of all the comparison operators is in Table 4-1.
Table 4-1: Comparison operators.
|Operator Sign||Operator Name|
|<||Less than|
|>||Greater than|
|<=||Less than or equal to|
|>=||Greater than or equal to|
|==||Equal to|
|!=||Not equal to|

A condition is an expression that combines two values with a comparison operator (such as < or >) and evaluates to a Boolean value. A condition is just another name for an expression that evaluates to True or False. You'll find a list of the comparison operators in Table 4-1. Conditions always evaluate to a Boolean value: either True or False. For example, the condition in our code, guessesTaken < 6, asks "is the value stored in guessesTaken less than the number 6?" If so, then the condition evaluates to True. If not, the condition evaluates to False. In the case of our Guess the Number program, in line 4 we stored the value 0 in guessesTaken. Because 0 is less than 6, this condition evaluates to the Boolean value of True. Remember, a condition is just a name for an expression that uses comparison operators such as < or !=.

Enter expressions such as 0 < 6 and 6 < 0 in the interactive shell to see their Boolean results (a sample session appears a little further below). The condition 0 < 6 returns the Boolean value True because the number 0 is less than the number 6. But because 6 is not less than 0, the condition 6 < 0 evaluates to False. 50 is not less than 10, so 50 < 10 is False. 10 is less than 11, so 10 < 11 is True. But what about 10 < 10? Why does it evaluate to False? It is False because the number 10 is not smaller than the number 10. They are exactly the same size. If a girl named Alice was the same height as a boy named Bob, you wouldn't say that Alice is taller than Bob or that Alice is shorter than Bob. Both of those statements would be false. Try entering some conditions into the shell to see how these comparison operators work.

Notice the difference between the assignment operator (=) and the "equal to" comparison operator (==). The equal (=) sign is used to assign a value to a variable, and the equal to (==) sign is used in expressions to see whether two values are equal. It's easy to accidentally use one when you meant to use the other, so be careful of what you type in. Two values that are different data types will always be not equal to each other. For example, try entering 42 == '42' into the interactive shell; because an integer and a string are different data types, it evaluates to False.

The while statement marks the beginning of a loop. Sometimes in our programs, we want the program to do something over and over again. When the execution reaches a while statement, it evaluates the condition next to the while keyword. If the condition evaluates to True, the execution moves inside the while-block. (In our program, the while-block begins on line 13.) If the condition evaluates to False, the execution moves all the way past the while-block. (In our program, the first line after the while-block is line 28.) A while statement always has a colon (the : sign) after the condition.

Figure 4-2: The while loop's condition.

Figure 4-2 shows how the execution flows depending on the condition. If the condition evaluates to True (which it does the first time, because the value of guessesTaken is 0), execution will enter the while-block at line 13 and keep going down. Once the program reaches the end of the while-block, instead of going down to the next line, it jumps back up to the while statement's line (line 12). It then re-evaluates the condition, and if it still evaluates to True we enter the while-block again. This is how the loop works.
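Before going further with the loop, here is what the comparison-operator experiments described above might look like in the interactive shell (a sketch; the >>> prompts are the standard Python shell prompts):

>>> 0 < 6
True
>>> 6 < 0
False
>>> 50 < 10
False
>>> 10 < 11
True
>>> 10 < 10
False
>>> 10 == 10
True
>>> 10 != 11
True
>>> 42 == '42'
False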
As long as the condition is True, the program keeps executing the code inside the while-block repeatedly, until we reach the end of the while-block and the condition is False. And, until guessesTaken is equal to or greater than 6, we will keep looping. Think of the while statement as saying, "while this condition is true, keep looping through the code in this block".

You can make this game harder or easier by changing the number of guesses the player gets. All you have to do is change line 12 from while guessesTaken < 6: into while guessesTaken < 4: and now the player only gets four guesses instead of six. By setting the condition to guessesTaken < 4, we ensure that the code inside the loop only runs four times instead of six. This makes the game much more difficult. To make the game easier, set the condition to guessesTaken < 8 or guessesTaken < 10, which will cause the loop to run a few more times than before and accept more guesses from the player. Of course, if we removed line 17 (guessesTaken = guessesTaken + 1) altogether, then guessesTaken would never increase and the condition would always be True. This would give the player an unlimited number of guesses.

Lines 13 to 17 ask the player to guess what the secret number is and let them enter their guess. We store this guess in a variable, and then convert that string value into an integer value. The program now asks us for a guess. We type in our guess and that number is stored in a variable named guess.

In line 15, we call a new function called int(). The int() function takes one argument. The input() function returned a string of text that the player typed. But in our program, we will want an integer, not a string. If the player enters 5 as their guess, the input() function will return the string value '5' and not the integer value 5. Remember that Python considers the string '5' and the integer 5 to be different values. So the int() function will take the string value we give it and return the integer value form of it.

Let's experiment with the int() function in the interactive shell by trying calls such as int('42') and int('hello') (a sample session appears at the end of this discussion). We can see that the int('42') call will return the integer value 42, and that int(42) will do the same (though it is kind of pointless to convert an integer to an integer). However, even though you can pass a string to the int() function, you cannot just pass any string. For example, passing 'hello' to int() (like we do in the int('hello') call) will result in an error. The string we pass to int() must be made up of numbers. The string must be written with digits rather than spelled out as text, which is why int('forty-two') also produces an error. That said, the int() function is slightly forgiving; if our string has spaces on either side, it will still run without error. This is why the int(' 42 ') call works.

The 3 + int('2') line shows an expression that adds an integer 3 to the return value of int('2') (which is the integer 2). The expression evaluates to 3 + 2, which then evaluates to 5. So even though we cannot add an integer and a string (3 + '2' would show us an error), we can add an integer to a string that has been converted to an integer. Remember, back in our program on line 15 the guess variable originally held the string value of what the player typed. We will overwrite the string value stored in guess with the integer value returned by the int() function. This is because we will later compare the player's guess with the random number the computer came up with.
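Here is roughly what that int() experiment looks like; the traceback lines below are shown as they appear in a plain Python 3 shell (IDLE formats them slightly differently):

>>> int('42')
42
>>> int(42)
42
>>> int('hello')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: 'hello'
>>> int('forty-two')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: 'forty-two'
>>> int(' 42 ')
42
>>> 3 + int('2')
5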
We can only compare two integer values to see if one is greater (that is, higher) or less (that is, lower) than the other. We cannot compare a string value with an integer value to see if one is greater or less than the other, even if that string value is numeric such as '5'. In our Guess the Number game, if the player types in something that is not a number, then the int() function call will result in an error and the program will crash. In the other games in this book, we will add some more code to check for error conditions like this and give the player another chance to enter a correct response.

Notice that calling int(guess) does not change the value in the guess variable. The code int(guess) is an expression that evaluates to the integer value form of the string stored in the guess variable. We must assign this return value to guess in order to change the value in guess to an integer with this full line: guess = int(guess)

Once the player has taken a guess, we want to increase the number of guesses that we remember the player taking. The first time that we enter the loop block, guessesTaken has the value of 0. Python will take this value and add 1 to it. 0 + 1 is 1. Then Python will store the new value of 1 to guessesTaken. Think of line 17 as meaning, "the guessesTaken variable should be one more than what it already is". When we add one to an integer value, programmers say they are incrementing the value (because it is increasing by one). When we subtract one from a value, we are decrementing the value (because it is decreasing by one). The next time the loop block loops around, guessesTaken will have the value of 1 and will be incremented to the value 2.

Lines 19 and 20 check if the number that the player guessed is less than the secret random number that the computer came up with. If so, then we want to tell the player that their guess was too low by printing this message to the screen. Line 19 begins an if statement with the if keyword. Next to the if keyword is the condition. Line 20 starts a new block (you can tell because the indentation has increased from line 19 to line 20). The block that follows the if keyword is called an if-block. An if statement is used when you want a bit of code to execute only if some condition is true. Line 19 has an if statement with the condition guess < number. If the condition evaluates to True, then the code in the if-block is executed. If the condition is False, then the code in the if-block is skipped.

Figure 4-3: if and while statements.

Like the while statement, the if statement also has a keyword, followed by a condition, a colon, and then a block of code. See Figure 4-3 for a comparison of the two statements. The if statement works almost the same way as a while statement, too. But unlike the while-block, execution does not jump back to the if statement at the end of the if-block. It just continues on down to the next line. In other words, if statements won't loop. If the condition is True, then all the lines inside the if-block are executed. The only line inside the if-block that follows line 19 is a print() function call. If the integer the player enters is less than the random integer the computer thought up, the program displays Your guess is too low. If the integer the player enters is equal to or larger than the random integer (in which case, the condition next to the if keyword would have been False), then this block would have been skipped over.
Lines 22 to 26 in our program check if the player's guess is either too big or exactly equal to the secret number. If the player's guess is larger than the random integer, we enter the if-block that follows the if statement. The print() line tells the player that their guess is too big.

The next if statement's condition checks to see if the guess is equal to the random integer. If it is, we enter the if-block that follows it on line 26. The line inside the if-block is a break statement that tells the program to immediately jump out of the while-block to the first line after the end of the while-block. (The break statement does not bother re-checking the while loop's condition; it just breaks out immediately.) The break statement is just the break keyword by itself, with no condition or colon.

If the player's guess is not equal to the random integer, we do not break out of the while-block; instead, we reach the bottom of the while-block. Once we reach the bottom of the while-block, the program will loop back to the top and recheck the condition (guessesTaken < 6). Remember that after the guessesTaken = guessesTaken + 1 line of code executed, the new value of guessesTaken is 1. Because 1 is less than 6, we enter the loop again. If the player keeps guessing too low or too high, the value of guessesTaken will change to 2, then 3, then 4, then 5, then 6. If the player guessed the number correctly, the condition in the if guess == number statement would be True, and we would have executed the break statement. Otherwise, we keep looping. But when guessesTaken has the number 6 stored, the while statement's condition is False, since 6 is not less than 6. Because the while statement's condition is False, we will not enter the loop and instead jump to the end of the while-block.

The remaining lines of code run when the player has finished guessing (either because the player guessed the correct number, or because the player ran out of guesses). The reason the player exited the previous loop will determine if they win or lose the game, and the program will display the appropriate message on the screen for either case. Unlike the code in line 25, this line has no indentation, which means the while-block has ended and this is the first line outside the while-block. When we left the while-block, we did so either because the while statement's condition was False (when the player runs out of guesses) or because we executed the break statement (when the player guesses the number correctly). With line 28, we check again to see if the player guessed correctly. If so, we enter the if-block that follows.

Lines 29 and 30 are inside the if-block. They only execute if the condition in the if statement on line 28 was True (that is, if the player correctly guessed the computer's number). In line 29 we call the new function str(), which returns the string form of an argument. We use this function because we want to change the integer value in guessesTaken into its string version, since we will concatenate it with other strings in the print() call. Line 29 tells the player that they have won, and how many guesses it took them. Notice in this line that we change the guessesTaken value into a string because we can only add (that is, concatenate) strings to other strings. If we were to try to add a string to an integer, the Python interpreter would display an error.

In line 32, we use the comparison operator != with the if statement's condition to mean "is not equal to."
On line 32, we use the comparison operator != with the if statement's condition to mean "is not equal to." If the value of the player's guess is lower or higher than (and therefore not equal to) the number chosen by the computer, then this condition evaluates to True, and we enter the block that follows this if statement on line 33.

Lines 33 and 34 are inside the if-block, and only execute if the condition is True. In this block, we tell the player what the number was, because they failed to guess correctly. But first we have to store the string version of number as the new value of number; that line is also inside the if-block, and only executes if the condition was True. At this point, we have reached the end of the code, and the program terminates.

Congratulations! We've just programmed our first real game!

If someone asked you, "What exactly is programming anyway?" what could you say to them? Programming is just the action of writing code for programs, that is, creating programs that can be executed by a computer. "But what exactly is a program?" When you see someone using a computer program (for example, playing our Guess The Number game), all you see is some text appearing on the screen. The program decides what exact text to show on the screen (which is called the output), based on its instructions (that is, the program) and on the text that the player typed on the keyboard (which is called the input). The program has very specific instructions on what text to show the user. A program is just a collection of instructions.

"What kind of instructions?" There are only a few different kinds of instructions, really: expressions made up of values and operators, flow control statements such as if, while, and break, functions, and input/output for interacting with the user (such as print() and input()). And that's it, just those four things. Of course, there are many details about those four types of instructions. In this book you will learn about new data types and operators, new flow control statements besides if, while, and break, and several new functions. There are also different types of I/O (input from the mouse, and outputting sound, graphics, and pictures instead of just text).

The person using your programs really only cares about that last type, I/O. The user types on the keyboard and then sees things on the screen or hears things from the speakers. But for the computer to figure out what sights to show and what sounds to play, it needs a program, and programs are just a bunch of instructions that you, the programmer, have written.

If you have access to the Internet and a web browser, you can go to this book's website at http://inventwithpython.com/traces, where you will find a page that traces through each of the programs in this book. By following along with the trace line by line, it may become clearer what the Guess the Number program does. This website just shows a simulation of what happens when the program is run; no actual code is really being executed.

Figure 4-4: The tracing web page.

The left side of the web page shows the source code, and the highlighted line is the line of code that is about to be executed. You execute this line and move to the next line by clicking the "Next" button. You can also go back a step by clicking the "Previous" button, or jump directly to a step by typing it in the white box and clicking the "Jump" button. On the right side of the web page, there are three sections. The "Current variable values" section shows you each variable that has been assigned a value, along with the value itself. The "Notes" section will give you a hint about what is happening on the highlighted line. The "Program output" section shows the output from the program, and the input that is sent to the program.
(This web page automatically enters text to the program when the program asks.) So go to each of these web pages and click the "Next" and "Previous" buttons to trace through the program like we did above. A video tutorial of how to use the online tracing tool is available from this book's website at http://inventwithpython.com/videos/.
http://inventwithpython.com/chapter4.html
This will be REAL basics. Only a couple of definitions so you can follow the later pages: what a Derivative and an Integral actually are, and how to do differentiation and integration in software. There will be NO math except basic algebra and geometry.

The derivative is simply the rate of change of some signal. Your car may travel forward at 20 feet per second for 30 seconds. It will travel 600 feet. During that 30 seconds, the distance the car has moved can be measured and will vary from 0 feet at the beginning to 600 feet at the end. The derivative of the distance measurement is the rate at which it changes, which is a constant 20 feet per second during the entire 30 seconds.

It is possible that your car's speed could vary during the 30 seconds. Maybe you drive up and down a hill with the gas pedal held constant, or maybe you push or release the gas pedal to change the speed. You can see that the car proceeds at 20 feet per second (fps) until it reaches the hill. Then it slows down to about 15 fps. As it tops the hill and starts downhill, the speed (rate of distance) increases to about 25 fps. Then when it reaches the level road again it returns to 20 fps. This change in speed can also be seen on the plot of distance versus time. The line (distance) increases at a constant rate (20 fps) until the hill is reached. Then the rate of increase in distance reduces until the downhill portion, when it speeds up. Then it goes back to the original rate of increase, ending up at approximately 600 feet again. (If you do the math, you'd find that it doesn't quite make it to 600, since you lose more time going uphill than you make up going downhill.)

The rate plot is the derivative of the distance plot. The rate plot shows the speed, or rate, at which the car is covering distance. There is also a derivative of the rate plot. Since the second example with a hill results in a rate plot with different values on it, the rate plot also changes as a function of time. The change in rate is acceleration. So, speed is the derivative of distance and acceleration is the derivative of speed. Actually, you can keep taking derivatives of derivatives forever, but we seldom have to worry about any derivatives beyond acceleration. The plots on the right show that while rate is constant at 20 fps, the acceleration is zero. When the hill is encountered, there is a negative acceleration, which is the rate signal decreasing to 15 fps. The acceleration returns to zero as the speed stabilizes at 15 fps. Similarly, as the car starts downhill, there is a positive acceleration until the car stabilizes at 25 fps, then there is another negative acceleration as the car returns to level ground at 20 fps.

The integral is just the opposite concept of the derivative. If you start with the acceleration, speed (or rate) is the integral of acceleration, and distance is the integral of speed. For all these examples, we will be talking about signals that occur over time. So, if you have a measured signal that tells you (or your robot) how its speed has varied over an interval of time, you can calculate what distance the robot has traveled over that same period of time by multiplying the speed by the TIME that the robot spends at that speed.

Note: Two methods you can use to measure speed are a tachometer attached to the motor or drive train, or measuring the back-EMF of the motor (which is the same as using your motor as a tachometer).
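To make the car example concrete before moving on, here is a tiny Python sketch (not from the original article) that differentiates a recorded distance signal to recover speed. The distance values are invented for illustration, and the only assumption is that they were sampled once per second:

# Numerical derivative: speed is the change in distance per sample interval.
dt = 1.0  # seconds between samples (assumed)
distance = [0, 20, 40, 60, 75, 90, 115, 140, 160, 180]  # feet (made-up data)

speed = [(distance[i] - distance[i - 1]) / dt for i in range(1, len(distance))]
print(speed)  # about 20 fps on the flat, ~15 fps on the uphill, ~25 fps on the downhill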
Of course, if you have wheel encoders, you could calculate speed from them and integrate that value to get distance; however, this is unnecessary since the encoder gives distance directly. Since rate is often changing over time, you will usually calculate the distance traveled frequently, computing the distance traveled since the last computation and adding all the incremental distances together for a total. If this is done at a high rate (e.g., 20 times per second), the resulting distance can be quite accurate.

There are two primary sources of error in computing distance from speed. First is the accuracy of the speed signal. If speed is off by 10%, distance will be off by 10% also. This error source can be minimized by calibrating your speed signal to make it as accurate as possible. Second is the accuracy of the piecemeal integration. Multiplying the current speed by the time since the last sample assumes the speed was constant over that time. If the speed increased or decreased, the answer will not be quite right. This source of error can be minimized by performing the calculation at such a high rate that the speed change between calculations is insignificant, or by using a value for speed that is the average of the current and previous values.

For example, suppose you have a robot and some means of measuring its speed. You find that it moved at 0 fps for 10 seconds, then 2 fps for 10 seconds, then 3 fps for 10 seconds. How far has the robot moved? In the first 10 seconds, it doesn't move at all. In the second 10 seconds it moves 20 feet (2 fps * 10 sec), and in the final 10 seconds it moves 30 feet. Add it up and it is 50 feet.

OK, assuming we have the concept down, how would you implement a calculation of the integral in software? The object is to integrate the rate signal to get the distance measurement. As the robot runs, you are making the speed measurements each time your software executes. Let's assume your software executes 20 times per second. Using pseudocode:

DISTANCE = 0                        //Initialize DISTANCE integrator to zero
DO while time<30 seconds            //just for the length of the test
    SPEED = ???                     //however you measure the rate signal
    DISTANCE = DISTANCE + SPEED/20  //the actual integration

This loop will execute 600 times: 30 seconds at 20 times per second. A new value of speed is determined for each execution. The distance measurement is then updated by taking the previous value of distance and adding how far the robot would move in 1/20 of a second at the measured rate. The special characteristics of an integrator are that it must be initialized, since the computed value is always based on the previous value, and that the amount added on each cycle of the integration must be multiplied by the time interval between executions (0.05 seconds, or 1/20 of a second, in this case). This is all there is to an integrator. If you integrate an acceleration signal, you will get speed; if you integrate speed, you will get distance.

SPEED = 0                           //Must also initialize SPEED now
DISTANCE = 0                        //Initialize DISTANCE integrator to zero
DO while time<30 seconds            //just for the length of the test
    ACCELERATION = ???              //however you measure the accel signal
    SPEED = SPEED + ACCELERATION/20 //integrate to get speed
    DISTANCE = DISTANCE + SPEED/20  //integrate to get distance

Lest you get too carried away with this, integrating acceleration twice to get distance may work fairly well in the bowels of a high-precision digital computer, but trying it in the real world can lead to large errors.
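For readers who prefer runnable code to pseudocode, here is a minimal Python version of the first integrator above. The get_speed() function is a hypothetical stand-in for however your robot actually measures its rate, and the loop simply counts 600 iterations instead of watching a clock:

import random

RATE_HZ = 20               # loop executions per second, as in the example
DT = 1.0 / RATE_HZ         # time step: 0.05 s

def get_speed():
    """Hypothetical speed measurement in feet per second (about 20 fps plus a little noise)."""
    return 20.0 + random.uniform(-0.5, 0.5)

distance = 0.0                      # the integrator must be initialized
for _ in range(30 * RATE_HZ):       # 30-second test -> 600 iterations
    speed = get_speed()             # however you measure the rate signal
    distance += speed * DT          # the actual integration

print("distance traveled:", round(distance, 1), "feet")   # close to 600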
Double integration is risky because your measured signal is probably not perfect. If you had an accelerometer on your robot measuring how fast it was accelerating forward or backward, and that accelerometer was off by just 0.001 g (about 0.032 feet/second^2), the integration of speed would be off by about 1 fps by the end of the 30-second test, and the distance integrator would be off by about 14 feet. Hence, integrating navigation measurements to get position is not often done over prolonged periods of time. But it can be useful at times. For example, if you are measuring the distance traveled with a look-behind sonar, and the sonar quits providing data for a second or two, you may be able to keep your distance updated pretty accurately by integrating speed into the last valid sonar measurement. By the way, a single integration of speed to get distance can be much more accurate, since the error only builds up linearly with time rather than with the square of time, as it does with the double integration.

While the integration example above showed the concept, most actual uses of an integrator will be for short-term navigation backup or for doing corrections in the control equations (the "I" part of PID). And the control equations are what will be discussed here.

Using software to get a derivative is also easy, and it also has its own set of problems. The most basic way to calculate the derivative of a signal is to subtract the previous execution cycle's value of the signal from the new value. For example, if your software is executing 20 times per second and you have just measured your distance (perhaps from an encoder) to be 98.6 inches, and the value on the last cycle was 97.8 inches, then you can calculate the speed (the derivative of distance) as (98.6 - 97.8) * 20 = 16 inches per second. You can do this over a 30-second interval by:

DO while time<30                             //just for the length of the test
    DISTANCE = ???                           //however you measure the distance
    SPEED = (DISTANCE - DISTANCE_LAST) * 20  //perform differentiation
    DISTANCE_LAST = DISTANCE                 //save the distance to use as last distance next loop

Note that you didn't have to initialize SPEED as you had to initialize DISTANCE in the integral. This is because SPEED is recomputed each cycle and the previous value doesn't carry over. However, the first value of speed calculated will be wrong unless DISTANCE_LAST has been initialized to the correct value, and the sample code above doesn't even set DISTANCE_LAST until after SPEED has been calculated. If DISTANCE starts at zero, this probably won't be a problem. But if you turn on the speed calculation when distance is perhaps 10 feet, and DISTANCE_LAST is zero because it hasn't been used yet, your first calculation of speed would be (10 - 0) * 20 = 200 feet/second, which might cause a large command from your control equations. This problem can be overcome in several ways, including: setting speed to zero the first time the speed calculation is done (if zero is an acceptable value for your equations); setting DISTANCE_LAST = DISTANCE on every processing loop, not just the ones where SPEED is calculated (so it is already set by the first time the derivative loop is called); or setting SPEED to a reasonable estimate of speed on the first calculation.

And the differentiation takes a TIMES 20 (rather than a DIVIDE BY 20 as in the integration) because the difference in distance is what happened over just 1/20 of a second, and you want to compute how much distance would have passed over a whole second.

Now for the problems: the differentiation often has low resolution and is very susceptible to noise.
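Here is a runnable Python sketch of the same derivative loop, with the previous distance initialized before the loop so the first sample is sane. The get_distance() function is a hypothetical encoder that reports a steadily increasing distance but only to 0.1-inch resolution, which is enough to show the quantization problem discussed next:

RATE_HZ = 20
DT = 1.0 / RATE_HZ
TRUE_SPEED = 15.3          # inches per second (made-up value)

def get_distance(t):
    """Hypothetical encoder reading in inches, quantized to 0.1 inch."""
    return round(TRUE_SPEED * t, 1)

distance_last = get_distance(0.0)        # initialize so the first derivative isn't huge
for step in range(1, 30 * RATE_HZ + 1):
    t = step * DT
    distance = get_distance(t)
    speed = (distance - distance_last) * RATE_HZ   # the differentiation
    distance_last = distance                       # remember for the next cycle
    if step <= 6:
        print("t = %.2f s   speed = %.0f in/s" % (t, speed))

# The printed speed hops between roughly 14 and 16 in/s instead of a steady 15.3,
# which is the resolution problem described below.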
In the example above, the encoder distance only changed 0.8 inches over 0.05 seconds. If the encoder has 0.1-inch resolution, then the derivative output only has 12.5% resolution (0.1/0.8). This isn't very impressive. And if you increased your execution rate to 100 per second, then the distance change per cycle would be just 1 or 2 encoder counts (0.1 or 0.2 inches). This means your derived speed calculation would be jumping back and forth between 10 inches per second and 20 inches per second.

This isn't necessarily disastrous. First, the derived value often goes into a calculation which results in a PWM signal, which is inherently turning on and off at a high rate, and the motor dynamics filter that PWM signal into something which appears smooth. Hence, a noisy derivative may just cause the PWM signal to change its duration a bit noisily, but the motor will filter the noise out, giving the right average value. Another alternative, if you find your calculations bouncing around to be distasteful, is to filter or average the derived signal. Averaging is easy to do: just save the last few values, add them together, and divide by the number of values added. A low-pass filter (which I hope to go into later) will also smooth the signal output.

The disadvantage to averaging or filtering is that the data you are using is not the most recent. If you average 5 values, the resulting data is about what the data was 2 cycles ago, not what it is now. This time delay can cause oscillatory problems in high-speed, high-gain calculations. Hopefully, we can avoid such problems, but the time delay factor is something to keep in mind. The best plan is to use the shortest period of averaging or filtering that will give acceptable results.
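Averaging the last few derivative samples takes only a few lines of Python. This is a generic sketch (the window length of 5 is an arbitrary choice, and the sample values are the made-up noisy speeds from the earlier example):

from collections import deque

WINDOW = 5
recent = deque(maxlen=WINDOW)    # automatically discards the oldest sample

def smoothed(new_value):
    """Average of the last WINDOW samples (fewer while the window is still filling)."""
    recent.append(new_value)
    return sum(recent) / len(recent)

for raw in [14, 16, 16, 14, 16, 14, 16, 16, 14, 16]:
    print(raw, "->", round(smoothed(raw), 2))

# The averaged output settles near 15, at the cost of lagging a couple of cycles
# behind the newest measurement.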
http://abrobotics.tripod.com/ControlLaws/calculus.htm
In the context of programming, a function is a named sequence of statements that performs a desired operation. This operation is specified in a function definition. In Python, the syntax for a function definition is:

def NAME( LIST OF PARAMETERS ):
    STATEMENTS

You can make up any names you want for the functions you create, except that you can't use a name that is a Python keyword. The list of parameters specifies what information, if any, you have to provide in order to use the new function. There can be any number of statements inside the function, but they have to be indented from the def. In the examples in this book, we will use the standard indentation of four spaces.

Function definitions are the first of several compound statements we will see, all of which have the same pattern: a header, which ends with a colon, and a body of one or more statements indented relative to the header. In a function definition, the keyword in the header is def, which is followed by the name of the function and a list of parameters enclosed in parentheses. The parameter list may be empty, or it may contain any number of parameters. In either case, the parentheses are required.

The first couple of functions we are going to write have no parameters, so the syntax looks like this:

def new_line():
    print          # a print statement with no arguments prints a new line

This function is named new_line. The empty parentheses indicate that it has no parameters. Its body contains only a single statement, which outputs a newline character. (That's what happens when you use a print command without any arguments.)

Defining a new function does not make the function run. To do that we need a function call. Function calls contain the name of the function being executed followed by a list of values, called arguments, which are assigned to the parameters in the function definition. Our first examples have an empty parameter list, so the function calls do not take any arguments. Notice, however, that the parentheses are required in the function call:

print "First Line."
new_line()
print "Second Line."

The output of this program is:

First Line.

Second Line.

The extra space between the two lines is a result of the new_line() function call. What if we wanted more space between the lines? We could call the same function repeatedly:

print "First Line."
new_line()
new_line()
new_line()
print "Second Line."

Or we could write a new function named three_lines that prints three new lines:

def three_lines():
    new_line()
    new_line()
    new_line()

print "First Line."
three_lines()
print "Second Line."

This function contains three statements, all of which are indented by four spaces. Since the next statement is not indented, Python knows that it is not part of the function. You should notice a few things about this program: you can call the same function repeatedly (in fact, it is quite common and useful to do so), and one function can call another (here, three_lines calls new_line).

So far, it may not be clear why it is worth the trouble to create all of these new functions. Actually, there are a lot of reasons, but this example demonstrates two: creating a new function gives you a chance to name a group of statements, and it can make a program smaller by eliminating repetitive code.

Pulling together the code fragments from the previous section into a script named tryme1.py, the whole program looks like this:

def new_line():
    print

def three_lines():
    new_line()
    new_line()
    new_line()

print "First Line."
three_lines()
print "Second Line."

This program contains two function definitions: new_line and three_lines. Function definitions get executed just like other statements, but the effect is to create the new function. The statements inside the function do not get executed until the function is called, and the function definition generates no output. As you might expect, you have to create a function before you can execute it.
In other words, the function definition has to be executed before the first time it is called. In order to ensure that a function is defined before its first use, you have to know the order in which statements are executed, which is called the flow of execution.

Execution always begins at the first statement of the program. Statements are executed one at a time, in order from top to bottom. Function definitions do not alter the flow of execution of the program, but remember that statements inside the function are not executed until the function is called. Although it is not common, you can define one function inside another. In this case, the inner definition isn't executed until the outer function is called.

Function calls are like a detour in the flow of execution. Instead of going to the next statement, the flow jumps to the first line of the called function, executes all the statements there, and then comes back to pick up where it left off. That sounds simple enough, until you remember that one function can call another. While in the middle of one function, the program might have to execute the statements in another function. But while executing that new function, the program might have to execute yet another function! Fortunately, Python is adept at keeping track of where it is, so each time a function completes, the program picks up where it left off in the function that called it. When it gets to the end of the program, it terminates.

What's the moral of this sordid tale? When you read a program, don't read from top to bottom. Instead, follow the flow of execution.

Most functions require arguments, values that control how the function does its job. For example, if you want to find the absolute value of a number, you have to indicate what the number is. Python has a built-in function for computing the absolute value:

>>> abs(5)
5
>>> abs(-5)
5

In this example, the arguments to the abs function are 5 and -5.

Some functions take more than one argument. For example the built-in function pow takes two arguments, the base and the exponent. Inside the function, the values that are passed get assigned to variables called parameters.

>>> pow(2, 3)
8
>>> pow(7, 4)
2401

Another built-in function that takes more than one argument is max.

>>> max(7, 11)
11
>>> max(4, 1, 17, 2, 12)
17
>>> max(3 * 11, 5**3, 512 - 9, 1024**0)
503

max can be sent any number of arguments, separated by commas, and will return the maximum value sent. The arguments can be either simple values or expressions. In the last example, 503 is returned, since it is larger than 33, 125, and 1.

Here is an example of a user-defined function that has a parameter:

def print_twice(param):
    print param, param

This function takes a single argument and assigns it to the parameter named param. The value of the parameter (at this point we have no idea what it will be) is printed twice, followed by a newline. The name param was chosen to reinforce the idea that it is a parameter, but in general, you will want to choose a name for your parameters that describes their use in the function.

The interactive Python shell provides us with a convenient way to test our functions. We can use the import statement to bring the functions we have defined in a script into the interpreter session. To see how this works, assume the print_twice function is defined in a script named chap03.py.
We can now test it interactively by importing it into our Python shell session:

>>> from chap03 import *
>>> print_twice('Spam')
Spam Spam
>>> print_twice(5)
5 5
>>> print_twice(3.14159)
3.14159 3.14159

In a function call, the value of the argument is assigned to the corresponding parameter in the function definition. In effect, it is as if param = 'Spam' is executed when print_twice('Spam') is called, param = 5 in print_twice(5), and param = 3.14159 in print_twice(3.14159).

Any type of argument that can be printed can be sent to print_twice. In the first function call, the argument is a string. In the second, it's an integer. In the third, it's a float.

As with built-in functions, we can use an expression as an argument for print_twice:

>>> print_twice('Spam' * 4)
SpamSpamSpamSpam SpamSpamSpamSpam

'Spam' * 4 is first evaluated to 'SpamSpamSpamSpam', which is then passed as an argument to print_twice.

Just as with mathematical functions, Python functions can be composed, meaning that you use the result of one function as the input to another.

>>> print_twice(abs(-7))
7 7
>>> print_twice(max(3, 1, abs(-11), 7))
11 11

In the first example, abs(-7) evaluates to 7, which then becomes the argument to print_twice. In the second example we have two levels of composition, since abs(-11) is first evaluated to 11 before max(3, 1, 11, 7) is evaluated to 11 and print_twice(11) then displays the result.

We can also use a variable as an argument:

>>> sval = 'Eric, the half a bee.'
>>> print_twice(sval)
Eric, the half a bee. Eric, the half a bee.

Notice something very important here. The name of the variable we pass as an argument (sval) has nothing to do with the name of the parameter (param). Again, it is as if param = sval is executed when print_twice(sval) is called. It doesn't matter what the value was named in the caller; in print_twice its name is param.

When you create a local variable inside a function, it only exists inside the function, and you cannot use it outside. For example:

def cat_twice(part1, part2):
    cat = part1 + part2
    print_twice(cat)

This function takes two arguments, concatenates them, and then prints the result twice. We can call the function with two strings:

>>> chant1 = "Pie Jesu domine, "
>>> chant2 = "Dona eis requiem."
>>> cat_twice(chant1, chant2)
Pie Jesu domine, Dona eis requiem. Pie Jesu domine, Dona eis requiem.

When cat_twice terminates, the variable cat is destroyed. If we try to print it, we get an error:

>>> print cat
NameError: name 'cat' is not defined

Parameters are also local. For example, outside the function print_twice, there is no such thing as param. If you try to use it, Python will complain.

To keep track of which variables can be used where, it is sometimes useful to draw a stack diagram. Like state diagrams, stack diagrams show the value of each variable, but they also show the function to which each variable belongs. Each function is represented by a frame. A frame is a box with the name of a function beside it and the parameters and variables of the function inside it. The stack diagram for the previous example looks like this:

The order of the stack shows the flow of execution. print_twice was called by cat_twice, and cat_twice was called by __main__, which is a special name for the topmost function. When you create a variable outside of any function, it belongs to __main__. Each parameter refers to the same value as its corresponding argument. So, part1 has the same value as chant1, part2 has the same value as chant2, and param has the same value as cat.
If an error occurs during a function call, Python prints the name of the function, and the name of the function that called it, and the name of the function that called that, all the way back to the topmost function. To see how this works, create a Python script named tryme2.py that looks like this:

def print_twice(param):
    print param, param
    print cat

def cat_twice(part1, part2):
    cat = part1 + part2
    print_twice(cat)

chant1 = "Pie Jesu domine, "
chant2 = "Dona eis requiem."
cat_twice(chant1, chant2)

We've added the statement print cat inside the print_twice function, but cat is not defined there. Running this script will produce an error message like this:

Traceback (innermost last):
  File "tryme2.py", line 11, in <module>
    cat_twice(chant1, chant2)
  File "tryme2.py", line 7, in cat_twice
    print_twice(cat)
  File "tryme2.py", line 3, in print_twice
    print cat
NameError: global name 'cat' is not defined

This list of functions is called a traceback. It tells you what program file the error occurred in, and what line, and what functions were executing at the time. It also shows the line of code that caused the error. Notice the similarity between the traceback and the stack diagram. It's not a coincidence. In fact, another common name for a traceback is a stack trace.

compound statement: A statement that consists of two parts: a header, which ends with a colon, and a body of one or more statements indented relative to the header. The syntax of a compound statement looks like this:

keyword expression:
    statement
    statement
    ...

import statement: A statement which permits functions and variables defined in a Python script to be brought into the environment of another script or a running Python shell. For example, assume the following is in a script named tryme.py:

def print_thrice(thing):
    print thing, thing, thing

n = 42
s = "And now for something completely different..."

Now begin a python shell from within the same directory where tryme.py is located:

$ ls
tryme.py
$ python
>>>

Three names are defined in tryme.py: print_thrice, n, and s. If we try to access any of these in the shell without first importing, we get an error:

>>> n
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'n' is not defined
>>> print_thrice("ouch!")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'print_thrice' is not defined

If we import everything from tryme.py, however, we can use everything defined in it:

>>> from tryme import *
>>> n
42
>>> s
'And now for something completely different...'
>>> print_thrice("Yipee!")
Yipee! Yipee! Yipee!
>>>

Note that you do not include the .py from the script name in the import statement.

Using a text editor, create a Python script named tryme3.py. Write a function in this file called nine_lines that uses three_lines to print nine blank lines. Now add a function named clear_screen that prints out twenty-five blank lines. The last line of your program should be a call to clear_screen.

Move the last line of tryme3.py to the top of the program, so the function call to clear_screen appears before the function definition. Run the program and record what error message you get. Can you state a rule about function definitions and function calls which describes where they can appear relative to each other in a program?

Starting with a working version of tryme3.py, move the definition of new_line after the definition of three_lines. Record what happens when you run this program. Now move the definition of new_line below a call to three_lines(). Explain how this is an example of the rule you stated in the previous exercise.
Fill in the body of the function definition for cat_n_times so that it will print the string, s, n times:

def cat_n_times(s, n):
    <fill in your code here>

Save this function in a script named import_test.py. Now at a unix prompt, make sure you are in the same directory where import_test.py is located (ls should show import_test.py). Start a Python shell and try the following:

>>> from import_test import *
>>> cat_n_times('Spam', 7)
SpamSpamSpamSpamSpamSpamSpam

If all is well, your session should work the same as this one. Experiment with other calls to cat_n_times until you feel comfortable with how it works.
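If you get stuck, here is one possible body for cat_n_times; the exercise leaves the implementation open, so treat this as a sketch rather than the official answer (it uses the same Python 2 print statement as the rest of the chapter):

def cat_n_times(s, n):
    # Repeating a string with * builds one long string, which we print once.
    print s * n

You could also build the result with a for loop that concatenates s onto an accumulator string and print that instead.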
http://www.openbookproject.net/thinkcs/python/english2e/ch03.html
|Expressed in (SI unit):||Pa·s = kg/(s·m)| |Commonly used symbols:||μ| |Expressed in other quantities:||μ = G·t| Viscosity is a measure of the resistance of a fluid which is being deformed by either shear stress or tensile stress. In everyday terms (and for fluids only), viscosity is "thickness". Thus, water is "thin", having a lower viscosity, while honey is "thick", having a higher viscosity. Viscosity describes a fluid's internal resistance to flow and may be thought of as a measure of fluid friction. For example, high-viscosity magma will create a tall, steep stratovolcano, because it cannot flow far before it cools, while low-viscosity lava will create a wide, shallow-sloped shield volcano. Put simply, the less viscous something is, the greater its ease of movement (fluidity). All real fluids (except superfluids) have some resistance to stress, but a fluid which has no resistance to shear stress is known as an ideal fluid or inviscid fluid. The study of viscosity is known as rheology. Viscosity coefficients can be defined in two ways: Viscosity is a tensorial quantity that can be decomposed in different ways into two independent components. The most usual decomposition yields the following viscosity coefficients: For example, at room temperature, water has a dynamic shear viscosity of about 1.0 × 10−3 Pa∙s and motor oil of about 250 × 10−3 Pa∙s. In general, in any flow, layers move at different velocities and the fluid's viscosity arises from the shear stress between the layers that ultimately opposes any applied force. Isaac Newton postulated that, for straight, parallel and uniform flow, the shear stress, τ, between layers is proportional to the velocity gradient, ∂u /∂y, in the direction perpendicular to the layers. Here, the constant μ is known as the coefficient of viscosity, the viscosity, the dynamic viscosity, or the Newtonian viscosity. This is a constitutive equation (like Hooke's law, Fick's law, Ohm's law). This means: it is not a fundamental law of nature, but a reasonable first approximation that holds in some materials and fails in others. Many fluids, such as water and most gases, satisfy Newton's criterion and are known as Newtonian fluids. Non-Newtonian fluids exhibit a more complicated relationship between shear stress and velocity gradient than simple linearity. The relationship between the shear stress and the velocity gradient can also be obtained by considering two plates closely spaced apart at a distance y, and separated by a homogeneous substance. Assuming that the plates are very large, with a large area A, such that edge effects may be ignored, and that the lower plate is fixed, let a force F be applied to the upper plate. If this force causes the substance between the plates to undergo shear flow (as opposed to just shearing elastically until the shear stress in the substance balances the applied force), the substance is called a fluid. The applied force is proportional to the area and velocity of the plate and inversely proportional to the distance between the plates. Combining these three relations results in the equation F = μ (Au/y), where μ is the proportionality factor called the dynamic viscosity (also called absolute viscosity, or simply viscosity). The equation can be expressed in terms of shear stress; τ = F/A = μ (u / y). The rate of shear deformation is u / y and can be also written as a shear velocity, du/dy. Hence, through this method, the relation between the shear stress and the velocity gradient can be obtained. 
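As a quick numerical illustration of the two-plate model just described, here is a short Python calculation of F = μAu/y. The viscosity is the room-temperature water value quoted above; the plate area, speed, and gap are invented for the example:

mu = 1.0e-3    # Pa*s, dynamic viscosity of water at room temperature (from the text)
A = 0.5        # m^2, plate area (assumed)
u = 0.3        # m/s, speed of the upper plate (assumed)
y = 1.0e-3     # m, gap between the plates (assumed)

tau = mu * u / y        # shear stress, Pa
F = tau * A             # force needed to keep the plate moving, N
print(tau, "Pa,", F, "N")   # 0.3 Pa and 0.15 N: water offers very little resistance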
James Clerk Maxwell called viscosity fugitive elasticity because of the analogy that elastic deformation opposes shear stress in solids, while in viscous fluids, shear stress is opposed by rate of deformation. Dynamic viscosity is measured with various types of rheometer. Close temperature control of the fluid is essential to accurate measurements, particularly in materials like lubricants, whose viscosity can double with a change of only 5 °C. For some fluids, it is a constant over a wide range of shear rates. These are Newtonian fluids. The fluids without a constant viscosity are called non-Newtonian fluids. Their viscosity cannot be described by a single number. Non-Newtonian fluids exhibit a variety of different correlations between shear stress and shear rate. One of the most common instruments for measuring kinematic viscosity is the glass capillary viscometer. In paint industries, viscosity is commonly measured with a Zahn cup, in which the efflux time is determined and given to customers. The efflux time can also be converted to kinematic viscosities (centistokes, cSt) through the conversion equations. A Ford viscosity cup measures the rate of flow of a liquid. This, under ideal conditions, is proportional to the kinematic viscosity. Also used in paint, a Stormer viscometer uses load-based rotation in order to determine viscosity. The viscosity is reported in Krebs units (KU), which are unique to Stormer viscometers. Vibrating viscometers can also be used to measure viscosity. These models such as the Dynatrol use vibration rather than rotation to measure viscosity. The usual symbol for dynamic viscosity used by mechanical and chemical engineers — as well as fluid dynamicists — is the Greek letter mu (μ). The symbol η is also used by chemists, physicists, and the IUPAC. The SI physical unit of dynamic viscosity is the pascal-second (Pa·s), which is identical to N·m−2·s. If a fluid with a viscosity of one Pa·s is placed between two plates, and one plate is pushed sideways with a shear stress of one pascal, it moves a distance equal to the thickness of the layer between the plates in one second. The cgs physical unit for dynamic viscosity is the poise (P), named after Jean Louis Marie Poiseuille. It is more commonly expressed, particularly in ASTM standards, as centipoise (cP). Water at 20 °C has a viscosity of 1.0020 cP or 0.001002 kilogram/meter second. The relation to the SI unit is In many situations, we are concerned with the ratio of the viscous force to the inertial force, the latter characterised by the fluid density ρ. This ratio is characterised by the kinematic viscosity (Greek letter nu, ν), defined as follows: The SI unit of ν is m2/s. The SI unit of ρ is kg/m3. The cgs physical unit for kinematic viscosity is the stokes (St), named after George Gabriel Stokes. It is sometimes expressed in terms of centistokes (cSt or ctsk). In U.S. usage, stoke is sometimes used as the singular form. Water at 20 °C has a kinematic viscosity of about 1 cSt. The kinematic viscosity is sometimes referred to as diffusivity of momentum, because it has the same unit as and is comparable to diffusivity of heat and diffusivity of mass. It is therefore used in dimensionless numbers which compare the ratio of the diffusivities. At one time the petroleum industry relied on measuring kinematic viscosity by means of the Saybolt viscometer, and expressing kinematic viscosity in units of Saybolt Universal Seconds (SUS). 
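Since several viscosity units have just been introduced, here is a small Python sketch that converts between them using the standard factors (1 Pa·s = 10 P = 1000 cP, 1 cSt = 10⁻⁶ m²/s) and the definition of kinematic viscosity as dynamic viscosity divided by density; the water density used below is an assumed round value, not a figure from the article:

mu_water = 1.0020e-3     # Pa*s at 20 C (1.0020 cP, as quoted above)
rho_water = 998.0        # kg/m^3 at 20 C (assumed approximate value)

nu = mu_water / rho_water                # kinematic viscosity, m^2/s
print(mu_water * 10, "P")                # ~0.01 P
print(mu_water * 1000, "cP")             # ~1.0 cP
print(nu, "m^2/s =", nu / 1e-6, "cSt")   # ~1.0e-6 m^2/s, i.e. about 1 cSt, as stated above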
Other abbreviations such as SSU (Saybolt Seconds Universal) or SUV (Saybolt Universal Viscosity) are sometimes used. Kinematic viscosity in centistoke can be converted from SUS according to the arithmetic and the reference table provided in ASTM D 2161. The viscosity of a system is determined by how molecules constituting the system interact. There are no simple but correct expressions for the viscosity of a fluid. The simplest exact expressions are the Green–Kubo relations for the linear shear viscosity or the Transient Time Correlation Function expressions derived by Evans and Morriss in 1985. Although these expressions are each exact in order to calculate the viscosity of a dense fluid, using these relations requires the use of molecular dynamics computer simulations. Viscosity in gases arises principally from the molecular diffusion that transports momentum between layers of flow. The kinetic theory of gases allows accurate prediction of the behavior of gaseous viscosity. Within the regime where the theory is applicable: James Clerk Maxwell published a famous paper in 1866 using the kinetic theory of gases to study gaseous viscosity. To understand why the viscosity is independent of pressure consider two adjacent boundary layers (A and B) moving with respect to each other. The internal friction (the viscosity) of the gas is determined by the probability a particle of layer A enters layer B with a corresponding transfer of momentum. Maxwell's calculations showed him that the viscosity coefficient is proportional to both the density, the mean free path and the mean velocity of the atoms. On the other hand, the mean free path is inversely proportional to the density. So an increase of pressure doesn't result in any change of the viscosity. In relation to diffusion, the kinematic viscosity provides a better understanding of the behavior of mass transport of a dilute species. Viscosity is related to shear stress and the rate of shear in a fluid, which illustrates its dependence on the mean free path, λ, of the diffusing particles. for a unit area parallel to the x-z plane, moving along the x axis. We will derive this formula and show how μ is related to λ. Interpreting shear stress as the time rate of change of momentum, p, per unit area A (rate of momentum flux) of an arbitrary control surface gives where is the average velocity along x of fluid molecules hitting the unit area, with respect to the unit area. Further manipulation will show The Chapman-Enskog equation may be used to estimate viscosity for a dilute gas. This equation is based on a semi-theoretical assumption by Chapman and Enskog. The equation requires three empirically determined parameters: the collision diameter (σ), the maximum energy of attraction divided by the Boltzmann constant (є/к) and the collision integral (ω(T*)). In liquids, the additional forces between molecules become important. This leads to an additional contribution to the shear stress though the exact mechanics of this are still controversial. Thus, in liquids: The dynamic viscosities of liquids are typically several orders of magnitude higher than dynamic viscosities of gases. The first step is to calculate the Viscosity Blending Number (VBN) (also called the Viscosity Blending Index) of each component of the blend: where v is the kinematic viscosity in centistokes (cSt). It is important that the kinematic viscosity of each component of the blend be obtained at the same temperature. 
The next step is to calculate the VBN of the blend: VBN_Blend = [x_A × VBN_A] + [x_B × VBN_B] + ... + [x_N × VBN_N], where x_X is the mass fraction of each component of the blend. Once the viscosity blending number of a blend has been calculated using equation (2), the final step is to determine the kinematic viscosity of the blend by solving equation (1) for v, where VBN_Blend is the viscosity blending number of the blend.

The viscosities of air and water are by far the two most important for aviation aerodynamics and shipping fluid dynamics. Temperature plays the main role in determining viscosity.

The viscosity of air depends mostly on the temperature. At 15.0 °C, the viscosity of air is 1.78 × 10−5 kg/(m·s), 17.8 μPa·s or 1.78 × 10−4 P. One can get the viscosity of air as a function of temperature from the Gas Viscosity Calculator.

The dynamic viscosity of water is 8.90 × 10−4 Pa·s or 8.90 × 10−3 dyn·s/cm2 or 0.890 cP at about 25 °C. Water has a viscosity of 0.0091 poise at 25 °C, or 1 centipoise at 20 °C. As a function of temperature T (K): μ(Pa·s) = A × 10^(B/(T − C)), where A = 2.414 × 10−5 Pa·s, B = 247.8 K, and C = 140 K. Viscosity of liquid water at different temperatures up to the normal boiling point is listed below.

Some dynamic viscosities of Newtonian fluids are listed below (in Pa·s and in cP):

Liquids (at 25 °C unless noted):
- blood (37 °C): 3e-3 to 4e-3 Pa·s (3–4 cP)
- glycerol (at 20 °C): 1.49 Pa·s (1490 cP)
- liquid nitrogen (at 77 K): 1.58e-4 Pa·s (0.158 cP)

Fluids with variable compositions:
- molten chocolate*: 45–130 Pa·s (45,000–130,000 cP)

* These materials are highly non-Newtonian.

On the basis that all solids, such as granite, flow to a small extent in response to small shear stress, some researchers have contended that substances known as amorphous solids, such as glass and many polymers, may be considered to have viscosity. This has led some to the view that solids are simply liquids with a very high viscosity, typically greater than 10^12 Pa·s. This position is often adopted by supporters of the widely held misconception that glass flow can be observed in old buildings. This distortion is more likely the result of the glass-making process rather than the viscosity of glass. However, others argue that solids are, in general, elastic for small stresses while fluids are not. Even if solids flow at higher stresses, they are characterized by their low-stress behavior. This distinction can become muddled if measurements are continued over long time periods, such as the Pitch drop experiment. Viscosity may be an appropriate characteristic for solids in a plastic regime. The situation becomes somewhat confused as the term viscosity is sometimes used for solid materials, for example Maxwell materials, to describe the relationship between stress and the rate of change of strain, rather than rate of shear. These distinctions may be largely resolved by considering the constitutive equations of the material in question, which take into account both its viscous and elastic behaviors. Materials for which both their viscosity and their elasticity are important in a particular range of deformation and deformation rate are called viscoelastic. In geology, earth materials that exhibit viscous deformation at least three times greater than their elastic deformation are sometimes called rheids.

In amorphous materials, viscous flow is a thermally activated process described by an Arrhenius-type equation, η = A·exp(Q/RT), where Q is activation energy, T is temperature, R is the molar gas constant and A is approximately a constant.
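Before moving on to amorphous materials, here is a quick check of the water correlation just given; evaluating μ = A × 10^(B/(T − C)) with the constants above reproduces the quoted room-temperature values:

A = 2.414e-5   # Pa*s  (constants from the text)
B = 247.8      # K
C = 140.0      # K

def water_viscosity(T_kelvin):
    """Empirical correlation for the dynamic viscosity of liquid water."""
    return A * 10 ** (B / (T_kelvin - C))

print(water_viscosity(298.15))   # ~8.9e-4 Pa*s at 25 C, matching the value quoted above
print(water_viscosity(293.15))   # ~1.0e-3 Pa*s (about 1 cP) at 20 C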
The viscous flow in amorphous materials is characterized by a deviation from the Arrhenius-type behavior: Q changes from a high value QH at low temperatures (in the glassy state) to a low value QL at high temperatures (in the liquid state). Depending on this change, amorphous materials are classified as either The fragility of amorphous materials is numerically characterized by the Doremus’ fragility ratio: and strong material have RD < 2 whereas fragile materials have RD ≥ 2. The viscosity of amorphous materials is quite exactly described by a two-exponential equation: with constants A1, A2, B, C and D related to thermodynamic parameters of joining bonds of an amorphous material. Not very far from the glass transition temperature, Tg, this equation can be approximated by a Vogel-Fulcher-Tammann (VFT) equation. If the temperature is significantly lower than the glass transition temperature, T < Tg, then the two-exponential equation simplifies to an Arrhenius type equation: where Hd is the enthalpy of formation of broken bonds (termed configuron s) and Hm is the enthalpy of their motion. When the temperature is less than the glass transition temperature, T < Tg, the activation energy of viscosity is high because the amorphous materials are in the glassy state and most of their joining bonds are intact. If the temperature is highly above the glass transition temperature, T > Tg, the two-exponential equation also simplifies to an Arrhenius type equation: When the temperature is higher than the glass transition temperature, T > Tg, the activation energy of viscosity is low because amorphous materials are melt and have most of their joining bonds broken which facilitates flow. which only depends upon the equilibrium state potentials like temperature and density (equation of state). In general, the trace of the stress tensor is the sum of thermodynamic pressure contribution plus another contribution which is proportional to the divergence of the velocity field. This constant of proportionality is called the volume viscosity. In the study of turbulence in fluids, a common practical strategy for calculation is to ignore the small-scale vortices (or eddies) in the motion and to calculate a large-scale motion with an eddy viscosity that characterizes the transport and dissipation of energy in the smaller-scale flow (see large eddy simulation). Values of eddy viscosity used in modeling ocean circulation may be from 5x104 to 106 Pa·s depending upon the resolution of the numerical grid. The reciprocal of viscosity is fluidity, usually symbolized by φ = 1 / μ or F = 1 / μ, depending on the convention used, measured in reciprocal poise (cm·s·g−1), sometimes called the rhe. Fluidity is seldom used in engineering practice. The concept of fluidity can be used to determine the viscosity of an ideal solution. For two components a and b, the fluidity when a and b are mixed is which is only slightly simpler than the equivalent equation in terms of viscosity: where χa and χb is the mole fraction of component a and b respectively, and μa and μb are the components pure viscosities. Viscous forces in a fluid are a function of the rate at which the fluid velocity is changing over distance. The velocity at any point r is specified by the velocity field v(r). The velocity at a small distance dr from point r may be written as a Taylor series: where dv / dr is shorthand for the dyadic product of the del operator and the velocity: This is just the Jacobian of the velocity field. 
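To get a feel for how sharply an Arrhenius-type law η = A·exp(Q/RT) responds to temperature, here is a small sketch. The activation energy and temperatures are arbitrary illustrative values, not data for any particular glass; the prefactor A cancels when viscosities are compared as a ratio:

import math

R = 8.314      # J/(mol*K), molar gas constant
Q = 300e3      # J/mol, activation energy (arbitrary illustrative value)

def viscosity_ratio(T, T_ref):
    """eta(T) / eta(T_ref) for an Arrhenius-type law; the constant A cancels out."""
    return math.exp(Q / (R * T) - Q / (R * T_ref))

print(viscosity_ratio(780.0, 800.0))   # a 20 K drop near 800 K -> roughly 3x more viscous
print(viscosity_ratio(700.0, 800.0))   # a 100 K drop -> roughly 600x more viscous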
Viscous forces are the result of relative motion between elements of the fluid, and so are expressible as a function of the velocity field. In other words, the forces at r are a function of v(r) and all derivatives of v(r) at that point. In the case of linear viscosity, the viscous force will be a function of the Jacobian tensor alone. For almost all practical situations, the linear approximation is sufficient. If we represent x, y, and z by indices 1, 2, and 3 respectively, the i,j component of the Jacobian may be written as ∂i vj where ∂i is shorthand for ∂/∂xi. Note that when the first and higher derivative terms are zero, the velocity of all fluid elements is parallel, and there are no viscous forces. Any matrix may be written as the sum of an antisymmetric matrix and a symmetric matrix, and this decomposition is independent of coordinate system, and so has physical significance. The velocity field may be approximated as: where Einstein notation is now being used in which repeated indices in a product are implicitly summed. The second term from the right is the asymmetric part of the first derivative term, and it represents a rigid rotation of the fluid about r with angular velocity ω where: For such a rigid rotation, there is no change in the relative positions of the fluid elements, and so there is no viscous force associated with this term. The remaining symmetric term is responsible for the viscous forces in the fluid. Assuming the fluid is isotropic (i.e. its properties are the same in all directions), then the most general way that the symmetric term (the rate-of-strain tensor) can be broken down in a coordinate-independent (and therefore physically real) way is as the sum of a constant tensor (the rate-of-expansion tensor) and a traceless symmetric tensor (the rate-of-shear tensor): where ς is the coefficient of bulk viscosity (or "second viscosity") and μ is the coefficient of (shear) viscosity. The forces in the fluid are due to the velocities of the individual molecules. The velocity of a molecule may be thought of as the sum of the fluid velocity and the thermal velocity. The viscous stress tensor described above gives the force due to the fluid velocity only. The force on an area element in the fluid due to the thermal velocities of the molecules is just the hydrostatic pressure. This pressure term (−p δij) must be added to the viscous stress tensor to obtain the total stress tensor for the fluid. The infinitesimal force dFi on an infinitesimal area dAi is then given by the usual relationship:
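The decomposition just described is easy to check numerically. The numpy sketch below uses an invented velocity-gradient (Jacobian) matrix and invented values of μ, ς and p; it splits the Jacobian into rotation, expansion and shear parts, assembles the total stress tensor, and applies it to a surface normal. None of these numbers come from the article:

import numpy as np

J = np.array([[0.2, 0.5, 0.0],    # invented velocity-gradient (Jacobian) matrix, 1/s
              [0.1, -0.1, 0.3],
              [0.0, 0.2, 0.4]])

mu, zeta, p = 1.0e-3, 2.0e-3, 101325.0   # shear viscosity, bulk viscosity, pressure (assumed)

sym = 0.5 * (J + J.T)                          # rate-of-strain tensor (symmetric part)
antisym = 0.5 * (J - J.T)                      # rigid rotation: contributes no viscous force
expansion = np.trace(sym) / 3.0 * np.eye(3)    # rate-of-expansion tensor
shear = sym - expansion                        # traceless rate-of-shear tensor

viscous_stress = 3.0 * zeta * expansion + 2.0 * mu * shear
total_stress = -p * np.eye(3) + viscous_stress   # add the hydrostatic pressure term

print(np.allclose(J, sym + antisym))   # True: the decomposition is exact
n = np.array([0.0, 0.0, 1.0])          # unit normal of a small area element
print(total_stress @ n)                # traction (force per unit area) on that element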
http://www.thefullwiki.org/Viscous_friction
Muscle is a soft tissue found in most animals. Muscle cells contain protein filaments that slide past one another, producing a contraction that changes both the length and the shape of the cell. Muscles function to produce force and motion. They are primarily responsible for maintenance of and changes in posture, locomotion of the organism itself, and movement of internal organs, such as the contraction of the heart and the movement of food through the digestive system via peristalsis.

Muscle tissues are derived from the mesodermal layer of embryonic germ cells in a process known as myogenesis. There are three types of muscle, classified as skeletal, cardiac, or smooth. Muscles are also split into two broader classifications: voluntary and involuntary. Cardiac and smooth muscle contraction occurs without conscious thought and is essential for survival. Muscles are predominantly powered by the oxidation of fats and carbohydrates, but anaerobic chemical reactions are also used, particularly by fast twitch fibers. These chemical reactions produce adenosine triphosphate (ATP) molecules, which are used to power the movement of the myosin heads.

Types of tissue

- Skeletal muscle or "voluntary muscle" is anchored by tendons (or by aponeuroses at a few places) to bone and is used to effect skeletal movement such as locomotion and in maintaining posture. Though this postural control is generally maintained as an unconscious reflex, the muscles responsible react to conscious control like non-postural muscles. An average adult male is made up of 42% skeletal muscle and an average adult female of 36% (as a percentage of body mass).
- Smooth muscle or "involuntary muscle" is found within the walls of organs and structures such as the esophagus, stomach, intestines, bronchi, uterus, urethra, bladder, blood vessels, and the arrector pili in the skin (in which it controls erection of body hair). Unlike skeletal muscle, smooth muscle is not under conscious control.
- Cardiac muscle is also an "involuntary muscle" but is more akin in structure to skeletal muscle, and is found only in the heart.

Cardiac and skeletal muscles are "striated" in that they contain sarcomeres and are packed into highly regular arrangements of bundles; smooth muscle has neither. While skeletal muscles are arranged in regular, parallel bundles, cardiac muscle connects at branching, irregular angles (called intercalated discs). Striated muscle contracts and relaxes in short, intense bursts, whereas smooth muscle sustains longer or even near-permanent contractions.

Skeletal (voluntary) muscle is further divided into two broad types: slow twitch and fast twitch:

- Type I, slow twitch, or "red" muscle, is dense with capillaries and is rich in mitochondria and myoglobin, giving the muscle tissue its characteristic red color. It can carry more oxygen and sustain aerobic activity using fats or carbohydrates as fuel. Slow twitch fibers contract for long periods of time but with little force.
- Type II, fast twitch muscle, has three major subtypes (IIa, IIx, and IIb) that vary in both contractile speed and force generated. Fast twitch fibers contract quickly and powerfully but fatigue very rapidly, sustaining only short, anaerobic bursts of activity before muscle contraction becomes painful. They contribute most to muscle strength and have greater potential for increase in mass.
Type IIb is anaerobic, glycolytic, "white" muscle that is least dense in mitochondria and myoglobin. In small animals (e.g., rodents) this is the major fast muscle type, explaining the pale color of their flesh. The density of mammalian skeletal muscle tissue is about 1.06 kg/liter. This can be contrasted with the density of adipose tissue (fat), which is 0.9196 kg/liter. This makes muscle tissue approximately 15% denser than fat tissue. All muscles derive from paraxial mesoderm. The paraxial mesoderm is divided along the embryo's length into somites, corresponding to the segmentation of the body (most obviously seen in the vertebral column. Each somite has 3 divisions, sclerotome (which forms vertebrae), dermatome (which forms skin), and myotome (which forms muscle). The myotome is divided into two sections, the epimere and hypomere, which form epaxial and hypaxial muscles, respectively. Epaxial muscles in humans are only the erector spinae and small intervertebral muscles, and are innervated by the dorsal rami of the spinal nerves. All other muscles, including limb muscles, are hypaxial muscles, formed from the hypomere, and inervated by the ventral rami of the spinal nerves. During development, myoblasts (muscle progenitor cells) either remain in the somite to form muscles associated with the vertebral column or migrate out into the body to form all other muscles. Myoblast migration is preceded by the formation of connective tissue frameworks, usually formed from the somatic lateral plate mesoderm. Myoblasts follow chemical signals to the appropriate locations, where they fuse into elongate skeletal muscle cells. Skeletal muscles are sheathed by a tough layer of connective tissue called the epimysium. The epimysium anchors muscle tissue to tendons at each end, where the epimysium becomes thicker and collagenous. It also protects muscles from friction against other muscles and bones. Within the epimysium are multiple bundles called fascicles, each of which contains 10 to 100 or more muscle fibers collectively sheathed by a perimysium. Besides surrounding each fascicle, the perimysium is a pathway for nerves and the flow of blood within the muscle. The threadlike muscle fibers are the individual muscle cells (myocytes), and each cell is encased within its own endomysium of collagen fibers. Thus, the overall muscle consists of fibers (cells) that are bundled into fascicles, which are themselves grouped together to form muscles. At each level of bundling, a collagenous membrane surrounds the bundle, and these membranes support muscle function both by resisting passive stretching of the tissue and by distributing forces applied to the muscle. Scattered throughout the muscles are muscle spindles that provide sensory feedback information to the central nervous system. This same bundles-within-bundles structure is replicated within the muscle cells. Within the cells of the muscle are myofibrils, which themselves are bundles of protein filaments. The term "myofibril" should not be confused with "myofiber", which is a simply another name for a muscle cell. Myofibrils are complex strands of several kinds of protein filaments organized together into repeating units called sarcomeres. The striated appearance of both skeletal and cardiac muscle results from the regular pattern of sarcomeres within their cells. Although both of these types of muscle contain sarcomeres, the fibers in cardiac muscle are typically branched to form a network. 
Cardiac muscle fibers are interconnected by intercalated discs, giving that tissue the appearance of a syncytium. The gross anatomy of a muscle is the most important indicator of its role in the body. One particularly important aspect of gross anatomy of muscles is pennation or lack thereof. In most muscles, all the fibers are oriented in the same direction, running in a line from the origin to the insertion. In pennate muscles, the individual fibers are oriented at an angle relative to the line of action, attaching to the origin and insertion tendons at each end. Because the contracting fibers are pulling at an angle to the overall action of the muscle, the change in length is smaller, but this same orientation allows for more fibers (thus more force) in a muscle of a given size. Pennate muscles are usually found where their length change is less important than maximum force, such as the rectus femoris. Skeletal muscle is arranged in discrete muscles, an example of which is the biceps brachii. The tough, fibrous epimysium of skeletal muscle is both connected to and continuous with the tendons. In turn, the tendons connect to the periosteum layer surrounding the bones, permitting the transfer of force from the muscles to the skeleton. Together, these fibrous layers, along with tendons and ligaments, constitute the deep fascia of the body. The muscular system consists of all the muscles present in a single body. There are approximately 650 skeletal muscles in the human body, but an exact number is difficult to define. The difficulty lies partly in the fact that different sources group the muscles differently and partly in that some muscles, such as palmaris longus, are not always present. The muscular system is one component of the musculoskeletal system, which includes not only the muscles but also the bones, joints, tendons, and other structures that permit movement. The three types of muscle (skeletal, cardiac and smooth) have significant differences. However, all three use the movement of actin against myosin to create contraction. In skeletal muscle, contraction is stimulated by electrical impulses transmitted by the nerves, the motoneurons (motor nerves) in particular. Cardiac and smooth muscle contractions are stimulated by internal pacemaker cells which regularly contract, and propagate contractions to other muscle cells they are in contact with. All skeletal muscle and many smooth muscle contractions are facilitated by the neurotransmitter acetylcholine. The action a muscle generates is determined by the origin and insertion locations. The cross-sectional area of a muscle (rather than volume or length) determines the amount of force it can generate by defining the number of sarcomeres which can operate in parallel. The amount of force applied to the external environment is determined by lever mechanics, specifically the ratio of in-lever to out-lever. For example, moving the insertion point of the biceps more distally on the radius (farther from the joint of rotation) would increase the force generated during flexion (and, as a result, the maximum weight lifted in this movement), but decrease the maximum speed of flexion. Moving the insertion point proximally (closer to the joint of rotation) would result in decreased force but increased velocity. 
This can be most easily seen by comparing the limb of a mole to that of a horse: in the former, the insertion point is positioned to maximize force (for digging), while in the latter, the insertion point is positioned to maximize speed (for running). Muscular activity accounts for much of the body's energy consumption. All muscle cells produce adenosine triphosphate (ATP) molecules which are used to power the movement of the myosin heads. Muscles conserve energy in the form of creatine phosphate, which is generated from ATP and can regenerate ATP when needed with creatine kinase. Muscles also keep a storage form of glucose in the form of glycogen. Glycogen can be rapidly converted to glucose when energy is required for sustained, powerful contractions. Within the voluntary skeletal muscles, the glucose molecule can be metabolized anaerobically in a process called glycolysis, which produces two ATP and two lactic acid molecules in the process (note that in aerobic conditions, lactate is not formed; instead pyruvate is formed and transmitted through the citric acid cycle). Muscle cells also contain globules of fat, which are used for energy during aerobic exercise. The aerobic energy systems take longer to produce the ATP and reach peak efficiency, and require many more biochemical steps, but produce significantly more ATP than anaerobic glycolysis. Cardiac muscle, on the other hand, can readily consume any of the three macronutrients (protein, glucose and fat) aerobically without a 'warm up' period and always extracts the maximum ATP yield from any molecule involved. The heart, liver and red blood cells will also consume lactic acid produced and excreted by skeletal muscles during exercise. The efferent leg of the peripheral nervous system is responsible for conveying commands to the muscles and glands, and is ultimately responsible for voluntary movement. Nerves move muscles in response to voluntary and autonomic (involuntary) signals from the brain. Deep muscles, superficial muscles, muscles of the face and internal muscles all correspond with dedicated regions in the primary motor cortex of the brain, directly anterior to the central sulcus that divides the frontal and parietal lobes. In addition, muscles react to reflexive nerve stimuli that do not always send signals all the way to the brain. In this case, the signal from the afferent fiber does not reach the brain, but produces the reflexive movement by direct connections with the efferent nerves in the spine. However, the majority of muscle activity is volitional, and the result of complex interactions between various areas of the brain. Nerves that control skeletal muscles in mammals correspond with neuron groups along the primary motor cortex of the brain's cerebral cortex. Commands are routed through the basal ganglia and are modified by input from the cerebellum before being relayed through the pyramidal tract to the spinal cord and from there to the motor end plate at the muscles. Along the way, feedback, such as that of the extrapyramidal system, contributes signals to influence muscle tone and response. The afferent leg of the peripheral nervous system is responsible for conveying sensory information to the brain, primarily from the sense organs like the skin. In the muscles, the muscle spindles convey information about the degree of muscle length and stretch to the central nervous system to assist in maintaining posture and joint position.
The sense of where our bodies are in space is called proprioception, the perception of body awareness. More easily demonstrated than explained, proprioception is the "unconscious" awareness of where the various regions of the body are located at any one time. This can be demonstrated by anyone closing their eyes and waving their hand around. Assuming proper proprioceptive function, at no time will the person lose awareness of where the hand actually is, even though it is not being detected by any of the other senses. Several areas in the brain coordinate movement and position with the feedback information gained from proprioception. The cerebellum and red nucleus in particular continuously sample position against movement and make minor corrections to assure smooth motion. The efficiency of human muscle has been measured (in the context of rowing and cycling) at 18% to 26%. The efficiency is defined as the ratio of mechanical work output to the total metabolic cost, as can be calculated from oxygen consumption. This low efficiency is the result of about 40% efficiency of generating ATP from food energy, losses in converting energy from ATP into mechanical work inside the muscle, and mechanical losses inside the body. The latter two losses are dependent on the type of exercise and the type of muscle fibers being used (fast-twitch or slow-twitch). For an overall efficiency of 20 percent, one watt of mechanical power is equivalent to 4.3 kcal per hour. For example, one manufacturer of rowing equipment calibrates its rowing ergometer to count burned calories as equal to four times the actual mechanical work, plus 300 kcal per hour, this amounts to about 20 percent efficiency at 250 watts of mechanical output. The mechanical energy output of a cyclic contraction can depend upon many factors, including activation timing, muscle strain trajectory, and rates of force rise & decay. These can be synthesized experimentally using work loop analysis. A display of "strength" (e.g. lifting a weight) is a result of three factors that overlap: physiological strength (muscle size, cross sectional area, available crossbridging, responses to training), neurological strength (how strong or weak is the signal that tells the muscle to contract), and mechanical strength (muscle's force angle on the lever, moment arm length, joint capabilities). |Grade 0||No contraction| |Grade 1||Trace of contraction, but no movement at the joint| |Grade 2||Movement at the joint with gravity eliminated| |Grade 3||Movement against gravity, but not against added resistance| |Grade 4||Movement against external resistance, but less than normal| |Grade 5||Normal strength| Vertebrate muscle typically produces approximately 25 N (5.6 lbf) of force per square centimeter of muscle cross-sectional area when isometric and at optimal length. Some invertebrate muscles, such as in crab claws, have much longer sarcomeres than vertebrates, resulting in many more sites for actin and myosin to bind and thus much greater force per square centimeter at the cost of much slower speed. The force generated by a contraction can be measured non-invasively using either mechanomyography or phonomyography, be measured in vivo using tendon strain (if a prominent tendon is present), or be measured directly using more invasive methods. The strength of any given muscle, in terms of force exerted on the skeleton, depends upon length, shortening speed, cross sectional area, pennation, sarcomere length, myosin isoforms, and neural activation of motor units. 
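As a rough check on the figures quoted above, the 25 N per square centimeter specific tension and the roughly 20 percent efficiency can be turned into back-of-the-envelope numbers. The short sketch below is illustrative only: the function names are mine, and the 100 cm2 cross-sectional area used in the example is an arbitrary assumed value, not a measurement from this article.

# Back-of-the-envelope check of the figures quoted above (assumed helper names).
SPECIFIC_TENSION_N_PER_CM2 = 25      # ~25 N per cm^2 of cross-section, isometric, optimal length
KCAL_PER_JOULE = 1 / 4184            # 1 kcal is about 4184 J

def isometric_force(cross_section_cm2):
    """Estimated maximal isometric force for a given cross-sectional area."""
    return SPECIFIC_TENSION_N_PER_CM2 * cross_section_cm2

def metabolic_rate_kcal_per_hour(mech_watts, efficiency=0.20):
    """Metabolic cost (kcal/h) implied by a mechanical output at a given efficiency."""
    metabolic_watts = mech_watts / efficiency          # total chemical power required
    return metabolic_watts * 3600 * KCAL_PER_JOULE     # joules per hour converted to kcal/h

print(isometric_force(100))              # hypothetical 100 cm^2 muscle -> about 2500 N
print(metabolic_rate_kcal_per_hour(1))   # about 4.3 kcal/h per watt, matching the text

The second call reproduces the "one watt of mechanical power is equivalent to 4.3 kcal per hour" statement: 1 W at 20 percent efficiency requires 5 W of metabolic power, or 18,000 J per hour, which is about 4.3 kcal.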
Significant reductions in muscle strength can indicate underlying pathology, with the chart at right used as a guide. The "strongest" human muscle Since three factors affect muscular strength simultaneously and muscles never work individually, it is misleading to compare strength in individual muscles, and state that one is the "strongest". But below are several muscles whose strength is noteworthy for different reasons. - In ordinary parlance, muscular "strength" usually refers to the ability to exert a force on an external object—for example, lifting a weight. By this definition, the masseter or jaw muscle is the strongest. The 1992 Guinness Book of Records records the achievement of a bite strength of 4,337 N (975 lbf) for 2 seconds. What distinguishes the masseter is not anything special about the muscle itself, but its advantage in working against a much shorter lever arm than other muscles. - If "strength" refers to the force exerted by the muscle itself, e.g., on the place where it inserts into a bone, then the strongest muscles are those with the largest cross-sectional area. This is because the tension exerted by an individual skeletal muscle fiber does not vary much. Each fiber can exert a force on the order of 0.3 micronewton. By this definition, the strongest muscle of the body is usually said to be the quadriceps femoris or the gluteus maximus. - Because muscle strength is determined by cross-sectional area, a shorter muscle will be stronger "pound for pound" (i.e., by weight) than a longer muscle of the same cross-sectional area. The myometrial layer of the uterus may be the strongest muscle by weight in the female human body. At the time when an infant is delivered, the entire human uterus weighs about 1.1 kg (40 oz). During childbirth, the uterus exerts 100 to 400 N (25 to 100 lbf) of downward force with each contraction. - The external muscles of the eye are conspicuously large and strong in relation to the small size and weight of the eyeball. It is frequently said that they are "the strongest muscles for the job they have to do" and are sometimes claimed to be "100 times stronger than they need to be." However, eye movements (particularly saccades used on facial scanning and reading) do require high speed movements, and eye muscles are exercised nightly during rapid eye movement sleep. - The statement that "the tongue is the strongest muscle in the body" appears frequently in lists of surprising facts, but it is difficult to find any definition of "strength" that would make this statement true. Note that the tongue consists of eight muscles, not one. - The heart has a claim to being the muscle that performs the largest quantity of physical work in the course of a lifetime. Estimates of the power output of the human heart range from 1 to 5 watts. This is much less than the maximum power output of other muscles; for example, the quadriceps can produce over 100 watts, but only for a few minutes. The heart does its work continuously over an entire lifetime without pause, and thus does "outwork" other muscles. An output of one watt continuously for eighty years yields a total work output of two and a half gigajoules. Humans are genetically predisposed with a larger percentage of one type of muscle group over another. 
An individual born with a greater percentage of Type I muscle fibers would theoretically be more suited to endurance events, such as triathlons, distance running, and long cycling events, whereas a human born with a greater percentage of Type II muscle fibers would be more likely to excel at anaerobic events such as a 200 meter dash, or weightlifting. Exercise is often recommended as a means of improving motor skills, fitness, muscle and bone strength, and joint function. Exercise has several effects upon muscles, connective tissue, bone, and the nerves that stimulate the muscles. One such effect is muscle hypertrophy, an increase in size. This is used in bodybuilding. Various exercises require a predominance of certain muscle fiber utilization over another. Aerobic exercise involves long, low levels of exertion in which the muscles are used at well below their maximal contraction strength for long periods of time (the most classic example being the marathon). Aerobic events, which rely primarily on the aerobic (with oxygen) system, use a higher percentage of Type I (or slow-twitch) muscle fibers, consume a mixture of fat, protein and carbohydrates for energy, consume large amounts of oxygen and produce little lactic acid. Anaerobic exercise involves short bursts of higher intensity contractions at a much greater percentage of their maximum contraction strength. Examples of anaerobic exercise include sprinting and weight lifting. The anaerobic energy delivery system uses predominantly Type II or fast-twitch muscle fibers, relies mainly on ATP or glucose for fuel, consumes relatively little oxygen, protein and fat, produces large amounts of lactic acid and can not be sustained for as long a period as aerobic exercise. Many exercises are partially aerobic and partially anaerobic; for example, soccer involves a combination of both. The presence of lactic acid has an inhibitory effect on ATP generation within the muscle; though not producing fatigue, it can inhibit or even stop performance if the intracellular concentration becomes too high. However, long-term training causes neovascularization within the muscle, increasing the ability to move waste products out of the muscles and maintain contraction. Once moved out of muscles with high concentrations within the sarcomere, lactic acid can be used by other muscles or body tissues as a source of energy, or transported to the liver where it is converted back to pyruvate. In addition to increasing the level of lactic acid, strenuous exercise causes the loss of potassium ions in muscle and causing an increase in potassium ion concentrations close to the muscle fibres, in the interstitium. Acidification by lactic acid may allow recovery of force so that acidosis may protect against fatigue rather than being a cause of fatigue. Delayed onset muscle soreness is pain or discomfort that may be felt one to three days after exercising and subsides generally within two to three days later. Once thought to be caused by lactic acid buildup, a more recent theory is that it is caused by tiny tears in the muscle fibers caused by eccentric contraction, or unaccustomed training levels. Since lactic acid disperses fairly rapidly, it could not explain pain experienced days after exercise. Independent of strength and performance measures, muscles can be induced to grow larger by a number of factors, including hormone signaling, developmental factors, strength training, and disease. 
Contrary to popular belief, the number of muscle fibres cannot be increased through exercise. Instead, muscles grow larger through a combination of muscle cell growth, as new protein filaments are added, along with additional mass provided by undifferentiated satellite cells alongside the existing muscle cells. Muscle fibres have a limited capacity for growth through hypertrophy, and some believe they split through hyperplasia if subjected to increased demand. Biological factors such as age and hormone levels can affect muscle hypertrophy. During puberty in males, hypertrophy occurs at an accelerated rate as the levels of growth-stimulating hormones produced by the body increase. Natural hypertrophy normally stops at full growth in the late teens. As testosterone is one of the body's major growth hormones, on average, men find hypertrophy much easier to achieve than women. Taking additional testosterone or other anabolic steroids will increase muscular hypertrophy. Muscular, spinal and neural factors all affect muscle building. Sometimes a person may notice an increase in strength in a given muscle even though only its opposite has been subject to exercise, such as when a bodybuilder finds her left biceps stronger after completing a regimen focusing only on the right biceps. This phenomenon is called cross education. Inactivity and starvation in mammals lead to atrophy of skeletal muscle, a decrease in muscle mass that may be accompanied by a smaller number and size of the muscle cells as well as lower protein content. Muscle atrophy may also result from the natural aging process or from disease. In humans, prolonged periods of immobilization, as in the cases of bed rest or astronauts flying in space, are known to result in muscle weakening and atrophy. Atrophy is of particular interest to the manned spaceflight community, since the weightlessness experienced in spaceflight results in a loss of as much as 30% of mass in some muscles. Such consequences are also noted in small hibernating mammals like the golden-mantled ground squirrels and brown bats. During aging, there is a gradual decrease in the ability to maintain skeletal muscle function and mass, known as sarcopenia. The exact cause of sarcopenia is unknown, but it may be due to a combination of the gradual failure in the "satellite cells" which help to regenerate skeletal muscle fibers, and a decrease in sensitivity to or the availability of critical secreted growth factors which are necessary to maintain muscle mass and satellite cell survival. Sarcopenia is a normal aspect of aging and is not actually a disease state, yet it can be linked to many injuries in the elderly population as well as a decreased quality of life. There are also many diseases and conditions which cause muscle atrophy. Examples include cancer and AIDS, which induce a body wasting syndrome called cachexia. Other syndromes or conditions which can induce skeletal muscle atrophy are congestive heart disease and some diseases of the liver. Neuromuscular diseases are those that affect the muscles and/or their nervous control. In general, problems with nervous control can cause spasticity or paralysis, depending on the location and nature of the problem. A large proportion of neurological disorders, ranging from cerebrovascular accident (stroke) and Parkinson's disease to Creutzfeldt-Jakob disease, can lead to problems with movement or motor coordination. Symptoms of muscle diseases may include weakness, spasticity, myoclonus and myalgia.
Diagnostic procedures that may reveal muscular disorders include testing creatine kinase levels in the blood and electromyography (measuring electrical activity in muscles). In some cases, muscle biopsy may be done to identify a myopathy, as well as genetic testing to identify DNA abnormalities associated with specific myopathies and dystrophies. A non-invasive elastography technique that measures muscle noise is undergoing experimentation to provide a way of monitoring neuromuscular disease. The sound produced by a muscle comes from the shortening of actomyosin filaments along the axis of the muscle. During contraction, the muscle shortens along its longitudinal axis and expands across the transverse axis, producing vibrations at the surface. Evolutionarily, specialized forms of skeletal and cardiac muscles predated the divergence of the vertebrate/arthropod evolutionary line. This indicates that these types of muscle developed in a common ancestor sometime before 700 million years ago (mya). Vertebrate smooth muscle was found to have evolved independently from the skeletal and cardiac muscles. - Electroactive polymers—materials that behave like muscles, used in robotics research - Hand strength - Muscle memory - Rohmert's law—pertaining to muscle fatigue
- "Concept II Rowing Ergometer, user manual" (PDF). 1993. - Muslumova, Irada (2003). "Power of a Human Heart". The Physics Factbook. - Nielsen, OB; de Paoli, F; Overgaard, K (2001). "Protective effects of lactic acid on force production in rat skeletal muscle". Journal of Physiology 536 (1): 161–6. doi:10.1111/j.1469-7793.2001.t01-1-00161.x. PMC 2278832. PMID 11579166. - Robergs, R; Ghiasvand, F; Parker, D (2004). "Biochemistry of exercise-induced metabolic acidosis". Am J Physiol Regul Integr Comp Physiol 287 (3): R502–16. doi:10.1152/ajpregu.00114.2004. PMID 15308499. - Fuster, G; Busquets, S; Almendro, V; López-Soriano, FJ; Argilés, JM (2007). "Antiproteolytic effects of plasma from hibernating bears: a new approach for muscle wasting therapy?". Clin Nutr 26 (5): 658–61. doi:10.1016/j.clnu.2007.07.003. PMID 17904252. - Roy, RR; Baldwin, KM; Edgerton, VR (1996). "Response of the neuromuscular unit to spaceflight: What has been learned from the rat model". Exerc. Sport Sci. Rev. 24: 399–425. PMID 8744257. - "NASA Muscle Atrophy Research (MARES) Website". - Lohuis, TD; Harlow, HJ; Beck, TD (2007). "Hibernating black bears (Ursus americanus) experience skeletal muscle protein balance during winter anorexia". Comp. Biochem. Physiol. B, Biochem. Mol. Biol. 147 (1): 20–28. doi:10.1016/j.cbpb.2006.12.020. PMID 17307375. - Roche, Alex F. (1994). "Sarcopenia: A critical review of its measurements and health-related significance in the middle-aged and elderly". American Journal of Human Biology 6: 33. doi:10.1002/ajhb.1310060107. - Dumé, Belle (18 May 2007). "'Muscle noise' could reveal diseases' progression". NewScientist.com news service. - Steinmetz, P. R. H.; Kraus, J. E. M.; Larroux, C.; Hammel, J. R. U.; Amon-Hassenzahl, A.; Houliston, E.; Wörheide, G.; Nickel, M. et al. (2012). "Independent evolution of striated muscles in cnidarians and bilaterians". Nature 487 (7406): 231. doi:10.1038/nature11180. - "Evolution of muscle fibers" (PDF). |Look up muscle in Wiktionary, the free dictionary.| - Media related to muscles at Wikimedia Commons - University of Dundee article on performing neurological examinations (Quadriceps "strongest") - Muscle efficiency in rowing - Human Muscle Tutorial (clear pictures of main human muscles and their Latin names, good for orientation) - Microscopic stains of skeletal and cardiac muscular fibers to show striations. Note the differences in myofibrilar arrangements.
http://en.wikipedia.org/wiki/Muscle
13
52
Table of Contents There are 12 problem types: Add, Subtract, Multiply, Divide, Add / Subtract, Multiply / Divide, Mixed, Add (Missing Number), Subtract (Missing Number), Multiply (Missing Number), Divide (Missing Number), Greater / Less Than. The first six types are self-explanatory. The seventh type, Mixed, allows a fact sheet to be created which contains a mixture of types 1-4 and 12. The Missing Number types include either a missing numerator or denominator as part of the problem. Finally, a Greater / Less Than problem allows a student to determine the greater/less than relationship between two numbers. When creating and printing fact or answer sheets, you may wish to provide a description of the type of problem set. A description that is entered will automatically appear at the top of the fact or answer sheet. The font size that is used to create a fact sheet can be changed to allow more or fewer problems per page. By selecting a smaller size, more problems can normally be presented horizontally as well as vertically. When selecting the problem type Division, you may select how to present the answers when creating an answer sheet. Selecting "Normal" will automatically show each answer up to five decimal places. Selecting "Whole" guarantees that each division problem will result in a whole number. The third choice, "Two-decimal rounding", will generate all answers with a rounded two-decimal answer. This option is only active when generating a Division (and Missing Number) or Mixed problem type. - Include Name/Date/Score fields - By enabling this option, each fact sheet will automatically have a page header which will include a space for name, date, and score. The header is not included on answer sheets. - Don't allow answers of -1, 0, or 1 - If you would like to guarantee that no answer has a value of minus one, zero, or one, enable this option. - Generate numbers with two decimals - Although not frequently used, if you would like your problem facts to have up to two decimal places, enable this option. - Jumbo space for working solution - By enabling this option, extra space will be included between each row for solving problems. The extra space is not included on answer sheets. - No negative results for subtraction - By enabling this option, you are guaranteed that no subtraction problem will result in a negative answer (helpful for children who have not learned negative numbers yet). This option is only active when generating a Subtraction (and Missing Number) or Mixed problem type. - Number problems from 1 to ... - By selecting this option, all fact sheet problems will be numbered from 1 to the highest problem number. - Display problems horizontally - This option controls how the addition, subtraction, multiplication, and division problems are displayed. By selecting this option, problems will be presented horizontally (e.g. 3+5=8) rather than the default vertical layout where one number is beneath the other. Row Layout The row layout allows you to specify how many rows and how many problems per row to create when generating a fact sheet. Both the row count and the problems per row values must be in the range 1-25. Depending on the font size selected, a standard 8x11 sheet of paper contains 6 rows of 5 problems. Range for Problems Creating a fact sheet problem requires the generation of a random numerator and a random denominator. In order to do this, a numeric "range" must be specified so that a number can be randomly selected from within that range. There are three different ways to specify a range.
- Single Range - Being the easiest way to generate fact sheet problems, this method uses a single numeric range to select both the numerator and denominator. The low and high ends of the range are entered as the "minimum" and "maximum" values. For example, if fact sheet problems are to have numerators and denominators which are single digits, the minimum and maximum values entered would be "0" and "9". - Dual Range - The dual range method allows a different range to be specified for each of the numerator and denominator. For example, if you would like fact sheet problems that always include a 7 as one number and a single-digit number as the other number, the ranges would be "7 to 7" and "0 to 9" respectively. This method allows the most flexibility in the generation of both the numerator and denominator. - Mixed Range - This method allows a mixed specification of both the single range and the dual range. More difficult to use (and not as popular), the mixed range method allows fact sheet problems to have either the minimum or maximum number entered as a range (i.e. "1 to 10"), while the other number is entered as a single value (i.e. "5"). When a mixed range is entered, the single value will be converted to a range using either the low or high value of the other range. For example, if you specify a range of "0 to 9" for the first number and "3" for the second number, the resulting ranges will be "0 to 9" and "0 to 3". For another example, if the first number is entered as "5" and the second number is a range entered as "1 to 10", the resulting ranges will be "5 to 10" and "1 to 10". Numbers that are entered as a minimum, maximum, or a range value must be in the range -9999 to 9999. Answer Range This optional input field allows you to limit the range of your answers. If entered, you must specify your range in the "low to high" format. To limit your answers to a number from 1 to 10, enter "1 to 10". Although it makes answers easy for students to determine, your answer range can also specify a single result, as in "9 to 9". Be careful to use answer ranges that make sense for your problem ranges. For example, don't create positive addition facts and then input a negative answer range. Failure to input valid answer ranges will result in an error page. Lock Minimum/Maximum Ranges If you input ranges as the minimum and maximum values, this option controls whether those ranges will be used to select either a numerator or denominator. If this option is not selected (the default), the numerator for a fact problem may be the result of either input range; the denominator will be selected from the opposite range. If this option is selected, the numerator will be selected from the "Minimum or range" input value; the denominator will be selected from the "Maximum or range" input value. Be aware that selecting this option can sometimes result in scenarios that cannot be resolved. For example, if you enable this option and select "No negative results for subtraction", a "Minimum or range" input value of 1 to 1, and a "Maximum or range" input value of 5 to 10, you will receive an error (you cannot subtract a value of 5-10 from 1 and get a positive result). The Generate! button allows you to build your fact sheet. Once you have entered the required fields and selected any options, you may select one of three different Generate! choices (listed below). After selecting your choice, click the Generate! button to process your request.
- Generate fact sheet - This choice will create and display your fact sheet (without answers). Each sheet will be displayed in a new browser window. Once displayed, you can review and/or print your fact sheet. On many browsers, you can move your mouse over each individual problem to view the answer. - Generate answer sheet - This choice will display your fact sheet (with answers). Each sheet will be displayed in a new browser window. Once displayed, you can review and/or print your answer sheet. - Automatically email me daily - This choice will provide you the opportunity to automatically receive a "re-shuffled" set of problems for your fact sheet on a daily basis. Once you click the Generate! button, you are allowed to enter your email address and select which weekdays you would like to receive email. Each email received will contain an Internet link to generate a different set of problems, specific for that day. All fact sheet problems are based on the settings you entered for your fact sheet. For each set of options you select on the build page, you will get the same problems each time you click the Generate! button. If you are not satisfied with the problems generated for you, then you can click the Re-Shuffle button. Once you click Re-Shuffle, you can re-click the Generate! button to generate a new set of problems. There is no limit to the number of times you can re-shuffle your problems.
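The behavior described above (the same settings always produce the same problems, while Re-Shuffle produces a new set) can be modeled as seeded random generation over the chosen ranges. The sketch below is not the site's actual code; the function name, the settings dictionary, and the idea of seeding from a checksum of the settings plus a shuffle counter are illustrative assumptions, and only subtraction facts are generated for brevity.

import random
import zlib

def generate_problems(settings, shuffle_count=0):
    # Derive a stable seed from the settings so the same options always
    # produce the same sheet; bumping shuffle_count mimics Re-Shuffle.
    seed = zlib.crc32(repr(sorted(settings.items())).encode()) + shuffle_count
    rng = random.Random(seed)
    lo1, hi1 = settings["min_range"]   # "Minimum or range"
    lo2, hi2 = settings["max_range"]   # "Maximum or range"
    problems = []
    while len(problems) < settings["count"]:
        a, b = rng.randint(lo1, hi1), rng.randint(lo2, hi2)
        if not settings.get("lock_ranges", False) and rng.random() < 0.5:
            a, b = b, a                # either range may supply the first number
        answer = a - b
        # Reject and redraw when an option rules the problem out; note that
        # impossible combinations (like the locked 1-to-1 / 5-to-10 example
        # above) would never terminate, which is why the site reports an error.
        if settings.get("no_negative_results") and answer < 0:
            continue
        if settings.get("no_trivial_answers") and answer in (-1, 0, 1):
            continue
        problems.append((a, b, answer))
    return problems

sheet = generate_problems(
    {"count": 30, "min_range": (0, 9), "max_range": (0, 9),
     "no_negative_results": True, "no_trivial_answers": False})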
http://www.mathfactcafe.com/help/buildit/410/
13
80
A vector space is a set of objects that can be added together and multiplied by elements of another set, while satisfying certain properties. Elements of the first set are called "vectors" while elements of the second set are called "scalars". The idea of a vector space is one of the most fundamental and important concepts in mathematics, physics, and engineering. In two dimensions, a vector has a "magnitude" (or length) and a "direction" (or angle). Perhaps the simplest vector to visualize is the velocity vector, showing the speed and direction of motion of a particle. Other extremely common vectors are the electric field and magnetic field vectors, though vectors abound in numerous areas of mathematics, physics, mechanical engineering, and aspects of electrical engineering. The most important operations involving vectors are the vector sum and the vector-scalar product. As an example of the first, if we are in a train traveling with speed given by one vector, and we throw something inside the train with a velocity, relative to the train, of another vector, the velocity of the object relative to a fixed observer is the sum of those two vectors. As an example of the second, if we double the current through an electromagnet, its magnetic field vector will be multiplied by the number 2. That is, its direction will be unchanged and its magnitude will double. A vector space is a set of vectors that can be added to each other or multiplied by a "scalar". (The term "scalar" is used for treatments of unusual vector spaces—see below. For the straightforward case, think of a scalar as just an ordinary real number.) Not everything is a vector: some examples of things that are not vectors are air temperature and pressure, or the electrostatic voltage. These have no direction. They are scalars. But there is a special type of derivative, the "gradient" of a scalar, that is a vector. This measures the change in a scalar from one point in space to another. Vector spaces have a "dimension". In the physically simple cases, that dimension is usually just 2 or 3. Vectors drawn as arrows on a piece of paper are two-dimensional vectors. Vectors giving velocity, electric field strength, etc., in real 3-dimensional space are three-dimensional vectors. Given a choice of "coordinate system" or "basis" for representing vectors, any vector can be denoted by 2 or 3 (or whatever the dimension is) scalars. So, for example, a particle's velocity vector can be represented by its x-velocity, y-velocity, and z-velocity. These numbers are called the "components" of the vector, and are generally written with subscripts running from 1 to the dimension of the space. So a vector might be represented as v = (v1, v2, v3). When represented in this way, the vector sum is very straightforward: u + v = (u1 + v1, u2 + v2, u3 + v3), and the vector-scalar product is equally straightforward: av = (av1, av2, av3). Because of this representation, each vector can be thought of as a point in a Cartesian coordinate system; or, rather, as an arrow from the origin to the point. For example, v = (4, 5, 6) can be thought of as the arrow from the origin (0, 0, 0) to the point (4, 5, 6). In this case, (4, 5, 6) would be called the head of v, and (0, 0, 0) would be called the tail. - The space of n-tuples of real numbers is a vector space, where to add two vectors we simply add the corresponding components. The case n = 2 is exactly the case of vectors in the plane discussed above. - The set of polynomials with real coefficients is a vector space.
If we add two polynomials together, we get another polynomial, and similarly, if we multiply a polynomial by a constant, we get another polynomial. Note that although it's also possible to multiply two polynomials and get another one, this is not part of the vector space structure: a vector space with a reasonable notion of multiplication of vectors is called an algebra. - The set of polynomials of degree less than or equal to n (for any n) is a vector space, for the same reason. - The set of all continuous functions on the real line is a vector space: the sum of two continuous functions is again continuous, as is the product of a continuous function with a constant. - The set of matrices of a given fixed size is a vector space. - If V and W are vector spaces, then we can form a new vector space (called the direct sum of V and W) whose entries are ordered pairs (v,w) of elements of V and elements of W. Many familiar properties of vectors in the plane carry over to general vector spaces. For example, just as the plane is 2-dimensional, it makes sense to talk about the dimension of any vector space (though it may be infinite, as in the case of polynomials!) Vectors in the plane can all be written in the form ae1 + be2, where e1 = (1,0), e2 = (0,1), and a set of elements with this same property (that all vectors can be written as sums of multiples of vectors in the set) is called a basis. Having a convenient basis often makes computations easier. It turns out that every finite dimensional vector space has a basis -- in fact, if we're feeling adventurous and assume the Axiom of Choice, even every infinite dimensional vector space has a basis. However, a general vector space has no notion of "distance": given a vector, there's not necessarily a way to define the length | v | of that vector. For example, it's not obvious how we should define the length of a polynomial or a matrix. A vector space endowed with a notion of distance is called normed. The above discussion only considers vector spaces in which the scalars are real numbers, but we could just as well talk about the set of polynomials with complex coefficients, where we multiply by complex scalars. More generally, given any field F, a vector space over F is an additive abelian group (where addition is commutative) with which is associated a field of scalars, such as the field of real numbers, such that the product of a scalar and an element of the group (a vector) is defined, the product of two scalars times a vector is associative, one times a vector is the vector, and two distributive laws hold. In terms of another definition, a vector space is simply a module for which the ground ring is a field. Specifically, let V be a vector space over a field F. Then for all u,v,w ∈ V and a,b ∈ F the following axioms are obeyed: 1. Commutativity: u + v = v + u. 2. Associativity: (u + v) + w = u + (v + w). 3. Identity: There exists a 0 ∈ V such that v + 0 = 0 + v = v. 4. Inverse: For all v there exists a (-v) such that v + (-v) = (-v) + v = 0. 5. Associativity of scalar multiplication: a(bv) = (ab)v. 6. Scalar identity: 1v = v. 7. Scalar sums: (a + b)v = av + bv. 8. Vector sums: a(v + w) = av + aw. Weisstein, Eric W. "Vector Space." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/VectorSpace.html
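Since componentwise addition and scalar multiplication are exactly what make n-tuples of real numbers a vector space, a short numerical sketch can make the axioms concrete. The snippet below is illustrative only: the function names and the sample vectors are mine rather than part of the article, and the asserts spot-check a few axioms on sample data rather than proving them in general.

def vec_add(u, v):
    # Componentwise sum: u + v = (u1 + v1, ..., un + vn)
    return tuple(ui + vi for ui, vi in zip(u, v))

def scalar_mul(a, v):
    # Scalar product: a*v = (a*v1, ..., a*vn)
    return tuple(a * vi for vi in v)

u, v, w = (4, 5, 6), (1, -2, 0.5), (0, 3, 7)
a, b = 2.0, -3.0
zero = (0, 0, 0)

# Spot-check several of the axioms listed above on these sample vectors.
assert vec_add(u, v) == vec_add(v, u)                          # 1. commutativity
assert vec_add(vec_add(u, v), w) == vec_add(u, vec_add(v, w))  # 2. associativity
assert vec_add(v, zero) == v                                   # 3. additive identity
assert scalar_mul(a, scalar_mul(b, v)) == scalar_mul(a * b, v) # 5. a(bv) = (ab)v
assert scalar_mul(1, v) == v                                   # 6. 1v = v
assert scalar_mul(a + b, v) == vec_add(scalar_mul(a, v), scalar_mul(b, v))  # 7. (a+b)v = av+bv
assert scalar_mul(a, vec_add(v, w)) == vec_add(scalar_mul(a, v), scalar_mul(a, w))  # 8. a(v+w) = av+aw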
http://www.conservapedia.com/Vector_space
13
50
Posted on 29 March 2013. First grade students imagined that they were Christopher Columbus. As an explorer, they needed a way to sail across the ocean to discover new lands. They were instructed to use limited resources to design, plan, and construct a ship. Then they would test their ships in an actual water race where they selected the type of force they would use to get the ship to move. At the start of the project, the students were told to find 1-2 partners. They did research in the library using books and websites to find out how ships are constructed and how they move as well as answering other questions they had. Next, they conducted an experiment on Sinking/Floating to see which materials would work best for constructing their ship. Once they had selected a design and their materials, they drew a labeled diagram using Pixie. Then they built their ship with the help of the art teacher. They tested their ships in the water table to see if they would float and if they could achieve straight motion with the type of force they had chosen. They made adjustments to their ships based on their reflections and comparisons with the other voyagers’ experiments. Finally, we had the Great Ship Race where students raced their ships in a gutter full of water. They documented the race with the iPad video cameras. They rated their own and each others’ ships using a rubric. They also shared their ships with other first grade classes. This project scores in the Ideal/Target range of Research & Information Fluency. -Students created their own questions to assist with research. -Students used a variety of resources: books, websites, etc. -Students evaluated the resources based on appropriateness and quality to their project using a rubric. -Students used various types of experts to expand their questioning and revise their projects. -Students conducted experiments to test the designs of their ships and made adjustments accordingly. This project scores in the Ideal/Target range of Communication & Collaboration. -Students chose their own groups and assigned each other roles. -iPad videos and interviews were used to get ideas from other groups. -Students collaborated to research (using websites/videos), design (using Pixie), and construct their ships. -Students’ reflections were communicated through Pixie, iPads, and rating themselves with rubrics. -Students were able to use the media specialist and art teacher as expert sources. -Videos and pictures of their projects were posted to the classroom blog and their diagrams were posted to Comemories. This project scores in the Ideal/Target range of Critical Thinking/Problem Solving. -Students conducted a floating/sinking experiment prior to construction to plan their design. -Students regularly revised their ideas based on updated information. -Students were presented with a real-life problem related to Social Studies and Science. -Students were required to choose from a limited number of materials (or they could trade with each other). -Students reflected on their experiences individually and with their partners using a rubric. -The class reflected on the entire project and reviewed the answers to the questions posed. This project scores in the Ideal/Target range of Creativity/Innovation. -Students created their own boats using materials they chose and decided what type of force to use to make their ships move. -Students were encouraged to take risks and try new things that would help their ship succeed in the race. 
-Students were creative in managing their resources since they could only select 5 or they could trade with other students. -Students evaluated the creative process afterwards using a rubric. - Lesson Plan (Word) - Reflection Questions (PDF) - Resource Evaluation Rubric (PDF) - Supply List (PDF) - Ship Race Evaluation Rubric (PDF) - Copies of student boat diagrams Posted in Comm/Collab - Target, Creativity - Target, Critical Thinking - Target, Elementary School, Info Fluency - Target, Math, Project, Science, Social Studies Posted on 20 March 2013. This school is implementing the “Leader in Me” character education program, so for this project, students studied a famous American and predicted how that person would show the 7 Habits at their school. Students were grouped into pairs and decided which famous American they wanted to research (Helen Keller, Jackie Robinson, Martin L. King Jr., Abraham Lincoln, George Washington or Susan B. Anthony). The students then used PebbleGo, BrainPop Jr., biographies, or any other source to gather information. They notated and evaluated their sources, then they took their research and completed a four square planning sheet for their presentation. Next, partners decided what digital program to use to present their research online. Pixie and Comic Life were their top choices since those were the two programs they had learned so far this year. The students took turns putting their research into the comic strip or Pixie. They had to include the famous American’s contribution, one new fact, how that person would be a leader at the school, and any other interesting facts of their choice. The finished projects were presented to the class and published online via Flipsnack. Afterwards they evaluated how well they worked as partners by filling out a Partner Work Reflection Sheet. This project scores in the Approaching range of Research and Information Fluency. The teacher modeled how to read a book and gather research about a Famous American. Students worked together to gather research from multiple sources (online and print) to fill out a four square organizer in order to make sure they had all of the necessary information. They also recorded and rated their sources. This project scores in the Approaching range of Communication & Collaboration. The students worked together in pairs choosing what famous American to research and what type of digital program to use to show their research. Their work was published online for others outside the classroom to access. Students reflected on their roles using the partner work reflection worksheet. This project scores in the Developing range of Critical Thinking and Problem Solving. Students had to think of how the 7 Habits from the “Leader in Me” program were displayed in the life of their famous American. They had to apply what they have been learning in character education to a new historical situation and predict how their person would be a leader at their school. Their project was authentic because reinforces the school-wide “Leader in Me” program, and it will be used as an example of how the school is implementing the “Leader in Me” program. This project scores in the Developing range of Creativity and Innovation. Students could choose which digital tool they wanted to use to display their information. They were able to select their own pictures and special effects. They predicted how their character would respond in a new situation. 
Posted in Comm/Collab - App, Creativity - Dev, Critical Thinking - Dev, Elementary School, Info Fluency - App, Social Studies Posted on 20 March 2013. The objective of this lesson is for small groups of students to collaborate to create layered rhythm vocal ostinatos (repeating rhythm phrases) using puppets, based on a topic of their choice. Students will create ostinatos using repeated words or sounds in synchronization with puppet movements to create layers of sound patterns that compliment and contrast with the others in their group. During the course of this activity, students will become more confident and creative in their performances as shown through better voice projection, increased complexity of rhythms with movement, and increased sharing of ideas within their groups. They will plan, practice, perform and share these performances using various technology devices. Their performances will also be posted to Vimeo. This project scores in the Developing range of Research & Information Fluency. The students used a reliable professional source (the symphony website) as well as each others’ projects to get ideas and improve upon them. The lesson builds on their research of rhythm using the symphony website and prior experience reading and playing rhythm patterns in instrument groups. Students have the choice of using various technology tools to record their performances, including iPads, Flip cameras and laptops. This project scores in the Approaching range of Communication & Collaboration. Students worked in collaborative groups to create their ostinato vocal rhythms and puppet movements. They evaluated their performances and revised them in order to make them more complex. They chose what digital tool to use to record their performance and they posted the video of their projects online. This project scores in the Developing range of Critical Thinking & Problem Solving. Students had to think about ways to make their ostinatos more complex by adding additional layers, turning simple words into longer phrases, and adjusting their puppets’ movements. By studying the recordings of their performances, students evaluated each other and themselves based on many applicable criteria: creativity, balance, contrast, rhythm, teamwork. This project scores in the Developing range of Creativity & Innovation. All groups created the same product (puppet shows), but they were encouraged to think of new rhythms and topics for their ostinatos, making more than just one. This lesson provides multiple opportunities to plan, create, perform and share. It synthesizes the talents of students with different learning styles and abilities to create a new group experience. Each group performs for the class numerous times resulting in significant growth in the areas of rhythm, technology, teamwork and creativity. Posted in Comm/Collab - App, Creativity - Dev, Critical Thinking - Dev, Elementary School, Info Fluency - Dev, Music, Project Posted on 19 March 2013. In groups of three, students chose a topic from the science curriculum that was taught during the first semester. Students individually researched their sub-topics, developed a plan using 4-Square, and wrote an expository paper. Students then collaborated with their group to plan their presentations on their topics. Students recorded the information that they would share during their presentations on note cards and worked with their peers to develop an appropriate visual display. They then presented their information to the class. 
They evaluated their projects and presentations with a rubric. This project scores in the Approaching range of Research & Information Fluency. Students constructed questions to guide their research, they selected their own research tools, they rated their sources and research skills using a rubric, and organized their own information in a meaningful way. This project scores in the Developing range of Communication & Collaboration. Students worked in groups selected by teachers. They chose their topics (within the scope of our science curriculum for the first semester) and worked together to create a presentation for the class using a digital tool of their choice. This project scores in the Developing range of Critical Thinking & Problem Solving. The students used technology to come up with a new and creative way to present their information to the class. Several students used more than one program to create the visual components of their presentations. For example, students used Garageband to add sounds to their Keynotes. Students also had to decide which information was important to share and what order the group members would share their facts. Students evaluated their presentations using a rubric. This project scores in the Developing range of Creativity & Innovation. Students chose the information they wanted to share with their classmates, and they also chose the tools they would use for their presentation. They created an interesting, entertaining way to review the semester science curriculum with their class. They also rated their creative process using a rubric. - Lesson Plan (Word) - Assignment Guidelines (Word) - Research Guide (Word) - 2 Evaluation Rubrics (PDF) - 3 Examples of Student Papers (PDF) - 3 Examples of Student Projects (Keynote) Posted in Comm/Collab - Dev, Creativity - Dev, Critical Thinking - Dev, Elementary School, Info Fluency - App, Science Posted on 19 March 2013. Students review various computer programs. Students read various traditional fairy tales and fractured fairy tales. Students get in groups to write their own fractured fairy tales, following the steps of the writing process. Groups choose which program they would like to use to present their writings. Groups create storyboards on construction paper to plan out their presentation. Students use their chosen program to create a presentation for their writings. This project scores in the Developing range of Research & Information Fluency. Students researched fractured fairy tales that had been selected by the teacher and the school media specialist. They analyzed and extended the ideas in those stories to create their own fractured fairy tales. They also evaluated and rated the stories using a rubric This project scores in the Approaching range of Communication & Collaboration. Students worked in collaborative groups to decide which type of fairy tale to create and which digital tool to use to effectively communicate their fairy tale to an audience. Fairy tales were published online and classmates provided feedback. Students reflected on their group work using a rubric afterwards. This project scores in the Developing range of Critical Thinking & Problem Solving. Students had to work together to decide which elements of the fairy tale could be changed and which ones needed to remain in order to keep the core narrative recognizable. Students evaluated their own, as well their classmates,’ fractured fairy tales using a rubric. This project scores in the Developing range of Creativity and Innovation. 
Students were given the opportunity to choose which fairy tale to adapt and which digital tool to use. They were given creative license to change whatever aspect of the fairy tale they wanted as long as their new story retained certain recognizable features. They published their creations online and evaluated their creativity with a rubric. Posted in Comm/Collab - App, Creativity - Dev, Critical Thinking - Dev, Elementary School, Info Fluency - Dev, Language Arts Posted on 14 March 2013. Families visiting Three Lakes Park have an interactive way to learn more about the animals they see in the nature center thanks to third graders down the street at Chamberlayne Elementary. The third graders researched native animals in the park and created virtual guides that can be accesses via QR codes at the park’s exhibit. The students were required to include a description and facts, but were then given the choice of what other technologies to use to help the public learn more about the animals. The self-guided group work resulted in content rich InstaBlogg sites that include creative movies, keynotes, quia games, polls, thinglinks, beeclips and pixie projects. Students were required to use the background knowledge developed during our animal studies unit to create a product that encourages the community to learn more. The students were working in the Ideal range for Resarch and Information Fluency. This project was a culmination of our animal studies unit so students were already familiar with terms and megafauna. They were challenged to put their knowledge of animal relationships and adaptations to use in a relevant way so that others could benefit from their learning. Students were given a guide sheet and worked in groups to research the animal. They chose their own groups based on what animal they were interested in researching. The students used books from the library and Internet search sites such as OneSearch, DuckDuckGo and Pebble Go to find information about their animals. Because students were already familiar with content vocabulary and concepts from learning about the world’s various environments, they were able to hit the ground running. Most groups finished the required research and continued to find additional facts beyond the requirements. The facts that came from their own curiosity proved to be the most interesting for them and the ones they highlighted the most in their final product. One group, for example, learned that the large mouth bass has an amazing sense of smell. They were so proud of this fact in their video that their enthusiasm seemed to better engage the rest of the class when they watched the video. Groups also did some field research when they visited Three Lakes Park to view the animals up close and figure out where the best place was to put their QR codes. Students worked in the Ideal range of Communication & Collaboration as they took on new roles in this activity. They became the experts and needed to create an interesting site to engage community members and encourage them to learn more about the animals in their back yards. Students were asked to teach about their animals in the most interactive way that they could using a blog which could be accessed by visitors to Three Lakes Park via a QR code. With that goal in mind, some groups worked on a video, others created “fact or fiction” games that reflected a fun way that they like to learn, and others created pixie pictures to illustrate life cycles. 
Many groups delegated tasks and were able to create more than one technology project to enhance their site. They used what they liked from their favorite websites to make their blog more interesting for others. Because they had the editing link, their sites would often look different in the morning. This was because students were going over to each others’ houses and working on their sites at home. They continue to make edits to improve their sites and better serve the community! Students worked in the Ideal range of Critical Thinking and Problem Solving. With so many choices regarding their information and how to present it, students had to decide which facts were best to include and how to effectively communicate those ideas to the public. As they worked on their projects, the students needed less and less teacher assistance. They were taking advantage of shortcuts on the keyboard, dropping photos and videos into their folders for future use, and applying their knowledge from former Keynote lessons to perform advanced skills, like transitions and builds, on their own. They gained a better understanding of the pros and cons of each type of digital tool and made decisions based upon those insights. One of the goals of the project was to persuade visitors to protect the animals and preserve their environment, so students had think of ways to do that as well. This challenged them to apply the facts they learned to a new and specific situation at Three Lakes Park. Students also had the chance to help design the QR code poster that was displayed at the park. As a class they named important elements to include on the sign. Since they wanted to get people’s attention and make it easy to read, they realized the importance of font and color choice. Each group worked on a design and voted on the final poster as a class. Throughout the project, students were presented with challenges that had more than one solution. At the end of the project, they evaluated how well they performed each step of the process using a rubric. Students worked in the Ideal range of Creativity and Innovation. They enjoyed making things that were different from their classmates and that would “WOW” their audience. One student really wanted to make a game. Each day he would ask how to make a game, so the teacher introduced him to Quia and gave him a brief overview of how to program the game. He produced an amazing game and the questions reflected a strong understanding and ability to extend the knowledge. He created “distractor” choices that were tricky, unless you read his group’s site. That’s just one example of how students went beyond the basic requirements for the assignment and took risks. It was exciting to see what they came up with. Everything was left up to individual groups and each page reflects the diverse ideas in the class. Posted in Comm/Collab - Target, Creativity - Target, Critical Thinking - Target, Elementary School, Info Fluency - Target, Project, Science Posted on 14 March 2013. Students created their own blogs about an animal of their choice using Instablogg and a variety of other web tools. Topics included on the blog were the animal’s adaptations, habitat, diet, and fun facts. The students had 2 class periods to complete their research using the Internet and books from the library. As part of their research, students found an online photo of their animal to import into Thinglink. Thinglink allows users to create an interactive image with hotspots that can be clicked for more information. 
The students created a Thinglink image of their animal incorporating 3 adaptations (hotspots) and telling why they were beneficial to their animal. Students then created a video about the animal’s habitat using Photobooth and a background they selected so they appeared to be standing in the animal’s habitat. The video also included a variety of “fun” interesting facts. Videos were uploaded to Vimeo so students could post them to their blogs. Next, the students used Audacity to record an audio description of the diet of their animal. Those audio files were uploaded to Blabberize along with another photo of their animal, so it appeared like the animal was talking about its diet. The final step was to create a poll that asks visitors a question about their animal. Students embedded the Thinglink, the video, the Blabberize, and the poll onto their blogs. All blog links were posted to one page for easy access. This project scores in the Approaching range of Research & Information Fluency. The students chose an animal they were interested in and used Duck Duck Go (Safe Internet) search engine to acquire their information. They also searched for and selected their own books from the library. The accuracy and reliability of the sources was discussed. To guide their research, we talked about what people might want to know about the animals and the class developed 3 categories that they needed to research. This project scores in the Developing range of Communication & Collaboration. Students did not work in groups to conduct their research, but they did collaborate to produce the final product using a variety of digital tools. Their blog posts are online for others to view and interact with outside the classroom. This project scores in the Developing range of Critical Thinking & Problem Solving. Students worked together to figure out what questions they had about their animals and what categories they wanted to learn more about. They collaborated to answer some of the essential questions that they came up with together. They determined what was important to include on their blogs and how to divide the information based on which web tool would best convey that information. This project scores in the Approaching range of Creativity & Innovation. The students chose their animal, the information, and the pictures they wanted to use. They were introduced to a variety of web tools and were able to choose 3 of them to use to present their animal information on their blogs. They created useful, interactive, and entertaining sites for other people to learn more about their animals. Posted in Comm/Collab - Dev, Creativity - App, Critical Thinking - Dev, Elementary School, Info Fluency - App, Project, Science Posted on 13 March 2013. This project requires students to research an inventor or scientist, create a multimedia presentation to share their research findings, design and create an invention or experiment in collaboration with other students to answer a specific question about helping an egg do something “eggstraordinary”, graph their trials/data, and write a story based on their “Eggs” adventure. Students were encouraged to use a variety of resources to research, develop, and present their projects. This project scores in the Approaching range of Research & Information Fluency. Students were challenged to use multiple digital resources to learn about an important scientist or inventor of their choice and create a digital multimedia presentation. 
They were provided with a list of possible websites to use as well as questions to answer about their person. They cited and evaluated the helpfulness of each website they used. For the experiment phase they conducted their own research by testing their designs and recording the data. This project scores in the Approaching range of Communication & Collaboration. Students chose their own groups according to teacher’s guidelines and expectations. As a group, students had to decide on one of six questions for the focus of their experiment. Groups had to communicate to plan what materials to bring in to create or design their final products, and they worked together to run trials and record data. At the end of their project they evaluated how well they worked together. This project scores in the Approaching range of Critical Thinking & Problem Solving. Working in groups, students had to choose an egg challenge (such as making it bounce, roll, float, drop, or hold weight). Then they had to design and create an invention that would solve the challenge. During the trial and error stage, groups had to generate new questions regarding the outcome of their “eggs,” if they were unsuccessful. Students were given the choice to present their findings using a variety of digital tools such as graphs which require them to interpret their data. At the end of the experiment, students were asked to individually reflect upon their experience throughout this process. This project scores in the Approaching range of Creativity & Innovation. Students were given many choices including the inventor they wanted to research, the digital tool they wanted to use, and the egg problem they wanted to solve. They were also encouraged to take risks with their invention and develop original ways to solve the problem. Their creative writing assignment was also open-ended to encourage original ideas. Finally they reflected on the creative process at the end. - Lesson Plan (Word) - Project Guidelines for Students (PDF) - Student Creative Writing Sample (Pages) - Student Comic Life Sample - Student Keynote Sample - Five photos (PNG) of student projects Posted in Comm/Collab - App, Creativity - App, Critical Thinking - App, Elementary School, Info Fluency - App, Language Arts, Math, Project, Science Posted on 11 March 2013. This lesson was designed to enhance cross curriculum skills. Oral Language, Reading, Writing, Science, and Technology SOLs were all met throughout this class project. The students chose an animal to research in the library by using the Internet, encyclopedias, and nonfiction text. They used a graphic organizer to record the facts they found. Each student then used those facts, in two lessons with our school’s ITRT, to create a slide presentation using Keynote or a comic using Comic Life. In the end, the students presented their finished projects to their peers in class, and they were published online. This lesson scores in the Developing range in Research & Information Fluency. Students chose an animal to research. Classroom teachers and the librarian helped students pick appropriate animals. Students learned about The Big6 Research method. The school librarian guided students through a Promethean Board lesson on organizing their research. Classroom teachers modeled how to complete the Animal Facts Graphic Organizer. Students used resources in the library to record facts about their animal on their graphic organizer. 
Classroom teachers and the school librarian monitored and supported students with finding facts and completing the Animal Facts Graphic Organizer. The students were required to find certain facts but they were also asked to find one “fun fact” of their choice. This project scores in the Developing range of Communication & Collaboration. Although students worked individually on this project, they did present their projects to the class, they evaluated their presentations using a rubric, and they published their work online for others to see outside of their classroom. This project scores in the Approaching range of Critical Thinking & Problem Solving. Students chose type of digital tool they thought would best convey their information (movie, slideshow, comic). They were also asked to solve one of two real-life problems: (1) Your animal is endangered and becoming extinct. How will you inform others about your animal. How will you encourage people to help? (2) If you were a zookeeper, how would you create a habitat for your animal to survive at your zoo? Finally, students evaluated their own work using a rubric. This project scores in the Approaching range of Creativity & Innovation. Students could choose whether to make a movie, slideshow, or comic to present their animal research. They added their own ideas to solve an authentic problem. They also reflected on the creative process through their self-evaluation rubric. Posted in Comm/Collab - Dev, Creativity - App, Critical Thinking - App, Elementary School, Info Fluency - Dev, Science Posted on 11 March 2013. After completing our unit on Virginia’s native people, students chose a current Indian tribe of Virginia to research in self-selected groups. This correlates with VS.2g: The student will demonstrate knowledge of the physical geography and native peoples, past and present, of Virginia by identifying and locating the current state-recognized tribes. They found answers to given essential questions through the use of printed materials and online resources. They also came up with at least three additional questions that they felt would be beneficial to answer in their presentation. The students had the option to create a Keynote, iMovie, or website to compile their research. If they felt they could best present their research in another way, they were given the opportunity to submit their ideas to the teacher for approval. Using the digital tool of their choosing, students created a resource for other students to use in learning about these Virginia tribes. The projects were published on the class blog after completion. This project scores in the Developing range in Research & Information Fluency. Students were given questions to guide their research, but they also came up with their own questions. They were provided with several books to use if they chose to do so. While two websites were recommended, the students were also given time to use the search engine “Go Duck Go” to do their own searches. This project scores in the Approaching range in Communication & Collaboration. Students were able to choose their own groups and the Indian tribe that they decided on together to research. They had a choice of three tools to compile their research. They also were required to do a reflection at the end of the project evaluating their involvement in the group and how the group worked together. This project scores in the Developing range in Critical Thinking & Problem Solving. 
Students were expected to answer a set of questions to guide their research. If they chose to do so, they could also add additional information that they thought was of value. To inform others about the tribe they researched, they chose the product they felt would best help them communicate their findings. This project scores in the Developing range in Creativity & Innovation. Students were asked to design a product that informs others about the tribe they researched. The teacher recommended three different types of products from which they could choose. However, if the students thought another way would help them better communicate their information, they could present it to the teacher to be approved. Posted in Comm/Collab - App, Creativity - Dev, Critical Thinking - Dev, Elementary School, Info Fluency - Dev, Social Studies
http://blogs.henrico.k12.va.us/21/author/dhclough/
1. Compute the mode, median, and mean for the following four sets of numbers:
a. 2, 7, 6, 5, 3, 8, 6, 4, 9, 7
b. 6, 2, 2, 5, 4, 2, 3, 4, 5, 6, 3
c. 2, 5, 8, 2, 8, 4, 2, 8, 1, 9, 9
d. 3, 5, 4, 8, 6, 9, 4, 43, 7, 2

To get the median, put all of the numbers in a set in increasing numerical order. Count how many numbers there are in each set. This is shown here:
a. 10 numbers: 2, 3, 4, 5, 6, 6, 7, 7, 8, 9
b. 11 numbers: 2, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6
c. 11 numbers: 1, 2, 2, 2, 4, 5, 8, 8, 8, 9, 9
d. 10 numbers: 2, 3, 4, 4, 5, 6, 7, 8, 9, 43

Now if there are an odd number of items in a set, add 1 to how many there are. Divide the result by 2. Now count this many items up from the bottom. The one you land on is the median. So where there are 11 items in a set, as in sets b. and c., adding 1 gives 12. Then dividing by 2 gives 6. Then count up from the bottom of the list (or down from the top of the list) and the median will be the 6th one. The median for b. is 4. The median for c. is 5.

If the number of items in a set is even, divide it by 2. Then count this far up from the bottom of the list. The median will be halfway between the value you landed on and the next value. For the numbers in set a. and set d., there are 10 items in the list, and 10 divided by 2 is 5, so the median is halfway between the 5th and 6th items, counting up from the bottom of the list. For set a. the median is halfway between 6 and 6, which is 6. For set d. the median is halfway between 5 and 6, which is 5.5. The results are shown in the table below.

            a        b        c        d
mode        6 & 7    2        2 & 8    4
median      6        4        5        5.5
mean        5.700    3.818    5.273    9.100
sd          2.214    1.537    3.197    12.115
q1          4        2.5      2        4
q2          6        4        5        5.5
q3          7        5        8        8
iqr         3        2.5      6        4
smallest    2        2        1        2
largest     9        6        9        43
range       7        4        8        41

Use this set of numbers for the following questions: 4, 3, 5, 4, 1, 2, 5, 4, 3, 4, 1, 2, 4, 3, 5, 2, 3, 5, 7, 6, 4, 1, 2, 4

2. Assume the numbers in the data are the answers you get when you ask people "How many magazines do you subscribe to?" What are the proper measures of central tendency and dispersion for this data? Calculate their values. Because the data would be counts of the number of magazines, it would be ratio scaled data. The strongest measures of central tendency and dispersion for ratio level data are the mean and standard deviation: mean = 3.5, sd = 1.588.

3. Assume the numbers in the data are the answers you get when you ask people "Name your favorite television program." Then you classify each program according to its thematic content. You use a system that has seven different classes (e.g., 1=science fiction, 2=comedy, 3=romance, 4=adventure, 5=news, ...). The numbers in the data indicate which category their favorite programs fall into. What are the proper measures of central tendency and dispersion for this data? Since the numbers stand for categories or types of television programs, the data is not likely to be scaled at the interval or ratio level. Since there is no apparent relation of order between the categories, the data isn't even ordinal. This only leaves the nominal level of scaling, in which the numbers are merely stand-ins for names of categories. With this data, the only measure that can be used for central tendency is the mode. The only measure of dispersion that can be used for this data is the information theoretic uncertainty measure.
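The central-tendency figures in the table above are easy to check. Below is a minimal Python sketch (our addition, not part of the original answer key) that recomputes the mode, median, and mean rows for the four sets; multimode (Python 3.8+) lists both values when a set is bimodal.

```python
from statistics import mean, median, multimode

sets = {
    "a": [2, 7, 6, 5, 3, 8, 6, 4, 9, 7],
    "b": [6, 2, 2, 5, 4, 2, 3, 4, 5, 6, 3],
    "c": [2, 5, 8, 2, 8, 4, 2, 8, 1, 9, 9],
    "d": [3, 5, 4, 8, 6, 9, 4, 43, 7, 2],
}

for name, values in sets.items():
    # multimode lists every value that occurs most often, e.g. [7, 6] for set a
    print(name, multimode(values), median(values), round(mean(values), 3))
```

Run as-is, this reproduces the mode, median, and mean rows of the table (means of 5.7, 3.818, 5.273, and 9.1, up to the number of decimals printed).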
4. Assume the numbers in the data are the answers you get when you ask people "What is your household's annual income? I'm going to read a list of possible ranges, and I want you to stop me when I read the range that describes your household's income." You then read the following list and record their answers:
1) below $10,000
2) between $10,000 and $15,000
3) between $15,000 and $20,000
4) between $20,000 and $30,000
5) between $30,000 and $45,000
6) between $45,000 and $60,000
7) above $60,000
What are the proper measures of central tendency and dispersion for this data? Calculate their values.

Because the numbers indicate which category individuals fall into, and because the categories are ordered from the lowest to the highest income levels, this data would be at least ordinal scaled. Since the categories are not all the same size, it is not scaled at the interval or ratio level. Finally, since "0" does not mean "no income at all," and since "2" does not mean "twice as much income" as "1", the data can't be ratio scaled. For ordinal data you can use both the median and the mode for central tendency. You can also use both the Inter Quartile Range (IQR) and the information theoretic uncertainty measure for dispersion. Since the median and IQR are stronger measures than the mode and the information theoretic uncertainty measure, they are the ones you should use.

To calculate the median, sort the numbers so they are in increasing numerical order from smallest to highest: 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 6, 7. Count the number of items in the list. There are 24. If there are an even number of items on the list, divide the number by 2. In this example, you get "12." Count this many items up from the bottom of the list. In this example, the 12th item is the first "4". The median will be halfway between the value of this item and the value of the next item. In this example, both of those numbers are 4, so the median will also be 4.
1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4 | 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 6, 7
If there are an odd number of items on the list (for example, 25), add 1 to how many there are. Divide the result by 2. Now count this many items up from the bottom. The one you land on is the median.

To calculate the IQR, repeat the procedure again to find the median of each half of the list you used to calculate the median. The median of the first half of the original list is the first quartile of the list. The median of the second half is the third quartile. The Inter Quartile Range is the difference between the first and third quartiles.
1, 1, 1, 2, 2, 2 | 2, 3, 3, 3, 3, 4 | 4, 4, 4, 4, 4, 4 | 5, 5, 5, 5, 6, 7

mode       4
median     4
mean       3.5
sd         1.588
smallest   1
largest    7
q1         2
q2         4
q3         4.5
iqr        2.5
range      6

5. Below are the final exam scores in percentages for students in a course on postmodernist approaches to analysis of individual differences in skiing preferences.

Males: 64.10 76.56 95.31 75.00 75.00 53.13 64.06 46.88 98.44 78.13 85.94 93.75 67.19 87.50 92.19 71.88 76.56 70.31 78.13 71.88 93.75 32.81 79.69 50.00 71.88 57.81 60.94 59.38 68.75 75.00

Females: 69.74 85.53 88.16 92.11 76.32 77.63 61.84 52.63 96.05 84.21 88.16 80.26 61.84 75.00 90.79 72.37 73.68 82.89 68.42 96.05 88.16 64.47 90.79 63.16 80.26 46.05 64.47 78.95 76.32 63.20

a. Which of the measures of central tendency are the most and least appropriate for this data? The mean and median are both appropriate for this data. Since the numbers are percentages, the data is scaled at the ratio level. The mode is almost useless because these are continuous variables and there is an infinite number of different possible values.
Also, the mode is the weakest of the three measures of central tendency; it doesn't take advantage of the actual size of the percentages or the relative sizes of the various numbers in the data.

b. Which tell you more about the relative performance of males and females on the exam? The median and the mean both tell you more than the mode. For one thing, the data for the males is bimodal -- there are two modes. Second, the mode is a "central" value only in the sense that it is the one that occurred more often than any other value. It is not central in the sense that it is the middle value rather than one of the extremely high or low ones. Finally, the mean and median both contain more of the information in the data than does the mode. Since they contain more information, they tell you more about the two groups of numbers.

c. Discuss the benefits and drawbacks of each measure of central tendency for this data. The mode is not influenced by extreme values. The mode is sensitive only to the most frequently occurring score; it is insensitive to all other scores. The mode is of little value for non-categorical (e.g., continuous) data; it is used almost exclusively for discrete variables. The median can be used for discrete or continuous variables. The median is not influenced by extreme values. The median is sensitive only to the value of the middle point or points; it is not sensitive to the values of all other points. The mean requires interval or ratio data. The mean is the preferred measure for interval or ratio data. The mean is generally not used for discrete variables. The mean is sensitive to all scores in a sample (every number in the data affects the mean), which makes it a more "powerful" measure than the median or mode. The mean's sensitivity to all scores also makes it sensitive to extreme values, which is why the median is used when there are extreme values.

d. Compute the range, interquartile range, and standard deviation. See the table below.

measure     males             females
mode        71.88 and 75.00   88.16
median      73.440            76.975
mean        72.398            76.317
sd          15.404            12.798
q1          64.060            65.458
q2          73.440            76.975
q3          79.690            87.503
IQR         15.230            22.045
smallest    32.810            46.050
largest     98.440            96.050
range       65.630            50.000

e. Discuss the benefits and drawbacks of each measure of dispersion for this data. The range is the difference between the highest and lowest values. Because of this dependence on the two most unusual values, the range doesn't tell much about the data. For example, it tells nothing about how far from the center typical values lie. The interquartile range (IQR) is the difference between the first and third quartiles in the data. If you remove the top 25% and the bottom 25% of all cases and then calculate the range of the remaining cases, you will get the IQR. While the IQR is more valuable than the range because it is not influenced as much by extreme values, it is more difficult to calculate, as it requires the data points to be rank ordered. Neither the range nor the interquartile range takes all of the values in your data into account. The range is determined by the two most extreme values and the IQR is determined by the lowest and highest values in the middle 50% of your data. The deviation score for an individual is the difference between the individual's score and the mean. The standard deviation, the square root of the variance, is the square root of the mean of the squared deviation scores. Roughly speaking, the standard deviation tells you how far away from the mean the typical person's score is.
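The deviation-score definition above translates directly into code. Here is a minimal Python sketch (ours, not from the original text); applied to the male and female score lists, it should reproduce the sd values in the table (15.404 and 12.798) up to rounding, assuming those were computed with the n - 1 formula.

```python
from math import sqrt

def sample_sd(scores):
    """Square root of the mean squared deviation score, with n - 1 in the denominator."""
    n = len(scores)
    m = sum(scores) / n
    deviations = [x - m for x in scores]   # deviation scores
    ss = sum(d * d for d in deviations)    # sum of squares
    return sqrt(ss / (n - 1))
```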
The standard deviation is the most commonly used measure of dispersion for interval or ratio level data. Like the variance and the mean, the standard deviation is sensitive to all scores.

6. Use the table of random numbers (Table 7 in Appendix B) for this question. Use the last two digits of the 5-digit numbers. Starting at the top of the second column, scan down and mark the numbers that are between 10 and 29, including 10 and 29. Do this until you get a total of 15 numbers. Write these 15 two-digit numbers on a piece of paper. Calculate the median, the mean, and the standard deviation for these numbers. Use the computational equation for standard deviation.

7. Analyze all four sets of numbers in Question 1 in terms of which of the measures of central tendency are the most and least appropriate. For each set of numbers, discuss the benefits and drawbacks of each measure of central tendency.

8. On a mid-term exam, the median score is 73 and the mean is 79. Which student's score is likely to be further away from the median: the one at the top of the class or the one at the bottom? Why? The one at the top is likely to be further away from the median, because the mean has been distorted (upwards) by an extreme score. You know the mean has been distorted upwards because the mean is higher than the median. Since half of the scores are above the median and half are below the median, and since the mean is higher than the median, there must be some scores that are a fairly long distance above the median. These extremely high scores distort the mean -- they pull it up above the median.

9. If the standard deviation of a sample is 5.3,
a. What is the variance? Since the standard deviation is the square root of the variance, you square the standard deviation in order to get variance: 5.3 × 5.3 = 28.09
b. What is the sum of the squares? The equation for standard deviation is this: s = sqrt( SS / (n - 1) ), where SS is the sum of squares. The equation for the sum of the squares is this: SS = Σ(X - M)², the sum of the squared deviation scores (M is the mean). You could write the equation for standard deviation like this: s = sqrt( Σ(X - M)² / (n - 1) ). You can get rid of the square root by squaring both sides: s² = SS / (n - 1). And you can get rid of the division by (n - 1) by multiplying both sides by that: s² × (n - 1) = SS. And now you see how to get the sum of squares from the standard deviation -- square it and multiply by (n - 1): 5.3 × 5.3 = 28.09, and 28.09 × (n - 1) = SS. Although you don't know what the sample size is, you do know that the sum of squares can be had by squaring the standard deviation and multiplying it by the sample size minus one.
c. What is the root mean square? The root mean square is another name for standard deviation. It is sometimes called the root mean square because it is the square root of the mean of the squared deviation scores. So, if the standard deviation is 5.3, so is the root mean square.

10. Compute the standard deviation, range, and interquartile range for the following data: 81.13 75.42 92.04 87.25 63.89 85.90 74.89 77.76 88.53
a. Remove the lowest score and repeat the calculations. Here are the results:

measure     all 9 scores   8 largest
mode        NA             NA
median      81.13          83.515
mean        80.757         82.865
sd          8.753          6.468
q1          75.42          77.175
q2          81.13          83.515
q3          87.25          87.57
IQR         11.83          10.395
smallest    63.89          74.89
largest     92.04          92.04
range       28.15          17.15

b. Which of the three measures changed the most? Why? The range changed the most because it is sensitive only to extreme values, one of which is the lowest score.
c. Which of the three measures changed the least? Why? The IQR changed the least because it is not at all sensitive to extreme values.
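The algebra in Question 9 and the recalculation in Question 10 are easy to check numerically. The following Python sketch is our own illustration, not part of the original answer key; it prints the standard deviation and range with and without the lowest score, matching the 8.753/28.15 and 6.468/17.15 entries in the table above.

```python
from statistics import stdev  # sample standard deviation (n - 1 in the denominator)

# Question 9: if s = 5.3, the variance is s**2 = 28.09 and SS = 28.09 * (n - 1)
s = 5.3
print("variance =", s ** 2)

# Question 10: drop the lowest score and recompute
scores = [81.13, 75.42, 92.04, 87.25, 63.89, 85.90, 74.89, 77.76, 88.53]
trimmed = sorted(scores)[1:]   # removes the lowest score, 63.89

for label, data in (("all 9 scores", scores), ("8 largest", trimmed)):
    print(label, round(stdev(data), 3), round(max(data) - min(data), 2))
```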
11. Multiply each of the nine numbers in Question 10 by a constant, say 0.4, and calculate the standard deviation. What is the effect on the standard deviation of multiplying the numbers by a constant? Try it with a different constant, say 1.3. What is the effect? What is the general pattern here?

See columns a, b, and c in the table below. The original numbers are in column a. The numbers multiplied by 0.4 and 1.3 are in columns b and c.

a        b (a × 0.4)   c (a × 1.3)   d (a − 50)   e (a − 63.89)
81.13    32.452        105.469       31.13        17.24
87.25    34.900        113.425       37.25        23.36
74.89    29.956        97.357        24.89        11.00
75.42    30.168        98.046        25.42        11.53
88.53    35.412        115.089       38.53        24.64
77.76    31.104        101.088       27.76        13.87
92.04    36.816        119.652       42.04        28.15
85.90    34.360        111.670       35.90        22.01
63.89    25.556        83.057        13.89        0.00

            a        a × 0.4   a × 1.3    a − 50    a − 63.89
median      81.13    32.452    105.469    31.13     17.24
mean        80.757   32.303    104.984    30.757    16.867
sd          8.753    3.501     11.378     8.753     8.753
q1          75.42    30.168    98.046     25.42     11.53
q2          81.13    32.452    105.469    31.13     17.24
q3          87.25    34.90     113.425    37.25     23.36
IQR         11.83    4.732     15.379     11.83     11.83
smallest    63.89    25.556    83.057     13.89     0.00
largest     92.04    36.816    119.652    42.04     28.15
range       28.15    11.26     36.595     28.15     28.15

You can see that multiplying all of the numbers in a set of data by a constant has an effect on the dispersion. If the constant is greater than 1.0, the multiplication spreads the original set of values out over a longer range, increasing the distance between values in the process. Since the values are more spread out, the dispersion will be higher. If the constant is less than 1.0, the multiplication squeezes the original set of values together over a smaller range, decreasing the distance between values in the process. Since the values are less spread out, the dispersion will be lower. This is apparent in all three measures of dispersion -- the range, the IQR, and the standard deviation. When all the numbers in the set are multiplied by 0.4, the range changes from 28.15 to 11.26. If you multiply 28.15 by 0.4, you get 11.26. When you multiply the numbers by 1.3, the range is also increased by a factor of 1.3 from 28.15 to 36.595. So multiplying the numbers in a set of data by a constant multiplies the range by the same amount. The standard deviation also changes in exactly the same way the range does. When you multiply the numbers by 0.4, the standard deviation decreases by a factor of 0.4 from 8.753 to 3.501. When you multiply the numbers by 1.3, the standard deviation increases by a factor of 1.3 from 8.753 to 11.378. The IQR also changes in exactly the same way the other two measures do. When you multiply the numbers by 0.4, the IQR decreases by a factor of 0.4 from 11.83 to 4.732. When you multiply the numbers by 1.3, the IQR increases by a factor of 1.3 from 11.83 to 15.379.

12. Subtract a constant, say 50.0, from each of the nine numbers in Question 11, and calculate the standard deviation. What is the effect on the standard deviation of subtracting a constant? Try it with a different constant, say 63.89. What is the effect? What is the general pattern here? Both of these subtractions have no effect on any of the measures of dispersion. The reason for this is that adding or subtracting a constant from all of the values has no effect on the distance between values and thus no effect on the dispersion. They are neither spread further apart nor squeezed closer together. See columns a, d, and e in the table above.

13. What is the nature of the sample data if s = 0 and n = 75?
If the standard deviation is zero, all 75 values must be exactly the same. There is no spread at all. The only time the standard deviation can be zero when the sample contains more than one member is when all of the deviation scores are zero. This can only happen if all of the values are the same as the mean.
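The general pattern in Questions 11 through 13 can be verified with a few lines of Python. This sketch is our own addition, not part of the original answer key; it scales and shifts the nine scores from Question 10 and prints the resulting standard deviations and ranges.

```python
from statistics import stdev

a = [81.13, 87.25, 74.89, 75.42, 88.53, 77.76, 92.04, 85.90, 63.89]

def report(label, xs):
    print(f"{label:10s} sd = {stdev(xs):7.3f}  range = {max(xs) - min(xs):7.3f}")

report("a", a)
report("a * 0.4", [x * 0.4 for x in a])      # dispersion shrinks by a factor of 0.4
report("a * 1.3", [x * 1.3 for x in a])      # dispersion grows by a factor of 1.3
report("a - 50", [x - 50.0 for x in a])      # subtracting a constant changes nothing
report("a - 63.89", [x - 63.89 for x in a])  # likewise
report("constant", [4.0] * 75)               # Question 13: all values equal, so s = 0
```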
http://www.sfu.ca/personal/archives/richards/Zen/Pages/Chap6.htm
In order to measure the characteristics of individual molecules, a mass spectrometer converts them to ions so that they can be moved about and manipulated by external electric and magnetic fields. The three essential functions of a mass spectrometer, and the associated components, are:
1. A small sample is ionized, usually to cations by loss of an electron (the ion source).
2. The ions are sorted and separated according to their mass and charge (the mass analyzer).
3. The separated ions are then measured, and the results displayed on a chart (the detector).
Because ions are very reactive and short-lived, their formation and manipulation must be conducted in a vacuum. Atmospheric pressure is around 760 torr (mm of mercury). The pressure under which ions may be handled is roughly 10^-5 to 10^-8 torr (less than a billionth of an atmosphere). Each of the three tasks listed above may be accomplished in different ways. In one common procedure, ionization is effected by a high energy beam of electrons, and ion separation is achieved by accelerating and focusing the ions in a beam, which is then bent by an external magnetic field. The ions are then detected electronically and the resulting information is stored and analyzed in a computer. A mass spectrometer operating in this fashion is outlined in the following diagram.

The heart of the spectrometer is the ion source. Here molecules of the sample (black dots) are bombarded by electrons (light blue lines) issuing from a heated filament. This is called an EI (electron-impact) source. Gases and volatile liquid samples are allowed to leak into the ion source from a reservoir (as shown). Non-volatile solids and liquids may be introduced directly. Cations formed by the electron bombardment (red dots) are pushed away by a charged repeller plate (anions are attracted to it), and accelerated toward other electrodes, having slits through which the ions pass as a beam. Some of these ions fragment into smaller cations and neutral fragments. A perpendicular magnetic field deflects the ion beam in an arc whose radius depends on the mass-to-charge ratio of each ion; for singly charged ions accelerated through the same potential, the radius grows with the square root of the mass, so lighter ions follow a tighter arc and are deflected more than heavier ions. By varying the strength of the magnetic field, ions of different mass can be focused progressively on a detector fixed at the end of a curved tube (also under a high vacuum).

When a high energy electron collides with a molecule it often ionizes it by knocking away one of the molecular electrons (either bonding or non-bonding). This leaves behind a molecular ion (colored red in the following diagram). Residual energy from the collision may cause the molecular ion to fragment into neutral pieces (colored green) and smaller fragment ions (colored pink and orange). The molecular ion is a radical cation, but the fragment ions may either be radical cations (pink) or carbocations (orange), depending on the nature of the neutral fragment.

2. The Nature of Mass Spectra

A mass spectrum will usually be presented as a vertical bar graph, in which each bar represents an ion having a specific mass-to-charge ratio (m/z) and the length of the bar indicates the relative abundance of the ion. The most intense ion is assigned an abundance of 100, and it is referred to as the base peak. Most of the ions formed in a mass spectrometer have a single charge, so the m/z value is equivalent to mass itself.
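As a small illustration of the base-peak convention just described, here is a Python sketch (ours; the m/z values and intensities are invented for the example) that rescales raw intensities so the strongest ion is reported as 100.

```python
def relative_abundances(peaks):
    """Rescale raw intensities so the base peak (most intense ion) = 100."""
    base = max(peaks.values())
    return {mz: round(100.0 * counts / base, 1) for mz, counts in peaks.items()}

# Hypothetical raw detector counts keyed by m/z
raw = {29: 8600, 28: 5100, 27: 3300, 44: 2400, 43: 1900}
print(relative_abundances(raw))   # m/z = 29 becomes the base peak at 100.0
```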
Modern mass spectrometers easily distinguish (resolve) ions differing by only a single atomic mass unit (amu), and thus provide completely accurate values for the molecular mass of a compound. The highest-mass ion in a spectrum is normally considered to be the molecular ion, and lower-mass ions are fragments from the molecular ion, assuming the sample is a single pure compound. The following diagram displays the mass spectra of three simple gaseous compounds, carbon dioxide, propane and cyclopropane. The molecules of these compounds are similar in size, CO2 and C3H8 both have a nominal mass of 44 amu, and C3H6 has a mass of 42 amu. The molecular ion is the strongest ion in the spectra of CO2 and C3H6, and it is moderately strong in propane. The unit mass resolution is readily apparent in these spectra (note the separation of ions having m/z=39, 40, 41 and 42 in the cyclopropane spectrum). Even though these compounds are very similar in size, it is a simple matter to identify them from their individual mass spectra. By clicking on each spectrum in turn, a partial fragmentation analysis and peak assignment will be displayed. Even with simple compounds like these, it should be noted that it is rarely possible to explain the origin of all the fragment ions in a spectrum. Also, the structure of most fragment ions is seldom known with certainty. Since a molecule of carbon dioxide is composed of only three atoms, its mass spectrum is very simple. The molecular ion is also the base peak, and the only fragment ions are CO (m/z=28) and O (m/z=16). The molecular ion of propane also has m/z=44, but it is not the most abundant ion in the spectrum. Cleavage of a carbon-carbon bond gives methyl and ethyl fragments, one of which is a carbocation and the other a radical. Both distributions are observed, but the larger ethyl cation (m/z=29) is the most abundant, possibly because its size affords greater charge dispersal. A similar bond cleavage in cyclopropane does not give two fragments, so the molecular ion is stronger than in propane, and is in fact responsible for the the base peak. Loss of a hydrogen atom, either before or after ring opening, produces the stable allyl cation (m/z=41). The third strongest ion in the spectrum has m/z=39 (C3H3). Its structure is uncertain, but two possibilities are shown in the diagram. The small m/z=39 ion in propane and the absence of a m/z=29 ion in cyclopropane are particularly significant in distinguishing these hydrocarbons. Most stable organic compounds have an even number of total electrons, reflecting the fact that electrons occupy atomic and molecular orbitals in pairs. When a single electron is removed from a molecule to give an ion, the total electron count becomes an odd number, and we refer to such ions as radical cations. The molecular ion in a mass spectrum is always a radical cation, but the fragment ions may either be even-electron cations or odd-electron radical cations, depending on the neutral fragment lost. The simplest and most common fragmentations are bond cleavages producing a neutral radical (odd number of electrons) and a cation having an even number of electrons. A less common fragmentation, in which an even-electron neutral fragment is lost, produces an odd-electron radical cation fragment ion. Fragment ions themselves may fragment further. As a rule, odd-electron ions may fragment either to odd or even-electron ions, but even-electron ions fragment only to other even-electron ions. 
The masses of molecular and fragment ions also reflect the electron count, depending on the number of nitrogen atoms in the species. For ions containing no nitrogen, odd-electron ions appear at even-numbered masses and even-electron ions at odd-numbered masses; for ions containing an odd number of nitrogen atoms, the opposite holds: odd-electron ions appear at odd-numbered masses and even-electron ions at even-numbered masses. This distinction is illustrated nicely by the following two examples. The unsaturated ketone, 4-methyl-3-pentene-2-one, on the left has no nitrogen so the mass of the molecular ion (m/z = 98) is an even number. Most of the fragment ions have odd-numbered masses, and therefore are even-electron cations. Diethylmethylamine, on the other hand, has one nitrogen and its molecular mass (m/z = 87) is an odd number. A majority of the fragment ions have even-numbered masses (ions at m/z = 30, 42, 56 & 58 are not labeled), and are even-electron nitrogen cations. The weak even-electron ions at m/z = 15 and 29 are due to methyl and ethyl cations (no nitrogen atoms). When non-bonded electron pairs are present in a molecule (e.g. on N or O), fragmentation pathways may sometimes be explained by assuming the missing electron is partially localized on that atom. A few such mechanisms are shown above. Bond cleavage generates a radical and a cation, and both fragments often share these roles, albeit unequally.

Since a mass spectrometer separates and detects ions of slightly different masses, it easily distinguishes different isotopes of a given element. This is manifested most dramatically for compounds containing bromine and chlorine, as illustrated by the following examples. Since molecules of bromine have only two atoms, the spectrum on the left will come as a surprise if a single atomic mass of 80 amu is assumed for Br. The five peaks in this spectrum demonstrate clearly that natural bromine consists of a nearly 50:50 mixture of isotopes having atomic masses of 79 and 81 amu respectively. Thus, the bromine molecule may be composed of two 79Br atoms (mass 158 amu), two 81Br atoms (mass 162 amu) or the more probable combination of 79Br-81Br (mass 160 amu). Fragmentation of Br2 to a bromine cation then gives rise to equal sized ion peaks at 79 and 81 amu. The center and right hand spectra show that chlorine is also composed of two isotopes, the more abundant having a mass of 35 amu, and the minor isotope a mass of 37 amu. The precise isotopic composition of chlorine and bromine is:
Chlorine: 75.77% 35Cl and 24.23% 37Cl
Bromine: 50.50% 79Br and 49.50% 81Br
The presence of chlorine or bromine in a molecule or ion is easily detected by noticing the intensity ratios of ions differing by 2 amu. In the case of methylene chloride, the molecular ion consists of three peaks at m/z = 84, 86 & 88 amu, and their diminishing intensities may be calculated from the natural abundances given above. Loss of a chlorine atom gives two isotopic fragment ions at m/z = 49 & 51 amu, clearly incorporating a single chlorine atom. Fluorine and iodine, by contrast, are monoisotopic, having masses of 19 amu and 127 amu respectively. It should be noted that the presence of halogen atoms in a molecule or fragment ion does not change the odd-even mass rules given above. Online calculators, such as one developed at Colby College, will predict the isotope clusters for different combinations of chlorine, bromine and other elements.
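A sketch of how such a prediction can be made (our own illustration, not the Colby application): treat each halogen position as an independent draw from the isotope abundances quoted above and sum the probabilities of combinations that share the same total mass.

```python
from itertools import product

CL = {35: 0.7577, 37: 0.2423}   # natural abundance of 35Cl / 37Cl
BR = {79: 0.5050, 81: 0.4950}   # natural abundance of 79Br / 81Br

def halogen_cluster(isotopes, count):
    """Relative intensities (base peak = 100) of the isotope cluster
    produced by `count` atoms of a single halogen."""
    cluster = {}
    for combo in product(isotopes, repeat=count):
        prob = 1.0
        for mass in combo:
            prob *= isotopes[mass]
        cluster[sum(combo)] = cluster.get(sum(combo), 0.0) + prob
    top = max(cluster.values())
    return {m: round(100 * p / top, 1) for m, p in sorted(cluster.items())}

print(halogen_cluster(CL, 2))  # two chlorines (e.g. CH2Cl2): M, M+2, M+4 near 100 : 64 : 10
print(halogen_cluster(BR, 2))  # Br2: the mixed 79Br-81Br peak is strongest, about 51 : 100 : 49
```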
Two other common elements having useful isotope signatures are carbon, 13C is 1.1% natural abundance, and sulfur, 33S and 34S are 0.76% and 4.22% natural abundance respectively. For example, the small m/z=99 amu peak in the spectrum of 4-methyl-3-pentene-2-one (above) is due to the presence of a single 13C atom in the molecular ion. Although less important in this respect, 15N and 18O also make small contributions to higher mass satellites of molecular ions incorporating these elements. The calculator on the right may be used to calculate the isotope contributions to ion abundances 1 and 2 amu greater than the molecular ion (M). Simply enter an appropriate subscript number to the right of each symbol, leaving those elements not present blank, and press the "Calculate" button. The numbers displayed in the M+1 and M+2 boxes are relative to M being set at 100%. 4. Fragmentation Patterns The fragmentation of molecular ions into an assortment of fragment ions is a mixed blessing. The nature of the fragments often provides a clue to the molecular structure, but if the molecular ion has a lifetime of less than a few microseconds it will not survive long enough to be observed. Without a molecular ion peak as a reference, the difficulty of interpreting a mass spectrum increases markedly. Fortunately, most organic compounds give mass spectra that include a molecular ion, and those that do not often respond successfully to the use of milder ionization conditions. Among simple organic compounds, the most stable molecular ions are those from aromatic rings, other conjugated pi-electron systems and cycloalkanes. Alcohols, ethers and highly branched alkanes generally show the greatest tendency toward fragmentation. The mass spectrum of dodecane on the right illustrates the behavior of an unbranched alkane. Since there are no heteroatoms in this molecule, there are no non-bonding valence shell electrons. Consequently, the radical cation character of the molecular ion (m/z = 170) is delocalized over all the covalent bonds. Fragmentation of C-C bonds occurs because they are usually weaker than C-H bonds, and this produces a mixture of alkyl radicals and alkyl carbocations. The positive charge commonly resides on the smaller fragment, so we see a homologous series of hexyl (m/z = 85), pentyl (m/z = 71), butyl (m/z = 57), propyl (m/z = 43), ethyl (m/z = 29) and methyl (m/z = 15) cations. These are accompanied by a set of corresponding alkenyl carbocations (e.g. m/z = 55, 41 &27) formed by loss of 2 H. All of the significant fragment ions in this spectrum are even-electron ions. In most alkane spectra the propyl and butyl ions are the most abundant. The presence of a functional group, particularly one having a heteroatom Y with non-bonding valence electrons (Y = N, O, S, X etc.), can dramatically alter the fragmentation pattern of a compound. This influence is thought to occur because of a "localization" of the radical cation component of the molecular ion on the heteroatom. After all, it is easier to remove (ionize) a non-bonding electron than one that is part of a covalent bond. By localizing the reactive moiety, certain fragmentation processes will be favored. These are summarized in the following diagram, where the green shaded box at the top displays examples of such "localized" molecular ions. The first two fragmentation paths lead to even-electron ions, and the elimination (path #3) gives an odd-electron ion. 
Note the use of different curved arrows to show single electron shifts compared with electron pair shifts. The charge distributions shown above are common, but for each cleavage process the charge may sometimes be carried by the other (neutral) species, and both fragment ions are observed. Of the three cleavage reactions described here, the alpha-cleavage is generally favored for nitrogen, oxygen and sulfur compounds. Indeed, in the previously displayed spectra of 4-methyl-3-pentene-2-one and N,N-diethylmethylamine the major fragment ions come from alpha-cleavages. Further examples of functional group influence on fragmentation are provided by a selection of additional compounds, and tables of common fragment ions and neutral species are useful references when assigning peaks.

The complexity of fragmentation patterns has led to mass spectra being used as "fingerprints" for identifying compounds. Environmental pollutants, pesticide residues on food, and controlled substance identification are but a few examples of this application. Extremely small samples of an unknown substance (a microgram or less) are sufficient for such analysis. The following mass spectrum of cocaine demonstrates how a forensic laboratory might determine the nature of an unknown street drug. Even though extensive fragmentation has occurred, many of the more abundant ions (identified by magenta numbers) can be rationalized by the three mechanisms shown above, and it should be noted that all are even-electron ions. The m/z = 42 ion might be any or all of the following: C3H6, C2H2O or C2H4N. A precise assignment could be made from a high-resolution m/z value (next section). Odd-electron fragment ions are often formed by characteristic rearrangements in which stable neutral fragments are lost. Mechanisms for some of these rearrangements have been identified by following the course of isotopically labeled molecular ions.

5. High Resolution Mass Spectrometry

In assigning mass values to atoms and molecules, we have assumed integral values for isotopic masses. However, accurate measurements show that this is not strictly true. Because the strong nuclear forces that bind the components of an atomic nucleus together vary, the actual mass of a given isotope deviates from its nominal integer by a small but characteristic amount (remember E = mc²). Thus, relative to 12C at 12.0000, the isotopic mass of 16O is 15.9949 amu (not 16) and 14N is 14.0031 amu (not 14). The exact mass of a molecule can therefore be calculated from its elemental composition; only the mass of the most abundant isotope of each element, relative to 12C (12.0000), is used for these calculations. For compounds of chlorine and bromine, increments of 1.997 and 1.998 respectively must be added for each halogen to arrive at the higher mass isotope values.
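As a concrete example of such an exact-mass calculation, here is a short Python sketch (ours, not the original page's calculator). The 16O and 14N values are the ones quoted above; the remaining monoisotopic masses are standard values added here for completeness.

```python
# Monoisotopic masses (amu), relative to 12C = 12.0000
MONOISOTOPIC = {
    "C": 12.0000,
    "H": 1.00783,    # 1H (added here; not quoted in the text)
    "N": 14.0031,    # 14N, as quoted above
    "O": 15.9949,    # 16O, as quoted above
    "Cl": 34.96885,  # 35Cl (added here)
    "Br": 78.91834,  # 79Br (added here)
}

def exact_mass(formula):
    """Exact (monoisotopic) mass for a composition given as {element: count}."""
    return sum(MONOISOTOPIC[el] * n for el, n in formula.items())

# 4-methyl-3-pentene-2-one is C6H10O: nominal mass 98, exact mass about 98.0732
print(round(exact_mass({"C": 6, "H": 10, "O": 1}), 4))
```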
http://www2.chemistry.msu.edu/faculty/reusch/VirtTxtJml/Spectrpy/MassSpec/masspec1.htm
Lecture Notes 2 - Bird Flight I |Origin of Flight Exactly how birds acquired the ability to fly has baffled scientists for years. Archaeopteryx provided a starting point for speculation. Built like a dinosaur, but with wings, scientists guessed at how a hypothetical ancestor might have taken flight. Some scientists support the arboreal hypothesis (e.g., Feduccia 1996) and suggest that the ancestors of Archaeopteryx lived in trees and glided into flapping flight (Figure to the right). But others argue that the claws of Archaeopteryx weren't suited to climbing. So, others support the cursorial hypothesis (e.g., Burgers and Chiappe 1999) and suggest that these ancestors used their long, powerful legs to run fast with their arms outstretched, and were at some point lifted up by air currents and carried into flapping flight (Figure to the bottom right). Studying living animals can throw light on their evolutionary past. Ken Dial (2003) of the Flight Lab at the University of Montana noticed the ability of gamebird chicks to escape danger by scrambling up vertical surfaces. The chicks first run very fast, flapping their immature, partially feathered wings, frantically creating enough momentum to run up a vertical surface to safety. Could this survival instinct be the origin of flight? And finally, James Carey, a UC Davis demographer and ecologist, has proposed that the evolution of bird flight is linked to parental care (Carey and Adams 2001). Whatever the origins, dinosaurs, and birds, eventually took to the air. Images & text used with permission. |Dinosaurs' flapping led to flight? The wing-assisted incline running hypothesis -- The feathered forelimbs of small, two-legged dinosaurs may have helped them run up hills or other inclines to escape predators. This half running, half flapping may have evolved into an ability to fly. Dial (2003) reported findings suggesting that the ability to fly evolved gradually. Feathers may have first protected animals from cold & wet weather, then been used out of necessity when something with big teeth was chasing them. Even before their wings develop enough to fly, some living birds use them to improve traction and gain speed. Dial studied birds, like partridges, capable of only limited flight. Energetically, "It's a lot cheaper to run than fly," Dial said. So these baby birds, with big feet & powerful legs, use them in combination with their wings, first to stay balanced and grounded, then to take on steeper and steeper inclines. Using this "wing assisted incline running," Chukar Partridges can negotiate 50 degree inclines right after hatching, 60 degree slopes at 4 days old, and at 20 days, can perform a vertical ascent. "The wings help them stick to the ground," said Dial. The wings only come into play on steep angles because at about a 50 - 60 degree incline the birds start slipping. Then they begin a head to tail movement, like a reptile, that pushes them to the ground to enhance traction. "They use their wings like spoilers on a race car, to give their feet better traction," he said. Use of this wing-assisted running doesn't stop when the birds are old enough to fly. Adult birds often choose the running and flapping option instead of flying because it is more energy efficient. - Written by Marsha Walton, CNN| Chukar Partridge flapping & climbing Jesus-Christ Hypothesis. 
Because all fossils of Archaeopteryx come from marine sediments, suggesting a coral-reef setting, Videler (2005) suggests that, like the Jesus Christ lizards [Basiliscus spp.; (a)], Archaeopteryx and its ancestors were 'Jesus-Christ dinosaurs' running over water to escape from predators and travel between islands in the coral lagoons of central Europe 150 million years ago. At first, both thrust and weight support were provided by the feet slapping against the water. Later, the wings gradually took over some of the weight support, with every step toward increased lift providing a fitness advantage. Biplane wing planform and flight performance of a feathered dinosaur (Chatterjee and Templin 2007) -- Microraptor gui, a four-winged dromaeosaur from the Early Cretaceous of China, provides strong evidence for an arboreal-gliding origin of avian flight. It possessed asymmetric flight feathers not only on the manus but also on the pes. A previously published reconstruction shows that the hindwing of Microraptor supported by a laterally extended leg would have formed a second pair of wings in tetrapteryx fashion. However, this wing design conflicts with known theropod limb joints that entail a parasagittal posture of the hindlimb. Here, we offer an alternative planform of the hindwing of Microraptor that is concordant with its feather orientation for producing lift and normal theropod hindlimb posture. In this reconstruction, the wings of Microraptor could have resembled a staggered biplane configuration during flight, where the forewing formed the dorsal wing and the metatarsal wing formed the ventral one. The contour feathers on the tibia were positioned posteriorly, oriented in a vertical plane for streamlining that would reduce the drag considerably. Leg feathers are present in many fossil dromaeosaurs, early birds, and living raptors, and they play an important role in flight during catching and carrying prey. A computer simulation of the flight performance of Microraptor suggests that its biplane wings were adapted for undulatory "phugoid" gliding (see below) between trees, where the horizontal feathered tail offered additional lift and stability and controlled pitch. Like the Wright 1903 Flyer, Microraptor, a gliding relative of early birds, took to the air with two sets of wings. Phugoid gliding is a type of flight where a plane (or Microraptor gui) pitches up and climbs, and then pitches down and descends, accompanied by speeding up and slowing down as it goes "uphill" and "downhill (Source: www.centennialofflight.gov). Theropod size and avian flight -- An 80-million-year-old dinosaur fossil unearthed in the Gobi Desert of Mongolia demonstrates that miniaturization, long thought to be a hallmark of bird origins and a necessary precursor of flight, occurred progressively in primitive dinosaurs. "This study alters our understanding of the evolution of birds by suggesting that flight is a 'spin-off' adaptation of a much earlier trend toward miniaturization in certain dinosaur lineages," said H. R. Lane (NSF). "Paleontologists thought that miniaturization occurred in the earliest birds, which then facilitated the origin of flight," said Alan Turner (American Museum of Natural History). "Now the evidence shows that this decrease in body size occurred well before the origin of birds and that the dinosaur ancestors of birds were, in a sense, pre-adapted for flight." Because most dinosaurs were too massive to fly, miniaturization is considered crucial to the origin of flight. 
To date, fossil evidence of miniaturization and other characteristics leading to flight has been sparse. While other dinosaurs of the Cretaceous Period were increasing in size, this newly discovered dinosaur (Mahakala omnogovae) represented a step towards miniaturization necessary for flight. Other groups that evolved flight, such as pterosaurs and bats, all evolved from small ancestors. With the discovery of Mahakala, Turner et al. (2007) showed that this miniaturization occurred much earlier. Mahakala was nearly full-grown when it died, measuring less than two feet in length and weighing about 24 ounces. In the broader context of the dinosaur family tree, Mahakala shows that dinosaurs' size decreased progressively as they evolved toward birds. "Many of the animals that were thought to look like giant lizards only a few years ago are now known to have been feathered, to have brooded their nests, to have been active, and to have had many other defining bird characteristics, like wishbones and three forward-facing toes," said Mark Norell (American Museum of Natural History). "We can now add that the precursors of birds were also small, primitive members of a lineage that later grew much larger--long after their divergence from the evolutionary stem leading to birds."

Phylogeny and body size change within paravian theropods. A temporally calibrated cladogram depicting the phylogenetic position of Mahakala and paravian body size through time and across phylogeny is shown. Silhouettes are to scale, illustrating the relative magnitude of body size differences. Left-facing silhouettes near open circles show reconstructed ancestral body sizes. Ancestral paravian body size is estimated to be 600 to 700 g and 64 to 70 cm long. The ancestral deinonychosaur, troodontid, and dromaeosaurid body size is estimated at 700 g. Large numbers (1, 2, 3, and 4) indicate the four major body increase trends in Deinonychosauria. Ma, Maastrichtian; Ca, Campanian; Sa, Santonian; Co, Coniacian; Tu, Turonian; Ce, Cenomanian; Ab, Albian; Ap, Aptian; Bar, Barremian; Hau, Hauterivian; Va, Valanginian; Ber, Berriasian; Ti, Tithonian; Ki, Kimmeridgian. Ma, million years ago (From: Turner et al. 2007).

The Berlin Archaeopteryx. In the earliest cast of the main slab (A), long hindlimb feathers are visible (B) (Longrich 2006).

Berlin Archaeopteryx. A, Plumage of the right hindlimb. B, Schematic drawing. Abbreviations: cov, covert feathers; prt, pretibial feathers; pst, shafts of post-tibial feathers; pub, pubis; ti, tibia (Longrich 2006).

Berlin Archaeopteryx. A, Reconstruction. B, Life restoration. The hindlimbs have been abducted to 90° so as to show the area of the leg plumage. The area of the hindlimbs was measured distal to the body contour and proximal to the ankle (Longrich 2006).

Case closed?? Support for the arboreal hypothesis -- Feathers cover the legs of the Berlin specimen of Archaeopteryx lithographica, extending from the cranial surface of the tibia and the caudal margins of both tibia and femur. These feathers exhibit features of flight feathers rather than contour feathers, including vane asymmetry, curved shafts, and a self-stabilizing overlap pattern. Many of these features facilitate lift generation in the wings and tail of birds, suggesting that the hindlimbs acted as airfoils. Longrich (2006) presented a new reconstruction of Archaeopteryx where the hindlimbs formed approximately 12% of total airfoil area.
Depending upon their orientation, the hindlimbs could have reduced stall speed by up to 6% and turning radius by up to 12%. The presence of the “four-winged” planform in both Archaeopteryx and basal Dromaeosauridae indicates that their common ancestor used both forelimbs and hindlimbs to generate lift. The presence of flight feathers on the hindlimbs is inconsistent with the cursorial hypothesis, the Jesus-Christ hypothesis, and the wing-assisted incline running hypothesis; in these scenarios, such a specialization would serve no purpose, and would impede locomotion. The evidence presented by Longrich (2006), therefore, supports an arboreal origin of avian flight, and suggests that arboreal parachuting and gliding preceded the evolution of avian flight. Evolution of flight: a summary -- Although the timing remains unclear, the first step toward the evolution of flight involved a reduction in size, with their ancestors decreasing in size during the Triassic and well before the evolution of birds and flight. Endothermy must have evolved sometime between the early Late Triassic, when dinosaurs first appeared in the fossil record and the evolution of modern birds whose ancestors first appeared in the early Late Jurassic. More specifically, coelurosaurs, a diverse group of dinosaurs that likely included the ancestors of birds, exhibited substantial and sustained morphological transformation and this rapid evolution of skeletal diversity may indicate rapidly changing selection pressures as a result of radiation into new ecological niches. The evolution of endothermy may have been more likely in lineages, such as the smaller coelurosaurs, exposed to new selection pressures rather than in more conservative, larger-bodied, lineages (Schluter 2001). For example, the body temperatures of small dinosaurs (< 100 kg) that lived at mid-latitudes (45-55°) or higher would have been well below 30°C during winter if they were crocodile-like ectotherms (Seebacher 2003). Selection pressures for morphological and physiological thermoregulatory adaptations would likely have been strongest in such dinosaurs. Of course, without insulation, the thermoregulatory advantages gained from elevated resting metabolic rates would be limited. Most skin impressions from dinosaurs indicate the presence of naked skin (Sumida and Brochu 2000), except for integumentary structures in coelurosaurs that may have afforded thermal insulation (Chen et al. 1998). Although other dinosaurs may have possessed integumentary structures with insulatory qualities, current evidence suggests that these evolved only in coelurosaurs. The earliest known feathers stem from the Late Jurassic, so if those feathers possessed insulating qualities, true endothermy may have evolved sometime after that (Seebacher 2003). By the time Archaeopteryx arrived on the scene, therefore, birds obviously had the basic features needed for flight – relatively small with feathers and, if not truly endothermic, then, at minimum, an elevated metabolism. The question then is how the ancestors of Archaeopteryx, with the necessary characteristics, first took to the air. Several hypotheses have been proposed. 
Primary among them are the arboreal hypothesis (e.g., Feduccia 1996), with the ancestors of Archaeopteryx living in trees (or at least climbing into trees on a regular basis) and initially gliding before developing flapping flight, and the cursorial hypothesis (e.g., Burgers and Chiappe 1999), with these ancestors using long, powerful legs to run fast with their arms (wings) outstretched and, eventually, developing sufficient lift to take flight. Two additional hypotheses include the WAIR (wing-assisted incline running) hypothesis and the ‘Jesus-Christ’ hypothesis. Dial (2003) noticed the ability of young Chukars to escape danger by scrambling up inclined surfaces. The chicks first run very fast, flapping their rather small, partially feathered wings to create enough momentum to run up an inclined surface to safety. The ancestors of birds may have used proto-wings in a similar fashion, with wings eventually evolving to the point of permitting not only running up inclined surfaces but, for an animal running across the ground, flight. Because all fossils of Archaeopteryx come from marine sediments, suggesting a coral-reef setting, Videler (2005) suggested that, like the Jesus Christ lizards (Basiliscus spp.), Archaeopteryx and its ancestors were 'Jesus-Christ dinosaurs' running over water to escape from predators and travel between islands in the coral lagoons of central Europe 150 million years ago. At first, both thrust and weight support were provided by the feet slapping against the water. Later, the wings gradually took over some of the weight support, with every step toward increased lift providing a fitness advantage.

There is currently no clear consensus in support of any of these hypotheses for the origin of bird flight. However, a four-winged dromaeosaur (Microraptor gui) from the Early Cretaceous of China provides evidence for an arboreal-gliding origin of avian flight. It had asymmetric flight feathers not only on the forelimb, but on the hindlimb as well. Chatterjee and Templin (2007) proposed that the wings of Microraptor could have resembled a staggered biplane configuration during flight, where the forewing formed the dorsal wing and the hindwing formed the ventral one. The contour feathers on the tibia of the hindlimb were positioned posteriorly, oriented in a vertical plane for streamlining that would reduce the drag considerably. Leg feathers are present in many fossil dromaeosaurs, early birds, and living raptors, and they play an important role in flight during catching and carrying prey. A computer simulation of the flight performance of Microraptor suggested that its biplane wings were adapted for undulatory "phugoid" gliding between trees, where the horizontal feathered tail offered additional lift and stability and controlled pitch. Thus, Microraptor, a gliding relative of early birds, apparently took to the air with two sets of wings.

In further support of the arboreal hypothesis, feathers also cover the legs of the Berlin specimen of Archaeopteryx lithographica, extending from the cranial surface of the tibia and the caudal margins of both tibia and femur. These feathers exhibit features of flight feathers rather than contour feathers, including vane asymmetry, curved shafts, and a self-stabilizing overlap pattern. Many of these features facilitate lift generation in the wings and tail of birds, suggesting that the hindlimbs acted as airfoils.
Longrich (2006) presented a new reconstruction of Archaeopteryx where the hindlimbs formed approximately 12% of total airfoil area. Depending upon their orientation, the hindlimbs could have reduced stall speed by up to 6% and turning radius by up to 12%. The presence of “four-wings” in both Archaeopteryx and basal Dromaeosauridae suggests that their common ancestor used both forelimbs and hindlimbs to generate lift. In addition, the presence of flight feathers on the hindlimbs is inconsistent with the cursorial hypothesis and the Jesus-Christ hypothesis because flight feathers on the hindlimbs would seemingly limit running speed. The evidence presented by Longrich (2006), therefore, supports an arboreal origin of avian flight, and suggests that arboreal parachuting and gliding likely preceded the evolution of avian flight just as it apparently did in the evolution of flight in bats (Speakman 2001) and pterosaurs (Naish and Martill 2003). Archaeopteryx (Source: Nick Longrich) Although the presence of flight feathers on the hindlimbs would seem to support the arboreal hypothesis for the origin of flight, such feathers do not necessarily indicate that Archaeopteryx and its immediate ancestors were strictly tree-dwellers. Many present-day birds spend time both in trees and on the ground and Archaeopteryx likely did the same. With hindlimb feathers, as well as flight muscles less developed than those of present day birds, Archaeopteryx may have found it difficult, if not impossible, to take off directly from the ground. So, to take flight, Archaeopteryx and its ancestors likely sought elevated perches like trees for ‘launching.’ In doing so, they may very well have used wing-assisted incline running just like some present-day birds. For example, several petrels are known to climb trees to launch themselves into the air (del Hoyo et al. 1992), and, for some seabirds, the presence of ‘take-off trees’ is important in selection of breeding habitat (Sullivan and Wilson 2001). Neurological evidence supports the idea that Archaeopteryx was a rather accomplished flyer. Reconstruction of the braincase and inner ear of Archaeopteryx revealed strong similarities to present-day birds, with areas of the brain involved in hearing and vision enlarged and an enlarged forebrain that would enhance the rapid integration of sensory information required in a flying animal (Alonso et al. 2004). The Life of Birds by David Attenborough - The Mastery of Flight Flight requires lift, which occurs because wings move air downwards. Lift is created only when air strikes a wing at an angle (i.e., the angle of attack). When the leading edge of a wing is higher than the trailing edge, the bottom of the wing 'pushes' the air forward and creates an area of high pressure below and ahead of the wing. At the same time, air is deflected downward so, because of Newton's Third Law of Motion (for every action there is an equal and opposite reaction), the wing is deflected upward. Both the upper and lower surfaces of the wing deflect the air. The upper surface deflects air down because the airflow “sticks” to the wing surface and follows the tilted wing (the “Coanda effect”). Because of inertia, air moving over the top of the wing tends to keep moving in a straight line while, simultaneously, atmospheric pressure tends to force air against the top of the wing. The inertia, however, keeps the air moving over the wing from 'pushing' against the top of the wing with as much force as it would if the wing wasn't moving. 
This creates an area of lower pressure above the wing. Because air tends to move from areas of high pressure to areas of low pressure, air tends to move from the high pressure area below and ahead of the wing to the lower pressure area above and behind the wing. This air moves, therefore, toward the trailing edge of the wing, or the same direction as the airflow created by the wing's motion. As a result, air flows faster over the top of the wing. Because air under the wing is dragged slightly in the direction of travel, it moves slower than does the air moving over the top of the wing. Thus, air is flowing slower beneath the bottom of the wing. The faster-moving air going over the top of the wing exerts less pressure than the slower-moving air under the wing and, as a result, the wing is pushed upwards by the difference in pressure between the top and the bottom (the Bernoulli effect). So, both the development of low pressure above the wing (Bernoulli's Principle) and the wing's reaction to the deflected air underneath it (Newton's third Law) contribute to the total lift force generated. Note that air, both above AND below the wing, is deflected downward. Source: An excellent article about lift ("Lift doesn't suck") by Roger Long. Why does the slower moving air generate more pressure against the wing than the faster moving air? In calm air, the molecules are moving randomly in all directions. However, when air begins to move, most (but not all) molecules are moving in the same direction. The faster the air moves, the greater the number of air molecules moving in the same direction. So, air moving a bit slower will have more molecules moving in other directions. In the case of a wing, because air under the wing is moving a bit slower than air over the wing, more air molecules will be striking the bottom of the wing than will be striking the top of the wing. Clear example of how wings deflect air downward. Notice the trough formed in the clouds. How do airfoil shape, camber, and angle of attack influence lift? Click on this image! When the curvature over the top becomes greater by increasing the angle of attack (below), the amount of lift generated increases because the force with which the wing is pushed upward increases. Eventually, however, if the angle of attack becomes too great, the flow separates off the wing and less lift is generated. The result is stalling. Birds also tend to stall at low speeds because slower moving air may not move smoothly over the wing. If the angle of attack is too great, air flow over the top of the wing may become more turbulent & the result is less lift. Angle of attack decreases with increasing speed. Angle of attack during two wingbeats of a Ringed Turtle-Dove (Streptopelia risoria) flying at 1 meter/sec (A), 5 meters/sec (B), 9 meters/sec (C), and 17 meters/sec (D). Angle of attack at low speeds peaked at 52 degrees (proximal wing) and 43 degrees (distal wing), much greater than those commonly used by aircraft (0-15 degrees). At faster speeds, mean angle of attack decreased to 9-14 degrees (proximal wing) and -5-14 degrees (distal wing), within the range employed by aircraft. Shaded areas indicate downstroke; solid line = distal wing & dashed line = proximal wing (Hedrick et al. 2002). At low speeds (such as during take-off & landing), birds can maintain smooth air flow over the wing (and, therefore, maintain lift) by using the alula (also called the bastard wing). The alula is formed by feathers (usually 3 or 4) attached to the first digit. 
When these feathers are elevated (above right & below right), they keep air moving smoothly over the wing & help a bird maintain lift. At increasing angles of attack, an eddy starts to propagate from the trailing edge towards the leading edge of the wing. As a result, air flowing over the top of the wing separates from the upper surface and lift is lost. However, when coverts are lifted upward by the eddy, they prevent the spread of the eddy and work as 'eddy-flaps.' The 'covert eddy-flaps', by preventing the spread of the eddy toward the leading edge of the wing, help maintain lift (i.e., prevent stalling) at high angles of attack, e.g., when taking off or landing.

Eoalulavis hoyasi. Top, fluorescence-induced ultraviolet photo of the specimen before preparation. Bottom, reconstruction. A - alula, PR - primary remiges, and SR - secondary remiges (Sanz and Ortega 2002).

| The fossilized remains of a tiny bird provide evidence that birds flew as nimbly 115 million years ago as their descendants do today. The fossilized bird, Eoalulavis hoyasi, was found in a limestone quarry in Spain (Sanz et al. 1996). About the size of a goldfinch, the bird had an alula, or bastard wing, that would have helped it stay aloft at slow speeds. Eoalulavis is the most primitive bird known with an alula. Archaeopteryx probably flapped and glided, but did not have an alula. Eoalulavis provides evidence that by 30 million years after Archaeopteryx, at least one group of early birds had developed the alula. Eoalulavis hoyasi, which means "dawn bird with a bastard wing from Las Hoyas," was discovered at a site where a freshwater lake existed millions of years ago. The bird may have hunted by wading in shallow water the way plovers and other shorebirds do today.

Shoulder stability and the evolution of flight -- In modern birds, the acrocoracohumeral ligament helps balance the forces acting on the shoulder joint during flight (Baier et al. 2006). To find out if this ligament played the same shoulder-stabilizing role in primitive animals, Baier and his colleagues looked to the alligator. Alligators are close relatives of birds and both are archosaurs, the “ruling reptiles” that appeared on the planet some 250 million years ago and evolved into the dinosaurs that dominated during the Mesozoic Era. So to understand the sweep of evolution, the alligator was a great starting place. In the lab, three alligators were put on motorized treadmills and X-ray videos were made. The video was used to make a 3D computer animation that showed the precise positioning of the shoulder as the animal walked. They found that alligators use muscles – not ligaments – to do the hard work of supporting the shoulder. Then Baier studied the skeleton of Archaeopteryx lithographica, and even traveled to Beijing to examine the fossilized remains of Confuciusornis, Sinornithoides youngi and Sinornithosaurus millenii, close relatives of modern birds recently discovered in China. If the acrocoracohumeral ligament was critical to the origin of flight, Baier expected to find evidence of it in Archaeopteryx. Surprisingly, however, the new ligament-based force balance system appears to have evolved more gradually in Mesozoic fliers. “What this means is that there were refinements over time in the flight apparatus of birds,” Baier said. “Our work also suggests that when early birds flew, they balanced their shoulders differently than birds do today. And so they could have flown differently. Some scientists think they glided down from trees or flapped off the ground. Our approach of looking at this force balance system can help us test these theories.”

|Of course, a bird moving through the air is opposed by friction & this is called drag.
The types of drag acting on birds are parasitic drag, pressure (or induced) drag, & friction (or profile) drag. Parasitic drag is caused by friction between a bird’s body and the air (and is termed parasitic because the body does not generate any lift). Induced drag occurs when the air flow separates from the surface of a wing, while friction drag is due to the friction between the air and bird moving through the air. Friction drag is minimized by a wing's thin leading edge (wings 'slice' through the air). Induced drag occurs at low speeds and at higher speeds as, at wing tips, air moves from the area of high pressure (under the wing) to the area of low pressure (top of the wing). As wings move through the air, this curling action causes spirals (vortices) of air (see photo of continuous vortices to the right) which can disrupt the smooth flow of air over a wing (and reduce lift).| A 'smoke angel' created after flares were released and caused by wingtip vortices (Photo source: US Air Force). Bird tails & flight -- Most birds have rather short triangular tails when spread. In flight, the tail is influenced by the time-varying wake of flapping wings and the flow over the body. It is reasonable to assume that body, wings and tail morphology have evolved in concert. Modelling the interaction between the wings and tail suggests that the induced drag of the wing–tail combination is lower than that for the wings alone. A tail thus enables the bird to have wings that are optimized for cruising speed (with the tail furled to minimize drag) and, at low speeds, the spread tail reduces induced drag during manoeuvring and turning flight. Observations show that tails are maximally spread at low speeds and then become furled increasingly with increasing speed (Hedenström 2002). Figure to the left. Flow visualization around mounted wingless starling bodies using the smoke-wire technique in a wind tunnel at 9 ms−1. (a) The bird with intact tail and covert feathers; (b) tail feathers protruding beyond ventral coverts are trimmed to the same length as coverts; (c) tail feathers, ventral and dorsal covert feathers removed. The height of the wake increases from (a) to (c). The dorsal boundary layer also becomes increasingly turbulent in (b) and (c) compared with the intact tail-body configuration in (a). From: Hedenström (2002). |(A) Depictions of the vortex-ring and continuous-vortex gaits. (B) Cross-sectional view of the wing profile. Lift produced during flapping provides weight support (upward force) and thrust (horizontal force). In the vortex-ring gait, lift is produced only during the downstroke, providing positive upward force and forward thrust. In the continuous-vortex gait, lift is produced during both the upstroke and the downstroke. The downstroke produces a positive upward force and forward thrust; the upstroke produces a positive upward force and rearward thrust. Partial flexion of the wing during the upstroke reduces the magnitude of the rearward thrust to less than that of the forward thrust produced during the downstroke, providing net positive thrust per wingbeat (From Hedrick et al. 2002).| |Birds are known to employ two different gaits in flapping flight, a vortex-ring gait in slow flight and a continuous-vortex gait in fast flight. In the vortex ring gait, the upstroke is aerodynamically passive (there is no bound circulation during this phase, and hence no trailing vortex), and the wings flex and move close to the body to minimize drag. 
In the continuous vortex gait (where each wingtip sheds a separate vortex trail during both the upstroke and downstroke), the wings are aerodynamically active throughout (i.e., lift is generated both during the downstroke and the upstroke), while the wings remain near-planar throughout and deform only by flexure at the wrist. Hedrick et al. (2002) studied the use of these gaits over a wide range of speeds in Cockatiels and Ringed Turtle-doves trained to fly in a wind tunnel. Despite differences in wing shape and wing loading, both species shifted from a vortex-ring to a continuous-vortex gait at a speed of 7 meters/sec. They found that the shift from a vortex-ring to a continuous-vortex gait depended on sufficient forward velocity to provide airflow over the wing during the upstroke similar to that during the downstroke. This shift in flight gait appeared to reflect the need to minimize drag and produce forward thrust in order to fly at high speed.| Flow visualization images by helium-bubble multi-flash photography (top) and sketch of vortex wake (below) as reconstructed by stereophotogrammetry, for the vortex ring gait of a slow-flying Rock Pigeon (Columba livia; left), and for the continuous vortex gait of a European Kestrel (Falco tinnunculus; right) in cruising flight (Rayner and Gordon 1997). The amount of drag varies with a bird's mass (increased mass = increased friction drag), shape, & speed (increased speed = increased induced drag at the wing tips), and with a wing's surface area & shape. Increased streamlining (e.g., no trailing legs and extended head) reduces drag (Pennycuick et al. 1996). As described below, some wing shapes help to reduce induced drag. Wing shapes vary substantially among birds: Skeletal elements of the wing of five species of birds scaled so the carpometacarpi are of equal length (Dial 1992). Theoretical wings that illustrate extremes of pointedness (shift in wingtip toward the leading edge) and convexity (decrease in acuteness of the wingtip). (a) rounded (low aspect ratio) and (b) pointed (high aspect ratio) wings; (c) concave and (d) convex wings (From: Lockwood et al. 1998). Distribution of species in terms of wing pointedness and convexity. Each point represents one species. a. tern, b. duck, c. pigeon, d. gull, e. magpie, f. buzzard (soaring hawk), and g. sparrowhawk (accipiter) (From: Lockwood et al. 1998). Aspect ratio affects the relative magnitude of induced and profile drag; if mass, wing area, and other wing shape parameters remain constant, a long, thin high-aspect ratio wing reduces the cost of flight and extends range. However, high aspect ratio is not necessarily associated with high speed (favored by smaller wings). Elliptical wings (low aspect ratio) can maximize thrust from flapping, whereas as more pointed wing (high-speed) with a sharp wingtip minimizes wing weight and wing inertia. Short wings must be flapped at high frequency to provide sufficient thrust. So, relatively short, pointed wings allow rapid wing-beats with reduced inertia and that translates into greater speed (e.g., shorebirds, auks, and ducks). More rounded (convex) wings produce more lift toward the wingtip (where the wing moves faster) and are particularly effective for birds that fly at slow speeds (e.g., taking off from the ground) or need high levels of acceleration. Many small passerines often fly slowly or in 'cluttered' habitats, or need rapid acceleration to escape predators. The same is true for birds like accipiters and corvids (crows and jays; Lockwood et al. 
1998).

Laysan Albatross wing (Source: http://www.ups.edu/biology/museum/wingphotos.html) Masked Booby in flight

|Whistle through the wing - Birds molt for a variety of reasons. Molting regulates body temperature, keeps feathers neat and waterproof and allows seasonal changes in appearance for mating or migration. However, generating new feathers uses extra energy; staying warm with less plumage uses extra energy, and flying with smaller, work-in-progress wings requires extra energy. So, not all birds molt in the same way. Ducks, swans and geese, for example, shed all their flight feathers at once and are flightless until replacements have grown. Most other birds, however, lose and renew their feathers according to a continuous, pre-programmed sequence. This sequential molting gives rise to a range of temporary feather gaps that seem to reduce take-off speed, take-off angle and level flight speed and to impede predator evasion by raising a bird's minimum turning radius. Anders Hedenstrom and Shigeru Sunada of Cambridge University estimated how the aerodynamics of flight are affected by molting (Hedenstrom & Sunada 1999). They estimated drag and lift by analyzing the fluid dynamics of symmetrical gaps in flat, rectangular model wings of various width-to-length (aspect) ratios, at a fixed angle with respect to air flow – a system that reasonably approximates a bird in gliding, but not flapping, flight. Although the effects were small, Hedenstrom and Sunada concluded that both feather gap size and position affect flight performance. Large gaps, and gaps in the middle of the wing, impede aerodynamic efficiency more than small, wing-tip gaps. They also found that the detrimental effect of molt gaps increases with increasing aspect ratio. In other words, a bird with short, broad wings, like a vulture, won't miss a few feathers as much as one with long, narrow wings, like an albatross. "This is of great ecological significance," they muse, "as it could help explain why large birds show relatively slow rates of molting that are associated with rather small gaps." -- Sara Abdulla, Nature Science Update||

Wilson's Storm Petrel Photo by Brian Patteson Chimney Swift wing (Source: http://www.ups.edu/biology/museum/wingphotos.html) Falcons in flight! Check this video (click on the falcon). Peregrine Falcon with a camera mounted on its back!

|How pigeons give falcons the slip -- A Peregrine Falcon dive-bombing at several hundred miles an hour to knock a pigeon out of the sky would seem to be a study in concentration. At those speeds, attention must be paid. But even a falcon in hot pursuit can become distracted. And what distracts it is a patch of white on the rump of an otherwise blue-gray pigeon. "The brain can be primed by a conspicuous thing," said Alberto Palleroni. The falcon, he said, fixates on the conspicuous thing -- the white patch -- and doesn't notice the pigeon starting to turn away and escape. "In effect, it's a kind of a card trick or a ruse" on the part of the pigeon, Palleroni said. Palleroni et al. (2005) observed more than 1,800 falcon attacks on wild pigeons over seven years. They recorded the plumage types among the pigeons and noticed that while birds with white rump patches made up 20% of the pigeon population, very few were captured by the falcons. When a Peregrine Falcon attacks a pigeon, it plunges at speeds greater than 200 miles an hour, levels off and comes upon the pigeon from behind, punching it with what amounts to a closed fist.
At those speeds even a grazing blow kills the pigeon; the falcon then circles back and picks it up. The only way the slower-flying pigeon can escape is by dipping a wing, rolling and veering off. If the falcon is distracted by the white patch, it won't notice the dipping of the wing (which, being blue-gray, blends with the landscape) until it's too late. Plumage color in pigeons is an independently heritable trait, Palleroni said, meaning it is not tied to selection involving sexual or other traits. So it is highly likely that the white rump feathers are an anti-predator adaptation to high-speed attacks. Not bad for a bird that many people disdain. "The feral pigeon is an amazing balance of adaptations and success," Palleroni said. - Henry Fountain, New York Times

Golden Eagle in flight Philippine Eagle (currently critically endangered)

Another important factor that influences a bird's flying ability is wing loading - the weight (or mass) of a bird divided by wing area (e.g., mass in grams divided by wing area in square centimeters). Birds with low wing loading need less power to sustain flight. Birds considered to be the 'best' flyers, such as swallows & swifts, have lower wing loading values than other birds.

|The Flight Strategy of Magnificent Frigatebirds -- Frigatebirds cannot land on the sea because their feathers are not waterproof. If they did land, they would find it even harder to take off again because their legs are too short. Despite this, frigatebirds are perfectly suited for an aerial life over the sea because they have the lowest wing-loading (large wing area & low body mass) of any bird. Weimerskirch et al. (2003) investigated the movements of Magnificent Frigatebirds (Fregata magnificens) while foraging at sea off the coast of French Guiana. Because they are very light in comparison to their wing surface, frigatebirds can ride ascending air currents to altitudes of up to 2,500 meters. Then they glide downward, taking advantage of the next rising current. This flight strategy, which limits the bird's physical efforts, is the same as that used by migratory birds during long flights over land. Migratory birds, however, avoid flying over the sea due to a lack of thermals, while frigatebirds fly over the sea. As it turns out, ascending air currents are found over the sea only in tropical regions where the waters are warm enough to create such currents on a continuous basis. Frigatebirds can therefore fly night and day using this technique. To investigate the movements of Magnificent Frigatebirds, Weimerskirch et al. (2003) fitted the birds with satellite transmitters and altimeters, which allowed them to observe that the birds only occasionally come close to the sea surface to catch prey. They catch flying fish or squid driven above the surface by underwater predators like schools of tuna or dolphins. Identifying such feeding opportunities, which are very rare, requires long hours of flight at high altitudes. Frigatebirds rarely feed their young, which consequently grow very slowly. The species is, however, well-adapted: it has a low reproductive rate and parent birds care for their young for over one year, the longest period of parental care of any bird.| Check this short video of soaring frigatebirds.

Aspect ratio vs. wing loading index in some birds, airplanes, a hang-glider, a butterfly, and a maple seed. The numbers after various flying objects refer to aspect ratio. Fm, Fregata magnificens (Magnificent Frigatebird); Ga, Gallirallus australis (Common name: Weka; an endemic New Zealand bird in the rail family) (From: Norberg 2002).
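To put rough numbers on the wing loading and aspect ratio ideas discussed here, the short Python sketch below computes both quantities; the bird masses, wing areas, and spans are illustrative placeholders of my own, not measurements from the studies cited on this page.

```python
# Illustrative sketch: wing loading and aspect ratio.
# The bird values below are rough, made-up placeholders for demonstration only.

def wing_loading(mass_g: float, wing_area_cm2: float) -> float:
    """Wing loading as grams of body mass per square centimeter of wing area."""
    return mass_g / wing_area_cm2

def aspect_ratio(wingspan_cm: float, wing_area_cm2: float) -> float:
    """Aspect ratio = wingspan squared divided by wing area (dimensionless)."""
    return wingspan_cm ** 2 / wing_area_cm2

# Hypothetical example birds (placeholder numbers):
birds = {
    "swallow-like":     {"mass_g": 20,   "area_cm2": 130,  "span_cm": 33},
    "frigatebird-like": {"mass_g": 1500, "area_cm2": 4400, "span_cm": 220},
    "loon-like":        {"mass_g": 4000, "area_cm2": 1400, "span_cm": 130},
}

for name, b in birds.items():
    wl = wing_loading(b["mass_g"], b["area_cm2"])
    ar = aspect_ratio(b["span_cm"], b["area_cm2"])
    print(f"{name:16s} wing loading = {wl:.2f} g/cm^2, aspect ratio = {ar:.1f}")
```

In this toy comparison, low wing loading with a high aspect ratio (the frigatebird-like case) lines up with cheap soaring flight, while high wing loading (the loon-like case) goes with the fast flight and running take-offs described below.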
Different combinations of wing loading and aspect ratio permit particular flight modes and foraging strategies. Species with long wings and high aspect ratios also have low wing loadings, particularly those with low body mass, and their flight is inexpensive, e.g., many seabirds, swifts, and swallows. Birds with high wing loading and short wings, but still with high aspect ratios, are adapted to fast and rather inexpensive flight (short wings reduce profile power that is large in fast flight), e.g., loons, mergansers, geese, swans, ducks, and auks. Birds flying close to or among vegetation, e.g., flycatchers, tend to have low aspect ratios that contribute to high induced drag, but their low mass and wing loading reduce flight costs. The very low aspect ratios of many smaller birds that occupy densely-vegetated habitats, e.g., gallinaceous birds, mean that the energetic cost of flight is expensive, so these species spend much of their time walking. Birds with higher wing loading, e.g., penguins, are flightless (Norberg 2002). Flight styles -- Based on differences in aspect ratios and wing loading (Rayner 1988; see figure below), flight styles can also be categorized as either specialized or non-specialized. The non-specialists have average aspect ratios and average wing loading and are excellent flyers (capable of long flights and with good maneuverability) that typically use flapping flight. The non-specialists can be further subdivided, based on aspect ratio and speed, as slow non-specialists and fast non-specialists. In the slow category would be most passerines (Passeriformes), pelicans (Pelicaniformes), herons, egrets, ibises, and storks (Ciconiiformes), pigeons and doves (Columbiformes), cuckoos (Cuculiformes), most owls (Strigiformes), trogons (Trogoniformes), most birds in the order Gruiformes (e.g., gallinules, rails, and bustards), mousebirds (Coliiformes), woodpeckers (Piciformes), and parrots (Psittaciformes). Fast non-specialists include many falcons (Falconidae), gulls (Larinae), and storm-petrels (Hydrobatidae). Birds with morphological attributes (aspect ratio and wing loading) that differ (beyond one or two standard deviations) from those of ‘typical’ birds exhibit specialized flight styles (Rayner 1988). Among these specialized styles are: Approximate centroids of major groups of birds relative to aspect ratios and wing loading (From: Rayner 1988). Bird Flight Speeds (m/s) Plotted in Relation to Body Mass (kg) and Wing Loading (N/m2) for 138 Species of Six Main Monophyletic Group Bird flight speeds -- Alerstam et al. (2007) examined the cruising speeds of 138 different species of migrating birds in flapping flight using tracking radar. Mean airspeeds among the 138 species ranged between 8 and 23 m/s (or about 18 to 51 mph). Birds of prey, songbirds, swifts, gulls, terns, and herons had flight speeds in the lower part of this range, while pigeons, some of the waders, divers, swans, geese, and ducks were fast flyers in the range 15–20 m/s (33 - 45 mph). Cormorants, cranes, and skuas were among the species flying at intermediary speeds, about 15 m/s. The diving ducks reached the fastest mean speeds, with several species exceeding 20 m/s (and up to 23 m/s). An important factor in explaining variation in flight speed was phylogenetic group; species of the same group tended to fly at similar characteristic speeds. 
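As a quick check on the conversions quoted above, the small sketch below converts the reported cruising airspeeds from m/s to mph using the standard factor of about 2.237 mph per m/s.

```python
# Convert the cruising airspeeds quoted above from m/s to mph.
MPH_PER_MS = 2.23694  # 1 m/s = 2.23694 miles per hour

def ms_to_mph(speed_ms: float) -> float:
    return speed_ms * MPH_PER_MS

for v in (8, 15, 20, 23):
    print(f"{v:2d} m/s = {ms_to_mph(v):5.1f} mph")
# 8 m/s is about 17.9 mph and 23 m/s about 51.4 mph,
# matching the "about 18 to 51 mph" range given in the text.
```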
Depending on their ecological lifestyle and foraging, birds are adapted to different aspects of flight performance, e.g., speed, agility, lift generation, escape, take-off, and energetic cost of flight. These adaptations are likely to have implications for the flight apparatus (anatomy, physiology, and muscle operation) and the flight behavior that may constrain the cruising flight speed. Species flying at comparatively slow cruising speeds frequently use thermal soaring (raptors and storks), are adapted for hunting and load carrying (raptors), or for take-off and landing in dense vegetation (herons). Associated with these flight habits, they have a lower ratio of elevator (supracoracoideus) to depressor (pectoralis) flight muscle (particularly low among birds of prey) compared with shorebirds and waterfowl. Alerstam et al. (2007) suggested that functional differences in flight apparatus and musculature among birds of different life and flight styles (differences often associated with evolutionary origin) have a significant influence on a bird's performance and speed in sustained cruising flight.

Altitude vs. time showing rapid descents during migratory flights as recorded by radar. (A) Barn Swallow, (B) Yellow Wagtail, (C) Reed Warbler, (D) Yellow Wagtail, (E) Meadow Pipit, and (F) Yellow Wagtail.

|Diving speeds -- Hedenstrom and Liechti (2001) used radar to track the flights of migrating birds as they descended from their cruising altitudes after crossing the Mediterranean Sea. Dive angles were as great as 83.5 degrees and the maximum speed recorded was 53.7 meters/sec (or about 120 miles/hour). Larger birds can attain even greater speeds, with estimates of the top speed of Peregrine Falcons as high as 89 - 157 meters/sec (or about 200-350 miles/hour). Although such estimates may be correct, their accuracy is unknown because the speed of a diving falcon is difficult to measure. The required instrumentation is complex, and the dive is a brief, rare event that takes place at unpredictable places and times (Tucker 1998).|

The high wing loading of birds like grebes, loons (check Looney Lift-Off), and swans (see Tundra Swan below) means that it's more difficult for them to generate sufficient lift to take off. That's why these birds often run along the surface of a lake for some distance before taking flight. They must build up enough speed to generate enough lift to get their relatively heavy bodies into the air! Want to see a Laysan Albatross taking flight?? Check this video! Canada Geese taking off (slow motion) Swans taking off

|Take-off! -- Initiating flight is challenging, and considerable effort has focused on understanding the energetics and aerodynamics of take-off for both machines and animals. Available evidence suggests that birds maximize their initial flight velocity using leg thrust rather than wing flapping (e.g., see the drawings of a European Starling taking off from the ground below). The smallest birds, hummingbirds, are unique in their ability to perform sustained hovering but have small hindlimbs that could hinder generation of high leg thrust. During take-off by hummingbirds, Tobalske et al. (2004) measured hindlimb forces on a perch mounted with strain gauges and filmed wingbeat kinematics with high-speed video. Whereas other birds obtain 80–90% of their initial flight velocity using leg thrust, the leg contribution in hummingbirds was 59% during normal take-off. Unlike other species, hummingbirds beat their wings several times as they thrust using their hindlimbs.
In a phylogenetic context, these results show that reduced body and hindlimb size in hummingbirds limits their peak acceleration during leg thrust and, ultimately, their take-off velocity. Previously, the influence of motivational state on take-off flight performance had not been investigated for any bird. Tobalske et al. (2004) studied the full range of motivational states by testing performance as the birds took off: (1) to initiate flight autonomously, (2) to escape a startling stimulus, or (3) to aggressively chase a conspecific away from a feeder. Motivation affected performance. Escape and aggressive take-off featured decreased hindlimb contribution (46% and 47%, respectively) and increased flight velocity. When escaping, hummingbirds shortened their body movement prior to onset of leg thrust and began beating their wings earlier and at higher frequency. Thus, hummingbirds are capable of modulating their leg and wingbeat kinetics to increase take-off velocity.|

European Starling taking off from the ground. Time notations (milliseconds) are relative to the defined start of take-off (vertical force > 105% of body weight). Key events: wings begin unfolding (73 ms) and start of downstroke (108 ms). From: Earls (2000).

|Landing - Birds must usually be much more precise when landing than an airplane pilot; often landing on a branch rather than a runway. During landing, birds increase the angle of attack of their wings until they stall. This decreases both speed and lift. Birds also spread and lower their tails, with the tail increasing drag & acting like a brake. Finally, legs and feet are extended for landing. Click on the Raven to the right for a cool animation . . . . (Hint: After viewing the animation, left click & hold the round cursor at the bottom; you can move it and examine more carefully what's happening during landing). Also, check this slow-motion video of a pigeon landing on a branch and this one of a Barn Owl landing.|

Tree Swallow landing Photo by Anupam Pal & used with his permission Click on the photo to see a short video of a Rock Pigeon landing in slow motion. Bald Eagle landing Eagle Owl landing

|Leading-edge vortex lifts swifts -- How do birds fly up to a branch and land smoothly and precisely? It turns out that they may use a completely different kind of lift -- one that not only works at slow speeds, but even helps birds brake to a stop. Using a model of a wing in water containing particles lit with a laser, Videler et al. (2004) discovered how Common Swifts (Apus apus) create lift with a "leading-edge vortex" (LEV). Think of an LEV as a horizontal tornado that forms above a swept-back wing as it cuts through the air. The vortex is a low-pressure zone. Like the low-pressure zone formed above conventional wings, it generates lift. Until this study, it had been seen in insects, but not birds. Birds have two-part wings. The proximal "arm wing" is rounded on front, humped on top, and sharp on the back -- just like most airplane wings. Further away, the "hand wing" is flatter on top and extremely sharp on the front. The hand wing resembles the wing of a fighter plane, and it is also often swept back -- angled -- toward the rear. Wings on some high-performance jets can change angle to alter the leading-edge vortex. Wings that are nearly straight out create more lift. Swept-back wings create more drag (air friction). Acrobatic birds may also take advantage of the LEV; changing wing angle gives them the ratio of lift and drag they need for flying and snatching insects in mid-air.
The LEV not only creates lift, especially at slow speeds, but also confers another benefit that helps the swift perform insectivorous aerobatics. While conventional lift is chiefly an upward force, the LEV can also produce drag, which allows sudden steering. "The LEV can be used for controlling flight," says Videler. "It's very suited for that because there is no time delay, the forces are produced instantaneously. That's very useful if you want to maneuver very quickly." -- Courtesy of the University of Wisconsin Board of Regents Swifts hunt in the air, catching flying insects on the wing. To snag its prey, a swift has to be able to fly fast and make very tight turns, just like a jet fighter (From: Müller and Lentink 2004). When gliding, a Common Swift shows a torpedo-shaped body. Its arm-wing (close to its body) has a rounded leading edge. The bird's long, slender hand-wing has a much sharper profile. The inset shows the feathers at the hand-wing's leading edge. Alerstam, T., M. Rosén, J. Bäckman, P. G. P. Ericson, and O. Hellgren. 2007. Flight speeds among bird species: allometric and phylogenetic effects. PLoS Biology 5: e197. Alonso, P. D., A. C. Milner, R. A. Ketcham, M. J. Cookson and T. B. Rowe. 2004. The avian nature of the brain and inner ear of Archaeopteryx. Nature 430: 666 - 669.Baier, D. B., S. M. Gatesy, and F. A. Jenkins. 2006. A critical ligamentous mechanism in the evolution of avian flight. Science online publication 17 December 2006. Burgers, P. and L. M. Chiappe. 1999. The wings of Archaeopteryx as a primary thrust generator. Nature 399: 60-62. Carey, J.R. and J. Adams. 2001. The preadaptive role of parental care in the evolution of avian flight. Archaeopteryx 19: 97 - 108. Chatterjee, S. and R. J. Templin. 2007. Biplane wing planform and flight performance of the feathered dinosaur Microraptor gui. Proceedings of the National Academy of Science, online early - Jan. 2007. Chen P., Z. Dong, and S. Zhen. 1998. An exceptionally well-preserved theropod dinosaur from the Yixian Formation of China. Nature. 391:147–152. del Hoyo, J., A. Elliott, and J. Sargatal (eds.). 1992. Handbook of birds of the world, volume 1. Lynx Edicions, Barcelona, Spain. Dial, K. P. 1992. Avian forelimb muscles and nonsteady flight: can birds fly without using the muscles in their wings? Auk 109: 874-885. Dial, K. P. 2003. Wing-assisted incline running and the evolution of flight. Science 299:402-404. Earls, K. D. 2000. Kinematics and mechanics of ground take-off in the starling (Sturnus vulgaris) and the quail (Coturnix coturnix). Journal of Experimental Biology 203:725-739. Feduccia, A. 1996. The origin and evolution of birds. Yale Univ. Press, New Haven. Hedenström, A. 2002. Aerodynamics, evolution and ecology of avian flight. Trends in Ecology and Evolution 17: 415-422. Hedenström, A. and F. Liechti. 2001. Field estimates of body drag coefficient on the basis of dives in passerine birds. Journal of Experimental Biology 204: 1167-1175. Hedenstrom, A. and S. Sunada. 1999. On the aerodynamics of moult gaps in birds. Journal of Experimental Biology 202:67-76. Hedrick, T. L., B. W. Tobalske, and A. A. Biewener. 2002.Estimates of circulation and gait change based on a three-dimensional kinematic analysis of flight in cockatiels (Nymphicus hollandicus) and Ringed Turtle-doves (Streptopelia risoria). Journal of Experimental Biology 205:1389-1409. Lockwood, R., J. P. Swaddle, and J. M. V. Rayner. 1998. Avian wingtip shape reconsidered: wingtip shape indices and morphological adaptations to migration. 
Journal of Avian Biology 29: 273-292. Longrich, N. 2006. Structure and function of hindlimb feathers in Archaeopteryx lithographica. Paleobiology 32: 417-431. Müller, U. K. and D. Lentink. 2004. Turning on a dime. Science 306: 1899-1900. Naish, D. and D. M. Martill. 2003. Pterosaurs – a successful invasion of prehistoric skies. Biologist 50: 213-216. Norberg, U. M. L. 2002. Structure, form, and function of flight in engineering and the living world. Journal of Morphology 252: 52-81. Palleroni, A., C. T. Miller, M. Hauser, and P. Marler. 2005. Predation: prey plumage adaptation against falcon attack. Nature 434: 973-974. Pennycuick, C. J., M. Klaassen, A. Kvist, and A. Lindström. 1996. Wingbeat frequency and the body drag anomaly: wind-tunnel observations on a thrush nightingale (Luscinia luscinia) and a teal (Anas crecca). Journal of Experimental Biology 199: 2757-2765. Rayner, J. M. V. 1988. Form and function in avian flight. Current Ornithology 5: 1-66. Sanz, J. L., L. M. Chiappe, P. Perez-Moreno, A. D. Buscalioni, J. J. Moratalla, F. Ortega, and F. J. Payata-Ariza. 1996. An Early Cretaceous bird from Spain and its implications for the evolution of avian flight. Nature 382: 442-445. Sanz, J. L. and F. Ortega. 2002. The birds from Las Hoyas. Science Progress 85: 113-130. Schluter, D. 2001. Ecology and the origin of species. Trends in Ecology and Evolution 16: 372-380. Seebacher, F. 2003. Dinosaur body temperatures: the occurrence of endothermy and ectothermy. Paleobiology 29: 105-122. Speakman, J. R. 2001. The evolution of flight and echolocation in bats: another leap in the dark. Mammal Review 31: 111-130. Sullivan, W. and K.-J. Wilson. 2001. Differences in habitat selection between Chatham Petrels (Pterodroma axillaris) and Broad-billed Prions (Pachyptila vittata): implications for management of burrow competition. New Zealand Journal of Ecology 25: 65-69. Sumida, S. S. and C. A. Brochu. 2000. Phylogenetic context for the origin of feathers. American Zoologist 40: 486-503. Videler, J. J., E. J. Stamhuis, and G. D. E. Povel. 2004. Leading-edge vortex lifts swifts. Science 306: 1960-1962. Weimerskirch, H., O. Chastel, C. Barbraud, and O. Tostain. 2003. Frigatebirds ride high on thermals. Nature 421: 333-334.
http://www.people.eku.edu/ritchisong/554notes2.html
REVIEW – PROPORTIONAL REASONING

A ratio is a comparison between two numbers measured in the same units. A ratio can be expressed in three ways as shown below: Ratios, like fractions, can be simplified. For example, the ratio 150 : 15 can also be expressed as 10 : 1. Notice that the numerator of the fraction is larger than the denominator. This can be common with ratios. If two ratios are equivalent (equal), the first (top) term of each ratio compares to the second (bottom) term in an identical manner. You can represent this equivalence in the two ratios here: An equation showing equivalent ratios is called a proportion.

Cross Multiply and Divide

When two fractions are equal to each other, any unknown numerator or denominator can be found. The following example shows the process. Solution: Cross multiply means multiply the numbers across the equals sign (the arrow). The divide part means divide that result by the number opposite the unknown ( x ) as shown below. This gives the result x = 3 × 2.1 ÷ 4 = 1.575. In other words, if two ratios are equal, you can cross multiply and then divide by the number opposite the unknown to find any missing term.

Two figures are said to be similar figures if they have the same shape but are different sizes. A diagram drawn to scale from another diagram makes two similar figures. Also, an enlargement or a reduction of a photograph, when reproduced to scale, produces similar figures. Corresponding angles are two angles that occupy the same relative position on similar figures. Corresponding sides are two sides that occupy the same relative position in similar figures. When we use the term “relative position,” you must remember that the one figure might be turned compared to the other figure. It is necessary to arrange the two figures so they look the same before deciding which angles or sides correspond. When labelling figures, strings of capital letters in alphabetical order are used. The order of the letters tells you which sides and angles correspond. The two quadrilaterals are similar. Because ABCD is similar to WXYZ, we can use a symbol “~” which means “is similar to.” So ABCD ~ WXYZ.

DETERMINING SIDES IN SIMILAR FIGURES

When working with the length of sides in similar figures, because the figures are always a reduction or enlargement of each other, the ratio of the corresponding sides is always the same. What this means is that by using a proportion, you can determine the lengths of all the sides in both figures. Example 1: The two figures below are similar. Find the lengths of the sides of the smaller figure. Set up proportions using BC and GH as those two sides define the ratio. For this example, make sure the sides from the big figure are always on the top and the sides from the small figure are always on the bottom. The lengths of the smaller figure are: FG = 6 in., HI = 7 in., IJ = 4 in., and JF = 3 in.

DETERMINING ANGLES IN SIMILAR FIGURES

Since corresponding angles in similar figures must be equal, the only difficulty with determining the angle measures is making sure that the figures are arranged so they look the same. Sometimes this will already be done for you. But other times, you must carefully look at this arrangement. Example: If ΔRST is similar to ΔLMN, and the angle measures for ΔLMN are as listed below, what are the angle measures for the angles in ΔRST? Solution: Determine which angles correspond, and those angle measures are equal.

ASK YOUR TEACHER FOR UNIT QUIZ 1.

SCALE FACTOR IN SIMILAR FIGURES

When figures are enlarged or reduced, this is often done by a scale factor. A scale factor is the ratio of a side in one figure compared to the corresponding side in the other figure.
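Before moving on to scale factors, here is a minimal sketch of the cross-multiply-and-divide rule described above; the function name is my own, and the sample numbers echo the x = 3 × 2.1 ÷ 4 example.

```python
# Solve a proportion for one unknown term using cross multiply and divide.

def solve_proportion(a: float, b: float, c: float) -> float:
    """Return x such that x / c = a / b: cross multiply (a * c), then divide by b."""
    return a * c / b

# Matches the worked result above: x / 2.1 = 3 / 4, so x = 3 * 2.1 / 4
x = solve_proportion(3, 4, 2.1)
print(x)  # 1.575
```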
Earlier in this unit, we used the ratio of two corresponding sides in a proportion to calculate other sides. The difference when using a scale factor is that the ratio is always compared to 1. So a proportion is not strictly necessary when the scale factor is 1 : some number, e.g., 1:500. Usually the scale factor is stated as a single number: for example, the scale factor is 1.5, or the scale factor is one quarter. Whether dealing with an enlargement or a reduction, the process of solving the problem is the same. To solve, multiply the original lengths by the scale factor to produce the scaled lengths.

Example 1: A tissue has dimensions of 9 cm by 10 cm. The company that makes the tissues wants to increase the dimensions of the tissues by a factor of 1.7. What are the new dimensions of the tissues? Solution: To get the new size, multiply each dimension by 1.7. length: 10 cm × 1.7 = 17 cm width: 9 cm × 1.7 = 15.3 cm

Scale factors are also used on maps, where a unit on a map represents a certain actual distance on the ground. For example, a scale factor might be 1 cm represents 5 km.

Example 2: The scale on a neighbourhood map shows that 1 cm on the map represents an actual distance of 2.5 km. a) On the map, Waltham Street has a length of 14 cm. What would the actual length of the street be? b) Centre Street has an actual length of 25 km. What would the length of the street be on the map? Solution: a) Multiply the map length by the scale factor. 14 cm × 2.5 km/cm = 35 km b) Divide the actual distance by the scale factor. 25 km ÷ 2.5 km/cm = 10 cm Proportions can also be used, including the English words and numbers, as before.

CALCULATING SCALE FACTOR

In the previous section, we used a given scale factor to calculate the length of sides when a figure is enlarged or reduced. In this section, we will learn about calculating the scale factor when two corresponding sides in similar figures are given. Use a proportion to determine the scale factor. Remember, a scale factor is always 1:x, where x is the number we are looking for. It may be stated as just a number, but it is really a ratio.

Example 1: Adam is making a scale drawing of a staircase. On the drawing, the height of one stair is 0.5 cm while the actual height of the stair is 20 cm. What was the scale factor that Adam used? Solution: Set up a ratio and divide to calculate the scale factor. Scale Factor = x = 20 × 1 ÷ 0.5 = 40

It is also important to note that when calculating scale factor, the units of the two numbers MUST be the same. You cannot calculate scale factor with cm and metres, for example. You must change one unit into the other before using the proportion.

Example 2: Tara drew a diagram of her bedroom. In the diagram, the longest wall is 8.5 inches, but it actually measures 12.75 feet. What scale factor did Tara use when she made the diagram? Solution: Convert the units all to inches and then set up a proportion. Remember: 1 foot = 12 inches. So, 12.75 feet × 12 inches/foot = 153 inches. Scale Factor = x = 153 × 1 ÷ 8.5 = 18

More Scale Factor

Not all scale factors you will be given are in the form 1:x. Often, the 1 will be some other number. When this is the case, use a proportion to solve the problem.

Example 1: Jacob is building a model of a room using a scale factor of 6:200. If the dimensions of the room are 650 cm by 480 cm, what will the dimensions of the model be? Solution: Set up a proportion and solve. One proportion for each dimension is necessary. The dimensions of the model are 19.5 cm by 14.4 cm.
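The scale-factor arithmetic worked through in the examples above can be collected into a short sketch like the one below; the function names are my own, but the numbers come straight from the map, staircase, bedroom, and 6:200 model examples.

```python
# Scale-factor utilities mirroring the worked examples above.

def scale_factor(actual: float, drawing: float) -> float:
    """Scale factor expressed as 1 : x, where x = actual / drawing (units must match)."""
    return actual / drawing

def apply_scale(length: float, scale_from: float, scale_to: float) -> float:
    """Scale a length by the ratio scale_from : scale_to (e.g., 6:200 for a model)."""
    return length * scale_from / scale_to

# Map example: 1 cm on the map represents 2.5 km on the ground.
print(14 * 2.5)                       # 35.0 km of actual street for 14 cm on the map
print(25 / 2.5)                       # 10.0 cm on the map for 25 km of actual street

# Staircase: 0.5 cm on the drawing vs. 20 cm actual -> scale factor 1:40
print(scale_factor(20, 0.5))          # 40.0

# Tara's bedroom: convert 12.75 feet to inches first, then compare with 8.5 inches.
print(scale_factor(12.75 * 12, 8.5))  # 18.0

# Jacob's model, scale 6:200, room 650 cm by 480 cm.
print(apply_scale(650, 6, 200), apply_scale(480, 6, 200))  # 19.5 14.4
```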
Example 2: The scale of a photograph of an organism under a microscope is 75:2. If the photograph has a dimension of 30 mm, how long was the original organism? Solution: Set up a proportion and solve: 30 mm × 2 ÷ 75 = 0.8 mm, so the original organism was 0.8 mm long.

ASK YOUR TEACHER FOR UNIT QUIZ 2.

Working with Similar Figures

In the first part of this unit, you learned about similar figures and how to find their corresponding sides and angles. In this section you will determine if two figures are similar, and what changes you can make to a shape to keep it similar to the original.

Example 1: Looking at the two figures below, are they similar? If so, explain how you know. If not, explain what is missing or wrong. The angles marked with the same symbol are equal. You can see that 3 of the angles in the large figure are equal to their corresponding angles in the smaller figure. But you cannot state that the other 2 pairs of corresponding angles are equal, as there is no evidence to support that. Therefore, you cannot state that the 2 figures are similar.

DRAWING SIMILAR FIGURES

Artists, architects, and planners use scale drawings in their work. The diagrams or models should be in proportion to the actual objects so that others can visualize what the real objects look like accurately.

Example: Use graph paper to draw a figure similar to the one given, with the sides 1.5 times the length of the original. Remember that the corresponding angles must be equal. The lengths of the sides, starting in the bottom left corner and going clockwise around the figure, are: 6 squares, 4 diagonals, 4 squares, 10 diagonals, 18 squares. The new lengths are: 6 × 1.5 = 9 squares, 4 × 1.5 = 6 diagonals, 4 × 1.5 = 6 squares, 10 × 1.5 = 15 diagonals, 18 × 1.5 = 27 squares.

Similar triangles are very useful in making calculations and determining measurements. There are certain things to know about triangles before proceeding. Triangles always have three sides and three angles. The sum of the angles of a triangle is always 180°. If two corresponding angles are equal, the third angles will also be equal because the sum must be 180°. There are several special triangles – an isosceles triangle has 2 sides equal in length, and the two angles opposite these sides are of equal measure. An equilateral triangle has all three sides equal in length and all three angles equal in measure to 60°. Two triangles are similar if any two of the three corresponding angles are congruent, or one pair of corresponding angles is congruent and the corresponding sides beside the angles are proportional. Congruent means the same in size and shape.

Example 2: Kevin notices that a 2 m pole casts a shadow of 5 m, and a second pole casts a shadow of 9.4 m. How tall is the second pole? Solution: First, always make a diagram if one is not provided. Then confirm that the triangles are similar, and then use a proportion to solve for x. Notice that 2 of the three corresponding angles are congruent. The third angles are also equal because the angle between the rays of the sun and the poles is the same in both cases. So the triangles are similar. Now set up a proportion to solve for x: x ÷ 9.4 = 2 ÷ 5, so x = 2 × 9.4 ÷ 5 = 3.76 m. The second pole is 3.76 m tall.

YOU ARE NOW FINISHED THIS UNIT. ASK YOUR TEACHER FOR THE UNIT #5 TEST.
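As a final check, the short sketch below re-works two of the proportion problems from this unit (the pole-and-shadow example and the 75:2 microscope scale); the helper function name is my own.

```python
# Re-check two worked examples from this unit using a simple proportion.

def similar_side(known_side: float, known_match: float, new_match: float) -> float:
    """Corresponding side via known_side / known_match = x / new_match."""
    return known_side * new_match / known_match

# Pole and shadow: a 2 m pole casts a 5 m shadow; the second shadow is 9.4 m.
print(similar_side(2, 5, 9.4))   # 3.76 m tall

# Microscope photo at scale 75:2; a 30 mm feature on the photo.
print(similar_side(2, 75, 30))   # 0.8 mm actual length
```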
http://www.dlc-ubc.ca/wordpress_dlc_mu/wilander1/courses/math-10/unit-6/
13
58
Angle of attack is a term used in aerodynamics to describe the angle between the airfoil's chord line and the relative airflow (the relative wind), which is effectively the direction in which the aircraft is currently moving. It can be described as the angle between where the wing is pointing and where it is going. The lift coefficient is a non-dimensional coefficient that relates the lift generated by an airfoil, the dynamic pressure of the fluid flow around the airfoil, and the planform area of the airfoil. It may also be described as the ratio of lift pressure to dynamic pressure.

Angle of Attack
Angle of attack (AOA, α, the Greek letter alpha) is a term used in aerodynamics to describe the angle between the airfoil's chord line and the relative airflow (the relative wind), which is effectively the direction in which the aircraft is currently moving. It can be described as the angle between where the wing is pointing and where it is going.

The amount of lift generated by a wing is directly related to the angle of attack, with greater angles generating more lift (and more drag). This remains true up to the stall point, where lift starts to decrease again because of flow separation. Planes flying at high angles of attack can suddenly enter a stall if, for example, a strong wind gust changes the direction of the relative wind.

Also, to maintain a given amount of lift, the angle of attack must be increased as speed through the air decreases. This is why stalling occurs more frequently at low speeds. Nonetheless, a wing (or any other airfoil) can stall at any speed. Planes that already have a high angle of attack, for example because they are pulling g or carrying a heavy payload, will stall at a speed well above the normal stall speed, since only a small increase in the angle of attack will take the wing above the critical angle. The critical angle is typically around 15° for most airfoils.

Using a variety of additional aerodynamic surfaces — known as high-lift devices — like leading edge extensions (leading edge wing root extensions), fighter aircraft have increased the potential flyable alpha from about 20° to over 45°, and in some designs, 90° or more. That is, the plane remains flyable when the wing's chord is perpendicular to the direction of motion.

Some aircraft are equipped with a built-in flight computer that automatically prevents the plane from lifting its nose any further when the maximum angle of attack is reached, irrespective of pilot input. This is called the angle of attack limiter or alpha limiter. Modern airliners that limit the angle of attack by means of computers include the Airbus A320, A330, A340, and A380 series. The pilot may disengage the alpha limiter at any time, thus allowing the plane to perform tighter turns (but with considerably higher risk of going into a stall). A famous military example of this is Pugachev's Cobra. Currently, the highest angle of attack recorded for a duration of 2-3 seconds is 120 degrees, performed in the Russian Su-27 by the famous Russian test pilot Viktor Pugachev at the Paris Air Show in 1989.

In sailing, the angle of attack is the angle between a mid-sail and the direction of the wind. The physical principles involved are the same as for aircraft. See points of sail.

The lift coefficient (CL or CZ) is a non-dimensional coefficient that relates the lift generated by an airfoil, the dynamic pressure of the fluid flow around the airfoil, and the planform area of the airfoil. It may also be described as the ratio of lift pressure to dynamic pressure.
- Lift coefficient may be used to relate the total lift generated by an aircraft to the total area of the wing of the aircraft. In this application it is called the aircraft lift coefficient CL. The lift coefficient CL is equal to:

CL = L / (q A) = 2L / (ρ v² A)

where L is the lift force, ρ is the fluid density, v is the true airspeed, q = ½ ρ v² is the dynamic pressure, and A is the planform area.

- Lift coefficient may also be used as a characteristic of a particular shape (or cross-section) of an airfoil. In this application it is called the section lift coefficient cl. It is common to show, for a particular airfoil section, the relationship between lift coefficient and angle of attack. It is also useful to show the relationship between lift coefficient and drag coefficient. The section lift coefficient is based on the concept of an infinite wing of non-varying cross-section. It is not practical to define the section lift coefficient in terms of total lift and total area because they are infinitely large. Rather, the lift is defined per unit span of the wing. In such a situation, the above formula becomes:

cl = l / (q c)

where l is the lift per unit span and c is the chord length of the airfoil.

Note that the lift equation does not include terms for angle of attack — that is because there is no simple, general mathematical relationship between lift and angle of attack. (In contrast, there is a straight-line relationship between lift and dynamic pressure, and between lift and area.) The relationship between the lift coefficient and angle of attack is complex and can only be determined by experimentation or complex analysis. See the accompanying graph. The graph of section lift coefficient vs. angle of attack follows the same general shape for all airfoils, but the particular numbers will vary. The graph shows an almost linear increase in lift coefficient with increasing angle of attack, up to a maximum point, after which the lift coefficient falls away rapidly. This maximum marks the lift coefficient at the stall of the airfoil. The lift coefficient is a dimensionless number.

Source: Wikipedia (All text is available under the terms of the GNU Free Documentation License and Creative Commons Attribution-ShareAlike License.)
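The defining relation CL = L / (q A) with q = ½ ρ v² is easy to turn into a quick calculation. The following Python sketch uses invented numbers (sea-level air density, an assumed airspeed, wing area, and lift coefficient) purely to illustrate the formula; the values are not drawn from the article.

def dynamic_pressure(rho, v):
    # q = 1/2 * rho * v^2
    return 0.5 * rho * v ** 2

def lift_force(cl, rho, v, area):
    # Rearranged definition of the lift coefficient: L = CL * q * A
    return cl * dynamic_pressure(rho, v) * area

rho = 1.225   # kg/m^3, air density at sea level
v = 70.0      # m/s, true airspeed (assumed)
area = 16.0   # m^2, wing planform area (assumed)
cl = 1.2      # lift coefficient (assumed, below the stall)

print(lift_force(cl, rho, v, area))   # about 57,600 N of lift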
http://www.juliantrubin.com/encyclopedia/aviation/angle_of_attack.html
13
91
This topic applies to ArcGIS for Desktop Standard and ArcGIS for Desktop Advanced only. Topology is a collection of rules that, coupled with a set of editing tools and techniques, enables the geodatabase to more accurately model geometric relationships. ArcGIS implements topology through a set of rules that define how features may share a geographic space and a set of editing tools that work with features that share geometry in an integrated fashion. A topology is stored in a geodatabase as one or more relationships that define how the features in one or more feature classes share geometry. The features participating in a topology are still simple feature classes—rather than modifying the definition of the feature class, a topology serves as a description of how the features can be spatially related. Topology has long been a key GIS requirement for data management and integrity. In general, a topological data model manages spatial relationships by representing spatial objects (point, line, and area features) as an underlying graph of topological primitives—nodes, faces, and edges. These primitives, together with their relationships to one another and to the features whose boundaries they represent, are defined by representing the feature geometries in a planar graph of topological elements. Topology is fundamentally used to ensure data quality of the spatial relationships and to aid in data compilation. Topology is also used for analyzing spatial relationships in many situations, such as dissolving the boundaries between adjacent polygons with the same attribute values or traversing a network of the elements in a topology graph. Topology can also be used to model how the geometry from a number of feature classes can be integrated. Some refer to this as vertical integration of feature classes. Ways that features share geometry in a topology Features can share geometry within a topology. Here are some examples among adjacent features: - Area features can share boundaries (polygon topology). - Line features can share endpoints (edge-node topology). In addition, shared geometry can be managed between feature classes using a geodatabase topology. For example: - Line features can share segments with other line features. - Area features can be coincident with other area features. For example, parcels can nest within blocks. - Line features can share endpoint vertices with other point features (node topology). - Point features can be coincident with line features (point events). Parcels have commonly been managed using simple feature classes and geodatabase topology, so that the set of feature classes needed to model parcels, boundaries, corner points, and control points obey the required coincidence rules. Another way to manage parcels is with a parcel fabric, which automatically provides these layers for you. A fabric manages its internal topology, with no requirement to maintain a geodatabase topology or perform any topological editing for the set of layers used by parcels. A key difference between parcels modeled as simple features and parcels in a fabric is that fabric parcel boundaries (lines in a fabric) are not shared—there is a complete set of lines on the boundary of each parcel; fabric lines for adjacent parcels overlap and are coincident. Parcel fabrics may still participate in geodatabase topology; where overlapping boundary lines have differing geometry, the lines are cracked, and the topology graph is built as usual. 
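As a rough illustration of the edge-node sharing idea mentioned above (line features sharing endpoints), the short Python sketch below indexes line endpoints and reports nodes used by more than one line. The data and helper code are hypothetical and are not part of any Esri API.

from collections import defaultdict

# Each "line feature" is just an ordered list of (x, y) vertices.
lines = {
    "road_1": [(0, 0), (1, 0), (2, 0)],
    "road_2": [(2, 0), (2, 1)],
    "road_3": [(2, 0), (3, 0)],
}

nodes = defaultdict(list)
for name, coords in lines.items():
    # Only the endpoints participate in edge-node topology.
    for endpoint in (coords[0], coords[-1]):
        nodes[endpoint].append(name)

for node, members in nodes.items():
    if len(members) > 1:
        print(node, "is shared by", members)   # (2, 0) is shared by all three roads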
Two views: Features and topological elements A layer of polygons can be described and used: - As collections of geographic features (points, lines, and polygons) - As a graph of topological elements (nodes, edges, faces, and their relationships) This means that there are two alternatives for working with features—one in which features are defined by their coordinates and another in which features are represented as an ordered graph of their topological elements. The evolution of geodatabase topology from coverages Reading this large topic is not necessary to implement geodatabase topologies. However, you may want to spend some time reading this if you are interested in the historical evolution and motivations for how topology is managed in the geodatabase. The genesis of Arc-node and Georelational ArcInfo Workstation coverage users have a long history and appreciation for the role that topology plays in maintaining the spatial integrity of their data. Here are the elements of the coverage data model. In a coverage, the feature boundaries and points were stored in a few main files that were managed and owned by ArcInfo Workstation. The ARC file held the linear or polygon boundary geometry as topological edges, which were referred to as arcs. The LAB file held point locations, which were used as label points for polygons or as individual point features such as for a wells feature layer. Other files were used to define and maintain the topological relationships between each of the edges and the polygons. For example, one file called the PAL file (which stands for Polygon-arc list) listed the order and direction of the arcs in each polygon. In ArcInfo Workstation, software logic was used to assemble the coordinates for each polygon for display, analysis, and query operations. The ordered list of edges in the PAL file was used to look up and assemble the edge coordinates held in the ARC file. The polygons were assembled during runtime when needed. The coverage model had several advantages: - It used a simple structure to maintain topology. - It enabled edges to be digitized and stored only once and shared by many features. - It could represent polygons of enormous size (with thousands of coordinates) because polygons were really defined as an ordered set of edges (arcs) - The Topology storage structure of the coverage was intuitive. Its physical topological files were readily understood by ArcInfo Workstation users. An interesting historical fact: Arc, when coupled with the table manager Info, was the genesis of the product name ArcInfo Workstation, which led to all subsequent Arc products in the Esri product family—ArcInfo, ArcIMS, ArcGIS, and so on. Coverages also had some disadvantages: - Some operations were slow because many features had to be assembled on the fly when they needed to be used. This included all polygons and multipart features such as regions (the coverage term for multipart polygons) and routes (the term for multipart line features). - Topological features (such as polygons, regions, and routes) were not ready to use until the coverage topology was built. If edges were edited, the topology had to be rebuilt. (Note: Partial processing was eventually used, which required rebuilding only the changed portions of the coverage topology.) In general, when edits are made to features in a topological dataset, a geometric analysis algorithm must be executed to rebuild the topological relationships regardless of the storage model. - Coverages were limited to single-user editing. 
Because of the need to ensure that the topological graph was in synchronization with the feature geometries, only a single user could update a topology at a time. Users would tile their coverages and maintain a tiled database for editing. This enabled individual users to lock down and edit one tile at a time. For general data use and deployment, users would append copies of their tiles into a mosaicked data layer. In other words, the tiled datasets they edited were not directly used across the organization. They had to be converted, which meant extra work and extra time. Shapefiles and simple geometry storage In the early 1980s, coverages were seen as a major improvement over the older polygon and line-based systems in which polygons were held as complete loops. In these older systems, all the coordinates for a feature were stored in each feature's geometry. Before the coverage and ArcInfo Workstation came along, these simple polygon and line structures were used. These data structures were simple but had the disadvantage of double digitized boundaries. That is, two copies of the coordinates of the adjacent portions of polygons with shared edges would be contained in each polygon's geometry. The main disadvantage was that GIS software at the time could not maintain shared edge integrity. Plus, storage costs were enormous, and each byte of storage came at a premium. During the early 1980s, a 300 MB disk drive was the size of a washing machine and cost $30,000. Holding two or more representations of coordinates was expensive, and the computations took too much compute time. Thus, the use of a coverage topology had real advantages. During the mid 1990s, interest in simple geometric structures grew because disk storage and hardware costs in general were coming down while computational speed was growing. At the same time, existing GIS datasets were more readily available, and the work of GIS users was evolving from primarily data compilation activities to include data use, analysis, and sharing. Users wanted faster performance for data use (for example, don't spend computer time to derive polygon geometries when we need them. Just deliver the feature coordinates of these 1,200 polygons as fast as possible). Having the full feature geometry readily available was more efficient. Thousands of geographic information systems were in use, and numerous datasets were readily available. Around this time, Esri developed and published its shapefile format. Shapefiles used a very simple storage model for feature coordinates. Each shapefile represented a single feature class (of points, lines, or polygons) and used a simple storage model for the feature's coordinates. Shapefiles could be easily created from coverages as well as many other geographic information systems. They were widely adopted as a de facto standard and are still massively used and deployed to this day. A few years later, ArcSDE pioneered a similar simple storage model in relational database tables. A feature table could hold one feature per row with the geometry in one of its columns along with other feature attribute columns. A sample feature table of state polygons is shown below. Each row represents a state. The shape column holds the polygon geometry of each state. This simple features model fits the SQL processing engine very well. Through the use of relational databases, we began to see GIS data scale to unprecedented sizes and numbers of users without degrading performance. 
We were beginning to leverage RDBMS for GIS data management. Shapefiles became ubiquitous, and using ArcSDE, this simple features mechanism became the fundamental feature storage model in RDBMSs. (To support interoperability, Esri was the lead author of the OGC and ISO simple features specification). Simple feature storage had clear advantages: - The complete geometry for each feature is held in one record. No assembly is required. - The data structure (physical schema) is very simple, fast, and scalable. - It is easy for programmers to write interfaces. - It is interoperable. Many wrote simple converters to move data in and out of these simple geometries from numerous other formats. Shapefiles were widely applied as a data use and interchange format. Its disadvantages were that maintaining the data integrity that was readily provided by topology was not as easy to implement for simple features. As a consequence, users applied one data model for editing and maintenance (such as coverages) and used another for deployment (such as shapefiles or ArcSDE layers). Users began to use this hybrid approach for editing and data deployment. For example, users would edit their data in coverages, CAD files, or other formats. Then, they would convert their data into shapefiles for deployment and use. Thus, even though the simple features structure was an excellent direct use format, it did not support the topological editing and data management of shared geometry. Direct use databases would use the simple structures, but another topological form was used for editing. This had advantages for deployment. But the disadvantage was that data would become out-of-date and have to be refreshed. It worked, but there was a lag time for information update. Bottom line—topology was missing. What GIS required and what the geodatabase topology model implements now is a mechanism that stores features using the simple feature geometry but enables topologies to be used on this simple, open data structure. This means that users can have the best of both worlds—a transactional data model that enables topological query, shared geometry editing, rich data modeling, and data integrity, but also a simple, highly scalable data storage mechanism that is based on open, simple feature geometry. This direct use data model is fast, simple, and efficient. It can also be directly edited and maintained by any number of simultaneous users. The topology framework in ArcGIS In effect, topology has been considered as more than a data storage problem. 
The complete solution includes the following: - A complete data model (objects, integrity rules, editing and validation tools, a topology and geometry engine that can process datasets of any size and complexity, and a rich set of topological operators, map display, and query tools) - An open storage format using a set of record types for simple features and a topological interface to query simple features, retrieve topological elements, and navigate their spatial relationships (that is, find adjacent areas and their shared edge, route along connected lines) - The ability to provide the features (points, lines, and polygons) as well as the topological elements (nodes, edges, and faces) and their relationships to one another - A mechanism that can support the following - Massively large datasets with millions of features - Ability to perform editing and maintenance by many simultaneous editors - Ready-to-use, always available feature geometry - Support for topological integrity and behavior - A system that goes fast and scales for many users and many editors - A system that is flexible and simple - A system that leverages the RDBMS SQL engine and transaction framework - A system that can support multiple editors, long transactions, historical archiving, and replication In a geodatabase topology, the validation process identifies shared coordinates between features (both in the same feature class and across feature classes). A clustering algorithm is used to ensure that the shared coordinates have the same location. These shared coordinates are stored as part of each feature's simple geometry. This enables very fast and scalable lookup of topological elements (nodes, edges, and faces). This has the added advantage of working quite well and scaling with the RDBMS's SQL engine and transaction management framework. During editing and update, as features are added, they are directly usable. The updated areas on the map, dirty areas, are flagged and tracked as updates are made to each feature class. At any time, users can choose to topologically analyze and validate the dirty areas to generate clean topology. Only the topology for the dirty areas needs rebuilding, saving processing time. The results are that topological primitives (nodes, edges, and faces) and their relationships to one another and their features can be efficiently discovered and assembled. This has several advantages: - Simple feature geometry storage is used for features. This storage model is open, efficient, and scales to large sizes and numbers of users. - This simple features data model is transactional and is multiuser. By contrast, the older topological storage models will not scale and have difficulties supporting multiple editor transactions and numerous other GIS data management workflows. - Geodatabase topologies fully support all the long transaction and versioning capabilities of the geodatabase. Geodatabase topologies need not be tiled, and many users can simultaneously edit the topological database—even their individual versions of the same features if necessary. - Feature classes can grow to any size (hundreds of millions of features) with very strong performance. - This topology implementation is additive. You can typically add this to an existing schema of spatially related feature classes. The alternative is that you must redefine and convert all your existing feature classes to new data schemas holding topological primitives. 
- There need only be one data model for geometry editing and data use, not two or more. - It is interoperable because all feature geometry storage adheres to simple features specifications from the Open Geospatial Consortium and ISO. - Data modeling is more natural because it is based on user features (such as parcels, streets, soil types, and watersheds) instead of topological primitives (such as nodes, edges, and faces). Users will begin to think about the integrity rules and behavior of their actual features instead of the integrity rules of the topological primitives. For example, how do parcels behave? This will enable stronger modeling for all kinds of geographic features. It will improve our thinking about streets, soils types, census units, watersheds, rail systems, geology, forest stands, land forms, physical features, and on and on. - Geodatabase topologies provide the same information content as maintained topological implementations—either you store a topological line graph and discover the feature geometry (like coverages) or you store the feature geometry and discover the topological elements and relationships (like geodatabases). In cases where users want to store the topological primitives, it is easy to create and post topologies and their relationships to tables for various analytic and interoperability purposes (such as users who want to post their features into an Oracle Spatial warehouse that stores tables of topological primitives). At a pragmatic level, the ArcGIS topology implementation works. It scales to extremely large geodatabases and multiuser systems without loss of performance. It includes validation and editing tools for building and maintaining topologies in geodatabases. It includes rich and flexible data modeling tools that enable users to assemble practical, working systems on file systems, in any relational database, and on any number of schemas.
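The validation behaviour described above depends on clustering nearly coincident coordinates so that shared geometry really is identical. The sketch below is a much-simplified Python illustration of that idea, snapping vertices that fall within a cluster tolerance onto a common grid; the tolerance value is invented, and real geodatabase validation is considerably more sophisticated.

TOLERANCE = 0.001  # cluster tolerance in map units (assumed)

def snap(point, tolerance=TOLERANCE):
    # Round each coordinate onto a grid whose cell size is the tolerance,
    # so vertices closer together than the tolerance collapse to one location.
    return (round(point[0] / tolerance) * tolerance,
            round(point[1] / tolerance) * tolerance)

parcel_a = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0004), (0.0, 5.0)]
parcel_b = [(0.0, 5.0), (10.0, 5.0001), (10.0, 10.0), (0.0, 10.0)]

print([snap(p) for p in parcel_a])
print([snap(p) for p in parcel_b])   # the shared boundary vertices now coincide exactly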
http://resources.arcgis.com/en/help/main/10.1/0062/006200000002000000.htm
13
67
Electronic Warfare and Radar Systems Engineering Handbook

TRANSFORMS / WAVELETS

Signal processing using a transform analysis for calculations is a technique used to simplify or accelerate problem solution. For example, instead of dividing two large numbers, we might convert them to logarithms, subtract them, then look up the anti-log to obtain the result. While this may seem a three-step process as opposed to a one-step division, consider that long-hand division of a four-digit number by a three-digit number, carried out to four places, requires three divisions, 3-4 multiplications, and three subtractions. Computers process additions or subtractions much faster than multiplications or divisions, so transforms are sought which provide the desired signal processing using these steps.

Other types of transforms include the Fourier transform, which is used to decompose or separate a waveform into a sum of sinusoids of different frequencies. It transforms our view of a signal from time based to frequency based. Figure 1 depicts how a square wave is formed by summing certain particular sine waves. The waveform must be continuous, periodic, and almost everywhere differentiable. The Fourier transform of a sequence of rectangular pulses is a series of sinusoids. The envelope of the amplitude of the coefficients of this series is a waveform with a sin x / x shape. For the special case of a single pulse, the Fourier series has an infinite series of sinusoids that are present for the duration of the pulse.

Digital Sampling of Waveforms
In order to process a signal digitally, we need to sample the signal frequently enough to create a complete “picture” of the signal. The discrete Fourier transform (DFT) may be used in this regard. Samples are taken at uniform time intervals as shown in Figure 2 and processed. If the digital information is multiplied by the Fourier coefficients, a digital filter is created as shown in Figure 3. If the sum of the resultant components is zero, the filter has ignored (notched out) that frequency sample. If the sum is a relatively large number, the filter has passed the signal. With the single sinusoid shown, there should be only one resultant. (Note that being “zero” or relatively large may just mean below or above the filter's cutoff threshold.)

Figure 4 depicts the process pictorially. The vectors in the figure just happen to be pointing in a cardinal direction because the strobe frequencies are all multiples of the vector (phasor) rotation rate, but that is not normally the case. Usually the vectors will point in a number of different directions, with a resultant in some direction other than straight up. In addition, sampling normally has to be taken at or above twice the rate of interest (also known as the Nyquist rate); otherwise ambiguous results may be obtained.

Figure 4. Phasor Representation

Fast Fourier Transforms
One problem with this type of processing is the large number of additions, subtractions, and multiplications which are required to reconstruct the output waveform. The Fast Fourier transform (FFT) was developed to reduce this problem. It recognizes that because the filter coefficients are sine and cosine waves, they are symmetrical about 90, 180, 270, and 360 degrees. They also have a number of coefficients equal either to one or zero, and duplicate coefficients from filter to filter in a multibank arrangement.
By waiting for all of the inputs for the bank to be received, adding together those inputs for which coefficients are the same before performing multiplications, and separately summing those combinations of inputs and products which are common to more than one filter, the required amount of computing may be cut drastically. - The number of computations for a DFT is on the order of N squared. - The number of computations for a FFT when N is a power of two is on the order of N log2 N. For example, in an eight filter bank, a DFT would require 512 computations, while an FFT would only require 56, significantly speeding up processing time. Windowed Fourier Transform The Fourier transform is continuous, so a windowed Fourier transform (WFT) is used to analyze non-periodic signals as shown in Figure 5. With the WFT, the signal is divided into sections (one such section is shown in Figure 5) and each section is analyzed for frequency content. If the signal has sharp transitions, the input data is windowed so that the sections converge to zero at the endpoints. Because a single window is used for all frequencies in the WFT, the resolution of the analysis is the same (equally spaced) at all locations in the time-frequency domain. The FFT works well for signals with smooth or uniform frequencies, but it has been found that other transforms work better with signals having pulse type characteristics, time-varying (non-stationary) frequencies, or odd shapes. The FFT also does not distinguish sequence or timing information. For example, if a signal has two frequencies (a high followed by a low or vice versa), the Fourier transform only reveals the frequencies and relative amplitude, not the order in which they occurred. So Fourier analysis works well with stationary, continuous, periodic, differentiable signals, but other methods are needed to deal with non-periodic or non-stationary signals. The Wavelet transform has been evolving for some time. Mathematicians theorized its use in the early 1900's. While the Fourier transform deals with transforming the time domain components to frequency domain and frequency analysis, the wavelet transform deals with scale analysis, that is, by creating mathematical structures that provide varying time/frequency/amplitude slices for analysis. This transform is a portion (one or a few cycles) of a complete waveform, hence the term wavelet. The wavelet transform has the ability to identify frequency (or scale) components, simultaneously with their location(s) in time. Additionally, computations are directly proportional to the length of the input signal. They require only N multiplications (times a small constant) to convert the waveform. For the previous eight filter bank example, this would be about twenty calculations, vice 56 for the FFT. In wavelet analysis, the scale that one uses in looking at data plays a special role. Wavelet algorithms process data at different scales or resolutions. If we look at a signal with a large "window," we would notice gross features. Similarly, if we look at a signal with a small "window," we would notice small discontinuities as shown in Figure 6. The result in wavelet analysis is to "see the forest and the trees." A way to achieve this is to have short high-frequency fine scale functions and long low-frequency ones. This approach is known as multi-resolution analysis. 
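The difference in operation counts between a direct DFT and an FFT is easy to demonstrate numerically. The Python/NumPy sketch below (an 8-sample example of my own, not taken from the handbook) computes the spectrum of a sampled sinusoid both ways and prints the order-of-magnitude counts N² versus N·log2(N) discussed above.

import numpy as np

N = 8                                   # number of samples / filters in the bank
t = np.arange(N)
x = np.sin(2 * np.pi * 2 * t / N)       # a sinusoid that lands in frequency bin 2

# Direct DFT: multiply the samples by each set of Fourier coefficients.
k = np.arange(N).reshape(-1, 1)
dft_matrix = np.exp(-2j * np.pi * k * t / N)
X_direct = dft_matrix @ x

# Fast Fourier transform of the same samples.
X_fft = np.fft.fft(x)

print(np.allclose(X_direct, X_fft))     # True: identical spectra
print("order N^2 =", N ** 2, "vs order N*log2(N) =", int(N * np.log2(N)))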
For many decades, scientists have wanted more appropriate functions than the sines and cosines (the base functions of Fourier analysis) to approximate choppy signals. (Walsh transforms work if the waveform is periodic and stationary.) By their definition, sine and cosine functions are non-local (they stretch out to infinity), and therefore do a very poor job of approximating sharp spikes. But with wavelet analysis, we can use approximating functions that are contained neatly in finite (time/frequency) domains. Wavelets are well-suited for approximating data with sharp discontinuities.

The wavelet analysis procedure is to adopt a wavelet prototype function, called an "analyzing wavelet" or "mother wavelet." Temporal analysis is performed with a contracted, high-frequency version of the prototype wavelet, while frequency analysis is performed with a dilated, low-frequency version of the prototype wavelet. Because the original signal or function can be represented in terms of a wavelet expansion (using coefficients in a linear combination of the wavelet functions), data operations can be performed using just the corresponding wavelet coefficients, as shown in Figure 7. If one further chooses the best wavelets adapted to the data, or truncates the coefficients below some given threshold, the data is sparsely represented. This "sparse coding" makes wavelets an excellent tool in the field of data compression. For instance, the FBI uses wavelet coding to store fingerprints. Hence, the concept of wavelets is to look at a signal at various scales and analyze it with various resolutions.

Analyzing Wavelet Functions
Fourier transforms deal with just two basis functions (sine and cosine), while there are an infinite number of wavelet basis functions. The freedom of the analyzing wavelet is a major difference between the two types of analyses and is important in determining the results of the analysis. The “wrong” wavelet may be no better (or even far worse) than the Fourier analysis. A successful application presupposes some expertise on the part of the user. Some prior knowledge about the signal is generally needed in order to select the most suitable distribution and adapt the parameters to the signal. Some of the more common wavelets are shown in Figure 8. There are several wavelets in each family, and they may look different than those shown. Somewhat longer in duration than these functions, but significantly shorter than infinite sinusoids, is the cosine packet shown in Figure 9.

Wavelet Comparison With Fourier Analysis
While a typical Fourier transform provides frequency content information for samples within a given time interval, a perfect wavelet transform records the start of one frequency (or event), then the start of a second event, with amplitude added to or subtracted from the base event. Wavelets are especially useful in analyzing transients or time-varying signals. The input signal shown in Figure 9 consists of a sinusoid whose frequency changes in stepped increments over time. The power of the spectrum is also shown. Classical Fourier analysis will resolve the frequencies but cannot provide any information about the times at which each occurs. Wavelets provide an efficient means of analyzing the input signal so that frequencies and the times at which they occur can be resolved. Wavelets have finite duration and must also satisfy additional properties beyond those normally associated with standard windows used with Fourier analysis.
The result after the wavelet transform is applied is the plot shown in the lower right. The wavelet analysis correctly resolves each of the frequencies and the time when it occurs. A series of wavelets is used in Example 2.

Example 2: Figure 10 shows the input of a clean signal, and one with noise. It also shows the output of a number of “filters” with each signal. A 6 dB S/N improvement can be seen from the d4 output. (Recall from Section 4.3 that 6 dB corresponds to a doubling of detection range.) In the filter cascade, the HPFs and LPFs are the same at each level. The wavelet shape is related to the HPF and LPF in that it is the “impulse response” of an infinite cascade of the HPFs and LPFs. Different wavelets have different HPFs and LPFs. As a result of decimating by 2, the number of output samples equals the number of input samples.

Wavelet Applications
Some fields that are making use of wavelets are: astronomy, acoustics, nuclear engineering, signal and image processing (including fingerprinting), neurophysiology, music, magnetic resonance imaging, speech discrimination, optics, fractals, turbulence, earthquake prediction, radar, human vision, and pure mathematics. See the October 1996 IEEE Spectrum article entitled “Wavelet Analysis” by Bruce, Donoho, and Gao.
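For readers who want to experiment, a multi-resolution decomposition of a stepped-frequency signal like the one described for Figure 9 can be sketched in a few lines of Python, assuming the PyWavelets package (pywt) is installed; the sample rate, the Daubechies-4 wavelet, and the three-level depth are arbitrary choices of mine.

import numpy as np
import pywt

fs = 1000                                  # sample rate in Hz (assumed)
t = np.arange(0, 3, 1 / fs)

# A sinusoid whose frequency steps from 20 Hz to 80 Hz to 200 Hz over time.
freq = np.piecewise(t, [t < 1, (t >= 1) & (t < 2), t >= 2], [20, 80, 200])
signal = np.sin(2 * np.pi * freq * t)

# Wavelet decomposition: coarse (approximation) coefficients plus detail
# coefficients at three scales, each band roughly half the length of the next.
coeffs = pywt.wavedec(signal, "db4", level=3)
for i, band in enumerate(coeffs):
    print("band", i, "has", len(band), "coefficients")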
http://www.rfcafe.com/references/electrical/ew-radar-handbook/transforms-wavelets.htm
13
67
The standard functions that we defined back in Chapter 1 include polynomials, rational functions, trigonometric functions and rational functions of these, exponentials and polynomials in these, products of exponentials and polynomials and trigonometric functions, among other things. The classes of functions mentioned above can all be integrated by standard techniques, as can some others. We begin by reviewing the standard techniques, which have been described briefly in Chapter 19.

First, you should recognize that you can integrate any power of the variable of integration, by reversing the power rule for differentiation, unless that power is -1. Thus we have

∫ x^a dx = x^(a+1)/(a+1) + c, for a ≠ -1.

This implies that you can integrate any polynomial, but also any power standing alone, even a fractional or negative power. The result is another power, except when the power integrated is -1; in that case the integral is the natural logarithm:

∫ dx/x = ln x + c.

Other functions that you should quickly recognize as integrable are those that are derivatives of commonly encountered functions. These include sine and cosine, the exponential function, and a few others.

Next you should be prepared to recognize functions that can be transformed into polynomials or powers or these other functions by changes of the variable of integration that are more or less suggested by the integrand. If, for example, the integrand contains a function together with its derivative as a factor, substituting a new variable u for that function turns the integral into one of the standard forms above. To handle an integral whose integrand involves a linear expression such as 3x - 7, you should recognize that a substitution will handle it. To avoid confusion, the easiest way is to set u = 3x - 7, which tells us du = 3 dx, so that the integral can be written entirely in terms of u. Similarly you should be prepared to recognize the need for a sequence of successive simple substitutions. In doing these you are wise to write out the substitution u = u(x) completely before applying it, and make an effort not to forget to apply the chain rule in making the transition from dx to du. With these means you can integrate any polynomial or power, or any integral transformable into the same by simple substitutions.

Integration by parts allows you to extend your range of doable integrals to include polynomials multiplied by exponentials or by logarithms or by sines and cosines, among others. It transforms an integrand into a new one with part of it integrated and the rest differentiated. Thus given a polynomial in x times ln x, you can differentiate the latter and integrate the former, and the new integrand will be a power that can be integrated. With an exponential or appropriate trigonometric function times a polynomial, you can differentiate the polynomial and integrate the rest, doing this repeatedly until the polynomial becomes a constant.

You can even integrate something like e^x sin x this way, by integrating by parts twice. Here are the details. First set u = e^x and dv = sin x dx, which gives, on integrating by parts, the new integrand -v du, which is e^x cos x. Another integration by parts similarly confronts us with the new integrand -e^x sin x, and we end up with an equation for the original integral, which can be solved to give

∫ e^x sin x dx = e^x (sin x - cos x)/2 + c.

The same technique can be used to integrate a product of an exponential, a sine or cosine, and a polynomial in x: choose the polynomial as the part to differentiate and integrate the exponential-times-sine factor (as just shown), and the integral is reduced to a doable one. You can integrate any polynomial in x, as we have seen.
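This kind of manipulation is easy to check with a computer algebra system. The short Python sketch below (SymPy is assumed to be installed; it is not part of the text) verifies the double integration by parts for ∫ e^x sin x dx.

import sympy as sp

x = sp.symbols('x')

# Integrating e^x * sin(x): two integrations by parts give
# e^x (sin x - cos x) / 2, up to an additive constant.
result = sp.integrate(sp.exp(x) * sp.sin(x), x)
print(result)                                       # exp(x)*sin(x)/2 - exp(x)*cos(x)/2

# Check the answer by differentiating it.
print(sp.simplify(sp.diff(result, x) - sp.exp(x) * sp.sin(x)))   # 0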
You can also integrate any polynomial in sines and cosines by converting it into a sum of sines and cosines of different arguments, using the expressions for them in terms of complex exponentials. Consider for example (sin x)². We can write

(sin x)² = (1 - cos 2x)/2.

A similar reduction can be made for any product of any number of sines and cosines. Any such product can be written as a sum of individual sines and cosines of arguments that are sums and differences of the arguments of the factors, in this way. This implies that you can integrate any product of a power of x, of cos x, of sin x, and of e^(kx), by applying the methods described so far. We have already seen that we can integrate any power of x, whether or not the exponent is an integer.

The method of partial fractions provides a way of taking any rational function of x, in other words any ratio of two polynomials, and writing it as a polynomial plus a sum of inverse powers of the factors (x - rj), when the denominator polynomial can be factored into linear factors and the rj are the roots of that polynomial.

Suppose, for example, our rational function is (2x³ + x + 1)/((x - 1)(x - 2)²). This function has the following obvious properties:
1. When x is very large, it behaves like 2x³/x³, which is 2.
2. When x is very close to 1, it behaves like 4/(x - 1).
3. When x is very close to 2, it behaves like 19/(x - 2)².
4. When x is 0, its value is -1/4.
In general such a function can be written as a polynomial plus a sum of differentiable functions divided by the most singular terms at each root. Each of the latter terms must go to zero when x is very large. Here that means that our function can be written as p(x) + a(x)/(x - 1) + b(x)/(x - 2)², and our properties above tell us immediately that p(x) = 2, a(x) = 4, and b(x) = 19 + b(x - 2); the value at x = 0 then implies b = 6, and we can write our rational function as

2 + 4(x - 1)^(-1) + 19(x - 2)^(-2) + 6(x - 2)^(-1).

In general you can read off the coefficient of the leading singular term at each singular point by factoring that leading singular term out of the denominator and evaluating the rest of the expression there. Each of the terms here can easily be integrated.

If the singular term at a singular point has degree greater than 1, as the example above has at x = 2, you can find the coefficients of the non-leading terms in any of three ways, whichever you find easier or more congenial:
1. You can factor out the leading singular factor and compute the Taylor series of the rest about the singular point. The relevant terms are those which, when multiplied by the leading singular term, are still singular.
2. You can subtract the leading singular term, with the coefficient you have read off, from the original expression; the difference will have a weaker singularity at the same point, and you can read off the coefficient of its leading term by inspection again, repeating if necessary.
3. You can evaluate the rational function at as many new points as needed to determine the unknown coefficients. That is the approach, evaluating at 0, used to determine b above.

The polynomial terms are here those that do not go to zero when x approaches infinity. The leading term can be found by inspection. The others can be determined either by polynomial division or by evaluating the polynomial coefficients using the third method described above. If the denominator has factors with complex roots, these may be treated exactly as real roots are.
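SymPy's apart function carries out exactly this partial fraction decomposition. In the sketch below the rational function is assembled from the expansion worked out above (2 + 4(x - 1)^(-1) + 19(x - 2)^(-2) + 6(x - 2)^(-1)) and then decomposed again; the round trip is my own illustration, not part of the text.

import sympy as sp

x = sp.symbols('x')

# Combine the expansion derived above into a single rational function.
f = sp.cancel(2 + 4/(x - 1) + 6/(x - 2) + 19/(x - 2)**2)
print(f)               # a single ratio of polynomials: (2x^3 + x + 1)/(x^3 - 5x^2 + 8x - 4)

# apart() recovers the polynomial part plus the singular terms at each root.
print(sp.apart(f, x))  # 2 + 4/(x - 1) + 6/(x - 2) + 19/(x - 2)**2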
You can extend the realm of doable integrals to rational functions of sines and cosines by using the substitution u = tan(x/2). With this substitution we get

sin x = 2u/(1 + u²), cos x = (1 - u²)/(1 + u²), and dx = 2 du/(1 + u²),

so that any rational function of sines and cosines becomes a rational function of u, and is therefore susceptible to integration by partial fractions.

There is one other class of standardly integrable functions: functions that have a square root of a quadratic function in them. The quadratic function may be reduced by completing the square to one of the forms (x - a)² + b², (x - a)² - b², or b² - (x - a)², which can be changed by changes of variable into u² + 1, u² - 1, and 1 - u². These can be handled by substitutions involving the tangent, secant, and sine, respectively. Completing the square consists of rewriting the quadratic function ax² + bx + c as a(x + b/(2a))² + (c - b²/(4a)).
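The half-angle substitution can be carried out mechanically with SymPy. The integrand 1/(2 + cos x) below is my own example, chosen only to illustrate the technique: it is rewritten in terms of u = tan(x/2), integrated as a rational function of u, and then translated back.

import sympy as sp

x, u = sp.symbols('x u')

integrand = 1 / (2 + sp.cos(x))          # a rational function of cos x

# With u = tan(x/2): cos x = (1 - u^2)/(1 + u^2) and dx = 2 du/(1 + u^2).
in_u = integrand.subs(sp.cos(x), (1 - u**2) / (1 + u**2)) * 2 / (1 + u**2)
in_u = sp.simplify(in_u)                 # 2/(u**2 + 3), a rational function of u
F_u = sp.integrate(in_u, u)              # 2*sqrt(3)*atan(sqrt(3)*u/3)/3

# Substitute back u = tan(x/2) and check by differentiating.
F_x = F_u.subs(u, sp.tan(x / 2))
print(F_x)
print(sp.simplify(sp.diff(F_x, x) - integrand))   # 0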
http://ocw.mit.edu/ans7870/18/18.013a/textbook/HTML/chapter27/section01.html
13
115
Students should be able to:
1. Describe Aristotle's horse-cart theory and what was wrong with it.
2. Describe Galileo's experiment that led to his conclusions about inertia. (a) Describe how this experiment is exemplified in modern day amusement parks.
3. Define in a sentence Galileo's Law of Inertia (alias Newton's First Law of Motion).
4. Describe what affects an object's inertia.
5. Characterize rotational inertia. (a) Describe the relationship between an object's rate of spin and the distribution of its mass.
6. Give examples of how inertia is demonstrated in everyday life (TOYS).
7. Write in words Newton's Second Law of Motion. (a) Describe a force. (b) Give the SI and English units of force. (c) Give the symbols for force in the SI and English systems.
8. Describe the relationship between force and acceleration.
9. Describe the relationship between force and mass.
10. Do problems that make proportionality predictions based on Newton's Second Law of Motion (F = ma).
11. Describe the formula for calculating weight from mass (w = mg). (a) Describe what it means to experience a certain number of g's. (b) Convert back and forth between g's and m/s².
12. Write in a complete sentence Newton's Third Law of Motion.
13. Apply Newton's Third Law of Motion to problems.
14. Be able to identify the "reaction force" in a given situation.
15. Distinguish between the concepts of mass and weight.
16. Memorize the value for the acceleration of any object near the surface of the Earth. (a) Describe what it means to be weightless.
17. Utilize Newton's Laws in conjunction with the kinematics equations from Chapter 1 to solve problems.

1. A little boy pushes a wagon with his dog in it. The mass of the dog and wagon together is 45 kg. The wagon accelerates at 0.85 m/s². What force is the boy pushing with?
2. A 1650 kg car accelerates at a rate of 4.0 m/s². How much force is the car's engine producing?
3. A 68 kg runner exerts a force of 59 N. What is the acceleration of the runner?
4. A crate is dragged across an ice-covered lake. The crate accelerates at 0.08 m/s² and is pulled by a 47 N force. What is the mass of the crate?
5. 3 women push a stalled car. Each woman pushes with a 425 N force. What is the mass of the car if the car accelerates at 0.85 m/s²?
6. A tennis ball, 0.314 kg, is accelerated at a rate of 164 m/s² when hit by a professional tennis player. What force does the player's tennis racket exert on the ball?
7. In an airplane crash a woman is holding an 8.18 kg (18 pound) baby. In the crash the woman experiences a horizontal deceleration of 88.2 m/s². How many g's is this deceleration? How much force must the woman exert to hold the baby in place?
8. When an F-14 airplane takes off from an aircraft carrier it is literally catapulted off the flight deck. The plane's final speed at take-off is 68.2 m/s. The F-14 starts from rest. The plane accelerates in 2 seconds and has a mass of 29,545 kg. What is the total force that gets the F-14 in the air?
9. A sports car accelerates from 0 to 60 mph (27 m/s) in 6.3 seconds. The car exerts a force of 4106 N. What is the mass of the car?
10. A sled is pushed along an ice-covered lake. It has some initial velocity before coming to a rest in 15 m. It took 23 seconds before the sled and rider came to a rest. If the rider and sled have a combined mass of 52.5 kg, what is the magnitude and direction of the stopping force? What do "we" call the stopping force?
11. A car is pulled with a force of 10,000 N. The car's mass is 1267 kg.
But the car covers 394.6 m in 15 seconds.
(a) What is the expected acceleration of the car from the 10,000 N force?
(b) What is the actual acceleration of the car from the observed data of x and t?
(c) What is the difference in accelerations?
(d) What force caused this difference in acceleration?
(e) What is the magnitude and direction of the force that caused the difference in acceleration?
12. A little car has a maximum acceleration of 2.57 m/s². What is the new maximum acceleration of the little car if it tows another car that has the same mass?
13. A boy can accelerate at 1.00 m/s² over a short distance. If the boy were to take an energy pill and suddenly have the ability to accelerate at 5.6 m/s², then how would his new energy-pill force compare to his earlier force? If the boy's earlier force was 45 N, what is the size of his energy-pill force?
14. A cartoon plane with four engines can accelerate at 8.9 m/s² when one engine is running. What is the acceleration of the plane if all four engines are running and each produces the same force?
15. While dragging a crate a workman exerts a force of 628 N. Later, the mass of the crate is increased by a factor of 3.8. If the workman exerts the same force, how does the new acceleration compare to the old acceleration?
16. A rocket accelerates in space at a rate of "1 g." The rocket exerts a force of 12,482 N. Later in flight the rocket exerts 46,458 N. What is the rocket's new acceleration? What is the rocket's new acceleration in g's?
17. A race car exerts 19,454 N while the car travels at a constant speed of 201 mph (91.36 m/s). What is the mass of the car?
(18-31 Weight and Mass)
18. A locomotive's mass is 18181.81 kg. What is its weight?
19. A small car weighs 10168.25 N. What is its mass?
20. What is the weight of an infant whose mass is 1.76 kg?
21. An F-14's mass is 29,545 kg. What is its weight?
22. What is the mass of a runner whose weight is 648 N?
23. The surface gravity of the Sun is 274 m/s². How many Earth g's is this?
24. The planet Mercury has 0.37 g's compared to the Earth. What is the acceleration on Mercury in m/s²?
25. A plane crashes with a deceleration of 185 m/s². How many g's is this?
26. A baseball traveling 38 m/s is caught by the catcher. The catcher takes 0.1 seconds to stop the ball. What is the acceleration of the ball and how many g's is this?
27. A very fast car accelerates from rest to 32 m/s (71.68 mph) in 4.2 seconds. What is the acceleration of the car and how many g's is this?
28. The Space Shuttle travels from launch to 529.2 m in 6.0 seconds. What is the acceleration of the shuttle and how many g's is this?
29. The Space Shuttle's mass (with boosters) is 654,506 kg. The average force of the shuttle's engines is 25,656,635.2 N. What is the acceleration of the shuttle in m/s² and in g's?
30. How can the answers to #28 and #29 both be correct?
31. What is the SI weight of a McDonald's Quarter Pounder sandwich?
32. A little boy, mass = 40 kg, is riding in a wagon pulled by his HUGE dog, Howard. What is the acceleration of the wagon if the dog pulls with a force of 30 N? (Assume the wagon rolls on a frictionless surface.)
33. The wagon and boy mentioned in the previous problem are let loose by Howard the dog. The wagon freely rolls until it hits a patch of ground that slows down the wagon until it comes to a rest. If it takes 10 seconds to come to a stop in 15 meters, what is the frictional force stopping the wagon?
34. A speed boat in the water experiences an acceleration of 0.524 m/s².
The boat's mass is 842 kg. What is the force that the boat's engines are putting out?
35. A stalled car is pushed with a force of 342 N from rest. How far does the car travel in 12 seconds if its mass is 989 kg?
36. How far does the car travel in the previous problem if the pushing force is doubled?
37. A little boy is pulling a wagon full of 10 bricks. The mass of the wagon is too small to be considered. If the boy later is pulling the wagon with the same force and the wagon has 45 bricks in it, then how does the acceleration of the 45-brick wagon compare to the acceleration of the 10-brick wagon?
38. A car accelerates with a given force. Later the same car accelerates with 1/6 of its original acceleration and it now has 1.4 times its earlier mass. (A) How does the car's later force compare with its earlier force? (B) If its earlier force is 1523 N, then what is the car's later force?
39. What force does the car exert if its mass is 1201 kg and the car goes from 5.4 m/s to 16.3 m/s in 107 meters?
40. What are Newton's 3 Laws, and which ones are used in shaking a catsup bottle to get the catsup out when it is "stuck" in the bottle?
41. An ice skater is spinning when she begins to draw in her arms. As she does this, what happens to her rate of spin? Which law does this fall under?
42. A 1027 kg car is resting at a stop light. The car moves with a force of 1528 N for 22 s. Then the car travels at a constant velocity for 10 seconds. Finally, the car stops with a force of 4056 N. HOW MUCH DISTANCE IS TRAVELED BY THE CAR DURING THIS JOURNEY?

Answers:
1) 38.25 N  2) 6600 N  3) 0.87 m/s²  4) 587.5 kg  5) 1500 kg  6) 51.50 N  7) 9 g's; 721.48 N  8) 1,007,484.5 N  9) 958.07 kg  10) 2.98 N  11a) 7.89 m/s²  11b) 2.62 m/s²  11c) 5.27 m/s²  11d) ???  11e) 6682.15 N  12) 1.285 m/s²  13) 252 N  14) 35.6 m/s²  15) New Accel = (0.26) × Old Accel  16) 3.72 g's  17) ???  18) 178181.74 N  19) 1037.58 kg  20) 17.25 N  21) 289541 N  22) 66.12 kg  23) 27.96 g's  24) 3.63 m/s²  25) 18.88 g's  26) 380 m/s², 38.78 g's  27) 7.62 m/s², 0.78 g's  28) 29.4 m/s², 3 g's  29) 39.2 m/s², 4 g's  30) ???  31) ???  32) 0.75 m/s²  33) 12 N (a = 0.3 m/s²)  34) 441.21 N  35) 24.90 m (0.35 m/s²)  36) 49.80 m (twice as far)  37) accel of 45-brick wagon = (1/4.5) × (accel of the 10-brick wagon)  38) new force = 0.233 × old force; 355.37 N  39) 1327.44 N (1.1053 m/s²)  40) 1st  41) Spin faster (1st)  42) 360.05 m + 327.32 m + 135.64 m = 823.02 m

Worked solutions:
1) m = 45 kg, a = 0.85 m/s², F = ? F = ma = (45)(0.85) = 38.25 N
2) m = 1650 kg, a = 4.0 m/s², F = ? F = ma = 1650 × 4.0 = 6600 N
3) m = 68 kg, F = 59 N, a = ? F = ma, 59 = 68a, a = 0.87 m/s²
4) a = 0.08 m/s², F = 47 N, m = ? F = ma, 47 = m(0.08), m = 587.5 kg
5) F = (3)(425 N) = 1275 N, a = 0.85 m/s², m = ? F = ma, 1275 = m(0.85), m = 1500 kg
6) m = 0.314 kg, a = 164 m/s², F = ? F = ma = (0.314)(164) = 51.50 N
7) m = 8.18 kg, a = 88.2 m/s²; g's: 88.2/9.8 = 9 g's; F = ma = 8.18 × 88.2 = 721.48 N (about 162 lb)
8) v = 68.2 m/s, vo = 0 (rest), t = 2 s, m = 29,545 kg; v = vo + at, 68.2 = 0 + a(2), a = 34.1 m/s²; F = ma = (29,545)(34.1) = 1,007,484.5 N
9) vo = 0, v = 27 m/s, t = 6.3 s, F = 4106 N, m = ? v = vo + at, 27 = 0 + a(6.3), a = 4.2857 m/s²; F = ma, 4106 = m(4.2857), m = 958.07 kg
10) v = 0, x = 15 m, t = 23 s, m = 52.5 kg; x = ½(vo + v)t gives vo = 2x/t = 1.30 m/s, so a = (v - vo)/t = -0.0567 m/s²; F = ma = 52.5 × 0.0567 = 2.98 N. FRICTION
12) If it is towing a car like itself, then the car's engine is supplying the same force to double the mass. Therefore (F = ma) the acceleration is half, or 1.285 m/s².
13) From F = ma: if the new acceleration is 5.6 times the old (5.6/1.00), then the force will also increase by a factor of 5.6: 5.6 × 45 N = 252 N.
14) Four times the force with the same mass means four times the acceleration: 4 × 8.9 = 35.6 m/s².
15) The force is the same but the mass is 3.8 times larger, so the new acceleration is the old acceleration divided by 3.8, i.e. about 0.26 times the old acceleration.
16) Fnew/Fold = 46,458/12,482 = 3.72, so anew/aold = 3.72; anew = 3.72 g's = 36.48 m/s².
18) g = 9.80 m/s², m = 18181.81 kg, w = ? w = mg = 18181.81 × 9.8 = 178181.74 N
19) g = 9.80 m/s², w = 10168.25 N; w = mg, m = 1037.58 kg
20) g = 9.80 m/s², m = 1.76 kg, w = ? w = mg = 17.25 N
21) g = 9.80 m/s², m = 29,545 kg; w = mg = 289541 N
22) g = 9.80 m/s², w = 648 N; w = mg, m = 66.12 kg
23) g = 9.80 m/s², gSUN = 274 m/s²; 274/9.8 = 27.96 Earth g's
24) g = 9.80 m/s²; 0.37 g = 0.37 × 9.80 = 3.63 m/s²
25) g = 9.80 m/s², a = 185 m/s²; 185/g = 18.88 g's
26) vo = 38 m/s, v = 0, t = 0.1 s, a = ? in g's: v = vo + at, 0 = 38 + a(0.1), a = -380 m/s²; 380/9.80 = 38.78 g's !!!
27) vo = 0, v = 32 m/s, t = 4.2 s, a = ? v = vo + at, 32 = 0 + a(4.2), a = 7.62 m/s²; 7.62/9.80 = 0.78 g's
28) vo = 0, x = 529.2 m, t = 6.0 s, a = ? x = vo·t + ½at²: 529.2 = 0 + ½a(6.0)², a = 29.4 m/s²; 29.4/9.80 = 3.00 g's
29) m = 654,506 kg, F = 25,656,635.2 N; F = ma: 25,656,635.2 = 654,506a, a = 39.2 m/s²; 39.2/9.80 = 4.00 g's
30) While the thrust could produce 4 g's of acceleration, one of those g's of thrust is used to overcome gravity. The rest is used to accelerate the shuttle.
31) A quarter pound is 0.25 lb ≈ 0.113 kg (1 kg weighs about 2.205 lb), so w = mg ≈ 0.113 × 9.8 ≈ 1.11 N.
32) m = 40 kg, F = 30 N; F = ma, 30 = (40)a, a = 0.75 m/s²
33) v = 0, t = 10 s, x = 15 m, m = 40 kg; a = 2x/t² = 0.3 m/s²; F = ma = 40 × 0.3 = 12 N
34) a = 0.524 m/s², m = 842 kg, F = ? F = ma = 0.524 × 842 = 441.21 N
35) F = 342 N, vo = 0, x = ?, t = 12 s, m = 989 kg; F = ma: 342 = 989a, a = 0.3458 m/s²; x = vo·t + ½at² = 0 + ½(0.3458)(12)² = 24.90 m
36) If the pushing force is doubled, then the acceleration is doubled. Because the relationship between x and a is linear, if the acceleration is doubled then the distance is also doubled. Therefore, the car will travel 49.80 m.
by Tony Wayne ... (If you are a teacher, please feel free to use these resources in your teaching.)
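Most of the numerical answers above come from just two relationships, F = ma and w = mg. Here is a small Python sketch (the helper names are mine) that reproduces a few of them.

g = 9.8  # m/s^2, acceleration due to gravity near the Earth's surface

def force(mass, acceleration):
    # Newton's second law: F = m * a
    return mass * acceleration

def weight(mass):
    # Weight is the gravitational force on a mass: w = m * g
    return force(mass, g)

print(force(45, 0.85))    # problem 1:  38.25 N
print(force(1650, 4.0))   # problem 2:  6600.0 N
print(weight(18181.81))   # problem 18: about 178,182 N
print(274 / g)            # problem 23: the Sun's surface gravity, about 27.96 Earth g's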
http://www.mrwaynesclass.com/Newton/worksheets/handout/index.html
Outline of U.S. History/The Civil War and Reconstruction That this nation under God shall have a new birth of freedom. President Abraham Lincoln, November 19, 1863 Secession and civil war Lincoln’s victory in the presidential election of November 1860 made South Carolina’s secession from the Union December 20 a foregone conclusion. The state had long been waiting for an event that would unite the South against the antislavery forces. By February 1, 1861, five more Southern states had seceded. On February 8, the six states signed a provisional constitution for the Confederate States of America. The remaining Southern states as yet remained in the Union, although Texas had begun to move on its secession. Less than a month later, March 4, 1861, Abraham Lincoln was sworn in as president of the United States. In his inaugural address, he declared the Confederacy “legally void.” His speech closed with a plea for restoration of the bonds of union, but the South turned a deaf ear. On April 12, Confederate guns opened fire on the federal garrison at Fort Sumter in the Charleston, South Carolina, harbor. A war had begun in which more Americans would die than in any other conflict before or since. In the seven states that had seceded, the people responded positively to the Confederate action and the leadership of Confederate President Jefferson Davis. Both sides now tensely awaited the action of the slave states that thus far had remained loyal. Virginia seceded on April 17; Arkansas, Tennessee, and North Carolina followed quickly. No state left the Union with greater reluctance than Virginia. Her statesmen had a leading part in the winning of the Revolution and the framing of the Constitution, and she had provided the nation with five presidents. With Virginia went Colonel Robert E. Lee, who declined the command of the Union Army out of loyalty to his native state. Between the enlarged Confederacy and the free-soil North lay the border slave states of Delaware, Maryland, Kentucky, and Missouri, which, despite some sympathy with the South, would remain loyal to the Union. Each side entered the war with high hopes for an early victory. In material resources the North enjoyed a decided advantage. Twenty-three states with a population of 22 million were arrayed against 11 states inhabited by nine million, including slaves. The industrial superiority of the North exceeded even its preponderance in population, providing it with abundant facilities for manufacturing arms and ammunition, clothing, and other supplies. It had a greatly superior railway network. The South nonetheless had certain advantages. The most important was geography; the South was fighting a defensive war on its own territory. It could establish its independence simply by beating off the Northern armies. The South also had a stronger military tradition, and possessed the more experienced military leaders. Western advance, Eastern stalemate The first large battle of the war, at Bull Run, Virginia (also known as First Manassas) near Washington, stripped away any illusions that victory would be quick or easy. It also established a pattern, at least in the Eastern United States, of bloody Southern victories that never translated into a decisive military advantage for the Confederacy. In contrast to its military failures in the East, the Union was able to secure battlefield victories in the West and slow strategic success at sea. Most of the Navy, at the war’s beginning, was in Union hands, but it was scattered and weak. 
Secretary of the Navy Gideon Welles took prompt measures to strengthen it. Lincoln then proclaimed a blockade of the Southern coasts. Although the effect of the blockade was negligible at first, by 1863 it almost completely prevented shipments of cotton to Europe and blocked the importation of sorely needed munitions, clothing, and medical supplies to the South. A brilliant Union naval commander, David Farragut, conducted two remarkable operations. In April 1862, he took a fleet into the mouth of the Mississippi River and forced the surrender of the largest city in the South, New Orleans, Louisiana. In August 1864, with the cry, “Damn the torpedoes! Full speed ahead,” he led a force past the fortified entrance of Mobile Bay, Alabama, captured a Confederate ironclad vessel, and sealed off the port. In the Mississippi Valley, the Union forces won an almost uninterrupted series of victories. They began by breaking a long Confederate line in Tennessee, thus making it possible to occupy almost all the western part of the state. When the important Mississippi River port of Memphis was taken, Union troops advanced some 320 kilometers into the heart of the Confederacy. With the tenacious General Ulysses S. Grant in command, they withstood a sudden Confederate counterattack at Shiloh, on the bluffs overlooking the Tennessee River. Those killed and wounded at Shiloh numbered more than 10,000 on each side, a casualty rate that Americans had never before experienced. But it was only the beginning of the carnage. In Virginia, by contrast, Union troops continued to meet one defeat after another in a succession of bloody attempts to capture Richmond, the Confederate capital. The Confederates enjoyed strong defense positions afforded by numerous streams cutting the road between Washington and Richmond. Their two best generals, Robert E. Lee and Thomas J. (“Stonewall”) Jackson, both far surpassed in ability their early Union counterparts. In 1862 Union commander George McClellan made a slow, excessively cautious attempt to seize Richmond. But in the Seven Days’ Battles between June 25 and July 1, the Union troops were driven steadily backward, both sides suffering terrible losses. After another Confederate victory at the Second Battle of Bull Run (or Second Manassas), Lee crossed the Potomac River and invaded Maryland. McClellan again responded tentatively, despite learning that Lee had split his army and was heavily outnumbered. The Union and Confederate Armies met at Antietam Creek, near Sharpsburg, Maryland, on September 17, 1862, in the bloodiest single day of the war: More than 4,000 died on both sides and 18,000 were wounded. Despite his numerical advantage, however, McClellan failed to break Lee’s lines or press the attack, and Lee was able to retreat across the Potomac with his army intact. As a result, Lincoln fired McClellan. Although Antietam was inconclusive in military terms, its consequences were nonetheless momentous. Great Britain and France, both on the verge of recognizing the Confederacy, delayed their decision, and the South never received the diplomatic recognition and the economic aid from Europe that it desperately sought. Antietam also gave Lincoln the opening he needed to issue the preliminary Emancipation Proclamation, which declared that as of January 1, 1863, all slaves in states rebelling against the Union were free. 
In practical terms, the proclamation had little immediate impact; it freed slaves only in the Confederate states, while leaving slavery intact in the border states. Politically, however, it meant that in addition to preserving the Union, the abolition of slavery was now a declared objective of the Union war effort. The final Emancipation Proclamation, issued January 1, 1863, also authorized the recruitment of African Americans into the Union Army, a move abolitionist leaders such as Frederick Douglass had been urging since the beginning of armed conflict. Union forces already had been sheltering escaped slaves as “contraband of war,” but following the Emancipation Proclamation, the Union Army recruited and trained regiments of African-American soldiers that fought with distinction in battles from Virginia to the Mississippi. About 178,000 African Americans served in the U.S. Colored Troops, and 29,500 served in the Union Navy. Despite the political gains represented by the Emancipation Proclamation, however, the North’s military prospects in the East remained bleak as Lee’s Army of Northern Virginia continued to maul the Union Army of the Potomac, first at Fredericksburg, Virginia, in December 1862 and then at Chancellorsville in May 1863. But Chancellorsville, although one of Lee’s most brilliant military victories, was also one of his most costly. His most valued lieutenant, General “Stonewall” Jackson, was mistakenly shot and killed by his own men. Gettysburg to Appomattox Yet none of the Confederate victories was decisive. The Union simply mustered new armies and tried again. Believing that the North’s crushing defeat at Chancellorsville gave him his chance, Lee struck northward into Pennsylvania at the beginning of July 1863, almost reaching the state capital at Harrisburg. A strong Union force intercepted him at Gettysburg, where, in a titanic three‑day battle—the largest of the Civil War—the Confederates made a valiant effort to break the Union lines. They failed, and on July 4 Lee’s army, after crippling losses, retreated behind the Potomac. More than 3,000 Union soldiers and almost 4,000 Confederates died at Gettysburg; wounded and missing totaled more than 20,000 on each side. On November 19, 1863, Lincoln dedicated a new national cemetery there with perhaps the most famous address in U.S. history. He concluded his brief remarks with these words: … we here highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth. On the Mississippi, Union control had been blocked at Vicksburg, where the Confederates had strongly fortified themselves on bluffs too high for naval attack. In early 1863 Grant began to move below and around Vicksburg, subjecting it to a six‑week siege. On July 4, he captured the town, together with the strongest Confederate Army in the West. The river was now entirely in Union hands. The Confederacy was broken in two, and it became almost impossible to bring supplies from Texas and Arkansas. The Northern victories at Vicksburg and Gettysburg in July 1863 marked the turning point of the war, although the bloodshed continued unabated for more than a year-and-a-half. Lincoln brought Grant east and made him commander-in-chief of all Union forces. In May 1864 Grant advanced deep into Virginia and met Lee’s Confederate Army in the three-day Battle of the Wilderness. 
Losses on both sides were heavy, but unlike other Union commanders, Grant refused to retreat. Instead, he attempted to outflank Lee, stretching the Confederate lines and pounding away with artillery and infantry attacks. “I propose to fight it out on this line if it takes all summer,” the Union commander said at Spotsylvania, during five days of bloody trench warfare that characterized fighting on the eastern front for almost a year. In the West, Union forces gained control of Tennessee in the fall of 1863 with victories at Chattanooga and nearby Lookout Mountain, opening the way for General William T. Sherman to invade Georgia. Sherman outmaneuvered several smaller Confederate armies, occupied the state capital of Atlanta, then marched to the Atlantic coast, systematically destroying railroads, factories, warehouses, and other facilities in his path. His men, cut off from their normal supply lines, ravaged the countryside for food. From the coast, Sherman marched northward; by February 1865, he had taken Charleston, South Carolina, where the first shots of the Civil War had been fired. Sherman, more than any other Union general, understood that destroying the will and morale of the South was as important as defeating its armies. Grant, meanwhile, laid siege to Petersburg, Virginia, for nine months, before Lee, in March 1865, knew that he had to abandon both Petersburg and the Confederate capital of Richmond in an attempt to retreat south. But it was too late. On April 9, 1865, surrounded by huge Union armies, Lee surrendered to Grant at Appomattox Courthouse. Although scattered fighting continued elsewhere for several months, the Civil War was over. The terms of surrender at Appomattox were magnanimous, and on his return from his meeting with Lee, Grant quieted the noisy demonstrations of his soldiers by reminding them: “The rebels are our countrymen again.” The war for Southern independence had become the “lost cause,” whose hero, Robert E. Lee, had won wide admiration through the brilliance of his leadership and his greatness in defeat.
With malice toward none
For the North, the war produced a still greater hero in Abraham Lincoln—a man eager, above all else, to weld the Union together again, not by force and repression but by warmth and generosity. In 1864 he had been elected for a second term as president, defeating his Democratic opponent, George McClellan, the general he had dismissed after Antietam. Lincoln’s second inaugural address closed with these words: With malice toward none; with charity for all; with firmness in the right, as God gives us to see the right, let us strive on to finish the work we are in; to bind up the nation’s wounds; to care for him who shall have borne the battle, and for his widow, and his orphan—to do all which may achieve and cherish a just, and a lasting peace, among ourselves, and with all nations. Three weeks later, two days after Lee’s surrender, Lincoln delivered his last public address, in which he unfolded a generous reconstruction policy. On April 14, 1865, the president held what was to be his last Cabinet meeting. That evening—with his wife and a young couple who were his guests—he attended a performance at Ford’s Theater. There, as he sat in the presidential box, he was assassinated by John Wilkes Booth, an actor embittered by the South’s defeat. Booth was killed in a shootout some days later in a barn in the Virginia countryside. His accomplices were captured and later executed.
Lincoln died in a downstairs bedroom of a house across the street from Ford’s Theater on the morning of April 15. Poet James Russell Lowell wrote: Never before that startled April morning did such multitudes of men shed tears for the death of one they had never seen, as if with him a friendly presence had been taken from their lives, leaving them colder and darker. Never was funeral panegyric so eloquent as the silent look of sympathy which strangers exchanged when they met that day. Their common manhood had lost a kinsman. The first great task confronting the victorious North—now under the leadership of Lincoln’s vice president, Andrew Johnson, a Southerner who remained loyal to the Union—was to determine the status of the states that had seceded. Lincoln had already set the stage. In his view, the people of the Southern states had never legally seceded; they had been misled by some disloyal citizens into a defiance of federal authority. And since the war was the act of individuals, the federal government would have to deal with these individuals and not with the states. Thus, in 1863 Lincoln proclaimed that if in any state 10 percent of the voters of record in 1860 would form a government loyal to the U.S. Constitution and would acknowledge obedience to the laws of the Congress and the proclamations of the president, he would recognize the government so created as the state’s legal government. Congress rejected this plan. Many Republicans feared it would simply entrench former rebels in power; they challenged Lincoln’s right to deal with the rebel states without consultation. Some members of Congress advocated severe punishment for all the seceded states; others simply felt the war would have been in vain if the old Southern establishment was restored to power. Yet even before the war was wholly over, new governments had been set up in Virginia, Tennessee, Arkansas, and Louisiana. To deal with one of its major concerns—the condition of former slaves—Congress established the Freedmen’s Bureau in March 1865 to act as guardian over African Americans and guide them toward self-support. And in December of that year, the 13th Amendment to the U.S. Constitution, which abolished slavery, was ratified. Throughout the summer of 1865 Johnson proceeded to carry out Lincoln’s reconstruction program, with minor modifications. By presidential proclamation he appointed a governor for each of the former Confederate states and freely restored political rights to many Southerners through use of presidential pardons. In due time conventions were held in each of the former Confederate states to repeal the ordinances of secession, repudiate the war debt, and draft new state constitutions. Eventually a native Unionist became governor in each state with authority to convoke a convention of loyal voters. Johnson called upon each convention to invalidate the secession, abolish slavery, repudiate all debts that went to aid the Confederacy, and ratify the 13th Amendment. By the end of 1865, this process was completed, with a few exceptions. Both Lincoln and Johnson had foreseen that the Congress would have the right to deny Southern legislators seats in the U.S.
Senate or House of Representatives, under the clause of the Constitution that says, “Each house shall be the judge of the … qualifications of its own members.” This came to pass when, under the leadership of Thaddeus Stevens, those congressmen called “Radical Republicans,” who were wary of a quick and easy “reconstruction,” refused to seat newly elected Southern senators and representatives. Within the next few months, Congress proceeded to work out a plan for the reconstruction of the South quite different from the one Lincoln had started and Johnson had continued. Wide public support gradually developed for those members of Congress who believed that African Americans should be given full citizenship. By July 1866, Congress had passed a civil rights bill and set up a new Freedmen’s Bureau—both designed to prevent racial discrimination by Southern legislatures. Following this, the Congress passed a 14th Amendment to the Constitution, stating that “all persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside.” This repudiated the Dred Scott ruling, which had denied slaves their right of citizenship. All the Southern state legislatures, with the exception of Tennessee, refused to ratify the amendment, some voting against it unanimously. In addition, Southern state legislatures passed “codes” to regulate the African-American freedmen. The codes differed from state to state, but some provisions were common. African Americans were required to enter into annual labor contracts, with penalties imposed in case of violation; dependent children were subject to compulsory apprenticeship and corporal punishments by masters; vagrants could be sold into private service if they could not pay severe fines. Many Northerners interpreted the Southern response as an attempt to reestablish slavery and repudiate the hard-won Union victory in the Civil War. It did not help that Johnson, although a Unionist, was a Southern Democrat with an addiction to intemperate rhetoric and an aversion to political compromise. Republicans swept the congressional elections of 1866. Firmly in power, the Radicals imposed their own vision of Reconstruction. In the Reconstruction Act of March 1867, Congress, ignoring the governments that had been established in the Southern states, divided the South into five military districts, each administered by a Union general. Escape from permanent military government was open to those states that established civil governments, ratified the 14th Amendment, and adopted African-American suffrage. Supporters of the Confederacy who had not taken oaths of loyalty to the United States generally could not vote. The 14th Amendment was ratified in 1868. The 15th Amendment, passed by Congress the following year and ratified in 1870 by state legislatures, provided that “The right of citizens of the United States to vote shall not be denied or abridged by the United States or any state on account of race, color, or previous condition of servitude.” The Radical Republicans in Congress were infuriated by President Johnson’s vetoes (even though they were overridden) of legislation protecting newly freed African Americans and punishing former Confederate leaders by depriving them of the right to hold office. Congressional antipathy to Johnson was so great that, for the first time in American history, impeachment proceedings were instituted to remove the president from office. 
Johnson’s main offense was his opposition to punitive congressional policies and the violent language he used in criticizing them. The most serious legal charge his enemies could level against him was that, despite the Tenure of Office Act (which required Senate approval for the removal of any officeholder the Senate had previously confirmed), he had removed from his Cabinet the secretary of war, a staunch supporter of the Congress. When the impeachment trial was held in the Senate, it was proved that Johnson was technically within his rights in removing the Cabinet member. Even more important, it was pointed out that a dangerous precedent would be set if the Congress were to remove a president because he disagreed with the majority of its members. The final vote was one short of the two-thirds required for conviction. Johnson continued in office until his term expired in 1869, but Congress had established an ascendancy that would endure for the rest of the century. The Republican victor in the presidential election of 1868, former Union general Ulysses S. Grant, would enforce the reconstruction policies the Radicals had initiated. By June 1868, Congress had readmitted the majority of the former Confederate states back into the Union. In many of these reconstructed states, the majority of the governors, representatives, and senators were Northern men—so-called carpetbaggers—who had gone South after the war to make their political fortunes, often in alliance with newly freed African Americans. In the legislatures of Louisiana and South Carolina, African Americans actually gained a majority of the seats. Many Southern whites, their political and social dominance threatened, turned to illegal means to prevent African Americans from gaining equality. Violence against African Americans by such extra-legal organizations as the Ku Klux Klan became more and more frequent. Increasing disorder led to the passage of Enforcement Acts in 1870 and 1871, severely punishing those who attempted to deprive the African-American freedmen of their civil rights. The end of Reconstruction As time passed, it became more and more obvious that the problems of the South were not being solved by harsh laws and continuing rancor against former Confederates. Moreover, some Southern Radical state governments with prominent African-American officials appeared corrupt and inefficient. The nation was quickly tiring of the attempt to impose racial democracy and liberal values on the South with Union bayonets. In May 1872, Congress passed a general Amnesty Act, restoring full political rights to all but about 500 former rebels. Gradually Southern states began electing members of the Democratic Party into office, ousting carpetbagger governments and intimidating African Americans from voting or attempting to hold public office. By 1876 the Republicans remained in power in only three Southern states. As part of the bargaining that resolved the disputed presidential elections that year in favor of Rutherford B. Hayes, the Republicans promised to withdraw federal troops that had propped up the remaining Republican governments. In 1877 Hayes kept his promise, tacitly abandoning federal responsibility for enforcing blacks’ civil rights. The South was still a region devastated by war, burdened by debt caused by misgovernment, and demoralized by a decade of racial warfare. Unfortunately, the pendulum of national racial policy swung from one extreme to the other. 
A federal government that had supported harsh penalties against Southern white leaders now tolerated new and humiliating kinds of discrimination against African Americans. The last quarter of the 19th century saw a profusion of “Jim Crow” laws in Southern states that segregated public schools, forbade or limited African-American access to many public facilities such as parks, restaurants, and hotels, and denied most blacks the right to vote by imposing poll taxes and arbitrary literacy tests. “Jim Crow” is a term derived from a song in an 1828 minstrel show where a white man first performed in “blackface.” Historians have tended to judge Reconstruction harshly, as a murky period of political conflict, corruption, and regression that failed to achieve its original high-minded goals and collapsed into a sinkhole of virulent racism. Slaves were granted freedom, but the North completely failed to address their economic needs. The Freedmen’s Bureau was unable to provide former slaves with political and economic opportunity. Union military occupiers often could not even protect them from violence and intimidation. Indeed, federal army officers and agents of the Freedmen’s Bureau were often racists themselves. Without economic resources of their own, many Southern African Americans were forced to become tenant farmers on land owned by their former masters, caught in a cycle of poverty that would continue well into the 20th century. Reconstruction-era governments did make genuine gains in rebuilding Southern states devastated by the war, and in expanding public services, notably in establishing tax-supported, free public schools for African Americans and whites. However, recalcitrant Southerners seized upon instances of corruption (hardly unique to the South in this era) and exploited them to bring down radical regimes. The failure of Reconstruction meant that the struggle of African Americans for equality and freedom was deferred until the 20th century—when it would become a national, not just a Southern issue. The Civil War and new patterns of American politics The controversies of the 1850s had destroyed the Whig Party, created the Republican Party, and divided the Democratic Party along regional lines. The Civil War demonstrated that the Whigs were gone beyond recall and the Republicans on the scene to stay. It also laid the basis for a reunited Democratic Party. The Republicans could seamlessly replace the Whigs throughout the North and West because they were far more than a free-soil/antislavery force. Most of their leaders had started as Whigs and continued the Whig interest in federally assisted national development. The need to manage a war did not deter them from also enacting a protective tariff (1861) to foster American manufacturing, the Homestead Act (1862) to encourage Western settlement, the Morrill Act (1862) to establish “land grant” agricultural and technical colleges, and a series of Pacific Railway Acts (1862-64) to underwrite a transcontinental railway line. These measures rallied support throughout the Union from groups to whom slavery was a secondary issue and ensured the party’s continuance as the latest manifestation of a political creed that had been advanced by Alexander Hamilton and Henry Clay. The war also laid the basis for Democratic reunification because Northern opposition to it centered in the Democratic Party. As might be expected from the party of “popular sovereignty,” some Democrats believed that full-scale war to reinstate the Union was unjustified. 
This group came to be known as the Peace Democrats. Their more extreme elements were called “Copperheads.” Moreover, few Democrats, whether of the “war” or “peace” faction, believed the emancipation of the slaves was worth Northern blood. Opposition to emancipation had long been party policy. In 1862, for example, virtually every Democrat in Congress voted against eliminating slavery in the District of Columbia and prohibiting it in the territories. Much of this opposition came from the working poor, particularly Irish and German Catholic immigrants, who feared a massive migration of newly freed African Americans to the North. They also resented the establishment of a military draft (March 1863) that disproportionately affected them. Race riots erupted in several Northern cities. The worst of these occurred in New York, July 13-16, 1863, precipitated by Democratic Governor Horatio Seymour’s condemnation of military conscription. Federal troops, who just days earlier had been engaged at Gettysburg, were sent to restore order. The Republicans prosecuted the war with little regard for civil liberties. In September 1862, Lincoln suspended the writ of habeas corpus and imposed martial law on those who interfered with recruitment or gave aid and comfort to the rebels. This breach of civil law, although constitutionally justified during times of crisis, gave the Democrats another opportunity to criticize Lincoln. Secretary of War Edwin Stanton enforced martial law vigorously, and many thousands—most of them Southern sympathizers or Democrats—were arrested. Despite the Union victories at Vicksburg and Gettysburg in 1863, Democratic “peace” candidates continued to play on the nation’s misfortunes and racial sensitivities. Indeed, the mood of the North was such that Lincoln was convinced he would lose his re-election bid in November 1864. Largely for that reason, the Republican Party renamed itself the Union Party and drafted the Tennessee Democrat Andrew Johnson to be Lincoln’s running mate. Sherman’s victories in the South sealed the election for them. Lincoln’s assassination, the rise of Radical Republicanism, and Johnson’s blundering leadership all played into a postwar pattern of politics in which the Republican Party suffered from overreaching in its efforts to remake the South, while the Democrats, through their criticism of Reconstruction, allied themselves with the neo-Confederate Southern white majority. U.S. Grant’s status as a national hero carried the Republicans through two presidential elections, but as the South emerged from Reconstruction, it became apparent that the country was nearly evenly divided between the two parties. The Republicans would be dominant in the industrial Northeast until the 1930s and strong in most of the rest of the country outside the South. However, their appeal as the party of strong government and national development increasingly would be perceived as one of allegiance to big business and finance. When President Hayes ended Reconstruction, he hoped it would be possible to build the Republican Party in the South, using the old Whigs as a base and the appeal of regional development as a primary issue. By then, however, Republicanism as the South’s white majority perceived it was identified with a hated African-American supremacy. For the next three-quarters of a century, the South would be solidly Democratic. For much of that time, the national Democratic Party would pay solemn deference to states’ rights while ignoring civil rights.
The group that would suffer the most as a legacy of Reconstruction was the African Americans.
http://en.m.wikibooks.org/wiki/Outline_of_U.S._History/The_Civil_War_and_Reconstruction
The Mercator projection is a cylindrical map projection presented by the Flemish geographer and cartographer Gerardus Mercator in 1569. It became the standard map projection for nautical purposes because of its ability to represent lines of constant course, known as rhumb lines or loxodromes, as straight segments which conserve the angles with the meridians. While the linear scale is equal in all directions around any point, thus preserving the angles and the shapes of small objects (which makes the projection conformal), the Mercator projection distorts the size and shape of large objects, as the scale increases from the Equator to the poles, where it becomes infinite. Properties and historical details Mercator's 1569 edition was a large planisphere measuring 202 by 124 cm, printed in eighteen separate sheets. As in all cylindrical projections, parallels and meridians are straight and perpendicular to each other. In accomplishing this, the unavoidable east-west stretching of the map, which increases as distance away from the equator increases, is accompanied by a corresponding north-south stretching, so that at every point location, the east-west scale is the same as the north-south scale, making the projection conformal. A Mercator map can never fully show the polar areas, since linear scale becomes infinitely high at the poles. Being a conformal projection, angles are preserved around all locations. However scale varies from place to place, distorting the size of geographical objects and conveying a distorted idea of the overall geometry of the planet. At latitudes greater than 70° north or south, the Mercator projection is practically unusable. All lines of constant bearing (rhumbs or loxodromes—those making constant angles with the meridians) are represented by straight segments on a Mercator map. The two properties, conformality and straight rhumb lines, make this projection uniquely suited to marine navigation: courses and bearings are measured using wind roses or protractors, and the corresponding directions are easily transferred from point to point, on the map, with the help of a parallel ruler or a pair of navigational protractor triangles. The name and explanations given by Mercator to his world map (Nova et Aucta Orbis Terrae Descriptio ad Usum Navigantium Emendata: "new and augmented description of Earth corrected for the use of sailors") show that it was expressly conceived for the use of marine navigation. Although the method of construction is not explained by the author, Mercator probably used a graphical method, transferring some rhumb lines previously plotted on a globe to a square graticule, and then adjusting the spacing between parallels so that those lines became straight, making the same angle with the meridians as in the globe. The development of the Mercator projection represented a major breakthrough in the nautical cartography of the 16th century. However, it was much ahead of its time, since the old navigational and surveying techniques were not compatible with its use in navigation. Two main problems prevented its immediate application: the impossibility of determining the longitude at sea with adequate accuracy and the fact that magnetic directions, instead of geographical directions, were used in navigation. Only in the middle of the 18th century, after the marine chronometer was invented and the spatial distribution of magnetic declination was known, could the Mercator projection be fully adopted by navigators. 
Several authors are associated with the development of the Mercator projection:
- German Erhard Etzlaub (c. 1460–1532), who had engraved miniature "compass maps" (about 10×8 cm) of Europe and parts of Africa, latitudes 67°–0°, to allow adjustment of his portable pocket-size sundials, was for decades declared to have designed "a projection identical to Mercator's".
- Portuguese mathematician and cosmographer Pedro Nunes (1502–1578), who first described the loxodrome and its use in marine navigation, and suggested the construction of a nautical atlas composed of several large-scale sheets in the cylindrical equidistant projection as a way to minimize distortion of directions. If these sheets were brought to the same scale and assembled, an approximation of the Mercator projection would be obtained (1537).
- English mathematician Edward Wright (c. 1558–1615), who published accurate tables for its construction (1599, 1610).
- English mathematicians Thomas Harriot (1560–1621) and Henry Bond (c. 1600–1678) who, independently (c. 1600 and 1645), associated the Mercator projection with its modern logarithmic formula, later deduced by calculus.
Criticism
Because the scale increases away from the Equator, the projection greatly exaggerates the apparent size of regions at high latitudes. For example:
- Greenland takes as much space on the map as Africa, when in reality Africa's area is 14 times greater and Greenland's is comparable to Algeria's alone.
- Alaska takes as much area on the map as Brazil, when Brazil's area is nearly five times that of Alaska.
- Finland appears with a greater north-south extent than India, although India's is greater.
- Antarctica appears as the biggest continent, although it is actually the fifth in terms of area.
Although the Mercator projection is still used commonly for navigation, due to its unique properties, cartographers agree that it is not suited to general reference world maps due to its distortion of land area. Mercator himself used the equal-area sinusoidal projection to show relative areas. As a result of these criticisms, modern atlases no longer use the Mercator projection for world maps or for areas distant from the equator, preferring other cylindrical projections, or forms of equal-area projection. The Mercator projection is still commonly used for areas near the equator, however, where distortion is minimal.
Arno Peters stirred controversy when he proposed what is now usually called the Gall–Peters projection as the alternative to the Mercator. The projection he promoted is a specific parameterization of the cylindrical equal-area projection. In response, a 1989 resolution by seven North American geographical groups deprecated the use of cylindrical projections for general purpose world maps, which would include both the Mercator and the Gall–Peters.
Many major online street mapping services (Bing Maps, OpenStreetMap, Google Maps, MapQuest, Yahoo Maps, and others) use a variant of the Mercator projection for their map images. Despite its obvious scale variation at small scales, the projection is well-suited as an interactive world map that can be zoomed seamlessly to large-scale (local) maps, where there is relatively little distortion due to the variant projection's near-conformality. The major online street mapping services' tiling systems display most of the world at the lowest zoom level as a single square image, excluding the polar regions by truncation at latitudes of φmax = ±85.05113°. (See below.) Latitude values outside this range are mapped using a different relationship that doesn't diverge at φ = ±90°.
Mathematics of the Mercator projection The spherical model Although the surface of Earth is best modelled by an oblate ellipsoid of revolution, for small scale maps the ellipsoid is approximated by a sphere of radius a. Many different ways exist for calculating a. The simplest include (a) the equatorial radius of the ellipsoid, (b) the arithmetic or geometric mean of the semi-axes of the ellipsoid, (c) the radius of the sphere having the same volume as the ellipsoid. The range of all possible choices is about 35 km, but for small scale (large region) applications the variation may be ignored, and mean values of 6,371 km and 40,030 km may be taken for the radius and circumference respectively. These are the values used for numerical examples in later sections. Only high-accuracy cartography on large scale maps requires an ellipsoidal model. Cylindrical projections The spherical approximation of Earth with radius a can be modelled by a smaller sphere of radius R, called the globe in this section. The globe determines the scale of the map. The various cylindrical projections specify how the geographic detail is transferred from the globe to a cylinder tangential to it at the equator. The cylinder is then unrolled to give the planar map. The fraction R/a is called the representative fraction (RF) or the principal scale of the projection. For example, a Mercator map printed in a book might have an equatorial width of 13.4 cm corresponding to a globe radius of 2.13 cm and an RF of approximately 1/300M (M is used as an abbreviation for 1,000,000 in writing an RF) whereas Mercator's original 1569 map has a width of 198 cm corresponding to a globe radius of 31.5 cm and an RF of about 1/20M. A cylindrical map projection is specified by formulæ linking the geographic coordinates of latitude φ and longitude λ to Cartesian coordinates on the map with origin on the equator and x-axis along the equator. By construction, all points on the same meridian lie on the same generator of the cylinder at a constant value of x, but the distance y along the generator (measured from the equator) is an arbitrary function of latitude, y(φ). In general this function does not describe the geometrical projection (as of light rays onto a screen) from the centre of the globe to the cylinder, which is only one of an unlimited number of ways to conceptually project a cylindrical map. Since the cylinder is tangential to the globe at the equator, the scale factor between globe and cylinder is unity on the equator but nowhere else. In particular since the radius of a parallel, or circle of latitude, is R cos φ, the corresponding parallel on the map must have been stretched by a factor of 1/cos φ = sec φ. This scale factor on the parallel is conventionally denoted by k and the corresponding scale factor on the meridian is denoted by h. Small element geometry The relations between y(φ) and properties of the projection, such as the transformation of angles and the variation in scale, follow from the geometry of corresponding small elements on the globe and map. The figure below shows a point P at latitude φ and longitude λ on the globe and a nearby point Q at latitude φ+δφ and longitude λ+δλ. The vertical lines PK and MQ are arcs of meridians of length Rδφ. The horizontal lines PM and KQ are arcs of parallels of length R(cos φ)δλ. The corresponding points on the projection define a rectangle of width δx and height δy. 
For small elements, the angle PKQ is approximately a right angle and therefore
tan α ≈ (R cos φ δλ)/(R δφ),   tan β ≈ δx/δy.
The previously mentioned scaling factors from globe to cylinder are given by
- parallel scale factor k(φ) = δx/(R cos φ δλ),
- meridian scale factor h(φ) = δy/(R δφ).
Since the meridians are mapped to lines of constant x we must have x=R(λ−λ0) and δx=Rδλ, (λ in radians). Therefore in the limit of infinitesimally small elements
tan β = (R sec φ / y′(φ)) tan α,   k = sec φ,   h = y′(φ)/R.
Derivation of the Mercator projection
The choice of the function y(φ) for the Mercator projection is determined by the demand that the projection be conformal, a condition which can be defined in two equivalent ways:
- Equality of angles. The condition that a sailing course of constant azimuth α on the globe is mapped into a constant grid bearing β on the map. Setting α=β in the above equations gives y′(φ) = R sec φ.
- Isotropy of scale factors. This is the statement that the point scale factor is independent of direction so that small shapes are preserved by the projection. Setting h=k in the above equations again gives y′(φ) = R sec φ.
Integrating the equation y′(φ) = R sec φ, with y(0) = 0, gives
x = R(λ−λ0),   y = R ln[tan(π/4 + φ/2)].
In the first equation λ0 is the longitude of an arbitrary central meridian, usually, but not always, that of Greenwich (i.e., zero). The difference (λ−λ0) is in radians. The function y(φ) is plotted alongside for the case R=1: it tends to infinity at the poles. The linear y-axis values are not usually shown on printed maps; instead some maps show the non-linear scale of latitude values on the right. More often than not the maps show only a graticule of selected meridians and parallels.
Inverse transformations
λ = λ0 + x/R,   φ = 2 tan⁻¹(e^(y/R)) − π/2.
Alternative expressions
There are many alternative expressions for y(φ), all derived by elementary manipulations:
y = (R/2) ln[(1 + sin φ)/(1 − sin φ)] = R ln(sec φ + tan φ) = R tanh⁻¹(sin φ) = R sinh⁻¹(tan φ).
Corresponding inverses are:
φ = tan⁻¹[sinh(y/R)] = sin⁻¹[tanh(y/R)].
For angles expressed in degrees:
x = πR(λ° − λ0°)/180,   y = R ln tan(45° + φ°/2).
The above formulae are written in terms of the globe radius R. It is often convenient to work directly with the map width W=2πR. For example the basic transformation equations become
x = (W/2π)(λ−λ0),   y = (W/2π) ln tan(π/4 + φ/2).
Truncation and aspect ratio
The ordinate y of the Mercator becomes infinite at the poles and the map must be truncated at some latitude less than ninety degrees. This need not be done symmetrically. Mercator's original map is truncated at 80°N and 66°S with the result that European countries were moved towards the centre of the map. The aspect ratio of his map is 198/120=1.65. Even more extreme truncations have been used: a Finnish school atlas was truncated at approximately 76°N and 56°S, an aspect ratio of 1.97. Much web based mapping uses a zoomable version of the Mercator projection with an aspect ratio of unity. In this case the maximum latitude attained must correspond to y=±W/2, or equivalently y/R=π. Any of the inverse transformation formulae may be used to calculate the corresponding latitudes:
φmax = tan⁻¹[sinh(π)] = 2 tan⁻¹(e^π) − π/2 ≈ 85.05113°.
Scale factor
The figure comparing the infinitesimal elements on globe and projection shows that when α=β the triangles PQM and P'Q'M' are similar so that the scale factor in an arbitrary direction is the same as the parallel and meridian scale factors:
k = sec φ = h.
This result holds for an arbitrary direction: the definition of isotropy of the point scale factor. The graph shows the variation of the scale factor with latitude. Some numerical values are listed below.
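These formulae are easy to verify numerically. The short Python sketch below (the function names are ours, not part of any standard library) evaluates the forward and inverse transformations for R = 1 and recovers the truncation latitude quoted above:

    import math

    R = 1.0

    def mercator_y(lat_deg):
        # y(phi) = R ln tan(pi/4 + phi/2)
        phi = math.radians(lat_deg)
        return R * math.log(math.tan(math.pi / 4 + phi / 2))

    def inverse_lat(y):
        # phi = 2 arctan(e^(y/R)) - pi/2, returned in degrees
        return math.degrees(2 * math.atan(math.exp(y / R)) - math.pi / 2)

    print(mercator_y(60.0))        # ≈ 1.3170
    print(inverse_lat(1.3170))     # ≈ 60.0
    print(inverse_lat(math.pi))    # ≈ 85.05113°, the truncation latitude for a square map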
- at latitude 30° the scale factor is k = sec 30° = 1.15,
- at latitude 45° the scale factor is k = sec 45° = 1.41,
- at latitude 60° the scale factor is k = sec 60° = 2,
- at latitude 80° the scale factor is k = sec 80° = 5.76,
- at latitude 85° the scale factor is k = sec 85° = 11.5.
Working from the projected map requires the scale factor in terms of the Mercator ordinate y (unless the map is provided with an explicit latitude scale). Since ruler measurements can furnish the map ordinate y and also the width W of the map, then y/R = 2πy/W and the scale factor is determined using one of the alternative forms of the inverse transformation:
k = cosh(y/R) = cosh(2πy/W).
The variation with latitude is sometimes indicated by multiple bar scales as shown below and, for example, on a Finnish school atlas. The interpretation of such bar scales is non-trivial. See the discussion on distance formulae below.
Area scale
The area scale factor is the product of the parallel and meridian scales, hk = sec²φ. For Greenland, taking 73° as a median latitude, hk = 11.7. For Australia, taking 25° as a median latitude, hk = 1.2. For Great Britain, taking 55° as a median latitude, hk = 3.04.
The classic way of showing the distortion inherent in a projection is to use Tissot's indicatrix. Nicolas Tissot noted that for cylindrical projections the scale factors at a point, specified by the numbers h and k, define an ellipse at that point of the projection. The axes of the ellipse are aligned to the meridians and parallels. For the Mercator projection, h = k, so the ellipses degenerate into circles with radius proportional to the value of the scale factor for that latitude. These circles are then placed on the projected map with an arbitrary overall scale (because of the extreme variation in scale) but correct relative sizes.
One measure of a map's accuracy is a comparison of the length of corresponding line elements on the map and globe. Therefore, by construction, the Mercator projection is perfectly accurate, k = 1, along the equator and nowhere else. At a latitude of ±25° the value of sec φ is about 1.1 and therefore the projection may be deemed accurate to within 10% in a strip of width 50° centred on the equator. Narrower strips are better: sec 8° = 1.01, so a strip of width 16° (centred on the equator) is accurate to within 1% or 1 part in 100. Similarly sec 2.56° = 1.001, so a strip of width 5.12° (centred on the equator) is accurate to within 0.1% or 1 part in 1,000. Therefore the Mercator projection is adequate for mapping countries close to the equator.
Secant projection
In a secant (in the sense of cutting) Mercator projection the globe is projected to a cylinder which cuts the sphere at two parallels with latitudes ±φ1. The scale is now true at these latitudes whereas parallels between these latitudes are contracted by the projection and their scale factor must be less than one. The result is that deviation of the scale from unity is reduced over a wider range of latitudes. An example of such a projection is
x = 0.99R(λ−λ0),   y = 0.99R ln tan(π/4 + φ/2),   k = 0.99 sec φ.
The scale on the equator is 0.99; the scale is k = 1 at a latitude of approximately ±8° (the value of φ1); the scale is k = 1.01 at a latitude of approximately ±11.4°. Therefore the projection has an accuracy of 1% over a wider strip of 22°, compared with the 16° of the normal (tangent) projection. This is a standard technique of extending the region over which a map projection has a given accuracy.
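The point-scale and area-scale numbers quoted above can be reproduced with a short Python sketch (the helper function sec is ours; the 0.99 factor is the secant example from the text):

    import math

    def sec(deg):
        return 1.0 / math.cos(math.radians(deg))

    print([round(sec(lat), 2) for lat in (30, 45, 60, 80, 85)])    # [1.15, 1.41, 2.0, 5.76, 11.47]
    print([round(sec(lat) ** 2, 2) for lat in (73, 25, 55)])       # area scale: [11.7, 1.22, 3.04]
    # secant (cutting) variant scaled by 0.99:
    print([round(0.99 * sec(lat), 3) for lat in (0, 8, 11.4)])     # [0.99, 1.0, 1.01]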
Generalization to the ellipsoid
When the Earth is modelled by an ellipsoid (of revolution) the Mercator projection must be modified if it is to remain conformal. The transformation equations and scale factor for the non-secant version are
x = R(λ−λ0),   y = R ln[ tan(π/4 + φ/2) ((1 − e sin φ)/(1 + e sin φ))^(e/2) ],   k = sec φ √(1 − e² sin²φ),
where e is the eccentricity of the ellipsoid. The scale factor is unity on the equator, as it must be since the cylinder is tangential to the ellipsoid at the equator. The ellipsoidal correction of the scale factor increases with latitude but it is never greater than e², a correction of less than 1%. (The value of e² is about 0.006 for all reference ellipsoids.) This is much smaller than the scale inaccuracy, except very close to the equator. Only accurate Mercator projections of regions near the equator will necessitate the ellipsoidal corrections.
Formulae for distance
Converting ruler distance on the Mercator map into true (great circle) distance on the sphere is straightforward along the equator but nowhere else. One problem is the variation of scale with latitude, and another is that straight lines on the map (rhumb lines), other than the meridians or the equator, do not correspond to great circles. The distinction between rhumb (sailing) distance and great circle (true) distance was clearly understood by Mercator. (See Legend 12 on the 1569 map.) He stressed that the rhumb line distance is an acceptable approximation for true great circle distance for courses of short or moderate distance, particularly at lower latitudes. He even quantified his statement: "When the great circle distances which are to be measured in the vicinity of the equator do not exceed 20 degrees of a great circle, or 15 degrees near Spain and France, or 8 and even 10 degrees in northern parts it is convenient to use rhumb line distances".
For a ruler measurement of a short line, with midpoint at latitude φ, where the scale factor is k = sec φ = 1/cos φ:
- True distance = rhumb distance ≅ ruler distance × cos φ / RF. (short lines)
With radius and great circle circumference equal to 6,371 km and 40,030 km respectively, an RF of 1/300M, for which R = 2.12 cm and W = 13.34 cm, implies that a ruler measurement of 3 mm in any direction from a point on the equator corresponds to approximately 900 km. The corresponding distances for latitudes 20°, 40°, 60° and 80° are 846 km, 689 km, 450 km and 156 km respectively. Longer distances require various approaches.
On the equator
Scale is unity on the equator (for a non-secant projection). Therefore interpreting ruler measurements on the equator is simple:
- True distance = ruler distance / RF (equator)
For the above model, with RF = 1/300M, 1 cm corresponds to 3,000 km.
On other parallels
On any other parallel the scale factor is sec φ so that
- Parallel distance = ruler distance × cos φ / RF (parallel).
For the above model 1 cm corresponds to 1,500 km at a latitude of 60°. This is not the shortest distance between the chosen endpoints on the parallel because a parallel is not a great circle. The difference is small for short distances but increases as λ, the longitudinal separation, increases. For two points, A and B, separated by 10° of longitude on the parallel at 60°, the distance along the parallel is approximately 0.5 km greater than the great circle distance. (The distance AB along the parallel is (a cos φ)λ. The length of the chord AB is 2(a cos φ)sin(λ/2). This chord subtends an angle at the centre equal to 2 arcsin(cos φ sin(λ/2)) and the great circle distance between A and B is 2a arcsin(cos φ sin(λ/2)).)
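The short-line rule above is easy to check numerically; a small Python sketch (variable names ours), using the RF and the 3 mm ruler measurement from the text:

    import math

    RF = 1.0 / 300e6                       # representative fraction 1/300M
    ruler = 0.003                          # a 3 mm ruler measurement, in metres
    for lat in (0, 20, 40, 60, 80):
        true_km = ruler * math.cos(math.radians(lat)) / RF / 1000.0
        print(lat, round(true_km))         # 900, 846, 689, 450, 156 km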
In the extreme case where the longitudinal separation is 180°, the distance along the parallel is one half of the circumference of that parallel; i.e., 10,007.5 km. On the other hand the geodesic between these points is a great circle arc through the pole subtending an angle of 60° at the center: the length of this arc is one sixth of the great circle circumference, about 6,672 km. The difference is about 3,336 km, so the ruler distance measured from the map is quite misleading even after correcting for the latitude variation of the scale factor.
On a meridian
A meridian of the map is a great circle on the globe but the continuous scale variation means ruler measurement alone cannot yield the true distance between distant points on the meridian. However, if the map is marked with an accurate and finely spaced latitude scale from which the latitude may be read directly—as is the case for the Mercator 1569 world map (sheets 3, 9, 15) and all subsequent nautical charts—the meridian distance between two latitudes φ1 and φ2 is simply
m12 = a|φ1 − φ2|, with the latitudes expressed in radians.
If the latitudes of the end points cannot be determined with confidence then they can be found instead by calculation on the ruler distance. Calling the ruler distances of the end points on the map meridian as measured from the equator y1 and y2, the true distance between these points on the sphere is given by using any one of the inverse Mercator formulæ:
m12 = a|tan⁻¹[sinh(y1/R)] − tan⁻¹[sinh(y2/R)]|,
where R may be calculated from the width W of the map by R=W/2π. For example, on a map with R=1 the values of y=0, 1, 2, 3 correspond to latitudes of φ=0°, 50°, 75°, 84° and therefore the successive intervals of 1 cm on the map correspond to latitude intervals on the globe of 50°, 25°, 9° and distances of 5,560 km, 2,780 km, and 1,000 km on the Earth.
On a rhumb
A straight line on the Mercator map at angle α to the meridians is a rhumb line. When α=π/2 or 3π/2 the rhumb corresponds to one of the parallels; only one, the equator, is a great circle. When α=0 or π it corresponds to a meridian great circle (if continued around the Earth). For all other values it is a spiral from pole to pole on the globe intersecting the meridians at the same angle: it is not a great circle. This section discusses only the last of these cases. If α is neither 0 nor π then the above figure of the infinitesimal elements shows that the length of an infinitesimal rhumb line on the sphere between latitudes φ and φ+δφ is a sec α δφ. Since α is constant on the rhumb this expression can be integrated to give, for finite rhumb lines on the Earth:
r12 = a sec α |φ1 − φ2| = a sec α Δφ (Δφ in radians).
Once again, if Δφ may be read directly from an accurate latitude scale on the map, then the rhumb distance between map points with latitudes φ1 and φ2 is given by the above. If there is no such scale then the ruler distances between the end points and the equator, y1 and y2, give the result via an inverse formula:
r12 = a sec α |tan⁻¹[sinh(y1/R)] − tan⁻¹[sinh(y2/R)]|.
These formulæ give rhumb distances on the sphere which may differ greatly from true distances whose determination requires more sophisticated calculations.
References and footnotes
- American Cartographer. 1989. 16(3): 222–223.
- Maling, pages 77–79.
- Snyder, Working Manual, pp. 37–95.
- Snyder, Flattening the Earth.
- A generator of a cylinder is a straight line on the surface parallel to the axis of the cylinder.
- The function y(φ) is not completely arbitrary: it must be monotonic increasing and antisymmetric (y(−φ)=−y(φ), so that y(0)=0); it is normally continuous with a continuous first derivative.
- Snyder, Working Manual, page 20.
- R is the radius of the globe and φ is measured in radians.
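The parallel-versus-great-circle comparison above can be reproduced as follows (a Python sketch assuming the spherical radius of 6,371 km used earlier; variable names ours):

    import math

    a = 6371.0                                  # km, spherical Earth radius used in the text
    phi = math.radians(60.0)
    for sep_deg in (10.0, 180.0):
        lam = math.radians(sep_deg)
        along_parallel = a * math.cos(phi) * lam
        great_circle = 2.0 * a * math.asin(math.cos(phi) * math.sin(lam / 2.0))
        print(sep_deg, round(along_parallel, 1), round(great_circle, 1))
    # 10°:  556.0 vs 555.4 km (the ≈0.5 km difference quoted above)
    # 180°: 10007.5 vs 6671.7 km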
- λ is measured in radians.
- NIST. See Sections 4.26#ii and 4.23#viii.
- Osborne, Chapter 2.
- Snyder, Flattening the Earth, pp. 147–149.
- More general example of Tissot's indicatrix: the Winkel tripel projection.
- Osborne, Chapters 5, 6.
- See great-circle distance, Vincenty's formulae or MathWorld.
- Maling, Derek Hylton (1992), Coordinate Systems and Map Projections (second ed.), Pergamon Press, ISBN 0-08-037033-3
- Monmonier, Mark (2004), Rhumb Lines and Map Wars: A Social History of the Mercator Projection (Hardcover ed.), Chicago: The University of Chicago Press, ISBN 0-226-53431-6
- Olver, F. W. J.; Lozier, D. W.; Boisvert, R. F. et al., eds. (2010), NIST Handbook of Mathematical Functions, Cambridge University Press
- Osborne, Peter (2013), The Mercator Projections
- Rapp, Richard H (1991), Geometric Geodesy, Part I
- Snyder, John P (1993), Flattening the Earth: Two Thousand Years of Map Projections, University of Chicago Press, ISBN 0-226-76747-7
- Snyder, John P. (1987), Map Projections – A Working Manual. U.S. Geological Survey Professional Paper 1395, United States Government Printing Office, Washington, D.C. This paper can be downloaded from USGS pages. It gives full details of most projections, together with interesting introductory sections, but it does not derive any of the projections from first principles.
See also
- Transverse Mercator projection
- Universal Transverse Mercator coordinate system
- Gall–Peters projection
- Jordan Transverse Mercator
- Nautical chart
- Tissot's indicatrix
External links
- Ad maiorem Gerardi Mercatoris gloriam – contains high-resolution images of the 1569 world map by Mercator.
- Table of examples and properties of all common projections, from radicalcartography.net.
- An interactive Java Applet to study the metric deformations of the Mercator Projection.
- Web Mercator: Non-Conformal, Non-Mercator (Noel Zinn, Hydrometronics LLC)
- Mercator's Projection at University of British Columbia
- Mercator's Projection at Wolfram MathWorld
- Google Maps Coordinates
http://en.wikipedia.org/wiki/Mercator_projection
Instructor/speaker: Prof. Walter Lewin So today, I will start with a general discussion on waves, as an introduction to electromagnetic waves, which we will discuss next week. We'll start with a very down-to-earth equation, Y equals one-third X. And I'm going to plot that for you, so here is Y and here is X, and that's a straight line through the origin, Y equals one-third X. Suppose, now, I want this line to move. I want this line to move with a speed of 6 meters per second in the plus X direction. All I will have to do now is to replace X in that equation by X - 6T. Notice the minus sign. I will go, then, in the plus X direction. The equation then becomes Y equals one-third times X - 6T. So look at it at T equals 1. At T equals 0, you already have the line. At T equals 1, you now have Y equals 1/3 X - 2. That means, here it will intersect at - 2, and there it will intersect at + 6, and the line parallel to the first one, this line is now T = 1, and this is T = 0. And it has moved in this direction, with a speed of 6 meters per second. And so what this is telling us, that if we ever want something to move with a speed V in the plus X direction, then all we have to do in our equations to replace X by X - VT, and if we want it to move in the minus X direction, then we replace X by X + VT. That's all we have to do. So now, I'm going to change to something that is a real wave. I now have Y = 2, times the sin 3X. That's a wave. It's not moving, not yet. So I can make a plot of Y as a function of X, and that plot will be like this. This is zero, so when the sine is zero, this is pi divided by 3, and this is 180 degrees, and it's again zero, this is 2 pi divided by 3, it's again zero. And lambda, which we call the wavelength, lambda, in this case, is from here to here, that is 2 pi divided by 3, this goes also from here to there. I will introduce a symbol K that you will often see, we call that the wave number, and K is simply defined as 2 pi divided by lambda. So in our specific case, K is 3. This here is K. If you know this number, you can immediately tell what the wavelength is. Now, I want to have this wave move. I want to have a traveling wave. And I want to have it move with 6 meters per second in the plus X direction. So the recipe is now very simple, all I have to do replace this X by X - 6T. So now I get Y equals 2 sin [3(X-6T)]. And if you now look at this curve, this equation, and you plot it a little bit later in time than T0 -- this is already T0 -- a little later in time, you will see that, indeed, it has moved in the plus X direction. And it's moving with a speed of 6 meters per second. So this equation, when you look at it, holds all the characteristics of the oscillation. It holds the amplitude. This 2 is the amplitude. This is - 2, that's the amplitude. This information, K, holds the information on the wavelength, and this information tells you what the speed is. And the minus sign, which is important, tells you that it's going in the plus X direction, and not in the minus X direction. Can we make such a traveling wave? Yes, we can do that, actually, quite easily. Suppose I have here a rotating wheel -- rotate with angular frequency omega, and let this has a radius R, and I give it 2 units, so that I get the same amplitude that I have here. And I attach to this a string, and I put some tension on the string, so that I create a wave as I rotate it, and the string is attached here, and as it rotates, the wave is going to propagate into the string with a velocity, let's say, V. 
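A quick numerical check of the recipe just described (an editorial Python sketch, not part of the lecture): evaluating y = 2 sin(3(x - 6t)) at t = 1 s gives exactly the t = 0 profile shifted 6 meters in the plus X direction.

    import numpy as np

    def y(x, t):
        # Traveling wave from the lecture: amplitude 2, k = 3 rad/m, v = 6 m/s.
        return 2.0 * np.sin(3.0 * (x - 6.0 * t))

    x = np.linspace(0.0, 4.0 * np.pi / 3.0, 50)     # two wavelengths (lambda = 2*pi/3 m)
    print(np.allclose(y(x + 6.0, 1.0), y(x, 0.0)))  # True: the whole pattern moved 6 m in 1 s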
So I can generate a traveling wave. The period of one oscillation -- if you were here on the string, you're going up, you're going down, you're going up, you're going down, that's all you're doing, when the wave passes by -- the period of one whole oscillation is obviously 2 pi divided by this omega. The wavelength lambda that you are creating -- from here to here is lambda -- well, if you know the speed with which it is traveling, and you know it has been traveling capital T seconds, one oscillation, that's a distance lambda. So this is V times T. But this is also V divided by F, if F is the frequency in Hertz. And so the frequency F is then also given by the speed divided by lambda. And so I can write down this equation now in a somewhat different form, Y equals 2 times the sine, and now I bring the 3 inside, so I get 3X-18T. This 18 is now that omega. This is omega T. In here is all the timing information. Omega, the period T, everything is in here. Here is all the spatial information. This is K. In here is the information about lambda. And so if I know omega, and I know K, then I can also find the velocity, which is omega divided by K. So everything is in here, omega divided by 3 gives me back my 6 meters per second. So once you have the equation, I can ask you any question about that wave, and you should be able, then, to answer. Wavelength, frequency, in hertz, in radians per second, speed, everything. You may ask me now, "Why do you discuss this with us?" Well, we are coming up to electromagnetic waves next week, and electromagnetic waves, you're going to see lambdas, you're going to see omegas, you're going to see capital Ts, you're going to see frequency, you're going to see Ks, everything you see there you're going to see next week. One exception, that Y, the displacement Y, will not be in centimeters or meters, but it will be an electric field, a traveling electric field, volts per meter. Or a traveling magnetic field, tesla. But other than that, all these quantities will return in exactly the same way. Now I want to discuss with you a standing wave first, because standing waves are going to be important. This is a traveling wave. And now comes something even more intriguing, which is a standing wave. Suppose I have a wave traveling in this direction, and I call that Y1, and Y0 is the amplitude, sine (K X - omega T). And notice now, I have all the symbols that we are familiar with. We have the K here, we have the omega here, and we have the amplitude here. And the minus sign tells me, [wssshhht], it's going in the plus direction. But I have another wave. And the wave is exactly identical, in terms of amplitude, in terms of wavelength, in terms of frequency, identical, but it's traveling in this direction. And so this is Y2, which is Y0 sine (KX + omega T). This plus sign tells me it's going in this direction. And so if this is a string, the net result is the sum of the two. So I have to add them up. So Y = Y1 + Y2. So I have to do some trigono- trigonometric manipulation, and this is what I leave -- I'll leave you with that, that's high school stuff -- you add the two up and you'll find 2 Y0-- notice that the amplitude has doubled -- times the sine (K X) times cosine (omega T). That's the sum of those two. And this is very, very different from a traveling wave. Nowhere will you see K X - omega T any more. K X is here, separate under the sine, and omega T is separate under the cosine. All the timing information is now separate from the spatial information. 
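The trigonometric manipulation Lewin leaves as an exercise can also be spot-checked numerically. This short Python sketch is an editorial aside, using the lecture's k = 3 rad/m and omega = 18 rad/s with Y0 = 1 chosen arbitrarily; it confirms that the two counter-propagating waves add up to 2 Y0 sin(kx) cos(omega t).

    import numpy as np

    k, w, Y0 = 3.0, 18.0, 1.0
    rng = np.random.default_rng(0)
    x = rng.uniform(-10.0, 10.0, 1000)   # random sample points in space...
    t = rng.uniform(0.0, 10.0, 1000)     # ...and in time

    y1 = Y0 * np.sin(k * x - w * t)      # wave traveling in the plus X direction
    y2 = Y0 * np.sin(k * x + w * t)      # identical wave traveling in the minus X direction
    standing = 2.0 * Y0 * np.sin(k * x) * np.cos(w * t)

    print(np.allclose(y1 + y2, standing))   # True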
And so what does a standing wave like this look like? Well, let's -- a bracket here. Let's make a drawing of such a standing wave. So here we have Y, and here we have X. Let's only look at the sine K X for now. If X is 0, the sine is always 0, so this point will never move. But if K X is 180 degrees, it's also 0, always. So lambda over two will never move. X is lambda, when this is 360 degrees, it will never move. - lambda / 2, will never move. So what will it look like? Well, you're going to see something like this, let's take the moment when T equals 0, so when cosine omega T is plus 1. So we're going to have a curve like this, so this goes up to 2Y0 like this -- and this here is then my 2Y0. These points will never move, they will always stand still. There's nothing like a traveling wave. If it's a traveling wave, these points will see the wave go by, they will go up and down, they never do that. They sit still. They have a name. We call them nodes. Let's now look at little later. Let's look at T equals one quarter of a period. Now, the cosine is 0. So there's not a single point on the string that is not 0. So the string looks like this. If you took a picture of the string, you wouldn't even know it's oscillating. It would be just a straight line. And now, if we do -- look a little later, and we look at T equals one-half the period, then the cosine is -1. So now the curve will look like this. And so what does it mean? If we just look what's here happening, this is what's going to happen. The string is just doing this, and there are points that stand still. Nothing is going like this, nothing is going like this. You see this point going up and down, up and down, up and down, and this will do the same, and these nodes will do nothing. So that is what a standing wave will look like, and I think the name standing wave is a very appropriate name, very descriptive, because it's really standing, it's not -- it's not moving. At least, not traveling along the X direction. Can we make a standing wave? Yes, we can, and I will do that today. A standing wave can be made by shaking -- or rotating, in that fashion -- a string. So here I have a string, I -- say I attach the string to the wall there, and I move it up and down here. So a wave goes in -- I do just this, like the rotating disc -- the wave travels, but the wave is reflected, and so I have a wave going in and I have a wave coming back, so I have now two waves going through each other. And if the conditions are just right, then these reflective waves -- this one will reflect, when it arrives here, it will reflect again, it goes back again, and it will continue to reflect -- so if the conditions are just right, then these reflective waves will support each other, and they will generate a large amplitude -- as I will demonstrate to you -- but that's only the case for very specific frequencies, and we call those resonance frequencies. The lowest possible frequency for which this happens -- which we call the fundamental -- will make the string vibrate like this. So the whole thing goes. [wssshhht], [wssshhht], [wssshhht], and we call that the fundamental. We call that also the first harmonic. 
If now I increase the frequency, then I get a second resonant frequency, and a node jumps in the middle -- there is already a node here, and there is a node here, because this motion of my hand here is very small, as I will demonstrate to you, for all practical purposes you can think of this being a node -- and so now the string in the second harmonic will oscillate like this. [Wssshhht], [wssshhht], [wssshhht], [wssshhht], so this is the second harmonic. And if we go up in frequencies, then -- this should be right in the middle, by the way -- and if I go up in frequency one step more, then I get another resonance whereby we get an extra node, and so we get the third harmonic, and you can just go on like that. You get a whole series of resonance frequencies. And so, for the fundamental, lambda 1 -- the 1 refers to the first harmonic -- is 2L, if L is the length of my string. This is L. You only have half a wavelength here, so lambda 1 is 2L. But we know that the frequency is the velocity divided by the wavelength -- we see that there, frequency is velocity divided by the wavelength -- so the frequency F1 is the velocity divided by lambda 1, so that's V divided by 2L. So that's the frequency of the fundamental for which this resonance phenomenon occurs. For the second harmonic, lambda 2 equals L. You can tell, you see a complete wavelength here. And F2, that frequency, is going to be twice F1. And F3 is going to be 3 F1. And if you want to know, for the Nth harmonic, N being Nancy, then lambda of N equals 2L/N. Substitute in N equals 1, and you find the wavelength for the first harmonic. Substitute N equals 2, and you find the wavelength for the second harmonic. And so on. And the frequency for the Nth harmonic -- N stands for Nancy -- is N times V divided by 2L. So here you see the entire series of frequencies and wavelengths for which we have resonance. Unlike in our LRC system that we discussed last time, where you had one resonance frequency, now you have an infinite number of resonance frequencies, and they are at very discrete values, equally spaced. I want to demonstrate this to you with a violin string, it's a very special violin string, it's here on the floor, it's a biggie, and I need some help from someone. You helped me before, would you mind helping me again? So here is, uh, one end of the string, which you're going to hold, you're going to be a node, believe it or not. Hold it better, two hands -- no, much better. You will see shortly, why -- no, no, no, much better. And walk back a little, walk further. Yes, that's good, hold it. I will put on a white glove, and there is a reason for that, because I want you to be able to see my hand when we're going to make it dark, so that you will convince yourself that my hand, which is generating the wave, is hardly moving at all. For practical purposes, it's a node, and yet we get these wonderful resonance phenomena. So I'm going to make it very dark so that the UV will do its job, and you can see the string better, that's the only way we can make you see the string well. Don't let go, er- under any circumstances, you will hurt me if you do that. Of course, if I let go first, then [pfft], I will hurt you, but that's not my plan. OK, so let's try to go a little bit further back. Let's try to, uh, find, first the -- the fundamental. And I'll try to find it by exciting just the right frequency with my hand. There it is. I think I got it. That's the fundamental. And look how little my hand is moving here. 
And you will see a very large amplitude in the middle. And so these reflected waves, one runs to him, it runs back at me, it runs back at him, keeps reflecting many times, they support each other in a constructive way, that's what resonance is all about. And now I'll try to find the second harmonic -- so you'll see another node coming in at the middle. It's easier for you to see than for me, actually. And it's not always easy to find the -- no, no, no, I'm too low in frequency, I have to go up. I think I got it now. Is this it? Yes, one extra node in the middle? Speak up, please. [chorus of agreement] Ah, that's better. Now I can hear you, thank you. Um, there are three nodes now. My friend there is a node, I'm a node, and then there is one in the middle. If you subtract 1, then 3 - 1 is 2, so it's the second harmonic. And so now I will try to generate a very high frequency, in resonance, and then you count the number of nodes, subtract one, and then you know which harmonic I was able to generate. But I will try to -- not so easy to get a resonance in there. No, I'm off resonance. Yeah! Yeah! Yeah! Yeah! Yeah! Yeah! Yeah! [laughter] Got it, got it, got it! Got it! You keep counting. Oh, that's a super-high harmonic! [crowd responds] How many did you count? [crowd responds] 10, do I hear 10? [crowd responds] Do I hear 20? [laughter] Actually, I counted about 27, but that's OK. All right, thank you very much, it was great that you helped me. So that's, uh, standing waves. And you see the shapes, and you saw the mode of operation, very characteristic for standing waves. When I pluck a string, of a violin, or I strike it with a bow, or with a hammer, on a piano, just a hammer comes down, that is exposing the string to a whole set of frequencies. And so the string, now, decides which frequencies it likes to oscillate in. And so it selects these resonance frequencies. And so if the string has a fundamental of 400 hertz, then it would start to resonate at 400, but simultaneously, it will be very happy with 800 hertz, and with 1200 hertz. And so the string will simultaneously -- if I bang it, or strike it, or pluck it -- simultaneously oscillate, often at more than one frequency. Fundamental, and several of the higher harmonics. And all the others that are present, all the other frequencies in this striking with the bow, I ignore, they're off resonance. So if you design a string instrument, then this is really a key equation. If you want a particular fundamental -- say your fundamental is 440 and this is a given number. And so N is 1. You can now manipulate V, because V depends on the tension on the string, and it depends on what kind of string you have. The speed in the string is the square root of the tension -- if you take 8.03, you will even see a proof for that -- divided by the mass of the string per unit length of the string. So you take four strings for a violin -- six for a guitar -- and you make them out of very different material -- different mass per unit length -- and so that gives you, then, different velocities -- you can also fool around with the tension -- and so the four strings, then, all have different fundamentals. In the violin, they may have the same length. What are you going to do now to play the violin? All that is left over is L, that's the only thing you can change, and that's what a violinist is doing. Goes with the finger, back and forth over the strings, make them shorter, pitch goes up, frequency goes up, make them longer, frequency goes down. 
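The two relations at work here, v = sqrt(T/mu) for the speed on the string and f_n = n v / (2L) for the resonance frequencies, fit in a few lines of Python. This is an editorial sketch; the tension, mass per unit length and string length below are made-up illustrative numbers (chosen so the fundamental lands near the 440 Hz discussed above), not values from the lecture.

    import math

    def string_speed(tension_N, mass_per_length_kg_per_m):
        # Wave speed on a string: v = sqrt(T / mu).
        return math.sqrt(tension_N / mass_per_length_kg_per_m)

    def harmonics(v, L, n_max=3):
        # First n_max resonance frequencies of a string fixed at both ends: f_n = n v / (2 L).
        return [n * v / (2.0 * L) for n in range(1, n_max + 1)]

    v = string_speed(tension_N=49.1, mass_per_length_kg_per_m=6.0e-4)   # ~286 m/s (hypothetical string)
    print([round(f) for f in harmonics(v, L=0.325)])                    # [440, 880, 1320] Hz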
And you do the same with a guitar, and you do the same with a bass and a cello. So when you're playing, what you're doing all the time is changing L so that you get all these frequencies that you want to produce. If you take your instrument out of the closet, you may have noticed that it's really not in tune anymore, it's slightly off-tune. Well, what you can do now, you can change V a little bit -- uh, these little knobs -- and you can change the tension in the strings. And that's what violinists do when they tune their violin, they change the tension on the string to get just the right frequency. But the playing means you change L. A piano is different, that's really a luxury. A piano has 88 keys, and the length of each set of strings is fixed, so you don't have to worry about that. It's a great luxury, you may think, therefore, that it is much easier to play the piano that to play a violin, because you don't have to do this all the time, and be exactly at the right length. Well, that is true, of course, but given the fact that you have 88 keys, you can imagine you can hit occasionally the wrong key, and that's not what you want. If a string is vibrating, it is pushing on the air, and it's pulling on the air, and it's producing thereby what we call pressure waves. If I have a string that oscillates 400 hertz, it makes pressure waves -- pressure goes up, down, up, down, up, down -- 400 times per second it goes up, it reaches your eardrum, and your eardrum starts to shake 400 times per second, it goes back and forth, and your brain say, "I hear sound." That's the way it works. So it is the string, then you get the air, pressure waves, and then you get your eardrum, and then you get to brains, if there are any. Now, I want to discuss with you before I demonstrate some of this, I want to discuss with you instruments which don't have strings, and I will call them all woodwind instruments, although that's perhaps not an appropriate name for all of them. But I'll just call them woodwinds for now. And suppose I have, here, a box which is filled with air. Completely closed box, and it has a length L. And I put in here a loudspeaker, and I generate a particular frequency of sound. Then pressure waves are going to run, they're going to bounce off, and they come back, and I get reflected traveling waves. And what I get inside the -- the box, now, I get standing waves of air. It's not the box that goes into a standing resonance, but it's the air itself. And the frequencies that are produced, at which the system is in resonance, is given exactly by the same equation. Except, now, that V is non-negotiable -- V is now the speed of sound, which, at room temperature, is about 340 meters per second. So whenever you design an -- wi- we- woodwind instruments, that is non-negotiable. You cannot change V, which you can do when you are an instrument builder of strings. You will say, "Gee, if I have an instrument whereby the sound is inside a closed box, you're not going to hear very much." Well, that's true. You must let the sound go out somehow. And what is surprising, that if you take this end out -- off -- and you take this end off, that this box, which is now open on both sides, will still resonate at exactly those frequencies. And you've got to take 8.03 to get to the bottom of this. There are also resonant frequencies in case that the sound cavity -- if I call this a sound cavity -- is closed on one end and open on one end. The series of resonant frequencies is different from this one, though. 
It's not so important, but it is different. But you also get a whole series of resonance frequencies. The velocity of sound in a gas, V, is the square root of the temperature -- so it's a little temperature-dependent -- divided by the molecular weight. Well, you can't do much about the temperature in a room -- in general, it's room temperature -- and with air, you are stuck with the molecular weight, oxygen and nitrogen is about 30, there is not much you can do about that. But every one of you who plays woodwind instruments know that if you go from a cold room to a warm room that your instrument is no longer in tune. And that's because of this, the temperature change. So V changes, so your fundamentals change. And what do you do know? These people know what they do. They have a way of making the cavity a little shorter or a little longer. It's not very much, but they have a little bit to play with. And when they do that, so they compensate L for the slight difference in V to get back to the same fundamental that they need. So now you have a woodwind instrument. Low-frequency woodwind instruments will be big. And high-frequency woodwind instruments will be small, because in L lies the secret, you can't fool around with V, V is a God-given. And so how do you play an instrument now? Well, you have to change L, that's all you can do. And if you have a trombone, you're doing this, it's clear that you're changing L, you make the cavity shorter and longer. So that's easy. If you have a flute, you have holes in the flute. And if all the holes are closed, the flute is this long. But if you take your fingers off the holes, it gets shorter. And so when you take all your fingers off the holes, then you have a high frequency -- flute is only this long, if you put all your fingers on it, it's this long, and so the frequency is lower. And a trumpet is the same idea. You have valves that open holes and close holes. If you blow air in an instrument, it is like plucking a string, it's like exciting a string with a bow, you are dumping a whole spectrum of frequencies onto that air cavity. And you let the air cavity decide where it wants to resonate. And it will pick out the ones that it likes, it will pick out the fundamental, and maybe the second and the third harmonic. So in that sense, blowing air is like striking it with a bow, in the case of a string instrument. But blowing air is not always as easy as you may think it is. Have you ever tried to blow air into a trumpet? You just blow, [pffff], and you hear nothing. You have to do this. Something like that. A bizarre sound you have to make, you have to know how to spit in the instrument just the right way to get a sound out of it. I've tried it many times, it's really not easy. So blowing air is just said in a simple way, but in order to get it exc- to resonate, you've got to really know how to hold your lips, and how to excite that cavity. I can show you the easy relation between the frequency of the fundamental and the length of woodwind instruments. That's a one-to-one correlation -- this is on the web, you can download it, you don't have to copy this -- and you see there that, uh, this is only for an open open system, this is not for a closed open system. The number would be different. So if you are interested in very high frequencies, then an open open system which is only one centimeter long would give you a fundamental of 17000 Hertz, which most of you can hear, because you're still young, you can hear up to 20 kilohertz, probably. 
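The open-open numbers quoted here follow directly from f_n = n v / (2L) with v of about 340 m/s. A short editorial check in Python (the 0.80 m case anticipates the roughly 212 Hz tube demonstrated a little later):

    def open_open_fundamental(length_m, v=340.0):
        # Fundamental of a pipe open at both ends: f1 = v / (2 L).
        return v / (2.0 * length_m)

    for L in (0.01, 0.10, 0.80):
        print(L, round(open_open_fundamental(L)))   # 17000 Hz, 1700 Hz, 212 Hz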
The second harmonic you would not be able to hear, that is too high for you. An instrument which is 10 centimeters long, open open, you would here the fundamental easily, 1700 hertz, second harmonic, 3400 hertz, no problem, third harmonic, fourth harmonic, no problem. And then when you go to the very low frequencies, uh, organ pipes that produce fundamentals in the range 20 and 30 hertz, are huge in size. And you -- in general, that holds. When you have a woodwind instrument which is tunes for low frequencies, it's big. And for high frequencies, like a flute, it's small. In a way, that's also true for string instruments. A bass which generates low frequencies, it's a big instrument. But the violin, which generates high frequencies, is a much shorter instrument. So in that sense, the reason is, they have both an L here. And it's the L, of course, that is crucial in terms of the fundamental. So I can now ge- uh, demonstrate to you the basic idea of a flute -- this is a flute. Now, the flute is this long. Higher frequency, because it's shorter. Even higher frequency, because it's shorter. That's all it takes. That speaks for itself, right? Make it longer, you make it shorter. I'll try it. [plays trombone] [applause] Trombone. 80 centimeters long. Open and open, on both sides. 80 centimeters, it would give me a fundamental a little higher than 170. And then it will give me a second harmonic, and a third harmonic, it all depends on how fast the air is flowing by, and there will be moments that you will hear more than one harmonic. I'll try to swing it around. It's not easy for me to hit the fundamental, but I'll try that, too. This is the second harmonic. [plays wind organ]. [wind organ] Fourth harmonic. [wind organ] Fifth harmonic. [wind organ] Fourth. [wind organ] Furdamen-, this is fundamental. [wind organ] This is the fundamental, 212 Hertz. [wind organ] 425 Hertz. [wind organ] 637, 637. [wind organ] [applause] Thank you, thank you, thank you. If you bang on a tuning fork, or you pluck on a string, in isolation, you hear nothing, almost nothing. I have here a tuning fork, and if I bang on it, you hear nothing, and I hear nothing -- almost nothing. Unless I heel- hold it very close to my ear. What we do now, with string instruments, we mount the strings on a box with air. A sound cavity. Sound- sounding board, it's called. And now the air inside can s- oscillate with it -- it doesn't always have to be precisely at resonance -- and also, the surface itself of the box can start to vibrate. So you're displacing more air, and the sound becomes loud and clear. You don't gain energy, but you drain the energy out of the oscillating string faster, and so for that short amount of time, you get louder sound. And I will demonstrate that, first, with the tuning fork. I hear it now very well, it's harder for you because it's farther away. Now you hear nothing. And now you hear it. It can actually be much better demonstrated with this little music box that I bought years ago in Switzerland. If I rotate this music box, it has a very romantic tune, you hear nothing. I hear a little bit. And now I put it on this box, unmistakable. So that's the idea of sounding boards. You have them violins, you have them on pianos, and, of course, the design of these sounding boards is top-secret, the manufacturer is not going to tell you how they built them, because the quality of the sound, of course, is partly in the design of the sounding board. I can make you hear, and I can make you see sound. 
And my goal for the remaining time is to make you see and hear at the same time. I have here a microphone, which is like your eardrum, and suppose I generate 440 Hertz, and I can do that with the tuning fork. So here is the amplitude of the oscillation of the membrane in the microphone, which is your eardrum, say, we amplify that, and we show you on an oscilloscope, the current after amplification. And so you're going to see a signal like this. And if this is 440 Hertz, so this is time, and this is the displacement of your eardrum -- in our case, it's a microphone, it's really a current after amplification -- and if this is 440 Hertz, then this time T will be about 2.3 milliseconds. One divided by 440. That's no problem for an oscilloscope. We can do much better than that. So the time resolution is not a problem. And so I will show you there the output of our microphone, I will show you this signal as a function of time. For 440 Hertz, you see a boring signal. And I can make a boring signal with a tuning fork, it's almost a pure sinusoid. But now, it just so happens, we have in our audie- in our audience, someone who can play the violin. And that person is going to produce a 440 Hertz. But at the same time, he's going to produce a second harmonic, and maybe a third harmonic, and maybe a fourth harmonic. And so, imagine now, that at th- simultaneously, your eardrum is going to do this, but at the same time, your eardrum is going to do this, because this is some higher harmonic. Then the net result is that your eardrum is going to do this. And that is what I'm going to show you. And so when you see the various instruments, you will recognize that on top of the fundamental, you will see these very characteristic harmonics, each instrument having its own cocktail, its own unique cocktail, and when you hear that cocktail, you say, "Oh, yes, that's a saxophone." Or you say, "No, that's a violin." You would never mistaken a saxophone for a violin. And that's because of the combination of the higher harmonics. And so we are so fortunate that we have four musicians in our audience. Tom, who is the violinist -- where is Tom? I hope you brought your violin [laughs]. Oh, you got it there. And then we have Emily -- I saw her already, with the clarinet -- so if you come this way, Emily [applause]. And we have Aaron -- Aaron has a bassoon. You may never have a bassoon. A bassoon is an instrument that produces a very low tone. So the instrument is going to be very big. Bigger than -- bigger than Aaron. Just wait and see. It's a beauty, it's a really beautiful instrument. A flute is only this big. Ah, look at that, beautiful, big, bassoon. And then we have Fabian, with a saxophone. So if you stand here, then I will first do the boring part, and what I will do is I will show you, then, 440 hertz signal looks like, produced with a tuning fork -- and we'll see it there, and so I have to change the light situation substantially. The musicians will get a little bit into the dark, but you will still be able to see them. And so I'm going to turn on, now, the microphone, and that's where you're going to see the signal, and when you make noise, you can hear -- hear and see yourself. Boring, and no signs of higher harmonics. Now Tom will try to produce 440 in his violin -- or close to 440 -- and then look for the higher harmonics, which makes the violin characteristic. Notice that the average spacing, the repetition, is, indeed, the same as it was with the 440, but you saw this incredible richness of harmonics. 
Tom happens to be, also, an excellent violin player, and so he insisted that he demonstrate that. [laughter] All right, Tom? Student: All right. Emily, would you mind producing something that comes close to 440 Hertz? Come a little closer to the microphone. Notice the big difference with the violin. Violin has many, many higher harmonics. Her instrument, maybe only one, maybe only the fundamental and the second harmonic. Can you try again? Now we have more, now we have more. Now, Emily did not insist that she wanted to play, but I did. So Emily, would you please? Aaron, with his bassoon. He ordered a special chair, because he says, "Look, with an instrument so big, L is so large, it's heavy." Clearly, a bass, which produces low frequencies, is heavier than a violin, and the same is true with woodwind instruments. Aaron, could you try something close to 440? Bizarre instrument, isn't it, eh? You see a weird combination of probably fundamental and second harmonic. Aaron, would you mind showing some of your expertise? This is a wonderful instrument. You don't see them too often, do you? Last, but -- but not least, we have a saxophone, Fabian. Now, you may have to stand a long distance from this microphone, because these instruments make a hell of a lot of noise, don't they? So give it a shot, and try 440, or come close to that. It doesn't have to be exact. Also, you see several higher harmonics, it's hard to s- hard to tell which. Would you mind, uh, playing something real hot? [laughter] [plays saxophone] [unintelligible] [laughter] [noise] If you think we're interested in hearing it, you're wrong, we want to see it! [laughter] [plays saxophone]. [unintelligible] [laughter] [unintelligible]. Thank all of them. [applause] Thank you very much. [applause] So during the last three minutes, I would like to discuss with you the speed of sound in a little bit more detail. Uh, you notice that the speed of sound is the -- proportional with the temperature and the molecular weight. In fact there are a few other things upstairs here. But these are the -- these are the major contributors. So I should really say it's proportional. If you take air -- as we discussed earlier, molecular weight is 30, that's a God-given, there's not much you can do about it. I would like to demonstrate to you, the dependence on molecular weight. And one way I could do that, I could take all the air out of 26-100 and replace it with helium. And then I would ask the same musicians to come, and I would ask them, then, to play their wind instruments. The Ls are fixed, there's nothing you can do about it. So their instruments don't know that I put helium in the audience, so the only thing that changes is V. The speed would go up by almost a factor of three, and so the fundamental would be free -- three times higher. And the harmonics would be three times higher. So you would hear much higher frequencies. And you wouldn't even recognize these instruments. This wouldn't be very practical. I cannot take the air out of 26-100, and replace it with helium. But what I can do, as I have done so often here in 26-100, I can suffer myself -- I can suffer, and put helium in my system. I have, here -- I have, here, my own sound cavity. I am, in a way, like a wind instrument. And, um, if I swallow helium, my sound cavity doesn't know that I'm producing helium. And you will say, when I talk to you as I do right now, "Yes, that's typical, that's Walter Lewin." 
You recognize my fundamentals, you recognize my harmonics, and it's unique for my voice. And so you will recognize me. But the moment that I fill my system with helium, nothing is changing in my system except for V. And so the frequency will go up. And that will be noticeable. And, in fact, chances are that you will say, "Hm, that's really not Walter Lewin any more." There's only one problem with helium. And that is there is no oxygen in helium. And that is also very noticeable for me. And yet, I really have to fill my lungs with helium all the way, and so I will be, for a while, without oxygen, and, um, so you may catch two birds with one stone. You may hear a strange frequency, and you see me on the floor. So I'll really try not to fall on the floor, then. OK, there we go. And it really doesn't sound like Walter Lewin any more, does it? I will see you Friday. All the best.
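The "almost a factor of three" can be estimated from the proportionality v ∝ sqrt(T/M) stated earlier in the lecture. Taking a mean molecular weight of about 29 for air and 4 for helium (standard textbook values, not numbers given in the lecture), a short Python check gives a ratio of roughly 2.7:

    import math

    M_AIR, M_HELIUM = 29.0, 4.0                    # approximate molecular weights
    print(round(math.sqrt(M_AIR / M_HELIUM), 2))   # ~2.69: speed-of-sound ratio at equal temperature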
http://ocw.mit.edu/courses/physics/8-02-electricity-and-magnetism-spring-2002/video-lectures/lecture-26-traveling-waves-and-standing-waves/
x86 assembly language x86 assembly language is a family of backward-compatible assembly languages, which provide some level of compatibility all the way back to the Intel 8008. x86 assembly languages are used to produce object code for the x86 class of processors, which includes Intel's Core series and AMD's Phenom and Phenom II series. Like all assembly languages, it uses short mnemonics to represent the fundamental instructions that the CPU in a computer can understand and follow. Compilers sometimes produce assembly code as an intermediate step when translating a high level program into machine code. Regarded as a programming language, assembly coding is machine-specific and low level. Assembly languages are more typically used for detailed and/or time critical applications such as small real-time embedded systems or operating system kernels and device drivers. The Intel 8088 and 8086 were the first CPUs to have an instruction set that is now commonly referred to as x86. These 16-bit CPUs were an evolution of the previous generation of 8-bit CPUs such as the 8080, inheriting many characteristics and instructions, extended for the 16-bit era. The 8088 and 8086 both used a 20-bit address bus and 16-bit internal registers but while the 8086 had a 16-bit data bus, the 8088, intended as a low cost option for embedded applications, had an 8-bit data bus. The x86 assembly language covers the many different versions of CPUs that followed, from Intel; the 80188, 80186, 80286, 80386, 80486, Pentium, Pentium Pro, and so on, as well as non-Intel CPUs from AMD and Cyrix such as the 5x86 and K6 processors, and the NEC V20. The term x86 applies to any CPU which can run the original assembly language (usually it will run at least some of the extensions too). The modern x86 instruction set is a superset of 8086 instructions and a series of extensions to this instruction set that began with the Intel 8008 microprocessor. Nearly full binary backward compatibility exists between the Intel 8086 chip through to the current generation of x86 processors, although certain exceptions do exist. In practice it is typical to use instructions which will execute on anything later than an Intel 80386 (or fully compatible clone) processor or else anything later than an Intel Pentium (or compatible clone) processor but in recent years various operating systems and application software have begun to require more modern processors or at least support for later specific extensions to the instruction set (e.g. MMX, 3DNow!, SSE/SSE2/SSE3). Mnemonics and opcodes Each x86 assembly instruction is represented by a mnemonic which, often combined with one or more operands, translates to one or more bytes called an opcode; the NOP instruction translates to 0x90, for instance and the HLT instruction translates to 0xF4. There are potential opcodes with no documented mnemonic which different processors may interpret differently, making a program using them behave inconsistently or even generate an exception on some processors. These opcodes often turn up in code writing competitions as a way to make the code smaller, faster, more elegant or just show off the author's prowess. x86 assembly language has two main syntax branches: Intel syntax, originally used for documentation of the x86 platform, and AT&T syntax. Intel syntax is dominant in the MS-DOS and Windows world, and AT&T syntax is dominant in the Unix world, since Unix was created at AT&T Bell Labs. 
Here is a summary of the main differences between Intel syntax and AT&T syntax:
- Parameter order: AT&T syntax puts the source before the destination (eax := 5 is mov $5, %eax); Intel syntax puts the destination before the source (eax := 5 is mov eax, 5).
- Parameter size: in AT&T syntax, mnemonics are suffixed with a letter indicating the size of the operands (e.g., "q" for qword, "l" for long (dword), "w" for word, and "b" for byte), as in addl $4, %esp; in Intel syntax the size is derived from the name of the register that is used (e.g., rax, eax, ax, al imply q, l, w, b in that order), as in add esp, 4.
- Immediate value sigils: in AT&T syntax immediate values are prefixed with a "$", and registers must be prefixed with a "%"; in Intel syntax the assembler automatically detects the type of symbols, i.e., whether they are registers, constants or something else.
- Effective addresses: AT&T syntax uses the general form DISP(BASE,INDEX,SCALE), as in movl mem_location(%ebx,%ecx,4), %eax; Intel syntax uses variables, which need to be in square brackets, and additionally size keywords like byte, word, or dword have to be used, as in mov eax, dword [ebx + ecx*4 + mem_location].

x86 processors have a collection of registers available to be used as stores for binary data. Collectively the data and address registers are called the general registers. Each register has a special purpose in addition to what they can all do:
- AX multiply/divide, string load & store
- CX count for string operations & shifts
- DX port address for IN and OUT
- BX index register for MOVE
- SP points to top of stack
- BP points to base of stack frame
- SI points to a source in stream operations
- DI points to a destination in stream operations
Along with the general registers there are additionally the:
- IP instruction pointer
- segment registers (CS, DS, ES, FS, GS, SS) which determine where a 64k segment starts (no FS & GS in 80286 & earlier)
- extra extension registers (MMX, 3DNow!, SSE, etc.) (Pentium & later only).
The IP register points to the memory offset of the next instruction in the code segment (it points to the first byte of the instruction). The IP register cannot be accessed by the programmer directly. The x86 registers can be used with the MOV instruction. For example (Intel syntax):

    mov ax, 1234h
    mov bx, ax

This copies the value 1234h (4660 decimal) into register AX and then copies the value of the AX register into the BX register.

Segmented addressing
The x86 architecture in real and virtual 8086 mode uses a process known as segmentation to address memory, not the flat memory model used in many other environments. Segmentation involves composing a memory address from two parts, a segment and an offset; the segment points to the beginning of a 64 KB group of addresses and the offset determines how far from this beginning address the desired address is. In segmented addressing, two registers are required for a complete memory address: one to hold the segment, the other to hold the offset. In order to translate back into a flat address, the segment value is shifted four bits left (equivalent to multiplication by 2^4, or 16) and then added to the offset to form the full address, which allows breaking the 64k barrier through clever choice of addresses, though it makes programming considerably more complex. In real mode, for example, if DS contains the hexadecimal number 0xDEAD and DX contains the number 0xCAFE, they would together point to the memory address 0xDEAD * 0x10 + 0xCAFE = 0xEB5CE. Therefore, the CPU can address up to 1,048,576 bytes (1 MB) in real mode. By combining segment and offset values we find a 20-bit address.
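The real-mode address arithmetic described above can be modeled outside of assembly. The following Python sketch is an editorial illustration, not x86 code; it reproduces the DS:DX example and shows the largest address reachable this way.

    def real_mode_linear_address(segment, offset):
        # Real-mode physical address: segment shifted left 4 bits, plus offset.
        return (segment << 4) + offset

    print(hex(real_mode_linear_address(0xDEAD, 0xCAFE)))   # 0xeb5ce, as in the example above
    print(hex(real_mode_linear_address(0xFFFF, 0xFFFF)))   # 0x10ffef, just past the 1 MB boundary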
The original IBM PC restricted programs to 640 KB but an expanded memory specification was used to implement a bank switching scheme that fell out of use when later operating systems, such as Windows, used the larger address ranges of newer processors and implemented their own virtual memory schemes. Protected mode, starting with the Intel 80286, was utilized by OS/2. Several shortcomings, such as the inability to access the BIOS and the inability to switch back to real mode without resetting the processor, prevented widespread usage. The 80286 was also still limited to addressing memory in 16-bit segments, meaning only 2^16 bytes (64 kilobytes) could be accessed at a time. To access the extended functionality of the 80286, the operating system would set the processor into protected mode, enabling 24-bit addressing and thus 2^24 bytes of memory (16 megabytes). In protected mode, the segment selector can be broken down into three parts: a 13-bit index, a Table Indicator bit that determines whether the entry is in the GDT or LDT, and a 2-bit Requested Privilege Level; see x86 memory segmentation. When referring to an address with a segment and an offset the notation segment:offset is used, so in the above example the flat address 0xEB5CE can be written as 0xDEAD:0xCAFE or as a segment and offset register pair: DS:DX. There are some special combinations of segment registers and general registers that point to important addresses:
- CS:IP (CS is Code Segment, IP is Instruction Pointer) points to the address where the processor will fetch the next byte of code.
- SS:SP (SS is Stack Segment, SP is Stack Pointer) points to the address of the top of the stack, i.e. the most recently pushed byte.
- DS:SI (DS is Data Segment, SI is Source Index) is often used to point to string data that is about to be copied to ES:DI.
- ES:DI (ES is Extra Segment, DI is Destination Index) is typically used to point to the destination for a string copy, as mentioned above.
The Intel 80386 featured three operating modes: real mode, protected mode and virtual mode. The protected mode which debuted in the 80286 was extended to allow the 80386 to address up to 4 GB of memory, and the all-new virtual 8086 mode (VM86) made it possible to run one or more real mode programs in a protected environment which largely emulated real mode, though some programs were not compatible (typically as a result of memory addressing tricks or using unspecified op-codes). The 32-bit flat memory model of the 80386's extended protected mode may be the most important feature change for the x86 processor family until AMD released x86-64 in 2003, as it helped drive large scale adoption of Windows 3.1 (which relied on protected mode) since Windows could now run many applications at once, including DOS applications, by using virtual memory and simple multitasking.
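The protected-mode selector layout mentioned above (a 13-bit index, a table-indicator bit and a 2-bit requested privilege level) can be illustrated with a few shifts and masks. This Python sketch models only the bit layout, not how an operating system actually builds its descriptor tables; the example selector values are arbitrary.

    def decode_selector(selector):
        # Split a 16-bit segment selector into (index, table_indicator, rpl).
        rpl = selector & 0x3          # bits 0-1: requested privilege level
        ti = (selector >> 2) & 0x1    # bit 2: 0 = GDT, 1 = LDT
        index = selector >> 3         # bits 3-15: descriptor table index
        return index, ti, rpl

    print(decode_selector(0x0008))   # (1, 0, 0): second GDT entry, ring 0
    print(decode_selector(0x0023))   # (4, 0, 3): fifth GDT entry, ring 3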
Execution modes
The x86 processors support five modes of operation for x86 code, Real Mode, Protected Mode, Long Mode, Virtual 86 Mode, and System Management Mode, in which some instructions are available and others are not. A 16-bit subset of instructions is available in real mode (all x86 processors), 16-bit protected mode (80286 onwards), V86 mode (80386 and later) and SMM (some Intel i386SL, i486 and later). In 32-bit protected mode (Intel 80386 onwards), 32-bit instructions (including later extensions) are also available; in long mode (AMD Opteron onwards), 64-bit instructions, and more registers, are also available. The instruction set is similar in each mode but memory addressing and word size vary, requiring different programming strategies. The modes in which x86 code can be executed are:
- Real mode (16-bit)
- Protected mode (16-bit and 32-bit)
- Long mode (64-bit)
- Virtual 8086 mode (16-bit)
- System Management Mode (16-bit)

Switching modes
The processor enters real mode immediately after power on, so an operating system kernel, or other program, must explicitly switch to another mode if it wishes to run in anything but real mode. Switching modes is accomplished by modifying certain bits of the processor's control registers, although some preparation is required beforehand in many cases, and some post-switch cleanup may be required.

Instruction types
In general, the features of the modern x86 instruction set are:
- A compact encoding
- Variable length and alignment independent (encoded as little endian, as is all data in the x86 architecture)
- Mainly one-address and two-address instructions, that is to say, the first operand is also the destination.
- Memory operands as both source and destination are supported (frequently used to read/write stack elements addressed using small immediate offsets).
- Both general and implicit register usage; although all seven (counting ebp) general registers in 32-bit mode, and all fifteen (counting rbp) general registers in 64-bit mode, can be freely used as accumulators or for addressing, most of them are also implicitly used by certain (more or less) special instructions; affected registers must therefore be temporarily preserved (normally stacked), if active during such instruction sequences.
- Produces conditional flags implicitly through most integer ALU instructions.
- Supports various addressing modes including immediate, offset, and scaled index but not PC-relative, except jumps (introduced as an improvement in the x86-64 architecture).
- Includes floating point instructions operating on a stack of registers.
- Contains special support for atomic read-modify-write instructions (xchg, cmpxchg, xadd) and for integer instructions that combine with the lock prefix.
- SIMD instructions (instructions which perform the same operation in parallel on many operands encoded in adjacent cells of wider registers).

Stack instructions
The x86 architecture has hardware support for an execution stack mechanism. Instructions such as push, pop, call and ret are used with the properly set up stack to pass parameters, to allocate space for local data, and to save and restore call-return points. The ret size instruction is very useful for implementing space efficient (and fast) calling conventions where the callee is responsible for reclaiming stack space occupied by parameters. When setting up a stack frame to hold local data of a recursive procedure there are several choices; the high level enter instruction takes a procedure-nesting-depth argument as well as a local size argument, and may be faster than more explicit manipulation of the registers (such as push bp ; mov bp, sp ; sub sp, size) but it is generally not used. Whether it is faster depends on the particular x86 implementation (i.e. processor) as well as the calling convention, and code intended to run on multiple processors will usually run faster on most targets without it.
The full range of addressing modes (including immediate and base+offset) even for instructions such as push and pop makes direct usage of the stack for integer, floating point and address data simple, as well as keeping the ABI specifications and mechanisms relatively simple compared to some RISC architectures (which require more explicit call stack details).

Integer ALU instructions
x86 assembly has the standard mathematical operations, add, sub, mul, imul, div and idiv; the logical operators and, or, xor, not and neg; bitshift arithmetic and logical, sal/sar and shl/shr; rotate with and without carry, rcl/rcr and rol/ror; and a complement of BCD arithmetic instructions, aaa, aad, daa and others.

Floating point instructions
x86 assembly language includes instructions for a stack-based floating point unit. They include addition, subtraction, negation, multiplication, division, remainder, square roots, integer truncation, fraction truncation, and scale by power of two. The operations also include conversion instructions which can load or store a value from memory in any of the following formats: binary coded decimal, 32-bit integer, 64-bit integer, 32-bit floating point, 64-bit floating point or 80-bit floating point (upon loading, the value is converted to the currently used floating point mode). x86 also includes a number of transcendental functions, including sine, cosine, tangent, arctangent, exponentiation with the base 2 and logarithms to bases 2, 10, or e. The stack register to stack register format of the instructions is usually fop st, st(n) or fop st(n), st, where st is equivalent to st(0) and st(n) is one of the 8 stack registers (st(0), st(1), ..., st(7)). Like the integers, the first operand is both the first source operand and the destination operand. fsubr and fdivr should be singled out as first swapping the source operands before performing the subtraction or division. The addition, subtraction, multiplication, division, store and comparison instructions include instruction modes that will pop the top of the stack after their operation is complete. So for example faddp st(1), st performs the calculation st(1) = st(1) + st(0), then removes st(0) from the top of stack, thus making what was the result in st(1) the top of the stack in st(0).

SIMD instructions
Modern x86 CPUs contain SIMD instructions, which largely perform the same operation in parallel on many values encoded in a wide SIMD register. Various instruction technologies support different operations on different register sets, but taken as a complete whole (from MMX to SSE4.2) they include general computations on integer or floating point arithmetic (addition, subtraction, multiplication, shift, minimization, maximization, comparison, division or square root). So for example, paddw mm0, mm1 performs 4 parallel 16-bit (indicated by the w) integer adds (indicated by the padd) of the mm1 values to the mm0 values and stores the result in mm0. Streaming SIMD Extensions or SSE also includes a floating point mode in which only the very first value of the registers is actually modified (expanded in SSE2). Some other unusual instructions have been added, including a sum of absolute differences (used for motion estimation in video compression, such as is done in MPEG) and a 16-bit multiply accumulation instruction (useful for software-based alpha-blending and digital filtering). SSE (since SSE3) and 3DNow! extensions include addition and subtraction instructions for treating paired floating point values like complex numbers. These instruction sets also include numerous fixed sub-word instructions for shuffling, inserting and extracting the values around within the registers.
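The paddw behaviour just described, four independent 16-bit additions packed into one register with wrap-around inside each lane, can be mimicked with NumPy's fixed-width integer types. This is an editorial sketch of the arithmetic only, not an emulation of the instruction itself.

    import numpy as np

    def paddw(mm0, mm1):
        # Model of paddw mm0, mm1: lane-wise 16-bit adds, wrapping modulo 2**16.
        a = np.array(mm0, dtype=np.uint16)
        b = np.array(mm1, dtype=np.uint16)
        return [int(v) for v in a + b]

    print(paddw([1, 2, 3, 0xFFFF], [10, 20, 30, 1]))   # [11, 22, 33, 0]: the last lane wraps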
In addition there are instructions for moving data between the integer registers and XMM (used in SSE)/FPU (used in MMX) registers.

Data manipulation instructions
The x86 processor also includes complex addressing modes for addressing memory with an immediate offset, a register, a register with an offset, a scaled register with or without an offset, and a register with an optional offset and another scaled register. So for example, one can encode mov eax, [Table + ebx + esi*4] as a single instruction which loads 32 bits of data from the address computed as (Table + ebx + esi * 4) offset from the ds selector, and stores it to the eax register. In general x86 processors can load and use memory matched to the size of any register it is operating on. (The SIMD instructions also include half-load instructions.)
The x86 instruction set includes string load, store, move, scan and compare instructions (lods, stos, movs, scas and cmps) which perform each operation to a specified size (b for 8-bit byte, w for 16-bit word, d for 32-bit double word) and then increment/decrement (depending on DF, the direction flag) the implicit address register (si for lods, di for stos and scas, and both for movs and cmps). For the load, store and scan operations, the implicit target/source/comparison register is the al, ax or eax register (depending on size). The implicit segment registers used are ds for si and es for di. The ecx register is used as a decrementing counter, and the operation stops when the counter reaches zero or (for scans and comparisons) when inequality is detected.
The stack is implemented with an implicitly decrementing (push) and incrementing (pop) stack pointer. In 16-bit mode, this implicit stack pointer is addressed as SS:[SP], in 32-bit mode it is SS:[ESP], and in 64-bit mode it is [RSP]. The stack pointer actually points to the last value that was stored, under the assumption that its size will match the operating mode of the processor (i.e., 16, 32, or 64 bits) to match the default width of the push, pop, call and ret instructions. Also included are the enter and leave instructions, which reserve and remove data from the top of the stack while setting up a stack frame pointer in rbp. However, direct setting of, or addition and subtraction to, the rsp register is also supported, so the enter and leave instructions are generally unnecessary. This code in the beginning of a function:

    push ebp      ; save calling function's stack frame (ebp)
    mov ebp, esp  ; make a new stack frame on top of our caller's stack
    sub esp, 4    ; allocate 4 bytes of stack space for this function's local variables

...is functionally equivalent to just:

    enter 4, 0

Other instructions for manipulating the stack include pushf and popf for storing and retrieving the (E)FLAGS register. The pusha and popa instructions will store and retrieve the entire integer register state to and from the stack.
Values for a SIMD load or store are assumed to be packed in adjacent positions for the SIMD register and will align them in sequential little-endian order. Some SSE load and store instructions require 16-byte alignment to function properly. The SIMD instruction sets also include "prefetch" instructions which perform the load but do not target any register, used for cache loading. The SSE instruction sets also include non-temporal store instructions which will perform stores straight to memory without performing a cache allocate if the destination is not already cached (otherwise it will behave like a regular store).
Most generic integer and floating point (but no SIMD) instructions can use one parameter as a complex address as the second source parameter. Integer instructions can also accept one memory parameter as a destination operand.
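As a rough model of the string-copy behaviour described above (source and destination indices advancing under a decrementing counter, with the step direction set by DF), here is an editorial Python sketch. It models only the register bookkeeping of a rep movsb-style copy, not segments, operand sizes or faults.

    def rep_movsb(memory, si, di, ecx, df=0):
        # Copy ecx bytes from memory[si] to memory[di], stepping by +1 (DF = 0) or -1 (DF = 1).
        step = -1 if df else 1
        while ecx:
            memory[di] = memory[si]
            si += step
            di += step
            ecx -= 1
        return si, di, ecx

    mem = list(b"Hello world!----------------")      # 12 message bytes followed by 16 placeholders
    si, di, ecx = rep_movsb(mem, si=0, di=16, ecx=12)
    print(bytes(mem).decode())                       # Hello world!----Hello world!
    print(si, di, ecx)                               # 12 28 0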
Program flow
x86 assembly has an unconditional jump operation, jmp, which can take an immediate address, a register or an indirect address as a parameter (note that most RISC processors only support a link register or short immediate displacement for jumping). Also supported are several conditional jumps, including jz (jump on zero), jnz (jump on non-zero), jg (jump on greater than, signed), jl (jump on less than, signed), ja (jump on above/greater than, unsigned), and jb (jump on below/less than, unsigned). These conditional operations are based on the state of specific bits in the (E)FLAGS register. Many arithmetic and logic operations set, clear or complement these flags depending on their result. The comparison cmp (compare) and test instructions set the flags as if they had performed a subtraction or a bitwise AND operation, respectively, without altering the values of the operands. There are also instructions such as clc (clear carry flag) and cmc (complement carry flag) which work on the flags directly. Floating point comparisons are performed via fcom or ficom instructions which eventually have to be converted to integer flags.
Each jump operation has three different forms, depending on the size of the operand. A short jump uses an 8-bit signed operand, which is a relative offset from the current instruction. A near jump is similar to a short jump but uses a 16-bit signed operand (in real or protected mode) or a 32-bit signed operand (in 32-bit protected mode only). A far jump is one that uses the full segment base:offset value as an absolute address. There are also indirect and indexed forms of each of these.
In addition to the simple jump operations, there are the call (call a subroutine) and ret (return from subroutine) instructions. Before transferring control to the subroutine, call pushes the segment offset address of the instruction following the call onto the stack; ret pops this value off the stack, and jumps to it, effectively returning the flow of control to that part of the program. In the case of a far call, the segment base is pushed following the offset; far ret pops the offset and then the segment base to return.
There is also a similar pair of instructions for interrupts: int (interrupt) saves the current (E)FLAGS register value on the stack, then performs a far call, except that instead of an address, it uses an interrupt vector, an index into a table of interrupt handler addresses. Typically, the interrupt handler saves all other CPU registers it uses, unless they are used to return the result of an operation to the calling program (in software called interrupts). The matching return from interrupt instruction is iret, which restores the flags after returning.
Soft interrupts of the type described above are used by some operating systems for system calls, and can also be used in debugging hard interrupt handlers. Hard interrupts are triggered by external hardware events, and must preserve all register values as the state of the currently executing program is unknown. In Protected Mode, interrupts may be set up by the OS to trigger a task switch, which will automatically save all registers of the active task.
.model small
.stack 100h

.data
msg db 'Hello world!$'

.code
start:
    mov ah, 09h    ; Display the message
    lea dx, msg
    int 21h
    mov ax, 4C00h  ; Terminate the executable
    int 21h
end start

"Hello World!" program for Windows in MASM style assembly

; requires /coff switch on 6.15 and earlier versions
.386
.model small,c
.stack 100h

.data
msg db "Hello World!",0

.code
includelib MSVCRT
extrn printf:near
extrn exit:near
public main
main proc
    push offset msg
    call printf
    push 0
    call exit
main endp
end main

"Hello world!" program for Linux in NASM style assembly

;
; This program runs in 32-bit protected mode.
; build: nasm -f elf -F stabs name.asm
; link:  ld -o name name.o
;
; In 64-bit protected mode you can use 64-bit registers (e.g. rax instead of eax, rbx instead of ebx, etc..)
; Also change "-f elf " for "-f elf64" in build command.
;
section .data                       ; section for initialized data
str:     db 'Hello world!', 0Ah     ; message string with new-line char at the end (10 decimal)
str_len: equ $ - str                ; calcs length of string (bytes) by subtracting this' address ($ symbol)
                                    ; from the str's start address

section .text                       ; this is the code section
global _start                       ; _start is the entry point and needs global scope to be 'seen' by the
                                    ; linker - equivalent to main() in C/C++
_start:                             ; procedure start
    mov eax, 4                      ; specify the sys_write function code (from OS vector table)
    mov ebx, 1                      ; specify file descriptor stdout - in linux, everything's treated as a file,
                                    ; even hardware devices
    mov ecx, str                    ; move start _address_ of string message to ecx register
    mov edx, str_len                ; move length of message (in bytes)
    int 80h                         ; tell kernel to perform the system call we just set up -
                                    ; in linux services are requested through the kernel
    mov eax, 1                      ; specify sys_exit function code (from OS vector table)
    mov ebx, 0                      ; specify return code for OS (0 = everything's fine)
    int 80h                         ; tell kernel to perform system call

"Hello world!" program for Linux in NASM style assembly using the C standard library

;
; This program runs in 32-bit protected mode.
; gcc links the standard-C library by default
; build: nasm -f elf -F stabs name.asm
; link:  gcc -o name name.o
;
; In 64-bit protected mode you can use 64-bit registers (e.g. rax instead of eax, rbx instead of ebx, etc..)
; Also change "-f elf " for "-f elf64" in build command.
;
global main                     ; main must be defined as it being compiled against the C-Standard Library
extern printf                   ; declares use of external symbol as printf is declared in a different object-module.
                                ; Linker resolves this symbol later

segment .data                   ; section for initialized data
string db 'Hello world!', 0Ah   ; message string with new-line char at the end (10 decimal)
                                ; string now refers to the starting address at which 'Hello, World' is stored.

segment .text
main:
    push string                 ; push the address of first character of string onto stack. This will be argument to printf
    call printf                 ; calls printf
    add esp, 4                  ; advances stack-pointer by 4 flushing out the pushed string argument
    ret                         ; return

print "/bin/sh\n" program in 64-bit mode Linux

section .text
global _start, write

write:
    mov al, 1                   ; write syscall
    syscall
    ret

_start:
    mov rax, 0x0a68732f6e69622f ; /bin/sh\n
    push rax
    xor rax, rax
    mov rsi, rsp
    mov rdi, 1
    mov rdx, 8
    call write

exit:                           ; just exit, not a function
    xor rax, rax
    mov rax, 60
    syscall

Using the flags register

Flags are heavily used for comparisons in the x86 architecture. When a comparison is made between two data, the CPU sets the relevant flag or flags.
Following this, conditional jump instructions can be used to check the flags and branch to code that should run, e.g.:

    cmp eax, ebx
    jne do_something
    ; ...
do_something:
    ; do something here

Flags are also used in the x86 architecture to turn on and off certain features or execution modes. For example, to disable all maskable interrupts, you can use the instruction:

    cli

The flags register can also be directly accessed. The low 8 bits of the flag register can be loaded into ah using the lahf instruction. The entire flags register can also be moved on and off the stack using the instructions pushf and popf.

Using the instruction pointer register

The instruction pointer is called ip in 16-bit mode, eip in 32-bit mode, and rip in 64-bit mode. The instruction pointer register points to the memory address which the processor will next attempt to execute; it cannot be directly accessed in 16-bit or 32-bit mode, but a sequence like the following can be written to put the address of next_line into eax:

    call next_line
next_line:
    pop eax

This sequence of instructions generates position-independent code because call takes an instruction-pointer-relative immediate operand describing the offset in bytes of the target instruction from the next instruction (in this case 0).

Writing to the instruction pointer is simple, since a jmp instruction sets the instruction pointer to the target address; for example, a sequence like the following will put the contents of eax into the instruction pointer:

    jmp eax

In 64-bit mode, instructions can reference data relative to the instruction pointer, so there is less need to copy the value of the instruction pointer to another register.

See also
- Assembly language
- X86 instruction listings
- X86 architecture
- CPU design
- List of assemblers
- Self-modifying code

Wikibooks has a book on the topic of: x86 Assembly
- Novice and Advanced Assembly resources for x86 Platform
- Which Assembler is the Best? - A comparison of x86 assemblers
- Intel 64 and IA-32 Software Developer Manuals
- AMD64 Architecture Programmer's Manual Volume 1: Application Programming (PDF)
- AMD64 Architecture Programmer's Manual Volume 2: System Programming (PDF)
- AMD64 Architecture Programmer's Manual Volume 3: General-Purpose and System Instructions (PDF)
- AMD64 Architecture Programmer's Manual Volume 4: 128-Bit Media Instructions (PDF)
- AMD64 Architecture Programmer's Manual Volume 5: 64-Bit Media and x87 Floating-Point Instructions (PDF)
http://en.wikipedia.org/wiki/X86_assembly_language
Author: Robert Lawrence, D.C. Everest Junior High School, Schofield, WI Overview of Lesson: Students focus on methods of science, acting as paleontologists who make inferences about the weight of dinosaurs from models and the density of water. Three class periods. If a written lab report is expected, allow several more days. Students' Prior Knowledge: This activity is done after developing the concept that it is possible to approximate the height of an animal from limited evidence. Denver Earth Science Projects module on "Paleontology and Dinosaurs" has an activity called "A Lengthy Relationship" that works well as an introduction to methods of determining size. There are some difficult concepts in measurement and mathematics involved in this activity. The following questions identify what students know, or think they know, about how a dinosaur can be weighed. Students answer them on a sheet of paper. The sheets are not graded, but are collected and returned as part of the post-lab discussion. The models chosen for use in this activity are the Carnegie dinosaur models at the scale of 1:40. These models are considered to be as accurate as our current knowledge will allow. This scale means that 1 cm on this model represents 40 cm on the actual dinosaur. Since volume involves three dimensions (length X width X height), you have to cube the scale factor before using the model's volume to find the volume of the dinosaur it represents. If we assume the dinosaurs were the same density as land animals today, their density would be close to the density of water, which is 1 gram per cubic centimeter. Since we have a reasonable estimate of the density of dinosaurs, and can determine the volume of the scale model dinosaur, we can mathematically determine the mass using the equation for density. A major concept to be developed through this activity is that physical processes are assumed to be uniform in time and space. As geologists say, "the present is the key to the past". Students need to be aware of the assumptions that are made in this activity. A second concept is that scientists studying the past use experimentation and scientific method as much as any other scientists. All scientific ideas must be testable. 2. While doing this activity you will be working on state and district standards that require you to: -Design real or thought investigations to test the usefulness and limitations of a model. -Explain how the general rules of science apply to the development and use of evidence in science investigations. -Identify data and locate sources of information including your own records to answer the questions being investigated. -Use inferences to help decide possible results of your investigations, use observations to check your inferences. -State what you have learned from your investigations, relating your inferences to scientific knowledge and to data you have collected. -Evaluate, explain, and defend the validity of questions, hypotheses, and conclusions to your investigations. 3. Lab Format: Use this lab sheet to write notes and information while you do the activity. Your final copy is to be done on loose-leaf paper, in pen or typed. PROBLEM: How much did a _____________________ weigh?Teacher Notes: I bought the Carnegie dinosaur models in Sun Prairie, WI at the Dragon's Whistle. They are available at many other better quality toy stores as well. The activity can be done as a station if you want to cut down on the expense of buying many models. You will be assigned a species of dinosaur. 
When you write your report, put in the name of your dinosaur. HYPOTHESIS: Based on my research, I think that a ____________________ weighed ____________. A good starting point for your research is The Royal Tyrell Museums fossil encyclopedia at: Procedure:mass = density X volume - Use the data table to record your information. - Measure and record the length and height of the model. - Multiply the length and the height by the scale factor of 40 to get the size of the dinosaur. - Tie a string around the model - Fill the overflow container with water until the final drop added overflows. - Lower the model into the water. Catch the water that overflows with a graduated cylinder. The volume of the water that overflows is the volume of the dinosaur model. - Remove the model, refill the container, and repeat the measurement to check for accuracy. - Record the volume of the model. The volume of the dinosaur is 40X40X40 times greater. Calculate the volume of the dinosaur (in cubic centimeters). - Convert the volume to cubic meters by dividing by 1,000,000. - Calculate the mass of the dinosaur based on the data you collected from the model. density of dinosaur = density of water = 1000 kg / cubic meter DATA TABLE TRIAL 1 TRIAL 2 HEIGHT OF MODEL (cm) (cm) LENGTH OF MODEL (cm) (cm) HEIGHT OF DINOSAUR (m) (m) LENGTH OF DINOSAUR (m) (m) WATER DISPLACED (ml) (ml) VOLUME OF MODEL (cm3) (cm3) VOLUME OF DINOSAUR = MODEL VOLUME x40x40x40 (cm3) (cm3) VOLUME OF DINOSAUR (m3) (m3) (m3) MASS OF DINOSAUR = DINOSAUR VOLUME x 1000 Kg / m3 CONCLUSION: Based on my measurements, a ____________________ weighed around _____________________. A conclusion starts by answering the question at the beginning of the lab write-up, then adds interpretation and analysis. For this lab, please respond to the interpretation questions in the conclusion. 1. Convert your own weight (in pounds) to mass in kilograms by dividing your weight by 2.2. (There are 2.2 pounds in one kilogram.) 2. How many times greater was the mass of your dinosaur than your mass? 3. What assumptions have you made in the activity that would influence the accuracy of your results? 4. What effect would the mass of the dinosaur have on the way it moved? Consider Newton's 2nd Law of Motion as you answer. 5. What are possible sources of error in the method used to find the mass of a dinosaur? 6. What is another method that could be used to find the mass of a dinosaur? Answers for the interpretation questions: - Convert your weight (in pounds) to mass in kilograms by dividing your weight by 2.2. (There are 2.2 pounds in one kilogram.) Answers will vary. A hundred-pound person has a mass of 45 kilograms. - How many times greater was the mass of your dinosaur than your mass? Answers will vary. - What assumptions have you made in the activity that would influence the accuracy of your results? I assumed that the models are accurate, and that dinosaurs were about the density of water. - What effect would the mass of the dinosaur have on the way it moved? Consider Newton's 2nd Law of Motion as you answer. The greater the mass, the greater the force needed to accelerate that mass. The largest dinosaurs probably could not change their speed or direction very quickly. - What are possible sources of error in the method used to find the mass of a dinosaur? Possible sources of error would include errors based on the assumptions, errors in the measurement of the water, and errors in the calculations. - What is another method that could be used to find the mass of a dinosaur? 
This is an open-ended question; there are no other methods used. Because of the size of some of the models, the overflow cans have to be large. I used a 5 gallon bucket for the largest models and plastic pop bottles for the smaller models. A hole is cut near the top of the container and a short piece of plastic tubing is inserted to direct the overflow. The mathematics involved in this activity is very difficult for most middle school students. It takes time to explain the concepts of scale distance and then lead them into scale volume. It also is worth taking the time to lead students through the use of density in this activity. *write a short essay about the limits on the sizes of land creatures. *An interesting extension to this activity might be to show a short clip from the movie Honey, I Blew Up the Baby. Estimate with the students what they think the scale factor was for the movie, then remind them of the need to cube the scale factor to find scale volume and scale mass. A twenty pound baby at a scale factor of ten would weigh 20,000 pounds! Any human at that size would be reduced to a quivering blob! Vocabulary: density, scale factor, mass, and weight Math: measuring to correct accuracy, multiplication, division, ratios Language Arts: the use of suffixes and prefixes as they are used in dinosaur names. Wisconsin State Science Standards: Design real or thought investigations to test the usefulness and limitations of a model. Explain how the general rules of science apply to the development and use of evidence in science investigations, model-making, and applications. Identify data and locate sources of information including their own records to answer the questions being investigated. Use inferences to help decide possible results of their investigations, use observations to check their inferences. State what they have learned from investigations, relating their inferences to scientific knowledge and to data they have collected. Evaluate, explain, and defend the validity of questions, hypotheses, and conclusions to their investigations. Discuss the importance of their results and implications of their work with peers, teachers, and other adults. Raise further questions which still need to be answered. Analyze the geologic and life history of the earth, including change over time, using various forms of scientific evidence.
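The arithmetic described in the procedure (scale the model's volume by the cube of the scale factor, then multiply by the density of water) is easy to sanity-check with a short script. The sketch below is illustrative only: the function name and the 350 ml example measurement are made up, not taken from the Carnegie models.

```python
# Rough check of the dinosaur-mass arithmetic described in the procedure above.
# The numbers below are illustrative placeholders, not real measurements.

SCALE_FACTOR = 40        # 1 cm on the model represents 40 cm on the animal
WATER_DENSITY = 1000.0   # kg per cubic metre, the assumed density of the dinosaur

def dinosaur_mass_kg(model_volume_ml: float, scale: int = SCALE_FACTOR) -> float:
    """Estimate the mass of the full-size animal from the model's displaced water.

    1 ml of displaced water equals 1 cubic centimetre of model volume,
    and volume scales with the cube of the linear scale factor.
    """
    dino_volume_cm3 = model_volume_ml * scale ** 3   # scale up the volume
    dino_volume_m3 = dino_volume_cm3 / 1_000_000     # convert cm^3 to m^3
    return dino_volume_m3 * WATER_DENSITY            # mass = density x volume

if __name__ == "__main__":
    mass = dinosaur_mass_kg(350)  # example: a model that displaces 350 ml of water
    print(f"Estimated mass: {mass:,.0f} kg (about {mass * 2.2:,.0f} lb)")
```

Students can compare the printed value with their hand calculation to catch slips in the unit conversions.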
http://www.geology.wisc.edu/~museum/hughes/dinosaur-weight_students.html
Solve Problems Involving Measurement And Conversion Of Measurements From A Larger Unit To A Smaller Unit.

4.MD.1 Know relative sizes of measurement units within one system of units including km, m, cm; kg, g; lb, oz.; l, ml; hr, min, sec. Within a single system of measurement, express measurements in a larger unit in terms of a smaller unit. Record measurement equivalents in a two-column table. For example, know that 1 ft is 12 times as long as 1 in. Express the length of a 4 ft snake as 48 in. Generate a conversion table for feet and inches listing the number pairs (1, 12), (2, 24), (3, 36), ...

4.MD.2 Use the four operations to solve word problems involving distances, intervals of time, liquid volumes, masses of objects, and money, including problems involving simple fractions or decimals, and problems that require expressing measurements given in a larger unit in terms of a smaller unit. Represent measurement quantities using diagrams such as number line diagrams that feature a measurement scale.

4.MD.3 Apply the area and perimeter formulas for rectangles in real world and mathematical problems. For example, find the width of a rectangular room given the area of the flooring and the length, by viewing the area formula as a multiplication equation with an unknown factor.

Represent And Interpret Data.

4.MD.4 Make a line plot to display a data set of measurements in fractions of a unit (1/2, 1/4, 1/8). Solve problems involving addition and subtraction of fractions by using information presented in line plots. For example, from a line plot find and interpret the difference in length between the longest and shortest specimens in an insect collection.

Geometric Measurement: Understand Concepts Of Angle And Measure Angles.

4.MD.5 Recognize angles as geometric shapes that are formed wherever two rays share a common endpoint, and understand concepts of angle measurement:

4.MD.5.a An angle is measured with reference to a circle with its center at the common endpoint of the rays, by considering the fraction of the circular arc between the points where the two rays intersect the circle. An angle that turns through 1/360 of a circle is called a "one-degree angle," and can be used to measure angles.

4.MD.5.b An angle that turns through n one-degree angles is said to have an angle measure of n degrees.

4.MD.6 Measure angles in whole-number degrees using a protractor. Sketch angles of specified measure.

4.MD.7 Recognize angle measure as additive. When an angle is decomposed into non-overlapping parts, the angle measure of the whole is the sum of the angle measures of the parts. Solve addition and subtraction problems to find unknown angles on a diagram in real world and mathematical problems, e.g., by using an equation with a symbol for the unknown angle measure.

Major clusters will make up a majority of the assessment, supporting clusters will be assessed through their success at supporting the major clusters, and additional clusters will be assessed as well.
The assessments will strongly focus where the standards strongly focus.
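Standard 4.MD.1 above asks students to generate two-column conversion tables such as (1, 12), (2, 24), (3, 36) for feet and inches. For teachers who want to produce such tables quickly, a few lines of code suffice; the function name and the feet-to-inches example are illustrative choices, not part of the standard.

```python
def conversion_table(factor: int, rows: int):
    """Return (larger-unit, smaller-unit) pairs, e.g. feet to inches with factor=12."""
    return [(n, n * factor) for n in range(1, rows + 1)]

# Feet-to-inches table described in 4.MD.1: (1, 12), (2, 24), (3, 36), ...
for feet, inches in conversion_table(12, 5):
    print(f"{feet} ft = {inches} in")
```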
http://powermylearning.com/directory/math/4/measurement-data
Information about Multiplication Multiplication is the mathematical operation of adding together multiple copies of the same number. For example, four multiplied by three is twelve, since three sets of four make twelve: Multiplication can also be viewed as counting objects arranged in a rectangle, or finding the area of rectangle whose sides have given lengths. Multiplication is one of four main operations in elementary arithmetic, and most people learn basic multiplication algorithms in elementary school. The inverse of multiplication is division. Notation and terminologyMultiplication is written using the multiplication sign "×" between the terms; that is, in infix notation. The result is expressed with an equals sign. For example, - (verbally, "two times three equals six") There are several other common notations for multiplication: - Multiplication is sometimes denoted by either a middle dot or a period: The middle dot is standard in the United States, the United Kingdom, and other countries where the period is used as a decimal point. In countries that use a comma as a decimal point, the period is used for multiplication instead. - The asterisk (e.g. 5 * 2) is often used with computers because it appears on every keyboard. This usage originated in the FORTRAN programming language. - In algebra, multiplication involving variables is often written as a (e.g. xy for x times y or 5x for five times x). This notation can also be used for numbers that are surrounded by parentheses (e.g. 5(2) or (5)(2) for five times two). The result of a multiplication is called a product, and is a multiple of each factor. For example 15 is the product of 3 and 5, and is both a multiple of 3 and a multiple of 5. The standard methods for multiplying numbers using pencil and paper require a multiplication table of memorized or consulted products of small numbers (typically any two numbers from 0 to 9), however one method, the peasant multiplication algorithm, does not. Many mathematics curricula developed according to the 1989 standards of the NCTM do not teach standard arithmetic methods, instead guiding students to invent their own methods of computation. Though widely adopted by many school districts in nations such as the United States, they have encountered resistance from some parents and mathematicians, and some districts have since abandoned such curricula in favor of traditional mathematics. Multiplying numbers to more than a couple of decimal places by hand is tedious and error prone. Common logarithms were invented to simplify such calculations. The slide rule allowed numbers to be quickly multiplied to about three places of accuracy. Beginning in the early twentieth century, mechanical calculators, such as the Marchant, automated multiplication of up to 10 digit numbers. Modern electronic computers and calculators have greatly reduced the need for multiplication by hand. Historical algorithmsMethods of multiplication were documented in the Egyptian, Greece, Babylonian, Indus valley, and Chinese civilizations. BabyloniansThe Babylonians used a sexagesimal positional number system, analogous to the modern day decimal system. Thus, Babylonian multiplication was very similar to modern decimal multiplication. Because of the relative difficulty of remembering 60 × 60 different products, Babylonian mathematicians employed multiplication tables. 
These tables consisted of a list of the first twenty multiples of a certain principal number n: n, 2n, ..., 20n; followed by the multiples of 10n: 30n 40n, and 50n. Then to compute any sexagesimal product, say 53n, one only needed to add 50n and 3n computed from the table. ChineseIn the books, Chou Pei Suan Ching dated prior to 300 B.C., and the Nine Chapters on the Mathematical Art, multiplication calculations were written out in words, although the early Chinese mathematicians employed an abacus in hand calculations involving addition and multiplication. Indus ValleyThe early Hindu mathematicians of the Indus valley region used a variety of intuitive tricks to perform multiplication. Most calculations were performed on small slate hand tablets, using chalk tables. One technique was that of lattice multiplication (or gelosia multiplication). Here a table was drawn up with the rows and columns labelled by the multiplicands. Each box of the table was divided diagonally into two, as a triangular lattice. The entries of the table held the partial products, written as decimal numbers. The product could then be formed by summing down the diagonals of the lattice. Products of sequences Capital pi notationThe product of a series of terms can be written with the product symbol, which derives from the capital letter Π (Pi) in the Greek alphabet. Unicode position U+220F (∏) is defined a n-ary product for this purpose, distinct from U+03A0 (Π), the letter. This is defined as: The subscript gives the symbol for a dummy variable ( in our case) and its lower value (); the superscript gives its upper value. So for example: In case m = n, the value of the product is the same as that of the single factor xm. If m > n, the product is the empty product, with the value 1. One may also consider products of infinitely many terms; these are called infinite products. Notationally, we would replace n above by the infinity symbol (∞). In the reals, the product of such a series is defined as the limit of the product of the first terms, as grows without bound. That is: One can similarly replace with negative infinity, and for some integer , provided both limits exist. Cartesian productThe definition of multiplication as repeated addition provides a way to arrive at a set-theoretic interpretation of multiplication of cardinal numbers. In the expression if the n copies of a are to be combined in disjoint union then clearly they must be made disjoint; an obvious way to do this is to use either a or n as the indexing set for the other. Then, the members of are exactly those of the Cartesian product . The properties of the multiplicative operation as applying to natural numbers then follow trivially from the corresponding properties of the Cartesian product. PropertiesFor integers, fractions, real and complex numbers, multiplication has certain properties: - the order in which two numbers are multiplied does not matter. This is called the commutative property, - x · y = y · x. - The associative property means that for any three numbers x, y, and z, - (x · y)·z = x·(y · z). - Note from algebra: the parentheses mean that the operations inside the parentheses must be done before anything outside the parentheses is done. - Multiplication also has what is called a distributive property with respect to the addition, - x·(y + z) = x·y + x·z. - Also of interest is that any number times 1 is equal to itself, thus, - 1 · x = x. - and this is called the identity property. 
In this regard the number 1 is known as the multiplicative identity. - The sum of zero numbers is zero. - This fact is directly received by means of the distributive property: - m · 0 = (m · 0) + m − m = (m · 0) + (m · 1) − m = m · (0 + 1) − m = (m · 1) − m = m − m = 0. - m · 0 = 0 - no matter what m is (as long as it is finite). - Multiplication with negative numbers also requires a little thought. First consider negative one (−1). For any positive integer m: - (−1)m = (−1) + (−1) +...+ (−1) = −m - This is an interesting fact that shows that any negative number is just negative one multiplied by a positive number. So multiplication with any integers can be represented by multiplication of whole numbers and (−1)'s. - All that remains is to explicitly define (−1)·(−1): - (−1)·(−1) = −(−1) = 1 - However, from a formal viewpoint, multiplication between two negative numbers is (again) directly received by means of the distributive property, e.g: (−1)·(−1) = (−1)·(−1) + (−2) + 2 = (−1)·(−1) + (−1)·2 + 2 = (−1)·(−1 + 2) + 2 = (−1)·1 + 2 = (−1) + 2 = 1 - Every number x, except zero, has a multiplicative inverse, 1/x, such that x·(1/x) = 1. - Multiplication by a positive number preserves order: if a > 0, then if b > c then a·b > a·c. Multiplication by a negative number reverses order: if a < 0, then if b > c then a·b < a·c. Multiplication with Peano's axioms - In the book Arithmetices principia, nova methodo exposita, Giuseppe Peano proposed a new system for multiplication based on his axioms for natural numbers. - Here, b' represents the successor of b, or the natural number which follows b. With his other nine axioms, it is possible to prove common rules of multiplication, such as the distributive or associative properties. Multiplication with set theoryIt is possible, though difficult, to create a recursive definition of multiplication with set theory. Such a system usually relies on the peano definition of multiplication. Multiplication with group theoryIt is easy to show that there is a group for multiplication- the non-zero rational numbers. Multiplication with the non-zero numbers satisfies - Closure - For all a and b in the group, a×b is in the group. - Associativity - This is just the associative property! (a×b)×c=a×(b×c) - Identity - This follows straight from the peano definition. Anything multiplied by one is itself. - Inverse - All non-zero numbers have a multiplicative inverse. - Multiplicative inverse, the reciprocal - Multiplication algorithm - Karatsuba algorithm, method for large numbers - Toom-Cook algorithm, method for very large numbers - Schönhage-Strassen algorithm, method for huge numbers - Multiplication table (times table) - Multiplication ALU, how computers multiply - Booth's multiplication algorithm - Floating point - Fused multiply-add - Wallace tree - Napier's bones - Peasant multiplication - Product (mathematics) - lists generalizations - Slide rule - Boyer, Carl B. (revised by Merzbach, Uta C.) (1991). History of Mathematics. John Wiley and Sons, Inc.. ISBN 0-471-54397-7. In its simplest meaning in mathematics and logic, an operation is an action or procedure which produces a new value from one or more input values. There are two common types of operations: unary and binary. - Practicing and Learning Multiplication - Multiplication and Arithmetic Operations In Various Number Systems at cut-the-knot - Modern Chinese Multiplication Techniques on an Abacus - Multiplication Worksheets and Puzzles - Math Games for Multiplication ..... 
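The article mentions the peasant multiplication algorithm as a method that needs no multiplication table, only halving, doubling and adding. A minimal sketch of one common formulation (the function name is mine) is:

```python
def peasant_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers by repeated halving and doubling.

    Halve a (dropping remainders) and double b; whenever the halved value
    is odd, add the current doubled value to the running total.
    """
    total = 0
    while a > 0:
        if a % 2 == 1:   # odd row: keep this partial product
            total += b
        a //= 2          # halve (integer division)
        b *= 2           # double
    return total

assert peasant_multiply(4, 3) == 12
assert peasant_multiply(238, 13) == 238 * 13
```

The method works because writing a in binary expresses the product as a sum of doublings of b, which is also why it resembles the Ancient Egyptian technique cited above.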
http://english.turkcebilgi.com/multiplication
A standard definition of static equilibrium:
- A system of particles is in static equilibrium when all the particles of the system are at rest and the total force on each particle is permanently zero.

This is a strict definition, and often the term "static equilibrium" is used in a more relaxed manner interchangeably with "mechanical equilibrium", as defined next.

A standard definition of mechanical equilibrium for a particle is:
- The necessary and sufficient condition for a particle to be in mechanical equilibrium is that the net force acting upon the particle is zero.

The necessary conditions for mechanical equilibrium for a system of particles are:
- (i) The vector sum of all external forces is zero;
- (ii) The sum of the moments of all external forces about any line is zero.

As applied to a rigid body, the necessary and sufficient conditions become:
- A rigid body is in mechanical equilibrium when the sum of all forces on all particles of the system is zero, and also the sum of all torques on all particles of the system is zero.

A rigid body in mechanical equilibrium is undergoing neither linear nor rotational acceleration; however it could be translating or rotating at a constant velocity.

However, this definition is of little use in continuum mechanics, for which the idea of a particle is foreign. In addition, this definition gives no information as to one of the most important and interesting aspects of equilibrium states – their stability.

An alternative definition of equilibrium that applies to conservative systems and often proves more useful is:
- A system is in mechanical equilibrium if its position in configuration space is a point at which the gradient of the potential energy with respect to the generalized coordinates is zero.

Because of the fundamental relationship between force and energy, this definition is equivalent to the first definition. However, the definition involving energy can be readily extended to yield information about the stability of the equilibrium state. For example, from elementary calculus, we know that a necessary condition for a local minimum or a maximum of a differentiable function is a vanishing first derivative (that is, the first derivative becomes zero). To determine whether a point is a minimum or maximum, one may be able to use the second derivative test. The consequences for the stability of the equilibrium state are as follows:

- Second derivative < 0: The potential energy is at a local maximum, which means that the system is in an unstable equilibrium state. If the system is displaced an arbitrarily small distance from the equilibrium state, the forces of the system cause it to move even farther away.
- Second derivative > 0: The potential energy is at a local minimum. This is a stable equilibrium. The response to a small perturbation is forces that tend to restore the equilibrium. If more than one stable equilibrium state is possible for a system, any equilibria whose potential energy is higher than the absolute minimum represent metastable states.
- Second derivative = 0 or does not exist: The second derivative test fails, and one must typically resort to using the first derivative test. Both of the previous results are still possible, as is a third: this could be a region in which the energy does not vary, in which case the equilibrium is called neutral, indifferent, or marginally stable. To lowest order, if the system is displaced a small amount, it will stay in the new state.

In more than one dimension, it is possible to get different results in different directions, for example stability with respect to displacements in the x-direction but instability in the y-direction, a case known as a saddle point. Without further qualification, an equilibrium is stable only if it is stable in all directions.

The special case of mechanical equilibrium of a stationary object is static equilibrium. A paperweight on a desk would be in static equilibrium. The minimal number of static equilibria of homogeneous, convex bodies (when resting under gravity on a horizontal surface) is of special interest. In the planar case, the minimal number is 4, while in three dimensions one can build an object with just one stable and one unstable balance point; such a body is called a gömböc.

A child sliding down a slide at constant speed would be in mechanical equilibrium, but not in static equilibrium.

An example of mechanical equilibrium is a person trying to press a spring. He or she can push it up to a point after which it reaches a state where the force trying to compress it and the resistive force from the spring are equal, so the person cannot press it further. At this state the system will be in mechanical equilibrium. When the pressing force is removed the spring attains its original state.

- Dynamic equilibrium
- Engineering mechanics
- Metastability
- Statically indeterminate
- Statics
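The second-derivative test described above is easy to apply numerically. The sketch below (not from the original article) classifies the equilibria of an arbitrary example potential, V(x) = x^4 - 2x^2, by the sign of the second derivative at the points where the first derivative vanishes; the potential and the finite-difference step sizes are illustrative choices.

```python
# Classify equilibria of a one-dimensional potential by the second-derivative test.
# V(x) = x**4 - 2*x**2 has equilibria at x = -1, 0, +1.

def V(x: float) -> float:
    return x**4 - 2 * x**2

def dV(x: float, h: float = 1e-5) -> float:
    return (V(x + h) - V(x - h)) / (2 * h)           # central-difference first derivative

def d2V(x: float, h: float = 1e-4) -> float:
    return (V(x + h) - 2 * V(x) + V(x - h)) / h**2   # central-difference second derivative

for x0 in (-1.0, 0.0, 1.0):                          # candidate equilibrium points
    assert abs(dV(x0)) < 1e-6                        # the gradient of V vanishes here
    kind = "stable (local minimum)" if d2V(x0) > 0 else "unstable (local maximum)"
    print(f"x = {x0:+.1f}: V'' = {d2V(x0):+.2f} -> {kind}")
```

For this double-well potential the two outer points come out stable and the central point unstable, matching the sign rules listed above.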
http://www.absoluteastronomy.com/topics/Mechanical_equilibrium
Principles of Nature: towards a new visual language© copyright 2003-2008 Wayne Roberts. All rights reserved. Completing a scale structure of triangles: For reasons similar to those given in the case of the Eutrigon theorem, Q,...can represent any co-eutrigon, proving the theorem true for all co-eutrigons. We may summarise this reasoning as follows: The shape of any co-eutrigon is specified by the ratio of its leg lengths, a/b. If we let the shorter of a eutrigon's legs be a, then the ratio a/b is always in the range 0 < a/b < (or equal to) 1. It can be seen from an examination of [the] figure ... that every ratio of a/b is possible in the diagram: a can be vanishing small or can be any value up to and including a = b. Thus every possible shape of co-eutrigon can be accommodated in the diagram without altering the geometric relations, and thus the theorem holds true for all co-eutrigons. The geometric form of the Co-eutrigon theorem states: the area of any co-eutrigon (i.e. a triangle in which one angle is 120°) is equal to the area of the equilateral triangle on its hypotenuse ‘c’ minus the combined areas of the equilateral triangles on legs ‘a’ and ‘b’ [see figure C-ET1 above]. How can this be expressed algebraically, that is, as an equation? Earlier we determined the area of an equilateral triangle in etu (which is simply p2 where p is the side length) but we have not yet determined the equation for the area of a co-eutrigon in etu. However, there is a beautiful synchronicity with the equation for the eutrigon’s area which follows from a correspondence of the triangle altitudes between eutrigons and co-eutrigons. This again reflects the complementary relationship between the two classes of triangle. It is well-known and easily proven that triangles of the same base and same altitude have the same area. This means that there is a surprising resonance between a eutrigon’s and co-eutrigon’s areas —if legs a and b (i.e. the sides adjacent to the defining angle) are equal then it follows that they have the same area, namely their product, ab (as expressed in etu), [see figure below]. These two triangles have the same area because their bases and altitudes are the same. We may demonstrate the complementarity of these triangles more clearly by placing them as [below], Same triangles as above but here placed side-by-side to highlight their complementarity. The area of each triangle is identical when the bases b are equal and share the same 60° altitude a. We can therefore state the algebraic form of the Co-eutrigon Theorem in terms of the new relative units of area (etu) as, The area of a co-eutrigon (given in etu) = ab = c2 – a2 – b2 This follows from [Figure C-ET3] and the ‘area equation’, Q = C – A – B. As with the Eutrigon theorem’s algebraic form discussed earlier, the Co-eutrigon Theorem is also consistent with the Cosine Rule and is the same as that rule for the special case when angle C = 120°. ... Since 2CosC = -1, the Cosine Rule reduces to c2 = a2 + b2 + ab and, rearranging terms, we obtain the Co-eutrigon theorem form above, ab = c2 – a2 – b2 Including the Pythagorean equation, we now have three Pythagoras-like equations and their corresponding geometric theorems (through the induction of relative units). Given that a and b are the sides adjacent to the defining angle (e.g. 
the 90° angle of a right triangle), and c is the hypotenuse or side opposite the defining angle, the three equations are:

c² = a² + b² (right triangle, defining angle 90°)

ab = a² + b² – c² (eutrigon, defining angle 60°)

ab = c² – a² – b² (co-eutrigon, defining angle 120°)

These three equations and their associated geometric forms exactly correspond (via their respective stipulated internal angle) to the three regular polygons (equilateral triangle, square, and hexagon) which can uniformly tile the flat plane without gaps. This then completes a scale structure of not only three algebraic theorems but of their corresponding resonant geometric theorems, and it is reasonable to conjecture, I feel, that when recognised and implemented as a complete scale structure within mathematical practice, and utilised in ‘resonant application’, significant advances may follow in number theory and in our understanding of the foundations needed for a new visual language and music. For example, the whole sub-discipline of trigonometry may now be re-examined in light of the new geometric understanding of the above equations (which includes the critical notion of relative units). Number theory too is likely to be extended via the key of the relative unit, and will call into question the very foundations of number and the meaning of ‘integers’—of how they are written or represented, and of new operations, properties, and transformations that may now be discovered and made possible.
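Because each of the three identities follows from the Cosine Rule at its respective defining angle (90°, 60°, 120°), the trio is easy to verify numerically. The short script below is only a verification sketch, not part of the original text; the random leg ranges and the use of the default tolerance are arbitrary choices.

```python
import math, random

def hyp(a: float, b: float, angle_deg: float) -> float:
    """Side opposite the given angle, from the law of cosines."""
    c2 = a * a + b * b - 2 * a * b * math.cos(math.radians(angle_deg))
    return math.sqrt(c2)

for _ in range(1000):
    a, b = random.uniform(0.1, 10), random.uniform(0.1, 10)

    c = hyp(a, b, 90)                        # right triangle
    assert math.isclose(c * c, a * a + b * b)

    c = hyp(a, b, 60)                        # eutrigon
    assert math.isclose(a * b, a * a + b * b - c * c)

    c = hyp(a, b, 120)                       # co-eutrigon
    assert math.isclose(a * b, c * c - a * a - b * b)

print("All three Pythagoras-like identities hold numerically.")
```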
http://www.principlesofnature.net/number_geometry_connections/completing_the_scale_co-eutrigons_and_co-eutrigon_theorem.htm
Required math: arithmetic
Required physics: Newton’s law, kinetic energy

As we stated in the page on kinetic energy, energy in physics is the ability to do work, and work, in turn, is defined as the product of a force and the distance over which it acts. If a single force F acts on a constant mass m over a given distance, it will cause the mass to accelerate with a constant acceleration a, where these three quantities are related by Newton’s law: F = ma. It is important to realize that this formula is a mathematical expression of Newton’s assumption (based on observations) of how the world works. It is not the end result of some complicated mathematical derivation; it is simply stated as the starting point for Newton’s version of physics.

From the page on kinetic energy, we can see that the result of a force acting on a mass for a certain time is that the mass speeds up (due to its acceleration) and after a time t, it will have a velocity v = at. The energy transferred to the mass by the force is all kinetic energy (energy of motion), and has the value ½mv² = ½m(at)².

In order for this to happen, the force has to be completely unopposed, which is virtually impossible to arrange in the real world. A mass falling due to gravity may seem to be unopposed, but the friction with the air works against the gravitational force, so that the velocity after a given time t in free fall will be less than at. In fact, a mass falling through the Earth’s atmosphere has a maximum attainable velocity known as the terminal velocity, whose value depends on the mass and the shape of the object. A sheet of paper weighing a few grams reaches its terminal velocity much faster than a small iron pellet of the same mass. In the case of objects falling through air, the energy due to the gravitational force that is not converted into kinetic energy of the falling object is transferred to the air molecules through which the mass falls. The energy may show up in the form of heat or turbulence in the air, both of which are forms of kinetic energy since they are due to the motion of the air molecules.

But what happens when we actively oppose a force by moving a mass against the direction in which the force acts? We do this whenever we pick up some object, such as lifting a pencil off a desk. To do this, we are generating a force from the muscles in our arm (which is ultimately electrical force, but never mind that for now). If we raise the pencil at a constant velocity, then the amount of force we are generating is exactly equal and opposite to the gravitational force pulling the pencil down. To see this, remember Newton’s first law: an object with no net force acting on it will either remain at rest or move with a constant velocity. If we are moving the pencil at a constant velocity, it must have no net force acting on it, so the upward force we are generating must exactly balance the downward force due to gravity.

However, the force we are exerting to lift the pencil acts through a certain distance, so by the definition of work, we should be transferring some energy to the pencil. Since there is no acceleration, there is no change in the velocity, so clearly this energy is not showing up as kinetic energy. Where is it going? This is where the idea of potential energy comes in. Whenever work is done by one force against another force, the mass is ‘storing’ this work as potential energy.

If the first force (our arm lifting the pencil) is removed (we let the pencil go), then the second force (gravity) is free to act on the object and convert this stored energy back into kinetic energy (the pencil falls, and accelerates as it does so).

As Galileo famously showed, and as countless high school physics students have verified ever since, the gravitational force on an object near the surface of the Earth is proportional to the object’s mass, and can be written as F = mg, where g is the acceleration due to gravity, with a value of approximately 9.8 metres per second per second. What this curious set of units means is that for every second in free fall (ideally in a vacuum), an object’s velocity increases by 9.8 metres per second.

Given the gravitational force, we can find how much work we have to do to lift an object by a height h: we need to resist a constant force mg through a distance h, so we are doing an amount of work equal to W = mgh (force times distance, remember). This is the amount of energy that is being ‘stored’ in the object and is therefore the amount that would be released if the object is dropped and allowed to fall through the distance h. If the object’s entire store of potential energy is allowed to be converted into kinetic energy (by allowing the object to fall the full distance back to its starting point) then the kinetic energy it will have at that point is ½mv² = mgh, from which we can deduce its velocity as v = √(2gh), which, by the way, is independent of the mass, so Galileo was right after all: all objects fall at the same rate.

A constant force like gravity near the Earth’s surface is a particularly easy example since we can find the amount of work done against the force by simple multiplication. Most real forces, of course, aren’t as cooperative, and vary with distance. This means that we need to use calculus to find the amount of work done, and thus the potential energy stored, when we move a mass around in such force fields. However, the principle is the same: find the amount of work that is needed to move the mass from point A to point B and the result is the potential energy stored in the mass. When the mass is released, the potential energy will be converted to kinetic energy (or released in some other way if the object is not totally free to move under the influence of the force) if and when it gets back to point A.
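As the closing paragraph notes, a force that varies with distance calls for an integral, W = ∫F dx, rather than a simple multiplication. A quick numerical sketch shows the idea for a spring force F = kx, whose stored energy should come out to ½kx²; the spring constant, the compression distance, and the function names here are arbitrary illustrative choices, not anything from the post above.

```python
# Numerically integrate the work done against a position-dependent force F(x) = k*x,
# and compare with the closed-form potential energy of a spring, (1/2)*k*x**2.

def work_done(force, x_start: float, x_end: float, steps: int = 100_000) -> float:
    """Approximate W = integral of force(x) dx using the trapezoidal rule."""
    dx = (x_end - x_start) / steps
    total = 0.5 * (force(x_start) + force(x_end))
    for i in range(1, steps):
        total += force(x_start + i * dx)
    return total * dx

k = 250.0                       # spring constant in N/m (example value)
spring_force = lambda x: k * x  # force needed to hold the spring at displacement x

x = 0.3                         # metres of compression (example value)
W = work_done(spring_force, 0.0, x)
print(f"numerical: {W:.4f} J   exact: {0.5 * k * x**2:.4f} J")
```

For the constant gravitational force discussed above, the same routine simply reproduces W = mgh, which is why no calculus was needed in that case.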
http://physicspages.com/2011/01/07/potential-energy/
13
58
by Ed Gough Any sailor can tell you that the surface of the sea is not flat. Ocean waves induced by winds and currents cause it to undulate with waves that range from less than a foot to massive hurricane swells the size of office buildings. In the background, there are also other undulations that are much less noticeable. Even when the ocean is perfectly calm, with no waves whatsoever, these underlying differences in height form gentle hills, ridges, and valleys similar to those on land. However, these differences in altitude at sea are much smaller than on land, and the areas they cover are much larger. The ocean's "hills" and "valleys" differ in height by only a few meters, at most, over the course of many miles, which is why, even on windless days with a glass-smooth sea, the most discerning observer cannot perceive their gentle slopes with the naked eye. The constant variation of sea surface topography—also called sea surface height, or altimetry—may seem like an esoteric scientific concern of interest only to oceanographers. The differences in surface height are much too small to have any direct effect on most day-to-day ship operations. For example, a "hill" of ocean water 50 nautical miles across and only six inches high has no effect on navigation either above or below the surface. Nevertheless, the accurate, consistent, and repeated measurement of the ocean's surface plays a vital role in the U.S. Navy's undersea warfare effort. It does so because even small altitude differences greatly influence the direction and strength of sound energy as it moves through the water beneath the varying ocean terrain. Through complex physical processes, the water under areas of higher altitude tends to be moving downward, forcing the thermocline deeper in those areas. In areas of lower altitude, the reverse happens; with the thermocline being pulled upward toward the surface. By applying known relationships between the height of the sea surface and the movement of the water below, it is possible to calculate the structure of the subsurface water column and thus its acoustic properties. By accurately measuring the ocean surface, we can calculate how acoustic energy will propagate through the water column and thus how the sonar systems of submarines and surface ships will perform against target vessels, regardless of whether the targets are nearby or far away. The Challenge of Timeliness But there's a catch. Just like analogous high and low pressure systems in the atmosphere, the ocean's "highs" and "lows"—its hills and valleys—do not just stay in one place, they constantly move around and change in size and shape depending on factors such as the water depth, wind, temperature and current. For example, strong and swift western boundary currents like the Gulf Stream in the Atlantic and the Kuroshio Current in the Pacific constantly shed warm and cold core eddies that spin off from the main current. These eddies can produce fast-moving ocean features that can disrupt or focus sound energy and impact acoustic performance at scales that are tactically significant for naval operations. Therefore, unlike terrain maps, which generally do not become outdated even after years without a new survey, mapping the constantly changing topography of the ocean surface requires remeasurement on the order of days to ensure that the information remains up to date and accurate. Revisiting mapped areas frequently and providing near-global broad ocean coverage are both key to successful ocean mapping. 
The only sensors that can meet both the temporal and the spatial requirements for ocean mapping are radar altimeters operating from satellites. A radar altimeter is simple in concept, working in much the same way as any other radar. From the satellite, it directs a pulse of radio-frequency energy to the target—in this case, a known location on the ocean surface beneath the satellite's flight path. Since the position of the radar and the velocity of the energy pulse are also known, the system can automatically calculate the height of the surface from the amount of the time it takes for the energy pulse to reach it and be reflected back up to the satellite. Ozone and water vapor in the atmosphere can complicate this computation somewhat, but dealing with atmospheric complications is relatively simple. The real challenge is the overall process of mapping huge areas of the sea surface and relating the resulting information to sonar performance. Current Altimetry Satellites The U.S. Navy's own altimetry satellite, the GEOSAT Follow-On (GFO), was recently decommissioned and taken out of service after operating many years beyond its design life. Two other satellites now provide the U.S. Navy with all of its sea surface altimetry data. One of these is JASON-1, which is operated by a consortium of the National Aeronautics and Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA), and the French space agency. The other is the Envisat satellite, operated by the European Space Agency. The loss of GFO raises concerns because it was the only altimetry satellite designed specifically to meet the Navy's requirement to capture features that impact undersea warfare operations and provide a complete picture of the ocean dynamics. JASON-1 and Envisat were both designed to monitor long-term climate change and are therefore in orbits less suitable to properly capture ocean features on the time and space scales that the Navy requires. Moreover, both are also operating past their designed life. The Navy is making good use of them, but it will continue to feel the loss of its primary ocean measuring system until a replacement can be launched in 2013. Predicting Undersea "Weather" The Naval Oceanographic Office (NAVOCEANO) receives all of the Navy's—and most of the world's—real-time ocean data to feed its operational ocean models. Over 100 times more data comes from altimetry satellites than from all other sources of ocean data combined. The data first enters NAVOCEANO through the Oceanographic Data Division. The job there is to receive the data, apply all necessary corrections and calibrations, and process it through a series of quality checks to ensure the values are correct and the collection system is working properly. The goal at this point is to ensure that the data is accurately depicting the current state of the ocean surface before it is passed to the next group, the ocean modelers, for further processing. Collecting the data is just the first step in the process. The data represents the state of the ocean at some time in the near past, like yesterday or this morning. This is very useful information for building what are called historical climatologies—data bases that store data collected repeatedly in a given area for years. However, that is not NAVOCEANO's end game. 
Rather than merely collecting and storing sea surface data for later analysis, NAVOCEANO's goal is to provide information that will help submariners and other operators make successful decisions in the demanding real-time world of acoustic-driven operations. Consequently, NAVOCEANO's final products are exactly analogous to local weather forecasts. The National Weather Service receives data from satellites and weather stations all over the country and uses it to forecast tomorrow's temperature and other atmospheric conditions in specific localities. When the weatherman refers to the average high and low temperatures for any given day, he is using output from a historical climatology. However, the actual highs and lows usually differ significantly from these climatological averages, so the goal of the Weather Service is to provide accurate forecasts for specific locations at specific times. The U.S. Navy cannot rely on historical averages alone for conducting real-world operations. Actual conditions usually differ greatly from averages based on historical data, and even a small change in underwater conditions can be very important, because it can make a huge difference in acoustic propagation. Near-term measurements of past conditions are therefore absolutely essential for predicting acoustic performance. |Example of ocean model output, with colors representing the velocity of ocean currents: (left) black arrows show the direction of surface currents in the Atlantic from Cuba to Cape Cod; (center) surface current speeds along the Virginia – Maryland – Delaware coast (a closer look at a portion of the image to the left.); (right) surface currents at the entrance to Chesapeake Bay. (a closer look at a potion of the center image.) How Forecasting Works Even near-term measurements are just data, however, until NAVOCEANO turns them into information by inputting this data into numerical ocean forecast models that can predict the state of the ocean in a future place and time where operations will occur. The real value of oceanographic data is its ability to reveal the shape of the ocean, which is much like the atmosphere, only denser, with high and low pressure systems that can reveal the location of currents and eddies, their potential velocity, and the water temperature. All of these are determiners of acoustic detection ranges. Before the Navy had access to satellite altimetry data, submariners and other operators had to assume that the thermal structure of the water at a distant location they were observing with sonar was the same as the thermal structure of the water at their own position. Oceanographers knew this wasn't the case, but they had no way to accurately estimate the critical properties of the water at any distance from a location where they could collect current data. Attempting to estimate conditions as little as a mile away from that specific location was merely guessing—educated guessing, perhaps, but still just guessing. That all changed with satellite altimetry. According to Dr. Frank Bub, NAVOCEANO model and prediction system technical lead, nothing else provides as effective and complete a picture of the ocean as satellite altimetry—not buoys, and not ocean gliders. With altimetry, the location of each data point is known within centimeters, satellite passes come at regular intervals, and the data points are for exactly the same location pass after pass. 
Greg Jacobs, the model developer at the Naval Research Laboratory at Stennis Space Center, added that the model would not look like the real world without continuous data, since ocean features such as eddies, fronts, and currents cannot be predicted without a near constant stream of input. From Data to Predictions The Modeling Department at NAVOCEANO, with about 30 employees, models all of the world's oceans from the deep ocean to near-coastal areas continuously, 24 hours a day, 365 days a year. The modelers run three ocean models each day—a three-dimensional circulation model run at both regional and global scales; a two-dimensional circulation model for near-coastal areas; and wave models run on every scale from global to the surf zone. The department also does special requests, which it prioritizes according to the operational load and mission priority. Each day, the Modeling Department produces for the Fleet about 15,000 graphics that illustrate results of the model runs for various places in the world. "The Naval Oceanographic Office is the only organization in the world that provides fully dynamic global ocean forecasts out to 72 hours in the future," Bub noted. The forecasts that the Modeling Department produces after processing the altimetry data predict oceanographic conditions in the battlespace environment, but they do not yet show how the conditions in the forecasts will impact the Fleet's sonar systems. That is the job of NAVOCEANO's Acoustics Department. The acousticians take the data fields produced by the modelers and use them to make predictions that commanders can leverage to better understand how the environment impacts their mission. Temperature and salinity affect the propagation of sound waves. Ocean currents shape and move water masses of different temperature and salinity, and therefore density, and these water masses directly affect the propagation of sound waves through the ocean. Analysts in the Acoustics Department observe the ocean properties that the predictive models show for a specific area and determine how those properties will affect sound waves. The Acoustics Department runs acoustic propagation and performance models that combine the information on ocean conditions with sonar system design parameters to compute acoustic energy propagation for various sonars against different targets at different positions and depths. Predictions from the model runs are often condensed into a series of graphics, called "performance surfaces," that provide operational commanders with an "acoustic map" of the battlespace informing them how their sensors will perform. Ensuring Accurate Information for the Fleet The final step in the processing chain is the Naval Oceanography Anti-Submarine Warfare Center (NOAC), which works directly with the Fleet in undersea warfare. NOAC's uniformed Navy personnel use the results of the Acoustic Department's acoustic analysis to brief operational commanders directly on potential sonar performance in their operational area. Lt. Cmdr. Tim Campo, a former NOAC operations officer, said that his people have to be absolutely certain about the information that they are delivering. Any weak link in the chain — be it in the collection of satellite altimetry data, the fusion of that data, the running of ocean forecast models, or the prediction of acoustic system performance — adds to the uncertainty in the forecast acoustic performance products and reduces the accuracy of the acoustic performance briefs. 
"We are about taking uncertainty out of the operation," he said. NAVOCEANO's systematic effort to improve the quality of its forecasts now enables operational commanders to employ their forces, at least partially, on the basis of the NAVOCEANO "sonar performance surfaces." "We tell the Fleet operators how their sonar will perform in a specific area, Campo said. "Based on that information, operators place their assets and search for submarines." The Battlespace-on-Demand Doctrine All Navy meteorology and oceanography support—in particular the support for undersea warfare described above—is accomplished in accordance with the Battlespace-on-Demand (BonD) doctrine, a 'value chain' approach to provide the Fleet with relevant and actionable information on the physical battlespace environment and how it impacts operations and fielded systems. The BonD doctrine calls for three "tiers." Tier 1 is called the environment layer. This is where data from oceanographic sensors like a satellite altimeter is fed into numerical ocean models and formed into "nowcast" and forecast fields of data parameters like temperature, water density and sound speed that most influence sound propagation and thus acoustic sensor performance. Tier 2, the performance layer, is where environmental data computed and delivered from Tier 1 is ingested into acoustic propagation and performance models to determine how a specific sonar system will operate against targets in those waters. Tier 3 is the decision layer, where Tier 2 sonar performance is fused with other information about the tactical battlespace to create operational products on which operational commanders can base decisions. Each BonD tier is completely reliant on the one below it. The data collection itself, in this case, the sea height measurements that come from satellite altimeters, is the implied 'Tier 0,' the foundation on which all higher tiers and products rest. Without the satellites that constantly measure the ocean surface, those who are charged with defending America's interests at sea would lack critical operational knowledge about the performance of their sonar systems. |Example of ocean model output: (top) sea surface temperatures off the Atlantic coast from Florida to Nova Scotia; (center) ocean current velocities in the same area; (bottom) sea surface temperatures and currents in the Gulf of Mexico and western Caribbean Sea. The Foundation of It All So the foundation of the entire process remains the continuous, real-time satellite measurement of something as esoteric as sea surface topography. The resulting data points are the basic building blocks for the modern ocean models that ultimately keep the Navy informed about how well—or even how poorly—its sonar systems will perform in any given place at any given time. The systematic collection of altimetry data by satellites is the indispensible first step toward an accurate understanding of current conditions in the environment beneath the ocean's surface. As such, it is absolutely essential for ensuring that U.S. Navy warfighters have the information they need to make effective operational decisions in the immensely complicated world of undersea warfare. Ed Gough is deputy commander and technical director of the Naval Meteorology and Oceanography Command.
http://www.public.navy.mil/subfor/underseawarfaremagazine/Issues/Archives/issue_44/measure.html
13
67
San José State University: An Infinitesimal in Geometry
The Greek mathematician Archimedes long ago demonstrated the power of the concept of infinitesimals. For example, he derived the formula for the area enclosed within a circle by considering that area as composed of infinitesimal triangles with an apex at the center and a base on the circle. The area of such a triangle is one half of the height of the triangle, in this case the radius R of the circle, times the length of the base Rdθ. The total length of the bases of all the triangles is just the circumference of the circle, which is the radius times the sum of all the infinitesimal angles, 2π; i.e., 2πR. Thus the area enclosed by a circle of radius R is ½(R)(2πR), which is equal to πR². Likewise the volume of a ball enclosed within a sphere of radius R is the sum of all the infinitesimal pyramids with their apices at the center and their bases on the sphere. The volume of a pyramid is one third of the product of its height times the area of its base. The height of each pyramid is the sphere radius R. The sum of the areas of the bases is just the area of the sphere, 4πR². Thus the volume of the ball is (1/3)R(4πR²), which is equal to (4/3)πR³. One construction using infinitesimals that is not valid is the following derivation of the area of a sphere. Suppose the surface of a sphere were divided up into a large number of segments. A segment bounded by two longitudinal lines to the equator and the portion of the equator between them might be thought to be equivalent in area to a triangle whose height is equal to the arc distance from the equator to the pole. If the radius of the sphere is R then the arc distance from a pole to the equator is (2πR)/4. The area of such a supposed triangle would be ½(πR/2)ds, where ds is the length of the base of the segment. The sum of the bases is 2πR. This would mean the area of both hemispheres of a sphere would be 2(πR/4)(2πR), which is equal to π²R². The correct value is 4πR². The correct derivation of the area of a sphere is as follows. Let θ be an angle measured from a pole and let r be the radius of the latitude circle at θ, so that r = Rsin(θ). If the width of the segment encompasses a longitudinal angle Δφ then the area of the band at θ with a height Rdθ is (rΔφ)(Rdθ) = R²sin(θ)Δφdθ. The area of the segment is then the integral of R²sin(θ)Δφdθ for θ from 0 to π/2, which is R²Δφ. This is the area of the segment in one hemisphere; in two hemispheres the value is 2R²Δφ. For the entire sphere Δφ is equal to 2π and thus the area of a sphere is 2R²(2π), which is equal to 4πR². Consider a figure of constant width b. An infinitesimal parallelogram of width b and height dh has an area of bdh. The sum of all such parallelograms is b∫dh, which is equal to bh. This applies no matter how sinuous the sides so long as the direction is not reversed. Another simple application of infinitesimals is to find the area of the sides of a prism, cylinder or any other figure of constant cross section. Let ds be an infinitesimal length on the perimeter of the base. A parallelogram with a base of ds and height of h has an area of hds. The area of the sides is equal to h∫ds, which is equal to h times the circumference of the base. This would also apply to a distorted prism or cylinder so long as the cross section is constant.
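As a quick numerical sanity check of the band construction above (this sketch is not part of the original page), one can sum the areas of the thin bands (Rsin(θ)·2π)(RΔθ) over the whole sphere in a short Haskell program and watch the total approach 4πR²:

sphereArea :: Double -> Int -> Double
sphereArea r n = sum [ band ((fromIntegral k + 0.5) * dTheta) | k <- [0 .. n - 1] ]
  where
    dTheta     = pi / fromIntegral n
    band theta = (2 * pi * r * sin theta) * (r * dTheta)   -- circumference of the latitude circle times the band height

main :: IO ()
main = do
  print (sphereArea 1 1000)   -- roughly 12.566, i.e. close to 4*pi
  print (4 * pi :: Double)    -- 12.566370...

With a thousand bands the sum already agrees with 4πR² to several decimal places, which is the behaviour the infinitesimal argument predicts.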
http://applet-magic.com/infinitesimalappl.htm
13
53
Types in programming are a way of grouping similar values into categories. In Haskell, the type system is a powerful way of ensuring there are fewer mistakes in your code. Programming deals with different sorts of entities. For example, consider adding two numbers together: What are 2 and 3? We can quite simply describe them as numbers. And what about the plus sign in the middle? That's certainly not a number, but it stands for an operation which we can do with two numbers – namely, addition. Similarly, consider a program that asks you for your name and then greets you with a "Hello" message. Neither your name nor the word Hello are numbers. What are they then? We might refer to all words and sentences and so forth as text. In fact, it's more normal in programming to use a slightly more esoteric word: String. Databases illustrate clearly the concept of types. For example, say we had a table in a database to store details about a person's contacts; a kind of personal telephone book. The contents might look like this:
First Name | Last Name | Telephone number | Address
Sherlock   | Holmes    | 743756           | 221B Baker Street London
Bob        | Jones     | 655523           | 99 Long Road Street Villestown
The fields in each entry contain values. Sherlock is a value as is 99 Long Road Street Villestown as well as 655523. As we've said, types are a way of categorizing data, so let us see how we could classify the values in this example. The first three fields seem straightforward enough. "First Name" and "Last Name" contain text, so we say that the values are of type String, while "Telephone Number" is clearly a number. At first glance one may be tempted to classify address as a String. However, the semantics behind an innocent address are quite complex. There are a whole lot of human conventions that dictate how we interpret it. For example, if the beginning of the address text contains a number it is likely the number of the house. If not, then it's probably the name of the house – except if it starts with "PO Box", in which case it's just a postal box address and doesn't indicate where the person lives at all. Clearly, there's more going on here than just text, as each part of the address has its own meaning. In principle there is nothing wrong with saying addresses are Strings, but when we describe something as a String all that we are saying is that it is a sequence of letters, numbers, etc. Claiming they're of some more specialized type, say, Address, is far more meaningful. If we know something is an Address, we instantly know much more about the piece of data – for instance, that we can interpret it using the "human conventions" that give meaning to addresses. In retrospect, we might also apply this rationale to the telephone numbers. It could be a good idea to speak in terms of a TelephoneNumber type. Then, if we were to come across some arbitrary sequence of digits which happened to be of type TelephoneNumber we would have access to a lot more information than if it were just a Number – for instance, we could start looking for things such as area and country codes on the initial digits. Another reason not to consider the telephone numbers as just Numbers is that doing arithmetic with them makes no sense. What is the meaning and expected effect of, say, adding 1 to a TelephoneNumber? It would not allow calling anyone by phone. That's a good reason for using a more specialized type than Number. 
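Looking ahead a little, here is a minimal sketch of how such specialized types can be written down in Haskell (the declarations are invented for illustration and are not part of the book's running example): a type synonym gives String a more meaningful name, while a newtype creates a genuinely distinct type that the compiler will not confuse with ordinary text or numbers.

type Address = String                              -- a synonym: just a clearer name for String

newtype TelephoneNumber = TelephoneNumber String   -- a distinct type wrapping the digits as text
  deriving Show

dial :: TelephoneNumber -> String                  -- a hypothetical helper that only accepts TelephoneNumbers
dial (TelephoneNumber digits) = "Dialling " ++ digits

main :: IO ()
main = putStrLn (dial (TelephoneNumber "655523"))

Because the digits are stored as text rather than as an arithmetic value, there is no temptation to "add 1" to a TelephoneNumber, and the compiler will reject any attempt to pass a plain String or a number where a TelephoneNumber is expected.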
Also, each digit comprising a telephone number is important; it's not acceptable to lose some of them by rounding it or even by omitting leading zeroes. Why types are useful So far, it seems that all we've done was to describe and categorize things, and it may not be obvious why all of this talk would be so important for writing actual programs. Starting with this module, we will explore how Haskell uses types to the programmer's benefit, allowing us to incorporate the semantics behind, say, an address or a telephone number seamlessly in the code. Using the interactive :type command The best way to explore how types work in Haskell is from GHCi. The type of any expression can be checked with the immensely useful :type (or :t) command. Let us test it on the boolean values from the previous module:
Example: Exploring the types of boolean values in GHCi
Prelude> :type True
True :: Bool
Prelude> :type False
False :: Bool
Prelude> :t (3 < 5)
(3 < 5) :: Bool
Usage of :type is straightforward: enter the command into the prompt followed by whatever you want to find the type of. On the third example, we use :t, which we will be using from now on. GHCi will then print the type of the expression. The symbol ::, which will appear in a couple of other places, can be read as simply "is of type", and indicates a type signature. :type reveals that truth values in Haskell are of type Bool, as illustrated above for the two possible values, True and False, as well as for a sample expression that will evaluate to one of them. It is worth noting at this point that boolean values are not just for value comparisons. Bool captures in a very simple way the semantics of a yes/no answer, and so it can be useful to represent any information of such kind – say, whether a name was found in a spreadsheet, or whether a user has toggled an on/off option. Characters and strings Now let us try :t on something new. Literal characters are entered by enclosing them with single quotation marks. For instance, this is the single letter H:
Example: Using the :type command in GHCi on a literal character
Prelude> :t 'H'
'H' :: Char
Literal character values, then, have type Char. Single quotation marks, however, only work for individual characters. If we need to enter actual text – that is, a string of characters – we use double quotation marks instead:
Example: Using the :t command in GHCi on a literal string
Prelude> :t "Hello World"
"Hello World" :: [Char]
Seeing this output, a pertinent question would be "why did we get Char again?" The difference is in the square brackets. [Char] means a number of characters chained together, forming a list. That is what text strings are in Haskell – lists of characters. A nice thing to be aware of is that Haskell allows for type synonyms, which work pretty much like synonyms in human languages (words that mean the same thing – say, 'fast' and 'quick'). In Haskell, type synonyms are alternative names for types. For instance, String is defined as a synonym of [Char], and so we can freely substitute one with the other. Therefore, to say: "Hello World" :: String is also perfectly valid, and in many cases a lot more readable. From here on we'll mostly refer to text values as String, rather than [Char]. Functional types So far, we have seen how values (strings, booleans, characters, etc.) have types and how these types help us to categorize and describe them. Now, the big twist, and what makes Haskell's type system truly powerful: Not only values, but functions have types as well. 
Let's look at some examples to see how that works. Consider not, the function that negates boolean values (changing True to False and vice-versa). To figure out the type of a function we consider two things: the type of values it takes as its input and the type of value it returns. In this example, things are easy. not takes a Bool (the Bool to be negated), and returns a Bool (the negated Bool). The notation for writing that down is:
Example: Type signature for not
not :: Bool -> Bool
You can read this as "not is a function from things of type Bool to things of type Bool". In case you are wondering about using :t on a function...
Prelude> :t not
not :: Bool -> Bool
... it will work just as expected. This describes the type of a function in terms of the types of its argument(s) and result, and it shows that functions, being values in Haskell, also have type signatures. Text presents a problem to computers. Once everything is reduced to its lowest level, all a computer knows how to deal with are 1s and 0s: computers work in binary. As working with binary numbers isn't at all convenient, humans have come up with ways of making computers store text. Every character is first converted to a number, then that number is converted to binary and stored. Hence, a piece of text, which is just a sequence of characters, can be encoded into binary. Normally, we're only interested in how to encode characters into their numerical representations, because the computer generally takes care of the conversion to binary numbers without our intervention. The easiest way of converting characters to numbers is simply to write all the possible characters down, then number them. For example, we might decide that 'a' corresponds to 1, then 'b' to 2, and so on. This is exactly what a thing called the ASCII standard is: 128 of the most commonly-used characters, numbered. Of course, it would be a bore to sit down and look up a character in a big lookup table every time we wanted to encode it, so we've got two functions that can do it for us, chr (pronounced 'char') and ord:
Example: Type signatures for chr and ord
chr :: Int -> Char
ord :: Char -> Int
We already know what Char means. The new type on the signatures above, Int, amounts to integer numbers, and is one of quite a few different types of numbers. The type signature of chr tells us that it takes an argument of type Int, an integer number, and evaluates to a result of type Char. The converse is the case with ord: it takes things of type Char and returns things of type Int. With the info from the type signatures, it becomes immediately clear which of the functions encodes a character into a numeric code (ord) and which does the decoding back to a character (chr). To make things more concrete, here are a few examples of function calls to chr and ord. Notice that the two functions aren't available by default, so before trying them in GHCi you need to use the :module Data.Char (or :m Data.Char) command to load the Data.Char module, where they are defined.
Example: Function calls to chr and ord
Prelude> :m Data.Char
Prelude Data.Char> chr 97
'a'
Prelude Data.Char> chr 98
'b'
Prelude Data.Char> ord 'c'
99
Functions in more than one argument The style of type signatures we have been using works fine enough for functions of one argument. But what would be the type of a function like this one? 
Example: A function with more than one argument
xor p q = (p || q) && not (p && q)
xor is the exclusive-or function, which evaluates to True if either one or the other argument is True, but not both; and to False otherwise. The general technique for forming the type of a function that accepts more than one argument is simply to write down all the types of the arguments in a row, in order (so in this case p first then q), then link them all with ->. Finally, add the type of the result to the end of the row and stick a final -> in just before it. In this example, we have:
- Write down the types of the arguments. In this case, the use of (&&) gives away that p and q have to be of type Bool:
Bool                 Bool
^^ p is a Bool       ^^ q is a Bool as well
- Fill in the gaps with ->:
Bool -> Bool
- Add in the result type and a final ->. In our case, we're just doing some basic boolean operations so the result remains a Bool:
Bool -> Bool -> Bool
                ^^ We're returning a Bool
        ^^ This is the extra -> that got added in
The final signature, then, is:
Example: The signature of xor
xor :: Bool -> Bool -> Bool
As you'll learn in the Practical Haskell section of the course, one popular group of Haskell libraries are the GUI (Graphical User Interface) ones. These provide functions for dealing with all the parts of Windows, Linux, or Mac OS you're familiar with: opening and closing application windows, moving the mouse around, etc. One of the functions from one of these libraries is called openWindow, and you can use it to open a new window in your application. For example, say you're writing a word processor, and the user has clicked on the 'Options' button. You need to open a new window which contains all the options that they can change. Let's look at the type signature for this function:
openWindow :: WindowTitle -> WindowSize -> Window
Don't panic! Here are a few more types you haven't come across yet. But don't worry, they're quite simple. All three of the types there, WindowTitle, WindowSize and Window, are defined by the GUI library that provides openWindow. As we saw when constructing the types above, because there are two arrows, the first two types are the types of the parameters, and the last is the type of the result. WindowTitle holds the title of the window (what appears in the bar at the very top of the window, left of the close/minimize/etc. buttons), and WindowSize specifies how big the window should be. The function then returns a value of type Window which represents the actual window. One key point illustrated by this example, as well as the chr/ord one, is that, even if you have never seen the function or don't know how it actually works, a type signature can immediately give you a good general idea of what the function is supposed to do. For that reason, a very useful habit to acquire is testing every new function you meet with :t. If you start doing so right now you'll not only learn about the standard library Haskell functions quite a bit quicker but also develop a useful kind of intuition. Curiosity pays off. :) Finding types for functions is a basic Haskell skill that you should become very familiar with. What are the types of the following functions? For any functions hereafter involving numbers, you can just pretend the numbers are Ints. Type signatures in code Now we've explored the basic theory behind types as well as how they apply to Haskell. The key way in which type signatures are used is for annotating functions in source files. 
Let us see what that looks like for the xor function from an earlier example:
Example: A function with its signature
xor :: Bool -> Bool -> Bool
xor p q = (p || q) && not (p && q)
That is all we have to do, really. Signatures are placed just before the corresponding functions, for maximum clarity. The signatures we add in this way serve a dual role. They state the types of the functions both for human readers of the code and for the compiler/interpreter. Type inference We just said that type signatures tell the interpreter (or compiler) what the function type is. However, up to now you have written perfectly good Haskell code without any signatures, and it was accepted by GHC/GHCi. That shows that in general it is not mandatory to add type signatures. But that doesn't mean that if you don't add them Haskell simply forgets about types altogether! Instead, when you didn't tell Haskell the types of your functions and variables, it figured them out through a process called type inference. In essence, the compiler performs inference by starting with the types of things it knows and then working out the types of the rest of the values. Let's see how that works with a general example.
Example: Simple type inference
-- We're deliberately not providing a type signature for this function
isL c = c == 'l'
isL is a function that takes an argument c and returns the result of evaluating c == 'l'. If we don't provide a type signature, the compiler, in principle, does not know the type of c, nor the type of the result. In the expression c == 'l', however, it does know that 'l' is a Char. Since c and 'l' are being compared for equality with (==) and both arguments of (==) must have the same type, it follows that c must be a Char. Finally, since isL c is the result of (==) it must be a Bool. And thus we have a signature for the function:
Example: isL with a type signature
isL :: Char -> Bool
isL c = c == 'l'
And, indeed, if you leave out the type signature the Haskell compiler will discover it through this process. You can verify that by using :t on isL with or without a signature. So, if type signatures are optional in most cases why should we care so much about them? Here are a few reasons:
- Documentation: type signatures make your code easier to read. With most functions, the name of the function along with the type of the function is sufficient to guess what the function does. Of course, you should always comment your code properly too, but having the types clearly stated helps a lot as well.
- Debugging: if you annotate a function with a type signature and then make a typo in the body of the function which changes the type of a variable, the compiler will tell you, at compile-time, that your function is wrong. Leaving off the type signature could have the effect of allowing your function to compile, and the compiler would assign it an erroneous type. You wouldn't know until you ran your program that it was wrong.
Types and readability To understand better how signatures can help documentation, let us have a glance at a somewhat more realistic example. The piece of code quoted below is a tiny module (modules are the typical way of preparing a library), and this way of organizing code is not too different from what you might find, say, when reading source code for the libraries bundled with GHC. 
Example: Module with type signatures
module StringManip where

import Data.Char

uppercase, lowercase :: String -> String
uppercase = map toUpper
lowercase = map toLower

capitalise :: String -> String
capitalise x = let capWord []     = []
                   capWord (x:xs) = toUpper x : xs
               in unwords (map capWord (words x))
This tiny library provides three string manipulation functions. uppercase converts a string to upper case, lowercase to lower case, and capitalise capitalizes the first letter of every word. Each of these functions takes a String as argument and evaluates to another String. What is relevant to our discussion here is that, even if we do not understand how these functions work, looking at the type signatures allows us to immediately know the types of the arguments and return values. That information, when paired with sensible function names, can make it a lot easier to figure out how we can use the functions. Note that when functions have the same type we have the option of writing just one signature for all of them, by separating their names with commas, as it was done with uppercase and lowercase. Types prevent errors The role of types in preventing errors is central to typed languages. When passing expressions around you have to make sure the types match up like they did here. If they don't, you'll get type errors when you try to compile; your program won't pass the typecheck. This is really how types help you to keep your programs bug-free. To take a very trivial example:
Example: A non-typechecking program
"hello" + " world"
Having that line as part of your program will make it fail to compile, because you can't add two strings together! In all likelihood the intention was to use the similar-looking string concatenation operator (++), which joins two strings together into a single one:
Example: Our erroneous program, fixed
"hello" ++ " world"
An easy typo to make, but because you use Haskell, it was caught when you tried to compile. You didn't have to wait until you ran the program for the bug to become apparent. This was only a simple example. However, the idea of types being a system to catch mistakes works on a much larger scale too. In general, when you make a change to your program, you'll change the type of one of the elements. If this change isn't something that you intended, or has unforeseen consequences, then it will show up immediately. A lot of Haskell programmers remark that once they have fixed all the type errors in their programs, and their programs compile, they tend to "just work": run the first time with only minor problems. Run-time errors, where your program goes wrong when you run it rather than when you compile it, are much rarer in Haskell than in other languages. This is a huge advantage of having a strong type system like Haskell does.
- Lists, be they of characters or of other things, are very important entities in Haskell, and we will cover them in more detail in a little while.
- The deeper truth is that functions are values, just like all the others.
- This isn't quite what ord does, but that description fits our purposes well, and it's close enough.
- In fact, it is not even the only type for integers! We will meet its relatives in a short while.
- This method might seem just a trivial hack by now, but actually there are very deep reasons behind it, which we'll cover in the chapter on Currying.
- This has been somewhat simplified to fit our purposes. Don't worry, the essence of the function is there.
- As we discussed in "Truth values". 
That fact is actually stated by the type signature of (==) – if you are curious you can check it, although you will have to wait a little bit more for a full explanation of the notation used in it. - There are a few situations in which the compiler lacks information to infer the type, and so the signature becomes obligatory; and, in some other cases, we can influence to a certain extent the final type of a function or value with a signature. That needn't concern us for the moment, however.
http://en.wikibooks.org/wiki/Haskell/Type_basics
13
82
A quantum computer is a device for computation that makes direct use of quantum mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), quantum computation utilizes quantum properties to represent data and perform operations on these data. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers, like the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Richard Feynman in 1982. Although quantum computing is still in its infancy, experiments have been carried out in which quantum computational operations were executed on a very small number of qubits (quantum bits). Both practical and theoretical research continues, and many national government and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis. Large-scale quantum computers could be able to solve certain problems much faster than any classical computer by using the best currently known algorithms, like integer factorization using Shor's algorithm or the simulation of quantum many-body systems. There exist quantum algorithms, such as Simon's algorithm, which run faster than any possible probabilistic classical algorithm. Given unlimited resources, a classical computer can simulate an arbitrary quantum algorithm, so quantum computation does not violate the Church–Turing thesis. However, in practice infinite resources are never available, and the computational basis of 500 qubits, for example, would already be too large to be represented on a classical computer because it would require 2^500 complex values to be stored. (For comparison, a terabyte of digital information stores only 2^43 discrete on/off values.) Nielsen and Chuang point out that "Trying to store all these complex numbers would not be possible on any conceivable classical computer." A classical computer has a memory made up of bits, where each bit represents either a one or a zero. A quantum computer maintains a sequence of qubits. A single qubit can represent a one, a zero, or, crucially, any quantum superposition of these two qubit states; moreover, a pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8. In general, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously (this compares to a normal computer that can only be in one of these states at any one time). A quantum computer operates by setting the qubits in a controlled initial state that represents the problem at hand and by manipulating those qubits with a fixed sequence of quantum logic gates. The sequence of gates to be applied is called a quantum algorithm. The calculation ends with measurement of all the states, collapsing each qubit into one of the two pure states, so the outcome can be at most n classical bits of information. An example of an implementation of qubits for a quantum computer could start with the use of particles with two spin states: "down" and "up" (typically written |↓⟩ and |↑⟩, or |0⟩ and |1⟩). 
But in fact any system possessing an observable quantity A which is conserved under time evolution and such that A has at least two discrete and sufficiently spaced consecutive eigenvalues, is a suitable candidate for implementing a qubit. This is true because any such system can be mapped onto an effective spin-1/2 system. A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits. For example, to represent the state of an n-qubit system on a classical computer would require the storage of 2^n complex coefficients. Although this fact may seem to indicate that qubits can hold exponentially more information than their classical counterparts, care must be taken not to overlook the fact that the qubits are only in a probabilistic superposition of all of their states. This means that when the final state of the qubits is measured, they will only be found in one of the possible configurations they were in before measurement. Moreover, it is incorrect to think of the qubits as only being in one particular state before measurement since the fact that they were in a superposition of states before the measurement was made directly affects the possible outcomes of the computation. For example: Consider first a classical computer that operates on a three-bit register. The state of the computer at any time is a probability distribution over the different three-bit strings 000, 001, 010, 011, 100, 101, 110, 111. If it is a deterministic computer, then it is in exactly one of these states with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states. We can describe this probabilistic state by eight nonnegative numbers A,B,C,D,E,F,G,H (where A = probability computer is in state 000, B = probability computer is in state 001, etc.). There is a restriction that these probabilities sum to 1. The state of a three-qubit quantum computer is similarly described by an eight-dimensional vector (a,b,c,d,e,f,g,h), called a ket. However, instead of adding to one, the sum of the squares of the coefficient magnitudes, |a|² + |b|² + ... + |h|², must equal one. Moreover, the coefficients are complex numbers. Since the probability amplitudes of the states are represented with complex numbers, the phase between any two states is a meaningful parameter, which is a key difference between quantum computing and probabilistic classical computing. If you measure the three qubits, you will observe a three-bit string. The probability of measuring a given string is the squared magnitude of that string's coefficient (i.e., the probability of measuring 000 is |a|², the probability of measuring 001 is |b|², and so on). Thus, measuring a quantum state described by complex coefficients (a,b,...,h) gives the classical probability distribution (|a|², |b|², ..., |h|²), and we say that the quantum state "collapses" to a classical state as a result of making the measurement. Note that an eight-dimensional vector can be specified in many different ways depending on what basis is chosen for the space. The basis of bit strings (e.g., 000, 001, ..., 111) is known as the computational basis. Other possible bases consist of unit-length, mutually orthogonal vectors; one example is the basis formed by the eigenvectors of the Pauli-x operator. Ket notation is often used to make the choice of basis explicit. For example, the state (a,b,c,d,e,f,g,h) in the computational basis can be written as a|000⟩ + b|001⟩ + c|010⟩ + d|011⟩ + e|100⟩ + f|101⟩ + g|110⟩ + h|111⟩. The computational basis for a single qubit (two dimensions) is |0⟩ = (1, 0) and |1⟩ = (0, 1). 
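To make the normalization and measurement rules just described concrete, here is a minimal sketch in Haskell (not part of the original article; the chosen state is only an example): it takes the eight complex amplitudes of a three-qubit state and returns the probability of observing each of the eight bit strings.

import Data.Complex (Complex(..), magnitude)

type State = [Complex Double]            -- amplitudes for 000, 001, ..., 111

probabilities :: State -> [Double]
probabilities = map ((^ 2) . magnitude)  -- squared magnitude of each amplitude

main :: IO ()
main = do
  let s = map (/ sqrt 8) (replicate 8 (1 :+ 0)) :: State   -- equal superposition of all eight strings
  print (probabilities s)                                   -- eight values of 0.125 (up to rounding)
  print (sum (probabilities s))                             -- 1.0: the state is normalized

The equal superposition assigns each bit string a probability of 1/8, and the probabilities sum to one exactly because the amplitudes were normalized, which is the constraint stated above.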
Using the eigenvectors of the Pauli-x operator, a single qubit can also be written as |+⟩ = (1/√2)(1, 1) and |−⟩ = (1/√2)(1, −1). (An unsolved problem in physics: is a universal quantum computer sufficient to efficiently simulate an arbitrary physical system?) While a classical three-bit state and a quantum three-qubit state are both eight-dimensional vectors, they are manipulated quite differently for classical or quantum computation. For computing in either case, the system must be initialized, for example into the all-zeros string, |000⟩, corresponding to the vector (1,0,0,0,0,0,0,0). In classical randomized computation, the system evolves according to the application of stochastic matrices, which preserve that the probabilities add up to one (i.e., preserve the L1 norm). In quantum computation, on the other hand, allowed operations are unitary matrices, which are effectively rotations (they preserve that the sum of the squares add up to one, the Euclidean or L2 norm). (Exactly what unitaries can be applied depends on the physics of the quantum device.) Consequently, since rotations can be undone by rotating backward, quantum computations are reversible. (Technically, quantum operations can be probabilistic combinations of unitaries, so quantum computation really does generalize classical computation. See quantum circuit for a more precise formulation.) Finally, upon termination of the algorithm, the result needs to be read off. In the case of a classical computer, we sample from the probability distribution on the three-bit register to obtain one definite three-bit string, say 000. Quantum mechanically, we measure the three-qubit state, which is equivalent to collapsing the quantum state down to a classical distribution (with the coefficients in the classical state being the squared magnitudes of the coefficients for the quantum state, as described above), followed by sampling from that distribution. Note that this destroys the original quantum state. Many algorithms will only give the correct answer with a certain probability. However, by repeatedly initializing, running and measuring the quantum computer, the probability of getting the correct answer can be increased. For more details on the sequences of operations used for various quantum algorithms, see universal quantum computer, Shor's algorithm, Grover's algorithm, Deutsch-Jozsa algorithm, amplitude amplification, quantum Fourier transform, quantum gate, quantum adiabatic algorithm and quantum error correction. Integer factorization is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes). By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to decrypt many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers (or the related discrete logarithm problem, which can also be solved by Shor's algorithm), including forms of RSA. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security. However, other existing cryptographic algorithms do not appear to be broken by these algorithms. 
Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory. Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice based cryptosystems, is a well-studied open problem. It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case, meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size). Quantum cryptography could potentially fulfill some of the functions of public key cryptography. Besides factorization and discrete logarithms, quantum algorithms offering a more than polynomial speedup over the best known classical algorithm have been found for several problems, including the simulation of quantum physical processes from chemistry and solid state physics, the approximation of Jones polynomials, and solving Pell's equation. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. For some problems, quantum computers offer a polynomial speedup. The most well-known example of this is quantum database search, which can be solved by Grover's algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case the advantage is provable. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees. Consider a problem that has these four properties:
- The only way to solve it is to guess answers repeatedly and check them.
- The number of possible answers to check is the same as the number of inputs.
- Every possible answer takes the same amount of time to check.
- There are no clues about which answers might be better: generating possibilities randomly is just as good as checking them in some special order.
For problems with all four properties, the time for a quantum computer to solve this will be proportional to the square root of the number of inputs. That can be a very large speedup, reducing some problems from years to seconds. It can be used to attack symmetric ciphers such as Triple DES and AES by attempting to guess the secret key. Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing. There are a number of technical challenges in building a large-scale quantum computer, and thus far quantum computers have yet to solve a problem faster than a classical computer. David DiVincenzo, of IBM, listed the following requirements for a practical quantum computer:
- scalable physically to increase the number of qubits;
- qubits that can be initialized to arbitrary values;
- quantum gates that are faster than the decoherence time;
- a universal gate set;
- qubits that can be read easily.
One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. This effect is irreversible, as it is non-unitary, and is usually something that should be highly controlled, if not avoided. 
Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature. These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time. If the error rate is small enough, it is thought to be possible to use quantum error correction, which corrects errors due to decoherence, thereby allowing the total calculation time to be longer than the decoherence time. An often cited figure for the required error rate in each gate is 10⁻⁴. This implies that each gate must be able to perform its task in one 10,000th of the decoherence time of the system. Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L², where L is the number of bits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10⁴ qubits without error correction. With error correction, the figure would rise to about 10⁷ qubits. Note that the computation time is about L², or about 10⁷ steps, which at 1 MHz takes about 10 seconds. A very different approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads and relying on braid theory to form stable logic gates. There are a number of quantum computing models, distinguished by the basic elements in which the computation is decomposed. The four main models of practical importance are the quantum gate array, the one-way quantum computer, the adiabatic quantum computer, and the topological quantum computer. The Quantum Turing machine is theoretically important but direct implementation of this model is not pursued. All four models of computation have been shown to be equivalent to each other in the sense that each can simulate the other with no more than polynomial overhead. For physically implementing a quantum computer, many different candidates are being pursued, among them (distinguished by the physical system used to realize the qubits) superconducting circuits, trapped ions, neutral atoms in optical lattices, quantum dots, nuclear magnetic resonance on molecules in solution, nitrogen-vacancy centers in diamond, and photonic (linear optics) systems. The large number of candidates demonstrates that the topic, in spite of rapid progress, is still in its infancy. But at the same time, there is also a vast amount of flexibility. In 2005, researchers at the University of Michigan built a semiconductor chip that functioned as an ion trap. Such devices, produced by standard lithography techniques, may point the way to scalable quantum computing tools. An improved version was made in 2006. In 2009, researchers at Yale University created the first rudimentary solid-state quantum processor. The two-qubit superconducting chip was able to run elementary algorithms. Each of the two artificial atoms (or qubits) was made up of a billion aluminum atoms but acted like a single one that could occupy two different energy states. Another team, working at the University of Bristol, also created a silicon-based quantum computing chip, based on quantum optics. The team was able to run Shor's algorithm on the chip. Further developments were made in 2010. 
Springer publishes a journal ("Quantum Information Processing") devoted to the subject.

In April 2011, a team of scientists from Australia and Japan made a breakthrough in quantum teleportation: they successfully transferred a complex set of quantum data with full transmission integrity, the qubits being destroyed in one place and recreated in another without their superpositions being affected.

In 2011, D-Wave Systems announced the first commercial quantum annealer on the market, by the name D-Wave One. The company claims this system uses a 128-qubit processor chipset. On May 25, 2011, D-Wave announced that Lockheed Martin Corporation had entered into an agreement to purchase a D-Wave One system. Lockheed Martin and the University of Southern California (USC) reached an agreement to house the D-Wave One Adiabatic Quantum Computer at the newly formed USC Lockheed Martin Quantum Computing Center, part of USC's Information Sciences Institute campus in Marina del Rey. During the same year, researchers working at the University of Bristol created an all-bulk optics system able to run an iterative version of Shor's algorithm. They successfully managed to factorize 21.

In April 2012 a multinational team of researchers from the University of Southern California, Delft University of Technology, the Iowa State University of Science and Technology, and the University of California, Santa Barbara, constructed a two-qubit quantum computer on a doped diamond crystal that can easily be scaled up in size and functionality and that operates at room temperature. The two logical qubits were encoded in the directions of an electron spin and a nitrogen nuclear spin. A system that shaped microwave pulses of a particular duration and form was developed to protect the qubits against decoherence. Using this computer, Grover's algorithm over four search variants produced the right answer on the first try in 95% of cases.

The class of problems that can be efficiently solved by quantum computers is called BQP, for "bounded error, quantum, polynomial time". Quantum computers only run probabilistic algorithms, so BQP on quantum computers is the counterpart of BPP ("bounded error, probabilistic, polynomial time") on classical computers. It is defined as the set of problems solvable with a polynomial-time algorithm, whose probability of error is bounded away from one half. A quantum computer is said to "solve" a problem if, for every instance, its answer will be right with high probability. If that solution runs in polynomial time, then that problem is in BQP.

BQP is suspected to be disjoint from the class of NP-complete problems and to be a strict superset of P, but neither relationship is known to hold. Both integer factorization and discrete log are in BQP. Both of these problems are NP problems suspected to be outside BPP, and hence outside P. Both are suspected to not be NP-complete. There is a common misconception that quantum computers can solve NP-complete problems in polynomial time. That is not known to be true, and is generally suspected to be false.

The capacity of a quantum computer to accelerate classical algorithms has rigid limits, in the form of upper bounds on the complexity of quantum computation. The overwhelming majority of classical calculations cannot be accelerated on a quantum computer, and the same holds for particular computational tasks, like the search problem, for which Grover's algorithm is optimal.
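The bounded-error condition in the definition of BQP is conventionally written with the constants 2/3 and 1/3 (any constants bounded away from one half yield the same class); in standard notation, for a language A:

```latex
% Standard bounded-error acceptance condition for BQP; the constants 2/3 and 1/3
% are conventional, and any probabilities bounded away from 1/2 give the same class.
\[
A \in \mathrm{BQP} \iff \exists\, \{Q_n\}_{n \in \mathbb{N}}
\text{ (a polynomial-time uniform family of quantum circuits)}:
\begin{cases}
x \in A \;\Rightarrow\; \Pr\bigl[\,Q_{|x|}(x) = 1\,\bigr] \ge \tfrac{2}{3}, \\
x \notin A \;\Rightarrow\; \Pr\bigl[\,Q_{|x|}(x) = 1\,\bigr] \le \tfrac{1}{3}.
\end{cases}
\]
```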
Although quantum computers may be faster than classical computers, those described above can't solve any problems that classical computers can't solve, given enough time and memory (however, those amounts might be practically infeasible). A Turing machine can simulate these quantum computers, so such a quantum computer could never solve an undecidable problem like the halting problem. The existence of "standard" quantum computers does not disprove the Church–Turing thesis. It has been speculated that theories of quantum gravity, such as M-theory or loop quantum gravity, may allow even faster computers to be built. Currently, defining computation in such theories is an open problem due to the problem of time, i.e. there currently exists no obvious way to describe what it means for an observer to submit input to a computer and later receive output.
http://dictionary.sensagent.com/Quantum_computer/en-en/
Amplitude is the objective measurement of the degree of change (positive or negative) in atmospheric pressure (the compression and rarefaction of air molecules) caused by sound waves. Sounds with greater amplitude will produce greater changes in atmospheric pressure from high pressure to low pressure. Amplitude is almost always a comparative measurement, since at the lowest-amplitude end (silence), some air molecules are always in motion, and at the highest end, the amount of compression and rarefaction, though finite, is extreme. In electronic circuits, amplitude may be increased by expanding the degree of change in an oscillating electrical current. A woodwind player may increase the amplitude of their sound by providing greater force in the air column, i.e., blowing harder.

Amplitude is directly related to the acoustic energy or intensity of a sound. Both amplitude and intensity are related to sound's power. All three of these characteristics have their own related standardized measurements and will be discussed below.

Amplitude is measured as an amount of force applied over an area. The most common unit of measurement of force applied to an area for acoustic study is newtons per square meter (N/m^2). One newton is the amount of force it takes to accelerate a 1-kilogram object by one meter per second per second (m/s^2). The benchmark threshold of hearing, in other words the smallest perceptible amplitude, is approximately 0.00002 N/m^2 for a 1 kHz tone in laboratory conditions (this is actually contradicted by loudness curves discussed below). 60 N/m^2 is considered by some to be the threshold of pain, but as we will see, this is also subjective and varies greatly by individual and age.

Discussions of amplitude depend largely on measurements of the oscillations in barometric pressure from one extreme (or peak) to the other. The degree of change above or below an imaginary center value is referred to as the peak amplitude or peak deviation of that waveform.

If we tried to calculate the average amplitude of a sine wave, it would unfortunately equal zero, since it rises and falls symmetrically above and below the zero reference. This would not tell us very much about its amplitude, since low-amplitude and high-amplitude sine waves would appear equivalent. A more meaningful reference has been developed to measure the average amplitude of a wave over time, called the root-mean-squared or rms method. You may also see the rms measurement applied to the power output of an amplifier.

The rms value of a waveform is found by squaring the amplitude at each point of the waveform, taking the mathematical average of those squared values, and then taking the square root of that average. The function of the squaring is to eliminate negative values, since all the negative values square to positive ones. This is extremely useful information for those using averaging level meters with audio equipment or software.

Example: The rms of a sine wave with a hypothetical peak-to-peak swing from -1 to 1 (a peak amplitude of 1) will be 0.707. This can be used to extrapolate that any rms amplitude = 0.707 x peak amplitude, and peak amplitude = 1.414 x rms amplitude.

When using audio gear or software, it is important to know whether your meter is a peak-reading meter or averaging meter (or neither). While there are many good reasons to keep an eye on a signal's peak, the rms average is far more akin to the way we hear. Once you have an understanding of dBs described below, the markings on the meters should make more sense.
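To check the 0.707 relationship numerically, here is a minimal Python sketch (the sample count is an arbitrary choice):

```python
import math

# Numerically verify rms(sine with peak 1) ~= 0.707, i.e. rms = 0.707 * peak.
N = 100_000                                                  # arbitrary sample count
samples = [math.sin(2 * math.pi * k / N) for k in range(N)]  # one full cycle, peak = 1.0

mean_square = sum(s * s for s in samples) / N                # average of the squared samples
rms = math.sqrt(mean_square)                                 # root of the mean square

print(f"rms  = {rms:.4f}")                                   # ~0.7071
print(f"peak = {1.414 * rms:.4f}")                           # ~1.0, i.e. peak = 1.414 * rms
```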
Power and Intensity: If we picture a sound wave as an expanding sphere of energy, power is the total amount of kinetic energy contained on the sphere's surface. By examining the formula below, you can see how power is a measurement of energy over time. The unit of measurement for power is the watt, named after James Watt.

1 watt = 1 newton-meter (one joule) of work per second

The power of the original sound source, along with the distance of measurement from the sound source, combine to determine the intensity. Intensity can be measured as watts per square meter, or W/m^2, and can be seen as power (energy over time) spread over an area. As the surface area of the sound sphere expands, the amount of energy generated by the sound source is distributed over an ever-larger surface area. The amount of energy in any given square meter of the expanding sphere's surface decreases according to the inverse square law, which states that the energy drops off as 1/distance^2. So acoustic energy twice the distance from the source is spread over four times the area and therefore has one-fourth the intensity; simply put, relative intensity is the reciprocal of the change in distance squared. You may recall from your grade-school math that the surface area of a sphere is 4πr^2, so as the radius of a sound sphere increases arithmetically, its surface area increases geometrically. The intensity of the source signal energy is distributed over the broadening surface area, so that the intensity at radius r is s/(4πr^2), where s = the power of the source. The inverse square law is extremely useful to remember in microphone placement, where even small changes in distance can have a significant impact on the resultant signal strength.

A few more relationships between amplitude, intensity and power: intensity is proportional to the square of the amplitude, so if the amplitude of a sound is doubled, its intensity is quadrupled. Power is also proportional to amplitude squared, therefore power and intensity are proportional to each other.

While power is measured in watts, the most-used acoustic measurement for intensity is the decibel (dB). Named in honor of Alexander Graham Bell, a decibel = 1/10 of a bel. A decibel is a logarithmic measurement that reflects the tremendous range of sound intensity our ears can perceive and closely correlates to the physiology of our ears and our perception of loudness. There are many different forms of decibel measurement and it is not always clear which method of computation is being used unless it is labeled properly. I must admit that I was once intimidated by logarithms, but with cheap calculators to do the math (one previously used log tables), just a simple understanding of how they work is all that is necessary for decibel calculations.

A logarithm primer: log10(x) can be thought of as "what power of 10 will result in x." For example, log10(100) = 2 because 10^2 = 100. Decibels are often used to measure very minute values, whose logarithms come out negative. For example, log10(0.000000000001) = -12, a value we will use for our threshold of hearing measurement below. In other words, if a value is expressed as 10^-12, its log10 is simply the exponent, -12.

A decibel is a measurement used to compare the ratio of intensities of two acoustic sounds (or electronic signals). The ratio (R) of two signals expressed by their power in watts (W1 and W2) is:

R (in dB) = 10 log10 (W1/W2)

For example, comparing 1 watt to the 0.001-watt (1 milliwatt) reference used for dBm:

dBm = 10 log10 (1 watt/.001 watt)
dBm = 10 log10 (1000)
dBm = 10 x 3 = 30 [because log10 1000 = 3]

dBm is the form most commonly used to evaluate power in audio circuits.
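Here is a minimal Python sketch of the inverse square law at work, using a hypothetical 1-watt source and expressing each drop as a decibel value of the kind introduced above:

```python
import math

# Inverse square law sketch for a (hypothetical) 1-watt point source:
# intensity at several distances, and the drop expressed in decibels.
SOURCE_POWER_W = 1.0                              # assumed source power

def intensity(distance_m: float) -> float:
    """Intensity in W/m^2: source power spread over a sphere of radius distance_m."""
    return SOURCE_POWER_W / (4 * math.pi * distance_m ** 2)

reference = intensity(1.0)
for d in (1.0, 2.0, 4.0):
    i = intensity(d)
    drop_db = 10 * math.log10(i / reference)      # 10*log10 of an intensity ratio
    print(f"{d:>4} m: {i:.4f} W/m^2, {drop_db:+.1f} dB relative to 1 m")
# Each doubling of distance spreads the power over 4x the area: 1/4 the intensity, -6 dB.
```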
Since intensity (I) at a fixed distance of measurement is directly proportional to power, a similar measurement can be made for two intensities I1 and I2:

dB = 10 log10 (I1/I2)

In this case, a doubling of power equals an increase of +3 dB. When we study filters later on, you will notice that a filter cut-off frequency is defined as the half-power point, which is calculated as -3 dB.

While the original dB scale was created for comparison of intensity or power, it is also commonly used as a measurement of amplitude (A) or sound pressure as defined above. The formula for computing relative amplitude or sound pressure is:

dB = 20 log10 (A1/A2)

By comparing this formula to the one for dB above, the relationship between amplitude, power and intensity becomes clear. In this case, a doubling of amplitude from one source to another equals an increase of +6 dB, as shown below:

20 log10 (2) = 20 x 0.301 = +6.02 dB

The most common acoustic ratio measures a current sound against a predetermined value of the threshold of audibility mentioned above, but expressed as an intensity of 10^-12 watts per square meter. This absolute measurement is referred to as the sound-pressure level (SPL) and gives us a means of generalizing relative loudness of common acoustic sources (note that the dB is followed by SPL to indicate this mode of measurement). The logarithmic scale from the threshold of hearing to the threshold of pain ranges from 0.00002 N/m^2 to 200 N/m^2, or about 120-130 dB SPL, at which point the entire body, not just the ears, senses the vibrations (NB: In preparing this article, it quickly became apparent that no standard for the threshold of feeling or the threshold of pain has been established, and in fact it ranges in the references used from 120 dB SPL to 140 dB SPL, which is a huge variation of opinions and points out the differences between acoustic and psychoacoustic measurement). Younger people also have more effective protection mechanisms and so can tolerate louder sounds (surprise!). If we accept 130 dB as the threshold of pain, then humans hear sounds that range from the smallest perceptible amplitude to those that are 10,000,000,000,000 times as loud, or 10 watts/m^2. Both the dB and dB SPL scales reflect the incredible discrimination of human hearing, our most sensitive sense by far. Here are some vague benchmarks (which of course depend on many factors, including the listener's distance from the sound).

| Source | Power (watts/m^2) | dB SPL |
| Threshold of pain | 10 | 130 |
| Jet takeoff from 500 ft. | 1 | 120 |
| Medium-loud rock concert | .1 | 110 |
| New York subway | .001 | 90 |
| Jack-hammer from 50 ft. | .0001 | 80 |
| Vacuum cleaner from 10 ft. | .00001 | 70 |
| Light traffic from 100 ft. | .0000001 | 50 |
| Whisper from 5 ft. | .000000001 | 30 |
| Average household silence | .0000000001 | 20 |
| Threshold of hearing in young | .000000000001 | 0 |

Signals from microphones, most of which seek to accurately transform changes in SPL to proportional changes in voltage (V), can also be measured by the same method. If one were to change the miking distance to the sound source, the voltage differences could be measured as follows:

dB = 20 log10 (V1/V2)

If measured properly, halving the distance of the mic to the source should, thanks to the inverse square law, double the voltage produced by the microphone, giving a +6 dB increase in amplitude (which, if you've been reading closely, also produces four times the intensity). For a standardized comparison of voltages, 0.775 volts is used as the reference level for 0 dB. We have looked at two basic types of dB measurement, one for power and intensity, and the other for amplitude, SPL and voltage.
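The two formulas can be wrapped up in a pair of small helpers; this is a minimal Python sketch, and the example ratios simply replay the numbers used above:

```python
import math

# dB helpers for the two cases above: power/intensity ratios use 10*log10,
# while amplitude, sound-pressure, and voltage ratios use 20*log10.

def db_power(ratio: float) -> float:
    """Decibel value of a power or intensity ratio."""
    return 10 * math.log10(ratio)

def db_amplitude(ratio: float) -> float:
    """Decibel value of an amplitude, pressure, or voltage ratio."""
    return 20 * math.log10(ratio)

print(db_power(2))           # doubling power     -> ~ +3.01 dB
print(db_amplitude(2))       # doubling amplitude -> ~ +6.02 dB
print(db_power(1 / 0.001))   # 1 W against a 1 mW reference -> 30 (i.e. 30 dBm)
print(db_power(10 / 1e-12))  # threshold of pain vs. threshold of hearing -> 130 dB
```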
Several other weighted dB scales, such as dBA, are used for specific purposes, such as more closely mirroring the way we hear, but this will be discussed in further detail in the psychoacoustics sections.

Dynamic envelope refers to the amplitude change over time of a sound event (usually a short one, such as an instrumental or synthesized note). As a very simple example (because there is usually much more going on in acoustic sounds), a note can have an initial attack, characterized by the amount of time it takes to change from no sound to a maximum level; a decay phase, whereby the amplitude decreases to a steady-state sustain level; followed by a release phase, characterized by the time it takes the amplitude to change from the sustain level to 0. Not only do real-world (and complexly synthesized) sounds have more complex overall envelopes, but they often exhibit different envelopes for all their individual frequency components.

For further study, see Hyperphysics->Sound Level Measurement
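Returning to the dynamic envelope just described, here is a minimal Python sketch of a four-stage attack-decay-sustain-release envelope; the stage lengths and sustain level are arbitrary choices:

```python
# A minimal attack-decay-sustain-release envelope as a list of amplitude values.
# The stage lengths (in samples) and the sustain level are arbitrary choices.

def adsr(attack: int, decay: int, sustain_len: int, release: int,
         sustain_level: float = 0.6) -> list:
    env = []
    env += [i / attack for i in range(attack)]                           # 0 -> 1
    env += [1 - (1 - sustain_level) * i / decay for i in range(decay)]   # 1 -> sustain
    env += [sustain_level] * sustain_len                                 # hold
    env += [sustain_level * (1 - i / release) for i in range(release)]   # sustain -> ~0
    return env

envelope = adsr(attack=10, decay=10, sustain_len=20, release=15)
print(len(envelope), round(max(envelope), 2), round(envelope[-1], 3))    # 55 1.0 0.04
```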
http://www.indiana.edu/~emusic/acoustics/amplitude.htm
Vector fields – a simple and painless introduction [14 Sep 2009]

Vector fields provide an interesting way to look at the world. First, a quick bit of background. A vector is a quantity with magnitude and direction. A simple example is the velocity of a car that is traveling at 100 km/h in a Northerly direction. The vector representing this motion has magnitude 100 km/h and direction North. We usually represent a vector using an arrow. The size and direction of the arrow indicate the magnitude and direction of the vector. (See more in the Vectors chapter in IntMath.)

Before I tell you what a vector field is, let’s look at a few examples. Each of them involves patterns – the building block of all math.

Weather charts – wind speed and direction

The daily weather chart (showing wind speeds) is an example of a vector field. The wind direction and strength are given for different points on the Earth’s surface. In the above chart of the Australia/South Asia region, gentle winds (across most of the Indian Ocean) are indicated by a small arrow head, and stronger winds by larger arrow heads. Winds rotate around high and low pressure systems, in opposite directions for the Northern and Southern hemispheres.

Below is another wind chart giving us a better idea of what a vector field is. This time we are looking at the winds surrounding Cyclone Ike, which devastated Texas in September 2008. The winds in the center were over 200 km/h. (The scale on the chart is in knots, the unit we usually use for wind speed.) The direction of the "F" symbols indicates the wind direction, while the color indicates the magnitude (speed).

Wind chart – Cyclone Ike from the Quikscat satellite [source]

Another example of a vector field is the pattern made by iron filings when under the influence of a magnetic field.

Iron filings lining up around a magnet. Image source

We’re ready for a loose definition of a vector field.

Definition of Vector Field

A vector field is simply a diagram that shows the magnitude and direction of vectors (forces, velocities, etc) in different parts of space. Vector fields exhibit certain common shapes, which include a “source” (where the vectors emanate out of one point), a “sink” (where the vectors disappear into a hole, something like a black hole effect), a “saddle point” (which looks like a horse’s saddle), and a “rotation” (where objects rotate around some point, something like a planetary system).

Following are some excellent visualizations of vector fields for you. If you visit the links, you can play with the vector fields to explore the concepts involved. There’s some pretty neat art going on, too, but that’s just a lucky outcome.

Rotations, sinks, sources, and saddle points

Visit the following example from the Demonstrations site by Wolfram. These interactives are made using Mathematica. There are 2 things you can do:

- Watch an animation that shows the outcome of parameter changes (click the “watch Web preview” link at the top right of each animation), or
- You can download the Mathematica Player and interact with the documents, and explore by changing parameters. This is recommended!

The link to the interactive: Vector Fields: Streamline through a Point

In this interactive, you can grab the point indicated by the larger dot and explore the direction of the vector field. The developer says: Drag the locator; the red streamline then passes through that point, illustrating the flow of the vector field. The vector fields show rotations, sinks, sources, and saddle points.
A saddle point

A hilly bowl-shaped vector field

A Neural Network and vector fields

Your brain consists of millions of neural networks. When you start thinking about something, some of your neurons fire and this triggers nearby brain cells to fire. Soon (within milliseconds), a larger network of neurons begins to fire (you are getting excited about what you are thinking about and it begins to consume more of your attention). This scenario is demonstrated in the following screen shots from the animation: Cellular-Automaton-Like Neural Network in a Toroidal Vector Field. The first torus shows the beginnings of our thought, the second one shows the network after some excitation.

Neurons firing in a neural network

The developer of this interactive says: Red indicates cellular activity (a neuronal spike) while blue indicates inactivity. Color intensity encodes the value of a binary internal state variable. The weights between each cell and each of its neighbors have been adjusted to reflect Boltzmann-like transition probabilities appropriate for flow in a constant vector field on a torus (the angle of the field is an adjustable parameter in this Demonstration).

Vector Field Java Applet (from Falstad)

This is a superb interactive. There are a huge number of examples here (use the pull-down arrows at the top right of the java applet), with clear animations indicating direction and magnitude (usually velocity or forces involved). The link: Vector field animations (java applet). A screen shot:

3-D Vector Fields

Here is an animation from Australia’s Bureau of Meteorology showing the wind velocities involved when a high and a low pressure centre interact. Here are some more examples of 3-D vector fields, once again from Wolfram’s Demonstrations.

Modeling Game Behaviour

This next one is a vector field related to gaming and learning. We all learn from experience. We tend to be attracted to strategies (in real life and in games) that were successful in the past. ‘Experience-weighted attraction’ (EWA) learning is a model of game behaviour where we modify our strategies based on our success with those strategies. For example, in the game of chess, we may find that we are most successful when attacking with our knights, so we continue to use that approach. But our opponent learns what we are doing and develops a good counter-attack. We change our approach.

The mathematics behind all this is that we can estimate the probabilities of players changing approach. Since these are multi-dimensional problems, we can draw vector fields representing the probable outcomes. This interactive shows such a vector field. In this game, a player can either cooperate with the other player or defect, so in any round either both cooperate, one cooperates while the other defects, or both defect. The following variables are involved when two players use the Bush–Mosteller reinforcement learning algorithm playing a symmetric game:

- The payoff a defector gets when the other player cooperates (for temptation);
- The payoff obtained by both players when they both cooperate (for reward);
- The payoff obtained by both players when they both defect (for punishment); and finally,
- The payoff a cooperator gets when the other player defects (for suckers).

Here's what you'll see:

Helmholtz-Coil Fields (Electricity)

Two parallel circular conductors both carry current in the same direction. The circles lie parallel to the x-y plane.

A simple example of a Vector Field

Here's a more formal definition of a vector field. It applies to 3-dimensional space as well.
Definition: A vector field in two-dimensional space is a function that assigns to each point (x,y) a two-dimensional vector given by F(x,y). This means every point on the plane has a vector associated with it (with magnitude and direction).

Example: The force operating at a point (x,y) on a surface is given by f(x,y) = (-y,3x). We could also write this as f(x,y) = -yi + 3xj, where i is the unit vector in the x-direction and j is the unit vector in the y-direction. Let’s see what this means with 5 points in the plane.

(a) If we are at the origin, (0,0), there is no force at all, since f(0,0) = 0i + 0j.

(b) At the point (1,1), the force will be f(1,1) = -1i + 3j, so the force is in the up-left direction, with magnitude √10.

(c) Another point (-1,2) will have force -2i − 3j, which points down-left with magnitude √13.

(d) At the point (-2,-4), the force vector will be f(-2,-4) = 4i − 6j, that is, a force of magnitude √52 pointing right and down.

(e) Point (4,4) will have force -4i + 12j, which points up-left with magnitude √160 ≈ 12.65.

Here are the 5 vectors we just described. Notice the forces in the middle of the vector field are very small, and they get bigger as you get further from the origin.

That brings us to the end of our exploration of vector fields. They are not scary, despite the way many textbooks present the topic.
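As a quick numerical check of the five vectors above, here is a minimal Python sketch that evaluates f(x,y) = (-y, 3x) at the same points and prints each magnitude:

```python
import math

# Evaluate the example field f(x, y) = (-y, 3x) at the points above and
# print each force vector with its magnitude.

def f(x: float, y: float):
    return (-y, 3 * x)

for point in [(0, 0), (1, 1), (-1, 2), (-2, -4), (4, 4)]:
    fx, fy = f(*point)
    magnitude = math.hypot(fx, fy)          # sqrt(fx^2 + fy^2)
    print(f"f{point} = {fx:+g}i {fy:+g}j, magnitude {magnitude:.2f}")
```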
http://www.intmath.com/blog/vector-fields-a-simple-and-painless-introduction/3345
BEFORE FORT UNION The American Southwest officially became part of the United States at the close of the Mexican War in 1848, although the infiltration of Anglo-American people and culture had begun more than a generation earlier with the opening of the Santa Fe Trail between New Mexico and Missouri. Political organization of the Mexican Cession was part of the famous Compromise of 1850 when California was admitted to the Union as a free state, New Mexico and Utah territories were established with the right of popular sovereignty regarding the institution of black slavery, and the boundary controversy between the State of Texas and New Mexico Territory was settled. American military history in the region began with the outbreak of war between the United States and Mexico in 1846, and the United States Army would continue to be a major factor in political, social, cultural, and economic, as well as military developments in New Mexico Territory for nearly half a century. For a time New Mexico Territory included all of the present states of New Mexico and Arizona and portions of the present states of Colorado, Utah, and Nevada. The primary mission of the army in the region for four decades was to protect travelers and settlers (including the Pueblo Indians, Hispanic population, and Anglo residents) from hostile activities of some Indians. During the Civil War that responsibility was expanded to include Confederate troops who invaded the territory. The significance of the army in the region, however, extended far beyond protection, and the military establishment affected almost every institution and individual in New Mexico. Fort Union was one part of that vast system, and it was established at a time of extensive changes in the New Mexican political, social, economic, cultural, and military structure. In 1851 Fort Union was established almost 100 miles from Santa Fe near the Santa Fe Trail and served briefly as command headquarters for the several other forts in the territory and longer as protector of the vicinity from Indians who resented the loss of their lands, power, and traditional ways of life. Most military engagements between soldiers and Indians, however, occurred beyond the immediate jurisdiction of Fort Union. Even so, troops stationed at Union were frequently sent to participate in campaigns in the Southwest and on the plains. The post was always closely associated with the Santa Fe Trail, the economic lifeline that tied New Mexico to the eastern States. An important part of the mission of troops stationed at Fort Union was to protect that route from Indian raids and warfare, to keep open the shipping lane to the Southwest. Perhaps more important than fighting Indians over the years was Fort Union's role as the department (later district) quartermaster depot for military posts throughout the territory, 1851-1853 and 1861-1879 (it was a subdepot from 1853-1861), when much of the food, clothing, transportation, and shelter for the army was distributed from Fort Union store houses. This made Fort Union the hub of military freighting in the Southwest, an activity which also employed many civilians and has until recently been overlooked in evaluating the military history of the region. In addition, from 1851 to 1883, the department ordnance depot (known as the arsenal after the Civil War) was operated at Fort Union. 
Such logistical assignments at Fort Union were not as romantic in the public eye as fighting Indians, but they made the other military bases, field campaigns, and police actions possible. New Mexico was a large territory, it must be remembered, and Fort Union was not involved in everything going on there. One must be careful not to claim too much importance for Fort Union, just as one must be careful not to claim too much importance for the army in the region. It was just one part of a complex and changing society. The Anglo-American troops and civilian employees of the army who came from the eastern states to the Southwest, including those at Fort Union, helped to modify and destroy the traditional ways of life of Indians and Hispanos in the Southwest, a process that has since been called the "Americanization" of the region. Marion Sloan Russell (1845-1936) first visited Fort Union in 1852 and was there on many other occasions. She met her husband, Lieutenant Richard D. Russell, and was married at the post. A few years before her death she dictated her memoirs, including fond recollections of Fort Union. "That fort," she proclaimed, "became the base for United States troops during the long period required to Americanize the territory of New Mexico." That "Americanization," in part, was the result of the intrusion of Anglo institutions and values, including Protestantism, democratic ideals, political structures, public education, and a market economy into the combination of Indian and Hispanic cultures that had developed during previous centuries. It was also the result of Anglo-American domination of the economy and government, which slowly affected the social structure and culture in the Southwest. This was not always a conscious goal or effort, but it resulted from circumstances in which Anglo power was enforced by the military (which also included some Hispanic soldiers and native New Mexican employees). The army thus performed primary and secondary functions in that process of change over the years. The overall effect appeared far-reaching and dramatic because the histories, traditions, and cultures of the Indians and Hispanos of the Southwest were markedly different from those of the Anglo conquerors. As historian Marc Simmons proclaimed, "the entire history of New Mexico from 1850 to the present is interwoven with attempts by the Indian and Hispano populations to come to terms with an alien Anglo society." The history of Fort Union must be set into that perspective of cultural change to see it as more than just another frontier military post established to fight Indians. The officers and men of the American army had to adapt to the peoples and cultures already in the Southwest, and they had to learn to survive and live productively in a geographical environment foreign to their earlier experiences but to which the native New Mexicans had already learned to accommodate their lives, ideas, and institutions. Because of Anglo beliefs in the superiority of their people and institutions over those of the Hispanics and Indians, army personnel often failed to assimilate native practices in dealing with the environment and misunderstood what was possible in the region. Americans from the United States were as determined to dominate the land as they were the people of the Southwest. The history of Fort Union is also part of that story.
Fort Union was established in the heart of a vast region of plains (where there were few trees) and mountains, embracing portions of the present states of New Mexico, Texas, Oklahoma, Kansas, Colorado, and Utah. This included the western plains, ranging from the flat grasslands of the Llano Estacado of western Texas and eastern New Mexico to the eroded prairies bordering eastward-flowing streams running out of the Rocky Mountains toward the Mississippi River, the volcanic mesas and isolated peaks of northeastern New Mexico and southeastern Colorado, and the foothills and mountains of the southern Rockies. Fort Union was located in 1851 in the transition zone between the plains and the mountains, an area rich in several grasses which were excellent for grazing livestock and cutting for hay. The predominate grass was grama, and there were also found buffalo grass, switch grass, bluestem, antelope grass, and others. The military post was located west of the Turkey Mountains and east of the Sangre de Cristo Mountains. The Turkey Mountains comprise a circular group of timbered hills, formed by volcanic eruptions and igneous uplift, which were set aside as the Fort Union timber reservation. The Sangre de Cristos form the southernmost branch of the Rocky Mountain province. West of the Sangre de Cristos lies the Rio Grande, the fifth longest river in North America, the lifestream of New Mexico from early Indian occupation to the present. One of the military officers stationed in New Mexico in the late 1850s, Lieutenant William Woods Averell, Regiment of Mounted Riflemen, later wrote in his memoirs that "the principal topographical feature of New Mexico is the Rio Grande which enters it from Colorado on the north and running along the backbone of the Rocky Mountains, like a half-developed spinal cord in embryo, leaves it at El Paso on the south." Averell clearly understood the primacy of the Rio Grande to the territory. "As the Nile to lower Egypt, so is the Rio Grande to the habitable portion of New Mexico," he wrote. "Agriculture waits upon its waters which are drained away by unnumbered acequias to irrigate its fertile but thirsty soil." In addition, "the Mexicans, for protection and defense against twenty thousand savages, lived in towns from Taos to El Paso." The Sangre de Cristo range was an obstacle to travel between the plains where buffalo were plentiful and the agricultural settlements in the Rio Grande valley. There were several passes through the mountains, three of which were most important to plains Indians who visited the Pueblos and other New Mexican settlements and to the Pueblos and New Mexicans who ventured onto the plains to hunt buffalo and trade with the plains tribes. The Pueblos located at those three connections enjoyed a favored position in trade between the plains and the valley and prospered from the commerce. As points where different cultures met, they also faced special problems. The northern pass, perhaps the most difficult of the three, connected with Taos, northernmost Pueblo in New Mexico, via either Rayado Creek or the Cimarron River of New Mexico on the eastern side of the Sangre de Cristo range and the Taos Valley on the west. The southern pass, the least difficult route of the three, connected Pecos Pueblo in the Pecos River valley with the river Pueblos and Santa Fe, after it was founded in 1610, via Glorieta Pass. It was the route followed by the Santa Fe Trail in the nineteenth century. 
The middle pass followed up the Mora River valley from the plains and connected with Picuris Pueblo on the Rio Grande side. Fort Union was established at the eastern end of that middle pass to Picuris. Each of those three routes, it should be noted, followed reliable water sources. Transportation routes and settlements in the Southwest were located on or near flowing streams because of the general paucity of annual precipitation and its sporadic nature during any given year. All of the streams headed in the mountains and defined the patterns for permanent settlements. The Rio Grande was the largest and most important river in New Mexico, but a number of rivers and their tributary creeks were vital in the area surrounding Fort Union. None of these streams was navigable. The Arkansas River flowing eastward from the Colorado Rockies and across present Kansas had served as the international boundary (west of the 100th meridian, present Dodge City, Kansas) between the United States and Mexico, 1819-1848. Its valley was an important avenue for Anglo westward migration. The Santa Fe Trail, the major overland connection between New Mexico and the Missouri River valley and the primary route of supply for Fort Union and the army in the Southwest, followed a stretch of the Arkansas (the original route, later known as the Cimarron Route, from present Ellinwood, Kansas, to a point near present Cimarron, Ingalls, or Lakin, Kansas, and the later Mountain Route from Ellinwood to present La Junta, Colorado). Several Indian tribes lived and hunted along the Arkansas, and Bent's Fort was established on that stream by Bent, St. Vrain & Co. (Charles and William Bent and Ceran St. Vrain) in 1833, in part, to trade with some of them. Troops from Fort Union were sometimes sent to protect routes of transportation along the Arkansas, especially during the 1850s and the Civil War years. There are two Cimarron rivers in Fort Union country. One, a tributary of the Arkansas River, is formed by the joining of the Dry Cimarron (which begins in the Raton Mountains about 30 miles east of Raton Pass in New Mexico), Carrizozo Creek (heading in New Mexico), and Carrizo Creek (heading in Colorado) in the northwestern corner of the Oklahoma panhandle. Thus the main stream of this Cimarron is known as the Dry Cimarron in New Mexico (to distinguish it from the other Cimarron in New Mexico) and as the Cimarron River from Oklahoma eastward. The Dry Cimarron was also an appropriate name for the river because, in most years, its surface flow was only sporadic. Water could usually be found, however, by digging in the sandy bed. This Cimarron flows (when water is evident) eastward in present Oklahoma, Colorado, and Kansas, and back into Oklahoma where it joins the Arkansas west of present Tulsa. The Cimarron Route of the Santa Fe Trail followed this Cimarron River from Lower Spring south of present Ulysses, Kansas, to Willow Bar northeast of present Boise City, Oklahoma. The other Cimarron River flows eastward from the Sangre de Cristo range in New Mexico and joins the Canadian River just north of the famous Rock Crossing of the Canadian where the Cimarron Route of the Santa Fe Trail crossed on a streambed of solid stone. The Canadian River was also crossed farther upstream by the Bent's Fort or Raton Route (later known as the Mountain Route) of the Santa Fe Trail southwest of Raton Pass, and the Mountain Route crossed this Cimarron River at the present town of Cimarron, New Mexico, and other places. 
The Canadian, which flows through a deep canyon from a point a short distance south of the Rock Crossing until it reaches eastern New Mexico, was with few exceptions an obstacle to wagon travel to the east and northeast of Fort Union. The Canadian River was often called the Red River during the nineteenth century, which sometimes creates confusion because there are so many other Red rivers. The presence of two Cimarron rivers, plus the Dry Cimarron, also provides potential for a mix-up. Ute or Utah Creek flows south into the Canadian River, joining that stream near the eastern boundary of New Mexico. The Cimarron Route of the Santa Fe Trail crossed Ute Creek, and Fort Bascom was later located near its mouth on the Canadian. Two small streams, Rayado and Ocate creeks, head in the Sangre de Cristo Mountains. The Rayado is an affluent of the New Mexico Cimarron River and was crossed by the Mountain Route of the Santa Fe Trail. The Ocate flows to the Canadian River and was crossed by both major branches of the Santa Fe Trail. Both creeks were closely related to Fort Union. Troops were stationed at the Rayado before Union was established, and detachments from Fort Union were sent there briefly afterward. The Fort Union farm was located on the Ocate. The Pecos River flows south out of the Sangre de Cristos through New Mexico and Texas to the Rio Grande, and it drew settlers from all cultures which came into the area. Rio Gallinas, a tributary of the Pecos, runs through present Las Vegas, New Mexico. The Mora River and its tributary, Sapello River, which joins at present Watrous, New Mexico, drains eastward from the Sangre de Cristos to join the Canadian. Like the Pecos, the Mora valley drew settlers prior to the Anglo infiltration. It was a valley of rich soil which, with irrigation, produced fine crops of wheat, corn, other small grains, vegetables, and fruits. Fort Union was established on a tributary of the Mora, Wolf Creek (also known as Coyote Creek and occasionally as Dog Creek). The importance of these streams in the region cannot be exaggerated. The overwhelming factor throughout the entire area is aridity; the limited supply of water has been critical regardless of the terrain and other geographical features. "Aridity," William deBuys succinctly declared, "more than any other single factor, shapes this stark world." All human activity, from procuring basic necessities to traveling through the region, always has been constrained by the scarcity of a reliable source of water. Annual precipitation in the region averages below twenty inches per year, but "the capricious timing of it" according to deBuys, "makes the Southwestern environment particularly difficult." Much of the precipitation occurs during the summer months, most of it the result of "local high-intensity storms of relatively short duration." These thunderstorms are frequently accompanied by hail. From records kept at Fort Union during a period of ten years, the following monthly mean temperatures (degrees F.) and mean precipitation (inches) were derived: The record was clear that most precipitation occurred in July, August, and September, a period known in New Mexico as the "monsoon season" or "rainy season." Eveline M. Alexander, wife of Captain Andrew Jonathan Alexander, Third Cavalry, wrote in her diary in August 1866, following their trip from Fort Smith, Arkansas, to Fort Union: "We arrived here in the rainy season, . . . and every day we are treated to a shower of rain. 
However, you can see it coming so long before it reaches you that it is not much annoyance." A newcomer to the area, Mrs. Alexander had not yet felt the force of the violent thunderstorms with high winds and hail which were an annoyance according to the testimony of numerous residents in the territory. The region also experiences an abundance of wind. Complaints about the wind and the dust it whipped through the post were common at Fort Union. Some residents referred to it as "Fort Windy." The soils were easily blown about most seasons of the year because of the shortage of moisture. One of the first residents of the post, Catherine Cary (Mrs. Isaac) Bowen, commonly known as Katie, wrote that "in this territory nearly all the time we have high winds and the soil becomes so dry and powdered that the air is filled with clouds of the most disagreeable kind of dust." Later she commented about "one or two days of high winds which nearly buried us in dust." Her explanation was that "the grass in this country forms no sod, consequently the ground is much like an ash heap on the surface." On another occasion, Mrs. Bowen gave a more vivid description of the gales at Fort Union: Another officer's wife, Lydia Spencer (Mrs. William B.) Lane, who lived at Fort Union before and after the Civil War, complained about how the third post "was swept by the winds all summer long" in 1867. Her views of the wind and descriptive talents were comparable to those of Mrs. Bowen fifteen years earlier. Of the omnipresent winds, Mrs. Lane wrote: Soon after Private William Edward (Eddie) Matthews, Company L, Eighth Cavalry, arrived for duty at Fort Union in 1870, he reported to his family at Westminster, Maryland, about his new assignment: "The only objection I can find here is the miserable wind. Talk of March wind in the States, why it is not a comparison to this place. Wind, wind, and sand all the time. This Post is built on a plain, there is nothing to break the wind, therefore giving it full sway." A couple of weeks later Matthews noted that, during the sand storms, almost everyone who had to be outside wore goggles to protect their eyes. In March 1874, with his talent for humorous exaggeration, Matthews again described the wind at Fort Union: The persistent gales and resulting dust and sand storms at the third Fort Union were explained by yet another officer's wife, Frances A. (Mrs. Orsemus B.) Boyd, who resided at the post in 1872. Fort Union, she declared, "has always been noted for severe dust-storms. Situated on a barren plain, the nearest mountains, and those not very high, three miles distant, it has the most exposed position of any military fort in New Mexico." Mrs. Boyd also discerned that the fine soil and sand drifted like driven snow, especially against the buildings at the fort. "The sand-banks," she explained, "were famous playgrounds for the children." She believed that neither trees nor grass would grow at Fort Union because the abrasive dust either prevented plants from taking root or uprooted and scattered the plants. Despite the wind and dust, however, Mrs. Boyd considered Fort Union a place of much beauty, especially the surrounding area "where trees and green grass were to be found in abundance." Most Anglo-Americans, who came to New Mexico from other regions, held strong opinions about the land and climate, some favorable and some not. Ovando J. Hollister, a Colorado Volunteer in the Civil War, gave his favorable impression of the area, expressing well an attitude hinted at by many others. 
Lydia Lane enjoyed New Mexico and wrote of one of her several trips between Fort Union and Santa Fe, in 1867, as follows: "The road generally was excellent, the scenery beautiful, and at times grand. The breeze, filled with the odor of pine-trees, was exhilarating and delicious; you seemed to take in health with every breath of the pure air." Years later she also held fond memories of "the sights, sounds, and odors of the little Mexican towns!" She remembered, while passing through the communities, that "the barking of every dog in the village, bleating of terrified sheep and goats, and the unearthly bray of the ill-used burro (donkey) made a tremendous racket." Most of all she remembered "the smells! The smoke from the fires of cedar wood would have been as sweet as a perfume if it had reached us in its purity; but, mixed with heavy odors from sheep and goat corrals, it was indescribable." It was an impression that stayed with her. "I never get a whiff of burning cedar . . . that the whole panorama does not rise up before me, and it is with a thrill of pleasure I recall the past, scents and all." Another point of view was provided by Lieutenant Henry B. Judd, Third Artillery, following his arrival for duty in New Mexico late in 1848. Judd found nothing pretty, describing the "Country" as "the most dreary & desolate that ever caused the eye to ache by gazing upon." Eddie Matthews expressed similar opinions and was never fond of New Mexico Territory nor its inhabitants. In his bigoted judgment, somewhat typical of Anglo-Americans from the eastern United States, the land was not fit for civilized people, and the Indians and Hispanos were not civilized. He noted that the "wind which blows in all seasons" kept the "sand in motion nearly all the time." Many Anglo-Americans could not condone aridity, believing that to be a sign of a forsaken land. The Southwest experiences periodic droughts which affect all human cultures. Historian Charles L. Kenner concluded that drought has been "the Southwest's most persistent opponent of tranquility." Archaeologist J. Charles Kelley has conjectured that peace and war between the Pueblos and Indians of the plains were directly related to precipitation. When rainfall was adequate for agricultural surpluses in the Pueblos and an abundance of buffalo meat and robes on the plains, peaceful trade was predominant in their relations. During droughts, when neither culture had a surplus to trade, raiding and warfare predominated. Such was the situation in New Mexico when Inspector General George A. McCall was sent to inspect the military posts in the Ninth Military Department and, so far as possible, determine the actual losses in lives and property to the Indians during the preceding 18 months, the capacity of the New Mexicans to resist the attacks, and the amount of military force required to provide adequate protection. There was a great need to know more about New Mexico, for in 1850 little was known about the territory by the American people or the government officials in the East; in fact, not much at all was known about the people of the region and their customs, the population, economic resources, geography, and almost everything else. In 1853 a former territorial governor of New Mexico, William Carr Lane, declared that "I find a deplorable state of ignorance to exist" about New Mexican affairs in Washington, D.C.
Although the military may have had more and better information about New Mexico than did any other government departments, because of reports from officers stationed there since the Mexican War, it must be understood that many of the decisions made regarding relations with New Mexicans and Indians, the establishment of Fort Union and missions assigned to it, and the administration of the Ninth Military Department which embraced New Mexico were often made with inadequate information and sometimes with considerable misinformation. When James S. Calhoun was appointed first Indian agent for New Mexico in 1849, Commissioner of Indian Affairs William Medill's letter of appointment declared: "So little is known here [Washington] of the condition and situation of the Indians in that region [New Mexico] that no specific instructions, relative to them can be given at present." Calhoun was requested to supply detailed reports about the Indians in the territory. By 1850 there were a few publications about New Mexico to which government officials and others could turn for information (although much of what was available was prejudiced against the New Mexicans), but there was little evidence that these were read by people who needed the information. The available publications included George Wilkins Kendall's Narrative of the Texan-Santa Fe Expedition (1844), Josiah Gregg's classic Commerce of the Prairies (1844), Thomas James's Three Years Among the Mexicans and Indians (1846), George F. A. Ruxton's Adventures in Mexico and the Rocky Mountains (1847), Frederick Adolphus Wislizenus, Memoir of a Tour to Northern Mexico, Connected with Col. Doniphan's Expedition, in 1846 and 1847 (1848), Second Lieutenant James W. Abert's Report and Map of the Examination of New Mexico (1848), and Lewis H. Garrard's Wah-to-yah and the Taos Trail (1850). One newspaper published in the East, Niles' Weekly Register, carried many New Mexican items, often reprinted from western newspapers, including the Santa Fe Republican which began publication in 1847. In addition there were several reports prepared by military officials that had been published. Until September 9, 1850, when Congress created New Mexico Territory, the boundaries of New Mexico had not been defined, and it would be some time before these were surveyed. James S. Calhoun, who had been appointed Indian agent for New Mexico in 1849, became the first territorial governor on March 3, 1851, ending the military rule of the region that had existed since General Kearny occupied Santa Fe on August 18, 1846. He had learned much about New Mexico during the previous two years, but many aspects of the region remained a mystery even to him. What he and others did know provided the basis for decisions in 1851 and after. In summary, the Hispanic and Pueblo Indian settlements of New Mexico were located mostly along the Rio Grande, with a few settlements east of that valley and fewer still to the west. These settlements were virtually surrounded by the so-called "wild" tribes, including Utes, various bands of Apaches, Navajos, and Plains tribes, most of whom had raided almost at will for decades. The settled areas suffered great losses of property and life as crops were destroyed, livestock stolen, and people killed or captured. The primary mission of the U. S. 
Army after successful occupation of the land, as declared by General Kearny at the time of the invasion and by other government officials many times later, was to protect New Mexican and Pueblo settlements from those Indians. In addition the Treaty of Guadalupe Hidalgo which ended the Mexican War provided that the United States would prevent raids on Mexican territory by Indians residing in the United States, or if these could not be prevented the U. S. would punish any Indians who did raid into Mexico. This was an impossible mission but required that the army make efforts to fulfill the agreement. At the same time, it was clear that the future of New Mexico, both its ability to attract settlers and its economic development, depended on control of the Indians. The military occupation of New Mexico was followed by a policy of providing some protection of population centers by stationing troops at those locations and at points along routes of travel that Indians followed in their raids. The success of that policy required more troops than were available. After the withdrawal of volunteer troops at the close of the Mexican War, the number of soldiers in the department was reduced substantially, never adequate to deal effectively with Indian raiders. The annual report of the secretary of war showed there were 665 troops in New Mexico in 1848, 708 in 1849, and 1,019 in 1850. Not only were the numbers small but, because of the vast territory, they were spread exceedingly thin to be effective. The largest concentration was at Santa Fe, where Fort Marcy, established in 1846, was the only fortification in the territory (the other military posts were simply bases of operation). The posts at El Paso and San Elizario, although located in Texas, were included in the department, but the troops at those places were of minor importance in the protection of most of the settlements in New Mexico. Other posts were located at Albuquerque, Socorro, Abiquiu, Dona Ana, Las Vegas, Rayado, Taos, and Cebolleta. The hope that such distribution of troops would protect the towns and help to block the routes of Indian raiders was accompanied by the belief that the protection of lives and property would stimulate economic growth and attract additional settlers. Not only were the troops unable to cover the territory, despite their wide distribution, but the cost of providing for them at so many locations rose far beyond what Congress wanted to appropriate for the job. The next policy, inaugurated in 1851 with the appointment of a new commanding officer and specific orders to economize, saw the removal of the troops from most of the towns. The economy of New Mexico at mid-century operated mostly at a subsistence level because of tradition, lack of capital, and perhaps most important because of the almost constant destruction perpetrated by Indian raids. It was not able to produce many supplies needed by the army. Even before the Mexican War, New Mexico had come to rely heavily upon the commerce of the Santa Fe Trail for manufactured items. The army had to depend on that same route. The need for economic development in New Mexico was clear, but that depended on the success of the military. New Mexican Governor Donaciano Vigil explained the situation in 1848: "The pacification of the Indians is another necessity of the first order, for as you already know the principal wealth of this country is the breeding of livestock, and the warfare of the Indians obstructs this almost completely."
The constant threat of Indian raids made subsistence agriculture much more difficult. Hispanic farmers, facing loss or destruction of their crops and livestock to Indian raiders, usually produced little more than required for their own household. Pueblo farmers, who had lived with Indian raids and periodic droughts for centuries, attempted to store any surplus in order to survive during bad times. The army thus found few sources of supply among the people of the territory because the Hispanics did not have surplus commodities to sell and the Pueblos usually refused to sell any surpluses they had. By providing a market and offering protection from Indian raids, the army stimulated New Mexican agricultural development. Even so, prices were high for limited supplies available. At the same time, the army introduced a cash system into what had been largely an economy based on barter. The New Mexican livestock industry was dominated by the raising of sheep, primarily for meat and secondarily for wool. Sheep provided the major source of wealth in New Mexico, wealth that was concentrated in the hands of a few wealthy families (ricos). The remainder of the people were economically poor; some were peons. There were also cattle and horse herds which, as with sheep, were objects of Indian raids, but almost no swine or goats were raised. Most manufacturing in New Mexico was comprised of household handicrafts, there being almost no production for a market. Several villages had a grist mill operated either by water or animal power. These were not capable of producing surplus flour and meal for a market beyond the local economy. The occupation of the area by U. S. troops apparently stimulated the establishment of a few larger grist mills, including one erected by Donaciano Vigil on the Pecos and another built by Ceran St. Vrain on the Mora, and these mills, in turn, stimulated additional production of cereal grains (especially wheat) to supply the demands of the mills and the market provided by the presence of the army. By 1850 a local supply of flour was available for the army. Other items available in the local markets included mutton, beans, vegetables, melons, fruits, salt, and firewood. The army was not the only beneficiary, however, for those heading for the California gold fields in 1849 and after also bought whatever was available as they passed through New Mexico (another factor accounting for the high prices of produce). The army also relied, for the most part, on the local economy for facilities. With the exception of a portion of Fort Marcy at Santa Fe and the Post at San Elizario, the army rented most of the buildings used for quarters and storehouses in 1850. Almost everything else the military required had to be shipped in via the Santa Fe Trail or, in the case of the southern posts, across Texas. The result of all these factors was that it was tremendously expensive to supply the troops in New Mexico. Military freight contractors carried 422 wagon loads of supplies from Fort Leavenworth to the posts of New Mexico during 1850, a total of 2.15 million pounds of food, clothing, and equipment. Rates per hundred pounds varied from just under $8.00 to more than $14.00. In addition to transportation costs, rent for facilities and prices demanded for locally purchased supplies were considered to be exceptionally high in New Mexico. It fell on the new departmental commander, Lieutenant Colonel Edwin Vose Sumner, to try to reduce such costs to the military, beginning in 1851. 
Sumner and his superiors relied heavily on the information gathered and recommendations made by Inspector General McCall in 1850. McCall's reports comprised the most complete information about New Mexico that was available to the War Department at the time. Some of the things he found should have been revealing. For example, there was not one military veterinarian in the department that had to rely heavily on horses for dealing with Indians. Some of his recommendations, such as the removal of troops from the towns, were followed almost completely. Within two years after his inspection tour of New Mexico, all the posts he visited except Fort Marcy at Santa Fe were abandoned and new ones had been established at other locations. McCall commented several times about the disastrous effects Indian raids were having on the economy of New Mexico. On July 15, 1850, he wrote as follows: "The hill sides and the plains that were in days past covered with sheep and cattle are now bare in many parts of the state, yet the work of plunder still goes on!" He noted that Apaches and Navajos were not afraid to steal livestock "in the close vicinity of our military posts." He estimated that during the previous three months several herders had been killed, between 15,000 and 20,000 sheep had been stolen, and "several hundred head of cattle and mules" driven from the settlements. The army had been ineffective. The Indians "were on several occasions pursued by the troops, but without success." As directed, McCall gathered reports on the losses to Indians during the 18 months prior to September 1, 1850. He concluded that the loss in livestock included 181 horses, 402 mules, 788 cattle, and 47,300 sheep. Another estimate of New Mexican losses of livestock to Indians during five years, from 1846 through 1850, included 7,050 horses, 12,887 mules, 31,581 cattle, and 453,293 sheep. A further perspective of those estimates may be gained by comparison with the numbers of livestock recorded in the federal census of New Mexico in 1850: 5,079 horses, 8,654 mules, 32,977 cattle, and 377,271 sheep. The need for additional protection from Indians was evident. McCall provided his assessment of the non-Pueblo Indians of the area. He thought the Navajos might be persuaded to adapt to a Pueblo way of life, and declared the several Apache tribes were considered the most destructive raiders because "they have nothing of their own and must plunder or starve." He thought the Apaches would be the most difficult to subdue "owing to their numerical strength, their bold and independent character, and their immemorial predatory habits." McCall identified six bands of Apaches in New Mexico, enclosing the settlements on all sides with the aid of the Navajos and Utes. The Jicarilla Apaches to the northeast were considered "one of the most troublesome" because of their recent attacks along the Santa Fe Trail. The White Mountain and Sacramento Apaches "range the country extending north and south from the junction of the Gallinas with the Pecos to the lower end of the Jornada del Muerto. They continue to drive off stock and to kill the Mexican shepherds both in the vicinity of Vegas and along the Rio Grande." The Mescaleros to the southeast raided more into Texas and Mexico than in New Mexico. The Gila Apaches to the southwest also carried destruction to Mexico more than New Mexico. 
Peace with all bands of Apaches would require sufficient supplies of the means of life so that they might survive without stealing, for without aid, McCall reiterated, "they must continue to plunder, or they must starve." According to McCall, the Utes ranged beyond New Mexico, but those living north of most settlements were considered "warlike" and raided as far south as Abiquiu, Taos, and Mora. They sometimes united with Jicarilla Apaches in their forays. The Cheyennes and Arapahos to the northeast were not considered a serious threat to New Mexican settlements. The Comanches to the east rarely struck in New Mexico, but they raided into Mexico and traded stolen property and captives with other tribes and the New Mexicans. The Kiowas were seldom seen in New Mexico. It was clear to McCall that the first priority for the army in New Mexico was to deal effectively with the Indians. Not until that problem was resolved could the territory grow and prosper. McCall's primary duty in New Mexico was to inspect the military posts, evaluate the state of the army, and make recommendations for improvements. In addition to department headquarters at Santa Fe, McCall visited the ten other posts, reporting the number present and evaluating conditions. He found a total of 831 troops in the department, including 150 at Fort Marcy in Santa Fe, 44 at Taos, 41 at Rayado, and 82 at Las Vegas. His detailed inspection reports on the posts provided a thorough summary of the army in the department. Of Las Vegas McCall wrote, "The consumption of corn at this post is very great, and a large depot should be established either here or in the vicinity." The demand for corn at Las Vegas was "caused by troops and government trains passing and repassing." Wagon trains were outfitted there for the trip across the plains and forage was sometimes sent to the relief of westbound trains as far away as the Cimarron River. In addition to a supply depot in the area, a military post was needed to protect the route of supply from Fort Leavenworth and other wagon roads, including one from Las Vegas to Albuquerque via Anton Chico. Las Vegas, which McCall thought was a good location for a supply depot, was not a good location for such a garrison because it was too far from the homelands of the Indians causing the most problems and off the "line of march of the Comanches when they visit New Mexico." A better location, he thought, would be at Rayado or on the Pecos River. McCall was not impressed with Barclay's Fort as a possible army post, although the location was good, because it was too small for a depot or large garrison of troops and the owners wanted too much money to sell or rent it ($20,000.00 to sell or $2,000.00 per year rent). McCall thought Rayado was a good location for a military post. McCall was critical of the overall military situation in the territory, calling it inadequate for the task at hand. He recommended a minimum of 2,200 troops with at least 1,400 of those mounted. He recommended that the troops be moved from the towns to "the heart of the Indian country." Because of the difficulty of maintaining horses for mounted troops, McCall recommended the establishment of "grazing farms" which, he believed, would result in great savings. Everything in the military department needed to be structured to deal with the serious Indian problem facing settlements in the territory. Indian raids continued into 1851. 
In February Indian Agent Calhoun reported that, "during the past month the Indians have been active in every direction, and for no one month during the occupancy of the Territory by the American troops have they been more successful in their depredations." Late in January, near Pecos only 25 miles from Santa Fe, several large herds of sheep and other livestock were stolen and at least three herders were killed. The Utes had raided along the Arkansas River, and "the Apaches and Navajos have roamed in every direction through this Territory." In March 1851 a band of Jicarillas took about 1,000 sheep near Anton Chico and more sheep were stolen from Chilili. Some of the Jicarillas, however, expressed a desire for peace. On April 2 two principal chiefs, Chacon and Lobo, came to Santa Fe, along with Mescalero Chief José Cito. On that date these Indians agreed to reside on lands assigned to them and not to go nearer than 50 miles from any settlement or route of transportation. In return the government would furnish them with farm equipment and annuities. Some of the Jicarillas refused to be bound by the treaty, which was not approved by the U.S. government anyway, and in April they raided near Barclay's Fort and attacked the town of Mora, killing several people. When a large party of Jicarillas appeared along the Pecos Valley near San Miguel, La Cuesta, and Anton Chico, the residents were alarmed. No raids were reported, however, and Chacon declared that his people were starving and had to find food. Chacon's band, as a demonstration of their commitment to peace, had recovered livestock taken by the Navajos and returned the stock to its owners. To avoid potential problems between Jicarillas and settlers, however, Calhoun wanted the Indians to move farther away from settlements. Chacon went to Santa Fe and agreed to move his people away from the settlements. But the move did not immediately occur. Other Indians were raiding settlements while major changes were taking place in the military organization with the appointment of a new department commander in 1851. This resulted in the establishment of Fort Union. The troops in New Mexico, it is important to understand, were part of the larger U. S. Army and functioned under its organization and limitations. The Anglo-American tradition, begun during the colonial era, was that a standing army was a liability rather than an asset. Citizen-soldier volunteers could be raised temporarily for a crisis, such as an Indian war or a war for independence, but an army of permanent soldiers was expensive and a threat to freedom. After independence the army was a necessary part of frontier Indian policy, but it was kept small, often inadequate, and poor. As Don Russell pointed out, "had it not been for Indian wars there probably would have been no Regular Army, yet at no time was it organized and trained to fight Indians." Congress was reluctant to fund a military complex. The army that was designed for the early national period, when the western boundary was the Mississippi River, faced enormous new responsibility following the expansionist years during which the western boundary was pushed to the Pacific Ocean. An increase in size and monetary support of the army did not follow prior to the Civil War. Following that national calamity, fought primarily by citizen-soldiers on both sides, Congress determined to reduce military expenditures again, keeping the army handicapped until the frontier was settled. 
Thus the greatest problem faced by the army in the Southwest was not the Indian threat to settlement, nor even the arid environment and vast distances, but a parsimonious Congress which refused to recognize that an expanding nation required an expanding military force to deal effectively with Indians, explore new lands, improve roads, provide its own facilities, and supply itself over long routes. Funds were never sufficient for the demands made on the army, and manpower and equipment were usually inadequate for the job faced. As military historian Robert Utley expressed so cogently, Congress refused "to pay the price of Manifest Destiny." Too often presidential administrations devoted to budget economy viewed the military as a good place to reduce expenditures. Such a move in 1851 resulted in orders for troops at western posts to become farmers and produce some of their own food and forage. As a result of congressional limitations, the army was small in numbers, had substandard equipment and facilities, and experienced a difficult time recruiting and keeping competent soldiers. There was little honor but a lot of hardship connected with service on the frontier, one reason that the companies of most regiments were seldom if ever filled to authorized capacity and that the army experienced a high rate of desertion in the West. In most years more than 10% of the enlisted soldiers in the entire army, in some years more than 20%, deserted and, over time, some regiments lost more than 50% of those enlisted for five years before their term of service expired. As Utley concluded, "they simply got their fill of low pay, bad living conditions, and oppressive discipline that stood in such bold contrast to the seeming allurements of the civilian world." Military justice often seemed arbitrary and severe. Punishment frequently varied for the same crime. In February 1851 a general court-martial in Santa Fe tried the cases of several Second Dragoons charged with forming a secret society in New Mexico known as the "Dark Riders," which included among its objectives "robbing and desertion." Of those found guilty, one was sentenced "to forfeit twelve dollars of his Pay, to work under charge of the Guard for one month & then be returned to duty." Two were sentenced to lose twenty-five dollars of their pay and, additionally, each was "to walk a ring daily six hours for one month twelve feet in diameter, then to labor two months with Ball & Chain attached to his Leg under charge of the Guard & be returned to duty." Each of four others faced a much more severe sentence, "to forfeit all pay and allowances that are now or may become due him, to have his Head shaved, to have his face blackened daily and placed standing on a Barrel from 9 to 12 O'clock A.M., and from 2 to 5 O'clock P.M. daily for twenty days, then placed under charge of the Guard at hard Labor, with Ball & Chain attached to his Leg until an opportunity affords to be marched on foot carrying his Ball & Chain to Fort Leavenworth and there be drummed out of the Service." Soldiers serving such penalties were not available for regular duty and contributed to the shortage of personnel. Thus an under-strength army, always inadequate in authorized numbers, was further reduced in effectiveness and efficiency by being constantly undermanned. The army averaged only 82% of its mandated strength prior to 1850. 
In 1850 the authorized size of the army was four artillery regiments, eight infantry regiments, and three mounted regiments (two dragoons and one mounted riflemen). The artillery regiments were comprised of twelve companies and the cavalry and infantry regiments had ten companies. The company strength varied by type of service. Each light artillery company was authorized to contain 64 privates, and each heavy artillery company was to have 42. Each infantry company was to have 42 privates; the dragoons were authorized 50 privates; and the mounted riflemen were assigned 64. In 1850 Congress authorized all companies of all branches stationed on the frontier to have 74 privates. Each company had three commissioned officers (captain, first lieutenant, and second lieutenant) and eight non-commissioned officers (four sergeants and four corporals). In addition, the field staff of a regiment included four commissioned officers (colonel, lieutenant colonel, and two majors), with an adjutant and a quartermaster selected from the subalterns. The noncommissioned staff included a sergeant major, quartermaster sergeant, and musicians (buglers for the cavalry and fifers, drummers, and bandsmen for the artillery and infantry regiments). In addition to the regiments there were the general staff officers and members of the following departments: medical, paymaster, military storekeepers, corps of engineers, corps of topographical engineers, and ordnance. If filled to authorized level, the entire army in 1850 would have totaled over 13,000 officers and men. Because most units were not up to capacity, the actual strength was 10,763, most of whom were stationed in the West. Almost 10% of the army in 1850 was stationed among the eleven posts of the Ninth Military Department. There were two companies of Second Artillery, ten companies (the entire regiment) of Third Infantry, three companies of First Dragoons, and four companies of Second Dragoons. The total authorized strength for these units was 1,603 officers and men, but only 987 were actually present in the department. This was an average of just under 90 officers and men for each military post. A chronic problem in New Mexico was the absence of officers who should have been with their companies. Many officers could be away from their regimental duties because of a generous leave policy which permitted them to be absent from duty up to a year (occasionally longer). Vacancies also resulted from resignations and delays in appointing replacements, detached service with other units and in other places, courts-martial assignments, and recruiting duties. Each military post comprised a highly structured society and operated under a disciplined routine in which every officer and enlisted man had his duties to perform. Despite the daily schedule, which ran by the clock with appropriate calls of drum or bugle, there was a considerable amount of leisure time with nothing provided for the men to do. There was little direct contact between commissioned and non-commissioned troops. The post commander ruled, assisted by the post adjutant and a sergeant major. The duties and training of enlisted men were directed by sergeants and corporals, under command of company officers. Several officers were in charge of specific departments: the post quartermaster was in charge of quarters, clothing, transportation, and all other supplies except food; the post commissary officer was in charge of rations; and the surgeon was in charge of the post hospital and sanitation. 
At some posts the quartermaster and commissary duties were performed by the same officer. Enlisted men, sometimes assisted by a few civilian employees, provided the labor force for a multitude of tasks at the post. Not all of them were available for duty, as Utley made clear: "Allowing for men in confinement, on guard, sick, and detailed to fatigue duties, a post commander could not often count enough men to man the fort, much less to take the field." It was not easy to recruit skillful young men for the required five-year enlistment. By 1850 almost two-thirds of the enlisted men were foreign-born, many of them Irish and German, and one-fourth were illiterate. The pay for privates was $7.00 per month for infantrymen and $8.00 for cavalrymen. A sergeant drew $13.00 a month. Soldiers were supposed to be paid every two months, but at frontier posts it was sometimes as long as six months before the paymaster returned. The soldier required little cash, however, because most of his needs were furnished, including uniforms, rations, quarters, transportation, medical care, and equipment. Except for his expense to the company laundress and tailor (which could be avoided if the soldier washed his own clothing and made his own alterations), a soldier's pay was available for items such as additional food from the post or regimental sutler's store, tobacco, recreation, gambling, whiskey, and, if inclined, to send some home to his family. The uniforms were probably sufficient, but rations and quarters were often inadequate. The daily ration, according to historian Robert Frazer, "was both uninviting and dietetically impoverished, designed to fill the stomach at minimum cost." The monotonous fare as prescribed by Army Regulations included meat (twelve ounces of salt pork or bacon, or twenty ounces of fresh or salt beef) and flour or bread (eighteen ounces of flour or bread, or twelve ounces of hard bread; sixteen ounces of corn meal could be substituted for flour or bread) each day. For each 100 rations there were also issued eight quarts of beans or ten pounds of rice, one pound of coffee or one and one-half pounds of tea, twelve pounds of sugar, two quarts of salt, and four quarts of vinegar. In addition, for each 100 rations, the soldier received one pound of sperm candles and four pounds of soap. Some of the food items shipped to New Mexico, such as bacon and flour, frequently deteriorated during the trip and the subsequent storage before issue. Other foods, except for the issue of vegetables when scurvy was found among the troops, had to be purchased by the individual soldier. Often the enlisted men had the opportunity to buy vegetables, fruits, milk, butter, and eggs at frontier posts, provided they chose to use their pay for such items. Many apparently preferred to use their limited funds for tobacco and whiskey. Drunkenness was a chronic problem at all levels of the service. Excessive drinking, like desertion, was a way many soldiers sought escape from the realities of garrison life. Quarters varied from post to post, and soldiers sometimes were housed in tents because barracks were not available. They lived in tents, of course, when on field duty. Most company quarters, because of inadequate funds and unskilled labor, were poorly constructed, inadequately ventilated, hot in the summer, cold in the winter, and conducive to the spread of disease. The frontier army frequently experienced "a high rate of sickness and mortality." 
Medical care, intended to be part of the fringe benefits, was too often inadequate at frontier posts. Although training was an important part of turning recruits into disciplined soldiers, the army did not have a standardized training program. Thus many recruits joined companies for duty without any "idea of the duties they will be called on to perform, or of the discipline they will be required to undergo." According to military historian Edward Coffman, the new soldier "often found the diet inadequate, the uniforms ill-fitting, and the quarters uncomfortable. Neither was the adjustment to discipline and drill and all that was involved in learning to be a soldier a pleasant experience." While drill dominated a recruit's training, usually there was no training in marksmanship. Perhaps it was not considered necessary since most troops became laborers at frontier posts and used axes, hammers, saws, picks, and shovels more than muskets, sabers, or cannon. Their main contact with a weapon came when they stood the ubiquitous guard duty. Most of a soldier's time was spent on garrison duty at a small military post, the tedious routine of which was occasionally broken by field service. Time away from the fort was often spent as guard to a supply train, mail coach, or other group, and, at other times, marching from one duty station to service at another. They were also sent on scouts to investigate Indian "depredations" and on expeditions to locate and punish Indian offenders. Despite the images of an Indian-fighting army portrayed in popular media, enlisted men were seldom engaged in combat. On average, a frontier soldier might participate in battle with the enemy one time during a five-year enlistment. Only rarely were those engagements decisive, and military leaders had a difficult time trying to figure out how to deal most effectively with Indians. In the long run, many other factors besides the army contributed to the defeat and destruction of the Indians' traditional ways of life. Meanwhile officers and soldiers held justifiable misgivings about their way of life, treatment, and importance on the frontier. William B. Lane, an officer who served in New Mexico and was stationed at Fort Union both before and after the Civil War, later explained the difficulties of soldiering in the 1850s. The effectiveness of troops in the Ninth Military Department depended on their comfort, health, well-being, and training, but it also depended on the equipment with which they were supplied and the officers who led them. In battle the troops were only as good as their weapons and commanders. The Third Infantry was equipped with the .69 caliber percussion smoothbore musket, a reliable instrument with destructive impact (although not as accurate as a rifled musket). It was heavy to carry, weighing over nine pounds, and time-consuming to reload and fire during the heat of battle (it was a muzzle-loader). The musket was equipped for a bayonet which was sometimes attached for drill and in battle. Most of the time, however, it was detached and served a variety of purposes as a tool, especially in the field, and made a good candlestand. The soldiers in the Second Artillery and Second Dragoons carried the musketoon, a shortened version of the .69 caliber musket used by the infantry. It weighed six and one-half pounds. According to Major General Zenus R. Bliss, the musketoon was "a sort of brevet musket. 
It was nothing but an old musket sawed off to about two-thirds of its original length, and the rammer fastened to the barrel by a swivel to prevent its being lost or dropped when loading on horseback; it used the same cartridge as the musket, kicked like blazes, and had neither range nor accuracy, and was not near as good as the musket, and was only used because it could be more conveniently carried on horseback." Almost everyone agreed that the musketoon was unsatisfactory. In 1853 Inspector General J. K. F. Mansfield declared the musketoon was "a worthless arm . . . with no advocates." When McCall inspected the posts in New Mexico in 1850, he recorded that "the two batteries in possession of the Artillery companies are in good order and are complete, including carriages, limbers, caissons, harness, etc." Each battery, according to McCall, comprised one six-pounder gun, one twelve-pounder field howitzer, and three twelve-pounder mountain howitzers. Ammunition included fifty-six rounds for each gun and field howitzer and sixty rounds for each mountain howitzer. Each of the artillery pieces had a bronze tube. The six-pounder gun had a bore diameter of 3.67 inches and fired a projectile weighing 6.10 pounds. It had a muzzle velocity of 1,439 feet per second and a range of 1,523 yards at a five-degree elevation. The twelve-pounder field howitzer had a bore diameter of 4.62 inches and fired a projectile weighing 8.9 pounds. It had a muzzle velocity of 1,054 feet per second and a range of 1,663 yards at a five-degree elevation. The twelve-pounder mountain howitzer was a lighter-weight, mobile weapon designed for field duty. It had the same bore and fired the same projectile as the field howitzer. It utilized a powder charge of one-half pound, only half the charge of the field howitzer. It had a muzzle velocity of 650 feet per second and a range of 900 yards at a five-degree elevation. The twelve-pounder mountain howitzer was "the most popular and widely employed piece" during the 1850s and during and after the Civil War. It was mobile, when mounted on the prairie carriage as in New Mexico, and effective against Indians. The First Dragoons in New Mexico were still using the .525 caliber Hall's percussion carbine, a breech-loading weapon issued when the dragoons were first organized in 1833. The musketoon was the replacement weapon for Hall's carbine, beginning in 1849. The First Dragoons in New Mexico had not yet received the "improvement" in 1850 and may have considered the Hall's carbine a more effective weapon, given the criticism of the musketoon. The troops of the First and Second Dragoons in New Mexico carried sabers. Inspector McCall did not identify the style, but most likely these were the Model 1840 dragoon sabers which were issued to both regiments. Members of both dragoon regiments in the Ninth Military Department also carried pistols, the Colt .44 caliber dragoon revolver, a cap-and-ball six-shooter. A full complement of dragoon equipment and arms, including forty rounds of ammunition, weighed a total of seventy-eight pounds. When this was added to the weight of the trooper and the horse equipment (saddle and bridle), it made a heavy burden for the dragoon mounts and affected their efficiency in pursuit of Indians. Dragoons surrendered part of their mobility for the superiority of their equipment. They did not always carry everything when engaged in chasing Indians. With this combination of arms, Utley concluded, "the frontier army easily outmatched the Indians in weaponry.
It was without doubt the most important single advantage the soldiers enjoyed over their adversary, and time and again, when a test of arms could be engineered, it carried the day." The problem was to catch the Indians and force an engagement, for they enjoyed the advantage of better knowledge of the land and greater mobility. They could be elusive to the point of frustration and use the landscape to their advantage. Indian soldiers usually stood and fought only when they believed they enjoyed superiority of numbers or position on the field or when surprised in camp. Successful engagements by the U.S. Army depended on perseverance, luck, and the officers who directed the troops. Most of the officers in the Ninth Military Department were graduates of the Military Academy at West Point where they were trained to serve as officers, received general military education, and were provided special schooling in engineering. They were not taught how to fight Indians. It was not easy to keep officers in the army because pay was inadequate in comparison to similar civilian positions, there was no retirement plan available, promotion was exceedingly slow, and there was much quarreling and competition among them. Except in wartime, there were few opportunities for advancement, and, as military historian Coffman explained, "the tedious monotony of garrison life could be grindingly oppressive." The Ninth Military Department was comprised, as noted above, of minor military posts in a remote region of the nation. "The routine of small garrisons," wrote Coffman, "offered little in the way of professional development." The incentives to make a career of officer life were not strong. The sister of West Point graduate Edmund Kirby Smith, also the wife of an officer, declared "The Army offers no career which a man of talent can desire. It to be sure (and I am sorry to say it) offers a safe harbour for indolence and imbecility." In 1847 Captain Edmund B. Alexander, who would become the first commanding officer of Fort Union four years later, wrote to his family: "I think if I had my profession to choose over I would select anything but the Army." Officers and enlisted men frequently turned to whiskey for escape from their conditions, and alcoholism was a serious problem for the army. Perhaps many of the officers assigned to duty in New Mexico felt there was little to be gained from service there. The military organization demanded discipline of officers as well as enlisted men, and everything at the department level of the army was carried out by orders issued from the top down. Officers at military posts, from the commanding officer to the lowest lieutenant, were hesitant to take any action without specific orders. Although officers in command of field operations were usually given much individual discretion in dealing with whatever circumstances might arise, there was guarded apprehension that any decision beyond specific instructions that proved to be unsuccessful might reflect badly on the officer and even lead to disciplinary action. The overall result was stifling for the officer corps, most of whom became mere functionaries in the chain of command. There was always an awareness among officers of who had rank over whom, which depended on the date of commission to a particular grade. The seeming simplicity of that system of seniority was complicated by the institution of brevet rank.
Brevet rank (usually a rank higher than the regular commission of an officer, awarded for a variety of purposes) was the cause of much controversy among officers in the army and of confusion among historians. The practice created all sorts of problems, as Secretary of War John B. Floyd pointed out in 1858, because of its "uncertain and ill-defined rights." The concept was borrowed from the British during the American Revolution to provide a temporary grade for an officer serving in an appointment away from his regular assignment. In the War of 1812 Congress established brevet appointments as honorary ranks to reward individual officers for gallant and meritorious service in battle or for faithful service in the same commissioned rank for ten years (a way to provide a "promotion" when there were no openings in the service at that level). As established at that time brevet rank was only an award of honor, the officer received the pay of his regular commission and held only the position of his regular commission in the chain of command. "Had the brevet system remained purely honorary," historian Utley observed, "it would have been harmless." It did not. Many officers who held a brevet rank must have argued that such an appointment should be worth something, at least under some conditions. For whatever reasons, as Utley summarized, "brevet rank took effect, in both authority and pay, by special assignment of the President, in commands composed of different corps, on courts-martial [from 1829 to 1869], and in detachments composed of different corps." The resulting arrangement "had so many ramifications and nuances that it produced endless dispute and uncertainty, to say nothing of chaos in the computation of pay." During the Mexican War brevet ranks were widely conferred as the primary method of extending recognition for achievement in battle. Most of the officers who remained in the service in 1850, including those in New Mexico, held one or more brevets. "Thus," wrote Utley, "under certain conditions a captain with no brevet might find himself serving under a lieutenant who had picked up a brevet of major in Mexico." In 1851 Senator Jefferson Davis, who would become secretary of war a few years later, spoke out against the brevet system that "has produced such confusion in the Army that many of its best soldiers wish it could be obliterated." The practice continued because it was a way to accord honor to deserving officers and, perhaps even more important, it compensated career officers for the inordinately slow promotions up the regular commissioned ranks. In military correspondence, orders, and reports, it was customary during the nineteenth century that all officers were addressed as and signed their name over their brevet rank, whether they received pay and commanded at the brevet rank or not (although sometimes regular commission and brevet rank were both given). In 1870 officers who were not serving at their brevet rank were prohibited from wearing the uniform of their brevet rank and from using their brevet rank in official communication. The widespread use of brevet ranks remains confusing, and every student of the frontier army must be aware of the system. Throughout this study of the history of Fort Union, brevet ranks are given only when it was clear that the identified officer was actually serving in that rank, as during the Civil War or a brevet second lieutenant. Even then the use of the term is avoided as much as possible in an attempt to reduce misunderstanding. 
Perhaps the best illustration of brevet rank was provided in a humorous poem by Captain Arthur T. Lee: The army was firmly established in New Mexico Territory by 1851 and faced myriad problems. There were obstacles of terrain, climate, and distance from supplies. The territorial government was weak, and there were rumors of political unrest. The unique blend of Indian, Spanish, and Mexican heritage in New Mexico made it difficult to draw lines and determine who were the perpetrators and who the victims of a complex conflict that had developed for centuries. The injection of Anglo culture, with yet another system of priorities and values, made the situation less stable. The army's record in dealing with the Indian problem there, the primary mission of the troops stationed in the department, left much to be desired. A complete shakeup was about to occur, resulting in widespread reorganization and the establishment of Fort Union.
http://www.nps.gov/history/history/online_books/foun/chap1.htm
In this section we are going to take a look at two fairly important problems in the study of calculus. There are two reasons for looking at these problems now. First, both of these problems will lead us into the study of limits, which is the topic of this chapter after all. Looking at these problems here will allow us to start to understand just what a limit is and what it can tell us about a function. Secondly, the rate of change problem that we're going to be looking at is one of the most important concepts that we'll encounter in the second chapter of this course. In fact, it's probably one of the most important concepts that we'll encounter in the whole course. So looking at it now will get us to start thinking about it from the very beginning. The first problem that we're going to take a look at is the tangent line problem. Before getting into this problem it would probably be best to define a tangent line. A tangent line to the function f(x) at the point x = a is a line that just touches the graph of the function at the point in question and is "parallel" (in some way) to the graph at that point. Take a look at the graph below. In this graph the line is a tangent line at the indicated point because it just touches the graph at that point and is also "parallel" to the graph at that point. Likewise, at the second point shown, the line does just touch the graph at that point, but it is not "parallel" to the graph at that point and so it's not a tangent line to the graph at that point. At the second point shown (the point where the line isn't a tangent line) we will sometimes call the line a secant line. We've used the word parallel a couple of times now and we should probably be a little careful with it. In general, we will think of a line and a graph as being parallel at a point if they are both moving in the same direction at that point. So, in the first point above the graph and the line are moving in the same direction and so we will say they are parallel at that point. At the second point, on the other hand, the line and the graph are not moving in the same direction and so they aren't parallel at that point. Okay, now that we've gotten the definition of a tangent line out of the way let's move on to the tangent line problem. That's probably best done with an example.

Example 1: Find the tangent line to f(x) at x = 1. We know from algebra that to find the equation of a line we need either two points on the line or a single point on the line and the slope of the line. Since we know that we are after a tangent line we do have a point that is on the line. The tangent line and the graph of the function must touch at x = 1 so the point (1, f(1)) must be on the line. Now we reach the problem. This is all that we know about the tangent line. In order to find the tangent line we need either a second point or the slope of the tangent line. Since the only reason for needing a second point is to allow us to find the slope of the tangent line let's just concentrate on seeing if we can determine the slope of the tangent line. At this point in time all that we're going to be able to do is to get an estimate for the slope of the tangent line, but if we do it correctly we should be able to get an estimate that is in fact the actual slope of the tangent line. We'll do this by starting with the point that we're after, let's call it P = (1, f(1)). We will then pick another point that lies on the graph of the function, let's call that point Q = (x, f(x)). For the sake of argument let's choose an x a little to the right of 1, and so the second point will be Q = (x, f(x)) for that choice of x.
Below is a graph of the function, the tangent line and the secant line that connects P and Q. We can see from this graph that the secant and tangent lines are somewhat similar and so the slope of the secant line should be somewhat close to the actual slope of the tangent line. So, as an estimate of the slope of the tangent line we can use the slope of the secant line, let's call it m_PQ. Now, if we weren't too interested in accuracy we could say this is good enough and use this as an estimate of the slope of the tangent line. However, we would like an estimate that is at least somewhat close to the actual value. So, to get a better estimate we can take an x that is closer to x = 1 and redo the work above to get a new estimate on the slope. We could then take a third value of x even closer yet and get an even better estimate. In other words, as we take Q closer and closer to P the slope of the secant line connecting Q and P should be getting closer and closer to the slope of the tangent line. If you are viewing this on the web, the image below shows this process. As you can see (if you're reading this on the web) as we moved Q in closer and closer to P the secant lines do start to look more and more like the tangent line and so the approximate slopes (i.e. the slopes of the secant lines) are getting closer and closer to the exact slope. Also, do not worry about how I got the exact or approximate slopes. We'll be computing the approximate slopes shortly and we'll be able to compute the exact slope in a few sections. In this figure we only looked at Q's that were to the right of P, but we could have just as easily used Q's that were to the left of P and we would have received the same results. In fact, we should always take a look at Q's that are on both sides of P. In this case the same thing is happening on both sides of P. However, we will eventually see that doesn't have to happen. Therefore we should always take a look at what is happening on both sides of the point in question when doing this kind of process. So, let's see if we can come up with the approximate slopes I showed above, and hence an estimation of the slope of the tangent line. In order to simplify the process a little let's get a formula for the slope of the line between P and Q, m_PQ, that will work for any x that we choose to work with. We can get a formula by finding the slope between P and Q using the "general" form of Q = (x, f(x)): m_PQ = (f(x) - f(1))/(x - 1). Now, let's pick some values of x getting closer and closer to x = 1, plug them in, and get some slopes. If we take x's to the right of 1 and move them in very close to 1 the slope of the secant lines appears to be approaching -4. Likewise, if we take x's to the left of 1 and move them in very close to 1 the slope of the secant lines again appears to be approaching -4. Based on this evidence it seems that the slopes of the secant lines are approaching -4 as we move in towards x = 1, so we will estimate that the slope of the tangent line is also -4. As noted above, this is the correct value and we will be able to prove this eventually. Now, the equation of the line that goes through P = (1, f(1)) with slope -4 is given by the point-slope form, y = f(1) - 4(x - 1). Therefore, the equation of the tangent line to f(x) at x = 1 is y = f(1) - 4(x - 1).
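To make the estimation procedure concrete, here is a minimal numerical sketch in Python. The text does not reproduce the actual function used in Example 1, so the f below is a hypothetical stand-in, chosen only because its tangent slope at x = 1 works out to the -4 quoted above; everything else follows the steps just described.

```python
# Hypothetical stand-in function (the example's actual f is not reproduced in
# the text); its tangent slope at x = 1 happens to be -4, matching the value
# quoted above.
def f(x):
    return 15 - 2 * x**2

a = 1  # the x-value where we want the tangent line

def secant_slope(x):
    """Slope of the secant line through P = (a, f(a)) and Q = (x, f(x))."""
    return (f(x) - f(a)) / (x - a)

# Approach x = 1 from the right and from the left.
for x in [2, 1.5, 1.1, 1.01, 1.001, 0, 0.5, 0.9, 0.99, 0.999]:
    print(f"x = {x:>6}: m_PQ = {secant_slope(x):.6f}")
# The printed slopes cluster around -4 on both sides, which is the estimate
# of the tangent slope used in the discussion above.
```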
There are a couple of important points to note about our work above. First, we looked at points that were on both sides of x = 1. In this kind of process it is important to never assume that what is happening on one side of a point will also be happening on the other side as well. We should always look at what is happening on both sides of the point. In this example we could sketch a graph and from that guess that what is happening on one side will also be happening on the other, but we will usually not have the graphs in front of us or be able to easily get them. Next, notice that when we say we're going to move in close to the point in question we do mean that we're going to move in very close and we also used more than just a couple of points. We should never try to determine a trend based on a couple of points that aren't really all that close to the point in question. The next thing to notice is really a warning more than anything. The values of m_PQ in this example were fairly "nice" and it was pretty clear what value they were approaching after a couple of computations. In most cases this will not be the case. Most values will be far "messier" and you'll often need quite a few computations to be able to get a good estimate. Last, we were after something that was happening at x = 1 and we couldn't actually plug x = 1 into our formula for the slope. Despite this limitation we were able to determine some information about what was happening at x = 1 simply by looking at what was happening around x = 1. This is more important than you might at first realize and we will be discussing this point in detail in later sections.

Before moving on let's do a quick review of just what we did in the above example. We wanted the tangent line to f(x) at a point x = a. First, we know that the point P = (a, f(a)) will be on the tangent line. Next, we'll take a second point that is on the graph of the function, call it Q = (x, f(x)), and compute the slope of the line connecting P and Q as follows, m_PQ = (f(x) - f(a))/(x - a), which we will refer to as formula (1). We then take values of x that get closer and closer to x = a (making sure to look at x's on both sides of x = a) and use this list of values to estimate the slope of the tangent line, m. The tangent line will then be y = f(a) + m(x - a).

Rates of Change

The next problem that we need to look at is the rate of change problem. This will turn out to be one of the most important concepts that we will look at throughout this course. Here we are going to consider a function, f(x), that represents some quantity that varies as x varies. For instance, maybe f(x) represents the amount of water in a holding tank after x minutes. Or maybe f(x) is the distance traveled by a car after x hours. In both of these examples we used x to represent time. Of course x doesn't have to represent time, but it makes for examples that are easy to work with. What we want to do here is determine just how fast f(x) is changing at some point, say x = a. This is called the instantaneous rate of change or sometimes just rate of change of f(x) at x = a. As with the tangent line problem all that we're going to be able to do at this point is to estimate the rate of change. So let's continue with the examples above and think of f(x) as something that is changing in time and x being the time measurement. Again x doesn't have to represent time but it will make the explanation a little easier. While we can't compute the instantaneous rate of change at this point we can find the average rate of change. To compute the average rate of change of f(x) at x = a all we need to do is to choose another point, say x, and then the average rate of change will be A.R.C. = (f(x) - f(a))/(x - a). Then to estimate the instantaneous rate of change at x = a all we need to do is to choose values of x getting closer and closer to x = a (don't forget to choose them on both sides of x = a) and compute values of A.R.C. We can then estimate the instantaneous rate of change from that. Let's take a look at an example.
Example 2: Suppose that the amount of air in a balloon after t hours is given by a volume function V(t). Estimate the instantaneous rate of change of the volume after 5 hours. Okay. The first thing that we need to do is get a formula for the average rate of change of the volume. In this case this is A.R.C. = (V(t) - V(5))/(t - 5). To estimate the instantaneous rate of change of the volume at t = 5 we just need to pick values of t that are getting closer and closer to t = 5. Here is a table of values of t and the average rate of change for those values. So, from this table it looks like the average rate of change is approaching 15 and so we can estimate that the instantaneous rate of change is 15 at this point. So, just what does this tell us about the volume at this point? Let's put some units on the answer from above. This might help us to see what is happening to the volume at this point. Let's suppose that the units on the volume were in cm³. The units on the rate of change (both average and instantaneous) are then cm³/hr. We have estimated that at t = 5 the volume is changing at a rate of 15 cm³/hr. This means that at t = 5 the volume is changing in such a way that, if the rate were constant, then an hour later there would be 15 cm³ more air in the balloon than there was at t = 5. We do need to be careful here however. In reality there probably won't be 15 cm³ more air in the balloon after an hour. The rate at which the volume is changing is generally not constant and so we can't make any real determination as to what the volume will be in another hour. What we can say is that the volume is increasing, since the instantaneous rate of change is positive, and if we had rates of change for other values of t we could compare the numbers and see if the rate of change is faster or slower at the other points. For instance, at a second point in time the instantaneous rate of change is 0 cm³/hr and at a third point the instantaneous rate of change is -9 cm³/hr. I'll leave it to you to check these rates of change. In fact, that would be a good exercise to see if you can build a table of values that will support my claims on these rates of change. Anyway, back to the example. At the second point the rate of change is zero and so at that point in time the volume is not changing at all. That doesn't mean that it will not change in the future. It just means that exactly at that time the volume isn't changing. Likewise at the third point the volume is decreasing since the rate of change there is negative. We can also say that, regardless of the increasing/decreasing aspects of the rate of change, the volume of the balloon is changing faster at t = 5 than it is at the third point since 15 is larger than 9. We will be talking a lot more about rates of change when we get into the next chapter. Let's briefly look at the velocity problem. Many calculus books will treat this as its own problem. I, however, like to think of this as a special case of the rate of change problem. In the velocity problem we are given a position function of an object, f(t), that gives the position of an object at time t. Then to compute the instantaneous velocity of the object we just need to recall that the velocity is nothing more than the rate at which the position is changing. In other words, to estimate the instantaneous velocity we would first compute the average velocity, A.V. = (f(t) - f(a))/(t - a), and then take values of t closer and closer to t = a and use these values to estimate the instantaneous velocity.
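Here is a small numerical sketch of the estimation procedure from Example 2. The balloon's actual volume function is not reproduced in the text, so the V(t) below is a hypothetical stand-in, chosen only so that its exact rate of change at t = 5 is the 15 cm³/hr quoted above.

```python
# Hypothetical volume function (the example's actual V is not reproduced in
# the text); its exact rate of change at t = 5 is 15, matching the estimate
# quoted above. Units are cm^3 for volume and hours for time.
def V(t):
    return t**2 + 5 * t

a = 5  # time at which we want the instantaneous rate of change

def avg_rate(t):
    """Average rate of change of V between t and a."""
    return (V(t) - V(a)) / (t - a)

# Approach t = 5 from both sides.
for t in [6, 5.5, 5.1, 5.01, 5.001, 4, 4.5, 4.9, 4.99, 4.999]:
    print(f"t = {t:>6}: A.R.C. = {avg_rate(t):.6f}")
# The averages settle toward 15 cm^3/hr from both sides, which is the
# estimate of the instantaneous rate of change at t = 5.
```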
Change of Notation

There is one last thing that we need to do in this section before we move on. The main point of this section was to introduce us to a couple of key concepts and ideas that we will see throughout the first portion of this course as well as get us started down the path towards limits. Before we move into limits officially let's go back and do a little work that will relate both (or all three if you include velocity as a separate problem) problems to a more general concept. First, notice that whether we wanted the tangent line, instantaneous rate of change, or instantaneous velocity each of these came down to using exactly the same formula. This should suggest that all three of these problems are then really the same problem. In fact this is the case as we will see in the next chapter. We are really working the same problem in each of these cases; the only difference is the interpretation of the results. In preparation for the next section where we will discuss this in much more detail we need to do a quick change of notation. It's easier to do here since we've already invested a fair amount of time into these problems. In all of these problems we wanted to determine what was happening at x = a. To do this we chose another value of x and plugged it into formula (1). For what we were doing here that is probably the most intuitive way of doing it. However, when we start looking at these problems as a single problem (1) will not be the best formula to work with. What we'll do instead is to first determine how far from x = a we want to move and then define our new point based on that decision. So, if we want to move a distance of h from x = a the new point would be x = a + h. As we saw in our work above it is important to take values of x that are on both sides of x = a. This way of choosing a new value of x will do this for us. If h > 0 we will get values of x that are to the right of x = a and if h < 0 we will get values of x that are to the left of x = a. Now, with this new way of getting a second x, formula (1) becomes (2): m_PQ = (f(a + h) - f(a))/h. On the surface it might seem that (2) is going to be an overly complicated way of dealing with this stuff. However, as we will see it will often be easier to deal with (2) than it will be to deal with (1).
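A quick way to see that the change of notation changes nothing numerically is to compute the slope estimate both ways. The sketch below reuses the hypothetical f from the earlier sketch and evaluates form (1), which picks a second x directly, alongside form (2), which picks an offset h and uses x = a + h; the two columns agree exactly.

```python
# Same hypothetical stand-in function as in the earlier sketch.
def f(x):
    return 15 - 2 * x**2

a = 1

def slope_form_1(x):
    # Form (1): choose a second x directly.
    return (f(x) - f(a)) / (x - a)

def slope_form_2(h):
    # Form (2): choose an offset h and use the second point x = a + h.
    return (f(a + h) - f(a)) / h

for h in [1, 0.1, 0.01, -0.01, -0.1, -1]:
    print(f"h = {h:>6}: {slope_form_1(a + h):.6f}  {slope_form_2(h):.6f}")
# Both columns print the same slope estimates, so (2) is just (1) rewritten.
```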
http://tutorial.math.lamar.edu/Classes/CalcI/Tangents_Rates.aspx
Pythagorean theorem: a² + b² = c²

Let's look at an easy, visual proof of the Pythagorean theorem. Consider a right triangle with side lengths a < b < c (the hypotenuse) and angles A < B < C = 90 degrees. Angle A is opposite side a. Angle B is opposite side b. Let's show angle A with the color amber and angle B with blue (amber = A; blue = B). The amber angle A plus the blue angle B equals 90 degrees because the angles of a triangle sum to 180 degrees and C is 90 degrees. The Pythagorean theorem states that a² + b² = c². For each side of the triangle, the drawing to the left also shows a square. The area of the red square is a². The area of the green square is b². The area of the gray square is c². The Pythagorean theorem says that the gray square area is equal to the sum of the red square area and the green square area. Let's do something with the gray square to see if we can prove this. Let's clone a copy of the triangle and slide it to the left and upwards until it touches the side of the square. The side of the square is the same length (c) as the hypotenuse of the triangle because that's the size we made the gray square. What next? Can we tell anything about the gray angle between the triangle's side a and the side of the square: the angle adjacent to the triangle's blue angle B? All corners of any square are 90 degrees so that gray angle must be 90 - B. So it must equal the amber angle A since A + B = 90 degrees! Let's clone another copy of the triangle, rotate it 90 degrees counterclockwise, and slide it adjacent to the first cloned triangle and the side of the square. It fits perfectly: no gap! Similar to the previous step, let's clone a third copy of the triangle, rotate it 180 degrees counterclockwise, and slide it adjacent to the second triangle and the side of the square. It fits fine! Would a fourth clone of the triangle fit adjacent to the third cloned triangle, the side of the square, and the first cloned triangle? Yes, each corner of the square is exactly covered by one amber angle A from one triangle and one blue angle B from another triangle. Everything fits perfectly! The four triangles don't completely cover the gray square. A small square at the center is left over; what size is it? Looking at the triangles, we see that the sides of the small square have length b - a. Now, the area of the large square (c²) equals the area of four triangles (4 × ab/2 = 2ab) plus the area of the small square (b - a)². So c² = 2ab + (b² - 2ab + a²). The 2ab and -2ab cancel out, leaving c² = b² + a², or a² + b² = c². This is the Pythagorean theorem! We chose a < b and the four triangles didn't completely cover the gray square. What about the (isosceles) right triangle with a = b? We would find that the four triangles completely cover the large gray square and there is no small square left over in the middle. Each triangle's area would be one half a². The four triangles' area would equal the large gray square: 2a² = a² + b² = c² (for a = b).
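The dissection argument above is easy to check numerically. The short Python sketch below (written for this outline, not part of the original demo) verifies that for a right triangle with legs a and b the square on the hypotenuse has the same area as four copies of the triangle plus the small (b - a) square.

```python
import math

def dissection_checks_out(a, b):
    """True if c^2 equals four triangle areas plus the small central square."""
    c = math.hypot(a, b)              # hypotenuse of a right triangle with legs a, b
    big_square = c**2
    four_triangles = 4 * (a * b / 2)  # each right triangle has area a*b/2
    small_square = (b - a)**2         # leftover square of side b - a
    return math.isclose(big_square, four_triangles + small_square)

print(dissection_checks_out(3, 4))      # True
print(dissection_checks_out(5, 12))     # True
print(dissection_checks_out(2.5, 7.1))  # True, works for non-integer legs too
```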
Instead of placing the triangles inside the large gray square, we could have placed them outside it such that the large gray square plus four triangles exactly forms a new larger square with sides of length a + b. Can you do this? Can you use a little algebra to prove the Pythagorean theorem? Changing direction, let's look at a special triangle. Suppose we draw a triangle with side lengths 3, 4, and 5. Can we tell whether this is a right triangle or not? 3² + 4² = 9 + 16 = 25 = 5². This is a right triangle! Take a sheet of paper with a square corner. Measure off three units (say three inches or centimeters) horizontally from the corner and mark that point. Measure off four units vertically from the corner and mark that point. Measure the distance between the two points: is it five units? The 3-4-5 right triangle is easy to remember and can be useful for measuring whether a corner of a room is square (90 degrees). Do the sides 5-12-13 form a right triangle? Integers that form a right triangle are called Pythagorean triples. The Pythagorean theorem has been known for at least 2300 years. Pythagorean triples may have been known for 4000 years. Our proof used algebra but there are many different proofs of the Pythagorean theorem using only geometry. I hope that you found this outline of a proof to be simple and memorable. Further info: Wikipedia article: Pythagorean Theorem. Note: the Pythagorean theorem applies only for right triangles in a plane. It does not apply for right triangles on a non-Euclidean surface such as on a sphere. Take a ball and draw a large right "triangle" on it. Measure it and you will find the result is different than in a plane. For example, legs of 3 and 4 do not give 5 for the "hypotenuse" on the ball!
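For the 3-4-5 and 5-12-13 questions above, a few lines of Python (again just an illustrative sketch, not part of the original demo) can check whether three integers form a Pythagorean triple:

```python
def is_pythagorean_triple(a, b, c):
    """True if the three integers can be the sides of a right triangle."""
    a, b, c = sorted((a, b, c))   # treat the largest side as the hypotenuse
    return a * a + b * b == c * c

print(is_pythagorean_triple(3, 4, 5))     # True
print(is_pythagorean_triple(5, 12, 13))   # True, answering the question above
print(is_pythagorean_triple(4, 5, 6))     # False
```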
http://www.mongrav.org/math/PythagoreanTheorem.htm
Egyptian Estimates of the Size and Shape of the Earth

1. In considering ancient data about the size of the earth, it must be kept in mind that the mathematicians of those times had a problem we do not have today. Since people could not consult printed numerical tables, some of the basic data had to be expressed by round figures that could be easily memorized; the other data would be derived from those expressed in round figures.

In Egypt the scientific basis for the calculation of geodetic measures was the geographic foot or geographic cubit (1 cubit = 1½ feet):

Geographic foot = 307.7956704 mm

In practice the geographic foot was the edge of a cube with a volume of 29.160 liters (artaba). In principle the geographic foot was obtained by dividing a degree into 600 stadia of 600 feet each (360,000 feet to the degree). The degree taken as reference was the average degree of latitude in Egypt: it was assumed that the arc of meridian that goes from latitude 31° 30' N to latitude 24° 00' N, that is, from the northern limit of Egypt to the First Cataract, had a length of 2,700,000 feet or 1,800,000 cubits. This was considered the length of Egypt: 7½ degrees, or 1/12 of the distance from the equator to the pole counting in degrees. It was assumed that Egypt has a length of 831,048.31 meters. It could also be said that the degree at the middle latitude of Egypt, 27° 45' N, has a length of 110,806.64 meters. The importance of latitude 27° 45' N for Egypt was underscored when King Akhenaten chose this latitude as the setting for his new capital, Akhetaten.

In the calculation by geographic units the equator was assumed to be 217,000 stadia. The figure of 217,000 stadia for the equator is a striking round figure, since, if the earth were a sphere, a great circle would be 216,000 stadia, because a degree by definition is 600 stadia (360 × 600 = 216,000). The Egyptians calculated that the polar flattening is 1:298.6. Accordingly the polar radius was:

In the Third Dynasty the Egyptians adopted the septenary royal cubit of 525 mm as their standard of lineal measurement. The royal cubit was considered a symbol of the very structure of Egypt itself. The royal cubit was obtained by taking the basic foot of 300 mm, which is the starting point of all ancient linear measures, and deriving from it a cubit of 450 mm; then, to this cubit, divided as usual into 6 palms or 24 fingers, one more palm was added, giving a cubit of 7 palms or 28 fingers (525 mm). The ancient and medieval custom of referring to increased units by the term "royal" is possibly of Sumerian origin, since in Sumerian lugal means "great" and "royal."

The royal cubit of 525 mm is the edge of a cube containing 5 artabas (the artaba is the cube the edge of which is a geographic foot), or 145.80 liters. The geographic cubit relates to the royal cubit as 1 : 6/7 × ∛25. According to calculations by the royal cubit of 525 mm, the equator was assumed to be:

In this case the equator was reckoned as almost 4 meters longer than according to the calculation by geographic units; but the values of the radii are hardly affected, since they came to be:

Equatorial radius 12,148,823.32 cubits = 6,378,134.34 meters

The calculation by royal cubits of 525 mm had the advantage that the basic dimensions of the earth could be expressed easily in terms of atur (an atur is 15,000 royal cubits). The equatorial radius, being 809.9218215 atur, could be taken as 810 atur = 6,378,750 meters. The polar radius, being 807.209424 atur, could be taken as 807.2 atur = 6,356,700 meters.
The average radius, being 809.0176889 atur = 6,371,014.30 meters, could be taken as 809 atur = 6,370,875 meters. It was easy to remember that the equatorial radius is 810 atur and that the average radius is 809 atur. These round values are only a few hundred meters off the absolutely exact figures. The calculation of the average radius as 809 atur could be made practically perfect by taking the value as 809 1/60 atur = 809.0166667 atur = 12,135,250 cubits = 6,371,006.250 meters.

Given an ellipsoid of revolution, there is in principle a difference between the radius of a sphere of the same surface and the radius of a sphere of the same volume, but the difference is trivial. The two radii are practically identical with the average radius of the ellipsoid. As far as I have been able to establish, the Egyptians calculated the surface and volume of the earth by the average radius of the ellipsoid.

It was found that for the calculation of the length of the degrees of latitude, particularly in Egypt, it was more convenient to compute by a reduced variety of the royal cubit, a royal cubit of 524.1482788 mm, which is the edge of a cube containing 144 liters. I have published a table that shows that on the basis of the lesser royal cubit there had been constructed a mnemonic formula that gives the length of all degrees of latitude from the equator to the pole. The Great Pyramid of Gizah, which incorporates the values of the degree of latitude, was planned by the lesser royal cubit, but the Second and Third Pyramids, which incorporate the total dimensions of the entire earth, were planned by the royal cubit of 525 mm.

If one reckoned the size of the earth by the lesser royal cubit, the equator could be taken as:

The khet is the Egyptian stadium: there was a khet of 600 geographic feet (184.67740 meters) and a khet of 350 royal cubits (183.455190 meters). Good values were obtained by employing round figures expressed in atur:

Equatorial radius 811¼ atur = 6,378,229 meters

These values are incorporated in the architecture of the Complex of King Zoser (Third Dynasty), the first large stone construction in the history of Egypt. The average radius could be expressed by the round figure of 810 1/3 atur = 12,155,000 cubits, which is an excellent approximation to the exact figure. This was important because the surface and the volume of the earth, being huge quantities, were calculated in square and cubic atur, starting from the length of the average radius.

2. Geodesic Surveys. Since the shape of the earth is irregular, today we try to express its dimensions by constructing an ellipsoid, called the ellipsoid of reference, which fits as closely as possible the actual contour of the earth, called the geoid in scientific language. It is a striking fact that the Egyptians resorted to the same procedure.

In the second half of the eighteenth century A.D. a number of French scholars came to the conclusion that ancient linear units of measure were related to the length of the arc of meridian from the equator to the pole. They concluded that all Greek statements about the size of the earth provide the same datum, except that different stadia were employed. Several ancient authors used different figures and different stadia to say what Aristotle says in De Coelo (298B), namely, that the circumference is 400,000 stadia.
The scholars of the French Enlightenment were hampered by the lack of modern exact data about the size of the earth. Today I can state that Aristotle counted by a stadium of 300 barley feet (the barley foot is 9/8 of the Roman foot), a stadium of 99.88159 meters; he meant that a great circle is 39,952,636 meters. What Aristotle said is the same as what was said by the Romans when they counted a degree (of latitude) as 75 Roman miles (a mile was 5000 Roman feet of 295.9454489 mm). The Roman foot is the edge of a quadrantal (80 librae in volume), which is a cube containing 8/9 of an artaba (the cube the edge of which is a geographic foot).

Some twenty years ago, when I arrived at the data that I have just listed, I considered them breathtaking. It was only later that I realized that the ancients were aware of the fact that the degrees of latitude become longer as one approaches the poles. I discovered that the units used in Greece and Rome (and also in Mesopotamia, except for the very early period) were based on the length of the degree of latitude at latitude 37° 42' N, the latitude of Mycenae. Herodotos refers to it as the latitude of the Heraeum of Samos in comparing Greek units with the Egyptian ones.

In 1971 I believed that I was uttering a daring statement when I published that the Egyptians had reached the level of precision achieved by the great geodetic surveys conducted at the beginning of this century. It was only later that I was forced to realize that the Egyptians had reached the level of precision which we have reached in the last decade thanks to the new techniques of space exploration.

At the beginning of this century a new level of precision was achieved in the field of geodesy, because for the first time surveys were conducted by marking enormous arcs that spanned an entire continent. I am referring to the surveys directed by the German scholar F. R. Helmert (completed in 1907) and by the American scholar J. F. Hayford (completed in 1909). Basically Helmert and Hayford used the same method that was used by the French surveys of the eighteenth century: marking by optical means series of geodetic triangles over an arc of latitude or longitude. However, the later scholars could also use heavenly bodies, of which the closest is the moon, in order to calculate distances on the surface of the earth; at that time, astrogeodesy was the only method available to measure across large bodies of water. Hayford submitted the following figures:

Equatorial radius 6,378,388 meters

On the basis of the information available to us today, we can say that Helmert came rather close to the correct figure. But for half a century scholars usually gave more weight to Hayford's figures. According to the vote of an international meeting of 1924, it was generally agreed to adopt the Hayford ellipsoid as the International Ellipsoid. As late as 1967, Weikko A. Heiskanen, who was then the greatest authority on geodesy, declared that the Hayford ellipsoid "can be considered a best-fitting ellipsoid for the whole earth" (Physical Geodesy, p. 215).

In my publication of 1971 I compared the Egyptian data with Hayford's figures, but I pointed out that the Egyptian data happened to be closer to Helmert's figures. At that time I could not know that the explanation of this fact was that the Egyptian data were even better than the data then generally accepted by modern scholars. For about thirty years the methods of geodesy remained those of the surveys of Helmert and Hayford.
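Before following the story of the modern surveys further, the equivalence claimed earlier in this section between Aristotle's 400,000 stadia and the Roman reckoning of 75 miles to the degree is easy to check by multiplication. The short Python sketch below is my own illustration, not part of the essay; it simply multiplies out the unit values quoted above.

ROMAN_FOOT_M = 0.2959454489              # Roman foot of 295.9454489 mm, in meters

barley_foot = ROMAN_FOOT_M * 9 / 8.0     # barley foot = 9/8 of the Roman foot
stadium = 300 * barley_foot              # Aristotle's stadium of 300 barley feet
great_circle_aristotle = 400000 * stadium

roman_mile = 5000 * ROMAN_FOOT_M         # a mile of 5000 Roman feet
great_circle_roman = 75 * roman_mile * 360   # 75 miles to the degree, 360 degrees

print(round(stadium, 5))                 # 99.88159 meters
print(round(great_circle_aristotle))     # 39952636 meters
print(round(great_circle_roman))         # 39952636 meters, the same figure

Both reckonings come to 135,000,000 Roman feet, which is why the essay treats them as two expressions of a single datum.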
In 1938 the Soviet Union completed a survey which had the purpose of establishing a geodetic grid for the immense extent of the country. It was a major effort (the scholar who directed it, Krassowsky, received a Stalin prize in 1952), which arrived at the following basic data:

Equatorial radius 6,378,245 meters

The understanding of scholars was that, if one put together the Russian data with the data of Hayford and Helmert, one would have an indication of the degree of precision that could be reached.

The methods of geodesy began to change during World War II, when electronic surveying was introduced. One advantage of electronic surveying is that it permits measurement of distances over large bodies of water. Today we no longer use optical means in surveying except for minor local work. From World War II on, huge amounts of talent and technical means were invested in the improvement of surveying techniques, because of the military interest. It is obvious that the ability to pinpoint mathematically the position of a target is fundamental in an age in which there are weapons such as rockets.

The U.S. Army Map Service, in an effort completed in 1956, tried to improve on all the surveys conducted up to that time by marking on the surface of the earth segments twice as long (about 100 miles in length) as the longest marked up to that time. The following map indicates the arcs used in the AMS survey. It is remarkable that the U.S. Map Service chose to follow the course of the Nile and to extend the line indicated by the Nile north across eastern Europe.

The Egyptians had surveyed the entire course of the Nile from the equator to the north. There is ancient information about the latitude of the junctions of the Nile with its several tributaries. To the north of Egypt the Egyptians were able to count across the Mediterranean and the Black Sea, marking reference points on the southern and northern coasts of Turkey and in Crimea. In southern Russia they marked a huge base line along latitude 45° 12' N, and from this base line they surveyed the great rivers of Russia as if they were an extension of the Nile. The AMS found it expedient to extend the course of the Nile to the north and then to cross it with an arc of parallel cutting across Europe. The Egyptians extended the line of the Nile to the north until it met latitude 45° 12' N in Crimea. The line of this latitude met with an arc of parallel which went along latitude 45° 12' N from the mouth of the Danube to the junction of the Po with the Ticino and was the starting point of the prehistoric geography of Europe.

3. Satellites. The data of geodesy were completely revolutionized when artificial satellites began to be launched in 1960 (Echo 1, 12 August 1960). The satellites made it possible to collect, in a short time, thousands of data points all over the surface of the earth, including the surface of the oceans. Essentially the accuracy of these data depends on our ability to locate the position of the satellite; for this we can rely on new tools such as the laser beam. Satellites can be used to carry gravimetric and telemetric instruments, but the main value of satellites is that they change the angle of their course according to any variation in the gravitational pull on the surface of the earth. The course of a satellite responds to any undulation of the surface of the earth. In principle the tracking of the course of a satellite is a standard problem of astrodynamics: the course of a satellite is similar to that of a planet or a moon.
A satellite follows an elliptic orbit in which the earth occupies one of the foci. But what concerns geodesists is the perturbations in the orbit which are determined by the irregularities in the gravity field, which in turn are related to irregularities in the shape of the earth. The use of satellites for geodetic survey has required not only the development of new technical devices, but also great advances in mathematical methods.

In 1964, when the use of artificial satellites was at its beginning, the International Astronomical Union, meeting in Hamburg, adopted as the proper ellipsoid of reference the following one:

Equatorial radius 6,378,160 meters

Today there is universal agreement that 1:298.25 is the best figure; the only question under study is whether this figure can be improved by the addition of another decimal place. In 1975 NASA used the following data:

These figures can be considered substantially final. A flattening of 1:298.255 implies a polar radius of 6,356,783 meters. If the flattening had been calculated as 1:298.25, as it is currently, the polar radius would have been 3.5 meters less.

4. Irregularities in the Shape of the Earth. Today research is directed at establishing the actual surface of the geoid by comparing it with the ideal line provided by the ellipsoid of reference. The aim is to achieve an accuracy within the range of one meter. The latest efforts are directed at improving the precision of maps on which the actual sea level in each area of the globe is indicated as being above or below the theoretical line of the ellipsoid of reference. The greatest discrepancies have been found to be a trough (about -110 meters) in the Indian Ocean, south of the southern tip of India, and a bump (about +85 meters) at the middle of the island of New Guinea. There are not many areas on dry land where the actual surface of the geoid comes close to the theoretical surface of the ellipsoid, but such coincidence does occur along an arc of meridian that begins at the equator, follows the course of the Nile, and continues in southern Russia up to about latitude 55°.

When we compare the latest modern figures with the Egyptian ones, we must keep in mind that modern figures aim at establishing an ellipsoid of reference which fits as closely as possible the average contour of the entire globe, whereas the Egyptians were concerned only with the northern hemisphere. The Egyptian pyramids were intended to be models in scale of the northern hemisphere. In terms of our way of thinking, we can grasp the shape of the pyramid better if we try to think in terms of an octahedron of which the lower half is buried underground. However, the Egyptians never indicated that a pyramid extends underground. The base of the pyramid represents the equator, and nothing is considered below what was called the Equatorial Nile. In Mesopotamia, however, cuneiform texts clearly indicate that the ziggurat Entemenaki of Babylon (the Biblical Tower of Babel), which also was a model of the northern hemisphere, was to be understood as extending as much underground as it extended above ground.

When an ellipsoid of reference is set, compromises have to be made. In relation to the current ellipsoid of reference, the values of which I have mentioned above, the actual north pole is 19 or 20 meters higher, and the actual south pole is about 27 meters lower, than the line of the ellipsoid of reference.
Therefore, when modern calculations arrive at figures like 6,356,757 meters for the polar radius, whereas the ancient Egyptians had settled for a figure equal to 6,356,774 meters, it must be concluded that the Egyptians had been most precise, since their figure for the polar radius applied only to the northern hemisphere.

The latest modern calculations assume that the equatorial radius in the ellipsoid of reference should be about 6,378,142 meters. But it is recognized that the actual circle of the equator has an average radius which is about 20 meters less. In calculating the equator in the ellipsoid of reference, a figure must be chosen that makes allowance for a substantial bulge in the contour of the geoid in the area south of the line of the equator. There is also a lesser bulging in the northern hemisphere around latitude 60°. The Egyptians set the equatorial radius at about 10 meters less than the modern figures, because they did not take into account the dimensions of the southern hemisphere.

In conclusion, the Egyptian data about the size of the earth, on the basis of which they set their system of measures, were as precise as those that have been obtained by the latest technical and mathematical advances in space research.

6. Mexican Data. Another astonishing result is obtained when one compares the Egyptian figures with those derived by Hugh Harleston from his study of the Mexican pyramids of Teotihuacan. He has concluded that these pyramids were planned by a unit which he calls the hunab and estimates as being 1,059.46309 mm. On the basis of my interpretation of the architecture of Teotihuacan, I would say that the hunab is a double unit and that we are dealing with a unit of 529.731547 mm, similar to the Egyptian royal cubit. I have some legitimate claim to discuss the architectural structure of the Mexican pyramids, since Harleston based the first step of his interpretation on my interpretation of the geometry of the ziggurat of Babylon. But, in any case, Harleston says that the hunab was intended to be such that 6,000,000 hunab are equal to the polar radius, giving a polar radius of 6,356,778.6 meters.

The Mexican data, obtained completely independently by Harleston, coincide perfectly with what I have derived from my latest reexamination of the Egyptian data. The Egyptians too calculated the polar radius as close to 12,000,000 royal cubits. They counted that the polar radius was 12,108,141 royal cubits of 525 mm. I shall have occasion to demonstrate that the initial plan of the Third Pyramid of Gizah was a representation in scale of the northern hemisphere based on the assumption that the polar radius was 12,000,000 cubits. In a second step the surface of the base of this pyramid was increased by a few minutes in order to arrive at a pyramid related to an average radius of 809 1/60 atur = 12,135,250 cubits according to a scale of 1:120,000. The initial plan of the Second Pyramid was based on a scale of 1:60,000; but this figure was slightly modified in the final plan. Similarly, I have published the information that the builders of the Great Pyramid began with a scale of 1:43,200 (1:360 × 120), since the perimeter of the base, which represents the equator, has the length of half a minute of degree. But in the final plan, the scale 1:43,200 was slightly modified, because of the specific length of the degree at the latitude of the equator.
What I want to emphasize at this point is that the system of linear units of Teotihuacan in Mexico was based on a polar radius divided into 6,000,000 or 12,000,000 units, and that a similar reckoning had been incorporated into the Second and Third Pyramids of Gizah.
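The atur and cubit equivalences quoted throughout this essay can be multiplied out mechanically. The Python sketch below is my own illustration, not part of the essay; it assumes only the stated values of the 525 mm royal cubit, the 15,000-cubit atur and Harleston's hunab, and reproduces the radii given above.

ROYAL_CUBIT_M = 0.525                  # royal cubit of 525 mm
ATUR_M = 15000 * ROYAL_CUBIT_M         # 1 atur = 15,000 royal cubits = 7,875 m

print(810 * ATUR_M)                    # 6378750.0   equatorial radius, 810 atur
print(807.2 * ATUR_M)                  # 6356700.0   polar radius, 807.2 atur
print(809 * ATUR_M)                    # 6370875.0   average radius, 809 atur
print((809 + 1 / 60.0) * ATUR_M)       # 6371006.25  average radius, 809 1/60 atur

print(12108141 * ROYAL_CUBIT_M)        # about 6356774, the polar radius in royal cubits
print(6000000 * 1.05946309)            # about 6356778.5, Harleston's hunab reckoning

All of these land within a meter or two of the figures quoted in the text.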
http://www.metrum.org/key/pyramids/estimates.htm
In this chapter we'll cover the common features of the various kinds of collections which keep items in sequence. This will set the stage for the following chapters. In this section we'll define what we mean by sequence in Sequence means "In Order". We'll talk about designing programs that use sequences in Working With a Sequence. We'll compare the four kinds of sequences in Subspecies of Sequences. We'll look at the common features of sequences in Features of a Sequence.

A sequence is a collection of individual items. A sequence keeps the items in a specific order, which means we can identify each item by its numerical position within the collection. Some sequences (like the tuple) have a fixed number of elements, with static positions in the sequence. Other sequences (like the list) have a variable number of elements, and possibly dynamic positions in the sequence. Python has other collections which are not ordered. We'll get to those in More Data Collections.

Here's a depiction of a sequence of four items. Each item has a position that identifies the item in the sequence.

Sequences are used internally by Python. A number of statements and functions we have covered have sequence-related features. We'll revisit a number of functions and statements to add the power of sequences to them. In particular, the for statement is something we glossed over in The for Statement. The idea that a for statement processes elements in a particular order, and a sequence stores items in order, is an important connection. As we learn more about these data structures, we'll see that the processing and the data are almost inseparable.

It turns out that the range() function that we introduced generates a sequence object. You can see this object when you do the following:

>>> range(6)
[0, 1, 2, 3, 4, 5]
>>> range(1,7)
[1, 2, 3, 4, 5, 6]
>>> range(2,36,3)
[2, 5, 8, 11, 14, 17, 20, 23, 26, 29, 32, 35]

The typical outline for programs that work with sequences is the following. This is pretty abstract; we'll follow this outline with a more concrete example.

Let's say that we have a betting strategy for Roulette that we would like to simulate and collect statistics on the strategy's performance. The verb collect is a hint that we will have a collection of samples, and a sequence is an appropriate type of collection. Let's work backwards from our goal and see how we'll use collections to do this simulation. Once we have all of the necessary steps that lead to our goal, we can just reverse the order of the steps and write our program.

Print Results. We are done when we have printed the results from our simulation and analysis. In this case, the results are some simple descriptive statistics: the mean ("average") and the number of samples. To print the values, we must have computed them.

Compute Mean. The mean is the sum of the samples divided by the count of the samples. The sum is a reduction from the collection of outcomes, as is the count. To compute the sum and the count, we must have a collection of individual results from playing Roulette.

Create Sample Collection. To create the samples, we have to simulate our betting strategy enough times to have meaningful statistics. We'll use an iteration to create a collection of 100 individual outcomes of playing our strategy. Each outcome is the result of one session of playing Roulette. In order to collect 100 outcomes, we'll need to create each individual outcome. Each outcome is based on placing and resolving bets.

Resolve Bets.
We apply the rules of Roulette to determine if the bet was a winner (and how much it won) or if the bet was a loser. Before we can resolve a bet, we have to spin the wheel. And before we spin the wheel, we have to place a bet.

Spin Wheel. We generate a random result. We increase the number of spins we've played. In order for the spin to have any meaning, of course, we'll need to have some bets placed.

Place Bets. We use our betting strategy to determine what bet we will make and how much we will bet. For example, in the Martingale system, we bet on just one color. We double our bet when we lose and reset our bet to one unit when we win. Note that there are table limits, also, that will limit the largest bet we can place.

When we reverse these steps, we have a very typical program that creates a sequence of samples and analyzes that sequence of samples. Other typical forms for programs may include reading a sequence of data elements from files, something we'll turn to in later chapters. Some programs may be part of a web application, and process sequences that come from user input on a web form.

There are four subspecies of sequence: the str, the Unicode string, the tuple and the list. When we create a tuple, str or Unicode string, we've created an immutable, or static, object. We can examine the object, looking at specific characters or values. We can't change the object. This means that we can't put additional data on the end of a str. What we can do, however, is create a new str that is the concatenation of the two original strings. When we create a list, on the other hand, we've created a mutable object. A list can have additional objects appended to it. Objects can be removed from a list, also. The order of the objects can be changed.

One other note on str. While str objects are sequences of characters, there is no separate character data type. A character is treated as a str of length one. This relieves programmers from the C or Java burden of remembering which quotes to use for single characters as distinct from multi-character strings. It also eliminates any problems when dealing with Unicode multi-byte characters separate from US-ASCII single-byte characters.

We call these subspecies because, to an extent, they are interchangeable. It may seem like a sequence of individual characters has little in common with a sequence of complex numbers. However, these two sequence objects do have some common kinds of features. In the next section, we'll look at all of the features that are common among these sequence subspecies.

A great deal of Python's internals are sequence-based. Here are just a few examples:

All the varieties of sequences (strings, tuples and lists) have some common characteristics. We'll look at a bunch of Python language aspects of these pieces of data, including:

Inside a Sequence. Our programs talk about sequences in two senses. Sometimes we talk about the sequence as a whole. Other times we talk about individual elements or subsequences. Naming an element or a subsequence is done with a new operator that we haven't seen before. We'll introduce it now, and return to it when we talk about each different kind of sequence. The operator is called a subscription. It puts a subscript after the sequence to identify which specific item or items from the sequence will be used. There are two forms for the operator:

The single item format is sequence [ index ]. This identifies one item based on the position number.

The slice format is sequence [ start : end ]. This identifies a subsequence of items with positions from start to end - 1.
This creates a new sequence which is a slice of the original sequence; there will be end - start items in the resulting sequence.

Items are identified by their position numbers. The position numbers start with zero at the beginning of the sequence.

Numbering From Zero. Newbies are often tripped up because items in a sequence are numbered from zero. This leads to a small disconnect between our cardinal numbers and ordinal names. The ordinal names are words like "first", "second" and "third". The cardinal numbers used for these positions are 0, 1 and 2. We have two choices to try and reconcile these two identifiers: In this book, we'll use conventional ordinal names starting with "first", and emphasize that this is position 0 in the sequence.

Positions are also numbered from the end of the sequence as well as the beginning. Position -1 is the last item of the sequence, -2 is the next-to-last item.

Numbering In Reverse. Experienced programmers are often tripped up because Python identifies items in a sequence from the right using negative numbers, as well as from the left using positive numbers. This means that each item in a sequence actually has two numeric indexes.

Here's a depiction of a sequence of four items. Each item has a position that identifies the item in the sequence. We'll also show the reverse position numbers.

Why do we have two different ways of identifying each position in the sequence? If you want, you can think of it as a handy short-hand. The last item in any sequence S can be identified by the formula S[ len(S)-1 ]. For example, if we have a sequence with 4 elements, the last item is in position 3. Rather than write S[ len(S)-1 ], Python lets us simplify this to S[-1].

Factory Functions. There are also built-in factory (or "conversion") functions for the sequence objects. These are ways to create sequences from other kinds of data.

Accessor Functions. There are several built-in accessor functions which return information about a sequence. These functions apply to all varieties of lists, strings and tuples.

enumerate(): Enumerate the elements of a sequence, set or mapping. This yields a sequence of tuples based on the original iterable. Each of the tuples has two elements: a sequence number and the item from the original iterable. This kind of iterator is generally used with a for statement.

sorted(): This iterates through an iterable (sequence, set or mapping) in ascending or descending sorted order. Unlike a list's sort() method function, this does not update the list, but leaves it alone. This kind of iterator is generally used with a for statement.

reversed(): This iterates through an iterable (sequence, set or mapping) in reverse order. This kind of iterator is generally used with a for statement. Here's an example:

>>> the_tuple = ( 9, 7, 3, 12 )
>>> for v in reversed( the_tuple ):
...     print v
...
12
3
7
9

zip(): This creates a new sequence of tuples. Each tuple in the new sequence has values taken from the input sequences.

>>> color = ( "red", "green", "blue" )
>>> level = ( 20, 30, 40 )
>>> zip( color, level )
[('red', 20), ('green', 30), ('blue', 40)]

The following functions don't apply quite so widely. For example, applying any() or all() to a string is silly and always returns True. Similarly, applying sum() to a sequence that isn't all numbers is silly and raises a TypeError.

Tuples and Lists. What is the value in having both immutable sequences (tuples) and mutable sequences (lists)? What are the circumstances under which you would want to change a string?
What are the problems associated with strings that grow in length? How can storage for variable length strings be managed? What is the value in making a distinction between Unicode strings and ASCII strings? Does it improve performance to restrict a string to single-byte characters? Should all strings simply be Unicode strings to make programs simpler? How should file reading and writing be handled?

Statements and Data Structures. In order to introduce the for statement in The for Statement, we had to dance around the sequence issue. Would it make more sense to introduce the various types of collections first, and then describe statements that process the collections later? Something has to be covered first, and is therefore more fundamental. Is the processing statement more fundamental to programming, or is the data structure?

Try to avoid extraneous spaces in lists and tuples. Python programs should be relatively compact. Prose writing typically keeps ()'s close to their contents, and puts spaces after commas, never before them. This should hold true for Python, also. The preferred formatting for lists and tuples, then, is (1,2,3) or (1, 2, 3). Spaces are not put just inside the enclosing [ ] or ( ), and spaces are not put before a comma.
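To pull the chapter's pieces together, here is a small sketch of my own (not from the book) that builds a list of session outcomes, like the Roulette example, and then exercises the subscription, slice and negative-index operators along with a couple of the accessor functions described above.

outcomes = [5, -3, 8, 0, -1, 12, -4, 7]       # pretend these are eight session outcomes

first = outcomes[0]                           # subscription: position 0 is the first item
last = outcomes[-1]                           # negative index: -1 is the last item
middle = outcomes[2:5]                        # slice: positions 2, 3 and 4 (5 - 2 = 3 items)

mean = sum(outcomes) / float(len(outcomes))   # float() avoids integer truncation in Python 2

print(first)              # 5
print(last)               # 7
print(middle)             # [8, 0, -1]
print(mean)               # 3.0
print(sorted(outcomes))   # [-4, -3, -1, 0, 5, 7, 8, 12]; outcomes itself is unchanged

A real simulation would append one outcome per session inside a loop, exactly as the Create Sample Collection step describes, and then reduce the list with sum() and len() to get the mean.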
http://www.itmaybeahack.com/homepage/books/nonprog/html/p08_sequence/p08_c01_about.html
Lab 4: Simulating Bernoulli trials

Recall that a Bernoulli trial is a random experiment with two possible outcomes, which can be called success and failure. Also recall that if we perform n independent Bernoulli trials, each of which is successful with probability p, and denote the number of successes by X, then we say that X has a binomial distribution with parameters n and p. Then, for k = 0, 1, ..., n,

P(X = k) = C(n,k) p^k (1-p)^(n-k),

where C(n,k) = n!/[k!(n-k)!] is the number of ways to choose k objects out of n. Also recall that E[X] = np and Var(X) = np(1-p). The standard deviation of X is the square root of the variance.

In this lab, you will observe some of the properties of Bernoulli trials and the binomial distribution by simulation. You will also get a glimpse of two of the most important results in probability theory, the Law of Large Numbers and the Central Limit Theorem. You will not be analyzing data for this lab, so just open up a blank MINITAB worksheet to get started.

Simulating coin tosses

To start with a simple case, let's suppose we want to simulate the procedure of tossing a coin 5 times. Each coin toss is a Bernoulli trial with success probability 1/2, so we can simulate this using MINITAB by going to Calc --> Random Data --> Bernoulli. You will generate a row of data for each coin toss, so put 5 in the top box. To store the values in column C1, type "C1" in the large box in the middle, and then put .5 in the box labeled "Event probability" (or "Probability of Success" if you are using MINITAB Version 14). You should get 5 numbers, all either zero or one, in the first column. Think of a zero as a tail and a one as a head.

Now we want to conduct this experiment 1000 times. Rather than going through the procedure above 1000 times, we can simulate from the binomial distribution. The number of heads in 5 tosses of a coin has the binomial distribution with n = 5 and p = 1/2. Go to Calc --> Random Data --> Binomial. This time, you want 1000 rows of data. Choose a column in which to store the 1000 values, and enter 5 for the number of trials and .5 for the event probability. You should now have a column of 1000 numbers, all between zero and five. Each of these numbers is a binomial random variable, which you can think of as the number of heads in five tosses of a coin.

Now you can look at your results by drawing a histogram. To answer some of the questions below, you may find it useful, before clicking on "OK" to draw the histogram, to click on the "Labels" button, then on the "Data Labels" tab. If you highlight the bubble "Use Y-value labels", then MINITAB will display at the top of the bars of the histogram how many times each number arose. You can also get MINITAB to display these results by going to Stat --> Tables --> Tally Individual Variables and selecting the column in which you stored the 1000 numbers.

- Repeat this procedure three more times (if you wish, you can do the other three repetitions all at once by typing "C2 C3 C4" into the box labeled "Store in column(s)"). How many heads did you get in each of the four experiments?

Next, suppose 200 students each take a 10-question multiple choice test in which there are five choices for each question. Assume that the students all choose their answers by random guessing. Simulate this process by generating an appropriate binomial random variable for each student.

- What is the probability of getting zero heads in five tosses? What is the probability of getting exactly one head in five tosses?
(Hint: these are probability questions, not questions about your particular simulation. You can either compute the probabilities by hand, or you can compute binomial probabilities P(X = k) using MINITAB by typing the value or values of k into some column, going to Calc --> Probability Distributions --> Binomial, typing the values for n and p into the appropriate boxes, and listing as the "Input Column" the column in which you typed the values of k. Also make sure to select the bubble "Probability". If the bubble "Cumulative probability" is selected instead, then MINITAB will give you P(X ≤ k).)

- In your simulation, how many times out of 1000 were there zero heads in the five tosses? How many times was there one head? Include a histogram of your simulation results in your write-up along with your answers to these questions. As always, remember to make sure the axes are appropriately labeled.

- Compare your answers to the previous two questions. In the 1000 simulations, did you get zero heads about as many times as you expected (remember that the number of times you expect to get zero heads is the number of simulations times the probability of getting zero heads in one simulation)? What about one head?

- Present a table of your simulation results, showing how many students got each number of questions correct. Compare these numbers to the number of students you would expect to get each number of questions correct (again, the number of students you would expect to get, say, 2 questions correct is the total number of students times the probability that a given student gets 2 questions correct). How well do your simulation results compare to expectations?

- What is the maximum number of questions that any of the students got correct? What is the probability that one particular student would get at least this number of questions correct? (Note: you may find it useful to use Calc --> Probability Distributions --> Binomial with the "Cumulative probability" bubble selected.)

Variability in the number of heads

In this section, you will simulate coin tossing and investigate how the variability in the number of heads depends on the number of tosses. (Note: if you are using the student version of MINITAB, which allows only 10,000 cells to be filled, it is possible that during this exercise you will run out of space. If necessary, you can delete a column by highlighting it and then going to Edit --> Delete Cells, or you can start a new worksheet by going to File --> New.)

- First, simulate tossing a coin 15 times and counting the number of heads. Do 1000 repetitions of this procedure (so you will generate 1000 numbers, each a binomial random variable with n = 15 and p = 1/2). Present a histogram of the results.

- Find the mean and standard deviation of the 1000 numbers that you got. (Remember you can get this using Stat --> Basic Statistics --> Display Descriptive Statistics.) Are these numbers close to the expected value and standard deviation of a binomial random variable with n = 15 and p = 1/2?

- Now simulate tossing a coin 150 times. As before, do 1000 repetitions of this procedure, and present the results in a histogram. Based on the histogram, would it be unusual for the number of heads to be about 5 more or 5 fewer than expected? Would it be unusual for the number of heads to be 20 more or 20 fewer than expected?

- Next simulate tossing coins 1500 and 15,000 times. As before, do 1000 repetitions of each procedure, and make two histograms.
When a coin is tossed 15,000 times, is it unusual for the number of heads to be about 5 more or 5 fewer than expected? Is it unusual for the number of heads to be 20 more or 20 fewer than expected?

- Based on your observations above, when the number of tosses increases, does the difference between the actual number of heads and the expected number of heads tend to get larger or smaller?

- Now consider not the number of heads but the fraction of heads. The fraction of heads is the number of heads divided by the total number of tosses, so if 60 out of 100 tosses are heads, the fraction of heads is 60/100 = .60. (You can construct this variable by going to Calc --> Calculator, then in the box labeled "Expression" select the column containing the number of heads, then press the button / and then type the number of tosses, either 15, 150, 1500, or 15,000.) Present a side-by-side boxplot of the fractions of heads that you got in 15, 150, 1500, and 15,000 tosses. (To make the boxplot, go to Graph --> Boxplot, then choose "Simple" under "Multiple Y's", put the four columns in which you have recorded the fractions of heads in the "Graph Variables" box, and then click OK.)

- The Law of Large Numbers states that as the number of tosses gets larger, the fraction of heads should get closer and closer to 1/2. Examine the side-by-side boxplots that you made for question 12. Are your simulation results in agreement with the Law of Large Numbers? Explain your answer.

- If X denotes the number of heads in n tosses of a coin, what is the standard deviation of the random variable X? Does this standard deviation get larger or smaller when n gets larger? (Hint: this is a theoretical question, and is not asking about your simulation results.) Relate this theoretical result to the observations that you made from your simulations in question 11.

- If X denotes the number of heads in n tosses of a coin, what is the standard deviation of the fraction of heads, which is X/n? Does this standard deviation get larger or smaller as n gets larger? Relate this to the observations that you made in response to question 13.

The shape of the binomial distribution when np and n(1-p) are large

Here you will investigate what the shape of the binomial distribution looks like when the expected number of successes and the expected number of failures are both large.

- Generate 1000 random numbers each from the binomial distribution with n = 20 and p = .5, from the binomial distribution with n = 20 and p = .8, and from the binomial distribution with n = 20 and p = .95. Show the histograms and describe the shapes of the three distributions. Are the shapes of the distributions quite different, or do they look approximately the same?

- Generate 1000 random numbers each from the binomial distribution with n = 2000 and p = .5, from the binomial distribution with n = 2000 and p = .8, and from the binomial distribution with n = 2000 and p = .95. Show the histograms and describe the shapes of the three distributions. This time, are the shapes of the distributions approximately the same? What do you conclude about how the shape of the distribution depends on the number of trials? (Later you will learn that this happens because of a famous result known as the Central Limit Theorem.)
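The lab is written for MINITAB, but the same experiment is easy to reproduce in code. The sketch below is my own illustration (Python 3 with NumPy, which the lab does not use or require): it draws 1000 binomial samples for several numbers of tosses and compares the sample mean and standard deviation of the number of heads, and of the fraction of heads, with the theoretical values np, sqrt(np(1-p)) and sqrt(p(1-p)/n).

import numpy as np

rng = np.random.default_rng(seed=1)      # fixed seed so the sketch is reproducible
p = 0.5
reps = 1000

for n in (15, 150, 1500, 15000):
    heads = rng.binomial(n, p, size=reps)    # 1000 simulated coin-tossing experiments
    fraction = heads / n                     # fraction of heads in each experiment
    print("n =", n)
    print("  mean of heads:", heads.mean(), "   theory:", n * p)
    print("  sd of heads:  ", round(heads.std(ddof=1), 3), "   theory:", round((n * p * (1 - p)) ** 0.5, 3))
    print("  sd of fraction:", round(fraction.std(ddof=1), 5), "  theory:", round((p * (1 - p) / n) ** 0.5, 5))

The standard deviation of the number of heads grows with n while the standard deviation of the fraction shrinks, which is exactly the pattern the Law of Large Numbers questions above ask you to observe.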
http://www.math.ucsd.edu/~jschwein/11-lab4.html
The Universe is commonly defined as the totality of existence, including planets, stars, galaxies, the contents of intergalactic space, and all matter and energy. Similar terms include the cosmos, the world and nature.

Scientific observation of the Universe, the observable part of which is about 93 billion light years in diameter, has led to inferences of its earlier stages. These observations suggest that the Universe has been governed by the same physical laws and constants throughout most of its extent and history. The Big Bang theory is the prevailing cosmological model that describes the early development of the Universe, which in physical cosmology is calculated to have occurred 13.798 ± 0.037 billion years ago. There are various multiverse hypotheses, in which physicists have suggested that the Universe might be one among many universes that likewise exist. The farthest distance that it is theoretically possible for humans to see is described as the observable Universe. Observations have shown that the Universe appears to be expanding at an accelerating rate. There are many competing theories about the ultimate fate of the Universe. Physicists remain unsure about what, if anything, preceded the Big Bang. Many refuse to speculate, doubting that any information from any such prior state could ever be accessible.

Throughout recorded history, several cosmologies and cosmogonies have been proposed to account for observations of the Universe. The earliest quantitative geocentric models were developed by the ancient Greek philosophers. Over the centuries, more precise observations and improved theories of gravity led to Copernicus's heliocentric model and the Newtonian model of the Solar System, respectively. Further improvements in astronomy led to the realization that the Solar System is embedded in a galaxy composed of billions of stars, the Milky Way, and that other galaxies exist outside it, as far as astronomical instruments can reach. Careful studies of the distribution of these galaxies and their spectral lines have led to much of modern cosmology. Discovery of the red shift and cosmic microwave background radiation suggested that the Universe is expanding and had a beginning.

History of the Universe

According to the prevailing scientific model of the Universe, known as the Big Bang, the Universe expanded from an extremely hot, dense phase called the Planck epoch, in which all the matter and energy of the observable universe was concentrated. Since the Planck epoch, the Universe has been expanding to its present form, possibly with a brief period (less than 10⁻³² seconds) of cosmic inflation. Several independent experimental measurements support this theoretical expansion and, more generally, the Big Bang theory. Recent observations indicate that this expansion is accelerating because of dark energy, and that most of the matter in the Universe may be in a form which cannot be detected by present instruments, called dark matter. The common use of the "dark matter" and "dark energy" placeholder names for the unknown entities purported to account for about 95% of the mass-energy density of the Universe demonstrates the present observational and conceptual shortcomings and uncertainties concerning the nature and ultimate fate of the Universe.

On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map of the cosmic microwave background. The map suggests the universe is slightly older than thought.
According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was about 370,000 years old. The imprint reflects ripples that arose as early in the existence of the universe as the first nonillionth of a second. Apparently, these ripples gave rise to the present vast cosmic web of galaxy clusters and dark matter. According to the team, the universe is 13.798 ± 0.037 billion years old, and contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. Also, the Hubble constant was measured to be 67.80 ± 0.77 (km/s)/Mpc.

An earlier interpretation of astronomical observations indicated that the age of the Universe was 13.772 ± 0.059 billion years (whereas the decoupling of light and matter, see CMBR, happened 380,000 years after the Big Bang), and that the diameter of the observable universe is at least 93 billion light years or 8.80×10²⁶ meters. According to general relativity, space can expand faster than the speed of light, although we can view only a small portion of the Universe due to the limitation imposed by light speed. Since we cannot observe space beyond the limitations of light (or any electromagnetic radiation), it is uncertain whether the size of the Universe is finite or infinite.

Etymology, synonyms and definitions

The word Universe derives from the Old French word Univers, which in turn derives from the Latin word universum. The Latin word was used by Cicero and later Latin authors in many of the same senses as the modern English word is used. The Latin word derives from the poetic contraction Unvorsum, first used by Lucretius in Book IV (line 262) of his De rerum natura (On the Nature of Things), which connects un, uni (the combining form of unus, or "one") with vorsum, versum (a noun made from the perfect passive participle of vertere, meaning "something rotated, rolled, changed").

An alternative interpretation of unvorsum is "everything rotated as one" or "everything rotated by one". In this sense, it may be considered a translation of an earlier Greek word for the Universe, περιφορά (periforá, "circumambulation"), originally used to describe a course of a meal, the food being carried around the circle of dinner guests. This Greek word refers to celestial spheres, an early Greek model of the Universe. Regarding Plato's Metaphor of the sun, Aristotle suggests that the rotation of the sphere of fixed stars, inspired by the prime mover, motivates, in turn, terrestrial change via the Sun. Careful astronomical and physical measurements (such as the Foucault pendulum) are required to prove the Earth rotates on its axis.

A term for "Universe" in ancient Greece was τὸ πᾶν (tò pán, The All, Pan (mythology)). Related terms were matter (τὸ ὅλον, tò ólon; see also Hyle, lit. wood) and place (τὸ κενόν, tò kenón). Other synonyms for the Universe among the ancient Greek philosophers included κόσμος (cosmos) and φύσις (meaning Nature, from which we derive the word physics). The same synonyms are found in Latin authors (totum, mundus, natura) and survive in modern languages, e.g., the German words Das All, Weltall, and Natur for Universe. The same synonyms are found in English, such as everything (as in the theory of everything), the cosmos (as in cosmology), the world (as in the many-worlds interpretation), and Nature (as in natural laws or natural philosophy).
Broadest definition: reality and probability

The broadest definition of the Universe is found in De divisione naturae by the medieval philosopher and theologian Johannes Scotus Eriugena, who defined it as simply everything: everything that is created and everything that is not created.

Definition as reality

More customarily, the Universe is defined as everything that exists, has existed, and will exist. According to our current understanding, the Universe consists of three principles: spacetime, forms of energy, including momentum and matter, and the physical laws that relate them.

Definition as connected space-time

It is possible to conceive of disconnected space-times, each existing but unable to interact with one another. An easily visualized metaphor is a group of separate soap bubbles, in which observers living on one soap bubble cannot interact with those on other soap bubbles, even in principle. According to one common terminology, each "soap bubble" of space-time is denoted as a universe, whereas our particular space-time is denoted as the Universe, just as we call our moon the Moon. The entire collection of these separate space-times is denoted as the multiverse. In principle, the other unconnected universes may have different dimensionalities and topologies of space-time, different forms of matter and energy, and different physical laws and physical constants, although such possibilities are purely speculative.

Definition as observable reality

According to a still-more-restrictive definition, the Universe is everything within our connected space-time that could have a chance to interact with us and vice versa. According to the general theory of relativity, some regions of space may never interact with ours even in the lifetime of the Universe, due to the finite speed of light and the ongoing expansion of space. For example, radio messages sent from Earth may never reach some regions of space, even if the Universe were to exist forever; space may expand faster than light can traverse it. Distant regions of space are taken to exist and be part of reality as much as we are; yet we can never interact with them. The spatial region within which we can affect and be affected is the observable universe. Strictly speaking, the observable Universe depends on the location of the observer. By traveling, an observer can come into contact with a greater region of space-time than an observer who remains still, so that the observable Universe for the former is larger than for the latter. Nevertheless, even the most rapid traveler will not be able to interact with all of space. Typically, the observable Universe is taken to mean the Universe observable from our vantage point in the Milky Way Galaxy.

Size, age, contents, structure, and laws

The size of the Universe is unknown; it may be infinite. The region visible from Earth (the observable universe) is a sphere with a radius of about 46 billion light years, based on where the expansion of space has taken the most distant objects observed. For comparison, the diameter of a typical galaxy is 30,000 light-years, and the typical distance between two neighboring galaxies is 3 million light-years. As an example, the Milky Way Galaxy is roughly 100,000 light years in diameter, and the nearest sister galaxy to the Milky Way, the Andromeda Galaxy, is located roughly 2.5 million light years away. There are probably more than 100 billion (10¹¹) galaxies in the observable Universe.
Typical galaxies range from dwarfs with as few as ten million (10⁷) stars up to giants with one trillion (10¹²) stars, all orbiting the galaxy's center of mass. A 2010 study by astronomers estimated that the observable Universe contains 300 sextillion (3×10²³) stars.

The observable matter is spread homogeneously (uniformly) throughout the Universe, when averaged over distances longer than 300 million light-years. However, on smaller length-scales, matter is observed to form "clumps", i.e., to cluster hierarchically; many atoms are condensed into stars, most stars into galaxies, most galaxies into clusters, superclusters and, finally, the largest-scale structures such as the Great Wall of galaxies. The observable matter of the Universe is also spread isotropically, meaning that no direction of observation seems different from any other; each region of the sky has roughly the same content. The Universe is also bathed in a highly isotropic microwave radiation that corresponds to a thermal equilibrium blackbody spectrum of roughly 2.725 kelvin. The hypothesis that the large-scale Universe is homogeneous and isotropic is known as the cosmological principle, which is supported by astronomical observations.

The present overall density of the Universe is very low, roughly 9.9 × 10⁻³⁰ grams per cubic centimetre. This mass-energy appears to consist of 73% dark energy, 23% cold dark matter and 4% ordinary matter. Thus the density of atoms is on the order of a single hydrogen atom for every four cubic meters of volume. The properties of dark energy and dark matter are largely unknown. Dark matter gravitates as ordinary matter, and thus works to slow the expansion of the Universe; by contrast, dark energy accelerates its expansion.

The current estimate of the Universe's age is 13.798 ± 0.037 billion years. The Universe has not been the same at all times in its history; for example, the relative populations of quasars and galaxies have changed and space itself appears to have expanded. This expansion accounts for how Earth-bound scientists can observe the light from a galaxy 30 billion light years away, even if that light has traveled for only 13 billion years; the very space between them has expanded. This expansion is consistent with the observation that the light from distant galaxies has been redshifted; the photons emitted have been stretched to longer wavelengths and lower frequency during their journey. The rate of this spatial expansion is accelerating, based on studies of Type Ia supernovae and corroborated by other data.

The relative fractions of different chemical elements, particularly the lightest atoms such as hydrogen, deuterium and helium, seem to be identical throughout the Universe and throughout its observable history. The Universe seems to have much more matter than antimatter, an asymmetry possibly related to the observations of CP violation. The Universe appears to have no net electric charge, and therefore gravity appears to be the dominant interaction on cosmological length scales. The Universe also appears to have neither net momentum nor angular momentum. The absence of net charge and momentum would follow from accepted physical laws (Gauss's law and the non-divergence of the stress-energy-momentum pseudotensor, respectively), if the Universe were finite.

The Universe appears to have a smooth space-time continuum consisting of three spatial dimensions and one temporal (time) dimension.
On the average, space is observed to be very nearly flat (close to zero curvature), meaning that Euclidean geometry is experimentally true with high accuracy throughout most of the Universe. Spacetime also appears to have a simply connected topology, at least on the length-scale of the observable Universe. However, present observations cannot exclude the possibilities that the Universe has more dimensions and that its spacetime may have a multiply connected global topology, in analogy with the cylindrical or toroidal topologies of two-dimensional spaces. The Universe appears to behave in a manner that regularly follows a set of physical laws and physical constants. According to the prevailing Standard Model of physics, all matter is composed of three generations of leptons and quarks, both of which are fermions. These elementary particles interact via at most three fundamental interactions: the electroweak interaction which includes electromagnetism and the weak nuclear force; the strong nuclear force described by quantum chromodynamics; and gravity, which is best described at present by general relativity. The first two interactions can be described by renormalized quantum field theory, and are mediated by gauge bosons that correspond to a particular type of gauge symmetry. A renormalized quantum field theory of general relativity has not yet been achieved, although various forms of string theory seem promising. The theory of special relativity is believed to hold throughout the Universe, provided that the spatial and temporal length scales are sufficiently short; otherwise, the more general theory of general relativity must be applied. There is no explanation for the particular values that physical constants appear to have throughout our Universe, such as Planck's constant h or the gravitational constant G. Several conservation laws have been identified, such as the conservation of charge, momentum, angular momentum and energy; in many cases, these conservation laws can be related to symmetries or mathematical identities. It appears that many of the properties of the Universe have special values in the sense that a Universe where these properties differ slightly would not be able to support intelligent life. Not all scientists agree that this fine-tuning exists. In particular, it is not known under what conditions intelligent life could form and what form or shape that would take. A relevant observation in this discussion is that for an observer to exist to observe fine-tuning, the Universe must be able to support intelligent life. As such the conditional probability of observing a Universe that is fine-tuned to support intelligent life is 1. This observation is known as the anthropic principle and is particularly relevant if the creation of the Universe was probabilistic or if multiple universes with a variety of properties exist (see below). Many models of the cosmos (cosmologies) and its origin (cosmogonies) have been proposed, based on the then-available data and conceptions of the Universe. Historically, cosmologies and cosmogonies were based on narratives of gods acting in various ways. Theories of an impersonal Universe governed by physical laws were first proposed by the Greeks and Indians. Over the centuries, improvements in astronomical observations and theories of motion and gravitation led to ever more accurate descriptions of the Universe. 
The modern era of cosmology began with Albert Einstein's 1915 general theory of relativity, which made it possible to quantitatively predict the origin, evolution, and conclusion of the Universe as a whole. Most modern, accepted theories of cosmology are based on general relativity and, more specifically, the predicted Big Bang; however, still more careful measurements are required to determine which theory is correct. Many cultures have stories describing the origin of the world, which may be roughly grouped into common types. In one type of story, the world is born from a world egg; such stories include the Finnish epic poem Kalevala, the Chinese story of Pangu or the Indian Brahmanda Purana. In related stories, the Universe is created by a single entity emanating or producing something by him- or herself, as in the Tibetan Buddhism concept of Adi-Buddha, the ancient Greek story of Gaia (Mother Earth), the Aztec goddess Coatlicue myth, the ancient Egyptian god Atum story, or the Genesis creation narrative. In another type of story, the world is created from the union of male and female deities, as in the Maori story of Rangi and Papa. In other stories, the Universe is created by crafting it from pre-existing materials, such as the corpse of a dead god (as from Tiamat in the Babylonian epic Enuma Elish or from the giant Ymir in Norse mythology) or from chaotic materials, as in the story of Izanagi and Izanami in Japanese mythology. In other stories, the Universe emanates from fundamental principles, such as Brahman and Prakrti, the creation myth of the Serers, or the yin and yang of the Tao. From the 6th century BCE, the pre-Socratic Greek philosophers developed the earliest known philosophical models of the Universe. The earliest Greek philosophers noted that appearances can be deceiving, and sought to understand the underlying reality behind the appearances. In particular, they noted the ability of matter to change forms (e.g., ice to water to steam) and several philosophers proposed that all the apparently different materials of the world are different forms of a single primordial material, or arche. The first to do so was Thales, who proposed this material to be Water. Thales' student, Anaximander, proposed that everything came from the limitless apeiron. Anaximenes proposed Air on account of its perceived attractive and repulsive qualities that cause the arche to condense or dissociate into different forms. Anaxagoras proposed the principle of Nous (Mind). Heraclitus proposed fire (and spoke of logos). Empedocles proposed the elements: earth, water, air and fire. His four-element theory became very popular. Like Pythagoras, Plato believed that all things were composed of number, with Empedocles' elements taking the form of the Platonic solids. Leucippus and, most notably, his student Democritus proposed that the Universe was composed of indivisible atoms moving through a void (vacuum). Aristotle did not believe this was feasible because air, like water, offers resistance to motion. Air will immediately rush in to fill a void, and moreover, without resistance, it would do so indefinitely fast. Although Heraclitus argued for eternal change, his quasi-contemporary Parmenides made the radical suggestion that all change is an illusion, that the true underlying reality is eternally unchanging and of a single nature. Parmenides denoted this reality as τὸ ἕν (The One).
Parmenides' theory seemed implausible to many Greeks, but his student Zeno of Elea challenged them with several famous paradoxes. Aristotle responded to these paradoxes by developing the notion of a potential countable infinity, as well as the infinitely divisible continuum. Unlike the eternal and unchanging cycles of time, he believed the world was bounded by the celestial spheres, and thus magnitude was only finitely multiplicative. The Indian philosopher Kanada, founder of the Vaisheshika school, developed a theory of atomism and proposed that light and heat were varieties of the same substance. In the 5th century AD, the Buddhist atomist philosopher Dignāga proposed atoms to be point-sized, durationless, and made of energy. He and other Buddhist atomists denied the existence of substantial matter and proposed that movement consists of momentary flashes of a stream of energy. The theory of temporal finitism was inspired by the doctrine of Creation shared by the three Abrahamic religions: Judaism, Christianity and Islam. The Christian philosopher John Philoponus presented philosophical arguments against the ancient Greek notion of an infinite past and future. Philoponus' arguments against an infinite past were used by the early Muslim philosopher, Al-Kindi (Alkindus); the Jewish philosopher, Saadia Gaon (Saadia ben Joseph); and the Muslim theologian, Al-Ghazali (Algazel). Borrowing from Aristotle's Physics and Metaphysics, they employed two logical arguments against an infinite past, the first being the "argument from the impossibility of the existence of an actual infinite", which states:
- "An actual infinite cannot exist."
- "An infinite temporal regress of events is an actual infinite."
- "∴ An infinite temporal regress of events cannot exist."
The second argument, the "argument from the impossibility of completing an actual infinite by successive addition", states:
- "An actual infinite cannot be completed by successive addition."
- "The temporal series of past events has been completed by successive addition."
- "∴ The temporal series of past events cannot be an actual infinite."
Both arguments were adopted by Christian philosophers and theologians, and the second argument in particular became more famous after it was adopted by Immanuel Kant in his thesis of the first antinomy concerning time. Astronomical models of the Universe were proposed soon after astronomy began with the Babylonian astronomers, who viewed the Universe as a flat disk floating in the ocean, and this forms the premise for early Greek maps like those of Anaximander and Hecataeus of Miletus. Later Greek philosophers, observing the motions of the heavenly bodies, were concerned with developing models of the Universe based more profoundly on empirical evidence. The first coherent model was proposed by Eudoxus of Cnidos. According to Aristotle's physical interpretation of the model, celestial spheres eternally rotate with uniform motion around a stationary Earth. Normal matter is entirely contained within the terrestrial sphere. This model was also refined by Callippus, and after concentric spheres were abandoned, it was brought into nearly perfect agreement with astronomical observations by Ptolemy. The success of such a model is largely due to the mathematical fact that any function (such as the position of a planet) can be decomposed into a set of circular functions (the Fourier modes).
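As a small illustration of that mathematical fact, the sketch below samples a toy periodic "planetary longitude", decomposes it into circular functions using NumPy's FFT, and rebuilds it from a handful of the strongest modes. The toy signal and the number of modes kept are arbitrary choices for illustration, not part of any historical model.

import numpy as np

# A toy periodic "planet position" sampled over one full period.
N = 256
t = np.linspace(0.0, 1.0, N, endpoint=False)
signal = np.cos(2 * np.pi * t) + 0.3 * np.cos(2 * np.pi * 5 * t + 0.7)

# Decompose the signal into circular functions (Fourier modes)...
modes = np.fft.rfft(signal)

# ...then keep only the few strongest modes and rebuild the curve from them.
keep = 6
strongest = np.argsort(np.abs(modes))[::-1][:keep]
truncated = np.zeros_like(modes)
truncated[strongest] = modes[strongest]
rebuilt = np.fft.irfft(truncated, n=N)

print("maximum reconstruction error:", np.max(np.abs(rebuilt - signal)))

Because the toy signal is built from only a few circular components, a handful of modes reproduces it almost exactly; for a real planetary position, more modes would simply be needed, which is the sense in which epicycle-style models could be made arbitrarily accurate.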
Other Greek scientists, such as the Pythagorean philosopher Philolaus, postulated that at the center of the Universe was a "central fire" around which the Earth, Sun, Moon and planets revolved in uniform circular motion. The Greek astronomer Aristarchus of Samos was the first known individual to propose a heliocentric model of the Universe. Though the original text has been lost, a reference in Archimedes' book The Sand Reckoner describes Aristarchus' heliocentric theory. Archimedes wrote (in English translation):

You King Gelon are aware the 'Universe' is the name given by most astronomers to the sphere the center of which is the center of the Earth, while its radius is equal to the straight line between the center of the Sun and the center of the Earth. This is the common account as you have heard from astronomers. But Aristarchus has brought out a book consisting of certain hypotheses, wherein it appears, as a consequence of the assumptions made, that the Universe is many times greater than the 'Universe' just mentioned. His hypotheses are that the fixed stars and the Sun remain unmoved, that the Earth revolves about the Sun on the circumference of a circle, the Sun lying in the middle of the orbit, and that the sphere of fixed stars, situated about the same center as the Sun, is so great that the circle in which he supposes the Earth to revolve bears such a proportion to the distance of the fixed stars as the center of the sphere bears to its surface.

Aristarchus thus believed the stars to be very far away, and saw this as the reason why there was no visible parallax, that is, an observed movement of the stars relative to each other as the Earth moved around the Sun. The stars are in fact much farther away than the distance that was generally assumed in ancient times, which is why stellar parallax is only detectable with telescopes. The geocentric model, consistent with planetary parallax, was assumed to explain why the parallel phenomenon, stellar parallax, was not observed. The rejection of the heliocentric view was apparently quite strong, as the following passage from Plutarch suggests (On the Apparent Face in the Orb of the Moon):

Cleanthes [a contemporary of Aristarchus and head of the Stoics] thought it was the duty of the Greeks to indict Aristarchus of Samos on the charge of impiety for putting in motion the Hearth of the Universe [i.e. the earth], . . . supposing the heaven to remain at rest and the earth to revolve in an oblique circle, while it rotates, at the same time, about its own axis.

The only other astronomer from antiquity known by name who supported Aristarchus' heliocentric model was Seleucus of Seleucia, a Hellenistic astronomer who lived a century after Aristarchus. According to Plutarch, Seleucus was the first to prove the heliocentric system through reasoning, but it is not known what arguments he used. Seleucus' arguments for a heliocentric theory were probably related to the phenomenon of tides. According to Strabo (1.1.9), Seleucus was the first to state that the tides are due to the attraction of the Moon, and that the height of the tides depends on the Moon's position relative to the Sun. Alternatively, he may have proved the heliocentric theory by determining the constants of a geometric model for the heliocentric theory and by developing methods to compute planetary positions using this model, much as Nicolaus Copernicus later did in the 16th century.
During the Middle Ages, heliocentric models may have also been proposed by the Indian astronomer Aryabhata, and by the Persian astronomers Albumasar and Al-Sijzi. The Aristotelian model was accepted in the Western world for roughly two millennia, until Copernicus revived Aristarchus' theory that the astronomical data could be explained more plausibly if the Earth rotated on its axis and if the Sun were placed at the center of the Universe.

"In the center rests the sun. For who would place this lamp of a very beautiful temple in another or better place than this wherefrom it can illuminate everything at the same time?"
—Nicolaus Copernicus, in Chapter 10, Book 1 of De Revolutionibus Orbium Coelestium (1543)

As noted by Copernicus himself, the suggestion that the Earth rotates was very old, dating at least to Philolaus (c. 450 BC), Heraclides Ponticus (c. 350 BC) and Ecphantus the Pythagorean. Roughly a century before Copernicus, the Christian scholar Nicholas of Cusa also proposed that the Earth rotates on its axis in his book On Learned Ignorance (1440). Aryabhata (476–550), Brahmagupta (598–668), Albumasar and Al-Sijzi also proposed that the Earth rotates on its axis. The first empirical evidence for the Earth's rotation on its axis, using the phenomenon of comets, was given by Tusi (1201–1274) and Ali Qushji (1403–1474). The Copernican heliocentric cosmology was accepted by Isaac Newton, Christiaan Huygens and later scientists. Edmund Halley (1720) and Jean-Philippe de Cheseaux (1744) noted independently that the assumption of an infinite space filled uniformly with stars would lead to the prediction that the nighttime sky would be as bright as the sun itself; this became known as Olbers' paradox in the 19th century. Newton believed that an infinite space uniformly filled with matter would cause infinite forces and instabilities, causing the matter to be crushed inwards under its own gravity. This instability was clarified in 1902 by the Jeans instability criterion. One solution to these paradoxes is the Charlier Universe, in which the matter is arranged hierarchically (systems of orbiting bodies that are themselves orbiting in a larger system, ad infinitum) in a fractal way such that the Universe has a negligibly small overall density; such a cosmological model had also been proposed earlier in 1761 by Johann Heinrich Lambert. A significant astronomical advance of the 18th century was the realization by Thomas Wright, Immanuel Kant and others that nebulae might be vast, distant systems of stars in their own right. Of the four fundamental interactions, gravitation is dominant at cosmological length scales; that is, the other three forces play a negligible role in determining structures at the level of planetary systems, galaxies and larger-scale structures. Because all matter and energy gravitate, gravity's effects are cumulative; by contrast, the effects of positive and negative charges tend to cancel one another, making electromagnetism relatively insignificant on cosmological length scales. The remaining two interactions, the weak and strong nuclear forces, decline very rapidly with distance; their effects are confined mainly to sub-atomic length scales.

General theory of relativity

Given gravitation's predominance in shaping cosmological structures, accurate predictions of the Universe's past and future require an accurate theory of gravitation. The best theory available is Albert Einstein's general theory of relativity, which has passed all experimental tests hitherto.
However, because rigorous experiments have not been carried out on cosmological length scales, general relativity could conceivably be inaccurate. Nevertheless, its cosmological predictions appear to be consistent with observations, so there is no compelling reason to adopt another theory. General relativity provides a set of ten nonlinear partial differential equations for the spacetime metric (Einstein's field equations) that must be solved given the distribution of mass-energy and momentum throughout the Universe. Because these are unknown in exact detail, cosmological models have been based on the cosmological principle, which states that the Universe is homogeneous and isotropic. In effect, this principle asserts that the gravitational effects of the various galaxies making up the Universe are equivalent to those of a fine dust distributed uniformly throughout the Universe with the same average density. The assumption of a uniform dust makes it easy to solve Einstein's field equations and predict the past and future of the Universe on cosmological time scales. Einstein's field equations include a cosmological constant (Λ) that corresponds to an energy density of empty space. Depending on its sign, the cosmological constant can either slow (negative Λ) or accelerate (positive Λ) the expansion of the Universe. Although many scientists, including Einstein, had speculated that Λ was zero, recent astronomical observations of type Ia supernovae have detected a large amount of "dark energy" that is accelerating the Universe's expansion. Preliminary studies suggest that this dark energy corresponds to a positive Λ, although alternative theories cannot be ruled out as yet. The Russian physicist Zel'dovich suggested that Λ is a measure of the zero-point energy associated with virtual particles of quantum field theory, a pervasive vacuum energy that exists everywhere, even in empty space. Evidence for such zero-point energy is observed in the Casimir effect.

Special relativity and space-time

The Universe has at least three spatial and one temporal (time) dimension. It was long thought that the spatial and temporal dimensions were different in nature and independent of one another. However, according to the special theory of relativity, spatial and temporal separations are interconvertible (within limits) by changing one's motion. To understand this interconversion, it is helpful to consider the analogous interconversion of spatial separations along the three spatial dimensions. Consider the two endpoints of a rod of length L. The length can be determined from the differences in the three coordinates Δx, Δy and Δz of the two endpoints in a given reference frame using the Pythagorean theorem. In a rotated reference frame, the coordinate differences differ, but they give the same length, L² = Δx² + Δy² + Δz² = Δξ² + Δη² + Δζ². Thus, the coordinate differences (Δx, Δy, Δz) and (Δξ, Δη, Δζ) are not intrinsic to the rod, but merely reflect the reference frame used to describe it; by contrast, the length L is an intrinsic property of the rod. The coordinate differences can be changed without affecting the rod, by rotating one's reference frame. The analogy in spacetime is called the interval between two events; an event is defined as a point in spacetime, a specific position in space and a specific moment in time. The spacetime interval between two events is given by the expression reconstructed below, where c is the speed of light.
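The expression itself appears to have been dropped from the text during extraction. A minimal reconstruction in standard notation, assuming the common (+, −, −, −) sign convention, is:

s² = c²Δt² − Δx² − Δy² − Δz²

With the opposite convention the overall sign flips; either way, the interval is the quantity left unchanged by the change of reference frame described next.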
According to special relativity, one can change a spatial and time separation (L1, Δt1) into another (L2, Δt2) by changing one's reference frame, as long as the change maintains the spacetime interval s. Such a change in reference frame corresponds to changing one's motion; in a moving frame, lengths and times are different from their counterparts in a stationary reference frame. The precise manner in which the coordinate and time differences change with motion is described by the Lorentz transformation.

Solving Einstein's field equations

[Animation caption: in a closed Friedmann Universe with zero cosmological constant Λ, the distances between the galaxies increase with time, but the distances between the stars within each galaxy stay roughly the same, due to their gravitational interactions; such a Universe oscillates between a Big Bang and a Big Crunch.]

In non-Cartesian (non-square) or curved coordinate systems, the Pythagorean theorem holds only on infinitesimal length scales and must be augmented with a more general metric tensor gμν, which can vary from place to place and which describes the local geometry in the particular coordinate system. However, assuming the cosmological principle that the Universe is homogeneous and isotropic everywhere, every point in space is like every other point; hence, the metric tensor must be the same everywhere. That leads to a single form for the metric tensor, called the Friedmann–Lemaître–Robertson–Walker metric; in spherical coordinates (r, θ, φ) a standard form is ds² = c²dt² − R(t)²[dr²/(1 − kr²) + r²(dθ² + sin²θ dφ²)]. This metric has only two undetermined parameters: an overall length scale R that can vary with time, and a curvature index k that can be only 0, 1 or −1, corresponding to flat Euclidean geometry, or spaces of positive or negative curvature. In cosmology, solving for the history of the Universe is done by calculating R as a function of time, given k and the value of the cosmological constant Λ, which is a (small) parameter in Einstein's field equations. The equation describing how R varies with time is known as the Friedmann equation, after its inventor, Alexander Friedmann. The solutions for R(t) depend on k and Λ, but some qualitative features of such solutions are general. First and most importantly, the length scale R of the Universe can remain constant only if the Universe is perfectly isotropic with positive curvature (k=1) and has one precise value of density everywhere, as first noted by Albert Einstein. However, this equilibrium is unstable, and because the Universe is known to be inhomogeneous on smaller scales, R must change, according to general relativity. When R changes, all the spatial distances in the Universe change in tandem; there is an overall expansion or contraction of space itself. This accounts for the observation that galaxies appear to be flying apart; the space between them is stretching. The stretching of space also accounts for the apparent paradox that two galaxies can be 40 billion light years apart, although they started from the same point 13.8 billion years ago and never moved faster than the speed of light. Second, all solutions suggest that there was a gravitational singularity in the past, when R went to zero and matter and energy were infinitely dense. It may seem that this conclusion is uncertain because it is based on the questionable assumptions of perfect homogeneity and isotropy (the cosmological principle) and that only the gravitational interaction is significant.
However, the Penrose–Hawking singularity theorems show that a singularity should exist for very general conditions. Hence, according to Einstein's field equations, R grew rapidly from an unimaginably hot, dense state that existed immediately following this singularity (when R had a small, finite value); this is the essence of the Big Bang model of the Universe. A common misconception is that the Big Bang model predicts that matter and energy exploded from a single point in space and time; that is false. Rather, space itself was created in the Big Bang and imbued with a fixed amount of energy and matter distributed uniformly throughout; as space expands (i.e., as R(t) increases), the density of that matter and energy decreases.

"Space has no boundary – that is empirically more certain than any external observation. However, that does not imply that space is infinite..."
—Bernhard Riemann (Habilitationsvortrag, 1854; translated from the original German)

Third, the curvature index k determines the sign of the mean spatial curvature of spacetime averaged over length scales greater than a billion light years. If k=1, the curvature is positive and the Universe has a finite volume. Such universes are often visualized as a three-dimensional sphere S³ embedded in a four-dimensional space. Conversely, if k is zero or negative, the Universe may have infinite volume, depending on its overall topology. It may seem counter-intuitive that an infinite and yet infinitely dense Universe could be created in a single instant at the Big Bang when R=0, but exactly that is predicted mathematically when k does not equal 1. For comparison, an infinite plane has zero curvature but infinite area, whereas an infinite cylinder is finite in one direction and a torus is finite in both. A toroidal Universe could behave like a normal Universe with periodic boundary conditions, as seen in "wrap-around" video games such as Asteroids; a traveler crossing an outer "boundary" of space going outwards would reappear instantly at another point on the boundary moving inwards. The ultimate fate of the Universe is still unknown, because it depends critically on the curvature index k and the cosmological constant Λ. If the Universe is sufficiently dense, k equals +1, meaning that its average curvature throughout is positive and the Universe will eventually recollapse in a Big Crunch, possibly starting a new Universe in a Big Bounce. Conversely, if the Universe is insufficiently dense, k equals 0 or −1 and the Universe will expand forever, cooling off and eventually becoming inhospitable for all life, as the stars die and all matter coalesces into black holes (the Big Freeze and the heat death of the Universe). As noted above, recent data suggests that the expansion speed of the Universe is not decreasing as originally expected, but increasing; if this continues indefinitely, the Universe will eventually rip itself to shreds (the Big Rip). Experimentally, the Universe has an overall density that is very close to the critical value between recollapse and eternal expansion; more careful astronomical observations are needed to resolve the question.

Big Bang model

The prevailing Big Bang model accounts for many of the experimental observations described above, such as the correlation of distance and redshift of galaxies, the universal ratio of hydrogen:helium atoms, and the ubiquitous, isotropic microwave radiation background.
As noted above, the redshift arises from the metric expansion of space; as the space itself expands, the wavelength of a photon traveling through space likewise increases, decreasing its energy. The longer a photon has been traveling, the more expansion it has undergone; hence, older photons from more distant galaxies are the most red-shifted. Determining the correlation between distance and redshift is an important problem in experimental physical cosmology. Other experimental observations can be explained by combining the overall expansion of space with nuclear and atomic physics. As the Universe expands, the energy density of the electromagnetic radiation decreases more quickly than does that of matter, because a photon's energy decreases as its wavelength is stretched. Thus, although the energy density of the Universe is now dominated by matter, it was once dominated by radiation; poetically speaking, all was light. As the Universe expanded, its energy density decreased and it became cooler; as it did so, the elementary particles of matter could associate stably into ever larger combinations. Thus, early in its history, stable protons and neutrons formed, which then associated into atomic nuclei. At this stage, the matter in the Universe was mainly a hot, dense plasma of negative electrons, neutral neutrinos and positive nuclei. Nuclear reactions among the nuclei led to the present abundances of the lighter nuclei, particularly hydrogen, deuterium, and helium. Eventually, the electrons and nuclei combined to form stable atoms, which are transparent to most wavelengths of radiation; at this point, the radiation decoupled from the matter, forming the ubiquitous, isotropic background of microwave radiation observed today. Other observations are not answered definitively by known physics. According to the prevailing theory, a slight imbalance of matter over antimatter was present in the Universe's creation, or developed very shortly thereafter, possibly due to the CP violation that has been observed by particle physicists. Although the matter and antimatter mostly annihilated one another, producing photons, a small residue of matter survived, giving the present matter-dominated Universe. Several lines of evidence also suggest that a rapid cosmic inflation of the Universe occurred very early in its history (roughly 10^−35 seconds after its creation). Recent observations also suggest that the cosmological constant (Λ) is not zero and that the net mass-energy content of the Universe is dominated by a dark energy and dark matter that have not been characterized scientifically. They differ in their gravitational effects. Dark matter gravitates as ordinary matter does, and thus slows the expansion of the Universe; by contrast, dark energy serves to accelerate the Universe's expansion. Some speculative theories have proposed that this Universe is but one of a set of disconnected universes, collectively denoted as the multiverse, challenging or enhancing more limited definitions of the Universe. Scientific multiverse theories are distinct from concepts such as alternate planes of consciousness and simulated reality, although the idea of a larger Universe is not new; for example, Bishop Étienne Tempier of Paris ruled in 1277 that God could create as many universes as he saw fit, a question that was being hotly debated by the French theologians.
Max Tegmark developed a four-part classification scheme for the different types of multiverses that scientists have suggested in various problem domains. An example of such a theory is the chaotic inflation model of the early Universe. Another is the many-worlds interpretation of quantum mechanics. Parallel worlds are generated in a manner similar to quantum superposition and decoherence, with all states of the wave function being realized in separate worlds. Effectively, the multiverse evolves as a universal wavefunction. If the big bang that created our multiverse created an ensemble of multiverses, the wave function of the ensemble would be entangled in this sense. The least controversial category of multiverse in Tegmark's scheme is Level I, which describes distant space-time events "in our own Universe". If space is infinite, or sufficiently large and uniform, identical instances of the history of Earth's entire Hubble volume occur every so often, simply by chance. Tegmark calculated that our nearest so-called doppelgänger is 10^(10^115) meters away from us (a double exponential function larger than a googolplex). In principle, it would be impossible to scientifically verify an identical Hubble volume. However, it does follow as a fairly straightforward consequence from otherwise unrelated scientific observations and theories. Tegmark suggests that statistical analysis exploiting the anthropic principle provides an opportunity to test multiverse theories in some cases. Generally, science would consider a multiverse theory that posits neither a common point of causation, nor the possibility of interaction between universes, to be an idle speculation.

Shape of the Universe

The shape or geometry of the Universe includes both local geometry in the observable Universe and global geometry, which we may or may not be able to measure. Shape can refer to curvature and topology. More formally, the subject in practice investigates which 3-manifold corresponds to the spatial section in comoving coordinates of the four-dimensional space-time of the Universe. Cosmologists normally work with a given space-like slice of spacetime called the comoving coordinates. In terms of observation, the section of spacetime that can be observed is the backward light cone (points within the cosmic light horizon, given time to reach a given observer). If the observable Universe is smaller than the entire Universe (in some models it is many orders of magnitude smaller), one cannot determine the global structure by observation: one is limited to a small patch. Among the Friedmann–Lemaître–Robertson–Walker (FLRW) models, the presently most popular shape of the Universe found to fit observational data according to cosmologists is the infinite flat model, while other FLRW models include the Poincaré dodecahedral space and the Picard horn. The data fit by these FLRW models of space especially include the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck maps of cosmic background radiation. NASA released the first WMAP cosmic background radiation data in February 2003, while a higher resolution map regarding Planck data was released by ESA in March 2013. Both probes have found almost perfect agreement with inflationary models and the standard model of cosmology, describing a flat, homogeneous universe dominated by dark matter and dark energy.
See also
- Religious cosmology
- Cosmic latte
- Hindu cosmology
- Dyson's eternal intelligence
- Esoteric cosmology
- False vacuum
- Final anthropic principle
- Fine-tuned Universe
- Hindu cycle of the universe
- Jain cosmology
- Kardashev scale
- The Mysterious Universe (book)
- Non-standard cosmology
- Observable universe
- Omega Point
- Rare Earth hypothesis
- Vacuum genesis
- World view
- Zero-energy Universe
http://en.wikipedia.org/wiki/Universe
In mathematics, the slope or the gradient of a straight line (within a Cartesian coordinate system) is a measure of the "steepness" of the line relative to the horizontal axis. With an understanding of algebra and geometry, one can calculate the slope of a straight line; with calculus, one can calculate the slope of the tangent to a curve at a point.

Definition of slope

The slope of a line in the plane containing the x and y axes is generally represented by the letter m, and is defined as the change in the y coordinate divided by the corresponding change in the x coordinate, between two distinct points on the line. This is described by the following equation:

m = Δy / Δx

Given two points (x1, y1) and (x2, y2), the change in x from one to the other is x2 - x1, while the change in y is y2 - y1. Substituting both quantities into the above equation obtains the following:

m = (y2 - y1) / (x2 - x1)

Since the y-axis is vertical and the x-axis is horizontal by convention, the above equation is often memorized as "rise over run", where Δy is the "rise" and Δx is the "run". Therefore, by convention, m is equal to the change in y, the vertical coordinate, divided by the change in x, the horizontal coordinate; that is, m is the ratio of the changes. This concept is fundamental to algebra, analytic geometry, trigonometry, and calculus. Note that the points chosen and the order in which they are used are irrelevant; the same line will always have the same slope. Other curves have slopes that change from point to point, and one can use calculus to determine such slopes.

Suppose a line runs through two points: P(13,8) and Q(1,2). By dividing the difference in y-coordinates by the difference in x-coordinates, one can obtain the slope of the line:

m = (8 - 2) / (13 - 1) = 6 / 12 = 1/2

The slope is 1/2 = 0.5. If a line runs through the points (4, 15) and (3, 21) then:

m = (21 - 15) / (3 - 4) = 6 / (-1) = -6

The larger the absolute value of a slope, the steeper the line. A horizontal line has slope 0, a 45° rising line has a slope of +1, and a 45° falling line has a slope of -1. The slope of a vertical line is not defined (it does not make sense to define it as +∞, because it might just as well be defined as -∞). Two lines are parallel if and only if their slopes are equal or if they both are vertical and therefore undefined; they are perpendicular (i.e. they form a right angle) if and only if the product of their slopes is -1 or one has a slope of 0 and the other is vertical and undefined.

Slope of a road, etc.

There are two common ways to describe how steep a road is. One is by the angle in degrees, and the other is by the slope in a percentage. See also mountain railway. The formula for converting a slope in percentage to degrees is:

angle in degrees = arctan(slope in percent / 100)

If y is a linear function of x, then the coefficient of x is the slope of the line created by plotting the function. Therefore, if the equation of the line is given in the form

y = mx + b

then m is the slope. This form of a line's equation is called the slope-intercept form, because b can be interpreted as the y-intercept of the line, the y-coordinate where the line intersects the y-axis. If the slope m of a line and a point (x0, y0) on the line are both known, then the equation of the line can be found using the point-slope formula:

y - y0 = m(x - x0)

For example, consider a line running through the points (2, 8) and (3, 20). This line has a slope, m, of (20 - 8) / (3 - 2) = 12. One can then write the line's equation, in point-slope form: y - 8 = 12(x - 2) = 12x - 24; or: y = 12x - 16. The slope of a linear equation in the general form:

Ax + By + C = 0

is given by the formula m = −A/B. The concept of a slope is central to differential calculus.
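The "rise over run" computation above can be put into a short, illustrative Python function; the function name and the reuse of the example points from this article are arbitrary choices for demonstration.

def slope(p, q):
    """Return the slope of the line through points p and q, or None for a vertical line."""
    (x1, y1), (x2, y2) = p, q
    if x2 == x1:
        return None  # vertical line: slope is undefined
    return (y2 - y1) / (x2 - x1)

print(slope((13, 8), (1, 2)))   # 0.5
print(slope((4, 15), (3, 21)))  # -6.0
print(slope((2, 8), (3, 20)))   # 12.0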
For non-linear functions, the rate of change varies along the curve. The derivative of the function at a point is the slope of the line tangent to the curve at the point, and is thus equal to the rate of change of the function at that point.

Why calculus is necessary

If we let Δx and Δy be the distances (along the x and y axes, respectively) between two points on a curve, then the slope given by the above definition, m = Δy / Δx, is the slope of a secant line to the curve. For a line, the secant between any two points is identical to the line itself; however, this is not the case for any other type of curve. For example, the slope of the secant intersecting y = x² at (0,0) and (3,9) is m = (9 - 0) / (3 - 0) = 3 (which happens to be the slope of the tangent at, and only at, x = 1.5). However, by moving the points used in the above formula closer together so that Δy and Δx decrease, the secant line more closely approximates a tangent line to the curve. It follows that the secant line is identical to the tangent line when Δy and Δx equal zero; however, this results in a slope of 0/0, which is an indeterminate form (see also division by zero). The concept of a limit is necessary to calculate this slope; the slope is the limit of Δy / Δx as Δy and Δx approach zero (a numerical sketch of this limiting process is given at the end of the article). However, Δx and Δy are interrelated such that it is sufficient to take the limit where only Δx approaches zero. This limit is the derivative of y with respect to x. It may be written (in calculus notation) as dy/dx.

See also
- The gradient is a generalization of the concept of slope for functions of more than one variable.
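As referenced in the "Why calculus is necessary" discussion above, here is a brief numerical sketch of the secant-slope limit for y = x² at x = 1.5; the shrinking step sizes are arbitrary choices for illustration.

def f(x):
    return x ** 2

x0 = 1.5
for dx in (1.0, 0.1, 0.01, 0.001):
    secant_slope = (f(x0 + dx) - f(x0)) / dx
    print(dx, secant_slope)
# The printed slopes approach 3.0, the slope of the tangent to y = x**2 at x = 1.5.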
http://www.exampleproblems.com/wiki/index.php/Slope
In mathematics, a periodic function is a function that repeats its values in regular intervals or periods. The most important examples are the trigonometric functions, which repeat over intervals of length 2π radians. Periodic functions are used throughout science to describe oscillations, waves, and other phenomena that exhibit periodicity. Any function which is not periodic is called aperiodic. A function f is said to be periodic with period P (P being a nonzero constant) if we have for all values of x. If there exists a least positive constant P with this property, it is called the prime period. A function with period P will repeat on intervals of length P, and these intervals are referred to as periods. Geometrically, a periodic function can be defined as a function whose graph exhibits translational symmetry. Specifically, a function f is periodic with period P if the graph of f is invariant under translation in the x-direction by a distance of P. This definition of periodic can be extended to other geometric shapes and patterns, such as periodic tessellations of the plane. A function that is not periodic is called aperiodic. For example, the sine function is periodic with period 2π, since for all values of x. This function repeats on intervals of length 2π (see the graph to the right). Everyday examples are seen when the variable is time; for instance the hands of a clock or the phases of the moon show periodic behaviour. Periodic motion is motion in which the position(s) of the system are expressible as periodic functions, all with the same period. A simple example of a periodic function is the function f that gives the "fractional part" of its argument. Its period is 1. In particular, - f( 0.5 ) = f( 1.5 ) = f( 2.5 ) = ... = 0.5. The graph of the function f is the sawtooth wave. The trigonometric functions sine and cosine are common periodic functions, with period 2π (see the figure on the right). The subject of Fourier series investigates the idea that an 'arbitrary' periodic function is a sum of trigonometric functions with matching periods. If a function f is periodic with period P, then for all x in the domain of f and all integers n, - f(x + nP) = f(x). If f(x) is a function with period P, then f(ax+b), where a is a positive constant, is periodic with period P/|a|. For example, f(x)=sinx has period 2π, therefore sin(5x) will have period 2π/5. Double-periodic functions A function whose domain is the complex numbers can have two incommensurate periods without being constant. The elliptic functions are such functions. ("Incommensurate" in this context means not real multiples of each other.) Complex example Using complex variables we have the common period function: As you can see, since the cosine and sine functions are periodic, and the complex exponential above is made up of cosine/sine waves, then the above (actually Euler's formula) has the following property. If L is the period of the function then: Antiperiodic functions One common generalization of periodic functions is that of antiperiodic functions. This is a function f such that f(x + P) = −f(x) for all x. (Thus, a P-antiperiodic function is a 2P-periodic function.) Bloch-periodic functions A further generalization appears in the context of Bloch waves and Floquet theory, which govern the solution of various periodic differential equations. In this context, the solution (in one dimension) is typically a function of the form: where k is a real or complex number (the Bloch wavevector or Floquet exponent). 
Functions of this form are sometimes called Bloch-periodic in this context. A periodic function is the special case k = 0, and an antiperiodic function is the special case k = π/P (a brief numerical sketch of these cases is given after the list of related topics below). Quotient spaces as domain In signal processing one encounters the problem that Fourier series represent periodic functions and satisfy convolution theorems (convolution of Fourier series corresponds to multiplication of the represented periodic functions, and vice versa), but periodic functions cannot be convolved with the usual definition, since the integrals involved diverge. A possible way out is to define a periodic function on a bounded but periodic domain. To this end one can use the notion of a quotient space, identifying real numbers that differ by an integer multiple of the period P, that is, working on ℝ/Pℤ. See also - Periodic sequence - Almost periodic function - Definite pitch - Doubly periodic function - Floquet theory - Quasiperiodic function - Periodic summation
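As referenced above, here is a brief numerical sketch of the periodic, antiperiodic and Bloch-periodic cases (the sample points and tolerance are arbitrary choices):

```python
import math, cmath

P = 2 * math.pi
xs = [0.1 * k for k in range(40)]

def is_periodic(f, period, pts, tol=1e-9):
    """Check f(x + period) == f(x) at a handful of sample points."""
    return all(abs(f(x + period) - f(x)) < tol for x in pts)

frac = lambda x: x - math.floor(x)                         # sawtooth ("fractional part"), period 1
print(is_periodic(math.sin, P, xs))                        # True: sin has period 2*pi
print(is_periodic(frac, 1.0, xs))                          # True: period 1
print(is_periodic(lambda x: math.sin(5 * x), P / 5, xs))   # True: sin(5x) has period 2*pi/5

# Antiperiodic: f(x + P) = -f(x); sin is pi-antiperiodic, hence 2*pi-periodic
print(all(abs(math.sin(x + math.pi) + math.sin(x)) < 1e-9 for x in xs))   # True

# Bloch-periodic: f(x + P) = exp(i*k*P) * f(x), here with f(x) = exp(i*k*x) * sin(x)
k = 0.7
f = lambda x: cmath.exp(1j * k * x) * math.sin(x)
phase = cmath.exp(1j * k * P)
print(all(abs(f(x + P) - phase * f(x)) < 1e-9 for x in xs))               # True
```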
http://en.wikipedia.org/wiki/Periodic_function
Hideki Yukawa and the Pi Mesons Hideki Yukawa received the Nobel Prize in physics for 1949 for predicting the existence of what are now known as the pi mesons. In his 1934 article Yukawa argued that the nuclear strong force is carried by a particle with a mass approximately 200 times that of an electron. Shortly after Yukawa's prediction a particle with almost precisely this mass was discovered in cosmic ray phenomena. It looked at first as though Yukawa had been uncannily accurate, but there were problems with the particle found in the cosmic ray records. Although its mass was 207 times that of an electron, it was a fermion with half-integral spin rather than a boson of integral spin as Yukawa predicted for the carrier of the strong nuclear force. It turned out that the cosmic ray particle was not the particle Yukawa was talking about. Instead the cosmic ray particle was essentially a heavy electron, which is now called the muon. Later three particles with masses approximately 270 times that of an electron were found. These did have the properties that Yukawa had predicted. One was of positive charge, one of negative charge and one was neutral. They were called pi-mesons but now they are known as pions. Too often when Yukawa's work is described only the predictions are noted, so that it seems to the student that Yukawa just made a successful guess. This page's purpose is to present some of Yukawa's analysis that lay behind the successful prediction. When Hideki Yukawa wrote his article physicists were searching for the functional form of the strong force between nucleons (protons and neutrons). It was widely presumed, on the basis of the Coulomb force, that the strong force would be a power of the distance between the nucleons; i.e., 1/rⁿ for some value of n, where r is the distance. Some proposed that the exponent n might be as large as 7 to account for the apparent short range of the nuclear force. Yukawa proposed a quite different form. For the Coulomb force the potential energy is of the form V(r) = −1/r. Yukawa proposed that the potential for the strong force be of the form U(r) = ±g²e^(−λr)/r, where λ is a parameter subject to physical measurement. This form for the potential was somewhat surprising, but a little reflection indicates that if the force is carried by particles the functional form has to be of the nature of the one Yukawa proposed. To see this, consider the case of radiation from a point source. Radiation creates pressure propagated by photons. The intensity of the radiation is inversely proportional to the square of the distance from the source; i.e., A/r². The number of photons (and consequently the energy) on a wave front of radius r₁ is the intensity A/r₁² multiplied by the area of the spherical wave front 4πr₁², or 4πA. The wave front would have the same number of photons (and energy) when it expands to a radius of r₂. And of course the number of photons and amount of energy have to be the same because there is conservation of energy and photons. Where would any reduction in energy go? But this strict inverse distance squared dependence is a special consequence of photons not decaying. If the strong force is carried by particles which decay then the intensity of the strong force will diminish with distance not only as the inverse of r squared but also because the force-carrying particles decay over time. The number of particles remaining in a wave front after time t is N₀exp(−αt), where α is the decay rate.
If v is the velocity of the particles then the number remaining at distance r is N0exp(-(α/v)r). Thus the intensity of the strong force at distance r is of the form: For any other functional form it would be impossible to account for energy differences as a function of distance. The principle involved is that the force carried by particles has to be inversely proportion to the distance squared to account for the spreading of the particles over a larger spherical surface and must also be multiplied by an exponential factor to take into account the decay of the particles with time and hence distance. The potential function for a force is the function such that the negative of its gradient gives the force as a function of distance. Yukawa's potential function does not quite satisfy this condition but it is an approximation to one that does. Yukawa notes that the potential U=±g2/r satisfies the wave equation (∇2 - (1/c2)∂2/∂t2)U = 0. The potential function he postulates, U=±g2e-λr/r satisfies the equation: (∇2 - (1/c2)∂2/∂t2 - λ2)U = 0. where ∇2 is the Laplacian operator. The derivation of the above results makes use of the form of the vector operator ∇2 in spherical coordinates. The derivation also make use of the fact that the time derivatives are all zero. For these derivations see Appendix I. By the rules of quantum mechanics: ∂/∂x => (i/h)px and similarly for y and z ∂/∂t => -(i/h)W With these substitutions Yukawa's wave equation becomes: -(1/h2)(px2 + py2 + pz2) - (1/c2)(-(1/h2)W2 - λ2)U = 0 [(px2 + py2 + pz2) - W2/c2 + λ2h2]U = 0 or more succintly as [p2 - W2/c2 + λ2h2]U = 0 Yukawa defines the mass of the particle associated with the field U as mU such that mUc = λh A similar relationship based upon Heisenberg's Uncertainty Principle was developed by G.C. Wick in Nature in 1938. The Uncertainty Principle in this case applies the canonical conjugate coordinates of time and energy: ΔEΔt ≥ h/2π In Wick's analysis the uncetainty in time Δt is the time required for light to traverse the range of the nuclear force r, which corresponds to 1/λ in Yukawa's analysis; i.e., Δt = r/c. The uncertainty of energy ΔE is the mass-energy of the particle, mUc2. Thus, according to Wick's argument, mUc2(r/c) = h/2π or mUc = (h/2π)/r = (h/2π)λ which is essentially the same as Yukawa's relation. Modification of Yukawa's Analysis The potential energy function is defined such that the force of the field is equal to the negative of the gradient of the potential function. Although the potential Yukawa assumes involves an inverse function of distance with an exponential decay with distance it is not precisely the form that gives rise to the exponentially attenuated inverse distance squared form that has the proper form for a particle-based field. The potential function having that property is of the form: V(r) = ∫∞r(exp(-λz)/z2)dz where the term ±g2 has been dropped. This will be called the true-form potential in the material which follows. An integration by parts of this function yields the relationship: V(r) = exp(-λr)/r - λ∫∞r(exp(-λz)/z)dz The first term on the right is Yukawa's potential function. This can be represented in the form: V(r) = U(r) - λW(r) where U(r) is Yukawa's potential and W(r) = ±g2∫∞r(exp(-λz)/z)dz. Because the scale is so different at various parts of this graph it is more convenient to view the logarithms of the functions as shown below. The parallel pattern for V(r) and U(r) for large distances indicates that for distances in such ranges the functions are roughly proportional. 
In Appendix I it is shown that V = λ(∂V/∂r) rather than as for the Yukawa potential U = λ2 This would appear to lead to a drastically different wave equation, but it is shown in Appendix II that the Laplacian for the true-form potential V(r) can be put into the form: ∇2V = λ2V + 2λX(r) X(r) = ∫∞r(e-λz/z3)/dz. The integral involving the inverse cube of distance is extremely small compared to V for r large compared to unity. This means that the true-form potential implies the same things concerning mass as does the Yukawa potential, at least for distances greater than unity. The wave equation that would correspond to the true-form potential V(r) is: (∇2 - (1/c2)∂2/∂t2 - λ2)V = 2λ∫∞r(e-λz/z3)/dz. In particular the same relationship between λ and the mass of the particle would prevail. mUc = hλ However, the analysis indicates that the apparent mass of the particle for small distances would be different from that implied by the Yukawa potential. The Half-Life and the Mass of the Pi Mesons If the decay of the nucleonic field potential with distance is due to the decay of the meson over time then there should be a relationship between the spatial rate of decay of the potential λ and the rate of time decay α; i.e., λ = α/v where v is the velocity of the meson. The value of λ which corresponds to a mass 270 times the mass of an electron is 6.5x1012cm-1. The reciprocal of λ is in units of length. The reciprocal of this λ is 1.54x10-13 cm or 1.54 fermi. The half-lives of the positive and negative π mesons are equal and the value is 2.6x10-8 seconds. The neutral π meson has an alternate mode of decay from the other π mesons and its half-life is equal to 9x10-16 seconds. The reciprocal of the temporal rate of decay α is equal to the half-life divided by the natural logarithm of 2. Therefore the value of 1/α for the positive and negative π mesons is equal to 3.7x10-8 seconds and for the neutral π meson it is 1.3x10-15 seconds. The ratio of the reciprocal of λ to the reciprocal of α, which corresponds to an apparent velocity, is about 4x10-6 cm/sec for the positive and negative π mesons and to 1.2x102 or 120 cm/sec for the neutral π meson. This is an anomaly because some measurements indicate that the velocity of propagation of mesons is very close to the speed of light. When a particle moves at a speed close to the speed of light relativistic adjustments are required. Postponing the matter of relativistic correction, consider for now the implications of a meson velocity essentially equal to the speed of light. If the velocity of the mesons is equal to the speed of light then λ = α/c. mUc = hλ ατ = ln(2), where τ is the half-life, it follows that the relationship between the mass of the particle and its half-life would be given by: mU = (h/c)λ = (h/c)(β/c) = (h/c2)(ln(2)/τ mUc2 = hln(2)/τ mUc2τ = hln(2) = constant. Since particle masses are often expressed relative to the mass of the electron the above relationship is more conveniently expressed as: (mU/me)τ = ln(2)(h/mec2) = constant. The term h/mec2 is the reciprocal of the de Broglie frequency of an electron which is the de Broglie wavelength of an electron divided by the speed of light. The estimate of mass based upon the above relation would be in error by many orders of magnitude because the half-life of the meson is vastly longer than the time required for light to traverse a distance equal to the range of the nuclear force, r=1/λ. The vastly extended life-time of the mesons can be accounted for by relativistic effects. 
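Before turning to the relativistic resolution, the numbers quoted above are easy to reproduce. A short sketch in Python (CGS constants; the pion half-lives are the figures quoted in this article, and the implied mass comes out at roughly 250 electron masses, in the same ballpark as the 270 quoted):

```python
import math

hbar = 1.0546e-27        # erg*s
c    = 3.0e10            # cm/s
m_e  = 9.109e-28         # g

lam = 6.5e12             # cm^-1, spatial decay rate quoted in the text
range_cm = 1.0 / lam
print(f"range 1/lambda = {range_cm:.3e} cm")              # ~1.54e-13 cm

# Mass-range relation m*c = hbar*lambda
m = hbar * lam / c
print(f"implied mass   = {m / m_e:.0f} electron masses")  # ~250

# Apparent velocity = (1/lambda) / (1/alpha), with 1/alpha = half-life / ln 2
for name, half_life in [("charged pion", 2.6e-8), ("neutral pion", 9e-16)]:
    inv_alpha = half_life / math.log(2)
    v = range_cm / inv_alpha
    print(f"{name}: 1/alpha = {inv_alpha:.2e} s, apparent velocity = {v:.2e} cm/s")
```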
According to the Special Theory of Relativity, time appears to be dilated in a coordinate system moving at uniform speed with respect to another coordinate system. If the true half-life of the meson is τ₀ for a meson not moving with respect to the coordinate system in which measurements are being made, then the half-life appears to be τ₀/(1−β²)^(1/2) (where β = v/c) when the mesons are moving. The half-life of mesons is measured with respect to the nucleus and the laboratory coordinate system and can be vastly extended. In the coordinate system moving with the meson the half-life would be τ₀, but the distance the meson would appear to travel away from the nucleus before decay would appear to be contracted according to the Special Theory of Relativity; that is, the distance in the coordinate system of the meson would be the laboratory-frame distance reduced by the factor (1−β²)^(1/2). The determination of the true half-life of a positive or negative π meson and the velocity of propagation of the mesons is a matter of finding the simultaneous solution of the two equations: τ₀/(1−(v/c)²)^(1/2) = 2.6×10⁻⁸ seconds and vτ₀(1−(v/c)²)^(1/2) = 1.54×10⁻¹³ cm. Dividing the second equation by the first and then dividing by c gives the equation (v/c)(1−(v/c)²) = (1.54×10⁻¹³)/((2.6×10⁻⁸)(3×10¹⁰)) = 1.97×10⁻¹⁶, which has a real solution of approximately (v/c) = 1−1×10⁻¹⁶. Thus τ₀ = 2.6×10⁻⁸(1.4×10⁻¹⁶) = 3.65×10⁻²⁴ sec, which is approximately the time required for light to traverse a distance equal to the range of the nuclear force. This value of τ₀ corresponds to a mean life of τ₀/ln(2), or 5.27×10⁻²⁴ seconds, and this is the time required for light to traverse the range of the nuclear force. Appendix I. The Laplacian operator in spherical coordinates, when applied to a spherically symmetrical function P(r), reduces to the evaluation of ∇²P = (1/r²)∂(r²∂P/∂r)/∂r. When P(r) is the Yukawa potential U(r) = e^(−λr)/r the Laplacian reduces to: ∂(e^(−λr)/r)/∂r = −λ(e^(−λr)/r) − (e^(−λr)/r²), so r²∂U/∂r = −λr e^(−λr) − e^(−λr), and ∂(r²∂U/∂r)/∂r = λ²r e^(−λr) − λe^(−λr) + λe^(−λr) = λ²r e^(−λr). Therefore ∇²U = (1/r²)∂(r²∂U/∂r)/∂r = λ²(e^(−λr)/r), or ∇²U = λ²U. When P(r) is the true-form potential V(r) the Laplacian reduces to: ∂V/∂r = −e^(−λr)/r², so r²∂V/∂r = −e^(−λr), and ∂(r²∂V/∂r)/∂r = λe^(−λr). Therefore ∇²V = λ(e^(−λr)/r²) = −λ(∂V/∂r). Appendix II. The true-form potential V(r) = ∫∞r(e^(−λz)/z²)dz may be integrated by parts, using u = 1/z² and dv = e^(−λz)dz, to give V(r) = (1/λ)e^(−λr)/r² − (2/λ)∫∞r(e^(−λz)/z³)dz. But e^(−λr)/r² = −dV/dr, so V(r) = −(1/λ)dV/dr − (2/λ)∫∞r(e^(−λz)/z³)dz, which can be solved for dV/dr to give dV/dr = −λV(r) − 2∫∞r(e^(−λz)/z³)dz. When this expression for dV/dr is substituted into the expression for the Laplacian, ∇²V(r) = −λ(∂V/∂r), the result is: ∇²V(r) = λ²V(r) + 2λX(r), where X(r) = ∫∞r(e^(−λz)/z³)dz. For large values of r, X(r) is small. Also λ is a large number, so 2λ, the coefficient of the second term, is small in comparison with λ², the coefficient of the first term. Therefore the true-form potential is essentially the same as the Yukawa potential for large values of r. The term W(r) is closely related to what is known in mathematics as the exponential integral function, Ei(s) = ∫(exp(−s)/s)ds. Shown below are the values of U(r), V(r) and W(r) for the case λ = 1, based upon numerical integration in the case of V(r). Alternatively the distance variable may be considered to be measured in units of 1/λ. The graph below goes up to r = 20, where the potential is essentially zero as compared with the values in the vicinity of r = 1. The potential variable plotted is the difference between the potential at a particular value of r and its value at r = 20.
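The table and graph described above did not survive in this copy, but the values can be regenerated by numerical integration. A sketch for λ = 1 (the quadrature step and the cut-off of the improper integrals at z = 60 are arbitrary choices):

```python
import math

lam = 1.0   # lambda = 1, i.e. distances measured in units of 1/lambda

def tail_integral(g, a, upper=60.0, n=100000):
    """Trapezoidal approximation of the integral of g from a to infinity (cut off at upper)."""
    h = (upper - a) / n
    total = 0.5 * (g(a) + g(upper))
    for i in range(1, n):
        total += g(a + i * h)
    return total * h

def U(r):   # Yukawa potential (the +/- g^2 factor is dropped)
    return math.exp(-lam * r) / r

def V(r):   # "true-form" potential
    return tail_integral(lambda z: math.exp(-lam * z) / z**2, r)

def W(r):   # exponential-integral-type term
    return tail_integral(lambda z: math.exp(-lam * z) / z, r)

print("    r      U(r)      V(r)      W(r)   U - lam*W")
for r in (0.5, 1.0, 2.0, 5.0, 10.0, 20.0):
    u, v, w = U(r), V(r), W(r)
    print(f"{r:5.1f}  {u:8.5f}  {v:8.5f}  {w:8.5f}  {u - lam * w:9.5f}")
# The last column matches V(r), illustrating the identity V(r) = U(r) - lam*W(r) derived above.
```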
http://www.vcharkarn.com/vcafe/140696
Wind turbines are designed to exploit the wind energy that exists at a location. Aerodynamic modelling is used to determine the optimum tower height, control systems, number of blades, and blade shape. Energy in fluid is contained in four different forms: gravitational potential energy, thermodynamic pressure, kinetic energy from the velocity and finally thermal energy. Gravitational and thermal energy have a negligible effect on the energy extraction process. From a macroscopic point of view, the air flow about the wind turbine is at atmospheric pressure. If pressure is constant then only kinetic energy is extracted. However up close near the rotor itself the air velocity is constant as it passes through the rotor plane. This is because of conservation of mass. The air that passes through the rotor cannot slow down because it needs to stay out of the way of the air behind it. So at the rotor the energy is extracted by a pressure drop. The air directly behind the wind turbine is at sub-atmospheric pressure; the air in front is under greater than atmospheric pressure. It is this high pressure in front of the wind turbine that deflects some of the upstream air around the turbine. Albert Betz was together with Lancaster the first to study this phenomenon. He notably determined the maximum limit to wind turbine performance. The limit is now referred to as the Betz limit. This is derived by looking at the axial momentum of the air passing through the wind turbine. As stated above some of the air is deflected away from the turbine. This causes the air passing through the rotor plane to have a smaller velocity than the free stream velocity. The ratio of this reduction to that of the air velocity far away from the wind turbine is called the axial induction factor. It is defined as below. a is the axial induction factor. U1 is the wind speed far away from the rotor. U2 is the wind speed at the rotor. The first step to deriving the Betz limit is applying conservation of axial momentum. As stated above the wind loses speed after the wind turbine compared to the speed far away from the turbine. This would violate the conservation of momentum if the wind turbine was not applying a thrust force on the flow. This thrust force manifests itself through the pressure drop across the rotor. The front operates at high pressure while the back operates at low pressure. The pressure difference from the front to back causes the thrust force. The momentum lost in the turbine is balanced by the thrust force. Another equation is needed to relate the pressure difference to the velocity of the flow near the turbine. Here the Bernoulli equation is used between the field flow and the flow near the wind turbine. There is one limitation to the Bernoulli equation: the equation cannot be applied to fluid passing through the wind turbine. Instead conservation of mass is used to relate the incoming air to the outlet air. Betz used these equations and managed to solve the velocities of the flow in the far wake and near the wind turbine in terms of the far field flow and the axial induction factor. The velocities are given below. U4 is introduced here as the wind velocity in the far wake. This is important because the power extracted from the turbine is defined by the following equation. However the Betz limit is given in terms of the coefficient of power. The coefficient of power is similar to efficiency but not the same. The formula for the coefficient of power is given beneath the formula for power. 
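The power and coefficient-of-power formulas referred to just above were lost from this copy. In the standard actuator-disk notation (stated here as an assumption, not recovered from this page), U2 = U1(1 − a), U4 = U1(1 − 2a), the extracted power is P = ½ρAU1³ · 4a(1 − a)², and Cp = P/(½ρAU1³) = 4a(1 − a)². A small Python sketch that maximizes Cp numerically and evaluates an example rotor:

```python
import math

def power_coefficient(a):
    """Cp = 4a(1-a)^2 for an ideal actuator disk (standard textbook form, assumed here)."""
    return 4.0 * a * (1.0 - a) ** 2

# Scan the axial induction factor a and locate the maximum: the Betz limit
best_a = max((a / 10000 for a in range(5001)), key=power_coefficient)
print(best_a, power_coefficient(best_a), 16 / 27)   # ~0.3333, ~0.5926, 0.5926

# Example: ideal power for a 40 m radius rotor in a 10 m/s wind
rho, radius, U1 = 1.225, 40.0, 10.0                 # kg/m^3, m, m/s
P = 0.5 * rho * math.pi * radius**2 * U1**3 * power_coefficient(1 / 3)
print(f"{P / 1e6:.2f} MW")                          # ~1.82 MW
```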
Betz was able to develop an expression for Cp in terms of the induction factors. This is done by the velocity relations being substituted into power and power is substituted into the coefficient of power definition. The relationship Betz developed is given below. The Betz limit is defined by the maximum value that can be given by the above formula. This is found by taking the derivative with respect to the axial induction factor, setting it to zero and solving for the axial induction factor. Betz was able to show that the optimum axial induction factor is one third. The optimum axial induction factor was then used to find the maximum coefficient of power. This maximum coefficient is the Betz limit. Betz was able to show that the maximum coefficient of power of a wind turbine is 16/27. Airflow operating at higher thrust will cause the axial induction factor to rise above the optimum value. Higher thrust cause more air to be deflected away from the turbine. When the axial induction factor falls below the optimum value the wind turbine is not extracting all the energy it can. This reduces pressure around the turbine and allows more air to pass through the turbine, but not enough to account for lack of energy being extracted. The derivation of the Betz limit shows a simple analysis of wind turbine aerodynamics. In reality there is a lot more. A more rigorous analysis would include wake rotation, the effect of variable geometry. The effect of air foils on the flow is a major component of wind turbine aerodynamics. Within airfoils alone, the wind turbine aerodynamicist has to consider the effect of surface roughness, dynamic stall tip losses, solidity, among other problems. One key difference between actual turbines and the actuator disk, is that the energy is extracted through torque. The wind imparts a torque on the wind turbine, thrust is a necessary by-product of torque. Newtonian physics dictates that for every action there is an equal and opposite reaction. If the wind imparts a torque on the blades then the blades must be imparting a torque on the wind. This torque would then cause the flow to rotate. Thus the flow in the wake has two components, axial and tangential. This tangential flow is referred to as wake rotation. Torque is necessary for energy extraction. However wake rotation is considered a loss. Accelerating the flow in the tangential direction increases the absolute velocity. This in turn increases the amount of kinetic energy in the near wake. This rotational energy is not dissipated in any form that would allow for a greater pressure drop (Energy extraction). Thus any rotational energy in the wake is energy that is lost and unavailable. This loss is minimized by allowing the rotor to rotate very quickly. To the observer it may seem like the rotor is not moving fast; however, it is common for the tips to be moving through the air at 6 times the speed of the free stream. Newtonian mechanics defines power as torque multiplied by the rotational speed. The same amount of power can be extracted by allowing the rotor to rotate faster and produce less torque. Less torque means that there is less wake rotation. Less wake rotation means there is more energy available to extract. The simplest model for Horizontal Axis Wind Turbine Aerodynamics is Blade Element Momentum (BEM) Theory. The theory is based on the assumption that the flow at a given annulus does not effect the flow at adjacent annuli. 
This allows the rotor blade to be analyzed in sections, where the resulting forces are summed over all sections to get the overall forces of the rotor. The theory uses both axial and angular momentum balances to determine the flow and the resulting forces at the blade. The momentum equations for the far field flow dictate that the thrust and torque will induce a secondary flow in the approaching wind. This in-turn affects the flow geometry at the blade. The blade itself is the source of these thrust and torque forces. The force response of the blades is governed by the geometry of the flow, or better known as the angle of attack. Refer to the Airfoil article for more information on how airfoils create lift and drag forces at various angles of attack. This interplay between the far field momentum balances and the local blade forces requires one to solve the momentum equations and the airfoil equations simultaneously. Typically computers and numerical methods are employed to solve these models. There is a lot of variation between different version of BEM theory. First, one can consider the effect of wake rotation or not. Second, one can go further and consider the pressure drop induced in wake rotation. Third, the tangential induction factors can be solved with a momentum equation, an energy balance or orthognal geometric constraint; the latter a result of Biot-Savart law in vortex methods. These all lead to different set of equations that need to be solved. The simplest and most widely used equations are those that consider wake rotation with the momentum equation but ignore the pressure drop from wake rotation. Those equations are given below. a is the axial component of the induced flow, a' is the tangential component of the induced flow. is the solidity of the rotor, is the local inflow angle. and are the coefficient of normal force and the coefficient of tangential force respectively. Both these coefficients are defined with the resulting lift and drag coefficients of the airfoil. Blade Element Momentum (BEM) theory alone fails to accurately represent the true physics of real wind turbines. Two major shortcomings are the effect of discrete number of blades and far field effects when the turbine is heavily loaded. Secondary short-comings come from dealing with transient effects like dynamic stall, rotational effects like coriolis and centrifugal pumping, finally geometric effects that arise from coned and yawed rotors. The current state of the art in BEM uses corrections to deal with the major shortcoming. These corrections are discussed below. There is as yet no accepted treatment for the secondary shortcomings. These areas remain a highly active area of research in wind turbine aerodynamics. The effect of the discrete number of blades is dealt with by applying the Prandtl tip loss factor. The most common form of this factor is given below where B is the number of blades, R is the outer radius and r is the local radius. The definition of F is based on actuator disk models and not directly applicable to BEM. However the most common application multiplies induced velocity term by F in the momentum equations. As in the momentum equation there are many variations for applying F, some argue that the mass flow should be corrected in either the axial equation, or both axial and tangential equations. Others have suggested a second tip loss term to account for the reduced blade forces at the tip. Below shows above momentum equations with the most common application of F. 
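The momentum equations promised above are also missing from this copy. The sketch below shows one common textbook formulation of the BEM fixed-point iteration for a single annulus, with the Prandtl tip-loss factor F applied to the induced-velocity terms; the rotor parameters and the crude airfoil polar are invented purely for illustration, not taken from any real blade.

```python
import math

B, R = 3, 40.0                                   # number of blades, rotor radius [m]
r, chord, twist = 30.0, 1.5, math.radians(4.0)   # local radius, chord [m], twist [rad]
U_inf, omega = 8.0, 2.0                          # wind speed [m/s], rotor speed [rad/s]
sigma = B * chord / (2 * math.pi * r)            # local solidity

def polar(alpha):
    """Very crude lift/drag model: thin-airfoil lift slope, constant drag (an assumption)."""
    return 2 * math.pi * alpha, 0.01             # Cl, Cd

a, a_prime = 0.3, 0.0
for _ in range(200):
    phi = math.atan2((1 - a) * U_inf, (1 + a_prime) * omega * r)   # local inflow angle
    alpha = phi - twist
    Cl, Cd = polar(alpha)
    Cn = Cl * math.cos(phi) + Cd * math.sin(phi)                   # normal-force coefficient
    Ct = Cl * math.sin(phi) - Cd * math.cos(phi)                   # tangential-force coefficient
    # Prandtl tip-loss factor
    F = (2 / math.pi) * math.acos(math.exp(-B * (R - r) / (2 * r * math.sin(phi))))
    a_new = 1.0 / (4 * F * math.sin(phi) ** 2 / (sigma * Cn) + 1)
    ap_new = 1.0 / (4 * F * math.sin(phi) * math.cos(phi) / (sigma * Ct) - 1)
    if abs(a_new - a) < 1e-8 and abs(ap_new - a_prime) < 1e-8:
        break
    a, a_prime = a_new, ap_new

print(f"a = {a:.4f}, a' = {a_prime:.5f}, inflow angle = {math.degrees(phi):.2f} deg")
```

In practice the direct fixed-point update shown here is often under-relaxed to help convergence near heavily loaded conditions.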
The typical momentum theory applied in BEM is only effective for axial induction factors up to 0.4 (thrust coefficient of 0.96). Beyond this point the wake collapses and turbulent mixing occurs. This state is highly transient and largely unpredictable by theoretical means. Accordingly, several empirical relations have been developed. As the usual case there are several version, however a simple one that is commonly uses is a linear curve fit given below, with . The turbulent wake function given excludes the tip loss function, however the tip loss is applied simply by multiplying the resulting axial induction by the tip loss function. BEM is widely used due to its simplicity and overall accuracy. Limited success has been made with computational flow solvers based on Reynolds Averaged Navier Stokes (RANS) and other similar three-dimensional models. This is primarily due to the shear complexity modeling wind turbines. Wind turbine aerodynamics are dependent on far field conditions, several rotor diameters up and down stream, while at the same time being dependent on small scale flow conditions at the blade. Coupled with body motion, the need to have fine resolution and a large domain makes these models highly computationally intensive. For all practical purposes this approach is not worth it. As such these methods are relegated to research. One method that is commonly applied is Biot-Savart law. The model assumes that the wind turbine rotor is shedding a continuous sheet of vortices at the tip, and sometimes the root or along the blade as in lifting line theory. Biot-Savart law is applied to determine how the circulation of these vortices induces a flow in the far field. These methods have largely confirmed much of the applicability of BEM and shed insight on the structure of wind turbine wakes. Vortex methods have limitations due to its grounding in potential flow theory, as such cannot model viscous behavior. These methods are still computationally intesive and still rely on blade element theory for the blade forces. Just like RANS vortex methods are found solely in research environments. Typically, in daytime the variation follows the Wind profile power law, which predicts that wind speed rises proportionally to the seventh root of altitude. Doubling the altitude of a turbine, then, increases the expected wind speeds by 10% and the expected power by 34%. To avoid buckling, doubling the tower height generally requires doubling the diameter of the tower as well, increasing the amount of material by a factor of eight. At night time, or when the atmosphere becomes stable, wind speed close to the ground usually subsides whereas at turbine hub altitude it does not decrease that much or may even increase. As a result the wind speed is higher and a turbine will produce more power than expected from the 1/7th power law: doubling the altitude may increase wind speed by 20% to 60%. A stable atmosphere is caused by radiative cooling of the surface and is common in a temperate climate: it usually occurs when there is a (partly) clear sky at night. When the (high altitude) wind is strong (a 10-meter (33 ft) wind speed higher than approximately 6 to 7 m/s (20-23 ft/s)) the stable atmosphere is disrupted because of friction turbulence and the atmosphere will turn neutral. A daytime atmosphere is either neutral (no net radiation; usually with strong winds and/or heavy clouding) or unstable (rising air because of ground heating — by the sun). 
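A quick arithmetic check of the wind-shear figures quoted above:

```python
# Wind profile power law: U(h2)/U(h1) = (h2/h1)**(1/7), and power scales as U**3
speed_ratio = 2.0 ** (1.0 / 7.0)
print(f"doubling height: wind speed +{100 * (speed_ratio - 1):.0f}%, "
      f"power +{100 * (speed_ratio**3 - 1):.1f}%")          # +10%, +34.6%

# Stable night-time air: the 20%-60% speed gains quoted above, translated into power
for gain in (0.2, 0.6):
    print(f"+{100 * gain:.0f}% wind speed -> +{100 * ((1 + gain)**3 - 1):.0f}% power")
```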
Here again the 1/7th power law applies or is at least a good approximation of the wind profile. Indiana had been rated as having a wind capacity of 30 MW, but by raising the expected turbine height from 50 m to 70 m, the wind capacity estimate was raised to 40,000 MW, and could be double that at 100 m. For HAWTs, tower heights approximately two to three times the blade length have been found to balance material costs of the tower against better utilisation of the more expensive active components. Wind turbines developed over the last 50 years have almost universally used either two or three blades. Aerodynamic efficiency increases with number of blades but with diminishing return. Increasing the number of blades from one to two yields a six percent increase in aerodynamic efficiency, whereas increasing the blade count from two to three yields only an additional three percent in efficiency. Further increasing the blade count yields minimal improvements in aerodynamic efficiency and sacrifices too much in blade stiffness as the blades become thinner. Component costs that are affected by blade count are primarily for materials and manufacturing of the turbine rotor and drive train. Generally, the fewer the number of blades, the lower the material and manufacturing costs will be. In addition, the fewer the number of blades, the higher the rotational speed will be. This is because blade stiffness requirements to avoid interference with the tower limit how thin the blades can be. Fewer blades with higher rotational speeds reduce peak torques in the drive train, resulting in lower gearbox and generator costs. System reliability is affected by blade count primarily through the dynamic loading of the rotor into the drive train and tower systems. While aligning the wind turbine to changes in wind direction (yawing), each blade experiences a cyclic load at its root end depending on blade position. This is true of one, two, three blades or more. However, these cyclic loads when combined together at the drive train shaft are symmetrically balanced for three blades, yielding smoother operation during turbine yaw. Turbines with one or two blades can use a pivoting teetered hub to also nearly eliminate the cyclic loads into the drive shaft and system during yawing. Finally, aesthetics can be considered a factor in that some people find that the three-bladed rotor is more pleasing to look at than a one- or two-bladed rotor. Modern wind turbines are designed to spin at varying speeds (a consequence of their generator design, see below). Use of aluminum and composites in their blades has contributed to low rotational inertia, which means that newer wind turbines can accelerate quickly if the winds pick up, keeping the tip speed ratio more nearly constant. Operating closer to their optimal tip speed ratio during energetic gusts of wind allows wind turbines to improve energy capture from sudden gusts that are typical in urban settings. In contrast, older style wind turbines were designed with heavier steel blades, which have higher inertia, and rotated at speeds governed by the AC frequency of the power lines. The high inertia buffered the changes in rotation speed and thus made power output more stable. The speed and torque at which a wind turbine rotates must be controlled for several reasons: Overspeed control is exerted in two main ways: aerodynamic stalling or furling, and mechanical braking. Furling is the preferred method of slowing wind turbines. 
Furling works by decreasing the angle of attack, which reduces the induced drag from the lift of the rotor, as well as the cross-section. One major problem in designing wind turbines is getting the blades to stall or furl quickly enough should a gust of wind cause sudden acceleration. A fully furled turbine blade, when stopped, has the edge of the blade facing into the wind. A fixed-speed HAWT inherently increases its angle of attack at higher wind speed as the blades speed up. A natural strategy, then, is to allow the blade to stall when the wind speed increases. This technique was successfully used on many early HAWTs. However, on some of these blade sets, it was observed that the degree of blade pitch tended to increase audible noise levels. Standard modern turbines all furl the blades in high winds. Since furling requires acting against the torque on the blade, it requires some form of pitch angle control. Many turbines use hydraulic systems. These systems are usually spring loaded, so that if hydraulic power fails, the blades automatically furl. Other turbines use an electric servomotor for every rotor blade. They have a small battery-reserve in case of an electric-grid breakdown. Small wind turbines (under 50 kW) with variable-pitching generally use systems operated by centrifugal force, either by flyweights or geometric design, and employ no electric or hydraulic controls. The variable wind speed wind turbine uses furling as its main method of rotation control. The wind turbines have three modes of operation: At above rated wind speed the rotors furl at an angle to maintain the torque. This is also known as feathering. Braking of a turbine can also be done by dumping energy from the generator into a resistor bank, converting the kinetic energy of the turbine rotation into heat. This method is useful if the kinetic load on the generator is suddenly reduced or is too small to keep the turbine speed within its allowed limit. Cyclically braking causes the blades to slow down, which increases the stalling effect, reducing the efficiency of the blades. This way, the turbine's rotation can be kept at a safe speed in faster winds while maintaining (nominal) power output. For a given survivable wind speed, the mass of a turbine is approximately proportional to the cube of its blade-length. Wind power intercepted by the turbine is proportional to the square of its blade-length. The maximum blade-length of a turbine is limited by both the strength and stiffness of its material. Labor and maintenance costs increase only gradually with increasing turbine size, so to minimize costs, wind farm turbines are basically limited by the strength of materials, and siting requirements. Typical modern wind turbines have diameters of 40 to 90 meters (130-300 ft) and are rated between 500 KW and 2 MW. Currently (2005) the most powerful turbine is rated at 6 MW. For large, commercial size horizontal-axis wind turbines, the generator is mounted in a nacelle at the top of a tower, behind the hub of the turbine rotor. Typically wind turbines generate electricity through asynchronous machines that are directly connected with the electricity grid. Usually the rotational speed of the wind turbine is slower than the equivalent rotation speed of the electrical network - typical rotation speeds for a wind generators are 5-20 rpm while a directly connected machine will have an electrical speed between 750-3600 rpm. Therefore, a gearbox is inserted between the rotor hub and the generator. 
This also reduces the generator cost and weight. Commercial size generators have a rotor carrying a field winding so that a rotating magnetic field is produced inside a set of windings called the stator. While the rotating field winding consumes a fraction of a percent of the generator output, adjustment of the field current allows good control over the generator output voltage. Very small wind generators (a few watts to perhaps a kilowatt in output) may use permanent magnets but these are too costly to use in large machines and do not allow convenient regulation of the generator voltage. Electrical generators inherently produce AC power. Older style wind generators rotate at a constant speed, to match power line frequency, which allowed the use of less costly induction generators. Newer wind turbines often turn at whatever speed generates electricity most efficiently. This can be solved using multiple technologies such as doubly fed induction generators or full-effect converters where the variable frequency current produced is converted to DC and then back to AC, matching the line frequency and voltage. Although such alternatives require costly equipment and cause power loss, the turbine can capture a significantly larger fraction of the wind energy. In some cases, especially when turbines are sited offshore, the DC energy will be transmitted from the turbine to a central (onshore) inverter for connection to the grid. Current production wind turbine blades are manufactured as large as 80 meters in diameter with prototypes in the range of 100 to 120 meters. In 2001, an estimated 50 million kilograms of fiberglass laminate were used in wind turbine blades. New materials and manufacturing methods provide the opportunity to improve wind turbine efficiency by allowing for larger, stronger blades. One of the most important goals when designing larger blade systems is to keep blade weight under control. Since gravity scales as the cube of the turbine radius, loading due to gravity becomes a constraining design factor for systems with larger blades. Current manufacturing methods for blades in the 40 to 50 meter range involve various proven fiberglass composite fabrication techniques. Manufactures such as Nordex and GE Wind use a hand lay-up, open-mold, wet process for blade manufacture. Other manufacturers use variations on this technique, some including carbon and wood with fiberglass in an epoxy matrix. Options also include prepreg fiberglass and vacuum-assisted resin transfer molding. Essentially each of these options are variations on the same theme: a glass-fiber reinforced polymer composite constructed through various means with differing complexity. Perhaps the largest issue with more simplistic, open-mold, wet systems are the emissions associated with the volatile organics released into the atmosphere. Preimpregnated materials and resin infusion techniques avoid the release of volatiles by containing all reaction gases. However, these contained processes have their own challenges, namely the production of thick laminates necessary for structural components becomes more difficult. As the perform resin permeability dictates the maximum laminate thickness, bleeding is required to eliminate voids and insure proper resin distribution. A unique solution to resin distribution is the use of a partially preimpregnated fiberglass. 
During evacuation, the dry fabric provides a path for airflow and, once heat and pressure are applied, resin may flow into the dry region resulting in a thoroughly impregnated laminate structure. Epoxy based composites are of greatest interest to wind turbine manufacturers because they deliver a key combination of environmental, production, and cost advantages over other resin systems. Epoxies also improve wind turbine blade composite manufacture by allowing for shorter cure cycles, increased durability, and improved surface finish. Prepreg operations further improve cost-effective operations by reducing processing cycles, and therefore manufacturing time, over wet lay-up systems. As turbine blades are approaching 60 meters and greater, infusion techniques are becoming more prevalent as the traditional resin transfer moulding injection time is too long as compared to the resin set up time, thus limiting laminate thickness. Injection forces resin through a thicker ply stack, thus depositing the resin where in the laminate structure before gelatin occurs. Specialized epoxy resins have been developed to customize lifetimes and viscosity to tune resin performance in injection applications. Carbon fiber reinforced load bearing spars have recently been identified as a cost-effective means for reducing weight and increasing stiffness. The use of carbon fibers in 60 meter turbine blades is estimated to result in a 38% reduction in total blade mass and a 14% decrease in cost as compared to a 100% fiberglass design. The use of carbon fibers has the added benefit of reducing the thickness of fiberglass laminate sections, further addressing the problems associated with resin wetting of thick lay-up sections. Wind turbine applications of carbon fiber may also benefit from the general trend of increasing use and decreasing cost of carbon fiber materials. Smaller blades can be made from light metals such as aluminum. Wood and canvas sails were originally used on early windmills due to their low price, availability, and ease of manufacture. These materials, however, require frequent maintenance during their lifetime. Also, wood and canvas have a relatively high drag (low aerodynamic efficiency) as compared to the force they capture. For these reasons they have been mostly replaced by solid airfoils.
http://www.reference.com/browse/Blades-R
In calculus, a branch of mathematics, the derivative is a measure of how a function changes as its input changes. Loosely speaking, a derivative can be thought of as how much one quantity is changing in response to changes in some other quantity; for example, the derivative of the position of a moving object with respect to time is the object's instantaneous velocity. The derivative of a function at a chosen input value describes the best linear approximation of the function near that input value. Informally, the derivative is the ratio of the infinitesimal change of the output over the infinitesimal change of the input producing that change of output. For a real-valued function of a single real variable, the derivative at a point equals the slope of the tangent line to the graph of the function at that point. In higher dimensions, the derivative of a function at a point is a linear transformation called the linearization. A closely related notion is the differential of a function. The process of finding a derivative is called differentiation. The reverse process is called antidifferentiation. The fundamental theorem of calculus states that antidifferentiation is the same as integration. Differentiation and integration constitute the two fundamental operations in single-variable calculus. Differentiation and the derivative Differentiation is a method to compute the rate at which a dependent output y changes with respect to the change in the independent input x. This rate of change is called the derivative of y with respect to x. In more precise language, the dependence of y upon x means that y is a function of x. This functional relationship is often denoted y = f(x), where f denotes the function. If x and y are real numbers, and if the graph of y is plotted against x, the derivative measures the slope of this graph at each point. The simplest case is when y is a linear function of x, meaning that the graph of y divided by x is a straight line. In this case, y = f(x) = m x + b, for real numbers m and b, and the slope m is given by where the symbol Δ (the uppercase form of the Greek letter Delta) is an abbreviation for "change in." This formula is true because - y + Δy = f(x+ Δx) = m (x + Δx) + b = m x + b + m Δx = y + mΔx. It follows that Δy = m Δx. This gives an exact value for the slope of a straight line. If the function f is not linear (i.e. its graph is not a straight line), however, then the change in y divided by the change in x varies: differentiation is a method to find an exact value for this rate of change at any given value of x. suggesting the ratio of two infinitesimal quantities. (The above expression is read as "the derivative of y with respect to x", "d y by d x", or "d y over d x". The oral form "d y d x" is often used conversationally, although it may lead to confusion.) Definition via difference quotients Let f be a real valued function. In classical geometry, the tangent line to the graph of the function f at a real number a was the unique line through the point (a, f(a)) that did not meet the graph of f transversally, meaning that the line did not pass straight through the graph. The derivative of y with respect to x at a is, geometrically, the slope of the tangent line to the graph of f at a. The slope of the tangent line is very close to the slope of the line through (a, f(a)) and a nearby point on the graph, for example (a + h, f(a + h)). These lines are called secant lines. 
A value of h close to zero gives a good approximation to the slope of the tangent line, and smaller values (in absolute value) of h will, in general, give better approximations. The slope m of the secant line is the difference between the y values of these points divided by the difference between the x values, that is, This expression is Newton's difference quotient. The derivative is the value of the difference quotient as the secant lines approach the tangent line. Formally, the derivative of the function f at a is the limit of the difference quotient as h approaches zero, if this limit exists. If the limit exists, then f is differentiable at a. Here f′ (a) is one of several common notations for the derivative (see below). Equivalently, the derivative satisfies the property that which has the intuitive interpretation (see Figure 1) that the tangent line to f at a gives the best linear approximation to f near a (i.e., for small h). This interpretation is the easiest to generalize to other settings (see below). Substituting 0 for h in the difference quotient causes division by zero, so the slope of the tangent line cannot be found directly using this method. Instead, define Q(h) to be the difference quotient as a function of h: Q(h) is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). If f is a continuous function, meaning that its graph is an unbroken curve with no gaps, then Q is a continuous function away from h = 0. If the limit exists, meaning that there is a way of choosing a value for Q(0) that makes the graph of Q a continuous function, then the function f is differentiable at a, and its derivative at a equals Q(0). In practice, the existence of a continuous extension of the difference quotient Q(h) to h = 0 is shown by modifying the numerator to cancel h in the denominator. Such manipulations can make the limiting value of Q for small h clear even though Q is still not defined at h = 0. This process can be long and tedious for complicated functions, and many shortcuts are commonly used to simplify the process. The squaring function f(x) = x² is differentiable at x = 3, and its derivative there is 6. This result is established by calculating the limit as h approaches zero of the difference quotient of f(3): The last expression shows that the difference quotient equals 6 + h when h ≠ 0 and is undefined when h = 0, because of the definition of the difference quotient. However, the definition of the limit says the difference quotient does not need to be defined when h = 0. The limit is the result of letting h go to zero, meaning it is the value that 6 + h tends to as h becomes very small: Hence the slope of the graph of the squaring function at the point (3, 9) is 6, and so its derivative at x = 3 is f '(3) = 6. More generally, a similar computation shows that the derivative of the squaring function at x = a is f '(a) = 2a. Continuity and differentiability If y = f(x) is differentiable at a, then f must also be continuous at a. As an example, choose a point a and let f be the step function that returns a value, say 1, for all x less than a, and returns a different value, say 10, for all x greater than or equal to a. f cannot have a derivative at a. If h is negative, then a + h is on the low part of the step, so the secant line from a to a + h is very steep, and as h tends to zero the slope tends to infinity. If h is positive, then a + h is on the high part of the step, so the secant line from a to a + h has slope zero. 
Consequently the secant lines do not approach any single slope, so the limit of the difference quotient does not exist. However, even if a function is continuous at a point, it may not be differentiable there. For example, the absolute value function y = |x| is continuous at x = 0, but it is not differentiable there. If h is positive, then the slope of the secant line from 0 to h is one, whereas if h is negative, then the slope of the secant line from 0 to h is negative one. This can be seen graphically as a "kink" or a "cusp" in the graph at x = 0. Even a function with a smooth graph is not differentiable at a point where its tangent is vertical: For instance, the function y = x1/3 is not differentiable at x = 0. Most functions that occur in practice have derivatives at all points or at almost every point. Early in the history of calculus, many mathematicians assumed that a continuous function was differentiable at most points. Under mild conditions, for example if the function is a monotone function or a Lipschitz function, this is true. However, in 1872 Weierstrass found the first example of a function that is continuous everywhere but differentiable nowhere. This example is now known as the Weierstrass function. In 1931, Stefan Banach proved that the set of functions that have a derivative at some point is a meager set in the space of all continuous functions. Informally, this means that hardly any continuous functions have a derivative at even one point. The derivative as a function Let f be a function that has a derivative at every point a in the domain of f. Because every point a has a derivative, there is a function that sends the point a to the derivative of f at a. This function is written f′(x) and is called the derivative function or the derivative of f. The derivative of f collects all the derivatives of f at all the points in the domain of f. Sometimes f has a derivative at most, but not all, points of its domain. The function whose value at a equals f′(a) whenever f′(a) is defined and elsewhere is undefined is also called the derivative of f. It is still a function, but its domain is strictly smaller than the domain of f. Using this idea, differentiation becomes a function of functions: The derivative is an operator whose domain is the set of all functions that have derivatives at every point of their domain and whose range is a set of functions. If we denote this operator by D, then D(f) is the function f′(x). Since D(f) is a function, it can be evaluated at a point a. By the definition of the derivative function, D(f)(a) = f′(a). For comparison, consider the doubling function f(x) = 2x; f is a real-valued function of a real number, meaning that it takes numbers as inputs and has numbers as outputs: The operator D, however, is not defined on individual numbers. It is only defined on functions: Because the output of D is a function, the output of D can be evaluated at a point. For instance, when D is applied to the squaring function, D outputs the doubling function, which we named f(x). This output function can then be evaluated to get f(1) = 2, f(2) = 4, and so on. Higher derivatives Let f be a differentiable function, and let f′(x) be its derivative. The derivative of f′(x) (if it has one) is written f′′(x) and is called the second derivative of f. Similarly, the derivative of a second derivative, if it exists, is written f′′′(x) and is called the third derivative of f. These repeated derivatives are called higher-order derivatives. 
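The operator picture described above can be mimicked numerically: D takes a function and returns an approximation to its derivative function, which can itself be fed back into D to obtain higher derivatives. A sketch using a symmetric difference quotient (the step size h is an arbitrary choice):

```python
def D(f, h=1e-5):
    """Return a numerical approximation to the derivative function of f."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

square = lambda x: x * x

doubling = D(square)                            # D applied to the squaring function
print(doubling(1), doubling(2), doubling(3))    # ~2, 4, 6: the doubling function

second = D(D(square))                           # second derivative of x^2 is the constant 2
print(second(5))                                # ~2

# f(x) = x*|x| (discussed further below) has a first derivative, 2|x|,
# but that derivative itself has no derivative at 0.
f = lambda x: x * abs(x)
print(D(f)(1.0), D(f)(-1.0))                    # ~2, 2
```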
If x(t) represents the position of an object at time t, then the higher-order derivatives of x have physical interpretations. The second derivative of x is the derivative of x′(t), the velocity, and by definition this is the object's acceleration. The third derivative of x is defined to be the jerk, and the fourth derivative is defined to be the jounce. A function f need not have a derivative, for example, if it is not continuous. Similarly, even if f does have a derivative, it may not have a second derivative. For example, let Calculation shows that f is a differentiable function whose derivative is f′(x) is twice the absolute value function, and it does not have a derivative at zero. Similar examples show that a function can have k derivatives for any non-negative integer k but no (k + 1)-order derivative. A function that has k successive derivatives is called k times differentiable. If in addition the kth derivative is continuous, then the function is said to be of differentiability class Ck. (This is a stronger condition than having k derivatives. For an example, see differentiability class.) A function that has infinitely many derivatives is called infinitely differentiable or smooth. On the real line, every polynomial function is infinitely differentiable. By standard differentiation rules, if a polynomial of degree n is differentiated n times, then it becomes a constant function. All of its subsequent derivatives are identically zero. In particular, they exist, so polynomials are smooth functions. The derivatives of a function f at a point x provide polynomial approximations to that function near x. For example, if f is twice differentiable, then in the sense that If f is infinitely differentiable, then this is the beginning of the Taylor series for f evaluated at x+h around x. Inflection point A point where the second derivative of a function changes sign is called an inflection point. At an inflection point, the second derivative may be zero, as in the case of the inflection point x=0 of the function y=x3, or it may fail to exist, as in the case of the inflection point x=0 of the function y=x1/3. At an inflection point, a function switches from being a convex function to being a concave function or vice versa. Notations for differentiation Leibniz's notation The notation for derivatives introduced by Gottfried Leibniz is one of the earliest. It is still commonly used when the equation y = f(x) is viewed as a functional relationship between dependent and independent variables. Then the first derivative is denoted by and was once thought of as an infinitesimal quotient. Higher derivatives are expressed using the notation for the nth derivative of y = f(x) (with respect to x). These are abbreviations for multiple applications of the derivative operator. For example, With Leibniz's notation, we can write the derivative of y at the point x = a in two different ways: Leibniz's notation allows one to specify the variable for differentiation (in the denominator). This is especially relevant for partial differentiation. It also makes the chain rule easy to remember: Lagrange's notation Sometimes referred to as prime notation, one of the most common modern notations for differentiation is due to Joseph-Louis Lagrange and uses the prime mark, so that the derivative of a function f(x) is denoted f′(x) or simply f′. 
Similarly, the second and third derivatives are denoted f′′ and f′′′. To denote the number of derivatives beyond this point, some authors use Roman numerals in superscript, as in f^iv, whereas others place the number in parentheses, as in f^(4). The latter notation generalizes to yield the notation f^(n) for the nth derivative of f; this notation is most useful when we wish to talk about the derivative as being a function itself, as in this case the Leibniz notation can become cumbersome. Newton's notation Newton's notation for differentiation, also called the dot notation, places a dot over the function name to represent a time derivative. If y = f(t), then ẏ and ÿ denote, respectively, the first and second derivatives of y with respect to t. This notation is used exclusively for time derivatives, meaning that the independent variable of the function represents time. It is very common in physics and in mathematical disciplines connected with physics such as differential equations. While the notation becomes unmanageable for high-order derivatives, in practice only very few derivatives are needed. Euler's notation Euler's notation uses a differential operator D, which is applied to a function f to give the first derivative Df; the second derivative is denoted D²f and the nth derivative D^n f. If y = f(x) is a dependent variable, then the subscript x is often attached to the D to clarify the independent variable x. Euler's notation is then written D_x y or D_x f(x), although this subscript is often omitted when the variable x is understood, for instance when this is the only variable present in the expression. Euler's notation is useful for stating and solving linear differential equations. Computing the derivative The derivative of a function can, in principle, be computed from the definition by considering the difference quotient and computing its limit. In practice, once the derivatives of a few simple functions are known, the derivatives of other functions are more easily computed using rules for obtaining derivatives of more complicated functions from simpler ones. Derivatives of elementary functions Most derivative computations eventually require taking the derivative of some common functions. The following incomplete list gives some of the most frequently used functions of a single real variable and their derivatives. Derivatives of powers: if f(x) = x^r, where r is any real number, then f′(x) = r·x^(r−1), wherever this function is defined. For example, if f(x) = x^(1/2), then f′(x) = (1/2)·x^(−1/2), and the derivative function is defined only for positive x, not for x = 0. When r = 0, this rule implies that f′(x) is zero for x ≠ 0, which is almost the constant rule (stated below). Rules for finding the derivative In many cases, complicated limit calculations by direct application of Newton's difference quotient can be avoided using differentiation rules. Some of the most basic rules are the following. - Constant rule: if f(x) is constant, then f′(x) = 0. - Sum rule: (af + bg)′ = af′ + bg′ for all functions f and g and all real numbers a and b. - Product rule: (fg)′ = f′g + fg′ for all functions f and g. By extension, this means that the derivative of a constant times a function is the constant times the derivative of the function: (cf)′ = cf′. - Quotient rule: (f/g)′ = (f′g − fg′)/g² for all functions f and g at all inputs where g ≠ 0. - Chain rule: If f(x) = h(g(x)), then f′(x) = h′(g(x))·g′(x). Example computation The derivative of f(x) = x⁴ + sin(x²) − ln(x)·e^x + 7 is f′(x) = 4x³ + 2x·cos(x²) − e^x/x − ln(x)·e^x. Here the second term was computed using the chain rule and the third using the product rule. The known derivatives of the elementary functions x², x⁴, sin(x), ln(x) and exp(x) = e^x, as well as the constant 7, were also used. Derivatives in higher dimensions Derivatives of vector valued functions A vector-valued function y(t) of a real variable sends real numbers to vectors in some vector space Rn. A vector-valued function can be split up into its coordinate functions y1(t), y2(t), …, yn(t), meaning that y(t) = (y1(t), ..., yn(t)).
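The example computation above can be verified symbolically. The sketch below assumes SymPy is available and that the differentiated expression has the form suggested by the functions and rules just listed (x⁴, sin of x² via the chain rule, ln(x)·e^x via the product rule, and the constant 7); the exact original expression is not reproduced in the text, so treat this as an inferred reconstruction.

import sympy as sp

# The expression below is inferred from the elementary functions and
# rules named in the example computation; it is not quoted from the text.
x = sp.symbols('x', positive=True)
f = x**4 + sp.sin(x**2) - sp.log(x) * sp.exp(x) + 7
print(sp.diff(f, x))
# prints 4*x**3 + 2*x*cos(x**2) - exp(x)*log(x) - exp(x)/x (ordering may vary)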
This includes, for example, parametric curves in R2 or R3. The coordinate functions are real valued functions, so the above definition of derivative applies to them. The derivative of y(t) is defined to be the vector, called the tangent vector, whose coordinates are the derivatives of the coordinate functions. That is, y′(t) = (y1′(t), …, yn′(t)); equivalently, y′(t) is the limit of (y(t + h) − y(t))/h as h tends to zero, if the limit exists. The subtraction in the numerator is subtraction of vectors, not scalars. If the derivative of y exists for every value of t, then y′ is another vector valued function. If e1, …, en is the standard basis for Rn, then y(t) can also be written as y1(t)e1 + … + yn(t)en. If we assume that the derivative of a vector-valued function retains the linearity property, then the derivative of y(t) must be y1′(t)e1 + … + yn′(t)en, because each of the basis vectors is a constant. This generalization is useful, for example, if y(t) is the position vector of a particle at time t; then the derivative y′(t) is the velocity vector of the particle at time t. Partial derivatives Suppose that f is a function that depends on more than one variable, for instance f(x, y) = x² + xy + y². Then f can be reinterpreted as a family of functions of one variable indexed by the other variables: every value of x chooses a function, denoted fx, which is a function of one real number. That is, fx(y) = x² + xy + y². Once a value of x is chosen, say a, then f(x, y) determines a function fa that sends y to a² + ay + y², that is, fa(y) = a² + ay + y². In this expression, a is a constant, not a variable, so fa is a function of only one real variable. Consequently the definition of the derivative for a function of one variable applies: fa′(y) = a + 2y. The above procedure can be performed for any choice of a. Assembling the derivatives together into a function gives a function that describes the variation of f in the y direction: ∂f/∂y (x, y) = x + 2y. This is the partial derivative of f with respect to y. Here ∂ is a rounded d called the partial derivative symbol. To distinguish it from the letter d, ∂ is sometimes pronounced "der", "del", or "partial" instead of "dee". In general, the partial derivative of a function f(x1, …, xn) in the direction xi at the point (a1, …, an) is defined to be the limit of [f(a1, …, ai + h, …, an) − f(a1, …, ai, …, an)]/h as h tends to zero. In the above difference quotient, all the variables except xi are held fixed. That choice of fixed values determines a function of one variable, and, by definition, the partial derivative of f at a in the direction xi equals the ordinary derivative of that one-variable function at ai. In other words, the different choices of a index a family of one-variable functions just as in the example above. This expression also shows that the computation of partial derivatives reduces to the computation of one-variable derivatives. An important example of a function of several variables is the case of a scalar-valued function f(x1, …, xn) on a domain in Euclidean space Rn (e.g., on R² or R³). In this case f has a partial derivative ∂f/∂xj with respect to each variable xj. At the point a, these partial derivatives define the vector ∇f(a) = (∂f/∂x1(a), …, ∂f/∂xn(a)). This vector is called the gradient of f at a. If f is differentiable at every point in some domain, then the gradient is a vector-valued function ∇f that takes the point a to the vector ∇f(a). Consequently the gradient determines a vector field. Directional derivatives If f is a real-valued function on Rn, then the partial derivatives of f measure its variation in the direction of the coordinate axes. For example, if f is a function of x and y, then its partial derivatives measure the variation in f in the x direction and the y direction. They do not, however, directly measure the variation of f in any other direction, such as along the diagonal line y = x. These are measured using directional derivatives.
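A small numerical sketch can illustrate the partial derivatives and gradient just described, using the same example f(x, y) = x² + xy + y². The step size, sample point, and helper names are illustrative choices, and central differences stand in for the exact limits.

def f(x, y):
    return x**2 + x*y + y**2

def partial_y(f, x, y, h=1e-6):
    # hold x fixed and differentiate in the y direction
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def gradient(f, x, y, h=1e-6):
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = partial_y(f, x, y, h)
    return (dfdx, dfdy)

print(partial_y(f, 3.0, 2.0))   # ~7, matching x + 2y at (3, 2)
print(gradient(f, 3.0, 2.0))    # ~(8, 7), matching (2x + y, x + 2y)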
Choose a vector The directional derivative of f in the direction of v at the point x is the limit In some cases it may be easier to compute or estimate the directional derivative after changing the length of the vector. Often this is done to turn the problem into the computation of a directional derivative in the direction of a unit vector. To see how this works, suppose that v = λu. Substitute h = k/λ into the difference quotient. The difference quotient becomes: This is λ times the difference quotient for the directional derivative of f with respect to u. Furthermore, taking the limit as h tends to zero is the same as taking the limit as k tends to zero because h and k are multiples of each other. Therefore Dv(f) = λDu(f). Because of this rescaling property, directional derivatives are frequently considered only for unit vectors. If all the partial derivatives of f exist and are continuous at x, then they determine the directional derivative of f in the direction v by the formula: The same definition also works when f is a function with values in Rm. The above definition is applied to each component of the vectors. In this case, the directional derivative is a vector in Rm. Total derivative, total differential and Jacobian matrix When f is a function from an open subset of Rn to Rm, then the directional derivative of f in a chosen direction is the best linear approximation to f at that point and in that direction. But when n > 1, no single directional derivative can give a complete picture of the behavior of f. The total derivative, also called the (total) differential, gives a complete picture by considering all directions at once. That is, for any vector v starting at a, the linear approximation formula holds: Just like the single-variable derivative, f ′(a) is chosen so that the error in this approximation is as small as possible. If n and m are both one, then the derivative f ′(a) is a number and the expression f ′(a)v is the product of two numbers. But in higher dimensions, it is impossible for f ′(a) to be a number. If it were a number, then f ′(a)v would be a vector in Rn while the other terms would be vectors in Rm, and therefore the formula would not make sense. For the linear approximation formula to make sense, f ′(a) must be a function that sends vectors in Rn to vectors in Rm, and f ′(a)v must denote this function evaluated at v. To determine what kind of function it is, notice that the linear approximation formula can be rewritten as Notice that if we choose another vector w, then this approximate equation determines another approximate equation by substituting w for v. It determines a third approximate equation by substituting both w for v and a + v for a. By subtracting these two new equations, we get If we assume that v is small and that the derivative varies continuously in a, then f ′(a + v) is approximately equal to f ′(a), and therefore the right-hand side is approximately zero. The left-hand side can be rewritten in a different way using the linear approximation formula with v + w substituted for v. The linear approximation formula implies: This suggests that f ′(a) is a linear transformation from the vector space Rn to the vector space Rm. In fact, it is possible to make this a precise derivation by measuring the error in the approximations. Assume that the error in these linear approximation formula is bounded by a constant times ||v||, where the constant is independent of v but depends continuously on a. 
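Returning to the directional derivative defined at the start of this passage, the sketch below computes it two ways for the example function used earlier: directly from the limit definition (approximated by a central difference) and from the gradient formula. The point and direction are arbitrary illustrative choices, and the direction is deliberately not a unit vector.

def f(x, y):
    return x**2 + x*y + y**2

def directional(f, p, v, h=1e-6):
    # central-difference approximation to the limit definition
    (x, y), (vx, vy) = p, v
    return (f(x + h*vx, y + h*vy) - f(x - h*vx, y - h*vy)) / (2 * h)

p, v = (3.0, 2.0), (1.0, 1.0)
grad = (2*p[0] + p[1], p[0] + 2*p[1])    # exact gradient (8, 7)
print(directional(f, p, v))              # ~15
print(grad[0]*v[0] + grad[1]*v[1])       # 15, the gradient formula agrees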
Then, after adding an appropriate error term, all of the above approximate equalities can be rephrased as inequalities. In particular, f ′(a) is a linear transformation up to a small error term. In the limit as v and w tend to zero, it must therefore be a linear transformation. Since we define the total derivative by taking a limit as v goes to zero, f ′(a) must be a linear transformation. In one variable, the fact that the derivative is the best linear approximation is expressed by the fact that it is the limit of difference quotients. However, the usual difference quotient does not make sense in higher dimensions because it is not usually possible to divide vectors. In particular, the numerator and denominator of the difference quotient are not even in the same vector space: The numerator lies in the codomain Rm while the denominator lies in the domain Rn. Furthermore, the derivative is a linear transformation, a different type of object from both the numerator and denominator. To make precise the idea that f ′ (a) is the best linear approximation, it is necessary to adapt a different formula for the one-variable derivative in which these problems disappear. If f : R → R, then the usual definition of the derivative may be manipulated to show that the derivative of f at a is the unique number f ′(a) such that This is equivalent to because the limit of a function tends to zero if and only if the limit of the absolute value of the function tends to zero. This last formula can be adapted to the many-variable situation by replacing the absolute values with norms. The definition of the total derivative of f at a, therefore, is that it is the unique linear transformation f ′(a) : Rn → Rm such that Here h is a vector in Rn, so the norm in the denominator is the standard length on Rn. However, f′(a)h is a vector in Rm, and the norm in the numerator is the standard length on Rm. If v is a vector starting at a, then f ′(a)v is called the pushforward of v by f and is sometimes written f*v. If the total derivative exists at a, then all the partial derivatives and directional derivatives of f exist at a, and for all v, f ′(a)v is the directional derivative of f in the direction v. If we write f using coordinate functions, so that f = (f1, f2, ..., fm), then the total derivative can be expressed using the partial derivatives as a matrix. This matrix is called the Jacobian matrix of f at a: The existence of the total derivative f′(a) is strictly stronger than the existence of all the partial derivatives, but if the partial derivatives exist and are continuous, then the total derivative exists, is given by the Jacobian, and depends continuously on a. The definition of the total derivative subsumes the definition of the derivative in one variable. That is, if f is a real-valued function of a real variable, then the total derivative exists if and only if the usual derivative exists. The Jacobian matrix reduces to a 1×1 matrix whose only entry is the derivative f′(x). This 1×1 matrix satisfies the property that f(a + h) − f(a) − f ′(a)h is approximately zero, in other words that Up to changing variables, this is the statement that the function is the best linear approximation to f at a. The total derivative of a function does not give another function in the same way as the one-variable case. This is because the total derivative of a multivariable function has to record much more information than the derivative of a single-variable function. 
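The Jacobian matrix described above can be assembled column by column from partial derivatives. The sketch below does this numerically for a made-up map from R2 to R2 (a polar-style change of coordinates); the map, the step size, and the evaluation point are illustrative choices, and central differences approximate the exact partial derivatives.

import math

def f(u):
    r, t = u
    return [r * math.cos(t), r * math.sin(t)]

def jacobian(f, u, h=1e-6):
    n, m = len(u), len(f(u))
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):                       # one column per input variable
        up = list(u); up[j] += h
        um = list(u); um[j] -= h
        fp, fm = f(up), f(um)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

for row in jacobian(f, [2.0, math.pi / 6]):
    print(row)   # approximately [[cos t, -r sin t], [sin t, r cos t]]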
Instead, the total derivative gives a function from the tangent bundle of the source to the tangent bundle of the target. The natural analog of second, third, and higher-order total derivatives is not a linear transformation, is not a function on the tangent bundle, and is not built by repeatedly taking the total derivative. The analog of a higher-order derivative, called a jet, cannot be a linear transformation because higher-order derivatives reflect subtle geometric information, such as concavity, which cannot be described in terms of linear data such as vectors. It cannot be a function on the tangent bundle because the tangent bundle only has room for the base space and the directional derivatives. Because jets capture higher-order information, they take as arguments additional coordinates representing higher-order changes in direction. The space determined by these additional coordinates is called the jet bundle. The relation between the total derivative and the partial derivatives of a function is paralleled in the relation between the kth order jet of a function and its partial derivatives of order less than or equal to k. By repeatedly taking the total derivative, one obtains higher versions of the Fréchet derivative, specialized to Rp. The kth order total derivative may be interpreted as a map which takes a point x in Rn and assigns to it an element of the space of k-linear maps from Rn to Rm- the "best" (in a certain precise sense) k-linear approximation to f at that point. By precomposing it with the diagonal map Δ, x→(x, x), a generalized Taylor series may be begun as where f(a) is identified with a constant function, (x-a)i are the components of the vector x-a, and (D f)i and (D2 f)j k are the components of D f and D2 f as linear transformations. The concept of a derivative can be extended to many other settings. The common thread is that the derivative of a function at a point serves as a linear approximation of the function at that point. - An important generalization of the derivative concerns complex functions of complex variables, such as functions from (a domain in) the complex numbers C to C. The notion of the derivative of such a function is obtained by replacing real variables with complex variables in the definition. If C is identified with R² by writing a complex number z as x + i y, then a differentiable function from C to C is certainly differentiable as a function from R² to R² (in the sense that its partial derivatives all exist), but the converse is not true in general: the complex derivative only exists if the real derivative is complex linear and this imposes relations between the partial derivatives called the Cauchy Riemann equations — see holomorphic functions. - Another generalization concerns functions between differentiable or smooth manifolds. Intuitively speaking such a manifold M is a space that can be approximated near each point x by a vector space called its tangent space: the prototypical example is a smooth surface in R³. The derivative (or differential) of a (differentiable) map f: M → N between manifolds, at a point x in M, is then a linear map from the tangent space of M at x to the tangent space of N at f(x). The derivative function becomes a map between the tangent bundles of M and N. This definition is fundamental in differential geometry and has many uses — see pushforward (differential) and pullback (differential geometry). 
- Differentiation can also be defined for maps between infinite dimensional vector spaces such as Banach spaces and Fréchet spaces. There is a generalization both of the directional derivative, called the Gâteaux derivative, and of the differential, called the Fréchet derivative. - One deficiency of the classical derivative is that not very many functions are differentiable. Nevertheless, there is a way of extending the notion of the derivative so that all continuous functions and many other functions can be differentiated using a concept known as the weak derivative. The idea is to embed the continuous functions in a larger space called the space of distributions and only require that a function is differentiable "on average". - The properties of the derivative have inspired the introduction and study of many similar objects in algebra and topology — see, for example, differential algebra. - The discrete equivalent of differentiation is finite differences. The study of differential calculus is unified with the calculus of finite differences in time scale calculus. - Also see arithmetic derivative. See also - Applications of derivatives - Automatic differentiation - Differentiability class - Generalizations of the derivative - Multiplicative inverse - Numerical differentiation - Symmetric derivative - Differentiation rules - Fractal derivative - Differential calculus, as discussed in this article, is a very well established mathematical discipline for which there are many sources. Almost all of the material in this article can be found in Apostol 1967, Apostol 1969, and Spivak 1994. - Spivak 1994, chapter 10. - See Differential (infinitesimal) for an overview. Further approaches include the Radon–Nikodym theorem, and the universal derivation (see Kähler differential). - Despite this, it is still possible to take the derivative in the sense of distributions. The result is nine times the Dirac measure centered at a. - Banach, S. (1931), "Uber die Baire'sche Kategorie gewisser Funktionenmengen", Studia. Math. (3): 174–179.. Cited by Hewitt, E and Stromberg, K (1963), Real and abstract analysis, Springer-Verlag, Theorem 17.8 - Apostol 1967, §4.18 - In the formulation of calculus in terms of limits, the du symbol has been assigned various meanings by various authors. Some authors do not assign a meaning to du by itself, but only as part of the symbol du/dx. Others define dx as an independent variable, and define du by du = dx•f′(x). In non-standard analysis du is defined as an infinitesimal. It is also interpreted as the exterior derivative of a function u. See differential (infinitesimal) for further information. - "The Notation of Differentiation". MIT. 1998. Retrieved 24 October 2012. - This can also be expressed as the adjointness between the product space and function space constructions. - Anton, Howard; Bivens, Irl; Davis, Stephen (February 2, 2005), Calculus: Early Transcendentals Single and Multivariable (8th ed.), New York: Wiley, ISBN 978-0-471-47244-5 - Apostol, Tom M. (June 1967), Calculus, Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra 1 (2nd ed.), Wiley, ISBN 978-0-471-00005-1 - Apostol, Tom M. (June 1969), Calculus, Vol. 2: Multi-Variable Calculus and Linear Algebra with Applications 1 (2nd ed.), Wiley, ISBN 978-0-471-00007-5 - Courant, Richard; John, Fritz (December 22, 1998), Introduction to Calculus and Analysis, Vol. 
1, Springer-Verlag, ISBN 978-3-540-65058-4 - Eves, Howard (January 2, 1990), An Introduction to the History of Mathematics (6th ed.), Brooks Cole, ISBN 978-0-03-029558-4 - Larson, Ron; Hostetler, Robert P.; Edwards, Bruce H. (February 28, 2006), Calculus: Early Transcendental Functions (4th ed.), Houghton Mifflin Company, ISBN 978-0-618-60624-5 - Spivak, Michael (September 1994), Calculus (3rd ed.), Publish or Perish, ISBN 978-0-914098-89-8 - Stewart, James (December 24, 2002), Calculus (5th ed.), Brooks Cole, ISBN 978-0-534-39339-7 - Thompson, Silvanus P. (September 8, 1998), Calculus Made Easy (Revised, Updated, Expanded ed.), New York: St. Martin's Press, ISBN 978-0-312-18548-0 Online books - Crowell, Benjamin (2003), Calculus - Garrett, Paul (2004), Notes on First-Year Calculus, University of Minnesota - Hussain, Faraz (2006), Understanding Calculus - Keisler, H. Jerome (2000), Elementary Calculus: An Approach Using Infinitesimals - Mauch, Sean (2004), Unabridged Version of Sean's Applied Math Book - Sloughter, Dan (2000), Difference Equations to Differential Equations - Strang, Gilbert (1991), Calculus - Stroyan, Keith D. (1997), A Brief Introduction to Infinitesimal Calculus - Wikibooks, Calculus Web pages - Hazewinkel, Michiel, ed. (2001), "Derivative", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4 - Khan Academy: Derivative lesson 1 - Weisstein, Eric W., "Derivative", MathWorld - Derivatives of Trigonometric functions, UBC - Solved problems in derivatives
Some shorelines experience two almost equal high tides and two low tides each day, called a semi-diurnal tide. Some locations experience only one high and one low tide each day, called a diurnal tide. Some locations experience two uneven tides a day, or sometimes one high and one low each day; this is called a mixed tide. The times and amplitude of the tides at a locale are influenced by the alignment of the Sun and Moon, by the pattern of tides in the deep ocean, by the amphidromic systems of the oceans, and by the shape of the coastline and near-shore bathymetry (see Timing). Tides vary on timescales ranging from hours to years due to numerous influences. To make accurate records, tide gauges at fixed stations measure the water level over time. Gauges ignore variations caused by waves with periods shorter than minutes. These data are compared to the reference (or datum) level usually called mean sea level. While tides are usually the largest source of short-term sea-level fluctuations, sea levels are also subject to forces such as wind and barometric pressure changes, resulting in storm surges, especially in shallow seas and near coasts. Tidal phenomena are not limited to the oceans, but can occur in other systems whenever a gravitational field that varies in time and space is present. For example, the solid part of the Earth is affected by tides, though this is not as easily seen as the water tidal movements. Tide changes proceed via the following stages: - Sea level rises over several hours, covering the intertidal zone; flood tide. - The water rises to its highest level, reaching high tide. - Sea level falls over several hours, revealing the intertidal zone; ebb tide. - The water stops falling, reaching low tide. Tides produce oscillating currents known as tidal streams. The moment that the tidal current ceases is called slack water or slack tide. The tide then reverses direction and is said to be turning. Slack water usually occurs near high water and low water. But there are locations where the moments of slack tide differ significantly from those of high and low water. Tides are most commonly semi-diurnal (two high waters and two low waters each day), or diurnal (one tidal cycle per day). The two high waters on a given day are typically not the same height (the daily inequality); these are the higher high water and the lower high water in tide tables. Similarly, the two low waters each day are the higher low water and the lower low water. The daily inequality is not consistent and is generally small when the Moon is over the equator. Tidal changes are the net result of multiple influences that act over varying periods. These influences are called tidal constituents. The primary constituents are the Earth's rotation, the positions of the Moon and the Sun relative to Earth, the Moon's altitude (elevation) above the Earth's equator, and bathymetry. Variations with periods of less than half a day are called harmonic constituents. Conversely, cycles of days, months, or years are referred to as long period constituents. The tidal forces affect the entire earth, but the movement of the solid Earth is only centimeters. The atmosphere is much more fluid and compressible so its surface moves kilometers, in the sense of the contour level of a particular low pressure in the outer atmosphere. Principal lunar semi-diurnal constituent In most locations, the largest constituent is the "principal lunar semi-diurnal", also known as the M2 (or M2) tidal constituent. 
Its period is about 12 hours and 25.2 minutes, exactly half a tidal lunar day, which is the average time separating one lunar zenith from the next, and thus is the time required for the Earth to rotate once relative to the Moon. Simple tide clocks track this constituent. The lunar day is longer than the Earth day because the Moon orbits in the same direction the Earth spins. This is analogous to the minute hand on a watch crossing the hour hand at 12:00 and then again at about 1:05½ (not at 1:00). The Moon orbits the Earth in the same direction as the Earth rotates on its axis, so it takes slightly more than a day—about 24 hours and 50 minutes—for the Moon to return to the same location in the sky. During this time, it has passed overhead (culmination) once and underfoot once (at an hour angle of 00:00 and 12:00 respectively), so in many places the period of strongest tidal forcing is the above mentioned, about 12 hours and 25 minutes. The moment of highest tide is not necessarily when the Moon is nearest to zenith or nadir, but the period of the forcing still determines the time between high tides. Because the gravitational field created by the Moon weakens with distance from the Moon, it exerts a slightly stronger than average force on the side of the Earth facing the Moon, and a slightly weaker force on the opposite side. The Moon thus tends to "stretch" the Earth slightly along the line connecting the two bodies. The solid Earth deforms a bit, but ocean water, being fluid, is free to move much more in response to the tidal force, particularly horizontally. As the Earth rotates, the magnitude and direction of the tidal force at any particular point on the Earth's surface change constantly; although the ocean never reaches equilibrium—there is never time for the fluid to "catch up" to the state it would eventually reach if the tidal force were constant—the changing tidal force nonetheless causes rhythmic changes in sea surface height. Semi-diurnal range differences When there are two high tides each day with different heights (and two low tides also of different heights), the pattern is called a mixed semi-diurnal tide. Range variation: springs and neaps The semi-diurnal range (the difference in height between high and low waters over about half a day) varies in a two-week cycle. Approximately twice a month, around new moon and full moon when the Sun, Moon and Earth form a line (a condition known as syzygy) the tidal force due to the sun reinforces that due to the Moon. The tide's range is then at its maximum: this is called the spring tide, or just springs. It is not named after the season but, like that word, derives from the meaning "jump, burst forth, rise", as in a natural spring. When the Moon is at first quarter or third quarter, the sun and Moon are separated by 90° when viewed from the Earth, and the solar tidal force partially cancels the Moon's. At these points in the lunar cycle, the tide's range is at its minimum: this is called the neap tide, or neaps (a word of uncertain origin). Spring tides result in high waters that are higher than average, low waters that are lower than average, 'slack water' time that is shorter than average and stronger tidal currents than average. Neaps result in less extreme tidal conditions. There is about a seven-day interval between springs and neaps. The changing distance separating the Moon and Earth also affects tide heights. When the Moon is closest, at perigee, the range increases, and when it is at apogee, the range shrinks. 
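The figures quoted above (a lunar day of about 24 hours 50 minutes and an M2 period of about 12 hours 25 minutes) follow from the Earth's rotation period and the Moon's orbital period. A minimal Python sketch, using standard rounded values for the sidereal day and sidereal month:

sidereal_day_h = 23.934          # Earth's rotation relative to the stars
sidereal_month_h = 27.322 * 24   # Moon's orbit relative to the stars

# The Earth must rotate a little further to catch up with the Moon:
lunar_day_h = 1.0 / (1.0 / sidereal_day_h - 1.0 / sidereal_month_h)
m2_period_h = lunar_day_h / 2.0

print(lunar_day_h)   # about 24.84 h, i.e. roughly 24 h 50 min
print(m2_period_h)   # about 12.42 h, i.e. roughly 12 h 25 min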
Every 7½ lunations (the full cycles from full moon to new to full), perigee coincides with either a new or full moon causing perigean spring tides with the largest tidal range. Even at its most powerful this force is still weak causing tidal differences of inches at most. The shape of the shoreline and the ocean floor changes the way that tides propagate, so there is no simple, general rule that predicts the time of high water from the Moon's position in the sky. Coastal characteristics such as underwater bathymetry and coastline shape mean that individual location characteristics affect tide forecasting; actual high water time and height may differ from model predictions due to the coastal morphology's effects on tidal flow. However, for a given location the relationship between lunar altitude and the time of high or low tide (the lunitidal interval) is relatively constant and predictable, as is the time of high or low tide relative to other points on the same coast. For example, the high tide at Norfolk, Virginia, predictably occurs approximately two and a half hours before the Moon passes directly overhead. Land masses and ocean basins act as barriers against water moving freely around the globe, and their varied shapes and sizes affect the size of tidal frequencies. As a result, tidal patterns vary. For example, in the U.S., the East coast has predominantly semi-diurnal tides, as do Europe's Atlantic coasts, while the West coast predominantly has mixed tides. These include solar gravitational effects, the obliquity (tilt) of the Earth's equator and rotational axis, the inclination of the plane of the lunar orbit and the elliptical shape of the Earth's orbit of the sun. A compound tide (or overtide) results from the shallow-water interaction of its two parent waves. Phase and amplitude Because the M2 tidal constituent dominates in most locations, the stage or phase of a tide, denoted by the time in hours after high water, is a useful concept. Tidal stage is also measured in degrees, with 360° per tidal cycle. Lines of constant tidal phase are called cotidal lines, which are analogous to contour lines of constant altitude on topographical maps. High water is reached simultaneously along the cotidal lines extending from the coast out into the ocean, and cotidal lines (and hence tidal phases) advance along the coast. Semi-diurnal and long phase constituents are measured from high water, diurnal from maximum flood tide. This and the discussion that follows is precisely true only for a single tidal constituent. For an ocean in the shape of a circular basin enclosed by a coastline, the cotidal lines point radially inward and must eventually meet at a common point, the amphidromic point. The amphidromic point is at once cotidal with high and low waters, which is satisfied by zero tidal motion. (The rare exception occurs when the tide encircles an island, as it does around New Zealand, Iceland and Madagascar.) Tidal motion generally lessens moving away from continental coasts, so that crossing the cotidal lines are contours of constant amplitude (half the distance between high and low water) which decrease to zero at the amphidromic point. For a semi-diurnal tide the amphidromic point can be thought of roughly like the center of a clock face, with the hour hand pointing in the direction of the high water cotidal line, which is directly opposite the low water cotidal line. 
High water rotates about the amphidromic point once every 12 hours in the direction of rising cotidal lines, and away from ebbing cotidal lines. This rotation is generally clockwise in the southern hemisphere and counterclockwise in the northern hemisphere, and is caused by the Coriolis effect. The difference of cotidal phase from the phase of a reference tide is the epoch. The reference tide is the hypothetical constituent "equilibrium tide" on a landless Earth measured at 0° longitude, the Greenwich meridian. In the North Atlantic, because the cotidal lines circulate counterclockwise around the amphidromic point, the high tide passes New York Harbor approximately an hour ahead of Norfolk Harbor. South of Cape Hatteras the tidal forces are more complex, and cannot be predicted reliably based on the North Atlantic cotidal lines. History of tidal physics Investigation into tidal physics was important in the early development of heliocentrism and celestial mechanics, with the existence of two daily tides being explained by the Moon's gravity. Later the daily tides were explained more precisely by the interaction of the Moon's and the sun's gravity. Galileo Galilei in his 1632 Dialogue Concerning the Two Chief World Systems, whose working title was Dialogue on the Tides, gave an explanation of the tides. The resulting theory, however, was incorrect as he attributed the tides to the sloshing of water caused by the Earth's movement around the sun. He hoped to provide mechanical proof of the Earth's movement – the value of his tidal theory is disputed. At the same time Johannes Kepler correctly suggested that the Moon caused the tides, which he based upon ancient observations and correlations, an explanation which was rejected by Galileo. It was originally mentioned in Ptolemy's Tetrabiblos as having derived from ancient observation. Isaac Newton (1642–1727) was the first person to explain tides as the product of the gravitational attraction of astronomical masses. His explanation of the tides (and many other phenomena) was published in the Principia (1687). and used his theory of universal gravitation to explain the lunar and solar attractions as the origin of the tide-generating forces. Newton and others before Pierre-Simon Laplace worked the problem from the perspective of a static system (equilibrium theory), that provided an approximation that described the tides that would occur in a non-inertial ocean evenly covering the whole Earth. The tide-generating force (or its corresponding potential) is still relevant to tidal theory, but as an intermediate quantity (forcing function) rather than as a final result; theory must also consider the Earth's accumulated dynamic tidal response to the applied forces, which response is influenced by bathymetry, Earth's rotation, and other factors. Maclaurin used Newton’s theory to show that a smooth sphere covered by a sufficiently deep ocean under the tidal force of a single deforming body is a prolate spheroid (essentially a three dimensional oval) with major axis directed toward the deforming body. Maclaurin was the first to write about the Earth's rotational effects on motion. Euler realized that the tidal force's horizontal component (more than the vertical) drives the tide. In 1744 Jean le Rond d'Alembert studied tidal equations for the atmosphere which did not include rotation. 
Pierre-Simon Laplace formulated a system of partial differential equations relating the ocean's horizontal flow to its surface height, the first major dynamic theory for water tides. The Laplace tidal equations are still in use today. William Thomson, 1st Baron Kelvin, rewrote Laplace's equations in terms of vorticity which allowed for solutions describing tidally driven coastally trapped waves, known as Kelvin waves. Others including Kelvin and Henri Poincaré further developed Laplace's theory. Based on these developments and the lunar theory of E W Brown describing the motions of the Moon, Arthur Thomas Doodson developed and published in 1921 the first modern development of the tide-generating potential in harmonic form: Doodson distinguished 388 tidal frequencies. Some of his methods remain in use. The tidal force produced by a massive object (Moon, hereafter) on a small particle located on or in an extensive body (Earth, hereafter) is the vector difference between the gravitational force exerted by the Moon on the particle, and the gravitational force that would be exerted on the particle if it were located at the Earth's center of mass. Thus, the tidal force depends not on the strength of the lunar gravitational field, but on its gradient (which falls off approximately as the inverse cube of the distance to the originating gravitational body). The solar gravitational force on the Earth is on average 179 times stronger than the lunar, but because the sun is on average 389 times farther from the Earth, its field gradient is weaker. The solar tidal force is 46% as large as the lunar. More precisely, the lunar tidal acceleration (along the Moon-Earth axis, at the Earth's surface) is about 1.1 × 10−7 g, while the solar tidal acceleration (along the Sun-Earth axis, at the Earth's surface) is about 0.52 × 10−7 g, where g is the gravitational acceleration at the Earth's surface. Venus has the largest effect of the other planets, at 0.000113 times the solar effect. The ocean's surface is closely approximated by an equipotential surface, (ignoring ocean currents) commonly referred to as the geoid. Since the gravitational force is equal to the potential's gradient, there are no tangential forces on such a surface, and the ocean surface is thus in gravitational equilibrium. Now consider the effect of massive external bodies such as the moon and sun. These bodies have strong gravitational fields that diminish with distance in space and which act to alter the shape of an equipotential surface on the Earth. This deformation has a fixed spatial orientation relative to the influencing body. The Earth's rotation relative to this shape causes the daily tidal cycle. Gravitational forces follow an inverse-square law (force is inversely proportional to the square of the distance), but tidal forces are inversely proportional to the cube of the distance. The ocean surface moves because of the changing tidal equipotential, rising when the tidal potential is high, which occurs on the parts of the Earth nearest to and furthest from the moon. When the tidal equipotential changes, the ocean surface is no longer aligned with it, so the apparent direction of the vertical shifts. The surface then experiences a down slope, in the direction that the equipotential has risen. Laplace's tidal equations - The vertical (or radial) velocity is negligible, and there is no vertical shear—this is a sheet flow. - The forcing is only horizontal (tangential). 
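The 46% figure quoted earlier in this passage is a direct consequence of the other two numbers in that sentence, since the direct gravitational pull scales as M/d² while the tidal force scales as M/d³. A short check:

# Tidal ratio = (pull ratio) / (distance ratio), because of the extra 1/d factor.
solar_over_lunar_pull = 179.0      # direct gravitational attraction ratio
sun_over_moon_distance = 389.0     # average distance ratio

print(solar_over_lunar_pull / sun_over_moon_distance)   # about 0.46, i.e. 46%

# Consistent with the quoted tidal accelerations:
print(0.52e-7 / 1.1e-7)                                  # also about 0.47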
- The Coriolis effect appears as an inertial force (fictitious) acting laterally to the direction of flow and proportional to velocity. - The surface height's rate of change is proportional to the negative divergence of velocity multiplied by the depth. As the horizontal velocity stretches or compresses the ocean as a sheet, the volume thins or thickens, respectively. The boundary conditions dictate no flow across the coastline and free slip at the bottom. The Coriolis effect (inertial force) steers currents moving towards the equator to the west and toward the east for flows moving away from the equator, allowing coastally trapped waves. Finally, a dissipation term can be added which is an analog to viscosity. Amplitude and cycle time The theoretical amplitude of oceanic tides caused by the moon is about 54 centimetres (21 in) at the highest point, which corresponds to the amplitude that would be reached if the ocean possessed a uniform depth, there were no landmasses, and the Earth were rotating in step with the moon's orbit. The sun similarly causes tides, of which the theoretical amplitude is about 25 centimetres (9.8 in) (46% of that of the moon) with a cycle time of 12 hours. At spring tide the two effects add to each other to a theoretical level of 79 centimetres (31 in), while at neap tide the theoretical level is reduced to 29 centimetres (11 in). Since the orbits of the Earth about the sun, and the moon about the Earth, are elliptical, tidal amplitudes change somewhat as a result of the varying Earth–sun and Earth–moon distances. This causes a variation in the tidal force and theoretical amplitude of about ±18% for the moon and ±5% for the sun. If both the sun and moon were at their closest positions and aligned at new moon, the theoretical amplitude would reach 93 centimetres (37 in). Real amplitudes differ considerably, not only because of depth variations and continental obstacles, but also because wave propagation across the ocean has a natural period of the same order of magnitude as the rotation period: if there were no land masses, it would take about 30 hours for a long wavelength surface wave to propagate along the equator halfway around the Earth (by comparison, the Earth's lithosphere has a natural period of about 57 minutes). Earth tides, which raise and lower the bottom of the ocean, and the tide's own gravitational self attraction are both significant and further complicate the ocean's response to tidal forces. Earth's tidal oscillations introduce dissipation at an average rate of about 3.75 terawatt. About 98% of this dissipation is by marine tidal movement. Dissipation arises as basin-scale tidal flows drive smaller-scale flows which experience turbulent dissipation. This tidal drag creates torque on the moon that gradually transfers angular momentum to its orbit, and a gradual increase in Earth–moon separation. The equal and opposite torque on the Earth correspondingly decreases its rotational velocity. Thus, over geologic time, the moon recedes from the Earth, at about 3.8 centimetres (1.5 in)/year, lengthening the terrestrial day. Day length has increased by about 2 hours in the last 600 million years. Assuming (as a crude approximation) that the deceleration rate has been constant, this would imply that 70 million years ago, day length was on the order of 1% shorter with about 4 more days per year. 
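The closing estimate above can be checked under the stated crude assumption of a constant deceleration rate (about 2 hours of day lengthening per 600 million years):

slowdown_h_per_myr = 2.0 / 600.0
shortening_h = slowdown_h_per_myr * 70.0      # 70 million years ago
print(shortening_h)                           # ~0.23 h, roughly 1% of 24 h
print(shortening_h / 24.0)                    # ~0.0097

hours_per_year = 365.25 * 24.0                # year length taken as fixed
days_then = hours_per_year / (24.0 - shortening_h)
print(days_then - 365.25)                     # about 3.6 extra days per year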
Observation and prediction From ancient times, tidal observation and discussion has increased in sophistication, first marking the daily recurrence, then tides' relationship to the sun and moon. Pytheas travelled to the British Isles about 325 BC and seems to be the first to have related spring tides to the phase of the moon. In the 2nd century BC, the Babylonian astronomer, Seleucus of Seleucia, correctly described the phenomenon of tides in order to support his heliocentric theory. He correctly theorized that tides were caused by the moon, although he believed that the interaction was mediated by the pneuma. He noted that tides varied in time and strength in different parts of the world. According to Strabo (1.1.9), Seleucus was the first to link tides to the lunar attraction, and that the height of the tides depends on the moon's position relative to the sun. The Naturalis Historia of Pliny the Elder collates many tidal observations, e.g., the spring tides are a few days after (or before) new and full moon and are highest around the equinoxes, though Pliny noted many relationships now regarded as fanciful. In his Geography, Strabo described tides in the Persian Gulf having their greatest range when the moon was furthest from the plane of the equator. All this despite the relatively small amplitude of Mediterranean basin tides. (The strong currents through the Euripus Strait and the Strait of Messina puzzled Aristotle.) Philostratus discussed tides in Book Five of The Life of Apollonius of Tyana. Philostratus mentions the moon, but attributes tides to "spirits". In Europe around 730 AD, the Venerable Bede described how the rising tide on one coast of the British Isles coincided with the fall on the other and described the time progression of high water along the Northumbrian coast. The first tide table in China was recorded in 1056 AD primarily for visitors wishing to see the famous tidal bore in the Qiantang River. The first known British tide table is thought to be that of John Wallingford, who died Abbot of St. Albans in 1213, based on high water occurring 48 minutes later each day, and three hours earlier at the Thames mouth than upriver at London. William Thomson (Lord Kelvin) led the first systematic harmonic analysis of tidal records starting in 1867. The main result was the building of a tide-predicting machine using a system of pulleys to add together six harmonic time functions. It was "programmed" by resetting gears and chains to adjust phasing and amplitudes. Similar machines were used until the 1960s. The first known sea-level record of an entire spring–neap cycle was made in 1831 on the Navy Dock in the Thames Estuary. Many large ports had automatic tide gage stations by 1850. William Whewell first mapped co-tidal lines ending with a nearly global chart in 1836. In order to make these maps consistent, he hypothesized the existence of amphidromes where co-tidal lines meet in the mid-ocean. These points of no tide were confirmed by measurement in 1840 by Captain Hewett, RN, from careful soundings in the North Sea. The tidal forces due to the Moon and Sun generate very long waves which travel all around the ocean following the paths shown in co-tidal charts. The time when the crest of the wave reaches a port then gives the time of high water at the port. The time taken for the wave to travel around the ocean also means that there is a delay between the phases the moon and their effect on the tide. 
Springs and neaps in the North Sea, for example, are two days behind the new/full moon and first/third quarter moon. This is called the tide's age. The ocean bathymetry greatly influences the tide's exact time and height at a particular coastal point. There are some extreme cases: the Bay of Fundy, on the east coast of Canada, is often stated to have the world's highest tides because of its shape, bathymetry and its distance from the continental shelf edge. Measurements made in November 1998 at Burntcoat Head in the Bay of Fundy recorded a maximum range of 16.3 metres (53 ft) and a highest predicted extreme of 17 metres (56 ft). Similar measurements made in March 2002 at Leaf Basin, Ungava Bay in northern Quebec gave similar values (allowing for measurement errors), a maximum range of 16.2 metres (53 ft) and a highest predicted extreme of 16.8 metres (55 ft). Ungava Bay and the Bay of Fundy lie similar distances from the continental shelf edge but Ungava Bay is free of pack ice for only about four months every year while the Bay of Fundy rarely freezes. Southampton in the United Kingdom has a double high water caused by the interaction between the region's different tidal harmonics, caused primarily by the east/west orientation of the English Channel and the fact that when it is high water at Dover it is low water at Land's End (some 300 nautical miles distant) and vice versa. This is contrary to the popular belief that the flow of water around the Isle of Wight creates two high waters. The Isle of Wight is important, however, since it is responsible for the 'Young Flood Stand', which describes the pause of the incoming tide about three hours after low water. Because the oscillation modes of the Mediterranean Sea and the Baltic Sea do not coincide with any significant astronomical forcing period, the largest tides are close to their narrow connections with the Atlantic Ocean. Extremely small tides also occur for the same reason in the Gulf of Mexico and Sea of Japan. Elsewhere, as along the southern coast of Australia, low tides can be due to the presence of a nearby amphidrome. Isaac Newton's theory of gravitation first enabled an explanation of why there were generally two tides a day, not one, and offered hope for detailed understanding. Although it may seem that tides could be predicted via a sufficiently detailed knowledge of the instantaneous astronomical forcings, the actual tide at a given location is determined by astronomical forces accumulated over many days. Precise results require detailed knowledge of the shape of all the ocean basins—their bathymetry and coastline shape. Current procedure for analysing tides follows the method of harmonic analysis introduced in the 1860s by William Thomson. It is based on the principle that the astronomical theories of the motions of sun and moon determine a large number of component frequencies, and at each frequency there is a component of force tending to produce tidal motion, but that at each place of interest on the Earth, the tides respond at each frequency with an amplitude and phase peculiar to that locality. 
At each place of interest, the tide heights are therefore measured for a period of time sufficiently long (usually more than a year in the case of a new port not previously studied) to enable the response at each significant tide-generating frequency to be distinguished by analysis, and to extract the tidal constants for a sufficient number of the strongest known components of the astronomical tidal forces to enable practical tide prediction. The tide heights are expected to follow the tidal force, with a constant amplitude and phase delay for each component. Because astronomical frequencies and phases can be calculated with certainty, the tide height at other times can then be predicted once the response to the harmonic components of the astronomical tide-generating forces has been found. The main patterns in the tides are - the twice-daily variation - the difference between the first and second tide of a day - the spring–neap cycle - the annual variation The Highest Astronomical Tide is the perigean spring tide when both the sun and the moon are closest to the Earth. When confronted by a periodically varying function, the standard approach is to employ Fourier series, a form of analysis that uses sinusoidal functions as a basis set, having frequencies that are zero, one, two, three, etc. times the frequency of a particular fundamental cycle. These multiples are called harmonics of the fundamental frequency, and the process is termed harmonic analysis. If the basis set of sinusoidal functions suit the behaviour being modelled, relatively few harmonic terms need to be added. Orbital paths are very nearly circular, so sinusoidal variations are suitable for tides. For the analysis of tide heights, the Fourier series approach has in practice to be made more elaborate than the use of a single frequency and its harmonics. The tidal patterns are decomposed into many sinusoids having many fundamental frequencies, corresponding (as in the lunar theory) to many different combinations of the motions of the Earth, the moon, and the angles that define the shape and location of their orbits. For tides, then, harmonic analysis is not limited to harmonics of a single frequency. In other words, the harmonies are multiples of many fundamental frequencies, not just of the fundamental frequency of the simpler Fourier series approach. Their representation as a Fourier series having only one fundamental frequency and its (integer) multiples would require many terms, and would be severely limited in the time-range for which it would be valid. The study of tide height by harmonic analysis was begun by Laplace, William Thomson (Lord Kelvin), and George Darwin. A.T. Doodson extended their work, introducing the Doodson Number notation to organise the hundreds of resulting terms. This approach has been the international standard ever since, and the complications arise as follows: the tide-raising force is notionally given by sums of several terms. Each term is of the form - A·cos(w·t + p) where A is the amplitude, w is the angular frequency usually given in degrees per hour corresponding to t measured in hours, and p is the phase offset with regard to the astronomical state at time t = 0 . There is one term for the moon and a second term for the sun. The phase p of the first harmonic for the moon term is called the lunitidal interval or high water interval. The next step is to accommodate the harmonic terms due to the elliptical shape of the orbits. 
Accordingly, the value of A is not a constant but also varying with time, slightly, about some average figure. Replace it then by A(t) where A is another sinusoid, similar to the cycles and epicycles of Ptolemaic theory. Accordingly, - A(t) = A·(1 + Aa·cos(wa·t + pa)) , which is to say an average value A with a sinusoidal variation about it of magnitude Aa , with frequency wa and phase pa . Thus the simple term is now the product of two cosine factors: - A·[1 + Aa·cos(wa ·t + pa)]·cos(w·t + p) Given that for any x and y - cos(x)·cos(y) = ½·cos( x + y ) + ½·cos( x–y ) , it is clear that a compound term involving the product of two cosine terms each with their own frequency is the same as three simple cosine terms that are to be added at the original frequency and also at frequencies which are the sum and difference of the two frequencies of the product term. (Three, not two terms, since the whole expression is (1 + cos(x))·cos(y) .) Consider further that the tidal force on a location depends also on whether the moon (or the sun) is above or below the plane of the equator, and that these attributes have their own periods also incommensurable with a day and a month, and it is clear that many combinations result. With a careful choice of the basic astronomical frequencies, the Doodson Number annotates the particular additions and differences to form the frequency of each simple cosine term. Remember that astronomical tides do not include weather effects. Also, changes to local conditions (sandbank movement, dredging harbour mouths, etc.) away from those prevailing at the measurement time affect the tide's actual timing and magnitude. Organisations quoting a "highest astronomical tide" for some location may exaggerate the figure as a safety factor against analytical uncertainties, distance from the nearest measurement point, changes since the last observation time, ground subsidence, etc., to avert liability should an engineering work be overtopped. Special care is needed when assessing the size of a "weather surge" by subtracting the astronomical tide from the observed tide. Careful Fourier data analysis over a nineteen-year period (the National Tidal Datum Epoch in the U.S.) uses frequencies called the tidal harmonic constituents. Nineteen years is preferred because the Earth, moon and sun's relative positions repeat almost exactly in the Metonic cycle of 19 years, which is long enough to include the 18.613 year lunar nodal tidal constituent. This analysis can be done using only the knowledge of the forcing period, but without detailed understanding of the mathematical derivation, which means that useful tidal tables have been constructed for centuries. The resulting amplitudes and phases can then be used to predict the expected tides. These are usually dominated by the constituents near 12 hours (the semi-diurnal constituents), but there are major constituents near 24 hours (diurnal) as well. Longer term constituents are 14 day or fortnightly, monthly, and semiannual. Semi-diurnal tides dominated coastline, but some areas such as the South China Sea and the Gulf of Mexico are primarily diurnal. In the semi-diurnal areas, the primary constituents M2 (lunar) and S2 (solar) periods differ slightly, so that the relative phases, and thus the amplitude of the combined tide, change fortnightly (14 day period). In the M2 plot above, each cotidal line differs by one hour from its neighbors, and the thicker lines show tides in phase with equilibrium at Greenwich. 
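The fortnightly (spring–neap) modulation described above can be reproduced by summing just the M2 and S2 constituents. In the sketch below the two periods are the standard values, while the amplitudes are arbitrary illustrative numbers (S2 set to 46% of M2), not data for any real port.

import math

M2_PERIOD_H = 12.4206    # principal lunar semi-diurnal, hours
S2_PERIOD_H = 12.0       # principal solar semi-diurnal, hours
A_M2, A_S2 = 1.0, 0.46   # illustrative relative amplitudes

def height(t):
    return (A_M2 * math.cos(2 * math.pi * t / M2_PERIOD_H)
            + A_S2 * math.cos(2 * math.pi * t / S2_PERIOD_H))

# Beat period of the combined signal, springs to springs:
beat_h = 1.0 / (1.0 / S2_PERIOD_H - 1.0 / M2_PERIOD_H)
print(beat_h / 24.0)    # ~14.8 days

def daily_max(t0, step=0.1):
    return max(abs(height(t0 + k * step)) for k in range(240))

print(daily_max(0.0))            # ~1.46: constituents in phase (springs)
print(daily_max(beat_h / 2.0))   # ~0.56: constituents opposed (neaps)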
The lines rotate around the amphidromic points counterclockwise in the northern hemisphere, so that from the Baja California Peninsula to Alaska and from France to Ireland the M2 tide propagates northward. In the southern hemisphere this direction is clockwise. On the other hand, the M2 tide propagates counterclockwise around New Zealand, but this is because the islands act as a dam and permit the tides to have different heights on the islands' opposite sides. (The tides do propagate northward on the east side and southward on the west side, as predicted by theory.) The exception is at Cook Strait, where the tidal currents periodically link high to low water. This is because cotidal lines 180° around the amphidromes are in opposite phase, for example high water across from low water at each end of Cook Strait. Each tidal constituent has a different pattern of amplitudes, phases, and amphidromic points, so the M2 patterns cannot be used for other tide components.

Because the moon is moving in its orbit around the Earth in the same sense as the Earth's rotation, a point on the Earth must rotate slightly further to catch up, so that the time between semidiurnal tides is not twelve but 12.4206 hours—a bit over twenty-five minutes extra. The two peaks are not equal. The two high tides a day alternate in maximum heights: lower high (just under three feet), higher high (just over three feet), and again lower high. Likewise for the low tides. When the Earth, moon, and sun are in line (sun–Earth–moon, or sun–moon–Earth) the two main influences combine to produce spring tides; when the two forces are opposing each other, as when the angle moon–Earth–sun is close to ninety degrees, neap tides result. As the moon moves around its orbit it changes from north of the equator to south of the equator. The alternation in high tide heights becomes smaller, until they are the same (at the lunar equinox, the moon is above the equator), then redevelops with the other polarity, waxing to a maximum difference and then waning again.

The tides' influence on current flow is much more difficult to analyse, and data is much more difficult to collect. A tidal height is a single number which applies to a wide region simultaneously. A flow has both a magnitude and a direction, both of which can vary substantially with depth and over short distances due to local bathymetry. Also, although a water channel's center is the most useful measuring site, mariners object when current-measuring equipment obstructs waterways. A flow proceeding up a curved channel is the same flow, even though its direction varies continuously along the channel. Surprisingly, flood and ebb flows are often not in opposite directions. Flow direction is determined by the upstream channel's shape, not the downstream channel's shape. Likewise, eddies may form in only one flow direction.

Nevertheless, current analysis is similar to tidal analysis: in the simple case, at a given location the flood flow is in mostly one direction, and the ebb flow in another. Flood velocities are given positive sign, and ebb velocities negative sign. Analysis proceeds as though these are tide heights. In more complex situations, the main ebb and flood flows do not dominate. Instead, the flow direction and magnitude trace an ellipse over a tidal cycle (on a polar plot) rather than lying along the ebb and flood lines. In this case, analysis might proceed along pairs of directions, with the primary and secondary directions at right angles.
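A minimal sketch of that last step, resolving measured current vectors onto an assumed primary (flood/ebb) axis and the secondary axis at right angles to it, follows. The bearing of the primary axis and the sample observations are made-up illustrative values, not data for any real location.

```python
import math

# Hypothetical hourly current observations: (speed in knots, bearing in degrees true).
observations = [
    (1.8,  40), (2.3,  45), (1.1,  60), (0.3, 150),
    (1.5, 222), (2.0, 226), (1.2, 240), (0.4, 330),
]

PRIMARY_AXIS_BEARING = 43.0  # assumed flood direction for this location, degrees true

def resolve(speed_knots, bearing_deg, axis_bearing_deg):
    """Project a current vector onto the primary axis and the perpendicular secondary axis."""
    angle = math.radians(bearing_deg - axis_bearing_deg)
    along = speed_knots * math.cos(angle)    # positive along-axis: flood; negative: ebb
    across = speed_knots * math.sin(angle)   # cross-axis (secondary) component
    return along, across

for speed, bearing in observations:
    along, across = resolve(speed, bearing, PRIMARY_AXIS_BEARING)
    state = "flood" if along >= 0 else "ebb"
    print(f"{speed:.1f} kn @ {bearing:03.0f}°  ->  primary {along:+.2f} kn ({state}), secondary {across:+.2f} kn")
```

Each resulting along-axis series can then be harmonically analysed exactly as if it were a tide-height record, which is the procedure described above.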
An alternative is to treat the tidal flows as complex numbers, since each value has both a magnitude and a direction.

Tide flow information is most commonly seen on nautical charts, presented as a table of flow speeds and bearings at hourly intervals, with separate tables for spring and neap tides. The timing is relative to high water at some harbour where the tidal behaviour is similar in pattern, though it may be far away. As with tide height predictions, tide flow predictions based only on astronomical factors do not incorporate weather conditions, which can completely change the outcome.

The tidal flow through Cook Strait between the two main islands of New Zealand is particularly interesting, as the tides on each side of the strait are almost exactly out of phase, so that one side's high water is simultaneous with the other's low water. Strong currents result, with almost zero tidal height change in the strait's center. Yet, although the tidal surge normally flows in one direction for six hours and in the reverse direction for six hours, a particular surge might last eight or ten hours with the reverse surge enfeebled. In especially boisterous weather conditions, the reverse surge might be entirely overcome so that the flow continues in the same direction through three or more surge periods. A further complication for Cook Strait's flow pattern is that the tide at the north side (e.g. at Nelson) follows the common fortnightly spring–neap cycle (as found along the west side of the country), but the south side's tidal pattern has only one cycle per month, as on the east side: Wellington, and Napier. The graph of Cook Strait's tides shows separately the high water and low water height and time, through November 2007; these are not measured values but are calculated from tidal parameters derived from years-old measurements. Cook Strait's nautical chart offers tidal current information. For instance the January 1979 edition for 41°13·9’S 174°29·6’E (north west of Cape Terawhiti) refers timings to Westport, while the January 2004 issue refers to Wellington. Near Cape Terawhiti in the middle of Cook Strait the tidal height variation is almost nil while the tidal current reaches its maximum, especially near the notorious Karori Rip. Aside from weather effects, the actual currents through Cook Strait are influenced by the tidal height differences between the two ends of the strait, and, as can be seen, only one of the two spring tides at the north end (Nelson) has a counterpart spring tide at the south end (Wellington), so the resulting behaviour follows neither reference harbour.

Tidal energy can be extracted by two means: inserting a water turbine into a tidal current, or building ponds that release/admit water through a turbine. In the first case, the energy amount is entirely determined by the timing and magnitude of the tidal current; however, the best currents may be unavailable because the turbines would obstruct ships. In the second, the impoundment dams are expensive to construct, natural water cycles are completely disrupted, and ship navigation is obstructed. However, with multiple ponds, power can be generated at chosen times. So far, there are few installed systems for tidal power generation (most famously, La Rance near Saint-Malo, France), and they face many difficulties. Aside from environmental issues, simply withstanding corrosion and biological fouling poses engineering challenges.
Tidal power proponents point out that, unlike wind power systems, generation levels can be reliably predicted, save for weather effects. While some generation is possible for most of the tidal cycle, in practice turbines lose efficiency at lower operating rates. Since the power available from a flow is proportional to the cube of the flow speed, the times during which high power generation is possible are brief.

Tidal flows are important for navigation, and significant errors in position occur if they are not accommodated. Tidal heights are also important; for example many rivers and harbours have a shallow "bar" at the entrance which prevents boats with significant draft from entering at low tide. Until the advent of automated navigation, competence in calculating tidal effects was important to naval officers. The certificate of examination for lieutenants in the Royal Navy once declared that the prospective officer was able to "shift his tides".

Tidal flow timings and velocities appear in tide charts or a tidal stream atlas. Tide charts come in sets. Each chart covers a single hour between one high water and another (they ignore the leftover 24 minutes) and shows the average tidal flow for that hour. An arrow on the tidal chart indicates the direction and the average flow speed (usually in knots) for spring and neap tides. If a tide chart is not available, most nautical charts have "tidal diamonds" which relate specific points on the chart to a table giving tidal flow direction and speed.

The standard procedure to counteract tidal effects on navigation is to (1) calculate a "dead reckoning" position (or DR) from travel distance and direction, (2) mark the chart (with a vertical cross like a plus sign), and (3) draw a line from the DR in the tide's direction. The distance the tide moves the boat along this line is computed from the tidal speed, and this gives an "estimated position" or EP (traditionally marked with a dot in a triangle).

Nautical charts display the water's "charted depth" at specific locations with "soundings" and with bathymetric contour lines depicting the submerged surface's shape. These depths are relative to a "chart datum", which is typically the water level at the lowest possible astronomical tide (although other datums are commonly used, especially historically, and tides may be lower or higher for meteorological reasons); charted depths are therefore the minimum possible water depth during the tidal cycle. "Drying heights" may also be shown on the chart, which are the heights of the exposed seabed at the lowest astronomical tide.

Tide tables list each day's high and low water heights and times. To calculate the actual water depth, add the charted depth to the published tide height. Depth for other times can be derived from tidal curves published for major ports. The rule of twelfths can suffice if an accurate curve is not available. This approximation presumes that the increase in depth in the six hours between low and high water is: first hour — 1/12, second — 2/12, third — 3/12, fourth — 3/12, fifth — 2/12, sixth — 1/12.

Intertidal ecology is the study of intertidal ecosystems, where organisms live between the low and high water lines. At low water, the intertidal is exposed (or 'emersed'), whereas at high water, the intertidal is underwater (or 'immersed'). Intertidal ecologists therefore study the interactions between intertidal organisms and their environment, as well as among the different species.
The most important interactions may vary according to the type of intertidal community. The broadest classifications are based on substrates — rocky shore or soft bottom. Intertidal organisms experience a highly variable and often hostile environment, and have adapted to cope with and even exploit these conditions. One easily visible feature is vertical zonation, in which the community divides into distinct horizontal bands of specific species at each elevation above low water. A species' ability to cope with desiccation determines its upper limit, while competition with other species sets its lower limit. Humans use intertidal regions for food and recreation. Overexploitation can damage intertidal zones directly. Other anthropogenic actions such as introducing invasive species and climate change have large negative effects. Marine Protected Areas are one option communities can apply to protect these areas and aid scientific research.

The approximately fortnightly tidal cycle has large effects on intertidal and marine organisms. Hence their biological rhythms tend to occur in rough multiples of this period. Many other animals, such as the vertebrates, display similar rhythms. Examples include gestation and egg hatching. In humans, the menstrual cycle lasts roughly a lunar month, an even multiple of the tidal period. Such parallels at least hint at the common descent of all animals from a marine ancestor.

Shallow areas in otherwise open water can experience rotary tidal currents, flowing in directions that continually change, so that the flow direction (not the flow itself) completes a full rotation in 12½ hours (for example, the Nantucket Shoals).

In addition to oceanic tides, large lakes can experience small tides and even planets can experience atmospheric tides and Earth tides. These are continuum mechanical phenomena. The first two take place in fluids. The third affects the Earth's thin solid crust surrounding its semi-liquid interior (with various modifications). Large lakes such as Superior and Erie can experience tides of 1 to 4 cm, but these can be masked by meteorologically induced phenomena such as seiche. The tide in Lake Michigan is described as 0.5 to 1.5 inches (13 to 38 mm) or 1¾ inches. Atmospheric tides are negligible at ground level and aviation altitudes, masked by weather's much more important effects. Atmospheric tides are both gravitational and thermal in origin and are the dominant dynamics from about 80 to 120 kilometres (50 to 75 mi), above which the molecular density becomes too low to support fluid behavior.

Earth tides or terrestrial tides affect the entire Earth's mass, which acts similarly to a liquid gyroscope with a very thin crust. The Earth's crust shifts (in/out, east/west, north/south) in response to lunar and solar gravitation, ocean tides, and atmospheric loading. While negligible for most human activities, terrestrial tides' semi-diurnal amplitude can reach about 55 centimetres (22 in) at the equator—15 centimetres (5.9 in) due to the sun—which is important in GPS calibration and VLBI measurements. Precise astronomical angular measurements require knowledge of the Earth's rotation rate and nutation, both of which are influenced by Earth tides. The semi-diurnal M2 Earth tides are nearly in phase with the moon, with a lag of about two hours. Some particle physics experiments must adjust for terrestrial tides. For instance, at CERN and SLAC, the very large particle accelerators account for terrestrial tides.
Among the relevant effects are deformation of the ring circumference for circular accelerators and changes in particle beam energy. Since tidal forces generate currents in conducting fluids in the Earth's interior, they in turn affect the Earth's magnetic field. Earth tides have also been linked to the triggering of earthquakes (see also earthquake prediction).

Galactic tides are the tidal forces exerted by galaxies on stars within them and on satellite galaxies orbiting them. The galactic tide's effects on the Solar System's Oort cloud are believed to be the cause of 90 percent of long-period comets.

Tsunamis, the large waves that occur after earthquakes, are sometimes called tidal waves, but this name comes from their resemblance to the tide rather than any actual link to it. Other phenomena unrelated to tides but described using the word tide are rip tide, storm tide, hurricane tide, and black or red tides.
http://en.wikipedia.org/wiki/Tide
Triangles play a very important role in geometry. A triangle is a geometrical shape which consists of three sides and three angles, and the sum of the angles in a triangle is 180 degrees. The sides of a triangle may or may not be equal, and on this basis triangles are classified as equilateral, isosceles, or scalene.

If all the sides of a triangle are equal, it is an equilateral triangle. When the three sides are equal, the three angles are automatically equal as well; each angle is then 60 degrees, since 3x = 180 gives x = 60, where x is each angle. If two sides of a triangle are equal, it is an isosceles triangle, and the two angles opposite those sides are equal. If no two sides are equal, the triangle is called a scalene triangle, and its angles are all different.

Triangles can also be classified by their angles. A triangle in which all angles are less than ninety degrees is called an acute triangle. A triangle in which one angle is exactly ninety degrees is called a right triangle. A triangle in which one angle is greater than ninety degrees is called an obtuse triangle. A right triangle whose two legs are equal has its two remaining angles equal to 45 degrees each; this is called a right isosceles triangle.

Suppose we have a triangle with two angles of 45 and 100 degrees and our task is to find the third angle. We can find it very easily, because we know that the sum of all the angles of a triangle is 180 degrees. So the third angle can be found as:
45 + 100 + x = 180
145 + x = 180
x = 180 - 145
x = 35
The third angle is 35 degrees.

The fact that the sum of the angles in a triangle is 180 degrees is proved by the theorem known as the Triangle Sum Theorem: the sum of all the angles in a triangle equals 180 degrees. We know that a triangle is a polygon which has three sides, three angles, and three vertices, and that triangles can be classified further on the basis of the lengths of their sides and also on the basis of their angles.

A line segment is a part of a line with two fixed endpoints. It can be drawn by joining any two points on a plane, and it has only length. Remember that two points always determine a line, whatever its direction. On the other hand, if three or more points are marked on a plane, they may or may not lie on the same line. When three or more points lie on the same line, they are called collinear.

Congruent triangles are triangles that are the same in size and shape, although one may be flipped or rotated relative to the other. According to the definition of congruence, triangles are congruent only when all corresponding interior angles and sides are equal.

An exterior angle of a triangle is formed when one of the three sides is extended outside the triangle. An exterior angle is always bigger than either of the non-adjacent interior angles; in fact, the sum of the two non-adjacent interior angles of a triangle is equal to the exterior angle. The three corners of a triangle are known as vertices.

Triangles are polygons with three sides; that is, a triangle is a closed figure with three sides.
Triangles, then, are classified both by the measures of their sides and by their angles.

The sum of the angles in a triangle is always fixed at 180 degrees. Therefore, if any two angles of a triangle are known, we can find the third angle by adding the two given angles and subtracting their sum from 180 degrees, as illustrated in the sketch below.

The triangle inequality states that any one side of a triangle is always smaller than the sum of the other two sides; conversely, any one side is always larger than the difference of the other two sides.

Each angle and each side of a triangle also has its own special name. For example, an altitude is the segment drawn from a vertex perpendicular to the opposite side.
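As a small illustration of the angle-sum rule and the side-based classification described above, here is a minimal sketch; the function names and the sample values (45 and 100 degrees) are chosen only for demonstration.

```python
def third_angle(a: float, b: float) -> float:
    """Return the third angle of a triangle whose other two angles are a and b (in degrees)."""
    if a + b >= 180:
        raise ValueError("The two given angles must sum to less than 180 degrees.")
    return 180 - (a + b)

def classify_by_sides(x: float, y: float, z: float) -> str:
    """Classify a triangle as equilateral, isosceles, or scalene from its side lengths."""
    if x == y == z:
        return "equilateral"
    if x == y or y == z or x == z:
        return "isosceles"
    return "scalene"

print(third_angle(45, 100))        # 35, matching the worked example above
print(classify_by_sides(5, 5, 8))  # isosceles
```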
http://www.tutorcircle.com/triangles-t43rp.html
Aerodynamic forces are generated whenever an object moves through a liquid or gas. From Newton's second law of motion, the aerodynamic forces on the body are directly related to the change in momentum of the fluid with time. The fluid momentum is equal to the mass times the velocity of the fluid. Since the fluid is moving, defining the mass gets a little tricky. If the mass of fluid were brought to a halt, it would occupy some volume in space, and we could define its density to be the mass divided by the volume. With a little math we can show that the aerodynamic forces are directly proportional to the density of the fluid that flows by the rocket. As a result of this derivation, we also find that lift and drag depend on the square of the velocity.

Here is the derivation, beginning with Newton's second law:

F = d (m * V) / dt

where F is the force, m is the mass, t is time, and V is the velocity. If we integrate this equation, we obtain:

F = constant * V * m / t

Since the fluid is moving, we must determine the mass in terms of the mass flow rate. The mass flow rate is the amount of mass passing a given point during some time interval, and its units are mass/time. We can relate the mass flow rate to the density mathematically: the mass flow rate mdot is equal to the density r times the velocity V times the area A through which the mass passes.

mdot = m / t = r * V * A

With knowledge of the mass flow rate, we can express the aerodynamic force as equal to the mass flow rate times the velocity:

F = constant * V * r * V * A

A quick units check:

mass * length / time^2 = constant * (length/time) * (mass/length^3) * (length/time) * length^2
mass * length / time^2 = mass * length / time^2

Combining the velocity dependence and absorbing the area into the constant, we find:

F = constant * r * V^2

The aerodynamic force equals a constant times the density times the velocity squared. The dynamic pressure of a moving flow is equal to one half of the density times the velocity squared. Therefore, the aerodynamic force is directly proportional to the dynamic pressure of the flow.

The velocity used in the lift and drag equations is the relative velocity between an object and the flow. Since the aerodynamic force depends on the square of the velocity, doubling the velocity will quadruple the lift and drag. You can investigate the effect of velocity and the other factors on the flight of a rocket by using the RocketModeler III Java Applet.
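A short sketch of the final result follows, written in the standard force form F = coefficient * (1/2) * r * V^2 * A, consistent with the derivation above. The drag coefficient, reference area, and air density used here are placeholder values chosen only to show how the force scales with velocity.

```python
def dynamic_pressure(density_kg_m3: float, velocity_m_s: float) -> float:
    """Dynamic pressure q = 1/2 * density * velocity^2, in pascals."""
    return 0.5 * density_kg_m3 * velocity_m_s ** 2

def aero_force(coefficient: float, density_kg_m3: float, velocity_m_s: float, area_m2: float) -> float:
    """Aerodynamic force (lift or drag) = coefficient * dynamic pressure * reference area, in newtons."""
    return coefficient * dynamic_pressure(density_kg_m3, velocity_m_s) * area_m2

# Placeholder values: sea-level air density, a 0.01 m^2 reference area, drag coefficient 0.75.
RHO, AREA, CD = 1.225, 0.01, 0.75

for v in (50.0, 100.0):  # doubling the velocity...
    print(f"V = {v:5.1f} m/s  ->  drag = {aero_force(CD, RHO, v, AREA):7.2f} N")
# ...quadruples the force: about 11.5 N at 50 m/s versus about 45.9 N at 100 m/s.
```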
http://www.grc.nasa.gov/WWW/K-12/rocket/vel.html
This section highlights why monitoring glaciers is important. For starters, monitoring glaciers gives scientists one of the most significant indicators for determining whether observed climate change is regional or global. Another reason is that present-day glaciers and the deposits from more extensive glaciation in the past have considerable economic importance in many areas. Knowledge gained from monitoring glaciers can be used by governments to make long-range plans to better cope with the economic impacts of climate change.

Glaciers provide many benefits to us as humans, as well as to other species. If we realize what benefits glaciers provide, we may then recognize why they are important. Moreover, we may begin to understand why monitoring glaciers is important.

Glaciers are an important water resource of Alaska, the western United States, and regions throughout the world. For example, a number of the world's desert regions, such as north-western China or the wine-growing area of Mendoza in Argentina, receive waters from adjacent mountain ranges. Much of this water is glacial runoff—perhaps not a large contribution, but one of the greatest importance. Glaciers provide water when it is needed most, during hot, dry seasons and years. Glacial deposits also act as reservoirs for groundwater. Moraines are generally poor reservoirs because they contain large amounts of clay and are poorly sorted. Outwash, on the other hand, typically consists of sand and gravel and has been reworked by streams. This generally results in a sedimentary deposit that has had most of the clay flushed downstream. Wells constructed in outwash deposits can be very productive.

Glacier ice itself has proved to be a profitable export commodity for some countries. Ice exports were a feature of the Norwegian economy before refrigerators were invented. Nowadays, people in Japan are able to cool their drinks with expensive ice hacked from an Alaskan glacier, and melted glacier ice is sold as an unusually pure mineral water in Iceland. Even in Peru, people make a living out of collecting glacier ice, grinding it up, mixing flavors with it, and then selling it at the markets as a local version of snow cones.

In many parts of the world, meltwater from glaciers enhances electric power generation. In Switzerland, for example, hydro-electric power generation is big business. During the winter, half of Switzerland's energy production is generated by water released from reservoirs. The Massa hydro-electric power station near Brig is owned by the Swiss Railways. This station runs almost entirely on meltwater from the Grosser Aletschgletscher, the largest glacier in the Alps.

Apart from the attractive scenery, which is always a benefit, glaciers are recreational resources. They are favored by skiers, mountaineers, and mountain lovers. In areas where glaciers are relatively crevasse-free, they provide opportunities for skiing even in July and August. Glaciers are tourist attractions all over the world. Large vehicles with snow tracks transport visitors around the Athabasca Glacier in the Rocky Mountains of Canada. Scenic overflights with glacier landings are major attractions in the Mount Cook region of New Zealand. Tourists who take part in boat excursions to the calving front of the Columbia Glacier in Alaska are often offered a unique drink: a Martini cooled by ice from a calving glacier.

The increase or decrease in glacier area impacts high-mountain and high-latitude ecosystems.
Glaciers provide habitat for many interesting species. Several new species of ice worms have been discovered during recent glacial studies in the North Cascades. Glacier retreat affects aquatic ecosystems, primarily through changes in base streamflow and temperatures. Glacial retreat also impacts wetlands, which provide habitat to plant and animal species, many of which are threatened or endangered. Due to glacial retreat, increased sediment is supplied to wetlands, and small lakes may form, providing new habitat. Glacier forelands newly exposed in front of receding glaciers provide excellent natural laboratories to study plant succession and soil development.

The benefits of glaciers far outweigh any disadvantages; nevertheless, natural hazards created by glaciers are another important reason for monitoring them. If glaciers are monitored, we will begin to understand glacier "behavior" better and be better placed to avert future catastrophes.

Remember the Titanic? Glaciers pose dangers to ocean transportation and shipping, as well as offshore oil installations. The glaciers on the west side of Greenland produce a large number of icebergs that drift south into shipping lanes of the North Atlantic. Some of Greenland's icebergs have been known to travel as far south as Delaware before completely melting. Antarctica also produces icebergs from its many ice shelves. Some Antarctic icebergs exceed the area of Rhode Island.

Surging glaciers suddenly and dramatically accelerate, advancing several miles in a few months and traveling many times their normal speeds. For example, the Hubbard Glacier in southern Alaska periodically surges and threatens to block Russell Fjord. Hubbard Glacier most recently blocked the entrance to Russell Fjord in May 1986, and threatened to do so again in June 2002. When an effective ice dam forms and remains stable, Russell Fjord fills with fresh water. In such a scenario, as in 1986, marine animals become trapped, and eventually perish, in water that becomes less and less salty from the glacier's fresh water and increasingly murky from glacial sediment. As water levels rise, birds are driven from their nests, and many eggs and chicks are destroyed. The water level rises until one of two things happens: either the ice dam fails, or the lake fills until it reaches an ancient spillway at the south end of Russell Fjord. During the 1986 closure, fresh water flowing into the fjord raised the level of the lake 84 feet before the ice dam failed, which spared the nearby village of Yakutat.

Catastrophic outburst floods result from the failure of ice-dammed lakes. In general, this happens where an advancing glacier moves across, or partway up, a river valley, blocking the drainage. Smaller lakes can also be impounded within tributary canyons along the lateral margins of large valley glaciers. Draining of glacier-dammed lakes occurs either through or over the glacier dam. When an outlet stream overtops the ice dam, erosion occurs so rapidly that destruction of the dam and flooding are almost assured. Similar trouble arises if the lake gets so deep that the dam floats loose from its footings, or if lake water, under pressure, melts open a hole in the dam. In addition, heat sources from subglacial volcanic or geothermal activity below a glacier can cause local melting at a glacier's base and sometimes create holes penetrating to the surface. Meltwater accumulating at these spots episodically breaks out from under the ice and floods the adjoining outwash plain.
Outburst floods can result in dense, viscous debris flows. The major hazard of debris flows is from burial or impact. People, animals, and buildings can be buried, smashed, or carried away. The threat of ice avalanches is apparent in densely populated mountain ranges with glaciers, such as the Alps. Ice avalanches may have volumes of millions of cubic yards and have covered whole villages in the past. A major problem with predicting ice avalanches is that, despite their spectacular effects, they are relatively rare, much rarer than snow avalanches. Glaciologists have tried to find ways of predicting the ice volume to be released in avalanches, the “run-out distance,” and the time of the event. It is now known that the ice in the unstable part of a glacier accelerates drastically prior to break-off, usually (but unfortunately not always) creating fresh crevasses. In practice it is difficult to monitor these developments because they occur most frequently at high altitudes. Further research on ice avalanches is essential because mountain regions like the Alps are increasingly being used for recreation, with the establishment of ski resorts and transport routes, and for the generation of hydro-electric power. Most of the world’s glacier ice is held in two large ice sheets, Antarctica and Greenland, which together contain an estimated 97% of all the glacier ice and 77% of the planet’s freshwater supply. If all the present glacier ice were to melt from Antarctica and Greenland, the oceans would rise about 260 feet and inundate most of the coastal cities of the world. A rise in sea level would alter the position and morphology of coastlines, causing coastal flooding and waterlogging of soils. Sea level rise would also create or destroy coastal wetlands and salt marshes and induce salt-water intrusion into aquifers, leading to salinization of groundwater. Coastal ecosystems are bound to be affected as well, for example, by increased salt stress on plants. It is estimated that 70% of the world’s sandy beaches would be affected by coastal erosion induced by sea level rise. Because variations in climate cause glaciers to advance and retreat, glaciers can serve as excellent indicators of climate change. Glaciers are sensitive to climate changes of various magnitudes and different time scales. In addition, their widespread geographic distribution makes them suitable for establishing proxy data and for evaluating the nature of global climate fluctuations. Glaciers tend to “average out” the short term meteorological variations and reflect longer term variations that take place over several decades or centuries. The remarkable signal characteristics of changes in glacier length, for example, are readily apparent by looking at cumulative values and different size categories. Cirque and other small glaciers reflect yearly changes in climate and mass balance almost without any delay. Mountain glaciers dynamically react to decadal variations in climatic and mass balance forcing with enhanced amplitudes after a delay of several years. The largest valley glaciers give strong and most efficiently smoothed signals of secular trends with a delay of several decades. Large ice sheets as in Greenland and Antarctica have even greater response times, probably over several millennia (thousands of years) or tens of millennia. Scientists widely recognize ice sheets and ice caps as libraries of atmospheric history from which past climatic and environmental conditions can be extracted. 
These large glaciers contain ice that dates back millennia to past ice ages. Reliable meteorological observations for climate reconstruction are limited or absent prior to A.D. 1850; valuable and unique information is, however, trapped in the snow that piles up each year in the accumulation area of a glacier. Ice cores from accumulation areas provide information about the fluctuations of important atmospheric trace gases like carbon dioxide and methane, which are trapped in air bubbles in the ice. Moreover, measurements of oxygen isotopes in the ice can tell us the air temperature when the snow accumulated on the glacier's surface. Such information about Earth's past climate can help us predict the direction and magnitude of future climate changes. By studying the response and changes in glaciers, we can better understand, and anticipate, the range of past and possible future climate changes.

The benefits and hazards of glaciers provide us with reasons why monitoring glaciers is important. Glaciers are natural reservoirs of freshwater, and monitoring glaciers is important for assessing and predicting the impacts of glacial retreat on water resources in mountainous areas. The information gained from monitoring programs can be used by governments and individuals to make long-range plans to better cope with the economic impacts of the loss of glaciers as valuable commodities and the loss of coastal settlements and resources due to sea level rise. The increase or decrease in glacier area impacts high-mountain and high-latitude ecosystems. A diversity of species and populations constitutes the world's available gene pool, which is an important and irreplaceable resource. Monitoring glaciers can provide data to land managers, for example in national parks, for making decisions about protecting rare species that live in glaciers. Furthermore, monitoring will provide land managers with information regarding new and changing habitats created by glacial retreat. Dangerous glaciers impinge directly on the lives of people in mountain regions, and some have been responsible for huge loss of life. We need to understand glaciers better if catastrophes are to be averted.

There are many different aspects of a glacier that might be monitored; some of the more common features monitored are described below. Direct observations of glaciers are often difficult because they exist in cold, polar regions or high mountain areas that are inaccessible or inhospitable to humans. Furthermore, ice sheets and ice caps are so huge and change so slowly that repeat measurements are needed over large areas and long periods of time. Until the launches in 1972, 1975, 1978, and 1982 of the Landsat series of spacecraft, glaciologists had no accurate means of measuring the areal extent of glacier ice on Earth. Satellite images provide means for delineating the areal extent of ice sheets and caps, and for determining the position of the termini of valley, outlet, or tidal glaciers for the entire globe. NASA satellite missions will also measure ice sheet elevations, changes in elevation through time, approximate sea ice thicknesses, and global sea level. Scientific understanding of glaciers owes much to the ability of remote sensing systems to extend human observations in time and space. This ability is important because understanding the worldwide extent, timing, and relative magnitude of glaciation is significant for understanding the mechanism responsible for abrupt climate change.
According to Milankovitch and other 20th-century theoretical climatologists, the glaciers in the area around 65° N latitude are especially sensitive to astronomical variations in Earth's orbital cycles. In the Northern Hemisphere, glaciers on Baffin Island, Canada; in the Alaska Range, Alaska; at the southern tip of Greenland; in Iceland; and in Norway are at the right locations. The glaciers in Iceland, as with some of these other areas, are important as long-term indicators of climate change because of their latitudinal location. In addition, these glaciers are apparently just large enough not to be affected by short-term climatic variations, yet are small enough and dynamic enough to respond to changes caused by climatic variations over several decades.

Mass balance is the difference between annual snow and ice accumulation and snow and ice ablation. It can be represented as an average thickness added to or lost from a glacier for a given year, and it is the most sensitive annual glacier climate indicator. Mass balance is evaluated by measuring the addition and loss of snow and ice mass at points on a glacier's surface and extrapolating the point data to the whole glacier surface (a small numerical sketch of this appears at the end of this section). Ablation is measured by emplacing stakes in the glacier. As the glacier surface melts, the length of each stake emerging from the glacier is measured; the total melt at each stake by the end of the melt season is the net ablation. Most of the stakes must be replaced during the summer. Accumulation is measured either by probing or from ice and snow stratigraphy in crevasses. Analyzing crevasse layering is similar to reading the widths of tree rings.

Change in glacier length is one of the variables used to evaluate the effect of climate change. After a certain reaction time following changes in mass balance, the length of a glacier will start changing and finally reach a new equilibrium. This means that, for a given change in mass balance, the length change is a function of a glacier's original length. Furthermore, the change in mass balance can be quantitatively inferred from the easily observed length change. The extreme clarity of this signal makes it possible to apply very simple observational methods, for instance, repeated tape-line measurements from a fixed location beyond the terminus. This, in turn, enables the cooperation of numerous non-specialists with long-term measurements at hundreds of glacier snouts throughout the world.

Glaciers are natural reservoirs that store water as ice instead of behind a dam. Most significantly, glaciers yield the most water during the driest period, i.e., late summer, when it is often needed most. Glacier runoff is a combination of melt rates and glacier area. Melt rates depend on temperature. As temperatures increase (regionally and globally) and glaciers retreat, the size of the reservoir shrinks, and so does the available runoff. Even a small area of glacier cover is important to total basin runoff. In the Stehekin Basin in the North Cascades, for example, the glaciated area is only 3.1% of the total basin, but the glaciers provide 35-40% of late summer runoff. The observed changes in glacier and alpine runoff make it apparent that we can no longer intelligently manage water resources without considering the changes in glacier runoff.

Glacier-bed topography, a one-time measurement, is important for determining absolute volume changes in glaciers.
It is also a significant parameter for predicting glacier movement, estimating the hydraulics of basal water flow, and modeling glacier dynamics. Bed topography can be determined by using ice radar.

Rate of ice movement links mass balance with glacier geometry—area, terminus position, and surface elevation. Ice movement is detected by repeated surveying of targets on a glacier or by photogrammetric analysis of natural (crevasse) or artificial targets on a glacier. For a long-term monitoring program, where the mass and volume of a glacier are expected to change, a coincident data set of flow information is useful for determining the dynamic response of glaciers to changes in mass input.

Glacier-monitoring efforts in national parks provide valuable information to park managers and interpreters, as well as to the scientific community at large, about the effects of regional and global climate change. The following examples show what kinds of information monitoring programs in parks are producing.

Past published reports, dating back to 1914, and anecdotal evidence prior to that, show that the glaciers of Glacier National Park are excellent barometers of climate change. In recent years, researchers have compiled all existing information on all glaciers at Glacier National Park and have added measurements of glacial extent for the period 1979 to the present. Most past efforts focused on only one or a few well-studied glaciers; recent studies have found and incorporated previously unused information into a more extensive picture of glacier activity in the park. Researchers mapped the area of each glacier in a standard spatial framework that was digitized within a geographic information system to create a time series for interpretation and analysis. General shrinkage has occurred for every glacier for which researchers have measurements, but rates of change have varied. The larger glaciers are now approximately one-third of their 1850 size, and numerous smaller glaciers have disappeared. There has been a 73% reduction in the area of the park covered by glaciers between 1850 and 1993. Out of 84 watersheds, 18 have 1% glacier cover, 8 have 2% cover, and 4 have 3%. The average fraction of glacier area lying in the accumulation zone in September 1993 was 35%, indicating negative mass balances for most glaciers and continued shrinkage. A computer model indicates that present rates of increasing warming will eliminate all glaciers in Glacier National Park by 2030 (Hall, 1994). Even with no additional warming over that which has already occurred in the area, the glaciers are likely to be gone by 2100.

Hall, M. H. P., 1994, Predicting the impact of climate change on glacier and vegetation distribution in Glacier National Park to the year 2100: Syracuse, State University of New York, M.S. Thesis, 192 p.

Both the National Park Service and the U.S. Geological Survey have made systematic measurements of Mount Rainier's glaciers since the late 1890s, making this one of the longest and most detailed records of glacier change in the United States. Monitoring techniques include: (1) mapping the terminus of glaciers, (2) determining glacier characteristics using remote imaging and digital mapping technology, (3) determining glacier volume, (4) monitoring glacier motion, (5) measuring mass balance, and (6) determining the extent of ancient glaciers using post-glacial landforms. Monitoring has revealed that in 1994 Mount Rainier's glaciers had a combined area of 35 square miles and an estimated total volume of 1.0 cubic mile.
Between 1913 and 1994, the combined area dropped by 21% and the total volume by 25%. In general, glaciers on the south side of the mountain shrank more than glaciers on the north side (total area losses of 27% and 17%, respectively). The changing positions of glacier termini indicate that all of the mountain's major glaciers retreated between 1913 and the late 1950s, then advanced until the early 1980s, and then retreated significantly during the 1990s.

Measuring glaciers can be difficult and expensive because glaciers are often located in remote areas where travel is complicated by rugged alpine terrain and adverse weather. This is true in the North Cascades Range of Washington, where dense forests, steep slopes, and poor weather make foot travel slow and often rigorous. Furthermore, helicopter support is limited by flight regulations in wilderness areas and is highly weather dependent. One approach for tracking glacier change is to monitor a single, relatively accessible glacier in a region and treat it as a representative member of the glaciers in the region. In 1958, the U.S. Geological Survey (USGS) selected South Cascade Glacier, a valley glacier to the southwest of North Cascades National Park, to represent the North Cascades Range. Several times each year (in spring, summer, and autumn) ground crews make measurements of changes in the glacier's mass, and researchers analyze that year's aerial photographs. The result is a nearly 50-year record of the size, shape, and dynamics of the glacier, one of the longest continuous detailed records of glacier change in the world.

In 1993, the National Park Service began a glacier monitoring program within the North Cascades National Park Complex in order to determine how representative the changes of South Cascade Glacier are of glacier changes in the rest of the North Cascades. Researchers selected four glaciers of different size, type, and location to represent a wider cross section of the region's glacier population. As in the USGS program, researchers acquired yearly aerial photographs and made ground-based measurements of the changing glacier mass. A similar program, conducted by the North Cascades Glacier Climate Project, has provided additional information by conducting mass change studies on 48 other glaciers spread throughout the North Cascades.

Researchers calculated changes in glacier area and volume between 1958 and 1998 for approximately 80% of the glaciers of the Upper Skagit River Basin. The majority of these glaciers shrank. Because the regional benchmark, South Cascade Glacier, also lost mass between 1958 and 1998, researchers assume that its mass-balance record can be used to represent yearly mass changes for most of the glaciers of the Upper Skagit Basin. Therefore most of the glaciers of the Upper Skagit Basin had more years when they lost mass than years when they gained mass. Furthermore, most of the mass they lost between 1958 and 1999 was lost between 1976 and 1998.

Tangborn, W., 1980, Two models for estimating climate-glacier relationships in the North Cascades Washington, U.S.A.: Journal of Glaciology, v. 25, no. 91, p. 62-67.
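Pulling together the mass-balance procedure described earlier (stake ablation, accumulation probing, and extrapolation of point values over the glacier surface), here is the minimal numerical sketch referenced above. The elevation bands, areas, and accumulation/ablation values are hypothetical; a real program would convert snow and ice thicknesses to water equivalent using measured densities rather than quoting them directly.

```python
# Hypothetical elevation bands of a small glacier:
# (band area in km^2, winter accumulation in m, summer ablation in m),
# both already expressed as metres of water equivalent for simplicity.
bands = [
    (0.8, 2.4, 1.1),   # upper accumulation zone
    (1.2, 1.6, 1.5),   # mid glacier
    (0.9, 0.9, 2.6),   # lower ablation zone (stake measurements)
]

total_area = sum(area for area, _, _ in bands)

# Area-weighted (specific) net balance: average thickness of water gained or lost.
net_balance_m_we = sum(area * (acc - abl) for area, acc, abl in bands) / total_area

print(f"Glacier area: {total_area:.1f} km^2")
print(f"Net specific balance: {net_balance_m_we:+.2f} m water equivalent")
# A negative value means the glacier thinned, on average, over the balance year.
```

Summed year after year, a run of negative values like this one is exactly what the long-term records described above document.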
http://www.nature.nps.gov/views/KCs/Glaciers/HTML/ET_Monitoring.htm
To set the stage for this issue's discussion of e-discovery developments, we begin with this assessment of elementary computer science principles. How does a computer work, and what do computer forensic experts do to discover electronically stored information?

Data Processing Fundamentals

Computer hardware includes input devices (e.g., keyboard, mouse), a hard drive for long-term storage of information, a processor (that runs everything), the main circuit board (or motherboard) that connects key devices, memory chips for short-term storage of information, and an output device (e.g., a monitor) that displays the work. These hardware devices are connected by electrical circuits controlled by switches. (Computer processing basically involves the opening and closing of the switches at the right time and in the correct sequence as directed by the software program.)

The hardware system is managed by a software program known as "operating system" software (e.g., Microsoft Windows). The operating system is the set of instructions (software is simply a permanent sequence of instructions) that allows the applications software (e.g., Microsoft Word) to be processed by the computer's hardware while the user operates the keyboard.

When the computer is turned on, its "boot (or start) up" process begins. The electric signal initiates instructions (often found on a read-only memory or "ROM" chip located on the motherboard) to have the operating system software retrieved from a storage device and loaded onto a different memory chip. It does so after first making sure that all the hardware is connected and operating properly. Then the user, via keyboarding or mouse clicks, selects application software also to be loaded onto the memory chip. When the user then begins to use the applications software, the operating system directs a microprocessor known as the central processing unit or "CPU" (also mounted on the motherboard) to execute the software instructions. A computer purchased today uses a processing chip (such as the Intel Core i7) that can handle upwards of 80,000 MIPS, that is, million instructions per second. As the CPU processes the data, the results are displayed on the output device or monitor.

The memory devices adjacent to the CPU on the motherboard include the cache memory chip and the random access memory or "RAM" chip. These chips contain integrated circuits that store information electronically. Cache memory or RAM hosts the programs placed there by the operating system in response to the user's commands. They run the programs at high speed (cache more so) and interact with "registers" contained within the CPU while the data processing is occurring. These devices are fast because they store the data or program electronically (not mechanically and magnetically like the slower hard drive storage), but RAM and cache storage is temporary, vanishing instantly from memory when power to the computer is turned off. This feature is key to the computer's functionality; because data is stored electronically on cache or RAM memory and remains there only when power is supplied, the input/output process is far faster than if the CPU had to access information from magnetic hard drive storage in the course of program execution.

Data Storage Mechanics

Computers store information in two forms: in primary storage devices, and in secondary storage devices. The term "primary" is used because the computer prefers to process data in that form of memory because it is faster.
It accesses data in "secondary" storage only if the data is not available in primary memory.

Primary storage devices are the cache and RAM memory chips that contain millions of capacitors (components that store electrical energy) paired with transistors (switches) that are etched onto the surface of a silicon chip in a pattern of rows and columns. Each capacitor can hold one "bit" of data (short for binary digit) through the electrical charge residing in each capacitor. When clustered together in eight-bit "bytes" (short for binary term), the capacitors can hold large amounts of information. Readers are familiar with the terms "gigabyte" (a billion bytes) and "megabyte" (a million bytes) that describe the storage capacity of the computer's memory devices and hard drive.

Secondary storage devices are hard disk drives or flash drives that contain "media." Media are surfaces coated with magnetic material containing small storage cells. Like the primary electronic form of storage, each magnetic cell holds one "bit" of data, recorded as a magnetization imparted to the media's magnetic coating by the electrical signal sent from the CPU. Unlike electronic storage, in which data vanishes when power is turned off, data remains saved in hard drive storage because the magnetism continues after the electricity is turned off.

The hard drive is a stack of magnetically sensitive media devices; they are disks or platters that spin around an axis inside an enclosure. "Heads" positioned above the spinning disks on an arm are able to "read" the bytes of data stored in the cells on the disk, or they can "write" the data bytes they convey from the CPU onto open cells on the disk for storage or saving. Each cell has its own "address" on the disk. As the electro-magnetized heads pass over the magnetized surface of the disk, particles within the disk can be polarized magnetically in one of two directions, which allows the computer to distinguish the number 1 from the number 0. As the disks spin, the read/write heads follow circular tracks of cells that are open or contain bits of data. The heads deposit (write) data into open cells or retrieve (read) blocks of data on the disk as the CPU processes the instructions of the software.

A CD or DVD is also a form of secondary storage known as optical storage. They work similarly to hard disk drives except that a laser is used in place of electromagnetism to deposit or retrieve data to or from the disk. The CD or DVD contains microscopic pits etched onto the surface of the disk. As the laser passes over the disk it distinguishes data based upon the manner in which the beam is reflected off the pits' surfaces. Again the reflections distinguish the number 1 from the number 0 in the data sequences. Why are computers so focused on 1s and 0s?

Binary Number System

All computers store and process information using the binary number system. (Computers do not recognize letters.) When a keystroke occurs, the keyboard generates a code representing the key and sends it to the CPU. The CPU "digitizes" the signal by breaking it down into small parts and converting the parts into binary numbers: the number "1" or "0" singularly or any combination of the two in a sequence. The binary number system has two functions in a computer. First, the computer is programmed to know that 1 means "open" and 0 means "close." Thus these numbers determine whether an electrical switch present on a circuit is to be opened or closed, and when.
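Before turning to the second function, here is a minimal sketch of the digitization idea: how a single keystroke character becomes a byte and then a sequence of 1s and 0s. The choice of ASCII/UTF-8 encoding is an assumption made for the example; the article itself does not specify an encoding.

```python
# Show how one character is stored as a byte and as a string of bits.
# Assumes ASCII/UTF-8 text encoding, purely for illustration.
char = "A"
byte_value = ord(char)             # numeric code for the keystroke, e.g. 65
bits = format(byte_value, "08b")   # the same value written as eight binary digits

print(char, "->", byte_value, "->", bits)   # A -> 65 -> 01000001

# A short word becomes a longer sequence of 1s and 0s, one byte per character.
word = "CPU"
print(" ".join(format(b, "08b") for b in word.encode("ascii")))
```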
Second, binary mathematics is similar to the decimal system (numbers 0 through 9) except that, to create numbers beyond 1 or letters, a longer sequence of 1s and 0s is necessary. Thus most of the work of a computer involves processing long sequences of 1s and 0s that represent numbers and letters (alphanumeric data). Computing as we know it would not exist in the absence of the binary number system. Its ability to represent numbers above 1 and letters, while the computer simply needs to distinguish two numbers, is the foundation of computer science. If computing required the computer to distinguish the 10 individual numbers of the decimal system, we would still be using slide rules.

File Storage and Metadata

As noted above, the read/write heads in the hard drive are involved when the user saves data or loads a software program onto the magnetic disks in the hard drive. When the user creates a combination of data to be saved, a "file" is created. The sequence of bytes stored in a "block" in the file might represent a program, a graphical image, or text for a document. Thus, the data combination may be a data file, a text file, a program file, a directory file, etc. To "save" the file in the computer, the save click signal is directed to the heads of the hard drive. They search the disks to find open or "unallocated" storage cells where new data may be placed. The data to be saved is then deposited in (or "written" to) the space, where it adheres to the cells through electromagnetism. The file management system of the operating system software remembers the pathway taken by the computer to get to the spot on the hard drive disk where the file is stored so it may be retraced on demand.

When a file is created, background information about the file is collected automatically by the computer. The information includes the date the file was created, the time it was created, when it was last modified or accessed, who created or edited the file, etc. This information is known as "metadata," or "data about the data." Users do not ordinarily see the metadata, as it is not typically displayed on the monitor. There are dozens of metadata fields containing background information about each Word, Excel, or PowerPoint document, for example. As one might expect, metadata can be important in computer forensics because it reveals whether evidence has been altered or covered up in anticipation of, or during, the litigation.

The search for discoverable evidence also considers whether data or documents have been deleted from storage and destroyed. This may happen intentionally (i.e., to hide evidence) or accidentally (e.g., through auto-purge systems). Pressing the "delete" key does not by itself erase the document or e-mail from the e-mail program, network server, or hard drive. Deleted files end up in the recycle bin, where they may remain for some time, available to be restored. When we delete an e-mail from our inbox, all readers know the message is directed to the "deleted items" folder in Microsoft Outlook. Clicking delete in that folder calls up a warning box on the monitor asking if the user wishes to delete the message "permanently." If "yes" is clicked, many would think the message has been deleted permanently from the system. But it has not been permanently deleted. Clicking delete merely disrupts the pathway necessary for the computer to return to the message, and tells the computer that the space occupied by the message in storage may now be written over by new data.
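To make the "deleted but not erased" idea concrete, here is a toy model in Python. It is not how any real file system is implemented; it simply illustrates the bookkeeping described above, in which deleting a file removes only the pathway entry while the underlying storage cells keep their contents until new data overwrites them.

```python
# Toy model of "delete" vs. "erase": deleting removes only the pathway entry;
# the data stays in the storage cells until new data overwrites them.
disk = ["" for _ in range(8)]          # storage cells (blocks)
file_table = {}                        # pathway: file name -> block numbers

def used_blocks():
    return {i for blocks in file_table.values() for i in blocks}

def save(name, chunks):
    free = [i for i in range(len(disk)) if i not in used_blocks()]
    blocks = free[:len(chunks)]
    for i, chunk in zip(blocks, chunks):
        disk[i] = chunk
    file_table[name] = blocks

def delete(name):
    file_table.pop(name)               # only the pathway is removed

save("memo.txt", ["se", "cr", "et"])
delete("memo.txt")
print(disk)       # ['se', 'cr', 'et', '', ...] -- "deleted" data is still recoverable
save("new.txt", ["xx"])                # overwrites block 0; fragments remain elsewhere
print(disk)       # ['xx', 'cr', 'et', '', ...]
```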
The message (or a document) remains in storage on the network server or in the cells of the hard disk drive. When new e-mail messages are sent or received, or new documents are to be stored on the hard drive, the hard drive read/write heads search for cells that are empty, as well as cells that contain "deleted" but not erased data; that is, they search for "available" storage space. The heads will then write the new data to the empty cells, or they will "overwrite" the data to cells containing deleted but not erased data. If the new data completely overwrites the old data, the latter is erased permanently. But sometimes the overwriting process leaves fragments of the old data untouched and they persist in storage. A job of the computer forensic specialist is to restore deleted data that has not been erased by being overwritten, or to find fragmented data from incomplete overwrites that may contain relevant information.

Data Search Scope

In addition to running search programs to find deleted or fragmented data on the hard drive, the computer forensic specialist may also seek access to the computer's or network's backup tapes, known as "archival" data. Backup tapes copy the system regularly and may contain relevant data not otherwise available. Data searches on backup tapes are more difficult and expensive because data stored on such media is stored in a "sequential" or "linear" manner, and is not formatted typically for ease of access. To get to the data desired, the searcher must review in sequence from the beginning of the data files leading up to the data to be retrieved. This is to be distinguished from "random access" data searches in which the searcher can retrieve records from anywhere in the file in a random sequence, without first having to retrace the steps necessary to get to the record.

Relevant data may also be found in other parts of the user's computer. Data may reside in the computer's "cache" memory used to store frequently accessed data, and in its Internet browsing program that retains information about the user's Internet site visits. Discoverable information may reside in more sites than merely the key witness's desktop or laptop. Those devices likely are part of a network, the server of which contains media on which data is stored. The user's computer may be connected to a personal digital assistant, Blackberry, iPhone, tablet, or other mobile device containing "memory cards." The user's e-mails may reside on recipients' computers or their mobile devices along with e-mail attachments. Data may also exist on the user's Internet service provider's system. The user may have "burned" or copied data to a CD or DVD via laser technology, or loaded data onto a flash drive or other portable storage media. Information may have been digitized onto media found in copiers and scanners. Plus, the user may have left voice-mails on systems containing media where the digitized message may well persist. Obviously forensic data searches involve a wide ambit of potential information sources.

In conclusion, computers process sequences of 1s and 0s along electrical circuits as directed by software program instructions that receive inputs from the user's keyboard or mouse clicks, or from electronically or magnetically stored information found in memory chips or disk drives within the computer. The outcome of the process is then converted to understandable language and displayed on the monitor.
Computer forensics in litigation focuses on the storage capacity of computers that retains information even when it is deleted but not erased. It is interested in the who-what-and-when metadata of documents that proves witness involvement with the documents. It also focuses on the search for discoverable information stored on media beyond the litigant’s laptop or network server. All of this awaits service of the request to produce “electronically stored information.” Reference information supporting this discussion was found in a variety of web-based sources. For an excellent, and readable, explanation of computer science see Irv Englander, The Architecture of Computer Hardware and Systems Software: An Information Technology Approach (1996).
http://hennepin.timberlakepublishing.com/article.asp?article=1512&paper=1&cat=147
In this tutorial we will examine some of the elementary ideas concerning vectors. The reason for this introduction to vectors is that many concepts in science, for example displacement, velocity, force, and acceleration, have a size or magnitude, but also have associated with them the idea of a direction. It is obviously more convenient to represent both quantities by just one symbol: the vector.

Graphically, a vector is represented by an arrow; the arrow defines the direction, and the length of the arrow defines the vector's magnitude. This is shown in Panel 1. If we denote one end of the arrow by the origin O and the tip of the arrow by Q, then the vector may be represented algebraically by OQ. This is often simplified to just Q. The line and arrow above the Q are there to indicate that the symbol represents a vector. Another notation is boldface type, as: Q.

Note that, since a direction is implied, OQ is not the same as QO. Even though their lengths are identical, their directions are exactly opposite; in fact OQ = -QO. The magnitude of a vector is denoted by absolute value signs around the vector symbol: magnitude of Q = |Q|.

The operations of addition, subtraction and multiplication of ordinary algebra can be extended to vectors with some new definitions and a few new rules. There are two fundamental definitions.

#1 Two vectors, A and B, are equal if they have the same magnitude and direction, regardless of whether they have the same initial points, as shown in Panel 2.

#2 A vector having the same magnitude as A but in the opposite direction to A is denoted by -A, as shown in Panel 3.

We can now define vector addition. The sum of two vectors, A and B, is a vector C, which is obtained by placing the initial point of B on the final point of A, and then drawing a line from the initial point of A to the final point of B, as illustrated in Panel 4. This is sometimes referred to as the "tip-to-tail" method. The operation of vector addition as described here can be written as C = A + B.

Vector subtraction is defined as the addition of the negative: C = A - B = A + (-B). The graphical representation is shown in Panel 5. Inspection of the graphical representation shows that we place the initial point of the vector -B on the final point of the vector A, and then draw a line from the initial point of A to the final point of -B to give the difference C.

Associative Law for Addition: A + (B + C) = (A + B) + C. The verification of the Associative Law is shown in Panel 6. If we add A and B we get a vector E, and similarly if B is added to C we get F. If D is the sum of all three vectors, then D = E + C = A + F. Replacing E with (A + B) and F with (B + C), we get (A + B) + C = A + (B + C), and we see that the law is verified. Stop now and make sure that you follow the above proof.

Distributive Laws: (m + n)A = mA + nA, where m and n are two different scalars, and m(A + B) = mA + mB. These laws allow the manipulation of vector quantities in much the same way as ordinary algebraic equations.

Let us consider the two-dimensional (or x, y) Cartesian coordinate system, as shown in Panel 7. We can define a unit vector in the x-direction, denoted i (or sometimes x-hat), and similarly a unit vector in the y-direction, denoted j (or sometimes y-hat).
Any two-dimensional vector can now be represented by employing multiples of the unit vectors i and j, as illustrated in Panel 8. The vector A can be represented algebraically by A = Ax + Ay, where Ax and Ay are vectors in the x and y directions. If Ax and Ay denote the magnitudes of these two vectors, then Ax i and Ay j are the vector components of A in the x and y directions respectively. The actual operation implied by this is shown in Panel 9. Remember that i and j each have a magnitude of 1, so they do not alter the length of the vector; they only give it its direction.

It is perhaps easier to understand this by having a look at an example. Consider an object of mass M placed on a smooth inclined plane, as shown in Panel 10. The gravitational force acting on the object is F = Mg, where g is the acceleration due to gravity. In the unprimed coordinate system, the vector F can be written as F = -Fy j, but in the primed coordinate system F = -Fx' i' + Fy' j'. Which representation to use will depend on the particular problem that you are faced with. For example, if you wish to determine the acceleration of the block down the plane, then you will need the component of the force which acts down the plane, that is, -Fx', which would be equal to the mass times the acceleration.

If D is the sum of three vectors, D = A + B + C, then by resolving each of these three vectors into their components we see the result shown in Panel 11: Dx = Ax + Bx + Cx and Dy = Ay + By + Cy.

Very often in vector problems you will know the length, that is, the magnitude, of the vector, and you will also know its direction. From these you will need to calculate the Cartesian components, that is, the x and y components. The situation is illustrated in Panel 12. Let us assume that the magnitude of A and the angle θ are given; what we wish to know is, what are Ax and Ay? From elementary trigonometry we have cos θ = Ax/|A|, therefore Ax = |A| cos θ, and Ay = |A| cos(90° - θ) = |A| sin θ.

In polar coordinates one specifies the length of the line and its orientation with respect to some fixed line. In Panel 13, the position of the dot is specified by its distance from the origin, that is r, and the position of the line is at some angle θ from a fixed line, as indicated. The quantities r and θ are known as the polar coordinates of the point. It is possible to define fundamental unit vectors in the polar coordinate system in much the same way as for Cartesian coordinates. We require that the unit vectors be perpendicular to one another, that one unit vector be in the direction of increasing r, and that the other be in the direction of increasing θ. In Panel 14, we have drawn these two unit vectors with the symbols r-hat and θ-hat. It is clear that there must be a relation between these unit vectors and those of the Cartesian system. These relationships are given in Panel 15.

There are two ways to multiply vectors: first, the scalar or dot product of two vectors, which results in a scalar; and secondly, the vector or cross product of two vectors, which results in a vector. In this tutorial we shall discuss only the scalar or dot product. The scalar product of two vectors, A and B, denoted by A·B, is defined as the product of the magnitudes of the vectors times the cosine of the angle between them, as illustrated in Panel 16: A·B = |A||B| cos θ. The scalar product obeys the usual algebraic rules, for example A·B = B·A and A·(B + C) = A·B + A·C. In particular we have A·A = |A|², since the angle between a vector and itself is 0 and the cosine of 0 is 1.
Alternatively, we have i·j = 0, since the angle between i and j is 90° and the cosine of 90° is 0. In general, then, if A·B = 0 and neither the magnitude of A nor that of B is 0, then A and B must be perpendicular.

The definition of the scalar product given earlier required a knowledge of the magnitude of A and B, as well as the angle between the two vectors. If we are given the vectors in terms of a Cartesian representation, that is, in terms of i and j, we can use that information to work out the scalar product without having to determine the angle between the vectors: writing A = Ax i + Ay j and B = Bx i + By j and expanding the product gives A·B = AxBx + AyBy, because the remaining terms involve i·j = 0, as we saw earlier.
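The short Python sketch below is an illustrative supplement, not part of the original tutorial. It pulls together the main operations discussed above: building a vector from a magnitude and an angle, adding vectors component-wise, and using the dot product to test for perpendicularity.

```python
import math

def from_polar(magnitude, angle_deg):
    """Cartesian components from magnitude and direction: Ax = |A|cos(theta), Ay = |A|sin(theta)."""
    theta = math.radians(angle_deg)
    return (magnitude * math.cos(theta), magnitude * math.sin(theta))

def add(a, b):
    """Vector addition is component-wise: Dx = Ax + Bx, Dy = Ay + By."""
    return (a[0] + b[0], a[1] + b[1])

def dot(a, b):
    """Scalar (dot) product in Cartesian form: A.B = Ax*Bx + Ay*By."""
    return a[0] * b[0] + a[1] * b[1]

A = from_polar(5.0, 36.87)        # roughly (4, 3)
B = from_polar(5.0, 126.87)       # same length, rotated 90 degrees from A
print(add(A, B))                  # tip-to-tail sum, computed with components
print(round(dot(A, B), 9))        # ~0, so A and B are perpendicular
```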
http://www.physics.uoguelph.ca/tutorials/vectors/vectors.html
In physics, torque is the tendency of a force to rotate the body to which it is applied. Torque is always specified with regard to the axis of rotation. It is equal to the magnitude of the component of the force lying in the plane perpendicular to the axis of rotation, multiplied by the shortest distance between the axis and the direction of the force component. Torque is the quantity that affects rotational motion; the greater the torque, the greater the change in this motion.

A torque (τ) in physics, also called a moment (of force), is a pseudo-vector that measures the tendency of a force to rotate an object about some axis (center). The magnitude of a torque is defined as the product of a force and the length of the lever arm (radius). Just as a force is a push or a pull, a torque can be thought of as a twist. The SI unit for torque is the newton meter (N m). In Imperial and U.S. customary units, it is measured in foot pounds (ft·lbf, also known as 'pound feet') and, for smaller measurements of torque, inch pounds (in·lbf) or even inch ounces (in·ozf). The symbol for torque is τ, the Greek letter tau.

Mathematically, the torque on a particle (which has the position r in some reference frame) can be defined as the cross product

τ = r × F,

where F is the force acting on the particle. The torque on a body determines the rate of change of its angular momentum L:

τ = dL/dt.

As can be seen from either of these relationships, torque is a vector, which points along the axis of the rotation it would tend to cause. For a single particle the angular momentum is L = r × p, where "×" indicates the vector cross product and p is the linear momentum. The time-derivative of this is

dL/dt = r × dp/dt + dr/dt × p.

This result can easily be proven by splitting the vectors into components and applying the product rule. Now using the definitions of velocity v = dr/dt, acceleration a = dv/dt and linear momentum p = mv, we can see that

dL/dt = r × ma + v × mv = r × ma = r × F,

and by definition, torque τ = r × F. Note that there is a hidden assumption that mass is constant; this is quite valid in non-relativistic mechanics. Also, total (summed) forces and torques have been used; it perhaps would have been more rigorous to write the sums explicitly, Στ = dL/dt.

The joule, which is the SI unit for energy or work, is also defined as 1 N m, but this unit is not used for torque. Since energy can be thought of as the result of "force times distance", energy is always a scalar, whereas torque is "force cross distance" and so is a (pseudo) vector-valued quantity. The dimensional equivalence of these units, of course, is not simply a coincidence: a torque of 1 N m applied through a full revolution will require an energy of exactly 2π joules. Mathematically, E = τθ, where θ is the angle of rotation in radians; for one full revolution, E = (1 N m)(2π rad) = 2π J.

In the strict SI system, angles are not given any dimensional unit, because they do not designate physical quantities, despite the fact that they are measurable indirectly simply by dividing two distances (the arc length and the radius). One way to reconcile the two systems would be to say that arc lengths are not measures of distances (given that they are not measured along a straight line, and a full circle of rotation returns to the same position, i.e., a null distance). Arc lengths would then be measured in "radian meters" (rad·m), as distinct from straight segment lengths in meters (m). In such an extended SI system, the perimeter of a circle whose radius is one meter will be 2π rad·m, and not just 2π meters.
If you apply this measure to a rotating wheel in contact with a plane surface, the center of the wheel will move across a distance with the same numerical value in meters only if the contact is efficient and the wheel does not slide. In practice this does not happen unless the surface of contact is constrained, and is then not perfectly plane (so that it can resist the horizontal linear forces applied at the irregularities of the pseudo-plane surface of movement and of the pseudo-circular rotating wheel); but then the system generates friction that loses some of the energy spent by the engine. This lost energy does not change the measurement of the torque or the total energy spent in the system, but it does change the effective distance traveled by the center of the wheel. The difference between the energy spent by the engine and the energy that appears in the linear movement is lost to friction and sliding, and this explains why, when the same non-zero torque is applied constantly to the wheel so that the wheel moves at a constant speed over the surface in contact, there may be no acceleration of the center of the wheel: in that case, the energy spent will be directly proportional to the distance traveled by the center of the wheel, and equal to the energy lost in the system to friction and sliding.

For this reason, when measuring the effective power produced by a rotating engine and the energy spent in the system to generate a movement, you will often need to take the angle of rotation into account. Adding the radian to the unit system then becomes necessary, as does distinguishing the measurement of arcs (in radian meters) from the measurement of straight segment distances (in meters), as a way to compute the efficiency of the mobile system and the capacity of a motor engine to convert between rotational power (in radian watts) and linear power (in watts). In a friction-free ideal system the two measurements would have equal value, but this does not happen in practice, each conversion losing energy to friction (it is easier to limit the losses caused by sliding by introducing mechanical constraints of form on the surfaces of contact). Depending on the work at hand, the extended units including radians as a fundamental dimension may or may not be used.

A very useful special case, often given as the definition of torque in fields other than physics, is that the torque is the product of the force and the moment arm. The construction of the "moment arm" is shown in the accompanying figure, along with the vectors r and F mentioned above. The problem with this definition is that it gives only the magnitude of the torque and not its direction, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the distance to the centre, and the torque will be a maximum for the given force. The equation for the magnitude of a torque arising from a perpendicular force is

τ = rF.

For example, if a person places a force of 10 N on a spanner (wrench) which is 0.5 m long, the torque will be 5 N m, assuming that the person pulls the spanner by applying the force perpendicular to the spanner.
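As a quick numerical check of the spanner example, the sketch below (illustrative only, using NumPy for the vector arithmetic) computes the torque both as the cross product r × F and as the moment-arm product rF for a perpendicular force.

```python
import numpy as np

r = np.array([0.5, 0.0, 0.0])    # lever arm: 0.5 m along x
F = np.array([0.0, 10.0, 0.0])   # 10 N applied perpendicular to the spanner

tau = np.cross(r, F)             # torque vector, pointing along the rotation axis (z)
print(tau)                       # [0. 0. 5.]  -> magnitude 5 N m

# Magnitude form: tau = r * F * sin(theta); here theta = 90 degrees.
print(np.linalg.norm(r) * np.linalg.norm(F) * np.sin(np.pi / 2))   # 5.0
```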
If a force of magnitude F acts at an angle θ to the displacement arm of length r (and within the plane perpendicular to the rotation axis), then from the definition of the cross product, the magnitude of the torque arising is

τ = rF sin θ,

so for a given force and arm length the torque is greatest when the force is perpendicular to the arm.

Understanding the relationship between torque, power and engine speed is vital in automotive engineering, concerned as it is with transmitting power from the engine through the drive train to the wheels. Power is typically a function of torque and engine speed. The gearing of the drive train must be chosen appropriately to make the most of the motor's torque characteristics. Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints). Therefore, these types of engines usually have quite different types of drivetrains from internal combustion engines.

If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through a rotational distance, it is doing work. Power is the work per unit time. However, time and rotational distance are related by the angular speed, where each revolution results in the circumference of the circle being travelled by the force that is generating the torque. The power injected by the applied torque may be calculated as

P = τ · ω.

On the right-hand side this is a scalar product of two vectors, giving a scalar on the left-hand side of the equation. Mathematically, the equation may be rearranged to compute torque for a given power output. Note that the power injected by the torque depends only on the instantaneous angular speed, not on whether the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case, where the power injected by a force depends only on the instantaneous speed, not on the resulting acceleration, if any). In practice, this relationship can be observed in power stations which are connected to a large electrical power grid. In such an arrangement, the generator's angular speed is fixed by the grid's frequency, and the power output of the plant is determined by the torque applied to the generator's axis of rotation. Also, the unit newton meter is dimensionally equivalent to the joule, which is the unit of energy. However, in the case of torque, the unit is assigned to a vector, whereas for energy, it is assigned to a scalar.

If rotational speed is expressed in revolutions per unit time rather than radians per unit time, the relationship becomes P = τ × 2π × rotational speed, where rotational speed is in revolutions per unit time. A useful formula in SI units is

P (kW) = τ (N·m) × 2π × rotational speed (rpm) / 60,000,

where 60,000 comes from 60 seconds per minute times 1000 watts per kilowatt. Some people (e.g., American automotive engineers) use horsepower (imperial mechanical) for power, foot-pounds (lbf·ft) for torque and rpm (revolutions per minute) for angular speed. This results in the formula changing to

P (hp) = τ (lbf·ft) × 2π × rpm / 33,000.

The constant 33,000 ft·lbf/min changes with the definition of the horsepower; for example, using metric horsepower, it becomes approximately 32,550. Use of other units (e.g., BTU/h for power) would require a different custom conversion factor.

By the definition of torque, torque = force × radius. We can rearrange this to determine force = torque/radius. These two values can be substituted into the definition of power:

power = force × linear speed = (torque/r) × (r × angular speed) = torque × angular speed.

The radius r and time t have dropped out of the equation. However, the angular speed must be in radians, by the assumed direct relationship between linear speed and angular speed at the beginning of the derivation.
If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by 2π in the above derivation to give

power = torque × 2π × rotational speed.

If torque is in lbf·ft and rotational speed in revolutions per minute, the above equation gives power in ft·lbf/min. The horsepower form of the equation is then derived by applying the conversion factor 33,000 ft·lbf/min per horsepower:

power (hp) = torque (lbf·ft) × 2π × rotational speed (rpm) / 33,000.
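The unit bookkeeping above is easy to get wrong, so here is a small illustrative sketch using the constants given in the text (60,000 for the SI form and 33,000 ft·lbf/min per horsepower); the torque and speed values are arbitrary examples.

```python
import math

def power_kw(torque_nm, rpm):
    """P(kW) = torque(N m) * 2*pi * rpm / 60000  (60 s/min x 1000 W/kW)."""
    return torque_nm * 2 * math.pi * rpm / 60_000

def power_hp(torque_lbf_ft, rpm):
    """P(hp) = torque(lbf ft) * 2*pi * rpm / 33000 ft lbf/min per horsepower."""
    return torque_lbf_ft * 2 * math.pi * rpm / 33_000

print(power_kw(250, 3000))    # ~78.5 kW for 250 N m at 3000 rpm
print(power_hp(184, 3000))    # ~105 hp for roughly the same torque in lbf ft
```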
http://www.reference.com/browse/wiki/Torque
In the 12th proposition in Chapter 3, Kepler states that the mean motion of a planet is found by taking half of the difference of the arithmetic and geometric means of the extreme motions and subtracting that from their geometric mean. After countless hours spent examining this small proposition, and ignoring the translator's ridiculously formalistic footnotes that don't explain anything, a number of questions, some remaining unanswered, came to the fore.

What do you Mean?

Exactly what kind of a mean is the mean motion? The Greeks knew of at least three different kinds of means. All of these means are best thought of as singularities in a process. The effects of these means are described in terms of music in the Timaeus dialogue. Taking the whole string on a monochord, as is done in Book 3, and dividing it in half defines the boundaries of a process where the end has a certain similarity to the beginning. For example, if you were to ask a man and a woman to sing a note, would they sing the exact same note? More than likely, something ironic would occur. If a tenor and soprano were to sing the note that is relatively in the same "place" in their vocal range, they would be singing two different tones that share a certain quality of sameness. The relationship thus created is the same as that created by dividing our string in half. This is commonly called an octave.

The Greeks further investigated the space in between these boundary points. If you divide the space between these in half again, taking three-fourths of the whole string, a different note emerges. In this process, the string is changed by the same amount from both extremes, but in opposite directions. Because the differences between the mean and the extremes are equal, this was called the arithmetic mean. However, although the differences in length may be the same, the musical difference between the tones is not. What you may call the distance between the tones changes. The quality of change in tone between the whole string and the arithmetic mean is a different quality of change than that between the arithmetic mean and half of the string. As we are already finding out, auditory space, greatly abused in our culture, has its own characteristics that must be explored if one is to navigate it. Otherwise, the science of musical composition will remain forever unknown. As with any type of space, such as auditory or visual, the sensual impressions are simply the effects of principles and should not be confused with the principles themselves. However, if these principles are universal, it should not be surprising if we find a coherence between the effects perceived in these various types of spaces. Perhaps they are, after all, only one space.

Now, the Greeks discovered that this latter musical interval is characteristic of another type of process. The harmonic mean emerges when the change from each of the extremes is the same proportion of each. At this moment the differences between the mean and the extremes are in the same proportion as the extremes themselves. The proportion of the motions from each of the extremes is the same as the proportion of the extremes. This division of the string produces the inverse of the arithmetic division. How did this happen? Let's try to find the principle that produced this effect. How are these sets of relationships that were generated by the arithmetic and harmonic divisions related to one another?
The intervallic relationship between the whole string and the arithmetic mean is the same as the relationship between the harmonic mean and half the string. Similarly, the interval between the arithmetic mean and half the string is the same as the one between the whole string and the harmonic mean. In the animation the tones have been played together to aid the reader.

The key here is proportion. As you shorten the string in an arithmetic way, or at a constant rate, the proportion between the whole string and the shortened length changes in another way. What you hear is this latter proportion. If the proportion between two strings is the same as the proportion between two other strings, no matter how large or small, the difference in tone will always be the same. So, even though in the case of the arithmetic division the distances of the mean from the extremes are equal, nonetheless the proportion is not. And this is what you hear. Between these two types of means, the proportions between the strings have an inverted relationship.

There is a third type of process, by which the string can grow or shrink in constant proportion. See what happens when you apply this to the string. In other words, divide the space between the whole string and half of the string in such a way that the intervallic relations between the mean and the extremes are the same. Since pairs of strings that are in the same proportion produce the same musical intervals, the Greeks utilized this type of transformation to move about the musical realm. This process is called geometric. And this is how the Greeks filled out their diatonic musical scale.

So the Greeks had to find the right proportions that would allow them to move throughout the octave. Between the arithmetic and harmonic means, there is the proportion 8/9. The Greeks constructed the major mode by using the interval of 8/9. Taking this twice gets you to a point where another, smaller interval, 256/243, must be admitted to get to the arithmetic mean. Since the harmonic mean is the inverse of the arithmetic mean, this process can be repeated from the harmonic mean to get to half the string. The minor mode is then constructed by repeating this whole process from half of the string, in the opposite direction.

So, which mean is Kepler talking about when he says the mean motion of the planet? Is he referring to the speed at which the planet must travel if it is to complete the same circuit in the same amount of time moving constantly? If so, how would we find that speed? The speed is changing at every moment. We must find how it is changing. Because of Kepler, we know that the speed of a planet is the inverse of its distance from the sun. Therefore, any inquiry into the speed of the planet, or the process that defines how it changes speed, demands that we look at the distances. Since we also know that Kepler discovered the shape of the orbit to be elliptical, this should be no problem. We just have to know the way that the distances change on an ellipse; but which ellipse? Don't the planets travel on different ellipses? Let's take a closer look.

If we take the perihelial distance as a unit, and the speed of the planet at this position as maximum, what can we come up with? Since the speed changes inversely to the distance, at double the distance the planet must be traveling at half the speed, and at triple the distance, at one third of the speed, and so forth. But here we have a few problems.
These relative speeds occur at different places on these different ellipses. And the speed changes along these elliptical arcs between the isolated moments. If all of these arcs differ from each other in each ellipse, can we know how long they are? Even if we took equal angles, whether from the center or the focus, although we might be able to tell the distance and speed at that moment, we still could not determine the length of these elliptical arcs, or, for that matter, what is happening between these moments. This is not a problem of one ellipse in particular, but of all of them. But luckily Kepler can help us generalize this process of change.

How does the distance change as the planet travels in its orbit? In the Epitome of Copernican Astronomy, Kepler describes how the distance can be measured. In the animation the distance of the planet from the sun is made the radius of a circle. Notice that the circles change size in a non-constant way: more gradually near the extremes, and more quickly near the middle longitudes. As the planet travels around the sun, its distance, and therefore its speed, changes in a non-constant way. Notice furthermore that this non-constant change is also different in different ellipses. Here you can see the distance of the planet from the sun as the radius of a circle. The distance changes non-constantly, while the change in angle at the center is constant. The difference between the greatest distance and the distance at a particular moment (AD) is always in proportion to the versed sine (1 - cos) of the eccentric anomaly (AS). The total change in distance that occurs as the planet travels in its orbit is equal to double the eccentricity, which is what you get by subtracting the perihelial distance from the aphelial distance. If you were to line up the distances for every degree of eccentric anomaly, the pink curve is the curve that would result. In the New Astronomy Kepler calls this curve the conchoid. Since the speed is inversely proportional to the distances, you can see the change of speed throughout the orbit by taking the proportion of the distance to the radius of the circle (the arithmetic mean distance) and applying this proportion to the radius. The resulting curve, in blue, is the acceleration of the planet as it travels through its orbit. These figures correspond to the ones immediately above them.

So, what does this tell us about the speed? It seems that we are at a loss. At every turn we keep finding more evidence that these ellipses, however similar they appear, are very different. But maybe there are universals. Looking at the boundaries, the aphelial and perihelial distances, which determine the greatest and least speeds, can we find some singularities that are true for every ellipse? What is universal in all ellipses?

An ellipse is the figure produced by the intersection of a cone with a particular plane. The projection of this figure onto a plane that is perpendicular to the axis creates another ellipse. Can you find the cone to which this projected ellipse belongs? We will use the characteristics of this projection to investigate the properties of the ellipse. In the projection, the apexes of the cones project to two specific points. These points are called by Kepler the foci, meaning "hearths". This is because of the physical properties of these points: all the light emanating from one of these foci is gathered together again by reflection at the other focus.
In our solar system, all of the planets share the sun as one focus, which Kepler proves in the New Astronomy. From the preceding construction, since the radii of these projected circles grow and shrink at a constant rate, it is easily seen that the sum of the distances from the foci stays constant for every point on the periphery of the ellipse. This being the case, it is also easily seen that at one of the extremes these distances are the aphelial and perihelial distances, whose sum is the total major axis. Taking half of this, the semi-major axis, produces the arithmetic mean between the two. Therefore, the point on the ellipse which is equidistant from both foci has a distance from each that is equal to this arithmetic mean. As the foci change, while keeping the sum of the distances constant, the ellipse becomes more narrow, but the major axis does not change. From the animation we can see that in every ellipse this occurs at the intersection of the semi-minor axis with the ellipse.

What about this length? If we draw a circle with radius equal to the semi-major axis around the ellipse and take the distance drawn perpendicular from the focus to the circle, we generate a height equal to the semi-minor axis. On the left, the two right triangles are equal. This can be proved by noting that they share the same base and they both have as their hypotenuse the radius of equal circles. Therefore, their heights, both perpendiculars to their base, are also equal. On the right, you see how the height FG, equal to the semi-minor axis, is the geometric mean between PF and FA. This height on the circle is the geometric mean between the two lengths on the diameter that are produced by this perpendicular cut, which in our case are the aphelial and perihelial distances. Thus, the semi-minor axis is also the geometric mean between the aphelial and perihelial distances.

But what about the actual distance to the ellipse at this position, 90 degrees from the focus? To find this, we can apply a relationship known to Kepler, but discovered by the ancients. The line drawn perpendicular from the diameter to the circle is cut by the ellipse in the proportion of the minor to the major axis of the ellipse, or, as we just discovered, the geometric to the arithmetic mean of the aphelial and perihelial distances. This line dropped perpendicular to the diameter of the circle from its perimeter, LP, is cut by the ellipse in the proportion of the minor axis to the major axis. So we can apply the proportion from Archimedes to this height on the circle to find the height on the ellipse.

Now, earlier we said that the harmonic mean divides a line in such a way that the differences between it and the extremes are in the same proportion as the extremes. If we can show that this is the relationship that this height has to the extreme distances, then we have found our harmonic mean in the ellipse. Let H be the harmonic mean, so that P : A :: (H - P) : (A - H). This gives us

AP - HP = HA - AP
2AP = HA + HP
2AP = H(A + P)
2AP/(A + P) = H.

Hence, the height of the ellipse 90 degrees from the focus is the harmonic mean between the perihelial and aphelial distances. We have now found three singularities in the distances that are universal to all ellipses. In addition to that, with the use of a circle and ellipse, we can generate all three of these means that were known by the ancients, for any two numbers. Now that we have isolated what is universal to all ellipses, can we use this to tell us about the way a planet is traveling?
Well, what we know so far is that the maximum speed is at perihelion and the minimum speed is at aphelion. What about at these mean distances? Since the speed changes inversely to the distances, we can determine three things. At the geometric mean distance, the planet is traveling at the geometric mean of the speeds. At the arithmetic mean distance, we find the harmonic mean of the speeds. And at the harmonic mean of the distances, we find the arithmetic mean speed! This inverse relationship should remind you of our divided string. But again, although these are universal, they are still singular moments in a process that is always changing. And unless we can know what is happening between these moments, how can we grasp what that process of change is? And so, our question is still unanswered. What is the mean motion that Kepler is referring to?
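As a numerical aside, the three relationships claimed above can be checked directly. The sketch below is illustrative only: it uses an arbitrary pair of apsidal distances rather than any particular planet, and applies the text's rule that speed varies inversely with distance.

```python
import math

P, A = 1.0, 1.6          # perihelial and aphelial distances (arbitrary units)
k = 1.0                  # proportionality constant: speed = k / distance

arith = (A + P) / 2                     # arithmetic mean distance
geom = math.sqrt(A * P)                 # geometric mean distance
harm = 2 * A * P / (A + P)              # harmonic mean distance

v_max, v_min = k / P, k / A             # speeds at perihelion and aphelion

print(k / geom, math.sqrt(v_max * v_min))        # geometric mean distance -> geometric mean speed
print(k / arith, 2 / (1 / v_max + 1 / v_min))    # arithmetic mean distance -> harmonic mean speed
print(k / harm, (v_max + v_min) / 2)             # harmonic mean distance -> arithmetic mean speed
```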
http://science.larouchepac.com/kepler/harmony-old/site.php?goto=ellipse.html
In physics, in terms of a metric theory of gravitation, a gravitational wave is a fluctuation in the curvature of space-time which propagates as a wave. Gravitational radiation results when gravitational waves are emitted from some object or system of gravitating objects.

Gravitational waves are very weak. The strongest gravitational waves we can expect to observe on Earth would be generated by very distant and ancient events in which a great deal of energy moved very violently (examples include the collision of two neutron stars, or the collision of two supermassive black holes). Such a wave should cause relative changes in distance everywhere on Earth, but these changes should be on the order of at most one part in 10^21. In the case of the arms of the LIGO gravitational wave detector, this is less than one thousandth of the "diameter" of a proton. This should give some indication of why it has proven very difficult to detect even the strongest gravitational waves!

The existence and indeed ubiquity of gravitational waves are an unambiguous prediction of Einstein's theory of general relativity. All competing gravitation theories currently thought to be viable (that is, apparently in agreement with all available evidence at the current level of accuracy) feature predictions about the nature of gravitational radiation. In principle, these predictions are sometimes significantly different from those of general relativity, but unfortunately, at present it seems to be sufficiently challenging simply to directly confirm the existence of gravitational radiation, much less study its detailed properties.

Although gravitational radiation has not yet been unambiguously and directly detected, there is already significant indirect evidence for its existence. Most notably, observations of the binary pulsar PSR B1913+16, which is thought to consist of two neutron stars orbiting rather tightly and rapidly around each other, have revealed a gradual inspiral at exactly the rate predicted by general relativity. The simplest (and almost universally accepted) explanation for these observations is that general relativity must give an accurate account of gravitational radiation in such systems. Joseph H. Taylor Jr. and Russell A. Hulse shared the Nobel Prize in Physics in 1993 for this work.

In Einstein's theory of general relativity, gravitation is, essentially, identified with spacetime curvature. In the famous slogan promulgated by John Archibald Wheeler, matter tells spacetime how to curve, and spacetime tells matter how to move. For example, humans feel the ground pressing against their feet (or backside, depending on stance). From the viewpoint of general relativity, this means that contact with the ground is preventing them from falling freely, thereby accelerating them. Since acceleration is identified with bending of world lines, this means that the world line of a human who is not in freefall is not a geodesic. On the other hand, far from any mass-energy, spacetime is almost perfectly flat, so geodesics behave much like straight lines in familiar solid geometry, and small objects can exhibit rectilinear inertial motion.

General relativity (and similar theories of gravitation) is expressed by writing down a field equation (and possibly an equation of motion, if this is not implicit from the field equation, as it is, remarkably, in the case of general relativity).
That is, these are classical relativistic field theories in which the gravitational field is at least partially identified with spacetime curvature. As a more or less inevitable consequence, in these theories the rapid motion of mass-energy in some region will generate ripples in spacetime itself which radiate outward as gravitational waves. Indeed, this is in a sense how "field updating information" is typically conveyed from place to place.

Like electromagnetic radiation, in general relativity (and many other theories) gravitational radiation travels at the speed of light and is transverse (meaning that the major effects of a gravitational wave on the motion of test particles occur in a plane orthogonal to the direction of propagation). However (roughly speaking):
- gravitational waves represent perturbations in a second rank tensor field (and, borrowing a term from quantum field theory, are said to be spin-two),
- electromagnetic waves result from perturbations in a vector field (and are said to be spin-one),
- various other waves treated in physics result from perturbations in a scalar field (and are said to be spin-zero).

In electromagnetism, certain motions of charged particles, like electrons, will radiate electromagnetic waves. Analogously in gravity, certain motions of mass or energy will radiate gravitational waves. In the quantum field theory which arises from classical electromagnetism, called quantum electrodynamics, there is a massless particle associated with electromagnetic radiation, called the photon. Attempts to create an analogous quantum field theory for classical gravity (general relativity) led to an analogous concept, a massless particle called the graviton. However, it turns out that this route to quantizing general relativity ultimately fails, which renders the possible role of the graviton somewhat problematic in gravitational physics. Probably it is best to think of this notion as one which arises in an approximation and which has some virtues, but which is probably not as fundamental as the notion of the photon.

The Nature of Gravitational Waves

Gravitational waves represent fluctuations in the metric of space-time. That is, they alter the relative distance between test particles. It follows that to directly detect a gravitational wave, you should in essence look for tiny relative motions between two objects. In the case of the LIGO detectors, this is essentially relative motion between two suspended mirrors, and as we saw above the motion to be detected is far smaller than the size of an atom, in fact smaller than the "size" of an atomic nucleus. Since thermal motion in each mirror is far larger than this, understanding why anyone would expect LIGO to work takes some explaining! (See LIGO.)

Imagine a perfectly flat region of spacetime, with a collection of mutually motionless test particles. Along comes a monochromatic linearly polarized gravitational wave. What happens to the test particles? Roughly speaking, they will oscillate in a cruciform manner, orthogonal to the direction of motion:
- first, East/West separated particles draw together while North/South separated particles draw apart,
- next, East/West separated particles draw apart while North/South separated particles draw together,
and so forth. (Diagonally separated particles exhibit a relative motion which is more difficult to describe verbally, but which is more or less implied by this description.)
The cross-sectional area of a small box of test particles is invariant under these changes, and there is negligible motion in the direction of propagation (at least, neglecting gravitomagnetic effects; that is, we are tacitly assuming that the relative motion of our test particles is not very rapid). A monochromatic circularly polarized wave induces a similar cruciform oscillation, except that the cross pattern rotates with the same frequency as the cruciform oscillation. Interestingly enough, after the wave has passed, there may be some residual "secular" relative motion of the test particles.

There are also some interesting optical effects. If, before the wave arrives, we look through the oncoming wavefronts at objects behind these wavefronts, we can see no optical distortion (if we could, of course, we would have advance notice of its impending arrival, in violation of the principle of causality). But if, after the wave has passed by, we turn and look through the departing wavefronts at objects which the wave has not yet reached, we will see optical distortions in the images of small shapes such as galaxies. Unfortunately, this is an utterly impractical method of detecting the very weak waves we can expect to occur in the vicinity of the solar system.

Sources of Gravitational Waves

Gravitational waves are caused by certain motions of mass or energy. The type of motion required is different from that in electromagnetism, where any accelerating charge will radiate electromagnetic waves. According to general relativity, the quadrupole moment (or some higher moment) of an isolated system must be time-varying in order for it to emit gravitational radiation. Here are some examples which illustrate when we should (assuming general relativity gives accurate predictions) expect a system to emit gravitational radiation:
- An isolated object in approximately "rectilinear" motion will not radiate. (Needless to say, this motion is with respect to some observer and can be only approximately rectilinear. Technically, this entails defining a weakly gravitating system possessing a time-varying dipole moment but stationary quadrupole moment, with all moments being taken with respect to the origin.) This can be regarded as a consequence of the principle of conservation of linear momentum. (Caveat: this example is trickier than it looks, and in the case of a small object falling toward a large one, say, it leads to one of the most vexed questions in general relativity, the problem of treating radiation reaction.)
- A spherically pulsating spherical star (nonzero and non-stationary monopole moment or mass, but vanishing and hence stationary quadrupole moment) will not radiate, in agreement with Birkhoff's theorem.
- A spinning disk (nonzero but stationary monopole and quadrupole moments) will not radiate. This can be regarded as a consequence of the principle of conservation of angular momentum. (Caveat: in general relativity, unlike Newtonian gravitation, a spinning disk will not generate an external field identical to the field of an equivalent but non-spinning disk, due to gravitomagnetic effects, but this does not contradict the absence of radiation. Roughly speaking, the field is generated as we concentrate matter, and if that matter has some angular momentum but we end up with a stationary external gravitational field, that field will exhibit gravitomagnetism but not radiation.)
- Two objects mounted on the endpoints of an isolated extensible curtain rod, which is provided with some kind of engine and which oscillates long/short/long with frequency ω, give a system with a time-varying quadrupole moment, so this system will radiate. Observers far from the rod and in the equatorial plane of the rod will observe linearly polarized radiation (aligned with the rod) with frequency ω. Observers lying on the axis of symmetry of the rod will observe no radiation, however.
- A spinning non-axisymmetric planetoid (say with a large bump or dimple on the equator) will define a system with a time-varying quadrupole moment, so this system will radiate. As an idealization, one can study an isolated uniform curtain rod which is spinning with angular frequency ω about a rotation axis orthogonal to the rod, but passing through some point other than the centroid of the rod. This gives a system with a time-varying quadrupole moment, so the system will radiate. Observers far from the system and lying in the plane of rotation will observe linearly polarized radiation with frequency 2ω. Observers far from the system and near its axis of symmetry will observe circularly polarized radiation with frequency ω.
- Two objects orbiting each other with angular frequency ω in a quasi-Keplerian planar orbit give a system with a time-varying quadrupole moment, so this system will radiate. Observers far from the system and in its equatorial plane will observe linearly polarized radiation with frequency 2ω. Observers far from the system and lying on its axis of symmetry will observe circularly polarized radiation with frequency ω.
The last three examples illustrate a general rule of thumb: far from a radiating system, the projection of the system on the "viewing plane" affords a rough-and-ready indication of what kind of radiation will be observed. These examples (and others) are most commonly studied using a simplified version of general relativity, sometimes called linearized general relativity, which gives indistinguishable results in the case of weak gravitational fields. (The external field of our Sun would be considered "weak" in this terminology.) Similar conclusions hold for the fully nonlinear theory, but it is much more difficult to obtain analytic results outside the domain of the linearized theory. This is one reason why so much work on phenomena such as the collision and merger of two black holes currently requires numerical analysis.
Gravitational radiation carries energy away from a radiating system. Consequently, in the case of the quasi-Keplerian system discussed above, the two objects will gradually spiral in towards one another, becoming more tightly bound to compensate for this loss of energy. The predicted rate of this inspiral can be computed using the linearized approximation, and the result gives excellent agreement with observations of binary pulsars (this is the theoretical basis for the Nobel Prize awarded to Hulse and Taylor). In the late stages of the inspiral of two neutron stars or black holes, however, the linearized theory is no longer adequate, so one must resort to more complicated approximations, and eventually to numerical simulations. Similarly, in the case of the eccentric rotating rod, the frequency will decrease as the radiation gradually carries off energy from the system.
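For orientation, the standard quadrupole-approximation results for a circular binary can be put in code. This is only a rough sketch: the formulas below are the textbook leading-order expressions for the radiated power and the orbital shrink rate, and the masses and separation are assumed, illustrative values loosely modelled on a close double neutron star (a real eccentric binary such as the Hulse-Taylor pulsar radiates noticeably more than this circular-orbit estimate).

```python
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_SUN = 1.989e30    # kg

def gw_power_circular(m1, m2, a):
    """Quadrupole-approximation luminosity (W) of a circular binary
    with masses m1, m2 (kg) and separation a (m)."""
    return (32.0 / 5.0) * G**4 * (m1 * m2)**2 * (m1 + m2) / (c**5 * a**5)

def decay_rate_circular(m1, m2, a):
    """da/dt (m/s) for the same binary; the orbit shrinks as energy is radiated."""
    return -(64.0 / 5.0) * G**3 * m1 * m2 * (m1 + m2) / (c**5 * a**3)

# Assumed values, not taken from the article.
m1 = m2 = 1.4 * M_SUN
a = 2.0e9  # metres, roughly the scale of a close binary pulsar orbit
print("radiated power ~ %.3e W" % gw_power_circular(m1, m2, a))
print("orbital shrink ~ %.3e m per year" % (abs(decay_rate_circular(m1, m2, a)) * 3.156e7))
```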
We stress that some theories of gravitation give significantly different predictions concerning the nature and generation of gravitational radiation, while others give predictions which are almost identical to those of general relativity. All currently known theories other than general relativity are either in disagreement with observation, or in some sense more complicated than general relativity (see for example Brans-Dicke theory for an example illustrating the latter possibility). - Our solar system can not radiate very much gravitational radiation because the planets are mostly on the same plane of orbit with spins aligned to that plane. And even if this were not the case, the gravitational forces in our solar system are very weak with the exception of Mercury and the Sun. - If two spinning black holes were to collide, they could emit an enormous amount of gravitational radiation and lose energy in the process. As mentioned above, observations of the binary pulsar PSR J0737-3039 appeared to confirm predictions of general relativity with respect to energy emitted by gravitational waves, with the system's orbit observed to shrink 7 mm per day. This further confirms predictions made by Russell Alan Hulse and Joseph Hooton Taylor Jr. observing binary pulsar PSR B1913+16, for the discovery and analysis of which they were awarded the Nobel Prize in Physics in 1993. But to directly detect gravitational waves you would have to look for any motion they cause. Typically you would look for the expansion and contraction oscillations caused by the gravitational wave. A simple version of this setup is called a Weber bar -- a large, solid piece of metal with electronics attached to detect any vibrations. Unfortunately, Weber bars are not likely to be sensitive enough to detect anything but very powerful gravitational waves. A more sensitive version is the Interferometer, with test masses placed as many as four kilometers apart. Ground-based interferometers such as LIGO are now coming on line. The motion to be detected would be very slight -- a small fraction of the width of an atom, over a distance of four kilometers. Space-based interferometers, such as LISA are also being developed. One reason for the lack of direct detection so far is that the gravitational waves that we expect to be produced in nature are very weak, so that the signals for gravitational waves, if they exist, are buried under noise generated from other sources. Reportedly, ordinary terrestrial sources would be undetectable, despite their closeness, because of the great relative weakness of the gravitational force. Gravitational radiation has not been directly observed, although there are a number of existing and proposed experiments such as LIGO that intend to do so. A number of teams are working on making more sensitive and selective gravitational wave detectors and analysing their results. A commonly used technique to reduce the effects of noise is to use coincidence detection to filter out events that do not register on both detectors. There are two common types of detectors used in these experiments: - laser interferometers, which use long light paths, such as GEO, LIGO, TAMA, VIRGO, ACIGA and the space-based LISA; - resonant mass gravitational wave detectors which use large masses at very low temperatures such as AURIGA, ALLEGRO, EXPLORER and NAUTILUS. There are other prospects such as MiniGRAIL, a spherical gravitational wave antenna based at Leiden University. 
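To make the "small fraction of the width of an atom" claim concrete, a back-of-the-envelope estimate is simply ΔL = h·L. The strain value used below is an assumed, typical design target for such detectors, not a figure quoted in the text.

```python
# Rough arm-length change for a 4 km interferometer under an assumed strain.
h = 1e-21          # dimensionless strain (assumed order of magnitude)
L = 4_000.0        # arm length in metres (LIGO-scale)

delta_L = h * L
atom_diameter = 1e-10      # m, order of magnitude
proton_diameter = 1.7e-15  # m, order of magnitude

print("arm length change:     %.1e m" % delta_L)
print("fraction of an atom:   %.1e" % (delta_L / atom_diameter))
print("fraction of a nucleus: %.1e" % (delta_L / proton_diameter))
```

The result, a few times 10^-18 m, is indeed far smaller than an atomic nucleus, which is why isolating the mirrors from every other source of motion is the central engineering problem.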
Some scientists even want to use the moon as a giant gravitational wave detector. The moon should be somewhat pliable to the contortions caused by gravitational waves. In November 2002, a team of Italian researchers at the Instituto Nazionale di Fisica Nucleare and the University of Rome La Sapienza produced an analysis of their experimental results that may be further indirect evidence of the existence of gravitational waves. Their paper, entitled "Study of the coincidences between the gravitational wave detectors EXPLORER and NAUTILUS in 2001", is based on a statistical analysis of the results from their detectors which shows that the number of coincident detections is greatest when both of their detectors are pointing into the center of our Milky Way galaxy. Bruce Allen of UWM's LIGO Scientific Collaboration (LSC) group is leading the development of the Einstein@Home project, developed to search data for signals coming from selected, extremely dense, rapidly rotating stars observed from LIGO in the US and the GEO 600 gravitational wave observatory in Germany . Such sources are believed to be either quark stars or neutron stars; a subclass of these stars are already observed by conventional means and are known as pulsars, electromagnetic wave-emitting celestial bodies. If some of these stars are not quite near-perfectly spherical, they should emit gravitational waves, which LIGO and GEO 600 may begin to detect. Einstein@Home is a small part of the LSC scientific program. It has been set up and released as a distributed computing project similar to SETI@home. That is, it relies on computer time donated by private computer users to process data generated by LIGO's and GEO 600's search for gravity waves. Scientists are eager to directly measure gravitational waves from astronomical sources, as they can probe phenomena that are difficult or impossible to study with electromagnetic radiation. For instance, although a black hole emits no visible radiation in the way that a regular star does, gravitational waves can be emitted when an object falls into a black hole, or when two black holes collide. If the inspiraling mass is significantly smaller than the central black hole, the emitted gravitational waves may, at least in some circumstances, allow physicists to directly probe the spacetime geometry around the event horizon (such observations are a primary goal of the LISA mission). Also, because gravitational waves are so weak (and thus difficult to detect), objects opaque to light are often transparent to gravitational radiation. In particular, gravitational waves could propagate while the universe was still opaque to light (i.e., at times before recombination). In this way, gravitational waves could help reveal information about the very structure of the universe. In contrast to electromagnetic radiation, it is not fully understood what difference the presence of gravitational radiation would make for the workings of the universe. A sufficiently strong sea of primordial gravitational radiation, with an energy density exceeding that of the big bang electromagnetic radiation by a few orders of magnitude, would shorten the life of the universe, violating existing data that show it is at least 13 billion years old. 
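The coincidence technique mentioned earlier (and used in the EXPLORER/NAUTILUS analysis) can be illustrated with a toy filter: keep only events whose arrival times at two detectors agree to within some window. This is purely a schematic sketch; the event lists, window size and matching rule are invented for illustration and do not represent the analysis actually performed by those collaborations.

```python
def coincident_events(times_a, times_b, window=0.1):
    """Return pairs of event times (one from each detector) that lie
    within `window` seconds of each other; everything else is treated
    as local noise and discarded."""
    times_b = sorted(times_b)
    matches = []
    for ta in sorted(times_a):
        close = [tb for tb in times_b if abs(tb - ta) <= window]
        if close:
            matches.append((ta, min(close, key=lambda tb: abs(tb - ta))))
    return matches

# Invented event times (seconds): only the ones near 12.3 and 47.8 coincide.
detector_a = [3.1, 12.30, 29.9, 47.80]
detector_b = [12.33, 18.5, 47.77, 60.2]
print(coincident_events(detector_a, detector_b))
```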
More promising is the hope of detecting waves emitted by sources on astronomical scales, such as:
- supernovae or gamma-ray bursts;
- "chirps" from inspiraling, coalescing binary stars;
- periodic signals from spherically asymmetric neutron stars or quark stars;
- stochastic gravitational wave background sources.
Perturbation of Flat Space-time
Consider that the full metric g is nearly the flat metric η plus some small perturbation h:
- g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}.
The Einstein equation in vacuum is
- R_{\mu\nu} = 0,
where R is the Ricci curvature. We will expand R perturbatively in powers of h. The zeroth-order term can only be a function of the flat metric and is therefore identically zero. As the perturbation is to be small, we will solve only for the first-order term and ignore all higher orders:
- \delta R_{\mu\nu} = 0,
where δRμν is the deviation from the flat (and thus zero) Ricci curvature that depends linearly on the perturbation h. Now we need the formula for the Ricci curvature,
- R_{\mu\nu} = \partial_\alpha \Gamma^\alpha_{\mu\nu} - \partial_\nu \Gamma^\alpha_{\mu\alpha} + \Gamma^\alpha_{\alpha\beta}\Gamma^\beta_{\mu\nu} - \Gamma^\alpha_{\nu\beta}\Gamma^\beta_{\mu\alpha},
where Γ are the Christoffel symbols and \partial_\alpha is shorthand for \partial/\partial x^\alpha. Only the first two terms, which are linear in Γ, will contribute to the first-order correction. Next we need the formula for the Christoffel symbols,
- \Gamma^\alpha_{\mu\nu} = \tfrac{1}{2} g^{\alpha\beta}\left(\partial_\mu g_{\beta\nu} + \partial_\nu g_{\beta\mu} - \partial_\beta g_{\mu\nu}\right).
Seeing as the flat metric is constant, the only first-order terms will involve derivatives of the perturbation:
- \delta\Gamma^\alpha_{\mu\nu} = \tfrac{1}{2} \eta^{\alpha\beta}\left(\partial_\mu h_{\beta\nu} + \partial_\nu h_{\beta\mu} - \partial_\beta h_{\mu\nu}\right).
The linearized Einstein equation now becomes
- \Box h_{\mu\nu} - \partial_\mu V_\nu - \partial_\nu V_\mu = 0,
where V_\alpha substitutes for the expression \partial^\beta h_{\beta\alpha} - \tfrac{1}{2}\partial_\alpha h^\beta{}_\beta, and \Box = \eta^{\alpha\beta}\partial_\alpha\partial_\beta is the d'Alembertian or 4-Laplacian. Raising and lowering indices can be tricky: to first order you only use the flat metric. Also note that the inverse metric is the flat metric minus the perturbation, plus higher-order terms.
Next we choose a particular coordinate system in which V_\alpha is identically zero. Some proof is necessary to make sure this is possible, but it is. We are left with a wave equation and our gauge condition:
- \Box h_{\mu\nu} = 0, together with V_\alpha = 0.
From experience with simpler wave equations we can guess the general form of the solution,
- h_{\mu\nu} = A_{\mu\nu}\cos(k_\alpha x^\alpha),
where k is a null vector. The wave equation is now satisfied, but what choices of A will satisfy the gauge condition we used? If we do not want coordinate transformations to disturb our choice of gauge, we had better make the wave traceless, A^\mu{}_\mu = 0, and transverse, k^\mu A_{\mu\nu} = 0. For a wave traveling in the z direction, k = (1,0,0,1), the perturbation will take the following form:
- h_{\mu\nu} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & a & b & 0 \\ 0 & b & -a & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \cos(t - z).
Thus the oscillations are transverse spatial distortions. The wave is called spin-2 because a rotation of the transverse axes by 45° maps the two polarizations a and b into one another (for light, which is spin-1, the corresponding angle is 90°). a is called the plus polarization and b is called the cross polarization.
Perturbative versus Exact
Gravitational waves differ markedly from electromagnetic waves in that electromagnetic waves can be derived exactly from Maxwell's equations. However, gravitational waves, thought of as linear, spin-2 waves, are only perturbations of certain space-time geometries. In other words, classically there are always exactly linear, spin-1 electromagnetic waves, but there are never exactly linear, spin-2 gravitational waves. There are still wave-like fluctuations, but in general things are nonlinear, as is always the case in general relativity. This is one of the reasons there may be no graviton. The thing that is analogous to electromagnetic radiation is the Weyl curvature, not the linear, spin-2 wave.
Gravitational waves transmit energy
Within parts of the scientific community there was initially some confusion as to whether gravitational waves could transmit energy as electromagnetic waves can. This confusion came from the fact that gravitational waves have no local energy density -- no contribution to the stress-energy tensor. Unlike Newtonian gravity, Einstein gravity is not a force theory.
Gravity is not a force in general relativity; it is geometry. The field was therefore thought not to contain energy, in the way a gravitational potential would. But the field most certainly can carry energy, since it can do mechanical work at a distance. This has been demonstrated using stress-energy pseudotensors, which track the energy being transported, and by showing that radiation can carry energy out to infinity.
- LIGO, an American gravitational wave detector.
- VIRGO and GEO 600, two European detectors.
- TAMA, a Japanese detector.
- Discussion of gravitational radiation on the USENET physics FAQ
- Table of gravitational wave detectors
- Laser Interferometer Gravitational Wave Observatory. LIGO Laboratory, California Institute of Technology.
- Info page for "Einstein@Home," a distributed computing project processing raw data from LIGO Laboratory, at Caltech, searching for gravitational waves
- Home page for the Einstein@Home project
- The Italian researchers' paper analyzing data from EXPLORER and NAUTILUS
- Center for Gravitational Wave Physics. National Science Foundation [PHY 01-14375].
- Australian International Gravitational Research Center. University of Western Australia.
- TAMA project. Developing advanced techniques for km-sized interferometers.
- Could superconductors transmute electromagnetic radiation into gravitational waves? -- Scientific American article
- Science to ride gravitational waves, BBC News (Nov 2005 announcement of the science run of the LIGO and GEO 600 gravitational wave detectors).
- B. Allen, et al., Observational Limit on Gravitational Waves from Binary Neutron Stars in the Galaxy. The American Physical Society, March 31, 1999.
- Gravitational Radiation. Davis Associates, Inc.
- Amos, Jonathan, Gravity wave detector all set. BBC, February 28, 2003.
- Rickyjames, Doing the (Gravity) Wave. SciScoop, December 8, 2003.
- Will, Clifford M., The Confrontation between General Relativity and Experiment. McDonnell Center for the Space Sciences, Department of Physics, Washington University, St. Louis MO.
- Chakrabarty, Indrajit, "Gravitational Waves: An Introduction". arXiv:physics/9908041 v1, Aug 21, 1999.
http://www.exampleproblems.com/wiki/index.php/Gravitational_radiation
Microsatellites, also known as Simple Sequence Repeats (SSRs) or short tandem repeats (STRs), are repeating sequences of 2-6 base pairs of DNA. They are a type of variable number tandem repeat (VNTR). Microsatellites are typically co-dominant. They are used as molecular markers in genetics, for kinship, population and other studies. They can also be used for studies of gene duplication or deletion, marker assisted selection, and fingerprinting.
One common example of a microsatellite is a (CA)n repeat, where n varies between alleles. These markers often present high levels of inter- and intra-specific polymorphism, particularly when the number of repetitions is 10 or greater. The repeated sequence is often simple, consisting of two, three or four nucleotides (di-, tri-, and tetranucleotide repeats respectively), and can be repeated 3 to 100 times, with the longer loci generally having more alleles due to the greater potential for slippage (see below). CA nucleotide repeats are very frequent in human and other genomes, and are present every few thousand base pairs. As there are often many alleles present at a microsatellite locus, genotypes within pedigrees are often fully informative, in that the progenitor of a particular allele can often be identified. In this way, microsatellites are ideal for determining paternity, population genetic studies and recombination mapping. They are also the only molecular marker to provide clues about which alleles are more closely related. Microsatellites are also predictors of SNP density, as regions of thousands of nucleotides flanking microsatellites have an increased or decreased density of SNPs depending on the microsatellite sequence.
The variability of microsatellites is due to a higher rate of mutation compared to other neutral regions of DNA. These high rates of mutation can be explained most frequently by slipped strand mispairing (slippage) during DNA replication on a single DNA strand. Mutation may also occur during recombination during meiosis, although genomic microsatellite distributions are associated with sites of recombination most probably as a consequence of repetitive sequences being involved in recombination rather than being a consequence of it. Some errors in slippage are rectified by proofreading mechanisms within the nucleus, but some mutations can escape repair. The size of the repeat unit, the number of repeats and the presence of variant repeats are all factors, as well as the frequency of transcription in the area of the DNA repeat. Interruption of microsatellites, perhaps due to mutation, can result in reduced polymorphism. However, this same mechanism can occasionally lead to incorrect amplification of microsatellites; if slippage occurs early on during PCR, microsatellites of incorrect lengths can be amplified.
Analysis of microsatellites
Microsatellites can be amplified for identification by the polymerase chain reaction (PCR) process, using the unique sequences of flanking regions as primers. DNA is repeatedly denatured at a high temperature to separate the double strand, then cooled to allow annealing of primers and the extension of nucleotide sequences through the microsatellite. This process results in production of enough DNA to be visible on agarose or polyacrylamide gels; only small amounts of DNA are needed for amplification, because thermocycling creates an exponential increase in the replicated segment.
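As a rough illustration of that exponential increase, the copy number after n thermal cycles can be sketched as initial_copies × (1 + efficiency)^n. The efficiency figure below is an assumed, idealized value chosen only for illustration.

```python
def pcr_copies(initial_copies, cycles, efficiency=0.9):
    """Idealized PCR yield: each cycle multiplies the template by
    (1 + efficiency), where efficiency = 1.0 would be perfect doubling."""
    return initial_copies * (1.0 + efficiency) ** cycles

# Starting from 100 template molecules (assumed), at 90% per-cycle efficiency:
for n in (10, 20, 30):
    print(n, "cycles ->", f"{pcr_copies(100, n):.3e}", "copies")
```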
With the abundance of PCR technology, primers that flank microsatellite loci are simple and quick to use, but the development of correctly functioning primers is often a tedious and costly process. Creation of microsatellite primers If searching for microsatellite markers in specific regions of a genome, for example within a particular exon of a gene, primers can be designed manually. This involves searching the genomic DNA sequence for microsatellite repeats, which can be done by eye or by using automated tools such as repeat masker. Once the potentially useful microsatellites are determined (removing non-useful ones such as those with random inserts within the repeat region), the flanking sequences can be used to design oligonucleotide primers which will amplify the specific microsatellite repeat in a PCR reaction. Random microsatellite primers can be developed by cloning random segments of DNA from the focal species. These random segments are inserted into a plasmid or bacteriophage vector, which is in turn implanted into Escherichia coli bacteria. Colonies are then developed, and screened with fluorescently–labelled oligonucleotide sequences that will hybridize to a microsatellite repeat, if present on the DNA segment. If positive clones can be obtained from this procedure, the DNA is sequenced and PCR primers are chosen from sequences flanking such regions to determine a specific locus. This process involves significant trial and error on the part of researchers, as microsatellite repeat sequences must be predicted and primers that are randomly isolated may not display significant polymorphism. Microsatellite loci are widely distributed throughout the genome and can be isolated from semi-degraded DNA of older specimens, as all that is needed is a suitable substrate for amplification through PCR. More recent techniques involve using oligonucleotide sequences consisting of repeats complementary to repeats in the microsatellite to "enrich" the DNA extracted (Microsatellite enrichment). The oligonucleotide probe hybridizes with the repeat in the microsatellite, and the probe/microsatellite complex is then pulled out of solution. The enriched DNA is then cloned as normal, but the proportion of successes will now be much higher, drastically reducing the time required to develop the regions for use. However, which probes to use can be a trial and error process in itself. ISSR (for inter-simple sequence repeat) is a general term for a genome region between microsatellite loci. The complementary sequences to two neighboring microsatellites are used as PCR primers; the variable region between them gets amplified. The limited length of amplification cycles during PCR prevents excessive replication of overly long contiguous DNA sequences, so the result will be a mix of a variety of amplified DNA strands which are generally short but vary much in length. Sequences amplified by ISSR-PCR can be used for DNA fingerprinting. Since an ISSR may be a conserved or nonconserved region, this technique is not useful for distinguishing individuals, but rather for phylogeography analyses or maybe delimiting species; sequence diversity is lower than in SSR-PCR, but still higher than in actual gene sequences. In addition, microsatellite sequencing and ISSR sequencing are mutually assisting, as one produces primers for the other. 
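For a concrete sense of what "searching the genomic DNA sequence for microsatellite repeats" involves, here is a minimal sketch that uses a regular expression to pull perfect di- and trinucleotide tandem repeats out of a sequence string. Real tools such as RepeatMasker or MISA are far more sophisticated (handling imperfect repeats, compound loci and genome-scale input), and the example sequence below is invented.

```python
import re

def find_microsatellites(seq, min_unit=2, max_unit=3, min_repeats=5):
    """Report perfect tandem repeats of short motifs (e.g. (CA)n) in `seq`."""
    hits = []
    for unit_len in range(min_unit, max_unit + 1):
        # A motif of `unit_len` bases repeated at least `min_repeats` times in a row.
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit_len, min_repeats - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            hits.append((m.start(), motif, len(m.group(0)) // unit_len))
    return hits

# Invented example containing a CA-type dinucleotide tract and an (AGG)6 repeat.
seq = "TTGACACACACACACACACAGGTTC" + "AGG" * 6 + "TTACG"
for start, motif, n in find_microsatellites(seq):
    print(f"position {start}: ({motif}){n}")
```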
Global Microsatellite Content with microarrays Using a CGH-style array manufactured by Nimblgen/Roche the entire microsatellite content of a genome can be measured quickly, inexpensively and en masse. It is important to note that this approach does not evaluate the genotype of any particular locus, but instead sums the contributions for a given repeated motif from the many positions in which that motif exists across the genome. This array evaluates all 1- to 6- mer repeats (and their cyclic permutations and complement). This approach has been used to place any species, sequenced or not, onto a taxonomic tree. That tree matched precisely the currently accepted phylogenic relationships. With this new platform technology it is possible to study the genomic variations within an individual for those genomic features that are most variable, microsatellites. Using this global microsatellite content array approach, studies indicate that there are major new genomic destabilization mechanisms that globally modify microsatellites, thus potentially altering very large numbers of genes. These global scale variations in both the tumor and germline patient samples may have important roles in the cancer process, of potential value in diagnosis, prognosis and therapy judgments . This Global Microsatellite Content array revealed that for the cancers studied, especially breast cancer, that there were elevated amounts of AT rich motifs. Pursuit of these AT rich motifs identified an AAAG motif that was variable in region immediately upstream of the start site of the Estrogen Related Receptor Gamma gene, a gene that had previously been implicated in breast cancer and tamoxifen resistance. This locus was found to be a promoter for the gene. A long allele was found to be approximately 3 times more prevalent in breast cancer patients (germline) than in cancer-free patients (p<0.01) and thus may be a risk marker. Microsatellites have proved to be versatile molecular markers, particularly for population analysis, but they are not without limitations. Microsatellites developed for particular species can often be applied to closely related species, but the percentage of loci that successfully amplify may decrease with increasing genetic distance. Point mutation in the primer annealing sites in such species may lead to the occurrence of ‘null alleles’, where microsatellites fail to amplify in PCR assays. Null alleles can be attributed to several phenomena. Sequence divergence in flanking regions can lead to poor primer annealing, especially at the 3’ section, where extension commences; preferential amplification of particular size alleles due to the competitive nature of PCR can lead to heterozygous individuals being scored for homozygosity (partial null). PCR failure may result when particular loci fail to amplify, whereas others amplify more efficiently and may appear homozygous on a gel assay, when they are in reality heterozygous in the genome. Null alleles complicate the interpretation of microsatellite allele frequencies and thus make estimates of relatedness faulty. Furthermore, stochastic effects of sampling that occurs during mating may change allele frequencies in a way that is very similar to the effect of null alleles; an excessive frequency of homozygotes causing deviations from Hardy-Weinberg equilibrium expectations. 
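The homozygote excess just described can be quantified by comparing observed homozygosity at a locus with the Hardy-Weinberg expectation computed from the allele frequencies. The sketch below is only illustrative: the genotype sample is invented, and a real study would apply a proper statistical test before attributing any excess to null alleles rather than to mating structure.

```python
from collections import Counter

def homozygote_excess(genotypes):
    """Compare observed homozygosity with the Hardy-Weinberg expectation.
    `genotypes` is a list of (allele1, allele2) tuples for one locus."""
    n = len(genotypes)
    allele_counts = Counter(a for pair in genotypes for a in pair)
    total = 2 * n
    # Expected homozygosity under Hardy-Weinberg: sum of squared allele frequencies.
    exp_hom = sum((c / total) ** 2 for c in allele_counts.values())
    obs_hom = sum(1 for a, b in genotypes if a == b) / n
    return obs_hom, exp_hom

# Invented genotype sample (allele names are repeat counts at one locus).
sample = [(12, 12), (12, 14), (14, 14), (12, 12), (14, 14), (12, 16),
          (16, 16), (12, 12), (14, 14), (16, 16)]
obs, exp = homozygote_excess(sample)
print(f"observed homozygosity {obs:.2f} vs expected {exp:.2f}")
# A large excess of homozygotes may point to null alleles or non-random mating.
```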
Since null alleles are a technical problem and sampling effects that occur during mating are a real biological property of a population, it is often very important to distinguish between them if excess homozygotes are observed. When using microsatellites to compare species, homologous loci may be easily amplified in related species, but the number of loci that amplify successfully during PCR may decrease with increased genetic distance between the species in question. Mutation in microsatellite alleles is biased in the sense that larger alleles contain more bases, and are therefore likely to be mistranslated in DNA replication. Smaller alleles also tend to increase in size, whereas larger alleles tend to decrease in size, as they may be subject to an upper size limit; this constraint has been determined but possible values have not yet been specified. If there is a large size difference between individual alleles, then there may be increased instability during recombination at meiosis. In tumour cells, where controls on replication may be damaged, microsatellites may be gained or lost at an especially high frequency during each round of mitosis. Hence a tumour cell line might show a different genetic fingerprint from that of the host tissue. Mechanisms for change The most common cause of length changes in short sequence repeats is replication slippage, caused by mismatches between DNA strands while being replicated during meiosis (Tautz 1994). Typically, slippage in each microsatellite occurs about once per 1,000 generations (Weber 1993). Slippage changes in repetitive DNA are orders of magnitude more common than point mutations in other parts of the genome (Jarne 1996). Most slippage results in a change of just one repeat unit, and slippage rates vary for different repeat unit sizes, and within different species (Kruglyak 1998). Short sequence repeats are distributed throughout the genome (King 1997). Presumably, their most probable means of expression will vary, depending on their location. In mammals, 20% to 40% of proteins contain repeating sequences of amino acids caused by short sequence repeats (Marcotte 1998). Most of the short sequence repeats within protein-coding portions of the genome have a repeating unit of three nucleotides, since that length will not cause frame-shift mutations (Sutherland 1995). Each trinucleotide repeating sequence is transcribed into a repeating series of the same amino acid. In yeasts, the most common repeated amino acids are glutamine, glutamic acid, asparagine, aspartic acid and serine. These repeating segments can affect the physical and chemical properties of proteins, with the potential for producing gradual and predictable changes in protein action (Hancock 2005). For example, length changes in tandemly repeating regions in the Runx2 gene lead to differences in facial length in domesticated dogs (Canis familiaris), with an association between longer sequence lengths and longer faces (Fondon 2004). This association also applies to a wider range of Carnivora species (Sears 2007). Length changes in polyalanine tracts within the HoxA13 gene are linked to Hand-Foot-Genital Syndrome, a developmental disorder in humans (Utsch 2002). Length changes in other triplet repeats are linked to more than 40 neurological diseases in humans (Pearson 2005). Evolutionary changes from replication slippage also occur in simpler organisms. 
For example, microsatellite length changes are common within surface membrane proteins in yeast, providing rapid evolution in cell properties (Bowen 2006). Specifically, length changes in the FLO1 gene control the level of adhesion to substrates (Verstrepen 2005). Short sequence repeats also provide rapid evolutionary change to surface proteins in pathenogenic bacteria, perhaps so they can keep up with immunological changes in their hosts (Moxon 1994). This is known as the Red Queen hypothesis (Van Valen 1973). Length changes in short sequence repeats in a fungus (Neurospora crassa) control the duration of its circadian clock cycles (Michael 2007). Length changes of microsatellites within promoters and other cis-regulatory regions can also change gene expression quickly, between generations. The human genome contains many (>16,000) short sequence repeats in regulatory regions, which provide ‘tuning knobs’ on the expression of many genes (Rockman 2002). Length changes in bacterial SSRs can affect fimbriae formation in Haemophilus influenza, by altering promoter spacing (Moxon 1994). Minisatellites are also linked to abundant variations in cis-regulatory control regions in the human genome (Rockman 2002). And microsatellites in control regions of the Vasopressin 1a receptor gene in voles influence their social behavior, and level of monogamy (Hammock 2005). Microsatellites within introns also influence phenotype, through means that are not currently understood. For example, a GAA triplet expansion in the first intron of the X25 gene appears to interfere with transcription, and causes Friedreich Ataxia (Bidichandani 1998). Tandem repeats in the first intron of the Asparagine synthetase gene are linked to acute lymphoblastic leukemia (Akagi 2008). A repeat polymorphism in the fourth intron of the NOS3 gene is linked to hypertension in a Tunisian population (Jemaa 2008). Reduced repeat lengths in the EGFR gene are linked with osteosarcomas (Kersting 2008). Microsatellites are distributed throughout the genome (Richard 2008). Almost 50% of the human genome is contained in various types of transposable elements (also called transposons, or ‘jumping genes’), and many of them contain repetitive DNA (Scherer 2008). It is probable that short sequence repeats in those locations are also involved in the regulation of gene expression (Tomilin 2008). Microsatellite analysis is a relatively new technology in the field of forensics, having come into popularity in the mid-to-late 1990s. It is used for the genetic fingerprinting of individuals. The microsatellites in use today for forensic analysis are all tetra- or penta-nucleotide repeats (4 or 5 repeated nucleotides), as these give a high degree of error-free data while being robust enough to survive degradation in non-ideal conditions. Shorter repeat sequences tend to suffer from artifacts such as PCR stutter and preferential amplification, as well as the fact that several genetic diseases are associated with tri-nucleotide repeats such as Huntington's disease. Longer repeat sequences will suffer more highly from environmental degradation and do not amplify by PCR as well as shorter sequences. The analysis is performed by extracting nuclear DNA from the cells of a forensic sample of interest, then amplifying specific polymorphic regions of the extracted DNA by means of the polymerase chain reaction. 
Once these sequences have been amplified, they are resolved either through gel electrophoresis or capillary electrophoresis, which will allow the analyst to determine how many repeats of the microsatellites sequence in question there are. If the DNA was resolved by gel electrophoresis, the DNA can be visualized either by silver staining (low sensitivity, safe, inexpensive), or an intercalating dye such as ethidium bromide (fairly sensitive, moderate health risks, inexpensive), or as most modern forensics labs use, fluorescent dyes (highly sensitive, safe, expensive). Instruments built to resolve microsatellite fragments by capillary electrophoresis also use fluorescent dyes to great effect. It is also used to follow up bone marrow transplant patients. In the United States, 13 core microsatellite loci have been decided upon to be the basis by which an individual genetic profile can be generated. These profiles are stored on a local, state and national level in DNA databanks such as CODIS. The British data base for microsatellite loci identification is the UK National DNA Database (NDNAD). The British system uses 10 loci, rather than the American 13 loci. - forest genetic resources - genetic marker - junk DNA - long interspersed repetitive element - microsatellite instability - mobile element - satellite DNA - short interspersed repetitive element - short tandem repeat - simple sequence length polymorphism (SSLP) - trinucleotide repeat disorders - variable number tandem repeat - Turnpenny, P. & Ellard, S. (2005). Emery's Elements of Medical Genetics, 12th. ed. Elsevier, London. - Queller, D.C., Strassman,,J.E. & Hughes, C.R. (1993). "Microsatellites and Kinship". Trends in Ecology and Evolution 8 (8): 285–288. doi:10.1016/0169-5347(93)90256-O. PMID 21236170. - D. B. Goldstein, A. R. Linares, L. L. Cavalli-Sforza, and M. W. Feldman (1995). "An Evaluation of Genetic Distances for Use With Microsatellite Loci". Genetics 139 (1): 463–471. PMC 1206344. PMID 7705647. - M.A. Varela and W. Amos (2010). "Heterogeneous distribution of SNPs in the human genome: Microsatellites as predictors of nucleotide diversity and divergence". Genomics 95 (3): 151–159. doi:10.1016/j.ygeno.2009.12.003. PMID 20026267. - Blouin, M.S., Parsons, M., Lacaille, V. & Lotz, S. (1996). "Use of microsatellite loci to classify individuals by relatedness". Molecular Ecology 5 (3): 393–401. doi:10.1111/j.1365-294X.1996.tb00329.x. PMID 8688959. - Q-Y. Huang, F-H. Xu, H. Shen, H-Y. Deng, Y-J. Liu, Y-Z. Liu, J-L. Li, R. R. Recker and H-W. Deng (2002). "Mutation Patterns at Dinucleotide Microsatellite Loci in Humans". Am J Hum Genet. 70 (3): 625–634. doi:10.1086/338997. PMC 384942. PMID 11793300. - Griffiths, A.J.F., Miller, J.F., Suzuki, D.T., Lewontin, R.C. & Gelbart, W.M. (1996). Introduction to Genetic Analysis, 5th Edition. W.H. Freeman, New York. - Jarne, P. & Lagoda, P.J.L. (1996). "Microsatellites, from molecules to populations and back". Trends in Ecology and Evolution 11 (10): 424–429. doi:10.1016/0169-5347(96)10049-5. PMID 21237902. - Kaukinen KH, Supernault KJ, and Miller KM (2004). "Enrichment of tetranucleotide microsatellite loci from invertebrate species". Journal of Shellfish Research 23 (2): 621. - Dakin, EE; Avise, JC (2004). "Microsatellite null alleles in parentage analysis". Heredity 93 (5): 504–509. doi:10.1038/sj.hdy.6800545. PMID 15292911. - Angel Carracedo. "DNA Profiling". Retrieved 2010-09-20. - "Technology for Resolving STR Alleles". Retrieved 2010-09-20. - Antin JH, Childs R, Filipovich AH, et al. 
(2001). "Establishment of complete and mixed donor chimerism after allogeneic lymphohematopoietic transplantation: recommendations from a workshop at the 2001 Tandem Meetings of the International Bone Marrow Transplant Registry and the American Society of Blood and Marrow Transplantation". Biol. Blood Marrow Transplant. 7 (9): 473–85. doi:10.1053/bbmt.2001.v7.pm11669214. PMID 11669214. - John M. Butler, Forensic DNA Typing: Biology, Technology, and Genetics of STR Markers, Second Edition, Academic Press, 2005. - "The National DNA Database". Retrieved 2010-09-20. - "House of Lords Select Committee on Science and Technology Written Evidence". Retrieved 2010-09-20. - "FBI CODIS Core STR Loci". Retrieved 2010-09-20. - Bidichandani S. I. et al. (1998). "The GAA triplet-repeat expansion in Friedreich ataxia interferes with transcription and may be associated with an unusual DNA structure". Am. J. Hum. Genet 62 (1): 111–121. doi:10.1086/301680. PMC 1376805. PMID 9443873. - Bowen S., Wheals A. E. (2006). "Ser//Thr-rich domains are associated with genetic variation and morphogenesis in Saccharomyces cerevisiae". Yeast 23 (8): 633–640. doi:10.1002/yea.1381. PMID 16823884. - Caporale L. H. (2003). "Natural selection and the emergence of a mutation phenotype: an update of the evolutionary synthesis considering mechanisms that affect genome variation". Ann. Rev. Micro 57: 467–485. doi:10.1146/annurev.micro.57.030502.090855. - Fondon J. W. III, Garner H. R.; Garner (2004). "Molecular origins of rapid and continuous morphological evolution". Proc. Natl. Acad. Sci. USA 1010 (52): 18058–18063. Bibcode:2004PNAS..10118058F. doi:10.1073/pnas.0408118101. - Hammock E. A. D., Young L. J.; Young (2005). "Microsatellite instability generates diversity in brain and sociobehavioral traits". Science 308 (5728): 1630–1634. Bibcode:2005Sci...308.1630H. doi:10.1126/science.1111427. PMID 15947188. - Hancock J. M., Simon M. (2005). "Simple sequence repeats in proteins and their significance for network evolution". Gene 345 (1): 113–118. doi:10.1016/j.gene.2004.11.023. PMID 15716087. - Jarne P., Lagoda P. J. L. (1996). "Microsatellites, from molecules to populations and back". Trends Ecol. Evol 11 (10): 424–429. doi:10.1016/0169-5347(96)10049-5. PMID 21237902. - Jemaa R. et al. (2008). "Association of a 27-bp repeat polymorphism in intron 4 of endothelial constitutive nitric oxide synthase gene with hypertension in a Tunisian population". Clin. Biochem 42 (9): 852–856. doi:10.1016/j.clinbiochem.2008.12.002. PMID 19111531. - Kashi Y. et al. (1997). "Simple sequence repeats as a source of quantitative genetic variation". Trends Gen 13 (2): 74–78. doi:10.1016/S0168-9525(97)01008-1. - Kersting C. et al. (2008). "Biological importance of a polymorphic CA sequence within intron I of the epidermal growth factor receptor gene (EGFR) in high grade central osteosarcomas". Gene Chrom. & Cancer 47 (8): 657–664. doi:10.1002/gcc.20571. - King D. G.; Soller, Morris; Kashi, Yechezkel (1997). "Evolutionary tuning knobs". Endeavor 21: 36–40. doi:10.1016/S0160-9327(97)01005-3. - Kinoshita Y. et al. (2007). "Control of FWA gene silencing in Arabadopsis thaliana by SINE-related direct repeats". Plant. J. 49 (1): 38–45. doi:10.1111/j.1365-313X.2006.02936.x. PMID 17144899. - Kruglyak S. et al. (1998). "Equilibrium distributions of microstellite repeat length resulting from a balance between slippage events and point mutations". Proc. Natl. Acad. Sci. USA 95 (18): 10774–10778. doi:10.1073/pnas.95.18.10774. PMC 27971. PMID 9724780. - Li Y-C. 
et al. (2002). "Microsatellites: genomic distribution, putative functions and mutational mechanisms: a review". Mol. Ecol 11 (12): 2453–2465. doi:10.1046/j.1365-294X.2002.01643.x. PMID 12453231. - Li Y-C. et al. (2003). "Microsatellites within genes: structure, function and evolution". Mol. Bio. Evol 21 (6): 991–1007. doi:10.1093/molbev/msh073. - Marcotte E. M. et al. (1998). "A census of protein repeats". J. Mol. Biol. 293 (1): 151–160. doi:10.1006/jmbi.1999.3136. PMID 10512723. - Mattick J. S. (2003). "Challenging the dogma: the hidden layer of non-protein-coding RNAs in complex organisms". BioEssays 25 (10): 930–939. doi:10.1002/bies.10332. PMID 14505360. - Meagher T., Vassiliadis C. (2005). "Phenotypic impacts of repetitive DNA in flowering plants". New Phyto 168: 71–80. doi:10.1111/j.1469-8137.2005.01527.x. - Michael T. P. et al. (2008). "Simple sequence repeats provide a substrate for phenotypic variation in the Neurospora crassa circadian clock". In Redfield, Rosemary. PLoS ONE 2 (8): e795. doi:10.1371/journal.pone.0000795. - Moxon E. R. et al. (1994). "Adaptive evolution of highly mutable loci in pathogenic bacteria". Curr. Bio 4: 24–32. doi:10.1016/S0960-9822(00)00005-1. - Müller K. J. et al.; Romano; Gerstner; Garcia-Marotot; Pozzi; Salamini; Rohde (1995). "The barley Hooded mutation caused by a duplication in a homeobox gene intron". Nature 374 (6524): 727–730. Bibcode:1995Natur.374..727M. doi:10.1038/374727a0. PMID 7715728. - Pearson C. E. et al. (2005). "Repeat instability: mechanisms of dynamic mutations". Nat. Rev. Gen. 6 (10): 729–742. doi:10.1038/nrg1689. - Pumpernik D. et al.. ", Replication slippage versus point mutation rates in short tandem repeats of the human genome. 2008. Mol. Genet". Genomics 279 (1): 53–61. - Richard G-F. et al. (2008). "Comparative genomics and molecular dynamics of DNA repeats in Eukaryotes". Micr. Mol. Bio. Rev 72 (4): 686–727. doi:10.1128/MMBR.00011-08. PMC 2593564. PMID 19052325. - Rockman M. V., Wray G. A. (2002). "Abundant raw material for cis-regulatory evolution in humans". Mol. Biol. Evol 19 (11): 1991–2004. doi:10.1093/oxfordjournals.molbev.a004023. PMID 12411608. - Scherer, S., 2008. A short guide to the human genome. Cold Spring Harbor University Press, Cold Spring NY. - Sears K. E. et al. (2007). "The correlated evolution of Runx2 tandem repeats, transcriptional activity, and facial length in Carnivora". Evol. & Dev 9 (6): 555–565. doi:10.1111/j.1525-142X.2007.00196.x. - Streelman J. T., Kocher T. D. (2002). "Microsatellite variation associated with prolactin expression and growth of salt-challenged Tilapia". Phys. Genom 9: 1–4. - Sutherland G. R., Richards R. I.; Richards (1995). "Simple tandem DNA repeats and human genetic disease". Proc. Natl. Acad. Sci. USA 92 (9): 3636–3641. Bibcode:1995PNAS...92.3636S. doi:10.1073/pnas.92.9.3636. PMC 42017. PMID 7731957. - Tautz D., Schlötterer C. (1994). "Simple sequences". Curr. Opin. Genet. Dev 4 (6): 832–837. doi:10.1016/0959-437X(94)90067-1. PMID 7888752. - Tomilin N. V. (2008). "Regulation of mammalian gene expression by retroelements and non-coding tandem repeats". BioEssays 30 (4): 338–348. doi:10.1002/bies.20741. PMID 18348251. - Utsch B. et al. (2002). "A novel stable stable polyalanine [poly(A)] expansion in the HoxA13 gene associated with hand-foot-genital syndrome: proper function of poly(A)-harbouring transcription factors depends on a critical repeat length?". Hum. Gen 110 (5): 488–494. doi:10.1007/s00439-002-0712-8. PMID 12073020. - Van Valen L (1973). "A new evolutionary law". 
Evol. Theory 1: 1–30. - Verstrepen K. J. et al. (2005). "Intragenic tandem repeats generate functional variability". Nat. Gen 37 (9): 986–990. doi:10.1038/ng1618. - Vinces M. D. et al.; Legendre; Caldara; Hagihara; Verstrepen (2009). "Unstable tandem repeats in promoters confer transcriptional evolvability". Science 324 (5931): 1213–1216. Bibcode:2009Sci...324.1213V. doi:10.1126/science.1170097. PMC 3132887. PMID 19478187. - About microsatellites: - Search tools : - SSR Finder - JSTRING - Java Search for Tandem Repeats in genomes - Microsatellite repeats finder - MISA - MIcroSAtellite identification tool - Phobos - a tandem repeat search tool for perfect and imperfect repeats - the maximum pattern size depends only on computational power - Tandem Repeats Finder - Zebrafish Repeats
http://en.wikipedia.org/wiki/Microsatellite_(genetics)
Gauss's law for gravity
In physics, Gauss's law for gravity, also known as Gauss's flux theorem for gravity, is a law of physics which is essentially equivalent to Newton's law of universal gravitation. It is named after Carl Friedrich Gauss. Although Gauss's law for gravity is physically equivalent to Newton's law, there are many situations where Gauss's law for gravity offers a more convenient and simple way to do a calculation than Newton's law. The form of Gauss's law for gravity is mathematically similar to Gauss's law for electrostatics, one of Maxwell's equations. Gauss's law for gravity has the same mathematical relation to Newton's law that Gauss's law for electricity bears to Coulomb's law. This is because both Newton's law and Coulomb's law describe an inverse-square interaction in a 3-dimensional space.
Qualitative statement of the law
The gravitational field g (also called gravitational acceleration) is a vector field – a vector at each point of space (and time). It is defined so that the gravitational force experienced by a particle is equal to the mass of the particle multiplied by the gravitational field at that point. Gauss's law for gravity states that the flux (surface integral) of the gravitational field over any closed surface is proportional to the mass enclosed by that surface.
The integral form of Gauss's law for gravity states:
- \oint_{\partial V} \mathbf{g}\cdot d\mathbf{A} = -4\pi G M,
where
- \oint_{\partial V} denotes a surface integral over a closed surface,
- ∂V is any closed surface (the boundary of a closed volume V),
- dA is a vector whose magnitude is the area of an infinitesimal piece of the surface ∂V, and whose direction is the outward-pointing surface normal (see surface integral for more details),
- g is the gravitational field,
- G is the universal gravitational constant, and
- M is the total mass enclosed within the surface ∂V.
The left-hand side of this equation is called the flux of the gravitational field. Note that it is always negative (or zero), and never positive. This can be contrasted with Gauss's law for electricity, where the flux can be either positive or negative. The difference is because charge can be either positive or negative, while mass can only be positive.
The differential form of Gauss's law for gravity states
- \nabla\cdot\mathbf{g} = -4\pi G \rho,
where \nabla\cdot denotes divergence and ρ is the mass density at each point.
Relation to the integral form
The two forms of Gauss's law for gravity are mathematically equivalent. The divergence theorem states:
- \oint_{\partial V} \mathbf{g}\cdot d\mathbf{A} = \int_V \nabla\cdot\mathbf{g}\; dV,
where V is a closed region bounded by a simple closed oriented surface ∂V and dV is an infinitesimal piece of the volume V (see volume integral for more details). The gravitational field g must be a continuously differentiable vector field defined on a neighborhood of V. Given also that the enclosed mass can be written as M = \int_V \rho\, dV, we can apply the divergence theorem to the integral form of Gauss's law for gravity, which becomes:
- \int_V \nabla\cdot\mathbf{g}\; dV = -4\pi G \int_V \rho\, dV,
which can be rewritten:
- \int_V \left(\nabla\cdot\mathbf{g} + 4\pi G \rho\right) dV = 0.
This has to hold simultaneously for every possible volume V; the only way this can happen is if the integrands are equal. Hence we arrive at
- \nabla\cdot\mathbf{g} = -4\pi G \rho,
which is the differential form of Gauss's law for gravity. It is possible to derive the integral form from the differential form using the reverse of this method. Although the two forms are equivalent, one or the other might be more convenient to use in a particular computation.
Relation to Newton's law
Deriving Gauss's law from Newton's law
Newton's law of universal gravitation states that the gravitational field due to a point mass is
- \mathbf{g}(\mathbf{r}) = -\frac{GM}{r^2}\,\mathbf{e}_r,
where
- er is the radial unit vector,
- r is the radius, |r|,
- M is the mass of the particle, which is assumed to be a point mass located at the origin.
Outline of proof: g(r), the gravitational field at r, can be calculated by adding up the contribution to g(r) due to every bit of mass in the universe (see superposition principle).
To do this, we integrate over every point s in space, adding up the contribution to g(r) associated with the mass (if any) at s, where this contribution is calculated by Newton's law. The result is:
- \mathbf{g}(\mathbf{r}) = -G\int \frac{\rho(\mathbf{s})\,(\mathbf{r}-\mathbf{s})}{|\mathbf{r}-\mathbf{s}|^3}\, d^3\mathbf{s}
(d3s stands for dsxdsydsz, each of which is integrated from -∞ to +∞.) If we take the divergence of both sides of this equation with respect to r, and use the known theorem
- \nabla\cdot\left(\frac{\mathbf{s}}{|\mathbf{s}|^3}\right) = 4\pi\,\delta(\mathbf{s}),
where δ(s) is the Dirac delta function, the result is
- \nabla\cdot\mathbf{g}(\mathbf{r}) = -4\pi G \int \rho(\mathbf{s})\,\delta(\mathbf{r}-\mathbf{s})\, d^3\mathbf{s}.
Using the "sifting property" of the Dirac delta function, we arrive at
- \nabla\cdot\mathbf{g} = -4\pi G \rho,
which is the differential form of Gauss's law for gravity, as desired.
Deriving Newton's law from Gauss's law and irrotationality
It is impossible to mathematically prove Newton's law from Gauss's law alone, because Gauss's law specifies the divergence of g but does not contain any information regarding the curl of g (see Helmholtz decomposition). In addition to Gauss's law, the assumption is used that g is irrotational (has zero curl), as gravity is a conservative force:
- \nabla\times\mathbf{g} = \mathbf{0}.
Even these are not enough: boundary conditions on g are also necessary to prove Newton's law, such as the assumption that the field is zero infinitely far from a mass. The proof of Newton's law from these assumptions is as follows:
Outline of proof
Start with the integral form of Gauss's law:
- \oint_{\partial V} \mathbf{g}\cdot d\mathbf{A} = -4\pi G M.
Apply this law to the situation where the volume V is a sphere of radius r centered on a point mass M. It is reasonable to expect the gravitational field from a point mass to be spherically symmetric. (We omit the proof for simplicity.) By making this assumption, g takes the following form:
- \mathbf{g}(\mathbf{r}) = g(r)\,\mathbf{e}_r
(i.e., the direction of g is parallel to the direction of r, and the magnitude of g depends only on the magnitude, not the direction, of r). Plugging this in, and using the fact that ∂V is a spherical surface with constant r and area 4πr2, we find
- 4\pi r^2\, g(r) = -4\pi G M, \quad\text{hence}\quad \mathbf{g}(\mathbf{r}) = -\frac{GM}{r^2}\,\mathbf{e}_r,
which is Newton's law.
Poisson's equation and gravitational potential
Since the gravitational field has zero curl (equivalently, gravity is a conservative force) as mentioned above, it can be written as the gradient of a scalar potential, called the gravitational potential:
- \mathbf{g} = -\nabla\phi.
Then the differential form of Gauss's law for gravity becomes Poisson's equation:
- \nabla^2\phi = 4\pi G \rho.
This provides an alternate means of calculating the gravitational potential and gravitational field. Although computing g via Poisson's equation is mathematically equivalent to computing g directly from Gauss's law, one or the other approach may be an easier computation in a given situation. In radially symmetric systems, the gravitational potential is a function of only one variable (namely, r = |r|), and Poisson's equation becomes (see Del in cylindrical and spherical coordinates):
- \frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2\,\frac{\partial\phi}{\partial r}\right) = 4\pi G \rho(r),
while the gravitational field is:
- \mathbf{g} = -\frac{\partial\phi}{\partial r}\,\mathbf{e}_r.
When solving the equation it should be taken into account that in the case of finite densities ∂ϕ/∂r has to be continuous at boundaries (discontinuities of the density), and zero for r = 0.
Gauss's law can be used to easily derive the gravitational field in certain cases where a direct application of Newton's law would be more difficult (but not impossible). See the article Gaussian surface for more details on how these derivations are done. Three such applications are as follows:
We can conclude (by using a "Gaussian pillbox") that for an infinite, flat plate (Bouguer plate) of any finite thickness, the gravitational field outside the plate is perpendicular to the plate, towards it, with magnitude 2πG times the mass per unit area, independent of the distance to the plate (see also gravity anomalies). A small worked example of this plate formula is sketched below.
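Here is that worked example; the slab thickness and rock density are assumed, illustrative values, not figures from the article.

```python
import math

G = 6.674e-11  # m^3 kg^-1 s^-2

def bouguer_plate_field(density, thickness):
    """Field magnitude 2*pi*G*sigma outside an infinite flat plate,
    where sigma = density * thickness is the mass per unit area."""
    sigma = density * thickness
    return 2.0 * math.pi * G * sigma

# Assumed values: a 100 m thick slab of typical crustal rock (2670 kg/m^3).
g_plate = bouguer_plate_field(density=2670.0, thickness=100.0)
print("field from the slab: %.2e m/s^2 (about %.1f mGal)" % (g_plate, g_plate / 1e-5))
```

The answer, roughly 1.1e-4 m/s^2, is the familiar order of magnitude of Bouguer corrections in gravity surveying.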
More generally, for a mass distribution with the density depending on one Cartesian coordinate z only, gravity for any z is 2πG times (the mass per unit area above z, minus the mass per unit area below z). In particular, a combination of two equal parallel infinite plates does not produce any gravity inside.
Cylindrically symmetric mass distribution
In the case of an infinite cylindrically symmetric mass distribution we can conclude (by using a cylindrical Gaussian surface) that the field strength at a distance r from the center is inward with a magnitude of 2G/r times the total mass per unit length at a smaller distance (from the axis), regardless of any masses at a larger distance. For example, inside an infinite hollow cylinder, the field is zero.
Spherically symmetric mass distribution
In the case of a spherically symmetric mass distribution we can conclude (by using a spherical Gaussian surface) that the field strength at a distance r from the center is inward with a magnitude of G/r2 times only the total mass within a smaller distance than r. All the mass at a greater distance than r from the center can be ignored. For example, a hollow sphere does not produce any net gravity inside. The gravitational field inside is the same as if the hollow sphere were not there (i.e. the resultant field is that of any masses inside and outside the sphere only). Although this follows in one or two lines of algebra from Gauss's law for gravity, it took Isaac Newton several pages of cumbersome calculus to derive it directly using his law of gravity; see the article shell theorem for this direct derivation.
Derivation from Lagrangian
The Lagrangian density for Newtonian gravity is
- \mathcal{L}(\mathbf{r},t) = -\rho(\mathbf{r},t)\,\phi(\mathbf{r},t) - \frac{1}{8\pi G}\left(\nabla\phi(\mathbf{r},t)\right)^2.
Applying Hamilton's principle to this Lagrangian, the result is Gauss's law for gravity:
- \nabla^2\phi = 4\pi G\,\rho.
See Lagrangian (Newtonian gravity) for details.
- See, for example, Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.). Prentice Hall. p. 50. ISBN 0-13-805326-X.
- The mechanics problem solver, by Fogiel, pp 535–536
- For usage of the term "Gauss's law for gravity" see, for example, this article.
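As a small numerical companion to the spherically symmetric case above (a sketch added here for illustration, not part of the original article), the field at radius r follows directly from Gauss's law as G times the enclosed mass divided by r²; the uniform-density Earth below is a deliberate simplification with assumed numbers.

```python
import math

G = 6.674e-11  # m^3 kg^-1 s^-2

def enclosed_mass(r, radius, density):
    """Mass inside radius r for a uniform solid sphere of the given radius."""
    r_eff = min(r, radius)
    return density * (4.0 / 3.0) * math.pi * r_eff**3

def field_magnitude(r, radius, density):
    """|g| at distance r from the centre, from Gauss's law:
    only the mass enclosed within r contributes."""
    return G * enclosed_mass(r, radius, density) / r**2

# Earth-like illustrative numbers (uniform density is an assumption).
R = 6.371e6    # m
rho = 5515.0   # kg/m^3, mean density
for r in (0.5 * R, R, 2.0 * R):
    print(f"r = {r/R:.1f} R -> |g| = {field_magnitude(r, R, rho):.2f} m/s^2")
```

At r = R this reproduces the familiar 9.8 m/s^2, and outside the sphere the field falls off as 1/r², exactly as the shell theorem requires.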
http://en.wikipedia.org/wiki/Gauss'_law_for_gravity
Relative density, or specific gravity, is the ratio of the density (mass of a unit volume) of a substance to the density of a given reference material. Specific gravity usually means relative density with respect to water. The term "relative density" is often preferred in modern scientific usage. If a substance's relative density is less than one then it is less dense than the reference; if greater than 1 then it is denser than the reference. If the relative density is exactly 1 then the densities are equal; that is, equal volumes of the two substances have the same mass. If the reference material is water then a substance with a relative density (or specific gravity) less than 1 will float in water. For example, an ice cube, with a relative density of about 0.91, will float. A substance with a relative density greater than 1 will sink.
Temperature and pressure must be specified for both the sample and the reference. Pressure is nearly always 1 atm, equal to 101.325 kPa. Where it is not, it is more usual to specify the density directly. Temperatures for both sample and reference vary from industry to industry. In British brewing practice the specific gravity as specified above is multiplied by 1000. Specific gravity is commonly used in industry as a simple means of obtaining information about the concentration of solutions of various materials such as brines, sugar solutions (syrups, juices, honeys, brewers wort, must, etc.) and acids.
Relative density (RD) or specific gravity (SG) is a dimensionless quantity, as it is the ratio of either densities or weights:
- RD = \frac{\rho_\text{substance}}{\rho_\text{reference}},
where RD is relative density, ρsubstance is the density of the substance being measured, and ρreference is the density of the reference. (By convention ρ, the Greek letter rho, denotes density.) The reference material can be indicated using subscripts: RDsubstance/reference, which means "the relative density of substance with respect to reference". If the reference is not explicitly stated then it is normally assumed to be water at 4 °C (or, more precisely, 3.98 °C, which is the temperature at which water reaches its maximum density). In SI units, the density of water is (approximately) 1000 kg/m3 or 1 g/cm3, which makes relative density calculations particularly convenient: the density of the object only needs to be divided by 1000 or 1, depending on the units.
The relative density of gases is often measured with respect to dry air at a temperature of 20 °C and a pressure of 101.325 kPa absolute, which has a density of 1.205 kg/m3. Relative density with respect to air can be obtained by
- RD = \frac{\rho_\text{gas}}{\rho_\text{air}} \approx \frac{M_\text{gas}}{M_\text{air}},
where M is the molar mass and the approximately-equal sign is used because equality pertains only if 1 mol of the gas and 1 mol of air occupy the same volume at a given temperature and pressure, i.e. they are both ideal gases. Ideal behaviour is usually only seen at very low pressure. For example, one mol of an ideal gas occupies 22.414 L at 0 °C and 1 atmosphere, whereas carbon dioxide has a molar volume of 22.259 L under those same conditions. (A small numerical sketch of this molar-mass approximation is given below.)
- See Density for a table of the measured densities of water at various temperatures.
The density of substances varies with temperature and pressure, so that it is necessary to specify the temperatures and pressures at which the densities or weights were determined.
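Here is that sketch of the molar-mass approximation for relative density with respect to air. The molar masses are standard textbook values, and the cross-check density for carbon dioxide is an assumed handbook-style figure rather than a value from this article.

```python
M_AIR = 28.97  # g/mol, standard dry-air value

def relative_density_gas(molar_mass, m_air=M_AIR):
    """Ideal-gas approximation: RD with respect to air ~ M_gas / M_air."""
    return molar_mass / m_air

for name, M in [("carbon dioxide", 44.01), ("methane", 16.04), ("helium", 4.003)]:
    print(f"{name:15s} RD ~ {relative_density_gas(M):.2f}")

# Cross-check for CO2 via densities at 20 degC and 101.325 kPa:
# rho_air = 1.205 kg/m^3 (quoted above), rho_CO2 ~ 1.83 kg/m^3 (assumed),
# giving 1.83 / 1.205 ~ 1.52, close to the molar-mass estimate.
```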
It is nearly always the case that measurements are made at nominally 1 atmosphere (101.325 kPa, plus or minus the variations caused by changing weather patterns), but as specific gravity usually refers to highly incompressible aqueous solutions or other incompressible substances (such as petroleum products), variations in density caused by pressure are usually neglected, at least where apparent specific gravity is being measured. For true (in vacuo) specific gravity calculations, air pressure must be considered (see below).

Temperatures are specified by the notation (Ts/Tr), with Ts representing the temperature at which the sample's density was determined and Tr the temperature at which the reference (water) density is specified. For example, SG (20 °C/4 °C) would be understood to mean that the density of the sample was determined at 20 °C and that of the water at 4 °C. Taking into account different sample and reference temperatures, we note that while SGH2O = 1.000000 (20 °C/20 °C), it is also the case that SGH2O = 0.998203/0.999972 ≈ 0.998231 (20 °C/4 °C). Here temperature is being specified using the current ITS-90 scale, and the densities[4] used here and in the rest of this article are based on that scale. On the previous IPTS-68 scale the densities at 20 °C and 4 °C are, respectively, 0.9982071 and 0.9999720, resulting in an SG (20 °C/4 °C) value for water of 0.9982343.

The temperatures of the two materials may be explicitly stated in the density symbols, with a superscript indicating the temperature at which the density of the material is measured and a subscript indicating the temperature of the reference substance to which it is compared.

Relative density can also help quantify the buoyancy of a substance in a fluid, or determine the density of an unknown substance from the known density of another. Relative density is often used by geologists and mineralogists to help determine the mineral content of a rock or other sample. Gemologists use it as an aid in the identification of gemstones. Water is preferred as the reference because measurements are then easy to carry out in the field (see below for examples of measurement methods).

As the principal use of specific gravity measurements in industry is determination of the concentrations of substances in aqueous solutions, and these are found in tables of SG vs. concentration, it is extremely important that the analyst enter the table with the correct form of specific gravity. For example, in the brewing industry, the Plato table, which lists sucrose concentration by weight against true SG, was originally (20 °C/4 °C),[5] that is, based on measurements of the density of sucrose solutions made at laboratory temperature (20 °C) but referenced to the density of water at 4 °C, which is very close to the temperature at which water has its maximum density, ρ(H2O), equal to 0.999972 g/cm3 (or 62.43 lbm·ft−3). The ASBC table[6] in use today in North America, while derived from the original Plato table, is for apparent specific gravity measurements at (20 °C/20 °C) on the IPTS-68 scale, where the density of water is 0.9982071 g/cm3. In the sugar, soft drink, honey, fruit juice and related industries, sucrose concentration by weight is taken from this work,[3] which uses SG (17.5 °C/17.5 °C).
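As a hedged illustration of this temperature bookkeeping, the following Python sketch converts between SG conventions using the two ITS-90 water densities quoted above; the function names and the example reading of 1.040 are invented for illustration.

```python
# Sketch of the (Ts/Tr) bookkeeping; water densities (g/cm^3, ITS-90) are the
# values quoted in the text; function names and the 1.040 reading are invented.
RHO_W = {4.0: 0.999972, 20.0: 0.998203}

def sg(rho_sample, t_ref):
    """SG of a sample (density in g/cm^3) against water at reference temperature t_ref."""
    return rho_sample / RHO_W[t_ref]

def re_reference(sg_value, t_ref_old, t_ref_new):
    """Convert an SG value from one water reference temperature to another."""
    return sg_value * RHO_W[t_ref_old] / RHO_W[t_ref_new]

print(sg(0.998203, 20.0))              # 1.000000  -> SG (20 degC/20 degC) of water
print(sg(0.998203, 4.0))               # ~0.998231 -> SG (20 degC/4 degC) of water
print(re_reference(1.040, 20.0, 4.0))  # a 1.040 (20/20) reading expressed as (20/4)
```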
As a final example, the British SG units are based on reference and sample temperatures of 60°F and are thus (15.56°C/15.56°C).3 Relative density can be calculated directly by measuring the density of a sample and dividing it by the (known) density of the reference substance. The density of the sample is simply its mass divided by its volume. Although mass is easy to measure, the volume of an irregularly shaped sample can be more difficult to ascertain. One method is to put the sample in a water-filled graduated cylinder and read off how much water it displaces. Alternatively the container can be filled to the brim, the sample immersed, and the volume of overflow measured. The surface tension of the water may keep a significant amount of water from overflowing, which is especially problematic for small samples. For this reason it is desirable to use a water container with as small a mouth as possible. For each substance, the density, ρ, is given by When these densities are divided, references to the spring constant, gravity and cross-sectional area simply cancel, leaving Relative density is more easily and perhaps more accurately measured without measuring volume. Using a spring scale, the sample is weighed first in air and then in water. Relative density (with respect to water) can then be calculated using the following formula: - Wair is the weight of the sample in air (measured in pounds-force, newtons, or some other unit of force) - Wwater is the weight of the sample in water (measured in the same units). This technique cannot easily be used to measure relative densities less than one, because the sample will then float. Wwater becomes a negative quantity, representing the force needed to keep the sample underwater. Another practical method uses three measurements. The sample is weighed dry. Then a container filled to the brim with water is weighed, and weighed again with the sample immersed, after the displaced water has overflowed and been removed. Subtracting the last reading from the sum of the first two readings gives the weight of the displaced water. The relative density result is the dry sample weight divided by that of the displaced water. This method works with scales that can't easily accommodate a suspended sample, and also allows for measurement of samples that are less dense than water. The relative density of a liquid can be measured using a hydrometer. This consists of a bulb attached to a stalk of constant cross-sectional area, as shown in the diagram to the right. First the hydrometer is floated in the reference liquid (shown in light blue), and the displacement (the level of the liquid on the stalk) is marked (blue line). The reference could be any liquid, but in practice it is usually water. The hydrometer is then floated in a liquid of unknown density (shown in green). The change in displacement, Δx, is noted. In the example depicted, the hydrometer has dropped slightly in the green liquid; hence its density is lower than that of the reference liquid. It is, of course, necessary that the hydrometer floats in both liquids. The application of simple physical principles allows the relative density of the unknown liquid to be calculated from the change in displacement. (In practice the stalk of the hydrometer is pre-marked with graduations to facilitate this measurement.) In the explanation that follows, - ρref is the known density (mass per unit volume) of the reference liquid (typically water). - ρnew is the unknown density of the new (green) liquid. 
- RDnew/ref is the relative density of the new liquid with respect to the reference. - V is the volume of reference liquid displaced, i.e. the red volume in the diagram. - m is the mass of the entire hydrometer. - g is the local gravitational constant. - Δx is the change in displacement. In accordance with the way in which hydrometers are usually graduated, Δx is here taken to be negative if the displacement line rises on the stalk of the hydrometer, and positive if it falls. In the example depicted, Δx is negative. - A is the cross sectional area of the shaft. Since the floating hydrometer is in static equilibrium, the downward gravitational force acting upon it must exactly balance the upward buoyancy force. The gravitational force acting on the hydrometer is simply its weight, mg. From the Archimedes buoyancy principle, the buoyancy force acting on the hydrometer is equal to the weight of liquid displaced. This weight is equal to the mass of liquid displaced multiplied by g, which in the case of the reference liquid is ρrefVg. Setting these equal, we have Exactly the same equation applies when the hydrometer is floating in the liquid being measured, except that the new volume is V - AΔx (see note above about the sign of Δx). Thus, Combining (1) and (2) yields But from (1) we have V = m/ρref. Substituting into (3) gives This equation allows the relative density to be calculated from the change in displacement, the known density of the reference liquid, and the known properties of the hydrometer. If Δx is small then, as a first-order approximation of the geometric series equation (4) can be written as: This shows that, for small Δx, changes in displacement are approximately proportional to changes in relative density. A pycnometer (from Greek: πυκνός (puknos) meaning "dense"), also called pyknometer or specific gravity bottle, is a device used to determine the density of a liquid. A pycnometer is usually made of glass, with a close-fitting ground glass stopper with a capillary tube through it, so that air bubbles may escape from the apparatus. This device enables a liquid's density to be measured accurately by reference to an appropriate working fluid, such as water or mercury, using an analytical balancecitation needed. If the flask is weighed empty, full of water, and full of a liquid whose specific gravity is desired, the specific gravity of the liquid can easily be calculated. The particle density of a powder, to which the usual method of weighing cannot be applied, can also be determined with a pycnometer. The powder is added to the pycnometer, which is then weighed, giving the weight of the powder sample. The pycnometer is then filled with a liquid of known density, in which the powder is completely insoluble. The weight of the displaced liquid can then be determined, and hence the specific gravity of the powder. There is also a gas-based manifestation of a pycnometer known as a gas pycnometer. It compares the change in pressure caused by a measured change in a closed volume containing a reference (usually a steel sphere of known volume) with the change in pressure caused by the sample under the same conditions. The difference in change of pressure represents the volume of the sample as compared to the reference sphere, and is usually used for solid particulates that may dissolve in the liquid medium of the pycnometer design described above, or for porous materials into which the liquid would not fully penetrate. 
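As a rough illustration of the weighing procedures just described, the Python sketch below computes a liquid's specific gravity from the three bottle weighings and a powder's particle density from the liquid it displaces; all balance readings are invented, and air buoyancy is ignored, so the results correspond to the apparent specific gravity discussed in the next paragraph.

```python
# Sketch of the pycnometer arithmetic described above, with made-up balance
# readings (grams). Air-buoyancy effects are ignored, so this is the apparent
# specific gravity rather than the true (in vacuo) value.

def sg_liquid(m_empty, m_full_water, m_full_sample):
    """SG of a liquid from three weighings of the same constant-volume bottle."""
    return (m_full_sample - m_empty) / (m_full_water - m_empty)

print(sg_liquid(m_empty=25.000, m_full_water=75.000, m_full_sample=64.100))  # 0.782

def particle_density(m_empty, m_with_powder, m_topped_up, m_full_liquid, rho_liquid):
    """Particle density (g/cm^3) of an insoluble powder via the liquid it displaces."""
    m_powder = m_with_powder - m_empty
    m_displaced = (m_full_liquid - m_empty) - (m_topped_up - m_with_powder)
    return m_powder * rho_liquid / m_displaced

print(particle_density(25.000, 35.000, 81.230, 75.000, rho_liquid=0.998203))  # ~2.65
```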
When a pycnometer is filled to a specific, but not necessarily accurately known volume, V and is placed upon a balance, it will exert a force where mb is the mass of the bottle and g the gravitational acceleration at the location at which the measurements are being made. ρa is the density of the air at the ambient pressure and ρb is the density of the material of which the bottle is made (usually glass) so that the second term is the mass of air displaced by the glass of the bottle whose weight, by Archimedes Principle must be subtracted. The bottle is, of course, filled with air but as that air displaces an equal amount of air the weight of that air is canceled by the weight of the air displaced. Now we fill the bottle with the reference fluid e.g. pure water. The force exerted on the pan of the balance becomes: If we subtract the force measured on the empty bottle from this (or tare the balance before making the water measurement) we obtain. where the subscript n indicated that this force is net of the force of the empty bottle. The bottle is now emptied, thoroughly dried and refilled with the sample. The force, net of the empty bottle, is now: where ρs is the density of the sample. The ratio of the sample and water forces is: This is called the Apparent Specific Gravity, denoted by subscript A, because it is what we would obtain if we took the ratio of net weighings in air from an analytical balance or used a hydrometer (the stem displaces air). Note that the result does not depend on the calibration of the balance. The only requirement on it is that it read linearly with force. Nor does SGA depend on the actual volume of the pycnometer. Further manipulation and finally substitution of SGV, the true specific gravity (the subscript V is used because this is often referred to as the specific gravity in vacuo), for ρs/ρw gives the relationship between apparent and true specific gravity. In the usual case we will have measured weights and want the true specific gravity. This is found from Since the density of dry air at 101.325 kPa at 20 °C is7 0.001205 g/cm3 and that of water is 0.998203 g/cm3 we see that the difference between true and apparent specific gravities for a substance with specific gravity (20°C/20°C) of about 1.100 would be 0.000120. Where the specific gravity of the sample is close to that of water (for example dilute ethanol solutions) the correction is even smaller. The pycnometer is used in ISO standard: ISO 1183-1:2004, ISO 1014–1985 and ASTM standard: ASTM D854. - Gay-Lussac, pear shaped, with perforated stopper, adjusted, capacity 1, 2, 5, 10, 25, 50 and 100 ml - as above, with ground-in thermometer, adjusted, side tube with cap - Hubbard, for bitumen and heavy oils, cylindrical type, ASTM D 70, 24 ml - as above, conical type, ASTM D 115 and D 234, 25 ml - Boot, with vacuum jacket and thermometer, capacity 5, 10, 25 and 50 ml Hydrostatic Pressure-based Instruments: This technology relies upon Pascal's Principle which states that the pressure difference between two points within a vertical column of fluid is dependent upon the vertical distance between the two points, the density of the fluid and the gravitational force. This technology is often used for tank gaging applications as a convenient means of liquid level and density measure. Vibrating Element Transducers: This type of instrument requires a vibrating element to be placed in contact with the fluid of interest. 
The resonant frequency of the element is measured and is related to the density of the fluid by a characterization that is dependent upon the design of the element. In modern laboratories precise measurements of specific gravity are made using oscillating U-tube meters. These are capable of measurement to 5 to 6 places beyond the decimal point and are used in the brewing, distilling, pharmaceutical, petroleum and other industries. The instruments measure the actual mass of fluid contained in a fixed volume at temperatures between 0 and 80 °C but as they are microprocessor based can calculate apparent or true specific gravity and contain tables relating these to the strengths of common acids, sugar solutions, etc. The vibrating fork immersion probe is another good example of this technology. This technology also includes many coriolis-type mass flow meters which are widely used in chemical and petroleum industry for high accuracy mass flow measurement and can be configured to also output density information based on the resonant frequency of the vibrating flow tubes.8 Ultrasonic Transducer: Ultrasonic waves are passed from a source, through the fluid of interest, and into a detector which measures the acoustic spectroscopy of the waves. Fluid properties such as density and viscosity can be inferred from the spectrum. Radiation-based Gauge: Radiation is passed from a source, through the fluid of interest, and into a scintillation detector, or counter. As the fluid density increases, the detected radiation "counts" will decrease. The source is typically the radioactive isotope cesium-137, with a half-life of about 30 years. A key advantage for this technology is that the instrument is not required to be in contact with the fluid—typically the source and detector are mounted on the outside of tanks or piping.9 Buoyant Force Transducer: the buoyancy force produced by a float in a homogeneous liquid is equal to the weight of the liquid that is displaced by the float. Since buoyancy force is linear with respect to the density of the liquid within which the float is submerged, the measure of the buoyancy force yields a measure of the density of the liquid. One commercially available unit claims the instrument is capable of measuring specific gravity with an accuracy of ± 0.005 SG units. The submersible probe head contains a mathematically characterized spring-float system. When the head is immersed vertically in the liquid, the float moves vertically and the position of the float controls the position of a permanent magnet whose displacement is sensed by a concentric array of Hall-effect linear displacement sensors. The output signals of the sensors are mixed in a dedicated electronics module that provides a single output voltage whose magnitude is a direct linear measure of the quantity to be measured.10 Substances with a specific gravity of 1 are neutrally buoyant, those with SG greater than one are denser than water, and so (ignoring surface tension effects) will sink in it, and those with an SG of less than one are less dense than water, and so will float. (Samples may vary, and these figures are approximate.) - Dana, Edward Salisbury (1922). A text-book of mineralogy: with an extended treatise on crystallography.... New York, London(Chapman Hall): John Wiley and Sons. pp. 195–200, 316. - Schetz, Joseph A.; Allen E. Fuhs (1999-02-05). Fundamentals of fluid mechanics. Wiley, John & Sons, Incorporated. pp. 111,142,144,147,109,155,157,160,175. ISBN 0-471-34856-2. 
- Hough, J.S., Briggs, D.E., Stevens, R. and Young, T.W. Malting and Brewing Science, Vol. II: Hopped Wort and Beer, Chapman and Hall, London, 1991, p. 881
- Bettin, H.; Spieweck, F. (1990). Die Dichte des Wassers als Funktion der Temperatur nach Einführung der Internationalen Temperaturskala von 1990 (in German). PTB-Mitt. 100. pp. 195–196.
- ASBC Methods of Analysis, Preface to Table 1: Extract in Wort and Beer, American Society of Brewing Chemists, St Paul, 2009
- ASBC Methods of Analysis, op. cit., Table 1: Extract in Wort and Beer
- DIN 51757 (04.1994): Testing of mineral oils and related materials; determination of density
- dead link
- Density – VEGA Americas, Inc. Ohmartvega.com. Retrieved on 2011-09-30.
- Process Control Digital Electronic Hydrometer. Gardco. Retrieved on 2011-09-30.
- Munson, B.R.; Young, D.F.; Okiishi, T.H. Fundamentals of Fluid Mechanics, Wiley
- Fox, R.W.; McDonald, A.T. Introduction to Fluid Mechanics, Fourth Edition, SI Version, Wiley
- Cengel, Y.A.; Boles, M.A. Thermodynamics: An Engineering Approach, Second Edition, International Edition, McGraw-Hill
- Munson, B. R.; Young, D. F.; Okiishi, T. H. (2001). Fundamentals of Fluid Mechanics (4th ed.). Wiley. ISBN 978-0-471-44250-9.
- Fox, R. W.; McDonald, A. T. (2003). Introduction to Fluid Mechanics (4th ed.). Wiley. ISBN 0-471-20231-2.
The scientific method is a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry must be based on empirical and measurable evidence subject to specific principles of reasoning. The Oxford English Dictionary defines the scientific method as: "a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses." The chief characteristic which distinguishes the scientific method from other methods of acquiring knowledge is that scientists seek to let reality speak for itself, supporting a theory when its predictions are confirmed and challenging it when its predictions prove false. Although procedures vary from one field of inquiry to another, identifiable features distinguish scientific inquiry from other methods of obtaining knowledge. Scientific researchers propose hypotheses as explanations of phenomena, and design experimental studies to test these hypotheses via predictions which can be derived from them. These steps must be repeatable, to guard against mistake or confusion in any particular experimenter. Theories that encompass wider domains of inquiry may bind many independently derived hypotheses together in a coherent, supportive structure. Theories, in turn, may help form new hypotheses or place groups of hypotheses into context. Scientific inquiry is generally intended to be as objective as possible in order to reduce biased interpretations of results. Another basic expectation is to document, archive and share all data and methodology so they are available for careful scrutiny by other scientists, giving them the opportunity to verify results by attempting to reproduce them. This practice, called full disclosure, also allows statistical measures of the reliability of these data to be established (when data are sampled or compared to chance). The scientific method has been practiced in some form for at least one thousand years and is the process by which science is carried out. Because science builds on previous knowledge, it consistently improves our understanding of the world. The scientific method also improves itself in the same way, meaning that it gradually becomes more effective at generating new knowledge. For example, the concept of falsification (first proposed in 1934) reduces confirmation bias by formalizing the attempt to disprove hypotheses rather than prove them. The overall process involves making conjectures (hypotheses), deriving predictions from them as logical consequences, and then carrying out experiments based on those predictions to determine whether the original conjecture was correct. There are difficulties in a formulaic statement of method, however. Though the scientific method is often presented as a fixed sequence of steps, these steps are better considered as general principles. Not all steps take place in every scientific inquiry (or to the same degree), and not always in the same order.
As noted by William Whewell (1794–1866), "invention, sagacity, [and] genius" are required at every step: - Formulation of a question: The question can refer to the explanation of a specific observation, as in "Why is the sky blue?", but can also be open-ended, as in "Does sound travel faster in air than in water?" or "How can I design a drug to cure this particular disease?" This stage also involves looking up and evaluating previous evidence from other scientists, including experience. If the answer is already known, a different question that builds on the previous evidence can be posed. When applying the scientific method to scientific research, determining a good question can be very difficult and affects the final outcome of the investigation. - Hypothesis: An hypothesis is a conjecture, based on the knowledge obtained while formulating the question, that may explain the observed behavior of a part of our universe. The hypothesis might be very specific, e.g., Einstein's equivalence principle or Francis Crick's "DNA makes RNA makes protein", or it might be broad, e.g., unknown species of life dwell in the unexplored depths of the oceans. A statistical hypothesis is a conjecture about some population. For example, the population might be people with a particular disease. The conjecture might be that a new drug will cure the disease in some of those people. Terms commonly associated with statistical hypotheses are null hypothesis and alternative hypothesis. A null hypothesis is the conjecture that the statistical hypothesis is false, e.g., that the new drug does nothing and that any cures are due to chance effects. Researchers normally want to show that the null hypothesis is false. The alternative hypothesis is the desired outcome, e.g., that the drug does better than chance. A final point: a scientific hypothesis must be falsifiable, meaning that one can identify a possible outcome of an experiment that conflicts with predictions deduced from the hypothesis; otherwise, it cannot be meaningfully tested. - Prediction: This step involves determining the logical consequences of the hypothesis. One or more predictions are then selected for further testing. The less likely that the prediction would be correct simply by coincidence, the stronger evidence it would be if the prediction were fulfilled; evidence is also stronger if the answer to the prediction is not already known, due to the effects of hindsight bias (see also postdiction). Ideally, the prediction must also distinguish the hypothesis from likely alternatives; if two hypotheses make the same prediction, observing the prediction to be correct is not evidence for either one over the other. (These statements about the relative strength of evidence can be mathematically derived using Bayes' Theorem.) - Testing: This is an investigation of whether the real world behaves as predicted by the hypothesis. Scientists (and other people) test hypotheses by conducting experiments. The purpose of an experiment is to determine whether observations of the real world agree with or conflict with the predictions derived from an hypothesis. If they agree, confidence in the hypothesis increases; otherwise, it decreases. Agreement does not assure that the hypothesis is true; future experiments may reveal problems. Karl Popper advised scientists to try to falsify hypotheses, i.e., to search for and test those experiments that seem most doubtful. Large numbers of successful confirmations are not convincing if they arise from experiments that avoid risk. 
Experiments should be designed to minimize possible errors, especially through the use of appropriate scientific controls. For example, tests of medical treatments are commonly run as double-blind tests. Test personnel, who might unwittingly reveal to test subjects which samples are the desired test drugs and which are placebos, are kept ignorant of which are which. Such hints can bias the responses of the test subjects. Failure of an experiment does not necessarily mean the hypothesis is false. Experiments always depend on several hypotheses, e.g., that the test equipment is working properly, and a failure may be a failure of one of the auxiliary hypotheses. (See the Duhem-Quine thesis.) Experiments can be conducted in a college lab, on a kitchen table, at CERN's Large Hadron Collider, at the bottom of an ocean, on Mars (using one of the working rovers), and so on. Astronomers do experiments, searching for planets around distant stars. Finally, most individual experiments address highly specific topics for reasons of practicality. As a result, evidence about broader topics is usually accumulated gradually. - Analysis: This involves determining what the results of the experiment show and deciding on the next actions to take. The predictions of the hypothesis are compared to those of the null hypothesis, to determine which is better able to explain the data. In cases where an experiment is repeated many times, a statistical analysis such as a chi-squared test may be required. If the evidence has falsified the hypothesis, a new hypothesis is required; if the experiment supports the hypothesis but the evidence is not strong enough for high confidence, other predictions from the hypothesis must be tested. Once a hypothesis is strongly supported by evidence, a new question can be asked to provide further insight on the same topic. Evidence from other scientists and experience are frequently incorporated at any stage in the process. Many iterations may be required to gather sufficient evidence to answer a question with confidence, or to build up many answers to highly specific questions in order to answer a single broader question. This model underlies the scientific revolution. One thousand years ago, Alhazen demonstrated the importance of forming questions and subsequently testing them, an approach which was advocated by Galileo in 1638 with the publication of Two New Sciences. The current method is based on a hypothetico-deductive model formulated in the 20th century, although it has undergone significant revision since first proposed (for a more formal discussion, see below). |The basic elements of the scientific method are illustrated by the following example from the discovery of the structure of DNA: The discovery became the starting point for many further studies involving the genetic material, such as the field of molecular genetics, and it was awarded the Nobel Prize in 1962. Each step of the example is examined in more detail later in the article. The scientific method also includes other components required even when all the iterations of the steps above have been completed: - Replication: If an experiment cannot be repeated to produce the same results, this implies that the original results were in error. As a result, it is common for a single experiment to be performed multiple times, especially when there are uncontrolled variables or other indications of experimental error. 
For significant or surprising results, other scientists may also attempt to replicate the results for themselves, especially if those results would be important to their own work. - External review: The process of peer review involves evaluation of the experiment by experts, who give their opinions anonymously to allow them to give unbiased criticism. It does not certify correctness of the results, only that the experiments themselves were sound (based on the description supplied by the experimenter). If the work passes peer review, which may require new experiments requested by the reviewers, it will be published in a peer-reviewed scientific journal. The specific journal that publishes the results indicates the perceived quality of the work. - Data recording and sharing: Scientists must record all data very precisely in order to reduce their own bias and aid in replication by others, a requirement first promoted by Ludwik Fleck (1896–1961) and others. They must supply this data to other scientists who wish to replicate any results, extending to the sharing of any experimental samples that may be difficult to obtain. The goal of a scientific inquiry is to obtain knowledge in the form of testable explanations that can predict the results of future experiments. This allows scientists to gain an understanding of reality, and later use that understanding to intervene in its causal mechanisms (such as to cure disease). The better an explanation is at making predictions, the more useful it is, and the more likely it is to be correct. The most successful explanations, which explain and make accurate predictions in a wide range of circumstances, are called scientific theories. Most experimental results do not result in large changes in human understanding; improvements in theoretical scientific understanding is usually the result of a gradual synthesis of the results of different experiments, by various researchers, across different domains of science. Scientific models vary in the extent to which they have been experimentally tested and for how long, and in their acceptance in the scientific community. In general, explanations become accepted by a scientific community as evidence in favor is presented, and as presumptions that are inconsistent with the evidence are falsified. Properties of scientific inquiry Scientific knowledge is closely tied to empirical findings, and always remains subject to falsification if new experimental observation incompatible with it is found. That is, no theory can ever be considered completely certain, since new evidence falsifying it might be discovered. If such evidence is found, a new theory may be proposed, or (more commonly) it is found that minor modifications to the previous theory are sufficient to explain the new evidence. The strength of a theory is related to how long it has persisted without falsification of its core principles. Confirmed theories are also subject to subsumption by more accurate theories. For example, thousands of years of scientific observations of the planets were explained almost perfectly by Newton's laws. However, these laws were then determined to be special cases of a more general theory (relativity), which explained both the (previously unexplained) exceptions to Newton's laws as well as predicting and explaining other observations such as the deflection of light by gravity. Thus independent, unconnected, scientific observations can be connected to each other, unified by principles of increasing explanatory power. 
Since every new theory must explain even more than the previous one, any successor theory capable of subsuming it must meet an even higher standard, explaining both the larger, unified body of observations explained by the previous theory and unifying that with even more observations. In other words, as scientific knowledge becomes more accurate with time, it becomes increasingly harder to produce a more successful theory, simply because of the great success of the theories that already exist. For example, the Theory of Evolution explains the diversity of life on Earth, how species adapt to their environments, and many other patterns observed in the natural world; its most recent major modification was unification with genetics to form the modern evolutionary synthesis. In subsequent modifications, it has also subsumed aspects of many other fields such as biochemistry and molecular biology. Beliefs and biases Scientific methodology directs that hypotheses be tested in controlled conditions which can be reproduced by others. The scientific community's pursuit of experimental control and reproducibility diminishes the effects of cognitive biases. For example, pre-existing beliefs can alter the interpretation of results, as in confirmation bias; this is a heuristic that leads a person with a particular belief to see things as reinforcing their belief, even if another observer might disagree (in other words, people tend to observe what they expect to observe). A historical example is the conjecture that the legs of a galloping horse are splayed at the point when none of the horse's legs touches the ground, to the point of this image being included in paintings by its supporters. However, the first stop-action pictures of a horse's gallop by Eadweard Muybridge showed this to be false, and that the legs are instead gathered together. In contrast to the requirement for scientific knowledge to correspond to reality, beliefs based on myth or stories can be believed and acted upon irrespective of truth, often taking advantage of the narrative fallacy that when narrative is constructed its elements become easier to believe. Myths intended to be taken as true must have their elements assumed a priori, while science requires testing and validation a posteriori before ideas are accepted. Elements of the scientific method There are different ways of outlining the basic method used for scientific inquiry. The scientific community and philosophers of science generally agree on the following classification of method components. These methodological elements and organization of procedures tend to be more characteristic of natural sciences than social sciences. Nonetheless, the cycle of formulating hypotheses, testing and analyzing the results, and formulating new hypotheses, will resemble the cycle described below. - Four essential elements of the scientific method are iterations, recursions, interleavings, or orderings of the following: - Characterizations (observations, definitions, and measurements of the subject of inquiry) - Hypotheses (theoretical, hypothetical explanations of observations and measurements of the subject) - Predictions (reasoning including logical deduction from the hypothesis or theory) - Experiments (tests of all of the above) Each element of the scientific method is subject to peer review for possible mistakes. These activities do not describe all that scientists do (see below) but apply mostly to experimental sciences (e.g., physics, chemistry, and biology). 
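A toy calculation can make the iterative character of these elements concrete. The sketch below (not from the article, and deliberately simplified) uses Bayes' theorem, mentioned earlier in connection with the strength of predictions, to update confidence in a hypothesis as successive predictions are checked against observation; every probability in it is an invented illustration.

```python
# Toy illustration (invented numbers): confidence in a hypothesis is updated
# via Bayes' theorem each time one of its predictions is checked.

def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Posterior probability of the hypothesis after one observation."""
    numerator = p_obs_given_h * prior
    return numerator / (numerator + p_obs_given_not_h * (1.0 - prior))

confidence = 0.20                                # initial degree of belief
outcomes = [True, True, False, True, True]       # did each prediction come out as forecast?

for confirmed in outcomes:
    if confirmed:
        # a risky prediction that comes true is more likely under H than otherwise
        confidence = bayes_update(confidence, p_obs_given_h=0.9, p_obs_given_not_h=0.3)
    else:
        confidence = bayes_update(confidence, p_obs_given_h=0.1, p_obs_given_not_h=0.7)
    print(f"updated confidence: {confidence:.3f}")
```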
The elements above are often taught in the educational system as "the scientific method". The scientific method is not a single recipe: it requires intelligence, imagination, and creativity. In this sense, it is not a mindless set of standards and procedures to follow, but is rather an ongoing cycle, constantly developing more useful, accurate and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton's Principia. On the contrary, if the astronomically large, the vanishingly small, and the extremely fast are removed from Einstein's theories — all phenomena Newton could not have observed — Newton's equations are what remain. Einstein's theories are expansions and refinements of Newton's theories and, thus, increase our confidence in Newton's work. A linearized, pragmatic scheme of the four points above is sometimes offered as a guideline for proceeding: - Define a question - Gather information and resources (observe) - Form an explanatory hypothesis - Test the hypothesis by performing an experiment and collecting data in a reproducible manner - Analyze the data - Interpret the data and draw conclusions that serve as a starting point for new hypothesis - Publish results - Retest (frequently done by other scientists) The iterative cycle inherent in this step-by-step method goes from point 3 to 6 back to 3 again. While this schema outlines a typical hypothesis/testing method, it should also be noted that a number of philosophers, historians and sociologists of science (perhaps most notably Paul Feyerabend) claim that such descriptions of scientific method have little relation to the ways science is actually practiced. - Operation - Some action done to the system being investigated - Observation - What happens when the operation is done to the system - Model - A fact, hypothesis, theory, or the phenomenon itself at a certain moment - Utility Function - A measure of the usefulness of the model to explain, predict, and control, and of the cost of use of it. One of the elements of any scientific utility function is the refutability of the model. Another is its simplicity, on the Principle of Parsimony more commonly known as Occam's Razor. The scientific method depends upon increasingly sophisticated characterizations of the subjects of investigation. (The subjects can also be called unsolved problems or the unknowns.) For example, Benjamin Franklin conjectured, correctly, that St. Elmo's fire was electrical in nature, but it has taken a long series of experiments and theoretical changes to establish this. While seeking the pertinent properties of the subjects, careful thought may also entail some definitions and observations; the observations often demand careful measurements and/or counting. The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference between pseudo-sciences, such as alchemy, and science, such as chemistry or biology. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such as correlation and regression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. 
The measurements often require specialized scientific instruments such as thermometers, spectroscopes, particle accelerators, or voltmeters, and the progress of a scientific field is usually intimately tied to their invention and improvement. |"I am not accustomed to saying anything with certainty after only one or two observations."—Andreas Vesalius (1546)| Measurements in scientific work are also usually accompanied by estimates of their uncertainty. The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to data collection limitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon the sampling method used and the number of samples taken. Measurements demand the use of operational definitions of relevant quantities. That is, a scientific quantity is described or defined by how it is measured, as opposed to some more vague, inexact or "idealized" definition. For example, electrical current, measured in amperes, may be operationally defined in terms of the mass of silver deposited in a certain time on an electrode in an electrochemical device that is described in some detail. The operational definition of a thing often relies on comparisons with standards: the operational definition of "mass" ultimately relies on the use of an artifact, such as a particular kilogram of platinum-iridium kept in a laboratory in France. The scientific definition of a term sometimes differs substantially from its natural language usage. For example, mass and weight overlap in meaning in common discourse, but have distinct meanings in mechanics. Scientific quantities are often characterized by their units of measure which can later be described in terms of conventional physical units when communicating the work. New theories are sometimes developed after realizing certain terms have not previously been sufficiently clearly defined. For example, Albert Einstein's first paper on relativity begins by defining simultaneity and the means for determining length. These ideas were skipped over by Isaac Newton with, "I do not define time, space, place and motion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations. Francis Crick cautions us that when characterizing a subject, however, it can be premature to define something when it remains ill-understood. In Crick's study of consciousness, he actually found it easier to study awareness in the visual system, rather than to study free will, for example. His cautionary example was the gene; the gene was much more poorly understood before Watson and Crick's pioneering discovery of the structure of DNA; it would have been counterproductive to spend much time on the definition of the gene, before them. The history of the discovery of the structure of DNA is a classic example of the elements of the scientific method: in 1950 it was known that genetic inheritance had a mathematical description, starting with the studies of Gregor Mendel, and that DNA contained genetic information (Oswald Avery's transforming principle). But the mechanism of storing genetic information (i.e., genes) in DNA was unclear. 
Researchers in Bragg's laboratory at Cambridge University made X-ray diffraction pictures of various molecules, starting with crystals of salt and proceeding to more complicated substances. Using clues painstakingly assembled over decades, beginning with its chemical composition, it was determined that it should be possible to characterize the physical structure of DNA, and the X-ray images would be the vehicle.

Another example: precession of Mercury

The characterization element can require extended and extensive study, even centuries. It took thousands of years of measurements, from the Chaldean, Indian, Persian, Greek, Arabic and European astronomers, to fully record the motion of planet Earth. Newton was able to subsume those measurements as consequences of his laws of motion. But the perihelion of the planet Mercury's orbit exhibits a precession that cannot be fully explained by Newton's laws of motion, though it took quite some time to realize this. The observed difference between Newtonian theory and observation for Mercury's precession was one of the things that occurred to Einstein as a possible early test of his theory of General Relativity. His relativistic calculations matched observation much more closely than did Newtonian theory; the discrepancy is approximately 43 arc-seconds per century.
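The 43 arc-seconds-per-century figure can be checked with a short calculation. The Python sketch below (not from the article) applies the standard leading-order general-relativistic formula for perihelion advance per orbit, 6πGM/(c²a(1−e²)), with rounded orbital constants for Mercury; the constant names and values are assumptions for illustration.

```python
# Back-of-the-envelope check of the ~43 arc-seconds per century figure, using
# the leading-order general-relativistic perihelion advance per orbit.
import math

G      = 6.674e-11   # m^3 kg^-1 s^-2
M_SUN  = 1.989e30    # kg
C      = 2.998e8     # m/s
A      = 5.791e10    # Mercury's semi-major axis, m
E      = 0.2056      # Mercury's orbital eccentricity
PERIOD = 87.969      # Mercury's orbital period, days

advance_per_orbit = 6 * math.pi * G * M_SUN / (C**2 * A * (1 - E**2))   # radians
orbits_per_century = 100 * 365.25 / PERIOD
arcsec_per_century = math.degrees(advance_per_orbit * orbits_per_century) * 3600

print(f"{arcsec_per_century:.1f} arc-seconds per century")   # ~43
```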
When Watson and Crick learned of Pauling's hypothesis, they understood from existing data that Pauling was wrong and that Pauling would soon admit his difficulties with that structure. So, the race was on to figure out the correct structure (except that Pauling did not realize at the time that he was in a race—see section on "DNA-predictions" below) Predictions from the hypothesis Any useful hypothesis will enable predictions, by reasoning including deductive reasoning. It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and deal only with probabilities. It is essential that the outcome of testing such a prediction be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should have already been considered while formulating the hypothesis. If the predictions are not accessible by observation or experience, the hypothesis is not yet testable and so will remain to that extent unscientific in a strict sense. A new technology or theory might make the necessary experiments feasible. Thus, much scientifically based speculation might convince one (or many) that the hypothesis that other intelligent species exist is true. But since there no experiment now known which can test this hypothesis, science itself can have little to say about the possibility. In future, some new technique might lead to an experimental test and the speculation would then become part of accepted science. James D. Watson, Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be 'x shaped'. This prediction followed from the work of Cochran, Crick and Vand (and independently by Stokes). The Cochran-Crick-Vand-Stokes theorem provided a mathematical explanation for the empirical observation that diffraction from helical structures produces x shaped patterns. In their first paper, Watson and Crick also noted that the double helix structure they proposed provided a simple mechanism for DNA replication, writing "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material". ..4. DNA-experiments Another example: general relativity Einstein's theory of General Relativity makes several specific predictions about the observable structure of space-time, such as that light bends in a gravitational field, and that the amount of bending depends in a precise way on the strength of that gravitational field. Arthur Eddington's observations made during a 1919 solar eclipse supported General Relativity rather than Newtonian gravitation. Once predictions are made, they can be sought by experiments. If the test results contradict the predictions, the hypotheses which entailed them are called into question and become less tenable. Sometimes the experiments are conducted incorrectly or are not very well designed, when compared to a crucial experiment. If the experimental results confirm the predictions, then the hypotheses are considered more likely to be correct, but might still be wrong and continue to be subject to further testing. The experimental control is a technique for dealing with observational error. This technique uses the contrast between multiple samples (or observations) under differing conditions to see what varies or what remains the same. 
We vary the conditions for each measurement, to help isolate what has changed. Mill's canons can then help us figure out what the important factor is. Factor analysis is one technique for discovering the important factor in an effect. Depending on the predictions, the experiments can have different shapes. It could be a classical experiment in a laboratory setting, a double-blind study or an archaeological excavation. Even taking a plane from New York to Paris is an experiment which tests the aerodynamical hypotheses used for constructing the plane. Scientists assume an attitude of openness and accountability on the part of those conducting an experiment. Detailed record keeping is essential, to aid in recording and reporting on the experimental results, and supports the effectiveness and integrity of the procedure. They will also assist in reproducing the experimental results, likely by others. Traces of this approach can be seen in the work of Hipparchus (190-120 BCE), when determining a value for the precession of the Earth, while controlled experiments can be seen in the works of Jābir ibn Hayyān (721-815 CE), al-Battani (853–929) and Alhazen (965-1039). Watson and Crick showed an initial (and incorrect) proposal for the structure of DNA to a team from Kings College - Rosalind Franklin, Maurice Wilkins, and Raymond Gosling. Franklin immediately spotted the flaws which concerned the water content. Later Watson saw Franklin's detailed X-ray diffraction images which showed an X-shape and was able to confirm the structure was helical. This rekindled Watson and Crick's model building and led to the correct structure. ..1. DNA-characterizations Evaluation and improvement The scientific method is iterative. At any stage it is possible to refine its accuracy and precision, so that some consideration will lead the scientist to repeat an earlier part of the process. Failure to develop an interesting hypothesis may lead a scientist to re-define the subject under consideration. Failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject. Failure of an experiment to produce interesting results may lead a scientist to reconsider the experimental method, the hypothesis, or the definition of the subject. Other scientists may start their own research and enter the process at any stage. They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. Often the experiment is not done by the person who made the prediction, and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility. After considerable fruitless experimentation, being discouraged by their superior from continuing, and numerous false starts, Watson and Crick were able to infer the essential structure of DNA by concrete modeling of the physical shapes of the nucleotides which comprise it. They were guided by the bond lengths which had been deduced by Linus Pauling and by Rosalind Franklin's X-ray diffraction images. ..DNA Example Science is a social enterprise, and scientific work tends to be accepted by the scientific community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. 
Researchers have given their lives for this vision; Georg Wilhelm Richmann was killed by ball lightning (1753) when attempting to replicate the 1752 kite-flying experiment of Benjamin Franklin. To protect against bad science and fraudulent data, government research-granting agencies such as the National Science Foundation, and science journals including Nature and Science, have a policy that researchers must archive their data and methods so other researchers can test the data and methods and build on the research that has gone before. Scientific data archiving can be done at a number of national archives in the U.S. or in the World Data Center. Models of scientific inquiry The classical model of scientific inquiry derives from Aristotle, who distinguished the forms of approximate and exact reasoning, set out the threefold scheme of abductive, deductive, and inductive inference, and also treated the compound forms such as reasoning by analogy. In 1877, Charles Sanders Peirce (// like "purse"; 1839–1914) characterized inquiry in general not as the pursuit of truth per se but as the struggle to move from irritating, inhibitory doubts born of surprises, disagreements, and the like, and to reach a secure belief, belief being that on which one is prepared to act. He framed scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal or hyperbolic doubt, which he held to be fruitless. He outlined four methods of settling opinion, ordered from least to most successful: - The method of tenacity (policy of sticking to initial belief) — which brings comforts and decisiveness but leads to trying to ignore contrary information and others' views as if truth were intrinsically private, not public. It goes against the social impulse and easily falters since one may well notice when another's opinion is as good as one's own initial opinion. Its successes can shine but tend to be transitory. - The method of authority — which overcomes disagreements but sometimes brutally. Its successes can be majestic and long-lived, but it cannot operate thoroughly enough to suppress doubts indefinitely, especially when people learn of other societies present and past. - The method of the a priori — which promotes conformity less brutally but fosters opinions as something like tastes, arising in conversation and comparisons of perspectives in terms of "what is agreeable to reason." Thereby it depends on fashion in paradigms and goes in circles over time. It is more intellectual and respectable but, like the first two methods, sustains accidental and capricious beliefs, destining some minds to doubt it. - The scientific method — the method wherein inquiry regards itself as fallible and purposely tests itself and criticizes, corrects, and improves itself. Peirce held that slow, stumbling ratiocination can be dangerously inferior to instinct and traditional sentiment in practical matters, and that the scientific method is best suited to theoretical research, which in turn should not be trammeled by the other methods and practical ends; reason's "first rule" is that, in order to learn, one must desire to learn and, as a corollary, must not block the way of inquiry. The scientific method excels the others by being deliberately designed to arrive — eventually — at the most secure beliefs, upon which the most successful practices can be based. 
Starting from the idea that people seek not truth per se but instead to subdue irritating, inhibitory doubt, Peirce showed how, through the struggle, some can come to submit to truth for the sake of belief's integrity, seek as truth the guidance of potential practice correctly to its given goal, and wed themselves to the scientific method. For Peirce, rational inquiry implies presuppositions about truth and the real; to reason is to presuppose (and at least to hope), as a principle of the reasoner's self-regulation, that the real is discoverable and independent of our vagaries of opinion. In that vein he defined truth as the correspondence of a sign (in particular, a proposition) to its object and, pragmatically, not as actual consensus of some definite, finite community (such that to inquire would be to poll the experts), but instead as that final opinion which all investigators would reach sooner or later but still inevitably, if they were to push investigation far enough, even when they start from different points. In tandem he defined the real as a true sign's object (be that object a possibility or quality, or an actuality or brute fact, or a necessity or norm or law), which is what it is independently of any finite community's opinion and, pragmatically, depends only on the final opinion destined in a sufficient investigation. That is a destination as far, or near, as the truth itself to you or me or the given finite community. Thus his theory of inquiry boils down to "Do the science." Those conceptions of truth and the real involve the idea of a community both without definite limits (and thus potentially self-correcting as far as needed) and capable of definite increase of knowledge. As inference, "logic is rooted in the social principle" since it depends on a standpoint that is, in a sense, unlimited. Paying special attention to the generation of explanations, Peirce outlined the scientific method as a coordination of three kinds of inference in a purposeful cycle aimed at settling doubts, as follows (in §III–IV in "A Neglected Argument" except as otherwise noted): 1. Abduction (or retroduction). Guessing, inference to explanatory hypotheses for selection of those best worth trying. From abduction, Peirce distinguishes induction as inferring, on the basis of tests, the proportion of truth in the hypothesis. Every inquiry, whether into ideas, brute facts, or norms and laws, arises from surprising observations in one or more of those realms (and for example at any stage of an inquiry already underway). All explanatory content of theories comes from abduction, which guesses a new or outside idea so as to account in a simple, economical way for a surprising or complicative phenomenon. Oftenest, even a well-prepared mind guesses wrong. But the modicum of success of our guesses far exceeds that of sheer luck and seems born of attunement to nature by instincts developed or inherent, especially insofar as best guesses are optimally plausible and simple in the sense, said Peirce, of the "facile and natural", as by Galileo's natural light of reason and as distinct from "logical simplicity". Abduction is the most fertile but least secure mode of inference. Its general rationale is inductive: it succeeds often enough and, without it, there is no hope of sufficiently expediting inquiry (often multi-generational) toward new truths. Coordinative method leads from abducing a plausible hypothesis to judging it for its testability and for how its trial would economize inquiry itself. 
Peirce calls his pragmatism "the logic of abduction". His pragmatic maxim is: "Consider what effects that might conceivably have practical bearings you conceive the objects of your conception to have. Then, your conception of those effects is the whole of your conception of the object". His pragmatism is a method of reducing conceptual confusions fruitfully by equating the meaning of any conception with the conceivable practical implications of its object's conceived effects — a method of experimentational mental reflection hospitable to forming hypotheses and conducive to testing them. It favors efficiency. The hypothesis, being insecure, needs to have practical implications leading at least to mental tests and, in science, lending themselves to scientific tests. A simple but unlikely guess, if uncostly to test for falsity, may belong first in line for testing. A guess is intrinsically worth testing if it has instinctive plausibility or reasoned objective probability, while subjective likelihood, though reasoned, can be misleadingly seductive. Guesses can be chosen for trial strategically, for their caution (for which Peirce gave as example the game of Twenty Questions), breadth, and incomplexity. One can hope to discover only that which time would reveal through a learner's sufficient experience anyway, so the point is to expedite it; the economy of research is what demands the leap, so to speak, of abduction and governs its art. 2. Deduction. Two stages: - i. Explication. Unclearly premissed, but deductive, analysis of the hypothesis in order to render its parts as clear as possible. - ii. Demonstration: Deductive Argumentation, Euclidean in procedure. Explicit deduction of hypothesis's consequences as predictions, for induction to test, about evidence to be found. Corollarial or, if needed, Theorematic. 3. Induction. The long-run validity of the rule of induction is deducible from the principle (presuppositional to reasoning in general) that the real is only the object of the final opinion to which adequate investigation would lead; anything to which no such process would ever lead would not be real. Induction involving ongoing tests or observations follows a method which, sufficiently persisted in, will diminish its error below any predesignate degree. Three stages: - i. Classification. Unclearly premissed, but inductive, classing of objects of experience under general ideas. - ii. Probation: direct Inductive Argumentation. Crude (the enumeration of instances) or Gradual (new estimate of proportion of truth in the hypothesis after each test). Gradual Induction is Qualitative or Quantitative; if Qualitative, then dependent on weightings of qualities or characters; if Quantitative, then dependent on measurements, or on statistics, or on countings. - iii. Sentential Induction. "...which, by Inductive reasonings, appraises the different Probations singly, then their combinations, then makes self-appraisal of these very appraisals themselves, and passes final judgment on the whole result". Many subspecialties of applied logic and computer science, such as artificial intelligence, machine learning, computational learning theory, inferential statistics, and knowledge representation, are concerned with setting out computational, logical, and statistical frameworks for the various types of inference involved in scientific inquiry. In particular, they contribute hypothesis formation, logical deduction, and empirical testing. 
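Peirce's Gradual Induction, which revises the estimated proportion of truth in a hypothesis after each test, maps naturally onto the statistical frameworks just mentioned. The short Python sketch below is an illustration added here, not Peirce's own formalism and not drawn from the source; it uses a simple Beta-Binomial update, with all names invented for the example, to show a running estimate being revised as confirmations and disconfirmations arrive.

```python
# A toy illustration of gradually revising confidence in a hypothesis as
# test outcomes arrive. The Beta-Binomial update is a modern statistical
# stand-in, not Peirce's own formalism; all names here are invented.

def gradual_induction(outcomes, prior_successes=1.0, prior_failures=1.0):
    """Yield a running estimate of the hypothesis's success rate.

    outcomes: iterable of booleans, True if a predicted observation was
    confirmed by a test, False if it was not.
    """
    a, b = prior_successes, prior_failures  # parameters of a Beta(a, b) prior
    for confirmed in outcomes:
        if confirmed:
            a += 1.0
        else:
            b += 1.0
        yield a / (a + b)  # posterior mean after this test

if __name__ == "__main__":
    tests = [True, True, False, True, True, True, False, True]
    for step, estimate in enumerate(gradual_induction(tests), start=1):
        print(f"after test {step}: estimated proportion of truth = {estimate:.2f}")
```

Each yielded value corresponds, loosely, to the "new estimate of proportion of truth in the hypothesis after each test" described under Gradual Induction above.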
Some of these applications draw on measures of complexity from algorithmic information theory to guide the making of predictions from prior distributions of experience; for example, see the complexity measure called the speed prior, from which a computable strategy for optimal inductive reasoning can be derived.

Communication and community

Frequently the scientific method is employed not only by a single person but also by several people cooperating directly or indirectly. Such cooperation can be regarded as one of the defining elements of a scientific community. Various techniques have been developed to ensure the integrity of scientific methodology within such an environment.

Peer review evaluation

Scientific journals use a process of peer review, in which scientists' manuscripts are submitted by editors of scientific journals to (usually one to three) fellow (usually anonymous) scientists familiar with the field for evaluation. The referees may or may not recommend publication, publication with suggested modifications, or, sometimes, publication in another journal. This serves to keep the scientific literature free of unscientific or pseudoscientific work, to help cut down on obvious errors, and generally otherwise to improve the quality of the material. The peer review process can have limitations when considering research outside the conventional scientific paradigm: problems of "groupthink" can interfere with open and fair deliberation of some new research.

Documentation and replication

Sometimes experimenters may make systematic errors during their experiments, unconsciously veer from the scientific method (pathological science) for various reasons, or, in rare cases, deliberately report false results. Consequently, it is a common practice for other scientists to attempt to repeat the experiments in order to duplicate the results, thus further validating the hypothesis.

As a result, researchers are expected to practice scientific data archiving in compliance with the policies of government funding agencies and scientific journals. Detailed records of their experimental procedures, raw data, statistical analyses, and source code are preserved in order to provide evidence of the effectiveness and integrity of the procedure and to assist in reproduction. These procedural records may also assist in the conception of new experiments to test the hypothesis, and may prove useful to engineers who might examine the potential practical applications of a discovery. When additional information is needed before a study can be reproduced, the author of the study is expected to provide it promptly. If the author refuses to share data, appeals can be made to the journal editors who published the study or to the institution which funded the research.

Since it is impossible for a scientist to record everything that took place in an experiment, facts selected for their apparent relevance are reported. This may lead, unavoidably, to problems later if some supposedly irrelevant feature is questioned. For example, Heinrich Hertz did not report the size of the room used to test Maxwell's equations, which later turned out to account for a small deviation in the results. The problem is that parts of the theory itself need to be assumed in order to select and report the experimental conditions. The observations are hence sometimes described as being "theory-laden".

Dimensions of practice

The primary constraints on contemporary science are:
- Publication, i.e. peer review
- Resources (mostly funding)

It has not always been like this: in the old days of the "gentleman scientist", funding (and, to a lesser extent, publication) were far weaker constraints. Both of these constraints indirectly require scientific method — work that violates the constraints will be difficult to publish and difficult to get funded. Journals require submitted papers to conform to "good scientific practice", and this is mostly enforced by peer review. Originality, importance, and interest are even more important - see, for example, the author guidelines for Nature.

Philosophy and sociology of science

Philosophy of science looks at the underpinning logic of the scientific method, at what separates science from non-science, and at the ethic that is implicit in science. There are basic assumptions, derived from philosophy, that form the basis of the scientific method - namely, that reality is objective and consistent, that humans have the capacity to perceive reality accurately, and that rational explanations exist for elements of the real world. These assumptions from methodological naturalism form the basis on which science is grounded. Logical positivist, empiricist, falsificationist, and other theories have claimed to give a definitive account of the logic of science, but each has in turn been criticized.

Thomas Kuhn examined the history of science in his The Structure of Scientific Revolutions, and found that the actual method used by scientists differed dramatically from the then-espoused method. His observations of science practice are essentially sociological and do not speak to how science is or can be practiced in other times and other cultures.

Norwood Russell Hanson, Imre Lakatos, and Thomas Kuhn have done extensive work on the "theory-laden" character of observation. Hanson (1958) first coined the term for the idea that all observation is dependent on the conceptual framework of the observer, using the concept of gestalt to show how preconceptions can affect both observation and description. He opens Chapter 1 with a discussion of the Golgi bodies and their initial rejection as an artefact of staining technique, and a discussion of Brahe and Kepler observing the dawn and seeing a "different" sun rise despite the same physiological phenomenon. Kuhn and Feyerabend acknowledge the pioneering significance of his work.

Kuhn (1961) said the scientist generally has a theory in mind before designing and undertaking experiments so as to make empirical observations, and that the "route from theory to measurement can almost never be traveled backward". This implies that the way in which theory is tested is dictated by the nature of the theory itself, which led Kuhn (1961, p. 166) to argue that "once it has been adopted by a profession ... no theory is recognized to be testable by any quantitative tests that it has not already passed".

Paul Feyerabend similarly examined the history of science, and was led to deny that science is genuinely a methodological process. In his book Against Method he argues that scientific progress is not the result of applying any particular method. In essence, he says that for any specific method or norm of science, one can find a historic episode where violating it has contributed to the progress of science. Thus, if believers in scientific method wish to express a single universally valid rule, Feyerabend jokingly suggests, it should be "anything goes". Criticisms such as his led to the strong programme, a radical approach to the sociology of science.
The postmodernist critiques of science have themselves been the subject of intense controversy. This ongoing debate, known as the science wars, is the result of conflicting values and assumptions between the postmodernist and realist camps. Whereas postmodernists assert that scientific knowledge is simply another discourse (note that this term has special meaning in this context) and not representative of any form of fundamental truth, realists in the scientific community maintain that scientific knowledge does reveal real and fundamental truths about reality. Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate method of deriving truth.

Role of chance in discovery

Somewhere between 33% and 50% of all scientific discoveries are estimated to have been stumbled upon, rather than sought out. This may explain why scientists so often say that they were lucky. Louis Pasteur is credited with the famous saying that "Luck favours the prepared mind", but some psychologists have begun to study what it means to be 'prepared for luck' in the scientific context. Research is showing that scientists are taught various heuristics that tend to harness chance and the unexpected. This is what Nassim Nicholas Taleb calls "anti-fragility"; while some systems of investigation are fragile in the face of human error, human bias, and randomness, the scientific method is more than resistant or tough - it actually benefits from such randomness in many ways (it is anti-fragile). Taleb believes that the more anti-fragile the system, the more it will flourish in the real world.

Psychologist Kevin Dunbar says the process of discovery often starts with researchers finding bugs in their experiments. These unexpected results lead researchers to try to fix what they think is an error in their method. Eventually, the researcher decides the error is too persistent and systematic to be a coincidence. The highly controlled, cautious, and curious aspects of the scientific method are thus what make it well suited for identifying such persistent systematic errors. At this point, the researcher will begin to think of theoretical explanations for the error, often seeking the help of colleagues across different domains of expertise.

History

The development of the scientific method is inseparable from the history of science itself. Ancient Egyptian documents describe empirical methods in astronomy, mathematics, and medicine. The ancient Greek philosopher Thales in the 6th century BC refused to accept supernatural, religious, or mythological explanations for natural phenomena, proclaiming that every event had a natural cause. The development of deductive reasoning by Plato was an important step towards the scientific method. Empiricism seems to have been formalized by Aristotle, who believed that universal truths could be reached via induction.

There are hints of experimental methods from the Classical world (e.g., those reported by Archimedes in a report recovered early in the 20th century from an overwritten manuscript), but the first clear instances of an experimental scientific method seem to have been developed by Islamic scientists, who introduced the use of experimentation and quantification within a generally empirical orientation. For example, Alhazen performed optical and physiological experiments, reported in his manifold works, the most famous being Book of Optics (1021).
By the late 15th century, the physician-scholar Niccolò Leoniceno was finding errors in Pliny's Natural History. As a physician, Leoniceno was concerned about these botanical errors propagating to the materia medica on which medicines were based. To counter this, a botanical garden was established at Orto botanico di Padova, University of Padua (in use for teaching by 1546), in order that medical students might have empirical access to the plants of a pharmacopoeia.

The philosopher and physician Francisco Sanches was led by his medical training at Rome, 1571–73, and by the philosophical skepticism recently placed in the European mainstream by the publication of Sextus Empiricus' "Outlines of Pyrrhonism", to search for a true method of knowing (modus sciendi), as nothing clear can be known by the methods of Aristotle and his followers — for example, syllogism fails upon circular reasoning. Following the physician Galen's method of medicine, Sanches lists the methods of judgement and experience, which are faulty in the wrong hands, and we are left with the bleak statement That Nothing is Known (1581). This challenge was taken up by René Descartes in the next generation (1637), but, at the least, Sanches warns us that we ought to refrain from the methods, summaries, and commentaries on Aristotle if we seek scientific knowledge. In this he is echoed by Francis Bacon, who was also influenced by skepticism; Sanches cites the humanist Juan Luis Vives, who sought a better educational system, as well as a statement of human rights as a pathway for improvement of the lot of the poor.

The modern scientific method crystallized no later than the 17th and 18th centuries. In his work Novum Organum (1620) — a reference to Aristotle's Organon — Francis Bacon outlined a new system of logic to improve upon the old philosophical process of syllogism. Then, in 1637, René Descartes established the framework for the scientific method's guiding principles in his treatise, Discourse on Method. The writings of Alhazen, Bacon, and Descartes are considered critical in the historical development of the modern scientific method, as are those of John Stuart Mill.

Grosseteste was "the principal figure" in bringing about "a more adequate method of scientific inquiry" by which "medieval scientists were able eventually to outstrip their ancient European and Muslim teachers" (Dales 1973:62). ... His thinking influenced Roger Bacon, who spread Grosseteste's ideas from Oxford to the University of Paris during a visit there in the 1240s. From the prestigious universities in Oxford and Paris, the new experimental science spread rapidly throughout the medieval universities: "And so it went to Galileo, William Gilbert, Francis Bacon, William Harvey, Descartes, Robert Hooke, Newton, Leibniz, and the world of the seventeenth century" (Crombie 1962:15). So it went to us also. — Hugh G. Gauch, 2003.

In the late 19th century, Charles Sanders Peirce proposed a schema that would turn out to have considerable influence in the development of current scientific methodology generally. Peirce accelerated the progress on several fronts. Firstly, speaking in broader context in "How to Make Our Ideas Clear" (1878), Peirce outlined an objectively verifiable method to test the truth of putative knowledge in a way that goes beyond mere foundational alternatives, focusing upon both deduction and induction.
He thus placed induction and deduction in a complementary rather than competitive context (the latter of which had been the primary trend at least since David Hume, who wrote in the mid-to-late 18th century). Secondly, and of more direct importance to modern method, Peirce put forth the basic schema for hypothesis testing that continues to prevail today. Extracting the theory of inquiry from its raw materials in classical logic, he refined it in parallel with the early development of symbolic logic to address the then-current problems in scientific reasoning. Peirce examined and articulated the three fundamental modes of reasoning that, as discussed above in this article, play a role in inquiry today, the processes that are currently known as abductive, deductive, and inductive inference. Thirdly, he played a major role in the progress of symbolic logic itself — indeed this was his primary specialty.

Beginning in the 1930s, Karl Popper argued that there is no such thing as inductive reasoning. All inferences ever made, including in science, are purely deductive according to this view. Accordingly, he claimed that the empirical character of science has nothing to do with induction but with the deductive property of falsifiability that scientific hypotheses have. Contrasting his views with inductivism and positivism, he even denied the existence of the scientific method: "(1) There is no method of discovering a scientific theory; (2) There is no method for ascertaining the truth of a scientific hypothesis, i.e., no method of verification; (3) There is no method for ascertaining whether a hypothesis is 'probable', or probably true". Instead, he held that there is only one universal method, a method not particular to science: the negative method of criticism, colloquially termed trial and error. It covers not only all products of the human mind, including science, mathematics, philosophy, art, and so on, but also the evolution of life. Following Peirce and others, Popper argued that science is fallible and has no authority. In contrast to empiricist-inductivist views, he welcomed metaphysics and philosophical discussion and even gave qualified support to myths and pseudosciences. Popper's view has become known as critical rationalism.

Although science in a broad sense existed before the modern era, and in many historical civilizations (as described above), modern science is so distinct in its approach and successful in its results that it now defines what science is in the strictest sense of the term.

Relationship with mathematics

Science is the process of gathering, comparing, and evaluating proposed models against observables. A model can be a simulation, mathematical or chemical formula, or set of proposed steps. Science is like mathematics in that researchers in both disciplines can clearly distinguish what is known from what is unknown at each stage of discovery. Models, in both science and mathematics, need to be internally consistent and also ought to be falsifiable (capable of disproof). In mathematics, a statement need not yet be proven; at such a stage, that statement would be called a conjecture. But when a statement has attained mathematical proof, that statement gains a kind of immortality which is highly prized by mathematicians, and for which some mathematicians devote their lives. Mathematical work and scientific work can inspire each other. For example, the technical concept of time arose in science, and timelessness was a hallmark of a mathematical topic.
But today, the Poincaré conjecture has been proven using time as a mathematical concept in which objects can flow (see Ricci flow). Nevertheless, the connection between mathematics and reality (and so science to the extent it describes reality) remains obscure. Eugene Wigner's paper, The Unreasonable Effectiveness of Mathematics in the Natural Sciences, is a very well known account of the issue from a Nobel Prize-winning physicist. In fact, some observers (including some well known mathematicians such as Gregory Chaitin, and others such as Lakoff and Núñez) have suggested that mathematics is the result of practitioner bias and human limitation (including cultural ones), somewhat like the post-modernist view of science. George Pólya's work on problem solving, the construction of mathematical proofs, and heuristic show that the mathematical method and the scientific method differ in detail, while nevertheless resembling each other in using iterative or recursive steps.

| Step | Mathematical method | Scientific method |
| 1 | Understanding | Characterization from experience and observation |
| 2 | Analysis | Hypothesis: a proposed explanation |
| 3 | Synthesis | Deduction: prediction from the hypothesis |
| 4 | Review/Extend | Test and experiment |

In Pólya's view, understanding involves restating unfamiliar definitions in your own words, resorting to geometrical figures, and questioning what we know and do not know already; analysis, which Pólya takes from Pappus, involves free and heuristic construction of plausible arguments, working backward from the goal, and devising a plan for constructing the proof; synthesis is the strict Euclidean exposition of step-by-step details of the proof; review involves reconsidering and re-examining the result and the path taken to it.

Problems and issues

History, philosophy, sociology

- Goldhaber & Nieto 2010, p. 940
- "Rules for the study of natural philosophy", Newton 1999, pp. 794–6, from Book 3, The System of the World.
- Oxford English Dictionary - entry for scientific.
- "How does light travel through transparent bodies? Light travels through transparent bodies in straight lines only.... We have explained this exhaustively in our Book of Optics. But let us now mention something to prove this convincingly: the fact that light travels in straight lines is clearly observed in the lights which enter into dark rooms through holes.... [T]he entering light will be clearly observable in the dust which fills the air. —Alhazen, translated into English from German by M. Schwarz, from "Abhandlung über das Licht", J. Baarmann (ed. 1882) Zeitschrift der Deutschen Morgenländischen Gesellschaft Vol 36, as quoted in Sambursky 1974, p. 136.
- He demonstrated his conjecture that "light travels through transparent bodies in straight lines only" by placing a straight stick or a taut thread next to the light beam, as quoted in Sambursky 1974, p. 136, to prove that light travels in a straight line.
- David Hockney (2001, 2006) in Secret Knowledge: rediscovering the lost techniques of the old masters ISBN 0-14-200512-6 (expanded edition) cites Alhazen several times as the likely source for the portraiture technique using the camera obscura, which Hockney rediscovered with the aid of an optical suggestion from Charles M. Falco. Kitab al-Manazir, which is Alhazen's Book of Optics, at that time denoted Opticae Thesaurus, Alhazen Arabis, was translated from Arabic into Latin for European use as early as 1270. Hockney cites Friedrich Risner's 1572 Basle edition of Opticae Thesaurus.
Hockney quotes Alhazen as the first clear description of the camera obscura in Hockney, p. 240. - Morris Kline (1985) Mathematics for the nonmathematician. Courier Dover Publications. p. 284. ISBN 0-486-24823-2 - Shapere, Dudley (1974). Galileo: A Philosophical Study. University of Chicago Press. ISBN 0-226-75007-8. - Peirce, C. S., Collected Papers v. 1, paragraph 74. - " The thesis of this book, as set forth in Chapter One, is that there are general principles applicable to all the sciences." __ Gauch 2003, p. xv - Peirce (1877), "The Fixation of Belief", Popular Science Monthly, v. 12, pp. 1–15. Reprinted often, including (Collected Papers of Charles Sanders Peirce v. 5, paragraphs 358–87), (The Essential Peirce, v. 1, pp. 109–23). Peirce.org Eprint. Wikisource Eprint. - Gauch 2003, p. 1: This is the principle of noncontradiction. - Peirce, C. S., Collected Papers v. 5, in paragraph 582, from 1898: ... [rational] inquiry of every type, fully carried out, has the vital power of self-correction and of growth. This is a property so deeply saturating its inmost nature that it may truly be said that there is but one thing needful for learning the truth, and that is a hearty and active desire to learn what is true. - Taleb contributes a brief description of anti-fragility, http://www.edge.org/q2011/q11_3.html - Karl R. Popper (1963), 'The Logic of Scientific Discovery'. The Logic of Scientific Discovery pp. 17-20, 249-252, 437-438, and elsewhere. - Leon Lederman, for teaching physics first, illustrates how to avoid confirmation bias: Ian Shelton, in Chile, was initially skeptical that supernova 1987a was real, but possibly an artifact of instrumentation (null hypothesis), so he went outside and disproved his null hypothesis by observing SN 1987a with the naked eye. The Kamiokande experiment, in Japan, independently observed neutrinos from SN 1987a at the same time. - Peirce (1908), "A Neglected Argument for the Reality of God", Hibbert Journal v. 7, pp. 90-112. s:A Neglected Argument for the Reality of God with added notes. Reprinted with previously unpublished part, Collected Papers v. 6, paragraphs 452-85, The Essential Peirce v. 2, pp. 434-50, and elsewhere. - Gauch 2003, p. 3 - History of Inductive Science (1837), and in Philosophy of Inductive Science (1840) - Schuster and Powers (2005), Translational and Experimental Clinical Research, Ch. 1. Link. This chapter also discusses the different types of research questions and how they are produced. - This phrasing is attributed to Marshall Nirenberg. - Karl R. Popper, Conjectures and Refutations: The Growth of Scientific Knowledge, Routledge, 2003 ISBN 0-415-28594-1 - Lindberg 2007, pp. 2–3: "There is a danger that must be avoided. ... If we wish to do justice to the historical enterprise, we must take the past for what it was. And that means we must resist the temptation to scour the past for examples or precursors of modern science. ...My concern will be with the beginnings of scientific theories, the methods by which they were formulated, and the uses to which they were put; ... " - Galilei, Galileo (M.D.C.XXXVIII), Discorsi e Dimonstrazioni Matematiche, intorno a due nuoue scienze, Leida: Apresso gli Elsevirri, ISBN 0-486-60099-8, Dover reprint of the 1914 Macmillan translation by Henry Crew and Alfonso de Salvio of Two New Sciences, Galileo Galilei Linceo (1638). Additional publication information is from the collection of first editions of the Library of Congress surveyed by Bruno 1989, pp. 261–264. - Godfrey-Smith 2003 p. 236. 
- October 1951, as noted in McElheny 2004, p. 40:"That's what a helix should look like!" Crick exclaimed in delight (This is the Cochran-Crick-Vand-Stokes theory of the transform of a helix). - June 1952, as noted in McElheny 2004, p. 43: Watson had succeeded in getting X-ray pictures of TMV showing a diffraction pattern consistent with the transform of a helix. - Watson did enough work on Tobacco mosaic virus to produce the diffraction pattern for a helix, per Crick's work on the transform of a helix. pp. 137-138, Horace Freeland Judson (1979) The Eighth Day of Creation ISBN 0-671-22540-5 - — Cochran W, Crick FHC and Vand V. (1952) "The Structure of Synthetic Polypeptides. I. The Transform of Atoms on a Helix", Acta Cryst., 5, 581-586. - Friday, January 30, 1953. Tea time, as noted in McElheny 2004, p. 52: Franklin confronts Watson and his paper - "Of course it [Pauling's pre-print] is wrong. DNA is not a helix." However, Watson then visits Wilkins' office, sees photo 51, and immediately recognizes the diffraction pattern of a helical structure. But additional questions remained, requiring additional iterations of their research. For example, the number of strands in the backbone of the helix (Crick suspected 2 strands, but cautioned Watson to examine that more critically), the location of the base pairs (inside the backbone or outside the backbone), etc. One key point was that they realized that the quickest way to reach a result was not to continue a mathematical analysis, but to build a physical model. - "The instant I saw the picture my mouth fell open and my pulse began to race." —Watson 1968, p. 167 Page 168 shows the X-shaped pattern of the B-form of DNA, clearly indicating crucial details of its helical structure to Watson and Crick. - McElheny 2004 p.52 dates the Franklin-Watson confrontation as Friday, January 30, 1953. Later that evening, Watson urges Wilkins to begin model-building immediately. But Wilkins agrees to do so only after Franklin's departure. - Saturday, February 28, 1953, as noted in McElheny 2004, pp. 57–59: Watson found the base pairing mechanism which explained Chargaff's rules using his cardboard models. - Fleck 1979, pp. xxvii-xxviii - "NIH Data Sharing Policy." - Stanovich, Keith E. (2007). How to Think Straight About Psychology. Boston: Pearson Education. pg 123 - Brody 1993, pp. 44–45 - Hall, B. K.; Hallgrímsson, B., eds. (2008). Strickberger's Evolution (4th ed.). Jones & Bartlett. p. 762. ISBN 0-7637-0066-5. - Cracraft, J.; Donoghue, M. J., eds. (2005). Assembling the tree of life. Oxford University Press. p. 592. ISBN 0-19-517234-5. - Needham & Wang 1954 p.166 shows how the 'flying gallop' image propagated from China to the West. - "A myth is a belief given uncritical acceptance by members of a group ..." —Weiss, Business Ethics p. 15, as cited by Ronald R. Sims (2003) Ethics and corporate social responsibility: why giants fall p.21 - Imre Lakatos (1976), Proofs and Refutations. Taleb 2007, p. 72 lists ways to avoid narrative fallacy and confirmation bias. - For more on the narrative fallacy, see also Fleck 1979, p. 27: "Words and ideas are originally phonetic and mental equivalences of the experiences coinciding with them. ... Such proto-ideas are at first always too broad and insufficiently specialized. ... Once a structurally complete and closed system of opinions consisting of many details and relations has been formed, it offers enduring resistance to anything that contradicts it." 
- "Invariably one came up against fundamental physical limits to the accuracy of measurement. ... The art of physical measurement seemed to be a matter of compromise, of choosing between reciprocally related uncertainties. ... Multiplying together the conjugate pairs of uncertainty limits mentioned, however, I found that they formed invariant products of not one but two distinct kinds. ... The first group of limits were calculable a priori from a specification of the instrument. The second group could be calculated only a posteriori from a specification of what was done with the instrument. ... In the first case each unit [of information] would add one additional dimension (conceptual category), whereas in the second each unit would add one additional atomic fact.", —Pages 1-4: MacKay, Donald M. (1969), Information, Mechanism, and Meaning, Cambridge, MA: MIT Press, ISBN 0-262-63-032-X - See the hypothethico-deductive method, for example, Godfrey-Smith 2003, p. 236. - Jevons 1874, pp. 265–6. - pp.65,73,92,398 —Andrew J. Galambos, Sic Itur ad Astra ISBN 0-88078-004-5(AJG learned scientific method from Felix Ehrenhaft - Galileo 1638, pp. v-xii,1–300 - Brody 1993, pp. 10–24 calls this the "epistemic cycle": "The epistemic cycle starts from an initial model; iterations of the cycle then improve the model until an adequate fit is achieved." - Iteration example: Chaldean astronomers such as Kidinnu compiled astronomical data. Hipparchus was to use this data to calculate the precession of the Earth's axis. Fifteen hundred years after Kidinnu, Al-Batani, born in what is now Turkey, would use the collected data and improve Hipparchus' value for the precession of the Earth's axis. Al-Batani's value, 54.5 arc-seconds per year, compares well to the current value of 49.8 arc-seconds per year (26,000 years for Earth's axis to round the circle of nutation). - Recursion example: the Earth is itself a magnet, with its own North and South Poles William Gilbert (in Latin 1600) De Magnete, or On Magnetism and Magnetic Bodies. Translated from Latin to English, selection by Moulton & Schifferes 1960, pp. 113–117. Gilbert created a terrella, a lodestone ground into a spherical shape, which served as Gilbert's model for the Earth itself, as noted in Bruno 1989, p. 277. - "The foundation of general physics ... is experience. These ... everyday experiences we do not discover without deliberately directing our attention to them. Collecting information about these is observation." —Hans Christian Ørsted("First Introduction to General Physics" ¶13, part of a series of public lectures at the University of Copenhagen. Copenhagen 1811, in Danish, printed by Johan Frederik Schulz. In Kirstine Meyer's 1920 edition of Ørsted's works, vol.III pp. 151-190. ) "First Introduction to Physics: the Spirit, Meaning, and Goal of Natural Science". Reprinted in German in 1822, Schweigger's Journal für Chemie und Physik 36, pp.458-488, as translated in Ørsted 1997, p. 292 - "When it is not clear under which law of nature an effect or class of effect belongs, we try to fill this gap by means of a guess. Such guesses have been given the name conjectures or hypotheses." —Hans Christian Ørsted(1811) "First Introduction to General Physics" as translated in Ørsted 1997, p. 297. - "In general we look for a new law by the following process. First we guess it. ...", —Feynman 1965, p. 156 - "... the statement of a law - A depends on B - always transcends experience."—Born 1949, p. 6 - "The student of nature ... 
regards as his property the experiences which the mathematician can only borrow. This is why he deduces theorems directly from the nature of an effect while the mathematician only arrives at them circuitously." —Hans Christian Ørsted(1811) "First Introduction to General Physics" ¶17. as translated in Ørsted 1997, p. 297. - Salviati speaks: "I greatly doubt that Aristotle ever tested by experiment whether it be true that two stones, one weighing ten times as much as the other, if allowed to fall, at the same instant, from a height of, say, 100 cubits, would so differ in speed that when the heavier had reached the ground, the other would not have fallen more than 10 cubits." Two New Sciences (1638) —Galileo 1638, pp. 61–62. A more extended quotation is referenced by Moulton & Schifferes 1960, pp. 80–81. - In the inquiry-based education paradigm, the stage of "characterization, observation, definition, …" is more briefly summed up under the rubric of a Question - "To raise new questions, new possibilities, to regard old problems from a new angle, requires creative imagination and marks real advance in science." —Einstein & Infeld 1938, p. 92. - Crawford S, Stucki L (1990), "Peer review and the changing research record", "J Am Soc Info Science", vol. 41, pp 223-228 - See, e.g., Gauch 2003, esp. chapters 5-8 - Cartwright, Nancy (1983), How the Laws of Physics Lie. Oxford: Oxford University Press. ISBN 0-19-824704-4 - Andreas Vesalius, Epistola, Rationem, Modumque Propinandi Radicis Chynae Decocti (1546), 141. Quoted and translated in C.D. O'Malley, Andreas Vesalius of Brussels, (1964), 116. As quoted by Bynum & Porter 2005, p. 597: Andreas Vesalius,597#1. - Crick, Francis (1994), The Astonishing Hypothesis ISBN 0-684-19431-7 p.20 - McElheny 2004 p.34 - Glen 1994, pp. 37–38. - "The structure that we propose is a three-chain structure, each chain being a helix" — Linus Pauling, as quoted on p. 157 by Horace Freeland Judson (1979), The Eighth Day of Creation ISBN 0-671-22540-5 - McElheny 2004, pp. 49–50: January 28, 1953 - Watson read Pauling's pre-print, and realized that in Pauling's model, DNA's phosphate groups had to be un-ionized. But DNA is an acid, which contradicts Pauling's model. - June 1952. as noted in McElheny 2004, p. 43: Watson had succeeded in getting X-ray pictures of TMV showing a diffraction pattern consistent with the transform of a helix. - McElheny 2004 p.68: Nature April 25, 1953. - In March 1917, the Royal Astronomical Society announced that on May 29, 1919, the occasion of a total eclipse of the sun would afford favorable conditions for testing Einstein's General theory of relativity. One expedition, to Sobral, Ceará, Brazil, and Eddington's expedition to the island of Principe yielded a set of photographs, which, when compared to photographs taken at Sobral and at Greenwich Observatory showed that the deviation of light was measured to be 1.69 arc-seconds, as compared to Einstein's desk prediction of 1.75 arc-seconds. — Antonina Vallentin (1954), Einstein, as quoted by Samuel Rapport and Helen Wright (1965), Physics, New York: Washington Square Press, pp 294-295. - Mill, John Stuart, "A System of Logic", University Press of the Pacific, Honolulu, 2002, ISBN 1-4102-0252-6. - al-Battani, De Motu Stellarum translation from Arabic to Latin in 1116, as cited by "Battani, al-" (c.858-929) Encyclopaedia Britannica, 15th. ed. Al-Battani is known for his accurate observations at al-Raqqah in Syria, beginning in 877. 
His work includes measurement of the annual precession of the equinoxes. - McElheny 2004 p.53: The weekend (January 31-February 1) after seeing photo 51, Watson informed Bragg of the X-ray diffraction image of DNA in B form. Bragg gave them permission to restart their research on DNA (that is, model building). - McElheny 2004 p.54: On Sunday February 8, 1953, Maurice Wilkins gave Watson and Crick permission to work on models, as Wilkins would not be building models until Franklin left DNA research. - McElheny 2004 p.56: Jerry Donohue, on sabbatical from Pauling's lab and visiting Cambridge, advises Watson that the textbook form of the base pairs was incorrect for DNA base pairs; rather, the keto form of the base pairs should be used instead. This form allowed the bases' hydrogen bonds to pair 'unlike' with 'unlike', rather than to pair 'like' with 'like', as Watson was inclined to model, on the basis of the textbook statements. On February 27, 1953, Watson was convinced enough to make cardboard models of the nucleotides in their keto form. - "Suddenly I became aware that an adenine-thymine pair held together by two hydrogen bonds was identical in shape to a guanine-cytosine pair held together by at least two hydrogen bonds. ..." —Watson 1968, pp. 194–197. - McElheny 2004 p.57: Saturday, February 28, 1953, Watson tried 'like with like' and admitted these base pairs didn't have hydrogen bonds that line up. But after trying 'unlike with unlike', and getting Jerry Donohue's approval, the base pairs turned out to be identical in shape (as Watson stated in his 1968 Double Helix memoir, quoted above). Watson now felt confident enough to inform Crick. (Of course, 'unlike with unlike' increases the number of possible codons, if this scheme were a genetic code.) - See, e.g., Physics Today, 59(1), p42. Richmann electrocuted in St. Petersburg (1753) - Aristotle, "Prior Analytics", Hugh Tredennick (trans.), pp. 181-531 in Aristotle, Volume 1, Loeb Classical Library, William Heinemann, London, UK, 1938. - "What one does not in the least doubt one should not pretend to doubt; but a man should train himself to doubt," said Peirce in a brief intellectual autobiography; see Ketner, Kenneth Laine (2009), "Charles Sanders Peirce: Interdisciplinary Scientist" in The Logic of Interdisciplinarity. Peirce held that actual, genuine doubt originates externally, usually in surprise, but also that it is to be sought and cultivated, "provided only that it be the weighty and noble metal itself, and no counterfeit nor paper substitute"; in "Issues of Pragmaticism", The Monist, v. XV, n. 4, pp. 481-99, see p. 484, and p. 491. (Reprinted in Collected Papers v. 5, paragraphs 438-63, see 443 and 451). - Peirce (1898), "Philosophy and the Conduct of Life", Lecture 1 of the Cambridge (MA) Conferences Lectures, published in Collected Papers v. 1, paragraphs 616-48 in part and in Reasoning and the Logic of Things, Ketner (ed., intro.) and Putnam (intro., comm.), pp. 105-22, reprinted in Essential Peirce v. 2, pp. 27-41. - " ... in order to learn, one must desire to learn ..." —Peirce (1899), "F.R.L." [First Rule of Logic], Collected Papers v. 1, paragraphs 135-40, Eprint - Peirce (1877), "How to Make Our Ideas Clear", Popular Science Monthly, v. 12, pp. 286–302. Reprinted often, including Collected Papers v. 5, paragraphs 388–410, Essential Peirce v. 1, pp. 124–41. Arisbe Eprint. Wikisource Eprint. - Peirce (1868), "Some Consequences of Four Incapacities", Journal of Speculative Philosophy v. 2, n. 3, pp. 140–57.
Reprinted Collected Papers v. 5, paragraphs 264–317, The Essential Peirce v. 1, pp. 28–55, and elsewhere. Arisbe Eprint - Peirce (1878), "The Doctrine of Chances", Popular Science Monthly v. 12, pp. 604-15, see pp. 610-11 via Internet Archive. Reprinted Collected Papers v. 2, paragraphs 645-68, Essential Peirce v. 1, pp. 142-54. "...death makes the number of our risks, the number of our inferences, finite, and so makes their mean result uncertain. The very idea of probability and of reasoning rests on the assumption that this number is indefinitely great. .... ...logicality inexorably requires that our interests shall not be limited. .... Logic is rooted in the social principle." - Peirce (c. 1906), "PAP (Prolegomena for an Apology to Pragmatism)" (Manuscript 293, not the like-named article), The New Elements of Mathematics (NEM) 4:319-320, see first quote under "Abduction" at Commens Dictionary of Peirce's Terms. - Peirce, Carnegie application (L75, 1902), New Elements of Mathematics v. 4, pp. 37-38: For it is not sufficient that a hypothesis should be a justifiable one. Any hypothesis which explains the facts is justified critically. But among justifiable hypotheses we have to select that one which is suitable for being tested by experiment. - Peirce (1902), Carnegie application, see MS L75.329-330, from Draft D of Memoir 27: Consequently, to discover is simply to expedite an event that would occur sooner or later, if we had not troubled ourselves to make the discovery. Consequently, the art of discovery is purely a question of economics. The economics of research is, so far as logic is concerned, the leading doctrine with reference to the art of discovery. Consequently, the conduct of abduction, which is chiefly a question of heuretic and is the first question of heuretic, is to be governed by economical considerations. - Peirce (1903), "Pragmatism — The Logic of Abduction", Collected Papers v. 5, paragraphs 195-205, especially 196. Eprint. - Peirce, "On the Logic of Drawing Ancient History from Documents", Essential Peirce v. 2, see pp. 107-9. On Twenty Questions, p. 109: Thus, twenty skillful hypotheses will ascertain what 200,000 stupid ones might fail to do. - Peirce (1878), "The Probability of Induction", Popular Science Monthly, v. 12, pp. 705-18, see 718 Google Books; 718 via Internet Archive. Reprinted often, including (Collected Papers v. 2, paragraphs 669-93), (The Essential Peirce v. 1, pp. 155-69). - Peirce (1905 draft "G" of "A Neglected Argument"), "Crude, Quantitative, and Qualitative Induction", Collected Papers v. 2, paragraphs 755–760, see 759. Find under "Induction" at Commens Dictionary of Peirce's Terms. - . Brown, C. (2005) Overcoming Barriers to Use of Promising Research Among Elite Middle East Policy Groups, Journal of Social Behaviour and Personality, Select Press. - Hanson, Norwood (1958), Patterns of Discovery, Cambridge University Press, ISBN 0-521-05197-5 - Kuhn 1962, p. 113 ISBN 978-1-4432-5544-8 - Feyerabend, Paul K (1960) "Patterns of Discovery" The Philosophical Review (1960) vol. 69 (2) pp. 247-252 - Kuhn, Thomas S., "The Function of Measurement in Modern Physical Science", ISIS 52(2), 161–193, 1961. - Feyerabend, Paul K., Against Method, Outline of an Anarchistic Theory of Knowledge, 1st published, 1975. Reprinted, Verso, London, UK, 1978. - Higher Superstition: The Academic Left and Its Quarrels with Science, The Johns Hopkins University Press, 1997 - Fashionable Nonsense: Postmodern Intellectuals' Abuse of Science, Picador; 1st Picador USA Pbk. 
Ed edition, 1999 - The Sokal Hoax: The Sham That Shook the Academy, University of Nebraska Press, 2000 ISBN 0-8032-7995-7 - A House Built on Sand: Exposing Postmodernist Myths About Science, Oxford University Press, 2000 - Intellectual Impostures, Economist Books, 2003 - Dunbar, K., & Fugelsang, J. (2005). Causal thinking in science: How scientists and students interpret the unexpected. In M. E. Gorman, R. D. Tweney, D. Gooding & A. Kincannon (Eds.), Scientific and Technical Thinking (pp. 57-79). Mahwah, NJ: Lawrence Erlbaum Associates. - Oliver, J.E. (1991) Ch2. of The incomplete guide to the art of discovery. New York:NY, Columbia University Press. - Riccardo Pozzo (2004) The impact of Aristotelianism on modern philosophy. CUA Press. p.41. ISBN 0-8132-1347-9 - The ancient Egyptians observed that heliacal rising of a certain star, Sothis (Greek for Sopdet (Egyptian), known to the West as Sirius), marked the annual flooding of the Nile river. See Neugebauer, Otto (1969) , The Exact Sciences in Antiquity (2 ed.), Dover Publications, ISBN 978-0-486-22332-2, p.82, and also the 1911 Britannica, "Egypt". - The Rhind papyrus lists practical examples in arithmetic and geometry —1911 Britannica, "Egypt". - The Ebers papyrus lists some of the 'mysteries of the physician', as cited in the 1911 Britannica, "Egypt" - R. L. Verma (1969). Al-Hazen: father of modern optics. - Niccolò Leoniceno (1509), De Plinii et aliorum erroribus liber apud Ferrara, as cited by Sanches, Limbrick & Thomson 1988, p. 13 - 'I have sometimes seen a verbose quibbler attempting to persuade some ignorant person that white was black; to which the latter replied, "I do not understand your reasoning, since I have not studied as much as you have; yet I honestly believe that white differs from black. But pray go on refuting me for just as long as you like." '— Sanches, Limbrick & Thomson 1988, p. 276 - Sanches, Limbrick & Thomson 1988, p. 278. - Bacon, Francis Novum Organum (The New Organon), 1620. Bacon's work described many of the accepted principles, underscoring the importance of empirical results, data gathering and experiment. Encyclopaedia Britannica (1911), "Bacon, Francis" states: [In Novum Organum, we ] "proceed to apply what is perhaps the most valuable part of the Baconian method, the process of exclusion or rejection. This elimination of the non-essential, ..., is the most important of Bacon's contributions to the logic of induction, and that in which, as he repeatedly says, his method differs from all previous philosophies." - "John Stuart Mill (Stanford Encyclopedia of Philosophy)". plato.stanford.edu. Retrieved 2009-07-31. - Gauch 2003, pp. 52–53 - George Sampson (1970). The concise Cambridge history of English literature. Cambridge University Press. p.174. ISBN 0-521-09581-6 - Logik der Forschung, new appendices *XVII–*XIX (not yet available in the English edition Logic of scientific discovery) - Logic of Scientific discovery, p. 20 - Karl Popper: On the non-existence of scientific method. Realism and the Aim of Science (1983) - Karl Popper: Science: Conjectures and Refutations. Conjectures and Refuations, section VII - Karl Popper: On knowledge. In search of a better world, section II - "The historian ... requires a very broad definition of "science" — one that ... will help us to understand the modern scientific enterprise. We need to be broad and inclusive, rather than narrow and exclusive ... and we should expect that the farther back we go [in time] the broader we will need to be." 
— David Pingree (1992), "Hellenophilia versus the History of Science" Isis 83 554-63, as cited on p.3, David C. Lindberg (2007), The beginnings of Western science: the European Scientific tradition in philosophical, religious, and institutional context, Second ed. Chicago: Univ. of Chicago Press ISBN 978-0-226-48205-7 - "When we are working intensively, we feel keenly the progress of our work; we are elated when our progress is rapid, we are depressed when it is slow." — the mathematician Pólya 1957, p. 131 in the section on 'Modern heuristic'. - "Philosophy [i.e., physics] is written in this grand book--I mean the universe--which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth." —Galileo Galilei, Il Saggiatore (The Assayer, 1623), as translated by Stillman Drake (1957), Discoveries and Opinions of Galileo pp. 237-8, as quoted by di Francia 1981, p. 10. - Pólya 1957 2nd ed. - George Pólya (1954), Mathematics and Plausible Reasoning Volume I: Induction and Analogy in Mathematics, - George Pólya (1954), Mathematics and Plausible Reasoning Volume II: Patterns of Plausible Reasoning. - Pólya 1957, p. 142 - Pólya 1957, p. 144 - Mackay 1991 p.100 - See the development, by generations of mathematicians, of Euler's formula for polyhedra as documented by Lakatos, Imre (1976), Proofs and refutations, Cambridge: Cambridge University Press, ISBN 0-521-29038-4 - Born, Max (1949), Natural Philosophy of Cause and Chance, Peter Smith, also published by Dover, 1964. From the Waynflete Lectures, 1948. On the web. N.B.: the web version does not have the 3 addenda by Born, 1950, 1964, in which he notes that all knowledge is subjective. Born then proposes a solution in Appendix 3 (1964) - Brody, Thomas A. (1993), The Philosophy Behind Physics, Springer Verlag, ISBN 0-387-55914-0. (Luis De La Peña and Peter E. Hodgson, eds.) - Bruno, Leonard C. (1989), The Landmarks of Science, ISBN 0-8160-2137-6 - Bynum, W.F.; Porter, Roy (2005), Oxford Dictionary of Scientific Quotations, Oxford, ISBN 0-19-858409-1. - di Francia, G. Toraldo (1981), The Investigation of the Physical World, Cambridge University Press, ISBN 0-521-29925-X. - Einstein, Albert; Infeld, Leopold (1938), The Evolution of Physics: from early concepts to relativity and quanta, New York: Simon and Schuster, ISBN 0-671-20156-5 - Feynman, Richard (1965), The Character of Physical Law, Cambridge: M.I.T. Press, ISBN 0-262-56003-8. - Fleck, Ludwik (1979), Genesis and Development of a Scientific Fact, Univ. of Chicago, ISBN 0-226-25325-2. (written in German, 1935, Entstehung und Entwickelung einer wissenschaftlichen Tatsache: Einführung in die Lehre vom Denkstil und Denkkollectiv) English translation, 1979 - Galileo (1638), Two New Sciences, Leiden: Lodewijk Elzevir, ISBN 0-486-60099-8 Translated from Italian to English in 1914 by Henry Crew and Alfonso de Salvio. Introduction by Antonio Favaro. xxv+300 pages, index. New York: Macmillan, with later reprintings by Dover. - Gauch, Hugh G., Jr. (2003), Scientific Method in Practice, Cambridge University Press, ISBN 0-521-01708-4 435 pages - Glen, William (ed.) 
http://en.wikipedia.org/wiki/Scientific_research
A sphere (from Greek σφαῖρα — sphaira, "globe, ball") is a perfectly round geometrical object in three-dimensional space, such as the shape of a round ball. Like a circle, which in geometric contexts is two-dimensional, a sphere is the set of points that are all at the same distance r from a given point in space. This distance r is known as the radius of the sphere, and the given point is known as the center of the sphere. The maximum straight distance through the sphere, which passes through the center and is thus twice the radius, is known as the diameter. In mathematics, a distinction is made between the sphere (a two-dimensional closed surface embedded in three-dimensional Euclidean space) and the ball, a three-dimensional shape that includes the interior of a sphere.

Volume of a sphere

[Figure: a sphere with its circumscribed cylinder.]

In three dimensions, the volume inside a sphere (that is, the volume of a ball) is

V = (4/3)πr³,

where r is the radius of the sphere and π is the constant pi. This formula was first derived by Archimedes, who showed that the volume of a sphere is 2/3 that of its circumscribed cylinder. (This assertion follows from Cavalieri's principle.) In modern mathematics, the formula can be derived using integral calculus, i.e. disk integration: summing the volumes of an infinite number of circular disks of infinitesimally small thickness stacked side by side and centered along the x axis, from x = 0 where the disk has radius r (i.e. y = r) to x = r where the disk has radius 0 (i.e. y = 0).

At any given x, the incremental volume δV is given by the product of the cross-sectional area of the disk at x and its thickness δx:

δV ≈ πy²·δx.

The total volume is the summation of all incremental volumes:

V ≈ Σ πy²·δx.

In the limit as δx approaches zero this becomes the integral

V = ∫_0^r πy² dx.

At any given x, a right-angled triangle connects x, y and r to the origin, hence it follows from the Pythagorean theorem that

y² = r² − x².

Thus, substituting y with a function of x gives

V = ∫_0^r π(r² − x²) dx.

This can now be evaluated as follows:

V = π[r²x − x³/3]_0^r = π(r³ − r³/3) = (2/3)πr³.

This is the volume of the half of the sphere lying between x = 0 and x = r, so doubling it gives the volume of the sphere:

V = (4/3)πr³.

Alternatively, this formula is found using spherical coordinates, with volume element dV = r′² sin θ dr′ dθ dφ (writing r′ for the radial coordinate).

In higher dimensions, the analogues of the sphere and the ball are the hypersphere (or n-sphere) and the n-ball; general recursive formulas exist for the volume of an n-ball. For most practical purposes, the volume of a sphere inscribed in a cube can be approximated as 52.4% of the volume of the cube, since π/6 ≈ 0.5236. For example, since a cube with edge length 1 m has a volume of 1 m³, a sphere with diameter 1 m has a volume of about 0.524 m³.

Surface area of a sphere

The surface area of a sphere of radius r is given by the formula

A = 4πr².

This formula was first derived by Archimedes, based upon the fact that the projection onto the lateral surface of a circumscribed cylinder (i.e. the Lambert cylindrical equal-area projection) is area-preserving. It is also the derivative of the formula for the volume with respect to r, because the total volume of a sphere of radius r can be thought of as the summation of the surface areas of an infinite number of spherical shells of infinitesimal thickness stacked concentrically inside one another from radius 0 to radius r. At infinitesimal thickness the discrepancy between the inner and outer surface area of any given shell is infinitesimal, and the elemental volume at radius r is simply the product of the surface area at radius r and the infinitesimal thickness.
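Before continuing with the spherical-shell argument, here is a quick numerical check of the volume and surface-area formulas above, written as a small Python sketch; the radius value and helper names are illustrative choices, not part of the original article.

```python
import math

def sphere_volume(r: float) -> float:
    """Volume of a ball of radius r: V = (4/3) * pi * r**3."""
    return (4.0 / 3.0) * math.pi * r ** 3

def sphere_surface_area(r: float) -> float:
    """Surface area of a sphere of radius r: A = 4 * pi * r**2."""
    return 4.0 * math.pi * r ** 2

r = 0.5  # a sphere of diameter 1 m, inscribed in a cube of edge 1 m
v = sphere_volume(r)
print(f"volume = {v:.4f} m^3")                 # ~0.5236
print(f"fraction of the unit cube = {v:.1%}")  # ~52.4%, i.e. pi/6
print(f"pi/6 = {math.pi / 6:.4f}")

# The surface area is the derivative of the volume with respect to r:
h = 1e-7
print(sphere_surface_area(r), (sphere_volume(r + h) - sphere_volume(r)) / h)
```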
At any given radius r, the incremental volume δV is given by the product of the surface area at radius r, A(r), and the thickness of a shell, δr:

δV ≈ A(r)·δr.

The total volume is the summation of all shell volumes:

V ≈ Σ A(r)·δr.

In the limit as δr approaches zero this becomes

V = ∫_0^r A(r) dr.

Since we have already proven what the volume is, we can substitute V = (4/3)πr³:

(4/3)πr³ = ∫_0^r A(r) dr.

Differentiating both sides of this equation with respect to r yields A as a function of r:

4πr² = A(r),

which is generally abbreviated as A = 4πr².

Alternatively, the area element on the sphere is given in spherical coordinates by dA = r² sin θ dθ dφ; with Cartesian coordinates, the area element is dA = r/√(r² − x² − y²) dx dy. More generally, see area element. The total area can thus be obtained by integration:

A = ∫_0^{2π} ∫_0^{π} r² sin θ dθ dφ = 4πr².

Equations in ℝ³

In analytic geometry, a sphere with centre (x₀, y₀, z₀) and radius r is the locus of all points (x, y, z) such that

(x − x₀)² + (y − y₀)² + (z − z₀)² = r².

The points on the sphere with radius r can be parameterized via

x = x₀ + r sin θ cos φ,
y = y₀ + r sin θ sin φ,
z = z₀ + r cos θ,    0 ≤ θ ≤ π, 0 ≤ φ < 2π

(see also trigonometric functions and spherical coordinates). A sphere of any radius centred at zero is an integral surface of the differential form

x dx + y dy + z dz = 0.

This equation reflects the fact that the position and velocity vectors of a point traveling on the sphere are always orthogonal to each other.

[Image: one of the most accurate human-made spheres, refracting an image of Einstein in the background.] This sphere was a fused quartz gyroscope for the Gravity Probe B experiment, and differs in shape from a perfect sphere by no more than 40 atoms (less than 10 nanometers) of thickness. It was announced on 1 July 2008 that Australian scientists had created even more perfect spheres, accurate to 0.3 nanometers, as part of an international hunt to find a new global standard kilogram.

The sphere has the smallest surface area among all surfaces enclosing a given volume, and it encloses the largest volume among all closed surfaces with a given surface area. For this reason, the sphere appears in nature: for instance, bubbles and small water drops are roughly spherical, because surface tension locally minimizes surface area.

The surface area in relation to the mass of a sphere is called the specific surface area. From the above stated equations it can be expressed as

SSA = A/(ρV) = 3/(ρr),

where ρ is the density, the ratio of mass to volume.

A sphere can also be defined as the surface formed by rotating a circle about any diameter. If the circle is replaced by an ellipse and rotated about the major axis, the shape becomes a prolate spheroid; rotated about the minor axis, an oblate spheroid.

Pairs of points on a sphere that lie on a straight line through its center are called antipodal points. A great circle is a circle on the sphere that has the same center and radius as the sphere, and consequently divides it into two equal parts. The shortest path between two distinct non-antipodal points on the surface, measured along the surface, lies on the unique great circle passing through the two points. Equipped with the great-circle distance, a great circle becomes the Riemannian circle.

If a particular point on a sphere is (arbitrarily) designated as its north pole, then the corresponding antipodal point is called the south pole, and the equator is the great circle that is equidistant from them. Great circles through the two poles are called lines (or meridians) of longitude, and the line connecting the two poles is called the axis of rotation. Circles on the sphere that are parallel to the equator are lines of latitude.
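As a small illustration of the equations above, the following Python sketch (the center, radius and sample angles are arbitrary choices) generates points from the spherical-coordinate parameterization and verifies that they satisfy the implicit equation of the sphere.

```python
import math
import random

def sphere_point(x0, y0, z0, r, theta, phi):
    """Point on the sphere of radius r centered at (x0, y0, z0),
    from the spherical-coordinate parameterization above."""
    return (x0 + r * math.sin(theta) * math.cos(phi),
            y0 + r * math.sin(theta) * math.sin(phi),
            z0 + r * math.cos(theta))

center = (1.0, -2.0, 0.5)   # arbitrary center
radius = 3.0                # arbitrary radius

for _ in range(5):
    theta = random.uniform(0.0, math.pi)
    phi = random.uniform(0.0, 2.0 * math.pi)
    x, y, z = sphere_point(*center, radius, theta, phi)
    # Every generated point should satisfy (x-x0)^2 + (y-y0)^2 + (z-z0)^2 = r^2.
    residual = ((x - center[0]) ** 2 + (y - center[1]) ** 2
                + (z - center[2]) ** 2 - radius ** 2)
    print(f"theta={theta:.3f}  phi={phi:.3f}  residual={residual:+.2e}")
```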
This terminology is also used for astronomical bodies such as the planet Earth, even though it is not spherical and only approximately spheroidal (see geoid).

A sphere is divided into two equal "hemispheres" by any plane that passes through its center. If two intersecting planes pass through its center, then they will subdivide the sphere into four lunes, or biangles, the vertices of which all coincide with the antipodal points lying on the line of intersection of the planes. The antipodal quotient of the sphere is the surface called the real projective plane, which can also be thought of as the northern hemisphere with antipodal points of the equator identified. The round hemisphere is conjectured to be the optimal (least area) filling of the Riemannian circle. If a plane does not pass through the sphere's center, its intersection with the sphere is called a spheric section.

Generalization to other dimensions

Spheres can be generalized to spaces of any dimension. For any natural number n, an "n-sphere," often written as Sn, is the set of points in (n + 1)-dimensional Euclidean space which are at a fixed distance r from a central point of that space, where r is, as before, a positive real number. In particular:
- a 0-sphere is a pair of endpoints of an interval (−r, r) of the real line
- a 1-sphere is a circle of radius r
- a 2-sphere is an ordinary sphere
- a 3-sphere is a sphere in 4-dimensional Euclidean space.
Spheres for n > 2 are sometimes called hyperspheres. The n-sphere of unit radius centered at the origin is denoted Sn and is often referred to as "the" n-sphere. Note that the ordinary sphere is a 2-sphere, because it is a 2-dimensional surface (which is embedded in 3-dimensional space).

The surface area of the (n − 1)-sphere of radius 1 is

2π^(n/2) / Γ(n/2),

where Γ(z) is Euler's Gamma function. Another expression for the surface area, in terms of double factorials, is (2π)^(n/2) / (n − 2)!! for n even and 2(2π)^((n−1)/2) / (n − 2)!! for n odd. The volume of the n-ball of radius r is the surface area of its bounding sphere times r/n, or π^(n/2) rⁿ / Γ(n/2 + 1).

Generalization to metric spaces

More generally, in a metric space (E, d), the sphere of center x and radius r > 0 is the set of points y such that d(x, y) = r. If the center is a distinguished point considered as the origin of E, as in a normed space, it is not mentioned in the definition and notation. The same applies for the radius if it is taken to equal one, as in the case of a unit sphere.

In contrast to a ball, a sphere may be an empty set, even for a large radius. For example, in Zⁿ with the Euclidean metric, a sphere of radius r is nonempty only if r² can be written as a sum of n squares of integers.

In topology, an n-sphere is defined as a space homeomorphic to the boundary of an (n + 1)-ball; thus, it is homeomorphic to the Euclidean n-sphere, but perhaps lacking its metric. The n-sphere is denoted Sn. It is an example of a compact topological manifold without boundary. A sphere need not be smooth; if it is smooth, it need not be diffeomorphic to the Euclidean sphere. The Heine–Borel theorem implies that a Euclidean n-sphere is compact: the sphere is the inverse image of a one-point set under the continuous function ||x||, and is therefore closed; Sn is also bounded; therefore it is compact.

Spherical geometry

The basic elements of Euclidean plane geometry are points and lines. On the sphere, points are defined in the usual sense, but the analogue of a "line" may not be immediately apparent. Measuring by arc length, one finds that the shortest path lying entirely on the sphere and connecting two points is a segment of the great circle containing the points; see geodesic.
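To make the geodesic remark concrete, here is a minimal Python sketch (the function name and test points are illustrative) that computes the great-circle distance between two points given in the colatitude/longitude angles (θ, φ) used in the parameterization above.

```python
import math

def great_circle_distance(r, theta1, phi1, theta2, phi2):
    """Length of the shorter great-circle arc between two points on a sphere
    of radius r, each given by colatitude theta and longitude phi."""
    cos_angle = (math.cos(theta1) * math.cos(theta2)
                 + math.sin(theta1) * math.sin(theta2) * math.cos(phi1 - phi2))
    # Clamp against floating-point round-off before taking the arccosine.
    central_angle = math.acos(max(-1.0, min(1.0, cos_angle)))
    return r * central_angle

r = 1.0
# North pole to south pole: half the circumference, pi * r.
print(great_circle_distance(r, 0.0, 0.0, math.pi, 0.0), math.pi * r)
# North pole to a point on the equator: a quarter of the circumference.
print(great_circle_distance(r, 0.0, 0.0, math.pi / 2, 1.0), math.pi * r / 2)
```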
Many theorems from classical geometry hold true for this spherical geometry as well, but many do not (see parallel postulate). In spherical trigonometry, angles are defined between great circles, so spherical trigonometry differs from ordinary trigonometry in many respects. For example, the sum of the interior angles of a spherical triangle exceeds 180 degrees. Also, any two similar spherical triangles are congruent.

Eleven properties of the sphere

In their book Geometry and the Imagination, David Hilbert and Stephan Cohn-Vossen describe eleven properties of the sphere and discuss whether these properties uniquely determine the sphere. Several of the properties also hold for the plane, which can be thought of as a sphere with infinite radius. These properties are:
- The points on the sphere are all the same distance from a fixed point. Also, the ratio of the distances of its points from two fixed points is constant.
- The first part is the usual definition of the sphere and determines it uniquely. The second part can be easily deduced and follows a similar result of Apollonius of Perga for the circle. This second part also holds for the plane.
- The contours and plane sections of the sphere are circles.
- This property defines the sphere uniquely.
- The sphere has constant width and constant girth.
- The width of a surface is the distance between pairs of parallel tangent planes. There are numerous other closed convex surfaces which have constant width, for example the Meissner body. The girth of a surface is the circumference of the boundary of its orthogonal projection onto a plane. It can be proved that each of these properties implies the other.

[Figure: a normal vector to a sphere, a normal plane and its normal section. The curvature of the curve of intersection is the sectional curvature. For the sphere, each normal section through a given point is a circle of the same radius as the sphere, so every point of the sphere is an umbilical point.]

- All points of a sphere are umbilics.
- At any point on a surface we can find a normal direction which is at right angles to the surface; for the sphere these are the lines radiating out from the center of the sphere. The intersection of a plane containing the normal with the surface forms a curve called a normal section, and the curvature of this curve is the normal curvature. For most points on most surfaces, different sections will have different curvatures; the maximum and minimum values of these are called the principal curvatures. It can be proved that any closed surface has at least four points called umbilical points. At an umbilic all the sectional curvatures are equal; in particular the principal curvatures are equal. Umbilical points can be thought of as the points where the surface is closely approximated by a sphere.
- For the sphere the curvatures of all normal sections are equal, so every point is an umbilic. The sphere and plane are the only surfaces with this property.
- The sphere does not have a surface of centers.
- For a given normal section there is a circle whose curvature equals the sectional curvature, which is tangent to the surface, and whose center lies on the normal line. Take the two centers corresponding to the maximum and minimum sectional curvatures: these are called the focal points, and the set of all such centers forms the focal surface.
- For most surfaces the focal surface forms two sheets, each of which is a surface, and which come together at umbilical points.
There are a number of special cases. For channel surfaces one sheet forms a curve and the other sheet is a surface; for cones, cylinders, tori and cyclides both sheets form curves. For the sphere the center of every osculating circle is at the center of the sphere, and the focal surface forms a single point. This is a unique property of the sphere.
- All geodesics of the sphere are closed curves.
- Geodesics are curves on a surface which give the shortest distance between two points. They are a generalization of the concept of a straight line in the plane. For the sphere the geodesics are great circles. There are many other surfaces with this property.
- Of all the solids having a given volume, the sphere is the one with the smallest surface area; of all solids having a given surface area, the sphere is the one having the greatest volume.
- This follows from the isoperimetric inequality. These properties define the sphere uniquely. They can be seen by observing soap bubbles: a soap bubble encloses a fixed volume, and due to surface tension its surface area is minimal for that volume. This is why a free-floating soap bubble approximates a sphere (though external forces such as gravity will distort the bubble's shape slightly).
- The sphere has the smallest total mean curvature among all convex solids with a given surface area.
- The mean curvature is the average of the two principal curvatures, and since these are constant at all points of the sphere, so is the mean curvature.
- The sphere has constant mean curvature.
- The sphere is the only embedded surface without boundary or singularities with constant positive mean curvature. There are other immersed surfaces with constant mean curvature. The minimal surfaces have zero mean curvature.
- The sphere has constant positive Gaussian curvature.
- Gaussian curvature is the product of the two principal curvatures. It is an intrinsic property which can be determined by measuring lengths and angles and does not depend on the way the surface is embedded in space. Hence, bending a surface will not alter the Gaussian curvature, and other surfaces with constant positive Gaussian curvature can be obtained by cutting a small slit in the sphere and bending it. All these other surfaces would have boundaries; the sphere is the only surface without boundary with constant positive Gaussian curvature. The pseudosphere is an example of a surface with constant negative Gaussian curvature.
- The sphere is transformed into itself by a three-parameter family of rigid motions.
- Consider a unit sphere placed at the origin: a rotation around the x, y or z axis will map the sphere onto itself; indeed any rotation about a line through the origin can be expressed as a combination of rotations around the three coordinate axes (see Euler angles). Thus there is a three-parameter family of rotations which transform the sphere onto itself; this is the rotation group SO(3). The plane is the only other surface with a three-parameter family of transformations (translations along the x and y axes and rotations around the origin). Circular cylinders are the only surfaces with two-parameter families of rigid motions, and the surfaces of revolution and helicoids are the only surfaces with a one-parameter family.

Cubes in relation to spheres

For every sphere there are multiple cuboids that may be inscribed within the sphere. The largest cuboid which can be inscribed within a sphere is a cube.
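As a small illustration of the rotation-invariance property discussed above, the following Python sketch (the rotation angle and test point are arbitrary) applies a rotation about the z axis to a point of the unit sphere and checks that the image still lies on the sphere.

```python
import math

def rotate_about_z(point, angle):
    """Rotate a 3D point about the z axis by the given angle in radians."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

def norm(point):
    return math.sqrt(sum(c * c for c in point))

# A point on the unit sphere, built from colatitude 1.0 and longitude 2.0.
theta, phi = 1.0, 2.0
p = (math.sin(theta) * math.cos(phi),
     math.sin(theta) * math.sin(phi),
     math.cos(theta))

q = rotate_about_z(p, angle=0.7)
print(norm(p), norm(q))  # both ~1.0: the rotation maps the sphere onto itself
```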
References
- ^ σφαῖρα, Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus.
- ^ a b E. J. Borowski and J. M. Borwein, Collins Dictionary of Mathematics, pages 141, 149, ISBN 0-00-434347-6.
- ^ "Roundest objects in the world created", New Scientist (Technology).
- ^ Weisstein, Eric W., "Spheric section", MathWorld.
- ^ Hilbert, David; Cohn-Vossen, Stephan (1952), Geometry and the Imagination, 2nd ed., Chelsea, ISBN 0-8284-1087-9.
- William Dunham, The Mathematical Universe: An Alphabetical Journey Through the Great Proofs, Problems and Personalities, pages 28, 226, ISBN 0-471-17661-3.
http://www.algebra.com/algebra/homework/formulas/Sphere.wikipedia
What is DENSITY?

The mass density, or simply density, of a material is its mass per unit volume. The symbol most often used for density is ρ (the lower-case Greek letter rho). Mathematically, density is defined as mass divided by volume:

ρ = m / V,

where ρ is the density, m is the mass, and V is the volume. (The ISO 31 standard also calls this quantity "volumic mass.") Qualitatively, density is a measure of the relative "heaviness" of objects of a fixed volume, or of how tightly matter is packed into a given amount of space: denser objects contain less empty space. The average density of an object equals its total mass divided by its total volume.

Density is a physical property of matter: each pure element and compound has a characteristic density associated with it, which is why the concept is so useful in fluid mechanics, weather, geology, materials science, engineering, and other fields of physics. It is also a more useful way than weight to characterize a material, because weight depends on the local strength of gravity while density does not.

The SI unit of density is the kilogram per cubic metre (kg/m³). Other common units are grams per cubic centimetre (g/cm³, equivalent to grams per millilitre), kilograms per litre (kg/L), pounds per cubic foot (lb/ft³), and pounds per gallon (lb/gallon).

The density of water depends on its temperature. Pure water at 39 °F (4 °C) has a density of 1 g/cm³, which is why densities are often quoted relative to water: the ratio of a substance's density to that of water is its specific gravity. Specific gravity is at times so distinctive that it can be used to identify minerals without laboratory or optical techniques. The density of air, the mass per unit volume of atmospheric gases (also denoted ρ), likewise depends on temperature and pressure.

Density explains why a plank of wood floats on water while a much smaller piece of metal sinks, and why ships made of iron and steel can nevertheless float: what matters is the average density of the whole object compared with that of water. Related uses of the term include body density (the mass of a human body relative to its volume, used to estimate the proportion of body fat), population density (a population per unit area or unit volume, a key geographic term frequently applied to living organisms and particularly to humans), the everyday sense of the degree of compactness of a substance, and the amount of information stored on a medium such as a disk or tape.
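A minimal Python sketch of the definition above; the sample masses and volumes are illustrative, not measured data.

```python
WATER_DENSITY = 1000.0  # kg/m^3, pure water near 4 degrees Celsius

def density(mass_kg: float, volume_m3: float) -> float:
    """Density rho = m / V, in kg/m^3."""
    if volume_m3 <= 0:
        raise ValueError("volume must be positive")
    return mass_kg / volume_m3

def floats_in_water(mass_kg: float, volume_m3: float) -> bool:
    """An object floats if its average density is below that of water."""
    return density(mass_kg, volume_m3) < WATER_DENSITY

# A 2 kg wooden plank occupying 0.004 m^3 (average density 500 kg/m^3) floats;
# a 2 kg steel block occupying 0.00025 m^3 (8000 kg/m^3) sinks.
print(floats_in_water(2.0, 0.004))    # True
print(floats_in_water(2.0, 0.00025))  # False
```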
http://mrwhatis.com/density.html
In this lecture, we'll derive the velocity distribution for two examples of laminar flow. First we'll consider a wide river, by which we mean wide compared with its depth (which we take to be uniform), and we ignore the more complicated flow pattern near the banks. Our second example is smooth flow down a circular pipe.

For the wide river, the water flow can be thought of as being in horizontal "sheets", so all the water at the same depth is moving at the same velocity. As mentioned in the last lecture, the flow can be pictured as being like a pile of printer paper left on a sloping desk: it all slides down; assuming the bottom sheet stays stuck to the desk, each other sheet moves downhill a little faster than the sheet immediately beneath it. For flow down a circular pipe, the laminar "sheets" are hollow tubes centered on the line down the middle of the pipe. The fastest flowing fluid is right at that central line.

For both river and tube flow, the drag force between adjacent small elements of neighboring sheets is given by the force per unit area F/A = η(dv/dz), η being the coefficient of viscosity, where now the z-direction means the direction perpendicular to the small element of sheet.

For a river flowing steadily down a gentle incline under gravity, we'll assume all the streamlines point in the same direction, the river is wide and of uniform depth, and the depth is much smaller than the width. This means almost all the flow is well away from the edges (the river banks), so we'll ignore the slowing down there, and just analyze the flow rate per meter of river width, taking it to be uniform across the river.

The simplest basic question is: given the slope of the land and the depth of the river, what is the total flow rate? To answer, we need to find the speed of flow v(z) as a function of depth (we know the water in contact with the river bed isn't flowing at all), and then add the flow contributions from the different depths (this will be an integral) to find the total flow. The function v(z) is called the "velocity profile". We'll prove that it looks like the top half of a parabola: zero at the bed, steepest there, and flattening out toward the surface. (In a figure drawn to scale for a smoothly flowing river, the downhill ground slope would be imperceptible.)

But how do we begin to calculate v(z)? Recall that (in an earlier lecture) to find how hydrostatic pressure varied with depth, we mentally separated a cylinder of fluid from its surroundings, and applied Newton's Laws: it wasn't moving, so we figured its weight had to be balanced by the sum of the pressure forces it experienced from the rest of the fluid surrounding it. In fact, its weight was balanced by the difference between the pressure underneath and that on top.

Taking a cue from that, here we mentally isolate a thin layer of the river, like one of those sheets of printer paper, lying between heights z and z + Δz above the bed. This layer is moving, but at a steady speed, so the total force on it will still be zero. Like the whole river, this layer isn't quite horizontal: its weight has a small but nonzero component dragging it downhill, and this weight component is balanced by the difference between the viscous force from the faster water above and that from the slower water below. Bear in mind that the layer is tilted at only a tiny angle to the horizontal.

Obviously, for the forces to balance, the backward drag on the thin layer from the slower moving water beneath has to be stronger than the forward drag from the faster water above, so the rate of change of speed with height above the river bed decreases on going up from the bed.
Let us find the total force (which must be zero) on one square meter of the thin layer of water between heights z and z + Δz.

First, gravity: if the river is flowing downhill at some small angle θ, this square meter of the layer (volume Δz, density ρ) experiences a gravitational force tugging it downstream equal to ρgΔz sin θ ≈ ρgθΔz (taking the small angle approximation, sin θ ≈ θ).

Next, the viscous drag forces: the square meter of layer experiences two viscous forces, one from the slower water below, equal to −η(dv/dz) evaluated at z, tending to slow it down, and one from the faster water above it, +η(dv/dz) evaluated at z + Δz, tending to speed it up. Gravity must balance out the difference between the two viscous forces:

ρgθΔz + η(dv/dz)|_(z+Δz) − η(dv/dz)|_z = 0.

We can already see from this equation that, unlike the fluid between the plates, v(z) can't possibly be linear in z: the equation would not balance if dv/dz were the same at z and z + Δz! Dividing throughout by Δz and by η,

ρgθ/η + [(dv/dz)|_(z+Δz) − (dv/dz)|_z] / Δz = 0.

Taking now the limit Δz → 0 and recalling the definition of the differential, we find the differential equation

d²v/dz² = −ρgθ/η.

The solution of this equation is easy:

v(z) = −(ρgθ/2η)z² + Cz + D,

with C, D constants of integration. Remember that the velocity v(z) is zero at the bottom of the river, z = 0, so the constant D must be zero, and can be dropped immediately. But we're not through: we haven't found C. To do that, we need to go to the top.

What happens to the thin layer of river water at the very top, the layer in contact with the air? Assuming there is negligible wind, there is essentially zero parallel-to-the-surface force from above. So the balance of forces equation for the top layer is just

ρgθΔz − η(dv/dz) = 0.

We can take this top layer to be as thin as we like, so let's look at what happens in the limit of extreme thinness, Δz → 0. The term ρgθΔz then goes to zero, so the other term must as well. Since η is constant, this means

dv/dz = 0 at the river surface, z = h (h being the depth of the river).

So the velocity profile function has zero slope at the river surface. With this new information, we can finally fix the arbitrary integration constant C. Now the velocity profile is v(z) = −(ρgθ/2η)z² + Cz, and requiring dv/dz = 0 at z = h gives C = ρgθh/η. Putting this value for C into v(z) we have the final result:

v(z) = (ρgθ/η)(hz − z²/2).

This velocity profile v(z) is half the top part of a parabola.

Knowing the velocity profile v(z) enables us to compute the total flow of water in the river. As explained earlier, we're assuming a wide river having uniform depth, ignoring the slowdown near the edges of the river, and taking the same v(z) all the way across. We'll calculate the flow across one meter of width of the river, so the total flow is our result multiplied by the river's width. The flow contribution from a single layer of thickness Δz at height z is v(z)Δz cubic meters per second across one meter of width. The total flow is the sum over all layers. In the limit of many infinitely thin layers, that is, Δz → 0, the sum becomes an integral, and the total flow rate is

∫_0^h (ρgθ/η)(hz − z²/2) dz = (ρgθ/η)(h³/2 − h³/6) = ρgθh³/3η

in cubic meters per second per meter of width of the river.

It is worth thinking about what this result means physically. The interesting part is that the flow is proportional to h³, where h is the depth of the river. So, if there's a storm and the river is twice as deep as normal, and flowing steadily, the flow rate will be eight times normal.

Exercise: plot on a graph the velocity profiles for two rivers, one of depth h and one of depth 2h, having the same values of ρ, θ and η. What is the ratio of the surface velocities of the two rivers? Suppose that one meter below the surface of one of the rivers, the water is flowing 0.5 m·s⁻¹ slower than it is flowing at the surface. Would that also be true of the other river?

The flow rate for smooth flow through a pipe of circular cross-section can be found by essentially the same method.
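Before turning to the pipe, here is a small Python sketch of the river result; the parameter values are arbitrary illustrations (a rather viscous fluid, so that laminar flow is plausible), not data from the lecture. It evaluates the parabolic profile v(z) = (ρgθ/η)(hz − z²/2) and the flow rate ρgθh³/3η, and confirms the factor-of-eight jump when the depth doubles.

```python
RHO = 1000.0   # fluid density, kg/m^3
G = 9.8        # gravitational acceleration, m/s^2
ETA = 1.0      # viscosity, Pa.s (a fairly viscous fluid, illustrative only)
THETA = 1e-4   # downhill slope angle, radians

def velocity(z: float, h: float) -> float:
    """Laminar velocity profile v(z) = (rho*g*theta/eta) * (h*z - z**2/2)."""
    return RHO * G * THETA / ETA * (h * z - z ** 2 / 2.0)

def flow_per_meter_width(h: float) -> float:
    """Total flow per meter of width: rho*g*theta*h**3 / (3*eta), in m^3/s."""
    return RHO * G * THETA * h ** 3 / (3.0 * ETA)

h = 2.0  # depth in meters
print(velocity(h, h))          # surface speed, the maximum of the profile
print(flow_per_meter_width(h))
print(flow_per_meter_width(2 * h) / flow_per_meter_width(h))  # 8.0
```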
(This was the flow pattern analyzed by Poiseuille, and the resulting formula is named for him.)

In the pipe, the flow is fastest in the middle, and the water in contact with the pipe wall (like that at the river bed) doesn't flow at all. The river's flow pattern was most naturally analyzed by thinking of flat layers of water, all the water in one layer having the same speed. What would be the corresponding picture for flow down a pipe? Here all the fluid at the same distance from the center moves down the pipe at the same speed: instead of flat layers of fluid, we have concentric hollow cylinders of fluid, one inside the next, with a tiny rod of the fastest fluid right at the center. This is again laminar flow, even though this time the "sheets" are rolled into tubes. On a cross-section of the pipe, one of these cylinders of fluid appears as a circular band: all the fluid between r and r + Δr from the central line.

Each of these hollow cylinders of water is pushed along the pipe by the pressure difference between the ends of the pipe. Each feels viscous forces from its two neighboring cylinders: the next bigger one, which surrounds it, tending to slow it down, but the next smaller one (inside it) tending to speed it up. Writing down the differential equation is a little more tricky than for the river, because we must take into account that the two surfaces of the hollow cylinder (inside and outside) have different areas, 2πr and 2π(r + Δr) per unit length. It turns out that the velocity profile is again parabolic: the details are given below.

Suppose the pipe has radius a, length L and pressure drop P between its ends. Let us focus on the fluid in the cylinder between r and r + Δr from the line down the middle, and we'll take the cylinder to have unit length, for convenience. The pressure force maintaining the fluid motion is the difference between pressure × area for the two ends of this one meter long hollow cylinder:

(P/L)·2πrΔr.

(We're assuming Δr ≪ r, since we'll be taking the limit Δr → 0, so the end area is 2πrΔr. The equality becomes exact in the limit.) This force exactly balances the difference between the outer surface viscous drag force from the slower surrounding fluid and the inner viscous force from the central faster-moving fluid, very similar to the situation in the previous analysis of river flow. Using the drag force per unit area η(dv/dr), and remembering that the inner and outer surfaces of the cylinder have slightly different areas, 2πr and 2π(r + Δr), and that the force from the faster fluid inside is positive (downstream) while that from the slower fluid outside is negative, the force equation is:

(P/L)·2πrΔr + 2πη[(r + Δr)(dv/dr)|_(r+Δr) − r(dv/dr)|_r] = 0,

which, dividing by 2πηΔr and taking the limit Δr → 0, remembering the definition of the differential (see the similar analysis above for the river), becomes

(d/dr)(r·dv/dr) = −(P/ηL)·r.

This can now be integrated to give

r·dv/dr = −(P/2ηL)·r² + C,

where C is a constant of integration. Dividing both sides by r and integrating again,

v(r) = −(P/4ηL)·r² + C·ln r + D.

The constant C must be zero, since physically the fluid velocity is finite at r = 0. The constant D is determined by the requirement that the fluid speed is zero where the fluid is in contact with the tube, at r = a. The fluid velocity is therefore

v(r) = (P/4ηL)·(a² − r²).

To find the total flow rate I down the pipe, we integrate over the flow in each hollow cylinder of water:

I = ∫_0^a v(r)·2πr dr = πPa⁴/8ηL

in cubic meters per second. Notice the flow rate goes as the fourth power of the radius, so doubling the radius results in a sixteen-fold increase in flow. That is why narrowing of arteries is so serious.

Thanks to Linda Fahlberg-Stojanovska for spotting a sign error in the earlier version.
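A short Python sketch of the pipe result, with illustrative parameter values only: it evaluates v(r) = (P/4ηL)(a² − r²) and the flow rate πPa⁴/8ηL, and checks the sixteen-fold increase when the radius doubles.

```python
import math

ETA = 1.0e-3  # viscosity, Pa.s (roughly water; illustrative only)
L = 1.0       # pipe length, m
P = 100.0     # pressure drop along the pipe, Pa

def velocity(r: float, a: float) -> float:
    """Poiseuille velocity profile v(r) = P/(4*eta*L) * (a**2 - r**2)."""
    return P / (4.0 * ETA * L) * (a ** 2 - r ** 2)

def flow_rate(a: float) -> float:
    """Total volume flow rate I = pi * P * a**4 / (8 * eta * L), in m^3/s."""
    return math.pi * P * a ** 4 / (8.0 * ETA * L)

a = 0.01  # pipe radius, m
print(velocity(0.0, a))                 # maximum speed, on the central line
print(flow_rate(a))
print(flow_rate(2 * a) / flow_rate(a))  # 16.0: doubling the radius
```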
http://galileo.phys.virginia.edu/classes/152.mf1i.spring02/RiverViscosity.htm
The Vietnam War was the legacy of France's failure to suppress nationalist forces in Indochina as it struggled to restore its colonial dominion after World War II. Led by Ho Chi Minh, a Communist-dominated revolutionary movement—the Viet Minh—waged a political and military struggle for Vietnamese independence that frustrated the efforts of the French and resulted ultimately in their ouster from the region. The U.S. Army's first encounters with Ho Chi Minh were brief and sympathetic. During World War II, Ho's anti-Japanese resistance fighters helped to rescue downed American pilots and furnished information on Japanese forces in Indochina. U.S. Army officers stood at Ho's side in August 1945 as he basked in the short-lived satisfaction of declaring Vietnam's independence. Five years later, however, in an international climate tense with ideological and military confrontation between Communist and non-Communist powers, Army advisers of the newly formed U.S. Military Assistance Advisory Group (MAAG), Indochina, were aiding France against the Viet Minh. With combat raging in Korea and mainland China recently fallen to the Communists, the war in Indochina now appeared to Americans as one more pressure point to be contained on a wide arc of Communist expansion in Asia. By underwriting French military efforts in Southeast Asia, the United States enabled France to sustain its economic recovery and to contribute, through the North Atlantic Treaty Organization (NATO), to the collective defense of western Europe. Provided with aircraft, artillery, tanks, vehicles, weapons, and other equipment and supplies—a small portion of which they distributed to an anti-Communist Vietnamese army they had organized—the French did not fail for want of equipment. Instead, they put American aid at the service of a flawed strategy that sought to defeat the elusive Viet Minh in set-piece battles, but neglected to cultivate the loyalty and support of the Vietnamese people. Too few in number to provide more than a veneer of security in most rural areas, the French were unable to suppress the guerrillas or to prevent the underground Communist shadow government from reappearing whenever French forces left one area to fight elsewhere. The battle of Dien Bien Phu epitomized the shortcomings of French strategy. Located near the Laotian border in a rugged valley of remote northwestern Vietnam, Dien Bien Phu was not a congenial place to fight. Far inland from coastal supply bases and with roads vulnerable to the Viet Minh, the base depended almost entirely on air support. The French, expecting the Viet Minh to invade Laos, occupied Dien Bien Phu in November 1953 in order to force a battle. Yet they had little to gain from an engagement. Victory at Dien Bien Phu would not have ended the war; even if defeated, the Viet Minh would have retired to their mountain redoubts. And no French victory at Dien Bien Phu would have reduced Communist control over large segments of the population. On the other hand, the French had much to lose, in manpower, equipment, and prestige. Their position was in a valley, surrounded by high ground that the Viet Minh quickly fortified. While bombarding the besieged garrison with artillery and mortars, the attackers tunneled closer to the French positions. Supply aircraft that successfully ran the gauntlet of intense antiaircraft fire risked destruction on the ground from Viet Minh artillery. Eventually, supplies and ammunition could be delivered to the defenders only by parachute drop.
As the situation became critical, France asked the United States to intervene. Believing that the French position was untenable and that even massive American air attacks using small nuclear bombs would be futile, General Matthew B. Ridgway, the Army Chief of Staff, helped to convince President Dwight D. Eisenhower not to aid them. Ridgway also opposed the use of U.S. ground forces, arguing that such an effort would severely strain the Army and possibly lead to a wider war in Asia. The fall of Dien Bien Phu on 7 May 1954, as peace negotiations were about to start in Geneva, hastened France's disengagement from Indochina. On 20 July, France and the Viet Minh agreed to end hostilities and to divide Vietnam temporarily into two zones at the 17th parallel. ( Map 47) In the North, the Viet Minh established a Communist government, with its capital at Hanoi. French forces withdrew to the South, and hundreds of thousands of civilians, most of whom were Roman Catholics, accompanied them. The question of unification was left to be decided by an election scheduled for 1956. The Emergence of South Vietnam As the Viet Minh consolidated control in the North, Ngo Dinh Diem, a Roman Catholic of mandarin background, sought to assert his authority over the chaotic conditions in the South in hopes of establishing an anti-Communist state. A onetime minister in the French colonial administration, Diem enjoyed a reputation for honesty. He had resigned his office in 1933 and had taken no part in the tumultuous events that swept over Vietnam after the war. Diem returned to Saigon in the summer of 1954 as premier with no political following except his family and a few Americans. His authority was challenged, first by the independent Hoa Hao and Cao Dai religious sects and then by the Binh Xuyen, an organization of gangsters that controlled Saigon's gambling dens and brothels and had strong influence with the police. Rallying an army, Diem defeated the sects and gained their grudging allegiance. Remnants of their forces, however, fled to the jungle to continue their resistance, and some, at a later date, became the nucleus of Communist guerrilla units. Diem was also challenged by members of his own army, where French influence persisted among the highest ranking officers. But he weathered the threat of an army coup, dispelling American doubts about his ability to survive in the jungle of Vietnamese politics. For the next few years, the United States commitment to defend South Vietnam's independence was synonymous with support for Diem. Americans now provided advice and support to the Army of the Republic of Vietnam (ARVN); at Diem's request, they replaced French advisers throughout his nation's military establishment. As the American role in South Vietnam was growing, U.S. defense policy was undergoing review. Officials in the Eisenhower administration believed that wars like those in Korea and Vietnam were too costly and ought to be avoided in the future. "Never again" was the rallying cry of those who opposed sending U.S. ground forces to fight a conventional war in Asia. Instead, the Eisenhower administration relied on the threat or use of massive nuclear retaliation to deter or, if necessary, to defeat the armies of the Soviet Union or Communist China. The New Look, as this policy was called, emphasized nuclear air power at the expense of conventional ground forces. 
If deterrence failed, planners envisioned the next war as a short, violent nuclear conflict of a few days' duration, conducted with forces in being. Ground forces were relegated to a minor role, and mobilization was regarded as an unnecessary luxury. In consequence, the Army's share of the defense budget decreased, the modernization of its forces was delayed, and its strength was reduced by 40 percent—from 1,404,598 in 1954 to 861,964 in 1956. A strategy dependent on one form of military power, the New Look was sharply criticized by soldiers and academics alike. Unless the United States was willing to risk destruction, critics argued, the threat of massive nuclear retaliation had little credibility. General Ridgway and his successor, General Maxwell D. Taylor, were vocal opponents. Both advocated balanced forces to enable the United States to cope realistically with a variety of military contingencies. The events of the late 1950's appeared to support their demand for flexibility. The United States intervened in Lebanon in 1958 to restore political stability there. That same year an American military show of force in the Straits of Taiwan helped to dampen tensions between Communist China and the Nationalist Chinese Government on Formosa. Both contingencies underlined the importance of avoiding any fixed concept of war. Advocates of the flexible response doctrine foresaw a meaningful role for the Army as part of a more credible deterrent and as a means of intervening, when necessary, in limited and small wars. They wished to strengthen both conventional and unconventional forces; to improve strategic and tactical mobility; and to maintain troops and equipment at forward bases, close to likely areas of conflict. They placed a premium on highly responsive command and control, to allow a close meshing of military actions with political goals. The same reformers were deeply interested in the conduct of brushfire wars, especially among the underdeveloped nations. In the so-called third world, competing cold war ideologies and festering nationalistic, religious, and social conflicts interacted with the disruptive forces of modernization to create the preconditions for open hostilities. Southeast Asia was one of several such areas identified by the Army. Here the United States' central concern was the threat of North Vietnamese and perhaps Chinese aggression against South Vietnam and other non-Communist states. The United States took the lead in forming a regional defense pact, the Southeast Asia Treaty Organization (SEATO), signaling its commitment to contain Communist encroachment in the region. Meanwhile the 342 American advisers of MAAG, Vietnam (which replaced MAAG, Indochina, in 1955), trained and organized Diem's fledgling army to resist an invasion from the North. Three MAAG chiefs—Lt. Gens. John W. O'Daniel, Samuel T. Williams, and Lionel C. McGarr—reorganized South Vietnam's light mobile infantry groups into infantry divisions, compatible in design and mission with U.S. defense plans. The South Vietnamese Army, with a strength of about 150,000, was equipped with standard Army equipment and given the mission of delaying the advance of any invasion force until the arrival of American reinforcements. The residual influence of the army's earlier French training, however, lingered in both leadership and tactics. The South Vietnamese had little or no practical experience in administration and the higher staff functions, from which the French had excluded them.
The MAAG's training and reorganization work was often interrupted by Diem's use of his army to conduct "pacification" campaigns to root out stay-behind Viet Minh cadre. Hence responsibility for most internal security was transferred to poorly trained and ill-equipped paramilitary forces, the Civil Guard and Self-Defense Corps, which numbered about 75,000. For the most part, the Viet Minh in the South avoided armed action and subscribed to a political action program in anticipation of Vietnam-wide elections in 1956, as stipulated by the Geneva Accords. But Diem, supported by the United States, refused to hold elections, claiming that undemocratic conditions in the North precluded a fair contest. (Some observers thought Ho Chi Minh sufficiently popular in the South to defeat Diem.) Buoyed by his own election as President in 1955 and by the adulation of his American supporters, Diem's political strength rose to its apex. While making some political and economic reforms, he pressed hard his attacks on political opponents and former Viet Minh, many of whom were not Communists at all but patriots who had joined the movement to fight for Vietnamese independence. By 1957 Diem's harsh measures had so weakened the Viet Minh that Communist leaders in the South feared for the movement's survival there. The southerners urged their colleagues in the North to sanction a new armed struggle in South Vietnam. For self-protection, some Viet Minh had fled to secret bases to hide and form small units. Others joined renegade elements of the former sect armies. From bases in the mangrove swamps of the Mekong Delta, in the Plain of Reeds near the Cambodian border, and in the jungle of War Zones C and D northwest of Saigon, the Communists began to rebuild their armed forces, to re-establish an underground political network, and to carry out propaganda, harassment, and terrorist activities. As reforms faltered and Diem became more dictatorial, the ranks of the rebels swelled with the politically disaffected. The Rise of the Viet Cong The insurgents, now called the Viet Cong, had organized several companies and a few battalions by 1959, the majority in the Delta and the provinces around Saigon. As Viet Cong military strength increased, attacks against the paramilitary forces, and occasionally against the South Vietnamese Army, became more frequent. Many were conducted to obtain equipment, arms, and ammunition, but all were hailed by the guerrillas as evidence of the government's inability to protect its citizens. Political agitation and military activity also quickened in the Central Highlands, where Viet Cong agents recruited among the Montagnard tribes. In 1959, after assessing conditions in the South, the leaders in Hanoi agreed to resume the armed struggle, giving it equal weight with political efforts to undermine Diem and reunify Vietnam. To attract the growing number of anti-Communists opposed to Diem, as well as to provide a democratic facade for administering the party's policies in areas controlled by the Viet Cong, Hanoi in December 1960 created the National Liberation Front of South Vietnam. The revival of guerrilla warfare in the South found the advisory group, the South Vietnamese Army, and Diem's government ill prepared to wage an effective campaign. In their efforts to train and strengthen Diem's army, U.S. advisers had concentrated on meeting the threat of a conventional North Vietnamese invasion. 
The ARVN's earlier antiguerrilla campaigns, while seemingly successful, had been carried out against a weak and dormant insurgency. The Civil Guard and Self-Defense Corps, which bore the brunt of the Viet Cong's attacks, were not under the MAAG's purview and proved unable to cope with the audacious Viet Cong. Diem's regime, while stressing military activities, neglected political, social, and economic reforms. American officials disagreed over the seriousness of the guerrilla threat, the priority to be accorded political or military measures, and the need for special counterguerrilla training for the South Vietnamese Army. Only a handful of the MAAG's advisers had personal experience in counterinsurgency warfare. Yet the U.S. Army was not a stranger to such conflict. Americans had fought insurgents in the Philippines at the turn of the century, conducted a guerrilla campaign in Burma during World War II, helped the Greek and Philippine Governments to subdue Communist insurgencies after the war, and studied the French failure in Indochina and the British success in Malaya. The Army did not, however, have a comprehensive doctrine for dealing with insurgency. For the most part, insurgent warfare was equated with the type of guerrilla or partisan struggles carried out during World War II behind enemy lines in support of conventional operations. This viewpoint reduced antiguerrilla warfare to providing security against enemy partisans operating behind friendly lines. Almost totally lacking was an appreciation of the political and social dimensions of insurgency and its role in the larger framework of revolutionary war. Insurgency meant above all a contest for political legitimacy and power—a struggle between contending political cultures over the organization of society. Most of the Army advisers and Special Forces who were sent to South Vietnam in the early 1960's were poorly prepared to wage such a struggle. A victory for counterinsurgency in South Vietnam would require Diem's government not only to outfight the guerrillas, but to compete successfully with their efforts to organize the population in support of the government's cause. The Viet Cong thrived on their access to and control of the people, who formed the most important part of their support base. The population provided both economic and manpower resources to sustain and expand the insurgency; the people of the villages served the guerrillas as their first line of resistance against government intrusion into their "liberated zones" and bases. By comparison with their political effort, the strictly military aims of the Viet Cong were secondary. The insurgents hoped not to destroy government forces—although they did so when weaker elements could be isolated and defeated—but by limited actions to extend their influence over the population. By mobilizing the population, the Viet Cong compensated for their numerical and material disadvantages. The rule of thumb that ten soldiers were needed to defeat one guerrilla reflected the insurgents' political support rather than their military superiority. For the Saigon government, the task of isolating the Viet Cong from the population was difficult under any circumstances and impossible to achieve by force alone. Viet Cong military forces varied from hamlet and village guerrillas, who were farmers by day and fighters by night, to full-time professional soldiers. Organized into squads and platoons, part-time guerrillas had several military functions.
They gathered intelligence, passing it on to district or provincial authorities; they proselytized, propagandized, recruited, and provided security for local cadres. They reconnoitered the battlefield, served as porters and guides, created diversions, evacuated wounded, and retrieved weapons. Their very presence and watchfulness in a hamlet or village inhibited the population from aiding the government. By contrast, the local and main force units consisted of full-time soldiers, most often recruited from the area where the unit operated. Forming companies and battalions, local forces were attached to a village, district, or provincial headquarters. Often they formed the protective shield behind which a Communist Party cadre established its political infrastructure and organized new guerrilla elements at the hamlet and village levels. As the link between guerrilla and main force units, local forces served as a reaction force for the former and as a pool of replacements and reinforcements for the latter. Having limited offensive capability, local forces usually attacked poorly defended, isolated outposts or weaker paramilitary forces, often at night and by ambush. Main force units were organized as battalions, regiments, and—as the insurgency matured—divisions. Subordinate to provincial, regional, and higher commands, such units were the strongest, most mobile, and most offensive-minded of the Viet Cong forces; their mission often was to attack and defeat a specific South Vietnamese unit. Missions were assigned and approved by a political officer who, in most cases, was superior to the unit's military commander. Party policy, military discipline, and unit cohesion were inculcated and reinforced by three-man party cells in every unit. Among the insurgents, war was always the servant of policy. As the Viet Cong's control over the population increased, their military forces grew in number and size. Squads and platoons became companies, companies formed battalions, and battalions were organized into regiments. This process of creating and enlarging units continued as long as the Viet Cong had a base of support among the population. After 1959, however, infiltrators from the North also became important. Hanoi activated a special military transportation unit to control overland infiltration along the Ho Chi Minh Trail through Laos and Cambodia. Then a special naval unit was set up to conduct sea infiltration. At first, the infiltrators were southern-born Viet Minh soldiers who had regrouped north after the French Indochina War. Each year until 1964, thousands returned south to join or to form Viet Cong units, usually in the areas where they had originated. Such men served as experienced military or political cadres, as technicians, or as rank-and-file combatants wherever local recruitment was difficult. When the pool of about 80,000 so-called regroupees ran dry, Hanoi began sending native North Vietnamese soldiers as individual replacements and reinforcements. In 1964 the Communists started to introduce entire North Vietnamese Army (NVA) units into the South. Among the infiltrators were senior cadres, who manned the expanding Viet Cong command system— regional headquarters, interprovincial commands, and the Central Office for South Vietnam (COSVN), the supreme military and political headquarters. As the southern branch of the Vietnamese Communist Party, COSVN was directly subordinate to the Central Committee in Hanoi. Its senior commanders were high-ranking officers of North Vietnam's Army. 
To equip the growing number of Viet Cong forces in the South, the insurgents continued to rely heavily on arms and supplies captured from South Vietnamese forces. But, increasingly, large numbers of weapons, ammunition, and other equipment arrived from the North, nearly all supplied by the Sino-Soviet bloc. From a strength of approximately 5,000 at the start of 1959, the Viet Cong's ranks grew to about 100,000 at the end of 1964. The number of infiltrators alone during that period was estimated at 41,000. The growth of the insurgency reflected not only North Vietnam's skill in infiltrating men and weapons, but South Vietnam's inability to control its porous borders, Diem's failure to develop a credible pacification program to reduce Viet Cong influence in the countryside, and the South Vietnamese Army's difficulties in reducing long-standing Viet Cong bases and secret zones. Such areas not only facilitated infiltration, but were staging areas for operations; they contained training camps, hospitals, depots, workshops, and command centers. Many bases were in remote areas seldom visited by the army, such as the U Minh Forest or the Plain of Reeds. But others existed in the heart of populated areas, in the "liberated zones." There Viet Cong forces, dispersed among hamlets and villages, drew support from the local economy. From such centers the Viet Cong expanded their influence into adjacent areas that were nominally under Saigon's control.
A New President Takes Charge
Soon after John F. Kennedy became President in 1961, he sharply increased military and economic aid to South Vietnam to help Diem defeat the growing insurgency. For Kennedy, insurgencies (or "wars of national liberation" in the parlance of Communist leaders) were a challenge to international security every bit as serious as nuclear war. The administration's approach to both extremes of conflict rested on the precepts of the flexible response. Regarded as a form of "sub-limited" or small war, insurgency was treated largely as a military problem—conventional war writ small—and hence susceptible to resolution by timely and appropriate military action. Kennedy's success in applying calculated military pressures to compel the Soviet Union to remove its offensive missiles from Cuba in 1962 reinforced the administration's disposition to deal with other international crises, including the conflict in Vietnam, in a similar manner. Though an advance over the New Look, his policy also had limitations. Long-term strategic planning tended to be sacrificed to short-term crisis management. Planners were all too apt to assume that all belligerents were rational and that the foe subscribed as they did to the seductive logic of the flexible response. Hoping to give the South Vietnamese a margin for success, Kennedy periodically authorized additional military aid and support between 1961 and November 1963, when he was assassinated. But potential benefits were nullified by the absence of a clear doctrine and a coherent operational strategy for the conduct of counterinsurgency, and by chronic military and political shortcomings on the part of the South Vietnamese. The U.S. Army played a major role in Kennedy's "beef up" of the American advisory and support efforts in South Vietnam. In turn, that role was made possible in large measure by Kennedy's determination to increase the strength and capabilities of Army forces for both conventional and unconventional operations. 
Between 1961 and 1964 the Army's strength rose from about 850,000 to nearly a million men, and the number of combat divisions grew from eleven to sixteen. These increases were backed up by an ambitious program to modernize Army equipment and, by stockpiling supplies and equipment at forward bases, to increase the deployability and readiness of Army combat forces. The build-up, however, did not prevent the call-up of 120,000 Reservists to active duty in the summer of 1961, a few months after Kennedy assumed office. Facing renewed Soviet threats to force the Western Powers out of Berlin, Kennedy mobilized the Army to reinforce NATO, if need be. But the mobilization revealed serious shortcomings in Reserve readiness and produced a swell of criticism and complaints from Congress and Reservists alike. Although Kennedy sought to remedy the deficiencies that were exposed and set in motion plans to reorganize the Reserves, the unhappy experience of the Berlin Crisis was fresh in the minds of national leaders when they faced the prospect of war in Vietnam a few years later. Facing trouble spots in Latin America, Africa, and Southeast Asia, Kennedy took a keen interest in the U.S. Army's Special Forces, believing that their skills in unconventional warfare were well suited to countering insurgency. During his first year in office, he increased the strength of the Special Forces from about 1,500 to 9,000 and authorized them to wear a distinctive green beret. In the same year he greatly enlarged their role in South Vietnam. First under the auspices of the Central Intelligence Agency and then under a military commander, the Special Forces organized the highland tribes into the Civilian Irregular Defense Group (CIDG) and in time sought to recruit other ethnic groups and sects in the South as well. To this scheme, underwritten almost entirely by the United States, Diem gave only tepid support. Indeed, the civilian irregulars drew strength from groups traditionally hostile to Saigon. Treated with disdain by the lowland Vietnamese, the Montagnards developed close, trusting relations with their Army advisers. Special Forces detachment commanders frequently were the real leaders of CIDG units. This strong mutual bond of loyalty between adviser and highlander benefited operations, but some tribal leaders sought to exploit the special relationship to advance Montagnard political autonomy. On occasion, Special Forces advisers found themselves in the awkward position of mediating between militant Montagnards and South Vietnamese officials who were suspicious and wary of the Americans' sympathy for the highlanders. Through a village self-defense and development program, the Special Forces aimed initially to create a military and political buffer to the growing Viet Cong influence in the Central Highlands. Within a few years, approximately 60,000 highlanders had enlisted in the CIDG program. As their participation increased, so too did the range of Special Forces activities. In addition to village defense programs, the Green Berets sponsored offensive guerrilla activities and border surveillance and control measures. To detect and impede the Viet Cong, camps were established astride infiltration corridors and near enemy base areas, especially along the Cambodian and Laotian borders. But the camps themselves were vulnerable to enemy attack and, despite their presence, infiltration continued. At times, border control diverted tribal units from village defense, the original heart of the CIDG program. 
By 1965, as the military situation in the highlands worsened, many CIDG units had changed their character and begun to engage in quasi-conventional military operations. In some instances, irregulars under the leadership of Army Special Forces stood up to crack enemy regiments, offering much of the military resistance to enemy efforts to dominate the highlands. Yet the Special Forces—despite their efforts in South Vietnam and in Laos, where their teams helped to train and advise anti-Communist Laotian forces in the early 1960s—did not provide an antidote to the virulent insurgency in Vietnam. Long-standing animosities between Montagnard and Vietnamese prevented close, continuing co-operation between the South Vietnamese Army and the irregulars. Long on promises but short on action to improve the lot of the Montagnards, successive South Vietnamese regimes failed to win the loyalty of the tribesmen. And the Special Forces usually operated in areas that were remote from the main Viet Cong threat to the heavily populated and economically important Delta and coastal regions of the country. Besides the Special Forces, the Army's most important contribution to the fight was the helicopter. Neither Kennedy nor the Army anticipated the rapid growth of aviation in South Vietnam when the first helicopter transportation company arrived in December 1961. Within three years, however, each of South Vietnam's divisions and corps was supported by Army helicopters, with the faster, more reliable and versatile UH-1 (Huey) replacing the older CH-21. In addition to transporting men and supplies, helicopters were used to reconnoiter, to evacuate wounded, and to provide command and control. The Vietnam conflict became the crucible in which Army airmobile and air assault tactics evolved. As armament was added—first machine gun-wielding door-gunners, and later rockets and mini-guns—armed helicopters began to protect troop carriers against antiaircraft fire, to suppress enemy fire around landing zones during air assaults, and to deliver fire support to troops on the ground. Army fixed-wing aircraft also flourished. Equipped with a variety of detection devices, the OV-1 Mohawk conducted day and night surveillance of Viet Cong bases and trails. The Caribou, with its sturdy frame and ability to land and take off on short, unimproved airfields, proved ideal to supply remote camps. Army aviation revived old disagreements with the Air Force over the roles and missions of the two services and the adequacy of Air Force close air support. The expansion of the Army's own "air force" nevertheless continued, abetted by the Kennedy administration's interest in extending airmobility to all types of land warfare, from counterinsurgency to the nuclear battlefield. Secretary of Defense Robert S. McNamara himself encouraged the Army to test an experimental air assault division. During 1963 and 1964 the Army demonstrated that helicopters could successfully replace ground vehicles for mobility and provide fire support in lieu of ground artillery. The result was the creation in 1965 of the 1st Cavalry Division (Airmobile)—the first such unit in the Army. In South Vietnam the helicopter's effect on organization and operations was as sweeping as the influence of mechanized forces in World War II. 
Many of the operational concepts of airmobility, rooted in cavalry doctrine and operations, were pioneered by helicopter units between 1961 and 1964, and later adopted by the new airmobile division and by all Army combat units that fought in South Vietnam. In addition to Army Special Forces and helicopters, Kennedy greatly expanded the entire American advisory effort. Advisers were placed at the sector (provincial) level and were permanently assigned to infantry battalions and certain lower echelon combat units; additional intelligence advisers were sent to South Vietnam. Wide use was made of temporary training teams in psychological warfare, civic action, engineering, and a variety of logistical functions. With the expansion of the advisory and support efforts came demands for better communications, intelligence, and medical, logistical, and administrative support, all of which the Army provided from its active forces, drawing upon skilled men and units from U.S.-based forces. The result was a slow, steady erosion of its capacity to meet worldwide contingency obligations. But if Vietnam depleted the Army, it also provided certain advantages. The war was a laboratory in which to test and evaluate new equipment and techniques applicable to counterinsurgency—among others, the use of chemical defoliants and herbicides, both to remove the jungle canopy that gave cover to the guerrillas and to destroy their crops. As the activities of all the services expanded, U.S. military strength in South Vietnam increased from under 700 at the start of 1960 to almost 24,000 by the end of 1964. Of these, 15,000 were Army and a little over 2,000 were Army advisers. Changes in American command arrangements attested to the growing commitment. In February 1962 the Joint Chiefs of Staff established the United States Military Assistance Command, Vietnam (USMACV), in Saigon as the senior American military headquarters in South Vietnam, and appointed General Paul D. Harkins as commander (COMUSMACV). Harkins reported to the Commander in Chief, Pacific (CINCPAC), in Hawaii, but because of high-level interest in South Vietnam, enjoyed special access to military and civilian leaders in Washington as well. Soon MACV moved into the advisory effort hitherto directed by the Military Assistance Advisory Group. To simplify the advisory chain of command, the latter was disestablished in May 1964, and MACV took direct control. As the senior Army commander in South Vietnam, the MACV commander also commanded Army support units; for day-to-day operations, however, control of such units was vested in the corps and division senior advisers. For administrative and logistical support Army units looked to the U.S. Army Support Group, Vietnam (later the U.S. Army Support Command), which was established in mid-1962. Though command arrangements worked tolerably well, complaints were heard in and out of the Army. Some officials pressed for a separate Army component commander, who would be responsible both for operations and for logistical support—an arrangement enjoyed by other services in South Vietnam. Airmen tended to believe that an Army command already existed, disguised as MACV. They believed that General Harkins, though a joint commander, favored the Army in the bitter interservice rivalry over the roles and missions of aviation in South Vietnam. 
Some critics thought his span of control excessive, for Harkins' responsibility extended to Thailand, where Army combat units had deployed in 1962, aiming to overawe Communist forces in neighboring Laos. The Army undertook several logistical projects in Thailand, and Army engineers, signalmen, and other support forces remained there after combat forces withdrew in the fall of 1962. While the Americans strengthened their position in South Vietnam and Thailand, the Communists tightened their grip in Laos. In 1962 agreements on that small, land-locked nation were signed in Geneva requiring all foreign military forces to leave Laos. American advisers, including hundreds of Special Forces, departed. But the agreements were not honored by North Vietnam. Its army, together with Laotian Communist forces, consolidated their hold on areas adjacent to both North and South Vietnam through which passed the network of jungle roads called the Ho Chi Minh Trail. As a result, it became easier to move supplies south to support the Viet Cong in the face of the new dangers embodied in U.S. advisers, weapons, and tactics. At first the enhanced mobility and firepower afforded the South Vietnamese Army by helicopters, armored personnel carriers, and close air support surprised and overwhelmed the Viet Cong. Saigon's forces reacted more quickly to insurgent attacks and penetrated many Viet Cong areas. Even more threatening to the insurgents was Diem's strategic hamlet program, launched in late 1961. Diem and his brother Ngo Dinh Nhu, an ardent sponsor of the program, hoped to create thousands of new, fortified villages, often by moving peasants from their existing homes. Hamlet construction and defense were the responsibility of the new residents, with paramilitary and ARVN forces providing initial security while the peasants were recruited and organized. As security improved, Diem and Nhu hoped to enact social, economic, and political reforms which, when fully carried out, would constitute Saigon's revolutionary response to Viet Cong promises of social and economic betterment. If successful, the program might destroy the insurgency by separating and protecting the rural population from the Viet Cong, threatening the rebellion's base of support. By early 1963, however, the Viet Cong had learned to cope with the army's new weapons and more aggressive tactics and had begun a campaign to eliminate the strategic hamlets. The insurgents became adept at countering helicopters and slow-flying aircraft and learned the vulnerabilities of armored personnel carriers. In addition, their excellent intelligence, combined with the predictability of ARVN's tactics and pattern of operations, enabled the Viet Cong to evade or ambush government forces. The new weapons the United States had provided the South Vietnamese did not compensate for the stifling influence of poor leadership, dubious tactics, and inexperience. The much publicized defeat of government forces at the Delta village of Ap Bac in January 1963 demonstrated both the Viet Cong's skill in countering ARVN's new capabilities and the latter's inherent weaknesses. Faulty intelligence, poorly planned and executed fire support, and overcautious leadership contributed to the outcome. But Ap Bac's significance transcended a single battle. The defeat was a portent of things to come. Now able to challenge ARVN units of equal strength in quasi-conventional battles, the Viet Cong were moving into a more intense stage of revolutionary war. 
As the Viet Cong became stronger and bolder, the South Vietnamese Army became more cautious and less offensive-minded. Government forces became reluctant to respond to Viet Cong depredations in the countryside, avoided night operations, and resorted to ponderous sweeps against vague military objectives, rarely making contact with their enemies. Meanwhile, the Viet Cong concentrated on destroying strategic hamlets, showing that they considered the settlements, rather than ARVN forces, the greater danger to the insurgency. Poorly defended hamlets and outposts were overrun or subverted by enemy agents who infiltrated with peasants arriving from the countryside. The Viet Cong's campaign was aided by Saigon's failures. The government built too many hamlets to defend. Hamlet militia varied from those who were poorly trained and armed to those who were not trained or armed at all. Fearing that weapons given to the militia would fall to the Viet Cong, local officials often withheld arms. Forced relocation, use of forced peasant labor to construct hamlets, and tardy payment of compensation for relocation were but a few reasons why peasants turned against the program. Few meaningful reforms took place. Accurate information on the program's true condition and on the decline in rural security was hidden from Diem by officials eager to please him with reports of progress. False statistics and reports misled U.S. officials, too, about the progress of the counterinsurgency effort. If the decline in rural security was not always apparent to Americans, the lack of enlightened political leadership on the part of Diem was all too obvious. Diem habitually interfered in military matters—bypassing the chain of command to order operations, forbidding commanders to take casualties, and appointing military leaders on the basis of political loyalty rather than competence. Many military and civilian appointees, especially province and district chiefs, were dishonest and put career and fortune above the national interest. When Buddhist opposition to certain policies erupted into violent antigovernment demonstrations in 1963, Diem's uncompromising stance and use of military force to suppress the demonstrators caused some generals to decide that the President was a liability in the fight against the Viet Cong. On 1 November, with American encouragement, a group of reform-minded generals ousted Diem, who was murdered along with his brother. Political turmoil followed the coup. Emboldened, the insurgents stepped up operations and increased their control over many rural areas. North Vietnam's leaders decided to intensify the armed struggle, aiming to demoralize the South Vietnamese Army and further undermine political authority in the South. As Viet Cong military activity quickened, regular North Vietnamese Army units began to train for possible intervention in the war. Men and equipment continued to flow down the Ho Chi Minh Trail, with North Vietnamese conscripts replacing the dwindling pool of southerners who had belonged to the Viet Minh.
Setting the Stage for Confrontation
The critical state of rural security that came to light after Diem's death again prompted the United States to expand its military aid to Saigon. General Harkins and his successor General William C. Westmoreland urgently strove to revitalize pacification and counterinsurgency. Army advisers helped their Vietnamese counterparts to revise national and provincial pacification plans. 
They retained the concept of fortified hamlets as the heart of a new national counterinsurgency program, but corrected the old abuses, at least in theory. To help implement the program, Army advisers were assigned to the subsector (district) level for the first time, becoming more intimately involved in local pacification efforts and in paramilitary operations. Additional advisers were assigned to units and training centers, especially those of the Regional and Popular Forces (formerly called the Civil Guard and Self-Defense Corps). All Army activities, from aviation support to Special Forces, were strengthened in a concerted effort to undo the effects of years of Diem's mismanagement. At the same time, American officials in Washington, Hawaii, and Saigon began to explore ways to increase military pressure against North Vietnam. In 1964 the South Vietnamese launched covert raids under MACV's auspices. Some military leaders, however, believed that only direct air strikes against North Vietnam would induce a change in Hanoi's policies by demonstrating American determination to defend South Vietnam's independence. Air strike plans ranged from immediate massive bombardment of military and industrial targets to gradually intensifying attacks spanning several months. The interest in using air power reflected lingering sentiment in the United States against involving American ground forces once again in a land war on the Asian continent. Many of President Lyndon B. Johnson's advisers—among them General Maxwell D. Taylor, who was appointed Ambassador to Saigon in mid-1964—believed that a carefully calibrated air campaign would be the most effective means of exerting pressure against the North and, at the same time, the method least likely to provoke intervention by China. Taylor thought conventional Army ground forces ill suited to engage in day-to-day counterinsurgency operations against the Viet Cong in hamlets and villages. Ground forces might, however, be used to protect vital air bases in the South and to repel any North Vietnamese attack across the demilitarized zone, which separated North from South Vietnam. Together, a more vigorous counterinsurgency effort in the South and military pressure against the North might buy time for Saigon to put its political house in order, boost flagging military and civilian morale, and strengthen its military position in the event of a negotiated peace. Taylor and Westmoreland, the senior U.S. officials in South Vietnam, agreed that Hanoi was unlikely to change its course unless convinced that it could not succeed in the South. Both recognized that air strikes were neither a panacea nor a substitute for military efforts in the South. As each side undertook more provocative military actions, the likelihood of a direct military confrontation between North Vietnam and the United States increased. The crisis came in early August 1964 in the international waters of the Gulf of Tonkin. North Vietnamese patrol boats attacked U.S. naval vessels engaged in surveillance of North Vietnam's coastal defenses. The Americans promptly launched retaliatory air strikes. At the request of President Johnson, Congress overwhelmingly passed the Southeast Asia Resolution—the so-called Gulf of Tonkin Resolution—authorizing all actions necessary to protect American forces and to provide for the defense of the nation's allies in Southeast Asia. 
Considered by some in the administration as the equivalent of a declaration of war, this broad grant of authority encouraged Johnson to expand American military efforts within South Vietnam, against North Vietnam, and in Southeast Asia at large. By late 1964, both sides were poised to increase their stake in the war. Regular NVA units had begun moving south and stood at the Laotian frontier, on the threshold of crossing into South Vietnam's Central Highlands. U.S. air and naval forces stood ready to renew their attacks. On 7 February 1965, Communist forces attacked an American compound in Pleiku in the Central Highlands and a few days later bombed American quarters in Qui Nhon. The United States promptly bombed military targets in the North. A few weeks later, President Johnson approved ROLLING THUNDER, a campaign of sustained, direct air strikes of progressively increasing strength against military and industrial targets in North Vietnam. Signs of intensifying conflict appeared in South Vietnam as well. Strengthening their forces at all echelons, from village guerrillas to main force regiments, the Viet Cong quickened military activity in late 1964 and in the first half of 1965. At Binh Gia, a village forty miles east of Saigon in Phuoc Tuy Province, a multiregimental Viet Cong force—possibly the 1st Viet Cong Infantry Division—fought and defeated several South Vietnamese battalions. Throughout the spring the Viet Cong sought to disrupt pacification and oust the government from many rural areas. The insurgents made deep inroads in the central coastal provinces and withstood government efforts to reduce their influence in the Delta and in the critical provinces around Saigon. Committed to static defense of key towns and bases, government forces were unable or unwilling to respond to attacks against rural communities. In late spring and early summer, strong Communist forces sought a major military victory over the South Vietnamese Army by attacking border posts and highland camps. The enemy also hoped to draw government forces from populated areas, to weaken pacification further. By whipsawing war-weary ARVN forces between coast and highland and by inflicting a series of damaging defeats against regular units, the enemy hoped to undermine military morale and popular confidence in the Saigon government. And by accelerating the dissolution of government military forces, already racked by high desertions and casualties, the Communists hoped to compel the South Vietnamese to abandon the battlefield and seek an all-Vietnamese political settlement that would force the United States to leave South Vietnam. By the summer of 1965, the Viet Cong, strengthened by several recently infiltrated NVA regiments, had gained the upper hand over government forces in some areas of South Vietnam. With U.S. close air support and the aid of Army helicopter gunships, Saigon's forces repelled many enemy attacks, but suffered heavy casualties. Elsewhere highland camps and border outposts had to be abandoned. ARVN's cumulative losses from battle deaths and desertions amounted to nearly a battalion a week. Saigon was hard pressed to find men to replenish these heavy losses and completely unable to match the growth of Communist forces from local recruitment and infiltration. Some American officials doubted whether the South Vietnamese could hold out until ROLLING THUNDER created pressures sufficiently strong to convince North Vietnam's leaders to reduce the level of combat in the South. 
General Westmoreland and others believed that U.S. ground forces were needed to stave off an irrevocable shift of the military and political balance in favor of the enemy. For a variety of diplomatic, political, and military reasons, President Johnson approached with great caution any commitment of large ground combat forces to South Vietnam. Yet preparations had been under way for some time. In early March 1965, a few days after ROLLING THUNDER began, American marines went ashore in South Vietnam to protect the large airfield at Da Nang—a defensive security mission. Even as they landed, General Harold K. Johnson, Chief of Staff of the Army, was in South Vietnam to assess the situation. Upon returning to Washington, he recommended a substantial increase in American military assistance, including several combat divisions. He wanted U.S. forces either to interdict the Laotian panhandle to stop infiltration or to counter a growing enemy threat in the central and northern provinces. But President Johnson sanctioned only the dispatch of additional marines to increase security at Da Nang and to secure other coastal enclaves. He also authorized the Army to begin deploying nearly 20,000 logistical troops, the main body of the 1st Logistical Command, to Southeast Asia. (Westmoreland had requested such a command in late 1964.) At the same time, the President modified the marines' mission to allow them to conduct offensive operations close to their bases. A few weeks later, to protect American bases in the vicinity of Saigon, Johnson approved sending the first Army combat unit, the 173d Airborne Brigade (Separate), to South Vietnam. Arriving from Okinawa in early May, the brigade moved quickly to secure the air base at Bien Hoa, just northeast of Saigon. With its arrival, U.S. military strength in South Vietnam passed 50,000. Despite added numbers and expanded missions, American ground forces had yet to engage the enemy in full-scale combat. Indeed, the question of how best to use large numbers of American ground forces was still unresolved on the eve of their deployment. Focusing on population security and pacification, some planners saw U.S. combat forces concentrating their efforts in coastal enclaves and around key urban centers and bases. Under this plan, such forces would provide a security shield behind which the Vietnamese could expand the pacification zone; when required, American combat units would venture beyond their enclaves as mobile reaction forces. This concept, largely defensive in nature, reflected the pattern established by the first Army combat units to enter South Vietnam. But the mobility and offensive firepower of U.S. ground units suggested their use in remote, sparsely populated regions to seek out and engage main force enemy units as they infiltrated into South Vietnam or emerged from their secret bases. While secure coastal logistical enclaves and base camps still would be required, the weight of the military effort would be focused on the destruction of enemy military units. Yet even in this alternative, American units would serve indirectly as a shield for pacification activities in the more heavily populated lowlands and Delta. A third proposal had particular appeal to General Johnson. He wished to employ U.S. and allied ground forces across the Laotian panhandle to interdict enemy infiltration into South Vietnam. Here was a more direct and effective way to stop infiltration than the use of air power. 
Encumbered by military and political problems, the idea was revived periodically but always rejected. The pattern of deployment that actually developed in South Vietnam was a compromise between the first two concepts. For any type of operations, secure logistical enclaves at deep-water ports (Cam Ranh Bay, Nha Trang, Qui Nhon, for example) were a military necessity. In such areas combat units arrived and bases developed for regional logistical complexes to support the troops. As the administration neared a decision on combat deployment, the Army began to identify and ready units for movement overseas and to prepare mobilization plans for Selected Reserve forces. The dispatch of Army units to the Dominican Republic in May 1965 to forestall a leftist take-over caused only minor adjustments to the build-up plans. The episode nevertheless showed how unexpected demands elsewhere in the world could deplete the strategic reserve, and it underscored the importance of mobilization if the Army was to meet worldwide contingencies and supply trained combat units to Westmoreland as well. The prospect of deploying American ground forces also revived discussions of allied command arrangements. For a time, Westmoreland considered placing South Vietnamese and American forces under a single commander, an arrangement similar to that of U.S. and South Korean forces during the Korean War. In the face of South Vietnamese opposition, however, the idea was dropped. Arrangements with other allies were varied. Americans in South Vietnam were joined by combat units from Australia, New Zealand, South Korea, Thailand, and by noncombat elements from several other nations. Westmoreland entered into separate agreements with each commander in turn; the compacts ensured close co-operation with MACV, but fell short of giving Westmoreland command over the allied forces. While diversity marked these arrangements, Westmoreland strove for unity within the American build-up. As forces began to deploy to South Vietnam, the Army again sought to elevate the U.S. Army, Vietnam (USARV), to a full-fledged Army component command with responsibility for combat operations. But Westmoreland successfully warded off the challenge to his dual role as unified commander of MACV and Army commander. For the remainder of the war, USARV performed solely in a logistical and administrative capacity; unlike MACV's air and naval component commands, the Army component did not exercise operational control over combat forces, special forces, or field advisers. However, through its logistical, engineer, signal, medical, military police, and aviation commands all established in the course of the build-up, USARV commanded and managed a support base of unprecedented size and scope. Despite this victory, unity of command over the ground war in South Vietnam eluded Westmoreland, as did over-all control of U.S. military operations in support of the war. Most air and naval operations outside of South Vietnam, including ROLLING THUNDER, were carried out by the Commander in Chief, Pacific, and his air and naval commanders from his headquarters thousands of miles away in Hawaii. This patchwork of command arrangements contributed to the lack of a unified strategy, the fragmentation of operations, and the pursuit of parochial service interests to the detriment of the war effort. No single American commander had complete authority or responsibility to fashion an over-all strategy or to co-ordinate all military aspects of the war in Southeast Asia. 
Furthermore, Westmoreland labored under a variety of political and operational constraints on the use of the combat forces he did command. Like the Korean War, the struggle in South Vietnam was complicated by enemy sanctuaries and by geographical and political restrictions on allied operations. Ground forces were barred from operating across South Vietnam's borders into Cambodia, Laos, or North Vietnam, although the border areas of those countries were vital to the enemy's war effort. These factors narrowed Westmoreland's freedom of action and detracted from his efforts to make effective use of American military power.
Groundwork for Combat: Build-up and Strategy
On 28 July 1965, President Johnson announced plans to deploy additional combat units and to increase American military strength in South Vietnam to 175,000 by year's end. The Army already was preparing hundreds of units for duty in Southeast Asia, among them the newly activated 1st Cavalry Division (Airmobile). Other combat units—the 1st Brigade, 101st Airborne Division, and all three brigades of the 1st Infantry Division—were either ready to go or already on their way to Vietnam. Together with hundreds of support and logistical units, these combat units constituted the first phase of the build-up during the summer and fall of 1965. At the same time, President Johnson decided not to mobilize any Reserve units. The President's decision profoundly affected the manner in which the Army supported and sustained the build-up. To meet the call for additional combat forces and to obtain manpower to enlarge its training base and to maintain a pool for rotation and replacement of soldiers in South Vietnam, the Army had to increase its active strength, over the next three years, to nearly 1.5 million men. Necessarily, it relied on larger draft calls and voluntary enlistments, supplementing them with heavy draw downs of experienced soldiers from units in Europe and South Korea and extensions of some tours of duty to retain specialists, technicians, and cadres who could train recruits or round out deploying units. Combat units assigned to the strategic reserve were used to meet a large portion of MACV's force requirements, and Reservists were not available to replace them. Mobilization could have eased the additional burden of providing noncommissioned officers (NCO's) and officers to man the Army's growing training bases. As matters stood, the personnel turbulence caused by competing demands for the Army's limited manpower was intensified by a one-year tour of duty in South Vietnam. A large number of men was needed to sustain the rotational base, often necessitating the quick return to Vietnam of men with critical skills. The heightened demand for leaders led to accelerated training programs and the lowering of standards for NCO's and junior officers. Moreover, the one-year tour deprived units in South Vietnam of experienced leadership. In time, the infusion of less-seasoned NCO's and officers contributed to a host of morale problems that afflicted some Army units. At a deeper level, the administration's decision against calling the Reserves to active duty sent the wrong signal to friends and enemies alike, implying that the nation lacked the resolution to support an effort of the magnitude needed to achieve American objectives in South Vietnam. Hence the Army began to organize additional combat units. Three light infantry brigades were activated, and the 9th Infantry Division was reactivated. In the meantime the 4th and 25th Infantry Divisions were alerted for deployment to South Vietnam. With the exception of a brigade of the 25th, all of the combat units activated and alerted during the second half of 1965 deployed to South Vietnam during 1966 and 1967. By the end of 1965, U.S. military strength in South Vietnam had reached 184,000; a year later it stood at 385,000; and by the end of 1967 it approached 490,000. Army personnel accounted for nearly two-thirds of the total. Of the Army's eighteen divisions, at the end of 1967, seven were serving in South Vietnam. Facing a deteriorating military situation, Westmoreland in the summer of 1965 planned to use his combat units to blunt the enemy's spring-summer offensive. As they arrived in the country, Westmoreland moved them into a defensive arc around Saigon and secured bases for the arrival of subsequent units. His initial aim was defensive—to stop losing the war and to build a structure that could support a later transition to an offensive campaign. As additional troops poured in, Westmoreland planned to seek out and defeat major enemy forces. Throughout both phases, the South Vietnamese, relieved of major combat tasks, were to refurbish their forces and conduct an aggressive pacification program behind the American shield. In a third and final stage, as enemy main force units were driven into their secret zones and bases, Westmoreland hoped to achieve victory by destroying those sanctuaries and shifting the weight of the military effort to pacification, thereby at last subduing the Viet Cong throughout rural South Vietnam. The fulfillment of this concept rested not only on the success of American efforts to find and defeat enemy forces, but on the success of Saigon's pacification program. 
In June 1965 the last in a series of coups that followed Diem's overthrow brought in a military junta headed by Lt. Gen. Nguyen Van Thieu as Chief of State and Air Vice Marshal Nguyen Cao Ky as Prime Minister. The new government provided the political stability requisite for successful pacification. Success hinged also on the ability of the U.S. air campaign against the North to reduce the infiltration of men and material, dampening the intensity of combat in the South and inducing Communist leaders in Hanoi to alter their long-term strategic goals. Should any strand of this threefold strategy—the campaign against Communist forces in the South, Saigon's pacification program, and the air war in the North—falter, Westmoreland's prospects would become poorer. Yet he was directly responsible for only one element, the U.S. military effort in the South. To a lesser degree, through American advice and assistance to the South Vietnamese forces, he also influenced Saigon's efforts to suppress the Viet Cong and to carry out pacification.
Army Operations in III and IV Corps, 1965-1967
Centered on the defense of Saigon, Westmoreland's concept of operations in the III Corps area had a clarity of design and purpose that was not always apparent elsewhere in South Vietnam. (Map 48) Nearly two years would pass before U.S. forces could maintain a security belt around the capital and at the same time attack the enemy's bases. But Westmoreland's ultimate aims and the difficulties he would encounter were both foreshadowed by the initial combat operations in the summer and fall of 1965. Joined by newly arrived Australian infantrymen, the 173d Airborne Brigade during June began operations in War Zone D, a longtime enemy base north of Saigon. Though diverted several times to other tasks, the brigade gained experience in conducting heliborne assaults and accustomed itself to the rigors of jungle operations. It also established a pattern of operations that was to grow all too familiar. Airmobile assaults, often in the wake of B-52 air strikes, were followed by extensive patrolling, episodic contact with the Viet Cong, and withdrawal after a few days' stay in the enemy's territory. In early November the airborne soldiers uncovered evidence of the enemy's recent and hasty departure—abandoned camps, recently vacated tunnels, and caches of food and supplies. However, the Viet Cong, by observing the brigade, began to formulate plans for dealing with the Americans. On 8 November, moving deeper into War Zone D, the brigade encountered the first significant resistance. A multibattalion Viet Cong force attacked at close quarters and forced the Americans into a tight defensive perimeter. Hand-to-hand combat ensued as the enemy tried to "hug" American soldiers to prevent the delivery of supporting air and artillery fire. Unable to prepare a landing zone to receive reinforcements or to evacuate casualties, the beleaguered Americans withstood repeated enemy assaults. At nightfall the Viet Cong ceased their attack and withdrew under cover of darkness. Next morning, when reinforcements arrived, the brigade pursued the enemy, finding evidence that he had suffered heavy casualties. Such operations inflicted losses but failed either to destroy the enemy's base or to prevent him from returning to it later on. Like the airborne brigade, the 1st Infantry Division initially divided its efforts. 
In addition to securing its base camps north of Saigon, the division helped South Vietnamese forces clear an area west of the capital in the vicinity of Cu Chi in Hau Nghia Province. Reacting to reports of enemy troop concentrations, units of the division launched a series of operations in the fall of 1965 and early 1966 that entailed quick forays into the Ho Bo and Boi Loi woods, the Michelin Rubber Plantation, the Rung Sat swamp, and War Zones C and D. In Operation MASTIFF, for example, the division sought to disrupt Viet Cong infiltration routes between War Zones C and D that crossed the Boi Loi woods in Tay Ninh Province, an area that had not been penetrated by government forces for several years. But defense of Saigon was the first duty of the "Big Red One" as well as of the 25th Infantry Division, which arrived in the spring of 1966. The 1st Division took up a position protecting the northern approaches, blocking Route 13 from the Cambodian border. The 25th guarded the western approaches, chiefly Route 1 and the Saigon River. The two brigades of the 25th Division served also as a buffer between Saigon and the enemy's base areas in Tay Ninh Province. Westmoreland hoped, however, that the 25th Division would loosen the insurgents' tenacious hold on Hau Nghia Province as well. Here American soldiers found to their amazement that the division's camp at Cu Chi had been constructed atop an extensive Viet Cong tunnel complex. Extending over an area of several miles, this subterranean network, one of several in the region, contained hospitals, command centers, and storage sites. The complex, though partially destroyed by Army "tunnel rats," was never completely eliminated and lasted for the duration of the war. The 25th Division worked closely with South Vietnamese Army and paramilitary forces throughout 1966 and 1967 to foster pacification in Hau Nghia and to secure its own base. But suppressing insurgency in Hau Nghia proved as difficult as eradicating the tunnels at Cu Chi. As the number of Army combat units in Vietnam grew larger, Westmoreland established two corps-size commands, I Field Force in the II Corps area and II Field Force in the III Corps area. Reporting directly to the MACV commander, the field force commander was the senior Army tactical commander in his area and the senior U.S. adviser to ARVN forces there. Working closely with his South Vietnamese counterpart, he co-ordinated ARVN and American operations by establishing territorial priorities for combat and pacification efforts. Through his deputy senior adviser, a position established in 1967, the field force commander was able to keep abreast both of the activities of U.S. sector (province) and subsector (district) advisers and of the progress of Saigon's pacification efforts. A similar arrangement was set up in I Corps, where the commander of the III Marine Amphibious Force was the equivalent of a field force commander. Only in IV Corps, in the Mekong Delta where few American combat units served, did Westmoreland choose not to establish a corps-size command. There the senior U.S. adviser served as COMUSMACV's representative; he commanded Army advisory and support units, but no combat units. Although Army commanders in III Corps were eager to seek out and engage enemy main force units in their strongholds along the Cambodian border, operations at first were devoted to base and area security and to clearing and rehabilitating roads. 
The 1st Infantry Division's first major encounter with the Viet Cong occurred in November as division elements carried out a routine road security operation along Route 13, in the vicinity of the village of Bau Bang. Trapping convoys along Route 13 had long been a profitable Viet Cong tactic. Ambushed by a large, well-entrenched enemy force, division troops reacted aggressively and mounted a successful counterattack. But the road was by no means secured; close to enemy bases, the Cambodian border, and Saigon, Route 13 would be the site of several major battles in years to come. Roads were a major concern of U.S. commanders. In some operations, infantrymen provided security as Army engineers improved neglected routes. Defoliants and the Rome plow—a bulldozer modified with sharp front blades—removed from the sides of important highways the jungle growth that provided cover for Viet Cong ambushes. Road-clearing operations also contributed to pacification by providing peasants with secure access to local markets. In III Corps, with its important road network radiating from Saigon, ground mobility was as essential as airmobility for the conduct of military operations. Lacking as many helicopters as the airmobile division, the 1st and 25th Infantry Divisions, like all Army units in South Vietnam, strained the resources of their own aviation support units and of other Army aviation units providing area support to obtain the maximum airmobile capacity for each operation. Nevertheless, on many occasions the Army found itself road bound. Road and convoy security was also the original justification for introducing Army mechanized and armor units into South Vietnam in 1966. At first Westmoreland was reluctant to bring heavy mechanized equipment into South Vietnam, for it seemed ill suited either to counterinsurgency operations or to operations during the monsoon season, when all but a few roads were impassable. Armor advocates pressed Westmoreland to reconsider his policy. Operation CIRCLE PINES, carried out by elements of the 25th Infantry Division in the spring of 1966, successfully combined an infantry force and an armor battalion. This experience, together with new studies indicating a greater potential for mechanized forces, led Westmoreland to reverse his original policy and request deployment of the 11th Armored Cavalry Regiment, with its full complement of tanks, to Vietnam. Arriving in III Corps in the last half of 1966, the regiment set up base at Xuan Loc, on Route 1 northeast of Saigon in Long Khanh Province. In addition to assuming an area support mission and strengthening the eastern approaches to Saigon as part of Westmoreland's security belt around the capital, squadrons of the regiment supported Army units throughout the corps zone, often "homesteading" with other brigades or divisions. Route security, however, was only the first step in carving out a larger role for Army mechanized forces. Facing an enemy who employed no armor, American mechanized units, often in conjunction with airmobile assaults, acted both as blocking or holding forces and as assault or reaction forces, where terrain permitted. "Jungle bashing," as offensive armor operations were sometimes called, had its uses but also its limitations. The intimidating presence of tanks and personnel carriers was often nullified by their cumbersomeness and noise, which alerted the enemy to an impending attack. The Viet Cong also took countermeasures to immobilize tracked vehicles. 
Crude tank traps, locally manufactured mines (often made of plastic to thwart discovery by metal detectors), and well-aimed rocket or recoilless rifle rounds could disable a tank or personnel carrier. Together with the dust and tropical humidity, such weapons placed a heavy burden on Army maintenance units. Yet mechanized units brought the allies enhanced mobility and firepower and often were essential to counter ambushes or destroy an enemy force protected by bunkers. As Army strength increased in III Corps, Westmoreland encouraged his units to operate farther afield. In early 1966 intelligence reports indicated that enemy strength and activity were increasing in many of his base areas. In two operations during the early spring of 1966, units of the 1st and 25th Divisions discovered Viet Cong training camps and supply dumps, some of the sites honeycombed with tunnels. But they failed to engage major enemy forces. As Army units made the deepest penetration of War Zone C since 1961, all signs pointed to the foe's hasty withdrawal into Cambodia. An airmobile raid failed to locate the enemy's command center, COSVN. (COSVN, in fact, was fragmented among several sites in Tay Ninh Province and in nearby Cambodia.) Like the 173d Airborne Brigade's operations, the new attacks had no lasting effects. By May 1966 an ominous build-up of enemy forces, among them NVA regiments that had infiltrated south, was detected in Phuoc Long and Binh Long Provinces in northern III Corps. U.S. commanders viewed the build-up as a portent of the enemy's spring offensive, plans for which included an attack on the district town of Loc Ninh and on a nearby Special Forces camp. The 1st Division responded, sending a brigade to secure Route 13. But the threat to Loc Ninh heightened in early June, when regiments of the 9th Viet Cong Division took up positions around the town. The arrival of American reinforcements apparently prevented an assault. About a week later, however, an enemy regiment was spotted in fortified positions in a rubber plantation adjacent to Loc Ninh. Battered by massive air and artillery strikes, the regiment was dislodged and its position overrun, ending the threat. Americans recorded other successes, trapping Viet Cong ambushers in a counterambush, securing Loc Ninh, and spoiling the enemy's spring offensive. But if the enemy still underestimated the mobility and firepower that U.S. commanders could bring to bear, he had learned how easily Americans could be lured away from their base camps. By the summer of 1966 Westmoreland believed he had stopped the losing trend of a year earlier and could begin the second phase of his general campaign strategy. This entailed aggressive operations to search out and destroy enemy main force units, in addition to continued efforts to improve security in the populated areas of III Corps. In Operation ATTLEBORO he sent the 196th Infantry Brigade and the 3d Brigade, 4th Infantry Division, to Tay Ninh Province to bolster the security of the province seat. Westmoreland's challenge prompted COSVN to send the 9th Viet Cong Division on a "countersweep," the enemy's term for operations to counter allied search and destroy tactics. Moving deeper into the enemy's stronghold, the recently arrived and inexperienced 196th Infantry Brigade sparred with the enemy. Then an intense battle erupted, as elements of the brigade were isolated and surprised by a large enemy force. 
Operation ATTLEBORO quickly grew to a multidivision struggle as American commanders sought to maintain contact with the Viet Cong and to aid their own surrounded forces. Within a matter of days, elements of the 1st and 25th Divisions, the 173d Airborne Brigade, and the 11th Armored Cavalry Regiment had converged on War Zone C. Control of ATTLEBORO passed in turn from the 25th to the 1st Division and finally to the II Field Force, making it the first Army operation in South Vietnam to be controlled by a corps-size headquarters. With over 22,000 U.S. troops participating, the battle had become the largest of the war. Yet combat occurred most often at the platoon and company levels, usually at night. As the number of American troops increased, the 9th Viet Cong Division shied away, withdrawing across the Cambodian border. Then Army forces departed, leaving to the Special Forces the task of detecting the enemy's inevitable return. As the threat along the border abated, Westmoreland turned his attention to the enemy's secret zones near Saigon, among them the so-called Iron Triangle in Binh Duong Province. Harboring the headquarters of Military Region IV, the Communist command that directed military and terrorist activity in and around the capital, this stronghold had gone undisturbed for several years. Westmoreland hoped to find the command center, disrupt Viet Cong activity in the capital region, and allow South Vietnamese forces to accelerate pacification and uproot the stubborn Viet Cong political organization that flourished in many villages and hamlets. Operation CEDAR FALLS began on 8 January 1967 with the objectives of destroying the headquarters, interdicting the movement of enemy forces into the major war zones in III Corps, and defeating Viet Cong units encamped there. Like ATTLEBORO before it, CEDAR FALLS tapped the manpower and resources of nearly every major Army unit in the corps area. A series of preliminary maneuvers brought Army units into position. Several air assaults sealed off the Iron Triangle, exploiting the natural barriers of the rivers that formed two of its boundaries. Then American units began a series of sweeps to push the enemy toward the blocking forces. At the village of Ben Suc, long under the sway of the insurgents, sixty helicopters descended into seven landing zones in less than a minute. Ben Suc was surrounded, its entire population evacuated, and the village and its tunnel complex destroyed. But insurgent forces had fled before the heliborne assault. As CEDAR FALLS progressed, U.S. troops destroyed hundreds of enemy fortifications, captured large quantities of supplies and food, and evacuated other hamlets. Contact with the enemy was fleeting. Most of the Viet Cong, including the high-level cadre of the regional command, had escaped, sometimes infiltrating through allied lines. By the time Army units left the Iron Triangle, MACV had already received reports that Viet Cong and NVA regiments were returning to War Zone C in preparation for a spring offensive. This time Westmoreland hoped to prevent Communist forces from escaping into Cambodia, as they had done in ATTLEBORO. From forward field positions established during earlier operations, elements of the 25th and 1st Divisions, the 196th Infantry Brigade, and the 11th Armored Cavalry Regiment launched JUNCTION CITY, moving rapidly to establish a cordon around the war zone and to begin a new sweep of the base area. 
As airmobile and mechanized units moved into positions on the morning of 21 February 1967, elements of the 173d Airborne Brigade made the only parachute drop of the Vietnam War—and the first combat airborne assault since the Korean War—to establish a blocking position near the Cambodian border. Then other U.S. units entered the horseshoe-shaped area of operations through its open end. Despite the emphasis on speed and surprise, Army units did not encounter many enemy troops at the outset. As the operation entered its second phase, however, American forces concentrated their efforts in the eastern portion of War Zone C, close to Route 13. Here several violent battles erupted, as Communist forces tried to isolate and defeat individual units and possibly also to screen the retreat of their comrades into Cambodia. On 19 March a mechanized unit of the 9th Infantry Division was attacked and nearly overrun along Route 13 near the battered village of Bau Bang. The combined firepower of armored cavalry, supporting artillery, and close air support finally caused the enemy to break contact. A few days later, at Fire Support Base GOLD, in the vicinity of Suoi Tre, an infantry and an artillery battalion of the 25th Infantry Division engaged the 272d Viet Cong Regiment. Behind an intense, walking mortar barrage, enemy troops breached GOLD's defensive perimeter and rushed into the base. Man-to-man combat ensued. A complete disaster was averted when Army artillerymen lowered their howitzers and fired Beehive artillery rounds, each containing hundreds of dartlike projectiles, directly into the oncoming enemy. The last major encounter with enemy troops during JUNCTION CITY occurred at the end of March, when elements of two Viet Cong regiments, the 271st and the 70th (the latter directly subordinate to COSVN), attacked a battalion of the 1st Infantry Division in a night defensive position deep in War Zone C, near the Cambodian border. The lopsided casualties—over 600 enemy killed in contrast to 10 Americans—forcefully illustrated once again the U.S. ability to call in overwhelmingly superior fire support by artillery, armed helicopters, and tactical aircraft. Thereafter, JUNCTION CITY became a pale shadow of the multidivision effort it had been at its outset. Most Army units were withdrawn, either to return to their bases or to participate in other operations. The 196th Infantry Brigade was transferred to I Corps to help replace Marine forces sent north to meet a growing enemy threat near the demilitarized zone. Contacts with enemy forces in this final phase were meager. Again a planned Viet Cong offensive had been aborted; the enemy himself escaped, though not unscathed. In the wake of JUNCTION CITY, MACV's attention reverted to the still critical security conditions around Saigon. The 1st Infantry Division returned to War Zone D to search for the 271st Viet Cong Regiment and to disrupt the insurgents' lines of communications between War Zones C and D. Despite two major contacts, the main body of the regiment eluded its American pursuers. Army units again returned to the Iron Triangle between April and July 1967, after enemy forces were detected in their old stronghold. Supplies and documents were found in quantities even larger than those discovered in CEDAR FALLS. Once again, however, encounters with the Communists were fleeting. The enemy's reappearance in the Iron Triangle and War Zone D, combined with rocket and mortar attacks on U.S.
bases around Saigon, heightened Westmoreland's concern about the security of the capital. When the 1st Infantry Division's base at Phuoc Vinh and the Bien Hoa Air Base were attacked in mid-1967, the division mounted operations into the Ong Dong jungle and the Vinh Loi woods. Other operations swept the jungles and villages of Bien Hoa Province and sought once again to support pacification in Hau Nghia Province. These actions pointed up a basic problem. The large, multidivision operations into the enemy's war zones produced some benefits for the pacification campaign; by keeping enemy main force regiments at bay, Westmoreland impeded their access to heavily populated areas and prevented them from reinforcing Viet Cong provincial and district forces. Yet when American units were shifted to the border, the local Viet Cong units gained a measure of relief. Westmoreland faced a strategic dilemma: he could not afford to keep substantial forces away from their bases for more than a few months at a time without jeopardizing local security. Unless he received additional forces, Westmoreland would always be torn between two operational imperatives. By the summer of 1967, MACV's likelihood of receiving more combat troops, beyond those scheduled to deploy during the latter half of the year and in early 1968, had become remote. In Washington the administration turned down his request for an additional 200,000 men. Meanwhile, however, the 9th Infantry Division and the 199th Infantry Brigade arrived in South Vietnam. Westmoreland stationed the brigade at Bien Hoa, where it embarked on FAIRFAX, a year-long operation in which it worked closely with a South Vietnamese ranger group to improve security in Gia Dinh Province, which surrounded the capital. Units of the brigade "paired off" with South Vietnamese rangers and, working closely with paramilitary and police forces, sought to uproot the very active Viet Cong local forces and destroy the enemy's political infrastructure. Typical activities included ambushes by combined forces; cordon and search operations in villages and hamlets, often in conjunction with the Vietnamese police; psychological and civic action operations; surprise roadblocks to search for contraband and Viet Cong supporters; and training programs to develop proficient military and local self-defense capabilities. Likewise, the 9th Infantry Division set up bases east and south of Saigon. One brigade deployed to Bear Cat; another set up camp at Tan An in Long An Province, south of Saigon, where it sought to secure portions of Route 4, an important north-south highway connecting Saigon with the rice-rich lower Delta. Further south, the 2d Brigade, 9th Infantry Division, established its base at Dong Tam in Dinh Tuong Province in IV Corps. Located in the midst of rice paddies and swamps, Dong Tam was created by Army engineers with sand dredged from the My Tho River. From this 600-acre base, the brigade began a series of riverine operations unique to the Army's experience in South Vietnam. To patrol and fight in the inundated marshlands and rice paddies and along the numerous canals and waterways crossing the Delta, the Army modernized the concept of riverine warfare employed during the Civil War by Union forces on the Mississippi River and by the French during the Indochina War. The Mobile Riverine Force utilized a joint Army-Navy task force controlled by a ground commander.
In contrast to amphibious operations, where control reverts to the ground commander only after the force is ashore, riverine warfare was an extension of land combat, with infantry units traveling by water rather than by trucks or tracked vehicles. Aided by a Navy river support squadron and river assault squadron, infantrymen were housed on barracks ships and supported by gunships or fire support boats called monitors. Howitzers and mortars mounted on barges provided artillery support. The 2d Brigade, 9th Infantry Division, began operations against the Cam Son Secret Zone, approximately 10 miles west of Dong Tam, in May 1967. Meanwhile, the war of main force units along the borders waxed and waned in relation to seasonal weather cycles, which affected the enemy's pattern of logistical activity, his ability to infiltrate men and supplies from North Vietnam, and his penchant for meticulous preparation of the battlefield. By the fall of 1967, enemy activity had increased again in the base areas, and sizable forces began appearing along South Vietnam's border from the demilitarized zone to III Corps. By the year's end, American forces had returned to War Zone C to screen the Cambodian border to prevent Communist forces from re-entering South Vietnam. Units of the 25th Infantry Division that had been conducting operations in the vicinity of Saigon moved to the border. Elements of the 1st Infantry Division had resumed road-clearing operations along Route 13, but the division soon faced another major enemy effort to capture Loc Ninh. On 29 October Viet Cong units assaulted the CIDG camp and the district command post, breaching the defense perimeter. Intense air and artillery fire prevented its complete loss. Within a few hours, South Vietnamese and U.S. reinforcements reached Loc Ninh, their arrival made possible by the enemy's failure to capture the local airstrip. When the build-up ended, ten Army battalions were positioned within Loc Ninh and between the town and the Cambodian border. During the next two days allied units warded off repeated enemy attacks as Communist forces desperately tried to score a victory. Tactical air support and artillery fire prevented the enemy from massing, though he outnumbered allied forces by about ten to one. At the end of a ten-day battle, over 800 enemy were left on the battlefield, while allied deaths numbered only 50. Some 452 close air support sorties, 8 B-52 bomber strikes, and 30,125 rounds of artillery had been directed at the enemy. Once again, Loc Ninh had served as a lightning rod to attract U.S. forces to the border. The pattern of two wars—one in the villages, one on the border—continued without decision.

Army Operations in II and I Corps, 1965-1967

Spearheaded by at least three NVA regiments, Communist forces mounted a strong offensive in South Vietnam's Central Highlands during the summer of 1965, overrunning border camps and besieging some district towns. Here the enemy threatened to cut the nation in two. To meet the danger, Westmoreland proposed to introduce the newly organized Army airmobile division, the 1st Cavalry Division, with its large contingent of helicopters, directly into the highlands. Some of his superiors in Hawaii and Washington opposed this plan, preferring to secure coastal bases. Though Westmoreland contended that enclave security made poor use of U.S. mobility and offensive firepower, he was unable to overcome the fear of an American Dien Bien Phu if a unit in the highlands should be isolated and cut off from the sea.
In the end, the deployment of Army forces to II Corps reflected a compromise. As additional American and South Korean forces arrived during 1965 and 1966, they often reinforced South Vietnamese efforts to secure coastal enclaves, usually centered on the most important cities and ports. (Map 49) At Phan Thiet, Tuy Hoa, Qui Nhon, Nha Trang, and Cam Ranh Bay, allied forces provided area security, not only protecting the ports and logistical complexes that developed in many of these locations, but also assisting Saigon's forces to expand the pacified zone that extended from the urban cores to the countryside. Here, as in III Corps, Westmoreland addressed two enemy threats. Local insurgents menaced populated areas along the coastal plain, while enemy main force units intermittently pushed forward in the western highlands. Between the two regions stretched the Piedmont, a transitional area in whose lush valleys lived many South Vietnamese. In the Piedmont's craggy hills and jungle-covered uplands, local and main force Viet Cong units had long flourished by exacting food and taxes from the lowland population through a well-entrenched shadow government. Although the enemy's bases in the Piedmont did not have the notoriety of the secret zones near Saigon, they served similar purposes, harboring units, command centers, and training and logistical facilities. Extensions of the Ho Chi Minh Trail ran from the highlands through the Piedmont to the coast, facilitating the movement of enemy units and supplies from province to province. To be effective, allied operations on the coast had to uproot local units living amid the population and to eradicate the enemy base areas in the Piedmont, together with the main force units that supported the village and hamlet guerrillas. Despite their sparse population and limited economic resources, the highlands had a strategic importance equal to and perhaps greater than that of the coastal plain. Around the key highland towns—Pleiku, Kontum, Ban Me Thuot, and Da Lat—South Vietnamese and U.S. forces had created enclaves. Allied forces protected the few roads that traversed the highlands, screened the border, and reinforced outposts and Montagnard settlements from which the irregulars and Army Special Forces sought to detect enemy cross-border movements and to strengthen tribal resistance to the Communists. Such border posts and tribal camps, rather than major towns, most often were the object of enemy attacks. Combined with road interdiction, such attacks enabled the Communists to disperse the limited number of defenders and to discourage the maintenance of outposts. Such actions served a larger strategic objective. The enemy planned to develop the highlands into a major base area from which to mount or support operations in other areas. A Communist-dominated highlands would be a strategic fulcrum, enabling the enemy to shift the weight of his operations to any part of South Vietnam. The highlands also formed a "killing zone" where Communist forces could mass. Challenging American forces had become the principal objective of leaders in Hanoi, who saw their plans to undermine Saigon's military resistance thwarted by U.S. intervention. Salient victories against Americans, they believed, might deter a further build-up and weaken Washington's resolve to continue the war.
The 1st Cavalry Division (Airmobile) moved with its 435 helicopters into this hornet's nest in September 1965, establishing its main base at An Khe, a government stronghold on Route 19, halfway between the coastal port of Qui Nhon and the highland city of Pleiku. The location was strategic: at An Khe the division could help to keep open the vital east-west road from the coast to the highlands and could pivot between the highlands and the coastal districts, where the Viet Cong had made deep inroads. Meanwhile, the 1st Brigade, 101st Airborne Division, had begun operations in the rugged Song Con valley, about 18 miles northeast of An Khe. Here, on 15 September, one battalion ran into heavy fire from an enemy force in the tree line around its landing zone. Four helicopters were lost and three company commanders killed; reinforcements could not land because of the intense enemy fire. With the fight at close quarters, the Americans were unable to call in close air support, armed gunships, and artillery fire, except at the risk of their own lives. But as the enemy pressed them back, supporting fires were placed almost on top of the contending forces. At dusk the fighting subsided; as the Americans steeled themselves for a night attack, the enemy, hard hit by almost 100 air strikes and 11,000 rounds of artillery, slipped away. Inspection of the battlefield revealed that the Americans had unwittingly landed in the midst of a heavily bunkered enemy base. The fight had many hallmarks of highland battles that were to come. Americans had little information about enemy forces or the area of operations; the enemy could "hug" Army units to nullify their massive advantage in firepower. In compensation, the enemy underestimated the accuracy of such fire and the willingness of U.S. commanders to call it in even when fighting at close quarters. Finally, enemy forces when pressed too hard could usually escape, and pursuit, as a rule, was futile. Less than a month later the newly arrived airmobile division received its own baptism of combat. The North Vietnamese Army attacked a Special Forces camp at Plei Me; when it was repulsed, Westmoreland directed the division to launch an offensive to locate and destroy enemy regiments that had been identified in the vicinity of the camp. The result was the battle of the Ia Drang valley, named for a small river that flowed through the area of operations. For thirty-five days the division pursued and fought the 32d, 33d, and 66th North Vietnamese Regiments, until the enemy, suffering heavy casualties, returned to his bases in Cambodia. With scout platoons of its air cavalry squadron covering front and flanks, each battalion of the division's 1st Brigade established company bases from which patrols searched for enemy forces. For several days neither ground patrols nor aero-scouts found any trace, but on 4 November the scouts spotted a regimental aid station several miles west of Plei Me. Quick-reacting aerorifle platoons converged on the site. Hovering above, the airborne scouts detected an enemy battalion nearby and attacked from UH-1B gunships with aerial rockets and machine guns. Operating beyond the range of their ground artillery, Army units engaged the enemy in an intense firefight. Again enemy troops "hugged" American forces, then broke contact as reinforcements began to arrive.
The search for the main body of the enemy continued for the next few days, with Army units concentrating their efforts in the vicinity of the Chu Pong Massif, a mountain near the Cambodian border that was believed to be an enemy base. Communist forces were given little rest, as patrols harried and ambushed them. The enemy attacked an American patrol base, Landing Zone MARY, at night, but was repulsed by the first night air assault into a defensive perimeter under fire, accompanied by aerial rocket fire. The heaviest fighting was yet to come. As the division began the second stage of its campaign, enemy forces began to move out of the Chu Pong base. Units of the 1st Cavalry Division advanced to establish artillery bases and landing zones at the base of the mountain. Landing Zone X-RAY was one of several U.S. positions vulnerable to attack by the enemy forces that occupied the surrounding high ground. Here on 14 November began fighting that pitted three battalions against elements of two NVA regiments. Withstanding repeated mortar attacks and infantry assaults, the Americans used every means of firepower available to them—the division's own gunships, massive artillery bombardment, hundreds of strafing and bombing attacks by tactical aircraft, and earth-shaking bombs dropped by B-52 bombers from Guam—to turn back a determined enemy. The Communists lost 600 dead, the Americans 79. Although badly hurt, the enemy did not leave the Ia Drang valley. Elements of the 66th North Vietnamese Regiment moving east toward Plei Me encountered an American battalion on 17 November, a few miles north of X-RAY. The fight that resulted was a gory reminder of the North Vietnamese mastery of the ambush. The Communists quickly snared three U.S. companies in their net. As the trapped units struggled for survival, nearly all semblance of organized combat disappeared in the confusion and mayhem. Neither reinforcements nor effective firepower could be brought in. At times combat was reduced to valiant efforts by individuals and small units to avert annihilation. When the fighting ended that night, 60 percent of the Americans were casualties, and almost one of every three soldiers in the battalion had been killed. Lauded as the first major American triumph of the Vietnam War, the battle of the Ia Drang valley was in truth a costly and problematic victory. The airmobile division, committed to combat less than a month after it arrived in-country, relentlessly pursued the enemy for thirty-five days over difficult terrain and defeated three NVA regiments. In part, its achievements underlined the flexibility that Army divisions had gained in the early 1960's under the Reorganization Objective Army Division (ROAD) concept. Replacing the pentomic division with its five lightly armed battle groups, the ROAD division, organized around three brigades, facilitated the creation of brigade and battalion task forces tailored to respond and fight in a variety of military situations. The newly organized division reflected the Army's embrace of the concept of flexible response and proved eminently suitable for operations in Vietnam. The helicopter was given great credit as well. Nearly every aspect of the division's operations was enhanced by its airmobile capacity. Artillery batteries were moved sixty-seven times by helicopter. Intelligence, medical, and all manner of logistical support benefited as well from the speed and flexibility provided by helicopters.
Despite the fluidity of the tactical situation, airmobile command and control procedures enabled the division to move and to keep track of its units over a large area, and to accommodate the frequent and rapid changes in command arrangements as units were moved from one headquarters to another. Yet for all the advantages that the division accrued from airmobility, its performance was not without blemish. Though the conduct of division-size airmobile operations proved tactically sound, two major engagements stemmed from the enemy's initiative in attacking vulnerable American units. On several occasions massive air and artillery support provided the margin of victory (if not survival). Above all, the division's logistical self-sufficiency fell short of expectations. It could support only one brigade in combat at a time, for prolonged and intense operations consumed more fuel and ammunition than the division's helicopters and fixed-wing Caribou aircraft could supply. Air Force tactical airlift became necessary for resupply. Moreover, in addition to combat losses and damage, the division's helicopters suffered from heavy use and from the heat, humidity, and dust of Vietnam, taxing its maintenance capacity. Human attrition was also high; hundreds of soldiers, the equivalent of almost a battalion, fell victim to a resistant strain of malaria peculiar to Vietnam's highlands. Westmoreland's satisfaction in blunting the enemy's offensive was tempered by concern that enemy forces might re-enter South Vietnam and resume their offensive while the airmobile division recuperated at the end of November and during most of December. He thus requested immediate reinforcements from the Army's 25th Infantry Division, based in Hawaii and scheduled to deploy to South Vietnam in the spring of 1966. By the end of 1965, the division's 3d Brigade had been airlifted to the highlands and, within a month of its arrival, had joined elements of the 1st Cavalry Division to launch a series of operations to screen the border. Army units did not detect any major enemy forces trying to cross from Cambodia into South Vietnam. Each operation, however, killed hundreds of enemy soldiers and refined airmobile techniques, as Army units learned to cope with the vast territorial expanse and difficult terrain of the highlands. In Operation MATADOR, for example, air strikes were used to blast holes in the forests, enabling helicopters to bring in heavy engineer equipment to construct new landing zones for use in future operations. Operation LINCOLN, a search and destroy operation on the Chu Pong Massif, featured combined armor and airmobile operations; air cavalry scouts guided armored vehicles of the 3d Brigade, 25th Infantry Division, as they operated in a lightly wooded area near Pleiku City. Also in LINCOLN, Army engineers, using heli-lifted equipment, in two days cleared and constructed a runway to handle C-130 air transports in an area inaccessible by road. Despite the relative calm that followed the Ia Drang fighting, the North Vietnamese left no doubt of their intent to continue infiltration and to challenge American forces along the highland border. In February 1966 enemy forces overran the Special Forces camp at A Shau, in the remote northwest corner of I Corps. The loss of the camp had long-term consequences, enabling the enemy to make the A Shau valley a major logistical base and staging area for forces infiltrating into the Piedmont and coastal areas.
The loss also highlighted certain differences between operational concepts of the Army and the marines. Concentrating their efforts in the coastal districts of I Corps and lacking the more extensive helicopter support enjoyed by Army units, the marines avoided operations in the highlands. On the other hand, Army commanders in II Corps sought to engage the enemy as close to the border as possible and were quick to respond to threats to Special Forces camps in the highlands. Operations near the border were essential to Westmoreland's efforts to keep main force enemy units as far as possible from heavily populated areas. For Hanoi's strategists, however, a reciprocal relation existed between highlands and coastal regions. Here, as in the south, the enemy directed his efforts to preserving his own influence among the population near the coast, from which he derived considerable support. At the same time, he maintained a constant military threat in the highlands to divert allied forces from efforts at pacification. Like the chronic shifting of units from the neighborhood of Saigon to the war zones in III Corps, the frequent movement of American units between coast and border in II Corps reflected the Communist desire to relieve allied military pressure whenever guerrilla and local forces were endangered. In its broad outlines, Hanoi's strategy to cope with U.S. forces was the same employed by the Viet Minh against the French and by Communist forces in 1964 and 1965 against the South Vietnamese Army. Whether it would be equally successful remained to be seen. The airmobile division spent the better part of the next two years fighting Viet Cong and NVA main force units in the coastal plain and Piedmont valleys of Binh Dinh Province. Here the enemy had deep roots, while pacification efforts were almost dead. Starting in early 1966, the 1st Cavalry Division embarked on a series of operations against the 2d Viet Cong and the 18th and 22d North Vietnamese Regiments of the 3d North Vietnamese Division (the Yellow Star Division). For the most part, the 1st Cavalry Division operated in the Bong Son plain and the adjacent hills, from which enemy units reinforced the hamlet and village guerrillas who gathered in taxes, food, and recruits. As in the highlands, the division exploited its airmobility, using helicopters to establish positions in the upper reaches of the valleys. Its troops sought to flush the enemy from his hiding places and drive him toward the coast, where American, South Vietnamese, and South Korean forces held blocking positions. When trapped, the enemy was attacked by ground, naval, and air fire. The scheme was a new version of an old tactical concept, the "hammer and anvil," with the coastal plain and the natural barrier formed by the South China Sea forming the anvil or killing zone. Collectively the operations became known as the Binh Dinh Pacification Campaign. For forty-two days elements of the airmobile division scoured the An Lao and Kim Son valleys, pursuing enemy units that had been surprised and routed from the Bong Son plain. Meanwhile, Marine forces in neighboring Quang Ngai Province in southern I Corps sought to bar the enemy's escape routes to the north. The enemy units evaded the Americans, but thousands of civilians fled from the Viet Cong-dominated valleys to government-controlled areas.
Although the influx of refugees taxed the government's already strained relief services, the exodus of peasants weakened the Viet Cong's infrastructure and aimed a psychological blow at the enemy's prestige. The Communists had failed either to confront the Americans or to protect the population over which they had gained control. Failing to locate the fleeing enemy in the An Lao valley, units of the airmobile division assaulted another enemy base area, a group of valleys and ridges southwest of the Bong Son plain known as the Crow's Foot or the Eagle's Claw. Here some Army units sought to dislodge the enemy from his upland bases while others established blocking positions at the "toe" of each valley, where it found outlet to the plain. In six weeks over 1,300 enemy soldiers were killed. Enemy forces in northern Binh Dinh Province were temporarily thrown off balance. Beyond this, the long-term effects of the operation were unclear. The 1st Cavalry Division did not stay in one area long enough to exploit its success. Whether the Saigon government could marshal its forces effectively to provide local security and to reassert its political control remained to be seen. Later operations continued to harass an elusive foe. Launching a new attack without the extensive preparatory reconnaissance that often alerted the enemy, Army units again surprised him in the Bong Son area but soon lost contact. The next move was against an enemy build-up in the vicinity of the Vinh Thanh Special Forces Camp. Here the Green Berets watched the "Oregon Trail," an enemy infiltration corridor that passed through the Vinh Thanh valley from the highlands to the coast. Forestalling the attack, Army units remained in the area, where they conducted numerous patrols and made frequent contact with the enemy. (One U.S. company came close to being overrun in a ferocious firefight.) But again the action had little enduring effect, except to increase the enemy's caution by demonstrating the airmobile division's agility in responding to a threat. After a brief interlude in the highlands, the division returned to Binh Dinh Province in September 1966. Conditions in the Bong Son area differed little from those the division had first encountered. For the most part, the Viet Cong rather than the Saigon government had been successful in reasserting their authority, and pacification was at a standstill. The division devoted most of its resources for the remainder of 1966 and throughout 1967 to supporting renewed efforts at pacification. In the fall of 1966, for the first time in a year, all three of the division's brigades were reunited and operating in Binh Dinh Province. Although elements of the division were occasionally transferred to the highlands as the threat there waxed and waned, the general movement of forces was toward the north. Army units increasingly were sent to southern I Corps during 1967, replacing Marine units in operations similar to those in Binh Dinh Province. In one such operation the familiar pattern of hammer and anvil was tried anew, with some success. The 1st Cavalry Division opened with a multibattalion air assault in an upland valley to flush the enemy toward the coast, where allied ground and naval forces were prepared to bar his escape. Enemy forces had recently left their mountain bases to plunder the rice harvest and to harass South Vietnamese forces providing security for provincial elections. These units were caught with their backs to the sea.
For most of October, allied forces sought to destroy the main body of a Communist regiment isolated on the coast and to seize an enemy base in the nearby Phu Cat Mountain. The first phase consisted of several sharp combat actions near the coastal hamlet of Hoa Hoi. With South Vietnamese and U.S. naval forces blocking an escape by sea, the encircled enemy fought desperately to return to the safety of his bases in the upland valleys. His plight was compounded when floods forced his troops out of their hiding places and exposed them to attacks. After heavy losses, remnants of the regiment divided into small parties that escaped through allied lines. As contacts with the enemy diminished on the coast, American efforts shifted inland, with several sharp engagements occurring when enemy forces tried to delay pursuit or to divert the allies from entering base areas. By the end of October, as the Communists retreated north and west, the running fight had accounted for over 2,000 enemy killed. Large caches of supplies, equipment, and food were uncovered, and the Viet Cong's shadow government in some coastal hamlets and villages was severely damaged, some hamlets reverting to government control for the first time in several years. Similar operations continued through 1967 and into early 1968. In addition to offensive operations against enemy main forces, Army units in Binh Dinh worked in close co-ordination with South Vietnamese police, Regional and Popular Forces, and the South Vietnamese Army to help the Saigon government gain a foothold in villages and hamlets dominated or contested by the Communists. The 1st Cavalry Division adopted a number of techniques in support of pacification. Army units frequently participated in cordon and search operations: airmobile forces seized positions around a hamlet or village at dawn to prevent the escape of local forces or cadres, while South Vietnamese authorities undertook a methodical house-to-house search. The Vietnamese checked the legal status of residents, took a census, and interrogated suspected Viet Cong to obtain more information about the enemy's local political and military apparatus. At the same time, allied forces engaged in a variety of civic action and psychological operations; specially trained pacification cadres established the rudiments of local government and provided various social and economic services. At other times, the division might participate in "checkpoint and snatch" operations, establishing surprise roadblocks and inspecting traffic on roads frequented by the insurgents. Although much weakened by such methods, enemy forces found opportunities to attack American units. They aimed both to win a military victory and to remind the local populace of their presence and power. An attack on Landing Zone BIRD, an artillery base on the Bong Son plain, was one such example. Taking advantage of the Christmas truce of 1966, enemy units moved into position and mounted a ferocious attack as soon as the truce ended. Although portions of the base were overrun, the onslaught was checked when artillerymen lowered their guns and fired Beehive antipersonnel rounds directly into the waves of oncoming enemy troops. Likewise, several sharp firefights occurred immediately after the 1967 Tet truce, when the enemy took advantage of the cease-fire to move back among the population. This time units of the 1st Cavalry Division forced the enemy to leave the coastal communities and seek refuge in the Piedmont.
As the enemy moved across the boundary into southern I Corps, so too did units of the airmobile division. About a month later, the 3d Brigade, 25th Infantry Division, also moved to southern I Corps. Throughout the remainder of 1967, other Army units transferred either to I Corps to reinforce the marines or to the highlands to meet renewed enemy threats. As the strength of American units committed to the Binh Dinh Pacification Campaign decreased during late 1967 and early 1968, enemy activity in the province quickened as the Viet Cong sought to reconstitute their weakened military forces and to regain a position of influence among the local population. In many respects, the Binh Dinh campaign was a microcosm of Westmoreland's over-all campaign strategy. It showed clearly the intimate relation between the war against enemy main force units and the fight for pacification waged by the South Vietnamese, and it demonstrated the effectiveness of the airmobile concept. After two years of persistent pursuit of the NVA's Yellow Star Division, the 1st Cavalry Division had reduced the combat effectiveness of each of the enemy division's three regiments. By the end of 1967, the threat to Binh Dinh Province posed by enemy main force units had been markedly reduced. The airmobile division's operations against the 3d North Vietnamese Division, as well as its frequent role in operations directly in support of pacification, had weakened local guerrilla forces and created an environment favorable to pacification. The campaign in Binh Dinh also exposed the vulnerabilities of Westmoreland's campaign strategy. Despite repeated defeats at the hands of the Americans, the three NVA regiments still existed. They contrived to find respite and a measure of rehabilitation, building their strength anew with recruits filtering down from the North, with others found in-country, and with Viet Cong units consolidated into their ranks. Although much weakened, Communist forces persistently returned to areas cleared by the 1st Cavalry Division. Even more threatening to the allied cause, Saigon's pacification efforts languished as South Vietnamese forces failed in many instances to provide security to the villages and effective police action to root out local Viet Cong cadres. And the government, dealing with a population already skeptical, failed to grant the political, social, and economic benefits it had promised.

The Highlands: Progress or Stalemate?

Moreover, the allies could not concentrate their efforts everywhere as they had in strategic Binh Dinh. The expanse of the highlands compelled Army operations there to be carried out with economy of force. During 1966 and 1967, the Americans engaged in a constant search for tactical concepts and techniques to maximize their advantages of firepower and mobility and to compensate for the constraints of time, distance, difficult terrain, and an inviolable border. Here the war was fought primarily to prevent the incursion of NVA units into South Vietnam and to erode their combat strength. In the highlands, each side pursued a strategy of military confrontation, seeking to weaken the fighting forces and will of its opponent through attrition. Each sought military victories to convince opposing leaders of the futility of continuing the contest. For the North Vietnamese, however, confrontation in the highlands had the additional purpose of relieving allied pressure in other areas, where pacification jeopardized their hold on the rural population.
Of all the factors influencing operations in the highlands, the most significant may well have been the strength and success of pacification elsewhere. For Americans, the most difficult problem was to locate the enemy. Yet Communist strategists sometimes created threats to draw in the Americans. Recurrent menaces to Special Forces camps reflected the enemy's seasonal cycle of operations, his desire to harass and eliminate such camps, and his hope of luring allied forces into situations where he held the military advantages. Thus Army operations in the highlands during 1966 and 1967 were characterized by wide-ranging, often futile searches, punctuated by sporadic but intense battles fought usually at the enemy's initiative. For the first few months of 1966, the Communists lay low. In May, however, a significant concentration of enemy forces appeared in Pleiku and Kontum Provinces. The 1st Brigade, 101st Airborne Division, the reserve of I Field Force, was summoned to Pleiku and subsequently moved to Dak To, a CIDG camp in northern Kontum Province, to assist a besieged South Vietnamese force at the nearby government post at Toumorong. Although the 24th North Vietnamese Regiment had surrounded Toumorong, allied forces secured the road to Dak To and evacuated the government troops, leaving one battalion of the 101st inside the abandoned camp and one company in an exposed defensive position in the jungle a short distance beyond. On the night of 6 June a large North Vietnamese force launched repeated assaults on this lone company. Facing disaster, the commander called in air strikes on his own position to stop the enemy's human-wave attacks. Relief arrived the next morning, as additional elements of the brigade were heli-lifted to the battlefield to pursue and trap the North Vietnamese. Fighting to close off the enemy's escape routes, the Americans called in renewed air strikes, including B-52's. By 20 June enemy resistance had ended, and the NVA regiment that had begun the fighting, leaving its dead behind, escaped to the safety of its Laotian base. Although the enemy's push in Kontum Province was blunted, the siege of Toumorong was only one aspect of his summer offensive in the highlands. Suspecting that NVA forces meant to return to the Ia Drang, Westmoreland sent the 3d Brigade, 25th Infantry Division, back into the valley in May. Dividing the area into "checkerboard" squares, the brigade methodically searched each square. Small patrols set out ambushes and operated for several days without resupply to avoid having helicopters reveal their location. After several days in one square, the patrols leapfrogged by helicopter to another. Though the Americans made only light, sporadic contacts, the cumulative toll of enemy killed was equal to that of many short, violent battles. One significant contact was made in late May near the Chu Pong Massif. A running battle ensued, as the enemy again sought safety in Cambodia. Westmoreland now appealed to Washington for permission to maneuver Army units behind the enemy, possibly into Cambodian territory. But officials refused, fearing international repercussions, and the NVA sanctuary remained inviolate. Yet the operation confirmed that sizable enemy forces had returned to South Vietnam and, as in the fall of 1965, were threatening the outposts at Plei Me and Duc Co. To meet the renewed threat, I Field Force sent additional Army units to Pleiku Province and launched a new operation under the 1st Cavalry Division.
The action followed the now familiar pattern of extensive heli-lifts, establishment of patrol bases, and intermittent contact with an enemy who usually avoided American forces. When the Communists elected to fight, they preferred to occupy high ground; dislodging them from hilltop bunkers was a difficult task, requiring massive air and artillery support. By the time the enemy left Pleiku again at the end of August, his forces had incurred nearly 500 deaths. Border battles continued, however, and some were sharp. When enemy forces appeared in strength around a CIDG camp at Plei Djering in October, elements of the 4th Infantry and 1st Cavalry Divisions rapidly reinforced the camp, clashing with the enemy in firefights during October and November. As North Vietnamese forces began to withdraw through the Plei Trap valley, the 1st Brigade, 101st Airborne Division, was airlifted from Phu Yen to northern Kontum to try to block their escape, but failed to trap them before they reached the border. Army operations in the highlands were continued by the 4th Infantry Division. In addition to screening the border to detect infiltration, the division constructed a new road between Pleiku and the highland outpost at Plei Djering and helped the Saigon government resettle thousands of Montagnards in secure camps. Contact with the enemy generally was light, the heaviest occurring in mid-February 1967, in an area west of the Nam Sathay River near the Cambodian border, when Communist forces unsuccessfully tried to overrun several American fire bases. Despite infrequent contacts, however, 4th Division troops killed 700 enemy over a period of three months. In I Corps as well, the enemy seemed intent on dispersing American forces to the border regions. Heightened activity along the demilitarized zone drew marines from southern I Corps. To replace them, Army units were transferred from III and II Corps to the area vacated by the marines, among them the 196th Infantry Brigade, which was pulled out of Operation JUNCTION CITY, and the 3d Brigade, 25th Infantry Division, which had been operating in the II Corps Zone. Together with the 1st Brigade, 101st Airborne Division, these units formed Task Force OREGON, activated on 12 April 1967 and placed under the operational control of the III Marine Amphibious Force. Army infantry units were now operating in all four of South Vietnam's corps areas. Once at Chu Lai, the Army forces supported an extensive South Vietnamese pacification effort in Quang Tin Province. To the north, along the demilitarized zone, Army heavy artillery engaged in almost daily duels with NVA guns north of the zone. In Quang Tri Province, the marines fought a hard twelve-day battle to prevent NVA forces from dominating the hills surrounding Khe Sanh. The enemy's heightened military activity along the demilitarized zone, which included frontal attacks across it, prompted American officials to begin construction of a barrier consisting of highly sophisticated electronic and acoustical sensors and strongpoint defenses manned by allied forces. Known as the McNamara Line, after Secretary of Defense Robert S. McNamara, who vigorously promoted the concept, the barrier was to extend across South Vietnam and eventually into Laos. Westmoreland was not enthusiastic about the project, for he hesitated to commit large numbers of troops to man the strongpoints and doubted that the barrier would prevent the enemy from breaching the demilitarized zone. Hence the McNamara Line was never completed.
Throughout the summer of 1967, Marine forces endured some of the most intense enemy artillery barrages of the war and fought several battles with NVA units that infiltrated across the 17th parallel. Their stubborn defense, supported by massive counterbattery fire, naval gunfire, and air attacks, ended the enemy's offensive in northern I Corps, but not before Westmoreland had to divert additional Army units as reinforcements. A brigade of the 1st Cavalry Division and South Korean units were deployed to southern I Corps to replace additional marines who had been shifted further north. The depth of the Army's commitment in I Corps was shown by Task Force OREGON's reorganization as the 23d Infantry Division (Americal). The only Army division to be formed in South Vietnam, its name echoed a famous division of World War II that had also been organized in the Pacific. If the enemy's aim was to draw American forces to the north, he evidently was succeeding. Even as Westmoreland shifted allied forces from II Corps to I Corps, fighting intensified in the highlands. After Army units made several contacts with enemy forces during May and June, Westmoreland moved the 173d Airborne Brigade from III Corps to II Corps to serve as the I Field Force's strategic reserve. Within a few days, however, the brigade was committed to an effort to forestall enemy attacks against the CIDG camps of Dak To, Dak Seang, and Dak Pek in northern Kontum Province. Under the control of the 4th Infantry Division, the operation continued throughout the summer until the enemy threat abated. A few months later, however, reconnaissance patrols in the vicinity of Dak To detected a rapid and substantial build-up of enemy forces in regimental strength. Believing an attack to be imminent, 4th Infantry Division forces reinforced the garrison. In turn, the 173d Airborne Brigade returned to the highlands, arriving on 2 November. From 3 to 15 November enemy forces estimated to number 12,000 probed, harassed, and attacked American and South Vietnamese positions along the ridges and hills surrounding the camp. As the attacks grew stronger, more U.S. and South Vietnamese reinforcements were sent, including two battalions from the airmobile division and six ARVN battalions. By mid-November allied strength approached 8,000. Despite daily air and artillery bombardments of their positions, the North Vietnamese launched two attacks against Dak To on 15 November, destroying two C-130 aircraft and causing severe damage to the camp's ammunition dump. Allied forces strove to dislodge the enemy from the surrounding hills, but the North Vietnamese held fast in fortified positions. The center of enemy resistance was Hill 875; here, two battalions of the 173d Airborne Brigade made a slow and painful ascent against determined resistance and under grueling physical conditions, fighting for every foot of ground. Enemy fire was so intense and accurate that at times the Americans were unable to bring in reinforcements by helicopter or to provide fire support. In fighting that resembled the hill battles of the final stage of the Korean War, the confusion at Dak To pitted soldier against soldier in classic infantry battle. In desperation, beleaguered U.S. commanders on Hill 875 called in artillery and even B-52 air strikes at perilously close range to their own positions. On 17 November American forces at last gained control of Hill 875. The battle of Dak To was the longest and most violent in the highlands since the battle of the Ia Drang two years before.
Enemy casualties numbered in the thousands, with an estimated 1,400 killed. Americans had suffered too. Approximately one-fifth of the 173d Airborne Brigade had become casualties, with 174 killed, 642 wounded, and 17 missing in action. If the battle of the Ia Drang exemplified airmobility in all its versatility, the battle of Dak To, with the arduous ascent of Hill 875, epitomized infantry combat at its most basic and the crushing effect of supporting air power. Yet Dak To was only one of several border battles in the waning months of 1967. At Song Be and Loc Ninh in III Corps, and all along the northern border of I Corps, the enemy exposed his positions in order to confront U.S. forces in heavy fighting. By the end of 1967 the 1st Infantry Division had again concentrated near the Cambodian border, and the 25th Infantry Division had returned to War Zone C. The enemy's threat in I Corps caused Westmoreland to disperse more Army units. In the vacuum left by their departure, local Viet Cong sought to reconstitute their forces and to reassert their control over the rural population. In turn, Viet Cong revival often was a prelude to the resurgence of Communist military activity at the district and village level. Hard pressed to find additional Army units to shift from III Corps and II Corps to I Corps, Westmoreland asked the Army to accelerate deployment of two remaining brigades of the 101st Airborne Division from the United States. Arriving in December 1967, the brigades were added to the growing number of Army units operating in the northern provinces. While allied forces were under pressure, the border battles of 1967 also led to a reassessment of strategy in Hanoi. Undeviating in their long-term aim of unification, the leaders of North Vietnam recognized that their strategy of military confrontation had failed to stop the American military buildup in the South or to reduce U.S. military pressure on the North. The enemy's regular and main force units had failed to inflict a salient military defeat on American forces. Although the North Vietnamese Army maintained the tactical initiative, Westmoreland had kept its units at bay and in some areas, like Binh Dinh Province, diminished their influence on the contest for control of the rural population. Many Communist military leaders perceived the war to be a stalemate and thought that continuing on their present course would bring diminishing returns, especially if their local forces were drastically weakened. On the other side, Westmoreland could rightly point to some modest progress in improving South Vietnam's security and to punishing defeats inflicted on several NVA regiments and divisions. Yet none of his successes were sufficient to turn the tide of the war. The Communists had matched the build-up of American combat forces, the number of enemy divisions in the South increasing from one in early 1965 to nine at the start of 1968. Against 320 allied combat battalions, the North Vietnamese and Viet Cong could marshal 240. Despite heavy air attacks against enemy lines of infiltration, the flow of men from the North had continued unabated, even increasing toward the end of 1967. Although the Military Assistance Command had succeeded in warding off defeat in 1965 and had gained valuable time for the South Vietnamese to concentrate their political and military resources on pacification, security in many areas of South Vietnam had improved little.
Americans noted that the Viet Cong, in one district within artillery range of Saigon, rarely had any unit as large as a company. Yet, relying on booby traps, mines, and local guerrillas, they tied up over 6,000 American and South Vietnamese troops. More and more, success in the South seemed to depend not only on Westmoreland's ability to hold off and weaken enemy main force units, but on the equally important efforts of the South Vietnamese Army, the Regional and the Popular Forces, and a variety of paramilitary and police forces to pacify the countryside. Writing to President Johnson in the spring of 1967, outgoing Ambassador Henry Cabot Lodge warned that if the South Vietnamese "dribble along and do not take advantage of the success which MACV has achieved against the main force and the Army of North Viet-Nam, we must expect that the enemy will lick his wounds, pull himself together and make another attack in '68." Westmoreland's achievements, he added, would be "judged not so much on the brilliant performance of the U.S. troops as on the success in getting ARVN, RF and PF quickly to function as a first-class . . . counter-guerrilla force." Meanwhile the war appeared to be in a state of equilibrium. Only an extraordinary effort by one side or the other could bring a decision.

The Tet Offensive

The Tet offensive marked a unique stage in the evolution of North Vietnam's People's War. Hanoi's solution to the stalemate in the South was the product of several factors. North Vietnam's large unit war was unequal to the task of defeating American combat units. South Vietnam was becoming politically and militarily stronger, while the Viet Cong's grip over the rural population eroded. Hanoi's leaders suspected that the United States, frustrated by the slow pace of progress, might intensify its military operations against the North. (Indeed, Westmoreland had broached plans for an invasion of the North when he appealed for additional forces in 1967.) The Tet offensive was a brilliant stroke of strategy by Hanoi, designed to change the arena of war from the battlefield to the negotiating table, and from a strategy of military confrontation to one of talking and fighting. Communist plans called for violent, widespread, simultaneous military actions in rural and urban areas throughout the South—a general offensive. But as always, military action was subordinate to a larger political goal. By focusing attacks on South Vietnamese units and facilities, Hanoi sought to undermine the morale and will of Saigon's forces. Through a collapse of military resistance, the North Vietnamese hoped to subvert public confidence in the government's ability to provide security, triggering a crescendo of popular protest to halt the fighting and force a political accommodation. In short, they aimed at a general uprising. Hanoi's generals, however, were not completely confident that the general offensive would succeed. Viet Cong forces, hastily reinforced with new recruits and part-time guerrillas, bore the brunt. Except in the northern provinces, the North Vietnamese Army stayed on the sidelines, poised to exploit success. While hoping to spur negotiations, Communist leaders probably had the more modest goals of reasserting Viet Cong influence and undermining Saigon's authority so as to cast doubt on its credibility as the United States' ally.
In this respect, the offensive was directed toward the United States and sought to weaken American confidence in the Saigon government, discredit Westmoreland's claims of progress, and strengthen American antiwar sentiment. Here again, the larger purpose was to bring the United States to the negotiating table and hasten American disengagement from Vietnam. The Tet offensive began quietly in mid-January 1968 in the remote northwest corner of South Vietnam. Elements of three NVA divisions began to mass near the Marine base at Khe Sanh. At first the ominous proportions of the build-up led the Military Assistance Command to expect a major offensive in the northern provinces. To some observers the situation at Khe Sanh resembled Dien Bien Phu, the isolated garrison where the Viet Minh had defeated French forces in 1954. Khe Sanh, however, was a diversion, an attempt to entice Westmoreland to defend yet another border post by withdrawing forces from the populated areas of the South. While pressure around Khe Sanh increased, 85,000 Communist troops prepared for the Tet offensive. Since the fall of 1967, the enemy had been infiltrating arms, ammunition, and men, including entire units, into Saigon and other cities and towns. Most of these meticulous preparations went undetected, although MACV received warnings of a major enemy action to take place in early 1968. The command did pull some Army units closer to Saigon just before the attack. However, concern over the critical situation at Khe Sanh and preparations for the Tet holiday festivities preoccupied most Americans and South Vietnamese. Even when Communist forces prematurely attacked Kontum, Qui Nhon, Da Nang, and other towns in the northern and central provinces on 29 January, Americans were unprepared for what followed. On 31 January combat erupted throughout the entire country. Thirty-six of 44 provincial capitals and 64 of 242 district towns were attacked, as well as 5 of South Vietnam's 6 autonomous cities, among them Hue and Saigon. Once the shock and confusion wore off, most attacks were crushed in a few days. During those few days, however, the fighting was some of the most violent ever seen in the South or experienced by many ARVN units. Though the South Vietnamese were the main target, American units were swept into the turmoil. All Army units in the vicinity of Saigon helped to repel Viet Cong attacks there and at the nearby logistical base of Long Binh. In some American compounds, cooks, radiomen, and clerks took up arms in their own defense. Military police units helped root the Viet Cong out of Saigon, and Army helicopter gunships were in the air almost continuously, assisting the allied forces. The most tenacious combat occurred in Hue, the ancient capital of Vietnam, where the 1st Cavalry and 101st Airborne Divisions, together with marines and South Vietnamese forces, participated in the only extended urban combat of the war. Hue had a tradition of Buddhist activism, with overtones of neutralism, separatism, and anti-Americanism, and Hanoi's strategists thought that here if anywhere the general offensive-general uprising might gain a political foothold. Hence they threw North Vietnamese regulars into the battle, indicating that the stakes at Hue were higher than elsewhere in the South. House-to-house and street-to-street fighting caused enormous destruction, necessitating massive reconstruction and community assistance programs after the battle. The allies took three weeks to recapture the city. 
The slow, hard-won gains of 1967 vanished overnight as South Vietnamese and Marine forces were pulled out of the countryside to reinforce the city. Yet throughout the country the South Vietnamese forces acquitted themselves well, despite high casualties and many desertions. The attacks stunned the population, and civilian support for the Thieu government coalesced instead of weakening. Many Vietnamese for whom the war had been an unpleasant abstraction were outraged. Capitalizing on the new feeling, South Vietnam's leaders for the first time dared to enact general mobilization. The change from grudging toleration of the Viet Cong to active resistance provided an opportunity to create new local defense organizations and to attack the Communist infrastructure. Spurred by American advisers, the Vietnamese began to revitalize pacification. Most important, the Viet Cong suffered a major military defeat, losing thousands of experienced combatants and seasoned political cadres, seriously weakening the insurgent base in the South. Americans at home saw a different picture. Dramatic images of the Viet Cong storming the American Embassy in the heart of Saigon and the North Vietnamese Army clinging tenaciously to Hue obscured Westmoreland's assertion that the enemy had been defeated. Claims of progress in the war, already greeted with skepticism, lost more credibility in both public and official circles. The psychological jolt to President Johnson's Vietnam policy was redoubled when the military requested an additional 206,000 troops. Most were intended to reconstitute the strategic reserve in the United States, exhausted by Westmoreland's appeals for combat units between 1965 and 1967. But the magnitude of the new request, at a time when almost a half-million U.S. troops were already in Vietnam, cast doubts on the conduct of the war and prompted a reassessment of American policy and strategy. Without mobilization, the United States was overcommitted. The Army could send few additional combat units to Vietnam without making deep inroads on forces destined for NATO or South Korea. The dwindling strategic reserve left Johnson with fewer options in the spring of 1968 than in the summer of 1965. His problems were underscored by heightened international tensions when North Korea captured an American naval vessel, the USS Pueblo, a week before the Tet offensive; by Soviet armed intervention in Czechoslovakia in the summer of 1968; and by chronic crises in the Mideast. In addition, Army units in the United States were needed often between 1965 and 1968 to enforce federal civil rights legislation and to restore public order in the wake of civil disturbances. Again, as in 1967, Johnson refused to sanction a major troop levy, but he did give Westmoreland some modest reinforcements to bolster the northern provinces. Again tapping the strategic reserve, the Army sent him the 3d Brigade, 82d Airborne Division, and the 1st Brigade, 5th Infantry Division (Mechanized)—the last Army combat units to deploy to South Vietnam. In addition, the President called to active duty a small number of Reserve units, totaling some 40,000 men, for duty in Southeast Asia and South Korea, the only use of Reserves during the Vietnam War. For Westmoreland, Johnson's decision meant that future operations would have to make the best possible use of American forces, and that the South Vietnamese Army would have to shoulder a larger share of the war effort. The President also curtailed air strikes against North Vietnam to spur negotiations.
Finally, on 31 March Johnson announced his decision not to seek reelection in order to give his full attention to the goal of resolving the conflict. Hanoi had suffered a military defeat, but had won a political and diplomatic victory by shifting American policy toward disengagement. For the Army the new policy meant a difficult time. In South Vietnam, as in the United States, its forces were stretched thin. The Tet offensive had concentrated a large portion of the combat forces in I Corps, once a Marine preserve. A new command, the XXIV Corps, had to be activated at Da Nang, and Army logistical support, previously confined to the three southern corps zones, extended to the five northern provinces as well. While Army units reinforced Hue and the demilitarized zone, the marines at Khe Sanh held fast. Enemy pressure on the besieged base increased daily, but the North Vietnamese refrained from an all-out attack, still hoping to divert American forces from Hue. Recognizing that he could ill afford Khe Sanh's defense, Westmoreland decided to subject the enemy to the heaviest air and artillery bombardment of the war. His tactical gamble succeeded; the enemy withdrew, and the Communist offensive slackened. The enemy nevertheless persisted in his effort to weaken the Saigon government, launching nationwide "mini-Tet" offensives in May and August. Pockets of heavy fighting occurred throughout the South, and Viet Cong forces again tried to infiltrate into Saigon—the last gasps of the general offensive-general uprising. Thereafter enemy forces generally dispersed and avoided contact with Americans. In turn, the allies withdrew from Khe Sanh itself in the summer of 1968. Its abandonment signaled the demise of the McNamara Line and further postponement of MACV's hopes for large-scale American cross-border operations. For the remainder of 1968, Army units in I Corps were content to help restore security around Hue and other coastal areas, working closely with the marines and the South Vietnamese in support of pacification. North Vietnamese and Viet Cong forces generally avoided offensive operations. As armistice negotiations began in Paris, both sides prepared to enter a new phase of the war. The last phase of American involvement in South Vietnam was carried out under a broad policy called Vietnamization. Its main goal was to create strong, largely self-reliant South Vietnamese military forces, an objective consistent with that espoused by U.S. advisers as early as the 1950's. But Vietnamization also meant the withdrawal of a half-million American soldiers. Past efforts to strengthen and modernize South Vietnam's Army had proceeded at a measured pace, without the pressure of diminishing American support, large-scale combat, or the presence of formidable North Vietnamese forces in the South. Vietnamization entailed three overlapping phases: redeployment of American forces and the assumption of their combat role by the South Vietnamese; improvement of ARVN's combat and support capabilities, especially firepower and mobility; and replacement of the Military Assistance Command by an American advisory group. Vietnamization had the added dimension of fostering political, social, and economic reforms to create a vibrant South Vietnamese state based on popular participation in national political life. Such reforms, however, depended on progress in the pacification program, which never had a clearly fixed timetable. The task of carrying out the military aspects of Vietnamization fell to General Creighton W.
Abrams, who succeeded General Westmoreland as MACV commander in mid-1968, when the latter returned to the United States to become Chief of Staff of the Army. Although he had the aura of a blunt, hard-talking, World War II tank commander, Abrams had spent two years as Westmoreland's deputy, working closely with South Vietnamese commanders. Like Westmoreland before him, Abrams viewed the military situation after Tet as an opportunity to make gains in pacifying rural areas and to reduce the strength of Communist forces in the South. Until the weakened Viet Cong forces could be rebuilt or replaced with NVA forces, both guerrilla and regular Communist forces had adopted a defensive posture. Nevertheless, 90,000 NVA forces were in the South, or in border sanctuaries, waiting to resume the offensive at a propitious time. Abrams still had strong American forces; indeed, they reached their peak strength of 543,000 in March 1969. But he was also under pressure from Washington to minimize casualties and to conduct operations with an eye toward leaving the South Vietnamese in the strongest possible military position when U.S. forces withdrew. With these considerations in mind, Abrams decided to disrupt and destroy the enemy's bases, especially those near the border, to prevent their use as staging areas for offensive operations. His primary objective was the enemy's logistical support system rather than enemy main combat forces. At the same time, to enhance Saigon's pacification efforts and improve local security, Abrams intended to emphasize small unit operations, with extensive patrolling and ambushes, aiming to reduce the enemy's base of support among the rural population. To the greatest extent possible, he planned to improve ARVN's performance by conducting combined operations with American combat units. As the South Vietnamese Army assumed the lion's share of combat, it was expected to shift operations to the border and to assume a role similar to that performed by U.S. forces between 1965 and 1969. The Regional and Popular Forces, in turn, were to take over ARVN's role in area security and pacification support, while the newly organized People's Self-Defense Force took on the task of village and hamlet defense. Stressing the close connection between combat and pacification operations, the need for co-operation between American and South Vietnamese forces, and the importance of co-ordinating all echelons of Saigon's armed forces, Abrams propounded a "one war" concept. Yet even in his emphasis on combined operations and American support of pacification, Abrams' strategy had strong elements of continuity with Westmoreland's. For the first, operations in War Zones C and D in 1967 and the thrust into the A Shau valley in 1968 were ample precedents. Again, Westmoreland had laid the foundation for a more extensive U.S. role in pacification in 1967 by establishing Civil Operations Rural Development Support (CORDS). Under CORDS, the Military Assistance Command took charge of all American activities, military and civilian, in support of pacification. Abrams' contribution was to enlarge the Army's role. Under him, the U.S. advisory effort at provincial and district levels grew as the territorial forces gained in importance, and additional advisers were assigned to the Phoenix program, a concerted effort to eliminate the Communist political apparatus. Numerous mobile advisory teams helped the South Vietnamese Army and paramilitary forces to become adept in a variety of combat and support functions. 
Despite all efforts, many Americans doubted whether Saigon's armed forces could successfully play their enlarged role under Vietnamization. Earlier counterinsurgency efforts had languished under less demanding circumstances, and Saigon's forces continued to be plagued with high desertion rates, spotty morale, and shortages of high-quality leaders. Like the French before them, U.S. advisers had assumed a major role in providing and co-ordinating logistical and firepower support, leaving the Vietnamese inexperienced in the conduct of large combined-arms operations. Despite the Viet Cong's weakened condition, South Vietnamese forces also continued to incur high casualties. Similarly, pacification registered ostensible gains in rural security and other measures of progress, but such improvements often obscured its failure to establish deep roots. The Phoenix program, despite its success in seizing low-level cadres, rarely caught hard-core, high-level party officials, many of whom survived, as they had in the mid-1950's, by taking more stringent security measures. Furthermore, the program was abused by some South Vietnamese officials, who used it as a vehicle for personal vendettas. Saigon's efforts at political, social, and economic reform likewise were susceptible to corruption, venality, and nepotism. Temporary social and economic benefits for the peasantry rested on an uncertain foundation of continued American aid, as did South Vietnam's entire economy and war effort. Influencing all parts of the struggle was a new defense policy enunciated by Richard M. Nixon, who became President in January 1969. The "Nixon Doctrine" harkened back to the precepts of the New Look, placing greater reliance on nuclear retaliation, encouraging allies to accept a larger share of their own defense burden, and barring the use of U.S. ground forces in limited wars in Asia, unless vital national interests were at stake. Under this policy, American ground forces in South Vietnam, once withdrawn, were unlikely to return. For President Thieu in Saigon, the future was inauspicious. For the time being, large numbers of American forces were still present to bolster his country's war effort; what would happen when they departed, no one knew.

Military Operations, 1968-1969

Vietnamization began in earnest when two brigades of the U.S. Army's 9th Infantry Division left South Vietnam in July 1969, making the South Vietnamese Army responsible for securing the southern approaches to Saigon. The protective area that Westmoreland had developed around the capital was still intact. Allied forces engaged in a corps-wide counteroffensive to locate and destroy remnants of the enemy units that had participated in the Tet offensive, combining thousands of small unit operations, frequent sweeps through enemy bases, and persistent screening of the Cambodian border to prevent enemy main force units from returning. As the Military Assistance Command anticipated, the Communists launched a Tet offensive in 1969, but a much weaker one than a year earlier. Allied forces easily suppressed the outbreaks. Meanwhile, in critical areas around Saigon pacification had begun to take hold. Such signs of progress probably resulted mainly from the attrition of Viet Cong forces during Tet 1968. But the vigilant screening of the border contributed to the enemy's difficulty in reaching and helping local insurgent forces. Yet Saigon was not impregnable.
With increasing frequency, enemy sappers penetrated close enough to launch powerful rocket attacks against the capital. Such incidents terrorized civilians, caused military casualties, and were a violent reminder of the government's inability to protect the population. Sometimes simultaneous attacks were conducted throughout the country. An economy-of-force measure, the attacks brought little risk to the enemy and compelled allied forces to suspend other tasks while they cleared the "rocket belts" around every major urban center and base in the country. In the Central Highlands the war of attrition continued. Until its redeployment of 1970, the Army protected major highland population centers and kept open important interior roads. Special Forces worked with the tribal highlanders to detect infiltration and harass enemy secret zones. As in the past, highland camps and outposts were a magnet for enemy attacks, meant to lure reaction forces into an ambush or to divert the allies from operations elsewhere. Ben Het in Kontum Province was besieged from March to July of 1969. Other bases—Thien Phuoc and Thuong Duc in I Corps; Bu Prang, Dak Seang, and Dak Pek in II Corps; and Katum, Bu Dop, and Tong Le Chon in III Corps—were attacked because of their proximity to Communist strongholds and infiltration routes. In some cases camps had to be abandoned, but in most the attackers were repulsed. By the time the 5th Special Forces Group left South Vietnam in March 1971, all CIDG units had been converted to Regional Forces or absorbed by the South Vietnamese Rangers. The departure of the Green Berets brought an end to any significant Army role in the highlands. Following the withdrawal of the 4th and 9th Divisions, Army units concentrated around Saigon and in the northern provinces. Operating in Quang Ngai, Quang Tin, and Quang Nam Provinces, the 23d Infantry Division (Americal) conducted a series of operations in 1968 and 1969 to secure and pacify the heavily populated coastal plain of southern I Corps. Along the demilitarized zone, the 1st Brigade, 5th Infantry Division (Mechanized), helped marines and South Vietnamese forces to screen the zone and to secure the northern coastal region, including a stretch of highway, the "street without joy," that was notorious from the time of the French. The 101st Airborne Division (converted to the Army's second airmobile division in 1968) divided its attention between the defense of Hue and forays into the enemy's base in the A Shau valley. Since the 1968 Tet offensive, the Communists had restocked the A Shau valley with ammunition, rice, and equipment. The logistical build-up pointed to a possible NVA offensive in early 1969. In quick succession, Army operations were launched in the familiar pattern: air assaults, establishment of fire support bases, and exploration of the lowlands and surrounding hills to locate enemy forces and supplies. This time the Army met stiff enemy resistance, especially from antiaircraft guns. The North Vietnamese had expected the American forces and now planned to hold their ground. On 11 May 1969, a battalion of the 101st Airborne Division climbing Hill 937 found the 28th North Vietnamese Regiment waiting for it. The struggle for "Hamburger Hill" raged for ten days and became one of the war's fiercest and most controversial battles. Entrenched in tiers of fortified bunkers with well-prepared fields of fire, the enemy forces withstood repeated attempts to dislodge them. 
Supported by intense artillery and air strikes, Americans made a slow, tortuous climb, fighting hand to hand. By the time Hill 937 was taken, three Army battalions and an ARVN regiment had been committed to the battle. Victory, however, was ambiguous as well as costly; the hill itself had no strategic or tactical importance and was abandoned soon after its capture. Critics charged that the battle wasted American lives and exemplified the irrelevance of U.S. tactics in Vietnam. Defending the operation, the commander of the 101st acknowledged that the hill's only significance was that the enemy occupied it. "My mission," he said, "was to destroy enemy forces and installations. We found the enemy on Hill 937, and that is where we fought them." About one month later the 101st left the A Shau valley, and the North Vietnamese were free to use it again. American plans to return in the summer of 1970 came to nothing when enemy pressure forced the abandonment of two fire support bases needed for operations there. The loss of Fire Support Base O'REILLY, only eleven miles from Hue, was an ominous sign that enemy forces had reoccupied the A Shau and were seeking to dominate the valleys leading to the coastal plain. Until it redeployed in 1971, the 101st Airborne, with the marines and South Vietnamese forces, now devoted most of its efforts to protecting Hue. The operations against the A Shau had achieved no more than Westmoreland's large search and destroy operations in 1967. As soon as the allies left, the enemy reclaimed his traditional bases. The futility of such operations was mirrored in events on the coastal plain. Here the 23d Infantry Division fought in an area where the population had long been sympathetic to the Viet Cong. As in other areas, pacification in southern I Corps seemed to improve after the 1968 Tet offensive, though enemy units still dominated the Piedmont and continued to challenge American and South Vietnamese forces on the coast. Operations against them proved to be slow, frustrating exercises in warding off NVA and Viet Cong main force units while enduring harassment from local guerrillas and the hostile population. Except during spasms of intense combat, as in the summer of 1969 when the Americal Division confronted the 1st North Vietnamese Regiment, most U.S. casualties were caused by snipers, mines, and booby traps. Villages populated by old men, women, and children were as dangerous as the elusive enemy main force units. Operating in such conditions day after day induced a climate of fear and hate among the Americans. The already thin line between civilian and combatant was easily blurred and violated. In the hamlet of My Lai, elements of the Americal Division killed about two hundred civilians in the spring of 1968. Although only one member of the division was tried and found guilty of war crimes, the repercussions of the atrocity were felt throughout the Army. However rare, such acts undid the benefit of countless hours of civic action by Army units and individual soldiers and raised unsettling questions about the conduct of the war. What happened at My Lai could have occurred in any Army unit in Vietnam in the late 1960's and early 1970's. War crimes were born of a sense of frustration that also contributed to a host of morale and discipline problems, among enlisted men and officers alike. 
As American forces were withdrawn by a government eager to escape the war, the lack of a clear military objective contributed to a weakened sense of mission and a slackening of discipline. The short-timer syndrome, the reluctance to take risks in combat toward the end of a soldier's one-year tour, was compounded by the "last-casualty" syndrome. Knowing that all U.S. troops would soon leave Vietnam, no soldier wanted to be the last to die. Meanwhile, in the United States harsh criticism of the war, the military, and traditional military values had become widespread. Heightened individualism, growing permissiveness, and a weakening of traditional bonds of authority pervaded American society and affected the Army's rank and file. The Army grappled with problems of drug abuse, racial tensions, weakened discipline, and lapses of leadership. While outright refusals to fight were few in number, incidents of "fragging"— murderous attacks on officers and noncoms—occurred frequently enough to compel commands to institute a host of new security measures within their cantonments. All these problems were symptoms of larger social and political forces and underlined a growing disenchantment with the war among soldiers in the field. As the Army prepared to leave Vietnam, lassitude and war-weariness at times resulted in tragedy, as at Fire Support Base MARY ANN in 1971. There soldiers of the Americal Division, soon to go home, relaxed their security and were overrun by a North Vietnamese force. Such incidents reflected a decline in the quality of leadership among both noncommissioned and commissioned officers. Lowered standards, abbreviated training, and accelerated promotions to meet the high demand for noncommissioned and junior officers often resulted in the assignment of squad, platoon, and company leaders with less combat experience than the troops they led. Careerism and ticket-punching in officer assignments, false reporting and inflated body counts, and revelations of scandal and corruption all raised disquieting questions about the professional ethics of Army leadership. Critics indicted the tactics and techniques used by the Army in Vietnam, noting that airmobility, for example, tended to distance troops from the population they were sent to protect and that commanders aloft in their command and control helicopters were at a psychological and physical distance from the soldiers they were supposed to lead. With most U.S. combat units slated to leave South Vietnam during 1970 and 1971, time was a critical factor for the success of Vietnamization and pacification. Neither program could thrive if Saigon's forces were distracted by enemy offensives launched from bases in Laos or Cambodia. While Abrams' logistical offensive temporarily reduced the level of enemy activity in the South, bases outside South Vietnam had been inviolable to allied ground forces. Harboring enemy forces, command facilities, and logistical depots, the Cambodian and Laotian bases threatened the fragile progress made in the South since Tet 1968. To the Nixon administration, Abrams' plans to violate the Communist sanctuaries had the special appeal of gaining more time for Vietnamization and of compensating for the bombing halt over North Vietnam. Because of their proximity to Saigon, the bases in Cambodia received first priority. Planning for the cross-border attack occurred at a critical time in Cambodia. 
In early 1970 Cambodia's neutralist leader, Prince Norodom Sihanouk, was overthrown by his pro-Western Defense Minister, General Lon Nol. Among Lon Nol's first actions was closing the port of Sihanoukville to supplies destined for Communist forces in the border bases and in South Vietnam. He also demanded that Communist forces leave Cambodia and accepted Saigon's offer to apply pressure against those located near the border. A year earlier, in March 1969, American B-52 bombers had begun in secret to bomb enemy bases in Cambodia. By late April, South Vietnamese military units, accompanied by American advisers, had mounted large-scale ground operations across the border. On 1 May 1970, units of the 1st Cavalry Division, the 25th Infantry Division, and the 11th Armored Cavalry followed. Cambodia became a new battlefield of the Vietnam War. Cutting a broad swath through the enemy's Cambodian bases, Army units discovered large, sprawling, well-stocked storage sites, training camps, and hospitals, all recently occupied. What Americans did not find were large enemy forces or COSVN headquarters. Only small delaying forces offered sporadic resistance, while main force units retreated to northeastern Cambodia. Meanwhile the expansion of the war produced violent demonstrations in the United States. In response to the public outcry, Nixon imposed a geographical and time limit on operations in Cambodia, enabling the enemy to stay beyond reach. At the end of June, one day short of the sixty days allotted to the operation, all advisers accompanying the South Vietnamese and all U.S. Army units had left Cambodia. Political and military events in Cambodia triggered changes in the war as profound as those engendered by the Tet offensive. From a quiescent "sideshow" of the war, Cambodia became an arena for the major belligerents. Military activity increased in northern Cambodia and southern Laos as Hanoi established new infiltration routes and bases to replace those lost during the incursion. Hanoi made clear that it regarded all Indochina as a single theater of operations. Cambodia itself was engulfed in a virulent civil war. As U.S. Army units withdrew, the South Vietnamese Army found itself in a race against Communist forces to secure the Cambodian capital of Phnom Penh. Americans provided Saigon's overextended forces air and logistical support to enable them to stabilize the situation there. The time to strengthen Vietnamization gained by the incursion now had to be weighed in the balance against ARVN's new commitment in Cambodia. To the extent that South Vietnam's forces bolstered Lon Nol's regime, they were unable to contribute to pacification and rural security in their own country. Moreover, the South Vietnamese performance in Cambodia was mixed. When working closely with American advisers, the army acquitted itself well. But when forced to rely on its own resources, the army revealed its inexperience and limitations in attempting to plan and execute large operations. Despite ARVN's equivocal performance, less than a year later the Americans pressed the South Vietnamese to launch a second cross-border operation, this time into Laos. Although U.S. air, artillery, and logistical support would be provided, this time Army advisers would not accompany South Vietnamese forces. The Americans' enthusiasm for the operation exceeded that of their allies. Anticipating high casualties, South Vietnam's leaders were reluctant to involve their army once more in extended operations outside their country.
But American intelligence had detected a North Vietnamese build-up in the vicinity of Tchepone, a logistical center on the Ho Chi Minh Trail approximately 25 miles west of the South Vietnamese border in Laos. The Military Assistance Command regarded the build-up as a prelude to an NVA spring offensive in the northern provinces. Like the Cambodian incursion, the Laotian invasion was justified as benefiting Vietnamization, but with the added bonuses of spoiling a prospective offensive and cutting the Ho Chi Minh Trail. In preparation for the operation, Army helicopters and artillery were moved to the vicinity of the abandoned base at Khe Sanh. The 101st Airborne Division conducted a feint toward the A Shau valley to conceal the true objective. On 8 February 1971, spearheaded by tanks and with airmobile units leapfrogging ahead to establish fire support bases in Laos, a South Vietnamese mechanized column advanced down Highway 9 toward Tchepone. Operation LAM SON 719 had begun. The North Vietnamese were not deceived. South Vietnamese forces numbering about 25,000 became bogged down by heavy enemy resistance and bad weather. The drive toward Tchepone stalled. Facing the South Vietnamese were elements of five NVA divisions, as well as a tank regiment, an artillery regiment, and at least nineteen antiaircraft battalions. After a delay of several days, South Vietnamese forces air-assaulted into the heavily bombed town of Tchepone. By that time, the North Vietnamese had counterattacked with Soviet-built T54 and T55 tanks, heavy artillery, and infantry. They struck the rear of the South Vietnamese forces strung out on Highway 9, blocking their main avenue of withdrawal. Enemy forces also overwhelmed several South Vietnamese fire support bases, depriving ARVN units of desperately needed flank protection. The South Vietnamese also lacked antitank weapons to counter the North Vietnamese armor that appeared on the Laotian jungle trails. The result was near-disaster. Army helicopter pilots trying to rescue South Vietnamese soldiers from their besieged hilltop fire bases encountered intense antiaircraft fire. Panic ensued when some South Vietnamese units ran out of ammunition. In some units all semblance of an orderly withdrawal vanished as desperate South Vietnamese soldiers pushed the wounded off evacuation helicopters or clung to helicopter skids to reach safety. Eventually, ARVN forces punched their way out of Laos, but only after paying a heavy price. That the South Vietnamese Army had reached its objective of Tchepone was of little consequence. Its stay there was brief and the supply caches it discovered disappointingly small. Saigon's forces had failed to sever the Ho Chi Minh Trail; infiltration reportedly increased during LAM SON 719, as the North Vietnamese shifted traffic to roads and trails further to the west in Laos. In addition to losing nearly 2,000 men, the South Vietnamese lost large amounts of equipment during their disorderly withdrawal, and the U.S. Army lost 107 helicopters, the highest number in any one operation of the war. Supporters pointed to heavy enemy casualties and argued that equipment losses were reasonable, given the large number of helicopters used to support LAM SON 719. The battle nevertheless raised disturbing questions among Army officials about the vulnerability of helicopters in mid- or high-intensity conflict. What was the future of airmobility in any war where the enemy possessed a significant antiaircraft capability?
LAM SON 719 proved to be a less ambiguous test of Vietnamization than the Cambodian incursion. The South Vietnamese Army did not perform well in Laos. Reflecting on the operation, General Ngo Quang Truong, the commander of I Corps, noted ARVN's chronic weakness in planning for and co-ordinating combat support. He also noted that from the battalion to the division level, the army had become dependent on U.S. advisers. At the highest levels of command, he added, "the need for advisers was more acutely felt in two specific areas: planning and leadership. The basic weakness of ARVN units at regimental and sometimes division level in those areas," he continued, "seriously affected the performance of subordinate units." LAM SON 719 scored one success, forestalling a Communist spring offensive in the northern provinces; in other respects, it was a failure and an ill omen for the future.

Withdrawal: The Final Battles

As the Americans withdrew, South Vietnam's combat capability declined. The United States furnished its allies the heavier M48 tank to match the NVA's T54 tank and heavier artillery to counter North Vietnamese 130-mm. guns, though past experience suggested that additional arms and equipment could not compensate for poor skills and mediocre leadership. In fact, the weapons and equipment were insufficient to offset the reduction in U.S. combat strength. In mid-1968, for example, an aggregate of fifty-six allied combat battalions were present in South Vietnam's two northern provinces; in 1972, after the departure of most American units, only thirty battalions were in the same area. Artillery strength in the northern region declined from approximately 400 guns to 169 in the same period, and ammunition supply rates fell off as well. Similar reductions took place throughout South Vietnam, causing decreases in mobility, firepower, intelligence support, and air support. Five thousand American helicopters were replaced by about 500. American specialties—B-52 strikes, photo reconnaissance, and the use of sensors and other means of target acquisition—were drastically curtailed. Such losses were all the more serious because operations in Cambodia and Laos had illustrated how deeply ingrained in the South Vietnamese Army the American style of warfare had become. Nearly two decades of U.S. military involvement were exacting an unexpected price. As one ARVN division commander commented, "Trained as they were through combined action with US units, the [South Vietnamese] unit commander was used to the employment of massive firepower." That habit, he added, "was hard to relinquish." By November 1971, when the 101st Airborne Division withdrew from the South, Hanoi was planning its 1972 spring offensive. With ARVN's combat capacity diminished and nearly all U.S. combat troops gone, North Vietnam sensed an opportunity to demonstrate the failure of Vietnamization, hasten ARVN's collapse, and revive the stalled peace talks. In its broad outlines and goals, the 1972 offensive resembled Tet 1968, except that the North Vietnamese Army, instead of the Viet Cong, bore the major burden of combat. The Nguyen-Hue offensive, or Easter offensive, began on 30 March 1972. Total U.S. military strength in South Vietnam was about 95,000, of which only 6,000 were combat troops, and the task of countering the offensive on the ground fell almost exclusively to the South Vietnamese.
Attacking on three fronts, the North Vietnamese Army poured across the demilitarized zone and out of Laos to capture Quang Tri, South Vietnam's northernmost province. In the Central Highlands, enemy units moved into Kontum Province, forcing Saigon to relinquish several border posts before government forces contained the offensive. On 2 April, Viet Cong and North Vietnamese forces struck Loc Ninh, just south of the Cambodian border on Highway 13, and advanced south to An Loc along one of the main invasion routes toward Saigon. A two-month-long battle ensued, until enemy units were driven from An Loc and forced to disperse to bases in Cambodia. By late summer the Easter offensive had run its course; the South Vietnamese, in a slow, cautious counteroffensive, recaptured Quang Tri City and most of the lost province. But the margin of victory or defeat often was supplied by the massive supporting firepower provided by U.S. air and naval forces. The tactics of the war were changing. Communist forces now made extensive use of armor and artillery. Among the new weapons in the enemy's arsenal was the Soviet SA-7 hand-held antiaircraft missile, which posed a threat to slow-flying tactical aircraft and helicopters. On the other hand, the Army's attack helicopter, the Cobra, outfitted with TOW antitank missiles, proved effective against NVA armor at stand-off range. In their antitank role, Army attack helicopters were crucial to ARVN's success at An Loc, suggesting a larger role for helicopters in the future as part of a combined arms team in conventional combat. Vietnamization continued to show mixed results. The benefits of the South Vietnamese Army's newly acquired mobility and firepower were dissipated as it became responsible for securing areas vacated by American forces. Improvements of territorial and paramilitary troops were offset as they became increasingly vulnerable to attack by superior North Vietnamese forces. Insurgency was also reviving. Though their progress was less spectacular than the blitzkrieg-like invasion of the South, North Vietnamese forces entered the Delta in thousands between 1969 and 1973 to replace the Viet Cong—one estimate suggested a tenfold increase in NVA strength, from 3,000 to 30,000, in this period. Here the fighting resembled that of the early 1960's, as enemy forces attacked lightly defended outposts and hamlets to regain control over the rural population in anticipation of a cease-fire. The strength of the People's Self-Defense Force, Saigon's first line of hamlet and village defense, after steady increases in 1969 and 1970, began to decline after 1971, also suggesting a revival of the insurgency in the countryside. Pursuing a strategy used successfully in the past, the North Vietnamese forced ARVN troops to the borders, exposing the countryside and leaving its protection in the hands of weaker forces. Such unfavorable signs, however, did not disturb South Vietnam's leaders as long as they could count on continued United States air and naval support. Nixon's resumption of the bombing of North Vietnam during the Easter offensive and, for the first time, his mining of North Vietnamese ports encouraged this expectation, as did the intense American bombing of Hanoi and Haiphong in late 1972. But such pressure was intended, at least in part, to force North Vietnam to sign an armistice. If Thieu was encouraged by the display of U.S. military muscle, the course of negotiations could only have been a source of discouragement. 
Hanoi dropped an earlier demand for Thieu's removal, but the United States gave up its insistence on Hanoi's withdrawal of its troops from the South. In early 1973 the United States, North and South Vietnam, and the Viet Cong signed an armistice that promised a cease-fire and national reconciliation. In fact, fighting continued, but the Military Assistance Command was dissolved, remaining U.S. forces withdrawn, and American military action in South Vietnam terminated. Perhaps most important of all, American advisers, still in many respects the backbone of ARVN's command structure, were withdrawn. Between 1973 and 1975 South Vietnam's military security further declined through a combination of old and new factors. Plagued by poor maintenance and shortages of spare parts, much of the equipment provided to Saigon's forces under Vietnamization became inoperable. A rise in fuel prices stemming from a worldwide oil crisis further restricted ARVN's use of vehicles and aircraft. South Vietnamese forces in many areas of the country were on the defensive, confined to protecting key towns and installations. Seeking to preserve its diminishing assets, the South Vietnamese Army became garrison bound and either reluctant or unable to react to a growing number of guerrilla attacks that eroded rural security. Congressionally mandated reductions in U.S. aid further reduced the delivery of repair parts, fuel, and ammunition. American military activities in Cambodia and Laos, which had continued after the cease-fire in South Vietnam went into effect, ended in 1973 when Congress cut off funds. Complaining of this austerity, President Thieu noted that he had to fight a "poor man's war." Vietnamization's legacy was that South Vietnam had to do more with less. In 1975 North Vietnam's leaders began planning for a new offensive, still uncertain whether the United States would resume bombing or once again intervene in the South. When their forces overran Phuoc Long Province, north of Saigon, without any American military reaction, they decided to proceed with a major offensive in the Central Highlands. Neither President Nixon, weakened by the Watergate scandal and forced to resign, nor his successor, Gerald Ford, was prepared to challenge Congress by resuming U.S. military activity in Southeast Asia. The will of Congress seemed to reflect the mood of an American public weary of the long and inconclusive war. What had started as a limited offensive in the highlands to draw off forces from populated areas now became an all-out effort to conquer South Vietnam. Thieu, desiring to husband his military assets, decided to retreat rather than to reinforce the highlands. The result was panic among his troops and a mass exodus toward the coast. As Hanoi's forces spilled out of the highlands, they cut off South Vietnamese defenders in the northern provinces from the rest of the country. Other NVA units now crossed the demilitarized zone, quickly overrunning Hue and Da Nang, and signaling the collapse of South Vietnamese resistance in the north. Hurriedly established defense lines around Saigon could not hold back the inexorable enemy offensive against the capital. As South Vietnamese leaders waited in vain for American assistance, Saigon fell to the Communists on 30 April 1975.

The Post-Vietnam Army

Saigon's fall was a bitter end to the long American effort to sustain South Vietnam. Ranging from advice and support to direct participation in combat and involving nearly three million U.S.
servicemen, the effort failed to stop Communist leaders from reaching their goal of unifying a divided nation. South Vietnam's military defeat tended to obscure the crucial inability of this massive military enterprise to compensate for Saigon's political shortcomings. Over a span of nearly two decades, a series of regimes failed to mobilize fully and effectively their nation's political, social, and economic resources to foster a popular base of support. North Vietnamese main force units ended the war, but local insurgency among the people of the South made that outcome possible and perhaps inevitable. The U.S. Army paid a high price for its long involvement in South Vietnam. American military deaths exceeded 58,000, and of these about two-thirds were soldiers. The majority of the dead were low-ranking enlisted men (E-2 and E-3), young men twenty-three years old or younger, of whom approximately 13 percent were black. Most deaths were caused by small-arms fire and gunshot wounds, but a significant portion, almost 30 percent, stemmed from mines, booby traps, and grenades. Artillery, rockets, and bombs accounted for only a small portion of the total fatalities. If not for the unprecedented medical care that the Army provided in South Vietnam, the death toll would have been higher yet. Nearly 300,000 Americans were wounded, of whom half required hospitalization. The lives of many seriously injured men, who would have become fatalities in earlier wars, were saved by rapid helicopter evacuation direct to hospitals close to the combat zone. Here, relatively secure from air and ground attack, usually unencumbered by mass casualties, and with access to an uninterrupted supply of whole blood, Army doctors and nurses availed themselves of the latest medical technology to save thousands of lives. As one medical officer pointed out, the Army was able to adopt a "civilian philosophy of casualty triage" in the combat zone that directed the "major effort first to the most seriously injured." But some who served in South Vietnam suffered more insidious damage from the adverse psychological effects of combat or the long-term effects of exposure to chemical agents. More than a decade after the end of the war, 1,761 American soldiers remain listed as missing in action. The war-ravaged Vietnamese, north and south, incurred the greatest losses. South Vietnamese military deaths exceeded 200,000. War-related civilian deaths in the South approached a half-million, while the injured and maimed numbered many more. Accurate estimates of enemy casualties run afoul of the difficulty of distinguishing between civilians and combatants, imprecise body counts, and the problem of verifying casualties in areas controlled by the enemy. Nevertheless, nearly a million Viet Cong and North Vietnamese soldiers are believed to have perished in combat through the spring of 1975. For the U.S. Army the scars of the war ran even deeper than the grim statistics showed. Given its long association with South Vietnam's fortunes, the Army could not escape being tarnished by its ally's fall. The loss compounded already unsettling questions about the Army's role in Southeast Asia, about the soundness of its advice to the South Vietnamese, about its understanding of the nature of the war, about the appropriateness of its strategy and tactics, and about the adequacy of the counsel provided by Army leaders to national decision makers.
Marked by ambiguous military objectives, defensive strategy, lack of tactical initiative, ponderous tactics, and untidy command arrangements, the struggle in Vietnam seemed to violate most of the time-honored principles of war. Many officers sought to erase Vietnam from the Army's corporate memory, feeling uncomfortable with the ignominy of failure or believing that the lessons and experience of the war were of little use to the post-Vietnam Army. Although a generation of officers, including many of the Army's future leaders, cut their combat teeth in Vietnam, many regretted that the Army's reputation, integrity, and professionalism had been tainted in the service of a flawed strategy and a dubious ally. Even before South Vietnam fell, Army strategists turned their attention to what seemed to them to be the Army's more enduring and central mission—the defense of western Europe. Ending a decade of neglect of its forces there, the Army began to strengthen and modernize its NATO contingent. Army planners doubted that in any future European war they would enjoy the luxury of a gradual, sustained mobilization, or unchallenged control of air and sea lines of communication, or access to support facilities close to the battlefield. France's decision in 1966 to withdraw from NATO's integrated military command had already forced the Army to re-evaluate its strategy and support arrangements. The end of the draft in 1972 and the transition to an all-volunteer Army in 1973—a reflection of popular dissatisfaction with the Vietnam War—added to the unlikelihood of another war similar to Vietnam and made it seem more than ever an anomaly. Instead, Army planners faced a possible future conflict that would begin with little or no warning and confront allied forces-in-being with a numerically superior foe. Combat in such a war was likely to be violent and sustained, entailing deep thrusts by armored forces, intense artillery and counterbattery fire, and a fluid battlefield with a high degree of mobility. Army doctrine to fight this war, codified in 1976 in FM (Field Manual) 100-5, Operations, barely acknowledged the decade of Army combat in Vietnam. The new doctrine of "active defense" drew heavily on the experience of armored operations in World War II and recent fighting in the Middle East between Arab and Israeli forces. From a study of about 1,000 armored battles, Army planners deduced that an outnumbered defender could force a superior enemy to concentrate his forces and reveal his intentions, and thus bring to bear in the all-important initial phase of the battle sufficient forces and firepower in the critical area to defeat his main attack. The conversion of the 1st Cavalry Division, the unit that exemplified combat operations in South Vietnam, from an airmobile division to a new triple capabilities (TRICAP) division symbolized the post-Vietnam Army's reorientation toward combat in Europe. Infused with additional mechanized and artillery forces to give it greater flexibility and firepower, the division's triple capabilities—armor, airmobility, and air cavalry—better suited it to carry out the tactical concepts of FM 100-5 than its previous configuration. Yet the Army did not totally ignore its Vietnam experience. U.S. armor and artillery forces had gained valuable experience there in co-ordinating operations with airmobile forces. Although some in the military questioned whether helicopters could operate in mid-intensity conflict, Army doctrine rested heavily on concepts of airmobility that had evolved during Vietnam.
Helicopters were still expected to move forces from one sector of the battlefield to another, to carry out reconnaissance and surveillance, to provide aerial fire support, and to serve as antitank weapons systems. In many respects, the role contemplated for helicopters in the post-Vietnam Army harkened back to concepts of airmobility originally formulated for the atomic battlefield of the early 1960's, but modified by combat in Vietnam. Like the Army of the Vietnam era, the postwar Army continued a common hallmark of the American military tradition by emphasizing technology and firepower over manpower. The Army's new operational doctrine had its share of critics. Stressing tactical operations of units below the division, the doctrine of FM 100-5 neglected the role of larger Army echelons. Recognition of this deficiency led to a revival of interest in the role of divisions, corps, and armies in the gray area between grand strategy and tactics. But some strategists warned that the Army seemed to be preparing for the war it was least likely to fight. Like the strategists of the New Look in the 1950's, they viewed an attack on Army forces in Europe as a mere trip wire that would ignite a nuclear confrontation between the superpowers and thus make the land battle irrelevant. With insurgencies, small wars, subversion, and terrorism flourishing throughout Asia, Africa, and Latin America, others believed that the Army would sooner or later find itself once again engaged in conflicts that closely resembled Vietnam. Ten years after the loss of South Vietnam, the U.S. Army's major overseas commitments remained anchored in NATO and South Korea. International realities still compelled it to prepare for a variety of contingencies. In addition to organizing divisions to fight in Europe, the Army revived its old interest in light infantry divisions. By the mid-1980's two such divisions, the 10th Mountain Division and the 6th Infantry Division (Light), had been activated, giving the Army once again a total of eighteen divisions. Lower active-duty strength required many divisions to be fleshed out by Reserve Components before they could be committed to combat. Nevertheless, the Army viewed its new divisions as suitable for use in a rapid deployment force to reinforce NATO or world trouble spots. Although their strength was drastically reduced following the Vietnam War, Special Forces continued to be called upon to advise and train anti-Communist military forces in Latin America and elsewhere and to participate in a variety of special activities to counter terrorism. Operations like the abortive attempt to rescue American hostages in Iran and the successful operation to prevent a Communist takeover of the Caribbean island of Grenada attested to the Army's continuing need for both rapidly deployable and special-purpose forces. The realities of a complex world reinforced the pervasive influence of flexible response on U.S. national security policy. Many other missions fell under the doctrinal umbrella of low-intensity conflict, a vague and faddish term that became popular in the 1980's as counterinsurgency had two decades earlier. The relevance of Vietnam to low-intensity conflict remains an open question. Nevertheless, by the 1980's the conduct and lessons of the war in Vietnam had again become the subject of lively debate in the Army.
Reassessments of its role tend to center around the issue of whether the Army should have devoted more effort to pacification or to defeating the conventional military threat posed by North Vietnam. These issues stem from the ambiguities of the war and the paradox of the Army's experience. Reliance on massive firepower and technological superiority and the ability to marshal vast logistical resources have been hallmarks of the American military tradition. Tactics have often seemed to exist apart from larger issues, strategies, and objectives. Yet in Vietnam the Army experienced tactical success and strategic failure. The rediscovery of the Vietnam War suggests that its most important legacy may be the lesson that unique historical, political, cultural, and social factors always impinge on the military. Strategic and tactical success rests not only on military progress but on correctly analyzing the nature of the particular conflict, understanding the enemy's strategy, and realistically assessing the strengths and weaknesses of allies. A new humility and a new sophistication may form the best parts of the complex heritage left the Army by the long, bitter war in Vietnam.
A tropical cyclone is a storm system characterized by a large low-pressure center and numerous thunderstorms that produce strong winds and heavy rain. Tropical cyclones feed on heat released when moist air rises, resulting in condensation of water vapor contained in the moist air. They are fueled by a different heat mechanism than other cyclonic windstorms such as nor'easters, European windstorms, and polar lows, leading to their classification as "warm core" storm systems. Tropical cyclones originate in the doldrums near the equator, typically about 10° away from it. The term "tropical" refers to both the geographic origin of these systems, which form almost exclusively in tropical regions of the globe, and their formation in maritime tropical air masses. The term "cyclone" refers to such storms' cyclonic nature, with counterclockwise rotation in the Northern Hemisphere and clockwise rotation in the Southern Hemisphere. Depending on its location and strength, a tropical cyclone is referred to by names such as hurricane, typhoon, tropical storm, cyclonic storm, tropical depression, and simply cyclone. While tropical cyclones can produce extremely powerful winds and torrential rain, they are also able to produce high waves and damaging storm surge as well as spawning tornadoes. They develop over large bodies of warm water, and lose their strength if they move over land. This is why coastal regions can receive significant damage from a tropical cyclone, while inland regions are relatively safe from strong winds. Heavy rains, however, can produce significant flooding inland, and storm surges can produce extensive coastal flooding up to 40 kilometres (25 mi) from the coastline. Although their effects on human populations can be devastating, tropical cyclones can also relieve drought conditions. They also carry heat and energy away from the tropics and transport it toward temperate latitudes, which makes them an important part of the global atmospheric circulation mechanism. As a result, tropical cyclones help to maintain equilibrium in the Earth's troposphere, and to maintain a relatively stable and warm temperature worldwide. Many tropical cyclones develop when the atmospheric conditions around a weak disturbance in the atmosphere are favorable. The background environment is modulated by climatological cycles and patterns such as the Madden-Julian oscillation, El Niño-Southern Oscillation, and the Atlantic multidecadal oscillation. Others form when other types of cyclones acquire tropical characteristics. Tropical systems are then moved by steering winds in the troposphere; if the conditions remain favorable, the tropical disturbance intensifies, and can even develop an eye. On the other end of the spectrum, if the conditions around the system deteriorate or the tropical cyclone makes landfall, the system weakens and eventually dissipates. It is not possible to artificially induce the dissipation of these systems with current technology. All tropical cyclones are areas of low atmospheric pressure near the Earth's surface. The pressures recorded at the centers of tropical cyclones are among the lowest that occur on Earth's surface at sea level. Tropical cyclones are characterized and driven by the release of large amounts of latent heat of condensation, which occurs when moist air is carried upwards and its water vapor condenses. This heat is distributed vertically around the center of the storm.
Thus, at any given altitude (except close to the surface, where water temperature dictates air temperature) the environment inside the cyclone is warmer than its outer surroundings. A strong tropical cyclone will harbor an area of sinking air at the center of circulation. If this area is strong enough, it can develop into a large "eye". Weather in the eye is normally calm and free of clouds, although the sea may be extremely violent. The eye is normally circular in shape, and may range in size from 3 kilometres (1.9 mi) to 370 kilometres (230 mi) in diameter. Intense, mature tropical cyclones can sometimes exhibit an outward curving of the eyewall's top, making it resemble a football stadium; this phenomenon is thus sometimes referred to as the stadium effect. There are other features that either surround the eye, or cover it. The central dense overcast (CDO) is the concentrated area of strong thunderstorm activity near the center of a tropical cyclone; in weaker tropical cyclones, the CDO may cover the center completely. The eyewall is a circle of strong thunderstorms that surrounds the eye; it is where the greatest wind speeds are found, where clouds reach the highest, and where precipitation is the heaviest. The heaviest wind damage occurs where a tropical cyclone's eyewall passes over land.

Eyewall replacement cycles occur naturally in intense tropical cyclones. When cyclones reach peak intensity they usually have an eyewall and radius of maximum winds that contract to a very small size, around 10 kilometres (6.2 mi) to 25 kilometres (16 mi). Outer rainbands can organize into an outer ring of thunderstorms that slowly moves inward and robs the inner eyewall of its needed moisture and angular momentum. When the inner eyewall weakens, the tropical cyclone weakens (in other words, the maximum sustained winds weaken and the central pressure rises). The outer eyewall replaces the inner one completely at the end of the cycle. The storm can be of the same intensity as it was previously or even stronger after the eyewall replacement cycle finishes. The storm may strengthen again as it builds a new outer ring for the next eyewall replacement.

|Size descriptions of tropical cyclones|
|ROCI||Size description|
|Less than 2 degrees of latitude||Very small/midget|
|2 to 3 degrees of latitude||Small|
|3 to 6 degrees of latitude||Medium/Average|
|6 to 8 degrees of latitude||Large|
|Over 8 degrees of latitude||Very large|

One measure of the size of a tropical cyclone is the distance from its center of circulation to its outermost closed isobar, also known as its ROCI. If the radius is less than two degrees of latitude or 222 kilometres (138 mi), then the cyclone is "very small" or a "midget". A radius between 3 and 6 degrees of latitude, or 333 kilometres (207 mi) to 670 kilometres (420 mi), is considered "average-sized". "Very large" tropical cyclones have a radius of greater than 8 degrees or 888 kilometres (552 mi). Use of this measure has objectively determined that tropical cyclones in the northwest Pacific Ocean are the largest on Earth on average, with Atlantic tropical cyclones roughly half their size. Other methods of determining a tropical cyclone's size include measuring the radius of gale-force winds and measuring the radius at which the relative vorticity field decreases to 1×10⁻⁵ s⁻¹ from the center.

A tropical cyclone's primary energy source is the release of the heat of condensation from water vapor condensing at high altitudes, with solar heating being the initial source for evaporation.
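The size classes in the table above lend themselves to a tiny lookup function. The sketch below is illustrative only; the function name is mine, and the conversion of one degree of latitude to roughly 111 km is just the same conversion the text uses when it equates two degrees with 222 km.

```python
def size_class_from_roci(roci_km: float) -> str:
    """Descriptive size class for a cyclone whose radius of outermost
    closed isobar (ROCI) is given in kilometres."""
    degrees = roci_km / 111.0  # roughly 111 km per degree of latitude
    if degrees < 2:
        return "Very small/midget"
    if degrees < 3:
        return "Small"
    if degrees < 6:
        return "Medium/Average"
    if degrees < 8:
        return "Large"
    return "Very large"

print(size_class_from_roci(200))  # Very small/midget
print(size_class_from_roci(500))  # Medium/Average
print(size_class_from_roci(900))  # Very large
```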
Therefore, a tropical cyclone can be visualized as a giant vertical heat engine supported by mechanics driven by physical forces such as the rotation and gravity of the Earth. Viewed another way, tropical cyclones could be seen as a special type of mesoscale convective complex, which continues to develop over a vast source of relative warmth and moisture. While an initial warm core system, such as an organized thunderstorm complex, is necessary for the formation of a tropical cyclone, a large flux of energy is needed to lower atmospheric pressure more than a few millibars (0.10 inch of mercury). The inflow of warmth and moisture from the underlying ocean surface is critical for tropical cyclone strengthening. A significant amount of the inflow in the cyclone is in the lowest 1 kilometre (3,300 ft) of the atmosphere. Condensation leads to higher wind speeds, as a tiny fraction of the released energy is converted into mechanical energy; the faster winds and lower pressure associated with them in turn cause increased surface evaporation and thus even more condensation. Much of the released energy drives updrafts that increase the height of the storm clouds, speeding up condensation. This positive feedback loop continues for as long as conditions are favorable for tropical cyclone development. Factors such as a continued lack of equilibrium in air mass distribution would also give supporting energy to the cyclone. The rotation of the Earth causes the system to spin, an effect known as the Coriolis effect, giving it a cyclonic characteristic and affecting the trajectory of the storm.

What primarily distinguishes tropical cyclones from other meteorological phenomena is deep convection as a driving force. Because convection is strongest in a tropical climate, it defines the initial domain of the tropical cyclone. By contrast, mid-latitude cyclones draw their energy mostly from pre-existing horizontal temperature gradients in the atmosphere. To continue to drive its heat engine, a tropical cyclone must remain over warm water, which provides the needed atmospheric moisture to keep the positive feedback loop running. When a tropical cyclone passes over land, it is cut off from its heat source and its strength diminishes rapidly.

The passage of a tropical cyclone over the ocean can cause the upper layers of the ocean to cool substantially, which can influence subsequent cyclone development. Cooling is primarily caused by wind-driven upwelling of cold water from deeper in the ocean. The cooler water causes the storm to weaken. This is a negative feedback process that causes storms to weaken over the sea through their own effects. Additional cooling may come in the form of cold water from falling raindrops (this is because the atmosphere is cooler at higher altitudes). Cloud cover may also play a role in cooling the ocean, by shielding the ocean surface from direct sunlight before and slightly after the storm passage. All these effects can combine to produce a dramatic drop in sea surface temperature over a large area in just a few days.

Scientists at the US National Center for Atmospheric Research estimate that a tropical cyclone releases heat energy at the rate of 50 to 200 exajoules (10¹⁸ J) per day, equivalent to about 1 PW (10¹⁵ watts). This rate of energy release is equivalent to 70 times the world energy consumption of humans and 200 times the worldwide electrical generating capacity, or to exploding a 10-megaton nuclear bomb every 20 minutes.
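The order of magnitude quoted above is easy to verify; the short sketch below simply converts the stated range of 50 to 200 exajoules per day into watts and does not go beyond the figures already given.

```python
# Convert the quoted heat release of 50-200 exajoules per day into watts.
SECONDS_PER_DAY = 86_400

for ej_per_day in (50, 200):
    watts = ej_per_day * 1e18 / SECONDS_PER_DAY
    print(f"{ej_per_day} EJ/day ≈ {watts / 1e15:.1f} PW")
# Prints roughly 0.6 PW and 2.3 PW, bracketing the "about 1 PW" figure above.
```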
While the most obvious motion of clouds is toward the center, tropical cyclones also develop an upper-level (high-altitude) outward flow of clouds. These originate from air that has released its moisture and is expelled at high altitude through the "chimney" of the storm engine. This outflow produces high, thin cirrus clouds that spiral away from the center. The clouds are thin enough for the sun to be visible through them. These high cirrus clouds may be the first signs of an approaching tropical cyclone. As air parcels are lifted within the eye of the storm, the vorticity is reduced, causing the outflow from a tropical cyclone to have anti-cyclonic motion.

|Basins and WMO Monitoring Institutions|
|Basin||Responsible RSMCs and TCWCs|
|North Atlantic||National Hurricane Center (United States)|
|North-East Pacific||National Hurricane Center (United States)|
|North-Central Pacific||Central Pacific Hurricane Center (United States)|
|North-West Pacific||Japan Meteorological Agency|
|North Indian Ocean||India Meteorological Department|
|South-West Indian Ocean||Météo-France|
|Australian region||Bureau of Meteorology† (Australia), Meteorological and Geophysical Agency† (Indonesia), Papua New Guinea National Weather Service†|
|Southern Pacific||Fiji Meteorological Service, Meteorological Service of New Zealand†|
|†: Indicates a Tropical Cyclone Warning Center|

There are six Regional Specialized Meteorological Centers (RSMCs) worldwide. These organizations are designated by the World Meteorological Organization and are responsible for tracking and issuing bulletins, warnings, and advisories about tropical cyclones in their designated areas of responsibility. Additionally, there are six Tropical Cyclone Warning Centers (TCWCs) that provide information to smaller regions. The RSMCs and TCWCs are not the only organizations that provide information about tropical cyclones to the public. The Joint Typhoon Warning Center (JTWC) issues advisories in all basins except the Northern Atlantic for the purposes of the United States Government. The Philippine Atmospheric, Geophysical and Astronomical Services Administration (PAGASA) issues advisories and names for tropical cyclones that approach the Philippines in the Northwestern Pacific to protect the life and property of its citizens. The Canadian Hurricane Center (CHC) issues advisories on hurricanes and their remnants for Canadian citizens when they affect Canada. On 26 March 2004, Cyclone Catarina became the first recorded South Atlantic cyclone and subsequently struck southern Brazil with winds equivalent to Category 2 on the Saffir-Simpson Hurricane Scale. As the cyclone formed outside the authority of another warning center, Brazilian meteorologists initially treated the system as an extratropical cyclone, although they subsequently classified it as tropical.

Worldwide, tropical cyclone activity peaks in late summer, when the difference between temperatures aloft and sea surface temperatures is the greatest. However, each particular basin has its own seasonal patterns. On a worldwide scale, May is the least active month, September is the most active, and November is the only month in which all the tropical cyclone basins are active. In the Northern Atlantic Ocean, a distinct hurricane season occurs from June 1 to November 30, sharply peaking from late August through September. The statistical peak of the Atlantic hurricane season is 10 September. The Northeast Pacific Ocean has a broader period of activity, but in a similar time frame to the Atlantic.
The Northwest Pacific sees tropical cyclones year-round, with a minimum in February and March and a peak in early September. In the North Indian basin, storms are most common from April to December, with peaks in May and November. In the Southern Hemisphere, the tropical cyclone year begins on July 1 and encompasses the tropical cyclone seasons, which run from November 1 until the end of April, with peaks in mid-February to early March.

|Season lengths and seasonal averages|
|Basin||Season start||Season end||Tropical Storms||Category 3+ TCs|
|Australia Southwest Pacific||November||April||9||4.8||1.9|

The formation of tropical cyclones is the topic of extensive ongoing research and is still not fully understood. While six factors appear to be generally necessary, tropical cyclones may occasionally form without meeting all of the following conditions. In most situations, water temperatures of at least 26.5 °C (79.7 °F) are needed down to a depth of at least 50 m (160 ft); waters of this temperature cause the overlying atmosphere to be unstable enough to sustain convection and thunderstorms. Another factor is rapid cooling with height, which allows the release of the heat of condensation that powers a tropical cyclone. High humidity is needed, especially in the lower-to-mid troposphere; when there is a great deal of moisture in the atmosphere, conditions are more favorable for disturbances to develop. Low amounts of wind shear are needed, as high shear is disruptive to the storm's circulation. Tropical cyclones generally need to form more than 555 km (345 mi) or 5 degrees of latitude away from the equator, allowing the Coriolis effect to deflect winds blowing towards the low-pressure center and create a circulation. Lastly, a formative tropical cyclone needs a pre-existing system of disturbed weather; without a circulation, no cyclonic development will take place. Low-latitude and low-level westerly wind bursts associated with the Madden-Julian oscillation can create favorable conditions for tropical cyclogenesis by initiating tropical disturbances.

Most tropical cyclones form in a worldwide band of thunderstorm activity called by several names: the Intertropical Front (ITF), the Intertropical Convergence Zone (ITCZ), or the monsoon trough. Another important source of atmospheric instability is found in tropical waves, which cause about 85% of intense tropical cyclones in the Atlantic Ocean and spawn most of the tropical cyclones in the Eastern Pacific basin. Tropical cyclones move westward when equatorward of the subtropical ridge, intensifying as they move. Most of these systems form between 10 and 30 degrees away from the equator, and 87% form no farther away than 20 degrees of latitude, north or south. Because the Coriolis effect initiates and maintains tropical cyclone rotation, tropical cyclones rarely form or move within about 5 degrees of the equator, where the Coriolis effect is weakest. However, it is possible for tropical cyclones to form within this boundary, as Tropical Storm Vamei did in 2001 and Cyclone Agni did in 2004.

Although tropical cyclones are large systems generating enormous energy, their movements over the Earth's surface are controlled by large-scale winds—the streams in the Earth's atmosphere. The path of motion is referred to as a tropical cyclone's track and has been compared by Dr. Neil Frank, former director of the National Hurricane Center, to "leaves carried along by a stream".
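The latitude dependence behind the "5 degrees" rule of thumb mentioned above can be made explicit through the Coriolis parameter. The expression below is the standard textbook form rather than something given in this text:

```latex
f = 2\,\Omega \sin\varphi, \qquad \Omega \approx 7.292\times10^{-5}\ \mathrm{rad\,s^{-1}}
```

Here φ is latitude and Ω is the Earth's rotation rate. Because sin φ vanishes at the equator, f at 5° latitude is only about 1.3×10⁻⁵ s⁻¹, less than a tenth of its polar value, which is consistent with the observation that disturbances sitting almost on the equator rarely acquire the rotation needed to become tropical cyclones.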
Tropical systems, while generally located equatorward of the 20th parallel, are steered primarily westward by the east-to-west winds on the equatorward side of the subtropical ridge—a persistent high pressure area over the world's oceans. In the tropical North Atlantic and Northeast Pacific oceans, trade winds—another name for the westward-moving wind currents—steer tropical waves westward from the African coast and towards the Caribbean Sea, North America, and ultimately into the central Pacific ocean before the waves dampen out. These waves are the precursors to many tropical cyclones within this region. In the Indian Ocean and Western Pacific (both north and south of the equator), tropical cyclogenesis is strongly influenced by the seasonal movement of the Intertropical Convergence Zone and the monsoon trough, rather than by easterly waves. Tropical cyclones can also be steered by other systems, such as other low pressure systems, high pressure systems, warm fronts, and cold fronts. The Earth's rotation imparts an acceleration known as the Coriolis effect, Coriolis acceleration, or colloquially, Coriolis force. This acceleration causes cyclonic systems to turn towards the poles in the absence of strong steering currents. The poleward portion of a tropical cyclone contains easterly winds, and the Coriolis effect pulls them slightly more poleward. The westerly winds on the equatorward portion of the cyclone pull slightly towards the equator, but, because the Coriolis effect weakens toward the equator, the net drag on the cyclone is poleward. Thus, tropical cyclones in the Northern Hemisphere usually turn north (before being blown east), and tropical cyclones in the Southern Hemisphere usually turn south (before being blown east) when no other effects counteract the Coriolis effect. When a tropical cyclone crosses the subtropical ridge axis, its general track around the high-pressure area is deflected significantly by winds moving towards the general low-pressure area to its north. When the cyclone track becomes strongly poleward with an easterly component, the cyclone has begun recurvature. A typhoon moving through the Pacific Ocean towards Asia, for example, will recurve offshore of Japan to the north, and then to the northeast, if the typhoon encounters southwesterly winds (blowing northeastward) around a low-pressure system passing over China or Siberia. Many tropical cyclones are eventually forced toward the northeast by extratropical cyclones in this manner, which move from west to east to the north of the subtropical ridge. An example of a tropical cyclone in recurvature was Typhoon Ioke in 2006, which took a similar trajectory. Officially, landfall is when a storm's center (the center of its circulation, not its edge) crosses the coastline. Storm conditions may be experienced on the coast and inland hours before landfall; in fact, a tropical cyclone can launch its strongest winds over land, yet not make landfall; if this occurs, then it is said that the storm made a direct hit on the coast. As a result of the narrowness of this definition, the landfall area experiences half of a land-bound storm by the time the actual landfall occurs. For emergency preparedness, actions should be timed from when a certain wind speed or intensity of rainfall will reach land, not from when landfall will occur. When two cyclones approach one another, their centers will begin orbiting cyclonically about a point between the two systems. 
The two vortices will be attracted to each other, and eventually spiral into the center point and merge. When the two vortices are of unequal size, the larger vortex will tend to dominate the interaction, and the smaller vortex will orbit around it. This phenomenon is called the Fujiwhara effect, after Sakuhei Fujiwhara. A tropical cyclone can cease to have tropical characteristics through several different ways. One such way is if it moves over land, thus depriving it of the warm water it needs to power itself, quickly losing strength. Most strong storms lose their strength very rapidly after landfall and become disorganized areas of low pressure within a day or two, or evolve into extratropical cyclones. While there is a chance a tropical cyclone could regenerate if it managed to get back over open warm water, if it remains over mountains for even a short time, weakening will accelerate. Many storm fatalities occur in mountainous terrain, as the dying storm unleashes torrential rainfall, leading to deadly floods and mudslides, similar to those that happened with Hurricane Mitch in 1998. Additionally, dissipation can occur if a storm remains in the same area of ocean for too long, mixing the upper 60 metres (200 ft) of water, dropping sea surface temperatures more than 5 °C (9 °F). Without warm surface water, the storm cannot survive. A tropical cyclone can dissipate when it moves over waters significantly below 26.5 °C (79.7 °F). This will cause the storm to lose its tropical characteristics (i.e. thunderstorms near the center and warm core) and become a remnant low pressure area, which can persist for several days. This is the main dissipation mechanism in the Northeast Pacific ocean. Weakening or dissipation can occur if it experiences vertical wind shear, causing the convection and heat engine to move away from the center; this normally ceases development of a tropical cyclone. Additionally, its interaction with the main belt of the Westerlies, by means of merging with a nearby frontal zone, can cause tropical cyclones to evolve into extratropical cyclones. This transition can take 1–3 days. Even after a tropical cyclone is said to be extratropical or dissipated, it can still have tropical storm force (or occasionally hurricane/typhoon force) winds and drop several inches of rainfall. In the Pacific ocean and Atlantic ocean, such tropical-derived cyclones of higher latitudes can be violent and may occasionally remain at hurricane or typhoon-force wind speeds when they reach the west coast of North America. These phenomena can also affect Europe, where they are known as European windstorms; Hurricane Iris's extratropical remnants are an example of such a windstorm from 1995. Additionally, a cyclone can merge with another area of low pressure, becoming a larger area of low pressure. This can strengthen the resultant system, although it may no longer be a tropical cyclone. Studies in the 2000s have given rise to the hypothesis that large amounts of dust reduce the strength of tropical cyclones. In the 1960s and 1970s, the United States government attempted to weaken hurricanes through Project Stormfury by seeding selected storms with silver iodide. It was thought that the seeding would cause supercooled water in the outer rainbands to freeze, causing the inner eyewall to collapse and thus reducing the winds. The winds of Hurricane Debbie—a hurricane seeded in Project Stormfury—dropped as much as 31%, but Debbie regained its strength after each of two seeding forays. 
In an earlier episode in 1947, disaster struck when a hurricane east of Jacksonville, Florida promptly changed its course after being seeded, and smashed into Savannah, Georgia. Because there was so much uncertainty about the behavior of these storms, the federal government would not approve seeding operations unless the hurricane had a less than 10% chance of making landfall within 48 hours, greatly reducing the number of possible test storms. The project was dropped after it was discovered that eyewall replacement cycles occur naturally in strong hurricanes, casting doubt on the result of the earlier attempts. Today, it is known that silver iodide seeding is not likely to have an effect because the amount of supercooled water in the rainbands of a tropical cyclone is too low. Other approaches have been suggested over time, including cooling the water under a tropical cyclone by towing icebergs into the tropical oceans. Other ideas range from covering the ocean in a substance that inhibits evaporation, dropping large quantities of ice into the eye at very early stages of development (so that the latent heat is absorbed by the ice, instead of being converted to kinetic energy that would feed the positive feedback loop), or blasting the cyclone apart with nuclear weapons. Project Cirrus even involved throwing dry ice on a cyclone. These approaches all suffer from one flaw above many others: tropical cyclones are simply too large and short-lived for any of the weakening techniques to be practical. Tropical cyclones out at sea cause large waves, heavy rain, and high winds, disrupting international shipping and, at times, causing shipwrecks. Tropical cyclones stir up water, leaving a cool wake behind them, which causes the region to be less favourable for subsequent tropical cyclones. On land, strong winds can damage or destroy vehicles, buildings, bridges, and other outside objects, turning loose debris into deadly flying projectiles. The storm surge, or the increase in sea level due to the cyclone, is typically the worst effect from landfalling tropical cyclones, historically resulting in 90% of tropical cyclone deaths. The broad rotation of a landfalling tropical cyclone, and vertical wind shear at its periphery, spawns tornadoes. Tornadoes can also be spawned as a result of eyewall mesovortices, which persist until landfall. Over the past two centuries, tropical cyclones have been responsible for the deaths of about 1.9 million people worldwide. Large areas of standing water caused by flooding lead to infection, as well as contributing to mosquito-borne illnesses. Crowded evacuees in shelters increase the risk of disease propagation. Tropical cyclones significantly interrupt infrastructure, leading to power outages, bridge destruction, and the hampering of reconstruction efforts. Although cyclones take an enormous toll in lives and personal property, they may be important factors in the precipitation regimes of places they impact, as they may bring much-needed precipitation to otherwise dry regions. Tropical cyclones also help maintain the global heat balance by moving warm, moist tropical air to the middle latitudes and polar regions. The storm surge and winds of hurricanes may be destructive to human-made structures, but they also stir up the waters of coastal estuaries, which are typically important fish breeding locales. Tropical cyclone destruction spurs redevelopment, greatly increasing local property values. 
Intense tropical cyclones pose a particular observation challenge, as they are a dangerous oceanic phenomenon, and weather stations, being relatively sparse, are rarely available on the site of the storm itself. Surface observations are generally available only if the storm is passing over an island or a coastal area, or if there is a nearby ship. Usually, real-time measurements are taken in the periphery of the cyclone, where conditions are less catastrophic and its true strength cannot be evaluated. For this reason, there are teams of meteorologists that move into the path of tropical cyclones to help evaluate their strength at the point of landfall. Tropical cyclones far from land are tracked by weather satellites capturing visible and infrared images from space, usually at half-hour to quarter-hour intervals. As a storm approaches land, it can be observed by land-based Doppler radar. Radar plays a crucial role around landfall by showing a storm's location and intensity every several minutes. In-situ measurements, in real-time, can be taken by sending specially equipped reconnaissance flights into the cyclone. In the Atlantic basin, these flights are regularly flown by United States government hurricane hunters. The aircraft used are WC-130 Hercules and WP-3D Orions, both four-engine turboprop cargo aircraft. These aircraft fly directly into the cyclone and take direct and remote-sensing measurements. The aircraft also launch GPS dropsondes inside the cyclone. These sondes measure temperature, humidity, pressure, and especially winds between flight level and the ocean's surface. A new era in hurricane observation began when a remotely piloted Aerosonde, a small drone aircraft, was flown through Tropical Storm Ophelia as it passed Virginia's Eastern Shore during the 2005 hurricane season. A similar mission was also completed successfully in the western Pacific ocean. This demonstrated a new way to probe the storms at low altitudes that human pilots seldom dare. Because of the forces that affect tropical cyclone tracks, accurate track predictions depend on determining the position and strength of high- and low-pressure areas, and predicting how those areas will change during the life of a tropical system. The deep layer mean flow, or average wind through the depth of the troposphere, is considered the best tool in determining track direction and speed. If storms are significantly sheared, use of wind speed measurements at a lower altitude, such as at the 700 hPa pressure surface (3,000 metres / 9,800 feet above sea level) will produce better predictions. Tropical forecasters also consider smoothing out short-term wobbles of the storm as it allows them to determine a more accurate long-term trajectory. High-speed computers and sophisticated simulation software allow forecasters to produce computer models that predict tropical cyclone tracks based on the future position and strength of high- and low-pressure systems. Combining forecast models with increased understanding of the forces that act on tropical cyclones, as well as with a wealth of data from Earth-orbiting satellites and other sensors, scientists have increased the accuracy of track forecasts over recent decades. However, scientists are not as skillful at predicting the intensity of tropical cyclones. The lack of improvement in intensity forecasting is attributed to the complexity of tropical systems and an incomplete understanding of factors that affect their development. 
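As a rough illustration of the "deep layer mean flow" idea described above, one can average the winds at several pressure levels, weighting each level by the slice of the troposphere it represents. The levels, weights, and wind values below are invented purely for the example; operational steering-layer calculations are considerably more involved.

```python
# Hypothetical single-point sounding: pressure level (hPa) -> (u, v) wind in m/s.
winds = {850: (-6.0, 1.0), 700: (-5.0, 2.0), 500: (-3.0, 3.5), 300: (1.0, 5.0), 200: (4.0, 6.0)}
# Illustrative layer weights (summing to 1); real schemes weight by pressure depth.
weights = {850: 0.25, 700: 0.25, 500: 0.25, 300: 0.15, 200: 0.10}

u_steer = sum(weights[p] * winds[p][0] for p in winds)
v_steer = sum(weights[p] * winds[p][1] for p in winds)
print(f"deep-layer mean steering wind ≈ ({u_steer:.1f}, {v_steer:.1f}) m/s")
# A strongly sheared storm might instead be steered by a single lower level,
# such as the 700 hPa wind mentioned above.
```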
Tropical cyclones are classified into three main groups, based on intensity: tropical depressions, tropical storms, and a third group of more intense storms, whose name depends on the region. For example, if a tropical storm in the Northwestern Pacific reaches hurricane-strength winds on the Beaufort scale, it is referred to as a typhoon; if a tropical storm passes the same benchmark in the Northeast Pacific Basin, or in the Atlantic, it is called a hurricane. Neither "hurricane" nor "typhoon" is used in either the Southern Hemisphere or the Indian Ocean. In these basins, storms of tropical nature are referred to simply as "cyclones". Additionally, as indicated in the table below, each basin uses a separate system of terminology, making comparisons between different basins difficult. In the Pacific Ocean, hurricanes from the Central North Pacific sometimes cross the International Date Line into the Northwest Pacific, becoming typhoons (such as Hurricane/Typhoon Ioke in 2006); on rare occasions, the reverse will occur. Typhoons with sustained winds greater than 67 metres per second (130 kn) or 150 miles per hour (240 km/h) are called super typhoons by the Joint Typhoon Warning Center.

A tropical depression is an organized system of clouds and thunderstorms with a defined, closed surface circulation and maximum sustained winds of less than 17 metres per second (33 kn) or 39 miles per hour (63 km/h). It has no eye and does not typically have the organization or the spiral shape of more powerful storms. However, it is already a low-pressure system, hence the name "depression". The Philippines names tropical depressions under its own naming convention when the depressions are within the Philippines' area of responsibility.

A tropical storm is an organized system of strong thunderstorms with a defined surface circulation and maximum sustained winds between 17 metres per second (33 kn; 39 miles per hour; 63 km/h) and 32 metres per second (62 kn; 73 miles per hour; 117 km/h). At this point, the distinctive cyclonic shape starts to develop, although an eye is not usually present. Government weather services, other than the Philippines, first assign names to systems that reach this intensity (thus the term named storm).

A hurricane or typhoon (sometimes simply referred to as a tropical cyclone, as opposed to a depression or storm) is a system with sustained winds of at least 33 metres per second (64 kn) or 74 miles per hour (119 km/h). A cyclone of this intensity tends to develop an eye, an area of relative calm (and lowest atmospheric pressure) at the center of circulation. The eye is often visible in satellite images as a small, circular, cloud-free spot. Surrounding the eye is the eyewall, an area about 16 kilometres (9.9 mi) to 80 kilometres (50 mi) wide in which the strongest thunderstorms and winds circulate around the storm's center. Maximum sustained winds in the strongest tropical cyclones have been estimated at about 85 metres per second (165 kn) or 195 miles per hour (314 km/h).
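The generic intensity thresholds just described (17 m/s and 33 m/s) can be captured in a few lines. This is an illustrative sketch and the function name is mine; real agencies apply basin-specific scales like those in the table that follows.

```python
def intensity_class(max_sustained_wind_ms: float) -> str:
    """Generic intensity class from maximum sustained winds in m/s."""
    if max_sustained_wind_ms < 17:   # below 33 kn / 39 mph
        return "tropical depression"
    if max_sustained_wind_ms < 33:   # 17-32 m/s, i.e. 39-73 mph
        return "tropical storm"
    return "hurricane/typhoon/cyclone (name depends on the basin)"

print(intensity_class(12))  # tropical depression
print(intensity_class(25))  # tropical storm
print(intensity_class(70))  # hurricane/typhoon/cyclone (name depends on the basin)
```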
|Tropical Cyclone Classifications (all winds are 10-minute averages)|
|Beaufort scale||10-minute sustained winds (knots)||N Indian Ocean||SW Indian Ocean||NE Pacific & NHC, CHC & CPHC|
|0–6||<28 knots (32 mph; 52 km/h)||Depression||Trop. Disturbance||Tropical Low||Tropical Depression||Tropical Depression||Tropical Depression||Tropical Depression|
|7||28–29 knots (32–33 mph; 52–54 km/h)||Deep Depression||Depression|
||30–33 knots (35–38 mph; 56–61 km/h)||Tropical Storm||Tropical Storm|
|8–9||34–47 knots (39–54 mph; 63–87 km/h)||Cyclonic Storm||Moderate Tropical Storm||Tropical Cyclone (1)||Tropical Cyclone (1)||Tropical Storm|
|10||48–55 knots (55–63 mph; 89–102 km/h)||Severe Cyclonic Storm||Severe Tropical Storm||Tropical Cyclone (2)||Tropical Cyclone (2)||Severe Tropical Storm|
|11||56–63 knots (64–72 mph; 104–117 km/h)||Typhoon||Hurricane (1)|
|12||64–72 knots (74–83 mph; 119–133 km/h)||Very Severe Cyclonic Storm||Tropical Cyclone||Severe Tropical Cyclone (3)||Severe Tropical Cyclone (3)||Typhoon|
||73–85 knots (84–98 mph; 135–157 km/h)||Hurricane (2)|
||86–89 knots (99–102 mph; 159–165 km/h)||Severe Tropical Cyclone (4)||Severe Tropical Cyclone (4)||Major Hurricane (3)|
||90–99 knots (100–110 mph; 170–180 km/h)||Intense Tropical Cyclone|
||100–106 knots (115–120 mph; 190–200 km/h)||Major Hurricane (4)|
||107–114 knots (123–131 mph; 198–211 km/h)||Severe Tropical Cyclone (5)||Severe Tropical Cyclone (5)|
||115–119 knots (132–137 mph; 213–220 km/h)||Very Intense Tropical Cyclone||Super Typhoon|
|>120 knots (140 mph; 220 km/h)||Super Cyclonic Storm||Major Hurricane (5)|

The word typhoon, which is used today in the Northwest Pacific, may be derived from the Urdu, Persian, and Arabic ţūfān (طوفان), which in turn originates from the Greek tuphōn (Τυφών), a monster in Greek mythology responsible for hot winds. The related Portuguese word tufão, used in Portuguese for typhoons, is also derived from the Greek tuphōn. It is also similar to the Chinese "dafeng" ("daifung" in Cantonese) (大風 – literally "big winds"), which may explain why "typhoon" came to be used for East Asian cyclones. The word hurricane, used in the North Atlantic and Northeast Pacific, is derived from the name of a native Caribbean Amerindian storm god, Huracan, via the Spanish huracán. (Huracan is also the source of the word Orcan, another word for the European windstorm. These events should not be confused.) Huracan became the Spanish term for hurricanes.

Storms reaching tropical storm strength were initially given names to eliminate confusion when multiple systems are active in any individual basin at the same time, which assists in warning people of the coming storm. In most cases, a tropical cyclone retains its name throughout its life; however, under special circumstances, tropical cyclones may be renamed while active. These names are taken from lists that vary from region to region and are usually drafted a few years ahead of time. Depending on the region, the lists are decided on either by committees of the World Meteorological Organization (called primarily to discuss many other issues), or by national weather offices involved in the forecasting of the storms. Each year, the names of particularly destructive storms (if there are any) are "retired" and new names are chosen to take their place.

Tropical cyclones that cause extreme destruction are rare, although when they occur, they can cause great amounts of damage or thousands of fatalities. The 1970 Bhola cyclone is the deadliest tropical cyclone on record, killing more than 300,000 people and potentially as many as 1 million after striking the densely populated Ganges Delta region of Bangladesh on 13 November 1970. Its powerful storm surge was responsible for the high death toll.
The North Indian cyclone basin has historically been the deadliest basin. Elsewhere, Typhoon Nina killed nearly 100,000 in China in 1975 due to a 100-year flood that caused 62 dams, including the Banqiao Dam, to fail. The Great Hurricane of 1780 is the deadliest Atlantic hurricane on record, killing about 22,000 people in the Lesser Antilles. A tropical cyclone does not need to be particularly strong to cause memorable damage, particularly if the deaths are from rainfall or mudslides. Tropical Storm Thelma in November 1991 killed thousands in the Philippines, while in 1982, the unnamed tropical depression that eventually became Hurricane Paul killed around 1,000 people in Central America.

Hurricane Katrina is estimated as the costliest tropical cyclone worldwide, causing $81.2 billion in property damage (2008 USD) with overall damage estimates exceeding $100 billion (2005 USD). Katrina killed at least 1,836 people after striking Louisiana and Mississippi as a major hurricane in August 2005. Hurricane Andrew is the second most destructive tropical cyclone in U.S. history, with damages totaling $40.7 billion (2008 USD), and with damage costs at $31.5 billion (2008 USD), Hurricane Ike is the third most destructive tropical cyclone in U.S. history. The Galveston Hurricane of 1900 is the deadliest natural disaster in the United States, killing an estimated 6,000 to 12,000 people in Galveston, Texas. Hurricane Mitch caused more than 10,000 fatalities in Latin America. Hurricane Iniki in 1992 was the most powerful storm to strike Hawaii in recorded history, hitting Kauai as a Category 4 hurricane, killing six people, and causing U.S. $3 billion in damage. Other destructive Eastern Pacific hurricanes include Pauline and Kenna, both causing severe damage after striking Mexico as major hurricanes. In March 2004, Cyclone Gafilo struck northeastern Madagascar as a powerful cyclone, killing 74, affecting more than 200,000, and becoming the worst cyclone to affect the nation for more than 20 years.

The most intense storm on record was Typhoon Tip in the northwestern Pacific Ocean in 1979, which reached a minimum pressure of 870 mbar (25.69 inHg) and maximum sustained wind speeds of 165 knots (85 m/s) or 190 miles per hour (310 km/h). Tip, however, does not solely hold the record for fastest sustained winds in a cyclone. Typhoon Keith in the Pacific and Hurricanes Camille and Allen in the North Atlantic currently share this record with Tip. Camille was the only storm to actually strike land while at that intensity, making it, with 165 knots (85 m/s) or 190 miles per hour (310 km/h) sustained winds and 183 knots (94 m/s) or 210 miles per hour (340 km/h) gusts, the strongest tropical cyclone on record at landfall. Typhoon Nancy in 1961 had recorded wind speeds of 185 knots (95 m/s) or 215 miles per hour (346 km/h), but recent research indicates that wind speeds from the 1940s to the 1960s were gauged too high, and Nancy is no longer considered the storm with the highest wind speeds on record. Similarly, a surface-level gust caused by Typhoon Paka on Guam was recorded at 205 knots (105 m/s) or 235 miles per hour (378 km/h). Had it been confirmed, it would be the strongest non-tornadic wind ever recorded on the Earth's surface, but the reading had to be discarded since the anemometer was damaged by the storm. In addition to being the most intense tropical cyclone on record, Tip was the largest cyclone on record, with tropical storm-force winds 2,170 kilometres (1,350 mi) in diameter.
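The wind figures in this section mix knots, metres per second, and miles per hour; a quick sketch of the standard conversion factors shows how the quoted numbers relate (the printed values are rounded by the format string).

```python
# Standard conversion factors for the wind speeds quoted above.
KNOT_TO_MS = 0.514444
KNOT_TO_MPH = 1.150779

for knots in (165, 183, 205):
    print(f"{knots} kn ≈ {knots * KNOT_TO_MS:.0f} m/s ≈ {knots * KNOT_TO_MPH:.0f} mph")
# 165 kn ≈ 85 m/s ≈ 190 mph; 183 kn ≈ 94 m/s ≈ 211 mph; 205 kn ≈ 105 m/s ≈ 236 mph
# (the text's 210 mph and 235 mph reflect slightly different rounding).
```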
The smallest storm on record, Tropical Storm Marco, formed during October 2008, and made landfall in Veracruz. Marco generated tropical storm-force winds only 37 kilometres (23 mi) in diameter. Hurricane John is the longest-lasting tropical cyclone on record, lasting 31 days in 1994. Before the advent of satellite imagery in 1961, however, many tropical cyclones were underestimated in their durations. John is the second longest-tracked tropical cyclone in the Northern Hemisphere on record, behind Typhoon Ophelia of 1960, which had a path of 8,500 miles (12,500 km). Reliable data for Southern Hemisphere cyclones is unavailable. Most tropical cyclones form on the side of the subtropical ridge closer to the equator, then move poleward past the ridge axis before recurving into the main belt of the Westerlies. When the subtropical ridge position shifts due to El Nino, so will the preferred tropical cyclone tracks. Areas west of Japan and Korea tend to experience much fewer September-November tropical cyclone impacts during El Niño and neutral years. During El Niño years, the break in the subtropical ridge tends to lie near 130°E which would favor the Japanese archipelago. During El Niño years, Guam's chance of a tropical cyclone impact is one-third of the long term average. The tropical Atlantic ocean experiences depressed activity due to increased vertical wind shear across the region during El Niño years. During La Niña years, the formation of tropical cyclones, along with the subtropical ridge position, shifts westward across the western Pacific ocean, which increases the landfall threat to China. While the number of storms in the Atlantic has increased since 1995, there is no obvious global trend; the annual number of tropical cyclones worldwide remains about 87 ± 10. However, the ability of climatologists to make long-term data analysis in certain basins is limited by the lack of reliable historical data in some basins, primarily in the Southern Hemisphere. In spite of that, there is some evidence that the intensity of hurricanes is increasing. Kerry Emanuel stated, "Records of hurricane activity worldwide show an upswing of both the maximum wind speed in and the duration of hurricanes. The energy released by the average hurricane (again considering all hurricanes worldwide) seems to have increased by around 70% in the past 30 years or so, corresponding to about a 15% increase in the maximum wind speed and a 60% increase in storm lifetime." Atlantic storms are becoming more destructive financially, since five of the ten most expensive storms in United States history have occurred since 1990. According to the World Meteorological Organization, “recent increase in societal impact from tropical cyclones has largely been caused by rising concentrations of population and infrastructure in coastal regions.” Pielke et al. (2008) normalized mainland U.S. hurricane damage from 1900–2005 to 2005 values and found no remaining trend of increasing absolute damage. The 1970s and 1980s were notable because of the extremely low amounts of damage compared to other decades. The decade 1996–2005 was the second most damaging among the past 11 decades, with only the decade 1926–1935 surpassing its costs. The most damaging single storm is the 1926 Miami hurricane, with $157 billion of normalized damage. 
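Emanuel's destructiveness measure, referred to in the quotation above and again in the following paragraphs as a potential destructiveness index, is usually written as a power dissipation integral. The form below comes from the wider literature rather than from this text, so treat it as background:

```latex
\mathrm{PDI} = \int_{0}^{\tau} V_{\max}^{3}\, \mathrm{d}t
```

Here V_max is the storm's maximum sustained wind speed and τ its lifetime, with the index summed over all storms in a season or basin. Because the wind enters cubed, even modest upward trends in peak winds or in storm duration show up as large changes in the index.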
Often in part because of the threat of hurricanes, many coastal regions had sparse population between major ports until the advent of automobile tourism; therefore, the most severe portions of hurricanes striking the coast may have gone unmeasured in some instances. The combined effects of ship destruction and remote landfall severely limit the number of intense hurricanes in the official record before the era of hurricane reconnaissance aircraft and satellite meteorology. Although the record shows a distinct increase in the number and strength of intense hurricanes, therefore, experts regard the early data as suspect. The number and strength of Atlantic hurricanes may undergo a 50–70 year cycle, also known as the Atlantic Multidecadal Oscillation. Nyberg et al. reconstructed Atlantic major hurricane activity back to the early 18th century and found five periods averaging 3–5 major hurricanes per year and lasting 40–60 years, and six other averaging 1.5–2.5 major hurricanes per year and lasting 10–20 years. These periods are associated with the Atlantic multidecadal oscillation. Throughout, a decadal oscillation related to solar irradiance was responsible for enhancing/dampening the number of major hurricanes by 1–2 per year. Although more common since 1995, few above-normal hurricane seasons occurred during 1970–94. Destructive hurricanes struck frequently from 1926–60, including many major New England hurricanes. Twenty-one Atlantic tropical storms formed in 1933, a record only recently exceeded in 2005, which saw 28 storms. Tropical hurricanes occurred infrequently during the seasons of 1900–25; however, many intense storms formed during 1870–99. During the 1887 season, 19 tropical storms formed, of which a record 4 occurred after 1 November and 11 strengthened into hurricanes. Few hurricanes occurred in the 1840s to 1860s; however, many struck in the early 19th century, including a 1821 storm that made a direct hit on New York City. Some historical weather experts say these storms may have been as high as Category 4 in strength. These active hurricane seasons predated satellite coverage of the Atlantic basin. Before the satellite era began in 1960, tropical storms or hurricanes went undetected unless a reconnaissance aircraft encountered one, a ship reported a voyage through the storm, or a storm hit land in a populated area. The official record, therefore, could miss storms in which no ship experienced gale-force winds, recognized it as a tropical storm (as opposed to a high-latitude extra-tropical cyclone, a tropical wave, or a brief squall), returned to port, and reported the experience. Proxy records based on paleotempestological research have revealed that major hurricane activity along the Gulf of Mexico coast varies on timescales of centuries to millennia. Few major hurricanes struck the Gulf coast during 3000–1400 BC and again during the most recent millennium. These quiescent intervals were separated by a hyperactive period during 1400 BC and 1000 AD, when the Gulf coast was struck frequently by catastrophic hurricanes and their landfall probabilities increased by 3–5 times. This millennial-scale variability has been attributed to long-term shifts in the position of the Azores High, which may also be linked to changes in the strength of the North Atlantic Oscillation. According to the Azores High hypothesis, an anti-phase pattern is expected to exist between the Gulf of Mexico coast and the Atlantic coast. 
During the quiescent periods, a more northeasterly position of the Azores High would result in more hurricanes being steered towards the Atlantic coast. During the hyperactive period, more hurricanes were steered towards the Gulf coast as the Azores High was shifted to a more southwesterly position near the Caribbean. Such a displacement of the Azores High is consistent with paleoclimatic evidence that shows an abrupt onset of a drier climate in Haiti around 3200 ¹⁴C years BP, and a change towards more humid conditions in the Great Plains during the late Holocene as more moisture was pumped up the Mississippi Valley through the Gulf coast. Preliminary data from the northern Atlantic coast seem to support the Azores High hypothesis. A 3000-year proxy record from a coastal lake in Cape Cod suggests that hurricane activity increased significantly during the past 500–1000 years, just as the Gulf coast was amid a quiescent period of the last millennium.

The U.S. National Oceanic and Atmospheric Administration Geophysical Fluid Dynamics Laboratory performed a simulation to determine if there is a statistical trend in the frequency or strength of tropical cyclones over time. The simulation concluded "the strongest hurricanes in the present climate may be upstaged by even more intense hurricanes over the next century as the earth's climate is warmed by increasing levels of greenhouse gases in the atmosphere". In an article in Nature, Kerry Emanuel stated that potential hurricane destructiveness, a measure combining hurricane strength, duration, and frequency, "is highly correlated with tropical sea surface temperature, reflecting well-documented climate signals, including multidecadal oscillations in the North Atlantic and North Pacific, and global warming". Emanuel predicted "a substantial increase in hurricane-related losses in the twenty-first century". In more recent work published by Emanuel (in the March 2008 issue of the Bulletin of the American Meteorological Society), he states that new climate modeling data indicates "global warming should reduce the global frequency of hurricanes." The new work suggests that, even in a dramatically warming world, hurricane frequency and intensity may not substantially rise during the next two centuries. Similarly, P.J. Webster and others published an article in Science examining the "changes in tropical cyclone number, duration, and intensity" over the past 35 years, the period when satellite data has been available. Their main finding was that, although the number of cyclones decreased throughout the planet excluding the north Atlantic Ocean, there was a great increase in the number and proportion of very strong cyclones.

|Rank||Hurricane||Season||Cost (2005 USD)|
|6||"New England"||1938||$39.2 billion|
(Main article: List of costliest Atlantic hurricanes)

The strength of the reported effect is surprising in light of modeling studies that predict only a one-half category increase in storm intensity as a result of a ~2 °C (3.6 °F) global warming. Such a response would have predicted only a ~10% increase in Emanuel's potential destructiveness index during the 20th century rather than the ~75–120% increase he reported. Moreover, after adjusting for changes in population and inflation, and despite a more than 100% increase in Emanuel's potential destructiveness index, no statistically significant increase in the monetary damages resulting from Atlantic hurricanes has been found.
Sufficiently warm sea surface temperatures are considered vital to the development of tropical cyclones. Although neither study can directly link hurricanes with global warming, the increase in sea surface temperatures is believed to be due to both global warming and natural variability, e.g. the hypothesized Atlantic Multidecadal Oscillation (AMO), although an exact attribution has not been defined. However, recent temperatures are the warmest ever observed for many ocean basins. In February 2007, the United Nations Intergovernmental Panel on Climate Change released its fourth assessment report on climate change. The report noted many observed changes in the climate, including atmospheric composition, global average temperatures, ocean conditions, among others. The report concluded the observed increase in tropical cyclone intensity is larger than climate models predict. Additionally, the report considered that it is likely that storm intensity will continue to increase through the 21st century, and declared it more likely than not that there has been some human contribution to the increases in tropical cyclone intensity. However, there is no universal agreement about the magnitude of the effects anthropogenic global warming has on tropical cyclone formation, track, and intensity. For example, critics such as Chris Landsea assert that man-made effects would be "quite tiny compared to the observed large natural hurricane variability". A statement by the American Meteorological Society on 1 February 2007 stated that trends in tropical cyclone records offer "evidence both for and against the existence of a detectable anthropogenic signal" in tropical cyclogenesis. Although many aspects of a link between tropical cyclones and global warming are still being "hotly debated", a point of agreement is that no individual tropical cyclone or season can be attributed to global warming. Research reported in the 3 September 2008 issue of Nature found that the strongest tropical cyclones are getting stronger, particularly over the North Atlantic and Indian oceans. Wind speeds for the strongest tropical storms increased from an average of 140 miles per hour (230 km/h) in 1981 to 156 miles per hour (251 km/h) in 2006, while the ocean temperature, averaged globally over the all regions where tropical cyclones form, increased from 28.2 °C (82.8 °F) to 28.5 °C (83.3 °F) during this period. In addition to tropical cyclones, there are two other classes of cyclones within the spectrum of cyclone types. These kinds of cyclones, known as extratropical cyclones and subtropical cyclones, can be stages a tropical cyclone passes through during its formation or dissipation. An extratropical cyclone is a storm that derives energy from horizontal temperature differences, which are typical in higher latitudes. A tropical cyclone can become extratropical as it moves toward higher latitudes if its energy source changes from heat released by condensation to differences in temperature between air masses; additionally, although not as frequently, an extratropical cyclone can transform into a subtropical storm, and from there into a tropical cyclone. From space, extratropical storms have a characteristic "comma-shaped" cloud pattern. Extratropical cyclones can also be dangerous when their low-pressure centers cause powerful winds and high seas. A subtropical cyclone is a weather system that has some characteristics of a tropical cyclone and some characteristics of an extratropical cyclone. 
They can form in a wide band of latitudes, from the equator to 50°. Although subtropical storms rarely have hurricane-force winds, they may become tropical in nature as their cores warm. From an operational standpoint, a tropical cyclone is usually not considered to become subtropical during its extratropical transition. In popular culture, tropical cyclones have made appearances in different types of media, including films, books, television, music, and electronic games. The media can have tropical cyclones that are entirely fictional, or can be based on real events. For example, George Rippey Stewart's Storm, a best-seller published in 1941, is thought to have influenced meteorologists into giving female names to Pacific tropical cyclones. Another example is the hurricane in The Perfect Storm, which describes the sinking of the Andrea Gail by the 1991 Perfect Storm. Also, hypothetical hurricanes have been featured in parts of the plots of series such as The Simpsons, Invasion, Family Guy, Seinfeld, Dawson's Creek, and CSI Miami. The 2004 film The Day After Tomorrow includes several mentions of actual tropical cyclones as well as featuring fantastical "hurricane-like" non-tropical Arctic storms.
http://www.thefullwiki.org/Tropical_cyclones
13
60
The Old Chicken Run Problem uses algebraic equations to determine the maximum area of a rectangle with a given partial perimeter. To be able to do this problem students need to be able to measure lengths and calculate areas of rectangles using the formula: area = length x width. They should probably have tried Peter’s Second String, Level 5 and seen how to use tables to solve the problem, or tried Peter’s Third String, Level 6 and seen how to use algebra to solve the problem.

Apparently in some areas of New Guinea they measure the area of land by its perimeter. When you think about it, this isn’t such a good idea. A piece of land can have a relatively large perimeter and only a small area. This sequence of problems is built up from this simple bad idea.

There are seven problems in the Problem Solving section that focus on the perimeter-area relationship. These come in two sets. First there is the set of Peter’s String problems. These are Peter’s String, Measurement, Level 4, Peter’s Second String, Measurement, Level 5, Peter’s Third String, Algebra, Level 6, The Old Chicken Run Problem, Algebra, Level 6 and the Polygonal String Problem, Algebra, Level 6. These follow through on the non-link between rectangles’ areas and perimeters, going as far as showing that among all quadrilaterals with a fixed perimeter, the square has the largest area. In the second-to-last of these five problems we are able to use an idea that has been developed to look at the old problem of maximising the area of a chicken run. This is often given as an early application of calculus but doesn’t need more than an elementary knowledge of parabolas. The final problem looks at the areas of regular polygons with a fixed perimeter. We show that they are ‘bounded above’ by the circle with the same perimeter.

The second set of lessons looks at the problem from the other side: does area have anything to say about perimeter? This leads to questions about the maximum and minimum perimeters for a given area. The lessons here are Karen’s Tiles, Measurement, Level 5 and Karen’s Second Tiles, Algebra, Level 6.

Mathematics is more than doing calculations or following routine instructions. Thinking and creating are at the heart of the subject. Though there are some problems that have a set procedure or a formula that can be used to solve them, most worthwhile problems require the use of known mathematics (but not necessarily formulae) in a novel way. Throughout this Problem Solving section of the website we are hoping to motivate students to think about what they are doing and see connections between various aspects of what they are doing.

The mathematical question asked here is: what can we say about the rectangle of biggest area that is enclosed by a wall and a fence? This question is typical of a lot of mathematical ones that attempt to maximise quantities with given restrictions. There are obvious benefits for this type of maximising activity. The problem and the Extension here are useful as introductions to the type of problem that occurs often in Levels 7 and 8, where calculus will be used to maximise or minimise various quantities. The method of finding two equations and eliminating one of the variables is one that is often used in max and min problems in calculus. The ideas in this sequence of problems further help to develop the student’s concept of mathematics, the thought structure underlying the subject, and the way the subject develops.
We start off with a piece of string and use this to realise that there is no direct relation between the perimeter of a rectangle and its area. This leads us to thinking about what areas are possible. A natural consequence of this is to try to find the largest and smallest areas that a given perimeter can encompass. We end up solving both these problems. The largest area comes from a square, and the smallest area is as small as we like to make it.

Some of the techniques we have used to produce the largest area are then applied in a completely different situation – the chicken run. This positive offshoot of what is initially a very pure piece of mathematics is the kind of thing that frequently happens in maths. Somehow, sanitised bits of mathematics, produced in a pure mathematician's head, can often be applied to real situations.

The next direction that the problem takes is to turn the original question around. Don't ask, given the perimeter, what do we know about the area; ask, given the area, what do we know about the perimeter. Again there seems to be no direct link. But having spent time with rectangles, the obvious thing to do is to look at other shapes.

The problem: The farmer was putting a new chicken run up against a brick wall. He had 20 metres of wire to put round the run. If he made a rectangular run, how big an area could he enclose?

- Introduce the problem to the class. Get them to consider how they would approach the problem.
- Initially, let them investigate the problem in any way that they want. They might want to start off with string and make a model of the situation. To use a table they will have to find some equations. They may need some help to do this.
- Move round the groups as they work to check on progress. Encourage them to set up some equations and reduce the number of dependent variables to one.
- If a lot of the pairs are having problems, then you may want a class brainstorming session to help them along.
- Share the students' answers. Get them to write up their work in their books. Make sure that they have carefully explained their arguments.
- Get the more able students to try the Extension Problem.

Extension to the problem: The farmer decided that he wanted to have some 'rooms' in the chicken run to separate some of the hens. So he used the 20 m of wire slightly differently. We show this in the diagram. What is the biggest area that he can contain now?

Solution: We will do this problem in the most sophisticated way open to the students at this point in time. In the diagram below, put in a variable x, the distance that the run is from the wall, and y, the length of the run. Then we can set up some equations.

First we know that the length of the chicken wire is 20 m. But it is also equal to x + y + x. So 2x + y = 20 … (1). Then we know that the area, A, of the chicken run is xy. So A = xy … (2).

Eliminating y from (1) and (2) gives A = x(20 – 2x). At this point we are in exactly the same situation as we were in Peter's Third String, Level 6. We have a parabola. Its maximum point is halfway between x = 0 and x = 10 (where 20 – 2x = 0). So the maximum is at x = 5. When x = 5, A = 50. So the maximum area is 50 m².

Notes: 1. This problem can be solved using a table as in Peter's Second String, Level 5. It can also be solved using calculus, but that seems an unnecessarily complicated way to solve it. 2. The answer to this problem is not a square: the chicken run of maximum area does not have x = y.

Solution to the extension: The two equations we get this time are 4x + y = 20 and A = xy.
Eliminating y now gives the equation A = x(20 – 4x). This parabola has its maximum point halfway between x = 0 and x = 5. So the maximum is at x = 2.5, where A = 25 m².
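The parabola argument can also be checked numerically. Here is a minimal Python sketch (ours, not part of the original lesson; the function names are invented) that tabulates areas for both versions of the run and picks out the maximum, mirroring the table method of Peter's Second String, Level 5:

# Tabulate A = x*(20 - 2x) for the plain run and A = x*(20 - 4x) for the extension.

def run_area(x, sides_of_wire=2):
    """Area of a run of depth x against the wall, using 20 m of wire.

    sides_of_wire counts how many lengths of x the wire must cover:
    2 for the plain run (2x + y = 20), 4 for the extension (4x + y = 20).
    """
    y = 20 - sides_of_wire * x
    return x * y

def maximise(sides_of_wire, step=0.1):
    best_x, best_area = 0.0, 0.0
    x = 0.0
    while 20 - sides_of_wire * x >= 0:
        area = run_area(x, sides_of_wire)
        if area > best_area:
            best_x, best_area = x, area
        x = round(x + step, 10)
    return best_x, best_area

print(maximise(2))  # (5.0, 50.0): the 50 m² maximum at x = 5
print(maximise(4))  # (2.5, 25.0): the 25 m² maximum at x = 2.5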
http://www.nzmaths.co.nz/resource/old-chicken-run-problem
The squaring function is denoted by a superscript 2, as in (x + 1)². However, when superscripts are not available, as for instance in programming languages or plain text files, the notations x^2 or x**2 are commonly used. The adjective corresponding to squaring is quadratic.

An integer that is the square of some other integer, for example 25, which is 5², is known as a square number. All square numbers are non-negative. More generally, for any ring, an element which is the square of some other element is known as a perfect square, or more simply, a square.

In algebraic structures where addition, subtraction and multiplication are defined and the multiplication is a bilinear map,[1] the square function satisfies the identity x² = (−x)², i.e. the squaring function is even. The squaring function increases monotonically on the non-negative numbers [0, +∞) and decreases monotonically on (−∞, 0]. Hence, zero is its global minimum. The only cases where the square x² of a number is less than x occur when 0 < x < 1, that is, when x belongs to the open interval (0, 1). This implies that the square of an integer is never less than the original number.

Every positive real number is the square of exactly two numbers, one strictly positive and one strictly negative, while 0 is the square of 0 alone. For this reason it is possible to define the square root function, which associates with a real number the non-negative number whose square is the original number. No square root can be taken of a negative number within the system of real numbers, because squares of all real numbers are non-negative. This motivates the expansion of the real number system to the complex numbers, by postulating the imaginary unit i, which is one of the square roots of −1.

The property "every non-negative real number is a square" has been generalized to the notion of a real closed field, which is an ordered field such that every non-negative element is a square. The real closed fields cannot be distinguished from the field of real numbers by their algebraic properties: every property of the real numbers that can be expressed in first-order logic (that is, by a formula in which the variables quantified by ∀ or ∃ represent elements, not sets) is true for every real closed field, and conversely, every first-order property that is true for a specific real closed field is also true for the real numbers.

The squaring function is defined in any field. An element in the image of this function is called a square of the field, and the inverse images of a square are called square roots. A square usually has two square roots, whose sum is 0. There are two exceptions: in any field, 0 has only one square root, namely 0 itself; and in a field of characteristic 2, an element has zero or one square root. Otherwise, any non-zero element either has two square roots (see below why not more) or does not have any.

Given an odd prime number p, a non-zero element of the field Z/pZ with p elements is a quadratic residue if it is a square in Z/pZ. Otherwise, it is a quadratic non-residue. Zero, while a square, is not considered a quadratic residue. There are (p − 1)/2 quadratic residues and (p − 1)/2 quadratic non-residues. The quadratic residues form a group under multiplication. The properties of quadratic residues are widely used in number theory.
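To make the quadratic-residue count concrete, here is a short self-contained Python sketch (our illustration, not part of the article); p can be any odd prime:

# Quadratic residues modulo an odd prime p: the non-zero squares in Z/pZ.

def quadratic_residues(p):
    """Return the set of non-zero quadratic residues modulo the odd prime p."""
    return {pow(x, 2, p) for x in range(1, p)}

p = 11
residues = quadratic_residues(p)
non_residues = set(range(1, p)) - residues

print(sorted(residues))             # [1, 3, 4, 5, 9] for p = 11
print(len(residues), (p - 1) // 2)  # both equal (p - 1)/2 = 5
print(len(non_residues))            # also 5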
The squaring function is defined in any ring. Depending on the ring, it may have different properties that are sometimes used to classify rings. Zero may be the square of some non-zero elements. A commutative ring in which the square of a non-zero element is never zero is called a reduced ring. More generally, in a commutative ring, a radical ideal is an ideal I such that x² ∈ I implies x ∈ I. Both notions are important in algebraic geometry, because of Hilbert's Nullstellensatz.

An element of a ring that is equal to its square is called an idempotent. In any ring, 0 and 1 are idempotents. There are no other idempotents in fields and, more generally, in integral domains. Also, each element of an integral domain has no more than 2 square roots, due to the difference of two squares identity:[2] if u² − v² = 0, then u = v or u + v = 0, where the latter means that the two roots are additive inverses of each other.

There are several major uses of the squaring function in geometry. The name of the squaring function shows its importance in the definition of area: it comes from the fact that the area of a square with sides of length l is equal to l². The area depends quadratically on the size: the area of a shape n times larger is n² times greater. The inverse-square law is a manifestation of the quadratic dependence of the area of a sphere on its radius.

The squaring function is related to distance through the Pythagorean theorem and its generalization, the parallelogram law. Euclidean distance is not a smooth function, nor are any of its odd powers C∞-smooth. It is the square of the distance (denoted d² or r²) which is a smooth and analytic function. The dot product of a Euclidean vector with itself is equal to the square of its length: v ⋅ v = v². This is further generalised to quadratic forms in linear spaces. The inertia tensor in mechanics is an example of a quadratic form. It demonstrates a quadratic relation of the moment of inertia to the size (length).

In linear algebra, a projection is a function from a vector space into itself which is equal to its square (under function composition). The usual projections of geometry are special cases of this general notion. The functions from a vector space to itself whose squares are the identity function are called involutions. They constitute an important class of symmetries in geometry, which contains the reflections. On the complex numbers, a related and more well-known function is the square of the absolute value, |z|² = z z̄ (z times its complex conjugate), which is real-valued. It is very important for quantum mechanics: see probability amplitude and the Born rule. On the complex plane, this function equals the square of the distance to 0 discussed above.

Squaring is used in statistics and probability theory in determining the standard deviation of a set of values, or of a random variable. The deviation of each value xᵢ from the mean of the set is defined as the difference xᵢ − x̄ (the value minus the mean). These deviations are squared, and then a mean is taken of the new set of numbers (each of which is non-negative). This mean is the variance, and its square root is the standard deviation. In finance, the volatility of a financial instrument is the standard deviation of its values.
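As a concrete illustration of the statistics paragraph above (our own sketch, not part of the article), the variance and standard deviation fall straight out of the squared deviations:

import math

def variance_and_std(values):
    """Population variance and standard deviation via squared deviations."""
    mean = sum(values) / len(values)
    squared_deviations = [(x - mean) ** 2 for x in values]  # each is non-negative
    variance = sum(squared_deviations) / len(values)        # the mean of the squares
    return variance, math.sqrt(variance)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
var, std = variance_and_std(data)
print(var, std)   # 4.0 2.0 for this sample data set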
- Exponentiation by squaring
- Polynomial SOS, the representation of a non-negative polynomial as the sum of squares of polynomials
- Hilbert's seventeenth problem, for the representation of positive polynomials as a sum of squares of rational functions
- Square-free polynomial
- Cube (algebra)
- Metric tensor
- Quadratic equation
- Polynomial ring
- Difference of two squares
- Brahmagupta–Fibonacci identity
- Euler's four-square identity
- Degen's eight-square identity
- Lagrange's identity
- acceleration, length per square time
- cross section (physics), an area-dimensioned quantity
- coupling constant (has square charge in the denominator, and may be expressed with square distance in the numerator)
- kinetic energy (quadratic dependence on velocity)
- specific energy, a (square velocity)-dimensioned quantity

Notes
1. Algebraic structures with "+", "−" and "⋅" include, but are not limited to, rings (such as various number systems) and inner product spaces (such as Euclidean vector space or Hilbert space). These examples show all prominent properties of the squaring function, such as its algebraic identities.
2. The u² − v² = (u − v)(u + v) identity is provided by commutativity of multiplication in an integral domain.
http://www.bioscience.ws/encyclopedia/index.php?title=Square_(algebra)
A plane figure is reflection-symmetric if and only if there is a line which reflects the figure onto itself. This line is a symmetry line for the figure.

The capital letters A, B, C, D, E, H, I, K, M, O, T, U, V, W, X, and Y are often written as reflection-symmetric figures. Some are symmetric about a horizontal line (B, C, D, E, H, I, K, O, X) whereas others are symmetric about a vertical line (A, H, I, M, O, T, U, V, W, X, Y). Since some letters appear in both lists (H, I, O, X), a figure may have more than one line of symmetry. A challenge would be to find words such as DIXIE or COOKBOOK composed entirely of letters with a horizontal line of symmetry, or MOM, WAXY, YOUTH (written vertically!) composed entirely of letters with a vertical line of symmetry. After collecting enough of these words you might make them into a crossword puzzle (for extra credit)!

Our textbook states and proves what it calls the Flip-Flop Theorem (reflection is symmetric): if F and G are points or figures and r_l(F) = G, then r_l(G) = F, where r_l denotes reflection over the line l. From this it can be proved that every segment has two lines of symmetry: itself and its perpendicular bisector. This is the same as the letter I discussed above. Angles have only one line of symmetry: the angle bisector, which reflects one ray onto the other ray. A circle has infinitely many lines of symmetry (no matter which way you draw the diameter, the semicircles are reflections of each other). The section concludes with the following important result: if a figure is symmetric, then any pair of corresponding parts under the symmetry are congruent.

Rorschach inkblots and logos are commonly reflection-symmetric. These symmetries will be useful when applied to various polygons. Symmetry is also important in algebra. The function y = x² defines a parabola in which the sign of x doesn't matter. This makes it an even function (the exponent of 2 is another clue).

Also, our definition of isosceles includes, rather than excludes, the equilateral triangle. Just as there are special names associated with the sides of a right triangle (hypotenuse and legs), there are special names associated with the angles and sides of an isosceles triangle. The angle determined by the two equal sides is called the vertex angle. The side opposite the vertex angle is called the base. The two angles opposite the equal sides are the base angles (and are equal). These can also be described as the angles at the endpoints of the base. Three important theorems are as follows; certain terms will be defined further below.

The line containing the bisector of the vertex angle of an isosceles triangle is a symmetry line for the triangle.

In an isosceles triangle, the bisector of the vertex angle, the perpendicular bisector of the base, and the median to the base determine the same line.

If a triangle has two congruent sides, then the angles opposite them are congruent.

Every equilateral triangle has three lines of symmetry. These are the bisectors of the angles/sides.

If a triangle is equilateral, then it is equiangular.

A corollary (a theorem which logically follows immediately from another theorem) is that the angles of an equilateral triangle are all 60°. Although the line of symmetry of an isosceles triangle is an angle bisector, median, perpendicular bisector, and an altitude, in most triangles these lines are different.
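A quick coordinate check of that last claim (a sketch of our own, not from the textbook; the triangles chosen are arbitrary): for an isosceles triangle, the median, the vertex-angle bisector, and the altitude from the apex all meet the base at the same point, while for a scalene triangle they land in three different places.

import math

def feet_from_apex(A, B, C):
    """Feet on line BC of the median, the angle bisector from A, and the altitude from A."""
    ax, ay = A; bx, by = B; cx, cy = C

    # Median foot: the midpoint of the base BC.
    median_foot = ((bx + cx) / 2, (by + cy) / 2)

    # Angle-bisector foot: divides BC in the ratio AB : AC (angle bisector theorem).
    ab = math.dist(A, B)
    ac = math.dist(A, C)
    t = ab / (ab + ac)
    bisector_foot = (bx + t * (cx - bx), by + t * (cy - by))

    # Altitude foot: the orthogonal projection of A onto line BC.
    dx, dy = cx - bx, cy - by
    s = ((ax - bx) * dx + (ay - by) * dy) / (dx * dx + dy * dy)
    altitude_foot = (bx + s * dx, by + s * dy)

    return median_foot, bisector_foot, altitude_foot

# Isosceles triangle (AB = AC): all three feet coincide at the midpoint of the base.
print(feet_from_apex((0, 4), (-3, 0), (3, 0)))

# Scalene triangle: the three feet are three different points.
print(feet_from_apex((1, 4), (0, 0), (6, 0)))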
A ray is an angle bisector if and only if it forms two angles of equal measure with the sides of the angle.

The three angle bisectors of a triangle are coincident at the incenter. The incenter is equidistant (distance r) from all three sides of a triangle. Thus if a circle were drawn with the incenter as its center and radius r, it would be inscribed in the triangle.

A segment, ray, or line is a perpendicular bisector (of a segment) if and only if it contains the midpoint of the segment and is perpendicular to the segment.

The three perpendicular bisectors of the sides of a triangle are coincident at the circumcenter. The circumcenter is equidistant (distance r) from all three vertices of a triangle. Thus if a circle were drawn with the circumcenter as its center and radius r, it would circumscribe the triangle.

A segment is an altitude if and only if it is perpendicular to the line containing the side opposite a vertex and contains that vertex. Altitude can also refer to the length of the segment described above. Trapezoids also have altitudes. In addition, the height of 3-dimensional objects such as prisms, cylinders, pyramids, and cones is termed the altitude.

The three altitudes of a triangle are coincident at the orthocenter. The orthocenter need not be in the interior of a triangle. It will be located inside only if the triangle is acute. If the triangle is right, the orthocenter is at the vertex of the right angle. If the triangle is obtuse, the orthocenter will be outside the triangle.

A segment is a median of a triangle if and only if it connects one vertex to the midpoint of the opposite side.

The three medians of a triangle are coincident at the centroid. If the triangle were made of a uniformly dense material, the centroid would be the center of mass or center of gravity of the triangle. A thin solid object in this shape would balance at this point. Thus if a triangle is hung by a vertex, a line toward the local point of gravitational attraction (the local nadir, or straight down) would describe a median and go through the centroid. Medians also have another important property: the centroid divides each median in a 2:1 ratio, with the larger portion (2/3 of the median) toward the vertex and the smaller portion (1/3) toward the midpoint of the opposite side.

The circumcenter, orthocenter, and centroid are always collinear. This line is called the Euler Line. In an isosceles triangle, all four of these points (including the incenter) are collinear. In an equilateral triangle, all four are coincident. An interesting construction is the nine-point circle, the circle which goes through the midpoints of each side, the foot of each altitude, and the midpoints of the segments between the orthocenter and each vertex.

A quadrilateral is a kite if and only if it has two distinct pairs of consecutive sides congruent. This name should be familiar from the shape of the kite allegedly flown by Benjamin Franklin, a shape now used primarily for toys. An arrowhead, or the Star Trek chevron, is typically in the shape of a nonconvex kite. Another common name for this shape is dart. The vertices shared by the congruent sides are the ends. The line containing the ends of a kite is a symmetry line for the kite. The symmetry line for a kite bisects the angles at the ends of the kite. The symmetry diagonal of a kite is a perpendicular bisector of the other diagonal.

A quadrilateral is a trapezoid if and only if it has at least one pair of sides parallel. Some books define trapezoids as having exactly one pair of parallel sides, so beware.
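Before continuing with trapezoids, here is a small coordinate sketch (ours, not the textbook's; the triangle is arbitrary) that checks two results above: the centroid, circumcenter, and orthocenter are collinear (the Euler Line), and the centroid sits two-thirds of the way along a median from the vertex.

def centroid(A, B, C):
    return ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)

def circumcenter(A, B, C):
    """Intersection of the perpendicular bisectors (standard coordinate formula)."""
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay) + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx) + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

def orthocenter(A, B, C):
    """Uses the vector identity H = A + B + C - 2*O, where O is the circumcenter."""
    ox, oy = circumcenter(A, B, C)
    return (A[0] + B[0] + C[0] - 2 * ox, A[1] + B[1] + C[1] - 2 * oy)

def collinear(P, Q, R, tol=1e-9):
    """Zero twice-signed-area test for three points."""
    return abs((Q[0] - P[0]) * (R[1] - P[1]) - (Q[1] - P[1]) * (R[0] - P[0])) < tol

A, B, C = (0.0, 0.0), (7.0, 1.0), (2.0, 5.0)
G, O, H = centroid(A, B, C), circumcenter(A, B, C), orthocenter(A, B, C)

print(collinear(G, O, H))   # True: the Euler Line

# The 2:1 ratio on the median from A to the midpoint M of BC.
M = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)
print(abs(G[0] - (A[0] + 2/3 * (M[0] - A[0]))) < 1e-9,
      abs(G[1] - (A[1] + 2/3 * (M[1] - A[1]))) < 1e-9)   # True True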
The parallel sides in a trapezoid are called bases. In a trapezoid, consecutive angles between pairs of parallel sides are supplementary.

A trapezoid is an isosceles trapezoid if and only if it has base angles which are congruent. It follows directly that the sides opposite the congruent angles in an isosceles trapezoid are congruent. In an isosceles trapezoid, the perpendicular bisector of one base is also the perpendicular bisector of the other base. This bisector is thus also a line of symmetry.

One of my favorite questions uses an isosceles trapezoid. If we are given the height of an isosceles trapezoid, as well as the lengths of its two bases, it is possible to find its perimeter. Example: suppose we know a certain isosceles trapezoid has bases of 10 and 16 with height 4. Right triangles are formed outside the rectangular region defined by the height and the shorter base. These right triangles have a base of 3 (half the difference of the bases), a height of 4, and thus a hypotenuse of 5. Hence the perimeter is 10 + 16 + 5 + 5 = 36. The triangles formed need not have integer sides; we will continue with area in a later lesson.

A quadrilateral is a parallelogram if and only if both pairs of opposite sides are parallel.
A quadrilateral is a rectangle if and only if all its angles are congruent.
A quadrilateral is a rhombus if and only if all its sides are congruent.
A quadrilateral is a square if and only if all its sides and all its angles are congruent.

A plane figure F is rotation-symmetric if and only if there is a rotation R of magnitude strictly between 0° and 360° such that R(F) = F. The center of R is the center of symmetry for F. A figure is said to have n-fold rotational symmetry if n rotations, each of magnitude 360°/n, produce an identical figure; the last rotation returns it to its original position.

A figure can have rotational and reflective properties separately or together. A figure with reflection symmetry can have rotational symmetry if and only if its lines of symmetry intersect. Going back to our letter examples, H, I, O, and X had two intersecting lines of symmetry and thus both reflective and rotational symmetry. The letters N, S, and Z have rotational symmetry but not reflective symmetry. Certain letters (M & W, b & q, d & p, n & u, h & y? or 4??) rotate into the other of the pair!

A polygon can be either equilateral or equiangular without necessarily being both (regular). The rectangle is an example of an equiangular quadrilateral, and the rhombus is an example of an equilateral quadrilateral; neither has to be a square. However, for 3-gons, as stated in the theorem above, an equilateral triangle must also be equiangular.

In any regular polygon, a point termed the center is equidistant from all vertices. The distance from the midpoint of a side of a regular polygon to the center is the apothem. The apothem may also refer to the segment itself, not just its length. The apothem is often used in formulas. For example, the area of a regular polygon is A = asn/2, where a is the apothem, s is the length of each side, and n is the number of sides. Since p = sn (the perimeter is the number of sides times the length of each side), this can also be written A = ap/2.

Every regular n-gon has n lines of symmetry and n-fold rotational symmetry. The lines of symmetry are all either angle bisectors or perpendicular bisectors of the sides (or both, if n is odd).

The measure of the internal angles of a regular n-gon can be found as follows. Triangulate the polygon by drawing the n − 3 diagonals from one vertex to all other vertices. This divides the n-gon into n − 2 triangles.
The angles of any triangle sum to 180°. Thus the internal angles of any n-gon sum to (n-2)180°. The internal angles of a regular n-gon will be (n-2)180°/n. We gave the formula for the number of diagonals of an n-gon in lesson 2 as n(n-3)/2. You might consider further how many different lengths these diagonals might be, especially in a regular polygon.
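To tie the last few results together, here is a small illustrative Python sketch (ours, not the textbook's). It computes the interior angle and diagonal count of a regular n-gon, the area A = ap/2 from the apothem (using the standard apothem formula a = s / (2 tan(π/n)), which the lesson does not derive), and the isosceles-trapezoid perimeter from the worked example above:

import math

def interior_angle(n):
    """Interior angle of a regular n-gon, in degrees: (n - 2) * 180 / n."""
    return (n - 2) * 180 / n

def diagonal_count(n):
    """Number of diagonals of an n-gon: n(n - 3)/2."""
    return n * (n - 3) // 2

def regular_polygon_area(n, s):
    """A = a*p/2, with apothem a = s / (2 tan(pi/n)) and perimeter p = n*s."""
    a = s / (2 * math.tan(math.pi / n))
    return a * (n * s) / 2

def isosceles_trapezoid_perimeter(short_base, long_base, height):
    """Legs are hypotenuses of the right triangles outside the central rectangle."""
    overhang = (long_base - short_base) / 2
    leg = math.hypot(overhang, height)
    return short_base + long_base + 2 * leg

print(interior_angle(6), diagonal_count(6))        # 120.0 degrees, 9 diagonals
print(round(regular_polygon_area(4, 5), 6))        # 25.0, matching s*s for a square
print(isosceles_trapezoid_perimeter(10, 16, 4))    # 36.0, as in the example above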
http://www.andrews.edu/~calkins/math/webtexts/geom06.htm
Reviews for Tests Most tests and quizzes will consist of short answer questions, calculations, and other "typical" math problems. Some questions are similar to homework, while others ask you to synthesize the text and class discussion. Expect one short essay question. You should look over your class notes, your returned homework, and the homework answer sheet. The following questions provide a guide to the important ideas of each chapter. Chapter 1: Analytic Geometry - What is mathematics? What do mathematicians study? Is math a science or an art? - What different kinds of numbers are there? Give examples. - What is analytic geometry? What are its "parts"? Why is it a great idea? Give some examples of how analytic geometry approaches problems. Compare and contrast to other approaches. - What is algebra about? What is it an abstraction of? What is algebra used for? - How do you solve a quadratic equation? - What is geometry about? Who were the first people to do geometry? - What are some of the terms that Euclid defined? What are his "Common Notions" and what role do they play? What are his Postulates and what role do they play? - State some theorems (propositions) that are found in Euclid's Elements. How do you prove them? - Which is easier: algebra without geometry, or geometry without algebra? Explain. - What is a line? How do you find the equation of a line? What information do you need to find this equation? - What is a circle? How do you find the equation of a circle? What information do you need to find this equation? - What are some other useful formulas from Cartesian geometry? Chapter 2: Modeling - What is a mathematical model? What purposes does it serve, and how is it used? - What families of models did we study? What are their formulas? Draw graphs characteristic of each kind of model. What type of parameters a and b are associated with each graph? - How do you decide which model is most likely to fit a data set? - How do you find a model to fit a set of data? Describe the role of the "new data" and how it is derived from the original data for each kind of model. - How do you measure the goodness of the fit? - What kind of real world situations can mathematics model well? Which models go with which situations most frequently? What kind of situations might mathematics have a harder time with? - Who in history was instrumental in developing the mathematical view of the world that scientists use today? What, briefly, did they do, and when did they live? - Write a paragraph about mathematical modeling: what is it, why is it a great idea? Give an example to illustrate your points. Chapter 3: Calculus - What kind of questions does calculus answer? Why are simpler methods not sufficient? - Where besides math is calculus used? - Compare and contrast the value of a function f at x = a with the limit of the function at a. - Summarize the "calculus approach" without resorting to short-hand formulas. In particular, write an English description of how you find the instantaneous velocity of an object from the position function. Also, how do you find the area under the graph of a function and over an interval on the x-axis? - What four different kinds of problems are united under the calculus umbrella? - State the Fundamental Theorem of Calculus in some form. - What does the symbol Δ represent? How is it used to indicate change and rate of change? - What units are typically used for rate of change? - Compare the ideas of average rate of change and instantaneous rate of change. 
- What is a derivative? an antiderivative? - Using tables of derivatives or antiderivatives, how do you do the following? - find the instantaneous velocity of a position function? - find the position function from the velocity function? - find the slope of a line tangent to a graph? find the equation of that line? - find the area under the graph of a function? - How do you estimate answers to the above when the tables cannot be used? - What does the program SUM do, and for what kinds of problems is it helpful? - How old is calculus? Who first articulated the methods of calculus? When? - Give at least one interesting fact about the history of calculus or its discoverer that is not found in our textbook. Chapter 4: Statistics - What kinds of questions are addressed by probability and statistics? - What is a probability? How are probabilities calculated? - What is a statistic? How many did we study? How do you calculate them? - What is a statistical distribution? How is a distribution described? - What is a Bernoulli trial? Give some examples. - What kind of experiments probably result in normal distributions? Give some examples. - In a normal distribution, what is the relationship between an x-value and a z-value? - Given a population with some normal attribute, how do you determine either what fraction or how many of the population are within a certain range for the attribute? - What is the Z-test? When can it be used? How is it performed? What kinds of answers does it produce? How accurate is it? - Write a paragraph about statistics: what is it, how was it a change from previous mathematical approaches, why is it a great idea? Give an example to illustrate your points. Chapter 5: Proof - What is a mathematical proof? What purposes does it serve? - What is an axiomatic system? Give two examples we have studied in this course. - Explain the terms primitive term, axiom, theorem. - Explain the terms and, or, not, negation, converse, contrapositive, implies, all, some. How are these ideas and the rules of logic used to create new statements from old ones? - How do you negate a statement? That is, how do you turn a false statement into a true one and vice-versa? How can you simplify negated statements? - Give some examples of true mathematical statements. - Give an example of a true statement that is not addressed in the field of mathematics. - How is mathematical truth different from truth in other fields? - Give an example of one theorem and its proof. - What rules of arithmetic and algebra did we use in proofs? In particular, what is the Zero Principle? - How do you prove that something exists? How do you prove that something is unique? How do you prove that a given set of numbers is the entire solution set to an equation? How do you prove that an algebraic equation is always true? How do you prove that an algebraic equation is not always true? - What is a natural number? What are the precise definitions of even number and odd number? - What philosophical issues must mathematics address about proof? - How do mathematicians judge whether a proof is correct? Chapter 6: Famous Theorems - State several famous theorems that we studied. - Give particular examples of the theorems you listed. - Where possible, draw pictures illustrating the theorems you listed. Final Exam (Spring 2011) - Know Euclid's proof methods for Propositions I.1, I.3, I.9, and I.10. - Explain how analytic geometry contributes to each of the other areas we studied: modeling, calculus, and statistics. 
- Were there hints of the methods of calculus in Greek geometry? Explain. How did calculus improve upon ancient Greek geometry?
- What models does calculus make the most use of?
- Describe a probabilistic experiment yielding data that is well fit by a simple model.
- How does calculus help in statistics?
- What theorem from this course do you think is especially important or beautiful? Support your choice.
- What is mathematics?
http://www2.stetson.edu/~mhale/ideas/reviews.htm
This course examines some of the major ideas in measurement. You will explore procedures for measuring and learn about standard units in the metric and customary systems, the relationships among units, and the approximate nature of measurement. You will also examine how measurement can illuminate mathematical concepts such as irrational numbers, properties of circles, or area and volume formulas, and discover how other mathematical concepts can inform measurement tasks such as indirect measurement. The course consists of 10 sessions, each with a half hour of video programming, problem-solving activities provided online and in a print guide, and interactive activities and demonstrations on the Web. Although each session includes suggested times for how long it may take to complete all of the required activities, these times are approximate. Some activities may take longer. You should allow at least two and a half hours for each session. The 10th session (choose video program 10, 11, or 12, depending on your grade level) explores ways to apply the concepts of measurement you've learned in your own classroom. You should complete the sessions sequentially. Session 1: What Does It Mean To Measure? Explore what can be measured and what it means to measure. Identify measurable properties such as weight, surface area, and volume, and discuss which metric units are more appropriate for measuring these properties. Refine your use of precision instruments, and learn about alternate methods such as displacement. Explore approximation techniques, and reason about how to make better approximations. Session 2: Fundamentals of Measurement Investigate the difference between a count and a measure, and examine essential ideas such as unit iteration, partitioning, and the compensatory principle. Learn about the many uses of ratio in measurement and how scale models help us understand relative sizes. Investigate the constant of proportionality in isosceles right triangles, and learn about precision and accuracy in measurement. Session 3: The Metric System Learn about the relationships between units in the metric system and how to represent quantities using different units. Estimate and measure quantities of length, mass, and capacity, and solve measurement problems. Session 4: Angle Measurement Review appropriate notation for angle measurement, and describe angles in terms of the amount of turn. Use reasoning to determine the measures of angles in polygons based on the idea that there are 360 degrees in a complete turn. Learn about the relationships among angles within shapes, and generalize a formula for finding the sum of the angles in any n-gon. Use activities based on Geo-Logo to explore the differences among interior, exterior, and central angles. Session 5: Indirect Measurement and Trigonometry Learn how to use the concept of similarity to measure distance indirectly, using methods involving similar triangles, shadows, and transits. Apply basic right-angle trigonometry to learn about the relationships among steepness, angle of elevation, and height-to-distance ratio. Use trigonometric ratios to solve problems involving right triangles. Session 6: Area Learn that area is a measure of how much surface is covered. Explore the relationship between the size of the unit used and the resulting measurement. Find the area of irregular shapes by counting squares or subdividing the figure into sections. Learn how to approximate the area more accurately by using smaller and smaller units. 
Relate this counting approach to the standard area formulas for triangles, trapezoids, and parallelograms. Session 7: Circles and Pi Investigate the circumference and area of a circle. Examine what underlies the formulas for these measures, and learn how the features of the irrational number pi (π) affect both of these measures. Session 8: Volume Explore several methods for finding the volume of objects, using both standard cubic units and non-standard measures. Explore how volume formulas for solid objects such as spheres, cylinders, and cones are derived and related. Session 9: Measurement Relationships Examine the relationships between area and perimeter when one measure is fixed. Determine which shapes maximize area while minimizing perimeter, and vice versa. Explore the proportional relationship between surface area and volume. Construct open-box containers, and use graphs to approximate the dimensions of the resulting rectangular prism that holds the maximum volume. Session 10: Classroom Case Studies Explore how the concepts developed in this course can be applied at different grade levels. Examine case studies of K-2, 3-5, and 6-8 teachers (former course participants, all of whom have adapted their new knowledge to their classrooms), as well as a set of typical measurement problems for these levels of students.
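Session 9's open-box activity lends itself to a quick computational sketch (ours, not part of the course materials; the 20 cm × 20 cm sheet is an assumed example size). Cut equal squares of side x from the corners, fold up the sides, and tabulate the volume to approximate the cut size that maximises it:

# Open-box volume: from a 20 cm x 20 cm sheet, cut x-by-x squares from each corner
# and fold up the sides. The volume is V(x) = x * (20 - 2x)^2; stepping through
# small values of x approximates the maximising cut, much as the course's graph does.

def box_volume(x, sheet=20.0):
    return x * (sheet - 2 * x) ** 2

best_x, best_v = 0.0, 0.0
x = 0.0
while x <= 10.0:                      # beyond x = 10 the sides vanish
    v = box_volume(x)
    if v > best_v:
        best_x, best_v = x, v
    x = round(x + 0.05, 10)

print(best_x, round(best_v, 2))       # about x = 3.35, V = 592.58 (true optimum: x = 10/3, V = 592.59)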
http://www.learner.org/courses/learningmath/measurement/overview/index.html
Many students seem puzzled by function notation well after it has been introduced, and ask "why can't we just write y instead of f(x)?". To motivate the use of function notation and improve understanding, I advocate using multi-variable functions instead of single-variable functions in introducing this notation. My introduction usually proceeds something like this:

Suppose you are talking to a friend over the telephone, and they have a piece of paper in front of them on which they have drawn a circle using a ruler and a compass. You need the clearest, most concise instructions possible so that you can exactly duplicate the appearance of the paper your friend has… your circle must be exactly the same size as theirs, and in the exact same location on the piece of paper. What information do you need in order to be able to do this?

For the size of the circle, either the radius or the diameter will work nicely (most students come up with this answer quickly). If you are using a compass to draw the circle, the radius will probably be more convenient. However, the radius alone does not tell you where to locate the circle on the paper. What aspect of a circle will do the best job of telling you where it is located (this question is often challenging for students to answer initially)? The center of a circle is the most useful attribute for identifying its location. The location of a circle's center on a piece of paper can be described using a coordinate pair (assuming the bottom left corner of the paper is the origin, and the coordinates are given in centimeters). So, you will only need three numbers to duplicate the circle.

Suppose your friend tells you three numbers: "3, 12, and 7". Can you duplicate their circle with that information? Why, or why not? There is a small problem… your friend has not told you what each of the numbers represents. This is the first big reason for using "function notation". If someone asks you to draw Circle(3, 12, 7) without giving you any other information, you don't have enough information to be certain you are drawing the correct circle. Which of the three numbers is the radius? Or is it a diameter? Which is the x-coordinate of the center? Etc.

What is missing is the "function definition". The function definition tells you what the function does, how many parameters (or "arguments") the function requires as inputs, what those parameters are, and in what order they occur. In this example, the following information is probably what you wanted to know:

Circle(x, y, R) draws a circle with center at (x, y) and radius R

Now you have enough information to interpret "Circle(3, 12, 7)" correctly. The definition tells you the name of the function (for when it is referred to later), how many parameters have to be supplied (three in this case), the order they appear in the parameter list that must be used when the function is invoked (x-coordinate, then y-coordinate, then radius), and what the function will do with that information.

Function Notation in Math Class

Turning to how function notation will typically be used in math class, you will usually see function definitions that look like this:

f(x) = 3x + 2

Or, in English: there is a function called "f" (instead of "Circle" as used in my example above) which requires only one parameter, which is labelled "x". You will be asked to evaluate the function "f" for a particular value of "x" using this notation:

f(4)

This can be read as "what is the value of f when its parameter is 4?". To answer such a question, refer to the right hand side of the function definition above, which tells you to multiply the first (and only) parameter by three, then add two. So, in this example

f(4) = 3(4) + 2 = 14
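Since the post leans on the programming analogy, here is how these two definitions might look as actual code (an illustrative sketch of our own, with names mirroring the post's examples):

def circle(x, y, r):
    """Circle(x, y, R): describe the circle with center (x, y) and radius r.

    The parameter list is the whole point: the caller must supply the
    x-coordinate, then the y-coordinate, then the radius, in that order.
    """
    return f"circle of radius {r} centered at ({x}, {y})"

def f(x):
    """The math-class example: f(x) = 3x + 2."""
    return 3 * x + 2

print(circle(3, 12, 7))   # the ambiguity disappears once the definition fixes the order
print(f(4))               # 14, matching the worked evaluation above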
Furthermore (thanks to Christopher for pointing this out), when you begin to talk about more than one function at a time, as in "compare these two graphs" or "which function reaches 100 first?", it becomes very useful to have a name for each graph or function. If we had defined them both using "y =", we would be stuck trying to distinguish between them using potentially ambiguous phrases like "the first equation" or "the curve with the highest y-intercept". If equations are defined using function notation, for example as f(x) and g(x), the notation provides names for us to use when referring to them. We can then use "f" and "g" to label each on the graph, refer to them in conversation, and/or invoke them on specific values. This eliminates much confusion when comparing multiple functions, talking about functions of functions such as f(g(x)), or referring to functions defined previously.

There is one additional notation you should know about: the symbol used to indicate "composition of functions", or "taking a function of a function". The following two expressions are equivalent:

f(g(x)) = (f ∘ g)(x)

So, when you see the open circle between two function names, evaluate the one to the right of the circle, then plug its result in for the variable in the function on the left of the circle. Or, if you prefer the notation on the left of the equal sign above, you are welcome to convert the expression to that notation. For example, using the definitions of "f" and "g" from above, f ∘ g means: evaluate g at x first, then apply f to the result.

Alternatives to Function Notation

What would we do if function notation did not exist? We would have to invent one or more alternative notations. Yet, we already use such notations all the time. A single (unary) minus sign tells us to negate a quantity. If we were to define it using function notation, it might look like:

Neg(x) = the value which, when added to x, will equal zero

Instead of using the plus sign to represent addition, we could define the Add function:

Add(x, y) = the arithmetic sum of x and y

Instead of using exponent notation, we could define the Square function as:

Square(x) = the result of multiplying x by itself

However, the notation we use for the most frequently used operations is much more convenient than function notation. Consider for a second how polynomial functions would look in the absence of exponential and arithmetic notation: something like x² + x + 3 would have to be written as Add(Add(Square(x), x), 3). So, function notation is wonderful for complex or infrequently used functions. However, it can get in the way for simple operations or ones that are used frequently.
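A small code sketch of the composition idea (ours, not the post's; g is given an arbitrary definition purely for illustration):

def f(x):
    """f(x) = 3x + 2, as defined earlier in the post."""
    return 3 * x + 2

def g(x):
    """An arbitrary second function, used only to illustrate composition."""
    return x ** 2 - 1

def compose(outer, inner):
    """Return the composed function: evaluate inner first, then outer."""
    return lambda x: outer(inner(x))

f_of_g = compose(f, g)

print(f(g(2)))      # g(2) = 3, then f(3) = 11
print(f_of_g(2))    # same thing: (f compose g)(2) = 11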
Function Notation in other subjects

There is one more wrinkle that is useful to know about. You may occasionally see function definitions in which constants appear on the right-hand side, for example a definition along the lines of f(x) = ax + b. If you read such a definition carefully, you will notice that there are two variables in it that are NOT shown in the parameter list. Shock and dismay! What to do? This situation usually occurs when there are constants involved ("a" and "b" in the example) which people prefer to identify by name instead of by their value. For example, the constant π (pi) is almost always referred to by its Greek letter instead of its decimal approximation, because this allows the person using the constant to fill in as many decimal places as they need in their calculation, and also because this gives someone reading the function definition a better idea of what this constant represents (where it came from). Any variables in a function definition that are NOT included in the parameter list should be constants that you already know from the context in which the function is being used (Physics and Economics are two areas where this happens quite often).

Why is it f(x) so often?

Why don't math teachers give their functions more interesting names, instead of "f", "g", or perhaps even the exotic "h"? Usually because they would like to be able to write the function name quickly and clearly, and because the function is an abstract one… it does not actually correspond to a real-world situation. For example, a function that describes the height of an object above the ground (a projectile motion problem) might be defined with the constants written out as plain numbers by a math teacher who is not discussing any of the physics behind it, and as something like

H(t) = −(1/2)gt² + v₀t + h₀

by a physics teacher, where "g" is a constant used to represent the acceleration due to gravity, "v₀" is the initial velocity, and "h₀" is the initial height. The initial velocity and height values might be included as variables in the definition in order to remind all students of the role those constants play in the definition, even though their values are given immediately below. A teacher who is going to have to write the name of the function on the board many times over the next few minutes is much more likely to pick a single-letter function name (like "H" above, which can be written very quickly and clearly) than a word (like "Height", as might be used in a computer programming language).

But why is f(x) used so often? I think mostly because "f" is the first letter of the word "function", and when a teacher is making up a random function to illustrate a point, they do not wish to spend too much time being creative with the name of the function, thus "f" leaps onto the board.

So, the next time you see a function defined this way, you may still roll your eyes at the use of "f(x)" instead of "y", but you will hopefully now understand why this notation exists, what sorts of situations require it to be used, and how to interpret it. From a teacher's perspective, I would much rather write the definition above and then ask you to evaluate f(5) than have to write out "…Please calculate the value of 'y' when 'x' is 5…".

Notation is a shorthand, a concise way of representing relationships or operations. Mathematics has many notations, including the arithmetic operators (plus, minus, times, divided by), powers, roots, logarithms, sigma notation, pi notation, and others. Function notation is yet another notation, one which you will see used many times in quantitative subjects like biology, chemistry, physics, economics, probability, statistics, and mathematics.
http://mathmaine.wordpress.com/2010/02/22/function-notation/
Hubble's law is the name for the observation in physical cosmology that: (1) all objects observed in deep space (intergalactic space) have a Doppler-shift-observable relative velocity to Earth, and to each other; and (2) this Doppler-shift-measured velocity, of various galaxies receding from the Earth, is proportional to their distance from the Earth and all other interstellar bodies. In effect, the space-time volume of the observable universe is expanding, and Hubble's law is the direct physical observation of this process. The motion of astronomical objects due solely to this expansion is known as the Hubble flow.

Hubble's law is considered the first observational basis for the expanding-space paradigm and today serves as one of the pieces of evidence most often cited in support of the Big Bang model. Although widely attributed to Edwin Hubble, the law was first derived from the general relativity equations by Georges Lemaître in a 1927 article, where he proposed that the Universe is expanding and suggested an estimated value of the rate of expansion, now called the Hubble constant. Two years later Edwin Hubble confirmed the existence of that law and determined a more accurate value for the constant that now bears his name. The recession velocity of the objects was inferred from their redshifts, many measured earlier by Vesto Slipher (1917) and related to velocity by him.

The law is often expressed by the equation v = H₀D, with H₀ the constant of proportionality (the Hubble constant) between the "proper distance" D to a galaxy (which can change over time, unlike the comoving distance) and its velocity v (i.e. the derivative of proper distance with respect to the cosmological time coordinate; see Uses of the proper distance for some discussion of the subtleties of this definition of 'velocity'). The SI unit of H₀ is s⁻¹, but it is most frequently quoted in (km/s)/Mpc, thus giving the speed in km/s of a galaxy 1 megaparsec (3.09×10¹⁹ km) away. The reciprocal of H₀ is the Hubble time. As of 20 December 2012 the Hubble constant, as measured by NASA's Wilkinson Microwave Anisotropy Probe (WMAP) and reported on arXiv (http://arxiv.org/pdf/1212.5225.pdf), is 69.32 ± 0.80 (km/s)/Mpc (or 21.25 ± 0.25 (km/s) per mega-light-year).

A decade before Hubble made his observations, a number of physicists and mathematicians had established a consistent theory of the relationship between space and time by using Einstein's field equations of general relativity. Applying the most general principles to the nature of the universe yielded a dynamic solution that conflicted with the then prevailing notion of a static universe.

FLRW equations

In 1922, Alexander Friedmann derived his Friedmann equations from Einstein's field equations, showing that the universe might expand at a rate calculable by the equations. The parameter used by Friedmann is known today as the scale factor, which can be considered as a scale-invariant form of the proportionality constant of Hubble's law. Georges Lemaître independently found a similar solution in 1927. The Friedmann equations are derived by inserting the metric for a homogeneous and isotropic universe into Einstein's field equations for a fluid with a given density and pressure. This idea of an expanding spacetime would eventually lead to the Big Bang and Steady State theories of cosmology.
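A back-of-the-envelope sketch (ours, not from the article) of how the numbers in v = H₀D work out, using the WMAP value quoted above:

# Hubble's law v = H0 * D, with H0 in (km/s)/Mpc.
H0 = 69.32                    # (km/s)/Mpc, the WMAP value quoted above
KM_PER_MPC = 3.09e19          # kilometres in one megaparsec, as used in the article
SECONDS_PER_YEAR = 3.156e7

def recession_velocity(distance_mpc):
    """Recession velocity in km/s of a galaxy at the given proper distance in Mpc."""
    return H0 * distance_mpc

print(recession_velocity(1))      # ~69 km/s at 1 Mpc
print(recession_velocity(100))    # ~6932 km/s at 100 Mpc

# The Hubble time 1/H0, after converting H0 to SI units of 1/s:
H0_per_second = H0 / KM_PER_MPC   # (km/s)/Mpc divided by km/Mpc gives 1/s
hubble_time_years = 1 / H0_per_second / SECONDS_PER_YEAR
print(round(hubble_time_years / 1e9, 1))   # roughly 14.1 billion years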
Shape of the universe Before the advent of modern cosmology, there was considerable talk about the size and shape of the universe. In 1920, the famous Shapley-Curtis debate took place between Harlow Shapley and Heber D. Curtis over this issue. Shapley argued for a small universe the size of the Milky Way galaxy and Curtis argued that the universe was much larger. The issue was resolved in the coming decade with Hubble's improved observations. Cepheid variable stars outside of the Milky Way Edwin Hubble did most of his professional astronomical observing work at Mount Wilson Observatory, the world's most powerful telescope at the time. His observations of Cepheid variable stars in spiral nebulae enabled him to calculate the distances to these objects. Surprisingly, these objects were discovered to be at distances which placed them well outside the Milky Way. They continued to be called "nebulae" and it was only gradually that the term "galaxies" took over. Combining redshifts with distance measurements The parameters that appear in Hubble’s law: velocities and distances, are not directly measured. In reality we determine, say, a supernova brightness, which provides information about its distance, and the redshift z = ∆λ/λ of its spectrum of radiation. Hubble correlated brightness and parameter z. Combining his measurements of galaxy distances with Vesto Slipher and Milton Humason's measurements of the redshifts associated with the galaxies, Hubble discovered a rough proportionality between redshift of an object and its distance. Though there was considerable scatter (now known to be caused by peculiar velocities – the 'Hubble flow' is used to refer to the region of space far enough out that the recession velocity is larger than local peculiar velocities), Hubble was able to plot a trend line from the 46 galaxies he studied and obtain a value for the Hubble constant of 500 km/s/Mpc (much higher than the currently accepted value due to errors in his distance calibrations). (See cosmic distance ladder for details.) At the time of discovery and development of Hubble’s law it was acceptable to explain redshift phenomenon as a Doppler shift in the context of special relativity, and use the Doppler formula to associate redshift z with velocity. Today the velocity-distance relationship of Hubble's law is viewed as a theoretical result with velocity to be connected with observed redshift not by the Doppler effect, but by a cosmological model relating recessional velocity to the expansion of the universe. Even for small z the velocity entering the Hubble law is no longer interpreted as a Doppler effect, although at small z the velocity-redshift relation for both interpretations is the same. Hubble Diagram Hubble's law can be easily depicted in a "Hubble Diagram" in which the velocity (assumed approximately proportional to the redshift) of an object is plotted with respect to its distance from the observer. A straight line of positive slope on this diagram is the visual depiction of Hubble's law. Cosmological constant abandoned After Hubble's discovery was published, Albert Einstein abandoned his work on the cosmological constant, which he had designed to modify his equations of general relativity, to allow them to produce a static solution which, in their simplest form, model either an expanding or contracting universe. After Hubble's discovery that the Universe was, in fact, expanding, Einstein called his faulty assumption that the Universe is static his "biggest mistake". 
On its own, general relativity could predict the expansion of the universe, which (through observations such as the bending of light by large masses, or the precession of the orbit of Mercury) could be experimentally observed and compared to his theoretical calculations using particular solutions of the equations he had originally formulated. In 1931, Einstein made a trip to Mount Wilson to thank Hubble for providing the observational basis for modern cosmology.

The discovery of the linear relationship between redshift and distance, coupled with a supposed linear relation between recessional velocity and redshift, yields a straightforward mathematical expression for Hubble's law:

v = H₀D

- v is the recessional velocity, typically expressed in km/s.
- H₀ is Hubble's constant and corresponds to the value of H (often termed the Hubble parameter, which is time dependent and can be expressed in terms of the scale factor) in the Friedmann equations, taken at the time of observation denoted by the subscript 0. This value is the same throughout the universe for a given comoving time.
- D is the proper distance (which can change over time, unlike the comoving distance, which is constant) from the galaxy to the observer, measured in megaparsecs (Mpc), in the 3-space defined by the given cosmological time. (The recession velocity is just v = dD/dt.)

Hubble's law is considered a fundamental relation between recessional velocity and distance. However, the relation between recessional velocity and redshift depends on the cosmological model adopted, and is not established except for small redshifts.

For distances D larger than the radius of the Hubble sphere rHS, objects recede at a rate faster than the speed of light (see Uses of the proper distance for a discussion of the significance of this):

rHS = c / H₀

Since the Hubble "constant" is a constant only in space, not in time, the radius of the Hubble sphere may increase or decrease over various time intervals. The subscript '0' indicates the value of the Hubble constant today. Current evidence suggests the expansion of the universe is accelerating (see Accelerating universe), meaning that for any given galaxy, the recession velocity dD/dt is increasing over time as the galaxy moves to greater and greater distances; however, the Hubble parameter is actually thought to be decreasing with time, meaning that if we were to look at some fixed distance D and watch a series of different galaxies pass that distance, later galaxies would pass that distance at a smaller velocity than earlier ones.

Redshift velocity and recessional velocity

Redshift can be measured by determining the wavelength of a known transition, such as hydrogen α-lines for distant quasars, and finding the fractional shift compared to a stationary reference. Thus redshift is a quantity unambiguous for experimental observation. The relation of redshift to recessional velocity is another matter. For an extensive discussion, see Harrison.

Redshift velocity

The redshift z often is described as a redshift velocity, which is the recessional velocity that would produce the same redshift if it were caused by a linear Doppler effect (which, however, is not the case, as the shift is caused in part by a cosmological expansion of space, and because the velocities involved are too large to use a non-relativistic formula for Doppler shift). This redshift velocity can easily exceed the speed of light. In other words, to determine the redshift velocity vrs, the relation

vrs ≡ cz

is used.
That is, there is no fundamental difference between redshift velocity and redshift: they are rigidly proportional, and not related by any theoretical reasoning. The motivation behind the "redshift velocity" terminology is that the redshift velocity agrees with the velocity from a low-velocity simplification of the so-called Fizeau-Doppler formula

z = λo/λe − 1 = √((1 + v/c)/(1 − v/c)) − 1 ≈ v/c.

Here, λo and λe are the observed and emitted wavelengths respectively. The "redshift velocity" vrs is not so simply related to real velocity at larger velocities, however, and this terminology leads to confusion if interpreted as a real velocity. Next, the connection between redshift or redshift velocity and recessional velocity is discussed. This discussion is based on Sartori.

Recessional velocity

Suppose R(t) is called the scale factor of the universe, and increases as the universe expands in a manner that depends upon the cosmological model selected. Its meaning is that all measured proper distances D(t) between co-moving points increase proportionally to R. (The co-moving points are not moving relative to each other except as a result of the expansion of space.) In other words:

D(t) = D(t₀) R(t)/R(t₀),

where t₀ is some reference time. If light is emitted from a galaxy at time tₑ and received by us at t₀, it is redshifted due to the expansion of space, and this redshift z is simply:

z = R(t₀)/R(tₑ) − 1.

Suppose a galaxy is at distance D, and this distance changes with time at a rate dD/dt. We call this rate of recession the "recession velocity" vr:

vr = dD/dt = [(dR/dt)/R] D.

We now define the Hubble constant as

H ≡ (dR/dt)/R,

and discover the Hubble law:

vr = H D.

From this perspective, Hubble's law is a fundamental relation between (i) the recessional velocity contributed by the expansion of space and (ii) the distance to an object; the connection between redshift and distance is a crutch used to connect Hubble's law with observations. This law can be related to redshift z approximately by making a Taylor series expansion:

z = R(t₀)/R(tₑ) − 1 ≈ H₀ (t₀ − tₑ).

If the distance is not too large, all other complications of the model become small corrections and the time interval is simply the distance divided by the speed of light:

t₀ − tₑ ≈ D/c, so that z ≈ H₀D/c = vr/c.

According to this approach, the relation cz = vr is an approximation valid at low redshifts, to be replaced by a relation at large redshifts that is model-dependent. See the velocity-redshift figure.

Observability of parameters

Strictly speaking, neither v nor D in the formula are directly observable, because they are properties now of a galaxy, whereas our observations refer to the galaxy in the past, at the time that the light we currently see left it. For relatively nearby galaxies (redshift z much less than unity), v and D will not have changed much, and v can be estimated using the formula v = cz, where c is the speed of light. This gives the empirical relation found by Hubble. For distant galaxies, v (or D) cannot be calculated from z without specifying a detailed model for how H changes with time. The redshift is not even directly related to the recession velocity at the time the light set out, but it does have a simple interpretation: (1 + z) is the factor by which the universe has expanded while the photon was travelling towards the observer.

Expansion velocity vs relative velocity

In using Hubble's law to determine distances, only the velocity due to the expansion of the universe can be used. Since gravitationally interacting galaxies move relative to each other independently of the expansion of the universe, these relative velocities, called peculiar velocities, need to be accounted for in the application of Hubble's law. The Fingers of God effect is one result of this phenomenon. In systems that are gravitationally bound, such as galaxies or our planetary system, the expansion of space is a much weaker effect than the attractive force of gravity.
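Before turning to the idealized form of the law, here is a short numerical sketch (ours, not from the article) comparing the "redshift velocity" cz with the velocity recovered from the relativistic Fizeau-Doppler formula above; the two agree at low redshift and part ways at high redshift:

C = 299792.458  # speed of light in km/s

def redshift_velocity(z):
    """The 'redshift velocity' v_rs = c*z; it can exceed the speed of light."""
    return C * z

def doppler_velocity(z):
    """Velocity from the relativistic relation 1 + z = sqrt((1 + v/c)/(1 - v/c))."""
    s = (1 + z) ** 2
    return C * (s - 1) / (s + 1)

for z in (0.01, 0.1, 1.0, 3.0):
    print(z, round(redshift_velocity(z)), round(doppler_velocity(z)))
# At z = 0.01 the two agree to better than 1%; at z = 3 the 'redshift velocity'
# is 3c while the Doppler value stays below c. Neither is the model-dependent
# recession velocity that cosmology actually assigns at high redshift.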
The Fingers of God effect is one result of this phenomenon. In systems that are gravitationally bound, such as galaxies or our planetary system, the expansion of space is a much weaker effect than the attractive force of gravity. Idealized Hubble's Law The mathematical derivation of an idealized Hubble's Law for a uniformly expanding universe is a fairly elementary theorem of geometry in 3-dimensional Cartesian/Newtonian coordinate space, which, considered as a metric space, is entirely homogeneous and isotropic (properties do not vary with location or direction). Simply stated the theorem is this: - Any two points which are moving away from the origin, each along straight lines and with speed proportional to distance from the origin, will be moving away from each other with a speed proportional to their distance apart. In fact this applies to non-Cartesian spaces as long as they are locally homogeneous and isotropic; specifically to the negatively- and positively-curved spaces frequently considered as cosmological models (see shape of the universe). An observation stemming from this theorem is that seeing objects recede from us on Earth is not an indication that Earth is near to a center from which the expansion is occurring, but rather that every observer in an expanding universe will see objects receding from them. 'Ultimate fate' and age of the universe The value of the Hubble parameter changes over time, either increasing or decreasing depending on the sign of the so-called deceleration parameter q, which is defined by q = −(1 + Ḣ/H²), where Ḣ = dH/dt. In a universe with a deceleration parameter equal to zero, it follows that H = 1/t, where t is the time since the Big Bang. A non-zero, time-dependent value of q simply requires integration of the Friedmann equations backwards from the present time to the time when the comoving horizon size was zero. It was long thought that q was positive, indicating that the expansion is slowing down due to gravitational attraction. This would imply an age of the universe less than 1/H (which is about 14 billion years). For instance, a value for q of 1/2 (once favoured by most theorists) would give the age of the universe as 2/(3H). The discovery in 1998 that q is apparently negative means that the universe could actually be older than 1/H. However, estimates of the age of the universe are very close to 1/H. Olbers' paradox The expansion of space summarized by the Big Bang interpretation of Hubble's Law is relevant to the old conundrum known as Olbers' paradox: if the universe were infinite, static, and filled with a uniform distribution of stars, then every line of sight in the sky would end on a star, and the sky would be as bright as the surface of a star. However, the night sky is largely dark. Since the 17th century, astronomers and other thinkers have proposed many possible ways to resolve this paradox, but the currently accepted resolution depends in part upon the Big Bang theory and in part upon the Hubble expansion. In a universe that exists for a finite amount of time, only the light of finitely many stars has had a chance to reach us yet, and the paradox is resolved. Additionally, in an expanding universe distant objects recede from us, which causes the light emanating from them to be redshifted and diminished in brightness.
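The idealized theorem stated above is easy to check numerically: if every point moves away from the origin with speed proportional to its distance from the origin, any two points separate at a rate proportional to their mutual distance. The proportionality constant and the two points in the sketch below are arbitrary choices made only for illustration.

import math

H = 0.07  # arbitrary proportionality constant (illustrative units)

# Two arbitrary points; each velocity points away from the origin
# with speed H * (distance from the origin).
p1 = (3.0, 4.0)
p2 = (-6.0, 8.0)
v1 = (H * p1[0], H * p1[1])
v2 = (H * p2[0], H * p2[1])

# Separation vector and its rate of change.
sep = (p2[0] - p1[0], p2[1] - p1[1])
dsep = (v2[0] - v1[0], v2[1] - v1[1])

dist = math.hypot(*sep)
rate = (sep[0] * dsep[0] + sep[1] * dsep[1]) / dist  # d|sep|/dt

print(rate, H * dist)  # the two printed numbers agree: rate = H * (distance apart)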
Dimensionless Hubble parameter Instead of working with Hubble's constant, a common practice is to introduce the dimensionless Hubble parameter, usually denoted by h, and to write the Hubble parameter H0 as 100 h km s^−1 Mpc^−1, so that all the relative uncertainty of the value of H0 is then relegated to h. Determining the Hubble constant The value of the Hubble constant is estimated by measuring the redshift of distant galaxies and then determining the distances to the same galaxies (by some other method than Hubble's law). Uncertainties in the physical assumptions used to determine these distances have caused varying estimates of the Hubble constant. Earlier measurement and discussion approaches For most of the second half of the 20th century the value of H0 was estimated to be between 50 and 90 (km/s)/Mpc. The value of the Hubble constant was the topic of a long and rather bitter controversy between Gérard de Vaucouleurs, who claimed the value was around 100, and Allan Sandage, who claimed the value was near 50. In 1996, a debate moderated by John Bahcall between Gustav Tammann and Sidney van den Bergh was held in similar fashion to the earlier Shapley-Curtis debate over these two competing values. Current measurements This previously wide variance in estimates was partially resolved with the introduction of the ΛCDM model of the universe in the late 1990s. With the ΛCDM model, observations of high-redshift clusters at X-ray and microwave wavelengths using the Sunyaev-Zel'dovich effect, measurements of anisotropies in the cosmic microwave background radiation, and optical surveys all gave a value of around 70 for the constant. The consistency of the measurements from all these methods lends support to both the measured value of H0 and the ΛCDM model. Using Hubble space telescope data The Hubble Key Project (led by Dr. Wendy L. Freedman, Carnegie Observatories) used the Hubble space telescope to establish, in May 2001, the most precise optical determination of H0: 72 ± 8 (km/s)/Mpc, consistent with a measurement of H0 based upon Sunyaev-Zel'dovich effect observations of many galaxy clusters having a similar accuracy. Using WMAP data The most precise cosmic microwave background radiation determinations were 71 ± 4 (km/s)/Mpc, by WMAP in 2003, and 70.4 +1.5 −1.6 (km/s)/Mpc, for measurements up to 2006. The five year release from WMAP in 2008 found 71.9 +2.6 −2.7 (km/s)/Mpc using WMAP-only data and 70.1 ± 1.3 (km/s)/Mpc when data from other studies were incorporated, while the seven year release in 2010 found H0 = 71.0 ± 2.5 (km/s)/Mpc using WMAP-only data and H0 = 70.4 +1.3 −1.4 (km/s)/Mpc when data from other studies were incorporated. These values arise from fitting a combination of WMAP and other cosmological data to the simplest version of the ΛCDM model. If the data are fit with more general versions, H0 tends to be smaller and more uncertain: typically around 67 ± 4 (km/s)/Mpc, although some models allow values near 63 (km/s)/Mpc. Using Chandra X-ray Observatory data Acceleration of the expansion A value for q measured from standard candle observations of Type Ia supernovae, which was determined in 1998 to be negative, surprised many astronomers with the implication that the expansion of the universe is currently "accelerating" (although the Hubble factor is still decreasing with time, as mentioned above in the Interpretation section; see the articles on dark energy and the ΛCDM model).
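The dimensionless Hubble parameter introduced above is just the measured value divided by 100 (km/s)/Mpc. The short sketch below converts an assumed measurement of H0 = 70.4 (km/s)/Mpc (one of the WMAP-combined values quoted above) into SI units and into h; the chosen value is an illustrative assumption, not a recommendation.

# Convert an assumed H0 from (km/s)/Mpc to SI units and to the dimensionless h.
KM = 1.0e3                      # metres in a kilometre
MPC = 3.0857e22                 # metres in a megaparsec (approximate)

H0_kms_per_Mpc = 70.4           # assumed value, (km/s)/Mpc
H0_si = H0_kms_per_Mpc * KM / MPC   # in s^-1
h = H0_kms_per_Mpc / 100.0      # dimensionless Hubble parameter

print(f"H0 = {H0_si:.3e} s^-1, h = {h:.3f}")
# H0 = 2.282e-18 s^-1, h = 0.704  (compare the ~2.3e-18 s^-1 quoted below)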
Derivation of the Hubble parameter Start with the Friedmann equation: H² ≡ (Ṙ/R)² = (8πG/3)ρ − kc²/R² + Λc²/3, where H is the Hubble parameter, R is the scale factor, G is the gravitational constant, k is the normalised spatial curvature of the universe, and Λ is the cosmological constant. Matter-dominated universe (with a cosmological constant) If the universe is matter-dominated, then the mass density of the universe can just be taken to include matter, so ρ = ρm(t0)(R0/R)³, where ρm(t0) is the density of matter today. We know for nonrelativistic particles that their mass density decreases proportional to the inverse volume of the universe, so the equation above must be true. We can also define (see density parameter for Ωm) Ωm ≡ 8πGρm(t0)/(3H0²) = ρm(t0)/ρc, so ρ = Ωm ρc (R0/R)³. Also, by definition, Ωk ≡ −kc²/(R0 H0)² and ΩΛ ≡ Λc²/(3H0²), where the subscript nought refers to the values today, and R0 = 1. Substituting all of this into the Friedmann equation at the start of this section and replacing R with R = 1/(1 + z) gives H²(z) = H0² [Ωm (1 + z)³ + Ωk (1 + z)² + ΩΛ]. Matter- and dark energy-dominated universe If the universe contains both matter and dark energy, we write ρ = ρm + ρde, where ρde is the mass density of the dark energy. By definition, an equation of state in cosmology is P = wρc², and if we substitute this into the fluid equation, which describes how the mass density of the universe evolves with time, dρ/dt + 3(Ṙ/R)(ρ + P/c²) = 0, then, if w is constant, dρ/ρ = −3(1 + w) dR/R. Therefore, for dark energy with a constant equation of state w, ρde(R) = ρde0 (R/R0)^(−3(1+w)). If we substitute this into the Friedmann equation in a similar way as before, but this time set k = 0, which is assuming we live in a spatially flat universe (see Shape of the Universe), we get H²(z) = H0² [Ωm (1 + z)³ + Ωde (1 + z)^(3(1+w))]. If dark energy does not have a constant equation-of-state w, then ρde(a) ∝ exp[3 ∫ (1 + w(a)) da/a] (writing a = R/R0 for the normalised scale factor), and to solve this we must parametrize w(a), for example as w(a) = w0 + wa(1 − a), giving H²(a) = H0² [Ωm a^(−3) + Ωde a^(−3(1 + w0 + wa)) e^(−3 wa(1 − a))]. Other ingredients have been formulated recently. For the era in which high-energy experiments seem to have reliable access to the properties of the matter dominating the background geometry (by this era we mean the quark-gluon plasma), the transport properties have been taken into consideration. Therefore, the evolution of the Hubble parameter and of other essential cosmological parameters in such a background is found to be considerably (non-negligibly) different from their evolution in an ideal, gaseous, non-viscous background. Units derived from the Hubble constant Hubble time The Hubble constant has units of inverse time, i.e. H0 ~ 2.3×10^−18 s^−1. "Hubble time" is defined as 1/H0. The value of the Hubble time in the standard cosmological model is 4.35×10^17 s or 13.8 billion years. (Liddle 2003, p. 57) The phrase "expansion timescale" means "Hubble time". The quantity 100 km/s/Mpc (= 1 dm/s/pc) is often used as a unit for the Hubble constant, so that H0 = 100 h km/s/Mpc with the dimensionless h carrying the uncertainty in H0. The corresponding unit of time, 1/(100 km/s/Mpc), has as many seconds as there are decimetres in a parsec. As mentioned above, H0 is the current value of the Hubble parameter H. In a model in which speeds are constant, H decreases with time. In the naive model where H is constant the Hubble time would be the time taken for the universe to increase in size by a factor of e (because the solution of dx/dt = H0 x is x = s0 exp(H0 t), where s0 is the size of some feature at some arbitrary initial condition t = 0). Hubble length The Hubble length or Hubble distance is a unit of distance in cosmology, defined as c/H0, the speed of light multiplied by the Hubble time. It is equivalent to 4228 million parsecs or 13.8 billion light years. (The numerical value of the Hubble length in light years is, by definition, equal to that of the Hubble time in years.) The Hubble distance would be the distance at which galaxies are currently receding from us at the speed of light, as can be seen by substituting D = c/H0 into the equation for Hubble's law, v = H0D. Hubble volume The Hubble volume is sometimes defined as a volume of the universe with a comoving size of c/H0.
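For a spatially flat universe containing only matter and a constant-w dark energy, the derivation above reduces to H(a) = H0 √(Ωm a^−3 + Ωde a^(−3(1+w))). The sketch below evaluates this, together with the Hubble time and Hubble length, for assumed parameter values (H0 = 70 (km/s)/Mpc, Ωm = 0.3, Ωde = 0.7, w = −1); the parameter values are illustrative assumptions, not fitted results.

import math

# Assumed, illustrative cosmological parameters (flat universe).
H0 = 70.0        # (km/s)/Mpc
OMEGA_M = 0.3    # matter density parameter
OMEGA_DE = 0.7   # dark-energy density parameter
W = -1.0         # dark-energy equation of state

C_KMS = 299792.458          # speed of light, km/s
MPC_KM = 3.0857e19          # kilometres in a megaparsec
SEC_PER_GYR = 3.156e16      # seconds in a gigayear (approximate)

def hubble(a):
    """Hubble parameter H(a) in (km/s)/Mpc for the flat matter + dark-energy model."""
    return H0 * math.sqrt(OMEGA_M * a**-3 + OMEGA_DE * a**(-3 * (1 + W)))

H0_per_sec = H0 / MPC_KM                      # H0 in s^-1
hubble_time_gyr = 1.0 / H0_per_sec / SEC_PER_GYR
hubble_length_mpc = C_KMS / H0                # c / H0 in Mpc

print(f"H(a=0.5) = {hubble(0.5):.1f} (km/s)/Mpc")     # larger in the past
print(f"Hubble time   ~ {hubble_time_gyr:.1f} Gyr")   # ~ 14 Gyr for H0 = 70
print(f"Hubble length ~ {hubble_length_mpc:.0f} Mpc") # ~ 4283 Mpc for H0 = 70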
The exact definition varies: it is sometimes defined as the volume of a sphere with radius , or alternatively, a cube of side . Some cosmologists even use the term Hubble volume to refer to the volume of the observable universe, although this has a radius approximately three times larger. See also - Peter Coles, ed. (2001). Routledge Critical Dictionary of the New Cosmology. Routledge. p. 202. ISBN 0-203-16457-1. - The Swinburne Astronomy Online Encyclopedia of Astronomy. "Hubble Flow". Swinburne University of Technology. Retrieved 2013-05-14. - Lemaître, Georges (1927). "Un univers homogène de masse constante et de rayon croissant rendant compte de la vitesse radiale des nébuleuses extra-galactiques". Annales de la Société Scientifique de Bruxelles A47: 49–56. Bibcode:1927ASSB...47...49L (Full article, PDF). Partially translated (the translator remains unidentified) in Lemaître, Georges (1931). "Expansion of the universe, A homogeneous universe of constant mass and increasing radius accounting for the radial velocity of extra-galactic nebulæ". Monthly Notices of the Royal Astronomical Society 91: 483–490. Bibcode:1931MNRAS..91..483L - Sidney van den Bergh (2011-06-06). "[1106.1195] The Curious Case of Lemaitre's Equation No. 24". arXiv:1106.1195 [physics.hist-ph]. - Block (1970). "[1106.3928] A Hubble Eclipse: Lemaitre and Censorship". arXiv:1106.3928 [physics.hist-ph]. - Nature. "Edwin Hubble in translation trouble : Nature News". Nature.com. Retrieved 2012-08-15. - Hubble, Edwin, "A Relation between Distance and Radial Velocity among Extra-Galactic Nebulae" (1929) Proceedings of the National Academy of Sciences of the United States of America, Volume 15, March 15, 1929: Issue 3, pp. 168–173, communicated January 17, 1929 (Full article, PDF) - Malcolm S Longair (2006). The Cosmic Century. Cambridge University Press. p. 109. ISBN 0-521-47436-1. - Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; et al. (Planck Collaboration) (20 March 2013). "Planck 2013 results. I. Overview of products and scientific results". Astronomy & Astrophysics (submitted). arXiv:1303.5062. - Staff (21 March 2013). "Planck Reveals An Almost Perfect Universe". ESA. Retrieved 21 March 2013. - Clavin, Whitney; Harrington, J.D. (21 March 2013). "Planck Mission Brings Universe Into Sharp Focus". NASA. Retrieved 21 March 2013. - Overbye, Dennis (21 March 2013). "An Infant Universe, Born Before We Knew". New York Times. Retrieved 21 March 2013. - Boyle, Alan (21 March 2013). "Planck probe's cosmic 'baby picture' revises universe's vital statistics". NBC News. Retrieved 21 March 2013. - Friedman, A. (1922). "Über die Krümmung des Raumes". Zeitschrift für Physik 10 (1): 377–386. Bibcode:1922ZPhy...10..377F. doi:10.1007/BF01332580. (English translation: Friedman, A. (1999). "On the Curvature of Space". General Relativity and Gravitation 31 (12): 1991–2000. Bibcode:1999GReGr..31.1991F. doi:10.1023/A:1026751225741.) - Wendy L Freedman et al. (2001). "Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant". Astrophys J 553 (1): 47–72. arXiv:astro-ph/0012376. Bibcode:2001ApJ...553...47F. doi:10.1086/320638. - Steven Weinberg (2008). Cosmology. Oxford University Press. p. 28. ISBN 0-19-852682-2. - Sandage, A. R. (May,1958). "Current Problems in the Extragalactic Distance Scale". Astrophysical Journal 127 (3): 513–526. Bibcode:1958ApJ...127..513S. doi:10.1086/146483. - R. P. Kirshner, Hubble's Diagram and Cosmic Expansion, Online Article - NASA. "WMAP- Cosmological Constant or Dark Energy". 
Nasa.gov. Retrieved 2013-05-18. - Isaacson, Walter (2007). Einstein: His Life and Universe. Simon and Schuster. p. 354. ISBN 0-7432-6473-8. Extract of page 354 - Tamara M. Davis, Charles H. Lineweaver (2000). "Superluminal Recessional Velocities". arXiv:astro-ph/0011070 [astro-ph]. doi:10.1063/1.1363540. - William C. Keel (2007). The Road to Galaxy Formation (2 ed.). Springer. p. 7. ISBN 3-540-72534-2. - Is the universe expanding faster than the speed of light? (see final paragraph) - Edward Harrison (1992). "The redshift-distance and velocity-distance laws". Astrophysical Journal, Part 1 403: 28–31. Bibcode:1993ApJ...403...28H. doi:10.1086/172179. A pdf file can be found here. - MS Madsen (1995). The Dynamic Cosmos. CRC Press. p. 35. ISBN 0-412-62300-5. - Avishai Dekel, J. P. Ostriker (1999). Formation of Structure in the Universe. Cambridge University Press. p. 164. ISBN 0-521-58632-1. - Thanu Padmanabhan (1993). Structure formation in the universe. Cambridge University Press. p. 58. ISBN 0-521-42486-0. - Leo Sartori (1996). Understanding Relativity. University of California Press. p. 163, Appendix 5B. ISBN 0-520-20029-2. - Leo Sartori (1996). Understanding Relativity. University of California Press. pp. 304–305. ISBN 0-520-07986-8. - S. I. Chase, Olbers' Paradox, entry in the Physics FAQ; see also I. Asimov, "The Black of Night", in Asimov on Astronomy (Doubleday, 1974), ISBN 0-385-04111-X. - Peebles, P. J. E., Principles of Physical Cosmology, Princeton University Press, 1993. - Dennis Overbye, Lonely Hearts of the Cosmos: The Scientific Quest for the Secret of the Universe, Harper-Collins (1991), ISBN 0-06-015964-2 & ISBN 0-330-29585-3 (finalist, National Book Critics Circle Award for non-fiction). Second edition (with new afterword) Back Bay, 1999. Gives an account of the history of the dispute and rivalries. - W. L. Freedman, B. F. Madore, B. K. Gibson, L. Ferrarese, D. D. Kelson, S. Sakai, J. R. Mould, R. C. Kennicutt, Jr., H. C. Ford, J. A. Graham, J. P. Huchra, S. M. G. Hughes, G. D. Illingworth, L. M. Macri, P. B. Stetson (2001). "Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant". The Astrophysical Journal 553 (1): 47–72. arXiv:astro-ph/0012376. Bibcode:2001ApJ...553...47F. doi:10.1086/320638. Preprint available here. - D. N. Spergel et al. (2007). "Three-year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Implications for Cosmology". Astrophysical Journal Supplement Series 170 (2): 377–408. arXiv:astro-ph/0603449. Bibcode:2007ApJS..170..377S. doi:10.1086/513700. Available online at LAMBDA - Table 7 of Hinshaw, G. (WMAP Collaboration) et al. (February 2009). "Five-Year Wilkinson Microwave Anisotropy Probe Observations: Data Processing, Sky Maps, and Basic Results". The Astrophysical Journal Supplement 180 (2): 225–245. arXiv:0803.0732. Bibcode:2009ApJS..180..225H. doi:10.1088/0067-0049/180/2/225. (Table is on p. 54) - "Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Sky Maps, Systematic Errors, and Basic Results" (PDF). nasa.gov. Retrieved 2010-12-02. (see p. 39 for a table of best estimates for various cosmological parameters) - Results for H0 and other cosmological parameters obtained by fitting a variety of models to several combinations of WMAP and other data are available at NASA's LAMBDA website. - Chandra independently determines Hubble constant in Spaceflight Now - S.
Perlmutter, " Supernovae, Dark Energy, and the Accelerating Universe", Physics Today, April 2003, pp 53-60 - A. Tawfik, "Quark-Hadron Phase Transitions in Viscous Early Universe" - A. Tawfik et al. "The Hubble parameter in the early universe with viscous QCD matter and finite cosmological constant" - A. Tawfik, et al., "Viscous quark-gluon plasma in the early universe" - "A small journey in the Bogdanoff universe". Ybmessager.free.fr. Retrieved 2012-08-15. - Kutner, Marc (2003). Astronomy: A Physical Perspective. New York: Cambridge University Press. ISBN 0-521-52927-1 - Hubble, E. P. (1937). The Observational Approach to Cosmology. Oxford: Clarendon Press - Eng, A. E. (1985). A New Approach to Starlight Runs. Oswego - Liddle, Andrew R. (2003). An Introduction to Modern Cosmology (2nd ed.). Chichester: Wiley. ISBN 0-470-84835-9 Further reading - Freedman, W. L.; Madore, B. F. (2010). "The Hubble Constant". Annual Review of Astronomy and Astrophysics 48: 673. doi:10.1146/annurev-astro-082708-101829.
http://en.wikipedia.org/wiki/Hubble's_law
Rotational kinetics deals with the cause of rotation. In order to cause rotation in an object, torque must be applied. If the applied torque causes rotation, the relation between the applied torque and the pace of generated rotation is the basis of rotational kinetics. If rotation does not occur, the torque applied is often referred to as "moment." The first topic to study is " the moment of a force." Torque or Moment of a Force When a force is applied to the handle of a wrench (normally perpendicular to it), the product of the force and the perpendicular distance it has from the center of the bolt is called the torque or moment of that force. Mathematically, torque is the product of force and perpendicular distance, or torque is the product of perpendicular force and distance. Torque may be either clockwise (CW), or counterclockwise (CCW). By convention, CCW is usually taken to be positive, and thus CW is negative; therefore, torque is a vector quantity. Example 1: In the figure shown, find the torque of force ( F ) about point A, the point at which the beam is fixed into the wall. If more than one force is generating torque on an object, then the sum of torques or the net torque should be calculated. Example 2: In the figure shown, find the net torque of the forces shown about point A, the point at which the beam is fixed into the wall. In this problem the net tendency of rotation is clockwise as the (-) sign in -11.5Nm indicates. Example 3: In the figure shown, find the net torque of the forces shown about point B, the point at which the beam is fixed into the wall. In this problem the net tendency of rotation is clockwise as the (-) sign in -25Nm indicates. Example 4: In the figure shown, find the torque of force F about point A. As you may have noticed, it is possible to draw a perpendicular line from Point A to vector 25N sin54º. The length of the perpendicular line becomes 1.6m. If you decide to draw a perpendicular line from A to vector 25N cos54º, you need to first extend the vector from its tail to the left. Doing this makes the extension to pass through A. Point A falls on the extended vector and makes the perpendicular line to have a length of 0.0. Example 5: In the following 4 figures, determine (a) the case for which the torque of the 10-N force is maximum and explain why. (b) What is the value of torque about point D in Fig. 4? (c) How do you determine the perpendicular distance ( d^ ) ? Solution: (a) The case in Fig. 1, has the maximum torque because the magnitude of the applied force is the same for all cases, but the perpendicular distance in Fig. 1 is greatest. That makes the torque in Fig. 1 maximum. (b) Zero, because the perpendicular distance in Fig. 4 is zero. (c) To find d^ , the line of action of the applied force must be extended, and then from the desired point a line segment be drawn perpendicular to it as shown in Figures 1 through 3. Example 6: In the figure shown, find the net torque of the forces shown about point A. An object is said to be in rotational equilibrium if the net torque acting on it is zero that means ΣT = 0. The torque sum may be taken about any single point on the object or out of it. It is usually taken about a point for which perpendicular distances from all acting forces are convenient to calculate. ΣT = 0 means that the sum of CW torques equals the sum of CCW torques; in other words, the total CW tendency of rotation is equal to the total CCW tendency of rotation. 
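Because the figures for the worked examples are not reproduced here, the short Python sketch below uses made-up forces and lever arms (purely hypothetical numbers) to show the two computations described above: summing signed torques with the CCW-positive convention, and solving ΣT = 0 for an unknown force in a seesaw problem.

# Hypothetical numbers, for illustration only (the example figures are not shown).
# Each entry is (perpendicular force in N, perpendicular distance in m, sense),
# with sense +1 for CCW and -1 for CW.
forces = [(20.0, 1.5, +1), (35.0, 2.0, -1), (10.0, 0.8, +1)]

net_torque = sum(sense * F * d for F, d, sense in forces)
print(f"Net torque = {net_torque:+.1f} N*m")   # negative => net CW tendency

# Rotational equilibrium of a seesaw: a 600 N load 2.0 m left of the pivot is
# balanced by an unknown force F applied 3.0 m right of the pivot (assumed data).
# Sum of CCW torques = sum of CW torques  =>  600 * 2.0 = F * 3.0
F = 600.0 * 2.0 / 3.0
print(f"F = {F:.0f} N")   # 400 N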
Example 7: In the figure shown, find F such that the seesaw is in rotational equilibrium, that is, it is neither rotating CW nor CCW. Example 8: In the figure shown, assume that the beam is weightless and find the unknown force F that brings the seesaw into rotational equilibrium. The Center of Mass of Uniform Objects The center of mass of an object is the point at which all of its mass can be assumed to be concentrated. For geometrically symmetric objects, such as rectangular boxes, spheres, cylinders, etc., the geometric center is easily determined. The geometric center is the same thing as the center of symmetry. If the material of an object is uniformly distributed throughout its volume so that it has the same mass density everywhere, the geometric center and the mass center are the same point in that object. In the above two examples, this was the case: the geometric center of the plank was indeed its center of mass as well. For geometrically symmetric and constant-density objects, the geometric center is the same point as the mass center. For a uniform and rectangular plank of wood, the geometric center or mass center is at its midpoint. This means that all of the mass of such a plank can be assumed to be concentrated at its geometric center, or its middle. This is especially helpful in the following example: Example 9: In the figure shown, an 8.0-m long, 550-N uniform plank of wood is pivoted 2.0m off its middle at P to form an unbalanced seesaw. It is then loaded with a 420-N force as shown. Find the magnitude of F that keeps the plank in rotational equilibrium. Example 10: In the figure shown, a force of 300.N is applied to the lever at A. Find the reaction force F that the crate exerts on the lever at B. Chapter 9 Test Yourself 1: 1) To cause rotation in an object about an axis, (a) a force that goes through that axis must be applied (b) a force that does not go through that axis must be applied (c) a torque must be applied (d) b & c. 2) Torque is defined as the product of (a) a force and a parallel distance (b) a force and a perpendicular distance (c) a force and an angle. 3) When the line of action of a force passes through the point about which torque is to be calculated, (a) the perpendicular distance is zero (b) torque is maximum (c) clockwise rotation occurs. 4) When the line of action of a force passes through the point about which torque is to be calculated, (a) that force does not generate any torque (b) that force either pulls that point or pushes it (c) both a & b Problem: A 1/2-inch water pipe made of copper has a length of 1.00m and is horizontally fixed into a concrete wall at its left end. The right end of it is free. 5) To easily bend this pipe clockwise about its fixed end, (a) the right (free) end of it must be pushed down (b) the middle of it must be pushed down (c) the left (fixed) end of it, nearest to the wall, must be pushed down. 6) The torque necessary to just bend this pipe clockwise is (a) the same no matter which point of it is pushed down (b) maximum, if the right end of it is pushed down (c) minimum, if the left end of it is pushed down (d) both b & c. 7) The force necessary to just bend this pipe clockwise is (a) the same no matter which point of it is pushed down (b) maximum, if the right end of it is pushed down (c) minimum, if the left end of it is pushed down (d) neither a, nor b, nor c. 8) It takes a certain amount of bending moment (torque) to bend this pipe.
Now, if this torque is provided by applying a downward force at its right end, the force is (a) maximum (b) minimum (c) neither a, nor b. 9) It takes a certain amount of bending moment (torque) to bend this pipe. Now, if this torque is provided by applying a downward force very close to its left end, the force is (a) maximum (b) minimum (c) neither a, nor b. 10) If a downward force is applied exactly at its left end, bending does not occur because (a) the downward force passes through the point about which we want bending to occur and passing through that point means zero perpendicular distance and consequently zero torque (b) applying a downward force at the very left end practically pushes that end down and has no torque arm (c) both a & b. 11) If the necessary torque to bend the pipe is 120Nm and the right end is pushed down, the applied downward force is (a) 6.0N (b) 24N (c) 120N. 12) If the necessary torque to bend the pipe is 120Nm and the middle of the pipe is pushed down, the applied downward force is (a) 60.N (b) 240N (c) 120N. 13) If the necessary torque to bend the pipe is 120Nm and 0.20m from the wall is pushed down, the applied downward force is (a) 600N (b) 150N (c) 1200N. 14) If the necessary torque to bend the pipe is 120Nm and 0.25m from its right end is pushed down, the applied downward force is (a) 720N (b) 480N (c) 160N. 15) If the necessary torque to bend the pipe is 120Nm and 0.15m from its left end is pushed down, the applied downward force is (a) 800N (b) 360N (c) 1600N. 16) If in Example 5, Fig. 1, the angle of the 10-N force is 42.0º below horizontal, and the beam's length is 2.00m, then d⊥ is (a) 1.0m (b) 1.33m (c) 1.49m. 17) If in Example 5, Fig. 3, the angle of the 10-N force is 15.0º below horizontal, and the beam's length is 2.00m, then d⊥ is (a) 0.52m (b) 2.49m (c) 1.93m. 18) In Example 5, Fig. 4, the angle of the 10-N force is 0.0º with the horizontal, and the beam's length is 2.00m, then d⊥ is (a) 0.52m (b) 2.00m (c) 0. 19) For an object to be in rotational equilibrium, which sum acting on it must be zero? The sum of (a) forces (b) torques (c) work. 20) The sum of torques acting on an object may be taken about (a) the center of mass of the object (b) any point on the object (c) any point even if it is not a point of that object (d) a, b & c. 21) For ease of calculation, the sum of torques (moments) may be taken about a point of an object for which perpendicular distances from all forces can either be readily seen or easily determined. (a) True (b) False 22) If force F passes through point A, the torque of F about A is (a) maximum (b) not easy to calculate (c) 0. 23) The center of gravity of a rectangular and uniform sheet of metal, which coincides with its geometric center, is at (a) its center (b) the point of intersection of its diagonals (c) both a & b. 24) In Example 8, if the 3.5m is replaced by 2.5m and the 730N by 800N, the value of F becomes: (a) 565N (b) 700N (c) 500N. First solve, then look at the solution of the example. 25) In Example 8, if the 300.N acts upward, the 730N is replaced by 800N, and the 0.900m is changed to 1.00m, the value of F becomes: (a) 3000N (b) 1850N (c) 5500N. First solve, then look at the solution of the example. Problem: Neatly redraw the figure of Example 9 and replace the 420N by 500N. Also, let the beam weigh 600N. Answer the following questions:
26) The torque of the weight force (the 600N force) about point P is (a) 1200Nm (b) -1200Nm (c) -2400Nm. 27) The torque of the 500N force about point P is (a) -2500Nm (b) 2500Nm (c) -1500Nm. 28) The torque of F about point P is (a) -1.0F (b) +1.0F (c) -3.0F. 29) Adding the above 3 torques corresponding to the 3 existing forces and, for rotational equilibrium, setting the sum equal to zero, results in a value for F of (a) 3700N (b) 2700N (c) 4700N. 30) In Example 10, if distance BC is 0.25m instead of 0.50m, the magnitude of F becomes: (a) 2400N (b) 600N (c) 800N. Torque-Angular Motion Relation: The same way a net constant force along a straight path creates constant acceleration, a net constant torque applied to an object creates constant angular acceleration in that object. Newton's Second Law for translational and rotational motions is shown below: Newton's 2nd Law for straight line motion is: ΣF = Ma, where M is the mass of the object. Newton's 2nd Law for rotational motion is: ΣT = Iα, where (I) is the mass moment of inertia of the object and α is its angular acceleration. We have learned the concepts of torque (T) and angular acceleration (α). What is left to learn here is the mass moment of inertia (I). Mass Moment of Inertia (I): Consider a mass M that is attached to a weightless rod of length R, and the rod is connected to a vertical rod at a right angle as shown below (Fig. A). Assume that the vertical rod is free to rotate about itself and can be put into rotation by applying torque to the vertical axis. Also, consider exactly the same device but with a rod of length 2R as shown in (Fig. B). Example 11: A 1.50-kg mass is connected to a weightless rod of length 45.0 cm and then attached to a vertical axis that is free to rotate as shown in Fig. A above. The apparatus is initially at rest. The vertical axis is then given a twist by a constant net torque of 6.08Nm for a period of 2.00 seconds. Find (a) the angular acceleration of the spinning mass, and (b) the angle it travels within the 2.00-sec. period. Assume frictionless rotation. Solution: (a) The mass moment of inertia (I), or the resistance toward rotation, is I = MR². I = MR²; I = (1.50 kg)(0.45 m)²; I = 0.304 kg m² (note the unit of I). ΣT = Iα; 6.08Nm = (0.304 kg m²)α; α = 20.0 rad/s². (b) θ = (1/2)α t² + ωi t; θ = (1/2)α t²; θ = 40.0 rad. Moment of Inertia of a Thin Ring: The mass M at radius R (as shown above) can be distributed uniformly along a thin ring as shown below. Its mass moment of inertia (I) will still be given by I = MR². (This is good only for finding I about the axis through the center and normal to the ring's plane). Moment of Inertia of a Solid Disk: A solid disk of radius R may be thought of as a combination of an infinite number of thin rings whose radii range from zero to R. Of course, as we go from the inner rings to the outer rings, each ring has more mass as well as a greater radius, which makes its corresponding (I) increasingly greater. To find the total (I) for all rings, calculus must be employed and integration performed. The result is: I = (1/2)MR². Again, this is good only for finding I about the axis through the center and normal to the disc's plane. Note that for a solid disk, M is much greater than that of a thin ring of the same radius and material. Example 12: A 255-kg solid disk of radius 0.632m is free to spin on a frictionless axle-bearing system in a vertical plane as shown.
A force of 756-N is applied tangent to its outer edge for 1.86s, which puts it in rotation from rest. Calculate (a) the mass moment of inertia of the disk, (b) the torque applied as a result of the applied force, (c) the angular acceleration of the disk, and (d) its angular speed and the linear speed of points on its edge at the end of the 1.86-sec. period. (a) I = (1/2)MR² = (0.5)(255kg)(0.632m)² = 50.9 kgm². (b) TC = F R = (756N)(0.632m) = 478 Nm. (Torque of F about the axle at C) (c) ΣTC = Iα; 478 Nm = (50.9 kgm²)α; α = 9.39 rad/s². (d) α = (ωf − ωi)/t; ωf = α t; ωf = (9.39 rad/s²)(1.86s) = 17.5 rad/s. vf = Rωf; vf = (0.632m)(17.5 rad/s) = 11.1 m/s. Kinetic Energy and Angular Momentum in Rotational Motion: In straight line motion, K.E. = (1/2)Mv². In rotational motion, the equivalent is obviously K.E. = (1/2)Iω². In straight line motion, linear momentum is Mv. In rotational motion, angular momentum is Iω. Example 13: In Example 12, find the K.E. and the angular momentum of the solid disk at t = 1.86s. Solution: At t = 1.86s, ωf = 17.5 rad/s; therefore, K.E. = (1/2)Iω² = 0.5(50.9 kgm²)(17.5 rad/s)² = 7790 J. Angular Momentum = Iω = (50.9 kgm²)(17.5 rad/s) = 891 kg m²/s. Chapter 9 Test Yourself 2: 1) Similar to Newton's Laws for motion along a straight line, Newton's First Law applied to rotation should state that "If the net torque applied to an object (a wheel for example) is zero, the object is either not rotating or, if it is rotating, it rotates at constant angular (a) velocity (b) acceleration (c) angular displacement." 2) Newton's Second Law applied to rotation should state that "A non-zero net torque, ΣT, acting on an object of mass moment of inertia I, creates an angular acceleration α in it such that (a) ΣT = Mα (b) the object rotates at constant angular velocity (c) ΣT = Iα." 3) The mass moment of inertia, I, of a rigid object is the resistance that object shows toward (a) motion along a straight line (b) a back-and-forth or up-and-down motion (c) rotation. 4) The mass moment of inertia for a solid disk about the axis through its center and normal to its plane is (a) I = MR² (b) I = 1/2 MR (c) I = 1/2 MR². 5) The mass moment of inertia for a ring about the axis through its center and normal to its plane is (a) I = MR² (b) I = πMR (c) I = 1/2 MR². 6) The mass moment of inertia of a 400-kg solid disk of radius 2.00m about the axis through its center and normal to its plane is (a) 1600kgm² (b) 800kgm² (c) 600kgm². 7) The mass moment of inertia of a 4.00-kg metal ring of radius 0.50m about the axis through its center and normal to its plane is (a) 1.0kgm² (b) 0.5kgm² (c) 2.0kgm². 8) Solve Example 12 with the assumption that the axle imposes a frictional torque of 78Nm. Since the externally applied force of 756N to the rim creates a CCW torque to rotate the wheel in the CCW direction, the frictional torque, which always opposes the direction of rotation, will act CW. The net torque is therefore (a) 576Nm (b) 400Nm (c) 756N. 9) The angular acceleration then is (a) 7.86 rad/s² (b) 11.31 rad/s² (c) -7.86 rad/s². 10) If the solid disk is accelerated for 2.45s, at the end of this period its angular speed is (a) 19.3 rad/s (b) 19.3 rad/s² (c) 9.3 rad/s. 11) The linear speed of points on its outer edge that are at a radius of 0.632m is (a) 11.2 rad/s (b) 21.2 rad/s² (c) 12.2m/s. 12) The rotational K.E. of the disk at the end of the 2.45s period is (a) 9480 J. (b) 7840 watts. (c) neither a nor b.
13) The angular momentum of the disk at the end of the 2.45s period is (a) 289 kgm²/s. (b) 982 kgm²/s. (c) 829 kgm²/s. Problems: [Apply 3 significant figures to all numbers.] 1) In each of the figures shown, assuming CCW torque to be positive, find the net torque of the system of forces about the axis passing through the indicated point: 2) In the figure shown, calculate the position of the weight W2 for the equilibrium of the seesaw. 3) A solid disk of mass 200kg and radius 0.800m is subjected to a constant net torque of 160Nm for 3.00s. Find (a) its mass moment of inertia, (b) its angular acceleration, and (c) its final angular speed. 4) A bicycle wheel (model it as a ring) with a mass of 2.00-kg and a radius of 35.0cm is initially at rest. Find (a) its mass moment of inertia. A 16.0-N force is applied tangent to its outer edge for 1.50s and then released. Find (b) the resulting applied torque. If the friction at its axle generates a counter torque of 0.100Nm, find (c) its angular acceleration, and (d) its final angular speed. The wheel will then slow down and eventually come to a stop. During the slowing-down phase, only the frictional torque is present. Calculate (e) the angular deceleration, and (f) the stopping time. 5) The anchor wheel in a car is also called the "flywheel." This wheel is attached to the crankshaft, and one of its important functions is to absorb the pulsations of the pistons and make the crankshaft turn smoothly without vibration. The starter (electric motor) gets engaged with the teeth on the edge of this wheel to crank the engine. The clutch plate also gets attached (by friction) to this wheel when the clutch is released. Anyway, a flywheel (a solid disc) has a mass of 18.0kg and a radius of 19.0cm. Calculate (a) its mass moment of inertia about the axis through its center and normal to its plane (that is, the crankshaft), and (b) its rotational K.E. when it is turning at 3600rpm. Note that the translational kinetic energy (Chapter 6) was given by ½Mv². Here, in Chapter 9, the rotational kinetic energy is of course ½Iω². Answers: 1) 57.0Nm, -122Nm, 491Nm, -168 ft-lb, -64.0Nm 2) 2.20m 3) 64.0kgm², 2.50rad/s², 7.50rad/s 4) 0.245kgm², 5.60Nm, 22.4rad/s², 33.7rad/s, -0.408rad/s², 82.6s 5) 0.325 kgm², 23100J
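Two of the answers listed above can be cross-checked with a short Python sketch using the chapter's formulas; the inputs are taken straight from the statements of Problem 3 (the 200-kg disk) and Problem 5 (the flywheel).

import math

# Problem 3: 200 kg solid disk, R = 0.800 m, net torque 160 N*m for 3.00 s.
I3 = 0.5 * 200.0 * 0.800**2          # (1/2) M R^2
alpha3 = 160.0 / I3                  # net torque = I * alpha
omega3 = alpha3 * 3.00               # omega = alpha * t (starts from rest)
print(I3, alpha3, omega3)            # 64.0 kg*m^2, 2.50 rad/s^2, 7.50 rad/s

# Problem 5: flywheel (solid disc), M = 18.0 kg, R = 0.190 m, spinning at 3600 rpm.
I5 = 0.5 * 18.0 * 0.190**2
omega5 = 3600.0 * 2.0 * math.pi / 60.0   # rpm -> rad/s
KE5 = 0.5 * I5 * omega5**2
print(round(I5, 3), round(KE5))          # 0.325 kg*m^2 and 23088 J (listed as 23100 J to 3 significant figures)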
http://www.pstcc.edu/departments/natural_behavioral_sciences/Web%20Physics/Chapter09.htm
Polar coordinate is a method of presentation of points in the plane with use of Ordered Pair. This polar coordinate system comprises of an origin, pole, a Ray of specific angle and a polar axis. The polar axis is any line which initiates from the origin and extends to the indefinite Point in any of the prescribed direction. The Position of the point can be determined by the position and distance from the origin and by its angle. If the rotation of polar axis is anticlockwise then the angle generated will be positive. If it is clockwise the angle generated will be negative. Locus and Polar CoordinatesBack to Top Locus for a Point can be defined as a curve or a path that results from the condition (s) that is satisfied by the point for an Algebraic Relation governed by some fixed rule. This statement is valid only for those points which are lying in the plane and not in the outer region of the plane, i.e., the equation of loci cannot be found for those points lying in the exterior. And as we know that a point can be located or represented in a plane using two forms of systems namely the Rectangular/Cartesian Coordinate System or the Polar Coordinate System. A proper relation can be framed between the locus and the Polar Coordinates by using these coordinates to find the locus of any arbitrary point lying in the plane. The simple technique which can be followed for doing this is by substituting the values of polar coordinates in place of x and y, i.e., x = h cos A y = h sin A The steps for finding the equation of the locus of a point P being the same: First of all assign the coordinates to the point of which the locus has to be found, i.e. P (n, m). The second thing which we need to remember is to express the given conditions as equations in terms of the known quantities and unknown parameters. The unknown parameters should be eliminated, such that only the know quantities are left for consideration. Replace n by x, and m by y. The resulting equation representing the locus of the point P can be obtained by replacing n by x, and m by y. All the points lying in the plane and making the same angle with the positive x- axis will have the same Slope and also the equation of the loci. Applications of Polar CoordinatesBack to Top Their use in this context is where the phenomenon being considered is essentially related to the direction and the length from a Centre point of the plane. Moreover, many physical systems—such as those concerned with objects moving around a central point or with phenomena originating from a central point—are simpler and more spontaneous to model using polar coordinates. The initial incentive taken in the field of explaining the applications of Polar Coordinates was the study related to circular and orbital motion. Navigation can also be a noticeable application of Polar coordinates, as we consider the direction of travel can be given as an angle and distance from the object being considered. Thus the destination can be come to know by knowing the navigation direction of the Polar coordinates. According to the conventions that are followed by the aircrafts using the polar coordinates the magnetic north can be understood as moving in a direction along 360 and the rest directions are specified by 90, 180 and 270 respectively. Another application of Polar Coordinate Systems is modelling which can be used to display radial Symmetry providing natural settings for the polar coordinate system, with the central point acting as the pole. 
For example, we can consider the following: the groundwater flow equation applied to radially symmetric wells. Systems involving gravitational fields with a radial force also make good use of the polar coordinate system. Modelling with polar coordinates also covers asymmetric systems; for example, a microphone's pickup pattern describes its proportionate response to an incoming sound signal arriving from a given direction. Transformation from Rectangular to Polar Coordinates A point in the plane can be described using either: - Rectangular coordinates, or - Polar coordinates. The polar coordinate system is defined in terms of distance from a fixed point and an angle when viewed from a particular direction. Let the distance of a point P(x, y) from the origin (an arbitrary fixed point, denoted by the symbol O) be 'r'. Consider the angle between the radial line from the point P to O and the given line "θ = 0" (a kind of positive axis for our polar coordinate system) to be the angle 'A'. Here, r ≥ 0 and 0 ≤ A < 2π. The transformation of rectangular coordinates to polar coordinates can be done using certain formulae: In the given diagram we have, by the rule of Pythagoras: r = √(x² + y²). And the slope can be found from tan A, since tan A = y / x, so therefore: A = tan⁻¹(y / x). So the rectangular point (x, y) can be converted to polar coordinates like this: (√(x² + y²), tan⁻¹(y / x)). An example can be taken to understand this transformation better. Example: A point has rectangular coordinates (3, 4). Find the corresponding polar coordinates. Solution: r = the radius, or distance of the point from the centre (fixed point) = √(3² + 4²) = 5, and angle A = tan⁻¹(4/3) = 53.13º, so the polar coordinates can be given as (r, A) = (5, 53.13º). Equation of a Locus As we know that a point can be located or represented in different ways, we can say that if a point moves in a plane following certain geometrical conditions, it traces out a path. This path of the moving point is called its locus. To explain the geometrical meaning of locus we need to derive an equation, which can be described as a relation existing between the coordinates of all the points on the path, and which holds for no other points except those lying on the path. If we are interested in finding the equation of the locus of a point we need to follow these steps: The first step is to assign coordinates to the point for which the locus has to be found, i.e. P(h, k). This helps you identify the location of the point lying in the plane for measurements. The next step is to determine the known quantities and unknown parameters while expressing the given conditions as equations in terms of the known quantities and unknown parameters. This assists with further calculation. The unknown parameters should then be eliminated. Replacing h by 'x' and k by 'y' results in an equation representing the locus of the point P. All the points making the same angle with the real axis will have the same slope and also the same equation of locus. Polar Coordinates to Cartesian Basically, we use a graph to mark these coordinates. Cartesian coordinates consist of two values: the position along the x-axis and the position along the y-axis. These coordinates define how far an object is from the origin. Polar coordinates also locate the object on the graph, but they use a distance together with an angle; this angle is measured from the x-axis.
Four quantities are involved in converting coordinates from one form to the other: the x and y positions, the hypotenuse (the radius r), and the angle θ measured from the x-axis. Let us see how we can convert polar coordinates to Cartesian coordinates: In polar coordinates we are given two values, 'r' and the angle 'θ'. Here the side 'r' is √(x² + y²). When we convert polar coordinates to Cartesian coordinates we need to calculate the 'x' and 'y' values. In polar form the x-coordinate is associated with the cos function and the y-coordinate is associated with the sin function. Let us understand the conversion of polar to Cartesian with an example. Assume we have polar coordinates 'r' and 'θ' as (13, 22.60º). Now we have to calculate the x and y values. So cos(22.60º) = x / 13 => x = 12.006; this is the x-position on the graph. Now calculate the y value: sin(22.60º) = y / 13 => y = 4.996; this is the y-position. So the Cartesian coordinates are (12, 5). Graphing Polar Coordinates Graphing polar coordinates means plotting them in the Cartesian system via the transformations: X = H cos s and Y = H sin s, where H ≥ 0 and 0 ≤ s < 2π. Now each and every point P(X, Y) lying in the ordinary x-y plane can be written in this new (H, s) form. This is a consequence of the fact that P lies on the circumference of some circle which is centered at the origin O and has a radius R, where R = H. That is, the distance from point P to the origin is equal to the radius of the circle. From the above-mentioned relationships we find that the coordinates of our point P satisfy the equation X² + Y² = H²: substituting cos²s + sin²s = 1, X² + Y² = H²(cos²s + sin²s) ⇒ X² + Y² = H², thus providing proof that the point P(X, Y) lies on a circle of radius R centered at O. Polar coordinates can be explained by taking a suitable example as follows: What is (4, 3) in polar coordinates? Solution: H² = 4² + 3², H = √(4² + 3²) = √(16 + 9) = √25 = 5. Use the tangent function to find the angle: tan(s) = 3 / 4, s = tan⁻¹(3 / 4) ≈ 36.87º. Polar Coordinates Integration The length of arc can be given by the following definite integral (limits "a" to "b"): L = ∫ab √( r(θ)² + [dr(θ)/dθ]² ) dθ. The region of integration "R" covered under this arc AB is bounded by the curve r(θ) and the lines "θ = a" and "θ = b". The area of this region can be given as the definite integral (limits "a" to "b"): Area(R) = (1/2) ∫ab r(θ)² dθ. Cartesian coordinates can be used to calculate a minute area element as: dA = dx dy. For converting Cartesian coordinates to polar form we use the Jacobian determinant, given as: J = det(∂(x, y)/∂(r, θ)) = r cos²θ + r sin²θ = r(cos²θ + sin²θ) = r. Thus, the area element in polar form can be written as: dA = dx dy = J dr dθ = r dr dθ. So, the integration of a function given in polar form can be written with the limits of the inner integral as 0 to r(θ) and those of the outer integral as 'a' to 'b': ∫∫R f(r, θ) dA = ∫ab ∫0r(θ) f(r, θ) r dr dθ. Thus it is possible to represent the area of any region 'R', covered by some curve r(θ), in the form of a polar coordinates integration as well.
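The conversions and the polar area formula above are easy to check numerically. The Python sketch below redoes the two worked conversions ((3, 4) to polar and (13, 22.60º) to Cartesian) using atan2, and then approximates (1/2)∫r(θ)² dθ for the constant curve r(θ) = 2 to confirm it returns the circle area πR²; the step count in the numerical integration is an arbitrary choice.

import math

def to_polar(x, y):
    """Rectangular (x, y) -> polar (r, angle in degrees)."""
    return math.hypot(x, y), math.degrees(math.atan2(y, x))

def to_cartesian(r, angle_deg):
    """Polar (r, angle in degrees) -> rectangular (x, y)."""
    a = math.radians(angle_deg)
    return r * math.cos(a), r * math.sin(a)

print(to_polar(3, 4))          # (5.0, 53.13...) as in the worked example
print(to_cartesian(13, 22.60)) # (12.00..., 4.99...), i.e. about (12, 5)

def polar_area(r_of_theta, a, b, steps=100000):
    """Approximate (1/2) * integral of r(theta)^2 dtheta from a to b (midpoint rule)."""
    h = (b - a) / steps
    total = sum(r_of_theta(a + (i + 0.5) * h) ** 2 for i in range(steps))
    return 0.5 * total * h

# Circle of radius 2: the area should be pi * 2^2 = 12.566...
print(polar_area(lambda t: 2.0, 0.0, 2 * math.pi))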
http://math.tutorcircle.com/analytical-geometry/polar-coordinates.html
Any connected graph without a cycle is called a tree. Every edge of a tree is a bridge. In general, a graph without any cycles, whether connected or not, is called a forest. We have already seen some trees: And many non-trees: Some Facts and Theorems about Trees A rooted tree has one vertex distinguished from the others. This distinguished vertex is called the root. A vertex of a tree is called a leaf if it has no children. Vertices that have children are called internal vertices. The root is an internal vertex unless it is the only vertex in the graph, in which case it is a leaf. The level of a vertex is the number of edges (length) of a path from the root to that vertex. The height of a rooted tree is the maximum level of any of its vertices. Given an internal vertex v, the children of v are all those vertices that are adjacent to v and are one level farther away from the root than v. If w is a child of v, then v is called the parent of w and all other vertices in the path from the root to w are ancestors of w, while w is a descendant of each of these vertices. Vertices with the same parent are called siblings. Let T be a tree. If T has only one or two vertices, then each is called a terminal vertex. If T has at least three vertices, then a vertex of degree 1 in T is called a terminal vertex (or a leaf), and a vertex of degree greater than 1 is called an internal vertex (or a branch vertex). An m-ary tree is a tree in which each vertex has at most m children. The order in which the children are listed is relevant. In particular, in a binary tree, if an internal vertex has two children, one must distinguish between the first child, called the left child, and the second child, called the right child. An internal vertex of a binary tree has at most one left child and one right child. A full binary tree is a binary tree in which every internal vertex has exactly two children. Given any internal vertex v of a binary tree, the left subtree of v is the binary tree whose root is the left child of v and includes all the descendants of that child and their edge set. The right subtree is analogous. Full and Complete Binary Trees Full binary tree: Each node is either a leaf or an internal node with exactly two non-empty children. Note: In other words, every node is either a leaf or has two children. For efficiency, any Huffman coding is a full binary tree. Complete Binary Tree: If the height of the tree is h, then all levels except possibly level h are completely full. Level h has all nodes filled in to the left side. "In the general case we may number the nodes 1, 2, ..., n; this numbering has the useful property that the father of node k is node ⌊k/2⌋; the sons of node k are nodes 2k and 2k + 1. The terminal nodes are numbers n + 1 through 2n + 1, inclusive." (Knuth, The Art of Computer Programming - Vol 1, Fundamental Algorithms, 1969, p. 401) Example For the following graph complete the following: The root is ___. The left child of b is ____, while i is a _____ child of h. The right subtree of g is ___&__, the left subtree of g is ___&___. Vertex i is a ________ of g, g is an _______ of i. The height of this tree is _____. The root, a, is located at level __ and c is located at level ___. There are eleven vertices and consequently ____ arcs in this tree. Vertices c and f are __________ of b and ___________ to each other. Two more Theorems Example Is there a full binary tree with 5 internal vertices and 6 terminal vertices?
If so, how many total vertices does this tree have? Example Is there a binary tree with height 4 and 8 terminal vertices? What is the greatest possible number of terminal vertices in a binary tree of height 4? Traversal of a Binary Tree One frequently has to perform operations on trees involving every vertex. In order to perform the operation, a recursive tree traversal is required. There are three principal ways to traverse a tree: preorder, inorder, and postorder. The essential difference is the order in which the root is processed: first, middle, or last. To help with clarity of operation, consider a tree in which the root and internal vertices are arithmetic operators and the terminal vertices are variables: What is the preorder traversal of this tree? Solution What is the inorder traversal of this tree? Solution What is the postorder traversal of this tree? Solution Let m = 1 and n = 0.5, then determine the value of the expression found in your three solutions above. What are the preorder, inorder, and postorder expressions for the binary tree in the m-ary example above?
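Since the expression tree in the figure is not reproduced here, the Python sketch below builds a small hypothetical expression tree, (m + n) * (m - n), and prints its preorder, inorder, and postorder traversals; the tree shape is an assumption chosen only to illustrate the three visit orders described above.

class Node:
    """A binary-tree vertex with an optional left and right child."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(node):        # root, left subtree, right subtree
    return [] if node is None else [node.value] + preorder(node.left) + preorder(node.right)

def inorder(node):         # left subtree, root, right subtree
    return [] if node is None else inorder(node.left) + [node.value] + inorder(node.right)

def postorder(node):       # left subtree, right subtree, root
    return [] if node is None else postorder(node.left) + postorder(node.right) + [node.value]

# Hypothetical expression tree for (m + n) * (m - n).
tree = Node('*', Node('+', Node('m'), Node('n')),
                 Node('-', Node('m'), Node('n')))

print(preorder(tree))   # ['*', '+', 'm', 'n', '-', 'm', 'n']   (prefix form)
print(inorder(tree))    # ['m', '+', 'n', '*', 'm', '-', 'n']   (infix form)
print(postorder(tree))  # ['m', 'n', '+', 'm', 'n', '-', '*']   (postfix form)
# With m = 1 and n = 0.5 this particular expression evaluates to (1 + 0.5) * (1 - 0.5) = 0.75.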
http://people.uncw.edu/tompkinsj/133/graphTheory/trees.htm
The Gini coefficient (also known as the Gini index or Gini ratio) is a measure of statistical dispersion developed by the Italian statistician and sociologist Corrado Gini and published in his 1912 paper "Variability and Mutability" (Italian: Variabilità e mutabilità). The Gini coefficient measures the inequality among values of a frequency distribution (for example levels of income). A Gini coefficient of zero expresses perfect equality, where all values are the same (for example, where everyone has an exactly equal income). A Gini coefficient of one (100 on the percentile scale) expresses maximal inequality among values (for example where only one person has all the income). However, a value greater than one may occur if some persons have negative income or wealth. For larger groups, values close to or above 1 are very unlikely in practice however. Gini coefficient is commonly used as a measure of inequality of income or wealth. For OECD countries, in the late 2000s, considering the effect of taxes and transfer payments, the income Gini coefficient ranged between 0.24 to 0.49, with Slovenia the lowest and Chile the highest. The countries in Africa had the highest pre-tax Gini coefficients in 2008–2009, with South Africa the world's highest at 0.7. The global income inequality Gini coefficient in 2005, for all human beings taken together, has been estimated to be between 0.61 and 0.68 by various sources. There are some issues in interpreting a Gini coefficient. The same value may result from many different distribution curves. The demographic structure should be taken into account. Countries with an aging population, or with a baby boom, experience an increasing pre-tax Gini coefficient even if real income distribution for working adults remain constant. Scholars have devised over a dozen variants of the Gini coefficient. The Gini coefficient is usually defined mathematically based on the Lorenz curve, which plots the proportion of the total income of the population (y axis) that is cumulatively earned by the bottom x% of the population (see diagram). The line at 45 degrees thus represents perfect equality of incomes. The Gini coefficient can then be thought of as the ratio of the area that lies between the line of equality and the Lorenz curve (marked A in the diagram) over the total area under the line of equality (marked A and B in the diagram); i.e., G = A / (A + B). If all people have non-negative income (or wealth, as the case may be), the Gini coefficient can theoretically range from 0 to 1; it is sometimes expressed as a percentage ranging between 0 and 100. In practice, both extreme values are not quite reached. If negative values are possible (such as the negative wealth of people with debts), then the Gini coefficient could theoretically be more than 1. Normally the mean (or total) is assumed positive, which rules out a Gini coefficient less than zero. A low Gini coefficient indicates a more equal distribution, with 0 corresponding to complete equality, while higher Gini coefficients indicate more unequal distribution, with 1 corresponding to complete inequality. When used as a measure of income inequality, the most unequal society (assuming no negative incomes) will be one in which a single person receives 100% of the total income and the remaining people receive none (G = 1−1/N); and the most equal society will be one in which every person receives the same income (G = 0). 
An alternative approach would be to consider the Gini coefficient as half of the relative mean difference, which is a mathematical equivalence. The mean difference is the average absolute difference between two items selected randomly from a population, and the relative mean difference is the mean difference divided by the average, to normalize for scale. The Gini index is defined as a ratio of the areas on the Lorenz curve diagram. If the area between the line of perfect equality and the Lorenz curve is A, and the area under the Lorenz curve is B, then the Gini index is A / (A + B). Since A + B = 0.5, the Gini index is G = 2A or G = 1 − 2B. If the Lorenz curve is represented by the function Y = L(X), the value of B can be found with integration and: G = 1 − 2 ∫01 L(X) dX. In some cases, this equation can be applied to calculate the Gini coefficient without direct reference to the Lorenz curve. For example (taking y to mean the income or wealth of a person or household): - For a population uniform on the values yi, i = 1 to n, indexed in non-decreasing order (yi ≤ yi+1): G = (1/n)(n + 1 − 2(Σi (n + 1 − i) yi)/(Σi yi)). - This may be simplified to: G = (2 Σi i yi)/(n Σi yi) − (n + 1)/n. - This formula actually applies to any real population, since each person can be assigned his or her own yi. - For a discrete probability function f(y), where yi, i = 1 to n, are the points with nonzero probabilities and which are indexed in increasing order (yi < yi+1): G = (1/(2μ)) Σi Σj f(yi) f(yj) |yi − yj|, where μ is the mean. - For a cumulative distribution function F(y) that has a mean μ and is zero for all negative values of y: G = 1 − (1/μ) ∫0∞ (1 − F(y))² dy. - (This formula can be applied when there are negative values if the integration is taken from minus infinity to plus infinity.) - Since the Gini coefficient is half the relative mean difference, it can also be calculated using formulas for the relative mean difference. For a random sample S consisting of values yi, i = 1 to n, that are indexed in non-decreasing order (yi ≤ yi+1), the corresponding sample statistic G(S) is a consistent estimator of the population Gini coefficient, but is not, in general, unbiased. Like G, G(S) has a simpler form. There does not exist a sample statistic that is in general an unbiased estimator of the population Gini coefficient, like the relative mean difference. For some functional forms, the Gini index can be calculated explicitly. For example, if y follows a lognormal distribution with the standard deviation of logs equal to σ, then G = 2Φ(σ/√2) − 1, where Φ is the cumulative distribution function of the standard normal distribution. Sometimes the entire Lorenz curve is not known, and only values at certain intervals are given. In that case, the Gini coefficient can be approximated by using various techniques for interpolating the missing values of the Lorenz curve. If (Xk, Yk) are the known points on the Lorenz curve, with the Xk indexed in increasing order (Xk−1 < Xk), so that: - Xk is the cumulated proportion of the population variable, for k = 0,...,n, with X0 = 0, Xn = 1. - Yk is the cumulated proportion of the income variable, for k = 0,...,n, with Y0 = 0, Yn = 1. - Yk should be indexed in non-decreasing order (Yk ≥ Yk−1). If the Lorenz curve is approximated on each interval as a line between consecutive points, then the area B can be approximated with trapezoids and: G ≈ 1 − Σk=1..n (Xk − Xk−1)(Yk + Yk−1) is the resulting approximation for G. More accurate results can be obtained using other methods to approximate the area B, such as approximating the Lorenz curve with a quadratic function across pairs of intervals, or building an appropriately smooth approximation to the underlying distribution function that matches the known data.
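The discrete formulas above are simple to apply in practice. The Python sketch below computes the Gini coefficient of a small, made-up income sample in two ways: with the simplified rank formula G = (2 Σ i·yi)/(n Σ yi) − (n + 1)/n for sorted incomes, and with the trapezoid approximation 1 − Σ (Xk − Xk−1)(Yk + Yk−1) applied to the Lorenz points built from the same sample; because the Lorenz points here come from the full sample, the two results coincide.

def gini_rank(incomes):
    """Gini via the simplified formula for incomes sorted in non-decreasing order."""
    y = sorted(incomes)
    n = len(y)
    total = sum(y)
    return 2.0 * sum(i * yi for i, yi in enumerate(y, start=1)) / (n * total) - (n + 1) / n

def gini_from_lorenz(points):
    """Trapezoid approximation from Lorenz points (X_k, Y_k), including (0, 0) and (1, 1)."""
    return 1.0 - sum((x1 - x0) * (y1 + y0)
                     for (x0, y0), (x1, y1) in zip(points, points[1:]))

# Made-up income sample (illustrative only).
sample = [20_000, 30_000, 30_000, 50_000, 120_000]

# Build the Lorenz points of the same sample.
y = sorted(sample)
n, total = len(y), sum(y)
cum = 0.0
lorenz = [(0.0, 0.0)]
for k, yi in enumerate(y, start=1):
    cum += yi
    lorenz.append((k / n, cum / total))

print(round(gini_rank(sample), 4))          # 0.352
print(round(gini_from_lorenz(lorenz), 4))   # 0.352, the same value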
If the population mean and boundary values for each interval are also known, these can also often be used to improve the accuracy of the approximation. The Gini coefficient calculated from a sample is a statistic, and its standard error, or confidence intervals for the population Gini coefficient, should be reported. These can be calculated using bootstrap techniques, but those proposed have been mathematically complicated and computationally onerous even in an era of fast computers. Ogwang (2000) made the process more efficient by setting up a "trick regression model" in which the incomes in the sample are ranked, with the lowest income being allocated rank 1. The model then expresses the rank (dependent variable) as the sum of a constant A and a normal error term whose variance is inversely proportional to yk; Ogwang showed that G can be expressed as a function of the weighted least squares estimate of the constant A, and that this can be used to speed up the calculation of the jackknife estimate for the standard error. Giles (2004) argued that the standard error of the estimate of A can be used to derive that of the estimate of G directly, without using a jackknife at all. This method only requires the use of ordinary least squares regression after ordering the sample data. The results compare favorably with the estimates from the jackknife, with agreement improving with increasing sample size. The paper describing this method can be found here: http://web.uvic.ca/econ/ewp0202.pdf However, it has since been argued that this is dependent on the model's assumptions about the error distributions (Ogwang 2004) and the independence of error terms (Reza & Gastwirth 2006), and that these assumptions are often not valid for real data sets. It may therefore be better to stick with jackknife methods such as those proposed by Yitzhaki (1991) and Karagiannis and Kovacevic (2000). The debate continues. Deaton (1997) gives the computationally convenient form
G = (N + 1)/(N − 1) − (2 / (N(N − 1)u)) Σi Pi Xi,
where u is the mean income of the population, Pi is the income rank P of person i, with income Xi, such that the richest person receives a rank of 1 and the poorest a rank of N. This effectively gives higher weight to poorer people in the income distribution, which allows the Gini to meet the Transfer Principle. Note that the Deaton formulation rescales the coefficient so that its value is 1 if all the Xi are zero except one.
Gini coefficients of representative income distributions
Given the normalization of both the cumulative population and the cumulative share of income used to calculate the Gini coefficient, the measure is not overly sensitive to the specifics of the income distribution, but depends only on how incomes vary relative to the other members of a population. The exception to this is in the redistribution of wealth resulting in a minimum income for all people. When the population is sorted, if its income distribution were to approximate a well-known function, then some representative values could be calculated. Some representative values of the Gini coefficient for income distributions approximated by some simple functions are tabulated below.
|Income Distribution Function||Gini Coefficient (rounded)|
|y = 1 for all x||0.0|
|y = x^(1/3)||0.143|
|y = x^(1/2)||0.200|
|y = x + b (b = 10% of max income)||0.273|
|y = x + b (b = 5% of max income)||0.302|
|y = x||0.333|
|y = x^2||0.500|
|y = x^3||0.600|
While the income distribution of any particular country need not follow such simple functions, these functions give a qualitative understanding of the income distribution in a nation given the Gini coefficient.
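If the table's functions are read as giving income y as a function of population rank x on [0, 1] (an assumption on my part, consistent with "when the population is sorted"), then for y = x^p the Lorenz curve is L(X) = X^(p+1) and G = p/(p+2). The sketch below, which is illustrative only and not from the article, checks a few rows of the table numerically.

```python
# A sketch (not from the article) that numerically checks some rows of the table above,
# assuming y(x) gives the income of the person at population rank x in [0, 1].
# G is computed as 1 - 2*B, where B is the area under the Lorenz curve
# L(X) = (income accumulated up to rank X) / (total income), on a fine grid.

def gini_of_rank_function(y, steps=100_000):
    xs = [(i + 0.5) / steps for i in range(steps)]
    incomes = [y(x) for x in xs]              # already sorted if y is non-decreasing
    total = sum(incomes)
    running, area_b = 0.0, 0.0
    for inc in incomes:
        mid = running + inc / 2.0             # cumulative income at the cell midpoint
        running += inc
        area_b += (mid / total) / steps       # midpoint rule for the Lorenz area
    return 1.0 - 2.0 * area_b

print(round(gini_of_rank_function(lambda x: 1.0), 3))        # ~0.0   (y = 1 for all x)
print(round(gini_of_rank_function(lambda x: x ** 0.5), 3))   # ~0.200 (y = x^(1/2))
print(round(gini_of_rank_function(lambda x: x), 3))          # ~0.333 (y = x)
print(round(gini_of_rank_function(lambda x: x ** 3), 3))     # ~0.600 (y = x^3)
```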
The effects of minimum income policy due to redistribution can be seen in the linear relationships above.
Generalized inequality index
The Gini coefficient and other standard inequality indices reduce to a common form. Perfect equality (the absence of inequality) exists when and only when the inequality ratio rj = xj / x̄ equals 1 for all j units in some population (for example, there is perfect income equality when everyone's income xj equals the mean income x̄, so that rj = 1 for everyone). Measures of inequality, then, are measures of the average deviations of the rj from 1; the greater the average deviation, the greater the inequality. Based on these observations, the inequality indices have this common form:
Inequality = Σj pj f(rj),
where pj weights the units by their population share, and f(rj) is a function of the deviation of each unit's rj from 1, the point of equality. The insight of this generalised inequality index is that inequality indices differ because they employ different functions of the distance of the inequality ratios (the rj) from 1.
Gini coefficient of income distributions
Gini coefficients of income are calculated on a market income as well as a disposable income basis. The Gini coefficient on market income – sometimes referred to as the pre-tax Gini index – is calculated on income before taxes and transfers, and it measures inequality in income without considering the effect of taxes and social spending already in place in a country. The Gini coefficient on disposable income – sometimes referred to as the after-tax Gini index – is calculated on income after taxes and transfers, and it measures inequality in income after considering the effect of taxes and social spending already in place in a country. The spread of Gini indices between OECD countries is significantly narrower on an after-taxes-and-transfers basis. For OECD countries over the 2008–2009 period, the Gini coefficient on a pre-taxes-and-transfers basis for the total population ranged from 0.34 to 0.53, with South Korea the lowest and Italy the highest. The Gini coefficient on an after-taxes-and-transfers basis for the total population ranged from 0.25 to 0.48, with Denmark the lowest and Mexico the highest. For the United States, the country with the largest population among OECD countries, the pre-tax Gini index was 0.49, and the after-tax Gini index was 0.38, in 2008–2009. The OECD averages for the total population in OECD countries were 0.46 for the pre-tax income Gini index and 0.31 for the after-tax income Gini index. Taxes and social spending that were in place in the 2008–2009 period in OECD countries significantly lowered effective income inequality, and in general, "European countries — especially Nordic and Continental welfare states — achieve lower levels of income inequality than other countries." Using the Gini can help quantify differences in welfare and compensation policies and philosophies. However, it should be borne in mind that the Gini coefficient can be misleading when used to make political comparisons between large and small countries or those with different immigration policies (see the limitations of Gini coefficient section). The Gini index for the entire world has been estimated by various parties to be between 0.61 and 0.68.
US income Gini indices over time
(Figure: Gini indexes, before and after taxes, between 1980 and 2010.)
Taxes and social spending in most countries have a significant moderating effect on income inequality Gini indices.
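Returning briefly to the generalized form described above: the following small sketch (mine, not from the article) evaluates Inequality = Σj pj f(rj) for a toy income vector with equal population weights pj = 1/N. The choice f(r) = r·ln r is my assumption for illustration; it yields a Theil-style entropy index, one of the alternative measures discussed later in this article, and swapping in a different f gives a different inequality measure.

```python
# A sketch (not from the article) of the common form Inequality = sum_j p_j * f(r_j),
# with r_j = x_j / mean(x).  The choice f(r) = r*ln(r) gives a Theil-style entropy
# index and is used here only to illustrate how different f's yield different measures.
import math

def generalized_index(incomes, f):
    mean = sum(incomes) / len(incomes)
    p = 1.0 / len(incomes)                      # equal population weight per person
    return sum(p * f(x / mean) for x in incomes)

theil = lambda r: r * math.log(r) if r > 0 else 0.0
print(generalized_index([10, 10, 10, 10], theil))   # 0.0 at perfect equality
print(generalized_index([1, 2, 3, 94], theil))      # larger value for a more unequal vector
```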
For the late 2000s, the United States had the 4th highest measure of income inequality out of the 34 OECD countries measured, after taxes and transfers had been taken into account. The table below presents the Gini indices for household income, without including the effect of taxes and transfers, for the United States at various times, according to the US Census Bureau. The Gini values are a national composite, with significant variations in Gini between the states. The states of Utah, Alaska and Wyoming have a pre-tax income inequality Gini coefficient that is 10% lower than the U.S. average, while those of Washington, D.C. and Puerto Rico are 10% higher. After including the effects of federal and state taxes, the U.S. Federal Reserve estimates that 34 states in the USA have a Gini coefficient between 0.30 and 0.35, with the state of Maine the lowest. At the county and municipality levels, the pre-tax Gini index ranged from 0.21 to 0.65 in 2010 across the United States, according to Census Bureau estimates.
|Year||Pre-tax Gini index|
|1967||0.397 (first year reported)|
Regional income Gini indices
According to UNICEF, the Latin America and Caribbean region had the highest net income Gini index in the world at 48.3, on an unweighted average basis, in 2008. The remaining regional averages were: sub-Saharan Africa (44.2), Asia (40.4), Middle East and North Africa (39.2), Eastern Europe and Central Asia (35.4), and High-income Countries (30.9). Using the same method, the United States is claimed to have a Gini index of 36, while South Africa had the highest income Gini index score of 67.8.
World income Gini index since the 1800s
Milanovic has estimated the world income Gini index over the last 200 years. Taking the income distribution of all human beings together, worldwide income inequality has been increasing steadily since the early 19th century. There was a steady increase in the global income inequality Gini score from 1820 to 2002, with a significant increase between 1980 and 2002. This trend appears to have peaked and begun a reversal with rapid economic growth in emerging economies, particularly in the large populations of the BRIC countries. If we consider the population size of every country, which is a more accurate method, the world Gini index has been falling since the early 1960s: in 1962 it was 0.57, and by 2000 it had fallen to 0.50.
The Gini coefficient is widely used in fields as diverse as sociology, economics, health science, ecology, engineering and agriculture. For example, in the social sciences and economics, in addition to income Gini coefficients, scholars have published education Gini coefficients and opportunity Gini coefficients.
Gini coefficient of education
The education Gini index estimates the inequality in education for a given population. It is used to discern trends in social development through educational attainment over time. From a study of 85 countries, Thomas et al. estimate that Mali had the highest education Gini index of 0.92 in 1990 (implying very high inequality in educational attainment across the population), while the United States had the lowest education inequality Gini index of 0.14. Between 1960 and 1990, South Korea, China and India had the fastest drop in the education inequality Gini index. They also claim the education Gini index for the United States slightly increased over the 1980–1990 period.
Gini coefficient of opportunity
Similar in concept to the income Gini coefficient, the opportunity Gini coefficient measures inequality of opportunity.
The concept builds on Amartya Sen's suggestion that inequality coefficients of social development should be premised on the process of enlarging people's choices and enhancing their capabilities, rather than on the process of reducing income inequality. Kovacevic, in a review of the opportunity Gini coefficient, explains that the coefficient estimates how well a society enables its citizens to achieve success in life, where the success is based on a person's choices, efforts and talents, not on a background defined by a set of predetermined circumstances at birth such as gender, race, place of birth, parents' income, and other circumstances beyond the control of that individual.
Gini coefficients and income mobility
In 1978, Anthony Shorrocks introduced a measure based on income Gini coefficients to estimate income mobility. This measure, generalized by Maasoumi and Zandvakili, is now generally referred to as the Shorrocks index, sometimes as the Shorrocks mobility index or Shorrocks rigidity index. It attempts to estimate whether the income inequality Gini coefficient is permanent or temporary, and to what extent a country or region enables economic mobility for its people, so that they can move from one income quantile (e.g. the bottom 20%) to another (e.g. the middle 20%) over time. In other words, the Shorrocks index compares inequality of short-term earnings, such as the annual income of households, to inequality of long-term earnings, such as 5-year or 10-year total income for the same households. The Shorrocks index is calculated in a number of different ways, a common approach being the ratio of income Gini coefficients between short-term and long-term income for the same region or country. A 2010 study using social security income data for the United States since 1937 and Gini-based Shorrocks indices concludes that income mobility in the United States has had a complicated history, primarily due to the mass influx of women into the country's labor force after World War II. Income inequality and income mobility trends have been different for men and women workers between 1937 and the 2000s. When men and women are considered together, the Gini coefficient-based Shorrocks index trends imply that long-term income inequality has been substantially reduced among all workers in the United States in recent decades. Other scholars, using just 1990s data or other short periods, have come to different conclusions. For example, Sastre and Ayala conclude from their study of income Gini coefficient data between 1993 and 1998 for six developed economies that France had the least income mobility, Italy the highest, and the United States and Germany intermediate levels of income mobility over those five years.
Features of Gini coefficient
The Gini coefficient has features that make it useful as a measure of dispersion in a population, and of inequality in particular. Because it is a ratio, it is straightforward to interpret. It also avoids references to a statistical average or position unrepresentative of most of the population, such as per capita income or gross domestic product. For a given time interval, the Gini coefficient can therefore be used to compare diverse countries and different regions or groups within a country, for example states, counties, urban versus rural areas, gender and ethnic groups. Gini coefficients can be used to compare income distribution over time, making it possible to see whether inequality is increasing or decreasing independent of absolute incomes.
- Anonymity: it does not matter who the high and low earners are.
- Scale independence: the Gini coefficient does not consider the size of the economy, the way it is measured, or whether it is a rich or poor country on average.
- Population independence: it does not matter how large the population of the country is.
- Transfer principle: if income (an amount less than the difference between the two incomes) is transferred from a richer person to a poorer person, the resulting distribution is more equal.
Limitations of Gini coefficient
The Gini coefficient is a relative measure, and its proper use and interpretation are controversial. Mellor explains that it is possible for the Gini coefficient of a developing country to rise (due to increasing inequality of income) while the number of people in absolute poverty decreases. This is because the Gini coefficient measures relative, not absolute, wealth. Kwok concludes that changing income inequality, measured by Gini coefficients, can be due to structural changes in a society, such as a growing population (baby booms, aging populations, increased divorce rates, extended family households splitting into nuclear families, emigration, immigration) and income mobility. Gini coefficients are simple, and this simplicity can lead to oversights and can confuse the comparison of different populations; for example, while both Bangladesh (per capita income of $1,693) and the Netherlands (per capita income of $42,183) had an income Gini index of 0.31 in 2010, the quality of life, economic opportunity and absolute income in these countries are very different. That is, countries may have identical Gini coefficients but differ greatly in wealth: basic necessities may be available to all in a developed economy, while in an undeveloped economy with the same Gini coefficient, basic necessities may be unavailable to most, or unequally available, because of lower absolute wealth.
- Different income distributions with the same Gini coefficient
Even when the total income of a population is the same, in certain situations two countries with different income distributions can have the same Gini index (e.g. cases where the income Lorenz curves cross). Table A illustrates one such situation: both countries have a Gini index of 0.2, but the average income distributions for household groups are different. As another example, in a population where the lowest 50% of individuals have no income and the other 50% have equal income, the Gini coefficient is 0.5, whereas for another population where the lowest 75% of people have 25% of income and the top 25% have 75% of the income, the Gini index is also 0.5. Economies with similar incomes and Gini coefficients can therefore have very different income distributions. Bellù and Liberati claim that ranking income inequality between two different populations based on their Gini indices is sometimes not possible, or can be misleading.
- Extreme wealth inequality, yet low income Gini coefficient
A Gini index does not contain information about absolute national or personal incomes. Populations can have very low income Gini indices, yet simultaneously a very high wealth Gini index. By measuring inequality in income, the Gini ignores the differential efficiency of use of household income. By ignoring wealth (except as it contributes to income), the Gini can create the appearance of inequality when the people compared are at different stages in their lives.
Wealthy countries such as Sweden can show a low Gini coefficient for disposable income of 0.31, thereby appearing equal, yet have a very high Gini coefficient for wealth of 0.79 to 0.86, suggesting an extremely unequal distribution of wealth in society. These factors are not assessed in an income-based Gini.
|1||20,000||1 & 2||50,000|
|3||40,000||3 & 4||90,000|
|5||60,000||5 & 6||130,000|
|7||80,000||7 & 8||170,000|
|9||120,000||9 & 10||270,000|
- Small sample bias – sparsely populated regions more likely to have low Gini coefficient
The Gini index has a downward bias for small populations. Counties, states or countries with small populations and less diverse economies will tend to report small Gini coefficients. For economically diverse large population groups, a much higher coefficient is expected than for each of their regions. For example, taking the world economy as one population and considering the income distribution for all human beings, different scholars estimate the global Gini index to range between 0.61 and 0.68. As with other inequality coefficients, the Gini coefficient is influenced by the granularity of the measurements. For example, five 20% quantiles (low granularity) will usually yield a lower Gini coefficient than twenty 5% quantiles (high granularity) for the same distribution. Philippe Monfort has shown that using inconsistent or unspecified granularity limits the usefulness of Gini coefficient measurements. The Gini coefficient measure gives different results when applied to individuals instead of households, for the same economy and same income distributions. If household data are used, the measured value of the income Gini depends on how the household is defined. When different populations are not measured with consistent definitions, comparison is not meaningful. Deininger and Squire (1996) show that Gini coefficients based on individual income, rather than household income, are different. For the United States, for example, they find that the individual income-based Gini index was 0.35, while for France they report an individual income-based Gini index of 0.43. According to their individual-focused method, in the 108 countries they studied, South Africa had the world's highest Gini index at 0.62, Malaysia had Asia's highest at 0.5, Brazil the highest in the Latin America and Caribbean region at 0.57, and Turkey the highest among OECD countries at 0.5.
Table C. Household money income distribution for the United States (in 2010 adjusted dollars)
|Income bracket||% of Population 1979||% of Population 2010|
|$15,000 – $24,999||11.9%||12.0%|
|$25,000 – $34,999||12.1%||10.9%|
|$35,000 – $49,999||15.4%||13.9%|
|$50,000 – $74,999||22.1%||17.7%|
|$75,000 – $99,999||12.4%||11.4%|
|$100,000 – $149,999||8.3%||12.1%|
|$150,000 – $199,999||2.0%||4.5%|
|$200,000 and over||1.2%||3.9%|
|United States' Gini on pre-tax basis|
- Gini coefficient is unable to discern the effects of structural changes in populations
Expanding on the importance of life-span measures, the Gini coefficient, as a point estimate of equality at a certain time, ignores life-span changes in income. Typically, increases in the proportion of young or old members of a society will drive apparent changes in equality, simply because people generally have lower incomes and wealth when they are young than when they are old. Because of this, factors such as age distribution within a population and mobility within income classes can create the appearance of inequality when none exists once demographic effects are taken into account.
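Two of the claims made in this limitations discussion can be checked directly with synthetic data: the 50/50 and 75/25 example populations described earlier both have G = 0.5, and grouping the same incomes into fewer quantiles understates G. The sketch below is mine, not from the article, and all data in it are made up.

```python
# A sketch (not from the article) checking two claims from this section with synthetic data.

def gini(values):
    """Exact population Gini: G = (2 * sum(i * y_i)) / (n * sum(y)) - (n + 1)/n."""
    y = sorted(values)
    n, total = len(y), sum(y)
    weighted = sum((i + 1) * v for i, v in enumerate(y))
    return 2 * weighted / (n * total) - (n + 1) / n

# Claim 1: two very different populations, same Gini coefficient.
half_have_nothing = [0] * 50 + [1] * 50                # bottom 50% have no income
bottom75_share25 = [25 / 75] * 75 + [75 / 25] * 25     # bottom 75% share 25% of income
print(round(gini(half_have_nothing), 3), round(gini(bottom75_share25), 3))   # 0.5 and 0.5

# Claim 2: granularity matters -- quantile means smooth away within-group inequality.
incomes = list(range(1, 101))                          # person i earns i (already sorted)

def quantile_means(values, q):
    size = len(values) // q
    return [sum(values[k * size:(k + 1) * size]) / size for k in range(q)]

print(round(gini(incomes), 3))                      # ~0.330 from the full data
print(round(gini(quantile_means(incomes, 20)), 3))  # ~0.329 from twenty 5% quantiles
print(round(gini(quantile_means(incomes, 5)), 3))   # ~0.317 from five 20% quantiles
```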
A given economy may therefore have a higher Gini coefficient at any one point in time than another economy, while the Gini coefficient calculated over individuals' lifetime income is actually lower than that of the apparently more equal (at a given point in time) economy. Essentially, what matters is not just inequality in any particular year, but the composition of the distribution over time. Kwok claims that the income Gini index for Hong Kong has been high (0.434 in 2010) in part because of structural changes in its population. Over recent decades, Hong Kong has witnessed increasing numbers of small households, elderly households and elderly people living alone, so combined income is now split across more households. Many old people are living separately from their children in Hong Kong. These social changes have caused substantial changes in household income distribution. The income Gini coefficient, claims Kwok, does not discern these structural changes in Hong Kong's society. Household money income distribution for the United States, summarized in Table C of this section, confirms that this issue is not limited to Hong Kong. According to the US Census Bureau, between 1979 and 2010, the population of the United States experienced structural changes in overall households, income for all income brackets increased in inflation-adjusted terms, and household income distributions shifted into higher income brackets over time, while the income Gini coefficient increased. Another limitation of the Gini coefficient is that it is not a proper measure of egalitarianism, as it only measures income dispersion. For example, if two equally egalitarian countries pursue different immigration policies, the country accepting a higher proportion of low-income or impoverished migrants will report a higher Gini coefficient and therefore may appear to exhibit more income inequality.
- Gini coefficient falls yet the poor get poorer; Gini coefficient rises yet everyone gets richer
|Income bracket||Year 1
|20% – 40%||1,000||1,200||500|
|40% – 60%||2,000||2,200||1,000|
|60% – 80%||5,000||5,500||2,000|
Arnold describes income distribution situations in which the Gini coefficient misleads. The income of the poorest fifth of households can be lower in an economy with a lower Gini coefficient than in an economy with a higher Gini coefficient in which the poorest income bracket earns a larger percentage of all income. Table D illustrates this case, where the lowest income bracket has an average household market income of $500 per year at a Gini index of 0.51, and zero income at a Gini index of 0.48. This is counter-intuitive, and the Gini coefficient cannot tell us what is happening to each income bracket or to absolute income, cautions Arnold. Feldstein similarly explains one limitation of the Gini coefficient as its focus on relative income distribution rather than on real levels of poverty and prosperity in society. He claims Gini coefficient analysis is limited because in many situations it intuitively implies inequality that violates the so-called Pareto improvement principle. The Pareto improvement principle, named after the Italian economist Vilfredo Pareto, states that a social, economic or income change is good if it makes one or more people better off without making anyone else worse off. The Gini coefficient can rise even if some or all income brackets experience rising income. Feldstein's explanation is summarized in Table D.
The table shows that in a growing economy, consistent with the Pareto improvement principle, where the income of every segment of the population has increased from one year to the next, the income inequality Gini coefficient can rise as well. In contrast, in another economy, if everyone gets poorer and is worse off, income inequality is lower and the Gini coefficient smaller.
- Inability to value benefits and income from the informal economy affects Gini coefficient accuracy
Some countries distribute benefits that are difficult to value. Countries that provide subsidized housing, medical care, education or other such services supply benefits that are difficult to value objectively, since the value depends on the quality and extent of the benefit. In the absence of free markets, valuing these income transfers as household income is subjective. The theoretical model of the Gini coefficient is limited to accepting correct or incorrect subjective assumptions. In subsistence-driven and informal economies, people may have significant income in forms other than money, for example through subsistence farming or bartering. Such income tends to accrue to the segment of the population that is below the poverty line or very poor, in emerging and transitional economies such as those in sub-Saharan Africa, Latin America, Asia and Eastern Europe. The informal economy accounts for over half of global employment and as much as 90 per cent of employment in some of the poorer sub-Saharan countries with high official Gini inequality coefficients. Schneider et al., in their 2010 study of 162 countries, report that about 31.2%, or about $20 trillion, of the world's GDP is informal. In developing countries, the informal economy predominates for all income brackets except for the richer, urban upper-income-bracket populations. Even in developed economies, between 8% (United States) and 27% (Italy) of each nation's GDP is informal, and the resulting informal income predominates as a livelihood activity for those in the lowest income brackets. The value and distribution of income from the informal or underground economy are difficult to quantify, making true income Gini coefficients difficult to estimate. Different assumptions and quantifications of these incomes will yield different Gini coefficients. The Gini coefficient has some mathematical limitations as well. It is not additive: the Gini coefficients of different sets of people cannot be averaged to obtain the Gini coefficient of all the people in the sets.
Alternatives to Gini coefficient
Given the limitations of the Gini coefficient, other statistical methods are used in combination with it or as alternative measures of population dispersion. For example, entropy measures are frequently used (e.g. the Theil index and the Atkinson index). These measures attempt to compare the distribution of resources by intelligent agents in the market with a maximum entropy random distribution, which would occur if these agents acted like non-intelligent particles in a closed system following the laws of statistical physics.
Relation to other statistical measures
The Gini coefficient is closely related to the AUC (Area Under the receiver operating characteristic Curve) measure of performance; the two are related by the formula G = 2 × AUC − 1. The Gini coefficient is also closely related to the Mann–Whitney U statistic. In certain fields, such as ecology, Simpson's index is used, which is related to the Gini coefficient. The Simpson index scales as the mirror opposite of the Gini; that is, with increasing diversity the Simpson index takes a smaller value (0 means maximum and 1 means minimum heterogeneity, per the classic Simpson index).
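The AUC relation above can be checked numerically. The sketch below is mine, not from the article; it uses invented classifier scores and computes AUC from the Mann–Whitney "win" count, which is also how the Gini (accuracy ratio) reported for credit-risk models is usually obtained.

```python
# A sketch (not from the article): checking G = 2*AUC - 1 on synthetic classifier scores.
# AUC is the probability that a randomly chosen positive example is scored above a
# randomly chosen negative one (ties count half), i.e. the Mann-Whitney U statistic
# divided by the number of positive/negative pairs.

def auc(pos_scores, neg_scores):
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

positives = [0.9, 0.8, 0.7, 0.4]          # made-up scores for defaulting clients
negatives = [0.6, 0.5, 0.3, 0.2, 0.1]     # made-up scores for non-defaulting clients

a = auc(positives, negatives)
print(a, 2 * a - 1)                       # AUC = 0.9, Gini / accuracy ratio = 0.8
```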
The Simpson index is sometimes transformed by subtracting the observed value from the maximum possible value of 1; it is then known as the Gini–Simpson index.
Other uses
Although the Gini coefficient is most popular in economics, it can in theory be applied in any field of science that studies a distribution. For example, in ecology the Gini coefficient has been used as a measure of biodiversity, where the cumulative proportion of species is plotted against the cumulative proportion of individuals. In health, it has been used as a measure of the inequality of health-related quality of life in a population. In education, it has been used as a measure of the inequality of universities. In chemistry it has been used to express the selectivity of protein kinase inhibitors against a panel of kinases. In engineering, it has been used to evaluate the fairness achieved by Internet routers in scheduling packet transmissions from different flows of traffic. In statistics, when building decision trees, it is used to measure the purity of possible child nodes, with the aim of maximising the average purity of the two child nodes when splitting, and it has been compared with other equality measures. Discriminatory power refers to a credit risk model's ability to differentiate between defaulting and non-defaulting clients. The formula given in the calculation section above may be used for the final model and also at the individual model factor level, to quantify the discriminatory power of individual factors. It is related to the accuracy ratio in population assessment models.
Notes and references
- Gini, C. (1912). Variabilità e mutabilità (Variability and Mutability). C. Cuppini, Bologna, 156 pages. Reprinted in Memorie di metodologica statistica (Ed. Pizetti E, Salvemini, T). Rome: Libreria Eredi Virgilio Veschi (1955).
- Gini, C. (1909). "Concentration and dependency ratios" (in Italian). English translation in Rivista di Politica Economica, 87 (1997), 769–789.
- "Current Population Survey (CPS) – Definitions and Explanations". US Census Bureau.
- Note: the Gini coefficient approaches 1 only in a large population where one person has all the income. In the special case of just two people, where one has no income and the other has all the income, the Gini coefficient is 0.5. For a set of 5 people, where 4 have no income and the fifth has all the income, the Gini coefficient is 0.8. See: FAO, United Nations – Inequality Analysis, The Gini Index Module (PDF format), fao.org.
- Sadras, V. O.; Bongiovanni, R. (2004). "Use of Lorenz curves and Gini coefficients to assess yield inequality within paddocks". Field Crops Research 90 (2–3): 303–310. doi:10.1016/j.fcr.2004.04.003.
- Gini, C. (1936). "On the Measure of Concentration with Special Reference to Income and Statistics", Colorado College Publication, General Series No. 208, 73–79.
- "Income distribution – Inequality: Income distribution – Inequality – Country tables". OECD. 2012.
- "South Africa Overview". The World Bank. 2011.
- Ali, Mwabu and Gesami (March 2002). "Poverty reduction in Africa: Challenges and policy options" (PDF). African Economic Research Consortium, Nairobi.
- Evan Hillebrand (June 2009). "Poverty, Growth, and Inequality over the Next 50 Years" (PDF). FAO, United Nations – Economic and Social Development Department.
- "The Real Wealth of Nations: Pathways to Human Development, 2010". United Nations Development Program. 2011. pp. 72–74. ISBN 9780230284456.
- Shlomo Yitzhaki (1998). "More than a Dozen Alternative Ways of Spelling Gini".
Economic Inequality 8: 13–30. - Myung Jae Sung (August 2010). Population Aging, Mobility of Quarterly Incomes, and Annual Income Inequality: Theoretical Discussion and Empirical Findings. - Blomquist, N. (1981). "A comparison of distributions of annual and lifetime income: Sweden around 1970". Review of Income and Wealth 27 (3): 243–264. doi:10.1111/j.1475-4991.1981.tb00227.x. - "Gini Coefficient". Wolfram Mathworld. - Firebaugh, Glenn (1999). "Empirics of World Income Inequality". American Journal of Sociology 104 (6): 1597–1630. doi:10.1086/210218.. See also ——— (2003). "Inequality: What it is and how it is measured". The New Geography of Global Income Inequality. Cambridge, MA: Harvard University Press. ISBN 0-674-01067-1. - N. C. Kakwani (April 1977). "Applications of Lorenz Curves in Economic Analysis". Econometrica 45 (3): 719–728. doi:10.2307/1911684. JSTOR 1911684. - Chu, Davoodi, Gupta (March 2000). "Income Distribution and Tax and Government Social Spending Policies in Developing Countries". International Monetary Fund. - "Monitoring quality of life in Europe – Gini index". Eurofound. 26 August 2009 - Chen Wang, Koen Caminada, and Kees Goudswaard (July–September 2012). "The redistributive effect of social transfer programmes and taxes: A decomposition across countries". International Social Security Review 65 (3): 27–48. doi:10.1111/j.1468-246X.2012.01435.x. - Bob Sutcliffe (April 2007). "Postscript to the article ‘World inequality and globalization’ (Oxford Review of Economic Policy, Spring 2004)". Retrieved 2007-12-13 - Income distribution – Inequality. Gini coefficient after taxes and transfers. OECD. StatExtracts. Retrieved: 24 December 2012. - "A brief look at post-war U.S. Income Inequality". United States Census Bureau. 1996. - "Table 3. Income Distribution Measures Using Money Income and Equivalence-Adjusted Income: 2007 and 2008". Income, Poverty, and Health Insurance Coverage in the United States: 2008. United States Census Bureau. p. 17. - "Income, Poverty and Health Insurance Coverage in the United States: 2009". Newsroom. United States Census Bureau. - "Income, Poverty and Health Insurance Coverage in the United States: 2011". Newsroom. United States Census Bureau. September 12, 2012. Retrieved January 23, 2013. - Daniel H. Cooper, Byron F. Lutz, and Michael G. Palumbo (September 22, 2011). "Quantifying the Role of Federal and State Taxes in Mitigating Income Inequality". Federal Reserve, Boston, United States. - Adam Bee (February 2012). "Household Income Inequality Within U.S. Counties: 2006–2010". Census Bureau, U.S. Department of Commerce. - Isabel Ortiz and Matthew Cummins (April 2011). "Global Inequality: Beyond the Bottom Billion". UNICEF. p. 26. - Berg, Andrew G.; Ostry, Jonathan D. (2011). "Equality and Efficiency". Finance and Development (International Monetary Fund) 48 (3). Retrieved September 10, 2012. - Branko Milanovic (September 2011). "More or Less". Finance & Development (International Monetary Fund) 48 (3). - Albert Berry and John Serieux (September 2006). "Riding the Elephants: The Evolution of World Economic Growth and Income Distribution at the End of the Twentieth Century (1980–2000)". United Nations (DESA Working Paper No. 27). - Thomas, Wang, Fan (January 2001). "Measuring education inequality – Gini coefficients of education". The World Bank. - John E. Roemer (September 2006). "ECONOMIC DEVELOPMENT AS OPPORTUNITY EQUALIZATION". Yale University. - John Weymark (2003). "Generalized Gini Indices of Equality of Opportunity". 
Journal of Economic Inequality 1 (1): 5–24. doi:10.1023/A:1023923807503. - Milorad Kovacevic (November 2010). "Measurement of Inequality in Human Development – A Review". United Nations Development Program. - Anthony Atkinson (1999). "The contributions of Amartya Sen to Welfare Economics". Scand. J. Of Economics 101 (2): 173–190. doi:10.1111/1467-9442.00151. - Roemer et al; Aaberge, Rolf; Colombino, Ugo; Fritzell, Johan; Jenkins, Stephen P; Lefranc, Arnaud; Marx, Ive; Page, Marianne et al. (March 2003). "To what extent do fiscal regimes equalize opportunities for income acquisition among citizens?". Journal of Public Economics 87 (3–4): 539–565. doi:10.1016/S0047-2727(01)00145-1. - Shorrocks, Anthony (December 1978). "Income Inequality and Income Mobility". Journal of Economic Theory 19 (2): 376–393. doi:10.1016/0022-0531(78)90101-1. - Maasoumi and Zanvakili; Zandvakili, Sourushe (1986). "A class of generalized measures of mobility with applications". Economic Letters 22: 97–102. doi:10.1016/0165-1765(86)90150-3. - Wojciech Kopczuk, Emmanuel Saez and Jae Song (2010). "Earnings Inequality and Mobility in the United States: Evidence from Social Security Data Since 1937". The Quarterly Journal of Economics 125 (1): 91–128. doi:10.1162/qjec.2010.125.1.91. - Wen-Hao Chen (March 2009). "CROSS-NATIONAL DIFFERENCES IN INCOME MOBILITY: EVIDENCE FROM CANADA, THE UNITED STATES, GREAT BRITAIN AND GERMANY". Review of Income and Wealth 55 (1): 75–100. doi:10.1111/j.1475-4991.2008.00307.x. - Mercedes Sastre and Luis Ayala (2002). "Europe vs. The United States: Is There a Trade-Off Between Mobility and Inequality?". Institute for Social and Economic Research, University of Essex. - Lorenzo Giovanni Bellù and Paolo Liberati (2006). "Inequality Analysis – The Gini Index". Food and Agriculture Organization, United Nations. - Julie A. Litchfield (March 1999). "Inequality: Methods and Tools". The World Bank. - Stefan V. Stefanescu (2009). "Measurement of the Bipolarization Events". World Academy of Science, Engineering and Technology 57: 929–936. - Ray, Debraj (1998). Development Economics. Princeton, NJ: Princeton University Press. p. 188. ISBN 0-691-01706-9. - Thomas Garrett (Spring 2010). "U.S. Income Inequality: It's Not So Bad". Inside the Vault (U.S. Federal Reserve, St Louis) 14 (1). - John W. Mellor (June 2, 1989). Dramatic Poverty Reduction in the Third World: Prospects and Needed Action. International Food Policy Research Institute. pp. 18–20. - KWOK Kwok Chuen (2010). "Income Distribution of Hong Kong and the Gini Coefficient". The Government of Hong Kong, China. - "The Real Wealth of Nations: Pathways to Human Development (2010 Human Development Report – see Stat Tables)". United Nations Development Program. 2011. pp. 152–156. - Fernando G De Maio (2007). "Income inequality measures". Journal of Epidemiology and Community Health 61 (10): 849–852. doi:10.1136/jech.2006.052969. PMC 2652960. PMID 17873219. - Domeij and Floden; Flodén, Martin (2010). "Inequality Trends in Sweden 1978–2004". Review of Economic Dynamics 13 (1): 179–208. doi:10.1016/j.red.2009.10.005. - Domeij and Klein (January 2000). "Accounting for Swedish wealth inequality". - George Deltas (February 2003). "The Small-Sample Bias of the Gini Coefficient: Results and Implications for Empirical Research". The Review of Economics and Statistics 85 (1): 226–234. doi:10.1162/rest.2003.85.1.226. - Philippe Monfort (2008). "Convergence of EU regions – Measures and evolution". European Union – Europa. p. 6. 
- Klaus Deininger and Lyn Squire (1996). "A New Data Set Measuring Income Inequality". World Bank Economic Review 10 (3): 565–591. doi:10.1093/wber/10.3.565. - "Income, Poverty, and Health Insurance Coverage in the United States: 2010 (see Table A-2)". Census Bureau, Dept of Commerce, United States. September 2011. - Congressional Budget Office: Trends in the Distribution of Household Income Between 1979 and 2007. October 2011. see p. i–x, with definitions on ii–iii - Roger Arnold (2007). Economics. pp. 573–581. ISBN 978-0324538014. - Frank Cowell (2007). "Inequality decomposition – three bad measures". Bulletin of Economic Research 40 (4): 309–311. doi:10.1111/j.1467-8586.1988.tb00274.x. - Martin Feldstein (August, 1998). "Is income inequality really the problem? (Overview)". U.S. Federal Reserve. - Taylor and Weerapana (2009). Principles of Microeconomics: Global Financial Crisis Edition. pp. 416–418. ISBN 978-1439078211. - Martin Feldstein (1998). "Income inequality and poverty". National Bureau of Economic Research. - Friedrich Schneider et al (2010). "New Estimates for the Shadow Economies all over the World". International Economic Journal 24 (4): 443–461. doi:10.1080/10168737.2010.525974. - The Informal Economy. International Institute for Environment and Development, United Kingdom. 2011. ISBN 978-1-84369-822-7. - J. Barkley Rosser, Jr., Marina V. Rosser, and Ehsan Ahmed (March, 2000). "INCOME INEQUALITY AND THE INFORMAL ECONOMY IN TRANSITION ECONOMIES". Journal of Comparative Economics 28 (1): 156–171. doi:10.1006/jcec.2000.1645. - Gorana Krstić and Peter Sanfey (February 2010). "Earnings inequality and the informal economy: evidence from Serbia". European Bank for Reconstruction and Development. - Friedrich Schneider (December 2004). "The Size of the Shadow Economies of 145 Countries all over the World: First Results over the Period 1999 to 2003". - Hand, David J.; Robert J. Till (2001). "A Simple Generalisation of the Area Under the ROC Curve for Multiple Class Classification Problems". Machine Learning 45 (2): 171–186. doi:10.1023/A:1010920819831. - Iddo Eliazar and Igor Sokolov (2010). "Measuring statistical heterogeneity: The Pietra index". Physica A-Statistical Mechanics and Its Applications 389 (1): 117–125. doi:10.1016/j.physa.2009.08.006. - Wen-Chung Lee (1999). "Probabilistic Analysis of Global Performances of Diagnostic Tests: Interpreting the Lorenz Curve-Based Summary Measures". Statistics in Medicine 18 (4): 455–471. doi:10.1002/(SICI)1097-0258(19990228)18:4<455::AID-SIM44>3.0.CO;2-A. PMID 10070686. - Robert K. Peet (1974). "The Measurement of Species Diversity". Annual Review of Ecology and Systematics 5: 285–307. doi:10.1146/annurev.es.05.110174.001441. JSTOR 2096890. - Wittebolle, Lieven; et al. (2009). "Initial community evenness favours functionality under selective stress". Nature 458 (7238): 623–626. doi:10.1038/nature07840. PMID 19270679. - Asada, Yukiko (2005). "Assessment of the health of Americans: the average health-related quality of life and its inequality across individuals and groups". Population Health Metrics 3: 7. doi:10.1186/1478-7954-3-7. PMC 1192818. PMID 16014174. - Halffman, Willem; Leydesdorff, L (2010). "Is Inequality Among Universities Increasing? Gini Coefficients and the Elusive Rise of Elite Universities". Minerva 48 (1): 55–72. doi:10.1007/s11024-010-9141-3. PMC 2850525. PMID 20401157. - Graczyk, Piotr (2007). "Gini Coefficient: A New Way To Express Selectivity of Kinase Inhibitors against a Family of Kinases". 
Journal of Medicinal Chemistry 50 (23): 5773–5779. doi:10.1021/jm070562u. PMID 17948979. - Shi, Hongyuan; Sethu, Harish (2003). "Greedy Fair Queueing: A Goal-Oriented Strategy for Fair Real-Time Packet Scheduling". Proceedings of the 24th IEEE Real-Time Systems Symposium. IEEE Computer Society. pp. 345–356. ISBN 0-7695-2044-8. - Gonzalez, Luis; et al. (2010). "The Similarity between the Square of the Coeficient of Variation and the Gini Index of a General Random Variable". Journal of Quantitative Methods for Economics and Business Administration 10: 5–18. ISSN 1886-516X. - George A. Christodoulakis and Stephen Satchell (Editors) (November 2007). The Analytics of Risk Model Validation (Quantitative Finance). Academic Press. ISBN 978-0750681582. Further reading - Amiel, Y.; Cowell, F.A. (1999). Thinking about Inequality. Cambridge. ISBN 0-521-46696-2. - Anand, Sudhir (1983). Inequality and Poverty in Malaysia. New York: Oxford University Press. ISBN 0-19-520153-1. - Brown, Malcolm (1994). "Using Gini-Style Indices to Evaluate the Spatial Patterns of Health Practitioners: Theoretical Considerations and an Application Based on Alberta Data". Social Science Medicine 38 (9): 1243–1256. doi:10.1016/0277-9536(94)90189-9. PMID 8016689. - Chakravarty, S. R. (1990). Ethical Social Index Numbers. New York: Springer-Verlag. ISBN 0-387-52274-3. - Deaton, Angus (1997). Analysis of Household Surveys. Baltimore MD: Johns Hopkins University Press. ISBN 0-585-23787-5. - Dixon, PM, Weiner J., Mitchell-Olds T, Woodley R. (1987). "Bootstrapping the Gini coefficient of inequality". Ecology (Ecological Society of America) 68 (5): 1548–1551. doi:10.2307/1939238. JSTOR 1939238. - Dorfman, Robert (1979). "A Formula for the Gini Coefficient". The Review of Economics and Statistics (The MIT Press) 61 (1): 146–149. doi:10.2307/1924845. JSTOR 1924845. - Firebaugh, Glenn (2003). The New Geography of Global Income Inequality. Cambridge MA: Harvard University Press. ISBN 0-674-01067-1. - Gastwirth, Joseph L. (1972). "The Estimation of the Lorenz Curve and Gini Index". The Review of Economics and Statistics (The MIT Press) 54 (3): 306–316. doi:10.2307/1937992. JSTOR 1937992. - Giles, David (2004). "Calculating a Standard Error for the Gini Coefficient: Some Further Results". Oxford Bulletin of Economics and Statistics 66 (3): 425–433. doi:10.1111/j.1468-0084.2004.00086.x. - Gini, Corrado (1912). "Variabilità e mutabilità" Reprinted in Memorie di metodologica statistica (Ed. Pizetti E, Salvemini, T). Rome: Libreria Eredi Virgilio Veschi (1955). - Gini, Corrado (1921). "Measurement of Inequality of Incomes". The Economic Journal (Blackwell Publishing) 31 (121): 124–126. doi:10.2307/2223319. JSTOR 2223319. - Giorgi, G. M. (1990). A bibliographic portrait of the Gini ratio, Metron, 48, 183–231. - Karagiannis, E. and Kovacevic, M. (2000). "A Method to Calculate the Jackknife Variance Estimator for the Gini Coefficient". Oxford Bulletin of Economics and Statistics 62: 119–122. doi:10.1111/1468-0084.00163. - Mills, Jeffrey A.; Zandvakili, Sourushe (1997). "Statistical Inference via Bootstrapping for Measures of Inequality". Journal of Applied Econometrics 12 (2): 133–150. doi:10.1002/(SICI)1099-1255(199703)12:2<133::AID-JAE433>3.0.CO;2-H. - Modarres, Reza and Gastwirth, Joseph L. (2006). "A Cautionary Note on Estimating the Standard Error of the Gini Index of Inequality". Oxford Bulletin of Economics and Statistics 68 (3): 385–390. doi:10.1111/j.1468-0084.2006.00167.x. - Morgan, James (1962). "The Anatomy of Income Distribution". 
The Review of Economics and Statistics (The MIT Press) 44 (3): 270–283. doi:10.2307/1926398. JSTOR 1926398. - Ogwang, Tomson (2000). "A Convenient Method of Computing the Gini Index and its Standard Error". Oxford Bulletin of Economics and Statistics 62: 123–129. doi:10.1111/1468-0084.00164. - Ogwang, Tomson (2004). "Calculating a Standard Error for the Gini Coefficient: Some Further Results: Reply". Oxford Bulletin of Economics and Statistics 66 (3): 435–437. doi:10.1111/j.1468-0084.2004.00087.x. - Xu, Kuan (January 2004). How Has the Literature on Gini's Index Evolved in the Past 80 Years?. Department of Economics, Dalhousie University. Retrieved 2006-06-01. The Chinese version of this paper appears in Xu, Kuan (2003). "How Has the Literature on Gini's Index Evolved in the Past 80 Years?". China Economic Quarterly 2: 757–778. - Yitzhaki, S. (1991). "Calculating Jackknife Variance Estimators for Parameters of the Gini Method". Journal of Business and Economic Statistics (American Statistical Association) 9 (2): 235–239. doi:10.2307/1391792. JSTOR 1391792. - Deutsche Bundesbank: Do banks diversify loan portfolios?, 2005 (on using e.g. the Gini coefficient for risk evaluation of loan portfolios) - Forbes Article, In praise of inequality - Measuring Software Project Risk With The Gini Coefficient, an application of the Gini coefficient to software - The World Bank: Measuring Inequality - Travis Hale, University of Texas Inequality Project:The Theoretical Basics of Popular Inequality Measures, online computation of examples: 1A, 1B - Article from The Guardian analysing inequality in the UK 1974 – 2006 - World Income Inequality Database - Income Distribution and Poverty in OECD Countries - Gini Coefficient Calculator
http://en.wikipedia.org/wiki/Gini_index
Complex Analysis/Complex Numbers/Introduction This book assumes you have some passing familiarity with the complex numbers. Indeed, much of the material in the book assumes you are already familiar with multi-variable calculus. If you have not encountered the complex numbers previously, it would be a good idea to read a more detailed introduction, which will have many more worked examples of the arithmetic of complex numbers that this book assumes is already familiar. Such an introduction can often be found in an Algebra (or "Algebra II") text, such as the Algebra wikibook's section on complex numbers. Intuitively, a complex number z is a number written in the form z = x + iy, where x and y are real numbers and i is an imaginary number that satisfies i² = −1. We call x the real part and y the imaginary part of z, and denote them by Re(z) and Im(z), respectively. Note that for the number z = x + iy, the imaginary part is y, not iy. Also, to distinguish between complex and purely real numbers, we will often use the letters z and w for complex numbers. It is useful to have a more formal definition of the complex numbers. For example, one frequently encounters treatments of the complex numbers that state that i is the number such that i² = −1, and we then operate with i using many of our usual rules for arithmetic. Unfortunately, if one is not careful this will lead to difficulties. Not all of the usual rules for algebra carry through in the way one might expect. For example, there is a flaw in a calculation such as −1 = i · i = √(−1) · √(−1) = √((−1)(−1)) = √1 = 1, but it is very difficult to point out the flaw without first being clear about what a complex number is, and what operations are allowed with complex numbers. Mathematically, the complex numbers are defined as ordered pairs, endowed with algebraic operations. A complex number z is an ordered pair of real numbers; that is, z = (x, y) where x and y are real numbers. The collection of all complex numbers is denoted by the symbol ℂ. The most immediate consequence of this definition is that we may think of a complex number as a point lying in the plane. Comparing this definition with the intuitive definition above, it is easy to see that the imaginary number i simply acts as a placeholder for denoting which number belongs in the second coordinate. We define the following two functions on the complex plane. Let z = (x, y) be a complex number. We define the real part as the function Re : ℂ → ℝ given by Re(z) = x. Similarly, we define the imaginary part as the function Im : ℂ → ℝ given by Im(z) = y. We say two complex numbers are equal if and only if they are equal as ordered pairs. That is, if z = (x, y) and w = (u, v), then z = w if and only if x = u and y = v. Put more succinctly, two complex numbers are equal iff their real parts and imaginary parts are equal. If complex numbers were simply ordered pairs there would not really be much to say about them. But the complex numbers are ordered pairs together with several algebraic operations, and it is these operations that make the complex numbers so interesting. Let z = (x, y) and w = (u, v); then we define addition as: - z + w = (x + u, y + v) and multiplication as: - z · w = (x · u − y · v, x · v + y · u) Of course, we can view any real number r as being a complex number. Using our intuitive model for the complex numbers it is clear that the real number r should correspond to the complex number (r, 0), and with this identification the above operations correspond exactly to the usual definitions of addition and multiplication of real numbers. For the remainder of the text we will freely refer to a real number r as being a complex number, where the above identification is understood.
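A quick sketch, not part of the text, that implements the ordered-pair definitions of addition and multiplication above in Python and compares them with the built-in complex type:

```python
# Complex numbers as ordered pairs, with the addition and multiplication rules defined
# above, checked against Python's built-in complex type.

def add(z, w):
    (x, y), (u, v) = z, w
    return (x + u, y + v)

def mul(z, w):
    (x, y), (u, v) = z, w
    return (x * u - y * v, x * v + y * u)

z, w = (3.0, 2.0), (1.0, -4.0)           # 3 + 2i and 1 - 4i
print(add(z, w), mul(z, w))              # (4.0, -2.0) and (11.0, -10.0)
print(complex(*z) + complex(*w), complex(*z) * complex(*w))   # (4-2j) and (11-10j)
print(mul((0.0, 1.0), (0.0, 1.0)))       # i * i = (-1.0, 0.0), i.e. -1
```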
The following facts about addition and multiplication follow easily from the corresponding operations for the real numbers. Their verification is left as an exercise to the reader. Let z, w and v be complex numbers, then:
|• z + (w + v) = (z + w) + v||(Associativity of addition);|
|• z · (w · v) = (z · w) · v||(Associativity of multiplication);|
|• z + w = w + z||(Commutativity of addition);|
|• z · w = w · z||(Commutativity of multiplication);|
|• z · (w + v) = z · w + z · v||(Distributive Property).|
One nice feature of complex addition and multiplication is that 0 and 1 play the same role in the complex numbers as they do in the real numbers. That is, 0 is the additive identity for the complex numbers (meaning z + 0 = 0 + z = z) and 1 is the multiplicative identity (meaning z · 1 = 1 · z = z). Of course it is natural at this point to ask about subtraction and division. But rather than stating the formulas for subtraction and division outright, we instead follow the usual course for other subjects in algebra and first discuss inverses. Let z = (x, y) be any complex number, then we define the additive inverse −z as:
- −z = (−x, −y)
Then it is immediate to verify that z + (−z) = 0. Now for any two complex numbers z and w we define z − w to be z + (−w). We now turn to doing the same for multiplication. Let z = (x, y) be any non-zero complex number, then we define the multiplicative inverse z⁻¹ as:
- z⁻¹ = (x / (x² + y²), −y / (x² + y²))
It is left to the reader to verify that z · z⁻¹ = 1. We may now of course define division as z / w = z · w⁻¹ (for w ≠ 0). Just as with the real numbers, division by zero remains undefined. In order for this last definition to make more sense it helps to introduce two more operations on the complex numbers. The first is the absolute value. Let z = (x, y) be any complex number, then we define the complex absolute value, denoted |z|, as:
- |z| = √(x² + y²)
Notice that |z| is always a real number and |z| ≥ 0 for any z. Of course with this definition of the absolute value, if z = (x, y) then |z| is exactly the same as the norm of the vector (x, y). Before introducing the second definition, notice that our intuitive definition simply required us to find a number whose square was −1. Of course i² = (−i)² = −1, so for a starting point one could have chosen −i as the most basic imaginary number. This idea motivates the following definition. Let z = (x, y) be any complex number, then we define the conjugate of z, denoted z̄, as:
- z̄ = (x, −y)
With this definition it is an easy exercise to check that z · z̄ = |z|², so dividing both sides by |z|² we arrive at z⁻¹ = z̄ / |z|². Compare this with the definition of the multiplicative inverse above. Recall that every point in the plane can be written using rectangular coordinates such as (x, y), where the numbers denote the signed distances from the y and x axes respectively. But the point could equally well be described using polar coordinates (r, θ), where the first number represents the distance from the origin, and the second is the angle that is made with the positive x axis when you connect the origin and the point with a line segment. Since complex numbers may be thought of simply as points in the plane, we can immediately derive a polar representation of a complex number. As usual we can let a point z = (x, y) = (r cos θ, r sin θ), where r = |z| = √(x² + y²). The choice of θ is not unique because sine and cosine are 2π-periodic. A value θ for which z = (r cos θ, r sin θ) is called an argument of z. If we restrict our choice of θ so that 0 ≤ θ < 2π, then the choice of θ is unique provided that z ≠ 0. This is often called the principal branch of the argument. As a shorthand, we may write cis θ = cos θ + i sin θ, so z = r cis θ.
This notation simplifies multiplication and taking powers, because cis α · cis β = cis(α + β) by elementary trigonometric identities. Applying this formula can therefore simplify many calculations with complex numbers. Using induction we can show that (r cis θ)ⁿ = rⁿ cis(nθ) holds for all positive integers n. Now that we have set up the basic concept of a complex number, we continue to topological properties of the complex plane.
- Determine in terms of and .
- Determine in terms of and .
- Show that the absolute value on the complex plane obeys the triangle inequality. That is, show that |z + w| ≤ |z| + |w|.
- Show that the absolute value on the complex plane obeys the reverse triangle inequality. That is, show that | |z| − |w| | ≤ |z − w|.
- Given a non-zero complex number determine and so that .
- Determine formulas for and in terms of and .
- Find distinct complex numbers , so that . Hint: Use the formula given above for (r cis θ)ⁿ and the periodicity of sine and cosine.
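A quick numerical check, not part of the text, of the polar-form identities used above (and in the hint to the last exercise), written with Python's cmath module:

```python
# Checking z = r*cis(theta) and (r cis theta)**n = r**n cis(n*theta) numerically.
import cmath, math

def cis(t):
    return complex(math.cos(t), math.sin(t))

z = complex(1.0, 1.0)                 # 1 + i
r, theta = abs(z), cmath.phase(z)     # r = sqrt(2), theta = pi/4
print(r * cis(theta))                 # ~ (1+1j): the polar form reproduces z

n = 8
lhs = z ** n                          # direct power
rhs = r ** n * cis(n * theta)         # de Moivre's formula
print(lhs, rhs)                       # both ~ 16+0j
```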
http://en.m.wikibooks.org/wiki/Complex_Analysis/Complex_Numbers/Introduction
X-ray crystallography is a method of determining the arrangement of atoms within a crystal, in which a beam of X-rays strikes a crystal and scatters into many different directions. From the angles and intensities of these scattered beams, a crystallographer can produce a three-dimensional picture of the density of electrons within the crystal. From this electron density, the mean positions of the atoms in the crystal can be determined, as well as their chemical bonds, their disorder and sundry other information. Since very many materials can form crystals — such as salts, metals, minerals, semiconductors, as well as various inorganic, organic and biological molecules — X-ray crystallography has been fundamental in the development of many scientific fields. In its first decades of use, this method determined the size of atoms, the lengths and types of chemical bonds, and the atomic-scale differences among various materials, especially minerals and alloys. The method also revealed the structure and functioning of many biological molecules, including vitamins, drugs, proteins and nucleic acids such as DNA. X-ray crystallography is still the chief method for characterizing the atomic structure of new materials and in discerning materials that appear similar by other experiments. X-ray crystal structures can also account for unusual electronic or elastic properties of a material, shed light on chemical interactions and processes, or serve as the basis for designing pharmaceuticals against diseases. After a crystal has been obtained or grown in the laboratory, it is mounted on a goniometer and gradually rotated while being bombarded with X-rays, producing a diffraction pattern of regularly spaced spots known as reflections. The two-dimensional images taken at different rotations are converted into a three-dimensional model of the density of electrons within the crystal using the mathematical method of Fourier transforms, combined with chemical data known for the sample. Poor resolution (fuzziness) or even errors may result if the crystals are too small, or not uniform enough in their internal makeup. X-ray crystallography is related to several other methods for determining atomic structures. Similar diffraction patterns can be produced by scattering electrons or neutrons, which are likewise interpreted as a Fourier transform. If single crystals of sufficient size cannot be obtained, various X-ray scattering methods can be applied to obtain less detailed information; such methods include fiber diffraction, powder diffraction and small-angle X-ray scattering (SAXS). In all these methods, the scattering is elastic; the scattered X-rays have the same wavelength as the incoming X-ray. By contrast, inelastic X-ray scattering methods are useful in studying excitations of the sample, rather than the distribution of its atoms. Crystals have long been admired for their regularity and symmetry, but they were not investigated scientifically until the 17th century. Johannes Kepler hypothesized in his work Strena seu de Nive Sexangula (1611) that the hexagonal symmetry of snowflake crystals was due to a regular packing of spherical water particles. Crystal symmetry was first investigated experimentally by Nicolas Steno (1669), who showed that the angles between the faces are the same in every exemplar of a particular type of crystal, and by René Just Haüy (1784), who discovered that every face of a crystal can be described by simple stacking patterns of blocks of the same shape and size. 
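The conversion from diffraction data to electron density mentioned above is a Fourier synthesis. The toy Python sketch below is mine, not from the article; it uses NumPy, works in one dimension, and sidesteps the phase problem by keeping the complex structure factors. It only illustrates the idea that the density and the structure factors form a Fourier-transform pair.

```python
# A toy 1-D illustration (not from the article): a periodic "electron density" built
# from Gaussian atoms, its structure factors F(h) via a discrete Fourier transform,
# and the density recovered by the inverse transform.  Real crystallography is 3-D
# and must also recover the lost phases, which this toy ignores.
import numpy as np

n_points = 256
x = np.linspace(0.0, 1.0, n_points, endpoint=False)   # fractional coordinate in the unit cell

def atom(center, width=0.02):
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

density = 8 * atom(0.25) + 16 * atom(0.7)              # two "atoms" of different weight

structure_factors = np.fft.fft(density)                # complex amplitudes F(h)
recovered = np.fft.ifft(structure_factors).real        # Fourier synthesis of the density

print(np.allclose(density, recovered))                 # True
print(np.argmax(recovered) / n_points)                 # ~0.7, the heavier atom's position
```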
Hence, William Hallowes Miller in 1839 was able to give each face a unique label of three small integers, the Miller indices which are still used today for identifying crystal faces. Haüy's study led to the correct idea that crystals are a regular three-dimensional array (a Bravais lattice) of atoms and molecules; a single unit cell is repeated indefinitely along three principal directions that are not necessarily perpendicular. In the 19th century, a complete catalog of the possible symmetries of a crystal was worked out by Johann Hessel, Auguste Bravais, Yevgraf Fyodorov, Arthur Schönflies and (belatedly) William Barlow. On the basis of the available data and physical reasoning, Barlow proposed several crystal structures in the 1880s that were validated later by X-ray crystallography; however, the available data were too few in the 1880s to accept his models as conclusive. X-rays were discovered by Wilhelm Conrad Röntgen in 1895, just as the studies of crystal symmetry were being concluded. Physicists were initially uncertain of the nature of X-rays, although it was soon suspected (correctly) that they were waves of electromagnetic radiation, in other words, another form of light. At that time, the wave model of light — specifically, the Maxwell theory of electromagnetic radiation — was well accepted among scientists, and experiments by Charles Glover Barkla showed that X-rays exhibited phenomena associated with electromagnetic waves, including transverse polarization and spectral lines akin to those observed in the visible wavelengths. Single-slit experiments in the laboratory of Arnold Sommerfeld suggested that the wavelength of X-rays was roughly 1 Ångström, one ten-millionth of a millimetre. However, X-rays are composed of photons, and thus are not only waves of electromagnetic radiation but also exhibit particle-like properties. The photon concept was introduced by Albert Einstein in 1905, but it was not broadly accepted until 1922, when Arthur Compton confirmed it by the scattering of X-rays from electrons. Therefore, these particle-like properties of X-rays, such as their ionization of gases, caused William Henry Bragg to argue in 1907 that X-rays were not electromagnetic radiation. Nevertheless, Bragg's view was not broadly accepted and the observation of X-ray diffraction in 1912 confirmed for most scientists that X-rays were a form of electromagnetic radiation. Crystals are regular arrays of atoms, and X-rays can be considered waves of electromagnetic radiation. Atoms scatter X-ray waves, primarily through the atoms' electrons. Just as an ocean wave striking a lighthouse produces secondary circular waves emanating from the lighthouse, so an X-ray striking an electron produces secondary spherical waves emanating from the electron. This phenomenon is known as scattering, and the electron (or lighthouse) is known as the scatterer. A regular array of scatterers produces a regular array of spherical waves. Although these waves cancel one another out in most directions (destructive interference), they add constructively in a few specific directions, determined by Bragg's law, 2d sin θ = nλ, where d is the spacing between the diffracting planes, θ is the angle of incidence, λ is the wavelength of the X-ray, and n is any integer. These specific directions appear as spots on the diffraction pattern, often called reflections. Thus, X-ray diffraction results from an electromagnetic wave (the X-ray) impinging on a regular array of scatterers (the repeating arrangement of atoms within the crystal). 
X-rays are used to produce the diffraction pattern because their wavelength λ is typically the same order of magnitude (1-100 Ångströms) as the spacing d between planes in the crystal. In principle, any wave impinging on a regular array of scatterers produces diffraction, as predicted first by Francesco Maria Grimaldi in 1665. To produce significant diffraction, the spacing between the scatterers and the wavelength of the impinging wave should be roughly similar in size. For illustration, the diffraction of sunlight through a bird's feather was first reported by James Gregory in the later 17th century. The first man-made diffraction gratings for visible light were constructed by David Rittenhouse in 1787, and Joseph von Fraunhofer in 1821. However, visible light has too long a wavelength (typically, 5500 Ångströms) to observe diffraction from crystals. However, prior to the first X-ray diffraction experiments, the spacings between lattice planes in a crystal were not known with certainty. The idea that crystals could be used as a diffraction grating for X-rays arose in 1912 in a conversation between Paul Peter Ewald and Max von Laue in the English Garden in Munich. Ewald had proposed a resonator model of crystals for his thesis, but this model could not be validated using visible light, since the wavelength was much larger than the spacing between the resonators. Von Laue realized that electromagnetic radiation of a shorter wavelength was needed to observe such small spacings, and suggested that X-rays might have a wavelength comparable to the unit-cell spacing in crystals. Von Laue worked with two technicians, Walter Friedrich and his assistant Paul Knipping, to shine a beam of X-rays through a sphalerite crystal and record its diffraction on a photographic plate. After being developed, the plate showed a large number of well-defined spots arranged in a pattern of intersecting circles around the spot produced by the central beam. Von Laue developed a law that connects the scattering angles and the size and orientation of the unit-cell spacings in the crystal, for which he was awarded the Nobel Prize in Physics in 1914. As described in the mathematical derivation below, the X-ray scattering is determined by the density of electrons within the crystal. Since the energy of an X-ray is much greater than that of an atomic electron, the scattering may be modeled as Thomson scattering, the interaction of an electromagnetic ray with a free electron. This model is generally adopted to describe the polarization of the scattered radiation. The intensity of Thomson scattering declines as 1/m² with the mass m of the charged particle that is scattering the radiation; hence, the atomic nuclei, which are thousands of times heavier than an electron, contribute negligibly to the scattered X-rays. After Von Laue's pioneering research, the field developed rapidly, most notably by physicists William Lawrence Bragg and his father William Henry Bragg. In 1912-1913, the younger Bragg developed Bragg's law, which connects the observed scattering with reflections from evenly spaced planes within the crystal. The earliest structures were generally simple and marked by one-dimensional symmetry. However, as computational and experimental methods improved over the next decades, it became feasible to deduce reliable atomic positions for more complicated two- and three-dimensional arrangements of atoms in the unit-cell. 
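Bragg's law makes the wavelength-spacing matching described above easy to quantify. The sketch below is only an illustration (the wavelength and plane spacings are typical values chosen for this example, not figures from the article): it solves 2d sin θ = nλ for the scattering angle, assuming a copper Kα laboratory source.

```python
import math

WAVELENGTH = 1.5406  # Cu K-alpha wavelength in Angstroms, a common laboratory source

def bragg_angle(d_spacing, wavelength=WAVELENGTH, order=1):
    """Return the Bragg angle theta (degrees) for plane spacing d (Angstroms),
    or None if no diffraction is possible (sin theta would exceed 1)."""
    sin_theta = order * wavelength / (2.0 * d_spacing)
    if sin_theta > 1.0:
        return None
    return math.degrees(math.asin(sin_theta))

# Illustrative interplanar spacings, in Angstroms.
for d in (0.7, 1.0, 2.0, 3.5, 10.0):
    theta = bragg_angle(d)
    if theta is None:
        print(f"d = {d:5.2f} A: no first-order reflection at this wavelength")
    else:
        print(f"d = {d:5.2f} A: first-order reflection at 2-theta = {2 * theta:6.2f} degrees")
```

The smallest spacings diffract to the largest angles, which is why, as noted below, the high-angle part of a diffraction pattern carries the high-resolution information.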
The potential of X-ray crystallography for determining the structure of molecules and minerals — then only known vaguely from chemical and hydrodynamic experiments — was realized immediately. The earliest structures were simple inorganic crystals and minerals, but even these revealed fundamental laws of physics and chemistry. The first atomic-resolution structure to be solved (in 1914) was that of table salt. (When an atomic structure is determined by X-ray crystallography, it is said to be "solved".) The distribution of electrons in the table-salt structure showed that crystals are not necessarily comprised of covalently bonded molecules, and proved the existence of ionic compounds. The structure of diamond was solved in the same year, proving the tetrahedral arrangement of its chemical bonds and showing that the C-C single bond was 1.52 Ångströms. Other early structures included copper, calcium fluoride (CaF2, also known as fluorite), calcite (CaCO3) and pyrite (FeS2) in 1914; spinel (MgAl2O4) in 1915; the rutile and anatase forms of titanium dioxide (TiO2) in 1916; pyrochroite and, by extension, brucite [Mn(OH)2 and Mg(OH)2, respectively] in 1919; and wurtzite (hexagonal ZnS) in 1920. The structure of graphite was solved in 1916 by the related method of powder diffraction, which was developed by Peter Debye and Paul Scherrer and, independently, by Albert Hull in 1917. The structure of graphite was determined from single-crystal diffraction in 1924 by two groups independently. Hull also used the powder method to determine the structures of various metals, such as iron and magnesium. X-ray crystallography has led to a better understanding of chemical bonds and non-covalent interactions. The initial studies revealed the typical radii of atoms, and confirmed many theoretical models of chemical bonding, such as the tetrahedral bonding of carbon in the diamond structure, the octahedral bonding of metals observed in ammonium hexachloroplatinate (IV), and the resonance observed in the planar carbonate group and in aromatic molecules. Kathleen Lonsdale's 1928 structure of hexamethylbenzene established the hexagonal symmetry of benzene and showed a clear difference in bond length between the aliphatic C-C bonds and aromatic C-C bonds; this finding led to the idea of resonance between chemical bonds, which had profound consequences for the development of chemistry. Her conclusions were anticipated by William Henry Bragg, who published models of naphthalene and anthracene in 1921 based on other molecules, an early form of molecular replacement. Also in the 1920s, Victor Moritz Goldschmidt and later Linus Pauling developed rules for eliminating chemically unlikely structures and for determining the relative sizes of atoms. These rules led to the structure of brookite (1928) and an understanding of the relative stability of the rutile, brookite and anatase forms of titanium oxide. The distance between two covalently bonded atoms is a sensitive measure of the bond strength and its bond order; thus, X-ray crystallographic studies have led to the discovery of even more exotic types of bonding in inorganic chemistry, such as metal-metal double bonds, metal-metal quadruple bonds, and three-center, two-electron bonds. X-ray crystallography — or, strictly speaking, an inelastic Compton scattering experiment — has also provided evidence for the partially covalent character of hydrogen bonds. 
In the field of organometallic chemistry, the X-ray structure of ferrocene initiated scientific studies of sandwich compounds, while that of Zeise's salt stimulated research into "back bonding" and metal-pi complexes in general. Finally, X-ray crystallography had a pioneering role in the development of supramolecular chemistry, particularly in clarifying the structures of the crown ethers and the principles of host-guest chemistry. In material sciences, many complicated inorganic and organometallic systems have been analyzed using single-crystal methods, such as fullerenes, metalloporphyrins, and other complicated compounds. Single-crystal diffraction is also used in the pharmaceutical industry, due to recent problems with polymorphs. The major factors affecting the quality of single-crystal structures are the crystal's size and regularity; recrystallization is a commonly used technique to improve these factors in small-molecule crystals. The Cambridge Structural Database contains over 400,000 structures; over 99% of these structures were determined by X-ray diffraction. Since the 1920s, X-ray diffraction has been the principal method for determining the arrangement of atoms in minerals and metals. The application of X-ray crystallography to mineralogy began with the structure of garnet, which was determined in 1924 by Menzer. A systematic X-ray crystallographic study of the silicates was undertaken in the 1920s. This study showed that, as the Si/O ratio is altered, the silicate crystals exhibit significant changes in their atomic arrangements. Machatschki extended these insights to minerals in which aluminium substitutes for the silicon atoms of the silicates. The first application of X-ray crystallography to metallurgy likewise occurred in the mid-1920s. Most notably, Linus Pauling's structure of the alloy Mg2Sn led to his theory of the stability and structure of complex ionic crystals. The first structure of an organic compound, hexamethylenetetramine, was solved in 1923. This was followed by several studies of long-chain fatty acids, which are an important component of biological membranes. In the 1930s, the structures of much larger molecules with two-dimensional complexity began to be solved. A significant advance was the structure of phthalocyanine, a large planar molecule that is closely related to porphyrin molecules important in biology, such as heme, corrin and chlorophyll. X-ray crystallography of biological molecules took off with Dorothy Crowfoot Hodgkin, who solved the structures of cholesterol (1937), vitamin B12 (1945) and penicillin (1954), for which she was awarded the Nobel Prize in Chemistry in 1964. In 1969, she succeeded in solving the structure of insulin, on which she worked for over thirty years. Crystal structures of proteins (which are irregular and hundreds of times larger than cholesterol) began to be solved in the late 1950s, beginning with the structure of sperm whale myoglobin by Max Perutz and Sir John Cowdery Kendrew, for which they were awarded the Nobel Prize in Chemistry in 1962. Since that success, over 39000 X-ray crystal structures of proteins, nucleic acids and other biological molecules have been determined. For comparison, the nearest competing method, nuclear magnetic resonance (NMR) spectroscopy has produced roughly 6000 structures. Moreover, crystallography can solve structures of arbitrarily large molecules, whereas solution-state NMR is restricted to relatively small molecules (less than 70 kDa). 
X-ray crystallography is now used routinely by scientists to determine how a pharmaceutical interacts with its protein target and what changes might be advisable to improve it. However, intrinsic membrane proteins remain challenging to crystallize because they require detergents or other means to solubilize them in isolation, and such detergents often interfere with crystallization. Such membrane proteins are a large component of the genome and include many proteins of great physiological importance, such as ion channels and receptors. X-rays range in wavelength from 10 to 0.01 nanometers; a typical wavelength used for crystallography is roughly 1 Å (0.1 nm), which is on the scale of covalent chemical bonds and the radius of a single atom. Longer-wavelength photons (such as ultraviolet radiation) would not have sufficient resolution to determine the atomic positions. At the other extreme, shorter-wavelength photons such as gamma rays are difficult to produce in large numbers, difficult to focus, and interact too strongly with matter, producing particle-antiparticle pairs. Therefore, X-rays are the "sweetspot" for wavelength when determining atomic-resolution structures from the scattering of electromagnetic radiation. All of these scattering methods generally use monochromatic X-rays, which are restricted to a single wavelength with minor deviations. A broad spectrum of X-rays (that is, a blend of X-rays with different wavelengths) can also be used to carry out X-ray diffraction, a technique known as the Laue method. This is the method used in the original discovery of X-ray diffraction. Laue scattering provides much structural information with only a short exposure to the X-ray beam, and is therefore used in structural studies of very rapid events (time-resolved X-ray crystallography). However, it is not as well-suited as monochromatic scattering for determining the full atomic structure of a crystal. It is better suited to crystals with relatively simple atomic arrangements, such as minerals. The Laue back reflection mode records X-rays scattered backwards also from a broad spectrum source. This is useful if the sample is too thick or bulky for X-rays to transmit through it. The diffracting planes in the crystal are determined by knowing that the normal to the diffracting plane bisects the angle between the incident beam and the diffracted beam. A Greninger chart can be used to interpret the back reflection Laue photograph. The X-calibre RTXDB and MWL 110 are commercial systems for Laue back reflection pattern recording. This technique can be used in materials analysis or nondestructive testing. Other particles, such as electrons and neutrons, may be used to produce a diffraction pattern. Although electron, neutron, and X-ray scattering use very different equipment, the resulting diffraction patterns are analyzed using the same coherent diffraction imaging techniques. As derived below, the electron density within the crystal and the diffraction patterns are related by a simple mathematical method, the Fourier transform, which allows the density to be calculated relatively easily from the patterns. However, this works only if the scattering is weak, i.e., if the scattered beams are much less intense than the incoming beam. Weakly scattered beams pass through the remainder of the crystal without undergoing a second scattering event. Such re-scattered waves are called "secondary scattering" and hinder the calculation of the density of scatterers. 
Any sufficiently thick crystal will produce secondary scattering but since X-rays interact relatively weakly with the electrons, this is generally not a significant concern. By contrast, electron beams may produce strong secondary scattering even for very small crystals (e.g., 100 μm) used in X-ray crystallography. In such cases, extremely thin samples, roughly 100 nanometers or less, must be used to avoid secondary scattering; the primary scattered electron beams leave the sample before they have a chance to undergo secondary scattering. Since this thickness corresponds roughly to the diameter of many viruses, a promising direction is the electron diffraction of isolated macromolecular assemblies, such as viral capsids and molecular machines, which may be carried out with a cryo-electron microscope. Neutron diffraction is an excellent method for structure determination, although it has been difficult to obtain intense, monochromatic beams of neutrons in sufficient quantities. Traditionally, nuclear reactors have been used, although the new Spallation Neutron Source holds much promise in the near future. Being uncharged, neutrons scatter much more readily from the atomic nuclei rather than from the electrons. Therefore, neutron scattering is very useful for observing the positions of light atoms with few electrons, especially hydrogen, which is essentially invisible in the X-ray diffraction of larger molecules. Neutron scattering also has the remarkable property that the solvent can be made invisible by adjusting the ratio of normal water, H2O, and heavy water, D2O. The oldest and most precise method of X-ray crystallography is single-crystal X-ray diffraction, in which a beam of X-rays strikes a single crystal, producing scattered beams. When they land on a piece of film or other detector, these beams make a diffraction pattern of spots; the strengths and angles of these beams are recorded as the crystal is gradually rotated. Each spot is called a reflection, since it corresponds to the reflection of the X-rays from one set of evenly spaced planes within the crystal. For single crystals of sufficient purity and regularity, X-ray diffraction data can determine the mean chemical bond lengths and angles to within a few thousandths of an Ångström and to within a few tenths of a degree, respectively. The atoms in a crystal are also not static, but oscillate about their mean positions, usually by less than a few tenths of an Ångström. X-ray crystallography allows the size of these oscillations to be measured quantitatively. The technique of single-crystal X-ray crystallography has three basic steps. The first — and often most difficult — step is to obtain an adequate crystal of the material under study. The crystal should be sufficiently large (typically larger than 100 micrometres in all dimensions), pure in composition and regular in structure, with no significant internal imperfections such as cracks or twinning. A small or irregular crystal will give fewer and less reliable data, from which it may be impossible to determine the atomic arrangement. In the second step, the crystal is placed in an intense beam of X-rays, usually of a single wavelength (monochromatic X-rays), producing the regular pattern of reflections. As the crystal is gradually rotated, previous reflections disappear and new ones appear; the intensity of every spot is recorded at every orientation of the crystal. 
Multiple data sets may have to be collected, with each set covering slightly more than half a full rotation of the crystal and typically containing tens of thousands of reflection intensities. In the third step, these data are combined computationally with complementary chemical information to produce and refine a model of the arrangement of atoms within the crystal. The final, refined model of the atomic arrangement — now called a crystal structure — is usually stored in a public database. As the crystal's repeating unit, its unit cell, becomes larger and more complex, the atomic-level picture provided by X-ray crystallography becomes less well-resolved (more "fuzzy") for a given number of observed reflections. Two limiting cases of X-ray crystallography—"small-molecule" and "macromolecular" crystallography—are often discerned. Small-molecule crystallography typically involves crystals with fewer than 100 atoms in their asymmetric unit; such crystal structures are usually so well resolved that the atoms can be discerned as isolated "blobs" of electron density. By contrast, macromolecular crystallography often involves tens of thousands of atoms in the unit cell. Such crystal structures are generally less well-resolved (more "smeared out"); the atoms and chemical bonds appear as tubes of electron density, rather than as isolated atoms. In general, small molecules are also easier to crystallize than macromolecules; however, X-ray crystallography has proven possible even for viruses with hundreds of thousands of atoms. Although crystallography can be used to characterize the disorder in an impure or irregular crystal, crystallography generally requires a pure crystal of high regularity to solve for the structure of a complicated arrangement of atoms. Pure, regular crystals can sometimes be obtained from natural or man-made materials, such as samples of metals, minerals or other macroscopic materials. The regularity of such crystals can sometimes be improved with annealing and other methods. However, in many cases, obtaining a diffraction-quality crystal is the chief barrier to solving its atomic-resolution structure. Small-molecule and macromolecular crystallography differ in the range of possible techniques used to produce diffraction-quality crystals. Small molecules generally have few degrees of conformational freedom, and may be crystallized by a wide range of methods, such as chemical vapor deposition and recrystallisation. By contrast, macromolecules generally have many degrees of freedom and their crystallization must be carried out to maintain a stable structure. For example, proteins and larger RNA molecules cannot be crystallized if their tertiary structure has been unfolded; therefore, the range of crystallization conditions is restricted to solution conditions in which such molecules remain folded. Protein crystals are almost always grown in solution. The most common approach is to lower the solubility of its component molecules very gradually; however, if this is done too quickly, the molecules will precipitate from solution, forming a useless dust or amorphous gel on the bottom of the container. Crystal growth in solution is characterized by two steps: nucleation of a microscopic crystallite (possibly having only 100 molecules), followed by growth of that crystallite, ideally to a diffraction-quality crystal. The solution conditions that favor the first step (nucleation) are not always the same conditions that favor the second step (its subsequent growth). 
The crystallographer's goal is to identify solution conditions that favor the development of a single, large crystal, since larger crystals offer improved resolution of the molecule. Consequently, the solution conditions should disfavor the first step (nucleation) but favor the second (growth), so that only one large crystal forms per droplet. If nucleation is favored too much, a shower of small crystallites will form in the droplet, rather than one large crystal; if favored too little, no crystal will form whatsoever. It is extremely difficult to predict good conditions for nucleation or growth of well-ordered crystals. In practice, favorable conditions are identified by screening; a very large batch of the molecules is prepared, and a wide variety of crystallization solutions are tested. Hundreds, even thousands, of solution conditions are generally tried before finding one that succeeds in crystallizing the molecules. The various conditions can use one or more physical mechanisms to lower the solubility of the molecule; for example, some may change the pH, some contain salts of the Hofmeister series or chemicals that lower the dielectric constant of the solution, and still others contain large polymers such as polyethylene glycol that drive the molecule out of solution by entropic effects. It is also common to try several temperatures for encouraging crystallization, or to gradually lower the temperature so that the solution becomes supersaturated. These methods require large amounts of the target molecule, as they use high concentrations of the molecule(s) to be crystallized. Due to the difficulty in obtaining such large quantities (milligrams) of crystallisation-grade protein, dispensing robots have been developed that are capable of accurately dispensing crystallisation trial drops on the order of 100 nanoliters in volume. This means that roughly 10-fold less protein is used per experiment when compared to crystallisation trials set up by hand (on the order of 1 microliter). Several factors are known to inhibit or mar crystallization. The growing crystals are generally held at a constant temperature and protected from shocks or vibrations that might disturb their crystallization. Impurities in the molecules or in the crystallization solutions are often inimical to crystallization. Conformational flexibility in the molecule also tends to make crystallization less likely, due to entropy. Ironically, molecules that tend to self-assemble into regular helices are often unwilling to assemble into crystals. Crystals can be marred by twinning, which can occur when a unit cell can pack equally favorably in multiple orientations; although recent advances in computational methods have begun to allow the structures of twinned crystals to be solved, it is still very difficult. Having failed to crystallize a target molecule, a crystallographer may try again with a slightly modified version of the molecule; even small changes in molecular properties can lead to large differences in crystallization behavior. Once they are full-grown, the crystals are mounted so that they may be held in the X-ray beam and rotated. There are several methods of mounting. Although crystals were once loaded into glass capillaries with the crystallization solution (the mother liquor), a more modern approach is to scoop the crystal up in a tiny loop, made of nylon or plastic and attached to a solid rod, that is then flash-frozen with liquid nitrogen. 
This freezing reduces the radiation damage of the X-rays, as well as the noise in the Bragg peaks due to thermal motion (the Debye-Waller effect). However, untreated crystals often crack if flash-frozen; therefore, they are generally pre-soaked in a cryoprotectant solution before freezing. Unfortunately, this pre-soak may itself cause the crystal to crack, ruining it for crystallography. Generally, successful cryo-conditions are identified by trial and error. The capillary or loop is mounted on a goniometer, which allows it to be positioned accurately within the X-ray beam and rotated. Since both the crystal and the beam are often very small, the crystal must be centered within the beam to within roughly 25 micrometres accuracy, which is aided by a camera focused on the crystal. The most common type of goniometer is the "kappa goniometer", which offers three angles of rotation: the ω angle, which rotates about an axis roughly perpendicular to the beam; the κ angle, about an axis at roughly 50° to the ω axis; and, finally, the φ angle about the loop/capillary axis. When the κ angle is zero, the ω and φ axes are aligned. The κ rotation allows for convenient mounting of the crystal, since the arm in which the crystal is mounted may be swung out towards the crystallographer. The oscillations carried out during data collection (mentioned below) involve the ω axis only. An older type of goniometer is the four-circle goniometer, and its relatives such as the six-circle goniometer. Smaller, weaker X-ray sources are often used in laboratories to check the quality of crystals before bringing them to a synchrotron and sometimes to solve a crystal structure. In such systems, electrons are boiled off of a cathode and accelerated through a strong electric potential of roughly 50 kV; having reached a high speed, the electrons collide with a metal plate, emitting bremsstrahlung and some strong spectral lines corresponding to the excitation of inner-shell electrons of the metal. The most common metal used is copper, which can be kept cool easily, due to its high thermal conductivity, and which produces strong Kα and Kβ lines. The Kβ line is sometimes suppressed with a thin layer (0.0005 in. thick) of nickel foil. The simplest and cheapest variety of sealed X-ray tube has a stationary anode (the Crookes tube) and produces circa 2 kW of X-ray radiation. The more expensive variety has a rotating-anode type source that produces circa 14 kW of X-ray radiation. X-rays are generally filtered to a single wavelength (made monochromatic) and collimated to a single direction before they are allowed to strike the crystal. The filtering not only simplifies the data analysis, but also removes radiation that degrades the crystal without contributing useful information. Collimation is done either with a collimator (basically, a long tube) or with a clever arrangement of gently curved mirrors. Mirror systems are preferred for small crystals (under 0.3 mm) or with large unit cells (over 150 Å). When a crystal is mounted and exposed to an intense beam of X-rays, it scatters the X-rays into a pattern of spots or reflections that can be observed on a screen behind the crystal. A similar pattern may be seen by shining a laser pointer at a compact disc. The relative intensities of these spots provide the information to determine the arrangement of molecules within the crystal in atomic detail. 
The intensities of these reflections may be recorded with photographic film, an area detector or with a charge-coupled device (CCD) image sensor. The peaks at small angles correspond to low-resolution data, whereas those at high angles represent high-resolution data; thus, an upper limit on the eventual resolution of the structure can be determined from the first few images. Some measures of diffraction quality can be determined at this point, such as the mosaicity of the crystal and its overall disorder, as observed in the peak widths. Some pathologies of the crystal that would render it unfit for solving the structure can also be diagnosed quickly at this point. One image of spots is insufficient to reconstruct the whole crystal; it represents only a small slice of the full Fourier transform. To collect all the necessary information, the crystal must be rotated step-by-step through 180°, with an image recorded at every step; actually, slightly more than 180° is required to cover reciprocal space, due to the curvature of the Ewald sphere. However, if the crystal has a higher symmetry, a smaller angle such as 90° or 45° may be recorded. The axis of the rotation should generally be changed at least once, to avoid developing a "blind spot" in reciprocal space close to the rotation axis. It is customary to rock the crystal slightly (by 0.5-2°) to catch a broader region of reciprocal space. Multiple data sets may be necessary for certain phasing methods. For example, MAD phasing requires that the scattering be recorded at at least three (and usually four, for redundancy) wavelengths of the incoming X-ray radiation. A single crystal may degrade too much during the collection of one data set, owing to radiation damage; in such cases, data sets on multiple crystals must be taken. In order to process the data, a crystallographer must first index the reflections within the multiple images recorded. This means identifying the dimensions of the unit cell and which image peak corresponds to which position in reciprocal space. A byproduct of indexing is to determine the symmetry of the crystal, i.e., its space group. Some space groups can be eliminated from the beginning, since they require symmetries known to be absent in the molecule itself. For example, reflection symmetries cannot be observed in chiral molecules; thus, only 65 of the 230 possible space groups are allowed for protein molecules, which are almost always chiral. Indexing is generally accomplished using an autoindexing routine. Having assigned symmetry, the data are then integrated. This converts the hundreds of images containing the thousands of reflections into a single file, consisting of (at the very least) records of the Miller index of each reflection and an intensity for each reflection; at this stage the file often also includes error estimates and measures of partiality (what part of a given reflection was recorded on that image). A full data set may consist of hundreds of separate images taken at different orientations of the crystal. The first step is to merge and scale these various images, that is, to identify which peaks appear in two or more images (merging) and to scale the relative images so that they have a consistent intensity scale. Optimizing the intensity scale is critical because the relative intensity of the peaks is the key information from which the structure is determined. 
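As one way of picturing the scaling step just described, here is a toy sketch. The reflection lists and intensities are invented, and real data-reduction programs use far more elaborate scaling models; this version simply derives a single least-squares scale factor from the reflections two images have in common and applies it to the second image.

```python
# Toy illustration of putting two diffraction images on a common intensity scale.
# Reflection intensities are keyed by Miller indices (h, k, l); the values are invented.
image_1 = {(1, 0, 0): 820.0, (1, 1, 0): 415.0, (2, 1, 0): 110.0, (0, 0, 2): 950.0}
image_2 = {(1, 0, 0): 400.0, (1, 1, 0): 205.0, (2, 1, 0):  57.0, (3, 1, 1): 330.0}

common = set(image_1) & set(image_2)

# Least-squares scale k minimizing sum over common reflections of (I1 - k*I2)^2:
#   k = sum(I1*I2) / sum(I2*I2)
numerator = sum(image_1[hkl] * image_2[hkl] for hkl in common)
denominator = sum(image_2[hkl] ** 2 for hkl in common)
k = numerator / denominator

scaled_image_2 = {hkl: k * intensity for hkl, intensity in image_2.items()}
print(f"scale factor k = {k:.3f}")
for hkl in sorted(common):
    print(hkl, image_1[hkl], round(scaled_image_2[hkl], 1))
```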
The repetitive technique of crystallographic data collection and the often high symmetry of crystalline materials cause the diffractometer to record many symmetry-equivalent reflections multiple times. This allows a merging or symmetry-related R-factor to be calculated based upon how similar the measured intensities of symmetry-equivalent reflections are, thus giving a score to assess the quality of the data. While several experimental phasing methods are used to solve the phase problem in protein crystallography, small-molecule crystallography generally yields data suitable for structure solution using direct methods (ab initio phasing). A similar quality criterion is Rfree, which is calculated from a subset (~10%) of reflections that were not included in the structure refinement. Both R factors depend on the resolution of the data. As a rule of thumb, Rfree should be approximately the resolution in Ångströms divided by 10; thus, a data-set with 2 Å resolution should yield a final Rfree of roughly 0.2. Chemical bonding features such as stereochemistry, hydrogen bonding and the distribution of bond lengths and angles are complementary measures of the model quality. Phase bias is a serious problem in such iterative model building. Omit maps are a common technique used to check for this. It may not be possible to observe every atom of the crystallized molecule; it must be remembered that the resulting electron density is an average of all the molecules within the crystal. In some cases, there is too much residual disorder in those atoms, and the resulting electron density for atoms existing in many conformations is smeared to such an extent that it is no longer detectable in the electron density map. Weakly scattering atoms such as hydrogen are routinely invisible. It is also possible for a single atom to appear multiple times in an electron density map, e.g., if a protein sidechain has multiple (<4) allowed conformations. In still other cases, the crystallographer may detect that the covalent structure deduced for the molecule was incorrect, or changed. For example, proteins may be cleaved or undergo post-translational modifications that were not detected prior to the crystallization. Once the model of a molecule's structure has been finalized, it is often deposited in a crystallographic database such as the Protein Data Bank (for protein structures) or the Cambridge Structural Database (for small molecules). Many structures obtained in private commercial ventures to crystallize medicinally relevant proteins are not deposited in public crystallographic databases. The goal of the analysis is the electron density f(r) throughout the crystal, which can be written as an inverse Fourier transform, f(r) = (1/(2π)^3) ∫ F(q) e^{i q·r} dq, where the integral is summed over all possible values of q. The three-dimensional real vector q represents a point in reciprocal space, that is, a particular oscillation in the electron density as one moves in the direction in which q points. The length of q corresponds to 2π divided by the wavelength of the oscillation. The corresponding formula for the Fourier transform, F(q) = ∫ f(r) e^{-i q·r} dr, will be used below, where the integral is summed over all possible values of the position vector r within the crystal. The Fourier transform F(q) is generally a complex number, and therefore has a magnitude |F(q)| and a phase φ(q) related by the equation F(q) = |F(q)| e^{iφ(q)}. The intensities of the reflections observed in X-ray diffraction give us the magnitudes |F(q)| but not the phases φ(q). 
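The loss of the phases can be demonstrated numerically. In the toy one-dimensional sketch below (not a real crystallographic calculation; the "atoms" and grid are invented for illustration), a model density and a translated copy of it give exactly the same set of intensities |F(q)|², even though their Fourier phases differ, which is why additional experiments are needed to recover the phases, as described next.

```python
import numpy as np

# A toy one-dimensional "electron density": two Gaussian atoms on a periodic grid.
n = 256
x = np.arange(n)

def gaussian(center, width=3.0):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

density = gaussian(60) + 0.5 * gaussian(150)
shifted = np.roll(density, 40)          # the same structure, merely translated

F_original = np.fft.fft(density)
F_shifted = np.fft.fft(shifted)

# The "measured" quantities are the intensities |F|^2: identical for both densities,
# even though the densities themselves (and the phases of F) differ.
print(np.allclose(np.abs(F_original) ** 2, np.abs(F_shifted) ** 2))   # True
print(np.allclose(np.angle(F_original), np.angle(F_shifted)))         # False: phases differ
```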
To obtain the phases, full sets of reflections are collected with known alterations to the scattering, either by modulating the wavelength past a certain absorption edge or by adding strongly scattering (i.e., electron-dense) metal atoms such as mercury. Combining the magnitudes and phases yields the full Fourier transform F(q), which may be inverted to obtain the electron density f(r). Crystals are often idealized as being perfectly periodic. In that ideal case, the atoms are positioned on a perfect lattice, the electron density is perfectly periodic, and the Fourier transform F(q) is zero except when q belongs to the reciprocal lattice (the so-called Bragg peaks). In reality, however, crystals are not perfectly periodic; atoms vibrate about their mean position, and there may be disorder of various types, such as mosaicity, dislocations, various point defects, and heterogeneity in the conformation of crystallized molecules. Therefore, the Bragg peaks have a finite width and there may be significant diffuse scattering, a continuum of scattered X-rays that fall between the Bragg peaks. An intuitive understanding of X-ray diffraction can be obtained from the Bragg model of diffraction. In this model, a given reflection is associated with a set of evenly spaced sheets running through the crystal, usually passing through the centers of the atoms of the crystal lattice. The orientation of a particular set of sheets is identified by its three Miller indices (h, k, l), and let their spacing be denoted by d. William Lawrence Bragg proposed a model in which the incoming X-rays are scattered specularly (mirror-like) from each plane; from that assumption, X-rays scattered from adjacent planes will combine constructively (constructive interference) when the angle θ between the plane and the X-ray results in a path-length difference that is an integer multiple n of the X-ray wavelength λ, that is, when 2d sin θ = nλ. A reflection is said to be indexed when its Miller indices (or, more correctly, its reciprocal lattice vector components) have been identified from the known wavelength and the scattering angle 2θ. Such indexing gives the unit-cell parameters, the lengths and angles of the unit-cell, as well as its space group. Since Bragg's law does not interpret the relative intensities of the reflections, however, it is generally inadequate to solve for the arrangement of atoms within the unit-cell; for that, a Fourier transform method must be carried out. The incoming X-ray beam has a polarization and should be represented as a vector wave; however, for simplicity, let it be represented here as a scalar wave. We also ignore the complication of the time dependence of the wave and just focus on the wave's spatial dependence. Plane waves can be represented by a wave vector kin, and so the strength of the incoming wave at time t=0 is given by A e^{i kin·r}. At position r within the sample, let there be a density of scatterers f(r); these scatterers should produce a scattered spherical wave of amplitude proportional to the local amplitude of the incoming wave times the number of scatterers in a small volume dV about r: amplitude of scattered wave = A e^{i kin·r} S f(r) dV, where S is the proportionality constant. Let's consider the fraction of scattered waves that leave with an outgoing wave-vector of kout and strike the screen at rscreen. Since no energy is lost (elastic, not inelastic scattering), the wavelengths are the same as are the magnitudes of the wave-vectors |kin| = |kout|. 
From the time that the photon is scattered at r until it is absorbed at rscreen, the photon undergoes a change in phase e^{i kout·(rscreen − r)}. The net radiation arriving at rscreen is the sum of all the scattered waves throughout the crystal, which may be written as a Fourier transform: the amplitude at the screen is proportional to ∫ f(r) e^{-i q·r} dr = F(q), where q = kout − kin. The measured intensity of the reflection will be the square of this amplitude, I(q) ∝ |F(q)|². For every reflection corresponding to a point q in the reciprocal space, there is another reflection of the same intensity at the opposite point −q. This opposite reflection is known as the Friedel mate of the original reflection. This symmetry results from the mathematical fact that the density of electrons f(r) at a position r is always a real number. As noted above, f(r) is the inverse transform of its Fourier transform F(q); however, such an inverse transform is a complex number in general. To ensure that f(r) is real, the Fourier transform F(q) must be such that the Friedel mates F(−q) and F(q) are complex conjugates of one another. Thus, F(−q) has the same magnitude as F(q), that is, |F(q)| = |F(−q)|, but they have the opposite phase, i.e., φ(−q) = −φ(q). The equality of their magnitudes ensures that the Friedel mates have the same intensity |F|². This symmetry allows one to measure the full Fourier transform from only half the reciprocal space, e.g., by rotating the crystal slightly more than 180°, instead of a full turn. In crystals with significant symmetry, even more reflections may have the same intensity (Bijvoet mates); in such cases, even less of the reciprocal space may need to be measured, e.g., slightly more than 90°. The Friedel-mate constraint can be derived from the definition of the inverse Fourier transform, f(r) = (1/(2π)^3) ∫ F(q) e^{i q·r} dq = (1/(2π)^3) ∫ |F(q)| e^{i[φ(q) + q·r]} dq. Since Euler's formula states that e^{ix} = cos(x) + i sin(x), the inverse Fourier transform can be separated into a sum of a purely real part and a purely imaginary part, f(r) = Icos + i Isin, where Icos = (1/(2π)^3) ∫ |F(q)| cos[φ(q) + q·r] dq and Isin = (1/(2π)^3) ∫ |F(q)| sin[φ(q) + q·r] dq. The function f(r) is real if and only if Isin vanishes for all r, which is exactly what the Friedel-mate constraint guarantees: substituting q → −q in the integral and using |F(−q)| = |F(q)| and φ(−q) = −φ(q) shows that Isin = −Isin, which implies that Isin = 0. The autocorrelation function of the electron density, c(r) = ∫ f(r') f(r' + r) dr', has a Fourier transform C(q) that is the squared magnitude of F(q), C(q) = |F(q)|². Therefore, the autocorrelation function c(r) of the electron density (also known as the Patterson function) can be computed directly from the reflection intensities, without computing the phases. In principle, this could be used to determine the crystal structure directly; however, it is difficult to realize in practice. The autocorrelation function corresponds to the distribution of vectors between atoms in the crystal; thus, a crystal of N atoms in its unit cell may have N(N-1) peaks in its Patterson function. Given the inevitable errors in measuring the intensities, and the mathematical difficulties of reconstructing atomic positions from the interatomic vectors, this technique is rarely used to solve structures, except for the simplest crystals. In principle, an atomic structure could be determined from applying X-ray scattering to non-crystalline samples, even to a single molecule. However, crystals offer a much stronger signal due to their periodicity. A crystalline sample is by definition periodic; a crystal is composed of many unit cells repeated indefinitely in three independent directions. Such periodic systems have a Fourier transform that is concentrated at periodically repeating points in reciprocal space known as Bragg peaks; the Bragg peaks correspond to the reflection spots observed in the diffraction image. Since the amplitude at these reflections grows linearly with the number N of scatterers, the observed intensity of these spots should grow quadratically, like N². 
In other words, using a crystal concentrates the weak scattering of the individual unit cells into a much more powerful, coherent reflection that can be observed above the noise. This is an example of constructive interference. In a non-crystalline sample, molecules within that sample would be in random orientations and therefore would have a continuous Fourier spectrum that spreads its amplitude more uniformly and with a much reduced intensity, as is observed in SAXS. More importantly, the orientational information is lost. In the crystal, the molecules adopt the same orientation within the crystal, whereas in a liquid, powder or amorphous state, the observed signal is averaged over the possible orientations of the molecules. Although theoretically possible with sufficiently low-noise data, it is generally difficult to obtain atomic-resolution structures of complicated, asymmetric molecules from such rotationally averaged scattering data. An intermediate case is fiber diffraction in which the subunits are arranged periodically in at least one dimension.
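The N² growth of Bragg-peak intensity described above can be checked with a few lines of code. This is a schematic one-dimensional illustration (the spacing, wavevectors and the off-Bragg direction are arbitrary choices, not values from the text): N unit-amplitude scattered waves add in phase at a Bragg condition and largely cancel away from it.

```python
import numpy as np

spacing = 4.0                     # distance between identical scatterers (arbitrary units)
q_bragg = 2.0 * np.pi / spacing   # first Bragg condition: q * spacing = 2*pi
q_off = 1.07 * q_bragg            # a direction slightly off the Bragg condition

def intensity(n_cells, q):
    """|sum_j exp(i q x_j)|^2 for n_cells unit scatterers at positions x_j = j * spacing."""
    positions = spacing * np.arange(n_cells)
    return abs(np.exp(1j * q * positions).sum()) ** 2

for n in (10, 100, 1000):
    print(f"N = {n:5d}:  on-Bragg intensity = {intensity(n, q_bragg):12.1f}"
          f"  (N^2 = {n * n}),  off-Bragg intensity = {intensity(n, q_off):8.1f}")
```

On the Bragg condition every term in the sum equals 1, so the amplitude is N and the intensity N²; off the condition the phasors wrap around and the intensity stays bounded, which is the coherent amplification the crystal provides.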
Evolution is the change in the inherited characteristics of biological populations over successive generations. Evolutionary processes give rise to diversity at every level of biological organisation, including species, individual organisms and molecules such as DNA and proteins. All life on earth is descended from a last universal ancestor that lived approximately 3.8 billion years ago. Repeated speciation and the divergence of life can be inferred from shared sets of biochemical and morphological traits, or by shared DNA sequences. These homologous traits and sequences are more similar among species that share a more recent common ancestor, and can be used to reconstruct evolutionary histories, using both existing species and the fossil record. Existing patterns of biodiversity have been shaped both by speciation and by extinction. Charles Darwin was the first to formulate a scientific argument for the theory of evolution by means of natural selection. Evolution by natural selection is a process that is inferred from three facts about populations: 1) more offspring are produced than can possibly survive, 2) traits vary among individuals, leading to different rates of survival and reproduction, and 3) trait differences are heritable. Thus, when members of a population die they are replaced by the progeny of parents that were better adapted to survive and reproduce in the environment in which natural selection took place. This process creates and preserves traits that are seemingly fitted for the functional roles they perform. Natural selection is the only known cause of adaptation, but not the only known cause of evolution. Other, nonadaptive causes of evolution include mutation and genetic drift. In the early 20th century, genetics was integrated with Darwin's theory of evolution by natural selection through the discipline of population genetics. The importance of natural selection as a cause of evolution was accepted into other branches of biology. Moreover, previously held notions about evolution, such as orthogenesis and "progress", became obsolete. Scientists continue to study various aspects of evolution by forming and testing hypotheses, constructing scientific theories, using observational data, and performing experiments in both the field and the laboratory. Biologists agree that descent with modification is one of the most reliably established facts in science. Discoveries in evolutionary biology have made a significant impact not just within the traditional branches of biology, but also in other academic disciplines (e.g., anthropology and psychology) and on society at large. The proposal that one type of animal could descend from an animal of another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles. In contrast to these materialistic views, Aristotle understood all natural things, not only living things, as being imperfect actualisations of different fixed natural possibilities, known as "forms", "ideas", or (in Latin translations) "species". This was part of his teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. The Roman poet and philosopher Titus Lucretius Carus proposed the possibility of evolutionary changes of organisms. 
Variations of this idea became the standard understanding of the Middle Ages, and were integrated into Christian learning, but Aristotle did not demand that real types of animals corresponded one-for-one with exact metaphysical forms, and specifically gave examples of how new types of living things could come to be. Leonardo da Vinci simply wrote, "Motion is the cause of all life". In the 17th century the new method of modern science rejected Aristotle's approach, and sought explanations of natural phenomena in terms of laws of nature which were the same for all visible things, and did not need to assume any fixed natural categories, nor any divine cosmic order. But this new approach was slow to take root in the biological sciences, which became the last bastion of the concept of fixed natural types. John Ray used one of the previously more general terms for fixed natural types, "species", to apply to animal and plant types, but unlike Aristotle he strictly identified each type of living thing as a species, and proposed that each species can be defined by the features that perpetuate themselves each generation. These species were designed by God, but showed differences caused by local conditions. The biological classification introduced by Carolus Linnaeus in 1735 also viewed species as fixed according to a divine plan. Other naturalists of this time speculated on evolutionary change of species over time according to natural laws. Maupertuis wrote in 1751 of natural modifications occurring during reproduction and accumulating over many generations to produce new species. Buffon suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single micro-organism (or "filament"). The first full-fledged evolutionary scheme was Lamarck's "transmutation" theory of 1809, which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and held that on a local level these lineages adapted to the environment by inheriting changes caused by use or disuse in parents. (The latter process was later called Lamarckism.) These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into a natural theology which proposed complex adaptations as evidence of divine design, and was admired by Charles Darwin. The critical break from the concept of fixed species in biology began with the theory of evolution by natural selection, which was formulated by Charles Darwin. Partly influenced by An Essay on the Principle of Population by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" where favorable variations could prevail as others perished. Each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of animals and plants from a common ancestry through the working of natural laws that applied in the same way to all kinds of organisms. Darwin was developing his theory of "natural selection" from 1838 onwards until Alfred Russel Wallace sent him a similar theory in 1858. Both men presented their separate papers to the Linnean Society of London. 
At the end of 1859, Darwin's publication of On the Origin of Species explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwinian evolution. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry. Some were disturbed by this since it implied that humans did not have a special place in the universe. Precise mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis. In 1865 Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory. August Weismann made the important distinction between germ cells (sperm and eggs) and somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and when expressed they could move into the cytoplasm to change the cells structure. De Vries was also one of the researchers who made Mendel's work well-known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline. To explain how new variants originate, De Vries developed a mutation theory that led to a temporary rift between those who accepted Darwinian evolution and biometricians who allied with de Vries. At the turn of the 20th century, pioneers in the field of population genetics, such as J.B.S. Haldane, Sewall Wright, and Ronald Fisher, set the foundations of evolution onto a robust statistical philosophy. The false contradiction between Darwin's theory, genetic mutations, and Mendelian inheritance was thus reconciled. In the 1920s and 1930s a modern evolutionary synthesis connected natural selection, mutation theory, and Mendelian inheritance into a unified theory that applied generally to any branch of biology. The modern synthesis was able to explain patterns observed across species in populations, through fossil transitions in palaeontology, and even complex cellular mechanisms in developmental biology. The publication of the structure of DNA by James Watson and Francis Crick in 1953 demonstrated a physical basis for inheritance. Molecular biology improved our understanding of the relationship between genotype and phenotype. Advancements were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees. In 1973, evolutionary biologist Theodosius Dobzhansky penned that "nothing in biology makes sense except in the light of evolution", because it has brought to light the relations of what first seemed disjointed facts in natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet. Since then, the modern synthesis has been further extended to explain biological phenomena across the full and integrative scale of the biological hierarchy, from genes to species. This extension has been dubbed "eco-evo-devo". Evolution in organisms occurs through changes in heritable traits – particular characteristics of an organism. 
In humans, for example, eye colour is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents. Inherited traits are controlled by genes and the complete set of genes within an organism's genome is called its genotype. The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. These traits come from the interaction of its genotype with the environment. As a result, many aspects of an organism's phenotype are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. However, some people tan more easily than others, due to differences in their genotype; a striking example is people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn. Heritable traits are passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long polymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, the long strands of DNA form condensed structures called chromosomes. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are more complex and are controlled by multiple interacting genes. Recent findings have confirmed important examples of heritable changes that cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics of developmental plasticity and canalization. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Descendants inherit genes plus environmental characteristics generated by the ecological actions of ancestors. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis. An individual organism's phenotype results from both its genotype and the influence from the environment it has lived in. 
A substantial part of the variation in phenotypes in a population is caused by the differences between their genotypes. The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. A particular allele will become more or less prevalent relative to other forms of that gene. Variation disappears when a new allele reaches the point of fixation — when it either disappears from the population or replaces the ancestral allele entirely. Natural selection will only cause evolution if there is enough genetic variation in a population. Before the discovery of Mendelian genetics, one common hypothesis was blending inheritance. But with blending inheritance, genetic variance would be rapidly lost, making evolution by natural selection implausible. The Hardy-Weinberg principle provides the solution to how variation is maintained in a population with Mendelian inheritance. The frequencies of alleles (variations in a gene) will remain constant in the absence of selection, mutation, migration and genetic drift. Variation comes from mutations in genetic material, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is identical in all individuals of that species. However, even relatively small differences in genotype can lead to dramatic differences in phenotype: for example, chimpanzees and humans differ in only about 5% of their genomes.

Mutations are changes in the DNA sequence of a cell's genome. When mutations occur, they can either have no effect, alter the product of a gene, or prevent the gene from functioning. Based on studies in the fly Drosophila melanogaster, it has been suggested that if a mutation changes a protein produced by a gene, this will probably be harmful, with about 70% of these mutations having damaging effects, and the remainder being either neutral or weakly beneficial. Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome. Extra copies of genes are a major source of the raw material needed for new genes to evolve. This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors. For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene. New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because it increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function. Other types of mutations can even generate entirely new genes from previously noncoding DNA. The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions. When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions.
For example, polyketide synthases are large enzymes that make antibiotics; they contain up to one hundred independent domains that each catalyze one step in the overall process, like a step in an assembly line.

In asexual organisms, genes are inherited together, or linked, as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes. Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles. Sex usually increases genetic variation and may increase the rate of evolution.

Gene flow is the exchange of genes between populations and between species. It can therefore be a source of variation that is new to a population or to a species. Gene flow can be caused by the movement of individuals between separate populations of organisms, as might be caused by the movement of mice between inland and coastal populations, or the movement of pollen between heavy metal tolerant and heavy metal sensitive populations of grasses. Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance, as when one bacterium acquires resistance genes it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean beetle Callosobruchus chinensis has occurred. An example of larger-scale transfer is the eukaryotic bdelloid rotifers, which have received a range of genes from bacteria, fungi and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains. Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea.

From a Neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms. For example, the allele for black colour in a population of moths might become more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, genetic hitchhiking, mutation and gene flow.

Evolution by means of natural selection is the process by which genetic mutations that enhance reproduction become and remain more common in successive generations of a population. It has often been called a "self-evident" mechanism because it necessarily follows from three simple facts: more offspring are produced than can possibly survive, traits vary among individuals (leading to different rates of survival and reproduction), and these trait differences are heritable. These conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors pass these advantageous traits on, while traits that do not confer an advantage are not passed on to the next generation. The central concept of natural selection is the evolutionary fitness of an organism.
Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation. However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes. For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness. If an allele increases fitness more than the other alleles of that gene, then with each generation this allele will become more common within the population. These traits are said to be "selected for". Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in this allele becoming rarer — it is "selected against". Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful. However, even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form (see Dollo's law).

Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time — for example, organisms slowly getting taller. Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilizing selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity. This would, for example, cause organisms to slowly become all the same height.

A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates. Traits that evolved through sexual selection are particularly prominent in males of some animal species, despite traits such as cumbersome antlers, mating calls or bright colours that attract predators, decreasing the survival of individual males. This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits.

Natural selection most generally makes nature the measure against which individuals and individual traits are more or less likely to survive. "Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity and material cycles (i.e., exchange of materials between living and nonliving parts) within the system."
Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection. Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species. Selection can act at multiple levels simultaneously. An example of selection occurring below the level of the individual organism is provided by genes called transposons, which can replicate and spread throughout a genome. Selection at a level above the individual, such as group selection, may allow the evolution of co-operation, as discussed below.

In addition to being a major source of variation, mutation may also function as a mechanism of evolution when there are different probabilities at the molecular level for different mutations to occur, a process known as mutation bias. If two genotypes, for example one with the nucleotide G and another with the nucleotide A in the same position, have the same fitness, but mutation from G to A happens more often than mutation from A to G, then genotypes with A will tend to evolve. Different insertion vs. deletion mutation biases in different taxa can lead to the evolution of different genome sizes. Developmental or mutational biases have also been observed in morphological evolution. For example, according to the phenotype-first theory of evolution, mutations can eventually cause the genetic assimilation of traits that were previously induced by the environment. Mutation bias effects are superimposed on other processes. If selection favors either one of two mutations, but there is no extra advantage to having both, then the mutation that occurs the most frequently is the one that is most likely to become fixed in a population. Mutations leading to the loss of function of a gene are much more common than mutations that produce a new, fully functional gene. Most loss of function mutations are selected against. But when selection is weak, mutation bias towards loss of function can affect evolution. For example, pigments are no longer useful when animals live in the darkness of caves, and tend to be lost. This kind of loss of function can occur because of mutation bias, and/or because the function had a cost, and once the benefit of the function disappeared, natural selection led to its loss. Loss of sporulation ability in a bacterium during laboratory evolution appears to have been caused by mutation bias, rather than natural selection against the cost of maintaining sporulation ability. When there is no selection for loss of function, the speed at which loss evolves depends more on the mutation rate than it does on the effective population size, indicating that it is driven more by mutation bias than by genetic drift.

Genetic drift is the change in allele frequency from one generation to the next that occurs because alleles are subject to sampling error. As a result, when selective forces are absent or relatively weak, allele frequencies tend to "drift" upward or downward randomly (in a random walk). This drift halts when an allele eventually becomes fixed, either by disappearing from the population, or replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone.
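The role of sampling error can be made concrete with a small simulation. The following is a minimal Wright-Fisher-style sketch in Python (an added illustration, not part of the original article); the population sizes, starting frequency and generation limit are arbitrary choices.

  # Minimal Wright-Fisher-style drift sketch: allele frequency changing by
  # sampling error alone, with no selection, mutation or migration.
  import random

  def drift(pop_size, start_freq, max_generations, seed=1):
      random.seed(seed)
      freq, history = start_freq, [start_freq]
      for _ in range(max_generations):
          # Each of the 2N gene copies in the next generation is drawn at
          # random from the current allele pool (binomial sampling).
          copies = sum(random.random() < freq for _ in range(2 * pop_size))
          freq = copies / (2 * pop_size)
          history.append(freq)
          if freq in (0.0, 1.0):        # allele lost or fixed: drift stops
              break
      return history

  for n in (20, 200, 2000):
      run = drift(n, 0.5, 10_000)
      print("N=%-5d final frequency %.2f after %d generations" % (n, run[-1], len(run) - 1))

Typical runs show the allele wandering randomly and reaching loss or fixation far sooner in the small populations than in the large ones, which is the behaviour described below.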
Even in the absence of selective forces, genetic drift can cause two separate populations that began with the same genetic structure to drift apart into two divergent populations with different sets of alleles. It is usually difficult to measure the relative importance of selection and neutral processes, including drift. The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research. The neutral theory of molecular evolution proposed that most evolutionary changes are the result of the fixation of neutral mutations by genetic drift. Hence, in this model, most genetic changes in a population are the result of constant mutation pressure and genetic drift. This form of the neutral theory is now largely abandoned, since it does not seem to fit the genetic variation seen in nature. However, a more recent and better-supported version of this model is the nearly neutral theory, where a mutation that would be neutral in a small population is not necessarily neutral in a large population. Other alternative theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft. The time for a neutral allele to become fixed by genetic drift depends on population size, with fixation occurring more rapidly in smaller populations. What matters is not the total number of individuals in a population, but a measure known as the effective population size. The effective population size is usually smaller than the total population since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is the smallest. The effective population size may not be the same for every gene in the same population.

Recombination allows alleles on the same strand of DNA to become separated. However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other and genes that are close together tend to be inherited together, a phenomenon known as linkage. This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium. A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft. Genetic draft caused by the fact that some neutral genes are genetically linked to others that are under selection can be partially captured by an appropriate effective population size.

Gene flow is the exchange of genes between populations and between species. The presence or absence of gene flow fundamentally changes the course of evolution. Due to the complexity of organisms, any two completely isolated populations will eventually evolve genetic incompatibilities through neutral processes, as in the Bateson-Dobzhansky-Muller model, even if both populations remain essentially identical in terms of their adaptation to the environment.
If genetic differentiation between populations develops, gene flow between populations can introduce traits or alleles which are disadvantageous in the local population, and this may lead organisms within these populations to evolve mechanisms that prevent mating with genetically distant populations, eventually resulting in the appearance of new species. Thus, exchange of genetic information between individuals is fundamentally important for the development of the biological species concept (BSC). During the development of the modern synthesis, Sewall Wright developed his shifting balance theory, holding that gene flow between partially isolated populations was an important aspect of adaptive evolution. However, recently there has been substantial criticism of the importance of the shifting balance theory.

Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by co-operating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed. These outcomes of evolution are sometimes divided into macroevolution, which is evolution that occurs at or above the level of species, such as extinction and speciation, and microevolution, which is smaller evolutionary changes, such as adaptations, within a species or population. In general, macroevolution is regarded as the outcome of long periods of microevolution. Thus, the distinction between micro- and macroevolution is not a fundamental one – the difference is simply the time involved. However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels – with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction.

A common misconception is that evolution has goals or long-term plans; in reality, however, evolution has no long-term goal and does not necessarily produce greater complexity. Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing, and simple forms of life still remain more common in the biosphere. For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size, and constitute the vast majority of Earth's biodiversity. Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable.
Indeed, the evolution of microorganisms is particularly important to modern evolutionary research, since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time.

Adaptation is the process that makes organisms better suited to their habitat. The term adaptation may also refer to a trait that is important for an organism's survival, such as the adaptation of horses' teeth to the grinding of grass. By using the term adaptation for the evolutionary process and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection. The following definitions are due to Theodosius Dobzhansky: adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats; adaptedness is the state of being adapted, the degree to which an organism is able to live and reproduce in a given set of habitats; and an adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing. Adaptation may cause either the gain of a new feature, or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance by modifying the target of the drug or by increasing the activity of transporters that pump the drug out of the cell. Other striking examples are the bacteria Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment, Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing, and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol. An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability).

Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mouse feet and primate hands, due to the descent of all these structures from a common mammalian ancestor. However, since all living organisms are related to some extent, even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology.

During evolution, some structures may lose their original function and become vestigial structures. Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes, the non-functional remains of eyes in blind cave-dwelling fish, wings in flightless birds, and the presence of hip bones in whales and snakes. Examples of vestigial structures in humans include wisdom teeth, the coccyx, the vermiform appendix, and other behavioural vestiges such as goose bumps and primitive reflexes.

However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function, but which coincidentally became somewhat useful for some other function in the process. One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives.
However, in this species, the head has become so flattened that it assists in gliding from tree to tree—an exaptation. Within cells, molecular machines such as the bacterial flagella and protein sorting machinery evolved by the recruitment of several pre-existing proteins that previously had different functions. Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes. A critical principle of ecology is that of competitive exclusion: no two species can occupy the same niche in the same environment for a long time. Consequently, natural selection will tend to force species to adapt to different ecological niches. This may mean that, for example, two species of cichlid fish adapt to live in different habitats, which will minimise the competition between them for food. An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations. This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features. These studies have shown that evolution can alter development to produce new structures, such as embryonic bone structures that develop into the jaw in other animals instead forming part of the middle ear in mammals. It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles. It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes. Interactions between organisms can produce both conflict and co-operation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called co-evolution. An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake. Not all co-evolved interactions between species involve conflict. Many cases of mutually beneficial interactions have evolved. For instance, an extreme cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil. This is a reciprocal relationship as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system. Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. 
On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer. Such cooperation within species may have evolved through the process of kin selection, which is where one organism acts to help raise a relative's offspring. This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on. Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms.

There are multiple ways to define the concept of "species". The choice of definition is dependent on the particularities of the species concerned. For example, some species concepts apply more readily toward sexually reproducing organisms while others lend themselves better toward asexual organisms. Despite the diversity of species concepts, these concepts can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic. The biological species concept (BSC) is a classic example of the interbreeding approach. Defined by Ernst Mayr in 1942, the BSC states that "species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups". Despite its wide and long-term use, the BSC, like others, is not without controversy, for example because these concepts cannot be applied to prokaryotes; this is called the species problem. Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species.

Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading the new genetic variants also to the other populations. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example.

Speciation has been observed multiple times under both controlled laboratory conditions and in nature. In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four mechanisms for speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration.
Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms. As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed. The second mechanism of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect causes rapid speciation after an increase in inbreeding increases selection on homozygotes, leading to rapid genetic change. The third mechanism of speciation is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations. Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localised metal pollution from mines. Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance. Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population. Generally, sympatric speciation in animals requires the evolution of both genetic differences and non-random mating, to allow reproductive isolation to evolve. One type of sympatric speciation involves cross-breeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. However, it is more common in plants because plants often double their number of chromosomes, to form polyploids. This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already. An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa cross-bred to give the new species Arabidopsis suecica. This happened about 20,000 years ago, and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process. Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms. 
Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged. In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population, while the organisms undergoing speciation and rapid evolution are found in small populations or geographically restricted habitats and are therefore rarely preserved as fossils.

Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction. Nearly all animal and plant species that have lived on Earth are now extinct, and extinction appears to be the ultimate fate of all species. These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events. The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs went extinct, is the most well-known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of species driven to extinction. The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1000 times greater than the background rate, and up to 30% of current species may be extinct by the mid-21st century. Human activities are now the primary cause of the ongoing extinction event; global warming may further accelerate it in the future.

The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered. The continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (competitive exclusion). If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction. The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors.

Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed. The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions. The beginning of life may have included self-replicating molecules such as RNA and the assembly of simple cells. All organisms on Earth are descended from a common ancestor or ancestral gene pool. Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events. The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities.
Third, vestigial traits with no clear purpose resemble functional ancestral traits and finally, that organisms can be classified using these similarities into a hierarchy of nested groups – similar to a family tree. However, modern research has suggested that, due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree since some genes have spread independently between distantly related species.

Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record. By comparing the anatomies of both modern and extinct species, paleontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry.

More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids. The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations. For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes and analyzing the few areas where they differ helps shed light on when the common ancestor of these species existed.

Prokaryotes have inhabited the Earth from approximately 3–4 billion years ago. No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years. Eukaryotic cells emerged between 1.6 and 2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis. The engulfed bacteria and the host cell then underwent co-evolution, with the bacteria evolving into either mitochondria or hydrogenosomes. Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants.

The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until about 610 million years ago when multicellular organisms began to appear in the oceans in the Ediacaran period. The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria. Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct. Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis.

About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals. Insects were particularly successful and even today make up the majority of animal species.
Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from "reptile"-like lineages), mammals around 129 million years ago, homininae around 10 million years ago and modern humans around 250,000 years ago. However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes.

Concepts and models used in evolutionary biology, such as natural selection, have many applications. Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals. More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. In repeated rounds of mutation and selection, proteins with valuable properties have evolved, for example modified enzymes and new antibodies, in a process called directed evolution. Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders. For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves. This helped identify genes required for vision and pigmentation.

In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with the simulation of artificial selection. Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s. He used evolution strategies to solve complex engineering problems. Genetic algorithms in particular became popular through the writing of John Holland. Practical applications also include automatic evolution of computer programs. Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems.

In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists. However, evolution remains a contentious concept for some theists. While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution. As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that humans share common ancestry with apes and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals.
In some countries, notably the United States, these tensions between science and religion have fuelled the current creation-evolution controversy, a religious conflict focusing on politics and public education. While other scientific fields such as cosmology and Earth science also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists. The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design, to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case.
4.1.1 Dimensional and Temporal Scale Factors

In Section 2 the properties of fission chain reactions were described using two simplified mathematical models: the discrete step chain reaction, and the more accurate continuous chain reaction model. A more detailed discussion of fission weapon design is aided by introducing more carefully defined means of quantifying the dimensions and time scales involved in fission explosions. These scale factors make it easier to analyze time-dependent neutron multiplication in systems of varying composition and geometry.

These scale factors are based on an elaboration of the continuous chain reaction model. It uses the concept of the "average neutron collision", which combines the scattering, fission, and absorption cross sections with the total number of neutrons emitted per fission to create a single figure of merit that can be used for comparing different assemblies. The basic idea is this: when a neutron interacts with an atom, we can think of the interaction as consisting of two steps: first the neutron undergoes a collision with the atom, and then some number of neutrons (zero, one, or several) emerges from the collision. If the interaction is ordinary neutron capture, then no neutron is emitted from the collision. If the interaction is a scattering event, then one neutron is emitted. If the interaction is a fission event, then the average number of neutrons produced per fission is emitted (this average number is often designated by nu). By combining these we get the average number of neutrons produced per collision (also called the number of secondaries), designated by c:

Eq. 4.1.1-1
  c = (cross_scatter + cross_fission*avg_n_per_fission)/cross_total

The total cross section, cross_total, is equal to:

Eq. 4.1.1-2
  cross_total = cross_scatter + cross_fission + cross_absorb

The total neutron mean free path, the average distance a neutron will travel before undergoing a collision, is given by:

Eq. 4.1.1-3
  MFP = 1/(cross_total * N)

where N is the number of atoms per unit volume, determined by the density.

In computing the effective reactivity of a system we must also take into account the rate at which neutrons are lost by escape from the system. This rate is measured by the number of neutrons lost per collision. For a given geometry, the rate is determined by the size of the system in MFPs. Put another way, for a given geometry and degree of reactivity, the size of the system, as measured in MFPs, is determined only by the parameter c. The higher the value of c, the smaller the assembly can be. An indication of the effect of c on the size of a critical assembly can be gained from the following table of critical radii (in MFPs) for bare (unreflected) spheres:

Table 4.1.1-1. Critical Radius (r_c) vs Number of Secondaries (c)
  c value    r_c (critical radius in MFP)
  1.0        infinite
  1.02       12.027
  1.05        7.277
  1.10        4.873
  1.20        3.172
  1.40        1.985
  1.60        1.476

If the composition, geometry, and reactivity of a system are specified then the size of the system in MFPs is fixed. From Eq. 4.1.1-3 we can see that the physical size or scale of the system (measured in centimeters, say) is inversely proportional to its density. Since the mass of the system is equal to volume*density, and volume varies with the cube of the radius, we can immediately derive the following scaling law:

Eq. 4.1.1-4
  mcrit_c = mcrit_0/(rho/rho_0)^2 = mcrit_0/C^2

That is, the critical mass of a system is inversely proportional to the square of the density. C is the degree of compression (density ratio).
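As a numerical illustration of Eqs. 4.1.1-1 through 4.1.1-4, the short Python sketch below computes c, the mean free path, and the effect of compression on the critical mass. The individual cross sections and nu used here are placeholder assumptions (chosen only so the results land near the Pu-239 row of Table 4.1.2-1 given later); they are not evaluated nuclear data.

  # Sketch of Eqs. 4.1.1-1 to 4.1.1-4. Cross sections (barns) and nu are
  # illustrative placeholders, not evaluated nuclear data.
  AVOGADRO = 6.022e23

  def secondaries_per_collision(sig_scatter, sig_fission, sig_absorb, nu):
      """Eq. 4.1.1-1/2: average neutrons produced per collision, c."""
      sig_total = sig_scatter + sig_fission + sig_absorb
      return (sig_scatter + sig_fission * nu) / sig_total, sig_total

  def mean_free_path(sig_total_barns, density_g_cm3, atomic_weight):
      """Eq. 4.1.1-3: MFP = 1/(cross_total * N), with N computed from the density."""
      n_atoms_per_cm3 = density_g_cm3 * AVOGADRO / atomic_weight
      return 1.0 / (sig_total_barns * 1e-24 * n_atoms_per_cm3)

  def compressed_critical_mass(mcrit_normal, compression):
      """Eq. 4.1.1-4: critical mass falls as 1/C^2 under uniform compression."""
      return mcrit_normal / compression**2

  # Placeholder split of a 7.9 barn total cross section at ~1 MeV:
  c, sig_t = secondaries_per_collision(5.66, 1.8, 0.44, 3.0)
  print("c   = %.2f" % c)                                      # ~1.40
  print("MFP = %.2f cm" % mean_free_path(sig_t, 19.8, 239))    # ~2.54 cm
  print("10.5 kg at C=1 -> %.1f kg at C=2" % compressed_critical_mass(10.5, 2.0))

With these assumed inputs the sketch reproduces roughly the c and MFP quoted for Pu-239 further on, and shows the quoted bare critical mass falling to about a quarter of its normal-density value at a compression of 2.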
This scaling law applies to bare cores; it also applies to cores with a surrounding reflector, if the reflector density has an identical degree of compression. This is usually not the case in real weapon designs, a higher degree of compression generally being achieved in the core than in the reflector. An approximate relationship for this is:

Eq. 4.1.1-5
  mcrit_c = mcrit_0/(C_c^1.2 * C_r^0.8)

where C_c is the compression of the core, and C_r is the compression of the reflector. Note that when C_c = C_r, this is identical to Eq. 4.1.1-4. For most implosion weapon designs (since C_c > C_r) we can use the approximate relationship:

Eq. 4.1.1-6
  mcrit_c = mcrit_0/C_c^1.7

These same considerations are also valid for any other specified degree of reactivity, not just critical cores.

Fission explosives depend on a very rapid release of energy. We are thus very interested in measuring the rate of the fission reaction. This is done using a quantity called the effective multiplication rate or "alpha". The neutron population at time t is given by:

Eq. 4.1.1-7
  N_t = N_0*e^(alpha*t)

Alpha thus has units of 1/t, and the neutron population will increase by a factor of e (2.71...) in a time interval equal to 1/alpha. This interval is known as the "time constant" (or "e-folding time") of the system, t_c. The more familiar concept of "doubling time" is related to alpha and the time constant simply by:

Eq. 4.1.1-8
  doubling_time = (ln 2)/alpha = (ln 2)*t_c

Alpha is often more convenient than t_c or doubling times since its value is bounded and continuous: zero at criticality; positive for supercritical systems; and negative for subcritical systems. The time constant goes to infinity at criticality. The term "time constant" seems unsatisfactory for this discussion though, since it is hardly constant: t_c continually changes during reactivity insertion and disassembly. Therefore I will henceforth refer to the quantity 1/alpha as the "multiplication interval".

Alpha is determined by the reactivity (c and the probability of escape), and the length of time it takes an average neutron (for a suitably defined average) to traverse an MFP. If we assume no losses from the system then alpha can be calculated by:

Eq. 4.1.1-9
  alpha = (1/tau)*(c - 1) = (v_n/total_MFP)*(c - 1)

where tau is the average neutron lifetime between collisions, and v_n is the average neutron velocity (which is 2.0x10^9 cm/sec for a 2 MeV neutron, the average fission spectrum energy). The "no losses" assumption is an idealization. It provides an upper bound for reaction rates, and provides a good indication of the relative reaction rates in different materials. For very large assemblies, consisting of many critical masses, neutron losses may actually become negligible and the actual multiplication rates approach the alphas given below. The factor c - 1 used above is the "neutron number"; it represents the average neutron excess per collision. In real systems there is always some leakage; when this leakage is taken into account we get the "effective neutron number", which is always less than c - 1. When the effective neutron number is zero the system is exactly critical.

4.1.2 Nuclear Properties of Fissile Materials

The actual value of alpha at a given density is the result of many interacting factors: the relative neutron density and cross-section values as a function of neutron energy, weighted by neutron velocity, which in turn is determined by the fission neutron energy spectrum modified by the effects of both moderation and inelastic scattering.
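Before turning to measured values, the same placeholder numbers used above give a feel for Eqs. 4.1.1-7 through 4.1.1-9 (again a sketch, not measured data; v_n = 2.0x10^9 cm/sec as stated above):

  # Sketch of Eqs. 4.1.1-7 to 4.1.1-9: ideal (no-leakage) alpha and the
  # associated multiplication interval and doubling time.
  import math

  def alpha_no_losses(c, total_mfp_cm, v_n_cm_s=2.0e9):
      """Eq. 4.1.1-9: alpha = (v_n/total_MFP)*(c - 1), in 1/second."""
      return (v_n_cm_s / total_mfp_cm) * (c - 1.0)

  def neutron_population(n0, alpha_per_s, t_seconds):
      """Eq. 4.1.1-7: N_t = N_0*e^(alpha*t)."""
      return n0 * math.exp(alpha_per_s * t_seconds)

  alpha = alpha_no_losses(c=1.40, total_mfp_cm=2.54)   # roughly the Pu-239 case
  print("alpha                   ~ %.0f /microsecond" % (alpha / 1e6))      # ~315
  print("multiplication interval ~ %.2f ns" % (1e9 / alpha))                # 1/alpha
  print("doubling time           ~ %.2f ns" % (1e9 * math.log(2) / alpha))  # Eq. 4.1.1-8
  print("growth over 100 ns      ~ %.1e" % neutron_population(1.0, alpha, 100e-9))

A multiplication rate of a few hundred per microsecond therefore corresponds to doubling times of a few nanoseconds, and to growth by many orders of magnitude over a fraction of a microsecond.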
Ideally the value of alpha should be determined by "integral experiments", that is, measured directly in the fissile material where all of these effects will occur naturally. Calculating tau and alpha from differential cross section measurements, adjusted neutron spectra, etc. is fraught with potential error. In the table below I give some illustrative values of c, total cross section, and total mean free path length for the principal fissionable materials (at 1 MeV), and the alphas at maximum uncompressed densities. Compression to above normal density (achievable factors range up to 3 or so in weapons) reduces the MFPs (and the physical dimensions of the system), and increases alpha, proportionately.

Table 4.1.2-1. Fissile Material Properties (at 1 MeV)
  Isotope   c      cross_total   total_MFP   density     alpha          t_double
                   (barns)       (cm)        (g/cm^3)    (1/microsec)   (nanosec)
  U-233     1.43   6.5           3.15        18.9        273            2.54
  U-235     1.27   6.8           3.04        18.9        178            3.90
  Pu-239    1.40   7.9           2.54        19.8        315            2.20

Values of c and total MFP can be easily calculated for mixtures of materials as well. In real fission weapons (unboosted) effective values for alpha are typically in the range 25-250 (doubling times of 2.8 to 28 nanoseconds).

All nations interested in nuclear weapons technology have performed integral experiments to measure alpha, but published data is sparse and in general is limited to the immediate region of criticality. Collecting data for systems at high densities requires extremely difficult high explosive experiments, and data for high alpha systems can only be collected in actual nuclear weapon tests. Some integral alpha data is available for systems near prompt critical. The most convenient measurements are of the negative alpha value for fast neutron chain reactions at delayed criticality. Since at prompt critical alpha is exactly zero, the ratio of the magnitude of this delayed critical measurement to the fraction of fission neutrons that are delayed allows the alpha value to be calculated. These were the only sort of alpha measurements available to the Manhattan Project for the design of the first atomic bombs. The most informative values are from the Godiva and Jezebel unreflected reactor experiments. These two systems used bare metal weapon grade cores, so the properties of weapons material were being measured directly. Godiva consisted of oralloy (93.71 wt% U-235, 5.24 wt% U-238, 1.05 wt% U-234), Jezebel of weapon-grade delta-phase plutonium alloy (94.134 wt% Pu-239, 4.848 wt% Pu-240, 1.018 wt% gallium).

Table 4.1.2-2. Properties of Bare Critical Metal Assemblies
(Mass, density, and measured alpha are at delayed critical.)
  Assembly   Material   Mass    Density    Meas. Alpha    Del. Neutron   Calc. Alpha
  Name                  (kg)    (g/cm^3)   (1/microsec)   Fraction       (1/microsec)
  Godiva     Oralloy    52.25   18.71      -1.35          0.0068         199
  Jezebel    WG-Pu      16.45   15.818     -0.66          0.0023         287

The calculated values of alpha from the Godiva and Jezebel experiments are reasonably close to those calculated above from 1 MeV cross section data. Adjusting for density, we get 176/microsecond for U-235 (1 MeV data) vs 199/microsecond (experimental), and for plutonium 252/microsecond vs 287/microsecond.

The effective value of alpha (the actual multiplication rate), taking into account neutron leakage, varies with the size of the system. If the system radius R = r_c, then it is exactly one critical mass (m = M_crit), and alpha is zero. The more critical masses present, the closer alpha comes to the limiting value. This can be estimated from the relation:
Eq. 4.1.2-1
  alpha_eff = alpha_max*[1 - (r_c/R)^2] = alpha_max*[1 - (M_crit/m)^(2/3)]

Notice that using the two tables above we can immediately estimate the critical mass of a bare plutonium sphere:

  mass_crit = [(2*1.985*2.54 cm)^3]*(Pi/6)*19.8 g/cm^3 = 10,600 grams

The published figure is usually given as 10.5 kg.

4.1.3 Distribution of Neutron Flux and Energy in the Core

Since neutron leakage occurs at the surface of a critical or supercritical core, the strength of the neutron flux is not constant throughout the core. Since the rate of energy release at any point in the core is proportional to the flux at that point, this also affects the energy density throughout the core. This is a matter of some significance, since it influences weapon efficiency and the course of events in terminating the divergent fission chain reaction.

4.1.3.1 Flux Distribution in the Core

For a bare (unreflected) critical spherical system, the flux distribution is given by:

Eq. 4.1.3.1-1
  flux(r) = max_flux * Sin(Pi*r/(r_c + 0.71*MFP))/(Pi*r/(r_c + 0.71*MFP))

(using the diffusion approximation) where Sin takes radians as an argument and r_c is the critical radius. If we measure r in MFPs, then by referring to Table 4.1.1-1 we can relate the flux distribution to the parameter c. Computing the ratio between the flux at the surface of the critical system and the maximum flux (in the center) we find:

Table 4.1.3-1. Relative Flux at Surface
  c value    flux(r_c)/max_flux
  1.0        0.0 (at the limit)
  1.02       0.0587
  1.05       0.0963
  1.10       0.1419
  1.20       0.2117
  1.40       0.3182
  1.60       0.4018

This shows that as c increases, the flux distribution becomes flatter with less drop in the flux near the surface. The flux distribution function above applies only to bare critical systems. If the system is supercritical, then the flux distribution becomes flatter, since neutron production over-balances loss. The greater the value of alpha for the system, the flatter it becomes. The addition of a neutron reflector also flattens the distribution, even for the same degree of reactivity. The flux distribution function is useful though, since the maximum rate of fission occurs at the moment when the core passes through second criticality (on the way to disassembling, see below).

4.1.3.2 Energy Distribution in the Core

As long as the geometry doesn't change, the relative flux distribution remains the same throughout the fission process. The fission reaction rate at any point in the core is proportional to the flux. The net burnup of fissile material (and total energy release) is determined by the reaction rate integrated over time. This indicates that the degree of burnup (the efficiency of utilization) varies throughout the core. The outer layers of material will be fissioned less efficiently than the material near the center. The steeper the drop-off in flux, the greater this effect will be. We can thus expect less efficient utilization of fissile material in small cores, and in materials with low values of c. From the relatively low value of c for U-235 compared to U-233 and Pu-239, we can expect that U-235 will be used less efficiently. This is observed in pure fission tests, the difference being about 15% in nominal yield (20 kt) pure fission designs.

The energy density (energy content per unit volume) in any region of the core is determined not only by the total energy produced in that region, but also by the flow of heat into and out of the region. The energy present in the core rises by a factor of e (2.71...) every multiplication interval (neglecting any losses from the surface).
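The practical consequence of this exponential rise can be stated with one line of arithmetic (an added illustration): since the cumulative energy grows as e^(alpha*t), the fraction of the total generated in the final k multiplication intervals is 1 - e^(-k).

  # Fraction of the total energy generated in the last k multiplication
  # intervals, given cumulative energy growing as exp(alpha*t): 1 - exp(-k).
  import math
  for k in (1, 2, 3):
      print("last %d interval(s): %2.0f%% of the total" % (k, 100 * (1 - math.exp(-k))))
  # -> about 63%, 86%, and 95%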
Nearly all of the energy present has thus been produced in the last one or two multiplication intervals, which in a high alpha system is a very short period of time (10 nanoseconds or less). There is not much time for heat flow to significantly alter this energy distribution. Close to the end point of the fission process, however, the energy density in the core is so high that significant flow can occur. Since most of the energy is present as a photon gas, the dominant mechanism is radiation (photon) heat transport, although electron kinetic heat transport may be significant as well. This heat flow can be modelled by the diffusion approximation just like neutron transport, but in this case estimating the photon mean free path (the opacity of the material) is quite difficult. A rough magnitude estimate for the photon MFP is a few millimeters.

The major effect of energy flow is the loss of energy from a layer about 1 photon mean free path thick (referred to as one optical thickness) at the surface of the core. In a bare core this cooling can be quite dramatic, but the presence of a high-Z tamper (which absorbs and re-emits energy) greatly reduces this cooling. Losses also occur deeper in the core, but below a few photon MFPs they become negligible. Otherwise, there is a significant shift in energy out of the center of the core that tends to flatten the energy distribution.

The energy density determines the temperature and pressure in the core, so there is also a variation in these parameters. Since the temperature in radiation dominated matter varies with the fourth root of the energy density, the temperature distribution is rather flat (except near the surface perhaps). The pressure is proportional to the energy density, so it varies to the same degree as the energy density does.

4.1.4 History of a Fission Explosion

To clarify the issues governing fission weapon design it is very helpful to understand the sequence of events that occurs in every fission explosion. The final event in the process - disassembly - is especially important since it terminates the fission energy release and thus determines the efficiency of the bomb.

4.1.4.1 Sequence of Events

Several distinct physical states can be identified during the detonation of a fission bomb. In each of these states a different set of physical processes dominates.

4.1.4.1.1 Initial State

Before the process that leads to a fission explosion is initiated, the fissile material is in a subcritical configuration. Reactivity insertion begins by increasing the average density of the configuration in some way.

4.1.4.1.2 Delayed Criticality

When the density has increased just to the point that a neutron population in the mass is self-sustaining, the state of delayed criticality has been achieved. Although nearly all neutrons produced by fission are emitted as soon as the atom splits (within 10^-14 sec or so), a very small proportion of neutrons (0.65% for U-235, 0.25% for Pu-239) are emitted by fission fragments with delays of up to a few minutes. In delayed criticality these neutrons are required to maintain the chain reaction. These long delays mean that power level changes can only occur slowly. All nuclear reactors operate in a state of delayed criticality. Due to the slowness of neutron multiplication in this state it is of no significance in nuclear explosions, although it is important for weapon safety considerations.
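To make the temperature and pressure relations of Section 4.1.3.2 concrete: for radiation dominated matter u = a*T^4 and P = u/3, so temperature scales as the fourth root of the energy density while pressure scales linearly with it. A minimal sketch (the sample energy densities are arbitrary illustrative values, not figures from the text):

  # Sketch: temperature and pressure of radiation dominated matter (photon gas).
  A_RAD = 7.566e-15              # radiation constant, erg cm^-3 K^-4

  def photon_gas(u_erg_per_cm3):
      """Temperature (K) and pressure (erg/cm^3) of a photon gas of energy density u."""
      temperature = (u_erg_per_cm3 / A_RAD) ** 0.25
      pressure = u_erg_per_cm3 / 3.0
      return temperature, pressure

  # Doubling the energy density doubles the pressure but raises the temperature
  # by only 2**0.25, about 19% (input values are hypothetical).
  t1, p1 = photon_gas(1.0e16)
  t2, p2 = photon_gas(2.0e16)
  print(t2 / t1, p2 / p1)        # ~1.19, 2.0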
4.1.4.1.3 Prompt Criticality

When reactivity increases to the point that prompt neutrons alone are sufficient to maintain the chain reaction then the state of prompt criticality has been reached. Rapid multiplication can occur after this point. In bomb design the term "criticality" usually is intended to mean "prompt criticality". For our purposes we can take the value of alpha as being zero at this point. The reactivity change required to move from delayed to prompt criticality is quite small (for plutonium the prompt and delayed critical mass difference is only 0.80%, for U-235 it is 2.4%), so in practice the distinction is unimportant. Passage through prompt criticality into the supercritical state is also termed "first criticality".

4.1.4.1.4 Supercritical Reactivity Insertion

The insertion time of a supercritical system is measured from the point of prompt criticality, when the divergent chain reaction begins. During this phase the reactivity climbs, along with the value of alpha, as the density of the core continues to increase. Any insertion system will have some maximum degree of reactivity which marks the end of the insertion phase. This phase may be terminated by reaching a plateau value, by passing the point of maximum reactivity and beginning to spontaneously deinsert, or by undergoing explosive disassembly.

4.1.4.1.5 Exponential Multiplication

This phase may overlap supercritical insertion to any degree. Any neutrons introduced into the core after prompt criticality will initiate a rapid divergent chain reaction that increases in power exponentially with time, the rate being determined by alpha. If exponential multiplication begins before maximum reactivity, and insertion is sufficiently fast, there may be significant increases in alpha during the course of the chain reaction. Throughout the exponential multiplication phase the cumulative energy released remains too small to disrupt the supercritical geometry on the time scale of the reaction. Exponential multiplication is always terminated by explosive disassembly. The elapsed time from neutron injection in the supercritical state to the beginning of explosive disassembly is called the "incubation time".

4.1.4.1.6 Explosive Disassembly

The bomb core is disassembled by a combination of internal expansion that accelerates all portions of the core outward, and the "blow-off" or escape of material from the surface, which generates a rarefaction wave propagating inward from the surface. The drop in density throughout the core, and the more rapid loss of material at the surface, cause the neutron leakage in the core to increase and the effective value of alpha to decline.

The speed of both the internal expansion and surface escape processes is proportional to the local speed of sound in the core. Thus disassembly occurs when the time it takes sound to traverse a significant fraction of the core radius becomes comparable to the time constant of the chain reaction. Since the speed of sound is determined by the energy density in the core, there is a direct relationship between the value of alpha at the time of disassembly and the amount of energy released. The faster the chain reaction, the more efficient the explosion.

As long as the value of alpha is positive (the core is supercritical) the fission rate continues to increase. Thus the peak power (energy production rate) occurs at the point where the core drops back to criticality (this point is called "second criticality").
Although this terminates the divergent chain reaction, and the exponential increase in energy output, it does not mean that significant power output has ended. A convergent chain reaction continues the release of energy at a significant, though rapidly declining, rate for a short time afterward. 30% or more of the total energy release typically occurs after the core has become sub-critical.

4.1.4.2 The Disassembly Process

The internal expansion of the core is caused by the existence of an internal pressure gradient. The escape of material from the surface is caused by an abrupt drop in pressure near the surface, allowing material to expand outward very rapidly. Both of these features are present in every fission bomb, but the degree to which each contributes to disassembly varies.

Consider a spherical core with internal pressure declining from the center towards the surface. At any radius r within the core the pressure gradient is dP/dr. Now consider a shell of material centered at r, that is sufficiently thin so that the slope of the pressure gradient does not change appreciably across it. The mass of the shell is determined by its area, density, and thickness:
   m = thickness * area * density
The outward force exerted on the shell is determined by the pressure difference across the shell and the shell area:
   F = dP/dr * thickness * area
From Newton's second law of motion we know that acceleration is related to force and mass by:
   a = F/m
so:
   a = (dP/dr * thickness * area)/(thickness * area * density) = (dP/dr)/density
If density is constant in the core, then the outward acceleration at any point is proportional to the pressure gradient; the steeper the gradient, the greater the acceleration. The kinetic energy acquired comes at the expense of the internal energy of the expanding material.

The limiting case of a steep pressure gradient is a sudden drop to zero. In this case the acceleration is infinite, the internal energy of the material is completely converted to kinetic energy instantaneously, and it expands outwards at constant velocity (escape velocity). The edge of the pressure drop propagates back into the material as a rarefaction wave at the local speed of sound. The pressure at the leading edge of the expanding material (moving in the opposite direction at escape velocity) is zero. The pressure discontinuity thus immediately changes into a continuous pressure change of steadily diminishing slope. See the section on Release Waves for more discussion of this process.

In a bare core, thermal radiation from the surface causes a large energy loss in a surface layer about one optical thickness deep. Since energy lost from the core by thermal radiation cannot contribute to expansion, this has the effect of delaying disassembly. It does create a very steep pressure gradient in the layer however, and a correspondingly high outward acceleration. Deeper in the core, the pressure gradient is much flatter and the acceleration is lower. After the surface layer has expanded outward by a few times its original thickness, it has acquired considerable velocity, and the surface pressure drop rarefaction has propagated a significant distance back into the core. At this point the pressure and density profile of the core closely resembles the early stages of expansion from an instantaneous pressure drop, the development of the profile having been delayed slightly by the time it took the surface to accelerate to near escape velocity.
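Returning to the thin-shell argument above, the cancellation of the shell geometry can be checked directly; a minimal sketch (all input numbers are arbitrary illustrative values in CGS units, except the plutonium density taken from Table 4.1.2-1):

  # Sketch of the thin-shell acceleration argument: a = (dP/dr) / density.
  # The thickness and area cancel, so only the local pressure gradient and
  # density matter.
  def shell_acceleration(dP_dr, density, thickness=1.0, area=1.0):
      mass = thickness * area * density          # g
      force = dP_dr * thickness * area           # dyn (pressure difference x area)
      return force / mass                        # cm/s^2

  g1 = shell_acceleration(dP_dr=1.0e20, density=19.8, thickness=0.1, area=50.0)
  g2 = shell_acceleration(dP_dr=1.0e20, density=19.8, thickness=0.3, area=200.0)
  print(g1, g2)   # identical: the geometry of the shell drops out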
A bomb core will typically be surrounded by a high-Z tamper. A layer of tamper (about one optical thickness deep) absorbs the thermal radiation emitted by the core and is heated by it. As its temperature increases, this layer begins to radiate energy back to the core, reducing the core's energy loss. In addition, the heating also generates considerable pressure in the tamper layer. The combined effect of reduced core surface cooling and this external pressure is to create a much more gradual pressure drop in the outer layer of the core and a correspondingly reduced acceleration. The expanding core and heated tamper layer create a shock wave in the rest of the tamper.

This has important consequences for the disassembly process. The rarefaction wave velocity is not affected by the presence of the tamper, but the rate at which the density drops after arrival of the rarefaction wave is strongly affected. The rate of density drop is determined by the limiting outward expansion velocity, which is in turn determined by the shock velocity in the tamper. The denser the tamper the slower the shock, and the slower the density decrease behind the rarefaction wave. In any case the shock velocity in the tamper is much slower than the escape velocity of expansion into a vacuum. The disassembly of a tamped core thus more closely resembles one dominated by internal expansion rather than surface escape.

4.1.4.3 Post Disassembly Expansion

The expanding core creates a radiation dominated shock wave in the tamper that compresses it by at least a factor of 7, and perhaps as much as 16 due to ionization effects. This pileup of high density material at the shock front is called the "snow plow" effect. By the time this shock has moved a few centimeters into the tamper, the rarefaction wave will have reached the center of the core and the entire core will be expanding outward uniformly. The basic structure of the early fireball has now developed, consisting of a thin highly compressed shell just behind the shock front containing nearly all of the mass that has been shocked and heated so far. This shell travels outward at nearly the same velocity as the shock front. The volume inside this shell is a region of very low density. Temperature and pressure behind the shock front are essentially uniform, though, since nearly all of the energy present is contained in the radiation field (i.e. it exists as a photon gas).

Since the shock wave is radiation dominated, the front does not contain an abrupt pressure jump. Instead there is a transition zone with a thickness about equal to the radiation mean free path in the high-Z tamper material (typically a few millimeters). In this zone the temperature and pressure climb steadily to their final values. This overall explosion structure remains the same as the shock expands outward until it reaches a layer of low-Z material (a beryllium reflector, or the high explosive). The transition zone marking the shock front remains thin as long as the shock is travelling through opaque high-Z material.

Low-Z material becomes completely ionized as it is heated, and once it is completely ionized it is nearly transparent to radiation and is no longer efficiently heated. When the shock front emerges at the boundary of the high-Z tamper and the low-Z material, it splits into two regions. A radiation driven shock front moves quickly away from the high-Z surface, bleaching the low-Z material to transparency. This faster shock front only creates a partial transition to the final temperature and pressure.
The transition is completed by a second shock, this one a classical mechanical shock, driven by the opaque material.

4.1.5 Fission Weapon Efficiency

Fundamental to analyzing the design of fission bombs is understanding the factors that influence the efficiency of the explosion - the percentage of fissile material actually fissioned. The efficiency and the amount of fissile material present determine the amount of energy released by the explosion - the bomb's yield. I have organized my discussion of design principles around the issue of efficiency since it is the most important design characteristic of any fission device. Any weapon designer must have a firm grasp on the expected efficiency in order to make successful yield predictions, and a firm grasp on the factors affecting efficiency is required to make design tradeoffs.

In the discussion below (and in later subsections as well) I assume that the system under discussion is spherically symmetric, and of homogeneous density, unless otherwise stated. Spherical symmetry is the simplest geometry to analyze, and also happens to be the preferred geometry for efficient nuclear weapons.

4.1.5.1 Efficiency Equations

It is intrinsically difficult to accurately predict the performance of a particular design from fundamental physical principles alone. To make good predictions on this basis requires sophisticated computer simulations that include hydrodynamic, radiation, and neutronic effects. Even here it is very valuable to have actual test data to use for calibrating these simulation models. Nuclear weapon programs have historically relied heavily on extrapolating tested baseline designs using scaling laws like the efficiency equations I discuss below, especially in the early years of development. These equations are derived from idealized models of bomb core behavior and consequently have serious limitations in making absolute efficiency estimates. The predictions of the Theoretical Section at Los Alamos underestimated the yield of the first atomic bomb by a factor of three; an attempt a few years later to recompute the bomb efficiency using the best models, physical data, and computers available at the time led to a yield overestimate by a factor of two.

From the description of core disassembly given above we can see that two idealizations are possible for deriving convenient efficiency equations. The basic approach is to model how quickly the core expands to the point of second criticality; to within a constant scaling factor, this fixes the efficiency of the explosion. In the first modelling approach, the state of second criticality is based on the average density of the entire core. In the second approach, second criticality is based on the surface loss of excess critical masses from a residual core which remains at constant initial density.

The first efficiency equation to be developed was the Bethe-Feynman equation, prepared by Hans Bethe and Richard Feynman at Berkeley in 1942 based on the uniform expansion model. A somewhat different efficiency equation was presented by Robert Serber in early 1943 at Los Alamos, which was likewise based on uniform expansion but explicitly included the exponential growth in energy release (which the Bethe-Feynman equation did not).
A problem with these derivations is that to keep the resultant formulas relatively simple, they assume that the expanding core remains at essentially constant density during deinsertion, which is only true (even approximately) when the degree of supercriticality is small. For the purposes of this FAQ I have taken the second approach for deriving an efficiency equation, using the surface escape model. This model has the advantage that the residual core remains at constant density regardless of the degree of supercriticality. Comparing it to the other efficiency equations provides some insight into the sensitivity of the assumptions in the various models.

4.1.5.1.1 The Serber Efficiency Equation Revisited

Let us first consider the factors that affect the efficiency of a homogeneous untamped supercritical mass. In this system, disassembly begins as fissile material expands off the core's surface into a vacuum. We make the following simplifying assumptions: the value of alpha remains constant during disassembly, conditions are uniform throughout the core, no energy is lost from the core, and the core is radiation dominated at the time of disassembly.

If r is the initial outer radius, and r_c is the critical radius, then the reaction halts when:

Eq. 4.1.5.1.1-1
   Integral[c_s(t) dt] = r - r_c

where c_s(t) is the speed of sound at time t. If kinetic pressure is negligible compared to radiation pressure (this is true in all but extremely low yield explosions), then:

Eq. 4.1.5.1.1-2
   c_s(t) = [(E(t)*gamma)/(3*V*rho)]^0.5

where E(t) is the cumulative energy produced by the reaction, V is the volume of the core, and rho is its density. We also have:

Eq. 4.1.5.1.1-3
   E(t) = (E1/(c - 1)) * e^(alpha*t)

where E1 is a constant that gives the energy yield per fission (E1 = 2.88 x 10^-4 erg/fission). Thus:

Eq. 4.1.5.1.1-4
   Eff(t) = E(t)/E_total = (E1/((c - 1)*E_total)) * e^(alpha*t)

where Eff(t) is the efficiency at time t, and E_total is the energy yield at 100% efficiency.

Eq. 4.1.5.1.1-5
   r - r_c = Integral[(E(t)*gamma/(3*V*rho))^0.5 dt]
           = (gamma*E1/(3*M*(c-1)))^0.5 * Integral[e^(alpha*t/2) dt]
           = (gamma*E1/(3*M*(c-1)))^0.5 * (2/alpha) * e^(alpha*t/2)

where M is the fissile mass. Rearranging and squaring we get:

Eq. 4.1.5.1.1-6
   e^(alpha*t) = (r - r_c)^2 * ((3M*(c-1))/(gamma*E1)) * (alpha^2)/4

Substituting into the efficiency equation:

Eq. 4.1.5.1.1-7
   Eff(t) = [3*alpha^2 * M * (r - r_c)^2]/(4*gamma*E_total)

If E2 is a constant equal to the fission energy per gram in ergs (7.25 x 10^17 erg/g for Pu-239), and gamma is equal to 4/3 for a photon gas, then:

Eq. 4.1.5.1.1-8
   Eff(t) = [9*alpha^2 * (r - r_c)^2]/(16*E2)

We can observe at this point that efficiency is determined by the actual value of alpha and the difference between the actual radius of the assembly and the radius of the mass just sufficient to keep the chain reaction going. Note that it is the values of these parameters WHEN DISASSEMBLY ACTUALLY OCCURS that are relevant.

Now using r = r_c(1 + delta), so that (r - r_c) = delta*r_c, we get:

Eq. 4.1.5.1.1-9
   Eff(t) = [9*alpha^2 * delta^2 * r_c^2]/(16*E2)

If we let tau = (total_MFP/v_n) then:

Eq. 4.1.5.1.1-10
   alpha_max = (v_n/total_MFP)*(c - 1) = (c - 1)/tau

and

Eq. 4.1.5.1.1-11
   alpha_eff = ((c - 1)/tau)*[1 - (1/(1 + delta)^2)]

Eq. 4.1.5.1.1-12
   Eff(t) = ((c-1)/tau)^2 * 9/(16*E2) * r_c^2 * delta^2 * [1 - (1/(1 + delta)^2)]^2
          = ((c-1)/tau)^2 * 9/(16*E2) * r_c^2 * [delta - (delta/(1 + delta)^2)]^2
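A minimal numerical sketch of Eq. 4.1.5.1.1-12 for an untamped Pu-239 core. The alpha_max and E2 values are those given above; the bare critical radius of roughly 5 cm is inferred from the 10.6 kg estimate in Section 4.1.2, and the delta values are arbitrary illustrative choices (keep in mind the caveats on the absolute accuracy of these formulas discussed below).

  # Sketch: evaluate Eq. 4.1.5.1.1-12 for an untamped Pu-239 core.
  E2 = 7.25e17                 # fission energy per gram, erg/g (Pu-239)
  ALPHA_MAX = 3.15e8           # 1/s, at normal density (Table 4.1.2-1)
  R_C = 5.0                    # cm, approximate bare critical radius (assumed)

  for delta in (0.1, 0.25, 0.5):
      bracket = (delta - delta / (1.0 + delta) ** 2) ** 2
      eff = ALPHA_MAX**2 * 9.0 / (16.0 * E2) * R_C**2 * bracket
      print(f"delta = {delta}: efficiency ~ {eff:.4f}")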
In the range of 0 < delta < 1 (up to 8 critical masses), the expression [delta - (delta/(1 + delta)^2)]^2 is very close to 0.6*delta^3, giving us:

Eq. 4.1.5.1.1-13
   Eff(t) = 0.338*((c-1)/tau)^2 * r_c^2/E2 * delta^3
          = 0.338/E2 * alpha_max^2 * r_c^2 * delta^3

This last equation is identical with the equation derived by Robert Serber in the spring of 1943 and published in The Los Alamos Primer, except that his constant is 0.667 (i.e. it gives efficiencies 1.98 times higher). Serber derived his efficiency equation from rough dynamical considerations, without using a hydrodynamic model of disassembly, and admits that his result is 2-4 times higher than the true value. This is consistent with the above derivation.

Both the equation given above and Serber's equation differ significantly from the Bethe-Feynman equation however, which gives an efficiency relationship of:

Eq. 4.1.5.1.1-14
   Eff = (1/((gamma - 1)*E2)) * alpha_max^2 * r_c^2 * (delta*(1 + 3*delta/2)^2)/(1 + delta)

after reformulating to equivalent terms. This is a much more linear relationship between delta and efficiency than the cubic relationship of Serber. Due to the crudeness of all of these derivations, the significance of this difference cannot be assessed at present.

Equation 4.1.5.1.1-13 shows that efficiency is proportional to the square of the maximum multiplication rate of the material and to the square of the critical radius (also a material property), and to the cube of the relative radius excess delta. Extending to larger values, we can approximate it in the range 1 < delta < 3 (up to 64 critical masses) with the expression:

Eq. 4.1.5.1.1-15
   Eff(t) = 0.338/E2 * alpha_max^2 * r_c^2 * delta^(7/3)

4.1.5.1.2 The Density Dependent Efficiency Equation

The efficiency equation given above leaves something to be desired for evaluating fission weapon designs. I have included it to assist in making comparisons with the available literature, but I will give it a different form below. The choice of fissile materials available to a weapon designer is quite limited, and the nuclear and physical properties of these materials are fixed. It is desirable then to separate these factors from the factors that a designer can influence - namely, the mass of material present, and the density achieved. The density is of particular interest since it is the only factor that changes in a given design during insertion. Understanding how efficiency changes with density is essential to understanding the problem of predetonation, for example.

Returning to equation Eq. 4.1.5.1.1-8:
   Eff(t) = [9*alpha^2 * (r - r_c)^2]/(16*E2)
we want to reformulate it so that it consists of two parts, one that does not depend on density, and one that depends only on density. Let the composition and mass of the system be fixed. We will normalize the radius and density so that they are expressed relative to the system's critical state. If rho_crit and r_crit are the values for density and radius of the critical state, and rho_rel and r_rel are the values of the system that we want to evaluate:

Eq. 4.1.5.1.2-1
   rho_rel = rho_actual/rho_crit
and
Eq. 4.1.5.1.2-2
   r_rel = r_actual/r_crit

When the system is exactly critical, rho_rel = 1 and r_rel = 1. Of course we are interested in states where rho_rel > 1, and r_rel < 1. We can relate r_rel to rho_rel:

Eq. 4.1.5.1.2-3
   r_rel = (1/rho_rel)^(1/3) * r_crit

Using this notation, and letting alpha_max_c be the value of alpha_max at the critical state density, we can write:
   alpha = alpha_max_c * rho_rel * (1 - (r_c/r_rel)^2)
In this case r_c refers to the effective critical radius at density rho_rel, not rho_crit; that is, r_c IS NOT r_crit.
Instead it is equal to r_crit/rho_rel. Using this, and the relation for r_rel above, we can eliminate r_crit:

Eq. 4.1.5.1.2-4
   alpha = alpha_max_c * rho_rel * (1 - ((1/rho_rel)/(1/rho_rel)^(1/3))^2)
         = alpha_max_c * rho_rel * (1 - (rho_rel)^(-4/3))

Substituting into the efficiency equation:

Eq. 4.1.5.1.2-5
   Eff = (9/(16*E2)) * alpha^2 * (r_rel - r_c)^2

Eq. 4.1.5.1.2-6
   Eff = (9/(16*E2)) * (alpha_max_c*rho_rel*(1 - (rho_rel)^(-4/3)))^2 * (r_rel - r_c)^2

Splitting constant and density dependent factors between two lines:

Eq. 4.1.5.1.2-7
   Eff = (9/(16*E2)) * alpha_max_c^2
         * rho_rel^2 * (1 - (rho_rel)^(-4/3))^2 * (r_rel - r_c)^2

We can eliminate r_rel and r_c, replacing them with expressions of rho_rel and r_crit:

Eq. 4.1.5.1.2-8
   r_rel - r_c = ((1/rho_rel)^(1/3) * r_crit) - (r_crit/rho_rel)
               = ((1/rho_rel)^(1/3) - (1/rho_rel)) * r_crit

Eq. 4.1.5.1.2-9
   Eff = (9/(16*E2)) * alpha_max_c^2 * r_crit^2
         * rho_rel^2 * (1 - (rho_rel)^(-4/3))^2 * ((1/rho_rel)^(1/3) - (1/rho_rel))^2

Recall that rho_rel, the relative density, is not generally the compression ratio compared to normal density. This is true only if the amount of fissile material in the system is exactly one critical mass at normal density (as was approximately true in the Fat Man bomb). For "sub-crit" systems, rho_rel is smaller than the actual compression of the material since compressive work is required to raise the initial sub-critical system to the critical state. For a system consisting of more than one critical mass (at normal density), rho_rel is higher than the actual compression.

By looking in turn at each of the density dependent terms we can gain insight into the significance of the efficiency equation. First note that alpha_max_c is a fundamental property of the fissile material and does not change, even though it is system dependent (being normalized to the critical density of the system).

The term rho_rel^2 is introduced by the reduction of the MFP with increasing density and contributes to enhanced efficiency at all values of rho_rel.

The term (1 - (rho_rel)^(-4/3))^2 represents the effect of neutron leakage. At rho_rel=1 its value is 0. It has a limiting value of 1 when rho_rel is high, i.e. no leakage occurs. As this term approaches one, and leakage becomes insignificant, it ceases to be a significant contributor to further efficiency enhancement.

The term ((1/rho_rel)^(1/3) - (1/rho_rel))^2 describes the distance the rarefaction wave must travel to shut down the reaction. At rho_rel=1 it is 0. It initially increases rapidly, but soon slows down and reaches a maximum at about rho_rel = 5.196. Thereafter it declines slowly. This reflects the fact that once the critical radius of the system at rho_rel is small compared to the physical radius, no further efficiency gain is obtained from this source. Instead further increases in density simply reduce the scale of the system, allowing faster disassembly.

We can provide some approximations for the efficiency equation to make the overall effect of density more apparent. In the range of 1 < rho_rel < 2 it is approximately:

Eq. 4.1.5.1.2-10
   Eff = (9/(16*E2)) * alpha_max_c^2 * r_crit^2 * ((rho_rel - 1)^3)/8

In the range of 2 < rho_rel < 4.5 it is approximately:

Eq. 4.1.5.1.2-11
   Eff = (9/(16*E2)) * alpha_max_c^2 * r_crit^2 * ((rho_rel - 1)^2.333)/8
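A minimal sketch comparing the exact density dependent factor of Eq. 4.1.5.1.2-9 with the approximations just given (the constant factors in front are common to both and are omitted):

  # Sketch: density dependent factor of Eq. 4.1.5.1.2-9,
  #   f(x) = x^2 * (1 - x^(-4/3))^2 * (x^(-1/3) - x^(-1))^2,  x = rho_rel,
  # versus the approximations (x-1)^3/8 (1 < x < 2) and (x-1)^2.333/8 (2 < x < 4.5).
  def exact(x):
      return x**2 * (1 - x**(-4.0/3))**2 * (x**(-1.0/3) - 1.0/x)**2

  for x in (1.5, 2.0, 3.0, 4.0):
      approx = (x - 1)**3 / 8 if x <= 2 else (x - 1)**2.333 / 8
      print(f"rho_rel = {x}: exact = {exact(x):.4f}, approx = {approx:.4f}")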
In the range of 4 < rho_rel < 8 it is approximately:

Eq. 4.1.5.1.2-12
   Eff = (9/(16*E2)) * alpha_max_c^2 * r_crit^2 * ((rho_rel - 1)^1.8)/5

4.1.5.1.3 The Mass and Density Dependent Efficiency Equation

The maximum degree of compression above normal density that is achievable is limited by technology. It is of interest then to consider how the amount of material present affects efficiency at a given level of compression, since it is the other major parameter that a designer can manipulate. To examine this we would like to reintroduce an explicit term for mass. To do this we renormalize the equation to a fixed standard density rho_0 (the uncompressed density of the fissile material), and use rho_0 and the corresponding value of the critical mass M_c to replace the scale parameter r_crit. Thus:

Eqs. 4.1.5.1.3-1 through 4.1.5.1.3-5
   alpha_max_crit = alpha_max_0 * (rho_crit/rho_0)
   m_rel = m/M_c
   rho_crit = rho_0/m_rel^(1/2)
   rho_rel = rho/rho_crit = (rho/rho_0)*m_rel^(1/2)
   r_crit = ((m/rho_crit)*(3/(2*Pi)))^(1/3)
          = (m*m_rel^(1/2)/rho_0)^(1/3) * (3/(2*Pi))^(1/3)
          = (m^(3/2)/(M_c^(1/2) * rho_0))^(1/3) * (3/(2*Pi))^(1/3)
          = m^(1/2) * (M_c^(1/2) * rho_0)^(-1/3) * (3/(2*Pi))^(1/3)

Assuming the density rho >= rho_crit, we get:

Eq. 4.1.5.1.3-6
   Eff = (9/(16*E2)) * (3/(2*Pi))^(2/3) * alpha_max_0^2 * (rho_crit/rho_0)^2 * (rho/rho_crit)^2
         * (m^(1/2) * (M_c^(1/2) * rho_0)^(-1/3))^2
         * (1 - ((rho_0/rho)^(4/3) * m_rel^(-2/3)))^2
         * (((rho_0/rho)^(1/3) * m_rel^(-1/6)) - ((rho_0/rho) * m_rel^(-1/2)))^2

Eq. 4.1.5.1.3-7
   Eff = (9/(16*E2)) * (3/(2*Pi))^(2/3) * alpha_max_0^2 * (rho/rho_0)^2 * m/(M_c^(1/3) * rho_0^(2/3))
         * (1 - ((rho_0/rho)^(4/3) * m_rel^(-2/3)))^2
         * m_rel^(-1) * (((rho_0 * m_rel)/rho)^(1/3) - (rho_0/rho))^2

Eq. 4.1.5.1.3-8
   Eff = (9/(16*E2)) * (3/(2*Pi))^(2/3) * alpha_max_0^2 * m/(M_c^(1/3)) * (M_c/m) * (rho^2)/(rho_0^(8/3))
         * (1 - ((rho_0/rho)^(4/3) * m_rel^(-2/3)))^2
         * (((rho_0 * m_rel)/rho)^(1/3) - (rho_0/rho))^2

Eq. 4.1.5.1.3-9
   Eff = (9/(16*E2)) * (3/(2*Pi))^(2/3) * alpha_max_0^2 * M_c^(2/3) * (rho/(rho_0^(4/3)))^2
         * (1 - ((rho_0/rho)^(4/3) * m_rel^(-2/3)))^2
         * (((rho_0 * m_rel)/rho)^(1/3) - (rho_0/rho))^2

The first line of this equation consists entirely of constants, some of them fixed by the choice of material and reference density. From the next two lines it is clear that the density dependency is the same. The effect of increasing the mass of the system is to modestly reduce leakage and retard disassembly.

4.1.5.1.4 The Mass Dependent Efficiency Equation

It is useful to also have an equation that considers only the effect of mass. Including this as the only variable allows presenting a simplified form that makes the effect of varying the mass in a particular design easier to visualize. Also, in gun-type designs no compression occurs, so the chief method of manipulating yield is by varying the mass of fissile material present. Taking the mass and density dependent equation, we can set the density to a fixed nominal value and then simplify. Let rho = rho_0:

Eq. 4.1.5.1.4-1
   Eff = (9/(16*E2)) * (3/(2*Pi))^(2/3) * alpha_max_0^2 * M_c^(2/3) * (rho_0/(rho_0^(4/3)))^2
         * (1 - ((rho_0/rho_0)^(4/3) * m_rel^(-2/3)))^2
         * (((rho_0 * m_rel)/rho_0)^(1/3) - (rho_0/rho_0))^2
       = (9/(16*E2)) * (3/(2*Pi))^(2/3) * alpha_max_0^2 * M_c^(2/3) * rho_0^(-2/3)
         * (1 - m_rel^(-2/3))^2 * ((m_rel)^(1/3) - 1)^2

Since M_c/rho_0 is the volume of a critical assembly (m_rel = 1):

Eq. 4.1.5.1.4-2
   Eff = (9/(16*E2)) * (3/(2*Pi))^(2/3) * alpha_max_0^2 * vol_crit^(2/3)
         * (1 - m_rel^(-2/3))^2 * ((m_rel)^(1/3) - 1)^2
Eq. 4.1.5.1.4-3
   Eff = (9/(16*E2)) * (2^(2/3)) * alpha_max_0^2 * r_crit^2
         * (1 - m_rel^(-2/3))^2 * ((m_rel)^(1/3) - 1)^2

Again the top line consists of numeric and material constants, the second of mass dependent terms. This equation shows that efficiency is zero when m_rel = 1, as expected. Efficiency is negligible when m_rel < 1.05, similar to the power of conventional explosives. It climbs very quickly however, increasing by a factor of 400 or so between 1.05 and 1.5, where efficiency becomes significant. The Little Boy bomb had m_rel = 2.4. If its fissile content had been increased by a mere 16%, its yield would have increased by 75% (whether this could be done while maintaining a safe criticality margin is a different matter).

4.1.5.1.5 Limitations of the Efficiency Equations

These formulas provide good scaling laws, and a rough means to calculate efficiency. But we should return to the simplifying assumptions made earlier to understand their limitations.

It is obvious that alpha is not constant during disassembly. As material blows off, the size of the core and the value of alpha both decrease, which has a negative effect on efficiency. This is the most important factor not accounted for, and results in a lower effective coefficient in the efficiency equation.

The assumptions of uniform temperature and no energy loss are also not really true. The energy production rate in any region of the core is proportional to the neutron flux density. This density is highest in the center and lowest at the surface (although not dramatically so). Furthermore, the high radiation energy density in the core corresponds to a high radiation loss rate from the surface. Based on the Stefan-Boltzmann law it would seem that the loss rate from a bare core could eventually match the energy production rate. This doesn't really occur because of the high opacity of ionized high-Z material; thermal energy from inside the core cannot readily reach the surface. But by the same token, the surface can cool dramatically. Since core expansion starts at the surface, and the rate is determined by temperature, this surface cooling can significantly retard disassembly.

When scaling from known designs, most of these issues have little significance since the deviations from the theoretical model used for the derivations affect both systems similarly.

The efficiency equations also break down at very small yields. To eliminate gamma from the equations I assumed that the core was radiation dominated at the time of disassembly. When yields drop to the low hundreds of tons and below, the value of gamma approximates that of a perfect gas, which changes only the constant term in the equations, reducing efficiency by 20%. When yields drop to the ton range then the properties of condensed matter (like physical strength, heat of vaporization, etc.) become apparent. This tends to increase the energy release since these properties resist the expansion effects.

There is another factor that imposes an effective upper limit on efficiency regardless of other attempts to enhance yield. This is the decrease in fissile content of the core. The alert reader may have noticed that it is possible to calculate efficiencies that are greater than 1 using the equations. This is because energy release is represented as an exponentially increasing function of time without regard for the amount of energy actually present in the fissile material.
At some point, the fact that the fission process depletes the fissile material present must have an effect on the progress of the chain reaction. The limiting factor here is due to the dilution of the fissile material by the fission products. Most isotopes have roughly the same absorption cross section for fast neutrons, a few barns. The core initially consists of fissile material, but as the chain reaction proceeds each fission event replaces one fissile nucleus with two fission product nuclei. When 50% of the material has fissioned, for every 100 initial fissile atoms there are now 50 remaining, and 100 non-fissile atoms, i.e. the fissile content has declined to only 33%. This parasitic absorption will eventually extinguish the reaction entirely, regardless of what yield enhancement techniques are used (generally at an efficiency substantially below 50%).

4.1.5.2 Effect of Tampers and Reflectors on Efficiency

So far I have been explicitly assuming a bare fissile mass for efficiency estimation. Of course, most designs surround the core with layers of material intended to scatter escaping neutrons back into the fissile mass, or to retard the hydrodynamic expansion. I use the term "reflector" to refer to the neutron scattering properties of the surrounding material, and "tamper" to refer to the effect on hydrodynamic expansion. The distinction is logical because the two effects are fundamentally unrelated, and because the term tamper was borrowed from explosive blasting technique where it refers only to the containment of the blast. This distinction is not usually made in US weapons programs, from the Manhattan Project on. The custom is to use "tamper" to refer to both effects, although "neutronic tamper" and "reflector" are used if the neutron reflection effect alone is intended.

In the bare core, the fissile material that has been reached by the inward moving rarefaction wave expands outward very rapidly. In radiation dominated matter, expansion into a vacuum reaches a limiting speed of six times the local speed of sound in the material (this is the velocity at the outer surface of the expanding sphere of material). The density of matter behind the rarefaction front (which moves toward the center of the core) thus drops very rapidly and is almost immediately lost to the fission reaction.

If a layer of dense material surrounds the core then something very different occurs. The fissile material is not expanding into a vacuum; instead it has to compress and accelerate matter ahead of it. That is, it creates a shock wave. The expansion velocity of the core is then limited to the velocity of accelerated material behind the expanding shock front, which is close to the shock velocity itself. If the tamper and fissile core have similar densities, then this expansion velocity is similar to the speed of sound in the core and only 1/6 as fast as the unimpeded expansion velocity. This confining effect means that the drop in alpha as disassembly proceeds is not nearly as abrupt as in a vacuum. It thus reduces the importance of the inaccurate assumption of constant alpha used in deriving the efficiency equation.

Another important effect is caused by the radiation cooling of the core. In a vacuum this energy is lost to free space. An opaque tamper absorbs this energy, and a layer of material one mean free path thick is heated to nearly the temperature and pressure of the core.
The expansion shock wave then arises not at the surface of the core, but some distance away in the tamper (on the order of a few millimeters). A rarefaction wave must then propagate back to the surface of the core before its expansion even begins. In effect, this increases the size of the expansion distance term ((1/rho_rel)^(1/3) - (1/rho_rel))^2 in the efficiency equation.

In a bare core, any neutron that reaches the surface of the core is lost forever to the reaction. A reflector scatters the neutrons, a process that causes some fraction of them to eventually reenter the fissile mass (usually after being scattered several times). Its effect on efficiency then can be described simply by reducing the neutron leakage term (rho_rel)^(-4/3) by a constant factor, or by reducing the reference density critical mass terms. The leakage or critical mass adjustments must take into account time absorption effects. This means that leakage cannot simply be reduced by the probability of a lost neutron eventually returning, and the reflected critical mass cannot be based simply on the steady state criticality value. For example, when an efficiently reflected assembly is only slightly supercritical, then multiplication is dependent mostly (or entirely) on the reflected neutrons that reenter the core. On average each of these neutrons spends quite a lot of time outside the core before being scattered back in. The relevant value for alpha_max in this system is not the value for the fissile material, but is instead:
   alpha_max = 1/(average neutron life outside of core)
Since the average neutron life outside the core is far longer than the prompt neutron lifetime within it, this effective alpha_max is likely to be at least an order of magnitude smaller than the core material alpha_max value.

4.1.5.3 Predetonation

An optimally efficient fission explosion requires that the explosive disassembly of the core occur when the neutron multiplication rate (designated alpha) is at a maximum. Ideally the bomb will be designed to compress the core to this state (or close to it) before injecting neutrons to initiate the chain reaction. If neutrons enter the mass after criticality, but before this ideal time, the result is predetonation (or preinitiation): disassembly at a sub-optimal multiplication rate, producing a reduced yield.

How significant this problem is depends on the reactivity insertion rate. Something like 45 multiplication intervals must elapse before really significant amounts of energy are released. Prior to this point predetonation is not possible. The number of these intervals that occur during a period of time is obtained by integrating alpha over the period. When alpha is effectively constant it is simply alpha*t. During insertion, alpha is not constant. When insertion begins, its value is zero. If a neutron is injected early in insertion and insertion is slow, we can accumulate 45 multiplication intervals when alpha is still quite low. In this case a dramatic reduction in yield will occur. On the other hand, if it were possible for insertion to be so fast that full insertion is achieved before accumulating enough multiplication intervals to disassemble the bomb then no predetonation problem would exist.

To evaluate this problem let us consider a critical system with initial radius r_0 undergoing uniform spherical compression, with the radius decreasing at a constant rate v. Then alpha is:

Eq. 4.1.5.3-1
   alpha = alpha_max_0 * ((r_0/(r_0 - v*t))^3 - ((r_0 - v*t)/r_0))

Integrating, we obtain:
Eq. 4.1.5.3-2
   Int[alpha] = alpha_max_0 * [r_0^3/(2*v*(r_0 - v*t)^2) - r_0/(2*v) - t + v*t^2/(2*r_0)]

This allows us to compute the number of elapsed multiplication intervals between times t_1 and t_2. For example, consider a system with a critical radius r_0 = 4.5 cm, a radial implosion velocity v = 2.5x10^5 cm/sec, and alpha_max_0 = 2.8x10^8/sec. Figure 4.1.5.3-1 shows the accumulation of elapsed neutron multiplication intervals (Y axis) as implosion proceeds (seconds on X axis).

Recall that disassembly occurs when the speed of sound, c_s, integrated over the life of the chain reaction is equal to r - r_c, the difference between the outer radius and the critical radius. Since c_s is proportional to the square root of the energy released, it increases by a factor of e every 2 multiplication intervals. Disassembly thus occurs quite abruptly, effectively occurring over a period of two multiplication intervals. The condition for disassembly is thus:

Eq. 4.1.5.3-3
   r(t) - r_c(t) = 2*c_s(t)/alpha(t)

for some time t. Since r - r_c is a polynomial function, and c_s is a transcendental (exponential) function, no closed form means of calculating t is possible. However these functions are monotonically increasing in the range of values of interest so numeric and graphical techniques can easily determine when the disassembly condition occurs. The value of alpha at that point then determines efficiency.

Taking our previous example (r_0 = 4.5 cm, v = 2.5x10^5 cm/sec, alpha_max_0 = 2.8x10^8/sec) we can plot the net implosion distance (r - r_c) and the integrated expansion distance (2*c_s/alpha) against the implosion time. This is shown in the log plot in Figure 4.1.5.3-2 for the period between 1 and 1.3 microseconds. Distance is in centimeters (Y axis) and time is in seconds (X axis).

If a neutron is present at the beginning of insertion, we see that the disassembly condition occurs at t = 1.25x10^-6 sec. At this point 52 multiplication intervals have elapsed, and the effective value of alpha is 8.6x10^7/sec. The corresponding yield is about 0.5 kt. The parameters above approximately describe the Fat Man bomb. This shows that even in the worst case, with neutrons present at the moment of criticality, quite a substantial yield would have been created. Predetonation does not necessarily result in an insignificant fizzle. It is not feasible though to make a high explosive driven implosion system fast enough to completely defeat predetonation through insertion speed alone (radiation driven implosion and fusion boosting offer means of overcoming it however).

The likelihood of predetonation occurring depends on the neutron background, the average rate at which neutron injection events occur. I use the term "neutron injection event" instead of simply talking about neutrons for a specific reason: the major source of neutrons in a fission device is spontaneous fission of the fissile material itself (or of contaminating isotopes). Each spontaneous fission produces an average of 2-3 neutrons (depending on the isotope). However, these neutrons are all released at the same moment, and thus either a fission chain reaction is initiated at that moment, or they all very quickly disappear. Each fission is a single injection event; neutrons from other sources are uncorrelated and are thus individual injection events.
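The worked example above can be checked directly against Eqs. 4.1.5.3-1 and 4.1.5.3-2; a minimal sketch using the parameter values quoted in the text:

  # Sketch: Eqs. 4.1.5.3-1 and 4.1.5.3-2 for the example parameters above
  # (r_0 = 4.5 cm, v = 2.5e5 cm/s, alpha_max_0 = 2.8e8/s).
  R0, V, ALPHA_MAX_0 = 4.5, 2.5e5, 2.8e8

  def alpha(t):
      r = R0 - V * t
      return ALPHA_MAX_0 * ((R0 / r) ** 3 - r / R0)

  def elapsed_intervals(t):
      r = R0 - V * t
      return ALPHA_MAX_0 * (R0**3 / (2 * V * r**2) - R0 / (2 * V)
                            - t + V * t**2 / (2 * R0))

  t = 1.25e-6
  print(elapsed_intervals(t))   # ~52 multiplication intervals
  print(alpha(t))               # ~8.6e7 /s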
Now neutron injection during insertion is not guaranteed to initiate a divergent chain reaction. At criticality (alpha equals zero), each fission generates on average one fission in the next generation. Since each fission produces nu neutrons (nu is in the range of 2-3 neutrons, 2.9 for Pu-239), this means that each individual neutron has only a 1/nu chance of causing a new fission. At positive values of alpha, the odds are better of course, but clearly we must then consider the probability that each injection actually succeeds in creating a divergent chain reaction. This probability is dependent on alpha, but since non-fission capture is a significant possibility in any fissile system, it does not truly converge to 1 regardless of how high alpha is (although with plutonium it comes close). Near criticality the probability of starting a chain reaction (P_chain) for a single neutron is thus about 34% for plutonium, and 40% for U-235. Since spontaneous fission injects multiple neutrons, the P_chain for this injection event is high, about 70% for both Pu-239 and U-235.

If the average interval between neutron injection events is R_inj, then the probability of initiating a chain reaction during an insertion time of length T is given by the Poisson function:

Eq. 4.1.5.3-4
   P_init = 1 - e^((-T/R_inj)*P_chain)

If T is much smaller than R_inj then this equation reduces approximately to P_init = (T/R_inj)*P_chain. When T is much smaller than R_inj predetonation is unlikely, and the yield of the fission bomb (which will be the optimum yield) can be predicted with high confidence. As the ratio of T/R_inj becomes larger, yield variability increases. When (T/R_inj)*P_chain is equal to ln 2 (0.693...) then the probabilities of predetonation and no predetonation are equal, although when predetonation occurs close to full assembly the yield reduction is small. As T/R_inj continues to increase predetonation becomes virtually certain. With a large enough value of T/R_inj the yield becomes predictable again, but this time it is the minimum yield that results when neutrons are present at the beginning of insertion. For an implosion bomb a typical spread between the optimum and minimum yields is something like 40:1.

In the Fat Man bomb the neutron source consisted of about 60 g of Pu-240, which produced an average of one fission every 37 microseconds. The probability of predetonation was 12% (from a declassified Oppenheimer memo). Assuming an average P_chain of 0.7, we can estimate the insertion time at 6.7 microseconds, or 4.7 microseconds if P_chain was close to 1. The chance of large yield reduction was much smaller than this however. There was a 6% chance of a yield < 5 kt, and only a 2% chance of a yield < 1 kt. As we have seen, in no case would the yield have been smaller than 0.5 kt or so.
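A minimal sketch of Eq. 4.1.5.3-4 applied to the Fat Man figures just quoted (one spontaneous fission per 37 microseconds, P_chain of about 0.7):

  import math

  R_INJ = 37.0      # microseconds between injection events
  P_CHAIN = 0.7

  def p_predet(T):                          # T = insertion time, microseconds
      return 1.0 - math.exp(-(T / R_INJ) * P_CHAIN)

  print(p_predet(6.7))                      # ~0.12, the quoted 12% figure

  # Working backward from the 12% figure instead:
  T_est = -math.log(1.0 - 0.12) * R_INJ / P_CHAIN
  print(T_est)                              # ~6.8 microseconds (cf. 6.7 above)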
Spontaneous fission is not the only cause for concern, since neutrons can enter the weapon from outside. Natural neutron sources are not cause for concern, but in a combat situation very powerful sources of neutrons may be encountered - other nuclear weapons. One kiloton of fission yield produces a truly astronomical number of excess neutrons - about 3x10^24, with a fluence of 1.5x10^10 neutrons/cm^2 500 m away. A kiloton of fusion yields 3-4 times as many. The fission reaction itself emits all of its neutrons in less than a microsecond, but due to moderation these neutrons arrive at distant locations over a much longer period of time. Most of them arrive in a pulse lasting a millisecond, but thermal neutrons can continue to arrive for much longer periods of time. This is not the whole problem though. Additional neutrons called "delayed neutrons" continue to be emitted for about a minute from the excited fission products. These amount to only 1% or so of the prompt neutrons, but this is still an average arrival rate of 2.5x10^6 neutrons/cm^2-sec for a kiloton of fission at 500 m.

With weapons sensitive to predetonation, careful spacing of explosions in distance and time may be necessary. Neutron hardening - lining the bomb with moderating and neutron absorbing materials - may be necessary to hold predetonation problems to a tolerable level (it is virtually impossible to eliminate it entirely in this way).

4.1.6 Methods of Core Assembly

The principal problem in fission weapon design is how to rapidly assemble or compress the fissile material from a subcritical state to a supercritical one. Methods of doing this can be classified in two ways.

Subsonic assembly means that shock waves are not involved. Assembly is performed by adiabatic compression, or by continuous acceleration. As a practical matter, only one subsonic assembly scheme needs to be considered: gun assembly.

Supersonic assembly means that shock waves are involved. Shock waves cause instantaneous acceleration, and naturally arise whenever the very large forces required for extremely rapid assembly occur. They are thus the natural tools to use for assembly. Shocks are normally created by using high explosives, or by collisions between high velocity bodies (which have in turn been accelerated by high explosive shocks). The term "implosion" is generally synonymous with supersonic assembly. Most fission weapons have been designed with assembly schemes of this type.

Assembly may be performed by compressing the core along one, two, or three axes. One-D compression is used in guns and in plane shock wave compression schemes. Two and three-D compression are known as cylindrical implosion and spherical implosion respectively. Plane shock wave assembly might logically be called "linear implosion", but this term has been usurped (in the US at any rate) by a variant on cylindrical implosion (see below). The basic principles involved with these approaches are discussed in detail in Section 3.7, Principles of Implosion.

To the approaches just mentioned, we might add some more difficult-to-classify hybrid schemes such as: "pseudo-spherical implosion", where the mass is compressed into a roughly spherical form by convergent shock waves of more complex form; and "linear implosion", where a compressive shock wave travels along a cylindrical body (or other axially symmetric form - like an ellipsoid), successively squeezing it from one end to the other (or from both ends towards the middle). Schemes of this sort may be used where high efficiency is not called for, and difficult design constraints are involved, such as severe size or mass limitations. Hybrid combinations of gun and implosion are also possible - firing a bullet into an assembly that is also compressed.

The number of axes of assembly naturally affects the overall shape of the bomb. One-D assembly methods naturally tend to produce long, thin weapon designs; 2-D methods lead to disk-shaped or short cylindrical systems; and 3-D methods lead to spherical designs.

The subsections detailing assembly methods are divided into gun assembly (subsonic assembly) and implosion assembly (supersonic assembly). Even though it superficially resembles gun assembly, linear implosion is discussed in the implosion section since it actually has much more in common with other shock compression approaches.
The performance of an assembly method can be evaluated by two key metrics: the total insertion time and the degree of compression. Total insertion time (and the related insertion rate) is principally important for its role in minimizing the probability of predetonation. The degree of compression determines the efficiency of the bomb, the chief criterion of bomb performance. Short insertion times and high compression are usually associated since the large forces needed to produce one also tend to cause the other.

4.1.6.1 Gun Assembly

This was the first technique to be seriously proposed for creating fission explosions, and the first to be successfully developed. The first nuclear weapon to be used in war was the gun-type bomb called Little Boy, dropped on Hiroshima.

Basic gun assembly is very simple in both concept and execution. The supercritical assembly is divided into two pieces, each of which is subcritical. One of these, the projectile, is propelled into the other, called the target, by the pressure of propellant combustion gases in a gun barrel. Since artillery technology is very well developed, there are really no significant technical problems involved with designing or manufacturing the assembly system.

The simple single-gun design (one target, one projectile) imposes limits on weapon mass, efficiency, and yield that can be substantially improved by using a "double-gun" design using two projectiles fired at each other. These two approaches are discussed in separate sections below. Even more sophisticated "complex" guns, which combine double guns with implosion, are discussed under hybrid assembly techniques.

Gun designs may be used for several applications. They are very simple, and may be used when development resources are scarce or extreme reliability is called for. Gun designs are natural where weapons can be relatively long and heavy, but weapon diameter is severely limited - such as nuclear artillery shells (which are "gun type" weapons in two senses!) or earth penetrating "bunker busters" (here the characteristics of a gun tube - long, narrow, heavy, and strong - are ideal). Single guns are used where designs are highly conservative (early US weapons, the South African fission weapon), or where the inherent penalties of the design are not a problem (bunker busters perhaps). Double guns are probably the most widely used gun approach (in atomic artillery shells for example).

4.1.6.1.1 Single Gun Systems

We might conclude that a practical limit for simple gun assembly (using a single gun) is a bit less than 2 critical masses, reasoning as follows: each piece must be less than 1 critical mass; if we have two pieces, then after they are joined the sum must be less than 2 critical masses.

Actually we can do much better than this. If we hollow out a supercritical assembly by removing a chunk from the center like an apple core, we reduce its effective density. Since the critical mass of a system is inversely proportional to the square of the density, we have increased the critical mass of the remaining material (which we shall call the target) while simultaneously reducing its actual mass. The piece that was removed (which will be called the bullet) must still be a bit less than one critical mass since it is solid. Using this reasoning, letting the bullet have the limiting value of one full critical mass, and assuming the neutron savings from reflection is the same for both pieces (a poor assumption for which correction must be made), we have:
Eq. 4.1.6.1.1-1
   M_c/((M - M_c)/M)^2 = M - M_c

where M is the total mass of the assembly, and M_c is the standard critical mass. The solution of this cubic equation is approximately M = 3.15 M_c. In other words, with simple gun assembly we can achieve an assembly of no more than 3.15 critical masses. Of course a practical system must include a safety factor, and reduce the ratio to a smaller value than this.

The weapon designer will undoubtedly surround the target assembly with a very good neutron reflector. Since the bullet will not be surrounded by this reflector until it is fired into the target, its effective critical mass limit is higher, allowing a larger final assembly than the 3.15 M_c calculated above. Looking at U-235 critical mass tables for various candidate reflectors we can estimate the achievable critical mass ratios taking into account differential reflector efficiency. M_c for U-235 (93.5% enrichment) reflected by 10.16 cm of tungsten carbide (the reflector material used in Little Boy) is 16.5 kg; when reflected by 5.08 cm of iron it is 29.3 kg (the steel gun barrel of Little Boy was an average of 6 cm thick). This is a ratio of 1.78, and is probably close to the achievable limit (a beryllium reflector might push it to 2). Revising Eq. 4.1.6.1.1-1 we get:

Eq. 4.1.6.1.1-2
   M_c/((M - (1.78 M_c))/M)^2 = M - (1.78 M_c)

which has a solution of M = 4.51 M_c. If a critical mass ratio of 2 is used for beryllium, then M = 4.88 M_c. This provides an upper bound on the performance of simple gun-type weapons.

Some additional improvement can be had by adding fast neutron absorbers to the system, either natural boron, or boron enriched in B-10. A boron-containing sabot (collar) around the bullet will suppress the effect of neutron reflection from the barrel, and a boron insert in the target will absorb neutrons internally thereby raising the critical mass. In this approach the system would be designed so that the sabot is stripped off the bullet as it enters the target, and the insert is driven out of the target by the bullet. This system was apparently used in the Little Boy weapon.

Using the M_c for 93.5% enriched U-235, the ratio M/M_c for Little Boy was (64 kg)/(16.5 kg) = 3.88, well within the limit of 4.51 (ignoring the hard-to-estimate effects of the boron absorbers). It appears then that the Little Boy design (completed some six months before the required enriched uranium was available) was developed with the use of >90% enrichment uranium in mind. The actual fissile load used in the weapon was only 80% enriched however, with a corresponding WC reflected critical mass of 26.5 kg, providing an actual ratio of 64/26.5 = 2.4.

The mass-dependent efficiency equation shows that it is desirable to assemble as many critical masses as possible. Applying this equation to Little Boy (and ignoring the equation's limitations in the very low yield range) we can examine the effect of varying the amount of fissile material present:

m_rel   Yield
1.05    80 kg
1.1     1.2 tons
1.2     17 tons
1.3     78 tons
1.4     220 tons
1.5     490 tons
1.6     930 tons
1.8     2.5 kt
2.0     5.2 kt
2.25    10.5 kt
2.40    15.0 kt   (LITTLE BOY)
2.5     18.6 kt
2.75    29.6 kt
3.0     44 kt
3.1

If its fissile content had been increased by a mere 25%, its yield would have tripled.
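The yields in this table can be reproduced (a minimal sketch) by scaling the mass dependent terms of Eq. 4.1.5.1.4-3 and normalizing to the Little Boy entry (m_rel = 2.4, 15 kt), with yield taken as proportional to mass times efficiency:

  # Sketch: yield scaling with m_rel from Eq. 4.1.5.1.4-3.
  # Yield ~ m_rel * (1 - m_rel**(-2/3))**2 * (m_rel**(1/3) - 1)**2,
  # normalized to the Little Boy point (m_rel = 2.4 -> 15 kt).
  def mass_factor(m_rel):
      return m_rel * (1 - m_rel ** (-2.0 / 3)) ** 2 * (m_rel ** (1.0 / 3) - 1) ** 2

  REF_M, REF_KT = 2.4, 15.0
  for m in (1.1, 1.5, 2.0, 3.0):
      print(f"m_rel = {m}: ~{REF_KT * mass_factor(m) / mass_factor(REF_M):.3g} kt")
  # ~0.0012 kt (1.2 tons), ~0.49 kt, ~5.2 kt, ~44 kt -- matching the table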
The explosive efficiency of Little Boy was 0.23 kt/kg of fissile material (1.3%), compared to 2.8 kt/kg (16%) for Fat Man (both are adjusted to account for the yield contribution from tamper fast fission). Use of 93.5% U-235 would have at least doubled Little Boy's yield and efficiency, but it would still have remained disappointing compared to the yields achievable using implosion and the same quantity of fissile material.

4.1.6.1.2 Double Gun Systems

Significant weight savings are possible by using a "double-gun" - firing two projectiles at each other to achieve the same insertion velocity. With all other factors being the same (gun length, projectile mass, materials, etc.) the mass of a gun varies with the fourth power of velocity (doubling velocity requires quadrupling pressure, and quadrupling barrel thickness increases mass sixteen-fold). By using two projectiles the required velocity is cut by half, and so is the projectile mass (for each gun). On the other hand, to keep the same total gun length, each projectile must be accelerated in half the distance, and of course there are now two guns. The net effect is to cut the required mass by a factor of eight. The mass of the breech block (which seals the end of the gun) reduces this weight saving somewhat, and of course there is the offsetting added complexity.

A double gun can improve on the achievable assembled mass size since the projectile mass is divided into two sub-critical pieces, each of which can be up to one critical mass in size. Modifying Eq. 4.1.6.1.1-1 we get:

Eq. 4.1.6.1.1-3

M_c/((M - 2M_c)/M)^2 = M - 2M_c

with a solution of M = 4.88 M_c. Taking into account the effect of differential reflector efficiency we get mass ratios of 3.56 (tungsten carbide) and 4 (beryllium), which give assembled mass size limits of M = 7.34 M_c and M = 8 M_c respectively.

Another variant of the double gun concept is to still have only two fissile masses - a hollow mass and a cylindrical core as in the single gun - but to drive them both together with propellant. One possible design would be to use a constant diameter gun bore equal to the target diameter, with the smaller diameter core being mounted in a sabot. In this design the target mass would probably be heavier than the core/sabot system, so one end of the barrel might be reinforced to take higher pressures. Another more unusual approach would be to fire the target assembly down an annular (ring shaped) bore. This design appears to have been used in the U.S. W-33 atomic artillery shell, which is reported to have had an annular bore.

These larger assembled masses give significantly more efficient bombs, but also require large amounts of fissile material to achieve them. And since there is no compression of the fissile material, the large efficiency gains obtainable through implosive compression are lost. These shortcomings can be offset somewhat using fusion boosting, but gun designs are inherently less efficient than implosion designs when comparing equal fissile masses or yields.

4.1.6.1.3 Weapon Design and Insertion Speed

In addition to the efficiency and yield limitations, gun assembly has some other significant shortcomings: First, guns tend to be long and heavy. There must be sufficient acceleration distance in the gun tube before the projectile begins insertion. Increasing the gas pressure in the gun can shorten this distance, but requires a heavier tube. Second, gun assembly is slow.
Since it is desirable to keep the weight and length of the weapon down, practical insertion velocities are limited to velocities below 1000 m/sec (usually far below). The diameter of a core is on the order of 15 cm, so the insertion time must be at least 150 microseconds or so. In fact, achievable insertion times are much longer than this.

Taking into account only the physical insertion of the projectile into the core underestimates the insertion problem. As previously indicated, to maximize efficiency both pieces of the core must be fairly close to criticality by themselves. This means that a critical configuration will be achieved before the projectile actually reaches the target. The greater the mass of fissile material in the weapon, the worse this problem becomes. With greater insertion distances, higher insertion velocities are required to hold the probability of predetonation to a specified value. This in turn requires greater accelerations or acceleration distances, further increasing the mass and length of the weapon. In Little Boy a critical configuration was reached when the projectile and target were still 25 cm apart. The insertion velocity was 300 m/sec, giving an overall insertion time of 1.35 milliseconds.

Long insertion times like this place some serious constraints on the materials that can be used in the bomb, since it is essential to keep neutron background levels very low. Plutonium is excluded entirely; only U-235 and U-233 may be used. Certain designs may be somewhat sensitive to the isotopic composition of the uranium also. High percentages of even-numbered isotopes may make the probability of predetonation unacceptably high. The 64 kg of uranium in Little Boy had an isotopic purity of about 80% U-235. The 12.8 kg of U-238 and U-234 produced a neutron background of around 1 fission/14 milliseconds, giving Little Boy a predetonation probability of 8-9%. In contrast to the Fat Man bomb, predetonation in a Little Boy type bomb would result in a negligible yield in nearly every case. The predetonation problem also prevents the use of a U-238 tamper/reflector around the core. A useful amount of U-238 (200 kg or so) would produce a fission background of 1 fission/0.9 milliseconds. Gun-type weapons are obviously very sensitive to predetonation from other battlefield nuclear explosions. Without hardening, gun weapons cannot be used within a few kilometers of a previous explosion for at least a minute or two.

Attempting to push close to the mass limit is risky also. The closer the two masses are to criticality, the smaller the margin of safety in the weapon, and the easier it is to cause accidental criticality. This can occur if a violent impact dislodges the projectile, allowing it to travel toward the target. It can also occur if water leaks into the weapon, acting as a moderator and rendering the system critical (in this case though a high yield explosion could not occur).

Due to the complicated geometry, calculating where criticality is achieved in the projectile's travel down the barrel is extremely difficult, as is calculating the effective value of alpha vs. time as insertion continues. Elaborate, computation-intensive Monte Carlo techniques are required. In the development of Little Boy these things had to be extrapolated from measurements made in scale models.

Once insertion is completed, neutrons need to be introduced to begin the chain reaction.
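The insertion-time and predetonation figures above can be reproduced with simple arithmetic. The sketch below treats the neutron background as a Poisson process and assumes that any background fission occurring during insertion counts as a predetonation - a simplification, but good enough to recover the quoted numbers.

import math

# Minimum insertion time for a ~15 cm core at the practical velocity limit
core_diameter = 0.15          # m
v_max = 1000.0                # m/s, practical upper limit quoted above
print(core_diameter / v_max)  # 1.5e-4 s, i.e. about 150 microseconds

# Little Boy: criticality reached 25 cm before seating, 300 m/s insertion,
# overall insertion time quoted as 1.35 ms (300 m/s * 1.35e-3 s ~ 0.4 m of
# travel, the gap plus the projectile seating distance)
t_insert = 1.35e-3            # s
fission_rate = 1.0 / 14e-3    # background of ~1 fission per 14 ms
p_predet = 1.0 - math.exp(-fission_rate * t_insert)
print(f"predetonation probability ~ {100 * p_predet:.0f}%")  # ~9%, vs. the 8-9% quoted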
One route to introducing these neutrons is to use a highly reliable "modulated" neutron initiator, an initiator that releases neutrons only when triggered. The sophisticated neutron pulse tubes used in modern weapons are one possibility. The Manhattan Project developed a simple beryllium/polonium 210 initiator named "Abner" that brought the two materials together when struck by the projectile.

If neutron injection is reliable, then the weapon designer does not need to worry about stopping the projectile. The entire nuclear reaction will be completed before the projectile travels a significant distance. On the other hand, if the projectile can be brought to rest in the target without recoiling back, then an initiator is not even strictly necessary. Eventually the neutron background will start the reaction unaided. A target designed to stop the projectile once insertion is complete is called a "blind target". The Little Boy bomb had a blind target design. The deformation expansion of the projectile when it impacted on the stop plate of the massive steel target holder guaranteed that it would lodge firmly in place. Other designs might add locking rings or other retention devices. Because of the use of a blind target design, Little Boy would have exploded successfully without the Abner initiators. Oppenheimer decided to include the initiators in the bomb only fairly late in the preparation process. Even without Abner, the probability that Little Boy would have failed to explode within 200 milliseconds was only 0.15%; the probability of a delay as long as one second was vanishingly small - 10^-14.

Atomic artillery shells have tended to be gun-type systems, since it is relatively easy to make a small diameter, small volume package this way (at the expense of large amounts of U-235). Airbursts are the preferred mode of detonation for battlefield atomic weapons which, for an artillery shell travelling downward at several hundred meters per second, means that initiation must occur at a precise time. Gun-type atomic artillery shells always include polonium/beryllium initiators to ensure this.

4.1.6.2 Implosion Assembly

High explosive driven implosion assembly uses the ability of shock waves to instantaneously compress and accelerate material to high velocities. This allows compact designs to rapidly compress fissile material to densities much higher than normal on a time scale of microseconds, leading to efficient and powerful explosions. The speed of implosion is typically several hundred times faster than gun assembly (e.g. 2-3 microseconds vs. 1 millisecond). Densities twice the normal maximum value can be reached, and advanced designs may be able to do substantially better than this (compressions of three and four fold are often claimed in the unclassified literature, but these seem exaggerated). Weapon efficiency is typically an order of magnitude better than gun designs.

The design of an implosion bomb can be divided into two parts: the shock wave generation system (the explosives and detonation system that create and shape the implosion wave), and the implosion hardware (the tamper, pusher, reflector, and fissile material driven by that wave).

The high explosive system may be essentially unconfined (like that in the Fat Man bomb), but increased explosive efficiency can be obtained by placing a massive tamper around the explosive. The system then acts like a piston turned inside out: the explosive gases are trapped between the outer tamper and the inner implosion hardware, which is driven inward as the gases expand. The added mass of the tamper is no doubt greater than the explosive savings, but if the tamper is required anyway (for radiation confinement, say) then it adds to the compactness of the design.
If you have not consulted Section 3.7 Principles of Implosion, it may be a good idea to do so.

4.1.6.2.1 Energy Required for Compression

As explained in Section 3.4 Hydrodynamics, shock compression expends energy in three ways: in compressing the material, in accelerating it (kinetic energy), and in entropic heating. Only the first of these is ultimately desirable for implosion, although depending on the system design some or all of the kinetic energy may be reclaimable as compressive work. The energy expended in entropic heating is not only lost, but also makes the material more resistant to further compression.

Shock compression always dissipates some energy as heat, and is less efficient than gentle isentropic (constant entropy) compression. Examining the pressure and total energy required for isentropic compression thus provides a lower bound on the work required to reach a given density.

Below are curves for the energy required for isentropic and shock compression of uranium up to a compression factor of 3. For shock compression only the energy that appears as internal energy (compression and heating) is included; kinetic energy is ignored. The energy expenditure figures on the X axis are in ergs/cm^3 of uncompressed uranium; the Y axis gives the relative volume change (V/V_0). Shock compression, being less efficient, is the upper curve. It can be seen that as compression factors rise above 1.5 (a V/V_0 ratio of 0.67), the amount of work required for shock compression compared to isentropic compression rises rapidly. The kink in the shock compression curve at V/V_0 of 0.5 is not a real phenomenon; it is due to the transition from experimental data to a theoretical Thomas-Fermi EOS.

It is interesting to note that to double the density of one cubic centimeter of uranium (18.9 grams), 1.7 x 10^12 ergs is required for shock compression. This is the amount of energy found in 40 grams of TNT, about twice the weight of the uranium. The efficiency of an implosion system at transferring high explosive energy to the core is generally not better than 30%, and may be worse (possibly much worse if the design is inefficient). This allows us to make a good estimate of the amount of explosive required to compress a given amount of uranium or plutonium to high density (a minimum of 6 times the mass of the fissile material for a compression factor of 2). These curves also show that very high shock compressions (four and above) are so energetically expensive as to be infeasible. To achieve a factor of only 3, 7.1x10^11 ergs/g of uranium is required. Factoring in implosion efficiency (30%), the high explosive (if it is TNT) must have a mass 56 times that of the material being compressed. Reports in the unclassified literature of compressions of four and higher can thus be safely discounted.

Compression figures for plutonium are classified above 30 kilobars, but there is every reason to believe that they are not much different from those of uranium. Although there are large density variations from element to element at low pressure, the low density elements are also the most compressible, so that at high pressures (several megabars) the plot of density vs. atomic number becomes a fairly smooth function. This implies that what differences there may be in behavior between U and Pu at low pressure will tend to disappear in the high pressure region.
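The energy bookkeeping above is easy to verify. The sketch below takes 4.2 x 10^10 ergs per gram as the energy content of TNT (an assumed standard value); everything else comes from the figures just quoted.

ERG_PER_GRAM_TNT = 4.2e10          # ~1000 cal/g, assumed TNT energy content

# Doubling the density of 1 cm^3 of uranium (18.9 g) takes ~1.7e12 ergs
e_double = 1.7e12                  # ergs
print(e_double / ERG_PER_GRAM_TNT)           # ~40 g of TNT, about twice the uranium mass

# At ~30% coupling efficiency, HE mass needed per unit mass compressed 2x:
efficiency = 0.30
e_per_gram_U_2x = e_double / 18.9            # ~9e10 erg per gram of uranium
print(e_per_gram_U_2x / efficiency / ERG_PER_GRAM_TNT)   # ~7, in line with the "minimum of 6 times"

# Compression by a factor of 3 needs ~7.1e11 erg per gram of uranium:
e_per_gram_U_3x = 7.1e11
print(e_per_gram_U_3x / efficiency / ERG_PER_GRAM_TNT)   # ~56 times the compressed mass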
Even in the low pressure region, in fact, the available information shows that the difference in behavior between uranium and plutonium isn't all that great, despite the astonishingly large number of phases (six) and bizarre behavior exhibited by plutonium at atmospheric pressure. The highest density phases of both metals have nearly identical atomic volumes at room pressure, and the number of phases of both metals drops rapidly with increasing pressure, with only two phases existing for both metals above 30 kilobars. The lowest density phase of plutonium, the delta phase, in particular disappears very rapidly. The amount of energy expended in compression at these low pressures is trivial. The compression data for uranium is thus a good substitute for plutonium, especially at high pressures and high compressions.

The shock and isentropic pressures corresponding to the compression energy curves are shown below. The pressures shown on the X axis are in kilobars; the Y axis gives the relative volume change (V/V_0).

Since the compression energies of interest vary by many orders of magnitude over compressions ranging up to 3, it is often more convenient to look at logarithmic plots of energy. Figure 4.1.6.2.1-3, below, gives the isentropic curve from 10^7 ergs/cm^3 to 10^12 ergs/cm^3. Since the energy for shock compression is virtually identical to the isentropic value at small compressions, the curve for shock compression is given only for compression energies above 10^10 erg/cm^3 (V/V_0 ~ 0.9).

4.1.6.2.2 Shock Wave Generation Systems

The only practical means of generating shock waves in weapons is through the use of high explosives. When suitably initiated, these energetic materials support detonation waves: a self-sustaining shock wave that triggers energy-releasing chemical reactions, and is driven by the expanding gases that are produced by these reactions.

Normally a high explosive is initiated at a single point. The detonation propagates as a convex detonation wave, with a more or less spherical surface, from that point. To drive an implosion, a divergent detonation wave must be converted into a convergent one (or a planar one for linear implosion). Three approaches can be identified for doing this.

4.1.6.2.2.1 Multiple Initiation Points

In this approach, the high explosive is initiated simultaneously by a large number of detonators all over its surface. The idea is that if enough detonation points exist, then it will approximate the simultaneous initiation of the entire surface, producing an appropriately shaped shock from the outset. The problem with this approach is that colliding shock waves do not tend to "smooth out"; rather, the reverse happens. A high pressure region forms at the intersection of the waves, leading to high velocity jets that outrun the detonation waves and disrupt the hoped-for symmetry.

The multiple detonation point approach was the first one tried at Los Alamos during the Manhattan Project to build a spherical implosion bomb. Attempts were made to suppress the jetting phenomenon by constantly increasing the number of points, or by inserting inert spacers at the collision points to suppress the jets. The problems were not successfully worked out at the time. Since the war this approach has been used with reasonable success in laboratory megagauss field experiments employing the simpler cylindrical geometry. There is also evidence of continuing US interest in this approach. It is not clear whether this technique has been successfully adapted for use in weapons.
4.1.6.2.2.2 Explosive Lenses

The basic idea here is to use the principle of refraction to shape a detonation wave, just as it is used in optics to shape a light wave. Optical lenses use combinations of materials in which light travels at different speeds. This difference in speed gives rise to the refractive index, which bends the wave when it crosses the boundary between materials. Explosive lenses use materials that transmit detonation or shock waves at different speeds.

The original scheme used a hollow cone of an explosive with a high detonation velocity, and an inner cone of an explosive with a low velocity. The detonator initiates the high velocity explosive at the apex of the cone. A high velocity detonation wave then travels down the surface of the hollow cone, initiating the inner explosive as it goes by. The low velocity detonation wave lags behind, causing the formation of a concave (or planar) detonation wave. With any given combination of explosives, the curvature of the wave produced is determined by the apex angle of the lens. The narrower the angle, the greater the curvature. However, for a given lens base area, the narrower the angle, the taller the lens, and the greater its volume. Both of these are undesirable in weapons, since volume and mass are at a premium.

To create a spherical implosion wave, a number of inward facing lenses need to be arranged on the surface of a sphere so that the convergent spherical segments that each produces merge into one wave. There is substantial advantage in using a large number of lenses. Having many lenses means that each lens has a small base area, and needs to produce a wave with a smaller curvature, both of which reduce the thickness of the lens layer. A more symmetrical implosion can probably be achieved with more lenses also.

It is important to have the lens detonation points (and optical axes) spaced as regularly as possible to minimize irregularities, and to make the height of each lens identical. The largest number of points that can be spaced equidistantly from their neighbors on the surface of a sphere is 20 - corresponding to the 20 triangular facets of an icosahedron (imagine the sphere encased in a circumscribed polyhedron, with each facet touching the sphere at one point). The next largest number is 12 - corresponding to the 12 pentagonal facets of the dodecahedron. 12 lenses, even 20 lenses, is an undesirably small number (although some implosion systems have used the 20 point icosahedral layout). A close approximation to strict regularity can be achieved with more points by interleaving a dodecahedron and icosahedron to produce a polyhedron tiled with hexagonal and pentagonal facets: 20 hexagons and 12 pentagons, for a total of 32 points. This pattern is the same familiar one found on a soccer ball, and was used as the original implosion system lens layout in Gadget and other early US nuclear weapons. Designs with 40, 60, 72, and 92 lenses have also been used (although these do not rely on Platonic solids for providing the layout pattern). More lenses lead to a thinner, less massive explosive lens shell, and greater implosion uniformity. The penalty for more lenses is more fabrication effort, and a more powerful and complex initiation system (not a trivial problem originally, but greatly simplified by modern pulse power technology).

A simple implosion system could be very massive. The 32 point systems used in early US nuclear weapons had an external diameter of 1.4 m and weighed over 2000 kg.
Current systems may be less than 30 cm in diameter, and weigh as little as 20 kg, but probably do not follow the same design approach as earlier weapons.

To a degree these multi-lens systems all suffer from the same shortcoming as the basic multi-point detonation approach: strict uniformity of the spherical implosion wave is unachievable. The detonation wave spreads out radially from each detonation point, so each wave produces a circular segment of a spherical wave. If you consider an icosahedron or a "soccer ball", you can see that when circles are inscribed in each of the regular polygons they touch each of their neighbor circles at one point. This marks the moment when the individual wavelets start to merge into a single wave. The gaps left between the inscribed circles, however, are irregular areas where distortions are bound to arise as the wave edges spread into them, possibly even leading to jetting.

Since the shock wave created by the lens exits from it at the velocity of the slow (and relatively weak) explosive, it is desirable to have a layer of powerful explosive inside the lens system (perhaps the same one used as the fast lens component). This layer provides most of the driving force for the implosion; for the most part the lens system (which may well be much more massive) simply provides a mechanism for spherical initiation.

Ideally, the best combination of explosives is the fastest and slowest that are available. This provides the greatest possible refractive index, and thus bending effect, and allows using a wider lens angle. The fastest and slowest explosives generally known are HMX (octogen) and baratol respectively. HMX has a detonation velocity of 9110 m/sec (at a pressed density of 1.89), while the dense explosive baratol (76% barium nitrate/24% TNT) has a velocity of 4870 m/sec (cast density 2.55). Explosives with slightly slower detonation velocities include the even denser plumbatol - 4850 m/sec (cast density 2.89) for a composition of 70% lead nitrate/30% TNT; and the relatively light boracitol - 4860 m/sec (cast density 1.55) for a composition of 60% boric acid/40% TNT. Mixtures of TNT with glass or plastic microspheres have proven to be an effective, light weight, and economical slow explosive in recent unclassified explosive lens work (I don't have data on their velocities though).

During WWII Los Alamos developed lenses using a combination of Composition B (or Comp B) for the fast explosive (detonation velocity of 7920 m/sec, at a cast density of 1.72), and baratol for the slow explosive. Later systems have used the very fast HMX as a fast explosive, often as a plastic bonded mixture consisting almost entirely of HMX. Plumbatol, a denser and slightly slower explosive, may have been used in some later lens system designs. Boracitol is definitely known to have been used, probably in thermonuclear weapon triggers and perhaps in other types of weapons as well.

The idea of explosive lenses appears to have originated with M. J. Poole of the Explosives Research Committee in England. In 1942 he prepared a report describing a two-dimensional arrangement of explosives (RDX and baratol) to create a plane detonation wave. This idea was brought to Los Alamos in May 1944 by James Tuck, who expanded it by suggesting a 3-D lens for creating a spherical implosion wave as a solution to making an implosion bomb. A practical lens design was proposed separately by Elizabeth Boggs of the US Explosives Research Laboratory, and by John von Neumann.
The Boggs proposal was the earlier of the two, although it was von Neumann's proposal that directly influenced the Manhattan Project.

The task of developing a successful spherical implosion wave system is extremely difficult. Although the concept involved is simple, actually designing a lens is not trivial. The detonation wave velocity is affected by events occurring some distance behind the front. When the wave crosses from the fast explosive into the slow explosive it does not instantly assume the steady state detonation velocity of the slow explosive. Unlike the analogy with light, the velocity change is gradual and occurs over a significant distance. Since energy can be lost through the surface of the lens, thus reducing the fast wave velocity, the test environment of the lens also affects its performance. The behavior of a lens can only be calculated using sophisticated 2-D and 3-D hydrodynamic computer codes that have been validated against experimental data.

Practical lens development generally requires a combination of experimentation - requiring precision explosive manufacture and sophisticated instruments to measure shock wave shape and arrival times - and numerical modelling (computer simulation) to extrapolate from test results. An iterative design, test, and redesign cycle allows the development of efficient, high-performance lenses. During the Manhattan Project, due to the primitive state of computers, high explosive science, and instrumentation, lenses could only be designed by trial and error (guided to some extent by scaling laws deduced from previous experiments). This required the detonation of over 20,000 test lenses (and for each one tested, several were fabricated and rejected). When successful sub-scale implosion systems were scaled up to full size, it was discovered that the lenses had to be redesigned.

Assembling the lenses into a complete implosion system aggravates the design and development problems. To avoid shock wave collisions that disrupt symmetry, the surfaces of the lenses need to be aligned very accurately. In a spherical system, the implosion wave that is created is completely hidden by the layer of detonating explosive. The chief region of interest is a small region in the center with perhaps < 0.1% the volume of the whole system. Very expensive diagnostic equipment and difficult experiments are required to study the implosion process, or even to verify that it works at all. Hemispherical tests can be quite useful, though, to validate lens systems before full spherical testing.

4.1.6.2.2.3 Advanced Wave Shaping Techniques

The conical lens design used by the Manhattan Project and early U.S. nuclear weapons is not the only lens design possible, or even the best. It had the crucial advantage of being simple in form (eliminating the need to design or fabricate complex shapes), and of having a single design variable - the cone apex angle. This made it possible to devise workable lenses with the crude methods then available.

Other geometric arrangements of materials that transmit shocks slowly can be used to shape a convex shock into a concave one. The shock slowing component of a lens, such as the inner cone of a conical explosive lens, does not really need to be another explosive. An inert substance that transmits a shock more slowly than the fast explosive detonation wave will also work. The great range of materials available that are not explosives gives much greater design flexibility.
An additional (potential) advantage is that shock waves attenuate as they travel through non-explosive materials, and slow down. This can make lens design more complex, since this attenuation must be taken into account, but the reduced velocity can also lead to a more compact lens. Care must be taken, though, to ensure that the attenuated shock remains strong enough to initiate the inner explosive layer.

By consulting the equation for shock velocity we can see that a high compressibility (low value of gamma) and a high density both lead to low shock wave velocities. An ideal material would be a highly compressible material of relatively high density. This describes an unusual class of filled plastic foams that have been developed at the Allied-Signal Kansas City Plant (the primary supplier of non-nuclear components for US nuclear weapons). It is quite possible that these foams were developed for use as wave shaping materials.

By extending the idea of custom tailoring the density and compressibility of materials, we can imagine that different arrangements of materials of varying properties can be used to reshape shock waves in a variety of ways. Inserting low density materials, like solid or foam plastics, into explosives can also inhibit detonation propagation and allow the designer to "fold" the path the detonation wave must take. If suitable detonation inhibiting bodies are arranged in a grid inside a cone of high explosive, the same effect as the high explosive lens can be obtained with a lower lens density and with a larger apex angle.

French researchers have described advanced lens systems using alternating layers of explosive and inert material. This creates an anisotropic detonation velocity in the system: very slow across the layers, but fast along them. A compact lens for producing spherically curved waves has been demonstrated using a cylindrical version of this system, with a slow explosive between the inert layers, and a curved "nose cone"-like surface covered by fast explosive.

It is possible to completely and uniformly cover a sphere with circles if the number of lenses (and circles) is less than or equal to two. A single lens capable of bending a single detonation wave into a complete spherically convergent wave can, in principle, be made so that the resulting wave is entirely uniform. This extends the principle of the explosive lens to its most extreme form. It is also possible to use two lenses, each covering a hemisphere, which meet at the equator of the sphere and can smoothly join two hemispherical implosion waves.

The single point detonation system is illustrated below. This idea makes use of a cardioid-like logarithmic spiral:

        fffffff
     fssssssssssf
    fsssssssssssssf
   fCsssssssssssssfD   <- Detonator
    fsssssssssssssf
     fssssssssssf

   f = fast explosive
   s = slow explosive
   C = core

This is not a very practical design as given. The thickness of the slow explosive on the detonator side would have to be considerable to achieve the necessary bending. Inserting detonation path folding spacers in the explosive could also dramatically reduce the size (but would make manufacturing extremely difficult). A variation on this using the French layered explosive approach has also been proposed. It is unlikely that a slow explosive would really be used for the inner slow lens component, since the velocity differential is not that great. The high degree of shock bending required strongly encourages using something that transmits shocks as slowly as possible, such as an advanced inert material.
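How slowly a given inert material transmits a strong shock depends, as noted earlier in this section, on its density and compressibility. The sketch below uses the simplified strong-shock form u_s = sqrt(((gamma+1)/2)*P/rho_0) - an assumed idealization (the full relation is in Section 3.4) - with made-up material values, purely to illustrate the trend.

import math

def strong_shock_velocity(p_dynes_cm2, rho0_g_cm3, gamma):
    """Strong-shock estimate u_s = sqrt(((gamma+1)/2) * P / rho0), in cm/s."""
    return math.sqrt(0.5 * (gamma + 1.0) * p_dynes_cm2 / rho0_g_cm3)

P = 3.0e11   # 300 kilobars, a representative detonation pressure (1 kbar = 1e9 dyn/cm^2)
# (rho0 in g/cm^3, gamma): a light stiff solid vs. a denser, more compressible filled foam
for label, rho0, gamma in [("light, stiff solid", 1.2, 3.0),
                           ("dense, compressible filler", 2.5, 1.8)]:
    u_km_s = strong_shock_velocity(P, rho0, gamma) / 1.0e5   # cm/s -> km/s
    print(f"{label:28s} u_s ~ {u_km_s:.1f} km/s")
# The denser, more compressible (lower gamma) material transmits the shock
# more slowly, which is what makes such fillers attractive wave shapers.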
Such a single-point implosion system would be extremely difficult to design and possibly to manufacture. The continuously varying 3-D surfaces would require considerable experimentation to perfect, and the surfaces would be a nightmare to machine. Once an acceptable shape were developed, and suitable molds or dies were made, the actual manufacture might be quite easy, requiring only pressing of explosives and plastics into molds, or forming metal sheet in a die. The system would remain quite intolerant of any imperfections in dimensions or material, however.

The difficulty in making compact and light implosion systems can be judged by the US progress in developing them. The initial Fat Man implosion system had a diameter of almost 60 inches. A significantly smaller system (30 inches) was not tested until 1951, a 22 inch system in mid-1952, and a 16 inch system in 1955. By 1955 a decade had passed since the invention of nuclear weapons, and hundreds of billions of dollars (in today's money) had been spent on developing and producing bombs and bomb delivery systems. These later systems must have used some advanced wave shaping technologies, which have remained highly classified. Clearly developing them is not an easy task (although the difficulty may be conceptual as much as technological).

4.1.6.2.2.4 Cylindrical and Planar Shock Techniques

Cylindrical and planar shock waves can be generated using the techniques previously described, making allowances for the geometry differences. A cylindrical shock can be created using the 2-D analog of the explosive lens, a wedge shaped lens with the same cross section as the conical version. A planar shock is simply a shaped shock with zero curvature.

A complete cylindrical implosion would require several parallel wedge-shaped explosive lenses arranged around the cylinder axis to form a star shape. To make the implosion truly cylindrical (as opposed to conical) it is necessary to detonate each of these lenses along the entire apex of the wedge simultaneously. This can be done by using a lens made out of sheets of high explosive (supported by a suitable backing) to create a plane shock. The edge of this sheet lens would join the apex of the wedge. This sheet lens need not extend out radially; it can join at an angle so that it folds into the space between the star points.

Some special techniques are also available based on the peculiar characteristics of the 1-D and 2-D geometries. The basic principle for these techniques is the "flying plate line charge", illustrated below. A metal plate is covered on one side with a sheet of explosive. It is detonated on one edge, and the detonation wave travels across the plate. As it does so the detonation accelerates the plate, driving it to the right. After the explosive has completely detonated, the flying plate will be flat again. The angle between the original stationary plate and the flying plate is determined by the ratio between the detonation velocity and the velocity of the accelerated plate. When this high velocity plate strikes the secondary explosive charge, the shock will detonate it, creating a planar detonation.

As described above, the system doesn't quite work. A single detonator will actually create a circular detonation front in the explosive sheet, expanding from the initiation point. This can be overcome by first using a long, narrow flying plate (a flying strip if you will) to detonate the edge of a wide plate. This wide plate can then be used to initiate the planar detonation.
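The plate angle just mentioned is commonly estimated with Taylor's approximation, v_plate = 2*D*sin(theta/2), where D is the detonation velocity. Using that formula, and the illustrative numbers below, is an assumption for the sake of a worked example rather than anything specific to these systems.

import math

def taylor_deflection_angle(v_plate, v_detonation):
    """Plate deflection angle (degrees) from Taylor's approximation
    v_plate = 2 * D * sin(theta / 2)."""
    return 2.0 * math.degrees(math.asin(v_plate / (2.0 * v_detonation)))

D = 7900.0                                  # m/s, a Comp B-class detonation velocity
for v_plate in (1000.0, 2000.0, 3000.0):    # illustrative plate velocities
    theta = taylor_deflection_angle(v_plate, D)
    print(f"v_plate = {v_plate:4.0f} m/s  ->  angle ~ {theta:.1f} deg")
# Faster plates (or slower explosives) mean a larger angle between the
# undisturbed plate and the flying plate.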
The flying strip approach can also be used to detonate the cylindrical lens system described above in place of the sheet lens.

The flying plate scheme can be easily extended to create cylindrical detonations. Consider a cross section view of a hollow truncated cone covered by a layer of explosives. The wide end of the cone is joined to a sheet of explosives with a detonator in the center. The single detonator located on the axis causes an expanding circular detonation in the explosive sheet. When the shock wave reaches the perimeter, it continues travelling along the surface of the cone. The cone collapses starting at the wide end. The angle of the cone is such that a cylindrical flying plate is created that initiates a cylindrical detonation in the secondary explosive.

Flying plate systems are much easier to develop than explosive lenses. Instrumentation for observing their behavior is relatively simple. Multiple contact pins and an oscilloscope can easily measure plate motion, and well established spark gap photography can image the plate effectively.

4.1.6.2.2.5 Explosives

The choice of explosives in an implosion system is driven by the desire for high performance, safety, ease of fabrication, or sometimes by special properties like the slow detonation velocity needed in explosive lenses.

The desire for high performance leads to the selection of very energetic explosives that have very high detonation velocities and pressures (these three things are closely correlated). The highest performance commonly known explosive is HMX. Using HMX as the main explosive will provide the greatest compression. HMX was widely used in US weapons from the late fifties on into the 1970s, often in a formulation called PBX-9404 (although this particular formulation proved to have particularly serious safety problems - causing eight fatalities in a six month period in 1959 among personnel fabricating the explosive). HMX is known to be the principal explosive in many Soviet weapon designs, since Russia is selling the explosive extracted from decommissioned warheads for commercial use. The chemically related RDX is a close second in power. It was the principal explosive used in most early US designs, in the form of a castable mixture called Composition B.

In recent years the US has become increasingly concerned with weapon safety, following some prominent accidents in which HE detonation caused widespread plutonium contamination, and in the wake of repeated fatal explosions during fabrication. Many of the high energy explosives used, such as RDX and HMX, are rather sensitive to shock and heat. While normally an impact on the order of 100 ft/sec is required to detonate one of these explosives, if a sliding or friction-producing impact occurs then these explosives can be set off by an impact as slow as 10 ft/sec (this requires only a drop of 18 inches)! This has led to the use of explosives that are insensitive to shock or fire. Insensitive explosives are all based on TATB (its chemical cousin DATB lacks this marked insensitivity). These explosives have very unusual reaction rate properties that make them extremely insensitive to shock, impact, or heat. TATB is reasonably powerful, being only a little less powerful than Comp B. A composition known as PBX-9503 has been developed that adds 15% HMX to a TATB mixture, creating a compromise between added power and added sensitivity.
Another very strong explosive called PETN has not been used much (or at all) as a main explosive in nuclear weapons due to its sensitivity, although it is used in detonators.

Fabricating explosives for implosion systems is a demanding task, requiring rigid quality control. Many explosive components have complex shapes, most require tight dimensional tolerances, and all require a highly uniform product. Velocity variations cannot be greater than a few percent. Achieving such uniformity means carefully controlling such factors as composition, purity, particle size, crystal structure, curing time, and curing temperature.

Casting was the first method used for manufacturing implosion components, since a very homogeneous product can be produced in fairly complex shapes. Unfortunately the most desirable explosives do not melt, which makes casting of the pure explosive impossible. The original solution adopted by the US to this problem was to use castable mixtures of the desired explosive and TNT. TNT is the natural choice for this, being the only reasonably powerful, easily melted explosive available. Composition B, the first explosive used, typically consisted of 63% RDX, 36% TNT, and 1% wax (cyclotol, a mixture with a higher proportion of RDX to TNT, was used later). Great care must be taken to ensure that the slurry of solid explosive and melted TNT is uniform, since settling occurs. Considerable attention must be paid to controlling the particle size of the solid explosive, and to monitoring the casting, cooling, and curing processes. Mold making is also a challenging task, requiring considerable experimentation at Los Alamos before an acceptable product could be made.

Pressing is a traditional way of manufacturing explosive products, but its inability to make complex shapes, and problems with density variations and voids, prevented its use during WWII. Plastic explosives (that is - soft, pliable explosives) can be pressed into uniform complex shapes quite easily, but their lack of strength makes them unattractive in practical weapon designs.

During the forties and fifties advances in polymer technology led to the creation of PBXs (plastic bonded explosives). These explosives use a polymer binder that sets during or after fabrication to make a rigid mass. The first PBX was developed at Los Alamos in 1947, an RDX-polystyrene formulation later designated PBX 9205. Some early work used epoxy binders that harden after fabrication through chemical reactions, but current plastic binders are thermosetting resins (possibly in combination with a plasticizer). Explosive granules are coated with the plastic binder and formed by pressing, usually followed by machining of the billet. The desire for maximum explosive energy has led to the selection of polymers and plasticizers that actively participate in the explosion, releasing energy through chemical reactions. Emphasis on this has led to undesirable side effects - like sensitization of the main explosive (as occurred with PBX-9404), or poor stability. In the 1970s the W-68 warhead, then comprising a large part of the U.S. submarine warhead inventory, developed problems due to decomposition of the LX-09 PBX being used, requiring the rebuilding of 3,200 warheads. LX-09 also exhibited sensitivity problems similar to PBX-9404; in 1977 three men were killed at the Pantex plant in Amarillo by an LX-09 billet explosion.

Normally the explosive and polymer binder are processed together to form a granulated material called a molding powder.
This powder is formed using hot pressing - either isostatic (hydrostatic) or hydraulic presses - using evacuated molds (1 mm Hg pressure is typical). The formed material may represent the final component, but normally additional machining to final specifications is required. PBXs contain a higher proportion of the desired explosive, possess greater structural strength, and also don't melt. These last two properties make them easier to machine to final dimensions. Plastic bonding is very important in insensitive high explosives (IHEs), since mixing the insensitive explosives with the more sensitive TNT would defeat the purpose of using them. PBX was first used in a full-scale nuclear detonation during the Redwing Blackfoot shot in June 1956. PBXs have replaced melt castable explosives in all US weapons. The PBX compositions that have been used by the U.S. include PBX-9404, PBX-9010, PBX-9011, PBX-9501, LX-04, LX-07, LX-09, LX-10, and LX-11. Insensitive PBXs used are PBX-9502 and LX-17.

Table 4.1.6.2.2.5-1. Basic Properties of Explosives Used in US Nuclear Weapons

Explosive         Detonation     Detonation    Density         Sensitivity
                  Velocity       Pressure
                  (m/sec)        (kilobars)
HMX               9110           390           1.89/pressed    Moderate
LX-10             8820           375           1.86/pressed    Moderate
LX-09             8810           377           1.84/pressed    Moderate
PBX-9404          8800           375           1.84/pressed    Moderate
RDX               8700           338           1.77/pressed    Moderate
PETN              8260           335           1.76/pressed    High
Cyclotol          8035           -             1.71/cast       Low
Comp B 63/36      7920           295           1.72/cast       Low
TATB              7760           291           1.88/pressed    Very Low
PBX-9502          7720           -             1.90/pressed    Very Low
DATB              7520           259           1.79/pressed    Low
HNS               7000           200           1.70/pressed    Low
TNT               6640           210           1.56/cast       Low
Baratol 76/24     4870           140           2.55/cast       Moderate
Boracitol 60/40   4860           -             1.55/cast       Low
Plumbatol 70/30   4850           -             2.89/cast       Moderate

4.1.6.2.2.6 Detonation Systems

Creating a symmetric implosion wave requires close synchronization in firing the detonators. Tolerances on the order of 100 nanoseconds are required. Conventional detonators rely on electrically heating a wire, which causes a small quantity of a sensitive primary explosive to detonate (lead azide, mercury fulminate, etc.). The primary usually then initiates a secondary explosive, like PETN or tetryl, which fires the main charge. The process of resistively heating the wire, followed by heat conduction to the primary explosive until it reaches detonation temperature, requires a few milliseconds, with correspondingly large timing errors. Conventional detonators thus lack the necessary precision for firing an implosion system.

One approach to reducing the duration of action of the detonator is to send a sudden, powerful surge of current through a very fine wire (made of gold or platinum), heating it to the point of vaporization. This technique, called an exploding wire or exploding bridge wire (EBW) detonator, was invented by Luis Alvarez at Los Alamos during the Manhattan Project. Current surge rise times of a fraction of a microsecond are feasible, with a spread in detonation times of a few nanoseconds. An exploding wire detonator can be used to initiate a primary explosive (usually lead azide), as in a conventional detonator. But if the current surge is energetic enough, then the exploding wire can directly initiate a less sensitive booster explosive (usually PETN). The advantage of doing this is that the detonation system is extremely safe from accidental activation by heat, stray currents, or static electricity. Only very powerful, very fast current surges can fire the detonators. This type of exploding wire detonator is one of the safest types of detonators known.
The disadvantage is the need to supply those very powerful, very fast current surges. A typical EBW requires 5 KV, with a peak current of at least 500-1000 amps. A few kiloamps is more typical of most EBW detonators, but a multi-EBW system would probably try to minimize the required current. With sufficient care in detonator design and construction, inherent detonator accuracies of better than 10 nanoseconds are achievable.

Since WWII, a number of detonator designs based on exploding foils have been developed. Exploding foil detonators could be used to fire the booster explosive directly, as in EBW detonators, but generally this implies the use of a different concept called a "slapper" detonator. This idea (developed at Lawrence Livermore) uses the expanding foil plasma to drive another thin foil or plastic film to high velocities, which initiates the explosive by impacting the surface. Normally the driving energy is provided entirely by heating of the foil plasma from the current passing through it, but more sophisticated designs may use a "back strap" to create a magnetic field that drives the plasma forward. Slappers are fairly efficient at converting electrical energy into flyer kinetic energy; it is not hard to achieve 25-30% energy transfer.

A typical slapper detonator consists of an explosive pellet pressed to a high density for maximum strength (plastic bonded explosives can also be used). Next to the explosive pellet is an insulation disk with a hole in the center, which is set against the explosive pellet. An insulating "flyer" film, such as Kapton or Mylar with a metal foil etched on one side, is placed against the disk. A necked down section of the etched foil acts as the bridgewire. The high current firing pulse causes vaporization of the necked down section of the foil. This then shears the insulated flyer, which accelerates down the barrel of the disk and impacts the explosive pellet. This impact energy transmits a shock wave into the explosive, causing it to detonate. Another possible advantage of a slapper detonator is the ability to initiate an area of explosive surface rather than a point. This may make compact implosion systems easier to design. This system has several advantages over the EBW detonator, among them the ability to use less sensitive, more heat-stable explosives in the detonator itself.

Exploding wire detonators were used in the first atomic device, but have since been replaced in the U.S. arsenal by foil slappers, and very probably in all other arsenals as well. Due to the ability of slapper detonators to use insensitive primary explosives, these are almost certainly used with all insensitive high explosive equipped warheads (unless supplanted by an even more advanced technology - like laser detonators).

More recently laser detonating systems have been developed. These use a high power solid state laser to deliver sufficient energy in the form of a short optical pulse to initiate a primary or booster explosive. The laser energy is conducted to the detonator by a fiber optic cable. This is a safe detonator system, but the laser and its power supply are relatively heavy. A typical system might use a 1 W solid state laser to fire a single detonator. It is not known if this system has been used in any nuclear weapons.

Another fast detonator is the spark gap detonator. This uses a high voltage (approx. 5 KV) spark across a narrow gap to initiate the primary explosive.
If a suitably sensitive primary explosive is used (lead azide, or the especially sensitive lead styphnate), then the current required is quite small, and a modest capacitor can supply sufficient power (10-100 millijoules per detonator). The chief disadvantage of this detonator design is that it is one of the least safe known. Static charges, or other induced currents, can very easily fire a spark gap detonator. For this reason they have probably never been used in deployed nuclear weapons.

Detonation systems require a reasonably compact and light high speed pulse power supply. To achieve accurate timing and fast response requires a powerful power source capable of extremely fast discharge, as well as fast, accurate, and reliable switching components, and close attention to managing the inductance of the entire system.

The normal method of providing the power for an EBW multi-detonator system is to discharge a high capacitance, high voltage, low inductance capacitor. Voltage range is several kilovolts; 5 KV is typical. Silicone oil filled capacitors using Kraft paper, polypropylene, or Mylar dielectrics are suitable types, as are ceramic-type capacitors. Compact power supplies for charging capacitors are readily available. The capacitor must be matched with a switch that can handle high voltages and currents, and transition from a safe non-conducting state to a fully conducting one rapidly without adding undue inductance to the circuit. A variety of technologies are available: triggered spark gaps, krytrons, thyratrons, and explosive switches are some that could be used.

The current rise time of the firing pulse can actually be much longer than the required timing accuracy, since the firing of an EBW detonator is basically determined by achieving a threshold current. As long as the current rise is synchronous for all detonators, they will fire simultaneously. Still, a rise time of no more than 2-3 microseconds is desirable. The capacitance required for a 5 KV EBW is on the order of 1 microfarad per detonator. A 32 detonator system (like Fat Man) thus requires at least 32 microfarads, and must produce a 32 kA current surge. For a rise time of 3 microseconds this requires no more than 100 nanohenries of total inductance. A modern plastic cased capacitor of 40 microfarads, rated at 5 KV, with 100 nanohenries of inductance weighs about 4 kg.

Triggered spark gaps are sealed devices filled with high pressure air, argon, or SF6. A non-conducting gap between electrodes is closed by applying a triggering potential to a wire or grid in the gap. Compact versions of these devices are typically rated at 20-100 KV, and 50-150 kiloamps. The triggering potential is typically one-half to one-third the maximum voltage, with switch current rise times of 10-100 nanoseconds.

Krytrons are a type of cold cathode trigger discharge tube. Krytrons are small gas filled tubes. Some contain a small quantity of Ni-63, a weak beta emitter (92 yr half-life, 63 KeV), that keeps the gas in a slightly ionized state. Applying a trigger voltage causes an ionization cascade to close the switch. These devices have maximum voltage ratings from 3 to 10 KV, but peak current ratings of only 300-3000 amps, making them unsuitable for directly firing multiple EBW detonators. They are small (2 cm long), rugged, and accurate (jitter 20-40 nanoseconds) however, and are triggered by voltages of only 200-300 V.
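As a consistency check on the capacitor bank figures quoted above (32 microfarads at 5 KV, about 100 nanohenries of inductance, a 2-3 microsecond rise), the sketch below treats the firing circuit as an ideal lossless LC discharge - an assumed idealization that ignores resistance and the detonator loads.

import math

C = 32e-6     # farads: 1 microfarad per detonator x 32 detonators
V = 5000.0    # volts
L = 100e-9    # henries: the total inductance budget quoted above

energy = 0.5 * C * V * V                    # stored energy, joules
t_rise = 0.5 * math.pi * math.sqrt(L * C)   # quarter period of the LC ring-up
i_peak = V * math.sqrt(C / L)               # peak current of the ideal discharge

print(f"stored energy ~ {energy:.0f} J")                        # ~400 J
print(f"current rise time ~ {t_rise * 1e6:.1f} microseconds")   # ~2.8, i.e. the 2-3 us target
print(f"peak current ~ {i_peak / 1e3:.0f} kA")                  # ~90 kA, well above the 32 kA needed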
Krytrons are thus very convenient for triggering other high current devices, like spark gaps, by discharging through a pulse current transformer (they can, in turn, be conveniently triggered using a small capacitor, pulse transformer, and a thyristor). Krytrons are used commercially in powerful xenon flash lamp systems, among other uses. Krytrons have faster response times than other types of trigger discharge tubes. A vacuum tube relative of the krytron, the sprytron, is very similar and has very high radiation resistance. It is probably the sprytron that is actually used in U.S. nuclear weapons. The only manufacturer of krytrons and sprytrons is EG&G, the same company that provided the spark gap cascades for Gadget, Fat Man, and other early atomic weapons.

Other switching techniques that have been developed are explosive switches, and various other vacuum or gas-filled tube devices like hydrogen thyratrons and arc discharge tubes. An explosive switch uses the shock wave from an explosive charge to break down a dielectric layer between metal plates. Both this technique and the thyratron were under development at Los Alamos at the end of WWII.

Detonators are wired in parallel for reliability and to minimize inductance. For additional reliability, redundant detonation circuits may be used. In the Fat Man bomb the detonators were wired in parallel in spark gap triggered circuits. There were four detonating circuits, any two of which provided sufficient power for all 32 detonators. Each detonator was wired to two different circuits so that the failure of any one detonator circuit (and up to two of them) would not have affected the implosion. The whole system was fired by a spark gap cascade - the trigger spark gap supplied a current surge to fire the four main circuits simultaneously. With sufficient care, timing accuracies of 10 nanoseconds are achievable, which is probably better than practical implosion systems require (100 nanosecond accuracy is more typical).

Although the types of switches and capacitors mentioned here are, for the most part, available from many commercial sources and have many commercial uses, they are nonetheless subject to dual use export controls. Attempts to export krytrons illegally have been especially well publicized over the years, but they are not the only such devices suitable for these applications.

The detonator bridge wire used in EBWs is typically made of high purity gold or platinum, 20-50 microns wide and about 1 mm long. PETN is invariably used as the explosive, possibly with a tetryl booster charge. Slapper detonators use metal foils (usually aluminum, but gold foil would work well also) deposited on a thin plastic film (usually Kapton). A wider variety of primary explosives can be used. PETN or HMX may have been used in slappers in earlier weapon systems, but weapons using IHE probably use the highly heat stable HNS.

A possible substitute for a capacitor bank in a detonation system is an explosive generator, also called a flux compression generator (FCG). This consists of a primary coil that is energized by a capacitor discharge to create a strong magnetic field. At the moment of maximum field strength an explosive charge drives a conducting plate into the field, rapidly compressing it. The rising magnetic field induces a powerful high voltage current in a secondary coil. Any of the switching technologies mentioned above can then be used to switch the load to the detonating system.
A substantial fraction of the chemical energy of the explosive can be converted to electrical power in this way. FCGs can potentially provide ample power for detonators and external neutron initiators at a very modest weight. Extensive research on these generators has been conducted at Los Alamos and Lawrence Livermore, and they are known to have been incorporated into actual weapon designs (possibly the Mk12, which had 92 initiation points).

4.1.6.2.3 Implosion Hardware Designs

Once created, implosion shocks can be used to drive different implosion hardware systems. By implosion hardware, I mean systems of materials that are inert from the viewpoint of chemical energy release: the fissile material itself, and any reflectors, tampers, pushers, drivers, buffers, etc. One approach to designing an implosion hardware system is to simply use the direct compression of the explosive generated shock wave to accomplish the desired reactivity insertion. This is the "solid pit design" used in Gadget and Fat Man. A variety of other designs make use of high velocity collisions to generate the compressive shocks for reactivity insertion. These velocities of course are obtained from the energy provided by the high explosive shocks.

4.1.6.2.3.1 Solid Pit Designs

Since shock waves inherently compress the material through which they pass, an obvious way of using the implosion wave is simply to let it pass through the fissile core, compressing it as it converges on the center. This technique can be (and has been) used successfully, but it has some inherent problems, not all of which can be remedied.

First, the detonation pressure of available explosives (limit 400 kilobars) is not high enough for much compression. A 25% density increase is all that can be obtained in uranium at this pressure; delta-phase plutonium can reach 50% due to the low pressure delta->alpha phase transformation. This pressure can be augmented in two ways: by reflecting the shock at high impedance interfaces, and by convergence. Since the fissile material is about an order of magnitude denser than the explosive itself, the first phenomenon is certain to occur to some extent. It can be augmented by inserting one or more layers of materials of increasing density between the explosive and the dense tamper and fissile material in the center. As a limit, shock pressure can double when reflected at an interface. To approach this limit the density increase must be large, which means that no more than 2 or 3 intermediate layers can be used. The second phenomenon, shock convergence, is limited by the ratio of the fissile core radius to the outer radius of the implosion hardware. The intensification is approximately proportional to this ratio. A large intensification thus implies a large diameter system - which is bulky and heavy.

Another problem with the solid pit design is the existence of the Taylor wave, the sharp drop in pressure with increasing distance behind the detonation front. This creates a ramp-shaped shock profile: a sudden jump to the peak shock pressure, followed by a slope down to zero pressure a short distance behind the shock front. Shock convergence actually steepens the Taylor wave, since the front is augmented by convergence to a greater degree than the material behind the front (which is at a larger radius). If the Taylor wave is not suppressed, by the time the shock reaches the center of the fissile mass the outer portions may have already expanded back to their original density.
The use of intermediate density "pusher" layers between the explosive and the tamper helps suppress or flatten the Taylor wave. The reflected high pressure shock reinforces the pressure behind the shock front so that instead of declining to zero pressure, it declines to a pressure equal to the pressure jump at the reflection interface. That is, if P is the initial shock pressure, and P -> 0 indicates a drop from P to zero through the Taylor wave, then the reflection augments both by p: (P + p) -> (0 + p).

The Gadget/Fat Man design had an intermediate aluminum pusher between the explosive and the uranium tamper, and had a convergence factor of about 5. As a rough estimate, one can conclude that the 300 kilobar pressure of Composition B could be augmented by a factor of 4 by shock reflection (doubling at the HE/Al interface, and again at the Al/U interface), and a factor of 5 by convergence, leading to a shock pressure of 6 megabars at the plutonium core. Assuming an alpha phase plutonium equation of state similar to that of uranium, this leads to a compression of a bit less than 2, which when combined with the phase transformation from delta to alpha gives a maximum density increase of about 2.5. The effective compression may have been significantly less than this, but it is generally consistent with the observed yield of the devices.

4.1.6.2.3.2 Levitated Core Designs

In the solid pit design, the Taylor wave is reduced but not eliminated. Also, the kinetic energy imparted by the convergent shock is not efficiently utilized. It would be preferable to achieve uniform compression throughout the fissile core and tamper, and to be able to make use of the full kinetic energy in compressing the material (bringing the inward motion of material in the core to a halt at the moment of maximum compression). This can be accomplished by using a shell, or hollow core, instead of a solid one (see Section 3.7.4 Collapsing Shells). The shell usually consists of an outer layer of tamper material, and an inner layer of fissile material.

When the implosion wave arrives at the inner surface of the shell, the pressure drops to zero and an unloading wave is created. The shock compressed material (which has also been accelerated inward) expands inward to zero pressure, converting the compression energy into even greater inward directed motion (approximately doubling it). In this way energy loss by the outward expansion of material in the Taylor wave region is minimized.

Simply allowing this fast imploding hollow shell to collapse completely would achieve substantial compression. In practice this is never done. It is more efficient to allow the collapsing shell to collide with a motionless body in the center (the "levitated core"), the collision creating two shock waves - one moving inward to the center of the stationary levitated core (accelerating it inward), and one moving outward through the imploding shell (decelerating it). The pressure between these two shocks is initially constant, so that when the converging shock reaches the center of the core, the region extending from the center out to the location of the expanding shock has achieved reasonably even and efficient compression.

I use the word "reasonably" because the picture is a bit more complicated than just described. First, by the time the shell impacts the levitated core it has acquired the character of a thick collapsing shell. The inner surface will be moving faster than the outer surface, and a region close to the inner surface will be somewhat compressed.
Second, the inward and outward moving shocks do not move at constant speed. The inward moving shock is a classical converging shock with a shock velocity that accelerates and strengthens all the way to the center. The outward moving shock is a diverging or expanding shock that slows down and weakens. In the classical converging shock region (the levitated core, and the innermost layer of the colliding shell) high compression is achieved and the material is brought to a halt when the shock reaches the center. In the outer diverging region, only about half of the implosion velocity is lost when the diverging shock compresses and decelerates it, and there is insufficient time for inward flow to bring it to a halt before the converging shock reaches the center. Thus the outer region is still collapsing (slowly) when the inner shock reaches complete convergence (assuming that the outer shock has not yet reached the surface of the pit (tamper shell plus core) and initiated an inward moving release wave).

Immediately after the converging shock reaches the center, the shock rebound begins. This is an outward moving shock that accelerates material away from the center, creating an expanding low density region surrounded by a layer compressed to an even greater degree than in the initial implosion. Once the rebound shock expands to a given radius the average density of the volume within that radius falls rapidly. For a radius well outside the classical converging shock region, the true average density may continue to increase due to the continuing collapse of the outer regions until the rebound shock arrives. The structure of the shell/core system at the time of rebound shock arrival is actually hollow - a low density region in the center with a highly compressed shell - but the average density is at a maximum. Whether this configuration is acceptable or not depends on the weapon design; it may be acceptable in a homogeneous un-boosted core, but it will not be acceptable in a boosted or a composite core design where high density at the center is desired.

Since the divergence of the outward shock is not great, and it is offset somewhat by the slower collapse velocity of the outer surface of the thick shell, we can treat it approximately as a constant speed shock traversing the impacting shell. The converging shock can be treated by the classical model (see Section 3.7.3 Convergent Shocks). This allows us to estimate the minimum shell/levitated core mass ratio for efficient compression: the case in which the shock reaches the surface of the shell and the center simultaneously.

If the shell and levitated core have identical densities and compressibilities, then the two shocks will have the same initial velocity (the velocity change behind the shock front in both cases will be exactly half the impact velocity). If the shell has thickness r_shell, then the shock will traverse the shell in time:

Eq. 4.1.6.2.3.2-1
   t_shell = r_shell/v

If the levitated core has radius r_lcore, the shock will reach the center in time:

Eq. 4.1.6.2.3.2-2
   t_lcore = (r_lcore/v)*alpha

Alpha in this case is the convergent shock scaling parameter (see Section 3.7.3). For a spherical implosion, and a gamma of 3 (approximately correct for most condensed matter, and for uranium and plutonium in particular), alpha is equal to 0.638 (the exact value will be somewhat higher than this). Since we want t_shell = t_lcore:

Eq. 4.1.6.2.3.2-3
   r_shell = alpha * r_lcore = 0.638 r_lcore

That is, the thickness of the shell is smaller than the radius of the core by a factor of 0.638. But since volume is proportional to the cube of the radius:

Eq. 4.1.6.2.3.2-4
   m_shell = density*(4*Pi/3)*[(r_shell + r_lcore)^3 - r_lcore^3]

and

Eq. 4.1.6.2.3.2-5
   m_lcore = density*(4*Pi/3)*r_lcore^3

This gives us the mass ratio:

Eq. 4.1.6.2.3.2-6
   m_shell/m_lcore = ((1.638)^3 - 1^3)/1^3 = 3.4

Thus we want the impacting shell to have at least 3.4 times as much mass as the levitated core. The ratio used may be considerably larger.
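For readers who want to check the arithmetic, here is a minimal sketch (Python) of the mass ratio estimate just derived. The function name and the unit-radius normalization are mine; only the alpha = 0.638 scaling parameter and the equations above come from the text.

```python
# Minimal sketch: minimum shell/levitated-core mass ratio for simultaneous
# shock arrival, assuming the constant-speed outer shock and classical
# converging inner shock described above (alpha = 0.638 for gamma = 3).

def min_shell_to_core_mass_ratio(alpha=0.638):
    """Shell thickness r_shell = alpha * r_lcore gives equal transit times;
    with equal densities the volumes stand in for the masses."""
    r_lcore = 1.0                       # work in units of the core radius
    r_shell = alpha * r_lcore           # Eq. 4.1.6.2.3.2-3
    m_shell = (r_shell + r_lcore)**3 - r_lcore**3   # Eq. -4 (4*Pi/3*density cancels)
    m_lcore = r_lcore**3                             # Eq. -5
    return m_shell / m_lcore

print(min_shell_to_core_mass_ratio())   # ~3.4, as in Eq. 4.1.6.2.3.2-6
```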
Now it is important to realize that in principle the shell/levitated core mass ratio is unrelated to the tamper/fissile material mass ratio. The boundary between tamper and fissile material can be located in the shell (i.e. the shell is partly tamper and partly fissile, the levitated core entirely fissile), it can be located between the shell and core (i.e. the shell is tamper and the core is fissile), or it can be located in the core (i.e. the shell is tamper, and the core is partly tamper and partly fissile). The tamper/fissile material ratio is determined by neutron conservation, hydrodynamic confinement, and critical mass considerations.

It appears however that the initial practice of the US (starting with the Mk4 design and the Sandstone test series) was to design levitated core weapons so that the shell was the uranium tamper, and the levitated portion was a solid fissile core. The mass of the tamper would have been similar to that used in the Gadget (115 kg), a large enough mass to allow the use of different pit sizes and compositions while ensuring sufficient driver mass. These early pure fission bombs were designed to use a variety of pits to produce different yields, and to allow the composition (U-235/Pu-239 ratio) to be varied to match the actual production schedules of these materials.

Levitation is achieved by having some sort of support structure that will not disrupt the implosion symmetry. The most widely used approach seems to be the use of truncated hollow cones (or conically tapered thin walled tubes if you prefer), usually made out of aluminum. Six of these are used, in pairs on opposite sides of the levitated core for each axis of motion. Supporting wires (presumably under tension) have also been used. The levitated core of the Hurricane device (the first British test) used "caltrops" (probably six of them) for support. A caltrop is a four pronged device originally used in the Middle Ages as an obstacle against soldiers and horses, and more recently against vehicle tires. Each of the prongs can be thought of as the vertex of a tetrahedron, with the point where they all join as the tetrahedral center. A caltrop has the property that no matter how you drop it, three of the prongs form a tripod with the fourth prong pointed straight up. Dimples on the core might be used to seat the support prongs securely. Another possibility is to use a strong lightweight foam to fill the gap between shell and core (such foams have been produced at the Allied-Signal Kansas City Plant). A significant problem with using a foam support is that plastic foams are usually excellent thermal insulators, which could cause severe problems from self-heating in a plutonium levitated core.

A serious problem with hollow shell designs is the tensile stress generated by the Taylor wave (see the discussion of Free Surface Release Waves in Solids in Section 3).
As the release wave moves out from the inner shell surface, it encounters declining pressure due to the Taylor wave. The "velocity doubling" effect generates a pressure drop equal in magnitude to the shock peak pressure. If the pressure that the release wave encounters is below this pressure, a negative pressure (tension) is created (you can think of this as the faster moving part of the plate pulling the slower part along). This tensile stress builds up the farther back the release wave travels. If it exceeds the strength of the material it will fracture or "spall". This can cause the entire inner layer of material to peel off, or it may simply create a void. A new release wave will begin at the spall surface. Spalling disrupts implosion symmetry and can also ruin the desired collision timing. It was primarily fears concerning spalling effects that prevented the use of levitated core designs in the first implosion bombs.

One approach to dealing with spalling is simply to make sure that excessive tensile stresses do not appear in the design. This requires strong materials, and at least one of the following:

Another approach is to adopt the "if you can't beat 'em, join 'em" strategy. Instead of trying to prevent separation in the shell, accommodation for the phenomenon is included in the design. This can be done by constructing the shell from separate layers. When the release wave reaches the boundary between shell layers (and tensile stress exists at that point), the inner layer will fly off the outer layer, and a new release wave will begin. This will create a series of imploding shells, separated by gaps. As each shell layer converges toward the center, the inner surface will accelerate while the outer surface will decelerate. This will tend to bring the layers back together. If they do not rejoin before impact occurs with the core, a complicated arrangement of shocks may develop. The design possibilities for using these multiple shocks will not be considered here.

The concept of the levitated core and colliding shells can be extended to multiple levitation - having one collapsing shell collide with a second, which then collides with the levitated core. The outer shell, due to the concentration of momentum in its inner surface and the effects of elastic collision, could enhance the velocity of the inner shell. This idea requires a large diameter system to be practical. It is possible that the "Type D" pit (that is, the hardware located between the explosive and fissile core) developed in the early fifties for the 60 inch diameter HE assemblies then in the US arsenal was such a system. It considerably increased explosive yields with identical cores. It seems almost certain that the most efficient kiloton range pure fission bomb ever tested - the Hamlet device detonated in Upshot-Knothole Harry (19 May 1953) - used multiple levitation. It was described as being the first "hollow core" device, presumably meaning the use of a fissile core that itself was an outer shell and an inner levitated core. A TX-13D bomb assembly (a 60 inch implosion system using a Type D pit) was used with the core. The yield was 37 kt.

4.1.6.2.3.3 Thin Shell (Flying Plate) Designs

Thin shell, or flying plate, designs take the hollow core idea to an extreme. In these designs a very thin, but relatively large diameter, shell is driven inward by the implosion system. As with the regular hollow core design, a levitated core in the center is used.
The advantages of a flying plate design are: a greatly increased efficiency in the utilization of high explosive energy, and a higher collision speed - leading to faster insertion and greater compression for a given amount of explosive. Thin shell flying plate designs are standard now in the arsenals of the nuclear weapon states.

A thin plate, a few millimeters thick, is thinner than the Taylor wave of an explosive shock. The shock acceleration, followed by full release, is completed before the Taylor wave causes a significant pressure drop. The maximum initial shock acceleration is thus achieved. Even greater energy transfer than this occurs, however. When the release wave reaches the plate/explosive interface (completing the expansion and velocity doubling of the plate), a rarefaction wave propagates into the explosive gases. The gases expand, converting their internal energy into kinetic energy, and launching a new (but weaker) shock into the plate. A cyclic process thus develops in which a series of shocks of diminishing magnitude accelerate the plate to higher and higher velocities. If viewed from the inner surface, an observer would see a succession of velocity jumps of diminishing size and at lengthening intervals. The plate continues to accelerate over a distance of a few centimeters.

The maximum velocity achievable by this means can approach the escape velocity of the explosive gases, which is 8.5 km/sec for Comp B. Velocities up to 8 km/sec have been reported using HMX-based explosives. This can be compared to the implosion velocity of the plutonium pit in the Gadget/Fat Man design, which was some 2 km/sec. Optimum performance is found when a small gap (a few mm) separates the high explosive from the plate. Among other things, this gap reduces the strength of the Taylor wave. The gap may be an air space, but it is usually filled with a low impedance material (like a plastic).

The mass ratio between the explosive and the plate largely determines the system performance. For reasonable efficiency it is important to have a ratio r of at least 1 (HE mass/plate mass). At r=1 about 30% of the chemical energy in the explosive is transferred to the plate. Below r=1, the efficiency drops off rapidly. Efficiency reaches a maximum at r=2, when 35% of the energy is transferred. Since a higher mass ratio means more energy available, the actual final velocity and energy in the plate increase monotonically with r, as shown in the table below. Higher values of r also cause the plate to approach its limiting value with somewhat shorter travel distances.

Table 4.1.6.2.3.3-1. Flying Plate Drive Efficiency
Plate/HE Mass Ratio (R) | Energy Fraction Transferred | Relative Velocity | Plate/Detonation Velocity Ratio

By the time the flying plate converges from a radius of 10-20 cm to collide with the levitated core, it is no longer a thin shell. The velocity difference that is inherent in thick shell collapse leads to a collision velocity of the inner surface that is higher than the average plate velocity. Collision velocities of experimental uranium systems of 8.5 km/sec have been reported.

The flying plate can be used in a variety of ways. It can be the collapsing shell of a levitated core design. Or it can be used as a driver which collides with, and transfers energy to, a shell which then implodes on to a levitated core.
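A rough feel for these numbers can be had from a simple energy balance. The sketch below (Python) is a minimal illustration, not a design calculation: it assumes a specific detonation energy of roughly 5 MJ/kg for a Comp B class explosive (my assumption, not a figure from this section) and uses the energy transfer fractions quoted above.

```python
import math

# Minimal sketch: estimate flying-plate velocity from the HE/plate mass ratio r
# and the fraction f of the explosive's chemical energy delivered to the plate.
# e_he is an ASSUMED specific detonation energy (~5 MJ/kg, Comp B class);
# the f values follow the fractions quoted in the text.

def plate_velocity(r, f, e_he=5.0e6):
    """Plate kinetic energy = f * (HE mass) * e_he; with plate mass = 1 and
    HE mass = r, 0.5*v^2 = f * r * e_he."""
    return math.sqrt(2.0 * f * r * e_he)

print(plate_velocity(1.0, 0.30))   # ~1.7 km/s at r = 1 (30% transfer)
print(plate_velocity(2.0, 0.35))   # ~2.6 km/s at r = 2 (35% transfer)
```

Even at r = 2 the plate is only in the 2-3 km/sec range under these assumptions, which makes it clear why approaching the escape-velocity limit quoted above requires much larger explosive/plate mass ratios.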
4.1.6.2.3.4 Shock Buffers

Powerful shock waves can dissipate significant amounts of energy in entropic heating. Energy that contributes to entropy increase is lost to compression. This problem can be overcome by using a shock buffer. A shock buffer is a layer of low impedance (i.e. low density) material that separates two denser layers. When a shock is driven into the buffer from one of the dense layers, a weaker shock of low pressure (but higher velocity) is created (see the section on Shock Waves at a Low Impedance Boundary in Section 3). This shock is reflected at the opposite interface, driving a shock of increased pressure into the second dense layer. This shock is still weaker than the original shock however, and dissipates much less entropy. A series of shock reflections ensues in the buffer, each one increasing the pressure in the buffer, but by diminishing amounts (the pressure of the original shock is the limiting value). A series of shocks is driven into the second dense material, each successive shock creating a pressure jump of diminishing magnitude.

The shock buffer thus effectively splits the original powerful shock into a series of weaker ones, essentially eliminating entropic heating. The first two shocks produced account for most of the compression. The following shocks tend to overtake the leading ones since they are travelling through compressed and accelerated material. Ideally, the shock sequence should be timed so that they all converge at the center of the system. The thickness of the buffer is selected so that this ideal is approached as closely as possible. The usual thickness is probably a few millimeters.

The buffer can be employed to cushion a plate collision also. In this case, the reflected shocks gradually decelerate the impactor (driver plate), and accelerate the driven plate, without dissipating heat. This converts a largely inelastic supersonic collision into an elastic one. If the mass of the driven plate is substantially lower than the mass of the driver, it can be accelerated to greater velocities than the original driver velocity. In principle an elastic collision can boost the driven plate to as much as twice the velocity of the driver (if the driver/driven plate mass ratio is very large). In practice this technique can transfer 65-80% of the driver energy to the driven plate, and provide driven plate velocities that are 50% greater than the driver velocity (or more). Since the explosive/plate mass ratio required for direct explosive drive increases very rapidly for velocities above 50% of the detonation velocity, the buffered plate collision method is the most efficient one for achieving velocities above this.

In a weapon implosion design a thin uranium or tungsten shell would probably be used as a driver. Two likely low density materials for use as buffers are graphite and beryllium. Beryllium is an excellent neutron reflector which is commonly used in nuclear weapon designs for this reason. It thus may be a convenient shock buffer material that does double duty. Graphite is also a good neutron reflector. From information on manufacturing processes used at the Y-12 Plant at Oak Ridge, and the Allied-Signal Kansas City Plant, it is known that thin layers of graphite are used in the construction of nuclear weapons. The use of graphite as a shock buffer is a likely reason.
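The factor-of-two limit quoted above is just the standard result for an ideal one-dimensional elastic collision, which a good buffer approximates by breaking the momentum transfer into many weak shocks. The sketch below (Python) is purely illustrative; the masses and velocity used are arbitrary numbers, not design values.

```python
# Minimal sketch: ideal 1-D elastic collision between a driver plate of mass m1
# moving at velocity v1 and an initially stationary driven plate of mass m2.
# A well-designed shock buffer approaches this limit by replacing one strong
# shock with a series of weak reflected shocks.

def elastic_collision(m1, m2, v1):
    v1_final = (m1 - m2) / (m1 + m2) * v1    # driver velocity after collision
    v2_final = 2.0 * m1 / (m1 + m2) * v1     # driven plate velocity after collision
    return v1_final, v2_final

# Equal masses: the driver stops and the driven plate takes the full velocity.
print(elastic_collision(1.0, 1.0, 6.0))
# Heavy driver, light driven plate: the driven plate approaches twice the
# driver velocity as the mass ratio grows (about 1.67x in this example).
print(elastic_collision(5.0, 1.0, 6.0))
```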
4.1.6.2.3.5 Cylindrical Implosion

The discussion of implosion has implicitly assumed a spherically symmetric implosion, since this geometry is the simplest, and also the most efficient and widely used. Few changes are needed though to translate the discussion above to cylindrical geometry. The changes required all relate to the differences in shock convergence in cylindrical geometry. There is a much lower degree of energy focusing during shock convergence, resulting in a lower pressure increase for the same convergence ratio (reference radius/inner radius). A cylindrical solid core system would thus be much less effective in generating high pressures and compressions.

For a levitated core design, the shell/levitated core mass ratio must be recalculated. The appropriate value for alpha is 0.775 in this case, but the volume only increases by r^2, so:

Eq. 4.1.6.2.3.5-1
   m_shell/m_lcore = ((1.775)^2 - 1^2)/1^2 = 2.15

The possibility of producing cylindrical implosion by methods that do not work for spherical geometries deserves some comment however. The flying plate line charge systems described above (4.1.6.2.2.4 Cylindrical and Planar Shock Techniques) for initiating cylindrical implosion shocks in high explosives can be used to drive flying plates directly. Such a single-stage system would probably not be capable of generating as fast an implosion as a two stage system, one in which the first plate initiates a convergent detonation which then drives a second flying plate. A single stage system would be simpler to develop and build, and potentially lighter and more compact, however.

Cylindrical implosion systems are easier to develop than spherical ones. This is largely because they are easier to observe. Axial access to the system is available during the implosion, allowing photographic and electronic observation and measurement. Cylindrical test systems were used to develop the implosion lens technology at Los Alamos that was later applied to the spherical bomb design.

4.1.6.2.3.6 Planar Implosion

Planar implosion superficially resembles the gun assembly method - one body is propelled toward another to achieve assembly. The physics of the assembly process is completely different however, with shock compression replacing physical insertion. The planar implosion process is some two orders of magnitude faster than gun assembly, and can be used with materials with a high neutron background (i.e. plutonium). By analogy with spherical and cylindrical implosion, the natural name for this technique might be "linear implosion". That name is used, however, for a different approach discussed below in Hybrid Assembly Techniques.

Most of the comments made above about implosion still apply after a fashion, but some ideas, like the levitated core, have little significance in this geometry. Planar implosion is attractive where a cylindrical system with a severe radius constraint exists.

Shock wave lenses for planar implosion are much easier to develop than in other geometries. A plane wave lens is used by itself, not as part of a multi-lens system. It is much easier to observe and measure the flat shock front than the curved shocks in convergent systems. Finally, flat shock fronts are stable while convergent ones are not. Although they tend to bend back at the edges due to energy loss, plane shock fronts actually tend to flatten out by themselves if irregularities occur.

4.1.6.3 Hybrid Assembly Techniques

For special applications, assembly techniques that do not fit neatly in the previously discussed categories may be used.

4.1.6.3.1 Complex Guns

Additional improvements in gun system performance are possible by combining implosion with gun assembly.
The implosion system here would be a very weak one - a layer of explosive to collapse a ring of fissile material or dense tamper onto the gun assembled core. This would allow further increases in the amount of fissile material used, and generate modest efficiency gains through small compression factors. A significant increase in insertion speed is also possible, which may be important where battlefield neutron sources may cause predetonation (this may make the technique especially attractive for artillery shell use). Complex gun approaches have reportedly been used in Soviet artillery shell designs.

4.1.6.3.2 Linear Implosion

In weapons with severe size (especially radius) and mass constraints (like artillery shells) some technique other than gun assembly may be desired. For example, plutonium cannot be used in guns at all, so a plutonium fueled artillery shell requires some other approach.

A low density, non-spherical, fissile mass can be squeezed and deformed into a supercritical configuration by high explosives without using neat, symmetric implosion designs. The technique of linear implosion, developed at LLNL, apparently accomplishes this by embedding an elliptical or football shaped mass in a cylinder of explosive, which is then initiated at each end. The detonation wave travels along the cylinder, deforming the fissile mass into a spherical form. Extensive experimentation is likely to be required to develop this into a usable technique. Three physical phenomena may contribute to reactivity insertion:

Since the detonation generated pressures are transient, and affect different parts of the mass at different times, compression to greater than normal densities does not occur. The reactivity insertion then is likely to be rather small, and weapon efficiency quite low (which can be offset by boosting). The use of metastable delta-phase plutonium alloys is especially attractive in this type of design. A rather weak impulse is sufficient to irreversibly collapse it into the alpha phase, giving a density increase of 23%.

The supercritical mass formed by linear implosion is stable - it does not disassemble or expand once the implosion is completed. This relieves the requirement for a modulated neutron initiator, since spontaneous fission (or a calibrated continuous neutron source) can assure detonation. If desired, a low intensity initiator of the polonium/beryllium type can no doubt be used.

Special initiation patterns may be advantageous in this design, such as annular initiation - where the HE cylinder is initiated along the rim of each end to create a convergent shock wave propagating up the cylinder.

4.1.7 Nuclear Design Principles

The design of the nuclear systems of fission weapons naturally divides into several areas - fissionable materials, core compositions, reflectors, tampers, and neutron initiating techniques.

4.1.7.1 Fissile Materials

In the nuclear weapons community a distinction is made between "fissile" and "fissionable". Fissile means a material that can be induced to fission by neutrons of any energy - fast or slow. These materials always have fairly high average cross sections for the fission spectrum neutrons of interest in fission explosive devices. Fissionable simply means that the material can be induced to fission by neutrons of a sufficiently high energy. As examples, U-235 is fissile, but U-238 is only fissionable.

There are three principal fissile isotopes available for designing nuclear explosives: U-235, Pu-239, and U-233.
There are other fissile isotopes that can be used in principle, but various factors (like cost, half-life, or critical mass size) prevent them from being serious candidates. Of course none of the fissile isotopes mentioned above is actually available in pure form. All actual fissile materials are a mixture of various isotopes, and the proportion of different isotopes can have important consequences in weapon design. The discussion of these materials will be limited here to the key nuclear properties of isotope mixtures commonly available for use in weapons. The reader is advised to turn to Section 6 - Nuclear Materials for more lengthy and detailed discussions of isotopes, and material properties. See also Table 4.1.2-1 for comparative nuclear properties for the three isotopes.

4.1.7.1.1 Highly Enriched Uranium (HEU)

Highly enriched uranium (HEU) is produced by processing natural uranium with isotopic separation techniques. Natural uranium consists of 99.2836% U-238, 0.7110% U-235, and 0.0054% U-234 (by mass). Enrichment processes increase the proportion of the light isotopes (U-235 and U-234) relative to the heavy one (U-238). Enriched uranium thus contains a higher percentage of U-235 (and U-234) than natural uranium, but all three isotopes are always present in significant concentrations. The term "HEU" usually refers to uranium with a U-235 content of 20% or more. Uranium known to have been used in fission weapon designs ranges in enrichment from 80-93.5%. In the US uranium with enrichment around 93.5% is sometimes called Oralloy (abbreviated Oy) for historical reasons (Oralloy, or Oak Ridge ALLOY, was a WWII codename for weapons grade HEU). As much as half of the US weapon stockpile HEU has an enrichment in the range of 20-80%. This material is probably used in thermonuclear weapon designs.

The techniques which have actually been used for producing HEU are gaseous diffusion, gas centrifuges, electromagnetic enrichment (Calutrons), and aerodynamic (nozzle/vortex) enrichment. Other enrichment processes have been used, some even as part of an overall enrichment system that produced weapons grade HEU, but none are suitable for producing the highly enriched product. The original HEU production process used by the Manhattan Project relied on Calutrons; these were discontinued at the end of 1946. From that time on the dominant production process for HEU throughout the world has been gaseous diffusion. The vast majority of the HEU that has been produced to date, and nearly all that has been used in weapons, has been produced through gaseous diffusion. Although it is enormously more energy efficient, the only countries to have built or used HEU production facilities using gas centrifuges have been the Soviet Union, Pakistan, and the United Kingdom. Pakistan's production has been very small, and the United Kingdom apparently has never operated their facility for HEU production.

High enrichment is important for reducing the required weapon critical mass, and for boosting the maximum alpha value for the material. The effect of enrichment on critical mass can be seen in the following table:

Figure 4.1.7.1.1. Uranium Critical Masses for Various Enrichments and Reflectors
(total kg/U-235 content kg; density = 18.9)

Enrichment     Reflector
(% U-235)      None          Nat. U 10 cm   Be 10 cm
93.5           48.0/44.5     18.4/17.2      14.1/13.5
90.0           53.8/48.4     20.8/18.7      15.5/14.0
80.0           68. /54.4     26.5/21.2      19.3/15.4
70.0           86. /60.2     33. /23.1      24.1/16.9
60.0           120 /72.      45. /27.       32. /19.2
50.0           170 /85.      65. /33.       45. /23.
40.0           250 /100      100 /40.       70. /28.
30.0           440 /132      190 /57.       130 /39.
20.0           800 /160      370 /74        245 /49.
The total critical mass, and the critical mass of contained U-235, are both shown. The increase in critical mass with lower enrichment is of course less pronounced when calculated by U-235 content. Even with equivalent critical masses present, lower enrichment reduces yield per kg of U-235 by reducing the maximum alpha. This is due to the non-fission neutron capture cross section of U-238, and the softening of the neutron spectrum through inelastic scattering (see the discussion of U-238 as a neutron reflector below for more details about this).

U-238 has a spontaneous fission rate that is 35 times higher than that of U-235. It thus accounts for essentially all neutron emissions from even the most highly enriched HEU. The spontaneous fission rate in uranium (SF/kg-sec) of varying enrichment can be calculated by:

SF Rate = (fraction U-235)*0.16 + (1 - (fraction U-235))*5.5

For 93.5% HEU this rate (0.5 n/sec-kg) is low enough that large amounts can be used in weapon designs without concern for predetonation. If used in the Little Boy design (which actually used 80% enriched uranium, however) it would produce only one neutron every 31 milliseconds on average. No problem exists for any design up to the limiting size of gun-type weapons. 50% HEU on the other hand would be difficult to use in a gun-type weapon. A beryllium reflector would minimize the mass (and thus the amount of U-238 present), but having a reasonable amount of HEU present (e.g. 2.5 critical masses) would produce one neutron every 3.2 millisecs, making predetonation a significant prospect. The rate is never high enough though to make a significant difference for implosion assembly.
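The numbers in the preceding paragraph follow directly from this formula. The sketch below (Python) reproduces them; the roughly 64 kg HEU load assumed for Little Boy is an outside figure commonly quoted elsewhere, not one given in this section, and the 45 kg critical mass for 50% HEU with a 10 cm beryllium reflector is read from the table above.

```python
# Minimal sketch: spontaneous fission neutron background of enriched uranium,
# using the rate formula given in the text.  The 64 kg Little Boy loading is
# an ASSUMED outside figure, not one quoted in this section.

def sf_rate_per_kg(u235_fraction):
    """Spontaneous fissions per kg-second for uranium of a given enrichment."""
    return u235_fraction * 0.16 + (1.0 - u235_fraction) * 5.5

# 93.5% HEU: ~0.5 SF/kg-sec
print(sf_rate_per_kg(0.935))

# ~64 kg of 93.5% HEU (Little Boy scale): about one emission every 31 ms
print(1.0 / (64.0 * sf_rate_per_kg(0.935)) * 1000, "ms")

# 2.5 critical masses of 50% HEU with a 10 cm Be reflector (45 kg each, from
# the table above): roughly one emission every 3 ms
mass = 2.5 * 45.0
print(1.0 / (mass * sf_rate_per_kg(0.50)) * 1000, "ms")
```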
4.1.7.1.2 Plutonium

Plutonium is produced by neutron bombardment of U-238, which captures a neutron to form U-239. The U-239 then decays into neptunium-239, which decays in turn to form Pu-239. Since the vast majority of nuclear reactors use low enriched uranium fuel (< 20% U-235, 3-4% typically for commercial reactors), they also contain large amounts of U-238. Plutonium production is thus an inevitable consequence of operation in most reactors.

Pu-239 is the principal isotope produced, and is the most desired isotope for use in weapons or as a nuclear fuel. Multiple captures and other side reactions invariably produce an isotope mixture however. The principal contaminating isotope is always Pu-240, formed by non-fission neutron capture by Pu-239. The exposure of U-238 to neutron irradiation is measured by the fuel "burn-up", the number of megawatt-days (thermal) per tonne of fuel. The higher the burn-up, the greater the percentage of contaminating isotopes. Weapon production reactors use fuel burn-ups of 600-1000 MWD/tonne; light water power reactors have a typical design burn-up of 33,000 MWD/tonne, and have been pushed to 45,000 MWD/tonne by using higher enrichment fuel.

Plutonium is commonly divided into categories based on the Pu-240 content:

The first US plutonium weapon (Fat Man) used plutonium with a Pu-240 content of only 0.9%, largely due to the hurried production schedule (only 100 MWD/tonne irradiations were used to get the plutonium out of the pile and into bombs quickly). Modern US nuclear weapons use weapons grade plutonium with a nominal 6.5% Pu-240 content. A lower Pu-240 content is not necessary for correct weapon functioning and increases the cost. The US has produced low-burnup supergrade plutonium to blend with higher burn-up feedstocks to produce weapons grade material.

Plutonium produced in power reactors varies in composition, but its isotope profile remains broadly similar. If U-238 is exposed to extremely high burn-ups as in some fast breeder reactor designs (100,000 MWD/tonne), or if plutonium is separated from spent fuel and used as fuel in other reactors, it tends toward an equilibrium composition. Representative plutonium compositions are:

                  Pu-238   Pu-239   Pu-240   Pu-241   Pu-242
Weapon grade       0.0%    93.6%     5.8%     0.6%     0.0%
                   0.0%    92.8%     6.5%     0.7%     0.0%
Reactor grade      2.0%    61.0%    24.0%    10.0%     3.0%
Equilibrium        4.0%    32.0%    34.0%    15.0%    15.0%

These isotopes do not decay at the same rate, so the isotopic composition of plutonium changes with time (this is also true of HEU, but the decay process there is so slow as to be unimportant). The shortest lived isotopes found in weapon, fuel, or reactor grade plutonium in significant quantities are Pu-241 (13.2 yr) and Pu-238 (86.4 yr). The other isotopes have half-lives in the thousands of years and thus undergo little change over a human lifespan. The decay of Pu-241 (to americium-241) is of particular significance in weapons, since weapons grade plutonium contains no Pu-238 to speak of.

To understand the significance of these composition variations, we need to look at two principal factors: the critical mass size, and the spontaneous fission rate. An additional factor, decay self-heating, will be considered but is much less important. Below are the estimated bare (unreflected) critical masses (kg) for spheres of pure plutonium isotopes in the alpha phase (and of americium-241, since it is formed in weapons grade plutonium):

Pu-238    9 kg
Pu-239   10 kg
Pu-240   40 kg
Pu-241   12 kg
Pu-242   90 kg
Am-241  114 kg

The most striking thing about this table is that they all have critical masses! In contrast U-238 (or natural uranium, or even LEU) has no critical mass, since it is incapable of supporting a fast fission chain reaction. This means that regardless of isotopic composition, plutonium will produce a nuclear explosion if it can be assembled into a supercritical mass fast enough.

Next observe that the critical masses for Pu-239 and Pu-241 are nearly the same, while the critical masses for Pu-240 and Pu-242 are both several times higher. Because of this disparity, Pu-239 and Pu-241 tend to dominate the fissionability of any mixture, and it is commonplace in the literature to talk about these two isotopes as "fissile", while Pu-240 and Pu-242 are termed "non-fissile". However it is not really true that Pu-240 and Pu-242 are non-fissile, which has an important consequence (shown in the table below):

Figure 4.1.7.1.2. Critical Masses for Plutonium of Various Compositions
(total kg/Pu-239 content kg; density = 19.4)

Isotopic Composition        Reflector
(atomic % Pu-239/Pu-240)    None         10 cm nat. U
100% /   0%                 10.5/10.5    4.4/4.4
 90% /  10%                 11.5/10.3    4.8/4.3
 80% /  20%                 12.6/10.0    5.4/4.3
 70% /  30%                 13.9/ 9.7    6.1/4.3
 60% /  40%                 15.4/ 9.2    7.0/4.2
 50% /  50%                 17.2/ 8.6    8.0/4.0
 40% /  60%                 20.0/ 8.0    9.2/3.7
 20% /  80%                 28.4/ 5.7    13. /2.6
  0% / 100%                 40. / 0.0    20. /0.0

We can see that while the critical mass increases with declining "fissile" isotope content, the mass of Pu-239 present in each critical system diminishes. This is the exact opposite of the effect of isotopic dilution in uranium. In the range of isotopic compositions encountered in normal reactor produced plutonium, the content of Pu-239 in the reflected critical assemblies scarcely changes at all.
Thus regardless of isotopic composition, we can estimate the approximate critical mass based solely on the quantities of Pu-239, Pu-241 (and Pu-238) in the assembly. Pu-242, having a higher critical mass, is a more effective diluent, but it is only a minor constituent compared to Pu-240 in most isotopic mixtures. Even if Pu-242 is considered as the main diluent, the picture remains broadly similar.

The reason a relatively low concentration of Pu-240 is tolerable in weapon grade plutonium is due to the emission of neutrons through spontaneous fission. A high performance fission weapon is designed to initiate the fission reaction close to the maximum possible compression achievable by the implosion system, and predetonation must be avoided. The fastest achievable insertion time is probably about 1 microsecond; it was 4.7 microseconds in Fat Man, and many designs will fall somewhere in the middle of this range. We can calculate the spontaneous fission rate in a mass of plutonium with the following formula:

SF Rate (SF/kg-sec) = (%Pu-238)*1.3x10^4 + (%Pu-239)*1.01x10^-1 + (%Pu-240)*4.52x10^3 + (%Pu-242)*8.1x10^3

For the 6.2 kg of plutonium (about 1% Pu-240) in Fat Man this is about 25,000 fissions/sec (or one every 40 microseconds). A weapon made with 4.5 kg of 6.5% Pu-240 weapon grade plutonium undergoes fission at a rate of 132,000 fissions/sec (one every 7.6 microseconds). In an advanced design the window of vulnerability, in which a neutron injection will substantially reduce yield, might be as small as 0.5 microseconds; in this case weapon grade plutonium would produce only a 7% chance of substandard yield.

Even the plutonium found in the discharged fuel of light water power reactors can be used in weapons however. With a composition of 2% Pu-238, 61% Pu-239, 24% Pu-240, 10% Pu-241, and 3% Pu-242 we can calculate a fission rate of 159,000 fissions/kg-sec. If 6-7 kg were required in a design, then the average rate would be about 1 fission/microsecond. A fast insertion would have a significant chance of no predetonation at all, and would produce a substantial yield (a few kt) even in a worst case. The US actually tested a nuclear device made from plutonium with a Pu-240 content of >19% in 1962. The yield was less than 20 kt. Although this was first made public in 1977, the exact amount of Pu-240, yield, and the date of the test are still classified.

Plutonium produces a substantial amount of heat from radioactive decay. This amounts to 2.4 W/kg in weapon grade plutonium, and 14.5 W/kg in reactor grade plutonium. This can make plutonium much warmer than the surrounding environment, and consideration of this heating effect must be taken into account in weapon design to ensure that deleterious temperatures aren't reached under any envisioned operating conditions. Thin shell designs are naturally resistant to these effects, due to the large surface area of the thin plutonium shell. It can cause problems in levitated cores though, since the pit will have little thermal contact with surrounding materials. Self heating can be calculated from the following formula:

Q (W/kg) = (%Pu-238)*5.67 + (%Pu-239)*0.019 + (%Pu-240)*0.07 + (%Pu-241)*0.034 + (%Pu-242)*0.0015 + (%Am-241)*1.06
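Both formulas are straightforward to apply. The sketch below (Python) reproduces the weapon grade and reactor grade numbers quoted above, using the representative compositions from the table earlier in this section; the function names are mine.

```python
# Minimal sketch: spontaneous fission rate and decay self-heating for a given
# plutonium isotopic composition, using the two formulas given in the text.
# Compositions are entered in percent, as in the representative table above.

def sf_rate_per_kg(pu238, pu239, pu240, pu242):
    return pu238 * 1.3e4 + pu239 * 1.01e-1 + pu240 * 4.52e3 + pu242 * 8.1e3

def self_heating_w_per_kg(pu238, pu239, pu240, pu241, pu242, am241=0.0):
    return (pu238 * 5.67 + pu239 * 0.019 + pu240 * 0.07 +
            pu241 * 0.034 + pu242 * 0.0015 + am241 * 1.06)

# 4.5 kg of 6.5% Pu-240 weapon grade plutonium: ~132,000 fissions/sec
print(4.5 * sf_rate_per_kg(0.0, 92.8, 6.5, 0.0))

# Reactor grade (2/61/24/10/3): ~159,000 fissions/kg-sec and ~14.5 W/kg
print(sf_rate_per_kg(2.0, 61.0, 24.0, 3.0))
print(self_heating_w_per_kg(2.0, 61.0, 24.0, 10.0, 3.0))
```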
The extremely weak decay energy of Pu-241 produces little heating considering its very short half-life, but Pu-241 decay does alter the isotopic and chemical composition substantially over a course of several years. Half of it decays over 13.2 years, giving rise to americium-241, a short half-life radioisotope with energetic decay. As Pu-241 is converted into americium, significant increases in self-heating and radiotoxicity occur; a very slight (and probably insignificant) decline in reactivity also occurs.

Perhaps the most important consequence of americium buildup is its effect on the alloy composition. Americium is one of the elements that can serve as an alloying agent to stabilize plutonium in the delta phase. Since alloying agents for this purpose are usually present to the extent of about 3% (atomic) in plutonium, a 0.6% addition of a new alloying agent (americium) is a significant composition change. This is not a serious problem with weapon grade plutonium, although it does have to be taken into account when selecting the alloy. In reactor grade plutonium the effect is quite pronounced, since the decay of Pu-241 can add 10% americium to the alloy over a couple of decades. This would undoubtedly have important effects on alloy density and strength. When refurbishing nuclear weapons it has been routine practice to extract americium from the plutonium and refabricate the pit. This is apparently not essential. The US is currently not refabricating weapon pits, and won't in significant numbers for several more years. Since weapon grade plutonium production has been shut down in the US, Russia, the UK, and France, the remaining supply of this material will become essentially free of Pu-241 (and Am-241 after reprocessing) over the next few decades.

4.1.7.1.2.1 Plutonium Oxide

Any sophisticated weapon design would use plutonium in the form of a metal, probably an alloy. The possibility of using plutonium (di)oxide (PuO2) in a bomb design is of interest because the bulk of the separated plutonium existing worldwide is in this form. A terrorist group stealing plutonium from a repository might seek to use the oxide directly in a weapon.

Plutonium oxide is a bulky green powder as usually prepared. Its color may range from yellow to brown however. Oxygen has an extremely small neutron cross section, so plutonium oxide behaves essentially like a low density form of elemental plutonium. The maximum (crystal) density for plutonium oxide is 11.45, but the bulk powder is usually much less dense. A loose, unconsolidated powder might have a density of only 3-4. When compacted under pressure, substantially higher densities are achievable, perhaps 5-6 depending on the pressure used. When compacted under very high pressure and sintered, the oxide can reach densities of 9.7-10.0.

The critical mass of reactor grade plutonium is about 13.9 kg (unreflected), or 6.1 kg (10 cm nat. U reflector), at a density of 19.4. A powder compact with a density of 8 would thus have a critical mass that is (19.4/8)^2 times higher: 82 kg (unreflected) and 36 kg (reflected), not counting the weight of the oxygen (which adds another 14%). If compressed to crystal density these values drop to 40 kg and 17.5 kg.
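The scaling used here - critical mass varying inversely with the square of density for a fixed composition and reflector - is easy to package in a few lines. The sketch below (Python) reproduces the oxide estimates above; as in the text, the extra mass of the oxygen is not included, and the function name is mine.

```python
# Minimal sketch: scale a known critical mass to a different density using the
# m_crit ~ 1/density^2 relation used in the text.  Reference values are the
# reactor grade plutonium figures quoted above (13.9 kg bare, 6.1 kg with a
# 10 cm natural uranium reflector, both at density 19.4).

def scaled_critical_mass(m_ref, rho_ref, rho_new):
    return m_ref * (rho_ref / rho_new) ** 2

for rho in (8.0, 11.45):            # compacted powder, full crystal density
    bare = scaled_critical_mass(13.9, 19.4, rho)
    reflected = scaled_critical_mass(6.1, 19.4, rho)
    print(rho, round(bare), round(reflected, 1))   # ~82/36 kg and ~40/17.5 kg
```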
4.1.7.1.3 Uranium-233

Uranium-233 is chemically identical to U-235 (both are uranium), but its nuclear properties are more closely akin to those of plutonium. Like plutonium it is an artificial isotope that must be bred in a nuclear reactor. Its critical mass is lower than that of U-235, and its material alpha value is higher; both are close to those of Pu-239. Its half-life and bulk radioactivity are also much closer to those of Pu-239 than to those of U-235. U-233 has been studied as a possible weapons material since the early days of the Manhattan Project.

It is attractive in designs where small amounts of efficient material are desirable, but the spontaneous fission rate of plutonium is a liability, such as small, compact fission weapons with low performance (and thus light weight) assembly systems. It does not seem to have been used much, if at all, in actual weapons by the US. It has been employed in many US tests however, possibly indicating its use in deployed weapons.

The reason U-233 has seen little use is the difficulty of manufacture. It must be made by costly irradiation in reactors, but unlike plutonium, its fertile isotope (thorium-232) is not naturally part of uranium fuel. To produce significant quantities of U-233, a special production reactor is required that burns concentrated fissile material for fuel - either plutonium or moderately to highly enriched uranium. This further increases cost and inconvenience, making it more expensive even than plutonium (which also has the advantage of a substantially lower critical mass). Significant resources have been devoted to U-233 production in the US however. In the fifties, up to three breeder reactors were loaded with thorium at Savannah River for U-233 production, and a pilot-scale "Thorex" separation plant was built.

U-233 has some advantages over plutonium, principally its lower neutron emission background. Like other odd numbered fissile isotopes U-233 does not readily undergo spontaneous fission; also important is the fact that the adjacent even numbered isotopes have relatively low fission rates as well. The principal isotopic contaminant for U-233 is U-232, which is produced by an n,2n reaction during breeding. U-232 has a spontaneous fission rate almost 1000 times lower than that of Pu-240, and is normally present at much lower concentrations. If appropriate precautions are taken to use thorium with a low Th-230 content, and an appropriate breeding blanket/reactor design is used, then weapons-grade U-233 can be produced with U-232 levels of around 5 parts per million (0.0005%). Above 50 ppm (0.005%) of U-232 is considered low grade.

Due to the short half-life of U-232 (68.9 years) the alpha particle emission of normal U-233 is quite high, perhaps 3-6 times higher than in weapons grade plutonium. This makes alpha->n reactions involving light element impurities in the U-233 a possible issue. Even with low grade U-233 and uranium of very low chemical purity, the resulting neutron emission levels do not approach those of Pu-240 in weapon grade plutonium, but they may be high enough to preclude using impure U-233 in a gun assembly weapon. If purity levels of 1 ppm or better are maintained for key light elements (achievable back in the 1940s, and certainly readily obtainable today), then any normal isotopic grade of U-233 can be used in gun designs as well.

Although the U-232 contaminant produces a significant amount of self-heating (718 W/kg), it is present in too small a concentration to have a significant effect. A bare critical mass of low grade U-233 (16 kg) would emit 5.06 watts, 11% of it due to U-232 heating.

Potentially a more serious problem is the decay chain of U-232. It leads to a series of short-lived isotopes, some of which put out powerful gamma emissions. These emissions increase over a period of a couple of years after the U-233 is refined due to the accumulation of the longest lived intermediary, Th-228. A 10 kg sphere of weapons grade U-233 (5 ppm U-232) could be expected to reach 11 millirem/hr at 1 meter after 1 month, 0.11 rem/hr after 1 year, and 0.20 rem/hr after 2 years.
Glove-box handling of such components, as is typical of weapons assembly and disassembly work, would quickly create worker safety problems. An annual 5 rem exposure limit would be exceeded with less than 25 hours of assembly work if 2-year old U-233 were used. Even 1 month old material would require limiting assembly duties to less than 10 hours per week.

Typical critical mass values for U-233 (98.25% U-233, density 18.6) are:

Reflector    None    Nat. U 5.3 cm    Nat. U 10 cm    Be 4.2 cm
Mass (kg)    16      7.6              5.7             7.6

Self heating can be calculated from the following formula:

Q (W/kg) = (%U-232)*7.18 + (%U-233)*0.0027 + (%U-234)*0.0018

4.1.7.2 Composite Cores

If more than one type of fissile material is available (e.g. U-235 and plutonium, or U-235 and U-233) an attractive design option is to combine them within a single core design. This eliminates the need for multiple weapon designs, can provide synergistic benefits from the properties of the two materials, and results in optimal use of the total weapon-grade fissile material inventory.

U-235 is produced by isotope enrichment and is generally much cheaper than the reactor-bred Pu-239 or U-233 (typically 3-5 times cheaper). The latter two materials have higher maximum alpha values, making them more efficient nuclear explosives, and lower critical masses. Plutonium has the undesirable property of having a high neutron emission rate (causing predetonation). U-233 has the undesirable property of having a high gamma emission rate (causing health concerns). By combining U-235 with Pu-239, or U-235 with U-233, the efficiency of the U-235 is increased, and the required mass for the core is reduced compared to pure U-235. On the other hand, the neutron or gamma emission rates are reduced compared to pure plutonium or U-233 cores, and the cores are significantly cheaper as well.

When a higher alpha material is used with a lower alpha material, the high alpha material is always placed in the center. Two reasons can be given for this. First, the greatest overall alpha for the core is achieved if the high alpha material (with the fastest neutron multiplication rate) is placed where the neutron flux is highest (i.e. in the center). Second, the neutron leakage from the core is determined by the radius of the core as measured in mean free paths. By concentrating the material with the shortest MFP in a small volume in the center, the "size" of the core in MFPs is maximized, and neutron leakage minimized.

Composite cores can be used in any type of implosion system (solid core, levitated core, etc.). The ratio of plutonium to HEU used has generally been dictated by the relative inventories or production rates of the two materials. These designs have largely dropped out of use in the US (and probably Soviet/Russian) arsenal as low weight thermonuclear weapon designs came to dominate the stockpile.

4.1.7.3 Tampers and Reflectors

Although the term "tamper" has long been used to refer to both the effects of hydrodynamic confinement and of neutron reflection, I am careful to distinguish between these effects. I use the term "tamper" to refer exclusively to the confinement of the expanding fissile mass. I use "reflector" to describe the enhancement of neutron conservation through back-scattering into the fissile core. One material may perform both functions, but the physical phenomena are unrelated, and the material properties responsible for the two effects are largely distinct.
In some designs one or the other function may be mostly absent, and in other designs different materials may be used to provide most of each benefit. Since the efficiency of a fission device is critically dependent on the rate of neutron multiplication, the effect of neutron conservation due to a reflector is generally more important than the inertial confinement effect of a tamper in maximizing device efficiency.

4.1.7.3.1 Tampers

Tamping is provided by a layer adjacent to the fissile mass. This layer dramatically reduces the rate at which the heated core material can expand by limiting its velocity to that of a high pressure shock wave (a six-fold reduction compared to the rate at which it could expand into a vacuum). Two physical properties are required to accomplish this: high mass density, and optical opacity to the thermal radiation emitted by the core. High mass density requires a high atomic mass, and a high atomic density. Since high atomic mass is closely correlated to high atomic number, and high atomic number confers optical opacity to the soft X-ray spectrum of the hot core, the second requirement is automatically taken care of.

An additional tamping effect is obtained from the fact that a layer of tamper about one optical thickness (x-ray mean free path) deep becomes heated to temperatures comparable to the bomb core. The hydrodynamic expansion thus begins at the boundary of this layer, not the actual core/tamper boundary. This increases the distance the rarefaction wave must travel to cause significant disassembly.

To be effective, a tamper must be in direct contact with the fissile core surface. The thickness of the tamper need not be very large though. The shock travels outward at about the same speed as the rarefaction wave travelling inward. This means that if the tamper thickness is equal to the radius of the core, then by the time the shock reaches the surface of the tamper, all of the core will be expanding and no more tamping effect can be obtained. Since an implosion compressed bomb core is on the order of 3 cm in radius (for Pu-239 or U-233), a tamper thickness of 3 cm is usually plenty.

In selecting a tamper, some consideration must be given to the phenomenon of Rayleigh-Taylor instability (see Section 3.8). During the period of inward flow following the passage of a convergent shock wave, instability can arise if the tamper is less dense than the fissile core. This is affected by the pressure gradient, the length of time of the implosion, implosion symmetry, the initial smoothness of the tamper/core interface, and the density difference.

The ideal tamper would be the densest available material. The ten densest elements are (in descending order):

Osmium      22.57
Iridium     22.42
Platinum    21.45
Rhenium     21.02
Neptunium   20.02
Plutonium   19.84
Gold        19.3
Tungsten    19.3
Uranium     18.95
Tantalum    16.65

Although the precious metals osmium, iridium, platinum, or gold might seem to be too valuable to seriously consider blowing up, they are actually much cheaper than the fissile materials used in weapon construction. The cost of weapon-grade fissile material is inherently high. The US is currently buying surplus HEU from Russia for US$24/g; weapon grade plutonium is said to be valued 5 times higher. In the late 1940s U-235 cost $150/g in then-year dollars (worth several times as much in current dollars)! If the precious metals actually had unique capabilities for enhancing the efficiency of fissile material, it might indeed be cost effective to employ them.
No one is known to have actually used any of these materials as a fission tamper however. Rhenium is much cheaper than the precious metals, and is a serious contender for a tamper material. Neptunium is a transuranic that is no cheaper than plutonium, and is actually a candidate fissile material itself. It is thus not qualified to be considered a tamper, nor is the costly and fissile plutonium. Gold would not be seriously considered as a tamper since tungsten has identical density but is much cheaper (it has been used as a fusion tamper however). Natural and depleted uranium (DU) has been widely used as a tamper due in large part to its valuable nuclear properties (discussed below). The cheapness of DU (effectively free) certainly doesn't hurt. Tungsten carbide (WC), with a maximum density of 15.63 (14.7 is more typical of fabricated pieces), is not an outstanding tamper material, but it is dense enough to merit consideration as a combined tamper/reflector material since it is a very good reflector. In comparison, two other elements normally thought of as being dense do not measure up: mercury (13.54) and lead (11.35). Lead has been used as a fusion tamper in radiation implosion designs though, either as the pure element or as a lead-bismuth alloy.

4.1.7.3.2 Reflectors

The usefulness of a material as a reflector is principally determined by its mean free path (MFP) for scattering. The shorter this value, the better the reflector. To see the importance of a short MFP, consider the typical geometry of a bomb - a spherical fissile core, with radius r_core, surrounded by a spherical reflector. The average distance from the center of the assembly at which an escaping neutron is first scattered is r_core + MFP. If the scattering MFP for a reflector is comparable to r_core, the reflector volume in which scattering occurs is much larger than the volume of the core. The direction of scattering is essentially random, so under these conditions a scattered neutron is unlikely to reenter the core. Most that eventually do reenter will have scattered several times, traversing a distance that is a multiple of the MFP value. Reducing the MFP will considerably reduce the volume in which scattering occurs, and thus increase the likelihood that a neutron will reenter the core, and reduce the average path it will traverse before doing so.

Since the neutron population in the core is increasing very fast, approximately doubling in the time it takes a neutron to traverse one MFP, the importance of an average reflected neutron to the chain reaction is greatly diluted by the "time absorption" effect. It represents an older and thus less numerous neutron generation, which has been overwhelmed by more recent generations. This effect can be represented mathematically by including in the reflector a fictitious absorber whose absorption cross section is inversely proportional to the neutron velocity. Due to time absorption, as well as the effects of geometry, the effectiveness of a reflector thus drops very rapidly with increasing MFP.

For a constant MFP, increasing reflector thickness also has a point of diminishing returns. Most of the benefit in critical mass reduction occurs with a reflector thickness of 1 MFP. With 2 MFPs of reflector, the critical mass has usually dropped to within a few percent of its value for an infinitely thick reflector. Time absorption also causes the benefits of a reflector to drop off rapidly with thicknesses exceeding about one MFP. A very thick reflector offers few benefits over a relatively thin one.
Experimental data showing the variation of critical mass with reflector thickness can be misleading for evaluating reflector performance in weapons since critical systems are non-multiplying (alpha = 0). These experiments are useful when the reflector is relatively thin (a few centimeters), but thick reflector data is not meaningful. For example, consider the following critical mass data for beryllium reflected plutonium:

Table 4.1.7.3.2-1. Beryllium-Plutonium Reflector Savings

Beryllium         Alpha Phase Pu Critical Mass
Thickness (cm)    (d = 19.25 g/cm^3) (kg)
 0.00             10.47
 5.22              5.43
 8.17              4.66
13.0               3.93
21.0               3.22
32.0               2.47

The very low critical mass with a 32 cm reflector is meaningless in a high alpha system; it would behave instead as if the reflector were much thinner (and the critical mass were correspondingly higher). Little or no benefit is gained for reflectors thicker than 10 cm. Even a 10 cm reflector may offer only a slight advantage over one substantially thinner.

[Note: The table above, combined with the 2 MFP rule for reflector effectiveness, might lead one to conclude that beryllium's MFP must be on the order of 16 cm. This is not true. Much of the benefit of very thick beryllium reflectors is due to its properties as a moderator, slowing down neutrons so that they are more effective in causing fission. This moderation effect is useless in a bomb since the effects of time absorption are severe for moderated neutrons.]

In the Fat Man bomb, the U-238 reflector was 7 cm thick since a thicker one would have been of no value. In assemblies with a low alpha, additional reflectivity benefits are seen with uranium reflectors exceeding 10 cm in thickness.

To reduce the neutron travel time it is also important for the neutron reflector to be in close proximity to the fissile core, preferably in direct contact with it. Since the MFP decreases when the reflector is compressed, it is very beneficial to compress the reflector along with the fissile core.

Many elements have similar microscopic scattering cross sections for fission spectrum neutrons (2.5 - 3.5 barns). Consequently the MFP tends to correlate with atomic density. Some materials (uranium and tungsten for example) have unusually high scattering cross sections that compensate for a low atomic density.

The parameter c (the average number of secondaries per collision) is also significant. This is the same c mentioned earlier in connection with the alpha of fissile materials. In reflector materials the effective value of c over the spectrum of neutrons present is always less than 1. Only two reflector materials produce significant neutron multiplication: U-238 (from fast fission) and beryllium (from the Be-9 + n -> 2n + Be-8 reaction). Neutron multiplication in U-238 becomes significant when the neutron energy is above 1.5 MeV (about 40% of all fission neutrons), but a neutron energy of 4 MeV is necessary in beryllium. Further, U-238 produces more neutrons per reaction on average (2.5 vs 2). For fission spectrum neutrons this gives U-238 a value of c = 1.05, and Be a value of c = 1.03. Remember, this is for fission spectrum neutrons, i.e. neutrons undergoing their first collision! The effective value is lower though, since after one or more collisions the energy spectrum changes. Each uranium fast fission neutron is considerably more significant in augmenting the chain reaction in the core, compared to beryllium multiplied neutrons, due to the higher energy of fast fission neutrons.
U-238 fast fission is an energy producing reaction, and generates neutrons with an average energy of 2 MeV. The beryllium multiplication reaction absorbs energy (1.665 MeV per reaction) and thus produces slow, low energy neutrons for which time absorption is especially severe. The energy produced by U-238 fast fission can also significantly augment the yield of a fission bomb. It is estimated that 20% of the yield of the Gadget/Fat Man design came from fast fission of the natural uranium tamper.

Both beryllium and uranium have negative characteristics in that they tend to reduce the energy of scattered neutrons (and reduce the effective value of c below 1). In beryllium this is due to moderation - the transfer of energy from the neutron to an atomic nucleus through elastic scattering. In uranium it is due to inelastic scattering.

4.1.7.3.2.1 Moderation and Inelastic Scattering

The energy loss with moderation is a proportional one - each collision robs the neutron of the same average fraction of its remaining energy. This fraction is determined by the atomic weight of the nucleus:

E_collision/E_initial = Exp(-epsilon)

where E_collision is the average energy after one collision, the constant epsilon being calculated from:

epsilon = 1 + ((A - 1)^2 / (2*A)) * ln((A - 1)/(A + 1))

where A is the mass number (atomic weight) of the nucleus. The equation is undefined when A=1, but taking the limit as A approaches 1 gives the value for light hydrogen, which is epsilon=1. If A is larger than 5 or so, then it can be approximated by:

epsilon ~= 2/(A + 2/3)

Epsilon values for some light isotopes of interest are:

A    Isotopes    Epsilon
1    H           1.000
2    D           0.725
3    T, He-3     0.538
4    He-4        0.425
6    Li-6        0.299
7    Li-7        0.260
9    Be-9        0.207
10   B-10        0.187
12   C-12        0.158

Since epsilon is close to zero when A is large, we can easily see that moderation is significant only for light atoms. The atomic weight of beryllium (9) is light enough to make this effect significant. The average number of collisions n required to reduce a neutron of energy E_initial to E_final can be expressed by:

n = (1/epsilon) * ln(E_initial/E_final)

Since A=9 for beryllium, it takes 3.35 collisions to reduce neutron energy by half. The average number of collisions for a neutron reentering the fissile mass will likely be substantially higher than this, unless the reflector is thin (in which case most of the neutrons will escape without reflection). For comparison, carbon (A=12) takes 4.39 collisions to achieve similar moderation, iron (A=56) takes 19.6, and U-238 takes about 82. Clearly heavy atoms do not cause significant moderation. However they can experience another phenomenon called inelastic scattering that also absorbs energy from neutrons. In inelastic scattering, the collision excites the nucleus into a higher energy state, stealing the energy from the neutron. The excited nucleus quickly drops back to its ground state, emitting a gamma ray. Inelastic scattering is important mostly in very heavy nuclei that have many excitation states (like tungsten and uranium). The effect drops off rapidly with decreasing atomic mass. On balance, the energy loss by moderation in beryllium is more serious than the energy loss by inelastic scattering in uranium. This is partly due to the fact that every elastic collision reduces neutron energy, while only some collisions produce inelastic scattering.
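The collision counts quoted above follow from the two formulas just given. The Python sketch below recomputes epsilon and the number of collisions needed to halve a neutron's energy for a few mass numbers, using the exact expression rather than the 2/(A + 2/3) approximation; it is only a check of the arithmetic in this section, not additional data.

import math

def epsilon(A):
    """Average logarithmic energy loss per elastic collision for a
    nucleus of mass number A (A=1 is handled as the limiting case)."""
    if A == 1:
        return 1.0
    return 1.0 + ((A - 1)**2 / (2.0 * A)) * math.log((A - 1) / (A + 1))

def collisions_to_halve(A):
    """Average number of elastic collisions needed to cut the neutron
    energy in half: n = ln(2) / epsilon."""
    return math.log(2.0) / epsilon(A)

for A, name in [(9, "Be-9"), (12, "C-12"), (56, "Fe-56"), (238, "U-238")]:
    print(f"{name}: epsilon = {epsilon(A):.3f}, "
          f"collisions to halve energy = {collisions_to_halve(A):.1f}")

# Expected output (approximately):
#   Be-9:  epsilon = 0.207, collisions to halve energy = 3.4
#   C-12:  epsilon = 0.158, collisions to halve energy = 4.4
#   Fe-56: epsilon = 0.035, collisions to halve energy = 19.7
#   U-238: epsilon = 0.008, collisions to halve energy = 82.3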
4.1.7.3.2.2 Comparison of Reflector Materials

Below is a list of candidate materials, and their atomic densities. The list includes the six highest atomic density pure elements (C - in two allotropic forms, Be, Ni, Co, Fe, and Cu), and a number of compounds that are notable for having high atomic densities. Atomic densities for the major tamper materials are also shown.

Table 4.1.7.3.2.2-1. Candidate Reflector Materials
(Cross sections and MFPs are for fission spectrum neutrons)

Reflector Material         At. Density    Avg. Cross.   MFP
                           (moles/cm^3)   (barns)       (cm)
Carbon (C, diamond)        0.292          2.37          2.40
Beryllium Oxide (BeO)      0.241          2.79          2.47
Beryllium (Be)             0.205          2.83          2.86
Beryllium Carbide (Be2C)   0.190          2.60          3.36
Carbon (C, graphite)       0.188          2.37          3.73
Water (H2O)                0.167          3.54          2.81
Nickel (Ni)                0.152          3.84          2.85
Tungsten Carbide (WC)      0.150          4.55          2.43
Cobalt (Co)                0.148          3.68          3.05
Iron (Fe)                  0.141          3.66          3.22
Copper (Cu)                0.141          3.65          3.23
...
Osmium (Os)                0.118
Iridium (Ir)               0.117
Rhenium (Re)               0.110
Platinum (Pt)              0.110
Tungsten (W)               0.105          6.73          2.35
Gold (Au)                  0.098
Plutonium (Pu)             0.083
Uranium (U)                0.080          7.79          2.66
Mercury (Hg)               0.068
Lead (Pb)                  0.055

From this list it can be seen that the highest atomic density materials consist of light elements. Some compounds achieve higher atomic densities than pure elements by packing together atoms of different sizes. Thus BeO is denser (in both mass and moles/cm^3) than Be, and WC is denser than W (only in moles/cm^3).

Using critical mass data, some of these materials can be ordered by reflector efficiency. In the ordering below X > Y means X is a better reflector than Y, and (X > Y) means that though X is better than Y, the difference is so slight that they are nearly equal (MFPs in cm are shown below each material):

Be   > (BeO  > WC)  > U    > W    > Cu   > H2O  > (Graphite > Fe)
2.86   2.47   2.28    2.66   2.43   3.23   2.82    3.73       3.22

From this the general trend of lower MFPs for better reflectors is visible, but it is not extremely strong. The effects of neutron multiplication and moderation are largely responsible. As noted earlier, this ranking, made using critical assemblies, tends to overvalue beryllium somewhat with respect to use in weapons. Nonetheless beryllium is still by and large the best reflector, especially when low mass is desirable. Uranium and tungsten carbide are the best compromise reflector/tampers.

Carbon is a fairly good neutron reflector. It has the disadvantage of being a light element that moderates neutrons, but being heavier than beryllium (At Wt 12 vs 9) it moderates somewhat less. When carbon is used as a shock buffer, significant additional benefits from neutron reflection can be obtained. The singularly high atomic density and short MFP of diamond make it an interesting material. Before dismissing the possibility out of hand as ridiculous given its cost, it should be noted that synthetic industrial diamond costs only $2500/kg, far less than the fissile material used in the core. It can also be formed into high density compacts.

Iron is a surprisingly good reflector, though not good enough to be considered for this use in sophisticated designs. It may be important due to its use as a structural material - as in the casing of a nuclear artillery shell, or the barrel of a gun-type weapon.

With a 4.6 cm radius core the following reflector thicknesses have been found to be equally effective:

Be        4.2 cm
U         5.3 cm
W         5.8 cm
Graphite  10 cm

Viewed from the other perspective (variation in critical mass with identical thicknesses of different materials) we get:
Table 4.1.7.3.2.2-2. Critical Mass for 93.5% U-235 (kg)

             Reflector Thickness
Material     2.54 cm   5.08 cm   10.16 cm
Be           29.2      20.8      14.1
BeO          -         21.3      15.5
WC           -         21.3      16.5
U            30.8      23.5      18.4
W            31.2      24.1      19.4
H2O          -         24        22.9
Cu           32.4      25.4      20.7
Graphite     35.5      29.5      24.2
Fe           36.0      29.5      25.3

Below is a plot showing the change of Oralloy critical mass with reflector thickness graphically (taken from LA-10860-MS, Critical Dimensions of Systems Containing 235-U, 239-Pu, and 233-U; 1986 Rev.). The variation of plutonium and U-233 critical masses with reflector thickness can be determined using the chart below (also taken from LA-10860-MS) together with the above chart for Oralloy.

The variation of critical mass with reflector thickness is sometimes also expressed in terms of reflector savings, the reduction in critical radius for a given reflector thickness:

Table 4.1.7.3.2.2-3. Reflector Savings (cm) for Various Reflector/Fissile Material Combinations

             Fissile Material:
             93.5% U-235                     Plutonium
Reflector    Reflector Thickness (cm)        Reflector Thickness (cm)
Material     1.27   2.54   5.08   10.16      1.27   2.54   5.08   10.16
Be           0.90   1.46   2.14   2.94       0.73   1.11   1.51   1.97
U            0.81   1.31   1.87   2.40       0.66   1.01   1.36   1.66
W            0.82   1.29   1.82   2.29       0.67   1.00   1.33   1.59
Fe           0.59   0.92   1.36   1.70       0.50   0.74   1.04   1.25

4.1.7.3.3 Combined Tamper/Reflector Systems

In most weapon designs, both the benefits of tamping and neutron reflection are desired. Two design options are available: a single compromise material can provide both functions, or separate layers of different materials can be used, each chosen mainly for one function.

Designs for relatively heavy implosion bombs typically use U-238 (as natural or depleted uranium) as a compromise material. It is very good to excellent in both respects, and boosts yield as well. The Gadget/Fat Man design used a 120 kg natural uranium tamper (7 cm thick). All of the early U.S. implosion designs used uranium as a tamper/reflector. The spontaneous fission rate in U-238 precludes its use in gun-type designs.

The Little Boy weapon used tungsten carbide as a compromise material. Its density is fairly high, and it is an excellent neutron reflector (second only to beryllium among practical reflector materials). It is less dense than the uranium core, but since the Little Boy core was not compressed, Rayleigh-Taylor instability was not a factor in the design. Tungsten metal was used in the South African gun-type weapons; this choice places greater emphasis on tamping over reflection, compared to tungsten carbide. It is interesting to note the dual-use restrictions placed on tungsten alloys and carbide: "Parts made of tungsten, tungsten carbide, or tungsten alloys (>90% tungsten) having a mass >20 kg and a hollow cylindrical symmetry (including cylinder segments) with an inside diameter greater than 10 cm but less than 30 cm." This is clearly based on its use as a reflector in gun-type weapons.

Beryllium is used as a reflector in modern lightweight fission warheads and thermonuclear triggers. It has special value for triggers since it is essentially transparent to the thermal radiation emitted by the core. It is a very efficient reflector for its mass, the best available. But due to its extremely low mass density, it is nearly useless as a tamper. In boosted designs tamping may be unnecessary, but it is also possible to insert a (thin) tamper layer between the core and the beryllium reflector. The n,2n reaction is also useful in boosted designs, since the fraction of fusion neutrons that escape the core without capture or substantial scattering still retain enough energy to release reasonably energetic neutrons in the reflector.
Beryllium has relatively high compressibility, which may also add to its effectiveness as a reflector.

It is also interesting to note that the Allied-Signal Kansas City Plant has developed a capability for depositing tungsten-rhenium films up to 4 mm thick. This would be a nearly ideal material and thickness for a tamper in a beryllium reflected flying plate implosion design. By alloying rhenium with tungsten, the density of the tungsten can be increased (so that it matches or exceeds the density of alpha phase plutonium), and the ductility and workability of tungsten are improved. Notable confirmation of this comes from the 31 kt Schooner cratering test in 1968 (part of the Plowshare program): some of the most prominent radionuclides in the debris cloud were radioactive isotopes of tungsten and rhenium. It is also possible that uranium foils known to have been manufactured for weapons were used as tampers in flying plate designs.

4.1.8 Fission Initiation Techniques

Once a supercritical mass is assembled, neutrons must be injected to start the chain reaction. This is not really a problem for a gun type weapon, since the design allows the supercritical mass to remain in the fully assembled state indefinitely; eventually a neutron from the prevailing background is certain to cause a full yield explosion. It is a major problem in an implosion bomb since the interval during which the bomb is near optimum criticality is quite short - both in absolute length (less than a microsecond), and also as a proportion of the time the bomb is in a critical state.

The first technique to be seriously considered for use in a weapon was simply to include a continuous neutron emitter, either a material with a high spontaneous fission rate, or an alpha emitter that knocks neutrons loose from beryllium mixed with it. Such an emitter produces neutrons randomly, but with a specific average rate. This inevitably creates a random distribution in initiation time and yield (called stochastic initiation). By tuning the average emission rate, a balance between predetonation and postdetonation can be struck so that there is a high probability of a reasonably powerful (but uncertain) yield. This idea was proposed for the Fat Man bomb at an early stage of development.

A far superior idea is to use a modulated neutron initiator - a neutron emitter or neutron generator that can be turned on at a specific time. This is a much more difficult approach to develop, regardless of the technique used. Modulated initiators can be either internal designs, which are placed inside the fissile pit and activated by the implosion wave, or external designs, which are placed outside the fission assembly. It should be noted that it is very desirable for an initiator to emit at least several neutrons during the optimum period, since a single neutron may be captured without causing fission. If a large number can be generated, then the total length of the chain reaction can be significantly shortened. A pulse of 1 million neutrons could cut the total reaction length by 25% or so (approx. 100 nanoseconds), which may be useful for ensuring optimal efficiency.

4.1.8.1 Modulated Beryllium/Polonium Initiators

This general type of initiator was used in all of the early bomb designs. The fundamental idea is to trigger the generation of neutrons at the selected moment by mixing a strong alpha emitter with the element beryllium. About 1 time out of 30 million, when an alpha particle collides with a beryllium atom, a neutron is knocked loose.
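To get a feel for the timing problem this creates, the short sketch below combines two figures quoted in this section - the roughly 1-in-30-million alpha-to-neutron conversion and the 1.85 x 10^12 alphas/sec emitted by the 50 curie Po-210 charge described below - to estimate the average neutron output of a fully mixed source. Treating the emission as a Poisson process and picking a half-microsecond window as representative are my own simplifying assumptions, not figures from the text.

import math

# Figures quoted in this section (treated as given):
alphas_per_sec = 1.85e12            # alpha emission rate of a 50 curie Po-210 source
neutrons_per_alpha = 1.0 / 30e6     # ~1 neutron per 30 million alpha/Be collisions

neutron_rate = alphas_per_sec * neutrons_per_alpha   # neutrons per second
mean_wait_us = 1e6 / neutron_rate                    # average gap between neutrons, microseconds

print(f"Average output: {neutron_rate:.3g} neutrons/s")
print(f"Mean wait between neutrons: {mean_wait_us:.1f} microseconds")

# Assuming emission is Poisson, the probability of at least one neutron
# appearing inside a window of given length:
window_us = 0.5    # hypothetical 'near optimum criticality' window, microseconds
expected = neutron_rate * window_us * 1e-6
print(f"P(at least one neutron in {window_us} us) = {1 - math.exp(-expected):.3f}")

The numbers work out to roughly 60,000 neutrons per second, or one neutron every 16 microseconds on average, with only a few percent chance of a neutron appearing in any particular sub-microsecond window - which is why a continuous emitter gives such an uncertain initiation time, and why rapid mixing at a chosen moment is so attractive.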
The key difficulty here is keeping the alpha emitter out of contact with the beryllium, and then achieving sufficiently rapid and complete mixing that a precisely timed burst of neutrons is emitted. The very short range of alpha particles in solid matter (a few tens of microns) would make the first requirement relatively easy to achieve, except for one thing: most strong alpha emitters also emit gamma rays, which penetrate many centimeters of solid matter and also occasionally knock loose neutrons. Finding a radioisotope with sufficiently low gamma emissions greatly restricts the range of choices. A suitable radioisotope must also have a relatively short half-life (no more than a few decades) so sufficient activity can be provided by a small amount, and it must be reasonably economical to produce. One isotope appears to be the clear favorite when all these factors are considered: polonium-210. Although other alpha emitters have been considered, all radioisotope based modulated initiators appear to have used Po-210 as the alpha source. This isotope has a half-life of only 138.39 days though. On the one hand, this means a suitably strong alpha source can be quite small (50 curies, which emits 1.85 x 10^12 alphas/sec, weighs only 11 mg). On the other, the Po-210 disappears quickly and must be constantly replenished to maintain a standing arsenal. Polonium-208 and actinium-227 have also been considered for this role.

The second requirement (carefully timed, fast, efficient mixing) demands very clever designs for implosion weapons. After considering several proposals, a neutron initiator called "Urchin" or "screwball" was selected by Los Alamos for Gadget/Fat Man. All of the designs considered were based on placing the initiator at the center of the fissile mass, and using the arrival of the convergent shock to drive the mixing process. This ensured that the entire mass was highly compressed (although perhaps not optimally compressed), and placed the initiator where the neutrons emitted would be most effective.

The Urchin was a sphere consisting of a hollow beryllium shell, with a solid spherical beryllium pellet nested inside. The polonium was deposited in a layer between the shell and the pellet. Both the shell and the pellet were coated with a thin metal film to prevent the polonium (or its alpha particles) from reaching the beryllium. The mixing was brought about by using the Munroe effect (also called the shaped charge, or hollow charge, effect): when shock waves collide, powerful high velocity jets are formed. This effect was created by cutting parallel wedge-shaped grooves in the inner surface of the shell. When the implosion shock collapsed these grooves, sheet-like beryllium jets would erupt through the polonium layer, and cause violent turbulence that would quickly mix the polonium and beryllium together. By placing the small mass of polonium as a layer trapped between two relatively large masses of beryllium, the Urchin designers were hedging their bets: even if the Munroe effect did not work as advertised, any mixing process or turbulence present would likely disrupt the carefully isolated polonium layer and cause it to mix.

The whole initiator weighed about 7 grams. The outer shell was 2 cm wide and 0.6 cm thick, the solid inner sphere was 0.8 cm wide. 15 parallel wedge-shaped grooves, each 2.09 mm deep, were cut into the inner surface of the shell. Both the shell and the inner pellet were formed by hot pressing in a nickel carbonyl atmosphere, which deposited a nickel layer on the surfaces.
The surfaces of the shell and central sphere were also coated with 0.1 mm of gold. Combined with the nickel layer, the gold film provided a barrier between the polonium and the beryllium. 50 curies of polonium-210 (11 mg) was deposited on the grooves inside the shell and on the central sphere. This much polonium produces a thermal output of about 1.5 watts, causing very noticeable warming in such a small object. Post-war studies showed that as little as 10 curies still provided an acceptable initiation effect, allowing the manufacture of initiators that remained usable for up to a year.

Other designs for generating mixing have been considered. One design considered during or shortly after WWII used a spherical shell whose interior surface was covered with conical indentations. The shell was coated with a metal film, and polonium was deposited on the interior surface as in the Urchin design. In this design the cavity inside the hollow shell was empty; there was no central pellet. The principal advantage here is that the initiator could be made smaller while still being reliable. A shortcoming of the Urchin was that the Munroe effect is less robust in linear geometry. The formation of a jet when a wedge collapses depends on the apex angle and other factors, and could conceivably fail (its use may have been due to the more thorough study given the linear geometry by Fuchs during the war). The jet effect is quite robust in conical geometry however, the collapse of the conical pits producing high velocity jets of beryllium metal squirting into the cavity under nearly all conditions. Pyramidal pits provide similar advantages, and have been used in both hollow and central sphere equipped initiators. The smaller TOM initiator (about 1 cm) that replaced the Urchin was probably based on the hollow conical pit (or tetrahedral pit) design. This design was proposed for use in 1948, but was not put into production by Los Alamos until January 1950. It was first tested (in a weapon test) in May 1951. One advantage of the TOM initiator was more efficient use of the polonium (more neutrons per gram of Po-210).

One sophisticated design that was developed and patented by Klaus Fuchs and Rubby Sherr during the Manhattan Project was based on using the outgoing implosion rebound, rather than the incoming converging shock, to accomplish mixing. The slight delay in initiation thus achieved was expected to allow significantly more compression to occur.

If internal initiators are used in fusion-boosted designs it is essential that they be quite small, the smaller the better (external initiation is best). In gun-type weapons initiators are not strictly required, but may be desirable if the detonation time of the weapon needs to be precisely controlled. A low intensity polonium source can be used in this case, as can a simple system to bring the source and beryllium into contact upon impact by the bullet (like driving a beryllium foil coated piston into a sleeve coated with polonium).

4.1.8.2 External Neutron Initiators (ENIs)

These devices (sometimes called "neutron generators") rely on a miniature linear particle accelerator called a "pulse neutron tube", which collides deuterium and tritium nuclei together to generate high energy neutrons through a fusion reaction. The tube itself is evacuated and a few centimeters long, with an ion source at one end and an ion target at the other. The target contains one of the hydrogen isotopes adsorbed on its surface as a metal hydride (which isotope it is varies with the design).
When a current surge is applied to the ion source, an electrical arc creates a dense plasma of hydrogen isotope ions. This cloud of ions is then extracted from the source, and accelerated to an energy of 100-170 KeV by the potential gradient created by a high voltage acceleration electrode. Slamming into the target, a certain percentage of them fuse to release a burst of 14.1 MeV neutrons. These neutrons do not form a beam; they are emitted isotropically. Early pulse neutron tubes used titanium hydride targets, but superior performance is obtained by using scandium hydride, which is standard in current designs.

A representative tube design is the unclassified Milli-Second Pulse (MSP) tube developed at Sandia. It has a scandium tritide target, containing 7 curies of tritium as 5.85 mg of ScT2 deposited on a 9.9 cm^2 molybdenum backing. A 0.19-0.25 amp deuteron beam current produces about 4-5 x 10^7 neutrons/amp-microsecond in a 1.2 millisecond pulse with accelerator voltages of 130-150 KV, for a total of 1.2 x 10^10 neutrons per pulse. For comparison, the classified Sandia model TC-655, which was developed for nuclear weapons, produced a nominal 3 x 10^9 neutron pulse. A variety of ion source designs can be used. The MSP tube used a high current arc between a scandium deuteride cathode and an anode to vaporize and ionize deuterium. Other designs (like the duoplasmatron) may use an arc to ionize a hydrogen gas feed. The ion output current limits the intensity of the neutron pulse. Public domain ion source designs typically have an ion current limit of several amps. If we assume that the TC-655 achieved a 10 amp current from its ion source (the design of which is classified), then we can estimate an emission rate of up to 5 x 10^8 neutrons/microsecond in a pulse 6 microseconds long.

It is misleading though to think of a neutron tube as producing all its neutrons in a sudden burst. From the perspective of the fission process in a bomb core, it is not sudden at all. A typical core alpha is 100-400/microsecond, with corresponding neutron multiplication intervals of 2.5-10 nanoseconds. Any neutrons that enter the core in one multiplication interval will increase by a factor of e (2.7...) in the next, overwhelming the external neutron flux. From this point on, the fission process will proceed on its course unaffected by the ENI. Only the neutrons that enter the core during a single multiplication interval really count, and they count only insofar as they determine the time that the exponential chain reaction begins. Clearly, the vast majority of the neutrons in a 6 microsecond pulse are utterly irrelevant. The important factors in determining how effective an ENI is in precisely controlling the start of the chain reaction are the beam current intensity and how sharply and precisely it can be turned on. These are the design parameters that should be optimized in a weapon tube.

Note that only a small fraction of the neutrons generated will actually get into the core. If we assume a compressed core diameter of 6 cm, and a target-to-core distance of 30 cm (remember, it has to be safely outside the implosion system!), then only about 3% of the neutron flux will enter the core - an arrival rate of 15,000 neutrons/nanosecond using a 10 amp ion source. This many neutrons will significantly accelerate the chain reaction, cutting it by some 15 multiplication intervals.
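A rough way to see where "time saved" figures like these come from is to note that starting the divergent chain reaction with N neutrons instead of one skips about ln(N) multiplication intervals. The sketch below does this arithmetic for the 1-million-neutron pulse mentioned earlier; the 7 ns interval is my assumed mid-range value from the 2.5-10 ns figure above, and the whole calculation is a back-of-envelope estimate, not the FAQ's own method.

import math

# Starting the chain reaction with N neutrons instead of one skips
# roughly ln(N) multiplication intervals, since each interval
# multiplies the population by e.

interval_ns = 7.0      # assumed multiplication interval, mid-range of 2.5-10 ns
injected = 1.0e6       # the 1 million neutron pulse mentioned earlier

intervals_saved = math.log(injected)
print(f"~{intervals_saved:.0f} intervals saved, "
      f"~{intervals_saved * interval_ns:.0f} ns off the reaction length")

# Gives roughly 14 intervals and ~97 ns, in line with the
# 'approx. 100 nanoseconds' figure quoted for a 1 million neutron pulse.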
The ENI does not have to be placed near the actual fission assembly. Since warhead dimensions are typically no more than 1-2 meters, it can be placed virtually anywhere in the weapon, as long as there isn't a thick layer of moderating material (plastic, hydrocarbon fuel, etc.) between the ENI and the fission core.

The power supply required to drive a pulse tube has many similarities to the EBW pulse power supply. A pulse of a few hundred volts at a few hundred amps is needed to drive the ion source, and a 130-170 KV pulse of several amps is required to extract the ions and accelerate the beam. This high voltage pulse controls actual neutron production and should thus have as fast an onset time as possible. It can be supplied by discharging a capacitor of several KV through a pulse current transformer. Pulse neutron tubes have been available commercially for decades (in non-miniaturized form) for use as laboratory neutron sources, or for non-destructive testing.

An additional type of ENI, not based on fusion reactions, has been successfully tested but apparently never deployed. This is the use of a compact betatron, a type of electron accelerator, to produce energetic photons (several MeV). These photons cause photon-induced fission, and photon -> neutron reactions, directly in the core.

4.1.8.3 Internal Tritium/Deuterium Initiators

Another approach to making an internal neutron initiator is to harness the high temperatures and densities achieved near the center of an implosion to trigger D+T fusion reactions. In this scheme a few tenths of a gram of each isotope is placed in a small high pressure sphere at the center of the core. The number of actual fusions produced is small, but it may seem surprising that any could occur at all. The occurrence of fusion during a collision between two nuclei is a statistical process; the probability of it occurring in a given collision depends on the collision velocity. The velocity of the nuclei is in turn statistically distributed, depending upon the temperature. The hydrogen plasma is in thermal equilibrium with a mean temperature of a few hundred thousand degrees K, but the Maxwellian energy distribution means that a very small number of ions is travelling at velocities very much higher than average. Given the very large number of ions present, a significant fusion rate results. Only a few fusions are actually necessary for reliable initiation, after all.

The main attraction of this scheme is that the half-life of tritium (12.3 years) is much longer than that of Po-210, so the initiator can be stored ready-to-use for long periods of time. The system is also physically simpler, and more compact, than an ENI. It is not clear whether this type of initiator has actually been used in weapon designs.

As with any munition, the development of a fission weapon will require a variety of tests. These include component tests, and perhaps tests of the complete weapon. Tests of components like the firing system, detonators, etc. are similar to the requirements of non-nuclear munitions and need no comment. Even conservative gun assembly designs will normally require proof testing of the gun/propellant combination to verify the internal ballistics. In addition to these routine types of tests, fission weapon development requires, or at least benefits greatly from, certain types of test that are unique to nuclear weapons.
These include nuclear tests, by which I mean tests of the nuclear properties of materials and designs, not nuclear explosions (although an actual explosion of substantial yield is one possible type of nuclear test). Implosion designs, by which I mean any design using shock waves for core assembly, also call for hydrodynamic tests - tests of materials under the extreme conditions of shock compression. Combined nuclear and hydrodynamic tests, called hydronuclear tests, provide a more direct way of developing data for weapon design, evaluating design concepts, or evaluating actual designs.

"Hydronuclear" is a somewhat vague term. Hydronuclear tests can mean shock compression experiments that create sub-critical conditions, or supercritical conditions with yields ranging from negligible all the way up to a substantial fraction of full weapon yield. Tests of negligible yield are often called "zero yield tests", although this is also not a precise term. Generally it is taken to mean a test in which the nuclear energy release is small compared to the conventional explosive energy used for assembly - a few kg of HE equivalent for example. However, even in sub-critical tests the nuclear energy release is not actually "zero". It appears that the Comprehensive Test Ban Treaty (CTBT) now being negotiated in Geneva will use a "no-criticality" standard for defining legal experiments with high explosives and fissile material.

4.1.9.1 Nuclear Tests

A variety of nuclear tests are of interest for collecting design data. Since the performance of nuclear weapons is the combined effect of many individual nuclear properties, the most desirable measurements for weapon design purposes are "integral experiments" - experiments that directly measure overall weapon design parameters that combine many different effects.

Critical mass experiments determine the quantity of fissile material required for criticality with a variety of fissile material compositions and densities, in various geometries, and with various reflector systems. These provide a basic reference for evaluating nuclear computer codes, estimating material requirements for weapons, and (extremely important) for doing safety evaluations. The closer the critical mass experiment resembles actual weapon configurations, the more useful it is. A considerable amount of critical mass data has been published openly, which makes it possible to perform reasonably good "first cut" weapon design evaluations using scaling laws (like the efficiency equations). Any weapon development program will want to perform criticality tests of systems closely resembling actual proposed designs, differing only in the amount of fissile material present. Critical mass values can be predicted with good accuracy by extrapolation from neutron multiplication measurements taken in a succession of sub-critical tests using increasing quantities of fissile material. Such tests can be conducted safely in the laboratory without special protective equipment since each successive test allows progressive refinement of the critical mass estimate, and allows the calculation of safe masses for the next test. Tests intended to closely approach or reach criticality must be conducted under stringent safety conditions however. Even a very slight degree of criticality in an unmoderated system can produce a deadly radiation flux in seconds.
Accidents during critical mass experiments killed two researchers at Los Alamos in 1945 and 1946 (Harry Daghlian and Louis Slotin) before manual experiments were banned there.

Simple critical mass tests are essentially non-multiplying and do not measure alpha, the extremely important fast neutron multiplication parameter. Direct measurements of this require establishing systems with significant levels of supercriticality, capable of creating rapid increases in neutron population. A variety of laboratory tests can be used for this. All of them depend on creating a supercritical state that persists for a very short period of time (milliseconds to microseconds) to prevent melt-down (or worse). Such experiments necessarily produce large neutron fluxes, and thus must be conducted under remote control.

One type of experiment creates a transient supercritical state by propelling a small fissile mass through a larger, slightly sub-critical mass. The supercritical state exists while the small mass is inserted, and terminates when the mass exits the other side. Examples of this type of experiment are the "Dragon" experiments conducted at Los Alamos in early 1945, in which a fissile mass was dropped through a hole bored in a subcritical assembly (so-called because it was like "tickling the tail of a dragon"). Shorter assembly times (and thus higher multiplication rates) can be investigated by using a gun instead of gravity to accelerate the fissile projectile. This approach extends naturally to evaluating a full-up gun weapon design, with only the amount of fissile material in the bullet or target differing from the actual deployment weapon. This type of test was actually used by South Africa for evaluating its gun assembly weapon (using a test device named "Melba"). These experiments can explore assembly durations in the range of 0.1-10 milliseconds.

A second type of experiment achieves an even higher multiplication rate under controlled conditions by using the thermal expansion of the core to shut down the reaction. This is called a fast neutron pulse reactor. A solid core of fissile material is assembled that would be slightly supercritical at room temperature, but is kept subcritical by the presence of a control rod, by the removal of a section of reflector, or by controlling the insertion of fissile material. When the rod is removed (or the reflector is inserted), the assembly becomes supercritical and rapidly heats up. The expansion of the material at sonic velocity, mediated by an acoustic wave, shuts down the reaction in a matter of microseconds. Assembly durations of 5-500 microseconds can be investigated. Examples of this type of experiment are a series of fast pulsed reactors operated by the US during the late forties and early fifties: the bare uranium core Godiva, the bare plutonium core Jezebel, and the reflected uranium and plutonium assemblies Topsy and Popsy.

These fast metal assemblies can also be used to collect multiplication data at the border of criticality by adjusting their density in various ways. All of the mentioned US assemblies have been used to measure multiplication rates by studying the change in rates with density in the region between delayed and prompt criticality. These measurements can be extrapolated to estimate the maximum alpha values of the materials. Although little data on alpha values for weapons-usable material has been published, results of these types of experiments are available.
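The sub-critical extrapolation mentioned above is conventionally done with an inverse multiplication (1/M) plot: as the fissile mass approaches critical, the measured multiplication M grows without bound, so 1/M falls toward zero, and the mass at which a fitted curve reaches 1/M = 0 estimates the critical mass. The sketch below shows the idea with made-up measurement data and a simple straight-line fit; real approach-to-critical procedures are considerably more careful than this.

import numpy as np

# Hypothetical sub-critical measurements: fissile mass (kg) vs measured
# neutron multiplication M (detector counts relative to the bare source).
mass = np.array([4.0, 6.0, 8.0, 10.0])
M = np.array([1.6, 2.4, 4.5, 14.0])

inv_M = 1.0 / M

# Fit a straight line 1/M = a*mass + b and solve for the mass at 1/M = 0.
a, b = np.polyfit(mass, inv_M, 1)
estimated_critical_mass = -b / a

print(f"Estimated critical mass: {estimated_critical_mass:.1f} kg")
# Each new data point refines the estimate and sets a safe upper bound
# on how much material may be added for the next measurement.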
4.1.9.2 Hydrodynamic Tests

Hydrodynamic tests can evaluate shock compression techniques and designs, and collect data on the properties of nuclear materials under shock compression conditions. The latter sort of test requires conducting shock experiments with actual nuclear materials, of course. This is not much of a problem from a safety point of view for uranium, since comparatively non-toxic and nuclearly inert natural or depleted uranium is available. Hydrodynamic tests on complete implosion weapon designs can be conducted for uranium weapons simply by substituting natural uranium or DU for the actual U-235 or U-233.

This is not true for plutonium. There is no non-toxic, non-fissile form of plutonium. The radiotoxicity of plutonium makes hydrodynamic tests much more hazardous to perform, and care to avoid criticality is essential. It is interesting to note that a considerable amount of high-pressure shock equation of state data has been published for uranium, but very little or none has been published for plutonium. Uranium can be used as a plutonium substitute to some extent, but the unique and bizarre phase diagram of plutonium limits this. This is especially true in situations where very accurate EOS knowledge is required. The small safety margins involved in creating one-point safe sealed pit weapons, and in preparing for hydronuclear tests, place a premium on precise knowledge of plutonium behavior.

Measurements in weapon-type implosion systems are very difficult to make since they must be taken through the layer of expanding explosion gases. Flying plate systems are widely used for collecting equation of state data ranging up to fairly high shock pressures (several megabars). Advanced weapon programs typically use sophisticated instruments like light gas guns to generate very high pressure shock data. Even with natural uranium or DU, full scale hydrodynamic tests of weapon designs will require special test facilities, including heavily reinforced test cells with provision for instrumentation. The cells will unavoidably remain contaminated with detectable levels of uranium, showing the nature of tests that have been conducted there.

4.1.9.3 Hydronuclear Tests

Hydronuclear tests are the ultimate in integral experiments, since they combine the full range of hydrodynamic and nuclear effects. Although implosion weapons (e.g. Fat Man) have been successfully developed without any tests of this kind, a weapon development program is likely to regard such tests as highly desirable. In hydronuclear tests of a candidate weapon design, data can be collected on both the rate of increase of alpha during compression and the maximum alpha value achieved. The first type of data is useful for determining the ideal moment of initiation for maximum efficiency, the second for determining how efficient the weapon will be. The influence of time absorption and other effects dependent on neutron energy (fission cross sections, moderation, inelastic scattering, etc.) changes with the effective multiplication rate. This encourages weapon developers to conduct tests at very high multiplication rates to collect good data for weapon performance prediction. Since weapon efficiency and yield are dependent primarily on the effective multiplication rate, this means tests with large releases of nuclear energy.
Prohibiting tests with substantial nuclear energy yields (tens or hundreds of tons) may not prevent a nation from developing fission weapons, but it does at least restrict its ability to predict weapon yield.

A serious problem with hydronuclear tests is predicting what is going to happen in advance. On one hand, it is obvious that if one can predict exactly what will happen, then there is no need for the test at all. On the other hand, not being able to estimate the effects reasonably well in advance makes conducting the test extremely difficult, even perilous. The reason for this should be clear from the efficiency equations. Since at low degrees of supercriticality efficiency and yield scale as (rho - 1)^3, fairly small variations in compression cause fairly large variations in yield. For example, if a two-fold compression factor is intended to create a supercritical density of rho = 1.02 (and a yield of, say, 50 kg), then a 5% variation in compression could cause a result ranging from a complete failure to approach criticality, to a 45-fold overshoot (2.2 tonnes). Since designing suitable instrumentation requires having a fairly good knowledge of the range of conditions to be measured, the first outcome would result in no data being collected. The second could destroy the test facility (and also result in no data being collected!). Actual US tests have been known to overshoot target yields of kilograms, producing yields in the tens and even hundreds of tons.
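The sensitivity described above follows directly from the (rho - 1)^3 scaling. The sketch below redoes the arithmetic of the example in the text; the only assumption beyond what is stated there is that rho scales linearly with the compression actually achieved, so a 5% shortfall or overshoot in compression moves rho from 1.02 to roughly 0.97 or 1.07.

# Yield scaling at low supercriticality: yield ~ (rho - 1)^3.
# Intended design point from the example in the text:
rho_design = 1.02
yield_design_kg = 50.0    # intended yield, kg of HE equivalent

k = yield_design_kg / (rho_design - 1.0) ** 3   # scaling constant

for compression_error in (-0.05, 0.0, +0.05):
    rho = rho_design * (1.0 + compression_error)   # assumes rho ~ compression
    if rho <= 1.0:
        print(f"{compression_error:+.0%} compression: subcritical, no nuclear yield")
    else:
        y = k * (rho - 1.0) ** 3
        print(f"{compression_error:+.0%} compression: ~{y:,.0f} kg "
              f"({y / yield_design_kg:.0f}x the target)")

# +5% gives roughly 2,200 kg, the ~45-fold overshoot quoted above;
# -5% fails to reach criticality at all.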
http://nuclearweaponarchive.org/Nwfaq/Nfaq4-1.html
The last tutorial ended with the creation of a Sprite class. The class had the functionality to move itself as well as detect when it was about to go out of the game screen, and respond to that by changing its direction of movement. In this part, I'm going to show you how to implement simple collision detection in PyGame.

The first question that needs to be answered is: what exactly is collision detection? Well, as you may have already figured out from the name, it is that part of a game that detects when two (or more) objects are about to collide with each other. Technically, that is where the job of the collision detection part of a game ends. But in most simple games, the part that detects collisions is also responsible for what comes next: responding to those collisions.

Collision detection is one of the few pieces that is present in almost all games that have ever been made. I'm sure you've seen almost realistic physics effects in games like FEAR, GTA 4, or even Crayon Physics Deluxe. Heck, even Prince of Persia and Pac-Man had collision detection. The physics effects are a separate topic, and are also usually handled by a different part of the game, appropriately called the Physics Engine. But before the physics engine can go ahead and do its magic, it needs to be activated by something. That something is usually the collision detection part of a game.

Techniques for Collision Detection

Before we can actually start blasting away PyGame code, we need to understand how to detect collisions. There are a number of techniques for doing so. The simplest are listed here along with brief descriptions. If you want more detailed information, I suggest you check out some of the resources listed at the end of the tutorial.

- Bounding Box Collision
- Bounding Circle Collision
- Pixel Overlap Test

Note that in the bounding box/circle tests, it is very difficult to find the exact point where two sprites have collided. The pixel overlap test fixes this by allowing us to find exactly where the two sprites have collided. This can result in better physics simulation. If the sprites always keep the same orientation (they are not rotated), the bounding box/circle method can give us a good approximation of the angle of collision. If however the sprites have rotated before collision, the pixel overlap test is usually the one to go with.

If you remember the last tutorial, we have already used a simple form of the bounding box collision test when we tested the sprites for collisions against the walls. The piece of code that did that is shown below:

def Update(self, scr=None):
    # start
    if self.x < 0:
        self.x = self.maxX
    if self.x > self.maxX:
        self.x = 0
    if self.y < 0:
        self.y = self.maxY
    if self.y > self.maxY:
        self.y = 0
    # end
    self.rectangle.move_ip(self.x, self.y)
    if scr != None:
        scr.blit(self.image, self.GetPosition())

The important parts are the ones between the start and end comments. While not strictly collision tests, they do provide the most rudimentary form of collision detection. This code actually tests if the sprite has moved beyond the borders of the screen and wraps it around to the other side if it has.

Our Collision Detection Engine

What we are going to do today is to create a simple billiards simulation. We will use both the bounding box and the bounding circle collision tests to see the difference in their results. What we are going to create will look like a couple of balls moving on a billiards table, colliding with the walls and each other.
While not all that exciting, it may just be the start of a Snooker Club type game. The game code is in two parts. Firstly, there is the code for the Ball sprite class. We create this class as it makes it a lot easier to manage things. The code for the Ball class is given here. I'll explain the entire code line by line so that you get a hang of how things are done in Pygame.

class Ball:
    def __init__(self, radius=50, init_pos=(0, 0), init_speed=[0, 0],
                 color=pygame.Color('red')):
        """
        This function creates a new Ball object. By default, the new Ball
        has a radius of 50 pixels, a starting position of (0,0), a speed
        of (0,0) and a red color.
        """
        # create Surfaces to hold the image for both drawing and erasing the Ball
        self.img = pygame.Surface((radius * 2, radius * 2))
        self.bg = pygame.Surface((radius * 2, radius * 2))
        # fill both surfaces with the transparent color
        self.img.fill(transColor)
        self.bg.fill(transColor)
        # draw the Ball shape to both the img and the bg surface
        pygame.draw.circle(self.img, color, (radius, radius), radius)
        pygame.draw.circle(self.bg, bgColor, (radius, radius), radius)
        # set the color key for both surfaces
        self.img.set_colorkey(transColor)
        self.bg.set_colorkey(transColor)
        # convert both Surfaces for faster blitting to the screen
        self.img = self.img.convert()
        self.bg = self.bg.convert()
        # create rectangle for the Ball image
        # give it the initial position that was passed via init_pos
        self.rect = self.img.get_rect(topleft=init_pos)
        # set the speed of this Ball object
        self.speed = init_speed

    def set_speed(self, new_speed):
        self.speed = new_speed

    def move(self, bounding_rect):
        """
        Moves the Ball object according to self.speed.
        Takes a pygame.Rect() object in the bounding_rect parameter.
        When moving, checks if the Ball is within the bounds of this
        rectangle. If not, moves the Ball to correct this situation.
        """
        # check if we have a valid bounding_rect. if not, just crash the game
        # after giving an error message
        if not isinstance(bounding_rect, pygame.Rect):
            sys.exit("ERROR: Invalid type for bounding_rect parameter!\n")
        # once we have done sanity checking, continue with moving the Ball
        self.rect.move_ip(self.speed[0], self.speed[1])
        # now, check if the Ball is within the bounds of the bounding_rect
        if bounding_rect.contains(self.rect):
            pass  # nothing to do here, as the Ball is within bounds
        else:
            # the Ball is outside of bounding_rect
            # first, we find in which direction we are getting out of bounds
            if self.rect.top < bounding_rect.top:
                # from the TOP side. we move the Ball to the max top first
                self.rect.top = bounding_rect.top
                # then, we check if the Ball's current Y velocity will take
                # it out of bounds again. if it will, we inverse the Y velocity
                # by multiplying it with -1
                if (self.rect.top + self.speed[1]) < bounding_rect.top:
                    self.speed[1] *= -1
            elif self.rect.bottom > bounding_rect.bottom:
                # likewise for the bottom side
                self.rect.bottom = bounding_rect.bottom
                if (self.rect.bottom + self.speed[1]) > bounding_rect.bottom:
                    self.speed[1] *= -1
            # now, we do the same for the left & right sides
            if self.rect.left < bounding_rect.left:
                self.rect.left = bounding_rect.left
                if (self.rect.left + self.speed[0]) < bounding_rect.left:
                    self.speed[0] *= -1
            elif self.rect.right > bounding_rect.right:
                self.rect.right = bounding_rect.right
                if (self.rect.right + self.speed[0]) > bounding_rect.right:
                    self.speed[0] *= -1

    def erase(self, surface):
        # erase the Ball object from its current location
        surface.blit(self.bg, self.rect)

    def draw(self, surface):
        surface.blit(self.img, self.rect)

Let me explain the code:

- First of all, the lines starting with the # symbol are comments. Comments are totally ignored by the computer when running the program, and are just there to ease the understanding of the program by anyone reading it. In both the __init__ and the move functions, the first thing you might have noticed is the explanation for the function given between the triple quotes. This is the DocString for the function. In Python, both classes and functions can have DocStrings. DocStrings are kind of like comments, in the sense that they do not affect how the program runs. However, they provide valuable information about a function/class, and are actually accessible from within a Python program, as opposed to comments which are only seen when a person views the actual code. For example, if you want to see the DocString for the move function, you write:

    print(Ball.move.__doc__)

- The import statements, as you already know from the previous tutorials, import the pygame library as well as other modules needed in the program.

A class definition is started like so:

    class Ball:

Next, we have declared a special function named __init__. This is a special function in the sense that it is called by Python automatically every time we create a new object from the Ball class. In Python classes, all functions must be passed a first argument self, which is sort of like a pointer (variable) to the object from which the function was called. As we'll see later in the code, we can call any method on a Ball object by using the syntax:

    ballObject.method_name()

As you can see, we never pass the self parameter explicitly; Python automatically passes it, making it point to ballObject, the object which was used to call the function. The code for the __init__ function is given below:
""" # create Surface to hold image for both drawing and erasing the Ball self.img = pygame.Surface((radius * 2, radius * 2)) self.bg = pygame.Surface((radius * 2, radius * 2)) # fill both surfaces with the transparent color self.img.fill(transColor) self.bg.fill(transColor) # draw the Ball shape to both the img and the bg surface pygame.draw.circle(self.img, color, (radius, radius), radius) pygame.draw.circle(self.bg, bgColor, (radius, radius), radius) # set the color key for both surfaces self.img.set_colorkey(transColor) self.bg.set_colorkey(transColor) # convert both Surfaces for faster bliting to the screen self.img.convert() self.bg.convert() # create rectangle for the Ball image # give it the initial position that was passed via init_pos self.rect = self.img.get_rect(init_pos) # set the speed of this Ball object self.speed = init_speed Most of the code should be familiar to you from the previous tutorials. First, we create two surfaces to hold the actual image of the Ball and another image with the background to erase it. In the last tutorial, we loaded an image file from disk and created a surface out of it. Here, we do things differently. Rather than use a predefined image file for the Balls, we create the Ball image in the code using pygames built-in functions. This allows us to control many aspects of the image, including radius and color. To draw a circle, we use the function: pygame.draw.circle(SURFACE, COLOR, CENTER, RADIUS) All the parameters are self explanatory. The next line of code however, needs some discussion: What we are doing here is setting a 'Color Key' for the surface. A color key can be thought of as simply a transparent color. We are telling pygame that the transColor (which is previously defined as pure White) is to be treated as transparent. So, whenever pygame blits the surface, it ignores any pixels that have the same color as the color key. This is needed because surfaces can only be rectangular, while the circle we draw is, well, circular. Thus, there is a portion of the rectangular surface that would not be part of the circle. We thus tell pygame to ignore the extra region by setting the color key equal to white. This concept can take some time to understand, so an image is attached to help you comprehend. Every thing else in the __init__function is pretty much what you did in the previous tutorials. movefunction is quite simple. What it does is to move the Ball objects position by adding the velocity to its current position, while checking that the object remains inside a rectangular area that is passed to the function as the parameter bounding_rect. The comments are quite explanatory and I don't think require any deep discussion. The reason why I created a separate move function is that it makes things a lot simpler. Say you change the way the Ball class handles collisions with the walls, then all that you need to change is the code inside of the class itself. Nothing outside will change. This is called the concept of encapsulation and is one of the biggest benefits of using classes. The details of how the class does what it does are hidden from the code that actually uses objects of the class. Now we come to the main function. The place where collisions detection is actually done. Compared to the rest of the code, the collision detection is quite simple. 
The code for the collision detection is given here:

def collision_detect(ball_list, bounding_rect):
    bList = list(ball_list)
    for ballA in ball_list:
        # remove the current Ball object from the copy, as we do not want
        # to test it against itself (or test the same pair twice)
        bList.remove(ballA)
        for ballB in bList:
            # check if the rectangles of the two balls are overlapping
            # since we are using bounding box collision detection, this is
            # how we test for a collision
            if ballA.rect.colliderect(ballB.rect):
                # inverse the velocity of one of the Ball objects at random
                b = random.choice([ballA, ballB])
                x = b.speed[0]
                y = b.speed[1]
                x *= -1
                y *= -1
                b.set_speed([x, y])
                # now, move the Balls away so they don't collide any more
                while ballA.rect.colliderect(ballB.rect):
                    ballA.move(bounding_rect)
                    ballB.move(bounding_rect)

As parameters, this function receives a list of Balls that need to be checked against each other for collisions, and a rectangular area bounding the movement of the balls. The first thing we do is create a variable bList that holds a copy of all the Ball objects that the function received. Next, we loop through all the Balls in the list. We check if the rectangles of the two Balls we are checking overlap in the following line of code:

    if ballA.rect.colliderect(ballB.rect):

Pygame's rectangles have a built-in method for checking if two rectangles are colliding with each other. We are simply using that method to check if the rectangles of the two Ball objects are colliding. If they are, we treat it as a collision of the two Ball objects, since we are using the bounding box collision test. Once a collision is detected, we choose a random Ball object from amongst the two that we are checking and reverse its velocity so that it will now move away from the other ball. Next, we keep moving the two Ball objects until they are no longer colliding with each other.

While the results are not very accurate or even pretty, this is a simple way of handling collisions. If better results are required, you could change the code that changes the velocities of the balls once a collision has been detected. Everything else need not change. The rest of the code just uses the Ball class and the collision detection function to create a small demo of the application. It's quite simple and you have already seen it in the previous tutorials.

Well, that's about it. If you were hoping for (or even dreading) lots of Math, sorry to disappoint. The Math is there if you want it; it's just that for demonstrating simple collision detection, we do not need to use it. Hope you find this useful.
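As a closing aside: the demo above only exercises the bounding box test, even though the bounding circle test was listed as one of the techniques at the start. Below is a sketch of what a circle test could look like for these Ball objects. The circle_collide function is not part of the tutorial's code; it assumes each ball's rect is square, so that half the rect width can stand in for the radius.

import math

def circle_collide(ballA, ballB):
    """Bounding circle test: two balls collide when the distance between
    their centers is less than the sum of their radii. Assumes each
    ball's rect is square, so half the rect width is the radius."""
    ax, ay = ballA.rect.center
    bx, by = ballB.rect.center
    radius_a = ballA.rect.width / 2
    radius_b = ballB.rect.width / 2
    distance = math.hypot(ax - bx, ay - by)
    return distance < (radius_a + radius_b)

Inside collision_detect, the colliderect call could be swapped for circle_collide(ballA, ballB). For round sprites like these the circle test is arguably the better fit, since the corners of the bounding boxes otherwise register collisions slightly before the balls visually touch. For the pixel overlap test mentioned earlier, pygame's mask module (starting from pygame.mask.from_surface) is the usual route.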
http://tech.pro/tutorial/1007/collision-detection-with-pygame
13
98
Objectives: To recall the concept of a function. Suppose that A and B denote two sets of objects. They may be sets of numbers, sets of books, sets of humans, etc. Then a rule that assigns exactly one member of the set B to each member of the set A is called a function from A to B. For example, if A represents the set of books and B represents the set of whole numbers, then the rule which assigns to every book the number of pages in that book is a function from the set of books to the set of whole numbers. Although the function concept in general can be defined for arbitrary sets A and B as described above, in mathematics it is normally used when both A and B are sets of numbers. We will apply the concept to the situation where both A and B are the set of real numbers. Recall 2: (The domain of a function) The domain of a function from A to B is the set of members of A that have members of B assigned to them by the rule that the function describes. In the book example above, the domain is all books. However, the domain may not always be the whole set A. Take, for example, the function from the set of real numbers to the set of real numbers that assigns to each number the square root of that number. The domain of this function is not the set of all real numbers, because the negative numbers cannot be assigned a value. The square root of a negative number does not exist! (That is, if we don't count the complex numbers!) In this case, the domain is the set of all non-negative numbers. The range of a function from A to B is the set of members of B that are assigned to members of A by the function. In the case of the books, the range is the set of numbers that are numbers of pages of books. Negative numbers are not in the range of this function, because a book cannot have a negative number of pages. Zero is not in the range, because a book with 0 pages is not considered a book (or is it?). Moreover, not all the positive whole numbers are in the range, because it may be that there are no books with (say) 13 pages; and it is hardly likely that there are books with more than (say) 100000 pages. When the sets A and B are sets of numbers (as they will be in most cases), the members of the domain are called the independent variables while the members of the range are called the dependent variables. This makes sense, given the fact that the members of B depend on the members of A. There are several forms of notation used in illustrating particular functions. Most common is the single-letter notation like f or g. We denote the function as f(x), meaning that the function f assigns the value f(x) to the value x. For example, if f is the squaring function (the function that takes each real number to its square), then it is denoted f(x) = x^2, meaning f(2) = 2^2 = 4, f(3) = 3^2 = 9, f(4) = 4^2 = 16. Another notation that is commonly used in higher mathematics is f: A -> B, which is a more descriptive illustration, suggesting that f takes the values of A and sends them to values of B. Through most of the previous topics, we have worked with equations that involve only one variable, such as x^2 - x + 40 = 0. These equations are useful in solving many problems; the word problems are examples of the usefulness of such equations. Many problems will involve more than one variable, and it will become increasingly important to know how to treat equations of more than one variable, say an x and a y. Some of these equations may be put in a form where the y can be isolated and put on one side of the equation, with the remaining terms on the other side of the equation.
Such is the case with the equation In this case, if we may assume that x is different from 0, the equation can be put in the form y This is an equation that may also be viewed as a function. Namely, the function that takes real numbers to real numbers in the following way: Take the real number and divide the result by 2 times x. Let y be the number that you get from this Mathematically, it is clear that y must be the same as Recall 7: (How the function equations more dynamic.) From Recall 6 an equation can be thought of as a function from the set of real numbers to the set of real numbers and this function may be denoted as You may not think that there is much difference between thinking of the equation and the function f(x) = ; but, the fact is that with the function concept we may find f(x+2), which is This will become extremely useful when we learn how to graph functions. Example 1. Find the domain of the following function that takes real numbers to real numbers. f(x) = . Solution: Pick any number x in the interval where -5 < x < 5. This function cannot handle such numbers because the expression under the root sign is negative for such numbers; and when we take the square root, to find the value that the function takes x to, we find that we are are attempting to computethe square root of a negative number. The square root of a negative number is not a real number --there is no real number whose square is negative. Therefore, we must exclude the real number between -5 and 5 from the domain. The natural domain of this function is all real numbers greater than or equal to 5 and all real numbers less than or equal to -5. Example 2. Find the domain of the function. Solution: Notice that when x = 3 or when x = -2, the rational expression is not defined as a finite real number (they are poles). Since functions must take real numbers to real numbers (and since every real number must be finite), we must exclude the values x = 3 and x = -2 from the domain. So the domain includes all real numbers except -2 and 3. Example 3. Find the range of the function f(x) =(x + 3)2 + 4 Solution: If x = -3, the term (x + 3)2 vanishes. Also this term can never be less than zero. So f(x) is smallest when x = -3 and the value of f(-3) is 4. As x takes values greater than -3, the function takes values that are greater than f(-3). In fact any value greater than 4 is achieved by some value different from x = -3. Take any large value, say 10,000, and ask if there is some value of x where f(x) = 10,000. This amounts to solving the equation 10,000 = (x + 3)2 + 4. We may solve this equation by subtracting 4 from both sides to get 9,996 = (x + 3)2. that x + 3 = SQRT(9,996), x = -3 +SQRT(9,996). We now know that f(-3+SQRT(9,996)) = 10,000. Clearly, we may play the same game with any number greater than 4. So, the range of this function is the set of all real numbers greater than or equal to 4. Example 4. Let f(x) be Find f(x + 1). function f itself is the rule which says that if you give it a real number x then it will first cube that number, add 2 times the square of that number, subtract 13 times that number and add 10, then, divide the result by that number minus 1. To find f(x + 1) simply follow the rules replacing that number by x+1. So, we have f(x+1) given by 1. Write the following equation as a function of x 3x2 - xy = 2y - 4 2. Find the domain of the function f(x) = (3x2+4)/(x+2). 
3 Find the domain of the function f(x) given by 4 Find the domain of the function given by the expression 5 Find the range of the function f(x) given by the the range of the function f(x) given by the expression 7 For the function below, find f(-2), f(0), 8 For the function below, find f(x + 2) and f(x) =(4x2-16x)/2x, x different from 0. 9 For the function below, find g(x2) . g(x) =(4x2-16x)/(2x-1), x different from 1/2 10. For the function below, find g((x f(g(1/2)) and g(f(1/2)), where and g(x) =2/x . 12 Find f(g(4)) and g(f(4)), f(x) =1/(4-sqrt(x)) and g(x) =4/(x2) . and simplify it, [f(x+2)-f(2)]/x and simplify it for the f(x) = 3x2 + 2x - 1. Find [f(x+2)-f(2)]/x and simplify it for the function f(x) =1/(2x+1) . 2. All real numbers except x = -2. 4 -2/3 <= x <= 2/3 5 All real numbers greater than or equal to 2. 6 real numbers y such that 0 <= y <= 2. 7 f(-2) = 6, f(0) = 0, f(1) = 3, f(2) = 10. 8 f(x + 2) = 2x - 4. 12 f(g(4)) = 2/7 and g(f(4)) = 16. 13 -2/3 <= x <= 2/3 14 3x + 14.
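For readers who want to check this kind of algebra mechanically, a computer algebra system can do the substitution and simplification. The sketch below uses the third-party sympy package, and it reconstructs Example 4's rule from its verbal description above (cube the number, add 2 times its square, subtract 13 times the number, add 10, then divide by the number minus 1); the exact expression is therefore an inference from that wording, not a quote from the original page.

import sympy as sp

x = sp.symbols('x')

# Example 4, as described in words: f(x) = (x^3 + 2x^2 - 13x + 10) / (x - 1)
f = (x**3 + 2*x**2 - 13*x + 10) / (x - 1)

# substitute x + 1 for x and simplify, as the example asks
f_shifted = sp.cancel(f.subs(x, x + 1))
print(f_shifted)       # expected: x**2 + 5*x - 6 (where the denominator is nonzero)

# Example 3: f(x) = (x + 3)^2 + 4 takes the value 10,000 at x = -3 + sqrt(9996)
g = (x + 3)**2 + 4
print(sp.simplify(g.subs(x, -3 + sp.sqrt(9996))))   # expected: 10000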
http://www.marlboro.edu/academics/study/mathematics/courses/functions
13
93
You can enter formulas in two ways, either directly into the cell itself, or at the input line. Either way, you need to start a formula with one of the following symbols: =, + or -. Starting with anything else causes the formula to be treated as if it were text. Operators in formulas Each cell on the worksheet can be used as a data holder or a place for data calculations. Entering data is accomplished simply by typing in the cell and moving to the next cell or pressing Enter. With formulas, the equals sign indicates that the cell will be used for a calculation. A mathematical calculation like 15 + 46 can be accomplished as shown below. |Simple Calculation in 1 Cell||Calculation by Reference| While the calculation on the left was accomplished in only one cell, the real power is shown on the right where the data is placed in cells and the calculation is performed using references back to the cells. In this case, cells B3 and B4 were the data holders with B5 the cell where the calculation was performed. Note that the formula was shown as =B3 + B4. The plus sign indicates that the contents of cells B3 and B4 are to be added together and then have the result in the cell holding the formula. All formulas build upon this concept. Other ways of entering formulas are shown in Table 1. These cell references allow formulas to use data from anywhere in the worksheet being worked on or from any other worksheet in the workbook that is opened. If the data needed was on different worksheets they would be referenced by referring to the worksheet, for example =SUM(Sheet2.B12+Sheet3.A11). Table 1: Common ways to enter formulas. |=A1+10||Displays the contents of cell A1 plus 10.| |=A1*16%||Displays 16% of the contents of A1.| |=A1 * A2||Displays the result of the multiplication of A1 and A2.| |=ROUND(A1;1)||Displays the contents of cell A1 rounded to one decimal place.| |=EFFECTIVE(5%;12)||Calculates the effective interest for 5% annual nominal interest with 12 payments a year.| |=B8-SUM(B10:B14)||Calculates B8 minus the sum of the cells B10 to B14.| |=SUM(B8;SUM(B10:B14))||Calculates the sum of cells B10 to B14 and adds the value to B8.| |=SUM(B1:B65536)||Sums all numbers in column B.| |=AVERAGE(BloodSugar)||Displays the average of a named range defined under the name BloodSugar.| |=IF(C31>140; "HIGH"; "OK")||Displays the results of a conditional analysis of data from two sources. If C31 = 144, then HIGH is displayed, otherwise OK is displayed.| Functions can be identified in Table 1 with a word, for example ROUND, followed by parentheses enclosing references or numbers. It is also possible to establish ranges for inclusion by naming them using Insert > Names, for example BloodSugar representing a range such as B3:B10. Logical functions can also be performed as represented by the IF statement which results in a conditional response based upon the data in the identified cell. A value of 3 in cell A2 would return the result Positive, -9 the result Negative. You can use the following operators in OpenOffice.org Calc: arithmetic, comparative, text, and reference. The addition, subtraction, multiplication and division operators return numerical results. The Negation and Percent operators identify a characteristic of the number found in the cell, for example -37. The example for Exponentiation illustrates how to enter a number that is being multiplied by itself a certain number of times, for example 23 = 2*2*2. 
Table 2: Arithmetical operators Comparative operators are found in formulas that use the IF function and return either a true or false answer; for example, =IF(B6>G12; 127; 0) which, loosely translated, means if the contents of cell B6 are greater than the contents of cell G12, then return the number 127, otherwise return the number 0. A direct answer of TRUE or FALSE can be obtained by entering a formula such as =B6>B12. If the numbers found in the referenced cells are accurately represented, the answer TRUE is returned, otherwise FALSE is returned. Table 3: Comparative operators |= (equal sign)||Equal||A1=B1| |> (Greater than)||Greater than||A1>B1| |< (Less than)||Less than||A1<B1| |>= (Greater than or equal to)||Greater than or equal to||A1>=B1| |<= (Less than or equal to)||Less than or equal to||A1<=B1| If cell A1 contains the numerical value 4 and cell B1 the numerical value 5, the above examples would yield results of FALSE, FALSE, TRUE, FALSE, TRUE, and TRUE. It is common for users to place text in spreadsheets. To provide for variability in what and how this type of data is displayed, text can be joined together in pieces coming from different places on the spreadsheet. Below is an example. In this example, specific pieces of the text were found in three different cells. To join these segments together, the formula also adds required spaces and punctuation housed within quotation marks resulting in a formula of =B6 & " " & C6 & ", " D6. The result is the concatenation into a date formatted in a particular sequence. Taking this example further, the result cell is defined as a name, then text concatenation is performed using this defined name. Calc has a CONCATENATE function which performs the same operation. Defining Names on a worksheet. In its simplest form a reference refers to a single cell, but references can also refer to a rectangle or cuboid range or a reference in a list of references. To build such references you need reference operators. An individual cell is identified by the column identifier (letter) located along the upper edge of the spreadsheet and a row identifier (number) found along the side of the spreadsheet. On spreadsheets read from left to right, the upper left cell is A1. The range operator is written as colon. An expression using the range operator has the following syntax: reference left : reference right The range operator builds a reference to the smallest range including both the cells referenced with the left reference and the cells referenced with the right reference. In the upper left corner of the figure above, the reference A1:D12 is shown, corresponding to the cells included in the drag operation with the mouse to highlight the range. |A2:B4||Reference to a rectangle range with 6 cells, 2 column width × 3 row height. When you click on the reference in the formula in the input line, a border indicates the rectangle.| |(A2:B4):C9||Reference to a rectangle range with cell A2 top left and cell C9 bottom right. So the range contains 24 cells, 3 column width × 8 row height.| |Sheet1.A3:Sheet3.D4||Reference to a cuboid range with 24 cells, 4 column width × 2 row height × 3 sheets depth.| When you enter B4:A2 or A4:B2 directly, then Calc will turn it to A2:B4. So the left top cell of the range is left of the colon and the bottom right cell is right of the colon. But if you name the cell B4 for example with '_start' and A2 with '_end', you can use _start:_end without any error. 
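The normalisation just described (B4:A2 being stored as A2:B4) is easy to model outside Calc. The Python sketch below is only an illustration of the rule, limited to single-letter columns for brevity; the function names are mine and it is not part of Calc or its API.

def parse_cell(cell):
    """Split a reference like 'B4' into (column_number, row_number)."""
    col = ord(cell[0].upper()) - ord('A') + 1   # single-letter columns only
    row = int(cell[1:])
    return col, row

def normalize_range(ref):
    """Return the range as Calc would store it (top-left cell first) and its cell count."""
    left, right = ref.split(':')
    c1, r1 = parse_cell(left)
    c2, r2 = parse_cell(right)
    top_left = (min(c1, c2), min(r1, r2))
    bottom_right = (max(c1, c2), max(r1, r2))
    cells = (bottom_right[0] - top_left[0] + 1) * (bottom_right[1] - top_left[1] + 1)
    as_text = "{}{}:{}{}".format(chr(ord('A') + top_left[0] - 1), top_left[1],
                                 chr(ord('A') + bottom_right[0] - 1), bottom_right[1])
    return as_text, cells

print(normalize_range("B4:A2"))   # ('A2:B4', 6)   2 columns wide x 3 rows high
print(normalize_range("A2:C9"))   # ('A2:C9', 24)  3 columns wide x 8 rows high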
Calc can not reference a whole column of unspecified length via A:A or a whole row via 1:1 yet as you might know from other spreadsheet programs, see Issue 20495 . Reference Concatenation Operator The concatenation operator is written as tilde. An expression using the concatenation operator has the following syntax reference left ~ reference right The result of such an expression is a reference list, which is an ordered list of references. Some functions can take a reference list as argument, SUM, MAX or INDEX for example. The reference concatenation is sometimes called 'union'. But it is not the union of two sets, 'reference left' and 'reference right' as normally understood in set theory. COUNT(A1:C3~B2:D2) returns 12 (=9+3), but it has only 10 cells when considered as the union of the two sets of cells. Notice that SUM(A1:C3;B2:D2) is different from SUM( A1:C3~B2:D2) although they give the same result. The first is a function call with 2 parameters, each of them a reference to a range. The second is a function call with 1 parameter, which is a reference list. The intersection operator is written as exclamation mark. An expression using the intersection operator has the following syntax reference left ! reference right If the references refer to single ranges, the result is a reference to a single range, containing all cells, which are both in the left reference and in the right reference. If the references are reference lists, than each list item from the left is intersected with each one from the right and these results are concatenated to a reference list. The order is, to first intersect the first item from the left with all items from the right, then intersect the second item from the left with all items from the right, and so on. |A2:B4 ! B3:D6||This results a reference to the range B3:B4, because these cells are inside A2:B4 and inside B3:D4.| |(A2:B4~B1:C2) ! (B2:C6~C1:D3)||First the intersections A2:B4!B2:C6, A2:B4!C1:D3, B1:C2!B2:C6 and B1:C2!C1:D3 are calculated. This results in B2:B4, empty, B2:C2, and C1:C2. Then these results are concatenated, dropping empty parts. So the final result is the reference list B2:B4 ~ B2:C2 ~ C1:C2.| You can use the intersection operator to refer a cell in a cross tabulation in an understandable way. If you have columns labeled 'Temperature' and 'Precipitation' and the rows labeled 'January', 'February', 'March',… then the expression 'February' ! !Temperature' will reference to the cell containing the temperature in February. The intersection operator (!) should have a higher precedence than the concatenation operator (~), but do not rely on the precedence. Relative and absolute references References are the way that we refer to the location of a particular cell in Calc and can be either relative (to the current cell) or absolute (a fixed amount). An example of a relative reference will illustrate the difference between a relative reference and absolute reference using the spreadsheet shown below. - Type the numbers 4 and 11 into cells C3 and C4 respectively of that spreadsheet. - Copy the formula in cell B5 to cell C5. You can do this by using a simple copy and paste or click and drag B5 to C5 as shown below. The formula in B5 calculates the sum of values in the two cells B3 and B4. - Click in cell C5. The formula bar shows =C3+C4 rather than =B3+B4 and the value in C5 is 15, the sum of 4 and 11 which are the values in C3 and C4. In cell B5 the references to cells B3 and B4 are relative references. 
This means that Calc interprets the formula in B5 and applies it to the cells in the B column and puts the result in the in the cell holding the formula. When you copied the formula to another cell, the same procedure was used to calculate the value to put in that cell. This time the formula in cell C5 referred to cells C3 and C4. You can think of a relative address as a pair of offsets to the current cell. Cell B1 is 1 column to the left of Cell C5 and 4 rows above. The address could be written as R[-1]C[-4]. In fact earlier spreadsheets allowed this notation method to be used in formulas. Whenever you copy this formula from cell B5 to another cell the result will always be the sum of the two numbers taken from the two cells one and two rows above the cell containing the formula. Relative addressing is the default method of referring to addresses in Calc. You may want to multiply a column of numbers by a fixed amount. A column of figures might show amounts in US Dollars. To convert these amounts to Euros it is necessary to multiply each dollar amount by the exchange rate. $US10.00 would be multiplied by 0.75 to convert to Euros, in this case Eur7.50. The following example shows how to input an exchange rate and use that rate to convert amounts in a column form USD to Euros. - Input the exchange rate Eur:USD (0.75) in cell D1. Enter amounts (in USD) into cells D2, D3 and D4, for example 10, 20, and 30. - In cell E2 type the formula =D2*D1. The result is 7.5, correctly shown. - Copy the formula in cell E2 to cell E3. The result is 200, clearly wrong! Calc has copied the formula using relative addressing - the formula in E3 is =D3*D2 and not what we want which is =D3*D1. - In cell E2 edit the formula to be =D2*$D$1. Copy it to cells E3 and E4. The results are now 15 and 22.5 which are correct. Step 2: Setting the exchange rate of Eur at 7.5, then copying it to E3. Copying formula from E2 to E3 and changing the formula to read absolute reference. Applying the correct formula from E2 to E3. The $ signs before the D and the 1 convert the reference to cell D1 from relative to absolute or fixed. If the formula is copied to another cell the second part will always show $D$1. The interpretation of this formula is “take the value in the cell one column to the left in the same row and multiply it by the value in cell D1”. Cell references can be shown in four ways: |D1||Relative, from cell E3: the cell one column to the left and two rows above| |$D$1||Absolute, from cell E3:the cell D1| |$D1||Partially absolute, from cell E3: the cell in column D and two rows above| |D$1||Partially absolute, from cell E3: the cell one column to the left and row 1| |To change references in formulas highlight the cell and press Shift-F4 to cycle through the four different types of references. This is of limited value in more complicated formulas, it is usually quicker to edit the formula by hand.| Knowledge of the use of relative and absolute references is essential if you want to copy and paste formulas and to link spreadsheets. Order of calculation Order of calculation refers to the sequence that numerical operations are performed. Division and multiplication are performed before addition or subtraction. There is a common tendency to expect calculations to be made from left to right as the equation would be read in English. Calc evaluates the entire formula, then based upon programming precedence breaks the formula down executing multiplication and division operations before other operations. 
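This is the same precedence rule most programming languages follow, so you can check the arithmetic outside Calc. A quick Python sketch (not Calc syntax, just the same numbers as the example that follows):

# multiplication binds more tightly than addition, exactly as in Calc
print(1 + 3 * 2 + 3)        # 10, because 3 * 2 is evaluated first
print(((1 + 3) * 2) + 3)    # 11, forcing strict left-to-right grouping
print((1 + 3) * (2 + 3))    # 20, another grouping with parentheses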
Therefore, when creating formulas you should test your formula to make sure that the correct result is being obtained. Following is an example of order of calculation in operation. Table 4 – Order of Calculation |Left To Right Calculation||Ordered Calculation| |1+3*2+3 = 11||=1+3*2+3 result 10| |1+3=4, then 4 X 2 = 8, then 8 + 3 = 11||3*2=6, then 1 + 6 + 3 = 10| |Another possible intention could be:||The program resolves the multiplication of 3 X 2 before dealing with the numbers being added.| |1+3*2+3 = 20| If you intend for the result to be either of the two possible solutions on the left, the way to achieve these results would be to order the formula as: |((1+3) * 2)+3 = 11||(1+3) * (2+3) = 20| |Use parentheses to group operations in the order you intend; for example = B4+G12*C4/M12 becoming =((B4+G12)*C4)/M12.| Another powerful feature of Calc is the ability to link data through several worksheets. The naming of worksheets can be helpful to identify where specific data may be found. A name such a Payroll or Boise Sales is much more meaningful than Sheet1. The function named SHEET() returns the sheet number in the collection of spreadsheets. There are several worksheets in each book and they are numbered from the left: Sheet1, Sheet2, and so forth. If you drag the worksheets around to different locations among the tabs, the function returns the number referring to the current position of this worksheet. An example of calculations obtaining data from other work can be seen in a business setting where a business combines its branch operations into a single worksheet. |Sheet containing data for Branch 1.| |Sheet containing data for Branch 2.| |Sheet containing data for Branch 3.| |Sheet containing combined data for all branches.| The spreadsheets have been set up with identical structures. The easiest way to do this is to set up the first Branch spreadsheet, input data, format cells, and prepare the formulas for the various sums of rows and columns. - On the worksheet tab, right-click and select Rename Sheet.... Type Branch1. Right-click on the tab again and select Move/Copy Sheet... - In the Move/Copy Sheet dialog, select the Copy option and select Sheet 2 in the area Insert before. Click OK, right-click on the tab of the sheet Branch1_2 and rename it to Branch2. Repeat to produce the Branch3 and Combined worksheets. - Enter the data for Branch 2 and Branch 3 into the respective sheets. Each sheet stands alone and reports the results for the individual branches. - In the Combined worksheet, click on cell K7. Type =, click on the tab Branch1, click on cell K7, press +, repeat for sheets Branch2 and Branch3 and press Enter. You now have a formula in cell K1 which adds the revenue from Greenery Sales for the 3 Branches. - Copy the formula, highlight the range K7..N17, click Edit > Paste Special, uncheck the Paste all and Formats boxes in the Selection area of the dialog box and click OK. You will see the following message: - Click Yes. You have now copied the formulas into each cell while maintaining the format you set up in the original worksheet. Of course, in this example you would have to tidy the worksheet up by removing the zeros in the non-formatted rows. |The Calc default is to paste all the attributes of the original cell(s) - formats, notes, objects, text strings and numbers.| The Function Wizard can also be used to accomplish the linking. Use of this Wizard is described in detail in the section on Functions.
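To make the cross-sheet formula =Branch1.K7+Branch2.K7+Branch3.K7 concrete, here is a toy Python model of the idea. It is only an analogy: dictionaries stand in for sheets, there is no connection to the real Calc file format or API, and the revenue figures are invented.

# each "sheet" is modelled as a dictionary mapping cell names to values
branch1 = {"K7": 12500.0}   # Greenery Sales revenue, Branch 1 (invented figure)
branch2 = {"K7": 9800.0}    # Branch 2 (invented figure)
branch3 = {"K7": 15100.0}   # Branch 3 (invented figure)

workbook = {"Branch1": branch1, "Branch2": branch2, "Branch3": branch3}

# the equivalent of typing =Branch1.K7+Branch2.K7+Branch3.K7 in the Combined sheet
combined = {"K7": sum(workbook[name]["K7"] for name in ("Branch1", "Branch2", "Branch3"))}

print(combined["K7"])   # 37400.0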
http://wiki.openoffice.org/wiki/Documentation/OOo3_User_Guides/Calc_Guide/Creating_formulas
13
50
Thrust-to-weight ratio is a ratio of thrust to weight of a rocket, jet engine, propeller engine, or a vehicle propelled by such an engine. It is a dimensionless quantity and is an indicator of the performance of the engine or vehicle. The instantaneous thrust-to-weight ratio of a vehicle varies continually during operation due to progressive consumption of fuel or propellant, and in some cases due to a gravity gradient. The thrust-to-weight ratio based on initial thrust and weight is often published and used as a figure of merit for quantitative comparison of the initial performance of vehicles. The thrust-to-weight ratio can be calculated by dividing the thrust (in SI units – in newtons) by the weight (in newtons) of the engine or vehicle. It is a dimensionless quantity. For valid comparison of the initial thrust-to-weight ratio of two or more engines or vehicles, thrust must be measured under controlled conditions. The thrust-to-weight ratio and wing loading are the two most important parameters in determining the performance of an aircraft. For example, the thrust-to-weight ratio of a combat aircraft is a good indicator of the manoeuvrability of the aircraft. The thrust-to-weight ratio varies continually during a flight. Thrust varies with throttle setting, airspeed, altitude and air temperature. Weight varies with fuel burn and changes of payload. For aircraft, the quoted thrust-to-weight ratio is often the maximum static thrust at sea-level divided by the maximum takeoff weight. Propeller-driven aircraft For propeller-driven aircraft, the thrust-to-weight ratio can be calculated as follows: - is engine power The thrust-to-weight ratio of a rocket, or rocket-propelled vehicle, is an indicator of its acceleration expressed in multiples of gravitational acceleration g. Rockets and rocket-propelled vehicles operate in a wide range of gravitational environments, including the weightless environment. It is customary to calculate the thrust-to-weight ratio using initial gross weight at sea-level on earth. This is sometimes called Thrust-to-Earth-weight ratio. The thrust-to-Earth-weight ratio of a rocket, or rocket-propelled vehicle, is an indicator of its acceleration expressed in multiples of earth’s gravitational acceleration, g0. It is important to note that the thrust-to-weight ratio for a rocket varies as the propellant gets utilized. If the thrust is constant, then the maximum ratio (maximum acceleration of the vehicle) is achieved just before the propellant is fully consumed (propellant weight is practically zero at this point). So for each rocket there a characteristic thrust-to-weight curve or acceleration curve, not just a scalar quantity. The thrust-to-weight ratio of an engine is larger for the bare engine than for the whole launch vehicle. The thrust-to-weight ratio of a bare engine is of use since it determines the maximum acceleration that any vehicle using that engine could theoretically achieve with minimum propellant and structure attached. For a takeoff from the surface of the earth using thrust and no aerodynamic lift, the thrust-to-weight ratio for the whole vehicle has to be more than one. In general, the thrust-to-weight ratio is numerically equal to the g-force that the vehicle can generate. Provided the vehicle's g-force exceeds local gravity (expressed as a multiple of g0) then takeoff can occur. The thrust to weight ratio of rockets is typically far higher than that of airbreathing jet engines. 
This is because of the much higher density of the material that is formed into the exhaust, compared to that of air; therefore, far less engineering materials are needed for pressurising it. Many factors affect a thrust-to-weight ratio, and the instantaneous value typically varies over the flight with the variations of thrust due to speed and altitude, and the weight due to the remaining propellant and payload mass. The main factors that affect thrust include freestream air temperature, pressure, density, and composition. Depending on the engine or vehicle under consideration, the actual performance will often be affected by buoyancy and local gravitational field strength. The Russian-made RD-180 rocket engine (which powers Lockheed Martin’s Atlas V) produces 3,820 kN of sea-level thrust and has a dry mass of 5,307 kg. Using the Earth surface gravitational field strength of 9.807 m/s², the sea-level thrust-to-weight ratio is computed as follows: (1 kN = 1000 N = 1000 kg⋅m/s²) |Concorde||0.373||Max Takeoff Weight, Full Reheat| |English Electric Lightning||0.63||maximum takeoff weight, No Reheat| |F-22 Raptor||>1.09 (1.26 with loaded weight & 50% fuel)||Maximum takeoff weight, Dry Thrust| |Mikoyan MiG-29||1.09||Full internal fuel, 4 AAMs| |F-15 Eagle||1.04||nominally loaded| |F-16 Fighting Falcon||1.096| |Hawker Siddeley Harrier||1.1| |English Electric Lightning||~1.2||on an empty weight basis, full reheat| |Eurofighter Typhoon||1.07 (100% fuel, 2 IRIS-T, 4 MBDA Meteor)| |Space Shuttle||1.5||Take-off | |Dassault Rafale||0.988 (100% fuel, 2 EM A2A missile, 2 IR A2A missile) version M | |Space Shuttle||3||Peak (throttled back for astronaut comfort)| Jet and Rocket Engines |Jet or Rocket engine||Mass |RD-0410 nuclear rocket engine||2,000||4,400||35.2||7,900||1.8| |J58 jet engine (SR-71 Blackbird)||2,722||6,000||150||34,000||5.2| |Rolls-Royce/Snecma Olympus 593 turbojet with reheat (Concorde) |Pratt & Whitney F119||1,800||3,900||91||20,500||7.95| |RD-0750 rocket engine, three-propellant mode||4,621||10,190||1,413||318,000||31.2| |RD-0146 rocket engine||260||570||98||22,000||38.4| |SSME rocket engine (Space Shuttle)||3,177||7,000||2,278||512,000||73.1| |RD-180 rocket engine||5,393||11,890||4,152||933,000||78.5| |F-1 (Saturn V first stage)||8,391||18,500||7,740.5||1,740,100||94.1| |NK-33 rocket engine||1,222||2,690||1,638||368,000||136.7| |Merlin 1D rocket engine||440||970||690||160,000||159.9| Rocket thrusts are vacuum thrusts unless otherwise noted Fighter Aircraft Table a: Thrust To Weight Ratios, Fuels Weights, and Weights of Different Fighter Planes |Specifications / Fighters||F-15K||F-15C||MiG-29K||MiG-29B||JF-17||J-10||F-35A||F-35B||F-35C||F-22| |Engine(s) Thrust Maximum (lbf)||58,320 (2)||46,900 (2)||39,682 (2)||36,600 (2)||18,300 (1)||27,557 (1)||39,900 (1)||39,900 (1)||39,900 (1)||70,000 (2)| |Aircraft Weight Empty (lb)||37,500||31,700||28,050||24,030||14,520||20,394||29,300||32,000||34,800||43,340| |Aircraft Weight Full fuel (lb)||51,023||45,574||39,602||31,757||19,650||28,760||47,780||46,003||53,800||61,340| |Aircraft Weight Max Take-off load (lb)||81,000||68,000||49,383||40,785||28,000||42,500||70,000||60,000||70,000||83,500| |Total fuel weight (lb)||13,523||13,874||11,552||07,727||05,130||08,366||18,480||14,003||19,000||18,000| |T/W ratio (Thrust / AC weight full fuel)||1.14||1.03||1.00||1.15||0.93||0.96||0.84||0.87||0.74||1.14| Table b: Thrust To Weight Ratios, Fuels Weights, and Weights of Different Fighter Planes (In International System) |In International 
System||F-15K||F-15C||MiG-29K||MiG-29B||JF-17||J-10||F-35A||F-35B||F-35C||F-22| |Engine(s) Thrust Maximum (kgf)||26,456 (2)||21,274 (2)||18,000 (2)||16,600 (2)||08,300 (1)||12,500 (1)||18,098 (1)||18,098 (1)||18 098 (1)||31,764 (2)| |Aircraft Weight Empty (kg)||17,010||14,379||12,723||10,900||06,586||09,250||13,290||14,515||15,785||19,673| |Aircraft Weight Full fuel (kg)||23,143||20,671||17,963||14,405||08,886||13,044||21,672||20,867||24,403||27,836| |Aircraft Weight Max Take-off load (kg)||36,741||30,845||22,400||18,500||12,700||19,277||31,752||27,216||31,752||37,869| |Total fuel weight (kg)||06,133||06,292||05,240||03,505||02,300||03,794||08,382||06,352||08,618||08,163| |T/W ratio (Thrust / AC weight full fuel)||1.14||1.03||1.00||1.15||0.93||0.96||0.84||0.87||0.74||1.14| - Fuel density used in calculations = 0.803 Kilograms/Liter - The Number inside ( ) brackets is the Number of Engine(s). - Engines powering F-15K are the Pratt & Whitney Engines, not General Electric's. - MiG-29K's empty weight is an estimate. - JF-17's Engine rating is of RD-93. - JF-17 if mated with its engine WS-13, and if that engine gets its promised 18,969 lb then the T/W ratio becomes 0.97 - J-10's empty weight & fuel weight is an estimate. - J-10's Engine rating is of AL-31FN. - J-10 if mated with its engine WS-10A, and if that engine gets its promised 132 KN(29,674 lbf) then the T/W ratio becomes 1.03 See also - John P. Fielding. Introduction to Aircraft Design, Cambridge University Press, ISBN 978-0-521-65722-8 - Daniel P. Raymer (1989). Aircraft Design: A Conceptual Approach, American Institute of Aeronautics and Astronautics, Inc., Washington, DC. ISBN 0-930403-51-7 - George P. Sutton & Oscar Biblarz. Rocket Propulsion Elements, Wiley, ISBN 978-0-471-32642-7 - Daniel P. Raymer, Aircraft Design: A Conceptual Approach, Section 5.1 - John P. Fielding, Introduction to Aircraft Design, Section 4.1.1 (p.37) - John P. Fielding, Introduction to Aircraft Design, Section 3.1 (p.21) - Daniel P. Raymer, Aircraft Design: A Conceptual Approach, Equation 5.2 - Daniel P. Raymer, Aircraft Design: A Conceptual Approach, Equation 5.1 - George P. Sutton & Oscar Biblarz, Rocket Propulsion Elements (p. 442, 7th edition) “thrust-to-weight ratio F/Wg is a dimensionless parameter that is identical to the acceleration of the rocket propulsion system (expressed in multiples of g0) if it could fly by itself in a gravity-free vacuum” - George P. Sutton & Oscar Biblarz, Rocket Propulsion Elements (p. 442, 7th edition) “The loaded weight Wg is the sea-level initial gross weight of propellant and rocket propulsion system hardware.” - "Thrust-to-Earth-weight ratio". The Internet Encyclopedia of Science. Retrieved 2009-02-22. - "F-15 Eagle Aircraft". About.com:Inventors. Retrieved 2009-03-03. - Section 9 "The English Electric (BAC) Lightning". Vectorsite. Archived from the original on 2004-02-04. Retrieved 2012-10-12. - Kampflugzeugvergleichstabelle Mader/Janes - Thrust: 6.781 million lbf, Weight: 4.5 million lb"Space Shuttle". Wikipedia. Retrieved 2009-09-10. - "Space Shuttle". Wikipedia. Retrieved 2009-09-10. - Wade, Mark. "RD-0410". Encyclopedia Astronautica. Retrieved 2009-09-25. - "«Konstruktorskoe Buro Khimavtomatiky» - Scientific-Research Complex / RD0410. Nuclear Rocket Engine. Advanced launch vehicles". KBKhA - Chemical Automatics Design Bureau. Retrieved 2009-09-25. - Aircraft: Lockheed SR-71A Blackbird - "Factsheets : Pratt & Whitney J58 Turbojet". National Museum of the United States Air Force. Retrieved 2010-04-15. 
- "Rolls-Royce SNECMA Olympus - Jane's Transport News". Retrieved 2009-09-25. "With afterburner, reverser and nozzle ... 3,175 kg ... Afterburner ... 169.2 kN" - Military Jet Engine Acquisition, RAND, 2002. - "«Konstruktorskoe Buro Khimavtomatiky» - Scientific-Research Complex / RD0750.". KBKhA - Chemical Automatics Design Bureau. Retrieved 2009-09-25. - "RD-180". Retrieved 2009-09-25. - Encyclopedia Astronautica: F-1 - Astronautix NK-33 entry - "SpaceX Unveils Plans To Be World’s Top Rocket Maker". Aviation Week and Space Technology. 2011-08-11. Retrieved 2012-10-11.(subscription required) - "Lockheed Martin Website".
http://en.wikipedia.org/wiki/Thrust-to-weight_ratio
13
96
In order to get started learning any programming language there are a number of concepts and ideas that are necessary. The goal of this chapter is to introduce you to the basic vocabulary of programming and some of the fundamental building blocks of Python. A value is one of the fundamental things — like a word or a number — that a program manipulates. The values we have seen so far are 5 (the result when we added 2 + 3), and "Hello, World!". We often refer to these values as objects and we will use the words value and object interchangeably. Actually, the 2 and the 3 that are part of the addition above are values(objects) as well. These objects are classified into different classes, or data types: 4 is an integer, and "Hello, World!" is a string, so-called because it contains a string or sequence of letters. You (and the interpreter) can identify strings because they are enclosed in quotation marks. If you are not sure what class a value falls into, Python has a function called type which can tell you. Not surprisingly, strings belong to the class str and integers belong to the class int. When we show the value of a string using the print function, such as in the third example above, the quotes are no longer present. The value of the string is the sequence of characters inside the quotes. The quotes are only necessary to help Python know what the value is. In the Python shell, it is not necessary to use the print function to see the values shown above. The shell evaluates the Python function and automatically prints the result. For example, consider the shell session shown below. When we ask the shell to evaluate type("Hello, World!"), it responds with the appropriate answer and then goes on to display the prompt for the next use. Python 3.1.2 (r312:79360M, Mar 24 2010, 01:33:18) [GCC 4.0.1 (Apple Inc. build 5493)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> type("Hello, World!") <class 'str'> >>> type(17) <class 'int'> >>> "Hello, World" 'Hello, World' >>> Note that in the last example, we simply ask the shell to evaluate the string “Hello, World”. The result is as you might expect, the string itself. Continuing with our discussion of data types, numbers with a decimal point belong to a class called float, because these numbers are represented in a format called floating-point. At this stage, you can treat the words class and type interchangeably. We’ll come back to a deeper understanding of what a class is in later chapters. What about values like "17" and "3.2"? They look like numbers, but they are in quotation marks like strings. Strings in Python can be enclosed in either single quotes (') or double quotes ("), or three of each (''' or """) Double quoted strings can contain single quotes inside them, as in "Bruce's beard", and single quoted strings can have double quotes inside them, as in 'The knights who say "Ni!"'. Strings enclosed with three occurrences of either quote symbol are called triple quoted strings. They can contain either single or double quotes: Triple quoted strings can even span multiple lines: Python doesn’t care whether you use single or double quotes or the three-of-a-kind quotes to surround your strings. Once it has parsed the text of your program or command, the way it stores the value is identical in all cases, and the surrounding quotes are not part of the value. But when the interpreter wants to display a string, it has to decide which quotes to use to make it look like a string. 
So the Python language designers usually chose to surround their strings by single quotes. What do think would happen if the string already contained single quotes? When you type a large integer, you might be tempted to use commas between groups of three digits, as in 42,000. This is not a legal integer in Python, but it does mean something else, which is legal: Well, that’s not what we expected at all! Because of the comma, Python chose to treat this as a pair of values. In fact, the print function can print any number of values as long as you separate them by commas. Notice that the values are separated by spaces when they are displayed. Remember not to put commas or spaces in your integers, no matter how big they are. Also revisit what we said in the previous chapter: formal languages are strict, the notation is concise, and even the smallest change might mean something quite different from what you intended. Check your understanding 2.1.1: How can you determine the type of a variable? 2.1.2: What is the data type of 'this is what kind of data'? Sometimes it is necessary to convert values from one type to another. Python provides a few simple functions that will allow us to do that. The functions int, float and str will (attempt to) convert their arguments into types int, float and str respectively. We call these type conversion functions. The int function can take a floating point number or a string, and turn it into an int. For floating point numbers, it discards the decimal portion of the number - a process we call truncation towards zero on the number line. Let us see this in action: The last case shows that a string has to be a syntactically legal number, otherwise you’ll get one of those pesky runtime errors. Modify the example by deleting the bottles and rerun the program. You should see the integer 23. The type converter float can turn an integer, a float, or a syntactically legal string into a float. The type converter str turns its argument into a string. Remember that when we print a string, the quotes are removed. However, if we print the type, we can see that it is definitely str. Check your understanding 2.2.1: What value is printed by the following statement: print( int(53.785) ) One of the most powerful features of a programming language is the ability to manipulate variables. A variable is a name that refers to a value. Assignment statements create new variables and also give them values to refer to. message = "What's up, Doc?" n = 17 pi = 3.14159 This example makes three assignments. The first assigns the string value "What's up, Doc?" to a new variable named message. The second gives the integer 17 to n, and the third assigns the floating-point number 3.14159 to a variable called pi. The assignment token, =, should not be confused with equals, which uses the token ==. The assignment statement links a name, on the left hand side of the operator, with a value, on the right hand side. This is why you will get an error if you enter: 17 = n When reading or writing code, say to yourself “n is assigned 17” or “n gets the value 17” or “n is a reference to the object 17” or “n refers to the object 17”. Don’t say “n equals 17”. A common way to represent variables on paper is to write the name with an arrow pointing to the variable’s value. This kind of figure, known as a reference diagram, is often called a state snapshot because it shows what state each of the variables is in at a particular instant in time. (Think of it as the variable’s state of mind). 
This diagram shows the result of executing the assignment statements. If you ask Python to evaluate a variable, it will produce the value that is currently linked to the variable. In other words, evaluating a variable will give you the value that is referred to by the variable. In each case the result is the value of the variable. To see this in even more detail, we can run the program using codelens. Now, as you step thru the statements, you can see the variables and the values they reference as those references are created. Variables also have types; again, we can ask the interpreter what they are. The type of a variable is the type of the object it currently refers to. We use variables in a program to “remember” things, like the current score at the football game. But variables are variable. This means they can change over time, just like the scoreboard at a football game. You can assign a value to a variable, and later assign a different value to the same variable. This is different from math. In math, if you give x the value 3, it cannot change to refer to a different value half-way through your calculations! To see this, read and then run the following program. You’ll notice we change the value of day three times, and on the third assignment we even give it a value that is of a different type. A great deal of programming is about having the computer remember things, e.g. The number of missed calls on your phone, and then arranging to update or change the variable when you miss another call. Check your understanding 2.3.2: What is printed after the following set of statements? day = "Thursday" day = 32.5 day = 19 print(day) Variable names can be arbitrarily long. They can contain both letters and digits, but they have to begin with a letter or an underscore. Although it is legal to use uppercase letters, by convention we don’t. If you do, remember that case matters. Bruce and bruce are different variables. The underscore character ( _) can appear in a name. It is often used in names with multiple words, such as my_name or price_of_tea_in_china. There are some situations in which names beginning with an underscore have special meaning, so a safe rule for beginners is to start all names with a letter. If you give a variable an illegal name, you get a syntax error. In the example below, each of the variable names is illegal. 76trombones = "big parade" more$ = 1000000 class = "Computer Science 101" 76trombones is illegal because it does not begin with a letter. more$ is illegal because it contains an illegal character, the dollar sign. But what’s wrong with class? It turns out that class is one of the Python keywords. Keywords define the language’s syntax rules and structure, and they cannot be used as variable names. Python has thirty-something keywords (and every now and again improvements to Python introduce or eliminate one or two): You might want to keep this list handy. If the interpreter complains about one of your variable names and you don’t know why, see if it is on this list. Programmers generally choose names for their variables that are meaningful to the human readers of the program — they help the programmer document, or remember, what the variable is used for. Beginners sometimes confuse “meaningful to the human readers” with “meaningful to the computer”. So they’ll wrongly think that because they’ve called some variable average or pi, it will somehow automagically calculate an average, or automagically associate the variable pi with the value 3.14159. No! 
The computer doesn’t attach semantic meaning to your variable names. So you’ll find some instructors who deliberately don’t choose meaningful names when they teach beginners — not because they don’t think it is a good habit, but because they’re trying to reinforce the message that you, the programmer, have to write some program code to calculate the average, or you must write an assignment statement to give a variable the value you want it to have. Check your understanding 2.4.1: True or False: the following is a legal variable name in Python: A_good_grade_is_A+ A statement is an instruction that the Python interpreter can execute. We have only seen the assignment statement so far. Some other kinds of statements that we’ll see shortly are while statements, for statements, if statements, and import statements. (There are other kinds too!) An expression is a combination of values, variables, operators, and calls to functions. Expressions need to be evaluated. If you ask Python to print an expression, the interpreter evaluates the expression and displays the result. In this example len is a built-in Python function that returns the number of characters in a string. We’ve previously seen the print and the type functions, so this is our third example of a function! The evaluation of an expression produces a value, which is why expressions can appear on the right hand side of assignment statements. A value all by itself is a simple expression, and so is a variable. Evaluating a variable gives the value that the variable refers to. If we take a look at this same example in the Python shell, we will see one of the distinct differences between statements and expressions. >>> y = 3.14 >>> x = len("hello") >>> print(x) 5 >>> print(y) 3.14 >>> y 3.14 >>> Note that when we enter the assignment statement, y = 3.14, only the prompt is returned. There is no value. This is due to the fact that statements, such as the assignment statement, do not return a value. They are simply executed. On the other hand, the result of executing the assignment statement is the creation of a reference from a variable, y, to a value, 3.14. When we execute the print function working on y, we see the value that y is referring to. In fact, evaluating y by itself results in the same response. Operators are special tokens that represent computations like addition, multiplication and division. The values the operator works on are called operands. The following are all legal Python expressions whose meaning is more or less clear: 20 + 32 hour - 1 hour * 60 + minute minute / 60 5 ** 2 (5 + 9) * (15 - 7) The tokens +, -, and *, and the use of parenthesis for grouping, mean in Python what they mean in mathematics. The asterisk (*) is the token for multiplication, and ** is the token for exponentiation. Addition, subtraction, multiplication, and exponentiation all do what you expect. When a variable name appears in the place of an operand, it is replaced with the value that it refers to before the operation is performed. For example, what if we wanted to convert 645 minutes into hours. In Python 3, the division operator uses the token / which always evaluates to a floating point result. In the previous example, what we might have wanted to know was how many whole hours there are, and how many minutes remain. Python gives us two different flavors of the division operator. The second, called integer division, uses the token //. It always truncates its result down to the next smallest integer (to the left on the number line). 
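A quick sketch of the two division flavors applied to the 645-minute example discussed above (the numbers come from the text; the variable names are mine):

minutes = 645

hours_exact = minutes / 60     # true division: always gives a float
hours_whole = minutes // 60    # integer division: truncates toward the smaller integer

print(hours_exact)   # 10.75
print(hours_whole)   # 10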
Take care that you choose the correct flavor of the division operator. If you’re working with expressions where you need floating point values, use the division operator /. If you want an integer result, use //. The modulus operator, sometimes also called the remainder operator or integer remainder operator works on integers (and integer expressions) and yields the remainder when the first operand is divided by the second. In Python, the modulus operator is a percent sign (%). The syntax is the same as for other operators: So 7 divided by 3 is 2 with a remainder of 1. The modulus operator turns out to be surprisingly useful. For example, you can check whether one number is divisible by another—if x % y is zero, then x is divisible by y. Also, you can extract the right-most digit or digits from a number. For example, x % 10 yields the right-most digit of x (in base 10). Similarly x % 100 yields the last two digits. Finally, returning to our time example, the remainder operator is extremely useful for doing conversions, say from seconds, to hours, minutes and seconds. If we start with a number of seconds, say 7684, the following program uses integer division and remainder to convert to an easier form. Step through it to be sure you understand how the division and remainder operators are being used to compute the correct values. Check your understanding 2.6.1: What is printed from the following statement? print (18 / 4) 2.6.2: What is printed from the following statement? print (18 // 4) 2.6.3: What is printed from the following statement? print (18 % 4) The program in the previous section works fine but is very limited in that it only works with one value for total_secs. What if we wanted to rewrite the program so that it was more general. One thing we could do is allow the use to enter any value they wish for the number of seconds. The program would then print the proper result for that starting value. In order to do this, we need a way to get input from the user. Luckily, in Python there is a built-in function to accomplish this task. As you might expect, it is called input. n = input("Please enter your name: ") The input function allows the user to provide a prompt string. When the function is evaluated, the prompt is shown. The user of the program can enter the name and press return. When this happens the text that has been entered is returned from the input function, and in this case assigned to the variable n. Even if you asked the user to enter their age, you would get back a string like "17". It would be your job, as the programmer, to convert that string into a int or a float, using the int or float converter functions we saw earlier. To modify our previous program, we will add an input statement to allow the user to enter the number of seconds. Then we will convert that string to an integer. From there the process is the same as before. The variable str_seconds will refer to the string that is entered by the user. As we said above, even though this string may be 7684, it is still a string and not a number. To convert it to an integer, we use the int function. The result is referred to by total_secs. Now, each time you run the program, you can enter a new value for the number of seconds to be converted. Check your understanding 2.7.1: What is printed from the following statements? n = input("Please enter your age: ") # user types in 18 print ( type(n) ) When more than one operator appears in an expression, the order of evaluation depends on the rules of precedence. 
Python follows the same precedence rules for its mathematical operators that mathematics does. Due to some historical quirk, an exception to the left-to-right left-associative rule is the exponentiation operator **. A useful hint is to always use parentheses to force exactly the order you want when exponentiation is involved: Check your understanding 2.8.1: What is the value of the following expression: 16 - 2 * 5 // 3 + 1 2.8.2: What is the value of the following expression: 2 ** 2 ** 3 * 3 As we have mentioned previously, it is legal to make more than one assignment to the same variable. A new assignment makes an existing variable refer to a new value (and stop referring to the old value). The first time bruce is printed, its value is 5, and the second time, its value is 7. The assignment statement changes the value (the object) that bruce refers to. Here is what reassignment looks like in a reference diagram: It is important to note that in mathematics, a statement of equality is always true. If a is equal to b now, then a will always equal to b. In Python, an assignment statement can make two variables equal, but because of the possibility of reassignment, they don’t have to stay that way: Line 4 changes the value of a but does not change the value of b, so they are no longer equal. We will have much more to say about equality in a later chapter. In some programming languages, a different symbol is used for assignment, such as <- or :=. The intent is that this will help to avoid confusion. Python chose to use the tokens = for assignment, and == for equality. This is a popular choice also found in languages like C, C++, Java, and C#. Check your understanding 2.9.1: After the following statements, what are the values of x and y? x = 15 y = x x = 22 One of the most common forms of reassignment is an update where the new value of the variable depends on the old. For example, x = x + 1 This means get the current value of x, add one, and then update x with the new value. The new value of x is the old value of x plus 1. Although this assignment statement may look a bit strange, remember that executing assignment is a two-step process. First, evaluate the right-hand side expression. Second, let the variable name on the left-hand side refer to this new resulting object. The fact that x appears on both sides does not matter. The semantics of the assignment statement makes sure that there is no confusion as to the result. If you try to update a variable that doesn’t exist, you get an error because Python evaluates the expression on the right side of the assignment operator before it assigns the resulting value to the name on the left. Before you can update a variable, you have to initialize it, usually with a simple assignment. In the above example, x was initialized to 6. Updating a variable by adding 1 is called an increment; subtracting 1 is called a decrement. Sometimes programmers also talk about bumping a variable, which means the same as incrementing it by 1. Check your understanding 2.10.1: What is printed by the following statements? x = 12 x = x - 1 print (x) A statement that assigns a value to a name (variable). To the left of the assignment operator, =, is a name. To the right of the assignment token is an expression which is evaluated by the Python interpreter and then assigned to the name. The difference between the left and right hand sides of the assignment statement is often confusing to new programmers. 
In the following assignment:

n = n + 1

n plays a very different role on each side of the =. On the right it is a value and makes up part of the expression which will be evaluated by the Python interpreter before assigning it to the name on the left.

Evaluate the following numerical expressions in your head, then use the active code window to check your results:

- 5 ** 2
- 9 * 5
- 15 / 12
- 12 / 15
- 15 // 12
- 12 // 15
- 5 % 2
- 9 % 5
- 15 % 12
- 12 % 15
- 6 % 6
- 0 % 7

You look at the clock and it is exactly 2pm. You set an alarm to go off in 51 hours. At what time does the alarm go off? Write a Python program to solve the general version of the above problem. Ask the user for the time now (in hours), and ask for the number of hours to wait. Your program should output what the time will be on the clock when the alarm goes off.

You go on a wonderful holiday leaving on day number 3 (a Wednesday). You return home after 137 nights. Write a general version of the program which asks for the starting day number and the length of your stay, and it will tell you the day of the week you will return on.

Take the sentence: All work and no play makes Jack a dull boy. Store each word in a separate variable, then print out the sentence on one line using print.

Add parentheses to the expression 6 * 1 - 2 to change its value from 4 to -6.

The formula for computing the final amount if one is earning compound interest is given on Wikipedia as A = P(1 + r/n)^(nt). Write a Python program that assigns the principal amount of 10000 to the variable P, assigns the value 12 to n, and assigns the interest rate of 8% (0.08) to r. Then have the program prompt the user for the number of years, t, that the money will be compounded for. Calculate and print the final amount after t years. (See the sketch below.)

Write a program that will compute the area of a circle. Prompt the user to enter the radius and print a nice message back to the user with the answer.

Write a program that will compute the area of a rectangle. Prompt the user to enter the width and height of the rectangle. Print a nice message with the answer.

Write a program that will compute MPG for a car. Prompt the user to enter the number of miles driven and the number of gallons used. Print a nice message with the answer.

Write a program that will convert degrees Celsius to degrees Fahrenheit.

Write a program that will convert degrees Fahrenheit to degrees Celsius.
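Here is a hedged sketch of two of these exercises, the alarm clock and the compound-interest problem; the prompts and variable names are my own choices rather than the book's official solutions.

current_time = int(input("What hour is it now (0-23)? "))
wait_hours = int(input("How many hours until the alarm goes off? "))
alarm_time = (current_time + wait_hours) % 24      # clock arithmetic keeps the answer on a 24-hour dial
print("The alarm will go off at", alarm_time, "o'clock")
# e.g. 2pm is hour 14, so (14 + 51) % 24 = 65 % 24 = 17, which is 5pm

P = 10000                                          # principal
n = 12                                             # compounding periods per year
r = 0.08                                           # annual interest rate
t = int(input("How many years will the money be compounded for? "))
A = P * (1 + r / n) ** (n * t)                     # A = P(1 + r/n)^(nt)
print("The final amount after", t, "years is", A)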
http://interactivepython.org/courselib/static/thinkcspy/SimplePythonData/simpledata.html
13
50
Measuring angles can be challenging. Sometimes, two lines that create an angle don’t even intersect, or they intersect but not at their endpoints. For this reason, specifying an accurate vertex for the angle is important.

To dimension an angle, start the DIMANGULAR command. You can get there in two ways:

- Home tab> Annotation panel> Dimension drop-down list> Angular
- Annotate tab> Dimensions panel> Dimension drop-down list> Angular

You see the Select arc, circle, line, or <specify vertex>: prompt. You can respond in one of four ways:

Select an arc

If you select an arc, DIMANGULAR dimensions the arc. The arc’s center is the vertex of the angle. You can place the dimension either inside or outside the arc.

Select a circle

If you select a circle, DIMANGULAR uses the point you picked when selecting the circle as the first angle endpoint. The circle’s center is the vertex. You are prompted for the second angle endpoint; pick a point on the circle. In this way, you are dimensioning an arc, which is just a portion of a circle.

Tip: Let’s say that you draw a circle and then draw lines that cross the circle, as you see below. If you try to select the circle using the Intersection object snap, you end up selecting a line, because it’s on top of the circle. That’s because newer objects are on top of older objects. If you want to select the circle, select it, right-click it, and choose Draw Order> Bring to Front. Of course, you could get the same angle measurement by selecting the lines, but if you want to dimension the circle (perhaps you’ll erase the lines later), bringing the circle to the front can help.

Select a line

If you select a line, DIMANGULAR’s prompt asks you for a second line. If the lines don’t intersect, the implied intersection is the vertex.

Press Enter to specify all the points of the angle

If you want to individually specify the vertex and the two angle endpoints, just press Enter. You’re then prompted for the vertex, 1st angle endpoint, and 2nd angle endpoint.

Dimensioning the outside angle

In the above example, the dimension measures the minor angle, the portion that is less than 180°. By simply moving the cursor below the vertex, you can measure the major angle, as you see here.

Always use object snaps when specifying the vertex and the angle endpoints. This will ensure that you get an accurate measurement.

Remember that you can specify the decimal precision of a dimension in the dimension style. For angular dimensions, start the DIMSTYLE command to open the Dimension Style dialog box. Then, on the Primary Units tab, use the Angular Dimensions section’s Precision drop-down list.

Do you have any tips for dimensioning angles? Leave a comment!
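DIMANGULAR handles the geometry for you, but it may help to see what "implied intersection" means numerically. The snippet below is plain Python (not AutoCAD or AutoLISP code) with made-up coordinates: it finds the implied vertex of two segments that never touch and reports both the minor angle and the outside (major) angle at that vertex.

import math

def implied_vertex(p1, p2, p3, p4):
    # Intersection of the infinite lines through p1-p2 and p3-p4.
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        raise ValueError("parallel lines have no implied intersection")
    d1 = x1 * y2 - y1 * x2
    d2 = x3 * y4 - y3 * x4
    return ((d1 * (x3 - x4) - (x1 - x2) * d2) / denom,
            (d1 * (y3 - y4) - (y1 - y2) * d2) / denom)

def dimension_angles(p1, p2, p3, p4):
    # Angle between the two picked directions, and its 360-degree complement.
    a1 = math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    a2 = math.degrees(math.atan2(p4[1] - p3[1], p4[0] - p3[0]))
    minor = abs(a1 - a2)
    if minor > 180:
        minor = 360 - minor
    return minor, 360 - minor

line_a = ((0, 0), (4, 1))              # example segments that never touch
line_b = ((1, 5), (3, 2))
print("implied vertex:", implied_vertex(*line_a, *line_b))
minor, major = dimension_angles(*line_a, *line_b)
print("minor angle:", round(minor, 1), "  outside angle:", round(major, 1))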
http://www.ellenfinkelstein.com/acadblog/dimensioning-basics-part-iii-create-accurate-dimensions-for-angles/
13
212
In physics, the Coriolis effect is an apparent deflection of moving objects when they are viewed from a rotating reference frame. For example, consider two children on opposite sides of a spinning roundabout (carousel), who are throwing a ball to each other (see picture). From the children's point of view, this ball's path is curved sideways by the Coriolis effect. From the thrower's perspective, the deflection is to the right with anticlockwise carousel rotation (viewed from above). Deflection is to the left with clockwise rotation. The effect on Earth is due to this fact: the Earth is rotating fastest at the equator, and rotates not at all at the poles (in km/hr). A bird flying north, away from the equator, carries this faster motion with it (or, equivalently, the earth under the bird is rotating more slowly than it was) - and the bird's flight curves eastward slightly (though its heading stays straight north). In general: objects moving away from the equator curve eastward; objects moving towards the equator curve westward. Moving away from the equator, the land underneath rotates more slowly, and vice-versa. An object gains or loses relative speed over ground as it moves away from, or towards, the equator, respectively. Newton's laws of motion govern the motion of an object in an inertial frame of reference. When transforming Newton's laws to a rotating frame of reference, the Coriolis force appears, along with the centrifugal force. If the rotation speed of the frame is not constant, the Euler force will also appear. All three forces are proportional to the mass of the object. The Coriolis force is proportional to the speed of rotation and the centrifugal force is proportional to its square. The Coriolis force acts in a direction perpendicular to the rotation axis and to the velocity of the body in the rotating frame and is proportional to the object's speed in the rotating frame. The centrifugal force acts outwards in the radial direction and is proportional to the distance of the body from the axis of the rotating frame. These three additional forces are termed either inertial forces, fictitious forces or pseudo forces. These names are used in a technical sense, to mean simply that these forces vanish in an inertial frame of reference. The mathematical expression for the Coriolis force appeared in an 1835 paper by a French scientist Gaspard-Gustave Coriolis in connection with hydrodynamics, and also in the tidal equations of Pierre-Simon Laplace in 1778. Early in the 20th century, the term Coriolis force began to be used in connection with meteorology. Perhaps the most commonly encountered rotating reference frame is the Earth. Moving objects on the surface of the Earth experience a Coriolis force, and appear to veer to the right in the northern hemisphere, and to the left in the southern. Exactly on the equator, motion east or west, remains (precariously) along the line of the equator. Initial motion of a pendulum in any other direction will lead to a motion in a loop. Movements of air in the atmosphere and water in the ocean are notable examples of this behavior: rather than flowing directly from areas of high pressure to low pressure, as they would on a non-rotating planet, winds and currents tend to flow to the right of this direction north of the equator, and to the left of this direction south of the equator. This effect is responsible for the rotation of large cyclones (see Coriolis effects in meteorology). 
Gaspard-Gustave Coriolis published a paper in 1835 on the energy yield of machines with rotating parts, such as waterwheels. This paper considered the supplementary forces that are detected in a rotating frame of reference. Coriolis divided these supplementary forces into two categories. The second category contained a force that arises from the cross product of the angular velocity of a coordinate system and the projection of a particle's velocity into a plane perpendicular to the system's axis of rotation. Coriolis referred to this force as the "compound centrifugal force" due to its analogies with the centrifugal force already considered in category one. By the early 20th century the effect was known as the "acceleration of Coriolis". By 1919 it was referred to as "Coriolis' force" and by 1920 as "Coriolis force".

Understanding of the kinematics of how exactly the rotation of the Earth affects airflow was partial at first. Late in the 19th century, the full extent of the large-scale interaction of the pressure gradient force and the deflecting force, which in the end causes air masses to move along isobars, was understood.

In non-vector terms: at a given rate of rotation of the observer, the magnitude of the Coriolis acceleration of the object is proportional to the velocity of the object and also to the sine of the angle between the direction of movement of the object and the axis of rotation. The vector formula for the magnitude and direction of the Coriolis acceleration is

a_C = −2 Ω × v

where (here and below) v is the velocity of the particle in the rotating system, and Ω is the angular velocity vector, which has magnitude equal to the rotation rate ω and is directed along the axis of rotation of the rotating reference frame, and the × symbol represents the cross product operator. The equation may be multiplied by the mass of the relevant object to produce the Coriolis force:

F_C = −2 m Ω × v

See fictitious force for a derivation. The Coriolis effect is the behavior added by the Coriolis acceleration. The formula implies that the Coriolis acceleration is perpendicular both to the direction of the velocity of the moving mass and to the frame's rotation axis. So in particular:

- if the velocity is parallel to the rotation axis, the Coriolis acceleration is zero;
- if the velocity is straight inward toward the axis, the acceleration is in the direction of local rotation;
- if the velocity is straight outward from the axis, the acceleration is against the direction of local rotation;
- if the velocity is in the direction of local rotation, the acceleration is outward from the axis;
- if the velocity is against the direction of local rotation, the acceleration is inward toward the axis.

The vector cross product can be evaluated as the determinant of a matrix:

Ω × v = | i    j    k   |
        | Ω_x  Ω_y  Ω_z |
        | v_x  v_y  v_z |

where the vectors i, j, k are unit vectors in the x, y and z directions.

The Coriolis effect exists only when using a rotating reference frame. In the rotating frame it behaves exactly like a real force (that is to say, it causes acceleration and has real effects). However, the Coriolis force is a consequence of inertia, and is not attributable to an identifiable originating body, as is the case for electromagnetic or nuclear forces, for example. From an analytical viewpoint, to use Newton's second law in a rotating system, the Coriolis force is mathematically necessary, but it disappears in a non-accelerating, inertial frame of reference. For a mathematical formulation see Mathematical derivation of fictitious forces.

A denizen of a rotating frame, such as an astronaut in a rotating space station, will very probably find that the interpretation of everyday life in terms of the Coriolis force accords more simply with intuition and experience than a cerebral reinterpretation of events from an inertial standpoint. For example, nausea due to an experienced push may be more instinctively explained by the Coriolis force than by the law of inertia. See also Coriolis effect (perception).
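The formula above is easy to evaluate numerically. The sketch below is illustrative only (the wind speed, latitude and local east/north/up frame are my own choices, anticipating the rotating-sphere discussion further down): it computes a_C = −2 Ω × v for a 10 m/s eastward wind at 45° N and checks that the horizontal part matches the shortcut f(v_north, −v_east) with Coriolis parameter f = 2 ω sin(latitude).

import numpy as np

omega = 7.292e-5                                   # Earth's rotation rate, rad/s
lat = np.radians(45.0)
Omega = omega * np.array([0.0, np.cos(lat), np.sin(lat)])   # rotation vector, axes x=east, y=north, z=up
v = np.array([10.0, 0.0, 0.0])                     # 10 m/s due east

a_c = -2.0 * np.cross(Omega, v)
print("full Coriolis acceleration:", a_c)          # roughly [0, -1.03e-3, +1.03e-3] m/s^2,
                                                   # a push to the south (right of the motion)
                                                   # plus a small upward part

f = 2.0 * omega * np.sin(lat)                      # the Coriolis parameter used later in the article
print("horizontal shortcut:       ", [f * v[1], -f * v[0]])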
In meteorology, a rotating frame (the Earth) with its Coriolis force proves a more natural framework for explanation of air movements than a hypothetical, non-rotating, inertial frame without Coriolis forces. In long-range gunnery, sight corrections for the Earth's rotation are based upon Coriolis force. These examples are described in more detail below. The acceleration entering the Coriolis force arises from two sources of change in velocity that result from rotation: the first is the change of the velocity of an object in time. The same velocity (in an inertial frame of reference where the normal laws of physics apply) will be seen as different velocities at different times in a rotating frame of reference. The apparent acceleration is proportional to the angular velocity of the reference frame (the rate at which the coordinate axes change direction), and to the component of velocity of the object in a plane perpendicular to the axis of rotation. This gives a term . The minus sign arises from the traditional definition of the cross product (right hand rule), and from the sign convention for angular velocity vectors. The second is the change of velocity in space. Different positions in a rotating frame of reference have different velocities (as seen from an inertial frame of reference). In order for an object to move in a straight line it must therefore be accelerated so that its velocity changes from point to point by the same amount as the velocities of the frame of reference. The effect is proportional to the angular velocity (which determines the relative speed of two different points in the rotating frame of reference), and to the component of the velocity of the object in a plane perpendicular to the axis of rotation (which determines how quickly it moves between those points). This also gives a term . The time, space and velocity scales are important in determining the importance of the Coriolis effect. Whether rotation is important in a system can be determined by its Rossby number, which is the ratio of the velocity, U, of a system to the product of the Coriolis parameter,f, and the length scale, L, of the motion: The Rossby number is the ratio of inertial to Coriolis forces. A small Rossby number signifies a system which is strongly affected by Coriolis forces, and a large Rossby number signifies a system in which inertial forces dominate. For example, in tornadoes, the Rossby number is large, in low-pressure systems it is low and in oceanic systems it is of the order of unity. As a result, in tornadoes the Coriolis force is negligible, and balance is between pressure and centrifugal forces. In low-pressure systems, centrifugal force is negligible and balance is between Coriolis and pressure forces. In the oceans all three forces are comparable. An atmospheric system moving at U = 10 m/s occupying a spatial distance of L = 1000 km, has a Rossby number of approximately 0.1. A man playing catch may throw the ball at U = 30 m/s in a garden of length L = 50 m. The Rossby number in this case would be about = 6000. Needless to say, one does not worry about which hemisphere one is in when playing catch in the garden. However, an unguided missile obeys exactly the same physics as a baseball, but may travel far enough and be in the air long enough to notice the effect of Coriolis. Long-range shells in the Northern Hemisphere landed close to, but to the right of, where they were aimed until this was noted. (Those fired in the southern hemisphere landed to the left.) 
In fact, it was this effect that first got the attention of Coriolis himself. Consider a location with latitude on a sphere that is rotating around the north-south axis. A local coordinate system is set up with the x axis horizontally due east, the y axis horizontally due north and the z axis vertically upwards.The rotation vector, velocity of movement and Coriolis acceleration expressed in this local coordinate system (listing components in the order East (e), North (n) and Upward (u)) are: When considering atmospheric or oceanic dynamics, the vertical velocity is small and the vertical component of the Coriolis acceleration is small compared to gravity. For such cases, only the horizontal (East and North) components matter. The restriction of the above to the horizontal plane is (setting vu=0): where is called the Coriolis parameter. By setting vn = 0, it can be seen immediately that (for positive and ) a movement due east results in an acceleration due south. Similarly, setting ve = 0, it is seen that a movement due north results in an acceleration due east. In general, observed horizontally, looking along the direction of the movement causing the acceleration, the acceleration always is turned 90° to the right and of the same size regardless of the horizontal orientation. That is: On a merry-go-round in the night Coriolis was shaken with fright Despite how he walked 'Twas like he was stalked By some fiend always pushing him right – David Morin, Eric Zaslow, E'beth Haley, John Golden, and Nathan Salwen As a different case, consider equatorial motion setting φ = 0°. In this case, Ω is parallel to the North or n-axis, and: Accordingly, an eastward motion (that is, in the same direction as the rotation of the sphere) provides an upward acceleration known as the Eötvös effect, and an upward motion produces an acceleration due west. The motion of the Sun as seen from Earth is dominated by the Coriolis and centrifugal forces. For ease of explanation consider the situation of a distant star (with mass m) located over the equator, at position , perpendicular to the rotation vector so . It is observed to rotate in the opposite direction as the Earth's rotation once a day, making its velocity . The fictitious force consisting of Coriolis and centrifugal forces is: This can be recognised as the centripetal force that will keep the star in a circular movement around the observer. The general situation for a star, not above the equator is more complicated. Just as for air flows on Earth's surface, on the northern hemisphere a star's trajectory will be deflected to the right. After rising at a certain angle, it will bend to the right, culminate and start setting. Perhaps the most important instance of the Coriolis effect is in the large-scale dynamics of the oceans and the atmosphere. In meteorology and ocean science, it is convenient to use a rotating frame of reference where the Earth is stationary. The fictitious centrifugal and Coriolis forces must then be introduced. Their relative importance is determined by the Rossby number. Tornadoes have a high Rossby number, so Coriolis forces are unimportant, and are not discussed here. As discussed next, low-pressure areas are phenomena where Coriolis forces are significant. If a low-pressure area forms in the atmosphere, air will tend to flow in towards it, but will be deflected perpendicular to its velocity by the Coriolis acceleration. A system of equilibrium can then establish itself creating circular movement, or a cyclonic flow. 
Because the Rossby number is low, the force balance is largely between the pressure gradient force acting towards the low-pressure area and the Coriolis force acting away from the center of the low pressure. Instead of flowing down the gradient, large scale motions in the atmosphere and ocean tend to occur perpendicular to the pressure gradient. This is known as geostrophic flow. On a non-rotating planet fluid would flow along the straightest possible line, quickly eliminating pressure gradients. Note that the geostrophic balance is thus very different from the case of "inertial motions" (see below) which explains why mid-latitude cyclones are larger by an order of magnitude than inertial circle flow would be. This pattern of deflection, and the direction of movement, is called Buys-Ballot's law. In the atmosphere, the pattern of flow is called a cyclone. In the Northern Hemisphere the direction of movement around a low-pressure area is counterclockwise. In the Southern Hemisphere, the direction of movement is clockwise because the rotational dynamics is a mirror image there. At high altitudes, outward-spreading air rotates in the opposite direction. Cyclones rarely form along the equator due to the weak Coriolis effect present in this region. An air or water mass moving with speed subject only to the Coriolis force travels in a circular trajectory called an 'inertial circle'. Since the force is directed at right angles to the motion of the particle, it will move with a constant speed, and perform a complete circle with frequency f. The magnitude of the Coriolis force also determines the radius of this circle: On the Earth, a typical mid-latitude value for f is 10−4 s−1; hence for a typical atmospheric speed of 10 m/s the radius is 100 km, with a period of about 14 hours. In the ocean, where a typical speed is closer to 10 cm/s, the radius of an inertial circle is 1 km. These inertial circles are clockwise in the northern hemisphere (where trajectories are bent to the right) and anti-clockwise in the southern hemisphere. If the rotating system is a parabolic turntable, then f is constant and the trajectories are exact circles. On a rotating planet, f varies with latitude and the paths of particles do not form exact circles. Since the parameter f varies as the sine of the latitude, the radius of the oscillations associated with a given speed are smallest at the poles (latitude = ±90°), and increase toward the equator. The Coriolis effect strongly affects the large-scale oceanic and atmospheric circulation, leading to the formation of robust features like jet streams and western boundary currents. Such features are in geostrophic balance, meaning that the Coriolis and pressure gradient forces balance each other. Coriolis acceleration is also responsible for the propagation of many types of waves in the ocean and atmosphere, including Rossby waves and Kelvin waves. It is also instrumental in the so-called Ekman dynamics in the ocean, and in the establishment of the large-scale ocean flow pattern called the Sverdrup balance. The practical impact of the Coriolis effect is mostly caused by the horizontal acceleration component produced by horizontal motion. There are other components of the Coriolis effect. Eastward-traveling objects will be deflected upwards (feel lighter), while westward-traveling objects will be deflected downwards (feel heavier). This is known as the Eötvös effect. This aspect of the Coriolis effect is greatest near the equator. 
The force produced by this effect is similar to the horizontal component, but the much larger vertical forces due to gravity and pressure mean that it is generally unimportant dynamically. In addition, objects traveling upwards or downwards will be deflected to the west or east respectively. This effect is also the greatest near the equator. Since vertical movement is usually of limited extent and duration, the size of the effect is smaller and requires precise instruments to detect. Coriolis rotation can conceivably play a role on scales as small as a bathtub. It is a commonly held myth that the every-day rotation of a bathtub or toilet vortex is due to whether one is in the northern or southern hemisphere. An article in Nature, by Ascher Shapiro, describes an experiment in which all other forces to the system are removed by filling a 6 ft. tank with water and allowing it to settle for 24 hrs (to remove any internal velocity), in a room where the temperature has stabilized (temperature differences in the room can introduce forces inside the fluid). The drain plug is then very slowly removed, and tiny pieces of floating wood are used to observe rotation. During the first 12 to 15 mins, no rotation is observed. Then, a vortex appears and consistently begins to rotate in a counter-clockwise direction (the experiment was performed in the Northern hemisphere, in Boston, MA). This is repeated and the results averaged to make sure the effect is real. The Coriolis effect does indeed play a role in vortex rotation for draining liquids that have come to rest for a long time. ["Bath-Tub Vortex", Nature. Dec 15th, 1962. Vol 196, No. 4859, p. 1080-1081] In reality, this experiment shows that the Coriolis effect is a few orders of magnitude smaller than various random influences on drain direction, such as the geometry of the container and the direction in which water was initially added to it. In the above experiment, if the water settles for 2 hrs or less (instead of 24), then the vortex can be seen to rotate in either direction. Most toilets flush in only one direction, because the toilet water flows into the bowl at an angle. If water shot into the basin from the opposite direction, the water would spin in the opposite direction. The idea that toilets and bathtubs drain differently in the Northern and Southern Hemispheres has been popularized by several television programs, including The Simpsons episode "Bart vs. Australia" and the The X-Files episode "Die Hand Die Verletzt". Several science broadcasts and publications, including at least one college-level physics textbook, have also stated this. Some sources that incorrectly attribute draining direction to the Coriolis force also get the direction wrong, claiming that water would turn clockwise into drains in the Northern Hemisphere. The Rossby number can also tell us about the bathtub. If the length scale of the tub is about L = 1 m, and the water moves towards the drain at about U = 60 cm/s, then the Rossby number is about 6 000. Thus, the bathtub is, in terms of scales, much like a game of catch, and rotation is unlikely to be important. The dominant physical process that creates the rapid vortex close to the plug hole is the conservation of angular momentum. The radius of rotation decreases as water approaches the plug hole so the rate of rotation increases, equivalent to bringing your arms and legs in while spinning on a chair. 
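The scale estimates quoted above (a Rossby number near 0.1 for a weather system, several thousand for a game of catch or a draining bathtub, and inertial-circle radii of 100 km and 1 km) can be reproduced with the typical mid-latitude value f = 1e-4 s^-1 given in the text; the short sketch below does exactly that.

f = 1e-4                               # typical mid-latitude Coriolis parameter, s^-1

def rossby(U, L):
    return U / (f * L)                 # ratio of inertial to Coriolis forces

print(rossby(U=10.0, L=1_000_000.0))   # weather system (10 m/s over 1000 km): about 0.1
print(rossby(U=30.0, L=50.0))          # ball thrown in a garden: about 6000
print(rossby(U=0.6, L=1.0))            # water heading for the bathtub drain: about 6000

def inertial_radius(v):
    return v / f                       # radius of the inertial circle, in metres

print(inertial_radius(10.0) / 1000)    # 10 m/s wind: 100 km
print(inertial_radius(0.10) / 1000)    # 10 cm/s ocean current: 1 km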
Ballistic missiles and satellites appear to follow curved paths when plotted on common world maps mainly because the earth is spherical and the shortest distance between two points on the Earth's surface (called a great circle) is usually not a straight line on those maps. Every two-dimensional (flat) map necessarily distorts the Earth's curved (three-dimensional) surface in some way. Typically (as in the commonly used Mercator projection, for example), this distortion increases with proximity to the poles. In the northern hemisphere for example, a ballistic missile fired toward a distant target using the shortest possible route (a great circle) will appear on such maps to follow a path north of the straight line from target to destination, and then curve back toward the equator. This occurs because the latitudes, which are projected as straight horizontal lines on most world maps, are in fact circles on the surface of a sphere, which get smaller as they get closer to the pole. Being simply a consequence of the sphericity of the Earth, this would be true even if the Earth didn't rotate. The Coriolis effect is of course also present, but its effect on the plotted path is much smaller. The Coriolis effects became important in external ballistics for calculating the trajectories of very long-range artillery shells. The most famous historical example was the Paris gun, used by the Germans during World War I to bombard Paris from a range of about 120 km (75 mi). Figure 1 is an animation of the classic illustration of Coriolis force. Another visualization of the Coriolis and centrifugal forces is this animation clip. Figure 3 is a graphical version. Here is a question: given the radius of the turntable R, the rate of angular rotation ω, and the speed of the cannonball (assumed constant) v, what is the correct angle θ to aim so as to hit the target at the edge of the turntable? The inertial frame of reference provides one way to handle the question: calculate the time to interception, which is tf = R / v . Then, the turntable revolves an angle ω tf in this time. If the cannon is pointed an angle θ = ω tf = ω R / v, then the cannonball arrives at the periphery at position number 3 at the same time as the target. No discussion of Coriolis force can arrive at this solution as simply, so the reason to treat this problem is to demonstrate Coriolis formalism in an easily visualized situation. The trajectory in the inertial frame (denoted A) is a straight line radial path at angle θ. The position of the cannonball in ( x, y ) coordinates at time t is: In the turntable frame (denoted B), the x- y axes rotate at angular rate ω, so the trajectory becomes: and three examples of this result are plotted in Figure 4. To determine the components of acceleration, a general expression is used from the article fictitious force: in which the term in Ω × vB is the Coriolis acceleration and the term in Ω × ( Ω × rB) is the centrifugal acceleration. The results are (let α = θ − ωt): producing a centrifugal acceleration: producing a Coriolis acceleration: Figure 5 and Figure 6 show these accelerations for a particular example. 
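For concreteness, the aiming rule and the two trajectories just described can be reproduced in a few lines. The radius, rotation rate and ball speed below are made-up values; rotating the coordinates by ωt is the transformation into the turntable frame that the text describes.

import math

R, omega, v = 1.0, 0.5, 2.0            # turntable radius (m), rotation rate (rad/s), ball speed (m/s)
theta = omega * R / v                  # aim-off angle: omega * t_f, with time of flight t_f = R / v
print("aim angle:", math.degrees(theta), "degrees")

for i in range(6):                     # a few sample times up to interception at t_f
    t = (R / v) * i / 5
    xA = v * t * math.cos(theta)       # inertial frame: a straight radial path at angle theta
    yA = v * t * math.sin(theta)
    xB = math.cos(omega * t) * xA + math.sin(omega * t) * yA    # the same point seen from axes
    yB = -math.sin(omega * t) * xA + math.cos(omega * t) * yA   # that have rotated by omega*t
    print(f"t={t:.2f}s  inertial=({xA:.2f},{yA:.2f})  turntable=({xB:.2f},{yB:.2f})")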
It is seen that the Coriolis acceleration not only cancels the centrifugal acceleration, but together they provide a net "centripetal", radially inward component of acceleration (that is, directed toward the center of rotation): and an additional component of acceleration perpendicular to rB (t): The "centripetal" component of acceleration resembles that for circular motion at radius rB, while the perpendicular component is velocity dependent, increasing with the radial velocity v and directed to the right of the velocity. The situation could be described as a circular motion combined with an "apparent Coriolis acceleration" of 2ωv. However, this is a rough labeling: a careful designation of the true centripetal force refers to a local reference frame that employs the directions normal and tangential to the path, not coordinates referred to the axis of rotation. These results also can be obtained directly by two time differentiations of rB (t). Agreement of the two approaches demonstrates that one could start from the general expression for fictitious acceleration above and derive the trajectories of Figure 4. However, working from the acceleration to the trajectory is more complicated than the reverse procedure used here, which, of course, is made possible in this example by knowing the answer in advance. As a result of this analysis an important point appears: all the fictitious accelerations must be included to obtain the correct trajectory. In particular, besides the Coriolis acceleration, the centrifugal force plays an essential role. It is easy to get the impression from verbal discussions of the cannonball problem, which are focussed on displaying the Coriolis effect particularly, that the Coriolis force is the only factor that must be considered; emphatically, that is not so. A turntable for which the Coriolis force is the only factor is the parabolic turntable. A somewhat more complex situation is the idealized example of flight routes over long distances, where the centrifugal force of the path and aeronautical lift are countered by gravitational attraction. Figure 7 illustrates a ball tossed from 12:00 o'clock toward the center of a counterclockwise rotating carousel. On the left, the ball is seen by a stationary observer above the carousel, and the ball travels in a straight line to the center, while the ball-thrower rotates counterclockwise with the carousel. On the right the ball is seen by an observer rotating with the carousel, so the ball-thrower appears to stay at 12:00 o'clock. The figure shows how the trajectory of the ball as seen by the rotating observer can be constructed. On the left, two arrows locate the ball relative to the ball-thrower. One of these arrows is from the thrower to the center of the carousel (providing the ball-thrower's line of sight), and the other points from the center of the carousel to the ball.(This arrow gets shorter as the ball approaches the center.) A shifted version of the two arrows is shown dotted. On the right is shown this same dotted pair of arrows, but now the pair are rigidly rotated so the arrow corresponding to the line of sight of the ball-thrower toward the center of the carousel is aligned with 12:00 o'clock. The other arrow of the pair locates the ball relative to the center of the carousel, providing the position of the ball as seen by the rotating observer. 
By following this procedure for several positions, the trajectory in the rotating frame of reference is established as shown by the curved path in the right-hand panel. The ball travels in the air, and there is no net force upon it. To the stationary observer the ball follows a straight-line path, so there is no problem squaring this trajectory with zero net force. However, the rotating observer sees a curved path. Kinematics insists that a force (pushing to the right of the instantaneous direction of travel for a counterclockwise rotation) must be present to cause this curvature, so the rotating observer is forced to invoke a combination of centrifugal and Coriolis forces to provide the net force required to cause the curved trajectory. Figure 8 describes a more complex situation where the tossed ball on a turntable bounces off the edge of the carousel and then returns to the tosser, who catches the ball. The effect of Coriolis force on its trajectory is shown again as seen by two observers: an observer (referred to as the "camera") that rotates with the carousel, and an inertial observer. Figure 8 shows a bird's-eye view based upon the same ball speed on forward and return paths. Within each circle, plotted dots show the same time points. In the left panel, from the camera's viewpoint at the center of rotation, the tosser (smiley face) and the rail both are at fixed locations, and the ball makes a very considerable arc on its travel toward the rail, and takes a more direct route on the way back. From the ball tosser's viewpoint, the ball seems to return more quickly than it went (because the tosser is rotating toward the ball on the return flight). On the carousel, instead of tossing the ball straight at a rail to bounce back, the tosser must throw the ball toward the right of the target and the ball then seems to the camera to bear continuously to the left of its direction of travel to hit the rail (left because the carousel is turning clockwise). The ball appears to bear to the left from direction of travel on both inward and return trajectories. The curved path demands this observer to recognize a leftward net force on the ball. (This force is "fictitious" because it disappears for a stationary observer, as is discussed shortly.) For some angles of launch, a path has portions where the trajectory is approximately radial, and Coriolis force is primarily responsible for the apparent deflection of the ball (centrifugal force is radial from the center of rotation, and causes little deflection on these segments). When a path curves away from radial, however, centrifugal force contributes significantly to deflection. The ball's path through the air is straight when viewed by observers standing on the ground (right panel). In the right panel (stationary observer), the ball tosser (smiley face) is at 12 o'clock and the rail the ball bounces from is at position one (1). From the inertial viewer's standpoint, positions one (1), two (2), three (3) are occupied in sequence. At position 2 the ball strikes the rail, and at position 3 the ball returns to the tosser. Straight-line paths are followed because the ball is in free flight, so this observer requires that no net force is applied. A video clip of the tossed ball and other experiments are found at youtube: coriolis effect (2-11), University of Illinois WW2010 Project (some clips repeat only a fraction of a full rotation), and youtube. To demonstrate the Coriolis effect, a parabolic turntable can be used. 
On a flat turntable, the inertia of a co-rotating object would force it off the edge. But if the surface of the turntable has the correct parabolic bowl shape (see Figure 9) and is rotated at the correct rate, the force components shown in Figure 10 are arranged so the component of gravity tangential to the bowl surface will exactly equal the centripetal force necessary to keep the object rotating at its velocity and radius of curvature (assuming no friction). (See banked turn.) This carefully contoured surface allows the Coriolis force to be displayed in isolation. Discs cut from cylinders of dry ice can be used as pucks, moving around almost frictionlessly over the surface of the parabolic turntable, allowing effects of Coriolis on dynamic phenomena to show themselves. To get a view of the motions as seen from the reference frame rotating with the turntable, a video camera is attached to the turntable so as to co-rotate with the turntable, with results as shown in Figure 11. In the left panel of Figure 11, which is the viewpoint of a stationary observer, the gravitational force in the inertial frame pulling the object toward the center (bottom ) of the dish is proportional to the distance of the object from the center. A centripetal force of this form causes the elliptical motion. In the right panel, which shows the viewpoint of the rotating frame, the inward gravitational force in the rotating frame (the same force as in the inertial frame) is balanced by the outward centrifugal force (present only in the rotating frame). With these two forces balanced, in the rotating frame the only unbalanced force is Coriolis (also present only in the rotating frame), and the motion is an inertial circle. Analysis and observation of circular motion in the rotating frame is a simplification compared to analysis or observation of elliptical motion in the inertial frame. Because this reference frame rotates several times a minute, rather than only once a day like the Earth, the Coriolis acceleration produced is many times larger, and so easier to observe on small time and spatial scales, than is the Coriolis acceleration caused by the rotation of the Earth. In a manner of speaking, the Earth is analogous to such a turntable. The rotation has caused the planet to settle on a spheroid shape, such that the normal force, the gravitational force and the centrifugal force exactly balance each other on a "horizontal" surface. (See equatorial bulge.) The Coriolis effect caused by the rotation of the Earth can be seen indirectly through the motion of a Foucault pendulum. A practical application of the Coriolis effect is the mass flow meter, an instrument that measures the mass flow rate and density of a fluid flowing through a tube. The operating principle involves inducing a vibration of the tube through which the fluid passes. The vibration, though it is not completely circular, provides the rotating reference frame which gives rise to the Coriolis effect. While specific methods vary according to the design of the flow meter, sensors monitor and analyze changes in frequency, phase shift, and amplitude of the vibrating flow tubes. The changes observed represent the mass flow rate and density of the fluid. In polyatomic molecules, the molecule motion can be described by a rigid body rotation and internal vibration of atoms about their equilibrium position. As a result of the vibrations of the atoms, the atoms are in motion relative to the rotating coordinate system of the molecule. 
Coriolis effects will therefore be present, and will cause the atoms to move in a direction perpendicular to the original oscillations. This leads to a mixing in molecular spectra between the rotational and vibrational levels.

Flies (Diptera) and moths (Lepidoptera) utilize the Coriolis effect when flying: their halteres, or antennae in the case of moths, oscillate rapidly and are used as vibrational gyroscopes. See Coriolis effect in insect stability. In this context, the Coriolis effect has nothing to do with the rotation of the Earth. Birds could theoretically use the Earth's Coriolis effect to partly determine their direction over time, although the sensory precision required would be extraordinary (from Bird Migration by Griffin).

[Figure: seen from an inertial frame of reference (top picture), the black object moves in a straight line; in the picture at the bottom, the rotating observer (red dot) sees the object follow a curved path because of the Coriolis and centrifugal effects.]

[Figure: a low-pressure system over Iceland spins counter-clockwise due to the balance between the Coriolis force and the pressure gradient force.]

The Coriolis effect is an apparent force associated with rotation. Gaspard-Gustave de Coriolis first described the Coriolis effect in 1835 using mathematics. It is a fictitious force that acts upon all bodies whose motion is described using a rotating frame of reference. The force is perpendicular both to the direction of motion of the body it acts on and to the rotation axis of the frame of reference.
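Returning to the parabolic turntable described a few paragraphs above: the article does not give the dish profile, but the balance it describes (the component of gravity along the surface supplying exactly the centripetal force m ω² r) fixes it as z(r) = ω² r² / (2g). A sketch, with an assumed rotation rate:

import math

g = 9.81
rpm = 10.0                                 # assumed rotation rate, not a figure from the article
omega = 2.0 * math.pi * rpm / 60.0         # rad/s

for r in (0.0, 0.1, 0.2, 0.3):             # radius in metres
    z = omega**2 * r**2 / (2.0 * g)        # dish height needed for force balance at radius r
    print("r =", r, "m  ->  dish height z =", round(z * 1000, 1), "mm")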
http://www.thefullwiki.org/Coriolis_effect
13
132
From Wikipedia, the free encyclopedia A logic gate performs a logical operation on one or more logic inputs and produces a single logic output. The logic normally performed is Boolean logic and is most commonly found in digital circuits. Logic gates are primarily implemented electronically using diodes or transistors, but can also be constructed using electromagnetic relays, fluidics, optical or even mechanical elements. A Boolean logical input or output always takes one of two logic levels. These logic levels can go by many names including: on / off, high (H) / low (L), one (1) / zero (0), true (T) / false (F), positive / negative, positive / ground, open circuit / close circuit, potential difference / no difference. For consistency, the names 1 and 0 will be used below. A logic gate takes one or more logic-level inputs and produces a single logic-level output. Because the output is also a logic level, an output of one logic gate can connect to the input of one or more other logic gates. Two outputs cannot be connected together, however, as they may be attempting to produce different logic values. In electronic logic gates, this would cause a short circuit. In electronic logic, a logic level is represented by a certain voltage (which depends on the type of electronic logic in use). Each logic gate requires power so that it can source and sink currents to achieve the correct output voltage. In logic circuit diagrams the power is not shown, but in a full electronic schematic, power connections are required. Basic logic gates and mechanical equivalents While semiconductor electronic logic (see later) is preferred in most applications, relays and switches are still used in some industrial applications and for educational purposes. In this article, the various types of logic gates are illustrated with drawings of their relay-and-switch implementations, although the reader should remember that these are electrically different from the semiconductor equivalents that are discussed later. Relay logic was historically important in industrial automation (see ladder logic and programmable logic controller). In relay logic, the two logic levels are 'open circuit' and 'closed circuit'. The electrical signal is extra, it can be of any voltage and any current, and can also flow in either direction. In electronic circuits, logic levels are fixed voltage levels. It is not possible to pass any other signal through the gates, as the logic levels are the only voltages possible. For more information about how modern semiconductor logic gates work, see CMOS. The three types of essential logic gate are the AND, the OR and the NOT gate. With these three, any conceivable Boolean equation can be implemented. However, for convenience, the derived types NAND, NOR, XOR and XNOR are also used, which often use fewer circuit elements for a given equation than an implementation based solely on AND, OR and NOT would do. In fact, the NAND has the lowest component count of any gate apart from NOT when implemented using modern semiconductor techniques, and since a NAND can implement both a NOT and, by application of De Morgan's Law, an OR function, this single type can effectively replace AND, OR and NOT, making it the only type of gate that is needed in a real system. Programmable logic arrays will very often contain nothing but NAND gates to simplify their internal design. In the switch circuit, the circuit is closed when both A and B switches are pressed, otherwise the circuit is open. 
|A||B||A AND B| Another important arrangement is the OR gate, whose truth table is shown below, left. The output is 1 when input A or input B is 1. The output is also 1 when both inputs are 1. In the switch circuit, the circuit is closed when switch A or switch B (or both) are pressed, else the circuit is open. |A||B||A OR B| A simpler arrangement is the NOT gate, whose truth table is shown opposite. In the NOT gate the output is the logical opposite of the input. This means if the input is 1, the output is 0, and if the input is 0 the output is 1. In the switch circuit, a normally closed switch is used, making a closed circuit. If the switch is pushed the circuit becomes open circuit. The NAND gate is the NOT of an AND gate. That is, the output is 1 when NOT (A AND B are 1), as shown in the truth table. |A||B||A NAND B| The NOR gate is the NOT of an OR gate. That is, the output is 1 only when both inputs are 0, as shown in the truth table. |A||B||A NOR B| XOR and XNOR gates |A||B||A XOR B| XOR is a 'stricter' version of the OR gate. Rather than allowing the output to be 1 when either one or both of the inputs are 1, an XOR gate has a 1 output only when only one input is 1. Thus, it has the truth table shown to the right. This can also be interpreted (for a two-input gate) as "1 output when the inputs are different". |A||B||A XNOR B| XNOR is an inverted version of the XOR gate. Thus, it has the truth table shown to the right. This can also be interpreted as "1 output when the inputs are same". The preceding simple logic gates can be combined to form more complicated Boolean logic circuits. Logic circuits are often classified in two groups: combinatorial logic, in which the outputs are continuous-time functions of the inputs, and sequential logic, in which the outputs depend on information stored by the circuits as well as on the inputs. The simplest form of electronic logic is diode logic. This allows AND and OR gates to be built, but not inverters, and so is an incomplete form of logic. To build a complete logic system, valves or transistors can be used. The simplest family of logic gates using bipolar transistors is called resistor-transistor logic, or RTL. Unlike diode logic gates, RTL gates can be cascaded indefinitely to produce more complex logic functions. These gates were used in early integrated circuits. For higher speed, the resistors used in RTL were replaced by diodes, leading to diode-transistor logic, or DTL. It was then discovered that one transistor could do the job of two diodes in the space of one diode, so transistor-transistor logic, or TTL, was created. In some types of chip, to reduce size and power consumption still further, the bipolar transistors were replaced with complementary field-effect transistors (MOSFETs), resulting in complementary metal-oxide-semiconductor (CMOS) logic. For small-scale logic, designers now use prefabricated logic gates from families of devices such as the TTL 7400 series invented by Texas Instruments and the CMOS 4000 series invented by RCA, and their more recent descendants. These devices usually contain transistors with multiple emitters, used to implement the AND function, which are not available as separate components. Increasingly, these fixed-function logic gates are being replaced by programmable logic devices, which allow designers to pack a huge number of mixed logic gates into a single integrated circuit. 
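The body rows of the truth tables above were lost in this copy, and the claim that NAND alone can reproduce the other gates is easy to verify directly. Before continuing with programmable logic, here is a short Python sketch (using the 1 = true, 0 = false convention from the text; the function names are ad hoc) that regenerates the tables and rebuilds NOT, AND and OR from two-input NANDs.

def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NOT(a):     return 1 - a
def NAND(a, b): return 1 - (a & b)
def NOR(a, b):  return 1 - (a | b)
def XOR(a, b):  return a ^ b
def XNOR(a, b): return 1 - (a ^ b)

gates = [AND, OR, NAND, NOR, XOR, XNOR]
print("A B | " + " ".join(g.__name__.ljust(4) for g in gates))
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b} | " + "    ".join(str(g(a, b)) for g in gates))

# NAND as the only building block (the OR form follows from De Morgan's law).
def NOT_from_nand(a):    return NAND(a, a)                 # tie both inputs together
def AND_from_nand(a, b): return NAND(NAND(a, b), NAND(a, b))
def OR_from_nand(a, b):  return NAND(NAND(a, a), NAND(b, b))

for a in (0, 1):
    for b in (0, 1):
        assert AND_from_nand(a, b) == AND(a, b)
        assert OR_from_nand(a, b) == OR(a, b)
    assert NOT_from_nand(a) == NOT(a)
print("NOT, AND and OR all reproduced from NAND gates alone")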
The field-programmable nature of programmable logic devices such as FPGAs has removed the 'hard' property of hardware; it is now possible to change the logic design of a hardware system by reprogramming some of its components, thus allowing the features or function of a hardware implementation of a logic system to be changed. Electronic logic gates differ significantly from their relay-and-switch equivalents. They are much faster, consume much less power, and are much smaller (all by a factor of a million or more in most cases). Also, there is a fundamental structural difference. The switch circuit creates a continuous metallic path for current to flow (in either direction) between its input and its output. The semiconductor logic gate, on the other hand, acts as a high-gain voltage amplifier, which sinks a tiny current at its input and produces a low-impedance voltage at its output. It is not possible for current to flow between the output and the input of a semiconductor logic gate. Another important advantage of standardised semiconductor logic gates, such as the 7400 and 4000 families, is that they are cascadable. This means that the output of one gate can be wired to the inputs of one or several other gates, and so on ad infinitum, enabling the construction of circuits of arbitrary complexity without requiring the designer to understand the internal workings of the gates. In practice, the output of one gate can only drive a finite number of inputs to other gates, a number called the 'fanout limit', but this limit is rarely reached in the newer CMOS logic circuits, as compared to TTL circuits. Also, there is always a delay, called the 'propagation delay', from a change in input of a gate to the corresponding change in its output. When gates are cascaded, the total propagation delay is approximately the sum of the individual delays, an effect which can become a problem in high-speed circuits. Electronic Logic levels The two logic levels in binary logic circuits can be described as two voltage ranges, "zero" and "one", or "high" and "low". Each technology has its own requirements for the voltages used to represent the two logic levels, to ensure that the output of any device can reliably drive the input of the next device. Usually, two non-overlapping voltage ranges, one for each level, are defined. The difference between the high and low levels ranges from 0.7 volts in ECL logic to around 28 volts in relay logic. Logic gates and hardware NAND and NOR logic gates are the two pillars of logic, in that all other types of Boolean logic gates (i.e., AND, OR, NOT, XOR, XNOR) can be created from a suitable network of just NAND or just NOR gate(s). They can be built from relays or transistors, or any other technology that can create an inverter and a two-input AND or OR gate. These functions can be seen in the table below. |OR||Any high input will drive the output high| |NOR||Any high input will drive the output low| |AND||Any low input will drive the output low| |NAND||Any low input will drive the output high| There are two sets of symbols in common use, both now defined by ANSI/IEEE Std 91-1984 and its supplement ANSI/IEEE Std 91a-1991. The "distinctive shape" set, based on traditional schematics, is used for simple drawings and is quicker to draw by hand. It is sometimes unofficially described as "military", reflecting its origin if not its modern usage. 
The "rectangular shape" set, based on IEC 60617-12, has rectangular outlines for all types of gate, and allows representation of a much wider range of devices than is possible with the traditional symbols. The IEC's system has been adopted by other standards, such as EN 60617-12:1999 in Europe and BS EN 60617-12:1999 in the United Kingdom. |Type||Distinctive shape||Rectangular shape| |In electronics a NOT gate is more commonly called an inverter. The circle on the symbol is called a bubble, and is generally used in circuit diagrams to indicate an inverted input or output.| |In practice, the cheapest gate to manufacture is usually the NAND gate. Additionally, Charles Peirce showed that NAND gates alone (as well as NOR gates alone) can be used to reproduce all the other logic gates. Symbolically, a NAND gate can also be shown using the OR shape with bubbles on its inputs, and a NOR gate can be shown as an AND gate with bubbles on its inputs. This reflects the equivalency due to De Morgans law, but it also allows a diagram to be read more easily, or a circuit to be mapped onto available physical gates in packages easily, since any circuit node that has bubbles at both ends can be replaced by a simple bubble-less connection and a suitable change of gate. If the NAND is drawn as OR with input bubbles, and a NOR as AND with input bubbles, this gate substitution occurs automatically in the diagram (effectively, bubbles "cancel"). This is commonly seen in real logic diagrams - thus the reader must not get into the habit of associating the shapes exclusively as OR or AND shapes, but also take into account the bubbles at both inputs and outputs in order to determine the "true" logic function indicated. Two more gates are the exclusive-OR or XOR function and its inverse, exclusive-NOR or XNOR. The two input Exclusive-OR is true only when the two input values are different, false if they are equal, regardless of the value. If there are more than two inputs, the gate generates a true at its output if the number of trues at its input is odd (). In practice, these gates are built from combinations of simpler logic gates. DeMorgan equivalent symbols By use of De Morgan's theorem, an AND gate can be turned into an OR gate by inverting the sense of the logic at its inputs and outputs. This leads to a separate set of symbols with inverted inputs and the opposite core symbol. These symbols can make circuit diagrams for circuits using active low signals much clearer and help to show accidental connection of an active high output to an active low input or vice-versa. Storage of bits Related to the concept of logic gates (and also built from them) is the idea of storing a bit of information. The gates discussed up to here cannot store a value: when the inputs change, the outputs immediately react. It is possible to make a storage element either through a capacitor (which stores charge due to its physical properties) or by feedback. Connecting the output of a gate to the input causes it to be put through the logic again, and choosing the feedback correctly allows it to be preserved or modified through the use of other inputs. A set of gates arranged in this fashion is known as a "latch", and more complicated designs that utilise clocks (signals that oscillate with a known period) and change only on the rising edge are called edge-triggered "flip-flops". The combination of multiple flip-flops in parallel, to store a multiple-bit value, is known as a register. 
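The storage-by-feedback idea just described can be made concrete with the classic latch built from two cross-coupled NOR gates. The simulation loop below is my own construction, not something from the text: it simply re-evaluates the two gates until their outputs stop changing, i.e. until the feedback has settled.

def NOR(a, b): return 1 - (a | b)

def sr_latch(S, R, q=0, qbar=1):
    for _ in range(10):                          # a few passes are enough to settle
        new_q, new_qbar = NOR(R, qbar), NOR(S, q)
        if (new_q, new_qbar) == (q, qbar):
            break
        q, qbar = new_q, new_qbar
    return q, qbar

q, qbar = sr_latch(S=1, R=0)                     # "set": Q becomes 1
print("after set:   Q =", q)
q, qbar = sr_latch(S=0, R=0, q=q, qbar=qbar)     # both inputs low: Q is remembered
print("held:        Q =", q)
q, qbar = sr_latch(S=0, R=1, q=q, qbar=qbar)     # "reset": Q becomes 0
print("after reset: Q =", q)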
These registers or capacitor-based circuits are known as computer memory. They vary in performance, based on factors of speed, complexity, and reliability of storage, and many different types of designs are used based on the application. Three-state logic gates Three-state, or 3-state, logic gates have three states of the output: high (H), low (L) and high-impedance (Z). The high-impedance state plays no role in the logic, which remains strictly binary. These devices are used on buses to allow multiple chips to send data. A group of three-states driving a line with a suitable control circuit is basically equivalent to a multiplexer, which may be physically distributed over separate devices or plug-in cards. 'Tri-state', a widely-used synonym of 'three-state', is a trademark of the National Semiconductor Corporation. Logic circuits include such devices as multiplexers, registers, ALUs, and computer memory, all the way up through complete microprocessors which can contain more than a 100 million gates. In practice, the gates are made from field effect transistors (FETs), particularly metal-oxide-semiconductor FETs (MOSFETs). History and development The earliest logic gates were made mechanically. Charles Babbage, around 1837, devised the Analytical Engine. His logic gates relied on mechanical gearing to perform operations. Electromagnetic relays were later used for logic gates. In 1891, Almon Strowger patented a device containing a logic gate switch circuit (U.S. Patent 0447918). Strowger's patent was not in widespread use until the 1920s. Starting in 1898, Nikola Tesla filed for patents of devices containing logic gate circuits (see List of Tesla patents). Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification, in 1907, of the Fleming valve can be used as AND logic gate. Claude E. Shannon introduced the use of Boolean algebra in the analysis and design of switching circuits in 1937. Walther Bothe, inventor of the coincidence circuit, got part of the 1954 Nobel Prize in physics, for the first modern electronic AND gate in 1924. Active research is taking place in molecular logic gates. Common Basic Logic ICs |4001||7402||Quad two-input NOR gate| |4011||7400||Quad two-input NAND gate| |4049||7404||Hex NOT gate (inverting buffer)| |4070||7486||Quad two-Input XOR gate| |4071||7432||Quad two-input OR gate| |4077||74266||Quad two-input XNOR gate| |4081||7408||Quad two-input AND gate| For more CMOS logic ICs, including gates with more than two inputs, see 4000 series. - Symbols for logic gates. Twenty First Century Books, Breckenridge, CO. - Tesla's invention of the AND logic gate. Twenty First Century Books, Breckenridge, CO. - Wireless Remote Control and the Electronic Computer Logic Gate. Twenty First Century Books, Breckenridge, CO. - Anderson, Leland I., "Nikola Tesla — Guided Weapons & Computer Technology". ISBN 0-9636012-5-3 - Bigelow, Ken, "How logic gates work internally (for several logic families)", play-hookey.com. - C. E. Shannon, "A symbolic analysis of relay and switching circuits," Transactions American Institute of Electrical Engineers, vol. 57, pp. 713-723, March 1938. - The IEC symbols are defined in IEC 60617-12 (1997-12), Graphical symbols for diagrams - Part 12: Binary logic elements - "LEGO Logic Gates". goldfish.org.uk, 2005. - Awschalom, D., D. Loss, and N. Samarth, Semiconductor Spintronics and Quantum Computation (2002), Springer-Verlag, Berlin, Germany. - Bostock, Geoff, Programmable Logic Devices. 
Technology and Applications (1988), McGraw-Hill, New York, NY. - Brown, Stephen D. et al., Field-Programmable Gate Arrays (1992), Kluwer Academic Publishers, Boston, MA.
http://kiwitobes.com/wiki/Logic_gate.html
13
58
Although algebra is a word that has not commonly been heard in grades 3-5 classrooms, the mathematical investigations and conversations of students in these grades frequently include elements of algebraic reasoning. These experiences and conversations provide rich contexts for advancing mathematical understanding and are also an important precursor to the more formalized study of algebra in the middle and secondary grades. In grades 3-5, algebraic ideas should emerge and be investigated as students identify and describe patterns, make generalizations, explore properties of numbers, and begin to use symbols to express mathematical relationships.

In grades 3-5, students should investigate numerical and geometric patterns and express them mathematically in words or symbols. They should analyze the structure of the pattern and how it grows or changes, organize this information systematically, and use their analysis to develop generalizations about the mathematical relationships in the pattern. For example, a teacher might ask students to describe patterns they see in the "growing squares" display (see fig. 5.3) and express the patterns in mathematical sentences. Students should be encouraged to explain these patterns verbally and to make predictions about what will happen if the sequence is continued. In this example, one student might notice that the area changes in a predictable way: it increases by the next odd number with each new square. Another student might notice that the previous square always fits into the "corner" of the next-larger square. This observation might lead to a description of the area of a square as equal to the area of the previous square plus "its two sides and one more." A student might represent his thinking as in figure 5.4.

As they study ways to measure geometric objects, students will have opportunities to make generalizations based on patterns. For example, consider the problem in figure 5.5. Fourth graders might make a table (see fig. 5.6) and note the iterative nature of the pattern. That is, there is a consistent relationship between the surface area of one tower and the next-bigger tower: "You add four to the previous number." Fifth graders could be challenged to justify a general rule with reference to the geometric model, for example, "The surface area is always four times the number of cubes plus two more because there are always four square units around each cube and one extra on each end of the tower." Once a relationship is established, students should be able to use it to answer questions like, "What is the surface area of a tower with fifty cubes?" or "How many cubes would there be in a tower with a surface area of 242 square units?"

In this example, some students may use a table to organize and order their data, and others may use connecting cubes to model the growth of an arithmetic sequence. Some students may use words, but others may use numbers and symbols to express their ideas about the functional relationship. Students should have many experiences organizing data and examining different representations. Computer simulations are an interactive way to explore functional relationships and the various ways they are represented. In a simulation of two runners along a track, students can control the speed and starting point of the runners and can view the results by watching the race and examining a table and graph of the time-versus-distance relationship. Students need to feel comfortable using various techniques for organizing and expressing ideas about relationships and functions.
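Using the fifth graders' rule quoted above (the surface area is four times the number of cubes plus two), both follow-up questions can be answered with a short calculation; the worked arithmetic below is added here for illustration.

Surface area of a tower of fifty cubes: 4 × 50 + 2 = 202 square units.
Number of cubes when the surface area is 242 square units: 4 × n + 2 = 242, so 4 × n = 240 and n = 60 cubes.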
In grades 3-5, students can investigate properties such as commutativity, associativity, and distributivity of multiplication over addition. Is 3 × 5 the same as 5 × 3? Is 15 × 27 equal to 27 × 15? Will reversing the factors always result in the same product? What if one of the factors is a decimal number (e.g., 1.5 × 6)? An area model can help students see that two factors in either order have equal products, as represented by congruent rectangles with different orientations (see fig. 5.7).

At this grade band the idea and usefulness of a variable (represented by a box, letter, or symbol) should also be emerging and developing more fully. As students explore patterns and note relationships, they should be encouraged to represent their thinking. In the example showing the sequence of squares that grow (fig. 5.3), students are beginning to use the idea of a variable as they think about how to describe a rule for finding the area of any square from the pattern they have observed. As students become more experienced in investigating, articulating, and justifying generalizations, they can begin to use variable notation and equations to represent their thinking. Teachers will need to model how to represent thinking in the form of equations. In this way, they can help students connect the ways they are describing their findings to mathematical notation. For example, a student's description of the surface area of a cube tower of any size ("You get the surface area by multiplying the number of cubes by 4 and adding 2") can be recorded by the teacher as S = 4n + 2. Students should also understand the use of a variable as a placeholder in an expression or equation. For example, they should explore the role of n in the equation 80 × 15 = 40 × n and be able to find the value of n that makes the equation true.

Historically, much of the mathematics used today was developed to model real-world situations, with the goal of making predictions about those situations. As patterns are identified, they can be expressed numerically, graphically, or symbolically and used to predict how the pattern will continue. Students in grades 3-5 develop the idea that a mathematical model has both descriptive and predictive power. Students in these grades can model a variety of situations, including geometric patterns, real-world situations, and scientific experiments. Sometimes they will use their model to predict the next element in a pattern, as students did when they described the area of a square in terms of the previous smaller square (see fig. 5.3). At other times, students will be able to make a general statement about how one variable is related to another variable: If a sandwich costs $3, you can figure out how many dollars any number of sandwiches costs by multiplying that number by 3 (two sandwiches cost $6, three sandwiches cost $9, and so forth). In this case, students have developed a model of a proportional relationship: the value of one variable (total cost, C) is always three times the value of the other (number of sandwiches, S), or C = 3 × S.

In modeling situations that involve real-world data, students need to know that their predictions will not always match observed outcomes for a variety of reasons. For example, data often contain measurement error, experiments are influenced by many factors that cause fluctuations, and some models may hold only for a certain range of values. However, predictions based on good models should be reasonably close to what actually happens.
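For the placeholder equation above, the value of n can be found by evaluating the known side and dividing; this short check is added for illustration.

80 × 15 = 1200, so 40 × n = 1200 and n = 1200 ÷ 40 = 30.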
Students in grades 3-5 should begin to understand that different models for the same situation can give the same results. For example, as a group of students investigates the relationship between the number of cubes in a tower and its surface area, several models emerge. One student thinks about each side of the tower as having the same number of units of surface area as the number of cubes (n). There are four sides and an extra unit on each end of the tower, so the surface area is four times the number of cubes plus two (4 × n + 2). Another student thinks about how much surface area is contributed by each cube in the tower: each end cube contributes five units of surface area and each "middle" cube contributes four units of surface area. Algebraically, the surface area would be 2 × 5 + (n - 2) × 4. For a tower of twelve cubes, the first student thinks, "4 times 12, that's 48, plus 2 is 50." The second student thinks, "The two end cubes each have 5, so that's 10. There are 10 more cubes. They each have 4, so that's 40. 40 plus 10 is 50." Students in this grade band may not be able to show how these solutions are algebraically equivalent, but they can recognize that these different models lead to the same solution.

Change is an important mathematical idea that can be studied using the tools of algebra. For example, as part of a science project, students might plant seeds and record the growth of a plant. Using the data represented in the table and graph (fig. 5.9), students can describe how the rate of growth varies over time. For example, a student might express the rate of growth in this way: "My plant didn't grow for the first four days, then it grew slowly for the next two days, then it started to grow faster, then it slowed down again." In this situation, students are focusing not simply on the height of the plant each day but on what has happened between the recorded heights. This work is a precursor to later, more focused attention on what the slope of a line represents, that is, what the steepness of the line shows about the rate of change. Students should have opportunities to study situations that display different patterns of change: change that occurs at a constant rate, such as someone walking at a constant speed, and rates of change that increase or decrease, as in the growing-plant example.
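Returning to the two tower models above: the algebraic equivalence that students at this level can sense but not yet prove is a one-line simplification, shown here for illustration.

2 × 5 + (n - 2) × 4 = 10 + 4 × n - 8 = 4 × n + 2.
For n = 12, both forms give 50: 4 × 12 + 2 = 50 and 2 × 5 + 10 × 4 = 50.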
http://www.fayar.net/east/teacher.web/Math/Standards/document/chapter5/alg.htm
13
240
Go to the previous, next section. A Lisp program is composed mainly of Lisp functions. This chapter explains what functions are, how they accept arguments, and how to define them. In a general sense, a function is a rule for carrying on a computation given several values called arguments. The result of the computation is called the value of the function. The computation can also have side effects: lasting changes in the values of variables or the contents of data structures. Here are important terms for functions in Emacs Lisp and for other function-like objects. append. These functions are also called built-in functions or subrs. (Special forms are also considered primitives.) Usually the reason that a function is a primitives is because it is fundamental, or provides a low-level interface to operating system services, or because it needs to run fast. Primitives can be modified or added only by changing the C sources and recompiling the editor. See section Writing Emacs Primitives. command-executecan invoke; it is a possible definition for a key sequence. Some functions are commands; a function written in Lisp is a command if it contains an interactive declaration (see section Defining Commands). Such a function can be called from Lisp expressions like other functions; in this case, the fact that the function is a command makes no difference. Strings are commands also, even though they are not functions. A symbol is a command if its function definition is a command; such symbols can be invoked with M-x. The symbol is a function as well if the definition is a function. See section Command Loop Overview. Function: subrp object This function returns t if object is a built-in function (i.e. a Lisp primitive). (subrp 'message) ; messageis a symbol, => nil ; not a subr object. (subrp (symbol-function 'message)) => t Function: byte-code-function-p object This function returns t if object is a byte-code function. For example: (byte-code-function-p (symbol-function 'next-line)) => t A function written in Lisp is a list that looks like this: (lambda (arg-variables...) [documentation-string] [interactive-declaration] body-forms...) (Such a list is called a lambda expression for historical reasons, even though it is not really an expression at all--it is not a form that can be evaluated meaningfully.) The first element of a lambda expression is always the symbol lambda. This indicates that the list represents a function. The reason functions are defined to start with lambda is so that other lists, intended for other uses, will not accidentally be valid as The second element is a list of argument variable names (symbols). This is called the lambda list. When a Lisp function is called, the argument values are matched up against the variables in the lambda list, which are given local bindings with the values provided. See section Local Variables. The documentation string is an actual string that serves to describe the function for the Emacs help facilities. See section Documentation Strings of Functions. The interactive declaration is a list of the form code-string). This declares how to provide arguments if the function is used interactively. Functions with this declaration are called commands; they can be called using M-x or bound to a key. Functions not intended to be called in this way should not have interactive declarations. 
See section Defining Commands, for how to write an interactive The rest of the elements are the body of the function: the Lisp code to do the work of the function (or, as a Lisp programmer would say, "a list of Lisp forms to evaluate"). The value returned by the function is the value returned by the last element of the body. Consider for example the following function: (lambda (a b c) (+ a b c)) We can call this function by writing it as the CAR of an expression, like this: ((lambda (a b c) (+ a b c)) 1 2 3) The body of this lambda expression is evaluated with the variable a bound to 1, b bound to 2, and c bound to 3. Evaluation of the body adds these three numbers, producing the result 6; therefore, this call to the function returns the value 6. Note that the arguments can be the results of other function calls, as in this example: ((lambda (a b c) (+ a b c)) 1 (* 2 3) (- 5 4)) Here all the arguments (* 2 3), and (- 5 4) are evaluated, left to right. Then the lambda expression is applied to the argument values 1, 6 and 1 to produce the value 8. It is not often useful to write a lambda expression as the CAR of a form in this way. You can get the same result, of making local variables and giving them values, using the special form (see section Local Variables). And let is clearer and easier to use. In practice, lambda expressions are either stored as the function definitions of symbols, to produce named functions, or passed as arguments to other functions (see section Anonymous Functions). However, calls to explicit lambda expressions were very useful in the old days of Lisp, before the special form let was invented. At that time, they were the only way to bind and initialize local Our simple sample function, (lambda (a b c) (+ a b c)), specifies three argument variables, so it must be called with three arguments: if you try to call it with only two arguments or four arguments, you get a It is often convenient to write a function that allows certain arguments to be omitted. For example, the function substring accepts three arguments--a string, the start index and the end index--but the third argument defaults to the end of the string if you omit it. It is also convenient for certain functions to accept an indefinite number of arguments, as the functions To specify optional arguments that may be omitted when a function is called, simply include the keyword &optional before the optional arguments. To specify a list of zero or more extra arguments, include the &rest before one final argument. Thus, the complete syntax for an argument list is as follows: (required-vars... [&optional optional-vars...] [&rest rest-var]) The square brackets indicate that the clauses, and the variables that follow them, are optional. A call to the function requires one actual argument for each of the required-vars. There may be actual arguments for zero or more of the optional-vars, and there cannot be any more actual arguments than &rest exists. In that case, there may be any number of extra actual arguments. If actual arguments for the optional and rest variables are omitted, then they always default to nil. However, the body of the function is free to consider nil an abbreviation for some other meaningful value. This is what nil as the third argument means to use the length of the string supplied. There is no way for the function to distinguish between an explicit argument of an omitted argument. 
Common Lisp note: Common Lisp allows the function to specify what default value to use when an optional argument is omitted; GNU Emacs Lisp always uses For example, an argument list that looks like this: (a b &optional c d &rest e) b to the first two actual arguments, which are required. If one or two more arguments are provided, d are bound to them respectively; any arguments after the first four are collected into a list and e is bound to that list. If there are only two arguments, nil; if two or three nil; if four arguments or fewer, There is no way to have required arguments following optional ones--it would not make sense. To see why this must be so, suppose c in the example were optional and d were required. If three actual arguments are given; then which variable would the third argument be for? Similarly, it makes no sense to have any more arguments (either required or optional) after a Here are some examples of argument lists and proper calls: ((lambda (n) (1+ n)) ; One required: 1) ; requires exactly one argument. => 2 ((lambda (n &optional n1) ; One required and one optional: (if n1 (+ n n1) (1+ n))) ; 1 or 2 arguments. 1 2) => 3 ((lambda (n &rest ns) ; One required and one rest: (+ n (apply '+ ns))) ; 1 or more arguments. 1 2 3 4 5) => 15 A lambda expression may optionally have a documentation string just after the lambda list. This string does not affect execution of the function; it is a kind of comment, but a systematized comment which actually appears inside the Lisp world and can be used by the Emacs help facilities. See section Documentation, for how the documentation-string is accessed. It is a good idea to provide documentation strings for all commands, and for all other functions in your program that users of your program should know about; internal functions might as well have only comments, since comments don't take up any room when your program is loaded. The first line of the documentation string should stand on its own, apropos displays just this first line. It should consist of one or two complete sentences that summarize the function's purpose. The start of the documentation string is usually indented, but since these spaces come before the starting double-quote, they are not part of the string. Some people make a practice of indenting any additional lines of the string so that the text lines up. This is a mistake. The indentation of the following lines is inside the string; what looks nice in the source code will look ugly when displayed by the help commands. You may wonder how the documentation string could be optional, since there are required components of the function that follow it (the body). Since evaluation of a string returns that string, without any side effects, it has no effect if it is not the last form in the body. Thus, in practice, there is no confusion between the first form of the body and the documentation string; if the only body form is a string then it serves both as the return value and as the documentation. In most computer languages, every function has a name; the idea of a function without a name is nonsensical. In Lisp, a function in the strictest sense has no name. It is simply a list whose first element is lambda, or a primitive subr-object. However, a symbol can serve as the name of a function. This happens when you put the function in the symbol's function cell (see section Symbol Components). Then the symbol itself becomes a valid, callable function, equivalent to the list or subr-object that its function cell refers to. 
The contents of the function cell are also called the symbol's function definition. When the evaluator finds the function definition to use in place of the symbol, we call that symbol function indirection; see section Symbol Function Indirection. In practice, nearly all functions are given names in this way and referred to through their names. For example, the symbol as a function and does what it does because the primitive subr-object #<subr car> is stored in its function cell. We give functions names because it is more convenient to refer to them by their names in other functions. For primitive subr-objects such as #<subr car>, names are the only way you can refer to them: there is no read syntax for such objects. For functions written in Lisp, the name is more convenient to use in a call than an explicit lambda expression. Also, a function with a name can refer to itself--it can be recursive. Writing the function's name in its own definition is much more convenient than making the function definition point to itself (something that is not impossible but that has various disadvantages in Functions are often identified with the symbols used to name them. For example, we often speak of "the function car", not distinguishing between the symbol car and the primitive subr-object that is its function definition. For most purposes, there is no need to distinguish. Even so, keep in mind that a function need not have a unique name. While a given function object usually appears in the function cell of only one symbol, this is just a matter of convenience. It is easy to store it in several symbols using fset; then each of the symbols is equally well a name for the same function. A symbol used as a function name may also be used as a variable; these two uses of a symbol are independent and do not conflict. We usually give a name to a function when it is first created. This is called defining a function, and it is done with the defun special form. Special Form: defun name argument-list body-forms defun is the usual way to define new Lisp functions. It defines the symbol name as a function that looks like this: (lambda argument-list . body-forms) This lambda expression is stored in the function cell of name. The value returned by evaluating the defun form is name, but usually we ignore this value. As described previously (see section Lambda Expressions), argument-list is a list of argument names and may include the &rest. Also, the first two forms in body-forms may be a documentation string and an interactive Note that the same symbol name may also be used as a global variable, since the value cell is independent of the function cell. Here are some examples: (defun foo () 5) => foo (foo) => 5 (defun bar (a &optional b &rest c) (list a b c)) => bar (bar 1 2 3 4 5) => (1 2 (3 4 5)) (bar 1) => (1 nil nil) (bar) error--> Wrong number of arguments. (defun capitalize-backwards () "Upcase the last letter of a word." (interactive) (backward-word 1) (forward-word 1) (backward-char 1) (capitalize-word 1)) => capitalize-backwards Be careful not to redefine existing functions unintentionally. defun redefines even primitive functions such as without any hesitation or notification. Redefining a function already defined is often done deliberately, and there is no way to distinguish deliberate redefinition from unintentional redefinition. Defining functions is only half the battle. Functions don't do anything until you call them, i.e., tell them to run. This process is also known as invocation. 
The most common way of invoking a function is by evaluating a list. For example, evaluating the list (concat "a" "b") calls the function concat. See section Evaluation, for a description of evaluation. When you write a list as an expression in your program, the function name is part of the program. This means that the choice of which function to call is made when you write the program. Usually that's just what you want. Occasionally you need to decide at run time which function to call. Then you can use the functions Function: funcall function &rest arguments funcall calls function with arguments, and returns whatever function returns. funcall is a function, all of its arguments, including function, are evaluated before funcall is called. This means that you can use any expression to obtain the function to be called. It also means that funcall does not see the expressions you write for the arguments, only their values. These values are not evaluated a second time in the act of calling function; funcall enters the normal procedure for calling a function at the place where the arguments have already been evaluated. The argument function must be either a Lisp function or a primitive function. Special forms and macros are not allowed, because they make sense only when given the "unevaluated" argument funcall cannot provide these because, as we saw above, it never knows them in the first place. (setq f 'list) => list (funcall f 'x 'y 'z) => (x y z) (funcall f 'x 'y '(z)) => (x y (z)) (funcall 'and t nil) error--> Invalid function: #<subr and> Compare this example with that of Function: apply function &rest arguments apply calls function with arguments, just like funcall but with one difference: the last of arguments is a list of arguments to give to function, rather than a single argument. We also say that this list is appended to the other apply returns the result of calling function. As with funcall, function must either be a Lisp function or a primitive function; special forms and macros do not make sense in (setq f 'list) => list (apply f 'x 'y 'z) error--> Wrong type argument: listp, z (apply '+ 1 2 '(3 4)) => 10 (apply '+ '(1 2 3 4)) => 10 (apply 'append '((a b c) nil (x y z) nil)) => (a b c x y z) An interesting example of using apply is found in the description mapcar; see the following section. It is common for Lisp functions to accept functions as arguments or find them in data structures (especially in hook variables and property lists) and call them using that accept function arguments are often called functionals. Sometimes, when you call such a function, it is useful to supply a no-op function as the argument. Here are two different kinds of no-op function: Function: identity arg This function returns arg and has no side effects. Function: ignore &rest args This function ignores any arguments and returns A mapping function applies a given function to each element of a list or other collection. Emacs Lisp has three such functions; mapconcat, which scan a list, are described here. For the third mapping function, section Creating and Interning Symbols. Function: mapcar function sequence mapcar applies function to each element of sequence in turn. The results are made into a The argument sequence may be a list, a vector or a string. The result is always a list. The length of the result is the same as the length of sequence. 
For example: (mapcar 'car '((a b) (c d) (e f))) => (a c e) (mapcar '1+ [1 2 3]) => (2 3 4) (mapcar 'char-to-string "abc") => ("a" "b" "c") ;; Call each function in my-hooks. (mapcar 'funcall my-hooks) (defun mapcar* (f &rest args) "Apply FUNCTION to successive cars of all ARGS, until one ends. Return the list of results." ;; If no list is exhausted, (if (not (memq 'nil args)) ;; Apply function to CARs. (cons (apply f (mapcar 'car args)) (apply 'mapcar* f ;; Recurse for rest of elements. (mapcar 'cdr args))))) (mapcar* 'cons '(a b c) '(1 2 3 4)) => ((a . 1) (b . 2) (c . 3)) Function: mapconcat function sequence separator mapconcat applies function to each element of sequence: the results, which must be strings, are concatenated. Between each pair of result strings, mapconcat inserts the string separator. Usually separator contains a space or comma or other suitable punctuation. The argument function must be a function that can take one argument and returns a string. (mapconcat 'symbol-name '(The cat in the hat) " ") => "The cat in the hat" (mapconcat (function (lambda (x) (format "%c" (1+ x)))) "HAL-8000" "") => "IBM.9111" In Lisp, a function is a list that starts with alternatively a primitive subr-object); names are "extra". Although usually functions are defined with defun and given names at the same time, it is occasionally more concise to use an explicit lambda expression--an anonymous function. Such a list is valid wherever a function name is. Any method of creating such a list makes a valid function. Even this: (setq silly (append '(lambda (x)) (list (list '+ (* 3 4) 'x)))) => (lambda (x) (+ 12 x)) This computes a list that looks like (lambda (x) (+ 12 x)) and makes it the value (not the function definition!) of Here is how we might call this function: (funcall silly 1) => 13 (It does not work to write (silly 1), because this function is not the function definition of silly. We have not given silly any function definition, just a value as a variable.) Most of the time, anonymous functions are constants that appear in your program. For example, you might want to pass one as an argument to the function mapcar, which applies any given function to each element of a list. Here we pass an anonymous function that multiplies a number by two: (defun double-each (list) (mapcar '(lambda (x) (* 2 x)) list)) => double-each (double-each '(2 11)) => (4 22) In such cases, we usually use the special form of simple quotation to quote the anonymous function. Special Form: function function-object This special form returns function-object without evaluating it. In this, it is equivalent to quote. However, it serves as a note to the Emacs Lisp compiler that function-object is intended to be used only as a function, and therefore can safely be compiled. See section Quoting, for comparison. function instead of quote makes a difference inside a function or macro that you are going to compile. For example: (defun double-each (list) (mapcar (function (lambda (x) (* 2 x))) list)) => double-each (double-each '(2 11)) => (4 22) If this definition of double-each is compiled, the anonymous function is compiled as well. By contrast, in the previous definition quote is used, the argument passed to mapcar is the precise list shown: (lambda (arg) (+ arg 5)) The Lisp compiler cannot assume this list is a function, even though it looks like one, since it does not know what mapcar does with the mapcar will check that the CAR of the third element is the symbol +! 
The advantage of that it tells the compiler to go ahead and compile the constant We sometimes write function instead of quoting the name of a function, but this usage is just a sort of (function symbol) == (quote symbol) == 'symbol documentation in section Access to Documentation Strings, for a realistic example using function and an anonymous function. The function definition of a symbol is the object stored in the function cell of the symbol. The functions described here access, test, and set the function cell of symbols. Function: symbol-function symbol This returns the object in the function cell of symbol. If the symbol's function cell is void, a void-function error is This function does not check that the returned object is a legitimate function. (defun bar (n) (+ n 2)) => bar (symbol-function 'bar) => (lambda (n) (+ n 2)) (fset 'baz 'bar) => bar (symbol-function 'baz) => bar If you have never given a symbol any function definition, we say that that symbol's function cell is void. In other words, the function cell does not have any Lisp object in it. If you try to call such a symbol as a function, it signals a Note that void is not the same as nil or the symbol void. The symbols void are Lisp objects, and can be stored into a function cell just as any other object can be (and they can be valid functions if you define them in turn with void is an object. A void function cell contains no object whatsoever. You can test the voidness of a symbol's function definition with fboundp. After you have given a symbol a function definition, you can make it void once more using Function: fboundp symbol t if the symbol has an object in its function cell, nil otherwise. It does not check that the object is a legitimate Function: fmakunbound symbol This function makes symbol's function cell void, so that a subsequent attempt to access this cell will cause a error. (See also makunbound, in section Local Variables.) (defun foo (x) x) => x (fmakunbound 'foo) => x (foo 1) error--> Symbol's function definition is void: foo Function: fset symbol object This function stores object in the function cell of symbol. The result is object. Normally object should be a function or the name of a function, but this is not checked. There are three normal uses of this function: defun. See section Classification of List Forms, for an example of this usage. defunwere not a primitive, it could be written in Lisp (as a macro) using Here are examples of the first two uses: firstthe same definition carhas. (fset 'first (symbol-function 'car)) => #<subr car> (first '(1 2 3)) => 1 ;; Make the symbol carthe function definition of xfirst. (fset 'xfirst 'car) => car (xfirst '(1 2 3)) => 1 (symbol-function 'xfirst) => car (symbol-function (symbol-function 'xfirst)) => #<subr car> ;; Define a named keyboard macro. (fset 'kill-two-lines "\^u2\^k") => "\^u2\^k" When writing a function that extends a previously defined function, the following idiom is often used: (fset 'old-foo (symbol-function 'foo)) (defun foo () "Just like old-foo, except more so." (old-foo) (more-so)) This does not work properly if foo has been defined to autoload. In such a case, when old-foo, Lisp attempts old-foo by loading a file. Since this presumably foo rather than old-foo, it does not produce the proper results. The only way to avoid this problem is to make sure the file is loaded before moving aside the old definition of See also the function indirect-function in section Symbol Function Indirection. You can define an inline function by using defun. 
An inline function works just like an ordinary function except for one thing: when you compile a call to the function, the function's definition is open-coded into the caller. Making a function inline makes explicit calls run faster. But it also has disadvantages. For one thing, it reduces flexibility; if you change the definition of the function, calls already inlined still use the old definition until you recompile them. Another disadvantage is that making a large function inline can increase the size of compiled code both in files and in memory. Since the advantages of inline functions are greatest for small functions, you generally should not make large functions inline. It is possible to define a macro to expand into the same code that an inline function would execute. But the macro would have a limitation: you can use it only explicitly--a macro cannot be called with apply, mapcar and so on. Also, it takes some work to convert an ordinary function into a macro. (See section Macros.) To convert an ordinary function into an inline function is very easy; simply replace defun with defsubst. Inline functions can be used and open-coded later on in the same file, following the definition, just like macros. Emacs versions prior to 19 did not have inline functions. Here is a table of several functions that do things related to function calling and function definitions. They are documented elsewhere, but we provide cross references here.
http://www.slac.stanford.edu/comp/unix/gnu-info/elisp_12.html
13
107
Most students of electricity begin their study with what is known as direct current (DC), which is electricity flowing in a constant direction, and/or possessing a voltage with constant polarity. DC is the kind of electricity made by a battery (with definite positive and negative terminals), or the kind of charge generated by rubbing certain types of materials against each other. As useful and as easy to understand as DC is, it is not the only “kind” of electricity in use. Certain sources of electricity (most notably, rotary electro-mechanical generators) naturally produce voltages alternating in polarity, reversing positive and negative over time. Either as a voltage switching polarity or as a current switching direction back and forth, this “kind” of electricity is known as Alternating Current (AC): Figure below Direct vs alternating current Whereas the familiar battery symbol is used as a generic symbol for any DC voltage source, the circle with the wavy line inside is the generic symbol for any AC voltage source. One might wonder why anyone would bother with such a thing as AC. It is true that in some cases AC holds no practical advantage over DC. In applications where electricity is used to dissipate energy in the form of heat, the polarity or direction of current is irrelevant, so long as there is enough voltage and current to the load to produce the desired heat (power dissipation). However, with AC it is possible to build electric generators, motors and power distribution systems that are far more efficient than DC, and so we find AC used predominately across the world in high power applications. To explain the details of why this is so, a bit of background knowledge about AC is necessary. If a machine is constructed to rotate a magnetic field around a set of stationary wire coils with the turning of a shaft, AC voltage will be produced across the wire coils as that shaft is rotated, in accordance with Faraday's Law of electromagnetic induction. This is the basic operating principle of an AC generator, also known as an alternator: Figure below Notice how the polarity of the voltage across the wire coils reverses as the opposite poles of the rotating magnet pass by. Connected to a load, this reversing voltage polarity will create a reversing current direction in the circuit. The faster the alternator's shaft is turned, the faster the magnet will spin, resulting in an alternating voltage and current that switches directions more often in a given amount of time. While DC generators work on the same general principle of electromagnetic induction, their construction is not as simple as their AC counterparts. With a DC generator, the coil of wire is mounted in the shaft where the magnet is on the AC alternator, and electrical connections are made to this spinning coil via stationary carbon “brushes” contacting copper strips on the rotating shaft. All this is necessary to switch the coil's changing output polarity to the external circuit so the external circuit sees a constant polarity: Figure below DC generator operation The generator shown above will produce two pulses of voltage per revolution of the shaft, both pulses in the same direction (polarity). In order for a DC generator to produce constant voltage, rather than brief pulses of voltage once every 1/2 revolution, there are multiple sets of coils making intermittent contact with the brushes. The diagram shown above is a bit more simplified than what you would see in real life. 
The problems involved with making and breaking electrical contact with a moving coil should be obvious (sparking and heat), especially if the shaft of the generator is revolving at high speed. If the atmosphere surrounding the machine contains flammable or explosive vapors, the practical problems of spark-producing brush contacts are even greater. An AC generator (alternator) does not require brushes and commutators to work, and so is immune to these problems experienced by DC generators. The benefits of AC over DC with regard to generator design is also reflected in electric motors. While DC motors require the use of brushes to make electrical contact with moving coils of wire, AC motors do not. In fact, AC and DC motor designs are very similar to their generator counterparts (identical for the sake of this tutorial), the AC motor being dependent upon the reversing magnetic field produced by alternating current through its stationary coils of wire to rotate the rotating magnet around on its shaft, and the DC motor being dependent on the brush contacts making and breaking connections to reverse current through the rotating coil every 1/2 rotation (180 degrees). So we know that AC generators and AC motors tend to be simpler than DC generators and DC motors. This relative simplicity translates into greater reliability and lower cost of manufacture. But what else is AC good for? Surely there must be more to it than design details of generators and motors! Indeed there is. There is an effect of electromagnetism known as mutual induction, whereby two or more coils of wire placed so that the changing magnetic field created by one induces a voltage in the other. If we have two mutually inductive coils and we energize one coil with AC, we will create an AC voltage in the other coil. When used as such, this device is known as a transformer: Figure below Transformer “transforms” AC voltage and current. The fundamental significance of a transformer is its ability to step voltage up or down from the powered coil to the unpowered coil. The AC voltage induced in the unpowered (“secondary”) coil is equal to the AC voltage across the powered (“primary”) coil multiplied by the ratio of secondary coil turns to primary coil turns. If the secondary coil is powering a load, the current through the secondary coil is just the opposite: primary coil current multiplied by the ratio of primary to secondary turns. This relationship has a very close mechanical analogy, using torque and speed to represent voltage and current, respectively: Figure below Speed multiplication gear train steps torque down and speed up. Step-down transformer steps voltage down and current up. If the winding ratio is reversed so that the primary coil has less turns than the secondary coil, the transformer “steps up” the voltage from the source level to a higher level at the load: Figure below Speed reduction gear train steps torque up and speed down. Step-up transformer steps voltage up and current down. The transformer's ability to step AC voltage up or down with ease gives AC an advantage unmatched by DC in the realm of power distribution in figure below. When transmitting electrical power over long distances, it is far more efficient to do so with stepped-up voltages and stepped-down currents (smaller-diameter wire with less resistive power losses), then step the voltage back down and the current back up for industry, business, or consumer use. Transformers enable efficient long distance high voltage transmission of electric energy. 
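The turns-ratio relationships described above can be written out in a few lines of code. The following Python sketch is not part of the original text; the function name and the 100:10000 step-up example are our own, and the transformer is treated as ideal (no losses).

def ideal_transformer(v_primary, i_primary, n_primary, n_secondary):
    # Secondary voltage scales with the turns ratio; load current scales inversely.
    ratio = n_secondary / n_primary
    return v_primary * ratio, i_primary / ratio

# Hypothetical step-up for transmission: 120 V at 100 A into a 100:10000 winding.
v_line, i_line = ideal_transformer(120.0, 100.0, n_primary=100, n_secondary=10000)
print(v_line, "V at", i_line, "A")   # 12000.0 V at 1.0 A -- the same 12 kW, but far less line current

Carrying the same power at one hundredth of the current is what allows long transmission lines to use practical wire sizes, which is the point developed next.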
Transformer technology has made long-range electric power distribution practical. Without the ability to efficiently step voltage up and down, it would be cost-prohibitive to construct power systems for anything but close-range (within a few miles at most) use. As useful as transformers are, they only work with AC, not DC. Because the phenomenon of mutual inductance relies on changing magnetic fields, and direct current (DC) can only produce steady magnetic fields, transformers simply will not work with direct current. Of course, direct current may be interrupted (pulsed) through the primary winding of a transformer to create a changing magnetic field (as is done in automotive ignition systems to produce high-voltage spark plug power from a low-voltage DC battery), but pulsed DC is not that different from AC. Perhaps more than any other reason, this is why AC finds such widespread application in power systems. When an alternator produces AC voltage, the voltage switches polarity over time, but does so in a very particular manner. When graphed over time, the “wave” traced by this voltage of alternating polarity from an alternator takes on a distinct shape, known as a sine wave: Figure below Graph of AC voltage over time (the sine wave). In the voltage plot from an electromechanical alternator, the change from one polarity to the other is a smooth one, the voltage level changing most rapidly at the zero (“crossover”) point and most slowly at its peak. If we were to graph the trigonometric function of “sine” over a horizontal range of 0 to 360 degrees, we would find the exact same pattern as in Table below. Trigonometric “sine” function. |Angle (o)||sin(angle)||wave||Angle (o)||sin(angle)||wave| The reason why an electromechanical alternator outputs sine-wave AC is due to the physics of its operation. The voltage produced by the stationary coils by the motion of the rotating magnet is proportional to the rate at which the magnetic flux is changing perpendicular to the coils (Faraday's Law of Electromagnetic Induction). That rate is greatest when the magnet poles are closest to the coils, and least when the magnet poles are furthest away from the coils. Mathematically, the rate of magnetic flux change due to a rotating magnet follows that of a sine function, so the voltage produced by the coils follows that same function. If we were to follow the changing voltage produced by a coil in an alternator from any point on the sine wave graph to that point when the wave shape begins to repeat itself, we would have marked exactly one cycle of that wave. This is most easily shown by spanning the distance between identical peaks, but may be measured between any corresponding points on the graph. The degree marks on the horizontal axis of the graph represent the domain of the trigonometric sine function, and also the angular position of our simple two-pole alternator shaft as it rotates: Figure below Alternator voltage as function of shaft position (time). Since the horizontal axis of this graph can mark the passage of time as well as shaft position in degrees, the dimension marked for one cycle is often measured in a unit of time, most often seconds or fractions of a second. When expressed as a measurement, this is often called the period of a wave. The period of a wave in degrees is always 360, but the amount of time one period occupies depends on the rate voltage oscillates back and forth. 
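The sine values tabulated above can be generated directly; the short Python snippet below (not part of the original text) prints one full cycle of a unit sine wave in 45-degree steps.

import math

for angle in range(0, 361, 45):
    print(f"{angle:3d} deg   sin = {math.sin(math.radians(angle)):+.4f}")

The span from 0 to 360 degrees covers exactly one cycle, that is, one period, of the waveform.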
A more popular measure for describing the alternating rate of an AC voltage or current wave than period is the rate of that back-and-forth oscillation. This is called frequency. The modern unit for frequency is the Hertz (abbreviated Hz), which represents the number of wave cycles completed during one second of time. In the United States of America, the standard power-line frequency is 60 Hz, meaning that the AC voltage oscillates at a rate of 60 complete back-and-forth cycles every second. In Europe, where the power system frequency is 50 Hz, the AC voltage only completes 50 cycles every second. A radio station transmitter broadcasting at a frequency of 100 MHz generates an AC voltage oscillating at a rate of 100 million cycles every second. Prior to the canonization of the Hertz unit, frequency was simply expressed as “cycles per second.” Older meters and electronic equipment often bore frequency units of “CPS” (Cycles Per Second) instead of Hz. Many people believe the change from self-explanatory units like CPS to Hertz constitutes a step backward in clarity. A similar change occurred when the unit of “Celsius” replaced that of “Centigrade” for metric temperature measurement. The name Centigrade was based on a 100-count (“Centi-”) scale (“-grade”) representing the melting and boiling points of H2O, respectively. The name Celsius, on the other hand, gives no hint as to the unit's origin or meaning. Period and frequency are mathematical reciprocals of one another. That is to say, if a wave has a period of 10 seconds, its frequency will be 0.1 Hz, or 1/10 of a cycle per second: An instrument called an oscilloscope, Figure below, is used to display a changing voltage over time on a graphical screen. You may be familiar with the appearance of an ECG or EKG (electrocardiograph) machine, used by physicians to graph the oscillations of a patient's heart over time. The ECG is a special-purpose oscilloscope expressly designed for medical use. General-purpose oscilloscopes have the ability to display voltage from virtually any voltage source, plotted as a graph with time as the independent variable. The relationship between period and frequency is very useful to know when displaying an AC voltage or current waveform on an oscilloscope screen. By measuring the period of the wave on the horizontal axis of the oscilloscope screen and reciprocating that time value (in seconds), you can determine the frequency in Hertz. Time period of sinewave is shown on oscilloscope. Voltage and current are by no means the only physical variables subject to variation over time. Much more common to our everyday experience is sound, which is nothing more than the alternating compression and decompression (pressure waves) of air molecules, interpreted by our ears as a physical sensation. Because alternating current is a wave phenomenon, it shares many of the properties of other wave phenomena, like sound. For this reason, sound (especially structured music) provides an excellent analogy for relating AC concepts. In musical terms, frequency is equivalent to pitch. Low-pitch notes such as those produced by a tuba or bassoon consist of air molecule vibrations that are relatively slow (low frequency). High-pitch notes such as those produced by a flute or whistle consist of the same type of vibrations in the air, only vibrating at a much faster rate (higher frequency). Figure below is a table showing the actual frequencies for a range of common musical notes. The frequency in Hertz (Hz) is shown for various musical notes. 
Astute observers will notice that all notes on the table bearing the same letter designation are related by a frequency ratio of 2:1. For example, the first frequency shown (designated with the letter “A”) is 220 Hz. The next highest “A” note has a frequency of 440 Hz -- exactly twice as many sound wave cycles per second. The same 2:1 ratio holds true for the first A sharp (233.08 Hz) and the next A sharp (466.16 Hz), and for all note pairs found in the table. Audibly, two notes whose frequencies are exactly double each other sound remarkably similar. This similarity in sound is musically recognized, the shortest span on a musical scale separating such note pairs being called an octave. Following this rule, the next highest “A” note (one octave above 440 Hz) will be 880 Hz, the next lowest “A” (one octave below 220 Hz) will be 110 Hz. A view of a piano keyboard helps to put this scale into perspective: Figure below An octave is shown on a musical keyboard. As you can see, one octave is equal to seven white keys' worth of distance on a piano keyboard. The familiar musical mnemonic (doe-ray-mee-fah-so-lah-tee) -- yes, the same pattern immortalized in the whimsical Rodgers and Hammerstein song sung in The Sound of Music -- covers one octave from C to C. While electromechanical alternators and many other physical phenomena naturally produce sine waves, this is not the only kind of alternating wave in existence. Other “waveforms” of AC are commonly produced within electronic circuitry. Here are but a few sample waveforms and their common designations in figure below Some common waveshapes (waveforms). These waveforms are by no means the only kinds of waveforms in existence. They're simply a few that are common enough to have been given distinct names. Even in circuits that are supposed to manifest “pure” sine, square, triangle, or sawtooth voltage/current waveforms, the real-life result is often a distorted version of the intended waveshape. Some waveforms are so complex that they defy classification as a particular “type” (including waveforms associated with many kinds of musical instruments). Generally speaking, any waveshape bearing close resemblance to a perfect sine wave is termed sinusoidal, anything different being labeled as non-sinusoidal. Being that the waveform of an AC voltage or current is crucial to its impact in a circuit, we need to be aware of the fact that AC waves come in a variety of shapes. So far we know that AC voltage alternates in polarity and AC current alternates in direction. We also know that AC can alternate in a variety of different ways, and by tracing the alternation over time we can plot it as a “waveform.” We can measure the rate of alternation by measuring the time it takes for a wave to evolve before it repeats itself (the “period”), and express this as cycles per unit time, or “frequency.” In music, frequency is the same as pitch, which is the essential property distinguishing one note from another. However, we encounter a measurement problem if we try to express how large or small an AC quantity is. With DC, where quantities of voltage and current are generally stable, we have little trouble expressing how much voltage or current we have in any part of a circuit. But how do you grant a single measurement of magnitude to something that is constantly changing? One way to express the intensity, or magnitude (also called the amplitude), of an AC quantity is to measure its peak height on a waveform graph. 
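Both relationships just described - frequency as the reciprocal of period, and the 2:1 frequency ratio of an octave - are easy to check numerically. The short Python sketch below is added for illustration and is not part of the original text.

# Frequency and period are reciprocals of one another.
def frequency(period_s):
    return 1.0 / period_s

def period(freq_hz):
    return 1.0 / freq_hz

print(frequency(10.0))             # 0.1 Hz for the 10-second wave mentioned earlier
print(period(60.0), period(50.0))  # ~0.0167 s and 0.02 s for 60 Hz and 50 Hz power

# Successive "A" notes differ by one octave, i.e. a doubling of frequency.
print([220.0 * 2 ** n for n in range(-1, 3)])   # [110.0, 220.0, 440.0, 880.0] Hz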
This is known as the peak or crest value of an AC waveform: Figure below Peak voltage of a waveform. Another way is to measure the total height between opposite peaks. This is known as the peak-to-peak (P-P) value of an AC waveform: Figure below Peak-to-peak voltage of a waveform. Unfortunately, either one of these expressions of waveform amplitude can be misleading when comparing two different types of waves. For example, a square wave peaking at 10 volts is obviously a greater amount of voltage for a greater amount of time than a triangle wave peaking at 10 volts. The effects of these two AC voltages powering a load would be quite different: Figure below A square wave produces a greater heating effect than the same peak voltage triangle wave. One way of expressing the amplitude of different waveshapes in a more equivalent fashion is to mathematically average the values of all the points on a waveform's graph to a single, aggregate number. This amplitude measure is known simply as the average value of the waveform. If we average all the points on the waveform algebraically (that is, to consider their sign, either positive or negative), the average value for most waveforms is technically zero, because all the positive points cancel out all the negative points over a full cycle: Figure below The average value of a sinewave is zero. This, of course, will be true for any waveform having equal-area portions above and below the “zero” line of a plot. However, as a practical measure of a waveform's aggregate value, “average” is usually defined as the mathematical mean of all the points' absolute values over a cycle. In other words, we calculate the practical average value of the waveform by considering all points on the wave as positive quantities, as if the waveform looked like this: Figure below Waveform seen by AC “average responding” meter. Polarity-insensitive mechanical meter movements (meters designed to respond equally to the positive and negative half-cycles of an alternating voltage or current) register in proportion to the waveform's (practical) average value, because the inertia of the pointer against the tension of the spring naturally averages the force produced by the varying voltage/current values over time. Conversely, polarity-sensitive meter movements vibrate uselessly if exposed to AC voltage or current, their needles oscillating rapidly about the zero mark, indicating the true (algebraic) average value of zero for a symmetrical waveform. When the “average” value of a waveform is referenced in this text, it will be assumed that the “practical” definition of average is intended unless otherwise specified. Another method of deriving an aggregate value for waveform amplitude is based on the waveform's ability to do useful work when applied to a load resistance. Unfortunately, an AC measurement based on work performed by a waveform is not the same as that waveform's “average” value, because the power dissipated by a given load (work performed per unit time) is not directly proportional to the magnitude of either the voltage or current impressed upon it. Rather, power is proportional to the square of the voltage or current applied to a resistance (P = E2/R, and P = I2R). Although the mathematics of such an amplitude measurement might not be straightforward, the utility of it is. Consider a bandsaw and a jigsaw, two pieces of modern woodworking equipment. Both types of saws cut with a thin, toothed, motor-powered metal blade to cut wood. 
But while the bandsaw uses a continuous motion of the blade to cut, the jigsaw uses a back-and-forth motion. The comparison of alternating current (AC) to direct current (DC) may be likened to the comparison of these two saw types: Figure below Bandsaw-jigsaw analogy of DC vs AC. The problem of trying to describe the changing quantities of AC voltage or current in a single, aggregate measurement is also present in this saw analogy: how might we express the speed of a jigsaw blade? A bandsaw blade moves with a constant speed, similar to the way DC voltage pushes or DC current moves with a constant magnitude. A jigsaw blade, on the other hand, moves back and forth, its blade speed constantly changing. What is more, the back-and-forth motion of any two jigsaws may not be of the same type, depending on the mechanical design of the saws. One jigsaw might move its blade with a sine-wave motion, while another with a triangle-wave motion. To rate a jigsaw based on its peak blade speed would be quite misleading when comparing one jigsaw to another (or a jigsaw with a bandsaw!). Despite the fact that these different saws move their blades in different manners, they are equal in one respect: they all cut wood, and a quantitative comparison of this common function can serve as a common basis for which to rate blade speed. Picture a jigsaw and bandsaw side-by-side, equipped with identical blades (same tooth pitch, angle, etc.), equally capable of cutting the same thickness of the same type of wood at the same rate. We might say that the two saws were equivalent or equal in their cutting capacity. Might this comparison be used to assign a “bandsaw equivalent” blade speed to the jigsaw's back-and-forth blade motion; to relate the wood-cutting effectiveness of one to the other? This is the general idea used to assign a “DC equivalent” measurement to any AC voltage or current: whatever magnitude of DC voltage or current would produce the same amount of heat energy dissipation through an equal resistance:Figure below An RMS voltage produces the same heating effect as a the same DC voltage In the two circuits above, we have the same amount of load resistance (2 Ω) dissipating the same amount of power in the form of heat (50 watts), one powered by AC and the other by DC. Because the AC voltage source pictured above is equivalent (in terms of power delivered to a load) to a 10 volt DC battery, we would call this a “10 volt” AC source. More specifically, we would denote its voltage value as being 10 volts RMS. The qualifier “RMS” stands for Root Mean Square, the algorithm used to obtain the DC equivalent value from points on a graph (essentially, the procedure consists of squaring all the positive and negative points on a waveform graph, averaging those squared values, then taking the square root of that average to obtain the final answer). Sometimes the alternative terms equivalent or DC equivalent are used instead of “RMS,” but the quantity and principle are both the same. RMS amplitude measurement is the best way to relate AC quantities to DC quantities, or other AC quantities of differing waveform shapes, when dealing with measurements of electric power. For other considerations, peak or peak-to-peak measurements may be the best to employ. 
For other considerations, peak or peak-to-peak measurements may be the best to employ. For instance, when determining the proper size of wire (ampacity) to conduct electric power from a source to a load, RMS current measurement is the best to use, because the principal concern with current is overheating of the wire, which is a function of power dissipation caused by current through the resistance of the wire. However, when rating insulators for service in high-voltage AC applications, peak voltage measurements are the most appropriate, because the principal concern here is insulator “flashover” caused by brief spikes of voltage, irrespective of time. Peak and peak-to-peak measurements are best performed with an oscilloscope, which can capture the crests of the waveform with a high degree of accuracy due to the fast action of the cathode-ray-tube in response to changes in voltage. For RMS measurements, analog meter movements (D'Arsonval, Weston, iron vane, electrodynamometer) will work so long as they have been calibrated in RMS figures. Because the mechanical inertia and dampening effects of an electromechanical meter movement make the deflection of the needle naturally proportional to the average value of the AC, not the true RMS value, analog meters must be specifically calibrated (or mis-calibrated, depending on how you look at it) to indicate voltage or current in RMS units. The accuracy of this calibration depends on an assumed waveshape, usually a sine wave. Electronic meters specifically designed for RMS measurement are best for the task. Some instrument manufacturers have designed ingenious methods for determining the RMS value of any waveform. One such manufacturer produces “True-RMS” meters with a tiny resistive heating element powered by a voltage proportional to that being measured. The heating effect of that resistance element is measured thermally to give a true RMS value with no mathematical calculations whatsoever, just the laws of physics in action in fulfillment of the definition of RMS. The accuracy of this type of RMS measurement is independent of waveshape. For “pure” waveforms, simple conversion coefficients exist for equating Peak, Peak-to-Peak, Average (practical, not algebraic), and RMS measurements to one another: Figure below Conversion factors for common waveforms. In addition to RMS, average, peak (crest), and peak-to-peak measures of an AC waveform, there are ratios expressing the proportionality between some of these fundamental measurements. The crest factor of an AC waveform, for instance, is the ratio of its peak (crest) value divided by its RMS value. The form factor of an AC waveform is the ratio of its RMS value divided by its average value. Square-shaped waveforms always have crest and form factors equal to 1, since the peak is the same as the RMS and average values. Sinusoidal waveforms have an RMS value of 0.707 times the peak value (the reciprocal of the square root of 2) and a form factor of 1.11 (0.707/0.636). Triangle- and sawtooth-shaped waveforms have RMS values of 0.577 times the peak value (the reciprocal of the square root of 3) and form factors of 1.15 (0.577/0.5). Bear in mind that the conversion constants shown here for peak, RMS, and average amplitudes of sine waves, square waves, and triangle waves hold true only for pure forms of these waveshapes. The RMS and average values of distorted waveshapes are not related by the same ratios: Figure below Arbitrary waveforms have no simple conversions.
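The conversion constants quoted above are easy to reproduce numerically. The sketch below estimates the RMS-to-peak ratio, crest factor, and form factor for ideal sine, square, and triangle waves; the sample count is arbitrary, and the last column anticipates the analog-meter discussion that follows by scaling the rectified average by the sine-wave form factor of about 1.11.

    import math

    N = 100000   # samples per cycle, an arbitrary numerical resolution

    def sine(t):     return math.sin(2 * math.pi * t)
    def square(t):   return 1.0 if t < 0.5 else -1.0
    def triangle(t): return 4 * t - 1 if t < 0.5 else 3 - 4 * t

    SINE_FORM_FACTOR = 1.1107   # RMS divided by average for a pure sine wave

    for name, wave in (("sine", sine), ("square", square), ("triangle", triangle)):
        pts = [wave(k / N) for k in range(N)]
        peak = max(abs(p) for p in pts)
        rms = math.sqrt(sum(p * p for p in pts) / N)
        avg = sum(abs(p) for p in pts) / N          # practical (rectified) average
        meter = SINE_FORM_FACTOR * avg              # what a sine-calibrated averaging meter would indicate
        print(f"{name:8s} RMS/peak={rms/peak:.3f} crest={peak/rms:.3f} "
              f"form={rms/avg:.3f} sine-calibrated reading={meter:.3f} (true RMS {rms:.3f})")

For the sine wave the last two numbers agree, which is the whole point of the 1.11 calibration factor; for the square and triangle waves they do not, which is exactly the measurement error discussed next.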
This is a very important concept to understand when using an analog D'Arsonval meter movement to measure AC voltage or current. An analog D'Arsonval movement, calibrated to indicate sine-wave RMS amplitude, will only be accurate when measuring pure sine waves. If the waveform of the voltage or current being measured is anything but a pure sine wave, the indication given by the meter will not be the true RMS value of the waveform, because the degree of needle deflection in an analog D'Arsonval meter movement is proportional to the average value of the waveform, not the RMS. RMS meter calibration is obtained by “skewing” the span of the meter so that it displays a small multiple of the average value, which will be equal to the RMS value for a particular waveshape and a particular waveshape only. Since the sine-wave shape is most common in electrical measurements, it is the waveshape assumed for analog meter calibration, and the small multiple used in the calibration of the meter is 1.1107 (the form factor: 0.707/0.636: the ratio of RMS divided by average for a sinusoidal waveform). Any waveshape other than a pure sine wave will have a different ratio of RMS and average values, and thus a meter calibrated for sine-wave voltage or current will not indicate true RMS when reading a non-sinusoidal wave. Bear in mind that this limitation applies only to simple, analog AC meters not employing “True-RMS” technology. Over the course of the next few chapters, you will learn that AC circuit measurements and calculations can get very complicated due to the complex nature of alternating current in circuits with inductance and capacitance. However, with simple circuits (figure below) involving nothing more than an AC power source and resistance, the same laws and rules of DC apply simply and directly. AC circuit calculations for resistive circuits are the same as for DC. Series resistances still add, parallel resistances still diminish, and the Laws of Kirchhoff and Ohm still hold true. Actually, as we will discover later on, these rules and laws always hold true, it's just that we have to express the quantities of voltage, current, and opposition to current in more advanced mathematical forms. With purely resistive circuits, however, these complexities of AC are of no practical consequence, and so we can treat the numbers as though we were dealing with simple DC quantities. Because all these mathematical relationships still hold true, we can make use of our familiar “table” method of organizing circuit values just as with DC: One major caveat needs to be given here: all measurements of AC voltage and current must be expressed in the same terms (peak, peak-to-peak, average, or RMS). If the source voltage is given in peak AC volts, then all currents and voltages subsequently calculated are cast in terms of peak units. If the source voltage is given in AC RMS volts, then all calculated currents and voltages are cast in AC RMS units as well. This holds true for any calculation based on Ohm's Law, Kirchhoff's Laws, etc. Unless otherwise stated, all values of voltage and current in AC circuits are generally assumed to be RMS rather than peak, average, or peak-to-peak. In some areas of electronics, peak measurements are assumed, but in most applications (especially industrial electronics) the assumption is RMS. Things start to get complicated when we need to relate two or more AC voltages or currents that are out of step with each other. By "out of step," I mean that the two waveforms are not synchronized: that their peaks and zero points do not match up at the same points in time.
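As a numerical companion to the graph that follows, the short sketch below builds two sine waves of the same amplitude and frequency, one shifted ahead of the other by 45 degrees, and recovers the shift by finding the lag that best lines the two waveforms up. The sample resolution is an arbitrary choice for illustration.

    import math

    N = 720                  # samples per cycle (0.5 degree resolution)
    SHIFT_DEG = 45.0         # wave A leads wave B by this amount

    wave_a = [math.sin(math.radians(k * 360 / N + SHIFT_DEG)) for k in range(N)]
    wave_b = [math.sin(math.radians(k * 360 / N)) for k in range(N)]

    # The best-aligning lag (by correlation) recovers the phase shift between the waves.
    best_lag = max(range(N),
                   key=lambda lag: sum(wave_a[k] * wave_b[(k + lag) % N] for k in range(N)))

    print(f"estimated phase shift: {best_lag * 360 / N:.1f} degrees")   # expect 45.0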
The graph in figure below illustrates an example of this. Out of phase waveforms. The two waves shown above (A versus B) are of the same amplitude and frequency, but they are out of step with each other. In technical terms, this is called a phase shift. Earlier we saw how we could plot a “sine wave” by calculating the trigonometric sine function for angles ranging from 0 to 360 degrees, a full circle. The starting point of a sine wave was zero amplitude at zero degrees, progressing to full positive amplitude at 90 degrees, zero at 180 degrees, full negative at 270 degrees, and back to the starting point of zero at 360 degrees. We can use this angle scale along the horizontal axis of our waveform plot to express just how far out of step one wave is with another: Figure below Wave A leads wave B by 45 degrees. The shift between these two waveforms is about 45 degrees, the “A” wave being ahead of the “B” wave. A sampling of different phase shifts is given in the following graphs to better illustrate this concept: Figure below Examples of phase shifts. Because the waveforms in the above examples are at the same frequency, they will be out of step by the same angular amount at every point in time. For this reason, we can express phase shift for two or more waveforms of the same frequency as a constant quantity for the entire wave, and not just an expression of shift between any two particular points along the waves. That is, it is safe to say something like, “voltage 'A' is 45 degrees out of phase with voltage 'B'.” Whichever waveform is ahead in its evolution is said to be leading and the one behind is said to be lagging. Phase shift, like voltage, is always a measurement relative between two things. There's really no such thing as a waveform with an absolute phase measurement because there's no known universal reference for phase. Typically in the analysis of AC circuits, the voltage waveform of the power supply is used as a reference for phase, that voltage stated as “xxx volts at 0 degrees.” Any other AC voltage or current in that circuit will have its phase shift expressed in terms relative to that source voltage. This is what makes AC circuit calculations more complicated than DC. When applying Ohm's Law and Kirchhoff's Laws, quantities of AC voltage and current must reflect phase shift as well as amplitude. Mathematical operations of addition, subtraction, multiplication, and division must operate on these quantities of phase shift as well as amplitude. Fortunately, there is a mathematical system of quantities called complex numbers ideally suited for this task of representing amplitude and phase. Because the subject of complex numbers is so essential to the understanding of AC circuits, the next chapter will be devoted to that subject alone. One of the more fascinating applications of electricity is in the generation of invisible ripples of energy called radio waves. The limited scope of this lesson on alternating current does not permit full exploration of the concept, but some of the basic principles will be covered. With Oersted's accidental discovery of electromagnetism, it was realized that electricity and magnetism were related to each other. When an electric current was passed through a conductor, a magnetic field was generated perpendicular to the axis of flow. Likewise, if a conductor was exposed to a change in magnetic flux perpendicular to the conductor, a voltage was produced along the length of that conductor.
So far, scientists knew that electricity and magnetism always seemed to affect each other at right angles. However, a major discovery lay hidden just beneath this seemingly simple concept of related perpendicularity, and its unveiling was one of the pivotal moments in modern science. This breakthrough in physics is hard to overstate. The man responsible for this conceptual revolution was the Scottish physicist James Clerk Maxwell (1831-1879), who “unified” the study of electricity and magnetism in four relatively tidy equations. In essence, what he discovered was that electric and magnetic fields were intrinsically related to one another, with or without the presence of a conductive path for electrons to flow. Stated more formally, Maxwell's discovery was this: A changing electric field produces a perpendicular magnetic field, and A changing magnetic field produces a perpendicular electric field. All of this can take place in open space, the alternating electric and magnetic fields supporting each other as they travel through space at the speed of light. This dynamic structure of electric and magnetic fields propagating through space is better known as an electromagnetic wave. There are many kinds of natural radiative energy composed of electromagnetic waves. Even light is electromagnetic in nature. So are X-rays and “gamma” ray radiation. The only difference between these kinds of electromagnetic radiation is the frequency of their oscillation (alternation of the electric and magnetic fields back and forth in polarity). By using a source of AC voltage and a special device called an antenna, we can create electromagnetic waves (of a much lower frequency than that of light) with ease. An antenna is nothing more than a device built to produce a dispersing electric or magnetic field. Two fundamental types of antennae are the dipole and the loop: Figure below Dipole and loop antennae While the dipole looks like nothing more than an open circuit, and the loop a short circuit, these pieces of wire are effective radiators of electromagnetic fields when connected to AC sources of the proper frequency. The two open wires of the dipole act as a sort of capacitor (two conductors separated by a dielectric), with the electric field open to dispersal instead of being concentrated between two closely-spaced plates. The closed wire path of the loop antenna acts like an inductor with a large air core, again providing ample opportunity for the field to disperse away from the antenna instead of being concentrated and contained as in a normal inductor. As the powered dipole radiates its changing electric field into space, a changing magnetic field is produced at right angles, thus sustaining the electric field further into space, and so on as the wave propagates at the speed of light. As the powered loop antenna radiates its changing magnetic field into space, a changing electric field is produced at right angles, with the same end-result of a continuous electromagnetic wave sent away from the antenna. Either antenna achieves the same basic task: the controlled production of an electromagnetic field. When attached to a source of high-frequency AC power, an antenna acts as a transmitting device, converting AC voltage and current into electromagnetic wave energy. Antennas also have the ability to intercept electromagnetic waves and convert their energy into AC voltage and current. 
In this mode, an antenna acts as a receiving device: Figure below Basic radio transmitter and receiver While there is much more that may be said about antenna technology, this brief introduction is enough to give you the general idea of what's going on (and perhaps enough information to provoke a few experiments). Contributors to this chapter are listed in chronological order of their contributions, from most recent to first. See Appendix 2 (Contributor List) for dates and contact information. Harvey Lew (February 7, 2004): Corrected typographical error: “circuit” should have been “circle”. Duane Damiano (February 25, 2003): Pointed out magnetic polarity error in DC generator illustration. Mark D. Zarella (April 28, 2002): Suggestion for improving explanation of “average” waveform amplitude. John Symonds (March 28, 2002): Suggestion for improving explanation of the unit “Hertz.” Jason Starck (June 2000): HTML document formatting, which led to a much better-looking second edition. Lessons In Electric Circuits copyright (C) 2000-2013 Tony R. Kuphaldt, under the terms and conditions of the Design Science License.
http://www.ibiblio.org/kuphaldt/electricCircuits/AC/AC_1.html
Boolean satisfiability problem In computer science, satisfiability (often written in all capitals or abbreviated SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it establishes if the variables of a given Boolean formula can be assigned in such a way as to make the formula evaluate to TRUE. Equally important is to determine whether no such assignments exist, which would imply that the function expressed by the formula is identically FALSE for all possible variable assignments. In this latter case, we would say that the function is unsatisfiable; otherwise it is satisfiable. For example, the formula a AND b is satisfiable because one can find the values a = TRUE and b = TRUE, which make (a AND b) = TRUE. To emphasize the binary nature of this problem, it is frequently referred to as Boolean or propositional satisfiability. SAT was the first known example of an NP-complete problem. That briefly means that there is no known algorithm that efficiently solves all instances of SAT, and it is generally believed (but not proven, see P versus NP problem) that no such algorithm can exist. Further, a wide range of other naturally occurring decision and optimization problems can be transformed into instances of SAT. A class of algorithms called SAT solvers can efficiently solve a large enough subset of SAT instances to be useful in various practical areas such as circuit design and automatic theorem proving, by solving SAT instances made by transforming problems that arise in those areas. Extending the capabilities of SAT solving algorithms is an ongoing area of progress. However, no current methods can efficiently solve all SAT instances. Basic definitions, terminology and applications In complexity theory, the satisfiability problem (SAT) is a decision problem, whose instance is a Boolean expression written using only AND, OR, NOT, variables, and parentheses. The question is: given the expression, is there some assignment of TRUE and FALSE values to the variables that will make the entire expression true? A formula of propositional logic is said to be satisfiable if logical values can be assigned to its variables in a way that makes the formula true. The Boolean satisfiability problem is NP-complete. The propositional satisfiability problem (PSAT), which decides whether a given propositional formula is satisfiable, is of central importance in various areas of computer science, including theoretical computer science, algorithmics, artificial intelligence, hardware design, electronic design automation, and verification. A literal is either a variable or the negation of a variable (the negation of an expression can be reduced to negated variables by De Morgan's laws). For example, x1 is a positive literal and ¬x2 is a negative literal. A clause is a disjunction of literals. For example, x1 ∨ ¬x2 is a clause (read as "x-sub-one or not x-sub-2"). There are several special cases of the Boolean satisfiability problem in which the formulas are required to be conjunctions of clauses (i.e. formulae in conjunctive normal form). Determining the satisfiability of a formula in conjunctive normal form where each clause is limited to at most three literals is NP-complete; this problem is called "3SAT", "3CNFSAT", or "3-satisfiability". Determining the satisfiability of a formula in which each clause is limited to at most two literals is NL-complete; this problem is called "2SAT".
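The definition just given can be made concrete with a tiny brute-force checker. The Python sketch below, written purely for illustration, encodes a CNF clause as a list of signed integers (k for a variable, -k for its negation, loosely following the DIMACS convention) and simply tries every assignment, so it is exponential in the number of variables and is in no way a substitute for the SAT solvers discussed later in this article.

    from itertools import product

    def brute_force_sat(clauses, num_vars):
        """Return a satisfying assignment of the CNF formula, or None if unsatisfiable."""
        for bits in product([False, True], repeat=num_vars):
            assignment = {i + 1: bits[i] for i in range(num_vars)}
            if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause) for clause in clauses):
                return assignment
        return None

    # (x1 OR NOT x2) AND (NOT x1 OR x2): satisfiable, e.g. with x1 = x2 = FALSE
    print(brute_force_sat([[1, -2], [-1, 2]], 2))    # {1: False, 2: False}
    # x1 AND NOT x1, written as two unit clauses: unsatisfiable
    print(brute_force_sat([[1], [-1]], 1))           # None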
Determining the satisfiability of a formula in which each clause is a Horn clause (i.e. it contains at most one positive literal) is P-complete; this problem is called Horn-satisfiability. The Cook–Levin theorem states that the Boolean satisfiability problem is NP-complete, and in fact, this was the first decision problem proved to be NP-complete. However, beyond this theoretical significance, efficient and scalable algorithms for SAT that were developed over the last decade have contributed to dramatic advances in our ability to automatically solve problem instances involving tens of thousands of variables and millions of constraints. Examples of such problems in electronic design automation (EDA) include formal equivalence checking, model checking, formal verification of pipelined microprocessors, automatic test pattern generation, routing of FPGAs, and so on. A SAT-solving engine is now considered to be an essential component in the EDA toolbox. Complexity and restricted versions SAT was the first known NP-complete problem, as proved by Stephen Cook in 1971 and independently by Leonid Levin in 1973. Until that time, the concept of an NP-complete problem did not even exist. The problem remains NP-complete even if all expressions are written in conjunctive normal form with 3 variables per clause (3-CNF), yielding the 3SAT problem. This means the expression has the form: - (x11 OR x12 OR x13) AND - (x21 OR x22 OR x23) AND - (x31 OR x32 OR x33) AND ... where each x is a variable or a negation of a variable, and each variable can appear multiple times in the expression. A useful property of Cook's reduction is that it preserves the number of accepting answers. For example, if a graph has 17 valid 3-colorings, the SAT formula produced by the reduction will have 17 satisfying assignments. NP-completeness only refers to the run-time of the worst case instances. Many of the instances that occur in practical applications can be solved much more quickly. See runtime behavior below. SAT is easier if the formulas are restricted to those in disjunctive normal form, that is, they are disjunction (OR) of terms, where each term is a conjunction (AND) of literals (possibly negated variables). Such a formula is indeed satisfiable if and only if at least one of its terms is satisfiable, and a term is satisfiable if and only if it does not contain both x and NOT x for some variable x. This can be checked in polynomial time. Furthermore, if they are restricted to being in full disjunctive normal form, in which every variable appears exactly once in every conjunction, they can be checked in constant time (each conjunction represents one satisfying assignment). But it can take exponential time and space to convert a general SAT problem to disjunctive normal form. SAT is also easier if the number of literals in a clause is limited to 2, in which case the problem is called 2SAT. This problem can also be solved in polynomial time, and in fact is complete for the class NL. Similarly, if we limit the number of literals per clause to 2 and change the OR operations to XOR operations, the result is exclusive-or 2-satisfiability, a problem complete for SL = L. One of the most important restrictions of SAT is HORNSAT, where the formula is a conjunction of Horn clauses. This problem is solved by the polynomial-time Horn-satisfiability algorithm, and is in fact P-complete. It can be seen as P's version of the Boolean satisfiability problem. 
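Since HORNSAT, mentioned just above, is solvable in polynomial time, it is worth seeing how little machinery that takes. The sketch below (same signed-integer clause encoding as the earlier example) repeatedly applies forward chaining to collect the variables forced to true, then checks that no clause without a positive literal has been violated. Horn clauses themselves are discussed in more detail further below; this is an illustrative implementation of the idea, not an optimized linear-time algorithm.

    def horn_satisfiable(clauses):
        """Each clause is a list of signed ints with at most one positive literal.
        Returns the minimal model (set of variables forced to true) or None if unsatisfiable."""
        true_vars = set()
        changed = True
        while changed:                      # forward chaining until a fixed point is reached
            changed = False
            for clause in clauses:
                positives = [lit for lit in clause if lit > 0]
                body = {-lit for lit in clause if lit < 0}     # variables that appear negated
                if body <= true_vars:       # every premise is already true...
                    if not positives:
                        return None         # ...but the clause has no positive literal: violated
                    if positives[0] not in true_vars:
                        true_vars.add(positives[0])            # ...so its head is forced to true
                        changed = True
        return true_vars

    # (x1) AND (NOT x1 OR x2) AND (NOT x2 OR x3): forces x1, x2 and x3 to true
    print(horn_satisfiable([[1], [-1, 2], [-2, 3]]))                 # {1, 2, 3}
    # adding the all-negative clause (NOT x1 OR NOT x3) makes it unsatisfiable
    print(horn_satisfiable([[1], [-1, 2], [-2, 3], [-1, -3]]))       # None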
Here is an example of a 3-CNF expression E, where ¬ indicates negation. E has two clauses (denoted by parentheses), four variables (x1, x2, x3, x4), and k = 3 (three literals per clause). To solve this instance of the decision problem we must determine whether there is a truth value (TRUE or FALSE) we can assign to each of the variables (x1 through x4) such that the entire expression is TRUE. In this instance, there is such an assignment (x1 = TRUE, x2 = TRUE, x3 = TRUE, x4 = TRUE), so the answer to this instance is YES. This is one of many possible assignments; for instance, any assignment that includes x1 = TRUE is sufficient. If there were no such assignment(s), the answer would be NO. 3-SAT is NP-complete and it is used as a starting point for proving that other problems are also NP-hard. This is done by polynomial-time reduction from 3-SAT to the other problem. An example of a problem where this method has been used is the Clique problem. 3-SAT can be further restricted to One-in-three 3SAT, where we ask if exactly one of the literals in each clause is true, rather than at least one. This restriction remains NP-complete. There is a simple randomized algorithm due to Schöning (1999) that runs in time O((4/3)^n), where n is the number of variables, and succeeds with high probability in correctly deciding 3-SAT. The exponential time hypothesis asserts that no algorithm can solve 3-SAT in subexponential time, that is, in time 2^(o(n)). A variant of the 3-satisfiability problem is one-in-three 3SAT (also known variously as 1-in-3 SAT and exactly-1 3SAT), which is an NP-complete problem. Like 3SAT, the input instance is a collection of clauses, where each clause consists of exactly three literals, and each literal is either a variable or its negation. The one-in-three 3SAT problem is to determine whether there exists a truth assignment to the variables so that each clause has exactly one true literal (and thus exactly two false literals). (In contrast, ordinary 3SAT requires that every clause has at least one true literal.) One-in-three 3SAT is listed as NP-complete problem LO4 in the standard reference, Computers and Intractability: A Guide to the Theory of NP-Completeness by Michael R. Garey and David S. Johnson. It was proved to be NP-complete by Thomas J. Schaefer as a special case of Schaefer's dichotomy theorem, which asserts that any problem generalizing Boolean satisfiability in a certain way is either in the class P or is NP-complete. Schaefer gives a construction allowing an easy polynomial-time reduction from 3SAT to one-in-three 3SAT. Let "(x or y or z)" be a clause in a 3CNF formula. Add six new boolean variables a, b, c, d, e, and f, to be used to simulate this clause and no other. Let R(u,v,w) be a predicate that is true if and only if exactly one of the booleans u, v, and w is true. Then the formula "R(x,a,d) and R(y,b,d) and R(a,b,e) and R(c,d,f) and R(z,c,false)" is satisfiable by some setting of the new variables if and only if at least one of x, y, or z is true. We may thus convert any 3SAT instance with m clauses and n variables into a one-in-three 3SAT instance with 5m clauses and n + 6m variables. Another, more economical reduction is known that uses only four new variables and three clauses per original clause; its correctness can be shown by expressing the original clause as a product of maxterms and checking that the left-hand side evaluates to true if and only if the right-hand side is one-in-three satisfiable, with the newly added variables appearing in no other clause.
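The gadget above makes a concrete claim: R(x,a,d) AND R(y,b,d) AND R(a,b,e) AND R(c,d,f) AND R(z,c,FALSE) is satisfiable for some setting of a through f exactly when at least one of x, y, z is true. The short brute-force check below verifies that claim for all eight settings of x, y, z; it is only a sanity check of the construction as stated, not part of the reduction itself.

    from itertools import product

    def R(u, v, w):
        """True iff exactly one of the three booleans is true."""
        return (u + v + w) == 1

    def gadget_satisfiable(x, y, z):
        """Can a, b, c, d, e, f be chosen so that all five R-constraints hold?"""
        return any(
            R(x, a, d) and R(y, b, d) and R(a, b, e) and R(c, d, f) and R(z, c, False)
            for a, b, c, d, e, f in product([False, True], repeat=6)
        )

    for x, y, z in product([False, True], repeat=3):
        assert gadget_satisfiable(x, y, z) == (x or y or z)
    print("gadget agrees with (x OR y OR z) in all eight cases")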
The one-in-three 3SAT problem is often used in the literature as a known NP-complete problem in a reduction to show that other problems are NP-complete. A clause is Horn if it contains at most one positive literal. Such clauses are of interest because they are able to express implication of one variable from a set of other variables. Indeed, a clause such as ¬x1 ∨ ... ∨ ¬xn ∨ y can be rewritten as the implication x1 ∧ ... ∧ xn → y; that is, if x1, ..., xn are all true, then y needs to be true as well. The problem of deciding whether a set of Horn clauses is satisfiable is in P. This problem can indeed be solved by unit propagation, which produces the single minimal model of the set of Horn clauses (with respect to the set of literals assigned to true). A generalization of the class of Horn formulae is that of renamable-Horn formulae, which is the set of formulae that can be placed in Horn form by replacing some variables with their respective negation. Checking the existence of such a replacement can be done in linear time; therefore, the satisfiability of such formulae is in P as it can be solved by first performing this replacement and then checking the satisfiability of the resulting Horn formula. Another special case is the class of problems where each clause contains only exclusive-or operators. This is in P, since an XOR-SAT formula is a system of linear equations mod 2, and can be solved by Gaussian elimination. Schaefer's dichotomy theorem The restrictions above (CNF, 2CNF, 3CNF, Horn, XOR-SAT) bound the considered formulae to be conjunctions of subformulae; each restriction states a specific form for all subformulae: for example, only binary clauses can be subformulae in 2CNF. Schaefer's dichotomy theorem states that, for any restriction to Boolean operators that can be used to form these subformulae, the corresponding satisfiability problem is in P or NP-complete. The memberships in P of 2CNF, Horn, and XOR-SAT satisfiability are special cases of this theorem. As mentioned briefly above, though the problem is NP-complete, many practical instances can be solved much more quickly. Many practical problems are actually "easy", so the SAT solver can easily find a solution, or prove that none exists, relatively quickly, even though the instance has thousands of variables and tens of thousands of constraints. Other much smaller problems exhibit run-times that are exponential in the problem size, and rapidly become impractical. Unfortunately, there is no reliable way to tell the difficulty of the problem without trying it. Therefore, almost all SAT solvers include time-outs, so they will terminate even if they cannot find a solution. Finally, different SAT solvers will find different instances easy or hard, and some excel at proving unsatisfiability, and others at finding solutions. All of these behaviors can be seen in the SAT solving contests. Extensions of SAT An extension that has gained significant popularity since 2003 is Satisfiability modulo theories (SMT) that can enrich CNF formulas with linear constraints, arrays, all-different constraints, uninterpreted functions, etc. Such extensions typically remain NP-complete, but very efficient solvers are now available that can handle many such kinds of constraints. The satisfiability problem becomes more difficult (PSPACE-complete) if we allow both "for all" and "there exists" quantifiers to bind the Boolean variables. An example of such an expression would be ∀x ∃y ∃z ((x ∨ z) ∧ y). SAT itself implicitly uses only ∃ ("there exists") quantifiers. If we allow only ∀ ("for all") quantifiers, it becomes the co-NP-complete tautology problem.
If we allow both, the problem is called the quantified Boolean formula problem (QBF), which can be shown to be PSPACE-complete. It is widely believed that PSPACE-complete problems are strictly harder than any problem in NP, although this has not yet been proved. A number of variants deal with the number of variable assignments making the formula true. Ordinary SAT asks if there is at least one such assignment. MAJSAT, which asks if the majority of all assignments make the formula true, is complete for PP, a probabilistic class. The problem of counting how many variable assignments satisfy a formula is not a decision problem; it is in #P. UNIQUE-SAT, the problem of determining whether a formula has exactly one satisfying assignment, is complete for US. When it is known in advance that the formula has either exactly one satisfying assignment or none at all, the problem is called UNAMBIGUOUS-SAT. Although this problem seems easier, it has been shown that if there is a practical (randomized polynomial-time) algorithm to solve this problem, then all problems in NP can be solved just as easily. The maximum satisfiability problem, an FNP generalization of SAT, asks for the maximum number of clauses which can be satisfied by any assignment. It has efficient approximation algorithms, but is NP-hard to solve exactly. Worse still, it is APX-complete, meaning there is no polynomial-time approximation scheme (PTAS) for this problem unless P=NP. An algorithm which correctly answers if an instance of SAT is solvable can be used to find a satisfying assignment. First, the question is asked of the formula Φ itself. If the answer is "no", the formula is unsatisfiable. Otherwise, the question is asked of Φ with its first variable x1 fixed to 0, i.e. x1 is assumed to be FALSE. If the answer is "no", it is assumed that x1 = 1, otherwise x1 = 0. The values of the other variables are found subsequently in the same way. This self-reducibility property is used in several theorems in complexity theory. Algorithms for solving SAT There are two classes of high-performance algorithms for solving instances of SAT in practice: the conflict-driven clause learning algorithm, which can be viewed as a modern variant of the DPLL algorithm (well-known implementations include Chaff and GRASP), and stochastic local search algorithms, such as WalkSAT. A DPLL SAT solver employs a systematic backtracking search procedure to explore the (exponentially sized) space of variable assignments looking for satisfying assignments. The basic search procedure was proposed in two seminal papers in the early 60s (see references below) and is now commonly referred to as the Davis–Putnam–Logemann–Loveland algorithm ("DPLL" or "DLL"). Theoretically, exponential lower bounds have been proved for the DPLL family of algorithms. In contrast, randomized algorithms like the PPSZ algorithm by Paturi, Pudlák, Saks, and Zane set variables in a random order according to some heuristics, for example bounded-width resolution. If the heuristic can't find the correct setting, the variable is assigned randomly. For 3-SAT with a single satisfying assignment, the PPSZ algorithm currently achieves the best known runtime bound. In the setting with many satisfying assignments the randomized algorithm by Schöning has a better bound.
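To make the DPLL idea concrete, here is a deliberately minimal backtracking solver in Python: it performs unit propagation and then splits on an unassigned variable, using the same signed-integer clause encoding as the earlier examples. It omits every refinement that makes real solvers fast (clause learning, watched literals, restarts, branching heuristics), so it is a teaching sketch rather than a usable tool.

    def dpll(clauses, assignment=None):
        """Minimal DPLL: unit propagation, then split on an unassigned variable.
        Clauses use the signed-integer encoding; returns a satisfying assignment or None."""
        if assignment is None:
            assignment = {}
        clauses = [list(c) for c in clauses]

        # Unit propagation: a one-literal clause forces that literal's value.
        while True:
            unit = next((c[0] for c in clauses if len(c) == 1), None)
            if unit is None:
                break
            assignment[abs(unit)] = unit > 0
            new_clauses = []
            for c in clauses:
                if unit in c:
                    continue                      # clause satisfied; drop it
                reduced = [lit for lit in c if lit != -unit]
                if not reduced:
                    return None                   # empty clause: conflict under this branch
                new_clauses.append(reduced)
            clauses = new_clauses

        if not clauses:
            return assignment                     # every clause satisfied
        var = abs(clauses[0][0])                  # naive branching choice
        for value in (True, False):
            result = dpll(clauses + [[var if value else -var]], dict(assignment))
            if result is not None:
                return result
        return None

    print(dpll([[1, 2], [-1, 2], [-2, 3], [-3, -1]]))   # {1: False, 2: True, 3: True}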
Modern SAT solvers (developed in the last ten years) come in two flavors: "conflict-driven" and "look-ahead". Conflict-driven solvers augment the basic DPLL search algorithm with efficient conflict analysis, clause learning, non-chronological backtracking (aka backjumping), as well as "two-watched-literals" unit propagation, adaptive branching, and random restarts. These "extras" to the basic systematic search have been empirically shown to be essential for handling the large SAT instances that arise in electronic design automation (EDA). Look-ahead solvers have especially strengthened reductions (going beyond unit-clause propagation) and the heuristics, and they are generally stronger than conflict-driven solvers on hard instances (while conflict-driven solvers can be much better on large instances which actually have an easy instance inside). Modern SAT solvers are also having significant impact on the fields of software verification, constraint solving in artificial intelligence, and operations research, among others. Powerful solvers are readily available as free and open source software. In particular, the conflict-driven MiniSAT, which was relatively successful at the 2005 SAT competition, only has about 600 lines of code. An example of a look-ahead solver is march_dl, which won a prize at the 2007 SAT competition. Certain types of large random satisfiable instances of SAT can be solved by survey propagation (SP). Particularly in hardware design and verification applications, satisfiability and other logical properties of a given propositional formula are sometimes decided based on a representation of the formula as a binary decision diagram (BDD). Propositional satisfiability has various generalisations, including satisfiability for quantified Boolean formula problem, for first- and second-order logic, constraint satisfaction problems, 0-1 integer programming, and maximum satisfiability problem. - Schaefer, Thomas J. (1978). "The complexity of satisfiability problems". Proceedings of the 10th Annual ACM Symposium on Theory of Computing. San Diego, California. pp. 216–226. doi:10.1145/800133.804350. - "The international SAT Competitions web page". Retrieved 2007-11-15. - "An improved exponential-time algorithm for k-SAT", Paturi, Pudlák, Saks, Zane. References are ordered by date of publication: - Davis, M.; Putnam, H. (1960). "A Computing Procedure for Quantification Theory". Journal of the ACM 7 (3): 201. doi:10.1145/321033.321034. - Davis, M.; Logemann, G.; Loveland, D. (1962). "A machine program for theorem-proving". Communications of the ACM 5 (7): 394–397. doi:10.1145/368273.368557. - Cook, S. A. (1971). "The complexity of theorem-proving procedures". Proceedings of the 3rd Annual ACM Symposium on Theory of Computing: 151–158. doi:10.1145/800157.805047. - Michael R. Garey and David S. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman. ISBN 0-7167-1045-5. A9.1: LO1 – LO7, pp. 259 – 260. - Marques-Silva, J. P.; Sakallah, K. A. (1999). "GRASP: a search algorithm for propositional satisfiability". IEEE Transactions on Computers 48 (5): 506. doi:10.1109/12.769433. - Marques-Silva, J.; Glass, T. (1999). "Combinational equivalence checking using satisfiability and recursive learning". Design, Automation and Test in Europe Conference and Exhibition, 1999. Proceedings (Cat. No. PR00078). p. 145. doi:10.1109/DATE.1999.761110. ISBN 0-7695-0078-1. - R. E. Bryant, S. M. German, and M. N.
Velev, Microprocessor Verification Using Efficient Decision Procedures for a Logic of Equality with Uninterpreted Functions, in Analytic Tableaux and Related Methods, pp. 1–13, 1999. - Schöning, U. (1999). A probabilistic algorithm for k-SAT and constraint satisfaction problems. p. 410. doi:10.1109/SFFCS.1999.814612. - Moskewicz, M. W.; Madigan, C. F.; Zhao, Y.; Zhang, L.; Malik, S. (2001). "Chaff". Proceedings of the 38th conference on Design automation - DAC '01. p. 530. doi:10.1145/378239.379017. ISBN 1581132972. - Clarke, E.; Biere, A.; Raimi, R.; Zhu, Y. (2001). Formal Methods in System Design 19: 7. doi:10.1023/A:1011276507260. - Gi-Joon Nam; Sakallah, K. A.; Rutenbar, R. A. (2002). "A new FPGA detailed routing approach via search-based Boolean satisfiability". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 21 (6): 674. doi:10.1109/TCAD.2002.1004311. - Giunchiglia, E.; Tacchella, A., eds. (2004). Lecture Notes in Computer Science 2919. doi:10.1007/b95238. - Babic, D.; Bingham, J.; Hu, A. J. (2006). "B-Cubing: New Possibilities for Efficient SAT-Solving". IEEE Transactions on Computers 55 (11): 1315. doi:10.1109/TC.2006.175. - Rodriguez, C.; Villagra, M.; Baran, B. (2007). "Asynchronous team algorithms for Boolean Satisfiability". 2007 2nd Bio-Inspired Models of Network, Information and Computing Systems. p. 66. doi:10.1109/BIMNICS.2007.4610083. - Carla P. Gomes, Henry Kautz, Ashish Sabharwal, Bart Selman (2008). "Satisfiability Solvers". In Frank Van Harmelen, Vladimir Lifschitz, Bruce Porter. Handbook of knowledge representation. Foundations of Artificial Intelligence 3. Elsevier. pp. 89–134. doi:10.1016/S1574-6526(07)03002-7. ISBN 978-0-444-52211-5. More information on SAT: - WinSAT v2.04: A Windows-based SAT application made particularly for researchers. - The MiniSAT Solver - Fast SAT Solver - a simple but fast implementation of a SAT solver based on genetic algorithms
http://en.wikipedia.org/wiki/Boolean_satisfiability_problem
Introduction to Optical Birefringence Birefringence is formally defined as the double refraction of light in a transparent, molecularly ordered material, which is manifested by the existence of orientation-dependent differences in refractive index. Many transparent solids are optically isotropic, meaning that the index of refraction is equal in all directions throughout the crystalline lattice. Examples of isotropic solids are glass, table salt (sodium chloride, illustrated in Figure 1(a)), many polymers, and a wide variety of both organic and inorganic compounds. The simplest crystalline lattice structure is cubic, as illustrated by the molecular model of sodium chloride in Figure 1(a), an arrangement where all of the sodium and chloride ions are ordered with uniform spacing along three mutually perpendicular axes. Each chloride ion is surrounded by (and electrostatically bonded to) six individual sodium ions and vice versa for the sodium ions. The lattice structure illustrated in Figure 1(b) represents the mineral calcite (calcium carbonate), which consists of a rather complex, but highly ordered three-dimensional array of calcium and carbonate ions. Calcite has an anisotropic crystalline lattice structure that interacts with light in a totally different manner than isotropic crystals. The polymer illustrated in Figure 1(c) is amorphous and devoid of any recognizable periodic crystalline structure. Polymers often possess some degree of crystalline order and may or may not be optically transparent. Crystals are classified as being either isotropic or anisotropic depending upon their optical behavior and whether or not their crystallographic axes are equivalent. All isotropic crystals have equivalent axes that interact with light in a similar manner, regardless of the crystal orientation with respect to incident light waves. Light entering an isotropic crystal is refracted at a constant angle and passes through the crystal at a single velocity without being polarized by interaction with the electronic components of the crystalline lattice. The term anisotropy refers to a non-uniform spatial distribution of properties, which results in different values being obtained when specimens are probed from several directions within the same material. Observed properties are often dependent on the particular probe being employed and often vary depending upon whether the observed phenomena are based on optical, acoustical, thermal, magnetic, or electrical events. On the other hand, as mentioned above, isotropic properties remain symmetrical, regardless of the direction of measurement, with each type of probe reporting identical results. Anisotropic crystals, such as quartz, calcite, and tourmaline, have crystallographically distinct axes and interact with light by a mechanism that is dependent upon the orientation of the crystalline lattice with respect to the incident light angle. When light enters an anisotropic crystal along the optical axis, it behaves in a manner similar to the interaction with isotropic crystals, and passes through at a single velocity. However, when light enters a non-equivalent axis, it is refracted into two rays, each polarized with the vibration directions oriented at right angles (mutually perpendicular) to one another and traveling at different velocities. This phenomenon is termed double refraction or birefringence and is exhibited to a greater or lesser degree in all anisotropic crystals.
Electromagnetic radiation propagates through space with oscillating electric and magnetic field vectors alternating in sinusoidal patterns that are perpendicular to one another and to the direction of wave propagation. Because visible light is composed of both electrical and magnetic components, the velocity of light through a substance is partially dependent upon the electrical conductivity of the material. Light waves passing through a transparent crystal must interact with localized electrical fields during their journey. The relative speed at which electrical signals travel through a material varies with the type of signal and its interaction with the electronic structure, and is determined by a property referred to as the dielectric constant of the material. The vectorial relationship defining the interaction between a light wave and a crystal through which it passes is governed by the inherent orientation of lattice electrical vectors and the direction of the wave's electric vector component. Therefore, a careful consideration of the electrical properties of an anisotropic material is fundamental to the understanding of how a light wave interacts with the material as it propagates through. The phenomenon of double refraction is based on the laws of electromagnetism, first proposed by the Scottish physicist James Clerk Maxwell in the 1860s. His elaborate series of equations demonstrates that the velocity of light through a material equals the speed of light in a vacuum (c) divided by the square root of the product of the material's dielectric constant (ε) and the magnetic permeability (μ) of the medium. In general, biological and related materials have a magnetic permeability very near 1.0, as do many conducting and non-conducting specimens of interest to the microscopist. The dielectric constant of a material is therefore related to the refractive index through a simple equation: ε = n², where ε is the dielectric constant and n is the material's measured refractive index. This equation was derived for specific frequencies of light and ignores dispersion of polychromatic light as it passes through the material. Anisotropic crystals are composed of complex molecular and atomic lattice orientations that have varying electrical properties depending upon the direction from which they are being probed. As a result, the refractive index also varies with direction when light passes through an anisotropic crystal, giving rise to direction-specific trajectories and velocities. Perhaps one of the most dramatic demonstrations of double refraction occurs with calcium carbonate (calcite) crystals, as illustrated in Figure 2. The rhombohedral cleavage block of calcite produces two images when it is placed over an object, and then viewed with reflected light passing through the crystal. One of the images appears as would normally be expected when observing an object through clear glass or an isotropic crystal, while the other image appears slightly displaced, due to the nature of doubly-refracted light. When anisotropic crystals refract light, they split the incoming rays into two components that take different paths during their journey through the crystal and emerge as separate light rays. This unusual behavior, as discussed above, is attributed to the arrangement of atoms in the crystalline lattice.
Because the precise geometrical ordering of the atoms is not symmetrical with respect to the crystalline axes, light rays passing through the crystal can experience different refractive indices, depending upon the direction of propagation. One of the rays passing through an anisotropic crystal obeys the laws of normal refraction, and travels with the same velocity in every direction through the crystal. This light ray is termed the ordinary ray. The other ray travels with a velocity that is dependent upon the propagation direction within the crystal, and is termed the extraordinary ray. Therefore, each light ray entering the crystal is split into an ordinary and an extraordinary ray that emerge from the distant end of the crystal as linearly polarized rays having their electric field vectors vibrating in planes that are mutually perpendicular. These phenomena are illustrated in Figures 2 through 4. The calcite crystal presented in Figure 3(b) is positioned over the capital letter A on a white sheet of paper demonstrating a double image observed through the crystal. If the crystal were to be slowly rotated around the letter, one of the images of the letter will remain stationary, while the other precesses in a 360-degree circular orbit around the first. The orientation of the electric vector vibration planes for both the ordinary (O) and extraordinary (E) rays are indicated by lines with doubled arrows in Figure 3(b). Note that these axes are perpendicular to each other. The crystal optical axis, which makes an equal angle (103 degrees) with all three crystal faces joined at the corner, is also indicated at the lower portion of the crystal. The degree of birefringence in calcite is so pronounced that the images of the letter A formed by the ordinary and extraordinary rays are completely separated. This high level of birefringence is not observed in all anisotropic crystals. Transparent dichroic polarizers can be utilized to determine the electric vector directions for the extraordinary and ordinary rays in a calcite crystal, as presented in Figures 3(a) and Figure 3(c). When the polarizer is oriented so that all light waves having electric vectors oriented in the horizontal direction are transmitted (Figure 3(a)), waves having similar vectors in the vertical direction are absorbed, and vice versa (Figure 3(c)). In the calcite crystal presented in Figure 3, the extraordinary ray has a vertical electric vector vibration angle, which is absorbed when the polarizer is oriented in a horizontal direction (Figure 3(a)). In this case, only light from the ordinary ray is passed through the polarizer and its corresponding image of the letter A is the only one observed. In contrast, when the polarizer is turned so that the vibration transmission direction is oriented vertically (Figure 3(c)), the ordinary ray is blocked and the image of the letter A produced by the extraordinary ray is the only one visible. In Figure 3, the incident light rays giving rise to the ordinary and extraordinary rays enter the crystal in a direction that is oblique with respect to the optical axis, and are responsible for the observed birefringent character. The behavior of an anisotropic crystal is different, however, if the incident light enters the crystal in a direction that is either parallel or perpendicular to the optical axis, as presented in Figure 4. 
When an incident ray enters the crystal perpendicular to the optical axis, it is separated into ordinary and extraordinary rays, as described above, but instead of taking different pathways, the trajectories of these rays are coincident. Even though the ordinary and extraordinary rays emerge from the crystal at the same location, they exhibit different optical path lengths and are subsequently shifted in phase relative to one another (Figure 4(b)). The two cases just described are illustrated in Figure 4(a), for the oblique case (see Figures 2 and 3), and Figure 4(b) for the situation where incident light is perpendicular to the optical axis of a birefringent crystal. In the case where incident light rays impact the crystal in a direction that is parallel to the optical axis (Figure 4(c)), they behave as ordinary light rays and are not separated into individual components by an anisotropic birefringent crystal. Calcite and other anisotropic crystals act as if they were isotropic materials (such as glass) under these circumstances. The optical path lengths of the light rays emerging from the crystal are identical, and there is no relative phase shift. Although it is common to interchangeably use the terms double refraction and birefringence to indicate the ability of an anisotropic crystal to separate incident light into ordinary and extraordinary rays, these phenomena actually refer to different manifestations of the same process. The actual division of a light ray into two visible species, each refracting at a different angle, is the process of double refraction. In contrast, birefringence refers to the physical origin of the separation, which is the existence of a variation in refractive index that is sensitive to direction in a geometrically ordered material. The difference in refractive index, or birefringence, between the extraordinary and ordinary rays traveling through an anisotropic crystal is a measurable quantity, and can be expressed as an absolute value by the equation: Birefringence (B) = |n(e) - n(o)|, where n(e) and n(o) are the refractive indices experienced by the extraordinary and ordinary rays, respectively. This expression holds true for any part or fragment of an anisotropic crystal with the exception of light waves propagated along the optical axis of the crystal. Because the refractive index values for each component can vary, the absolute value of this difference determines the total amount of birefringence, while the sign of the birefringence will be either negative or positive. A determination of the birefringence sign by analytical methods is utilized to segregate anisotropic specimens into categories, which are termed either positively or negatively birefringent. The birefringence of a specimen is not a fixed value, but will vary with the orientation of the crystal relative to the incident angle of the illumination. The optical path difference is a classical optical concept related to birefringence, and both are defined by the relative phase shift between the ordinary and extraordinary rays as they emerge from an anisotropic material. In general, the optical path length through a medium is computed by multiplying the specimen thickness by the refractive index, but only when the medium is homogeneous and does not contain significant refractive index deviations or gradients; the optical path difference is then the difference between the path lengths seen by the two rays. This difference is usually expressed in nanometers and grows larger with increasing specimen thickness, whereas birefringence itself, being a difference of two refractive indices, is a dimensionless number.
For a system with two refractive index values (n(1) and n(2)), the optical path difference (D) is determined from the equation: Optical Path Difference (D) = (n(1) - n(2)) × t (thickness). In order to consider the phase relationship and velocity difference between the ordinary and extraordinary rays after they pass through a birefringent crystal, a quantity referred to as the relative retardation is often determined. As mentioned above, the two light rays are oriented so that they are vibrating at right angles to each other. Each ray will encounter a slightly different electrical environment (refractive index) as it enters the crystal and this will affect the velocity at which the ray passes through the crystal. Because of the difference in refractive indices, one ray will pass through the crystal at a slower rate than the other ray. In other words, the velocity of the slower ray will be retarded with respect to the faster ray. This retardation value (the relative retardation) can be quantitatively determined using the following equation: Retardation (Γ) = Thickness (t) × Birefringence (B), or Γ = t × |n(e) - n(o)|, where Γ is the quantitative retardation of the material, t is the thickness of the birefringent crystal (or material) and B is the measured birefringence, as defined above. Factors contributing to the value of retardation are the magnitude of the difference in refractive indices for the environments seen by the ordinary and extraordinary rays, and also the specimen thickness. Obviously, the greater the thickness or difference in refractive indices, the greater the degree of retardation between waves. Early observations made on the mineral calcite indicated that thicker calcite crystals caused greater differences in splitting of the images seen through the crystals, such as those illustrated in Figure 3. This observation agrees with the equation above, which indicates retardation will increase with crystal (or sample) thickness. The behavior of an ordinary light ray in a birefringent crystal can be described in terms of a spherical wavefront based on Huygens' principle of wavelets emanating from a point source of light in a homogeneous medium (as illustrated in Figure 5). The propagation of these waves through an isotropic crystal occurs at constant velocity because the refractive index experienced by the waves is uniform in all directions (Figure 5(a)). In contrast, the expanding wavefront of extraordinary waves, which encounter refractive index variations as a function of direction (see Figure 5(b)), can be described by the surface of an ellipsoid of revolution. The upper and lower limits of extraordinary wave velocities are defined by the long and short axes of the ellipsoid (Figure 5(c)). The wavefront reaches its highest velocity when propagating in the direction parallel to the long axis of the ellipsoid, which is referred to as the fast axis. On the other hand, the slowest wavefronts occur when the wave travels along the short axis of the ellipsoid. This axis is termed the slow axis. Between these two extremes, wavefronts traveling in other directions experience a gradient of refractive index, which is dependent upon orientation, and propagate with velocities of intermediate values. Transparent crystalline materials are generally classified into two categories defined by the number of optical axes present in the molecular lattices.
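Before turning to those two categories, here is a quick worked example of the two formulas just given, expressed as a short Python sketch. The calcite refractive indices used (n(o) ≈ 1.658 and n(e) ≈ 1.486 for yellow sodium light) are commonly quoted values rather than figures taken from this article, and the 20-micrometer thickness is an arbitrary choice for illustration.

    def birefringence(n_e, n_o):
        """B = |n(e) - n(o)|"""
        return abs(n_e - n_o)

    def retardation_nm(thickness_nm, n_e, n_o):
        """Gamma = t * |n(e) - n(o)|, expressed in the same unit as the thickness."""
        return thickness_nm * birefringence(n_e, n_o)

    # Commonly quoted indices for calcite at 589 nm (assumed here for illustration).
    N_O, N_E = 1.658, 1.486
    THICKNESS_NM = 20_000            # a 20 micrometer thick specimen, chosen arbitrarily

    print(f"birefringence B = {birefringence(N_E, N_O):.3f}")                     # about 0.172
    print(f"retardation     = {retardation_nm(THICKNESS_NM, N_E, N_O):.0f} nm")   # about 3440 nm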
Uniaxial crystals have a single optical axis and comprise the largest family of common birefringent specimens, including calcite, quartz, and ordered synthetic or biological structures. The other major class is biaxial crystals, which are birefringent materials that feature two independent optical axes. The ordinary and extraordinary wavefronts in uniaxial crystals coincide at either the slow or the fast axis of the ellipsoid, depending upon the distribution of refractive indices within the crystal (illustrated in Figure 6). The optical path difference or relative retardation between these rays is determined by the lag of one wave behind the other in surface wavefronts along the propagation direction. In cases where the ordinary and extraordinary wavefronts coincide at the long or major axis of the ellipsoid, then the refractive index experienced by the extraordinary wave is greater than that of the ordinary wave (Figure 6(b)). This situation is referred to as positive birefringence. However, if the ordinary and extraordinary wavefronts overlap at the minor axis of the ellipsoid (Figure 6(a)), then the opposite is true. In effect, the refractive index through which the ordinary wave passes exceeds that of the extraordinary wave, and the material is termed negatively birefringent. A diagrammatic ellipsoid relating the orientation and relative magnitude of refractive index in a crystal is termed the refractive index ellipsoid, and is illustrated in Figures 5 and 6. Returning to the calcite crystal presented in Figure 2, the crystal is illustrated having the optical axis positioned at the top left-hand corner. Upon entering the crystal, the ordinary light wave is refracted without deviation from the normal incidence angle as if it were traveling through an isotropic medium. Alternatively, the extraordinary wave deviates to the left and travels with the electric vector perpendicular to that of the ordinary wave. Because calcite is a negatively birefringent crystal, the ordinary wave is the slow wave and the extraordinary wave is the fast wave. Birefringent Crystals in a Polarizing Optical Microscope As mentioned above, light that is doubly refracted through anisotropic crystals is polarized with the electric vector vibration directions of the ordinary and extraordinary light waves being oriented perpendicular to each other. The behavior of anisotropic crystals under crossed polarized illumination in an optical microscope can now be examined. Figure 7 illustrates a birefringent (anisotropic) crystal placed between two polarizers whose vibration directions are oriented perpendicular to each other (and lying in directions indicated by the arrows next to the polarizer and analyzer labels). Non-polarized white light from the illuminator enters the polarizer on the left and is linearly polarized with an orientation in the direction indicated by the arrow (adjacent to the polarizer label), and is arbitrarily represented by a red sinusoidal light wave. Next, the polarized light enters the anisotropic crystal (mounted on the microscope stage) where it is refracted and divided into two separate components vibrating parallel to the crystallographic axes and perpendicular to each other (the red open and filled light waves). The polarized light waves then travel through the analyzer (whose polarization position is indicated by the arrow next to the analyzer label), which allows only those components of the light waves that are parallel to the analyzer transmission azimuth to pass. 
The relative retardation of one ray with respect to another is indicated by an equation (thickness multiplied by refractive index difference) that relates the variation in speed between the ordinary and extraordinary rays refracted by the anisotropic crystal. In order to examine more closely how birefringent, anisotropic crystals interact with polarized light in an optical microscope, the properties of an individual crystal will be considered. The specimen material is a hypothetical tetragonal, birefringent crystal having an optical axis oriented in a direction that is parallel to the long axis of the crystal. Light entering the crystal from the polarizer will be traveling perpendicular to the optical (long) axis of the crystal. The illustrations in Figure 8 present the crystal as it will appear in the eyepieces of a microscope under crossed-polarized illumination as it is rotated around the microscope optical axis. In each frame of Figure 8, the axis of the microscope polarizer is indicated by the capital letter P and is oriented in an East-West (horizontal) direction. The axis of the microscope analyzer is indicated by the letter A and is oriented in a North-South (vertical) direction. These axes are perpendicular to each other and result in a totally dark field when observed through the eyepieces with no specimen on the microscope stage. Figure 8(a) illustrates the anisotropic tetragonal, birefringent crystal in an orientation where the long (optical) axis of the crystal lies parallel to the transmission azimuth of the polarizer. In this case, light passing through the polarizer, and subsequently through the crystal, is vibrating in a plane that is parallel to the direction of the polarizer. Because none of the light incident on the crystal is refracted into divergent ordinary and extraordinary waves, the isotropic light waves passing through the crystal fail to produce electric vector vibrations in the correct orientation to traverse through the analyzer and yield interference effects (see the horizontal arrow in Figure 8(a), and the discussion below). As a result the crystal is very dark, being almost invisible against the black background. For the purposes of illustration, the crystal depicted in Figure 8(a) is not totally extinct (as it would be between crossed polarizers) but passes a small portion of red light, to enable the reader to note the position of the crystal. Microscopists classically refer to this orientation as being a position of extinction for the crystal, which is important as a reference point for determining the refractive indices of anisotropic materials with a polarizing microscope. By removing the analyzer in a crossed polarizing microscope, the single permitted direction of light vibration passing through the polarizer interacts with only one electrical component in the birefringent crystal. The technique allows segregation of a single refractive index for measurement. Subsequently, the remaining refractive index of a birefringent material can then be measured by rotation of the polarizer by 90 degrees. The situation is very different in Figure 8(b), where the long (optical) axis of the crystal is now positioned at an oblique angle (a) with respect to the polarizer transmission azimuth, a situation brought about through rotation of the microscope stage. In this case, a portion of the light incident upon the crystal from the polarizer is passed on to the analyzer. 
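The dependence of brightness on stage rotation described above can also be sketched numerically. For a birefringent crystal between crossed polarizers, the orientation contributes a factor of sin²(2θ), where θ is the angle between the crystal's vibration axes and the polarizer transmission azimuth; the vector analysis in the following paragraph is what justifies this factor, so the hedged sketch below simply tabulates it (the wavelength-dependent interference term is taken up afterward).

```python
# Sketch of the orientation factor sin^2(2*theta) for a birefringent crystal
# between crossed polarizers: extinction when the crystal axes line up with
# the polarizer or analyzer (0 and 90 degrees) and maximum brightness at the
# 45 degree position, matching the rotation series of Figure 8.
import math

for theta_deg in range(0, 91, 15):
    factor = math.sin(math.radians(2 * theta_deg)) ** 2
    print(f"stage rotation {theta_deg:3d} deg -> relative brightness {factor:.2f}")
```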
To obtain a quantitative estimate of the amount of light passing through the analyzer, simple vector analysis can be applied to solve the problem. The first step is to determine the contributions from the polarizer to o and e (see Figure 8(b); the letters refer to the ordinary (o) ray and extraordinary (e) ray, which are discussed above). Projections of the vectors are dropped onto the axis of the polarizer, and assume an arbitrary value of 1 for both o and e, which are proportional to the actual intensities of the ordinary and extraordinary ray. The contributions from the polarizer for o and e are illustrated with black arrows designated by x and y on the polarizer axis (P) in Figure 8(b). These lengths are then measured on the vectors o and e (illustrated as red arrows designating the vectors), which are then added together to produce the resultant vector, r'. A projection from the resultant onto the analyzer axis (A) produces the absolute value, R. The value of R on the analyzer axis is proportional to the amount of light passing through the analyzer. The results indicate that a portion of light from the polarizer passes through the analyzer and the birefringent crystal displays some degree of brightness. The maximum brightness for the birefringent material is observed when the long (optical) axis of the crystal is oriented at a 45 degree angle with respect to both the polarizer and analyzer, as illustrated in Figure 8(c). Dropping the projections of the vectors o and e onto the polarizer axis (P) determines the contributions from the polarizer to these vectors. When these projections are then measured on the vectors, the resultant can be determined by completing a rectangle to the analyzer axis (A). The technique just described will work for the orientation of any crystal with respect to the polarizer and analyzer axis because o and e are always at right angles to each other, with the only difference being the orientation of o and e with respect to the crystal axes. When the ordinary and extraordinary rays emerge from the birefringent crystal, they are still vibrating at right angles with respect to one another. However, the components of these waves that pass through the analyzer are vibrating in the same plane (as illustrated in Figure 8). Because one wave is retarded with respect to the other, interference (either constructive or destructive) occurs between the waves as they pass through the analyzer. The net result is that some birefringent samples acquire a spectrum of color when observed in white light through crossed polarizers. Quantitative analysis of the interference colors observed in birefringent samples is usually accomplished by consulting a Michel-Levy chart similar to the one illustrated in Figure 9. As is evident from this graph, the polarization colors visualized in the microscope and recorded onto film or captured digitally can be correlated with the actual retardation, thickness, and birefringence of the specimen. The chart is relatively easy to use with birefringent samples if two of the three required variables are known. When the specimen is placed between crossed polarizers in the microscope and rotated to a position of maximum brightness with any one of a variety of retardation plates, the color visualized in the eyepieces can be traced on the retardation axis to find the wavelength difference between the ordinary and extraordinary waves passing through the specimen. 
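The vector construction just described can also be carried out numerically. In the sketch below, the light leaving the polarizer is resolved into the o and e components, one component is delayed by the phase corresponding to an assumed retardation, and the two are recombined and projected onto the analyzer; the squared magnitude of that projection reproduces the familiar crossed-polarizer transmission sin²(2θ) • sin²(πΓ/λ). Evaluating it across the visible spectrum for an assumed retardation of 550 nanometers shows green being suppressed while red and blue pass, which is how the interference colors catalogued on the Michel-Levy chart arise. The specific numbers are illustrative assumptions, not values taken from the article.

```python
# Sketch of the vector analysis for a birefringent specimen between crossed
# polarizers: decompose, retard one component, recombine, project onto the
# analyzer. Assumes ideal polarizers and a wavelength-independent retardation.
import cmath, math

def analyzer_transmission(theta_deg, gamma_nm, wavelength_nm):
    """Fraction of the polarized intensity passed by the crossed analyzer."""
    theta = math.radians(theta_deg)
    delta = 2 * math.pi * gamma_nm / wavelength_nm     # relative phase between the two rays
    e_amp, o_amp = math.cos(theta), -math.sin(theta)   # projections onto the e and o axes
    # Apply the relative phase to one component, then project both onto the analyzer (y) axis.
    e_y = e_amp * cmath.exp(1j * delta) * math.sin(theta)
    o_y = o_amp * math.cos(theta)
    return abs(e_y + o_y) ** 2

gamma = 550.0  # nm; assumed retardation corresponding to first-order red
for wl_nm in (450, 500, 550, 600, 650, 700):
    numeric = analyzer_transmission(45, gamma, wl_nm)
    closed = math.sin(2 * math.radians(45)) ** 2 * math.sin(math.pi * gamma / wl_nm) ** 2
    print(f"{wl_nm} nm: transmission {numeric:.2f} (closed form {closed:.2f})")
```

With the crystal at the 45 degree position, wavelengths near 550 nanometers are extinguished while the reds and blues are transmitted, which is why this retardation is seen as the magenta "first-order red" described below.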
Alternatively, by measuring the refractive indices of an anisotropic specimen and calculating their difference (the birefringence), the interference color(s) can be determined from the birefringence values along the top of the chart. By extrapolating the angled lines back to the ordinate, the thickness of the specimen can also be estimated.

The lower section of the Michel-Levy chart (the x-axis) marks the orders of retardation in multiples of approximately 550 nanometers. The region between zero and 550 nanometers is known as the first order of polarization colors, and the magenta color that occurs near 550 nanometers is often termed first-order red. Colors between 550 and 1100 nanometers are termed second-order colors, and so on up the chart; the black at the beginning of the chart is known as zero-order black. Many of the Michel-Levy charts printed in textbooks plot higher-order colors up to the fifth or sixth order. The most sensitive region of the chart is first-order red (550 nanometers), because even a slight change in retardation shifts the color dramatically, up the retardation scale toward cyan or down toward yellow. Many microscope manufacturers take advantage of this sensitivity by providing a full-wave retardation plate or first-order red compensator with their polarizing microscopes to assist scientists in determining the properties of birefringent materials.

Categories of Birefringence

Although birefringence is an inherent property of many anisotropic crystals, such as calcite and quartz, it can also arise from other factors, such as structural ordering, physical stress, deformation, flow through a restricted conduit, and strain. Intrinsic birefringence is the term used to describe naturally occurring materials that have a direction-dependent asymmetry in refractive index. These materials include many anisotropic natural and synthetic crystals, minerals, and chemicals.

Structural birefringence applies to a wide spectrum of anisotropic formations, including biological macromolecular assemblies such as chromosomes, muscle fibers, microtubules, liquid crystalline DNA, and fibrous protein structures such as hair. Unlike many other forms of birefringence, structural birefringence is often sensitive to refractive index fluctuations or gradients in the surrounding medium. In addition, many synthetic materials also exhibit structural birefringence, including fibers, long-chain polymers, resins, and composites.

Stress and strain birefringence occur when external forces and/or deformation act on materials that are not naturally birefringent. Examples are stretched films and fibers, deformed glass and plastic lenses, and stressed polymer castings. Finally, flow birefringence can occur through induced alignment of materials, such as asymmetric polymers, that become ordered in the presence of fluid flow. Rod-shaped and plate-like molecules and macromolecular assemblies, such as high molecular weight DNA and detergents, are often utilized in flow birefringence studies.

In conclusion, birefringence is a phenomenon manifested by an asymmetry of properties that may be optical, electrical, mechanical, acoustical, or magnetic in nature. A wide spectrum of materials display varying degrees of birefringence, but the ones of specific interest to the optical microscopist are those specimens that are transparent and readily observed in polarized light.

Douglas B. Murphy - Department of Cell Biology and Microscope Facility, Johns Hopkins University School of Medicine, 725 N. Wolfe Street, 107 WBSB, Baltimore, Maryland 21205. Kenneth R. Spring - Scientific Consultant, Lusby, Maryland, 20657. Thomas J. Fellers and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
http://www.microscopyu.com/articles/polarized/birefringenceintro.html
Early history traces the development of the Somali people to an Arab sultanate, which was founded in the seventh century A.D. by Koreishite immigrants from Yemen. During the 15th and 16th centuries, Portuguese traders landed in present Somali territory and ruled several coastal towns. The sultan of Oman and Zanzibar subsequently took control of these towns and their surrounding territory.

Somalia's modern history began in the late 19th century, when various European powers began to trade and establish themselves in the area. The British East India Company's desire for unrestricted harbor facilities led to the conclusion of treaties with the sultan of Tajura as early as 1840. It was not until 1886, however, that the British gained control over northern Somalia through treaties with various Somali chiefs who were guaranteed British protection. British objectives centered on safeguarding trade links to the east and securing local sources of food and provisions for its coaling station in Aden. The boundary between Ethiopia and British Somaliland was established in 1897 through treaty negotiations between British negotiators and King Menelik.

During the first two decades of the 20th century, British rule was challenged by persistent attacks led by Mohamed Abdullah. A long series of intermittent engagements and truces ended in 1920, when British warplanes bombed Abdullah's stronghold at Taleex. Although Abdullah was defeated as much by rival Somali factions as by British forces, he was lauded as a popular hero and stands as a major figure of national identity to some Somalis.

In 1885, Italy obtained commercial advantages in the area from the sultan of Zanzibar, and in 1889 it concluded agreements with the sultans of Obbia and Aluula, who placed their territories under Italy's protection. Between 1897 and 1908, Italy made agreements with the Ethiopians and the British that marked out the boundaries of Italian Somaliland. The Italian Government assumed direct administration, giving the territory colonial status. Italian occupation gradually extended inland. In 1924, the Jubaland Province of Kenya, including the town and port of Kismayo, was ceded to Italy by the United Kingdom. The subjugation and occupation of the independent sultanates of Obbia and Mijertein, begun in 1925, were completed in 1927. In the late 1920s, Italian and Somali influence expanded into the Ogaden region of eastern Ethiopia. Continuing incursions climaxed in 1935, when Italian forces launched an offensive that led to the capture of Addis Ababa and the Italian annexation of Ethiopia in 1936.

Following Italy's declaration of war on the United Kingdom in June 1940, Italian troops overran British Somaliland and drove out the British garrison. In 1941, British forces began operations against the Italian East African Empire and quickly brought the greater part of Italian Somaliland under British control. From 1941 to 1950, while Somalia was under British military administration, a transition toward self-government was begun through the establishment of local courts, planning committees, and the Protectorate Advisory Council. In 1948, Britain turned the Ogaden and neighboring Somali territories over to Ethiopia. In Article 23 of the 1947 peace treaty, Italy renounced all rights and titles to Italian Somaliland. In accordance with treaty stipulations, on September 15, 1948, the Four Powers referred the question of the disposal of former Italian colonies to the UN General Assembly. 
On November 21, 1949, the General Assembly adopted a resolution recommending that Italian Somaliland be placed under an international trusteeship system for 10 years, with Italy as the administering authority, followed by independence for Italian Somaliland. In 1959, at the request of the Somali Government, the UN General Assembly advanced the date of independence from December 2 to July 1, 1960. Meanwhile, rapid progress toward self-government was being made in British Somaliland. Elections for the Legislative Assembly were held in February 1960, and one of the first acts of the new legislature was to request that the United Kingdom grant the area independence so that it could be united with Italian Somaliland when the latter became independent. The protectorate became independent on June 26, 1960; five days later, on July 1, it joined Italian Somaliland to form the Somali Republic. In June 1961, Somalia adopted its first national constitution in a countrywide referendum, which provided for a democratic state with a parliamentary form of government based on European models. During the early post-independence period, political parties reflected clan loyalties, which contributed to a basic split between the regional interests of the former British-controlled north and the Italian-controlled south. There also was substantial conflict between pro-Arab, pan-Somali militants intent on national unification with the Somali-inhabited territories in Ethiopia and Kenya and the "modernists," who wished to give priority to economic and social development and improving relations with other African countries. Gradually, the Somali Youth League, formed under British auspices in 1943, assumed a dominant position and succeeded in cutting across regional and clan loyalties. Under the leadership of Mohamed Ibrahim Egal, prime minister from 1967 to 1969, Somalia greatly improved its relations with Kenya and Ethiopia. The process of party-based constitutional democracy came to an abrupt end, however, on October 21, 1969, when the army and police, led by Maj. Gen. Mohamed Siad Barre, seized power in a bloodless coup. Following the coup, executive and legislative power was vested in the 20-member Supreme Revolutionary Council (SRC), headed by Maj. Gen. Siad Barre as president. The SRC pursued a course of "scientific socialism" that reflected both ideological and economic dependence on the Soviet Union. The government instituted a national security service, centralized control over information, and initiated a number of grassroots development projects. Perhaps the most impressive success was a crash program that introduced an orthography for the Somali language and brought literacy to a substantial percentage of the population. The SRC became increasingly radical in foreign affairs, and in 1974, Somalia and the Soviet Union concluded a treaty of friendship and cooperation. As early as 1972, tensions began increasing along the Somali-Ethiopian border; these tensions heightened after the accession to power in Ethiopia in 1973 of the Mengistu Hailemariam regime, which turned increasingly toward the Soviet Union. In the mid-1970s, the Western Somali Liberation Front (WSLF) began guerrilla operations in the Ogaden region of Ethiopia. Fighting increased, and in July 1977, the Somali National Army (SNA) crossed into the Ogaden to support the insurgents. The SNA moved quickly toward Harer, Jijiga, and Dire Dawa, the principal cities of the region. 
Subsequently, the Soviet Union, Somalia's most important source of arms, embargoed weapons shipments to Somalia. The Soviets switched their full support to Ethiopia, with massive infusions of Soviet arms and 10,000-15,000 Cuban troops. In November 1977, President Siad Barre expelled all Soviet advisers and abrogated the friendship agreement with the U.S.S.R. In March 1978, Somali forces retreated into Somalia; however, the WSLF continues to carry out sporadic but greatly reduced guerrilla activity in the Ogaden. Such activities also were subsequently undertaken by another dissident group, the Ogaden National Liberation Front (ONLF). Following the 1977 Ogaden war, President Barre looked to the West for international support, military equipment, and economic aid. The United States and other Western countries traditionally were reluctant to provide arms because of the Somali Government's support for insurgency in Ethiopia. In 1978, the United States reopened the U.S. Agency for International Development mission in Somalia. Two years later, an agreement was concluded that gave U.S. forces access to military facilities in Somalia. In the summer of 1982, Ethiopian forces invaded Somalia along the central border, and the United States provided two emergency airlifts to help Somalia defend its territorial integrity. From 1982 to 1990 the United States viewed Somalia as a partner in defense. Somali officers of the National Armed Forces were trained in U.S. military schools in civilian as well as military subjects. Within Somalia, Siad Barre's regime confronted insurgencies in the northeast and northwest, whose aim was to overthrow his government. By 1988, Siad Barre was openly at war with sectors of his nation. At the President's order, aircraft from the Somali National Air Force bombed the cities in the northwest province, attacking civilian as well as insurgent targets. The warfare in the northwest sped up the decay already evident elsewhere in the republic. Economic crisis, brought on by the cost of anti-insurgency activities, caused further hardship as Siad Barre and his cronies looted the national treasury. By 1990, the insurgency in the northwest was largely successful. The army dissolved into competing armed groups loyal to former commanders or to clan-tribal leaders. The economy was in shambles, and hundreds of thousands of Somalis fled their homes. In 1991, Siad Barre and forces loyal to him fled the capital; he later died in exile in Nigeria. In the same year, Somaliland declared itself independent of the rest of Somalia, with its capital in Hargeisa. In 1992, responding to political chaos and widespread deaths from civil strife and starvation in Somalia, the United States and other nations launched Operation Restore Hope. Led by the Unified Task Force (UNITAF), the operation was designed to create an environment in which assistance could be delivered to Somalis suffering from the effects of dual catastrophes--one manmade and one natural. UNITAF was followed by the United Nations Operation in Somalia (UNOSOM). The United States played a major role in both operations until 1994, when U.S. forces withdrew. The prevailing chaos in much of Somalia after 1991 contributed to growing influence by various Islamic groups, including al-Tabliq, al-Islah (supported by Saudi Arabia), and Al-Ittihad Al-Islami (Islamic Unity). These groups, which are among the main non-clan-based forces in Somalia, share the goal of establishing an Islamic state. 
They differ in their approach; in particular, Al-Ittihad supports the use of violence to achieve that goal and has claimed responsibility for terrorist acts. In the mid-1990s, Al-Ittihad came to dominate territory in Puntland as well as central Somalia near Gedo. It was forcibly expelled from these localities by Puntland forces as well as Ethiopian attacks in the Gedo region. Since that time, Al-Ittihad has adopted a longer term strategy based on integration into local communities and establishment of Islamic schools, courts, and relief centers. After the attack on the United States of September 11, 2001, Somalia gained greater international attention as a possible base for terrorism--a concern that became the primary element in U.S. policy toward Somalia. The United States and other members of the anti-terrorism coalition examined a variety of short- and long-term measures designed to cope with the threat of terrorism in and emanating from Somalia. Economic sanctions were applied to Al-Ittihad and to the Al-Barakaat group of companies, based in Dubai, which conducted currency exchanges and remittances transfers in Somalia. The United Nations also took an increased interest in Somalia, including proposals for an increased UN presence and for strengthening a 1992 arms embargo. Somalia1 has been without a central government since its last president, dictator Mohamed Siad Barre, fled the country in 1991. Subsequent fighting among rival faction leaders resulted in the killing, displacement, and starvation of thousands of persons and led the U.N. to intervene militarily in 1992. Following the U.N. intervention, periodic attempts at national reconciliation were made, but they did not succeed. In September 1999, during a speech before the U.N. General Assembly, Djiboutian President Ismail Omar Guelleh announced an initiative to facilitate reconciliation under the auspices of the Inter-Governmental Authority for Development (IGAD). In March 2000, formal reconciliation efforts began with a series of small focus group meetings of various elements of Somali society in Djibouti. In May 2000, in Arta, Djibouti, delegates representing all clans and a wide spectrum of Somali society were selected to participate in a "Conference for National Peace and Reconciliation in Somalia." More than 900 delegates, including representatives of nongovernmental organizations (NGO's), attended the Conference. The Conference adopted a charter for a 3-year Transitional National Government (TNG) and selected a 245-member Transitional National Assembly (TNA), which included 24 members of Somali minority groups and 25 women. In August 2000, the Assembly elected Abdiqassim Salad Hassan as Transitional President. Ali Khalif Gallayr was named Prime Minister in October 2000, and he appointed the 25-member Cabinet. Administrations in the northwest (Somaliland) and northeast ("Puntland") areas of the country do not recognize the results of the Djibouti Conference, nor do several Mogadishu-based factional leaders. In October the TNA passed a vote of no confidence in the TNG, and Gallayr was dismissed as Prime Minister. In November Abdiqassim appointed Hassan Abshir Farah as the new Prime Minister. Serious interclan fighting continued to occur in parts of the country, notably in the central regions of Hiran and Middle Shabelle, the southern regions of Gedo and Lower Shabelle, and in the Middle Juba and Lower Juba regions. No group controls more than a fraction of the country's territory. There is no national judicial system. 
Leaders in the northeast proclaimed the formation of the Puntland state in 1998. Puntland's leader, Abdullahi Yusuf, publicly announced that he did not plan to break away from the remainder of the country, but the Puntland Administration did not participate in the Djibouti Conference or recognize the TNG that emerged from it. In July Yusuf announced his refusal to abide by the Constitution and step down. This led to a confrontation with Chief Justice Yusuf Haji Nur, who claimed interim presidential powers pending elections. In November traditional elders elected Jama Ali Jama as the new Puntland President. Yusuf refused to accept the elders' decision, and in December he seized by force the town of Garowe, reportedly with Ethiopian support. Jama fled to Bosasso. Both Yusuf and Jama continued to claim the presidency, and there were continued efforts to resolve the conflict at year's end 2001. A ban on political parties in Puntland remained in place. In the northwest, the "Republic of Somaliland" continued to proclaim its independence within the borders of former British Somaliland. Somaliland has sought international recognition since 1991 without success. Somaliland's government includes a parliament, a functioning civil court system, executive departments organized as ministries, six regional governors, and municipal authorities in major towns. During the year 2001, 97 percent of voters in a referendum voted for independence for Somaliland and for a political party system. Presidential and parliamentary elections were scheduled to be held in February 2002; however, President Egal requested and Parliament granted a 1-year extension for the next elections. Somalia has a long history of internal instability; in some instances, clan feuds have lasted more than a century. Most of this turmoil has been associated with disagreements and factionalism between and among the major branches of the Somali lineage system, which includes pastoral nomads such as the Dir, Daarood, Isaaq, and Hawiye, and agriculturalists such as the Digil and Rahanwayn. In more recent times, these historical animosities have expressed themselves through the emergence of clan-based dissident and insurgent movements. Most of these groups grew to oppose Siad Barre's regime because the president refused to make political reforms, unleashed a reign of terror against the country's citizenry, and concentrated power in the hands of his Mareehaan subclan (the Mareehaan belonged to the Daarood clan). After Siad Barre fled Mogadishu in January 1991, the Somali nation state collapsed, largely along warring clan lines. In the aftermath of the 1969 coup, the central government acquired control of all legislative, administrative, and judicial functions. The only legally permitted party was the Somali Revolutionary Socialist Party (SRSP). In April 1970, Siad Barre authorized the creation of National Security Courts (NSCs), which shortly thereafter tried approximately sixty people: leaders of the previous government, businessmen, lawyers, and senior military personnel who had failed to support the coup. In September 1970, the Supreme Revolutionary Council (SRC) proclaimed that any person who harmed the nation's unity, peace, or sovereignty could be sentenced to death. The government also promised to punish anyone who spread false propaganda against Siad Barre's regime. Until the early 1980s, the Siad Barre regime generally shunned capital punishment in favor of imprisonment and reeducation of actual, suspected, or potential opponents. 
The earlier parliamentary government had been able to hold people without trial up to ninety days during a state of emergency, but the military government removed most legal restrictions on preventive detention. After the coup, a local revolutionary council or the National Security Service (NSS) could detain individuals regarded as dangerous to peace, order, good government, or the aims and spirit of the revolution. Additionally, regional governors could order the search and arrest of persons suspected of a crime or of activities considered threatening to public order and security, and could requisition property or services without compensation. In 1974 the government began to require all civil servants to sign statements of intent to abide by security regulations. Furthermore, any contact between foreigners and Somali citizens had to be reported to the Ministry of Foreign Affairs. By the late 1970s, most Somalis were ignoring this latter regulation. The Somali government became more repressive after an unsuccessful 1971 coup. Officials maintained that the coup attempt by some SRC members had sought to protect the interests of the trading bourgeoisie and the tribal structure. Many expected that the conspirators would receive clemency. Instead, the government executed them. Many Somalis found this act inconsistent with Islamic principles and as a consequence turned against Siad Barre's regime. During its first years in power, the SRC sought to bolster nationalism by undermining traditional Somali allegiance to Islamic religious leaders and clan groups. Although it tried to avoid entirely alienating religious leaders, the government restricted their involvement in politics. During the early 1970s, some Islamic leaders affirmed that Islam could never coexist with scientific socialism; however, Siad Barre claimed that the two concepts were compatible because Islam propagated a classless society based on egalitarianism. In the mid-1970s, the government tried to eliminate a rallying point for opposition by substituting allegiance to the nation for traditional allegiance to family and clan. Toward this end, the authorities stressed individual responsibility for all offenses, thereby undermining the concept of collective responsibility that existed in traditional society and served as the basis of diya-paying groups. The government also abolished traditional clan leadership responsibilities and titles such as sultan and shaykh. By the late 1980s, it was evident that Siad Barre had failed to create a sense of Somali nationalism. Moreover, he had been unable to destroy the family and clan loyalties that continued to govern the lives of most Somalis. As antigovernment activities escalated, Siad Barre increasingly used force and terror against his opponents. This cycle of violence further isolated his regime, caused dissent within the SNA, and eventually precipitated the collapse of his government. From 1969 until the mid-1970s, Siad Barre's authoritarian regime enjoyed a degree of popular support, largely because it acted with a decisiveness not displayed by the civilian governments of the 1960s. Even the 1971 coup attempt failed to affect the stability of the government. However, Somalia's defeat in the Ogaden War signaled the beginning of a decline in Siad Barre's popularity that culminated in his January 1991 fall from power. 
Before the war, many Somalis had criticized Siad Barre for not trying to reincorporate the Ogaden into Somalia immediately after Ethiopian emperor Haile Selassie's death in 1975. The government was unable to stifle this criticism largely because the Somali claim to the Ogaden had overwhelming national support. The regime's commitment of regular troops to the Ogaden proved highly popular, as did Siad Barre's expulsion of the Soviet advisers, who had been resented by most Somalis. However, Somalia's defeat in the Ogaden War refocused criticism on Siad Barre. After the spring 1978 retreat toward Hargeysa, Siad Barre met with his generals to discuss the battlefield situation, and ordered the execution of six of them for activities against the state. This action failed to quell SNA discontent over Siad Barre's handling of the war with Ethiopia. On April 9, 1978, a group of military officers (mostly Majeerteen) attempted a coup d'état. Government security forces crushed the plot within hours and subsequently arrested seventy-four suspected conspirators. After a month-long series of trials, the authorities imprisoned thirty-six people associated with the coup and executed another seventeen. After the war, it was evident that the ruling alliance among the Mareehaan, Ogaden, and Dulbahante clans had been broken. The Ogaden--the clan of Siad Barre's mother, which had the most direct stake in the war--broke with the regime over the president's wartime leadership. To prevent further challenges to his rule, Siad Barre placed members of his own clan in important positions in the government, the armed forces, the security services, and other state agencies. Throughout the late 1970s, growing discontent with the regime's policies and personalities prompted the defection of numerous government officials and the establishment of several insurgent movements. Because unauthorized political activity was prohibited, these organizations were based abroad. The best known was the Somali Salvation Front (SSF), which operated from Ethiopia. The SSF had absorbed its predecessor, the Somali Democratic Action Front (SODAF), which had been formed in Rome in 1976. Former minister of justice Usmaan Nur Ali led the Majeerteen-based SODAF. Lieutenant Colonel Abdillaahi Yuusuf Ahmad, a survivor of the 1978 coup attempt, commanded the SSF. Other prominent SSF personalities included former minister of education Hasan Ali Mirreh and former ambassador Muse Islan Faarah. The SSF, which received assistance from Ethiopia and Libya, claimed to command a guerrilla force numbering in the thousands. Ethiopia placed a radio transmitter at the SSF's disposal from which Radio Kulmis (unity) beamed anti-Siad Barre invective to listeners in Somalia. Although it launched a low- intensity sabotage campaign in 1981, the SSF lacked the capabilities to sustain effective guerrilla operations against the SNA. The SSF's weakness derived from its limited potential as a rallying point for opposition to the government. Although the SSF embraced no ideology or political philosophy other than hostility to Siad Barre, its nationalist appeal was undermined by its reliance on Ethiopian support. The SSF claimed to encompass a range of opposition forces, but its leading figures belonged with few exceptions to the Majeerteen clan. In October 1981, the SSF merged with the radical-left Somali Workers Party (SWP) and the Democratic Front for the Liberation of Somalia (DFLS) to form the Somali Salvation Democratic Front (SSDF). 
The SWP and DFLS, both based in Aden (then the capital of the People's Democratic Republic of Yemen--South Yemen), had included some former SRSP Central Committee members who faulted Siad Barre for compromising Somalia's revolutionary goals. An eleven-man committee led the SSDF. Yuusuf Ahmad, a former SNA officer and head of the SDF acted as chairman; former SWP leader Idris Jaama Husseen served as vice chairman; Abdirahman Aidid Ahmad, former chairman of the SRSP Ideology Bureau and founding father of the DFLS, was secretary for information. The SSDF promised to intensify the military and political struggles against the Siad Barre regime, which was said to have destroyed Somali unity and surrendered to United States imperialism. Like the SSF, the SSDF suffered from weak organization, a close identification with its Ethiopian and Libyan benefactors, and its reputation as a Majeerteen party. Despite its shortcomings, the SSDF played a key role in fighting between Somalia and Ethiopia in the summer of 1982. After a SNA force infiltrated the Ogaden, joined with the WSLF and attacked an Ethiopian army unit outside Shilabo, about 150 kilometers northwest of Beledweyne, Ethiopia retaliated by launching an operation against Somalia. On June 30, 1982, Ethiopian army units, together with SSDF guerrillas, struck at several points along Ethiopia's southern border with Somalia. They crushed the SNA unit in Balumbale and then occupied that village. In August 1982, the Ethiopian/SSDF force took the village of Goldogob, about 50 kiloeters northwest of Galcaio. After the United States provided emergency military assistance to Somalia, the Ethiopian attacks ceased. However, the Ethiopian/SSDF units remained in Balumbale and Goldogob, which Addis Ababa maintained were part of Ethiopia that had been liberated by the Ethiopian army. The SSDF disputed the Ethiopian claim, causing a power struggle that eventually resulted in the destruction of the SSDF's leadership. On October 12, 1985, Ethiopian authorities arrested Ahmad and six of his lieutenants after they repeatedly indicated that Balumbale and Goldogob were part of Somalia. The Ethiopian government justified the arrests by saying that Ahmad had refused to comply with a SSDF Central Committee decision relieving him as chairman. Mahammad Abshir, a party bureaucrat, then assumed command of the SSDF. Under his leadership, the SSDF became militarily moribund, primarily because of poor relations with Addis Ababa. In August 1986, the Ethiopian army attacked SSDF units, then launched a war against the movement, and finally jailed its remaining leaders. For the next several years, the SSDF existed more in name than in fact. In late 1990, however, after Ethiopia released former SSDF leader Ahmad, the movement reemerged as a fighting force in Somalia, albeit to a far lesser degree than in the early 1980s. In April 1981, a group of Isaaq emigrés living in London formed the Somali National Movement (SNM), which subsequently became the strongest of Somalia's various insurgent movements. According to its spokesmen, the rebels wanted to overthrow Siad Barre's dictatorship. Additionally, the SNM advocated a mixed economy and a neutral foreign policy, rejecting alignment with the Soviet Union or the United States and calling for the dismantling of all foreign military bases in the region. In the late 1980s, the SNM adopted a pro-Western foreign policy and favored United States involvement in a post-Siad Barre Somalia. 
Other SNM objectives included establishment of a representative democracy that would guarantee human rights and freedom of speech. Eventually, the SNM moved its headquarters from London to Addis Ababa to obtain Ethiopian military assistance, which initially was limited to old Soviet small arms. In October 1981, the SNM rebels elected Ahmad Mahammad Culaid and Ahmad Ismaaiil Abdi as chairman and secretary general, respectively, of the movement. Culaid had participated in northern Somali politics until 1975, when he went into exile in Djibouti and then in Saudi Arabia. Abdi had been politically active in the city of Burao in the 1950s, and, from 1965 to 1967, had served as the Somali government's minister of planning. After the authorities jailed him in 1971 for antigovernment activities, Abdi left Somalia and lived in East Africa and Saudi Arabia. The rebels also elected an eight-man executive committee to oversee the SNM's military and political activities. On January 2, 1982, the SNM launched its first military operation against the Somali government. Operating from Ethiopian bases, commando units attacked Mandera Prison near Berbera and freed a group of northern dissidents. According to the SNM, the assault liberated more than 700 political prisoners; subsequent independent estimates indicated that only about a dozen government opponents escaped. At the same time, other commando units raided the Cadaadle armory near Berbera and escaped with an undetermined amount of arms and ammunition. Mogadishu responded to the SNM attacks by declaring a state of emergency, imposing a curfew, closing gasoline stations to civilian vehicles, banning movement in or out of northern Somalia, and launching a search for the Mandera prisoners (most of whom were never found). On January 8, 1982, the Somali government also closed its border with Djibouti to prevent the rebels from fleeing Somalia. These actions failed to stop SNM military activities. In October 1982, the SNM tried to increase pressure against the Siad Barre regime by forming a joint military committee with the SSDF. Apart from issuing antigovernment statements, the two insurgent groups started broadcasting from the former Radio Kulmis station, now known as Radio Halgan (struggle). Despite this political cooperation, the SNM and SSDF failed to agree on a common strategy against Mogadishu. As a result, the alliance languished. In February 1983, Siad Barre visited northern Somalia in a campaign to discredit the SNM. Among other things, he ordered the release of numerous civil servants and businessmen who had been arrested for antigovernment activities, lifted the state of emergency, and announced an amnesty for Somali exiles who wanted to return home. These tactics put the rebels on the political defensive for several months. In November 1983, the SNM Central Committee sought to regain the initiative by holding an emergency meeting to formulate a more aggressive strategy. One outcome was that the military wing--headed by Abdulqaadir Kosar Abdi, formerly of the SNA--assumed control of the Central Committee by ousting the civilian membership from all positions of power. However, in July 1984, at the Fourth SNM Congress, held in Ethiopia, the civilians regained control of the leadership. The delegates also elected Ahmad Mahammad Mahamuud "Silanyo" SNM chairman and reasserted their intention to revive the alliance with the SSDF. After the Fourth SNM Congress adjourned, military activity in northern Somalia increased. 
SNM commandos attacked about a dozen government military posts in the vicinity of Hargeysa, Burao, and Berbera. According to the SNM, the SNA responded by shooting 300 people at a demonstration in Burao, sentencing seven youths to death for sedition, and arresting an unknown number of rebel sympathizers. In January 1985, the government executed twenty- eight people in retaliation for antigovernment activity. Between June 1985 and February 1986, the SNM claimed to have carried out thirty operations against government forces in northern Somalia. In addition, the SNM reported that it had killed 476 government soldiers and wounded 263, and had captured eleven vehicles and had destroyed another twenty-two, while losing only 38 men and two vehicles. Although many independent observers said these figures were exaggerated, SNM operations during the 1985-86 campaign forced Siad Barre to mount an international effort to cut off foreign aid to the rebels. This initiative included reestablishment of diplomatic relations with Libya in exchange for Tripoli's promise to stop supporting the SNM. Despite efforts to isolate the rebels, the SNM continued military operations in northern Somalia. Between July and September 1987, the SNM initiated approximately thirty attacks, including one on the northern capital, Hargeysa; none of these, however, weakened the government's control of northern Somalia. A more dramatic event occurred when a SNM unit kidnapped a Médecins Sans Frontières medical aid team of ten Frenchmen and one Djiboutian to draw the world's attention to Mogadishu's policy of impressing men from refugee camps into the SNA. After ten days, the SNM released the hostages unconditionally. Siad Barre responded to these activities by instituting harsh security measures throughout northern Somalia. The government also evicted suspected pro-SNM nomad communities from the Somali- Ethiopian border region. These measures failed to contain the SNM. By February 1988, the rebels had captured three villages around Togochale, a refugee camp near the northwestern Somali- Ethiopian border. Following the rebel successes of 1987-88, Somali-Ethiopian relations began to improve. On March 19, 1988, Siad Barre and Ethiopian president Mengistu Haile Mariam met in Djibouti to discuss ways of reducing tension between the two countries. Although little was accomplished, the two agreed to hold further talks. At the end of March 1988, the Ethiopian minister of foreign affairs, Berhanu Bayih, arrived in Mogadishu for discussions with a group of Somali officials, headed by General Ahmad Mahamuud Faarah. On April 4, 1988, the two presidents signed a joint communiqué in which they agreed to restore diplomatic relations, exchange prisoners of war, start a mutual withdrawal of troops from the border area, and end subversive activities and hostile propaganda against each other. Faced with a cutoff of Ethiopian military assistance, the SNM had to prove its ability to operate as an independent organization. Therefore, in late May 1988 SNM units moved out of their Ethiopian base camps and launched a major offensive in northern Somalia. The rebels temporarily occupied the provincial capitals of Burao and Hargeysa. These early successes bolstered the SNM's popular support, as thousands of disaffected Isaaq clan members and SNA deserters joined the rebel ranks. Over the next few years, the SNM took control of almost all of northwestern Somalia and extended its area of operations about fifty kilometers east of Erigavo. 
However, the SNM did not gain control of the region's major cities (i.e., Berbera, Hargeysa, Burao, and Boorama), but succeeded only in laying siege to them. With Ethiopian military assistance no longer a factor, the SNM's success depended on its ability to capture weapons from the SNA. The rebels seized numerous vehicles such as Toyota Land Cruisers from government forces and subsequently equipped them with light and medium weapons such as 12.7mm and 14.5mm machine guns, 106mm recoilless rifles, and BM-21 rocket launchers. The SNM possessed antitank weapons such as Soviet B-10 tubes and RPG- 7s. For air defense the rebels operated Soviet 30mm and 23mm guns, several dozen Soviet ZU23 2s, and Czech-made twin-mounted 30mm ZU30 2s. The SNM also maintained a small fleet of armed speed boats that operated from Maydh, fifty kilometers northwest of Erigavo, and Xiis, a little west of Maydh. Small arms included 120mm mortars and various assault rifles, such as AK-47s, M-16s, and G-3s. Despite these armaments, rebel operations, especially against the region's major cities, suffered because of an inadequate logistics system and a lack of artillery, mine- clearing equipment, ammunition, and communications gear. To weaken Siad Barre's regime further, the SNM encouraged the formation of other clan-based insurgent movements and provided them with political and military support. In particular, the SNM maintained close relations with the United Somali Congress (USC), which was active in central Somalia, and the Somali Patriotic Movement (SPM), which operated in southern Somalia. Both these groups sought to overthrow Siad Barre's regime and establish a democratic form of government. The USC, a Hawiye organization founded in 1989, had suffered from factionalism based on subclan rivalries since its creation. General Mahammad Faarah Aidid commanded the Habar Gidir clan, and Ali Mahdi Mahammad headed the Abgaal clan. The SPM emerged in March 1989, after a group of Ogaden officers, led by Umar Jess, deserted the SNA and took up arms against Siad Barre. Like the USC, the SPM experienced a division among its ranks. The moderates, under Jess, favored an alliance with the SNM and USC and believed that Somalia should abandon its claims to the Ogaden. SPM hardliners wanted to recapture the Ogaden and favored a stronger military presence along the Somali-Ethiopian border. On November 19, 1989, the SNM and SPM issued a joint communiqué announcing the adoption of a "unified stance on internal and external political policy." On September 12, 1990, the SNM concluded a similar agreement with the USC. Then, on November 24, 1990, the SNM announced that it had united with the SPM and the USC to pursue a common military strategy against the SNA. Actually, the SNM had concluded the unification agreement with Aidid, which widened the rift between the two USC factions. By the beginning of 1991, all three of the major rebel organizations had made significant military progress. The SNM had all but taken control of northern Somalia by capturing the towns of Hargeysa, Berbera, Burao, and Erigavo. On January 26, 1991, the USC stormed the presidential palace in Mogadishu, thereby establishing its control over the capital. The SPM succeeded in overrunning several government outposts in southern Somalia. The SNM-USC-SPM unification agreement failed to last after Siad Barre fled Mogadishu. On January 26, 1991, the USC formed an interim government, which the SNM refused to recognize. 
On May 18, 1991, the SNM declared the independence of the Republic of Somaliland. The USC interim government opposed this declaration, arguing instead for a unified Somalia. Apart from these political disagreements, fighting broke out between and within the USC and SPM. The SNM also sought to establish its control over northern Somalia by pacifying clans such as the Gadabursi and the Dulbahante. To make matters worse, guerrilla groups proliferated; by late 1991, numerous movements vied for political power, including the United Somali Front (Iise), Somali Democratic Alliance (Gadabursi), United Somali Party (Dulbahante), Somali Democratic Movement (Rahanwayn), and Somali National Front (Mareehaan). The collapse of the nation state system and the emergence of clan-based guerrilla movements and militias that became governing authorities persuaded most Western observers that national reconciliation would be a long and difficult process. The country's population is estimated to be between 7 and 8 million. The country is very poor with a market-based economy in which most of the work force is employed as subsistence farmers, agro-pastoralists, or pastoralists. The principal exports are livestock and charcoal; there is very little industry. Insecurity and bad weather continued to affect the country's already extremely poor economic situation. A livestock ban, lifted in 2000, was reinstituted by Saudi Arabia because of fears of Rift Valley fever and reportedly because of Saudi political considerations. Livestock is the most important component of the Somali economy, and the ban has harmed further an already devastated economy. The country's economic problems continued to cause serious unemployment and led to pockets of malnutrition in southern areas of the country. Most Somalis are Sunni Muslims. (Less than 1 percent of ethnic Somalis are Christians.) Loyalty to Islam reinforces distinctions that set Somalis apart from their immediate African neighbors, most of whom are either Christians (particularly the Amhara and others of Ethiopia) or adherents of indigenous African faiths. The Islamic ideal is a society organized to implement Muslim precepts in which no distinction exists between the secular and the religious spheres. Among Somalis this ideal had been approximated less fully in the north than among some groups in the settled regions of the south where religious leaders were at one time an integral part of the social and political structure. Among nomads, the exigencies of pastoral life gave greater weight to the warrior's role, and religious leaders were expected to remain aloof from political matters. The role of religious functionaries began to shrink in the 1950s and 1960s as some of their legal and educational powers and responsibilities were transferred to secular authorities. The position of religious leaders changed substantially after the 1969 revolution and the introduction of scientific socialism. Siad Barre insisted that his version of socialism was compatible with Quranic principles, and he condemned atheism. Religious leaders, however, were warned not to meddle in politics. The new government instituted legal changes that some religious figures saw as contrary to Islamic precepts. The regime reacted sharply to criticism, executing some of the protesters. Subsequently, religious leaders seemed to accommodate themselves to the government. Somali Islam rendered the world intelligible to Somalis and made their lives more bearable in a harsh land. 
Amidst the interclan violence that characterized life in the early 1990s, Somalis naturally sought comfort in their faith to make sense of their national disaster. The traditional response of practicing Muslims to social trauma is to explain it in terms of a perceived sin that has caused society to stray from the "straight path of truth" and consequently to receive God's punishment. The way to regain God's favor is to repent collectively and rededicate society in accordance with Allah's divine precepts. On the basis of these beliefs, a Somali brand of messianic Islamism (sometimes seen as fundamentalism) sprang up to fill the vacuum created by the collapse of the state. In the disintegrated Somali world of early 1992, Islamism appeared to be largely confined to Bender Cassim, a coastal town in Majeerteen country. For instance, a Yugoslav doctor who was a member of a United Nations team sent to aid the wounded was gunned down by masked assailants there in November 1991. Reportedly, the assassins belonged to an underground Islamist movement whose adherents wished to purify the country of "infidel" influence. The Somali Penal Code, promulgated in early 1962, became effective on April 3, 1964. It was Somalia's first codification of laws designed to protect the individual and to ensure the equitable administration of justice. The basis of the code was the constitutional premise that the law has supremacy over the state and its citizens. The code placed responsibility for determining offenses and punishments on the written law and the judicial system and excluded many penal sanctions formerly observed in unwritten customary law. The authorities who drafted the code, however, did not disregard the people's past reliance on traditional rules and sanctions. The code contained some of the authority expressed by customary law and by Islamic, sharia, or religious law. The penal laws applied to all nationals, foreigners, and stateless persons living in Somalia. Courts ruled out ignorance of the law as a justification for breaking the law or an excuse for committing an offense, but considered extenuations and mitigating factors in individual cases. The penal laws prohibited collective punishment, which was contrary to the traditional sanctions of diya-paying groups. The penal laws stipulated that if the offense constituted a violation of the code, the perpetrator had committed an unlawful act against the state and was subject to its sanctions. Judicial action under the code, however, did not rule out the possibility of additional redress in the form of diya through civil action in the courts. Siad Barre's regime attacked this tolerance of diya, and forbade its practice entirely in 1974. Under the Somali penal code, to be criminally liable a person must have committed an act or have been guilty of an omission that caused harm or danger to the person or property of another or to the state. Further, the offense must have been committed willfully or as the result of negligence, imprudence, or illegal behavior. Under Somali penal law, the courts assumed the accused to be innocent until proved guilty beyond reasonable doubt. In criminal prosecution, the burden of proof rested with the state. Penal laws classed offenses as either crimes or contraventions, the latter being legal violations without criminal intent. Death by shooting was the only sentence for serious offenses such as crimes against the state and murder. 
The penal law usually prescribed maximum and minimum punishments but left the actual sentence to the judge's discretion. The penal laws comprised three categories: the first dealt with general principles of jurisprudence; the second defined criminal offenses and prescribed specified punishments; and the third contained sixty-one articles that regulated contraventions of public order, safety, morality, and health. The penal laws took into consideration the role of punishment in restoring the offender to a useful place in society.

The Criminal Procedure Code governed matters associated with arrest and trial. The code, which conformed to British common law, prescribed the kinds and jurisdictions of criminal courts, identified the functions and responsibilities of judicial officials, outlined the rules of evidence, and regulated the conduct of trials. Normally, a person could be arrested only if caught in the act of committing an offense or upon issuance of a warrant by the proper judicial authority. The code recognized the writ of habeas corpus, and those arrested had the right to appear before a judge within twenty-four hours.

As government opposition proliferated in the late 1970s and early 1980s, the Siad Barre regime increasingly subverted or ignored Somalia's legal system. By the late 1980s, Somalia had become a police state, with citizens often falling afoul of the authorities for solely political reasons. Pressure by international human rights organizations such as Amnesty International and Africa Watch failed to slow Somalia's descent into lawlessness. After Siad Barre fell from power in January 1991, the new authorities promised to restore equity to the country's legal system. Given the many political, economic, and social problems confronting post-Siad Barre Somalia, however, it appeared unlikely that this goal would be achieved soon.

INCIDENCE OF CRIME

Somalia has provided data for neither United Nations nor INTERPOL surveys of crime; however, an estimate of crime conditions is given in the United States State Department's Consular Information Sheet, according to which: "The Department of State warns U.S. citizens against all travel to Somalia. Inter-clan and inter-factional fighting can flare up with little warning, and kidnapping, murder, and other threats to U.S. citizens and other foreigners can occur unpredictably in many regions. While the self-declared "Republic of Somaliland" in northern Somalia has been relatively peaceful, the Sanaag and Sool regions in eastern Somaliland, bordering on Puntland (northeastern Somalia), are subject to insecurity due to potential inter-clan fighting. In addition, the Mogadishu area, the Puntland region in northern Somalia, and the districts of Gedo and Bay (especially the vicinity of Baidoa) in the south have experienced serious fighting in recent months. Territorial control in the Mogadishu area is divided among numerous groups; lines of control are unclear and frequently shift, making movement within this area extremely hazardous. … incidents such as armed banditry and road assaults may occur. In addition, there have been reports of general crime and rock-throwing against aid workers outside of Hargeisa. Civil unrest persists in the rest of the country. U.S. citizens should not travel to areas other than Somaliland. With the exception of Somaliland, crime is an extension of the general state of insecurity. Serious and violent crimes are very common. Kidnapping and robbery are a particular problem in Mogadishu and other areas in the south. U.S. 
citizens are urged to use caution when sailing near the coast of Somalia. Merchant vessels, fishing boats and pleasure craft alike risk seizure and their crews being held for ransom, especially in the waters near the Horn of Africa and the Kenyan border. At independence, Somalia had four distinct legal traditions: English common law, Italian law, Islamic sharia or religious law, and Somali customary law (traditional rulers and sanctions). The challenge after 1960 was to meld this diverse legal inheritance into one system. During the 1960s, a uniform penal code, a code of criminal court procedures, and a standardized judicial organization were introduced. The Italian system of basing judicial decisions on the application and interpretation of the legal code was retained. The courts were enjoined, however, to apply English common law and doctrines of equity in matters not governed by legislation. In Italian Somaliland, observance of the sharia had been more common than in British Somaliland, where the application of Islamic law had been limited to cases pertaining to marriage, divorce, family disputes, and inheritance. Qadis (Muslim judges) in British Somaliland also adjudicated customary law in cases such as land tenure disputes and disagreements over the payment of diya or blood compensation. In Italian Somaliland, however, the sharia courts had also settled civil and minor penal matters, and Muslim plaintiffs had a choice of appearing before a secular judge or a qadi. After independence the differences between the two regions were resolved by making the sharia applicable in all civil matters if the dispute arose under that law. Somali customary law was retained for optional application in such matters as land tenure, water and grazing rights, and the payment of diya. The military junta suspended the constitution of 1961 when it took power in 1969, but it initially respected other sources of law. In 1973 the Siad Barre regime introduced a unified civil code. Its provisions pertaining to inheritance, personal contracts, and water and grazing rights sharply curtailed both the sharia and Somali customary law. Siad Barre's determination to limit the influence of the country's clans was reflected in sections of the code that abolished traditional clan and lineage rights over land, water resources, and grazing. In addition, the new civil code restricted the payment of diya as compensation for death or injury to the victim or close relatives rather than to an entire diya-paying group. A subsequent amendment prohibited the payment of diya entirely. The attorney general, who was appointed by the minister of justice, was responsible for the observance of the law and prosecution of criminal matters. The attorney general had ten deputies in the capital and several other deputies in the rest of the country. Outside of Mogadishu, the deputies of the attorney general had their offices at the regional and district courts. Under the Siad Barre regime, several police and intelligence organizations were responsible for maintaining public order, controlling crime, and protecting the government against domestic threats. These included the Somali Police Force (SPF), the People's Militia, the NSS, and a number of other intelligencegathering operations, most of which were headed by members of the president's family. After Siad Barre's downfall, these units were reorganized or abolished. 
The Somali Police Force (SPF) grew out of police forces employed by the British and Italians to maintain peace during the colonial period. Both European powers used Somalis as armed constables in rural areas. Somalis eventually staffed the lower ranks of the police forces, and Europeans served as officers. The colonial forces produced the senior officers and commanders-- including Siad Barre--who led the SPF and the army after independence. In 1884 the British formed an armed constabulary to police the northern coast. In 1910 the British created the Somaliland Coastal Police, and in 1912 they established the Somaliland Camel Constabulary to police the interior. In 1926 the colonial authorities formed the Somaliland Police Force. Commanded by British officers, the force included Somalis in its lower ranks. Armed rural constabulary (illalo) supported this force by bringing offenders to court, guarding prisoners, patrolling townships, and accompanying nomadic tribesmen over grazing areas. The Italians initially relied on military forces to maintain public order in their colony. In 1914 the authorities established a coastal police and a rural constabulary (gogle) to protect Italian residents. By 1930 this force included about 300 men. After the fascists seized power in Italy, colonial administrators reconstituted the Somali Police Corps into the Corpo Zaptié. Italian carabinieri commanded and trained the new corps, which eventually numbered approximately 800. During Italy's war against Ethiopia, the Corpo Zaptié expanded to about 6,000 men. In 1941 the British defeated the Italians and formed a British Military Administration (BMA) over both protectorates. The BMA disbanded the Corpo Zaptié and created the Somalia Gendarmerie. By 1943 this force had grown to more than 3,000 men, led by 120 British officers. In 1948 the Somalia Gendarmerie became the Somali Police Force. After the creation of the Italian Trust Territory in 1950, Italian carabinieri officers and Somali personnel from the Somali Police Force formed the Police Corps of Somalia (Corpo di Polizia della Somalia). In 1958 the authorities made the corps an entirely Somali force and changed its name to the Police Force of Somalia (Forze di Polizia della Somalia). In 1960 the British Somaliland Scouts joined with the Police Corps of Somalia to form a new Somali Police Force, which consisted of about 3,700 men. The authorities also organized approximately 1,000 of the force as the Darawishta Poliska, a mobile group used to keep peace between warring clans in the interior. Since then, the government has considered the SPF a part of the armed forces. It was not a branch of the SNA, however, and did not operate under the army's command structure. Until abolished in 1976, the Ministry of Interior oversaw the force's national commandant and his central command. After that date, the SPF came under the control of the presidential adviser on security affairs. Each of the country's administrative regions had a police commandant; other commissioned officers maintained law and order in the districts. After 1972 the police outside Mogadishu comprised northern and southern group commands, divisional commands (corresponding to the districts), station commands, and police posts. Regional governors and district commissioners commanded regional and district police elements. Under the parliamentary regime, police received training and matériel aid from West Germany, Italy, and the United States. 
Although the government used the police to counterbalance the Soviet-supported army, no police commander opposed the 1969 army coup. During the 1970s, German Democratic Republic (East Germany) security advisers assisted the SPF. After relations with the West improved in the late 1970s, West German and Italian advisers again started training police units. By the late 1970s, the SPF was carrying out an array of missions, including patrol work, traffic management, criminal investigation, intelligence gathering, and counterinsurgency. The elite mobile police groups consisted of the Darawishta and the Birmadka Poliska (Riot Unit). The Darawishta, a mobile unit that operated in remote areas and along the frontier, participated in the Ogaden War. The Birmadka acted as a crack unit for emergency action and provided honor guards for ceremonial functions. In 1961 the SPF established an air wing, equipped with Cessna light aircraft and one Douglas DC-3. The unit operated from improvised landing fields near remote police posts. The wing provided assistance to field police units and to the Darawishta through the airlift of supplies and personnel and reconnaissance. During the final days of Siad Barre's regime, the air wing operated two Cessna light aircraft and two DO-28 Skyservants. Technical and specialized police units included the Tributary Division, the Criminal Investigation Division (CID), the Traffic Division, a communications unit, and a training unit. The CID, which operated throughout the country, handled investigations, fingerprinting, criminal records, immigration matters, and passports. In 1961 the SPF established a women's unit. Personnel assigned to this small unit investigated, inspected, and interrogated female offenders and victims. Policewomen also handled cases that involved female juvenile delinquents, ill or abandoned girls, prostitutes, and child beggars. Service units of the Somali police included the Gadidka Poliska (Transport Department) and the Health Service. The Police Custodial Corps served as prison guards. In 1971 the SPF created a fifty-man national Fire Brigade. Initially, the Fire Brigade operated in Mogadishu. Later, however, it expanded its activities into other towns, including Chisimayu, Hargeysa, Berbera, Merca, Giohar, and Beledweyne. Beginning in the early 1970s, police recruits had to be seventeen to twenty-five years of age, of high moral caliber, and physically fit. Upon completion of six months of training at the National Police Academy in Mogadishu, those who passed an examination would serve two years on the force. After the recruits completed this service, the police could request renewal of their contracts. Officer cadets underwent a nine-month training course that emphasized supervision of police field performance. Darawishta members attended a six-month tactical training course; Birmadka personnel received training in public order and riot control. After Siad Barre fled Mogadishu in January 1991, both the Darawishta and Birmadka forces ceased to operate, for all practical purposes. In August 1972, the government established the People's Militia, known as the Victory Pioneers (Guulwadayaal). Although a wing of the army, the militia worked under the supervision of the Political Bureau of the presidency. After the SRSP's formation in 1976, the militia became part of the party apparatus. Largely because of the need for military reserves, militia membership increased from 2,500 in 1977 to about 10,000 in 1979, and to approximately 20,000 by 1990. 
After the collapse of Siad Barre's regime, the People's Militia, like other military elements, disintegrated. The militia staffed the government and party orientation centers that were located in every settlement in Somalia. The militia aided in self-help programs, encouraged "revolutionary progress," promoted and defended Somali culture, and fought laziness, misuse of public property, and "reactionary" ideas and actions. Moreover, the militia acted as a law enforcement agency that performed duties such as checking contacts between Somalis and foreigners. The militia also had powers of arrest independent of the police. In rural areas, militiamen formed "vigilance corps" that guarded grazing areas and towns. After Siad Barre fled Mogadishu in January 1991, militia members tended to join one of the insurgent groups or clan militias. Shortly after Siad Barre seized power, the Soviet Committee of State Security (Komitet Gosudarstvennoi Bezopasnosti--KGB) helped Somalia form the National Security Service (NSS). This organization, which operated outside normal bureaucratic channels, developed into an instrument of domestic surveillance, with powers of arrest and investigation. The NSS monitored the professional and private activities of civil servants and military personnel, and played a role in the promotion and demotion of government officials. As the number of insurgent movements proliferated in the late 1980s, the NSS increased its activities against dissidents, rebel sympathizers, and other government opponents. Until the downfall of Siad Barre's regime, the NSS remained an elite organization staffed by men from the SNA and the police force who had been chosen for their loyalty to the president. After the withdrawal of the last U.N. peacekeepers in 1995, clan and factional militias, in some cases supplemented by local police forces established with U.N. help in the early 1990's, continued to function with varying degrees of effectiveness. Intervention by Ethiopian troops in 1996 and 1997 helped to maintain order in Gedo region by closing down the training bases of the Islamic group Al'Ittihad Al-Islami (AIAI). In Somaliland more than 60 percent of the budget was allocated to maintaining a militia and police force composed of former troops. In 2000 a Somaliland presidential decree, citing national security concerns in the wake of the conclusion of the Djibouti Conference, delegated special powers to the police and the military. Also in 2000, the TNG began recruiting for a new 4,000-officer police force to restore order in Mogadishu. The TNG requested former soldiers to register and enroll in training camps to form a national army. At year's end 2001, the TNG had a 3,500-officer police force and a militia of approximately 5,000 persons. During the year 2001, 7,000 former non-TNG militia were demobilized to retrain them for service with the TNG; however, many of the militia members left the demobilization camps after the TNG was unable to pay their salaries for 3 months. At year's end 2001, the TNG was attempting to restore salaries and to continue the demobilization process. During the year 2001, Mogadishu police began to patrol in the TNG-controlled areas of the city. Police and militia committed numerous human rights abuses throughout the country. Many civilian citizens were killed in factional fighting, especially in Gedo, Hiran, Lower Shabelle, Middle Shabelle, Middle Juba, Lower Juba regions, and in the cities of Mogadishu and Bosasso. Kidnaping remained a problem. 
There were some reports of the use of torture by Somaliland and Puntland administrations and militias. In Somaliland and Puntland, police used lethal force while disrupting demonstrations. The use of landmines, reportedly by the Rahanwein Resistance Army (RRA), resulted in several deaths. Political violence and banditry have been endemic since the revolt against Siad Barre, who fled the capital in January 1991. Since that time, tens of thousands of persons, mostly noncombatants, have died in interfactional and interclan fighting. The vast majority of killings throughout the year resulted from clashes between militias or unlawful militia activities; several occurred during land disputes, and a small number involved common criminal activity. The number of killings increased from 2000 as a result of fighting between the following groups: Between the RRA and TNG; between the TNG and warlord Muse Sudi in Mogadishu; between warlord Hussein Aideed and the TNG; between Abdullahi Yusuf's forces and those of Jama Ali Jama in Puntland; and between the SRRC and Jubaland Alliance in Kismayo. Security forces and police killed several persons, and in some instances used lethal force to disperse demonstrators during the year 2001. For example, on February 3, in Bosasso, security forces and police shot and killed 1 woman and injured 11 other persons during a demonstration. On August 23, Somaliland police, who were arresting supporters of elders for protesting actions of President Egal, killed a small child during an exchange of gunfire. On August 28, in Mogadishu, TNG police reportedly killed two young brothers. There were no investigations, and no action was taken against the perpetrators during the year 2001. Unlike in the previous year, Islamic courts did not execute summarily any persons during the year 2001. Killings resulted from conflicts between security and police forces and militias during the year. There were no known reports of unresolved politically motivated disappearances, although cases easily might have been concealed among the thousands of refugees and displaced persons. There continued to be reports of kidnapings of aid workers during the year 2001. There were numerous kidnapings by militia groups and armed assailants who demanded ransom for hostages. The Transitional National Charter, adopted in 2000 but not implemented by year's end 2001, prohibits torture, and the Puntland Charter prohibits torture "unless sentenced by Islamic Shari'a courts in accordance with Islamic law;" however, there were some reports of the use of torture by the Puntland and Somaliland administrations and by warring militiamen against each other or against civilians. Observers believe that many incidents of torture were not reported. Security forces killed and injured persons while forcibly dispersing demonstrations during the year 2001. Security forces, police, and militias also injured persons during the year, including supporters and members of the TNG. The Transitional Charter, adopted in 2000 but not implemented by year's end 2001, provides for the sanctity of private property and privacy; however, looting and forced entry into private property continued in Mogadishu, although on a smaller scale than in previous years. The Puntland Charter recognizes the right to private property; however, the authorities did not respect this right on at least one occasion. Militia members reportedly confiscated persons' possessions as punishment during extortion attempts during the year 2001. 
Most properties that were occupied forcibly during militia campaigns in 1992-93, notably in Mogadishu and the Lower Shabelle, remained in the hands of persons other than their prewar owners. Approximately 300,000 persons, or 4 percent of the population, are internally displaced persons (IDP's) as a result of interfactional and interclan fighting. In the absence of constitutional or other legal protections, various factions and armed bandits continued to engage in arbitrary detention, including the holding of relief workers. On February 26, a U.N. Educational, Scientific and Cultural Organization (UNESCO) academic who was in Garowe, Puntland to conduct a seminar, was arrested and charged with distributing antigovernment leaflets; he was released after paying a fine. On May 22, authorities in Somaliland arrested and detained Suleiman Mohamoud Adan "Gaal" for holding meetings outside of Somaliland with Djibouti President Gelleh and TNG members; on June 5, he was released. On June 12, warlord Muse Sudi's militia arrested six clan elders for attending a meeting to discuss clan affairs, because he reportedly believed that they were attempting to undermine his authority; the elders were released after several days. On June 13, the Puntland Administration arrested two intellectuals reportedly for engaging in antigovernment political activities; they were released after a few days. On August 23, Somaliland President Egal ordered the detention of approximately 10 elders. After fighting between Somaliland authorities and supporters of the elders, four sultans (sub-clan chiefs)_and one of their supporters were arrested. On September 3, President Egal ordered their release. On September 24, the RRA in Burhakaba arrested 11 pro-TNG elders and accused them of fomenting division and dissension within the Rahanwein clan. Unlike in the previous year, there were no reports that Somaliland authorities detained foreigners for proselytizing. Seven Christian Ethiopians arrested in Somaliland in 1999 for allegedly attempting to proselytize were released at the beginning of the year. Unlike in previous years, there were no reports that authorities in Somaliland, Puntland, and in areas of the south detained local or foreign journalists. It was unknown whether persons detained in 2000 were released during the year 2001. There were no developments in the following arrest cases from 2000: The September arrests of five persons by Somaliland police, and the March detention of five persons by the Puntland region security committee. There were no developments in the arrests of the following persons arrested by the Somaliland authorities in 2000 for participating in the Djibouti Conference: Sultan Mohamed Abdulkadir, who was arrested in November; Bile Mahmud Qabowsadeh, who was arrested in October; and Abdi Hashi, who was arrested in May. There were no reports of lengthy pretrial detention in violation of the pre-1991 Penal Code in Somaliland or Puntland. None of the factions used forced exile. Over the centuries, the Somalis developed a system of handling disputes or acts of violence, including homicide, as wrongs involving not only the parties immediately concerned but also the clans to which they belonged. The offending party and his group would pay diya to the injured party and his clan. The British and Italians enforced criminal codes based on their own judicial systems in their respective colonies, but did not seriously disrupt the diya-paying system. 
After independence the Somali government developed its own laws and procedures, which were largely based on British and Italian legal codes. Somali officials made no attempt to develop a uniquely Somali criminal justice system, although diya- paying arrangements continued. The military junta that seized power in 1969 changed little of the criminal justice system it inherited. However, the government launched a campaign against diya and the concept of collective responsibility for crimes. This concept is the most distinctly Somali of any in the criminal justice system. The regime instead concentrated on extending the influence of laws introduced by the British and Italians. This increased the government's control over an area of national life previously regulated largely by custom. The constitution of 1961 had provided for a unified judiciary independent of the executive and the legislature. A 1962 law integrated the courts of northern and southern Somalia into a four-tiered system: the Supreme Court, courts of appeal, regional courts, and district courts. Sharia courts were discontinued although judges were expected to take the sharia into consideration when making decisions. The Siad Barre government did not fundamentally alter this structure; nor had the provisional government made any significant changes as of May 1992. At the lowest level of the Somali judicial system were the eighty-four district courts, each of which consisted of civil and criminal divisions. The civil division of the district court had jurisdiction over matters requiring the application of the sharia, or customary law, and suits involving claims of up to 3,000 Somali shillings (for value of the shilling, see Glossary). The criminal division of the district court had jurisdiction over offenses punishable by fines or prison sentences of less than three years. There were eight regional courts, each consisting of three divisions. The ordinary division had jurisdiction over penal and civil cases considered too serious to be heard by the district courts. The assize division considered only major criminal cases, that is, those concerning crimes punishable by more than ten years' imprisonment. A third division handled cases pertaining to labor legislation. In both the district and regional courts, a single magistrate, assisted by two laymen, heard cases, decided questions of fact, and voted on the guilt or innocence of the accused. Somalia's next-highest tier of courts consisted of the two courts of appeal. The court of appeals for the southern region sat at Mogadishu, and the northern region's court of appeals sat at Hargeysa. Each court of appeal had two divisions. The ordinary division heard appeals of district court decisions and of decisions of the ordinary division of the regional courts, whereas the assize division was only for appeals from the regional assize courts. A single judge presided over cases in both divisions. Two laymen assisted the judge in the ordinary division, and four laymen assisted the judge in the assize division. The senior judges of the courts of appeal, who were called presidents, administered all the courts in their respective regions. The Supreme Court, which sat at Mogadishu, had ultimate authority for the uniform interpretation of the law. It heard appeals of decisions and judgments of the lower courts and of actions taken by public attorneys, and settled questions of court jurisdiction. 
The Supreme Court was composed of a chief justice, who was referred to as the president, a vice president, nine surrogate justices, and four laymen. The president, two other judges, and four laymen constituted a full panel for plenary sessions of the Supreme Court. In ordinary sessions, one judge presided with the assistance of two other judges and two laymen. The president of the Supreme Court decided whether a case was to be handled in plenary or ordinary session, on the basis of the importance of the matter being considered. Although the military government did not change the basic structure of the court system, it did introduce a major new institution, the National Security Courts (NSCs), which operated outside the ordinary legal system and under the direct control of the executive. These courts, which sat at Mogadishu and the regional capitals, had jurisdiction over serious offenses defined by the government as affecting the security of the state, including offenses against public order and crimes by government officials. The NSC heard a broad range of cases, passing sentences for embezzlement by public officials, murder, political activities against the state, and thefts of government food stocks. A senior military officer was president of each NSC. He was assisted by two other judges, usually also military officers. A special military attorney general prosecuted cases brought before the NSC. No other court, not even the Supreme Court, could review NSC sentences. Appeals of NSC verdicts could be taken only to the president of the republic. Opponents of the Siad Barre regime accused the NSC of sentencing hundreds of people to death for political reasons. In October 1990, Siad Barre announced the abolition of the widely feared and detested courts; as of May 1992, the NSCs had not been reinstituted by the provisional government. Before the 1969 coup, the Higher Judicial Council had responsibility for the selection, promotion, and discipline of members of the judiciary. The council was chaired by the president of the Supreme Court and included justices of the court, the attorney general, and three members elected by the National Assembly. In 1970 military officers assumed all positions on the Higher Judicial Council. The effect of this change was to make the judiciary accountable to the executive. One of the announced aims of the provisional government after the defeat of Siad Barre was the restoration of judicial independence. As of year 2001, there is no national judicial system. The Transitional Charter, adopted in 2000, provides for an independent judiciary and for a High Commission of Justice, a Supreme Court, a Court of Appeal, and courts of first reference; however, the Charter had not been implemented by year's end 2001. Some regions have established local courts that depend on the predominant local clan and associated factions for their authority. The judiciary in most regions relies on some combination of traditional and customary law, Shari'a law, the Penal Code of the pre-1991 Siad Barre Government, or some combination of the three. For example, in Bosasso and Afmadow, criminals are turned over to the families of their victims, which then exact blood compensation in keeping with local tradition. Under the system of customary justice, clans often hold entire opposing clans or sub-clans responsible for alleged violations by individuals. 
Islamic Shari'a courts, which traditionally ruled in cases of civil and family law but extended their jurisdiction to criminal proceedings in some regions beginning in 1994, ceased to function effectively in the country during the year 2001. The Islamic courts in Mogadishu gradually were absorbed during the year 2001 by the TNG, and the courts in Merka and Beledweyne ceased to function. In Berbera courts apply a combination of Shari'a law and the former Penal Code. In south Mogadishu, a segment of north Mogadishu, the Lower Shabelle, and parts of the Gedo and Hiran regions, court decisions are based on a combination of Shari'a and customary law. Throughout most of the country, customary law forms a basis for court decisions. In 2000 Somaliland adopted a new Constitution based on democratic principles but continued to use the pre-1991 Penal Code. The Constitution provides for an independent judiciary; however, the judiciary is not independent in practice. A U.N. report issued in 2000 noted a serious lack of trained judges and of legal documentation in Somaliland, which caused problems in the administration of justice. Untrained police and other persons reportedly served as judges. The Puntland Charter implemented in 1998 provides for an independent judiciary; however, the judiciary is not independent in practice. The Puntland Charter also provides for a Supreme Court, courts of appeal, and courts of first reference. In Puntland clan elders resolved the majority of cases using traditional methods; however, those with no clan representation in Puntland were subject to the Administration's judicial system. The Transitional Charter, which was not implemented by year's end 2001, provides for the right to be represented by an attorney. The right to representation by an attorney and the right to appeal do not exist in those areas that apply traditional and customary judicial practices or Shari'a law. These rights more often are respected in regions that continue to apply the former government's penal code, such as Somaliland and Puntland. In January more than 50 gunmen attacked an Islamic court in Mogadishu and released 48 prisoners and looted the premises; the motivation for the attack remained unknown at year's end 2001. There were no reports of political prisoners. The few prisons that existed before 1960 had been established during the British and Italian colonial administrations. By independence these facilities were in poor condition and were inadequately staffed. After independence the Somali government included in the constitution an article asserting that criminal punishment must not be an obstacle to convicts' moral reeducation. This article also established a prison organization and emphasized prisoner rehabilitation. The Somali Penal Code of 1962 effectively stipulated the reorganization of the prison system. The code required that prisoners of all ages work during prison confinement. In return for labor on prison farms, construction projects, and roadbuilding, prisoners received a modest salary, which they could spend in prison canteens or retain until their release. The code also outlawed the imprisonment of juveniles with adults. By 1969 Somalia's prison system included forty-nine facilities, the best-equipped of which was the Central Prison of Mogadishu. During the 1970s, East Germany helped Somalia build four modern prisons. 
As opposition to Siad Barre's regime intensified, the country's prisons became so crowded that the government used schools, military and police headquarters, and part of the presidential palace as makeshift jails. Despite criticism by several international humanitarian agencies, the Somali government failed to improve the prison system. As of year 2001, prison conditions varied throughout the country; however, in general they remained harsh, and in some cases, life threatening. Conditions at the north Mogadishu prison of the Shari'a court system remained harsh and life threatening. Hareryale, a prison established between north and south Mogadishu reportedly holds hundreds of prisoners, including children. Conditions at Hareryale are described as overcrowded and poor. Similar conditions exist at Shirkhole prison, an Islamic Court Militia run prison in south Mogadishu and at north Mogadishu prison for Abgel clan prisoners run by warlord Musa Sudi. In September the U.N. Secretary General's Independent Expert on Human Rights, Dr. Ghanim Alnajar, visited prisons in Hargeisa and Mogadishu. Alnajar reported that conditions had not improved in the 3 years since his previous visit. Overcrowding, poor sanitary conditions, a lack of access to adequate health care, and an absence of education and vocational training characterized prisons throughout the country. Tuberculosis was widespread. Abuse by guards reportedly was common in many prisons. Pretrial detainees and political prisoners are held separately from convicted prisoners. According to an international observer, men and women are housed separately in the Puntland prison in Bosasso; this is the case in other prisons as well. Juveniles frequently are housed with adults in prisons. Custom allows parents to place children in prison without judicial proceedings. The detainees' clans generally pay the costs of detention. In many areas, prisoners are able to receive food from family members or from relief agencies. Ethnic minorities make up a disproportionately large percentage of the prison population. The Puntland Administration permits prison visits by independent monitors. Somaliland authorities permit prison visits by independent monitors, and such visits occurred during the year 2001. The Jumale Center for Human Rights visited prisons in Mogadishu during the year 2001.
http://www-rohan.sdsu.edu/faculty/rwinslow/africa/somalia.html
In Aristotelian kinematics, it is necessary to distinguish the following in speaking of X traveling from A to B:
1. the distance or line AB which X travels over,
2. the time in which X travels AB,
3. the movement, which is the line AB understood as the line over which X travels in the given time.
Note that the movement is here conceived as a distance-traveled by X in an individual time, e.g. from Los Angeles to New York from 12:00 PM to 5:00 PM (Pacific time), 1 Sept. 2000. The distance-traveled by X from Los Angeles to Kaua'i one year later may be an equal distance-traveled in an equal time, but the distances-traveled and the times of travel are not the same. The distance traveled two years later from Los Angeles to New York may be the same distance, but it is not the same movement. For convenience, the following translations will be used:
|Greek|literal translation|translation used|
| |carrying|movement (a distance moved)|
| |the carried object|the moved|
| |change|change (a quantity of the change)|
| |the changed object|the changed|
Mechanica 1 858b1-10
First then the properties of the balance cause puzzlement: what is the reason why larger balances are more precise than smaller. And the principle of this causes puzzlement: why in the circle does the line which stands further from the center move faster than the one near to it, which changes by nearly the same force, namely the smaller. For the faster is said in two ways. If it traverses an equal place in a smaller time, we say that it is faster, and if in an equal time it traverses more. The larger describes in an equal time a larger circle. For the outer is larger than the inner. And the cause of these things is that the line describing the circle moves with two movements.
The definition is of 'faster'. The notion of 'velocity' plays no role here, or anywhere else in the treatise. Let TA be the time that A travels distance DA, where the movement is DA,TA. Let TB be the time that B travels distance DB, where the movement is DB,TB. The definitions of 'faster' are, given DA,TA and DB,TB:
1. A moves faster than B if DA = DB and TA < TB.
2. A moves faster than B if TA = TB and DA > DB.
In the discussion, the author only uses definition 2.
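To see the two definitions in action, here is a minimal sketch, not part of the treatise or of the commentary, that models a movement as a (distance, time) pair; the function names and the sample numbers are illustrative assumptions.

```python
# A movement in the Aristotelian sense: a distance traversed in a particular time,
# modeled here simply as a (distance, time) pair.

def faster_by_equal_distance(a, b):
    """Definition 1: A is faster than B if the distances are equal and A's time is smaller."""
    (da, ta), (db, tb) = a, b
    return da == db and ta < tb

def faster_by_equal_time(a, b):
    """Definition 2: A is faster than B if the times are equal and A's distance is larger."""
    (da, ta), (db, tb) = a, b
    return ta == tb and da > db

# The outer point of a rotating radius sweeps a longer arc than the inner point
# in the same time, so by definition 2 it is the faster (illustrative numbers).
outer = (10.0, 1.0)   # arc length 10 in one unit of time
inner = (4.0, 1.0)    # arc length 4 in the same time
print(faster_by_equal_time(outer, inner))      # True
print(faster_by_equal_distance(outer, inner))  # False: the distances differ
```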
Mechanica 1 858b10-23
This proof is commonly described as 'the parallelogram of velocities' or even as 'the parallelogram of forces'. Both descriptions are very anachronistic and inappropriate. The composed lines are movements. The diagram in modern editions and translations of the Mechanica involves a parallelogram. There is no reason to think, however, that the intended figure is not a rectangle. Whether a parallelogram is required depends on how one interprets the diagram at 858a2-6 and the relation between this problem and the related case of the rhombus at ch. 23. You may select which you prefer. (diagram 1: moving picture or fixed picture)
And so whenever something moves in some ratio, it is necessary that the moved moves on a straight-line, and it becomes a diameter of the figure which the composed lines in this ratio make. For let the ratio which the moved moves be what AB has to AG; and let AG move towards B, and also let AB move down towards HG. (figure 2) Let A move towards D, and the line AB towards E. And so if the ratio for the movement is what AB has to AG, it is necessary that AD also have this ratio to AE. Therefore the small quadrilateral is similar to the larger, so the same diameter is also of them, and A will be at Z. In the same manner, it will be proved wherever the motion is marked off. For it will always be on the diameter.
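The claim of 858b10-23, that composing two movements whose displacements stand in a fixed ratio keeps the moved point on the diagonal, can be checked numerically. The sketch below is a modern illustration under that reading; the helper functions and the 2 : 1 sample ratio are invented for the example. It also shows that components in no fixed ratio, such as those of a point carried on a rotating radius, fail the collinearity test, which is what the next passage argues.

```python
import math

def trace(component_x, component_y, ts):
    """Points reached when two movements are composed: component_x(t), component_y(t)
    give the displacement of each component movement at time t."""
    return [(component_x(t), component_y(t)) for t in ts]

def collinear_with_start(points, tol=1e-9):
    """True if every traced point lies on one straight line through the starting point."""
    return all(abs(x2 * y1 - x1 * y2) < tol
               for (x1, y1) in points for (x2, y2) in points)

ts = [i / 100 for i in range(101)]

# Fixed ratio (here 2 : 1): the moved point runs along the diagonal of the rectangle.
diagonal = trace(lambda t: 2 * t, lambda t: t, ts)
print(collinear_with_start(diagonal))   # True

# No fixed ratio: the two displacements of a point on a rotating radius,
# which trace a circular arc rather than a straight line.
arc = trace(lambda t: math.cos(t) - 1, lambda t: math.sin(t), ts)
print(collinear_with_start(arc))        # False
```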
Mechanica 1 858b23-33 (converse of the previous)
And so it is obvious that what is moved on the diameter must move in two movements in the ratio of the sides. For if it moves in some other ratio, it will not move along the diameter. If it moves with two motions in no fixed ratio in any time, it is impossible that the movement be straight. For let it be straight. And so when this is posited as diameter, and when the sides are filled out, it is necessary that the moved move with the ratio of the sides. For this was proved earlier. Therefore, what is moved in no ratio in no time will not produce a straight-line. For if it moves in some ratio in some time, it is necessary that there be a straight movement in this time for the mentioned reasons.
How we interpret 858b10-23 depends partly on how we interpret this argument. (diagram 2) Modern translations make the argument general and so tend to make the arc BEG different from a quadrant. They then need to emend the second sentence of the text. Here we treat arc BEG as a quadrant. This makes the second sentence mean that when the point moving on a straight line, BD or BK, arrives at G, the straight line on which it rides is now moved to being a radius that is perpendicular again. One suspects that the radius KB was perpendicular to BD but later is rotated to KG perpendicular to DG, so that the line on which the point moves is BD. You may select which you prefer. (diagram 1)
Thus it becomes circular, when moving in two movements in no fixed ratio in any time. That the line describing the circle moves with two movements together is obvious from these things and the fact that what is moved along a straight line arrives onto the perpendicular, so that the line from the center is again perpendicular. (diagram 3: moving picture or still picture) Let there be a circle ABG, and let the end B move to D. Then it arrives sometime at G. And so if it had moved in the ratio which BD has to DG, it would have moved on the diameter BG. And now, since it moves in no fixed ratio, it moves on the circumference BEG.
If of two things moved from the same force one is pushed back more and the other less, it is reasonable that the one pushed back more changes more slowly than the one pushed back less, which is thought to happen in the case of larger and smaller of lines from the center describing circles. For since the end point of the smaller line is nearer to the fixed point than the end point of the larger, just as held back in the opposite direction, the end point of the smaller moves to the center more slowly. And so this happens to every line describing a circle, and it moves with its natural motion along the circular-arc, but with its unnatural motion to the side and the center. But the smaller line always changes with its unnatural change always greater. For it is controlled more since it is nearer to the center which holds it back. And it is clear from these things that the smaller of the lines from the center describing circles changes with regard to the unnatural magnitude more than the larger. (diagram 1) Let there be a circle BGDE, and another, a smaller, in this, CNMX, about the same center A. And let the diameters be extended, GD and BE in the larger, with MC, NX in the smaller. And let the oblong (i.e. rectangle) be filled out, DYRG.
(diagram 2: moving or fixed) If, in fact, AB describing a circle will have come to the same point from where it set out for AE, it is clear that it moves to itself. Similarly AC will have come to AC as well. But AC moves slower than AB, just as was said, since the pushing back is greater and AC is held back more.
(diagram 3 = general diagram) Let AQH be drawn, and from Q to AB let a perpendicular QZ be drawn in the circle. And again from Q let QW be drawn parallel to AB, and WU to perpendicular AB,* as well as HK. In fact lines WU and QZ are equal. Therefore, BU is smaller than CZ. For equal straight-lines when placed in unequal circles at right angles to the diameter cut off smaller segments than they do in larger circles (see brief proof below), but WU is equal to QZ. In fact in as much time in which AQ moved CQ, in so much time the end point of BA has moved in the larger circle a distance larger than BW. For the natural movement is equal, but the unnatural is smaller: BU is smaller than ZC. But they ought to be proportional, as the natural magnitude is to the natural, so is the unnatural magnitude to the unnatural. Therefore, it has traversed a circular-arc, HB, larger than WB. But it is necessary that it traverse HB in this time. For it will be here whenever the unnatural magnitude happens to be pairwise proportional to the natural magnitude. If in fact the natural magnitude is larger in the larger circle, then the unnatural magnitude might instead occur here in only one way, in order for B to move with movement BH in the time in which point C moved CQ.** For here magnitude KH*** is natural for point B (for the line KH is perpendicular from H), but it is unnatural to KB. But as HK is to KB, QZ is to ZC. (diagram 4) But it is obvious if lines are joined from B, C to H, Q (since triangles BKH and CZQ are similar). (diagram 3 = general diagram) But if the movement which B makes is smaller or larger than HB, they will not be similar and the natural to the unnatural will not be pairwise proportional either.
*The manuscripts have: . The text may have been: (let WU be drawn perpendicular to AB). **Bekker's text reads , which everyone translates as above. Nonetheless, this meaning is clearly forced but correct, so that the text needs some fixing, though perhaps not to the extent of Heath's, . ***Here too, everyone translated the received text in this way, although it is very forced. I read instead of the manuscripts (the center).
General note: What are the natural and the unnatural movements? The Oxford translation's diagram makes the natural motion sideways, but it should be downwards (as Heath or Blancanus). Hence, this diagram (after Blancanus) has the natural motions be KH and ZQ. The unnatural motions are BK and CZ towards the center, i.e. towards line GD or NX. These are proportional: KH : ZQ = BK : CZ.
The reason why the point more distant from the center moves faster from the same force is clear through what was said, while the reason why larger balances are more precise than smaller is obvious from these things. For the rope becomes the center (since this stays put), while the lines from the center are on each side of the scale. And so from the same weight it is necessary that the end point of the scale move faster by as much as it is distant from the rope, and some weights placed on small balances are not clear in regard to perception, while in large ones they are clear. For nothing prevents it moving a magnitude smaller than would be obvious to sight.
In the case of a large scale the same weight made a visible magnitude. And some are clear in both cases, but much more in the case of the larger ones, due to the fact that the magnitude of the downward push from the same weight is much larger in the larger ones. And for this reason the dealers in purple use trickery in setting them up for skimming, by not placing the rope in the middle, by infusing lead into one side of the beam, or by making the balance of wood towards the root, where it tends to push down, or if it has a knot. For it is heavier where the root of the wood is, but a knot is a sort of root.
Lemma for Mechanica 1 859a18-b19: BU < CZ (proof after Heath)
Given UW = ZQ, with UW and ZQ perpendicular to the diameters BE and CM. (diagram 2) First extend UW to P and ZQ to L, to the respective circles. Thus PW intersects BE in circle BGED and QL intersects CM in circle CNMX. (diagram 3: alternating or single)
By the intersecting-chords relation in each circle, WU · UP = BU · UE and QZ · ZL = CZ · ZM. Since WU = UP = QZ = ZL, BU · UE = CZ · ZM. Hence BU : CZ = ZM : UE, so if ZM < UE then BU < CZ.
We need to show that ZM < UE. We do this by showing that AZ < AU. (diagram 4) Join AQ and AW to form right triangles AZQ and AUW, where LZ = UW. Then AW² − AU² = UW² = LZ² = AL² − AZ². Hence AW² + AZ² = AL² + AU². Since AW² > AL², it follows that AZ² < AU², and so AZ < AU. Since AM < AE and AZ < AU, MA + AZ < UA + AE, that is, ZM < UE. Hence BU < CZ.
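As a sanity check on the lemma and on the chord relation it uses, the following sketch places the two concentric circles on modern coordinates, constructs Q, Z, W, U for an arbitrary angle, and confirms that BU < CZ. The coordinate frame, the radii, and the angle are illustrative assumptions; nothing in the ancient text is stated this way.

```python
import math

# Centre A at the origin; the diameters BE and MC lie along the x-axis,
# with B = (R, 0), E = (-R, 0), C = (r, 0), M = (-r, 0).
R, r = 2.0, 1.0          # illustrative radii, R > r
phi = math.radians(50)   # any angle strictly between 0 and 90 degrees

# Q on the small circle; Z is the foot of the perpendicular from Q to BE.
Q = (r * math.cos(phi), r * math.sin(phi))
Z = (Q[0], 0.0)

# W on the large circle at the same height as Q (QW parallel to BE);
# U is the foot of the perpendicular from W to BE.
W = (math.sqrt(R**2 - Q[1]**2), Q[1])
U = (W[0], 0.0)

BU = R - U[0]
CZ = r - Z[0]
UE = U[0] + R
ZM = Z[0] + r

# WU = QZ by construction, and the chord relation BU*UE = WU^2 = QZ^2 = CZ*ZM holds.
print(math.isclose(BU * UE, Q[1]**2) and math.isclose(CZ * ZM, Q[1]**2))  # True
print(BU < CZ)   # True: the unnatural displacement is smaller on the larger circle
```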
http://www.calstatela.edu/faculty/hmendel/Ancient%20Mathematics/Aristotle/Mechanica/ch.1/Mech.Ch1.html
Joint probability is the probability of two events in conjunction. That is, it is the probability of both events together. The joint probability of A and B is written P(A ∩ B) or P(A, B).
Marginal probability is then the unconditional probability P(A) of the event A; that is, the probability of A, regardless of whether event B did or did not occur. If B can be thought of as the event of a random variable X having a given outcome, the marginal probability of A can be obtained by summing (or integrating, more generally) the joint probabilities over all outcomes for X. For example, if there are two possible outcomes for X with corresponding events B and B', this means that P(A) = P(A ∩ B) + P(A ∩ B'). This is called marginalization.
In these definitions, note that there need not be a causal or temporal relation between A and B. A may precede B or vice versa or they may happen at the same time. A may cause B or vice versa or they may have no causal relation at all. Notice, however, that causal and temporal relations are informal notions, not belonging to the probabilistic framework. They may apply in some examples, depending on the interpretation given to events.
Conditioning of probabilities, i.e. updating them to take account of (possibly new) information, may be achieved through Bayes' theorem. In such conditioning, the probability of A given only initial information I, P(A|I), is known as the prior probability. The updated conditional probability of A, given I and the outcome of the event B, is known as the posterior probability, P(A|B,I).
As an example, suppose two fair dice are rolled, and let A be the event that die 1 lands on 3, B the event that die 2 lands on 1, and C the event that the dice sum to 8. The prior probability of each event describes how likely the outcome is before the dice are rolled, without any knowledge of the roll's outcome. For example, die 1 is equally likely to fall on each of its 6 sides, so P(A) = 1/6. Similarly P(B) = 1/6. Likewise, of the 6 × 6 = 36 possible ways that a pair of dice can land, just 5 result in a sum of 8 (namely 2 and 6, 3 and 5, 4 and 4, 5 and 3, and 6 and 2), so P(C) = 5/36.
Some of these events can both occur at the same time; for example events A and C can happen at the same time, in the case where die 1 lands on 3 and die 2 lands on 5. This is the only one of the 36 outcomes where both A and C occur, so its probability is 1/36. The probability of both A and C occurring is called the joint probability of A and C and is written P(A ∩ C), so P(A ∩ C) = 1/36. On the other hand, if die 2 lands on 1, the dice cannot sum to 8, so P(B ∩ C) = 0.
Now suppose we roll the dice and cover up die 2, so we can only see die 1, and observe that die 1 landed on 3. Given this partial information, the probability that the dice sum to 8 is no longer 5/36; instead it is 1/6, since die 2 must land on 5 to achieve this result. This is called the conditional probability, because it's the probability of C under the condition that A is observed, and is written P(C|A), which is read "the probability of C given A"; here P(C|A) = 1/6. Similarly, P(C|B) = 0, since if we observe die 2 landed on 1, we already know the dice can't sum to 8, regardless of what the other die landed on.
On the other hand, if we roll the dice and cover up die 2, and observe die 1, this has no impact on the probability of event B, which only depends on die 2. We say events A and B are statistically independent or just independent, and in this case P(B|A) = P(B).
Intersection events and conditional events are related by the formula P(A ∩ B) = P(B|A) P(A), or equivalently P(B|A) = P(A ∩ B) / P(A). In this example, we have P(A ∩ C) = P(C|A) P(A) = (1/6)(1/6) = 1/36. As noted above, P(B|A) = P(B), so by this formula P(A ∩ B) / P(A) = P(B). On multiplying across by P(A), P(A ∩ B) = P(A) P(B). In other words, if two events are independent, their joint probability is the product of the prior probabilities of each event occurring by itself.
Thus, if A and B are independent, then their joint probability can be expressed as a simple product of their individual probabilities: P(A ∩ B) = P(A) P(B). Equivalently, for two independent events A and B with non-zero probabilities, P(A|B) = P(A) and P(B|A) = P(B). In other words, if A and B are independent, then the conditional probability of A given B is simply the individual probability of A alone; likewise, the probability of B given A is simply the probability of B alone.
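The dice probabilities above are easy to verify by brute-force enumeration of the 36 equally likely outcomes. The sketch below is only an illustration of that computation; the helper names and the event encoding (die 1 shows 3, die 2 shows 1, the sum is 8) follow the reading of the example given above.

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # all 36 (die1, die2) rolls

def prob(event):
    """Probability of an event, given as a predicate on (die1, die2)."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

def cond(event, given):
    """Conditional probability P(event | given), by restricting the sample space."""
    restricted = [o for o in outcomes if given(o)]
    return Fraction(sum(1 for o in restricted if event(o)), len(restricted))

A = lambda o: o[0] == 3          # die 1 lands on 3
B = lambda o: o[1] == 1          # die 2 lands on 1
C = lambda o: sum(o) == 8        # the dice sum to 8

print(prob(A), prob(B), prob(C))                          # 1/6 1/6 5/36
print(prob(lambda o: A(o) and C(o)))                      # 1/36, the joint probability
print(cond(C, A), cond(C, B))                             # 1/6 0
print(prob(lambda o: A(o) and B(o)) == prob(A) * prob(B)) # True: A and B are independent
```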
By contrast, two events A and B are mutually exclusive if their joint probability is zero, P(A ∩ B) = 0. Therefore, if P(B) > 0, then P(A|B) = P(A ∩ B) / P(B) is defined and equal to 0.
The conditional probability fallacy is the assumption that P(A|B) is approximately equal to P(B|A). The mathematician John Allen Paulos discusses this in his book Innumeracy (p. 63 et seq.), where he points out that it is a mistake often made even by doctors, lawyers, and other highly educated non-statisticians. It can be overcome by describing the data in actual numbers rather than probabilities. The relation between P(A|B) and P(B|A) is given by Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B). In other words, one can only assume that P(A|B) is approximately equal to P(B|A) if the prior probabilities P(A) and P(B) are also approximately equal.
In order to identify individuals having a serious disease in an early curable form, one may consider screening a large group of people. While the benefits are obvious, an argument against such screenings is the disturbance caused by false positive screening results: If a person not having the disease is incorrectly found to have it by the initial test, they will most likely be quite distressed until a more careful test shows that they do not have the disease. Even after being told they are well, their lives may be affected negatively. The magnitude of this problem is best understood in terms of conditional probabilities.
Suppose 1% of the group suffer from the disease, and the rest are well. Choosing an individual at random, P(disease) = 1% and P(well) = 99%. Suppose that when the screening test is applied to a person not having the disease, there is a 1% chance of getting a false positive result, i.e. P(positive | well) = 1% and P(negative | well) = 99%. Finally, suppose that when the test is applied to a person having the disease, there is a 1% chance of a false negative result, i.e. P(negative | disease) = 1% and P(positive | disease) = 99%.
Now, one may calculate the following:
The fraction of individuals in the whole group who are well and test negative: P(well ∩ negative) = P(well) P(negative | well) = 99% × 99% = 98.01%.
The fraction of individuals in the whole group who are ill and test positive: P(disease ∩ positive) = P(disease) P(positive | disease) = 1% × 99% = 0.99%.
The fraction of individuals in the whole group who have false positive results: P(well ∩ positive) = P(well) P(positive | well) = 99% × 1% = 0.99%.
The fraction of individuals in the whole group who have false negative results: P(disease ∩ negative) = P(disease) P(negative | disease) = 1% × 1% = 0.01%.
Furthermore, the fraction of individuals in the whole group who test positive: P(positive) = P(well ∩ positive) + P(disease ∩ positive) = 0.99% + 0.99% = 1.98%.
Finally, the probability that an individual actually has the disease, given that the test result is positive: P(disease | positive) = P(disease ∩ positive) / P(positive) = 0.99% / 1.98% = 50%.
In this example, it should be easy to relate to the difference between the conditional probabilities P(positive | disease) (which is 99%) and P(disease | positive) (which is 50%): the first is the probability that an individual who has the disease tests positive; the second is the probability that an individual who tests positive actually has the disease. With the numbers chosen here, the last result is likely to be deemed unacceptable: half the people testing positive are actually false positives.
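The screening numbers can be reproduced with a few lines of arithmetic, using the law of total probability and then Bayes' theorem; the variable names below are illustrative, not part of the original example.

```python
p_disease = 0.01                      # 1% of the group has the disease
p_well = 1 - p_disease

p_pos_given_well = 0.01               # false positive rate
p_neg_given_disease = 0.01            # false negative rate
p_pos_given_disease = 1 - p_neg_given_disease

# Law of total probability: the overall fraction testing positive.
p_pos = p_pos_given_disease * p_disease + p_pos_given_well * p_well

# Bayes' theorem: the probability of disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_pos, 4))                # 0.0198
print(round(p_disease_given_pos, 2))  # 0.5 -- only half of the positives are true positives
```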
Suppose X is a random variable that can be equal either to 0 or to 1. As above, one may speak of the conditional probability of any event A given the event X = 0, and also of the conditional probability of A given the event X = 1. The former is denoted P(A|X = 0) and the latter P(A|X = 1). Now define a new random variable Y, whose value is P(A|X = 0) if X = 0 and P(A|X = 1) if X = 1. That is, Y takes the value P(A|X = 0) when X = 0 and the value P(A|X = 1) when X = 1. This new random variable Y is said to be the conditional probability of the event A given the discrete random variable X, written Y = P(A|X). More generally still, it is possible to speak of the conditional probability of an event given a sigma-algebra. See conditional expectation.
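As a minimal sketch of the construction Y = P(A|X) for a two-valued X, the following code uses an invented distribution (X equally likely to be 0 or 1, with assumed conditional probabilities 0.2 and 0.7) purely to illustrate that Y is itself a random variable whose average recovers P(A).

```python
import random

# Illustrative setup (not from the text): A occurs with probability 0.2 when X = 0
# and 0.7 when X = 1, and X is 0 or 1 with equal probability.
p_A_given_X = {0: 0.2, 1: 0.7}

def sample_Y():
    """Draw X, then report Y = P(A | X), a random quantity depending on X."""
    x = random.choice([0, 1])
    return p_A_given_X[x]

samples = [sample_Y() for _ in range(10_000)]
# The average of Y approximates the unconditional P(A) = 0.5*0.2 + 0.5*0.7 = 0.45.
print(round(sum(samples) / len(samples), 2))   # approximately 0.45
```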
http://www.reference.com/browse/up+die
Two-Dimensional Vectors
In most mathematics courses up until this point, we deal with scalars. These are quantities which only need one number to express. For instance, the amount of gasoline used to drive to the grocery store is a scalar quantity because it only needs one number: 2 gallons.
In this unit, we deal with vectors. A vector is a directed line segment -- that is, a line segment that points one direction or the other. As such, it has an initial point and a terminal point. The vector starts at the initial point and ends at the terminal point, and the vector points towards the terminal point. A vector is drawn as a line segment with an arrow at the terminal point.
The same vector can be placed anywhere on the coordinate plane and still be the same vector -- the only two bits of information a vector represents are the magnitude and the direction. The magnitude is simply the length of the vector, and the direction is the angle at which it points. Since neither of these specify a starting or ending location, the same vector can be placed anywhere. To illustrate, all of the line segments below can be defined as the vector with magnitude and angle 45 degrees. It is customary, however, to place the vector with the initial point at the origin as indicated by the black vector. This is called the standard position.
Component Form
In standard practice, we don't express vectors by listing the length and the direction. We instead use component form, which lists the height (rise) and width (run) of the vector. It is written as follows: u = ⟨x, y⟩, where x is the width (run) and y is the height (rise). Other ways of denoting a vector in component form include (x, y) and the corresponding column form.
From the diagram we can now see the benefits of the standard position: the two numbers for the terminal point's coordinates are the same numbers for the vector's rise and run. Note that we named this vector u. Just as you can assign numbers to variables in algebra (usually x, y, and z), you can assign vectors to variables in calculus. The letters u, v, and w are usually used, and either boldface or an arrow over the letter is used to identify it as a vector.
When expressing a vector in component form, it is no longer obvious what the magnitude and direction are. Therefore, we have to perform some calculations to find the magnitude and direction. The magnitude is |u| = √(x² + y²), where x is the width, or run, of the vector and y is the height, or rise, of the vector. You should recognize this formula as the Pythagorean theorem. It is -- the magnitude is the distance between the initial point and the terminal point. The magnitude of a vector can also be called the norm. The direction θ satisfies tan θ = y/x, where θ is the angle the vector makes with the positive x-axis. This formula is simply the tangent formula for right triangles.
Vector Operations
For these definitions, assume u = ⟨u₁, u₂⟩ and v = ⟨v₁, v₂⟩, and let c be a scalar.
Vector Addition
Vector addition is often called tip-to-tail addition, because this makes it easier to remember: place the tail (initial point) of the second vector at the tip (terminal point) of the first. The sum of the vectors you are adding is called the resultant vector, and it is the vector drawn from the tail (initial point) of the first vector to the tip (terminal point) of the second vector. The pointy arrowhead is the tip, and the flat end is the tail. (Imagine you were walking the direction the vector was pointing: you would start at the flat end (tail) and walk toward the pointy end (tip).) It looks like this: (Notice, the black lined vector is the sum of the two dotted line vectors!) Or more generally: u + v = ⟨u₁ + v₁, u₂ + v₂⟩.
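A short sketch of the formulas above, magnitude, direction, and tip-to-tail addition, using plain tuples for vectors; the representation and the function names are illustrative choices, not a prescribed implementation.

```python
import math

def magnitude(u):
    """Length of a 2-D vector <x, y>: sqrt(x^2 + y^2)."""
    return math.hypot(u[0], u[1])

def direction(u):
    """Angle of the vector measured from the positive x-axis, in degrees."""
    return math.degrees(math.atan2(u[1], u[0]))   # atan2 picks the correct quadrant

def add(u, v):
    """Tip-to-tail addition, done componentwise."""
    return (u[0] + v[0], u[1] + v[1])

u = (3.0, 4.0)
v = (1.0, -2.0)
print(magnitude(u))    # 5.0
print(direction(u))    # about 53.13 degrees
print(add(u, v))       # (4.0, 2.0), the resultant vector
```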
That is, multiplying a vector by 2 will "stretch" the vector to twice its original magnitude, keeping the direction the same. Numerically, you calculate the resultant vector with this formula:

c u = ⟨c u₁, c u₂⟩.

As previously stated, the magnitude is changed by the same constant: |c u| = |c| |u|. Since multiplying a vector by a constant results in a vector in the same direction, we can reason that two vectors are parallel if one is a constant multiple of the other -- that is, if u = c v for some constant c. We can also divide by a non-zero scalar by instead multiplying by the reciprocal, as with dividing regular numbers: u / c = (1/c) u for c ≠ 0.

Dot Product

The dot product is a way of multiplying two vectors to produce a scalar value. Because it combines the components of two vectors to form a scalar, it is sometimes called a scalar product. The dot product of two rectangular (component-form) vectors is

u · v = u₁v₁ + u₂v₂.

It is very important to note that the dot product of two vectors does not result in another vector; it gives you a scalar, just a numerical value. Another common pitfall may arise if your vectors are not in rectangular (Cartesian) format. Sometimes, vectors are instead expressed in polar coordinates, where the first component is the vector's magnitude (length) and the second is the angle from the x-axis at which the vector is oriented. Dot products cannot be performed using the formula above on these sorts of vectors; vectors in polar format must be converted to their equivalent rectangular form before you can work with them. A common way to convert to rectangular coordinates is to imagine that the vector is projected horizontally and vertically to form a right triangle. You can then use properties of sine and cosine to find the lengths of the two legs of the right triangle. The horizontal length is then the x-component of the rectangular expression of the vector, and the vertical length is the y-component. Remember that if the vector is pointing down or to the left, the corresponding components have to be negative to indicate that.

With some rearrangement and trigonometric manipulation, we can see that the number that results from the dot product of two vectors satisfies a surprising and useful identity:

u · v = |u| |v| cos θ,

where θ is the angle between the two vectors. This provides a convenient way of finding the angle between two vectors:

cos θ = (u · v) / (|u| |v|).

Notice that the dot product is commutative, that is, u · v = v · u. Also, the dot product of a vector with itself is the length of the vector squared: u · u = u₁² + u₂², and by the Pythagorean theorem this equals |u|². The dot product can be visualized as the length of a projection of one vector onto the other. In other words, the dot product asks "how much magnitude of this vector is going in the direction of that vector?"

Applications of Scalar Multiplication and Dot Product

Unit Vectors

A unit vector is a vector with a magnitude of 1. The unit vector of u is a vector in the same direction as u, but with a magnitude of 1. The process of finding the unit vector of u is called normalization. As mentioned in scalar multiplication, multiplying a vector by a constant c will result in the magnitude being multiplied by c. We know how to calculate the magnitude of u, and we know that dividing a vector by a constant will divide the magnitude by that constant.
Therefore, if that constant is the magnitude, dividing the vector by its magnitude will result in a unit vector in the same direction as u:

û = u / |u|,

where û is the unit vector of u.

Standard Unit Vectors

A special case of unit vectors are the standard unit vectors i and j: i points one unit directly right in the x direction, and j points one unit directly up in the y direction:

i = ⟨1, 0⟩, j = ⟨0, 1⟩.

Using the scalar multiplication and vector addition rules, we can then express vectors in a different way:

⟨x, y⟩ = x i + y j.

If we work that equation out, it makes sense. Multiplying x by i will result in the vector ⟨x, 0⟩. Multiplying y by j will result in the vector ⟨0, y⟩. Adding these two together will give us our original vector, ⟨x, y⟩. Expressing vectors using i and j is called standard form.

Projection and Decomposition of Vectors

Sometimes it is necessary to decompose a vector u into two components: one component parallel to a vector v, which we will call u∥, and one component perpendicular to it, u⊥. Since the length of u∥ is (u · v) / |v|, it is straightforward to write down the formulas for u∥ and u⊥:

u∥ = ((u · v) / |v|²) v and u⊥ = u − u∥.

Length of a vector

The length of a vector is given by the dot product of a vector with itself:

u · u = |u|², so |u| = √(u · u).

Perpendicular vectors

If the angle between two vectors is 90 degrees or π/2 (if the two vectors are orthogonal to each other), that is, the vectors are perpendicular, then the dot product is 0. This provides us with an easy way to find a perpendicular vector: if you have a vector u = ⟨x, y⟩, a perpendicular vector can easily be found by taking either ⟨−y, x⟩ or ⟨y, −x⟩.

Polar coordinates

Polar coordinates are an alternative two-dimensional coordinate system, which is often useful when rotations are important. Instead of specifying the position along the x and y axes, we specify the distance from the origin, r, and the direction, an angle θ. Looking at this diagram, we can see that the values of x and y are related to those of r and θ by the equations

x = r cos θ, y = r sin θ, r = √(x² + y²), θ = tan⁻¹(y / x).

Because tan⁻¹ is multivalued, care must be taken to select the right value. Just as for Cartesian coordinates the unit vectors that point in the x and y directions are special, so in polar coordinates the unit vectors that point in the r and θ directions are also special. We will call these vectors r̂ and θ̂, pronounced r-hat and theta-hat. Putting a circumflex over a vector this way is often used to mean the unit vector in that direction. Again, on looking at the diagram we see

r̂ = cos θ i + sin θ j, θ̂ = −sin θ i + cos θ j.

Three-Dimensional Coordinates and Vectors

Basic definition

Two-dimensional Cartesian coordinates as we've discussed so far can be easily extended to three dimensions by adding one more value: z. If the standard (x, y) coordinate axes are drawn on a sheet of paper, the z axis would extend upwards off of the paper. Similar to the two coordinate axes in two-dimensional coordinates, there are three coordinate planes in space. These are the xy-plane, the yz-plane, and the xz-plane. Each plane is the "sheet of paper" that contains both axes the name mentions. For instance, the yz-plane contains both the y and z axes and is perpendicular to the x axis.

Therefore, vectors can be extended to three dimensions by simply adding the z value. To facilitate standard form notation, we add another standard unit vector, k = ⟨0, 0, 1⟩, so that ⟨x, y, z⟩ = x i + y j + z k. Again, both forms (component and standard) are equivalent.

Magnitude: Magnitude in three dimensions is the same as in two dimensions, with the addition of a z term in the radicand: |u| = √(x² + y² + z²).
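The following is a small sketch, my own illustration rather than part of the wikibook, of the operations defined above, using plain tuples for component form. The helper names (add, scale, dot, magnitude, unit, angle_between, project) are my own choices, not functions from any particular library.

import math

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, u):
    return tuple(c * a for a in u)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def magnitude(u):
    return math.sqrt(dot(u, u))          # |u| = sqrt(u . u)

def unit(u):
    return scale(1 / magnitude(u), u)    # normalization: u / |u|

def angle_between(u, v):
    # cos(theta) = (u . v) / (|u| |v|)
    return math.acos(dot(u, v) / (magnitude(u) * magnitude(v)))

def project(u, v):
    # component of u parallel to v: ((u . v) / |v|^2) v
    return scale(dot(u, v) / dot(v, v), v)

u, v = (3.0, 4.0), (1.0, 0.0)
print(magnitude(u))                        # 5.0
print(add(u, v), scale(2, u))              # (4.0, 4.0) (6.0, 8.0)
print(dot(u, v))                           # 3.0
print(math.degrees(angle_between(u, v)))   # about 53.13 degrees
print(unit(u))                             # (0.6, 0.8)
print(project(u, v))                       # (3.0, 0.0)

The same functions work unchanged for three-component tuples, since every formula above is written per component.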
Three dimensions

The polar coordinate system is extended into three dimensions with two different coordinate systems, the cylindrical and spherical coordinate systems, both of which include two-dimensional or planar polar coordinates as a subset. In essence, the cylindrical coordinate system extends polar coordinates by adding an additional distance coordinate, while the spherical system instead adds an additional angular coordinate.

Cylindrical coordinates

The cylindrical coordinate system is a coordinate system that essentially extends the two-dimensional polar coordinate system by adding a third coordinate measuring the height of a point above the plane, similar to the way in which the Cartesian coordinate system is extended into three dimensions. The third coordinate is usually denoted h, making the three cylindrical coordinates (r, θ, h). The three cylindrical coordinates can be converted to Cartesian coordinates by

x = r cos θ, y = r sin θ, z = h.

Spherical coordinates

Polar coordinates can also be extended into three dimensions using the coordinates (ρ, φ, θ), where ρ is the distance from the origin, φ is the angle from the z-axis (called the colatitude or zenith and measured from 0 to 180°) and θ is the angle from the x-axis (as in the polar coordinates). This coordinate system, called the spherical coordinate system, is similar to the latitude and longitude system used for Earth, with the origin in the centre of Earth, the latitude δ being the complement of φ, determined by δ = 90° − φ, and the longitude l being measured by l = θ − 180°. The three spherical coordinates are converted to Cartesian coordinates by

x = ρ sin φ cos θ, y = ρ sin φ sin θ, z = ρ cos φ.

Cross Product

The cross product of two vectors is a determinant,

u × v = | i  j  k ; u₁  u₂  u₃ ; v₁  v₂  v₃ | = ⟨u₂v₃ − u₃v₂, u₃v₁ − u₁v₃, u₁v₂ − u₂v₁⟩,

and is also a pseudovector. The cross product of two vectors is orthogonal to both vectors. The magnitude of the cross product is the product of the magnitudes of the vectors and the sine of the angle between them, |u × v| = |u| |v| sin θ. This magnitude is the area of the parallelogram defined by the two vectors. The cross product is linear and anticommutative: for any numbers a and b, (a u + b v) × w = a (u × w) + b (v × w), and u × v = −(v × u). If both vectors point in the same direction, their cross product is zero.

Triple Products

If we have three vectors we can combine them in two ways: a triple scalar product, u · (v × w), and a triple vector product, u × (v × w). The triple scalar product is a determinant,

u · (v × w) = | u₁ u₂ u₃ ; v₁ v₂ v₃ ; w₁ w₂ w₃ |.

If the three vectors are listed clockwise, looking from the origin, the sign of this product is positive; if they are listed anticlockwise, the sign is negative. The order of the cross and dot products doesn't matter: u · (v × w) = (u × v) · w. Either way, the absolute value of this product is the volume of the parallelepiped defined by the three vectors u, v, and w.

The triple vector product can be simplified:

u × (v × w) = (u · w) v − (u · v) w.

This form is easier to do calculations with. The triple vector product is not associative. There are special cases where the two sides are equal, but in general the brackets matter and must not be omitted.

Three-Dimensional Lines and Planes

We will use r to denote the position of a point. The multiples of a vector a all lie on a line through the origin. Adding a constant vector b will shift the line but leave it straight, so the equation of a line is

r = s a + b.

This is a parametric equation: the position is specified in terms of the parameter s. Any linear combination of two vectors a and b lies on a single plane through the origin, provided the two vectors are not collinear. We can shift this plane by a constant vector again and write

r = s a + t b + c.

If we choose a and b to be orthonormal vectors in the plane (i.e.
unit vectors at right angles), then s and t are Cartesian coordinates for points in the plane. These parametric equations can be extended to higher dimensions.

Instead of giving parametric equations for the line and plane, we could use constraints. E.g., for any point in the xy-plane, z = 0. For a plane through the origin, the single vector normal to the plane, n, is at right angles with every vector in the plane, by definition, so

r · n = 0

is a plane through the origin, normal to n. For planes not through the origin we get

r · n = d.

A line lies on the intersection of two planes, so it must obey the constraints for both planes, i.e. r · n₁ = d₁ and r · n₂ = d₂. These constraint equations can also be extended to higher dimensions.

Vector-Valued Functions

Vector-valued functions are functions that, instead of giving a resultant scalar value, give a resultant vector value. These aid in the creation of direction and vector fields, and are therefore used in physics to aid with visualizations of electric, magnetic, and many other types of fields. They are of the form

f(t) = ⟨a₁(t), a₂(t), …, aₙ(t)⟩.

Limits, Derivatives, and Integrals

Put simply, the limit of a vector-valued function is the limit of its parts:

lim (t→c) f(t) = ⟨lim (t→c) a₁(t), …, lim (t→c) aₙ(t)⟩.

(The proof is the usual ε–δ argument applied to each component, with the help of the triangle inequality.) From this we can then create an accurate definition of the derivative of a vector-valued function: it is the vector of the derivatives of its components,

f′(t) = ⟨a₁′(t), …, aₙ′(t)⟩.

By the Fundamental Theorem of Calculus, integrals can likewise be applied to the vector's components. In other words: the limit of a vector function is the limit of its parts, the derivative of a vector function is the derivative of its parts, and the integration of a vector function is the integration of its parts.

Velocity, Acceleration, Curvature, and a brief mention of the Binormal

Assume we have a vector-valued function which starts at the origin and, as its independent variable changes, the points that the vectors point at trace a path. We will call this vector r(t), which is commonly known as the position vector. If r represents a position and t represents time, then modeling with physics we know the following: the change in position Δr is displacement; v(t) = r′(t) is the velocity vector; |v(t)| is the speed; and a(t) = v′(t) = r″(t) is the acceleration vector.

The only other vector that comes in use at times is known as the curvature vector. The vector used to find it is known as the unit tangent vector, which is defined as T(t) = r′(t) / |r′(t)|, or in shorthand T = v / |v|. The vector normal to this is T′(t). We can verify this by taking the dot product: since T · T = 1, differentiating both sides gives 2 T · T′ = 0, therefore T′ is perpendicular to T. What this gives rise to is the unit normal vector

N(t) = T′(t) / |T′(t)|,

while the quantity κ(t) = |T′(t)| / |r′(t)| is known as the curvature. Since the normal vector points toward the inside of a curve, the sharper a turn, the larger the magnitude of T′ and hence the larger the curvature; the corresponding radius of curvature is small, and it is used as an index in civil engineering to reflect the sharpness of a curve (clover-leaf highways, for instance). The only other thing not mentioned is the binormal, B(t) = T(t) × N(t), which occurs with 3-d curves and is useful in creating planes parallel to the curve.
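A short sketch follows, again my own illustration rather than the wikibook's, of the three-dimensional operations above: the cross product in components, the triple scalar product as a volume, and the spherical-to-Cartesian conversion. All of the numbers are made up for the example.

import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def triple_scalar(u, v, w):
    # u . (v x w): its absolute value is the volume of the parallelepiped
    return dot(u, cross(v, w))

def spherical_to_cartesian(rho, phi, theta):
    # phi measured from the z-axis, theta from the x-axis
    return (rho*math.sin(phi)*math.cos(theta),
            rho*math.sin(phi)*math.sin(theta),
            rho*math.cos(phi))

u, v, w = (1, 0, 0), (0, 2, 0), (0, 0, 3)
print(cross(u, v))             # (0, 0, 2) -- orthogonal to both u and v
print(triple_scalar(u, v, w))  # 6 -- volume of the 1 x 2 x 3 box
print(spherical_to_cartesian(1.0, math.pi/2, 0.0))  # approximately (1, 0, 0)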
http://en.wikibooks.org/wiki/Calculus/Vectors
The decimal system is a positional numeral system; it has positions for units, tens, hundreds, etc. The position of each digit conveys the multiplier (a power of ten) to be used with that digit—each position has a value ten times that of the position to its right. Ten is the number which is the count of fingers and thumbs on both hands (or toes on the feet). In many languages the word digit or its translation is also the anatomical term referring to fingers and toes. In English, decimal (decimus < Lat.) means tenth, decimate means reduce by a tenth, and denary (denarius < Lat.) means the unit of ten.

The symbols for the digits in common use around the globe today are called Arabic numerals by Europeans and Indian numerals by Arabs, the two groups' terms both referring to the culture from which they learned the system. However, the symbols used in different areas are not identical; for instance, Western Arabic numerals (from which the European numerals are derived) differ from the forms used by other Arab cultures. Some cultures do, or used to, use other numeral systems, including pre-Columbian Mesoamerican cultures such as the Maya, who use a vigesimal system (using all twenty fingers and toes), some Nigerians who use several duodecimal (base 12) systems, the Babylonians, who used sexagesimal (base 60), and the Yuki, who reportedly used quaternal (base 4).

Computer hardware and software systems commonly use a binary representation internally (although a few of the earliest computers, such as ENIAC, did use decimal representation internally). For external use by computer specialists, this binary representation is sometimes presented in the related octal or hexadecimal systems. For most purposes, however, binary values are converted to the equivalent decimal values for presentation to and manipulation by humans. Both computer hardware and software also use internal representations which are effectively decimal for storing decimal values and doing arithmetic. Often this arithmetic is done on data which are encoded using binary-coded decimal, but there are other decimal representations in use (see IEEE 754r), especially in database implementations. Decimal arithmetic is used in computers so that decimal fractional results can be computed exactly, which is not possible using a binary fractional representation. This is often important for financial and other calculations.

Decimal fractions are commonly expressed without a denominator, the decimal separator being inserted into the numerator (with leading zeros added if needed), at the position from the right corresponding to the power of ten of the denominator; e.g., 8/10, 83/100, 83/1000, and 8/10000 are expressed as: 0.8, 0.83, 0.083, and 0.0008. In English-speaking and many Asian countries, a period (.) is used as the decimal separator; in many other languages, a comma is used. The integer part or integral part of a decimal number is the part to the left of the decimal separator (see also floor function). The part from the decimal separator to the right is the fractional part; if considered as a separate number, a zero is often written in front. Especially for negative numbers, we have to distinguish between the fractional part of the notation and the fractional part of the number itself, because the latter gets its own minus sign. It is usual for a decimal number whose absolute value is less than one to have a leading zero.
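The point about exact decimal arithmetic can be illustrated with a brief sketch of mine (not from the article): binary floating point cannot represent most decimal fractions exactly, while a decimal representation such as Python's decimal module can.

from decimal import Decimal

# 0.1 has no finite binary expansion, so repeated float addition drifts:
print(sum([0.1] * 10))               # 0.9999999999999999
print(sum([Decimal("0.1")] * 10))    # 1.0 exactly

# The same issue matters for money: ten payments of 10 cents should total one dollar.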
Trailing zeros after the decimal point are not necessary, although in science, engineering and statistics they can be retained to indicate a required precision or to show a level of confidence in the accuracy of the number: whereas 0.080 and 0.08 are numerically equal, in engineering 0.080 suggests a measurement with an error of up to 1 part in two thousand (±0.0005), while 0.08 suggests a measurement with an error of up to 1 in two hundred (see Significant figures).

Ten is the product of the first and third prime numbers, is one greater than the square of the second prime number, and is one less than the fifth prime number. This leads to plenty of simple decimal fractions, such as 1/2 = 0.5, 1/4 = 0.25, 1/5 = 0.2, 1/8 = 0.125 and 1/10 = 0.1.

That a rational number must have a finite or recurring decimal expansion can be seen to be a consequence of the long division algorithm, in that there are only q−1 possible nonzero remainders on division by q, so that the recurring pattern will have a period less than q. For instance, to find 3/7 by long division, each step divides ten times the previous remainder by 7:

30/7 = 4 remainder 2
20/7 = 2 remainder 6
60/7 = 8 remainder 4
40/7 = 5 remainder 5
50/7 = 7 remainder 1
10/7 = 1 remainder 3
30/7 = 4 remainder 2 (again)

so 3/7 = 0.428571 428571 …, with the block 428571 repeating.

The converse to this observation is that every recurring decimal represents a rational number p/q. This is a consequence of the fact that the recurring part of a decimal representation is, in fact, an infinite geometric series which will sum to a rational number. For instance, 0.424242… = 42/100 + 42/10000 + … = 42/99 = 14/33.

Every real number has a (possibly infinite) decimal representation, i.e., it can be written as

x = ±(aₙ aₙ₋₁ … a₀ . a₋₁ a₋₂ …), that is, x = ± Σ aᵢ·10ⁱ with digits aᵢ in {0, 1, …, 9}.

Such a sum converges as i decreases, even if there are infinitely many nonzero aᵢ.

Consider those rational numbers which have only the factors 2 and 5 in the denominator, i.e. which can be written as p/(2ᵃ5ᵇ). In this case there is a terminating decimal representation. For instance 1/1 = 1, 1/2 = 0.5, 3/5 = 0.6, 3/25 = 0.12 and 1306/1250 = 1.0448. Such numbers are the only real numbers which don't have a unique decimal representation, as they can also be written as a representation that has a recurring 9, for instance 1 = 0.99999…, 1/2 = 0.499999…, etc.

This leaves the irrational numbers. They also have a unique infinite decimal representation, and can be characterised as the numbers whose decimal representations neither terminate nor recur. So in general the decimal representation is unique, if one excludes representations that end in a recurring 9, and a version of this even holds for irrational-base numeration systems, such as golden mean base representation.

Some psychologists suggest irregularities of numerals in a language may hinder children's counting ability.
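The long-division argument above is easy to mechanize; the sketch below is my own illustration, not part of the article. The digits of p/q must eventually repeat because only q−1 nonzero remainders are possible, so some remainder recurs.

def decimal_expansion(p, q, max_digits=30):
    """Return the integer part and fractional digits of p/q, marking where
    the remainder first repeats (the start of the recurring block)."""
    integer_part, remainder = divmod(p, q)
    digits, seen = [], {}
    while remainder and remainder not in seen and len(digits) < max_digits:
        seen[remainder] = len(digits)      # remember where this remainder occurred
        remainder *= 10
        digit, remainder = divmod(remainder, q)
        digits.append(str(digit))
    repeat_start = seen.get(remainder)     # None if the expansion terminated
    return integer_part, "".join(digits), repeat_start

print(decimal_expansion(3, 7))   # (0, '428571', 0)  -> 3/7 = 0.(428571) recurring
print(decimal_expansion(1, 8))   # (0, '125', None)  -> terminates; denominator is 2^3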
http://www.reference.com/browse/decimal-place
Your plane has four basic forces working on it during flight – Thrust, Gravity, Drag, and Lift. Lift and Thrust are your friends, while Gravity and Drag usually work against you but can help in some situations.

Thrust is produced by your engine and is directed straight back. In a propeller-driven plane, it is created by the propeller pushing air to the rear of the plane. In a jet, it is created by accelerating the exhaust created by burning the fuel and discharging it to the rear. Thrust pushes (or pulls) your plane forward and thereby creates lift for the wing by generating airflow over it. The amount of thrust produced by your engine can be controlled by the manifold pressure (throttle) or by adjusting the RPMs. Aircraft engines produce different amounts of thrust at different altitudes, and some planes' engines are optimized at different altitudes than others.

Gravity (weight) is the pull of the earth on objects. It is the weight of the plane and is always directed towards the center of the earth. Don't confuse this force with centrifugal force, which is what causes you to "pull G's" during a maneuver. Gravity acts on all planes equally at all times.

Drag is the resistance of air against the surfaces of your plane. It will always be directed opposite the direction of travel. Because air is less dense at higher altitude, drag decreases at altitude. The force of drag on your plane increases with speed until it cancels out your plane's thrust. When this happens, you have reached your maximum speed. Some planes are much more aerodynamic than others, meaning they have less drag. This can help them go faster and hold energy more efficiently, but it can sometimes cause problems trying to reduce speed if you need to do so in a hurry.

Lift is generated by the wing as it moves through the air. It will always be directed perpendicular to the direction of travel when looking from the side and perpendicular to the leading edge of the wing when looking at the plane from the front. The faster a wing is moving through the air, the more lift is generated by that wing. The second major factor for producing lift is the "Angle of Attack", which is discussed in more detail below.

In discussions about maneuvering your plane, you will often hear the term "vector" and in particular "lift vector". A vector is a depiction of the direction a force is acting on something. The blue arrows in the picture at the top of this page depict the force vector of each of the forces acting on the plane while it is in flight. For the purpose of almost all discussions of the forces acting on your plane, think of the force vector as acting on the plane's center of gravity. As discussed above, all of the force vectors act relative to a part of the plane, the direction of travel, or to the center of the earth.

The most basic concept in understanding force vectors is that they must cancel each other out in order to maintain constant-speed level flight. In the picture at the top of the page, Thrust is equal to Drag and Lift is equal to Gravity (weight). In the picture below, however, it is not as simple. Imagine this plane flying at extreme low speed but maintaining level flight at constant speed. As the speed lowered, the pilot was forced to put his nose up in order to maintain level flight. This creates more lift by increasing the Angle of Attack of the wing (discussed in more detail below).
In this nose-up attitude, Gravity is still pulling the plane straight down towards the center of the earth, Drag is still working opposite the direction of flight, and Lift is still being generated perpendicular to the relative wind. Thrust, however, is now working in a different direction. As discussed above, the forces acting on a plane must cancel each other out in order for the plane to fly at a constant speed and altitude. Since Thrust is no longer acting exactly opposite to drag, it is useful to break it into components. In this case, Thrust can be broken into a horizontal component (the green line) and a vertical component (the red line). In level, constant-speed flight, the horizontal component of thrust is equal in magnitude to drag, and Lift plus the vertical component of thrust is equal in magnitude to Gravity.

Angle of Attack, Indicated Air Speed, and Lift

Besides giving a vertical component to thrust, lifting the nose of the plane increases the Angle of Attack of the wing, which increases the lift produced by the wing. The Angle of Attack is the angle at which the chord of the wing meets the relative wind. The chord is the line between the leading edge and the trailing edge of the wing. As mentioned above, the relative wind is opposite your direction of flight and equal in force to your indicated air speed. It is important to note the relative wind does not have to be level with the ground. In the pictures below, the wing on the left is in level flight while the wing on the right is climbing at a constant rate and speed. Both wings have the same Angle of Attack.

The two main factors affecting how much lift any given wing produces are indicated airspeed and the Angle of Attack of the wing. Indicated airspeed is important because it takes into account the density of the air as it changes with altitude. The faster the wing moves through the air, the more lift it produces. However, the thinner air at high altitudes produces less lift for a given "true" airspeed than the thicker air at sea level. This difference is accounted for in the indicated airspeed.

As an aircraft's speed decreases, Lift decreases unless the Angle of Attack is increased. The Angle of Attack can be increased until the wing reaches its "critical angle". This is the Angle of Attack at which airflow over the wing is disrupted to the point that lift is no longer produced. At this point, the wing stalls. The critical angle varies with speed, weight of the plane, and wing design. The Angle of Attack is increased by using the elevator to increase the pitch of the aircraft. In order to maintain level flight, you must increase the AoA as speed decreases and vice versa. This is why you must raise your nose as you slow down and lower your nose as you speed up if you want to maintain the same altitude.

Lift Vector, Angle of Attack, and Maneuvering

The lift vector is the force vector you will discuss the most when talking about maneuvers, because almost all maneuvers are done by manipulating your lift vector and increasing the wing's Angle of Attack. To turn your plane, you first roll your wings so the lift vector is pointed towards the direction you want to go, as in the picture to the right. By rolling your wings, you change the direction of your lift vector. You can now divide this vector into a horizontal component and a vertical component as shown in the image above. In order to maintain a level turn, the vertical component must equal the weight of the plane. The horizontal component causes the plane to turn.
Both components can be increased by applying up elevator to increase your wing's Angle of Attack. This increases the lift vector, which in turn increases both the horizontal and vertical components. It also exposes more of the wing to the relative wind, which increases drag and causes you to slow. The higher the angle of attack, the more drag must be overcome, because more of the wing (and the other surfaces of the plane) is exposed to the relative wind.

Besides the four forces always experienced by a plane in flight, a turning plane experiences a virtual force known as Centrifugal Force. Centrifugal force does not actually exist, but objects moving in a circle act as though it does. This is the force that causes you to "Pull G's". While gravity is a constant force acting on your plane at all times, maneuvering your plane often causes you to "pull G's". As noted above, pulling G's is the result of Centrifugal Force. The University of Virginia's "Phun Physics" website describes Centrifugal Force like this: An object traveling in a circle behaves as if it is experiencing an outward force. This force is known as the centrifugal force. It is important to note that the centrifugal force does not actually exist. Nevertheless, it appears quite real to the object being rotated.

In level flight, you are at 1 G. When you pull back on your stick, you pull positive G's. At 6 G's, you black out. This blackout is preceded by a "grayout", which is a gradual narrowing of your field of view. Pushing forward on your stick causes negative G's. At about 1 negative G, you will red out. Holding at 0 G's will give your plane its best acceleration.

A stall occurs when the airflow over the wing is disrupted to the point that the wing no longer produces enough lift for controlled flight. Stalls are most often associated with getting too slow but may actually occur at any speed. Technically, a stall occurs when the wing exceeds its critical angle of attack. This can be caused by raising the nose in an attempt to maintain level flight at low speeds or by an excessively abrupt maneuver at higher speeds. Any stall caused by a maneuver when flying above stall speed is sometimes called an accelerated stall. Most stalls can be recovered from by lowering the nose or increasing throttle. Failure to recover from a stall quickly can result in a spin.

The last flight dynamic we will discuss is compression. Compression occurs when the air moving over your control surfaces "locks" them so they do not respond. This phenomenon happens at different speeds in different planes. Altitude also affects the speed at which compression occurs, with compression setting in sooner at higher altitudes. In some planes in Aces High, the Combat Trim function can make it seem like you are experiencing compression even if you are not. This is because combat trim tries to adjust your trim for level flight at whatever speed you are going. In some planes, most notably the Bf 109 series, the combat trim will be full down at high speeds. While this keeps you in level flight, it also keeps you from being able to pull out of a dive. To counter this, you must trim up to at least center, and preferably further up. Adjusting trim also helps when you are experiencing true compression. If you find yourself in a steep dive and your controls won't respond, reduce throttle, trim up, and use your rudder to skid and hopefully slow down enough that you regain control.
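As a rough numerical sketch of the level-turn balance described above: the vertical part of lift must equal weight, and the horizontal part turns the plane. The relations used below (load factor n = 1/cos(bank) and turn radius r = v²/(g·tan(bank))) are standard textbook formulas rather than anything quoted from this guide, and the speed and bank numbers are made up.

import math

g = 9.81   # m/s^2

def level_turn(speed_ms, bank_deg):
    bank = math.radians(bank_deg)
    load_factor = 1.0 / math.cos(bank)          # the "G's" felt in the turn
    turn_radius = speed_ms**2 / (g * math.tan(bank))
    return load_factor, turn_radius

for bank in (30, 60, 75):
    n, r = level_turn(speed_ms=120.0, bank_deg=bank)
    print(f"bank {bank:2d} deg: load factor {n:4.2f} G, turn radius {r:6.0f} m")

# Steeper bank tilts the lift vector further over, so more total lift (and more G)
# is needed to keep the vertical component equal to weight, while the larger
# horizontal component tightens the turn.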
http://trainers.hitechcreations.com/flightdynamics/flightdynamics.htm