Dataset columns: score (int64, 50 to 2.08k), text (string, 698 to 618k characters), url (string, 16 to 846 characters), year (int64, 13 to 24)
73
Through this post I am going to explain how linear regression works. Let us start with what regression is and how it works. Regression is widely used for prediction and forecasting in the field of machine learning. The focus of regression is on the relationship between a dependent variable and one or more independent variables. The "dependent variable" represents the output or effect, or is tested to see if it is the effect. The "independent variables" represent the inputs or causes, or are tested to see if they are the cause. Regression analysis helps us understand how the value of the dependent variable changes when any one of the independent variables is varied while the other independent variables are kept unchanged. In regression, the dependent variable is estimated as a function of the independent variables; this function is called the regression function. A regression model involves the following variables:
- Independent variables X
- Dependent variable Y
- Unknown parameters θ
In the regression model, Y is a function of (X, θ). There are many techniques for regression analysis, but here we will consider linear regression. In linear regression, the dependent variable (Y) is a linear combination of the independent variables (X). Here the regression function is known as the hypothesis, which is defined as
hθ(X) = f(X, θ)
Suppose we have only one independent variable (x); then our hypothesis takes the form
hθ(x) = θ0 + θ1*x
The goal is to find values of θ (known as coefficients) that minimize the difference between the real and predicted values of the dependent variable (y). If we take all θ to be zero, then our predicted value will be zero. The cost function is used as the measure of fit of a linear regression model; it calculates the average squared error over the m observations. The cost function is denoted by J(θ) and is defined as
J(θ) = (1/m) * Σ (hθ(x_i) − y_i)², summed over the m observations
As we can see from this formula, if the cost is large then the predicted values are far from the real values, and if the cost is small then the predicted values are near the real values. Therefore, we have to minimize the cost to obtain more accurate predictions.
Linear regression in R
R is a language and environment for statistical computing, and it has powerful and comprehensive features for fitting regression models. We will discuss how linear regression works in R. In R, the basic function for fitting a linear model is lm(). The format is
fit <- lm(formula, data)
where formula describes the model (in our case a linear model) and data specifies the data used to fit the model. The resulting object (fit in this case) is a list that contains information about the fitted model. The formula is typically written as
Y ~ x1 + x2 + … + xk
where ~ separates the dependent variable (Y) on the left from the independent variables (x1, x2, …, xk) on the right, and the independent variables are separated by + signs. Let's see a simple regression example (the example is from the book R in Action). We have the dataset women, which contains the height and weight of 15 women aged 30 to 39, and we want to predict weight from height. The R code to fit this model is as below.
> fit <- lm(weight ~ height, data=women)
> summary(fit)
The output of the summary function gives information about the object fit. The output is as below.
Call:
lm(formula = weight ~ height, data = women)
Residuals:
    Min      1Q  Median      3Q     Max
-1.7333 -1.1333 -0.3833  0.7417  3.1167
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -87.51667    5.93694  -14.74 1.71e-09 ***
height        3.45000    0.09114   37.85 1.09e-14 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.525 on 13 degrees of freedom
Multiple R-squared: 0.991, Adjusted R-squared: 0.9903
F-statistic: 1433 on 1 and 13 DF, p-value: 1.091e-14
Let's understand the output. The values of the coefficients (the θs) are -87.51667 and 3.45000, hence the prediction equation for the model is
weight = -87.52 + 3.45*height
In the output, the residual standard error, 1.525, summarizes how far the fitted values typically are from the observed values. Now we will look at the actual weights of the 15 women first, and then at the predicted values. The actual weights are
115 117 120 123 126 129 132 135 139 142 146 150 154 159 164
and the predicted weights are
112.5833 116.0333 119.4833 122.9333 126.3833 129.8333 133.2833 136.7333 140.1833 143.6333 147.0833 150.5333 153.9833 157.4333 160.8833
We can see that the predicted values are close to the actual values. So we have seen what regression is, how it works, and how to run a regression in R. Here I want to warn you about a common misunderstanding concerning correlation and causation. In regression, the dependent variable is correlated with the independent variable: as the value of the independent variable changes, the value of the dependent variable also changes. But this does not mean that the independent variable causes the change in the dependent variable. Causation implies correlation, but the reverse is not true. For example, smoking causes lung cancer, and smoking is also correlated with alcoholism. There is much discussion on this topic, and going into it deeply would take more than one blog post; just keep in mind that what regression captures is the correlation between the dependent and independent variables. In the next blog post, I will discuss a real-world business problem and how to apply regression to it.
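For readers who want to reproduce the example, here is a minimal R sketch that refits the model on the built-in women dataset and extracts the quantities discussed above (coefficients, fitted values, residual standard error); it assumes only base R.

# Fit the simple linear regression from the example (women ships with R)
fit <- lm(weight ~ height, data = women)

# Intercept and slope of  weight = -87.52 + 3.45 * height
coef(fit)

# Predicted weights for the 15 women, as listed above
round(fitted(fit), 4)

# Residual standard error (1.525) and R-squared, taken from the summary object
s <- summary(fit)
s$sigma
s$r.squared

# Average squared error of the fit: mean of the squared residuals
mean(residuals(fit)^2)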
http://www.tatvic.com/blog/linear-regression-using-r/
13
67
Least common multiple In arithmetic and number theory, the least common multiple (also called the lowest common multiple or smallest common multiple) of two integers a and b, usually denoted by LCM(a, b), is the smallest positive integer that is divisible by both a and b. If either a or b is 0, LCM(a, b) is defined to be zero. The LCM of more than two integers is also well-defined: it is the smallest integer that is divisible by each of them. A multiple of a number is the product of that number and an integer. For example, 10 is a multiple of 5 because 5 × 2 = 10, so 10 is divisible by 5 and 2. Because 10 is the smallest positive integer that is divisible by both 5 and 2, it is the least common multiple of 5 and 2. By the same principle, 10 is the least common multiple of −5 and 2 as well. What is the LCM of 4 and 6? Multiples of 4 are: - 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64, 68, 72, 76, ... and the multiples of 6 are: - 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 72, ... Common multiples of 4 and 6 are simply the numbers that are in both lists: - 12, 24, 36, 48, 60, 72, .... So, from this list of the first few common multiples of the numbers 4 and 6, their least common multiple is 12. When adding, subtracting, or comparing vulgar fractions, it is useful to find the least common multiple of the denominators, often called the lowest common denominator, because each of the fractions can be expressed as a fraction with this denominator. For instance, where the denominator 42 was used because it is the least common multiple of 21 and 6. Computing the least common multiple Reduction by the greatest common divisor Many school age children are taught the term greatest common factor (GCF) instead of the greatest common divisor(GCD); therefore, for those familiar with the concept of GCF, substitute GCF when GCD is used below. The following formula reduces the problem of computing the least common multiple to the problem of computing the greatest common divisor (GCD): This formula is also valid when exactly one of a and b is 0, since gcd(a, 0) = |a|. Because gcd(a, b) is a divisor of both a and b, it's more efficient to compute the LCM by dividing before multiplying: This reduces the size of one input for both the division and the multiplication, and reduces the required storage needed for intermediate results (overflow in the a×b computation). Because gcd(a, b) is a divisor of both a and b, the division is guaranteed to yield an integer, so the intermediate result can be stored in an integer. Done this way, the previous example becomes: Finding least common multiples by prime factorization The unique factorization theorem says that every positive integer greater than 1 can be written in only one way as a product of prime numbers. The prime numbers can be considered as the atomic elements which, when combined together, make up a composite number. Here we have the composite number 90 made up of one atom of the prime number 2, two atoms of the prime number 3 and one atom of the prime number 5. This knowledge can be used to find the LCM of a set of numbers. Example: Find the value of lcm(8,9,21). First, factor out each number and express it as a product of prime number powers. The lcm will be the product of multiplying the highest power of each prime number together. The highest power of the three prime numbers 2, 3, and 7 is 23, 32, and 71, respectively. 
Thus, This method is not as efficient as reducing to the greatest common divisor, since there is no known general efficient algorithm for integer factorization, but is useful for illustrating concepts. This method can be illustrated using a Venn diagram as follows. Find the prime factorization of each of the two numbers. Put the prime factors into a Venn diagram with one circle for each of the two numbers, and all factors they share in common in the intersection. To find the LCM, just multiply all of the prime numbers in the diagram. Here is an example: - 48 = 2 × 2 × 2 × 2 × 3, - 180 = 2 × 2 × 3 × 3 × 5, and what they share in common is two "2"s and a "3": - Least common multiple = 2 × 2 × 2 × 2 × 3 × 3 × 5 = 720 - Greatest common divisor = 2 × 2 × 3 = 12 This also works for the greatest common divisor (GCD), except that instead of multiplying all of the numbers in the Venn diagram, one multiplies only the prime factors that are in the intersection. Thus the GCD of 48 and 180 is 2 × 2 × 3 = 12. A simple algorithm This method works as easily for finding the LCM of several integers. Let there be a finite sequence of positive integers X = (x1, x2, ..., xn), n > 1. The algorithm proceeds in steps as follows: on each step m it examines and updates the sequence X(m) = (x1(m), x2(m), ..., xn(m)), X(1) = X. The purpose of the examination is to pick up the least (perhaps, one of many) element of the sequence X(m). Assuming xk0(m) is the selected element, the sequence X(m+1) is defined as - xk(m+1) = xk(m), k ≠ k0 - xk0(m+1) = xk0(m) + xk0. In other words, the least element is increased by the corresponding x whereas the rest of the elements pass from X(m) to X(m+1) unchanged. The algorithm stops when all elements in sequence X(m) are equal. Their common value L is exactly LCM(X). (For a proof and an interactive simulation see reference below, Algorithm for Computing the LCM.) A method using a table This method works for any number of factors. One begins by listing all of the numbers vertically in a table (in this example 4, 7, 12, 21, and 42): The process begins by dividing all of the factors by 2. If any of them divides evenly, write 2 at the top of the table and the result of division by 2 of each factor in the space to the right of each factor and below the 2. If a number does not divide evenly, just rewrite the number again. If 2 does not divide evenly into any of the numbers, try 3. Now, check if 2 divides again: Once 2 no longer divides, divide by 3. If 3 no longer divides, try 5 and 7. Keep going until all of the numbers have been reduced to 1. Now, multiply the numbers on the top and you have the LCM. In this case, it is 2 × 2 × 3 × 7 = 84. You will get to the LCM the quickest if you use prime numbers and start from the lowest prime, 2. Fundamental theorem of arithmetic where the exponents n2, n3, ... are non-negative integers; for example, 84 = 22 31 50 71 110 130 ... Given two integers and their least common multiple and greatest common divisor are given by the formulas In fact, any rational number can be written uniquely as the product of primes if negative exponents are allowed. When this is done, the above formulas remain valid. Using the same examples as above: The positive integers may be partially ordered by divisibility: if a divides b (i.e. if b is an integer multiple of a) write a ≤ b (or equivalently, b ≥ a). (Forget the usual magnitude-based definition of ≤ in this section - it isn't used.) 
Under this ordering, the positive integers become a lattice, with meet given by the gcd and join given by the lcm. The proof is straightforward, if a bit tedious; it amounts to checking that lcm and gcd satisfy the axioms for meet and join. Putting the lcm and gcd into this more general context establishes a duality between them:
- If a formula involving integer variables, gcd, lcm, ≤ and ≥ is true, then the formula obtained by switching gcd with lcm and switching ≥ with ≤ is also true. (Remember that ≤ is defined as "divides".)
Several pairs of dual formulas are special cases of general lattice-theoretic identities; for example, each operation distributes over the other:
- gcd(a, lcm(b, c)) = lcm(gcd(a, b), gcd(a, c))
- lcm(a, gcd(b, c)) = gcd(lcm(a, b), lcm(a, c))
The identity gcd(lcm(a, b), lcm(b, c), lcm(a, c)) = lcm(gcd(a, b), gcd(b, c), gcd(a, c)) is self-dual. Let D be the product of ω(D) distinct prime numbers (i.e. D is squarefree). Then the number of ordered pairs of positive integers whose lcm is D is |{(x, y) : lcm(x, y) = D}| = 3^ω(D), where the absolute bars || denote the cardinality of a set.
The LCM in commutative rings
The least common multiple can be defined generally over commutative rings as follows: Let a and b be elements of a commutative ring R. A common multiple of a and b is an element m of R such that both a and b divide m (i.e. there exist elements x and y of R such that ax = m and by = m). A least common multiple of a and b is a common multiple m that is minimal, in the sense that for any other common multiple n of a and b, m divides n. In general, two elements in a commutative ring can have no least common multiple or more than one. However, any two least common multiples of the same pair of elements are associates. In a unique factorization domain, any two elements have a least common multiple. In a principal ideal domain, the least common multiple of a and b can be characterised as a generator of the intersection of the ideals generated by a and b (the intersection of a collection of ideals is always an ideal). In principal ideal domains, one can even talk about the least common multiple of arbitrary collections of elements: it is a generator of the intersection of the ideals generated by the elements of the collection.
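As a quick illustration of the reduction by the greatest common divisor described above, here is a minimal R sketch (base R has no built-in gcd or lcm, so both are defined here). It divides by the gcd before multiplying, as recommended, and uses Reduce() to handle more than two integers.

# Euclid's algorithm for the greatest common divisor
gcd <- function(a, b) {
  while (b != 0) {
    t <- b
    b <- a %% b
    a <- t
  }
  abs(a)
}

# LCM via reduction by the GCD; dividing before multiplying keeps the
# intermediate result small and avoids overflow in the a*b computation
lcm <- function(a, b) {
  if (a == 0 || b == 0) return(0)
  abs(a %/% gcd(a, b) * b)
}

lcm(4, 6)                         # 12
lcm(21, 6)                        # 42, the lowest common denominator used above
Reduce(lcm, c(8, 9, 21))          # 504
Reduce(lcm, c(4, 7, 12, 21, 42))  # 84, matching the table method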
http://en.wikipedia.org/wiki/Least_common_multiple
13
55
Geometry is heavily tested on the GRE Math section, and a thorough review of geometrical concepts is essential to a high score. Consider the following problem: “If the length of an edge of a cube X is twice the length of an edge of cube Y, what is the ratio of the volume of cube Y to the volume of cube X?” The easiest way to solve this is to pick a number for the initial edge length and plug it into the problem. For instance, let’s say cube X is a 4x4x4 cube. Cube X would have a volume of 64. Cube Y would have to be a 2x2x2 cube, since 2 is half of 4, and it would have a volume of 8. The ratio of the volume of cube Y to the volume of cube X would thus be 8 to 64, or 1/8. However, you really should have known that to begin with. Imagine that cube X had edges that were three times as long as those of Cube Y. Then Cube X would now be a 6x6x6 cube if Cube Y remains a 2x2x2 cube, and the volume ratio would be 8 to 216, or 1/27. Notice something? 8 is 2 ^3, and 27 is 3^3. If the ratio of the sides is 1:4, the ratio of the volumes will be 1:64. If the ratio of the sides is 1:5, the ratio of the volumes will be 1:125. Since these are cubes, you just cube the ratios. 1^3 is 1, and 4^3 is 64; 5^3 is 125. If you know this simple property of the relationship between length and volume, it will take a problem that would take 30 seconds to solve and turn it into a problem that takes 5 seconds to solve. On a timed exam, that could be the difference between getting another, harder question right or wrong. Memorizing these kinds of mathematical facts is something that the GRE test writers expect top scorers to do, and they write the questions so that they can be solved quickly if you know them. It also pays to memorize the squares and cubes of the numbers 1 through 12. So with cubes, you cube the ratio of the sides. What about squares? If you guessed that you square the ratio of the side lengths in order to get the ratio of the areas, you’d be right, as you can see from a quick demonstration. If the original square has side lengths of 1 and the new square has side lengths of 2, the side ratio is 1:2 and the area ratio is 1:4. If the new square has side lengths of 3, then the side ratio is 1:3 and the area ratio is 1:9. If the new square has side lengths of 4, then the side ratio is 1:4 and the area ratio is 1:16, and so on. Sure enough, you just square the original ratio. So now you know about cubes and squares, but what about tesseracts? “Tessawhats?” you say? A tesseract is to a cube as a cube is to a square, just as a cube is to a square what a square is to a line. Still confused? Let me explain it this way: say you draw a line a foot long running from east to west. This line only exists in one dimension: east-west. Then, you decide to square it by adding three more lines: two perpendicular to it running north to south and one parallel to it running east to west. This square exists in two dimensions: east-west and north-south. Now you decide to turn the square into a cube by adding lines in the up-down dimension, so that each edge of the original square is now the edge of another square emanating from it. This cube exists in three spatial dimensions: east-west, north-south, and up-down. Now you take this cube you’ve made and decide to square it…in a fourth spacial dimension. What is this fourth dimension? Who knows. We live in a world in which we experience only three spacial dimensions, so it is impossible for us to imagine what a four dimensional object would look like. 
That hasn’t stopped mathematicians from naming four-dimensional objects, and this hypercube I’ve just described to you is called a tesseract. As you know, even though a cube is a three dimensional object, it is possible to draw a cube on a piece of paper in only two dimensions by using perspective and all those other artistic illusions. Likewise, some have attempted to render tesseracts in three dimensions in order to give some approximation of what they might look like. Having never seen an actual tesseract, though, you might still find these representations confusing. In terms of doing calculations, though, tesseracts are simple as can be. For a square with side lengths of 1 and another square with side lengths of 2, the ratio of side lengths is 1:2^1 (since sides are 1 dimensional), or 1:2, and the ratio of areas will be 1:2^2 (since squares are 2 dimensional) or 1:4. For a cube with side lengths of 1 and another cube with side lengths of 2, the ratio of volumes is 1:2^3 (since cubes are 3 dimensional), or 1:8. So, for a tesseract with side lengths of 1 and another tesseract with side lengths of 2, the ratio of hypervolumes(?) is 1:2^4 (since tesseracts are 4 dimensional), or 1:16. It just follows the pattern. Try not to think about it too much. If you’re having trouble with tesseracts, don’t worry. They’re not on the test. I just wrote about them to mess with your head. Remember, if you ever want extra help getting ready for the GRE, you can always study with experts like me through Test Masters. Until then, happy studying!
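This scaling rule is easy to verify numerically. Below is a tiny R sketch (the function name is my own) that simply raises the side ratio to the number of dimensions, reproducing the 1:4, 1:8 and 1:16 ratios discussed above.

# Ratio of measures for two similar figures with side ratio 1:k in d dimensions is 1:k^d
measure_ratio <- function(k, d) k^d

measure_ratio(2, 2)   # squares:    1:4
measure_ratio(2, 3)   # cubes:      1:8
measure_ratio(2, 4)   # tesseracts: 1:16
measure_ratio(3, 3)   # cubes with sides three times as long: 1:27

# Direct check with the cubes from the problem: a 2x2x2 and a 4x4x4 cube
(2^3) / (4^3)         # 0.125 = 1/8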
http://www.newgre.org/preparation/sample-math-problem-hip-square-or-cube/
13
59
In the preceding posts, I mentioned infinite products as approximations for π. These may be seen geometrically as exhaustion methods, where the area of a polygon approaches the circular area alternately from above, from below, from above, from below, etc. There are also integral representations of pi. In such integral representations, π appears in the quantitative value of the integral of a mathematical function. Visually, this is often represented as the area delimited by the bounds of the function. However, the relation with the circle is lost, when viewed under Cartesian coordinates. For example, the graph of the simplest instance of the Cauchy-Lorentz distribution, f(x)=1/(1+x²), "has nothing at all to do with circles or geometry in any obvious way" as quoted from last Pi-day Sunday function from Matt Springer's Built on Facts blog. In order to view the role of the circle in integral representations of π, we need to switch to alternative ways to visualize math functions. As an example, let's take the constant function y=f(x)=2. The function f maps an element x from a domain to the element y of the target. In this case, for every x, the target y has the constant value 2. With Cartesian coordinates, we are used to represent this function as a horizontal straight line, like in Figure 1a (click on the figure to view it enlarged). If however we write it as R=f(r)=2, where the function f maps any circle of radius r of the domain to a target circle of radius R=2, the same function can be viewed as a circle of constant radius, like in Figure 1b. So the same function f can be equally well viewed as a straight line or as a circle (x, y, r or R are only dummy variables). Now if we take another example, the linear function, y=f(x)=2x, we are often used to view it in Cartesian coordinates as a straight line with slope 2, like in Figure 1c. In the circular representation R=f(r)=2r, this works however differently. Because we are relating circles of the input domain to other circles of the target, for each circle of radius r, we need to draw the target circle of radius 2r. A single line won't do. For one value of r, we need to draw two circles. If we use blue circles for elements of the input domain and red circles for elements of the target, we could visualize it for successive values of r as an animation like in Figure 1d. In that way, we view the progression of the target circle as the input circle becomes larger. Unlike the Cartesian representation which shows the progression of a function in a static graph, this circular representation needs a dynamic or recurrent process to get grip of the progression of the function. Therefore it isn't very adapted for illustrations in print media. On the other hand, it has the advantage of keeping track of the geometrical form of the circle. And that's exactly what we need in order to perceive the circular nature when π shows up in mathematical functions. The relation of the integral of the Cauchy-Lorentz distribution f(r)=1/(1+r²) with the circle can then be seen with the help of the geometric counterparts of arithmetic operations like addition, squaring and dividing. A convenient procedure is illustrated in the successive steps of Figure 2. Step 1. Draw the input circle of radius r and the reference circle of radius unity. Step 2. Determine r². Step 5. Find the target ring related to the input ring ranging over [r, r + dr]. This yields a ring of width dr/(1+r²). 
The location of this ring depends on the relative progression rates of r and r² (I've not yet found a straightforward explanation for this determination). Step 6. Integrate dr/(1+r²) for r running over all space. As r becomes larger and larger, the summed area tends towards the area of a circle of radius 1. For the positive half plane, this corresponds to the π/2 value found analytically. The tricky step seems to be relating the progression between r and 1/(1+r²) in steps 5 and 6. One can, for example, verify the value of the integral at intermediate steps: for the integral from r=0 to 1, the value in the positive half plane must be π/4, which can be checked on the figure. In order to gain more insight into π, it could be of interest to develop skills for this circular representation.
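The two checkpoint values mentioned above (π/2 over the whole positive half line and π/4 from r = 0 to 1) are easy to confirm numerically. The following minimal R sketch simply evaluates the Cauchy-Lorentz integral with R's built-in integrate() function.

# f(r) = 1/(1 + r^2), the simplest Cauchy-Lorentz distribution
f <- function(r) 1 / (1 + r^2)

# Integral over the whole positive half line: tends to pi/2
integrate(f, lower = 0, upper = Inf)$value   # 1.570796...
pi / 2                                       # 1.570796...

# Intermediate checkpoint, r from 0 to 1: equals pi/4
integrate(f, lower = 0, upper = 1)$value     # 0.785398...
pi / 4                                       # 0.785398...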
http://commonsensequantum.blogspot.com/2010/04/keeping-track-of-circle-for-integral.html
13
707
|This is the print version of Geometry You won't see this message or any elements not part of the book's content when you print or preview this page. Part I- Euclidean Geometry Chapter 1: Points, Lines, Line Segments and Rays Points and lines are two of the most fundamental concepts in Geometry, but they are also the most difficult to define. We can describe intuitively their characteristics, but there is no set definition for them: they, along with the plane, are the undefined terms of geometry. All other geometric definitions and concepts are built on the undefined ideas of the point, line and plane. Nevertheless, we shall try to define them. A point is an exact location in space. Points are dimensionless. That is, a point has no width, length, or height. We locate points relative to some arbitrary standard point, often called the "origin". Many physical objects suggest the idea of a point. Examples include the tip of a pencil, the corner of a cube, or a dot on a sheet of paper. As for a line segment, we specify a line with two points. Starting with the corresponding line segment, we find other line segments that share at least two points with the original line segment. In this way we extend the original line segment indefinitely. The set of all possible line segments findable in this way constitutes a line. A line extends indefinitely in a single dimension. Its length, having no limit, is infinite. Like the line segments that constitute it, it has no width or height. You may specify a line by specifying any two points within the line. For any two points, only one line passes through both points. On the other hand, an unlimited number of lines pass through any single point. We construct a ray similarly to the way we constructed a line, but we extend the line segment beyond only one of the original two points. A ray extends indefinitely in one direction, but ends at a single point in the other direction. That point is called the end-point of the ray. Note that a line segment has two end-points, a ray one, and a line none. A point exists in zero dimensions. A line exists in one dimension, and we specify a line with two points. A plane exists in two dimensions. We specify a plane with three points. Any two of the points specify a line. All possible lines that pass through the third point and any point in the line make up a plane. In more obvious language, a plane is a flat surface that extends indefinitely in its two dimensions, length and width. A plane has no height. Space exists in three dimensions. Space is made up of all possible planes, lines, and points. It extends indefinitely in all directions. Mathematics can extend space beyond the three dimensions of length, width, and height. We then refer to "normal" space as 3-dimensional space. A 4-dimensional space consists of an infinite number of 3-dimensional spaces. Etc. [How we label and reference points, lines, and planes.] Chapter 2: Angles An angle is the union of two rays with a common endpoint, called the vertex. The angles formed by vertical and horizontal lines are called right angles; lines, segments, or rays that intersect in right angles are said to be perpendicular. Angles, for our purposes, can be measured in either degrees (from 0 to 360) or radians (from 0 to ). Angles length can be determined by measuring along the arc they map out on a circle. In radians we consider the length of the arc of the circle mapped out by the angle. Since the circumference of a circle is , a right angle is radians. 
In degrees, the circle is 360 degrees, and so a right angle would be 90 degrees. Angles are named in several ways. - By naming the vertex of the angle (only if there is only one angle formed at that vertex; the name must be non-ambiguous) - By naming a point on each side of the angle with the vertex in between. - By placing a small number on the interior of the angle near the vertex. Classification of Angles by Degree Measure - an angle is said to be acute if it measures between 0 and 90 degrees, exclusive. - an angle is said to be right if it measures 90 degrees. - notice the small box placed in the corner of a right angle, unless the box is present it is not assumed the angle is 90 degrees. - all right angles are congruent - an angle is said to be obtuse if it measures between 90 and 180 degrees, exclusive. Special Pairs of Angles - adjacent angles - adjacent angles are angles with a common vertex and a common side. - adjacent angles have no interior points in common. - complementary angles - complementary angles are two angles whose sum is 90 degrees. - complementary angles may or may not be adjacent. - if two complementary angles are adjacent, then their exterior sides are perpendicular. - supplementary angles - two angles are said to be supplementary if their sum is 180 degrees. - supplementary angles need not be adjacent. - if supplementary angles are adjacent, then the sides they do not share form a line. - linear pair - if a pair of angles is both adjacent and supplementary, they are said to form a linear pair. - vertical angles - angles with a common vertex whose sides form opposite rays are called vertical angles. - vertical angles are congruent. Side-Side-Side (SSS) (Postulate 12) If three sides of one triangle are congruent to three sides of a second triangle, then the two triangles are congruent. Side-Angle-Side (SAS) (Postulate 13) If two sides and the included angle of a second triangle, then the two triangles are congruent. If two angles and the included side of one triangle are congruent to two angles and the included side of a second triangle, then two triangles are congruent. If two angles and a non-included side of one triangle are congruent to two angles and the corresponding non-included side of a second triangle, then the two triangles are congruent. NO - Angle-Side-Side (ASS) The "ASS" postulate does not work, unlike the other ones. A way that students can remember this is that "ass" is not a nice word, so we don't use it in geometry (since it does not work). There are two approaches to furthering knowledge: reasoning from known ideas and synthesizing observations. In inductive reasoning you observe the world, and attempt to explain based on your observations. You start with no prior assumptions. Deductive reasoning consists of logical assertions from known facts. What you need to know Before one can start to understand logic, and thereby begin to prove geometric theorems, one must first know a few vocabulary words and symbols. Conditional: a conditional is something which states that one statement implies another. A conditional contains two parts: the condition and the conclusion, where the former implies the latter. A conditional is always in the form "If statement 1, then statement 2." In most mathematical notation, a conditional is often written in the form p ⇒ q, which is read as "If p, then q" where p and q are statements. Converse: the converse of a logical statement is when the conclusion becomes the condition and vice versa; i.e., p ⇒ q becomes q ⇒ p. 
For example, the converse of the statement "If someone is a woman, then they are a human" would be "If someone is a human, then they are a woman." The converse of a conditional does not necessarily have the same truth value as the original, though it sometimes does, as will become apparent later. AND: And is a logical operator which is true only when both statements are true. For example, the statement "Diamond is the hardest substance known to man AND a diamond is a metal" is false. While the former statement is true, the latter is not. However, the statement "Diamond is the hardest substance known to man AND diamonds are made of carbon" would be true, because both parts are true. OR: If two statements are joined together by "or," then the truth of the "or" statement is dependant upon whether one or both of the statements from which it is composed is true. For example, the statement "Tuesday is the day after Monday OR Thursday is the day after Saturday" would have a truth value of "true," because even though the latter statement is false, the former is true. NOT: If a statement is preceded by "NOT," then it is evaluating the opposite truth value of that statement. The symbol for "NOT" is For example, if the statement p is "Elvis is dead," then ¬p would be "Elvis is not dead." The concept of "NOT" can cause some confusion when it relates to statements which contain the word "all." For example, if r is "¬". "All men have hair," then ¬r would be "All men do not have hair" or "No men have hair." Do not confuse this with "Not all men have hair" or "Some men have hair." The "NOT" should apply to the verb in the statement: in this case, "have." ¬p can also be written as NOT p or ~p. NOT p may also be referred to as the "negation of p." Inverse: The inverse of a conditional says that the negation of the condition implies the negation of the conclusion. For example, the inverse of p ⇒ q is ¬p ⇒ ¬q. Like a converse, an inverse does not necessarily have the same truth value as the original conditional. Biconditional: A biconditional is conditional where the condition and the conclusion imply one another. A biconditional starts with the words "if and only if." For example, "If and only if p, then q" means both that p implies q and that q implies p. Premise: A premise is a statement whose truth value is known initially. For example, if one were to say "If today is Thursday, then the cafeteria will serve burritos," and one knew that what day it was, then the premise would be "Today is Thursday" or "Today is not Thursday." ⇒: The symbol which denotes a conditional. p ⇒ q is read as "if p, then q." Iff: Iff is a shortened form of "if and only if." It is read as "if and only if." ⇔: The symbol which denotes a biconditonal. p ⇔ q is read as "If and only if p, then q." ∴: The symbol for "therefore." p ∴ q means that one knows that p is true (p is true is the premise), and has logically concluded that q must also be true. ∧: The symbol for "and." ∨: The symbol for "or." There are a few forms of deductive logic. One of the most common deductive logical arguments is modus ponens, which states that: - p ⇒ q - p ∴ q - (If p, then q) - (p, therefore q) An example of modus ponens: - If I stub my toe, then I will be in pain. - I stub my toe. - Therefore, I am in pain. Another form of deductive logic is modus tollens, which states the following. - p ⇒ q - ¬q ∴ ¬p - (If p, then q) - (not q, therefore not p) Modus tollens is just as valid a form of logic as modus ponens. 
The following is an example which uses modus tollens. - If today is Thursday, then the cafeteria will be serving burritos. - The cafeteria is not serving burritos, therefore today is not Thursday. Another form of deductive logic is known as the If-Then Transitive Property. Simply put, it means that there can be chains of logic where one thing implies another thing. The If-Then Transitive Property states: - p ⇒ q - (q ⇒ r) ∴ (p ⇒ r) - (If p, then q) - ((If q, then r), therefore (if p, then r)) For example, consider the following chain of if-then statements. - If today is Thursday, then the cafeteria will be serving burritos. - If the cafeteria will be serving burritos, then I will be happy. - Therefore, if today is Thursday, then I will be happy. Inductive reasoning is a logical argument which does not definitely prove a statement, but rather assumes it. Inductive reasoning is used often in life. Polling is an example of the use of inductive reasoning. If one were to poll one thousand people, and 300 of those people selected choice A, then one would infer that 30% of any population might also select choice A. This would be using inductive logic, because it does not definitively prove that 30% of any population would select choice A. Because of this factor of uncertainty, inductive reasoning should be avoided when possible when attempting to prove geometric properties. Truth tables are a way that one can display all the possibilities that a logical system may have when given certain premises. The following is a truth table with two premises (p and q), which shows the truth value of some basic logical statements. (NOTE: T = true; F = false) |p||q||¬p||¬q||p ⇒ q||p ⇔ q||p ∧ q||p ∨ q| Unlike science which has theories, mathematics has a definite notion of proof. Mathematics applies deductive reasoning to create a series of logical statements which show that one thing implies another. Consider a triangle, which we define as a shape with three vertices joined by three lines. We know that we can arbitrarily pick some point on a page, and make that into a vertex. We repeat that process and pick a second point. Using a ruler, we can connect these two points. We now make a third point, and using the ruler connect it to each of the other points. We have constructed a triangle. In mathematics we formalize this process into axioms, and carefully lay out the sequence of statements to show what follows. All definitions are clearly defined. In modern mathematics, we are always working within some system where various axioms hold. The most common form of explicit proof in highschool geometry is a two column proof consists of five parts: the given, the proposition, the statement column, the reason column, and the diagram (if one is given). Example of a Two-Column Proof Now, suppose a problem tells you to solve for , showing all steps made to get to the answer. A proof shows how this is done: Prove: x = 1 |Property of subtraction| We use "Given" as the first reason, because it is "given" to us in the problem. Written proofs (also known as informal proofs, paragraph proofs, or 'plans for proof') are written in paragraph form. Other than this formatting difference, they are similar to two-column proofs. Sometimes it is helpful to start with a written proof, before formalizing the proof in two-column form. If you're having trouble putting your proof into two column form, try "talking it out" in a written proof first. 
Example of a Written Proof We are given that x + 1 = 2, so if we subtract one from each side of the equation (x + 1 - 1 = 2 - 1), then we can see that x = 1 by the definition of subtraction. A flowchart proof or more simply a flow proof is a graphical representation of a two-column proof. Each set of statement and reasons are recorded in a box and then arrows are drawn from one step to another. This method shows how different ideas come together to formulate the proof. Postulates in geometry are very similar to axioms, self-evident truths, and beliefs in logic, political philosophy and personal decision-making. The five postulates of Euclidean Geometry define the basic rules governing the creation and extension of geometric figures with ruler and compass. Together with the five axioms (or "common notions") and twenty-three definitions at the beginning of Euclid's Elements, they form the basis for the extensive proofs given in this masterful compilation of ancient Greek geometric knowledge. They are as follows: - A straight line may be drawn from any given point to any other. - A straight line may be extended to any finite length. - A circle may be described with any given point as its center and any distance as its radius. - All right angles are congruent. - If a straight line intersects two other straight lines, and so makes the two interior angles on one side of it together less than two right angles, then the other straight lines will meet at a point if extended far enough on the side on which the angles are less than two right angles. Postulate 5, the so-called Parallel Postulate was the source of much annoyance, probably even to Euclid, for being so relatively prolix. Mathematicians have a peculiar sense of aesthetics that values simplicity arising from simplicity, with the long complicated proofs, equations and calculations needed for rigorous certainty done behind the scenes, and to have such a long sentence amidst such other straightforward, intuitive statements seems awkward. As a result, many mathematicians over the centuries have tried to prove the results of the Elements without using the Parallel Postulate, but to no avail. However, in the past two centuries, assorted non-Euclidean geometries have been derived based on using the first four Euclidean postulates together with various negations of the fifth. Chapter 7. Vertical Angles Vertical angles are a pair of angles with a common vertex whose sides form opposite rays. An extensively useful fact about vertical angles is that they are congruent. Aside from saying that any pair of vertical angles "obviously" have the same measure by inspection, we can prove this fact with some simple algebra and an observation about supplementary angles. Let two lines intersect at a point, and angles A1 and A2 be a pair of vertical angles thus formed. At the point of intersection, two other angles are also formed, and we'll call either one of them B1 without loss of generality. Since B1 and A1 are supplementary, we can say that the measure of B1 plus the measure of A1 is 180. Similarly, the measure of B1 plus the measure of A2 is 180. Thus the measure of A1 plus the measure of B1 equals the measure of A2 plus the measure of B1, by substitution. Then by subracting the measure of B1 from each side of this equality, we have that the measure of A1 equals the measure of A2. Parallel Lines in a Plane Two coplanar lines are said to be parallel if they never intersect. 
For any given point on the first line, its distance to the second line is equal to the distance between any other point on the first line and the second line. The common notation for parallel lines is "||" (a double pipe); it is not unusual to see "//" as well. If line m is parallel to line n, we write "m || n". Lines in a plane either coincide, intersect in a point, or are parallel. Controversies surrounding the Parallel Postulate lead to the development of non-Euclidean geometries. Parallel Lines and Special Pairs of Angles When two (or more) parallel lines are cut by a transversal, the following angle relationships hold: - corresponding angles are congruent - alternate exterior angles are congruent - same-side interior angles are supplementary Theorems Involving Parallel Lines - If a line in a plane is perpendicular to one of two parallel lines, it is perpendicular to the other line as well. - If a line in a plane is parallel to one of two parallel lines, it is parallel to both parallel lines. - If three or more parallel lines are intersected by two or more transversals, then they divide the transversals proportionally. Congruent shapes are the same size with corresponding lengths and angles equal. In other words, they are exactly the same size and shape. They will fit on top of each other perfectly. Therefore if you know the size and shape of one you know the size and shape of the others. For example: Each of the above shapes is congruent to each other. The only difference is in their orientation, or the way they are rotated. If you traced them onto paper and cut them out, you could see that they fit over each other exactly. Having done this, right away we can see that, though the angles correspond in size and position, the sides do not. Therefore it is proved the triangles are not congruent. Similar shapes are like congruent shapes in that they must be the same shape, but they don't have to be the same size. Their corresponding angles are congruent and their corresponding sides are in proportion. Methods of Determining Congruence Two triangles are congruent if: - each pair of corresponding sides is congruent - two pairs of corresponding angles are congruent and a pair of corresponding sides are congruent - two pairs of corresponding sides and the angles included between them are congruent Tips for Proofs Commonly used prerequisite knowledge in determining the congruence of two triangles includes: - by the reflexive property, a segment is congruent to itself - vertical angles are congruent - when parallel lines are cut by a transversal corresponding angles are congruent - when parallel lines are cut by a transversal alternate interior angles are congruent - midpoints and bisectors divide segments and angles into two congruent parts For two triangles to be similar, all 3 corresponding angles must be congruent, and all three sides must be proportionally equal. Two triangles are similar if... - Two angles of each triangle are congruent. - The acute angle of a right triangle is congruent to the acute angle of another right triangle. - The two triangles are congruent. Note here that congruency implies similarity. A quadrilateral is a polygon that has four sides. Special Types of Quadrilaterals - A parallelogram is a quadrilateral having two pairs of parallel sides. - A square, a rhombus, and a rectangle are all examples of parallelograms. - A rhombus is a quadrilateral of which all four sides are the same length. - A rectangle is a parallelogram of which all four angles are 90 degrees. 
- A square is a quadrilateral of which all four sides are of the same length, and all four angles are 90 degrees. - A square is a rectangle, a rhombus, and a parallelogram. - A trapezoid is a quadrilateral which has two parallel sides (U.S.) - U.S. usage: A trapezium is a quadrilateral which has no parallel sides. - U.K usage: A trapezium is a quadrilateral with two parallel sides (same as US trapezoid definition). - A kite is an quadrilateral with two pairs of congruent adjacent sides. One of the most important properties used in proofs is that the sum of the angles of the quadrilateral is always 360 degrees. This can easily be proven too: If you draw a random quadrilateral, and one of its diagonals, you'll split it up into two triangles. Given that the sum of the angles of a triangle is 180 degrees, you can sum them up, and it'll give 360 degrees. A parallelogram is a geometric figure with two pairs of parallel sides. Parallelograms are a special type of quadrilateral. The opposite sides are equal in length and the opposite angles are also equal. The area is equal to the product of any side and the distance between that side and the line containing the opposite side. Properties of Parallelograms The following properties are common to all parallelograms (parallelogram, rhombus, rectangle, square) - both pairs of opposite sides are parallel - both pairs of opposite sides are congruent - both pairs of opposite angles are congruent - the diagonals bisect each other - A rhombus is a parallelogram with four congruent sides. - The diagonals of a rhombus are perpendicular. - Each diagonal of a rhombus bisects two angles the rhombus. - A rhombus may or may not be a square. - A square is a parallelogram with four right angles and four congruent sides. - A square is both a rectangle and a rhombus and inherits all of their properties. A Trapezoid (American English) or Trapezium (British English) is a quadrilateral that has two parallel sides and two non parallel sides. Some properties of trapezoids: - The interior angles sum to 360° as in any quadrilateral. - The parallel sides are unequal. - Each of the parallel sides is called a base (b) of the trapezoid. The two angles that join one base are called 'base angles'. - If the two non-parallel sides are equal, the trapezoid is called an isosceles trapezoid. - In an isosceles trapezoid, each pair of base angles are equal. - If one pair of base angles of a trapezoid are equal, the trapezoid is isosceles. - A line segment connecting the midpoints of the non-parallel sides is called the median (m) of the trapeziod. - The median of a trapezoid is equal to one half the sum of the bases (called b1 and b2). - A line segment perpendicular to the bases is called an altitude (h) of the trapezoid. The area (A) of a trapezoid is equal to the product of an altitude and the median. Recall though that the median is half of the sum of the bases. Substituting for m, we get: A circle is a set of all points in a plane that are equidistant from a single point; that single point is called the centre of the circle and the distance between any point on circle and the centre is called radius of the circle. a chord is an internal segment of a circle that has both of its endpoints on the circumference of the circle. - the diameter of a circle is the largest chord possible a secant of a circle is any line that intersects a circle in two places. 
- a secant contains any chord of the circle a tangent to a circle is a line that intersects a circle in exactly one point, called the point of tangency. - at the point of tangency the tangent line and the radius of the circle are perpendicular Chapter 16. Circles/Arcs An arc is a segment of the perimeter of a given circle. The measure of an arc is measured as an angle, this could be in radians or degrees (more on radians later). The exact measure of the arc is determined by the measure of the angle formed when a line is drawn from the center of the circle to each end point. As an example the circle below has an arc cut out of it with a measure of 30 degrees. As I mentioned before an arc can be measured in degrees or radians. A radian is merely a different method for measuring an angle. If we take a unit circle (which has a radius of 1 unit), then if we take an arc with the length equal to 1 unit, and draw line from each endpoint to the center of the circle the angle formed is equal to 1 radian. this concept is displayed below, in this circle an arc has been cut off by an angle of 1 radian, and therefore the length of the arc is equal to because the radius is 1. From this definition we can say that on the unit circle a single radian is equal to radians because the perimeter of a unit circle is equal to . Another useful property of this definition that will be extremely useful to anyone who studies arcs is that the length of an arc is equal to its measure in radians multiplied by the radius of the circle. Converting to and from radians is a fairly simple process. 2 facts are required to do so, first a circle is equal to 360 degrees, and it is also equal to . using these 2 facts we can form the following formula: , thus 1 degree is equal to radians. From here we can simply multiply by the number of degrees to convert to radians. for example if we have 20 degrees and want to convert to radians then we proceed as follows: The same sort of argument can be used to show the formula for getting 1 radian. , thus 1 radian is equal to A tangent is a line in the same plane as a given circle that meets that circle in exactly one point. That point is called the point of tangency. A tangent cannot pass through a circle; if it does, it is classified as a chord. A secant is a line containing a chord. A common tangent is a line tangent to two circles in the same plane. If the tangent does not intersect the line containing and connecting the centers of the circles, it is an external tangent. If it does, it is an internal tangent. Two circles are tangent to one another if in a plane they intersect the same tangent in the same point. Sector of a circle A sector of a circle can be thought of as a pie piece. In the picture below, a sector of the circle is shaded yellow. To find the area of a sector, find the area of the whole circle and then multiply by the angle of the sector over 360 degrees. A more intuitive approach can be used when the sector is half the circle. In this case the area of the sector would just be the area of the circle divided by 2. - See Angle Addition Property of Equality For any real numbers a, b, and c, if a = b, then a + c = b + c. A figure is an angle if and only if it is composed of two rays which share a common endpoint. Each of these rays (or segments, as the case may be) is known as a side of the angle (For example, in the illustration at right), and the common point is known as the angle's vertex (point B in the illustration). Angles are measured by the difference of their slopes. 
The units for angle measure are radians and degrees. Angles may be classified by their degree measure. - Acute Angle: an angle is an acute angle if and only if it has a measure of less than 90° - Right Angle: an angle is an right angle if and only if it has a measure of exactly 90° - Obtuse Angle: an angle is an obtuse angle if and only if it has a measure of greater than 90° Angle Addition Postulate If P is in the interior of an angle , then Center of a circle Point P is the center of circle C if and only if all points in circle C are equidistant from point P and point P is contained in the same plane as circle C. A collection of points is said to be a circle with a center at point P and a radius of some distance r if and only if it is the collection of all points which are a distance of r away from point P and are contained by a plane which contain point P. A polygon is said to be concave if and only if it contains at least one interior angle with a measure greater than 180° exclusively and less than 360° exclusively. Two angles formed by a transversal intersecting with two lines are corresponding angles if and only if one is on the inside of the two lines, the other is on the outside of the two lines, and both are on the same side of the transversal. Corresponding Angles Postulate If two lines cut by a transversal are parallel, then their corresponding angles are congruent. Corresponding Parts of Congruent Triangles are Congruent Postulate The Corresponding Parts of Congruent Triangles are Congruent Postulate (CPCTC) states: - If ∆ABC ≅ ∆XYZ, then all parts of ∆ABC are congruent to their corresponding parts in ∆XYZ. For example: - ∠ABC ≅ ∠XYZ - ∠BCA ≅ ∠YZX - ∠CAB ≅ ∠ZXY CPCTC also applies to all other parts of the triangles, such as a triangle's altitude, median, circumcenter, et al. A line segment is the diameter of a circle if and only if it is a chord of the circle which contains the circle's center. - See Circle and if they cross they are congruent A collection of points is a line if and only if the collection of points is perfectly straight (aligned), is infinitely long, and is infinitely thin. Between any two points on a line, there exists an infinite number of points which are also contained by the line. Lines are usually written by two points in the line, such as line AB, or A collection of points is a line segment if and only if it is perfectly straight, is infinitely thin, and has a finite length. A line segment is measured by the shortest distance between the two extreme points on the line segment, known as endpoints. Between any two points on a line segment, there exists an infinite number of points which are also contained by the line segment. Two lines or line segments are said to be parallel if and only if the lines are contained by the same plane and have no points in common if continued infinitely. Two planes are said to be parallel if and only if the planes have no points in common when continued infinitely. Two lines that intersect at a 90° angle. Given a line, and a point P not in line , then there is one and only one line that goes through point P perpendicular to An object is a plane if and only if it is a two-dimensional object which has no thickness or curvature and continues infinitely. A plane can be defined by three points. A plane may be considered to be analogous to a piece of paper. A point is a zero-dimensional mathematical object representing a location in one or more dimensions. A point has no size; it has only location. 
A polygon is a closed plane figure composed of at least 3 straight lines. Each side has to intersect another side at their respective endpoints, and that the lines intersecting are not collinear. The radius of a circle is the distance between any given point on the circle and the circle's center. - See Circle A ray is a straight collection of points which continues infinitely in one direction. The point at which the ray stops is known as the ray's endpoint. Between any two points on a ray, there exists an infinite number of points which are also contained by the ray. The points on a line can be matched one to one with the real numbers. The real number that corresponds to a point is the point's coordinate. The distance between two points is the absolute value of the difference between the two coordinates of the two points. Geometry/Synthetic versus analytic geometry - Two and Three-Dimensional Geometry and Other Geometric Figures Perimeter and Arclength Perimeter of Circle The circles perimeter can be calculated using the following formula where and the radius of the circle. Perimeter of Polygons The perimeter of a polygon with number of sides abbreviated can be caculated using the following formula Arclength of Circles The arclength of a given circle with radius can be calculated using where is the angle given in radians. Arclength of Curves If a curve in have a parameter form for , then the arclength can be calculated using the following fomula Derivation of formula can be found using differential geometry on infinitely small triangles. Area of Circles The method for finding the area of a circle is Where π is a constant roughly equal to 3.14159265358978 and r is the radius of the circle; a line drawn from any point on the circle to its center. Area of Triangles Three ways of calculating the area inside of a triangle are mentioned here. If one of the sides of the triangle is chosen as a base, then a height for the triangle and that particular base can be defined. The height is a line segment perpendicular to the base or the line formed by extending the base and the endpoints of the height are the corner point not on the base and a point on the base or line extending the base. Let B = the length of the side chosen as the base. Let h = the distance between the endpoints of the height segment which is perpendicular to the base. Then the area of the triangle is given by: This method of calculating the area is good if the value of a base and its corresponding height in the triangle is easily determined. This is particularly true if the triangle is a right triangle, and the lengths of the two sides sharing the 90o angle can be determined. - , also known as Heron's Formula If the lengths of all three sides of a triangle are known, Hero's formula may be used to calculate the area of the triangle. First, the semiperimeter, s, must be calculated by dividing the sum of the lengths of all three sides by 2. For a triangle having side lengths a, b, and c : Then the triangle's area is given by: If the triangle is needle shaped, that is, one of the sides is very much shorter than the other two then it can be difficult to compute the area because the precision needed is greater than that available in the calculator or computer that is used. In otherwords Heron's formula is numerically unstable. Another formula that is much more stable is: where , , and have been sorted so that . In a triangle with sides length a, b, and c and angles A, B, and C opposite them, This formula is true because in the formula . 
It is useful because you don't need to find the height from an angle in a separate step, and it is also used to prove the law of sines (divide all terms in the above equation by a·b·c and you'll get it directly!).

Area of Rectangles
The area calculation of a rectangle is simple and easy to understand. One of the sides is chosen as the base, with a length b. An adjacent side is then the height, with a length h, because in a rectangle the adjacent sides are perpendicular to the side chosen as the base. The rectangle's area is given by:
A = b × h
Sometimes, the baselength may be referred to as the length of the rectangle, l, and the height as the width of the rectangle, w. Then the area formula becomes:
A = l × w
Regardless of the labels used for the sides, it is apparent that the two formulas are equivalent. Of course, the area of a square with sides having length s would be:
A = s²

Area of Parallelograms
The area of a parallelogram can be determined using the equation for the area of a rectangle. The formula is:
A = b × h
A is the area of the parallelogram, b is the base, and h is the height. The height is a perpendicular line segment that connects one of the vertices to its opposite side (the base).

Area of Rhombus
Remember that in a rhombus all sides are equal in length. If d1 and d2 represent the diagonals, then:
A = ½ × d1 × d2

Area of Trapezoids
The area of a trapezoid is derived from taking the arithmetic mean of its two parallel sides to form a rectangle of equal area:
A = ½ × (a + b) × h
where a and b are the lengths of the two parallel bases and h is the perpendicular distance between them.

Area of Kites
The area of a kite is based on splitting the kite into four pieces by halving it along each diagonal and using these pieces to form a rectangle of equal area:
A = ½ × a × b
where a and b are the diagonals of the kite.
Alternatively, the kite may be divided into two halves, each of which is a triangle, by the longer of its diagonals, a. The area of each triangle is thus
½ × a × (b/2) = ¼ab
where b is the other (shorter) diagonal of the kite. And the total area of the kite (which is composed of two identical such triangles) is
2 × ¼ab = ½ab
which is the same as the formula above.

Areas of other Quadrilaterals
The areas of other quadrilaterals are slightly more complex to calculate, but can still be found if the quadrilateral is well-defined. For example, a quadrilateral can be divided into two triangles, or some combination of triangles and rectangles. The areas of the constituent polygons can be found and added up with arithmetic.

Volume is like area expanded out into 3 dimensions. Area deals with only 2 dimensions; for volume we have to consider another dimension. Area can be thought of as how much space some drawing takes up on a flat piece of paper. Volume can be thought of as how much space an object takes up.

Common equations for volume:
- A cube: V = s³, where s = length of a side
- A rectangular prism: V = l × w × h, where l = length, w = width, h = height
- A cylinder (circular prism): V = πr²h, where r = radius of circular face, h = height
- Any prism that has a constant cross sectional area along the height: V = A × h, where A = area of the base, h = height
- A sphere: V = (4/3)πr³, where r = radius of the sphere; this is the integral of the surface area of a sphere
- An ellipsoid: V = (4/3)πabc, where a, b, c = semi-axes of the ellipsoid
- A pyramid: V = (1/3)A × h, where A = area of the base, h = height of the pyramid
- A cone (circular-based pyramid): V = (1/3)πr²h, where r = radius of circle at base, h = distance from base to tip
(The units of volume depend on the units of length - if the lengths are in meters, the volume will be in cubic meters, etc.)
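Here is a short Python sketch of the volume formulas in the list above (the function names are my own); it mirrors the list entry by entry so the formulas can be checked numerically.

```python
import math

def volume_cube(s):               return s ** 3
def volume_rect_prism(l, w, h):   return l * w * h
def volume_cylinder(r, h):        return math.pi * r**2 * h
def volume_prism(base_area, h):   return base_area * h          # constant cross-section
def volume_sphere(r):             return 4/3 * math.pi * r**3
def volume_ellipsoid(a, b, c):    return 4/3 * math.pi * a * b * c
def volume_pyramid(base_area, h): return base_area * h / 3
def volume_cone(r, h):            return math.pi * r**2 * h / 3

# Sanity checks: a sphere of radius r has the same volume as an ellipsoid
# with a = b = c = r, and a cone is one third of the cylinder with the
# same base and height.
r, h = 2.0, 5.0
assert math.isclose(volume_sphere(r), volume_ellipsoid(r, r, r))
assert math.isclose(volume_cone(r, h), volume_cylinder(r, h) / 3)
print(volume_cube(3), volume_pyramid(9, 3))   # 27 9.0
```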
The volume of any solid whose cross-sectional areas are all the same is equal to that cross-sectional area times the distance the centroid (the center of gravity in a physical object) would travel through the solid. If two solids are contained between two parallel planes and every plane parallel to these two planes has equal cross sections through these two solids, then their volumes are equal.

A polygon is a two-dimensional figure, meaning all of the lines in the figure are contained within one plane. Polygons are classified by the number of angles, which is also the number of sides. One key point to note is that a polygon must have at least three sides. Normally, three- to ten-sided figures are referred to by their names (below), while figures with eleven or more sides are called n-gons, where n is the number of sides. Hence a forty-sided polygon is called a 40-gon.
- Triangle: a polygon with three angles and sides.
- Quadrilateral: a polygon with four angles and sides.
- Pentagon: a polygon with five angles and sides.
- Hexagon: a polygon with six angles and sides.
- Heptagon: a polygon with seven angles and sides.
- Octagon: a polygon with eight angles and sides.
- Nonagon: a polygon with nine angles and sides.
- Decagon: a polygon with ten angles and sides.

Polygons are also classified as convex or concave. A convex polygon has interior angles less than 180 degrees, thus all triangles are convex. If a polygon has at least one internal angle greater than 180 degrees, then it is concave. An easy way to tell if a polygon is concave is if one side can be extended and crosses the interior of the polygon. Concave polygons can be divided into several convex polygons by drawing diagonals. Regular polygons are polygons in which all sides and angles are congruent.

A triangle is a type of polygon having three sides and, therefore, three angles. The triangle is a closed figure formed from three straight line segments joined at their ends. The points at the ends can be called the corners, angles, or vertices of the triangle. Since any given triangle lies completely within a plane, triangles are often treated as two-dimensional geometric figures. As such, a triangle has no volume and, because it is a two-dimensionally closed figure, the flat part of the plane inside the triangle has an area, typically referred to as the area of the triangle. Triangles are always convex polygons.

A triangle must have at least some area, so all three corner points of a triangle cannot lie in the same line. The sum of the lengths of any two sides of a triangle is always greater than the length of the third side. The preceding statement is sometimes called the Triangle Inequality.

Certain types of triangles

Categorized by angle
The sum of the interior angles in a triangle always equals 180°. This means that no more than one of the angles can be 90° or more. All three angles can be less than 90° in the triangle; then it is called an acute triangle. One of the angles can be 90° and the other two less than 90°; then the triangle is called a right triangle. Finally, one of the angles can be more than 90° and the other two less; then the triangle is called an obtuse triangle.

Categorized by sides
If all three of the sides of a triangle are of different length, then the triangle is called a scalene triangle. If two of the sides of a triangle are of equal length, then it is called an isosceles triangle. In an isosceles triangle, the angle between the two equal sides can be more than, equal to, or less than 90°. The other two angles are both less than 90°.
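The Triangle Inequality and the two classification schemes above can be captured in a few lines of Python; this is an illustrative sketch (names are mine, not from the book).

```python
def is_valid_triangle(a, b, c):
    # Triangle Inequality: the sum of any two sides must exceed the third.
    return a + b > c and b + c > a and c + a > b

def classify_by_sides(a, b, c):
    if a == b == c:
        return "equilateral"
    if a == b or b == c or c == a:
        return "isosceles"
    return "scalene"

def classify_by_angles(A, B, C):
    # A, B, C in degrees; they must sum to 180.
    assert abs(A + B + C - 180) < 1e-9
    largest = max(A, B, C)
    if largest < 90:
        return "acute"
    if largest == 90:
        return "right"
    return "obtuse"

print(is_valid_triangle(3, 4, 5))            # True
print(classify_by_sides(3, 4, 5))            # scalene
print(classify_by_angles(90, 53.13, 36.87))  # right
```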
If all three sides of a triangle are of equal length, then it is called an equilateral triangle and all three of the interior angles must be 60°, making it equiangular. Because the interior angles are all equal, all equilateral triangles are also the three-sided variety of a regular polygon and they are all similar, but might not be congruent. However, polygons having four or more equal sides might not have equal interior angles, might not be regular polygons, and might not be similar or congruent. Of course, pairs of triangles which are not equilateral might be similar or congruent.

Opposite corners and sides in triangles
If one of the sides of a triangle is chosen, the interior angles of the corners at the side's endpoints can be called adjacent angles. The corner which is not one of these endpoints can be called the corner opposite to the side. The interior angle whose vertex is the opposite corner can be called the angle opposite to the side. Likewise, if a corner or its angle is chosen, then the two sides sharing an endpoint at that corner can be called adjacent sides. The side not having this corner as one of its two endpoints can be called the side opposite to the corner.

The sides or their lengths of a triangle are typically labeled with lower case letters. The corners or their corresponding angles can be labeled with capital letters. The triangle as a whole can be labeled by a small triangle symbol and its corner points. In a triangle, the largest interior angle is opposite to the longest side, and vice versa.

Any triangle can be divided into two right triangles by taking the longest side as a base, and extending a line segment from the opposite corner to a point on the base such that it is perpendicular to the base. Such a line segment would be considered the height or altitude (h) for that particular base (b). The two right triangles resulting from this division would both share the height as one of their sides. The interior angles at the meeting of the height and base would be 90° for each new right triangle. For acute triangles, any of the three sides can act as the base and have a corresponding height. For more information on right triangles, see Right Triangles and Pythagorean Theorem.

Area of Triangles
If the base and height of a triangle are known, then the area of the triangle can be calculated by the formula:
A = ½ × b × h
(A is the symbol for area.) Ways of calculating the area inside of a triangle are further discussed under Area.

The centroid is constructed by drawing all the medians of the triangle. All three medians intersect at the same point: this crossing point is the centroid. Centroids are always inside a triangle. They are also the centre of gravity of the triangle.

The three angle bisectors of the triangle intersect at a single point, called the incentre. Incentres are always inside the triangle. The three sides are equidistant from the incentre. The incentre is also the centre of the inscribed circle (incircle) of a triangle, or the interior circle which touches all three sides of the triangle.

The circumcentre is the intersection of all three perpendicular bisectors. Unlike the incentre, it is outside the triangle if the triangle is obtuse. Acute triangles always have circumcentres inside, while the circumcentre of a right triangle is the midpoint of the hypotenuse. The vertices of the triangle are equidistant from the circumcentre.
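Given vertex coordinates, the centroid, incentre, and circumcentre described above can be computed directly. This Python sketch uses the standard coordinate formulas (the helper names are mine); the circumcentre comes from the usual perpendicular-bisector determinant formula.

```python
import math

def centroid(A, B, C):
    # Average of the three vertices.
    return ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)

def incentre(A, B, C):
    # Weighted average of the vertices, weights = lengths of the opposite sides.
    a = math.dist(B, C)   # side opposite A
    b = math.dist(C, A)   # side opposite B
    c = math.dist(A, B)   # side opposite C
    p = a + b + c
    return ((a * A[0] + b * B[0] + c * C[0]) / p,
            (a * A[1] + b * B[1] + c * C[1]) / p)

def circumcentre(A, B, C):
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay) + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx) + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

# Right triangle with legs 3 and 4: the circumcentre is the midpoint of the hypotenuse.
A, B, C = (0, 0), (3, 0), (0, 4)
print(centroid(A, B, C))      # (1.0, 1.333...)
print(incentre(A, B, C))      # (1.0, 1.0) -- the 3-4-5 triangle has inradius 1
print(circumcentre(A, B, C))  # (1.5, 2.0) -- midpoint of the hypotenuse
```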
The circumcentre is so called because it is the centre of the circumcircle, or the exterior circle which touches all three vertices of the triangle.

The orthocentre is the crossing point of the three altitudes. It is always inside acute triangles, outside obtuse triangles, and on the vertex of the right angle in a right triangle.

Please note that the centres of an equilateral triangle are always the same point.

Right Triangles and Pythagorean Theorem
Right triangles are triangles in which one of the interior angles is 90°. A 90° angle is called a right angle. Right triangles are sometimes called right-angled triangles. The other two interior angles are complementary, i.e. their sum equals 90°. Right triangles have special properties which make it easier to conceptualize and calculate their parameters in many cases.

The side opposite the right angle is called the hypotenuse. The sides adjacent to the right angle are the legs. When using the Pythagorean Theorem, the hypotenuse or its length is often labeled with a lower case c. The legs (or their lengths) are often labeled a and b.

Either of the legs can be considered a base and the other leg would be considered the height (or altitude), because the right angle automatically makes them perpendicular. If the lengths of both the legs are known, then by setting one of these sides as the base (b) and the other as the height (h), the area of the right triangle is very easy to calculate using this formula:
A = ½ × b × h
This is intuitively logical because another congruent right triangle can be placed against it so that the hypotenuses are the same line segment, forming a rectangle with sides having length b and width h. The area of the rectangle is b × h, so either one of the congruent right triangles forming it has an area equal to half of that rectangle.

Right triangles can be neither equilateral, acute, nor obtuse triangles. Isosceles right triangles have two 45° angles as well as the 90° angle. All isosceles right triangles are similar since corresponding angles in isosceles right triangles are equal. If another triangle can be divided into two right triangles (see Triangle), then the area of the triangle may be able to be determined from the sum of the two constituent right triangles. Also, a generalization of the Pythagorean theorem, the law of cosines, can be used for non-right triangles: c² = a² + b² − 2ab·cos C, where C is the angle opposite side c.

For history regarding the Pythagorean Theorem, see Pythagorean theorem. The Pythagorean Theorem states that:
- In a right triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the other two sides.
Let's take a right triangle as shown here and set c equal to the length of the hypotenuse and set a and b each equal to the lengths of the other two sides. Then the Pythagorean Theorem can be stated as this equation:
a² + b² = c²
Using the Pythagorean Theorem, if the lengths of any two of the sides of a right triangle are known and it is known which side is the hypotenuse, then the length of the third side can be determined from the formula.

Sine, Cosine, and Tangent for Right Triangles
Sine, Cosine, and Tangent are all functions of an angle, which are useful in right triangle calculations. For an angle designated as θ, the sine function is abbreviated as sin θ, the cosine function is abbreviated as cos θ, and the tangent function is abbreviated as tan θ. For any angle θ, sin θ, cos θ, and tan θ each have a single determined value, and if θ is a known value, sin θ, cos θ, and tan θ can be looked up in a table or found with a calculator.
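A small Python sketch of the Pythagorean theorem and the law of cosines mentioned above (names are mine); note that when the included angle is 90° the law of cosines reduces to a² + b² = c².

```python
import math

def hypotenuse(a, b):
    # Pythagorean theorem: c = sqrt(a^2 + b^2)
    return math.sqrt(a**2 + b**2)

def third_side(a, b, C_degrees):
    # Law of cosines: c^2 = a^2 + b^2 - 2ab*cos(C), where C is the included angle.
    C = math.radians(C_degrees)
    return math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(C))

print(hypotenuse(3, 4))        # 5.0
print(third_side(3, 4, 90))    # 5.0 (cos 90° = 0, so it reduces to Pythagoras)
print(third_side(3, 4, 60))    # ~3.606 for a non-right triangle
```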
There is a table listing these function values at the end of this section. For an angle between listed values, the sine, cosine, or tangent of that angle can be estimated from the values in the table. Conversely, if a number is known to be the sine, cosine, or tangent of an angle, then such tables could be used in reverse to find (or estimate) the value of a corresponding angle.

These three functions are related to right triangles in the following ways. In a right triangle,
- the sine of a non-right angle equals the length of the leg opposite that angle divided by the length of the hypotenuse.
- the cosine of a non-right angle equals the length of the leg adjacent to it divided by the length of the hypotenuse.
- the tangent of a non-right angle equals the length of the leg opposite that angle divided by the length of the leg adjacent to it.
For any value of θ where cos θ ≠ 0,
tan θ = sin θ / cos θ

If one considers the diagram representing a right triangle with the two non-right angles θ1 and θ2, and the side lengths a, b, c, where a is the leg opposite θ1, b is the leg opposite θ2, and c is the hypotenuse, then for the functions of angle θ1:
sin θ1 = a/c, cos θ1 = b/c, tan θ1 = a/b
Analogously, for the functions of angle θ2:
sin θ2 = b/c, cos θ2 = a/c, tan θ2 = b/a

Table of sine, cosine, and tangent for angles θ from 0 to 90°
(The table lists θ in degrees, θ in radians, sin θ, cos θ, and tan θ; the numerical entries are omitted here.)

General rules for important angles: sin 0° = 0, sin 30° = ½, sin 45° = √2/2, sin 60° = √3/2, and sin 90° = 1, with the cosine values running through the same list in reverse order.

Polyominoes are shapes made from connecting unit squares together, though certain connections are not allowed. A domino is the shape made from attaching unit squares so that they share one full edge. The term polyomino is based on the word domino. There is only one possible domino.

Tromino
A polyomino made from three squares is called a tromino. There are only two possible trominoes.

A polyomino made from four squares is called a tetromino. There are five possible combinations and two reflections.

A polyomino made from five squares is called a pentomino. There are twelve possible pentominoes, excluding mirror images and rotations.

Ellipses are sometimes called ovals. Ellipses contain two foci. The sum of the distance from a point on the ellipse to one focus and that same point to the other focus is constant.

Suppose you are an astronomer in America. You observe an exciting event (say, a supernova) in the sky and would like to tell your colleagues in Europe about it. Suppose the supernova appeared at your zenith. You can't tell astronomers in Europe to look at their zenith because their zenith points in a different direction. You might tell them which constellation to look in. This might not work, though, because it might be too hard to find the supernova by searching an entire constellation. The best solution would be to give them an exact position by using a coordinate system.

On Earth, you can specify a location using latitude and longitude. This system works by measuring the angles separating the location from two great circles on Earth (namely, the equator and the prime meridian). Coordinate systems in the sky work in the same way. The equatorial coordinate system is the most commonly used. The equatorial system defines two coordinates: right ascension and declination, based on the axis of the Earth's rotation. The declination is the angle of an object north or south of the celestial equator. Declination on the celestial sphere corresponds to latitude on the Earth.
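Before moving on to sky coordinates, here is a short Python sketch of the right-triangle sine/cosine/tangent definitions given earlier in this section (its names are mine). It solves a right triangle from one acute angle and the hypotenuse, and verifies that tan θ = sin θ / cos θ.

```python
import math

def solve_right_triangle(theta_degrees, hypotenuse):
    """Return (opposite, adjacent) leg lengths for the acute angle theta."""
    t = math.radians(theta_degrees)
    opposite = hypotenuse * math.sin(t)   # sin = opposite / hypotenuse
    adjacent = hypotenuse * math.cos(t)   # cos = adjacent / hypotenuse
    return opposite, adjacent

opp, adj = solve_right_triangle(30, 10)
print(round(opp, 4), round(adj, 4))   # 5.0  8.6603

# tan θ equals sin θ / cos θ whenever cos θ is nonzero.
theta = math.radians(30)
print(math.isclose(math.tan(theta), math.sin(theta) / math.cos(theta)))  # True
```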
The right ascension of an object is defined by the position of a point on the celestial sphere called the vernal equinox. The further an object is east of the vernal equinox, the greater its right ascension.

A coordinate system is a system designed to establish positions with respect to given reference points. The coordinate system consists of one or more reference points, the styles of measurement (linear measurement or angular measurement) from those reference points, and the directions (or axes) in which those measurements will be taken. In astronomy, various coordinate systems are used to precisely define the locations of astronomical objects.

Latitude and longitude are used to locate a certain position on the Earth's surface. The lines of latitude (horizontal) and the lines of longitude (vertical) make up an invisible grid over the earth. Lines of latitude are called parallels. Lines of longitude are called meridians; they run from the exact point of the north pole to the exact point of the south pole. 0 degrees latitude is the Earth's middle, called the equator. 0 degrees longitude was trickier to define, because the Earth has no natural vertical "middle." It was finally agreed that the observatory in Greenwich, U.K. would be 0 degrees longitude, due to its significant role in scientific discoveries and in establishing latitude and longitude. 0 degrees longitude is called the prime meridian.

Latitude and longitude are measured in degrees. One degree of latitude is about 69 miles. There are sixty minutes (') in a degree and sixty seconds (") in a minute. These tiny units make GPS (Global Positioning System) readings much more exact.

There are a few main lines of latitude: the Arctic Circle, the Antarctic Circle, the Tropic of Cancer, and the Tropic of Capricorn. The Antarctic Circle is 66.5 degrees south of the equator and it separates the temperate zone from the Antarctic zone. The Arctic Circle is an exact mirror of it in the north. The Tropic of Cancer separates the tropics from the temperate zone. It is 23.5 degrees north of the equator. It is mirrored in the south by the Tropic of Capricorn.

Horizontal coordinate system
One of the simplest ways of placing a star on the night sky is the coordinate system based on altitude and azimuth, thus called the Alt-Az or horizontal coordinate system. The reference circles for this system are the horizon and the celestial meridian, both of which may be most easily graphed for a given location using the celestial sphere.

In simplest terms, the altitude is the angle made from the position of the celestial object (e.g. star) to the point nearest it on the horizon. The azimuth is the angle from the northernmost point of the horizon (which is also its intersection with the celestial meridian) to the point on the horizon nearest the celestial object. Usually azimuth is measured eastwards from due north. So east has az = 90°, south has az = 180°, west has az = 270° and north has az = 360° (or 0°). An object's altitude and azimuth change as the earth rotates.

Equatorial coordinate system
The equatorial coordinate system is another system that uses two angles to place an object on the sky: right ascension and declination.

Ecliptic coordinate system
The ecliptic coordinate system is based on the ecliptic plane, i.e., the plane which contains our Sun and Earth's average orbit around it, which is tilted at 23°26' from the plane of Earth's equator.
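Since latitude and longitude are quoted in degrees, minutes, and seconds, a conversion helper is handy. This is an illustrative Python sketch (the names and the roughly quoted Greenwich latitude are my own assumptions), converting a DMS reading to decimal degrees and estimating the north–south distance a latitude difference corresponds to, using the rough 69-miles-per-degree figure given above.

```python
def dms_to_decimal(degrees, minutes, seconds, direction="N"):
    """Convert degrees/minutes/seconds to signed decimal degrees."""
    value = abs(degrees) + minutes / 60 + seconds / 3600
    return -value if direction in ("S", "W") else value

MILES_PER_DEGREE_LAT = 69  # rough figure quoted in the text

greenwich_lat = dms_to_decimal(51, 28, 40, "N")   # Greenwich observatory latitude, approximate
print(round(greenwich_lat, 4))                               # ~51.4778
print(round(greenwich_lat * MILES_PER_DEGREE_LAT))           # rough distance from the equator, in miles
```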
The great circle at which this plane intersects the celestial sphere is the ecliptic, and one of the coordinates used in the ecliptic coordinate system, the ecliptic latitude, describes how far an object is to ecliptic north or to ecliptic south of this circle. On this circle lies the point of the vernal equinox (also called the first point of Aries); ecliptic longitude is measured as the angle of an object relative to this point, to ecliptic east. Ecliptic latitude is generally indicated by φ, whereas ecliptic longitude is usually indicated by λ.

Galactic coordinate system
As a member of the Milky Way Galaxy, we have a clear view of the Milky Way from Earth. Since we are inside the Milky Way, we don't see the galaxy's spiral arms, central bulge and so forth directly as we do for other galaxies. Instead, the Milky Way completely encircles us. We see the Milky Way as a band of faint starlight forming a ring around us on the celestial sphere. The disk of the galaxy forms this ring, and the bulge forms a bright patch in the ring. You can easily see the Milky Way's faint band from a dark, rural location.

Our galaxy defines another useful coordinate system — the galactic coordinate system. This system works just like the others we've discussed. It also uses two coordinates to specify the position of an object on the celestial sphere. The galactic coordinate system first defines a galactic latitude, the angle an object makes with the galactic equator. The galactic equator has been selected to run through the center of the Milky Way's band. The second coordinate is galactic longitude, which is the angular separation of the object from the galaxy's "prime meridian," the great circle that passes through the Galactic center and the galactic poles. The galactic coordinate system is useful for describing an object's position with respect to the galaxy's center. For example, if an object has high galactic latitude, you might expect it to be less obstructed by interstellar dust.

Transformations between coordinate systems
One can use the principles of spherical trigonometry as applied to triangles on the celestial sphere to derive formulas for transforming coordinates in one system to those in another. These formulas generally rely on the spherical law of cosines, known also as the cosine rule for sides. By substituting various angles on the celestial sphere for the angles in the law of cosines and by thereafter applying basic trigonometric identities, most of the formulas necessary for coordinate transformations can be found. The law of cosines is stated thus:
cos c = cos a · cos b + sin a · sin b · cos C
where a, b, and c are the sides (arcs) of a spherical triangle and C is the angle opposite side c.

To transform from horizontal to equatorial coordinates, the relevant formulas are as follows:
sin(Dec) = sin(Alt) · sin(Lat) + cos(Alt) · cos(Lat) · cos(Az)
cos(LST − RA) = (sin(Alt) − sin(Lat) · sin(Dec)) / (cos(Lat) · cos(Dec))
where RA is the right ascension, Dec is the declination, LST is the local sidereal time, Alt is the altitude, Az is the azimuth (measured eastwards from due north), and Lat is the observer's latitude.

Using the same symbols and formulas, one can also derive formulas to transform from equatorial to horizontal coordinates:
sin(Alt) = sin(Dec) · sin(Lat) + cos(Dec) · cos(Lat) · cos(LST − RA)
cos(Az) = (sin(Dec) − sin(Alt) · sin(Lat)) / (cos(Alt) · cos(Lat))

Transformation from equatorial to ecliptic coordinate systems can similarly be accomplished using the following formulas:
sin(φ) = sin(Dec) · cos(ε) − cos(Dec) · sin(ε) · sin(RA)
sin(λ) · cos(φ) = sin(RA) · cos(Dec) · cos(ε) + sin(Dec) · sin(ε)
cos(λ) · cos(φ) = cos(RA) · cos(Dec)
where RA is the right ascension, Dec is the declination, φ is the ecliptic latitude, λ is the ecliptic longitude, and ε is the tilt of Earth's axis relative to the ecliptic plane.
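The horizontal-to-equatorial transformation written out above can be checked numerically. This is a Python sketch under stated assumptions (the function name and example numbers are mine, and azimuth is taken to be measured eastwards from due north, as described earlier).

```python
import math

def horizontal_to_equatorial(alt_deg, az_deg, lat_deg, lst_hours):
    """Return (RA in hours, Dec in degrees) from altitude, azimuth, latitude, and LST."""
    alt, az, lat = map(math.radians, (alt_deg, az_deg, lat_deg))

    # sin(Dec) = sin(Alt)sin(Lat) + cos(Alt)cos(Lat)cos(Az)
    dec = math.asin(math.sin(alt) * math.sin(lat) +
                    math.cos(alt) * math.cos(lat) * math.cos(az))

    # cos(LST - RA) = (sin(Alt) - sin(Lat)sin(Dec)) / (cos(Lat)cos(Dec))
    cos_ha = (math.sin(alt) - math.sin(lat) * math.sin(dec)) / (math.cos(lat) * math.cos(dec))
    ha = math.acos(max(-1.0, min(1.0, cos_ha)))   # clamp for floating-point safety
    if math.sin(az) > 0:
        ha = -ha          # object in the eastern sky has not yet crossed the meridian
    ha_hours = math.degrees(ha) / 15              # 15 degrees of hour angle per hour
    ra_hours = (lst_hours - ha_hours) % 24
    return ra_hours, math.degrees(dec)

# An object due south (Az = 180°) at altitude 45°, seen from latitude 52° N:
print(horizontal_to_equatorial(45, 180, 52, lst_hours=6.0))
# -> roughly (6.0, 7.0): an object on the meridian has RA equal to the LST.
```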
Again, using the same symbols, formulas for transforming ecliptic to equatorial coordinates can be found:
sin(Dec) = sin(φ) · cos(ε) + cos(φ) · sin(ε) · sin(λ)
sin(RA) · cos(Dec) = sin(λ) · cos(φ) · cos(ε) − sin(φ) · sin(ε)
cos(RA) · cos(Dec) = cos(λ) · cos(φ)

A topological space is a set X and a collection C of subsets of X such that both the empty set and X are contained in C, and the union of any subcollection of sets in C and the intersection of any finite subcollection of sets in C are also contained within C. The sets in C are called open sets. Their complements relative to X are called closed sets. Given two topological spaces, X and Y, a map f from X to Y is continuous if for every open set U of Y, f−1(U) is an open set of X.

Hyperbolic and Elliptic Geometry
There are precisely three different classes of three-dimensional constant-curvature geometry: Euclidean, hyperbolic and elliptic geometry. The three geometries are all built on the same first four axioms, but each has a unique version of the fifth axiom, also known as the parallel postulate. The 1868 Essay on an Interpretation of Non-Euclidean Geometry by Eugenio Beltrami (1835–1900) proved the logical consistency of the two non-Euclidean geometries, hyperbolic and elliptic.

The Parallel Postulate
The parallel postulate is as follows for the corresponding geometries.

Euclidean geometry: Playfair's version: "Given a line l and a point P not on l, there exists a unique line m through P that is parallel to l." Euclid's version: "Suppose that a line l meets two other lines m and n so that the sum of the interior angles on one side of l is less than 180°. Then m and n intersect in a point on that side of l." These two versions are equivalent; though Playfair's may be easier to conceive, Euclid's is often useful for proofs.

Hyperbolic geometry: Given an arbitrary infinite line l and any point P not on l, there exist two or more distinct lines which pass through P and are parallel to l.

Elliptic geometry: Given an arbitrary infinite line l and any point P not on l, there does not exist a line which passes through P and is parallel to l.

Hyperbolic geometry is also known as saddle geometry or Lobachevskian geometry. It differs in many ways from Euclidean geometry, often leading to quite counter-intuitive results. Some of these remarkable consequences of this geometry's unique fifth postulate include:
1. The sum of the three interior angles in a triangle is strictly less than 180°. Moreover, the angle sums of two distinct triangles are not necessarily the same.
2. Two triangles with the same interior angles have the same area.

Models of Hyperbolic Space
The following are four of the most common models used to describe hyperbolic space.
1. The Poincaré Disc Model. Also known as the conformal disc model. In it, the hyperbolic plane is represented by the interior of a circle, and lines are represented by arcs of circles that are orthogonal to the boundary circle and by diameters of the boundary circle. It preserves hyperbolic angles.
2. The Klein Model. Also known as the Beltrami-Klein model or projective disc model. In it, the hyperbolic plane is represented by the interior of a circle, and lines are represented by chords of the circle. This model gives a misleading visual representation of the magnitude of angles.
3. The Poincaré Half-Plane Model. The hyperbolic plane is represented by one-half of the Euclidean plane, as defined by a given Euclidean line l, where l is not considered part of the hyperbolic space. Lines are represented by half-circles orthogonal to l or rays perpendicular to l. It preserves hyperbolic angles.
4. The Lorentz Model.
Spheres in Lorentzian four-space provide another description of this model: the hyperbolic plane is represented by a two-dimensional hyperboloid of revolution embedded in three-dimensional Minkowski space.

Based on this geometry's definition of the fifth axiom, what does parallel mean? The following definitions are made for this geometry. If a line l and a line m do not intersect in the hyperbolic plane, but intersect at the plane's boundary of infinity, then l and m are said to be parallel. If a line p and a line q neither intersect in the hyperbolic plane nor at the boundary at infinity, then p and q are said to be ultraparallel.

The Ultraparallel Theorem
For any two lines m and n in the hyperbolic plane such that m and n are ultraparallel, there exists a unique line l that is perpendicular to both m and n.

Elliptic geometry differs in many ways from Euclidean geometry, often leading to quite counter-intuitive results. For example, directly from this geometry's fifth axiom we have that there exist no parallel lines. Another remarkable consequence of the parallel postulate is that the sum of the three interior angles in a triangle is strictly greater than 180°.

Models of Elliptic Space
Spherical geometry gives us perhaps the simplest model of elliptic geometry. Points are represented by points on the sphere. Lines are represented by great circles through the points.

- Euclid's First Four Postulates
- Euclid's Fifth Postulate
- Incidence Geometry
- Projective and Affine Planes (necessary?)
- Axioms of Betweenness
- Pasch and Crossbar
- Axioms of Congruence
- Continuity (necessary?)
- Hilbert Planes
- Neutral Geometry
- Modern geometry
- An Alternative Way and Alternative Geometric Means of Calculating the Area of a Circle
The Origin of Asteroidsby Dr. Walt Brown (This article has been reproduced with permission from the Center for Scientific Creation. The original article can be found here.) NOTE - In order to fully understand the content of this article (and it’s companion article The Origin of Comets), you should read the book, In the Beginning by Dr. Walt Brown. This book fully explains Dr. Brown’s Hydroplate Theory which is the foundation upon which this article is written. In fact, this “article” is actually a chapter in the book, In the Beginning. Members of the 4th Day Alliance can download the complete PDF copy of this chapter by clicking here. Figure 156: Asteroid Ida and Its Moon, Dactyl. In 1993, the Galileo spacecraft, heading toward Jupiter, took this picture 2,000 miles from asteroid Ida. To the surprise of most, Ida had a moon (about 1 mile in diameter) orbiting 60 miles away! Both Ida and Dactyl are composed of earthlike rock. We now know of 68 other asteroids that have moons.1 According to the laws of orbital mechanics (described in the preceding chapter), capturing a moon in space is unbelievably difficult—unless both the asteroid and a nearby potential moon had very similar speeds and directions and unless gases surrounded the asteroid during capture. If so, the asteroid, its moon, and each gas molecule were probably coming from the same place and were launched at about the same time. Within a million years, passing bodies would have stripped the moons away, so these asteroid-moon captures must have been recent. From a distance, large asteroids look like big rocks. However, many show, by their low density, that they contain either much empty space or something light, such as water ice.2 Also, the best close-up pictures of an asteroid show millions of smaller rocks on its surface. Therefore, asteroids are flying rock piles held together by gravity. Ida, about 35 miles long, does not have enough gravity to squeeze itself into a spherical shape. SUMMARY: The fountains of the great deep launched rocks as well as muddy water. As rocks moved farther from Earth, Earth’s gravity became less significant to them, and the gravity of nearby rocks became increasingly significant. Consequently, many rocks, assisted by their mutual gravity and surrounding clouds of water vapor, merged to become asteroids. Isolated rocks in space are meteoroids. Drag forces caused by water vapor and thrust forces produced by the radiometer effect concentrated asteroids in what is now the asteroid belt. All the so-called “mavericks of the solar system” (asteroids, meteoroids, and comets) resulted from the explosive events at the beginning of the flood. Asteroids, also called minor planets, are rocky bodies orbiting the Sun. The orbits of most asteroids lie between those of Mars and Jupiter, a region called the asteroid belt. The largest asteroid, Ceres, is almost 600 miles in diameter and has about one-third the volume of all other asteroids combined. Orbits of almost 30,000 asteroids have been calculated. Many more asteroids have been detected, some less than 20 feet in diameter. A few that cross the Earth’s orbit would do great damage if they ever collided with Earth. Two explanations are given for the origin of asteroids: (1) they were produced by an exploded planet, and (2) a planet failed to evolve completely. Experts recognize the problems with each explanation and are puzzled. The hydroplate theory offers a simple and complete—but quite different—solution that also answers other questions. 
Meteorites, Meteors, and Meteoroids
In space, solid bodies smaller than an asteroid but larger than a molecule are called “meteoroids.” They are renamed “meteors” as they travel through Earth’s atmosphere, and “meteorites” if they hit the ground.

Exploded-Planet Explanation. Smaller asteroids are more numerous than larger asteroids, a pattern typical of fragmented bodies. Seeing this pattern led to the early belief that asteroids are remains of an exploded planet. Later, scientists realized that all the fragments combined would not make up one small planet.3 Besides, too much energy is needed to explode and scatter even the smallest planet.

Failed-Planet Explanation. The most popular explanation today for asteroids is that they are bodies that did not merge to become a planet. Never explained is how, in nearly empty space, matter merged to become these rocky bodies in the first place,4 why rocky bodies started to form a planet but stopped,5 or why it happened only between the orbits of Mars and Jupiter. Also, because only vague explanations have been given for how planets formed, any claim to understand how one planet failed to form lacks credibility. [In general, orbiting rocks do not merge to become either planets or asteroids. Special conditions are required, as explained on page 267 and Endnote 23 on page 288.] Today, collisions and near collisions fragment and scatter asteroids, just the opposite of this “failed-planet explanation.” In fact, during the 4,600,000,000 years evolutionists say asteroids have existed, asteroids would have had so many collisions that they should be much more fragmented than they are today.6

Hydroplate Explanation. Asteroids are composed of rocks expelled from Earth. The size distribution of asteroids does show that at least part of a planet fragmented. Although an energy source is not available to explode and disperse an entire Earth-size planet, the eruption of so much supercritical water from the subterranean chambers could have launched one 2,300th of the Earth—the mass of all asteroids combined. Astronomers have tried to describe the exploded planet, not realizing they were standing on the remaining 99.95% of it—too close to see it.7

As flood waters escaped from the subterranean chambers, pillars, forced to carry more and more of the weight of the overlying crust, were crushed. Also, the almost 10-mile-high walls of the rupture were unstable, because rock is not strong enough to support a cliff more than 5 miles high. As lower portions of the walls were crushed, large blocks8 were swept up and launched by the jetting fountains. Unsupported rock in the top 5 miles then fragmented. The smaller the rock, the faster it accelerated and the farther it went, just as a rapidly flowing stream carries smaller dirt particles faster and farther. Water droplets in the fountains partially evaporated and quickly froze. Large rocks had large spheres of influence which grew as the rocks traveled away from Earth. Larger rocks became “seeds” around which other rocks and ice collected as spheres of influence expanded. Because of all the evaporated water vapor and the resulting aerobraking, even more mass concentrated around the “seeds.” Clumps of rocks became asteroids.

Question 1: Why did some clumps of rocks and ice in space become asteroids and others become comets?
Imagine living in a part of the world where heavy frost settled each night, but the Sun shone daily. After many decades, would the countryside be buried in hundreds of feet of frost?
The answer depends on several things besides the obvious need for a large source of water. If dark rocks initially covered the ground, the Sun would heat them during the day, so frost from the previous night would tend to evaporate. However, if the sunlight was dim or the frost was thick (thereby reflecting more sunlight during the day), little frost would evaporate. More frost would accumulate the next night. Frost thickness would increase every 24 hours. Now imagine living on a newly formed asteroid. Its spin would give you day-night cycles. After sunset, surface temperatures would plummet toward nearly absolute zero (-460°F), because asteroids do not have enough gravity to hold an atmosphere for long. With little atmosphere to insulate the asteroid, the day’s heat would quickly radiate, unimpeded, into outer space. Conversely, when the Sun rose, its rays would have little atmosphere to warm, so temperatures at the asteroid’s surface would rise rapidly. As the fountains of the great deep launched rocks and water droplets, evaporation in space dispersed an “ocean” of water molecules and other gases in the inner solar system. Gas molecules that struck the cold side of your spinning asteroid would become frost.9 Sunlight would usually be dim on rocks in larger, more elongated orbits. Therefore, little frost would evaporate during the day, and the frost’s thickness would increase. Your “world” would become a comet. However, if your “world” orbited relatively near the Sun, its rays would evaporate each night’s frost, so your “world” would remain an asteroid. Heavier rocks could not be launched with as much velocity as smaller particles (dirt, water droplets, and smaller rocks). The heavier rocks merged to become asteroids, while the smaller particles, primarily water, merged to become comets, which generally have larger orbits. No “sharp line” separates asteroids and comets. PREDICTION 33Asteroids are rock piles, often with ice acting as a weak “glue” inside. Large rocks that began the capture process are nearer the centers of asteroids. Comets, which are primarily ice, have rocks in their cores. Four years after this prediction was published in 2001 (In the Beginning, 7th edition, page 220), measurements of the largest asteroid, Ceres, found that it does indeed have a dense, rocky core and primarily a water-ice mantle.10 Question 2: Wasn’t asteroid Eros found to be primarily a large, solid rock?A pile of dry sand here on Earth cannot maintain a slope greater than about 30 degrees. If it were steeper, the sand grains would roll downhill. Likewise, a pile of dry pebbles or rocks on an asteroid cannot have a slope exceeding about 30 degrees. However, 4% of Eros’ surface exceeds this slope, so some scientists concluded that much of Eros must be a large, solid rock. This conclusion overlooks the possibility that ice is present between some rocks and acts as a weak glue—as predicted above. Ice in asteroids would also explain their low density. Endnote 8 gives another reason why asteroids are probably flying rock piles. Question 3: Objects launched from Earth should travel in elliptical, cometlike orbits. How could rocky bodies launched from Earth become concentrated in almost circular orbits between Mars and Jupiter?Gases, such as water vapor and its components,11 were abundant in the inner solar system for many years after the flood. Hot gas molecules striking each asteroid’s hot side were repelled with great force. 
This jetting action was like air rapidly escaping from a balloon, applying a thrust in a direction opposite to the escaping gas.12 Cold molecules striking each asteroid’s cold side produced less jetting. This thrusting, efficiently powered by solar energy, pushed asteroids outward, away from the sun, concentrating them between the orbits of Mars and Jupiter.13 [See Figures 157 and 158.] Figure 157: Thrust and Drag Acted on Asteroids (Sun, asteroid, gas molecules, and orbit are not to scale.) The fountains of the great deep launched rocks and muddy water from Earth. The larger rocks, assisted by water vapor and other gases within the spheres of influence of these rocks, captured other rocks and ice particles. Those growing bodies that were primarily rocks became asteroids. The Sun heats an asteroid’s near side, while the far side radiates its heat into cold outer space. Therefore, large temperature differences exist on opposite sides of each rocky, orbiting body. The slower the body spins, the darker the body,14 and the closer it is to the Sun, the greater the temperature difference. (For example, temperatures on the sunny side of our Moon reach a searing 260°F, while on the dark side, temperatures can drop to a frigid -280°F.) Also, gas molecules (small blue circles) between the Sun and asteroid, especially those coming from very near the Sun, are hotter and faster than those on the far side of an asteroid. Hot gas molecules hitting the hot side of an asteroid bounce off with much higher velocity and momentum than cold gas molecules bouncing off the cold side. Those impacts slowly expanded asteroid orbits until too little gas remained in the inner solar system to provide much thrust. The closer an asteroid was to the Sun, the greater the outward thrust. Gas molecules, densely concentrated near Earth’s orbit, created a drag on asteroids. My computer simulations have shown how gas, throughout the inner solar system for years after the flood, herded asteroids into a tight region near Earth’s orbital plane—an asteroid belt.15 Thrust primarily expanded the orbits. Drag circularized orbits and reduced their angles of inclination. Figure 158: The Radiometer Effect. This well-known novelty, called a radiometer, demonstrates the unusual thrust that pushed asteroids into their present orbits. Sunlight warms the dark side of each vane more than the light side. The partial vacuum inside the bulb approaches that found in outer space, so gas molecules travel relatively long distances before striking other molecules. Gas molecules bounce off the hotter, black side with greater velocity than off the colder, white side. This turns the vanes away from the dark side. The black side also radiates heat faster when it is warmer than its surroundings. This can be demonstrated by briefly placing the radiometer in a freezer. There the black side cools faster, making the white side warmer than the black, so the vanes turn away from the white side. In summary, the black side gains heat faster when in a hot environment and loses heat faster when in a cold environment. Higher gas pressure always pushes on the warmer side. Question 4: Could the radiometer effect push asteroids 1–2 astronomical units (AU) farther from the Sun?Each asteroid began as a swarm of particles (rocks, ice, and gas molecules) orbiting within a large sphere of influence. Because a swarm’s volume was quite large, its spin was much slower than it would be as it shrank to become an asteroid—perhaps orders of magnitude slower. 
The slow spin produced extreme temperature differences between the hot and cold sides. The cold side would have been so cold that gas molecules striking it would tend to stick, thereby adding “fuel” to the developing asteroid. Because the swarm’s volume was large, the radiometer pressure acted over a large area and produced a large thrust. The swarm’s large thrust and low density caused the swarm to rapidly accelerate—much like a feather placed in a gentle breeze. Also, the Sun’s gravity 93,000,000 miles from the Sun (the Earth-Sun distance) is 1,600 times weaker than Earth’s gravity here on Earth.17 So, pushing a swarm of rocks and debris farther from the Sun was surprisingly easy, because there is almost no resistance in outer space.

Question 5: Why are 4% of meteorites almost entirely iron and nickel? Also, why do meteorites rarely contain quartz, which constitutes about 27% of granite’s volume?
Pillars were formed in the subterranean chamber when the thicker portions of the crust were squeezed downward onto the chamber floor. Twice daily, during the centuries before the flood, these pillars were stretched and compressed by tides in the subterranean water. This gigantic heating process steadily raised pillar temperatures. [See “What Triggered the Flood?” here.] As explained in Figure 159, temperatures in what are now iron-nickel meteorites once exceeded 1,300°F, enough to dissolve quartz and allow iron and nickel to settle downward and become concentrated in the pillar tips.18 (A similar gravitational settling process concentrated iron and nickel in the Earth’s core after the flood began. See “Melting the Inner Earth” here.)

Evolutionists have great difficulty explaining iron-nickel meteorites. First, everyone recognizes that a powerful heating mechanism must first melt at least some of the parent body from which the iron-nickel meteorites came, so iron and nickel can sink and be concentrated. How this could have occurred in the weak gravity of extremely cold asteroids has defied explanation.19 Second, the concentrated iron and nickel, which evolutionists visualize in the core of a large asteroid, must then be excavated and blasted into space. Available evidence shows that this has not happened.20

Figure 159: Widmanstätten Patterns. Most iron-nickel meteorites display Widmanstätten patterns. That is, if an iron-nickel meteorite is cut and its face is polished and then etched with acid, the surface has the strange crisscross pattern shown above. This shows that temperatures throughout those meteorites exceeded 1,300°F.16 Why were so many meteoroids, drifting in cold space, at one time so uniformly hot? An impact would not produce such uniformity, nor would a blowtorch. The heating a meteor experiences in passing through the atmosphere is barely felt more than a fraction of an inch beneath the surface. If radioactive decay generated the heat, certain daughter products should be present; they are not. Question 5 explains how these high temperatures were probably reached.

Question 6: Aren’t meteoroids chips from asteroids?
This commonly-taught idea is based on an error in logic. Asteroids and meteoroids have some similarities, but that does not mean that one came from the other. Maybe a common event produced both asteroids and meteoroids. Also, three major discoveries suggest that meteoroids came not from asteroids, but from Earth. 1. In the mid-1970s, the Pioneer 10 and 11 spacecraft traveled out through the asteroid belt.
NASA expected that the particle detection experiments on board would find 10 times more meteoroids in the belt than are present near Earth’s orbit.21 Surprisingly, the number of meteoroids diminished as the asteroid belt was approached.22 This showed that meteoroids are not coming from asteroids but from nearer the Earth’s orbit. 2. A faint glow of light, called the zodiacal light, extends from the orbit of Venus out to the asteroid belt. The light is reflected sunlight bouncing off dust-size particles. This lens-shaped swarm of particles orbits the Sun, near Earth’s orbital plane. (On dark, moonless nights, zodiacal light can be seen in the spring in the western sky after sunset and in the fall in the eastern sky before sunrise.) Debris chipped off asteroids would have a wide range of sizes and would not be as uniform and fine as the particles reflecting the zodiacal light. Debris expelled by the fountains of the great deep would place fine dust particles in the Earth's orbital plane. 3. Many meteorites have remanent magnetism, so they must have come from a larger magnetized body. Eros, the only asteroid on which a spacecraft has landed and taken magnetic measurements, has no net magnetic field. If this is true of other asteroids as well, meteorites probably did not come from asteroids.30 If asteroids are flying rock piles, as it now appears, any magnetic fields in the randomly oriented rocks would be largely self-canceling, so the asteroid would have no net magnetic field. Therefore, instead of coming from asteroids, meteorites likely came from a magnetized body such as a planet. Because Earth’s magnetic field is 2,000 times greater than that of all other rocky planets combined, meteorites probably came from Earth. Remanent magnetism decays, so meteorites must have recently broken away from their parent magnetized body. Those who believe that meteorites were chipped off asteroids say this happened millions of years ago. PREDICTION 34:Most rocks comprising asteroids will be found to be magnetized. Two InterpretationsWith a transmission electron microscope, Japanese scientist Kazushige Tomeoka identified several major events in the life of one meteorite. Initially, this meteorite was part of a much larger parent body orbiting the Sun. The parent body had many thin cracks, through which mineral-rich water cycled. Extremely thin mineral layers were deposited on the walls of these cracks. These deposits, sometimes hundreds of layers thick, contained calcium, magnesium, carbonates, and other chemicals. Mild thermal metamorphism in this rock shows that temperatures increased before it experienced some final cracks and was blasted into space.31 Hydroplate Interpretation. Earth was the parent body of all meteorites, most of which came from pillars. [Pages 381–386 explain how, why, when, and where pillars formed.] Twice a day before the flood, tides in the subterranean water compressed and stretched these pillars. Compressive heating occurred and cracks developed. Just as water circulates through a submerged sponge that is squeezed and stretched, mineral-laden water circulated through cracks in pillars for years before they broke up. Pillar fragments, launched into space by the fountains of the great deep, became meteoroids. In summary, water did it. Tomeoka’s (and Most Evolutionists’) Interpretation. Impacts on an asteroid cracked the rock that was to become this meteorite. Ice was deposited on the asteroid. 
Impacts melted the ice, allowing liquid water to circulate through the cracks and deposit hundreds of layers of magnesium, calcium, and carbonate-bearing minerals. A final impact blasted rocks from this asteroid into space. In summary, impacts did it.

Figure 160: Shatter Cone. When a large, crater-forming meteorite strikes the Earth, a shock wave radiates outward from the impact point. The passing shock wave breaks the rock surrounding the crater into meteorite-size fragments having distinctive patterns called shatter cones. (Until shatter cones were associated with impact craters by Robert S. Dietz in 1969, impact craters were often difficult to identify.) If large impacts on asteroids launched asteroid fragments toward Earth as meteorites, a few meteorites should have shatter cone patterns. None have ever been reported. Therefore, meteorites are probably not derived from asteroids. Likewise, impacts have not launched meteorites from Mars.

Question 7: Does other evidence support this hypothesis that asteroids and meteoroids came from Earth?
Yes. Here are seventeen additional observations that either support the proposed explanation or are inconsistent with other current theories on the origin of asteroids and meteoroids:

1. The materials in meteorites and meteoroids are remarkably similar to those in the Earth’s crust.32 Some meteorites contain very dense elements, such as nickel and iron. Those heavy elements seem compatible only with the denser rocky planets: Mercury, Venus, and Earth—Earth being the densest. A few asteroid densities have been calculated. They are generally low, ranging from 1.2 to 3.3 gm/cm3. The higher densities match those of the Earth’s crust. The lower densities imply the presence of empty space between loosely held rocks or something light such as water ice.33

PREDICTION 35: Rocks in asteroids are typical of the Earth’s crust. Expensive efforts to mine asteroids34 to recover strategic or precious metals will be a waste of money.

2. Meteorites contain different varieties (isotopes) of the chemical element molybdenum, each isotope having a slightly different atomic weight. If, as evolutionists teach, a swirling cloud of gas and dust mixed for millions of years and produced the Sun, its planets, and meteorites, then each meteorite should have about the same combination of these molybdenum isotopes. Because this is not the case,35 meteorites did not come from a swirling dust cloud or any source that mixed for millions of years.

3. Most meteorites36 and some asteroids37 contain metamorphosed minerals, showing that those bodies reached extremely high temperatures, despite a lifetime in the “deep freeze” of outer space. Radioactive decay within such relatively small bodies could not have produced the necessary heating, because too much heat would have escaped from their surfaces. Stranger still, liquid water altered some meteorites38 while they and their parent bodies were heated—sometimes heated multiple times.39 Impacts in space are often proposed to explain this mysterious heating throughout an asteroid or meteorite. However, an impact would raise the temperature only near the point of impact. Before gravel-size fragments from an impact could become uniformly hot, they would radiate their heat into outer space.40 For centuries before the flood, heat was steadily generated within pillars in the subterranean water chamber.
As the flood began, the powerful jetting water launched rock fragments into space—fragments of hot, crushed pillars and fragments from the crumbling walls of the ruptured crust. Those rocks became meteoroids and asteroids. 4. Because asteroids came from Earth, they typically spin in the same direction as Earth (counterclockwise, as seen from the North). However, collisions have undoubtedly randomized the spins of many smaller asteroids in the last few thousand years.41 5. Some asteroids have captured one or more moons. [See Figure 156 at top of this page.] Sometimes the “moon” and asteroid are similar in size. Impacts would not create equal-size fragments that could capture each other.42 The only conceivable way for this to happen is if a potential moon enters an asteroid’s expanding sphere of influence while traveling about the same speed and direction as the asteroid. If even a thin gas surrounds the asteroid, the moon will be drawn closer to the asteroid, preventing the moon from being stripped away later. An “exploded planet” would disperse relatively little gas. The “failed planet explanation” meets none of the requirements. The hydroplate theory satisfies all the requirements. Figure 161: Chondrules. The central chondrule above is 2.2 millimeters in diameter, the size of this circle: o. This picture was taken in reflected light. However, meteorites containing chondrules can be thinly sliced and polished, allowing light from below to pass through the thin slice and into the microscope. Such light becomes polarized as it passes through the minerals. The resulting colors identify minerals in and around the chondrules. [Meteorite from Hammada al Hamra Plateau, Libya.] Chondrules (CON-drools) are strange, spherical, BB-size objects found in 86% of all meteorites. To understand the origin of meteorites we must also understand how chondrules formed. Their spherical shape and texture show they were once molten, but to melt chondrules requires temperatures exceeding 3,000°F. How could chondrules get that hot without melting the surrounding rock, which usually has a lower melting temperature? Because chondrules contain volatile substances that would have bubbled out of melted rock, chondrules must have melted and cooled quite rapidly.23 By one estimate, melting occurred in about one-hundredth of a second.24 The standard explanation for chondrules is that small pieces of rock, moving in outer space billions of years ago, before the Sun and Earth formed, suddenly and mysteriously melted. These liquid droplets quickly cooled, solidified, and then were encased inside the rock that now surrounds them. Such vague conditions, hidden behind a veil of space and time, make it nearly impossible to test this explanation in a laboratory. Scientists recognize that this standard story does not explain the rapid melting and cooling of chondrules or how they were encased uniformly in rocks which are radiometrically older than the chondrules.25 As one scientist wrote, “The heat source of chondrule melting remains uncertain. We know from the petrological data that we are looking for a very rapid heating source, but what?”26 Frequently, minerals grade (gradually change) across the boundaries between chondrules and surrounding material.27 This suggests that chondrules melted while encased in rock. If so, the heating sources must have acted briefly and been localized near the center of what are now chondrules. But how could this have happened? 
The most common mineral in chondrules is olivine.28 Deep rocks contain many BB-size pockets of olivine. Pillars within the subterranean water probably had similar pockets. As the subterranean water escaped from under the crust, pillars had to carry more of the crust’s weight. When olivine reaches a certain level of compression, it suddenly changes into another mineral, called spinel (spin-EL), and shrinks in volume by about 10%.29 (Material surrounding each pocket would not shrink.) Tiny, collapsing pockets of olivine transforming into spinel would generate great heat, for two reasons. First, the transformation is exothermic; that is, it releases heat chemically. Second, it releases heat mechanically, by friction. Here’s why. At the atomic level, each pocket would collapse in many stages—much like falling dominos or the section-by-section crushing of a giant scaffolding holding up an overloaded roof. Within each pocket, as each microscopic crystal slid over adjacent crystals at these extreme pressures, melting would occur along sliding surfaces. The remaining solid structures in the olivine pocket would then carry the entire compressive load—quickly collapsing and melting other parts of the “scaffolding.” The fountains of the great deep expelled pieces of crushed pillars into outer space where they rapidly cooled. Their tumbling action, especially in the weightlessness of space, would have prevented volatiles from bubbling out of the encased liquid pockets within each rock. In summary, chondrules are a by product of the mechanism that produced meteorites—a rapid process that started under the Earth’s crust as the flood began. Also, tidal effects, as described on pages 425–428, limit the lifetime of the moons of asteroids to about 100,000 years.43 This fact and the problems in capturing a moon caused evolutionist astronomers to scoff at early reports that some asteroids have moons. Figure 162: Peanut Asteroids. The fountains of the great deep expelled dirt, rocks, and considerable water from Earth. About half of that water quickly evaporated into the vacuum of space; the remainder froze. Each evaporated gas molecule became an orbiting body in the solar system. Asteroids then formed as explained on pages 298–302. Many are shaped like peanuts. Gas molecules captured by asteroids or released by icy asteroids became their atmospheres. Asteroids with thick atmospheres sometimes captured smaller asteroids as moons. If an atmosphere remained long enough, the moon would lose altitude and gently merge with the low-gravity asteroid, forming a peanut-shaped asteroid. (We see merging when a satellite or spacecraft reenters Earth’s atmosphere, slowly loses altitude, and eventually falls to Earth.) Without an atmosphere, merging becomes almost impossible. Japan’s Hayabusa spacecraft orbited asteroid Itokawa (shown above) for two months in 2005. Scientists studying Itokawa concluded that it consists of two smaller asteroids that merged. Donald Yeomans, a mission scientist and member of NASA’s Jet Propulsion Laboratory, admitted, “It’s a major mystery how two objects each the size of skyscrapers could collide without blowing each other to smithereens. This is especially puzzling in a region of the solar system where gravitational forces would normally involve collision speeds of 2 km/sec.”45 The mystery is easily solved when one understands the role that water played in the origin of comets and asteroids. Notice, a myriad of rounded boulders, some 150 feet in diameter, litter Itokawa’s surface. 
High velocity water produces rounded boulders; an exploded planet or impacts on asteroids would produce angular rocks. 6. The smaller moons of the giant planets (Jupiter, Saturn, Uranus, and Neptune) are captured asteroids. Most astronomers probably accept this conclusion, but have no idea how these captures could occur.44 As explained earlier in this chapter, for decades to centuries after the flood the radiometer effect, powered by the Sun’s energy, spiraled asteroids outward from Earth’s orbit. Water vapor, around asteroids and in interplanetary space, temporarily thickened asteroid and planet atmospheres. This facilitated aerobraking which allowed massive planets to capture asteroids. Recent discoveries indicate that Saturn’s 313-mile-wide moon, Enceladus (en-SELL-uh-duhs), is a captured asteroid. Geysers at Enceladus’ south pole are expelling water vapor and ice crystals which escape Enceladus and supply Saturn’s E ring.46 That water contains salts resembling Earth’s ocean waters.47 Because asteroids are icy and weak, they would experience strong tides if captured by a giant planet. Strong tides would have recently48 generated considerable internal heat, slowed the moon’s spin, melted ice, and boiled deep reservoirs of water. Enceladus’ spin has almost stopped, its internal water is being launched (some so hot that it becomes a plasma),49 and its surface near the geysers has buckled, probably due to the loss of internal water. Because the material for asteroids and their organic matter came recently from Earth, water is still jetting from cold Enceladus’ surprisingly warm south pole, and “dark green organic material”50 is on its surface. 7. A few asteroids suddenly develop comet tails, so they are considered both asteroid and comet. The hydroplate theory says that asteroids are weakly joined piles of rocks and ice. If such a pile cracked slightly, perhaps due to an impact by space debris, then internal ice, suddenly exposed to the vacuum of space, would violently vent water vapor and produce a comet tail. The hydroplate theory explains why comets are so similar to asteroids. 8. A few comets have nearly circular orbits within the asteroid belt. Their tails lengthen as they approach perihelion and recede as they approach aphelion. If comets formed beyond the planet Neptune, it is highly improbable that they could end up in nearly circular orbits in the asteroid belt.51 So, these comets almost certainly did not form in the outer solar system. Also, comet ice that near the Sun would evaporate relatively quickly. Only the hydroplate theory explains how comets (icy rock piles) recently entered the asteroid belt. 9. If asteroids passing near Earth came from the asteroid belt, too many of them have diameters less than 50 meters,52 and too many have circular orbits.53 However, we would expect this if the rocks that formed asteroids were launched from Earth. 10. Computer simulations, both forward and backward in time, show that asteroids traveling near Earth have a maximum expected lifetime of only about a million years. They “quickly” collide with the Sun.54 This raises doubts that all asteroids began 4,600,000,000 years ago as evolutionists claim—living 4,600 times longer than the expected lifetime of near-Earth asteroids. 11. Earth has one big moon and several small moons—up to 650 feet in diameter.55 The easiest explanation for the small moons is that they were launched from Earth with barely enough velocity to escape Earth’s gravity. 
(To understand why the largest of these small moons is about 650 feet in diameter, see Endnote 8.) 12. Asteroids 3753 Cruithne and 2000 AA29 are traveling companions of Earth.56 They delicately oscillate, in a horseshoe pattern, around two points that lie 60° (as viewed from the Sun) forward and 60° behind the Earth but on Earth’s nearly circular orbit. These points, predicted by Lagrange in 1764 and called Lagrange points, are stable places where an object would not move relative to the Earth and Sun if it could once occupy either point going at zero velocity relative to the Earth and Sun. But how could a slowly moving object ever reach, or get near, either point? Most likely, it barely escaped from Earth. Also, Asteroid 3753 could not have been in its present orbit for long, because it is so easy for a passing gravitational body to perturb it out of its stable niche. Time permitting, Venus will pass near this asteroid 8,000 years from now and may dislodge it.57 13. Furthermore, Jupiter has two Lagrange points on its nearly circular orbit. The first, called L4, lies 60° (as seen from the Sun) in the direction of Jupiter’s motion. The second, called L5, lies 60° behind Jupiter. Visualize planets and asteroids as large and small marbles rolling in orbitlike paths around the Sun on a large frictionless table. At each Lagrange point is a bowl-shaped depression that moves along with each planet. Because there is no friction, small marbles (asteroids) that roll down into a bowl normally pick up enough speed to roll back out. However, if a chance gravitational encounter slowed one marble right after it entered a bowl, it might not exit the bowl. Marbles trapped in a bowl would normally stay 60° ahead of or behind their planet, gently rolling around near the bottom of their moving bowl. One might think an asteroid is just as likely to get trapped in Jupiter’s leading bowl as its trailing bowl—a 50–50 chance, as with the flip of a coin. Surprisingly, 1068 asteroids are in Jupiter’s leading (L4) bowl, but only 681 are in the trailing bowl.69 This shouldn’t happen in a trillion trials if an asteroid is just as likely to get trapped at L4 as L5. What concentrated so many asteroids near the L4 Lagrange point? According to the hydroplate theory, asteroids formed near Earth’s orbit. Then, the radiometer effect spiraled them outward, toward the orbits of Mars and Jupiter. Some spiraled through Jupiter’s circular orbit and passed near both L4 and L5. Jupiter’s huge gravity would have slowed those asteroids that were moving away from Jupiter but toward L4. That braking action would have helped some asteroids settle into the L4 bowl. Conversely, asteroids that entered L5 were accelerated toward Jupiter, so they would quickly be pulled out of L5 by Jupiter’s gravity. The surprising excess of asteroids near Jupiter’s L4 is what we would expect based on the hydroplate theory. Figure 163: Asteroid Belt and Jupiter’s L4 and L5. The size of the Sun, planets, and especially asteroids are magnified, but their relative positions are accurate. About 90% of the 30,000 precisely known asteroids lie between the orbits of Mars and Jupiter, a doughnut-shaped region called the asteroid belt. A few small asteroids cross Earth’s orbit. Jupiter’s Lagrange points, L4 and L5, lie 60° ahead and 60° behind Jupiter, respectively. They move about the Sun at the same velocity as Jupiter, as if they were fixed at the corners of the two equilateral triangles shown. 
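As a rough check on the "trillion trials" claim above: if each trapped asteroid is treated as an independent 50–50 event (an idealization, not something stated in the text), the normal approximation to the binomial distribution gives

$$n = 1068 + 681 = 1749, \qquad \text{expected count} = \tfrac{n}{2} = 874.5, \qquad \sigma = \sqrt{n/4} \approx 20.9,$$

$$z = \frac{1068 - 874.5}{20.9} \approx 9.3, \qquad P(|Z| \ge 9.3) \approx 10^{-20}.$$

A nine-sigma excess is indeed far rarer than one in a trillion under that simple model; whether the 50–50 trapping assumption is itself appropriate is a separate question.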
Items 12 and 13 explain why so many asteroids have settled near L4 and L5, and why significantly more oscillate around L4 than L5. 14. Without the hydroplate theory, one has difficulty imagining situations in which an asteroid would (a) settle into one of Jupiter’s Lagrange points, (b) capture a moon, especially a moon with about the same mass as the asteroid, or (c) have a circular orbit, along with its moon, about their common center of mass. If all three happened to an asteroid, astronomers would be shocked; no astronomer would have predicted that it could happen to a comet. Nevertheless, an “asteroid” discovered earlier, named 617 Patroclus, satisfies (a)–(c). Patroclus and its moon, Menoetius, have such low densities that they would float in water; therefore, both are probably comets70—dirty, fluffy snowballs. Paragraphs 5, 7, 8, and 13 (above) explain why these observations make perfect sense with the hydroplate theory. 15. As explained in “Shallow Meteorites,” meteorites are almost always found surprisingly near Earth’s surface. The one known exception is in southern Sweden, where 40 meteorites and thousands of grain-size fragments of one particular type of meteorite have been found at different depths in a few limestone quarries. The standard explanation is that all these meteorites somehow struck this same small area over a 1–2-million-year period about 480 million years ago.71 A more likely explanation is that some meteorites, not launched with enough velocity to escape Earth during the flood, fell back to Earth. One or more meteorites fragmented on reentering Earth’s atmosphere. The pieces landed in mushy, recently-deposited limestone layers in southern Sweden. 16. Light spectra (detailed color patterns, much like a long bar code) from certain asteroids in the outer asteroid belt imply the presence of organic compounds, especially kerogen, a coal-tar residue.72 No doubt the kerogen came from plant life. Life as we know it could not survive in such a cold region of space, but common organic matter launched from Earth could have been preserved. 17. Many asteroids are reddish and have light characteristics showing the presence of iron.73 On Earth, reddish rocks almost always imply iron oxidized (rusted) by oxygen gas. Today, oxygen is rare in outer space. If iron on asteroids is oxidized, what was the source of the oxygen? Answer: Water molecules, surrounding and impacting asteroids, dissociated (broke apart), releasing oxygen. That oxygen then combined chemically with iron on the asteroid’s surface, giving the reddish color. Mars, often called the red planet, derives its red color from oxidized iron. Again, oxygen contained in water vapor launched from Earth during the flood, probably accounts for Mars’ red color. Mars’ topsoil is richer in iron and magnesium than Martian rocks beneath the surface. The dusty surface of Mars also contains carbonates, such as limestone.74 Because meteorites and Earth’s subterranean water contained considerable iron, magnesium, and carbonates, it appears that Mars was heavily bombarded by meteorites and water launched from Earth’s subterranean chamber. [See “The Origin of Limestone” on pages 224–229.] 
Those who believe that meteorites came from asteroids have wondered why meteorites do not have the red color of most asteroids.75 The answer is twofold: (a) as explained on page 301, meteorites did not come from asteroids but both came from Earth, and (b) asteroids contain oxidized iron, as explained above, but meteorites are too small to attract an atmosphere gravitationally. Figure 164: Salt of the Earth. On 22 March 1998, this 2 3/4 pound meteorite landed 40 feet from boys playing basketball in Monahans, Texas. While the rock was still warm, police were called. Hours later, NASA scientists cracked the meteorite open in a clean-room laboratory, eliminating any possibility of contamination. Inside were salt (NaCl) crystals 0.1 inch (3 mm) in diameter and liquid water!58 Some of these salt crystals are shown in the blue circle, highly magnified and in true color. Bubble (B) is inside a liquid, which itself is inside a salt crystal. Eleven quivering bubbles were found in about 40 fluid pockets. Shown in the green circle is another bubble (V) inside a liquid (L). The length of the horizontal black bar represents 0.005 mm, about 1/25 the diameter of a human hair. NASA scientists who investigated this meteorite believe that it came from an asteroid, but that is highly unlikely. Asteroids, having little gravity and being in the vacuum of space, cannot sustain liquid water, which is required to form salt crystals. (Earth is the only planet, indeed the only body in the solar system, that can sustain liquid water on its surface.) Nor could surface water (gas, liquid, or solid) on asteroids withstand high-velocity impacts. Even more perplexing for the evolutionist: What is the salt’s origin? Also, what accounts for the meteorite’s other contents: potassium, magnesium, iron, and calcium—elements abundant on Earth, but as far as we know, not beyond Earth?59 Dust-sized meteoroids often come from comets. Most larger meteoroids are rock fragments that never merged into a comet or asteroid. Much evidence supports Earth as the origin of meteorites. - Minerals and isotopes in meteorites are remarkably similar to those on Earth.32 - Some meteorites contain sugars,60 salt crystals containing liquid water,61 and possible cellulose.62 - Other meteorites contain limestone,63 which, on Earth, forms only in liquid water. - Three meteorites contain excess amounts of left-handed amino acids64—a sign of once-living matter. - A few meteorites show that “salt-rich fluids analogous to terrestrial brines” flowed through their veins.65 - Some meteorites have about twice the heavy hydrogen concentration as Earth’s water today.66 As explained in the preceding chapter and in “Energy in the Subterranean Water” here, this heavy hydrogen came from the subterranean chambers. - About 86% of all meteorites contain chondrules, which are best explained by the hydroplate theory. - Seventy-eight types of living bacteria have been found in two meteorites after extreme precautions were taken to avoid contamination.67 Bacteria need liquid water to live, grow, and reproduce. Obviously, liquid water does not exist inside meteoroids whose temperatures in outer space are near absolute zero (-460°F). Therefore, the bacteria must have been living in the presence of liquid water before being launched into space. Once in space, they quickly froze and became dormant. Had bacteria originated in outer space, what would they have eaten? 
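As a unit check on the temperature just quoted (a conversion only, not an addition to the argument), absolute zero is

$$0\ \text{K} = -273.15\ ^{\circ}\text{C} = -459.67\ ^{\circ}\text{F},$$

so "-460°F" is absolute zero rounded to three figures; deep space sits only a few kelvins above it, still far below the freezing point of water.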
Water on Mars

Water recently and briefly flowed at various locations on Mars.76 Photographic comparisons show that some water flowed within the last 2–5 years!77 Water is now stored as ice at Mars' poles78 and in surface soil. Mars' stream beds usually originate on crater walls rather than in ever smaller tributaries as on Earth.79 Rain formed other channels.80 Martian drainage channels and layered strata are found at almost 200 isolated locations.81 Most gullies are on crater slopes at high latitudes82—extremely cold slopes that receive little sunlight. One set of erosion gullies is on the central peak of an impact crater!83

Figure 165: Erosion Channels on Mars. These channels frequently originate in scooped-out regions, called amphitheaters, high on a crater wall. On Earth, where water falls as rain, erosion channels begin with narrow tributaries that merge with larger tributaries and finally, rivers. Could impacts of comets or icy asteroids have formed these craters, gouged out amphitheaters, and melted the ice—each within seconds?

Mars, which is much colder than Antarctica in the winter, would need a heating source, such as impacts, to produce liquid water. Today, Mars is cold, averaging -80°F (112 Fahrenheit degrees below freezing). Water on Mars should be ice, not liquid water. Mars' low atmospheric pressures would hasten freezing even more.84 Water probably came from above.

Soon after Earth's global flood, the radiometer effect caused asteroids to spiral out to the asteroid belt, just beyond Mars. This gave asteroids frequent opportunities to collide with Mars. When crater-forming impacts occurred, large amounts of debris were thrown into Mars' atmosphere. Mars' thin atmosphere and low gravity allowed the debris to settle back to the surface in vast layers of thin sheets—strata.

PREDICTION 36: Most sediments taken from layered strata on Mars and returned to Earth will show that they were deposited through Mars' atmosphere, not through water. (Under a microscope, water-deposited grains have nicks and gouges, showing that they received many blows as they tumbled along stream bottoms. Sediments deposited through an atmosphere receive few nicks.)

Impact energy (and heat) from icy asteroids and comets bombarding Mars released liquid water, which often pooled inside craters or flowed downhill and eroded the planet's surface.87 (Most liquid water soaked into the soil and froze.) Each impact was like the bursting of a large dam here on Earth. Brief periods of intense, hot rain and localized flash floods followed.88 These Martian hydrodynamic cycles quickly "ran out of steam," because Mars receives relatively little heat from the Sun. While the consequences were large for Mars, the total water was small by Earth's standards—about twice the water in Lake Michigan. Today, when meteorites strike icy soil on Mars, some of that ice melts. When this happens on a crater wall, liquid water flows down the crater wall, leaving the telltale gullies that have shocked the scientific community.77

PREDICTION 37: As has been discovered on the Moon and apparently on Mercury, frost will be found within asteroids and in permanently shadowed craters on Mars. This frost will be rich in heavy hydrogen.

Are Some Meteorites from Mars?

Widely publicized claims have been made that at least 30 meteorites from Mars have been found. With international media coverage in 1996, a few scientists also proposed that one of these meteorites, named ALH84001, contained fossils of primitive life. Later study rejected that claim.
The wormy-looking shapes discovered in a meteorite [supposedly] from Mars turned out to be purely mineralogical and never were alive.89 The 30 meteorites are presumed to have come from the same place, because they contain similar ratios of three types of oxygen: oxygen weighing 16, 17, and 18 atomic mass units. (That presumption is not necessarily true, is it?) A chemical argument then indirectly links one of those meteorites to Mars, but the link is more tenuous than most realize.90 That single meteorite had tiny glass nodules containing dissolved gases. A few of these gases (basically the noble gases: argon, krypton, neon, and xenon) had the same relative abundances as those found in Mars’ atmosphere in 1976. (Actually, a later discovery shows that the mineralogy of these meteorites differs from that of almost all Martian rock.91) Besides, if two things are similar, it does not mean that one came from the other. Similarity in the relative abundances of the noble gases in Mars’ atmosphere and in one meteorite may be because those gases originated in Earth’s preflood subterranean chamber. Rocks and water from the subterranean chamber may have transported those gases to Mars. Could those 30 meteorites have come from Mars? To escape the gravity of Mars requires a launch velocity of 3 miles per second. Additional velocity is then needed to transfer to an orbit intersecting Earth, 34–236 million miles away. Supposedly, one or more asteroids slammed into Mars and blasted off millions of meteoroids. Millions are needed, because less than one in a million92 would ever hit Earth, be large enough to survive reentry, be found, be turned over to scientists, and be analyzed in detail. Besides, if meteorites can come to Earth from Mars, many more should have come from the Moon—but haven’t.93 For an impact suddenly to accelerate, in a fraction of a second, any solid from rest to a velocity of 3 miles per second requires such extreme shock pressures that much of the material would melt, if not vaporize.94 All 30 meteorites should at least show shock effects. Some do not. Also, Mars should have at least six giant craters if such powerful blasts occurred, because six different launch dates are needed to explain the six age groupings the meteorites fall into (based on evolutionary dating methods). Such craters are hard to find, and large, recent impacts on Mars should have been rare. Then there are energy questions. Almost all impact energy is lost as shock waves and ultimately as heat. Little energy remains to lift rocks off Mars. Even with enough energy, the fragments must be large enough to pass through Mars’ atmosphere. To see the difficulty, imagine throwing a ball high into the air. Then visualize how hard it would be to throw a handful of dust that high. Atmospheric drag, even in Mars’ thin atmosphere, absorbs too much of the smaller particles’ kinetic energy. Finally, for large particles to escape Mars, the expelling forces must be focused, as occurs in a gun barrel or rocket nozzle. For best results, this should be aimed straight up, to minimize the path length through the atmosphere. A desire to believe in life on Mars produced a type of “Martian mythology” that continues today. In 1877, Italian astronomer Giovanni Schiaparelli reported seeing grooves on Mars. The Italian word for groove is “canali”; therefore, many of us grew up hearing about “canals” on Mars—a mistranslation. Because canals are man-made structures, people started thinking about “little green men” on Mars. 
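For reference, the 3-miles-per-second launch figure used in the argument above matches the standard escape-velocity formula. The mass and radius below are commonly published values for Mars, not numbers taken from this article:

$$v_{\text{esc}} = \sqrt{\frac{2GM}{R}} = \sqrt{\frac{2\,(6.67\times10^{-11})(6.42\times10^{23}\ \text{kg})}{3.39\times10^{6}\ \text{m}}} \approx 5.0\ \text{km/s} \approx 3.1\ \text{mi/s}.$$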
In 1894, Percival Lowell, a wealthy amateur astronomer with a vivid imagination, built Lowell Observatory primarily to study Mars. Lowell published a map showing and naming Martian canals, and wrote several books: Mars (1895), Mars and Its Canals (1906), and Mars As the Abode of Life (1908). Even into the 1960s, textbooks displayed his map, described vegetative cycles on Mars, and explained how Martians might use canals to convey water from the polar ice caps to their parched cities. Few scientists publicly disagreed with the myth, even after 1949, when excellent pictures from the 200-inch telescope on Mount Palomar were available. Those of us in school before 1960 were directly influenced by such myths; almost everyone has been indirectly influenced.

Artists, science fiction writers, and Hollywood helped fuel this "Martian mania." In 1898, H. G. Wells wrote The War of the Worlds, telling of strange-looking Martians invading Earth. In 1938, Orson Welles, in a famous radio broadcast, panicked many Americans into thinking New Jersey was being invaded by Martians. In 1975, two Viking spacecraft were sent to Mars to look for life. Carl Sagan announced, shortly before the tests were completed, that he was certain life would be discovered—a reasonable conclusion, if life evolved. The prediction failed. In 1996, United States President Clinton read to a global television audience, "More than 4 billion years ago this piece of rock [ALH84001] was formed as a part of the original crust of Mars. After billions of years, it broke from the surface and began a 16-million-year journey through space that would end here on Earth." "... broke from the surface ..."? The myth is still alive.

Final Thoughts

As with the 24 other major features listed on page 106 [of the book, In the Beginning], we have examined the origin of asteroids and meteoroids from two directions: "cause-to-effect" and "effect-to-cause."

Cause-to-Effect. We saw that, given the assumption listed on page 115 [of the book, In the Beginning], consequences naturally followed: subterranean water became supercritical; the fountains of the great deep erupted; large rocks, muddy water, and water vapor were launched into space; gas and gravity assembled asteroids; and gas pressure powered by the Sun's energy (the radiometer effect) herded asteroids into the asteroid belt. Isolated rocks still moving in the solar system are meteoroids.

Effect-to-Cause. We considered seventeen effects (pages 302–306) [of the book, In the Beginning], each incompatible with present theories on the origin of asteroids and meteoroids. Each effect was evidence that many rocks and large volumes of water vapor were launched from Earth.

Portions of Part III will examine this global flood from a third direction: historical records from claimed eyewitnesses. All three perspectives reinforce each other, illuminating in different ways this catastrophic event.
http://4thdayalliance.com/articles/solar-system/origin-of-asteroids/
2.1 The Strength of Gravity and Electric Forces

Gravity is a relatively very weak force. The electric Coulomb force between a proton and an electron is of the order of 10^39 (that's 1 with 39 zeros after it) times stronger than the gravitational force between them. We can get a hint of the relative strength of electromagnetic forces when we use a small magnet to pick up an iron object, say, a ball bearing. Even though the whole of Earth's gravitational attraction is acting upon the ball bearing, the magnet overcomes this easily when close enough to the ball bearing. In space, gravity only becomes significant in those places where the electromagnetic forces are shielded or neutralized.

For spherical masses and charges, both the gravity force and the electric Coulomb force vary inversely with the square of the distance and so decrease rapidly with distance. For other geometries/configurations, the forces decrease more slowly with distance. For example, the force between two relatively long and thin electric currents moving parallel to each other varies inversely with the first power of the distance between them. Electric currents can transport energy over huge distances before using that energy to create some detectable result, just as we use energy from a distant power station to boil a kettle in our kitchen. This means that, over longer distances, electromagnetic forces and electric currents together can be much more effective than either the puny force of gravity or even the stronger electrostatic Coulomb force.

Remember that, just in order to explain the behavior of the matter we can detect, the Gravity Model needs to imagine twenty-four times more matter than we can see, in special locations, and of a special invisible type. It seems much more reasonable to investigate whether the known physics of electromagnetic forces and electric currents can bring about the observed effects instead of having to invent what may not exist.

2.2 The "Vacuum" of Space

Until about 100 years ago, space was thought to be empty. The words "vacuum" and "emptiness" were interchangeable. But probes have found that space contains atoms, dust, ions, and electrons. Although the density of matter in space is very low, it is not zero. Therefore, space is not a vacuum in the conventional sense of there being "nothing there at all". For example, the Solar "wind" is known to be a flow of charged particles coming from the Sun and sweeping round the Earth, ultimately causing visible effects like the Northern (and Southern) Lights.

The dust particles in space are thought to be 2 to 200 nanometers in size, and many of them are also electrically charged, along with the ions and electrons. This mixture of neutral and charged matter is called plasma, and it is suffused with electromagnetic fields. We will discuss plasma and its unique interactions with electromagnetic fields in more detail in Chapter 3. The "empty" spaces between planets or stars or galaxies are very different from what astronomers assumed in the earlier part of the 20th century.

(Note about terminology in links: astronomers often refer to matter in the plasma state as "gas," "winds," "hot, ionized gas," "clouds," etc. This fails to distinguish between the two differently-behaving states of matter in space, the first of which is electrically-charged plasma and the other of which may be neutral gas which is just widely-dispersed, non-ionized molecules or atoms.)
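The two quantitative claims in Section 2.1 can be written out explicitly. Using standard values for the constants (these numbers are not given in the text itself), the ratio of the Coulomb force to the gravitational force between a proton and an electron is

$$\frac{F_C}{F_G} = \frac{k e^2}{G\, m_p m_e} = \frac{(8.99\times10^{9})\,(1.60\times10^{-19})^2}{(6.67\times10^{-11})\,(1.67\times10^{-27})\,(9.11\times10^{-31})} \approx 2\times10^{39},$$

independent of separation, since the 1/r² factors cancel. Likewise, the force per unit length between two long parallel currents is

$$\frac{F}{L} = \frac{\mu_0 I_1 I_2}{2\pi d},$$

which falls off as 1/d rather than 1/d², as stated above.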
The existence of charged particles and electromagnetic fields in space is accepted in both the Gravity Model and the Electric Model. But the emphasis placed on them and their behavior is one distinctive difference between the models. We will therefore discuss magnetic fields next. 2.3 Introduction to Magnetic Fields What do we mean by the terms “magnetic field” and “magnetic field lines”? In order to understand the concept of a field, let’s start with a more familiar example: gravity. We know that gravity is a force of attraction between bodies or particles having mass. We say that the Earth’s gravity is all around us here on the surface of the Earth and that the Earth’s gravity extends out into space. We can express the same idea more economically by saying that the Earth has a gravitational field which extends into space in all directions. In other words, a gravitational field is a region where a gravitational force of attraction will be exerted between bodies with mass. Similarly, a magnetic field is a region in which a magnetic force would act on a magnetized or charged body. (We will look at the origin of magnetic fields later). The effect of the magnetic force is most obvious on ferromagnetic materials. For example, iron filings placed on a surface in a magnetic field align themselves in the direction of the field like compass needles. Because the iron filings tend to align themselves south pole to north pole, the pattern they make could be drawn as a series of concentric lines, which would indicate the direction and, indirectly, strength of the field at any point. Therefore magnetic field lines are one convenient way to represent the direction of the field, and serve as guiding centers for trajectories of charged particles moving in the field (ref. Fundamentals of Plasma Physics, Cambridge University Press, 2006, Paul Bellan, Ph.D.). It is important to remember that field lines do not exist as physical objects. Each iron filing in a magnetic field is acting like a compass: you could move it over a bit and it would still point magnetic north-south from its new position. Similarly, a plumb bob (a string with a weight at one end) will indicate the local direction of the gravitational field. Lines drawn longitudinally through a series of plumb bobs would make a set of gravitational field lines. Such lines do not really exist; they are just a convenient, imaginary means of visualizing or depicting the direction of force applied by the field. See Appendix I for more discussion of this subject, or here, at Fizzics Fizzle. A field line does not necessarily indicate the direction of the force exerted by whatever is causing the field. Field lines may be drawn to indicate direction or polarity of a force, or may be drawn as contours of equal intensities of a force, in the same way as contour lines on a map connect points of equal elevation above, say, sea level. Often, around 3-dimensional bodies with magnetic fields, imaginary surfaces are used to represent the area of equal force, instead of lines. By consensus, the definition of the direction of a magnetic field at some point is from the north to the south pole. In a gravitational field, one could choose to draw contour lines of equal gravitational force instead of the lines of the direction of the force. These lines of equal gravitational force would vary with height (that is, with distance from the center of the body), rather like contour lines on a map. 
To find the direction of the force using these elevation contour lines, one would have to work out which way a body would move. Placed on the side of a hill, a stone rolls downhill, across the contours. In other words the gravitational force is perpendicular to the field lines of equal gravitational force. Magnetic fields are more complicated than gravity in that they can either attract or repel. Two permanent bar magnets with their opposite ends (opposite “poles”, or N-S) facing each other will attract each other along the direction indicated by the field lines of the combined field from them both (see image above). Magnets with the same polarity (N-N or S-S) repel one other along the same direction. Magnetic fields also exert forces on charged particles that are in motion. Because the force that the charged particle experiences is at right angles to both the magnetic field line and the particle’s direction, a charged particle moving across a magnetic field is made to change direction (i.e. to accelerate) by the action of the field. Its speed remains unchanged to conserve kinetic energy. The following image shows what happens to an electron beam in a vacuum tube before and after a magnetic field is applied, in a lab demonstration. The magnetic force on a charged particle in motion is analogous to the gyroscopic force. A charged particle moving directly along or “with” a magnetic field line won’t experience a force trying to change its direction, just as pushing on a spinning gyroscope directly along its axis of rotation will not cause it to turn or “precess”. Even though the force on different charged particles varies, the concept of visualizing the direction of the magnetic field as a set of imaginary field lines is useful because the direction of the force on any one material, such as a moving charged particle, can be worked out from the field direction. 2.4 The Origin of Magnetic Fields There is only one way that magnetic fields can be generated: by moving electric charges. In permanent magnets, the fields are generated by electrons spinning around the nuclei of the atoms. A strong magnet is created when all the electrons orbiting the nuclei have spins that are aligned, creating a powerful combined force. If the magnet is heated to its Curie temperature, the thermal motion of the atoms breaks down the orderly spin alignments, greatly reducing the net magnetic field. In a metal wire carrying a current, the magnetic field is generated by electrons moving down the length of the wire. A more detailed introduction to the complex subject of exchange coupling and ferromagnetism can be found here. Either way, any time electric charges move, they generate magnetic fields. Without moving electric charges, magnetic fields cannot exist. Ampère’s Law states that a moving charge generates a magnetic field with circular lines of force, on a plane that is perpendicular to the movement of the charge. Since electric currents made up of moving electric charges can be invisible and difficult to detect at a distance, detecting a magnetic field at a location in space (by well-known methods in astronomy, see below) is a sure sign that it is accompanied by an electric current. If a current flows in a conductor, such as a long straight wire or a plasma filament, then each charged particle in the current will have a small magnetic field around it. When all the individual small magnetic fields are added together, the result is a continuous magnetic field around the whole length of the conductor. 
The regions in space around the wire where the field strength is equal (called "equipotential surfaces") are cylinders concentric with the wire. Time-varying electric and magnetic fields are considered later. (See Chapter IV and Appendix III.)

The question of the origin of magnetic fields in space is one of the key differences between the Gravity Model and the Electric Model. The Gravity Model allows for the existence of magnetic fields in space because they are routinely observed, but they are said to be caused by dynamos inside stars. For most researchers today, neither electric fields nor electric currents in space play any significant part in generating magnetic fields. In contrast, the Electric Model, as we shall see in more detail later, argues that magnetic fields must be generated by the movement of charged particles in space in the same way that magnetic fields are generated by moving charged particles here on Earth. Of course, the Electric Model accepts that stars and planets have magnetic fields, too, evidenced by magnetospheres and other observations. The new insight has been to explain a different origin for these magnetic fields in space if they are not created by dynamos in stars.

2.5 Detecting Magnetic Fields in Space

Since the start of the space age, spacecraft have been able to measure magnetic fields in the solar system using instruments on board the spacecraft. We can "see" magnetic fields beyond the range of spacecraft because of the effect that the fields have on light and other radiation passing through them. We can even estimate the strength of the magnetic fields by measuring the amount of that effect.

[Figure: an optical image of a galaxy (left) beside a map of its magnetic field intensity and direction (right).]

We have known about the Earth's magnetic field for centuries. We can now detect such fields in space, so the concept of magnetic fields in space is intuitively easy to understand, although astronomers have difficulty in explaining the origin of these magnetic fields. Magnetic fields can be detected at many wavelengths by observing the amount of symmetrical spectrographic emission line or absorption line splitting that the magnetic field induces. This is known as the Zeeman effect, after Dutch physicist and 1902 Nobel laureate Pieter Zeeman (1865–1943). Note in the right image above how closely the field direction aligns with the galactic arms visible in the optical image, left.

Another indicator of the presence of magnetic fields is the polarization of synchrotron emission radiated by electrons in magnetic fields, useful at galactic scales. See Beck's article on Galactic Magnetic Fields, in Scholarpedia, plus Beck and Sherwood's Atlas of Magnetic Fields in Nearby Galaxies. Measurement of the degree of polarization makes use of the Faraday effect. The Faraday rotation in turn leads to the derivation of the strength of the magnetic field through which the polarized light is passing. The highly instructional paper by Phillip Kronberg et al, Measurement of the Electric Current in a Kpc-Scale Jet, provides a compelling insight into the direct link between the measured Faraday rotation in the powerful "knots" in a large galactic jet, the resultant magnetic field strength, and the electric current present in the jet.

Magnetic fields are included in both the Gravity Model and the Electric Model of the Universe. The essential difference is that the Electric Model recognizes that magnetic fields in space always accompany electric currents. We will take up electric fields and currents next.
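Two textbook relations compactly summarize the behavior described in Sections 2.3 and 2.4; they are standard results restated here, not formulas from the original text. The force on a charge q moving with velocity v in a magnetic field B is

$$\vec{F} = q\,\vec{v}\times\vec{B}, \qquad |\vec{F}| = qvB\sin\theta,$$

which is always perpendicular to the velocity, so it bends the particle's path without changing its speed. And by Ampère's law, a long straight current I is surrounded by circular field lines of strength

$$B(r) = \frac{\mu_0 I}{2\pi r},$$

falling off as 1/r, with surfaces of equal field strength forming the concentric cylinders mentioned at the start of this passage.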
2.6 Introduction to Electric Fields

An electric charge has polarity. That is, it is either positive or negative. By agreement, the elementary (smallest) unit of charge is equal to that of an electron (-e) or a proton (+e). Electric charge is quantized; it is always an integer multiple of e. The fundamental unit of charge is the coulomb (C), where e = 1.60×10^-19 coulomb. By taking the inverse of the latter tiny value, one coulomb is 6.25×10^18 singly-charged particles. One ampere (A) of electric current is one coulomb per second. A 20 A current thus would be 20 C of charge per second, or the passage of 1.25×10^20 electrons per second past a fixed point.

Every charge has an electric field associated with it. An electric field is similar to a magnetic field in that it is caused by the fundamental force of electromagnetic interaction and its "range" or extent of influence is infinite, or indefinitely large. The electric field surrounding a single charged particle is spherical, like the gravitational acceleration field around a small point mass or a large spherical mass. The strength of an electric field at a point is defined as the force in newtons (N) that would be exerted on a positive test charge of 1 coulomb placed at that point. Like gravity, the force from one charge is inversely proportional to the square of the distance to the test (or any other) charge. The point in defining a test charge as positive is to consistently define the direction of the force due to one charge acting upon another charge. Since like charges repel and opposites attract, just like magnetic poles, the imaginary electric field lines tend to point away from positive charges and toward negative charges. See a short YouTube video on the electric field here.

Here is a user-controlled demonstration of 2 charges and their associated lines of force in this Mathematica application. You may need to download Mathematica Player (just once, and it's free) from the linked web site to play with the demo. Click on "Download Live Demo" after you install Mathematica Player. You can adjust strength and polarity of charge (+ or -) with the sliders, and drag the charged particles around the screen. Give the field lines time to smooth out between changes.

Electromagnetic forces are commonly stronger than gravitational forces on plasma in space. Electromagnetism can be shielded, while gravity cannot, so far as is known. The common argument in the standard model is that most of the electrons in one region or body are paired with protons in the nuclei of atoms and molecules, so the net forces of the positive charges and negative charges cancel out so perfectly that "for large bodies gravity can dominate" (link: Wikipedia, Fundamental Interactions, look under the Electromagnetism sub-heading).

What is overlooked above is that, with the occasional exception of relatively cool, stable and near-neutral planetary environments like those found here on Earth, most other matter in the Universe consists of plasma; i.e., charged particles and neutral particles moving in a complex symphony of charge separation and the electric and magnetic fields of their own making. Gravity, while always present, is not typically the dominant force. Far from consisting of mostly neutralized charge and weak magnetic and electric fields and their associated weak currents, electric fields and currents in plasma can and often do become very large and powerful in space.
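The unit bookkeeping at the start of Section 2.6 works out as follows (simply restating the text's own numbers):

$$\frac{1\ \text{C}}{1.60\times10^{-19}\ \text{C/charge}} = 6.25\times10^{18}\ \text{charges}, \qquad 20\ \text{A} = 20\ \text{C/s} = 20\times 6.25\times10^{18} = 1.25\times10^{20}\ \text{electrons per second}.$$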
The Electric Model holds that phenomena in space such as magnetospheres, Birkeland currents, stars, pulsars, galaxies, galactic and stellar jets, planetary nebulas, “black holes”, energetic particles such as gamma rays and X-rays and more, are fundamentally electric events in plasma physics. Even the rocky bodies – planets, asteroids, moons and comets, and the gas bodies in a solar system – exist in the heliospheres of their stars, and are not exempt from electromagnetic forces and their effects. Each separate charged particle contributes to the total electric field. The net force at any point in a complex electromagnetic field can be calculated using vectors, if the charges are assumed stationary. If charged particles are moving (and they always are), however, they “create” – are accompanied by – magnetic fields, too, and this changes the magnetic configuration. Changes in a magnetic field in turn create electric fields and thereby affect currents themselves, so fields that start with moving particles represent very complex interactions, feedback loops and messy mathematics. Charges in space may be distributed spatially in any configuration. If, instead of a point or a sphere, the charges are distributed in a linear fashion so that the length of a charged area is much longer than its width or diameter, it can be shown that the electric field surrounds the linear shape like cylinders of equal force potential, and that the field from this configuration decreases with distance from the configuration as the inverse of the distance (not the inverse square of the distance) from the centerline. This is important in studying the effects of electric and magnetic fields in filamentary currents such as lightning strokes, a plasma focus, or large Birkeland currents in space. Remember that the direction of applied force on a positive charge starts from positive charge and terminates on negative charge, or failing a negative charge, extends indefinitely far. Even a small charge imbalance with, say, more positively-charged particles here and more negatively-charged particles a distance away leads to a region of force or electric field between the areas of separated dissimilar charges. The importance of this arrangement will become more clear in the discussion of double layers in plasma, further on. Think of an electrical capacitor where there are two separated, oppositely charged plates or layers, similar to the two charged plates “B” in the diagram above. There will be an electric field between the layers. Any charged particle moving or placed between the layers will be accelerated towards the oppositely charged layer. Electrons (which are negatively charged) accelerate toward the positively charged layer, and positive ions and protons toward the negatively charged layer. According to Newton’s Laws, force results in acceleration. Therefore electric fields will result in charged particles’ acquiring velocity. Oppositely charged particles will move in opposite directions. An electric current is, by definition, movement of charge past a point. Electric fields therefore cause electric currents by giving charged particles a velocity. If an electric field is strong enough, then charged particles will be accelerated to very high velocities by the field. For a little further reading on electric fields see this. 
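The 1/r falloff claimed above for a long, thin charge distribution corresponds to the standard result for an idealized infinite line charge with linear charge density λ (a textbook idealization, not a formula given in the text):

$$E(r) = \frac{\lambda}{2\pi\varepsilon_0 r}.$$

The surfaces of equal field strength around such a line are coaxial cylinders, the "cylinders of equal force potential" described above.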
2.7 Detecting Electric Fields and Currents in Space Electric fields and currents are more difficult to detect without putting a measuring instrument directly into the field, but we have detected currents in the solar system using spacecraft. One of the first was the low-altitude polar orbit TRIAD satellite in the 1970s, which found currents interacting with the Earth’s upper atmosphere. In 1981 Hannes Alfvén described a heliospheric current model in his book, Cosmic Plasma. Since then, a region of electric current called the heliospheric current sheet (HCS) has been found that separates the positive and negative regions of the Sun’s magnetic field. It is tilted approximately 15 degrees to the solar equator. During one half of a solar cycle, outward-pointing magnetic fields lie above the HCS and inward-pointing fields below it. This is reversed when the Sun’s magnetic field reverses its polarity halfway through the solar cycle. As the Sun rotates, the HCS rotates with it, “dragging” its undulations into what NASA terms “the standard Parker spiral”. Spacecraft have measured changes over time in the current sheet at various locations since the 1980s. They have detected near-Earth and solar currents as well. The Gravity Model accepts that these currents exist in space but assumes they are a result of the magnetic field. We will return to this point later. Electric fields outside the reach of spacecraft are not detectable in precisely the same way as magnetic fields. Line-splitting or broadening in electric fields occurs, but it is asymmetrical line splitting that indicates the presence of an electric field, in contrast to the symmetric line splitting in magnetic fields. Further, electric field line broadening is sensitive to the mass of the elements emitting light (the lighter elements being readily broadened or split, and heavier elements less so affected), while Zeeman (magnetic field) broadening is indifferent to mass. Asymmetric bright-line splitting or broadening is called the Stark effect, after Johannes Stark (1874–1957). Another way in which we can detect electric fields is by inference from the behavior of charged particles, especially those that are accelerated to high velocities, and the existence of electromagnetic radiation such as X-rays in space, which we have long known from Earth-bound experience are generated by strong electric fields. Electric currents in low density plasmas in space operate like fluorescent lights or evacuated Crookes Tubes. In a weak current state, the plasma is dark and radiates little visible light (although cold, thin plasma can radiate a lot in the radio and far infrared wavelengths). As current increases, plasma enters a glow mode, radiating a modest amount of electromagnetic energy in the visible spectrum. This is visible in the image at the end of this chapter. When electrical current becomes very intense in a plasma, the plasma radiates in the arc mode. Other than scale, there is little significant difference between lightning and the radiating surface of a star’s photosphere. This means, of course, that alternative explanations for these effects are also possible, at least in theory. The Gravity Model often assumes that the weak force of gravity multiplied by supernatural densities that are hypothesized to make up black holes or neutron stars creates these types of effect. Or maybe particles are accelerated to near-light-speed by supernovae explosions. 
The question is whether “multiplied gravity” or lab-testable electromagnetism is more consistent with observations that the Universe is composed of plasma. The Electric Model argues that electrical effects are not just limited to those parts of the solar system that spacecraft have been able to reach. The Electric Model supposes that similar electrical effects also occur outside the solar system. After all, it would be odd if the solar system was the only place in the Universe where electrical effects do occur in space. End of Chapter 2
http://www.thunderbolts.info/wp/2011/10/17/essential-guide-to-the-eu-chapter-2/
An Introduction to MATLAB: Basic Operations

MATLAB is a programming language that is very useful for numerical simulation and data analysis. The following tutorials are intended to give you an introduction to scientific computing in MATLAB. Lots of MATLAB demos are available online. You can work through these at your leisure, if you want. Everything you need for EOS 225 should be included in the following tutorials.

At its simplest, we can use MATLAB as a calculator. Type a simple sum, for example

2 + 3

What do you get?

ans = 5

Now try a product, for example

3 * 7

What do you get?

ans = 21

Can also do more complicated operations, like taking exponents: for "3 squared" type

3^2

ans = 9

For "two to the fourth power" type

2^4

ans = 16

"Scientific notation" is expressed with "10^" replaced by "e" - that is, 10^7 is written 1e7 and 2.15x10^-3 is written 2.15e-3. For example:

1.5e-2

ans = 0.0150

2e-3 * 1000

ans = 2

MATLAB has all of the basic arithmetic operations built in:

+ addition
- subtraction
* multiplication
/ division
^ exponentiation

as well as many more complicated functions (e.g. trigonometric, exponential):

sin(x) sine of x (in radians)
cos(x) cosine of x (in radians)
exp(x) exponential of x
log(x) base e logarithm of x (normally written ln)

The above are just a sample - MATLAB has lots of built-in functions.

When working with arithmetic operations, it's important to be clear about the order in which they are to be carried out. This can be specified by the use of brackets. For example, if you want to multiply 5 by 2 then add 3, we can type

5*2 + 3

ans = 13

and we get the correct value. If we want to multiply 5 by the sum of 2 and 3, we type

5*(2 + 3)

ans = 25

and this gives us the correct value. Carefully note the placement of the brackets. If you don't put brackets, MATLAB has its own built-in order of operations: multiplication/division first, then addition/subtraction. For example:

5*2 + 3

ans = 13

gives the same answer as (5*2)+3. As another example, if we want to divide 8 by 2 and then subtract 3, we type

8/2 - 3

ans = 1

and get the right answer. To divide 8 by the difference between 2 and 3, we type

8/(2 - 3)

ans = -8

and again get the right answer. If we type

8/2 - 3

ans = 1

we get the first answer - the order of operations was division first, then subtraction. In general, it's good to use brackets - they involve more typing, and may make a computation look more cumbersome, but they help reduce ambiguity regarding what you want the computation to do.

This is a good point to make a general comment about computing. Computers are actually quite stupid - they do what you tell them to, not what you want them to do. When you type any commands into a computer program like MATLAB, you need to be very careful that these two things match exactly.

You can always get help in MATLAB by typing "help". Type this alone and you'll get a big list of directories you can get more information about - which is not always too useful. It's more useful to type "help" with some other command that you'd like to know more about. E.g.:

help sin

SIN Sine of argument in radians.
SIN(X) is the sine of the elements of X.
See also ASIN, SIND.
Reference page in Help browser
doc sin

help atan

ATAN Inverse tangent, result in radians.
ATAN(X) is the arctangent of the elements of X.
See also ATAN2, TAN, ATAND.
Reference page in Help browser
doc atan

You can get a list of all the built-in functions by typing

help elfun

Elementary math functions. Trigonometric. sin - Sine. sind - Sine of argument in degrees. sinh - Hyperbolic sine. asin - Inverse sine. asind - Inverse sine, result in degrees. asinh - Inverse hyperbolic sine. cos - Cosine. cosd - Cosine of argument in degrees.
cosh - Hyperbolic cosine. acos - Inverse cosine. acosd - Inverse cosine, result in degrees. acosh - Inverse hyperbolic cosine. tan - Tangent. tand - Tangent of argument in degrees. tanh - Hyperbolic tangent. atan - Inverse tangent. atand - Inverse tangent, result in degrees. atan2 - Four quadrant inverse tangent. atanh - Inverse hyperbolic tangent. sec - Secant. secd - Secant of argument in degrees. sech - Hyperbolic secant. asec - Inverse secant. asecd - Inverse secant, result in degrees. asech - Inverse hyperbolic secant. csc - Cosecant. cscd - Cosecant of argument in degrees. csch - Hyperbolic cosecant. acsc - Inverse cosecant. acscd - Inverse cosecant, result in degrees. acsch - Inverse hyperbolic cosecant. cot - Cotangent. cotd - Cotangent of argument in degrees. coth - Hyperbolic cotangent. acot - Inverse cotangent. acotd - Inverse cotangent, result in degrees. acoth - Inverse hyperbolic cotangent. hypot - Square root of sum of squares. Exponential. exp - Exponential. expm1 - Compute exp(x)-1 accurately. log - Natural logarithm. log1p - Compute log(1+x) accurately. log10 - Common (base 10) logarithm. log2 - Base 2 logarithm and dissect floating point number. pow2 - Base 2 power and scale floating point number. realpow - Power that will error out on complex result. reallog - Natural logarithm of real number. realsqrt - Square root of number greater than or equal to zero. sqrt - Square root. nthroot - Real n-th root of real numbers. nextpow2 - Next higher power of 2. Complex. abs - Absolute value. angle - Phase angle. complex - Construct complex data from real and imaginary parts. conj - Complex conjugate. imag - Complex imaginary part. real - Complex real part. unwrap - Unwrap phase angle. isreal - True for real array. cplxpair - Sort numbers into complex conjugate pairs. Rounding and remainder. fix - Round towards zero. floor - Round towards minus infinity. ceil - Round towards plus infinity. round - Round towards nearest integer. mod - Modulus (signed remainder after division). rem - Remainder after division. sign - Signum. MATLAB can be used like a calculator - but it's much more. It's also a programming language, with all of the basic components of any such language. The first and most basic of these components is one that we use all the time in math - the variable. Like in math, variables are generally denoted symbolically by individual characters (like "a" or "x") or by strings of characters (like "var1" or "new_value"). In class we've distinguished between variables and parameters - but denoted both of these by characters. MATLAB doesn't make this distinction - any numerical quantity given a symbolic "name" is a variable". How do we assign a value to a variable? Easy - just use the equality sign. For example a = 3 a = 3 sets the value 3 to the variable a. As another example b = 2 b = 2 sets the value 2 to the variable b. We can carry out mathematical operations with these variables: e.g. ans = 5 ans = 6 ans = 9 ans = 20.0855 Although operation of setting a value to a variable looks like an algebraic equality like we use all the time in math, in fact it's something quite different. The statement a = 3 should not be interpreted as "a is equal to 3". It should be interpreted as "take the value 3 and assign it to the variable a". This difference in interpretation has important consequences. In algebra, we can write a = 3 or 3 = a -- these are equivalent. 
The = symbol in MATLAB is not symmetric - the command a = b should be interpreted as "take the value of b and assign it to the variable a" - there's a single directionality. And so, for example, we can type

a = 3
a = 3

with no problem, but if we type

3 = a

we get an error message. The value 3 is fixed - we can't assign another number to it. It is what it is.

Another consequence of the way that the = operator works is that a statement like a = a+1 makes perfect sense. In algebra, this would imply that 0 = 1, which is of course nonsense. In MATLAB, it means "take the value that a has, add one to it, then assign that value to a". This changes the value of a, but that's allowed. For example, type:

a = 3
a = a+1

which returns

a = 3
a = 4

First a is assigned the value 3, then (by adding one) it becomes 4.

There are some built in variables; one of the most useful is pi:

pi
ans = 3.1416

We can also assign the output of a mathematical operation to a new variable: e.g.

b = a*exp(a)
b = 218.3926

If you want MATLAB to just assign the value of a calculation to a variable without telling you the answer right away, all you have to do is put a semicolon after the calculation:

b = a*exp(a);

Being able to use variables is very convenient, particularly when you're doing a multi-step calculation with the same quantity and want to be able to change the value. For example:

a = 1;
b = 3*a;
c = a*b^2;
d = c*b-a;
d = 26

Now say I want to do the same calculation with a = 3; all I need to do is make one change

a = 3;
b = 3*a;
c = a*b^2;
d = c*b-a;

How does this make things any easier? Well, it didn't really here - we still had to type out the equations for b, c, and d all over again. But we'll see that in a stand-alone computer program it's very useful to be able to do this.

In fact, the sequence of operations above is an example of a computer program. Operations are carried out in a particular order, with the results of earlier computations being fed into later ones. It is very important to understand this sequential structure of programming. In a program, things happen in a very particular order: the order you tell them to have. It's very important to make sure you get this order right. This is pretty straightforward in the above example, but can be much more complicated in more complicated programs.

Any time a variable is created, it's kept in memory until you purposefully get rid of it (or quit the program). This can be useful - you can always use the variable again later. It can also make things harder - for example, in a long program you may try using a variable name that you've already used for another variable earlier in the program, leading to confusion. It can therefore be useful sometimes to make MATLAB forget about a variable; for this the "clear" command is used. For example, define

b = 3;

Now if we ask what b is, we'll get back that it's 3

b
b = 3

Using the clear command to remove b from memory

clear b

now if we ask about b we get the error message that it's not a variable in memory - we've succeeded in getting rid of it. To get rid of everything in memory, just type

clear

An important idea in programming is that of an array (or matrix). This is just an ordered sequence of numbers (known as elements): e.g.

M = [1, 22, -0.4]

is a 3-element array in which the first element is 1, the second element is 22, and the third element is -0.4. These are ordered - in this particular array, these numbers always occur in this sequence - but this doesn't mean that there's any particular structure to the ordering in general.
That is - in an array, numbers don't have to increase or decrease or anything like that. The elements can be in any order - but that order partly defines the array. Also note that the numbers can be integers or rational numbers, positive or negative.

While the elements of the array can be any kind of number, their positions are identified by integers: there is a first, a second, a third, a fourth, etc., up until the end of the array. It's standard to indicate the position in the array using bracket notation: in the above example, the first element is M(1) = 1, the second element is M(2) = 22, and the third element is M(3) = -0.4. These integers counting off position in the array are known as "indices" (singular "index").

All programming languages use arrays, but MATLAB is designed to make them particularly easy to work with (the MAT is for "matrix"). To make the array above in MATLAB all you need to do is type

M = [1 22 -0.4]
M = 1.0000 22.0000 -0.4000

Then to look at individual elements of the array, just ask for them by index number:

M(1)
ans = 1

M(2)
ans = 22

M(3)
ans = -0.4000

We can also ask for certain ranges of an array, using the "colon" operator. For an array M we can ask for element i through element j by typing M(i:j):

M(1:2)
ans = 1 22

M(2:3)
ans = 22.0000 -0.4000

If we want all elements of the array, we can type the colon on its own:

M(:)
ans = 1.0000 22.0000 -0.4000

We can also use this notation to make arrays with a particular structure. Typing M = a:b:c makes an array that starts with first element M(1) = a and increases with increment b: M(2) = a+b, M(3) = a+2b, M(4) = a+3b, and so on. The array stops at the largest value of N for which M(N) <= c.

M = 1:1:3
M = 1 2 3

The array starts with 1, increases by 1, and ends at 3.

M = 1:.5:3
M = 1.0000 1.5000 2.0000 2.5000 3.0000

The array starts at 1, increases by 0.5, and ends at 3.

M = 1:.6:3
M = 1.0000 1.6000 2.2000 2.8000

Here the array starts at 1, increases by 0.6, and ends at 2.8 - because making one more step in the array would make the last element bigger than 3.

M = 3:-.5:1
M = 3.0000 2.5000 2.0000 1.5000 1.0000

This kind of array can also be decreasing. If the increment size b isn't specified, a default value of 1 is used:

M = 1:5
M = 1 2 3 4 5

That is, the array a:c is the same as the array a:1:c.

It is important to note that while the elements of an array can be any kind of number, the indices must be positive integers (1 and bigger). Asking for an element at a non-positive or fractional index will just produce an error message.

Each of the elements of an array is a variable on its own, which can be used in a mathematical operation. E.g.:

ans = 4

ans = 6

The array itself is also a kind of variable - an array variable. You need to be careful with arithmetic operations (addition, subtraction, multiplication, division, exponentiation) when it comes to arrays - these things can be defined, but they have to be defined correctly. We'll look at this later.

In MATLAB, when most functions are fed an array as an argument they give back an array of the function acting on each element. That is, for the function f and the array M, g=f(M) is an array such that g(i) = f(M(i)).

a = 0:4;
b = exp(a)
b = 1.0000 2.7183 7.3891 20.0855 54.5982

Let's define two arrays of the same size

a = 1:5;
b = exp(a);
plot(a,b)

and what we get is a plot of the array a versus the array b - in this case, a discrete version of the exponential function exp(x) over the range x=1 to x=5.
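To consolidate the indexing and colon-operator ideas above, here is a short sketch you can type in yourself. The variable name t and the particular numbers are arbitrary choices for illustration; the end keyword, which always refers to the last index, isn't covered above but is standard MATLAB:

% build an array from 0 to 10 in steps of 2.5
t = 0:2.5:10
% pick out the first element, the last element, and a sub-range
t(1)
t(end)        % "end" refers to the last index of the array
t(2:4)        % elements 2 through 4
% individual elements behave like ordinary variables
t(2) + t(3)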
We can plot all sorts of things: the program

a = 0:.01:5;
b = cos(2*pi*a);
plot(a,b)

sets the variable a as a fine discretisation of the range from x=0 to x=5, defines b as the cosine of 2 pi x over that range, and plots a against b - showing us the familiar sinusoidal waves. We can also do all sorts of things with plots - stretch them vertically and horizontally, flip them upside down, give them titles and label the axes, have multiple subplots in a single plot ... but we'll come to these as we need them.

Arithmetic operations (addition, subtraction, multiplication, division) between an array and a scalar (a single number) are straightforward. If we add an array and a scalar, every element in the array is added to that scalar: the ith element of the sum of the array M and the scalar a is M(i)+a.

M = [1 3 -.5 7];
M2 = M+1
M2 = 2.0000 4.0000 0.5000 8.0000

Similarly, we can subtract, multiply by, and divide by a scalar.

M3 = 3*M
M3 = 3.0000 9.0000 -1.5000 21.0000

M4 = M/10
M4 = 0.1000 0.3000 -0.0500 0.7000

It's even possible to add, subtract, multiply and divide arrays with other arrays - but we have to be careful doing this. In particular, we can only do these things between arrays of the same size: that is, we can't add a 5-element array to a 10-element array. If the arrays are the same size, these arithmetic operations are straightforward. For example, the sum of the N-element array a and the N-element array b is an N-element array c whose ith element is c(i) = a(i)+b(i)

a = [1 2 3];
b = [2 -1 4];
c = a+b;
c
c = 3 1 7

That is, addition is element-wise. It's just the same with subtraction.

d = a-b;
d
d = -1 3 -1

With multiplication we use a somewhat different notation. Mathematics defines a special kind of multiplication between arrays - matrix multiplication - which is not what we're doing here. However, it's what MATLAB thinks you're doing if you use the * sign between arrays. To multiply arrays element-wise (like with addition), we need to use the .* notation (note the "." before the "*"):

e = a.*b;
e
e = 2 -2 12

Similarly, to divide, we don't use /, but rather ./

f = a./b;
f
f = 0.5000 -2.0000 0.7500

(once again, note the dot). As we'll see over and over again, it's very useful to be able to carry out arithmetic operations between arrays. For example, say we want to make a plot of x versus 1/x between x = 2 and x = 4. Then we can type in the program

x = 2:.1:4;
y = 1./x;
plot(x,y)
xlabel('x'); ylabel('y');

Note how we put the labels on the axes - using the commands xlabel and ylabel, with the arguments 'x' and 'y'. Because the arguments are character strings - not numbers - they need to be in single quotes. The axis labels can be more complicated, e.g.

xlabel('x (between 1 and 5)')
ylabel('y = 1/x')

We haven't talked yet about how to exponentiate an array. To take the array M to the power b element-wise, we type M.^b. Note again the "." before the "^" in the exponentiation. As an example

x = [1 2 3 4];
y = x.^2
y = 1 4 9 16

As another example, we can redo the earlier program:

x = 2:.1:4;
y = x.^(-1);
plot(x,y)
xlabel('x'); ylabel('y');

Note that we put the "-1" in brackets - this makes sure that the minus sign associated with making the exponent negative is applied before the "^" of the exponentiation. In this case, we don't have to do this - but when programming it doesn't hurt to be as specific as possible.

These are the basic tools that we'll need to use MATLAB. Subsequent tutorials will cover other aspects of writing a program - but what we've talked about above forms the core.
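Pulling these pieces together, here is a small stand-alone script in the same spirit; the particular function plotted is just an arbitrary choice for illustration:

% combine scalar arithmetic, element-wise division and plotting
x = 0.5:0.1:3;          % a finely spaced array of x values
y = (3*x + 2)./x;       % element-wise arithmetic: (3x+2)/x at every point
plot(x, y)
xlabel('x'); ylabel('(3x+2)/x')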
Everything that follows will build upon the material in this tutorial. The following exercises will use the tools we've learned above and are designed to get you thinking about programming. In writing your programs, you'll need to be very careful to think through:

(1) what is the goal of the program (what do I need it to do?)
(2) what do I need to tell MATLAB?
(3) what order do I need to tell it in?

It might be useful to sketch the program out first, before typing anything into MATLAB. It can even be useful to write the program out on paper first and walk through it step by step, seeing if it will do what you think it should.

Exercise 1. Plot the following functions:
(a) y = 3x+2 with x = 0, 0.25, 0.5, ...., 7.75, 8
(b) y = exp(-x^2) with x = 0, 0.1, 0.2, ..., 2
(c) y = ln(exp(x^-1)) with x = 1, 1.5, 2, ..., 4
(d) y = (ln(exp(x)))^-1 with x = 1, 1.5, 2, ..., 4

Exercise 2. A mountain range has a tectonic uplift rate of 1 mm/yr and an erosional timescale of 1 million years. If the mountain range starts with a height h(0) = 0 at time t = 0, write a program that predicts and plots the height h(t) at t = 0, 1, 2, 3, 4 and 5 million years (neglecting isostatic effects). Label the axes of this plot, including units. (One possible way to set this up is sketched below, after the exercises.)

Exercise 3. Repeat Exercise 2 in the case that the erosional timescale is 500,000 years.

Exercise 4. Repeat Exercise 3 in the case that the tectonic uplift rate is 2 mm/yr.
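For Exercise 2, the exercise itself doesn't state the governing equation; a common choice for this kind of problem - and it is only an assumption here, not something given in the course material - is the linear uplift-erosion balance dh/dt = U - h/tau, whose solution is h(t) = U*tau*(1 - exp(-t/tau)). A minimal sketch under that assumption:

% Assumed model: dh/dt = U - h/tau  =>  h(t) = U*tau*(1 - exp(-t/tau))
U   = 1e-3;                       % uplift rate in m/yr (1 mm/yr)
tau = 1e6;                        % erosional timescale in years
t   = 0:1e6:5e6;                  % times from 0 to 5 million years
h   = U*tau*(1 - exp(-t/tau));    % height in metres at each time
plot(t, h)
xlabel('time (years)'); ylabel('height (m)')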
http://web.uvic.ca/~monahana/eos225/matlab_tutorial/tutorial_1/introduction_to_matlab.html
Area And Perimeter Powerpoint PPT

A listing of PowerPoint presentations on area and perimeter; representative excerpts from the individual presentations:

- Comparing pools: "Therefore, Family A has the pool with the bigger swimming area. The perimeter of Family A's pool is 12 units long."
- Area and Perimeter, by Christine Berg (edited by V T Hamilton): perimeter is the sum of the lengths of all sides of the figure; area is the number of square units inside a figure; abbreviations A = area, b = base, h = height; to find the area of a rectangle, multiply the length of the base by the height.
- Finding Area and Perimeter of Polygons: area is the number of square units needed to cover a surface (inside), length x width; perimeter is the sum of the lengths of all the sides of a polygon, the distance around the outside of a shape. Example: an 8 cm by 6 cm rectangle has perimeter 8 + 6 + 8 + 6 = 28 cm.
- Perimeter: the perimeter of a closed plane figure is the length of its boundary, e.g. 15 + 8 + 10 + 8 + 7 = 48 cm; area is the space within the boundary of a figure.
- Jeopardy: a Perimeter & Area quiz game (perimeter, triangles, circles and toss-up categories).
- Finding the Perimeter: "Take a walk around the edge!" worked examples.
- Area of a Rectangle (www.mathxtc.com): one of a series of slideshows on calculating the area of various 2-D shapes.
- Perimeter, Circumference and Area (1.7 Notes): formulas found on p. 49 of the textbook, with a discussion of "square units" vs "units squared".
- Formulas for Geometry (Mr. Ryan): e.g. a circle with radius 5 has diameter 10, area 3.14 x (5 x 5) and circumference 3.14 x 10.
- Area of a Triangle: area is one-half the base times the height.
- Surface Area: the surface area of any prism is SA = 2B + LSA, where LSA (lateral surface area) = perimeter of the base x height of the prism and B = area of the base; includes rectangular and triangular prism examples.
- The Area and Perimeter of a Circle: a circle is defined by its diameter or radius; the perimeter or circumference is the distance around the outside, and the area is the space inside; what is the formula relating the circumference to the diameter?
- A note on terminology: one of the great areas of confusion for students in the measurement strand is area and perimeter; in fact it sometimes seems that there is another term out there, "Arimeter".
- Area and Perimeter (Math 7): there are two measurements that tell you about the size of a polygon; the number of squares needed to cover the polygon is its area.
- Area, Perimeter and Volume (Section 3-4-3): perimeter is measuring around the outside of something and requires the addition of all sides of the shape.
- Area and Perimeter of Triangles, Parallelograms, Rhombi and Trapezoids (Mr. Miller, Geometry, Chapter 10, Sections 1 and 2).
- The perimeter of a triangle is the measure around the triangle, a + b + c.
- Understanding Area and Perimeter (Amy Boesen, CS255): perimeter is simply the distance around an object.
- Area and perimeter of irregular shapes: to find the perimeter, you first need to make sure you have all of the information you need, e.g. sides of 19 yd, 30 yd, 37 yd, 23 yd, 7 yd and 18 yd.
- Perimeter and Area of Basic Shapes: for a triangle, P = s1 + s2 + s3 and A = 1/2 bh; includes exercises such as calculating the surface area of a gear head motor and the perimeter of a machined component.
- Squares: perimeter = 4l; area of a square.
- Area Formulas: rectangle and trapezoid area formulas with practice problems.
- Effect of Change: the effects on perimeter, area, and volume when dimensions are changed proportionally, e.g. how the perimeter of a 7 ft by 4 ft rectangle changes when it is doubled to 14 ft by 8 ft.
- Practice: find the area and perimeter of a rectangle; P = 2L + 2W, so an 8 cm by 4 cm rectangle has perimeter (2 x 8 cm) + (2 x 4 cm) = 24 cm.
- Perimeter and Area Applications: area is the number of square units, or units squared, needed to cover the surface.
- Surface area of prisms: 2B + LSA = SA, with rectangular and triangular prism worked examples.
- Geometry: Perimeter and Area (Lesson 6-8): find the perimeter and area of figures; the perimeter is the distance around a figure, found by adding the lengths of all the sides; for rectangles a common formula is used.
- Perimeter & Area of Rectangles and Squares (Learning Target #8).
- Circles: circumference is the distance around a circle, or its perimeter, given by pi x diameter; area is the measure of the amount of surface enclosed by the boundary of a figure.
- Section 6.1, Perimeter & Area of Rectangles & Parallelograms: perimeter is the distance around the outside of a figure; area is the number of square units inside a figure.
- Ruggedized Unattended Sensor Systems for Perimeter or Area Security: expands the physical security and surveillance footprint (an unrelated search hit).
- Irregular shapes (Objective 0506.4.2): decompose irregular shapes to find perimeter and area; perimeter is all sides added together.
- Area and Circumference of a Circle.
- Area & Perimeter of Quadrilaterals & Parallelograms: add up all four sides, or use P = 2L + 2W; the perimeter of a square is P = 4s; A = L * W for a rectangle, A = s^2 for a square, A = B * h for a parallelogram.
- Estimate Perimeter, Circumference, and Area: when estimating the perimeter of shapes on a grid, use the length of one grid square to approximate the length of each side of the figure.
- Lesson 8, Perimeter and Area: the perimeter of a closed figure is the distance around the outside; for a polygon, it is found by adding the lengths of all of its sides.
- Perimeter, area and volume: approaches to working out the perimeter, area and volume of 2D and 3D shapes.
- Inicios in Mathematics (NCCEP - UT Austin): algebra and geometry activities, including area invariance for triangles (a dynamic geometry interpretation of 1/2 b * h).
- Sec. 1-9, Perimeter, Circumference, and Area: find the perimeters and areas of rectangles and squares, and the circumferences of circles.
- Prisms: lateral area LA = Ph (perimeter of base x height of prism); surface area SA = Ph + 2B.
http://freepdfdb.com/ppt/area-and-perimeter-powerpoint
Compounding Functions and Graphing Functions of Functions

We know that functions map numbers to other numbers, so what happens when you have a function of a function? Welcome to functions within functions, the realm of composite functions!

Functions

Recall that functions are like a black box; they map numbers to other numbers. If y is a function of x, then we write it as y=f(x). And for this function, we have an input, x, and an output, y. So x is our independent variable, and y is our dependent variable. Our input will be anywhere within the domain of the function, and our output will be anywhere within the range of the function.

Composite Functions

So perhaps it's not too much of a stretch to know that you can combine functions into a big function. In math, this is known as a composition of functions. Here you start with x, and you use it as input to a function, y=f(x). And you're going to put that as input into a second function, g. So if we have a function y=f(x), and we want to plug it into z=g(y), we can end up with z=g(f(x)). This is a composite function.

When you're looking at composite functions, there are two main points to keep in mind. First, you need to evaluate the function from the inside out. You need to figure out what f(x) is before you figure out what g is. Say we have the function f(x)=3x, and we have another function g(x)=4 + x. I'm going to find z when x=2. We're going to find f(x) when x=2: f(2)=3 * 2, which is 6. Saying g(f(2)) is like saying g(6). We do the same thing and say g(6) = 4 + 6. Well, that's 10, so z is just 10.

The second thing to keep in mind is that g(f(x)) does not equal f(g(x)). There are some cases where it can, but in general, it does not. So if we use f(x)=3x and g(x)=x + 4, then let's look at the case where x=0. Then g(f(0)), where f(0) is 0 * 3 - well, that's just zero, so I'm looking at g(0). I plug zero in for x here, and it's just 4. Now, if I look at f(g(0)), that's like saying f(4), and that gives me 12. f(g(0))=12, and g(f(0))=4. Those are not the same. So, g(f(x)) does not equal f(g(x)).

Domain and Range of Composite Functions

What happens to the domain and range of a composite function? Well, if we have the function g(x), we have some domain and some range for g(x). Separately, we've got a domain and a range for f(x). If I write f(g(x)), then the output of g(x), which is its range, has to be somewhere in the domain of f(x). Otherwise, we could get a number here that f(x) really doesn't know what to do with.

What does all this really mean? Consider the function f(x)=sin(x). The domain of sin(x) is going to be all of x, and the range is going to be between -1 and 1. Now let's look at the function g(x) equals the absolute value of x, or g(x)=abs(x). Again the domain is all of x, and the range is everything greater than or equal to 0. If I take those two - here's my range of sin(x) - what happens to g(f(x))? So g is the absolute value, so I'll have abs(sin(x)). What's the domain and range of that composite function? If I'm graphing g(f(x)), I'm graphing the absolute value of sin(x), so the graph looks like this. I have a range here that goes from 0 to 1 and a domain that covers all of x. Well, this makes sense. What if I look at f(g(x)), so the function is going to be sine of the absolute value of x, sin(abs(x)).
For the absolute value of x, you can take anything as input, so the domain is going to be all values of x, and the range of abs(x) is going to be zero and up - zero or any positive number. Now, sine can take anything, so the range of abs(x) is within the domain of sin(x), but what happens to the output? What is the range of this composite function? Let's graph it - is that unexpected? Now the range is in between -1 and 1, which just so happens to be the range of f(x).

Lesson Summary

To recap, we know functions map numbers to other numbers, like y=f(x). The domain and range tell us the possible values for the input and output of a function. Composite functions take the output of one function and use it as input for another function, and we write this f(g(x)). We're going to evaluate f(g(x)) from the inside out, so we're going to evaluate g(x) before we evaluate f(x). And we also know that f(g(x)) does not equal g(f(x)).
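As an aside (not part of the original lesson), the inside-out evaluation and the order-of-composition point are easy to check numerically. Here is a short MATLAB sketch using anonymous functions, with the same f and g as in the example above:

f = @(x) 3*x;        % f(x) = 3x
g = @(x) x + 4;      % g(x) = x + 4
g(f(2))              % evaluate inside out: f(2) = 6, then g(6) = 10
f(g(2))              % g(2) = 6, then f(6) = 18 - a different answer
g(f(0))              % returns 4
f(g(0))              % returns 12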
http://education-portal.com/academy/lesson/compounding-functions-and-graphing-functions-of-functions.html
An ice core is a core sample that is typically removed from an ice sheet, most commonly from the polar ice caps of Antarctica and Greenland or from high mountain glaciers elsewhere. As the ice forms from the incremental build-up of annual layers of snow, lower layers are older than upper ones, and an ice core contains ice formed over a range of years. The properties of the ice and the recrystallized inclusions within the ice can then be used to reconstruct a climatic record over the age range of the core, normally through isotopic analysis. This enables the reconstruction of local temperature records and the history of atmospheric composition.

Ice cores contain an abundance of information about climate. Inclusions in the snow of each year remain in the ice, such as wind-blown dust, ash, bubbles of atmospheric gas and radioactive substances. The variety of climatic proxies is greater than in any other natural recorder of climate, such as tree rings or sediment layers. These include (proxies for) temperature, ocean volume, precipitation, chemistry and gas composition of the lower atmosphere, volcanic eruptions, solar variability, sea-surface productivity, desert extent and forest fires. The length of the record depends on the depth of the ice core and varies from a few years up to 800 kyr (800,000 years) for the EPICA core. The time resolution (i.e. the shortest time period which can be accurately distinguished) depends on the amount of annual snowfall, and reduces with depth as the ice compacts under the weight of layers accumulating on top of it. Upper layers of ice in a core correspond to a single year or sometimes a single season. Deeper into the ice the layers thin and annual layers become indistinguishable.

An ice core from the right site can be used to reconstruct an uninterrupted and detailed climate record extending over hundreds of thousands of years, providing information on a wide variety of aspects of climate at each point in time. It is the simultaneity of these properties recorded in the ice that makes ice cores such a powerful tool in paleoclimate research.

Structure of ice sheets and cores

Ice sheets are formed from snow. Because an ice sheet survives the summer, the temperature at that location usually does not warm much above freezing; in many locations in Antarctica the air temperature is always well below the freezing point of water. If summer temperatures do get above freezing, any ice core record will be severely degraded or completely useless, since meltwater will percolate into the snow.

The surface layer is snow in various forms, with air gaps between snowflakes. As snow continues to accumulate, the buried snow is compressed and forms firn, a grainy material with a texture similar to granulated sugar. Air gaps remain, and some circulation of air continues. As snow accumulates above, the firn continues to densify, and at some point the pores close off and the air is trapped. Because the air continues to circulate until then, the ice age and the age of the gas enclosed are not the same, and may differ by hundreds of years. The gas age–ice age difference is as great as 7 kyr in glacial ice from Vostok.

Under increasing pressure, at some depth the firn is compressed into ice. This depth ranges from a few tens of meters up to typically 100 m for Antarctic cores. Below this level material is frozen in the ice. Ice may appear clear or blue. Layers can be visually distinguished in firn and in ice to significant depths.
In a location on the summit of an ice sheet where there is little flow, accumulation tends to move down and away, creating layers with minimal disturbance. In a location where underlying ice is flowing, deeper layers may have increasingly different characteristics and distortion. Drill cores near bedrock are often challenging to analyze due to distorted flow patterns and a composition likely to include materials from the underlying surface.

Characteristics of firn

The layer of porous firn on Antarctic ice sheets is 50–150 m deep. It is much less deep on glaciers. Air in the atmosphere and firn are slowly exchanged by molecular diffusion through pore spaces, because gases move toward regions of lower concentration. Thermal diffusion causes isotope fractionation in firn when there is rapid temperature variation, creating isotope differences which are captured in bubbles when ice is created at the base of the firn. There is gas movement due to diffusion in firn, but not convection except very near the surface. Below the firn is a zone in which seasonal layers alternately have open and closed porosity. These layers are sealed with respect to diffusion. Gas ages increase rapidly with depth in these layers. Various gases are fractionated while bubbles are trapped where firn is converted to ice.

A core is collected by separating it from the surrounding material. For material which is sufficiently soft, coring may be done with a hollow tube. Deep core drilling into hard ice, and perhaps underlying bedrock, involves using a hollow drill which actively cuts a cylindrical pathway downward around the core. When a drill is used, the cutting apparatus is on the bottom end of a drill barrel, the tube which surrounds the core as the drill cuts downward around the edge of the cylindrical core. The length of the drill barrel determines the maximum length of a core sample (6 m at GISP2 and Vostok). Collection of a long core record thus requires many cycles of lowering a drill/barrel assembly, cutting a core 4–6 m in length, raising the assembly to the surface, emptying the core barrel, and preparing a drill/barrel for drilling.

Because deep ice is under pressure and can deform, for cores deeper than about 300 m the hole will tend to close if there is nothing to supply back pressure. The hole is therefore filled with a fluid to keep it from closing. The fluid, or mixture of fluids, must simultaneously satisfy criteria for density, low viscosity and frost resistance, as well as workplace safety and environmental compliance. The fluid must also satisfy other criteria, for example those stemming from the analytical methods employed on the ice core. A number of different fluids and fluid combinations have been tried in the past.

Since GISP2 (1990–1993) the US Polar Program has utilized a single-component fluid system, n-butyl acetate, but the toxicology, flammability, aggressive solvent nature, and long-term liabilities of n-butyl acetate raise serious questions about its continued application. The European community, including the Russian program, has concentrated on the use of a two-component drilling fluid consisting of a low-density hydrocarbon base (brown kerosene was used at Vostok) boosted to the density of ice by the addition of a halogenated-hydrocarbon densifier. Many of the proven densifier products are now considered too toxic, or are no longer available due to efforts to enforce the Montreal Protocol on ozone-depleting substances. In April 1998, on the Devon Ice Cap, filtered lamp oil was used as a drilling fluid.
In the Devon core it was observed that below about 150 m the stratigraphy was obscured by microfractures.

Core processing

Modern practice is to ensure that cores remain uncontaminated, since they are analysed for trace quantities of chemicals and isotopes. They are sealed in plastic bags after drilling and analysed in clean rooms. The core is carefully extruded from the barrel; often facilities are designed to accommodate the entire length of the core on a horizontal surface. Drilling fluid is cleaned off before the core is cut into 1-2 meter sections. Various measurements may be taken during preliminary core processing.

Current practices to avoid contamination of ice include:
- Keeping ice well below the freezing point.
- At Greenland and Antarctic sites, maintaining temperature by having storage and work areas under the snow/ice surface.
- At GISP2, cores were never allowed to rise above -15 °C, partly to prevent microcracks from forming and allowing present-day air to contaminate the fossil air trapped in the ice fabric, and partly to inhibit recrystallization of the ice structure.
- Wearing special clean suits over cold weather clothing.
- Mittens or gloves.
- Filtered respirators.
- Plastic bags, often polyethylene, around ice cores. Some drill barrels include a liner.
- Proper cleaning of tools and laboratory equipment.
- Use of a laminar-flow bench to isolate the core from room particulates.

For shipping, cores are packed in Styrofoam boxes protected by shock-absorbing bubble-wrap. Due to the many types of analysis done on core samples, sections of the core are scheduled for specific uses. After the core is ready for further analysis, each section is cut as required for tests. Some testing is done on site, other studies are done later, and a significant fraction of each core segment is reserved for archival storage for future needs.

Projects have used different core-processing strategies. Some projects have only done studies of physical properties in the field, while others have done significantly more study in the field. These differences are reflected in the core processing facilities.

Ice relaxation

Deep ice is under great pressure. When brought to the surface, there is a drastic change in pressure. Due to the internal pressure and varying composition, particularly bubbles, cores are sometimes very brittle and can break or shatter during handling. At Dome C, the first 1000 m were brittle ice. Siple Dome encountered it from 400 to 1000 m. It has been found that allowing ice cores to rest for some time (sometimes for a year) makes them become much less brittle.

Decompression causes significant volume expansion (called relaxation) due to microcracking and the exsolving of enclathratized gases. Relaxation may last for months. During this time, ice cores are stored below -10 °C to prevent cracking due to expansion at higher temperatures. At drilling sites, a relaxation area is often built within existing ice at a depth which allows ice core storage at temperatures below -20 °C. It has been observed that the internal structure of ice undergoes distinct changes during relaxation. Changes include much more pronounced cloudy bands and much higher density of "white patches" and bubbles. Several techniques have been examined. Cores obtained by hot water drilling at Siple Dome in 1997–1998 underwent appreciably more relaxation than cores obtained with the PICO electro-mechanical drill.
In addition, the fact that cores were allowed to remain at the surface at elevated temperature for several days likely promoted the onset of rapid relaxation.

Ice core data

Many materials can appear in an ice core. Layers can be measured in several ways to identify changes in composition. Small meteorites may be embedded in the ice. Volcanic eruptions leave identifiable ash layers. Dust in the core can be linked to increased desert area or wind speed. Isotopic analysis of the ice in the core can be linked to temperature and global sea level variations. Analysis of the air contained in bubbles in the ice can reveal the palaeocomposition of the atmosphere, in particular CO2 variations. There are great problems relating the dating of the included bubbles to the dating of the ice, since the bubbles only slowly "close off" after the ice has been deposited. Nonetheless, recent work has tended to show that during deglaciations CO2 increases lag temperature increases by 600 +/- 400 years. Beryllium-10 concentrations are linked to cosmic ray intensity, which can be a proxy for solar strength.

There may be an association between atmospheric nitrates in ice and solar activity. However, it was recently discovered that sunlight triggers chemical changes within the top levels of firn which significantly alter the pore air composition, raising the levels of formaldehyde and NOx. Although the remaining levels of nitrates may indeed be indicators of solar activity, there is ongoing investigation of these and related effects upon ice core data.

Core contamination

Some contamination has been detected in ice cores. The levels of lead on the outside of ice cores are much higher than on the inside. In ice from the Vostok core (Antarctica), the outer portion of the cores has up to 3 and 2 orders of magnitude higher bacterial density and dissolved organic carbon, respectively, than the inner portion of the cores, as a result of drilling and handling.

Paleoatmospheric sampling

As porous snow consolidates into ice, the air within it is trapped in bubbles in the ice. This process continuously preserves samples of the atmosphere. In order to retrieve these natural samples the ice is ground at low temperatures, allowing the trapped air to escape. It is then condensed for analysis by gas chromatography or mass spectrometry, revealing gas concentrations and their isotopic composition respectively. Apart from the intrinsic importance of knowing relative gas concentrations (e.g. to estimate the extent of greenhouse warming), their isotopic composition can provide information on the sources of gases. For example, CO2 from fossil-fuel or biomass burning is relatively depleted in 13C. See Friedli et al., 1986.

Dating the air with respect to the ice it is trapped in is problematic. The consolidation of snow to ice necessary to trap the air takes place at depth (the 'trapping depth') once the pressure of overlying snow is great enough. Since air can freely diffuse from the overlying atmosphere throughout the upper unconsolidated layer (the 'firn'), trapped air is younger than the ice surrounding it. Trapping depth varies with climatic conditions, so the air-ice age difference could vary between 2500 and 6000 years (Barnola et al., 1991). However, air from the overlying atmosphere may not mix uniformly throughout the firn (Battle et al., 1986) as earlier assumed, meaning estimates of the air-ice age difference could be less than imagined.
Either way, this age difference is a critical uncertainty in dating ice-core air samples. In addition, gas movement differs between gases; for example, larger molecules stop diffusing at a different depth than smaller molecules, so the ages of different gases at a given depth may not be the same. Some gases also have characteristics which affect their inclusion, such as helium not being trapped because it is soluble in ice. In Law Dome ice cores, the trapping depth at DE08 was found to be 72 m, where the age of the ice is 40±1 years; at DE08-2 it is 72 m depth and 40 years; and at DSS it is 66 m depth and 68 years.

Paleoatmospheric firn studies

At the South Pole, the firn-ice transition depth is at 122 m, with a CO2 age of about 100 years. Gases involved in ozone depletion - CFCs, chlorocarbons, and bromocarbons - were measured in firn, and levels were almost zero at around 1880, except for CH3Br, which is known to have natural sources. A similar study of Greenland firn found that CFCs vanished at a depth of 69 m (CO2 age of 1929).

Analysis of the Upper Fremont Glacier ice core showed large levels of chlorine-36 that definitely correspond to the production of that isotope during atmospheric testing of nuclear weapons. This result is interesting because the signal exists despite being on a glacier and undergoing the effects of thawing, refreezing, and associated meltwater percolation. 36Cl has also been detected in the Dye-3 ice core (Greenland), and in firn at Vostok.

Studies of gases in firn often involve estimates of changes in gases due to physical processes such as diffusion. However, it has been noted that there also are populations of bacteria in surface snow and firn at the South Pole, although this study has been challenged. It had previously been pointed out that anomalies in some trace gases may be explained as due to accumulation of in-situ metabolic trace gas byproducts.

Dating cores

Shallow cores, or the upper parts of cores in high-accumulation areas, can be dated exactly by counting individual layers, each representing a year. These layers may be visible, related to the nature of the ice; or they may be chemical, related to differential transport in different seasons; or they may be isotopic, reflecting the annual temperature signal (for example, snow from colder periods has less of the heavier isotopes of H and O). Deeper into the core the layers thin out due to ice flow and high pressure, and eventually individual years cannot be distinguished. It may be possible to identify events such as the radioisotope layers from atmospheric nuclear weapons testing in the upper levels, and ash layers corresponding to known volcanic eruptions. Volcanic eruptions may be detected by visible ash layers, acidic chemistry, or electrical resistance change. Some composition changes are detected by high-resolution scans of electrical resistance. Lower down, the ages are reconstructed by modeling accumulation rate variations and ice flow.

Dating is a difficult task. Five different dating methods have been used for Vostok cores, with differences such as 300 years at 100 m depth, 600 years at 200 m, 7000 years at 400 m, 5000 years at 800 m, 6000 years at 1600 m, and 5000 years at 1934 m. The use of different dating methods makes comparison and interpretation difficult. Matching peaks by visual examination of the Moulton and Vostok ice cores suggests a time difference of about 10,000 years, but proper interpretation requires knowing the reasons for the differences.
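As a rough illustration of the "modeling accumulation rate variations and ice flow" step, the sketch below uses the simple Nye approximation (annual layers thinning uniformly with depth) with made-up parameters rather than values from any particular core; real age-depth models are considerably more elaborate:

% Nye approximation: age(d) = (H/a) * ln( H / (H - d) )
% H = ice thickness, a = accumulation rate (ice equivalent), d = depth
H = 3000;                          % metres (assumed, not from a specific core)
a = 0.05;                          % metres of ice per year (assumed)
d = 0:10:2900;                     % depths at which to estimate the age
age = (H/a) .* log(H ./ (H - d));  % years; layers thin toward the bed, so age grows rapidly
plot(d, age/1000)
xlabel('depth (m)'); ylabel('modelled age (kyr)')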
Ice core storage and transport

Ice cores are typically stored and transported in refrigerated ISO container systems. Due to the high value and the temperature-sensitive nature of the ice core samples, container systems with primary and back-up refrigeration units and generator sets are often used. In such a redundant container system, the refrigeration unit and generator set automatically switch to their back-ups in the case of a loss of performance or power.

Ice core sites

Ice cores have been taken from many locations around the world. Major efforts have taken place on Greenland and Antarctica. Sites on Greenland are more susceptible to snow melt than those in Antarctica. In the Antarctic, areas around the Antarctic Peninsula and seas to the west have been found to be affected by ENSO effects. Both of these characteristics have been used to study such variations over long spans of time.

The first to winter on the inland ice were J.P. Koch and Alfred Wegener, in a hut they built on the ice in Northeast Greenland. Inside the hut they drilled to a depth of 25 m with an auger similar to an oversized corkscrew.

Station Eismitte

Eismitte means Ice-Center in German. The Greenland campsite was located 402 kilometers (250 mi) from the coast at an estimated altitude of 3,000 meters (9,843 feet). As a member of the Alfred Wegener Expedition to Eismitte in central Greenland from July 1930 to August 1931, Ernst Sorge hand-dug a 15 m deep pit adjacent to his beneath-the-surface snow cave. Sorge was the first to systematically and quantitatively study the near-surface snow/firn strata from inside his pit. His research validated the feasibility of measuring the preserved annual snow accumulation cycles, like measuring frozen precipitation in a rain gauge.

Camp VI

During 1950-1951 members of Expeditions Polaires Francaises (EPF) led by Paul Emile Victor reported boring two holes to depths of 126 and 150 m on the central Greenland inland ice at Camp VI and Station Central (Centrale). Camp VI is in the western part of Greenland on the EPF-EGIG line at an elevation of 1598 masl.

Station Centrale

The Station Centrale was not far from Station Eismitte. Centrale is on a line between Milcent (70°18’N 45°35’W, 2410 masl) and Crête (71°7’N 37°19’W), at about 70°43'N 41°26'W, whereas Eismitte is at 71°10’N 39°56’W, ~3000 masl.

Site 2

In 1956, before the International Geophysical Year (IGY) of 1957-58, a 10 cm diameter core was recovered to 305 m using a rotary mechanical drill (US). A second 10 cm diameter core was recovered in 1957 by the same drill rig to 411 m. A commercially modified, mechanical-rotary Failing-1500 rock-coring rig was used, fitted with special ice cutting bits.

Camp Century

Three cores were attempted at Camp Century in 1961, 1962, and again in 1963. The third hole was started in 1963 and reached 264 m. The 1963 hole was re-entered using the thermal drill (US) in 1964 and extended to 535 m. In mid-1965 the thermal drill was replaced with an electro-mechanical drill, 9.1 cm diameter, that reached the base of the ice sheet in July 1966 at 1387 m. The Camp Century, Greenland (77°10’N 61°08’W, 1885 masl) ice core (cored from 1963–1966) is 1390 m deep and contains climatic oscillations with periods of 120, 940, and 13,000 years. Another core was drilled at Camp Century in 1977 using a Shallow (Dane) drill type, 7.6 cm diameter, to 100 m.
North Site

At the North Site (75°46’N 42°27’W, 2870 masl) drilling began in 1972 using a SIPRE (US) drill type, 7.6 cm diameter, to 25 m. The North Site was 500 km north of the EGIG line. At a depth of 6–7 m diffusion had obliterated some of the seasonal cycles.

North Central

The first core at North Central (74°37’N 39°36’W) was drilled in 1972 using a Shallow (Dane) drill type, 7.6 cm diameter, to 100 m.

At Crête in central Greenland (71°7’N 37°19’W) drilling began in 1972 on the first core using a SIPRE (US) drill type, 7.6 cm diameter, to 15 m. The Crête core was drilled in central Greenland (1974) and reached a depth of 404.64 meters, extending back only about fifteen centuries. Annual cycle counting showed that the oldest layer was deposited in 534 AD. The Crête 1984 ice cores consist of 8 short cores drilled in the 1984-85 field season as part of the post-GISP campaigns. Glaciological investigations were carried out in the field at eight core sites (A-H).

"The first core drilled at Station Milcent in central Greenland covers the past 780 years." The Milcent core was drilled at 70.3°N, 44.6°W, 2410 masl. The Milcent core (398 m) was 12.4 cm in diameter, drilled using a Thermal (US) drill type in 1973.

Dye 2

Drilling with a Shallow (Swiss) drill type at Dye 2 (66°23’N 46°11’W, 2338 masl) began in 1973. The core was 7.6 cm in diameter, to a depth of 50 m. A second core, 10.2 cm in diameter, was drilled to 101 m in 1974. An additional core at Dye 2 was drilled in 1977 using a Shallow (US) drill type, 7.6 cm diameter, to 84 m.

Summit Camp

The camp is located approximately 360 km from the east coast and 500 km from the west coast of Greenland at (Saattut, Uummannaq), and 200 km NNE of the historical ice sheet camp Eismitte. The closest town is Ittoqqortoormiit, 460 km ESE of the station. The station, however, is not part of Sermersooq municipality, but falls within the bounds of the Northeast Greenland National Park. An initial core at Summit (71°17’N 37°56’W, 3212 masl) using a Shallow (Swiss) drill type was 7.6 cm in diameter, for 31 m, in 1974. Summit Camp, also Summit Station, is a year-round research station on the apex of the Greenland Ice Sheet. Its coordinates are variable, since the ice is moving. The coordinates provided here (72°34’45”N 38°27’26”W, 3212 masl) are as of 2006.

South Dome

The first core at South Dome (63°33’N 44°36’W, 2850 masl) used a Shallow (Swiss) drill type for a 7.6 cm diameter core to 80 m in 1975.

Hans Tausen (or Hans Tavsen)

The first GISP core drilled at Hans Tausen Iskappe (82°30’N 38°20’W, 1270 masl) was in 1975 using a Shallow (Swiss) drill type, 7.6 cm diameter core to 60 m. The second core at Hans Tausen was drilled in 1976 using a Shallow (Dane) drill type, 7.6 cm diameter, to 50 m. The drilling team reported that the drill was stuck in the drill hole and lost. The Hans Tausen ice cap in Peary Land was drilled again with a new deep drill to 325 m. The ice core contained distinct melt layers all the way to bedrock, indicating that Hans Tausen contains no ice from the glaciation; i.e., the world’s northernmost ice cap melted away during the post-glacial climatic optimum and was rebuilt when the climate got colder some 4000 years ago.

Camp III

The first core at Camp III (69°43’N 50°8’W) was drilled in 1977 using a Shallow (Swiss) drill type, 7.6 cm, to 49 m. The last core at Camp III was drilled in 1978 using a Shallow (Swiss) drill type, 7.6 cm diameter, to 80 m depth.
Dye 3

The Renland ice core from East Greenland apparently covers a full glacial cycle from the Holocene into the previous Eemian interglacial. It was drilled in 1985 to a length of 325 m. From the delta-profile, the Renland ice cap in the Scoresbysund Fiord has always been separated from the inland ice, yet all the delta-leaps revealed in the Camp Century 1963 core recurred in the Renland ice core.

The GRIP and GISP cores, each about 3000 m long, were drilled by European and US teams respectively on the summit of Greenland. Their usable record stretches back more than 100,000 years into the last interglacial. They agree (in the climatic history recovered) to a few metres above bedrock. However, the lowest portion of these cores cannot be interpreted, probably due to disturbed flow close to the bedrock. There is evidence that the GISP2 cores contain an increasing structural disturbance which casts suspicion on features lasting centuries or more in the bottom 10% of the ice sheet.

The more recent NorthGRIP ice core provides an undisturbed record to approximately 123,000 years before present. The results indicate that Holocene climate has been remarkably stable and have confirmed the occurrence of rapid climatic variation during the last ice age. The NGRIP drilling site is near the center of Greenland (2917 m elevation, ice thickness 3085 m). Drilling began in 1999 and was completed at bedrock in 2003. The NGRIP site was chosen to extract a long and undisturbed record stretching into the last glacial. NGRIP covers 5 kyr of the Eemian, and shows that temperatures then were roughly as stable as the pre-industrial Holocene temperatures were.

The North Greenland Eemian Ice Drilling (NEEM) site is located at 77°27’N 51°3.6’W. Drilling started in June 2009. The ice at NEEM was expected to be 2545 m thick. On July 26, 2010, drilling reached bedrock at 2537.36 m.

For the list of ice cores visit the IceReader web site.

Plateau Station

Plateau Station is an inactive American research and Queen Maud Land traverse support base on the central Antarctic Plateau. The base was in continuous use until January 29, 1969. Ice core samples were made, but with mixed success.

Byrd Station

Marie Byrd Land formerly hosted the Operation Deep Freeze base Byrd Station (NBY), beginning in 1957, in the hinterland of Bakutis Coast. Byrd Station was the only major base in the interior of West Antarctica. In 1968, the first ice core to fully penetrate the Antarctic Ice Sheet was drilled here.

Dolleman Island

The British Antarctic Survey (BAS) used Dolleman Island as an ice core drilling site in 1976, 1986 and 1993.

Berkner Island

In the 1994/1995 field season the British Antarctic Survey, the Alfred Wegener Institute and the Forschungsstelle für Physikalische Glaziologie of the University of Münster cooperated in a project drilling ice cores on the North and South Domes of Berkner Island.

Cape Roberts Project

Between 1997 and 1999 the international Cape Roberts Project (CRP) recovered up to 1000 m long drill cores in the Ross Sea, Antarctica, to reconstruct the glaciation history of Antarctica.

International Trans-Antarctic Scientific Expedition (ITASE)

The International Trans-Antarctic Scientific Expedition (ITASE) was created in 1990 with the purpose of studying climate change through research conducted in Antarctica. A 1990 meeting held in Grenoble, France, served as a site of discussion regarding efforts to study the surface and subsurface record of Antarctica’s ice cores.
Lake Vida

The lake gained widespread recognition in December 2002 when a research team, led by the University of Illinois at Chicago's Peter Doran, announced the discovery of 2,800 year old halophile microbes (primarily filamentous cyanobacteria) preserved in ice layer core samples drilled in 1996.

As of 2003, the longest core drilled was at Vostok station. It reached back 420,000 years and revealed 4 past glacial cycles. Drilling stopped just above Lake Vostok. The Vostok core was not drilled at a summit, hence ice from deeper down has flowed from upslope; this slightly complicates dating and interpretation. Vostok core data are available.

EPICA/Dome C and Kohnen Station

The European Project for Ice Coring in Antarctica (EPICA) first drilled a core near Dome C (560 km from Vostok) at an altitude of 3,233 m. The ice thickness is 3,309 +/- 22 m and the core was drilled to 3,190 m. It is the oldest ice core on record, with ice sampled to an age of 800 kyr BP (Before Present). The present-day annual average air temperature is -54.5 °C and snow accumulation is 25 mm/y. Information about the core was first published in Nature on June 10, 2004. The core revealed 8 previous glacial cycles. They subsequently drilled a core at Kohnen Station in 2006. Although the major events recorded in the Vostok, EPICA, NGRIP, and GRIP cores during the last glacial period are present in all four cores, some variation with depth (both shallower and deeper) occurs between the Antarctic and Greenland cores.

Dome F

Two deep ice cores were drilled near the Dome F summit (altitude 3,810 m). The first drilling started in August 1995, reached a depth of 2503 m in December 1996 and covers a period back to 320,000 years. The second drilling started in 2003, was carried out during four subsequent austral summers from 2003/2004 until 2006/2007, and by then a depth of 3,035.22 m was reached. This core greatly extends the climatic record of the first core, and, according to a first, preliminary dating, it reaches back 720,000 years.

WAIS Divide

The West Antarctic Ice Sheet Divide (WAIS Divide) Ice Core Drilling Project began drilling over the 2005 and 2006 seasons, drilling ice cores up to a depth of 300 m for the purposes of gas collection, other chemical applications, and to test the site for use with the Deep Ice Sheet Coring (DISC) Drill. Sampling with the DISC Drill will begin over the 2007 season, and researchers and scientists expect that these new ice cores will provide data to establish a greenhouse gas record back over 40,000 years.

The TAlos Dome Ice CorE (TALDICE) Project is a new 1620 m deep ice core drilled at Talos Dome that provides a paleoclimate record covering at least the last 250,000 years. The TALDICE coring site (159°11'E 72°49'S; 2315 m a.s.l.; annual mean temperature -41°C) is located near the dome summit and is characterised by an annual snow accumulation rate of 80 mm water equivalent.

Non-polar cores

The non-polar ice caps, such as those found on mountain tops, were traditionally ignored as serious places to drill ice cores because it was generally believed the ice would not be more than a few thousand years old. Since the 1970s, however, ice has been found that is older, with clear chronological dating and climate signals going as far back as the beginning of the most recent ice age.
Although polar cores have the clearest and longest chronological record (four times or more as long), ice cores from tropical regions offer data and insights not available from polar cores and have been very influential in advancing understanding of the planet's climate history and mechanisms.
Mountain ice cores have been retrieved in the Andes in South America, on Mount Kilimanjaro in Africa, in Tibet, at various locations in the Himalayas, and in Alaska, Russia and elsewhere. Mountain ice cores are logistically very difficult to obtain. The drilling equipment must be carried by hand, organized as a mountaineering expedition with multiple staged camps, to altitudes upwards of 20,000 feet (helicopters are not safe at such heights), and the multi-ton ice cores must then be transported back down the mountain. All of this requires mountaineering skills, equipment and logistics, and working at low oxygen in extreme environments in remote third-world countries. Scientists may stay at high altitude on the ice caps for 20 to 50 days, setting altitude endurance records that even professional climbers do not attain. American scientist Lonnie Thompson has been pioneering this area since the 1970s, developing lightweight drilling equipment that can be carried by porters, solar-powered electricity, and teams of mountaineering scientists. The ice core drilled in the Guliya ice cap in western China in the 1990s reaches back to 760,000 years before present, farther back than any other core at the time, though the EPICA core in Antarctica equalled that extreme in 2003. Because glaciers are retreating rapidly worldwide, some important glaciers are no longer scientifically viable for taking cores, and many more glacier sites will continue to be lost; the "Snows of Mount Kilimanjaro" (Hemingway), for example, could be gone by 2015.
Upper Fremont Glacier
Ice core samples were taken from Upper Fremont Glacier in 1990-1991. These ice cores were analyzed for climatic changes as well as alterations in atmospheric chemistry. In 1998 an unbroken ice core sample of 164 m was taken from the glacier, and subsequent analysis of the ice showed an abrupt change in the ratio of oxygen-18 to oxygen-16 in conjunction with the end of the Little Ice Age, a period of cooler global temperatures between the years 1550 and 1850. A linkage was established with a similar ice core study on the Quelccaya Ice Cap in Peru, which showed the same changes in the oxygen isotope ratio during the same period.
Nevado Sajama
Quelccaya Ice Cap
Mount Kilimanjaro ice fields
These cores provide a ~11.7 ka record of Holocene climate and environmental variability, including three periods of abrupt climate change at ~8.3, ~5.2 and ~4 ka. These three periods correlate with similar events in the Greenland GRIP and GISP2 cores.
East Rongbuk Glacier
See also: Core drill; Core sample (in general, from ocean floor, rocks and ice); Greenland ice cores; Ice core brittle zone; Jean Robert Petit; Scientific drilling; WAIS Divide Ice Core Drilling Project.
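A note on the isotope notation used above (standard geochemical convention, not taken from this article): the oxygen-18 to oxygen-16 ratio of an ice sample is usually reported as a delta value relative to a reference standard (for water, typically VSMOW), expressed in parts per thousand:

\delta^{18}\mathrm{O} = \left( \frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{standard}}} - 1 \right) \times 1000

More negative values generally indicate snow that fell under colder conditions, which is why an abrupt shift in this ratio can mark a climate transition such as the end of the Little Ice Age.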
- ^ "http://www.ssec.wisc.edu/icds/reports/Drill_Fluid.pdf" (PDF). Retrieved October 14, 2005. - ^ "http://pubs.usgs.gov/prof/p1386j/history/history-lores.pdf" (PDF). Retrieved October 14, 2005. - ^ Journal of Geophysical Research (Oceans and Atmospheres) Special Issue [Full Text]. Retrieved October 14, 2005. - ^ "Physical Properties Research on the GISP2 Ice Core". Retrieved October 14, 2005. - ^ Svensson, A., S. W. Nielsen, S. Kipfstuhl, S. J. Johnsen, J. P. Steffensen, M. Bigler, U. Ruth, and R. Röthlisberger (2005). "Visual stratigraphy of the North Greenland Ice Core Project (NorthGRIP) ice core during the last glacial period". J. Geophys. Res. 110 (D02108): D02108. Bibcode:2005JGRD..11002108S. doi:10.1029/2004JD005134. - ^ A.J. Gow and D.A. Meese. "The Physical and Structural Properties of the Siple Dome Ice Cores". WAISCORES. Retrieved October 14, 2005. - ^ "Purdue study rethinks atmospheric chemistry from ground up". Archived from the original on December 28, 2005. Retrieved October 14, 2005. - "Summit_ACS.html". Retrieved October 14, 2005. - ^ Amy Ng and Clair Patterson (1981). "Natural concentrations of lead in ancient Arctic and Antarctic ice". Geochimica et Cosmochimica Acta 45 (11): 2109–21. Bibcode:1981GeCoA..45.2109N. doi:10.1016/0016-7037(81)90064-8. - ^ "Glacial ice cores: a model system for developing extraterrestrial decontamination protocols". Publications of Brent Christner. Archived from the original on March 7, 2005. Retrieved May 23, 2005. - ^ Michael Bender, Todd Sowersdagger, and Edward Brook (1997). "Gases in ice cores". Proc. Natl. Acad. Sci. USA 94 (August): 8343–9. Bibcode:1997PNAS...94.8343B. doi:10.1073/pnas.94.16.8343. PMC 33751. PMID 11607743. Bender, M.; Sowers, T; Brook, E (1997). "Gases in ice cores". Proceedings of the National Academy of Sciences 94 (16): 8343–9. Bibcode:1997PNAS...94.8343B. doi:10.1073/pnas.94.16.8343. PMC 33751. PMID 11607743. More than one of - ^ "TRENDS: ATMOSPHERIC CARBON DIOXIDE". Retrieved October 14, 2005. - ^ "CMDL Annual Report 23: 5.6. MEASUREMENT OF AIR FROM SOUTH POLE FIRN". Retrieved October 14, 2005. - ^ "Climate Prediction Center — Expert Assessments". Retrieved October 14, 2005. - ^ M.M. Reddy, D.L. Naftz, P.F. Schuster. "FUTURE WORK". ICE-CORE EVIDENCE OF RAPID CLIMATE SHIFT DURING THE TERMINATION OF THE LITTLE ICE AGE. Archived from the original on September 13, 2005. Retrieved October 14, 2005. - ^ "Thermonuclear 36Cl". Archived from the original on May 23, 2005. Retrieved October 14, 2005. - ^ Delmas RJ, J Beer, HA Synal, et al (2004). "Bomb-test 36Cl measurements in Vostok snow (Antarctica) and the use of 36Cl as a dating tool for deep ice cores". Tellus B 36 (5): 492. Bibcode:2004TellB..56..492D. doi:10.1111/j.1600-0889.2004.00109.x. - ^ Carpenter EJ, Lin S, Capone DG (October 2000). "Bacterial Activity in South Pole Snow". Appl. Environ. Microbiol. 66 (10): 4514–7. doi:10.1128/AEM.66.10.4514-4517.2000. PMC 92333. PMID 11010907. - ^ Warren SG, Hudson SR (October 2003). "Bacterial Activity in South Pole Snow Is Questionable". Appl. Environ. Microbiol. 69 (10): 6340–1; author reply 6341. doi:10.1128/AEM.69.10.6340-6341.2003. PMC 201231. PMID 14532104. - ^ Sowers, T. (2003). "Evidence for in-situ metabolic activity in ice sheets based on anomalous trace gas records from the Vostok and other ice cores". EGS - AGU - EUG Joint Assembly: 1994. Bibcode:2003EAEJA.....1994S. - ^ "NOAA Paleoclimatology Program — Vostok Ice Core Timescales". Retrieved October 14, 2005. - ^ "Polar Paleo-Climate Interests". 
Retrieved October 14, 2005. - ^ Jim White and Eric Steig. "Siple Dome Highlights: Stable isotopes". WAISCORES. Retrieved October 14, 2005. - ^ "GISP2 and GRIP Records Prior to 110 kyr BP". Archived from the original on September 9, 2005. Retrieved October 14, 2005. - ^ Gow, A. J., D. A. Meese, R. B. Alley, J. J. Fitzpatrick, S. Anandakrishnan, G. A. Woods, and B. C. Elder (1997). "Physical and structural properties of the Greenland Ice Sheet Project 2 ice core: A review". J. Geophys. Res. 102 (C12): 26559–76. Bibcode:1997JGR...10226559G. doi:10.1029/97JC00165. - ^ Whitehouse, David (14 October 2005). "Breaking through Greenland's ice cap". BBC. - ^ "NOAA Paleoclimatology Program — Vostok Ice Core". Retrieved October 14, 2005. - ^ Bowen, Mark (2005). Thin Ice. Henry Holt Company, ISBN 0-8050-6443-5 - British Antarctic Survey, The ice man cometh - ice cores reveal past climates - Hubertus Fischer, Martin Wahlen, Jesse Smith, Derek Mastroianni, Bruce Deck (1999-03-12). "Ice Core Records of Atmospheric CO2 Around the Last Three Glacial Terminations". Science (Science) 283 (5408): 1712–4. Bibcode:1999Sci...283.1712F. doi:10.1126/science.283.5408.1712. PMID 10073931. Retrieved 2010-06-20. - Dansgaard W. Frozen Annals Greenland Ice Sheet Research. Odder, Denmark: Narayana Press. p. 124. ISBN 87-990078-0-0. - Langway CC Jr. (Jan 2008). "The History of Early Polar Ice Cores". Cold Regions Science and Technology 52 (2): 101. doi:10.1016/j.coldregions.2008.01.001. - Wegener K (Sep 1955). "Die temperatur in Grönlandischen inlandeis". Pure Appl Geophys. 32 (1): 102–6. Bibcode:1955GeoPA..32..102W. doi:10.1007/BF01993599. - Rose LE. "The Greenland Ice Cores". Kronos 12 (1): 55–68. - "Crete Ice Core". - Oeschger H, Beer J, Andree M; Beer; Andree (Aug 1987). "10Be and 14C in the Earth system". Phil Trans R Soc Lond A. 323 (1569): 45–56. Bibcode:1987RSPTA.323...45O. doi:10.1098/rsta.1987.0071. JSTOR 38000. - "NOAA Paleoclimatology World Data Centers Dye 3 Ice Core". - Hansson M, Holmén K (Nov 2001). "High latitude biospheric activity during the Last Glacial Cycle revealed by ammonium variations in Greenland Ice Cores". Geophy Res Lett. 28 (22): 4239–42. Bibcode:2001GeoRL..28.4239H. doi:10.1029/2000GL012317. - National Science Foundation press release for Doran et al. (2003) - "Deep ice tells long climate story". BBC News. September 4, 2006. Retrieved May 4, 2010. - Peplow, Mark (25 January 2006). "Ice core shows its age". Nature (journal). doi:10.1038/news060123-3. "part of the European Project for Ice Coring in Antarctica (EPICA) ... Both cores were ... Dome C ... the Kohnen core ..." - "Deciphering the ice". CNN. 12 September 2001. Archived from the original on 13 June 2008. Retrieved 8 July 2010. - Thompson LG, Mosley-Thompson EM, Henderson KA (2000). "Ice-core palaeoclimate records in tropical South America since the Last Glacial Maximum". J Quaternary Sci. 15 (4): 377–94. Bibcode:2000JQS....15..377T. doi:10.1002/1099-1417(200005)15:4<377::AID-JQS542>3.0.CO;2-L. - Thompson LG, Mosley-Thompson EM, Davis ME, Henderson KA, Brecher HH, Zagorodnov VS, Mashlotta TA, Lin PN, Mikhalenko VN, Hardy DR, Beer J (2002). "Kilimanjaro ice core records: evidence of Holocene climate change in tropical Africa". Science. 298 (5593): 589–93. Bibcode:2002Sci...298..589T. doi:10.1126/science.1073198. PMID 12386332. - Ming J, Cachier H, Xiao C, "et al." (2008). ACP 8 (5): 1343–52. 
- http://www.tonderai.co.uk/earth/ice_cores.php "The Chemistry of Ice Cores" literature review - Barnola J, Pimienta P, Raynaud D, Korotkevich Y (1991). "CO2-Climate relationship as deduced from the Vostok ice core – a reexamination based on new measurements and on a reevaluation of the air dating". Tellus Series B-Chemical and Physical Meteorology 43 (2): 83–90. Bibcode:1991TellB..43...83B. doi:10.1034/j.1600-0889.1991.t01-1-00002.x. - Battle M, Bender M, Sowers T, et al (1996). "Atmospheric gas concentrations over the past century measured in air from firn at the South Pole". Nature 383 (6597): 231–5. Bibcode:1996Natur.383..231B. doi:10.1038/383231a0. - Friedli H, Lotscher H, Oeschger H, et al (1986). "Ice core record of the C13/C12 ratio of atmospheric CO2 in the past two centuries". Nature 324 (6094): 237–8. Bibcode:1986Natur.324..237F. doi:10.1038/324237a0. - Andersen KK, Azuma N, Barnola JM, et al. (September 2004). "High-resolution record of Northern Hemisphere climate extending into the last interglacial period" (PDF). Nature 431 (7005): 147–51. Bibcode:2004Natur.431..147A. doi:10.1038/nature02805. PMID 15356621. |Wikimedia Commons has media related to: Ice cores| - Ice Core Gateway - National Ice Core Laboratory - Facility for storing, curating, and studying ice cores recovered from the polar regions. - Ice-core evidence of rapid climate shift during the termination of the Little Ice Age - Upper Fremont Glacier study - Byrd Polar Research Center - Ice Core Paleoclimatology Research Group - National Ice Core Laboratory - Science Management Office - West Antarctic Ice Sheet Divide Ice Core Project - PNAS Collection of Articles on the Rapid Climate Change - Map of some worldwide ice core drilling locations - Map of some ice core drilling locations in Antarctica - Alley RB (February 2000). "Ice-core evidence of abrupt climate changes". Proc. Natl. Acad. Sci. U.S.A. 97 (4): 1331–4. Bibcode:2000PNAS...97.1331A. doi:10.1073/pnas.97.4.1331. PMC 34297. PMID 10677460. - August 2010: Ice Cores: A Window into Climate History interview with Eric Wolff, British Antarctic Survey from Allianz Knowledge - September 2006: BBC: Core reveals carbon dioxide levels are highest for 800,000 years - June 2004: "Ice cores unlock climate secrets" from the BBC - June 2004: "Frozen time" from Nature - June 2004: "New Ice Core Record Will Help Understanding of Ice Ages, Global Warming" from NASA - September 2003: "Oldest ever ice core promises climate revelations" - from New Scientist
http://en.wikipedia.org/wiki/Ice_core
13
55
Teaching Plan 3: Explore the Circumcenter of a Triangle
This lesson plan introduces the concept of the circumcenter by having students explore with computers running Geometer's Sketchpad. Students can observe and explore possible results (images) on screen by carrying out their own ideas.
IL Learning Standards
1. Understand the concept of the circumcenter of a triangle and other related knowledge.
2. Be able to use computers with Geometer's Sketchpad to observe possible results and solve geometric problems.
Materials
1. Computers and Geometer's Sketchpad software
2. Paper, pencils, and rulers
Lesson Plan
Day 1 - Introduction of the basic definition, review of related concepts, and class discussion
Day 2 - Group activity to answer questions using computers with Sketchpad
Day 3 - Group discussion, sharing results, and drawing conclusions
Day 1
1. The instructor introduces the basic definition of the circumcenter and should review the related concepts of the centroid, incenter, and orthocenter of a triangle.
2. Discuss students' thoughts and other related questions about the circumcenter, such as: How many circumcenters does a triangle have? Is the circumcenter always inside the triangle? If not, describe the possible results and the kind of triangle that produces each one.
3. Then the instructor and students turn to the computers to try them out and discuss how to draw figures and find answers with them.
Day 2
The instructor has students form teams of 2-3 to work at the computers and collect data in order to reach conclusions for the following questions. The instructor should circulate among the groups to observe students' learning and offer help if they have trouble operating the computers or the Sketchpad software.
1. Does a triangle have only one circumcenter? Explain your reasoning.
2. Is the circumcenter always inside the triangle? If not, describe the possible results and the kind of triangle that produces each one. Worksheet #1 and GSP file.
3. What are the different properties among the centroid, incenter, orthocenter, and circumcenter?
4. For what kind of triangle do the centroid, incenter, orthocenter, and circumcenter coincide? GSP file.
5. Which three points among the centroid, incenter, orthocenter, and circumcenter lie on a line? (This line is called the Euler line.) Describe your experimental result and explain it. GSP file.
6. In a triangle ABC, suppose that O is the circumcenter. Observe the relation between angle ABC and angle AOC. State your conclusion and explain it. Worksheet #2 and GSP file.
7. In a triangle ABC, suppose that O is the circumcenter. Observe the lengths OA, OB, and OC. Are they equal? Explain. Then let O be the center and OA the radius, draw a circle, and observe where points B and C lie; explain what you see. GSP file. (This circle is called the circumscribed circle of triangle ABC.)
Day 3
In this class, students present their results for discussion, share them among groups, and form final conclusions for the Day 2 questions. Finally, if possible, the instructor should ask students to develop a geometric proof for each of the questions above, and should remind them that many results from dynamic models do not constitute a proof.
In a triangle ABC, AB = 3 cm, BC = 4 cm, CA = 5 cm.
1) What kind of triangle is it? Why?
2) Suppose that O is the circumcenter of triangle ABC. The sum of OA, OB, and OC is ______.
1) In an acute triangle ABC, suppose that O is the circumcenter and angle BAC is 65 degrees. Then angle BOC is ________ degrees.
2) In a triangle DEF, angle DEF is obtuse. Suppose O is the circumcenter of triangle DEF and angle DEF is 130 degrees. Then angle DOF is ________ degrees.
In a triangle ABC, let A' be the midpoint of BC, B' the midpoint of AC, and C' the midpoint of AB, and let O be the circumcenter of triangle ABC. Explain why O is the orthocenter of triangle A'B'C'. (Hint: perpendicular lines.)
There is an arc BCD which is part of a circle. Can you find the center of this circle and draw the other part of the circle? Explain your method. (Hint: three points form a triangle and determine a circle.)
Advantages
1. Replaces traditional geometry teaching, in which geometry is taught through verbal description, with dynamic drawing.
2. Helps the teacher teach, replacing traditional instruction that relies on blackboard-and-chalk drawings.
3. Computers with Sketchpad software not only let students manipulate geometric shapes to discover and explore geometric relationships, but also verify possible results, provide a creative outlet for students' ideas, and strengthen students' geometric intuition.
4. Facilitates the creation of a rich mathematical learning environment that supports students' geometric proofs and helps establish geometric concepts.
Limitations
1. It cannot replace traditional logical geometric proof: many examples do not make a proof.
2. Students cannot get the maximum potential learning benefit from computers if the instructor does not offer appropriate direction and guidance. The instructor should also know what kind of computer-based learning environment is most likely to encourage and stimulate students' learning.
References
1. Szymanski, W. A. (1994). Geometric computerized proofs = drawing package + symbolic computation software. Journal of Computers in Mathematics and Science Teaching, 13, 433-444.
2. Silver, J. A. (1998). Can computers teach proofs? Mathematics Teacher, 91, 660-663.
Any comments: Yi-wen Chen, firstname.lastname@example.org
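If a class also has access to a numerical tool such as R alongside Geometer's Sketchpad, the worksheet claims can be checked with a short computation. The sketch below is my own illustration, not part of the original lesson plan; the coordinate placement of the 3-4-5 homework triangle is an assumption made for convenience.

circumcenter <- function(A, B, C) {
  # The circumcenter O satisfies |O-A|^2 = |O-B|^2 = |O-C|^2, which reduces
  # to two linear equations in the coordinates of O.
  M <- 2 * rbind(B - A, C - A)
  v <- c(sum(B^2) - sum(A^2), sum(C^2) - sum(A^2))
  solve(M, v)
}

A <- c(0, 0); B <- c(3, 0); C <- c(3, 4)   # AB = 3, BC = 4, CA = 5
O <- circumcenter(A, B, C)
O                                          # (1.5, 2), the midpoint of side CA
radii <- apply(rbind(A, B, C), 1, function(P) sqrt(sum((P - O)^2)))
radii                                      # all equal to 2.5, the circumradius (question 7)
sum(radii)                                 # 7.5, the sum asked for in the homework

Because 3^2 + 4^2 = 5^2, this triangle is right-angled at B, so the circumcenter lands on the midpoint of the hypotenuse CA rather than strictly inside the triangle; this is one of the cases question 2 asks students to describe (acute triangles put it inside, obtuse triangles put it outside).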
http://mste.illinois.edu/courses/ci499sp01/students/ychen17/project336/teachplan3.html
13
83
When scientists first began using rockets for research, their eyes were focused upward, on the mysteries that lay beyond our atmosphere and our planet. But it wasn't long before they realized that this new technology could also give them a unique vantage point from which to look back at Earth. Scientists working with V-2 and early sounding rockets for the Naval Research Laboratory (NRL) made the first steps in this direction almost ten years before Goddard was formed. The scientists put aircraft gun cameras on several rockets in an attempt to determine which way the rockets were pointing. When the film from one of these rockets was developed, it had recorded images of a huge tropical storm over Brownsville, Texas. Because the rocket.... ....was spinning, the image wasn't a neat, complete picture, but Otto Berg, the scientist who had modified the camera to take the photo, took the separate images home and pasted them together on a flat board. He then took the collage to Life magazine, which published what was arguably one of the earliest weather photos ever taken from space.1 Space also offered unique possibilities for communication that were recognized by industry and the military several years before NASA was organized. Project RAND2 had published several reports in the early 1950s outlining the potential benefits of satellite-based communication relays, and both AT&T and Hughes had conducted internal company studies on the commercial viability of communication satellites by 1959.3 These rudimentary seeds, already sown by the time Goddard opened its doors, grew into an amazing variety of communication, weather, and other remote-sensing satellite projects at the Center that have revolutionized many aspects of our lives. They have also taught us significant and surprising things about the planet we inhabit. Our awareness of large-scale crop and forest conditions, ozone depletion, greenhouse warming, and El Nino weather patterns has increased dramatically because of our ability to look back on Earth from space. Satellites have allowed us to measure the shape of the Earth more accurately, track the movement of tectonic plates, and analyze portions of the atmosphere and areas of the world that are hard to reach from the ground. In addition, the "big picture" perspective satellites offer has allowed scientists to begin investigating the dynamics between different individual processes and the development and behavior of global patterns and systems. Ironically, it seems we have had to develop the ability to leave our planet before we could begin to fully understand it. From the very earliest days of the space program, scientists realized that satellites could offer an important side-benefit to researchers interested in mapping the gravity field and shape of the Earth, and Goddard played an important role in this effort. The field of geodesy, or the study of the gravitational field of the Earth and its relationship to the solid structure of the planet, dates back to the third century B.C., when the Greek astronomer Eratosthenes combined astronomical observation with land measurement to try to prove that the Earth was, in fact, round. Later astronomers and scientists had used other methods of triangulation to try to estimate the exact size of the Earth. Astronomers also had used the Moon, or stars with established locations, to try to map the shape of the Earth and exact distances between points more precisely. But satellites offered a new twist to this methodology. 
For one thing, the Earth's shape and gravity field affected the orbit of satellites. So at the beginning of the space age, Goddard's tracking and characterizing the orbit of the first satellites was in and of itself a scientific endeavor. From that orbital data, scientists could infer information about the Earth's gravity field, which is affected by the distribution of its mass. The Earth, as it turns out, is not perfectly round, and its mass is not perfectly distributed. There are places where land or ocean topography results in denser or less dense mass accumulation. The centrifugal force of the Earth's rotation combines with gravity and these mass concentrations to create bulges and depressions in the planet. In fact, although we think of the Earth as round, Goddard's research showed us that it is really slightly pear-shaped. Successive Goddard satellites enabled scientists to gather much more precise information about the Earth's shape as well as exact positions of points on the planet. In fact, within 10 years, scientists had learned as much again about global positioning, the size and shape of the Earth, and its gravity field as their predecessors had learned in the previous 200 years. Laser reflectors on Goddard satellites launched in 1965, 1968, and 1976, for example, allowed scientists to make much more precise measurements between points, which enabled them to determine the exact location or movement of objects. The laser reflectors developed for Goddard's LAGEOS satellite, launched in 1976, could determine movement or position within a few centimeters, which allowed scientists to track and analyze tectonic plate movement and continental drift. Among other things, the satellite data told scientists that the continents seem to be inherently rigid bodies, even if they contain divisive bodies of water, such as the Mississippi River, and that continental plate movement appears to occur at a constant rate over time. Plate movement information provided by satellites has also helped geologists track the dynamics that lead up to Earthquakes, which is an important step in predicting these potentially catastrophic events. The satellite positioning technique used for this plate tectonic research was the precursor to the Global Positioning System (GPS) technology that now uses a... ...constellation of satellites to provide precise three-dimensional navigation for aircraft and other vehicles. Yet although a viable commercial market is developing for GPS technology today, the greatest commercial application of space has remained the field of communication satellites.4 For all the talk about the commercial possibilities of space, the only area that has proven substantially profitable since 1959 is communication satellites, and Goddard played an important role in developing the early versions of these spacecraft. The industry managers who were conducting research studies and contemplating investment in this field in 1959 could not have predicted the staggering explosion of demand for communications that has accompanied the so-called "Information Age." But they saw how dramatically demand for telephone service had increased since World War II, and they saw potential in other communications technology markets, such as better or broader transmission for television and radio signals. As a result, several companies were even willing to invest their own money, if necessary, to develop communication satellites. 
The Department of Defense (DoD) actually had been working on communication satellite technology for a number of years, and it wanted to keep control of what it considered a critical technology. So when NASA was organized, responsibility for communication satellite technology development was split between the new space agency and the DoD. The DoD would continue responsibility for "active" communication satellites, which added power to incoming signals and actively transmitted the signals back to ground stations. NASA's role was initially limited to "passive" communication satellites, which relied on simply reflecting signals off the satellite to send them back to Earth.5 NASA's first communication satellite, consequently, was a passive spacecraft called "Echo." It was based on a balloon design by an engineer at NASA's Langley Research Center and developed by Langley, Goddard, JPL and AT&T. Echo was, in essence, a giant mylar balloon, 100 feet in diameter, that could "bounce" a radio signal back down to another ground station a long distance away from the first one. Echo I, the world's first communication satellite, was successfully put into orbit on 12 August 1960. Soon after launch, it reflected a pre-taped message from President Dwight Eisenhower across.... .....the country and other radio messages to Europe, demonstrating the potential of global radio communications via satellite. It also generated a lot of public interest, because the sphere was so large that it could be seen from the ground with the naked eye as it passed by overhead. Echo I had some problems, however. The sphere seemed to buckle somewhat, hampering its signal-reflecting ability. So in 1964, a larger and stronger passive satellite, Echo II, was put into orbit. Echo II was made of a material 20 times more resistant to buckling than Echo I and was almost 40 feet wider in diameter. Echo II also experienced some difficulties with buckling. But the main reason the Echo satellites were not pursued any further was not that the concept wouldn't work. It was simply that it was eclipsed by much better technology - active communication satellites.6 Syncom, Telstar, and Relay By 1960, Hughes, RCA, and AT&T were all advocating the development of active communication satellites. They differed in the kind of satellite they recommended, however. Hughes felt strongly that the best system would be based on geosynchronous satellites. Geosynchronous satellites are in very high orbits - 22,300 miles above the ground. This high orbit allows their orbital speed to match the rotation speed of the Earth, which means they can remain essentially stable over one spot, providing a broad range of coverage 24 hours a day. Three of these satellites, for example, can provide coverage of the entire world, with the exception of the poles. The disadvantage of using geosynchronous satellites for communications is that sending a signal up 22,300 miles and back causes a time-delay of approximately a quarter second in the signal. Arguing that this delay would be too annoying for telephone subscribers, both RCA and AT&T supported a bigger constellation of satellites in medium Earth orbit, only a few hundred miles above the Earth.7 The Department of Defense had been working on its own geosynchronous communication satellite, but the project was running into significant development problems and delays. 
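The 22,300-mile altitude and quarter-second delay quoted above follow directly from orbital mechanics and the speed of light. The following back-of-the-envelope check is my own sketch, not part of the original account, and it ignores ground-station geometry:

mu      <- 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_earth <- 6378e3           # Earth's equatorial radius, m
T_sid   <- 86164            # one sidereal day in seconds, the period a geosynchronous orbit must match
c_light <- 299792458        # speed of light, m/s

# Kepler's third law, T^2 = 4*pi^2*a^3/mu, solved for the orbit radius a:
a <- (mu * T_sid^2 / (4 * pi^2))^(1/3)
(a - R_earth) / 1609.34     # altitude above the surface: about 22,200 miles
2 * (a - R_earth) / c_light # up-and-back light travel time: roughly 0.24 s

The quarter-second figure is therefore a physical floor set by the distance itself, which is why the medium-orbit approach favored by RCA and AT&T promised lower delay at the cost of needing many more satellites.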
NASA had been given permission by 1960 to pursue active communication satellite technology as well as passive systems, so the DoD approached NASA about giving Hughes a sole-source contract to develop an experimental geosynchronous satellite. The result was Syncom, a geosynchronous satellite design built by Hughes under contract to Goddard. Hughes already had begun investing its own money and effort in the technology, so Syncom I was ready for Goddard to launch in February 1963 - only 17 months after the contract was awarded. Syncom I stopped sending signals a few seconds before it was inserted into its final orbit, but Syncom II was launched successfully five months later, demonstrating the viability of the system. The third Syncom satellite, launched in August 1964, transmitted live television coverage of the Olympic Games in Tokyo, Japan to stations in North America and Europe. Although the military favored the geosynchronous concept, it was not the only technology being developed. In 1961, Goddard began working with RCA on the "Relay" satellite, which was launched 13 December 1962. Relay was designed to demonstrate the feasibility of medium-orbit, wide-band communications satellite technology and to help develop the ground.... ....station operations necessary for such a system. It was a very successful project, transmitting even color television signals across wide distances. AT&T, meanwhile, had run into political problems with NASA and government officials who were concerned that the big telecommunications conglomerate would end up monopolizing what was recognized as potentially powerful technology. But when NASA chose to fund RCA's Relay satellite instead of AT&T's design, AT&T decided to simply use its own money to develop a medium orbit communications satellite, which it called Telstar. NASA would launch the satellite, but AT&T would reimburse NASA for the costs involved. Telstar 1 was launched on 10 July 1962, and a second Telstar satellite followed less than a year later. Both satellites were very successful, and Telstar 2 demonstrated that it could even transmit both color and black and white television signals between the United States and Europe. In some senses, Relay and Telstar were competitors. But RCA and AT&T, who were both working with managers at Goddard, reportedly cooperated very well with each other. Each of the efforts was seen as helping to advance the technology necessary for this new satellite industry to become viable, and both companies saw the potential profit of that in the long run. By 1962, it was clear that satellite communications technology worked, and there was going to be money made in its use. Fearful of the powerful monopoly satellites could offer a single company, Congress passed the Satellite Communications Act, setting up a consortium of existing communications carriers to run the satellite communications industry. Individual companies could bid to sell satellites to the consortium, but no single company would own the system. NASA would launch the satellites for Comsat, as the consortium was called, but Comsat would run the operations. In 1964, the Comsat consortium was expanded further with the formation of the International Telecommunications Satellite Organization, commonly known as "Intelsat," to establish a framework for international use of communication satellites. These organizations had the responsibility for choosing the type of satellite technology the system would use. 
The work of RCA, AT&T and Hughes had proven that either medium-altitude or geosynchronous satellites could work. But in 1965, the consortiums finally decided to base the international system on geosynchronous satellites similar to the Syncom design.8 Applications Technology Satellites Having helped to develop the prototype satellites, Goddard stepped back from operational communication satellites and focused its efforts on developing advanced technology for future systems. Between 1966 and 1974, Goddard launched a total of six Applications Technology Satellites (ATS) to research advanced technology for communications and meteorological spacecraft. The ATS spacecraft were all put into geosynchronous orbits and investigated microwave and millimeter wavelengths for..... ....communication transmissions, methods for aircraft and marine navigation and communications, and various control technologies to improve geosynchronous satellites. Four of the spacecraft were highly successful and provided valuable data for improving future communication satellites. The sixth ATS spacecraft, launched 30 May 1974, even experimented with transmitting health and education television to small, low-cost ground stations in remote areas. It also tested a geosynchronous satellite's ability to provide tracking and data transmission services for other satellites. Goddard's research in this area, and the expertise the Center developed in the process, made it possible for NASA to develop the Tracking and Data Relay Satellite System (TDRSS) the agency still uses today.9 After ATS-6, NASA transferred responsibility for future communication satellite research to the Lewis Research Center. Goddard, however, maintained responsibility for developing and operating the TDRSS tracking and data satellite system.10 Statistically, the United States has the world's most violent weather. In a typical year, the U.S. will endure some 10,000 violent thunderstorms, 5,000 floods, 1,000 tornadoes, and several hurricanes.11 Improving weather prediction, therefore, has been a high priority of meteorologists here for a very long time. The early sounding rocket flights began to indicate some of the possibilities space flight might offer in terms of understanding and forecasting the weather, and they prompted the military to pursue development of a meteorological satellite. The Advanced Research Projects Agency (ARPA)12 had a group of scientists and engineers working on this project at the U.S. Army Signal Engineering Laboratories in Ft. Monmouth, New Jersey when NASA was first organized. Recognizing the country's history of providing weather services to the public through a civilian agency, the military agreed to transfer the research group to NASA. These scientists and engineers became one of the founding units of Goddard in 1958. Television and Infrared Observation Satellites These Goddard researchers were working on a project called the Television and Infrared Observation Satellite (TIROS). When it was launched on 1 April 1960, it became the world's first meteorological satellite, returning thousands of images of cloud cover and spiralling storm systems. Goddard's Explorer VI satellite had recorded some crude cloud cover images before TIROS I was launched, but the TIROS satellite was the first spacecraft dedicated to meteorological data gathering and transmitted the first really good cloud cover photographs. 
13 Clearly, there was a lot of potential in this new technology, and other meteorological satellites soon followed the first TIROS spacecraft. Despite its name, the first TIROS carried only television cameras. The second TIROS satellite, launched in November 1960, also included an infrared instrument, which gave it the ability to detect cloud cover even at night. The TIROS capabilities were limited, but the satellites still provided a tremendous service in terms of weather forecasting. One of the biggest obstacles meteorologists faced was the local, "spotty" nature of the data... ...they could obtain. Weather balloons and ocean buoys could only collect data in their immediate area. Huge sections of the globe, especially over the oceans, were dark areas where little meteorological information was available. This made forecasting a difficult task, especially for coastal areas. Sounding rockets offered the ability to take measurements at all altitudes of the atmosphere, which helped provide temperature, density and water vapor information. But sounding rockets, too, were limited in the scope of their coverage. Satellites offered the first chance to get a "big picture" perspective on weather patterns and storm systems as they travelled around the globe. Because weather forecasting was an operational task that usually fell under the management of the Weather Bureau, there was some disagreement about who should have responsibility for designing and operating this new class of satellite. Some people at Goddard felt that NASA should take the lead, because the new technology was satellite-based. The Weather Bureau, on the other hand, was going to be paying for the satellites and wanted control over the type of spacecraft and instruments they were funding. When the dust settled, it was decided that NASA would conduct research on advanced meteorological satellite technology and would manage the building, launching and testing of operational weather satellites. The Weather Bureau would have final say over operational satellite design, however, and would take over management of spacecraft operations after the initial test phase was completed.14 The TIROS satellites continued to improve throughout the early 1960s. Although the spacecraft were officially research satellites, they also provided the Weather Bureau with a semi-operational weather satellite system from 1961 to 1965. TIROS III, launched in July 1961, detected numerous hurricanes, tropical storms, and weather fronts around the world that conventional ground networks missed or would not have seen for several more days.15 TIROS IX, launched in January 1965, was the first of the series launched into a polar orbit, rotating around the Earth in a north-south direction. This orientation allowed the satellite to cross the equator at the same time each day and provided coverage of the entire globe, including the higher latitudes and polar regions, as its orbit precessed around the Earth. The later TIROS satellites also improved their coverage by changing the location of the spacecraft's camera. The TIROS satellites were designed like a wheel of cheese. The wheel spun around but, like a toy top or gyroscope, the axis of the wheel kept pointing in the same direction as the satellite orbited the Earth. The cameras were placed on the satellite's axis, which allowed them to take continuous pictures of the Earth when that surface was actually facing the planet. 
Like dancers doing a do-si-do, however, the surface with the cameras would be pointing parallel to or away from the Earth for more than half of the satellite's orbit. TIROS IX (and the operational TIROS satellites) put the camera on the rotating section of the wheel, which was kept facing perpendicular to the Earth throughout its orbit. This made the satellite operate more like a dancer twirling around while circling her partner. While the camera could only take pictures every few seconds, when the section of the wheel holding the camera rotated past the Earth, it could continue taking photographs throughout the satellite's entire orbit. In 1964, Goddard took another step in developing more advanced weather satellites when it launched the first NIMBUS spacecraft. NASA had originally envisioned the larger and more sophisticated NIMBUS as the design for the Weather Bureau's operational satellites. The Weather Bureau decided that the NIMBUS spacecraft were too large and expensive, however, and opted to stay with the simpler TIROS design for the operational system. So the NIMBUS satellites were used as research vehicles to develop advanced instruments and technology for future weather satellites. Between 1964 and 1978, Goddard developed and launched a total of seven Nimbus research satellites. In 1965, the Weather Bureau was absorbed into a new agency called the Environmental Science Services Administration (ESSA). The next year, NASA launched the first satellite in ESSA's operational weather system. The satellite was designed like the TIROS IX spacecraft and was designated "ESSA 1." As per NASA's agreement, Goddard continued to manage the building, launching and testing of ESSA's operational spacecraft, even as the Center's scientists and engineers worked to develop more advanced technology with separate research satellites. The ESSA satellites were divided into two types. One took visual images of the Earth with an Automatic Picture Transmission (APT) camera system and transmitted them in real time to stations around the globe. The other recorded images onboard for later transmission to a central ground station for global analysis. These first ESSA satellites were deployed in pairs in "Sun-synchronous" polar orbits around the Earth, crossing the same point at approximately the same time each day. In 1970, Goddard launched an improved operational spacecraft for ESSA using "second generation" weather satellite technology. The Improved TIROS Operational System (ITOS), as the design was initially called, combined the functions of the previous pairs of ESSA satellites into a single spacecraft and added a day-and-night scanning radiometer. This improvement meant that meteorologists could get global cloud cover information every 12 hours instead of every 24 hours. Soon after ITOS 1 was launched, ESSA evolved into the National Oceanic and Atmospheric Administration (NOAA), and successive ITOS satellites were redesignated NOAA 1, 2, 3, and so on. This designation system for NOAA's polar-orbiting satellites continues to this day. In 1978, NASA launched the first of what was called the "third generation" of polar-orbiting satellites. The TIROS-N design was a much bigger, three-axis-stabilized spacecraft that incorporated much more advanced equipment. The TIROS-N series of instruments, used aboard operational NOAA satellites today, provided much more accurate sea-surface temperature information, which is necessary to predict a phenomenon like an El Nino weather pattern.
They also could identify snow and sea ice and could provide much better temperature profiles for different altitudes in the atmosphere. But while the lower-altitude polar satellites can observe some phenomena in more detail because they are relatively close to the Earth, they can't provide the continuous "big picture" information a geosynchronous satellite can offer. So for the past 25 years, NOAA has operated two weather satellite systems - the TIROS series of polar orbiting satellites at lower altitudes, and two geosynchronous satellites more than 22,300 miles above the Earth.16 While polar-orbiting satellites were an improvement over the more equatorial-orbiting TIROS satellites, scientists realized that they could get a much better perspective on weather systems from a geosynchronous spacecraft. Goddard's research teams started investigating this technology with the launch of the first Applications Technology Satellite (ATS-1) in 1966. Because the ATS had a geosynchronous orbit that kept it "parked" above one spot, meteorologists could get progressive photographs of the same area over a period of time as often as every 30 minutes. The "satellite photos" showing changes in cloud cover that we now almost take for granted during nightly newscasts are made possible by geosynchronous weather satellites. Those cloud movement images also allowed meteorologists to infer wind currents and speeds. This information is particularly useful in determining weather patterns over areas of the world such as oceans or the tropics, where conventional aircraft and balloon methods can't easily gather data. Goddard's ATS III satellite, launched in 1967, included a multi-color scanner that could provide images in color, as well. Shortly after its launch, ATS III took the first color image of the entire Earth, a photo made possible by the satellite's 22,300 mile high orbit.17 In 1974, Goddard followed its ATS work with a dedicated geosynchronous weather satellite called the Synchronous Meteorological Satellite (SMS). Both SMS -1 and SMS-2 were research prototypes, but they still provided meteorologists with practical information as they tested out new technology. In addition to providing continuous coverage of a broad area, the SMS satellites collected and relayed weather data from 10,000 automatic ground stations in six hours, giving forecasters more timely and detailed data than they had ever had before. Goddard launched NOAA's first operational geostationary18 satellite, designated the Geostationary Operational Environmental Satellite (GOES) in October 1975. That satellite has led to a whole family of GOES spacecraft. As with previous operational satellites, Goddard managed the building, launching and testing of the GOES spacecraft. The first seven GOES spacecraft, while geostationary, were still "spinning" designs like NOAA's earlier operational ESSA satellites. In the early 1980s, however, NOAA decided that it wanted the new series of geostationary GOES spacecraft to be three-axis stabilized, as well, and to incorporate significantly more advanced instruments. In addition, NOAA decided to award a single contract directly with an industry manufacturer for the spacecraft and instruments, instead of working separate instrument and spacecraft contracts through Goddard. Goddard typically developed new instruments and technology on research satellites before putting them onto an operational spacecraft for NOAA. 
The plan for GOES 8,19 however, called for incorporating new technology instruments directly into a spacecraft that was itself a new design and also had an operational mission. Meteorologists across the country were going to rely on the new instruments for accurate weather forecasting information, which put a tremendous amount of added pressure on the designers. But the contractor selected to build the instruments underestimated the cost and complexity of developing the GOES 8 instruments. In addition, Goddard's traditional "Phase B" design study, which would have generated more concrete estimates of the time and cost involved in the instrument development, was eliminated on the GOES 8 project. The study was skipped in an attempt to save time, because NOAA was facing a potential crisis with its geostationary satellite system. NOAA wanted to have two geostationary satellites up at any given point in order to adequately cover both coasts of the country. But the GOES 5 satellite failed in 1984, leaving only one geostationary satellite, GOES 6, in operation. The early demise of GOES 4 and GOES 5 left NOAA uneasy about how long GOES 6 would last, prompting the "streamlining" efforts on the GOES 8 spacecraft design. The problem became even more serious in 1986 when the launch vehicle for the GOES G spacecraft, which would have become GOES 7, failed after launch. Another GOES satellite was successfully launched in 1987, but the GOES 6 spacecraft failed in January 1989, leaving the United States once again with only one operational geostationary weather satellite. By 1991, when the GOES 8 project could not predict a realistic launch date, because working instruments for the spacecraft still hadn't been developed, Congress began to investigate the issue. The GOES 7 spacecraft was aging, and managers and elected officials realized that it was entirely possible that the country might soon find itself without any geostationary satellite coverage at all. To buy the time necessary to fix the GOES 8 project and alleviate concerns about coverage, NASA arranged with the Europeans to "borrow" one of their Eumetsat geostationary satellites. The satellite was allowed to "drift" further west so it sat closer to the North American coast, allowing NOAA to move the GOES 7 satellite further west. Meanwhile, Goddard began to take a more active role in the GOES 8 project. A bigger GOES 8 project office was established at the Center and Goddard brought in some of its best instrument experts to work on the project, both at Goddard and at the contractor's facilities. Goddard, after all, had some of the best meteorological instrument-building expertise in the country. But because Goddard was not directly in charge of the instrument sub-contract, the Center had been handicapped in making that knowledge and experience available to the beleaguered contractor. The project was a sobering reminder of the difficulties that could ensue when, in an effort to save time and money, designers attempted to streamline a development project or combine research and operational functions into a single spacecraft. But in 1994, the GOES 8 spacecraft was finally successfully launched, and the results have been impressive. Its advanced instruments performed as advertised, improving the spacecraft's focusing and atmospheric sounding abilities and significantly reducing the amount of time the satellite needed to scan any particular area. 
Earth Resources Satellites
As meteorological satellite technology developed and improved, Goddard scientists realized that the same instruments used for obtaining weather information could be used for other purposes as well. Meteorologists could look at radiation that travelled back up from the Earth's surface to determine things like water vapor content and temperature profiles at different altitudes in the atmosphere. But those same emissions could reveal potentially valuable information about the Earth's surface, as well. Any object at a temperature above absolute zero emits radiation, much of it at precise and characteristic wavelengths in the electromagnetic spectrum. So by analyzing the emissions of any object, from a star or comet to a particular section of forest or farmland, scientists can learn important things about its chemical composition. Instruments on the Nimbus spacecraft had the ability to look at reflected solar radiation from the Earth in several different wavelengths. As early as 1964, scientists began discussing the possibility of experimenting with this technology to see what it might be able to show us about not only the atmosphere but also resources on the Earth. The result was the Earth Resources Technology Satellite (ERTS), launched in 1972 and later given the more popular name "Landsat 1." The spacecraft was based on a Nimbus satellite, with a multi-channel radiometer to look at different wavelength bands where the reflected energy from surfaces such as forests, water, or different crops would fall. The satellite instruments also had much better resolution than the Nimbus instruments. Each swath of the Earth covered by the Nimbus scanner was 1,500 miles wide, with each pixel in the picture representing five miles. The polar-orbiting ERTS satellite instrument could focus on a swath only 115 miles wide, with each pixel representing 80 meters. This resolution allowed scientists to view a small enough section of land, in enough detail, to conduct a worthwhile analysis of what it contained. Images from the ERTS/Landsat satellite, for example, showed scientists a 25-mile-wide geological feature near Reno, Nevada that appeared to be a previously undiscovered meteor crater. Other images collected by the satellite were useful in discovering water-bearing rocks in Nebraska, Illinois and New York and in determining that water pollution drifted off the Atlantic coast as a cohesive unit, instead of dissipating in the ocean currents. The success of the ERTS satellite prompted scientists to want to explore this use of satellite technology further. They began working on instruments that could achieve pixel resolutions as fine as five meters, but were told to discontinue that research because of national security concerns: if a civilian satellite provided data that detailed, it might allow foreign countries to find out critical information about military installations or other important targets in the U.S. This example illustrates one of the ongoing difficulties with Earth resource satellite research. The fact that the same information can be used for both scientific and practical purposes often creates complications over not only who should be responsible for the work, but how and where the information will be used. In any event, the follow-on satellite, "Landsat-2," was limited to the same levels of resolution.
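For a rough sense of scale (my own arithmetic, not figures from the text), the quoted swath widths and pixel sizes imply how much detail each scan line carried:

miles_to_m    <- 1609.34
nimbus_pixels <- (1500 * miles_to_m) / (5 * miles_to_m)  # 1,500-mile swath at 5-mile pixels: about 300 pixels
erts_pixels   <- (115 * miles_to_m) / 80                 # 115-mile swath at 80-m pixels: about 2,300 pixels
c(nimbus = nimbus_pixels, erts = round(erts_pixels))

So an ERTS/Landsat scan line packed roughly seven to eight times as many picture elements into a strip of ground less than a tenth as wide, which is what made field-by-field analysis of crops, forests, and geological features practical.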
More recent Landsat spacecraft, however, have been able to improve instrument resolution further.21 Landsat 2 was launched in January 1975 and looked at land areas for an even greater number of variables than its ERTS predecessor, integrating information from ground stations with data obtained by the satellite's instruments. Because wet land and green crops reflect solar energy at different wavelengths than dry soil or brown plants, Landsat imagery enabled researchers to look at soil moisture levels and crop health over wide areas, as well as soil temperature, stream flows, and snow depth. Its data was used by the U.S. Department of Agriculture, the U.S. Forest Service, the Department of Commerce, the Army Corps of Engineers, the Environmental Protection Agency and the Department of Interior, as well as agencies from foreign countries.22 The Landsat program clearly was a success, particularly from a scientific perspective. It proved that satellite technology could determine valuable information about precious natural resources, agricultural activity, and environmental hazards. The question was who should operate the satellites. Once the instruments were developed, the Landsat spacecraft were going to be collecting the same data, over and over, instead of exploring new areas and technology. One could argue that by examining the evolution of land resources over time, scientists were still exploring new processes and gathering new scientific information about the Earth. But that same information was being used predominantly for practical purposes of natural resource management, agricultural and urban planning, and monitoring environmental hazards. NASA had never seen its role as providing ongoing, practical information, but there was no other agency with the expertise or charter to operate land resource satellites. As a result, NASA continued to manage the building, launch, and space operation of the Landsat satellites until 1984. Processing and distribution of the satellite's data was managed by the Department of Interior, through an Earth Resources Observation System (EROS) Data Center that was built by the U.S. Geological Survey in Sioux Falls, South Dakota in 1972. In 1979, the Carter Administration developed a new policy in which the Landsat program would be managed by NOAA and eventually turned over to the private sector. In 1984, the first Reagan Administration put that policy into effect, soliciting commercial bids for operating the system, which at that point consisted of two operational satellites. Landsat 4 had been launched in 1982 and Landsat 5 was launched in 1984. Ownership and operation of the system was officially turned over to the EOSAT Company in 1985, which sold the images to anyone who wanted them, including the government. At the same time, responsibility for overseeing the program was transferred from NASA to NOAA. Under the new program guidelines, the next spacecraft in the Landsat program, Landsat 6, would also be constructed independently by industry. There were two big drawbacks with this move, however, as everyone soon found out. The first was that although there was something of a market for Landsat images, it was nothing like that surrounding the communication satellite industry. The EOSAT company found itself struggling to stay afloat. Prices for images jumped from the couple of hundred dollars per image that EROS had charged to $4,000 per shot, and EOSAT still found itself bordering on insolvency. 
Being a private company, EOSAT also was concerned with making a profit, not archiving data for the good of science or the government. Government budgets wouldn't allow for purchasing thousands of archival images at $4,000 apiece, so the EROS Data Center only bought a few selected images each year. As a result, many of the scientific or archival benefits the system could have created were lost.

In 1992, the Land Remote Sensing Policy Act reversed the 1984 decision to commercialize the Landsat system, noting the scientific, national security, economic, and social utility of the Landsat images. Landsat 6 was launched the following year, but the spacecraft failed to reach orbit and ended up in the Indian Ocean. This launch failure was discouraging, but planning for the next Landsat satellite was already underway. Goddard had agreed to manage design of a new data ground station for the satellite, and NASA and the Department of Defense initially agreed to divide responsibility for managing the satellite development. But the Air Force subsequently pulled out of the project and, in May 1994, management of the Landsat system was turned over to NASA, the U.S. Geological Survey (USGS), and NOAA. At the same time, Goddard assumed sole management responsibility for developing Landsat 7.

The only U.S. land resource satellites in operation at the moment are still Landsat 4 and 5, which are both degrading in capability. Landsat 5, in fact, is the only satellite still able to transmit images. The redesigned Landsat 7 satellite is scheduled for launch by mid-1999, and its data will once again be made available through the upgraded EROS facilities in Sioux Falls, South Dakota. Until then, scientists, farmers and other users of land resource information have to rely on Landsat 5 images through EOSAT, or they have to turn to foreign companies for the information. The French and the Indians have both created commercial companies to sell land resource information from their satellites, but both companies are being heavily subsidized by their governments while a market for the images is developed. There is probably a viable commercial market that could be developed in the United States, as well. But it may be that the demand either needs to grow substantially on its own or would need government subsidy before a commercialization effort could succeed. The issue of scientific versus practical access to the information would also still have to be resolved.

No matter how the organization of the system is eventually structured, Landsat imagery has proven itself an extremely valuable tool for not only natural resource management but urban planning and agricultural assistance, as well. Former NASA Administrator James Fletcher even commented in 1975 that if he had one space-age development to save the world, it would be Landsat and its successor satellites.23 Without question, the Landsat technology has enabled us to learn much more about the Earth and its land-based resources. And as the population and industrial production on the planet increase, learning about the Earth and potential dangers to it has become an increasingly important priority for scientists and policy-makers alike.24

Atmospheric Research Satellites

One of the main elements scientists are trying to learn about the Earth is the composition and behavior of its atmosphere. In fact, Goddard's scientists have been investigating the dynamics of the Earth's atmosphere for scientific, as well as meteorological, purposes since the inception of the Center.
Explorers 17, 19, and 32, for example, all researched various aspects of the density, composition, pressure and temperature of the Earth's atmosphere. Explorers 51 and 54, also known as "Atmosphere Explorers," investigated the chemical processes and energy transfer mechanisms that control the atmosphere.

Another goal of Goddard's atmospheric scientists was to understand and measure what was called the "Earth Radiation Budget." Scientists knew that radiation from the Sun enters the Earth's atmosphere. Some of that energy is reflected back into space, but most of it penetrates the atmosphere to warm the surface of the Earth. The Earth, in turn, radiates energy back into space. Scientists knew that the overall radiation received and released was about equal, but they wanted to know more about the dynamics of the process and seasonal or other fluctuations that might exist. Understanding this process is important because the excesses and deficits in this "budget," as well as variations in it over time or at different locations, create the energy to drive our planet's heating and weather patterns.

The first satellite to investigate the dynamics of the Earth Radiation Budget was Explorer VII, launched in 1959. Nimbus 2 provided the first global picture of the radiation budget, showing that the amount of energy reflected by the Earth's atmosphere was lower than scientists had thought. Additional instruments on Nimbus 3, 5, and 6, as well as operational TIROS and ESSA satellites, explored the dynamics of this complex process further. In the early 1980s, researchers developed an Earth Radiation Budget Experiment (ERBE) instrument that could better analyze the short-wavelength energy received from the Sun and the longer-wavelength energy radiated into space from the Earth. This instrument was put on a special Earth Radiation Budget Satellite (ERBS) launched in 1984, as well as the NOAA-9 and NOAA-10 weather satellites.

This instrument has provided scientists with information on how different kinds of clouds affect the amount of energy trapped in the Earth's atmosphere. Lower, thicker clouds, for example, reflect a portion of the Sun's energy back into space, creating a cooling effect on the surface and atmosphere of the Earth. High, thin cirrus clouds, on the other hand, let the Sun's energy in but trap some of the Earth's outgoing infrared radiation, reflecting it back to the ground. As a result, they can have a warming effect on the Earth's atmosphere. This warming effect can, in turn, create more evaporation, leading to more moisture in the air. This moisture can trap even more radiation in the atmosphere, creating a warming cycle that could influence the long-term climate of the Earth.

Because clouds and atmospheric water vapor seem to play a significant role in the radiation budget of the Earth as well as the amount of global warming and climate change that may occur over the next century, scientists are attempting to find out more about the convection cycle that transports water vapor into the atmosphere. In 1997, Goddard launched the Tropical Rainfall Measuring Mission (TRMM) satellite into a near-equatorial orbit to look more closely at the convection cycle in the tropics that powers much of the rest of the world's cloud and weather patterns. The TRMM satellite's Clouds and the Earth's Radiant Energy System (CERES) instrument, built by NASA's Langley Research Center, is an improved version of the earlier ERBE experiment.
While the satellite's focus is on convection and rainfall in the lower atmosphere, some of that moisture does get transported into the upper atmosphere, where it can play a role in changing the Earth's radiation budget and overall climate.25 An even greater amount of atmospheric research, however, has been focused on a once little-known chemical compound of three oxygen atoms called ozone. Ozone, as most Americans now know, is a chemical in the upper atmosphere that blocks incoming ultraviolet rays from the Sun, protecting us from skin cancer and other harmful effects caused by ultraviolet radiation. The ozone layer was first brought into the spotlight in the 1960s, when designers began working on the proposed Supersonic Transport (SST). Some scientists and environmentalists were concerned that the jet's high-altitude emissions might damage the ozone layer, and the federal government funded several research studies to evaluate the risk. The cancellation of the SST in 1971 shelved the issue, at least temporarily, but two years later a much greater potential threat emerged. In 1973, two researchers at the University of California, Irvine came up with the astounding theory that certain man-made chemicals, called chlorofluorocarbons (CFCs), could damage the atmosphere's ozone layer. These chemicals were widely used in everything from hair spray to air conditioning systems, which meant that the world might have a dangerously serious problem on its hands. In 1975, Congress directed NASA to develop a "comprehensive program of research, technology and monitoring of phenomena of the upper atmosphere" to evaluate the potential risk of ozone damage further. NASA was already conducting atmospheric research, but the Congressional mandate supported even wider efforts. NASA was not the only organization looking into the problem, either. Researchers around the world began focusing on learning more about the chemistry of the upper atmosphere and the behavior of ozone layer. Goddard's Nimbus IV research satellite, launched in 1970, already had an instrument on it to analyze ultraviolet rays that were "backscattered," or reflected, from different altitudes in the Earth's atmosphere. Different wavelengths of UV radiation should be absorbed by the ozone at different levels in the atmosphere. So by analyzing how much UV radiation was still present in different wavelengths, researchers could develop a profile of how thick or thin the ozone layer was at different altitudes and locations. In 1978, Goddard launched the last and most capable of its Nimbus-series satellites. Nimbus 7 carried an improved version of this experiment, called the Solar Backscatter Ultraviolet (SBUV) instrument. It also carried a new sensor called the Total Ozone Mapping Spectrometer (TOMS). As opposed to the SBUV, which provided a vertical profile of ozone in the atmosphere, the TOMS instrument generated a high-density map of the total amount of ozone in the atmosphere. A similar instrument, called the SBUV-2, has been put on weather satellites since the early 1980s. For a number of years, the Space Shuttle periodically flew a Goddard instrument called the Shuttle Solar Backscatter Ultraviolet (SSBUV) experiment that was used to calibrate the SBUV-2 satellite instruments to insure the readings continued to be accurate. In the last couple of years, however, scientists have developed data-processing methods of calibrating the instruments, eliminating the need for the Shuttle experiments. 
Yet it was actually not a NASA satellite that discovered the "hole" that finally developed in the ozone layer. In May 1985, a British researcher in Antarctica published a paper announcing that he had detected an astounding 40% loss in the ozone layer over Antarctica the previous winter. When Goddard researchers went back and looked at their TOMS data from that time period, they discovered that the data indicated the exact same phenomenon. Indeed, the satellite indicated an area of ozone layer thinning, or "hole,"26 the size of the Continental U.S.

How had researchers missed a development that drastic? Ironically enough, it was because the anomaly was so drastic. The TOMS data analysis software had been programmed to flag grossly anomalous data points, which were assumed to be errors. Nobody had expected the ozone loss to be as great as it was, so the data points over the area where the loss had occurred looked like problems with the instrument or its calibration.

Once the Nimbus 7 data was verified, Goddard's researchers generated a visual map of the area over Antarctica where the ozone loss had occurred. In fact, the ability to generate visual images of the ozone layer and its "holes" has been among the significant contributions NASA's ozone-related satellites have made to the public debate over the issue. Data points are hard for most people to fully understand. But for non-scientists, a visual image showing a gap in a protective layer over Antarctica or North America makes the problem not only clear, but somehow very real.

The problem then became determining what was causing the loss of ozone. The problem was a particularly sticky one, because it was going to relate directly to legislation and restrictions that would be extremely costly for industry. By 1978, the Environmental Protection Agency (EPA) had already moved to ban the use of CFCs in aerosols. By 1985, the United Nations Environmental Program (UNEP) was calling on nations to take measures to protect the ozone and, in 1987, forty-three nations signed the "Montreal Protocol," agreeing to cut CFC production 50% by the year 2000.

The CFC theory was based on a prediction that chlorofluorocarbons, when they reached the upper atmosphere, released chlorine and fluorine. The chlorine, it was suspected, was reacting with the ozone to form chlorine monoxide - a chemical that is able to destroy a large amount of ozone in a very short period of time. Because the issue was the subject of so much debate, NASA launched numerous research efforts to try to validate or disprove the theory. In addition to satellite observations, NASA sent teams of researchers and aircraft to Antarctica to take in situ readings of the ozone layer and the ozone "hole" itself. These findings were then supplemented with the bigger picture perspective the TOMS and SBUV instruments could provide.

The TOMS instrument on Nimbus 7 was not supposed to last more than a couple of years. But the information it was providing was considered so critical to the debate that Goddard researchers undertook an enormous effort to keep the instrument working, even as it aged and began to degrade. The TOMS instrument also hadn't been designed to show long-term trends, so the data processing techniques had to be significantly improved to give researchers that kind of information.
In the end, Goddard was able to keep the Nimbus 7 TOMS instrument operating for almost 15 years, which provided ozone monitoring until Goddard was able to launch a replacement TOMS instrument on a Russian satellite in 1991.27

A more comprehensive project to study the upper atmosphere and the ozone layer was launched in 1991, as well. The satellite, called the Upper Atmosphere Research Satellite (UARS), was one of the results of Congress's 1975 mandate for NASA to pursue additional ozone research. Although its goal is to try to understand the chemistry and dynamics of the upper atmosphere, the focus of UARS is clearly on ozone research. Original plans called for the spacecraft to be launched from the Shuttle in the mid-1980s, but the launch backlog that followed the Challenger explosion delayed its launch until 1991. Once in orbit, however, the more advanced instruments on board the UARS satellite were able to map chlorine monoxide levels in the stratosphere. Within months, the satellite was able to confirm what the Antarctic aircraft expeditions and Nimbus-7 satellite had already reported - that there was a clear and causal link between levels of chlorine, formation of chlorine monoxide, and levels of ozone loss in the upper atmosphere.

Since the launch of UARS, the TOMS instrument has been put on several additional satellites to insure that we have a continuing ability to monitor changes in the ozone layer. A Russian satellite called Meteor 3 took measurements with a TOMS instrument from 1991 until the satellite ceased operating in 1994. The TOMS instrument was also incorporated into a Japanese satellite called the Advanced Earth Observing System (ADEOS) that was launched in 1996. ADEOS, which researchers hoped could provide TOMS coverage until the next scheduled TOMS instrument launch in 1999, failed after less than a year in orbit. But fortunately, Goddard had another TOMS instrument ready for launch on a small NASA satellite called an Earth Probe, which was put into orbit with the Pegasus launch vehicle in 1996. Researchers hope that this instrument will continue to provide coverage and data until the next scheduled TOMS instrument launch.

All of these satellites have given us a much clearer picture of what the ozone layer is, how it interacts with various other chemicals, and what causes it to deteriorate. These pieces of information are essential elements for us to have if we want to figure out how best to protect what is arguably one of our most precious natural resources. Using the UARS satellite, scientists have been able to track the progress of CFCs up into the stratosphere and have detected the build-up of chlorine monoxide over North America and the Arctic as well as Antarctica. Scientists also have discovered that ozone loss is much greater when the temperature of the stratosphere is cold. In 1997, for example, particularly cold stratospheric temperatures created the first Antarctic-type of ozone hole over North America.

Another factor in ozone loss is the level of aerosols, or particulate matter, in the upper atmosphere. The vast majority of aerosols come from soot, other pollution, or volcanic activity, and Goddard's scientists have been studying the effects of these particles in the atmosphere ever since the launch of the Nimbus I spacecraft in 1964. Goddard's 1984 Earth Radiation Budget Satellite (ERBS), which is still operational, carries a Stratospheric Aerosol and Gas Experiment (SAGE II) that tracks aerosol levels in the lower and upper atmosphere.
The Halogen Occultation Experiment (HALOE) instrument on UARS also measures aerosol intensity and distribution. In 1991, both UARS and SAGE II were used to track the movement and dispersal of the massive aerosol cloud created by the Mt. Pinatubo volcano eruption in the Philippines. The eruption caused stratospheric aerosol levels to increase to as much as 100 times their pre-eruption levels, creating spectacular Sunsets around the world but causing some other effects, as well. These volcanic clouds appear to help cool the Earth, which could affect global warming trends, but the aerosols in these clouds seem to increase the amount of ozone loss in the stratosphere, as well. The good news is, the atmosphere seems to be beginning to heal itself. In 1979 there was no ozone hole. Throughout the 1980s, while legislative and policy debates raged over the issue, the hole developed and grew steadily larger. In 1989, most U.S. companies finally ceased production of CFC chemicals and, in 1990, the U.N. strengthened its Montreal Protocol to call for the complete phaseout of CFCs by the year 2000. Nature is slow to react to changes in our behavior but, by 1997, scientists finally began to see a levelling out and even a slight decrease in chlorine monoxide levels and ozone loss in the upper atmosphere.28 Continued public interest in this topic has made ozone research a little more complicated for the scientists involved. Priorities and pressures in the program have changed along with Presidential administrations and Congressional agendas and, as much as scientists can argue that data is simply data, they cannot hope to please everyone in such a politically charged arena. Some environmentalists argue that the problem is much worse than NASA is making it out to be, while more conservative politicians have argued that NASA's scientists are blowing the issue out of proportion.29 But at this point a few things are clearer. The production of CFC chemicals was, in fact, harming a critical component of our planet's atmosphere. It took a variety of ground and space instruments to detect and map the nature and extent of the problem. But the perspective offered by Goddard's satellites allowed scientists and the general public to get a clear overview of the problem and map the progression of events that caused it. This information has had a direct impact on changing the world's industrial practices which, in turn, have begun to slow the damage and allow the planet to heal itself. The practical implications of Earth-oriented satellite data may make life a little more complicated for the scientists involved, but no one can argue the significance or impact of the work. By developing the technology to view and analyze the Earth from space, we have given ourselves an invaluable tool for helping us understand and protect the planet on which we live. One of the biggest advantages to remote sensing of the Earth from satellites stems from the fact that the majority of the Earth's surface area is extremely difficult to study from the ground. The world's oceans cover 71% of the Earth's surface and comprise 99% of its living area. Atmospheric convective activity over the tropical ocean area is believed to drive a significant amount of the world's weather. Yet until recently, the only way to map or analyze this powerful planetary element was with buoys, ships or aircraft. But these methods could only obtain data from various individual points, and the process was extremely difficult , expensive, and time-consuming. 
Satellites, therefore, offered oceanographers a tremendous advantage. A two-minute ocean color satellite image, for example, contains more measurements than a ship travelling 10 knots could make in a decade. This ability has allowed scientists to learn a lot more about the vast open stretches of ocean that influence our weather, our global climate, and our everyday lives.30

Although Goddard's early meteorological satellites were not geared specifically toward analyzing ocean characteristics, some of the instruments could provide information about the ocean as well as the atmosphere. The passive microwave sensors that allowed scientists to "see" through clouds better, for example, also let them map the distribution of sea ice around the world. Changes in sea ice distribution can indicate climate changes and affect sea levels around the world, which makes this an important parameter to monitor. At the same time, this information also has allowed scientists to locate open passageways for ships trying to get through the moving ice floes of the Arctic region.

By 1970, NOAA weather satellites also had instruments that could measure the temperature of the ocean surface in areas where there was no cloud cover, and the Landsat satellites could provide some information on snow and ice distributions. But since the late 1970s, much more sophisticated ocean-sensing satellite technology has emerged.31

The Nimbus 7 satellite, for example, carried an improved microwave instrument that could generate a much more detailed picture of sea ice distribution than either the earlier Nimbus or Landsat satellites. Nimbus 7 also carried the first Coastal Zone Color Scanner (CZCS), which allowed scientists to map pollutants and sediment near coastlines. The CZCS also showed the location of ocean phytoplankton around the world. Phytoplankton are tiny, carbon dioxide-absorbing plants that constitute the lowest rung on the ocean food chain. So phytoplankton generally mark spots where larger fish may be found. But because they bloom where nutrient-rich water from the deep ocean comes up near the surface, their presence also gives scientists clues about the ocean's currents and circulation. Nimbus 7 continued to send back ocean color information until 1984.

Scientists at Goddard continued working on ocean color sensor development throughout the 1980s, and a more advanced coastal zone ocean color instrument was launched on the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) satellite in 1997. In contrast to most scientific satellites, SeaWiFS was funded and launched by a private company instead of by NASA. Most of the ocean color data the satellite provides is purchased by NASA and other research institutions, but the company is selling some data to the fishing industry, as well.32

Since the launch of the Nimbus 7 and Tiros-N satellites in 1978, scientists have also been able to get much better information on global ocean surface temperatures. Sea surface temperatures tell scientists about ocean circulation, because they can use the temperature information to track the movement of warmer and cooler bodies of water. Changes in sea surface temperatures can also indicate the development of phenomena such as El Nino climate patterns. In fact, one of the most marked indications of a developing El Nino condition, which can cause heavy rains in some parts of the world and devastating drought in others, is an unusually warm tongue of water moving eastward from the western equatorial Pacific Ocean.
NOAA weather satellites have carried instruments to measure sea surface temperature since 1981, and NASA's EOS AM-1 satellite, scheduled for launch in 1999, incorporates an instrument that can measure those temperatures with even more precision.

The launch of Nimbus 7 also gave researchers the ability to look at surface winds, which help drive ocean circulation. With Nimbus 7, however, scientists had to infer surface winds by looking at slight differentiations in microwave emissions coming from the ocean surface. A scatterometer designed specifically to measure surface winds was not launched until the Europeans launched ERS-1 in 1991. Another scatterometer was launched on the Japanese ADEOS spacecraft in 1996. Because ADEOS failed less than a year after launch, Goddard researchers have begun an intensive effort to launch another scatterometer, called QuickSCAT, on a NASA spacecraft. JPL project managers are being aided in this effort by the Goddard-developed Rapid Spacecraft Procurement Initiative, which will allow them to incorporate the instrument into an existing small spacecraft design. Using this streamlined process, scientists hope to have QuickSCAT in orbit by the end of 1998.33

In the 1970s, researchers at the Wallops Flight Facility also began experimenting with radar altimetry to determine sea surface height, although they were pleased if they could get accuracy within a meter. In 1992, however, a joint satellite project between NASA and the French Centre National d'Etudes Spatiales (CNES) called TOPEX/Poseidon put a much more accurate radar altimeter into orbit. Goddard managed the development of the TOPEX radar altimeter, which can measure sea surface height within a few centimeters.

In addition to offering useful information for maritime weather reports, this sea level data tells scientists some important things about ocean movement. For one thing, sea surface height indicates the build-up of water in one area of the world or another. One of the very first precursors to an El Nino condition, for example, is a rise in ocean levels in the western equatorial Pacific, caused by stronger-than-normal easterly trade winds. Sea level also tells scientists important information about the amount of heat the ocean is storing. If the sea level in a particular area is low, it means that the area of warm, upper-level water is shallow. This means that colder, deeper water can reach the surface there, driving ocean circulation and bringing nutrients up from below, leading to the production of phytoplankton. The upwelling of cold water will also cool down the sea surface temperature, reducing the amount of water that evaporates into the atmosphere.

All of these improvements in satellite capabilities gave oceanographers and scientists an opportunity to integrate on-site surface measurements from buoys or ships with the more global perspective available from space. As a result, we are finally beginning to piece together a more complete picture of our oceans and the role they play in the Earth's biosystems and climate. In fact, one of the most significant results of ocean-oriented satellite research was the realization that ocean and atmospheric processes were intimately linked to each other. To really understand the dynamics of the ocean or the atmosphere, we needed to look at the combined global system they comprised.34

El Nino and Global Change

The main catalyst that prompted scientists to start looking at the oceans and atmosphere as an integrated system was the El Nino event of 1982-83.
The rains and drought associated with the unusual weather pattern caused eight billion dollars of damage, leading to several international research programs to try to understand and predict the phenomenon better. The research efforts included measurements by ships, aircraft, ocean buoys, and satellites, and the work is continuing today. But by 1996, scientists had begun to understand the warning signals and patterns of a strong El Nino event. They also had the technology to track atmospheric wind currents and cloud formation, ocean color, sea surface temperatures, sea surface levels and sea surface winds, which let them accurately predict the heavy rains and severe droughts that occurred at points around the world throughout the 1997-98 winter.

The reason the 1982-83 El Nino prompted a change to a more integrated ocean-atmospheric approach is that the El Nino phenomenon does not exist in the ocean or the atmosphere by itself. It's the coupled interactions between the two elements that cause this periodic weather pattern to occur. The term El Nino, which means "The Child," was coined by fishermen on the Pacific coast of Central America who noticed a warming of their coastal ocean waters, along with a decline in fish population, near the Christ Child's birthday in December. But as scientists have discovered, the sequence of events that causes that warming begins many months earlier, in winds headed the opposite direction.

In a normal year, strong easterly trade winds blowing near the equator drag warmer, upper-level ocean water to the western edge of the Pacific ocean. That build-up of warm water causes convection up into the tropical atmosphere, leading to rainfall along the Indonesian and Australian coastlines. It also leads to upwelling of colder, nutrient-rich water along the eastern equatorial Pacific coastlines, along Central and South America.

In an El Nino year, however, a period of stronger-than-normal trade winds that significantly raises sea levels in the western Pacific is followed by a sharp drop in those winds. The unusually weak trade winds allow the large build-up of warm water in the western tropical Pacific to flow eastward along the equator. That change moves the convection and rainfall off the Indonesian and Australian coasts, causing severe drought in those areas and, as the warm water reaches the eastern edge of the Pacific ocean, much heavier than normal rainfall occurs along the western coastlines of North, Central, and South America. The movement of warm water toward the eastern Pacific also keeps the colder ocean water from coming up to the surface, keeping phytoplankton from growing and reducing the presence of fish further up on the food chain.

In other words, an El Nino is the result of a change in atmospheric winds, which causes a change in ocean currents and sea level distribution, which causes a change in sea surface temperature, which causes a change in water vapor entering the atmosphere, which causes further changes in the wind currents, and so on, creating a cyclical pattern. Scientists still don't know exactly what causes the initial change in atmospheric winds, but they now realize that they need to look at a global system of water, land and air interactions in order to find the answer. And satellites play a critical role in being able to do that.
An El Nino weather pattern is the biggest short-term "coupled" atmospheric and oceanographic climate signal on the planet after the change in seasons, which is why it prompted researchers to take a more interdisciplinary approach to studying it. But scientists are beginning to realize that many of the Earth's climatic changes or phenomena are really coupled events that require a broader approach in order to understand. In fact, the 1990s have seen the emergence of a new type of scientist who is neither oceanographer nor atmospheric specialist, but is an amphibious kind of researcher focusing on the broader issue of climate change.35

One of the other important topics these researchers are currently trying to assess is the issue of global warming. Back in 1896, a Swedish chemist named Svante Arrhenius predicted that the increasing carbon dioxide emissions from the industrial revolution would eventually cause the Earth to become several degrees warmer. The reason for this warming is what has become known as the "greenhouse effect." In essence, carbon dioxide and other "greenhouse gases," such as water vapor, allow the short-wavelength radiation from the Sun to pass through the atmosphere, warming the Earth. But the gases absorb the longer-wavelength energy travelling back from the Earth into space, radiating part of that energy back down to the Earth again. Just as the glass in a greenhouse allows the Sun through but traps the heat inside, these gases end up trapping a certain amount of heat in the Earth's atmosphere, causing the Earth to become warmer.

The effect of this warming could be small or great, depending on how much the temperature actually changes. If it is only a degree or two, the effect would be relatively small. But a larger change in climate could melt polar ice, causing the sea level to rise several feet and wiping out numerous coastal communities and resources. If the warming happened rapidly, vegetation might not have time to adjust to the climate change, which could affect the world's food supply as well as timber and other natural resources.

The critical question, then, is how great a danger global warming is. And the answer to that is dependent on numerous factors. One, obviously, is the amount of carbon dioxide and other emissions we put into the air - a concern that has driven efforts to reduce our carbon dioxide-producing fossil fuel consumption. But the amount of carbon dioxide in the air is also dependent on how much can be absorbed again by plant life on Earth - a figure that scientists depend on satellites to compute. Landsat images can tell scientists how much deforestation is occurring around the world, and how much healthy plant life remains to absorb CO2. Until recently, however, the amount of CO2 absorbed by the world's oceans was unknown. The ocean color images of SeaWiFS are helping to fill that gap, because the phytoplankton it tracks are a major source of carbon dioxide absorption in the oceans.

Another part of the global warming equation is how much water vapor is in the atmosphere - a factor that is driven by ocean processes, especially in the heat furnace of the tropics. As a result, scientists are trying to learn more about the transfer of heat and water vapor between the ocean and different levels of the atmosphere, using tools such as Goddard's TRMM and UARS satellites.
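To make the greenhouse mechanism described above concrete, here is a minimal back-of-the-envelope energy-balance sketch in Python. It is illustrative only and not part of the original text or of any NASA model: the albedo and the longwave absorption fraction are assumed round numbers, and the single-layer treatment is a deliberate oversimplification.

# Illustrative only: a zero-dimensional radiation-balance estimate of the
# greenhouse effect. The albedo and absorption fraction are assumptions
# chosen for illustration, not figures from the text.
SOLAR_CONSTANT = 1361.0   # W/m^2, sunlight arriving at the top of the atmosphere
SIGMA = 5.670e-8          # W/m^2/K^4, Stefan-Boltzmann constant
ALBEDO = 0.30             # fraction of sunlight reflected back to space (assumed)

# Absorbed shortwave energy, averaged over the whole sphere (factor of 4).
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4

# With no greenhouse gases, the surface would radiate directly to space:
t_no_atmosphere = (absorbed / SIGMA) ** 0.25

# A crude one-layer greenhouse: the atmosphere absorbs a fraction f of the
# longwave radiation leaving the surface and re-emits half of it downward.
f = 0.77                  # assumed longwave absorption fraction
t_with_greenhouse = (absorbed / (SIGMA * (1 - f / 2))) ** 0.25

print(f"Equilibrium surface temp, no greenhouse: {t_no_atmosphere:.0f} K")     # ~255 K
print(f"Equilibrium surface temp, crude greenhouse: {t_with_greenhouse:.0f} K") # ~288 K

Real climate models, of course, fold in far more than this toy balance, which is where the computer models described next come in.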
All of these numbers and factors are fed into atmospheric and global computer models, many of which have been developed at the Goddard Institute for Space Studies (GISS) in New York City. These models then try to predict how our global climate may change based on current emissions, population trends, and known facts about ocean and atmospheric processes. While these models have been successful in predicting short-term effects, such as the global temperature drop after the Mt. Pinatubo volcano eruption, the problem with trying to predict global change is that it's a very long-term process, with many factors that may change over time. We have only been studying the Earth in bits and pieces, and for only a short number of years. In order to really understand which climate changes are short-term variations and which ones are longer trends of more permanent change, scientists needed to observe and measure the global, integrated climate systems of Planet Earth over a long period of time. This realization was the impetus for NASA's Mission to Planet Earth, or the Earth Science Enterprise.36 Earth Science Enterprise In some senses, the origins of what became NASA's "Mission to Planet Earth" (MTPE) began in the late 1970s, when we began studying the overall climate and planetary processes of other planets in our solar system. Scientists began to realize that we had never taken that kind of "big picture" look at our own planet, and that such an effort might yield some important and fascinating results. But an even larger spur to the effort was simply the development of knowledge and technology that gave scientists both the capability and an understanding of the importance of looking at the Earth from a more global, systems perspective. Discussions along these lines were already underway when the El Nino event of 1982-83 and the discovery of the ozone "hole" in 1985 elevated the level of interest and support for global climate change research to an almost crisis level. Although the "Mission to Planet Earth" was not announced as a formal new NASA program until 1990, work on the satellites to perform the mission was underway before that. In 1991, Goddard's UARS satellite became the first official MTPE spacecraft to be launched. Although the program has now changed its name to the Earth Science Enterprise, suffered several budget cuts, and refocused its efforts from overall global change to a narrower focus of global climate change (leaving out changes in solid land masses), the basic goal of the program echoes what was initiated in 1990. In essence, the Earth Science Enterprise aims to integrate satellite, aircraft and ground-based instruments to monitor 24 interrelated processes and parameters in the planet's oceans and atmosphere over a 15-year period. Phase I of the program consisted of integrating information from satellites such as UARS, the TOMS Earth Probe, TRMM, TOPEX/Poseidon, ADEOS and SeaWiFS with Space Shuttle research payloads, research aircraft and ground station observations. Phase II is scheduled to begin in 1999 with the launch of Landsat 7 and the first in a series of Earth Observing System (EOS) satellites. The EOS spacecraft are extremely large research platforms with many different instruments to look at various atmospheric and ocean processes that affect natural resources and the overall global climate. They will be polar-orbiting satellites, with orbital paths that will allow the different satellites to take measurements at different times of the day. 
EOS AM-1 is scheduled for launch in late 1998. EOS PM-1 is scheduled for launch around the year 2000. The first in an EOS altimetry series of satellites, which will study the role of oceans, ocean winds and ocean-atmosphere interactions in climate systems, will launch in 2000. An EOS CHEM-1 satellite, which will look at the behavior of ozone and greenhouse gases, measure pollution and the effect of aerosols on global climate, is scheduled for launch in 2002. Follow-on missions will continue the work of these initial observation satellites over a 15-year period. There is still much we don't know about our own planet. Indeed, the first priority of the Earth Science Enterprise satellites is simply to try to fill in the gaps in what we know about the behavior and dynamics of our oceans and our atmosphere. Then scientists can begin to look at how those elements interact, and what impact they have and will have on global climate and climate change. Only then will we really know how great a danger global warming is, or how much our planet can absorb the man-made elements we are creating in greater and greater amounts.37 It's an ambitious task. But until the advent of satellite technology, the job would have been impossible to even imagine undertaking. Satellites have given us the ability to map and study large sections of the planet that would be difficult to cover from the planet's surface. Surface and aircraft measurements also play a critical role in these studies. But satellites were the breakthrough that gave us the unique ability to stand back far enough from the trees to see the complete and complex forest in which we live. For centuries, humankind has stared at the stars and dreamed of travelling among them. We imagined ourselves zipping through asteroid fields, transfixed by spectacular sights of meteors, stars, and distant galaxies. Yet when the astronauts first left the planet, they were surprised to find themselves transfixed not by distant stars, but by the awe-inspiring view their spaceship gave them of the place they had just left - a dazzling, mysterious planet they affectionally nicknamed the "Big Blue Marble." As our horizons expanded into the universe, so did our perspective and understanding of the place we call home. As an astronaut on an international Space Shuttle crew put it, "The first day or so we all pointed to our countries. The third or fourth day we were pointing to our continents. By the fifth day we were aware of only one Earth."38 Satellites have given this perspective to all of us, expanding our horizons and deepening our understanding of the planet we inhabit. If the world is suddenly a smaller place, with cellular phones, paging systems, and Internet service connecting friends from distant lands, it's because satellites have advanced our communication abilities far beyond anything Alexander Graham Bell ever imagined. If we have more than a few hours' notice of hurricanes or storm fronts, it's because weather satellites have enabled meteorologists to better understand the dynamics of weather systems and track those systems as they develop around the world. If we can detect and correct damage to our ozone layer or give advance warning of a strong El Nino winter, it's because satellites have helped scientists better understand the changing dynamics of our atmosphere and our oceans. We now understand that our individual "homes" are affected by events on the far side of the globe. 
From both a climatic and environmental perspective, we have realized that our home is indeed "one Earth," and we need to look at its entirety in order to understand and protect it. The practical implications of this information sometimes make the scientific pursuit of this understanding more complicated than our explorations into the deeper universe. But no one would argue the inherent worth of the information or the advantages satellites offer. The satellites developed by Goddard and its many partners have expanded both our capabilities and our understanding of the complex processes within our Earth's atmosphere. Those efforts may be slightly less mind-bending than our search for space-time anomalies or unexplainable black holes, but they are perhaps even more important. After all, there may be millions of galaxies in the universe. But until we find a way to reach them, this planet is the only one we have. And the better we understand it, the better our chances are of preserving it - not only for ourselves, but for the generations to come.
http://history.nasa.gov/SP-4312/ch5.htm
Basic Electric Circuits: Thevenin's and Norton's Theorems (Lesson 10)

Thevenin's Theorem

Consider the following: two coupled networks joined at terminals A and B (Figure 10.1, coupled networks). For purposes of discussion at this point, we consider that both networks are composed of resistors and independent voltage and current sources.

Suppose Network 2 is detached from Network 1 and we focus temporarily only on Network 1 (Figure 10.2, Network 1 open-circuited). Network 1 can be as complicated in structure as one can imagine: maybe 45 meshes, 387 resistors, 91 voltage sources and 39 current sources.

Now place a voltmeter across terminals A-B and read the voltage. We call this the open-circuit voltage. No matter how complicated Network 1 is, we read one voltage. It is either positive at A (with respect to B) or negative at A. We call this voltage VOS, and we also call it VTHEVENIN, or VTH.

We now deactivate all sources of Network 1. To deactivate a voltage source, we remove the source and replace it with a short circuit. To deactivate a current source, we remove the source and replace it with an open circuit.

Consider a typical circuit with independent sources (Figure 10.3). How do we deactivate the sources of this circuit? When the sources are deactivated, the circuit appears as in Figure 10.4 (the circuit of Figure 10.3 with sources deactivated). Now place an ohmmeter across A-B and read the resistance. If R1 = R2 = R4 = 20 ohms and R3 = 10 ohms, then the meter reads 10 ohms.

We call the ohmmeter reading under these conditions RTHEVENIN, and shorten this to RTH. Therefore, the important result is that we can replace Network 1 with the Thevenin equivalent structure of Figure 10.5.

We can now tie (reconnect) Network 2 back to terminals A-B (Figure 10.6, the system of Figure 10.1 with Network 1 replaced by its Thevenin equivalent circuit). We can now make any calculations we desire within Network 2, and they will give the same results as if we still had Network 1 connected. It follows that we could also replace Network 2 with a Thevenin voltage and Thevenin resistance; the results would be as shown in Figure 10.7 (the network system of Figure 10.1 replaced by Thevenin voltages and resistances).

Example 10.1. Find VX by first finding VTH and RTH to the left of A-B (Figure 10.8, circuit for Example 10.1). First remove everything to the right of A-B. Notice that there is no current flowing in the 4 ohm resistor while A-B is open, so there can be no voltage across that resistor (Figure 10.9, circuit for finding VTH for Example 10.1). We now deactivate the sources to the left of A-B and find the resistance seen looking into these terminals (Figure 10.10, circuit for finding RTH for Example 10.1). We see RTH = (12 ohms in parallel with 6 ohms) + 4 ohms = 8 ohms. After having found the Thevenin circuit, we connect it to the load in order to find VX (Figure 10.11, circuit of Example 10.1 after connecting the Thevenin circuit).
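The two measurements described above (the open-circuit voltage, then the looking-in resistance with the sources deactivated) are easy to script. The Python sketch below works the recipe on a hypothetical one-source voltage divider rather than on the slide's Figure 10.8, so the component values are assumptions chosen only to keep the arithmetic visible.

# Thevenin recipe on an assumed circuit: source Vs in series with R1,
# with R2 from the A-B terminals to ground, then a load Rload reconnected.
def thevenin_of_divider(vs, r1, r2):
    """Open-circuit voltage and looking-in resistance at the divider output."""
    vth = vs * r2 / (r1 + r2)          # voltmeter reading with A-B open
    rth = (r1 * r2) / (r1 + r2)        # Vs shorted -> R1 in parallel with R2
    return vth, rth

def load_voltage(vth, rth, rload):
    """Reconnect the load to the Thevenin equivalent and divide the voltage."""
    return vth * rload / (rth + rload)

vth, rth = thevenin_of_divider(vs=10.0, r1=4.0, r2=12.0)       # assumed values
print(f"VTH = {vth:.2f} V, RTH = {rth:.2f} ohm")               # 7.50 V, 3.00 ohm
print(f"V across 6 ohm load = {load_voltage(vth, rth, 6.0):.2f} V")  # 5.00 V

Whatever sits to the right of A-B only ever "sees" those two numbers, which is the whole point of the theorem.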
In some cases it may become tedious to find RTH by reducing the resistive network with the sources deactivated. Consider a Thevenin circuit with its output shorted (Figure 10.12). We see that the short-circuit current is ISS = VTH/RTH, so RTH = VTH/ISS (Eq. 10.1).

Example 10.2. For the circuit in Figure 10.13 (the given circuit with the load shorted), find RTH by using Eq. 10.1. The task now is to find ISS. One way to do this is to replace the circuit to the left of C-D with a Thevenin voltage and Thevenin resistance. Applying Thevenin's theorem to the left of terminals C-D and reconnecting to the load gives the reduction of Figure 10.14 (Thevenin reduction for Example 10.2).

Example 10.3. For the circuit below, find VAB by first finding the Thevenin circuit to the left of terminals A-B (Figure 10.15, circuit for Example 10.3). We first find VTH with the 17 ohm resistor removed (Figure 10.16, circuit for finding VOC for Example 10.3). Next we find RTH by looking into terminals A-B with the sources deactivated (Figure 10.17, circuit for finding RTH for Example 10.3). From the Thevenin-reduced circuit of Figure 10.18 we can then easily find VAB.

Example 10.4. Working with a mix of independent and dependent sources: find the voltage across the 100 ohm load resistor by first finding the Thevenin circuit to the left of terminals A-B (Figure 10.19, circuit for Example 10.4). First remove the 100 ohm load resistor and find VAB = VTH to the left of terminals A-B (Figure 10.20, circuit for finding VTH for Example 10.4). To find RTH, we deactivate all independent sources but retain all dependent sources, as shown in Figure 10.21 (Example 10.4 with independent sources deactivated). We cannot find RTH of that circuit as it stands. We must apply either a voltage or current source at the load and calculate the ratio of this voltage to current to find RTH. With the test source applied (Figure 10.22, circuit for finding RTH for Example 10.4), we write a loop equation around the loop at the left and solve it for the controlling variable; then, using the outer loop, going in the clockwise direction and summing drops (Figure 10.23), we obtain RTH. The Thevenin equivalent circuit tied to the 100 ohm load resistor is shown in Figure 10.24 (Thevenin circuit tied to the load for Example 10.4).

Example 10.5. Finding the Thevenin circuit when only resistors and dependent sources are present: consider the circuit of Figure 10.25 (circuit for Example 10.5) and find VXY by first finding the Thevenin circuit to the left of x-y. For this circuit it would probably be easier to use mesh or nodal analysis to find VXY; however, the purpose is to illustrate Thevenin's theorem. We first reconcile that the Thevenin voltage for this circuit must be zero. There is no "juice" in the circuit, so there cannot be any open-circuit voltage except zero.
This is always true when the circuit is made up of only dependent sources and resistors. To find RTH, we apply a 1 A test source and determine the resulting voltage V for the circuit of Figure 10.26 (circuit for finding RTH for Example 10.5). We write KVL around the loop at the left, starting at m and going clockwise using drops (Figure 10.27), and then write KVL for the loop to the right, starting at n and using drops (Figure 10.28, determining RTH for Example 10.5). We know that RTH = V/I, where V = 50 V and I = 1 A; thus RTH = 50 ohms. The Thevenin circuit tied to the load is given in Figure 10.29 (Thevenin circuit tied to the load for Example 10.5). Obviously, VXY = 50 V.

Norton's Theorem

Assume that the network enclosed below is composed of independent sources and resistors. Norton's Theorem states that this network can be replaced by a current source shunted by a resistance R. In the Norton circuit, the current source is the short-circuit current of the network, that is, the current obtained by shorting the output of the network. The resistance is the resistance seen looking into the network with all sources deactivated. This is the same as RTH.

We recall the following from source transformations: if we have the Thevenin equivalent circuit of a network, we can obtain the Norton equivalent by using source transformation. However, this is not how we normally go about finding the Norton equivalent circuit.

Example 10.6. Find the Norton equivalent circuit to the left of terminals A-B for the network shown in Figure 10.30 (circuit for Example 10.6). Connect the Norton equivalent circuit to the load and find the current in the 50 ohm resistor. The short-circuit (Norton) current can be found by standard circuit analysis (Figure 10.31, circuit for finding INORTON). It can also be shown that, by deactivating the sources, we find the resistance looking into terminals A-B is RN. RN and RTH will always be the same value for a given circuit. The Norton equivalent circuit tied to the load is shown in Figure 10.32 (final circuit for Example 10.6).

Example 10.7. This example illustrates how one might use Norton's Theorem in electronics; the following circuit comes close to representing the model of a transistor. For the circuit shown in Figure 10.33 (circuit for Example 10.7), find the Norton equivalent circuit to the left of terminals A-B. We first find ISS and VOS. From the circuit for finding ISS (Figure 10.34), we note that ISS = -25IS. From the circuit for finding VOS (Figure 10.35), we write the mesh equation for the mesh on the left and solve for VOS. We saw earlier that RN = VOS/ISS, and the resulting Norton equivalent circuit is shown in the final figure (Norton circuit for Example 10.7).

Extension of Example 10.7. Using source transformations, we know that the Thevenin equivalent circuit is as shown in Figure 10.36 (Thevenin equivalent for Example 10.7).
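The source transformation used in the extension above is a one-line conversion each way. The short Python sketch below is illustrative; the numbers are assumptions, not the values from Example 10.7.

# Converting between the Thevenin and Norton forms discussed above.
def thevenin_to_norton(vth, rth):
    """IN = VTH / RTH, RN = RTH."""
    return vth / rth, rth

def norton_to_thevenin(i_n, r_n):
    """VTH = IN * RN, RTH = RN."""
    return i_n * r_n, r_n

i_n, r_n = thevenin_to_norton(vth=12.0, rth=4.0)
print(f"Norton: IN = {i_n:.1f} A in parallel with RN = {r_n:.1f} ohm")  # 3.0 A, 4.0 ohm

# Current delivered to a load is the same either way (current divider on the
# Norton side, or VTH over the series resistance on the Thevenin side):
r_load = 8.0
i_load = i_n * r_n / (r_n + r_load)
print(f"Load current = {i_load:.1f} A")   # 1.0 A, same as 12 V / (4 + 8) ohm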
End of Lesson 10: Thevenin and Norton.
http://www.powershow.com/view/25f58-YTk5Y/Basic_Electric_Circuits_powerpoint_ppt_presentation
- How does Radio Echo Sounding Work?
- Frequencies and Wavelengths
- Radio Wave Propagation in Ice
- Field Work
- Data Processing
- Photos and Links

How Does Radio-Echo Sounding Work?

A radio-echo sounding system consists of two main components: 1) the transmitter, and 2) the receiver. The transmitter sends out a brief burst of radio waves of a specific frequency. The receiver detects the radio waves from the transmitter and any waves that have bounced, or reflected, off nearby surfaces. The receiver records the amount of time between the arrival of the transmitted wave and any reflected waves, as well as the strength of the waves (measured as an AC voltage).

The radio waves travel at different speeds through different materials. For example, radio waves travel very close to 300,000,000 meters/second (3 x 10^8 m/s) through air, which is a little less than double the speed in ice (1.69 x 10^8 m/s). See the next three tabs for a more in-depth explanation.

Frequencies & Wavelengths of Waves

Electro-Magnetic (EM) energy is made up of both particles and waves. A single wavelength is 2π, or 360°, of the wave's angular distance. When a wave travels through a material, the wavelength is the distance travelled through the material by 2π (one full cycle) of the wave. The number of times a wave oscillates over a certain amount of time is known as the frequency of the wave. The unit of frequency is the Hertz (Hz), which is the number of complete wavelengths that pass a point in a single second. Therefore, 1 Hz = 1 cycle/second, or 1/s.

The wavelength of a signal passing through a material depends on the frequency (f) of the wave and the signal velocity (u) through the material (a property of the material itself). As shown above, the units of frequency are 1/s, and the units of velocity are m/s. Since wavelength (λ) is measured in m, the equation to obtain wavelength is: λ = u / f, or wavelength = velocity / frequency.

A higher amplitude wave of a given frequency carries more energy than a low amplitude wave. A signal can be detected only if its amplitude is greater than that of any background noise. For example, if you are listening to a radio in New York City, you can pick up a station from Seattle only if its signal is stronger than the EM noise caused by the sun, electric motors, and local radio stations.

There are numerous radio-echo sounding devices used by various researchers throughout the world. The components described here are those used by researchers at the University of Wyoming, based on the system designed by Barry Narod and Garry Clarke at the University of British Columbia (Narod & Clarke, J. of Glaciology, 1995). It has been designed for use on temperate glaciers. The transmitter emits a 10 ns (nanosecond) long pulse at a frequency of 100 MHz. The details of the pulse-generation circuitry can be found in Narod & Clarke, 1995. The frequency of the pulse is modulated for use on temperate glaciers by attaching two 10 m antennas. The resulting 5 MHz frequency is ideal for temperate glacier radio-echo sounding. The transmitter is powered by a 12 V battery. The transmitter and battery are housed in a small tackle box which is attached to a pair of old skis. The antennas extend out the front and back of the tackle box. The forward antenna is carried by the person pulling the transmitter sled's tow rope, while the rear antenna drags behind. There is no focusing of the transmitted signal, so it propagates in all directions into the ice and air. In order to reduce "ringing" of the signal along the antenna, resistors are embedded every meter along the antenna. The total resistance of each 10 m antenna is 11 ohms.
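A quick way to check the wavelength arithmetic above is to divide the propagation velocity by the frequency. The short Python sketch below uses only the velocities quoted in the text; the function name is just for illustration.

# Wavelength = velocity / frequency, using the velocities quoted above.
def wavelength(velocity_m_per_s, frequency_hz):
    """lambda = u / f."""
    return velocity_m_per_s / frequency_hz

V_AIR = 3.00e8   # m/s, radio velocity in air
V_ICE = 1.69e8   # m/s, radio velocity in ice

print(f"5 MHz in ice:   {wavelength(V_ICE, 5e6):.1f} m")    # ~33.8 m (the "34 m" in the text)
print(f"100 MHz in air: {wavelength(V_AIR, 100e6):.1f} m")  # 3.0 m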
In order to reduce "ringing" of the signal along the antenna, resistors are embedded every meter along the antenna. The total resistance of each 10 m antenna is 11 ohms. The receiver begins with an antenna identical to that of the transmitter. As each pulse is sent out of the transmitter, some of the transmitted energy travels through the air and some through the ice. The velocity of radio waves in air is almost twice that in ice, so the receiver first detects the "Direct Wave" transmitted through the air between the transmitter and receiver. This triggers the oscilloscope to begin recording the signal. For the next 10 µs, the oscilloscope records the voltage of the signals that have reflected off nearby surfaces. The scope averages 64 of the transmitter pulses and reflected waves to generate a single trace. By averaging, the scope reduces noise due to signal scatter and instrument noise in order to obtain a better trace to be recorded on the laptop computer. The entire receiver is placed in a small sled which is pulled by a tow rope. A third researcher monitors the signals on the oscilloscope and records the information onto the laptop. Both the scope and the laptop are powered by a 12 V battery which can be charged by a solar panel for extended surveys.

Radio Wave Propagation in Temperate Ice
As most people know, both water and ice are transparent to the visible light portion of the Electro-Magnetic (EM) spectrum. At the much lower frequencies (and longer wavelengths) of radio waves, liquid water is opaque while ice is still relatively transparent. This is why radio-echo sounding is used in the sub-freezing regions of the Arctic and Antarctic glaciers and ice sheets. There is little water present within these cold ice masses to scatter or block the radio signals. The lack of water has allowed researchers to use frequencies ranging from a few MHz for subglacial mapping, up to 200-500 MHz for crevasse detection near the ice surface. Frequencies in the GHz range are used for studies of snow structure and stratigraphy. By definition, temperate ice exists at the pressure-melting point. This means that both ice and water phases coexist. The presence of liquid water presents a problem when trying to use radio waves in temperate glaciers because the water scatters the radio signals, making it difficult to receive coherent reflections that can later be interpreted. In the late 1960s through the mid-1970s, a number of researchers experimented with various frequencies and transmitter designs. Their findings concluded that frequencies between ~2 and ~10 MHz are best for temperate glaciers, and 5 MHz pulse-transmitters are the most commonly used. The basic reason that a 5 MHz signal works in most temperate ice is that the resulting 34 m wavelength is far larger than the size of the majority of the englacial water bodies that scatter the signal. Unfortunately, the long wavelength of the signal seriously limits the resolution of the radio-echo sounding survey.

EM Wave Propagation Through a Dielectric Material
Radio waves travel through ice due to its dielectric properties. The dielectric constant of a given material is a complex number describing the comparison of the electrical permittivity of a material and that of a vacuum. As a complex number, the dielectric constant contains both real and imaginary portions. The imaginary part of the number represents the polarization of atoms in the material as the EM energy passes through it (Feynman, 1964).
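To connect the dielectric constant to the velocities used throughout these notes, the sketch below applies the standard low-loss relation v = c / sqrt(ε_r). The ε_r value for ice is the one quoted in the next section; the value for liquid water is an indicative assumption added here for comparison and is not from the original page.

```python
import math

C = 3.0e8  # m/s, speed of light in vacuum/air (the approximation used in the text)

def velocity_from_permittivity(eps_r: float) -> float:
    """Propagation velocity of an EM wave in a low-loss dielectric: v = c / sqrt(eps_r)."""
    return C / math.sqrt(eps_r)

if __name__ == "__main__":
    eps_ice = 3.2     # real part of the dielectric constant of ice at 0 degC (quoted below)
    eps_water = 80.0  # assumed indicative value for liquid water (not from the original page)
    print(f"v in ice:   {velocity_from_permittivity(eps_ice):.3g} m/s")    # ~1.68e8 m/s, close to the 1.69e8 quoted
    print(f"v in water: {velocity_from_permittivity(eps_water):.3g} m/s")  # ~3.4e7 m/s, much slower
```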
The EM wave propagation velocity is determined by the material's entire complex dielectric constant. The propagation velocity of a radio wave in ice is therefore determined by the dielectric properties of ice. Liquid water and various types of bedrock have unique dielectric constants. Since the dielectric properties of a material are related to conductivity, concentrations of dissolved ions in liquid water will affect the dielectric constant (more free ions increase the conductivity of water). As an example, the dielectric constant of ice (at 0 °C) is 3.2 ± 0.03.

Reflections of Waves: The Basic Concept
When a wave encounters an interface between materials of different properties, the wave may be refracted, reflected, or both. Snell's Law describes the reaction of light to a boundary between materials of different dielectric contrasts (or refractive indices), based on the angle at which a ray perpendicular to the wave front hits the interface. The angle of the incoming ray (Angle of Incidence, ai) is equal to the angle of reflection (ar). The Angle of Refraction (aR) is determined by the ratio of the sines of the Angle of Incidence and the Angle of Refraction, which depends on the ratio of the dielectric constants of the upper and lower layers (e1 and e2). There is a point where the Angle of Incidence is large enough (close to horizontal) that there is no refraction into the lower layer. This is called the Angle of Critical Refraction, where all the incoming waves are either reflected or refracted along the interface. Any angles larger than the Angle of Critical Refraction result in only reflection.

Radio-Echo Sounding in the Field
The appropriate field methods for gathering Radio-Echo Sounding (RES) data depend upon the objective of the survey. If a researcher simply wants a rough estimate of the glacier thickness, only a couple of readings might suffice. If a high-resolution map of the glacier bed is desired, a dense grid of measurement points is necessary. Below is a description of the field techniques used to develop a high-resolution map of the glacier bed. It is important to remember that even after the field work is over there are many hours of data processing to be done. The techniques described here were developed to minimize the processing time and to maximize the resolution of the resulting map.

Mapping the RES Grid
When processing and interpreting the RES data after the field season, the researcher needs to know the topography of the glacier surface to correct for changes in the recorded wave travel times. The glacier surface topography is mapped using the Global Positioning System (GPS) or by traditional optical surveying. While GPS is faster, it does not have the vertical or horizontal resolution of optical surveying. The horizontal positions are necessary to locate the map with respect to other maps of the area, while the vertical coordinates are critical for the data processing and need to be accurate to within 0.5 m. In order to reduce the possibility of spatial aliasing and to maximize the resolution of the RES survey, the traces should be recorded less than one-quarter wavelength apart. For example, a 5 MHz RES system produces a 34 m wavelength; therefore the grid of RES traces should be spaced less than 8.5 m apart. A rectangular grid with the traces aligned at 90° to one another greatly simplifies the data processing. Unfortunately, field conditions do not always oblige such an orderly system and the grid is modified by the presence of crevasses, melt-water ponds, steep slopes, avalanche debris, etc.
In such cases, detailed notes help to recreate the grid during the data processing.

Recording the Profiles
The transmitter and receiver occupy separate sleds. These may be pulled in-line or side-by-side depending on the design specifications of the instruments. The Univ. of Wyoming system is pulled side-by-side, so that the transmitter and receiver are pulled parallel to one another. A single researcher pulls the transmitter on its homemade sled while another pulls the receiver sled. A third researcher walks beside the receiver sled to monitor the incoming signals on the oscilloscope and then record them to the laptop computer. Some systems can continuously record traces to a computer and do minor amounts of pre-processing such as trace stacking (or averaging) and digital filtering to remove noise. The Univ. of Wyoming system is much simpler, requiring the researchers to stop at each position in the RES grid and manually tell the computer to retrieve data from the oscilloscope. Although more time consuming, this method allows the researchers to monitor the condition of the incoming data and results in a smaller data set. Each trace recorded onto the computer is an average of at least 64 received pulses from the transmitter so that the signal-to-noise ratio is improved.

RES Field Work on the Worthington Glacier
The Worthington Glacier is a small temperate valley glacier in the Chugach Mts. of South-Central Alaska. Radio-echo sounding surveys have been recorded there in support of ice-dynamics research by the Univ. of Wyoming and the Institute of Arctic & Alpine Research at the University of Colorado.

Processing Radio Echo-Sounding Data
Processing the Radio Echo-Sounding (RES) data transforms the data from incoherent numbers to a data set that can be interpreted. Our processing methods are drawn from reflection seismology techniques. These are outlined in Welch, 1996; Welch et al., 1998; and Yilmaz, 1987. We use a number of IDL (from Research Systems, Inc.) scripts to organize our data and usually create screen plots of each profile through each step of the processing to help identify problems or mistakes. We also use Seismic Unix (SU), a collection of freeware seismic processing scripts from the Colorado School of Mines. SU handles the filtering, gain controls, RMS, and migration of the data. IDL is used for file manipulation and plotting and provides a general programming background for the processing. The processing steps below are listed in the order that they are applied, and should be followed in this order. Note that the quality of the processing results is strongly dependent on the quality of the field data.

Data Cleaning and Sorting
The first step of data processing is to organize and clean the field data so that all the profiles are oriented in the same direction (South to North, for example), any duplicated traces are deleted, profiles that were recorded in multiple files are joined together, and surface coordinates are assigned to each trace based on survey data. These steps are some of the most tedious, but are critical for later migration and interpretation.

Static and Elevation Corrections
The data is plotted as though the transmitter and receiver were a single point and the glacier surface is a horizontal plane. Since neither is the case, the data must be adjusted to reflect actual conditions. The transmitter-receiver separation results in a trigger-delay equivalent to the travel-time of the signal across the distance separating the two.
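As a rough illustration of that trigger delay, the toy calculation below estimates the static correction for an assumed transmitter-receiver separation, taking the direct wave through air as the trigger as described earlier; this is a sketch under my own assumptions, not the IDL/Seismic Unix code actually used by the Wyoming group.

```python
# Toy estimate of the static correction caused by the transmitter-receiver separation.
# Assumption: the oscilloscope is triggered by the direct wave travelling through air,
# so recording starts late by (separation / v_air); that time is added back to each trace.

V_AIR = 3.0e8  # m/s

def static_correction_s(separation_m: float) -> float:
    """Travel time of the direct (air) wave across the transmitter-receiver separation."""
    return separation_m / V_AIR

if __name__ == "__main__":
    sep = 40.0  # metres; hypothetical separation, not a value given in the text
    dt = static_correction_s(sep)
    print(f"Static correction for {sep:.0f} m separation: {dt * 1e9:.0f} ns")  # ~133 ns
```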
This travel-time is added to the tops of all the traces as a Static Correction. The data is then adjusted with respect to the highest trace elevation in the profile array. Trace elevations are taken from the survey data, and the elevation difference between any trace and the highest trace is converted into a travel-time through ice by dividing the elevation difference by the radio-wave velocity in ice (1.69 × 10⁸ m/s). The travel-time is added to the top of the trace, adjusting the recorded data downward.

Filtering and Gain Controls
We use a bandpass filter in SU to eliminate low and high frequency noise that results from the radar instrumentation, nearby generators, etc. Generally we accept only frequencies within a window of 4-7 MHz, as our center transmitter frequency is 5 MHz. Depending on the data, we will adjust the gain on the data, but generally avoid any gain as it also increases noise amplitude. We try to properly adjust gain controls in the field so that later adjustment is unnecessary.

Cross-Glacier Migration (2-D)
We 2-D migrate the data in the cross-glacier direction (or across the dominant topography of the dataset) in order to remove geometric errors introduced by the plotting method. Yilmaz (1987) provides a good explanation of the need for migration as well as descriptions of various migration algorithms. Why is migration necessary? The radar transmitter emits an omni-directional signal that we can assume is roughly spherical in shape. As the wave propagates outward from the transmitter, the size of the spherical wavefront gets bigger, so when it finally reflects off a surface, that surface may be far from directly beneath the transmitter. Since, by convention, we plot the data as though all reflections come from directly below the transmitter, we have to adjust the data to show the reflectors in their true positions. We generally use a TK migration routine that is best for single-velocity media where steep slopes are expected. As you can see from the plot below, the shape of the bed reflector has changed from the unmigrated plots shown in the previous section.

Down-Glacier Migration (2-D)
In order to account for the 3-dimensional topography of the glacier bed, we now migrate the profiles again, this time in the down-glacier direction. We use the same migration routine and the cross-glacier migrated profiles as the input. Although not as accurate as a true 3-dimensional migration, this two-pass method accounts for much of the regional topography by migrating in two orthogonal directions. (Figure: radar profile after down-glacier migration.)

Interpreting and Plotting the Bed Surface
Once the profiles have been migrated in both the cross-glacier and down-glacier directions, we use IDL to plot the profiles as an animation sequence. The animation shows slices of the processed dataset in both the down-glacier and cross-glacier direction. By animating the profiles, it is easier to identify coherent reflection surfaces within the dataset. Another IDL script allows the user to digitize, grid, and plot reflection surfaces. The resolution of an interpreted surface is a function of the instrumentation, field techniques and processing methods. Through modeling of synthetic radar profiles, we have shown that under ideal circumstances, we can expect to resolve features with a horizontal radius greater than or equal to half the transmitter's wavelength in ice. So for a 5 MHz system, we can expect to resolve features that are larger than about 34 m across.
Since the horizontal resolution is far coarser than the vertical resolution of 1/4 wavelength, we use the horizontal resolution as a smoothing window size for the interpreted reflector surfaces. We use a distance-weighted window to smooth the surfaces (a simple sketch of such a smoothing window appears at the end of these notes).

Figure: The ice and bedrock surfaces of a portion of the Worthington Glacier obtained in the 1996 radio echo sounding survey; the 1994 boreholes are also plotted. (Plot by Joel Harper, U. of Wyo.)
Figure: The ice surface and bedrock surface beneath the Worthington Glacier, Alaska. Resolution of both surfaces is 20 x 20 m. Yellow lines indicate the positions of boreholes used to measure ice deformation.

Pictures of the Worthington Glacier Area

Notes on Radar Profiles
Three arrays of Radio-Echo Sounding profiles have been recorded on the Worthington Glacier. The 1994 survey was recorded using different field methods than those used in 1996 & 1998. The same equipment was used in all three surveys, as well as the same data processing techniques. The first profiles were recorded in 1994 and oriented parallel to the ice flow direction. The locations of these profiles were not measured accurately, and the profiles were recorded a few at a time over a period of about a month. The resulting glacier bed map was not very accurate, with a resolution of about 40 x 40 meters. The 1996 radar profiles were recorded in the cross-glacier direction. The location of every fourth trace of each profile was measured with optical surveying equipment using a local coordinate system seen in the map below. The profiles were spaced 20 m apart and a trace recorded every 5 m along each profile. The resulting glacier bed map had a resolution of 20 x 20 meters. In 1998 we used the radio-echo sounding equipment to look for englacial conduits that transport surface meltwater through the glacier to its bed. This study required the maximum resolution that we could obtain from the equipment, so the profiles and traces were spaced every 5 m. Every fourth trace on each profile was surveyed to locate it to within 0.25 m, and the entire RES survey was recorded in two days. The survey was repeated a month later to look for changes in the geometry of any englacial conduits found. The first RES survey was processed to produce a map of the glacier bed surface with a resolution of 17.5 x 17.5 m. The maximum resolution obtainable by an RES survey is half of the signal wavelength. Our 5 MHz system, therefore, can obtain 17 x 17 m resolution under the best of circumstances.
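The distance-weighted smoothing mentioned above can be sketched as follows. This is an illustrative implementation under assumptions of my own choosing (inverse-distance weights and a half-wavelength window radius of 17 m), not the IDL code actually used by the Wyoming group.

```python
import math

def smooth_surface(points, window_radius_m=17.0):
    """
    Distance-weighted smoothing of an interpreted reflector surface.

    points: list of (x, y, z) picks in metres.
    window_radius_m: smoothing radius; half the 34 m wavelength in ice is assumed here.
    Returns a list of (x, y, z_smoothed), where z is replaced by an inverse-distance-weighted
    mean of all picks inside the window (the centre pick gets a weight of 1).
    """
    smoothed = []
    for (x0, y0, _z0) in points:
        num = den = 0.0
        for (x, y, z) in points:
            d = math.hypot(x - x0, y - y0)
            if d <= window_radius_m:
                w = 1.0 / (1.0 + d)  # assumed inverse-distance weighting
                num += w * z
                den += w
        smoothed.append((x0, y0, num / den))
    return smoothed

# Example: a tiny synthetic grid of bed-elevation picks (metres), with some artificial roughness.
picks = [(x, y, 1000.0 - 0.1 * x + (5.0 if (x + y) % 20 == 0 else 0.0))
         for x in range(0, 40, 5) for y in range(0, 40, 5)]
print(smooth_surface(picks)[:3])
```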
http://stolaf.edu/other/cegsic/background/index.htm
History of geodesy Geodesy (/dʒiːˈɒdɨsi/), also named geodetics, is the scientific discipline that deals with the measurement and representation of the Earth. Humanity has always been interested in the Earth. During very early times this interest was limited, naturally, to the immediate vicinity of home and residency, and the fact that we live on a near spherical globe may or may not have been apparent. As humanity developed, so did its interest in understanding and mapping the size, shape, and composition of the Earth. Early ideas about the figure of the Earth held the Earth to be flat (see flat earth), and the heavens a physical dome spanning over it. Two early arguments for a spherical earth were that lunar eclipses were seen as circular shadows which could only be caused by a spherical Earth, and that Polaris is seen lower in the sky as one travels South. The early Greeks, in their speculation and theorizing, ranged from the flat disc advocated by Homer to the spherical body postulated by Pythagoras — an idea supported later by Aristotle. Pythagoras was a mathematician and to him the most perfect figure was a sphere. He reasoned that the gods would create a perfect figure and therefore the earth was created to be spherical in shape. Anaximenes, an early Greek scientist, believed strongly that the earth was rectangular in shape. Since the spherical shape was the most widely supported during the Greek Era, efforts to determine its size followed. Plato determined the circumference of the earth to be 400,000 stadia (between 62,800 km/39,250 mi and 74,000 km/46,250 mi ) while Archimedes estimated 300,000 stadia ( 55,500 kilometres/34,687 miles ), using the Hellenic stadion which scholars generally take to be 185 meters or 1/10 of a geographical mile. Plato's figure was a guess and Archimedes' a more conservative approximation. In Egypt, a Greek scholar and philosopher, Eratosthenes (276 BC– 195 BC), is said to have made more explicit measurements. He had heard that on the longest day of the summer solstice, the midday sun shone to the bottom of a well in the town of Syene (Aswan). At the same time, he observed the sun was not directly overhead at Alexandria; instead, it cast a shadow with the vertical equal to 1/50th of a circle (7° 12'). To these observations, Eratosthenes applied certain "known" facts (1) that on the day of the summer solstice, the midday sun was directly over the Tropic of Cancer; (2) Syene was on this tropic; (3) Alexandria and Syene lay on a direct north-south line; (4) The sun was a relatively long way away (Astronomical unit). Legend has it that he had someone walk from Alexandria to Syene to measure the distance: that came out to be equal to 5000 stadia or (at the usual Hellenic 185 meters per stadion) about 925 kilometres. From these observations, measurements, and/or "known" facts, Eratosthenes concluded that, since the angular deviation of the sun from the vertical direction at Alexandria was also the angle of the subtended arc (see illustration), the linear distance between Alexandria and Syene was 1/50 of the circumference of the Earth which thus must be 50×5000 = 250,000 stadia or probably 25,000 geographical miles. The circumference of the Earth is 24,902 miles (40,075.16 km). Over the poles it is more precisely 40,008 km or 24,860 statute miles. The actual unit of measure used by Eratosthenes was the stadion. No one knows for sure what his stadion equals in modern units, but some say that it was the Hellenic 185-meter stadion. 
Had the experiment been carried out as described, it would not be remarkable if it agreed with actuality. What is remarkable is that the result was probably about one sixth too high. His measurements were subject to several inaccuracies: (1) though at the summer solstice the noon sun is overhead at the Tropic of Cancer, Syene was not exactly on the tropic (which was at 23° 43' latitude in that day) but about 22 geographical miles to the north; (2) the difference of latitude between Alexandria (31.2 degrees north latitude) and Syene (24.1 degrees) is really 7.1 degrees rather than the perhaps rounded (1/50 of a circle) value of 7° 12' that Eratosthenes used; (4) the actual solstice zenith distance of the noon sun at Alexandria was 31° 12' − 23° 43' = 7° 29' or about 1/48 of a circle not 1/50 = 7° 12', an error closely consistent with use of a vertical gnomon which fixes not the sun's center but the solar upper limb 16' higher; (5) the most importantly flawed element, whether he measured or adopted it, was the latitudinal distance from Alexandria to Syene (or the true Tropic somewhat further south) which he appears to have overestimated by a factor that relates to most of the error in his resulting circumference of the earth. A parallel later ancient measurement of the size of the earth was made by another Greek scholar, Posidonius. He is said to have noted that the star Canopus was hidden from view in most parts of Greece but that it just grazed the horizon at Rhodes. Posidonius is supposed to have measured the elevation of Canopus at Alexandria and determined that the angle was 1/48th of circle. He assumed the distance from Alexandria to Rhodes to be 5000 stadia, and so he computed the Earth's circumference in stadia as 48 times 5000 = 240,000. Some scholars see these results as luckily semi-accurate due to cancellation of errors. But since the Canopus observations are both mistaken by over a degree, the "experiment" may be not much more than a recycling of Eratosthenes's numbers, while altering 1/50 to the correct 1/48 of a circle. Later either he or a follower appears to have altered the base distance to agree with Eratosthenes's Alexandria-to-Rhodes figure of 3750 stadia since Posidonius's final circumference was 180,000 stadia, which equals 48×3750 stadia. The 180,000 stadia circumference of Posidonius is suspiciously close to that which results from another method of measuring the earth, by timing ocean sun-sets from different heights, a method which produces a size of the earth too low by a factor of 5/6, due to horizontal refraction. The abovementioned larger and smaller sizes of the earth were those used by Claudius Ptolemy at different times, 252,000 stadia in the Almagest and 180,000 stadia in the later Geographical Directory. His midcareer conversion resulted in the latter work's systematic exaggeration of degree longitudes in the Mediterranean by a factor close to the ratio of the two seriously differing sizes discussed here, which indicates that the conventional size of the earth was what changed, not the stadion. The Indian mathematician Aryabhata (AD 476 - 550) was a pioneer of mathematical astronomy. He describes the earth as being spherical and that it rotates on its axis, among other things in his work Āryabhaṭīya. Aryabhatiya is divided into four sections. Gitika, Ganitha (mathematics), Kalakriya (reckoning of time) and Gola (celestial sphere). 
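A short numerical restatement of the two classical estimates discussed above, converted with the 185 m Hellenic stadion mentioned earlier; since the true length of the stadion is uncertain, the metric figures are only as good as that assumption.

```python
# Classical Earth-circumference estimates, converted with an assumed 185 m stadion.

STADION_M = 185.0           # Hellenic stadion assumed in the text
MODERN_POLAR_KM = 40_008.0  # modern meridional circumference quoted in the text

def circumference_km(arc_fraction: float, arc_length_stadia: float) -> float:
    """Circumference implied by an arc that covers `arc_fraction` of a full circle."""
    return arc_length_stadia / arc_fraction * STADION_M / 1000.0

eratosthenes = circumference_km(1 / 50, 5000)  # 250,000 stadia, about 46,250 km
posidonius = circumference_km(1 / 48, 5000)    # 240,000 stadia, about 44,400 km
for name, value in [("Eratosthenes", eratosthenes), ("Posidonius", posidonius)]:
    err = (value - MODERN_POLAR_KM) / MODERN_POLAR_KM * 100
    print(f"{name}: {value:,.0f} km ({err:+.0f}% vs modern)")  # roughly one sixth too high, as noted above
```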
The discovery that the earth rotates on its own axis from west to east is described in the Aryabhatiya (Gitika 3, 6; Kalakriya 5; Gola 9, 10). For example, he explained that the apparent motion of heavenly bodies is only an illusion (Gola 9), with the following simile:
- Just as a passenger in a boat moving downstream sees the stationary trees on the river banks as traversing upstream, so does an observer on earth see the fixed stars as moving towards the west at exactly the same speed at which the earth moves from west to east.
Aryabhatiya also estimates the circumference of Earth, with an accuracy of 1%, which is remarkable. Aryabhata gives the radii of the orbits of the planets in terms of the Earth-Sun distance as essentially their periods of rotation around the Sun. He also gave the correct explanation of lunar and solar eclipses and noted that the Moon shines by reflecting sunlight. The Muslim scholars, who held to the spherical Earth theory, used it to calculate the distance and direction from any given point on the earth to Mecca. This determined the Qibla, or Muslim direction of prayer. Muslim mathematicians developed spherical trigonometry, which was used in these calculations. Around AD 830 Caliph al-Ma'mun commissioned a group of astronomers to measure the distance from Tadmur (Palmyra) to al-Raqqah, in modern Syria. They found the cities to be separated by one degree of latitude and the distance between them to be 66⅔ miles, and thus calculated the Earth's circumference to be 24,000 miles. Another estimate given was 56⅔ Arabic miles per degree, which corresponds to 111.8 km per degree and a circumference of 40,248 km, very close to the modern values of 111.3 km per degree and 40,068 km circumference, respectively. Muslim astronomers and geographers were aware of magnetic declination by the 15th century, when the Egyptian Muslim astronomer 'Abd al-'Aziz al-Wafa'i (d. 1469/1471) measured it as 7 degrees from Cairo. Of the medieval Persian Abu Rayhan Biruni (973-1048) it is said: "Important contributions to geodesy and geography were also made by Biruni. He introduced techniques to measure the earth and distances on it using triangulation. He found the radius of the earth to be 6339.6 km, a value not obtained in the West until the 16th century. His Masudic canon contains a table giving the coordinates of six hundred places, almost all of which he had direct knowledge." At the age of 17, Biruni calculated the latitude of Kath, Khwarazm, using the maximum altitude of the Sun. Biruni also solved a complex geodesic equation in order to accurately compute the Earth's circumference, obtaining a result close to the modern value. His estimate of 6,339.9 km for the Earth radius was only 16.8 km less than the modern value of 6,356.7 km. In contrast to his predecessors, who measured the Earth's circumference by sighting the Sun simultaneously from two different locations, Biruni developed a new method using trigonometric calculations based on the angle between a plain and a mountain top, which yielded more accurate measurements of the Earth's circumference and made it possible for the measurement to be carried out by a single person from a single location. Abu Rayhan Biruni's method was intended to avoid "walking across hot, dusty deserts", and the idea came to him when he was on top of a tall mountain in India (present day Pind Dadan Khan, Pakistan).
From the top of the mountain, he sighted the dip angle which, along with the mountain's height (which he calculated beforehand), he applied to the law of sines formula. This was the earliest known use of dip angle and the earliest practical use of the law of sines. He also made use of algebra to formulate trigonometric equations and used the astrolabe to measure angles. His method can be summarized as follows: He first calculated the height of the mountain by going to two points at sea level a known distance apart and measuring, at each point, the angle between the plain and the top of the mountain. He made both measurements using an astrolabe. He then used the following trigonometric formula, relating the distance (d) between the two points to the tangents of their angles (θ1 and θ2), to determine the height (h) of the mountain: h = d · tan θ1 · tan θ2 / (tan θ2 − tan θ1). He then stood at the highest point of the mountain, where he measured the dip angle using an astrolabe. He applied the values he obtained for the dip angle and the mountain's height to the following trigonometric formula in order to calculate the Earth's radius: R = h · cos θ / (1 − cos θ), where
- R = Earth radius
- h = height of mountain
- θ = dip angle
Biruni had also, by the age of 22, written a study of map projections, Cartography, which included a method for projecting a hemisphere on a plane. Around 1025, Biruni was the first to describe a polar equi-azimuthal equidistant projection of the celestial sphere. He was also regarded as the most skilled when it came to mapping cities and measuring the distances between them, which he did for many cities in the Middle East and western Indian subcontinent. He often combined astronomical readings and mathematical equations in order to develop methods of pin-pointing locations by recording degrees of latitude and longitude. He also developed similar techniques for measuring the heights of mountains, depths of valleys, and expanse of the horizon, in The Chronology of the Ancient Nations. He also discussed human geography and the planetary habitability of the Earth. He hypothesized that roughly a quarter of the Earth's surface is habitable by humans, and also argued that the shores of Asia and Europe were "separated by a vast sea, too dark and dense to navigate and too risky to try".
Revising the figures attributed to Posidonius, another Greek philosopher determined 18,000 miles as the Earth's circumference. This last figure was promulgated by Ptolemy through his world maps. The maps of Ptolemy strongly influenced the cartographers of the Middle Ages. It is probable that Christopher Columbus, using such maps, was led to believe that Asia was only 3 or 4 thousand miles west of Europe. Ptolemy's view was not universal, however, and chapter 20 of Mandeville's Travels (c. 1357) supports Eratosthenes' calculation. It was not until the 16th century that his concept of the Earth's size was revised. During that period the Flemish cartographer Mercator made successive reductions in the size of the Mediterranean Sea and all of Europe, which had the effect of increasing the size of the earth.
Early modern period
Jean Picard performed the first modern meridian arc measurement in 1669–70. He measured a base line by the aid of wooden rods, used a telescope in his angle measurements, and computed with logarithms. Jacques Cassini later continued Picard's arc northward to Dunkirk and southward to the Spanish boundary. Cassini divided the measured arc into two parts, one northward from Paris, another southward.
When he computed the length of a degree from both chains, he found that the length of one degree in the northern part of the chain was shorter than that in the southern part. This result, if correct, meant that the earth was not a sphere but an oblong (egg-shaped) ellipsoid, which contradicted the computations by Isaac Newton and Christiaan Huygens. Newton's theory of gravitation predicted the Earth to be an oblate spheroid with a flattening of 1:230. The issue could be settled by measuring, for a number of points on earth, the relationship between their distance (in the north-south direction) and the angles between their astronomical verticals (the projection of the vertical direction on the sky). On an oblate Earth the meridional distance corresponding to one degree would grow toward the poles. The French Academy of Sciences dispatched two expeditions – see French Geodesic Mission. One expedition under Pierre Louis Maupertuis (1736–37) was sent to Torne Valley (as far North as possible). The second mission under Pierre Bouguer was sent to what is modern-day Ecuador, near the equator (1735–44). The measurements conclusively showed that the earth was oblate, with a flattening of 1:210. Thus the next approximation to the true figure of the Earth after the sphere became the oblate ellipsoid of revolution.
Asia and Americas
In South America Bouguer noticed, as did George Everest in the 19th century Great Trigonometric Survey of India, that the astronomical vertical tended to be pulled in the direction of large mountain ranges, due to the gravitational attraction of these huge piles of rock. As this vertical is everywhere perpendicular to the idealized surface of mean sea level, or the geoid, this means that the figure of the Earth is even more irregular than an ellipsoid of revolution. Thus the study of the "undulation of the geoid" became the next great undertaking in the science of studying the figure of the Earth. In the late 19th century the Zentralbüro für die Internationale Erdmessung (that is, Central Bureau for International Geodesy) was established by Austria-Hungary and Germany. One of its most important goals was the derivation of an international ellipsoid and a gravity formula which should be optimal not only for Europe but also for the whole world. The Zentralbüro was an early predecessor of the International Association of Geodesy (IAG) and the International Union of Geodesy and Geophysics (IUGG), which was founded in 1919. Most of the relevant theories were derived by the German geodesist Friedrich Robert Helmert in his famous books Die mathematischen und physikalischen Theorieen der höheren Geodäsie, Einleitung und 1. Teil (1880) and 2. Teil (1884); English translation: Mathematical and Physical Theories of Higher Geodesy, Vol. 1 and Vol. 2. Helmert also derived the first global ellipsoid in 1906 with an accuracy of 100 meters (0.002 percent of the Earth's radius). The US geodesist Hayford derived a global ellipsoid in ~1910, based on intercontinental isostasy, with an accuracy of 200 m. It was adopted by the IUGG as the "international ellipsoid 1924". - Cleomedes 1.10 - Strabo 2.2.2, 2.5.24; D.Rawlins, Contributions - D.Rawlins (2007). "Investigations of the Geographical Directory 1979–2007"; DIO, volume 6, number 1, page 11, note 47, 1996. - David A. King, Astronomy in the Service of Islam, (Aldershot (U.K.): Variorum), 1993.
- Gharā'ib al-funūn wa-mulah al-`uyūn (The Book of Curiosities of the Sciences and Marvels for the Eyes), 2.1 "On the mensuration of the Earth and its division into seven climes, as related by Ptolemy and others," (ff. 22b-23a) - Edward S. Kennedy, Mathematical Geography, pp. 187–8, in (Rashed & Morelon 1996, pp. 185–201) - Barmore, Frank E. (April 1985), "Turkish Mosque Orientation and the Secular Variation of the Magnetic Declination", Journal of Near Eastern Studies (University of Chicago Press) 44 (2): 81–98 , doi:10.1086/373112 - John J. O'Connor, Edmund F. Robertson (1999). Abu Arrayhan Muhammad ibn Ahmad al-Biruni, MacTutor History of Mathematics archive. - "Khwarizm". Foundation for Science Technology and Civilisation. Retrieved 2008-01-22. - James S. Aber (2003). Alberuni calculated the Earth's circumference at a small town of Pind Dadan Khan, District Jhelum, Punjab, Pakistan.Abu Rayhan al-Biruni, Emporia State University. - Lenn Evan Goodman (1992), Avicenna, p. 31, Routledge, ISBN 0-415-01929-X. - Behnaz Savizi (2007), "Applicable Problems in History of Mathematics: Practical Examples for the Classroom", Teaching Mathematics and Its Applications (Oxford University Press) 26 (1): 45–50, doi:10.1093/teamat/hrl009 (cf. Behnaz Savizi. "Applicable Problems in History of Mathematics; Practical Examples for the Classroom". University of Exeter. Retrieved 2010-02-21.) - Beatrice Lumpkin (1997), Geometry Activities from Many Cultures, Walch Publishing, pp. 60 & 112–3, ISBN 0-8251-3285-1 - Jim Al-Khalili, The Empire of Reason 2/6 (Science and Islam - Episode 2 of 3) on YouTube, BBC - Jim Al-Khalili, The Empire of Reason 3/6 (Science and Islam - Episode 2 of 3) on YouTube, BBC - David A. King (1996), "Astronomy and Islamic society: Qibla, gnomics and timekeeping", in Roshdi Rashed, ed., Encyclopedia of the History of Arabic Science, Vol. 1, p. 128-184 . Routledge, London and New York. - An early version of this article was taken from the public domain source at http://www.ngs.noaa.gov/PUBS_LIB/Geodesy4Layman/TR80003A.HTM#ZZ4. - J.L. Greenberg: The problem of the Earth's shape from Newton to Clairaut: the rise of mathematical science in eighteenth-century Paris and the fall of "normal" science. Cambridge : Cambridge University Press, 1995 ISBN 0-521-38541-5 - M.R. Hoare: Quest for the true figure of the Earth: ideas and expeditions in four centuries of geodesy. Burlington, VT: Ashgate, 2004 ISBN 0-7546-5020-0 - D.Rawlins: "Ancient Geodesy: Achievement and Corruption" 1984 (Greenwich Meridian Centenary, published in Vistas in Astronomy, v.28, 255-268, 1985) - D.Rawlins: "Methods for Measuring the Earth's Size by Determining the Curvature of the Sea" and "Racking the Stade for Eratosthenes", appendices to "The Eratosthenes-Strabo Nile Map. Is It the Earliest Surviving Instance of Spherical Cartography? Did It Supply the 5000 Stades Arc for Eratosthenes' Experiment?", Archive for History of Exact Sciences, v.26, 211-219, 1982 - C.Taisbak: "Posidonius vindicated at all costs? Modern scholarship versus the stoic earth measurer". Centaurus v.18, 253-269, 1974
http://en.wikipedia.org/wiki/History_of_geodesy
Deforestation is the logging or burning of trees in forested areas. There are several reasons for doing so: trees or derived charcoal can be sold as a commodity and are used by humans, while cleared land is used as pasture, plantations of commodities, and human settlement. The removal of trees without sufficient reforestation has resulted in damage to habitat, biodiversity loss, and aridity. Deforested regions also often degrade into wasteland. Disregard or unawareness of intrinsic value, lack of ascribed value, and lax forest management and environmental law allow deforestation to occur on such a large scale. In many countries, deforestation is an ongoing issue which is causing extinction, changes to climatic conditions, desertification, and displacement of indigenous people. In simple terms, deforestation occurs because standing forest is often less economically valuable than the farmland that replaces it, even though woods are used by native populations of over 200 million people worldwide. The presumed value of forests as genetic resources has never been confirmed by any economic studies. As a result, owners of forested land lose money by not clearing the forest, and this affects the welfare of the whole society. From the perspective of the developing world, the benefits of forest as carbon sinks or biodiversity reserves go primarily to richer developed nations, and there is insufficient compensation for these services. As a result, some countries simply have too much forest. Developing countries feel that some countries in the developed world, such as the United States of America, cut down their forests centuries ago and benefited greatly from this deforestation, and that it is hypocritical to deny developing countries the same opportunities: that the poor shouldn't have to bear the cost of preservation when the rich created the problem. Aside from a general agreement that deforestation occurs to increase the economic value of the land, there is no agreement on what causes deforestation. Logging may be a direct source of deforestation in some areas and have no effect, or be at worst an indirect source, in others, due to logging roads enabling easier access for farmers wanting to clear the forest: experts do not agree on whether logging is an important contributor to global deforestation, and some believe that logging makes a considerable contribution to reducing deforestation because in developing countries logging reserves are far larger than nature reserves. Similarly, there is no consensus on whether poverty is important in deforestation. Some argue that poor people are more likely to clear forest because they have no alternatives, others that the poor lack the ability to pay for the materials and labour needed to clear forest. Claims that population growth drives deforestation are weak and based on flawed data, with population increase due to high fertility rates being a primary driver of tropical deforestation in only 8% of cases. The FAO states that the global deforestation rate is unrelated to the human population growth rate; rather, it is the result of a lack of technological advancement and inefficient governance. There are many causes at the root of deforestation, such as corruption and inequitable distribution of wealth and power, population growth and overpopulation, and urbanization. Globalization is often viewed as a driver of deforestation.
According to British environmentalist Norman Myers, 5% of deforestation is due to cattle ranching, 19% to over-heavy logging, 22% to the growing sector of palm oil plantations, and 54% to slash-and-burn farming. It is very difficult, if not impossible, to obtain figures for the rate of deforestation. The FAO data are based largely on reporting from forestry departments of individual countries. The World Bank estimates that 80% of logging operations are illegal in Bolivia and 42% in Colombia, while in Peru, illegal logging equals 80% of all activities. For tropical countries, deforestation estimates are very uncertain: based on satellite imagery, the rate of deforestation in the tropics is 23% lower than the most commonly quoted rates, and for the tropics as a whole deforestation rates could be in error by as much as +/- 50%. Conversely, a new analysis of satellite images reveals that deforestation in the Amazon basin is twice as fast as scientists previously estimated. The UNFAO has the best long-term datasets on deforestation available; based on these datasets, global forest cover has remained approximately stable since the middle of the twentieth century, and based on the longest dataset available, global forest cover has increased since 1954. The rate of deforestation is also declining, with less and less forest cleared each decade. Globally the rate of deforestation declined during the 1980s, with even more rapid declines in the 1990s and still more rapid declines from 2000 to 2005. Based on these trends, global anti-deforestation efforts are expected to outstrip deforestation within the next half-century, with global forest cover increasing by 10 percent (an area the size of India) by 2050. Rates of deforestation are highest in developing tropical nations, although globally the rate of tropical forest loss is also declining, with tropical deforestation rates of about 8.6 million hectares annually in the 1990s, compared to a loss of around 9.2 million hectares during the previous decade. The utility of the FAO figures has been disputed by some environmental groups. These questions are raised primarily because the figures do not distinguish between forest types. The fear is that highly diverse habitats, such as tropical rainforest, may be experiencing an increase in deforestation which is being masked by large decreases in less biodiverse dry, open forest types. Because of this omission it is possible that many of the negative impacts of deforestation, such as habitat loss, are increasing despite a decline in deforestation. Some environmentalists have predicted that unless significant measures, such as seeking out and protecting old-growth forests that haven't been disturbed, are taken on a worldwide basis to preserve them, by 2030 there will be only ten percent remaining, with another ten percent in a degraded condition; 80 percent will have been lost, and with them the irreversible loss of hundreds of thousands of species.
The decline in the rate of deforestation also does not address the damage already caused by deforestation. Global deforestation increased sharply in the mid-1800s, and about half of the mature tropical forests, between 7.5 and 8 million square kilometres (2.9 to 3 million sq mi) of the original 15 to 16 million square kilometres (5.8 to 6.2 million sq mi) that until 1947 covered the planet, have been cleared. The rate of deforestation also varies widely by region, and despite a global decline, in some regions, particularly in developing tropical nations, the rate of deforestation is increasing. For example, Nigeria lost 81% of its old-growth forests in just 15 years (1990–2005). All of Africa is suffering deforestation at twice the world rate. The effects of deforestation are most pronounced in tropical rainforests. Brazil has lost 90-95% of its Mata Atlântica forest. In Central America, two-thirds of lowland tropical forests have been turned into pasture since 1950. Half of the Brazilian state of Rondonia's 243,000 km² has been affected by deforestation in recent years, and tropical countries, including Mexico, India, the Philippines, Indonesia, Thailand, Myanmar, Malaysia, Bangladesh, China, Sri Lanka, Laos, Nigeria, Congo, Liberia, Guinea, Ghana and Côte d'Ivoire, have lost large areas of their rainforest. Because the rates vary so much across regions, the global decline in deforestation rates does not necessarily indicate that the negative effects of deforestation are also declining. Deforestation trends could follow the Kuznets curve; however, even if true, this is problematic in so-called hot-spots because of the risk of irreversible loss of non-economic forest values, for example valuable habitat or species loss. Deforestation is a contributor to global warming, and is often cited as one of the major causes of the enhanced greenhouse effect. Tropical deforestation is responsible for approximately 20% of world greenhouse gas emissions. According to the Intergovernmental Panel on Climate Change, deforestation, mainly in tropical areas, accounts for up to one-third of total anthropogenic carbon dioxide emissions. Trees and other plants remove carbon (in the form of carbon dioxide) from the atmosphere during the process of photosynthesis and release it back into the atmosphere during normal respiration. Only when actively growing can a tree or forest remove carbon over an annual or longer timeframe. Both the decay and burning of wood release much of this stored carbon back to the atmosphere. In order for forests to take up carbon, the wood must be harvested and turned into long-lived products and trees must be re-planted. Deforestation may also cause carbon stores held in soil to be released. Forests are stores of carbon and can be either sinks or sources depending upon environmental circumstances. Mature forests alternate between being net sinks and net sources of carbon dioxide (see carbon dioxide sink and carbon cycle). Reducing emissions from tropical deforestation and forest degradation (REDD) in developing countries has emerged as a new potential complement to ongoing climate policies. The idea consists in providing financial compensation for the reduction of greenhouse gas (GHG) emissions from deforestation and forest degradation.
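To make the REDD idea concrete, here is a toy calculation of avoided emissions and their market value. The forest area, carbon density, and carbon price below are illustrative assumptions of my own, not figures taken from this article.

```python
# Toy REDD-style accounting: avoided CO2 emissions and their market value.
# All inputs below are illustrative assumptions, not data from the article.

CO2_PER_C = 44.0 / 12.0  # tonnes of CO2 per tonne of carbon (molecular-weight ratio)

def avoided_emissions_t_co2(area_ha: float, carbon_density_t_c_per_ha: float) -> float:
    """CO2 that would be released if the given forest area were cleared and burned or left to decay."""
    return area_ha * carbon_density_t_c_per_ha * CO2_PER_C

if __name__ == "__main__":
    area = 10_000.0         # hectares of avoided deforestation (assumed)
    density = 150.0         # tonnes of carbon per hectare in biomass (assumed)
    price_eur_per_t = 20.0  # assumed price per tonne of CO2
    t_co2 = avoided_emissions_t_co2(area, density)
    print(f"Avoided emissions: {t_co2:,.0f} t CO2")
    print(f"Value at {price_eur_per_t} EUR/t: {t_co2 * price_eur_per_t:,.0f} EUR")
```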
The world's rain forests are widely believed by laymen to contribute a significant amount of the world's oxygen, although it is now accepted by scientists that rainforests contribute little net oxygen to the atmosphere and deforestation will have no effect whatsoever on atmospheric oxygen levels. However, the incineration and burning of forest plants in order to clear land releases tonnes of CO2, which contributes to global warming. The water cycle is also affected by deforestation. Trees extract groundwater through their roots and release it into the atmosphere. When part of a forest is removed, the trees no longer evaporate away this water, resulting in a much drier climate. Deforestation reduces the content of water in the soil and groundwater as well as atmospheric moisture. Deforestation reduces soil cohesion, so that erosion, flooding and landslides ensue. Forests enhance the recharge of aquifers in some locales; however, forests are a major source of aquifer depletion in most locales. Shrinking forest cover lessens the landscape's capacity to intercept, retain and transpire precipitation. Instead of trapping precipitation, which then percolates to groundwater systems, deforested areas become sources of surface water runoff, which moves much faster than subsurface flows. That quicker transport of surface water can translate into flash flooding and more localized floods than would occur with forest cover. Deforestation also contributes to decreased evapotranspiration, which lessens atmospheric moisture, which in some cases affects precipitation levels downwind from the deforested area, as water is not recycled to downwind forests but is lost in runoff and returns directly to the oceans. According to one preliminary study, in deforested north and northwest China, the average annual precipitation decreased by one third between the 1950s and the 1980s. Trees, and plants in general, affect the water cycle significantly; as a result, the presence or absence of trees can change the quantity of water on the surface, in the soil or groundwater, or in the atmosphere. This in turn changes erosion rates and the availability of water for either ecosystem functions or human services. The forest may have little impact on flooding in the case of large rainfall events, which overwhelm the storage capacity of forest soil if the soils are at or close to saturation. Tropical rainforests produce about 30% of our planet's fresh water. Undisturbed forest has very low rates of soil loss, approximately 2 metric tons per square kilometre (6 short tons per square mile). Deforestation generally increases rates of soil erosion, by increasing the amount of runoff and reducing the protection of the soil from tree litter. This can be an advantage in excessively leached tropical rain forest soils. Forestry operations themselves also increase erosion through the development of roads and the use of mechanized equipment. China's Loess Plateau was cleared of forest millennia ago. Since then it has been eroding, creating dramatic incised valleys, and providing the sediment that gives the Yellow River its yellow color and that causes the flooding of the river in the lower reaches (hence the river's nickname 'China's sorrow'). Removal of trees does not always increase erosion rates. In certain regions of the southwest US, shrubs and trees have been encroaching on grassland. The trees themselves enhance the loss of grass between tree canopies. The bare intercanopy areas become highly erodible.
The US Forest Service, in Bandelier National Monument for example, is studying how to restore the former ecosystem, and reduce erosion, by removing the trees. Tree roots bind soil together, and if the soil is sufficiently shallow they act to keep the soil in place by also binding with underlying bedrock. Tree removal on steep slopes with shallow soil thus increases the risk of landslides, which can threaten people living nearby. However, most deforestation only affects the trunks of trees, allowing the roots to stay rooted and negating the landslide risk. Deforestation results in declines in biodiversity. The removal or destruction of areas of forest cover has resulted in a degraded environment with reduced biodiversity. Forests support biodiversity, providing habitat for wildlife; moreover, forests foster medicinal conservation. With forest biotopes being an irreplaceable source of new drugs (such as taxol), deforestation can destroy genetic variations (such as crop resistance) irretrievably. Since the tropical rainforests are the most diverse ecosystems on earth, and about 80% of the world's known biodiversity can be found in tropical rainforests, the removal or destruction of significant areas of forest cover has resulted in a degraded environment with reduced biodiversity. Scientific understanding of the process of extinction is insufficient to make accurate predictions about the impact of deforestation on biodiversity. Most predictions of forestry-related biodiversity loss are based on species-area models, with an underlying assumption that as forest area declines, species diversity will decline similarly (a minimal numerical sketch of such a model appears below). However, many such models have been proven to be wrong, and loss of habitat does not necessarily lead to large-scale loss of species. Species-area models are known to overpredict the number of species known to be threatened in areas where actual deforestation is ongoing, and greatly overpredict the number of threatened species that are widespread. It has been estimated that we are losing 137 plant, animal and insect species every single day due to rainforest deforestation, which equates to 50,000 species a year. Others state that tropical rainforest deforestation is contributing to the ongoing Holocene mass extinction. The known extinction rates from deforestation are very low, approximately 1 species per year from mammals and birds, which extrapolates to approximately 23,000 species per year for all species. Predictions have been made that more than 40% of the animal and plant species in Southeast Asia could be wiped out in the 21st century, with such predictions called into question by 1995 data showing that, within regions of Southeast Asia, much of the original forest has been converted to monospecific plantations, but potentially endangered species are very low in number and tree flora remains widespread and stable. Damage to forests and other aspects of nature could halve living standards for the world's poor and reduce global GDP by about 7% by 2050, a major report concluded at the Convention on Biological Diversity (CBD) meeting in Bonn.
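The species-area models mentioned above typically take the power-law form S = c·A^z. The sketch below shows the kind of calculation involved; the exponent z and the habitat-loss fractions are illustrative assumptions rather than figures from this article, and, as the text notes, such calculations tend to overpredict observed extinctions.

```python
# Species-area relationship: S = c * A**z.
# The predicted fraction of species remaining after habitat loss does not depend on c:
#   S_after / S_before = (A_after / A_before) ** z

def fraction_species_remaining(habitat_fraction_remaining: float, z: float = 0.25) -> float:
    """Fraction of species predicted to persist when only `habitat_fraction_remaining`
    of the original area is left; z ~ 0.15-0.35 is a commonly assumed range."""
    return habitat_fraction_remaining ** z

if __name__ == "__main__":
    for loss in (0.1, 0.5, 0.9):  # 10%, 50%, 90% of habitat cleared (illustrative)
        remaining = fraction_species_remaining(1.0 - loss)
        print(f"{loss:.0%} habitat lost -> ~{(1 - remaining):.0%} of species predicted lost")
```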
Historically, the utilization of forest products, including timber and fuel wood, has played a key role in human societies, comparable to the roles of water and cultivable land. Today, developed countries continue to utilize timber for building houses and wood pulp for paper. In developing countries, almost three billion people rely on wood for heating and cooking. The forest products industry is a large part of the economy in both developed and developing countries. Short-term economic gains made by conversion of forest to agriculture, or over-exploitation of wood products, typically lead to loss of long-term income and long-term biological productivity (hence reduction in nature's services). West Africa, Madagascar, Southeast Asia and many other regions have experienced lower revenue because of declining timber harvests. Illegal logging causes billions of dollars of losses to national economies annually. New procedures for obtaining wood are causing harm to the economy that outweighs the money earned by people employed in logging. According to a study, "in most areas studied, the various ventures that prompted deforestation rarely generated more than US$5 for every ton of carbon they released and frequently returned far less than US$1." The price on the European market for an offset tied to a one-ton reduction in carbon is 23 euro (about $35). See also: Timeline of environmental events. Deforestation has been practiced by humans for tens of thousands of years, since before the beginnings of civilization. Fire was the first tool that allowed humans to modify the landscape. The first evidence of deforestation appears in the Mesolithic period. It was probably used to convert closed forests into more open ecosystems favourable to game animals. With the advent of agriculture, fire became the prime tool to clear land for crops. In Europe there is little solid evidence before 7000 BC. Mesolithic foragers used fire to create openings for red deer and wild boar. In Great Britain, shade-tolerant species such as oak and ash are replaced in the pollen record by hazels, brambles, grasses and nettles. Removal of the forests led to decreased transpiration, resulting in the formation of upland peat bogs. A widespread decrease in elm pollen across Europe between 8400-8300 BC and 7200-7000 BC, starting in southern Europe and gradually moving north to Great Britain, may represent land clearing by fire at the onset of Neolithic agriculture. The Neolithic period saw extensive deforestation for farming land. Stone axes were being made from about 3000 BC not just from flint, but from a wide variety of hard rocks from across Britain and North America as well. They include the noted Langdale axe industry in the English Lake District, quarries developed at Penmaenmawr in North Wales, and numerous other locations. Rough-outs were made locally near the quarries, and some were polished locally to give a fine finish. This step not only increased the mechanical strength of the axe, but also made penetration of wood easier. Flint was still used from sources such as Grimes Graves, and from many other mines across Europe. Throughout most of history, humans were hunter-gatherers who hunted within forests. In most areas, such as the Amazon, the tropics, Central America, and the Caribbean, only after shortages of wood and other forest products occur are policies implemented to ensure forest resources are used in a sustainable manner. In ancient Greece, Tjeerd van Andel and co-writers summarized three regional studies of historic erosion and alluviation and found that, wherever adequate evidence exists, a major phase of erosion followed the introduction of farming by about 500-1,000 years in the various regions of Greece, ranging from the later Neolithic to the Early Bronze Age.
The thousand years following the mid-first millennium BCE saw serious, intermittent pulses of soil erosion in numerous places. The historic silting of ports along the southern coasts of Asia Minor (e.g. Clarus, and the examples of Ephesus, Priene and Miletus, where harbors had to be abandoned because of the silt deposited by the Meander) and in coastal Syria occurred during the last centuries BC.

Easter Island has suffered from heavy soil erosion in recent centuries, aggravated by agriculture and deforestation. Jared Diamond gives an extensive look into the collapse of the ancient Easter Islanders in his book Collapse. The disappearance of the island's trees seems to coincide with a decline of its civilization around the 17th and 18th century.

The famous silting up of the harbor of Bruges, which moved port commerce to Antwerp, also followed a period of increased settlement growth (and apparently of deforestation) in the upper river basins. In early medieval Riez in upper Provence, alluvial silt from two small rivers raised the riverbeds and widened the floodplain, which slowly buried the Roman settlement in alluvium and gradually moved new construction to higher ground; concurrently the headwater valleys above Riez were being opened to pasturage.

A typical progress trap is that cities were often built in a forested area that provided wood for some industry (e.g. construction, shipbuilding, pottery). When deforestation occurs without proper replanting, local wood supplies become difficult to obtain near enough to remain competitive, leading to the city's abandonment, as happened repeatedly in ancient Asia Minor. The combination of mining and metallurgy often went along this self-destructive path.

Meanwhile, with most of the population remaining active in (or indirectly dependent on) the agricultural sector, the main pressure in most areas remained land clearing for crop and cattle farming. Fortunately, enough wild green was usually left standing (and partially used, for example to collect firewood, timber and fruits, or to graze pigs) for wildlife to remain viable, and the hunting privileges of the elite (nobility and higher clergy) often protected significant woodlands. Major parts in the spread (and thus more durable growth) of the population were played by monastic 'pioneering' (especially by the Benedictine and Cistercian orders) and by some feudal lords actively attracting farmers to settle (and become tax payers) by offering relatively good legal and fiscal conditions. Even when they did so to launch or encourage cities, there was always an agricultural belt around them and even quite some farming within the walls. When, on the other hand, demography took a real blow from such causes as the Black Death or devastating warfare (e.g. Genghis Khan's Mongol hordes in eastern and central Europe, or the Thirty Years' War in Germany), this could lead to settlements being abandoned, leaving land to be reclaimed by nature, even though the secondary forests usually lacked the original biodiversity.

From 1100 to 1500 AD significant deforestation took place in Western Europe as a result of the expanding human population.
The large-scale building of wooden sailing ships by European (coastal) naval powers from the 15th century onwards, for exploration, colonisation, the slave trade and other trade on the high seas, for (often related) naval warfare (the failed invasion of England by the Spanish Armada in 1588 and the battle of Lepanto in 1571 are early cases of huge waste of prime timber; each of Nelson's Royal Navy warships at Trafalgar had required 6,000 mature oaks), and for piracy meant that whole woody regions were over-harvested. This happened in Spain, where it contributed to the paradoxical weakening of the domestic economy after Columbus' discovery of America made colonial activities (plundering, mining, cattle, plantations, trade and so on) predominant.

In Changes in the Land (1983), William Cronon collected 17th-century New England settlers' reports of increased seasonal flooding during the time that the forests were initially cleared, and it was widely believed that the flooding was linked with widespread forest clearing upstream.

The massive use of charcoal on an industrial scale in Early Modern Europe was a new acceleration of the onslaught on western forests; even in Stuart England, the relatively primitive production of charcoal had already reached an impressive level. For ship timbers, Stuart England was so widely deforested that it depended on the Baltic trade and looked to the untapped forests of New England to supply the need. In France, Colbert planted oak forests to supply the French navy in the future; as it turned out, when the oak plantations matured in the mid-nineteenth century, wooden masts were no longer required. Specific parallels are seen in twentieth-century deforestation occurring in many developing nations.

The difficulties of estimating deforestation rates are nowhere more apparent than in the widely varying estimates of rates of rainforest deforestation. At one extreme, Alan Grainger of Leeds University argues that there is no credible evidence of any long-term decline in rainforest area, while at the other some environmental groups argue that one fifth of the world's tropical rainforest was destroyed between 1960 and 1990, that rainforests 50 years ago covered 14% of the world's land surface and have been reduced to 6%, and that all tropical forests will be gone by the year 2090. While the FAO states that the annual rate of tropical closed-forest loss is declining (FAO data are based largely on reporting from the forestry departments of individual countries), from 8 million hectares in the 1980s to 7 million in the 1990s, some environmentalists state that rainforests are being destroyed at an ever-quickening pace. The London-based Rainforest Foundation notes that "the UN figure is based on a definition of forest as being an area with as little as 10% actual tree cover, which would therefore include areas that are actually savannah-like ecosystems and badly damaged forests."

These divergent viewpoints are the result of the uncertainties in the extent of tropical deforestation. For tropical countries, deforestation estimates are very uncertain and could be in error by as much as +/- 50%; based on satellite imagery, one estimate puts the rate of deforestation in the tropics 23% lower than the most commonly quoted rates. Conversely, a newer analysis of satellite images suggests that deforestation of the Amazon rainforest is twice as fast as scientists previously estimated. The extent of deforestation that occurred in West Africa during the twentieth century may also have been hugely exaggerated.
Despite these uncertainties, there is agreement that destruction of rainforests remains a significant environmental problem. Up to 90% of West Africa's coastal rainforests have disappeared since 1900. In South Asia, about 88% of the rainforests have been lost. Much of what remains of the world's rainforests is in the Amazon basin, where the Amazon Rainforest covers approximately 4 million square kilometres. The regions with the highest tropical deforestation rate between 2000 and 2005 were Central America, which lost 1.3% of its forests each year, and tropical Asia. In Central America, 40% of all the rainforests have been lost in the last 40 years. Madagascar has lost 90% of its eastern rainforests. As of 2007, less than 1% of Haiti's forests remain. Several countries, notably Brazil, have declared their deforestation a national emergency.

From about the mid-1800s, around 1852, the planet has experienced an unprecedented rate of destruction of forests worldwide. More than half of the mature tropical forests that covered the planet some thousands of years ago have been cleared. A January 30, 2009 New York Times article stated, "By one estimate, for every acre of rain forest cut down each year, more than 50 acres of new forest are growing in the tropics..." The new forest includes secondary forest on former farmland and so-called degraded forest.

Africa is suffering deforestation at twice the world rate, according to the U.N. Environment Programme (UNEP). Some sources claim that deforestation has already wiped out roughly 90% of West Africa's original forests. Deforestation is accelerating in Central Africa. According to the FAO, Africa lost the highest percentage of tropical forests of any continent. According to figures from the FAO (1997), only 22.8% of West Africa's moist forests remain, much of this degraded. Massive deforestation threatens food security in some African countries. Africa experiences one of the highest rates of deforestation because 90% of its population depends on wood as the main source of fuel for heating and cooking. Research carried out by WWF International in 2002 shows that in Africa, rates of illegal logging vary from 50% in Cameroon and Equatorial Guinea to 70% in Gabon and 80% in Liberia, where revenues from the timber industry also fuelled the civil war.

See main article: Deforestation in Ethiopia.

The main cause of deforestation in Ethiopia, located in East Africa, is a growing population and the subsequent higher demand for agriculture, livestock production and fuel wood. Other reasons include low education and inactivity from the government, although the current government has taken some steps to tackle deforestation. Organizations such as Farm Africa are working with the federal and local governments to create a system of forest management. Ethiopia, the third largest country in Africa by population, has been hit by famine many times because of shortages of rain and a depletion of natural resources. Deforestation has lowered the chance of getting rain, which is already low, and thus causes erosion. Bercele Bayisa, an Ethiopian farmer, offers one example of why deforestation occurs. He said that his district was forested and full of wildlife, but overpopulation caused people to come to that land and clear it to plant crops, cutting all the trees to sell as firewood. Ethiopia has lost 98% of its forested regions in the last 50 years.
At the beginning of the 20th century, around 420,000 km², or 35% of Ethiopia's land, was covered with forests. Recent reports indicate that forests now cover less than 14.2%, or even as little as 11.9%. Between 1990 and 2005, the country lost 14% of its forests, or 21,000 km².

Deforestation, with the resulting desertification, water resource degradation and soil loss, has affected approximately 94% of Madagascar's previously biologically productive lands. Since the arrival of humans 2000 years ago, Madagascar has lost more than 90% of its original forest. Most of this loss has occurred since independence from the French, and is the result of local people using slash-and-burn agricultural practices as they try to subsist. Largely due to deforestation, the country is currently unable to provide adequate food, fresh water and sanitation for its fast-growing population.

See main article: Deforestation in Nigeria.

According to the FAO, Nigeria has the world's highest deforestation rate of primary forests. It has lost more than half of its primary forest in the last five years. The causes cited are logging, subsistence agriculture, and the collection of fuel wood. Almost 90% of West Africa's rainforest has been destroyed.

Iceland has undergone extensive deforestation since Vikings settled in the ninth century. As a result, vast areas of vegetation and land have degraded, and soil erosion and desertification have occurred. As much as half of the original vegetative cover has been destroyed, caused in part by overexploitation, logging and overgrazing under harsh natural conditions. About 95% of the forests and woodlands that once covered at least 25% of the area of Iceland may have been lost. Afforestation and revegetation have restored small areas of land.

Victoria and New South Wales' remnant red gum forests, including the Murray River's Barmah-Millewa, are increasingly being clear-felled using mechanical harvesters, destroying already rare habitat. Macnally estimates that approximately 82% of fallen timber has been removed from the southern Murray-Darling basin, and the Mid-Murray Forest Management Area (including the Barmah and Gunbower forests) provides about 90% of Victoria's red gum timber. One of the factors causing the loss of forest is expanding urban areas. Littoral rainforest growing along coastal areas of eastern Australia is now rare due to ribbon development built to accommodate the demand for 'seachange' lifestyles.

See main article: Deforestation in Brazil.

There is no agreement on what drives deforestation in Brazil, though a broad consensus exists that expansion of croplands and pastures is important. Increases in commodity prices may increase the rate of deforestation. The recent development of a new variety of soybean has led to the displacement of beef ranches and farms of other crops, which, in turn, move farther into the forest. Certain areas such as the Atlantic Rainforest have been diminished to just 7% of their original size. Although much conservation work has been done, few national parks or reserves are efficiently enforced. Some 80% of logging in the Amazon is illegal. In 2008, Brazil's government announced a record rate of deforestation in the Amazon: deforestation jumped by 69% in 2008 compared with the preceding twelve months, according to official government data. Deforestation could wipe out or severely damage nearly 60% of the Amazon rainforest by 2030, says a report from WWF.
One case of deforestation in Canada is occurring in Ontario's boreal forests, near Thunder Bay, where 28.9% of a 19,000 km² forest area has been lost in the last 5 years, threatening woodland caribou. This is happening mostly to supply pulp for the facial tissue industry. In Canada, less than 8% of the boreal forest is protected from development and more than 50% has been allocated to logging companies for cutting.

Forest loss is acute in Southeast Asia, the second of the world's great biodiversity hot spots. According to a 2005 report by the FAO, Vietnam has the second highest rate of deforestation of primary forests in the world, second only to Nigeria. More than 90% of the old-growth rainforests of the Philippine archipelago have been cut.

Russia has the largest area of forests of any nation on Earth. There is little recent research into the rates of deforestation, but in 1992 around 2 million hectares of forest were lost and in 1994 around 3 million hectares were lost. The present scale of deforestation in Russia is most easily seen using Google Earth; areas nearer to China are most affected, as China is the main market for the timber. Deforestation in Russia is particularly damaging because the forests have a short growing season due to extremely cold winters and therefore take longer to recover.

At present rates, tropical rainforests in Indonesia would be logged out in 10 years, and those of Papua New Guinea in 13 to 16 years. There are significantly large areas of forest in Indonesia that are being lost as native forest is cleared by large multinational pulp companies and replaced by plantations. In Sumatra, tens of thousands of square kilometres of forest have been cleared, often under the direction of the central government in Jakarta, which works with multinational companies to remove the forest because of the need to pay off international debt obligations and to develop economically. In Kalimantan, between 1991 and 1999, large areas of the forest were burned because of uncontrollable fires, causing atmospheric pollution across South-East Asia. Every year, forests are burned by farmers (slash-and-burn techniques are used by between 200 and 500 million people worldwide) and plantation owners. A major source of deforestation is the logging industry, driven spectacularly by China and Japan. Agricultural development programs in Indonesia (the transmigration program) moved large populations into the rainforest zone, further increasing deforestation rates. A joint UK-Indonesian study of the timber industry in Indonesia in 1998 suggested that about 40% of throughput was illegal, with a value in excess of $365 million. More recent estimates, comparing legal harvesting against known domestic consumption plus exports, suggest that 88% of logging in the country is illegal in some way. Malaysia is the key transit country for illegal wood products from Indonesia.

Prior to the arrival of European-Americans, about one half of the United States land area was forest: about 4 million square kilometers (1 billion acres) in 1600. For the next 300 years land was cleared, mostly for agriculture, at a rate that matched the rate of population growth. For every person added to the population, one to two hectares of land was cultivated. This trend continued until the 1920s, when the amount of crop land stabilized in spite of continued population growth. As abandoned farm land reverted to forest, the amount of forest land increased from 1952, reaching a peak in 1963 of 3,080,000 km² (762 million acres).
Since 1963 there has been a steady decrease in forest area, with the exception of some gains from 1997. Gains in forest land have resulted from conversions from crop land and pastures at a higher rate than the loss of forest to development. Because urban development is expected to continue, an estimated 93,000 km² (23 million acres) of forest land is projected to be lost by 2050, a 3% reduction from 1997. Other qualitative issues have been identified, such as the continued loss of old-growth forest, the increased fragmentation of forest lands, and the increased urbanization of forest land. According to a report by Stuart L. Pimm, the extent of forest cover in the Eastern United States reached its lowest point in roughly 1872, at about 48 percent of the cover present in 1620. Of the 28 forest bird species with habitat exclusively in that forest, Pimm claims 4 became extinct either wholly or mostly because of habitat loss: the passenger pigeon, Carolina parakeet, ivory-billed woodpecker, and Bachman's warbler.

A key factor in controlling deforestation could come from the Kyoto Protocol. Avoided deforestation, also known as Reduced Emissions from Deforestation and Degradation (REDD), could be implemented in a future Kyoto Protocol and allow the protection of a great amount of forest. At the moment, REDD is not yet implemented in any of the flexible mechanisms such as CDM, JI or ET.

New methods are being developed to farm more intensively, such as high-yield hybrid crops, greenhouse cultivation, autonomous building gardens, and hydroponics. These methods are often dependent on chemical inputs to maintain necessary yields. In cyclic agriculture, cattle are grazed on farm land that is resting and rejuvenating; cyclic agriculture actually increases the fertility of the soil. Intensive farming can also decrease soil nutrients by consuming the trace minerals needed for crop growth at an accelerated rate.

Deforestation presents multiple societal and environmental problems. The immediate and long-term consequences of global deforestation are almost certain to jeopardize life on Earth as we know it. Some of these consequences include loss of biodiversity, the destruction of forest-based societies, and climatic disruption. For example, extensive loss of the Amazon Rainforest could cause enormous amounts of carbon dioxide to be released back into the atmosphere.

Efforts to stop or slow deforestation have been attempted for many centuries because it has long been known that deforestation can cause environmental damage sufficient, in some cases, to cause societies to collapse. In Tonga, paramount rulers developed policies designed to prevent conflicts between short-term gains from converting forest to farmland and the long-term problems forest loss would cause, while during the seventeenth and eighteenth centuries in Tokugawa Japan the shoguns developed a highly sophisticated system of long-term planning to stop and even reverse the deforestation of the preceding centuries, by substituting other products for timber and by using land that had been farmed for many centuries more efficiently. In sixteenth-century Germany, landowners also developed silviculture to deal with the problem of deforestation. However, these policies tend to be limited to environments with good rainfall, no dry season and very young soils (through volcanism or glaciation).
This is because on older and less fertile soils trees grow too slowly for silviculture to be economic, whilst in areas with a strong dry season there is always a risk of forest fires destroying a tree crop before it matures.

In the areas where "slash-and-burn" is practiced, switching to "slash-and-char" would prevent rapid deforestation and the subsequent degradation of soils. The biochar thus created, given back to the soil, is not only a durable carbon sequestration method, but also an extremely beneficial amendment to the soil. Mixed with biomass, it brings about the creation of terra preta, one of the richest soils on the planet and the only one known to regenerate itself.

In many parts of the world, especially in East Asian countries, reforestation and afforestation are increasing the area of forested land. The amount of woodland has increased in 22 of the world's 50 most forested nations. Asia as a whole gained 1 million hectares of forest between 2000 and 2005. Tropical forest in El Salvador expanded more than 20 percent between 1992 and 2001. Based on these trends, global forest cover is expected to increase by 10 percent (an area the size of India) by 2050.

In the People's Republic of China, where large-scale destruction of forests has occurred, the government has in the past required that every able-bodied citizen between the ages of 11 and 60 plant three to five trees per year or do the equivalent amount of work in other forest services. The government claims that at least 1 billion trees have been planted in China every year since 1982. This is no longer required today, but March 12 of every year in China is the Planting Holiday. China has also introduced the Green Wall of China project, which aims to halt the expansion of the Gobi Desert through the planting of trees. However, due to the large percentage of trees dying off after planting (up to 75%), the project is not very successful, and regular carbon offsetting through the Flexible Mechanisms might have been a better option. There has been a 47-million-hectare increase in forest area in China since the 1970s. The total number of trees planted amounted to about 35 billion, and forest coverage increased by 4.55% of China's land mass. Forest coverage was 12% two decades ago and is now 16.55%.

In western countries, increasing consumer demand for wood products that have been produced and harvested in a sustainable manner is causing forest landowners and forest industries to become increasingly accountable for their forest management and timber harvesting practices.

The Arbor Day Foundation's Rain Forest Rescue program is a charity that helps to prevent deforestation. The charity uses donated money to buy up and preserve rainforest land before lumber companies can buy it. The Arbor Day Foundation then protects the land from deforestation. This also preserves the way of life of the indigenous tribes living on the forest land. Organizations such as Community Forestry International, The Nature Conservancy, World Wide Fund for Nature, Conservation International, African Conservation Foundation and Greenpeace also focus on preserving forest habitats. Greenpeace in particular has mapped out the forests that are still intact and published this information on the internet. HowStuffWorks, in turn, made a simpler thematic map showing the amount of forest present just before the age of man (8000 years ago) and the current (reduced) levels of forest.
These maps indicate the amount of afforestation required to repair the damage caused by humans. To meet the world's demand for wood, forestry writers Botkin and Sedjo have suggested that high-yielding forest plantations are suitable. It has been calculated that plantations yielding 10 cubic meters per hectare annually could supply all the timber required for international trade on 5 percent of the world's existing forestland. By contrast, natural forests produce about 1-2 cubic meters per hectare; therefore, 5 to 10 times more forest land would be required to meet demand (a back-of-envelope check of this comparison is sketched at the end of this section). Forester Chad Oliver has suggested a forest mosaic with high-yield forest lands interspersed with conservation land.

According to an international team of scientists led by Pekka Kauppi, professor of environmental science and policy at Helsinki University, the deforestation already done could still be reversed by tree planting (e.g. CDM and JI afforestation/reforestation projects) within 30 years. The conclusion was reached through analysis of data acquired from the FAO. Reforestation through tree planting (through, for example, the noted CDM and JI A/R projects) might take advantage of the changing precipitation due to climate change. This may be done by studying where precipitation is projected to increase (see the Globalis thematic map of 2050 precipitation) and setting up reforestation projects in those locations. Areas such as Niger, Sierra Leone and Liberia are especially important candidates, in large part because they also suffer from an expanding desert (the Sahara) and decreasing biodiversity (while being an important biodiversity hotspot).

While the preponderance of deforestation is due to demand for agricultural and urban use by the human population, there are some examples of military causes. One example of deliberate deforestation is that which took place in the U.S. zone of occupation in Germany after World War II. Before the onset of the Cold War, defeated Germany was still considered a potential future threat rather than a potential future ally. To address this threat, attempts were made to lower German industrial potential, of which forests were deemed an element. Sources in the U.S. government admitted that the purpose of this was the "ultimate destruction of the war potential of German forests." As a consequence of the practice of clear-felling, deforestation resulted which could "be replaced only by long forestry development over perhaps a century." War can also be a cause of deforestation, either deliberately, such as through the use of Agent Orange during the Vietnam War where, together with bombs and bulldozers, it contributed to the destruction of 44 percent of the forest cover, or inadvertently, such as in the 1945 Battle of Okinawa, where bombardment and other combat operations reduced the lush tropical landscape to "a vast field of mud, lead, decay and maggots".
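As a quick check of the plantation-yield comparison attributed to Botkin and Sedjo above, the following R sketch reproduces the 5-to-10-fold figure from the yields quoted in the text; the extrapolation to a share of existing forestland is my own simple rearrangement of the quoted numbers, not a figure from the original authors.

# Back-of-envelope check of the yield comparison quoted above.
plantation_yield <- 10       # m^3 per hectare per year (figure from the text)
natural_yield    <- c(1, 2)  # m^3 per hectare per year (range from the text)

# Land needed by natural forest relative to plantations for the same volume:
plantation_yield / natural_yield          # 10 and 5, i.e. 5 to 10 times more land

# If plantations could meet trade demand on 5% of existing forestland (as claimed),
# natural forests would need roughly:
0.05 * plantation_yield / natural_yield   # 0.50 and 0.25, i.e. 25% to 50% of existing forestland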
Over the years, NASA has collected a great deal of Earth science data from dozens of orbiting satellites. With time, these data collections have scattered among many archives that vary significantly in sophistication and access. NASA risked losing valuable, irreplaceable data as people retired, storage media decayed, formats changed and collections dispersed. Scientists began to spend more time searching for data than performing research.

Today, NASA's Office of Mission to Planet Earth, which leads the agency's Earth science research, continues to collect data. This office operates 11 active satellites and instruments, which together produce 450 gigabytes (GB) of data each day. Landsat alone, one of NASA's most popular sources of remote sensing data, produces 200 GB of raw data per day. In 1997, NASA will launch the first of many Earth Observing System (EOS) satellites and instruments, which will double the daily production of raw data. EOS will produce 15 years of global, comprehensive environmental remote sensing data (a rough calculation of the resulting archive size appears at the end of this article). To handle the size and variety of data now available and to promote cross-discipline research, NASA created EOSDIS, which drastically reduces the time spent searching for relevant data, allowing scientists to focus their research efforts on changes in the Earth's environment. EOSDIS allows scientists to search many data centers and disciplines quickly and easily, quickening the pace of research. The faster the research, the more quickly scientists can identify causes of detrimental environmental effects, opening the way for policy- and lawmakers to act at international, national and local levels.

The well-known hole in the ozone layer above the Antarctic illustrates the process from research to policy to law. Researchers first discovered the ozone hole when lofting a weather balloon from an Antarctic research station. But NASA's NIMBUS 7 satellite had the necessary instruments, so why hadn't it detected the hole? Scientists quickly discovered that the calibration algorithm routinely dropped low ozone values as "noise." When they retrieved 12 years of original NIMBUS 7 data, scientists verified the existence of the hole and showed that it had grown over the previous decade. Data from additional instruments revealed that chlorofluorocarbons (CFCs), such as Freon, destroyed the ozone layer and created the hole. Armed with this knowledge, the United States signed several international treaties restricting the production of CFCs. Congress passed regulations on the production, distribution and recovery of CFCs in the United States. As a direct result, worldwide production of CFCs has plummeted. Today, consumers cannot openly buy Freon. Given time, the CFCs already in the atmosphere will disperse and the ozone layer will heal itself.

Another example of the benefits of multiple-discipline Earth science research lies in the work of the EOSDIS Pathfinder projects, which recycle old data from past and current satellites into new products for scientific research. One project used old Landsat data to assess deforestation in the Amazon basin, indicating that the true rate of deforestation closely matches that cited by the Brazilian government, thus ending a long-standing international debate. Now that scientists have settled the extent of deforestation, policy- and lawmakers can act to fix it. In yet another result of the EOSDIS philosophy, ocean dynamicists recently discovered a huge, low-amplitude wave that propagates back and forth across the Pacific Ocean.
Only a few inches high but a thousand miles long, the wave bounces back and forth between South America and Asia. The same scientists also found that sea level has risen slightly over the last few years, while other researchers detected a slight decline in total ice coverage. Are these three phenomena related? If so, why? Only collaborative research among atmospheric physics, ocean dynamics, meteorology and climatology can answer these questions.

The same principles apply to regional and local, as well as national and international, policy and law. Through EOSDIS, state and local governments can obtain accurate data and information about water tables, flood plains, ground cover and air quality. For example, the state of Ohio has begun using NASA remote sensing data to monitor reclamation of strip mining sites, a task for which the state does not have enough personnel to perform on-site inspections.

EOSDIS does a lot more than just store and distribute Earth science data. It also provides the operational ground infrastructure for all satellites and instruments within the Mission to Planet Earth (MTPE) office at NASA. It contains Earth science data from EOS satellites, other MTPE satellites, joint programs with international partners and other agencies, field studies and past satellites. It receives and processes the raw data from the satellites. After initial processing, EOSDIS delivers the data to the Distributed Active Archive Centers (DAACs) for further processing, storage and distribution. EOSDIS also includes mission operations and satellite control.

The Science Data Processing Segment handles all data production, archiving and distribution through the Information Management Service, the Planning and Data Processing System, and the Data Archival and Distribution Service. The Information Management Service performs data search, access and retrieval for EOSDIS. The Planning and Data Processing System processes the raw data into the standard products offered by EOSDIS. The Data Archival and Distribution Service permanently stores all data received or produced by EOSDIS.

The Flight Operations Segment, consisting of the EOS Operations Center, the Instrument Support Terminals and the Spacecraft Simulator, supports the EOS satellites and instruments. The Operations Center commands and controls the operation of EOS satellites. The Instrument Support Terminals consist of a few generic workstations dedicated to the command and control of specific instruments; generally, each instrument will have its own Instrument Support Terminal. The Spacecraft Simulator analyzes general satellite information stripped off the main data stream, searching for trends and problems.

The Communications and Systems Management Segment, consisting of the Systems Management Center and the NASA Internal Network, manages schedules and operations among the DAACs and other elements of EOSDIS. The Systems Management Center manages network loading, data transfer and overall processing to optimize EOSDIS performance. The Internal Network connects all of the permanent archives, transferring data among all of the DAACs and Science Computing Facilities via a dedicated fiber network utilizing asynchronous transfer mode (ATM). The NASA Science Internet (or Internet for short) links the general user to EOSDIS. The Internet also links EOSDIS to data centers outside NASA.
The EOSDIS Data and Operations System (EDOS), consisting of the Data Interface Facility, the Data Production Facility and the Sustaining Engineering Facility, handles all telemetry to and from the satellites and performs the initial data processing. The Data Interface Facility is the primary communication and data link between the ground and the satellites; it separates the main data stream into scientific and system information. The scientific information goes to the Data Production Facility, while the system information goes to the EOS Operations Center and the Spacecraft Simulator. The Data Production Facility separates the scientific data by instrument, calibrates it and attaches any ancillary data (orbit information, for example). All data are then transferred to the DAACs for permanent storage. The Sustaining Engineering Facility maintains equipment, identifies hardware trends and plans for future upgrades.

The DAACs process the data from each instrument on each satellite into approximately 250 products. Among the many satellite projects from which products are developed are the Tropical Rainfall Measuring Mission, the Ocean Topography Experiment and the Total Ozone Mapping Spectrometer. Through EOSDIS, data products can also come from field campaigns, such as the Boreal Ecosystem-Atmosphere Study; from satellites operated by other agencies, such as NOAA's Geostationary Operational Environmental Satellite; and from past NASA missions and programs. Users can locate data products by discipline, DAAC, Earth location, instrument, satellite or time. EOSDIS allows any data format, but uses the Hierarchical Data Format (HDF), developed by the National Center for Supercomputing Applications, as the standard.

NASA released Version 0 of EOSDIS to the general public in 1994. Version 0 connects all the DAACs with some elements of the Science Data Processing Segment, primarily the Information Management Service. Version 0 consolidates 12 distinct data systems and allows users to locate and order data products at eight DAACs (SEDAC will come on line later this year). Through Version 0, users can also link to NOAA's Satellite Active Archive. Version 1, due for release in February 1996, will include all functional elements of EOSDIS, but not at full capacity. Version 2, due for release in November 1997, will bring EOSDIS up to full capacity. Minor upgrades between versions will fix small problems, improve specific services and add new products.

Anyone can access EOSDIS over the Internet with telnet or via modem. One can access Version 0 from a computer that runs UNIX, X Windows or a VT100 terminal. Users can search the EOSDIS archives in a variety of ways: by scientific discipline, satellite or product name. One can limit the search to specific regions on the Earth or specific dates. To help in selection, EOSDIS allows users to preview low-resolution browse images before ordering a data product. Data set descriptions also help users choose applicable products. A help desk at each DAAC takes data orders and troubleshoots problems.

Kevin Schaefer is with NASA Headquarters in Washington, DC.
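To put the data volumes quoted at the start of this article in perspective, here is a rough, illustrative R calculation based only on the figures given in the text (450 GB per day today, doubling once EOS launches, sustained over the planned 15-year EOS record). It is a back-of-envelope sketch, not an official NASA estimate, and it ignores processing overhead and product duplication.

# Rough scale of the EOSDIS archive, using only the figures quoted above.
current_daily_gb <- 450                    # GB/day from existing satellites and instruments
eos_daily_gb     <- 2 * current_daily_gb   # EOS is said to double daily raw-data production

years    <- 15                             # planned length of the EOS data record
total_gb <- eos_daily_gb * 365 * years     # cumulative raw data over the EOS era

total_gb / 1e6                             # about 4.9 million GB, i.e. roughly 5 petabytes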
Feathered dinosaurs is a term used to describe dinosaurs, particularly maniraptoran dromaeosaurs, that were covered in plumage, ranging from filament-like integumentary structures with few branches to fully developed pennaceous feathers complete with shafts and vanes. The idea of feathered dinosaurs gained currency after it was discovered that dinosaurs are closely related to birds. Since then, the term "feathered dinosaurs" has widened to encompass the entire concept of the dinosaur-bird relationship, including the various avian characteristics some dinosaurs possess, such as a pygostyle, a posteriorly oriented pelvis, elongated forelimbs with clawed hands, and clavicles fused to form a furcula.

A substantial amount of evidence demonstrates that birds are the descendants of theropod dinosaurs, and that birds evolved during the Jurassic from small, feathered maniraptoran theropods closely related to dromaeosaurids and troodontids (known collectively as deinonychosaurs). Fewer than two dozen species of dinosaurs have been discovered with direct fossil evidence of plumage since the 1990s, most coming from Cretaceous deposits in China, most notably Liaoning Province. Together, these fossils represent an important transition between dinosaurs and birds, which allows paleontologists to piece together the origin and evolution of birds. Although direct fossil evidence of integumentary structures among non-avian dinosaurs is limited, and is particularly well documented only in maniraptoriformes, fossils do suggest that a large number of theropods were feathered, and it has even been suggested, based on phylogenetic analyses, that Tyrannosaurus at one stage of its life may have been covered in down-like feathers, although there is no direct fossil evidence of this. Based on what is known of the dinosaur fossil record, paleontologists generally think that most of dinosaur evolution happened at relatively large body size (a mass greater than a few kilograms), and in animals that were entirely terrestrial. Small size (less than 1 kg) and arboreal habits seem to have arisen fairly late in dinosaurian evolution, and only within Maniraptora.

Birds were originally linked with other dinosaurs in the late 1800s, most famously by Thomas Huxley. This view remained fairly popular until the 1920s, when Gerhard Heilmann's book The Origin of Birds was published in English. Heilmann argued that birds could not have descended from dinosaurs (predominantly because dinosaurs lacked clavicles, or so he thought), and he therefore favored the idea that birds originated from the so-called 'pseudosuchians': primitive archosaurs that were also thought ancestral to dinosaurs and crocodilians. This became the mainstream view until the 1970s, when a new look at the anatomical evidence (combined with new data from maniraptoran theropods) led John Ostrom to successfully resurrect the dinosaur hypothesis. Fossils of Archaeopteryx include well-preserved feathers, but it was not until the early 1990s that clearly non-avian dinosaur fossils were discovered with preserved feathers. Today there are more than twenty genera of dinosaurs with fossil feathers, nearly all of which are theropods. Most are from the Yixian Formation in China. The fossil feathers of one specimen, Shuvuuia deserti, have even tested positive for beta-keratin, the main protein in bird feathers, in immunological tests.
Shortly after the 1859 publication of Charles Darwin's The Origin of Species, the ground-breaking book which described his theory of evolution by natural selection, the British biologist and defender of evolution Thomas Henry Huxley proposed that birds were descendants of dinosaurs. He compared the skeletal structure of Compsognathus, a small theropod dinosaur, and the 'first bird' Archaeopteryx lithographica (both of which were found in the Upper Jurassic Bavarian limestone of Solnhofen). He showed that, apart from its hands and feathers, Archaeopteryx was quite similar to Compsognathus. In 1868 he published On the Animals which are most nearly intermediate between Birds and Reptiles, making the case. The leading dinosaur expert of the time, Richard Owen, disagreed, claiming Archaeopteryx as the first bird outside the dinosaur lineage. For the next century, claims that birds were dinosaur descendants faded, while more popular bird-ancestry hypotheses, including that of a possible 'crocodylomorph' or 'thecodont' ancestor, gained ground.

Since the discovery of such theropods as Microraptor and Epidendrosaurus, paleontologists and scientists in general now have small forms exhibiting some features suggestive of a tree-climbing (or scansorial) way of life. However, the idea that dinosaurs might have climbed trees goes back a long way, and well pre-dates the dinosaur renaissance of the 1960s and 70s. The idea of scansoriality in non-avian dinosaurs has been considered a 'fringe' idea, and it is partly for this reason that, prior to 2000, nobody had attempted any sort of review of the ideas that had been published on the subject. The oldest reference to scansoriality in a dinosaur comes from William Fox, the Isle of Wight curator and amateur fossil collector, who in 1866 proposed that Calamospondylus oweni from the Lower Cretaceous Wessex Formation of the Isle of Wight might have been in the habit of 'leaping from tree to tree'. The Calamospondylus oweni specimen that Fox referred to was lost, and the actual nature of the fossil remains speculative, but there are various reasons for thinking that it was a theropod. However, it is not entirely accurate to regard Fox's ideas about Calamospondylus as directly relevant to modern speculation about tree-climbing dinosaurs, given that, if Fox imagined Calamospondylus oweni as resembling anything familiar, it was probably as a lizard-like reptile and not as a dinosaur as they are currently understood.

During the early decades of the 20th century the idea of tree-climbing dinosaurs became reasonably popular, as Othenio Abel, Gerhard Heilmann and others used comparisons with birds, tree kangaroos and monkeys to argue that the small ornithopod Hypsilophodon (also from the Wessex Formation of the Isle of Wight) was scansorial. Heilmann later came to disagree with this idea and regarded Hypsilophodon as terrestrial. William Swinton favored the idea of a scansorial Hypsilophodon, concluding that 'it would be able to run up the stouter branches and with hands and tail keep itself balanced until the need for arboreal excursions had passed', and in a 1936 review of Isle of Wight dinosaurs he mentioned the idea that small theropods might also have used their clawed hands to hold branches when climbing.
During the 1970s, Peter Galton was able to show that all of the claims made about the forelimb and hindlimb anatomy of Hypsilophodon supposedly favoring a scansorial lifestyle were erroneous, and that this animal was in fact well suited to an entirely terrestrial, cursorial lifestyle. Nevertheless, for several decades Hypsilophodon was consistently depicted as a tree-climber. In recent decades, Gregory Paul has been influential in arguing that small theropods were capable climbers; he not only argued for and illustrated scansorial abilities in coelurosaurs, but also proposed that as-yet-undiscovered maniraptorans were highly proficient climbers and included the ancestors of birds. The hypothesized existence of small arboreal theropods that are as yet unknown from the fossil record later proved integral to George Olshevsky's 'Birds Came First' (BCF) hypothesis. Olshevsky argued that all dinosaurs, and in fact all archosaurs, descend from small, scansorial ancestors, and that it is these little climbing reptiles which are the direct ancestors of birds.

Ostrom, Deinonychus and the Dinosaur Renaissance

In 1964, the first specimen of Deinonychus antirrhopus was discovered in Montana, and in 1969 John Ostrom of Yale University described Deinonychus as a theropod whose skeletal resemblance to birds seemed unmistakable. Since that time, Ostrom has been a leading proponent of the theory that birds are direct descendants of dinosaurs. During the late 1960s, Ostrom and others demonstrated that maniraptoran dinosaurs could fold their arms in a manner similar to that of birds. Further comparisons of bird and dinosaur skeletons, as well as cladistic analysis, strengthened the case for the link, particularly for a branch of theropods called maniraptors. Skeletal similarities include the neck, the pubis, the wrists (semi-lunate carpal), the 'arms' and pectoral girdle, the shoulder blade, the clavicle and the breast bone. In all, over a hundred distinct anatomical features are shared by birds and theropod dinosaurs.

Other researchers drew on these shared features and other aspects of dinosaur biology and began to suggest that at least some theropod dinosaurs were feathered. The first restoration of a feathered dinosaur was Sarah Landry's depiction of a feathered "Syntarsus" (now renamed Megapnosaurus or considered a synonym of Coelophysis) in Robert T. Bakker's 1975 publication Dinosaur Renaissance. Gregory S. Paul was probably the first paleoartist to depict maniraptoran dinosaurs with feathers and protofeathers, starting in the late 1980s. By the 1990s, most paleontologists considered birds to be surviving dinosaurs and referred to 'non-avian dinosaurs' (all extinct) to distinguish them from birds (Aves). Before the discovery of feathered dinosaurs, the evidence was limited to Huxley and Ostrom's comparative anatomy. Some mainstream ornithologists, including Smithsonian Institution curator Storrs L. Olson, disputed the links, specifically citing the lack of fossil evidence for feathered dinosaurs.

Modern research and feathered dinosaurs in China

The early 1990s saw the discovery of spectacularly preserved bird fossils in several Early Cretaceous geological formations in the northeastern Chinese province of Liaoning. South American paleontologists, including Fernando Novas and others, discovered evidence showing that maniraptorans could move their arms in a bird-like manner.
Gatesy and others suggested that anatomical changes to the vertebral column and hindlimbs occurred before birds first evolved, and Xu Xing and colleagues demonstrated that true functional wings and flight feathers evolved in some maniraptorans, all strongly suggesting that these anatomical features were already well developed before the first birds evolved.

In 1996, Chinese paleontologists described Sinosauropteryx as a new genus of bird from the Yixian Formation, but this animal was quickly recognized as a theropod dinosaur closely related to Compsognathus. Surprisingly, its body was covered by long filamentous structures. These were dubbed 'protofeathers' and considered to be homologous with the more advanced feathers of birds, although some scientists disagree with this assessment. Chinese and North American scientists described Caudipteryx and Protarchaeopteryx soon after. Based on skeletal features, these animals were non-avian dinosaurs, but their remains bore fully formed feathers closely resembling those of birds. "Archaeoraptor," described without peer review in a 1999 issue of National Geographic, turned out to be a smuggled forgery, but legitimate remains continue to pour out of the Yixian, both legally and illegally. Many newly described feathered dinosaurs preserve horny claw sheaths, integumentary structures (from filaments to fully pennaceous feathers), and internal organs. Feathers or "protofeathers" have been found on a wide variety of theropods in the Yixian, and the discoveries of extremely bird-like dinosaurs, as well as dinosaur-like primitive birds, have almost entirely closed the morphological gap between theropods and birds.

Archaeopteryx, the first good example of a "feathered dinosaur", was discovered in 1861. The initial specimen was found in the Solnhofen limestone in southern Germany, a lagerstätte: a rare and remarkable geological formation known for its superbly detailed fossils. Archaeopteryx is a transitional fossil, with features clearly intermediate between those of modern reptiles and birds. Coming just two years after Darwin's seminal Origin of Species, its discovery spurred the nascent debate between proponents of evolutionary biology and creationism. This early bird is so dinosaur-like that, without a clear impression of feathers in the surrounding rock, at least one specimen was mistaken for Compsognathus.

Since the 1990s, a number of additional feathered dinosaurs have been found, providing even stronger evidence of the close relationship between dinosaurs and modern birds. Most of these specimens were unearthed in Liaoning province, northeastern China, which was part of an island continent during the Cretaceous period. Though feathers have been found only in the lagerstätte of the Yixian Formation and a few other places, it is possible that non-avian dinosaurs elsewhere in the world were also feathered. The lack of widespread fossil evidence for feathered non-avian dinosaurs may be because delicate features like skin and feathers are not often preserved by fossilization and thus are absent from the fossil record.

A recent development in the debate centers on the discovery of impressions of "protofeathers" surrounding many dinosaur fossils. These protofeathers suggest that the tyrannosauroids may have been feathered. However, others claim that these protofeathers are simply the result of the decomposition of collagenous fibers that underlay the dinosaurs' integument.
The Dromaeosauridae family, in particular, seems to have been heavily feathered, and at least one dromaeosaurid, Cryptovolans, may have been capable of flight. Because feathers are often associated with birds, feathered dinosaurs are often touted as the missing link between birds and dinosaurs. However, the multiple skeletal features also shared by the two groups represent the more important link for paleontologists. Furthermore, it is increasingly clear that the relationship between birds and dinosaurs, and the evolution of flight, are more complex topics than previously realized. For example, while it was once believed that birds evolved from dinosaurs in one linear progression, some scientists, most notably Gregory S. Paul, conclude that dinosaurs such as the dromaeosaurs may have evolved from birds, losing the power of flight while keeping their feathers in a manner similar to the modern ostrich and other ratites.

Comparisons of bird and dinosaur skeletons, as well as cladistic analysis, strengthen the case for the link, particularly for a branch of theropods called maniraptors. Skeletal similarities include the neck, pubis, wrist (semi-lunate carpal), arm and pectoral girdle, shoulder blade, clavicle, and breast bone. At one time, it was believed that dinosaurs lacked furculae. The furcula, long believed to be a structure unique to birds, is formed by the fusion of the two collarbones (clavicles) into a single V-shaped structure that helps brace the skeleton against the stresses incurred while flapping. This apparent absence was treated as an overwhelming argument against the dinosaur ancestry of birds in Danish artist and naturalist Gerhard Heilmann's monumentally influential The Origin of Birds (1926). Reptiles ancestral to birds, Heilmann reasoned, should at the very least show well-developed clavicles, yet no clavicles had been reported in any theropod dinosaur. Noting this, Heilmann suggested that birds evolved from a more generalized archosaurian ancestor, such as the aptly named Ornithosuchus (literally, "bird-crocodile"), which is now believed to be closer to the crocodile end of the archosaur lineage. At the time, however, Ornithosuchus seemed to be a likely ancestor of more birdlike creatures.

Contrary to what Heilmann believed, paleontologists since the 1980s have accepted that clavicles, and in most cases furculae, are a standard feature not just of theropods but of saurischian dinosaurs. Furculae in dinosaurs are not limited to maniraptorans, as evidenced by an article by Chure & Madsen describing a furcula in an allosaurid dinosaur, a non-avian theropod. In 1983, Rinchen Barsbold reported the first dinosaurian furcula, from a specimen of the Cretaceous theropod Oviraptor. A furcula-bearing Oviraptor specimen had previously been known since the 1920s, but because at that time the theropod origin of birds was largely dismissed, it was misidentified for sixty years. Following this discovery, paleontologists began to find furculae in other theropod dinosaurs. Wishbones are now known from the dromaeosaur Velociraptor, the allosauroid Allosaurus, and the tyrannosaurid Tyrannosaurus rex, to name a few. Up to late 2007, ossified furculae (i.e. made of bone rather than cartilage) had been found in nearly all types of theropods except the most basal ones, Eoraptor and Herrerasaurus. The original report of a furcula in the primitive theropod Segisaurus (1936) was confirmed by a re-examination in 2005.
Joined, furcula-like clavicles have also been found in Massospondylus, an Early Jurassic sauropodomorph, indicating that the evolution of the furcula was well underway when the earliest dinosaurs were diversifying. In 2000, Alex Downs reported an isolated furcula found within a block of Coelophysis bauri skeletons from the Late Triassic Rock Point Formation at Ghost Ranch, New Mexico. While it seemed likely that it originally belonged to Coelophysis, the block contained fossils from other Triassic animals as well, and Downs declined to make a positive identification. Currently, a total of five C. bauri furculae have been found in the New Mexico Museum of Natural History's (NMMNH) Whitaker Quarry block C-8-82 from Ghost Ranch, New Mexico. Three of the furculae are articulated in juvenile skeletons; two of these are missing fragments but are nearly complete, and one is apparently complete. Two years later, Tykoski et al. described several furculae from two species of the coelophysoid genus Syntarsus (now Megapnosaurus), S. rhodesiensis and S. kayentakatae, from the Early Jurassic of Zimbabwe and Arizona, respectively. Syntarsus was long considered to be the genus most closely related to Coelophysis, differing only in a few anatomical details and a slightly younger age, so the identification of furculae in Syntarsus made it very likely that the furcula Downs noted in 2000 came from Coelophysis after all. By 2006, wishbones were definitively known from the Early Jurassic Coelophysis rhodesiensis and Coelophysis kayentakatae, and a single isolated furcula was known that might have come from the Late Triassic type species, Coelophysis bauri.

Avian air sacs

Large meat-eating dinosaurs had a complex system of air sacs similar to those found in modern birds, according to an investigation led by Patrick O'Connor of Ohio University. The lungs of theropod dinosaurs (carnivores that walked on two legs and had birdlike feet) likely pumped air into hollow sacs in their skeletons, as is the case in birds. "What was once formally considered unique to birds was present in some form in the ancestors of birds", O'Connor said. In a paper published in the online journal Public Library of Science ONE (September 29, 2008), scientists described Aerosteon riocoloradensis, the skeleton of which supplies the strongest evidence to date of a dinosaur with a bird-like breathing system. CT scanning revealed evidence of air sacs within the body cavity of the Aerosteon skeleton.

Heart and sleeping posture

Modern computed tomography (CT) scans of a dinosaur chest cavity conducted in 2000 found the apparent remnants of a complex four-chambered heart, much like those found in today's mammals and birds. The idea is controversial within the scientific community, coming under fire for bad anatomical science or simply wishful thinking. The type fossil of the troodontid Mei is complete and exceptionally well preserved in three-dimensional detail, with the snout nestled beneath one of the forelimbs, similar to the roosting position of modern birds. This suggests that these dinosaurs slept like certain modern birds, with their heads tucked under their arms. This behavior, which may have helped to keep the head warm, is also characteristic of modern birds. A discovery of features in a Tyrannosaurus rex skeleton recently provided more evidence that dinosaurs and birds evolved from a common ancestor and, for the first time, allowed paleontologists to establish the sex of a dinosaur.
When laying eggs, female birds grow a special type of bone in their limbs between the hard outer bone and the marrow. This medullary bone, which is rich in calcium, is used to make eggshells. The presence of endosteally derived bone tissues lining the interior marrow cavities of portions of the Tyrannosaurus rex specimen's hind limb suggested that T. rex used similar reproductive strategies, and revealed the specimen to be female. Further research has found medullary bone in the theropod Allosaurus and the ornithopod Tenontosaurus. Because the line of dinosaurs that includes Allosaurus and Tyrannosaurus diverged from the line that led to Tenontosaurus very early in the evolution of dinosaurs, this suggests that dinosaurs in general produced medullary tissue. Medullary bone has been found in specimens of sub-adult size, which suggests that dinosaurs reached sexual maturity rather quickly for such large animals. The microstructure of eggshells and bones has also been determined to be similar to that of birds.

Brooding and care of young

Several specimens of the Mongolian oviraptorid Citipati were discovered in a chicken-like brooding position resting over the eggs in their nests, beginning in 1993, which may mean they were covered with an insulating layer of feathers that kept the eggs warm. All of the nesting specimens are situated on top of egg clutches, with their limbs spread symmetrically on each side of the nest, front limbs covering the nest perimeter. This brooding posture is found today only in birds and supports a behavioral link between birds and theropod dinosaurs. The nesting position of Citipati also supports the hypothesis that it and other oviraptorids had feathered forelimbs. With the 'arms' spread along the periphery of the nest, a majority of eggs would not be covered by the animal's body unless an extensive coat of feathers was present.

A dinosaur embryo was found without teeth, which suggests some parental care was required to feed the young dinosaur; possibly the adult dinosaur regurgitated food into the young dinosaur's mouth (see altricial). This behavior is seen in numerous bird species, where parent birds regurgitate food into the hatchling's mouth. The loss of teeth and the formation of a beak have been shown to have been favorably selected to suit the newly aerodynamic bodies of early flying birds. In the Jehol Biota in China, various dinosaur fossils have been discovered that have a variety of different tooth morphologies with respect to this evolutionary trend. Sinosauropteryx fossils display unserrated premaxillary teeth, while the maxillary teeth are serrated. In the preserved remains of Protarchaeopteryx, four premaxillary teeth are present that are serrated. The diminutive oviraptorosaur Caudipteryx has four hook-like premaxillary teeth, and in Microraptor zhaoianus, the posterior teeth of this species had developed a constriction that led to a less compressed tooth crown. These dinosaurs exhibit a heterodont dentition pattern that clearly illustrates a transition from the teeth of maniraptorans to those of early, basal birds.

Molecular evidence and soft tissue

One of the best examples of soft tissue impressions in a fossil dinosaur was discovered in Petraroia, Italy. The discovery was reported in 1998, and described a specimen of a small, very young coelurosaur, Scipionyx samniticus. The fossil includes portions of the intestines, colon, liver, muscles, and windpipe of this immature dinosaur. In the March 2005 issue of Science, Dr.
Mary Higby Schweitzer and her team announced the discovery of flexible material resembling actual soft tissue inside a 68-million-year-old Tyrannosaurus rex leg bone from the Hell Creek Formation in Montana. After recovery, the tissue was rehydrated by the science team. The seven collagen types obtained from the bone fragments, compared to collagen data from living birds (specifically, a chicken), reveal that theropods and birds are closely related. When the fossilized bone was treated over several weeks to remove mineral content from the marrow cavity (a process called demineralization), Schweitzer found evidence of intact structures such as blood vessels, bone matrix, and connective tissue (bone fibers). Scrutiny under the microscope further revealed that the putative dinosaur soft tissue had retained fine structures (microstructures) even at the cellular level. The exact nature and composition of this material, and the implications of Dr. Schweitzer's discovery, are not yet clear; study and interpretation of the specimens is ongoing.

The successful extraction of ancient DNA from dinosaur fossils has been reported on two separate occasions, but upon further inspection and peer review, neither of these reports could be confirmed. However, a functional visual peptide of a theoretical dinosaur has been inferred using analytical phylogenetic reconstruction methods on gene sequences of related modern species such as reptiles and birds. In addition, several proteins have putatively been detected in dinosaur fossils, including hemoglobin.

Feathers are extremely complex integumentary structures that characterize a handful of vertebrate animals. Although it is generally acknowledged that feathers are derived and evolved from simpler integumentary structures, the origin and early diversification of feathers were poorly understood until recently, and research is ongoing. Since the theropod ancestry of birds is widely supported by osteological and other physical lines of evidence, precursors of feathers are indeed present in dinosaurs, as predicted by those who originally proposed a theropod origin for birds. In 2006, Chinese paleontologist Xu Xing stated in a paper that since many members of Coelurosauria exhibit miniaturization, primitive integumentary structures (and later on feathers) evolved in order to insulate their small bodies.

The functional view on the evolution of feathers has traditionally focussed on insulation, flight and display. Discoveries of non-flying Late Cretaceous feathered dinosaurs in China, however, suggest that flight could not have been the original primary function. Feathers in dinosaurs indicate that their original function was not flight but something else. Theories include insulation, arising after they had metabolically diverged from their cold-blooded reptilian ancestors, and increased running speed. It has been suggested that vaned feathers evolved in the context of thrust, with running, non-avian theropods flapping their arms to increase their running speed.

The following is the generally acknowledged version of the origin and early evolution of feathers:
- The first feathers evolved; they were single filaments.
- Branching structures developed.
- The rachis evolved.
- Pennaceous feathers evolved.
- Aerodynamic morphologies (curved shafts and asymmetrical vanes) appeared.
This scenario appears to indicate that downy, contour, and flight feathers are more derived forms of the first "feather".
However, it is also possible that protofeathers and basal feathers disappeared early on in the evolution of feathers and that the more primitive feathers seen in modern birds are secondarily derived. This would imply that the feathers of modern birds have nothing to do with protofeathers. A recent study performed by Prum and Brush (2002) suggested that the feathers of birds are not homologous with the scales of reptiles. A new model of feather evolution posits that the evolution of feathers began with a feather follicle emerging from the skin's surface, a structure that has no relation to reptilian scales. After this initial event, additions and new morphological characteristics were added to the feather design and more complex feathers evolved. This model of feather evolution, while agreeing with the distribution of various feather morphologies in coelurosaurs, is also at odds with other evidence. The feather bristles of modern-day turkeys resemble the hair-like integumentary structures found in some maniraptorans, pterosaurs (see Pterosauria#Pycnofibers), and ornithischians; these structures are widely regarded as homologous to modern feathers, yet they also show distinct, feather-like characteristics of their own. This has led some paleontologists, such as Xu Xing, to theorize that feathers share homology with lizard scales after all.

Xu Xing proposed a five-stage model of feather evolution:
- Stage I: Tubular filaments and feather-type beta keratin evolved.[Note 3]
- Stage II: The filamentous structure evolved distal branches.[Note 4]
- Stage III: Xu Xing described this stage as being the most important. The feather follicle, the main part of the modern feather, appeared, and rachises and planar forms developed.[Note 5]
- Stage IV: Large, stiff, well-developed pennaceous feathers evolved on the limbs and tails of maniraptoran dinosaurs. Barbules evolved.[Note 6]
- Stage V: Feather tracts (pennaceous feathers that are located on regions other than the limbs and tail) evolved. Specialized pennaceous feathers developed.
Xu Xing himself stated that this new model was similar to the one put forward by Richard Prum, with the exception that Xu's model posits that feathers "feature a combination of transformation and innovation". This view differs from Prum's model in that Prum suggested that feathers were purely an evolutionary novelty. Xu's new model also suggests that the tubular filaments and branches evolved before the appearance of the feather follicle, while also acknowledging that the follicle was an important development in feather evolution, again in contrast to Prum's model.

Primitive feather types

The evolution of feather structures is thought to have proceeded from simple hollow filaments through several stages of increasing complexity, ending with the large, deeply rooted feathers with strong shafts (rachises), barbs and barbules that birds display today. It is logical that the simplest structures were probably most useful as insulation, and that this implies homeothermy. Only the more complex feather structures would be likely candidates for aerodynamic uses. Models of feather evolution often propose that the earliest prototype feathers were hair-like integumentary filaments similar to the structures of Sinosauropteryx, a compsognathid (Jurassic/Cretaceous, 150-120 Ma), and Dilong, a basal tyrannosauroid from the Early Cretaceous. It is not known with certainty at what point in archosaur phylogeny the earliest simple "protofeathers" arose, or if they arose once or, independently, multiple times.
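Questions of this kind (one origin or several) are usually framed as ancestral-state problems on a phylogeny. The sketch below is only a toy illustration of that logic, not taken from any study cited here: it runs a standard Fitch-parsimony pass over a small, simplified archosaur tree with made-up presence/absence scores and reports the minimum number of gains or losses the tip data require. The taxon names, topology and 0/1 scores are placeholders chosen purely to make the idea concrete.

```python
# Toy illustration of how "arose once or multiple times" questions are framed:
# a Fitch-parsimony pass over a small, hypothetical archosaur tree.
# The taxon names, topology, and 0/1 scores are placeholders, not data
# from any study cited in this article.

def fitch(node, states):
    """Return (state set, minimum changes) for the subtree rooted at `node`."""
    if isinstance(node, str):              # leaf: its observed state
        return {states[node]}, 0
    left, right = node
    left_set, left_changes = fitch(left, states)
    right_set, right_changes = fitch(right, states)
    if left_set & right_set:               # states agree: no extra change needed
        return left_set & right_set, left_changes + right_changes
    return left_set | right_set, left_changes + right_changes + 1

# 1 = simple filaments/feathers reported, 0 = not reported (placeholder scores).
tip_states = {
    "crocodylian": 0,
    "pterosaur": 1,
    "ornithischian": 1,
    "tyrannosauroid": 1,
    "bird": 1,
}

# Rough topology: crocodylians outside Ornithodira, pterosaurs outside Dinosauria.
tree = ("crocodylian", ("pterosaur", ("ornithischian", ("tyrannosauroid", "bird"))))

root_states, min_changes = fitch(tree, tip_states)
print("possible root states:", root_states, "| minimum changes required:", min_changes)
```

With these placeholder scores, a single origin of simple filaments near the base of Ornithodira is the most parsimonious reading; real analyses use many more taxa and must also weigh whether the filaments of ornithischians and pterosaurs are truly homologous with theropod protofeathers.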
Filamentous structures are clearly present in pterosaurs, and long, hollow quills have been reported in a specimen of Psittacosaurus from Liaoning. It is thus possible that the genes for building simple integumentary structures from beta keratin arose before the origin of dinosaurs, possibly in the last common ancestor of dinosaurs and pterosaurs – a basal ornithodiran. In Prum's model of feather evolution, hollow quill-like integumentary structures of this sort were termed Stage 1 feathers. The idea that feathers started out as hollow quills also supports Alan Brush's idea that feathers are evolutionary novelties, and not derived from scales. However, in order to determine the homology of Stage 1 feathers, it is necessary to determine their proteinaceous content: unlike the epidermal appendages of all other vertebrates, feathers are almost entirely composed of beta-keratins (as opposed to alpha-keratins) and, more specifically, they are formed from a group of beta-keratins called phi-keratins. No studies have yet been performed on the Stage 1 structures of Sinosauropteryx or Dilong to test their proteinaceous composition. However, tiny filamentous structures discovered adjacent to the bones of the alvarezsaurid Shuvuuia have been tested and found to be composed of beta-keratin. The phylogenetic position of alvarezsaurids has been controversial, but they are generally agreed to be basal members of the Maniraptora clade. Because of this discovery, paleontologists are now convinced that beta-keratin-based protofeathers had evolved at least by the base of this clade.

Vaned, pennaceous feathers

While basal coelurosaurs possessed these apparently hollow quill-like 'Stage 1' filaments, they lacked the more complex structures seen in maniraptorans. Maniraptorans possessed vaned feathers with barbs, barbules and hooklets just like those of modern birds. The first dinosaur fossils from the Yixian Formation found to have true flight-structured feathers (pennaceous feathers) were Protarchaeopteryx and Caudipteryx (135-121 Ma). Due to the size and proportions of these animals, it is more likely that their feathers were used for display rather than for flight. Subsequent dinosaurs found with pennaceous feathers include Pedopenna and Jinfengopteryx. Several specimens of Microraptor, described by Xu et al. in 2003, show not only pennaceous feathers but also true asymmetrical flight feathers, present on the fore and hind limbs and tail. Asymmetrical feathers are considered important for flight in birds. Before the discovery of Microraptor gui, Archaeopteryx was the most primitive known animal with asymmetrical flight feathers.

However, the bodies of maniraptorans were not covered in vaned feathers as are those of the majority of living birds: instead, it seems that they were at least partly covered in the simpler structures that they had inherited from basal coelurosaurs like Sinosauropteryx. This condition may have been retained all the way up into basal birds: despite all those life restorations clothing archaeopterygids in vaned breast, belly, throat and neck feathers, it seems that their bodies also were at least partly covered in the simpler filamentous structures. The Berlin Archaeopteryx specimen appears to preserve such structures on the back of the neck, though pennaceous vaned feathers were present on its back, at least.
Though it has been suggested at times that vaned feathers simply must have evolved for flight, the phylogenetic distribution of these structures currently indicates that they first evolved in flightless maniraptorans and were only later exapted by long-armed maniraptorans for use in locomotion. Of course, a well-known minority opinion, best known from the writings of Gregory Paul, is that feathered maniraptorans are secondarily flightless and descend from volant bird-like ancestors. While this hypothesis remains possible, it lacks support from the fossil record, though that may or may not mean much, as the fossil record is incomplete and prone to selection bias.

The discovery of Epidexipteryx provided the earliest known example of ornamental feathers in the fossil record. Epidexipteryx is known from a well-preserved partial skeleton that includes four long feathers on the tail, composed of a central rachis and vanes. However, unlike in modern-style rectrices (tail feathers), the vanes were not branched into individual filaments but made up of a single ribbon-like sheet. Epidexipteryx also preserved a covering of simpler body feathers, composed of parallel barbs as in more primitive feathered dinosaurs. However, the body feathers of Epidexipteryx are unique in that some appear to arise from a "membranous structure." The skull of Epidexipteryx is also unique in a number of features, and bears an overall similarity to the skull of Sapeornis, oviraptorosaurs and, to a lesser extent, therizinosauroids. The tail of Epidexipteryx bore unusual vertebrae towards the tip which resembled the feather-anchoring pygostyle of modern birds and some oviraptorosaurs. Despite its close relationship to avialan birds, Epidexipteryx appears to have lacked remiges (wing feathers), and it likely could not fly. Zhang et al. suggest that unless Epidexipteryx evolved from flying ancestors and subsequently lost its wings, this may indicate that advanced display feathers on the tail predated flying or gliding flight.

According to the model of feather evolution developed by Prum & Brush, feathers started out ('stage 1') as hollow cylinders, then ('stage 2') became unbranched barbs attached to a calamus. By stage 3, feathers were planar structures with the barbs diverging from a central rachis, and from there pennaceous feathers evolved. The feathers of Epidexipteryx may represent stage 2 structures, but they also suggest that a more complicated sequence of steps took place in the evolution of feathers.

Use in predation

Several maniraptoran lineages were clearly predatory and, given the morphology of their manual claws, fingers and wrists, presumably in the habit of grabbing at prey with their hands. Contrary to popular belief, feathers on the hands would not have greatly impeded the use of the hands in predation. Because the feathers are attached at an angle roughly perpendicular to the claws, they are oriented tangentially to the prey's body, regardless of prey size. It is important to note here that theropod hands appear to have been oriented such that the palms faced medially (facing inwards), and were not parallel to the ground as was once imagined. However, feathering would have interfered with the ability of the hands to bring a grasped object up toward the mouth, given that extension of the maniraptoran wrist would have caused the hand to rotate slightly upwards on its palmar side.
If both feathered hands are rotated upwards and inwards at the same time, the remiges from one hand would collide with those of the other. For this reason, maniraptorans with feathered hands could grasp objects, but would probably not be able to carry them with both hands. However, dromaeosaurids and other maniraptorans may have solved this problem by clutching objects single-handedly to the chest. Feathered hands would also have restricted the ability of the hands to pick objects up off the ground, given that the feathers extend well beyond the ends of the digits. It remains possible that some maniraptorans lacked remiges on their fingers, but the only evidence available indicates the contrary. It has recently been argued that the particularly long second digit of the oviraptorosaur Chirostenotes was used as a probing tool, locating and extracting invertebrates and small mammals and so on from crevices and burrows. It seems highly unlikely that a digit that is regularly thrust into small cavities would have had feathers extending along its length, so either Chirostenotes didn't probe as proposed, or its second finger was unfeathered, unlike that of Caudipteryx and the other feathered maniraptorans. Given the problems that the feathers might have posed for clutching and grabbing prey from the ground, we might also speculate that some of these dinosaurs deliberately removed their own remiges by biting them off. Some modern birds (notably motmots) manipulate their own feathers by biting off some of the barbs, so this is at least conceivable, but no remains in the fossil record have been recovered that support this conclusion.

Some feather morphologies in non-avian theropods are comparable to those on modern birds. Single filament-like structures are not present in modern feathers, although some birds possess highly specialized feathers that are superficially similar in appearance to the protofeathers of non-avian theropods. Tuft-like structures seen in some maniraptorans are similar to the natal down of modern birds. Similarly, structures in the fossil record composed of a series of filaments joined at their bases along a central filament bear an uncanny resemblance to the down feathers of modern birds, with the exception of a lack of barbules. Furthermore, structures recovered from Chinese Cretaceous deposits, consisting of a series of filaments joined at their bases at the distal portion of a central filament, bear a superficial resemblance to filoplumes. More derived, pennaceous feathers on the tails and limbs of feathered dinosaurs are nearly identical to the remiges and rectrices of modern birds.

Feather structures and anatomy

Feathers vary in length according to their position on the body, with the filaments of the compsognathid Sinosauropteryx being 13 mm and 21 mm long on the neck and shoulders respectively. In contrast, the structures on the skull are about 5 mm long, those on the arm about 2 mm long, and those on the distal part of the tail about 4 mm long. Because the structures tend to be clumped together it is difficult to be sure of an individual filament's morphology. The structures might have been simple and unbranched, but Currie & Chen (2001) thought that the structures on Sinosauropteryx might be branched and rather like the feathers of birds that have short quills but long barbs. The similar structures of Dilong also appear to exhibit a simple branching structure.
Exactly how feathers were arranged on the arms and hands of both basal birds and non-avian maniraptorans has long been unclear, and both non-avian maniraptorans and archaeopterygids have conventionally been depicted as possessing unfeathered fingers. However, the second finger is needed to support the remiges,[Note 7] and therefore must have been feathered. Derek Yalden's 1985 study was important in showing exactly how the remiges would have grown off of the first and second phalanges of the archaeopterygid second finger, and this configuration has been widely recognized. However, there has been some minor historical disagreement over exactly how many remiges were present in archaeopterygids (there were most likely 11 primaries and a tiny distal 12th one, and at least 12 secondaries), and also about how the hand claws were arranged. The claws were directed perpendicularly to the palmar surface in life, and rotated anteriorly in most (but not all) specimens during burial. It has also been suggested on occasion that the fingers of archaeopterygids and other feathered maniraptorans were united in a single fleshy 'mitten' as they are in modern birds, and hence unable to be employed in grasping. However, given that the interphalangeal finger joints of archaeopterygids appear suited for flexion and extension, and that the third finger apparently remained free and flexible in birds more derived than archaeopterygids, this is unlikely to be correct; the idea is based on a depression in the sediment that was identified around the bones.

Like those of archaeopterygids and modern birds, the remiges of non-avian theropods would also have been attached to the phalanges of the second manual digit as well as to the metacarpus and ulna, and indeed we can see this in the fossils. This is the case in the sinornithosaur NGMC 91-A and in Microraptor. Surprisingly, in Caudipteryx, the remiges are restricted to the hands alone, and don't extend from the arm. They seem to have formed little 'hand flags' that are unlikely to have served any function other than display. Caudipteryx is an oviraptorosaur and possesses a suite of characters unique to this group. It is not a member of Aves, despite the efforts of some workers to make it into one. The hands of Caudipteryx supported symmetrical, pennaceous feathers that had vanes and barbs, and that measured between 15 and 20 centimeters (6–8 inches) long. These primary feathers were arranged in a wing-like fan along the second finger, just like the primary feathers of birds and other maniraptorans. No fossil of Caudipteryx preserves any secondary feathers attached to the forearms, as found in dromaeosaurids, Archaeopteryx and modern birds. Either these arm feathers are not preserved, or they were not present on Caudipteryx in life. An additional fan of feathers existed on its short tail. The shortness and symmetry of the feathers, and the shortness of the arms relative to the body size, indicate that Caudipteryx could not fly. The body was also covered in a coat of short, simple, down-like feathers.

A small minority, including ornithologists Alan Feduccia and Larry Martin, continues to assert that birds are instead the descendants of earlier archosaurs, such as Longisquama or Euparkeria. Embryological studies of bird developmental biology have raised questions about digit homology in bird and dinosaur forelimbs.
Opponents also claim that the dinosaur-bird hypothesis is dogma, apparently on the grounds that those who accept it have not accepted the opponents' arguments for rejecting it. However, science does not require unanimity and does not force agreement, nor does science settle issues by vote. It has been over 25 years since John Ostrom first put forth the dinosaur-bird hypothesis in a short article in Nature, and the opponents of this theory have yet to propose an alternative, testable hypothesis. Due to the cogent evidence provided by comparative anatomy and phylogenetics, as well as the dramatic feathered dinosaur fossils from China, the idea that birds are derived dinosaurs, first championed by Huxley and later by Nopcsa and Ostrom, enjoys near-unanimous support among today's paleontologists.

BADD, BAND, and the Birds Came First hypothesis

- Main article: Birds Came First
The non-standard, non-mainstream Birds Came First (or BCF) hypothesis proposed by George Olshevsky accepts that there is a close relationship between dinosaurs and birds, but argues that, given this relationship alone, it is just as likely that dinosaurs descended from birds as the other way around. The hypothesis does not propose that birds in the proper sense evolved earlier than did other dinosaurs or other archosaurs: rather, it posits that small, bird-like, arboreal archosaurs were the direct ancestors of all the archosaurs that came later on (proper birds included). Olshevsky was aware of this fact, and apparently considered the rather tongue-in-cheek alternative acronym GOODD, meaning George Olshevsky On Dinosaur Descendants. This was, of course, meant as the opposite of the also tongue-in-cheek BADD (Birds Are Dinosaur Descendants): the term Olshevsky uses for the 'conventional' or 'mainstream' view of avian origins outlined in the first two paragraphs above. 'BADD' is bad, according to BCF, as it imagines that small size, feathers and arboreal habits all evolved very late in archosaur evolution, and exclusively within maniraptoran theropod dinosaurs.

Protoavis is a Late Triassic archosaur whose fossilized remains were found near Post, Texas. These fossils have been described as those of a primitive bird which, if the identification is valid, would push back avian origins some 60-75 million years. Though it existed far earlier than Archaeopteryx, its skeletal structure is allegedly more bird-like. The fossil bones are too badly preserved to allow an estimate of flying ability; although reconstructions usually show feathers, judging from thorough study of the fossil material there is no indication that these were present. However, this description of Protoavis assumes that Protoavis has been correctly interpreted as a bird. Almost all paleontologists doubt that Protoavis is a bird, or that all remains assigned to it even come from a single species, because of the circumstances of its discovery and unconvincing avian synapomorphies in its fragmentary material. When they were found at a Dockum Formation quarry in the Texas panhandle in 1984, in sedimentary strata of a Triassic river delta, the fossils were a jumbled cache of disarticulated bones that may reflect an incident of mass mortality following a flash flood. Scientists such as Alan Feduccia have cited Protoavis in an attempt to refute the hypothesis that birds evolved from dinosaurs. However, even if Protoavis were accepted as a bird, the only consequence would be to push the point of divergence further back in time.
At the time such claims were originally made, the affiliation of birds and maniraptoran theropods, which today is well supported and generally accepted by most ornithologists, was much more contentious; most Mesozoic birds have only been discovered since then. Chatterjee himself has since used Protoavis to support a close relationship between dinosaurs and birds.

"As there remains no compelling data to support the avian status of Protoavis or taxonomic validity thereof, it seems mystifying that the matter should be so contentious. The author very much agrees with Chiappe in arguing that at present, Protoavis is irrelevant to the phylogenetic reconstruction of Aves. While further material from the Dockum beds may vindicate this peculiar archosaur, for the time being, the case for Protoavis is non-existent."

Claimed temporal paradox

The temporal paradox, or time problem, is a controversial issue in the evolutionary relationships of feathered dinosaurs and birds. It was originally conceived of by paleornithologist Alan Feduccia. The concept is based on the following apparent facts. The consensus view is that birds evolved from dinosaurs, but the most bird-like dinosaurs, and those most closely related to birds (the maniraptorans), are known mostly from the Cretaceous, by which time birds had already evolved and diversified. If bird-like dinosaurs are the ancestors of birds they should be older than birds, but Archaeopteryx is 155 million years old, while the very bird-like Deinonychus is 35 million years younger. This idea is sometimes summarized as "you can't be your own grandmother". The development of avian characteristics in dinosaurs supposedly should have led to the first modern bird appearing about 60 million years ago. However, Archaeopteryx lived 150 million years ago, long before any of the bird changes took place in dinosaurs. Each of the feathered dinosaur families developed avian-like features in its own way. Thus there were several different lines of evolution. Archaeopteryx was merely the result of one such line.

Numerous researchers have discredited the idea of the temporal paradox. Witmer (2002) summarized this critical literature by pointing out that there are at least three lines of evidence that contradict it. First, no one has proposed that maniraptoran dinosaurs of the Cretaceous are the ancestors of birds. They have merely found that dinosaurs like dromaeosaurs, troodontids and oviraptorosaurs are close relatives of birds. The true ancestors are thought to be older than Archaeopteryx, perhaps Early Jurassic or even older. The scarcity of maniraptoran fossils from this time is not surprising, since fossilization is a rare event requiring special circumstances, and fossils may never be found of animals from some of the times and places they actually inhabited. Secondly, fragmentary remains of maniraptoran dinosaurs actually have been known from Jurassic deposits in China, North America, and Europe for many years. The femur of a tiny maniraptoran from the Late Jurassic of Colorado was reported by Padian and Jensen in 1989. In a 2009 article in the journal Acta Palaeontologica Polonica, six velociraptorine dromaeosaurid teeth were described as having been recovered from a bone bed in the Langenberg Quarry at Oker (Goslar, Germany). These teeth are notable in that they date back to the Kimmeridgian stage of the Late Jurassic, roughly 155-150 Ma, and represent some of the earliest dromaeosaurids known to science, further refuting a "temporal paradox".
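To make the arithmetic behind the "temporal paradox" and its rebuttal concrete, the back-of-the-envelope sketch below compares approximate first-appearance ages against the age of Archaeopteryx. The figures are taken loosely from those quoted above and are illustrative only, not a published dataset; the "gap" it prints is the minimum ghost lineage, i.e. the stretch of time a group must have existed without leaving known fossils, which is a sampling gap rather than evidence of a late origin.

```python
# Back-of-the-envelope version of the "temporal paradox" arithmetic.
# Ages are approximate figures quoted in the surrounding text and are
# illustrative only; they are not a published dataset.

ARCHAEOPTERYX_MA = 155  # approximate age of Archaeopteryx (Ma = millions of years ago)

first_appearances_ma = {
    "Deinonychus (Early Cretaceous)": 120,   # roughly 35 Myr younger than Archaeopteryx
    "Langenberg dromaeosaurid teeth": 152,   # Kimmeridgian, roughly 155-150 Ma
}

for taxon, age_ma in first_appearances_ma.items():
    gap_myr = ARCHAEOPTERYX_MA - age_ma
    # A positive gap means the known fossils are younger than Archaeopteryx,
    # implying an unsampled ("ghost") lineage at least that long -- a gap in
    # the fossil record rather than evidence that the group arose late.
    print(f"{taxon}: first known at ~{age_ma} Ma; implied ghost lineage >= {max(gap_myr, 0)} Myr")
```

With the Jurassic records included, the implied gap largely disappears, which is the substance of the rebuttals discussed below.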
Furthermore, a small, as yet undescribed troodontid known as WDC DML 001 was announced in 2003 as having been found in the Late Jurassic Morrison Formation of eastern/central Wyoming. The presence of this derived maniraptoran in Jurassic sediments is a strong refutation of the "temporal paradox". Third, if the temporal paradox indicates that birds should not have evolved from dinosaurs, then what animals are more likely ancestors, considering their age? Brochu and Norell (2001) analyzed this question using several of the other archosaurs that have been proposed as bird ancestors, and found that all of them create temporal paradoxes—long stretches between the ancestor and Archaeopteryx where there are no intermediate fossils—that are actually worse. Thus, even if one used the logic of the temporal paradox, one should still prefer dinosaurs as the ancestors of birds.

Quick & Ruben (2009)

In their 2009 paper, Quick & Ruben argue that modern birds are fundamentally different from non-avian dinosaurs in terms of abdominal soft-tissue morphology, and that birds therefore cannot be modified dinosaurs. The paper asserts that a specialized 'femoral-thigh complex', combined with a synsacrum and ventrally separated pubic bones, provides crucial mechanical support for the abdominal wall in modern birds, and has thereby allowed the evolution of large abdominal air-sacs that function in respiration. In contrast, say the authors, theropod dinosaurs lack these features and had a highly mobile femur that cannot have been incorporated into abdominal support. Therefore, non-avian theropods cannot have had abdominal air-sacs that functioned like those of modern birds, and non-avian theropods were fundamentally different from modern birds. The further implication that birds therefore cannot be dinosaurs was not spelled out in the paper itself, but was of course played up in the press interviews. The paper never really demonstrates anything, but merely tries to shoot holes in a given line of supporting evidence. It has been argued that respiratory turbinates supposedly falsify dinosaur endothermy, even though it has never been demonstrated that respiratory turbinates really are a requirement for any given physiological regime, and even though there are endotherms that lack respiratory turbinates. The innards of Sinosauropteryx and Scipionyx also supposedly falsify avian-like air-sac systems in non-avian coelurosaurs and demonstrate a crocodilian-like hepatic piston diaphragm, even though personal interpretation is required to accept that this claim might be correct. Furthermore, even though crocodilians and dinosaurs are fundamentally different in pelvic anatomy, some living birds have the key soft-tissue traits reported by Ruben et al. in Sinosauropteryx and Scipionyx, and yet still have an avian respiratory system. For a more detailed rebuttal of Quick & Ruben's paper, see the discussion by Darren Naish at Tetrapod Zoology.

There have been claims that the supposed feathers of the Chinese fossils are a preservation artifact. Despite doubts, the fossil feathers have roughly the same appearance as those of birds fossilized in the same locality, so there is no serious reason to think they are of a different nature; moreover, no non-theropod fossil from the same site shows such an artifact, though some show unambiguous hair (some mammals) or scales (some reptiles). Some researchers have interpreted the filamentous impressions around Sinosauropteryx fossils as remains of collagen fibers, rather than primitive feathers.
Since they are clearly external to the body, these researchers have proposed that the fibers formed a frill on the back of the animal and the underside of its tail, similar to those of some modern aquatic lizards. This would refute the proposal that Sinosauropteryx is the most basal known theropod genus with feathers, and also calls into question the current theory of feather origins itself: the idea that the first feathers evolved not for flight but for insulation, and that they made their first appearance in relatively basal dinosaur lineages that later evolved into modern birds.

The Archaeoraptor fake

- Main article: Archaeoraptor
In 1999, a supposed 'missing link' fossil of an apparently feathered dinosaur named "Archaeoraptor liaoningensis", found in Liaoning Province, northeastern China, turned out to be a forgery. Comparing the photograph of the specimen with another find, Chinese paleontologist Xu Xing came to the conclusion that it was composed of two portions of different fossil animals. His claim made National Geographic review their research, and they too came to the same conclusion. The bottom portion of the "Archaeoraptor" composite came from a legitimate feathered dromaeosaurid now known as Microraptor, and the upper portion from a previously known primitive bird called Yanornis.

Flying and gliding

The ability to fly or glide has been suggested for at least two dromaeosaurid species. The first, Rahonavis ostromi (originally classified as an avian, but found to be a dromaeosaurid in later studies), may have been capable of powered flight, as indicated by its long forelimbs with evidence of quill knob attachments for long, sturdy flight feathers. The forelimbs of Rahonavis were more powerfully built than those of Archaeopteryx, and show evidence that they bore strong ligament attachments necessary for flapping flight. Luis Chiappe concluded that, given these adaptations, Rahonavis could probably fly but would have been more clumsy in the air than modern birds. Another species of dromaeosaurid, Microraptor gui, may have been capable of gliding using its well-developed wings on both the fore and hind limbs. Microraptor was among the first non-avian dinosaurs discovered with the impressions of feathers and wings. On Microraptor, the long feathers on the forelimbs possess asymmetrical vanes; the external vanes are narrow, while the internal ones are broad. In addition, Microraptor possessed elongated remiges with asymmetrical vanes that demonstrate aerodynamic function on the hind limbs.

A 2005 study by Sankar Chatterjee suggested that the wings of Microraptor functioned like a split-level "biplane", and that it likely employed a phugoid style of gliding, in which it would launch from a perch and swoop downward in a 'U'-shaped curve, then lift again to land on another tree, with the tail and hind wings helping to control its position and speed. Chatterjee also found that Microraptor had the basic requirements to sustain level powered flight in addition to gliding. Microraptor had two sets of wings, on both its forelegs and hind legs. The long feathers on the legs of Microraptor were true flight feathers as seen in modern birds, with asymmetrical vanes on the arm, leg, and tail feathers. As in bird wings, Microraptor had both primary (anchored to the hand) and secondary (anchored to the arm) flight feathers. This standard wing pattern was mirrored on the hind legs, with flight feathers anchored to the upper foot bones as well as the upper and lower leg.
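For a rough sense of scale of the gliding performance discussed above, the sketch below works through a generic steady-glide calculation. It is not Chatterjee's biplane model: the lift-to-drag ratio and launch height are assumed placeholder values, and the undulating phugoid path described in the text is an unsteady refinement that this simple equilibrium estimate ignores.

```python
# Generic steady-glide arithmetic, for scale only. This is NOT Chatterjee's
# biplane model of Microraptor; the lift-to-drag ratio (L/D) and the launch
# height below are assumed placeholder values.
import math

lift_to_drag = 3.0      # assumed L/D for a small feathered glider (placeholder)
launch_height_m = 10.0  # assumed height of the launch perch, in metres

# In an equilibrium glide, tan(glide angle) = drag/lift = 1 / (L/D), and the
# horizontal distance covered per metre of height lost equals L/D.
glide_angle_deg = math.degrees(math.atan(1.0 / lift_to_drag))
horizontal_range_m = launch_height_m * lift_to_drag

print(f"glide angle: {glide_angle_deg:.1f} degrees below horizontal")
print(f"horizontal reach from a {launch_height_m:.0f} m perch: about {horizontal_range_m:.0f} m")
```

Because the answer scales directly with the assumed lift-to-drag ratio, published analyses instead model wing area, body mass and feather arrangement explicitly rather than picking a single L/D figure.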
Chinese scientists had proposed that the animal glided and probably lived in trees, pointing out that wings anchored to the feet of Microraptor would have hindered its ability to run on the ground, and suggesting that all primitive dromaeosaurids may have been arboreal. Sankar Chatterjee determined in 2005 that, in order for the creature to glide or fly, the wings must have been on different levels (as on a biplane) and not overlaid (as on a dragonfly), and that the latter posture would have been anatomically impossible. Using this biplane model, Chatterjee was able to calculate possible methods of gliding, and determined that Microraptor most likely employed a phugoid style of gliding—launching itself from a perch, the animal would have swooped downward in a deep 'U'-shaped curve and then lifted again to land on another tree. The feathers not directly employed in the biplane wing structure, like those on the tibia and the tail, could have been used to control drag and alter the flight path, trajectory, etc. The orientation of the hind wings would also have helped the animal control its gliding flight. In 2007, Chatterjee used computer algorithms that test animal flight capacity to determine whether or not Microraptor was capable of true, powered flight, in addition to passive gliding. The resulting data showed that Microraptor did have the requirements to sustain level powered flight, so it is theoretically possible that the animal flew on occasion in addition to gliding.

Saurischian integumentary structures

The hip structure possessed by modern birds actually evolved independently within the "lizard-hipped" saurischians (specifically, within a sub-group of saurischians called the Maniraptora) in the Jurassic Period. In this example of convergent evolution, birds developed hips oriented similarly to the earlier ornithischian hip anatomy, in both cases possibly as an adaptation to a herbivorous or omnivorous diet. Within Saurischia, maniraptorans are characterized by long arms and three-fingered hands, as well as a "half-moon shaped" (semi-lunate) bone in the wrist (carpus). Maniraptorans are the only dinosaurs known to have breast bones (ossified sternal plates). In 2004, Tom Holtz and Halszka Osmólska pointed out six other maniraptoran characters relating to specific details of the skeleton. Unlike most other saurischian dinosaurs, which have pubic bones that point forward, several groups of maniraptorans have an ornithischian-like, backwards-pointing pubis. A backward-pointing pubis characterizes the therizinosaurs, dromaeosaurids, avialans, and some primitive troodontids. The fact that the backward-pointing hip is present in so many diverse maniraptoran groups has led most scientists to conclude that the "primitive" forward-pointing hip seen in advanced troodontids and oviraptorosaurs is an evolutionary reversal, and that these groups evolved from ancestors with backward-pointing hips.

Modern pennaceous feathers and remiges are known from advanced maniraptoran groups (Oviraptorosauria and Paraves). More primitive maniraptorans, such as therizinosaurs (specifically Beipiaosaurus), preserve a combination of simple downy filaments and unique elongated quills. Powered and/or gliding flight is present in members of Avialae, and possibly in some dromaeosaurids such as Rahonavis and Microraptor.
Simple feathers are known from more primitive coelurosaurs such as Sinosauropteryx, and possibly from even more distantly related species such as the ornithischian Tianyulong and the flying pterosaurs. Thus it appears as if some form of feathers or down-like integument would have been present in all maniraptorans, at least when they were young. Skin impressions from the type specimen of Beipiaosaurus inexpectus indicate that the body was covered predominantly by downy, feather-like fibers, similar to those of Sinosauropteryx but longer, and oriented perpendicular to the arm. Xu et al., who described the specimen, suggested that these downy feathers represent an intermediate stage between Sinosauropteryx and more advanced birds (Avialae).

Unique among known theropods, Beipiaosaurus also possessed a secondary coat of much longer, simpler feathers that rose out of the down layer. These unique feathers (known as EBFFs, or elongated broad filamentous feathers) were first described by Xu et al. in 2009, based on a specimen consisting of the torso, head and neck. Xu and his team also found EBFFs in the original type specimen of B. inexpectus, revealed by further preparation. The holotype also preserved a pygostyle-like structure. The holotype was discovered in two phases. Limb fragments and dorsal and cervical vertebrae were discovered initially. The discovery site was re-excavated later on, and this time an articulated tail and partial pelvis were discovered. All come from the same individual. The holotype has the largest proto-feathers known of any feathered dinosaur, with author and paleontologist Xu Xing stating: "Most integumentary filaments are about 50 mm in length, although the longest is up to 70 mm. Some have indications of branching distal ends." The holotype also preserved dense patches of parallel integumentary structures in association with its lower arm and leg. Thick, stiff, spine-like structures were recovered sprouting from the new specimen's throat region, the back of its head, its neck and its back. New preparation of the holotype reveals that the same structures are also present on the tail (though not associated with the pygostyle-like structure).

The EBFFs differ from other feather types in that they consist of a single, unbranched filament. Most other primitive feathered dinosaurs have down-like feathers made up of two or more filaments branching out from a common base or along a central shaft. The EBFFs of Beipiaosaurus are also much longer than other primitive feather types, measuring about 100-150 millimeters (4-6 inches) long, roughly half the length of the neck. In Sinosauropteryx, the longest feathers are only about 15% of the neck length. The EBFFs of Beipiaosaurus are also unusually broad, up to 3 mm wide in the type specimen. The broadest feathers of Sinosauropteryx are only 0.2 mm wide, and only slightly wider in larger forms such as Dilong. Additionally, where most primitive feather types are circular in cross section, EBFFs appear to be oval-shaped. None of the preserved EBFFs were curved or bent beyond a broad arc in either specimen, indicating that they were fairly stiff. They were probably hollow, at least at the base. In a 2009 interview, Xu stated: "Both [feather types] are definitely not for flight, inferring the function of some structures of extinct animals would be very difficult, and in this case, we are not quite sure whether these feathers are for display or some other functions."
He speculated that the finer feathers served as an insulatory coat and that the larger feathers were ornamental, perhaps for social interactions such as mating or communication.

Long filamentous structures have been preserved along with skeletal remains of numerous coelurosaurs from the Early Cretaceous Yixian Formation and other nearby geological formations from Liaoning, China. These filaments have usually been interpreted as "protofeathers," homologous with the branched feathers found in birds and some non-avian theropods, although other hypotheses have been proposed. A skeleton of Dilong, described in the scientific journal Nature in 2004, included the first example of "protofeathers" in a tyrannosauroid, from the Yixian Formation of China. Like the down feathers of modern birds, the "protofeathers" found in Dilong were branched but not pennaceous, and may have been used for insulation. The presence of "protofeathers" in basal tyrannosauroids is not surprising, since they are now known to be characteristic of coelurosaurs, found in other basal genera like Sinosauropteryx, as well as all more derived groups. Rare fossilized skin impressions of large tyrannosaurids lack feathers, however, instead showing skin covered in scales. While it is possible that protofeathers existed on parts of the body which have not been preserved, a lack of insulatory body covering is consistent with modern multi-ton mammals such as elephants, hippopotamuses, and most species of rhinoceros. Alternatively, secondary loss of "protofeathers" in large tyrannosaurids may be analogous to the similar loss of hair in the largest modern mammals like elephants, where a low surface area-to-volume ratio slows down heat transfer, making insulation by a coat of hair unnecessary. Therefore, as large animals evolve in or disperse into warm climates, a coat of fur or feathers loses its selective advantage for thermal insulation and can instead become a disadvantage, as the insulation traps excess heat inside the body, possibly overheating the animal. Protofeathers may also have been secondarily lost during the evolution of large tyrannosaurids, especially in warm Cretaceous climates. Tyrannosaurus at one stage of its life may have been covered in down-like feathers, although there is no direct fossil evidence of this.

A few troodontid fossils, including specimens of Mei and Sinornithoides, demonstrate that these animals roosted like birds, with their heads tucked under their forelimbs. These fossils, as well as numerous skeletal similarities to birds and related feathered dinosaurs, support the idea that troodontids probably bore a bird-like feathered coat. The discovery of a fully feathered, primitive troodontid (Jinfengopteryx) lends support to this. The type specimen of Jinfengopteryx elegans is 55 cm long and comes from the Qiaotou Formation of Liaoning Province, China. Troodontids are important to research on the origin of birds because they share many anatomical characters with early birds. Crucially, the substantially complete fossil identified as WDC DML 001 ("Lori") is a troodontid from the Late Jurassic Morrison Formation, close to the time of Archaeopteryx. The discovery of this Jurassic troodontid is positive physical evidence that derived deinonychosaurs were present very near the time that birds arose, and that basal paravians must have evolved much earlier. This fact strongly invalidates the "temporal paradox" cited by the few remaining opponents of the idea that birds are closely related to dinosaurs.
(see Claimed temporal paradox above.) There is a large body of evidence showing that dromaeosaurids were covered in feathers. Some dromaeosaurid fossils preserve long, pennaceous feathers on the hands and arms (remiges) and tail (rectrices), as well as shorter, down-like feathers covering the body. Other fossils, which do not preserve actual impressions of feathers, still preserve the associated bumps on the forearm bones where long wing feathers would have attached in life. Overall, this feather pattern looks very much like that of Archaeopteryx. The first known dromaeosaur with definitive evidence of feathers was Sinornithosaurus, reported from China by Xu et al. in 1999. NGMC 91-A, the Sinornithosaurus-like theropod informally dubbed "Dave", possessed unbranched fibers in addition to more complex branched and tufted structures. Many other dromaeosaurid fossils have been found with feathers covering their bodies, some with fully developed feathered wings. Several even show evidence of a second pair of wings on the hind legs, including Microraptor and Cryptovolans. While direct feather impressions are only possible in fine-grained sediments, some fossils found in coarser rocks show evidence of feathers by the presence of quill knobs, the attachment points for wing feathers possessed by some birds. The dromaeosaurids Rahonavis and Velociraptor have both been found with quill knobs, showing that these forms had feathers despite no impressions having been found. In light of this, it is most likely that even the larger ground-dwelling dromaeosaurids bore feathers, since even flightless birds today retain most of their plumage, and relatively large dromaeosaurids, like Velociraptor, are known to have retained pennaceous feathers. Though some scientists had suggested that the larger dromaeosaurids lost some or all of their insulatory covering, the discovery of feathers in Velociraptor specimens has been cited as evidence that all members of the family retained feathers.

Fossils of dromaeosaurids more primitive than Velociraptor are known to have had feathers covering their bodies, and fully developed, feathered wings. The fact that the ancestors of Velociraptor were feathered and possibly capable of flight long suggested to paleontologists that Velociraptor bore feathers as well, since even flightless birds today retain most of their feathers. In September 2007, Alan Turner, Peter Makovicky, and Mark Norell reported the presence of quill knobs on the ulna of a Velociraptor specimen from Mongolia. Fourteen bumps approximately 4 mm apart were found in a straight line along the bone, directly corresponding to the same structures in living birds, where the bumps serve as anchors for the secondary feathers. These bumps on bird wing bones show where feathers anchor, and their presence on Velociraptor indicates that it too had feathers. According to paleontologist Alan Turner: "A lack of quill knobs does not necessarily mean that a dinosaur did not have feathers. Finding quill knobs on Velociraptor, though, means that it definitely had feathers. This is something we'd long suspected, but no one had been able to prove." Co-author Mark Norell, Curator-in-Charge of fossil reptiles, amphibians and birds at the American Museum of Natural History, also weighed in on the discovery, saying: "The more that we learn about these animals the more we find that there is basically no difference between birds and their closely related dinosaur ancestors like velociraptor.
Both have wishbones, brooded their nests, possess hollow bones, and were covered in feathers. If animals like velociraptor were alive today our first impression would be that they were just very unusual looking birds."

According to Turner and co-authors Norell and Peter Makovicky, quill knobs are not found in all prehistoric birds, and their absence does not mean that an animal was not feathered – flamingos, for example, have no quill knobs. However, their presence confirms that Velociraptor bore modern-style wing feathers, with a rachis and vane formed by barbs. The forearm specimen on which the quill knobs were found (specimen number IGM 100/981) represents an animal 1.5 meters in length (5 ft) and 15 kilograms (33 lbs) in weight. Based on the spacing of the six preserved knobs in this specimen, the authors suggested that Velociraptor bore 14 secondaries (wing feathers stemming from the forearm), compared with 12 or more in Archaeopteryx, 18 in Microraptor, and 10 in Rahonavis. This type of variation in the number of wing feathers between closely related species, the authors asserted, is to be expected, given similar variation among modern birds. Turner and colleagues interpreted the presence of feathers on Velociraptor as evidence against the idea that the larger, flightless maniraptorans lost their feathers secondarily due to larger body size. Furthermore, they noted that quill knobs are almost never found in flightless bird species today, and that their presence in Velociraptor (presumed to have been flightless due to its relatively large size and short forelimbs) is evidence that the ancestors of dromaeosaurids could fly, making Velociraptor and other large members of this family secondarily flightless, though it is possible the large wing feathers inferred in the ancestors of Velociraptor had a purpose other than flight. The feathers of the flightless Velociraptor may have been used for display, for covering their nests while brooding, or for added speed and thrust when running up inclined slopes.

The preserved impressions of integumentary structures in Sinornithosaurus were composed of filaments, and showed two features that indicate they are early feathers. First, several filaments were joined together into "tufts", similar to the way down is structured. Second, a row of filaments (barbs) was joined to a main shaft (rachis), making them similar in structure to normal bird feathers. However, they do not have the secondary branching and tiny hooks (barbules) that modern feathers have, which allow the feathers of modern birds to form a discrete vane. The filaments are arranged in a parallel fashion to each other, and are perpendicular to the bones. In specimen NGMC 91, the feathers covered the entire body, including the head in front of the eye, the neck, wing-like sprays on the arms, long feathers on the thighs, and a lozenge-shaped fan on the tail like that of Archaeopteryx.

Pedopenna is a maniraptoran theropod whose avian affinities provide further evidence of the dinosaur-bird evolutionary relationship. Apart from having a very bird-like skeletal structure in its legs, Pedopenna was remarkable due to the presence of long pennaceous feathers on the metatarsus (foot). Some deinonychosaurs are also known to have these 'hind wings', but those of Pedopenna differ from those of animals like Microraptor. Pedopenna's hind wings were smaller and more rounded in shape.
The longest feathers were slightly shorter than the metatarsus, at about 55 mm (2 in) long. Additionally, the feathers of Pedopenna were symmetrical, unlike the asymmetrical feathers of some deinonychosaurs and birds. Since asymmetrical feathers are typical of animals adapted to flying, it is likely that Pedopenna represents an early stage in the development of these structures. While many of the feather impressions in the fossil are weak, it is clear that each possessed a rachis and barbs, and while the exact number of foot feathers is uncertain, they are more numerous than in the hind wings of Microraptor. Pedopenna also shows evidence of shorter feathers overlying the long foot feathers, evidence for the presence of coverts as seen in modern birds. The fact that the feathers show fewer aerodynamic adaptations than the similar hind wings of Microraptor, and appear to be less stiff, suggests that if they did have some kind of aerodynamic function, it was much weaker than in deinonychosaurs and birds. Xu and Zhang, in their 2005 description of Pedopenna, suggested that the feathers could be ornamental, or even vestigial. It is possible that a hind wing was present in the ancestors of deinonychosaurs and birds, and later lost in the bird lineage, with Pedopenna representing an intermediate stage where the hind wings are being reduced from a functional gliding apparatus to a display or insulatory function.

Anchiornis is notable for its proportionally long forelimbs, which measured 80% of the total length of the hind limbs. This is similar to the condition in early avians such as Archaeopteryx, and the authors pointed out that long forelimbs are necessary for flight. It is possible that Anchiornis was able to fly or glide, and may have had a functional airfoil. Anchiornis also had a more avian wrist than other non-avian theropods, though its hind leg proportions are more like those of more basal theropod dinosaurs than of avialans. Faint, carbonized feather impressions were preserved in patches in the type specimen. Feathers on the torso measured an average of 20 mm in length, but the feathers were too poorly preserved to ascertain details of their structure. A cladistic analysis indicated that Anchiornis is part of the avian lineage, but outside of the clade that includes Archaeopteryx and modern birds, strongly suggesting that Anchiornis was a basal member of the Avialae and the sister taxon of Aves. Anchiornis can therefore be considered to be a non-avian avialan.

All specimens of Sinosauropteryx preserve integumentary structures (filaments arising from the skin) which most paleontologists interpret as very primitive feathers. These short, down-like filaments are preserved all along the back half of the skull, arms, neck, back, and top and bottom of the tail. Additional patches of feathers have been identified on the sides of the body, and paleontologists Chen, Dong and Zheng proposed that the density of the feathers on the back and the randomness of the patches elsewhere on the body indicated the animals would have been fully feathered in life, with the ventral feathers having been removed by decomposition. The filaments are preserved with a gap between them and the bones, which several authors have noted corresponds closely to the expected amount of skin and muscle tissue that would have been present in life.
The feathers adhere close to the bone on the skull and at the end of the tail, where little to no muscle was present, and the gap increases over the back vertebrae, where more musculature would be expected, indicating that the filaments were external to the skin and do not correspond with subcutaneous structures. The random positioning of the filaments and the often "wavy" lines of preservation indicate that they were soft and pliable in life. Examination with microscopes shows that each individual filament appears dark along the edges and light internally, suggesting that they were hollow, like modern feathers. Compared to those of modern mammals, the filaments were quite coarse, with each individual strand much larger and thicker than the corresponding hairs of similarly sized mammals. The length of the filaments varied across the body. They were shortest just in front of the eyes, with a length of 13 mm. Going further down the body, the filaments rapidly increase in length until reaching 35 mm over the shoulder blades. The length remains uniform over the back until beyond the hips, when the filaments lengthen again and reach their maximum length midway down the tail, at 40 mm. The filaments on the underside of the tail are shorter overall and decrease in length more rapidly than those on the dorsal surface. By the 25th tail vertebra, the filaments on the underside reach a length of only 35 mm. The longest feathers present on the forearm measured 14 mm. Overall, the filaments most closely resemble the "plumules" or down-like feathers of some modern birds, with a very short quill and long, thin barbs. The same structures are seen in other fossils from the Yixian Formation, including Confuciusornis. Analysis of the fossils of Sinosauropteryx has shown an alternation of lighter and darker bands preserved on the tail, giving an idea of what the animal looked like in life. This banding is probably due to preserved areas of melanin, which can produce dark tones in fossils. The type specimen of Epidendrosaurus also preserved faint feather impressions at the end of the tail, similar to the pattern found in the dromaeosaurid Microraptor. While the reproductive strategies of Epidendrosaurus itself remain unknown, several tiny fossil eggs discovered in Phu Phok, Thailand (one of which contained the embryo of a theropod dinosaur) may have been laid by a small dinosaur similar to Epidendrosaurus or Microraptor. The authors who described these eggs estimated that the dinosaur they belonged to would have had the adult size of a modern goldfinch. Scansoriopteryx fossils preserve impressions of wispy, down-like feathers around select parts of the body, forming V-shaped patterns similar to those seen in modern down feathers. The most prominent feather impressions trail from the left forearm and hand. The longer feathers in this region led Czerkas and Yuan to speculate that adult scansoriopterygids may have had reasonably well-developed wing feathers which could have aided in leaping or rudimentary gliding, though they ruled out the possibility that Scansoriopteryx could have achieved powered flight. Like other maniraptorans, Scansoriopteryx had a semilunate (half-moon shaped) bone in the wrist that allowed for bird-like folding motion in the hand. Even if powered flight was not possible, this motion could have aided maneuverability in leaping from branch to branch. Scales were also preserved near the base of the tail.
Oviraptorosaurs, like dromaeosaurs, are so bird-like that several scientists consider them to be true birds, more advanced than Archaeopteryx. Gregory S. Paul has written extensively on this possibility, and Teresa Maryańska and colleagues published a technical paper detailing this idea in 2002. Michael Benton, in his widely respected text Vertebrate Palaeontology, also included oviraptorosaurs as an order within the class Aves. However, a number of researchers have disagreed with this classification, retaining oviraptorosaurs as non-avialan maniraptorans slightly more primitive than the dromaeosaurs. Evidence for feathered oviraptorosaurs exists in several forms. Most directly, two species of the primitive oviraptorosaur Caudipteryx have been found with impressions of well-developed feathers, most notably on the wings and tail, suggesting that they functioned at least partially for display. Secondly, at least one oviraptorosaur (Nomingia) was preserved with a tail ending in something like a pygostyle, a bony structure at the end of the tail that, in modern birds, is used to support a fan of feathers. Similarly, quill knobs (anchor points for wing feathers on the ulna) have been reported in the oviraptorosaurian species Avimimus portentosus. Additionally, a number of oviraptorid specimens have famously been discovered in a nesting position similar to that of modern birds. The arms of these specimens are positioned in such a way that they could perfectly cover their eggs if they had small wings and a substantial covering of feathers. Protarchaeopteryx, an oviraptorosaur, is well known for its fan-like array of 12 rectricial feathers, but it also seems to have sported simple filament-like structures elsewhere on the tail. Soft and downy feathers are preserved in the chest region and at the tail base, and are also preserved adjacent to the femora. The bodies and limbs of oviraptorosaurs are arranged in a bird-like manner, suggesting the presence of feathers on the arms which may have been used for insulating eggs or brooding young. Members of Oviraptoridae possess a quadrate bone that shows particularly avian characteristics, including a pneumatized, double-headed structure, the presence of a pterygoid process, and an articular fossa for the quadratojugal. Oviraptorids were probably feathered, since some close relatives were found with feathers preserved (Caudipteryx and possibly Protarchaeopteryx). Another finding pointing to this is the discovery in Nomingia of a pygostyle, a bone that results from the fusion of the last tail vertebrae and that, in birds, supports a fan of tail feathers. Finally, the arm position of the brooding Citipati would have been far more effective if feathers were present to cover the eggs. Because Caudipteryx has clear, unambiguously pennaceous feathers like those of modern birds, and several cladistic analyses have consistently recovered it as a nonavian oviraptorid dinosaur, it provided, at the time of its description, the clearest and most succinct evidence that birds evolved from dinosaurs. Lawrence Witmer stated:
- "The presence of unambiguous feathers in an unambiguously nonavian theropod has the rhetorical impact of an atomic bomb, rendering any doubt about the theropod relationships of birds ludicrous."
However, not all scientists agreed that Caudipteryx was unambiguously non-avian, and some of them continued to doubt that general consensus.
Paleornithologist Alan Feduccia sees Caudipteryx as a flightless bird evolving from earlier archosaurian dinosaurs rather than from late theropods. Jones et al. (2000) found that Caudipteryx was a bird based on a mathematical comparison of the body proportions of flightless birds and non-avian theropods. Dyke and Norell (2005) criticized this result for flaws in their mathematical methods, and produced results of their own which supported the opposite conclusion. Other researchers not normally involved in the debate over bird origins, such as Zhou, acknowledged that the true affinities of Caudipteryx were debatable. In 1997, filament-like integumentary structures were reported to be present in the Spanish ornithomimosaur Pelecanimimus polyodon. Furthermore, one published life restoration depicts Pelecanimimus as having been covered in the same sort of quill-like structures as are present on Sinosauropteryx and Dilong. However, a brief 1997 report that described soft-tissue mineralization in the Pelecanimimus holotype has been taken by most workers as the definitive last word 'demonstrating' that integumentary fibers were absent from this taxon. That report, however, described soft-tissue preservation in only one small patch of tissue, and the absence of integument there does not provide much information about the distribution of integument on the live animal. This might explain why a few theropod workers (notably Paul Sereno and Kevin Padian) have continued to indicate the presence of filamentous integumentary structures in Pelecanimimus. Feduccia et al. (2005) argued that Pelecanimimus possessed scaly arms, and figured some unusual rhomboidal structures in an effort to demonstrate this. The objects that they illustrate do not resemble scales, and it remains to be seen whether they have anything to do with the integument of this dinosaur. A full description or monograph of this dinosaur, which might shed more light on this subject, has yet to be published.
Ornithischian integumentary structures
The integument, or body covering, of Psittacosaurus is known from a Chinese specimen, which most likely comes from the Yixian Formation of Liaoning. The specimen, which is not yet assigned to any particular species, was illegally exported from China but was purchased by a German museum, and arrangements are being made to return it to China. Most of the body was covered in scales. Larger scales were arranged in irregular patterns, with numerous smaller scales occupying the spaces between them, similarly to skin impressions known from other ceratopsians, such as Chasmosaurus. However, a series of what appear to be hollow, tubular bristles, approximately 16 centimeters (6.3 in) long, was also preserved, arranged in a row down the dorsal (upper) surface of the tail. According to Mayr et al., however, "[a]t present, there is no convincing evidence which shows these structures to be homologous to the structurally different [feathers and protofeathers] of theropod dinosaurs." As the structures are only found in a single row on the tail, it is unlikely that they were used for thermoregulation, but they may have been useful for communication through some sort of display. Tianyulong is notable for the row of long, filamentous integumentary structures apparent on the back, tail and neck of the fossil.
The similarity of these structures to those found on some derived theropods suggests their homology with feathers and raises the possibility that the earliest dinosaurs and their ancestors were covered with analogous dermal filamentous structures that can be considered primitive feathers (proto-feathers). The filamentous integumentary structures are preserved on three areas of the fossil: in one patch just below the neck, another on the back, and the largest above the tail. The hollow filaments are parallel to each other and are singular, with no evidence of branching. They also appear to be relatively rigid, making them more analogous to the integumentary structures found on the tail of Psittacosaurus than to the proto-feather structures found in avian and non-avian theropods. Among the theropods, the structures in Tianyulong are most similar to the singular unbranched proto-feathers of Sinosauropteryx and Beipiaosaurus. The estimated length of the integumentary structures on the tail is about 60 mm, which is seven times the height of a caudal vertebra. Their length and hollow nature argue against them being subdermal structures such as collagen fibers.
Phylogenetics and homology
Such dermal structures have previously been reported only in derived theropods and ornithischians, and their discovery in Tianyulong extends the existence of such structures further down in the phylogenetic tree. However, the homology between the ornithischian filaments and the theropod proto-feathers is not obvious. If the homology is supported, the consequence is that the common ancestor of both saurischians and ornithischians was covered by feather-like structures, and that groups for which skin impressions are known, such as the sauropods, were only secondarily featherless. If the homology is not supported, it would indicate that these filamentous dermal structures evolved independently in saurischians and ornithischians, as well as in other archosaurs such as the pterosaurs. The authors (in supplementary information to their primary article) noted that the discovery of similar filamentous structures in the theropod Beipiaosaurus bolstered the idea that the structures on Tianyulong are homologous with feathers. Both the filaments of Tianyulong and the filaments of Beipiaosaurus were long, singular, and unbranched. In Beipiaosaurus, however, the filaments were flattened. In Tianyulong, the filaments were round in cross section, and therefore closer in structure to the earliest forms of feathers predicted by developmental models. Some scientists have argued that other dinosaur proto-feathers are actually fibers of collagen that have come loose from the animals' skins. However, collagen fibers are solid structures; based on the long, hollow nature of the filaments on Tianyulong, the authors rejected this explanation. After a century of hypotheses without conclusive evidence, especially well-preserved (and legitimate) fossils of feathered dinosaurs were discovered during the 1990s, and more continue to be found. The fossils were preserved in a lagerstätte (a sedimentary deposit exhibiting remarkable richness and completeness in its fossils) in Liaoning, China. The area had repeatedly been smothered in volcanic ash produced by eruptions in Inner Mongolia 124 million years ago, during the Early Cretaceous Period. The fine-grained ash preserved the living organisms that it buried in fine detail.
The area was teeming with life, with millions of leaves, angiosperms (the oldest known), insects, fish, frogs, salamanders, mammals, turtles, lizards and crocodilians discovered to date. The most important discoveries at Liaoning have been a host of feathered dinosaur fossils, with a steady stream of new finds filling in the picture of the dinosaur-bird connection and adding more to theories of the evolutionary development of feathers and flight. Turner et al. (2007) reported quill knobs on an ulna of Velociraptor mongoliensis, and these are strongly correlated with large and well-developed secondary feathers.
List of dinosaur genera preserved with evidence of feathers
A number of non-avian dinosaurs are now known to have been feathered. Direct evidence of feathers exists for the following genera, listed in the order in which currently accepted evidence was first published. In all examples, the evidence described consists of feather impressions, except those marked with an asterisk (*), which denotes genera known to have had feathers based on skeletal or chemical evidence, such as the presence of quill knobs.
- Avimimus* (1987)
- Sinosauropteryx (1996)
- Protarchaeopteryx (1997)
- Caudipteryx (1998)
- Rahonavis* (1998)
- Shuvuuia (1999)
- Sinornithosaurus (1999)
- Beipiaosaurus (1999)
- Microraptor (2000)
- Nomingia* (2000)
- Cryptovolans (2002)
- Scansoriopteryx (2002)
- Epidendrosaurus (2002)
- Psittacosaurus? (2002)
- Yixianosaurus (2003)
- Dilong (2004)
- Pedopenna (2005)
- Jinfengopteryx (2005)
- Sinocalliopteryx (2007)
- Velociraptor* (2007)
- Epidexipteryx (2008)
- Anchiornis (2009)
- Tianyulong? (2009)
- Note: filamentous structures in some ornithischian dinosaurs (Psittacosaurus, Tianyulong) and pterosaurs may or may not be homologous with the feathers and protofeathers of theropods.
Phylogeny and the inference of feathers in other dinosaurs
Feathered dinosaur fossil finds to date, together with cladistic analysis, suggest that many types of theropod may have had feathers, not just those that are especially similar to birds. In particular, the smaller theropod species may all have had feathers, and even the larger theropods (for instance, T. rex) may have had them in their early stages of development after hatching. Whereas these smaller animals may have benefited from the insulation of feathers, large adult theropods are unlikely to have had feathers, since inertial heat retention would likely be sufficient to manage heat. Excess internal heat may even have become a problem, had these very large creatures been feathered. Fossil feather impressions are extremely rare; therefore only a few feathered dinosaurs have been identified so far. However, through a process called phylogenetic bracketing, scientists can infer the presence of feathers on poorly preserved specimens. All fossil feather specimens have been found to show certain similarities. Due to these similarities, and through developmental research, almost all scientists agree that feathers could only have evolved once in dinosaurs. Feathers would then have been passed down to all later, more derived species (although it is possible that some lineages lost feathers secondarily). If a dinosaur falls at a point on an evolutionary tree within the known feather-bearing lineages, scientists assume it too had feathers, unless conflicting evidence is found. This technique can also be used to infer the type of feathers a species may have had, since the developmental history of feathers is now reasonably well-known.
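The bracketing inference just described is simple enough to express as a short program. The following is a minimal sketch only, assuming the single-origin view of feathers summarized in this section; the toy tree, the taxon sample, and all function names are illustrative choices made for this example, not a published phylogeny or dataset.

```python
# Minimal sketch of single-origin phylogenetic bracketing.
# The tree topology, taxon sample, and names below are illustrative assumptions,
# not a published phylogeny or dataset.
from typing import Dict, List, Optional

# parent clade -> child clades/genera (greatly simplified, hypothetical)
TREE: Dict[str, List[str]] = {
    "Coelurosauria": ["Tyrannosauroidea", "Maniraptora"],
    "Tyrannosauroidea": ["Dilong", "Tyrannosaurus"],
    "Maniraptora": ["Oviraptorosauria", "Paraves"],
    "Oviraptorosauria": ["Caudipteryx"],
    "Paraves": ["Velociraptor", "Archaeopteryx"],
}

# taxa with direct fossil evidence of feathers (a small subset, for illustration only)
OBSERVED_FEATHERED = {"Dilong", "Caudipteryx", "Velociraptor", "Archaeopteryx"}


def parent_of(taxon: str) -> Optional[str]:
    """Return the immediate parent clade of a taxon, or None at the root."""
    return next((p for p, kids in TREE.items() if taxon in kids), None)


def descendants(clade: str) -> List[str]:
    """All taxa nested inside a clade."""
    out: List[str] = []
    for child in TREE.get(clade, []):
        out.append(child)
        out.extend(descendants(child))
    return out


def path_to_root(taxon: str) -> List[str]:
    """The taxon followed by its successively more inclusive clades."""
    path = [taxon]
    while (p := parent_of(path[-1])) is not None:
        path.append(p)
    return path


def feather_origin() -> str:
    """Most recent common ancestor of every taxon with direct feather evidence:
    under the single-origin assumption, feathers arose at (or before) this node."""
    paths = [path_to_root(t) for t in sorted(OBSERVED_FEATHERED)]
    shared = set(paths[0]).intersection(*(set(p) for p in paths[1:]))
    return next(node for node in paths[0] if node in shared)


def inferred_feathered(taxon: str) -> bool:
    """A taxon nested within the feather-bearing lineage is inferred to have had
    feathers in some form, barring conflicting evidence such as secondary loss."""
    origin = feather_origin()
    return taxon == origin or taxon in descendants(origin)


if __name__ == "__main__":
    # Tyrannosaurus preserves no feather impressions in this toy dataset, but it
    # falls inside the bracketed lineage, so feathers are predicted for it.
    print(feather_origin())                    # Coelurosauria
    print(inferred_feathered("Tyrannosaurus"))  # True
```

A real analysis would, of course, operate on a full published cladogram and would also have to account for secondary loss and for different feather types; the sketch captures only the basic logic of inferring feathers for a taxon, such as Tyrannosaurus, that nests within known feather-bearing lineages.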
Nearly all paleontologists regard birds as coelurosaurian theropod dinosaurs. Within Coelurosauria, multiple cladistic analyses have found support for a clade named Maniraptora, consisting of therizinosauroids, oviraptorosaurs, troodontids, dromaeosaurids, and birds. Of these, dromaeosaurids and troodontids are usually united in the clade Deinonychosauria, which is a sister group to birds (together forming the node-clade Eumaniraptora) within the stem-clade Paraves. Other studies have proposed alternative phylogenies in which certain groups of dinosaurs that are usually considered non-avian are suggested to have evolved from avian ancestors. For example, a 2002 analysis found oviraptorosaurs to be basal avians. Alvarezsaurids, known from Asia and the Americas, have been variously classified as basal maniraptorans, paravians, the sister taxon of ornithomimosaurs, as well as specialized early birds. The genus Rahonavis, originally described as an early bird, has been identified as a non-avian dromaeosaurid in several studies. Dromaeosaurids and troodontids themselves have also been suggested to lie within Aves rather than just outside it. The scientists who described the (apparently unfeathered) Juravenator performed a genealogical study of coelurosaurs, including the distribution of various feather types. Based on the placement of feathered species in relation to those that have not been found with any type of skin impressions, they were able to infer the presence of feathers in certain dinosaur groups. The following simplified cladogram follows these results, and shows the likely distribution of plumaceous (downy) and pennaceous (vaned) feathers among theropods. Note that the authors inferred pennaceous feathers for Velociraptor based on phylogenetic bracketing, a prediction later confirmed by fossil evidence.
- Origin of birds
- Evolution of birds
- Origin of avian flight
- Birds Came First
- Alan Feduccia
- George Olshevsky
- ^ All known dromaeosaurs have pennaceous feathers on the arms and tail, and a substantially thick coat of feathers on the body, especially the neck and breast. Clear fossil evidence of modern avian-style feathers exists for several related dromaeosaurids, including Velociraptor and Microraptor, though no direct evidence is yet known for Deinonychus itself.
- ^ On page 155 of Dinosaurs of the Air by Gregory Paul, there is an accumulated total of 305 potential synapomorphies with birds for all non-avian theropod nodes, and 347 for all non-avian dinosauromorph nodes. Shared features between birds and dinosaurs include:
- A pubis (one of the three bones making up the vertebrate pelvis) shifted from an anterior to a more posterior orientation (see Saurischia), and bearing a small distal "boot".
- Elongated arms and forelimbs and clawed manus (hands).
- Large orbits (eye openings in the skull).
- Flexible wrist with a semi-lunate carpal (wrist bone).
- Double-condyled dorsal joint on the quadrate bone.
- Ossified uncinate processes of the ribs.
- Most of the sternum is ossified.
- Broad sternal plates.
- Ossified sternal ribs.
- Brain enlarged above reptilian maximum.
- Overlapping field of vision.
- Olfaction sense reduced.
- An arm/leg length ratio between 0.5 and 1.0.
- Lateral exposure of the glenoid in the humeral joint.
- Hollow, thin-walled bones.
- 3-fingered opposable grasping manus (hand), 4-toed pes (foot); but supported by 3 main toes.
- Fused carpometacarpus.
- Metacarpal III bowed posterolaterally.
- Flexibility of digit III reduced.
- Digit III tightly appressed to digit II.
- Well-developed arm-folding mechanism.
- Reduced, posteriorly stiffened tail.
- Distal tail stiffened.
- Tail base hyperflexible, especially dorsally.
- Elongated metatarsals (bones of the feet between the ankle and toes).
- S-shaped curved neck.
- Erect, digitigrade (ankle held well off the ground) stance with feet positioned directly below the body.
- Similar eggshell microstructure.
- Teeth with a constriction between the root and the crown.
- Functional basis for wing power stroke present in arms and pectoral girdle (during motion, the arms were swung down and forward, then up and backwards, describing a "figure-eight" when viewed laterally).
- Expanded pneumatic sinuses in the skull.
- Five or more vertebrae incorporated into the sacrum (hip).
- Posterior caudal vertebrae fused to form the pygostyle.
- Large, strongly built, and straplike scapula (shoulder blade).
- Scapula blades are horizontal.
- Scapula tip is pointed.
- Acromion process is developed, similar to that in Archaeopteryx.
- Retroverted and long coracoids.
- Strongly flexed and subvertical coracoids relative to the scapula.
- Clavicles (collarbone) fused to form a furcula (wishbone).
- U-shaped furcula.
- Hingelike ankle joint, with movement mostly restricted to the fore-aft plane.
- Secondary bony palate (nostrils open posteriorly in throat).
- Pennaceous feathers in some taxa. Proto-feathers, filaments, and integumentary structures in others.
- Well-developed, symmetrical arm contour feathers.
- Source 1: Are Birds Really Dinosaurs? Dinobuzz, Current Topics Concerning Dinosaurs. Created 9/27/05. Accessed 7/20/09. Copyright 1994-2009 by the Regents of the University of California.
- Source 2: Kurochkin, E.N. (2006). Parallel Evolution of Theropod Dinosaurs and Birds. Entomological Review 86 (1), pp. S45-S58. doi:10.1134/S0013873806100046
- Source 3: Paul, Gregory S. (2002). Chapter 11. Dinosaurs of the Air: The Evolution and Loss of Flight in Dinosaurs and Birds. Baltimore: Johns Hopkins University Press. pp. 225-227: Table 11.1. ISBN 978-0801867637.
- ^ Xu Xing suggested that the integumentary features present in some pterosaurs and the ornithischian dinosaur Psittacosaurus may be evidence of this first stage.
- ^ Examples in the fossil record may include Sinosauropteryx, Beipiaosaurus, Dilong, and Sinornithosaurus.
- ^ According to Xu Xing, stage III is supported by the fact that feather follicles developed after barb ridges, along with the follicle having a unique role in the formation of the rachis.
- ^ See Caudipteryx, Protarchaeopteryx, and Sinornithosaurus. Xu Xing also noted that while the pennaceous feathers of Microraptor differ from those of Caudipteryx and Protarchaeopteryx due to the aerodynamic functions of its feathers, they still belong together in the same stage because they both "evolved form-stiffening barbules" on their feathers.
- ^ Remiges are the large feathers of the forelimbs (singular remex). The large feathers that grow from the tail are termed rectrices (singular rectrix).
- ^ Darwin, Charles R. (1859). On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. London: John Murray. 502 pp. http://darwin-online.org.uk/content/frameset?itemID=F373&viewtype=side&pageseq=16.
- ^ Huxley, Thomas H. (1870). "Further evidence of the affinity between the dinosaurian reptiles and birds". Quarterly Journal of the Geological Society of London 26: 12–31.
- ^ Huxley, Thomas H.
(1868). "On the animals which are most nearly intermediate between birds and reptiles". Annals of the Magazine of Natural History 4 (2): 66–75. - ^ Foster, Michael; Lankester, E. Ray 1898–1903. The scientific memoirs of Thomas Henry Huxley. 4 vols and supplement. London: Macmillan. - ^ Owen, R. (1863): On the Archaeopteryx of von Meyer, with a description of the fossil remains of a long-tailed species, from the Lithographic Slate of Solenhofen. - Philosophical Transactions of the Royal Society of London, 1863: 33-47. London. - ^ a b Padian K. and Chiappe LM (1998). The origin and early evolution of birds. Biological Reviews 73: 1-42. - ^ a b c d e f g h i Xu Xing; Zhou Zhonghe; Wang Xiaolin; Kuang Xuewen; Zhang Fucheng; & Du Xiangke (2003). "Four-winged dinosaurs from China". Nature 421 (6921): 335–340. doi:10.1038/nature01342. - ^ a b c d Zhang, F., Zhou, Z., Xu, X. & Wang, X. (2002). "A juvenile coelurosaurian theropod from China indicates arboreal habits." Naturwissenschaften, 89(9): 394-398. doi:10.1007 /s00114-002-0353-8. - ^ Fox, W. (1866). Another new Wealden reptile. Athenaeum 2014, 740. - ^ Naish, D. (2002). The historical taxonomy of the Lower Cretaceous theropods (Dinosauria) Calamospondylus and Aristosuchus from the Isle of Wight. Proceedings of the Geologists' Association 113, 153-163. - ^ Swinton, W. E. (1936a). Notes on the osteology of Hypsilophodon, and on the family Hypsilophodontidae. Proceedings of the Zoological Society of London 1936, 555-578. - ^ Swinton, W. E. (1936b). The dinosaurs of the Isle of Wight. Proceedings of the Geologists' Association 47, 204-220. - ^ Galton, P. M. (1971a). Hypsilophodon, the cursorial non-arboreal dinosaur. Nature 231, 159-161. - ^ Galton, P. M. (1971b). The mode of life of Hypsilophodon, the supposedly arboreal ornithopod dinosaur. Lethaia 4, 453-465. - ^ a b Paul, G.S. (1988). Predatory Dinosaurs of the World. New York: Simon & Schuster. - ^ a b Olshevsky, G. (2001a). The birds came first: a scenario for avian origins and early evolution, 1. Dino Press 4, 109-117. - ^ a b Olshevsky, G. (2001b). The birds came first: a scenario for avian origins and early evolution. Dino Press 5, 106-112. - ^ a b Ostrom, John H. (1969). "Osteology of Deinonychus antirrhopus, an unusual theropod from the Lower Cretaceous of Montana". Bulletin of the Peabody Museum of Natural History 30: 1–165. - ^ Paul, Gregory S. (2000). "A Quick History of Dinosaur Art". in Paul, Gregory S. (ed.). The Scientific American Book of Dinosaurs. New York: St. Martin's Press. pp. 107–112. ISBN 0-312-26226-4. - ^ El Pais: El 'escándalo archaeoraptor' José Luis Sanz y Francisco Ortega 16/02/2000 Online, Spanish - ^ a b Swisher Iii, C.C.; Wang, Y.Q.; Wang, X.L.; Xu, X.; Wang, Y. (2001), "Cretaceous age for the feathered dinosaurs of Liaoning, China", Rise of the Dragon: Readings from Nature on the Chinese Fossil Record: 167, http://books.google.com/books?hl=en, retrieved on 2009-09-02 - ^ a b Swisher, C.C.; Xiaolin, W.; Zhonghe, Z.; Yuanqing, W.; Fan, J.I.N.; Jiangyong, Z.; Xing, X.U.; Fucheng, Z.; et al. (2002), "Further support for a Cretaceous age for the feathered-dinosaur beds of Liaoning, China: %u2026", Chinese Science Bulletin 47 (2): 136–139, http://www.springerlink.com/index/W7724740N2320M80.pdf, retrieved on 2009-09-02 - ^ Sereno, Paul C.; & Rao Chenggang (1992). "Early evolution of avian flight and perching: new evidence from the Lower Cretaceous of China". Science 255 (5046): 845–848. doi:10.1126/science.255.5046.845. PMID 17756432. 
- ^ Hou Lian-Hai; Zhou Zhonghe; Martin, Larry D.; & Feduccia, Alan (1995). "A beaked bird from the Jurassic of China". Nature 377 (6550): 616–618. doi:10.1038/377616a0. - ^ Novas, F. E., Puerta, P. F. (1997). New evidence concerning avian origins from the Late Cretaceous of Patagonia. Nature 387:390-392. - ^ Norell, M. A., Clark, J. M., Makovivky, P. J. (2001). Phylogenetic relationships among coelurosaurian dinosaurs. In: Gauthier, J. A., Gall, L. F., eds. New Perspectives on the Origin and Early Evolution of Birds. Yale University Press, New Haven, pp. 49-67. - ^ Gatesy, S. M., Dial, K. P. (1996). Locomotor modules and the evolution of avian flight. Evolution 50:331-340. - ^ Gatesy, S. M. (2001). The evolutionary history of the theropod caudal locomotor module. In: Gauthier, J. A., Gall, L. F., eds. New Perspectives on the Origin and Early Evolution of Birds. Yale University Press, New Haven, pp. 333-350. - ^ Xu, X. (2002). Deinonychosaurian fossils from the Jehol Group of western Liaoning and the coelurosaurian evolution (Dissertation). Chinese Academy of Sciences, Beijing. - ^ a b c d e f g h i j k l m n o p q Xu Xing (2006). Feathered dinosaurs from China and the evolution of major avian characters. Integrative Zoology 1:4-11. doi:10.1111/j.1749-4877.2006.00004.x - ^ a b Ji Qiang; & Ji Shu-an (1996). "On the discovery of the earliest bird fossil in China and the origin of birds". Chinese Geology 233: 30–33. - ^ a b c d e f g h i Chen Pei-ji; Dong Zhiming; & Zhen Shuo-nan. (1998). "An exceptionally preserved theropod dinosaur from the Yixian Formation of China". Nature 391 (6663): 147–152. doi:10.1038/34356. - ^ a b Lingham-Soliar, Theagarten; Feduccia, Alan; & Wang Xiaolin. (2007). "A new Chinese specimen indicates that ‘protofeathers’ in the Early Cretaceous theropod dinosaur Sinosauropteryx are degraded collagen fibres". Proceedings of the Royal Society B: Biological Sciences 274 (1620): 1823–1829. doi:10.1098/rspb.2007.0352. - ^ a b c d e f Ji Qiang; Currie, Philip J.; Norell, Mark A.; & Ji Shu-an. (1998). "Two feathered dinosaurs from northeastern China". Nature 393 (6687): 753–761. doi:10.1038/31635. - ^ Sloan, Christopher P. (1999). "Feathers for T. rex?". National Geographic 196 (5): 98–107. - ^ Monastersky, Richard (2000). "All mixed up over birds and dinosaurs". Science News 157 (3): 38. doi:10.2307/4012298. http://www.sciencenews.org/view/generic/id/94/title/All_mixed_up_over_birds_and_dinosaurs. - ^ a b c d e Xu Xing; Tang Zhi-lu; & Wang Xiaolin. (1999). "A therizinosaurid dinosaur with integumentary structures from China". Nature 399 (6734): 350–354. doi:10.1038/20670. - ^ a b c d e f g Xu, X., Norell, M. A., Kuang, X., Wang, X., Zhao, Q., Jia, C. (2004). "Basal tyrannosauroids from China and evidence for protofeathers in tyrannosauroids". Nature 431: 680–684. doi:10.1038/nature02855. - ^ Zhou Zhonghe; & Zhang Fucheng (2002). "A long-tailed, seed-eating bird from the Early Cretaceous of China". Nature 418 (6896): 405–409. doi:10.1038/nature00930. - ^ Wellnhofer, P. (1988). Ein neuer Exemplar von Archaeopteryx. Archaeopteryx 6:1–30. - ^ a b c Zhou Zhonghe; Barrett, Paul M.; & Hilton, Jason. (2003). "An exceptionally preserved Lower Cretaceous ecosystem". Nature 421 (6925): 807–814. doi:10.1038/nature01420. - ^ a b c d Feduccia, A., Lingham-Soliar, T. & Hinchliffe, J. R. (2005). Do feathered dinosaurs exist? Testing the hypothesis on neontological and paleontological evidence. Journal of Morphology 266, 125-166. 
doi:10.1002/jmor.10382 - ^ a b c Czerkas, S.A., Zhang, D., Li, J., and Li, Y. (2002). "Flying Dromaeosaurs". in Czerkas, S.J.. Feathered Dinosaurs and the Origin of Flight: The Dinosaur Museum Journal 1. Blanding: The Dinosaur Museum. pp. 16–26. - ^ a b Norell, Mark, Ji, Qiang, Gao, Keqin, Yuan, Chongxi, Zhao, Yibin, Wang, Lixia. (2002). "'Modern' feathers on a non-avian dinosaur". Nature, 416: pp. 36. 7 March 2002. - ^ a b c d e f g h Paul, Gregory S. (2002). Dinosaurs of the Air: The Evolution and Loss of Flight in Dinosaurs and Birds. Baltimore: Johns Hopkins University Press. ISBN 978-0801867637. - ^ Heilmann, G. (1926): The Origin of Birds. Witherby, London. ISBN 0-486-22784-7 (1972 Dover reprint) - ^ John Ostrom (1975). The origin of birds. Annual Review of Earth and Planetary Sciences 3, pp. 55. - ^ Bryant, H.N. & Russell, A.P. (1993) The occurrence of clavicles within Dinosauria: implications for the homology of the avian furcula and the utility of negative evidence. Journal of Vertebrate Paleontology, 13(2):171-184. - ^ Chure, Daniel J.; & Madsen, James H. (1996). "On the presence of furculae in some non-maniraptoran theropods". Journal of Vertebrate Paleontology 16 (3): 573–577. - ^ Norell, Mark A.; & Makovicky, Peter J. (1999). "Important features of the dromaeosaurid skeleton II: Information from newly collected specimens of Velociraptor mongoliensis". American Museum Novitates 3282: 1–44. http://hdl.handle.net/2246/3025. - ^ Colbert, E. H. & Morales, M. (1991) Evolution of the vertebrates: a history of the backboned animals through time. 4th ed. Wiley-Liss, New York. 470 p. - ^ Barsbold, R. et al. (1990) Oviraptorosauria. In The Dinosauria, Weishampel, Dodson &p; Osmolska (eds) pp 249-258. - ^ Included as a cladistic definer, e.g. (Columbia University) Master Cladograms or mentioned even in the broadest context, such as Paul C. Sereno, "The origin and evolution of dinosaurs" Annual Review of Earth and Planetary Sciences 25 pp 435-489. - ^ Lipkin, C., Sereno, P.C., and Horner, J.R. (November 2007). "THE FURCULA IN SUCHOMIMUS TENERENSIS AND TYRANNOSAURUS REX (DINOSAURIA: THEROPODA: TETANURAE)". Journal of Paleontology 81 (6): 1523–1527. doi:10.1666/06-024.1. http://jpaleontol.geoscienceworld.org/cgi/content/extract/81/6/1523. - full text currently online at "The Furcula in Suchomimus Tenerensis and Tyrannosaurus rex". http://www.redorbit.com/news/health/1139122/the_furcula_in_suchomimus_tenerensis_and_tyrannosaurus_rex_dinosauria_theropoda/index.html. This lists a large number of theropods in which furculae have been found, as well as describing those of Suchomimus Tenerensis and Tyrannosaurus rex. - ^ Carrano, M,R., Hutchinson, J.R., and Sampson, S.D. (December 2005). "New information on Segisaurus halli, a small theropod dinosaur from the Early Jurassic of Arizona". Journal of Vertebrate Paleontology 25 (4): 835–849. doi:10.1671/0272-4634(2005)025[0835:NIOSHA]2.0.CO;2. http://www.rvc.ac.uk/AboutUs/Staff/jhutchinson/documents/JH18.pdf. - ^ Yates, Adam M.; and Vasconcelos, Cecilio C. (2005). "Furcula-like clavicles in the prosauropod dinosaur Massospondylus". Journal of Vertebrate Paleontology 25 (2): 466–468. doi:10.1671/0272-4634(2005)025[0466:FCITPD]2.0.CO;2. - ^ Downs, A. (2000). Coelophysis bauri and Syntarsus rhodesiensis compared, with comments on the preparation and preservation of fossils from the Ghost Ranch Coelophysis Quarry. New Mexico Museum of Natural History and Science Bulletin, vol. 17, pp. 33–37. 
- ^ The furcula of Coelophysis bauri, a Late Triassic (Apachean) dinosaur (Theropoda: Ceratosauria) from New Mexico. 2006. By Larry Rinehart, Spencer Lucas, and Adrian Hunt - ^ a b Ronald S. Tykoski, Catherine A. Forster, Timothy Rowe, Scott D. Sampson, and Darlington Munyikwad. (2002). A furcula in the coelophysid theropod Syntarsus. Journal of Vertebrate Paleontology 22(3):728-733. - ^ Larry F. Rinehart, Spencer G. Lucas, Adrian P. Hunt. (2007). Furculae in the Late Triassic theropod dinosaur Coelophysis bauri. Paläontologische Zeitschrift 81: 2 - ^ a b Sereno, P.C.; Martinez, R.N.; Wilson, J.A.; Varricchio, D.J.; Alcober, O.A.; and Larsson, H.C.E. (September 2008). "Evidence for Avian Intrathoracic Air Sacs in a New Predatory Dinosaur from Argentina". PLoS ONE 3 (9): e3303. doi:10.1371/journal.pone.0003303. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0003303. Retrieved on 2008-10-27. - ^ O'Connor, P.M. & Claessens, L.P.A.M. (2005). "Basic avian pulmonary design and flow-through ventilation in non-avian theropod dinosaurs". Nature 436: 253–256. doi:10.1038/nature03716. - ^ Meat-Eating Dinosaur from Argentina Had Bird-Like Breathing System Newswise, Retrieved on September 29, 2008. - ^ Fisher, P. E., Russell, D. A., Stoskopf, M. K., Barrick, R. E., Hammer, M. & Kuzmitz, A. A. (2000). Cardiovascular evidence for an intermediate or higher metabolic rate in an ornithischian dinosaur. Science 288, 503–505. - ^ Hillenius, W. J. & Ruben, J. A. (2004). The evolution of endothermy in terrestrial vertebrates: Who? when? why? Physiological and Biochemical Zoology 77, 1019–1042. - ^ Dinosaur with a Heart of Stone. T. Rowe, E. F. McBride, P. C. Sereno, D. A. Russell, P. E. Fisher, R. E. Barrick, and M. K. Stoskopf (2001) Science 291, 783 - ^ a b Xu, X. and Norell, M.A. (2004). A new troodontid dinosaur from China with avian-like sleeping posture. Nature 431:838-841.See commentary on the article. - ^ Schweitzer, M.H.; Wittmeyer, J.L.; and Horner, J.R. (2005). "Gender-specific reproductive tissue in ratites and Tyrannosaurus rex". Science 308: 1456–1460. doi:10.1126/science.1112158. PMID 15933198. http://www.sciencemag.org/cgi/content/abstract/308/5727/1456. - ^ Lee, Andrew H.; and Werning, Sarah (2008). "Sexual maturity in growing dinosaurs does not fit reptilian growth models". Proceedings of the National Academy of Sciences 105 (2): 582–587. doi:10.1073/pnas.0708903105. PMID 18195356. http://www.pnas.org/cgi/content/abstract/105/2/582. - ^ Chinsamy, A., Hillenius, W.J. 2004). Physiology of nonavian dinosaurs. In:Weishampel, D.B., Dodson, P., Osmolska, H., eds. The Dinosauria. University of California Press, Berkely. pp. 643-65. - ^ Norell, M.A., Clark, J.M., Chiappe, L.M., and Dashzeveg, D. (1995). "A nesting dinosaur." Nature 378:774-776. - ^ a b Clark, J.M., Norell, M.A., & Chiappe, L.M. (1999). "An oviraptorid skeleton from the Late Cretaceous of Ukhaa Tolgod, Mongolia, preserved in an avianlike brooding position over an oviraptorid nest." American Museum Novitates, 3265: 36 pp., 15 figs.; (American Museum of Natural History) New York. (5.4.1999). - ^ Norell, M. A., Clark, J. M., Dashzeveg, D., Barsbold, T., Chiappe, L. M., Davidson, A. R., McKenna, M. C. and Novacek, M. J. (November 1994). "A theropod dinosaur embryo and the affinities of the Flaming Cliffs Dinosaur eggs" (abstract page). Science 266 (5186): 779–782. doi:10.1126/science.266.5186.779. PMID 17730398. http://www.sciencemag.org/cgi/content/abstract/266/5186/779. 
- ^ Oviraptor nesting Oviraptor nests or Protoceratops? - ^ Gregory Paul (1994). Thermal environments of dinosaur nestlings: Implications for endothermy and insulation. In: Dinosaur Eggs and Babies. - ^ Hombergerm D.G. (2002). The aerodynamically streamlined body shape of birds: Implications for the evolution of birds, feathers, and avian flight. In: Zhou, Z., Zhang, F., eds. Proceedings of the 5th symposium of the Society of Avian Paleontology and Evolution, Beijing, 1-4 June 2000. Beijing, China: Science Press. p. 227-252. - ^ a b c Ji, Q., and Ji, S. (1997). "A Chinese archaeopterygian, Protarchaeopteryx gen. nov." Geological Science and Technology (Di Zhi Ke Ji), 238: 38-41. Translated By Will Downs Bilby Research Center Northern Arizona University January, 2001 - ^ a b c d e Xu, X., Zhou, Z., and Wang, X. (2000). "The smallest known non-avian theropod dinosaur." Nature, 408 (December): 705-708. - ^ Dal Sasso, C. and Signore, M. (1998). Exceptional soft-tissue preservation in a theropod dinosaur from Italy. Nature 292:383–387. See commentary on the article - ^ Mary H. Schweitzer, Jennifer L. Wittmeyer, John R. Horner, and Jan K. Toporski (2005). Science 307 (5717) pp. 1952-1955. doi:10.1126/science.1108397 - ^ Schweitzer, M.H., Wittmeyer, J.L. and Horner, J.R. (2005). Soft-Tissue Vessels and Cellular Preservation in Tyrannosaurus rex. Science 307:1952–1955. See commentary on the article - ^ Wang, H., Yan, Z. and Jin, D. (1997). Reanalysis of published DNA sequence amplified from Cretaceous dinosaur egg fossil. Molecular Biology and Evolution. 14:589–591. See commentary on the article. - ^ Chang, B.S.W., Jönsson, K., Kazmi, M.A., Donoghue, M.J. and Sakmar, T.P. (2002). Recreating a Functional Ancestral Archosaur Visual Pigment. Molecular Biology and Evolution 19:1483–1489. See commentary on the article. - ^ Embery, et al. "Identification of proteinaceous material in the bone of the dinosaur Iguanodon." Connect Tissue Res. 2003; 44 Suppl 1:41-6. PMID: 12952172 - ^ Schweitzer, et al. (1997 Jun 10) "Heme compounds in dinosaur trabecular bone." Proc Natl Acad Sci U S A.. 94(12):6291–6. PMID: 9177210 - ^ Fucheng, Z., Zhonghe, Z., and Dyke, G. (2006). Feathers and 'feather-like' integumentary structures in Liaoning birds and dinosaurs. Geol . J. 41:395-404. - ^ a b Cheng-Ming Choung, Ping Wu, Fu-Cheng Zhang, Xing Xu, Minke Yu, Randall B. Widelitz, Ting-Xin Jiang, and Lianhai Hou (2003). Adaptation to the sky: defining the feather with integument fossils from the Mesozoic China and exprimental evidence from molecular laboratories. Journal of Experimental Zoology (MOL DEV EVOL) 298b:42-56. - ^ Bakker, R.T., Galton, P.M. (1974). Dinosaur monophyly and a new class of vertebrates. Nature 248:168-172. - ^ Sumida, SS & CA Brochu (2000). "Phylogenetic context for the origin of feathers". American Zoologist 40 (4): 486–503. doi:10.1093/icb/40.4.486. http://icb.oxfordjournals.org/cgi/content/abstract/40/4/486. - ^ a b c d Chiappe, Luis M., (2009). Downsized Dinosaurs:The Evolutionary Transition to Modern Birds. Evo Edu Outreach 2: 248-256. doi:10.1007/s12052-009-0133-4 - ^ Burgers, P., Chiappe, L.M. (1999). The wing of Archaeopteryx as a primary thrust generator. Nature 399: 60-2. doi:10.1038/19967 - ^ a b c d e f g h Prum, R. & Brush A.H. (2002). "The evolutionary origin and diversification of feathers". The Quarterly Review of Biology 77: 261–295. doi:10.1086/341993. - ^ a b c d Prum, R. H. (1999). Development and evolutionary origin of feathers. Journal of Experimental Zoology 285, 291-306. 
- ^ Griffiths, P. J. (2000). The evolution of feathers from dinosaur hair. Gaia 15, 399-403. - ^ a b c d e f g Mayr, G. Peters, S.D. Plodowski, G. Vogel, O. (2002). "Bristle-like integumentary structures at the tail of the horned dinosaur Psittacosaurus". Naturwissenschaften 89: 361–365. doi:10.1007/s00114-002-0339-6. - ^ a b c Schweitzer, Mary Higby, Watt, J.A., Avci, R., Knapp, L., Chiappe, L, Norell, Mark A., Marshall, M. (1999). "Beta-Keratin Specific Immunological reactivity in Feather-Like Structures of the Cretaceous Alvarezsaurid, Shuvuuia deserti Journal of Experimental Zoology Part B (Mol Dev Evol) 285:146-157 - ^ Schweitzer, M. H. (2001). Evolutionary implications of possible protofeather structures associated with a specimen of Shuvuuia deserti. In Gauthier, J. & Gall, L. F. (eds) New prespectives on the origin and early evolution of birds: proceedings of the international symposium in honor of John H. Ostrom. Peabody Museum of Natural History, Yale University (New Haven), pp. 181-192. - ^ Christiansen, P. & Bonde, N. (2004). Body plumage in Archaeopteryx: a review, and new evidence from the Berlin specimen. C. R. Palevol 3, 99-118. - ^ M.J. Benton, M.A. Wills, R. Hitchin. (2000). Quality of the fossil record through time. Nature 403, 534-537. doi:10.1038/35000558 - ^ Morgan, James (2008-10-22). "New feathered dinosaur discovered". BBC. http://news.bbc.co.uk/2/hi/science/nature/7684796.stm. Retrieved on 2009-07-02. - ^ a b c d e f Zhang, F., Zhou, Z., Xu, X., Wang, X., & Sullivan, C. (2008). "A bizarre Jurassic maniraptoran from China with elongate ribbon-like feathers." Available from Nature Precedings, doi:10.1038/npre.2008.2326.1 . - ^ Prum, R,. O. & Brush, A. H. (2003). Which came first, the feather or the bird? Scientific American 286 (3), 84-93. - ^ Epidexipteryx: bizarre little strap-feathered maniraptoran ScienceBlogs Tetrapod Zoology article by Darren Naish. October 23, 2008 - ^ Gishlick, A. D. (2001). The function of the manus and forelimb of Deinonychus antirrhopus and its importance for the origin of avian flight. In Gauthier, J. & Gall, L. F. (eds) New Perspectives on the Origin and Early Evolution of Birds: Proceedings of the International Symposium in Honor of John H. Ostrom. Peabody Museum of Natural History, Yale University (New Haven), pp. 301-318. - ^ Senter, P. (2006). Comparison of forelimb function between Deinonychus and Bambiraptor (Theropoda: Dromaeosauridae). Journal of Vertebrate Paleontology 26, 897-906. - ^ JA Long, P Schouten. (2008). Feathered Dinosaurs: The Origin of Birds - ^ a b Yalden, D. W. (1985). Forelimb function in Archaeopteryx. In Hecht, M. K., Ostrom, J. H., Viohl, G. & Wellnhofer, P. (eds) The Beginnings of Birds - Proceedings of the International Archaeopteryx Conference, Eichstatt 1984, pp. 91-97. - ^ Chen, P.-J., Dong, Z.-M. & Zhen, S.-N. (1998). An exceptionally well-preserved theropod dinosaur from the Yixian Formation of China. Nature 391, 147-152. - ^ a b c Currie, Philip J.; Pei-ji Chen. (2001). Anatomy of Sinosauropteryx prima from Liaoning, northeastern China. Canadian Journal of Earth Sciences 38, 1705-1727. doi:10.1139/cjes-38-12-1705 - ^ Bohlin, B. 1947. The wing of Archaeornithes. Zoologiska Bidrag 25, 328-334. - ^ Rietschel, S. (1985). Feathers and wings of Archaeopteryx, and the question of her flight ability. In Hecht, M. K., Ostrom, J. H., Viohl, G. & Wellnhofer, P. (eds) The Beginnings of Birds - Proceedings of the International Archaeopteryx Conference, Eichstatt 1984, pp. 251-265. - ^ a b Griffiths, P. J. 1993. 
The claws and digits of Archaeopteryx lithographica. Geobios 16, 101-106. - ^ Stephan, B. 1994. The orientation of digital claws in birds. Journal fur Ornithologie 135, 1-16. - ^ a b c Chiappe, L.M. and Witmer, L.M. (2002). Mesozoic Birds: Above the Heads of Dinosaurs. Berkeley: University of California Press, ISBN 0520200942 - ^ Martin, L. D. & Lim, J.-D. (2002). Soft body impression of the hand in Archaeopteryx. Current Science 89, 1089-1090. - ^ a b c d Feduccia, A. (1999). The Origin and Evolution of Birds. 420 pp. Yale University Press, New Haven. ISBN 0300078617. - ^ a b Dyke, G.J., and Norell, M.A. (2005). "Caudipteryx as a non-avialan theropod rather than a flightless bird." Acta Palaeontologica Polonica, 50(1): 101–116. PDF fulltext - ^ a b c Witmer, L.M. (2002). “The Debate on Avian Ancestry; Phylogeny, Function and Fossils”, Mesozoic Birds: Above the Heads of Dinosaurs : 3–30. ISBN 0-520-20094-2 - ^ Jones T.D., Ruben J.A., Martin L.D., Kurochkin E.N., Feduccia A., Maderson P.F.A., Hillenius W.J., Geist N.R., Alifanov V. (2000). Nonavian feathers in a Late Triassic archosaur. Science 288: 2202-2205. - ^ Martin, Larry D. (2006). "A basal archosaurian origin for birds". Acta Zoologica Sinica 50 (6): 977–990. - ^ Burke, Ann C.; & Feduccia, Alan. (1997). "Developmental patterns and the identification of homologies in the avian hand". Science 278 (5338): 666–668. doi:10.1126/science.278.5338.666. - ^ a b Kevin Padian (2000). Dinosaurs and Birds — an Update. Reports of the National Center for Science Education. 20 (5):28–31. - ^ Ostrom J.H. (1973). The ancestry of birds. Nature 242: 136. - ^ a b Padian, Kevin. (2004). "Basal Avialae". in Weishampel, David B.; Dodson, Peter; & Osmólska, Halszka (eds.). The Dinosauria (Second ed.). Berkeley: University of California Press. pp. 210–231. ISBN 0-520-24209-2. - ^ Olshevsky, G. (1991). A Revision of the Parainfraclass Archosauria Cope, 1869, Excluding the Advanced Crocodylia. Publications Requiring Research, San Diego. - ^ Olshevsky, G. (1994). The birds first? A theory to fit the facts. Omni 16 (9), 34-86. - ^ a b Chatterjee, S. (1999): Protoavis and the early evolution of birds. Palaeontographica A 254: 1-100. - ^ Chatterjee, S. (1995): The Triassic bird Protoavis. Archaeopteryx 13: 15-31. - ^ Chatterjee, S. (1998): The avian status of Protoavis. Archaeopteryx 16: 99-122. - ^ Chatterjee, S. (1991). "Cranial anatomy and relationships of a new Triassic bird from Texas." Philosophical Transactions of the Royal Society B: Biological Sciences, 332: 277-342. HTML abstract - ^ Paul, G.S. (2002). Dinosaurs of the Air: The Evolution and Loss of Flight in Dinosaurs and Birds. Johns Hopkins University Press, Baltimore. ISBN 0-8018-6763-0 - ^ Witmer, L. (2002). "The debate on avian ancestry: phylogeny, function, and fossils." Pp. 3-30 in: Chiappe, L.M. and Witmer, L.M. (eds), Mesozoic birds: Above the heads of dinosaurs. University of California Press, Berkeley, Calif., USA. ISBN 0-520-20094-2 - ^ Nesbitt, Sterling J.; Irmis, Randall B. & Parker, William G. (2007): A critical re-evaluation of the Late Triassic dinosaur taxa of North America. Journal of Systematic Palaeontology 5(2): 209-243. - ^ Ostrom, J. (1987): Protoavis, a Triassic bird? Archaeopteryx 5: 113-114. - ^ Ostrom, J.H. (1991): The bird in the bush. Nature 353(6341): 212. - ^ Ostrom, J.H. (1996): The questionable validity of Protoavis. Archaeopteryx 14: 39-42. - ^ Chatterjee, S. (1987). "Skull of Protoavis and Early Evolution of Birds." 
Journal of Vertebrate Paleontology, 7(3)(Suppl.): 14A. - ^ a b EvoWiki (2004). Chatterjee's Chimera: A Cold Look at the Protoavis Controversy. Version of 2007-JAN-22. Retrieved 2009-FEB-04. - ^ Chatterjee, S. (1997). The Rise of Birds: 225 Million Years of Evolution. Johns Hopkins University Press, Baltimore. ISBN 0-8018-5615-9 - ^ Feduccia, Alan (1994) "The Great Dinosaur Debate" Living Bird. 13:29-33. - ^ Why Birds Aren't Dinosaurs. Explore:Thought and Discovery at the University of Kansas. Accessed 8/05/09. - ^ Jensen, James A. & Padian, Kevin. (1989) "Small pterosaurs and dinosaurs from the Uncompahgre fauna (Brushy Basin member, Morrison Formation: ?Tithonian), Late Jurassic, western Colorado" Journal of Paleontology Vol. 63 no. 3 pg. 364-373. - ^ Lubbe, T. van der, Richter, U., and Knötschke, N. 2009. Velociraptorine dromaeosaurid teeth from the Kimmeridgian (Late Jurassic) of Germany. Acta Palaeontologica Polonica 54 (3): 401–408. DOI: 10.4202/app.2008.0007. - ^ a b c d Hartman, S., Lovelace, D., and Wahl, W., (2005). "Phylogenetic assessment of a maniraptoran from the Morrison Formation." Journal of Vertebrate Paleontology, 25, Supplement to No. 3, pp 67A-68A http://www.bhbfonline.org/AboutUs/Lori.pdf - ^ Brochu, Christopher A. Norell, Mark A. (2001) "Time and trees: A quantitative assessment of temporal congruence in the bird origins debate" pp.511-535 in "New Perspectives on the Origin and Early Evolution of Birds" Gauthier&Gall, ed. Yale Peabody Museum. New Haven, Conn. USA. - ^ a b Ruben, J., Jones, T. D., Geist, N. R. & Hillenius, W. J. (1997). Lung structure and ventilation in theropod dinosaurs and early birds. Science 278, 1267-1270. - ^ a b Ruben, J., Dal Sasso, C., Geist, N. R., Hillenius, W. J., Jones, T. D. & Signore, M. (1999). Pulmonary function and metabolic physiology of theropod dinosaurs. Science 283, 514-516. - ^ Quick, D. E. & Ruben, J. A. (2009). Cardio-pulmonary anatomy in theropod dinosaurs: implications from extant archosaurs. Journal of Morphology doi: 10.1002/jmor.10752 - ^ gazettetimes.com article - ^ Discovery Raises New Doubts About Dinosaur-bird Links ScienceDaily article - ^ Ruben, J., Hillenius, W., Geist, N. R., Leitch, A., Jones, T. D., Currie, P. J., Horner, J. R. & Espe, G. (1996). The metabolic status of some Late Cretaceous dinosaurs. Science 273, 1204-1207. - ^ Theagarten Lingham-Soliar (2003). The dinosaurian origin of feathers: perspectives from dolphin (Cetacea) collagen fibers. Naturwissenschaften 90 (12): 563-567. - ^ Peter Wellnhofer (2004) "Feathered Dragons: Studies on the Transition from Dinosaurs to Birds. Chapter 13. The Plumage of Archaeopteryx:Feathers of a Dinosaur?" Currie, Koppelhaus, Shugar, Wright. Indiana University Press. Bloomington, IN. USA. pp. 282-300. - ^ Lingham-Soliar, T et al. (2007) Proc. R. Soc. Lond. B doi:10.1098/rspb.2007.0352. - ^ Access : Bald dino casts doubt on feather theory : Nature News - ^ "Transcript: The Dinosaur that Fooled the World". BBC. http://www.bbc.co.uk/science/horizon/2001/dinofooltrans.shtml. Retrieved on 2006-12-22. - ^ Mayell, Hillary (2002-11-20). "Dino Hoax Was Mainly Made of Ancient Bird, Study Says". National Geographic. http://news.nationalgeographic.com/news/2002/11/1120_021120_raptor.html. Retrieved on 2008-06-13. - ^ Zhou, Zhonghe, Clarke, Julia A., Zhang, Fucheng. "Archaeoraptor's better half." Nature Vol. 420. 21 November 2002. pp. 285. - ^ a b Makovicky, Peter J.; Apesteguía, Sebastián; & Agnolín, Federico L. (2005). "The earliest dromaeosaurid theropod from South America". 
B1. Motion on a prescribed trajectory

Inclined plane. Motion on an inclined plane. The inclined plane as a machine

When a mass is being raised, the direction of the motion (vertically upwards) and the direction of the load (weight, vertically downwards) are opposite; they form a straight angle, a special case among all angles. In general, the angle between load and motion differs from this; then only a fraction of the load resists displacement, that is, only that fraction need be overcome. Let the mass lie on an inclined plane CD (Fig. 28) and let it be moved upwards by a force which acts parallel to CD. Such a plane is inclined with respect to the horizontal, for example, a road leading up a mountain; its angle of inclination is a. The direction of the required motion is inclined to that of the force of gravity MP. What will the mass do when it is allowed to move on its own, that is, when it is only subject to gravity and no friction? MP represents the force of gravity in magnitude as well as in direction; it tends to cause vertical motion of the mass. The mass cannot move in this direction; it therefore exerts a pressure on the inclined plane, which resists its motion. The driving force MP simultaneously moves the mass, but in a direction other than the one it would take without the inclined plane. In fact, MP is replaced by the two simultaneously acting forces MQ and MR. The force MQ acts as a pressure on the plane, which resists it due to its firmness and responds with an equally large pressure in the opposite direction, and thus balances it. The mass can follow the force MR, since the plane does not resist motion along it (fall along an inclined plane). It must be balanced by some other force, if the motion is to be stopped. Since MR/MP = cos b, you have MR = MP·cos b. Since MP is the weight of the mass, that is, MP = mg, and cos b = sin a (the angle b between MP and MR is the complement of the angle of inclination a), you find that MR = mg·sin a. On an inclined plane, the upwards directed force required to balance the load equals the load times the sine of the angle of inclination, that is, it is smaller than the actual load. The force required to keep the mass at rest on the inclined plane (or, if its motion is uniform, in uniform motion) need only be as large as MR and act in the opposite direction. However, if the force which is to stop the mass from moving down along the inclined plane acts parallel to the base of the inclined plane (K in Fig. 29), then it must be larger than mg·sin a, because only the component K cos a acts along the inclined plane; hence K cos a = mg sin a, whence K = mg tan a. The component which acts as a pressure on the inclined plane is made ineffective by its firmness. Thus, the inclined plane becomes a device by which you can balance a force by a smaller force. Such devices are called machines.

Fall along an inclined plane

If you do not balance the component MR (m·g·sin a), the mass will move downwards, that is, "fall" along the inclined plane. Its acceleration is given by force/mass = (m·g sin a)/m = g sin a. If you replace g by g sin a in the formulas for free fall, you can answer all questions relating to the fall along an inclined plane. For example, the velocity at the end of the t-th second is: v1 = g·t·sin a (it is g·t during free fall!). You can now make the velocity of fall as small as you please by making the angle a sufficiently small, that is, by letting the inclined plane differ only very little from the horizontal plane. (Galileo used it to demonstrate the laws of fall.)
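To make these relations concrete, here is a minimal numerical check written in R (the language used elsewhere in this document); the mass, the angle, and the value of g are assumed purely for illustration.

m <- 10              # assumed mass in kg
a <- 30 * pi / 180   # assumed angle of inclination (30 degrees), in radians
g <- 9.81            # acceleration due to gravity in m/s^2
MP <- m * g          # weight of the mass: 98.1 N
MR <- MP * sin(a)    # component along the plane, m*g*sin(a): 49.05 N
MQ <- MP * cos(a)    # pressure on the plane, m*g*cos(a): about 84.96 N
K  <- MP * tan(a)    # balancing force applied parallel to the base, m*g*tan(a): about 56.64 N
g * sin(a)           # acceleration of the frictionless fall along the plane: 4.905 m/s^2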
After it has fallen the distance s, the velocity of a freely falling mass is v = (2gs)^1/2, whence it is v = (2g·sin a·s)^1/2 along the inclined plane with the angle of inclination a. Now let the mass drop from the point D (Fig. 28) to the horizontal plane - the base of the inclined plane - once freely along h and a second time along the inclined plane of length CD = l: What will be the velocities at which the mass arrives in these two experiments? In the first case, you must replace s by h, whence v = (2·g·h)^1/2. In the second case, you must replace s by l, whence v1 = (2g sin a·l)^1/2; however, since h/l = sin a, then v1 = (2g·sin a·h/sin a)^1/2 = (2gh)^1/2 = v, that is, the mass arrives at the horizontal plane with the same velocity, irrespective of whether it falls freely through the height of the inclined plane or along the inclined plane.

Tautochrone

The fall along an inclined plane is an example of motion along a prescribed trajectory. A very remarkable case arises if the prescribed trajectory is the concave side of an arc of a cycloid. What is a cycloid? A point M of a circle which rolls along the straight line ab without slipping describes a cycloid; for example, every point on the periphery of a rolling tire or wheel. In order to reach the lowest point in a fall along an inclined plane, the mass requires the time interval t = (2s/(g sin a))^1/2, which depends on s, that is, it takes longer if it starts higher up on the inclined plane than if it were to start its fall lower down. However, if it falls along a vertical, upwards concave arc of a cycloid, it always takes the same time to reach the lowest point, irrespective of the point from which it starts to fall (Huygens 1673). The cycloid is therefore also called a tautochrone (Greek: tautos = same, chronos = time).

The inclined plane is a device by which you can balance a force by means of a smaller force. For example, if you want to lift a load by hand from the ground onto a truck, you exert yourself less if you push it along an inclined board onto the car than if you lift it vertically. The less inclined the board is, the smaller is the downward force which must be overcome. However, the distance over which the mass must be moved along the inclined plane is larger, in the same proportion, than the height by which it must be lifted. What you save in force, you must give up in working distance. If the height is h and the inclination a, then the length of the inclined plane is l = h/sin a. While the (larger) force p must shift its point of application over the (smaller) distance h, that is, altogether perform the work p·h, the smaller force p·sin a must shift its point of application over the larger distance l = h/sin a and perform the work p·sin a · h/sin a = p·h, that is, the same as before; you do not save work. But with the aid of the inclined plane, you can undertake work for which your muscle strength alone is not sufficient; by a suitable inclination of the inclined plane, you can make the force required for a given task as small as you please. The inclined plane is one of the devices called machines - simple machines (in contrast to machines composed of several simple machines). In general, a machine is defined as: a device which is able to resist, and which makes it possible to balance a force of given magnitude by a smaller force. The demand for a balancing capacity means: the machine must not be changed by the two groups of forces, that is, it must only transfer a force, but not use up any of it itself.
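The two results just stated - equal arrival speeds and equal work - are easy to check numerically. A short R sketch, with the height, angle, and load assumed only for illustration:

h <- 2                      # assumed height in m
a <- 20 * pi / 180          # assumed angle of inclination (20 degrees)
g <- 9.81
l <- h / sin(a)             # length of the inclined plane
sqrt(2 * g * h)             # speed after free fall through h
sqrt(2 * g * sin(a) * l)    # speed after the fall along the inclined plane: the same value
p <- 50 * g                 # assumed load: weight of a 50 kg mass, in N
p * h                       # work done lifting the load vertically
(p * sin(a)) * l            # smaller force over the longer distance: the same work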
This requirement cannot be met exactly in practice, especially due to the deformability of solid bodies and to friction. Every inclined ladder, every upward road, every staircase is an inclined plane. In an unlimited range of applications, the inclined plane also forms the basis for two other simple machines: the screw and the wedge. You use the screw in a press (including the primitive copy press) and in devices which raise loads with the aid of screws. Every cutting instrument (knife, scissors, axe) employs a wedge. Fig. 33 shows that the screw is an inclined plane: the hypotenuse AB of a right-angled triangle, placed tightly around a circular cylinder, describes on it a spiral. If the base CB equals the circumference of the cylinder, so that the points B and C coincide, you have one turn of the spiral. Obviously, the line AB is an inclined plane with height AC, length AB and base BC. A flexible bar in place of AB (for example, of square cross-section), which follows the screw line, forms a protruding band, the thread of the screw. Screws have several such turns, all of which can be imagined to have arisen in the same manner. The screw can transfer forces only after it is given a nut: inside a hollow cylinder of circular cross-section, with the diameter of the cylinder of Fig. 31, you cut the same thread which on the cylinder was a raised thread; this is the nut of the screw. If you insert the screw into the nut, that is, lay one inclined plane against the other, and let gravity act, it slides with its thread in the thread of the nut (assuming that there is no friction; in practice, friction cannot be avoided). If you wish to eliminate the effect of gravity, it must be opposed by a force, just as on the inclined plane. You can apply this force at the circumference of the screw, that is, as in Fig. 29, parallel to the base of the inclined plane from which the screw arises. It is smaller, in the same ratio, than the force to be held in equilibrium, just as we found to be the case for the inclined plane. The forces on a wedge are related in a similar way. A wedge is a three-sided prism (Fig. 31) in which one angle is very small compared with the other two. The two planes meeting in the sharp angle are the sides, the third is the back, and the edge opposite the back is the edge of the wedge. When the wedge has opened up a body, the separated parts press on it and drive it out, if there is no friction and the force driving it in ceases. Again friction is important! An axe which you drive into a block of wood is not necessarily ejected when no force is applied to it. However, if friction could be almost eliminated, the wedge would be thrown out by the separated parts of the block. In order to keep the axe inside the block, you would have to apply an appropriate force to the back of the axe. The sharper the wedge, the smaller this force. All cutting tools like knives, chisels, planes, etc. work like wedges. Figs. 34/35 show the relations between the forces which act on a wedge. A right-angled wedge has been driven under a beam which is to support a wall, in order to stop it from falling to the right side of the figure. The wall presses against the beam, which presses against the wedge; it would eject it horizontally if there were no friction, that is, you would then have to apply a horizontal force on its back in order to keep it in place. How large must this force be in relation to the pressure L of the beam?
The pressure L acts at a right angle to the side AB of the wedge, but only its component l at right angles to BC attempts to drive the wedge out. The component at right angles to the ground is of no interest (the resistance of the ground balances it). The force P, which you must apply against the back of the wedge, must therefore equal l. The figure shows that l/L = BC/BA, whence l is the smaller, the sharper the wedge is, that is, the smaller the ratio of the back to the wedge's hypotenuse. Experience confirms this conclusion in the use of a knife, axe, needle, nail, etc. They penetrate the more easily, the more pointed they are.

Friction as impediment

When answering the question regarding the magnitude of the force required to balance a load of given size by means of a machine, we have neglected friction. However, it is ever present. A body on an inclined plane does not necessarily slide down it at all. As a rule, it stays put (unless the inclined plane is rather steep or the body has a special shape). Friction holds it in place. The more perfect the surfaces of the inclined plane and of the body, the less steep the inclined plane needs to be before the body slides down. One method for the measurement of friction is therefore to determine the angle of inclination of a plane at which the body starts to glide down. If you attempt to pull the mass m by the spiral spring along the table T (Fig. 36), the spring will stretch a certain amount before the mass starts to move. The tension required before motion begins is larger than Newton's Second Law alone would suggest, because friction must first be overcome. It can be measured, as on a letter balance, by a weight. For example, if m is 1000 g and you must apply a 600 g weight to stretch the spring before m starts to move, this means: you must employ 3/5 times the force with which the mass presses on the table in order to overcome the friction of m on T; the number 3/5 is the friction coefficient. You obtain the same number if you place m on an inclined plane and find out at which angle of inclination m starts to glide down. (The slope angle of a pile of sand, grain, etc. likewise depends on the angle at which the material starts to slide.) In order to maintain the motion of the moving mass m, a much smaller tension in the spring is needed, maybe 2/5 of its weight. This number, the friction coefficient, not only differs for various pairs of materials, but also changes for the same pair depending on the state of the surfaces, that is, on the lubricant between the surfaces (oil, graphite, grease, etc.). Rolling on well greased wheels encounters the least frictional resistance among all modes of motion. The friction coefficient is then much smaller than during sliding. A cartwheel has rolling friction on its circumference and sliding friction on its axle; ball bearings are used in order to convert the latter, too, into rolling friction (Fig. 37). If you press surfaces which slide along each other ever more strongly together, the friction becomes substantially larger. Then you must apply a much larger force in order to generate motion, and motion already under way is slowed down much more quickly. This is the principle of the brake in cars. You employ friction all the time, intentionally as well as unintentionally. You could not move on foot or otherwise if friction did not stop you from slipping; nor would you stay at rest while sitting, lying, or standing up, unless friction stopped you from sliding.
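The two ways of measuring the friction coefficient just described can be put side by side in a short R sketch. The 1000 g / 600 g figures are the ones from the text; the sliding angle is assumed, and the relation used is the standard one that the coefficient equals the tangent of the angle at which sliding just begins.

600 / 1000                 # spring (letter-balance) method: 3/5, as in the text
tan(31 * pi / 180)         # inclined-plane method, with an assumed angle of 31 degrees: about 0.60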
Lighting a match on the friction surface of its box by frictional heat belongs to the conscious technical applications of friction. Devices like the brake dynamometer also employ friction. Motions generated by mechanical means remain visible for a while after the moving forces have stopped, but you see them slow down and eventually stop; for example, a carriage detached from a moving locomotive keeps on rolling, a boat which is no longer rowed keeps on drifting, etc.

Frictional effects: As a rule, one imagines that when the visible motion ends, it has been destroyed. But the motion has only ceased to be visible; in reality, it has converted itself into another motion which is not visible, but which can be noted through its effect: the contacting surfaces have been heated. As a rule, the heating is not large enough to be noticed without effort. But occasionally it becomes so, for example, when a very fast motion of a mass is suddenly interrupted or very much slowed down. In the case of a railway carriage which is being braked, the brake blocks on the wheels become so hot that you can feel it when you touch them; meteors, on entering the atmosphere from the airless interstellar space, are heated at their surfaces by the atmosphere so much that they light up (shooting stars); a flying bullet, stopped by a resisting material, can become so hot that it melts at the surface; etc.
In many data analyses in social science, it is desirable to compute a coefficient of association. Coefficients of association are quantitative measures of the amount of relationship between two variables. Ultimately, most techniques can be reduced to a coefficient of association and expressed as the amount of relationship between the variables in the analysis. For instance, with a t test, the correlation between group membership and score can be computed from the t value. There are many types of coefficients of association. They express the mathematical association in different ways, usually based on assumptions about the data. The most common coefficient of association you will encounter is the Pearson product-moment correlation coefficient (symbolized as the italicized r), and it is the only coefficient of association that can safely be referred to as simply the "correlation coefficient". It is common enough so that if no other information is provided, it is reasonable to assume that is what is meant. Let's return to our data on IQ and achievement in the previous assignment, only this time, disregard the class groups. Just assume we have IQ and achievement scores on thirty people. IQ has been shown to be a predictor of achievement, that is, IQ and achievement are correlated. Another way of stating the relationship is to say that high IQ scores are matched with high achievement scores and low IQ scores are matched with low achievement scores. Given that a person has a high IQ, I would reasonably expect high achievement. Given a low IQ, I would expect low achievement. (Please bear in mind that these variables are chosen for demonstration purposes only, and I do not want to get into discussions of whether the relationship between IQ and achievement is useful or meaningful. That is a matter for another class.) So, the Pearson product-moment correlation coefficient is simply a way of stating such a relationship and the degree or "strength" of that relationship. The coefficient ranges in value from -1 to +1. A value of 0 represents no relationship, and values of -1 and +1 indicate perfect linear relationships. If each dot represents a single person, and that person's IQ is plotted on the X axis, and their achievement score is plotted on the Y axis, we can make a scatterplot of the values which allows us to visualize the degree of relationship or correlation between the two variables. The graphic below gives an approximation of how variables X and Y are related at various values of r. The r value for a set of paired scores can be calculated as r = SPxy / (SSx·SSy)^1/2, that is, the sum of products of the deviations from the means divided by the square root of the product of the two sums of squares. There is another method of calculating r which helps in understanding what the measure actually is. Review the ideas in the earlier lessons of what a z score is. Any set of scores can be transformed into an equivalent set of z scores. The variable will then have a mean of 0 and a standard deviation of 1. The z scores above the mean are positive, and z scores below the mean are negative. The r value for the correlation between the scores is then simply the sum of the products of the z scores for each pair divided by the total number of pairs minus 1. This method of computation helps to show why the r value signifies what it does. Consider several cases of pairs of scores on X and Y. Now, when thinking of how the numerator of the sum above is computed, consider only the signs of the scores and the signs of their products. If a person's score on X is substantially below the mean, then their z score is large and negative.
If they are also below the mean on Y, their z score for Y is also large and negative. The product of these two z scores is then large and positive. The product is also obviously large and positive if a person scores substantially above the mean on both X and Y. So, the more the z scores on X and Y are alike, the more positive the product sum in the equation becomes. Note that the more consistently people score in opposite directions on the two measures (negative z scores on X paired with positive z scores on Y), the more negative the product sum becomes. This way of looking at it sometimes helps to give insight into how the correlation coefficient works. The r value is then an average of the products between z scores (using n-1 instead of n to correct for population bias). When the signs of the z scores are random throughout the group, there is roughly equal probability of having a positive ZZ product or a negative ZZ product. You should be able to see how this would tend to lead to a sum close to zero.

Interpretation of r

One interpretation of r is that the square of the value represents the proportion of variance in one variable which is accounted for by the other variable. The square of the correlation coefficient is called the coefficient of determination. It is easy for most people to interpret quantities when they are on a linear scale, but this squaring creates a nonlinear relationship which should be kept in mind when interpreting correlation coefficients in terms of "large", "small", etc. Note the graph below, which shows the proportion of variance accounted for at different levels of r. Note that not even half of the variance is accounted for until r reaches .71, and that values below .30 account for less than 10% of the variance. Note also how rapidly the proportion of variance accounted for increases between .80 and .90, as compared to between .30 and .40. Note that r = .50 is only 25% of the variance. Be careful not to interpret r in a linear way, as if it were a percentage or proportion. It is the square which has that quality. That is, don't fall into the trap of thinking of r = .60 as "better than half", because it clearly is not (it is 36%). There are some obvious caveats in correlation and regression. One has been pointed out by Teri in the last lesson. In order for r to have the various properties needed for its use in other statistical techniques, and in fact, to be interpreted in terms of proportions of variance accounted for, it is assumed that the relationship between the variables is linear. If the relationship between the variables is curvilinear, as shown in the figure below, r will be an incorrect estimate of the relationship. Notice that although the relationship between the curvilinear variables is actually stronger than the linear one, the r value is likely to be lower for the curvilinear case because the assumption is not met. This problem can be addressed with something called nonlinear regression, which is a topic for advanced statistics. However, it should be obvious that one can transform the y variable (such as with log or square functions) to make the relation linear, and then a normal linear regression can be run on the transformed scores. This is essentially how many nonlinear relationships are handled in practice.
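Since R is used elsewhere in this document, a minimal sketch of the two ways of computing r described above may be helpful. The paired scores below are invented purely for illustration.

# Invented paired scores for illustration only
x <- c(12, 15, 9, 20, 17, 11, 14, 18, 10, 16)
y <- c(30, 34, 25, 41, 38, 27, 31, 40, 24, 35)
n <- length(x)
cor(x, y)                  # built-in Pearson product-moment correlation
zx <- (x - mean(x)) / sd(x)
zy <- (y - mean(y)) / sd(y)
sum(zx * zy) / (n - 1)     # the same value, as the average of z-score products
cor(x, y)^2                # coefficient of determination: proportion of variance accounted for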
Another assumption is called homoscedasticity (HOMO-SEE-DAS-STI-CITY or HOMO-SKEE-DAS-STI-CITY). This is the assumption that the variance of one variable is the same across all levels of the other. The figure below shows a violation of the homoscedasticity assumption. These data are heteroscedastic (HETERO-SKEE-DASTIC). Note that Y is much better predicted at lower levels of X than at higher levels of X. A related assumption is one of bivariate normality. This assumption is sometimes difficult to understand (and it crops up in even more complicated forms in multivariate statistics), and difficult to test or demonstrate. Essentially, bivariate normality means that for every possible value of one variable, the values of the other variable are normally distributed. You may be able to visualize this by looking at the figure below with thousands of observations (this problem is complicated enough to approach the limits of my artistic ability). Think of the normal curves as being frequency or density at their corresponding values of X or Y. That is, visualize them as perpendicular to the page. Regression and correlation are very sensitive to these assumptions. The values for this type of analysis should not be overinterpreted. That is, quantitative predictions should be tempered by the validity of these assumptions. It should be intuitive from the explanation of the correlation coefficient that a significant correlation allows some degree of prediction of Y if we know X. In fact, when we are dealing with z scores, the math for this prediction equation is very simple. The predicted z for the Y score (z'y) is z'y = r(zx). When the r value is used in this way, it is called a standardized regression coefficient, and the symbol used to represent it is often a lowercase Greek beta (b), so the standardized regression equation for regression of y on x is written as z'y = b(zx). When we are not working with z scores, but we are attempting to predict Y raw scores from X raw scores, the equation requires a quantity called the unstandardized regression coefficient. This is usually symbolized as B1, and allows for the following prediction equation for raw scores: Y' = B0 + B1(X). The unstandardized regression coefficient (B1) can be computed from the r value and the standard deviations of the two sets of scores: B1 = r(sy / sx) (Equation a). B0 is the intercept for the regression line, and it can be computed by subtracting the product of B1 and the mean of the x scores from the mean of the y scores: B0 = MY - B1(MX) (Equation b). Now, suppose we are attempting to predict Y (achievement) from X (IQ). Assume we have IQ and achievement scores for a group of 10 people. Suppose I want to develop a regression equation to make the best prediction of a person's achievement if I am given their IQ score. I would proceed as follows: first compute r. Now, it is a simple matter to compute B1: B1 = SPxy / SSx = 420 / 512.5 = 0.82. Now compute B0: B0 = MY - B1(MX) = 94.8 - 0.82(99.5) = 13.2. The regression equation for predicting achievement from IQ is then Y' = B0 + B1(X), that is, ACHIEVEMENT SCORE = 13.2 + 0.82(IQ).

Error of Prediction

Given an r value between the two variables, what kind of error in my predicted achievement score should be expected? This is a complicated problem, but an oversimplified way of dealing with it can be stated which is not too far off for anything other than extreme values. The standard error of the estimate can be thought of roughly as the standard deviation of the expected distribution of true Y values around a predicted Y value. The problem is that this distribution changes as you move across the X distribution, and so the standard error is not exactly correct for any particular prediction. However, it does give a reasonable estimate of the confidence interval around predicted scores.
For standardized (z) scores, the standard error of the estimate is (1 - r^2)^1/2. For raw scores, it is sy(1 - r^2)^1/2. For example, given a predicted Y score of 87 and a standard error of estimate of 5.0, we could speculate that our person's true score is somewhere between 87 - 2(5) = 77 and 87 + 2(5) = 97, with roughly 95% confidence. Again, this is an oversimplification, and the procedures for making precise confidence intervals are best left for another time.
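To tie these pieces together, here is a short R sketch in the style used elsewhere in this document. The IQ and achievement values are invented for illustration, so the resulting coefficients will not match the 13.2 and 0.82 of the worked example above; only the procedure is the same, and the 2-standard-error band is the rough interval just described.

# Invented IQ and achievement scores for 10 people (illustration only)
iq  <- c(95, 100, 88, 110, 105, 92, 120, 98, 102, 115)
ach <- c(89, 96, 84, 103, 99, 88, 112, 93, 97, 108)
r  <- cor(iq, ach)
B1 <- r * sd(ach) / sd(iq)          # unstandardized regression coefficient, r * (sy / sx)
B0 <- mean(ach) - B1 * mean(iq)     # intercept, MY - B1 * MX
fit <- lm(ach ~ iq)                 # lm() returns the same intercept and slope
coef(fit)
se   <- summary(fit)$sigma          # residual standard error of the fit
pred <- B0 + B1 * 100               # predicted achievement for an IQ of 100
c(pred - 2 * se, pred + 2 * se)     # rough 95% band around the prediction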
Movement advocating the immediate end of slavery. The abolitionist movement began in earnest in the United States in the 1820s and expanded under the influence of the Second Great Awakening, a Christian religious movement that emphasized the equality of all men and women in the eyes of God. Most leading abolitionists lived in New England, which had a long history of anti-slavery activity, but the movement also thrived in Philadelphia and parts of Ohio and Indiana. Paint made with pigment (color) suspended in acrylic polymer (a synthetic medium), rather than in natural oils, such as linseed, used in oil paints. It is a modern medium that came into use in the 1950s. Unlike oil paint, it is fast drying and water soluble. Three-dimensional art made by building up material (such as clay) to produce forms, instead of carving it away. Type of photograph that is printed on paper coated with silver salts (the substance that turns dark when it is exposed to light in a camera) suspended in egg whites (albumen). Albumen prints were more popular than daguerreotypes, which they replaced, because multiple copies could be printed and they were less expensive. Albumen prints were often toned with a gold wash, which gives them a yellowish color. Symbolic representation of an idea, concept, or truth. In art, allegories are often expressed through symbolic fictional figures, such as “Columbia,” a woman who represents America; or Father Time, an old man with an hourglass and scythe. Type of photograph made between 1850 and 1860 in which a negative was attached to a piece of glass with black paper or cloth behind it. Against the black background, the tones of the resulting photograph are reversed, so that it reads as a positive image. The ambrotype went out of use when less expensive methods of photography were invented, like the albumen print. Latin for “before the war.” It refers to the period between 1820 and 1860 in American history. Term encompassing a range of ideas opposing slavery. It included abolitionism, or the idea that slavery should be ended immediately. But it also included other positions, including colonization and gradual emancipation. Some anti-slavery figures (like Abraham Lincoln) opposed slavery as a moral wrong, but did not seek to end it where it already existed, mostly because they believed that slavery was protected by the Constitution. Others had no moral concerns about slavery, but opposed the expansion of the institution because they believed that wage laborers could not compete in a slave-based economy. Antrobus, John (1837–1907): Sculptor and painter of portraits, landscapes, and genre scenes (showing everyday life). Antrobus was born in England but came to Philadelphia in 1850. During his travels through the American West and Mexico, he worked as a portraitist before opening a studio in New Orleans. He served briefly with the Confederate Army during the Civil War before moving to Chicago. Antrobus sculpted both Abraham Lincoln and Stephen Douglas and was the first artist to paint a portrait of Ulysses S. Grant (in 1863). Army of the Potomac: Largest and most important Union army in the Eastern Theater of the Civil War, led at various times by Generals Irvin McDowell, George McClellan, Ambrose Burnside, Joseph Hooker, and George Meade. From 1864–1865, General Ulysses S. Grant, then Commander-in-Chief of all Union forces, made his headquarters with this Army, though General Meade remained the official commander. 
The army's size and significance to the war meant that it received a great deal of attention in newspapers and magazines of the day. Artist Winslow Homer lived and traveled with the army at various times when he worked for Harper's Weekly as an illustrator. Army of Northern Virginia: Primary army of the Confederacy and often the adversary of the Union Army of the Potomac. Generals P. G. T. Beauregard and Joseph E. Johnston were its first leaders; after 1862 and to the end of the war, the popular General Robert E. Lee commanded it. On April 9, 1865, Lee surrendered his army to Union General-in-Chief Ulysses S. Grant in the small town of Appomattox Courthouse, effectively ending the Civil War. Collection of weapons or military equipment. The term arsenal also refers to the location where weapons or equipment for military use is stored. Discipline that seeks to understand how artworks were made, what history they reflect, and how they have been understood. Surprise murder of a person. The term is typically used when individuals in the public eye, such as political leaders, are murdered. Atkinson, Edward (1827–1905): American political leader and economist who began his political career as a Republican supporter of the Free Soil movement. Atkinson fought slavery before the Civil War by helping escaped slaves and raising money for John Brown. After the Civil War, in 1886, Atkinson campaigned for future President Grover Cleveland and worked against imperialism (the movement to expand a nation's territorial rule by annexing territory outside of the main country) after the Spanish-American War. Ball, Thomas (1819–1911): American sculptor who gained recognition for his small busts before creating more monumental sculptures. Notable works include one of the first statues portraying Abraham Lincoln as the Great Emancipator (1876), paid for by donations from freed slaves and African American Union veterans, which stands in Washington D.C.'s Lincoln Park. Ball also created a heroic equestrian statue of George Washington for the Boston Public Garden (1860–1864). He joined an expatriate community in Italy, where he received many commissions for portrait busts, cemetery memorials, and heroic bronze statues. Barnard, George N. (1819–1902): Photographer known for his work in daguerreotypes, portraiture, and stereographs. Barnard devoted much of his time to portraiture after joining the studio of acclaimed photographer Mathew Brady. He produced many group portraits of soldiers in the early years of the Civil War. Barnard was employed by the Department of the Army and traveled with General William T. Sherman, an assignment that would yield the 61 albumen prints that compose Barnard's Photographic Views of Sherman's Campaign. In the post-war years, he operated studios in South Carolina and Chicago, the latter of which was destroyed in the 1871 Chicago Fire. Battle of Gettysburg: Fought July 1–3, 1863, in and around the town of Gettysburg, Pennsylvania, this battle was a turning point in the Civil War. Union forces stopped Confederate General Robert E. Lee's second (and last) attempt to invade the North. The Union emerged victorious, but the battle was the war's bloodiest, with fifty-one thousand casualties (twenty-three thousand Union and twenty-eight thousand Confederate). President Abraham Lincoln delivered his famous "Gettysburg Address" on November 19, 1863, at the dedication of the Soldiers' National Cemetery at Gettysburg.
Bell, John (1797–1869): Politician who served as United States Congressman from Tennessee and Secretary of War under President Harrison. On the eve of the Civil War in 1860, Bell and other people from Border States formed the Constitutional Union Party. Under its moderate, vague platform, the Constitutional Unionists stood for supporting the Constitution and preserving the Union; they were pro-slavery but anti-secession. Bell lost the election, receiving the lowest percentage of the popular vote and only winning the states of Tennessee, Kentucky, and Virginia. During the Civil War, Bell gave his support to the Confederacy. Bellew, Frank Henry Temple (1828–1888): American illustrator who specialized in political cartoons and comic illustrations. Before, during, and after the Civil War, Bellew's illustrations appeared in newspapers and illustrated magazines such as Vanity Fair and Harper's Weekly. He is perhaps most famous for his humorous cartoon "Long Abraham Lincoln a Little Longer" and his image depicting "Uncle Sam" from the March 13, 1852, issue of the New York Lantern. His Uncle Sam illustration is the first depiction of that character. Bierstadt, Albert (1830–1902): German-American painter and member of the Hudson River School of landscape painting. Bierstadt spent time in New England and the American West and is well known for his large landscapes that highlight the scale and drama of their setting. A member of the National Academy of Design, he worked in New York City and had a successful career until near the end of his life, when his paintings temporarily fell out of style. Billings, Hammatt (1819–1874): American artist, designer, and architect. Billings lived in Boston for the majority of his life, and designed several public buildings and monuments in the New England region. He became famous for his work as an illustrator. He illustrated over 100 books, including works by Nathaniel Hawthorne, Charles Dickens, and Harriet Beecher Stowe. His illustrations of Stowe's 1852 novel Uncle Tom's Cabin were particularly well-regarded, and helped launch his successful career. Bishop, T. B. (active, 19th century): American photographer whose image of an escaped slave was turned into an illustration for the popular illustrated magazine Harper's Weekly. Blythe, David Gilmour (1815–1865): Sculptor, illustrator, poet, and painter best known for his satirical genre painting (showing everyday life). His work focused mainly on the American court system and the condition of poor young street urchins. Blythe also produced many politically-charged canvases supporting his Unionist views in the years leading up to and during the Civil War. Booth, John Wilkes (1838–1865): American stage actor who assassinated President Lincoln. Booth was active in the anti-immigrant Know-Nothing Party during the 1850s. He supported slavery and acted as a Confederate spy during the Civil War. In 1864, Booth planned to kidnap Lincoln and bring him to the Confederate government in Richmond, Virginia. But after the fall of Richmond to Union forces, Booth changed his mind, deciding instead to assassinate Lincoln, Vice President Andrew Johnson, and Secretary of State William Seward. On April 14, 1865, Booth shot Lincoln at Ford's Theatre and then fled. Union soldiers found and killed Booth on April 26, 1865. Slaveholding states that did not secede from the Union during the Civil War.
Geographically, these states formed a border between the Union and the Confederacy, and included Delaware, Maryland, Kentucky, Missouri, and later, West Virginia (which had seceded from Virginia in 1861). Of these, Maryland, Kentucky, and Missouri were particularly important to Union war policy as each of these states had geographic features like rivers that the Union needed to control the movement of people and supplies. Most of the Border States had substantial numbers of pro-secession citizens who joined the Confederate army. Borglum, John Gutzon de la Mothe (1867–1941): American sculptor and engineer best known for his Mount Rushmore National Memorial comprising monumental portraits of presidents Washington, Jefferson, Lincoln, and Roosevelt carved out of the mountain. Borglum began his career as a painter but was dissatisfied with the medium. He later studied at the Académie Julian in Paris, where he was influenced by the bold sculptor Auguste Rodin. Borglum believed that American art should be grand in scale, like the nation itself. He received commissions for several monumental sculptures during his career, including a six-ton head of Lincoln and the 190-foot wide Confederate Memorial in Stone Mountain, Georgia. Brady, Mathew (1823–1896): American photographer, perhaps best known for his photographs of the Civil War. Brady studied under many teachers, including Samuel F. B. Morse, the artist and inventor who introduced photography to America. Brady opened a photography studio in New York City in 1844 and in Washington, D.C. in 1856. During the Civil War, he supervised a group of traveling photographers who documented the war. These images depicted the bloody reality of the battlefield. They convinced Americans that photography could be used for more than portraiture. Congress purchased his photographic negatives in 1875. Breckinridge, John (1821–1875): Democratic politician from Kentucky who served as a Congressman from Kentucky. He was Vice President of the United States under James Buchanan before running for president in 1860 as a Southern Rights Democrat. Breckinridge lost the election, winning only Deep South states. During the war, Breckinridge held the rank of Major General in the Confederate army and briefly served as the Confederate Secretary of War. Bricher, Alfred Thompson (1837–1908): American specialist in landscape, focusing on marine and coastal paintings. Largely self-taught, Bricher studied the works of artists he met while sketching New England. Bricher had a relationship with L. Prang and Company, to which he supplied paintings that were turned into popular, inexpensive chromolithographs. During his career, Bricher worked in watercolor and oil paint and traveled through New England, the Mississippi River Valley, and Canada. His style moved from the precise detailed realism of his early career to a looser brush style that evokes romantic themes of loss and the power of nature. Briggs, Newton (active, 19th century): Photographer who created portraits of Abraham Lincoln and Hannibal Hamlin used as campaign ephemera. A large printed poster used for advertising or for political campaigns. Broadsides were often inexpensively and quickly made, and intended to send a message rather than be a work of art. Brown, John (1800–1859): Radical abolitionist leader who participated in the Underground Railroad and other anti-slavery causes. As early as 1847, Brown began to plan a war to free slaves.
In 1855 he moved to the Kansas territory with his sons, where he fought and killed proslavery settlers. In 1859, he led a raid on a federal arsenal in Harpers Ferry, Virginia, hoping to start a slave rebellion. After the raid failed, Brown was captured, put on trial, and executed for his actions. Brown was praised as a martyr by abolitionists, although the majority of people thought he was an extremist. Metal sculpture made by pouring a molten alloy (metallic mixture) of copper and tin into a mold. The mold is removed when the metal has cooled, leaving the bronze sculpture. Bronzes are designed by artists but made at foundries. Sculpture portraying only the top half of a person's body: their head, shoulders, and typically their upper torso. Buttre, John Chester (1821–1893): New York City-based engraver who was responsible for publishing The American Portrait Gallery, a collection of biographies and images of notable American public figures. Buttre was a partner in the firm of Rice & Buttre. He created sentimental images of the Civil War which sold well. Cade, John J. (active, 19th century): Canadian-born engraver of portraits who worked for New York publishers. In 1890 he was living in Brooklyn, New York. Cade worked with illustrator Felix Octavius Carr Darley. Representation in which a person's traits are exaggerated or distorted. These are usually made for comic or satirical effect. French term for "visiting card." These small (usually 2 1/2 x 4 inches) photographs mounted on cardboard were so named because they resembled visiting or business cards. Exchanged among family members and friends, these first appeared in the 1850s and replaced the daguerreotype in popularity because they were less expensive, could be made in multiples, and could be mailed or inserted into albums. Carter, Dennis Malone (1818–1881): Irish-American painter of historical scenes and portraits. Carter worked in New Orleans before moving to New York City. He exhibited his paintings in art centers like New York and Philadelphia, and mainly became known for his paintings of historical scenes. Carter, William Sylvester (1909–1996): African American painter. Carter was born in Chicago and studied at the School of the Art Institute of Chicago. During the 1930s, he was involved with the Works Progress Administration, a jobs program that helped artists and other workers weather the Great Depression. Copy of a three-dimensional form, made by pouring or pressing substances such as molten metal, plaster, or clay into a mold created from that form. The term is also used to describe the act of making a cast. Elaborate, temporary decorative structure under which a coffin is placed during a visitation period or funeral ceremony. Type of curved sword with a single edge, commonly carried by cavalry units, or those trained to fight on horseback. The cavalry saber was a standard-issue weapon for Union cavalry troops during the Civil War, but used less often by Confederates. The usefulness of cavalry sabers had decreased as new innovations in modern rifles developed, however, and cavalrymen carried them more for decorative or intimidation purposes than for actual fighting. Chappel, Alonzo (1828–1887): American illustrator and painter of portraits, landscapes, and historical scenes. Chappel briefly studied at the National Academy of Design in New York. Focusing on portrait painting early in his career, Chappel became famous for providing illustrations for books about American and European history.
Many of his illustrations included important events and people in American history through the Civil War. During and after the Civil War, Chappel painted Civil War battle scenes and leaders, like President Lincoln. Church, Frederic Edwin (1826–1900): American landscape painter who studied under Thomas Cole, the founder of the Hudson River School of painting. Elected to the National Academy of Design at age twenty-two, Church began his career by painting large, romantic landscapes featuring New England and the Hudson River. Influenced by scientific writings and art theory, Church became an explorer who used his drawings and sketches as a basis for studio paintings. Church traveled to South America, the Arctic Circle, Europe, Jamaica, and the Middle East. Church had an international reputation as America's foremost landscape painter. A person who is a citizen and not a member of a branch of the military. Civil Rights Movement: Civil rights are literally "the rights citizens enjoy by law." The modern United States Civil Rights Movement occurred between 1954 and 1968 and sought to achieve the equal rights African Americans had been denied after the Civil War. Organized efforts like voter drives and the use of non-violent techniques to desegregate public space helped to draw national attention to the injustice of segregation, which was particularly widespread in the South. These efforts led to new laws that ensured equal voting rights for African Americans and banned discrimination based on race, color, religion, or national origin. Ideas, objects, or forms that are often associated with ancient Greece and Rome; but the term can be applied to the achievements of other cultures as well. The term also refers to established models considered to have lasting significance and value or that conform to established standards. Colman, Samuel Jr. (1832–1920): American landscape painter influenced by the Hudson River school, America's first native landscape painting movement. In his early career, Colman studied at the National Academy of Design and painted scenes of New England. Colman became a master of the newly popular technique of watercolor painting. After the Civil War, Colman had a diverse career: painting the American West, Europe, and North Africa, learning to create etchings, and working in design. In addition to watercolor, Colman worked increasingly in drawing and pastel. Later in life, Colman wrote and published essays on art and worked to place his collections in various museums. Movement led by the American Colonization Society (A.C.S.), which was founded in 1816. In the antebellum period, the movement sought to gradually end slavery and relocate freed African Americans outside of the United States. Members were mainly white people who were opposed to slavery but doubted that the races could live peacefully together. Some African Americans joined the colonizationists, mostly because they feared being ill-treated in the United States. In 1822, the A.C.S. created the West African colony of Liberia to receive freed slaves. Abolitionists opposed colonization as immoral, insisting that the government should end slavery immediately and acknowledge equal rights for African Americans. Act of placing an order for something, such as a work of art. An individual or group can commission a work of art, often with a portion of the payment made to the artist in advance of its completion (for the purchase of supplies, etc.). Public monuments and painted portraits are usually commissioned, for example.
The term also refers to the act of placing an order for (commissioning) a work of art. Member of the military who holds a commission, or rank. In the Union army, the commissioned ranks included first and second lieutenant, captain, major, lieutenant colonel, colonel, brigadier general, major general, and lieutenant general. In the Confederate army, the ranks were the same except that there was only one form of general. The officer received this commission and authority directly from the government. A non-commissioned officer refers to an enlisted member of the military who has been delegated authority by a commissioned officer. Non-commissioned officers in both armies included sergeant, corporal, and the lowest rank: private. Way in which the elements (such as lines, colors, and shapes) in a work of art are arranged. Compromise of 1850: Series of five bills passed by Congress in 1850 intended to solve a national crisis over whether slavery should expand into the West. It brought California into the Union as a free state, organized the New Mexico and Utah territories under popular sovereignty, banned the slave trade (but not slavery) in Washington, D.C., created a stronger fugitive slave law, and settled the boundaries of Texas. While this compromise was thought to be a final solution to the dispute over slavery in the American territories, it lasted only a short time as the same issues arose again with the organization of the Kansas and Nebraska Territories in 1854. Confederate States of America (C.S.A.): Government of eleven slave states that seceded from the United States of America. The first six member states (South Carolina, Georgia, Florida, Alabama, Mississippi, and Louisiana) founded the Confederacy on February 4, 1861. Texas joined very shortly thereafter. Jefferson Davis of Mississippi was its president. When Confederate forces fired upon Union troops stationed at Fort Sumter on April 12–13, 1861, President Abraham Lincoln called for seventy-five thousand militia men to put down what he referred to as an "insurrection." At that point, four additional states—North Carolina, Virginia, Tennessee, and Arkansas—also seceded in protest of the Union's coercive measures. Political party organized during the presidential campaign of 1860 in response to the Democratic Party's split into Southern and Northern factions. Members mostly came from the border slave states; they were hostile to free soil ideas, but equally uncomfortable with the secessionist ideas of the radical Southern wing of the Democratic Party. They adopted a moderate, vague platform that emphasized the need to preserve the Union and the Constitution. They nominated John Bell of Tennessee to run for president in the 1860 election, but only gained electoral votes in Tennessee, Kentucky, and Virginia. The party dissolved shortly afterward. An edge or outline in a work of art. Term used by the Union army to describe runaway slaves who came under the army's protection. It was coined by General Benjamin Butler, who in 1861 refused the request of Confederate slaveholders to return slaves who had run away to Union military lines. Before the war, law dictated that runaways had to be surrendered to their owners upon claim, but Butler argued that slaves were like any other enemy property and could be confiscated as "contraband" according to the laws of war. Butler was no abolitionist, but his policy was the first official attempt to weaken slavery in the South.
Temporary shelters run by the Union army throughout the occupied South and free states where refugee slaves (including the families of black soldiers) sought protection, food, and work. Cope, George (1855–1929): American landscape and trompe l’oeil painter. Cope was trained as a landscape painter, but later transitioned to trompe l’oeil painting, producing highly realistic still lifes inspired by his passion for the outdoors and hunting. Cope spent most of his life and career in the Brandywine River Valley of Pennsylvania, though he traveled as far as the Pacific Northwest. Copley, John M. (active, 19th century): American author of the 1893 book A Sketch of the Battle of Franklin, Tenn.; with Reminiscences of Camp Douglas. Copley was a Confederate member of the 49th Tennessee Infantry. Cash crop of the antebellum South that was produced almost entirely by slave labor. Before 1800, the South’s large farmers (planters) grew long-staple cotton, which was relatively cheap to clean by hand before sale. But long-staple cotton would only grow in coastal regions. With the invention of the cotton gin in 1793, planters throughout the South began planting short-staple cotton. The gin cleaned seeds from short-staple cotton—which was expensive to clean by hand but grew in virtually any climate in the South. The gin thus prompted the spread of cotton and slavery westward, making the planter class enormously wealthy and influential. War fought from 1853 to 1856 between Russia and the combined forces of the Ottoman Empire, England, France, and Sardinia. The war ended Russia’s dominance in Southeastern Europe. It was incredibly bloody, resulting in some five hundred thousand deaths due to battle, disease, and exposure. Many aspects of this conflict anticipated the American Civil War, including the use of the telegraph and railroad to facilitate military movements, the use of rifled muskets, the advent of iron-clad ships, the daily reporting of newspaper correspondents from the scenes of battle, and (though to a smaller degree) the use of photography to document warfare. Crowe, Eyre (1824–1910): British painter and writer, known for genre scenes (paintings of everyday life) and historical subjects. Crowe studied in Paris. While working for British author William Makepeace Thackeray, Crowe visited the United States in 1852–1853. His visits to Richmond, Virginia in 1853 and 1856 inspired his paintings showing the brutal reality of slavery in America. Currier and Ives (1857-1907): New York firm started by Nathaniel Currier and James Ives, later carried on by their sons. Specializing in affordable, hand-colored prints called lithographs, Currier and Ives employed numerous artists over the firm’s fifty-year history. Its prints covered thousands of different subjects, including famous people, famous events, landscapes, humor, and sports. These images appealed to the interests and feelings of middle-class Americans and were purchased by people all over the country. During the Civil War, Currier and Ives produced images about recent events, bringing images of the war into Americans’ homes. Curry, John Steuart (1897-1946): American artist who created paintings, prints, drawings, and murals that portrayed the American rural heartland as a wellspring of national identity. A Kansas native, Curry studied at the Art Institute of Chicago before focusing on several decorative mural commissions and Kansas scenes, including a large mural depicting John Brown at the Kansas statehouse. 
Curry's designs proved controversial because they included what many Kansans regarded as unflattering depictions of their state. Although honored in his later years, the furor over the murals is said to have hastened Curry's death from a heart attack at the age of forty-eight. Early type of photograph invented by the Frenchman Louis-Jacques-Mandé Daguerre (1787–1851). Each image is one-of-a-kind and made on a polished silver-coated metal plate. Daguerreotypes were often called “the mirror with a memory” because their surface is so reflective. For protection, daguerreotypes were packaged behind glass inside a decorative case. Shortly after daguerreotypes were made public by the French government in 1839, they were introduced in America. They were wildly popular in the 1840s and 1850s since they were more affordable than having a portrait painted. Darley, Felix Octavius Carr (1822–1888): American illustrator of magazines and books. Darley began his career in 1842 in Philadelphia. He also worked in New York City and Delaware. Darley became one of the most popular book illustrators in America after 1848, when he created illustrations that became engravings used in books by Washington Irving, James Fenimore Cooper, Nathaniel Hawthorne, Harriet Beecher Stowe, and Edgar Allan Poe. Darley’s images of American icons like pilgrims, pioneers, and soldiers were in high demand before, during, and after the Civil War. Darling, Aaron E. (active, 19th century): Artist who painted the Chicago abolitionist couple John and Mary Jones in c.1865. Davis, Jefferson F. (1808–1889): Democratic politician and Mexican War veteran who served as U.S. Senator and Secretary of War before becoming President of the Confederacy in 1861. Davis was born in Kentucky and educated at West Point; he served briefly in the U.S. Army before becoming a cotton planter in Mississippi. Though a strong supporter of slavery and slaveholders’ rights, he opposed secession. Nonetheless, when Mississippi seceded, he left the Senate to serve in the Confederate army. To his dismay, he was elected president of the Confederacy by its constitutional convention. After the war, he was indicted for treason and imprisoned, but never put on trial. Embellishment or ornament meant to make something pleasing. The term also refers to an honor or commemoration. Individual features, or a small portion of a larger whole. Geographic region of the Southern United States including South Carolina, Georgia, Alabama, Mississippi, Louisiana, Florida, and Texas, also known as the Lower South or Deep South. These states had the highest slave populations in the South and their economies were heavily reliant on cotton cultivation (as well as sugar and rice). During the Civil War, each of the states seceded from the Union prior to the bombardment of Fort Sumter (April 12–13, 1861). System of government through which citizens elect their rulers, based on ancient Greek philosophy and practice. The United States is a representative (or indirect) democracy, meaning that eligible adult citizens elect politicians to make decisions on their behalf. Democratic principles are based on the idea that political power lies with the people, but many democratic systems have historically limited the right to vote. In the United States during the Civil War, for instance, only white men could vote. Party of opposition during the Civil War. Democrats believed in states’ rights, a strict interpretation of the United States Constitution, and a small federal government. 
Before the war, the party supported popular sovereignty in the Western territories. Southern Democrats abandoned the national party during the election season of 1860. During the secession crisis, Northern Democrats sought to restore the Union through compromise rather than military force, but the Confederacy rejected these attempts. After the attack at Fort Sumter (April 12–13, 1861), many Northern Democrats supported war on the Confederacy, but others opposed it, the draft, and emancipation. Douglas, Stephen A. (1813–1861): Democratic lawyer and politician from Illinois who served in the state legislature before his election to the U.S. Senate in 1847. As a Democratic leader, Douglas championed the policy of popular sovereignty (in which territories decided their slaveholding or free status). He is well known for his debates with Abraham Lincoln, his Republican challenger for the Senate in 1858. Though he won that election, Douglas lost to his rival in the presidential election of 1860. After the war began, he supported Lincoln and urged his party to follow suit. Two months later, he died from typhoid fever in Chicago. Douglass, Frederick (1818–1895): Former slave, author, and publisher who campaigned for the abolition of slavery. Douglass published his autobiography, Narrative of the Life of Frederick Douglass, an American Slave, Written By Himself, in 1845. Mentored by anti-slavery leader William Lloyd Garrison, Douglass developed his own philosophy of abolition, arguing that the Constitution could "be wielded in behalf of emancipation.” His newspapers, The North Star and Frederick Douglass’s Paper, led abolitionist thought in the antebellum period. He met with Abraham Lincoln during the Civil War and recruited Northern blacks for the Union Army. After the war, he continued fighting for African American civil rights. Dred Scott v. Sanford: Supreme Court decision of 1857 that declared that Dred Scott (and all African Americans) were not citizens of the United States and did not have rights as such. Dred Scott was the slave of an army surgeon named Dr. Emerson who had traveled with Scott to free states and territories. After Emerson’s death, Scott sued his heirs in 1846, claiming that his time in free areas made him a free man. The case was appealed to the United States Supreme Court, which ruled that neither federal nor territorial governments could outlaw slavery in the territories, therefore making free soil and popular sovereignty unconstitutional. Election of 1860: Historic presidential election. Four men ran in the race: Abraham Lincoln of Illinois for the Republican Party, Stephen Douglas of Illinois for the Democratic Party, John C. Breckinridge of Kentucky for the Southern Rights Democratic Party, and John Bell of Kentucky for the Constitutional Union Party. Abraham Lincoln won the election by a majority of the Electoral College, but without a majority of the overall popular vote. All of his support came from free states. Breckinridge dominated the Deep South states, Bell gained limited support in the border slave states, and Douglas was overwhelmingly defeated throughout the country. Procedure established by the Constitutional Convention of 1787 whereby the states elect the President of the United States. It was a compromise between those who advocated election of the president by Congress and those who wanted election by popular vote. 
In the Electoral College, every state gets one vote for each of their senators (always two) and representatives in Congress (a minimum of one, with additional representatives determined by the size of a state’s population). In the Election of 1860, Abraham Lincoln won the presidency with 180 electoral votes, but did not receive a majority of the popular vote. Ellsbury, George H. (1840–1900): American artist and lithographer. Ellsbury worked for Harper’s Weekly as a sketch artist during the Civil War. He also created city views of the American Midwest between 1866 and 1874, before moving to Minnesota and the western territories. Freeing a person from the controlling influence of another person, or from legal, social, or political restrictions. In the United States, it is often used to refer specifically to the abolition of slavery. Executive order issued by President Abraham Lincoln on September 22, 1862, stating that as of January 1, 1863, "all persons held as slaves" within the rebellious southern states (those that had seceded) "are, and henceforward shall be free." The Emancipation Proclamation applied only to the rebelling Confederacy, leaving slavery legal in the Border States and parts of the Confederacy under Union control. Nonetheless, slaves who were able to flee Confederate territory were guaranteed freedom under Union protection. While the order did not end slavery, it added moral force to the Union cause and allowed African American men to join the Union armies. Printmaking technique where the artist uses a tool called a burin to create lines in a wood or metal surface. After the design is drawn, the plate is inked and the image is transferred under pressure from the woodblock or metal plate to paper. Visual and documentary materials—pamphlets, ribbons, buttons, printed matter—that are generally not intended to last. Items produced for political campaigns—including Abraham Lincoln’s—are often considered to be ephemera. As historical material, ephemera are very valuable because they help us understand what audiences in the past saw and used. Relating to horses. Equestrian portraits of Civil War officers show seated, uniformed figures sitting on active or athletic-looking horses. This kind of image is often seen in art history; kings and emperors were often shown this way to suggest their power as leaders. Printmaking technique where the artist coats a metal plate in wax, and then removes wax from parts of the plate to create the design. Acid is then applied to the plate. This acid acts on the metal to create a permanent design. The plate is inked and the design is transferred under pressure from the plate to paper. In photography, the amount of time that the shutter of the camera is open, determining how much light enters into the camera and falls on the light-sensitive surface (like a metal or glass plate or film in pre-digital photography). The surface is then processed to create a photograph. During the Civil War, photography was still new and exposure times needed to be longer to get a visible image. This made it difficult to take pictures of action, such as battle, because the subjects had to be still for the entire time the shutter was open. Fassett, Cornelia Adele (1831–1898): Portraitist who worked in Chicago and Washington, D.C., Fassett worked with her husband, photographer Samuel M. Fassett, and painted portraits of prominent Illinois men, including Abraham Lincoln in 1860. She moved to Washington, D.C. 
in 1875 where she received many political commissions, including portraits of Ulysses S. Grant, Rutherford B. Hayes, and James Garfield. Fassett is known for these portraits as well as her painting The Florida Case before the Electoral Commission (1879), now in the United States Senate art collection, which features roughly 260 Washington politicians. Fassett, Samuel (active, 1855–1875): American photographer active before, during, and after the Civil War. Fassett worked in Chicago and Washington, D.C. In Washington, he was a photographer to the Supervising Architect of the Treasury. Fassett is best known for taking one of the earliest photographs of Abraham Lincoln before he became president. He was married to American painter Cornelia Adele Fassett, who painted a portrait of Lincoln after her husband’s image. Firestone, Shirley (active, 20th century): Painter who depicted Harriet Tubman in 1964. Forbes, Edwin (1839–1895): Illustrator and artist. Forbes produced images for Frank Leslie’s Illustrated Newspaper from 1861–1865 and traveled as a sketch artist with the Army of the Potomac, covering events of the war. He depicted scenes of everyday life as well as battle scenes, such as the Second Battle of Bull Run and Hooker’s Charge on Antietam. Forbes went on to produce many etchings and paintings from his Brooklyn studio, inspired by his war-time images. In artworks that portray scenes or spaces, the foreground is the area, usually at the bottom of the picture, which appears closest to the viewer. The background is the area that appears farthest away and is higher up on the picture plane. Infantry soldier with a military rank during the Civil War who fought on foot. Foot soldiers carried different types of swords and weapons than did cavalry soldiers (who fought on horseback) during the war, since they were trained to fight in different situations. Fort in the harbor of Charleston, South Carolina that was the site of the first military action in the Civil War. The fort was bombarded by the newly formed Confederacy between April 12 and 13, 1861. On April 14, Major Robert Anderson lowered the American flag and surrendered the fort. This event led to widespread support for war in both the North and the South. Following the battle, Lincoln called for seventy-five thousand men to enlist in the armed services to help suppress the rebellion, which led four more states to join the Confederacy. Factory that produces cast goods by pouring molten metal (such as iron, aluminum, or bronze) into a mold. A foundry is needed to produce goods like bronze sculptures or artillery, such as cannons. Frank Leslie’s Illustrated Newspaper: Popular publication during the Civil War that featured fiction, news, and illustrations of battlefield life. Frank Leslie is the pseudonym (fake name) adopted by English illustrator and newspaper editor Henry Carter. Carter worked for the Illustrated London News and circus man P. T. Barnum before moving to America and founding his first publication using the name Frank Leslie. After the war, Leslie married Miriam Follin, a writer who worked for his paper. Following Leslie’s death, Miriam changed her name to “Frank Leslie” and took over as editor. A paper with the name Frank Leslie on its masthead was in publication from 1852–1922. Philosophy that stressed economic opportunity and a man’s ability to move across social class and geographic boundaries. 
Those who believed in free labor thought that man should be free to earn the fruit of his own labor, gain independence, and prosper within a democratic society. Most free labor thinkers opposed slavery to some extent, and the idea itself was central to both the Free Soil movement and the Republican Party. Type of anti-slavery political philosophy that declared that western territories of the United States should be free of slavery. Unlike abolitionists, many white “free soilers” were unconcerned with Southern slaves. Instead, they feared slavery’s impact on white workers, believing that the system of slavery made it harder for free workers to compete. Some free soilers were also racist and opposed living near African Americans. Others, like Abraham Lincoln, opposed slavery on moral grounds, but believed that Congress could not end slavery where it already existed and could only limit it in territories where it had not yet been established. French, Daniel Chester (1850–1931): Leading American monumental sculptor. French studied for two years in Italy before returning to the United States to open studios in Boston and Washington, D.C. He earned commissions for portraiture and public monuments, where he combined classical symbolism with realism in his sculptures. French is perhaps best known for the massive seated Lincoln at the Lincoln Memorial on the National Mall in Washington, D.C. (1911–1922). Fugitive Slave Act: Part of the Compromise of 1850 that enhanced the Constitution’s 1787 fugitive slave clause by creating a system of federal enforcement to manage slaveholder claims on runaway slaves. Before the war, such claims were handled by state officials, and many free states passed personal liberty laws to protect free blacks from being falsely claimed as runaways; these laws, however, also helped abolitionists hide actual fugitive slaves. The new act put federal marshals in charge of runaway slave claims in an attempt to override state laws. Nonetheless, many free states refused to help implement the Act, making it difficult to enforce. Furan, R. (active, 20th century): Painter who depicted Harriet Tubman in 1963. Gardner, Alexander (1821–1882): Scottish-American scientist and photographer who worked with photographer Mathew Brady. Gardner served as the manager of Brady’s Washington, D.C. gallery until the outbreak of the Civil War. Gardner produced and published more than 3,000 images from the war, taken by himself and others he hired to help him. One hundred of these appear in the landmark publication Gardner’s Photographic Sketch Book of the War. The collection, however, was a commercial failure. After the war Gardner traveled to the West and continued photographing. Garrison, William Lloyd (1805–1879): Abolitionist and publisher who founded the anti-slavery newspaper The Liberator in 1831. Garrison rejected colonization and believed that African Americans were equals of white citizens and should be granted political rights in American society. He co-founded the American Anti-Slavery Society and in 1854 publicly burned copies of the U.S. Constitution and the Fugitive Slave Act because they protected slavery. During the Civil War he supported the Union, but criticized President Lincoln for not making abolition the main objective of the war. After the Civil War and the passage of the 13th Amendment banning slavery, Garrison fought for temperance and women’s suffrage. Refers to the type of subject matter being depicted. Landscapes, still lifes, and portraits are different genres in art. 
“Genre” can also specifically refer to art that depicts scenes of everyday life. Gifford, Sanford Robinson (1823-1880): American landscape painter and native of Hudson, New York. Influenced by Thomas Cole, founder of the Hudson River School of painting, Gifford studied at the National Academy of Design, but taught himself to paint landscapes by studying Cole’s paintings and by sketching mountain scenes. He developed an individual style by making natural light the main subject of his paintings. Gifford traveled widely throughout his career, painting scenes from Europe, the Near East, the American West, the Canadian Pacific region, and Alaska. Gifford also served in the Union army, although his art makes few references to his experience of the war. Opaque paint similar to watercolor. Gouache is made by grinding pigments in water and then adding a gum or resin to bind it together. The paint has a matte finish. Graff, J. (active, 19th century): Painter who depicted the Chicago Zouaves, a famous Civil War drill team, during their visit to Utica, New York. Grand Army of the Republic (G.A.R.): An organization for honorably discharged veterans of the Union army founded in Illinois in 1866. Its hundreds of thousands of members helped needy and disabled veterans, lobbied for the passage of pension laws and government benefits for veterans, encouraged friendship between veterans, and promoted public allegiance to the United States Government; it also served as a grass roots organizing arm of the Republican Party. The G.A.R. helped make Decoration Day (Memorial Day) a national holiday and was responsible for making the pledge of allegiance a part of the school day. Grant, Ulysses S. (1822–1885): Union military leader during the Civil War. Grant attended West Point and fought in the Mexican-American War prior to his Civil War service. After fighting in the Mississippi Valley and winning victories at Shiloh and Vicksburg, Grant moved to the East to act as General in Chief of the United States Army in March 1864. His relentless campaign ground down Robert E. Lee’s Army of Northern Virginia for the next year, culminating in Lee’s surrender to Grant at Appomattox Court House, Virginia, on April 9, 1865. He was later elected eighteenth President of the United States from 1869 to 1877. Picture that features more than one person and communicates something about them. Because it was important to include certain people in a group portrait, artists and publishers sometimes added individuals who hadn’t actually posed for the artist, or left out some of those who did. Great Seal of the United States (also called the Seal of the United States): National coat of arms for the United States. The design, created on June 20, 1782, portrays a bald eagle holding a shield representing the original thirteen states. The blue band above represents Congress and the stars represent the U.S. on the world stage. The Latin language motto E Pluribus Unum means “out of many, one.” The olive branch symbolizes peace; thirteen arrows symbolize war. On the reverse, a pyramid symbolizes strength and duration. Over it is an eye, symbolizing God. There are two other mottoes: Annuit Coeptis, meaning “He [God] has favored our undertakings,” and Novus Ordo Seclorum, meaning “a new order of the ages.” Site of radical abolitionist John Brown’s October 17, 1859, raid, where he and twenty-two men (white and black) captured a federal armory and arsenal as well as a rifle works. 
Brown hoped to inspire a slave uprising in the surrounding area, but instead he and most of his men were captured by U.S. Marines led by Robert E. Lee, future General of the Confederate Army of Northern Virginia. Many of the raiders died, and Brown was put on trial and then hanged for his actions. Brown’s fiery statements during his trial were inspirational to Northern abolitionists and outraged Southerners. Harper’s Weekly (A Journal of Civilization): Popular Northern, New York-based, illustrated magazine (1857–1916) and important news source about the Civil War. It consisted of news, popular interest stories, illustrations, and war-related features. Harper’s employed illustrators and artists such as Edwin Forbes and Winslow Homer to make images, sometimes while traveling with the Northern armies. Healy, George P. A. (1813–1894): American painter of portraits and historic scenes. Healy studied in France and created works for European royalty before he returned to America. Healy was one of the best-known and most popular portrait painters of his time. Between 1855 and 1867, Healy lived in Chicago and painted important political figures like Abraham Lincoln as well as famous authors and musicians. After the Civil War, Healy traveled throughout Europe painting commissions before returning to Chicago in 1892. Herline, Edward (1825–1902): German-American lithographer and engraver. Herline was active in Philadelphia starting in the 1850s, working with several print publishers, including Loux & Co. He was known for his artistic skill in creating microscopic details in his views. Herline produced a wide range of lithographs including city views, book illustrations, maps, and images for advertisements. Hill, A. (active, 19th century): Lithographer who created images for Ballou’s Magazine, a nineteenth-century periodical published in Boston, Massachusetts. Hollyer, Samuel (1826–1919): British-American printmaker who worked in lithography, etching, and engraving. Hollyer studied in London before immigrating to America in 1851. Hollyer worked for book publishers in New York City and was known for portraits, landscapes, and other illustrations before, during, and after the Civil War. Term used to describe the area of a nation or region at war that is removed from battlegrounds and occupied by civilians. During the Civil War, there were Northern and Southern homefronts. Homer, Winslow (1836-1910): American painter and artist of the Civil War period. Homer used his art to document contemporary American outdoor life and to explore humankind’s spiritual and physical relationship to nature. He had been trained in commercial illustration in Boston before the war. During the conflict he was attached to the Union’s Army of the Potomac and made drawings of what he saw. Many of these were published in the popular magazine Harper’s Weekly. After the war, Homer became more interested in painting, using both watercolors and oils. He painted children, farm life, sports, and the sea. Horton, Berry (1917–1987): African American artist who worked in Chicago. Horton made figure drawings and painted. Hudson River School: Group of American landscape painters in the nineteenth century (about 1825 to the 1870s) who worked to capture the beauty and wonder of the American wilderness and nature as it was disappearing. Many of the painters worked in or around New York’s Hudson River Valley, frequently in the Catskill and Adirondack Mountains, though later generations painted locations outside of America as well. 
This group is seen as the first uniquely American art movement since their outlook and approach to making art differed from the dominant European artistic traditions. State of being or conception that is grander or more perfect than in real life. In art, this may mean making a sitter look more beautiful or a leader more powerful. Much art and literature, especially before 1900, tended to idealize its subjects. Combination of newspaper and illustrated magazine (such as Harper’s Weekly, Leslie’s Illustrated News, etc.) that appeared in the United States in the 1850s. In an era before television and the internet, these offered a very visual experience of current events. The technology did not exist to publish photographs in such publications at the time. Instead, a drawing was made from a photograph, and then a print was made from the drawing. This was how images based on photographs appeared. Publications also hired sketch artists to go out into the field; their drawings were also turned into illustrations. Immke, H. W. (1839–1928): Illinois-based photographer. Immke emigrated from Germany to Peru, Illinois, in 1855 where he studied farming before moving to Chicago in 1866. There, he worked with Samuel M. Fassett, who had one of the best equipped photography studios of the Civil War era. Immke established his own studio in Princeton, Illinois, later that year and operated a very successful business through 1923. He specialized in portraits, with over four hundred images of early Bureau County Illinois settlers in his collection; he also produced landscapes and genre scenes (portrayals of daily life). Movement towards an economy dominated by manufacturing rather than agriculture. An industrial economy relies on a factory system, large-scale machine-based production of goods, and greater specialization of labor. Industrialization changed the American landscape, leading to artistic and cultural responses like the Hudson River School of painting and the development of parks in urban areas—an interest in nature that was seen as disappearing. By the mid-nineteenth century, the northern United States had undergone much more industrialization than had the South, a factor that contributed to the Union victory over the Confederacy during the Civil War. Military unit of soldiers who are armed and trained to fight on foot. Jewett, William S. (1821-1873): American painter who focused on portraits, landscapes, and genre paintings (scenes of everyday life). He studied at New York City’s prestigious National Academy of Design before being drawn to California by the promise of wealth during the Gold Rush. Although his mining career failed, Jewett discovered that his artistic talents were in high demand among California’s newly rich, who prized his status as an established New York painter. Jewett became one of California’s leading artists. Kansas-Nebraska Act of 1854: Law that declared that popular sovereignty, rather than the Missouri Compromise line of 36° 30´ latitude, would determine whether Kansas and Nebraska would be free or slave states. (Popular sovereignty meant that residents of each territory should decide whether slavery would be permitted, rather than the federal government.) After the bill passed, pro-slavery settlers in Kansas fought anti-slavery settlers in a series of violent clashes where approximately fifty people died. This era in Kansas history is sometimes referred to as “Bleeding Kansas” or the “Border War.” Kansas was admitted to the Union as a free state in 1861. 
Keck, Charles (1875–1951): American sculptor known for his realistic style. Born in New York City and a student of the National Academy of Design, Keck apprenticed under celebrated sculptor Augustus Saint-Gaudens before becoming his assistant. Keck’s gift for realistic depiction is seen in his 1945 bronze sculpture The Young Lincoln. Traditional wool cap worn by Civil War foot soldiers. It had a short visor and a low, flat crown. Both the Union and Confederate armies wore kepis, but Union soldiers wore blue and Confederates wore grey. Kurz, Louis (1835–1921) and Kurz & Allison (1878–1921): Austrian-born lithographer and mural painter who primarily worked in Chicago after immigrating to America in 1848. Kurz was known for his book Chicago Illustrated, a series of lithographs featuring views of the city and its buildings. After 1878 Kurz became a partner in an art publishing firm with Alexander Allison. Their company, Kurz & Allison, created chromolithographs (color-printed lithographs) on a variety of subjects, including Abraham Lincoln and the Civil War. The firm continued until Kurz’s death in 1921. An outdoor space, or view of an outdoor space. Landscapes in art are often more than just neutral portrayals of the land. They can reflect ideas, attitudes, and beliefs, and may even refer to well-known stories from the past. Landscapes are also the settings for myths, biblical stories, and historical events. At the time of the Civil War, landscape paintings were often used to communicate ideas about American expansion, patriotism, and other ideas relevant to the time. Law, William Thomas (active, 19th century): Painter who depicted the 1860 Republican National Convention in Chicago. Lawrence, Martin M. (1808–1859): American photographer who had a studio in New York. Lawrence trained as a jeweler, but began to make daguerreotypes (an early type of photograph) in the early 1840s. He was well-regarded amongst his peers for his commitment to experimenting with new techniques in early photography. He was profiled in the new publication The Photographic Art Journal in 1851 as a leader in his field. Lee, Robert E. (1807–1870): Confederate military leader during the Civil War. Lee graduated second in his class from West Point in 1829 and served in the U.S. Army until the secession of his home state of Virginia in 1861. Lee then resigned from the U.S. Army to join the Confederate cause. In May 1862, Lee took command of the Confederacy’s Army of Northern Virginia. He won victories at Manassas and Chancellorsville, and eventually became General in Chief of all Confederate armies on February 6, 1865. Lee surrendered to Union General Ulysses S. Grant on April 9, 1865, effectively ending the Civil War. Cast or model of a person’s face and/or hands made directly from that person’s body. A life mask is made from a living subject and a “death mask” from the face of a deceased person. Typically grease is applied to the face or hands, which are then covered with plaster that hardens to form a mold. Abraham Lincoln was the subject of two life masks. Sculptors often made or used these to aid them in creating portraits. Sometimes the masks were used to make metal or plaster casts. Lincoln, Abraham (1809–1865): Sixteenth President of the United States. Lincoln was an Illinois lawyer and politician before serving as a U.S. Representative from 1847 to 1849. He lost the 1858 election for U.S. Senate to Democrat Stephen Douglas, but their debates gave Lincoln a national reputation. 
In 1860, Lincoln won the Presidency, a victory that Southern radicals used as justification for secession. Lincoln’s Emancipation Proclamation went into effect on January 1, 1863, which led to the eventual abolition of slavery. Re-elected in 1864, Lincoln was assassinated by John Wilkes Booth shortly after the war’s end. Type of print made using a process of “drawing upon stone,” where a lithographer creates an image on a polished stone with a greasy crayon or pencil. The image is prepared by a chemical process so that the grease contained in it becomes permanently fixed to the stone. The stone is sponged with water, and printer’s ink, containing oils, is rolled over the surface. Because oil and water repel each other, the ink remains in areas with grease. The image is then transferred to paper using a special press. Chromolithography, a multicolored printing process, uses a different stone for each color of ink. Loux & Co.: Philadelphia lithography firm, active in the nineteenth century, that specialized in maps and views of cities. Loux & Co. worked with artists like Edward Herline. Lussier, Louis O. (1832–1884): Canadian-American portrait painter. Lussier studied in San Francisco and worked in California with partner Andrew P. Hill before relocating to Illinois after the Civil War. March to the Sea: Military campaign (also known as the Savannah Campaign) led by Union General William Tecumseh Sherman between November 15 and December 21, 1864. Sherman marched with 62,000 Union soldiers between Atlanta and Savannah, Georgia, confiscating or destroying much of the Southern civilian property in their path. This march is an early example of modern “total war,” as it strove to destroy both the Confederacy’s civilian morale and its ability to re-supply itself. Martyl (Suzanne Schweig Langsdorf) (1918-2013): American painter, printmaker, muralist, and lithographer who trained in art history and archaeology. Langsdorf studied at Washington University in St. Louis. She was given her art signature name, “Martyl,” by her mother, who was also an artist. Martyl painted landscapes and still lifes in both the abstract and realist tradition. She taught art at the University of Chicago from 1965 to 1970. Person who suffers, makes great sacrifices, or is killed while standing for his or her beliefs. Mayer, Constant (1832–1911): French-born genre (everyday scenes) and portrait painter. Mayer studied at the prestigious École des Beaux-Arts in Paris before immigrating to America. Mayer’s works were popular in the States and abroad. Generals Ulysses S. Grant and Philip Sheridan are among the noteworthy individuals who had their portraits painted by Mayer. The material or materials an artwork is made of, such as oil paint on canvas or bronze for sculpture. During the Civil War more and more media were becoming available and affordable, including photography and various kinds of prints. Merritt, Susan Torrey (1826–1879): Amateur artist from Weymouth, Massachusetts who is noted for her collage painting Antislavery Picnic at Weymouth Landing, Massachusetts. Military Order of the Loyal Legion of the United States (M.O.L.L.U.S.): Patriotic organization founded by Philadelphia Union military officers immediately after the assassination of President Abraham Lincoln. M.O.L.L.U.S. was established to defend the Union after the war, as there were rumors following Lincoln’s death of a conspiracy to destroy the federal government through assassination of its leaders. Officers in M.O.L.L.U.S. 
served as an honor guard at Lincoln’s funeral. Miller, Samuel J. (1822–1888): Photographer who created daguerreotypes (an early form of photography) in Akron, Ohio. Miller’s sitters included anti-slavery activist Frederick Douglass. First major legislative compromise about slavery in the nineteenth century. In 1819, Missouri sought to join the Union as a slave state. Northerners opposed to slavery’s expansion westward tried to force Missouri to adopt an emancipation plan as a condition for admission; Southerners angrily opposed this. A compromise bill was forged in 1820, when Maine was admitted as a free state alongside slaveholding Missouri. In addition, slavery was prohibited from territory located north of the 36° 30’ latitude (except Missouri). The precedent of admitting slave and free states in tandem held until the Compromise of 1850. In sculpture, the method of adding or shaping material (clay, wax, plaster) to form an artwork. In painting and drawing, modeling is the method of making things look three-dimensional by shading their edges, for example. Moran, Thomas (1837-1926): Born in England but raised in Philadelphia, Moran was the last of the nineteenth-century American landscape painters known as the Hudson River School. After a brief apprenticeship as an engraver, he studied painting, traveling to England in 1862 and Europe in 1866. In 1872 the United States Congress purchased his painting Grand Canyon of the Yellowstone, a work that resulted from his participation in the first government-sponsored expedition to Yellowstone. Moran’s illustrations helped convince the government to preserve the region as a national park. Over Moran’s long and commercially successful career he painted the American West, Italy, Cuba, Mexico, and New York. Mount, William Sidney (1807-1868): American portraitist and America’s first major genre (everyday scene) painter. Mount studied briefly at the National Academy of Design but was mainly self-taught. By drawing his subject matter from daily life, Mount rejected the high-culture demand for grand historical scenes modeled after European examples. Mount’s images were reproduced as engravings and color lithographs based on his paintings—a common practice before the age of photography. These prints popularized his art and encouraged other artists to pursue genre subjects. Hailed by critics of the era as an original American artist, Mount created works that reflect daily life and the politics of his time. Mulligan, Charles J. (active, 19th and early 20th centuries): Talented American sculptor who trained under renowned sculptor Lorado Taft. Mulligan studied at the School of the Art Institute of Chicago and later at the prestigious École des Beaux-Arts in Paris. Mulligan also taught at the School of the Art Institute of Chicago before leaving to focus on commissioned work, such as his acclaimed 1903 portrayal of the martyred Lincoln, Lincoln the Orator. Painting (typically large scale) created directly on a wall or on canvas mounted to a wall. Myers, Private Albert E. (active, 19th century): Amateur painter and Union soldier from Pennsylvania. Myers painted an image of Camp Douglas in Chicago (a prisoner-of-war camp for captured Confederate soldiers, and a training and detention camp for Union soldiers) while he was stationed there during the Civil War. Toy version of nineteenth-century stage spectacles. They were meant to imitate shows that featured large-scale pictures of famous events or dramatic landscapes. 
Children looked into the box of the myriopticon and moved knobs to change from one picture to another. The toy often came with posters, tickets, and a booklet from which to read a story to accompany the pictures. Nall, Gus (active, 20th century): African American representational and abstract painter. Nall studied at the Art Institute of Chicago, and later taught art. He was active in Chicago in the 1950s and 1960s. Nast, Thomas (1840–1902): Popular political cartoonist. Born in Germany, Nast immigrated to America in 1846. He began his career as a reportorial artist and freelance illustrator in the years leading up to the Civil War. As an ardent supporter of the Union cause, Nast created many recruitment posters and newspaper promotions for the war effort. He joined Harper’s Weekly in 1862 and quickly gained fame as a political cartoonist and satirist, working to expose corruption in government in the post-Civil War years. Nast died in Ecuador after contracting yellow fever while serving there as Consul General, a post to which he had been appointed by President Theodore Roosevelt. Artistic approach in which artists attempt to make their subjects look as they do in the real world. Such artworks are said to be "naturalistic." New York State Emancipation Act of 1827: Legislation formally banning slavery in New York State. After the Revolutionary War, New York gradually enacted laws that restricted the growth of slavery. Importing new slaves became illegal in 1810, for example. The 1827 act grew out of legislation passed in 1817 that set July 4, 1827, as the date when the following additional measures for enslaved African Americans would go into effect: those born in New York before July 4, 1799 would be freed immediately; all males born after that date would be freed at the age of 28; and all females would be freed at the age of 25. Painting made from pigment (color), such as ground minerals, suspended in oil. Oil paintings can have a glowing quality and are admired for their jewel-like colors. They typically require a long time to dry. Military weapons including anything that is shot out of a gun, such as bullets or cannonballs. O’Sullivan, Timothy (c.1840–1882): Photographer who worked with Mathew Brady and Alexander Gardner. O’Sullivan began his career in photography as an apprentice to Mathew Brady. He left Brady’s studio to work independently as a Civil War photographer for two years before joining the studio of Alexander Gardner, helping to provide images for Gardner’s Photographic Sketch Book of the War. After the war, O’Sullivan accompanied and made photographs for many government geographical surveys of the United States before being appointed as chief photographer for the United States Treasury in 1880. P. S. Duval & Son (1837–1879): Philadelphia lithography firm founded by French-American lithographer Peter S. Duval. Duval was brought to America from France by Cephas G. Childs to work in his Philadelphia firm. Duval was one of America’s most prestigious makers of chromolithographs (lithographs printed in multiple colors). After a fire in 1856, Duval’s son Stephen joined the firm. The firm was famous for being an innovative lithographic leader that printed well-made, colorful city views, historic scenes, and portraits on a variety of subjects. Created by the repetition of elements (shapes or lines, for example) in a predictable combination. Philippoteaux, Paul D. 
(1846–1923): French painter and artist known for creating cycloramas (massive oil-on-canvas paintings that were displayed with real props for a three-dimensional effect). Philippoteaux was commissioned to paint a “Battle of Gettysburg” cyclorama in 1882. He created several paintings in the post-Civil War era depicting its battles and military leaders. An image created by a photographer using a camera. Photography is a scientific and artistic process that uses light to create a permanent image. During the Civil War era, a photographer used a lens to focus light on a light-sensitive surface (like a specially prepared metal or glass plate or film) for a specific length of time. In pre-digital photography, the surface was then processed (or "developed") with chemicals to reveal an image. Types of photographs included albumen prints, ambrotypes, daguerreotypes, and tintypes. Pleasing to look at or resembling art; literally means “like a picture.” In the nineteenth century, the term was also understood to mean an established set of aesthetic ideals that were developed in England and often used in American landscape painting, like those produced by the Hudson River School. Substance that gives color to paint, ink, or other art media. Oil paints, for example, are made from powdered pigment suspended in oil. Pigments may be made from natural substances, such as minerals and plants, or may be synthetic. The United States Constitution provides that each state’s citizens be represented in Congress by people they elect. Each state receives two Senators, but in the House the number of representatives varies according to a state’s population, as determined by census every ten years. During the Constitutional Convention of 1787, Southern slaveholding states refused to join the Union unless they could include their slave populations in this calculation. Without this measure, they would have been overwhelmingly outnumbered by free state representatives. After debate, the convention compromised by allowing states to count three-fifths of their slave populations toward representation in the House. Artwork or building that has many colors. Temporary floating bridge made by placing small boats called pontoons next to each other. The pontoons are tied together but not to the land, so the bridge can move with the current of the river or stream. During the Civil War, moving the bridge parts over land was done by wagon, and required many men and horses. The Union army became exceptionally skilled at building pontoon bridges, even across the swamps of the Deep South. Political principle coined by Senator Lewis Cass of Michigan during his 1848 Presidential campaign, and later championed by Senator Stephen Douglas of Illinois. The principle stated that settlers of each territory, not the federal government, should determine whether or not slavery would be permitted there. Popular sovereignty was a compromise to resolve Congressional conflict over whether or not United States territories should be admitted to the Union as free or slave states. Though the Democratic Party endorsed the idea, it was rejected by many northerners in favor of Free Soil ideas, and the pro-slavery South grew increasingly hostile toward it. Total number of votes directly cast by eligible voters for a candidate in an election. In the United States presidential election system, the popular vote in each state determines which candidate receives that state’s votes in the Electoral College. The Electoral College is a voting body created by the U.S. 
Constitution that elects the President and Vice President using appointed electors. The number of electors for each state is equal to the state’s number of federal representatives and senators. These electors are obligated to cast their votes for the ticket that won the popular vote in their respective states. Representation or depiction of a person in two or three dimensions (e.g. a painting or a sculpture). Sometimes an artist will make a portrait of himself or herself (called a self-portrait). Powers, Hiram (1805–1873): One of the most influential American sculptors of the nineteenth century. Powers developed a passion for sculpture as a young man while studying in Cincinnati under Prussian artist Frederick Eckstein. Powers began his career doing portrait busts of friends and later politicians. He is best known for The Greek Slave (1843), which was championed as a symbol of morality, especially during its tour of the United States amid rising abolitionist tensions. He spent much of his life within the artistic expatriate community in Florence, Italy, and received many commissions throughout his later career, notably some for the Capitol in Washington, D.C. Price, Ramon B. (1930–2000): African American artist and curator. Price was born in Chicago and educated at the School of the Art Institute of Chicago and Indiana University at Bloomington. Mentored by Margaret Burroughs, co-founder of the DuSable Museum of African American History, Price became a painter and a sculptor who focused his career on teaching. Price educated high school and college students before becoming chief curator at the DuSable Museum. A mechanically reproduced image, usually on paper, but sometimes on another type of surface, like fabric. Printmaking encompasses a range of processes, but prints are generally produced by inking a piece of wood, metal, or polished stone that has a design or drawing on it. Pressure is applied when the inked surface comes into contact with the material being printed on; this transfers the design to the final printed surface. Proctor, Alexander Phimister (1860–1950): Painter, etcher, and sculptor known for his unsentimental representations of the American West and his sculptures of historical and symbolic subjects. Proctor began his career as a wood engraver, and later gained international recognition for his 35 sculptures of western animals, commissioned for the World’s Columbian Exposition in 1893. Throughout his career, his subjects ranged from animals inspired by his frequent hunting trips to political icons, such as General Robert E. Lee and William T. Sherman; he also sculpted figures that represent American ideals, such as the Pioneer Mother. One who opposes or takes arms against his or her government. During the Civil War, Northerners applied this term to supporters of the Confederacy, particularly to soldiers and armies. Southerners also adopted the name as a badge of honor, associating it with the colonial rebels of the American Revolution. Act of public resistance—often violent—to a government or ruler. In the Civil War, the North saw the secession of the South as an act of rebellion, while Southerners saw the formation of the Confederacy as within their states’ rights. Rebisso, Louis T. (1837–1899): Italian-born sculptor who created monumental works in the United States. Rebisso was forced to leave Italy for political reasons while in his twenties. He immigrated to Boston and later settled in Cincinnati, the city with which he is linked. 
He worked as a professor of sculpture at the Art Academy of Cincinnati. The artist is well known for his bronze Ulysses S. Grant Memorial (1891) in Chicago’s Lincoln Park. Period after the Civil War during which the Confederacy was reintegrated into the Union between 1865 and 1877. The era was turbulent, as former slaves fought for citizenship rights while white Southerners violently resisted change. By 1877, whites again controlled their states, after which they systematically oppressed black citizens politically and economically. Renesch, E. G. (active, 20th century): Creator of patriotic images and recruiting posters around the time of World War I, some of which included Abraham Lincoln and others that showed African Americans in uniform. An image or artistic likeness of a person, place, thing, or concept. Political party formed in 1854 by antislavery former members of the Whig, Free Soil, and Democratic Parties. Republicans ran their first candidate for president in 1856. At that time, they pledged to stop the spread of slavery, maintain the Missouri Compromise, admit Kansas to the Union as a free state, and oppose the Supreme Court’s decision in the Dred Scott case. The party was mainly composed of Northerners and it sought the support of Westerners, farmers, and Eastern manufacturers. Abraham Lincoln ran for president as a Republican and won the election in 1860. Rogers, John (1829–1904): Renowned artist who sculpted scenes of everyday life, families, and Civil War soldiers. Rogers primarily made statuettes, referred to as Rogers Groups, which were mass-produced as plaster casts and sold to and displayed in households across the country. He also received commissions for several larger-scale pieces, such as a sculpture of General John F. Reynolds in Philadelphia. Approach or movement in art that stresses strong emotion and imagination. Romanticism was dominant in the arts between about 1780 and 1840, but is also present in art made since then. Saint-Gaudens, Augustus (1848–1907): Foremost American sculptor of his era. Saint-Gaudens began his career as an apprentice to a stone-cutter at age thirteen. He studied at Cooper Union and the National Academy of Design, both in New York. He collaborated with other American painters and architects on several projects, while also creating important independent sculptures and reliefs. Some of his most famous works include his public monuments to President Lincoln and Colonel Robert Gould Shaw. Saint-Gaudens also designed decorative arts, coins and medals, busts, and relief portraits. Events held in Northern cities during the Civil War to raise money to support Union soldiers. The fairs were organized through the United States Sanitary Commission, formed in response to the Army Medical Bureau’s inability to maintain clean, medically safe environments for soldiers, particularly the wounded. Women played an important role in founding the commission and organizing the fairs. The first event, the Northwestern Soldiers’ Fair, was held in Chicago in October and November 1863. Donated items were exhibited and purchased to benefit the Union military. The atmosphere of these fairs was festive, with lots of displays, vendors, music, and speeches. Saunders, Harlan K. (1850–c. 1950): Artist who served in the Civil War, fighting with the 36th Illinois Volunteer Infantry. Saunders painted General John A. Logan after the war. Art consisting of images carved onto ivory or ivory-like materials. 
Initially the term referred to art made by American whalers who carved or scratched designs onto the bones or teeth of whales or the tusks of walruses. Much of this art was made during the whaling period (between the 1820s and the 1870s). Seamen often produced their designs using sharp implements and ink or lampblack (produced from soot from oil lamps, for example) wiped into the scratched lines to make the intricate drawings visible. Three-dimensional work of art. Sculptures can be free-standing or made in relief (raised forms on a background surface). Sometimes, a sculpture is described according to the material from which it is made (e.g., a bronze, a marble, etc.). To break away from a larger group or union. Secession has been a common feature of the modern political and cultural world (after 1800) when groups of all kinds sought identity and independence. In the context of the Civil War, the Confederacy argued that a state could secede if it believed the federal government failed to meet its Constitutional duties. Because the states had voluntarily entered the federal government, they could likewise exit the Union should they see fit to do so. In 1860–1861, slaveholding states believed that Congress’ failure to protect slavery in the territories justified secession. Sense of identity specific to a region of the country or group of states. Leading up to the Civil War, sectionalism was caused by the growing awareness that different regions of the country (North and South) had developed distinct economic interests and cultures as a consequence of their forms of labor. Those differences prompted political conflicts over the place of slavery in the country. The most radical brand of sectionalism in the United States led to secession. Shaw, Robert Gould (1837–1863): Colonel in the Union Army who led the African American 54th Massachusetts Volunteer Infantry during the Civil War. Shaw was a member of a prominent Boston abolitionist family, and he attended Harvard in the years before the Civil War. Shaw was killed on July 18, 1863 while leading his troops in the Second Battle of Fort Wagner near Charleston, South Carolina, and was buried at the battle site in a mass grave with his soldiers. Sheridan, Philip (1831–1888): Union military leader during the Civil War. Sheridan rose quickly through the ranks of the Union Army during the war, becoming a Major General in 1863. In 1864, he became famous for the destruction of the Shenandoah Valley of Virginia, an area rich in resources and foodstuffs needed by the Confederacy. After the war, Sheridan was military governor of Texas and Louisiana before leading military forces against Indian tribes in the Great Plains. Sheridan became Commanding General of the United States Army in 1883 until his death in 1888. Sherman, William Tecumseh (1820–1891): Union military leader during the Civil War famous for his “March to the Sea,” a total war campaign through Georgia and South Carolina that severely damaged the Confederacy. Sherman graduated from West Point in 1840 and served in the military until 1853. After careers in banking and military education, he re-joined the U.S. Army as a colonel in 1861. He was promoted to Major General after several successful battles. He accepted the Confederate surrender of all troops in Georgia, Florida, and the Carolinas on April 26, 1865. From 1869 to 1883, Sherman served as Commanding General of the U.S. Army. Person in a painting, photograph, sculpture, or other work of art who is likely to have posed for the artist. 
“Sitting for a portrait” means to pose for one. Drawing or painting that is quickly made and captures the major details of a subject. A sketch is not intended to be a finished work. Slave Power Conspiracy: Idea that slaveholders held too much power in the federal government and used that power to limit the freedoms of fellow citizens. In particular, proponents of the idea pointed to the ways that abolitionists were prevented from petitioning against slavery by slavery’s sympathizers in Congress, or that slaveholders had dominated the presidency by virtue of the three-fifths compromise, (of the first fifteen presidents, ten had owned slaves) or unfairly influenced the Supreme Court, as in the Dred Scott Decision of 1856. The idea became central to the Republican Party’s platform, and to Abraham Lincoln’s campaign in 1860. System in which people are considered property, held against their will, and forced to work. By the Civil War, slavery was fundamental to the economy, culture, and society of the South, and the slave population numbered four million. Under this system, children born to enslaved mothers were also enslaved. Slavery was thought suitable only for people of African descent, both because, historically, the slave trade had been based on kidnapping African peoples, and because most white Americans believed themselves superior to darker skinned peoples. Slaves built the South’s wealth through their uncompensated forced labor, growing cotton and other crops. Southern Rights Democrats: Faction of the Democratic Party made up of Southerners who left the national party just before the Election of 1860. This group openly discussed seceding from the Union and ran on a platform that rejected popular sovereignty, demanded legal protection for slavery in the Western territories, and advocated that the United States reopen the slave trade with Africa (which had ended in 1808). In 1860 John Breckinridge ran for president as a Southern Rights Democrat, receiving seventy-two electoral votes all from the Deep South states, and coming in second to Republican winner Abraham Lincoln, who received 180 electoral votes. Spencer, Lilly Martin (1822-1902): Born in England but raised in Ohio, Spencer focused on genre paintings of American middle-class home life. Spencer showed talent at a young age and trained with American artists around Cincinnati before moving to New York. She was an honorary member of National Academy of Design, the highest recognition the institution then permitted women. Spencer was active in the art world while also marrying and raising children. Spencer gained fame in Europe and America through her humorous images of domestic life, many of which were reproduced as prints. Spencer continued to paint until her death at the age of eighty. Type of agricultural product that is in constant demand and is the main raw material produced in a region. Examples of staple crops in the South include cotton, sugar, tobacco, and rice. In the pre-Civil War United States, cotton was the largest export staple crop. Two nearly identical photographs mounted on a card. When examined through a special viewer (a stereoscope), they give the impression of three-dimensional depth. The principles of stereographic photography were known since the beginning of photography. Stereographic images were made with cameras that had two separate lenses positioned an “eye’s distance” apart. The effect works because, like human eyes, the stereoscope merges two images recorded from slightly different positions into one. 
Oversimplified conception, opinion, or belief about a person or group. Stereotypes live on because they are repeated, but they are often cruel and inaccurate. The term also is used for the act of stereotyping a person or group. Artwork showing objects that are inanimate (don’t move) and arranged in a composition. Still-life paintings often feature common everyday items like food, flowers, or tableware. Sometimes the selection of items is symbolic, representing a person or an idea. Stowe, Harriet Beecher (1811–1896): Abolitionist and author of the anti-slavery novel Uncle Tom’s Cabin, published between 1851 and 1852. Stowe was the daughter of Lyman Beecher, preacher and founder of the American Temperance Society. Uncle Tom’s Cabin became a bestseller and enabled Stowe to pursue a full-time career as a writer of novels, short stories, articles, and poems. Stowe used the fame she gained from Uncle Tom’s Cabin to travel through the United States and Europe speaking against slavery. Stringfellow, Allen (1923–2004): African American painter and Chicago gallery owner. Stringfellow studied at the University of Illinois and the Art Institute in Milwaukee, Wisconsin. Along with traditional painting, he worked as a printmaker, and in collage and watercolor. Stringfellow was mentored by the African American painter William Sylvester Carter. Many of Stringfellow’s artworks involve images of religion and jazz. Individual or characteristic manner of presentation or representation. In art, an artist, a culture, or a time period may be associated with a recognizable style. Something that stands for or represents an idea, quality, or group. The figure of “Uncle Sam” represents the United States, for example. Artists often use symbolism to represent ideas and events in ways that are easy to visualize. Taft, Lorado (1860–1936): Sculptor, educator, and writer regarded as one of Chicago’s most renowned native artists. Taft studied at the prestigious École des Beaux-Arts in Paris and returned to Chicago, where he opened a sculpture studio and taught and lectured about sculpture at the School of the Art Institute of Chicago. He also lectured on art history at the University of Chicago, nearby his studio. Taft earned praise for his work commissioned for the Horticultural Building at the World’s Columbian Exhibition in 1893, and soon began making monumental pieces that can be seen across the country. Tholey, Augustus (birth date unknown–1898): German-American painter, pastel artist, lithographer, and engraver. Tholey moved to Philadelphia in 1848 where, over the next few decades, he worked for a number of publishing firms. He specialized in military and patriotic portraits. Type of photograph popular during the Civil War era, sometimes called a “ferrotype.” To make one, a photographic negative is printed on a blackened piece of very thin iron (not tin, incidentally). A negative seen against a black background turns the negative into a positive image, as with an ambrotype, another type of photograph. Tintypes were very popular because they were inexpensive and could be put into photo-albums and sent through the mail, unlike fragile and bulkier daguerreotypes. Many Civil War soldiers had tintypes made of themselves. Way an artist interprets his or her subject. Also refers to his or her uses of art materials in representing a subject. Truth, Sojourner (1797–1883): Former slave and advocate for equality and justice. 
Born into slavery in New York State as Isabella Baumfree, she walked away from slavery in 1825 after her owner broke his promise to grant her freedom. She took the name Sojourner Truth in 1843, and committed her life to preaching against injustice. Truth worked with abolitionist leader William Lloyd Garrison, who published her biography in 1850. Following its publication, Truth became a popular anti-slavery and women’s rights speaker. After the war, Truth campaigned for the Freedman’s Relief Association and advocated for giving land in the Western territories to freed slaves. Tubman, Harriet (c.1820–1913): Former slave, abolitionist, and leader in the women’s suffrage movement. Born enslaved in Maryland, Harriet Tubman escaped slavery by age thirty and traveled to freedom in Philadelphia. She risked her life along the Underground Railroad to make several trips back to the South to lead family members and others out of bondage. Tubman became a supporter of John Brown, and spoke out publically against slavery. During the Civil War, she aided the Union army as a scout and spy in Confederate territory. After the war, Tubman became a leader in the women’s suffrage movement. Uncle Tom’s Cabin; or, Life Among the Lowly: Popular anti-slavery novel published in 1852 by the New England abolitionist and writer Harriett Beecher Stowe (1811–1896). It first appeared as installments in an abolitionist magazine before it was published in two parts. Among the most widely read books of the nineteenth century, it was translated into several languages and often performed as a play. Several of its characters and famous scenes were portrayed in art and illustrations during the Civil War period. The illustrator Hammatt Billings (1818–1874) made the well-known engravings that illustrated the book. Symbolic name for the secret network of people, routes, and hiding places that enabled African American slaves to escape to freedom before and during the Civil War. Although some white Northern abolitionists supported the network, escaping slaves were frequently assisted by fellow African Americans, both Southern slaves and Northern freedmen. Code words were often used to talk about the Underground Railroad: “conductors” such as Harriet Tubman led escaping slaves, or “cargo,” to safe places called “stations.” Shorthand for the United States federal government. During the Civil War, it became the name most frequently used to describe the states left behind after the Confederacy seceded (though they are also called “the North”). It was made up of eighteen free states, five Border States (those slave states that did not secede), and the western territories. United States Colored Infantry/Troops (U.S.C.T.): Branch of the Union Army reserved for black servicemen, as the army did not allow integrated regiments. The majority of the U.S.C.T.’s approximately one hundred seventy-nine thousand soldiers came from slave states, but African American men from all over the United States eagerly joined the Federal Army because they believed Union victory would end slavery. In the free states, for instance, nearly seventy percent of eligible African American men enlisted! As the war progressed, the War Department looked to the South to bolster the ranks, since one of the military necessities driving emancipation was to increase the fighting strength of the federal army. United States Sanitary Commission (U.S.S.C.): Civilian organization founded to help improve medical care and sanitary conditions for Union soldiers. The U.S.S.C. 
raised money and collected goods to provide supplies and medical care to soldiers. It worked with the military to modernize and provide hospital care for the wounded. Members also raised money through public events like Sanitary Fairs, where donated items were exhibited and purchased to benefit the Union military. Geographic and cultural area of the American South. During the Civil War, it included states that seceded from the Union and joined the Confederacy (Virginia, North Carolina, Tennessee, and Arkansas) and Border States which remained loyal to the Union (Delaware, Maryland, Kentucky, and Missouri). Sometimes referred to as the “Upland South,” the region is distinct from the Lower or Deep South in its geography, agriculture, and culture. Growth of cities and a movement of populations to cities. Urbanization causes economic and cultural changes that affect people in both urban and rural areas. In the time leading up to and during the Civil War, the North underwent urbanization at a fast rate. This gave the North advantages in the war in terms of both manufacturing and the ability to move people and goods from place to place. Volk, Leonard Wells (1828–1895): American sculptor who had a studio in Chicago. Many regard him as the first professional sculptor in this city. Related to Illinois Senator Stephen A. Douglas by marriage, Douglas sponsored Volk’s art education in Europe in the mid 1850s. In 1860 Volk became the first sculptor to make life casts in plaster of President Lincoln’s hands, face, shoulders, and chest. Volk became known for his war monuments, but his casts of Lincoln were frequently used by other artists to create sculptures of the president. War with Mexico: War fought between the United States and Mexico (1846–1848). After the U.S. annexed Texas in 1845, President James K. Polk attempted to purchase large swaths of western territory from Mexico. When Mexico refused, the U.S. created a border dispute that it later used as an excuse to declare war. With U.S. victory came five-hundred thousand square miles of new territory, including what would become California, New Mexico, Arizona, and parts of Utah, Colorado, Nevada, and Wyoming. Disagreements over slavery’s place in these territories provoked political tensions that led to the Civil War. Ward, John Quincy Adams (1830–1910): American sculptor in bronze, marble, and plaster. Ward studied in New York under local sculptor Henry Kirke Brown before opening his own New York studio in 1861. He enjoyed a very successful career, and was noted for his natural, realistic work. Also an abolitionist, Ward attempted to portray the complexities of emancipation in his popular sculpture The Freedman (1865). Washington, Jennie Scott (active, 20th century–today): African American painter who focuses on historical and contemporary subjects. Washington was a protégée of Margaret Burroughs, the artist, writer, and co-founder of the DuSable Museum of African American History. Educated at the American Academy of Art in Chicago and the Art Institute of Chicago, Washington also teaches art. Her public access art program, Jennie's Reflections, has been on the air in Chicago since 1989. Paint in which the pigment (color) is suspended in water. Most often painted on paper, watercolors were also used to give color to drawings and to black-and-white prints (such as those by Currier and Ives) and sometimes to photographs. They are more portable and faster drying than oil paints. 
Although watercolor was often associated with amateur or women artists, many well-known Civil War era artists like Winslow Homer, Samuel Colman, and others worked in the medium. Waud, Alfred R. (1828–1891): English born illustrator, painter, and photographer who immigrated to America in 1858 and worked as a staff artist for the magazine Harper’s Weekly during and after the Civil War. Waud’s sketches were first-person accounts of the war that reached thousands of readers. After the Civil War, he traveled through the South documenting the Reconstruction. Waud also toured the American West, depicting the frontier, Native Americans, and pioneers. Wessel, Sophie (1916–1994): American artist and community activist. A graduate of the School of the Art Institute of Chicago, Wessel was an artist under the Works Progress Administration in the late 1930s, a jobs program that helped artists and other workers weather the Great Depression. Primarily an oil painter, Wessel also worked in drawing, in sculpture, in watercolors, and as a printmaker. Wessel’s art focuses on political and social-justice subjects, like the Civil Rights Movement, rights for women, and the Anti-War Movement. She also taught art at several Chicago-area community centers. Political party founded in 1833 in opposition to the policies of President Andrew Jackson. Whigs supported a platform of compromise and balance in government as well as federal investments in manufacturing and national transportation improvements. They tended to oppose aggressive territorial expansion programs. The Whig party dissolved in 1856 over division on the issue of whether slavery should expand into the United States’ territories. Many Northern Whigs went on to found the Republican Party. White, Stanford (1853–1906): Influential architect of the firm McKim, Mead, and White. White worked with his firm and independently to design several enduring structures such as the Washington Square Arch (1889) and the New York Herald Building (1894). White was murdered by the husband of his former lover in the original Madison Square Garden (a building he had also designed). Wiest, D. T. (active, 19th century): Artist who created the image In Memory of Abraham Lincoln: The Reward of the Just after Lincoln’s assassination. Elite infantry troops and voluntary drill teams that wore showy uniforms—brightly colored jackets and baggy pants—inspired by uniform designs that French soldiers popularized in the 1830s. The French Zouaves had borrowed ideas for their uniforms from Algerian (northern African) soldiers. Zouaves existed in many armies across the world. Civil War Zouaves were often seen in parades, but they served bravely in battle, too. Colonel Elmer E. Elsworth (1837–1861), a personal friend of Abraham Lincoln and the first casualty of the Civil War, led a Zouave unit that was well known in Chicago, Illinois, and across the country.
http://www.civilwarinart.org/glossary
A screw thread, often shortened to thread, is a helical structure used to convert between rotational and linear movement or force. A screw thread is a ridge wrapped around a cylinder or cone in the form of a helix, with the former being called a straight thread and the latter called a tapered thread. A screw thread is the essential feature of the screw as a simple machine and also as a fastener. More screw threads are produced each year than any other machine element. The mechanical advantage of a screw thread depends on its lead, which is the linear distance the screw travels in one revolution. In most applications, the lead of a screw thread is chosen so that friction is sufficient to prevent linear motion being converted to rotary, that is so the screw does not slip even when linear force is applied so long as no external rotational force is present. This characteristic is essential to the vast majority of its uses. The tightening of a fastener's screw thread is comparable to driving a wedge into a gap until it sticks fast through friction and slight plastic deformation. Screw threads have several applications: - Gear reduction via worm drives - Moving objects linearly by converting rotary motion to linear motion, as in the leadscrew of a jack. - Measuring by correlating linear motion to rotary motion (and simultaneously amplifying it), as in a micrometer. - Both moving objects linearly and simultaneously measuring the movement, combining the two aforementioned functions, as in a leadscrew of a lathe. In all of these applications, the screw thread has two main functions: - It converts rotary motion into linear motion. - It prevents linear motion without the corresponding rotation. Every matched pair of threads, external and internal, can be described as male and female. For example, a screw has male threads, while its matching hole (whether in nut or substrate) has female threads. This property is called gender. The helix of a thread can twist in two possible directions, which is known as handedness. Most threads are oriented so that the threaded item, when seen from a point of view on the axis through the center of the helix, moves away from the viewer when it is turned in a clockwise direction, and moves towards the viewer when it is turned counterclockwise. This is known as a right-handed (RH) thread, because it follows the right hand grip rule. Threads oriented in the opposite direction are known as left-handed (LH). By common convention, right-handedness is the default handedness for screw threads. Therefore, most threaded parts and fasteners have right-handed threads. Left-handed thread applications include: - Where the rotation of a shaft would cause a conventional right-handed nut to loosen rather than to tighten due to fretting induced precession. Examples include: - In combination with right-handed threads in turnbuckles and clamping studs. - In some gas supply connections to prevent dangerous misconnections, for example in gas welding the flammable gas supply uses left-handed threads. - In a situation where neither threaded pipe end can be rotated to tighten/loosen the joint, e.g. in traditional heating pipes running through multiple rooms in a building. In such a case, the coupling will have one right-handed and one left-handed thread - In some instances, for example early ballpoint pens, to provide a "secret" method of disassembly. 
- In mechanisms to give a more intuitive action as: - Some Edison base lamps and fittings (such as formerly on the New York City Subway) have a left-hand thread to deter theft, since they cannot be used in other light fixtures. The term chirality comes from the Greek word for "hand" and concerns handedness in many other contexts. The cross-sectional shape of a thread is often called its form or threadform (also spelled thread form). It may be square, triangular, trapezoidal, or other shapes. The terms form and threadform sometimes refer to all design aspects taken together (cross-sectional shape, pitch, and diameters). Most triangular threadforms are based on an isosceles triangle. These are usually called V-threads or vee-threads because of the shape of the letter V. For 60° V-threads, the isosceles triangle is, more specifically, equilateral. For buttress threads, the triangle is scalene. The theoretical triangle is usually truncated to varying degrees (that is, the tip of the triangle is cut short). A V-thread in which there is no truncation (or a minuscule amount considered negligible) is called a sharp V-thread. Truncation occurs (and is codified in standards) for practical reasons: - The thread-cutting or thread-forming tool cannot practically have a perfectly sharp point; at some level of magnification, the point is truncated, even if the truncation is very small. - Too-small truncation is undesirable anyway, because: - The cutting or forming tool's edge will break too easily; - The part or fastener's thread crests will have burrs upon cutting, and will be too susceptible to additional future burring resulting from dents (nicks); - The roots and crests of mating male and female threads need clearance to ensure that the sloped sides of the V meet properly despite (a) error in pitch diameter and (b) dirt and nick-induced burrs. - The point of the threadform adds little strength to the thread. Ball screws, whose male-female pairs involve bearing balls in between, show that other variations of form are possible. Roller screws use conventional thread forms but introduce an interesting twist on the theme. The angle characteristic of the cross-sectional shape is often called the thread angle. For most V-threads, this is standardized as 60 degrees, but any angle can be used. Lead, pitch, and starts Lead (pron.: //) and pitch are closely related concepts.They can be confused because they are the same for most screws. Lead is the distance along the screw's axis that is covered by one complete rotation of the screw (360°). Pitch is the distance from the crest of one thread to the next. Because the vast majority of screw threadforms are single-start threadforms, their lead and pitch are the same. Single-start means that there is only one "ridge" wrapped around the cylinder of the screw's body. Each time that the screw's body rotates one turn (360°), it has advanced axially by the width of one ridge. "Double-start" means that there are two "ridges" wrapped around the cylinder of the screw's body. Each time that the screw's body rotates one turn (360°), it has advanced axially by the width of two ridges. Another way to express this is that lead and pitch are parametrically related, and the parameter that relates them, the number of starts, very often has a value of 1, in which case their relationship becomes equality. In general, lead is equal to S times pitch, in which S is the number of starts. 
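As a small illustration of the lead–pitch–starts relationship just described, here is a minimal R sketch; the function name and the example pitch values are chosen purely for illustration and are not taken from any thread standard.

# Lead is the axial advance per full turn: lead = starts * pitch
thread_lead <- function(pitch, starts = 1) {
  starts * pitch
}

thread_lead(pitch = 1.5)              # single-start thread, 1.5 mm pitch: advances 1.5 mm per turn
thread_lead(pitch = 1.5, starts = 2)  # double-start thread, same pitch: advances 3.0 mm per turn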
Whereas metric threads are usually defined by their pitch, that is, how much distance per thread, inch-based standards usually use the reverse logic, that is, how many threads occur per a given distance. Thus inch-based threads are defined in terms of threads per inch (TPI). Pitch and TPI describe the same underlying physical property—merely in different terms. When the inch is used as the unit of measurement for pitch, TPI is the reciprocal of pitch and vice versa. For example, a 1⁄4-20 thread has 20 TPI, which means that its pitch is 1⁄20 inch (0.050 in or 1.27 mm). As the distance from the crest of one thread to the next, pitch can be compared to the wavelength of a wave. Another wave analogy is that pitch and TPI are inverses of each other in a similar way that period and frequency are inverses of each other. Coarse versus fine Coarse threads are those with larger pitch (fewer threads per axial distance), and fine threads are those with smaller pitch (more threads per axial distance). Coarse threads have a larger threadform relative to screw diameter, whereas fine threads have a smaller threadform relative to screw diameter. This distinction is analogous to that between coarse teeth and fine teeth on a saw or file, or between coarse grit and fine grit on sandpaper. The common V-thread standards (ISO 261 and Unified Thread Standard) include a coarse pitch and a fine pitch for each major diameter. For example, 1⁄2-13 belongs to the UNC series (Unified National Coarse) and 1⁄2-20 belongs to the UNF series (Unified National Fine). A common misconception among people not familiar with engineering or machining is that the term coarse implies here lower quality and the term fine implies higher quality. The terms when used in reference to screw thread pitch have nothing to do with the tolerances used (degree of precision) or the amount of craftsmanship, quality, or cost. They simply refer to the size of the threads relative to the screw diameter. Coarse threads can be made accurately, or fine threads inaccurately. There are several relevant diameters for screw threads: major diameter, minor diameter, and pitch diameter. Major diameter Major diameter is the largest diameter of the thread. For a male thread, this means "outside diameter", but in careful usage the better term is "major diameter", since the underlying physical property being referred to is independent of the male/female context. On a female thread, the major diameter is not on the "outside". The terms "inside" and "outside" invite confusion, whereas the terms "major" and "minor" are always unambiguous. Minor diameter Minor diameter is the smallest diameter of the thread. Pitch diameter Pitch diameter (sometimes abbreviated PD) is a diameter in between major and minor. It is the diameter at which each pitch is equally divided between the mating male and female threads. It is important to the fit between male and female threads, because a thread can be cut to various depths in between the major and minor diameters, with the roots and crests of the threadform being variously truncated, but male and female threads will only mate properly if their sloping sides are in contact, and that contact can only happen if the pitch diameters of male and female threads match closely. Another way to think of pitch diameter is "the diameter on which male and female should meet". Classes of fit The way in which male and female fit together, including play and friction, is classified (categorized) in thread standards. 
Achieving a certain class of fit requires the ability to work within tolerance ranges for dimension (size) and surface finish. Defining and achieving classes of fit are important for interchangeability. Classes include 1, 2, 3 (loose to tight); A (external) and B (internal); and various systems such as H and D limits. Standardization and interchangeability To achieve a predictably successful mating of male and female threads and assured interchangeability between males and between females, standards for form, size, and finish must exist and be followed. Standardization of threads is discussed below. Thread depth Screw threads are almost never made perfectly sharp (no truncation at the crest or root), but instead are truncated, yielding a final thread depth that can be expressed as a fraction of the pitch value. The UTS and ISO standards codify the amount of truncation, including tolerance ranges. A perfectly sharp 60° V-thread will have a depth of thread ("height" from root to crest) equal to .866 of the pitch. This fact is intrinsic to the geometry of an equilateral triangle and is a direct result of the basic trigonometric functions. It is independent of measurement units (inch vs mm). However, UTS and ISO threads are not sharp threads. The major and minor diameters delimit truncations on either side of the sharp V, typically about 1/8p (although the actual geometry definition has more variables than that). This means that a full (100%) UTS or ISO thread has a height of around .65p. Threads can be (and often are) truncated a bit more, yielding thread depths of 60% to 75% of the .65p value. This makes the thread cutting easier (yielding shorter cycle times and longer tap and die life) without a large sacrifice in thread strength. The increased truncation is quantified by the percentage of thread that it leaves in place, where the nominal full thread (whose depth is about .65p) is considered 100%. For most applications, 60% to 75% threads are used. In many cases 60% threads are optimal, and 75% threads are wasteful or "over-engineered" (additional resources were unnecessarily invested in creating them). To truncate the threads below 100% of nominal, different techniques are used for male and female threads. For male threads, the bar stock is "turned down" somewhat before thread cutting, so that the major diameter is reduced. Likewise, for female threads the stock material is drilled with a slightly larger tap drill, increasing the minor diameter. (The pitch diameter is not affected by these operations, which vary only the major or minor diameters.) This balancing of truncation versus thread strength is common to many engineering decisions involving material strength and material thickness, cost, and weight. Engineers use a number called the safety factor to quantify the increase in material thickness or other dimensions beyond the minimum required for the estimated loads on a mechanical part. Increasing the safety factor generally increases the cost of manufacture and decreases the likelihood of a failure. So the safety factor is often the focus of a business management decision when a mechanical product's cost impacts business performance and failure of the product could jeopardize human life or company reputation. For example, aerospace contractors are particularly rigorous in the analysis and implementation of safety factors, given the incredible damage that failure could do (crashed aircraft or rockets).
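Before moving on, the thread-depth arithmetic above can be made concrete with a short R sketch. It computes the sharp-V height (0.866 × pitch), the nominal truncated UTS/ISO height (about 0.65 × pitch), and a tap-drill diameter from a chosen percentage of thread using the common machinists' approximation drill ≈ major diameter − 0.013 × (% of thread) / TPI. The function names are invented for this example, and the 1/4-20 figures are only an illustration.

# Thread-height and tap-drill arithmetic (illustrative sketch, inch units)
sharp_v_height <- function(pitch) 0.866 * pitch       # perfectly sharp 60° V-thread
uts_iso_height <- function(pitch) 0.6495 * pitch      # nominal truncated height, ~0.65p

# Common machinists' approximation for a cutting-tap drill size:
# drill diameter ~= major diameter - 0.013 * percent_of_thread / TPI
tap_drill <- function(major_dia, tpi, percent = 75) {
  major_dia - 0.013 * percent / tpi
}

pitch <- 1 / 20                     # a 1/4-20 UNC thread has 20 threads per inch
sharp_v_height(pitch)               # ~0.0433 in
uts_iso_height(pitch)               # ~0.0325 in
tap_drill(0.250, 20, percent = 75)  # ~0.201 in, the familiar #7 tap drill

Asking for only 60% of thread simply gives a slightly larger drill, which is the "larger tap drill, larger minor diameter" trade-off described above.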
Material thickness affects not only the cost of manufacture, but also the device's weight and therefore the cost (in fuel) to lift that weight into the sky (or orbit). The cost of failure and the cost of manufacture are both extremely high. Thus the safety factor dramatically impacts company fortunes and is often worth the additional engineering expense required for detailed analysis and implementation. Tapered threads are used on fasteners and pipe. A common example of a fastener with a tapered thread is a wood screw. The threaded pipes used in some plumbing installations for the delivery of fluids under pressure have a threaded section that is slightly conical. Examples are the NPT and BSP series. The seal provided by a threaded pipe joint is created when a tapered externally threaded end is tightened into an end with internal threads. Normally a good seal requires the application of a separate sealant in the joint, such as thread seal tape, or a liquid or paste pipe sealant such as pipe dope, however some threaded pipe joints do not require a separate sealant. Standardization of screw threads has evolved since the early nineteenth century to facilitate compatibility between different manufacturers and users. The standardization process is still ongoing; in particular there are still (otherwise identical) competing metric and inch-sized thread standards widely used. Standard threads are commonly identified by short letter codes (M, UNC, etc.) which also form the prefix of the standardized designations of individual threads. Additional product standards identify preferred thread sizes for screws and nuts, as well as corresponding bolt head and nut sizes, to facilitate compatibility between spanners (wrenches) and other tools. ISO standard threads These were standardized by the International Organization for Standardization (ISO) in 1947. Although metric threads were mostly unified in 1898 by the International Congress for the standardization of screw threads, separate metric thread standards were used in France, Germany, and Japan, and the Swiss had a set of threads for watches. Other current standards In particular applications and certain regions, threads other than the ISO metric screw threads remain commonly used, sometimes because of special application requirements, but mostly for reasons of backwards compatibility: - ASME B1.1 Unified Inch Screw Threads, (UN and UNR Thread Form), considered an American National Standard (ANS) widely use in the US and Canada - Unified Thread Standard (UTS), which is still the dominant thread type in the United States and Canada. This standard includes: - Unified Coarse (UNC), commonly referred to as "National Coarse" or "NC" in retailing. - Unified Fine (UNF), commonly referred to as "National Fine" or "NF" in retailing. - Unified Extra Fine (UNEF) - Unified Special (UNS) - National pipe thread (NPT), used for plumbing of water and gas pipes, and threaded electrical conduit. 
- NPTF (National Pipe Thread Fuel) - British Standard Whitworth (BSW), and for other Whitworth threads including: - British standard pipe thread (BSP) which exists in a taper and non taper variant; used for other purposes as well - British Standard Pipe Taper (BSPT) - British Association screw threads (BA), primarily electronic/electrical, moving coil meters and to mount optical lenses - British Standard Buttress Threads (BS 1657:1950) - British Standard for Spark Plugs BS 45:1972 - British Standard Brass a fixed pitch 26tpi thread - Glass Packaging Institute threads (GPI), primarily for glass bottles and vials - Power screw threads - Camera case screws, used to mount a camera on a photographic tripod: - ¼″ UNC used on almost all small cameras - ⅜″ UNC for larger (and some older small) cameras (many older cameras use ¼" BSW or ⅜" BSW threads, which in low stress applications, and if machined to wide tolerances, are for practical purposes compatible with the UNC threads) - Royal Microscopical Society (RMS) thread, also known as society thread, is a special 0.8" diameter x 36 thread-per-inch (tpi) Whitworth thread form used for microscope objective lenses. - Microphone stands: - ⅝″ 27 threads per inch (tpi) Unified Special thread (UNS, USA and the rest of the world) - ¼″ BSW (not common in the USA, used in the rest of the world) - ⅜″ BSW (not common in the USA, used in the rest of the world) - Stage lighting suspension bolts (in some countries only; some have gone entirely metric, others such as Australia have reverted to the BSW threads, or have never fully converted): - ⅜″ BSW for lighter luminaires - ½″ BSW for heavier luminaires - Tapping screw threads (ST) – ISO 1478 - Aerospace inch threads (UNJ) – ISO 3161 - Aerospace metric threads (MJ) – ISO 5855 - Tyre valve threads (V) – ISO 4570 - Metal bone screws (HA, HB) – ISO 5835 - Panzergewinde (Pg) (German) is an old German 80° thread (DIN 40430) that remained in use until 2000 in some electrical installation accessories in Germany. - Fahrradgewinde (Fg) (English: bicycle thread) is a German bicycle thread standard (per DIN 79012 and DIN 13.1), which encompasses a lot of CEI and BSC threads as used on cycles and mopeds everywhere (http://www.fahrradmonteur.de/fahrradgewinde.php) - CEI (Cycle Engineers Institute, used on bicycles in Britain and possibly elsewhere) - Edison base Incandescent light bulb holder screw thread - Fire hose connection (NFPA standard 194) - Hose Coupling Screw Threads (ANSI/ASME B1.20.7-1991 [R2003]) for garden hoses and accessories - Löwenherz thread, a German metric thread used for measuring instruments - Sewing machine thread History of standardization The first historically important intra-company standardization of screw threads began with Henry Maudslay around 1800, when the modern screw-cutting lathe made interchangeable V-thread machine screws a practical commodity. During the next 40 years, standardization continued to occur on the intra-company and inter-company level. No doubt many mechanics of the era participated in this zeitgeist; Joseph Clement was one of those whom history has noted. In 1841, Joseph Whitworth created a design that, through its adoption by many British railroad companies, became a national standard for the United Kingdom called British Standard Whitworth. During the 1840s through 1860s, this standard was often used in the United States and Canada as well, in addition to myriad intra- and inter-company standards. 
In April 1864, William Sellers presented a paper to the Franklin Institute in Philadelphia, proposing a new standard to replace the U.S.'s poorly standardized screw thread practice. Sellers simplified the Whitworth design by adopting a thread profile of 60° and a flattened tip (in contrast to Whitworth's 55° angle and rounded tip). The 60° angle was already in common use in America, but Sellers's system promised to make it and all other details of threadform consistent. The Sellers thread, easier for ordinary machinists to produce, became an important standard in the U.S. during the late 1860s and early 1870s, when it was chosen as a standard for work done under U.S. government contracts, and it was also adopted as a standard by highly influential railroad industry corporations such as the Baldwin Locomotive Works and the Pennsylvania Railroad. Other firms adopted it, and it soon became a national standard for the U.S., later becoming generally known as the United States Standard thread (USS thread). Over the next 30 years the standard was further defined and extended and evolved into a set of standards including National Coarse (NC), National Fine (NF), and National Pipe Taper (NPT). Meanwhile, in Britain, the British Association screw threads were also developed and refined. During this era, in continental Europe, the British and American threadforms were well known, but also various metric thread standards were evolving, which usually employed 60° profiles. Some of these evolved into national or quasi-national standards. They were mostly unified in 1898 by the International Congress for the standardization of screw threads at Zurich, which defined the new international metric thread standards as having the same profile as the Sellers thread, but with metric sizes. Efforts were made in the early 20th century to convince the governments of the U.S., UK, and Canada to adopt these international thread standards and the metric system in general, but they were defeated with arguments that the capital cost of the necessary retooling would drive some firms from profit to loss and hamper the economy. (The mixed use of dueling inch and metric standards has since cost much, much more, but the bearing of these costs has been more distributed across national and global economies rather than being borne up front by particular governments or corporations, which helps explain the lobbying efforts.) Sometime between 1912 and 1916, the Society of Automobile Engineers (SAE) created an "SAE series" of screw thread sizes to augment the USS standard. During the late 19th and early 20th centuries, engineers found that ensuring the reliable interchangeability of screw threads was a multi-faceted and challenging task that was not as simple as just standardizing the major diameter and pitch for a certain thread. It was during this era that more complicated analyses made clear the importance of variables such as pitch diameter and surface finish. A tremendous amount of engineering work was done throughout World War I and the following interwar period in pursuit of reliable interchangeability. Classes of fit were standardized, and new ways of generating and inspecting screw threads were developed (such as production thread-grinding machines and optical comparators). Therefore, in theory, one might expect that by the start of World War II, the problem of screw thread interchangeability would have already been completely solved. Unfortunately, this proved to be false. 
Intranational interchangeability was widespread, but international interchangeability was less so. Problems with lack of interchangeability among American, Canadian, and British parts during World War II led to an effort to unify the inch-based standards among these closely allied nations, and the Unified Thread Standard was adopted by the Screw Thread Standardization Committees of Canada, the United Kingdom, and the United States on November 18, 1949 in Washington, D.C., with the hope that they would be adopted universally. (The original UTS standard may be found in ASA (now ANSI) publication, Vol. 1, 1949.) UTS consists of Unified Coarse (UNC), Unified Fine (UNF), Unified Extra Fine (UNEF) and Unified Special (UNS). The standard was not widely taken up in the UK, where many companies continued to use the UK's own British Association (BA) standard. However, internationally, the metric system was eclipsing inch-based measurement units. In 1947, the ISO was founded; and in 1960, the metric-based International System of Units (abbreviated SI from the French Système International) was created. With continental Europe and much of the rest of the world turning to SI and the ISO metric screw thread, the UK gradually leaned in the same direction. The ISO metric screw thread is now the standard that has been adopted worldwide and has mostly displaced all former standards, including UTS. In the U.S., where UTS is still prevalent, over 40% of products contain at least some ISO metric screw threads. The UK has completely abandoned its commitment to UTS in favour of the ISO metric threads, and Canada is in between. Globalization of industries produces market pressure in favor of phasing out minority standards. A good example is the automotive industry; U.S. auto parts factories long ago developed the ability to conform to the ISO standards, and today very few parts for new cars retain inch-based sizes, regardless of being made in the U.S. Even today, over a half century since the UTS superseded the USS and SAE series, companies still sell hardware with designations such as "USS" and "SAE" to convey that it is of inch sizes as opposed to metric. Most of this hardware is in fact made to the UTS, but the labeling and cataloging terminology is not always precise. Engineering drawing In American engineering drawings, ANSI Y14.6 defines standards for indicating threaded parts. Parts are indicated by their nominal diameter (the nominal major diameter of the screw threads), pitch (number of threads per inch), and the class of fit for the thread. For example, “.750-10UNC-2A” is male (A) with a nominal major diameter of 0.750 in, 10 threads per inch, and a class-2 fit; “.500-20UNF-1B” would be female (B) with a 0.500 in nominal major diameter, 20 threads per inch, and a class-1 fit. An arrow points from this designation to the surface in question. There are many ways to generate a screw thread, including the traditional subtractive types (e.g., various kinds of cutting [single-pointing, taps and dies, die heads, milling]; molding; casting [die casting, sand casting]; forming and rolling; grinding; and occasionally lapping to follow the other processes); newer additive techniques; and combinations thereof. - Inspection of thread geometry is discussed at Threading (manufacturing) > Inspection. 
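As an illustration of the designation format described above, the following R sketch splits a string such as ".750-10UNC-2A" into its parts and derives the pitch as the reciprocal of TPI. The parser is deliberately simplified (real designations have more variants than this regular expression accepts), and the helper name is invented for the example.

# Simplified parser for a UTS thread designation such as ".750-10UNC-2A"
parse_uts <- function(desig) {
  m <- regmatches(desig, regexec("^([0-9.]+)-([0-9]+)(UNC|UNF|UNEF|UNS)-([123])([AB])$", desig))[[1]]
  list(
    major_dia_in = as.numeric(m[2]),        # nominal major diameter, inches
    tpi          = as.numeric(m[3]),        # threads per inch
    series       = m[4],                    # UNC, UNF, UNEF or UNS
    class        = as.integer(m[5]),        # class of fit, 1 (loose) to 3 (tight)
    gender       = if (m[6] == "A") "external (male)" else "internal (female)",
    pitch_in     = 1 / as.numeric(m[3])     # pitch is the reciprocal of TPI
  )
}

parse_uts(".750-10UNC-2A")   # 0.750 in major diameter, 10 TPI, 0.100 in pitch, class 2, external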
http://en.wikipedia.org/wiki/Screw_thread
From earliest times, astronomers assumed that the orbits in which the planets moved were circular; yet the numerous catalogs of measurements compiled especially during the 16th cent. did not fit this theory. At the beginning of the 17th cent., Johannes Kepler stated three laws of planetary motion that explained the observed data: the orbit of each planet is an ellipse with the sun at one focus; the speed of a planet varies in such a way that an imaginary line drawn from the planet to the sun sweeps out equal areas in equal amounts of time; and the ratio of the squares of the periods of revolution of any two planets is equal to the ratio of the cubes of their average distances from the sun. The orbits of the solar planets, while elliptical, are almost circular; on the other hand, the orbits of many of the extrasolar planets discovered during the 1990s are highly elliptical. After the laws of planetary motion were established, astronomers developed the means of determining the size, shape, and relative position in space of a planet's orbit. The size and shape of an orbit are specified by its semimajor axis and by its eccentricity. The semimajor axis is a length equal to half the greatest diameter of the orbit. The eccentricity is the distance of the sun from the center of the orbit divided by the length of the orbit's semimajor axis; this value is a measure of how elliptical the orbit is. The position of the orbit in space, relative to the earth, is determined by three factors: (1) the inclination, or tilt, of the plane of the planet's orbit to the plane of the earth's orbit (the ecliptic); (2) the longitude of the planet's ascending node (the point where the planet cuts the ecliptic moving from south to north); and (3) the longitude of the planet's perihelion point (point at which it is nearest the sun; see apsis). These quantities, which determine the size, shape, and position of a planet's orbit, are known as the orbital elements. If only the sun influenced the planet in its orbit, then by knowing the orbital elements plus its position at some particular time, one could calculate its position at any later time. However, the gravitational attractions of bodies other than the sun cause perturbations in the planet's motions that can make the orbit shift, or precess, in space or can cause the planet to wobble slightly. Once these perturbations have been calculated one can closely determine its position for any future date over long periods of time. Modern methods for computing the orbit of a planet or other body have been refined from methods developed by Newton, Laplace, and Gauss, in which all the needed quantities are acquired from three separate observations of the planet's apparent position. The laws of planetary orbits also apply to the orbits of comets, natural satellites, artificial satellites, and space probes. The orbits of comets are very elongated; some are long ellipses, some are nearly parabolic (see parabola), and some may be hyperbolic. When the orbit of a newly discovered comet is calculated, it is first assumed to be a parabola and then corrected to its actual shape when more measured positions are obtained. Natural satellites that are close to their primaries tend to have nearly circular orbits in the same plane as that of the planet's equator, while more distant satellites may have quite eccentric orbits with large inclinations to the planet's equatorial plane. 
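Because the semimajor axis and eccentricity fix the size and shape of the ellipse, the nearest and farthest distances from the sun follow directly from them: the sun sits a distance a × e from the center, so the perihelion distance is a(1 − e) and the aphelion distance is a(1 + e). A small R sketch follows; the Mars values used (roughly a = 1.52 AU, e = 0.093) are approximate figures supplied only for illustration, not taken from the text.

# Perihelion and aphelion distances from semimajor axis a and eccentricity e
apsides <- function(a, e) {
  c(perihelion = a * (1 - e), aphelion = a * (1 + e))
}

apsides(a = 1.52, e = 0.093)   # roughly 1.38 AU and 1.66 AU for Mars (illustrative values)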
Because of the moon's proximity to the earth and its large relative mass, the earth-moon system is sometimes considered a double planet. It is the center of the earth-moon system, rather than the center of the earth itself, that describes an elliptical orbit around the sun in accordance with Kepler's laws. All of the planets and most of the satellites in the solar system move in the same direction in their orbits, counterclockwise as viewed from the north celestial pole; some satellites, probably captured asteroids, have retrograde motion, i.e., they revolve in a clockwise direction. In physics, an orbit is the gravitationally curved path of one object around a point or another body, for example the gravitational orbit of a planet around a star. Historically, the apparent motion of the planets was first understood in terms of epicycles, which are the sums of numerous circular motions. This predicted the path of the planets quite well, until Johannes Kepler was able to show that the motions of the planets were in fact elliptical. Sir Isaac Newton was able to prove that this was equivalent to an inverse-square, instantaneously propagating force he called gravitation. Albert Einstein later showed that gravity is due to the curvature of space-time, and that orbits lie along geodesics; this is the current understanding. The basis for the modern understanding of orbits was first formulated by Johannes Kepler, whose results are summarized in his three laws of planetary motion. First, he found that the orbits of the planets in our solar system are elliptical, not circular (or epicyclic), as had previously been believed, and that the sun is not located at the center of the orbits, but rather at one focus. Second, he found that the orbital speed of each planet is not constant, as had previously been thought, but rather that the speed of the planet depends on the planet's distance from the sun. And third, Kepler found a universal relationship between the orbital properties of all the planets orbiting the sun. For each planet, the cube of the planet's distance from the sun, measured in astronomical units (AU), is equal to the square of the planet's orbital period, measured in Earth years. Jupiter, for example, is approximately 5.2 AU from the sun and its orbital period is 11.86 Earth years. So 5.2 cubed approximately equals 11.86 squared, as the law predicts. Isaac Newton demonstrated that Kepler's laws were derivable from his theory of gravitation and that, in general, the orbits of bodies responding to an instantaneously propagating force of gravity were conic sections. Newton showed that a pair of bodies follow orbits of dimensions that are in inverse proportion to their masses about their common center of mass. Where one body is much more massive than the other, it is a convenient approximation to take the center of mass as coinciding with the center of the more massive body. Albert Einstein was able to show that gravity was due to curvature of space-time and was able to remove Newton's assumption that changes propagate instantaneously. In relativity theory, orbits follow geodesic trajectories which approximate the Newtonian predictions very well. However, there are differences, and these can be used to determine which theory agrees better with observation; essentially all experimental evidence agrees with relativity theory to within experimental measurement accuracy. Owing to mutual gravitational perturbations, the eccentricities of the orbits of the planets in our solar system vary over time.
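The Jupiter figures quoted above make a convenient numerical check of the third law in R, with the period in Earth years and the distance in astronomical units:

# Kepler's third law in solar units: (period in years)^2 = (semimajor axis in AU)^3
a_jupiter <- 5.2       # distance from the sun in AU (from the text)
T_jupiter <- 11.86     # orbital period in Earth years (from the text)

a_jupiter^3            # ~140.6
T_jupiter^2            # ~140.7, in agreement to within rounding
a_jupiter^(3/2)        # ~11.86 years: the period recovered from the distance alone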
Mercury, the smallest planet in the Solar System, has the most eccentric orbit. At the present epoch, Mars has the next largest eccentricity while the smallest eccentricities are those of the orbits of Venus and Neptune. As two objects orbit each other, the periapsis is that point at which the two objects are closest to each other and the apoapsis is that point at which they are the farthest from each other. (More specific terms are used for specific bodies. For example, perigee and apogee are the lowest and highest parts of an Earth orbit, respectively.) In the elliptical orbit, the center of mass of the orbiting-orbited system will sit at one focus of both orbits, with nothing present at the other focus. As a planet approaches periapsis, the planet will increase in speed, or velocity. As a planet approaches apoapsis, the planet will decrease in velocity. As an illustration of an orbit around a planet, the Newton's cannonball model may prove useful (see image below). Imagine a cannon sitting on top of a tall mountain, which fires a cannonball horizontally. The mountain needs to be very tall, so that the cannon will be above the Earth's atmosphere and the effects of air friction on the cannonball can be ignored. If the cannon fires its ball with a low initial velocity, the trajectory of the ball curves downward and hits the ground (A). As the firing velocity is increased, the cannonball hits the ground farther (B) away from the cannon, because while the ball is still falling towards the ground, the ground is increasingly curving away from it (see first point, above). All these motions are actually "orbits" in a technical sense — they are describing a portion of an elliptical path around the center of gravity — but the orbits are interrupted by striking the Earth. If the cannonball is fired with sufficient velocity, the ground curves away from the ball at least as much as the ball falls — so the ball never strikes the ground. It is now in what could be called a non-interrupted, or circumnavigating, orbit. For any specific combination of height above the center of gravity, and mass of the planet, there is one specific firing velocity that produces a circular orbit, as shown in (C). As the firing velocity is increased beyond this, a range of elliptic orbits are produced; one is shown in (D). If the initial firing is above the surface of the Earth as shown, there will also be elliptical orbits at slower velocities; these will come closest to the Earth at the point half an orbit beyond, and directly opposite, the firing point. At a specific velocity called escape velocity, again dependent on the firing height and mass of the planet, an infinite orbit such as (E) is produced — a parabolic trajectory. At even faster velocities the object will follow a range of hyperbolic trajectories. In a practical sense, both of these trajectory types mean the object is "breaking free" of the planet's gravity, and "going off into space". The velocity relationship of two objects with mass can thus be considered in four practical classes, with subtypes: Energy is associated with gravitational fields. A stationary body far from another can do external work if it is pulled towards it, and therefore has gravitational potential energy. Since work is required to separate two massive bodies against the pull of gravity, their gravitational potential energy increases as they are separated, and decreases as they approach one another. 
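Before turning to orbital energy in more detail, the cannonball picture above can be put into rough numbers. For a circular orbit of radius r around a body with gravitational parameter GM, the required speed is sqrt(GM/r), and the escape speed is sqrt(2·GM/r), about 41% higher. The sketch below uses the standard value of Earth's GM and an arbitrary 400 km altitude; both are assumptions of the example rather than figures from the text.

# Circular-orbit speed and escape speed at a given altitude above the Earth
mu_earth <- 3.986e14           # Earth's GM in m^3/s^2 (standard value)
r_earth  <- 6.371e6            # mean radius of the Earth in metres

orbit_speeds <- function(altitude_m) {
  r <- r_earth + altitude_m
  c(circular_m_per_s = sqrt(mu_earth / r),
    escape_m_per_s   = sqrt(2 * mu_earth / r))
}

orbit_speeds(400e3)   # roughly 7.7 km/s circular and 10.9 km/s escape at 400 km altitude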
For point masses the gravitational energy decreases without limit as they approach zero separation, and it is convenient and conventional to take the potential energy as zero when they are an infinite distance apart, and then negative (since it decreases from zero) for smaller finite distances. With two bodies, an orbit is a conic section. The orbit can be open (so the object never returns) or closed (returning), depending on the total kinetic plus potential energy of the system. In the case of an open orbit, the speed at any position of the orbit is at least the escape velocity for that position; in the case of a closed orbit, it is always less. Since the kinetic energy is never negative, if the common convention is adopted of taking the potential energy as zero at infinite separation, the bound orbits have negative total energy, parabolic trajectories have zero total energy, and hyperbolic orbits have positive total energy. An open orbit has the shape of a hyperbola (when the velocity is greater than the escape velocity), or a parabola (when the velocity is exactly the escape velocity). The bodies approach each other for a while, curve around each other around the time of their closest approach, and then separate again forever. This may be the case with some comets if they come from outside the solar system. A closed orbit has the shape of an ellipse. In the special case that the orbiting body is always the same distance from the center, it is also the shape of a circle. Otherwise, the point where the orbiting body is closest to Earth is the perigee, called periapsis (less properly, "perifocus" or "pericentron") when the orbit is around a body other than Earth. The point where the satellite is farthest from Earth is called apogee, apoapsis, or sometimes apifocus or apocentron. A line drawn from periapsis to apoapsis is the line of apsides. This is the major axis of the ellipse, the line through its longest part. Orbiting bodies in closed orbits repeat their path after a constant period of time. This motion is described by the empirical laws of Kepler, which can be mathematically derived from Newton's laws. They can be formulated as follows: the orbit of each planet is an ellipse with the central body at one focus; a line joining the planet and the central body sweeps out equal areas in equal intervals of time; and the square of the orbital period is proportional to the cube of the orbit's semimajor axis. Note that while the bound orbits around a point mass, or around a spherical body with an ideal Newtonian gravitational field, are all closed ellipses, which repeat the same path exactly and indefinitely, any non-spherical or non-Newtonian effects (as caused, for example, by the slight oblateness of the Earth, or by relativistic effects, which change the gravitational field's behavior with distance) will cause the orbit's shape to depart to a greater or lesser extent from the closed ellipses characteristic of Newtonian two-body motion. The two-body solutions were published by Newton in Principia in 1687. In 1912, Karl Fritiof Sundman developed a converging infinite series that solves the three-body problem; however, it converges too slowly to be of much use. Except for special cases like the Lagrangian points, no method is known to solve the equations of motion for a system with four or more bodies. Instead, orbits with many bodies can be approximated with arbitrarily high accuracy. These approximations take two forms. One form takes the pure elliptic motion as a basis and adds perturbation terms to account for the gravitational influence of multiple bodies. This is convenient for calculating the positions of astronomical bodies.
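The sign convention described above can be checked directly: with the potential energy taken as zero at infinite separation, the specific orbital energy of a small body is v²/2 − GM/r, and its sign tells whether the orbit is closed (negative), parabolic (zero) or hyperbolic (positive). A short R sketch follows; the Earth GM value is the standard one, and the speeds, radius, and tolerance are arbitrary illustrative inputs.

# Classify an orbit from its specific orbital energy: eps = v^2/2 - GM/r
mu_earth <- 3.986e14                       # Earth's GM in m^3/s^2
classify_orbit <- function(v, r, tol = 1e-6) {
  eps <- v^2 / 2 - mu_earth / r            # specific orbital energy in J/kg
  type <- if (abs(eps) < tol) {
    "parabolic (exactly escape speed)"
  } else if (eps < 0) {
    "closed (elliptical)"
  } else {
    "open (hyperbolic)"
  }
  list(energy_J_per_kg = eps, type = type)
}

r <- 6.771e6                               # about 400 km above the Earth's surface, in metres
classify_orbit(7670, r)                    # near-circular speed: negative energy, bound orbit
classify_orbit(sqrt(2 * mu_earth / r), r)  # escape speed: energy ~0, parabolic
classify_orbit(12000, r)                   # above escape speed: positive energy, hyperbolic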
The equations of motion of the moon, planets and other bodies are known with great accuracy, and are used to generate tables for celestial navigation. Still there are secular phenomena that have to be dealt with by post-Newtonian methods. The differential equation form is used for scientific or mission-planning purposes. According to Newton's laws, the sum of all the forces will equal the mass times its acceleration (F = ma). Therefore, accelerations can be expressed in terms of positions. The perturbation terms are much easier to describe in this form. Predicting subsequent positions and velocities from initial ones corresponds to solving an initial value problem. Numerical methods calculate the positions and velocities of the objects a tiny time in the future, then repeat this. However, tiny arithmetic errors from the limited accuracy of a computer's math accumulate, limiting the accuracy of this approach. Differential simulations with large numbers of objects perform the calculations in a hierarchical pairwise fashion between centers of mass. Using this scheme, galaxies, star clusters and other large objects have been simulated.

Please note that the following is a classical (Newtonian) analysis of orbital mechanics, which assumes the more subtle effects of general relativity (like frame dragging and gravitational time dilation) are negligible. General relativity does, however, need to be considered for some applications such as analysis of extremely massive heavenly bodies, precise prediction of a system's state after a long period of time, and in the case of interplanetary travel, where fuel economy, and thus precision, is paramount.

To analyze the motion of a body moving under the influence of a force which is always directed towards a fixed point, it is convenient to use polar coordinates with the origin coinciding with the center of force. In such coordinates the radial and transverse components of the acceleration are, respectively, d²r/dt² - r(dθ/dt)² and r·d²θ/dt² + 2·(dr/dt)·(dθ/dt). Since the force is entirely radial, and since acceleration is proportional to force, it follows that the transverse acceleration is zero. As a result, d(r²·dθ/dt)/dt = 0. After integrating, we have r²·dθ/dt = h, a constant, which is actually the theoretical proof of Kepler's 2nd law (A line joining a planet and the sun sweeps out equal areas during equal intervals of time). The constant of integration, h, is the angular momentum per unit mass. It then follows that, since the gravitational force on the orbiting body is GMm/r² directed towards the centre, the radial acceleration is -GM/r², where G is the constant of universal gravitation, m is the mass of the orbiting body (planet), and M is the mass of the central body (the Sun). Substituting into the prior equation, we have d²r/dt² - r(dθ/dt)² = -GM/r². So for the gravitational force – or, more generally, for any inverse square force law – the right hand side of the equation becomes a constant once it is rewritten in terms of u = 1/r with θ as the independent variable, and the equation is seen to be the harmonic equation (up to a shift of origin of the dependent variable): d²u/dθ² + u = GM/h². The solution is u = GM/h² + A·cos(θ - θ₀), where A and θ₀ are constants of integration. The equation of the orbit described by the particle is thus r = (h²/GM)/(1 + e·cos(θ - θ₀)), where e = A·h²/GM is the eccentricity. Orienting this planar orbit in three dimensions requires three further numbers to determine it uniquely; traditionally these are expressed as three angles.

In principle once the orbital elements are known for a body, its position can be calculated forward and backwards indefinitely in time. However, in practice, orbits are affected, or perturbed, by forces other than gravity due to the central body and thus the orbital elements change over time. For a prograde or retrograde impulse (i.e.
an impulse applied along the orbital motion), this changes both the eccentricity and the orbital period, but any closed orbit will still intersect the perturbation point. Notably, a prograde impulse given at periapsis raises the altitude at apoapsis, and vice versa, and a retrograde impulse does the opposite. A force applied out of the orbital plane causes rotation of the orbital plane.

Orbits can also decay through atmospheric drag, and the bounds of an atmosphere vary wildly. During solar maxima, the Earth's atmosphere causes drag up to a hundred kilometres higher than during solar minima. Some satellites with long conductive tethers can also decay because of electromagnetic drag from the Earth's magnetic field. Basically, the wire cuts the magnetic field, and acts as a generator. The wire moves electrons from the near vacuum on one end to the near-vacuum on the other end. The orbital energy is converted to heat in the wire.

Orbits can be artificially influenced through the use of rocket motors which change the kinetic energy of the body at some point in its path. This is the conversion of chemical or electrical energy to kinetic energy. In this way changes in the orbit shape or orientation can be facilitated. Another method of artificially influencing an orbit is through the use of solar sails or magnetic sails. These forms of propulsion require no propellant or energy input other than that of the sun, and so can be used indefinitely. See statite for one such proposed use.

Orbital decay can also occur due to tidal forces for objects below the synchronous orbit for the body they're orbiting. The gravity of the orbiting object raises tidal bulges in the primary, and since below the synchronous orbit the orbiting object is moving faster than the body's surface the bulges lag a short angle behind it. The gravity of the bulges is slightly off the primary-satellite axis and thus has a component along the satellite's motion. The near bulge slows the object more than the far bulge speeds it up, and as a result the orbit decays. Conversely, the gravity of the satellite on the bulges applies torque on the primary and speeds up its rotation. Artificial satellites are too small to have an appreciable tidal effect on the planets they orbit, but several moons in the solar system are undergoing orbital decay by this mechanism. Mars' innermost moon Phobos is a prime example, and is expected to either impact Mars' surface or break up into a ring within 50 million years.

Finally, orbits can decay via the emission of gravitational waves. This mechanism is extremely weak for most stellar objects, only becoming significant in cases where there is a combination of extreme mass and extreme acceleration, such as with black holes or neutron stars that are orbiting each other closely.

The analysis so far has treated the central body as a point mass or perfect sphere; however, in the real world, many bodies rotate, and this introduces oblateness and distorts the gravity field, and gives a quadrupole moment to the gravitational field which is significant at distances comparable to the radius of the body. The general effect of this is to change the orbital parameters over time; predominantly this gives a rotation of the orbital plane around the rotational pole of the central body (it perturbs the argument of perigee) in a way that is dependent on the angle of orbital plane to the equator as well as altitude at perigee. The gravitational constant G has dimension density⁻¹ time⁻². This corresponds to the following properties.
Scaling of distances (including sizes of bodies, while keeping the densities the same) gives similar orbits without scaling the time: if for example distances are halved, masses are divided by 8, gravitational forces by 16 and gravitational accelerations by 2. Hence orbital periods remain the same. Similarly, when an object is dropped from a tower, the time it takes to fall to the ground remains the same with a scale model of the tower on a scale model of the earth. When all densities are multiplied by four, orbits are the same, but with orbital velocities doubled. When all densities are multiplied by four, and all sizes are halved, orbits are similar, with the same orbital velocities. These properties are illustrated in the formula (known as Kepler's 3rd Law) for an elliptical orbit with semi-major axis a, of a small body around a spherical body with radius r and average density σ, where T is the orbital period.
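The formula referred to above does not survive in this copy of the text; in its standard form (a reconstruction, not a quotation from the source) Kepler's third law for a small body on an elliptical orbit of semi-major axis a around a spherical body of radius r and average density σ reads

    T = sqrt( (3π / (G σ)) × (a / r)³ )

which follows from T² = 4π²a³/(GM) with M = (4/3)πr³σ. Written this way the period depends only on the density and on the ratio a/r, which is why the scaling properties listed above leave T unchanged. As a rough check, taking σ ≈ 5,515 kg/m³ for the Earth (an assumed figure, not one quoted in the text) and a grazing orbit with a = r gives T ≈ 5,060 s, the familiar roughly 84-minute low Earth orbit.

The firing-velocity thresholds and the energy-sign classification of trajectories described earlier in this section are also easy to verify numerically. The short Python sketch below is purely illustrative: the function name and the values assumed for Earth's gravitational parameter and for the sample altitude are not taken from the source.

import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter GM, in m^3/s^2 (assumed standard value)

def classify_trajectory(r, v, mu=MU_EARTH):
    """Classify a trajectory from distance r (m) and speed v (m/s) using the
    sign of the specific orbital energy eps = v^2/2 - mu/r: negative means a
    closed (elliptic) orbit, zero a parabola, positive a hyperbola."""
    eps = 0.5 * v * v - mu / r
    if eps < 0:
        return "closed (elliptic) orbit"
    if eps == 0:               # exact equality only in idealised arithmetic
        return "parabolic escape trajectory"
    return "hyperbolic escape trajectory"

r = 6_771_000.0                      # about 400 km above the Earth's surface
v_circ = math.sqrt(MU_EARTH / r)     # circular-orbit speed at this height, ~7.7 km/s
v_esc = math.sqrt(2 * MU_EARTH / r)  # escape velocity at this height, ~10.8 km/s
print(classify_trajectory(r, 0.9 * v_circ))   # still an ellipse (one that may intersect the Earth)
print(classify_trajectory(r, 1.1 * v_esc))    # hyperbolic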
http://www.reference.com/browse/Orbit
Elementary Human Genetics
The Central Asian Gene Pool
The Karakalpak Gene Pool
Discussion and Conclusions

Elementary Human Genetics

Every human is defined by his or her library of genetic material, copies of which are stored in every cell of the body apart from the red blood cells. Cells are classified as somatic, meaning body cells, or gametic, the cells involved in reproduction, namely the sperm and the egg or ovum. The overwhelming majority of human genetic material is located within the small nucleus at the heart of each somatic cell. It is commonly referred to as the human genome. Within the nucleus it is distributed between 46 separate chromosomes, two of which are known as the sex chromosomes. The latter occur in two forms, designated X and Y. Chromosomes are generally arranged in pairs - a female has 22 pairs of autosome chromosomes plus one pair of X chromosomes, while a male has a similar arrangement apart from having a mixed pair of X and Y sex chromosomes.

A neutron crystallography cross-sectional image of a chromosome, showing the double strand of DNA wound around a protein core. Image courtesy of the US Department of Energy Genomics Program

A single chromosome consists of just one DNA macromolecule composed of two separate DNA strands, each of which contains a different but complementary sequence of four different nucleotide bases - adenine (A), thymine (T), cytosine (C), and guanine (G). The two strands are aligned in the form of a double helix held together by hydrogen bonds, adenine always linking with thymine and cytosine always linking with guanine. Each such linkage between strands is known as a base pair. The total human genome contains about 3 billion such base pairs. The DNA of a single chromosome is thus an incredibly long molecule that could be from 3 cm to 6 cm long were it possible to straighten it. In reality the double helix is coiled around a core of structural proteins and this is then supercoiled to create the chromosome, 23 pairs of which reside within a cell nucleus with a diameter of just 0.0005 cm.

A gene is a segment of the DNA nucleotide sequence within the chromosome that can be chemically read to make one specific protein. Each gene is located at a certain point along the DNA strand, known as its locus. The 22 autosome chromosome pairs vary in size from 263 million base pairs in chromosome 1 (the longest) down to about 47 million base pairs in chromosome 21 (the shortest - chromosome 22 is the second shortest with 50 million base pairs), equivalent to from 3,000 down to 300 genes. The two sex chromosomes are also very different, X having about 140 million base pairs and expressing 1,100 genes, Y having only 23 million base pairs and expressing a mere 78 genes. The total number of genes in the human genome is around 30,000.

A complete set of 23 human homologous chromosome pairs. Image courtesy of the National Human Genome Research Institute, Maryland

Each specific pair of chromosomes has its own distinct characteristics and can be identified under the microscope after staining with a dye and observing the resulting banding. With one exception the chromosome pairs are called homologous because they have the same length and the same sequence of genes. For example the 9th pair always contain the genes for melanin production and for ABO blood type, while the 14th pair has two genes critical to the body's immune response. Even so the individual chromosomes within each matching pair are not identical since each one is inherited from each parent.
A certain gene at a particular locus in one chromosome may differ from the corresponding gene in the other chromosome, one being dominant and the other recessive. The one exception relates to the male sex chromosomes, a combination of X and Y, which are not the same length and are therefore not homologous. A set of male human chromosomes showing typical banding Various forms of the same gene (or of some other DNA sequence within the chromosome) are known as alleles. Differences in DNA sequences at a specific chromosome locus are known as genetic polymorphisms. They can be categorized into various types, the most simple being the difference in just a single nucleotide - a single nucleotide polymorphism. When a normal somatic cell divides and replicates, the 23 homologous chromosome pairs (the genome) are duplicated through a complex process known as mitosis. The two strands of DNA within each chromosome unravel and unzip themselves in order to replicate, eventually producing a pair of sister chromatids - two brand new copies of the original single chromosome joined together. However because the two chromosomes within each homologous pair are slightly different (one being inherited from each parent) the two sister chromatids are divided in two. The two halves of each sister chromatid are allocated to each daughter cell, thus replicating the original homologous chromosome pair. Such cells are called diploid because they contain two (slightly different) sets of genetic information. The production of gametic cells involves a quite different process. Sperm and eggs are called haploid cells, meaning single, because they contain only one set of genetic information - 22 single unpaired chromosomes and one sex chromosome. They are formed through another complex process known as meiosis. It involves a deliberate reshuffling of the parental genome in order to increase the genetic diversity within the resulting sperm or egg cells and consequently among any resulting offspring. As before each chromosome pair is replicated in the form of a pair of sister chromatids. This time however, each half of each chromatid embraces its opposite neighbour in a process called synapsis. An average of two or three segments of maternal and paternal DNA are randomly exchanged between chromatids by means of molecular rearrangements called crossover and genetic recombination. The new chromatid halves are not paired with their matching partners but are all separated to create four separate haploid cells, each containing one copy of the full set of 23 chromosomes, and each having its own unique random mix of maternal and paternal DNA. In the male adult this process forms four separate sperm cells, but in the female only one of the four cells becomes an ovum, the other three forming small polar bodies that progressively decay. During fertilization the two haploid cells - the sperm and the ovum or egg - interact to form a diploid zygote (zyg meaning symetrically arranged in pairs). In fact the only contribution that the sperm makes to the zygote is its haploid nucleus containing its set of 23 chromosomes. The sex of the offspring is determined by the sex chromosome within the sperm, which can be either X (female) or Y (male). Clearly the sex chromosome within the ovum has to be X. The X and the Y chromosomes are very different, the Y being only one third the size of the X. During meiosis in the male, the X chromosome recombines and exchanges DNA with the Y only at its ends. 
Most of the Y chromosome is therefore unaffected by crossover and recombination. This section is known as the non-recombining part of the Y chromosome and it is passed down the male line from father to son relatively unchanged.

Scanning electron micrograph of an X and Y chromosome. Image courtesy of Indigo Instruments, Canada

Not all of the material within the human cell resides inside the nucleus. Both egg and sperm cells contain small energy-producing organelles within the cytoplasm called mitochondria that have their own genetic material for making several essential mitochondrial proteins. However the DNA content is tiny in comparison with that in the cell nucleus - it consists of several rings of DNA totalling about 16,500 base pairs, equivalent to just 13 genes. The genetic material in the nucleus is about 300,000 times larger. When additional mitochondria are produced inside the cell, the mitochondrial DNA is replicated and copies are transferred to the new mitochondria. The reason why mitochondrial DNA, mtDNA for short, is important is that during fertilization virtually no mitochondria from the male cell enter the egg and those that do are tagged and destroyed. Consequently the offspring only inherit the female mitochondria. mtDNA is therefore inherited through the female line.

Population genetics is a mathematical branch of genetics that attempts to link changes in the overall history of a population to changes in its genetic structure, a population being a group of interbreeding individuals of the same species sharing a common geographical area. By analysing the nature and diversity of DNA within and between different populations we can gain insights into their separate evolution and the extent to which they are or are not related to each other. We can gain insights into a population's level of reproductive isolation, the minimum time since it was founded, how marriage partners were selected, past geographical expansions, migrations, and mixings. The science is based upon the property of the DNA molecule to occasionally randomly mutate during replication, creating the possibility that the sequence of nucleotides in the DNA of one generation may differ slightly in the following generation. The consequence of this is that individuals within a homogeneous population will in time develop different DNA sequences, the characteristic that we have already identified as genetic polymorphism. Because mutations are random, two identical but isolated populations will tend to change in different directions over time. This property is known as random genetic drift and its effect is greater in smaller populations.

To study genetic polymorphisms, geneticists look for specific genetic markers. These are clearly recognizable mutations in the DNA whose frequency of incidence varies widely across populations from different geographical areas. In reality the vast majority of human genetic sequences are identical, only around 0.1% of them being affected by polymorphisms. There are several types of genetic marker. The simplest are single nucleotide polymorphisms (SNPs), mentioned above, where just one nucleotide has been replaced with another (for example A replaces T or C replaces G). SNPs in combination along a stretch of DNA are called haplotypes, shorthand for haploid genotypes. These have turned out to be valuable markers because they are genetically relatively stable and are found at differing frequencies in many populations.
Some are obviously evolutionarily related to each other and can be classified into haplogroups (Hg). Another type of polymorphism is where short strands of DNA have been randomly inserted into the genetic DNA. This results in so-called biallelic polymorphism, since the strand is either present or absent. These are useful markers because the individuals that have the mutant insert can be traced back to a single common ancestor, while those who do not have the insert represent the original ancestral state . Biallelic polymorphisms can be assigned to certain haplotypes. A final type of marker is based upon microsatellites, very short sequences of nucleotides, such as GATA, that are repeated in tandem numerous times. A polymorphism occurs if the number of repetitions increases or decreases. Microsatellite polymorphisms, sometimes also called short-tandem-repeat polymorphisms, occur more frequently over time, providing a different tool to study the rate of genetic change against time. Of course the whole purpose of sexual reproduction is to deliberately scramble the DNA from both parents in order to create a brand new set of chromosome pairs for their offspring that are not just copies of the parental chromosomes. Studies show that about 85% of genetic variation in autosomal sequences occurs within rather than between populations. However it is the genetic variation between populations that is of the greatest interest when we wish to study their history. Because of this, population geneticists look for more stable pieces of DNA that are not disrupted by reproduction. These are of two radically different types, namely the non-recombining part of the Y chromosome and the mitochondrial DNA or mtDNA. A much higher 40% of the variations in the Y chromosome and 30% of the variations in mtDNA are found between populations. Each provides a different perspective on the genetic evolution of a particular population. Y Chromosome Polymorphisms By definition the Y chromosome is only carried by the male line. Although smaller than the other chromosomes, the Y chromosome is still enormous compared to the mtDNA. The reason that it carries so few genes is because most of it is composed of "junk" DNA. As such it is relatively unaffected by natural selection. The non-recombining part of the Y chromosome is passed on from father to son with little change apart from the introduction of genetic polymorphisms as a result of random mutations. The only problem with using the Y chromosome to study inheritance has been the practical difficulty of identifying a wide range of polymorphisms within it, although the application of special HPLC techniques has overcome some of this limitation in recent years. Y chromosome polymorphisms seem to be more affected by genetic drift and may give a better resolution between closely related populations where the time since their point of divergence has been relatively short. By contrast the mtDNA is carried by the female line. Although less than one thousandth the size of the DNA in the non-recombinant Y chromosome, polymorphisms are about 10 times more frequent in mtDNA than in autosome chromosomes. Techniques and Applications Population genetics is a highly statistical science and different numerical methods can be used to calculate the various properties of one or several populations. Our intention here is to cover the main analytical tools used in the published literature relating to Karakalpak and the other Central Asian populations. 
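Before looking at those tools in detail, the effect of random genetic drift described earlier is easy to see in a toy simulation. The Python sketch below is not from the source and models no real Central Asian data: it simply follows the frequency of one variant of a biallelic marker in isolated populations of different sizes, using Wright-Fisher style resampling.

import random

def drift(freq=0.5, pop_size=100, generations=200, seed=0):
    """Toy Wright-Fisher drift: each generation the 2N gene copies of a
    diploid population are drawn at random from the previous generation's
    allele frequency, so the frequency wanders by chance alone."""
    rng = random.Random(seed)
    copies = 2 * pop_size
    for _ in range(generations):
        freq = sum(rng.random() < freq for _ in range(copies)) / copies
    return freq

# Two initially identical but isolated populations drift apart over time,
# and the smaller the population, the larger and faster the drift.
for size in (50, 5000):
    print(size, [round(drift(pop_size=size, seed=s), 2) for s in range(5)])

With only 100 gene copies the frequency typically wanders far towards 0 or 1 within 200 generations, while with 10,000 copies it barely moves from 0.5, which is the sense in which drift matters most in small, isolated groups.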
The genetic diversity of a population is the diversity of DNA sequences within its gene pool. It is calculated by a statistical method known as the analysis of molecular variance (AMOVA) in the DNA markers from that population. It is effectively a summation of the frequencies of individual polymorphisms found within the sample, mathematically normalized so that a diversity of 0 implies all the individuals in that population have identical DNA and a diversity of 1 implies that the DNA of every individual is different. The genetic distance between two populations is a measure of the difference in their polymorphism frequencies. It is calculated statistically by comparing the pairwise differences between the markers identified for each population, to the pairwise differences within each of the two populations. This distance is a multi-dimensional not a linear measure. However it is normally illustrated graphically in two dimensions. New variables are identified by means of an angular transformation, the first two of which together account for the greatest proportion of the differences between the populations studied. Another property that can be measured statistically is kinship - the extent to which members of a population are related to each other as a result of a common ancestor. Mathematically, a kinship coefficient is the probability that a randomly sampled sequence of DNA from a randomly selected locus is identical across all members of the same population. A coefficient of 1 implies everyone in the group is related while a coefficient of 0 implies no kinship at all. By making assumptions about the manner in which genetic mutations occur and their frequency over time it is possible to work backwards and estimate how many generations (and therefore years) have elapsed from the most recent common ancestor, the individual to whom all the current members of the population are related by descent. This individual is not necessarily the founder of the population. For example if we follow the descent of the Y chromosome, this can only be passed down the male line from father to son. If a male has no sons his non-combining Y chromosome DNA is eliminated from his population for ever more. Over time, therefore, the Y chromosomes of the populations ancestors will be progressively lost. There may well have been ancestors older than the most recent common ancestor, even though we can find no signs for those ancestors in the Y chromosome DNA of the current population. A similar situation arises with mtDNA in the female half of the population because some women do not have daughters. In 1977 the American anthropologist Gordon T. Bowles published an analysis of the anthropometric characteristics of 519 different populations from across Asia, including the Karakalpaks and two regional groups of Uzbeks. Populations were characterized by 9 standard measurements, including stature and various dimensions of the head and face. A multivariate analysis was used to separate the different populations by their physical features. Bowles categorized the populations across four regions of Asia (West, North, East, and South) into 19 geographical groups. He then analysed the biological distances between the populations within each group to identify clusters of biologically similar peoples. Central Asia was divided into Group XVII encompassing Mongolia, Singkiang, and Kazakhstan and Group XVIII encompassing Turkestan and Tajikistan. 
Each Group was found to contain three population clusters: Anthropological Cluster Analysis of Central Asia | Group || Cluster ||Regional Populations| |XVII ||1||Eastern Qazaqs| Alai Valley Kyrgyz |2||Aksu Rayon Uighur Alma Ata Uighur |Alma Ata Qazaqs| T'ien Shan Kyrgyz |Total Turkmen | Within geographical Group XVIII, the Karakalpaks clustered with the Uzbeks of Tashkent and the Uzbeks of Samarkand. The members of this first cluster were much more heterogeneous than the other two clusters of neighbouring peoples. Conversely the Turkmen cluster had the lowest variance of any of the clusters in the North Asia region, showing that different Turkmen populations are closely related. The results of this study were re-presented by Cavalli-Sforza in a more readily understandable graphical form. The coordinates used are artificial mathematical transformations of the original 9 morphological measurements, designed to identify the distances between different populations in a simple two-dimensional format. The first two principal coordinates identify a clear division between the Uzbek/Karakalpaks, and the Turkmen and Iranians, but show similarities between the Uzbek/Karakalpaks and the Tajiks, and also with the western Siberians. Though not so close there are some similarities between the Uzbek/Karakalpaks and the Qazaqs, Kyrgyz, and Mongols: Physical Anthropology of Asia redrawn by David Richardson after Bowles 1977 First and Second Principal Coordinates The second and third principal coordinates maintains the similarity between Uzbek/Karakalpaks and Tajiks but emphasizes the more eastern features of the Qazaqs, Kyrgyz, and Mongols: Physical Anthropology of Asia redrawn by David Richardson after Bowles 1977 Second and Third Principal Coordinates The basic average morphology of the Uzbeks and Karakalpaks shows them to be of medium stature, with heads that have an average length but an above average breadth compared to the other populations of Asia. Their faces are broad and are of maximum height. Their noses are of average width but have the maximum length found in Asia. Qazaqs have the same stature but have longer and broader heads. Their faces are shorter but broader, having the maximum breadth found in Asia, while their noses too are shorter and slightly broader. Some of these differences in features were noted by some of the early Russian visitors, such as N. N. Karazin, who observed the differences between the Karakalpaks and the Qazaqs (who at that time were called Kirghiz) when he first entered the northern Aral delta: "In terms of type, the Karakalpak people themselves differ noticeably from the Kirghizs: flattened Mongolian noses are already a rarity here, cheek-bones do not stand out so, beards and eyebrows are considerably thicker - there is a noticeably strong predominance of the Turkish race." The Central Asian Gene Pool Western researchers tended to under represent Central Asian populations in many of the earlier studies of population genetics. Cavalli-Sforza, Menozzi, and Piazza, 1994 In 1994 Cavalli-Sforza and two of his colleagues published a landmark study of the worldwide geographic distribution of human genes. In order to make global comparisons the study was forced to rely upon the most commonly available genetic markers, and analysed classical polymorphisms based on blood groups, plasma proteins, and red cell enzymes. Sadly no information was included for Karakalpaks or Qazaqs. Results were analysed continent by continent. 
The results for the different populations of Asia grouped the Uzbeks, Turkmen, and western Turks into a central cluster, located on the borderline between the Caucasian populations of the west and south and the populations of Northeast Asia and East Asia: Principal Component Analysis of Asian Populations Redrawn by David Richardson after Cavelli-Sforza et al, 1994 Comas, Calafell, Pérez-Lezaun et al, 1998 In 1993-94 another Italian team collected DNA samples from four different populations close to the Altai: Qazaq highlanders living close to Almaty, Uighur lowlanders in the same region, and two Kyrgyz communities - one in the southern highlands, the other in the northern lowlands of Kyrgyzstan. The data was used in two studies, both published in 1998. In the first, by Comas et al, mtDNA polymorphisms in these four communities were compared with other Eurasian populations in the west (Europe, Middle East, and Turkey), centre (the Altai) and the east (Mongolia, China, and Korea). The four Central Asian populations all showed high levels of sequence diversity - in some cases the highest in Eurasia. At the same time they were tightly clustered together, almost exactly halfway between the western and the eastern populations, the exception being that the Mongolians occupied a position close to this central cluster. The results suggested that the Central Asian gene pool was an admixture of the western and eastern gene pools, formed after the western and eastern Eurasians had diverged. The authors suggested that this diversity had possibly been enhanced by human interaction along the Silk Road. In the second, by Pérez-Lezaun et al, short-tandem-repeat polymorphisms in the Y chromosome were analysed for the four Central Asian populations alone. Each of the four was found to be highly heterogeneous yet very different from the other three, the latter finding appearing to contradict the mtDNA results. However the two highland groups had less genetic diversity because each had very high frequencies for one specific polymorphism: Y chromosome haplotype frequencies, with labels given to those shared by more than one population From Pérez-Lezaun et al, 1998. The researchers resolved the apparent contradiction between the two studies in terms of different migration patterns for men and women. All four groups practised a combination of exogamy and patrilocal marriage - in other words couples within the same clan could not marry and brides always moved from their own village to the village of the groom. Consequently the males, and their genes, were isolated and localized, while the females were mobile and there were more similarities in their genes. The high incidence of a single marker in each highland community was presumed to be a founder's effect, supported by evidence that the highland Qazaq community had only been established by lowland Qazaqs a few hundred years ago. Zerjal, Spencer Wells, Yuldasheva, Ruzibakiev, and Tyler-Smith, 2002 In 2002 a joint Oxford University/Imperial Cancer Research Fund study was published, analysing Y chromosome polymorphisms in 15 different Central Asian populations, from the Caucasus to Mongolia. It included Uzbeks from the eastern viloyat of Kashkadarya, Qazaqs and Uighurs from eastern Kazakhstan, Tajiks, and Kyrgyz. Blood samples had been taken from 408 men, living mainly in villages, between 1993 and 1995. In the laboratory the Y chromosomes were initially typed with binary markers to identify 13 haplogroups. 
Following this, microsatellite variations were typed in order to define more detailed haplotypes. Haplogroup frequencies were calculated for each population and were illustrated by means of the following chart: Haplogroup frequencies across Central Asia From Zerjal et al, 2002. Many of the same haplogroups occurred across the 5,000 km expanse of Central Asia, although with large variations in frequency and with no obvious overall pattern. Haplogroups 1, 2, 3, 9, and 26 accounted for about 70% of the total sample. Haplogroups (Hg) 1 and 3 were common in almost all populations, but the highest frequencies of Hg1 were found in Turkmen and Armenians, while the highest frequencies of Hg3 were found in Kyrgyz and Tajiks. Hg3 was more frequent in the eastern populations, but was only present at 3% in the Qazaqs. Hg3 is the equivalent of M17, which seems to originate from Russia and the Ukraine, a region not covered by this survey - see Spencer Wells et al, 2001 below. Hg9 was very frequent in the Middle East and declined in importance across Central Asia from west to east. However some eastern populations had a higher frequency - the Uzbeks, Uighurs, and Dungans. Hg10 and its derivative Hg36 showed the opposite pattern, together accounting for 54% of haplogroups for the Mongolians and 73% for the Qazaqs. Hg26, which is most frequently found in Southeast Asia, occurs with the highest frequencies among the Dungans (26%), Uighurs (15%), Mongolians (13%), and Qazaqs (13%) in eastern Central Asia. Hg12 and Hg 16 are widespread in Siberia and northern Eurasia but are rare in Central Asia except for the Turkmen and Mongolians. Hg21 was restricted to the Caucasus region. The most obvious observation is that virtually each population is quite distinct. As an example, the Uzbeks are quite different from the Turkmen, Qazaqs, or Mongolians. Only two populations, the Kyrgyz from central Kyrgyzstan and the Tajiks from Pendjikent, show any The researchers measured the genetic diversity of each population using both haplogroup and microsatellite frequencies. Within Central Asia, the Uzbeks, Uighurs, Dungans, and Mongolians exhibited high genetic diversity, while the Qazaqs, Kyrgyz, Tajiks, and Turkmen showed low genetic diversity. These differences were explored by examining the haplotype variation within each haplogroup for each population. Among the Uzbeks, for example, many different haplotypes are widely dispersed across all chromosomes. Among the Qazaqs, however, the majority of the haplotypes are clustered together and many chromosomes share the same or related haplotypes. Low diversity coupled with high frequencies of population-specific haplotype clusters are typical of populations that have experienced a bottleneck or a founder event. The most recent common ancestor of the Tajik population was estimated to date from the early part of the 1st millennium AD, while the most recent common ancestors of the Qazaq and Kyrgyz populations were placed in the period 1200 to 1500 AD. The authors suggested that bottlenecks might be a feature of societies like the Qazaqs and Kyrgyz with small, widely dispersed nomadic groups, especially if they had suffered massacres during the Mongol invasion. Of course these calculations have broad confidence intervals and must be interpreted with caution. Microsatellite haplotype frequencies were used to investigate the genetic distances among the separate populations. 
The best two-dimentional fit produces a picture with no signs of general clustering on the basis of either geography or linguistics: Genetic distances based on micosatellite haplotypes From Zerjal et al, 2002. The Kyrgyz (ethnically Turkic) do cluster next to the Tajiks (supposedly of Indo-Iranian origin), but both are well separated from the neighbouring Qazaqs. The Turkmen, Qazaqs, and Georgians tend to be isolated from the other groups, leaving the Uzbeks in a somewhat central position, clustered with the Uighurs and Dungans. The authors attempted to interpret the results of their study in terms of the known history of the region. The apparently underlying graduation in haplogroup frequencies from west to east was put down to the eastward agricultural expansion out of the Middle East during the Neolithic, some of the haplogroup markers involved being more recent than the Palaeolithic. Meanwhile Hg3 (equivalent to M17 and Eu19), which is widespread in Central Asia, was attributed to the migration of the pastoral Indo-Iranian "kurgan culture" eastwards from the Ukraine in the late 3rd/early 2nd millennium BC. The mountainous Caucasus region seems to have been bypassed by this migration, which seems to have extended across Central Asia as far as the borders of Siberia and China. Later events also appear to have left their mark. The presence of a high number of low-frequency haplotypes in Central Asian populations was associated with the spread of Middle Eastern genes, either through merchants associated with the early Silk Route or the later spread of Islam. Uighurs and Dungans show a relatively high Middle Eastern admixture, including higher frequencies of Hg9, which might indicate their ancestors migrated from the Middle East to China before moving into Central Asia. High frequencies of Hg10 and its derivative Hg36 are found in the majority of Altaic-speaking populations, especially the Qazaqs, but also the Uzbeks and Kyrgyz. Yet its contribution west of Uzbekistan is low or undetectable. This feature is associated with the progressive migrations of nomadic groups from the east, from the Hsiung-Nu, to the Huns, the Turks, and the Mongols. Of course Central Asians have not only absorbed immigrants from elsewhere but have undergone expansions, colonizations and migrations of their own, contributing their DNA to surrounding populations. Hg1, the equivalent of M45 and its derivative markers, is believed to have originated in Central Asia and is found throughout the Caucasus and in Mongolia. The Karakalpak Gene Pool Spencer Wells et al, 2001 The first examination of Karakalpak DNA appeared as part of a widespread study of Eurasian Y chromosome diversity published by Spencer Wells et al in 2001. It included samples from 49 different Eurasian groups, ranging from western Europe, Russia, the Middle East, the Caucasus, Central Asia, South India, Siberia, and East Asia. Data on 12 other groups was taken from the literature. In addition to the Karakalpaks, the Central Asian category included seven separate Uzbek populations selected from Ferghana to Khorezm, along with Turkmen from Ashgabat, Tajiks from Samarkand, and Qazaqs and Uighurs from Almaty. The study used biallelic markers that were then assigned to 23 different haplotypes. To illustrate the results the latter were condensed into 7 evolutionary-related groups. The study found that the Uzbek, Karakalpak, and Tajik populations had the highest haplotype diversity in Eurasia, the Karakalpaks having the third highest diversity of all 49 groups. 
The Qazaqs and Kyrgyz had a significantly lower diversity. This diversity is obvious from the chart comparing haplotype frequencies across Eurasia: Distribution of Y chromosome haplotype lineages across various Eurasian populations From Spencer Wells et al, 2001. Uzbeks have a fairly balanced haplotype profile, while populations in the extreme west and east are dominated by one specific haplotype lineage - the M173 lineage in the extreme west and the M9 lineage in the extreme east and Siberia. The Karakalpaks are remarkably similar to the Uzbeks: Distribution of Y chromosome haplotype lineages in Uzbeks and Karakalpaks From Spencer Wells et al, 2001. the main differences being that Karakalpaks have a higher frequency of M9 and M130 and a lower frequency of M17 and M89 haplotype lineages. M9 is strongly linked to Chinese and other far-eastern peoples, while M130 is associated with Mongolians and Qazaqs. On the other hand, M17 is strong in Russia, the Ukraine, the Czech and Slovak Republics as well as in Kyrgyz populations, while M89 has a higher frequency in the west. It seems that compared to Uzbeks, the Karakalpak gene pool has a somewhat higher frequency of haplotypes that are associated with eastern as opposed to western Eurasian populations. In fact the differences between Karakalpaks and Uzbeks are no more pronounced than between the Uzbeks themselves. Haplotype frequencies for the Karakalpaks tend to be within the ranges measured across the different Uzbek populations: Comparison of Karakalpak haplotype lineage frequencies to other ethnic groups in Central Asia || M130|| M89 || M9 || M45 || M173 || M17 || Total | ||0 - 7||7-18||19-34||5-21||4-11 Statistically Karakalpaks are genetically closest to the Uzbeks from Ferghana, followed by those from Surkhandarya, Samarkand, and finally Khorezm. They are furthest from the Uzbeks of Bukhara, Tashkent, and Kashkadarya. These results also show the distance between the Karakalpaks and the other peoples of Central Asia and its neighbouring regions. Next to the Uzbeks, the Karakalpaks are genetically closest to the Tatars and Uighurs. However they are quite distant from the Turkmen, Qazaqs, Kyrgyz, Siberians, and Iranians. The researchers produced a "neighbour-joining" tree, which clustered the studied populations into eight categories according to the genetic distances between them. The Karakalpaks were classified into cluster VIII along with Uzbeks, Tatars, and Uighurs - the populations with the highest genetic diversity. They appear sandwiched between the peoples of Russian and the Ukraine and the Mongolians and Qazaqs. Neighbour-joining tree of 61 Eurasian Populations Karakalpaks are included in cluster VIII along with Uzbeks, Tatars, and Uighurs From Spencer Wells et al, 2001. Spencer Wells and his colleagues did not attempt to explain why the Karakalpak gene pool is similar to Uzbek but is different from the Qazaq, a surprising finding given that the Karakalpaks lived in the same region as the Qazaqs of the Lesser Horde before migrating into Khorezm. Instead they suggested that the high diversity in Central Asia might indicate that its population is among the oldest in Eurasia. M45 is the ancestor of haplotypes M173, the predominant group found in Western Europe, and is thought to have arisen in Central Asia about 40,000 years ago. M173 occurred about 30,000 years old, just as modern humans began their migration from Central Asia into Europe during the Upper Palaeolithic. 
M17 (also known as the Eu19 lineage) has its origins in eastern Europe and the Ukraine and may have been initially introduced into Central Asia following the last Ice Age and re-introduced later by the south-eastern migration of the Indo-Iranian "kurgan" culture. Comas et al, 2004 At the beginning of 2004 a complementary study was published by David Comas, based on the analysis of mtDNA haplogroups from 12 Central Asian and neighbouring populations, including Karakalpaks, Uzbeks, and Qazaqs. Sample size was only 20, dropping to 16 for Dungans and Uighurs, so that errors in the results for individual populations could be high. The study reconfirmed the high genetic diversity within Central Asian populations. However a high proportion of sequences originated elsewhere, suggesting that the region had experienced "intense gene flow" in the past. The haplogroups were divided into three types according to their origins: West Eurasian, East Asian, and India. Populations showed a graduation from the west to the east with the Karakalpaks occupying the middle ground, with half of their haplogroups having a western origin and the other half having an eastern origin. Uzbek populations contained a small Indian component. Mixture of western and eastern mtDNA haplogroups across Central Asia |Population||West Eurasian|| East Asian || Total | The researchers found that two of the haplogroups of East Asian origin (D4c and G2a) not only occurred at higher frequencies in Central Asia than in neighbouring populations but appeared in many related but diverse forms. These may have originated as founder mutations some 25,000 to 30,000 years ago, expanded as a result of genetic drift and subsequently become dispersed into the neighbouring populations. Their incidence was highest in the Qazaqs, and second highest in the Turkmen and Karakalpaks. The majority of the other lineages separate into two types with either a western or an eastern origin. They do not overlap, suggesting that they were already differentiated before they came together in Central Asia. Furthermore the eastern group contains both south-eastern and north-eastern components. One explanation for their admixture in Central Asia is that the region was originally inhabited by Western people, who were then partially replaced by the arrival of Eastern people. There is genetic evidence from archaeological sites in eastern China of a drastic shift, between 2,500 and 2,000 years ago, from a European-like population to the present-day East Asian population. The presence of ancient Central Asian sequences suggests it is more likely that the people of Central Asia are a mixture of two differentiated groups of peoples who originated in west and east Eurasia respectively. Chaix and Heyer et al, 2004 The most interesting study of Karakalpak DNA so far was published by a team of French workers in the autumn of 2004. It was based on blood samples taken during two separate expeditions to Karakalpakstan in 2001 and 2002, organized with the assistance of IFEAC, the Institut Français d'Etudes sur l'Asie Centrale, based in Tashkent. The samples consisted of males belonging to five different ethnic groups: Qon'ırat Karakalpaks (sample size 53), On To'rt Urıw Karakalpaks (53), Qazaqs (50), Khorezmian Uzbeks (40), and Turkmen (51). The study was based on the analysis of Y chromosome haplotypes from DNA extracted from white blood cells. 
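The "diversity" figures quoted in these Y chromosome studies are haplotype (gene) diversities of the kind introduced earlier, commonly computed with Nei's estimator. The Python sketch below is purely illustrative: the haplotypes are invented, and real analyses work from typed STR repeat counts for each sampled man.

from collections import Counter

def haplotype_diversity(haplotypes):
    """Nei's gene diversity, H = n/(n-1) * (1 - sum of squared haplotype
    frequencies): 0 when every individual carries the same haplotype,
    approaching 1 when every individual carries a different one."""
    n = len(haplotypes)
    counts = Counter(haplotypes)
    sum_sq = sum((c / n) ** 2 for c in counts.values())
    return n / (n - 1) * (1 - sum_sq)

# Invented Y-STR haplotypes, written as tuples of repeat counts at four loci.
sample = [(13, 29, 24, 11)] * 6 + [(13, 30, 24, 11)] * 3 + [(14, 29, 25, 10)]
print(round(haplotype_diversity(sample), 2))   # 0.6 for this contrived sample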
In addition to providing samples for DNA analysis, participants were also interviewed to gather information on their paternal lineages and tribal and clan affiliations. Unfortunately the published results only focused on the genetic relationships between the tribes, clans and lineages of these five ethnic groups. However before reviewing these important findings it is worth looking at the more general aspects that emerged from the five samples. These were summarized by Professor Evelyne Heyer and Dr R Chaix at a workshop on languages and genes held in France in 2005, where the results from Karakalpakstan were compared with the results from similar expeditions to Kyrgyzstan, the Bukhara, Samarkand, and Ferghana Valley regions of Uzbekistan, and Tajikistan as well as with some results published by other research teams. In some cases comparisons were limited by the fact that the genetic analysis of samples from different regions was not always done according to the same protocols.

The first outcome was the reconfirmation of the high genetic diversity among Karakalpaks and Uzbeks:

Y Chromosome Diversity across Central Asia
|Population||Region||Sample Size||Diversity|
|Karakalpak On To'rt Urıw||Karakalpakstan||54||0.89|
|Tajik Kamangaron||Ferghana Valley||30||0.98|
|Tajik Richtan||Ferghana Valley||29||0.98|
|Kyrgyz Andijan||Uzbek Ferghana Valley||46||0.82|
|Kyrgyz Jankatalab||Uzbek Ferghana Valley||20||0.78|
|Kyrgyz Doboloo||Uzbek Ferghana Valley||22||0.70|

The high diversities found in Uighur and Tajik communities also agreed with earlier findings. Qon'ırat Karakalpaks had somewhat greater genetic diversity than On To'rt Urıw Karakalpaks. Some of these figures are extremely high. A diversity of zero implies a population where every individual is identical. A diversity of one implies the opposite, with the haplotypes of every individual being different.

The second more important finding concerned the Y chromosome genetic distances among different Central Asian populations. As usual this was presented in two dimensions:

Genetic distances between ethnic populations in Karakalpakstan and the Ferghana Valley. From Chaix and Heyer et al, 2004.

The researchers concluded that Y chromosome genetic distances were strongly correlated to geographic distances. Not only are Qon'ırat and On To'rt Urıw populations genetically close, both are also close to the neighbouring Khorezmian Uzbeks. Together they give the appearance of a single population that has only relatively recently fragmented into three separate groups. Clearly this situation is mirrored with the two Tajik populations living in the Ferghana Valley and also with two of the three Kyrgyz populations from the same region. Although close to the local Uzbeks, the two Karakalpak populations have a slight bias towards the local Qazaqs.

The study of the Y chromosome was repeated for the mitochondrial DNA, to provide a similar picture for the female half of the same populations. The results were compared to other studies conducted on other groups of Central Asians. We have redrawn the chart showing genetic distances among populations, categorizing different ethnic groups by colour to facilitate comparisons:

Genetic distances among ethnic populations in Central Asia. Based on mitochondrial DNA polymorphisms. From Heyer, 2005.

The French team concluded that, in this case, genetic distances were not related to either geographical distances or to linguistics.
However this is not entirely true because there is some general clustering among populations of the same ethnic group, although by no means as strong as that observed from the Y chromosome data. The three Karakalpak populations highlighted in red consist of the On To'rt Urıw (far right), the Qon'ırat (centre), and the Karakalpak sample used in the Comas 2004 study (left). The Uzbeks are shown in green and those from Karakalpakstan are the second from the extreme left, the latter being the Uzbeks from Samarkand. A nearby group of Uzbeks from Urgench in Khorezm viloyati appear extreme left. There is some relationship between the mtDNA of the Karakalpak and Uzbek populations of the Aral delta therefore, but it is much weaker than the relationship between their Y chromosome DNA. On the other hand the Qazaqs of Karakalpakstan, the uppermost yellow square, are very closely related to the Karakalpak Qon'ırat according to their mtDNA.

These results are similar to those that emerged from the Italian studies of Qazaq, Uighur, and Kyrgyz Y chromosome and mitochondrial DNA. Ethnic Turkic populations are generally exogamous. Consequently the male DNA is relatively isolated and immobile because men traditionally stay in the same village from birth until death. They had to select their wives from other geographic regions and sometimes married women from other ethnic groups. The female DNA within these groups is consequently more diversified. The results suggest that in the delta, some Qon'ırat men have married Qazaq women and/or some Qazaq men have married Qon'ırat women.

Let us now turn to the primary focus of the Chaix and Heyer paper. Are the tribes and clans of the Karakalpaks and other ethnic groups living within the Aral delta linked by kinship? Y chromosome polymorphisms were analysed for each separate lineage, clan, tribe, and ethnic group using short tandem repeats. The resulting haplotypes were used to calculate a kinship coefficient at each respective level.

Within the two Karakalpak samples the Qon'ırat were all Shu'llik and came from several clans, only three of which permitted the computation of kinship: the Qoldawlı, Qıyat, and Ashamaylı clans. However none of these clans had recognized lineages. The Khorezmian Uzbeks have also long ago abandoned their tradition of preserving genealogical lineages. The On To'rt Urıw were composed of four tribes, four clans, and four lineages:
- Qıtay tribe
- Qıpshaq tribe, Basar clan
- Keneges tribe, Omır and No'kis clans
- Man'g'ıt tribe, Qarasıraq clan
The Qazaq and the Turkmen groups were also structured along tribal, clan, and lineage lines.

The results of the study showed that lineages, where they were still maintained, exhibited high levels of kinship, the On To'rt Urıw having by far the highest. People belonging to the same lineage were therefore significantly more related to each other than people selected at random in the overall global population. Put another way, they share a common ancestor who is far more recent than the common ancestor for the population as a whole:

Kinship coefficients for five different ethnic populations, including the Qon'ırat and the On To'rt Urıw. From Chaix and Heyer et al, 2004.

The kinship coefficients at the clan level were lower, but were still significant in three groups - the Karakalpak Qon'ırat, the Qazaqs, and the Turkmen. However for the Karakalpak On To'rt Urıw and the Uzbeks, men from the same clan were only fractionally more related to each other than were men selected randomly from the population at large.
When we reach the tribal level we find that the men in all five ethnic groups show no genetic kinship whatsoever. In these societies the male members of some but not all tribal clans are partially related to varying degrees, in the sense that they are the descendants of a common male ancestor. Depending on the clan concerned this kinship can be strong, weak, or non-existent. However the members of different clans within the same tribe show no such interrelationship at all. In other words, tribes are conglomerations of clans that have no genetic links with each other apart from those occurring between randomly chosen populations. It suggests that such tribes were formed politically, as confederations of unrelated clans, and not organically as a result of the expansion and sub-division of an initially genetically homogeneous extended family group.

By assuming a constant rate of genetic mutation over time and a generation time of 30 years, the researchers were able to calculate the number of generations (and therefore years) that have elapsed since the existence of the single common ancestor. This was essentially the minimum age of the descent group and was computed for each lineage and clan. However the estimated ages computed were very high. For example, the age of the Qon'ırat clans was estimated at about 460 generations or 14,000 years (late Ice Age), while the age of the On To'rt Urıw lineages was estimated at around 200 generations or 6,000 years (early Neolithic). Clearly these results are ridiculous. The explanation is that each group included immigrants or outsiders who were clearly unrelated to the core population. The calculation was therefore modified, restricting the sample to those individuals who belonged to the modal haplogroup of the descent group. This excluded about 17% of the men in the initial sample. Results were excluded for those descent groups that contained fewer than three individuals.

|Descent Group||Population||Number of generations||Age in years||95% Confidence interval|
|| 35||1,058||454 - 3,704|
|| 20|| 595||255 - 2,083|
||3,051||1,307 - 10,677|
On To'rt Urıw || 13|| 397||170 - 1,389|
|| 415||178 - 1,451|
|| 516||221 - 1,806|

The age of the On To'rt Urıw and other lineages averaged about 15 generations, equivalent to about 400 to 500 years. The age of the clans varied more widely, from 20 generations for the Qazaqs, to 35 generations for the Qon'ırat, and to 102 generations for the Turkmen. This dates the oldest common ancestor of the Qazaq and Qon'ırat clans to a time some 600 to 1,200 years ago. However the common ancestor of the Turkmen clans is some 3,000 years old. The high ages of the Turkmen clans were the result of the occurrence of a significantly mutated haplotype within the modal haplogroup. It was difficult to judge whether these individuals were genuinely related to the other clan members or were themselves recent immigrants.

These figures must be interpreted with considerable caution. Clearly the age of a clan's common ancestor is not the same as the age of the clan itself, since that ancestor may have had ancestors of his own, whose lines of descent have become extinct over time. The calculated ages therefore give us a minimum limit for the age of the clan and not the age of the clan itself. In reality however, the uncertainty in the assumed rate of genetic mutation gives rise to extremely wide 95% confidence intervals. The knowledge that certain Karakalpak Qon'ırat clans are most likely older than a time ranging from 450 to 3,700 years is of little practical use to us.
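The type of age calculation described above can be illustrated with a deliberately simplified estimator; the published study used a more sophisticated method and quotes wide confidence intervals. The sketch below assumes a star-shaped genealogy, an invented per-locus STR mutation rate, and made-up haplotypes; none of these values come from the source.

def tmrca_years(haplotypes, founder, mutation_rate=0.002, generation_years=30):
    """Crude age of a descent group from Y-STR data: under a star-shaped
    genealogy the average squared repeat difference from the presumed founder
    haplotype grows roughly as (mutation rate) x (number of generations)."""
    n_loci = len(founder)
    asd = sum(
        (h[i] - founder[i]) ** 2 for h in haplotypes for i in range(n_loci)
    ) / (len(haplotypes) * n_loci)
    generations = asd / mutation_rate
    return generations * generation_years

founder = (13, 29, 24, 11)
lineage = [(13, 29, 24, 11), (13, 29, 24, 11), (13, 30, 24, 11),
           (13, 29, 24, 11), (13, 29, 24, 11)]
print(round(tmrca_years(lineage, founder)))   # 750 years for this toy lineage

Restricting the sample to men carrying the modal haplotype, as the researchers did, amounts to filtering the list of haplotypes before running such a calculation.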
Clearly more accurate models are required.

Chaix, R.; Quintana-Murci, L.; Hegay, T.; Hammer, M. F.; Mobasher, Z.; Austerlitz, F.; and Heyer, E., 2007

The latest analysis of Karakalpak DNA comes from a study examining the genetic differences between various pastoral and farming populations in Central Asia. In this region these two fundamentally different economies are organized according to quite separate social traditions:
- pastoral populations are classified into what their members claim to be descent groups (tribes, clans, and lineages), practice exogamous marriage (where men must marry women from clans that are different to their own), and are organized on a patrilineal basis (children being affiliated to the descent group of the father, not the mother).
- farmer populations are organized into nuclear and extended families rather than tribes and often practise endogamous marriage (where men marry women from within the same clan, often their cousins).
The study aims to identify differences in the genetic diversity of the two groups as a result of these two different lifestyles. It examines the genetic diversity of:
- maternally inherited mitochondrial DNA in 12 pastoral and 9 farmer populations, and
- paternally inherited Y-chromosomes in 11 pastoral and 7 farmer populations.
The diversity of mtDNA was examined by investigating one of two short segments, known as hypervariable segment number 1 or HVS-1. This and HVS-2 have been found to contain the highest density of neutral polymorphic variations between individuals. The diversity of the Y chromosome was examined by investigating 6 short tandem repeats (STRs) in the non-recombining region of the chromosome.

This particular study sampled mtDNA from 5 different populations from Karakalpakstan: On To'rt Urıw Karakalpaks, Qon'ırat Karakalpaks, Qazaqs, Turkmen, and Uzbeks. Samples collected as part of other earlier studies were used to provide mtDNA data on 16 further populations (one of which was a general group of Karakalpaks) and Y chromosome data on 20 populations (two of which were On To'rt Urıw and Qon'ırat Karakalpaks sampled in 2001 and 2002). The sample size for each population ranged from 16 to 65 individuals. Both Karakalpak arıs were classified as pastoral, along with Qazaqs, Kyrgyz, and Turkmen. Uzbeks were classified as farmers, along with Tajiks, Uighurs, Kurds, and Dungans.

Results of the mtDNA Analysis

The results of the mtDNA analysis are given in Table 1, copied from the paper.

Table 1.
Sample Descriptions and Estimators of Genetic Diversity from the mtDNA Sequence
|Population ||n ||Location ||Long ||Lat ||H ||π ||D ||pD ||Ps ||C |
|Karakalpaks ||20 ||Uzbekistan ||58 ||43 ||0.99 ||5.29 ||-1.95 ||0.01 ||0.90 ||1.05 |
|Karakalpaks (On To'rt Urıw) ||53 ||Uzbekistan/Turkmenistan border ||60 ||42 ||0.99 ||5.98 ||-1.92 ||0.01 ||0.70 ||1.20 |
|Karakalpaks (Qon'ırat) ||55 ||Karakalpakstan ||59 ||43 ||0.99 ||5.37 ||-2.01 ||0.01 ||0.82 ||1.15 |
|Qazaqs ||50 ||Karakalpakstan ||63 ||44 ||0.99 ||5.23 ||-1.97 ||0.01 ||0.88 ||1.11 |
|Qazaqs ||55 ||Kazakhstan ||80 ||45 ||0.99 ||5.66 ||-1.87 ||0.01 ||0.69 ||1.25 |
|Qazaqs ||20 || ||68 ||42 ||1.00 ||5.17 ||-1.52 ||0.05 ||1.00 ||1.00 |
|Kyrgyz ||20 ||Kyrgyzstan ||74 ||41 ||0.97 ||5.29 ||-1.38 ||0.06 ||0.55 ||1.33 |
|Kyrgyz (Sary-Tash) ||47 ||South Kyrgyzstan, Pamirs ||73 ||40 ||0.97 ||5.24 ||-1.95 ||0.01 ||0.49 ||1.52 |
|Kyrgyz (Talas) ||48 ||North Kyrgyzstan ||72 ||42 ||0.99 ||5.77 ||-1.65 ||0.02 ||0.77 ||1.14 |
|Turkmen ||51 ||Uzbekistan/Turkmenistan border ||59 ||42 ||0.98 ||5.48 ||-1.59 ||0.04 ||0.53 ||1.42 |
|Turkmen ||41 ||Turkmenistan ||60 ||39 ||0.99 ||5.20 ||-2.07 ||0.00 ||0.73 ||1.21 |
|Turkmen ||20 || ||59 ||40 ||0.98 ||5.28 ||-1.71 ||0.02 ||0.75 ||1.18 |
|Dungans ||16 ||Kyrgyzstan ||78 ||41 ||0.94 ||5.27 ||-1.23 ||0.12 ||0.31 ||1.60 |
|Kurds ||32 ||Turkmenistan ||59 ||39 ||0.97 ||5.61 ||-1.35 ||0.05 ||0.41 ||1.52 |
|Uighurs ||55 ||Kazakhstan ||82 ||47 ||0.99 ||5.11 ||-1.91 ||0.01 ||0.62 ||1.28 |
|Uighurs ||16 ||Kyrgyzstan ||79 ||42 ||0.98 ||4.67 ||-1.06 ||0.15 ||0.63 ||1.23 |
|Uzbeks (North) ||40 ||Karakalpakstan ||60 ||43 ||0.99 ||5.49 ||-2.03 ||0.00 ||0.68 ||1.21 |
|Uzbeks (South) ||42 ||Surkhandarya, Uzbekistan ||67 ||38 ||0.99 ||5.07 ||-1.96 ||0.01 ||0.81 ||1.14 |
|Uzbeks (South) ||20 ||Uzbekistan ||66 ||40 ||0.99 ||5.33 ||-1.82 ||0.02 ||0.90 ||1.05 |
|Uzbeks (Khorezm) ||20 ||Khorezm, Uzbekistan ||61 ||42 ||0.98 ||5.32 ||-1.62 ||0.04 ||0.70 ||1.18 |
|Tajiks (Yagnobi) ||20 || ||71 ||39 ||0.99 ||5.98 ||-1.76 ||0.02 ||0.90 ||1.05 |
Key: the pastoral populations are in the grey area; the farmer populations are in the white area.
The table includes the following parameters:
- sample size, n, the number of individuals sampled in each population. Individuals had to be unrelated to any other member of the same sample for at least two generations.
- the geographical longitude and latitude of the population sampled.
- heterozygosity, H, the proportion of different alleles occupying the same position in each mtDNA sequence. It measures the frequency of heterozygotes for a particular locus in the genetic sequence and is one of several statistics indicating the level of genetic variation or polymorphism within a population. When H=0, all alleles are the same and when H=1, all alleles are different.
- the mean number of pairwise differences, π, measures the average number of nucleotide differences between all pairs of HVS-1 sequences. This is another statistic indicating the level of genetic variation within a population, in this case measuring the level of mismatch between sequences.
- Tajima’s D, D, measures the frequency distribution of alleles in a nucleotide sequence and is based on the difference between two estimations of the population mutation rate. It is often used to distinguish between a DNA sequence that has evolved randomly (D=0) and one that has experienced directional selection favouring a single allele. It is consequently used as a test for natural selection.
However it is also influenced by population history and negative values of D can indicate high rates of population growth.
- the probability that D is significantly different from zero, pD.
- the proportion of singletons, Ps, measures the relative number of unique polymorphisms in the sample. The higher the proportion of singletons, the greater the population has been affected by inward migration.
- the mean number of individuals carrying the same mtDNA sequence, C, is an inverse measure of diversity. The more individuals with the same sequence, the less diversity within the population and the higher proportion of individuals who are closely related.
The table shows surprisingly little differentiation between pastoral and farmer populations. Both show high levels of within population genetic diversity (for both groups, median H=0.99 and π is around 5.3). Further calculations of genetic distance between populations, Fst, (not presented in the table but given graphically in the online reference below) showed a corresponding low level of genetic differentiation among pastoral populations as well as among farmer populations. Both groups of populations also showed a significantly negative Tajima’s D, which the authors attribute to a high rate of demographic growth in neutrally evolving populations. Supplementary data made available online showed a weak correlation between genetic distance, Fst, and geographic distance for both pastoral and farmer populations.
Results of the Y chromosome Analysis
The results of the Y chromosome analysis are given in Table 2, also copied from the paper:
Table 2. Sample Descriptions and Estimators of Genetic Diversity from the Y chromosome STRs
|Population ||n ||Location ||Long ||Lat ||H ||π ||r ||Ps ||C |
|Karakalpaks (On To'rt Urıw) ||54 ||Uzbekistan/Turkmenistan border ||60 ||42 ||0.86 ||3.40 ||1.002 ||0.24 ||2.84 |
|Karakalpaks (Qon'ırat) ||54 ||Karakalpakstan ||59 ||43 ||0.91 ||3.17 ||1.003 ||0.28 ||2.35 |
|Qazaqs ||50 ||Karakalpakstan ||63 ||44 ||0.85 ||2.36 ||1.004 ||0.16 ||2.78 |
|Qazaqs ||38 ||Almaty, KatonKaragay, Karatutuk, Rachmanovsky Kluchi, Kazakhstan ||68 ||42 ||0.78 ||2.86 ||1.004 ||0.26 ||2.71 |
|Qazaqs ||49 ||South-east Kazakhstan ||77 ||40 ||0.69 ||1.56 ||1.012 ||0.22 ||3.06 |
|Kyrgyz ||41 ||Central Kyrgyzstan (Mixed) ||74 ||41 ||0.88 ||2.47 ||1.004 ||0.41 ||1.86 |
|Kyrgyz (Sary-Tash) ||43 ||South Kyrgyzstan, Pamirs ||73 ||40 ||0.45 ||1.30 ||1.003 ||0.12 ||4.78 |
|Kyrgyz (Talas) ||41 ||North Kyrgyzstan ||72 ||42 ||0.94 ||3.21 ||1.002 ||0.39 ||1.78 |
|Mongolians ||65 ||Ulaanbaatar, Mongolia ||90 ||49 ||0.96 ||3.37 ||1.009 ||0.38 ||1.81 |
|Turkmen ||51 ||Uzbekistan/Turkmenistan border ||59 ||42 ||0.67 ||1.84 ||1.006 ||0.27 ||3.00 |
|Turkmen ||21 ||Ashgabat, Turkmenistan ||59 ||40 ||0.89 ||3.34 ||1.006 ||0.48 ||1.62 |
|Dungans ||22 ||Alexandrovka and Osh, Kyrgyzstan ||78 ||41 ||0.99 ||4.13 ||1.005 ||0.82 ||1.10 |
|Kurds ||20 ||Bagyr, Turkmenistan ||59 ||39 ||0.99 ||3.59 ||1.009 ||0.80 ||1.11 |
|Uighurs ||33 ||Almaty and Lavar, Kazakhstan ||79 ||42 ||0.99 ||3.72 ||1.007 ||0.67 ||1.22 |
|Uighurs ||39 ||South East Kazakhstan ||79 ||43 ||0.99 ||3.79 ||1.008 ||0.77 ||1.15 |
|Uzbeks (North) ||40 ||Karakalpakstan ||60 ||43 ||0.96 ||3.42 ||1.005 ||0.48 ||1.54 |
|Uzbeks (South) ||28 ||Kashkadarya, Uzbekistan ||66 ||40 ||1.00 ||3.53 ||1.008 ||0.93 ||1.04 |
|Tajiks (Yagnobi) ||22 ||Penjikent, Tajikistan ||71 ||39 ||0.87 ||2.69 ||1.012 ||0.45 ||1.69 |
Key: the pastoral populations are in the grey area; the farmer
populations are in the white area.
This table also includes the sample size, n, and longitude and latitude of the population sampled, as well as the heterozygosity, H, the mean number of pairwise differences, π, the proportion of singletons, Ps, and the mean number of individuals carrying the same Y STR haplotype, C. In addition it includes a statistical computation of the demographic growth rate, r.
In contrast to the results obtained from the mtDNA analysis, both the heterozygosity and the mean pairwise differences computed from the Y chromosome STRs were significantly lower in the pastoral populations than in the farmer populations. Thus Y chromosome diversity has been lost in the pastoral populations. Conversely, calculations of the genetic distance, Rst, between each of the two groups of populations showed that pastoral populations were more highly differentiated than farmer populations. The supplemental data given online demonstrates that this is not a result of geographic distance, there being no perceived correlation between genetic and geographic distance in either population group. Finally the rate of demographic growth was found to be lower in pastoral than in farmer populations.
At first sight the results are counter-intuitive. One would expect that the diversity of mtDNA in pastoral societies would be higher than in farming societies, because the men in those societies are marrying brides who contribute mtDNA from clans other than their own. Similarly one would expect no great difference in Y chromosome diversity between pastoralists and farmers because both societies are patrilineal. Leaving aside the matter of immigration, the males who contribute the Y chromosome are always selected from the local sampled population.
To understand the results, Chaix et al investigated the distribution of genetic diversity within individual populations using a statistical technique called multi-dimensional scaling analysis or MDS. This attempts to sort or resolve a sample into its different component parts, illustrating the results in two dimensions. The example chosen in the paper focuses on the Karakalpak On To'rt Urıw arıs. The MDS analysis of the Y chromosome data resolves the sample of 54 individuals into clusters of individuals, each cluster sharing exactly the same STR haplotype:
Multidimensional Scaling Analysis based on the Matrix of Distance between Y STR Haplotypes in a Specific Pastoral Population: the Karakalpak On To'rt Urıw.
Thus the sample contains 13 individuals from the O'mir clan of the Keneges tribe with the same haplotype (shown by the large cross), 10 individuals of the Qarasıyraq clan of the Man'g'ıt tribe with the same haplotype (large diamond), and 10 individuals from the No'kis clan of the Keneges tribe with the same haplotype (large triangle). Other members of the same clans have different haplotypes, as shown on the chart. Those close to the so-called "identity core" group may have arisen by mutation. Those further afield might represent immigrants or adoptions.
No such clustering is observed following the MDS analysis of the mtDNA data for the same On To'rt Urıw arıs:
Multidimensional Scaling Analysis based on the Number of Differences between the Mitochondrial Sequence in the Same Pastoral Population: the Karakalpak On To'rt Urıw.
Every individual in the sample, including those from the same clan, has a different HVS-1 sequence.
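The summary statistics in the two tables are straightforward to compute once a sample of haplotypes is available. The R sketch below uses a small invented Y-STR data set (repeat counts at six loci, echoing the six STRs typed in the study) and standard textbook estimators; the paper's exact formulas may differ in detail.

# Toy Y-STR data: each row is one sampled man, each column the repeat count at one locus.
# All values are invented for illustration only.
strs <- rbind(
  c(13, 16, 24, 10, 11, 14),
  c(13, 16, 24, 10, 11, 14),   # same haplotype as row 1
  c(13, 16, 24, 10, 11, 14),   # same haplotype as row 1
  c(14, 16, 23, 10, 11, 14),
  c(13, 17, 24, 11, 11, 15),
  c(14, 15, 25, 10, 12, 14)
)
n      <- nrow(strs)
hap_id <- apply(strs, 1, paste, collapse = "-")
freq   <- as.numeric(table(hap_id)) / n

# Haplotype (gene) diversity, a common estimator of H
H <- (n / (n - 1)) * (1 - sum(freq^2))

# Mean number of pairwise differences: average count of loci at which two men differ
pairs  <- combn(n, 2)
pi_hat <- mean(apply(pairs, 2, function(p) sum(strs[p[1], ] != strs[p[2], ])))

# Mean number of individuals carrying the same haplotype (an inverse measure of diversity)
C <- n / length(unique(hap_id))

round(c(H = H, pi = pi_hat, C = C), 2)

The MDS step described above can likewise be illustrated with R's built-in classical multidimensional scaling function, cmdscale(). The distance measure used here (Manhattan distance between repeat counts) is an assumption for illustration, not necessarily the one used by Chaix et al.; the point is simply that men sharing a haplotype collapse onto a single point, producing the "identity cores" seen in the pastoral samples.

d      <- dist(strs, method = "manhattan")   # pairwise distances between the toy haplotypes above
coords <- cmdscale(d, k = 2)                 # classical MDS into two dimensions
plot(coords, xlab = "Dimension 1", ylab = "Dimension 2",
     main = "Toy MDS of Y-STR haplotypes")
# Identical haplotypes plot at exactly the same coordinates (an "identity core"),
# while mutated or immigrant haplotypes fall away from the cores.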
Similar MDS analyses of the different farmer populations apparently showed very few "identity cores" in the Y chromosome data and a total absence of clustering in the mtDNA data, just as in the case of the On To'rt Urıw. The overall conclusion was that the existence of "identity cores" was specific to the Y chromosome data and was mainly restricted to the pastoral populations. This is reflected in the tables above, where we can see that the mean number of individuals carrying the same mtDNA sequence ranges from about 1 to 1½ and shows no difference between pastoral and farming populations. On the other hand the mean number of individuals carrying the same STR haplotype is low for farming populations but ranges from 1½ up to almost 5 for the pastoralists. Pastoral populations also have a lower number of Y chromosome singletons.
Chaix et al point to three reinforcing factors to explain the existence of "identity cores" in pastoral as opposed to farming populations:
- pastoral lineages frequently split and divide with closely related men remaining in the same sub-group, thereby reducing Y chromosome diversity,
- small populations segmented into lineages can experience strong genetic drift, creating high frequencies of specific haplotypes, and
- random demographic uncertainty in small lineage groups can lead to the extinction of some haplotypes, also reducing diversity.
Together these factors reduce overall Y chromosome diversity.
To explain the similar levels of mtDNA diversity in pastoral and farmer populations, Chaix et al point to the complex rules connected with exogamy. Qazaq men for example must marry a bride who has not had an ancestor belonging to the husband's own lineage for at least 7 generations, while Karakalpak men must marry a bride from another clan, although she can belong to the same tribe. Each pastoral clan, therefore, is gaining brides (and mtDNA) from external clans but is losing daughters (and mtDNA) to external clans. Such continuous and intense migration reduces mtDNA genetic drift within the clan. This in turn lowers diversity to a level similar to that observed in farmer populations, which is in any event already high. The process of two-way female migration effectively isolates the mtDNA structure of pastoral societies from their social structure.
One aspect overlooked by the study is that, until recent times, Karakalpak clans were geographically isolated in villages located in specific parts of the Aral delta and therefore tended to always intermarry with one of their adjacent neighbouring clans. In effect, the two neighbouring clans behaved like a single population, with females moving between clans in every generation. How such social behaviour affected genetic structure was not investigated.
The Uzbeks were traditionally nomadic pastoralists and progressively became settled agricultural communities from the 16th century onwards. The survey provided an opportunity to investigate the effect of this transition in lifestyle on the genetic structure of the Uzbek Y chromosome. Table 2 above shows that the genetic diversity found among Uzbeks, as measured by heterozygosity and the mean number of pairwise differences, was similar to that of the other farmer populations, as was the proportion of singleton haplotypes. Equally the mean number of individuals carrying the same Y STR haplotype was low (1 to 1½), indicating an absence of the haplotype clustering (or "identity cores") observed in pastoral populations.
The pastoral "genetic signature" must have been rapidly eroded, especially in the case of the northern Uzbeks from Karakalpakstan, who only settled from the 17th century onwards. Two reasons are proposed for this rapid transformation. Firstly the early collapse and integration of the Uzbek descent groups following their initial settlement and secondly their mixing with traditional Khorezmian farming populations, which led to the creation of genetic admixtures of the two groups.
Of course the Karakalpak On To'rt Urıw have been settled farmers for just as long as many Khorezmian Uzbeks and cannot in any way be strictly described as pastoralists. Indeed the majority of Karakalpak Qon'ırats have also been settled for much of the 20th century. However both have strictly maintained their traditional pastoralist clan structure and associated system of exogamous marriage. So although their lifestyles have changed radically, their social behaviour to date has not.
Discussion and Conclusions
The Karakalpaks and their Uzbek and Qazaq neighbours have no comprehensive recorded history, just occasional historical reports coupled with oral legends which may or may not relate to certain historical events in their past. We therefore have no record of where or when the Karakalpak confederation emerged and for what political or other reasons. In the absence of solid archaeological or historical evidence, many theories have been advanced to explain the origin of the Karakalpaks. Their official history, as taught in Karakalpak colleges and schools today, claims that the Karakalpaks are the descendants of the original endemic nomadic population of the Khorezm oasis, most of whom were forced to leave as a result of the Mongol invasion in 1221 and the subsequent desiccation of the Aral delta following the devastation of Khorezm by Timur in the late 14th century, only returning in significant numbers during the 18th century. We fundamentally disagree with this simplistic picture, which uncritically endures with high-ranking support because it purports to establish an ancient Karakalpak origin and justifies tenure of the current homeland.
While population genetics cannot unravel the full tribal history of the Karakalpaks per se, it can give us important clues to their formation and can eliminate some of the less likely theories that have been proposed. The two arıs of the Karakalpaks, the Qon'ırat and the On To'rt Urıw, are very similar to each other genetically, especially in the male line. Both are equally close to the Khorezmian Uzbeks, their southern neighbours. Indeed the genetic distances between the different populations of Uzbeks scattered across Uzbekistan are no greater than the distance between many of them and the Karakalpaks. This suggests that Karakalpaks and Uzbeks have very similar origins. If we want to find out about the formation of the Karakalpaks we should look towards the emergence of the Uzbek (Shaybani) Horde and its eastwards migration under the leadership of Abu'l Khayr, who united much of the Uzbek confederation between 1428 and 1468.
Like the Uzbeks, the Karakalpaks are extremely diverse genetically. One only has to spend time with them to realize that some look European, some look Caucasian, and some look typically Mongolian. Their DNA turns out to be an admixture, roughly balanced between eastern and western populations.
Two of their main genetic markers have far-eastern origins, M9 being strongly linked to Chinese and other Far Eastern peoples and M130 being linked to the Mongolians and Qazaqs. On the other hand, M17 is strong in Russia, the Ukraine, and Eastern Europe, while M89 is strong in the Middle East, the Caucasus, and Russia. M173 is strong in Western Europe and M45 is believed to have originated in Central Asia, showing that some of their ancestry goes back to the earliest inhabitants of that region. In fact the main difference between the Karakalpaks and the Uzbeks is a slight difference in the mix of the same markers. Karakalpaks have a somewhat greater bias towards the eastern markers. One possible cause could be the inter-marriage between Karakalpaks and Qazaqs over the past 400 years, a theory that gains some support from the close similarities in the mitochondrial DNA of the neighbouring female Karakalpak Qon'ırat and Qazaqs of the Aral delta.
After the Uzbeks, Karakalpaks are next closest to the Uighurs, the Crimean Tatars, and the Kazan Tatars, at least in the male line. However in the female line the Karakalpaks are quite different from the Uighurs and Crimean Tatars (and possibly from the Kazan Tatars as well). There is clearly a genetic link with the Tatars of the lower Volga through the male line. Of course the Volga region has been closely linked through communications and trade with Khorezm from the earliest days.
The Karakalpaks are genetically distant from the Qazaqs and the Turkmen, and even more so from the Kyrgyz and the Tajiks. We know that the Karakalpaks were geographically, politically, and culturally very close to the Qazaqs of the Lesser Horde prior to their migration into the Aral delta and were even once ruled by Qazaq tribal leaders. From their history, therefore, one might have speculated that the Karakalpaks may have been no more than another tribal group within the overall Qazaq confederation. This is clearly not so. The Qazaqs have a quite different genetic history, being far more homogeneous and genetically closer to the Mongolians of East Asia. However as we have seen, the proximity of the Qazaqs and Karakalpaks undoubtedly led to intermarriage and therefore some level of genetic exchange.
Karakalpak Y chromosome polymorphisms show different patterns from mtDNA polymorphisms in a similar manner to that identified in certain other Central Asian populations. This seems to be associated with the Turkic traditions of exogamy and so-called patrilocal marriage. Marriage is generally not permissible between couples belonging to the same clan, so men must marry women from other clans, or tribes, or in a few cases even different ethnic groups. After the marriage the groom stays in his home village and his bride moves from her village to his. The result is that the male non-recombining part of the Y chromosome becomes localized as a result of its geographical isolation, whereas the female mtDNA benefits from genetic mixing as a result of the albeit short-range migration of young brides from different clans.
One of the most important conclusions is the finding that clans within the same tribe show no sign of genetic kinship, whether the tribe concerned is Karakalpak, Uzbek, Qazaq, or Turkmen. Indeed among the most settled ethnic groups, the Uzbeks and Karakalpak On To'rt Urıw, there is very little kinship even at clan level.
It seems that settled agricultural communities soon lose their strong tribal identity and become more open-minded to intermarriage with different neighbouring ethnic groups. Indeed the same populations place less importance on their genealogy and no longer maintain any identity according to lineage.
It has generally been assumed that most Turkic tribal groups like the Uzbeks were formed as confederations of separate tribes and this is confirmed by the recent genetic study of ethnic groups from Karakalpakstan. We now see that this extends to the tribes themselves, with an absence of any genetic link between clans belonging to the same tribe. Clearly they too are merely associations of disparate groups, formed because of some historical reason other than descent. Possible causes for such an association of clans could be geographic or economic, such as common land use or shared water rights; military, such as a common defence pact or the construction of a shared qala; or perhaps political, such as common allegiance to a strong tribal leader.
The history of Central Asia revolves around migrations and conflicts and the formation, dissolution, and reformation of tribal confederations, from the Saka Massagetae and the Sarmatians, to the Oghuz and Pechenegs, the Qimek, Qipchaq, and Karluk, the Mongols and Tatars, the White and Golden Hordes, the Shaybanid and Noghay Hordes, and finally the Uzbek, Qazaq, and Karakalpak confederations. Like making cocktails from cocktails, the gene pool of Central Asia was constantly being scrambled, more so on the female line as a result of exogamy and patrilineal marriage traditions.
The same tribal and clan names occur over and over again throughout the different ethnic Qipchaq-speaking populations of Central Asia, but in different combinations and associations. Many of the names predate the formation of the confederations to which they now belong, relating to earlier Turkic and Mongol tribal factions. Clearly tribal structures are fluid over time, with some groups withering or being absorbed by others, while new groups emerge or are added. When Abu'l Khayr Sultan became khan of the Uzbeks in 1428-29, their confederation consisted of at least 24 tribes, many with smaller subdivisions. The names of 6 of those tribes occur among the modern Karakalpaks. A 16th century list, based on an earlier document, gives the names of 92 nomadic Uzbek tribes, at least 20 of which were shared by the later breakaway Qazaqs. 13 of the 92 names also occur among the modern Karakalpaks.
Shortly after his enthronement as the Khan of Khorezm in 1644-45, Abu'l Ghazi Khan reorganized the tribal structure of the local Uzbeks into four tüpe:
|Tüpe||Main Tribes||Secondary Tribes|
|On Tort Urugh||On To'rt Urıw||Qan'glı|
| ||Durman, Yüz, Ming||Shaykhs, Burlaqs, Arabs|
| || ||Uyg'ır|
8 out of the 11 tribal names associated with the first three tüpe are also found within the Karakalpak tribal structure. Clearly there is greater overlap between the Karakalpak tribes and the local Khorezmian Uzbek tribes than in the Uzbek tribes in general. The question is whether these similarities pre-dated the Karakalpak migration into the Aral delta or are a result of later Uzbek influences.
We know that the Qon'ırat were a powerful tribe in Khorezm for Uzbeks and Karakalpaks alike. They were mentioned as one of the Karakalpak "clans" on the Kuvan Darya [Quwan Darya] by Gladyshev in 1741 along with the Kitay, Qipchaq, Kiyat, Kinyagaz-Mangot (Keneges-Man'g'ıt), Djabin, Miton, and Usyun.
Munis recorded that Karakalpak Qon'ırat, Keneges, and Qıtay troops supported Muhammad Amin Inaq against the Turkmen in 1769.
Thanks to Sha'rigu'l Payzullaeva we have a comparison of the Qon'ırat tribal structure in the Aral Karakalpaks, the Surkhandarya Karakalpaks, and the Khorezmian Uzbeks, derived from genealogical records:
The different status of the same Qon'ırat tribal groups among the Aral and Surkhandarya Karakalpaks and the Khorezmian Uzbeks
|Tribal group||Aral Karakalpaks||Surkhandarya Karakalpaks||Khorezmian Uzbeks|
|Qostamg'alı||clan||branch of tribe|| |
|Qanjıg'alı||tiıre||branch of tribe||tube|
|Shu'llik||division of arıs||clan|| |
|Tartıwlı||tiıre||branch of tribe||clan|
|Sıyraq||clan||branch of clan|| |
|Qaramoyın||tribe||branch of clan|| |
A tube is a branch of a tribe among the Khorezmian Uzbeks and a tiıre is a branch of a clan among the Aral Karakalpaks.
The Karakalpak enclave in Surkhandarya was already established in the first half of the 18th century, some Karakalpaks fleeing to Samarkand and beyond following the devastating Jungar attack of 1723. Indeed it may even be older - the Qon'ırat have a legend that they came to Khorezm from the country of Zhideli Baysun in Surkhandarya. This suggests that some Karakalpaks had originally travelled south with factions from the Shaybani Horde in the early 16th century. The fact that the Karakalpak Qon'ırats remaining in that region have a similar tribal structure to the Khorezmian Uzbeks is powerful evidence that the tribal structure of the Aral Karakalpaks had broadly crystallized prior to their migration into the Aral delta.
The Russian ethnographer Tatyana Zhdanko was the first academic to make an in-depth study of Karakalpak tribal structure. She not only uncovered the similarities between the tribal structures of the Uzbek and Karakalpak Qon'ırats in Khorezm but also the closeness of their respective customs and material and spiritual cultures. She concluded that one should not only view the similarity between the Uzbek and Karakalpak Qon'ırats in a historical sense, but should also see the commonality of their present-day ethnic relationships. B. F. Choriyev added that "this kind of similarity should not only be sought amongst the Karakalpak and the Khorezmian Qon'ırats but also amongst the Surkhandarya Qon'ırats. They all have the same ethnic history."
Such ethnographic studies provide support to the findings that have emerged from the recent studies of Central Asian genetics. Together they point towards a common origin of the Karakalpak and Uzbek confederations. They suggest that each was formed out of the same melange of tribes and clans inhabiting the Dasht-i Qipchaq following the collapse of the Golden Horde, a vast expanse ranging northwards from the Black Sea coast to western Siberia and then eastwards to the steppes surrounding the lower and middle Syr Darya, encompassing the whole of the Aral region along the way.
Of course the study of the genetics of present-day populations gives us the cumulative outcome of hundreds of thousands of years of complex human history and interaction. We now need to establish a timeline, tracking genetic changes in past populations using the human skeletal remains retrieved from Saka, Sarmatian, Turkic, Tatar, and early Uzbek and Karakalpak archaeological burial sites. Such studies might pinpoint the approximate dates when important stages of genetic intermixing occurred.
Sha'rigu'l Payzullaeva recalls an interesting encounter at the Regional Studies Museum in No'kis during the month of August 1988.
Thirty-eight elderly men turned up together to visit the Museum. Each wore a different kind of headdress, some with different sorts of taqıya, others with their heads wrapped in a double kerchief. They introduced themselves as Karakalpaks from Jarqorghan rayon in Surkhandarya viloyati, just north of the Afghan border. One of them said "Oh daughter, we are getting old now. We decided to come here to see our homeland before we die." During their visit to the Museum they said that they would travel to Qon'ırat rayon the following day. Sha'rigu'l was curious to know why they specifically wanted to visit Qon'ırat. They explained that it was because most of the men were from the Qon'ırat clan. One of the men introduced himself to Sha'rigu'l: "My name is Mirzayusup Khaliyarov, the name of my clan is Qoldawlı. After discovering that Sha'rigu'l was also Qoldawlı his eyes filled with tears and he kissed her on the forehead. Bowles, G. T., The People of Asia, Weidenfeld and Nicolson, London, 1977. Comas, D., Calafell, F., Mateu, E., Pérez-Lezaun, A., Bosch, E., Martínez-Arias, R., Clarimon, J., Facchini, F., Fiori, G., Luiselli, D., Pettener, D., and Bertranpetit, J., Trading Genes along the Silk Road: mtDNA Sequences and the Origin of Central Asian Populations, American Journal of Human Genetics, 63, pages 1824 to 1838, 1998. Cavalli-Sforza, L. L., Menozzi, P., and Piazza, A., The History and Geography of Human Genes, Princeton University Press, Chaix, R., Austerlitz, F., Khegay, T., Jacquesson, S., Hammer, M. F., Heyer, E., and Quintana-Murci, L., The Genetic or Mythical Ancestry of Descent Groups: Lessons from the Y Chromosome, American Journal of Human Genetics, Volume 75, pages 1113 to 1116, 2004. Chaix, R., Quintana-Murci, L., Hegay, T., Hammer, M. F., Mobasher, Z., Austerlitz, F., and Heyer, E., From Social to Genetic Structures in Central Asia, Current Biology, Volume 17, Issue 1, pages 43 to 48, 9 January 2007. Comas, D., Plaza, S., Spencer Wells, R., Yuldaseva, N., Lao, O., Calafell, F., and Bertranpetit, J., Admixture, migrations, and dispersals in Central Asia: evidence from maternal DNA lineages, European Journal of Human Genetics, pages 1 to 10, 2004. Heyer, E., Central Asia: A common inquiry in genetics, linguistics and anthropology, Presentation given at the conference entitled "Origin of Man, Language and Languages", Aussois, France, 22-25 September, 2005. Heyer, E., Private communications to the authors, 14 February and 17 April, 2006. Krader, L., Peoples of Central Asia, The Uralic and Altaic Series, Volume 26, Indiana University, Bloomington, 1971. Passarino, G., Semino, O., Magri, C., Al-Zahery, N., Benuzzi, G., Quintana-Murci, L., Andellnovic, S., Bullc-Jakus, F., Liu, A., Arslan, A., and Santachiara-Benerecetti, A., The 49a,f Haplotype 11 is a New Marker of the EU19 Lineage that Traces Migrations from Northern Regions of the Black Sea, Human Immunology, Volume 62, pages 922 to 932, 2001. Payzullaeva, Sh., Numerous Karakalpaks, many of them! [in Karakalpak], Karakalpakstan Publishing, No'kis, 1995. Pérez-Lezaun, A., Calafell, F., Comas, D., Mateu, E., Bosch, E., Martínez-Arias, R., Clarimón, J., Fiori, G., Luiselli, D., Facchini, F., Pettener, D., and Bertranpetit, J., Sex-Specific Migration Patterns in Central Asian Populations, Revealed by Analysis of Y-Chromosome Short Tandem Repeats and mtDNA, American Journal of Human Genetics, Volume 65, pages 208 to 219, 1999. Spencer Wells, R., The Journey of Man, A Genetic Odyssey, Allen Lane, London, 2002. 
Spencer Wells, R., et al, The Eurasian Heartland: A continental perspective on Y-chromosome diversity, Proceedings of the National Academy of Science, Volume 98, pages 10244 to 10249, USA, 28 August 2001.
Underwood, J. H., Human Variation and Human Micro-Evolution, Prentice-Hall Inc., New Jersey, 1979.
Underhill, P. A., et al, Detection of Numerous Y Chromosome Biallelic Polymorphisms by Denaturing High-Performance Liquid Chromatography, Genome Research, Volume 7, pages 996 to 1005, 1997.
Zerjal, T., Spencer Wells, R., Yuldasheva, N., Ruzibakiev, R., and Tyler-Smith, C., A Genetic Landscape Reshaped by Recent Events: Y Chromosome Insights into Central Asia, American Journal of Human Genetics, Volume 71, pages 466 to 482, 2002.
Visit our sister site www.qaraqalpaq.com, which uses the correct transliteration, Qaraqalpaq, rather than the Russian transliteration, Karakalpak.
http://www.karakalpak.com/genetics.html
A map projection is a systematic transformation of the latitudes and longitudes of locations on the surface of a sphere or an ellipsoid into locations on a plane. Map projections are necessary for creating maps. All map projections distort the surface in some fashion. Depending on the purpose of the map, some distortions are acceptable and others are not; therefore different map projections exist in order to preserve some properties of the sphere-like body at the expense of other properties. There is no limit to the number of possible map projections.
More generally, the surfaces of planetary bodies can be mapped even if they are too irregular to be modeled well with a sphere or ellipsoid. Even more generally, projections are the subject of several pure mathematical fields, including differential geometry and projective geometry. However "map projection" refers specifically to a cartographic projection.
Maps can be more useful than globes in many situations: they are more compact and easier to store; they readily accommodate an enormous range of scales; they are viewed easily on computer displays; they can facilitate measuring properties of the terrain being mapped; they can show larger portions of the Earth's surface at once; and they are cheaper to produce and transport. These useful traits of maps motivate the development of map projections.
However, Carl Friedrich Gauss's Theorema Egregium proved that a sphere's surface cannot be represented on a plane without distortion. The same applies to other reference surfaces used as models for the Earth. Since any map projection is a representation of one of those surfaces on a plane, all map projections distort. Every distinct map projection distorts in a distinct way. The study of map projections is the characterization of these distortions.
Projection is not limited to perspective projections, such as those resulting from casting a shadow on a screen, or the rectilinear image produced by a pinhole camera on a flat film plate. Rather, any mathematical function transforming coordinates from the curved surface to the plane is a projection. Few projections in actual use are perspective.
For simplicity most of this article assumes that the surface to be mapped is that of a sphere. In reality, the Earth and other large celestial bodies are generally better modeled as oblate spheroids, whereas small objects such as asteroids often have irregular shapes. These other surfaces can be mapped as well. Therefore, more generally, a map projection is any method of "flattening" into a plane a continuous curved surface.
Metric properties of maps
Many properties can be measured on the Earth's surface independently of its geography. Some of these properties are:
- area
- shape
- direction
- distance
- scale
Map projections can be constructed to preserve one or more of these properties, though not all of them simultaneously. Each projection preserves or compromises or approximates basic metric properties in different ways. The purpose of the map determines which projection should form the base for the map. Because many purposes exist for maps, many projections have been created to suit those purposes.
Another consideration in the configuration of a projection is its compatibility with data sets to be used on the map. Data sets are geographic information; their collection depends on the chosen datum (model) of the Earth.
Different datums assign slightly different coordinates to the same location, so in large scale maps, such as those from national mapping systems, it is important to match the datum to the projection. The slight differences in coordinate assignation between different datums are not a concern for world maps or other vast territories, where such differences get shrunk to imperceptibility.
Which projection is best?
The mathematics of projection do not permit any particular map projection to be "best" for everything. Something will always get distorted. Therefore a diversity of projections exists to service the many uses of maps and their vast range of scales. Modern national mapping systems typically employ a transverse Mercator or close variant for large-scale maps in order to preserve conformality and low variation in scale over small areas. For smaller-scale maps, such as those spanning continents or the entire world, many projections are in common use according to their fitness for the purpose.
Thematic maps normally require an equal area projection so that phenomena per unit area are shown in correct proportion. However, representing area ratios correctly necessarily distorts shapes more than many maps that are not equal-area. Hence reference maps of the world often appear on compromise projections instead. Due to the severe distortions inherent in any map of the world, within reason the choice of projection becomes largely one of æsthetics.
The Mercator projection, developed for navigational purposes, has often been used in world maps where other projections would have been more appropriate. This problem has long been recognized even outside professional circles. For example a 1943 New York Times editorial states:
The time has come to discard [the Mercator] for something that represents the continents and directions less deceptively... Although its usage... has diminished... it is still highly popular as a wall map apparently in part because, as a rectangular map, it fills a rectangular wall space with more map, and clearly because its familiarity breeds more popularity.
A controversy in the 1980s over the Peters map motivated the American Cartographic Association (now Cartography and Geographic Information Society) to produce a series of booklets (including Which Map is Best) designed to educate the public about map projections and distortion in maps. In 1989 and 1990, after some internal debate, seven North American geographic organizations adopted a resolution recommending against using any rectangular projection (including Mercator and Gall–Peters) for reference maps of the world.
Construction of a map projection
The creation of a map projection involves two steps:
- Selection of a model for the shape of the Earth or planetary body (usually choosing between a sphere or ellipsoid). Because the Earth's actual shape is irregular, information is lost in this step.
- Transformation of geographic coordinates (longitude and latitude) to Cartesian (x,y) or polar plane coordinates. Cartesian coordinates normally have a simple relation to eastings and northings defined on a grid superimposed on the projection.
Some of the simplest map projections are literally projections, as obtained by placing a light source at some definite point relative to the globe and projecting its features onto a specified surface. This is not the case for most projections, which are defined only in terms of mathematical formulae that have no direct geometric interpretation.
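As a minimal, hedged illustration of the second step above (transforming longitude and latitude into plane coordinates), the R sketch below implements the simplest case mentioned later in this article, the equirectangular or "plate carrée" projection on a spherical model. The function name and the radius value (a conventional mean Earth radius) are choices made just for this sketch; any sphere radius could be substituted.

# Equirectangular ("plate carrée") projection on a sphere:
# x is proportional to longitude and y to latitude, both taken in radians.
plate_carree <- function(lon_deg, lat_deg, R = 6371000) {
  lambda <- lon_deg * pi / 180   # longitude in radians
  phi    <- lat_deg * pi / 180   # latitude in radians
  list(x = R * lambda, y = R * phi)
}

plate_carree(lon_deg = 60, lat_deg = 42)   # an arbitrary point at 60 E, 42 N

More sophisticated projections replace the two formulas inside the function, but the overall shape of the computation stays the same: choose a reference surface, then map each longitude/latitude pair to an x and a y.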
Choosing a projection surface A surface that can be unfolded or unrolled into a plane or sheet without stretching, tearing or shrinking is called a developable surface. The cylinder, cone and of course the plane are all developable surfaces. The sphere and ellipsoid do not have developable surfaces, so any projection of them onto a plane will have to distort the image. (To compare, one cannot flatten an orange peel without tearing and warping it.) One way of describing a projection is first to project from the Earth's surface to a developable surface such as a cylinder or cone, and then to unroll the surface into a plane. While the first step inevitably distorts some properties of the globe, the developable surface can then be unfolded without further distortion. Aspects of the projection Once a choice is made between projecting onto a cylinder, cone, or plane, the aspect of the shape must be specified. The aspect describes how the developable surface is placed relative to the globe: it may be normal (such that the surface's axis of symmetry coincides with the Earth's axis), transverse (at right angles to the Earth's axis) or oblique (any angle in between). The developable surface may also be either tangent or secant to the sphere or ellipsoid. Tangent means the surface touches but does not slice through the globe; secant means the surface does slice through the globe. Moving the developable surface away from contact with the globe never preserves or optimizes metric properties, so that possibility is not discussed further here. A globe is the only way to represent the earth with constant scale throughout the entire map in all directions. A map cannot achieve that property for any area, no matter how small. It can, however, achieve constant scale along specific lines. Some possible properties are: - The scale depends on location, but not on direction. This is equivalent to preservation of angles, the defining characteristic of a conformal map. - Scale is constant along any parallel in the direction of the parallel. This applies for any cylindrical or pseudocylindrical projection in normal aspect. - Combination of the above: the scale depends on latitude only, not on longitude or direction. This applies for the Mercator projection in normal aspect. - Scale is constant along all straight lines radiating from a particular geographic location. This is the defining characteristic of an equidistant projection such as the Azimuthal equidistant projection. There are also projections (Maurer, Close) where true distances from two points are preserved. Choosing a model for the shape of the Earth Projection construction is also affected by how the shape of the Earth is approximated. In the following section on projection categories, the earth is taken as a sphere in order to simplify the discussion. However, the Earth's actual shape is closer to an oblate ellipsoid. Whether spherical or ellipsoidal, the principles discussed hold without loss of generality. Selecting a model for a shape of the Earth involves choosing between the advantages and disadvantages of a sphere versus an ellipsoid. Spherical models are useful for small-scale maps such as world atlases and globes, since the error at that scale is not usually noticeable or important enough to justify using the more complicated ellipsoid. The ellipsoidal model is commonly used to construct topographic maps and for other large- and medium-scale maps that need to accurately depict the land surface. 
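To give a sense of how much the spherical and ellipsoidal models discussed above actually differ, the short R sketch below compares a mean-radius sphere with the WGS84 ellipsoid. The WGS84 semi-major axis and flattening are the published defining constants; treating 6,371 km as "the" spherical radius is a common convention rather than a unique choice.

R_sphere <- 6371000                # a conventional mean Earth radius in metres (spherical model)
a <- 6378137                       # WGS84 semi-major (equatorial) axis in metres
f <- 1 / 298.257223563             # WGS84 flattening
b <- a * (1 - f)                   # derived semi-minor (polar) axis
c(equatorial_m = a, polar_m = round(b), difference_km = round((a - b) / 1000, 1))
# The equatorial and polar radii differ by roughly 21 km, which is negligible on a world
# atlas but matters for the large- and medium-scale topographic mapping described above.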
A third model of the shape of the Earth is the geoid, a complex and more accurate representation of the global mean sea level surface that is obtained through a combination of terrestrial and satellite gravity measurements. This model is not used for mapping because of its complexity, but rather is used for control purposes in the construction of geographic datums. (In geodesy, plural of "datum" is "datums" rather than "data".) A geoid is used to construct a datum by adding irregularities to the ellipsoid in order to better match the Earth's actual shape. It takes into account the large-scale features in the Earth's gravity field associated with mantle convection patterns, and the gravity signatures of very large geomorphic features such as mountain ranges, plateaus and plains. Historically, datums have been based on ellipsoids that best represent the geoid within the region that the datum is intended to map. Controls (modifications) are added to the ellipsoid in order to construct the datum, which is specialized for a specific geographic region (such as the North American Datum). A few modern datums, such as WGS84 which is used in the Global Positioning System, are optimized to represent the entire earth as well as possible with a single ellipsoid, at the expense of accuracy in smaller regions. A fundamental projection classification is based on the type of projection surface onto which the globe is conceptually projected. The projections are described in terms of placing a gigantic surface in contact with the earth, followed by an implied scaling operation. These surfaces are cylindrical (e.g. Mercator), conic (e.g., Albers), or azimuthal or plane (e.g. stereographic). Many mathematical projections, however, do not neatly fit into any of these three conceptual projection methods. Hence other peer categories have been described in the literature, such as pseudoconic, pseudocylindrical, pseudoazimuthal, retroazimuthal, and polyconic. Another way to classify projections is according to properties of the model they preserve. Some of the more common categories are: - Preserving direction (azimuthal), a trait possible only from one or two points to every other point - Preserving shape locally (conformal or orthomorphic) - Preserving area (equal-area or equiareal or equivalent or authalic) - Preserving distance (equidistant), a trait possible only between one or two points and every other point - Preserving shortest route, a trait preserved only by the gnomonic projection Because the sphere is not a developable surface, it is impossible to construct a map projection that is both equal-area and conformal. Projections by surface The three developable surfaces (plane, cylinder, cone) provide useful models for understanding, describing, and developing map projections. However, these models are limited in two fundamental ways. For one thing, most world projections in actual use do not fall into any of those categories. For another thing, even most projections that do fall into those categories are not naturally attainable through physical projection. As L.P. Lee notes, No reference has been made in the above definitions to cylinders, cones or planes. The projections are termed cylindric or conic because they can be regarded as developed on a cylinder or a cone, as the case may be, but it is as well to dispense with picturing cylinders and cones, since they have given rise to much misunderstanding. 
Particularly is this so with regard to the conic projections with two standard parallels: they may be regarded as developed on cones, but they are cones which bear no simple relationship to the sphere. In reality, cylinders and cones provide us with convenient descriptive terms, but little else. Lee's objection refers to the way the terms cylindrical, conic, and planar (azimuthal) have been abstracted in the field of map projections. If maps were projected as in light shining through a globe onto a developable surface, then the spacing of parallels would follow a very limited set of possibilities. Such a cylindrical projection (for example) is one which: - Is rectangular; - Has straight vertical meridians, spaced evenly; - Has straight parallels symmetrically placed about the equator; - Has parallels constrained to where they fall when light shines through the globe onto the cylinder, with the light source someplace along the line formed by the intersection of the prime meridian with the equator, and the center of the sphere. (If you rotate the globe before projecting then the parallels and meridians will not necessarily still be straight lines. Rotations are normally ignored for the purpose of classification.) Where the light source emanates along the line described in this last constraint is what yields the differences between the various "natural" cylindrical projections. But the term cylindrical as used in the field of map projections relaxes the last constraint entirely. Instead the parallels can be placed according to any algorithm the designer has decided suits the needs of the map. The famous Mercator projection is one in which the placement of parallels does not arise by "projection"; instead parallels are placed how they need to be in order to satisfy the property that a course of constant bearing is always plotted as a straight line. The term "normal cylindrical projection" is used to refer to any projection in which meridians are mapped to equally spaced vertical lines and circles of latitude (parallels) are mapped to horizontal lines. The mapping of meridians to vertical lines can be visualized by imagining a cylinder whose axis coincides with the Earth's axis of rotation. This cylinder is wrapped around the Earth, projected onto, and then unrolled. By the geometry of their construction, cylindrical projections stretch distances east-west. The amount of stretch is the same at any chosen latitude on all cylindrical projections, and is given by the secant of the latitude as a multiple of the equator's scale. The various cylindrical projections are distinguished from each other solely by their north-south stretching (where latitude is given by φ): - North-south stretching equals east-west stretching (secant φ): The east-west scale matches the north-south scale: conformal cylindrical or Mercator; this distorts areas excessively in high latitudes (see also transverse Mercator). - North-south stretching grows with latitude faster than east-west stretching (secant² φ): The cylindric perspective (= central cylindrical) projection; unsuitable because distortion is even worse than in the Mercator projection. - North-south stretching grows with latitude, but less quickly than the east-west stretching: such as the Miller cylindrical projection (secant[4φ/5]). - North-south distances neither stretched nor compressed (1): equirectangular projection or "plate carrée". - North-south compression precisely the reciprocal of east-west stretching (cosine φ): equal-area cylindrical. 
This projection has many named specializations differing only in the scaling constant. Some of those specializations are the Gall–Peters or Gall orthographic, Behrmann, and Lambert cylindrical equal-area. This kind of projection divides north-south distances by a factor equal to the secant of the latitude, preserving area at the expense of shapes.
In the first case (Mercator), the east-west scale always equals the north-south scale. In the second case (central cylindrical), the north-south scale exceeds the east-west scale everywhere away from the equator. Each remaining case has a pair of secant lines—a pair of identical latitudes of opposite sign (or else the equator) at which the east-west scale matches the north-south scale. Normal cylindrical projections map the whole Earth as a finite rectangle, except in the first two cases, where the rectangle stretches infinitely tall while retaining constant width.
Pseudocylindrical projections represent the central meridian as a straight line segment. Other meridians are longer than the central meridian and bow outward away from the central meridian. Pseudocylindrical projections map parallels as straight lines. Along parallels, each point from the surface is mapped at a distance from the central meridian that is proportional to its difference in longitude from the central meridian. On a pseudocylindrical map, any point further from the equator than some other point has a higher latitude than the other point, preserving north-south relationships. This trait is useful when illustrating phenomena that depend on latitude, such as climate. Examples of pseudocylindrical projections include:
- Sinusoidal, which was the first pseudocylindrical projection developed. Vertical scale and horizontal scale are the same throughout, resulting in an equal-area map. On the map, as in reality, the length of each parallel is proportional to the cosine of the latitude. Thus the shape of the map for the whole earth is the region between two symmetric rotated cosine curves. The true distance between two points on the same meridian equals the vertical distance on the map between their two parallels, which is smaller than the distance on the map between the two points themselves. The distance between two points on the same parallel is true. The area of any region is true.
- Collignon projection, which in its most common forms represents each meridian as 2 straight line segments, one from each pole to the equator.
The term "conic projection" is used to refer to any projection in which meridians are mapped to equally spaced lines radiating out from the apex and circles of latitude (parallels) are mapped to circular arcs centered on the apex.
When making a conic map, the map maker arbitrarily picks two standard parallels. Those standard parallels may be visualized as secant lines where the cone intersects the globe—or, if the map maker chooses the same parallel twice, as the tangent line where the cone is tangent to the globe. The resulting conic map has low distortion in scale, shape, and area near those standard parallels. Distances along the parallels to the north of both standard parallels or to the south of both standard parallels are necessarily stretched.
The most popular conic maps either - Albers conic - compress north-south distance between each parallel to compensate for the east-west stretching, giving an equal-area map, or - Equidistant conic - keep constant distance scale along the entire meridian, typically the same or near the scale along the standard parallels, or - Lambert conformal conic - stretch the north-south distance between each parallel to equal the east-west stretching, giving a conformal map. - Werner cordiform, upon which distances are correct from one pole, as well as along all parallels. - Continuous American polyconic Azimuthal (projections onto a plane) Azimuthal projections have the property that directions from a central point are preserved and therefore great circles through the central point are represented by straight lines on the map. Usually these projections also have radial symmetry in the scales and hence in the distortions: map distances from the central point are computed by a function r(d) of the true distance d, independent of the angle; correspondingly, circles with the central point as center are mapped into circles which have as center the central point on the map. The radial scale is r'(d) and the transverse scale r(d)/(R sin(d/R)) where R is the radius of the Earth. Some azimuthal projections are true perspective projections; that is, they can be constructed mechanically, projecting the surface of the Earth by extending lines from a point of perspective (along an infinite line through the tangent point and the tangent point's antipode) onto the plane: - The gnomonic projection displays great circles as straight lines. Can be constructed by using a point of perspective at the center of the Earth. r(d) = c tan(d/R); a hemisphere already requires an infinite map, - The General Perspective projection can be constructed by using a point of perspective outside the earth. Photographs of Earth (such as those from the International Space Station) give this perspective. - The orthographic projection maps each point on the earth to the closest point on the plane. Can be constructed from a point of perspective an infinite distance from the tangent point; r(d) = c sin(d/R). Can display up to a hemisphere on a finite circle. Photographs of Earth from far enough away, such as the Moon, give this perspective. - The azimuthal conformal projection, also known as the stereographic projection, can be constructed by using the tangent point's antipode as the point of perspective. r(d) = c tan(d/2R); the scale is c/(2R cos²(d/2R)). Can display nearly the entire sphere's surface on a finite circle. The sphere's full surface requires an infinite map. Other azimuthal projections are not true perspective projections: - Azimuthal equidistant: r(d) = cd; it is used by amateur radio operators to know the direction to point their antennas toward a point and see the distance to it. Distance from the tangent point on the map is proportional to surface distance on the earth (; for the case where the tangent point is the North Pole, see the flag of the United Nations) - Lambert azimuthal equal-area. Distance from the tangent point on the map is proportional to straight-line distance through the earth: r(d) = c sin(d/2R) - Logarithmic azimuthal is constructed so that each point's distance from the center of the map is the logarithm of its distance from the tangent point on the Earth. 
r(d) = c ln(d/d0); locations closer than the constant distance d0 are not shown (figure 6-5).
Projections by preservation of a metric property
Conformal, or orthomorphic, map projections preserve angles locally, implying that they map infinitesimal circles of constant size anywhere on the Earth to infinitesimal circles of varying sizes on the map. In contrast, mappings that are not conformal distort most such small circles into ellipses of distortion. An important consequence of conformality is that relative angles at each point of the map are correct, and the local scale (although varying throughout the map) in every direction around any one point is constant. These are some conformal projections:
- Mercator: Rhumb lines are represented by straight segments
- Transverse Mercator
- Stereographic: Any circle of a sphere, great and small, maps to a circle or straight line.
- Lambert conformal conic
- Peirce quincuncial projection
- Adams hemisphere-in-a-square projection
- Guyou hemisphere-in-a-square projection
These are some projections that preserve area:
- Gall orthographic (also known as Gall–Peters, or Peters, projection)
- Albers conic
- Lambert azimuthal equal-area
- Lambert cylindrical equal-area
- Goode's homolosine
- Tobler hyperelliptical
- Snyder's equal-area polyhedral projection, used for geodesic grids.
These are some projections that preserve distance from some standard point or line:
- Equirectangular—distances along meridians are conserved
- Plate carrée—an Equirectangular projection centered at the equator
- Azimuthal equidistant—distances along great circles radiating from centre are conserved
- Equidistant conic
- Sinusoidal—distances along parallels are conserved
- Werner cordiform: distances from the North Pole are correct, as are the curved distances along parallels
- Two-point equidistant: two "control points" are arbitrarily chosen by the map maker. Distance from any point on the map to each control point is proportional to surface distance on the earth.
Retroazimuthal projections are those in which the direction to a fixed location B (the bearing at the starting location A of the shortest route) corresponds to the direction on the map from A to B:
- Littrow—the only conformal retroazimuthal projection
- Hammer retroazimuthal—also preserves distance from the central point
- Craig retroazimuthal aka Mecca or Qibla—also has vertical meridians
Compromise projections
Compromise projections give up the idea of perfectly preserving metric properties, seeking instead to strike a balance between distortions, or to simply make things "look right". Most of these types of projections distort shape in the polar regions more than at the equator. These are some compromise projections:
- van der Grinten
- Miller cylindrical
- Winkel Tripel
- Buckminster Fuller's Dymaxion
- B.J.S. Cahill's Butterfly Map
- Kavrayskiy VII
- Wagner VI projection
- Chamberlin trimetric
- Oronce Finé's cordiform
References
- Snyder, J.P. (1989). Album of Map Projections, United States Geological Survey Professional Paper. United States Government Printing Office. 1453.
- Nirtsov, Maxim V. (2007). "The problems of mapping irregularly-shaped celestial bodies". International Cartographic Association.
- Choosing a World Map. Falls Church, Virginia: American Congress on Surveying and Mapping. 1988. p. 1. ISBN 0-9613459-2-6.
- Slocum, Terry A.; Robert B. McMaster, Fritz C. Kessler, Hugh H. Howard (2005). Thematic Cartography and Geographic Visualization (2nd ed.).
Upper Saddle River, NJ: Pearson Prentice Hall. p. 166. ISBN 0-13-035123-7.
- Bauer, H.A. (1942). "Globes, Maps, and Skyways (Air Education Series)". New York. p. 28.
- Miller, Osborn Maitland (1942). "Notes on Cylindrical World Map Projections". Geographical Review 43 (3): 405–409.
- Raisz, Erwin Josephus (1938). General Cartography. New York: McGraw–Hill. 2d ed., 1948. p. 87.
- Robinson, Arthur Howard (1960). Elements of Cartography, second edition. New York: John Wiley and Sons. p. 82.
- Snyder, John P. (1993). Flattening the Earth: Two Thousand Years of Map Projections. p. 157. Chicago and London: The University of Chicago Press. ISBN 0-226-76746-9. (Summary of the Peters controversy.)
- American Cartographic Association's Committee on Map Projections, 1986. Which Map is Best. p. 12. Falls Church: American Congress on Surveying and Mapping.
- American Cartographer. 1989. 16(3): 222–223.
- Snyder, John P. (1993). Flattening the earth: two thousand years of map projections. University of Chicago Press. ISBN 0-226-76746-9.
- Snyder, John P. (1997). Flattening the earth: two thousand years of map projections. University of Chicago Press. ISBN 978-0-226-76747-5.
- Lee, L.P. (1944). "The nomenclature and classification of map projections". Empire Survey Review VII (51): 190–200. p. 193.
- Weisstein, Eric W., "Sinusoidal Projection", MathWorld.
- Carlos A. Furuti. "Conic Projections".
- Weisstein, Eric W., "Gnomonic Projection", MathWorld.
- "The Gnomonic Projection". Retrieved November 18, 2005.
- Weisstein, Eric W., "Orthographic Projection", MathWorld.
- Weisstein, Eric W., "Stereographic Projection", MathWorld.
- Weisstein, Eric W., "Azimuthal Equidistant Projection", MathWorld.
- Weisstein, Eric W., "Lambert Azimuthal Equal-Area Projection", MathWorld.
- "http://www.gis.psu.edu/projection/chap6figs.html". Retrieved November 18, 2005.
- Fran Evanisko, American River College, lectures for Geography 20: "Cartographic Design for GIS", Fall 2002.

External links
- Map Projections—PDF versions of numerous projections, created and released into the Public Domain by Paul B. Anderson ... member of the International Cartographic Association's Commission on Map Projections
- Wikimedia Commons has media related to map projections.
- A Cornucopia of Map Projections, a visualization of distortion on a vast array of map projections in a single image.
- G.Projector, free software that can render many projections (NASA GISS).
- Color images of map projections and distortion (Mapthematics.com).
- Geometric aspects of mapping: map projection (KartoWeb.itc.nl).
- Java world map projections, Henry Bottomley (SE16.info).
- Map projections http://www.3dsoftware.com/Cartography/USGS/MapProjections/, archived by the Wayback Machine (3DSoftware).
- Map projections, John Savard.
- Map Projections (MathWorld).
- Map Projections: an interactive Java applet to study deformations (area, distance and angle) of map projections (UFF.br).
- Map Projections: How Projections Work (Progonos.com).
- Map Projections Poster (U.S. Geographical Survey).
- MapRef: The Internet Collection of Map Projections and Reference Systems in Europe.
- PROJ.4 - Cartographic Projections Library.
- Projection Reference: table of examples and properties of all common projections (RadicalCartography.net).
- PDF (1.70 MB), Melita Kennedy (ESRI).
- World Map Projections, Stephen Wolfram based on work by Yu-Sung Chang (Wolfram Demonstrations Project).
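As a rough numerical illustration of the radial functions r(d) quoted above for the azimuthal projections, here is a minimal Python sketch. The scale constant c and the spherical Earth radius R are illustrative assumptions of mine (taking c = R gives the usual tangent-plane forms), not values from the article, and for the azimuthal equidistant case c is taken as 1 so that map distance equals true distance.

import math

R = 6371.0  # mean Earth radius in km (assumed spherical)
c = R       # scale constant; c = R gives the common tangent-plane forms

def gnomonic(d):               # r(d) = c tan(d/R); blows up as d approaches (pi/2)*R
    return c * math.tan(d / R)

def stereographic(d):          # r(d) = c tan(d/2R)
    return c * math.tan(d / (2 * R))

def orthographic(d):           # r(d) = c sin(d/R)
    return c * math.sin(d / R)

def azimuthal_equidistant(d):  # r(d) = c*d; with c = 1, map distance equals true distance
    return 1.0 * d

def lambert_equal_area(d):     # r(d) = c sin(d/2R)
    return c * math.sin(d / (2 * R))

for d in (1000.0, 5000.0, 9000.0):  # true surface distances from the tangent point, in km
    print(f"d = {d:7.0f} km:",
          f"gnomonic {gnomonic(d):9.1f}",
          f"stereo {stereographic(d):8.1f}",
          f"ortho {orthographic(d):7.1f}",
          f"equidist {azimuthal_equidistant(d):8.1f}",
          f"equal-area {lambert_equal_area(d):8.1f}")

Running this shows the qualitative behavior described in the article: the gnomonic radius grows very quickly with distance, the orthographic radius saturates at a hemisphere, and the equidistant radius grows linearly.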
http://en.wikipedia.org/wiki/Map_projection
Some Common Alternative Conceptions (Misconceptions) Earth Systems, Cosmology and Astronomy The correct conception of seasonal change is that it is caused by the tilting of the earth relative to the sun’s rays. As the Earth goes around its orbit, the Northern hemisphere is at various times oriented more toward or more away from the Sun, and likewise for the Southern hemisphere. Seasonal change is explained by the changing angle of the Earth’s rotation axis toward the Earth’s orbit, which causes the alteration in light angle toward a concrete place on the Earth. A major misconception about seasonal change, held by school students and adults (university students — and teacher trainees and primary teachers — Atwood & Atwood, 1996; Kikas, 2004; Ojala, 1997) is known as the “distance theory.” In this theory, seasons on the Earth are caused by varying distances of the Earth from the Sun on its elliptical orbit. Temperature varies in winter and summer because the distance between the Sun and the Earth is different during these two seasons. One way to see that this reasoning is erroneous is to note that the seasons are out of phase in the Northern and Southern hemispheres: when it is Summer in the North it is Winter in the South. (see Atwood & Atwood, 1996; Baxter, 1995; Kikas, 1998; 2003, 2004; Ojala, 1997). Correct scientific theory on the earth’s shape posits a spherical shape of the earth. Knowledge about the Earth Misconceptions: Elementary school children (1st through 5th grades) commonly hold misconceptions about the earth’s shape. Some children believe that the earth is shaped like a flat rectangle or a disc that is supported by the ground and covered by the sky and solar objects above its “top.” Other children think of the earth as a hollow sphere, with people living on flat ground deep inside it, or as a flattened sphere with people living on its flat “top” and “bottom.” Finally, some children form a belief in a dual earth, according to which there are two earths: a flat one on which people live, and a spherical one that is a planet up in the sky. Due to these misconceptions, elementary school children experience difficulty learning the correct scientific understanding of the spherical earth taught in school. It appears that children start with an initial concept of the earth as a physical object that has all the characteristics of physical objects in general (i.e., it is solid, stable, stationary and needing support), in which space is organized in terms of the direction of up and down and in which unsupported objects fall “down.” When students are exposed to the information that the earth is a sphere, they find it difficult to understand because it violates certain of the above-mentioned beliefs about physical objects. (See Vosniadou, 1994; Vosniadou & Brewer, 1992; Vosniadou et al. 2001.) The correct explanation for the day/night cycle is the fact that the earth spins. Elementary school children (1st through 5th grades) show some common misconceptions about the day/night cycle. Misconception #1: The earliest kind of misunderstanding (initial model) is consistent with observations of everyday experience. Clouds cover the Sun; day is replaced by night; the Sun sets behind the hills. 
Misconception #2: Somewhat older children have “synthetic” models that represent an integration between initial (everyday) models and culturally accepted views (e.g., the sun and moon revolve around the stationary earth every 24 hours; the earth rotates in an up/down direction and the sun and moon are fixed on opposite sides; the Earth goes around the sun; the Moon blocks the sun; the Sun moves in space; the Earth rotates and revolves). (See Kikas, 1998; Vosniadou & Brewer, 1994) The correct understanding of plants is that plants are living things. Misconception: Elementary school children think of plants as nonliving things (Hatano et al., 1997). Path of blood flow in circulation The correct conception is that lungs are involved and are the site of oxygen-carbon-dioxide exchange. Also, there is a double pattern of blood flow dubbed the “double loop” or “double path” model. This model includes four separate chambers in the heart as well as a separate loop to and from the lungs. Blood from the right ventricle is pumped into the lungs to be oxygenated, whereas blood from the left ventricle is pumped to the rest of the body to deliver oxygen. Hence, one path transports de-oxygenated blood to receive oxygen, while the other path transports oxygenated blood to deliver oxygen. Misconceptions: Yip (1998) evaluated science teacher knowledge of the circulatory system. Teachers were asked to underline incorrect statements about blood circulation and provide justification for their choices. Most teachers were unable to relate blood flow, blood pressure, and blood vessel diameter. More experienced teachers often had the same misconceptions as less experienced teachers. Misconception #1: The most common misconception is the “single loop” model, wherein the arteries carry blood from the heart to the body (where oxygen is deposited and waste collected) and the veins carry blood from the body to the heart (where it is cleaned and re-oxygenated) (Chi, 2005). This conception differs from the correct conception in three ways: It does not assume that lungs are involved, but assumes that lungs are another part of the body to which blood has to travel. It does not assume that the site of oxygen-carbon-dioxide exchange is in the lungs; instead, it assumes such exchange happens in the heart It does not assume there is a double loop (double paths), pulmonary and systemic, but instead assumes that there is a single path of blood flow and the role of the circulatory system is a systemic one only. “Single loop” misconceptions contain five constituent propositions: Blood flows from the heart to the body in arteries. Blood flows from the body to the heart in veins. The body uses the “clean” blood in some way, rendering it unclean. Blood is “cleaned” or “replenished with oxygen” in the heart. Circulation is a cycle. Misconception #2: There is a “heart-to-toe” path in answer to the question of “What path does blood take when it leaves the heart?” (8th and 10th graders) (Arnaudin & Mintzes, 1985; Chi 2005) Categories of Misconceptions (Erroneous Ideas) (See Pelaez, Boyd, Rojas, & Hoover, 2005) The groups of blood circulation errors detected among prospective elementary teachers fell into five categories: Blood pathway. These are common conceptual errors about the pathway a drop of blood takes as it leaves the heart and travels through the body and lungs. 
A typical correct answer explains dual circulation with blood from the left side of the heart going to a point in the body and returning to the right side of the heart, where it is pumped to the lungs and back to the left side of the heart. Blood vessels. A correct response has blood traveling in veins to the heart and arteries carrying blood away from the heart, and the response recognizes that arteries feed and veins drain each capillary bed in an organ. Gas exchange. A correct response indicates that a concentration gradient between two compartments drives the net transport of gases across cell membranes. Gas molecule transport and utilization. A correct response explains that oxygen is transported by blood to the cells of the body and carbon dioxide is transported from the cells where it is produced and eventually back to the lungs. Lung function. A correct response explains that lungs get oxygen from the air and eliminate carbon dioxide from the body. Force and Motion of Objects The correct conception of force, which is based on Newtonian physics (Newtonian theory of mechanics), describes force as a process used to explain changes in the kinetic (caused by motion) state of physical objects. Motion is the natural state that does not need to be explained. What needs to be explained are changes in the kinetic state. Force is a feature of the interaction between two objects. It comes in interactive action-reaction pairs (e.g., the force exerted by a table on a book when the book is resting on the table) that are needed to explain, not an object’s motion, but its change in motion (acceleration). Force is an influence that may cause a body to accelerate. It may be experienced as a lift, push or pull upon an object resulting from the object's interaction with another object. Hence, static objects, such as the book on the table, can exert force. Whenever there is an interaction between two objects, there is a force upon each of the objects. When the interaction ceases, the two objects no longer experience the force. Forces only exist as a result of an interaction. Two interacting bodies exert equal and opposite forces on each other. Force has a magnitude and a direction. (See Committee on Science Learning, Kindergarten through Eighth Grade, 2007) Misconception #1: Motion/velocity implies force. One of the most deeply held misconceptions (or naive theories) about force is known as the pre-Newtonian “impetus theory” or the “acquired force” theory and it is typical among elementary, middle and high school students (see Mayer, 2003; McCloskey, 1983; Vosniadou et al., 2001) and among adults (university students — Kikas, 2003; and teacher trainee and primary teachers — Kikas, 2004). It is erroneously believed that objects are kept moving by internal forces (as opposed to external forces). Based on this reasoning, force is an acquired property of objects that move. This reasoning is central to explaining the motion of inanimate objects. They think that force is an acquired property of inanimate objects that move, since rest is considered to be the natural state of objects. Hence, the motion of objects requires explanation, usually in terms of a causal agent, which is the force of another object. Hence force is the agent that causes an inanimate object to move. The object stops when this acquired force dissipates in the environment. Hence force can be possessed, transformed or dissipated. 
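To make the contrast concrete, here is a small Python sketch of my own (not taken from the cited studies). It compares Newtonian motion with no net force, where velocity stays constant, against a toy "impetus" model in which the acquired force dissipates and the object slows on its own; the decay rate is an arbitrary illustrative assumption.

# Horizontal motion after a push ends, with no external force, under two models:
# (1) Newtonian: velocity stays constant once the push ends.
# (2) Naive "impetus" model: the acquired impetus decays, so the object slows and stops.
DT = 0.1      # time step in seconds
DECAY = 0.5   # arbitrary impetus decay rate (1/s), purely illustrative

def newtonian(v0, t_end):
    x, v, t = 0.0, v0, 0.0
    while t < t_end:
        x += v * DT           # no force, so v never changes
        t += DT
    return x

def impetus(v0, t_end):
    x, v, t = 0.0, v0, 0.0
    while t < t_end:
        x += v * DT
        v -= DECAY * v * DT   # the "impetus" dissipates into the environment
        t += DT
    return x

print("Distance after 10 s, Newtonian:", round(newtonian(2.0, 10.0), 2), "m")
print("Distance after 10 s, impetus  :", round(impetus(2.0, 10.0), 2), "m")

The Newtonian object keeps moving indefinitely, while the impetus-model object covers a finite distance and effectively stops, which is the prediction pattern the misconception research describes.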
This “impetus theory” misconception is evident in the following problems taken from Mayer (2007) and McClosky, Caramaza and Green (1980): The drawing on the left — with the curved line — is the misconception response and reflects the impetus theory. This is the idea that when an object is set in motion it acquires a force or impetus (e.g., acquired when it went around through the tube and gained angular momentum) that keeps it moving (when it gets out of the tube). However, the object will lose momentum as the force disappears. The correct drawing on the right — with the straight path — reflects the Newtonian concept that an object in motion will continue until some external force acts upon it. Misconception #2: Static objects cannot exert forces (no motion implies no force). Many high school students hold a classic misconception in the area of physics, in particular, mechanics. They erroneously believe that “static objects are rigid barriers that cannot exert force.” The classic target problem explains the “at rest” condition of an object. Students are asked whether a table exerts an upward force on a book that is placed on the table. Students with this misconception will claim that the table does not push up on a book lying at rest on it. However, gravity and the table exert equal, but oppositely, directed forces on the book thus keeping the book in equilibrium and “at rest.” The table’s force comes from the microscopic compression or bending of the table. Misconception #3: Only active agents exert force. Students are less likely to recognize passive forces. They may think that forces are needed more to start a motion than to stop one. Hence, they may have difficulty recognizing friction as a force. On the correct understanding of gravity, falling objects, regardless of weight, fall at the same speed. Misconception: Heavier objects fall faster than lighter objects. Many students learning about Newtonian motion often persist in their belief that heavier objects fall faster than light objects (Champagne et al., 1985). There is one class of alternative theories (or misconceptions) that is very deeply entrenched. These relate to ontological beliefs (i.e., beliefs about the fundamental categories and properties of the world). (See Chi 2005; Chinn & Brewer, 1998; Keil, 1979). Some common mistaken ontological beliefs that have been found to resist change include: beliefs that objects like electrons and photons move along a single discrete path (Brewer & Chinn, 1991) belief that time flows at a constant rate regardless of relative motion (Brewer & Chinn, 1991) belief that concepts like heat, light, force, and current are a material substance (Chi, 1992) belief that force is something internal to a moving object (McCloskey, 1983; See section on physics misconceptions). Other Misconceptions in Science Belief that rivers only flow from north to south. Epistemological Misconceptions about the Domain of Science Itself (its objectives, methods, and purposes) Many middle school and high school students tend to see the purpose of science as manufacturing artifacts that are useful for humankind. Moreover, scientific explanations are viewed as being inductively derived from data and facts, since the hypothetical or conjectural nature of scientific theories is not well-understood. Also, such students tend not to differentiate between theories and evidence, and have trouble evaluating theories in light of evidence (See Mason, 2002 for review). 
A correct understanding of money embodies the value of coin currency as noncorrelated with its size. Misconception: At the PreK level, children hold a core misconception about money and the value of coins. Students think nickels are more valuable than dimes because nickels are bigger. Correct understanding of subtraction includes the notion that the columnar order (top to bottom) of the problem cannot be reversed or flipped (Brown & Burton, 1978; Siegler, 2003; Williams & Ryan, 2000). Misconception #1: Students (age 7) have a “smaller-from-larger” error (misconception) that subtraction entails subtracting the smaller digit in each column from the larger digit regardless of which is on top. Misconception #2: When subtracting from 0 (when the minuend includes a zero), there are two subtypes of misconceptions: Misconception a: Flipping the two numbers in the column with the 0. In problem “307-182,” 0 – 8 is treated as 8 – 0, exemplified by a student who wrote “8 ” as the answer. Misconception b: Lack of decrementing; or not decrementing the number to the left of the 0 (due to first bug above, wherein nothing was borrowed from this column.) In problem “307-182,” this means not reducing the 3 to 2. Correct understanding of multiplication includes the knowledge that multiplication does not always increase a number. Misconception: Students have a misconception that multiplication always increases a number. For example, take the number 8: 3 x 8 = 24 5 x 8 = 40 This impedes students’ learning of the multiplication of a (positive) number by a fraction less than one, such as ½ x 8 = 4. Misconception comes in the form of “division as sharing” (Nunes & Bryant, 1996), or the “primitive, partitive model of division” (Tirosh, 2000). In this model, an object or collection of objects is divided into a number of equal parts or sub collections (e.g., Five friends bought 15 lbs. of cookies and shared them equally. How many pounds of cookies did each person get?). The primitive partitive model places three constraints on the operation of division: The divisor (the number by which a dividend is divided) must be a whole number; The divisor must be less than the dividend; and The quotient (the result of the division problem) must be less than the dividend. Hence, children have difficulty with the following two problems because they violate the “dividend is always greater than the divisor constraint” (Tirosh, 2002): “A five-meter-long stick was divided into 15 equal sticks. What is the length of each stick?” A common incorrect response to this problem is 15 divided by 5 (instead of the correct 5 divided by 15). “Four friends bought ¼ kilogram of chocolate and shared it equally. How much chocolate did each person get?” A common incorrect response to this problem is 4 x ¼ or 4 divided by 4 (instead of the correct ¼ divided by 4). Similarly, children have difficulty with the following problem because the primitive, partitive model implies that “division always makes things smaller” (Tirosh, 2002). “Four kilograms of cheese were packed in packages of ¼ kilogram each. How many packages contained this amount of cheese?” Because of this belief they do not view division as a possible operation for solving this word problem. They incorrectly choose the expression “1/4 X 4” as the answer (See Fischbein, Deri, Nello & Marino, 1985). 
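The division facts that the partitive model gets wrong are easy to check directly; here is a minimal Python sketch that simply restates the three word problems above with exact fractions.

from fractions import Fraction

# 5-meter stick cut into 15 equal pieces: a divisor larger than the dividend is fine.
stick = Fraction(5, 1) / 15
print("Length of each stick:", stick, "m")      # 1/3 m, not 15/5 = 3

# 1/4 kg of chocolate shared by 4 friends: dividing a fraction by a whole number.
share = Fraction(1, 4) / 4
print("Chocolate per person:", share, "kg")     # 1/16 kg

# 4 kg of cheese packed into 1/4-kg packages: dividing by a number less than 1
# makes the result LARGER, contradicting "division always makes things smaller".
packages = Fraction(4, 1) / Fraction(1, 4)
print("Number of packages:", packages)          # 16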
This "primitive, partitive" model interferes with children's ability to divide fractions — because students believe you cannot divide a small number by a larger number, as it would be impossible to share less among more. Indeed, even teacher trainees can have this preconception of division "as sharing." Teachers were unable to provide contexts for the following problem (Goulding, Rowland, & Barber, 2002): 2 divided by ¼.

The correct conception of negative numbers is that these are numbers less than zero. They are usually written by indicating their opposite, which is a positive number, with a preceding minus sign (see Williams & Ryan, 2000). A "separation" misconception means treating the two parts of the number — the minus sign and the number — separately. On number lines, the scale may be marked -20, -30, 0, 10, 20... (because the ordering is read as 20 then 30, and the minus sign is attached afterwards), and later the sequence gets -4 inserted thus: -7, -4, 1, ... (because the sequence is read 1, 4, 7 and the minus sign is attached afterwards). Similarly, this misconception explains the erroneous answer -4 + 7 = -11.

The correct conception of a fraction is of the division of one cardinal number by another. Children start school with an understanding of counting — that numbers are what one gets when one counts collections of things (the counting principles). Students have moved towards using counting words and other symbols that are numerically meaningful. The numbering of fractions is not consistent with the counting principles, including the idea that numbers result when sets of things are counted and that addition involves putting two sets together. One cannot count things to generate a fraction. A fraction, as noted, is defined as the division of one cardinal number by another. Moreover, some counting principles do not apply to fractions. For example, one cannot use counting-based algorithms for ordering fractions — ¼ is not more than ½. In addition, the nonverbal and verbal counting principles do not map to the tripartite symbolic representation of fractions (two cardinal numbers separated by a line) (see misconception examples above and Hartnett & Gelman, 1998). Misconceptions reflect children's tendency to distort fractions in order to fit their counting-based number theory, instead of viewing a fraction as a new kind of number.

Misconception #1: Students increase the value of the denominator in order to increase the quantitative value of the fraction. This includes a natural-number ordering rule for fractions that is based on the cardinal value of the denominator (see Hartnett & Gelman, 1998). Example: Elementary and high school students think ¼ is larger than ½ because 4 is more than 2, and they seldom read ½ correctly as "one half." Rather, they use a variety of alternatives, including "one and two," "one and a half," "one plus two," "twelve," and "three." (See Gelman, Cohen, & Hartnett, 1989, cited in Hartnett & Gelman, 1998.)

Misconception #2: When adding fractions, the process is to add the two numerators to form the sum's numerator and then add the two denominators to form its denominator. Example: ½ + 1/3 = 2/5 (see Siegler, 2003).

The correct understanding of the decimal system is of a numeration system based on powers of 10. A number is written as a row of digits, with each position in the row corresponding to a certain power of 10. A decimal point in the row divides it into those powers of 10 equal to or greater than 0 and those less than 0, i.e., negative powers of 10.
Positions farther to the left of the decimal point correspond to increasing positive powers of 10 and those farther to the right to increasing negative powers, i.e., to division by higher positive powers of 10. A number written in the decimal system is called a decimal, although sometimes this term is used to refer only to a proper fraction written in this system and not to a mixed number. Decimals are added and subtracted in the same way as integers (whole numbers), except that when these operations are written in columnar form, the decimal points in the column entries and in the answer must all be placed one under another. In multiplying two decimals, the operation is the same as for integers except that the number of decimal places in the product (i.e., digits to the right of the decimal point) is equal to the sum of the decimal places in the factors (e.g., the factor 7.24 to two decimal places and the factor 6.3 to one decimal place have the product 45.612 to three decimal places). In division (e.g., 4.32|12.8), the decimal point in the divisor (4.32) is shifted to the extreme right (i.e., to 432.) and the decimal point in the dividend (12.8) is shifted the same number of places to the right (to 1280), with one or more zeros added before the decimal point to make this possible. The decimal point in the quotient is then placed above that in the dividend, i.e., 432|1280.0, and zeros are added to the right of the decimal point in the dividend as needed. The division then proceeds the same as for integers.

Misconception #1: Students often use a "separation strategy," whereby they separate the whole (integer) part and the decimal part as different entities and treat the two parts before and after the decimal point separately. This has been seen in pupils (Williams & Ryan, 2000), as well as in beginning preservice teachers (Ryan & McCrae, 2005). Example (division by 100): 300.62 divided by 100. Correct answer = 3.0062; misconception answer = 3.62. Example: When given 7.7, 7.8, 7.9, students continue the scale with 7.10, 7.11.

Misconception #2: This relates to the ordering of decimal fractions from largest to smallest (Resnick et al., 1989; Sackur-Grisvard & Leonard, 1985). This misconception is also seen in primary teacher trainees (Goulding et al., 2002). Here is an example of a mistaken ordering: 0.203; 2.35 × 10⁻²; two hundredths; 2.19 × 10⁻¹; one fifth. (These ordering rules are illustrated in a short code sketch at the end of this article.) A lack of connection exists in the knowledge base between different forms of numerical expressions, combined with difficulties with more than two decimal places.

Misconception a: The larger/longer number is the one with more digits to the right of the decimal point, i.e., 3.214 is greater than 3.8 (Resnick et al., 1989; Sackur-Grisvard & Leonard, 1985; Siegler, 2003). This is known as the "whole number rule" because children are using their knowledge of whole-number values in comparing decimal fractions (Resnick et al., 1989). Whole-number errors derive from students' applying rules for interpreting multidigit integers. Children using this rule appear to have little knowledge of decimal numbers. Their representation of the place value system does not contain the critical information of column values, column names and the role of zero as a placeholder (see Resnick et al., 1989).

Misconception b: The largest/longest decimal is the smallest (the one with the fewest digits to the right of the decimal point). Given the pair 1.35 and 1.2, 1.2 is viewed as greater.
2.43 judged larger than 2.897 (Mason & Ruddock, 1986; cited in Goulding et al., 2002, Resnick et al., 1989; Sackur-Grisvard & Leonard, 1985; Siegler, 2003; & Ryan & McCrae) This is known as the “fraction” rule because children appear to be relying on ordinary fraction notation and their knowledge of the relation between size of parts and number of parts (Resnick et al., 1989). Fraction errors derive from children’s attempts to interpret decimals as fractions. For instance, if they know that thousandths are smaller parts than hundredths, and that three-digit decimals are read as thousandths, whereas two-digit decimals are read as hundredths, they may infer that longer decimals, because they refer to smaller parts, must have lower values (Resnick et al., 1989). These children are not able to coordinate information about the size of parts with information about the number of parts; when attending to size of parts (specified by the number of columns) they ignored the number of parts (specified by the digits). Misconception c: Students make incorrect judgments about ordering numbers that include decimal points when one number has one or more zeros immediately to the right of the decimal point or has other digits to the right of the decimal point. Hence, in ordering the following three numbers (3.214, 3.09, 3.8), a student correctly chooses the number with the zero as the smallest, but then resorts to “the larger number is the one with more digits to the right” rule (i. e., 3.09, 3.8, 3.214) (Resnick et al., 1989; Sackur-Grisvard & Leonard, 1985). This is known as the “zero rule” because it appears to be generated by children who are aware of the place-holder function of zero, but do not have a fully developed place value structure. As a result, they apply their knowledge of zero being very small to a conclusion that the entire decimal must be small (See Resnick et al.,1989). Misconception #3: Multiplication of decimals. Example: 0.3 X 0.24 Correct Answer = 0.072 Misconception answer: Multiply 3 x 24 and adjust two decimal points. 0.72 (This is seen in the beginning instruction of pre-service teachers as well.) Misconception #4: Units, tenths and hundredths. Example: Write in decimal form: 912 + 4/100 Correct Answer = 912.04 Misconception answer = 912.004. 4/100 is ¼ or 100 divided by 4 gives the decimal or 1/25 is 0.25 = 912.25 Overgeneralization of Conceptions Developed for "Whole Numbers" (cited in Williams & Ryan, 2000) Misconception #1: Ignoring the minus or % sign. Errors such as: 4 + - 7 = -11; -10 + 15 = 25. Misconception #2: Thinking that zero is the lowest number. Misconception #1: Incorrect generalization or extension of correct rules. Siegler (2003) provides the following example: The distributive principle indicates that a x (b + c) = (a x b) + (a x c) Some students erroneously extend this principle on the basis of superficial similarities and produce: a + (b x c) = (a + b) x ( a + c) Misconception #2: Variable misconception. Correct understanding of variables means that a student knows that letters in equations represent, at once, a range of unspecified numbers/values. It is very common for middle school students to have misconceptions about core concepts in algebra, including concepts of a variable (Kuchemann 78; Knuth, Alibali, McNeil, Weinberg, & Stephens, 2005; MacGregor & Stacey, 1997; Rosnick, 1981). This misconception can begin in the early elementary school years and then persist through the high school years. 
There are several levels or kinds of variable misconceptions: Variable Misconception: Level 1 A letter is assigned one numerical value from the outset. Variable Misconception: Level 3 A letter is interpreted as a label for an object or as an object itself. At a university, there are six times as many students as professors. This fact is represented by the equation S = 6P. In this equation, what does the letter S stand for? a. number of students (Correct) c. students (Misconception) d. none of the above Misconception #3: Equality misconception. Correct understanding of equivalence (the equal sign) is the “relational” view of the equal sign. This means understanding that the equal sign is a symbol of equivalence (i.e., a symbol that denotes a relationship between two quantities). Students exhibit a variety of misconceptions about equality (Falkner, Levi, & Carpenter, 1999; Kieran, 1981,1992; Knuth et al., 2005; McNeil & Alibali, 2005; Steinberg, Sleeman, & Ktorza,1990; Williams & Ryan, 2000). The equality misconception is also evident in adults, like college students (McNeil & Alibali, 2005). Students do not understand the concept of “equivalent equations” and basic principles of transforming equations. Often, they do not know how to keep both sides of the equation equal. So, they do not add/subtract equally from both sides of the equal sign. In solving x + 3 = 7, a next step could be A. x + 3 – 3 = 7 – 3 (Correct) B. x + 3 + 7 = 0 C. = 7 – 3 (Misconception) D. .3x = 7 It is assumed that the answer (solution) is the number after the equal sign (i.e., answer on the right) The correct understanding of poems includes the notion that a poem need not rhyme. Misconceptions are that poems must rhyme. A correct understanding of language includes the knowledge that language can be used both literally and nonliterally. The misconception is that language is always used literally. Many elementary school children have difficulty understanding nonliteral or figurative uses of language, such as metaphor and verbal irony. In these nonliteral uses of language, the speaker’s intention is to use an utterance to express a meaning that is not the literal meaning of the utterance. In irony, speakers are expressing a meaning that is opposite to the literal meaning (e.g., while standing in the pouring rain, one says “What a lovely day.”). Metaphor is a figure of speech in which a term or phrase is applied to something to which it is not literally applicable in order to suggest a resemblance, as in “All the world’s a stage." (Shakespeare). Students have difficulty understanding nonliteral (figurative) uses of language because they have a misconception that language is used only literally. (See Winner, 1997.)
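The decimal-ordering misconceptions described earlier (the "whole number rule" and the "fraction rule") can be made concrete with a short, hedged Python sketch. The two buggy comparison functions below are my own paraphrase of the cited descriptions, shown next to the correct numerical comparison.

def digits_after_point(s):
    return len(s.split(".")[1]) if "." in s else 0

def whole_number_rule(a, b):
    # Misconception: the decimal with more digits after the point is "larger".
    return a if digits_after_point(a) > digits_after_point(b) else b

def fraction_rule(a, b):
    # Misconception: the decimal with fewer digits after the point is "larger".
    return a if digits_after_point(a) < digits_after_point(b) else b

def correct(a, b):
    return a if float(a) > float(b) else b

for pair in [("3.214", "3.8"), ("1.35", "1.2"), ("2.43", "2.897")]:
    print(pair,
          "whole-number rule:", whole_number_rule(*pair),
          "| fraction rule:", fraction_rule(*pair),
          "| correct:", correct(*pair))

The output reproduces the reported error patterns: the whole-number rule picks 3.214 over 3.8, while the fraction rule picks 1.2 over 1.35 and 2.43 over 2.897.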
http://www.apa.org/education/k12/alternative-conceptions.aspx
Science Fair Project Encyclopedia This article is about angles in geometry. For other articles, see Angle (disambiguation) An angle (from the Lat. angulus, a corner, a diminutive, of which the primitive form, angus, does not occur in Latin; cognate are the Lat. angere, to compress into a bend or to strangle, and the Gr. ἄγκοσ, a bend; both connected with the Aryan or Indo-European root ank-, to bend) is the figure formed by two rays sharing a common endpoint, called the vertex of the angle. Angles provide a means of expressing the difference in slope between two rays meeting at a vertex without the need to explicitly define the slopes of the two rays. Angles are studied in geometry and trigonometry. Euclid defines a plane angle as the inclination to each other, in a plane, of two lines which meet each other, and do not lie straight with respect to each other. According to Proclus an angle must be either a quality or a quantity, or a relationship. The first concept was used by Eudemus , who regarded an angle as a deviation from a straight line; the second by Carpus of Antioch , who regarded it as the interval or space between the intersecting lines; Euclid adopted the third concept, although his definitions of right, acute, and obtuse angles are certainly quantitative. Units of measure for angles In order to measure an angle, a circle centered at the vertex is drawn. Since the circumference of a circle is always directly proportional to the length of its radius, the measure of the angle is independent of the size of the circle. Note that angles are dimensionless, since they are defined as the ratio of lengths. - The radian measure of the angle is the length of the arc cut out by the angle, divided by the circle's radius. The SI system of units uses radians as the (derived) unit for angles. - The degree measure of the angle is the length of the arc, divided by the circumference of the circle, and multiplied by 360. The symbol for degrees is a small superscript circle, as in 360°. 2π radians is equal to 360° (a full circle), so one radian is about 57° and one degree is π/180 radians. - The grad, also called grade or gon, is an angular measure where the arc is divided by the circumference, and multiplied by 400. It is used mostly in triangulation. - The point is used in navigation, and is defined as 1/32 of a circle, or exactly 11.25°. - The full circle or full turns represents the number or fraction of complete full turns. For example, π/2 radians = 90° = 1/4 full circle Conventions on measurement A convention universally adopted in mathematical writing is that angles given a sign are positive angles if measured counterclockwise, and negative angles if measured clockwise, from a given line. If no line is specified, it can be assumed to be the x-axis in the Cartesian plane. In navigation and other areas this convention may not be followed. In mathematics radians are assumed unless specified otherwise because this removes the arbitrariness of the number 360 in the degree system and because the trigonometric functions can be developed into particularly simple Taylor series if their arguments are specified in radians. Types of angles An angle of π/2 radians or 90°, one-quarter of the full circle is called a right angle. Angles smaller than a right angle are called acute angles; angles larger than a right angle are called obtuse angles. Angles equal to two right angles are called straight angles. Angles larger than two right angles are called reflex angles. 
The difference between an acute angle and a right angle is termed the complement of the angle, and between an angle and two right angles the supplement of the angle. In Euclidean geometry, the inner angles of a triangle add up to π radians or 180°; the inner angles of a quadrilateral add up to 2π radians or 360°. In general, the inner angles of a simple polygon with n sides add up to (n − 2) × π radians or (n − 2) × 180°. If two straight lines intersect, four angles are formed. Each one has an equal measure to the angle across from it; these congruent angles are called vertical angles. If a straight line intersects two parallel lines, corresponding angles at the two points of intersection are equal; adjacent angles are supplementary, that is, they add to π radians or 180°.

Angles in different contexts

In Euclidean space, the angle θ between two vectors u and v satisfies u · v = |u| |v| cos θ. This allows one to define angles in any real inner product space, replacing the Euclidean dot product · by the Hilbert space inner product <·,·>. The angle between a line and a curve (mixed angle) or between two intersecting curves (curvilinear angle) is defined to be the angle between the tangents at the point of intersection. Various names (now rarely, if ever, used) have been given to particular cases: amphicyrtic (Gr. ἀμφί, on both sides, κυρτός, convex) or cissoidal (Gr. κισσός, ivy), biconvex; xystroidal or sistroidal (Gr. ξυστρίς, a tool for scraping), concavo-convex; amphicoelic (Gr. κοίλη, a hollow) or angulus lunularis, biconcave. A plane and an intersecting line also form an angle. This angle is equal to π/2 radians minus the angle between the intersecting line and the line that goes through the point of intersection and is perpendicular to the plane.

Angles in Riemannian geometry

In Riemannian geometry, the metric tensor is used to define the angle between two tangent vectors.

Angles in astronomy

In astronomy, one can measure the angular separation of two stars by imagining two lines through the Earth, each one intersecting one of the stars. Then the angle between those lines can be measured; this is the angular separation between the two stars. Astronomers also measure the apparent size of objects. For example, the full moon has an angular measurement of 0.5° when viewed from Earth. One could say, "The Moon subtends an angle of half a degree." The small-angle formula can be used to convert such an angular measurement into a distance/size ratio.

Angles in maritime navigation

The obsolete (but still commonly used) format of angle used to indicate longitude or latitude is hemisphere degree minute′ second″, where there are 60 minutes in a degree and 60 seconds in a minute, for instance N 51 23′26″ or E 090 58′57″.

- Central angle
- Complementary angles
- Inscribed angle
- Supplementary angles
- Solid angle, for a concept of angle in three dimensions
- Angle Bisectors
- Angle Bisectors and Perpendiculars in a Quadrilateral
- Angle Bisectors in a Quadrilateral
- Constructing a triangle from its angle bisectors
- Online Unit Converter - Conversion of many different units

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
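A small Python sketch of the unit relationships listed earlier in this article (radians, degrees, grads, points, and full turns); the conversion factors follow directly from one full circle = 2π radians = 360° = 400 grads = 32 points.

import math

FULL_TURN_RAD = 2 * math.pi

def degrees(rad): return rad * 360 / FULL_TURN_RAD
def grads(rad):   return rad * 400 / FULL_TURN_RAD
def points(rad):  return rad * 32 / FULL_TURN_RAD
def turns(rad):   return rad / FULL_TURN_RAD

right_angle = math.pi / 2
print("Right angle:",
      round(degrees(right_angle), 2), "deg,",
      round(grads(right_angle), 2), "grad,",
      round(points(right_angle), 2), "points,",
      turns(right_angle), "of a full turn")
print("One radian is about", round(degrees(1.0), 1), "degrees")
print("One degree is about", round(math.radians(1.0), 5), "radians")

The printed values match the figures in the text: a right angle is 90°, 100 grads, 8 points, and a quarter turn, and one radian is roughly 57°.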
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Angle
In 1929, Edwin Hubble discovered that the universe was expanding, and the velocity of expansion was a function of the distance from the Earth. For example, galaxies at a “proper distance” D from the Earth were moving away from the Earth at a velocity v, according to the following equation: v = H0D, where H0 is the constant of proportionality (the Hubble constant). In this context, the phrase “proper distance” means a distance (D) measured at a specific time. Obviously, since the galaxies are moving away from the Earth, the distance D will change (i.e. increase) with time. Until 1998, most physicists and cosmologists believed that the expansion would eventually be slowed by gravity and be reversed (i.e. all matter in the universe would eventually be pulled by gravity to a common point resulting in the “Big Crunch”). In 1998, three physicists (Saul Perlmutter, Brian P. Schmidt, and Adam G. Riess) decided to measure the expansion and expected to confirm that it was slowing down. To their surprise, and the scientific community’s surprise, they discovered that the universe’s expansion was accelerating. In 2011, they received the Nobel Prize in Physics for their discovery. The accelerated expansion of the universe is one of the great mysteries in science. Since the vast majority of scientists believe in the principle of cause and effect, the scientific community postulated that something was causing the accelerated expansion. They named the cause “dark energy,” which they believed was some kind of vacuum force. Today we know that extremely distant galaxies are actually moving away from the Earth with a velocity that exceeds the speed of light. This serves to deepen the mystery. Let us turn our attention to what is causing the accelerated expansion of the universe. First, let us understand that the extremely distant galaxies themselves are not moving away from the Earth faster than the speed of light. A mass, including a galaxy, cannot obtain a velocity greater than the speed of light, according to Einstein’s special theory of relativity. Any theory that attempts to explain the faster than light velocity of extremely distant galaxies via any type of force acting on the galaxies would contradict Einstein’s special theory of relativity. Therefore, we must conclude the galaxies themselves are not moving faster than the speed of light. However, no law of physics prohibits the expansion of space faster than the speed of light. With this understanding, it is reasonable to conclude the space between extremely distant galaxies is expanding faster than the speed of light, which accounts for our observation that the galaxies are moving away from Earth at a velocity faster than the speed of light. What is causing the space between extremely distant galaxies to expand faster than the speed of light? To address this question, let us discuss what we know about space and, more specifically, about vacuums. In my book, Unraveling the Universe’s Mysteries, I explain that vacuums are actually a reservoir for virtual particles. This is not a new theory. Paul Dirac, the famous British physicist and Nobel Laureate, asserted in 1930 that vacuums contain electrons and positrons (i.e. a positron is the antimatter counterpart of an electron). This is termed the Dirac sea. 
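Returning to the Hubble relation v = H0·D quoted at the start of this section, here is a minimal Python sketch. The value of H0 (about 70 km/s per megaparsec) is an assumed round number for illustration, not a figure taken from the text; the last line finds the distance at which the recession velocity formally reaches the speed of light.

H0 = 70.0          # Hubble constant in km/s per Mpc (assumed round value)
C = 299_792.458    # speed of light in km/s

def recession_velocity(distance_mpc):
    # Hubble's law: v = H0 * D
    return H0 * distance_mpc

for d in (100, 1000, 5000):  # proper distances in megaparsecs
    print(f"D = {d:5d} Mpc -> v = {recession_velocity(d):9.0f} km/s")

# Distance at which v = c: beyond roughly this distance, the recession velocity
# implied by v = H0*D exceeds the speed of light.
print("v reaches c at roughly", round(C / H0), "Mpc")

With these assumed numbers the crossover is on the order of 4,000 Mpc, which is why only extremely distant galaxies show the faster-than-light recession discussed below.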
Asserting that vacuums contain matter-antimatter particles is equivalent to asserting that vacuums contain positive and negative energy, based on Einstein's famous mass-energy equivalence equation, E = mc2 (where E stands for energy, m is the rest mass of an object, and c is the speed of light in a vacuum). Do vacuums really contain particles or energy? Our experimentation with laboratory vacuums proves they do. However, we have no way to directly measure the energy of a vacuum or directly observe virtual particles within the vacuum. As much as we physicists talk about energy, we are unable to measure it directly. Instead, we measure it indirectly via its effects. For example, we are able to measure the Casimir-Polder force, which is an attraction between a pair of closely spaced electrically neutral metal plates in a vacuum. In effect, virtual particles pop in and out of existence, in accordance with the Heisenberg Uncertainty Principle, at a higher density on the outside surfaces of the plates. The density of virtual particles between the plates is lower due to their close spacing. The higher density of virtual particles on the outside surfaces of the plates acts to push the plates together. This well-known effect is experimental evidence that virtual particles exist in a vacuum. It is just one of the ways virtual particles affect their environment; there is a long list of other effects that show virtual particles are real and exist in a vacuum. I previously mentioned the Heisenberg Uncertainty Principle. I will now explain it, as well as the role it plays in the spontaneous creation of virtual particles. The Heisenberg Uncertainty Principle describes the statistical behavior of mass and energy at the level of atoms and subatomic particles. Here is a simple analogy. When you heat a house, it is not possible to heat every room uniformly. The rooms themselves and places within each room will vary in temperature. The Heisenberg Uncertainty Principle says the same about the energy distribution within a vacuum: it will vary from point to point. When energy accumulates at a point in a vacuum, virtual particle pairs (matter and antimatter) are forced to pop into existence. The accumulation of energy and the resulting virtual particle pairs are termed a quantum fluctuation. Clearly, vacuums contain energy in the form of virtual particle pairs (matter-antimatter). By extension, we can also argue that the vacuums between galaxies contain energy. Unfortunately, with today's technology, we are unable to measure the amount of energy or the virtual particle pairs directly. Why are we unable to measure the virtual particle pairs in a vacuum directly? Two answers are likely. First, they may not exist as particles in a vacuum, but rather as energy. As stated previously, we are unable to measure energy directly. Second, if they exist as particles, they may be extremely small, perhaps having a diameter on the order of a Planck length. In physics, the smallest length believed to exist is the Planck length, which science defines via fundamental physical constants. We have no scientific equipment capable of measuring anything close to a Planck length. For our purposes here, it suffices to assert that vacuums contain energy. We are unable to measure the amount of energy directly, but we are able to measure the effects the energy has on its environment. Next, let us consider existence. Any mass requires energy to exist (move forward in time).
In my book, Unraveling the Universe’s Mysteries, the Existence Equation Conjecture is derived, discussed, and shown to be consistent with particle acceleration data. The equation is: KEX4 = – .3 mc2, where KEX4 is the kinetic energy associated with moving in the fourth dimension (X4) of Minkowski space, m is the rest mass of an object, and c is the speed of light in a vacuum. This asserts that for a mass to exist (defined as movement in time), it requires energy, as described by the Existence Equation Conjecture. (For simplicity, from this point forward I will omit the word “conjecture” and refer to the equation as the “Existence Equation.”) Due to the enormous negative energy implied by the Existence Equation, in my book, Unraveling the Universe’s Mysteries, I theorized that any mass draws the energy required for its existence from the universe, more specifically from the vacuum of space. Below, I will demonstrate that this gives rise to what science terms dark energy and causes the accelerated expansion of space. At this point, let us address two questions: 1. Is the Existence Equation correct? I demonstrated quantitatively in Appendix 2, Unraveling the Universe’s Mysteries, that the equation accurately predicts a muon’s existence (within 2%), when the muon is accelerated close to the speed of light. Based on this demonstration, there is a high probability that the Existence Equation is correct. 2. What is the space between galaxies? The space between galaxies is a vacuum. For purposes here, I am ignoring celestial objects that pass through the vacuum between galaxies. I am only focusing on the vacuum itself. From this standpoint, based on Dirac’s assertion and our laboratory experiments, we can conclude that vacuums contain matter-antimatter (i.e. the Dirac sea), or equivalently (from Einstein’s famous mass-energy equivalence equation, E = mc2) positive-negative energy. Given that a vacuum contains mass, we can postulate that each mass within a vacuum exerts a gravitational pull on every other mass within the vacuum. This concept is based on Newton’s classical law of gravity, F = G (m1 m2)/r2, where m1 is one mass (i.e. virtual particle) and m2 is another mass (i.e. virtual particle), r is the distance between the two masses, G is constant of proportionality (i.e. the gravitational constant), and F is the force of attraction between the masses. If we think of a vacuum as a collection of virtual particles, it appears reasonable to assume the gravitational force will define the size of the vacuum. This is similar to the way the size of a planet is determined by the amount of mass that makes up the planet and the gravitational force holding the mass together. This is a crucial point. The density of virtual particles defines the size of the vacuum. However, we have shown that existence requires energy (via the Existence Equation). A simple review of the Existence Equation delineates that the amount of energy a mass requires to exist is enormous. The energy of existence is directly proportional to the mass. Therefore, a galaxy, which includes stars, planets, dark matter, and celestial objects, would require an enormous amount of energy to exist. In effect, to sustain its existence, the galaxy must continually consume energy in accordance with the Existence Equation. Using the above information, let us address three key questions: 1. What is causing the vacuum of space between galaxies to expand? To sustain their existence, galaxies remove energy from the vacuum (i.e. 
space) that borders the galaxies. The removal of energy occurs in accordance with the Existence Equation, and it causes the gravitational force that defines the vacuum to weaken. This causes the vacuum (space) to expand.

2. Why are the distant galaxies expanding at a greater rate than those galaxies closer to the Earth? The galaxies that are extremely distant from the Earth have existed longer than those closer to the Earth. Therefore, distant galaxies have consumed more energy from the vacuums of space that surround them than galaxies closer to the Earth have.

3. Why is the space within a galaxy not expanding? A typical galaxy is a collection of stars, planets, celestial objects, and dark matter. We know from observational measurements that dark matter exists only within a galaxy and not between galaxies. I believe the dark matter essentially allows the galaxy to act as if it were one large mass. From this perspective, it appears that the dark matter blocks any removal of energy from the vacuum (i.e., space) within a galaxy.

Does this solve the profound mystery regarding the accelerated expansion of the universe? To my mind, it does. I leave it to you, my colleagues, to draw your own conclusions.
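To get a feel for the "enormous" energy the author's Existence Equation implies, here is a short Python sketch that simply evaluates the magnitude 0.3·m·c² for a 1 kg mass and for a rough galaxy-scale mass. The galaxy mass used (about 10^42 kg) is my own order-of-magnitude assumption for illustration, not a figure from the text, and the sign of the equation's value is negative as stated above.

C = 2.998e8  # speed of light in m/s

def existence_energy_magnitude(mass_kg):
    # |KE_X4| = 0.3 * m * c^2, in joules, following the Existence Equation quoted above
    return 0.3 * mass_kg * C**2

print("1 kg mass:        ", f"{existence_energy_magnitude(1.0):.2e} J")
print("Galaxy (~1e42 kg):", f"{existence_energy_magnitude(1e42):.2e} J")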
http://www.louisdelmonte.com/unraveling-the-universes-accelerated-expansion/
Circumference of a Circle When Radius Is Given (Video Tutorial)

This tutorial shows how to find the circumference of a circle when the radius is given, and explains the relationship between the radius and the diameter. The key point is that the circumference formula uses the diameter, not the radius, so the radius measurement must first be doubled to obtain the diameter before solving for the circumference. The video involves circles, circumference, curves, plane figures, radius, and shapes, and is recommended for students in grades 3 through 10 studying basic math, pre-algebra, algebra, or geometry.

Circles are simple shapes of Euclidean geometry. A circle consists of those points in a plane which are at a constant distance, called the radius, from a fixed point, called the center. A chord of a circle is a line segment both of whose endpoints lie on the circle. A diameter is a chord passing through the center; the length of a diameter is twice the radius, and a diameter is the largest chord in a circle. Circles are simple closed curves which divide the plane into an interior and an exterior. The circumference of a circle is the perimeter of the circle, and the interior of the circle is called a disk. An arc is any connected part of a circle. A circle is a special ellipse in which the two foci are coincident. Circles are conic sections attained when a right circular cone is intersected with a plane perpendicular to the axis of the cone. The circumference is the distance around a closed curve; circumference is a kind of perimeter.
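A minimal Python sketch of the procedure the tutorial describes: double the radius to get the diameter, then apply the circumference formula C = πd (equivalently C = 2πr).

import math

def circumference_from_radius(radius):
    diameter = 2 * radius      # the formula uses the diameter, so double the radius first
    return math.pi * diameter  # C = pi * d, equivalently 2 * pi * r

for r in (1, 4, 7.5):
    print(f"radius = {r} -> circumference = {circumference_from_radius(r):.4f}")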
http://tulyn.com/4th-grade-math/radius/videotutorials/circumference-of-a-circle-when-radius-is-given_by_polly.html
Using Critical Points

The second sort of critical point is an inflection point. This is a point on the graph where on one side the slope is increasing, and on the other side the slope is decreasing. This can happen whether the curve is increasing or decreasing. Look in the following graph at the first inflection point. Before it, the slope started from flat and increased to a slant, and after the point, the slope decreases back to flat. The inflection point is the exact point at which this transition occurs. We refer to this as a change in concavity. Before the point, it is concave up, and after, it is concave down. After the second inflection point, it is concave up again. You should be able to see that if the graph is continuous and smooth, between every min and max there must be an inflection point. The converse is not necessarily true.

I can now explain mathematically what these points are. Simply put, critical points are the points at which y′ or y″ is equal to 0. The local maximums or minimums can be found by setting the first derivative to 0. This works because when the slope is 0, the graph is flat. If the graph is flat, it is almost always because it was going down and is now going up, making a minimum, or the opposite. If the object was moving upward, it is switching direction to go downward, and for a split second the velocity is 0. This should be obvious from the graph below. At the local max and local min points, the derivative will be 0. So set it equal to 0, and see what x-values emerge.

Inflection points are found by setting the second derivative to 0. If the first derivative measures the rate of change of y, then the second derivative measures the rate of change of y′; that is, it measures the rate at which the slope is changing. If the second derivative is positive, the slope is changing at a faster and faster pace. If there is a point at which the second derivative becomes 0, and then negative, the angle of the slope will stop becoming steeper, and it will then become less and less steep, possibly until the curve is flat, and even further: the slope can decrease to the point where it is negative, and the curve will be decreasing. When y″ is positive, the graph is concave up. When it is negative, the graph is concave down. If there is a point where it is 0, that means at that point the graph is switching from concave up to concave down, or vice versa. Look back at the graph above.

Velocity and Acceleration

The function f(x) can refer to the displacement of a particle or object. Displacement is the distance traveled from the starting point. The x-axis is time, and the y-axis is the distance moved. It can be thought of as a ball thrown directly upward, and you are plotting its position against time. Of course, that would have a particular curve. Now the derivative of this kind of function would be the velocity, because it would be plotting the rate of change against time. Remember that the derivative of a function is an equation for the slope at any point that you plug x into? Well, in an equation of displacement, the slope is the velocity, or the speed at which the displacement changes over time. So y′ is the velocity. That's what velocity is: speed. Speed is how quickly something moves, or in other words, how quickly displacement changes. Using the same logic, you can see that y″ is the acceleration, because it is the derivative of the velocity, or the rate of change of velocity over time.
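As a quick check of the displacement → velocity → acceleration chain just described, here is a small SymPy sketch. The displacement function s(t) = 5t − 4.9t² (a ball thrown upward at 5 m/s under gravity) is my own illustrative choice, not one from the text.

import sympy as sp

t = sp.symbols("t")
s = 5 * t - sp.Rational(49, 10) * t**2   # displacement: ball thrown straight up (illustrative)

v = sp.diff(s, t)   # velocity = first derivative of displacement
a = sp.diff(v, t)   # acceleration = second derivative

print("velocity    :", v)    # 5 - 49*t/5
print("acceleration:", a)    # -49/5, constant and pointing downward

t_top = sp.solve(sp.Eq(v, 0), t)[0]  # v = 0 at the top of the flight, a local maximum of s
print("time at the top:", t_top, "; height there:", s.subs(t, t_top))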
If the velocity is increasing then the acceleration is positive. I just explained how that sort of thing works by inflection points above. Here's an example:

y = x² - 4x

I will take the derivative and second derivative:

y' = 2x - 4
y'' = 2

In this example, there is a constant acceleration of 2 for all values of x. This makes sense logically. As you can see, the graph begins with a negative velocity (displacement is decreasing), but it begins to slow its backward movement, which is reverse deceleration, or acceleration. This constant acceleration eventually brings the velocity positive. We would like to find the local maximums and/or minimums. In our example this will be a local minimum. I will now set y' to 0:

y' = 2x - 4 = 0
2x = 4
x = 2

So when x is equal to 2 the velocity is 0 and the object has reached its minimum value. How did I know it was a minimum value and not a maximum value? Because at the point x = 2 the acceleration is positive. In fact, in this entire equation the acceleration is positive; it always equals 2. When the acceleration is positive, it means the velocity is going upward, which means that it must have been negative before and is positive now. That means it is a minimum. The term for this is concave up. When a graph is concave up, it means the slant is slowly getting higher, or less negative. When the acceleration is negative, it is concave down, because that will be the shape of the graph at that point. There will be a maximum, and the slant of the graph is going to be on a downward trend. I'm sorry to repeat myself here. I don't want to insult anybody's intelligence. (Note that in rare cases there will be an inflection point on the same spot where the first derivative is 0, and in that case, the point is not a min or max, but the graph slows down at that point to a slant of 0, and then continues in the same direction it was going before.) Drawing a graph using critical points Firstly, what are critical points? Allow me to repeat a bit. These are all points where y' or y'' is equal to 0. When y' is equal to 0 you have local maximums and minimums, as explained. When y'' equals 0, you have inflection points, or changes in concavity. While y'' is positive, the graph is concave up, and while it's negative, the graph is concave down. So in between, when it is 0, the graph is switching from concave up to down, or the other way around. The way that you figure out what to do with an equation is by using the following chart (if you end up needing another column or two, do not worry): the top line refers to what x is, and the next three rows are y, y', and y''. You always want to know what y is at -∞ and ∞. For this you use the limit of the equation as x goes to each of them, respectively. You also want to set y' and y'' to zero and fill in the values of x at which this occurs. Here is an equation:

y = x³ - 6x² + 9x
y' = 3x² - 12x + 9
y'' = 6x - 12

I will set y' to 0 and plug the results into the chart, and then set y'' to 0 and plug that in as well.

3x² - 12x + 9 = 0
3(x² - 4x + 3) = 0
(x - 3)(x - 1) = 0
x = 3, 1

6x - 12 = 0
6x = 12
x = 2

Now I will plug in all values of x at which there are critical points. Next I will find the value of y for every critical point so I can know the full (x, y) coordinate at which each occurs. I will also find the limits at -∞ and ∞.
lim (x → -∞) y = x³ - 6x² + 9x ≈ x³ = -∞
lim (x → ∞) y = x³ - 6x² + 9x ≈ x³ = ∞
y = x³ - 6x² + 9x = 1 - 6 + 9 = 4 (when x = 1)
y = x³ - 6x² + 9x = 8 - 24 + 18 = 2 (when x = 2)
y = x³ - 6x² + 9x = 27 - 54 + 27 = 0 (when x = 3)

You can already see from the chart that y is rising from negative infinity up to 4, goes down to 0 when x is 3, and then rises to infinity. In between, it switched concavity at the point (2, 2). It is clear that at the point (1, 4) the graph is concave down, since that is a maximum, but I'll figure it out anyway by checking y'' at that point. I'll also check concavity at (3, 0) to make sure that's a minimum.

y'' = 6x - 12 = 6 - 12 = -6 (when x is 1)
y'' = 6x - 12 = 18 - 12 = 6 (when x is 3)

Now you have every part of the graph you need to fill in to solve the graph. Here is a picture of the graph, as you should draw it. Once again, excuse the sloppiness. :) Incidentally, once you know the (x, y) coordinates of all the critical points, and you know the direction the graph goes in toward negative infinity and infinity, you can immediately figure out what the graph looks like without having to check concave up/down, etc. Plot the 3 points, draw a line coming in from (-∞, -∞), and draw a line leaving to (∞, ∞). In the center area where you have the points, it obviously climbs to the first point, comes through the second, and loops back up at the third. There cannot be any extra squiggles in the graph, because if there were, each squiggle would have another set of max, min, and inflection points. So draw the simplest possible curve for it, and that will be correct. There are, incidentally, much more difficult curves possible. I will try one now.

y = 1/x - 4x²
y' = -1/x² - 8x
y'' = 2/x³ - 8

Set y' to 0:

-1/x² - 8x = 0
-1/x² - 8x³/x² = 0
-(8x³ + 1)/x² = 0
8x³ + 1 = 0
x = -1/2

Set y'' to 0:

2/x³ - 8 = 0
2/x³ = 8
x³ = 2/8 = 1/4
x = ∛(1/4)

lim (x → -∞) y = 1/x - 4x² = 1/-∞ - 4(-∞)² = 0 - ∞ = -∞
y = 1/(-1/2) - 4(-1/2)² = -2 - 1 = -3 (when x = -1/2)

This function demonstrates the shortcomings of the chart method I use to solve these problems. Everything looks great, but something's missing. What happens when the graph is near 0? You should worry about domain in any of these problems. The domain of x refers to all the possible values it can have. In this case, x cannot be 0. It is not in the domain. If you try to put 0 in the original equation, you end up dividing by 0. We will therefore have to make special consideration for this. We must use some method to find out what happens in the region around 0. For this problem, I have two possible ways. The first way requires no calculation. At small numbers, the second term will be very small, and of little influence on the value of the equation. The first term will dominate. So I think to myself, what does the 1/x graph look like? The answer is the familiar reciprocal curve, so I know that near 0 our equation will look like that: it will go down to negative infinity (approaching 0 from the left), and then come down from positive infinity (approaching 0 from the right). The second way does not require recognizing that. Imagine putting a tiny number into 1/x - 4x². The first term will get large, as you will be dividing by a tiny number, and the second term will be near 0. The equation overall will be very big. Then put in a tiny negative number. The same thing will happen, except the first term will be very large and negative. I can assume that the smaller the number I put in, the bigger the result will get. This is a vertical asymptote. (For more on this, see Limits.) Whichever you try, we now know 4 lines and two points. This turns out to be an unusual looking graph. It looks like abstract art. Let me fill in the rest:
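As an optional cross-check of the first worked example above, here is a short Python sketch (my addition, assuming the sympy library is available) that recomputes its critical points, inflection point, and concavities:

# Verifying y = x^3 - 6x^2 + 9x: critical points where y' = 0,
# inflection point where y'' = 0, concavity from the sign of y''.
import sympy as sp

x = sp.symbols('x')
y = x**3 - 6*x**2 + 9*x
y1 = sp.diff(y, x)      # 3x^2 - 12x + 9
y2 = sp.diff(y, x, 2)   # 6x - 12

print(sp.solve(sp.Eq(y1, 0), x))                 # [1, 3]
print(sp.solve(sp.Eq(y2, 0), x))                 # [2]
print([(v, y.subs(x, v)) for v in (1, 2, 3)])    # (1, 4), (2, 2), (3, 0)
print(y2.subs(x, 1), y2.subs(x, 3))              # -6 (concave down), 6 (concave up)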
http://www.qcalculus.com/cal06.htm
Invasion of Burma Contributor: C. Peter Chen Burma, isolated from the rest of the world with mountainous ranges on her western, northern, and eastern borders, was a British colony with a degree of autonomy. Under pressure from Japan, the British armed Burma with some British and Indian troops and obsolete aircraft so that there would be a small buffer between Japan and India, the crown jewel of Britain's Asiatic empire. The United States also aimed to help Burma as a direct result of Japanese pressure, but the reason was quite different from that of the British; the United States looked to keep Burma outside Japanese control so that supply lines into China would remain open. The supplies traveled into China via the Burma Road, a treacherous gravel road that connected Kunming, China with Lashio, Burma and opened in 1938. British and American worries about Burma were not unfounded, as Japan did look to incorporate Burma into her borders. Beyond the wish to cut off China's supply lines, a Japanese-occupied Burma would also provide Japan added security from any potential flanking strikes from the west against the southward expansion that was about to take place. The Invasion Began 11 Dec 1941 On 11 Dec 1941, only days after Japan's declaration of war against Britain, Japanese aircraft struck airfields at Tavoy, south of Rangoon. On the next day, small units of Japanese troops infiltrated across the Burmese border and engaged in skirmishes against British and Burmese troops. On the same day, a Flying Tigers squadron transferred from China to Rangoon to reinforce against the upcoming invasion. Under the banner of liberating Burma from western imperialism, the Japanese 15th Army of the Southern Expeditionary Army under the command of Shojiro Iida marched across the border in force from Siam. Airfields at Tavoy and Mergui fell quickly, removing whatever little threat the obsolete British aircraft posed and preventing Allied reinforcements from the air. 16 Dec 1941 As the invasion had gotten underway, the United States recognized that she must assist British troops in the region. Brigadier General John Magruder, head of the American Military Mission to China, approached Chinese leader Chiang Kaishek for his permission to transfer ammunition aboard the transport Tulsa, then docked in Rangoon, to the British troops. The goods were originally destined for the Chinese, but Magruder, arguing on behalf of Washington, insisted that the British troops be given priority or the Burma Road might fall under Japanese control, thereby making future supply runs impossible. Before Chiang responded, however, the senior American officer in Rangoon, Lieutenant Colonel Joseph Twitty, advised the government in Rangoon to impound the American ship, while maintaining the United States' innocent front. Chiang protested fiercely, calling it an "illegal confiscation". Chiang's representative in Rangoon, General Yu Feipeng, attempted to negotiate a compromise, but Chiang's attitude was more drastic. On 25 Dec, Chiang announced that he would allow all lend-lease supplies to go to the British in Burma, but all Chinese troops in Burma would be withdrawn back into China, and the British-Chinese alliance was to end. For days, Magruder worked with Chiang, and was finally able to secure Chiang's agreement to share the supplies with the British, but as a compromise, Magruder also had to give in to Chiang's demand that Twitty be removed from his position.
This incident, later labeled the Tulsa Incident, exemplified the difficulties that Chiang's stern personality imposed on the relationship between China, Britain, and the United States. The Battle of Sittang Bridge 22-31 Jan 1942 In Jan and Feb 1942, the Indian 17th Division under the command of British Major General John Smyth fought a campaign to slow the Japanese advance near the Sittang River. The Japanese 55th Division attacked from Rahaeng, Siam across the Kawkareik Pass on 22 Jan 1942, and over the next nine days pushed Smyth's troops to the Sittang Bridge, where they were enveloped and crushed. "The Allied defense was a disaster", said military historian Nathan Prefer. "Two understrength Japanese infantry divisions, the 33d and 55th, enjoyed victory after victory over Indian, British, and Burmese troops who were undertrained, inadequately prepared for jungle warfare, and completely dependent upon motor transport for all supply." The Battle of Rangoon Rangoon was attacked first by air; the few Royal Air Force and American Flying Tigers aircraft defended its air space effectively at first, but their numbers waned under constant pressure. Japanese troops appeared at Rangoon's doorstep toward the end of Feb 1942. Magruder gathered all the trucks he could to send as many lend-lease supplies north into China as possible, and whatever could not be shipped out was given to the British, which included 300 Bren guns, 3 million rounds of ammunition, 1,000 machine guns with 180,000 rounds of ammunition, 260 jeeps, 683 trucks, and 100 field telephones. Nevertheless, he was still forced to destroy more than 900 trucks, 5,000 tires, 1,000 blankets and sheets, and more than a ton of miscellaneous items, all to prevent Japanese capture. As Japanese troops approached Rangoon, two Chinese armies, the 5th and the 6th, marched south from China on 1 Mar 1942 to assist. The Chinese armies totalled six divisions, though half of them were understrength and most men of the 6th Army were undertrained green soldiers. Cooperation between the Chinese and the British was poor, though the Chinese regarded Americans such as General Joseph Stilwell, then in the Chinese temporary wartime capital of Chungking, rather highly. Outside Rangoon, the British 7th Armored Brigade attempted to counterattack the Japanese troops marching from the direction of the Sittang River, but failed. On 6 Mar, Japanese troops reached the city, and the final evacuation order was given by British officers on the next day. Retreating troops demolished the port facilities to prevent Japanese use. Whatever aircraft remained of the RAF and the Flying Tigers relocated to Magwe in the Irrawaddy Valley south of Mandalay. Battle of Tachiao 18 Mar 1942 On 8 Mar 1942, the 200th Division of the Chinese 5th Army began arriving in Taungoo, Burma to take over defensive positions from the British. At dawn on 18 Mar, about 200 Japanese reconnaissance troops of the 143rd Regiment of the Japanese 55th Division, on motorcycles, reached a bridge near Pyu and were ambushed by the Chinese; 30 Japanese were killed, and the Chinese captured 20 rifles, 2 light machine guns, and 19 motorcycles. After sundown, expecting a Japanese counterattack, the Chinese fell back to Oktwin a few kilometers to the south. Pyu was captured by the Japanese on the following day.
Battle of Oktwin 20-23 Mar 1942 The Japanese 143rd Regiment and a cavalry formation of the Japanese 55th Division attacked defensive positions north of the Kan River in Burma manned by troops of the Cavalry Regiment of the Chinese 5th Army. The Chinese fell back toward Oktwin. At dawn on 22 Mar, the 122nd Regiment of the Japanese 55th Division attacked outposts manned by a battalion of the Chinese 200th Division, but made little progress. After two days of heavy fighting, the Chinese fell back toward Taungoo, Burma after nightfall on 23 Mar. Battle of Taungoo 24-30 Mar 1942 Taungoo, an important crossroads city in central Burma, housed the headquarters of Major General Dai Anlan's Chinese 200th Division. The Japanese 112th Regiment attacked the city on 24 Mar, quickly surrounding it on three sides. At 0800 hours on 25 Mar, the main offensive was launched on the city, attempting to push the Chinese defense toward the Sittang River. The Chinese held on to their positions, forcing the Japanese to engage in brutal house-to-house fighting, which took away the Japanese firepower superiority. A counteroffensive launched by the Chinese at 2200 hours, however, failed to regain lost territory. On the next day, the Japanese again failed to penetrate the Chinese lines, and later in the day the Chinese, too, repeated the previous day's performance with a failed counterattack that suffered heavy casualties. On 27 and 28 Mar, Japanese aircraft and artillery bombarded the Chinese positions to pave the way for an attack by the newly arrived Reconnaissance Regiment of the Japanese 56th Division. On the following day, the Japanese penetrated into the northwestern section of the city in the morning, and by noon the headquarters of the Chinese 200th Division was seriously threatened. In the afternoon, Dai gave the order to retreat after nightfall. The Chinese 200th Division established a new defensive position at Yedashe to the north, joined by the New 22nd Division. Japanese troops would attack this new position on 5 Apr and overcome it by 8 Apr. Battle of Yenangyaung 11-19 Apr 1942 On 11 Apr, the Japanese 33rd Division attacked the Indian 48th Brigade at the oil fields at Yenangyaung, using captured British tanks to support the assault. The situation at first swung back and forth; then General William Slim's two divisions, which arrived in response, were cut off, leading British General Harold Alexander to request reinforcements for the Yenangyaung region from American Lieutenant General Joseph Stilwell in China. On 16 Apr, nearly 7,000 British troops were encircled by an equal number of Japanese troops. General Sun Liren arrived with the 113th Regiment of the Chinese 38th Division, 1,121 strong, on 17 Apr. Sun arrived without artillery or tank support, but that deficiency was quickly made up by the attachment of Brigadier Anstice's British 7th Armored Brigade. The Chinese attacked southward, while Major General Bruce Scott led the British 1st Burma Division against Pin Chaung. On 19 Apr, the Chinese 38th Division took control of Twingon outside of Yenangyaung, then moved into Yenangyaung itself, but even with the arrival of the 1st Burma Division at Yenangyaung the position could not be defended. The Allied forces withdrew 40 miles to the north. Although Yenangyaung still fell under Japanese control in the end, nearly 7,000 British troops were saved from capture or destruction.
The British Withdraw 7 Mar-26 May 1942 Generals Alexander and Slim led the remaining forces north through the jungles toward Mandalay, slowing down the Japanese as much as they could. Supply became a critical issue after the fall of Rangoon and its port facilities. In Tokyo, it was decided that Burma was to be rid of all Allied troops. An additional regiment was assigned as reinforcement to the Japanese 33rd Division to bring it up to full strength. Soon after, two additional infantry divisions, the 18th and 56th, arrived in the theater, further bolstering Japanese numbers. The reinforcements arrived in the area undetected by Allied intelligence. Fresh Japanese troops moved north in three separate columns: one through the Irrawaddy Valley, another along the Rangoon-Mandalay Road in the Sittang Valley, and the third marching from Taunggyi in the east toward Lashio. Chinese troops attempted to delay the Japanese advances but failed; most of them fell back across the Chinese border almost immediately. Alexander and Slim successfully retreated across the Indian border on 26 May 1942. Along the way, they destroyed precious oilfields so that they could not be used by the Japanese. As the British crossed into India, Japanese forces captured the entire country of Burma, including the important airfields in Myitkyina near the Chinese border. Some time during the conquest of Burma, the Japanese set up a comfort women system similar to the systems seen in Korea and China. When the combined American and Chinese forces later retook Myitkyina in Aug 1944, 3,200 women were known to be retreating with the Japanese forces. 2,800 of the women were Koreans who had been forcibly relocated from their home country to serve the Japanese troops as prostitutes, but there were also many Burmese women who volunteered in the belief that the Japanese were there to liberate their country from western imperialism. Some Chinese women were seen in the ranks as well. The goal of such a system was to prevent the Japanese soldiers from raping Burmese women, and to prevent the spread of venereal diseases. Conclusion of the Campaign "I claim we got a hell of a beating", recalled Stilwell. "We got run out of Burma and it is embarrassing as hell." With Burma under Japanese control, the blockade on China was complete, but that was but a symptom of the real underlying issue: the conflicting goals of the three Allied nations involved in Burma. To Britain, Burma was nothing but a buffer between Japanese troops and India. To China, Burma was a sideshow of the Sino-Japanese War, though important in that it provided an important supply line. To the United States, Burma was the key to keeping China fighting in order to tie down the countless Japanese soldiers in China so that they could not be re-deployed in the South Pacific. Meanwhile, caught between the politics of the three Allied nations and the Japanese invader, the Burmese people found that none of the warring powers was willing to listen to their sentiments. Sources: BBC, the Pacific Campaign, Vinegar Joe's War, US Army Center of Military History, Wikipedia. Invasion of Burma Timeline |12 Dec 1941||Churchill placed the defence of Burma under Wavell's command, promising four fighter and six bomber squadrons and matériel reinforcements, together with the 18th Division and what remained of 17th Indian Division (since two of its brigades had been diverted to Singapore).
On the same day, the 3rd Squadron of the American Volunteer Group was transferred to Rangoon, Burma.| |14 Dec 1941||A battalion from the Japanese 143rd Infantry Regiment occupied Victoria Point, Burma on the Kra River near the Thai-Burmese border.| |22 Dec 1941||The Japanese 55th Division, commanded by Lieutenant General Takeuchi Yutaka, assembled at Bangkok, Thailand and was issued orders for it to cross the Thai-Burma frontier and capture Moulmein, which happened to be held by the Headquarters of 17th Indian Division.| |23 Dec 1941||54 Japanese bombers escorted by 24 fighters attacked Rangoon, Burma in the late morning, killing 1,250; of those who became wounded as the result of this raid, 600 died.| |28 Dec 1941||Lieutenant-General Thomas Hutton assumed command of Burma army. A competent and efficient Staff Officer (he had been responsible for the great expansion of the Indian army), he had not actually commanded troops for twenty years. Across the border in Thailand, Japanese Colonel Keiji Suzuki announced the disbandment of the Minami Kikan (Burmese armed pro-Japanese nationalists) organization, which would be replaced by the formation of a Burma Independence Army (BIA), to accompany the Invasion force.| |29 Dec 1941||Japanese bombers struck Rangoon, Burma, destroying the railway station and dock facilities.| |14 Jan 1942||Japanese forces advanced into Burma.| |16 Jan 1942||The first clash between Japanese and British forces within Burma occurred when a column of the 3rd Battalion of the Japanese 112th Infantry Regiment was engaged by the British 6th Burma Rifles (plus two companies of the 3rd Burma Rifles and elements of the Kohine battalion BFF) at the town of Tavoy (population 30,000 and strategically important as it was the start of a metal road to Rangoon). By the 18th the Japanese had taken the town, having lost 23 dead and 40 wounded, but the morale of the defenders had been badly damaged and the Japanese column was able to move on to Mergui without serious opposition.| |19 Jan 1942||Japanese troops captured the airfield at Tavoy (now Dawei), Burma.| |20 Jan 1942||The Japanese advance guard crossed the border into Burma heading for Moulmein. Kawkareik was defended by 16th Indian Brigade under Brigadier J. K. "Jonah" Jones, but was widely dispersed covering the tracks leading to the border 38 miles away. The Japanese first encountered the 1st/7th Gurkha Rifles (who had only arrived on the previous day) near Myawadi. The Gurkhas were quickly outflanked and forced to withdraw. Within forty-eight hours the rest of 16th Infantry Brigade were forced to follow.| |23 Jan 1942||The Japanese commenced a determined effort to establish air superiority over Rangoon, Burma. By 29 Jan seventeen Japanese aircraft had been shot down for the loss of two American Volunteer Group and ten Royal Air Force machines, forcing the Japanese temporarily to concede.| |24 Jan 1942||Japanese aircraft attacked Rangoon, Burma for the second day in a row. From the Thai-Burmese border, Japanese troops marched in multiple columns toward Moulmein, Burma, looking to capture the nearby airfield.| |25 Jan 1942||Japanese aircraft attacked Rangoon, Burma for the third day in a row. 
Meanwhile, Archibald Wavell ordered that the airfield at Moulmein, Burma to be defended, which was being threatened by troops of the Japanese 55th infantry Division.| |26 Jan 1942||Japanese aircraft attacked Rangoon, Burma for the fourth day in a row.| |30 Jan 1942||Japanese 55th Infantry Division captured the airfield at Moulmein, Burma.| |31 Jan 1942||Japanese 55th Infantry Division captured the town of Moulmein, Burma one day after the nearby airfield was captured; Burmese 2nd Infantry Brigade (Brigadier Roger Ekin) retreated across the Salween River during the night after having lost 617 men (mostly missing); Archibald Wavell however, unaware of the true situation, was appalled and angry to hear of the ease with which the Japanese had driven Burmese 2nd Infantry Brigade from the town. On the same day, Slim issued a report summarizing the air situation in Burma, noting the Allies had 35 aircraft in the area to defend against about 150 Japanese aircraft; while a few more Allied aircraft were en route for Burma, by mid-Mar 1942 there would be 400 operational Japanese aircraft in this theater of war.| |3 Feb 1942||Burmese 2nd Infantry Brigade and a part of the Indian 17th Division withdrew from Martaban, Burma toward the Bilin River.| |6 Feb 1942||Wavell, still angry at the loss of Moulmein, Burma, ordered 2nd Burma Brigade to "take back all you have lost". It was too late-the Japanese were already bringing more troops (33rd "White Tigers" Division and the Headquarters of 15th Army) across the frontier. Lieutenant-General Hutton insisted on abandoning Moulmein and taking up new positions on the Salween which would be reinforced by the newly committed 46th Indian Brigade who had been brought down from the Shah States.| |7 Feb 1942||The Japanese infiltrated across the Salween River in Burma cutting the defenders of Martaban River, 3/7th Gurkhas with a company of the King's Own Yorkshire Light Infantry under command, from the 46th Indian Brigade headquarters base at Thaton. The Gurkha's Commanding Officer, Lieutenant Colonel H. A. Stevenson, knowing that his position was now untenable led a bayonet charge to clear the road block. The subsequent retreat from Martaban (over difficult terrain with no food) of more than 50 miles in two days was a terrible ordeal and a foretaste of things to come.| |10 Feb 1942||Japanese troops crossed the Salween River in Burma.| |11 Feb 1942||Having crossed the Salween River at Kuzeik, Burma during the night the Japanese II/215th Infantry regiment engaged the raw and inexperienced 7/10th Baluch who were deployed in a semi-circle with their backs to the river without barbed wire or artillery support. After dark the Japanese launched their attack on the Indian positions and after four hours of bitter hand to hand fighting began to get the upper hand. By dawn organized resistance had effectively ceased. The heroic 7/10th Buluch had suffered 289 killed; with the few survivors making off in small parties.| |13 Feb 1942||In Burma, the British Commander-in-Chief Lieutenant-General Hutton requested Archibald Wavell to appoint a corps commander to take charge of operations and a liaison team to work with the Chinese. 
He received no reply as Wavell was incapacitated after suffering a fall.| |14 Feb 1942||Indian 17th Infantry Division was ordered to defend against the Japanese advance toward Rangoon, Burma at the Bilin River.| |15 Feb 1942||Japanese troops penetrated Indian 17th Infantry Division positions on the Bilin River north of Rangoon, Burma.| |17 Feb 1942||Japanese troops crossed the Bilin River north of Rangoon, Burma and began to encircle the Indian 17th Infantry Division.| |18 Feb 1942||After three days of confused fighting along the Bilin in Burma, Major General "Jackie" Smyth learned that he was threatened with being outflanked to the south by the Japanese 143rd Regiment. He committed his last reserves, the 4/12th Frontier Force Regiment, who fought a stiff action on 16th Indian Brigade's left but ultimately failed to dislodge the Japanese.| |19 Feb 1942||Mandalay, Burma came under aerial attack for the first time. Meanwhile, the Japanese 143rd Regiment, having crossed the Bilin Estuary, arrived at Taungzon, effectively bypassing the British and Indian positions along the Bilin River; Lieutenant General Hutton had no option but to permit a withdrawal to the Sittang.| |20 Feb 1942||The Japanese attacked the positions of 16th and 46th Indian Brigades at Kyaikto, Burma, delaying the retreat from the Bilin to the Sittang Bridge for forty-eight hours, and causing total confusion among the withdrawing columns. To make matters worse the Indians came under friendly air attack from RAF and AVG aircraft. In addition most of the Divisional Headquarters' radio equipment was lost in the confusion. In Rangoon, Hutton's implementation of the second stage of the plan to evacuate Europeans caused wide-spread panic, with much looting by drunken natives and the emptying of the city's gaols of lunatics and criminals.| |21 Feb 1942||The 2nd Burma Frontier Force, who had been placed north of the Kyaikto track to warn against outflanking, were heavily engaged by the Japanese 215th Regiment and forced to withdraw north-west, crossing the Sittang River by country boats, and proceeding to Pegu. No report of this contact ever reached the divisional commander "Jackie" Smyth, who was still hearing rumours of a threatened parachute landing to the west. To the south, British 7th Armored Brigade arrived at Rangoon by sea from Egypt.| |22 Feb 1942||During the early hours, the Sittang Bridge in Burma became blocked when a lorry got stuck across the carriageway. With the Japanese closing in on Pagoda and Buddha Hills overlooking the important crossing, the British divisional commander "Jackie" Smyth had to accept that the bridge must be destroyed, even though a large part of his force was still on the east bank. Lieutenant-General Hutton was informed that he was to be replaced but was to remain in Burma as Alexander's Chief of Staff, a most awkward position which he endured until he was replaced at his own request by Major-General John Winter before returning to India in early April.| |23 Feb 1942||The Sittang railway bridge in Burma was blown up to prevent its capture by the Japanese, even though most of General Smyth's command was still on the east bank. Smyth salvaged from the catastrophe 3,484 infantry, 1,420 rifles, 56 Bren guns and 62 Thompson submachine guns. Nearly 5,000 men, 6,000 weapons and everything else was lost. Despite many men making it back across the river without their weapons, 17th Indian was now a spent force.
It would take the Japanese a fortnight to bring up bridging equipment which permitted the Europeans in Rangoon to make their escape from the doomed city.| |28 Feb 1942||General Archibald Wavell, who believed Rangoon, Burma must be held, relieved Thomas Hutton for planning an evacuation.| |2 Mar 1942||Japanese 33rd and 55th Infantry Divisions crossed Sittang River at Kunzeik and Donzayit, Burma, forcing the British 2nd Battalion Royal Tank Regiment to fall back 20 miles as the Japanese troops captured the village of Waw.| |3 Mar 1942||Japanese troops forced Indian 17th Infantry Division out of Payagyi, Burma.| |4 Mar 1942||In Burma, Japanese troops enveloped Chinese troops at Toungoo while British 7th Queen's Own Hussars regiment clashed with Japanese troops at Pegu.| |6 Mar 1942||Anglo-Indian and Japanese troops clashed at various roadblocks near Rangoon, Burma.| |7 Mar 1942||£11,000,000 worth of oil installations of Burmah Oil Company in southern Burma near Rangoon were destroyed as British retreated from the city, preventing Japanese capture; this destruction would result in 20 years of High Court litigation after the war. Also destroyed were 972 unassembled Lend-Lease trucks and 5,000 tires. From Rangoon, 800 civilians departed aboard transports for Calcutta, India. The Anglo-Indian troops in the Rangoon region were held up by a Japanese roadblock at Taukkyan, which was assaulted repeatedly without success.| |8 Mar 1942||200th Division of the Chinese 5th Army arrived at Taungoo, Burma to assist the British defense.| |9 Mar 1942||Japanese troops entered undefended Rangoon, Burma, abandoned by British troops two days prior.| |10 Mar 1942||Japanese 55th Infantry Division began pursuing the retreating British troops from Rangoon, Burma.| |15 Mar 1942||Harold Alexander admitted to Joseph Stilwell that the British had only 4,000 well-equipped fighting men in Burma.| |18 Mar 1942||Chinese troops ambushed 200 Japanese reconnaissance troops near Pyu in Battle of Tachiao, killing 30. Meanwhile, aircraft of the 1st American Volunteer Group "Flying Tigers" bombed the Japanese airfield at Moulmein, claiming 16 Japanese aircraft destroyed on the ground. Of the Burmese coast, troops from India reinforced the garrison on Akyab Island.| |19 Mar 1942||Japanese troops captured Pyu, Burma.| |20 Mar 1942||Japanese 143rd Regiment and a cavalry formation of the Japanese 55th Division attacked troops the Cavalry Regiment of the Chinese 5th Army north of the Kan River in Burma.| |21 Mar 1942||151 Japanese bombers attacked the British airfield at Magwe in northern Burma, the operating base of the Chinese Air Force 1st American Volunteer Group "Flying Tigers"; 15 Sino-American aircraft were destroyed at the cost of 2 Japanese aircraft. Meanwhile, at Oktwin, forward elements of Japanese 55th Division engaged Chinese troops.| |22 Mar 1942||American and British airmen abandoned the airfield in Magwe in northern Burma. To the southeast, at dawn, troops of the 600th Regiment of the Chinese 200th ambushed troops of the 122nd Regiment of the Japanese 55th Division near Oktwin, Burma.| |23 Mar 1942||Chinese troops held the Japanese attacks in check near Oktwin, Burma, but withdrew toward Taungoo after sundown.| |24 Mar 1942||Japanese 112th Regiment attacked Taungoo, Burma, overcoming the disorganized Chinese outer defenses. Meanwhile, Japanese 143rd Regiment flanked the Chinese defenses and captured the airfield and rail station 6 miles north of the city. 
Taungoo would be surrounded on three sides by the end of the day.| |25 Mar 1942||The main Japanese offensive against Taungoo, Burma began at 0800 hours, striking the northern, western, and southern sides of the city nearly simultaneously. Fierce house-to-house fighting would continue through the night.| |26 Mar 1942||Chinese and Japanese troops continued to engage in house-to-house fighting in Taungoo, Burma, with heavy losses on both sides.| |27 Mar 1942||Japanese aircraft and artillery bombarded Chinese positions at Taungoo, Burma.| |28 Mar 1942||A fresh regiment of the Japanese 56th Division attacked the Chinese-defended city of Taungoo, Burma.| |29 Mar 1942||Japanese troops penetrated the Chinese defenses at Taungoo, Burma and threatened to trap the Chinese 200th Division in the city. General Dai Anlan issued the order to retreat from the city after sundown, falling back northward. During the withdrawal, the Chinese failed to destroy the bridge over the Sittang River. To the west, the Japanese captured a main road near Shwedaung, disrupting the Allied withdrawal; an Anglo-Indian attack from the south failed to break the roadblock.| |30 Mar 1942||Japanese 55th Division attacked Taungoo, Burma at dawn, capturing it without resistance as the Chinese 200th Division had evacuated the city overnight. To the west, British 7th Armoured Brigade broke through the Japanese roadblock at Shwedaung, but had a tank destroyed on the nearby bridge over the Irrawaddy River, blocking traffic. Shortly after, the Japanese-sponsored Burma National Army attacked the British troops while the British attempted to maneuver around the disabled tank, killing 350 while suffering about as many losses.| |2 Apr 1942||Japanese troops drove Indian 17th Division out of Prome, Burma.| |3 Apr 1942||Six B-17 bombers of the US 10th Air Force based in Asansol, India attacked Rangoon, Burma, setting three warehouses on fire; one aircraft was lost in this attack.| |4 Apr 1942||Japanese aircraft bombed areas of Mandalay, Burma, killing more than 2,000, most of whom were civilians.| |5 Apr 1942||Japanese and Chinese troops clashed at Yedashe in central Burma.| |6 Apr 1942||Japanese troops captured Mandalay, Burma. Off Akyab on the western coast of Burma, Japanese aircraft sank Indian sloop HMIS Indus.| |8 Apr 1942||Japanese troops overran Chinese 200th Division and New 22nd Division defensive positions at Yedashe, Burma.| |10 Apr 1942||Japanese and Chinese troops clashed at Szuwa River, Burma.| |11 Apr 1942||In Burma, British troops formed a new defensive line, Minhia-Taungdwingyi-Pyinmana, on the Irrawaddy River. After dark, the Japanese reached this line, launching a first attack on the Indian 48th Brigade at Kokkogwa.| |12 Apr 1942||Japanese attacks on Minhia, Thadodan, and Alebo on the Minhia-Taungdwingyi-Pyinmana defensive line in Burma were stopped by Anglo-Indian troops including the British 2nd Royal Tank Regiment. British tankers reported seeing captured British tanks pressed into Japanese service.| |13 Apr 1942||Japanese troops continued to assault the Minhia-Taungdwingyi-Pyinmana defensive line along the Irrawaddy River in Burma without success.
To the northwest, troops of Japanese 56th Infantry Division captured Mauchi from troops of Chinese 6th Army and the nearby tungsten mines.| |15 Apr 1942||As Japanese troops began to push through the British Minhia-Taungdwingyi-Pyinmana defensive line along the Irrawaddy River in Burma and approached the oil-producing region of Yenangyaung, William Slim gave the order to destroy 1,000,000 gallons of crude oil to prevent Japanese capture while the British 7th Armoured Division pushed through Japanese road blocks to prepare men on the line to fall back.| |16 Apr 1942||Japanese troops decisively defeated the 1st Burma Division near Yenangyaung, Burma.| |17 Apr 1942||William Slim launched a failed counterattack with the Indian 17th Division near Yenangyaung, Burma; he had wanted the counterattack to open up Japanese lines, to meet with troops of the 113th Regiment of Chinese 38th Division fighting to relieve Yenangyaung, and to allow the remnants of the 1st Burma Division to return to the main Allied lines. To the east, Japanese 56th Infantry Division and Chinese troops clashed at Bawlake and Pyinmana, Burma.| |18 Apr 1942||Although the 113th Regiment of the Chinese 38th Division under General Sun Liren and the British 7th Armoured Brigade had reached near Yenangyaung, Burma, they could not prevent the Japanese troops from capturing the city; the final elements of British troops fleeing out of the city destroyed the power station to prevent Japanese use.| |19 Apr 1942||The 113th Regiment of the Chinese 38th Division under General Sun Liren captured Twingon, Burma then repulsed a Japanese counterattack that saw heavy casualties on both sides. To the east, Japanese 55th Infantry Division captured Pyinmana.| |20 Apr 1942||Japanese troops captured Taunggyi, Burma, capital of the southern Shan States, along with its large gasoline store. In central Burma, troops of the Japanese 56th Division pushed Chinese troops out of Loikaw, while troops of the Japanese 18th Division clashed with Chinese troops at Kyidaunggan.| |21 Apr 1942||Japanese 18th Division captured Kyidaunggan, Burma from Chinese troops.| |22 Apr 1942||British troops fell back to Meiktila, Burma while Indian 17th Infantry Division fell back from Taungdwingyi to Mahlaing to protect Mandalay.| |23 Apr 1942||Chinese mercenary troops under Allied command attacked Taunggyi, Burma while Japanese 56th Division captured Loilem.| |24 Apr 1942||Japanese 18th Infantry Division captured Yamethin, Burma.| |25 Apr 1942||Alexander, Slim, and Stilwell met at Kyaukse, Burma, 25 miles south of Mandalay. It was decided that all Allied troops were to be pulled out of Burma, but Slim demanded that no British nor Indian units would be withdrawn to China even if the Chinese border was closer to that of India's. Meanwhile, Japanese and Chinese troops clashed at Loilem, central Burma.| |26 Apr 1942||In Burma, the Indian 17th Division moved from Mahlaing to Meiktila, 20 miles to the south, to assist the Chinese 200th Division in forming a line of defense against the Japanese attack on Mandalay.| |28 Apr 1942||Troops of the Chinese 28th Division arrived at Lashio in northern Burma. To the west, the Indian 17th Division crossed the Irrawaddy River at Sameikkon, Burma on its retreat toward India; Chinese 38th Division and British 7th Armoured Brigade formed a line between Sagaing and Ondaw to guard the retreat.| |29 Apr 1942||Japanese 18th Infantry Division captured Kyaukse, Burma just south of Mandalay. 
To the west, Japanese 33rd Infantry Division pursued the Anglo-Indian withdrawal across the Irrawaddy River toward India. To the north, 100 kilometers south of the border with China, Japanese 56th Infantry Division captured Lashio at midday.| |30 Apr 1942||In western Burma, Chinese 38th Division began to move westward to join the Anglo-Indian troops already en route for India. After the tanks of the British 7th Armoured Brigade had successfully crossed the Ava Bridge over the Irrawaddy River, Chinese troops blew up the bridge to slow the Japanese pursuit.| |1 May 1942||Japanese 18th Infantry Division captured Mandalay, Burma. 300 kilometers to the northeast, Japanese and Chinese troops clashed at Hsenwi. 50 miles west of Mandalay, Japanese troops blocked the British retreat at Monywa on the Chindwin River and then attacked from the rear by surprise, capturing the headquarters of the 1st Burma Division.| |2 May 1942||1st Burma Division unsuccessfully attacked Japanese 33rd Infantry Division at Monywa, Burma on the Chindwin River.| |3 May 1942||Having fought off the attack by the 1st Burma Division at Monywa, Burma, Japanese 33rd Infantry Division went on the offensive, pushing 1st Burma Division back toward Alon.| |4 May 1942||Japanese troops captured Bhamo, Burma. Off the Burmese coast, with increasing malaria cases affecting the garrison's morale, Akyab Island was abandoned.| |8 May 1942||Japanese troops captured Myitkyina, Burma.| |9 May 1942||By this date, most troops of the Burma Corps had withdrawn west of the Chindwin River.| |10 May 1942||The Thai Phayap Army invaded Shan State, Burma. In western Burma, Gurkha units, rearguard to the British general retreat, held off another Japanese assault throughout the afternoon; they also withdrew westwards after sundown.| |12 May 1942||The monsoon began in Burma, slowing the retreat of Allied troops into India, but it also stopped Japanese attempts to attack the retreating columns from the air.| |15 May 1942||The retreating Allied columns reached Assam in northeastern India.| |18 May 1942||Most of the retreating troops of BURCORPS reached India.| |20 May 1942||Japanese troops completed the conquest of Burma. All Allied troops previously under the command of William Slim (who was transferred to Indian XV Corps) were reassigned to the British IV Corps, thus dissolving the Burma Corps.| |23 May 1942||Japanese and Chinese troops clashed along the Hsipaw-Mogok road in northern Burma.| |25 May 1942||Chinese 38th Infantry Division began to cross the border from Burma into India.| |27 May 1942||Thai forces captured Kengtung, Burma.|
http://ww2db.com/battle_spec.php?battle_id=59
Fourth Grade Math Vocabulary VocabularySpellingCity has created these fourth grade math word lists as tools teachers and parents can use to supplement the fourth grade math curriculum with interactive, educational math vocabulary games. Simply choose a list from a particular math area, and then select one of the 25 learning activities available. The material for these lists was specifically designed to be used in a fourth grade math class. The math vocabulary lists are based on the Common Core Fourth Grade Math Standards. VocabularySpellingCity ensures that these academic vocabulary lists are level-appropriate for fourth graders. Teachers can import these lists into their accounts, and edit or add to them to suit their purposes. Common Core State Standards Overview for Fourth Grade Math Click for more information on Math Vocabulary and the Common Core Standards in general. For information pertaining to 4th grade in particular, please refer to the chart above. To preview the definitions, select a list and use the Flash Cards. For help on using the site, watch one of our short videos on how to use the site. Elementary students can not only achieve enrichment in fourth grade math terms through interactive exercises, but they can also acquire necessary understanding of pivotal math concepts while playing educational online math vocabulary games. The themed lists are organized so that students are given challenging 4th grade math vocabulary in such a way that fourth graders can quickly excel in the comprehension of important math concepts. Animated interactive games greatly enhance students' learning of 4th grade math words. Students not only learn elementary math words while having a great time, but also gain confidence in a subject that many consider daunting. Teachers and parents can count on the effective and accurate grouping of these math vocabulary lists and have come to rely on the use of 4th grade math definitions in interactive games to activate students' math comprehension. More than a traditional 4th grade math dictionary, this assortment of targeted lists, combined with exciting and challenging elementary math vocabulary drill and practice games, makes learning math words fun for fourth graders everywhere!
Fourth Grade Math Vocabulary Words at a Glance: Operations & Algebraic Thinking: variable, inequality, equivalent, differences, factor, equation, product, comparison, expression, similarity, inequality, relationship, similarity, comparison, differences, factor, equation, variable, extraneous, equivalent Base Ten Operations Number & Operations in Base Ten: comparison, equation, relationship, equivalent, inequality, factor, rounding, regroup, variable, similarity, size, inverse operation, gram, calculate, compare, composite number, million, decimal number, simplify, relative, addend, product, symmetry, centimeter, fahrenheit, celsius, differences, polyhedron, extraneous, estimation Number & Operations - Fractions: proper fraction, percent, consecutive, common fraction, ordinal number, factor, multiples, improper fraction, mixed number, fraction, compare, dividend, denominator, remainder, divisor, quotient, more than, numerator, less than, equivalent Measurement & Data Units & Coordinates: y-axis, line graph, customary units, non-standard units, x-axis, coordinates, coordinate, system, data, unit conversion, unit Length: meter, length, width, kilometer, measurement, inch, yard, centimeter, metric, foot Problem Solving: probability, predict, array, survey, chance, likely, unlikely, certainty, data collection, tendency Quantity/Size: volume, liter, ounce, pint, kilogram, weight, mass, quart, gallon, balance Time/Temperature: Celsius, Fahrenheit, measurement, minute, second, event, degree, time , temperature, hour Interpretation: mean, median, mode, range, likelihood, ordered pairs, statistics, interpret, graph, data Presentation: tree diagram, pie chart, diagram, data, circle graph, Venn diagram, tally, bar graph, frequency table, measure Angles: congruent, acute angle, obtuse angle, rotate, straight angle, degrees, angle, right angle, triangle, perpendicular Classification: similarity, translation, congruent, reflection, rectangular, symmetry, closed figure, open figure, rotation, transformation Lines: intersection, perpendicular, length, line segment, circumference, point, distance, grid, side, line of symmetry Measurement: square unit, area, capacity, degrees, distance, grid, radii, height, diameter, length Polygons: polygon, pentagon, quadrilateral, hexagon, rhombus, pentagon, parallelogram, plane figure, octagon, polyhedron Prisms: prism, base, face, solid, sphere, horizontal, parallel lines, cube, cylinder, cone For a complete online Math curriculum in Kindergarten Math, First Grade Math, Second Grade Math, Third Grade Math, Fourth Grade Math, Fifth Grade Math, Sixth Grade Math, Seventh Grade Math, or Eighth Grade Math visit Time4Learning.com. Here are some fun Math Games from LearningGamesForKids by grade level: Kindergarten Math Games, First Grade Math Games, Second Grade Math Games, Third Grade Math Games, Fourth Grade Math Games, Fifth Grade Math Games, Addition Math Games, Subtraction Math Games, Multiplication Math Games, or Division Math Games.
http://www.spellingcity.com/fourth-grade-math-vocabulary.html
Section 17: Geography It is very common to have data in which the coordinates are "geographics" or "latitude/longitude". Unlike coordinates in Mercator, UTM, or Stateplane, geographic coordinates are not cartesian coordinates. Geographic coordinates do not represent a linear distance from an origin as plotted on a plane. Rather, these spherical coordinates describe angular coordinates on a globe. In spherical coordinates a point is specified by the angle of rotation from a reference meridian (longitude), and the angle from the equator (latitude). You can treat geographic coordinates as approximate cartesian coordinates and continue to do spatial calculations. However, measurements of distance, length and area will be nonsensical. Since spherical coordinates measure angular distance, the units are in "degrees." Further, the approximate results from indexes and true/false tests like intersects and contains can become terribly wrong. The distance between points gets larger as problem areas like the poles or the international dateline are approached. For example, here are the coordinates of Los Angeles and Paris.

- Los Angeles (LAX): POINT(-118.4079 33.9434)
- Paris (CDG): POINT(2.5559 49.0083)

The following calculates the distance between Los Angeles and Paris using the standard PostGIS cartesian ST_Distance(geometry, geometry). Note that the SRID of 4326 declares a geographic spatial reference system.

SELECT ST_Distance(
  ST_GeometryFromText('POINT(-118.4079 33.9434)', 4326), -- Los Angeles (LAX)
  ST_GeometryFromText('POINT(2.5559 49.0083)', 4326)     -- Paris (CDG)
);

Aha! 121! But, what does that mean? The units for spatial reference 4326 are degrees. So our answer is 121 degrees. But (again), what does that mean? On a sphere, the size of one "degree square" is quite variable, becoming smaller as you move away from the equator. Think of the meridians (vertical lines) on the globe getting closer to each other as you go towards the poles. So, a distance of 121 degrees doesn't mean anything. It is a nonsense number. In order to calculate a meaningful distance, we must treat geographic coordinates not as approximate cartesian coordinates but rather as true spherical coordinates. We must measure the distances between points as true paths over a sphere – a portion of a great circle. Starting with version 1.5, PostGIS provides this functionality through the geography type. Different spatial databases have different approaches for "handling geographics".

- Oracle attempts to paper over the differences by transparently doing geographic calculations when the SRID is geographic.
- SQL Server uses two spatial types, "STGeometry" for cartesian data and "STGeography" for geographics.
- Informix Spatial is a pure cartesian extension to Informix, while Informix Geodetic is a pure geographic extension.
- Similar to SQL Server, PostGIS uses two types, "geometry" and "geography".

Using the geography instead of geometry type, let's try again to measure the distance between Los Angeles and Paris. Instead of ST_GeometryFromText(text), we will use ST_GeographyFromText(text).

SELECT ST_Distance(
  ST_GeographyFromText('POINT(-118.4079 33.9434)'), -- Los Angeles (LAX)
  ST_GeographyFromText('POINT(2.5559 49.0083)')     -- Paris (CDG)
);

A big number! All return values from geography calculations are in meters, so our answer is 9124 km. Older versions of PostGIS supported very basic calculations over the sphere using the ST_Distance_Spheroid(point, point, measurement) function. However, ST_Distance_Spheroid is substantially limited.
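As a rough, non-authoritative cross-check of that 9124 km figure, here is a small Python sketch of the haversine great-circle formula on a sphere of radius 6371 km (my addition; PostGIS itself defaults to a spheroid, so its result differs slightly):

# Haversine distance between LAX and CDG on a spherical Earth.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lon1, lat1, lon2, lat2):
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

print(haversine_km(-118.4079, 33.9434, 2.5559, 49.0083))  # roughly 9100 km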
ST_Distance_Spheroid only works on points and provides no support for indexing across the poles or international dateline. The need to support non-point geometries becomes very clear when posing a question like "How close will a flight from Los Angeles to Paris come to Iceland?" Working with geographic coordinates on a cartesian plane (the purple line) yields a very wrong answer indeed! Using great circle routes (the red lines) gives the right answer. If we convert our LAX-CDG flight into a line string and calculate the distance to a point in Iceland using geography we'll get the right answer (recall, in meters).

SELECT ST_Distance(
  ST_GeographyFromText('LINESTRING(-118.4079 33.9434, 2.5559 49.0083)'), -- LAX-CDG
  ST_GeographyFromText('POINT(-21.8628 64.1286)')                        -- Iceland
);

So the closest approach to Iceland on the LAX-CDG route is a relatively small 532 km. The cartesian approach to handling geographic coordinates breaks down entirely for features that cross the international dateline. The shortest great-circle route from Los Angeles to Tokyo crosses the Pacific Ocean. The shortest cartesian route crosses the Atlantic and Indian Oceans.

SELECT ST_Distance(
    ST_GeometryFromText('Point(-118.4079 33.9434)'),  -- LAX
    ST_GeometryFromText('Point(139.733 35.567)'))     -- NRT (Tokyo/Narita)
      AS geometry_distance,
  ST_Distance(
    ST_GeographyFromText('Point(-118.4079 33.9434)'), -- LAX
    ST_GeographyFromText('Point(139.733 35.567)'))    -- NRT (Tokyo/Narita)
      AS geography_distance;

 geometry_distance | geography_distance
-------------------+--------------------
  258.146005837336 |   8833954.76996256

In order to load geometry data into a geography table, the geometry first needs to be projected into EPSG:4326 (longitude/latitude), then it needs to be changed into geography. The ST_Transform(geometry, srid) function converts coordinates to geographics and the Geography(geometry) function "casts" them from geometry to geography.

CREATE TABLE nyc_subway_stations_geog AS
SELECT
  Geography(ST_Transform(geom, 4326)) AS geog,
  name,
  routes
FROM nyc_subway_stations;

Building a spatial index on a geography table is exactly the same as for geometry:

CREATE INDEX nyc_subway_stations_geog_gix
  ON nyc_subway_stations_geog USING GIST (geog);

The difference is under the covers: the geography index will correctly handle queries that cover the poles or the international date-line, while the geometry one will not. There are only a small number of native functions for the geography type:

- ST_AsText(geography) returns text
- ST_GeographyFromText(text) returns geography
- ST_AsBinary(geography) returns bytea
- ST_GeogFromWKB(bytea) returns geography
- ST_AsSVG(geography) returns text
- ST_AsGML(geography) returns text
- ST_AsKML(geography) returns text
- ST_AsGeoJson(geography) returns text
- ST_Distance(geography, geography) returns double
- ST_DWithin(geography, geography, float8) returns boolean
- ST_Area(geography) returns double
- ST_Length(geography) returns double
- ST_Covers(geography, geography) returns boolean
- ST_CoveredBy(geography, geography) returns boolean
- ST_Intersects(geography, geography) returns boolean
- ST_Buffer(geography, float8) returns geography
- ST_Intersection(geography, geography) returns geography

Creating a Geography Table The SQL for creating a new table with a geography column is much like that for creating a geometry table. However, geography includes the ability to specify the object type directly at the time of table creation.
For example:

CREATE TABLE airports (
  code VARCHAR(3),
  geog GEOGRAPHY(Point)
);

INSERT INTO airports VALUES ('LAX', 'POINT(-118.4079 33.9434)');
INSERT INTO airports VALUES ('CDG', 'POINT(2.5559 49.0083)');
INSERT INTO airports VALUES ('REK', 'POINT(-21.8628 64.1286)');

In the table definition, the GEOGRAPHY(Point) specifies our airport data type as points. The new geography fields don't get registered in the geometry_columns view. Instead, they are registered in a view called geography_columns.

SELECT * FROM geography_columns;

         f_table_name          | f_geography_column | srid |   type
-------------------------------+--------------------+------+----------
 nyc_subway_stations_geography | geog               |    0 | Geometry
 airports                      | geog               | 4326 | Point

Casting to Geometry While the basic functions for geography types can handle many use cases, there are times when you might need access to other functions only supported by the geometry type. Fortunately, you can convert objects back and forth from geography to geometry. The PostgreSQL syntax convention for casting is to append ::typename to the end of the value you wish to cast. So, 2::text will convert a numeric two to a text string '2'. And 'POINT(0 0)'::geometry will convert the text representation of a point into a geometry point. The ST_X(point) function only supports the geometry type. How can we read the X coordinate from our geographies?

SELECT code, ST_X(geog::geometry) AS longitude FROM airports;

 code | longitude
------+-----------
 LAX  | -118.4079
 CDG  |    2.5559
 REK  |  -21.8628

By appending ::geometry to our geography value, we convert the object to a geometry with an SRID of 4326. From there we can use as many geometry functions as strike our fancy. But, remember – now that our object is a geometry, the coordinates will be interpreted as cartesian coordinates, not spherical ones. Why (Not) Use Geography Geographics are universally accepted coordinates – everyone understands what latitude/longitude mean, but very few people understand what UTM coordinates mean. Why not use geography all the time?

- First, as noted earlier, there are far fewer functions available (right now) that directly support the geography type. You may spend a lot of time working around geography type limitations.
- Second, the calculations on a sphere are computationally far more expensive than cartesian calculations. For example, the cartesian formula for distance (Pythagoras) involves one call to sqrt(). The spherical formula for distance (Haversine) involves two sqrt() calls, an arctan() call, four sin() calls and two cos() calls. Trigonometric functions are very costly, and spherical calculations involve a lot of them.

If your data is geographically compact (contained within a state, county or city), use the geometry type with a cartesian projection that makes sense with your data. See the http://spatialreference.org site and type in the name of your region for a selection of possible reference systems. If, on the other hand, you need to measure distance with a dataset that is geographically dispersed (covering much of the world), use the geography type. The application complexity you save by working in geography will offset any performance issues. And, casting to geometry can offset most functionality limitations.

ST_Distance(geometry, geometry): For the geometry type, returns the 2-dimensional cartesian minimum distance (based on spatial ref) between two geometries in projected units. For the geography type, defaults to returning the spheroidal minimum distance between two geographies in meters.
ST_GeographyFromText(text): Returns a specified geography value from a Well-Known Text (WKT) or extended WKT representation.

ST_Transform(geometry, srid): Returns a new geometry with its coordinates transformed to the SRID referenced by the integer parameter.

ST_X(point): Returns the X coordinate of the point, or NULL if not available. Input must be a point.

The buffer and intersection functions are actually wrappers on top of a cast to geometry, and are not carried out natively in spherical coordinates. As a result, they may fail to return correct results for objects with very large extents that cannot be cleanly converted to a planar representation. For example, the ST_Buffer(geography,distance) function transforms the geography object into a "best" projection, buffers it, and then transforms it back to geographics. If there is no "best" projection (the object is too large), the operation can fail or return a malformed buffer.
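As a closing illustration of the geography workflow above, here is a minimal sketch of a radius query using ST_DWithin, which appears in the native function list and whose distances are in meters. It assumes the nyc_subway_stations_geog table created earlier; the query point coordinates are made up for illustration.

-- Stations within 500 meters of an illustrative point in Manhattan.
-- The GIST index on geog can be used here, and the search works correctly
-- even for data near the poles or the international date line.
SELECT name
FROM nyc_subway_stations_geog
WHERE ST_DWithin(
  geog,
  ST_GeographyFromText('POINT(-73.9857 40.7484)'),
  500
);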
http://workshops.opengeo.org/postgis-intro/geography.html
Non-Programmer's Tutorial for Python 3/Defining Functions

To start off this chapter I am going to give you an example of what you could do but shouldn't (so don't type it in):

a = 23
b = -23

if a < 0:
    a = -a

if b < 0:
    b = -b

if a == b:
    print("The absolute values of", a, "and", b, "are equal")
else:
    print("The absolute values of", a, "and", b, "are different")

with the output being:

The absolute values of 23 and 23 are equal

The program seems a little repetitive. Programmers hate to repeat things -- that's what computers are for, after all! (Note also that finding the absolute value changed the value of the variable, which is why it is printing out 23, and not -23, in the output.) Fortunately Python allows you to create functions to remove duplication. Here is the rewritten example:

a = 23
b = -23

def absolute_value(n):
    if n < 0:
        n = -n
    return n

if absolute_value(a) == absolute_value(b):
    print("The absolute values of", a, "and", b, "are equal")
else:
    print("The absolute values of", a, "and", b, "are different")

with the output being:

The absolute values of 23 and -23 are equal

The key feature of this program is the def statement: def (short for define) starts a function definition. def is followed by the name of the function, absolute_value. Next comes a '(' followed by the parameter n (n is passed from the program into the function when the function is called), then a ')'. The statements after the ':' are executed when the function is used. The statements continue until either the indented statements end or a return is encountered. The return statement returns a value back to the place where the function was called. We already have encountered a function in our very first program, the print function.

Notice how the values of a and b are not changed. Functions can be used to repeat tasks that don't return values. Here are some examples:

def hello():
    print("Hello")

def area(width, height):
    return width * height

def print_welcome(name):
    print("Welcome", name)

hello()
hello()

print_welcome("Fred")
w = 4
h = 5
print("width =", w, "height =", h, "area =", area(w, h))

with output being:

Hello
Hello
Welcome Fred
width = 4 height = 5 area = 20

That example shows some more stuff that you can do with functions. Notice that you can use no arguments or two or more. Notice also when a function doesn't need to send back a value, a return is optional.

Variables in functions

When eliminating repeated code, you often have variables in the repeated code. In Python, these are dealt with in a special way. So far all variables we have seen are global variables. Functions have a special type of variable called local variables. These variables only exist while the function is running. When a local variable has the same name as another variable (such as a global variable), the local variable hides the other. Sound confusing? Well, these next examples (which are a bit contrived) should help clear things up.

a = 4

def print_func():
    a = 17
    print("in print_func a = ", a)

print_func()
print("a = ", a)

When run, we will receive an output of:

in print_func a = 17
a = 4

Variable assignments inside a function do not override global variables; they exist only inside the function. Even though a was assigned a new value inside the function, this newly assigned value was only relevant to print_func. When the function finishes running and a's value is printed again, we see the originally assigned value.

Here is another more complex example.
a_var = 10
b_var = 15
e_var = 25

def a_func(a_var):
    print("in a_func a_var = ", a_var)
    b_var = 100 + a_var
    d_var = 2 * a_var
    print("in a_func b_var = ", b_var)
    print("in a_func d_var = ", d_var)
    print("in a_func e_var = ", e_var)
    return b_var + 10

c_var = a_func(b_var)

print("a_var = ", a_var)
print("b_var = ", b_var)
print("c_var = ", c_var)
print("d_var = ", d_var)

The output is:

in a_func a_var = 15
in a_func b_var = 115
in a_func d_var = 30
in a_func e_var = 25
a_var = 10
b_var = 15
c_var = 125
Traceback (most recent call last):
  File "C:\def2.py", line 19, in <module>
    print("d_var = ", d_var)
NameError: name 'd_var' is not defined

In this example the variables a_var, b_var, and d_var are all local variables when they are inside the function a_func. After the statement return b_var + 10 is run, they all cease to exist. The variable a_var is automatically a local variable since it is a parameter name. The variables b_var and d_var are local variables since they appear on the left of an equals sign in the function, in the statements b_var = 100 + a_var and d_var = 2 * a_var.

Inside of the function a_var has no value assigned to it. When the function is called with c_var = a_func(b_var), 15 is assigned to a_var since at that point in time b_var is 15, making the call to the function a_func(15). This ends up setting a_var to 15 when it is inside of a_func.

As you can see, once the function finishes running, the local variables a_var and b_var that had hidden the global variables of the same name are gone. Then the statement print("a_var = ", a_var) prints the value 10 rather than the value 15, since the local variable that hid the global variable is gone.

Another thing to notice is the NameError that happens at the end. This appears since the variable d_var no longer exists once a_func finished. All the local variables are deleted when the function exits. If you want to get something from a function, then you will have to use return.

One last thing to notice is that the value of e_var remains unchanged inside a_func since it is not a parameter and it never appears on the left of an equals sign inside of the function a_func. When a global variable is accessed inside a function it is the global variable from the outside.

Functions allow local variables that exist only inside the function and can hide other variables that are outside the function.

#! /usr/bin/python
#-*-coding: utf-8 -*-
# converts temperature to Fahrenheit or Celsius

def print_options():
    print("Options:")
    print(" 'p' print options")
    print(" 'c' convert from Celsius")
    print(" 'f' convert from Fahrenheit")
    print(" 'q' quit the program")

def celsius_to_fahrenheit(c_temp):
    return 9.0 / 5.0 * c_temp + 32

def fahrenheit_to_celsius(f_temp):
    return (f_temp - 32.0) * 5.0 / 9.0

choice = "p"
while choice != "q":
    if choice == "c":
        c_temp = float(input("Celsius temperature: "))
        print("Fahrenheit:", celsius_to_fahrenheit(c_temp))
        choice = input("option: ")
    elif choice == "f":
        f_temp = float(input("Fahrenheit temperature: "))
        print("Celsius:", fahrenheit_to_celsius(f_temp))
        choice = input("option: ")
    elif choice == "p":   # alternatively use else, so the options are printed after any unexpected input
        print_options()
        choice = input("option: ")

Options:
 'p' print options
 'c' convert from Celsius
 'f' convert from Fahrenheit
 'q' quit the program
option: c
Celsius temperature: 30
Fahrenheit: 86.0
option: f
Fahrenheit temperature: 60
Celsius: 15.5555555556
option: q
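To reinforce the point about local variables and return, here is a small sketch (not part of the original chapter's examples): if a function needs to hand a result back, return it and assign it at the call site rather than expecting a local name to survive after the function exits.

def double(number):
    result = 2 * number   # result is local to double
    return result         # hand the value back to the caller

value = double(5)
print("value =", value)   # prints: value = 10
# print(result)           # would raise NameError: result only existed inside double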
#! /usr/bin/python
#-*-coding: utf-8 -*-
# calculates a given rectangle area

def hello():
    print('Hello!')

def area(width, height):
    return width * height

def print_welcome(name):
    print('Welcome,', name)

def positive_input(prompt):
    number = float(input(prompt))
    while number <= 0:
        print('Must be a positive number')
        number = float(input(prompt))
    return number

name = input('Your Name: ')
hello()
print_welcome(name)
print()

print('To find the area of a rectangle,')
print('enter the width and height below.')
print()

w = positive_input('Width: ')
h = positive_input('Height: ')

print('Width =', w, 'Height =', h, 'so Area =', area(w, h))

Your Name: Josh
Hello!
Welcome, Josh

To find the area of a rectangle,
enter the width and height below.

Width: -4
Must be a positive number
Width: 4
Height: 3
Width = 4 Height = 3 so Area = 12

Rewrite the area2.py program from the Examples above to have a separate function for the area of a square, the area of a rectangle, and the area of a circle (3.14 * radius**2). This program should include a menu interface.

def square(L):
    return L * L

def rectangle(width, height):
    return width * height

def circle(radius):
    return 3.14159 * radius ** 2

def options():
    print()
    print("Options:")
    print("s = calculate the area of a square.")
    print("c = calculate the area of a circle.")
    print("r = calculate the area of a rectangle.")
    print("q = quit")
    print()

print("This program will calculate the area of a square, circle or rectangle.")
choice = "x"
options()
while choice != "q":
    choice = input("Please enter your choice: ")
    if choice == "s":
        L = float(input("Length of square side: "))
        print("The area of this square is", square(L))
        options()
    elif choice == "c":
        radius = float(input("Radius of the circle: "))
        print("The area of the circle is", circle(radius))
        options()
    elif choice == "r":
        width = float(input("Width of the rectangle: "))
        height = float(input("Height of the rectangle: "))
        print("The area of the rectangle is", rectangle(width, height))
        options()
    elif choice == "q":
        print(" ", end="")
    else:
        print("Unrecognized option.")
        options()
http://en.m.wikibooks.org/wiki/Non-Programmer's_Tutorial_for_Python_3/Defining_Functions
Analysis of variance Analysis of variance (ANOVA) is a collection of statistical models used to analyze the differences between group means and their associated procedures (such as "variation" among and between groups). In ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes t-test to more than two groups. Doing multiple two-sample t-tests would result in an increased chance of committing a type I error. For this reason, ANOVAs are useful in comparing (testing) three or more means (groups or variables) for statistical significance. Background and terminology ANOVA is a particular form of statistical hypothesis testing heavily used in the analysis of experimental data. A statistical hypothesis test is a method of making decisions using data. A test result (calculated from the null hypothesis and the sample) is called statistically significant if it is deemed unlikely to have occurred by chance, assuming the truth of the null hypothesis. A statistically significant result (when a probability (p-value) is less than a threshold (significance level)) justifies the rejection of the null hypothesis. In the typical application of ANOVA, the null hypothesis is that all groups are simply random samples of the same population. This implies that all treatments have the same effect (perhaps none). Rejecting the null hypothesis implies that different treatments result in altered effects. By construction, hypothesis testing limits the rate of Type I errors (false positives leading to false scientific claims) to a significance level. Experimenters also wish to limit Type II errors (false negatives resulting in missed scientific discoveries). The Type II error rate is a function of several things including sample size (positively correlated with experiment cost), significance level (when the standard of proof is high, the chances of overlooking a discovery are also high) and effect size (when the effect is obvious to the casual observer, Type II error rates are low). The terminology of ANOVA is largely from the statistical design of experiments. The experimenter adjusts factors and measures responses in an attempt to determine an effect. Factors are assigned to experimental units by a combination of randomization and blocking to ensure the validity of the results. Blinding keeps the weighing impartial. Responses show a variability that is partially the result of the effect and is partially random error. ANOVA is the synthesis of several ideas and it is used for multiple purposes. As a consequence, it is difficult to define concisely or precisely. "Classical ANOVA for balanced data does three things at once: - As exploratory data analysis, an ANOVA is an organization of an additive data decomposition, and its sums of squares indicate the variance of each component of the decomposition (or, equivalently, each set of terms of a linear model). - Comparisons of mean squares, along with F-tests ... allow testing of a nested sequence of models. - Closely related to the ANOVA is a linear model fit with coefficient estimates and standard errors." In short, ANOVA is a statistical tool used in several ways to develop and confirm an explanation for the observed data. - It is computationally elegant and relatively robust against violations to its assumptions. 
- ANOVA provides industrial strength (multiple sample comparison) statistical analysis. - It has been adapted to the analysis of a variety of experimental designs. As a result: ANOVA "has long enjoyed the status of being the most used (some would say abused) statistical technique in psychological research." ANOVA "is probably the most useful technique in the field of statistical inference." ANOVA is difficult to teach, particularly for complex experiments, with split-plot designs being notorious. In some cases the proper application of the method is best determined by problem pattern recognition followed by the consultation of a classic authoritative test. (Condensed from the NIST Engineering Statistics handbook: Section 5.7. A Glossary of DOE Terminology.) - Balanced design - An experimental design where all cells (i.e. treatment combinations) have the same number of observations. - A schedule for conducting treatment combinations in an experimental study such that any effects on the experimental results due to a known change in raw materials, operators, machines, etc., become concentrated in the levels of the blocking variable. The reason for blocking is to isolate a systematic effect and prevent it from obscuring the main effects. Blocking is achieved by restricting randomization. - A set of experimental runs which allows the fit of a particular model and the estimate of effects. - Design of experiments. An approach to problem solving involving collection of data that will support valid, defensible, and supportable conclusions. - How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect. - Unexplained variation in a collection of observations. DOE's typically require understanding of both random error and lack of fit error. - Experimental unit - The entity to which a specific treatment combination is applied. - Process inputs an investigator manipulates to cause a change in the output. - Lack-of-fit error - Error that occurs when the analysis omits one or more important terms or factors from the process model. Including replication in a DOE allows separation of experimental error into its components: lack of fit and random (pure) error. - Mathematical relationship which relates changes in a given response to changes in one or more factors. - Random error - Error that occurs due to natural variation in the process. Random error is typically assumed to be normally distributed with zero mean and a constant variance. Random error is also called experimental error. - A schedule for allocating treatment material and for conducting treatment combinations in a DOE such that the conditions in one run neither depend on the conditions of the previous run nor predict the conditions in the subsequent runs.[nb 1] - Performing the same treatment combination more than once. Including replication allows an estimate of the random error independent of any lack of fit error. - The output(s) of a process. Sometimes called dependent variable(s). - A treatment is a specific combination of factor levels whose effect is to be compared with other treatments. Classes of models There are three classes of models used in the analysis of variance, and these are outlined here. The fixed-effects model of analysis of variance applies to situations in which the experimenter applies one or more treatments to the subjects of the experiment to see if the response variable values change. 
This allows the experimenter to estimate the ranges of response variable values that the treatment would generate in the population as a whole. Random-effects models are used when the treatments are not fixed. This occurs when the various factor levels are sampled from a larger population. Because the levels themselves are random variables, some assumptions and the method of contrasting the treatments (a multi-variable generalization of simple differences) differ from the fixed-effects model. A mixed-effects model contains experimental factors of both fixed and random-effects types, with appropriately different interpretations and analysis for the two types. Example: Teaching experiments could be performed by a university department to find a good introductory textbook, with each text considered a treatment. The fixed-effects model would compare a list of candidate texts. The random-effects model would determine whether important differences exist among a list of randomly selected texts. The mixed-effects model would compare the (fixed) incumbent texts to randomly selected alternatives. Defining fixed and random effects has proven elusive, with competing definitions arguably leading toward a linguistic quagmire.

Assumptions of ANOVA

The analysis of variance has been studied from several approaches, the most common of which uses a linear model that relates the response to the treatments and blocks. Even when the statistical model is nonlinear, it can be approximated by a linear model for which an analysis of variance may be appropriate.

Textbook analysis using a normal distribution

- Independence of observations – this is an assumption of the model that simplifies the statistical analysis.
- Normality – the distributions of the residuals are normal.
- Equality (or "homogeneity") of variances, called homoscedasticity — the variance of data in groups should be the same.

The separate assumptions of the textbook model imply that the errors are independently, identically, and normally distributed for fixed-effects models, that is, that the errors (ε's) are independent and ε ~ N(0, σ²).

In a randomized controlled experiment, the treatments are randomly assigned to experimental units, following the experimental protocol. This randomization is objective and declared before the experiment is carried out. The objective random assignment is used to test the significance of the null hypothesis, following the ideas of C. S. Peirce and Ronald A. Fisher. This design-based analysis was discussed and developed by Francis J. Anscombe at Rothamsted Experimental Station and by Oscar Kempthorne at Iowa State University. Kempthorne and his students make an assumption of unit treatment additivity, which is discussed in the books of Kempthorne and David R. Cox. In its simplest form, the assumption of unit-treatment additivity[nb 2] states that the observed response y(i,j) from experimental unit i when receiving treatment j can be written as the sum of the unit's response y(i) and the treatment effect t(j), that is

y(i,j) = y(i) + t(j).

The assumption of unit-treatment additivity implies that, for every treatment j, the jth treatment has exactly the same effect t(j) on every experimental unit. The assumption of unit treatment additivity usually cannot be directly falsified, according to Cox and Kempthorne. However, many consequences of treatment-unit additivity can be falsified. For a randomized experiment, the assumption of unit-treatment additivity implies that the variance is constant for all treatments.
Therefore, by contraposition, a necessary condition for unit-treatment additivity is that the variance is constant. The use of unit treatment additivity and randomization is similar to the design-based inference that is standard in finite-population survey sampling. Derived linear model Kempthorne uses the randomization-distribution and the assumption of unit treatment additivity to produce a derived linear model, very similar to the textbook model discussed previously. The test statistics of this derived linear model are closely approximated by the test statistics of an appropriate normal linear model, according to approximation theorems and simulation studies. However, there are differences. For example, the randomization-based analysis results in a small but (strictly) negative correlation between the observations. In the randomization-based analysis, there is no assumption of a normal distribution and certainly no assumption of independence. On the contrary, the observations are dependent! The randomization-based analysis has the disadvantage that its exposition involves tedious algebra and extensive time. Since the randomization-based analysis is complicated and is closely approximated by the approach using a normal linear model, most teachers emphasize the normal linear model approach. Few statisticians object to model-based analysis of balanced randomized experiments. Statistical models for observational data However, when applied to data from non-randomized experiments or observational studies, model-based analysis lacks the warrant of randomization. For observational data, the derivation of confidence intervals must use subjective models, as emphasized by Ronald A. Fisher and his followers. In practice, the estimates of treatment-effects from observational studies generally are often inconsistent. In practice, "statistical models" and observational data are useful for suggesting hypotheses that should be treated very cautiously by the public. Summary of assumptions The normal-model based ANOVA analysis assumes the independence, normality and homogeneity of the variances of the residuals. The randomization-based analysis assumes only the homogeneity of the variances of the residuals (as a consequence of unit-treatment additivity) and uses the randomization procedure of the experiment. Both these analyses require homoscedasticity, as an assumption for the normal-model analysis and as a consequence of randomization and additivity for the randomization-based analysis. However, studies of processes that change variances rather than means (called dispersion effects) have been successfully conducted using ANOVA. There are no necessary assumptions for ANOVA in its full generality, but the F-test used for ANOVA hypothesis testing has assumptions and practical limitations which are of continuing interest. Problems which do not satisfy the assumptions of ANOVA can often be transformed to satisfy the assumptions. The property of unit-treatment additivity is not invariant under a "change of scale", so statisticians often use transformations to achieve unit-treatment additivity. If the response variable is expected to follow a parametric family of probability distributions, then the statistician may specify (in the protocol for the experiment or observational study) that the responses be transformed to stabilize the variance. Also, a statistician may specify that logarithmic transforms be applied to the responses, which are believed to follow a multiplicative model. 
According to Cauchy's functional equation theorem, the logarithm is the only continuous transformation that transforms real multiplication to addition. Characteristics of ANOVA ANOVA is used in the analysis of comparative experiments, those in which only the difference in outcomes is of interest. The statistical significance of the experiment is determined by a ratio of two variances. This ratio is independent of several possible alterations to the experimental observations: Adding a constant to all observations does not alter significance. Multiplying all observations by a constant does not alter significance. So ANOVA statistical significance results are independent of constant bias and scaling errors as well as the units used in expressing observations. In the era of mechanical calculation it was common to subtract a constant from all observations (when equivalent to dropping leading digits) to simplify data entry. This is an example of data coding. Logic of ANOVA The calculations of ANOVA can be characterized as computing a number of means and variances, dividing two variances and comparing the ratio to a handbook value to determine statistical significance. Calculating a treatment effect is then trivial, "the effect of any treatment is estimated by taking the difference between the mean of the observations which receive the treatment and the general mean." Partitioning of the sum of squares ANOVA uses traditional standardized terminology. The definitional equation of sample variance is , where the divisor is called the degrees of freedom (DF), the summation is called the sum of squares (SS), the result is called the mean square (MS) and the squared terms are deviations from the sample mean. ANOVA estimates 3 sample variances: a total variance based on all the observation deviations from the grand mean, an error variance based on all the observation deviations from their appropriate treatment means and a treatment variance. The treatment variance is based on the deviations of treatment means from the grand mean, the result being multiplied by the number of observations in each treatment to account for the difference between the variance of observations and the variance of means. If the null hypothesis is true, all three variance estimates are equal (within sampling error). The fundamental technique is a partitioning of the total sum of squares SS into components related to the effects used in the model. For example, the model for a simplified ANOVA with one type of treatment at different levels. The number of degrees of freedom DF can be partitioned in a similar way: one of these components (that for error) specifies a chi-squared distribution which describes the associated sum of squares, while the same is true for "treatments" if there is no treatment effect. See also Lack-of-fit sum of squares. The F-test is used for comparing the factors of the total deviation. For example, in one-way, or single-factor ANOVA, statistical significance is tested for by comparing the F test statistic where MS is mean square, = number of treatments and = total number of cases to the F-distribution with , degrees of freedom. Using the F-distribution is a natural candidate because the test statistic is the ratio of two scaled sums of squares each of which follows a scaled chi-squared distribution. The expected value of F is (where n is the treatment sample size) which is 1 for no treatment effect. As values of F increase above 1 the evidence is increasingly inconsistent with the null hypothesis. 
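As an illustration of this one-way F computation, the following R sketch (with made-up data, not part of the article) uses aov() to do the partitioning and summary() to report the F statistic and its p-value:

# Hypothetical data: one factor with three levels, five observations each
y <- c(18, 20, 19, 22, 21,  23, 25, 24, 26, 27,  30, 29, 31, 28, 32)
treatment <- factor(rep(c("A", "B", "C"), each = 5))
fit <- aov(y ~ treatment)
summary(fit)   # ANOVA table: Df, Sum Sq, Mean Sq, F value, Pr(>F)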
Two apparent experimental methods of increasing F are increasing the sample size and reducing the error variance by tight experimental controls. The textbook method of concluding the hypothesis test is to compare the observed value of F with the critical value of F determined from tables. The critical value of F is a function of the numerator degrees of freedom, the denominator degrees of freedom and the significance level (α). If F ≥ FCritical (Numerator DF, Denominator DF, α) then reject the null hypothesis. The computer method calculates the probability (p-value) of a value of F greater than or equal to the observed value. The null hypothesis is rejected if this probability is less than or equal to the significance level (α). The two methods produce the same result. The ANOVA F-test is known to be nearly optimal in the sense of minimizing false negative errors for a fixed rate of false positive errors (maximizing power for a fixed significance level). To test the hypothesis that all treatments have exactly the same effect, the F-test's p-values closely approximate the permutation test's p-values: The approximation is particularly close when the design is balanced. Such permutation tests characterize tests with maximum power against all alternative hypotheses, as observed by Rosenbaum.[nb 3] The ANOVA F–test (of the null-hypothesis that all treatments have exactly the same effect) is recommended as a practical test, because of its robustness against many alternative distributions.[nb 4] ANOVA consists of separable parts; partitioning sources of variance and hypothesis testing can be used individually. ANOVA is used to support other statistical tools. Regression is first used to fit more complex models to data, then ANOVA is used to compare models with the objective of selecting simple(r) models that adequately describe the data. "Such models could be fit without any reference to ANOVA, but ANOVA tools could then be used to make some sense of the fitted models, and to test hypotheses about batches of coefficients." "[W]e think of the analysis of variance as a way of understanding and structuring multilevel models—not as an alternative to regression but as a tool for summarizing complex high-dimensional inferences ..." ANOVA for a single factor The simplest experiment suitable for ANOVA analysis is the completely randomized experiment with a single factor. More complex experiments with a single factor involve constraints on randomization and include completely randomized blocks and Latin squares (and variants: Graeco-Latin squares, etc.). The more complex experiments share many of the complexities of multiple factors. A relatively complete discussion of the analysis (models, data summaries, ANOVA table) of the completely randomized experiment is available. ANOVA for multiple factors ANOVA generalizes to the study of the effects of multiple factors. When the experiment includes observations at all combinations of levels of each factor, it is termed factorial. Factorial experiments are more efficient than a series of single factor experiments and the efficiency grows as the number of factors increases. Consequently, factorial designs are heavily used. The use of ANOVA to study the effects of multiple factors has a complication. In a 3-way ANOVA with factors x, y and z, the ANOVA model includes terms for the main effects (x, y, z) and terms for interactions (xy, xz, yz, xyz). All terms require hypothesis tests. 
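A two-factor model with the main effects and interaction terms just described can be written in R as follows; this is a sketch assuming a hypothetical data frame crops with a numeric response yield and two factors fert and water:

# '*' expands to fert + water + fert:water, so each main effect and the
# interaction gets its own row (and F test) in the ANOVA table
fit2 <- aov(yield ~ fert * water, data = crops)
summary(fit2)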
The proliferation of interaction terms increases the risk that some hypothesis test will produce a false positive by chance. Fortunately, experience says that high order interactions are rare. The ability to detect interactions is a major advantage of multiple factor ANOVA. Testing one factor at a time hides interactions, but produces apparently inconsistent experimental results. Caution is advised when encountering interactions; Test interaction terms first and expand the analysis beyond ANOVA if interactions are found. Texts vary in their recommendations regarding the continuation of the ANOVA procedure after encountering an interaction. Interactions complicate the interpretation of experimental data. Neither the calculations of significance nor the estimated treatment effects can be taken at face value. "A significant interaction will often mask the significance of main effects." Graphical methods are recommended to enhance understanding. Regression is often useful. A lengthy discussion of interactions is available in Cox (1958). Some interactions can be removed (by transformations) while others cannot. A variety of techniques are used with multiple factor ANOVA to reduce expense. One technique used in factorial designs is to minimize replication (possibly no replication with support of analytical trickery) and to combine groups when effects are found to be statistically (or practically) insignificant. An experiment with many insignificant factors may collapse into one with a few factors supported by many replications. Worked numeric examples Some analysis is required in support of the design of the experiment while other analysis is performed after changes in the factors are formally found to produce statistically significant changes in the responses. Because experimentation is iterative, the results of one experiment alter plans for following experiments. The number of experimental units In the design of an experiment, the number of experimental units is planned to satisfy the goals of the experiment. Experimentation is often sequential. Early experiments are often designed to provide mean-unbiased estimates of treatment effects and of experimental error. Later experiments are often designed to test a hypothesis that a treatment effect has an important magnitude; in this case, the number of experimental units is chosen so that the experiment is within budget and has adequate power, among other goals. Reporting sample size analysis is generally required in psychology. "Provide information on sample size and the process that led to sample size decisions." The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and administrative review boards. Besides the power analysis, there are less formal methods for selecting the number of experimental units. These include graphical methods based on limiting the probability of false negative errors, graphical methods based on an expected variation increase (above the residuals) and methods based on achieving a desired confident interval. Power analysis is often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in the population, sample size and significance level. Power analysis can assist in study design by determining what sample size would be required in order to have a reasonable chance of rejecting the null hypothesis when the alternative hypothesis is true. 
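In R, this kind of sample-size planning for a one-way layout can be sketched with the stats function power.anova.test(); the variance figures below are made up purely for illustration:

# Observations per group needed for 80% power at alpha = 0.05,
# given 3 groups, between-group variance 4 and within-group variance 9
power.anova.test(groups = 3, between.var = 4, within.var = 9,
                 sig.level = 0.05, power = 0.80)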
Several standardized measures of effect gauge the strength of the association between a predictor (or set of predictors) and the dependent variable. Effect-size estimates facilitate the comparison of findings in studies and across disciplines. A non-standardized measure of effect size with meaningful units may be preferred for reporting purposes. η2 ( eta-squared ): Eta-squared describes the ratio of variance explained in the dependent variable by a predictor while controlling for other predictors. Eta-squared is a biased estimator of the variance explained by the model in the population (it estimates only the effect size in the sample). On average it overestimates the variance explained in the population. As the sample size gets larger the amount of bias gets smaller, Cohen (1992) suggests effect sizes for various indexes, including ƒ (where 0.1 is a small effect, 0.25 is a medium effect and 0.4 is a large effect). He also offers a conversion table (see Cohen, 1988, p. 283) for eta squared (η2) where 0.0099 constitutes a small effect, 0.0588 a medium effect and 0.1379 a large effect. It is always appropriate to carefully consider outliers. They have a disproportionate impact on statistical conclusions and are often the result of errors. It is prudent to verify that the assumptions of ANOVA have been met. Residuals are examined or analyzed to confirm homoscedasticity and gross normality. Residuals should have the appearance of (zero mean normal distribution) noise when plotted as a function of anything including time and modeled data values. Trends hint at interactions among factors or among observations. One rule of thumb: "If the largest standard deviation is less than twice the smallest standard deviation, we can use methods based on the assumption of equal standard deviations and our results will still be approximately correct." A statistically significant effect in ANOVA is often followed up with one or more different follow-up tests. This can be done in order to assess which groups are different from which other groups or to test various other focused hypotheses. Follow-up tests are often distinguished in terms of whether they are planned (a priori) or post hoc. Planned tests are determined before looking at the data and post hoc tests are performed after looking at the data. Often one of the "treatments" is none, so the treatment group can act as a control. Dunnett's test (a modification of the t-test) tests whether each of the other treatment groups has the same mean as the control. Post hoc tests such as Tukey's range test most commonly compare every group mean with every other group mean and typically incorporate some method of controlling for Type I errors. Comparisons, which are most commonly planned, can be either simple or compound. Simple comparisons compare one group mean with one other group mean. Compound comparisons typically compare two sets of groups means where one set has two or more groups (e.g., compare average group means of group A, B and C with group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels. Following ANOVA with pair-wise multiple-comparison tests has been criticized on several grounds. There are many such tests (10 in one table) and recommendations regarding their use are vague or conflicting. Study designs and ANOVAs There are several types of ANOVA. 
Many statisticians base ANOVA on the design of the experiment, especially on the protocol that specifies the random assignment of treatments to subjects; the protocol's description of the assignment mechanism should include a specification of the structure of the treatments and of any blocking. It is also common to apply ANOVA to observational data using an appropriate statistical model. Some popular designs use the following types of ANOVA: - One-way ANOVA is used to test for differences among two or more independent groups (means),e.g. different levels of urea application in a crop. Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test. When there are only two means to compare, the t-test and the ANOVA F-test are equivalent; the relation between ANOVA and t is given by F = t2. - Factorial ANOVA is used when the experimenter wants to study the interaction effects among the treatments. - Repeated measures ANOVA is used when the same subjects are used for each treatment (e.g., in a longitudinal study). - Multivariate analysis of variance (MANOVA) is used when there is more than one response variable. Balanced experiments (those with an equal sample size for each treatment) are relatively easy to interpret; Unbalanced experiments offer more complexity. For single factor (one way) ANOVA, the adjustment for unbalanced data is easy, but the unbalanced analysis lacks both robustness and power. For more complex designs the lack of balance leads to further complications. "The orthogonality property of main effects and interactions present in balanced data does not carry over to the unbalanced case. This means that the usual analysis of variance techniques do not apply. Consequently, the analysis of unbalanced factorials is much more difficult than that for balanced designs." In the general case, "The analysis of variance can also be applied to unbalanced data, but then the sums of squares, mean squares, and F-ratios will depend on the order in which the sources of variation are considered." The simplest techniques for handling unbalanced data restore balance by either throwing out data or by synthesizing missing data. More complex techniques use regression. ANOVA is (in part) a significance test. The American Psychological Association holds the view that simply reporting significance is insufficient and that reporting confidence bounds is preferred. ANOVA is considered to be a special case of linear regression which in turn is a special case of the general linear model. All consider the observations to be the sum of a model (fit) and a residual (error) to be minimized. While the analysis of variance reached fruition in the 20th century, antecedents extend centuries into the past according to Stigler. These include hypothesis testing, the partitioning of sums of squares, experimental techniques and the additive model. Laplace was performing hypothesis testing in the 1770s. The development of least-squares methods by Laplace and Gauss circa 1800 provided an improved method of combining observations (over the existing practices of astronomy and geodesy). It also initiated much study of the contributions to sums of squares. Laplace soon knew how to estimate a variance from a residual (rather than a total) sum of squares. By 1827 Laplace was using least squares methods to address ANOVA problems regarding measurements of atmospheric tides. 
Before 1800 astronomers had isolated observational errors resulting from reaction times (the "personal equation") and had developed methods of reducing the errors. The experimental methods used in the study of the personal equation were later accepted by the emerging field of psychology which developed strong (full factorial) experimental methods to which randomization and blinding were soon added. An eloquent non-mathematical explanation of the additive effects model was available in 1885. Sir Ronald Fisher introduced the term "variance" and proposed a formal analysis of variance in a 1918 article The Correlation Between Relatives on the Supposition of Mendelian Inheritance. His first application of the analysis of variance was published in 1921. Analysis of variance became widely known after being included in Fisher's 1925 book Statistical Methods for Research Workers. One of the attributes of ANOVA which ensured its early popularity was computational elegance. The structure of the additive model allows solution for the additive coefficients by simple algebra rather than by matrix calculations. In the era of mechanical calculators this simplicity was critical. The determination of statistical significance also required access to tables of the F function which were supplied by early statistics texts. |Wikimedia Commons has media related to: Analysis of variance| - Randomization is a term used in multiple ways in this material. "Randomization has three roles in applications: as a device for eliminating biases, for example from unobserved explanatory variables and selection effects: as a basis for estimating standard errors: and as a foundation for formally exact significance tests." Cox (2006, page 192) Hinkelmann and Kempthorne use randomization both in experimental design and for statistical analysis. - Unit-treatment additivity is simply termed additivity in most texts. Hinkelmann and Kempthorne add adjectives and distinguish between additivity in the strict and broad senses. This allows a detailed consideration of multiple error sources (treatment, state, selection, measurement and sampling) on page 161. - Rosenbaum (2002, page 40) cites Section 5.7 (Permutation Tests), Theorem 2.3 (actually Theorem 3, page 184) of Lehmann's Testing Statistical Hypotheses (1959). - The F-test for the comparison of variances has a mixed reputation. It is not recommended as a hypothesis test to determine whether two different samples have the same variance. It is recommended for ANOVA where two estimates of the variance of the same sample are compared. While the F-test is not generally robust against departures from normality, it has been found to be robust in the special case of ANOVA. Citations from Moore & McCabe (2003): "Analysis of variance uses F statistics, but these are not the same as the F statistic for comparing two population standard deviations." (page 554) "The F test and other procedures for inference about variances are so lacking in robustness as to be of little use in practice." (page 556) "[The ANOVA F test] is relatively insensitive to moderate nonnormality and unequal variances, especially when the sample sizes are similar." (page 763) ANOVA assumes homoscedasticity, but it is robust. The statistical test for homoscedasticity (the F-test) is not robust. Moore & McCabe recommend a rule of thumb. - Gelman (2005, p 2) - Howell (2002, p 320) - Montgomery (2001, p 63) - Gelman (2005, p 1) - Gelman (2005, p 5) - "Section 5.7. A Glossary of DOE Terminology". 
NIST Engineering Statistics handbook. NIST. Retrieved 5 April 2012. - "Section 4.3.1 A Glossary of DOE Terminology". NIST Engineering Statistics handbook. NIST. Retrieved 14 Aug 2012. - Montgomery (2001, Chapter 12: Experiments with random factors) - Gelman (2005, pp 20–21) - Snedecor, George W.; Cochran, William G. (1967). Statistical Methods (6th ed.). p. 321. - Cochran & Cox (1992, p 48) - Howell (2002, p 323) - Anderson, David R.; Sweeney, Dennis J.; Williams, Thomas A. (1996). Statistics for business and economics (6th ed.). Minneapolis/St. Paul: West Pub. Co. pp. 452–453. ISBN 0-314-06378-1. - Anscombe (1948) - Kempthorne (1979, p 30) - Cox (1958, Chapter 2: Some Key Assumptions) - Hinkelmann and Kempthorne (2008, Volume 1, Throughout. Introduced in Section 2.3.3: Principles of experimental design; The linear model; Outline of a model) - Hinkelmann and Kempthorne (2008, Volume 1, Section 6.3: Completely Randomized Design; Derived Linear Model) - Hinkelmann and Kempthorne (2008, Volume 1, Section 6.6: Completely randomized design; Approximating the randomization test) - Bailey (2008, Chapter 2.14 "A More General Model" in Bailey, pp. 38–40) - Hinkelmann and Kempthorne (2008, Volume 1, Chapter 7: Comparison of Treatments) - Kempthorne (1979, pp 125–126, "The experimenter must decide which of the various causes that he feels will produce variations in his results must be controlled experimentally. Those causes that he does not control experimentally, because he is not cognizant of them, he must control by the device of randomization." "[O]nly when the treatments in the experiment are applied by the experimenter using the full randomization procedure is the chain of inductive inference sound. It is only under these circumstances that the experimenter can attribute whatever effects he observes to the treatment and the treatment only. Under these circumstances his conclusions are reliable in the statistical sense.") - Freedman[full citation needed] - Montgomery (2001, Section 3.8: Discovering dispersion effects) - Hinkelmann and Kempthorne (2008, Volume 1, Section 6.10: Completely randomized design; Transformations) - Bailey (2008) - Montgomery (2001, Section 3-3: Experiments with a single factor: The analysis of variance; Analysis of the fixed effects model) - Cochran & Cox (1992, p 2 example) - Cochran & Cox (1992, p 49) - Hinkelmann and Kempthorne (2008, Volume 1, Section 6.7: Completely randomized design; CRD with unequal numbers of replications) - Moore and McCabe (2003, page 763) - Gelman (2008) - Montgomery (2001, Section 5-2: Introduction to factorial designs; The advantages of factorials) - Belle (2008, Section 8.4: High-order interactions occur rarely) - Montgomery (2001, Section 5-1: Introduction to factorial designs; Basic definitions and principles) - Cox (1958, Chapter 6: Basic ideas about factorial experiments) - Montgomery (2001, Section 5-3.7: Introduction to factorial designs; The two-factor factorial design; One observation per cell) - Wilkinson (1999, p 596) - Montgomery (2001, Section 3-7: Determining sample size) - Howell (2002, Chapter 8: Power) - Howell (2002, Section 11.12: Power (in ANOVA)) - Howell (2002, Section 13.7: Power analysis for factorial experiments) - Moore and McCabe (2003, pp 778–780) - Wilkinson (1999, p 599) - Montgomery (2001, Section 3-4: Model adequacy checking) - Moore and McCabe (2003, p 755, Qualifications to this rule appear in a footnote.) 
- Montgomery (2001, Section 3-5.8: Experiments with a single factor: The analysis of variance; Practical interpretation of results; Comparing means with a control) - Hinkelmann and Kempthorne (2008, Volume 1, Section 7.5: Comparison of Treatments; Multiple Comparison Procedures) - Howell (2002, Chapter 12: Multiple comparisons among treatment means) - Montgomery (2001, Section 3-5: Practical interpretation of results) - Cochran & Cox (1957, p 9, "[T]he general rule [is] that the way in which the experiment is conducted determines not only whether inferences can be made, but also the calculations required to make them.") - "The Probable Error of a Mean". Biometrika 6: 1–0. 1908. doi:10.1093/biomet/6.1.1. - Montgomery (2001, Section 3-3.4: Unbalanced data) - Montgomery (2001, Section 14-2: Unbalanced data in factorial design) - Wilkinson (1999, p 600) - Gelman (2005, p.1) (with qualification in the later text) - Montgomery (2001, Section 3.9: The Regression Approach to the Analysis of Variance) - Howell (2002, p 604) - Howell (2002, Chapter 18: Resampling and nonparametric approaches to data) - Montgomery (2001, Section 3-10: Nonparametric methods in the analysis of variance) - Stigler (1986) - Stigler (1986, p 134) - Stigler (1986, p 153) - Stigler (1986, pp 154–155) - Stigler (1986, pp 240–242) - Stigler (1986, Chapter 7 - Psychophysics as a Counterpoint) - Stigler (1986, p 253) - Stigler (1986, pp 314–315) - The Correlation Between Relatives on the Supposition of Mendelian Inheritance. Ronald A. Fisher. Philosophical Transactions of the Royal Society of Edinburgh. 1918. (volume 52, pages 399–433) - On the "Probable Error" of a Coefficient of Correlation Deduced from a Small Sample. Ronald A. Fisher. Metron, 1: 3-32 (1921) - Scheffé (1959, p 291, "Randomization models were first formulated by Neyman (1923) for the completely randomized design, by Neyman (1935) for randomized blocks, by Welch (1937) and Pitman (1937) for the Latin square under a certain null hypothesis, and by Kempthorne (1952, 1955) and Wilk (1955) for many other designs.") - Anscombe, F. J. (1948). "The Validity of Comparative Experiments". Journal of the Royal Statistical Society. Series A (General) 111 (3): 181–211. doi:10.2307/2984159. JSTOR 2984159. MR 30181. - Bailey, R. A. (2008). Design of Comparative Experiments. Cambridge University Press. ISBN 978-0-521-68357-9. Pre-publication chapters are available on-line. - Belle, Gerald van (2008). Statistical rules of thumb (2nd ed.). Hoboken, N.J: Wiley. ISBN 978-0-470-14448-0. - Cochran, William G.; Cox, Gertrude M. (1992). Experimental designs (2nd ed.). New York: Wiley. ISBN 978-0-471-54567-5. - Cohen, Jacob (1988). Statistical power analysis for the behavior sciences (2nd ed.). Routledge ISBN 978-0-8058-0283-2 - Cohen, Jacob (1992). "Statistics a power primer". Psychology Bulletin 112 (1): 155–159. doi:10.1037/0033-2909.112.1.155. PMID 19565683. - Cox, David R. (1958). Planning of experiments. Reprinted as ISBN 978-0-471-57429-3 - Cox, D. R. (2006). Principles of statistical inference. Cambridge New York: Cambridge University Press. ISBN 978-0-521-68567-2. - Freedman, David A.(2005). Statistical Models: Theory and Practice, Cambridge University Press. ISBN 978-0-521-67105-7 - Gelman, Andrew (2005). "Analysis of variance? Why it is more important than ever". The Annals of Statistics 33: 1–53. doi:10.1214/009053604000001048. - Gelman, Andrew (2008). "Variance, analysis of". The new Palgrave dictionary of economics (2nd ed.). 
Basingstoke, Hampshire New York: Palgrave Macmillan. ISBN 978-0-333-78676-5. - Hinkelmann, Klaus & Kempthorne, Oscar (2008). Design and Analysis of Experiments. I and II (Second ed.). Wiley. ISBN 978-0-470-38551-7. - Howell, David C. (2002). Statistical methods for psychology (5th ed.). Pacific Grove, CA: Duxbury/Thomson Learning. ISBN 0-534-37770-X. - Kempthorne, Oscar (1979). The Design and Analysis of Experiments (Corrected reprint of (1952) Wiley ed.). Robert E. Krieger. ISBN 0-88275-105-0. - Lehmann, E.L. (1959) Testing Statistical Hypotheses. John Wiley & Sons. - Montgomery, Douglas C. (2001). Design and Analysis of Experiments (5th ed.). New York: Wiley. ISBN 978-0-471-31649-7. - Moore, David S. & McCabe, George P. (2003). Introduction to the Practice of Statistics (4e). W H Freeman & Co. ISBN 0-7167-9657-0 - Rosenbaum, Paul R. (2002). Observational Studies (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-98967-9 - Scheffé, Henry (1959). The Analysis of Variance. New York: Wiley. - Stigler, Stephen M. (1986). The history of statistics : the measurement of uncertainty before 1900. Cambridge, Mass: Belknap Press of Harvard University Press. ISBN 0-674-40340-1. - Wilkinson, Leland (1999). "Statistical Methods in Psychology Journals; Guidelines and Explanations". American Psychologist 54 (8): 594–604. doi:10.1037/0003-066X.54.8.594. - Box, G. E. P. (1953). "Non-Normality and Tests on Variances". Biometrika (Biometrika Trust) 40 (3/4): 318–335. JSTOR 2333350. - Box, G. E. P. (1954). "Some Theorems on Quadratic Forms Applied in the Study of Analysis of Variance Problems, I. Effect of Inequality of Variance in the One-Way Classification". The Annals of Mathematical Statistics 25 (2): 290. doi:10.1214/aoms/1177728786. - Box, G. E. P. (1954). "Some Theorems on Quadratic Forms Applied in the Study of Analysis of Variance Problems, II. Effects of Inequality of Variance and of Correlation Between Errors in the Two-Way Classification". The Annals of Mathematical Statistics 25 (3): 484. doi:10.1214/aoms/1177728717. - Caliński, Tadeusz & Kageyama, Sanpei (2000). Block designs: A Randomization approach, Volume I: Analysis. Lecture Notes in Statistics 150. New York: Springer-Verlag. ISBN 0-387-98578-6. - Christensen, Ronald (2002). Plane Answers to Complex Questions: The Theory of Linear Models (Third ed.). New York: Springer. ISBN 0-387-95361-2. - Cox, David R. & Reid, Nancy M. (2000). The theory of design of experiments. (Chapman & Hall/CRC). ISBN 978-1-58488-195-7 - Fisher, Ronald (1918). "Studies in Crop Variation. I. An examination of the yield of dressed grain from Broadbalk". Journal of Agricultural Science 11: 107–135. - Freedman, David A.; Pisani, Robert; Purves, Roger (2007) Statistics, 4th edition. W.W. Norton & Company ISBN 978-0-393-92972-0 - Hettmansperger, T. P.; McKean, J. W. (1998). Robust nonparametric statistical methods. Kendall's Library of Statistics 5 (First ed.). New York: Edward Arnold. pp. xiv+467 pp. ISBN 0-340-54937-8. MR 1604954. Unknown parameter - Lentner, Marvin; Thomas Bishop (1993). Experimental design and analysis (Second ed.). P.O. Box 884, Blacksburg, VA 24063: Valley Book Company. ISBN 0-9616255-2-X. - Tabachnick, Barbara G. & Fidell, Linda S. (2007). Using Multivariate Statistics (5th ed.). Boston: Pearson International Edition. ISBN 978-0-205-45938-4 - Wichura, Michael J. (2006). The coordinate-free approach to linear models. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge: Cambridge University Press. pp. xiv+199. 
ISBN 978-0-521-86842-6. MR 2283455.
- SOCR ANOVA Activity and interactive applet.
- Examples of all ANOVA and ANCOVA models with up to three treatment factors, including randomized block, split plot, repeated measures, and Latin squares, and their analysis in R
- NIST/SEMATECH e-Handbook of Statistical Methods, section 7.4.3: "Are the means equal?"
http://en.wikipedia.org/wiki/Analysis_of_variance
Fusion power refers to power generated by nuclear fusion reactions. In this kind of reaction, two light atomic nuclei fuse together to form a heavier nucleus and in doing so, release energy. In a more general sense, the term can also refer to the production of net usable power from a fusion source, similar to the usage of the term "steam power." Most design studies for fusion power plants involve using the fusion reactions to create heat, which is then used to operate a steam turbine, similar to most coal-fired power stations as well as fission-driven nuclear power stations.

The largest current experiment is the Joint European Torus [JET]. In 1997, JET produced a peak of 16.1 MW of fusion power (65% of input power), with fusion power of over 10 MW sustained for over 0.5 sec. In June 2005, the construction of the experimental reactor ITER, designed to produce several times more fusion power than the power put into the plasma over many minutes, was announced. The production of net electrical power from fusion is planned for DEMO, the next generation experiment after ITER.

The basic concept behind any fusion reaction is to bring two or more atoms very close together, close enough that the strong nuclear force in their nuclei will pull them together into one larger atom. If two light nuclei fuse, they will generally form a single nucleus with a slightly smaller mass than the sum of their original masses. The difference in mass is released as energy according to Einstein's mass-energy equivalence formula E = mc². If the input atoms are sufficiently massive, the resulting fusion product will be heavier than the reactants, in which case the reaction requires an external source of energy. The dividing line between "light" and "heavy" is iron. Above this atomic mass, energy will generally be released in nuclear fission reactions; below it, in fusion.
Fusion between the atoms is opposed by their shared electrical charge, specifically the net positive charge of the nuclei. To overcome this electrostatic force, or "Coulomb barrier", some external source of energy must be supplied. The easiest way to do this is to heat the atoms, which has the side effect of stripping the electrons from the atoms and leaving them as bare nuclei. In most experiments the nuclei and electrons are left in a fluid known as a plasma.

The temperatures required to provide the nuclei with enough energy to overcome their repulsion are a function of the total charge, so hydrogen, which has the smallest nuclear charge, reacts at the lowest temperature. Helium has an extremely low mass per nucleon and is therefore energetically favoured as a fusion product. As a consequence, most fusion reactions combine isotopes of hydrogen ("protium", deuterium, or tritium) to form isotopes of helium (³He or ⁴He). Perhaps the three most widely considered fuel cycles are based on the D-T, D-D, and p-¹¹B reactions. Other fuel cycles (D-³He and ³He-³He) would require a supply of ³He, either from other nuclear reactions or from extraterrestrial sources, such as the surface of the moon or the atmospheres of the gas giant planets. The details of the calculations comparing these reactions can be found elsewhere.

The easiest (according to the Lawson criterion) and most immediately promising nuclear reaction to be used for fusion power is the deuterium-tritium (D-T) reaction, sketched below. Deuterium is a naturally occurring isotope of hydrogen and as such is universally available.
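A standard form of the D-T reaction, with the commonly quoted energy split, is shown here; only the 14.1 MeV neutron energy is cited later in this article, so treat the 3.5 MeV alpha-particle share as a textbook value:

\[
\mathrm{^2_1D} + \mathrm{^3_1T} \;\rightarrow\; \mathrm{^4_2He}\,(3.5\ \mathrm{MeV}) + \mathrm{n}\,(14.1\ \mathrm{MeV})
\]

Together the two products carry roughly 17.6 MeV per reaction, consistent with the mass-defect estimate above.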
The large mass ratio of the hydrogen isotopes makes their separation rather easy compared to the difficult uranium enrichment process. Tritium is also an isotope of hydrogen, but it occurs naturally in only negligible amounts due to its radioactive half-life of 12.32 years. Consequently, the deuterium-tritium fuel cycle requires the breeding of tritium from lithium, using one of the following reactions:

⁶Li + n → T + ⁴He
⁷Li + n → T + ⁴He + n

The reactant neutron is supplied by the D-T fusion reaction shown above, the one which also produces the useful energy. The reaction with ⁶Li is exothermic, providing a small energy gain for the reactor. The reaction with ⁷Li is endothermic but does not consume the neutron. At least some ⁷Li reactions are required to replace the neutrons lost by reactions with other elements. Most reactor designs use the naturally occurring mix of lithium isotopes. The supply of lithium is more limited than that of deuterium, but still large enough to supply the world's energy demand for thousands of years.

Several drawbacks are commonly attributed to D-T fusion power. The neutron flux expected in a commercial D-T fusion reactor is about 100 times that of current fission power reactors, posing problems for material design. Design of suitable materials is under way, but their actual use in a reactor is not proposed until the generation after ITER. After a single series of D-T tests at JET, the largest fusion reactor yet to use this fuel, the vacuum vessel was sufficiently radioactive that remote handling had to be used for the year following the tests. On the other hand, the volumetric deposition of neutron power can also be seen as an advantage: if all the power of a fusion reactor had to be transported by conduction through the surface enclosing the plasma, it would be very difficult to find materials and a construction that would survive, and it would probably entail a relatively poor efficiency.

Though more difficult to facilitate than the deuterium-tritium reaction, fusion can also be achieved through the reaction of deuterium with itself. This reaction has two branches that occur with nearly equal probability:

D + D → T + p
D + D → ³He + n

The optimum temperature for this reaction is 15 keV, only slightly higher than the optimum for the D-T reaction.
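For reference, the two D-D branches just listed are commonly quoted with the following approximate energy splits; only the 2.45 MeV neutron energy appears in the article itself, so treat the other figures as standard textbook values:

\[
\mathrm{D} + \mathrm{D} \;\rightarrow\; \mathrm{T}\,(1.01\ \mathrm{MeV}) + \mathrm{p}\,(3.02\ \mathrm{MeV})
\]
\[
\mathrm{D} + \mathrm{D} \;\rightarrow\; \mathrm{^3He}\,(0.82\ \mathrm{MeV}) + \mathrm{n}\,(2.45\ \mathrm{MeV})
\]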
The first branch does not produce neutrons, but it does produce tritium, so that a D-D reactor will not be completely tritium-free, even though it does not require an input of tritium or lithium. Most of the tritium produced will be burned before leaving the reactor, which reduces the tritium handling required, but also means that more neutrons are produced and that some of these are very energetic. The neutron from the second branch has an energy of only 2.45 MeV, whereas the neutron from the D-T reaction has an energy of 14.1 MeV, resulting in a wider range of isotope production and material damage. Assuming complete tritium burn-up, the reduction in the fraction of fusion energy carried by neutrons is only about 18%, so that the primary advantage of the D-D fuel cycle is that tritium breeding is not required. Other advantages are independence from limitations of lithium resources and a somewhat softer neutron spectrum. The price to pay compared to D-T is that the energy confinement (at a given pressure) must be 30 times better and the power produced (at a given pressure and volume) is 68 times less.

If aneutronic fusion is the goal, then the most promising candidate may be the proton-boron reaction, p + ¹¹B → 3 ⁴He. Under reasonable assumptions, side reactions will result in about 0.1% of the fusion power being carried by neutrons. At 123 keV, the optimum temperature for this reaction is nearly ten times higher than that for the pure hydrogen reactions, the energy confinement must be 500 times better than that required for the D-T reaction, and the power density will be 2500 times lower than for D-T. Since the confinement properties of conventional approaches to fusion, such as the tokamak and laser pellet fusion, are marginal, most proposals for aneutronic fusion are based on radically different confinement concepts.

The idea of using human-initiated fusion reactions was first made practical for military purposes, in nuclear weapons. In a hydrogen bomb, the energy released by a fission weapon is used to compress and heat fusion fuel, beginning a fusion reaction which can release a very large amount of energy. The first fusion-based weapons released some 500 times more energy than early fission weapons. Civilian applications, in which explosive energy production must be replaced by controlled production, are still being developed. Although it took less than ten years to go from military applications to civilian fission energy production, it has been very different in the fusion energy field: more than fifty years have already passed without any commercial fusion energy production plant coming into operation. Registration of the first patent related to a fusion reactor, by the United Kingdom Atomic Energy Authority with Sir George Paget Thomson and Moses Blackman as the inventors, dates back to 1946.
Some basic principles used in the ITER experiment are described in this patent: a toroidal vacuum chamber, magnetic confinement, and radio-frequency plasma heating. The U.S. fusion program began in 1951, when Lyman Spitzer began work on a stellarator under the code name Project Matterhorn. His work led to the creation of the Princeton Plasma Physics Laboratory, where magnetically confined plasmas are still studied. The stellarator concept fell out of favor for several decades afterwards, plagued by poor confinement, but recent advances in computer technology have led to a significant resurgence of interest in these devices. A wide variety of other magnetic geometries were also experimented with, notably the magnetic mirror. These systems also suffered from similar problems when higher-performance versions were constructed.

A new approach was outlined in theoretical work carried out in 1950-1951 by I. E. Tamm and A. D. Sakharov in the Soviet Union, which laid the foundations of the tokamak. Experimental research on these systems started in 1956 at the Kurchatov Institute, Moscow, by a group of Soviet scientists led by Lev Artsimovich. The group constructed the first tokamaks, the most successful of them being T-3 and its larger version T-4. T-4 was tested in 1968 in Novosibirsk, conducting the first quasistationary thermonuclear fusion reaction ever. The tokamak was dramatically more efficient than the other approaches of the same era, and most research after the 1970s concentrated on variations of this theme.
The same is true today, where very large tokamaks like ITER hope to demonstrate several milestones on the way to commercial power production, including a burning plasma with long burn times, high power output and online fueling. There are no guarantees that the project will be successful, as previous generations of machines have faced previously unseen problems on many occasions. But the entire field of high-temperature plasmas is much better understood now because of the earlier research, and there is considerable optimism that ITER will meet its goals. If successful, ITER would be followed by a "commercial demonstrator" system, similar to the very earliest power-producing fission reactors built in the era before wide-scale commercial deployment of larger machines started in the 1960s and 1970s. Even with these goals met, a number of major engineering problems remain, notably finding suitable "low activity" materials for reactor construction, demonstrating secondary systems including practical tritium extraction, and building reactor designs that allow the reactor core to be removed when it becomes embrittled by the neutron flux. Practical generators based on the tokamak concept remain far in the future. The public at large has been somewhat disappointed, as the initial outlook for practical fusion power plants was much rosier than has been realized; a pamphlet from the 1970s printed by General Atomics stated that "Several commercial fusion reactors are expected to be online by the year 2000."

The Z-pinch phenomenon has been known since the end of the 18th century. Its use in the fusion field comes from research on toroidal devices, initially at Los Alamos National Laboratory from 1952 (Perhapsatron) and in the United Kingdom from 1954 (ZETA), but its physical principles remained poorly understood and controlled for a long time. Pinch devices were studied as potential development paths to practical fusion devices through the 1950s, but studies of the data generated by these devices suggested that instabilities in the collapse mechanism would doom any pinch-type device to power levels far too low to make continuing along these lines practical. Most work on pinch-type devices ended by the 1960s.
Recent work on the basic concept started as a result of the appearance of the "wire array" concept in the 1980s, which allowed a more efficient use of this technique. Sandia National Laboratories runs a continuing wire-array research program with the Z machine. In addition, the University of Washington's ZaP Lab has shown quiescent periods of stability hundreds of times longer than expected for plasma in a Z-pinch configuration, giving promise to the confinement technique.

The technique of imploding a microcapsule irradiated by laser beams, the basis of laser inertial confinement, was first suggested in 1962 by scientists at Lawrence Livermore National Laboratory, shortly after the invention of the laser itself in 1960. Lasers of the era were very low powered, but low-level research using them nevertheless started as early as 1965. More serious research started in the early 1970s, when new types of lasers offered a path to dramatically higher power levels, levels that made inertial-confinement fusion devices appear practical for the first time. By the late 1970s great strides had been made in laser power, but with each increase new problems were found in the implosion technique that suggested even more power would be required. By the 1980s these increases were so large that using the concept for generating net energy seemed remote. Most research in this field turned to weapons research, always a second line of research, as the implosion concept is somewhat similar to hydrogen bomb operation. Work on very large versions continued as a result, with the very large National Ignition Facility in the US and Laser Mégajoule in France supporting these research programs. More recent work has demonstrated that significant savings in the required laser energy are possible using a technique known as "fast ignition". The savings are so dramatic that the concept appears to be a useful technique for energy production again, so much so that it is a serious contender for pre-commercial development. There are proposals to build an experimental facility dedicated to the fast ignition approach, known as HiPER.
At the same time, advances in solid-state lasers appear to improve the "driver" systems' efficiency by about ten times (to 10-20%), savings that make even the large "traditional" machines almost practical, and might make the fast ignition concept outpace the magnetic approaches in further development. The laser-based concept has other advantages as well. The reactor core is mostly exposed, as opposed to being wrapped in a huge magnet as in the tokamak. This makes the problem of removing energy from the system somewhat simpler, and should mean that a laser-based device would be much easier to maintain, for example during core replacement. Additionally, the lack of strong magnetic fields allows a wider variety of low-activation materials, including carbon fiber, which would both reduce the frequency of such swaps and reduce the radioactivity of the discarded core. In other ways the program has many of the same problems as the tokamak: practical methods of energy removal and tritium recycling need to be demonstrated, and there is always the possibility that a new, previously unseen collapse problem will arise.

Throughout the history of fusion power research there have been a number of devices that have produced fusion at a much smaller level, unsuitable for energy production but nevertheless starting to fill other roles. Philo T. Farnsworth, inventor of the cathode ray tube television, patented his first Fusor design in 1968, a device which uses inertial electrostatic confinement. Towards the end of the 1960s, Robert Hirsch designed a variant of the Farnsworth Fusor known as the Hirsch-Meeks fusor. This variant is a considerable improvement over the Farnsworth design, and is able to generate a neutron flux on the order of one billion neutrons per second. Although the efficiency was very low at first, there were hopes the device could be scaled up, but continued development demonstrated that this approach would be impractical for large machines. Nevertheless, fusion could be achieved using a "lab bench top" type setup for the first time, at minimal cost. This type of fusor found its first application as a portable neutron generator in the late 1990s. An automated, sealed-reaction-chamber version of this device, commercially named Fusionstar, was developed by EADS but abandoned in 2001.
Its successor is the NSD-Fusion neutron generator. Robert W. Bussard's Polywell concept is roughly similar to the Fusor design, but replaces the problematic grid with a magnetically contained electron cloud which holds the ions in position and provides an accelerating potential. Bussard claimed that a scaled-up version would be capable of generating net power. In April 2005, a team from UCLA announced it had devised a novel way of producing fusion using a machine that "fits on a lab bench", using lithium tantalate to generate enough voltage to smash deuterium atoms together. However, the process does not generate net power (see pyroelectric fusion). Such a device would be useful in the same sort of roles as the fusor.

The likelihood of a catastrophic accident in a fusion reactor, in which injury or loss of life occurs, is much smaller than in a fission reactor. The primary reason is that the fission products in a fission reactor continue to generate heat through beta decay for several hours or even days after reactor shutdown, meaning that a meltdown is plausible even after the reactor has been stopped. In contrast, fusion requires precisely controlled conditions of temperature, pressure and magnetic field in order to generate net energy. If the reactor were damaged, these parameters would be disrupted and the heat generation in the reactor would rapidly cease. There is also no risk of a runaway reaction in a fusion reactor, since the plasma is normally burnt at optimal conditions, and any significant change will render it unable to produce excess heat. Runaway reactions are also less of a concern in modern fission reactors, which are typically designed to shut down immediately under accident conditions, but in a fusion reactor such behaviour is almost unavoidable, and there is thus little need to carefully design for this extra safety feature. Although the plasma in a fusion power plant will have a volume of 1000 cubic meters or more, the density of the plasma is extremely low, and the total amount of fusion fuel in the vessel is very small, typically a few grams. If the fuel supply is closed, the reaction stops within seconds. In comparison, a fission reactor is typically loaded with enough fuel for one or several years, and no additional fuel is necessary to keep the reaction going.
In the magnetic approach, strong fields are developed in coils that are held in place mechanically by the reactor structure. Failure of this structure could release this tension and allow the magnet to "explode" outward. The severity of this event would be similar to any other industrial accident, and could be effectively contained with a containment building similar to those used in existing (fission) nuclear generators. The laser-driven inertial approach is generally lower-stress. Although failure of the reaction chamber is possible, simply stopping fuel delivery would prevent any sort of catastrophic failure.

Most reactor designs rely on liquid lithium as both a coolant and a means of converting stray neutrons from the reaction into tritium, which is fed back into the reactor as fuel. Lithium is highly flammable, and in the case of a fire it is possible that the lithium stored on-site could burn up and escape. In this case the tritium content of the lithium would be released into the atmosphere, posing a radiation risk. However, calculations suggest that the total amount of tritium and other radioactive gases in a typical power plant would be so small, about 1 kg, that they would have diluted to legally acceptable limits by the time they blew as far as the plant's perimeter fence.

The natural product of the fusion reaction is a small amount of helium, which is completely harmless to life and does not contribute to global warming. Of more concern is tritium, which, like other isotopes of hydrogen, is difficult to retain completely. During normal operation, some amount of tritium will be continually released. There would be no acute danger, but the cumulative effect on the world's population from a fusion economy could be a matter of concern. The 12-year half-life of tritium would at least prevent unlimited build-up and long-term contamination, even without appropriate containment techniques. Current ITER designs are investigating total containment facilities for any tritium.

The large flux of high-energy neutrons in a reactor will make the structural materials radioactive. The radioactive inventory at shutdown may be comparable to that of a fission reactor, but there are important differences. The half-lives of the radioisotopes produced by fusion tend to be shorter than those from fission, so the inventory decreases more rapidly. Furthermore, there are fewer unique species, and they tend to be non-volatile and biologically less active. Unlike fission reactors, whose waste remains dangerous for thousands of years, most of the radioactive material in a fusion reactor would be the reactor core itself, which would be dangerous for about 50 years, with low-level waste dangerous for another 100. By 300 years the material would have the same radioactivity as coal ash.
In current designs, some materials will yield waste products with long half-lives. Additionally, the materials used in a fusion reactor are more "flexible" than in a fission design, where many materials are required for their specific neutron cross-sections. This allows a fusion reactor to be designed using materials that are selected specifically to be "low activation", materials that do not easily become radioactive. Vanadium, for example, would become much less radioactive than stainless steel. Carbon fibre materials are also low-activation, as well as being strong and light, and are a promising area of study for laser-inertial reactors, where a magnetic field is not required. In general terms, fusion reactors would create far less radioactive material than a fission reactor, the material they would create is less damaging biologically, and the radioactivity "burns off" within a time period that is well within existing engineering capabilities.

Although fusion power uses nuclear technology, the overlap with nuclear weapons technology is small. Tritium is a component of the trigger of hydrogen bombs, but not a major problem in production. The copious neutrons from a fusion reactor could be used to breed plutonium for an atomic bomb, but not without extensive redesign of the reactor, so clandestine production would be easy to detect. The theoretical and computational tools needed for hydrogen bomb design are closely related to those needed for inertial confinement fusion, but have very little in common with (the more scientifically developed) magnetic confinement fusion.

Large-scale reactors using neutronic fuels (e.g. ITER) and thermal power production (turbine based) are most comparable to fission power from an engineering and economics viewpoint. Both fission and fusion power plants involve a relatively compact heat source powering a conventional steam-turbine-based power plant, while producing enough neutron radiation to make activation of the plant materials problematic. The main distinction is that fusion power produces no high-level radioactive waste (though activated plant materials still need to be disposed of).
There are some power plant ideas which may significantly lower the cost or size of such plants; however, research in these areas is nowhere near as advanced as in tokamaks. Fusion power commonly proposes the use of deuterium, an isotope of hydrogen, as fuel, and many current designs also use lithium. Assuming a fusion energy output equal to the current global output, and that this does not increase in the future, the known current lithium reserves would last 3000 years, lithium from sea water would last 60 million years, and a more complicated fusion process using only deuterium from sea water would have fuel for 150 billion years.

Confinement refers to all the conditions necessary to keep a plasma dense and hot long enough to undergo fusion. The first human-made, large-scale production of fusion reactions was the test of the hydrogen bomb, Ivy Mike, in 1952. It was once proposed to use hydrogen bombs as a source of power by detonating them in underground caverns and then generating electricity from the heat produced, but such a power plant is unlikely ever to be constructed, for a variety of reasons (see the PACER project for more details). Controlled thermonuclear fusion (CTF) refers to the alternative of continuous power production, or at least the use of explosions that are so small that they do not destroy a significant portion of the machine that produces them. To produce self-sustaining fusion, the energy released by the reaction (or at least a fraction of it) must be used to heat new reactant nuclei and keep them hot long enough that they also undergo fusion reactions. Retaining the heat is called energy confinement and may be accomplished in a number of ways.

The hydrogen bomb really has no confinement at all. The fuel is simply allowed to fly apart, but it takes a certain length of time to do this, and during this time fusion can occur. This approach is called inertial confinement. If more than milligram quantities of fuel are used (and efficiently fused), the explosion would destroy the machine, so theoretically, controlled thermonuclear fusion using inertial confinement would be done with tiny pellets of fuel which explode several times a second. To induce the explosion, the pellet must be compressed to about 30 times solid density with energetic beams. If the beams are focused directly on the pellet, it is called direct drive, which can in principle be very efficient, but in practice it is difficult to obtain the needed uniformity.
An alternative approach is indirect drive, in which the beams heat a shell, and the shell radiates x-rays, which then implode the pellet. The beams are commonly laser beams, but heavy and light ion beams and electron beams have all been investigated. Inertial confinement produces plasmas with impressively high densities and temperatures, and appears to be best suited to weapons research, X-ray generation, very small reactors, and perhaps, in the distant future, spaceflight. These devices rely on fuel pellets with close to a "perfect" shape in order to generate a symmetrical inward shock wave that produces the high-density plasma, and in practice such pellets have proven difficult to produce. A recent development in the field of laser-induced ICF is the use of ultrashort-pulse multi-petawatt lasers to heat the plasma of an imploding pellet at exactly the moment of greatest density, after it has been imploded conventionally using terawatt-scale lasers. This research will be carried out on the OMEGA EP petawatt laser (currently being built) and the OMEGA laser at the University of Rochester, and at the GEKKO XII laser at the Institute for Laser Engineering in Osaka, Japan; if fruitful, it may greatly reduce the cost of a laser-fusion-based power source.

At the temperatures required for fusion, the fuel is in the form of a plasma with very good electrical conductivity. This opens the possibility of confining the fuel and the energy with magnetic fields, an idea known as magnetic confinement. The Lorentz force works only perpendicular to the magnetic field (see the expression below), so the first problem is how to prevent the plasma from leaking out the ends of the field lines. There are basically two solutions. The first is to use the magnetic mirror effect: if particles following a field line encounter a region of higher field strength, some of the particles will be stopped and reflected. Advantages of a magnetic mirror power plant would be simplified construction and maintenance due to a linear topology and the potential to apply direct conversion in a natural way, but the confinement achieved in the experiments was so poor that this approach has been essentially abandoned. The second possibility to prevent end losses is to bend the field lines back on themselves, either in circles or, more commonly, in nested toroidal surfaces.
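For reference, the magnetic part of the Lorentz force invoked above can be written compactly (a standard result, not quoted in the article itself):

\[
\mathbf{F} = q\,\mathbf{v} \times \mathbf{B}
\]

Because this force is always perpendicular to both the particle velocity and the field, charged particles spiral along field lines rather than across them; the open ends of the field lines, not the sides, are the leak paths that mirror and toroidal geometries try to close.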
The most highly developed system of this type is the tokamak, with the stellarator being next most advanced, followed by the reversed-field pinch. Compact toroids, especially the field-reversed configuration and the spheromak, attempt to combine the advantages of toroidal magnetic surfaces with those of a simply connected (non-toroidal) machine, resulting in a mechanically simpler and smaller confinement area. Compact toroids still have some enthusiastic supporters but are not backed as readily by the majority of the fusion community. Finally, there are also electrostatic confinement fusion systems, in which ions in the reaction chamber are confined and held at the center of the device by electrostatic forces, as in the Farnsworth-Hirsch Fusor, which is not believed to be capable of being developed into a power plant. The Polywell, an advanced variant of the fusor, has attracted a degree of research interest of late; however, the technology is relatively immature, and major scientific and engineering questions remain, which researchers under the auspices of the U.S. Office of Naval Research hope to investigate further.

A more subtle technique is to use unusual particles to catalyse fusion. The best known of these is muon-catalyzed fusion, which uses muons, particles that behave somewhat like electrons and replace the electrons around the atoms. These muons allow atoms to get much closer together and thus reduce the kinetic energy required to initiate fusion. However, muons require more energy to produce than can be recovered from muon-catalysed fusion, making this approach impractical for the generation of power. Some researchers have reported excess heat, neutrons, tritium, helium and other nuclear effects in so-called cold fusion systems.
In 2004, a peer review panel was commissioned by the US Department of Energy to study these claims: two thirds of its members found the evidence of nuclear reactions unconvincing, five found the evidence "somewhat convincing", and one was entirely convinced. In 2006, Mosier-Boss and Szpak, researchers at the U.S. Navy's Space and Naval Warfare Systems Center San Diego, reported evidence of nuclear reactions, which has been independently replicated. Research into sonoluminescence-induced fusion, sometimes known as "bubble fusion", also continues, although it is met with as much skepticism by most of the scientific community as cold fusion is.

In fusion research, achieving a fusion energy gain factor Q = 1 is called breakeven and is considered a significant although somewhat artificial milestone. Ignition refers to an infinite Q, that is, a self-sustaining plasma where the losses are made up for by fusion power without any external input. In a practical fusion reactor, some external power will always be required for things like current drive, refueling, profile control, and burn control. A value on the order of Q = 20 will be required if the plant is to deliver much more energy than it uses internally.

There have been many design studies for fusion power plants. Despite many differences, there are several systems that are common to most. To begin with, a fusion power plant, like a fission power plant, is customarily divided into the nuclear island and the balance of plant. The balance of plant is the conventional part that converts high-temperature heat into electricity via steam turbines; it is much the same in a fusion power plant as in a fission or coal power plant. In a fusion power plant, the nuclear island has a plasma chamber with an associated vacuum system, surrounded by plasma-facing components (first wall and divertor) maintaining the vacuum boundary and absorbing the thermal radiation coming from the plasma, surrounded in turn by a blanket where the neutrons are absorbed to breed tritium and heat a working fluid that transfers the power to the balance of plant. If magnetic confinement is used, a magnet system, using primarily cryogenic superconducting magnets, is needed, along with systems for heating and refueling the plasma and for driving current. In inertial confinement, a driver (laser or accelerator) and a focusing system are needed, as well as a means for forming and positioning the pellets.
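To make the gain-factor discussion above concrete, Q is simply the ratio of fusion power produced to the external heating power supplied to the plasma; the symbols here are only illustrative, but the milestone values repeat those given in the text:

\[
Q = \frac{P_\mathrm{fusion}}{P_\mathrm{heating}}, \qquad
Q = 1 \ \text{(breakeven)}, \qquad
Q \to \infty \ \text{(ignition)}, \qquad
Q \sim 20 \ \text{(practical power plant)}
\]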
Although the standard solution for electricity production in fusion power plant designs is conventional steam turbines using the heat deposited by neutrons, there are also designs for direct conversion of the energy of the charged particles into electricity. These are of little value with a D-T fuel cycle, where 80% of the power is carried by the neutrons, but are indispensable with aneutronic fusion, where less than 1% is. Direct conversion has been most commonly proposed for open-ended magnetic configurations like magnetic mirrors or field-reversed configurations, where charged particles are lost along the magnetic field lines, which are then expanded to convert a large fraction of the random energy of the fusion products into directed motion. The particles are then collected on electrodes at various large electrical potentials. Typically the claimed conversion efficiency is in the range of 80%, but the converter may approach the reactor itself in size and expense.

Developing materials for fusion reactors has long been recognized as a problem nearly as difficult and important as that of plasma confinement, but it has received only a fraction of the attention. The neutron flux in a fusion reactor is expected to be about 100 times that in existing pressurized water reactors (PWRs). Each atom in the blanket of a fusion reactor is expected to be hit by a neutron and displaced about a hundred times before the material is replaced. Furthermore, the high-energy neutrons will produce hydrogen and helium in various nuclear reactions, which tend to form bubbles at grain boundaries and result in swelling, blistering or embrittlement. One also wishes to choose materials whose primary components and impurities do not result in long-lived radioactive wastes. Finally, the mechanical forces and temperatures are large, and there may be frequent cycling of both. The problem is exacerbated because realistic material tests must expose samples to neutron fluxes of a similar level, for a similar length of time, as those expected in a fusion power plant. Such a neutron source would be nearly as complicated and expensive as a fusion reactor itself. Proper materials testing will not be possible in ITER, and a proposed materials testing facility, IFMIF, was still at the design stage in 2005. The material of the plasma-facing components (PFC) is a special problem.
The PFC do not have to withstand large mechanical loads, so neutron damage is much less of an issue. They do have to withstand extremely large thermal loads, up to 10 MW/m², which is a difficult but solvable problem. Regardless of the material chosen, the heat flux can only be accommodated without melting if the distance from the front surface to the coolant is not more than a centimeter or two. The primary issue is the interaction with the plasma. One can choose either a low-Z material, typified by graphite, although for some purposes beryllium might be chosen, or a high-Z material, usually tungsten, with molybdenum as a second choice. Use of liquid metals (lithium, gallium, tin) has also been proposed, e.g. by injection of 1-5 mm thick streams flowing at 10 m/s over solid substrates.

If graphite is used, the gross erosion rates due to physical and chemical sputtering would be many meters per year, so one must rely on redeposition of the sputtered material. The location of the redeposition will not exactly coincide with the location of the sputtering, so one is still left with erosion rates that may be prohibitive. An even larger problem is the tritium co-deposited with the redeposited graphite. The tritium inventory in graphite layers and dust in a reactor could quickly build up to many kilograms, representing a waste of resources and a serious radiological hazard in case of an accident. The consensus of the fusion community seems to be that graphite, although a very attractive material for fusion experiments, cannot be the primary PFC material in a commercial reactor. The sputtering rate of tungsten can be orders of magnitude smaller than that of carbon, and tritium is not so easily incorporated into redeposited tungsten, making this a more attractive choice. On the other hand, tungsten impurities in a plasma are much more damaging than carbon impurities, and self-sputtering of tungsten can be high, so it will be necessary to ensure that the plasma in contact with the tungsten is not too hot (a few tens of eV rather than hundreds of eV). Tungsten also has disadvantages in terms of eddy currents and melting in off-normal events, as well as some radiological issues.

It is far from clear whether nuclear fusion will be economically competitive with other forms of power. The many estimates that have been made of the cost of fusion power cover a wide range, and indirect costs of, and subsidies for, fusion power and its alternatives make any cost comparison difficult. The low estimates for fusion appear to be competitive with, but not drastically lower than, other alternatives. The high estimates are several times higher than the alternatives.
While fusion power is still in the early stages of development, vast sums have been and continue to be invested in research. In the EU almost €10 billion was spent on fusion research up to the end of the 1990s, and the new ITER reactor alone is budgeted at €10 billion. It is estimated that, up to the point of possible implementation of electricity generation by nuclear fusion, R&D will need further funding totalling around €60-80 billion over a period of 50 years or so (of which €20-30 billion within the EU). Nuclear fusion research receives €750 million (excluding ITER funding), compared with €810 million for all non-nuclear energy research combined, putting research into fusion power well ahead of that of any single rival technology.

Fusion power would provide much more energy for a given weight of fuel than any technology currently in use, and the fuel itself (primarily deuterium) exists abundantly in the Earth's oceans: about 1 in 6500 hydrogen atoms in seawater is deuterium. Although this may seem a low proportion (about 0.015%), because nuclear fusion reactions are so much more energetic than chemical combustion, and seawater is easier to access and more plentiful than fossil fuels, some experts estimate that fusion could supply the world's energy needs for centuries.

An important aspect of fusion energy, in contrast to many other energy sources, is that the cost of production is inelastic. The cost of wind energy, for example, goes up as the optimal locations are developed first, while further generators must be sited in less ideal conditions. With fusion energy, the production cost will not increase much even if large numbers of plants are built. It has been suggested that even 100 times the current energy consumption of the world is possible. Some problems which are expected to be an issue in the next century, such as fresh water shortages, can actually be regarded merely as problems of energy supply. For example, in desalination plants, seawater can be purified through distillation or reverse osmosis; however, these processes are energy intensive. Even if the first fusion plants are not competitive with alternative sources, fusion could still become competitive if large-scale desalination requires more power than the alternatives are able to provide.
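As a quick arithmetic check on the abundance figure quoted above, 1 deuterium atom per 6500 hydrogen atoms corresponds to

\[
\frac{1}{6500} \approx 1.5 \times 10^{-4} \approx 0.015\%,
\]

which matches the percentage given in the text.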
Despite being technically non-renewable, fusion power has many of the benefits of long-term renewable energy sources (such as being a sustainable energy supply compared to presently-utilized sources and emitting no greenhouse gases) as well as some of the benefits of much more limited energy sources such as hydrocarbons and nuclear fission (without reprocessing). Like these currently dominant energy sources, fusion could provide very high power-generation density and uninterrupted power delivery (due to the fact that it is not dependent on the weather, unlike wind and solar power). Despite optimism dating back to the 1950s about the wide-scale harnessing of fusion power, there are still significant barriers standing between current scientific understanding and technological capabilities and the practical realization of fusion as an energy source. Research, while making steady progress, has also continually thrown up new difficulties. Therefore, it remains unclear whether an economically viable fusion plant is even possible. An editorial in New Scientist magazine opined that "if commercial fusion is viable, it may well be a century away." Ironically, a pamphlet printed by General Atomics in the 1970s stated that "By the year 2000, several commercial fusion reactors are expected to be on-line." Several fusion reactors have been built, but as yet none has produced more thermal energy than electrical energy consumed. Despite research having started in the 1950s, no commercial fusion reactor is expected before 2050. The ITER project is currently leading the effort to commercialize fusion power. 
http://citizendia.org/Fusion_power
13
72
Exploring Learning Styles and Instruction Learning is an interactive process, the product of student and teacher activity within a specific learning environment. These activities, which are the central elements of the learning process, show a wide variation in pattern, style and quality (Keefe, 1987). Learning problems frequently are not related to the difficulty of the subject matter but rather to the type and level of cognitive process required to learn the material (Keefe, 1988). Gregorc and Ward (1977) claim that if educators are to successfully address the needs of the individual they have to understand what "individual" means. They must relate teaching style to learning style. The famous case of Tinker versus Des Moines Community School District (1969), which concerns itself with student rights, will be extended to encompass the right of a student to learn in ways that complement his ability to achieve. Public Law 94-142, which requires the identification of learning style and individualization for all handicapped children, is one step away from mandating individualization for all students (Dunn and Dunn, 1978). Educators must learn to base programs on the differences that exist among students rather than on the assumption that everyone learns the same way (Keefe, 1987). Learning has taken place when we observe a change of learner behavior resulting from what has been experienced. Similarly, we can recognize the learning style of an individual student only by observing his overt behavior. Learning style is a consistent way of functioning that reflects the underlying causes of learning behavior (Keefe, 1987). Keefe (1991) describes learning style as both a student characteristic and an instructional strategy. As a student characteristic, learning style is an indicator of how a student learns and likes to learn. As an instructional strategy, it informs the cognition, context and content of learning. Each learner has distinct and consistent preferred ways of perception, organization and retention. These learning styles are characteristic cognitive, affective, and physiological behaviors that serve as relatively stable indicators of how learners perceive, interact with and respond to the learning environment. Students learn differently from each other (Price, 1977). Caplan (1981) has determined that brain structure influences language structure acquisition. It has been shown that different hemispheres of the brain contain different perception avenues (Schwartz, Davidson, & Maer, 1975). Stronck (1980) claims that several types of cells present in some brains are not present in others and such differences occur throughout the brain's structure. Talmadge and Shearer (1969) have determined that learning styles do exist. Their study shows that the characteristics of the content of a learning experience are a critical factor affecting relationships that exist between learner characteristics and instructional methods. Reiff (1992) claims that styles influence how students learn, how teachers teach, and how they interact. Each person is born with certain preferences toward particular styles, but these preferences are influenced by culture, experience and development. Keefe (1987) asserts that perceptual style is a matter of learner choice, but that preference develops from infancy almost subconsciously. A teacher alert to these preferences can arrange for flexibility in the learning environment. 
Learning style is the composite of characteristic cognitive, affective and physiological factors (Keefe, 1991). A useful approach for understanding and describing learning styles is the consideration of these factors. Cognitive styles are the information processing habits of an individual. These represent a person's typical modes of perceiving, thinking, remembering, and problem solving (Keefe, 1991). External information is received through the network of perceptual modalities. This information is the raw data that the brain processes for learning to occur. If there is a deficit in a perceptual modality the brain will receive incorrect or incomplete data and limited or inappropriate learning will occur (Keefe, 1988). Learning modalities are the sensory channels or pathways through which individuals give, receive, and store information. Most students learn with all of their modalities but have certain strengths and weaknesses in a specific modality (Reiff, 1992). These avenues of preferred perception include kinesthetic/tactual, auditory and visual (Eiszler, 1983). Stronck (1980) describes the kinesthetic/tactual learners as the ones who try things out, touch, feel, and manipulate. Kinesthetic/tactual learners express their feelings physically. They gesture when speaking, are poor listeners, and they lose interest in long speeches. These students learn best by doing. They need direct involvement in what they are learning. More than thirty percent of our students may have a kinesthetic/tactual preference for learning (Barbe, 1979). Auditory learners talk about what to do when they learn. They enjoy listening, but cannot wait to have a chance to talk themselves. These students respond well to lecture and discussion (Barbe, 1979). Visual learners learn by seeing. They think in pictures and have vivid imaginations. They have greater recall of concepts that are presented visually (Barbe, 1979). Most of the students not doing well in school are kinesthetic/tactual learners. Instruction geared toward the other modalities can cause these learners to fall behind. As this happens, students begin to lose confidence in themselves and resent school because of repeated failure (Reiff, 1992). An effective means to reach all learners is modality-based instruction, which consists of organizing around the different modalities to accommodate the needs of the learner. Modality-based instruction consists of using a variety of motivating, introductory techniques and then providing alternative strategies when a student fails to grasp the skill or concept. If a learner does not initially understand the lesson, the teacher needs to intervene, personalize instruction and reteach using a different method (Reiff, 1992). Perceptual modality preferences are not separate units of learning style. Instruments and assessment approaches that lead teachers and researchers to consider modality preferences in general terms may contribute to the misunderstanding of individual differences rather than help develop and use information on individual differences in teaching (Eiszler, 1983). Affective components of learning styles include personality and emotional characteristics related to the areas of persistence, responsibility, motivation and peer interaction (Reiff, 1992). The physiological components of learning styles are biologically based modes of response that are founded on sex-related differences, personal nutrition and health, and reactions to the physical environment (Keefe, 1991). 
Student performances in different subject areas are related to how individuals do, in fact, learn. Systematic ways to identify individual preferences for learning and suggestions for teaching students with varying learning styles can be based on an individual's diagnosis of his learning style (Price, 1977). Comprehension of individual differences and learning styles can provide teachers with the theory and knowledge upon which to base decisions. Once a teacher has determined why a student responds in a certain way, then they can make more intelligent decisions about instruction methods (Reiff, 1992). Several research studies have demonstrated that students can identify their own learning styles; when exposed to a teaching style that matches their learning style, students score higher on tests than those not taught in their learning style; and it is advantageous to teach and test students in their preferred modalities (Dunn and Dunn, 1978). The Learning Style Profile (LSP) provides educators with a well validated and easy-to-use instrument for diagnosing the characteristics of an individual's learning style. LSP provides an overview of the tendencies and preferences of the individual learner (Keefe, 1991). All students can benefit from a responsive learning environment and from the enhancement of their learning skills (Keefe, 1991). No educational program can be successful without attention to the personal learning needs of individual students. A single approach to instruction, whether traditional or innovative, simply does not do the job (Keefe, 1987). Using one teaching style or learning style exclusively is not conducive to a successful educational program (Dunn and Dunn, 1978). "Hard to reach and hard to teach students" are more successful when taught with different modality strategies (Reiff, 1992). Students vary widely in their cognitive styles yet few teachers consider this variable when planning instruction (Fenstermacher, 1983). If we wish students to have optimum learning in our schools, we must change the way we deliver instruction. If a student continues to fail to respond to changed instruction then we must retrain his or her cognitive styles to make school success possible (Keefe, 1987). It is nothing less than revolutionary to base instructional planning on an analysis of each student's learning characteristics. To do so moves education away from the traditional assembly-line mass production model to a handcrafted one (Keefe, 1987). Planning appropriate and varied lessons will improve both instructional and classroom management. Realistically, a teacher cannot be expected to have a different lesson for every child in the classroom; however, lessons can reflect an understanding of individual differences by appropriately incorporating strategies for a variety of styles. When individual differences are considered, many researchers claim that students will have higher achievement, a more positive attitude, and an improved self-concept (Reiff, 1992). Planning learning-style based instruction involves diagnosing individual learning style; profiling group preferences; determining group strengths and weaknesses; examining subject content for areas that may create problems with weak skills; analyzing students' prior achievement scores; remediating weak skills; assessing current instructional methods to determine whether they are adequate or require more flexibility; and modifying the learning environment and developing personalized learning experiences (Keefe, 1991). 
A better understanding of learning style can help teachers reduce frustration for themselves and their students (Reiff, 1992). A knowledge of style can also show teachers how some of their own behaviors can hinder student progress. Eiszler (1983) claims that varying teaching strategies to address all channels promotes learning no matter what students' preferences of cognitive styles are. Dunn (1979) showed that slow learners tend to increase their achievement when varied multisensory methods were used as a form of instruction. However, not everyone agrees with matching learning styles and teaching styles. Rector and Henderson (1970) have determined through their research that the effect of various teaching strategies depends on such factors as the nature of the concept to be taught, the students' characteristics, and the time available. In their study no significant difference was found between different teaching strategies and student achievement. Today low achievement is blamed directly on schools, their teachers, and the instructional programs or methods being used. Achievement scores reveal only where a child is academically. I.Q. tests suggest a child's potential, not why he or she has not progressed further or more quickly. Personality instruments serve to explain student behavior but they provide little insight into how to help him achieve. It is possible, however, to help each child learn more efficiently by diagnosing the individual's learning style (Dunn and Dunn, 1978). Just juggling the requirements of courses without attention to what needs to occur between teachers and students inside the classroom will not automatically produce better prepared students. Students not only need to feel confident that they can learn but also need to possess skills that they can use to facilitate their learning (Kilpatrick, 1985). Students who understand their learning styles and who exercise active control over their cognitive skills do better in school. They are better adjusted, have more positive attitudes toward learning and achieve at higher levels than their less skillful peers. As teachers continue to restructure the learning environment so as to accommodate various learning styles, evaluation must occur to determine the effectiveness of the teaching and learning process. Exploring and implementing alternative evaluation methods will provide the teacher with more complete and accurate information about the capabilities of their students. For example, student products, students working in cooperative groups, role-playing or simulated situations, and questions on audiotapes or computers are other avenues through which we can test students rather than the traditional paper and pencil method (Reiff, 1992). If a student does not learn the way we teach him, we must teach him the way he learns (Dunn and Dunn, 1978). As educators we must strive to continue to learn not only from research, but also from our students and each other. This continued education will certainly benefit our students as we try new ideas and new teaching strategies. As we implement new ideas we will address more learning styles and further facilitate the education of our students. We should not seek to have students, who are products of our teaching style, be clones of ourselves, but rather we should strive to teach our students how to build upon their strengths and become better educated individuals. 
By addressing students' learning styles and planning instruction accordingly we will meet more individuals' educational needs and will be more successful in our educational goals. Barbe, W. B., & Swassing, R. H. (1979). Teaching through modality strengths. New York, NY: Zaner-Bloser, Inc. Caplan, D. (1981). Prospects for neurolinguistic theory. Cognition, 10(1-3), 59-64. Dunn, R. (1979). Learning-A matter of style. Educational Leadership. Dunn, R., & Dunn, K. (1978, March). How to create hands-on materials. Instructor, pp. 134-141. Dunn, R., & Dunn, K. (1978). Teaching students through their individual learning styles. Reston, VA: Reston Publishing Company, Inc. Eiszler, C. F. (1983). Perceptual preferences as an aspect of adolescent learning styles. Education, 103(3), 231-242. Fenstermacher, G. D. (1983). Individual differences and the common curriculum. Chicago, IL: National Society for the Study of Education. Gregorc, A. F., & Ward, H. B. (1977). Implications for learning and teaching: A new definition for individual. NASSP Bulletin, 61, 20-26. Keefe, J. W. (1991). Learning style: Cognitive and thinking skills. Reston, VA: National Association of Secondary School Principals. Keefe, J. W. (1988). Profiling and utilizing learning style. Reston, VA: National Association of Secondary School Principals. Keefe, J. W. (1987). Theory and practice. Reston, VA: National Association of Secondary School Principals. Price, G., Dunn, R., & Dunn, K. (1977). A summary of research on learning style. New York, NY: American Educational Research Association. Rector, R. E., & Henderson, K. B. (1970). The relative effectiveness of four strategies for teaching mathematical concepts. Journal for Research in Mathematics Education, 1, 69-75. Reiff, J. C. (1992). Learning styles. Washington, DC: National Education Association. Schwartz, G. E., Davidson, R. J., & Maer, F. (1975). Right hemisphere lateralization for emotion in the human brain: Interactions with cognition. Science, 190(4211), 286-288. Stronck, D. R. (1980). The educational implications of human individuality. American Biology Teacher, 42, 146-151. Talmadge, G. K., & Shearer, J. W. (1969). Relationship among learning styles, instructional methods and the nature of learning experiences. Journal of Educational Psychology, 57, 222-230. One inch square Wheat Thins? Materials: Box of Wheat Thins. Wheat Thins are advertised as being one inch square. But is the average Wheat Thin really one inch square? Divide the class into groups and have your students use a ruler to determine the size of one Wheat Thin. Record each group's measures on the board. Have students average the measures and determine how close the average is to 1 square inch. Be sure to instruct students on the reasons why the measure is a square measure. You may want to have a class discussion on truth in advertising and how it relates to this activity. What is one square foot? Materials: One box of Wheat Thins for each group. The floor tiles in most classrooms are one square foot. Divide students into groups. Have your students assume that each cracker is one square inch and determine the area of the floor tile by covering the tile with crackers and counting the number of crackers on each tile. Ask if there is a quicker way to determine this area. Students should be able to determine the area by multiplying the length of each side of the tile. Extension: Estimate the perimeter of the football field in terms of Wheat Thins. What does the biggest area mean? 
Materials: 12 Wheat Thins for each student. Have students build a geometric figure that will encompass the most area using the crackers for the perimeter. Discuss their findings. This activity may also be done on a geoboard with a string 12 inches long. Students should find that the largest area occurs when they construct a square. One perimeter, how many different areas? Have students use a 12 inch piece of string to construct the following shapes and have them find the area of each shape: square, rectangle with one side 2 units long, equilateral triangle, right isosceles triangle, circle. Which shape has the greatest area? What makes area and perimeter different? How do the area of a circle and the area of a square compare? Materials: Grid paper, scissors, paper and pencil. Use grid paper to make a circle, a square and a rectangle of the same area. What is the smallest area possible? How do the areas of each compare? Using average pace length to determine area. Materials: Yardstick, paper and pencil. Have students determine their average pace length by walking 100 feet 10 times and averaging the number of paces it took them each time they walked. Use this average to calculate the area of a portion of land by "stepping off" the perimeter of the land and recording the lengths of each side. One suggestion is to determine if the band practice field will 'fit' on the student parking lot. Animals and their Home Ranges. Have students research different animals and record the size of their home ranges. An animal's home range is the amount of space the animal needs to fulfill its requirements for food, breeding, and so forth. Have students make graphs comparing the size of the animal to the area of its home range. Students should then discuss what might happen to an animal if the size of its habitat is altered through a natural disaster such as fire or man's development of the land. Students should realize that the larger the animal, the larger the home range of the animal. Presenting Surface Area. Materials: One inch grid paper, assorted rectangular boxes, tape, paper, pencil. Divide your class into groups and give each group several sheets of grid paper and a set of the other materials. Use the one inch square grid paper to cover the boxes. Tell your students that they are not to let the squares overlap and that they need to be certain to cover all exposed surfaces (like gift wrapping the box). Have students find the area of each side by counting the number of squares on each side. After recording this information have them find the total surface area of each box by adding the areas of the sides together. You may wish to ask students about finding a shortcut for doing this and they may derive the formula for the surface area of a box for you. Be sure that you are including an imaginary or real lid on your box. Modeling the Room. Have student groups measure all of the objects in the room to determine their dimensions. They are to build a scale model of their object using the grid paper and a scale of the class's choosing. They will need to label their object so that others can identify it. Use each group's object and put them together to form a scale model of the room. This activity should reinforce the need for accurate measures and what a scale model represents. Discuss relative size of the objects. Invariably, someone will have represented one of the objects incorrectly. Materials: Grid paper, patterns for solids. Have student groups build rectangular solids from grid paper. 
Let them choose which solid to model. Once they have built the models they need to find the surface area of the model. Formulating the area of a triangle. Use grid paper and pencil to draw a parallelogram. Have students cut the parallelogram so that they have 2 equal triangles. Find the area of the original parallelogram and the two triangles. Discuss the relationships between the areas. Students should derive that the area of each triangle is half that of the parallelogram. Extension or beforehand: Have students draw a parallelogram on a sheet of grid paper and cut it out. Students should cut the parallelogram so that the pieces form a rectangle of the same area. Geometric Lake Day. Provide students with a pool, swim rings, measuring devices, beach towels, balls, umbrellas, etc. They can provide edible solids of their choice, such as brownies and sodas. Students should complete the activity sheet. Provide students with circular objects and a measuring device. Have students complete the chart provided on exploring circumference and diameter. Give students different quadrilaterals inscribed in circles and have them complete the Discovering Ptolemy activity sheet. It is more interesting to the students to draw their own quadrilaterals and to post the measures on a chart. Students usually need to work together to formally state the theorem. 
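As a quick answer key for the "One perimeter, how many different areas?" activity above, the areas can be computed directly. The short R sketch below is an illustrative addition and not part of the original materials; it simply evaluates the standard area formulas for a 12-inch perimeter.

# Illustrative sketch (not from the original activities): areas of shapes that
# all have a 12-inch perimeter, to check which one encloses the most area.
P <- 12                                       # perimeter, inches
square_area    <- (P / 4)^2                   # square: side = P/4
rect_area      <- 2 * (P / 2 - 2)             # rectangle with one side fixed at 2 units
equil_tri_area <- sqrt(3) / 4 * (P / 3)^2     # equilateral triangle: side = P/3
leg            <- P / (2 + sqrt(2))           # right isosceles triangle: legs a, a, hypotenuse a*sqrt(2)
right_iso_area <- leg^2 / 2
circle_area    <- pi * (P / (2 * pi))^2       # circle: radius found from circumference P
areas <- c(square = square_area, rectangle = rect_area,
           equilateral_triangle = equil_tri_area,
           right_isosceles_triangle = right_iso_area, circle = circle_area)
print(round(areas, 2))                        # the circle encloses the greatest area

For a 12-inch perimeter this gives roughly 9, 8, 6.9, 6.2 and 11.5 square inches respectively, consistent with the expected finding that the circle wins.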
http://jwilson.coe.uga.edu/EMT705/EMT705.Hood.html
13
56
Electricity: Static Electricity Static Electricity: Problem Set Overview This set of 33 problems targets your ability to determine quantities such as the quantity of charge, separation distance between charges, electric force, electric field strength, and resultant forces and field strengths from verbal descriptions and diagrams of physical situations pertaining to static electricity. Problems range in difficulty from the very easy and straight-forward to the very difficult and complex. The more difficult problems are color-coded as blue problems. Relating the Quantity of Charge to Numbers of Protons and Electrons: Atoms are the building blocks of all objects. These atoms possess protons, neutrons and electrons. While neutrons are electrically neutral, the protons and electrons possess electrical charge. The proton and electron have a predictable amount of charge, with the proton being assigned a positive type of charge and the electron a negative type. The charge on an electron has a well-accepted, experimentally-determined value of -1.6 x 10^-19 C (where the negative simply indicates the type of charge). Protons have an equal amount of charge and an opposite type; thus, the charge of a proton is +1.6 x 10^-19 C. Objects consisting of atoms containing protons and electrons can have an overall charge on them if there is an imbalance of protons and electrons. An object with more protons than electrons will be charged positively and an object with more electrons than protons will be charged negatively. The magnitude of the quantity of charge on an object is simply the difference between the number of protons and electrons multiplied by 1.6 x 10^-19 C. Coulomb's Law of Electric Force: A charged object can exert an attractive or repulsive force on other charged objects in its vicinity. The amount of force follows a rather predictable pattern which is dependent upon the amount of charge present on the two objects and the distance of separation. Coulomb's law of electric force expresses the relationship in the form of the following equation: Felect = k • Q1 • Q2 / d^2 where Felect represents the magnitude of the electric force (in Newtons), Q1 and Q2 represent the quantity of charge (in Coulombs) on objects 1 and 2, and d represents the separation distance between the objects' centers (in meters). The symbol k represents a constant of proportionality known as Coulomb's constant and has the value of 9.0 x 10^9 N•m^2/C^2. A charged object can exert an electric influence upon objects from which it is spatially separated. This action-at-a-distance phenomenon is sometimes explained by saying the charged object establishes an electric field in the space surrounding it. Other objects which enter the field interact with the field and experience the influence of the field. The strength of the electric field can be tested by measuring the force exerted on a test charge. Of course, the more charge on the test charge, the more force it would experience. While the force experienced by the test charge is proportional to the amount of charge on the test charge, the ratio of force to charge would be the same regardless of the amount of charge on the test charge. By definition, the electric field strength (E) at a given location about a source charge is simply the ratio of the force experienced (F) by a test charge to the quantity of charge on the test charge (qtest). E = F / qtest The electric field strength as created by a source charge (Q) varies with location. 
In accord with Coulomb's law, the force on a test charge is greatest when closest to the source charge and less when further away. Substitution of the expression for force into the above equation and subsequent algebraic simplification yields a second equation for electric field (E) which expresses its strength in terms of the variables which affect it. The equation is E = k • Q / d^2, where k is Coulomb's constant of 9.0 x 10^9 N•m^2/C^2, Q is the quantity of charge on the source creating the field, and d is the distance from the center of the source. Direction of Force and Field Vectors: Many problems in this problem set will demand that you understand the directional nature of electric force and electric field. Electric forces between objects can be attractive or repulsive. Objects charged with an opposite type of charge will be attracted to each other and objects charged with the same type of charge will be repelled by each other. These attractive and repulsive interactions describe the direction of the forces exerted upon any object. In some instances involving configurations of three or more charges, an object will experience two or more forces in the same or different directions. In such instances, the interest is usually in knowing what the net electric force is. Finding the net electric force involves determining the magnitude and direction of the individual forces and then adding them up to determine the net force. When adding electric forces, the direction must be considered. A 10 unit force to the left and a 25 unit force to the right add up to a 15 unit force to the right. Such reasoning about direction will be critical to analyzing situations where two or more forces are present. Electric field is also a vector quantity that has a directional nature associated with it. By convention, the direction of the electric field vector at any location surrounding a source charge is in the direction that a positive test charge would be pushed or pulled if placed at that location. Even if a negative charge is used to measure the strength of a source charge's field, the convention for direction is based upon the direction of force on a positive test charge. Adding Vectors - SOH CAH TOA and Pythagorean Theorem: Electric field and electric force are vector quantities which have a direction. In situations in which there are two or more force or field vectors present, it is often desired to know what the net electric force or field is. Finding the net value from knowledge of individual values requires that vectors be added together in head-to-tail fashion. If the vectors being added are at right angles to each other, then the Pythagorean theorem can be used to determine the resultant or net value; a trigonometric function can be used to determine an angle and subsequently a direction. If the vectors being added are not at right angles to each other, then the usual procedure of adding them involves using a trigonometric function to resolve each vector into x- and y-components. The components are then added together to determine the sum of all x- and y-components. These sum values can then be added together in a right triangle to determine the net or resultant vector. And as usual, a trigonometric function can be used to determine an angle and subsequently a direction of the net or resultant vector. The graphic below depicts by means of diagrams how the components of a vector can be added together to determine the resultant of vectors A and B. 
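The calculations described above can be carried out in a few lines of code. The following R sketch is illustrative only and is not one of the problems in this set; the charge values, distances, and the perpendicular field geometry are assumed for the example.

# Illustrative sketch (not from the original problem set): Coulomb's law,
# electric field strength, and head-to-tail addition of two field vectors.
k <- 9.0e9                        # Coulomb's constant, N*m^2/C^2

# Force between two point charges (assumed example values)
Q1 <- 2.4e-6; Q2 <- 3.8e-6        # charges, C
d  <- 1.8                         # separation distance, m
Felect <- k * Q1 * Q2 / d^2
cat("Electric force (N):", Felect, "\n")

# Field strength a distance d from a source charge Q
E_point <- function(Q, d) k * abs(Q) / d^2
cat("Field of Q1 at 0.5 m (N/C):", E_point(Q1, 0.5), "\n")

# Net field from two perpendicular contributions (components, then Pythagorean theorem)
Ex <- E_point(Q1, 0.5)            # field component along +x (assumed geometry)
Ey <- E_point(Q2, 0.5)            # field component along +y (assumed geometry)
Enet  <- sqrt(Ex^2 + Ey^2)
angle <- atan2(Ey, Ex) * 180 / pi # direction measured from the +x axis, degrees
cat("Net field (N/C):", Enet, "at", angle, "degrees\n")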
Comparing Gravitational and Electrical Forces: Gravitational forces and electrical forces are often compared to each other. Both force types are fundamental forces which act over a distance of separation. Gravitational forces are based on masses attracting and follow the law of universal gravitation equation: Fgrav = G • m1 • m2 / d^2 where m1 and m2 are the masses of the attracting objects (in kg), d is the separation distance as measured from object center to object center (in meters) and G is a proportionality constant with a value of 6.67 x 10^-11 N•m^2/kg^2. Electrical forces are based on charged objects attracting or repelling and follow the Coulomb's law equation (as stated above). Some of the problems on this set will involve comparisons of the magnitude of the electric force to the magnitude of the gravitational force. The simultaneous use of both equations will be necessary in the solution of such problems. Habits of an Effective Problem-Solver An effective problem solver by habit approaches a physics problem in a manner that reflects a collection of disciplined habits. While not every effective problem solver employs the same approach, they all have habits which they share in common. These habits are described briefly here. An effective problem-solver... - ...reads the problem carefully and develops a mental picture of the physical situation. If needed, they sketch a simple diagram of the physical situation to help visualize it. - ...identifies the known and unknown quantities and records them in an organized manner, often times recording them on the diagram itself. They equate given values to the symbols used to represent the corresponding quantity (e.g., Q1 = 2.4 μC; Q2 = 3.8 μC; d = 1.8 m; Felect = ???). - ...plots a strategy for solving for the unknown quantity; the strategy will typically center around the use of physics equations and be heavily dependent upon an understanding of physics principles. - ...identifies the appropriate formula(s) to use, often times writing them down. Where needed, they perform the needed conversion of quantities into the proper unit. - ...performs substitutions and algebraic manipulations in order to solve for the unknown quantity. Additional Readings/Study Aids: The following pages from The Physics Classroom Tutorial may serve to be useful in assisting you in the understanding of the concepts and mathematics associated with these problems. - Charge and Charge Interactions - Coulomb's Law - Inverse Square Law - Newton's Laws and the Electrical Force - Electric Field Intensity
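As a worked illustration of the gravitational-versus-electrical comparison described in this overview, here is a brief R sketch. It is not one of the 33 problems; the masses, charges, and separation are assumed example values.

# Illustrative sketch (not from the original problem set): comparing the
# gravitational and electrical forces between two small charged spheres.
G <- 6.67e-11                     # universal gravitation constant, N*m^2/kg^2
k <- 9.0e9                        # Coulomb's constant, N*m^2/C^2

# Assumed example objects: two 1.0 kg spheres, each carrying 1.0 microCoulomb,
# separated by 1.0 m (center to center)
m1 <- 1.0; m2 <- 1.0              # masses, kg
Q1 <- 1.0e-6; Q2 <- 1.0e-6        # charges, C
d  <- 1.0                         # separation distance, m

Fgrav  <- G * m1 * m2 / d^2       # law of universal gravitation
Felect <- k * Q1 * Q2 / d^2       # Coulomb's law

cat("Gravitational force (N):", Fgrav, "\n")
cat("Electrical force (N):", Felect, "\n")
cat("Felect / Fgrav:", Felect / Fgrav, "\n")   # the electrical force dominates here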
http://www.physicsclassroom.com/calcpad/estatics/index.cfm
13
66
Quadrilateral Overview Basics of quadrilaterals including concave, convex ones. Parallelograms, rectangles, rhombi and squares. - What I wanna do in this video is give an overview of quadrilaterals. - And you can imagine from this prefix, or I guess you could say from the beginning of this word - quad - This involves four of something. - And quadrilaterals, as you can imagine, are, are shapes. - And we're gonna be talking about two-dimensional shapes that have four sides, and four vertices, and four angles. - So, for example, one, two, three, four. - That is a quadrilateral. - Although that last side didn't look too straight. - One, two, three, four. That is a quadrilateral. - One, two, three, four. These are all quadrilaterals. - They all have four sides, four vertices, and clearly four angles. - One angle, two angles, three angles, and four angles. - Here you can measure. Here actually let me draw this one a little bit bigger 'cause it's interesting. - So in this one right over here you have one angle, two angles, three angles - and then you have this really big angle right over there. - If you look at the, if you look at the interior angles of this quadrilateral. - Now quadrilaterals, as you can imagine, can be subdivided into other groups - based on the properties of the quadrilaterals. - And the main subdivision of quadrilaterals is between concave and convex quadrilaterals - So you have concave, and you have convex. - And the way I remember concave quadrilaterals, or really concave polygons of any number of shapes - is that it looks like something has caved in. - So, for example, this is a concave quadrilateral - It looks like this side has been caved in. - And one way to define concave quadrilaterals, - so let me draw it a little bit bigger, - so this right over here is a concave quadrilateral, - is that it has an interior angle, it has an interior angle that is larger than 180 degrees. - So, for example, this interior angle right over here is larger, is larger than 180 degrees. - It's an interesting proof, maybe I'll do a video, it's actually a pretty simple proof, - to show that if you have a concave quadrilateral - if at least one of the interior angles has a measure larger than 180 degrees - that none of the sides can be parallel to each other. - The other type of quadrilateral, you can imagine, - is when all of the interior angles are less than 180 degrees. - And you might say, "Well, what happens at 180 degrees?" - Well, if this angle was 180 degrees then these wouldn't be two different sides - it would just be one side and that would look like a triangle. - But if all of the interior angles are less than 180 degrees, - then you are dealing with a convex quadrilateral. - So this convex quadrilateral would involve that one and that one over there. - So this right over here is what a convex quadrilateral, - this is what a convex quadrilateral could look like. - Four points. Four sides. Four angles. - Now within convex quadrilaterals there are some other interesting categorizations. - So now we're just gonna focus on convex quadrilaterals - so that's gonna be all of this space over here. - So one type of convex quadrilateral is a trapezoid. - A trapezoid. 
And a trapezoid is a convex quadrilateral - and sometimes the definition here is a little bit, - different people will use different definitions, - so some people will say a trapezoid is a quadrilateral that has exactly two sides that are parallel to each other - So, for example, they would say that this right over here - this right over here is a trapezoid, where this side is parallel to that side. - If I give it some letters here, if I call this trapezoid A, B, C, D, - we could say that segment AB is parallel to segment DC - and because of that we know that this is, that this is a trapezoid - Now I said that the definition is a little fuzzy because some people say - you can have exactly one pair of parallel sides - but some people say at least one pair of parallel sides. - So if you say, if you use the original definition, - and that's the kind of thing that most people are referring to when they say a trapezoid, - exactly one pair of parallel sides, it might be something like this, - but if you use a broader definition of at least one pair of parallel sides, - then maybe this could also be considered a trapezoid. - So you have one pair of parallel sides. Like that. - And then you have another pair of parallel sides. Like that. - So this is a question mark where it comes to a trapezoid. - A trapezoid is definitely this thing here, where you have one pair of parallel sides. - Depending on people's definition, this may or may not be a trapezoid. - If you say it's exactly one pair of parallel sides, this is not a trapezoid because it has two pairs. - If you say at least one pair of parallel sides, then this is a trapezoid. - So I'll put that as a little question mark there. - But there is a name for this regardless of your definition of what a trapezoid is. - If you have a quadrilateral with two pairs of parallel sides, - you are then dealing with a parallelogram. - So the one thing that you definitely can call this is a parallelogram. - And I'll just draw it a little bit bigger. - So it's a quadrilateral. If I have a quadrilateral, and if I have two pairs of parallel sides - So two of the opposite sides are parallel. - So that side is parallel to that side and then this side is parallel to that side there - You're dealing with a parallelogram. - And then parallelograms can be subdivided even further. - They can be subdivided even further if the four angles in a parallelogram are all right angles, - you're dealing with a rectangle. So let me draw one like that. - So if the four sides, so from parallelograms, these are, this is all in the parallelogram universe. - What I'm drawing right over here, that is all the parallelogram universe. - This parallelogram tells me that opposite sides are parallel. - And if we know that all four angles are 90 degrees - and we've proven in previous videos how to figure out the sum of the interior angles of any polygon - and using that same method, you could say that the sum of the interior angles of a rectangle, - or of any, of any quadril, of any quadrilateral, is actually a hund- is actually 360 degrees, - and you see that in this special case as well, but maybe we'll prove it in a separate video. - But this right over here we would call a rectangle - a parallelogram, opposite sides parallel, - and we have four right angles. - Now if we have a parallelogram, where we don't necessarily have four right angles, - but we do have, where we do have the length of all the sides be equal, - then we're dealing with a rhombus. So let me draw it like that. 
- So it's a parallelogram. This is a parallelogram. - So that side is parallel to that side. This side is parallel to that side. - And we also know that all four sides have equal lengths. - So this side's length is equal to that side's length. - Which is equal to that side's length, which is equal to that side's length. - Then we are dealing with a rhombus. - So one way to view it, all rhombi are parallelograms - All rectangles are parallelograms - All parallelograms you cannot assume to be rectangles. - All parallelograms you cannot assume to be rhombi. - Now, something can be both a rectangle and a rhombus. - So let's say this is the universe of rectangles - So the universe of rectangles. Drawing a little of a venn diagram here. - Is that set of shapes, and the universe of rhombi is this set of shapes right over here. - So what would it look like? - Well, you would have four right angles, and they would all have the same length. - So, it would look like this. - So it would definitely be a parallelogram. - It would be a parallelogram. Four right angles. - Four right angles, and all the sides would have the same length. - And you probably. This is probably the first of the shapes that you learned, or one of the first shapes. - This is clearly a square. - So all squares are both rhombi, are are members of the, they can also be considered a rhombus - and they can also be considered a rectangle, - and they could also be considered a parallelogram. - But clearly, not all rectangles are squares - and not all rhombi are squares - and definitely not all parallelograms are squares. - This one, clearly, right over here is neither a rectangle, nor a rhombi - nor a square. - So that's an overview, just gives you a little bit of taxonomy of quadrilaterals. - And then in the next few videos, we can start to explore them and find their interesting properties - Or just do interesting problems involving them. 
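The taxonomy described in the video can be summarized as a small decision procedure. The R sketch below is an illustrative addition, not from the video; it assumes we already know how many pairs of parallel sides a convex quadrilateral has, whether all angles are right angles, and whether all sides are equal, and it uses the "exactly one pair of parallel sides" definition of a trapezoid.

# Illustrative sketch (not from the video): classifying a convex quadrilateral
# from three known facts about it.
classify_quadrilateral <- function(pairs_of_parallel_sides,
                                   all_right_angles,
                                   all_sides_equal) {
  if (pairs_of_parallel_sides == 2) {
    if (all_right_angles && all_sides_equal) return("square")
    if (all_right_angles)                    return("rectangle")
    if (all_sides_equal)                     return("rhombus")
    return("parallelogram")
  }
  if (pairs_of_parallel_sides == 1) return("trapezoid")   # "exactly one pair" definition
  return("general quadrilateral")
}

print(classify_quadrilateral(2, TRUE, TRUE))    # "square"
print(classify_quadrilateral(2, FALSE, TRUE))   # "rhombus"
print(classify_quadrilateral(1, FALSE, FALSE))  # "trapezoid"

Under the broader "at least one pair of parallel sides" definition mentioned in the video, the parallelogram cases would also count as trapezoids.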
http://www.khanacademy.org/math/geometry/quadrilaterals-and-polygons/v/quadrilateral-overview
13
102
Basic Radar Systems Principle of Operation Radar is an acronym for Radio Detection and Ranging. The term "radio" refers to the use of electromagnetic waves with wavelengths in the so-called radio wave portion of the spectrum, which covers a wide range from 10^4 km to 1 cm. Radar systems typically use wavelengths on the order of 10 cm, corresponding to frequencies of about 3 GHz. The detection and ranging part of the acronym is accomplished by timing the delay between transmission of a pulse of radio energy and its subsequent return. If the time delay is Δt, then the range may be determined by the simple formula R = c Δt/2 where c = 3 x 10^8 m/s, the speed of light at which all electromagnetic waves propagate. The factor of two in the formula comes from the observation that the radar pulse must travel to the target and back before detection, or twice the range. A radar pulse train is a type of amplitude modulation of the radar frequency carrier wave, similar to how carrier waves are modulated in communication systems. In this case, the information signal is quite simple: a single pulse repeated at regular intervals. The common radar carrier modulation, known as the pulse train, is shown below. The common parameters of radar are defined by referring to Figure 1. PW = pulse width. PW has units of time and is commonly expressed in ms. PW is the duration of the pulse. RT = rest time. RT is the interval between pulses. It is measured in ms. PRT = pulse repetition time. PRT has units of time and is commonly expressed in ms. PRT is the interval between the start of one pulse and the start of another. PRT is also equal to the sum, PRT = PW + RT. PRF = pulse repetition frequency. PRF has units of time^-1 and is commonly expressed in Hz (1 Hz = 1/s) or as pulses per second (pps). PRF is the number of pulses transmitted per second and is equal to the inverse of PRT. RF = radio frequency. RF has units of time^-1 or Hz and is commonly expressed in GHz or MHz. RF is the frequency of the carrier wave which is being modulated to form the pulse train. A practical radar system requires seven basic components, as illustrated below. 1. Transmitter. The transmitter creates the radio wave to be sent and modulates it to form the pulse train. The transmitter must also amplify the signal to a high power level to provide adequate range. The source of the carrier wave could be a Klystron, Traveling Wave Tube (TWT) or Magnetron. Each has its own characteristics and limitations. 2. Receiver. The receiver is sensitive to the range of frequencies being transmitted and provides amplification of the returned signal. In order to provide the greatest range, the receiver must be very sensitive without introducing excessive noise. The ability to discern a received signal from background noise depends on the signal-to-noise ratio (S/N). The background noise is specified by an average value, called the noise-equivalent-power (NEP). This directly equates the noise to a detected power level so that it may be compared to the return. Using these definitions, the criterion for successful detection of a target is Pr > (S/N) NEP, where Pr is the power of the return signal. Since this is a significant quantity in determining radar system performance, it is given a unique designation, Smin, and is called the Minimum Signal for Detection. Smin = (S/N) NEP Since Smin, expressed in Watts, is usually a small number, it has proven useful to define the decibel equivalent, MDS, which stands for Minimum Discernible Signal. 
MDS = 10 Log (Smin/1 mW) When using decibels, the quantity inside the brackets of the logarithm must be a number without units. In the definition of MDS, this number is the fraction Smin/1 mW. As a reminder, we use the special notation dBm for the units of MDS, where the "m" stands for 1 mW. This is shorthand for decibels referenced to 1 mW, which is sometimes written as dB//1mW. In the receiver, S/N sets a threshold for detection which determines what will be displayed and what will not. In theory, if S/N = 1, then only returns with power equal to or greater than the background noise will be displayed. However, the noise is a statistical process and varies randomly. The NEP is just the average value of the noise. There will be times when the noise exceeds the threshold that is set by the receiver. Since this will be displayed and appear to be a legitimate target, it is called a false alarm. If the SNR is set too high, then there will be few false alarms, but some actual targets may not be displayed (known as a miss). If SNR is set too low, then there will be many false alarms, or a high false alarm rate (FAR). Some receivers monitor the background and constantly adjust the SNR to maintain a constant false alarm rate, and are therefore called CFAR receivers. Some common receiver features include: 1.) Pulse Integration. The receiver takes an average return strength over many pulses. Random events like noise will not occur in every pulse and therefore, when averaged, will have a reduced effect as compared to actual targets that will be in every pulse. 2.) Sensitivity Time Control (STC). This feature reduces the impact of returns from sea state. It reduces the minimum SNR of the receiver for a short duration immediately after each pulse is transmitted. The effect of adjusting the STC is to reduce the clutter on the display in the region directly around the transmitter. The greater the value of STC, the greater the range from the transmitter in which clutter will be removed. However, an excessive STC will blank out potential returns close to the transmitter. 3.) Fast Time Constant (FTC). This feature is designed to reduce the effect of long duration returns that come from rain. This processing requires that the strength of the return signal change quickly over its duration. Since rain occurs over an extended area, it will produce a long, steady return. The FTC processing will filter these returns out of the display. Only pulses that rise and fall quickly will be displayed. In technical terms, FTC is a differentiator, meaning it determines the rate of change in the signal, which it then uses to discriminate pulses which are not changing rapidly. 3. Power Supply. The power supply provides the electrical power for all the components. The largest consumer of power is the transmitter which may require several kW of average power. The actual power transmitted in the pulse may be much greater than 1 kW. The power supply only needs to be able to provide the average amount of power consumed, not the high power level during the actual pulse transmission. Energy can be stored, in a capacitor bank for instance, during the rest time. The stored energy then can be put into the pulse when transmitted, increasing the peak power. The peak power and the average power are related by the quantity called duty cycle, DC. Duty cycle is the fraction of each transmission cycle that the radar is actually transmitting. Referring to the pulse train in Figure 2, the duty cycle can be seen to be: DC = PW / PRT = PW • PRF
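To make the timing and power relations above concrete, here is a short R sketch. It is an illustrative addition rather than part of the original notes, and the pulse width, PRF, peak power, echo delay, and MDS values are assumed example numbers.

# Illustrative sketch (not from the original notes): basic pulse-train arithmetic
# using R = c*dt/2, PRT = 1/PRF, DC = PW/PRT, and Smin recovered from MDS in dBm.
c_light <- 3e8            # speed of light, m/s

# Assumed example pulse parameters
PW  <- 1e-6               # pulse width, s
PRF <- 1000               # pulse repetition frequency, Hz
Pt  <- 100e3              # peak transmitted power, W

PRT  <- 1 / PRF           # pulse repetition time, s
DC   <- PW / PRT          # duty cycle (dimensionless fraction)
Pavg <- Pt * DC           # average power the supply must deliver, W

# Range from a measured echo delay
dt <- 200e-6              # echo delay, s (assumed)
R  <- c_light * dt / 2    # one-way range, m

# Minimum detectable signal from an assumed MDS value in dBm
MDS  <- -90                       # dBm
Smin <- 1e-3 * 10^(MDS / 10)      # W, inverting MDS = 10*log10(Smin / 1 mW)

cat("Duty cycle:", DC, " Average power (W):", Pavg, "\n")
cat("Range (km):", R / 1000, " Smin (W):", Smin, "\n")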
4. Synchronizer. The synchronizer coordinates the timing for range determination. It regulates the rate at which pulses are sent (i.e. sets PRF) and resets the timing clock for range determination for each pulse. Signals from the synchronizer are sent simultaneously to the transmitter, which sends a new pulse, and to the display, which resets the return sweep. 5. Duplexer. This is a switch which alternately connects the transmitter or receiver to the antenna. Its purpose is to protect the receiver from the high power output of the transmitter. During the transmission of an outgoing pulse, the duplexer will be aligned to the transmitter for the duration of the pulse, PW. After the pulse has been sent, the duplexer will align the antenna to the receiver. When the next pulse is sent, the duplexer will shift back to the transmitter. A duplexer is not required if the transmitted power is low. 6. Antenna. The antenna takes the radar pulse from the transmitter and puts it into the air. Furthermore, the antenna must focus the energy into a well-defined beam which increases the power and permits a determination of the direction of the target. The antenna must keep track of its own orientation, which can be accomplished by a synchro-transmitter. There are also antenna systems which do not physically move but are steered electronically (in these cases, the orientation of the radar beam is already known a priori). The beam-width of an antenna is a measure of the angular extent of the most powerful portion of the radiated energy. For our purposes the main portion, called the main lobe, will be all angles from the perpendicular where the power is not less than ½ of the peak power, or, in decibels, -3 dB. The beam-width is the range of angles in the main lobe, so defined. Usually this is resolved into a plane of interest, such as the horizontal or vertical plane. The antenna will have a separate horizontal and vertical beam-width. For a radar antenna, the beam-width can be predicted from the dimension of the antenna in the plane of interest: θ = λ/L where θ is the beam-width in radians, λ is the wavelength of the radar, and L is the dimension of the antenna in the direction of interest (i.e. width or height). In the discussion of communications antennas, it was stated that the beam-width for an antenna could be found using θ = 2λ/L. So it appears that radar antennas have one-half of the beam-width of communications antennas. The difference is that radar antennas are used both to transmit and receive the signal. The interference effects from each direction combine, which has the effect of reducing the beam-width. Therefore when describing two-way systems (like radar) it is appropriate to reduce the beam-width by a factor of ½ in the beam-width equation. The directional gain of an antenna is a measure of how well the beam is focused in all angles. If we were restricted to a single plane, the directional gain would merely be the ratio 2π/θ. Since the same power is distributed over a smaller range of angles, directional gain represents the amount by which the power in the beam is increased. Considering both angles, the directional gain is given by: Gdir = 4π/(θ φ) since there are 4π steradians corresponding to all directions (solid angle, measured in steradians, is defined to be the area of the beam front divided by the range squared; therefore a non-directional beam would cover an area of 4πR^2 at distance R, therefore 4π steradians). 
Here we used: θ = horizontal beam-width (radians) and φ = vertical beam-width (radians). Sometimes directional gain is measured in decibels, namely 10 log (Gdir). As an example, an antenna with a horizontal beam-width of 1.5° (0.025 radians) and vertical beam-width of 20° (0.33 radians) will have: directional gain (dB) = 10 log (4π/(0.025 × 0.333)) = 30.9 dB. Example: find the horizontal and vertical beam-width of the AN/SPS-49 long range radar system, and the directional gain in dB. The antenna is 7.3 m wide by 4.3 m tall, and operates at 900 MHz. The wavelength, λ = c/f = 0.33 m. Given that L = 7.3 m, then θ = λ/L = 0.33/7.3 = 0.045 radians, or θ ≈ 3°. The antenna is 4.3 m tall, so a similar calculation gives φ = 0.076 radians, or φ ≈ 4°. The directional gain, Gdir = 4π/(0.045 × 0.076) = 3638. Expressed in decibels, directional gain = 10 Log(3638) = 35.6 dB. 7. Display. The display unit may take a variety of forms but in general is designed to present the received information to an operator. The most basic display type is called an A-scan (amplitude vs. time delay). The vertical axis is the strength of the return and the horizontal axis is the time delay, or range. The A-scan provides no information about the direction of the target. The most common display is the PPI (plan position indicator). The A-scan information is converted into brightness and then displayed in the same relative direction as the antenna orientation. The result is a top-down view of the situation where range is the distance from the origin. The PPI is perhaps the most natural display for the operator and therefore the most widely used. In both cases, the synchronizer resets the trace for each pulse so that the range information will begin at the origin. In this example, the use of increased STC to suppress the sea clutter would be helpful. All of the parameters of the basic pulsed radar system will affect performance in some way. Here we find specific examples and quantify this dependence. Pulse Width (PW). The duration of the pulse and the length of the target along the radial direction determine the duration of the returned pulse. In most cases the length of the return is very similar to that of the transmitted pulse. In the display unit, the pulse (in time) will be converted into a pulse in distance. The range of values from the leading edge to the trailing edge will create some uncertainty in the range to the target. Taken at face value, the ability to accurately measure range is determined by the pulse width. If we designate the uncertainty in measured range as the range resolution, RRES, then it must be equal to the range equivalent of the pulse width, namely: RRES = c PW/2. Now, you may wonder why not just take the leading edge of the pulse as the range, which can be determined with much finer accuracy? The problem is that it is virtually impossible to create the perfect leading edge. In practice, the ideal pulse will really appear like the one shown below. To create a perfectly formed pulse with a vertical leading edge would require an infinite bandwidth. In fact you may equate the bandwidth, b, of the transmitter to the minimum pulse width, PW, by: PW = 1/(2b). Given this insight, it is quite reasonable to say that the range can be determined no more accurately than c PW/2 or equivalently RRES = c/(4b). In fact, high resolution radar is often referred to as wide-band radar, which you now see as equivalent statements. One term is referring to the time domain and the other the frequency domain. The duration of the pulse also affects the minimum range at which the radar system can detect. 
The duration of the pulse also affects the minimum range at which the radar system can detect. The outgoing pulse must physically clear the antenna before the return can be processed. Since this lasts for a time interval equal to the pulse width, PW, the minimum displayed range is:

RMIN = c PW/2

The minimum range effect can be seen on a PPI display as a saturated or blank area around the origin. Increasing the pulse width while keeping the other parameters the same will also affect the duty cycle and therefore the average power. For many systems it is desirable to keep the average power fixed. Then the PRF must be changed simultaneously with PW in order to keep the product PW × PRF the same. For example, if the pulse width is reduced by a factor of ½ in order to improve the resolution, then the PRF is usually doubled.

Pulse Repetition Frequency (PRF). The frequency of pulse transmission affects the maximum range that can be displayed. Recall that the synchronizer resets the timing clock as each new pulse is transmitted. Returns from distant targets that do not reach the receiver until after the next pulse has been sent will not be displayed correctly. Since the timing clock has been reset, they will be displayed as if the range were less than actual. When this happens, the range information is ambiguous: the operator cannot know whether the displayed range is the actual range or some greater value.

The maximum actual range that can be detected and displayed without ambiguity, or the maximum unambiguous range, is just the range corresponding to a time interval equal to the pulse repetition time, PRT. Therefore, the maximum unambiguous range is

RUNAMB = c PRT/2 = c/(2 PRF)

When a radar is scanning, it is necessary to control the scan rate so that a sufficient number of pulses will be transmitted in any particular direction in order to guarantee reliable detection. If too few pulses are used, it will be more difficult to distinguish false targets from actual ones. False targets may be present in one or two pulses but certainly not in ten or twenty in a row. Therefore, to maintain a low false detection rate, the number of pulses transmitted in each direction should be kept high, usually above ten. For systems with high pulse repetition rates (frequencies), the radar beam can be repositioned more rapidly and therefore scan more quickly. Conversely, if the PRF is lowered, the scan rate needs to be reduced. For simple scans it is easy to quantify the number of pulses that will be returned from any particular target. Let t represent the dwell time, which is the duration that the target remains in the radar's beam during each scan. The number of pulses, N, that the target will be exposed to during the dwell time is:

N = t PRF

We may rearrange this equation to make a requirement on the dwell time for a particular scan:

tmin = Nmin/PRF

So it is easy to see that high pulse repetition rates permit smaller dwell times. For a continuous circular scan, for example, the dwell time is related to the rotation rate and the beam-width:

t = θ/Ω

where θ = beam-width [degrees] and Ω = rotation rate [degrees/sec], which gives the dwell time in seconds. These relationships can be combined, giving the following equation from which the maximum scan rate may be determined for a minimum number of pulses per scan:

Ωmax = θ PRF/Nmin
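Here is a short R sketch of these PRF relationships; all input values (PRF, beam-width, rotation rate, minimum pulse count) are assumed round numbers chosen only for illustration, not the parameters of any particular radar.

# Maximum unambiguous range, dwell time and maximum scan rate
# for an assumed PRF of 1 kHz, a 3 degree beam and a 36 deg/s scan.
c0  <- 3e8          # speed of light (m/s)
PRF <- 1000         # pulse repetition frequency (Hz), assumed
PRT <- 1 / PRF      # pulse repetition time (s)

R_unamb <- c0 * PRT / 2              # maximum unambiguous range: 150 km here

theta_deg <- 3                       # beam-width (degrees), assumed
omega_deg <- 36                      # rotation rate (degrees/s), assumed
t_dwell   <- theta_deg / omega_deg   # dwell time per scan (s)
N_pulses  <- t_dwell * PRF           # pulses on target per scan, about 83

N_min     <- 10                      # required pulses per direction
omega_max <- theta_deg * PRF / N_min # maximum scan rate (degrees/s)

c(R_unamb_km = R_unamb / 1000, t_dwell_s = t_dwell,
  N_pulses = N_pulses, omega_max = omega_max)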
Radar Frequency. Finally, the frequency of the radio carrier wave will also affect how the radar beam propagates. At the low frequency extremes, radar beams will refract in the atmosphere and can be caught in "ducts" which result in long ranges. At the high extreme, the radar beam will behave much like visible light and travel in very straight lines. Very high frequency radar beams suffer high losses and are not suitable for long range systems. The frequency will also affect the beam-width. For the same antenna size, a low frequency radar will have a larger beam-width than a high frequency one. In order to keep the beam-width constant, a low frequency radar needs a larger antenna.

Theoretical Maximum Range Equation. A radar receiver can detect a target only if the return is of sufficient strength. Let us designate the minimum return signal that can be detected as Smin, which should have units of Watts, W. The size and ability of a target to reflect radar energy can be summarized into a single term, σ, known as the radar cross-section, which has units of m². If absolutely all of the incident radar energy on the target were reflected equally in all directions, then the radar cross-section would be equal to the target's cross-sectional area as seen by the transmitter. In practice, some energy is absorbed and the reflected energy is not distributed equally in all directions. Therefore, the radar cross-section is quite difficult to estimate and is normally determined by measurement. Given these new quantities, we can construct a simple model for the radar power that returns to the receiver:

Pr = Pt × G × 1/(4πR²) × σ × 1/(4πR²) × Ae

The terms in this equation have been grouped to illustrate the sequence from transmission to collection. Here is the sequence. The transmitter puts out peak power Pt into the antenna, which focuses it into a beam with gain G = ρ Gdir. The power gain is similar to the directional gain, Gdir, except that it must also include losses from the transmitter to the antenna. These losses are summarized by the single term for efficiency, ρ. The radar energy spreads out uniformly in all directions; the power per unit area must therefore decrease as the area increases. Since the energy is spread out over the surface of a sphere, the factor 1/(4πR²) accounts for the reduction. The radar energy is collected by the surface of the target and reflected; the radar cross-section σ accounts for both of these processes. The reflected energy spreads out just like the transmitted energy. The receiving antenna collects the energy in proportion to its effective area, known as the antenna's aperture, Ae. This also includes losses in the reception process until the signal reaches the receiver, hence the subscript "e" for "effective." The effective aperture is related to the physical aperture, A, by the same efficiency term used in the power gain, given the symbol ρ, so that

Ae = ρ A

Our criterion for detection is simply that the received power, Pr, exceed the minimum, Smin. Since the received power decreases with range, the maximum detection range will occur when the received power is equal to the minimum, i.e. Pr = Smin. If you solve for the range, you get an equation for the maximum theoretical detection range:

Rmax = [Pt G σ Ae / ((4π)² Smin)]^(1/4)

Perhaps the most important feature of this equation is the fourth-root dependence. The practical implication is that one must greatly increase the output power to get a modest increase in performance. For example, in order to double the range, the transmitted power would have to be increased 16-fold. You should also note that the minimum power level for detection, Smin, depends on the noise level.
In practice, this quantity must constantly be varied in order to achieve the best balance between high sensitivity, which is susceptible to noise, and low sensitivity, which may limit the radar's ability to detect targets.

Example: Find the maximum range of the AN/SPS-49 radar, given the following data: antenna size = 7.3 m wide by 4.3 m tall; efficiency = 80%; peak power = 360 kW; cross-section = 1 m²; Smin = 1 × 10⁻¹² W. We know from the previous example that the directional antenna gain is Gdir = 4π/(θφ) ≈ 3430. The power gain is G = ρ Gdir = 0.8 × 3430 = 2744. Likewise, the effective aperture is Ae = ρA = 0.8 × (7.3 × 4.3) = 25.1 m². Therefore the range is Rmax = [Pt G σ Ae / ((4π)² Smin)]^(1/4), or R = 112 km.
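The worked example can be checked with a few lines of R using the range equation above; the input values are simply those quoted in the example.

# Theoretical maximum range for the AN/SPS-49 example.
Pt    <- 360e3               # peak power (W)
rho   <- 0.8                 # efficiency
Gdir  <- 3430                # directional gain (rounded value used in the text)
G     <- rho * Gdir          # power gain, about 2744
Ae    <- rho * (7.3 * 4.3)   # effective aperture, about 25.1 m^2
sigma <- 1                   # radar cross-section (m^2)
Smin  <- 1e-12               # minimum detectable signal (W)

Rmax <- (Pt * G * sigma * Ae / ((4 * pi)^2 * Smin))^(1/4)
Rmax / 1000                  # maximum theoretical range in km, about 112 km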
http://www.fas.org/man/dod-101/navy/docs/es310/radarsys/radarsys.htm
Transcript: OK, so anyway, let's get started. So, the first unit of the class, so basically I'm going to go over the first half of the class today, and the second half of the class on Tuesday just because we have to start somewhere. So, the first things that we learned about in this class were vectors, and how to do dot-product of vectors. So, remember the formula that A dot B is the sum of ai times bi. And, geometrically, it's length A times length B times the cosine of the angle between them. And, in particular, we can use this to detect when two vectors are perpendicular. That's when their dot product is zero. And, we can use that to measure angles between vectors by solving for cosine in this. Hopefully, at this point, this looks a lot easier than it used to a few months ago. So, hopefully at this point, everyone has this kind of formula memorized and has some reasonable understanding of that. But, if you have any questions, now is the time. No? Good. Next we learned how to also do cross product of vectors in space -- -- and remember, we saw how to use that to find area of, say, a triangle or a parallelogram in space because the length of the cross product is equal to the area of a parallelogram formed by the vectors a and b. And, we can also use that to find a vector perpendicular to two given vectors, A and B. And so, in particular, that comes in handy when we are looking for the equation of a plane because we've seen -- So, the next topic would be equations of planes. And, we've seen that when you put the equation of a plane in the form ax + by + cz = d, well, in there is actually the normal vector to the plane, or some normal vector to the plane. So, typically, we use cross product to find plane equations. OK, is that still reasonably familiar to everyone? Yes, very good. OK, we've also seen how to look at equations of lines, and those were of a slightly different nature because we've been doing them as parametric equations. So, typically we had equations of a form, maybe x equals some constant times t, y equals constant plus constant times t. z equals constant plus constant times t where these terms here correspond to some point on the line. And, these coefficients here correspond to a vector parallel to the line. That's the velocity of the moving point on the line. And, well, we've learned in particular how to find where a line intersects a plane by plugging in the parametric equation into the equation of a plane. We've learned more general things about parametric equations of curves. So, there are these infamous problems in particular where you have these rotating wheels and points on them, and you have to figure out, what's the position of a point? And, the general principle of those is that you want to decompose the position vector into a sum of simpler things. OK, so if you have a point on a wheel that's itself moving and something else, then you might want to first figure out the position of a center of a wheel, then find the angle by which the wheel has turned, and then get to the position of a moving point by adding together simpler vectors. So, the general principle is really to try to find one parameter that will let us understand what has happened, and then decompose the motion into a sum of simpler effects. So, we want to decompose the position vector into a sum of simpler vectors. OK, so maybe now we are getting a bit out of some people's comfort zone, but hopefully it's not too bad. Do you have any general questions about how one would go about that, or, yes? Sorry? 
What about it? Parametric descriptions of a plane, so we haven't really done that because you would need two parameters to parameterize a plane just because it's a two dimensional object. So, we have mostly focused on the use of parametric equations just for one dimensional objects, lines, and curves. So, you won't need to know about parametric descriptions of planes on a final, but if you really wanted to, you would think of defining a point on a plane as starting from some given point. Then you have two vectors given on the plane. And then, you would add a multiple of each of these vectors to your starting point. But see, the difficulty is to convert from that to the usual equation of a plane, you would still have to go back to this cross product method, and so on. So, it is possible to represent a plane, or, in general, a surface in parametric form. But, very often, that's not so useful. Yes? How do you parametrize an ellipse in space? Well, that depends on how it's given to you. But, OK, let's just do an example. Say that I give you an ellipse in space as maybe the more, well, one exciting way to parameterize an ellipse in space is maybe the intersection of a cylinder with a slanted plane. That's the kind of situations where you might end up with an ellipse. OK, so if I tell you that maybe I'm intersecting a cylinder with equation x squared plus y squared equals a squared with a slanted plane to get, I messed up my picture, to get this ellipse of intersection, so, of course you'd need the equation of a plane. And, let's say that this plane is maybe given to you. Or, you can switch it to a form where you can get z as a function of x and y. So, maybe it would be z equals, I've already used a; I need to use a new letter. Let's say c1 x plus c2 y plus d, whatever, something like that. So, what I would do is first I would look at what my ellipse does in the directions in which I understand it the best. And, those directions would be probably the xy plane. So, I would look at the xy coordinates. Well, if I look at it from above xy, my ellipse looks like just a circle of radius a. So, if I'm only concerned with x and y, presumably I can just do it the usual way for a circle. x equals a cosine t. y equals a sine t, OK? And then, z would end up being just, well, whatever the value of z is to be on the slanted plane above a given xy position. So, in fact, it would end up being ac1 cosine t plus ac2 sine t plus d, I guess. OK, that's not a particularly elegant parameterization, but that's the kind of thing you might end up with. Now, in general, when you have a curve in space, it would rarely be the case that you have to get a parameterization from scratch unless you are already being told information about how it looks in one of the coordinate planes, you know, this kind of method. Or, at least you'd have a lot of information that would quickly reduce to a plane problem somehow. Of course, I could also just give you some formulas and let you figure out what's going on. But, in general, we've done more stuff with plane curves. With plane curves, certainly there's interesting things with all sorts of mechanical gadgets that we can study. OK, any other questions on that? No? OK, so let me move on a bit and point out that with parametric equations, we've looked also at things like velocity and acceleration. So, the velocity vector is the derivative of a position vector with respect to time. And, it's not to be confused with speed, which is the magnitude of v. 
So, the velocity vector is going to be always tangent to the curve. And, its length will be the speed. That's the geometric interpretation. So, just to provoke you, I'm going to write, again, that formula that was that v equals T hat ds dt. What do I mean by that? If I have a curve, and I'm moving on the curve, well, I have the unit tangent vector which I think at the time I used to draw in blue. But, blue has been abolished since then. So, I'm going to draw it in red. OK, so that's a unit vector that goes along the curve, and then the actual velocity is going to be proportional to that. And, what's the length? Well, it's the speed. And, the speed is how much arc length on the curve I go per unit time, which is why I'm writing ds dt. That's another guy. That's another of these guys for the speed, OK? And, we've also learned about acceleration, which is the derivative of velocity. So, it's the second derivative of a position vector. And, as an example of the kinds of manipulations we can do, in class we've seen Kepler's second law, which explains how if the acceleration is parallel to the position vector, then r cross v is going to be constant, which means that the motion will be in a plane, and you will sweep area at a constant rate. So now, that is not in itself a topic for the exam, but the kinds of methods of differentiating vector quantities, applying the product rule to take the derivative of a dot or cross product and so on are definitely fair game. I mean, we've seen those on the first exam. They were there, and most likely they will be on the final. OK, so I mean that's the extent to which Kepler's law comes up, only just knowing the general type of manipulations and proving things with vector quantities, but not again the actual Kepler's law itself. I skipped something. I skipped matrices, determinants, and linear systems. OK, so we've seen how to multiply matrices, and how to write linear systems in matrix form. So, remember, if you have a 3x3 linear system in the usual sense, so, you can write this in a matrix form where you have a 3x3 matrix and you have an unknown column vector. And, their matrix product should be some given column vector. OK, so if you don't remember how to multiply matrices, please look at the notes on that again. And, also you should remember how to invert a matrix. So, how did we invert matrices? Let me just remind you very quickly. So, I should say 2x2 or 3x3 matrices. Well, you need to have a square matrix to be able to find an inverse. The method doesn't work, doesn't make sense. Otherwise, then the concept of inverse doesn't work. And, if it's larger than 3x3, then we haven't seen that. So, let's say that I have a 3x3 matrix. What I will do is I will start by forming the matrix of minors. So, remember that minors, so, each entry is a 2x2 determinant in the case of a 3x3 matrix formed by deleting one row and one column. OK, so for example, to get the first minor, especially in the upper left corner, I would delete the first row, the first column. And, I would be left with this 2x2 determinant. I take this times that minus this times that. I get a number that gives my first minor. And then, same with the others. Then, I flip signs according to this checkerboard pattern, and that gives me the matrix of cofactors. OK, so all it means is I'm just changing the signs of these four entries and leaving the others alone. And then, I take the transpose of that. So, that means I read it horizontally and write it down vertically. I swapped the rows and the columns. 
And then, I divide by the inverse. Well, I divide by the determinant of the initial matrix. OK, so, of course, this is kind of very theoretical, and I write it like this. Probably it makes more sense to do it on an example. I will let you work out examples, or bug your recitation instructors so that they do one on Monday if you want to see that. It's a fairly straightforward method. You just have to remember the steps. But, of course, there's one condition, which is that the determinant of a matrix has to be nonzero. So, in fact, we've seen that, oh, there is still one board left. We've seen that a matrix is invertible -- -- exactly when its determinant is not zero. And, if that's the case, then we can solve the linear system, AX equals B by just setting X equals A inverse B. That's going to be the only solution to our linear system. Otherwise, well, AX equals B has either no solution, or infinitely many solutions. Yes? The determinant of a matrix real quick? Well, I can do it that quickly unless I start waving my hands very quickly, but remember we've seen that you have a matrix, a 3x3 matrix. Its determinant will be obtained by doing an expansion with respect to, well, your favorite. But usually, we are doing it with respect to the first row. So, we take this entry and multiply it by that determinant. Then, we take that entry, multiply it by that determinant but put a minus sign. And then, we take that entry and multiply it by this determinant here, and we put a plus sign for that. OK, so maybe I should write it down. That's actually the same formula that we are using for cross products. Right, when we do cross products, we are doing an expansion with respect to the first row. That's a special case. OK, I mean, do you still want to see it in more details, or is that OK? Yes? That's correct. So, if you do an expansion with respect to any row or column, then you would use the same signs that are in this checkerboard pattern there. So, if you did an expansion, actually, so indeed, maybe I should say, the more general way to determine it is you take your favorite row or column, and you just multiply the corresponding entries by the corresponding cofactors. So, the signs are plus or minus depending on what's in that diagram there. Now, in practice, in this class, again, all we need is to do it with respect to the first row. So, don't worry about it too much. OK, so, again, the way that we've officially seen it in this class is just if you have a1, a2, a3, b1, b2, b3, c1, c2, c3, so if the determinant is a1 times b2 b3, c2 c3, minus a2 b1 b3 c1 c3 plus a3 b1 b2 c1 c2. And, this minus is here basically because of the minus in the diagram up there. But, that's all we need to know. Yes? How do you tell the difference between infinitely many solutions or no solutions? That's a very good question. So, in full generality, the answer is we haven't quite seen a systematic method. So, you just have to try solving and see if you can find a solution or not. So, let me actually explain that more carefully. So, what happens to these two situations when a is invertible or not? So, remember, in the linear system, you can think of a linear system as asking you to find the intersection between three planes because each equation is the equation of a plane. So, Ax = B for a 3x3 system means that x should be in the intersection of three planes. And then, we have two cases. 
So, the case where the system is invertible corresponds to the general situation where your three planes somehow all just intersect in one point. And then, the situation where the determinant, that's when the determinant is not zero, you get just one point. However, sometimes it will happen that all the planes are parallel to the same direction. So, determinant a equals zero means the three planes are parallel to a same vector. And, in fact, you can find that vector explicitly because that vector has to be perpendicular to all the normals. So, at some point we saw other subtle things about how to find the direction of this line that's parallel to all the planes. So, now, this can happen either with all three planes containing the same line. You know, they can all pass through the same axis. Or it could be that they have somehow shifted with respect to each other. And so, it might look like this. Then, the last one is actually in front of that. So, see, the lines of intersections between two of the planes, so, here they all pass through the same line, and here, instead, they intersect in one line here, one line here, and one line there. And, there's no triple intersection. So, in general, we haven't really seen how to decide between these two cases. There's one important situation where we have seen we must be in the first case that when we have a homogeneous system, so that means if the right hand side is zero, then, well, x equals zero is always a solution. It's called the trivial solution. It's the obvious one, if you want. So, you know that, and why is that? Well, that's because all of your planes have to pass through the origin. So, you must be in this case if you have a noninvertible system where the right hand side is zero. So, in that case, if the right hand side is zero, there's two cases. Either the matrix is invertible. Then, the only solution is the trivial one. Or, if a matrix is not invertible, then you have infinitely many solutions. If B is not zero, then we haven't really seen how to decide. We've just seen how to decide between one solution or zero,infinitely many, but not how to decide between these last two cases. Yes? I think in principle, you would be able to, but that's, well, I mean, that's a slightly counterintuitive way of doing it. I think it would probably work. Well, I'll let you figure it out. OK, let me move on to the second unit, maybe, because we've seen a lot of stuff, or was there a quick question before that? No? OK. OK, so what was the second part of the class about? Well, hopefully you kind of vaguely remember that it was about functions of several variables and their partial derivatives. OK, so the first thing that we've seen is how to actually view a function of two variables in terms of its graph and its contour plot. So, just to remind you very quickly, if I have a function of two variables, x and y, then the graph will be just the surface given by the equation z equals f of xy. So, for each x and y, I plot a point at height given with the value of the a function. And then, the contour plot will be the topographical map for this graph. It will tell us, what are the various levels in there? So, what it amounts to is we slice the graph by horizontal planes, and we get a bunch of curves which are the points at given height on the plot. And, so we get all of these curves, and then we look at them from above, and that gives us this map with a bunch of curves on it. And, each of them has a number next to it which tells us the value of a function there. 
And, from that map, we can, of course, tell things about where we might be able to find minima or maxima of our function, and how it varies with respect to x or y or actually in any direction at a given point. So, now, the next thing that we've learned about is partial derivatives. So, for a function of two variables, there would be two of them. There's f sub x which is partial f partial x, and f sub y which is partial f partial y. And, in terms of a graph, they correspond to slicing by a plane that's parallel to one of the coordinate planes, so that we either keep x constant, or keep y constant. And, we look at the slope of a graph to see the rate of change of f with respect to one variable only when we hold the other one constant. And so, we've seen in particular how to use that in various places, but, for example, for linear approximation we've seen that the change in f is approximately equal to f sub x times the change in x plus f sub y times the change in y. So, you can think of f sub x and f sub y as telling you how sensitive the value of f is to changes in x and y. So, this linear approximation also tells us about the tangent plane to the graph of f. In fact, when we turn this into an equality, that would mean that we replace f by the tangent plane. We've also learned various ways of, before I go on, I should say, of course, we've seen these also for functions of three variables, right? So, we haven't seen how to plot them, and we don't really worry about that too much. But, if you have a function of three variables, you can do the same kinds of manipulations. So, we've learned about differentials and chain rules, which are a way of repackaging these partial derivatives. So, the differential is just, by definition, this thing called df which is f sub x times dx plus f sub y times dy. And, what we can do with it is just either plug values for changes in x and y, and get approximation formulas, or we can look at this in a situation where x and y will depend on something else, and we get a chain rule. So, for example, if f is a function of t time, for example, and so is y, then we can find the rate of change of f with respect to t just by dividing this by dt. So, we get df dt equals f sub x dx dt plus f sub y dy dt. We can also get other chain rules, say, if x and y depend on more than one variable, if you have a change of variables, for example, x and y are functions of two other guys that you call u and v, then you can express dx and dy in terms of du and dv, and plugging into df you will get the manner in which f depends on u and v. So, that will give you formulas for partial f partial u, and partial f partial v. They look just like these guys except there's a lot of curly d's instead of straight ones, and u's and v's in the denominators. OK, so that lets us understand rates of change. We've also seen yet another way to package partial derivatives into not a differential, but instead, a vector. That's the gradient vector, and I'm sure it was quite mysterious when we first saw it, but hopefully by now, well, it should be less mysterious. OK, so we've learned about the gradient vector which is del f is a vector whose components are just the partial derivatives. So, if I have a function of just two variables, then it's just this. And, so one observation that we've made is that if you look at a contour plot of your function, so maybe your function is zero, one, and two, then the gradient vector is always perpendicular to the contour plot, and always points towards higher ground. 
OK, so the reason for that was that if you take any direction, you can measure the directional derivative, which means the rate of change of f in that direction. So, given a unit vector, u, which represents some direction, so for example let's say I decide that I want to go in this direction, and I ask myself, how quickly will f change if I start from here and I start moving towards that direction? Well, the answer seems to be, it will start to increase a bit, and maybe at some point later on something else will happen. But at first, it will increase. So, the directional derivative is what we've called df by ds in the direction of this unit vector, and basically the only thing we know to be able to compute it, the only thing we need is that it's the dot product between the gradient and this vector u hat. In particular, the directional derivatives in the direction of i hat or j hat are just the usual partial derivatives. That's what you would expect. OK, and so now you see in particular if you try to go in a direction that's perpendicular to the gradient, then the directional derivative will be zero because you are moving on the level curve. So, the value doesn't change, OK? Questions about that? Yes? Yeah, so let's see, so indeed to look at more recent things, if you are taking the flux through something given by an equation, so, if you have a surface given by an equation, say, f equals one. So, say that you have a surface here or a curve given by an equation, f equals constant, then the normal vector to the surface is given by taking the gradient of f. And that is, in general, not a unit normal vector. Now, if you wanted the unit normal vector to compute flux, then you would just scale this guy down to unit length, OK? So, if you wanted a unit normal, that would be the gradient divided by its length. However, for flux, that's still of limited usefulness because you would still need to know about ds. But, remember, we've seen a formula for flux in terms of a non-unit normal vector, N over N dot k dx dy. So, indeed, this is how you could actually handle calculations of flux through pretty much anything. Any other questions about that? OK, so let me continue with a couple more things we need to, so, we've seen how to do min/max problems, in particular, by looking at critical points. So, critical points, remember, are the points where all the partial derivatives are zero. So, if you prefer, that's where the gradient vector is zero. And, we know how to decide using the second derivative test whether a critical point is going to be a local min, a local max, or a saddle point. Actually, we can't always quite decide because, remember, we look at the second partials, and we compute this quantity ac minus b squared. And, if it happens to be zero, then actually we can't conclude. But, most of the time we can conclude. However, that's not all we need to look for an absolute global maximum or minimum. For that, we also need to check the boundary points, or look at the behavior of a function, at infinity. So, we also need to check the values of f at the boundary of its domain of definition or at infinity. Just to give you an example from single variable calculus, if you are trying to find the minimum and the maximum of f of x equals x squared, well, you'll find quickly that the minimum is at zero where x squared is zero. If you are looking for the maximum, you better not just look at the derivative because you won't find it that way. 
However, if you think for a second, you'll see that if x becomes very large, then the function increases to infinity. And, similarly, if you try to find the minimum and the maximum of x squared when x varies only between one and two, well, you won't find the critical point, but you'll still find that the smallest value of x squared is when x is at one, and the largest is at x equals two. And, all this business about boundaries and infinity is exactly the same stuff, but with more than one variable. It's just the story that maybe the minimum and the maximum are not quite visible, but they are at the edges of a domain we are looking at. Well, in the last three minutes, I will just write down a couple more things we've seen there. So, how to do max/min problems with non-independent variables -- So, if your variables are related by some condition, g equals some constant. So, then we've seen the method of Lagrange multipliers. OK, and what this method says is that we should solve the equation gradient f equals some unknown scalar lambda times the gradient, g. So, that means each partial, f sub x equals lambda g sub x and so on, and of course we have to keep in mind the constraint equation so that we have the same number of equations as the number of unknowns because you have a new unknown here. And, the thing to remember is that you have to be careful that the second derivative test does not apply in this situation. I mean, this is only in the case of independent variables. So, if you want to know if something is a maximum or a minimum, you just have to use common sense or compare the values of a function at the various points you found. Yes? Will we actually have to calculate? Well, that depends on what the problem asks you. It might ask you to just set up the equations, or it might ask you to solve them. So, in general, solving might be difficult, but if it asks you to do it, then it means it shouldn't be too hard. I haven't written the final yet, so I don't know what it will be, but it might be an easy one. And, the last thing we've seen is constrained partial derivatives. So, for example, if you have a relation between x, y, and z, which are constrained to be a constant, then the notion of partial f partial x takes several meanings. So, just to remind you very quickly, there's the formal partial, partial f, partial x, which means x varies. Y and z are held constant. And, we forget the constraint. This is not compatible with a constraint, but we don't care. So, that's the guy that we compute just from the formula for f ignoring the constraints. And then, we have the partial f, partial x with y held constant, which means y held constant. X varies, and now we treat z as a dependent variable. It varies with x and y according to whatever is needed so that this constraint keeps holding. And, similarly, there's partial f partial x with z held constant, which means that, now, y is the dependent variable. And, the way in which we compute these, we've seen two methods which I'm not going to tell you now because otherwise we'll be even more over time. But, we've seen two methods for computing these based on either the chain rule or on differentials, solving and substituting into differentials.
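Since this part of the review is all formulas, here is a brief R sketch, with made-up numbers, of two of them: the determinant test for invertibility of a 3x3 system, and a numerical gradient with a directional derivative. None of this comes from the lecture itself; it is only an illustration.

# (1) Invertibility of a hypothetical 3x3 system A x = b.
A <- matrix(c(1, 2, 0,
              0, 1, 3,
              4, 0, 1), nrow = 3, byrow = TRUE)   # made-up coefficient matrix
b <- c(1, 2, 3)                                   # made-up right-hand side
det(A)            # nonzero, so A is invertible and A x = b has a unique solution
solve(A, b)       # the solution x = A^{-1} b
solve(A)          # the inverse matrix itself (the cofactor bookkeeping done for us)

# (2) Numerical gradient and directional derivative of a made-up f(x, y).
f <- function(x, y) x^2 + x * y
h <- 1e-6                                 # finite-difference step
grad_f <- function(x, y) c((f(x + h, y) - f(x, y)) / h,
                           (f(x, y + h) - f(x, y)) / h)
g <- grad_f(1, 2)                         # approximately (2x + y, x) = (4, 1) at (1, 2)
u <- c(1, 1) / sqrt(2)                    # a unit direction
sum(g * u)                                # directional derivative: grad f dot u-hat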
http://xoax.net/math/crs/multivariable_calculus_mit/lessons/Lecture34/
Solar sails (also called light sails or photon sails, especially when they use light sources other than the Sun) are a proposed form of spacecraft propulsion using large membrane mirrors. Radiation pressure is about 10⁻⁵ Pa at Earth's distance from the Sun and decreases with the square of the distance from the light source (e.g. the Sun), but unlike rockets, solar sails require no reaction mass. Although the thrust is small, it continues as long as the light source shines and the sail is deployed. In theory a lightsail (actually a system of lightsails) powered by an Earth-based laser could even be used to decelerate the spacecraft as it approaches its destination.

Solar collectors, temperature-control panels and sun shades are occasionally used as expedient solar sails, to help ordinary spacecraft and satellites make minor attitude-control corrections and orbit modifications without using fuel. This conserves fuel that would otherwise be used for maneuvering and attitude control. A few have even had small purpose-built solar sails for this use. For example, EADS Astrium's Eurostar E3000 geostationary communications satellites use solar sail panels attached to their solar cell arrays to off-load transverse angular momentum, thereby saving fuel (angular momentum is accumulated over time as the gyroscopic momentum wheels control the spacecraft's attitude; this excess momentum must be offloaded to protect the wheels from overspin). Some unmanned spacecraft (such as Mariner 10) have substantially extended their service lives with this practice.

The science of solar sails is well proven, but the technology to manage large solar sails is still undeveloped. Mission planners are not yet willing to risk multimillion-dollar missions on unproven solar sail unfolding and steering mechanisms. This neglect has inspired some enthusiasts to attempt private development of the technology, such as Cosmos 1. The concept was first proposed by German astronomer Johannes Kepler in the seventeenth century. It was again proposed by Friedrich Zander in the late 1920s and gradually refined over the decades. Recent serious interest in lightsails began with an article by engineer and science fiction author Robert L. Forward in 1984.

How they work

The spacecraft deploys a large membrane mirror which reflects light from the Sun or some other source. The radiation pressure on the mirror provides a small amount of thrust by reflecting photons. Tilting the reflective sail at an angle from the Sun produces thrust at an angle normal to the sail. In most designs, steering would be done with auxiliary vanes, acting as small solar sails to change the attitude of the large solar sail (as on Cosmos 1). The vanes would be adjusted by electric motors. In theory a lightsail driven by a laser or other beam from Earth can be used to decelerate a spacecraft approaching a distant star or planet, by detaching part of the sail and using it to focus the beam on the forward-facing surface of the rest of the sail. In practice, however, most of the deceleration would happen while the two parts are at a great distance from each other, which means that, to do that focusing, it would be necessary to give the detached part an accurate optical shape and orientation.
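As a rough, order-of-magnitude check on the radiation-pressure figure quoted at the start of this section, here is a small R sketch; the sail area and spacecraft mass are assumed values chosen only to show the scale of the resulting thrust, and a perfectly reflecting sail facing the Sun at 1 AU is assumed.

# Radiation pressure on an ideal reflecting sail at 1 AU.
I_sun <- 1361                    # solar irradiance at 1 AU (W/m^2)
c0    <- 3e8                     # speed of light (m/s)

P_reflect <- 2 * I_sun / c0      # pressure on a perfect reflector (Pa)
P_reflect                        # about 9e-6 Pa, i.e. on the order of 10^-5 Pa

A_sail <- 600                    # assumed sail area (m^2), roughly 25 m x 25 m
m      <- 100                    # assumed spacecraft mass (kg)
F      <- P_reflect * A_sail     # total force (N)
a      <- F / m                  # acceleration (m/s^2): tiny, but continuous
c(force_N = F, accel_m_s2 = a)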
Sails orbit, and therefore do not need to hover or move directly toward or away from the sun. Almost all missions would use the sail to change orbit, rather than thrusting directly away from a planet or the sun. The sail is rotated slowly as it orbits around a planet so that the thrust is in the direction of the orbital movement to move to a higher orbit, or against it to move to a lower orbit. When an orbit is far enough away from a planet, the sail then begins similar maneuvers in orbit around the sun.

The best sort of mission for a solar sail involves a dive near the sun, where the light is intense and sail efficiencies are high. Going close to the Sun may be done for different mission aims: for exploring the solar poles from a short distance, for observing the Sun and its near environment from a non-Keplerian circular orbit whose plane may be shifted by some solar radii, or for flying by the Sun so that the sail gains a very high speed.

A feature of solar sail propulsion that went unsuspected until the first half of the 1990s is that it allows a sailcraft to escape the solar system with a cruise speed higher, or even much higher, than a spacecraft powered by a nuclear electric rocket system. The spacecraft mass-to-sail area ratio does not need to achieve ultra-low values, even though the sail should be an advanced all-metal sail. This flight mode is also known as fast solar sailing. Proven mathematically (like many other astronautical concepts, well in advance of any actual launch), such a sailing mode has been considered by NASA/Marshall as one of the options for a future precursor interstellar probe exploring the near interstellar space beyond the heliosphere. Most theoretical studies of interstellar missions with a solar sail plan to push the sail with a very large laser (beam-powered propulsion). The thrust vector would therefore be away from the Sun and toward the target.

Limitations of solar sails

Solar sails don't work well, if at all, in low Earth orbit below about 800 km altitude due to erosion or air drag. Above that altitude they give very small accelerations that take months to build up to useful speeds. Solar sails have to be physically large, and payload size is often small. Deploying solar sails is also highly challenging to date. Solar sails must face the sun to decelerate. Therefore, on trips away from the sun, they must arrange to loop behind the outer planet, and decelerate into the sunlight. There is a common misunderstanding that solar sails cannot go towards their light source. This is false. In particular, sails can go toward the sun by thrusting against their orbital motion. This reduces the energy of their orbit, spiraling the sail toward the sun; compare tacking in sailing.

Investigated sail designs

"Parachutes" would have very low mass, but theoretical studies show that they would collapse from the forces exerted by the shrouds. Radiation pressure does not behave like aerodynamic pressure. The highest thrust-to-mass designs known (as of 2007) were theoretical designs developed by Eric Drexler. He designed a sail using reflective panels of thin aluminium film (30 to 100 nanometres thick) supported by a purely tensile structure. It rotated and would have to be continually under slight thrust. He made and handled samples of the film in the laboratory, but the material is too delicate to survive folding, launch, and deployment; hence the design relied on space-based production of the film panels, joining them to a deployable tension structure. Sails in this class would offer accelerations an order of magnitude higher than designs based on deployable plastic films.
The highest thrust-to-mass designs for ground-assembled deployable structures are square sails with the masts and guy lines on the dark side of the sail. Usually there are four masts that spread the corners of the sail, and a mast in the center to hold guide wires. One of the largest advantages is that there are no hot spots in the rigging from wrinkling or bagging, and the sail protects the structure from the sun. This form can therefore go quite close to the sun, where the maximum thrust is present. Control would probably use small sails on the ends of the spars.

In the 1970s JPL did extensive studies of rotating blade and rotating ring sails for a mission to rendezvous with Halley's Comet. The intention was that such structures would be stiffened by their angular momentum, eliminating the need for struts and saving mass. In all cases, surprisingly large amounts of tensile strength were needed to cope with dynamic loads. Weaker sails would ripple or oscillate when the sail's attitude changed, and the oscillations would add and cause structural failure. So the difference in the thrust-to-mass ratio was almost nil, and the static designs were much easier to control.

JPL's reference design was called the "heliogyro" and had plastic-film blades deployed from rollers and held out by centrifugal force as it rotated. The spacecraft's attitude and direction were to be completely controlled by changing the angle of the blades in various ways, similar to the cyclic and collective pitch of a helicopter. Although the design had no mass advantage over a square sail, it remained attractive because the method of deploying the sail was simpler than a strut-based design.

JPL also investigated "ring sails" (spinning disk sails), panels attached to the edge of a rotating spacecraft. The panels would have slight gaps, about one to five percent of the total area. Lines would connect the edge of one sail to the other. Weights in the middles of these lines would pull the sails taut against the coning caused by the radiation pressure. JPL researchers said that this might be an attractive sail design for large manned structures. The inner ring, in particular, might be made to have artificial gravity roughly equal to the gravity on the surface of Mars.

A solar sail can serve a dual function as a high-gain antenna. Designs differ, but most modify the metallization pattern to create a holographic monochromatic lens or mirror in the radio frequencies of interest, including visible light.

Pekka Janhunen from FMI has invented a type of solar wind sail called the electric solar wind sail. It has little in common with the traditional solar sail design externally, because the sail membrane is replaced by straightened conducting tethers (wires) placed radially around the host ship. The wires are electrically charged, and thus an electric field is created around them. The electric field of the wires extends a few tens of metres into the surrounding solar wind plasma. Because the solar wind particles react to this electric field much as they would to a solid sail, the effective radius of the wires is set by the electric field generated around each wire rather than by the wire itself. This also makes it possible to maneuver a ship with an electric solar wind sail by regulating the electric charge of the wires. A full-sized functioning electric solar wind sail would have 50-100 straightened wires with a length of about 20 km each.
Sail testing in space

NASA has successfully tested deployment technologies on small-scale sails in vacuum chambers. No solar sails have yet been used in space as primary propulsion systems, but research in the area is continuing. It is noteworthy that the Mariner 10 mission, which flew by the planets Mercury and Venus, demonstrated the use of solar pressure as a method of attitude control in order to conserve attitude-control propellant.

On February 4, 1993, Znamya 2, a 20-meter wide aluminized-mylar reflector, was successfully tested from the Russian Mir space station. Although the deployment test was successful, the experiment only demonstrated the deployment, not propulsion. A second test, Znamya 2.5, failed to deploy properly.

On August 9, 2004, the Japanese agency ISAS successfully deployed two prototype solar sails from a sounding rocket. A clover-type sail was deployed at 122 km altitude and a fan-type sail was deployed at 169 km altitude. Both sails used 7.5 micrometre thick film. The experiment was purely a test of the deployment mechanisms, not of propulsion.

A joint private project between the Planetary Society, Cosmos Studios and the Russian Academy of Sciences launched Cosmos 1 on June 21, 2005, from a submarine in the Barents Sea, but the Volna rocket failed and the spacecraft failed to reach orbit. A solar sail would have been used to gradually raise the spacecraft to a higher Earth orbit. The mission would have lasted for one month. A suborbital prototype test by the group failed in 2001 as well, also because of rocket failure.

A 15-meter-diameter solar sail (SSP, solar sail sub payload, soraseiru sabupeiro-do) was launched together with ASTRO-F on an M-V rocket on February 21, 2006, and made it to orbit. It deployed from the stage, but opened incompletely.

A team from the NASA Marshall Space Flight Center (Marshall), along with a team from the NASA Ames Research Center, developed a solar sail mission called NanoSail-D, which was lost in a launch failure aboard a Falcon 1 rocket on 3 August 2008. The primary objective of the mission had been to test sail deployment technologies. The spacecraft might not have returned useful data about solar sail propulsion in any case; according to Edward E. Montgomery, technology manager of Solar Sail Propulsion at Marshall, "The orbit available to us in this launch opportunity is so low, it may not allow us to stay in orbit long enough for solar pressure effects to accumulate to a measurable degree." The NanoSail-D structure was made of aluminum and plastic, with the spacecraft weighing less than . The sail has about of light-catching surface.

The best known material is thought to be a thin mesh of aluminium with holes less than ½ the wavelength of most light. Nanometre-sized "antennas" would emit heat energy as infrared. Although samples have been created, the material is too fragile to unfold or unroll with known technology. The most common material in current designs is aluminized 2 µm Kapton film. It resists the heat of a pass close to the Sun and still remains reasonably strong. The aluminium reflecting film is on the Sun side. The sails of Cosmos 1 were made of aluminized PET film. Research by Dr. Geoffrey Landis in 1998-9, funded by the NASA Institute for Advanced Concepts, showed that various materials such as alumina for laser lightsails and carbon fiber for microwave-pushed lightsails were superior sail materials to the previously standard aluminium or Kapton films.
In 2000, Energy Science Laboratories developed a new carbon fiber material which might be useful for solar sails. The material is over 200 times thicker than conventional solar sail designs, but it is so porous that it has about the same mass per unit area. The rigidity and durability of this material could make solar sails that are significantly sturdier than plastic films. The material could self-deploy and should withstand higher temperatures.

There has been some theoretical speculation about using molecular manufacturing techniques to create advanced, strong, hyper-light sail material, based on nanotube mesh weaves, where the weave "spaces" are less than ½ the wavelength of light impinging on the sail. While such materials have so far only been produced in laboratory conditions, and the means for manufacturing such material on an industrial scale are not yet available, such materials could weigh less than 0.1 g/m², making them lighter than any current sail material by a factor of at least 30. For comparison, 5 micrometre thick Mylar sail material weighs 7 g/m², aluminized Kapton films weigh up to 12 g/m², and Energy Science Laboratories' new carbon fiber material weighs in at 3 g/m².

Robert L. Forward pointed out that a solar sail could be used to modify the orbit of a satellite around the Earth. In the limit, a sail could be used to "hover" a satellite above one pole of the Earth. Spacecraft fitted with solar sails could also be placed in close orbits about the Sun that are stationary with respect to either the Sun or the Earth, a type of satellite named by Forward a statite. This is possible because the propulsion provided by the sail offsets the gravitational attraction of the Sun. Such an orbit could be useful for studying the properties of the Sun over long durations. Such a spacecraft could conceivably be placed directly over a pole of the Sun, and remain at that station for lengthy durations. Likewise a solar sail-equipped spacecraft could also remain on station nearly above the polar terminator of a planet such as the Earth by tilting the sail at the appropriate angle needed to just counteract the planet's gravity.

Robert Forward also proposed the use of lasers to push solar sails, providing beam-powered propulsion. Given a sufficiently powerful laser and a large enough mirror to keep the laser focused on the sail for long enough, a solar sail could be accelerated to a significant fraction of the speed of light. To do so, however, would require the engineering of massive, precisely shaped optical mirrors or lenses (wider than the Earth for interstellar transport), incredibly powerful lasers, and more power for the lasers than humanity currently generates.

A potentially easier approach would be to use a maser to drive a "solar sail" composed of a mesh of wires with the same spacing as the wavelength of the microwaves, since the manipulation of microwave radiation is somewhat easier than the manipulation of visible light. The hypothetical "Starwisp" interstellar probe design would use a maser to drive it. Masers spread out more rapidly than optical lasers owing to their longer wavelength, and so would not have as long an effective range. Masers could also be used to power a painted solar sail, a conventional sail coated with a layer of chemicals designed to evaporate when struck by microwave radiation. The momentum generated by this evaporation could significantly increase the thrust generated by solar sails, as a form of lightweight ablative laser propulsion.
To further focus the energy on a distant solar sail, designs have considered the use of a large zone plate. This would be placed at a location between the laser or maser and the spacecraft. The plate could then be propelled outward using the same energy source, thus maintaining its position so as to focus the energy on the solar sail. Additionally, it has been theorized by da Vinci Project contributor T. Pesando that solar sail-utilizing spacecraft successful in interstellar travel could be used to carry their own zone plates or perhaps even masers to be deployed during flybys at nearby stars. Such an endeavour could allow future solar-sailed craft to effectively utilize focused energy from other stars rather than from the Earth or Sun, thus propelling them more swiftly through space and perhaps even to more distant stars. However, the potential of such a theory remains uncertain if not dubious due to the high-speed precision involved and possible payloads required.

Despite the loss of Cosmos 1 (which was due to a failure of the launcher), scientists and engineers around the world remain encouraged and continue to work on solar sails. While most direct applications created so far intend to use the sails as inexpensive modes of cargo transport, some scientists are investigating the possibility of using solar sails as a means of transporting humans. This goal is strongly related to the management of very large (i.e. well above 1 km²) surfaces in space and to advances in sail making. Thus, in the near/medium term, solar sail propulsion is aimed chiefly at accomplishing a very high number of non-crewed missions in any part of the solar system and beyond.

Critics of the solar sail argue that solar sails are impractical for orbital and interplanetary missions because they move on an indirect course. However, when in Earth orbit, the majority of mass on most interplanetary missions is taken up by fuel. A robotic solar sail could therefore multiply an interplanetary payload by several times by reducing this significant fuel mass, and create a reusable, multimission spacecraft. Most near-term planetary missions involve robotic exploration craft, in which the directness of the course is unimportant compared to the fuel mass savings and fast transit times of a solar sail. For example, most existing missions use multiple gravitational slingshots to reduce the necessary fuel mass, at the cost of transit time and directness of the route.

There is also a misunderstanding that solar sails capture energy primarily from the solar wind, the high-speed charged particles emitted from the sun. These particles would impart a small amount of momentum upon striking the sail, but this effect would be small compared to the force due to radiation pressure from light reflected from the sail. The force due to light pressure is about 5,000 times as strong as that due to the solar wind. A much larger type of sail called a magsail would employ the solar wind.

It has been proposed that momentum exchange from reflection of photons is an unproven effect that may violate the thermodynamical Carnot rule. This criticism was raised by Thomas Gold of Cornell, leading to a public debate in the spring of 2003. The criticism has been refuted by Benjamin Diedrich, who pointed out that the Carnot rule does not apply to an open system; further explanation of lab results demonstrating the effect has also been provided. James Oberg has also refuted Dr. 
Gold's analysis: "But 'solar sailing' isn't theoretical at all, and photon pressure has been successfully calculated for all large spacecraft. Interplanetary missions would arrive thousands of kilometers off course if correct equations had not been used. The effect for a genuine 'solar sail' will be even more…"

One way to see that conservation of energy is not a problem is to note that when reflected by a solar sail, a photon undergoes a Doppler shift; its wavelength increases (and its energy decreases) by a factor dependent on the velocity of the sail, transferring energy from the sun-photon system to the sail. This change of energy can easily be verified to be exactly equal (and opposite) to the energy change of the sail.

The Extended Heliocentric Reference Frame

- In 1991-92 the classical equations of solar sail motion in the solar gravitational field were written using a different mathematical formalism, namely, the lightness vector, which fully characterizes the sailcraft dynamics. In addition, it was argued that a solar-sail spacecraft should be able to reverse its motion (in the solar system) provided that its sail is sufficiently light, with a sailcraft sail loading (σ) not higher than 2.1 g/m². This value entails a high-performance technology indeed, but one quite probably within the capabilities of emerging technologies.
- For describing the concept of fast sailing and some related items, we need to define two frames of reference. The first is an inertial Cartesian coordinate system centred on the Sun, or a heliocentric inertial frame (HIF, for short). For instance, the plane of reference, or the XY plane, of HIF can be the mean ecliptic at some standard epoch such as J2000. The second Cartesian reference frame is the so-called heliocentric orbital frame (HOF, for short) with its origin in the sailcraft barycenter. The x-axis of HOF is the direction of the Sun-to-sailcraft vector, or position vector, the z-axis is along the sailcraft orbital angular momentum, whereas the y-axis completes the counterclockwise triad. This definition can be extended to sailcraft trajectories including both counterclockwise and clockwise arcs of motion, in such a way that HOF is always a continuous, positively-oriented triad. The sail orientation unit vector, say n, can be specified in HOF by a pair of angles, e.g. the azimuth α and the elevation δ. Elevation is the angle that n forms with the xy-plane of HOF (-90° ≤ δ ≤ 90°). Azimuth is the angle that the projection of n onto the HOF xy-plane forms with the HOF x-axis (0° ≤ α < 360°). In HOF, azimuth and elevation are equivalent to longitude and latitude, respectively.
- The sailcraft lightness vector L = [λr, λt, λn] depends on α and δ (non-linearly) and on the thermo-optical parameters of the sail materials (linearly). Neglecting a small contribution coming from the aberration of light, one has the following particular cases (irrespective of the sail material):
- α = 0, δ = 0 ⇔ [λr, 0, 0] ⇔ λ = |L| = λr
- α ≠ 0, δ = 0 ⇔ [λr, λt, 0]
- α = 0, δ ≠ 0 ⇔ [λr, 0, λn]

A Flight Example

- Now suppose we have built a sailcraft with an all-metal sail of aluminium and chromium such that σ = 2 g/m². A launcher delivers the (packed) sailcraft to some millions of kilometers from the Earth. There, the whole sailcraft is deployed and begins its flight in the solar system (here, for the sake of simplicity, we neglect any gravitational perturbation from planets). A conventional spacecraft would move approximately in a circular orbit at about 1 AU from the Sun. 
In contrast, a sailcraft like this one is sufficiently light to be able to escape the solar system or to point to some distant object in the heliosphere. If n is parallel to the local sun-light direction, then λr = λ = 0.725 (i.e. 1/2 < λ < 1); as a result, this sailcraft moves on a hyperbolic orbit. Its speed at infinity is equal to 20 km/s. Strictly speaking, this potential solar sail mission would be faster than the current record speed for missions beyond the planetary range, namely, the Voyager-1 speed, which amounts to 17 km/s or about 3.6 AU/yr (1 AU/yr = 4.7404 km/s). However, three kilometers per second are not meaningful in the context of very deep space missions. - As a consequence, one has to resort to some L having more than one component different from zero. The classical way to gain speed is to tilt the sail at some suitable positive α. If α= +21°, then the sailcraft begins by accelerating; after about two months, it achieves 32 km/s. However, this is a speed peak inasmuch as its subsequent motion is characterized by a monotonic speed decrease towards an asymptotic value, or the cruise speed, of 26 km/s. After 18 years, the sailcraft is 100 AU away from the Sun. This would mean a pretty fast mission. However, considering that a sailcraft with 2 g/m² is technologically advanced, is there any other way to increase its speed significantly? Yes, there is. Let us try to explain this effect of non-linear dynamics. - The above figures show that spiralling out from a circular orbit is not a convenient mode for a sailcraft to be sent away from the Sun since it would not have a high enough excess speed. On the other hand, it is known from astrodynamics that a conventional Earth satellite has to perform a rocket maneuver at/around its perigee for maximizing its speed at "infinity". Similarly, one can think of delivering a sailcraft close to the Sun to get much more energy from the solar photon pressure (that scales as 1/R2). For instance, suppose one starts from a point at 1 AU on the ecliptic and achieves a perihelion distance of 0.2 AU in the same plane by a two-dimensional trajectory. In general, there are three ways to deliver a sailcraft, initially at R0 from the Sun, to some distance R < R0: - using an additional propulsion system to send the folded-sail sailcraft to the perihelion of an elliptical orbit; there, the sail is deployed with its axis parallel to the sun-light for getting the maximum solar flux at the chosen distance; - spiralling in by α slightly negative, namely, via a slow deceleration; - strongly decelerating by a "sufficiently large" sail-axis angle negative in HOF. - The first way - although usable as a good reference mode - requires another high-performance propulsion system. - The second way is ruled out in the present case of σ = 2 g/m²; as a matter of fact, a small α < 0 entails a λr too high and a negative λt too low in absolute value: the sailcraft would go far from the Sun with a decreasing speed (as discussed above). - In the third way, there is a critical negative sail-axis angle in HOF, say, αcr such that for sail orientation angles α < αcr the sailcraft trajectory is characterized as follows: - # the distance (from the Sun) first increases, achieves a local maximum at some point M, then decreases. The orbital angular momentum (per unit mass), say, H of the sailcraft decreases in magnitude. 
It is suitable to define the scalar H = H•k, where k is the unit vector of the HIF Z-axis; - # after a short time (few weeks or less, in general), the sailcraft speed V = |V| achieves a local minimum at a point P. H continues to decrease; - # past P, the sailcraft speed increases because the total vector acceleration, say, A begins by forming an acute angle with the vector velocity V; in mathematical terms, dV / dt = A • V / V > 0. This is the first key-point to realize; - # eventually, the sailcraft achieves a point Q where H = 0; here, the sailcraft's total energy (per unit mass), say, E (including the contribution of the solar pressure on the sail) shows a (negative) local minimum. This is the second key-point; - # past Q, the sailcraft - keeping the negative value of the sail orientation - regains angular momentum by reversing its motion (that is H is oriented down and H < 0). R keeps on decreasing while dV/dt augments. This is the third key-point; - # the sailcraft energy continues to increase and a point S is reached where E=0, namely, the escape condition is satisfied; the sailcraft keeps on accelerating. S is located before the perihelion. The (negative) H continues to decrease; - # if the sail attitude α has been chosen appropriately (about -25.9 deg in this example), the sailcraft flies-by the Sun at the desired (0.2 AU) perihelion, say, U; however, differently from a Keplerian orbit (for which the perihelion is the point of maximum speed), past the perihelion, V increases further while the sailcraft recedes from the Sun. - # past U, the sailcraft is very fast and pass through a point, say, W of local maximum for the speed, since λ < 1. Thus, speed decreases but, at a few AU from the Sun (about 2.7 AU in this example), both the (positive) E and the (negative) H begin a plateau or cruise phase; V becomes practically constant and, the most important thing, takes on a cruise value considerably higher than the speed of the circular orbit of the departure planet (the Earth, in this case). This example shows a cruise speed of 14.75 AU/yr or 69.9 km/s. At 100 AU, the sailcraft speed is 69.6 km/s. H-reversal sun flyby trajectory - The Figure below shows the mentioned sailcraft trajectory. Only the initial arc around the Sun has been plotted. The remaining part is rectilinear, in practice, and represents the cruise phase of the spacecraft. The sail is represented by a short segment with a central arrow that indicates its orientation. Note that the complicate change of sail direction in HIF is very simply achieved by a constant attitude in HOF. That brings about a net non-Keplerian feature to the whole trajectory. Some remarks are in order. - As mentioned in point-3, the strong sailcraft speed increase is due to both the solar-light thrust and gravity acceleration vectors. In particular, dV / dt, or the along-track component of the total acceleration, is positive and particularly high from the point-Q to the point-U. This suggests that if a quick sail attitude maneuver is performed just before H vanishes, α → -α, the sailcraft motion continues to be a direct motion with a final cruise velocity equal in magnitude to the reversal one (because the above maneuver keeps the perihelion value unchanged). The basic principle both sailing modes share may be summarised as follows: a sufficiently light sailcraft needs to lose most of its initial energy for subsequently achieving the absolute maximum of energy compliant with its given technology. 
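A minimal numerical sketch of the constant-attitude sailing just described is given below. It is not the thermo-optical lightness-vector model used in the text: it assumes an ideal flat, perfectly reflecting sail whose thrust is λ(GM/r²)cos²α along the sail normal, with the attitude α held fixed relative to the sun-sailcraft line. The function names, the 0.05 AU stop radius, and the choice of test angles are illustrative assumptions. With λ = 0.725 it reproduces the flavour of the cases discussed (sun-facing escape at roughly 20 km/s, a tilted-sail speed peak, and a strongly negative α that dives toward the Sun), but the exact figures depend on details this sketch ignores.

import numpy as np
from scipy.integrate import solve_ivp

GM  = 1.32712440018e20        # Sun's gravitational parameter, m^3/s^2
AU  = 1.495978707e11          # astronomical unit, m
LAM = 0.725                   # lightness number quoted for sigma = 2 g/m^2

def rhs(t, s, alpha):
    """2-D equations of motion: solar gravity plus ideal-sail thrust at a fixed cone angle alpha."""
    x, y, vx, vy = s
    r  = np.hypot(x, y)
    ur = np.array([x, y]) / r                 # radial unit vector
    ut = np.array([-y, x]) / r                # counterclockwise transverse unit vector
    n  = np.cos(alpha) * ur + np.sin(alpha) * ut
    a  = -GM / r**2 * ur + LAM * GM / r**2 * np.cos(alpha)**2 * n
    return [vx, vy, a[0], a[1]]

def too_close(t, s, alpha):                   # stop the integration inside 0.05 AU
    return np.hypot(s[0], s[1]) - 0.05 * AU
too_close.terminal = True

def fly(alpha_deg, years=18.0):
    """Start on a circular 1 AU orbit and integrate with a fixed sail attitude."""
    v0 = np.sqrt(GM / AU)                     # ~29.8 km/s circular speed
    sol = solve_ivp(rhs, (0.0, years * 365.25 * 86400.0), [AU, 0.0, 0.0, v0],
                    args=(np.radians(alpha_deg),), events=too_close,
                    rtol=1e-9, max_step=86400.0)
    r = np.hypot(sol.y[0], sol.y[1]) / AU
    v = np.hypot(sol.y[2], sol.y[3]) / 1e3
    return r, v

for alpha in (0.0, 21.0, -25.9):
    r, v = fly(alpha)
    print("alpha = %+6.1f deg : final r = %7.1f AU, peak v = %5.1f, final v = %5.1f km/s"
          % (alpha, r[-1], v.max(), v[-1]))

Recovering the 0.2 AU perihelion and the roughly 70 km/s cruise speed quoted above requires the full lightness-vector dynamics and a properly tuned attitude; the sketch only shows why a fixed negative α drives the sailcraft inward and lets it trade orbital energy for a later, much larger gain.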
- The above 2D class of new trajectories represents an ideal case. The realistic 3D fast sailcraft trajectories are considerably more complicated than the 2D cases. However, the general feature of producing a fast cruise speed can be further enhanced. Some of the enclosed references contain strict mathematical algorithms for dealing with this topic. Recently (July 2005), in an international symposium an evolution of the above concept of fast solar sailing has been discussed. A sailcraft with σ = 1 g/m² could achieve over 30 AU/yr in cruise (by keeping the perihelion at 0.2 AU), namely, well beyond the cruise speed of any nuclear-electric spacecraft (at least as conceived today). Such paper has been published on the Journal of the British Interplanetary Society (JBIS) in 2006. Solar sails in fiction - On the Waves of Ether (По волнам эфира) by B. Krasnogorsky 1913, spacecraft propelled by solar light pressure. - Sunjammer by Arthur C. Clarke, a short story (in The Wind from the Sun anthology) describing a solar sail craft Earth-Moon race. It was originally published under the name "Sunjammer" but when Clarke learned of the short story of the same name by Poul Anderson, he quickly changed it. - Dust of Far Suns, by Jack Vance, also published as Sail 25, depicts a crew of space cadets on a training mission aboard a malfunction-ridden solar sail craft. - The Mote in God's Eye (1975) by Larry Niven and Jerry Pournelle depicts an interstellar alien spacecraft driven by laser-powered light sails. - Both Green Mars and Blue Mars, by Kim Stanley Robinson, contain a solar reflecting mirror called a soletta made of earth to mars solar powered shuttles. - Rocheworld by Robert L. Forward, a novel about an interstellar mission driven by laser-powered light sails. - In the movie Tron, the characters Tron, Flynn and Yori are using a computer model of a solar sailer to escape from the MCP. - Solar sails appeared in Star Wars Episode II: Attack of the Clones, in which Count Dooku has a combination hyperdrive and starsail spacecraft dubbed the Solar Sailer. In episode 6 of the animated TV series Star Wars: Clone Wars, Count Dooku also appears travelling on a space ship equipped with a solar sail. - In the film Star Trek IV: The Voyage Home, an officer aboard a crippled spaceship discusses a plan to construct a solar sail to take his ship to the nearest port. - A solar sail appears in the Star Trek: Deep Space Nine episode "Explorers", as the primary propulsion system of the "Bajoran lightship". The vessel inadvertently exceeds the speed of light by sailing on a stream of tachyons. - The Lady Who Sailed The Soul by Cordwainer Smith, a short story (part of The Rediscovery Of Man collection) describing journeys on solar sail craft. - In David Brin's Heaven's Reach, sentient machines are using solar sails to harvest carbon from a red giant star's atmosphere to repair a Dyson Sphere-like construct. - The book Accelerando by Charles Stross depicts a solar sail craft powered by a series of very powerful lasers being used to contact alien intelligences outside of our solar system. - The GSX-401FW Stargazer, a primarily unmanned Gundam mobile suit from the Cosmic Era timeline of the Gundam Seed metaseries, employs a propulsion system dubbed "Voiture Lumière" which utilizes a nano-particle solar sail. - The R.L.S. Legacy, seen in the Disney movie "Treasure Planet", was powered entirely by solar sails. 
- A solar sail appears in an early episode of the most recent incarnation of "The Outer Limits" (season one, "The Message"). The description refers to it as a planet, perhaps to avoid being a "spoiler". - The 1983 Doctor Who serial Enlightenment depicts a race through the solar system using solar sail ships. - 1985 Japanese Original Video Animation title 'Odin: Photon Sailer Starlight', directed by Eiichi Yamamoto and Takeshi Shirado, features a space ship that travels on beams of light, with its massive sails, over great distances in space. - The 2006 science-fiction novel Le Papillon Des Étoiles (lit. The Butterfly Of The Stars), by Bernard Werber, tells the story of a community of humans who escape from Earth and set off towards a new habitable planet aboard a large spaceship pulled by a gigantic solar sail (one million square kilometers large when deployed). - A space yacht rigged with solar sails is described in the science-fiction novel "Planet of The Apes" by Pierre Boulle (original 1963 work). - G. Vulpetti, L. Johnson, G. L. Matloff, Solar Sails: A Novel Approach to Interplanetary Flight, Springer, August 2008 - Space Sailing by Jerome L. Wright, who was involved with JPL's effort to use a solar sail for a rendezvous with Halley's comet. - Solar Sailing, Technology, Dynamics and Mission Applications - [Colin R. McInnes] presents the state of the art in his book. - NASA/CR 2002-211730, the chapter IV - presents the theory and the optimal NASA-ISP trajectory via the H-reversal sailing mode - G. Vulpetti, The Sailcraft Splitting Concept, JBIS, Vol.59, pp. 48-53, February 2006 - G. L. Matloff, Deep-Space Probes: to the Outer Solar System and Beyond, 2nd ed., Springer-Chichester, UK, 2005 - T. Taylor, D. Robinson, T. Moton, T. C. Powell, G. Matloff, and J. Hall, Solar Sail Propulsion Systems Integration and Analysis (for Option Period), Final Report for NASA/MSFC, Contract No. H-35191D Option Period, Teledyne Brown Engineering Inc., Huntsville, AL, May 11, 2004 - G. Vulpetti, Sailcraft Trajectory Options for the Interstellar Probe: Mathematical Theory and Numerical Results, the Chapter IV of NASA/CR-2002-211730, “The Interstellar Probe (ISP): Pre-Perihelion Trajectories and Application of Holography”, June 2002 - G. Vulpetti, Sailcraft-Based Mission to The Solar Gravitational Lens, STAIF-2000, Albuquerque (New Mexico, USA), 30 Jan - 3 Feb, 2000 - G. Vulpetti, General 3D H-Reversal Trajectories for High-Speed Sailcraft, Acta Astronautica, Vol. 44, No. 1, pp. 67-73, 1999 - C. R. McInnes, Solar Sailing: Technology, Dynamics, and Mission Applications, Springer-Praxis Publishing Ltd, Chichester, UK, 1999 - Genta, G., and Brusa, E., The AURORA Project: a New Sail Layout, Acta Astronautica, 44, No. 2-4, pp. 141-146 (1999) - S. Scaglione and G. Vulpetti, The Aurora Project: Removal of Plastic Substrate to Obtain an All-Metal Solar Sail, special issue of Acta Astronautica, vol. 44, No. 2-4, pp. 147-150, 1999 - J. L. Wright, Space Sailing, Gordon and Breach Science Publishers, Amsterdam, 1993
http://www.reference.com/browse/Keplerian+orbit
This chapter describes how to calculate the radiation fields. It also provides general information about the antenna characteristics that can be derived based on the radiation fields. Once the currents on the circuit are known, the electromagnetic fields can be computed. They can be expressed in the spherical coordinate system attached to your circuit as shown in Co-polarization angle. The electric and magnetic fields contain terms that vary as 1/r, 1/r 2 etc. It can be shown that the terms that vary as 1/r 2 , 1/r 3 , ... are associated with the energy storage around the circuit. They are called the reactive field or near-field components. The terms having a 1/r dependence become dominant at large distances and represent the power radiated by the circuit. Those are called the far-field components (E ff , H ff ). In the direction parallel to the substrate (theta = 90 degrees), parallel plate modes or surface wave modes, that vary as 1/sqrt(r), may be present, too. Although they will dominate in this direction, and account for a part of the power emitted by the circuit, they are not considered to be part of the far-fields. The radiated power is a function of the angular position and the radial distance from the circuit. The variation of power density with angular position is determined by the type and design of the circuit. It can be graphically represented as a radiation pattern. The far-fields can only be computed at those frequencies that were calculated during a simulation. The far-fields will be computed for a specific frequency and for a specific excitation state. They will be computed in all directions (theta, phi) in the open half space above and/or below the circuit. Besides the far-fields, derived radiation pattern quantities such as gain, directivity, axial ratio, etc. are computed. Based on the radiation fields, polarization and other antenna characteristics such as gain, directivity, and radiated power can be derived. The far-field can be decomposed in several ways. You can work with the basic decomposition in (, ). However, with linear polarized antennas, it is sometimes more convenient to decompose the far-fields into (E co, E cross ) which is a decomposition based on an antenna measurement set-up. For circular polarized antennas, a decomposition into left and right hand polarized field components (E lhp , E rhp ) is most appropriate. Below you can find how the different components are related to each other. is the characteristic impedance of the open half sphere under consideration. The fields can be normalized with respect to: Below is shown how the left hand and right hand circular polarized field components are derived. From those, the circular polarization axial ratio (AR cp ) can be calculated. The axial ratio describes how well the antenna is circular polarized. If its amplitude equals one, the fields are perfectly circularly polarized. It becomes infinite when the fields are linearly polarized. Below, the equations to decompose the far-fields into a co and cross polarized field are given ( is the co polarization angle). From those, a "linear polarization axial ratio" (AR lp ) can be derived. This value illustrates how well the antenna is linearly polarized. It equals to one when perfect linear polarization is observed and becomes infinite for a perfect circular polarized antenna. This parameter is the solid angle through which all power emanating from the antenna would flow if the maximum radiation intensity is constant for all angles over the beam area. 
It is measured in steradians and is represented by: The maximum directivity is given by: where P inj is the real power, in watts, injected into the circuit. The maximum gain is given by: For the planar cut, the angle phi ( Cut Angle ), which is relative to the x-axis, is kept constant. The angle theta, which is relative to the z-axis, is swept to create a planar cut. Theta is swept from 0 to 360 degrees. This produces a view that is perpendicular to the circuit layout plane. Planar (vertical) cut illustrates a planar cut. In layout, there is a fixed coordinate system such that the monitor screen lies in the XYplane. The X-axis is horizontal, the Y-axis is vertical, and the Z-axis is normal to the screen. To choose which plane is probed for a radiation pattern, the cut angle must be specified. For example, if the circuit is rotated by 90 degrees, the cut angle must also be changed by 90 degrees if you wish to obtain the same radiation pattern from one orientation to the next. For a conical cut, the angle theta, which is relative to the z-axis, is kept constant. Phi, which is relative to the x-axis, is swept to create a conical cut. Phi is swept from 0 to 360 degrees. This produces a view that is parallel to the circuit layout plane. Conical cut illustrates a conical cut. If you choose to view results immediately after the far-field computation is complete, enable Open display when computation completed . When Data Display is used for viewing the far-field data, a data display window containing default plot types of the data display template of your choice will be automatically opened when the computation is finished. The default template, called FarFields, bundles four groups of plots: - Linear Polarization with E co , E cross , AR lp. - Circular Polarization with E lhp , E rhp , AR cp. - Absolute Fields with . - Power with Gain, Directivity, Radiation Intensity, Efficiency. For more information, please refer to About Antenna Characteristics. If 3D Visualization is selected in the Radiation Pattern dialog, the normalized electric far-field components for the complete hemisphere are saved in ASCII format in the file < project_dir>/ mom_dsn /<design_name>/ proj.fff . The data is saved in the following format: #Frequency <f> GHz /\* loop over <f> \*/ #Excitation #<i> /\* loop over <i> \*/ #Begin cut /\* loop over phi \*/ <theta> <phi_0> <real\(E_theta\)> <imag\(E_theta\)> <real\(E_phi\)> <imag\(E_phi\)> /\* loop over <theta> \*/ #End cut #Begin cut <theta> <phi_1> <real\(E_theta\)> <imag\(E_theta\)> <real\(E_phi\)> <imag\(E_phi\)> /\* loop over <theta> \*/ #End cut : : #Begin cut <theta> <phi_n> <real\(E_theta\)> <imag\(E_theta\)> <real\(E_phi\)> <imag\(E_phi\)> /\* loop over <theta> \*/ #End cut In the proj.fff file, E_theta and E_phi represent the theta and phi components, respectively, of the far-field values of the electric field. Note that the fields are described in the spherical co-ordinate system (r, theta, phi) and are normalized. The normalization constant for the fields can be derived from the values found in the proj.ant file and equals: The proj.ant file, stored in the same directory, contains the antenna characteristics. 
The data is saved in the following format: Excitation <i> /\* loop over <i> \*/ Frequency <f> GHz /\* loop over <f> \*/ Maximum radiation intensity <U> /\* in Watts/steradian \*/ Angle of U_max <theta> <phi> /\* both in deg \*/ E_theta_max <mag\(E_theta_max\)> ; E_phi_max <mag\(E_phi_max\)> E_theta_max <real\(E_theta_max\)> <imag\(E_theta_max\)> E_phi_max <real\(E_phi_max\)> <imag\(E_phi_max\)> Ex_max <real\(Ex_max\)> <imag\(Ex_max\)> Ey_max <real\(Ey_max\)> <imag\(Ey_max\)> Ez_max <real\(Ez_max\)> <imag\(Ez_max\)> Power radiated <excitation #i> <prad> /\* in Watts \*/ Effective angle <eff_angle_st> steradians <eff_angle_deg> degrees Directivity <dir> dB /\* in dB \*/ Gain <gain> dB /\* in dB \*/ The maximum electric field components (E_theta_max, E_phi_max, etc.) are those found at the angular position where the radiation intensity is maximal. They are all in volts. - Far-fields including E fields for different polarizations and axial ratio in 3D and 2D formats - Antenna parameters such as gain, directivity, and direction of main radiation in tabular format This section describes how to view the data. In EMDS for ADS RF mode, radiation results are not available for display. For general information about radiation patterns and antenna parameters, refer to About Radiation Patterns. In EMDS for ADS, computing the radiation results is included as a post processing step. The Far Field menu item appears in the main menu bar only if radiation results are available. If a radiation results file is available, it is loaded automatically. The command Set Port Solution Weights (in the Current menu) has no effect on the radiation results. The excitation state for the far-fields is specified in the radiation pattern dialog box before computation. You can also read in far-field data from other projects. First, select the project containing the far-field data that you want to view, then load the data: - Choose Projects > Select Project. - Select the name of the Momentum or Agilent EMDS project that you want to use. - Click Select Momentum or Select Agilent EMDS. - Choose Projects > Read Field Solution. - When the data is finished loading, it can be viewed in far-field plots and as antenna parameters. To display a 3D far-field plot: - Choose Far Field > Far Field Plot. - Select the view in which you want to insert the plot. - Select the E Field format: - E = sqrt(mag(E Theta)2 + mag(E Phi)2) - E Theta - E Phi - E Left - E Right - Circular Axial Ratio - E Co - E Cross - Linear Axial Ratio - If you want the data normalized to a value of one, enable Normalize. For Circular and Linear Axial Ratio choices, set the Minimum dB. Also set the Polarization Angle for E Co, E Cross, and Linear Axial Ratio. - By default, a linear scale is used to display the plot. If you want to use a logarithmic scale, enable Log Scale. Set the minimum magnitude that you want to display, in dB. - Click OK . - Click Display Options. - A white, dashed line appears lengthwise on the far-field. You can adjust the position of the line by setting the Constant Phi Value, in degrees, using the scroll bar. - Adjust the translucency of the far-field by using the scroll bar under Translucency. - Click Done . You can take a 2D cross section of the far-field and display it on a polar or rectangular plot. The cut type can be either planar (phi is fixed, theta is swept) or conical (theta is fixed, phi is swept). 
The figure below illustrates a planar cut (or phi cut) and a conical cut (or theta cut), and the resulting 2D cross section as it would appear on a polar plot. The procedure that follows describes how to define the 2D cross section. To define a cross section of the 3D far-field: - Choose Far Field > Cut 3D Far Field. - If you want a conical cut, choose Theta Cut. If you want a planar cut, choose Phi Cut. - Set the angle of the conical cut using the Constant Theta Value scroll bar or set the angle of the planar cut using the Constant Phi Value scroll bar. - Click Apply to accept the setting. The cross section is added to the Cut Plots list. - Repeat these steps to define any other cross sections. - Click Done to dismiss the dialog box - On a polar plot - On a rectangular plot, in magnitude versus angle In the figure below, a cross section is displayed on a polar and rectangular plot. To display a 2D far-field plot: - Choose Far Field > Plot Far Field Cut . - Select a 2D cross section from the 2D Far Field Plots list. The type of cut (phi or theta) and the angle identifies each cross section. - Select the view that you want to use to display the plot. - Select the E-field format. - Select the plot type, either Cartesian or Polar. - If you want the data normalized to a value of one, enable Normalize. - By default, a linear scale is used to display the plot. If you want to use a logarithmic scale, enable Log Scale. If available, set the minimum magnitude that you want to display, in dB; also, set the polarization angle. - Click OK. Choose Far Field > Antenna Parameters to view gain, directivity, radiated power, maximum E-field, and direction of maximum radiation. The data is based on the frequency and excitation state as specified in the radiation pattern dialog. The parameters include: - Radiated power, in watts - Effective angle, in degrees - Directivity, in dB - Gain, in dB - Maximum radiation intensity, in watts per steradian - Direction of maximum radiation intensity, theta and phi, both in degrees - E_theta, in magnitude and phase, in this direction - E_phi, in magnitude and phase, in this direction - E_x, in magnitude and phase, in this direction - E_y, in magnitude and phase, in this direction - E_z, in magnitude and phase, in this direction In the antenna parameters, the magnitude of the E-fields is in volts.
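The far-field quantities described in this chapter can also be reproduced offline from sampled field data. The sketch below is a generic post-processing example, not the tool's own algorithm: it assumes complex E_theta and E_phi samples on a regular (theta, phi) grid at a fixed reference radius, the function name antenna_summary is made up for this example, and the left/right-hand polarization sign convention shown is one common choice that may differ from the convention used when writing proj.fff.

import numpy as np

ETA0 = 376.73                                   # free-space wave impedance, ohms

def antenna_summary(E_theta, E_phi, theta, phi, P_inj=None):
    """theta, phi in radians; E_theta/E_phi are complex arrays of shape (len(theta), len(phi))."""
    U = (np.abs(E_theta)**2 + np.abs(E_phi)**2) / (2.0 * ETA0)      # radiation intensity
    P_rad = np.trapz(np.trapz(U * np.sin(theta)[:, None], phi, axis=1), theta)
    U_max = U.max()
    D_max = 4.0 * np.pi * U_max / P_rad                             # maximum directivity
    omega_eff = P_rad / U_max                                       # beam (effective) solid angle, sr
    E_rhp = (E_theta - 1j * E_phi) / np.sqrt(2.0)                   # one common sign convention
    E_lhp = (E_theta + 1j * E_phi) / np.sqrt(2.0)
    with np.errstate(divide="ignore", invalid="ignore"):
        ar_cp = (np.abs(E_lhp) + np.abs(E_rhp)) / np.abs(np.abs(E_lhp) - np.abs(E_rhp))
    out = {"D_max_dB": 10 * np.log10(D_max),
           "beam_solid_angle_sr": omega_eff,
           "AR_cp_at_peak": ar_cp.flat[np.argmax(U)]}
    if P_inj is not None:                                           # gain needs the injected power
        out["G_max_dB"] = 10 * np.log10(4.0 * np.pi * U_max / P_inj)
    return out

# sanity check with a Hertzian-dipole-like pattern: E_theta ~ sin(theta), E_phi = 0
theta = np.linspace(0.0, np.pi, 181)
phi   = np.linspace(0.0, 2.0 * np.pi, 361)
E_th  = np.sin(theta)[:, None] * np.ones(len(phi))[None, :] + 0j
E_ph  = np.zeros_like(E_th)
print(antenna_summary(E_th, E_ph, theta, phi))
# expected: D_max near 1.76 dB, beam solid angle near 8*pi/3 sr, AR_cp infinite

The infinite circular axial ratio returned for the dipole pattern matches the statement above that AR_cp becomes infinite when the fields are linearly polarized.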
http://cp.literature.agilent.com/litweb/pdf/ads2008/emds/ads2008/Radiation_Patterns_and_Antenna_Characteristics.html
Quantitative Introduction to General Relativity For a general overview of the theory, see General Relativity |This article/section deals with mathematical concepts appropriate for a student in late university or graduate level.| General Relativity is a mathematical extension of Special Relativity. GR views space-time as a 4-dimensional manifold, which looks locally like Minkowski space, and which acquires curvature due to the presence of massive bodies. Thus, near massive bodies, the geometry of space-time differs to a large degree from Euclidean geometry: for example, the sum of the angles in a triangle is not exactly 180 degrees. Just as in classical physics, objects travel along geodesics in the absence of external forces. Importantly though, near a massive body, geodesics are no longer straight lines. It is this phenomenon of objects traveling along geodesics in a curved spacetime that accounts for gravity. The mathematical expression of the theory of general relativity takes the form of the Einstein field equations, a set of ten nonlinear partial differential equations. While solving these equations is quite difficult, examining them provides valuable insight into the structure and meaning of the theory. In their general form, the Einstein field equations are written as a single tensor equation in abstract index notation relating the curvature of spacetime to sources of curvature such as energy density and momentum. In this form, Gμν represents the Einstein tensor, G is the same gravitational constant that appears in the law of universal gravitation, and Tμν is the stress-energy tensor (sometimes referred to as the energy-momentum tensor). The indices μ and ν range from zero to three, representing the time coordinate and the three space coordinates in a manner consistent with special relativity. The left side of the equation — the Einstein tensor — describes the curvature of spacetime in the region under examination. The right side of the equation describes everything in that region that affects the curvature of spacetime. As we can clearly see even in this simplified form, the Einstein field equations can be solved "in either direction." Given a description of the gravitating matter, energy, momentum and fields in a region of spacetime, we can calculate the curvature of spacetime surrounding that region. On the other hand, given a description of the curvature of a region spacetime, we can calculate the motion of a test particle anywhere within that region. Even at this level of examination, the fundamental thesis of the general theory of relativity is obvious: motion is determined by the curvature of spacetime, and the curvature of spacetime is determined by the matter, energy, momentum and fields within it. The right side of the equation: the stress-energy tensor In the Newtonian approximation, the gravitational vector field is directly proportional to mass. In general relativity, mass is just one of several sources of spacetime curvature. The stress-energy tensor, Tμν, includes all of these sources. Put simply, the stress-energy tensor quantifies all the stuff that contributes to spacetime curvature, and thus to the gravitational field. First we will define the stress-energy tensor technically, then we'll examine what that definition means. In technical terms, the stress energy tensor represents the flux of the μ component of 4-momentum across a surface of constant coordinate xν. Fine. But what does that mean? 
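The displayed form of the field equations did not survive in this copy, so it may help to restate it before unpacking the right-hand side. In standard notation consistent with the symbols named above (and with the cosmological-constant term omitted, as the text does), the Einstein field equations read

\[ G_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}}\, T_{\mu\nu}, \qquad \mu,\nu = 0,1,2,3 . \]

Different authors move the factors of c around or work in units with G = c = 1, but the structure is always the same: a curvature tensor on the left and the stress-energy tensor on the right. With that equation in view, we can unpack what the stress-energy tensor actually measures.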
In classical mechanics, it's customary to refer to coordinates in space as x, y and z. In general relativity, the convention is to talk instead about coordinates x0, x1, x2, and x3, where x0 is the time coordinate otherwise called t, and the other three are just the x, y and z coordinates. So "a surface of constant coordinate "xν" simply means a 3-plane perpendicular to the xν axis. The flux of a quantity can be visualized as the magnitude of the current in a river: the flux of water is the amount of water that passes through a cross-section of the river in a given interval of time. So more generally, the flux of a quantity across a surface is the amount of that quantity that passes through that surface. Four-momentum is the special relativity analogue of the familiar momentum from classical mechanics, with the property that the time coordinate of a particle's four-momentum is simply the energy of the particle; the other three components of four-momentum are the same as in classical momentum. So putting that all together, the stress-energy tensor is the flux of 4-momentum across a surface of constant coordinate. In other words, the stress-energy tensor describes the density of energy and momentum, and the flux of energy and momentum in a region. Since under the mass-energy equivalence principle we can convert mass units to energy units and vice-versa, this means that the stress-energy tensor describes all the mass and energy in a given region of spacetime. Put even more simply, the stress-energy tensor represents everything that gravitates. The stress-energy tensor, being a tensor of rank two in four-dimensional spacetime, has sixteen components that can be written as a 4 × 4 matrix. Here the components have been color-coded to help clarify their physical interpretations. - energy density, which is equivalent to mass-energy density; this component includes the mass contribution - , , - the components of momentum density - , , - the components of energy flux The space-space components of the stress-energy tensor are simply the stress tensor from classic mechanics. Those components can be interpreted as: - , , , , , - the components of shear stress, or stress applied tangential to the region - , , - the components of normal stress, or stress applied perpendicular to the region; normal stress is another term for pressure. Pay particular attention to the first column of the above matrix: the components , , and , are interpreted as densities. A density is what you get when you measure the flux of 4-momentum across a 3-surface of constant time. Put another way, the instantaneous value of 4-momentum flux is density. Similarly, the diagonal space components of the stress-energy tensor — , and — represent normal stress, or pressure. Not some weird, relativistic pressure, but plain old ordinary pressure, like what keeps a balloon inflated. Pressure also contributes to gravitation, which raises a very interesting observation. Imagine a box of air, a rigid box that won't flex. Let's say that the pressure of the air inside the box is the same as the pressure of the air outside the box. If we heat the box — assuming of course that the box is airtight — then the temperature of the gas inside will rise. In turn, as predicted by the ideal gas law, the pressure within the box will increase. The box is now heavier than it was. 
More precisely, increasing the pressure inside the box raised the value of the pressure contribution to the stress-energy tensor, which will increase the curvature of spacetime around the box. What's more, merely increasing the temperature alone caused spacetime around the box to curve more, because the kinetic energy of the gas molecules inside the box also contributes to the stress-energy tensor, via the time-time component . All of these things contribute to the curvature of spacetime around the box, and thus to the gravitational field created by the box. Of course, in practice, the contributions of increased pressure and kinetic energy would be miniscule compared to the mass contribution, so it would be extremely difficult to measure the gravitational effect of heating the box. But on larger scales, such as the sun, pressure and temperature contribute significantly to the gravitational field. In this way, we can see that the stress-energy tensor neatly quantifies all static and dynamic properties of a region of spacetime, from mass to momentum to electric charge to temperature to pressure to shear stress. Thus, the stress-energy tensor is all we need on the right-hand side of the equation in order to relate matter, energy and, well, stuff to curvature, and thus to the gravitational field. Example 1: Stress-energy tensor for a vacuum The simplest possible stress-energy tensor is, of course, one in which all the values are zero. This tensor represents a region of space in which there is no matter, energy or fields, not just at a given instant, but over the entire period of time in which we're interested in the region. Nothing exists in this region, and nothing happens in this region. So one might assume that in a region where the stress-energy tensor is zero, the gravitational field must also necessarily be zero. There's nothing there to gravitate, so it follows naturally that there can be no gravitation. In fact, it's not that simple. We'll discuss this in greater detail in the next section, but even a cursory qualitative examination can tell us there's more going on than that. Consider the gravitational field of an isolated body. A test particle placed somewhere near but outside of the body will move in a geodesic in spacetime, freely falling inward toward the central mass. A test particle with some constant linear velocity component perpendicular to the interval between the particle and the mass will move in a conic section. This is true even though the stress-energy tensor in that region is exactly zero. This much is obvious from our intuitive understanding of gravity: gravity affects things at a distance. But exactly how and why this happens, in the model of the Einstein field equations, is an interesting question which will be explored in the next section. Example 2: Stress-energy tensor for an ideal dust Imagine a time-dependent distribution of identical, massive, non-interacting, electrically neutral particles. In general relativity, such a distribution is called a dust. Let's break down what this means. - The distribution of particles in our dust is not a constant; that is to say, the particles may be motion. The overall configuration you see when you look at the dust depends on the time at which you look at it, so the dust is said to be time-dependent. - The particles that make up our dust are all exactly the same; they don't differ from each other in any way. - Each particle in our dust has some rest mass. 
Because the particles are all identical, their rest masses must also be identical. We'll call the rest mass of an individual particle m0. - The particles don't interact with each other in any way: they don't collide, and they don't attract or repel each other. This is, of course, an idealization; since the particles are said to have mass m0, they must at least interact with each other gravitationally, if not in other ways. But we're constructing our model in such a way that gravitational effects between the individual particles are so small as to be be negligible. Either the individual particles are very tiny, or the average distance between them is very large. This same assumption neatly cancels out any other possible interactions, as long as we assume that the particles are far enough apart. - electrically neutral - In addition to the obvious electrostatic effect of two charged particles either attracting or repelling each other — thus violating our "non-interacting" assumption — allowing the particles to be both charged and in motion would introduce electrodynamic effects that would have to be factored into the stress-energy tensor. We would greatly prefer to ignore these effects for the sake of simplicity, so by definition, the particles in our dust are all electrically neutral. The easiest way to visualize an ideal dust is to imagine, well, dust. Dust particles sometimes catch the light of the sun and can be seen if you look closely enough. Each particle is moving in apparent ignorance of the rest, its velocity at any given moment dependent only on the motion of the air around it. If we take away the air, each particle of dust will continue moving in a straight line at a constant velocity, whatever its velocity happened to be at the time. This is a good visualization of an ideal dust. We're now going to zoom out slightly from our model, such that we lose sight of the individual particles that make up our dust and can consider instead the dust as a whole. We can fully describe our dust at any event P — where event is defined as a point in space at an instant in time — by measuring the density ρ and the 4-velocity u at P. If we have those two pieces of information about the dust at every point within it at every moment in time, then there's literally nothing else to say about the dust: it's been fully described. Let's start by figuring out the density of dust at a the event P, as measured from the perspective of an observer moving along with the flow of dust at P. The density ρ is calculated very simply: where m0 is the mass of each particle and n is the number of particles in a cubical volume one unit of length on a side centered on P. This quantity is called proper density, meaning the density of the dust as measured within the dust's own reference frame. In other words, if we could somehow imagine the dust to measure its own density, the proper density is the number it would get. Clearly proper density is a function of position, since it varies from point to point within the dust; the dust might be more "crowded" over here, less "crowded" over there. But it's also a function of time, because the configuration of the dust itself is time-dependent. If you measure the proper density at some point in space at one instant of time, then measure it at the same point in space at a different instant of time, you may get a different measurement. 
By convention, when dealing with a quantity that depends both on position in space and on time, physicists simply say that the quantity is a function of position, with the understanding that they're referring to a "position" in four-dimensional spacetime. The other quantity we need is 4-velocity. Four-velocity is an extension of three-dimensional velocity (or 3-velocity). In three dimensional space, 3-velocity is a vector with three components. Likewise, in four-dimensional spacetime, 4-velocity is a vector with four components. Directly measuring 4-velocity is an inherently tricky business, since one of its components describes motion along a "direction" that we cannot see with our eyes: motion through time. The math of special relativity lets us calculate the 4-velocity of a moving particle given only its 3-velocity v (with components vi where i = 1,2,3) and the speed of light. The time component of 4-velocity is given by: and the space components u1, u2 and u3 by: where γ is the boost, or Lorentz factor: and where , in turn, is the square of the Euclidean magnitude of the 3-velocity vector v: Therefore, if we know the 3-velocity of the dust at event P, then we can calculate its 4-velocity. (For more details on the how and why of 4-velocity, refer to the article on special relativity.) Just as proper density is a function of position in spacetime, 4-velocity also depends on position. The 4-velocity of our dust at a given point in space won't necessarily be the same as the 4-velocity of the dust at another point in space. Likewise, the 4-velocity at a given point at a given time may not be the same as the 4-velocity of the dust at the same point at a different time. It helps to think of 4-velocity as the velocity of the dust through a point in both space and time. Assembling the stress-energy tensor Since the density and the 4-velocity fully describe our dust, we have everything we need to calculate the stress-energy tensor. where the symbol indicates a tensor product. The tensor product of two vectors is a tensor of rank two, so the stress-energy tensor must be a tensor of rank two. In an arbitrary coordinate frame xμ, the contravariant components of the stress-energy tensor for an ideal dust are given by: From this equation, we can now calculate the contravariant components of the stress-energy tensor for an ideal dust. We start with the contravariant time-time component T00: If we rearrange the terms in this equation slightly, something important becomes apparent: Recall that ρ is a density quantity, in mass per unit volume. By the mass-energy equivalence principle, we know that E = mc2. So we can interpret this component of the stress-energy tensor, which is written here in terms of mass-energy, to be equivalent to an energy density. The off-diagonal components of the tensor — Tμν where μ and ν are not equal — are calculated this way: Again, recall that ρ is a quantity of mass per unit volume. Multiplying a mass times a velocity gives momentum, so we can interpret ρv1 as the density of momentum along the x1 direction, multiplied by constants c and γ2. Momentum density is an extremely difficult quantity to visualize, but it's a quantity that comes up over and over in general relativity. If nothing else, one can take comfort in the fact that momentum density is mathematically equivalent to the product of mass density and velocity, both of which are much more intuitive quantities. 
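The displayed formulas referred to in the last few paragraphs appear to be missing from this copy. The standard expressions, consistent with the definitions used here (proper density ρ, 3-velocity components v^i, Lorentz factor γ), are

\[ \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad u^{0} = \gamma c, \qquad u^{i} = \gamma v^{i} \quad (i = 1,2,3), \]

and the stress-energy tensor of the dust is the tensor product of the 4-velocity with itself, weighted by the proper density:

\[ T^{\mu\nu} = \rho\, u^{\mu} u^{\nu}, \qquad\text{so that}\qquad T^{00} = \gamma^{2}\rho c^{2}, \quad T^{0i} = T^{i0} = \gamma^{2}\rho c\, v^{i}, \quad T^{ij} = \gamma^{2}\rho\, v^{i} v^{j} . \]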
Note that the off-diagonal components of the tensor are equal to each other: In other words, in the case of an ideal dust, the stress-energy tensor is said to be symmetric. A rank two symmetric tensor is said to be symmetric if Tab = Tba. Diagonal space components The diagonal space components of the stress-energy tensor are calculated this way: In this case, we're multiplying a four-dimensional mass density, ρ, by the square of a component of 4-velocity. By dimensional analysis, we can see: Recall that the force has units: If we divide the units of the diagonal space component by the units of force, we get: So the diagonal space components of the stress-energy tensor come are expressed in terms of force per unit volume. Force per unit area are, of course, the traditional units of pressure in three-dimensional mechanics. So we can interpret the diagonal space components of the stress-energy tensor as the components of "4-pressure" in spacetime. The big picture We now know everything we know to assemble the entire stress-energy tensor, all sixteen components, and look at it as a whole. The large-scale structure of the tensor now becomes apparent. This is the stress-energy tensor of an ideal dust. The tensor is composed entirely out of the proper density and the components of 4-velocity. When velocities are low, the coefficient γ2, even though it's a squared value, remains extremely close to one. The time-time component includes a mass multiplied by the square of the speed of light, so it has to do with energy. The rest of the top row and left column all include the speed of light as a coefficient, as well as density and velocity; in the case of an ideal dust which is made up of non-interacting particles, the energy flux along any basis direction is the same as the momentum density along that direction. This is not the case in other, less simple models, but it's true here. The diagonal space components of the tensor represent pressure. For example, the T11 component represents the pressure that would be exerted on a plane perpendicular to the x1 direction. The off-diagonal space components represent shear stress. The T12 component, for instance, represents the pressure that would be exerted in the x2 direction on a plane perpendicular to the x1 axis. The overall process for calculating the stress-energy tensor for any system is fairly similar to the example given here. It involves taking into account all the matter and energy in the system, describing how the system evolves over time, and breaking that evolution down into components which represent individual densities and fluxes along different directions relative to a chosen coordinate basis. As can easily be imagined, the task of constructing a stress-energy tensor for a system of arbitrary complexity can be a very daunting one. Fortunately, gravity is an extremely weak interaction, as interactions go, so on the scales where gravity is interesting, much of the complexity of a system can be approximated. For instance, there is absolutely nothing in the entire universe that behaves exactly like the ideal dust described here; every massive particle interacts, in one way or another, with other massive particles. No matter what, a real system is going to be very much more complex than this approximation. Yet, the ideal dust solution remains a much-used approximation in theoretical physics specifically because gravity is such a weak interaction. 
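As a concrete check of the structure just described, the short sketch below assembles the dust stress-energy tensor numerically as the outer product T^{μν} = ρ u^μ u^ν. The proper density and 3-velocity are arbitrary illustrative values chosen for this example, not taken from the text.

import numpy as np

c    = 2.998e8                                 # speed of light, m/s
rho0 = 1.0e-20                                 # proper mass density, kg/m^3 (arbitrary)
v    = np.array([3.0e7, 1.0e7, 0.0])           # 3-velocity of the dust, m/s (arbitrary)

gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / c**2)
u = np.concatenate(([gamma * c], gamma * v))   # 4-velocity components u^mu

T = rho0 * np.outer(u, u)                      # contravariant components T^{mu nu}

print("gamma               :", gamma)
print("T^00 (energy dens.) :", T[0, 0], "=", gamma**2 * rho0 * c**2)
print("symmetric?          :", np.allclose(T, T.T))
print("momentum density    :", T[0, 1:] / c)   # = gamma^2 * rho0 * v

The printed momentum-density row is just γ²ρ times the 3-velocity, which is the "mass density times velocity" reading given above, and the symmetry check reflects the fact that the outer product of a vector with itself is always symmetric.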
On the scales where gravity is worth studying, many distributions of matter, including interstellar nebulae, clusters of galaxies, even the whole universe really do behave very much like an ideal dust. The left side of the equation: the Einstein curvature tensor We will recall that the Einstein field equations can be written as a single tensor equation: The right side of the equation consists of some constants and the stress-energy tensor, described in significant detail in the previous section. The right side of the equation is the "matter" side. All matter and energy in a region of space is described by the right side of the equation. The left side of the equation, then, is the "space" side. Matter tells space how to curve, and space tells matter how to move. So the left side of the Einstein field equation must necessarily describe the curvature of spacetime in the presence of matter and energy. Some assumptions about the universe Before we proceed into a discussion of what curvature is and how the Einstein equation describes it, we must first pause to state some fundamental assumptions about the universe. The first assumption we're going to make is that spacetime is continuous. In essence, this means that for any event P in spacetime — that is, any point in space and moment in time — there exists some local neighborhood of P where the intrinsic properties of spacetime differ from those at P by only an infinitesimal amount. The second assumption we're going to make is that spacetime is differentiable everywhere. In other words, the geometry of spacetime doesn't have any sharp creases in it. If we hold these two assumptions to be true, then a convenient property of spacetime emerges: Given any event P, there exists a local neighborhood where spacetime can be treated as flat, that is, having zero curvature. It is not necessarily true that all of spacetime be flat — in fact, it most definitely is not — but given any event in spacetime, there exists some neighborhood around it that is flat. This neighborhood may be arbitrarily small in both time and space, but it is guaranteed to exist as long as our two assumptions remain valid. With these two assumptions and this convenient property in hand, we will now examine what it means to say that spacetime is curved. Flatness versus curvature The Euclidean plane is an infinite, flat, two-dimensional surface. A sheet of paper is a good approximation of the Euclidean plane. Onto this plane, we can project a set of Cartesian coordinates. By "Cartesian," we mean that the coordinate axes are straight lines, that they are perpendicular, and that the unit lengths of the axes are equal. A fancier term for a Cartesian coordinate system is an orthonormal basis. Note carefully the distinction between the Euclidean plane and Cartesian coordinates. The plane exists as a thing in and of itself, just as a blank piece of paper does. It has certain properties, which we'll get into below. Those properties are intrinsic to the plane. That is, the properties don't have anything to do with the coordinates we project onto the plane. The plane is a geometric object, and the coordinates are the method by which we measure the plane. (The emphasis on the word measure there is not accidental; please keep this idea in the foreground of your mind as we continue.) Cartesian coordinates are not the only coordinates we can use in the Euclidean plane. 
For example, instead of having axes that are perpendicular to each other, we could choose axes that are straight lines, but that meet at some non-perpendicular angle. These types of coordinates are called oblique. For that matter, we're not bound to use straight-line coordinates at all. We could instead choose polar coordinates, wherein every point on the plane is described by a distance from a fixed but arbitrary point and an angle from a fixed but arbitrary direction. Polar coordinates are often more convenient than Cartesian coordinates. For example, when navigating a ship on the ocean, the location of a fixed point is usually described in terms of a bearing and a distance, where the distance is the straight-line distance from the ship to the point, and the bearing is the clockwise angle relative to the direction in which the ship is sailing. Polar coordinates in two and three dimensions are often used in physics for similar reasons. But there's a fundamental problem with polar coordinates that is not present with Cartesian coordinates. In Cartesian coordinates, every point on the Euclidean plane is identified by exactly one set of real numbers: there is precisely one set of x and y coordinates for every point, and every point corresponds to precisely one set of coordinates. This is not true in polar coordinates. What are the unique polar coordinates for the origin? The radial distance is obviously zero, but what is the angle? In actuality, if the radial distance is zero, any angle can be used, and the coordinates will identify the same point. The one-to-one correspondence between points in the plane and pairs of coordinates breaks down at the origin. In mathematical terms, polar coordinates in the Euclidean plane have a coordinate singularity at the origin. A coordinate singularity is a point in space where ambiguities are introduced, not because of some intrinsic property of space, but because of the coordinate basis you chose. So clearly there may exist a reason to choose one coordinate system over another when measuring — there's that word again — the Euclidean plane. Polar coordinates have a singularity at the origin — in this case, a point of undefined angle — while Cartesian coordinates have no such singularities anywhere. So there may be good reason to choose Cartesian coordinates over polar coordinates when measuring the Euclidean plane. Fortunately, this is always possible. The Euclidean plane can always be measured by Cartesian coordinates; that is, coordinates wherein the axes are straight and perpendicular at their intersection, and where lines of constant coordinate — picture the grid on a sheet of graph paper — are always a constant distance apart no matter where you measure them. Imagine taking a piece of graph paper, which is printed in a pattern that lets us easily visualize the Cartesian coordinate system, and rolling it into a cylinder. Do any creases appear in the paper? No, it remains smooth all over. Do the lines printed on the paper remain a constant distance apart everywhere? Yes, they do. In technical mathematical terms, then, the surface of a cylinder is flat. That is, it can be measured by an orthonormal basis, and there is everywhere a one-to-one correspondence between sets of coordinates and points on the surface. It's possible not to use an orthonormal basis to measure the surface; one might reasonably choose polar coordinates, or some other arbitrary coordinate system, if it's more convenient. 
But whichever basis is actually used, it's always possible to switch to an orthonormal basis instead. Now imagine wrapping a sheet of graph paper around a basketball. Does the paper remain smooth? No, if we press it down, creases appear. Do the lines on the paper remain parallel? No, they have to bend in order to conform the paper to the shape of the ball. In the same technical mathematical terms, the surface of a sphere is not flat. It's curved. That is, it is not possible to measure the surface all over using an orthonormal basis. But what if we focus our attention only on a part of the sphere? What if instead of measuring a basketball, we want to measure the whole Earth? The Earth is a sphere, and therefore its surface is curved and can't be measured all over with Cartesian coordinates. But if we look only at a small section of the surface — a square mile on a side, for instance — then we can project a set of Cartesian coordinates that work just fine. If we choose our region of interest to be sufficiently small, then Cartesian coordinates will fit on the surface to within the limits of our ability to measure the difference. The surface of a sphere, then, is globally curved, but locally flat. In physicist jargon, the surface of a sphere can be flattened over a sufficiently small region. Not the whole sphere all at once, nor half of it, nor a quarter of it. But a sufficiently small region can be dealt with as if it were a Euclidean plane. But this brings up an important point. The entire surface of the sphere is curved, and thus can't be approximated with Cartesian coordinates. But a sufficiently small patch of the surface can be approximated with Cartesian coordinates. This implies, then, that "curvedness" isn't an either-or property. Somewhere between the locally flat region of the surface and the entire surface, the amount of curvature goes from none to some value. Curvature, then, must be something we can measure. The metric tensor It is a fundamental property of the Euclidean plane that, when Cartesian coordinates are used, the distance s between any two points A and B is given by the following equation: where Δx and Δy are the distance between A and B in the x and y directions, respectively. This is essentially a restatement of the universally known Pythagorean theorem, and in the context of general relativity, it is called the metric equation. Metric, of course, comes from the same linguistic root as the word measure, and since this is the equation we use to measure distances, it makes sense to call it the metric equation. But this particular metric equation only works on the Euclidean plane with Cartesian coordinates. If we use polar coordinates, this equation won't work. If we're on a curved surface instead of a plane, this equation won't work. This metric equation is only valid on a flat surface with Cartesian coordinates. Which makes it pretty useless, since so much of physics revolves around curved spacetime and spherical coordinates. What we need is a generalized metric equation, some way of measuring the interval of any two points regardless of what coordinate system we're using or whether our local geometry is flat or curved. The metric tensor equation provides this generalization. If v is any vector having components vμ, the length of v is given by the following equation: where gμν is the metric tensor, and μ and ν range over the number of dimensions. Recall that Einstein summation notation means that this is actually a sum over indices μ and ν. 
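The two displayed equations referred to in this passage did not survive in this copy. In standard notation they are the flat, Cartesian metric equation

\[ s^{2} = (\Delta x)^{2} + (\Delta y)^{2}, \]

which is just the Pythagorean theorem, and the general metric tensor equation for the squared length of a vector v,

\[ |\mathbf{v}|^{2} = g_{\mu\nu}\, v^{\mu} v^{\nu} . \]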
If we assume that we're in the two-dimensional Euclidean plane, the metric tensor equation expands to:

|v|² = g_11 (v^1)² + g_12 v^1 v^2 + g_21 v^2 v^1 + g_22 (v^2)²

The terms of the metric tensor, then, must be numerical coefficients in the metric equation. We already know what these coefficients need to be to make the metric equation work in the Euclidean plane with Cartesian coordinates:

g_11 = 1, g_22 = 1, g_12 = g_21 = 0

Now we can write the metric tensor for the Euclidean plane in Cartesian coordinates in the form of a 2 × 2 matrix:

g_μν = [1 0; 0 1] (the 2 × 2 identity matrix)

So in the case of the Euclidean plane with Cartesian coordinates, the metric tensor is the Kronecker delta:

g_μν = δ_μν

Of course, the same concepts apply if we expand our interest from the plane to three-dimensional Euclidean space with Cartesian coordinates. We just have to let the indices of the Kronecker delta run from 1 to 3. Which gives us the following metric equation for the length of a vector v (omitting terms with zero coefficient):

|v|² = (v^1)² + (v^2)² + (v^3)²

Which precisely agrees with the Pythagorean theorem in three dimensions.

So given a metric tensor g_μν for any space and coordinate basis, we can calculate the distance between any two points. The metric tensor, therefore, is what allows us to measure curved space. In a very real sense, the metric tensor describes the shape of both the underlying geometry and the chosen coordinate basis. But relativity is concerned not with geometrically abstract space; we're interested in very real spacetime, and that requires a slightly different kind of metric.

The local Minkowski metric
Parallel transport and intrinsic curvature
The Riemann and Ricci tensors and the curvature scalar
The Einstein tensor
The cosmological constant
http://www.conservapedia.com/Quantitative_Introduction_to_General_Relativity
13
75
California Mathematics Standards, 6th Grade

Number Sense

1.0 Students compare and order positive and negative fractions, decimals, and mixed numbers. Students solve problems involving fractions, ratios, proportions, and percentages:
1.1 Compare and order positive and negative fractions, decimals, and mixed numbers and place them on a number line.
1.2 Interpret and use ratios in different contexts (e.g., batting averages, miles per hour) to show the relative sizes of two quantities, using appropriate notations (a/b, a to b, a:b).
1.3 Use proportions to solve problems (e.g., determine the value of N if 4/7 = N/21, find the length of a side of a polygon similar to a known polygon). Use cross-multiplication as a method for solving such problems, understanding it as the multiplication of both sides of an equation by a multiplicative inverse (a worked example appears after this list).
1.4 Calculate given percentages of quantities and solve problems involving discounts at sales, interest earned, and tips.
2.0 Students calculate and solve problems involving addition, subtraction, multiplication, and division:
2.1 Solve problems involving addition, subtraction, multiplication, and division of positive fractions and explain why a particular operation was used for a given situation.
2.2 Explain the meaning of multiplication and division of positive fractions and perform the calculations (e.g., 5/8 ÷ 15/16 = 5/8 × 16/15 = 2/3).
2.3 Solve addition, subtraction, multiplication, and division problems, including those arising in concrete situations, that use positive and negative integers and combinations of these operations.
2.4 Determine the least common multiple and the greatest common divisor of whole numbers; use them to solve problems with fractions (e.g., to find a common denominator to add two fractions or to find the reduced form for a fraction).

Algebra and Functions

1.0 Students write verbal expressions and sentences as algebraic expressions and equations; they evaluate algebraic expressions, solve simple linear equations, and graph and interpret their results:
1.1 Write and solve one-step linear equations in one variable.
1.2 Write and evaluate an algebraic expression for a given situation, using up to three variables.
1.3 Apply algebraic order of operations and the commutative, associative, and distributive properties to evaluate expressions; and justify each step in the process.
1.4 Solve problems manually by using the correct order of operations or by using a scientific calculator.
2.0 Students analyze and use tables, graphs and rules to solve problems involving rates and proportions:
2.1 Convert one unit of measurement to another (e.g., from feet to miles, from centimeters to inches).
2.2 Demonstrate an understanding that rate is a measure of one quantity per unit value of another quantity.
2.3 Solve problems involving rates, average speed, distance, and time.
3.0 Students investigate geometric patterns and describe them algebraically:
3.1 Use variables in expressions describing geometric quantities (e.g., P = 2w + 2l, A = (1/2)bh, C = (pi)d, the formulas for the perimeter of a rectangle, the area of a triangle, and the circumference of a circle, respectively).
3.2 Express in symbolic form simple relationships arising from geometry.

Measurement and Geometry

1.0 Students deepen their understanding of the measurement of plane and solid shapes and use this understanding to solve problems:
1.1 Understand the concept of a constant such as pi; know the formulas for the circumference and area of a circle.
1.2 Know common estimates of pi (3.14; 22/7) and use these values to estimate and calculate the circumference and the area of circles; compare with actual measurements.
1.3 Know and use the formulas for the volume of triangular prisms and cylinders (area of base x height); compare these formulas and explain the similarity between them and the formula for the volume of a rectangular solid.
2.0 Students identify and describe the properties of two-dimensional figures:
2.1 Identify angles as vertical, adjacent, complementary, or supplementary and provide descriptions of these terms.
2.2 Use the properties of complementary and supplementary angles and the sum of the angles of a triangle to solve problems involving an unknown angle.
2.3 Draw quadrilaterals and triangles from given information about them (e.g., a quadrilateral having equal sides but no right angles, a right isosceles triangle).

Statistics, Data Analysis and Probability

1.0 Students compute and analyze statistical measurements for data sets:
1.1 Compute the range, mean, median, and mode of data sets.
1.2 Understand how additional data added to data sets may affect these computations of measures of central tendency.
1.3 Understand how the inclusion or exclusion of outliers affects measures of central tendency.
1.4 Know why a specific measure of central tendency (mean, median, mode) provides the most useful information in a given context.
2.0 Students use data samples of a population and describe the characteristics and limitations of the samples:
2.1 Compare different samples of a population with the data from the entire population and identify a situation in which it makes sense to use a sample.
2.2 Identify different ways of selecting a sample (e.g., convenience sampling, responses to a survey, random sampling) and which method makes a sample more representative for a population.
2.3 Analyze data displays and explain why the way in which the question was asked might have influenced the results obtained and why the way in which the results were displayed might have influenced the conclusions reached.
2.4 Identify data that represent sampling errors and explain why the sample (and the display) might be biased.
2.5 Identify claims based on statistical data and, in simple cases, evaluate the validity of the claims.
3.0 Students determine theoretical and experimental probabilities and use these to make predictions about events:
3.1 Represent all possible outcomes for compound events in an organized way (e.g., tables, grids, tree diagrams) and express the theoretical probability of each outcome.
3.2 Use data to estimate the probability of future events (e.g., batting averages or number of accidents per mile driven).
3.3 Represent probabilities as ratios, proportions, decimals between 0 and 1, and percentages between 0 and 100 and verify that the probabilities computed are reasonable; know that if P is the probability of an event, 1 − P is the probability of the event not occurring.
3.4 Understand that the probability of either of two disjoint events occurring is the sum of the two individual probabilities and that the probability of one event following another, in independent trials, is the product of the two probabilities.
3.5 Understand the difference between independent and dependent events.

Mathematical Reasoning

1.0 Students make decisions about how to approach problems:
1.1 Analyze problems by identifying relationships, discriminating relevant from irrelevant information, identifying missing information, sequencing and prioritizing information, and observing patterns.
1.2 Formulate and justify mathematical conjectures based upon a general description of the mathematical question or problem posed.
1.3 Determine when and how to break a problem into simpler parts.
2.0 Students use strategies, skills and concepts in finding solutions:
2.1 Use estimation to verify the reasonableness of calculated results.
2.2 Apply strategies and results from simpler problems to more complex problems.
2.3 Estimate unknown quantities graphically and solve for them using logical reasoning, and arithmetic and algebraic techniques.
2.4 Use a variety of methods such as words, numbers, symbols, charts, graphs, tables, diagrams and models to explain mathematical reasoning.
2.5 Express the solution clearly and logically using appropriate mathematical notation and terms and clear language, and support solutions with evidence, in both verbal and symbolic work.
2.6 Indicate the relative advantages of exact and approximate solutions to problems and give answers to a specified degree of accuracy.
2.7 Make precise calculations and check the validity of the results from the context of the problem.
3.0 Students move beyond a particular problem by generalizing to other situations:
3.1 Evaluate the reasonableness of the solution in the context of the original situation.
3.2 Note the method of deriving the solution and demonstrate conceptual understanding of the derivation by solving similar problems.
3.3 Develop generalizations of the results obtained and the strategies used and extend them to new problem situations.
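To illustrate the cross-multiplication method referenced in Number Sense standard 1.3 above, here is a short worked example; it is ours, not part of the published standards. Multiplying both sides of the proportion by 21 (equivalently, by the multiplicative inverse of 1/21) isolates N:

4/7 = N/21  =>  21 × (4/7) = 21 × (N/21)  =>  N = 84/7 = 12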
http://mathforum.org/alejandre/frisbie/math/standards6.html
13
166
Gravity is the force of attraction between massive particles. Weight is determined by the mass of an object and its location in a gravitational field. While a great deal is known about the properties of gravity, the ultimate cause of the gravitational force remains an open question. General relativity is the most successful theory of gravitation to date. It postulates that mass and energy curve space-time, resulting in the phenomenon known as gravity.

The effect of the bending of spacetime is often misunderstood. Most people prefer to think of a falling object as accelerating, but a freely falling body feels no acceleration at all: ask any skydiver whether he feels anything other than wind resistance. What we experience as weight is acceleration away from a free-fall path. F = ma means that a force is required to push a mass off its natural path through spacetime. For a rocket ship, that force comes from the rocket motor; for something resting on the surface of the Earth, it comes from the compression of the material between the object and the Earth's center of mass. The weight you feel is your resistance to deviating from your path in spacetime, and in this respect there is no difference between the weight felt because of gravity and the weight felt inside an accelerating rocket.

Newton's law of universal gravitation

Newton's law of universal gravitation states the following:
- Every object in the Universe attracts every other object with a force directed along the line of centers of mass for the two objects. This force is proportional to the product of their masses and inversely proportional to the square of the separation between the centers of mass of the two objects.

Given that the force is along the line through the two masses, the law can be stated symbolically as follows:

F = −G m1 m2 / r²

where:
- F is the gravitational force between the two objects (negative, by this sign convention, because the force is attractive)
- G is the gravitational constant, which is approximately G = 6.67 × 10⁻¹¹ N m² kg⁻²
- m1 is the mass of the first object
- m2 is the mass of the second object
- r is the distance between the objects

With this convention F is always negative, which indicates that the force is attractive. The minus sign is used to keep the same meaning as in Coulomb's law, where a positive result means repulsion between two charges. Thus gravity is proportional to the mass of each object, but has an inverse-square relationship with the distance between the centres of each mass.

Strictly speaking, this law applies only to point-like objects. If the objects have spatial extent, the force has to be calculated by integrating the force (in vector form, see below) over the extents of the two bodies. It can be shown that for an object with a spherically-symmetric distribution of mass, the integral gives the same gravitational attraction on masses outside it as if the object were a point mass.1

This law of universal gravitation was originally formulated by Isaac Newton in his work, the Principia Mathematica (1687). The history of gravitation as a physical concept is considered in more detail below.

Newton's law of universal gravitation can be written as a vector equation to account for the direction of the gravitational force as well as its magnitude. In this formulation, quantities in bold represent vectors:

F12 = −G (m1 m2 / |r21|²) r̂21

where:
- F12 is the force on object 1 due to object 2
- G is the gravitational constant
- m1 and m2 are the masses of the objects 1 and 2
- r21 = | r2 − r1 | is the distance between objects 2 and 1
- r̂21 is the unit vector from object 2 to 1

It can be seen that the vector form of the equation is the same as the scalar form, except for the vector value of F and the unit vector. Also, it can be seen that F12 = −F21.

Gravitational acceleration is given by the same formula except for one of the factors m:

a1 = F12 / m1 = −G (m2 / |r21|²) r̂21

The gravitational field is a vector field that describes the gravitational force an object of given mass experiences in any given place in space. It is a generalization of the vector form, which becomes particularly useful if more than 2 objects are involved (such as a rocket between the Earth and the Moon). For 2 objects (e.g. object 1 is a rocket, object 2 the Earth), we simply write r instead of r21 and m instead of m1, and define the gravitational field g(r) as:

g(r) = −G (m2 / |r|²) r̂

so that we can write:

F(r) = m g(r)

This formulation is independent of the objects causing the field. The field has units of force divided by mass; in SI, this is N·kg⁻¹.

Problems with Newton's theory

Although Newton's formulation of gravitation is quite accurate for most practical purposes, it has a few problems:
- There is no prospect of identifying the mediator of gravity. Newton himself felt the inexplicable action at a distance to be unsatisfactory (see "Newton's reservations" below).
- Newton's theory requires that gravitational force is transmitted instantaneously. Given classical assumptions of the nature of space and time, this is necessary to preserve the conservation of angular momentum observed by Johannes Kepler. However, it is in direct conflict with Einstein's theory of special relativity, which places an upper limit (the speed of light in vacuum) on the velocity at which signals can be transmitted.

Disagreement with observation

- Newton's theory does not fully explain the precession of the perihelion of the orbit of the planet Mercury. There is a 43 arcsecond per century discrepancy between the Newtonian prediction (resulting from the gravitational tugs of the other planets) and the observed precession.
- The predicted deflection of light by gravity is only half as much as the observed deflection; the relevant observations were made after General Relativity was developed in 1915.
- The observed fact that gravitational and inertial masses are the same for all bodies is unexplained within Newton's system. General relativity takes this as a postulate. See equivalence principle.

It's important to understand that while Newton was able to formulate his law of gravity in his monumental work, he was deeply uncomfortable with the notion of "action at a distance" which his equations implied. He never, in his words, "assigned the cause of this power". In all other cases, he used the phenomenon of motion to explain the origin of various forces acting on bodies, but in the case of gravity, he was unable to experimentally identify the motion that produces the force of gravity. Moreover, he refused to even offer a hypothesis as to the cause of this force on grounds that to do so was contrary to sound science. He lamented the fact that "philosophers have hitherto attempted the search of nature in vain" for the source of the gravitational force, as he was convinced "by many reasons" that there were "causes hitherto unknown" that were fundamental to all the "phenomena of nature".
These fundamental phenomena are still under investigation and, though hypotheses abound, the definitive answer is yet to be found. While it is true that Einstein's hypotheses are successful in explaining the effects of gravitational forces more precisely than Newton's in certain cases, he too never assigned the cause of this power in his theories. It is said that in Einstein's equations, "matter tells space how to curve, and space tells matter how to move", but this new idea, completely foreign to the world of Newton, does not enable Einstein to assign the "cause of this power" to curve space any more than the Law of Universal Gravitation enabled Newton to assign its cause. In Newton's own words:

- I wish we could derive the rest of the phenomena of nature by the same kind of reasoning from mechanical principles; for I am induced by many reasons to suspect that they may all depend upon certain forces by which the particles of bodies, by some causes hitherto unknown, are either mutually impelled towards each other, and cohere in regular figures, or are repelled and recede from each other; which forces being unknown, philosophers have hitherto attempted the search of nature in vain.

If science is eventually able to discover the cause of the gravitational force, Newton's wish could eventually be fulfilled as well. It should be noted that here, the word "cause" is not being used in the same sense as "cause and effect" or "the defendant caused the victim to die". Rather, when Newton uses the word "cause," he (apparently) is referring to an "explanation". In other words, a phrase like "Newtonian gravity is the cause of planetary motion" means simply that Newtonian gravity explains the motion of the planets. See Causality and Causality (physics).

Einstein's theory of gravitation

Einstein's theory of gravitation answered the problems with Newton's theory noted above. In a revolutionary move, his theory of general relativity (1915) stated that the presence of mass, energy, and momentum causes spacetime to become curved. Because of this curvature, the paths that objects in inertial motion follow can "deviate" or change direction over time. This deviation appears to us as an acceleration towards massive objects, which Newton characterized as being gravity. In general relativity, however, this acceleration or free fall is actually inertial motion. So objects in a gravitational field appear to fall at the same rate due to their being in inertial motion while the observer is the one being accelerated. (This identification of free fall and inertia is known as the Equivalence principle.)

The relationship between the presence of mass/energy/momentum and the curvature of spacetime is given by the Einstein field equations. The actual shapes of spacetime are described by solutions of the Einstein field equations. In particular, the Schwarzschild solution (1916) describes the gravitational field around a spherically symmetric massive object. The geodesics of the Schwarzschild solution describe the observed behavior of objects being acted on gravitationally, including the anomalous perihelion precession of Mercury and the bending of light as it passes the Sun. Arthur Eddington found observational evidence for the bending of light passing the Sun as predicted by general relativity in 1919. Subsequent observations have confirmed Eddington's results, and observations of a pulsar which is occulted by the Sun every year have permitted this confirmation to be done to a high degree of accuracy.
There have also in the years since 1919 been numerous other tests of general relativity, all of which have confirmed Einstein's theory.

Units of measurement and variations in gravity

Gravitational phenomena are measured in various units, depending on the purpose. The gravitational constant is measured in newtons times metre squared per kilogram squared. Gravitational acceleration, and acceleration in general, is measured in metres per second squared or in non-SI units such as galileos, gees, or feet per second squared. The acceleration due to gravity at the Earth's surface is approximately 9.8 m/s², with more precise values depending on the location. A standard value of the Earth's gravitational acceleration has been adopted, called gn. When the typical range of interesting values is from zero to tens of metres per second squared, as in aircraft, acceleration is often stated in multiples of gn. When used as a measurement unit, the standard acceleration is often called "gee", since the symbol g for the acceleration can be mistaken for g, the symbol for the gram. For other purposes, measurements are made in millimetres or micrometres per second squared (mm/s² or µm/s²) or in milligals or milligalileos (1 mGal = 1/1000 Gal), a non-SI unit still common in some fields such as geophysics. A related unit is the eotvos, which is a cgs unit of the gravitational gradient. Mountains and other geological features cause subtle variations in the Earth's gravitational field; the magnitude of the variation per unit distance is measured in inverse seconds squared or in eotvoses.

A larger variation in the effect of gravity occurs when we move from the equator to the poles. The effective force of gravity is weaker at the equator than at the poles, due to the rotation of the Earth and the resulting centrifugal force and flattening of the Earth. The centrifugal force causes an effective force 'up' which effectively counteracts gravity, while the flattening of the Earth causes the poles to be closer to the center of mass of the Earth. It is also related to the fact that the Earth's density changes from the surface of the planet to its centre. The sea-level gravitational acceleration is 9.780 m/s² at the equator and 9.832 m/s² at the poles, so an object will exert about 0.5% more force due to gravity at sea level at the poles than at sea level at the equator.

Comparison with electromagnetic force

The gravitational interaction of protons is approximately a factor of 10³⁶ weaker than the electromagnetic repulsion. This factor is independent of distance, because both interactions are inversely proportional to the square of the distance. Therefore on an atomic scale mutual gravity is negligible. However, the main interaction between common objects and the Earth and between celestial bodies is gravity, because at this scale matter is electrically neutral: even if in both bodies there were a surplus or deficit of only one electron for every 10¹⁸ protons and neutrons this would already be enough to cancel gravity (or in the case of a surplus in one and a deficit in the other: double the interaction). However, the main interactions between the charged particles in cosmic plasma (which makes up over 99% of the universe by volume) are electromagnetic forces. The relative weakness of gravity can be demonstrated with a small magnet picking up pieces of iron. The small magnet is able to overwhelm the gravitational interaction of the entire Earth.
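To put a number on the comparison just made, here is a small R sketch; it is illustrative only and not part of the original article, and the physical constants are standard reference values rather than figures quoted in the text.

# Ratio of electrostatic repulsion to gravitational attraction between two protons.
G   <- 6.674e-11      # gravitational constant, N m^2 kg^-2
k   <- 8.988e9        # Coulomb constant, N m^2 C^-2
m_p <- 1.673e-27      # proton mass, kg
e   <- 1.602e-19      # elementary charge, C

# Both forces fall off as 1/r^2, so the separation r cancels out of the ratio.
ratio <- (k * e^2) / (G * m_p^2)
ratio                  # roughly 1.2e36, the "factor of 10^36" quoted above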
Similarly, when doing a chin-up, the electromagnetic interaction within your muscle cells is able to overcome the force induced by Earth on your entire body. Gravity is small unless at least one of the two bodies is large or one body is very dense and the other is close by, but the small gravitational interaction exerted by bodies of ordinary size can fairly easily be detected through experiments such as the Cavendish torsion bar experiment.

- Jefimenko, Oleg D., "Causality, electromagnetic induction, and gravitation: a different approach to the theory of electromagnetic and gravitational fields". Star City [West Virginia]: Electret Scientific Co., c1992. ISBN 0917406095
- Heaviside, Oliver, "A gravitational and electromagnetic analogy". The Electrician, 1893.

Gravity and quantum mechanics

It is strongly believed that three of the four fundamental forces (the strong nuclear force, the weak nuclear force, and the electromagnetic force) are manifestations of a single, more fundamental force. Combining gravity with these forces of quantum mechanics to create a theory of quantum gravity is currently an important topic of research amongst physicists. General relativity is essentially a geometric theory of gravity. Quantum mechanics relies on interactions between particles, but general relativity requires no exchange of particles in its explanation of gravity. Scientists have theorized about the graviton (a messenger particle that transmits the force of gravity) for years, but have been frustrated in their attempts to find a consistent quantum theory for it. Many believe that string theory holds a great deal of promise to unify general relativity and quantum mechanics, but this promise has yet to be realized.

It is notable that in general relativity gravitational radiation (which under the rules of quantum mechanics must be composed of gravitons) is only created in situations where the curvature of spacetime is oscillating, such as for co-orbiting objects. The amount of gravitational radiation emitted by the solar system and its planetary systems is far too small to measure. However, gravitational radiation has been indirectly observed as an energy loss over time in binary pulsar systems such as PSR 1913+16. It is believed that neutron star mergers and black hole formation may create detectable amounts of gravitational radiation. Gravitational radiation observatories such as LIGO have been created to study the problem. No confirmed detections have been made of this hypothetical radiation, but as the science behind LIGO is refined and as the instruments themselves are endowed with greater sensitivity over the next decade, this may change.

Experimental tests of theories

Today General Relativity is accepted as the standard description of gravitational phenomena. (Alternative theories of gravitation exist but are more complicated than General Relativity.) General Relativity is consistent with all currently available measurements of large-scale phenomena. For weak gravitational fields and bodies moving at slow speeds at small distances, Einstein's General Relativity gives almost exactly the same predictions as Newton's law of gravitation. Crucial experiments that justified the adoption of General Relativity over Newtonian gravity were the classical tests: the gravitational redshift, the deflection of light rays by the Sun, and the precession of the orbit of Mercury.
More recent experimental confirmations of General Relativity were the (indirect) deduction of gravitational waves being emitted from orbiting binary stars, the existence of neutron stars and black holes, gravitational lensing, and the convergence of measurements in observational cosmology to an approximately flat model of the observable Universe, with a matter density parameter of approximately 30% of the critical density and a cosmological constant of approximately 70% of the critical density.

The equivalence principle, the postulate of general relativity that presumes that inertial mass and gravitational mass are the same, is also under test. Past, present, and future tests are discussed in the equivalence principle section.

Even to this day, scientists try to challenge General Relativity with more and more precise direct experiments. The goal of these tests is to shed light on the yet unknown relationship between Gravity and Quantum Mechanics. Space probes are used to either make very sensitive measurements over large distances, or to bring the instruments into an environment that is much more controlled than it could be on Earth. For example, in 2004 a dedicated satellite for gravity experiments, called Gravity Probe B, was launched to test general relativity's predicted frame-dragging effect, among others. Also, land-based experiments like LIGO and a host of "bar detectors" are trying to detect gravitational waves directly. A space-based hunt for gravitational waves, LISA, is in its early stages. It should be sensitive to low frequency gravitational waves from many sources, perhaps including the Big Bang.

Speed of gravity: Einstein's theory of relativity predicts that the speed of gravity (defined as the speed at which changes in location of a mass are propagated to other masses) should be consistent with the speed of light. In 2002, the Fomalont-Kopeikin experiment produced measurements of the speed of gravity which matched this prediction. However, this experiment has not yet been widely peer-reviewed, and is facing criticism from those who claim that Fomalont-Kopeikin did nothing more than measure the speed of light in a convoluted manner.

The Pioneer anomaly is an empirical observation that the positions of the Pioneer 10 and Pioneer 11 space probes differ very slightly from what would be expected according to known effects (gravitational or otherwise). The possibility of new physics has not been ruled out, despite very thorough investigation in search of a more prosaic explanation.

Recent Alternative theories

- Brans-Dicke theory of gravity
- Rosen bi-metric theory of gravity
- In the modified Newtonian dynamics (MOND), Mordehai Milgrom proposes a modification of Newton's Second Law of motion for small accelerations.

Historical Alternative theories

- Nikola Tesla challenged Albert Einstein's theory of relativity, announcing he was working on a Dynamic theory of gravity (which began between 1892 and 1894) and argued that a "field of force" was a better concept and focused on media with electromagnetic energy that fill all of space.
- In 1967 Andrei Sakharov proposed something similar, if not essentially identical. His theory has been adopted and promoted by Messrs. Haisch, Rueda and Puthoff who, among other things, explain that gravitational and inertial mass are identical and that high speed rotation can reduce (relative) mass. Combining these notions with those of T. T.
Brown, it is relatively easy to conceive how field propulsion vehicles such as "flying saucers" could be engineered given a suitable source of power.
- Georges-Louis LeSage proposed a gravity mechanism, now commonly called LeSage gravity, based on a fluid-based explanation where a light gas fills the entire universe.

A self-gravitating system is a system of masses kept together by mutual gravity. An example is a binary star.

Special applications of gravity

A weight hanging from a cable over a pulley provides a constant tension in the cable, also in the part on the other side of the pulley. Molten lead, when poured into the top of a shot tower, will coalesce into a rain of spherical lead shot, first separating into droplets, forming molten spheres, and finally freezing solid, undergoing many of the same effects as meteoritic tektites, which will cool into spherical, or near-spherical shapes in free-fall.

Comparative gravities of different planets and Earth's moon

The standard acceleration due to gravity at the Earth's surface is, by convention, equal to 9.80665 metres per second squared. (The local acceleration of gravity varies slightly over the surface of the Earth; see gee for details.) This quantity is known variously as gn, ge (sometimes this is the normal equatorial value on Earth, 9.78033 m/s²), g0, gee, or simply g (which is also used for the variable local value). The following is a list of the gravitational accelerations (in multiples of g) at the Sun, the surfaces of each of the planets in the solar system, and the Earth's moon:

(table of gravitational accelerations not reproduced in this copy)

Note: The "surface" is taken to mean the cloud tops of the gas giants (Jupiter, Saturn, Uranus and Neptune) in the above table. It is usually specified as the location where the pressure is equal to a certain value (normally 75 kPa). For the Sun, the "surface" is taken to mean the photosphere. For spherical bodies, surface gravity in m/s² is 2.8 × 10⁻¹⁰ times the radius in m times the average density in kg/m³. When flying from Earth to Mars, climbing out of the Earth's gravitational field at the start takes roughly 100,000 times more effort than working against the Sun's gravity for the rest of the flight.

Mathematical equations for a falling body

These equations describe the motion of a falling body under acceleration g near the surface of the Earth. Here, the acceleration of gravity is a constant, g, because in the vector equation above, r21 would be a constant vector, pointing straight down. In this case, Newton's law of gravitation simplifies to the law

F = mg

The following equations ignore air resistance and the rotation of the Earth, but are usually accurate enough for heights not exceeding the tallest man-made structures. They fail to describe the Coriolis effect, for example. They are extremely accurate on the surface of the Moon, where the atmosphere is almost nil. Astronaut David Scott demonstrated this with a hammer and a feather. Galileo was the first to demonstrate and then formulate these equations. He used a ramp to study rolling balls, effectively slowing down the acceleration enough so that he could measure the time as the ball rolled down a known distance down the ramp. He used a water clock to measure the time; by using an "extremely accurate balance" to measure the amount of water, he could measure the time elapsed.2

For Earth, in metric units: g ≈ 9.81 m/s²; in imperial units: g ≈ 32.2 ft/s². For other planets, multiply by the ratio of the gravitational accelerations shown above.
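The constant-acceleration relations tabulated next can also be written as small R helpers. The sketch below is illustrative only and is not part of the original page; the function names are ours, and it assumes a body dropped from rest with air resistance ignored.

g <- 9.81                                                # m/s^2; use 32.2 for ft/s^2
fall_distance <- function(t, g = 9.81) 0.5 * g * t^2     # d = (1/2) g t^2
fall_time     <- function(d, g = 9.81) sqrt(2 * d / g)   # t = sqrt(2 d / g)
impact_speed  <- function(d, g = 9.81) sqrt(2 * g * d)   # v = sqrt(2 g d)

fall_distance(3)      # about 44.1 m fallen after 3 s
fall_time(100)        # about 4.5 s to fall 100 m
impact_speed(100)     # about 44.3 m/s at the end of a 100 m fall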
- Distance d traveled by a falling object under the influence of gravity for a time t: d = g t² / 2
- Elapsed time t of a falling object under the influence of gravity for a distance d: t = √(2d / g)
- Average velocity va of a falling object under constant acceleration g for any given time: va = g t / 2
- Average velocity va of a falling object under constant acceleration g traveling a distance d: va = √(g d / 2)
- Instantaneous velocity vi of a falling object under constant acceleration g for any given time: vi = g t
- Instantaneous velocity vi of a falling object under constant acceleration g, traveling a distance d: vi = √(2 g d)

Note: "Average" means average in time.
Note: Distance traveled, d, and time taken, t, must be in the same system of units as acceleration g. See dimensional analysis. To convert metres per second to kilometres per hour (km/h) multiply by 3.6, and to convert feet per second to miles per hour (mph) multiply by 0.68 (or, precisely, 15/22).

For any mass distribution there is a scalar field, the gravitational potential (a scalar potential), which is the gravitational potential energy per unit mass of a point mass, as a function of position. It is

φ(r) = −G ∫ dm(r′) / |r − r′|

where the integral is taken over all mass. Minus its gradient is the gravity field itself, and minus its Laplacian is the divergence of the gravity field, which is everywhere equal to −4πG times the local density. Thus when outside masses the potential satisfies Laplace's equation (i.e., the potential is a harmonic function), and when inside masses the potential satisfies Poisson's equation with, as right-hand side, 4πG times the local density.

Acceleration relative to the rotating Earth

The acceleration measured on the rotating surface of the Earth is not quite the same as the acceleration that is measured for a free-falling body, because of the centrifugal force. In other words, the apparent acceleration in the rotating frame of reference is the total gravity vector minus a small vector toward the north-south axis of the Earth, corresponding to staying stationary in that frame of reference.

History of gravitational theory

The first mathematical formulation of gravity was published in 1687 by Sir Isaac Newton. His law of universal gravitation was the standard theory of gravity until work by Albert Einstein and others on general relativity. Since calculations in general relativity are complicated, and Newtonian gravity is sufficiently accurate for calculations involving weak gravitational fields (e.g., launching rockets, projectiles, pendulums, etc.), Newton's formulae are generally preferred. Although the law of universal gravitation was first clearly and rigorously formulated by Isaac Newton, the phenomenon was observed and recorded by others. Even Ptolemy had a vague conception of a force tending toward the center of the Earth which not only kept bodies upon its surface, but in some way upheld the order of the universe. Johannes Kepler inferred that the planets move in their orbits under some influence or force exerted by the Sun; but the laws of motion were not then sufficiently developed, nor were Kepler's ideas of force sufficiently clear, to make a precise statement of the nature of the force. Christiaan Huygens and Robert Hooke, contemporaries of Newton, saw that Kepler's third law implied a force which varied inversely as the square of the distance. Newton's conceptual advance was to understand that the same force that causes a thrown rock to fall back to the Earth keeps the planets in orbit around the Sun, and the Moon in orbit around the Earth.
Newton was not alone in making significant contributions to the understanding of gravity. Before Newton, Galileo Galilei corrected a common misconception, started by Aristotle, that objects with different mass fall at different rates. To Aristotle, it simply made sense that objects of different mass would fall at different rates, and that was enough for him. Galileo, however, actually tried dropping objects of different mass at the same time. Aside from differences due to friction from the air, Galileo observed that all masses accelerate the same. Using Newton's equation, F = ma, it is plain to us why:

m1 a1 = G m1 m2 / r²

The above equation says that mass m1 will accelerate at acceleration a1 under the force of gravity, but divide both sides of the equation by m1 and:

a1 = G m2 / r²

Nowhere in the above equation does the mass of the falling body appear. When dealing with objects near the surface of a planet, the change in r divided by the initial r is so small that the acceleration due to gravity appears to be perfectly constant. The acceleration due to gravity on Earth is usually called g, and its value is about 9.8 m/s² (or 32 ft/s²). Galileo didn't have Newton's equations, though, so his insight into gravity's proportionality to mass was invaluable, and possibly even affected Newton's formulation on how gravity works. However, across a large body, variations in r can create a significant tidal force.

- Note 1: Proposition 75, Theorem 35: p. 956. I. Bernard Cohen and Anne Whitman, translators: Isaac Newton, The Principia: Mathematical Principles of Natural Philosophy. Preceded by A Guide to Newton's Principia, by I. Bernard Cohen. University of California Press, 1999. ISBN 0-520-08816-6, ISBN 0-520-08817-4
- Note 2: See the works of Stillman Drake for a comprehensive study of Galileo and his times, the Scientific Revolution.
- Max Born (1924), Einstein's Theory of Relativity (The 1962 Dover edition, page 348 lists a table documenting the observed and calculated values for the precession of the perihelion of Mercury, Venus, and Earth.)

- Gravity wave
- Gravitational binding energy
- Gravity Research Foundation
- Standard gravitational parameter
- n-body problem
- Pioneer anomaly
- Table of velocities required for a spacecraft to escape a planet's gravitational field
- Application to gravity of the divergence theorem
- Gravity field
- Scalar Gravity

- Halliday, David; Robert Resnick; Kenneth S. Krane (2001). Physics v. 1, New York: John Wiley & Sons. ISBN 0471320579.
- Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.), Brooks/Cole. ISBN 0534408427.
- Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.), W. H. Freeman. ISBN 0716708094.
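As a quick numerical check of the relation a1 = G m2 / r² derived above, the following R lines compute the acceleration at the Earth's surface. This is an illustrative sketch, not part of the source page; the mass and radius figures are standard reference values.

G   <- 6.674e-11    # gravitational constant, N m^2 kg^-2
M_e <- 5.972e24     # mass of the Earth, kg
R_e <- 6.371e6      # mean radius of the Earth, m

G * M_e / R_e^2     # about 9.82 m/s^2, close to the quoted 9.8 m/s^2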
http://www.exampleproblems.com/wiki/index.php?title=Gravity&oldid=31142
13
70
The bold plan for an Apollo mission based on LOR held the promise of landing on the moon by 1969, but it presented many daunting technical difficulties. Before NASA could dare attempt any type of lunar landing, it had to learn a great deal more about the destination. Although no one believed that the moon was made of green cheese, some lunar theories of the early 1960s seemed equally fantastic. One theory suggested that the moon was covered by a layer of dust perhaps 50 feet thick. If this were true, no spacecraft would be able to safely land on or take off from the lunar surface. Another theory claimed that the moon's dust was not nearly so thick but that it possessed an electrostatic charge that would cause it to stick to the windows of the lunar landing vehicle, thus making it impossible for the astronauts to see out as they landed. Cornell University astronomer Thomas Gold warned that the moon might even be composed of a spongy material that would crumble upon impact.1 At Langley, Dr. Leonard Roberts, a British mathematician in Clint Brown's Theoretical Mechanics Division, pondered the riddle of the lunar surface and drew an equally pessimistic conclusion. Roberts speculated that because the moon was millions of years old and had been constantly bombarded without the protection of an atmosphere, its surface was most likely so soft that any vehicle attempting to land on it would sink and be buried as if it had landed in quicksand. After the president's commitment to a manned lunar landing in 1961, Roberts began an extensive three year research program to show just what would happen if an exhaust rocket blasted into a surface of very thick powdered sand. His analysis indicated that an incoming rocket would throw up a mountain of sand, thus creating a big rim all the way around the outside of the landed spacecraft. Once the spacecraft settled, this huge bordering volume of sand would collapse, completely engulf the spacecraft, and kill its occupants.2 Telescopes revealed little about the nature of the lunar surface. Not even the latest, most powerful optical instruments could see through the earth's atmosphere well enough to resolve the moon's detailed surface features. Even an object the size of a football stadium would not show up on a telescopic photograph, and enlarging the photograph would only increase the blur. To separate fact from fiction and obtain the necessary information about the craters, crevices, and jagged rocks on the lunar surface, NASA would have to send out automated probes to take a closer look. The first of these probes took off for the moon in January 1962 as part of a NASA project known as Ranger. A small 800-pound spacecraft was to make a "hard landing," crashing to its destruction on the moon. Before Ranger crashed, however, its on-board multiple television camera payload was to send back close views of the surface -views far more detailed than any captured by a telescope. Sadly, the first six Ranger probes were not successful. Malfunctions of the booster or failures of the launch-vehicle guidance system plagued the first three attempts; malfunctions of the spacecraft itself hampered the fourth and fifth probes; and the primary experiment could not take place during the sixth Ranger attempt because the television equipment would not transmit. 
Although these incomplete missions did provide some extremely valuable high-resolution photographs, as well as some significant data on the performance of Ranger's systems, in total the highly publicized record of failures embarrassed NASA and demoralized the Ranger project managers at JPL. Fortunately, the last three Ranger flights in 1964 and 1965 were successful. These flights showed that a lunar landing was possible, but the site would have to be carefully chosen to avoid craters and big boulders.3 JPL managed a follow-on project to Ranger known as Surveyor. Despite failures and serious schedule delays, between May 1966 and January 1968, six Surveyor spacecraft made successful soft landings at predetermined points on the lunar surface. From the touchdown dynamics, surface-bearing strength measurements, and eye-level television scanning of the local surface conditions, NASA learned that the moon could easily support the impact and the weight of a small lander. Originally, NASA also planned for (and Congress had authorized) a second type of Surveyor spacecraft, which instead of making a soft landing on the moon, was to be equipped for high-resolution stereoscopic film photography of the moon's surface from lunar orbit and for instrumented measurements of the lunar environment. However, this second Surveyor or "Surveyor Orbiter" did not materialize. The staff and facilities of JPL were already overburdened with the responsibilities for Ranger and "Surveyor Lander"; they simply could not take on another major spaceflight project.4 In 1963, NASA scrapped its plans for a Surveyor Orbiter and turned its attention to a lunar orbiter project that would not use the Surveyor spacecraft system or the Surveyor launch vehicle, Centaur. Lunar Orbiter would have a new spacecraft and use the Atlas-Agena D to launch it into space. Unlike the preceding unmanned lunar probes, which were originally designed for general scientific study, Lunar Orbiter was conceived after a manned lunar landing became a national commitment. The project goal from the start was to support the Apollo mission. Specifically, Lunar Orbiter was designed to provide information on the lunar surface conditions most relevant to a spacecraft landing. This meant, among other things, that its camera had to be sensitive enough to capture subtle slopes and minor protuberances and depressions over a broad area of the moon's front side. As an early working group on the requirements of the lunar photographic mission had determined, Lunar Orbiter had to allow the identification of 45-meter objects over the entire facing surface of the moon, 4.5-meter objects in the "Apollo zone of interest," and 1.2-meter objects in all the proposed landing areas.5 Five Lunar Orbiter missions took place. The first launch occurred in August 1966 within two months of the initial target date. The next four Lunar Orbiters were launched on schedule; the final mission was completed in August 1967, barely a year after the first launch. NASA had planned five flights because mission reliability studies had indicated that five might be necessary to achieve even one success. However, all five Lunar Orbiters were successful, and the prime objective of the project, which was to photograph in detail all the proposed landing sites, was met in three missions. This meant that the last two flights could be devoted to photographic exploration of the rest of the lunar surface for more general scientific purposes. 
The final cost of the program was not slight: it totaled $163 million, which was more than twice the original estimate of $77 million. That increase, however, compares favorably with the escalation in the price of similar projects, such as Surveyor, which had an estimated cost of $125 million and a final cost of $469 million. In retrospect, Lunar Orbiter must be, and rightfully has been, regarded as an unqualified success. For the people and institutions responsible, the project proved to be an overwhelmingly positive learning experience on which greater capabilities and ambitions were built. For both the prime contractor, the Boeing Company, a world leader in the building of airplanes, and the project manager, Langley Research Center, a premier aeronautics laboratory, involvement in Lunar Orbiter was a turning point. The successful execution of a risky enterprise became proof positive that they were more than capable of moving into the new world of deep space. For many observers as well as for the people who worked on the project, Lunar Orbiter quickly became a model of how to handle a program of space exploration; its successful progress demonstrated how a clear and discrete objective, strong leadership, and positive person-to-person communication skills can keep a project on track from start to finish.6

Many people inside the American space science community believed that neither Boeing nor Langley was capable of managing a project like Lunar Orbiter or of supporting the integration of first-rate scientific experiments and space missions. After NASA headquarters announced in the summer of 1963 that Langley would manage Lunar Orbiter, more than one space scientist was upset. Dr. Harold C. Urey, a prominent scientist from the University of California at San Diego, wrote a letter to Administrator James Webb asking him, "How in the world could the Langley Research Center, which is nothing more than a bunch of plumbers, manage this scientific program to the moon?"7

Urey's questioning of Langley's competency was part of an unfolding debate over the proper place of general scientific objectives within NASA's spaceflight programs. The U.S. astrophysics community and Dr. Homer E. Newell's Office of Space Sciences at NASA headquarters wanted "quality science" experiments incorporated into every space mission, but this caused problems. Once the commitment had been made to a lunar landing mission, NASA had to decide which was more important: gathering broad scientific information or obtaining data required for accomplishing the lunar landing mission. Ideally, both goals could be incorporated in a project without one compromising the other, but when that seemed impossible, one of the two had to be given priority. The requirements of the manned mission usually won out. For Ranger and Surveyor, projects involving dozens of outside scientists and the large and sophisticated Space Science Division at JPL, that meant that some of the experiments would turn out to be less extensive than the space scientists wanted.8 For Lunar Orbiter, a project involving only a few astrogeologists at the U.S. Geological Survey and a very few space scientists at Langley, it meant, ironically, that the primary goal of serving Apollo would be achieved so quickly that general scientific objectives could be included in its last two missions. Langley management had entered the fray between science and project engineering during the planning for Project Ranger.
At the first Senior Council meeting of the Office of Space Sciences (soon to be renamed the Office of Space Sciences and Applications [OSSA]) held at NASA headquarters on 7 June 1962, Langley Associate Director Charles Donlan had questioned the priority of a scientific agenda for the agency's proposed unmanned lunar probes because a national commitment had since been made to a manned lunar landing. The initial requirements for the probes had been set long before Kennedy's announcement, and therefore, Donlan felt NASA needed to rethink them. Based on his experience at Langley and with Gilruth's STG, Donlan knew that the space science people could be "rather unbending" about adjusting experiments to obtain "scientific data which would assist the manned program." What needed to be done now, he felt, was to turn the attention of the scientists to exploration that would have more direct applications to the Apollo lunar landing program.9 Donlan was distressed specifically by the Office of Space Sciences' recent rejection of a lunar surface experiment proposed by a penetrometer feasibility study group at Langley. This small group, consisting of half a dozen people from the Dynamic Loads and Instrument Research divisions, had devised a spherical projectile, dubbed "Moonball," that was equipped with accelerometers capable of transmitting acceleration versus time signatures during impact with the lunar surface. With these data, researchers could determine the hardness, texture, and load-bearing strength of possible lunar landing sites. The group recommended that Moonball be flown as part of the follow-on to Ranger.10 A successful landing of an intact payload required that the landing loads not exceed the structural capabilities of the vehicle and that the vehicle make its landing in some tenable position so it could take off again. Both of these requirements demanded a knowledge of basic physical properties of the surface material, particularly data demonstrating its hardness or resistance to penetration. In the early 1960s, these properties were still unknown, and the Langley penetrometer feasibility study group wanted to identify them. Without the information, any design of Apollo's lunar lander would have to be based on assumed surface characteristics.11 In the opinion of the Langley penetrometer group, its lunar surface hardness experiment would be of "general scientific interest," but it would, more importantly, provide "timely engineering information important to the design of the Apollo manned lunar landing vehicle." 12 Experts at JPL, however, questioned whether surface hardness was an important criterion for any experiment and argued that "the determination of the terrain was more important, particularly for a horizontal landing.''13 In the end, the Office of Space Sciences rejected the Langley idea in favor of making further seismometer experiments, which might tell scientists something basic about the origins of the moon and its astrogeological history.* For engineer Donlan, representing a research organization like Langley dominated by engineers and by their quest for practical solutions to applied problems, this rejection seemed a mistake. The issue came down to what NASA needed to know now. That might have been science before Kennedy's commitment, but it definitely was not science after it. In Donlan's view, Langley's rejected approach to lunar impact studies had been the correct one. 
The consensus at the first Senior Council meeting, however, was that "pure science experiments will be able to provide the engineering answers for Project Apollo."14 Over the next few years, the engineering requirements for Apollo would win out almost totally. As historian R. Cargill Hall explains in his story of Project Ranger, a "melding" of interests occurred between the Office of Space Sciences and the Office of Manned Space Flight, followed by a virtually complete subordination of the scientific priorities originally built into the unmanned projects. Those priorities, as important as they were, "quite simply did not rate" with Apollo in importance.15

The sensitive camera eyes of the Lunar Orbiter spacecraft carried out a vital reconnaissance mission in support of the Apollo program. Although NASA designed the project to provide scientists with quantitative information about the moon's gravitational field and the dangers of micrometeorites and solar radiation in the vicinity of the lunar environment, the primary objective of Lunar Orbiter was to fly over and photograph the best landing sites for the Apollo spacecraft. NASA suspected that it might have enough information about the lunar terrain to land astronauts safely without the detailed photographic mosaics of the lunar surface compiled from the orbiter flights, but certainly landing sites could be pinpointed more accurately with the help of high-resolution photographic maps. Lunar Orbiter would even help to train the astronauts for visual recognition of the lunar topography and for last-second maneuvering above it before touchdown.

Langley had never managed a deep-space flight project before, and Director Floyd Thompson was not sure that he wanted to take on the burden of responsibility when Oran Nicks, the young director of lunar and planetary programs in Homer Newell's Office of Space Sciences, came to him with the idea early in 1963. Along with Newell's deputy, Edgar M. Cortright, Nicks was the driving force behind the orbiter mission at NASA headquarters. Cortright, however, first favored giving the project to JPL and using Surveyor Orbiter and the Hughes Aircraft Company, which was the prime contractor for Surveyor Lander. Nicks disagreed with this plan and worked to persuade Cortright and others that he was right. In Nicks' judgment, JPL had more than it could handle with Ranger and Surveyor Lander and should not have anything else "put on its plate," certainly not anything as large as the Lunar Orbiter project. NASA Langley, on the other hand, besides having a reputation for being able to handle a variety of aerospace tasks, had just lost the STG to Houston and so, Nicks thought, would be eager to take on the new challenge of a lunar orbiter project. Nicks worked to persuade Cortright that distributing responsibilities and operational programs among the NASA field centers would be "a prudent management decision." NASA needed balance among its research centers. To ensure NASA's future in space, headquarters must assign to all its centers challenging endeavors that would stimulate the development of "new and varied capabilities."16 Cortright was persuaded and gave Nicks permission to approach Floyd Thompson.** This Nicks did on 2 January 1963, during a Senior Council meeting of the Office of Space Sciences at Cape Canaveral.
Nicks asked Thompson whether Langley "would be willing to study the feasibility of undertaking a lunar photography experiment," and Thompson answered cautiously that he would ask his staff to consider the idea.17 The historical record does not tell us much about Thompson's personal thoughts regarding taking on Lunar Orbiter. But one can infer from the evidence that Thompson had mixed feelings, not unlike those he experienced about supporting the STG. The Langley director would not only give Nicks a less than straightforward answer to his question but also would think about the offer long and hard before committing the center. Thompson invited several trusted staff members to share their feelings about assuming responsibility for the project. For instance, he went to Clint Brown, by then one of his three assistant directors for research, and asked him what he thought Langley should do. Brown told him emphatically that he did not think Langley should take on Lunar Orbiter. An automated deep-space project would be difficult to manage successfully. The Lunar Orbiter would be completely different from the Ranger and Surveyor spacecraft and being a new design, would no doubt encounter many unforeseen problems. Even if it were done to everyone's satisfaction -and the proposed schedule for the first launches sounded extremely tight -Langley would probably handicap its functional research divisions to give the project all the support that it would need. Projects devoured resources. Langley staff had learned this firsthand from its experience with the STG. Most of the work for Lunar Orbiter would rest in the management of contracts at industrial plants and in the direction of launch and mission control operations at Cape Canaveral and Pasadena. Brown, for one, did not want to be involved.18 But Thompson decided, in what Brown now calls his director's "greater wisdom," that the center should accept the job of managing the project. Some researchers in Brown's own division had been proposing a Langley-directed photographic mission to the moon for some time, and Thompson, too, was excited by the prospect.19 Furthermore, the revamped Lunar Orbiter was not going to be a space mission seeking general scientific knowledge about the moon. It was going to be a mission directly in support of Apollo, and this meant that engineering requirements would be primary. Langley staff preferred that practical orientation; their past work often resembled projects on a smaller scale. Whether the "greater wisdom" stemmed from Thompson's own powers of judgment is still not certain. Some informed Langley veterans, notably Brown, feel that Thompson must have also received some strongly stated directive from NASA headquarters that said Langley had no choice but to take on the project. Whatever was the case in the beginning, Langley management soon welcomed Lunar Orbiter. It was a chance to prove that they could manage a major undertaking. Floyd Thompson personally oversaw many aspects of the project and for more than four years did whatever he could to make sure that Langley's functional divisions supported it fully. Through most of this period, he would meet every Wednesday morning with the top people in the project office to hear about the progress of their work and offer his own ideas. As one staff member recalls, "I enjoyed these meetings thoroughly. 
[Thompson was] the most outstanding guy I've ever met, a tremendously smart man who knew what to do and when to do it."20 Throughout the early months of 1963, Langley worked with its counterparts at NASA headquarters to establish a solid and cooperative working relationship for Lunar Orbiter. The center began to draw up preliminary specifications for a lightweight orbiter spacecraft and for the vehicle that would launch it (already thought to be the Atlas-Agena D). While Langley personnel were busy with that, TRW's Space Technologies Laboratories (STL) of Redondo Beach, California, was conducting a parallel study of a lunar orbiter photographic spacecraft under contract to NASA headquarters. Representatives from STL reported on this work at meetings at Langley on 25 February and 5 March 1963. Langley researchers reviewed the contractor's assessment and found that STL's estimates of the chances for mission success closely matched their own. If five missions were attempted, the probability of achieving one success was 93 percent. The probability of achieving two was 81 percent. Both studies confirmed that a lunar orbiter system using existing hardware would be able to photograph a landed Surveyor and would thus be able to verify the conditions of that possible Apollo landing site. The independent findings concluded that the Lunar Orbiter project could be done successfully and should be done quickly because its contribution to the Apollo program would be great. 21 With the exception of its involvement in the X-series research airplane programs at Muroc, Langley had not managed a major project during the period of the NACA. As a NASA center, Langley would have to learn to manage projects that involved contractors, subcontractors, other NASA facilities, and headquarters -a tall order for an organization used to doing all its work in-house with little outside interference. Only three major projects were assigned to Langley in the early 1960s: Scout, in 1960; Fire, in 1961; and Lunar Orbiter, in 1963. Project Mercury and Little Joe, although heavily supported by Langley, had been managed by the independent STG, and Project Echo, although managed by Langley for a while, eventually was given to Goddard to oversee. To prepare for Lunar Orbiter in early 1963, Langley management reviewed what the center had done to initiate the already operating Scout and Fire projects. It also tried to learn from JPL about inaugurating paperwork for, and subsequent management of, Projects Ranger and Surveyor. After these reviews, Langley felt ready to prepare the formal documents required by NASA for the start-up of the project.22 As Langley prepared for Lunar Orbiter, NASA's policies and procedures for project management were changing. In October 1962, spurred on by its new top man, James Webb, the agency had begun to implement a series of structural changes in its overall organization. These were designed to improve relations between headquarters and the field centers, an area of fundamental concern. Instead of managing the field centers through the Office of Programs, as had been the case, NASA was moving them under the command of the headquarters program directors. For Langley, this meant direct lines of communication with the OART and the OSSA. By the end of 1963, a new organizational framework was in place that allowed for more effective management of NASA projects. In early March 1963, as part of Webb's reform, NASA headquarters issued an updated version of General Management Instruction 4-1-1. 
This revised document established formal guidelines for the planning and management of a project. Every project was supposed to pass through four preliminary stages: (1) Project Initiation, (2) Project Approval, (3) Project Implementation, and (4) Organization for Project Management.23 Each step required the submission of a formal document for headquarters' approval. From the beginning, everyone involved with Lunar Orbiter realized that it had to be a fast-track project. In order to help Apollo, everything about it had to be initiated quickly and without too much concern about the letter of the law in the written procedures. Consequently, although no step was to be taken without first securing approval for the preceding step, Langley initiated the paperwork for all four project stages at the same time. This same no-time-to-lose attitude ruled the schedule for project development. All aspects had to be developed concurrently. Launch facilities had to be planned at the same time that the design of the spacecraft started. The photographic, micrometeoroid, and selenodetic experiments had to be prepared even before the mission operations plan was complete. Everything proceeded in parallel: the development of the spacecraft, the mission design, the operational plan and preparation of ground equipment, the creation of computer programs, as well as a testing plan. About this parallel development, Donald H. Ward, a key member of Langley's Lunar Orbiter project team, remarked, "Sometimes this causes undoing some mistakes, but it gets to the end product a lot faster than a serial operation where you design the spacecraft and then the facilities to support it."24 Using the all-at-once approach, Langley put Lunar Orbiter in orbit around the moon only 27 months after signing with the contractor. On 11 September 1963, Director Floyd Thompson formally established the Lunar Orbiter Project Office (LOPO) at Langley, a lean organization of just a few people who had been at work on Lunar Orbiter since May. Thompson named Clifford H. Nelson as the project manager. An NACA veteran and head of the Measurements Research Branch of IRD, Nelson was an extremely bright engineer. He had served as project engineer on several flight research programs, and Thompson believed that he showed great promise as a technical manager. He worked well with others, and Thompson knew that skill in interpersonal relations would be essential in managing Lunar Orbiter because so much of the work would entail interacting with contractors. To help Nelson, Thompson originally reassigned eight people to LOPO: engineers Israel Taback, Robert Girouard, William I. Watson, Gerald Brewer, John B. Graham, Edmund A. Brummer, financial accountant Robert Fairburn, and secretary Anna Plott. This group was far smaller than the staff of 100 originally estimated for this office. The most important technical minds brought in to participate came from either IRD or from the Applied Materials and Physics Division, which was the old PARD. Taback was the experienced and sage head of the Navigation and Guidance Branch of IRD; Brummer, an expert in telemetry, also came from IRD; and two new Langley men, Graham and Watson, were brought in to look over the integration of mission operations and spacecraft assembly for the project. A little later IRD's talented Bill Boyer also joined the group as flight operations manager, as did the outstanding mission analyst Norman L. Crabill, who had just finished working on Project Echo. 
All four of the NACA veterans were serving as branch heads at the time of their assignment to LOPO. This is significant given that individuals at that level of authority and experience are often too entrenched and concerned about further career development to take a temporary assignment on a high-risk project. The LOPO staff set up an office in a room in the large 16-Foot Transonic Tunnel building in the Langley West Area. When writing the Request for Proposals, Nelson, Taback, and the others involved could only afford the time necessary to prepare a brief document, merely a few pages long, that sketched out some of the detailed requirements. As Israel Taback remembers, even before the project office was established, he and a few fellow members of what would become LOPO had already talked extensively with the potential contractors. Taback explains, "Our idea was that they would be coming back to us [with details]. So it wasn't like we were going out cold, with a brand new program."25 Langley did need to provide one critical detail in the request: the means for stabilizing the spacecraft in lunar orbit. Taback recalls that an "enormous difference" arose between Langley and NASA headquarters over this issue. The argument was about whether the Request for Proposals should require that the contractors produce a rotating satellite known as a "spinner." The staff of the OSSA preferred a spinner based on STL's previous study of Lunar Orbiter requirements. However, Langley's Lunar Orbiter staff doubted the wisdom of specifying the means of stabilization in the Request for Proposals. They wished to keep the door open to other, perhaps better, ways of stabilizing the vehicle for photography. The goal of the project, after all, was to take the best possible high-resolution pictures of the moon's surface. To do that, NASA needed to create the best possible orbital platform for the spacecraft's sophisticated camera equipment, whatever that turned out to be. From their preliminary analysis and conversations about mission requirements, Taback, Nelson, and others in LOPO felt that taking these pictures from a three-axis (yaw, pitch, and roll), attitude-stabilized device would be easier than taking them from a spinner. A spinner would cause distortions of the image because of the rotation of the vehicle. Langley's John F. Newcomb of the Aero Space Mechanics Division (and eventual member of LOPO) had calculated that this distortion would destroy the resolution and thus seriously compromise the overall quality of the pictures. This was a compromise that the people at Langley quickly decided they could not live with. Thus, for sound technical reasons, Langley insisted that the design of the orbiter be kept an open matter and not be specified in the Request for Proposals. Even if Langley's engineers were wrong and a properly designed spinner would be most effective, the sensible approach was to entertain all the ideas the aerospace industry could come up with before choosing a design.26 For several weeks in the summer of 1963, headquarters tried to resist the Langley position. Preliminary studies by both STL for the OSSA and by Bell Communications (BellComm) for the Office of Manned Space Flight indicated that a rotating spacecraft using a spin-scan film camera similar to the one developed by the Rand Corporation in 1958 for an air force satellite reconnaissance system ( "spy in the sky" ) would work well for Lunar Orbiter. 
Such a spinner would be less complicated and less costly than the three-axis-stabilized spacecraft preferred by Langley.27 But Langley staff would not cave in on an issue so fundamental to the project's success. Eventually Newell, Cortright, Nicks, and Scherer in the OSSA offered a compromise that Langley could accept: the Request for Proposals could state that "if bidders could offer approaches which differed from the established specifications but which would result in substantial gains in the probability of mission success, reliability, schedule, and economy," then NASA most certainly invited them to submit those alternatives. The request would also emphasize that NASA wanted a lunar orbiter that was built from as much off-the shelf hardware as possible. The development of many new technological systems would require time that Langley did not have.28 Langley and headquarters had other differences of opinion about the request. For example, a serious problem arose over the nature of the contract. Langley's chief procurement officer, Sherwood Butler, took the conservative position that a traditional cost-plus-a-fixed-fee contract would be best in a project in which several unknown development problems were bound to arise. With this kind of contract, NASA would pay the contractor for all actual costs plus a sum of money fixed by the contract negotiations as a reasonable profit. NASA headquarters, on the other hand, felt that some attractive financial incentives should be built into the contract. Although unusual up to this point in NASA history, headquarters believed that an incentives contract would be best for Lunar Orbiter. Such a contract would assure that the contractor would do everything possible to solve all the problems encountered and make sure that the project worked. The incentives could be written up in such a way that if, for instance, the contractor lost money on any one Lunar Orbiter mission, the loss could be recouped with a handsome profit on the other missions. The efficacy of a cost-plus-incentives contract rested in the solid premise that nothing motivated a contractor more than making money. NASA headquarters apparently understood this better than Langley's procurement officer who wanted to keep tight fiscal control over the project and did not want to do the hairsplitting that often came with evaluating whether the incentive clauses had been met.29 On the matter of incentives, Langley's LOPO engineers sided against their own man and with NASA headquarters. They, too, thought that incentives were the best way to do business with a contractor -as well as the best way to illustrate the urgency that NASA attached to Lunar Orbiter.30 The only thing that bothered them was the vagueness of the incentives being discussed. When Director Floyd Thompson understood that his engineers really wanted to take the side of headquarters on this issue, he quickly concurred. He insisted only on three things: the incentives had to be based on clear stipulations tied to cost, delivery, and performance, with penalties for deadline overruns; the contract had to be fully negotiated and signed before Langley started working with any contractor (in other words, work could not start under a letter of intent); and all bidding had to be competitive. 
Thompson worried that the OSSA might be biased in favor of STL as the prime contractor because of STL's prior study of the requirements of lunar orbiter systems.31 In mid-August 1963, with these problems worked out with headquarters, Langley finalized the Request for Proposals and associated Statement of Work, which outlined specifications, and delivered both to Captain Lee R. Scherer, Lunar Orbiter's program manager at NASA headquarters, for presentation to Ed Cortright and his deputy Oran Nicks. The documents stated explicitly that the main mission of Lunar Orbiter was "the acquisition of photographic data of high and medium resolution for selection of suitable Apollo and Surveyor landing sites." The request set out detailed criteria for such things as identifying "cones" (planar features at right angles to a flat surface), "slopes" (circular areas inclined with respect to the plane perpendicular to local gravity), and other subtle aspects of the lunar surface. Obtaining information about the size and shape of the moon and about the lunar gravitational field was deemed less important. By omitting a detailed description of the secondary objectives in the request, Langley made clear that "under no circumstances" could anything "be allowed to dilute the major photo reconnaissance mission."32 The urgency of the national commitment to a manned lunar landing mission was the force driving Lunar Orbiter. Langley wanted no confusion on that point. Cliff Nelson and LOPO moved quickly in September 1963 to create a Source Evaluation Board that would possess the technical expertise and good judgment to help NASA choose wisely from among the industrial firms bidding for Lunar Orbiter. A large board of reviewers (comprising more than 80 evaluators and consultants from NASA centers and other aerospace organizations) was divided into groups to evaluate the technical feasibility, cost, contract management concepts, business operations, and other critical aspects of the proposals. One group, the so-called Scientists' Panel, judged the suitability of the proposed spacecraft for providing valuable information to the scientific community after the photographic mission had been completed. Langley's two representatives on the Scientists' Panel were Clint Brown and Dr. Samuel Katzoff, an extremely insightful engineering analyst, 27-year Langley veteran, and assistant chief of the Applied Materials and Physics Division. Although the opinions of all the knowledgeable outsiders were taken seriously, Langley intended to make the decision.33 Chairing the Source Evaluation Board was Eugene Draley, one of Floyd Thompson's assistant directors. When the board finished interviewing all the bidders, hearing their oral presentations, and tallying the results of its scoring of the proposals (a possible 70 points for technical merit and 30 points for business management), it was to present a formal recommendation to Thompson. He in turn would pass on the findings with comments to Homer Newell's office in Washington. Five major aerospace firms submitted proposals for the Lunar Orbiter contract. Three were California firms: STL in Redondo Beach, Lockheed Missiles and Space Company of Sunnyvale, and Hughes Aircraft Company of Los Angeles. The Martin Company of Baltimore and the Boeing Company of Seattle were the other two bidders.34 Three of the five proposals were excellent.
Hughes had been developing an ingenious spin-stabilization system for geosynchronous communication satellites, which helped the company to submit an impressive proposal for a rotating vehicle. With Hughes's record in spacecraft design and fabrication, the Source Evaluation Board gave Hughes serious consideration. STL also submitted a fine proposal for a spin-stabilized rotator. This came as no surprise, of course, given STL's prior work for Surveyor as well as its prior contractor studies on lunar orbiter systems for NASA headquarters. The third outstanding proposal -entitled "ACLOPS" (Agena-Class Lunar Orbiter Project) -was Boeing's. The well-known airplane manufacturer had not been among the companies originally invited to bid on Lunar Orbiter and was not recognized as the most logical of contenders. However, Boeing recently had successfully completed the Bomarc missile program and was anxious to become involved with the civilian space program, especially now that the DOD was canceling Dyna-Soar, an air force project for the development of an experimental X-20 aerospace plane. This cancellation released several highly qualified U.S. Air Force personnel, who were still working at Boeing, to support a new Boeing undertaking in space. Company representatives had visited Langley to discuss Lunar Orbiter, and Langley engineers had been so excited by what they had heard that they had pestered Thompson to persuade Seamans to extend an invitation to Boeing to join the bidding. The proposals from Martin, a newcomer in the business of automated space probes, and Lockheed, a company with years of experience handling the Agena space vehicle for the air force, were also quite satisfactory. In the opinion of the Source Evaluation Board, however, the proposals from Martin and Lockheed were not as strong as those from Boeing and Hughes. The LOPO staff and the Langley representatives decided early in the evaluation that they wanted Boeing to be selected as the contractor; on behalf of the technical review team, Israel Taback had made this preference known both in private conversations with, and formal presentations to, the Source Evaluation Board. Boeing was Langley's choice because it proposed a three axis stabilized spacecraft rather than a spinner. For attitude reference in orbit, the spacecraft would use an optical sensor similar to the one that was being planned for use on the Mariner C spacecraft, which fixed on the star Canopus. An attitude stabilized orbiter eliminated the need for a focal-length spin camera. This type of photographic system, first conceived by Merton E. Davies of the Rand Corporation in 1958, could compensate for the distortions caused by a rotating spacecraft but would require extensive development. In the Boeing proposal, Lunar Orbiter would carry a photo subsystem designed by Eastman Kodak and used on DOD spy satellites.35 This subsystem worked automatically and with the precision of a Swiss watch. It employed two lenses that took pictures simultaneously on a roll of 70-millimeter aerial film. If one lens failed, the other still worked. One lens had a focal length of 610 millimeters (24 inches) and could take pictures from an altitude of 46 kilometers (28.5 miles) with a high resolution for limited-area coverage of approximately 1 meter. The other, which had a focal length of about 80 millimeters (3 inches), could take pictures with a medium resolution of approximately 8 meters for wide coverage of the lunar surface. 
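As a rough consistency check on these figures (not a calculation from the proposal itself, and assuming simple pinhole-camera scaling, in which ground resolution equals the film-plane resolution multiplied by the ratio of altitude to focal length), the two lenses turn out to demand nearly the same resolving power at the film:

\[
\frac{610\ \text{mm}}{46\ \text{km}} \times 1\ \text{m} \approx 13\ \mu\text{m},
\qquad
\frac{80\ \text{mm}}{46\ \text{km}} \times 8\ \text{m} \approx 14\ \mu\text{m}.
\]

That is, the high- and medium-resolution figures quoted above are mutually consistent with a single film-and-optics limit of roughly 13 to 14 micrometers, with the difference in ground detail coming entirely from focal length.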
The film would be developed on board the spacecraft using the proven Eastman Kodak "Bimat" method. The film would be in contact with a web containing a single-solution dry processing chemical, which eliminated the need to use wet chemicals. Developed automatically and wound onto a storage spool, the processed film could then be "read out" and transmitted by the spacecraft's communications subsystem to receiving stations of JPL's worldwide Deep Space Network, which was developed for communication with spacefaring vehicles destined for the moon and beyond.36 How Boeing had the good sense to propose an attitude-stabilized platform based on the Eastman Kodak camera, rather than to propose a rotator with a yet-to-be-developed camera, is not totally clear. Langley engineers had conversed with representatives of all the interested bidders, so Boeing's people might possibly have picked up on Langley's concerns about the quality of photographs from spinners. The other bidders, especially STL and Hughes, with their expertise in spin-stabilized spacecraft, might also have picked up on those concerns but were too confident in the type of rotationally stabilized system they had been working on to change course in midstream. Furthermore, Boeing had been working closely with RCA, which for a time was also thinking about submitting a proposal for Lunar Orbiter. RCA's idea was a lightweight (200-kilogram), three-axis, attitude-stabilized, and camera-bearing payload that could be injected into lunar orbit as part of a Ranger-type probe. A lunar orbiter study group, chaired by Lee Scherer at NASA headquarters, had evaluated RCA's approach in October 1962, however, and found it lacking. It was too expensive ($20.4 million for flying only three spacecraft), and its proposed vidicon television unit could not cover the lunar surface either in the detail or the wide panoramas NASA wanted.37 Boeing knew all about this rejected RCA approach. After talking to Langley's engineers, the company shrewdly decided to stay with an attitude-stabilized orbiter but to dump the use of the inadequate vidicon television. Boeing replaced the television system with an instrument with a proven track record in planetary reconnaissance photography: the Eastman Kodak spy camera.38 On 20 December 1963, two weeks after the Source Evaluation Board made its formal recommendation to Administrator James Webb in Washington, NASA announced that it would be negotiating with Boeing as prime contractor for the Lunar Orbiter project. Along with the excellence of its proposed spacecraft design and Kodak camera, NASA singled out the strength of Boeing's commitment to the project and its corporate capabilities to complete it on schedule without relying on many subcontractors. Still, the choice was a bit ironic. Only 14 months earlier, the Scherer study group had rejected RCA's approach in favor of a study of a spin-stabilized spacecraft proposed by STL. Now Boeing had outmaneuvered its competition by proposing a spacecraft that incorporated essential features of the rejected RCA concept and almost none from STL's previously accepted one. Boeing won the contract even though it asked for considerably more money than any of the other bidders. The lowest bid, from Hughes, was $41,495,339, less than half of Boeing's $83,562,199, a figure that would quickly rise when the work started. Not surprisingly, NASA faced some congressional criticism and had to defend its choice.
The agency justified its selection by referring confidently to what Boeing alone proposed to do to ensure protection of Lunar Orbiter's photographic film from the hazards of solar radiation.39 This was a technical detail that deeply concerned LOPO. Experiments conducted by Boeing and by Dr. Trutz Foelsche, a Langley scientist in the Space Mechanics (formerly Theoretical Mechanics) Division who specialized in the study of space radiation effects, suggested that even small doses of radiation from solar flares could fog ordinary high-speed photographic film. This would be true especially in the case of an instrumented probe like Lunar Orbiter, which had thin exterior vehicular shielding. Even if the thickness of the shielding around the film was increased tenfold (from 1 g/cm² to 10 g/cm²), Foelsche judged that high-speed film would not make it through a significant solar-particle event without serious damage.40 Thus, something extraordinary had to be done to protect the high-speed film. A better solution was not to use high-speed film at all. As NASA explained successfully to its critics, the other bidders for the Lunar Orbiter contract relied on high-speed film and faster shutter speeds for their on-board photographic subsystems. Only Boeing did not. When delegates from STL, Hughes, Martin, and Lockheed were asked at a bidders' briefing in November 1963 about what would happen to their film if a solar event occurred during an orbiter mission, they all had to admit that the film would be damaged seriously. Only Boeing could claim otherwise. Even with minimal shielding, the more insensitive, low-speed film used by the Kodak camera would not be fogged by high-energy radiation, not even if the spacecraft moved through the Van Allen radiation belts.41 This, indeed, proved to be the case. During the third mission of Lunar Orbiter in February 1967, a solar flare with a high amount of optical activity did occur, but the film passed through it unspoiled.42 Negotiations with Boeing did not take long. Formal negotiations began on 17 March 1964, and ended just four days later. On 7 May Administrator Webb signed the document that made Lunar Orbiter an official NASA commitment. Hopes were high. But in the cynical months of 1964, with Ranger's setbacks still making headlines and critics still faulting NASA for failing to match Soviet achievements in space, everyone doubted whether Lunar Orbiter would be ready for its first scheduled flight to the moon in just two years. Large projects are run by only a handful of people. Four or five key individuals delegate jobs and responsibilities to others. This was certainly true for Lunar Orbiter. From start to finish, Langley's LOPO remained a small organization; its original nucleus of 9 staff members never grew any larger than 50 professionals. Langley management knew that keeping LOPO's staff small meant fewer people in need of positions when the project ended. If all the positions were built into a large project office, many careers would be out on a limb; a much safer organizational method was for a small project office to draw people from other research and technical divisions to assist the project as needed.43 In the case of Lunar Orbiter, four men ran the project: Cliff Nelson, the project manager; Israel Taback, who was in charge of all activities leading to the production and testing of the spacecraft; Bill Boyer, who was responsible for planning and integrating launch and flight operations; and James V. Martin, the assistant project manager.
Nelson had accepted the assignment with Thompson's assurance that he would be given wide latitude in choosing the men and women he wanted to work with him in the project office. As a result, virtually all of his top people were hand-picked. The one significant exception was his chief assistant, Jim Martin. In September 1964, the Langley assistant director responsible for the project office, Gene Draley, brought in Martin to help Nelson cope with some of the stickier details of Lunar Orbiter's management. A senior manager in charge of Republic Aviation's space systems requirements, Martin had a tremendous ability for anticipating business management problems and plenty of experience taking care of them. Furthermore, he was a well-organized and skillful executive who could make schedules, set due dates, and closely track the progress of the contractors and subcontractors. This "paper" management of a major project was troublesome for Cliff Nelson, a quiet, people-oriented person. Draley knew about taskmaster Martin from Republic's involvement in Project Fire and was hopeful that Martin's acerbity and business-mindedness would complement Nelson's good-heartedness and greater technical depth, especially in dealings with contractors. Because Cliff Nelson and Jim Martin were so entirely opposite in personality, they did occasionally clash, which caused a few internal problems in LOPO. On the whole, however, the alliance worked quite well, although it was forced by Langley management. Nelson generally oversaw the whole endeavor and made sure that everybody worked together as a team. For the monitoring of the day-to-day progress of the project's many operations, Nelson relied on the dynamic Martin. For example, when problems arose with the motion-compensation apparatus for the Kodak camera, Martin went to the contractor's plant to assess the situation and decided that its management was not placing enough emphasis on following a schedule. Martin acted tough, pounded on the table, and made the contractor put workable schedules together quickly. When gentler persuasion was called for or subtler interpersonal relationships were involved, Nelson was the person for the job. Martin, who was technically competent but not as technically talented as Nelson, also deferred to the project manager when a decision required particularly complex engineering analysis. Thus, the two men worked together for the overall betterment of Lunar Orbiter.44 Placing an excellent person with just the right specialization in just the right job was one of the most important elements behind the success of Lunar Orbiter, and for this eminently sensible approach to project management, Cliff Nelson and Floyd Thompson deserve the lion's share of credit. Both men cultivated a management style that emphasized direct dealings with people and often ignored formal organizational channels. Both stressed the importance of teamwork and would not tolerate any individual, however talented, willfully undermining the esprit de corps. Before filling any position in the project office, Nelson gave the selection much thought. He questioned whether the people under consideration were compatible with others already in his project organization.
He wanted to know whether candidates were goal-oriented -willing to do whatever was necessary (working overtime or traveling) to complete the project.45 Because Langley possessed so many employees who had been working at the center for many years, the track record of most people was either well known or easy to ascertain. Given the outstanding performance of Lunar Orbiter and the testimonies about an exceptionally healthy work environment in the project office, Nelson did an excellent job predicting who would make a productive member of the project team.46 Considering Langley's historic emphasis on fundamental applied aeronautical research, it might seem surprising that Langley scientists and engineers did not try to hide inside the dark return passage of a wind tunnel rather than be diverted into a spaceflight project like Lunar Orbiter. As has been discussed, some researchers at Langley (and agency-wide) objected to and resisted involvement with project work. The Surveyor project at JPL had suffered from staff members' reluctance to leave their own specialties to work on a space project. However, by the early 1960s the enthusiasm for spaceflight ran so rampant that it was not hard to staff a space project office. All the individuals who joined LOPO at Langley came enthusiastically; otherwise Cliff Nelson would not have had them. Israel Taback, who had been running the Communications and Control Branch of IRD, remembers having become distressed with the thickening of what he calls "the paper forest": the preparation of five-year plans, ten-year plans, and other lengthy documents needed to justify NASA's budget requests. The work he had been doing with airplanes and aerospace vehicles was interesting (he had just finished providing much of the flight instrumentation for the X-15 program), but not so interesting that he wanted to turn down Cliff Nelson's offer to join Lunar Orbiter. "The project was brand new and sounded much more exciting than what I had been doing," Taback remembers. It appealed to him also because of its high visibility both inside and outside the center. Everyone had to recognize the importance of a project directly related to the national goal of landing a man on the moon. 47 Norman L. Crabill, the head of LOPO's mission design team, also decided to join the project. On a Friday afternoon, he had received the word that one person from his branch of the Applied Materials and Physics Division would have to be named by the following Monday as a transfer to LOPO; as branch head, Crabill himself would have to make the choice. That weekend he asked himself, "What's your own future, Crabill? This is space. If you don't step up to this, what's your next chance. You've already decided not to go with the guys to Houston." He immediately knew who to transfer, "It was me." That was how he "got into the space business." And in his opinion, it was "the best thing" that he ever did.48 Cliff Nelson's office had the good sense to realize that monitoring the prime contractor did not entail doing Boeing's work for Boeing. Nelson approached the management of Lunar Orbiter more practically: the contractor was "to perform the work at hand while the field center retained responsibility for overseeing his progress and assuring that the job was done according to the terms of the contract." 
For Lunar Orbiter, this philosophy meant specifically that the project office would have to keep "a continuing watch on the progress of the various components, subsystems, and the whole spacecraft system during the different phases of designing, fabricating and testing them."49 Frequent meetings would take place between Nelson and his staff and their counterparts at Boeing to discuss all critical matters, but Langley would not assign all the jobs, solve all the problems, or micromanage every detail of the contractor's work. This philosophy sat well with Robert J. Helberg, head of Boeing's Lunar Orbiter team. Helberg had recently finished directing the company's work on the Bomarc missile, making him a natural choice for manager of Boeing's next space venture. The Swedish-born Helberg was absolutely straightforward, and all his people respected him immensely -as would everyone in LOPO. He and fellow Swede Cliff Nelson got along famously. Their relaxed relationship set the tone for interaction between Langley and Boeing. Ideas and concerns passed freely back and forth between the project offices. Nelson and his people "never had to fear the contractor was just telling [them] a lie to make money," and Helberg and his tightly knit, 220-member Lunar Orbiter team never had to complain about uncaring, papershuffling bureaucrats who were mainly interested in dotting all the i's and crossing all the t's and making sure that nothing illegal was done that could bother government auditors and put their necks in a wringer.50 The Langley/NASA headquarters relationship was also harmonious and effective. This was in sharp contrast to the relationship between JPL and headquarters during the Surveyor project. Initially, JPL had tried to monitor the Surveyor contractor, Hughes, with only a small staff that provided little on-site technical direction; however, because of unclear objectives, the open-ended nature of the project (such basic things as which experiment packages would be included on the Surveyor spacecraft were uncertain), and a too highly diffused project organization within Hughes, JPL's "laissez-faire" approach to project management did not work. As the problems snowballed, Cortright found it necessary to intervene and compelled JPL to assign a regiment of on-site supervisors to watch over every detail of the work being done by Hughes. Thus, as one analyst of Surveyor's management has observed, "the responsibility for overall spacecraft development was gradually retrieved from Hughes by JPL, thereby altering significantly the respective roles of the field center and the spacecraft systems contractors."51 Nothing so unfortunate happened during Lunar Orbiter, partly because NASA had learned from the false steps and outright mistakes made in the management of Surveyor. For example, NASA now knew that before implementing a project, everyone involved must take part in extensive preliminary discussions. These conversations ensured that the project's goals were certain and each party's responsibilities clear. Each office should expect maximum cooperation and minimal unnecessary interference from the others. Before Lunar Orbiter was under way, this excellent groundwork had been laid. As has been suggested by a 1972 study done by the National Academy of Public Administration, the Lunar Orbiter project can serve as a model of the ideal relationship between a prime contractor, a project office, a field center, a program office, and headquarters. 
From start to finish nearly everything important about the interrelationship worked out superbly in Lunar Orbiter. According to LOPO's Israel Taback, "Everyone worked together harmoniously as a team whether they were government, from headquarters or from Langley, or from Boeing." No one tried to take advantage of rank or to exert any undue authority because of an official title or organizational affiliation.52 That is not to say that problems never occurred in the management of Lunar Orbiter. In any large and complex technological project involving several parties, some conflicts are bound to arise. The key to project success lies in how differences are resolved. The most fundamental issue in the premission planning for Lunar Orbiter was how the moon was to be photographed. Would the photography be "concentrated" on a predetermined single target, or would it be "distributed" over several selected targets across the moon's surface? On the answer to this basic question depended the successful integration of the entire mission plan for Lunar Orbiter. For Lunar Orbiter, as with any other spaceflight program, mission planning involved the establishment of a complicated sequence of events: When should the spacecraft be launched? When does the launch window open and close? On what trajectory should the spacecraft arrive in lunar orbit? How long will it take the spacecraft to get to the moon? How and when should orbital "injection" take place? How and when should the spacecraft get to its target(s), and at what altitude above the lunar surface should it take the pictures? Where does the spacecraft need to be relative to the sun for taking optimal pictures of the lunar surface? Answering these questions also meant that NASA's mission planners had to define the lunar orbits, determine how accurately those orbits could be navigated, and know the fuel requirements. The complete mission profile had to be ready months before launch. And before the critical details of the profile could be made ready, NASA had to select the targeted areas on the lunar surface and decide how many of them were to be photographed during the flight of a single orbiter.53 Originally NASA's plan was to conduct a concentrated mission. The Lunar Orbiter would go up and target a single site of limited dimensions.
[Photo caption: Top NASA officials listen to a LOPO briefing at Langley in December 1966. Sitting to the far right with his hand on his chin is Floyd Thompson. To the left sits Dr. George Mueller, NASA associate administrator for Manned Space Flight. On the wall is a diagram of the sites selected for the "concentrated mission." The chart below illustrates the primary area of photographic interest.]
The country's leading astrogeologists would help in the site selection by identifying the smoothest, most attractive possibilities for a manned lunar landing. The U.S. Geological Survey had drawn huge, detailed maps of the lunar surface from the best available telescopic observations. With these maps, NASA would select one site as the prime target for each of the five Lunar Orbiter missions. During a mission, the spacecraft would travel into orbit and move over the target at the "perilune," or lowest point in the orbit (approximately 50 kilometers [31.1 miles] above the surface); then it would start taking pictures. Successive orbits would be close together longitudinally, and the Lunar Orbiter's camera would resume photographing the surface each time it passed over the site.
The high-resolution lens would take a 1-meter-resolution picture of a small area (4 x 16 kilometers) while, at exactly the same time, the medium-resolution lens would take an 8-meter-resolution picture of a wider area (32 x 37 kilometers). The high-resolution lens would photograph at such a rapid interval that the pictures would just barely overlap. The wide-angle pictures, taken by the medium-resolution lens, would have a conveniently wide overlap. All the camera exposures would take place in 24 hours, thus minimizing the threat to the film from a solar flare. The camera's capacity of roughly 200 photographic frames would be devoted to one location. The result would be one area shot in adjacent, overlapping strips. By putting the strips together, NASA had a picture of a central 1-meter-resolution area that was surrounded by a broader 8-meter-resolution area -in other words, it would be one large, rich stereoscopic picture of a choice lunar landing site. NASA would learn much about that one ideal place, and the Apollo program would be well served.54 The plan sounded fine to everyone, at least in the beginning. Langley's Request for Proposals had specified the concentrated mission, and Boeing had submitted the winning proposal based on that mission plan. Moreover, intensive, short-term photography like that called for in a concentrated mission was exactly what Eastman Kodak's high-resolution camera system had been designed for. The camera was a derivative of a spy satellite photo system created specifically for earth reconnaissance missions specified by the DOD.*** As LOPO's mission planners gave the plan more thought, however, they realized that the concentrated mission approach was flawed. Norman Crabill, Langley's head of mission integration for Lunar Orbiter, remembers the question he began to ask himself, "What happens if only one of these missions is going to work? This was in the era of Ranger failures and Surveyor slippage. When you shoot something, you had only a twenty percent probability that it was going to work. It was that bad." On that premise, NASA planned to fly five Lunar Orbiters, hoping that one would operate as it should. "Suppose we go up there and shoot all we [have] on one site, and it turns out to be no good?" fretted Crabill, and others began to worry as well. What if that site was not as smooth as it appeared on the U.S. Geological Survey maps, or a gravitational anomaly or orbital perturbation was present, making that particular area of the moon unsafe for a lunar landing? And what if that Lunar Orbiter turned out to be the only one to work? What then?55 In late 1964, over the course of several weeks, LOPO became more convinced that it should not be putting all its eggs in one basket. "We developed the philosophy that we really didn't want to do the concentrated mission; what we really wanted to do was what we called the 'distributed mission,'" recalls Crabill. The advantage of the distributed mission was that it would enable NASA to inspect several choice targets in the Apollo landing zone with only one spacecraft.56 In early 1965, Norm Crabill and Tom Young of the LOPO mission integration team traveled to the office of the U.S. Geological Survey in Flagstaff, Arizona. There, the Langley engineers consulted with U.S. government astrogeologists John F. McCauley, Lawrence Rowan, and Harold Masursky.
Jack McCauley was Flagstaff's top man at the time, but he assigned Larry Rowan, "a young and upcoming guy, very reasonable and very knowledgeable," the job of heading the Flagstaff review of the Lunar Orbiter site selection problem. "We sat down with Rowan at a table with these big lunar charts," and Rowan politely reminded the Langley duo that "the dark areas on the moon were the smoothest." Rowan then pointed to the darkest places across the entire face of the moon.57 Rowan identified 10 good targets. When Crabill and Young made orbital calculations, they became excited. In a few moments, they had realized that they wanted to do the distributed mission. Rowan and his colleagues in Flagstaff also became excited about the prospects. This was undoubtedly the way to catch as many landing sites as possible. The entire Apollo zone of interest was ±45° longitude and ±5° latitude, along the equatorial region of the facing, or near, side of the moon. Within that zone, the area that could be photographed via a concentrated mission was small. A single Lunar Orbiter that could photograph 10 sites of that size all within that region would be much more effective. If the data showed that a site chosen by the astrogeologists was not suitable, NASA would have excellent photographic coverage of nine other prime sites. In summary, the distributed mode would give NASA the flexibility to ensure that Lunar Orbiter would provide the landing site information needed by Apollo even if only one Lunar Orbiter mission proved successful. But there was one big hitch: Eastman Kodak's photo system was not designed for the distributed mission. It was designed for the concentrated mission in which all the photography would involve just one site and be loaded, shot, and developed in 24 hours. If Lunar Orbiter had to photograph 10 sites, a mission would last at least two weeks. The film system was designed to sustain operations for only a day or two; if the mission lasted longer than that, the Bimat film would stick together, the exposed parts of it would dry out, the film would get stuck in the loops, and the photographic mission would be completely ruined. When Boeing first heard that NASA had changed its mind and now wanted to do the distributed mission, Helberg and his men balked. According to LOPO's Norman Crabill, Boeing's representatives said, "Look, we understand you want to do this. But, wait. The system was designed, tested, used, and proven in the concentrated mission mode. You can't change it now because it wasn't designed to have the Bimat film in contact for long periods of time. In two weeks' time, some of the Bimat is just going to go, pfft! It's just going to fail!" Boeing understood the good sense of the distributed mission, but as the prime contractor, the company faced a classic technological dilemma. The customer, NASA, wanted to use the system to do something it was not designed to do. This could possibly cause a disastrous failure. Boeing had no recourse but to advise the customer that what it wanted to do could endanger the entire mission.58 The Langley engineers wanted to know whether Boeing could solve the film problem. "We don't know for sure," the Boeing staff replied, "and we don't have the time to find out." NASA suggested that Boeing conduct tests to obtain quantitative data that would define the limits of the film system.
Boeing's response was "That's not in the contract."59 The legal documents specified that the Lunar Orbiter should have the capacity to conduct the concentrated mission. If NASA now wanted to change the requirements for developing the Orbiter, then a new contract would have to be negotiated. A stalemate resulted on this issue and lasted until early 1965. The first launch was only a year away. If LOPO hoped to persuade Boeing to accept the idea of changing a basic mission requirement, it had to know the difference in reliability between the distributed and concentrated missions. If analysis showed that the distributed mission would be far less reliable, then even LOPO might want to reconsider and proceed with the concentrated mission. Crabill gave the job of obtaining this information to Tom Young, a young researcher from the Applied Materials and Physics Division. Crabill had specifically requested that Young be reassigned to LOPO mission integration because, in his opinion, Young was "the brightest guy [he] knew." On the day Young had reported to work with LOPO, Crabill had given him "a big pile of stuff to read," thinking he would be busy and, as Crabill puts it, "out of my hair for quite a while." But two days later, Young returned, having already made his way through all the material. When given the job of the comparative mission reliability analysis, Young went to Boeing in Seattle. In less than two weeks, he found what he needed to know and figured out the percentages: the reliability for the concentrated mission was an unspectacular 60 percent, but for the distributed mission it was only slightly worse, 58 percent. "It was an insignificant difference," Crabill thought when he heard Young's numbers, especially because nobody then really knew how to do that type of analysis. "We didn't gag on the fact that it was pretty low anyway, but we really wanted to do this distributed mission." The Langley researchers decided that the distributed mission was a sensible choice, if the Kodak system could be made to last for the extra time and if Boeing could be persuaded to go along with the mission change.60 LOPO hoped that Young's analysis would prove to Boeing that no essential difference in reliability existed between the two types of missions, but Boeing continued to insist that the concentrated mission was the legal requirement, not the distributed mission. The dispute was a classic case of implementing a project before even the customer was completely sure of what that project should accomplish. In such a situation, the only sensible thing to do was to be flexible. The problem for Boeing, of course, was that such flexibility might cost the company its financial incentives. If a Lunar Orbiter mission failed, the company worried that it would not be paid the bonus money promised in the contract. Helberg and Nelson discussed this issue in private conversations. Floyd Thompson participated in many of these talks and even visited Seattle to try to facilitate an agreement. In the end, Langley convinced Helberg that the change from a concentrated to a distributed mission would not impact Boeing's incentives. If a mission failed because of the change, LOPO promised that it would assume the responsibility. Boeing would have done its best according to the government request and instructions -and for that they would not be penalized. 61 The missions, however, would not fail. 
NASA and Boeing would handle the technical problems involving the camera by testing the system to ascertain the definite limits of its reliable operation. From Kodak, the government and the prime contractor obtained hard data regarding the length of time the film could remain set in one place before the curls or bends in the film around the loops became permanent and the torque required to advance the film exceeded the capability of the motor. From these tests, Boeing and LOPO established a set of mission "rules" that had to be followed precisely. For example, to keep the system working, Lunar Orbiter mission controllers at JPL had to advance the film one frame every eight hours. The rules even required that film sometimes be advanced without opening the door of the camera lens. Mission controllers called these nonexposure shots their "film-set frames" and the schedule of photographs their "film budget."62 As a result of the film rules, the distributed mission turned out to be a much busier operation than a concentrated mission would have been. Each time a photograph was taken, including film-set frames, the spacecraft had to be maneuvered. Each maneuver required a command from mission control. LOPO staff worried about the ability of the spacecraft to execute so many maneuvers over such a prolonged period. They feared something would go wrong during a maneuver that would cause them to lose control of the spacecraft. Lunar Orbiter 1, however, flawlessly executed an astounding number of commands, and LOPO staff were able to control spacecraft attitude during all 374 maneuvers.63 Ultimately, the trust between Langley and Boeing allowed each to take the risk of changing to a distributed mission. Boeing trusted Langley to assume responsibility if the mission failed, and Langley trusted Boeing to put its best effort into making the revised plan a success. Had either not fulfilled its promise to the other, Lunar Orbiter would not have achieved its outstanding record.
[Photo captions: Simple as this diagram of Lunar Orbiter (left) may look, no spacecraft in NASA history operated more successfully than Lunar Orbiter. Below, Lunar Orbiter I goes through a final inspection in the NASA Hangar S clean room at Kennedy Space Center prior to launch on 10 August 1966. The spacecraft was mounted on a three-axis test stand with its solar panels deployed and high-gain dish antenna extended from the side.]
The switch to the distributed mission was not the only instance during the Lunar Orbiter mission when contract specifications were jettisoned to pursue a promising idea. Boeing engineers realized that the Lunar Orbiter project presented a unique opportunity for photographing the earth. When the LOPO staff heard this idea, they were all for it, but Helberg and Boeing management rejected the plan. Turning the spacecraft around so that its camera could catch a quick view of the earth tangential to the moon's surface entailed technical difficulties, including the danger that, once the spacecraft's orientation was changed, mission controllers could lose command of the spacecraft. Despite the risk, NASA urged Boeing to incorporate the maneuver in the mission plan for Lunar Orbiter 1. Helberg refused.64 In some projects, that might have been the end of the matter. People would have been forced to forget the idea and to live within the circumscribed world of what had been legally agreed upon. Langley, however, was not about to give up on this exciting opportunity. Cliff Nelson,
Floyd Thompson, and Lee Scherer went to mission control at JPL to talk to Helberg and at last convinced him that he was being too cautious - that "the picture was worth the risk." If any mishap occurred with the spacecraft during the maneuver, NASA again promised that Boeing would still receive compensation and part of its incentive for taking the risk. The enthusiasm of his own staff for the undertaking also influenced Helberg in his final decision to take the picture.65 On 23 August 1966, just as Lunar Orbiter I was about to pass behind the moon, mission controllers executed the necessary maneuvers to point the camera away from the lunar surface and toward the earth. The result was the world's first view of the earth from space. It was called "the picture of the century" and "the greatest shot taken since the invention of photography."**** Not even the color photos of the earth taken during the Apollo missions superseded the impact of this first image of our planet as a little island of life floating in the black and infinite sea of space.66 Lunar Orbiter defied all the probability studies. All five missions worked extraordinarily well, and with the minor exception of a short delay in the launch of Lunar Orbiter I - the Eastman Kodak camera was not ready - all the missions were on schedule. The launches were three months apart, with the first taking place in August 1966 and the last in August 1967. This virtually perfect flight record was a remarkable achievement, especially considering that Langley had never before managed any sort of flight program into deep space. Lunar Orbiter accomplished what it was designed to do, and more. Its camera took 1654 photographs. More than half of these (840) were of the proposed Apollo landing sites. Lunar Orbiters I, II, and III took these site pictures from low-flight altitudes, thereby providing detailed coverage of 22 select areas along the equatorial region of the near side of the moon. One of the eight sites scrutinized by Lunar Orbiters II and III was a very smooth area in the Sea of Tranquility. A few years later, in July 1969, Apollo 11 commander Neil Armstrong would navigate the lunar module Eagle to a landing on this site.67 By the end of the third Lunar Orbiter mission, all the photographs needed to cover the Apollo landing sites had been taken. NASA was then free to redesign the last two missions, move away from the pressing engineering objective imposed by Apollo, and go on to explore other regions of the moon for the benefit of science. Eight hundred and eight of the remaining 814 pictures returned by Lunar Orbiters IV and V focused on the rest of the near side, the polar regions, and the mysterious far side of the moon. These were not the first photographs of the "dark side"; a Soviet space probe, Zond III, had taken pictures of it during a fly-by into a solar orbit a year earlier, in July 1965. But the Lunar Orbiter photos were of higher quality than the Russian pictures and illuminated some lunarscapes that had never before been seen by the human eye. The six remaining photos were of the spectacular look back at the distant earth. By the time all the photos were taken, about 99 percent of the moon's surface had been covered. When each Lunar Orbiter completed its photographic mission, the spacecraft continued its flight to gather clues to the nature of the lunar gravitational environment. NASA found these clues valuable in the planning of the Apollo flights.
Telemetry data clearly indicated that the moon's gravitational pull was not uniform. The slight dips in the path of the Lunar Orbiters as they passed over certain areas of the moon's surface were caused by gravitational perturbations, which in turn were caused by the mascons. The extended missions of the Lunar Orbiters also helped to confirm that radiation levels near the moon were quite low and posed no danger to astronauts unless a major solar flare occurred while they were exposed on the lunar surface. A few months after each Lunar Orbiter mission, NASA deliberately crashed the spacecraft into the lunar surface to study lunar impacts and their seismic consequences. Destroying the spacecraft before it deteriorated and mission controllers had lost command of it ensured that it would not wander into the path of some future mission.68 Whether the Apollo landings could have been made successfully without the photographs from Lunar Orbiter is a difficult question to answer. Without the photos, the manned landings could certainly still have been attempted. In addition to the photographic maps drawn from telescopic observation, engineers could use some good pictures taken from Ranger and Surveyor to guide them. However, the detailed photographic coverage of 22 possible landing sites definitely made NASA's final selection of ideal sites much easier and the pinpointing of landing spots possible. Furthermore, Lunar Orbiter also contributed important photometric information that proved vital to the Apollo program. Photometry involves the science of measuring the intensity of light. Lunar Orbiter planners had to decide where to position the camera to have the best light for taking the high-resolution photographs. When we take pictures on earth, we normally want to have the sun behind us so it is shining directly on the target. But a photo taken of the lunar surface in these same circumstances produces a peculiar photometric function: the moon looks flat. Even minor topographical features are indistinguishable because of the intensity of the reflecting sunlight from the micrometeorite filled lunar surface. The engineers in LOPO had to determine the best position for photographing the moon. After studying the problem (Taback, Crabill, and Young led the attack on this problem), LOPO's answer was that the sun should indeed be behind the spacecraft, but photographs should be taken when the sun was only 15 degrees above the horizon. 69 Long before it was time for the first Apollo launch, LOPO's handling of the lunar photometric function was common knowledge throughout NASA and the aerospace industry. The BellComm scientists and engineers who reviewed Apollo planning quickly realized that astronauts approaching the moon to make a landing needed, like Lunar Orbiter, to be in the best position for viewing the moon's topography. Although a computer program would pinpoint the Apollo landing site, the computer's choice might not be suitable. If that was the case, astronauts would have to rely on their own eyes to choose a spot. If the sun was in the wrong position, they would not make out craters and boulders, the surface would appear deceptively flat, and the choice might be disastrous. Apollo 11 commander Neil Armstrong did not like the spot picked by the computer for the Eagle landing. Because NASA had planned for him to be in the best viewing position relative to the sun, Armstrong could see that the place was "littered with boulders the size of Volkswagons." So he flew on. 
He had to go another 1500 meters before he saw a spot where he could set the lunar module down safely.70 NASA might have considered the special photometric functions involved in viewing the moon during Apollo missions without Lunar Orbiter, but the experience of the Lunar Orbiter missions took the guesswork out of the calculations. NASA knew that its astronauts would be able to see what they needed to see to avoid surface hazards. This is a little-known but important contribution from Lunar Orbiter. In the early 1970s Erasmus H. Kloman, a senior research associate with the National Academy of Public Administration, completed an extensive comparative investigation of NASA's handling of its Surveyor and Lunar Orbiter projects. After a lengthy review, NASA published a shortened and distilled version of Kloman's larger study as Unmanned Space Project Management: Surveyor and Lunar Orbiter. The result even in the expurgated version, with all names of responsible individuals left out -was a penetrating study in "sharp contrasts" that should be required reading for every project manager in business, industry, or government. Based on his analysis of Surveyor and Lunar Orbiter, Kloman concluded that project management has no secrets of success. The key elements are enthusiasm for the project, a clear understanding of the project's objective, and supportive and flexible interpersonal and interoffice relationships. The history of Surveyor and Lunar Orbiter, Kloman wrote, "serves primarily as a confirmation of old truths about the so-called basic principles of management rather than a revelation of new ones." Kloman writes that Langley achieved Lunar Orbiter's objectives by "playing it by the book." By this, Kloman meant that Langley applied those simple precepts of good management; he did not mean that success was achieved through a thoughtless and strict formula for success. Kloman understood that Langley's project engineers broke many rules and often improvised as they went along. Enthusiasm, understanding, support, and flexibility allowed project staff to adapt the mission to new information, ideas, or circumstances. "Whereas the Surveyor lessons include many illustrations of how 'not to' set out on a project or how to correct for early misdirections," Kloman argued, "Lunar Orbiter shows how good sound precepts and directions from the beginning can keep a project on track."71 Lunar Orbiter, however, owes much of its success to Surveyor. LOPO staff were able to learn from the mistakes made in the Surveyor project. NASA headquarters was responsible for some of these mistakes. The complexity of Surveyor was underestimated, unrealistic manpower and financial ceilings were imposed, an "unreasonably open-ended combination of scientific experiments for the payload" was insisted upon for too long, too many changes in the scope and objectives of the project were made, and the project was tied to the unreliable Centaur launch vehicle.72 NASA headquarters corrected these mistakes. In addition, Langley representatives learned from JPL's mistakes and problems. They talked at great length to JPL staff in Pasadena about Surveyor both before and after accepting the responsibility for Lunar Orbiter. From these conversations, Langley acquired a great deal of knowledge about the design and management of an unmanned space mission. 
JPL scientists and engineers even conducted an informal "space school" that helped to educate several members of LOPO and Boeing's team about key details of space mission design and operations. The interpersonal skills of the individuals responsible for Lunar Orbiter, however, appear to have been the essential key to success. These skills centered more on the ability to work with other people than they did on what one might presume to be the more critical and esoteric managerial, conceptual, and technical abilities. In Kloman's words, "individual personal qualities and management capabilities can at times be a determining influence in overall project performance."73 Compatibility among individual managers, Nelson and Helberg, and the ability of those managers to stimulate good working relationships between people proved a winning combination for Lunar Orbiter. Norman Crabill made these comments about Lunar Orbiter's management: "We had some people who weren't afraid to use their own judgment instead of relying on rules. These people could think and find the essence of a problem, either by discovering the solution themselves or energizing the troops to come up with an alternative which would work. They were absolute naturals at that job."74 Lunar Orbiter was a pathfinder for Apollo, and it was an outstanding contribution by Langley Research Center to the early space program. The old NACA aeronautics laboratory proved not only that it could handle a major deep space mission, but also that it could achieve an extraordinary record of success that matched or surpassed anything yet tried by NASA. When the project ended and LOPO members went back into functional research divisions, Langley possessed a pool of experienced individuals who were ready, if the time came, to plan and manage yet another major project. That opportunity came quickly in the late 1960s with the inception of Viking, a much more complicated and challenging project designed to send unmanned reconnaissance orbiters and landing probes to Mars. When Viking was approved, NASA headquarters assigned the project to "those plumbers" at Langley. The old LOPO team formed the nucleus of Langley's much larger Viking Project Office. With this team, Langley would once again manage a project that would be virtually an unqualified success. * Later in Apollo planning, engineers at the Manned Spacecraft Center in Houston thought that deployment of a penetrometer from the LEM during its final approach to landing would prove useful. The penetrometer would "sound" the anticipated target and thereby determine whether surface conditions were conducive to landing. Should surface conditions prove unsatisfactory, the LEM could be flown to another spot or the landing could be aborted. In the end, NASA deemed the experiment unnecessary. What the Surveyor missions found out about the nature of the lunar soil (that it resembled basalt and had the consistency of damp sand) made NASA so confident about the hardness of the surface that it decided this penetrometer experiment could be deleted. For more information, see Ivan D. Ertel and Roland W. Newkirk, The Apollo Spacecraft: A Chronology, vol. 4, NASA SP-4009 (Washington, 1978), p. 24. ** Edgar Cortright and Oran Nicks would come to have more than a passing familiarity with the capabilities of Langley Research Center. In 1968, NASA would name Cortright to succeed Thompson as the center's director. Shortly thereafter, Cortright named Nicks as his deputy director.
Both men then stayed at the center into the mid-1970s. *** In the top-secret DOD system, the camera with the film inside apparently would reenter the atmosphere inside a heat-shielded package that parachuted down, was hooked, and was physically retrieved in midair (if all went as planned) by a specially equipped U.S. Air Force C-119 cargo airplane. It was obviously a very unsatisfactory system, but in the days before advanced electronic systems, it was the best high-resolution satellite reconnaissance system that modern technology could provide. Few NASA people were ever privy to many of the details of how the "black box" actually worked, because they did not have "the need to know." However, they figured that it had been designed, as one LOPO engineer has described in much oversimplified layman's terms, "so when a commander said, 'we've got the target', bop, take your snapshots, zap, zap, zap, get it down from orbit, retrieve it and bring it home, rush it off to Kodak, and get your pictures." (Norman Crabill interview with author, Hampton, Va., 28 August 1991.) **** The unprecedented photo also provided the first oblique perspectives of the lunar surface. All other photographs taken during the first mission were shot from a position perpendicular to the surface and thus did not depict the moon in three dimensions. In subsequent missions, NASA made sure to include this sort of oblique photography. Following the first mission, Boeing prepared a booklet entitled Lunar Orbiter I - Photography (NASA Langley, 1965), which gave a detailed technical description of the earth-moon photographs; see especially pp. 64-71.
http://history.nasa.gov/SP-4308/ch10.htm
13
76
Definite integral from a to b is the area contained between f(x) and the x-axis on that interval. Area between two curves is found by 1) determining where the 2 functions intersect, 2) determining which function is the greater function over that interval, and 3) evaluating the definite integral over the interval of the greater function minus the lesser function. Example: find the area enclosed by the given functions.

5.2 Volumes of Solids: Slabs, Disks, Washers
Solids of Revolution, Disk Method: A solid may be formed by revolving a curve about an axis. The volume of this solid may be found by considering the solid sliced into many, many round disks. The area of each disk is the area of a circle. Volume is found by integrating the area. The radius of each circle is f(x) for each x value in the interval.
Washer Method: If the area between two curves is revolved around an axis, a solid is created that is hollow in the center. When slicing this solid, the sections created are washers, not solid circles. The area of the smaller circle must be subtracted from the area of the larger one.

5.3 Volumes of Solids of Revolution: Shells
When an area between two curves is revolved about an axis, a solid is created. This solid could be considered as the sum of many, many concentric cylinders. Volume is the integral of the area; in this case it is the surface area of the cylinder, with r = x and h = f(x). Does it matter which method to use? Either method may work. Sketch a picture of the function to determine which method may be easier. If a specific method is requested, that method should be implemented.

5.4 Length of a Plane Curve
A plane curve is smooth if it is determined by a pair of parametric equations x = f(t) and y = g(t), a < t < b, where f' and g' exist and are continuous on [a, b] and f'(t) and g'(t) are not simultaneously zero on (a, b). If the curve is smooth, we can find its length. Approximate curve length by the sum of many, many line segments. To have the actual length you would need infinitely many line segments, each whose length is found using the Pythagorean theorem. What if the function is not parametric but defined as y = f(x)? Infinitely many line segments still provide the length. Again use the Pythagorean formula, with a horizontal component and a vertical component determined by dy/dx, for every line segment.

5.5 Work, Fluid Force
Work = Force × Distance. In many cases the force is not constant throughout the entire distance. To determine total work done, add all the amounts of work done throughout the interval - INTEGRATE! If the force is defined as F(x), then work is the integral of F(x) over the interval.
Fluid Force: If a tank is filled to a depth h with a fluid of density sigma, then the force exerted by the fluid on a horizontal rectangle of area A on the bottom is equal to the weight of the column of fluid that stands directly over that rectangle. Let sigma = density, h(x) = depth, and w(x) = width; then the force is the integral of sigma × h(x) × w(x).

5.6 Moments and Center of Mass
The product of the mass m of a particle and its directed distance from a point (its lever arm) is called the moment of the particle with respect to that point. It measures the tendency of the mass to produce a rotation about the point. Two masses along a line balance at a point if the sum of their moments with respect to that point is zero. The center of mass is the balance point. Finding the center of mass: let M = moment, m = mass, and sigma = density; the center of mass is the total moment divided by the total mass.
Centroid: For a planar region, the center of mass of a homogeneous lamina is the centroid.
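The formulas on these slides were images and are missing from the text above. For reference, the standard textbook forms they describe are written out below; this is a reconstruction under the usual conventions (revolution about the x-axis, f(x) ≥ g(x) on [a, b], and a constant density σ for the fluid and the lamina), not the slides' own notation.

\[ A = \int_a^b \bigl[f(x) - g(x)\bigr]\,dx \]
\[ V_{\text{disk}} = \int_a^b \pi \bigl[f(x)\bigr]^2\,dx, \qquad V_{\text{washer}} = \int_a^b \pi \Bigl(\bigl[f(x)\bigr]^2 - \bigl[g(x)\bigr]^2\Bigr)\,dx, \qquad V_{\text{shell}} = \int_a^b 2\pi x\, f(x)\,dx \]
\[ L = \int_a^b \sqrt{\bigl[f'(t)\bigr]^2 + \bigl[g'(t)\bigr]^2}\,dt \quad\text{or}\quad L = \int_a^b \sqrt{1 + \left(\frac{dy}{dx}\right)^2}\,dx \]
\[ W = \int_a^b F(x)\,dx, \qquad F_{\text{fluid}} = \int_a^b \sigma\, h(x)\, w(x)\,dx \]
\[ \bar{x} = \frac{M}{m} = \frac{\int_a^b \sigma\, x\, f(x)\,dx}{\int_a^b \sigma\, f(x)\,dx} \]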
Pappus's Theorem: If a region R lying on one side of a line in its plane is revolved about that line, then the volume of the resulting solid is equal to the area of R multiplied by the distance traveled by its centroid.

5.7 Probability and Random Variables
Expectation of a random variable: If X is a random variable with a given probability distribution p(X = x), then the expectation of X, denoted E(X), also called the mean of X and denoted as mu, is the sum of each possible value weighted by its probability.
Probability Density Function (PDF): If the outcomes are not finite (discrete) but could be any real number in an interval, the random variable is continuous. Continuous random variables are studied similarly to a distribution of mass. The expected value (mean) of a continuous random variable X is defined by an integral rather than a sum.
Theorem A: Let X be a continuous random variable taking on values in the interval [A, B] and having PDF f(x) and CDF (cumulative distribution function) F(x). Then 1. F'(x) = f(x); 2. F(A) = 0 and F(B) = 1; 3. P(a < X < b) = F(b) - F(a).
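As above, the expectation and distribution formulas were images in the original slides. In standard notation (a reconstruction assuming X takes values in [A, B], not the slides' own typesetting) they read:

\[ E(X) = \mu = \sum_x x\, p(X = x) \quad\text{(discrete)}, \qquad E(X) = \mu = \int_A^B x\, f(x)\,dx \quad\text{(continuous)} \]
\[ F(x) = \int_A^x f(t)\,dt, \qquad P(a < X < b) = F(b) - F(a) = \int_a^b f(x)\,dx \]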
http://www.powershow.com/view/e2280-YmIwO/Applications_of_the_Integral_powerpoint_ppt_presentation
13
50
The original Super Sun, prior to its nova, was accumulating electrons from the Galaxy consistent with the demands of the environment through which it was passing. As we have explained earlier, the Super Sun became too electro-negative and expelled material violently into its surrounding space. This material could not escape; its expulsion was opposed both by the post-nova Sun and by the Galaxy. It thus formed and filled a sac surrounding the newly created Solaria Binaria. In the sac was the whole system of Solaria Binaria: the Sun, Super Uranus, the primitive planets, and the plenum (of gases and solids) of solar origin that nurtured the planets. As the binary widens, the sac becomes conical in shape, narrowing from the size of the Sun at one end to about the size of Super Uranus at the other. A system of similar appearance has been postulated for the binary AM Herculis (Liller, p352). Wickramasinghe and Bessell describe gas flow patterns in X-ray-emitting binary systems. There, one may note a similarity in the shape of their pattern of maximum obscuration to the cone of gases proposed in this work. Viewed from the outside the ancient plenum would have been opaque to light. Not so with the gas of the Earth's atmosphere today, which is eight kilometers thick if the atmosphere is considered as a column of gas of constant density. This atmospheric layer is of trivial thickness compared to the radius of the Earth, yet its importance to the environment is unquestionable. Even this negligible atmospheric layer removes 18.4 per cent of the incoming sunlight, mostly by diverting it from its original direction of travel. Some of this scattered light returns to space, but most of it is redirected several times to produce the blue sky so familiar to us. Atmospheric scatter is enhanced near sunset when the incoming light traverses an atmospheric column tens of times longer than near noon. The setting Sun is notably fainter and its color redder because of the increased scatter. If the atmospheric column were as little as 1280 kilometers thick (at the present surface air density) all of the sunlight would be deflected from its incoming direction. Light would still be seen but only after scattering several times; no discernible source could be identified with the light. So it was in the days of Solaria Binaria. To be precise, if, in the last days of Super Uranus, this body were about thirty gigameters from Earth and if Super Uranus was then as bright per square centimeter of surface as today's Sun, it would not have been directly visible unless the gas density in the plenum was close to that deduced today for the Earth's atmosphere at an altitude of eighty kilometers. To see the more distant Sun this density would have to be decreased another fourfold. In the Age of Urania, Super Uranus was located about as far from the Sun as the orbit of the planet Venus today. This would provide the plenum with a volume of about 10^20 cubic kilometers. If the plenum contained as much as one per cent of the atoms in the present Sun, the gas density would be several times that found at the base of the Earth's atmosphere today. Neither star would be seen directly, and only a dim diffused light could reach the planetary surfaces. As the binary evolved, the plenum came to contain an increased electrical charge; it expanded, leaving less and less gas in the space between the principals. Thus it became gradually more transparent. Astronomers see diluting plenum gases elsewhere in evolving binary systems.
Batten (1973a, p10), discussing matter flow within binary systems, favors gas densities of the order of 10^13 particles per cubic centimeter. Warner and Nather propose a much higher density for one system (U Geminorum, a dwarf nova system) where they postulate a gas disc with 6 x 10^17 electrons per cubic centimeter. Unless all the gas is ionized, the neutral gas density would be higher than the calculated electron density. The gas densities that they mention are comparable to those necessary to allow the early humans to discern the first celestial orbits. In the earlier stages of Solaria Binaria the plenum was impenetrable to an outside observer; all detected radiation came from the surface layers of the cone-shaped sac, an area up to fifty-five times the surface of the Sun. The luminosity of the sac would arise from the transaction between inflowing galactic electrons and the gases on the perimeter of the sac. The plenum, at formation, was electron-rich relative to the stars and the planetary nuclei centered within it. These latter electron-deficient bodies promptly initiated a transaction to obtain more electrons by expelling electron-deficient atoms into the volume of the plenum. The charge differences within the sac were modulated with time. In other words, the plenum was losing electrons from its perimeter to its center. In response, the size of the sac collapsed under cosmic pressure. In time this charge-redistribution might have diminished the volume of the sac by as much as tenfold, compressing the cone of gases into a cylinder or column of smaller diameter. Running along the axis between the Sun and Super Uranus was an electrical discharge joining the two principals. Moving with this electrical flow was matter from the Sun that was bound for Super Uranus. Some of this matter would be intercepted by and incorporated into the primitive planets. Induced by the electrical flow, a magnetic field was generated which encircled the axis and radially pinched the gases. The pinch effect is self-limiting in that the more the current, the more the pinch. An infinite current in theory pinches the current carriers into an infinitesimal volume, extinguishing it (Blevin, 1964a, p214). Material would be extruded at both ends of the pinched flow by the pressure induced in the pinch. This circular magnetic field, a magnetic tube, would induce randomly moving ions of the plenum to circulate along the field direction. The circulating motion of the ions eventually would be transferred by collision to the neutral gases. The result would be that in the outer regions flow would be dominated by revolution around the circumference of the tube. Everything here would eventually revolve uniformly. The innermost regions of the column were dominated by flow along the axis. Considerable transaction occurred at the junction of these two separately moving regions of the column, the central and the peripheral. Some luminosity would arise from the transaction of electrons and ions deep within the magnetic tube. The ions electrically accelerated towards Super Uranus were neutralized at some point along their trajectory. At neutralization X-rays were produced. Some of the ions would be neutralized upon collision within the magnetic tube, most upon reaching Super Uranus; but, because of the pinch phenomenon noted above, some ions would be extruded and neutralized near the perimeter of the sac behind Super Uranus. Despite the high gas density in the original plenum, X-ray emission would be observable from the outside.
That such is the case elsewhere is indicated by Brennan. As the plenum diluted with time (in a manner to be discussed in Chapter Eleven) the outside observer would see deeper and deeper into the system, and eventually all of the X-ray emission would come from the interface between the magnetic tube and the surface of Super Uranus. As in other binary systems, a partial eclipse of the main X-ray source would then be seen as the dumb-bell revolved (see Tananbaum and Hutchings for data on other binaries). Matsuoka notes a positive correlation between X-ray and optical emission in binaries. Radio-emitting regions surround many binary systems (Wickramasinghe and Bessell). Spangler and his colleagues claim that radio emission from binary stars is noted for stars that are over-luminous. The radio emission is generated by electrons transacting with the magnetic field associated with the inter-star axis. That this emission is enhanced when a stronger transaction occurs between the stars causing the over-luminosity is understandable, using our model. At the perimeter of the plenum, optical effects would show to an outside observer an apparent absorption shell associated with the hidden binary within. Like many of the close-binary systems, the stars of Solaria Binaria would not be resolvable in a distant telescope, but the binary nature of the system could be known because observable differences would be produced as the dumb-bell revolved. Gas-containing binary systems as described here, and elsewhere (Batten, 1973b, pp157ff, pp176ff), represent the stake of Solaria Binaria at various epochs, and especially in its last days. As the binary system collapsed, the plenum thinned, allowing direct observation of light produced by sources inside the sac. The gas disc, theoretically implied to surround the stars of other binaries, is waning in the late translucent plenum. The gas streams detected flowing between certain binary components are present in Solaria Binaria along what we call the electrical arc. The gas clouds, whose absorption spectrum leads us to believe that they envelop entire binary systems, correspond to the perimeter of the early opaque plenum. As Solaria Binaria evolved, each of the classes of circumstellar matter noted by astronomers became observable in their turn. Inferable from the above is the degree of visibility from the Earth's surface, or from any point of the planetary belt within the plenum. Overall there is a translucence. Objects near at hand might be distinguished, certainly after the half-way mark in the million-year history of Solaria Binaria was past. Sky bodies were indistinguishable from Earth. With passing time, the level of light would increase. In the beginning, the light is scattered and the sky is a dim white. As the plenum thinned electrically, the sky bodies would emerge as diffuse reddish patches. During this process, the sky would brighten and become more blue. Thus, as they emerge, Super Uranus and the Sun brighten and whiten while the sky becomes darker and bluer. At a time related to the changes soon to be discussed, around fourteen thousand years ago, the Earth is suddenly peopled by humans, and one may investigate whether any memories remain of the plenum. There seem to be several legendary themes that correlate with our deductions about visibility. Seemingly, aboriginal legends describe the heavens as hard, heavy, marble-like and luminous. Earliest humans were seeing a vault, a dome . 
Probably in retrospect, to the heaven was ascribed the human qualities of a robe or covering, and, by extension, part of an anthropomorphic god. Thus, the Romans saw Coelus, the Chinese T'ien, the Hindus Varuna, and the Greeks Ouranos. Vail (1905/1972) presents ample evidence that day and night were uncertain and that the heavens were continuously translucent. When Hindu myth says that "the World was dark and asleep until the Great Demiurge appeared", we construe the word "dark" as non-bright relative to the sunlit sky that came later. Heaven and Earth were close together, were spouses, according to Greek and other legends. The global climate of the Earth in the plenum was wet; all is born from the insemination of the fecund Earth by the Sky, said some legends. There was so much moisture in the plenum that, although the ocean basins were not yet structured, the first proto-humans might confuse the waters of the firmament above with the earth-waters. In some legendary beginnings, a supreme deity had dispatched a diver to bring out Earth from the great primordial waters of chaos (Long, 1963). The earliest condition was referred to as a chaos, not in the present sense of turbulent clouds, disorder, and disaster, but in the sense of lacking precise indicators of order, such as a cycle that would let time be measured. T'ien is the Chinese Heaven, universally present chaos without form. The gods who later give men time, such as Kronos, are specifically celebrated therefore (Plato). Sky bodies were invisible. Legends of creation do not begin with a bright sky filled with beings, but speak of a time before this. When the first sky-body observations are reported, they are of falling bodies. The earliest fixed heavenly body in legend is not the Sun, the Moon, the planets, nor the stars, but Super Uranus, as will be described later on. Nor was the radiant perimeter of the sac visible. It lay far beyond discernment as such, and was in any case practically indistinguishable from its luminescence. The electrical arc would have been visible directly only in its decaying days, being likewise sheathed from sight by the dense atmosphere of the tube. That the arc or axis appeared along with the sky bodies before its radiance expired is to be determined in the next chapter, where its composition and operation are discussed. Notes on Chapter 5 32. The actual atmosphere does not have a constant density throughout its volume. If condensed to constant density it would become an 8-km column of gas at the atmospheric density found presently at the bottom of the atmosphere. 33. The retention of a more dense, thin atmospheric skin surrounding the Earth (and the other planets) would not affect the visibility of the binary components more adversely than does the Earth's atmosphere today. 34. Vail (1905) collected ancient expressions from diverse cultures testifying to perceptions of the heavens as "the Shining Whole", "the Brilliant All", the "firmament", "the vault", "Heaven the Concealer". Heaven was the Deity who came down crushingly on Earth, and the heavens are said to "roll away" and to open to discharge the Heavenly Hosts; great rivers are said to flow out of Heaven. In other places we read of the gods chopping and piercing holes in the celestial ceiling, of a Boreal Hole that is an "Island of Stars", a "star opening", "Mimer's Well".
Heaven was perceived to become ever more impalpable and tenuous with time, so that not only the memory of it but also its names, adjectives and metaphors lost their strength of meaning.
http://www.grazian-archive.com/quantavolution/QuantaHTML/vol_05/solaria-binaria_05.htm
13
55
Okay, so now that you have the basic idea of what a limit is, we’re going to develop your intuition a little. Section 2.2: The Limit of a Function This whole chapter is to show you two things: 1) How a limit works on a function in general, 2) how to deal with limits in practice. So, let’s start off with the book’s definition of a limit Verbally, this is “the limit of f(x) as x approaches a equals L.” What it means is that for a function f(x), the closer we take x to a certain value, a, the closer the limit will get to L. In other words, for some function f(x) = x, as we take x to one value, f(x) approaches some other value. That’s the whole idea of a limit: As the input of a function approaches some value, the output of that function approaches some other value. Let’s go through the book’s examples and develop some intuition. Guess the value of . So, take this mathematical expression and figure out what it approaches as x edges closer and closer to 1. You’ll note that you can’t just put in the 1 and get a value out. That’s what makes this a limit problem. If you wanted the limit as x approaches 1 of the function “f(x) = x+1,” it’d be easy. Just substitute 1 in for x and you get 2. In the case of this example it doesn’t work, because the function approaches 0/0, which is undefined. So, as we did in previous problems we take x closer and closer to 1 without actually hitting 1 and see what the result seems to approach. In the book, they go as close as .9999 for x, and get a value of 0.500025 for f(x). This suggests that the value is 1/2. In addition, they do values from the other direction, getting as close as 1.0001. At that x value, the f(x) value is .499975. So, it seems very likely that the limit’s value is 0.5. As we edge closer and closer to x=1, f(x) gets closer and closer to 1/2. It’s important to note here that 0.5 is the limit, EVEN THOUGH there is not a defined value at x=1. For the graph, we’d put a little hole at x=1 because it’s undefined. However, the limit isn’t a measure of what happens when x equals something. It’s a measure of what happens as x approaches something. Are you starting to get an intuitive feel for these? Let’s do another example. Estimate the value of . Once again, note that you can’t just pop the value t approaches and get a real answer. You’ll get 0/0. See if you can solve this one yourself, then I’ll give you the answer. Did you solve it? Okay, how about now? As you get closer and closer, you approach 0.1666666…, and the closer you get, the more sixes you can pop on. So, it seems very likely that you’re approaching 0.1 followed by infinity sixes, which is also known as 1/6. Here, the book makes a side point about the pitfalls of computer-based calculations. For the system they’re using, at about t=0.00005 they start getting a value of 0 for f(x). This is a good example of why it’s always good to know the math “under the hood.” No computer is accurate to infinite decimal places. At a certain point, it’s just rounding. In this case, when the decimal (after you square t) gets to be on the order of billionths, it says “fuck it,” turns it into a 0, and gives you the wrong value. So far, the limits we’ve done have been pretty intuitive. Let’s look at one that might surprise you: Guess the value of What’s your intuitive guess? Maybe you think it’s 0, since you know the numerator goes to 0 as x goes to 0. But, that doesn’t work because the denominator does too. Maybe you think limits don’t make sense for periodic functions. 
Also wrong – remember, we're approaching a particular point. It doesn't matter how the function behaves elsewhere. So, let's go back to calculating the actual values. When we do, we find that the closer x gets to 0, the closer the function gets to 1. This may seem like a small thing, but it's a big deal in physical calculations. It means that, as the physicists say, "for small values of x, sin(x) = x." That's a big deal. Imagine you've got an ugly equation with the sine of some big pile of variables. Now, imagine you can remove the sin() part. A common physics example is pendular motion. Part of the calculation for how a pendulum moves involves the maximum angle of swing it achieves. If you can just use the value of the angle, rather than the sine of the angle, it massively simplifies things. Of course, this is a bit of a rule of thumb, so it's arbitrary as to exactly what "small" means. The version I was taught is that you're good down to around 15 degrees (π/12 radians). So, you can already see how limits are helping us out. Hopefully, you can also see how limits can give unintuitive results. With that in mind, let's test your intuition again! What is the value of ? In this case, as x gets smaller and smaller, you're taking the sine of a larger and larger number. As you know, sine is a periodic function, meaning that it wobbles up and down as you walk down x. So, in this case, you have a problem. As x goes to 0, the function operates on bigger and bigger values. But, as those values get bigger and bigger, the operation (sine) stays between -1 and 1, wobbling back and forth forever. So, there is no particular value the function approaches as π/x gets bigger. Therefore, we say that the limit does not exist. One more test of your intuition! Now, say you make a list of what happens as x goes to 0. You'll note that it seems to be getting smaller and smaller, approaching 0. So, you might guess that the function approaches 0. BUT YOU'VE BEEN PLAYED FOR THE FOOL, MY FRIEND. Look at the function again. We know for sure that the left part simply goes to 0 as x goes to zero. What about the right part? Well, as x goes to zero, cosine goes to 1. So the right part goes to 1/10,000. So, as x gets closer and closer to 0, our function should actually approach 1/10,000. That is to say, it gets very small indeed, but it does not reach 0. You can confirm this by graphing it and seeing if the function ever touches zero. It doesn't. The lesson here is this: You can't just look at a list of numbers and assume they'll lead you to the limit. That list of numbers is just an intuitive way to look at things. Getting 0 instead of 1/10,000 is pretty good. In fact, it's only off by 1/10,000. But, in the right context, that might matter quite a bit. What if the equation predicts what percent of people will die of ultra-plague when I release it later this year? At 1/10,000, you're looking at 600,000 dead people – all of them dead because you didn't understand the concept of a limit. All these examples may seem a bit different, but they're getting at the same idea – limits are what f(x) approaches as x approaches something. Unfortunately, it's not always quite that simple… The Heaviside Function is given by H(t). H(t) is 0 when t is less than zero, and 1 when t is greater than or equal to zero. That bastard got a whole function named after him that's simple enough to be in a pre-calc text. Take a look at the link there, which shows a graph of the function. What do you think the limit is as you get closer to 0?
You’ll immediately see a problem. If you approach from one side, the limit is 0. If you approach from the other side, the limit is 1. So, it’s not clear that there’s a single limit. But, you’re not as lost as in example 4, where there was no limit at all. Here, you can at least say there seem to be 2 limits. And, you’d be right to say that. In fact, many equations have more than 1 limit. That’ll set us up for the next blog, on One-sided Limits.
http://www.theweinerworks.com/?p=675
13
100
Conversion Factors and Functions Earlier we showed how unity factors can be used to express quantities in different units of the same parameter. For example, a density can be expressed in g/cm3 or lb/ft3. Now we will see how conversion factors representing mathematical functions, like D = m/V, can be used to transform quantities into different parameters. For example, what is the volume of a given mass of gold? Unity factors and conversion factors are conceptually different, and we'll see that the "dimensional analysis" we develop for unit conversion problems must be used with care in the case of functions. When we are referring to the same object or sample of material, it is often useful to be able to convert one parameter into another. For example, in our discussion of fossil-fuel reserves we find that 318 Pg (3.18 × 10^17 g) of coal, 28.6 km3 (2.68 × 10^10 m3) of petroleum, and 2.83 × 10^3 km3 (2.83 × 10^13 m3) of natural gas (measured at normal atmospheric pressure and 15°C) are available. But none of these quantities tells us what we really want to know ― how much heat energy could be released by burning each of these reserves? Only by converting the mass of coal and the volumes of petroleum and natural gas into their equivalent energies can we make a valid comparison. When this is done, we find that the coal could release 7.2 × 10^21 J, the petroleum 1.1 × 10^21 J, and the gas 1.1 × 10^21 J of heat energy. Thus the reserves of coal are more than three times those of the other two fuels combined. It is for this reason that more attention is being paid to the development of new ways for using coal resources than to oil or gas. Conversion of one kind of quantity into another is usually done with what can be called a conversion factor, but the conversion factor is based on a mathematical function (D = m / V) or mathematical equation that relates parameters. Since we have not yet discussed energy or the units (joules) in which it is measured, an example involving the more familiar quantities mass and volume will be used to illustrate the way conversion factors are employed. The same principles apply to finding how much energy would be released by burning a fuel, and that problem will be encountered later. Suppose we have a rectangular solid sample of gold which measures 3.04 cm × 8.14 cm × 17.3 cm. We can easily calculate that its volume is 428 cm3, but how much is it worth?
The price of gold is about 5 dollars per gram, and so we need to know the mass rather than the volume. It is unlikely that we would have available a scale or balance which could weigh accurately such a large, heavy sample, and so we would have to determine the mass of gold equivalent to a volume of 428 cm3. This can be done by manipulating the equation which defines density, ρ = m / V. If we multiply both sides by V, we obtain m = V × ρ or mass = volume × density (1) Taking the density of gold from a reference table, we can now calculate m = 428 cm3 × 19.3 g/cm3 = 8.26 × 10^3 g. This is more than 18 lb of gold. At the price quoted above, it would be worth over 40 000 dollars! The formula which defines density can also be used to convert the mass of a sample to the corresponding volume. If both sides of Eq. (1) are multiplied by 1/ρ, we have V = m / ρ (2). Notice that we used the mathematical function D = m/V to convert parameters from mass to volume or vice versa in these examples. How does this differ from the use of unity factors to change units of one parameter? An Important Caveat A mistake sometimes made by beginning students is to confuse density with concentration, which also may have units of g/cm3. By dimensional analysis, this looks perfectly fine. To see the error, we must understand the meaning of the function C = m / V. In this case, V refers to the volume of a solution, which contains both a solute and solvent. Given that the concentration of gold in an alloy is 10 g gold in 100 cm3 of alloy, we see that it is wrong (although dimensionally correct as far as conversion factors go) to incorrectly calculate the volume of gold in 20 g of the alloy as follows: 20 g × (100 cm3 / 10 g) = 200 cm3. It is only possible to calculate the volume of gold if the density of the alloy is known, so that the volume of alloy represented by the 20 g could be calculated. This volume multiplied by the concentration gives the mass of gold, which then can be converted to a volume with the density function. The bottom line is that using a simple unit cancellation method does not always lead to the expected results, unless the mathematical function on which the conversion factor is based is fully understood. A solution of ethanol with a concentration of 0.1754 g / cm3 has a density of 0.96923 g / cm3 and a freezing point of -9 °F. What is the volume of ethanol (D = 0.78522 g / cm3 at 25 °C) in 100 g of the solution? The volume of 100 g of solution is V = m / D = 100 g / 0.96923 g cm-3 = 103.17 cm3. The mass of ethanol in this volume is m = V × C = 103.17 cm3 × 0.1754 g / cm3 = 18.097 g. The volume of ethanol = m / D = 18.097 g / 0.78522 g / cm3 = 23.05 cm3. Note that we cannot calculate the volume of ethanol by the dimensionally correct but wrong one-step cancellation that gives 123.4 cm3. Note that this result required knowing when to use the function C = m/V and when to use the function D = m/V as conversion factors. Pure dimensional analysis could not reliably give the answer, since both functions have the same dimensions. EXAMPLE 2 Find the volume occupied by a 4.73-g sample of benzene.
Solution The density of benzene is 0.880 g cm–3. Using Eq. (2), V = m / ρ = 4.73 g / 0.880 g cm–3 = 5.38 cm3. (Note that taking the reciprocal of the density simply inverts the fraction ― 1 cm3 goes on top, and 0.880 g goes on the bottom.) The two calculations just done show that density is a conversion factor which changes volume to mass, and the reciprocal of density is a conversion factor changing mass into volume. This can be done because the mathematical formula defining density relates it to mass and volume. Algebraic manipulation of this formula gave us expressions for mass and for volume [Eqs. (1) and (2)], and we used them to solve our problems. If we understand the function D = m/V and heed the caveat above, we can devise appropriate conversion factors by unit cancellation, as the following example shows: EXAMPLE 3 A student weighs 98.0 g of mercury. If the density of mercury is 13.6 g/cm3, what volume does the sample occupy? We know that volume is related to mass through density. V = m × conversion factor Since the mass is in grams, we need to get rid of these units and replace them with volume units. This can be done if the reciprocal of the density is used as a conversion factor. This puts grams in the denominator so that these units cancel: V = 98.0 g × (1 cm3 / 13.6 g) = 7.21 cm3. If we had multiplied by the density instead of its reciprocal, the units of the result would immediately show our error: 98.0 g × 13.6 g/cm3 = 1.33 × 10^3 g2/cm3. It is clear that square grams per cubic centimeter are not the units we want. Using a conversion factor is very similar to using a unity factor — we know the conversion factor is correct when units cancel appropriately. A conversion factor is not unity, however. Rather it is a physical quantity (or the reciprocal of a physical quantity) which is related to the two other quantities we are interconverting. The conversion factor works because of the relationship [i.e., the definition of density as defined by Eqs. (1) and (2) includes the relationships between density, mass, and volume], not because it has a value of one. Once we have established that a relationship exists, it is no longer necessary to memorize a mathematical formula. The units tell us whether to use the conversion factor or its reciprocal. Without such a relationship, however, mere cancellation of units does not guarantee that we are doing the right thing. A simple way to remember relationships among quantities and conversion factors is a "road map" of the type shown below: This indicates that the mass of a particular sample of matter is related to its volume (and the volume to its mass) through the conversion factor, density. The double arrow indicates that a conversion may be made in either direction, provided the units of the conversion factor cancel those of the quantity which was known initially. In general the road map can be written as a known quantity linked to a desired quantity by a conversion factor. As we come to more complicated problems, where several steps are required to obtain a final result, such road maps will become more useful in charting a path to the solution. EXAMPLE 4 Black ironwood has a density of 67.24 lb/ft3. If you had a sample whose volume was 47.3 ml, how many grams would it weigh? (1 lb = 454 g; 1 ft = 30.5 cm). Solution The road map tells us that the mass of the sample may be obtained from its volume using the conversion factor, density. Since milliliters and cubic centimeters are the same, we use the SI units
for our calculation: Mass = m = 47.3 cm3 × Since the volume units are different, we need a unity factor to get them to cancel: We now have the mass in pounds, but we want it in grams, so another unity factor is needed: In subsequent chapters we will establish a number of relationships among physical quantities. Formulas will be given which define these relationships, but we do not advocate slavish memorization and manipulation of those formulas. Instead we recommend that you remember that a relationship exists, perhaps in terms of a road map, and then adjust the quantities involved so that the units cancel appropriately. Such an approach has the advantage that you can solve a wide variety of problems by using the same technique.
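The worked numbers above depended on equation images that were lost from this copy of the page. A minimal sketch in Python of the same conversion-factor calculations is given below; the gold density of 19.3 g/cm3 is an assumption (the text only says it comes from a reference table), while the other values are taken directly from the examples.

# Minimal sketch of the conversion-factor calculations above.
# Assumption: density of gold = 19.3 g/cm^3 (only implied by the text).

gold_volume = 3.04 * 8.14 * 17.3            # cm^3, about 428 cm^3
gold_mass = gold_volume * 19.3              # m = V * D, about 8.26e3 g (over 18 lb)

# Ethanol-in-solution example: apply C = m/V and D = m/V in the right order.
solution_volume = 100 / 0.96923             # cm^3 of solution in a 100-g sample
ethanol_mass = solution_volume * 0.1754     # g of ethanol, m = V * C
ethanol_volume = ethanol_mass / 0.78522     # cm^3 of ethanol, V = m / D

# EXAMPLE 4: black ironwood, 67.24 lb/ft^3, sample volume 47.3 mL.
ironwood_density = 67.24 * 454 / 30.5**3    # lb/ft^3 converted to g/cm^3
ironwood_mass = 47.3 * ironwood_density     # grams

print(round(gold_mass), round(ethanol_volume, 2), round(ironwood_mass, 1))

Run as written, this prints 8262 (g of gold), 23.05 (cm3 of ethanol, matching the value above), and 50.9 (g of ironwood), so the unit-cancellation road map and the function-based conversions agree.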
http://chempaths.chemeddl.org/services/chempaths/?q=book/General%20Chemistry%20Textbook/Introduction%3A%20The%20Ambit%20of%20Chemistry/1183/conversion-factors-and-fun
13
87
Mathematics Grade 4 |Printable Version (pdf)| (1) Students generalize their understanding of place value to 1,000,000, understanding the relative sizes of numbers in each place. They apply their understanding of models for multiplication (equal-sized groups, arrays, area models), place value, and properties of operations, in particular the distributive property, as they develop, discuss, and use efficient, accurate, and generalizable methods to compute products of multi-digit whole numbers. Depending on the numbers and the context, they select and accurately apply appropriate methods to estimate or mentally calculate products. They develop fluency with efficient procedures for multiplying whole numbers; understand and explain why the procedures work based on place value and properties of operations; and use them to solve problems. Students apply their understanding of models for division, place value, properties of operations, and the relationship of division to multiplication as they develop, discuss, and use efficient, accurate, and generalizable procedures to find quotients involving multi-digit dividends. They select and accurately apply appropriate methods to estimate and mentally calculate quotients, and interpret remainders based upon the context. (2) Students develop understanding of fraction equivalence and operations with fractions. They recognize that two different fractions can be equal (e.g., 15/9 = 5/3), and they develop methods for generating and recognizing equivalent fractions. Students extend previous understandings about how fractions are built from unit fractions, composing fractions from unit fractions, decomposing fractions into unit fractions, and using the meaning of fractions and the meaning of multiplication to multiply a fraction by a whole number. (3) Students describe, analyze, compare, and classify two-dimensional shapes. Through building, drawing, and analyzing two-dimensional shapes, students deepen their understanding of properties of two-dimensional objects and the use of them to solve problems involving symmetry. Grade 4 Overview Operations and Algebraic Thinking Number and Operations in Base Ten Number and Operations - Fractions Measurement and Data Core Standards of the Course 1. Interpret a multiplication equation as a comparison, e.g., interpret 35 = 5 × 7 as a statement that 35 is 5 times as many as 7 and 7 times as many as 5. Represent verbal statements of multiplicative comparisons as multiplication equations. 2. Multiply or divide to solve word problems involving multiplicative comparison, e.g., by using drawings and equations with a symbol for the unknown number to represent the problem, distinguishing multiplicative comparison from additive comparison.1 3. Solve multistep word problems posed with whole numbers and having whole-number answers using the four operations, including problems in which remainders must be interpreted. Represent these problems using equations with a letter standing for the unknown quantity. Assess the reasonableness of answers using mental computation and estimation strategies including rounding. 4. Find all factor pairs for a whole number in the range 1–100. Recognize that a whole number is a multiple of each of its factors. Determine whether a given whole number in the range 1–100 is a multiple of a given one-digit number. Determine whether a given whole number in the range 1–100 is prime or composite. 5. Generate a number or shape pattern that follows a given rule. 
Identify apparent features of the pattern that were not explicit in the rule itself. For example, given the rule “Add 3” and the starting number 1, generate terms in the resulting sequence and observe that the terms appear to alternate between odd and even numbers. Explain informally why the numbers will continue to alternate in this way. 1. Recognize that in a multi-digit whole number, a digit in one place represents ten times what it represents in the place to its right. For example, recognize that 700 ÷ 70 = 10 by applying concepts of place value and division. 2. Read and write multi-digit whole numbers using base-ten numerals, number names, and expanded form. Compare two multi-digit numbers based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons. 5. Multiply a whole number of up to four digits by a one-digit whole number, and multiply two two-digit numbers, using strategies based on place value and the properties of operations. Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models. 6. Find whole-number quotients and remainders with up to four-digit dividends and one-digit divisors, using strategies based on place value, the properties of operations, and/or the relationship between multiplication and division. Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models. 1. Explain why a fraction a/b is equivalent to a fraction (n × a)/(n × b) by using visual fraction models, with attention to how the number and size of the parts differ even though the two fractions themselves are the same size. Use this principle to recognize and generate equivalent fractions. 2. Compare two fractions with different numerators and different denominators, e.g., by creating common denominators or numerators, or by comparing to a benchmark fraction such as 1/2. Recognize that comparisons are valid only when the two fractions refer to the same whole. Record the results of comparisons with symbols >, =, or <, and justify the conclusions, e.g., by using a visual fraction model. - Understand addition and subtraction of fractions as joining and separating parts referring to the same whole. - Decompose a fraction into a sum of fractions with the same denominator in more than one way, recording each decomposition by an equation. Justify decompositions, e.g., by using a visual fraction model. Examples: 3/8 = 1/8 + 1/8 + 1/8 ; 3/8 = 1/8 + 2/8 ; 2 1/8 = 1 + 1 + 1/8 = 8/8 + 8/8 + 1/8. - Add and subtract mixed numbers with like denominators, e.g., by replacing each mixed number with an equivalent fraction, and/or by using properties of operations and the relationship between addition and subtraction. - Solve word problems involving addition and subtraction of fractions referring to the same whole and having like denominators, e.g., by using visual fraction models and equations to represent the problem. - Understand a fraction a/b as a multiple of 1/b. For example, use a visual fraction model to represent 5/4 as the product 5 × (1/4), recording the conclusion by the equation 5/4 = 5 × (1/4). - Understand a multiple of a/b as a multiple of 1/b, and use this understanding to multiply a fraction by a whole number. For example, use a visual fraction model to express 3 × (2/5) as 6 × (1/5), recognizing this product as 6/5. (In general, n × (a/b) = (n × a)/b.) 
- Solve word problems involving multiplication of a fraction by a whole number, e.g., by using visual fraction models and equations to represent the problem. For example, if each person at a party will eat 3/8 of a pound of roast beef, and there will be 5 people at the party, how many pounds of roast beef will be needed? Between what two whole numbers does your answer lie? 5. Express a fraction with denominator 10 as an equivalent fraction with denominator 100, and use this technique to add two fractions with respective denominators 10 and 100.4 For example, express 3/10 as 30/100, and add 3/10 + 4/100 = 34/100. 7. Compare two decimals to hundredths by reasoning about their size. Recognize that comparisons are valid only when the two decimals refer to the same whole. Record the results of comparisons with the symbols >, =, or <, and justify the conclusions, e.g., by using a visual model. 1. Know relative sizes of measurement units within one system of units including km, m, cm; kg, g; lb, oz.; l, ml; hr, min, sec. Within a single system of measurement, express measurements in a larger unit in terms of a smaller unit. Record measurement equivalents in a two-column table. For example, know that 1 ft is 12 times as long as 1 in. Express the length of a 4 ft snake as 48 in. Generate a conversion table for feet and inches listing the number pairs (1, 12), (2, 24), (3, 36), ... 2. Use the four operations to solve word problems involving distances, intervals of time, liquid volumes, masses of objects, and money, including problems involving simple fractions or decimals, and problems that require expressing measurements given in a larger unit in terms of a smaller unit. Represent measurement quantities using diagrams such as number line diagrams that feature a measurement scale. 3. Apply the area and perimeter formulas for rectangles in real world and mathematical problems. For example, find the width of a rectangular room given the area of the flooring and the length, by viewing the area formula as a multiplication equation with an unknown factor. 4. Make a line plot to display a data set of measurements in fractions of a unit (1/2, 1/4, 1/8). Solve problems involving addition and subtraction of fractions by using information presented in line plots. For example, from a line plot find and interpret the difference in length between the longest and shortest specimens in an insect collection. - An angle is measured with reference to a circle with its center at the common endpoint of the rays, by considering the fraction of the circular arc between the points where the two rays intersect the circle. An angle that turns through 1/360 of a circle is called a “one-degree angle,” and can be used to measure angles. - An angle that turns through n one-degree angles is said to have an angle measure of n degrees. 7. Recognize angle measure as additive. When an angle is decomposed into non-overlapping parts, the angle measure of the whole is the sum of the angle measures of the parts. Solve addition and subtraction problems to find unknown angles on a diagram in real world and mathematical problems, e.g., by using an equation with a symbol for the unknown angle measure. 2. Classify two-dimensional figures based on the presence or absence of parallel or perpendicular lines, or the presence or absence of angles of a specified size. Recognize right triangles as a category, and identify right triangles. 3. 
Recognize a line of symmetry for a two-dimensional figure as a line across the figure such that the figure can be folded along the line into matching parts. Identify line-symmetric figures and draw lines of symmetry. These materials have been produced by and for the teachers of the State of Utah. Copies of these materials may be freely reproduced for teacher and classroom use. When distributing these materials, credit should be given to Utah State Office of Education. These materials may not be published, in whole or part, or in any other format, without the written permission of the Utah State Office of Education, 250 East 500 South, PO Box 144200, Salt Lake City, Utah 84114-4200.
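Several of the computations these standards describe (the feet-to-inches conversion table, equivalent fractions, and the "Add 3" pattern) are easy to demonstrate. The short Python sketch below is purely illustrative and is not part of the Utah core document.

from fractions import Fraction

# Feet-to-inches conversion table described in the measurement standard:
# (1, 12), (2, 24), (3, 36), ...
conversion_table = [(feet, feet * 12) for feet in range(1, 6)]
print(conversion_table)            # [(1, 12), (2, 24), (3, 36), (4, 48), (5, 60)]

# Equivalent fractions: a/b is equivalent to (n*a)/(n*b) for any nonzero n.
a, b, n = 1, 4, 3
print(Fraction(a, b) == Fraction(n * a, n * b))   # True: 1/4 == 3/12

# The "Add 3" pattern starting at 1: the terms alternate odd, even, odd, even, ...
terms = [1 + 3 * k for k in range(8)]
print(terms)                       # [1, 4, 7, 10, 13, 16, 19, 22]
print([t % 2 for t in terms])      # parities alternate: [1, 0, 1, 0, ...]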
http://www.uen.org/core/core.do?courseNum=5140
13
53
|α, β & γ Radiation||Alpha Decay||Beta Decay| |Gamma Decay||Spontaneous Fission||Neutron-Rich Versus Neutron-Poor Nuclides| |Binding Energy Calculations||The Kinetics of Radioactive Decay||Dating By Radioactive Decay| Early studies of radioactivity indicated that three different kinds of radiation were emitted, symbolized by the first three letters of the Greek alphabet: α, β, and γ. With time, it became apparent that this classification scheme was much too simple. The emission of a negatively charged β- particle, for example, is only one example of a family of radioactive transformations known as β-decay. A fourth category, known as spontaneous fission, also had to be added to describe the process by which certain radioactive nuclides decompose into fragments of different weight. Alpha decay is usually restricted to the heavier elements in the periodic table. (Only a handful of nuclides with atomic numbers less than 83 emit an α-particle.) The product of α-decay is easy to predict if we assume that both mass and charge are conserved in nuclear reactions. Alpha decay of the 238U "parent" nuclide, for example, produces 234Th as the "daughter" nuclide. The sum of the mass numbers of the products (234 + 4) is equal to the mass number of the parent nuclide (238), and the sum of the charges on the products (90 + 2) is equal to the charge on the parent nuclide. There are three different modes of beta decay: Electron (β-) emission is literally the process in which an electron is ejected or emitted from the nucleus. When this happens, the charge on the nucleus increases by one. Electron (β-) emitters are found throughout the periodic table, from the lightest elements (3H) to the heaviest (255Es). The product of β- emission can be predicted by assuming that both mass number and charge are conserved in nuclear reactions. If 40K is a β- emitter, for example, the product of this reaction must be 40Ca. Once again the sum of the mass numbers of the products is equal to the mass number of the parent nuclide and the sum of the charge on the products is equal to the charge on the parent nuclide. Nuclei can also decay by capturing one of the electrons that surround the nucleus. Electron capture leads to a decrease of one in the charge on the nucleus. The energy given off in this reaction is carried by an x-ray photon, which is represented by the symbol hν, where h is Planck's constant and ν is the frequency of the x-ray. The product of this reaction can be predicted, once again, by assuming that mass and charge are conserved. The electron captured by the nucleus in this reaction is usually a 1s electron because electrons in this orbital are the closest to the nucleus. A third form of beta decay is called positron (β+) emission. The positron is the antimatter equivalent of an electron. It has the same mass as an electron, but the opposite charge. Positron (β+) decay produces a daughter nuclide with one less positive charge on the nucleus than the parent. Positrons have a very short life-time. They rapidly lose their kinetic energy as they pass through matter. As soon as they come to rest, they combine with an electron to form two γ-ray photons in a matter-antimatter annihilation reaction. Thus, although it is theoretically possible to observe a fourth mode of beta decay corresponding to the capture of a positron, this reaction does not occur in nature.
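The bookkeeping described above (conserve the mass number and the charge) is simple enough to automate. The following minimal Python sketch is illustrative only; the daughter function and the mode names are invented for this example and are not part of the textbook.

# Daughter (mass number A, atomic number Z) for each decay mode,
# using only the conservation rules stated above.
def daughter(A, Z, mode):
    if mode == "alpha":                            # emit a 4He nucleus: A - 4, Z - 2
        return A - 4, Z - 2
    if mode == "beta-minus":                       # neutron -> proton + electron: Z + 1
        return A, Z + 1
    if mode in ("electron-capture", "beta-plus"):  # proton -> neutron: Z - 1
        return A, Z - 1
    if mode == "gamma":                            # photon only: A and Z unchanged
        return A, Z
    raise ValueError("unknown decay mode")

# Examples quoted in the text:
print(daughter(238, 92, "alpha"))        # (234, 90) -> 234Th
print(daughter(40, 19, "beta-minus"))    # (40, 20)  -> 40Ca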
Note that in all three forms of β-decay for the 40K nuclide the mass numbers of the parent and daughter nuclides are the same for electron emission, electron capture, and positron emission. All three forms of β-decay therefore interconvert isobars. The daughter nuclides produced by α-decay or β-decay are often obtained in an excited state. The excess energy associated with this excited state is released when the nucleus emits a photon in the γ-ray portion of the electromagnetic spectrum. Most of the time, the γ-ray is emitted within 10⁻¹² seconds after the α-particle or β-particle. In some cases, gamma decay is delayed, and a short-lived, or metastable, nuclide is formed, which is identified by a small letter m written after the mass number. 60mCo, for example, is produced by the electron emission of 60Fe. The metastable 60mCo nuclide has a half-life of 10.5 minutes. Since electromagnetic radiation carries neither charge nor mass, the product of γ-ray emission by 60mCo is 60Co. Nuclides with atomic numbers of 90 or more undergo a form of radioactive decay known as spontaneous fission in which the parent nucleus splits into a pair of smaller nuclei. The reaction is usually accompanied by the ejection of one or more neutrons. For all but the very heaviest isotopes, spontaneous fission is a very slow reaction. Spontaneous fission of 238U, for example, is almost two million times slower than the rate at which this nuclide undergoes α-decay. |Practice Problem 3: Predict the products of the following nuclear reactions: (a) electron emission by 14C (b) positron emission by 8B (c) electron capture by 125I (d) alpha emission by 210Rn (e) gamma-ray emission by 56mNi In 1934 Enrico Fermi proposed a theory that explained the three forms of beta decay. He argued that a neutron could decay to form a proton by emitting an electron. A proton, on the other hand, could be transformed into a neutron by two pathways. It can capture an electron or it can emit a positron. Electron emission therefore leads to an increase in the atomic number of the nucleus. Both electron capture and positron emission, on the other hand, result in a decrease in the atomic number of the nucleus. A plot of the number of neutrons versus the number of protons for all of the stable naturally occurring isotopes is shown in the figure below. Several conclusions can be drawn from this plot. |A graph of the number of neutrons versus the number of protons for all stable naturally occurring nuclei. Nuclei that lie to the right of this band of stability are neutron poor; nuclei to the left of the band are neutron-rich. The solid line represents a neutron to proton ratio of 1:1.| - The stable nuclides lie in a very narrow band of neutron-to-proton ratios. - The ratio of neutrons to protons in stable nuclides gradually increases as the number of protons in the nucleus increases. - Light nuclides, such as 12C, contain about the same number of neutrons and protons. Heavy nuclides, such as 238U, contain up to 1.6 times as many neutrons as protons. - There are no stable nuclides with atomic numbers larger than 83. - This narrow band of stable nuclei is surrounded by a sea of instability. - Nuclei that lie above this line have too many neutrons and are therefore neutron-rich. - Nuclei that lie below this line don't have enough neutrons and are therefore neutron-poor. The most likely mode of decay for a neutron-rich nucleus is one that converts a neutron into a proton.
Every neutron-rich radioactive isotope with an atomic number smaller than 83 decays by electron (β-) emission. 14C, 32P, and 35S, for example, are all neutron-rich nuclei that decay by the emission of an electron. Neutron-poor nuclides decay by modes that convert a proton into a neutron. Neutron-poor nuclides with atomic numbers less than 83 tend to decay by either electron capture or positron emission. Many of these nuclides decay by both routes, but positron emission is more often observed in the lighter nuclides, such as 22Na. Electron capture is more common among heavier nuclides, such as 125I, because the 1s electrons are held closer to the nucleus of an atom as the charge on the nucleus increases. A third mode of decay is observed in neutron-poor nuclides that have atomic numbers larger than 83. Although it is not obvious at first, α-decay increases the ratio of neutrons to protons. Consider what happens during the α-decay of 238U, for example. The parent nuclide (238U) in this reaction has 92 protons and 146 neutrons, which means that the neutron-to-proton ratio is 1.587. The daughter nuclide (234Th) has 90 protons and 144 neutrons, so its neutron-to-proton ratio is 1.600. The daughter nuclide is therefore slightly less likely to be neutron-poor, as shown in the figure below. |Practice Problem 4: Predict the most likely modes of decay and the products of decay of the following nuclides: (a) 17F (b) 105Ag (c) 185Ta We should be able to predict the mass of an atom from the masses of the subatomic particles it contains. A helium atom, for example, contains two protons, two neutrons, and two electrons. The mass of a helium atom should be 4.0329802 amu. |2(1.0072765) amu||=||2.0145530 amu| |2(1.0086650) amu||=||2.0173300 amu| |2(0.0005486) amu||=||0.0010972 amu| |Total mass||=||4.0329802 amu| When the mass of a helium atom is measured, we find that the experimental value is smaller than the predicted mass by 0.0303769 amu. |Predicted mass||=||4.0329802 amu| |Observed mass||=||4.0026033 amu| |Mass defect||=||0.0303769 amu| The difference between the mass of an atom and the sum of the masses of its protons, neutrons, and electrons is called the mass defect. The mass defect of an atom reflects the stability of the nucleus. It is equal to the energy released when the nucleus is formed from its protons and neutrons. The mass defect is therefore also known as the binding energy of the nucleus. The binding energy serves the same function for nuclear reactions as ΔH for a chemical reaction. It measures the difference between the stability of the products of the reaction and the starting materials. The larger the binding energy, the more stable the nucleus. The binding energy can also be viewed as the amount of energy it would take to rip the nucleus apart to form isolated neutrons and protons. It is therefore literally the energy that binds together the neutrons and protons in the nucleus. The binding energy of a nuclide can be calculated from its mass defect with Einstein's equation that relates mass and energy. E = mc² We found the mass defect of He to be 0.0303769 amu. To obtain the binding energy in units of joules, we must convert the mass defect from atomic mass units to kilograms. Multiplying the mass defect in kilograms by the square of the speed of light in units of meters per second gives a binding energy for a single helium atom of 4.53358 × 10⁻¹² joules.
Multiplying the result of this calculation by the number of atoms in a mole gives a binding energy for helium of 2.730 × 10¹² joules per mole, or 2.730 billion kilojoules per mole. This calculation helps us understand the fascination of nuclear reactions. The energy released when natural gas is burned is about 800 kJ/mol. The synthesis of a mole of helium releases 3.4 million times as much energy. Since most nuclear reactions are carried out on very small samples of material, the mole is not a reasonable basis of measurement. Binding energies are usually expressed in units of electron volts (eV) or million electron volts (MeV) per atom. The binding energy of helium is 28.3 × 10⁶ eV/atom or 28.3 MeV/atom. Calculations of the binding energy can be simplified by using the following conversion factor between the mass defect in atomic mass units and the binding energy in million electron volts. 1 amu = 931.5016 MeV |Practice Problem 5: Calculate the binding energy of 235U if the mass of this nuclide is 235.0349 amu. Binding energies gradually increase with atomic number, although they tend to level off near the end of the periodic table. A more useful quantity is obtained by dividing the binding energy for a nuclide by the total number of protons and neutrons it contains. This quantity is known as the binding energy per nucleon. The binding energy per nucleon ranges from about 7.5 to 8.8 MeV for most nuclei, as shown in the figure below. It reaches a maximum, however, at an atomic mass of about 60 amu. The largest binding energy per nucleon is observed for 56Fe, which is the most stable nuclide in the periodic table. The graph of binding energy per nucleon versus atomic mass explains why energy is released when relatively small nuclei combine to form larger nuclei in fusion reactions. It also explains why energy is released when relatively heavy nuclei split apart in fission (literally, "to split or cleave") reactions. There are a number of small irregularities in the binding energy curve at the low end of the mass spectrum, as shown in the figure below. The 4He nucleus, for example, is much more stable than its nearest neighbors. The unusual stability of the 4He nucleus explains why α-particle decay is usually much faster than the spontaneous fission of a nuclide into two large fragments. Radioactive nuclei decay by first-order kinetics. The rate of radioactive decay is therefore the product of a rate constant (k) times the number of atoms of the isotope in the sample (N). Rate = kN The rate of radioactive decay doesn't depend on the chemical state of the isotope. The rate of decay of 238U, for example, is exactly the same in uranium metal and uranium hexafluoride, or any other compound of this element. The rate at which a radioactive isotope decays is called the activity of the isotope. The most common unit of activity is the curie (Ci), which was originally defined as the number of disintegrations per second in 1 gram of 226Ra. The curie is now defined as the amount of radioactive isotope necessary to achieve an activity of 3.700 × 10¹⁰ disintegrations per second. |Practice Problem 6: The most abundant isotope of uranium is 238U; 99.276% of the atoms in a sample of uranium are 238U. Calculate the activity of the 238U in 1 L of a 1.00 M solution of the uranyl ion, UO2²⁺. Assume that the rate constant for the decay of this isotope is 4.87 × 10⁻¹⁸ disintegrations per second.
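The arithmetic in this section is easy to reproduce. The following Python sketch is illustrative only (the constants are rounded values I have supplied, not taken from the textbook); it recomputes the helium mass defect and binding energy, and then the Rate = kN activity set up in Practice Problem 6.

# Physical constants (commonly tabulated, rounded values)
AMU_TO_KG = 1.66054e-27      # kg per atomic mass unit
C = 2.99792458e8             # speed of light, m/s
AVOGADRO = 6.022e23          # atoms per mole
J_PER_MEV = 1.602177e-13     # joules per MeV

# Mass defect of 4He from the particle masses quoted above (amu)
predicted = 2 * 1.0072765 + 2 * 1.0086650 + 2 * 0.0005486
observed = 4.0026033
mass_defect = predicted - observed            # about 0.0303769 amu

# Binding energy via E = mc^2
E_joules = mass_defect * AMU_TO_KG * C**2     # about 4.53e-12 J per atom
E_per_mole = E_joules * AVOGADRO              # about 2.73e12 J/mol
E_MeV = E_joules / J_PER_MEV                  # about 28.3 MeV per atom
print(mass_defect, E_joules, E_per_mole, E_MeV)

# First-order activity, Rate = k * N, with the numbers from Practice Problem 6
k = 4.87e-18                                  # per second
N = 0.99276 * 1.00 * AVOGADRO                 # 238U atoms in 1 L of 1.00 M uranyl ion
print(k * N)                                  # roughly 2.9e6 disintegrations per second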
The relative rates at which radioactive nuclei decay can be expressed in terms of either the rate constants for the decay or the half-lives of the nuclei. We can conclude that 14C decays more rapidly than 238U, for example, by noting that the rate constant for the decay of 14C is much larger than that for 238U. |14C:||k = 1.210 × 10⁻⁴ y⁻¹| |238U:||k = 1.54 × 10⁻¹⁰ y⁻¹| We can reach the same conclusion by noting that the half-life for the decay of 14C is much shorter than that for 238U. |14C:||t1/2 = 5730 y| |238U:||t1/2 = 4.51 × 10⁹ y| The half-life for the decay of a radioactive nuclide is the length of time it takes for exactly half of the nuclei in the sample to decay. In our discussion of the kinetics of chemical reactions, we concluded that the half-life of a first-order process is inversely proportional to the rate constant for this process. |Practice Problem 7: Calculate the fraction of 14C that remains in a sample after eight half-lives. The half-life of a nuclide can be used to estimate the amount of a radioactive isotope left after a given number of half-lives. For more complex calculations, it is easier to convert the half-life of the nuclide into a rate constant and then use the integrated form of the first-order rate law described in the kinetics section. |Practice Problem 8: How long would it take for a sample of 222Rn that weighs 0.750 g to decay to 0.100 g? Assume a half-life for 222Rn of 3.823 days. The earth is constantly bombarded by cosmic rays emitted by the sun. The total energy received in the form of cosmic rays is small, no more than the energy received by the planet from starlight. But the energy of a single cosmic ray is very large, on the order of several billion electron volts (0.200 million kJ/mol). These highly energetic rays react with atoms in the atmosphere to produce neutrons that then react with nitrogen atoms in the atmosphere to produce 14C. The 14C formed in this reaction is a neutron-rich nuclide that decays by electron emission with a half-life of 5730 years. Just after World War II, Willard F. Libby proposed a way to use these reactions to estimate the age of carbon-containing substances. The 14C dating technique for which Libby received the Nobel Prize was based on the following assumptions. - 14C is produced in the atmosphere at a more or less constant rate. - Carbon atoms circulate between the atmosphere, the oceans, and living organisms at a rate very much faster than they decay. As a result, there is a constant concentration of 14C in all living things. - After death, organisms no longer pick up 14C. Thus, by comparing the activity of a sample with the activity of living tissue we can estimate how long it has been since the organism died. The natural abundance of 14C is about 1 part in 10¹² and the average activity of living tissue is 15.3 disintegrations per minute per gram of carbon. Samples used for 14C dating can include charcoal, wood, cloth, paper, sea shells, limestone, flesh, hair, soil, peat, and bone. Since most iron samples also contain carbon, it is possible to estimate the time since iron was last fired by analyzing for 14C. |Practice Problem 9: The skin, bones and clothing of an adult female mummy discovered in Chimney Cave, Lake Winnemucca, Nevada, were dated by radiocarbon analysis. How old is this mummy if the sample retains 73.9% of the activity of living tissue? We now know that one of Libby's assumptions is questionable: The amount of 14C in the atmosphere hasn't been constant with time.
Because of changes in solar activity and the earth's magnetic field, it has varied by as much as 5%. More recently, contamination from the burning of fossil fuels and the testing of nuclear weapons has caused significant changes in the amount of radioactive carbon in the atmosphere. Radiocarbon dates are therefore reported in years before the present era (B.P.). By convention, the present era is assumed to begin in 1950, when 14C dating was introduced. Studies of bristlecone pines allow us to correct for changes in the abundance of 14C with time. These remarkable trees, which grow in the White Mountains of California, can live for up to five thousand years. By studying the 14C activity of samples taken from the annual growth rings in these trees, researchers have developed a calibration curve for 14C dates from the present back to 5145 B.C. After roughly 45,000 years (eight half-lives), a sample retains only 0.4% of the 14C activity of living tissue. At that point it becomes too old to date by radiocarbon techniques. Other radioactive isotopes can be used to date rocks, soils, or archaeological objects that are much older. Potassium-argon dating, for example, has been used to date samples up to 4.3 billion years old. Naturally occurring potassium contains 0.0118% by weight of the radioactive 40K isotope. This isotope decays to 40Ar with a half-life of 1.3 billion years. The 40Ar produced after a rock crystallizes is trapped in the crystal lattice. It can be released, however, when the rock is melted at temperatures up to 2000 °C. By measuring the amount of 40Ar released when the rock is melted and comparing it with the amount of potassium in the sample, the time since the rock crystallized can be determined.
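To make the dating arithmetic above concrete, here is a small, hedged Python sketch using the simple constant-14C model described in the text (no calibration corrections). The 73.9% figure is taken from Practice Problem 9 purely as an example input; the function names are mine.

import math

HALF_LIFE_C14 = 5730.0                      # years
k = math.log(2) / HALF_LIFE_C14             # first-order rate constant, 1/years

def age_from_activity_ratio(ratio):
    """Years since death, given (sample activity) / (living-tissue activity)."""
    return math.log(1.0 / ratio) / k        # integrated first-order rate law

def fraction_remaining(half_lives):
    """Fraction of the original 14C left after a whole number of half-lives."""
    return 0.5 ** half_lives

print(age_from_activity_ratio(0.739))       # roughly 2,500 years
print(fraction_remaining(8))                # about 0.0039, i.e. the 0.4% quoted above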
http://chemed.chem.purdue.edu/genchem/topicreview/bp/ch23/modes.php
13
60
Uniform motion is motion at a constant speed in a straight line. Uniform motion can be described by a few simple equations. The distance s covered by a body moving with velocity v during a time t is given by s = vt. If the velocity is changing, either in direction or magnitude, it is called accelerated motion (see acceleration). Uniformly accelerated motion is motion during which the acceleration remains constant. The average velocity during this time is one half the sum of the initial and final velocities. If a is the acceleration, v0 the original velocity, and vf the final velocity, then the final velocity is given by vf = v0 + at. The distance covered during this time is s = v0t + (1/2)at². In the simplest circular motion the speed is constant but the direction of motion is changing continuously. The acceleration causing this change, known as centripetal acceleration because it is always directed toward the center of the circular path, is given by a = v²/r, where v is the speed and r is the radius of the circle. The relationship between force and motion was expressed by Sir Isaac Newton in his three laws of motion: (1) a body at rest tends to remain at rest or a body in motion tends to remain in motion at a constant speed in a straight line unless acted on by an outside force, i.e., if the net unbalanced force is zero, then the acceleration is zero; (2) the acceleration a of a mass m by an unbalanced force F is directly proportional to the force and inversely proportional to the mass, or a = F/m; (3) for every action there is an equal and opposite reaction. The third law implies that the total momentum of a system of bodies not acted on by an external force remains constant (see conservation laws, in physics). Newton's laws of motion, together with his law of gravitation, provide a satisfactory basis for the explanation of motion of everyday macroscopic objects under everyday conditions. However, when applied to extremely high speeds or extremely small objects, Newton's laws break down. Motion at speeds approaching the speed of light must be described by the theory of relativity. The equations derived from the theory of relativity reduce to Newton's when the speed of the object being described is very small compared to that of light. When the motions of extremely small objects (atoms and elementary particles) are described, the wavelike properties of matter must be taken into account (see quantum theory). The theory of relativity also resolves the question of absolute motion. When one speaks of an object as being in motion, such motion is usually in reference to another object which is considered at rest. Although a person sitting in a car is at rest with respect to the car, both are in motion with respect to the earth, and the earth is in motion with respect to the sun and the center of the galaxy. All these motions are relative. It was once thought that there existed a light-carrying medium, known as the luminiferous ether, which was in a state of absolute rest. Any object in motion with respect to this hypothetical frame of reference would be in absolute motion. The theory of relativity showed, however, that no such medium was necessary and that all motion could be treated as relative. See J. C. Maxwell, Matter and Motion (1877, repr. 1952). Motion of a particle moving at a constant speed on a circle. Though the magnitude of the velocity of such an object may be constant, the object is constantly accelerating because its direction is constantly changing.
At any given instant its direction is perpendicular to a radius of the circle drawn to the point of location of the object on the circle. The acceleration is strictly a change in direction and is a result of a force directed toward the centre of the circle. This centripetal force causes centripetal acceleration. Learn more about uniform circular motion with a free trial on Britannica.com. Analysis of the time spent in going through the different motions of a job or series of jobs in the evaluation of industrial performance. Such studies were first instituted in offices and factories in the U.S. in the early 20th century. They were widely adopted as a means of improving work methods by subdividing the different operations of a job into measurable elements, and they were in turn used as aids in standardization of work and in checking the efficiency of workers and equipment. Learn more about time-and-motion study with a free trial on Britannica.com. Repetitive back-and-forth movement through a central, or equilibrium, position in which the maximum displacement on one side is equal to the maximum displacement on the other. Each complete vibration takes the same time, the period; the reciprocal of the period is the frequency of vibration. The force that causes the motion is always directed toward the equilibrium position and is directly proportional to the distance from it. A pendulum displays simple harmonic motion; other examples include the electrons in a wire carrying alternating current and the vibrating particles of a medium carrying sound waves. Learn more about simple harmonic motion with a free trial on Britannica.com. In astronomy, the actual or apparent motion of a body in a direction opposite to that of the predominant (direct or prograde) motions of similar bodies. Observationally and historically, retrograde motion refers to the apparent reversal of the planets' motion through the stars for several months in each synodic period. This required a complex explanation in Earth-centred models of the universe (see Ptolemy) but was naturally explained in heliocentric models (see Copernican system) by the apparent motion as Earth passed by a planet in its orbit. It is now known that nearly all bodies in the solar system revolve and rotate in the same counterclockwise direction as viewed from a position in space above Earth's North Pole. This common direction probably arose during the formation of the solar nebula. The relatively few objects with clockwise motions (e.g., the rotation of Venus, Uranus, and Pluto) are also described as retrograde. Learn more about retrograde motion with a free trial on Britannica.com. Apparent motion of a star across the celestial sphere at right angles to the observer's line of sight, generally measured in seconds of arc per year. Any radial motion (toward or away from the observer) is not included. Edmond Halley was the first to detect proper motions; the largest known is that of Barnard's star, about 10 seconds yearly. Learn more about proper motion with a free trial on Britannica.com. Motion that is repeated in equal intervals of time. The time of each interval is the period. Examples of periodic motion include a rocking chair, a bouncing ball, a vibrating guitar string, a swinging pendulum, and a water wave. Seealso simple harmonic motion. Learn more about periodic motion with a free trial on Britannica.com. 
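The kinematics relations quoted earlier in this entry for uniform and uniformly accelerated motion (s = vt, vf = v0 + at, s = v0t + (1/2)at²) and the centripetal acceleration formula a = v²/r are easy to check numerically. The short Python sketch below is purely illustrative; the numbers are invented and none of this code comes from the encyclopedia entries.

def uniform_distance(v, t):
    """Distance covered at constant speed: s = v * t."""
    return v * t

def final_velocity(v0, a, t):
    """Uniformly accelerated motion: vf = v0 + a * t."""
    return v0 + a * t

def distance_accelerated(v0, a, t):
    """Uniformly accelerated motion: s = v0*t + (1/2)*a*t**2."""
    return v0 * t + 0.5 * a * t ** 2

def centripetal_acceleration(v, r):
    """Circular motion at constant speed: a = v**2 / r."""
    return v ** 2 / r

# Hypothetical example: start at 2 m/s and accelerate at 3 m/s^2 for 4 s.
v0, a, t = 2.0, 3.0, 4.0
vf = final_velocity(v0, a, t)            # 14 m/s
s = distance_accelerated(v0, a, t)       # 32 m
# The average velocity is one half the sum of the initial and final velocities:
assert abs(s - 0.5 * (v0 + vf) * t) < 1e-9
print(vf, s, centripetal_acceleration(5.0, 2.0))   # 14.0 32.0 12.5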
Mathematical formula that describes the motion of a body relative to a given frame of reference, in terms of the position, velocity, or acceleration of the body. In classical mechanics, the basic equation of motion is Newton's second law (see Newton's laws of motion), which relates the force on a body to its mass and acceleration. When the force is described in terms of the time interval over which it is applied, the velocity and position of the body can be derived. Other equations of motion include the position-time equation, the velocity-time equation, and the acceleration-time equation of a moving body. Learn more about motion, equation of with a free trial on Britannica.com. Sickness caused by contradiction between external data from the eyes and internal cues from the balance centre in the inner ear. For example, in seasickness the inner ear senses the ship's motion, but the eyes see the still cabin. This stimulates stress hormones and accelerates stomach muscle contraction, leading to dizziness, pallor, cold sweat, and nausea and vomiting. Minimizing changes of speed and direction may help, as may reclining, not turning the head, closing the eyes, or focusing on distant objects. Drugs can prevent or relieve motion sickness but may have side effects. Pressing an acupuncture point on the wrist helps some people. Learn more about motion sickness with a free trial on Britannica.com. Series of still photographs on film, projected in rapid succession onto a screen. Motion pictures are filmed with a movie camera, which makes rapid exposures of people or objects in motion, and shown with a movie projector, which reproduces sound synchronized with the images. The principal inventors of motion-picture machines were Thomas Alva Edison in the U.S. and the Lumière brothers in France. Film production was centred in France in the early 20th century, but by 1920 the U.S. had become dominant. As directors and stars moved to Hollywood, movie studios expanded, reaching their zenith in the 1930s and '40s, when they also typically owned extensive theatre chains. Moviemaking was marked by a new internationalism in the 1950s and '60s, which also saw the rise of the independent filmmaker. The sophistication of special effects increased greatly from the 1970s. The U.S. film industry, with its immense technical resources, has continued to dominate the world market to the present day. Seealso Columbia Pictures; MGM; Paramount Communications; RKO; United Artists; Warner Brothers. Learn more about motion picture with a free trial on Britannica.com. Change in position of a body relative to another body or with respect to a frame of reference or coordinate system. Motion occurs along a definite path, the nature of which determines the character of the motion. Translational motion occurs if all points in a body have similar paths relative to another body. Rotational motion occurs when any line on a body changes its orientation relative to a line on another body. Motion relative to a moving body, such as motion on a moving train, is called relative motion. Indeed, all motions are relative, but motions relative to the Earth or to any body fixed to the Earth are often assumed to be absolute, as the effects of the Earth's motion are usually negligible. Seealso Brownian motion; periodic motion; simple harmonic motion; simple motion; uniform circular motion. Learn more about motion with a free trial on Britannica.com. Relations between the forces acting on a body and the motion of the body, formulated by Isaac Newton. 
The laws describe only the motion of a body as a whole and are valid only for motions relative to a reference frame. Usually, the reference frame is the Earth. The first law, also called the law of inertia, states that if a body is at rest or moving at constant speed in a straight line, it will continue to do so unless it is acted upon by a force. The second law states that the force F acting on a body is equal to the mass m of the body times its acceleration a, or F = ma. The third law, also called the action-reaction law, states that the actions of two bodies on each other are always equal in magnitude and opposite in direction. Learn more about Newton's laws of motion with a free trial on Britannica.com. Any of various physical phenomena in which some quantity is constantly undergoing small, random fluctuations. It was named for Robert Brown, who was investigating the fertilization process of flowers in 1827 when he noticed a “rapid oscillatory motion” of microscopic particles within pollen grains suspended in water. He later discovered that similar motions could be seen in smoke or dust particles suspended in air and other fluids. The idea that molecules of a fluid are constantly in motion is a key part of the kinetic theory of gases, developed by James Clerk Maxwell, Ludwig Boltzmann, and Rudolf Clausius (1822–88) to explain heat phenomena. Learn more about Brownian motion with a free trial on Britannica.com. The artist known as Little Eva was actually Carole King's babysitter, having been introduced to King and husband Gerry Goffin by The Cookies, a local girl group who would also record for the songwriters. Apparently the dance came before the lyrics; Eva was bopping to some music that King was playing at home, and a dance with lyrics was soon born. It was the first release on the new Dimension Records label, whose girl-group hits were mostly penned and produced by Goffin and King. The Loco-Motion was quickly recorded by British girl group The Vernons Girls and entered the chart the same week as the Little Eva version. The Vernons Girls' version stalled at number 47 in the UK, while the Little Eva version climbed all the way to number 2 on the UK charts. It re-entered the chart some ten years later and almost became a top ten again, peaking at number 11. The Little Eva version of the song was featured in the 2006 David Lynch film Inland Empire in a sequence involving the recurrent characters of the girl-friends/prostitutes performing the dance routine. The scene has been noted as being particularly surreal, even by the standards of David Lynch movies. Serbian new wave band Električni Orgazam recorded an album of covers, Les Chansones Populaires, in 1983. The first single off the release was "Locomotion". Ljubomir Đukić provided the lead vocals. Having left the band, Đukić made a guest appearance on the band's first live album Braćo i sestre ("Brothers and sisters"), and that is the only live version of the song the band released. A different version of the song was released by Kylie Minogue as her debut single on July 27, 1987 in Australia under the title "Locomotion". After an impromptu performance of the song at an Australian rules football charity event with the cast of the Australian soap opera Neighbours, Minogue was signed to a record deal by Mushroom Records to release the song as a single. The song was a hit in Australia, reaching number one and remaining there for an amazing seven weeks.
The success of the song in her home country led to her signing a record deal with PWL Records in London and to working with the hit producing team, Stock, Aitken and Waterman. The music video for "Locomotion" was filmed at Essendon Airport and the ABC studios in Melbourne, Australia. The video for "The Loco-Motion" was created out of footage from the Australian music video. At the end of 1988, the song was nominated for Best International Single at the Canadian Music Industry Awards. In late 1988, Minogue travelled to the United States to promote "The Loco-Motion", where she did many interviews and performances on American television. "The Loco-Motion" debuted at number eighty on the U.S. Billboard Hot 100 and later climbed to number three for two weeks. The song was Minogue's second single to chart in the U.S., but her first to reach the top ten. It remains her biggest hit in the United States. She would not even reach the top ten again until 2002 with the release of "Can't Get You Out Of My Head", which reached number seven on the chart. In Canada, the song reached number one. In Australia, the song was released on July 27, 1987 and was a huge hit, reaching number one on the AMR singles chart and remaining there for seven weeks. The song set the record as the biggest Australian single of the decade. Throughout Europe and Asia the song also performed well on the music charts, reaching number one in Belgium, Finland, Ireland, Israel, Japan, and South Africa. The flip-side "I'll Still Be Loving You" is a popular song, and one of the few not released as a single from her huge-selling debut album Kylie. |Australian ARIA Singles Chart||1| |Canada Singles Chart||1| |Eurochart Hot 100||1| |South Africa Singles Chart||1| |Switzerland Singles Chart||2| |UK Singles Chart||2| |Germany Singles Chart||3| |U.S. Billboard Hot 100||3| |France Singles Chart||5| |Belgian Singles Chart||1| |Finland Singles Chart||1| |Hong Kong Singles Chart||1| |U.S. Hot Dance Music/Club Play||12| |U.S. Hot Dance Music/Maxi-Singles Sales||4| |Norway Singles Chart||3| |Italian Singles Chart||6| |Japan Singles Chart||1| Israel #1 New Zealand #8 Sweden #10 USA Dance Chart #12
http://www.reference.com/browse/columbia/motion
13
67
You can use formulas and functions in lists or libraries to calculate data in a variety of ways. By adding a calculated column to a list or library, you can create a formula that includes data from other columns and performs functions to calculate dates and times, to perform mathematical equations, or to manipulate text. For example, on a tasks list, you can use a column to calculate the number of days it takes to complete each task, based on the Start Date and Date Completed columns. Note This article describes the basic concepts related to using formulas and functions. For specific information about a particular function, see the article about that function. Formulas are equations that perform calculations on values in a list or library. A formula starts with an equal sign (=). For example, the following formula multiplies 2 by 3 and then adds 5 to the result. You can use a formula in a calculated column and to calculate default values for a column. A formula can contain functions (function: A prewritten formula that takes a value or values, performs an operation, and returns a value or values. Use functions to simplify and shorten formulas on a worksheet, especially those that perform lengthy or complex calculations.), column references, operators (operator: A sign or symbol that specifies the type of calculation to perform within an expression. There are mathematical, comparison, logical, and reference operators.), and constants (constant: A value that is not calculated and, therefore, does not change. For example, the number 210, and the text "Quarterly Earnings" are constants. An expression, or a value resulting from an expression, is not a constant.), as in the following example. ||The PI() function returns the value of pi: 3.141592654. |Reference (or column name) ||[Result] represents the value in the Result column for the current row. ||Numbers or text values entered directly into a formula, such as 2. ||The * (asterisk) operator multiplies, and the ^ (caret) operator raises a number to a power. A formula might use one or more of the elements from the previous table. Here are some examples of formulas (in order of complexity). Simple formulas (such as =128+345) The following formulas contain constants and operators. ||Adds 128 and 345 Formulas that contain column references (such as =[Revenue] >[Cost]) The following formulas refer to other columns in the same list or library. ||Uses the value in the Revenue column. ||10% of the value in the Revenue column. |=[Revenue] > [Cost] ||Returns Yes if the value in the Revenue column is greater than the value in the Cost column. Formulas that call functions (such as =AVERAGE(1, 2, 3, 4, 5)) The following formulas call built-in functions. |=AVERAGE(1, 2, 3, 4, 5) ||Returns the average of a set of values. |=MAX([Q1], [Q2], [Q3], [Q4]) ||Returns the largest value in a set of values. |=IF([Cost]>[Revenue], "Not OK", "OK") ||Returns Not OK if cost is greater than revenue. Else, returns OK. ||Returns the day part of a date. This formula returns the number 15. Formulas with nested functions (such as =SUM(IF([A]>[B], [A]-[B], 10), [C])) The following formulas specify one or more functions as function arguments. |=SUM(IF([A]>[B], [A]-[B], 10), [C]) The IF function returns the difference between the values in columns A and B, or 10. The SUM function adds the return value of the IF function and the value in column C. The PI function returns the number 3.141592654. The DEGREES function converts a value specified in radians to degrees. 
This formula returns the value 180. The FIND function searches for the string BD in Column1 and returns the starting position of the string. It returns an error value if the string is not found. The ISNUMBER function returns Yes if the FIND function returned a numeric value. Else, it returns No. Top of Page Functions are predefined formulas that perform calculations by using specific values, called arguments, in a particular order, or structure. Functions can be used to perform simple or complex calculations. For example, the following instance of the ROUND function rounds off a number in the Cost column to two decimal places. The following vocabulary is helpful when you are learning functions and formulas: Structure The structure of a function begins with an equal sign (=), followed by the function name, an opening parenthesis, the arguments for the function separated by commas, and a closing parenthesis. Function name This is the name of a function that is supported by lists or libraries. Each function takes a specific number of arguments, processes them, and returns a value. Arguments Arguments can be numbers, text, logical values such as True or False, or column references. The argument that you designate must produce a valid value for that argument. Arguments can also be constants, formulas, or other functions. In certain cases, you may need to use a function as one of the arguments of another function. For example, the following formula uses a nested AVERAGE function and compares the result with the sum of two column values. Valid returns When a function is used as an argument, it must return the same type of value that the argument uses. For example, if the argument uses Yes or No, then the nested function must return Yes or No. If it doesn't, the list or library displays a #VALUE! error value. Nesting level limits A formula can contain up to eight levels of nested functions. When Function B is used as an argument in Function A, Function B is a second-level function. In the example above for instance, the SUM function is a second-level function because it is an argument of the AVERAGE function. A function nested within the SUM function would be a third-level function, and so on. - Lists and libraries do not support the RAND and NOW functions. - The TODAY and ME functions are not supported in calculated columns but are supported in the default value setting of a column. Top of Page Using column references in a formula A reference identifies a cell in the current row and indicates to a list or library where to search for the values or data that you want to use in a formula. For example, [Cost] references the value in the Cost column in the current row. If the Cost column has the value of 100 for the current row, then =[Cost]*3 returns 300. With references, you can use the data that is contained in different columns of a list or library in one or more formulas. Columns of the following data types can be referenced in a formula: single line of text, number, currency, date and time, choice, yes/no, and calculated. You use the display name of the column to reference it in a formula. If the name includes a space or a special character, you must enclose the name in square brackets ([ ]). References are not case-sensitive. For example, you can reference the Unit Price column in a formula as [Unit Price] or [unit price]. - You cannot reference a value in a row other than the current row. - You cannot reference a value in another list or library. 
- You cannot reference the ID of a row for a newly inserted row. The ID does not yet exist when the calculation is performed. - You cannot reference another column in a formula that creates a default value for a column. Top of Page Using constants in a formula A constant is a value that is not calculated. For example, the date 10/9/2008, the number 210, and the text "Quarterly Earnings" are all constants. Constants can be of the following data types: - String (Example: =[Last Name] = "Smith") String constants are enclosed in quotation marks and can include up to 255 characters. - Number (Example: =[Cost] >= 29.99) Numeric constants can include decimal places and can be positive or negative. - Date (Example: =[Date] > DATE(2007,7,1)) Date constants require the use of the DATE(year,month,day) function. - Boolean (Example: =IF([Cost]>[Revenue], "Loss", "No Loss") Yes and No are Boolean constants. You can use them in conditional expressions. In the above example, if Cost is greater than Revenue, the IF function returns Yes, and the formula returns the string "Loss". If Cost is equal to or less than Revenue, the function returns No, and the formula returns the string "No Loss". Top of Page Using calculation operators in a formula Operators specify the type of calculation that you want to perform on the elements of a formula. Lists and libraries support three different types of calculation operators: arithmetic, comparison, and text. Use the following arithmetic operators to perform basic mathematical operations such as addition, subtraction, or multiplication; to combine numbers; or to produce numeric results. |+ (plus sign) |– (minus sign) |/ (forward slash) |% (percent sign) You can compare two values with the following operators. When two values are compared by using these operators, the result is a logical value of Yes or No. |= (equal sign) ||Equal to (A=B) |> (greater than sign) ||Greater than (A>B) |< (less than sign) ||Less than (A<B) |>= (greater than or equal to sign) ||Greater than or equal to (A>=B) |<= (less than or equal to sign) ||Less than or equal to (A<=B) |<> (not equal to sign) ||Not equal to (A<>B) Use the ampersand (&) to join, or concatenate, one or more text strings to produce a single piece of text. ||Connects, or concatenates, two values to produce one continuous text value ("North"&"wind") Order in which a list or library performs operations in a formula Formulas calculate values in a specific order. A formula might begin with an equal sign (=). Following the equal sign are the elements to be calculated (the operands), which are separated by calculation operators. Lists and libraries calculate the formula from left to right, according to a specific order for each operator in the formula. If you combine several operators in a single formula, lists and libraries perform the operations in the order shown in the following table. If a formula contains operators with the same precedence — for example, if a formula contains both a multiplication operator and a division operator — lists and libraries evaluate the operators from left to right. ||Negation (as in –1) |* and / ||Multiplication and division |+ and – ||Addition and subtraction ||Concatenation (connects two strings of text) |= < > <= >= <> Use of parentheses To change the order of evaluation, enclose in parentheses the part of the formula that is to be calculated first. For example, the following formula produces 11 because a list or library calculates multiplication before addition. 
The formula multiplies 2 by 3 and then adds 5 to the result. In contrast, if you use parentheses to change the syntax, the list or library adds 5 and 2 together and then multiplies the result by 3 to produce 21. Similarly, in a formula such as =([Cost]+25)/([EC1]+[EC2]), the parentheses around the first part of the formula force the list or library to calculate [Cost]+25 first and then divide the result by the sum of the values in columns EC1 and EC2.
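Lists and libraries evaluate their own formula language, shown above. Purely as an illustration of the same precedence and nesting ideas, here is a short Python sketch; the dictionary of column values is made-up sample data and is not a SharePoint API.

# Made-up sample row, standing in for list columns referenced by a formula.
row = {"Cost": 100, "Revenue": 120, "EC1": 10, "EC2": 15}

# Operator precedence: multiplication binds tighter than addition.
print(5 + 2 * 3)        # 11, like =5+2*3
print((5 + 2) * 3)      # 21, like =(5+2)*3

# Parentheses force [Cost]+25 to be evaluated before the division,
# mirroring =([Cost]+25)/([EC1]+[EC2]).
print((row["Cost"] + 25) / (row["EC1"] + row["EC2"]))   # 5.0

# Nested evaluation, analogous to =IF([Cost]>[Revenue], "Not OK", "OK"):
status = "Not OK" if row["Cost"] > row["Revenue"] else "OK"
print(status)           # OK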
http://office.microsoft.com/en-us/windows-sharepoint-services-help/introduction-to-data-calculations-HA010121588.aspx
13
58
The determinant of a matrix (written |A|) is a single number that depends on the elements of the matrix A. Determinants exist only for square matrices (i.e., ones where the number of rows equals the number of columns). Determinants are a basic building block of linear algebra, and are useful for finding areas and volumes of geometric figures, in Cramer's rule, and in many other areas. If the characteristic polynomial splits into linear factors, then the determinant is equal to the product of the eigenvalues of the matrix, counted by their algebraic multiplicities. A matrix can be used to transform a geometric figure. For example, in the plane, if we have a triangle defined by its vertices (3,3), (5,1), and (1,4), and we wish to transform this triangle into the triangle with vertices (3,-3), (5,-9), and (1,2), we can simply multiply each vertex by a suitable 2x2 transformation matrix. In this transformation, no matter what the shape, position, or area of the initial geometric figure, the final geometric figure will have the same area and orientation. It can be seen that matrix transformations of geometric figures always give resulting figures whose area is proportional to that of the initial figure, and whose orientation is either always the same or always reversed. This ratio is called the determinant of the matrix; it is positive when the orientation is kept, negative when the orientation is reversed, and zero when the final figure always has zero area. This two-dimensional concept is easily generalized to any number of dimensions. In 3D, replace area with volume; in higher dimensions the analogous concept is called hypervolume. The determinant of a matrix is the oriented ratio of the hypervolume of the transformed figure to that of the source figure. How to calculate We need to introduce two notions: the minor and the cofactor of a matrix element. Also, the determinant of a 1x1 matrix equals the sole element of that matrix. - The minor mij of the element aij of an NxN matrix M is the determinant of the (N-1)x(N-1) matrix formed by removing the ith row and jth column from M. - The cofactor Cij equals the minor mij multiplied by (−1)^(i+j). The determinant is then defined to be the sum of the products of the elements of any one row or column with their corresponding cofactors. For the 2x2 matrix with rows (a, b) and (c, d), the determinant is simply ad − bc (for example, using the above rule on the first row). For a general 3x3 matrix we can expand along the first row, so the determinant is the sum of each first-row element multiplied by its cofactor, where each cofactor is a signed 2x2 determinant formed as described above. Properties of determinants The following are some useful properties of determinants. Some are useful computational aids for simplifying the algebra needed to calculate a determinant. The first property is that | M | = | M^T |, where the superscript "T" denotes transposition. Thus, although the following rules refer to the rows of a matrix, they apply equally well to the columns. - The determinant is unchanged by adding a multiple of one row to any other row. - If two rows are interchanged, the sign of the determinant will change. - If a common factor α is factored out from each element of a single row, the determinant is multiplied by that same factor. - If all the elements of a single row are zero (or can be made to be zero using the above rules) then the determinant is zero.
- |AB| = |A| |B|

In practice, one of the most efficient ways of finding the determinant of a large matrix is to add multiples of rows and/or columns until the matrix is in triangular form, such that all the elements above or below the diagonal are zero. The determinant of such a matrix is simply the product of the diagonal elements (use the cofactor expansion discussed above and expand down the first column).
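As a concrete sketch of the cofactor expansion just described, here is a short Python function that expands along the first row, with NumPy used only as an independent check (assumed to be installed). The 2x2 matrix at the end is one matrix that realizes the triangle transformation mentioned above; it is worked out here rather than taken from the source, and its determinant of 1 matches the claim that that transformation preserves area and orientation.

import numpy as np

def det(m):
    # Determinant by cofactor expansion along the first row.
    n = len(m)
    if n == 1:
        return m[0][0]                                    # 1x1 case: the sole element
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]  # drop row 0 and column j
        cofactor = (-1) ** j * det(minor)                 # (-1)^(0+j) times the minor
        total += m[0][j] * cofactor
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det(A), int(round(np.linalg.det(A))))   # -3 -3

# One matrix that sends (3,3)->(3,-3), (5,1)->(5,-9), (1,4)->(1,2):
M = [[1, 0],
     [-2, 1]]
print(det(M))   # 1: area and orientation are preserved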
http://www.conservapedia.com/Determinant
If you have progressed through the tutorial this far, you are now ready to program in 3D. However, 3D programming is not like modeling clay, where you simply move the clay with your hands and everything looks perfect. 3D programming is strictly mathematical, and you must understand the concepts of 3D mathematics before you can effectively program with them. Don't worry, though. It's nothing complex. You won't need any more math than it takes to program in C++, so you should already be far enough along to be able to understand this. This lesson is a theoretical lesson. We will cover the practice involved in the next lesson. In this lesson we will cover coordinate systems and how they apply to Direct3D and creating a 3D scene. Without understanding of the basic math of 3D, 3D programming would be impossible. And I don't mean doing college algebra all over again, but just understanding the concepts of 3D coordinates, how they work and the various things which might get in your way. Of course, before you understand 3D coordinate systems, you need to understand Cartesian Coordinates. The Cartesian Coordinate System might be better recognized if called a 2D coordinate system. In other words, it is a system of locating an exact point on a flat surface. A point is defined as an exact position along an axis. If we wanted to know how far something has gone, we usually give an exact number, as in "Bob walked 12 meters". 12 meters is a distance along a single axis. We say that 0 is our starting point, and as Bob progresses, he moves farther and farther along this axis. This is a 1D coordinate system. 1D Coordinate System When we look at this scenario from the side, as in the picture, we can see that as Bob continues walking toward the right of the screen, his distance travelled increases away from 0. We will call this '0' the origin, as it is where he started from. On the other side of the origin, we would have negative values instead of positive values. However, what if he were then to turn 90 degrees and walk in a different direction? Truthfully, Bob would then be walking along a second axis, and we would diagram his path like this: The Cartesian Coordinate System Now that we have more than one axis, we give ourselves a way to identify them. The horizontal axis, along which Bob walked 12 meters, we will call the x-axis. The vertical axis we will call the y-axis. Of course, this new axis, like the horizontal axis, also has an origin. It is the point where Bob stopped walking sideways and started walking up. Notice that the y-axis origin is also given the value of 0, and increases the farther Bob walks. (go Bob go...) So now we have two axes (the x-axis and the y-axis), and each have their origins. Well, this is what forms our Cartesian Coordinate System. We can now locate any point along this surface (probably the ground in Bob's case). We can state Bob's exact position by saying how far he is off of each axis' origin, so we could say he is at (x, y) or (12, 4), 12 being his position on the x-axis and 4 being his position on the y-axis. These two numbers are called coordinates, and are used to show how far an exact point is from the origin (or the '0' point on both axes). Actually, the 3D Coordinate System is merely an extention to what we have been discussing. If we took Cartesian Coordinates and added a third axis (a z-axis) running perpendicular to both the x and y axes, we would have 3D coordinates. This is illustrated here. 
The 3D Coordinate System Like Cartesian Coordinates, 3D coordinates can be both positive and negative, depending on which direction the point is. However, instead of being written like Cartesian Coordinates, 3D coordinates are written with three numbers, like this: (x, y, z) or (12, 4, 15). This would indicate that Bob was somehow fifteen meters in the air. It could also be written (12, 4, -15). Perhaps this means he's lost in a dungeon somewhere. Now let's cover how 3D coordinates are applied to games and game programming. If a point in a 3D coordinate system represents a position in space, then we can form an array of exact positions which will eventually become a 3D model. Of course, setting so many points would take up a lot of space in memory, so an easier and faster way has been employed. This method is set up using triangles. Triangles, of course, are a very useful shape in just about any mathematical area. They can be formed to measure circles, they can be used to strengthen buildings, and they can be used to create 3D images. The reason we would want to use triangles is because triangles can be positioned to form just about any shape imaginable, as shown in these images: Models Made From Triangles Because of the useful nature of triangles when creating 3D models, Direct3D is designed solely around triangles and combining triangles to make shapes. To build a triangle, we use something called vertices. Vertices is plural for vertex. A vertex is defined as an exact point in 3D space. It is defined by three values, x, y and z. In Direct3D, we add to that a little. We also include various properties of this point. And so we extend the definition to mean "the location and properties of an exact point in 3D space". A triangle is made up of three vertices, each defined in your program in clockwise order. When coded, these three vertices form a flat surface, which can then be rotated, textured, positioned and modified as needed. A Triangle Built From Vertices The triangle shown in the above image is created by three points: x = 0, y = 5, z = 1 x = 5, y = -5, z = 1 x = -5, y = -5, z = 1 You will notice that all the above vertices have a z-value of 1. This is because we aren't talking about a 3D object; we are talking about a triangle, which is a 2D object. We could change the z-values, but it would make no essential difference. To make actual 3D objects, we will need to combine triangles. You can see how triangles are combined in the above diagram. To take a simple example, the cube is simply two triangles placed together to create one side. Each side is made up of identical triangles combined the same way. However, defining the 3D coordinates of every triangle in your game multiple times is more than just tedious. It's ridiculously complex! There's just no need to get that involved (and you'll see what I mean in the next lesson). Instead of defining each and every corner of every triangle in the game, all you need to do is create a list of vertices, which contain the coordinates and information of each vertex, as well as what order they go in. A primitive is a single element in a 3D environment, be it a triangle, a line, a dot, or whatever. Following is a list of ways primitives can be combined to create 3D objects. 1. Point Lists 2. Line Lists 3. Line Strips 4. Triangle Lists 5. Triangle Strips 6. Triangle Fans A Point List is a list of vertices that are shown as individual points on the screen. 
These can be useful for rendering 3D starfields, creating dotted lines, displaying locations on minimaps and so on. This diagram illustrates how a Point List is shown on the screen (without the labels, of course).

A Point List (6 Primitives)

A Line List is a list of vertices that creates separate line segments between each odd-numbered vertex and the next vertex. These can be used for a variety of effects, including 3D grids, heavy rain, waypoint lines, and so on. This diagram illustrates how a Line List is shown on the screen (this is the same set of vertices as before).

A Line List (3 Primitives)

A Line Strip is similar to a line list, but differs in that all vertices in such a list are connected by line segments. This is useful for creating many wire-frame images such as wire-frame terrain, blades of grass, and other non-model-based objects. It is also very useful in debugging programs. This diagram illustrates how a Line Strip is shown on the screen.

A Line Strip (5 Primitives)

A Triangle List is a list of vertices where every group of three vertices is used to make a single, separate triangle. This can be used in a variety of effects, such as force-fields, explosions, objects being pieced together, etc. This diagram illustrates how a Triangle List is shown on the screen.

A Triangle List (2 Primitives)

A Triangle Strip is a list of vertices that creates a series of triangles connected to one another. This is the most-used method when dealing with 3D graphics. These are mostly used to create the 3D models for your game. This diagram illustrates how a Triangle Strip is shown on the screen. Notice that the first three vertices create a single triangle, and each vertex thereafter creates an additional triangle based on the previous two.

A Triangle Strip (4 Primitives)

A Triangle Fan is similar to a triangle strip, with the exception that all the triangles share a single vertex. This is illustrated in this diagram:

A Triangle Fan (4 Primitives)

There is a slight quirk in drawing primitives where only one side of the primitive is shown. It is possible to show both sides, but usually a model is completely enclosed, and you cannot see the inside of it. If the model is completely enclosed, only one side of each triangle need be drawn. After all, drawing both sides of a primitive would take twice as much time. You will see an example of this in the next couple of lessons. A triangle primitive is only drawn when its vertices are given in a clockwise order. If you flip it around, it becomes counter-clockwise, and is therefore not shown.

Primitive Only Visible When Drawn Clockwise

There is an easy way (though tedious when you get into larger games) to show both sides of a primitive, which is to show the primitive twice, giving one primitive clockwise and the other counter-clockwise.

Primitive Visible When Drawn Either Way

Color is a rather simple part of 3D programming. However, even if you are very familiar with color spectrums and the physics of light, it would be good to know that Direct3D does not follow the laws of this universe exactly. To do so would be a nightmare on graphics hardware and the CPU. It's just too much, and so we'll just leave graphics like that to the Matrix and make our own laws that we can cope with. Light, of course, is a wavelength of particles that allows you to see and differentiate between various objects around you. Direct3D mimics this with various mathematical algorithms performed by the graphics hardware. The image is then displayed on the screen appearing well lit.
In this section we'll cover the mechanics of how Direct3D mimicks the light we see in nature. In the younger years of your education you may have learned the primary colors to be red, blue and yellow. This isn't actually the case. The colors are actually magenta, cyan and yellow. And why this useless technical detail? To understand this, you must understand the concept of subtractive and additive color. The difference between these two types of color have to do with whether or not the color refers to the color of light or the color of an object. Subtractive color is the color of an object, and has the primary colors magenta, cyan and yellow. Additive color is the color of light, and has the primary colors red, green and blue. In a beam of light, the more primary colors you add the closer you get to white. The colors add together to make white, and thus it is called additive color. Additive Colors Add Up to White Above you can see the primary colors of light combine to make white. However, if you look, you will also see that when you combine two of the colors, you get one of the primary subtractive colors (magenta, cyan or yellow). If we take a look at these subtractive colors, we'll see why this is. Subtractive colors are essentially the opposite of additive colors. They consist of the light that is not reflected off the surface of an object. For example, a red object illuminated by a white light only reflects red light and absorbs green and blue light. If you look at the above image, you will see that green and blue combined make cyan, and so cyan was subtracted from the white light, resulting in red. Subtractive Colors Subtract Out to Black In graphics programming, you will always use the additive colors (red, green and blue), because monitors consist of light. However, when building a 3D engine, it is good to understand what makes objects look the colors they do. By the way, this is why you find magenta, cyan and yellow in printers, and red, green and blue on screens. If you want to really get into color, then following is an article which gives a thorough rundown of color and the physics of light. If you're thinking of the future and DirectX 10's nextgen games, I'd seriously recommend knowing your color well. There's much more to it than you'd think at first, and it makes a big difference in making a great game engine. Anyway, here's the article. Alpha coloring is an additional element to the red-green-blue color of light. When you include some Alpha into your color, the graphic appears semi-transparent, allowing you to see through the object somewhat. This is useful for creating a semi-transparent display for your game, having units cloak (but still be seen somewhat by allies), and numerous other things. I'm sure your imagination can run rampant for some time on this one. Color in Direct3D comes in the form of a 32-bit variable which stores all the information about the color. This includes the primary colors (refered to as RGB for Red, Green and Blue) and the amount of Alpha in the color. Each of these are refered to as channels, and each take up 8-bits, as showed here: Bit Layout of Color Following is the code that defines the above colors: DWORD Color_A = 0xff00ff00; DWORD Color_B = 0x88ff00cc; There are also two functions we can use to build these colors for us, in case we need to plug variables into these values. 
DWORD Color_A = D3DCOLOR_XRGB(0, 255, 0);
DWORD Color_B = D3DCOLOR_ARGB(136, 255, 0, 204);

The function D3DCOLOR_ARGB() returns a DWORD filled with the proper values for the color you are building. If you don't want to bother with Alpha, then you can use D3DCOLOR_XRGB(), which does the exact same thing, but automatically fills the Alpha channel with 255. If you want to see an example of this, check out the examples from Lessons 1 and 2, which clear the screen using the D3DCOLOR_XRGB() function.

I'm not going to cover everything about light here. I'll save that for a later lesson. For now, I just want to cover the basic light equation, as you will have to understand parts of it before you actually add lighting into your program. Light in nature is a very complicated subject, mathematically speaking. When the sun shines, almost everything is lit by it, even though the sun is not shining on a lot of what can be seen. This is because light bounces around an area thousands of times, hitting just about everything, whether the sun shines there or not. To further add to this equation, as the sunlight travels through space, some of it is reflected off dust particles, which scatter the light in a pattern that is practically impossible to calculate. Even if a computer could calculate all this, it could not run in real time.

Direct3D uses a system to mimic the light of a real-life environment. To do this, it breaks light down into three types of light that, when combined, closely approximate actual light. These three types of light are Diffuse Light, Ambient Light and Specular Light.

Diffuse Light is light that shines upon an object indirectly. This sphere is lit by diffuse lighting alone. Later, you will learn about sources of light. This sphere is lit by one source, coming off from the left somewhere. The further the sphere curves away from the light, the less that portion is lit by the source.

Ambient Light is light that is considered to be everywhere. Unlike the diffuse light, it has no source, and if used alone the sphere appears as a flat circle (because all parts are lit equally under this lighting). This sphere is the same sphere as last time, but this time has ambient lighting included to fill in the dark, unlit parts.

Diffuse and Ambient Lighting

Specular Light is the third type. It is sometimes referred to as a Specular Highlight, because it highlights an object with a reflective color. This sphere is lit with Diffuse and Ambient Light, and has a Specular Highlight added to make it look more real.

Diffuse, Ambient and Specular Lighting

By now you should understand the basic underlying concepts of the third dimension, and how it is applied to game programming. Now let's go on and put all this theory into practice. In the next lesson, you will take what you know from this lesson and build a basic triangle.

Next Lesson: Drawing a Triangle GO! GO! GO!

© 2006-2013 DirectXTutorial.com. All Rights Reserved.
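To make the 32-bit color layout discussed above concrete, here is a small Python sketch (not Direct3D code) of how four 8-bit channels pack into a single value, with alpha in the highest byte. The two sample colors reproduce the hex values shown earlier in the lesson; the function names argb/xrgb are just stand-ins chosen to mirror the macros.

def argb(a, r, g, b):
    # Pack four 8-bit channels into one 32-bit value: A in the high byte,
    # then R, G, B.
    return (a << 24) | (r << 16) | (g << 8) | b

def xrgb(r, g, b):
    # Opaque color: the alpha channel is forced to 255.
    return argb(255, r, g, b)

color_a = xrgb(0, 255, 0)          # fully opaque green
color_b = argb(136, 255, 0, 204)   # semi-transparent pinkish color
print(hex(color_a))                # 0xff00ff00
print(hex(color_b))                # 0x88ff00cc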
http://www.directxtutorial.com/Lesson.aspx?lessonid=9-4-3
Unlike range and quartiles, the variance combines all the values in a data set to produce a measure of spread. The variance (symbolized by S²) and standard deviation (the square root of the variance, symbolized by S) are the most commonly used measures of spread.

We know that variance is a measure of how spread out a data set is. It is calculated as the average squared deviation of each number from the mean of a data set. For example, for the numbers 1, 2, and 3 the mean is 2 and the variance is 0.667.

[(1 − 2)² + (2 − 2)² + (3 − 2)²] ÷ 3 = 0.667

[sum of squared deviations from the mean] ÷ number of observations = variance

Variance (S²) = average squared deviation of values from mean

Calculating variance involves squaring deviations, so it does not have the same unit of measurement as the original observations. For example, lengths measured in metres (m) have a variance measured in metres squared (m²). Taking the square root of the variance gives us the units used in the original scale, and this is the standard deviation.

Standard deviation (S) = square root of the variance

Standard deviation is the measure of spread most commonly used in statistical practice when the mean is used to calculate central tendency. Thus, it measures spread around the mean. Because of its close links with the mean, standard deviation can be greatly affected if the mean gives a poor measure of central tendency. Standard deviation is also influenced by outliers: a single value could contribute largely to the result. In that sense, the standard deviation is a good indicator of the presence of outliers. This makes standard deviation a very useful measure of spread for symmetrical distributions with no outliers.

Standard deviation is also useful when comparing the spread of two separate data sets that have approximately the same mean. The data set with the smaller standard deviation has a narrower spread of measurements around the mean and therefore usually has comparatively fewer high or low values. An item selected at random from a data set whose standard deviation is low has a better chance of being close to the mean than an item from a data set whose standard deviation is higher.

Generally, the more widely spread the values are, the larger the standard deviation is. For example, imagine that we have to compare two different sets of exam results from a class of 30 students: the first exam has marks ranging from 31% to 98%, while the other ranges from 82% to 93%. Given these ranges, the standard deviation would be larger for the results of the first exam.

Standard deviation can be difficult to interpret in terms of how big it has to be before the data are considered widely spread; that depends on the size of the mean value of the data set. When you are measuring something that is in the millions, having measures that are "close" to the mean value does not have the same meaning as when you are measuring the weight of two individuals. For example, a measure of two large companies with a difference of $10,000 in annual revenues is considered pretty close, while the measure of two individuals with a weight difference of 30 kilograms is considered far apart. This is why, in most situations, it is useful to assess the size of the standard deviation relative to the mean of the data set.

Although standard deviation is less susceptible to extreme values than the range, standard deviation is still more sensitive than the semi-quartile range.
If the possibility of high values (outliers) presents itself, then the standard deviation should be supplemented by the semi-quartile range.

When using standard deviation, keep in mind the following properties. When analysing normally distributed data, standard deviation can be used in conjunction with the mean in order to calculate data intervals. If x̄ = mean, S = standard deviation and x = a value in the data set, then about 68% of the data lie within one standard deviation of the mean (x̄ − S < x < x̄ + S), about 95% lie within two standard deviations (x̄ − 2S < x < x̄ + 2S), and about 99.7% lie within three standard deviations (x̄ − 3S < x < x̄ + 3S).

The variance for a discrete variable made up of n observations is defined as:

S² = Σ(x − x̄)² ÷ n

The standard deviation for a discrete variable made up of n observations is the positive square root of the variance and is defined as:

S = √( Σ(x − x̄)² ÷ n )

Use this step-by-step approach to find the standard deviation for a discrete variable. A hen lays eight eggs. Each egg was weighed and recorded as follows: 60 g, 56 g, 61 g, 68 g, 51 g, 53 g, 69 g, 54 g.

| Weight (x) | (x − x̄) | (x − x̄)² |

The formulas for variance and standard deviation change slightly if observations are grouped into a frequency table. Squared deviations are multiplied by each frequency's value, and then the total of these results is calculated: S² = Σ(x − x̄)²f ÷ n.

Thirty farmers were asked how many farm workers they hire during a typical harvest season. Their responses were: 4, 5, 6, 5, 3, 2, 8, 0, 4, 6, 7, 8, 4, 5, 7, 9, 8, 6, 7, 5, 5, 4, 2, 1, 9, 3, 3, 4, 6, 4

| Workers (x) | Tally | Frequency (f) | xf | (x − x̄) | (x − x̄)² | (x − x̄)²f |

220 students were asked the number of hours per week they spent watching television. With this information, calculate the mean and standard deviation of hours spent watching television by the 220 students.

| Hours | Number of students |
| 10 to 14 | 2 |
| 15 to 19 | 12 |
| 20 to 24 | 23 |
| 25 to 29 | 60 |
| 30 to 34 | 77 |
| 35 to 39 | 38 |
| 40 to 44 | 8 |

Note: In this example, you are using a continuous variable that has been rounded to the nearest integer. The group of 10 to 14 is actually 9.5 to 14.499 (as 9.5 would be rounded up to 10 and 14.499 would be rounded down to 14). The interval has a length of 5 but the midpoint is 12 (9.5 + 2.5 = 12).

Σxf = 6,560 = (2 × 12 + 12 × 17 + 23 × 22 + 60 × 27 + 77 × 32 + 38 × 37 + 8 × 42), so the mean is x̄ = 6,560 ÷ 220 = 29.82.

Then, calculate the numbers for the xf, (x − x̄), (x − x̄)² and (x − x̄)²f columns. Add them to the frequency table below.

| Hours | Midpoint (x) | Frequency (f) | xf | (x − x̄) | (x − x̄)² | (x − x̄)²f |
| 10 to 14 | 12 | 2 | 24 | −17.82 | 317.6 | 635.2 |
| 15 to 19 | 17 | 12 | 204 | −12.82 | 164.4 | 1,972.8 |
| 20 to 24 | 22 | 23 | 506 | −7.82 | 61.2 | 1,407.6 |
| 25 to 29 | 27 | 60 | 1,620 | −2.82 | 8.0 | 480.0 |
| 30 to 34 | 32 | 77 | 2,464 | 2.18 | 4.8 | 369.6 |
| 35 to 39 | 37 | 38 | 1,406 | 7.18 | 51.6 | 1,960.8 |
| 40 to 44 | 42 | 8 | 336 | 12.18 | 148.4 | 1,187.2 |

Use the information found in the table above to find the standard deviation: the squared deviations total Σ(x − x̄)²f = 8,013.2, so S² = 8,013.2 ÷ 220 = 36.4 and S = 6.03.

Note: During calculations, when a variable is grouped by class intervals, the midpoint of the interval is used in place of every other value in the interval. Thus, the spread of observations within each interval is ignored. This makes the standard deviation always less than the true value. It should, therefore, be regarded as an approximation.

Assuming the frequency distribution is approximately normal, calculate the interval within which 95% of the previous example's observations would be expected to occur.

x̄ = 29.82, s = 6.03

Calculate the interval using the following formula:

x̄ − 2s < x < x̄ + 2s
29.82 − (2 × 6.03) < x < 29.82 + (2 × 6.03)
29.82 − 12.06 < x < 29.82 + 12.06
17.76 < x < 41.88

This means that there is about a 95% certainty that a student will spend between 18 hours and 42 hours per week watching television.
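The step-by-step calculation is easy to check in code. Here is a minimal Python sketch using the egg-weight data from the example above; the source did not reproduce the finished worked table, so the numbers in the comments are simply computed here, using the population formulas (dividing by n) as in the text.

from math import sqrt

weights = [60, 56, 61, 68, 51, 53, 69, 54]    # egg weights in grams

n = len(weights)
mean = sum(weights) / n                        # 472 / 8 = 59 g
squared_devs = [(x - mean) ** 2 for x in weights]
variance = sum(squared_devs) / n               # 320 / 8 = 40 g^2
std_dev = sqrt(variance)                       # about 6.32 g

print(mean, variance, round(std_dev, 2))       # 59.0 40.0 6.32

# Roughly 95% of a normal distribution lies within two standard
# deviations of the mean: mean - 2s < x < mean + 2s.
print(mean - 2 * std_dev, mean + 2 * std_dev)  # roughly 46.4 to 71.6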
http://www.statcan.gc.ca/edu/power-pouvoir/ch12/5214891-eng.htm
Transcript: So, basically the last few weeks, we've been doing derivatives. Now, we're going to integrals. So -- OK, so more precisely, we are going to be talking about double integrals. OK, so just to motivate the notion, let me just remind you that when you have a function of one variable -- -- say, f of x, and you take its integrals from, say, a to b of f of x dx, well, that corresponds to the area below the graph of f over the interval from a to b. OK, so the picture is something like you have a; you have b. You have the graph of f, and then what the integral measures is the area of this region. And, when we say the area of this region, of course, if f is positive, that's what happens. If f is negative, then we count negatively the area below the x axis. OK, so, now, when you have a function of two variables, then you can try to do the same thing. Namely, you can plot its graph. Its graph will be a surface in space. And then, we can try to look for the volume below the graph. And that's what we will call the double integral of the function over a certain region. OK, so let's say that we have a function of two variables, x and y. Then, we'll look at the volume that's below the graph z equals f of xy. OK, so, let's draw a picture for what this means. I have a function of x and y. I can draw its graph. The graph will be the surface with equation z equals f of x and y. And, well, I have to decide where I will integrate the function. So, for that, I will choose some region in the xy plane. And, I will integrate the function on that region. So, it's over a region, R, in the xy plane. So, I have this region R and I look at the piece of the graph that is above this region. And, we'll try to compute the volume of this solid here. OK, that's what the double integral will measure. So, we'll call that the double integral of our region, R, of f of xy dA and I will have to explain what the notation means. So, dA here stands for a piece of area. A stands for area. And, well, it's a double integral. So, that's why we have two integral signs. And, we'll have to indicate somehow the region over which we are integrating. OK, we'll come up with more concrete notations when we see how to actually compute these things. That's the basic definition. OK, so actually, how do we define it, that's not really much of a definition yet. How do we actually define this rigorously? Well, remember, the integral in one variable, you probably saw a definition where you take your integral from a to b, and you cut it into little pieces. And then, for each little piece, you take the value of a function, and you multiply by the width of a piece. That gives you a rectangular slice, and then you sum all of these rectangular slices together. So, here we'll do the same thing. So, well, let me put a picture up and explain what it does. So, we're going to cut our origin into little pieces, say, little rectangles or actually anything we want. And then, for each piece, with the small area, delta A, we'll take the area delta a times the value of a function in there that will give us the volume of a small box that sits under the graph. And then, we'll add all these boxes together. That gives us an estimate of a volume. And then, to get actually the integral, the integral will be defined as a limit as we subdivide into smaller and smaller boxes, and we sum more and more pieces, OK? So, actually, what we do, oh, I still have a board here. 
So, the actual definition involves cutting R into small pieces of area that's called delta A or maybe delta Ai, the area of the i'th piece. And then, OK, so maybe in the xy plane, we have our region, and we'll cut it may be using some grade. OK, and then we'll have each small piece. Each small piece will have area delta Ai and it will be at some point, let's call it xi, yi ... yi, xi. And then, we'll consider the sum over all the pieces of f at that point, xi, yi times the area of a small piece. So, what that corresponds to in the three-dimensional picture is just I sum the volumes of all of these little columns that sit under the graph. OK, and then, so what I do is actually I take the limit as the size of the pieces tends to zero. So, I have more and more smaller and smaller pieces. And, that gives me the double integral. OK, so that's not a very good sentence, but whatever. So, OK, so that's the definition. Of course, we will have to see how to compute it. We don't actually compute it. When you compute an integral in single variable calculus, you don't do that. You don't cut into little pieces and sum the pieces together. You've learned how to integrate functions using various formulas, and similarly here, we'll learn how to actually compute these things without doing that cutting into small pieces. OK, any questions first about the concept, or what the definition is? Yes? Well, so we'll have to learn which tricks work, and how exactly. But, so what we'll do actually is we'll reduce the calculation of a double integral to two calculations of single integrals. And so, for V, certainly, all the tricks you've learned in single variable calculus will come in handy. OK, so, yeah that's a strong suggestion that if you've forgotten everything about single variable calculus, now would be a good time to actually brush up on integrals. The usual integrals, and the usual substitution tricks and easy trig in particular, these would be very useful. OK, so, yeah, how do we compute these things? That's what we would have to come up with. And, well, going back to what we did with derivatives, to understand variations of functions and derivatives, what we did was really we took slices parallel to an axis or another one. So, in fact, here, the key is also the same. So, what we are going to do is instead of cutting into a lot of small boxes like that and summing completely at random, we will actually somehow scan through our region by parallel planes, OK? So, let me put up, actually, a slightly different picture up here. So, what I'm going to do is I'm going to take planes, say in this picture, parallel to the yz plane. I'll take a moving plane that scans from the back to the front or from the front to the back. So, that means I set the value of x, and I look at the slice, x equals x0, and then I will do that for all values of x0. So, now in each slice, well, I get what looks a lot like a single variable integral. OK, and that integral will tell me, what is the area in this? Well, I guess it's supposed to be green, but it all comes as black, so, let's say the black shaded slice. And then, when I add all of these areas together, as the value of x changes, I will get the volume. OK, let me try to explain that again. So, to compute this integral, what we do is actually we take slices. So, let's consider, let's call s of x the area of a slice, well, by a plane parallel to the yz plane. OK, so on the picture, s of x is just the area of this thing in the vertical wall. 
Now, if you sum all of these, well, why does that work? So, if you take the origin between two parallel slices that are very close to each other, what's the volume in these two things? Well, it's essentially s of x times the thickness of this very thin slice, and the thickness would be delta x0 dx if you take a limit with more and more slices. OK, so the volume will be the integral of s of x dx from, well, what should be the range for x? Well, we would have to start at the very lowest value of x that ever happens in our origin, and we'd have to go all the way to the very largest value of x, from the very far back to the very far front. So, in this picture, we probably start over here at the back, and we'd end over here at the front. So, let me just say from the minimum, x, to the maximum x. And now, how do we find S of x? Well, S of x will be actually again an integral. But now, it's an integral of the variable, y, because when we look at this slice, what changes from left to right is y. So, well let me actually write that down. For a given, x, the area S of x you can compute as an integral of f of x, y dy. OK, well, now x is a constant, and y will be the variable of integration. What's the range for y? Well, it's from the leftmost point here to the rightmost point here on the given slice. So, there is a big catch here. That's a very important thing to remember. What is the range of integration? The range of integration for y depends actually on x. See, if I take the slice that's pictured on that diagram, then the range for y goes all the way from the very left to the very right. But, if I take a slice that, say, near the very front, then in fact, only a very small segment of it will be in my region. So, the range of values for y will be much less. Let me actually draw a 2D picture for that. So, remember, we fix x, so, sorry, so we fix a value of x. OK, and for a given value of x, what we will do is we'll slice our graph by this plane parallel to the yz plane. So, now we mention the graph is sitting above that. OK, that's the region R. We have the region, R, and I have the graph of a function above this region, R. And, I'm trying to find the area between this segment and the graph above it in this vertical plane. Well, to do that, I have to integrate from y going from here to here. I want the area of a piece that sits above this red segment. And, so in particular, the endpoints, the extreme values for y depend on x because, see, if I slice here instead, well, my bounds for y will be smaller. OK, so now, if I put the two things together, what I will get -- -- is actually a formula where I have to integrate -- -- over x -- -- an integral over y. OK, and so this is called an iterated integral because we iterate twice the process of taking an integral. OK, so again, what's important to realize here, I mean, I'm going to say that several times over the next few days but that's because it's the single most important thing to remember about double integrals, the bounds here are just going to be numbers, OK, because the question I'm asking myself here is, what is the first value of x by which I might want to slice, and what is the last value of x? Which range of x do I want to look at to take my red slices? And, the answer is I would go all the way from here, that's my first slice, to somewhere here. That's my last slice. For any value in between these, I will have some red segment, and I will want to integrate over that that. 
On the other hand here, the bounds will depend on the outer variable, x, because at a fixed value of x, what the values of y will be depends on x in general. OK, so I think we should do lots of examples to convince ourselves and see how it works. Yeah, it's called an iterated integral because first we integrated over y, and then we integrate again over x, OK? So, we can do that, well, I mean, y depends on x or x depends, no, actually x and y vary independently of each other inside here. What is more complicated is how the bounds on y depend on x. But actually, you could also do the other way around: first integrate over x, and then over y, and then the bounds for x will depend on y. We'll see that on an example. Yes? So, for y, I'm using the range of values for y that corresponds to the given value of x, OK? Remember, this is just like a plot in the xy plane. Above that, we have the graph. Maybe I should draw a picture here instead. For a given value of x, so that's a given slice, I have a range of values for y, that is, from this picture at the leftmost point on that slice to the rightmost point on that slice. So, where start and where I stop depends on the value of x. Does that make sense? OK. OK, no more questions? OK, so let's do our first example. So, let's say that we want to integrate the function 1-x^2-y^2 over the region defined by x between 0 and 1, and y between 0 and 1. So, what does that mean geometrically? Well, z = 1-x^2-y^2, and it's a variation on, actually I think we plotted that one, right? That was our first example of a function of two variables possibly. And, so, we saw that the graph is this paraboloid pointing downwards. OK, it's what you get by taking a parabola and rotating it. And now, what we are asking is, what is the volume between the paraboloid and the xy plane over the square of side one in the xy plane over the square of side one in the xy plane, x and y between zero and one. OK, so, what we'll do is we'll, so, see, here I try to represent the square. And, we'll just sum the areas of the slices as, say, x varies from zero to one. And here, of course, setting up the bounds will be easy because no matter what x I take, y still goes from zero to one. See, it's easiest to do double integrals what the region is just a rectangle on the xy plane because then you don't have to worry too much about what are the ranges. OK, so let's do it. Well, that would be the integral from zero to one of the integral from zero to one of 1-x^2-y^2 dy dx. So, I'm dropping the parentheses. But, if you still want to see them, I'm going to put that in very thin so that you see what it means. But, actually, the convention is we won't put this parentheses in there anymore. OK, so what this means is first I will integrate 1-x^2-y^2 over y, ranging from zero to one with x held fixed. So, what that represents is the area in this slice. So, see here, I've drawn, well, what happens is actually the function takes positive and negative values. So, in fact, I will be counting positively this part of the area. And, I will be counting negatively this part of the area, I mean, as usual when I do an integral. OK, so what I will do to evaluate this, I will first do what's called the inner integral. So, to do the inner integral, well, it's pretty easy. How do I integrate this? Well, it becomes, so, what's the integral of one? It's y. Just anything to remember is we are integrating this with respect to y, not to x. The integral of x^2 is x^2 times y. And, the integral of y^2 is y^3 over 3. 
OK, and that we plug in the bounds, which are zero and one in this case. And so, when you plug y equals one, you will get one minus x^2 minus one third minus, well, for y equals zero you get 0, 0, 0, so nothing changes. OK, so you are left with two thirds minus x^2. OK, and that's a function of x only. Here, you shouldn't see any y's anymore because y was your integration variable. But, you still have x. You still have x because the area of this shaded slice depends, of course, on the value of x. And, so now, the second thing to do is to do the outer integral. So, now we integrate from zero to one what we got, which is two thirds minus x^2 dx. OK, and we know how to compute that because that integrates to two thirds x minus one third x^3 between zero and one. And, I'll let you do the computation. You will find it's one third. OK, so that's the final answer. So, that's the general pattern. When we have a double integral to compute, first we want to set it up carefully. We want to find, what will be the bounds in x and y? And here, that was actually pretty easy because our equation was very simple. Then, we want to compute the inner integral, and then we compute the outer integral. And, that's it. OK, any questions at this point? No? OK, so, by the way, we started with dA in the notation, right? Here we had dA. And, that somehow became a dy dx. OK, so, dA became dy dx because when we do the iterated integral this way, what we're actually doing is that we are slicing our origin into small rectangles. OK, that was the area of this small rectangle here? Well, it's the product of its width times its height. So, that's delta x times delta y. OK, so, delta a equals delta x delta y becomes... So actually, it's not just becomes, it's really equal. So, the small rectangles for. Now, it became dy dx and not dx dy. Well, that's a question of, in which order we do the iterated integral? It's up to us to decide whether we want to integrate x first, then y, or y first, then x. But, as we'll see very soon, that is an important decision when it comes to setting up the bounds of integration. Here, it doesn't matter, but in general we have to be very careful about in which order we will do things. Yes? Well, in principle it always works both ways. Sometimes it will be that because the region has a strange shape, you can actually set it up more easily one way or the other. Sometimes it will also be that the function here, you actually know how to integrate in one way, but not the other. So, the theory is that it should work both ways. In practice, one of the two calculations may be much harder. OK. Let's do another example. Let's say that what I wanted to know was not actually what I computed, namely, the volume below the paraboloid, but also the negative of some part that's now in the corner towards me. But let's say really what I wanted was just the volume between the paraboloid and the xy plane, so looking only at the part of it that sits above the xy plane. So, that means, instead of integrating over the entire square of size one, I should just integrate over the quarter disk. I should stop integrating where my paraboloid hits the xy plane. So, let me draw another picture. So, let's say I wanted to integrate, actually -- So, let's call this example two. So, we are going to do the same function but over a different region. And, the region will just be, now, this quarter disk here. OK, so maybe I should draw a picture on the xy plane. That's your region, R. OK, so in principle, it will be the same integral. 
But what changes is the bounds. Why do the bounds change? Well, the bounds change because now if I set, if I fixed some value of x, then I want to integrate this part of the slice that's above the xy plane and I don't want to take this part that's actually outside of my disk. So, I should stop integrating over y when y reaches this value here. OK, on that picture here, on this picture, it tells me for a fixed value of x, the range of values for y should go only from here to here. So, that's from here to something less than one. OK, so for a given x, the range of y is, well, so what's the lowest value of y that we want to look at? It's still zero. From y equals zero to, what's the value of y here? Well, I have to solve in the equation of a circle, OK? So, if I'm here, this is x^2 plus y^2 equals one. That means y is square root of one minus x^2. OK, so I will integrate from y equals zero to y equals square root of one minus x^2. And, now you see how the bound for y will depend on the value of x. OK, so while I erase, I will let you think about, what is the bound for x now? It's a trick question. OK, so I claim that what we will do -- We write this as an iterated integral first dy then dx. And, we said for a fixed value of x, the range for y is from zero to square root of one minus x^2. What about the range for x? Well, the range for x should just be numbers. OK, remember, the question I have to ask now is if I look at all of these yellow slices, which one is the first one that I will consider? Which one is the last one that I want to consider? So, the smallest value of x that I want to consider is zero again. And then, I will have actually a pretty big slice. And I will get smaller, and smaller, and smaller slices. And, it stops. I have to stop when x equals one. Afterwards, there's nothing else to integrate. So, x goes from zero to one. OK, and now, see how in the inner integral, the bounds depend on x. In the outer one, you just get numbers because the questions that you have to ask to set up this one and set up that one are different. Here, the question is, if I fix a given x, if I look at a given slice, what's the range for y? Here, the question is, what's the first slice? What is the last slice? Does that make sense? Everyone happy with that? OK, very good. So, now, how do we compute that? Well, we do the inner integral. So, that's an integral from zero to square root of one minus x^2 of one minus x^2 minus y^2 dy. And, well, that integrates to y minus x^2 y minus y^3 over three, from zero to square root of one minus x^2. And then, that becomes, well, the root of one minus x^2, minus x^2 root of one minus x^2, minus one minus x^2 to the three halves over three. And actually, if you look at it for long enough, see, this says one minus x^2 times square root of one minus x^2. So, actually, that's also, so, in fact, that simplifies to two thirds of one minus x^2 to the three halves. OK, let me redo that, maybe, slightly differently. This was one minus x^2, times y. So -- one minus x^2 times y becomes one minus x^2 times the square root of one minus x^2, minus y^3 over three. And then, when I take y equals zero, I get zero. So, I don't subtract anything. OK, so now you see this is one minus x^2 to the three halves minus a third of it. So, you're left with two thirds. OK, so, that's the integral. The outer integral is the integral from zero to one of two thirds of one minus x^2 to the three halves dx.
And, well, I let you see if you remember single variable integrals by trying to figure out what this actually comes out to be is it pi over two, or pi over eight, actually? I think it's pi over eight. OK, well I guess we have to do it then. I wrote something on my notes, but it's not very clear, OK? So, how do we compute this thing? Well, we have to do trig substitution. That's the only way I know to compute an integral like that, OK? So, we'll set x equal sine theta, and then square root of one minus x^2 will be cosine theta. We are using sine squared plus cosine squared equals one. And, so that will become -- -- so, two thirds remains two thirds. One minus x^2 to the three halves becomes cosine cubed theta. dx, well, if x is sine theta, then dx is cosine theta d theta. So, that's cosine theta d theta. And, well, if you do things with substitution, which is the way I do them, then you should worry about the bounds for theta which will be zero to pi over two. Or, you can also just plug in the bounds at the end. So, now you have the two thirds times the integral from zero to pi over two of cosine to the fourth theta d theta. And, how do you integrate that? Well, you have to use double angle formulas. OK, so cosine to the fourth, remember, cosine squared theta is one plus cosine two theta over two. And, we want the square of that. And, so that will give us -- -- of, well, we'll have, it's actually one quarter plus one half cosine to theta plus one quarter cosine square to theta d theta. And, how will you handle this guy? Well, using, again, the double angle formula. OK, so it's getting slightly nasty. So, but I don't know any simpler solution except for one simpler solution, which is you have a table of integrals of this form inside the notes. Yes? No, I don't think so because if you take one half times cosine half times two, you will still have half, OK? So, if you do, again, the double angle formula, I think I'm not going to bother to do it. I claim you will get, at the end, pi over eight because I say so. OK, so exercise, continue calculating and get pi over eight. OK, now what does the show us? Well, this shows us, actually, that this is probably not the right way to do this. OK, the right way to do this will be to integrate it in polar coordinates. And, that's what we will learn how to do tomorrow. So, we will actually see how to do it with much less trig. So, that will be easier in polar coordinates. So, we will see that tomorrow. OK, so we are almost there. I mean, here you just use a double angle again and then you can get it. And, it's pretty straightforward. OK, so one thing that's kind of interesting to know is we can exchange the order of integration. Say we have an integral given to us in the order dy dx, we can switch it to dx dy. But, we have to be extremely careful with the bounds. So, you certainly cannot just swap the bounds of the inner and outer because there you would end up having this square root of one minus x^2 on the outside, and you would never get a number out of that. So, that cannot work. It's more complicated than that. OK, so, well, here's a first baby example. Certainly, if I do integral from zero to one, integral from zero to two dx dy, there, I can certainly switch the bounds without thinking too much. What's the reason for that? Well, the reason for that is this corresponds in both cases to integrating x from zero to two, and y from zero to one. It's a rectangle. 
So, if I slice it this way, you see that y goes from zero to one for any x between zero and two. It's this guy. If I slice it that way, then x goes from zero to two for any value of y between zero and one. And, it's this one. So, here it works. But in general, I have to draw a picture of my region, and see how the slices look both ways. OK, so let's do a more interesting one. Let's say that I want to compute an integral from zero to one of integral from x to square root of x of e^y over y dy dx. So, why did I choose this guy? Because, as far as I can tell, there's no way to integrate e^y over y. So, this is an integral that you cannot compute this way. So, it's a good example for why this can be useful. So, if you do it this way, you are stuck immediately. So, instead, we will try to switch the order. But, to switch the order, we have to understand, what do these bounds mean? OK, so let's draw a picture of the region. Well, what I am saying is y equals x to y equals square root of x. Well, let's draw y equals x, and y equals square root of x. Well, maybe I should actually put this here, y equals x to y equals square root of x. OK, and so I will go, for each value of x I will go from y equals x to y equals square root of x. And then, we'll do that for values of x that go from x equals zero to x equals one, which happens to be exactly where these things intersect. So, my region will consist of all this, OK? So now, if I want to do it the other way around, I have to decompose my region. The other way around, I have to, so my goal, now, is to rewrite this as an integral. Well, it's still the same function. It's still e to the y over y. But now, I want to integrate dx dy. So, how do I integrate over x? Well, I fix a value of y. And, for that value of y, what's the range of x? Well, the range for x is from here to here. OK, what's the value of x here? Let's start with an easy one. This is x equals y. What about this one? It's x equals y^2. OK, so, x goes from y^2 to y, and then what about y? Well, I have to start at the bottom of my region. That's y equals zero to the top, which is at y equals one. So, y goes from zero to one. So, switching the bounds is not completely obvious. That took a little bit of work. But now that we've done that, well, just to see how it goes, it's actually going to be much easier to integrate because the inner integral, well, what's the integral of e^y over y with respect to x? It's just that times x, right, from x equals y^2 to y. So, that will be, well, if I plug x equals y, I will get e to the y over y times y, which is e to the y; minus, if I plug x equals y^2, I will get e to the y over y times y^2, which is y times e to the y, OK? So, now, if I do the outer integral, I will have the integral from zero to one of e to the y minus y e to the y dy. And, that one actually is a little bit easier. So, we know how to integrate e^y. We don't quite know how to integrate y e^y. But, let's try. So, let's see, what's the derivative of y e^y? Well, by the product rule, it's one times e^y plus y times the derivative of e^y, so it's e^y plus y e^y. So, if we do, OK, let's put a minus sign in front. Well, that's almost what we want, except we have a minus e^y instead of a plus e^y. So, we need to add 2e^y. And, I claim that's the antiderivative. OK, if you got lost, you can also integrate by integrating by parts, by taking the derivative of y and integrating these guys. Or, but, you know, that works. Just, your first guess would be, maybe, let's try minus y e to the y.
Take the derivative of that, compare, see what you need to do to fix. And so, if you take that between zero and one, you'll actually get e minus two. OK, so, tomorrow we are going to see how to do double integrals in polar coordinates, and also applications of double integrals, how to use them for interesting things.
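For readers who want to check the three worked answers from this lecture numerically, here is a small sketch using SciPy's dblquad (this assumes NumPy and SciPy are available; it is not part of the lecture itself). dblquad integrates the inner variable y first, just like the iterated integrals set up above.

import numpy as np
from scipy.integrate import dblquad

# dblquad(f, a, b, gfun, hfun) computes the iterated integral with x running
# from a to b and, for each x, y running from gfun(x) to hfun(x).  The
# integrand is passed as f(y, x), inner variable first.
f = lambda y, x: 1 - x**2 - y**2

# Example 1: unit square, expected value 1/3.
v1, _ = dblquad(f, 0, 1, lambda x: 0, lambda x: 1)

# Example 2: quarter disk, inner bound y = sqrt(1 - x^2), expected pi/8.
v2, _ = dblquad(f, 0, 1, lambda x: 0, lambda x: np.sqrt(1 - x**2))

# Example 3: e^y / y between y = x and y = sqrt(x), expected e - 2.
# (The integrand blows up at the corner x = y = 0, but the integral converges.)
g = lambda y, x: np.exp(y) / y
v3, _ = dblquad(g, 0, 1, lambda x: x, lambda x: np.sqrt(x))

print(v1, 1 / 3)       # ~0.3333
print(v2, np.pi / 8)   # ~0.3927
print(v3, np.e - 2)    # ~0.7183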
http://xoax.net/math/crs/multivariable_calculus_mit/lessons/Lecture16/
An n-dimensional pyramid or cone is a geometric figure consisting of an (n-1)-dimensional base and a vertical axis such that the cross-section of the figure at any height y is a scaled-down version of the base. The cross-section becomes zero at some height H. The point at which the cross-section is zero is called the vertex. The distinction between a pyramid and a cone is that the base of a pyramid is a geometric figure with a finite number of sides, whereas there are no such restrictions for the base of a cone (and thus a pyramid is a special case of a cone). A two-dimensional pyramid is just a triangle, and a three-dimensional pyramid is the standard type of pyramid with a polygonal base and triangular sides composed of the sides of the base connected to the vertex. The area-volume formulas for these two cases are well known: the area of a triangle is (1/2)(base)(height), and the volume of a three-dimensional pyramid is (1/3)(base area)(height).

In order to deal with the general n-dimensional case it is necessary to derive the area of the triangle systematically. The area of a triangle can be found as the limit of a sequence of approximations in which the triangle is covered by a set of rectangles, as shown in the diagrams below. In this construction the vertical axis of the triangle is divided into m equal intervals. The width of a rectangle used in the covering is the width of the triangle at that height. As the subdivision of the vertical axis of the triangle becomes finer and finer, the sum of the areas of the rectangles approaches a limit which is called the area of the triangle.

The process can be represented algebraically. For a pyramid/cone of height H the distance from the vertex is H-y, where y is the distance from the base. Let s = (1 - y/H) be the scale factor for a cross-section of the cone at a height y above the base. The measure of the (n-1)-dimensional cross-section is equal to the measure of the base multiplied by a factor of s^(n-1). The n-dimensional volume of the cone, Vn(B,H), is approximated by the sum of the volumes of the prisms created by the subdivision of the vertical axis. The limit of that sum as the subdivision becomes finer and finer can be expressed as an integral; i.e.,

Vn(B,H) = ∫ (from y = 0 to H) A(B)·(1 - y/H)^(n-1) dy

where A(B) is the (n-1)-dimensional measure of the base B. The general formula is then:

Vn(B,H) = A(B)·H/n

The above general formula can be used to establish a relationship between the volume of an n-dimensional ball and the (n-1)-dimensional area which bounds it. Consider the approximation of the area of a disk of radius r by triangles, as shown below. Each of the triangles has a height of r, so the sum of the areas of the triangles is equal to one half the height r times the sum of the bases. In the limit the sum of the bases is equal to the perimeter of the circle, so the area of the disk is equal to (1/2)r(2πr) = πr². Likewise, the volume of a ball can be approximated by triangulating the spherical surface and creating pyramids whose vertices are all at the center of the ball and whose bases are the triangles at the surface. The height of all these pyramids is the radius of the ball r. Thus the volume is equal to one third of the height r times the sum of the base areas. In the limit the sum of the base areas is equal to the area of the sphere, 4πr². Thus the volume of the ball of radius r is equal to (1/3)r(4πr²); i.e., (4/3)πr³. Generalizing, this means that

Vn(r) = (1/n)·r·Sn-1(r)

where Sn-1(r) is the (n-1)-dimensional area of the sphere bounding the n-dimensional ball of radius r. Unfortunately this relation is of no practical help in finding the formula for the volume of an n-dimensional ball, in that the formula for the area of the surface of an n-dimensional ball is more obscure than that of the volume. Nevertheless it is interesting to perceive an n-dimensional ball as being composed of n-dimensional pyramids.
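The 1/n factor can be checked directly by carrying out the integral described above for a few specific dimensions. The following is a small sketch using SymPy (assumed to be installed), with B standing for the (n-1)-dimensional measure of the base:

from sympy import symbols, integrate

y, H, B = symbols('y H B', positive=True)

# The cross-section at height y is the base scaled by s = 1 - y/H, so its
# (n-1)-dimensional measure is B * s**(n-1).  Integrating over the height
# should give B*H/n in every dimension n.
for n in range(2, 6):
    volume = integrate(B * (1 - y / H) ** (n - 1), (y, 0, H))
    print(n, volume)    # 2 B*H/2, 3 B*H/3, 4 B*H/4, 5 B*H/5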
http://www.sjsu.edu/faculty/watkins/npyramid.htm
Basic Algebra/Working with Numbers/Distributive Property

Sum: The resulting quantity obtained by the addition of two or more terms.
Real Number: An element of the set of all rational and irrational numbers. All of these numbers can be expressed as decimals.
Monomial: An algebraic expression consisting of one term.
Binomial: An algebraic expression consisting of two terms.
Trinomial: An algebraic expression consisting of three terms.
Polynomial: An algebraic expression consisting of two or more terms.
Like Terms: Like terms are expressions that have the same variable(s) and the same exponent on the variable(s). Remember that constant terms are all like terms. This follows from the definition because all constant terms can be seen to have a variable with an exponent of zero.

The distributive property is the short name for "the distributive property of multiplication over addition", although you will be using it to distribute multiplication over subtraction as well. When you are simplifying or evaluating you follow the order of operations. Sometimes you are unable to simplify any further because you cannot combine like terms. This is when the distributive property comes in handy.

When you first learned about multiplication it was described as grouping. You used multiplication as a way to condense the multiple addition of the same quantity. If you wanted to add 3 + 3 + 3 + 3 you could think about it as four groups of three items.

|ooo| + |ooo| + |ooo| + |ooo|

You have 12 items. This is where 4 × 3 = 12 comes in. So as you moved on you took this idea to incorporate variables as well. 3x is three groups of x, and three groups of 2x gives you six x's, or 6x.

Now we need to take this idea and extend it even further. If you have 3(x + 1) you might try to simplify using the order of operations first. This would have you do the addition inside the parentheses first. However, x and 1 are not like terms, so the addition is impossible. We need to look at this expression differently if we are going to simplify it. What you have is (x + 1) + (x + 1) + (x + 1), or in other words you have three groups of (x + 1). Here you can collect like terms. You have three x's and three 1's. So you started with 3(x + 1) and ended with 3x + 3. The last equation might make it easier to see what the distributive property says to do. You are taking the multiplication by 3 and distributing that operation across the terms being added in the parentheses. You multiply the x by 3 and you multiply the 1 by 3. Then you just have to simplify using the order of operations.

What Is Coming Next

After you learn about the distributive property you will know how to multiply a monomial by a polynomial. Next, you can use this information to understand how to multiply a polynomial by a polynomial. You will probably move on to multiplying a binomial times a binomial. This will show up in something like (x+2)(3x+5). You can think of a problem like this as x(3x+5) + 2(3x+5). Breaking up the first binomial like this allows you to use your knowledge of the distributive property. Once you understand this use of the distributive property you can extend this understanding even further to justify the multiplication of any polynomial with any polynomial.

Sometimes while you are attempting to isolate a variable in an equation or inequality you will need to use the distributive property. You already know that you use inverse operations to isolate your desired variable, but before you do that you need to combine like terms that are on the same side of the equation (or inequality). Now there might be a step even before that.
You will need to see whether the distributive property must be used before you can combine like terms, and then proceed to use inverse operations to isolate a variable.

Word to the Wise

Remember that you still have the order of operations. If you can evaluate operations in a straightforward manner it is usually in your best interest to do so. The distributive property is like a back door to the order of operations for when you get stuck because you do not have like terms. Of course, when you are dealing with only constant terms, everything you encounter is like terms; the trouble happens when you introduce variables, because some terms then cannot be combined. Remember that variables take the place of real numbers (at least in Algebra 1), so the same rules that govern real numbers also govern the variables that hold their place, and vice versa. You can use the distributive property even when you do not need to.

Example Problems

Example Problem #1: Simplify 2(x + 4).

Solution to Example Problem #1: Normally, to follow the order of operations, you would add the two terms in the parentheses first and then do the multiplication by 2. That does not work for this expression because x and 4 are unlike terms, so you cannot combine them. We use the distributive property to find a way around the order of operations while still being sure that we keep the value of the expression. We distribute the multiplication by 2 across the addition: we will have 2 multiplied by x and 2 multiplied by 4. Now we just need to finish the multiplication: 2 · 4 is equal to 8, so 2(x + 4) = 2x + 8. We are done, because we are left with two terms being added that are not like terms and so cannot be combined.

Example Problem #2:

Solution to Example Problem #2: Since the terms inside the parentheses are not like terms, we cannot combine them. We can use the distributive property to multiply the monomial outside the parentheses by each term inside. This is the first example with subtraction in it; you keep this operation between the two terms, just as we kept the addition between the two terms in the previous example. The next step is to carry out each multiplication; to complete it you will already need to know how to multiply monomials. To summarize the steps: distribute the multiplication across both terms, multiply, and keep the subtraction sign between the two results.

Example Problem #3: Solve for x in 2(x + 10) = c, where c stands for the constant given on the right-hand side.

Solution to Example Problem #3: To solve for a variable you must isolate it on one side of the equation. We need to get the x out of the parentheses. Since we cannot go through the order of operations and just add x plus 10 and then multiply by 2, we have to use the distributive property. First, distribute the multiplication by 2 across the addition inside the parentheses, then multiply: 2(x + 10) = 2x + 20. Now we can work on getting the x on one side by itself. You need to work the order of operations backwards so we can "undo" what is "being done to" x. To get rid of adding 20 you need to subtract 20, and remember that an equation sets up a relationship that we need to preserve: if you subtract 20 from one side you need to subtract 20 from the other side as well to keep the balance. Now we need to "undo" the multiplication by 2, so we divide by 2; whatever you do to one side must be done to the other, so divide both sides by 2. This is it. You know you are done when the variable is by itself on one side, and it is.
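The worked examples above are easy to sanity-check with a computer algebra system. Below is a minimal SymPy sketch (not part of the original lesson); the right-hand side 36 used for Example Problem #3 is an assumed value, since the original constant is not shown.

from sympy import symbols, expand, Eq, solve

x = symbols('x')

print(expand(3 * (x + 1)))             # 3*x + 3 : three groups of (x + 1)
print(expand(2 * (x + 4)))             # 2*x + 8 : Example Problem #1
print(solve(Eq(2 * (x + 10), 36), x))  # [8]     : Example Problem #3 with an assumed constant

Each line mirrors the hand steps above: distribute, multiply, and then undo the operations to isolate x.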
Practice Games

http://www.phschool.com/atschool/academy123/html/bbapplet_wl-problem-430723.html (video explanation)

Practice Problems

(Note: solutions are in red.) Use the distributive property to rewrite each expression.

Notes for Educators

It is obvious to most educators in the classroom that students must have a good number sense to comprehend mathematics in a useful way. A critical part of having number sense is understanding multiplication of real numbers and of variables that stand in the place of real numbers. Students also need as much practice as possible with counting principles. Explaining multiplication and the distributive property as above helps to solidify some counting-principles knowledge in the minds of the students. In order to teach the distributive property, an educator might be interested in how students first perceive knowledge of this kind: the better we understand how the brain obtains knowledge, the more responsibly we can guide it. Piaget's model of cognitive development sets up levels of understanding that students' minds pass through; according to this model, the distributive property would sit in the sensory-motor or perhaps the pre-operational stage. Piaget's work has been widely criticized, but few doubt that it is a good starting place for thinking about how the brain acquires mathematical understanding. Annette Karmiloff-Smith was a student of Piaget, and many believe that she carries his ideas forward. She believes that human brains are born with some preset modules that have the innate ability to learn, and that as you have experiences you create more independent modules. Eventually these modules start working together to create a deeper understanding and more applicable knowledge. The person moves from implicit to more explicit knowledge, which helps to create verbal knowledge. Education, and specifically mathematics education, plays a role during the process of moving from the instinctually implicit stages to the more verbal, explicit understanding. A student acquires procedural methods first and then learns the theory behind the procedure; this runs parallel to mathematics education. If you accept this model of how the mind comes to understand a concept, it is critical to teach students the procedural methods and mechanics of how the distributive property is carried out. It is then just as important to show them why it works the way it does, or at least to provide them with the educational opportunities to explore why it works out. This exploration should take three stages. First, the student needs to master the mechanics of the distributive property; in math-ed terms, this might be considered drill and kill. The next step is asking the students to reflect on why they think the distributive property behaves the way it does. This can be related to encouraging metacognition in your students: have them reflect not only on the procedure of the distributive property but also on why they believe it works. Hopefully, in the third and final step, the first two come together in the students' minds as a solid understanding of the distributive property. Since this knowledge will probably first be linked in a student's mind to a procedure only helpful in a math classroom, it is also beneficial to encourage students to stretch the concept across domains. After all, one of the main purposes of a public mathematics education is to encourage logical thinking among the populace.
One of the most common errors students make is to multiply only the first term in the parentheses by the number outside, for example writing 2(x + 1) = 2x + 1. This can initially be remedied by explaining the distributive property as taking 2 groups of (x + 1) and adding them, just as multiplication says to do. This might lead to another misunderstanding, though. It can be confusing to think about something like 0.5(x + 1), or any other fractional multiple, because it is hard to think about 0.5 groups of (x + 1). When a student first learns about multiplication they are told that it is like grouping things together to simplify the addition of the same number multiple times. Once they have mastered this concept, multiplication is extended to all rational numbers, and multiplication is then better thought of as a scaling process: you are taking one number and scaling it by a factor of another. This same mental leap is needed to think about distributing a rational number, because the distributive property is still just multiplication. An effective method for explaining multiplication as a scale factor is to have two number lines, one right above the other. If you are multiplying by 1/2 then the scale factor is 1/2, and you can draw guide lines from the top number line to the bottom number line that scale every number down by one half. So a line will be drawn from 2 on the top number line to 1 on the bottom number line, another line will be drawn from 3 on the top number line to 1.5 on the bottom number line, and so on. Of course, this method is easier to use if you have an interactive applet or program of some kind that allows you to update the scale factor immediately. Without this instant gratification the students may find the explanation too cumbersome to follow.
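As a quick classroom aid, the common error described above can also be shown numerically. The short sketch below (with arbitrary sample values) compares the correct expansion of 2(x + 1) with the mistaken 2x + 1.

# Compare the correct distribution with the common error for a few sample values.
for x in [1, 2, 5, 10]:
    correct = 2 * (x + 1)    # distribute over both terms: 2x + 2
    mistaken = 2 * x + 1     # only the first term was multiplied
    print(x, correct, mistaken, correct == mistaken)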
http://en.wikibooks.org/wiki/Basic_Algebra/Working_with_Numbers/Distributive_Property
13
51
Origin and Evolutionary Relationships of Giant Galápagos Tortoises Andalgisa Caccone, James P. Gibbs, Valerio Ketmaier, Elizabeth Suatoni, and Jeffrey R. Powell. Proc Natl Acad Sci, USA, 1999 November 9; 96(23)13223-13228 Giant tortoises, up to 5 feet in length, were widespread on all continents except Australia and Antarctica before and during the Pleistocene (3, 4). Now extinct from large landmasses, giant tortoises have persisted through historical times only on remote oceanic islands: the Galápagos, Seychelles, and Mascarenes. The tortoises of the Mascarenes are now extinct; the last animal died in 1804 (5). The tortoises of the Seychelles are represented by a single surviving population on the Aldabra atoll. Only in the Galápagos have distinct populations survived in multiple localities. The Galápagos tortoises remain the largest living tortoises (up to 400 kg) and belong to a pantropical genus of some 21 species (6). The Galápagos Islands are volcanic in origin; the oldest extant island in the eastern part of the archipelago is less than 5 million years (myr) old (7); volcanic activity is ongoing, especially on the younger western islands. Because the archipelago has never been connected to the mainland, tortoises probably reached the islands by rafting from South America, 1000 km to the east. The Humboldt Current travels up the coast of Chile and Peru before diverting westward at Equatorial latitudes corresponding to the Galápagos Archipelago. Three extant species of Geochelone exist on mainland South America and are therefore the best candidates for the closest living relative of the Galápagos tortoises: Geochelone denticulata, the South American yellow-footed tortoise; Geochelone carbonaria, the South American red-footed tortoise; and Geochelone chilensis, the Chaco tortoise. Within the archipelago, up to 15 subspecies (or races) of Galápagos tortoises have been recognized, although only 11 survive to the present (2, 8). Six of these are found on separate islands; five occur on the slopes of the five volcanoes on the largest island, Isabela (Fig. 1). Several of the surviving subspecies of Galápagos tortoises are seriously endangered. For example, a single male nicknamed Lonesome George represents G. nigra abingdoni from Pinta Island. The decline of the populations began in the 17th century when buccaneers and whalers collected tortoises as a source of fresh meat; the animals can survive up to six months without food or water. An estimated 200,000 animals were taken (2). More lastingly, these boats also introduced exotic pests such as rats, dogs, and goats. Today, these feral animals, along with continued poaching, represent the greatest threat to the survival of the tortoises. The designated subspecies differ in a number of morphological characters, such as carapace shape (domed vs. saddle-backed), maximum adult size, and length of the neck and limbs. These differences do not, however, permit clear discrimination between individuals of all subspecies (9). Similarly, an allozyme survey that included seven G. nigra subspecies and the three South American Geochelone failed to reveal patterns of genetic differentiation among the subspecies or to identify any of the mainland species as the closest living relative to the Galápagos tortoises (10). 
A robust phylogeny of the Galápagos tortoise complex and its relatives is thus unavailable currently, and it is much needed to help resolve the long-term debate over the systematics of this group, as well as to clarify subspecies distinctiveness as a basis for prioritizing conservation efforts. DNA was extracted from blood stored in 100 mM Tris/100 mM EDTA/2% SDS buffer by using the Easy DNA extraction kit (Invitrogen). Modified primer pair 16Sar+16Sbr (12) was used for PCR amplifications of 568 bp of the 16S rRNA gene. A 386-bp-long fragment of the cytochrome b (cytb) gene was amplified by using the cytb GLU: 5'-TGACATGAAAAAYCAYCGTTG (13) and cytb B2: H15149 (14) primers. The D-loop region was amplified with primers based on conserved sequences of the cytb and 12S rRNA genes, which flank the D loop in tortoises. Primer GT12STR (5'-ATCTTGGCAACTTCAGTGCC-3') is at the 5' end of the 12S ribosomal gene, and primer CYTTOR (5' GCTTAACTAAAGCACCGGTCTTG-3') is at the 3' end of the cytb gene. These primers amplify the D loop from several Geochelone species (unpublished observations). Internal primers specific to the D loop of G. nigra were used to amplify and sequence a 708-bp fragment of the D loop (corresponding to 73.7% of the region). Internal primer sequences are available from the senior author upon request. Double-stranded PCR amplifications and automated sequencing were carried out as described (11). To promote accuracy, strands were sequenced in both directions for each individual. In addition to blood from live animals, we also obtained samples of skin from three tortoises collected on Pinta Island in 1906 and now in the California Academy of Science, San Francisco (specimen numbers CAS 8110, CAS 8111, and CAS 8113). One-half gram of skin was surface-cleaned with sterile water and subjected to 20 min of UV irradiation. The skin was pulverized in liquid nitrogen and suspended in buffer A of the Easy DNA kit. Proteinase K (100 µg/ml) was added and the sample was incubated for 24 hr at 58°C, following the Easy DNA procedure with the addition of a second chloroform extraction. The samples were washed in a Centricon 30 microconcentrator (Amicon) and suspended in 100 µl of 10 mM Tris/1 mM EDTA, pH 8.0. Only one skin sample was extracted at a time. Several rounds of PCR were performed, finally yielding four fragments of about 150 bp each, representing about 75% of the sequence obtained from blood samples. All procedures on the skin samples (until PCR products were obtained) were done in a room separate from that where all other DNA work was done. Because of the high sequence similarity, sequences were aligned by eye. The alignment was also checked by using CLUSTAL W (15). Alignments are available from the first author. Phylogenetic analyses were carried out on each gene region and on the combined data set. G. pardalis was used as the outgroup. Phylogenetic inferences were made by using maximum parsimony (MP) (16), maximum likelihood (ML) (17), and neighbor joining (NJ) (18). MP trees were reconstructed by the branch-and-bound search method (19) with ACCTRAN (accelerated transformation) characterstate optimization as implemented in PAUP* (20). Various weighting methods were used: all substitutions unweighted, transversions (Tv) weighted 3 times transitions (Ti), or only Tv. For cytb, MP analyses were also performed, excluding Ti from third positions of all codons. 
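The kind of tree-building described in these methods can be experimented with using Biopython. The sketch below builds a neighbor-joining tree from a toy alignment; the four short sequences are invented stand-ins for the real 16S rRNA and cytb fragments, and simple 'identity' p-distances replace the ML distances used in the study.

from Bio import Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord

# Toy 20-bp "sequences"; in practice these would be the aligned mtDNA fragments.
aln = MultipleSeqAlignment([
    SeqRecord(Seq("ACGTACGTACGTACGTACGT"), id="G_nigra"),
    SeqRecord(Seq("ACGTACGTACGTACGTACGA"), id="G_chilensis"),
    SeqRecord(Seq("ACGTTCGTACGAACGTACTA"), id="G_carbonaria"),
    SeqRecord(Seq("ACGTTCGTACGAACGAACTA"), id="G_denticulata"),
])

dm = DistanceCalculator("identity").get_distance(aln)  # pairwise p-distance matrix
tree = DistanceTreeConstructor().nj(dm)                # neighbor-joining tree
Phylo.draw_ascii(tree)                                 # in this toy data, nigra pairs with chilensis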
ML analyses were carried out using PAUP* with an empirically determined transition/transversion ratio (9.19), and rates were assumed to follow a gamma distribution with an empirically determined shape parameter (α = 0.149). Sequences were added randomly, with 1000 replicates and TBR as the branch-swapping algorithm. For the NJ analysis, ML distances were calculated by PAUP* with the empirically determined gamma parameter. PAUP* was used to obtain NJ trees based on those distance matrices. The incongruence length difference test (21) was carried out as implemented in PAUP* (in which it is called the partition homogeneity test). As suggested by Cunningham (22), invariant characters were always removed before applying the test. Templeton's (23) test was used to compare competing phylogenetic hypotheses statistically, by using the conservative two-tailed Wilcoxon rank sum test (24). The significance of branch length in NJ trees was tested by using the confidence probability (CP) test as implemented in MEGA (25). The strength of support for individual nodes was tested by the bootstrap method (26) with 1,000 (MP and NJ) or 300 (ML) pseudoreplicates. Rate homogeneity among lineages was tested by Tajima's one-degree-of-freedom method (27). Fig. 2 shows the 50% majority rule consensus tree for MP generated from the cytb and 16S rRNA data combined, by using a branch-and-bound search. There are 167 variable sites, of which 66 are parsimony-informative; there were 12 MP trees of equal length (196 steps), with a consistency index of 0.6667 (excluding uninformative characters). We emphasize that all three reconstruction methods, ML, MP, and NJ, produced very similar topologies, as did all weightings of transitions and transversions; all of the lettered nodes in Fig. 2 were found in all cases. When multiple tree reconstruction methods produce nearly the same tree, there is more confidence in the accuracy of the tree (28). Table 2 presents the statistical analysis of the well-supported nodes. We were particularly interested in identifying the closest extant relative of the Galápagos tortoises; we therefore performed other tests to ask whether alternative trees are statistically worse than those in Fig. 2. Table 3 presents the results of these tests. Constraining one of the other mainland South American species to be the sister taxon to G. nigra, or treating the three mainland species as a trichotomy, produced significantly less parsimonious trees by Templeton's (23) test, even with the relatively conservative two-tailed Wilcoxon rank sum test (24). For the NJ tree, the crucial branch separating the chilensis/nigra clade from the other South American species is significant at the 98% level by the confidence probability test in MEGA (25). Estimates of genetic distances also support the sister-taxa status of G. chilensis and G. nigra. Among the subspecies of G. nigra, the maximum likelihood distances range from 0 to 0.0124 with a mean of 0.0066 ± 0.004 (SD). Between subspecies of G. nigra and G. chilensis, the average distance is 0.0788 ± 0.005. Between G. nigra and G. carbonaria or G. denticulata, ML distances are 0.118 ± 0.005 and 0.116 ± 0.003, respectively. Fig. 2 also reveals some resolution of the relationships among the Galápagos subspecies. One point of interest is that the five named subspecies on Isabela do not form a monophyletic clade. The four southern Isabela subspecies are sister taxa to the subspecies from Santa Cruz, whereas the northernmost subspecies, G. n.
becki, is the sister taxon to G. n. darwini on San Salvador. It is a geographically reasonable scenario for southern Isabela to be colonized from Santa Cruz and northern Isabela to be colonized from San Salvador (Fig. 1). There is virtually no evidence for genetic differentiation among the four southern Isabela subspecies. The cytb sequence is identical in all individuals sampled. There are only three differences in the 16S rRNA sequence among the eight samples of these four named subspecies. We have also sequenced what is generally the fastest evolving region of mtDNA, the D loop, in individuals from these four subspecies to test whether this region gives evidence of genetic differentiation (Fig. 3). Only 17 of the 708 sites varied among the 23 individuals sequenced, and there were seven equally most parsimonious trees. The tree is only 23 steps long for the 23 sequences, with only seven nodes having bootstrap values above 50%. The only subspecies for which there is some evidence of a monophyletic clade is G. n. microphyes, but only two individuals have been studied and the bootstrap for this clade is not strong (Fig. 2). Furthermore, trees with G. n. microphyes constrained to not be monophyletic are two steps longer and not significantly worse than the MP tree by Templeton's (23) test, nor is the branch leading to the two G. n. microphyes statistically significant by the confidence probability test. We conclude that there is little or no evidence for significant genetic differentiation corresponding to the four southernmost named subspecies from Isabela. (Genetic differentiation of the other subspecies is addressed under Discussion.) One surprise was the very close relationship of Lonesome George, the sole representative of the G. n. abingdoni subspecies from Pinta, to the subspecies from San Cristóbal and Española (Fig. 2). For cytb and 16S rRNA, the samples from Española and Lonesome George are identical, whereas there is one transition difference in the samples from San Cristóbal. To check whether this sole survivor could have been a recent transplant to Pinta, we obtained samples of skin from three animals collected on Pinta in 1906. Although we could obtain only about 75% of the sequence that we had for the other samples, these segments of the cytb and 16S rRNA are identical to those from Lonesome George; this 75% of the sequences contains all the synapomorphies that place Lonesome George in the San Cristóbal/Española clade. Although G. chilensis is the closest living relative of the Galápagos tortoise, it is unlikely that the direct ancestor of G. nigra was a small-bodied tortoise. Several lines of reasoning (for review, see ref. 2) suggest that gigantism was a preadapted condition for successful colonization of remote oceanic islands, rather than an evolutionary trend triggered by the insular environment. Giant tortoises colonized the Seychelles at least three separate times (29). Fossil giant tortoises are known from mainland South America, and morphological analysis of these and extant species are consistent with a clade containing giant tortoise fossils and G. chilensis (30). Further evidence that the split between the ancestral lineages that gave rise to G. chilensis and G. nigra occurred on mainland South America comes from time estimates based on a molecular clock. We applied the Tajima (27) test of the clocklike behavior of DNA sequences to pairwise comparisons between G. chilensis and Galápagos subspecies, using in turn G. carbonaria and G. denticulata as the outgroup. 
The tests were done on transitions and transversions together, and on transversions only. We could not reject the hypothesis of constant substitution rates for the vast majority (94%) of comparisons for both genes. Therefore, we assumed that the 16S rRNA and cytb genes were evolving linearly with time. To calculate approximate divergence times between the lineages, we used published mtDNA rates estimated from turtles and other vertebrate ectotherms (31-33). Depending on which estimate and gene are used, the predicted time of the split between G. nigra and G. chilensis varies, but most put the date between 6 and 12 myr ago. The oldest extant islands (San Cristóbal and Española) date to less than 5 myr (7), although sea mounts now submerged may have formed islands more than 10 myr ago (34). However, given the existence of mainland giant fossils and the argument that gigantism was required for long-distance rafting, invoking colonization on now-submerged islands would seem less reasonable than a split on the mainland before colonization, with the immediate ancestral lineage now extinct. The oldest split within G. nigra is estimated at no more than 2 myr ago, consistent with diversification on the existing islands. Times of divergence and colonization of other prominent Galápagos organisms have been estimated from molecular data. The diversification of Darwin's finches has been estimated to have occurred within the age of the extant islands (35). On the other hand, the endemic marine (Amblyrhynchus) and land (Conolophus) iguanas are estimated to have diverged from each other between 10 and 20 myr ago (36, 37). As argued by Rassmann (37), it is likely that the split occurred on the Archipelago; therefore, it must have occurred on now-submerged islands. Similarly, the diversification of the lava lizards (Tropidurus) and geckos (Phyllodactylus) was estimated to have begun around 9 myr ago, although in this case there is some evidence indicating multiple colonizations (38, 39). Taxonomic Status of Isabela Subspecies. From Fig. 2, it seems clear that the largest and youngest island with tortoise populations, Isabela, was colonized at least twice independently. The four southern subspecies are sister taxa to the Santa Cruz subspecies (G. n. porteri), whereas the subspecies on the northernmost volcano (G. n. becki) is sister to the subspecies (G. n. darwini) on San Salvador. We have found no significant genetic differentiation among the four southern Isabela subspecies (microphyes, vandenburghi, guntheri, and vicina), even for what should be the fastest-evolving region of mtDNA (Fig. 3). The lack of genetic differentiation is perhaps not surprising in light of the age of the Isabela volcanoes, estimated to be less than 0.5 myr (7). For colonization by tortoises, most volcanic activity must have ceased and sufficient time must have passed for appropriate vegetation to develop. Given this relatively short time, coupled with a long generation time [age of first reproduction is over 20 years (8)], significant genetic differentiation among these populations is unlikely. The genetic distinctness of the population on the northernmost volcano is accounted for by an independent colonization from another island. The lack of genetic differentiation of these four Isabela subspecies is consistent with the morphological assessment of at least one authority.
Pritchard (2) suggested that the four southern Isabela subspecies do not warrant separate subspecific status, but rather that the described differences are either attributable to environmental differences (especially of rainfall, food availability, and humidity), or do not show geographic correlation but are artifacts of age and sex. This, coupled with our results, would seem to warrant a reassessment of the taxonomic status of these subspecies. The data presented here also indicate little or no genetic differentiation between or among subspecies connected to nodes c, d, and e in Fig. 2. However, faster-evolving regions of the mtDNA do reveal diagnostic differences among all subspecies (unpublished data), with the exception of the four southern Isabela populations, for which none of our data indicate geographically structured differentiation. Because a major purpose of the present study was to identify the mainland sister taxon to the Galápagos lineage, we emphasize here relatively slowly evolving regions. The molecular diagnoses of subspecies, based on larger sample sizes than are available now, should be addressed in the near future. Lonesome George. Perhaps the greatest surprise in our data was the close relationship of the single living representative of the G. n. abingdoni subspecies from Pinta to subspecies on Española and San Cristóbal. Most other relationships make biogeographic sense. The three well-supported nodes in Fig. 2 (c, d, and e) all connect subspecies on islands geographically close to one another (Fig. 1). Pinta is the farthest major island from Española and San Cristóbal, being about 300 km distant. One possibility is that Lonesome George actually did originate on Española or San Cristóbal and was transported to Pinta. Morphologically, all three subspecies are considered saddle-backed, although subtle differences among them have been noted (2). Fortunately, we had available to us skin samples from three specimens collected on Pinta in 1906. The DNA sequences we obtained from these skins are identical to those of Lonesome George. Thus, it is reasonable to conclude that Lonesome George is the sole (known) living survivor of this subspecies. Although, based solely on geographic distance, it seems unlikely that the Pinta subspecies should be so closely related to those from Española and San Cristóbal, consideration of oceanic currents makes it plausible. There is a strong current running northwest from the northern coast of San Cristóbal leading directly to the area around Pinta (40). These tortoises are not strong swimmers, and thus their direction of rafting in the ocean must have depended largely on currents. Attempts to breed Lonesome George have been unsuccessful. However, he has been placed with females primarily from northern Isabela because, given its proximity, it was thought to be the most likely origin of the Pinta population (Fig. 1). Now that we see he has close genetic affinities to the Española and San Cristóbal subspecies, perhaps they would be a more appropriate source of a mate for this sole survivor. Copyright © 1999, The National Academy of Sciences
http://www.anapsid.org/chaco4.html
13
92
- This page is about the measurement using water as a reference. For a general use of specific gravity, see relative density. See intensive property for the property implied by "specific".

Specific gravity is the ratio of the density of a substance to the density (mass of the same unit volume) of a reference substance. Apparent specific gravity is the ratio of the weight of a volume of the substance to the weight of an equal volume of the reference substance. The reference substance is nearly always water for liquids or air for gases. Temperature and pressure must be specified for both the sample and the reference. Pressure is nearly always 1 atm (101.325 kPa). Temperatures for both sample and reference vary from industry to industry. In British brewing practice the specific gravity as specified above is multiplied by 1000. Specific gravity is commonly used in industry as a simple means of obtaining information about the concentration of solutions of various materials such as brines, hydrocarbons, sugar solutions (syrups, juices, honeys, brewers wort, must, etc.) and acids.

Specific gravity, being a ratio of densities, is a dimensionless quantity. It varies with temperature and pressure; reference and sample must be compared at the same temperature and pressure, or corrected to a standard reference temperature and pressure. Substances with a specific gravity of 1 are neutrally buoyant in water; those with SG greater than 1 are denser than water and so (ignoring surface tension effects) will sink in it, and those with an SG of less than 1 are less dense than water and so will float. In scientific work the relationship of mass to volume is usually expressed directly in terms of the density (mass per unit volume) of the substance under study. It is in industry where specific gravity finds wide application, often for historical reasons.

True specific gravity can be expressed mathematically as

SG_true = ρ_sample / ρ_H2O

where ρ_sample is the density of the sample and ρ_H2O is the density of water. The apparent specific gravity is simply the ratio of the weights of equal volumes of sample and water measured in air:

SG_apparent = W_A,sample / W_A,H2O

where W_A,sample represents the weight of the sample and W_A,H2O the weight of water, both measured in air. It can be shown that true specific gravity can be computed from different properties:

SG_true = ρ_sample / ρ_H2O = (g V ρ_sample) / (g V ρ_H2O) = W_V,sample / W_V,H2O

where g is the local acceleration due to gravity, V is the volume of the sample and of water (the same for both), ρ_sample is the density of the sample, ρ_H2O is the density of water, and W_V denotes a weight obtained in vacuum.

The density of water varies with temperature and pressure, as does the density of the sample, so it is necessary to specify the temperatures and pressures at which the densities or weights were determined. Measurements are nearly always made at nominally 1 atmosphere (1013.25 mb ± the variations caused by changing weather patterns), but as specific gravity usually refers to highly incompressible aqueous solutions or other incompressible substances (such as petroleum products), variations in density caused by pressure are usually neglected, at least where apparent specific gravity is being measured. For true (in vacuo) specific gravity calculations, air pressure must be considered (see below). Temperatures are specified by the notation T_s/T_r, with T_s representing the temperature at which the sample's density was determined and T_r the temperature at which the reference (water) density is specified.
For example, SG (20°C/4°C) would be understood to mean that the density of the sample was determined at 20°C and that of the water at 4°C. Taking into account different sample and reference temperatures, we note that while SG_H2O = 1.000000 (20°C/20°C), it is also the case that SG_H2O = 0.998203/0.999972 = 0.998231 (20°C/4°C). Here temperature is being specified using the current ITS-90 scale, and the densities used here and in the rest of this article are based on that scale. On the previous IPTS-68 scale the densities at 20°C and 4°C are, respectively, 0.9982071 and 0.9999720, resulting in an SG (20°C/4°C) value for water of 0.9982343.

As the principal use of specific gravity measurements in industry is the determination of the concentrations of substances in aqueous solutions, and these are found in tables of SG vs. concentration, it is extremely important that the analyst enter the table with the correct form of specific gravity. For example, in the brewing industry, the Plato table, which lists sucrose concentration by weight against true SG, was originally (20°C/4°C), i.e. based on measurements of the density of sucrose solutions made at laboratory temperature (20°C) but referenced to the density of water at 4°C, which is very close to the temperature at which water has its maximum density, equal to 0.999972 g·cm−3 (999.972 kg·m−3 in SI units, or 62.43 lbm·ft−3 in United States customary units). The ASBC table in use today in North America, while derived from the original Plato table, is for apparent specific gravity measurements at (20°C/20°C) on the IPTS-68 scale, where the density of water is 0.9982071 g·cm−3. In the sugar, soft drink, honey, fruit juice and related industries, sucrose concentration by weight is taken from a table prepared by A. Brix, which uses SG (17.5°C/17.5°C). As a final example, the British SG units are based on reference and sample temperatures of 60°F and are thus (15.56°C/15.56°C).

Given the specific gravity of a substance, its actual density can be calculated by rearranging the above formula:

ρ_sample = SG × ρ_H2O

Occasionally a reference substance other than water is specified (for example, air), in which case specific gravity means density relative to that reference.

Measurement: apparent and true specific gravity

Specific gravity can be measured in a number of ways. The following illustration involving the use of the pycnometer is instructive. A pycnometer is simply a bottle which can be precisely filled to a specific, but not necessarily accurately known, volume V. Placed upon a balance of some sort it will exert a force

F_b = g (m_b − ρ_a m_b/ρ_b)

where m_b is the mass of the bottle and g the gravitational acceleration at the location at which the measurements are being made. ρ_a is the density of the air at the ambient pressure and ρ_b is the density of the material of which the bottle is made (usually glass), so that the second term is the mass of air displaced by the glass of the bottle, whose weight, by Archimedes' Principle, must be subtracted. The bottle is, of course, filled with air, but as that air displaces an equal amount of air, the weight of that air is canceled by the weight of the air displaced. Now we fill the bottle with the reference fluid, e.g. pure water. The force exerted on the pan of the balance becomes:

F_w = g (m_b − ρ_a m_b/ρ_b + V ρ_w − V ρ_a)

If we subtract the force measured on the empty bottle from this (or tare the balance before making the water measurement) we obtain

F_w,n = g V (ρ_w − ρ_a)

where the subscript n indicates that this force is net of the force of the empty bottle. The bottle is now emptied, thoroughly dried and refilled with the sample. The force, net of the empty bottle, is now

F_s,n = g V (ρ_s − ρ_a)

where ρ_s is the density of the sample.
The ratio of the sample and water forces is:

SG_A = F_s,n / F_w,n = (ρ_s − ρ_a) / (ρ_w − ρ_a)

This is called the apparent specific gravity, denoted by the subscript A, because it is what we would obtain if we took the ratio of net weighings in air from an analytical balance or used a hydrometer (the stem displaces air). Note that the result does not depend on the calibration of the balance; the only requirement on it is that it read linearly with force. Nor does SG_A depend on the actual volume of the pycnometer. Further manipulation, and finally substitution of SG_V = ρ_s/ρ_w, the true specific gravity (the subscript V is used because this is often referred to as the specific gravity in vacuo), gives the relationship between apparent and true specific gravity:

SG_A = (SG_V ρ_w − ρ_a) / (ρ_w − ρ_a)

In the usual case we will have measured weights and want the true specific gravity. This is found from

SG_V = SG_A (1 − ρ_a/ρ_w) + ρ_a/ρ_w

Since the density of dry air at 1013.25 mb at 20°C is 0.001205 g·cm−3 and that of water is 0.998203 g·cm−3, the difference between true and apparent specific gravities for a substance with specific gravity (20°C/20°C) of about 1.100 would be 0.000120. Where the specific gravity of the sample is close to that of water (for example dilute ethanol solutions) the correction is even smaller.

Digital density meters

Hydrostatic pressure-based instruments: This technology relies upon Pascal's Principle, which states that the pressure difference between two points within a vertical column of fluid depends upon the vertical distance between the two points, the density of the fluid and the gravitational force. This technology is often used for tank gauging applications as a convenient means of liquid level and density measurement.

Vibrating element transducers: This type of instrument requires a vibrating element to be placed in contact with the fluid of interest. The resonant frequency of the element is measured and is related to the density of the fluid by a characterization that is dependent upon the design of the element. In modern laboratories precise measurements of specific gravity are made using oscillating U-tube meters. These are capable of measurement to 5 to 6 places beyond the decimal point and are used in the brewing, distilling, pharmaceutical, petroleum and other industries. The instruments measure the actual mass of fluid contained in a fixed volume at temperatures between 0 and 80°C, but as they are microprocessor based they can calculate apparent or true specific gravity and contain tables relating these to the strengths of common acids, sugar solutions, etc. The vibrating fork immersion probe is another good example of this technology. This category also includes many Coriolis-type mass flow meters, which are widely used in the chemical and petroleum industries for high-accuracy mass flow measurement and can be configured to also output density information based on the resonant frequency of the vibrating flow tubes.

Ultrasonic transducer: Ultrasonic waves are passed from a source, through the fluid of interest, and into a detector which measures the acoustic spectroscopy of the waves. Fluid properties such as density and viscosity can be inferred from the spectrum.

Radiation-based gauge: Radiation is passed from a source, through the fluid of interest, and into a scintillation detector, or counter. As the fluid density increases, the detected radiation "counts" will decrease. The source is typically the radioactive isotope cesium-137, with a half-life of about 30 years.
A key advantage of this technology is that the instrument is not required to be in contact with the fluid – typically the source and detector are mounted on the outside of tanks or piping.

Buoyant force transducer: The buoyancy force produced by a float in a homogeneous liquid is equal to the weight of the liquid that is displaced by the float. Since buoyancy force is linear with respect to the density of the liquid within which the float is submerged, the measure of the buoyancy force yields a measure of the density of the liquid. One commercially available unit claims the instrument is capable of measuring specific gravity with an accuracy of ±0.005 SG units. The submersible probe head contains a mathematically characterized spring-float system. When the head is immersed vertically in the liquid, the float moves vertically, and the position of the float controls the position of a permanent magnet whose displacement is sensed by a concentric array of Hall-effect linear displacement sensors. The output signals of the sensors are mixed in a dedicated electronics module that provides an output voltage whose magnitude is a direct linear measure of the quantity to be measured.

In-line continuous measurement: Slurry is weighed as it travels through the metered section of pipe using a patented, high-resolution load cell. This section of pipe is of optimal length such that a truly representative mass of the slurry may be determined. This representative mass is then interrogated by the load cell 110 times per second to ensure accurate and repeatable measurement of the slurry.

Examples:
- Helium gas has a density of 0.164 g/L; it is 0.139 times as dense as air.
- Air has a density of 1.18 g/L.
- Ethyl alcohol has a specific gravity of 0.789, so it is 0.789 times as dense as water.
- Water has a specific gravity of 1.
- Table salt has a specific gravity of 2.17, so it is 2.17 times as dense as water.
- Aluminum has a specific gravity of 2.7, so it is 2.7 times as dense as water.
- Iron has a specific gravity of 7.87, so it is 7.87 times as dense as water.
- Lead has a specific gravity of 11.35, so it is 11.35 times as dense as water.
- Mercury has a specific gravity of 13.56, so it is 13.56 times as dense as water.
- Gold has a specific gravity of 19.3, so it is 19.3 times as dense as water.
- Osmium, the densest naturally occurring chemical element, has a specific gravity of 22.59, so it is 22.59 times as dense as water.
- Urine normally has a specific gravity between 1.003 and 1.035.
- Blood normally has a specific gravity of ~1.060.
(Samples may vary, so most of these figures are approximate.)

References:
- Hough, J.S., Briggs, D.E., Stevens, R. and Young, T.W. Malting and Brewing Science, Vol. II: Hopped Wort and Beer, Chapman and Hall, London, 1991, p. 881.
- Bettin, H.; Spieweck, F.: "Die Dichte des Wassers als Funktion der Temperatur nach Einführung der Internationalen Temperaturskala von 1990", PTB-Mitteilungen 100 (1990), pp. 195–196.
- ASBC Methods of Analysis, Preface to Table 1: Extract in Wort and Beer, American Society of Brewing Chemists, St. Paul, 2009.
- ASBC Methods of Analysis, op. cit., Table 1: Extract in Wort and Beer.
- DIN 51757 (04.1994): Testing of mineral oils and related materials; determination of density.
- Density – VEGA Americas, Inc. Ohmartvega.com. Retrieved 2011-11-18.
- Process Control Digital Electronic Hydrometer. Gardco. Retrieved 2011-11-18.
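To make the pycnometer algebra above concrete, here is a small numeric sketch (not from the article) applying the apparent-to-true conversion with the air and water densities quoted earlier; the reading SG_A = 1.100 is just a sample value.

rho_air = 0.001205      # dry air at 20 °C and 1013.25 mb, g/cm^3
rho_water = 0.998203    # water at 20 °C (ITS-90), g/cm^3

def true_sg(apparent_sg):
    """Convert an apparent SG (weighings in air) to the true, in-vacuo SG."""
    return apparent_sg * (1 - rho_air / rho_water) + rho_air / rho_water

sg_a = 1.100
sg_v = true_sg(sg_a)
print(sg_v, abs(sg_v - sg_a))   # the difference is about 0.00012, as stated above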
http://en.wikipedia.org/wiki/Specific_gravity
13
61
In mathematics, a group is an algebraic structure consisting of a set together with an operation that combines any two of its elements to form a third element. To qualify as a group, the set and the operation must satisfy a few conditions called group axioms, namely associativity, identity and invertibility. While these are familiar from many mathematical structures, such as number systems—for example, the integers endowed with the addition operation form a group—the formulation of the axioms is detached from the concrete nature of the group and its operation. This allows one to handle entities of very different mathematical origins in a flexible way, while retaining essential structural aspects of many objects in abstract algebra and beyond. The ubiquity of groups in numerous areas—both within and outside mathematics—makes them a central organizing principle of contemporary mathematics. Groups share a fundamental kinship with the notion of symmetry. A symmetry group encodes symmetry features of a geometrical object: it consists of the set of transformations that leave the object unchanged, and the operation of combining two such transformations by performing one after the other. Such symmetry groups, particularly the continuous Lie groups, play an important role in many academic disciplines. Matrix groups, for example, can be used to understand fundamental physical laws underlying special relativity and symmetry phenomena in molecular chemistry. The concept of a group arose from the study of polynomial equations, starting with Évariste Galois in the 1830s. After contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory—a very active mathematical discipline—studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. In addition to their abstract properties, group theorists also study the different ways in which a group can be expressed concretely (its group representations), both from a theoretical and a computational point of view. A particularly rich theory has been developed for finite groups, which culminated with the monumental classification of finite simple groups completed in 1983. Since the mid-1980s geometric group theory, which studies finitely generated groups as geometric objects, has become a particularly active area in group theory.

One of the most familiar groups is the set of integers Z, which consists of the numbers ..., −4, −3, −2, −1, 0, 1, 2, 3, 4, ... The following properties of integer addition serve as a model for the abstract group axioms given in the definition below:
- For any two integers a and b, the sum a + b is also an integer.
- For all integers a, b and c, (a + b) + c = a + (b + c).
- If a is any integer, then 0 + a = a + 0 = a.
- For every integer a, there is an integer b such that a + b = b + a = 0 (namely b = −a).

The integers, together with the operation "+", form a mathematical object belonging to a broad class sharing similar structural aspects. To appropriately understand these structures without dealing with every concrete case separately, the following abstract definition is developed to encompass the above example along with many others, one of which is the symmetry group detailed below. A group is a set, G, together with an operation "•" that combines any two elements a and b to form another element, denoted a • b. The symbol "•" is a general placeholder for a concretely given operation, such as the addition above.
To qualify as a group, the set and operation, (G, •), must satisfy four requirements known as the group axioms:
- Closure: for all a, b in G, the result a • b is also in G.
- Associativity: for all a, b and c in G, (a • b) • c = a • (b • c).
- Identity element: there exists an element e in G such that for every element a in G, e • a = a • e = a.
- Inverse element: for each a in G, there exists an element b in G such that a • b = b • a = e, where e is the identity element.

The order in which the group operation is carried out can be significant. In other words, the result of combining element a with element b need not yield the same result as combining element b with element a; the equation a • b = b • a may not always be true. This equation does always hold in the group of integers under addition, because a + b = b + a for any two integers (commutativity of addition). However, it does not always hold in the symmetry group below. Groups for which the equation a • b = b • a always holds are called abelian (in honor of Niels Abel). Thus, the integer addition group is abelian, but the following symmetry group is not.

The symmetry group of the square, D4, consists of eight elements:
- id (keeping the square as it is)
- r1 (rotation by 90° right)
- r2 (rotation by 180° right)
- r3 (rotation by 270° right)
- fv (vertical flip)
- fh (horizontal flip)
- fd (diagonal flip)
- fc (counter-diagonal flip)
(In the accompanying figure the vertices are colored and numbered only to visualize the operations.)

Two symmetries are combined by performing one after the other; the result of performing first a and then b is written b • a ("apply the symmetry b after performing the symmetry a"; the right-to-left notation stems from the composition of functions). The group table lists the results of all such possible compositions. For example, rotating by 270° right (r3) and then flipping horizontally (fh) is the same as performing a reflection along the diagonal (fd). Using the above symbols, highlighted in blue in the group table: fh • r3 = fd.

[Group table caption: The elements id, r1, r2, and r3 form a subgroup, highlighted in red (upper left region). A left and a right coset of this subgroup are highlighted in green (last row) and yellow (last column), respectively.]

The group axioms can be checked for this set of symmetries:

Closure: r3 • fh = fc, i.e. rotating 270° right after flipping horizontally equals flipping along the counter-diagonal (fc). Indeed, every other combination of two symmetries still gives a symmetry, as can be checked using the group table.

Associativity: (a • b) • c = a • (b • c) means that the composition of the three elements is independent of the priority of the operations, i.e. composing first a after b, and c to the result thereof, amounts to performing a after the composition of b and c. For example, (fd • fv) • r2 = fd • (fv • r2) can be checked using the group table:
(fd • fv) • r2 = r3 • r2 = r1, which equals
fd • (fv • r2) = fd • fh = r1.

Identity element: the symmetry id leaves everything unchanged, so for any symmetry a, id • a = a and a • id = a.

Inverse element: every symmetry can be undone; for example fh • fh = id and r3 • r1 = r1 • r3 = id.

In contrast to the group of integers above, where the order of the operation is irrelevant, it does matter in D4: fh • r1 = fc but r1 • fh = fd. In other words, D4 is not abelian, which makes the group structure more difficult than for the integers introduced first.

See main article: History of group theory.

The modern concept of an abstract group developed out of several fields of mathematics. The original motivation for group theory was the quest for solutions of polynomial equations of degree higher than 4. The 19th-century French mathematician Évariste Galois, extending prior work of Paolo Ruffini and Joseph-Louis Lagrange, gave a criterion for the solvability of a particular polynomial equation in terms of the symmetry group of its roots (solutions). The elements of such a Galois group correspond to certain permutations of the roots. At first, Galois' ideas were rejected by his contemporaries, and published only posthumously. More general permutation groups were investigated in particular by Augustin Louis Cauchy.
Arthur Cayley's On the theory of groups, as depending on the symbolic equation θn = 1 (1854) gives the first abstract definition of a finite group. Geometry was a second field in which groups were used systematically, especially symmetry groups as part of Felix Klein's 1872 Erlangen program. After novel geometries such as hyperbolic and projective geometry had emerged, Klein used group theory to organize them in a more coherent way. Further advancing these ideas, Sophus Lie founded the study of Lie groups in 1884. The third field contributing to group theory was number theory. Certain abelian group structures had been used implicitly in Carl Friedrich Gauss' number-theoretical work Disquisitiones Arithmeticae (1798), and more explicitly by Leopold Kronecker. In 1847, Ernst Kummer led early attempts to prove Fermat's Last Theorem to a climax by developing groups describing factorization into prime numbers. The convergence of these various sources into a uniform theory of groups started with Camille Jordan's Traité des substitutions et des équations algébriques (1870). Walther von Dyck (1882) gave the first statement of the modern definition of an abstract group. As of the 20th century, groups gained wide recognition by the pioneering work of Ferdinand Georg Frobenius and William Burnside, who worked on representation theory of finite groups, Richard Brauer's modular representation theory and Issai Schur's papers. The theory of Lie groups, and more generally locally compact groups was pushed by Hermann Weyl, Élie Cartan and many others. Its algebraic counterpart, the theory of algebraic groups, was first shaped by Claude Chevalley (from the late 1930s) and later by pivotal work of Armand Borel and Jacques Tits. The University of Chicago's 1960–61 Group Theory Year brought together group theorists such as Daniel Gorenstein, John G. Thompson and Walter Feit, laying the foundation of a collaboration that, with input from numerous other mathematicians, classified all finite simple groups in 1982. This project exceeded previous mathematical endeavours by its sheer size, in both length of proof and number of researchers. Research is ongoing to simplify the proof of this classification. These days, group theory is still a highly active mathematical branch crucially impacting many other fields. See main article: Elementary group theory. Basic facts about all groups that can be obtained directly from the group axioms are commonly subsumed under elementary group theory. For example, repeated applications of the associativity axiom show that the unambiguity of a • b • c = (a • b) • c = a • (b • c)generalizes to more than three factors. Because this implies that parentheses can be inserted anywhere within such a series of terms, parentheses are usually omitted. The axioms may be weakened to assert only the existence of a left identity and left inverses. Both can be shown to be actually two-sided, so the resulting definition is equivalent to the one given above. Two important consequences of the group axioms are the uniqueness of the identity element and the uniqueness of inverse elements. There can be only one identity element in a group, and each element in a group has exactly one inverse element. Thus, it is customary to speak of the identity, and the inverse of an element. To prove the uniqueness of an inverse element of a, suppose that a has two inverses, denoted l and r. 
Then

l = l • e         (as e is the identity element)
  = l • (a • r)   (because r is an inverse of a, so e = a • r)
  = (l • a) • r   (by associativity, which allows rearranging the parentheses)
  = e • r         (since l is an inverse of a, i.e. l • a = e)
  = r             (for e is the identity element)

Hence the two extremal terms l and r are connected by a chain of equalities, so they agree. In other words, there is only one inverse element of a.

In groups, it is possible to perform division: given elements a and b of the group G, there is exactly one solution x in G to the equation x • a = b. In fact, right multiplication of the equation by a−1 gives the solution x = x • a • a−1 = b • a−1. Similarly, there is exactly one solution y in G to the equation a • y = b, namely y = a−1 • b. In general, x and y need not agree.

The following sections use mathematical symbols such as X = {x, y, z} to denote a set X containing elements x, y, and z, or alternatively x ∈ X to restate that x is an element of X. The notation f: X → Y means f is a function assigning to every element of X an element of Y.

See also: Glossary of group theory.

To understand groups beyond the level of mere symbolic manipulations as above, more structural concepts have to be employed. There is a conceptual principle underlying all of the following notions: to take advantage of the structure offered by groups (which, for example, sets—being "structureless"—don't have), constructions related to groups have to be compatible with the group operation. This compatibility manifests itself in the following notions in various ways. For example, groups can be related to each other via functions called group homomorphisms. By the mentioned principle, they are required to respect the group structures in a precise sense. The structure of groups can also be understood by breaking them into pieces called subgroups and quotient groups. The principle of "preserving structures"—a recurring topic in mathematics throughout—is an instance of working in a category, in this case the category of groups.

See main article: Group homomorphism.

Group homomorphisms are functions that preserve group structure. A function a: G → H between two groups is a homomorphism if the equation a(g • k) = a(g) • a(k) holds for all elements g, k in G, i.e. the result is the same when performing the group operation after or before applying the map a. This requirement ensures that a(1G) = 1H, and also a(g)−1 = a(g−1) for all g in G. Thus a group homomorphism respects all the structure of G provided by the group axioms. Two groups G and H are called isomorphic if there exist group homomorphisms a: G → H and b: H → G such that applying the two functions one after another (in each of the two possible orders) equals the identity function of G and H, respectively. That is, a(b(h)) = h and b(a(g)) = g for any g in G and h in H. From an abstract point of view, isomorphic groups carry the same information. For example, proving that g • g = 1 for some element g of G is equivalent to proving that a(g) • a(g) = 1, because applying a to the first equality yields the second, and applying b to the second gives back the first.

See main article: Subgroup.

Informally, a subgroup is a group H contained within a bigger one, G. Concretely, the identity element of G is contained in H, and whenever h1 and h2 are in H, then so are h1 • h2 and h1−1, so the elements of H, equipped with the group operation on G restricted to H, indeed form a group. In the example above, the identity and the rotations constitute a subgroup R = {id, r1, r2, r3}.
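The table-checking done for D4 earlier, and the subgroup claim just made for R = {id, r1, r2, r3}, can both be verified mechanically. The sketch below (an illustration, not part of the article) models the square's symmetries as permutations of its four vertices; the particular labelling of the flips need not match the figure's fv/fh/fd/fc names, but the generated group is D4 and its rotations form a subgroup.

from itertools import product

def compose(a, b):
    """Return a • b: apply b first, then a (the article's right-to-left convention)."""
    return tuple(a[b[i]] for i in range(4))

e  = (0, 1, 2, 3)    # id
r1 = (1, 2, 3, 0)    # rotation by 90 degrees: vertex i -> i+1 (mod 4)
f  = (1, 0, 3, 2)    # one of the flips (swaps vertices 0,1 and 2,3)

# Generate the whole symmetry group from the two generators.
G = {e, r1, f}
while True:
    bigger = G | {compose(a, b) for a, b in product(G, G)}
    if bigger == G:
        break
    G = bigger

print(len(G))                                                                  # 8 elements
print(all(compose(a, b) in G for a, b in product(G, G)))                       # closure
print(all(compose(compose(a, b), c) == compose(a, compose(b, c))
          for a, b, c in product(G, G, G)))                                    # associativity
print(all(compose(e, a) == a == compose(a, e) for a in G))                     # identity
print(all(any(compose(a, b) == e == compose(b, a) for b in G) for a in G))     # inverses
print(any(compose(a, b) != compose(b, a) for a, b in product(G, G)))           # not abelian

# The rotations form a subgroup R, as stated in the text.
r2, r3 = compose(r1, r1), compose(compose(r1, r1), r1)
R = {e, r1, r2, r3}
print(all(compose(a, b) in R for a in R for b in R) and
      all(any(compose(a, b) == e for b in R) for a in R))                      # True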
http://everything.explained.at/Group_(mathematics)/
Chapter 9 : Analysis of Trusses
- 9.1 Definition of trusses
- 9.2 Properties of 2-force members
- 9.3 Method of Joints
- 9.4 Method of Sections
- 9.5 Compound Trusses
- 9.6 Trusses in 3-D
- 9.7 Summary
- 9.8 Self-Test and Computer Program TRUSS
Trusses are structures consisting of two or more straight, slender members connected to each other at their endpoints. Trusses are often used to support roofs, bridges, and power-line towers, and appear in many other applications. Here is a collection of various structures involving trusses I have come across. The object of our calculations is to determine the external support forces as well as the forces acting on each of the members for given external loads. In order to make the calculations possible a few assumptions are made which in most cases reflect reality sufficiently closely that our theoretical results match experimentally determined ones sufficiently accurately. These assumptions pertain to two- as well as three-dimensional trusses. The three assumptions (or maybe better called idealizations) are listed below. If doubt arises that for a given design any of the three assumptions may not reflect reality accurately enough, a more advanced analysis should be conducted.
- Each joint consists of a single pin to which the respective members are connected individually. In reality we of course find that members are connected by a variety of means : bolted, welded, glued, riveted, or joined by gusset plates. Here are some photos of real-life joints.
- No member extends beyond a joint. In Fig. 9.1a the schematic of a 2-dimensional truss is shown. That truss consists of 9 members and 6 joints. There is a member from joint A to B, another from joint B to C, and a third from joint C to D. In reality we may have a single beam extending all the way from joint A to D, but if this beam is slender (long in comparison to a length representing the size of its cross section) it is permissible to think of this long beam as being represented by individual members going just from joint to joint. Fig. 9.1a : Example of 2-D Truss.
- Support forces (R1 and R2) and external loads (P1 and P2) are only applied at joints. In reality this may not quite be the case. But if for example the weight of a member has to be taken into account, we could represent it by two forces, each equal to half the weight, acting at either end point. In similar fashion one can assign snow loads on roofs to single forces acting at the joints.
Click here for a glimpse at some commonly employed trusses. The three assumptions brought in the previous section render each individual member of a truss to be what is called a "2-force member", that is, a member with only two points (usually the end-points) at which forces are applied. As an example let's look at the member CE extracted from Fig. 9.1a, shown together with the forces acting on it in Fig. 9.2a (2-force member plus forces). I also show the joints C and E with the arrows representing the forces exerted by the connected members onto each joint. In red are entered the forces exerted by the member CE onto the two joints. Acting on member CE we have the two forces FCE and FEC, respectively, which, by the principle of action = reaction, are exactly equal but oppositely directed to the (red) forces the member exerts on the two joints it is connected to. Member CE has to be in equilibrium and therefore :
- In order for the sum of the moments about point C to be zero, the line of action of force FEC has to go through point C.
- In order for the sum of the moments about point E to be zero, the line of action of force FCE has to go through point E.
- In order for the sum of the forces in the direction of line CE to be zero, the two forces, FCE and FEC, have to be equal but oppositely directed.
Note that the three points mentioned above pertain equally to two- and three-dimensional trusses. Fig. 9.2b (2-force member with forces) shows this in graphical form. The two forces acting on member CE either pull or push at either endpoint in opposite directions with equal strength. If they pull, we say that the member is under tension; if they push, it is said to be under compression. For the case in between, when the forces at either endpoint are zero, we speak of a zero-force member. This distinction is of great importance and you should never forget to indicate tension, compression, and zero-force clearly for each member of a truss when asked to determine the forces. The reason for this distinction is a consequence of the different ways a particular member of a truss can fail. If a member is under tension, the only failure mode occurs when the forces pull so hard that somewhere along the beam adjacent molecules/atoms cannot hold onto each other any longer and separate. If a member is under compression, two different types of failures can occur : if the member is somewhat short and stubby, molecules/atoms will not be able to resist the external forces and the member will start to crumble or deform into a shorter piece of material. If, on the other hand, the member is long and slender, a phenomenon called buckling may set in way before "crumbling" occurs. The member simply does not want to stay straight anymore. To prevent buckling we often employ additional bracing members. Nominally these members do not carry any load, but they prevent a member under compression from buckling by providing lateral support. The Method of Joints makes use of the properties of 2-force members as derived in section 9.2 in an interesting way which I demonstrate using the sample truss from section 9.1. For two-dimensional trusses this method results in a sequence of sets of two (three in 3-D) linear equations. Fig. 9.3a shows this truss again with its geometry given in terms of the angles alpha, beta, and gamma as well as the lengths a, b, and c. The members AB, BC, CD, and EF are parallel to each other. Method of joints. Assume that the forces P1 and P2 are known. I also entered the as yet unknown support forces. Because at point A we have a roller-type connection, the support force R1 has only a vertical component. At point D we have a pin/hole type connection which gives rise to a vertical as well as a horizontal component for the support force R2. Furthermore, I entered all forces ( in purple ) the 2-force members exert on their respective joints. Remember that each member pulls/pushes with equal force on its two joints. In the figure I labelled these forces according to the labels of the joints involved and assumed that each member pulls on each joint. I have done this just for the purpose of easy book-keeping. For those members actually under compression the value of the respective force will then come out to be negative. (No need to go back into the drawing and change the direction of the arrow; everybody in the business will see the negative sign of the answer, look at your drawing, and know what's going on.) Principle of Method : In the Method of Joints we now consider the equilibrium of each joint.
- For a 2-dimensional truss as shown here that gives us two equations for each joint : the sum of the forces in the horizontal and the sum of the forces in the vertical direction, for example. In the above example we have 6 joints and therefore get a total of 12 equations. For a 3-D truss we have to satisfy 3 equilibrium equations for each joint.
- As far as unknowns are concerned, we have one unknown force for each of the 9 members and 3 unknown support forces, for a total of 12 unknowns for our example.
A truss (2-D or 3-D) is statically determined only if the number of unknown forces (one per member plus unknowns stemming from the support forces) is equal to the number of available equations (2, or 3 in 3-D, times the number of joints). Three footnotes :
- If the number of unknown forces exceeds the number of available equations, the truss is said to be statically undetermined; one needs more information (usually about the way individual members deform under the influence of forces) to determine the forces.
- If the number of unknown forces is less than the number of available equations, the truss will collapse.
- At first sight one is tempted to think that by considering the equilibrium of the entire truss more equations can be derived and hence the number of unknowns can be increased correspondingly. Unfortunately, as it turns out, these new equations are linearly dependent on the equilibrium equations on all joints and therefore are automatically satisfied once the equilibrium equations on all joints are satisfied. On the good side, this redundancy can be used to test your calculations and/or to solve the system of equations faster.
Feel free to test your abilities to write out such equilibrium equations and check against mine. For the truss shown in Fig. 9.3a I looked at the equilibrium of each joint individually; just click on the latter in the following list and compare my sketches and equations with yours. Solving the Equation System : As an example I have summarized all 12 equations representing the equilibrium conditions on the joints of the truss shown in Fig. 9.1a. Click here for a closer look. Mathematicians would classify this system as a system of linear equations with constant coefficients (the values of cos and sin of the various angles) in which the forces are the unknowns. To solve such equation systems various methods are available, many of them based on the Gauss-elimination method or various matrix methods. I have written such a program in a web-based format ( Program Truss , 2-D version , 3-D version ). For many trusses, the example in Fig. 9.1a being no exception, it is possible to solve for the unknown forces "manually" by considering the joints in a particular order which can be detected by inspection. Often it is necessary to also involve the equilibrium equations for the entire truss as shown here. The principle of this method is to find by inspection ( of Fig. 9.3a if you like to work along ) a joint which is acted upon by forces of which at most 2 (3 in 3-D) are unknown. Solve the equilibrium equations for this joint and repeat. If you are lucky you can solve for all the unknown forces and then use the equilibrium equations of the entire truss to check up on your results. Quite often you will get "stuck" though (or cannot even get started in the first place).
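As a complement to the manual approach, the following sketch shows how the Method of Joints turns into one linear system that a computer can solve, in the spirit of (but not taken from) Program TRUSS. The triangle truss, its node coordinates, the 10 kN load, and the pin/roller support layout below are all invented for illustration; member forces are taken positive in tension, matching the book-keeping convention described above.

import numpy as np

nodes = {"A": (0.0, 0.0), "B": (2.0, 0.0), "C": (1.0, 1.0)}   # made-up geometry
members = [("A", "B"), ("A", "C"), ("B", "C")]
loads = {"C": (0.0, -10.0)}                  # 10 kN pulling joint C straight down
reactions = [("A", 0), ("A", 1), ("B", 1)]   # pin at A (x and y), roller at B (y)

joints = list(nodes)
n_unknowns = len(members) + len(reactions)
if n_unknowns != 2 * len(joints):            # the determinacy count from the text
    raise ValueError("truss is not statically determinate")

A = np.zeros((2 * len(joints), n_unknowns))
b = np.zeros(2 * len(joints))

def row(joint, axis):                        # equation index: 2 per joint (x, y)
    return 2 * joints.index(joint) + axis

for m, (i, j) in enumerate(members):         # a member in tension pulls both joints
    xi, yi = nodes[i]; xj, yj = nodes[j]
    length = np.hypot(xj - xi, yj - yi)
    ux, uy = (xj - xi) / length, (yj - yi) / length
    A[row(i, 0), m] += ux; A[row(i, 1), m] += uy
    A[row(j, 0), m] -= ux; A[row(j, 1), m] -= uy

for r, (joint, axis) in enumerate(reactions):
    A[row(joint, axis), len(members) + r] = 1.0

for joint, (px, py) in loads.items():        # known loads go to the right-hand side
    b[row(joint, 0)] -= px
    b[row(joint, 1)] -= py

x = np.linalg.solve(A, b)                    # member forces first, then reactions
for m, (i, j) in enumerate(members):
    f = x[m]
    state = "tension" if f > 1e-9 else "compression" if f < -1e-9 else "zero-force"
    print(f"member {i}{j}: {f:+.2f} kN ({state})")

For this made-up triangle the program reports member AB in tension and members AC and BC in compression, and the unknowns-versus-equations count reproduces the statical-determinacy check stated above.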
Don't despair, here are two tricks which might help you out and "deliver" a joint with only two unknown forces :
- Solve as many of the overall equilibrium equations as you can.
- Find zero-force members. Click here if you want to find out how to do that (might save you later ?!).
If you employ one or both of the above tricks and then subsequently solve for the remaining unknown forces, you will be left with at least one joint the equilibrium of which you do not need to consider. My recommendation : check the equilibrium of this final joint anyway with your previously obtained values of the forces. (Hey, that little bit of checking is better than a bridge collapsing.) If you like, click here to see the order in which I would solve for the forces of the truss in Fig. 9.1a and read some more useful info. Well, does the manual version of the Method of Joints, including the two tricks, always work ? The answer is unfortunately NO, and here is an example.
Problem 9.3a : 2-member truss
Problem 9.3b : 4-member truss
Problem 9.3c : 7-member truss
Problem 9.3d : Roof-truss, Fink, snow load
Problem 9.3e : Roof-truss, Howe, snow load
One disadvantage of the Method of Joints when employed without the help of computer programs like Program TRUSS is its sequential nature. That is, in order to calculate forces based on the equilibrium equations on a particular joint we have to use the results of preceding calculations. Hence errors propagate and way too often get magnified in the process. In contrast to that, the Method of Sections aims at calculating the forces of selected members directly and can therefore be used to check results obtained by the Method of Joints (my favorite usage). Additionally, in the absence of computer programs you sometimes find yourself in the position that you have to jump-start the Method of Joints. Principle of Method : The Method of Joints was used to analyze the forces in a truss by looking at the equilibrium of its individual members (discovering the properties of two-force members) and individual joints (to find equations to be solved for the values of the forces individual members exert and the forces supporting the truss). In the Method of Sections we consider the equilibrium of a selected part of a truss consisting of any number of members and joints. Often this is done after the overall equilibrium equations have been solved. Here I describe the method as it applies to two-dimensional trusses, which usually means that you will have to solve three equilibrium equations, which can still be done "manually". For three-dimensional trusses this would result in six such equations. As an example, assume that our task is to find the force in the member CE of the sample truss shown in Fig. 9.4a. Also, assume that the geometry of the truss, the external loads and the support forces are known. Our strategy is now to "mentally" remove three members according to the following two rules :
- One of the members is the one the force of which you wish to calculate.
- The removal of the three members has to divide the truss into two separate sections.
Often you will have several equivalent choices. For the truss in Fig. 9.4a there is only one, namely removal of the three members BC, CE, and EF. You also might think of these three members as pieces which hold the two sections together and exert onto them just enough forces (again only in the direction of these members) to hold each section in equilibrium. In Fig. 9.4b we see the two resulting parts in terms of their respective Free-Body-Diagrams.
Each part is exposed to the external loads/support forces as well as the forces the three members exert onto it. Sample Truss, 3 members removed. Solving now the equilibrium equations of either part (the choice is yours) you obtain the forces in the three removed members. In the above example we could look at the sum of the forces in the vertical direction on the left section of the truss : R1 - P1 - FCE cos(β) = 0. If you happen to be interested in the force FBC, the sum of the moments about point E (of either the left or the right part) would be just fine because it contains only FBC as unknown. And for FEF ? FOOTNOTE : In many textbooks you find, instead of "removing three members", the phrases "cut three members" or "section the truss". The latter is probably the origin of the title "Method of Sections".
Problem 9.4a : Roof-truss, Fink, snow load
Problem 9.4b : Roof-truss, Howe, snow load
Problem 9.4c : Escalator Support
Problem 9.4d : Stadium Roof, I
Problem 9.4e : Stadium Roof, II
Compound Trusses are trusses which one can divide into two or more sub-trusses. This might help in the determination of internal forces. Whether a truss is a compound truss depends very much on who is looking. Fig. 9.5a is an example of a compound truss. The members 12, 13, 23, 24, and 34 could be viewed as comprising one sub-truss; let's call this the sub-truss 1234. The other members make up a second sub-truss, called 4567. This division can help us in this case because each of the two sub-trusses is actually a 2-force member, that is, each sub-truss has only two points at which forces are acting (joints 1 and 4 for the left sub-truss and joints 7 and 4 for the right sub-truss). I tried to convey this in Fig. 9.5b. For known load P and geometry we can now determine the forces FL and FR from the equilibrium equations on joint 4. Forces in Compound Truss. After determining FL and FR by analyzing the equilibrium of joint 4, all external forces on the two sub-trusses are known and each sub-truss can be analyzed separately. The analysis of 3-dimensional trusses (extremely widespread in practice) is usually not part of an introductory course in statics although the underlying principles for their analysis are identical to those of 2-dimensional trusses. We have the same restrictions on the location of the loads, joints are now of the ball/socket type, and support forces may now have 3, 2, or only 1 unknown component depending on the type of support employed. All members are still 2-force members, with the forces they exert on the joints at their two endpoints still equal but oppositely directed and in line with the line connecting the two endpoints. Hence, these forces now have in general three components and we have three equilibrium equations for each joint. A 3-dimensional truss is statically determined only if the number of unknown forces (one per member plus support forces) is equal to the number of available equations (3 times the number of joints). Setting up the equilibrium equations and solving them is, though, an order of magnitude (at least) more tedious than for 2-dimensional trusses. Fig. 9.6a is a simple example where the truss consists of a single tetrahedron with vertices A, B, C, and D. A single load P (having x-, y-, and z-components) is applied at joint D. 3-D Truss, example. The support forces are chosen such that the tetrahedron (think of it as a solid body) cannot move away nor rotate in any which way. In 3 dimensions this necessitates 6 components of support forces.
At joint A I have specified a ball/socket connection (3 unknown components), at point C we have a roller-type connection (2 unknown components), and at point B a connection with a single unknown component. We can solve for the unknown forces in the 6 members and the 6 components of support forces by applying the method of joints in the following order :
- Joint D : 3 equations for the three forces in the members AD, BD, and CD.
- Joint B : 3 equations for the single support force component and the forces in members AB and BC.
- Joint C : 3 equations for the two support force components and the force in member AC.
- Joint A : 3 equations for the three support force components.
You can then use the overall equilibrium equations for a check-up. It is nearly impossible to do these calculations without vector notation. Some more examples of 3-dimensional trusses can be found as sample cases for a 3-D truss program. In this chapter we were concerned with the determination of support forces and the forces internal members are exposed to. The structures we could investigate were called trusses, which have the properties of :
- Consisting only of 2-force members.
- Loads and support forces act only on joints.
Two principal methods are available to obtain the desired forces :
- The Method of Joints, which provides us with two (three in 3-D cases) equations per joint, leading to a system of linear equations for the unknown forces. If the truss is statically determined this system can always be solved by a computer program (like Program TRUSS) or in many cases by inspecting the truss as to the order in which these equations must be solved. Depending on the truss geometry this approach is not always possible, but solving the overall equilibrium equations and/or looking at the truss as a compound truss might help. When solving for the forces without a computer the sequential nature of the Method of Joints is a disadvantage because errors made initially affect subsequent calculations.
- The Method of Sections can also be used to "jump-start" the method of joints. It is very useful when the forces of only a few internal members are to be determined. The principle here is to remove 3 members (in 2-dimensional cases), with one member being the one whose force we wish to determine. The removal of the 3 members has to divide the truss into two separate parts. The study of the equilibrium of either part yields the forces of the 3 removed members.
The self-test is a multiple-choice test. It allows you to ascertain your knowledge of the definition of terms and your understanding of the material of this chapter. Click here to do the test. Computer Program TRUSS : This program is based on the Method of Joints. The user specifies the geometry of the truss in terms of the location of all joints and how these are connected by members, then specifies the given external forces, and finally provides information concerning the support forces acting on the truss. For more information follow the links below. A warning, in particular to my students : usage of a computer program (except for special parameter studies, of which we will do one or the other) does not teach you anything more than just how to use that particular program. The real juice lies in the understanding of the different methods employed and in evaluating whether the obtained results make sense. Zig Herzog. Last revised: 08/21/09
http://mac6.ma.psu.edu/em211/p09a.html
A Basic Introduction to the Science Underlying WHAT IS A GENOME? Life is specified by genomes. Every organism, including humans, has a genome that contains all of the biological information needed to build and maintain a living example of that organism. The biological information contained in a genome is encoded in its deoxyribonucleic acid (DNA) and is divided into discrete units called genes. Genes code for proteins that attach to the genome at the appropriate positions and switch on a series of reactions called gene expression. |In 1909, Danish botanist Wilhelm Johanssen coined the word gene for the hereditary unit found on a chromosome. Nearly 50 years earlier, Gregor Mendel had characterized hereditary units as factors— observable differences that were passed from parent to offspring. Today we know that a single gene consists of a unique sequence of DNA that provides the complete instructions to make a functional product, called a protein. Genes instruct each cell type— such as skin, brain, and liver—to make discrete sets of proteins at just the right times, and it is through this specificity that unique organisms arise. The Physical Structure of the Human Genome Inside each of our cells lies a nucleus, a membrane-bounded region that provides a sanctuary for genetic information. The nucleus contains long strands of DNA that encode this genetic information. A DNA chain is made up of four chemical bases: adenine (A) and guanine (G), which are called purines, and cytosine (C) and thymine (T), referred to as pyrimidines. Each base has a slightly different composition, or combination of oxygen, carbon, nitrogen, and hydrogen. In a DNA chain, every base is attached to a sugar molecule (deoxyribose) and a phosphate molecule, resulting in a nucleic acid or nucleotide. Individual nucleotides are linked through the phosphate group, and it is the precise order, or sequence, of nucleotides that determines the product made from that gene. Figure 1. The four DNA bases. Each DNA base is made up of the sugar 2'-deoxyribose linked to a phosphate group and one of the four bases depicted above: adenine (top left), cytosine (top right), guanine (bottom left), and thymine (bottom right). |A DNA chain, also called a strand, has a sense of direction, in which one end is chemically different than the other. The so-called 5' end terminates in a 5' phosphate group (-PO4); the 3' end terminates in a 3' hydroxyl group (-OH). This is important because DNA strands are always synthesized in the 5' to 3' direction. The DNA that constitutes a gene is a double-stranded molecule consisting of two chains running in opposite directions. The chemical nature of the bases in double-stranded DNA creates a slight twisting force that gives DNA its characteristic gently coiled structure, known as the double helix. The two strands are connected to each other by chemical pairing of each base on one strand to a specific partner on the other strand. Adenine (A) pairs with thymine (T), and guanine (G) pairs with cytosine (C). Thus, A-T and G-C base pairs are said to be complementary. This complementary base pairing is what makes DNA a suitable molecule for carrying our genetic information—one strand of DNA can act as a template to direct the synthesis of a complementary strand. In this way, the information in a DNA sequence is readily copied and passed on to the next generation Not all genetic information is found in nuclear DNA. Both plants and animals have an organelle—a "little organ" within the cell— called the mitochondrion. 
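A tiny, purely illustrative sketch of the complementary base pairing described a few paragraphs above: given one strand, the partner strand follows from the A-T and G-C rules, which is exactly why one strand can serve as a template for the other. The sample sequence and the function names are made up for this example.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    # base-by-base partner of a DNA strand (A pairs with T, G pairs with C)
    return "".join(PAIR[base] for base in strand)

def reverse_complement(strand):
    # the partner strand written in its own 5' to 3' direction
    return complement(strand)[::-1]

template = "ATGGCATTC"                 # made-up sequence, read 5' to 3'
print(complement(template))            # the paired bases, read 3' to 5'
print(reverse_complement(template))    # the partner strand, read 5' to 3'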
Each mitochondrion has its own set of genes. Plants also have a second organelle, the chloroplast, which also has its own DNA. Cells often have multiple mitochondria, particularly cells requiring lots of energy, such as active muscle cells. This is because mitochondria are responsible for converting the energy stored in macromolecules into a form usable by the cell, namely, the adenosine triphosphate (ATP) molecule. Thus, they are often referred to as the power generators of the cell. Unlike nuclear DNA (the DNA found within the nucleus of a cell), half of which comes from our mother and half from our father, mitochondrial DNA is only inherited from our mother. This is because mitochondria are only found in the female gametes or "eggs" of sexually reproducing animals, not in the male gamete, or sperm. Mitochondrial DNA also does not recombine; there is no shuffling of genes from one generation to the other, as there is with nuclear genes. |Large numbers of mitochondria are found in the tail of sperm, providing them with an engine that generates the energy needed for swimming toward the egg. However, when the sperm enters the egg during fertilization, the tail falls off, taking away the father's mitochondria. Why Is There a Separate Mitochondrial Genome? The energy-conversion process that takes place in the mitochondria takes place aerobically, in the presence of oxygen. Other energy conversion processes in the cell take place anaerobically, or without oxygen. The independent aerobic function of these organelles is thought to have evolved from bacteria that lived inside of other simple organisms in a mutually beneficial, or symbiotic, relationship, providing them with aerobic capacity. Through the process of evolution, these tiny organisms became incorporated into the cell, and their genetic systems and cellular functions became integrated to form a single functioning cellular unit. Because mitochondria have their own DNA, RNA, and ribosomes, this scenario is quite possible. This theory is also supported by the existence of a eukaryotic organism, called the amoeba, which lacks mitochondria. Therefore, amoeba must always have a symbiotic relationship with an aerobic bacterium. Why Study Mitochondria? There are many diseases caused by mutations in mitochondrial DNA (mtDNA). Because the mitochondria produce energy in cells, symptoms of mitochondrial diseases often involve degeneration or functional failure of tissue. For example, mtDNA mutations have been identified in some forms of diabetes, deafness, and certain inherited heart diseases. In addition, mutations in mtDNA are able to accumulate throughout an individual's lifetime. This is different from mutations in nuclear DNA, which has sophisticated repair mechanisms to limit the accumulation of mutations. Mitochondrial DNA mutations can also concentrate in the mitochondria of specific tissues. A variety of deadly diseases are attributable to a large number of accumulated mutations in mitochondria. There is even a theory, the Mitochondrial Theory of Aging, that suggests that accumulation of mutations in mitochondria contributes to, or drives, the aging process. These defects are associated with Parkinson's and Alzheimer's disease, although it is not known whether the defects actually cause or are a direct result of the diseases. However, evidence suggests that the mutations contribute to the progression of both diseases. 
In addition to the critical cellular energy-related functions, mitochondrial genes are useful to evolutionary biologists because of their maternal inheritance and high rate of mutation. By studying patterns of mutations, scientists are able to reconstruct patterns of migration and evolution within and between species. For example, mtDNA analysis has been used to trace the migration of people from Asia across the Bering Strait to North and South America. It has also been used to identify an ancient maternal lineage from which modern man evolved. |In addition to mRNA, DNA codes for other forms of RNA, including ribosomal RNAs (rRNAs), transfer RNAs (tRNAs), and small nuclear RNAs (snRNAs). rRNAs and tRNAs participate in protein assembly whereas snRNAs aid in a process called splicing —the process of editing of mRNA before it can be used as a template for protein synthesis. Just like DNA, ribonucleic acid (RNA) is a chain, or polymer, of nucleotides with the same 5' to 3' direction of its strands. However, the ribose sugar component of RNA is slightly different chemically than that of DNA. RNA has a 2' oxygen atom that is not present in DNA. Other fundamental structural differences exist. For example, uracil takes the place of the thymine nucleotide found in DNA, and RNA is, for the most part, a single-stranded molecule. DNA directs the synthesis of a variety of RNA molecules, each with a unique role in cellular function. For example, all genes that code for proteins are first made into an RNA strand in the nucleus called a messenger RNA (mRNA). The mRNA carries the information encoded in DNA out of the nucleus to the protein assembly machinery, called the ribosome, in the cytoplasm. The ribosome complex uses mRNA as a template to synthesize the exact protein coded for by the gene. |"DNA makes RNA, RNA makes protein, and proteins make us." Although DNA is the carrier of genetic information in a cell, proteins do the bulk of the work. Proteins are long chains containing as many as 20 different kinds of amino acids. Each cell contains thousands of different proteins: enzymes that make new molecules and catalyze nearly all chemical processes in cells; structural components that give cells their shape and help them move; hormones that transmit signals throughout the body; antibodies that recognize foreign molecules; and transport molecules that carry oxygen. The genetic code carried by DNA is what specifies the order and number of amino acids and, therefore, the shape and function of the protein. The "Central Dogma"—a fundamental principle of molecular biology—states that genetic information flows from DNA to RNA to protein. Ultimately, however, the genetic code resides in DNA because only DNA is passed from generation to generation. Yet, in the process of making a protein, the encoded information must be faithfully transmitted first to RNA then to protein. Transferring the code from DNA to RNA is a fairly straightforward process called transcription. Deciphering the code in the resulting mRNA is a little more complex. It first requires that the mRNA leave the nucleus and associate with a large complex of specialized RNAs and proteins that, collectively, are called the ribosome. Here the mRNA is translated into protein by decoding the mRNA sequence in blocks of three RNA bases, called codons, where each codon specifies a particular amino acid. 
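The codon-by-codon decoding just described can be sketched in a few lines. This is only an illustration: the table below contains a handful of the 64 codons, the sample mRNA is invented, and a real ribosome relies on far more machinery than a dictionary lookup.

# only a handful of the 64 codons, enough for the made-up mRNA below
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "UCU": "Ser", "UCC": "Ser",
    "GGC": "Gly", "AAA": "Lys", "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    # decode 5' to 3', starting at the first AUG, stopping at a stop codon
    start = mrna.find("AUG")
    if start == -1:
        return []
    protein = []
    for i in range(start, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("GGAUGUUUUCUGGCAAAUAAGC"))   # ['Met', 'Phe', 'Ser', 'Gly', 'Lys']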
In this way, the ribosomal complex builds a protein one amino acid at a time, with the order of amino acids determined precisely by the order of the codons in the mRNA. In 1961, Marshall Nirenberg and Heinrich Matthaei correlated the first codon (UUU) with the amino acid phenylalanine. After that, it was not long before the genetic code for all 20 amino acids was deciphered. A given amino acid can have more than one codon. These redundant codons usually differ at the third position. For example, the amino acid serine is encoded by UCU, UCC, UCA, and/or UCG. This redundancy is key to accommodating mutations that occur naturally as DNA is replicated and new cells are produced. By allowing some of the random changes in DNA to have no effect on the ultimate protein sequence, a sort of genetic safety net is created. Some codons do not code for an amino acid at all but instruct the ribosome when to stop adding new amino acids. Table 1. RNA triplet codons and their corresponding amino acids: a translation chart of the 64 RNA codons.
The Core Gene Sequence: Introns and Exons
Genes make up about 1 percent of the total DNA in our genome. In the human genome, the coding portions of a gene, called exons, are interrupted by intervening sequences, called introns. In addition, a eukaryotic gene does not code for a protein in one continuous stretch of DNA. Both exons and introns are "transcribed" into mRNA, but before it is transported to the ribosome, the primary mRNA transcript is edited. This editing process removes the introns, joins the exons together, and adds unique features to each end of the transcript to make a "mature" mRNA. One might then ask: what is the purpose of an intron if it is spliced out after it is transcribed? It is still unclear what all the functions of introns are, but scientists believe that some serve as the site for recombination, the process by which progeny derive a combination of genes different from that of either parent, resulting in novel genes with new combinations of exons, the key to evolution. Figure 2. Recombination. Recombination involves pairing between complementary strands of two parental duplex DNAs (top and middle panel). This process creates a stretch of hybrid DNA (bottom panel) in which the single strand of one duplex is paired with its complement from the other duplex.
Gene Prediction Using Computers
When the complete mRNA sequence for a gene is known, computer programs are used to align the mRNA sequence with the appropriate region of the genomic DNA sequence. This provides a reliable indication of the beginning and end of the coding region for that gene. In the absence of a complete mRNA sequence, the boundaries can be estimated by ever-improving, but still inexact, gene prediction software. The problem is the lack of a single sequence pattern that indicates the beginning or end of a eukaryotic gene. Fortunately, the middle of a gene, referred to as the core gene sequence, has enough consistent features to allow more reliable predictions.
From Genes to Proteins: Start to Finish
We just discussed that the journey from DNA to mRNA to protein requires that a cell identify where a gene begins and ends. This must be done during both the transcription and the translation process. Transcription, the synthesis of an RNA copy from a sequence of DNA, is carried out by an enzyme called RNA polymerase. This molecule has the job of recognizing the DNA sequence where transcription is initiated, called the promoter site.
In general, there are two "promoter" sequences upstream from the beginning of every gene. The location and base sequence of each promoter site vary for prokaryotes (bacteria) and eukaryotes (higher organisms), but they are both recognized by RNA polymerase, which can then grab hold of the sequence and drive the production of an mRNA. Eukaryotic cells have three different RNA polymerases, each recognizing three classes of genes. RNA polymerase II is responsible for synthesis of mRNAs from protein-coding genes. This polymerase requires a sequence resembling TATAA, commonly referred to as the TATA box, which is found 25-30 nucleotides upstream of the beginning of the gene, referred to as the initiator sequence. Transcription terminates when the polymerase stumbles upon a termination, or stop signal. In eukaryotes, this process is not fully understood. Prokaryotes, however, tend to have a short region composed of G's and C's that is able to fold in on itself and form complementary base pairs, creating a stem in the new mRNA. This stem then causes the polymerase to trip and release the nascent, or newly formed, mRNA. The beginning of translation, the process in which the genetic code carried by mRNA directs the synthesis of proteins from amino acids, differs slightly for prokaryotes and eukaryotes, although both processes always initiate at a codon for methionine. For prokaryotes, the ribosome recognizes and attaches at the sequence AGGAGGU on the mRNA, called the Shine-Delgarno sequence, that appears just upstream from the methionine (AUG) codon. Curiously, eukaryotes lack this recognition sequence and simply initiate translation at the amino acid methionine, usually coded for by the bases AUG, but sometimes GUG. Translation is terminated for both prokaryotes and eukaryotes when the ribosome reaches one of the three stop codons. Structural Genes, Junk DNA, and Regulatory Sequences |Over 98 percent of the genome is of unknown function. Although often referred to as "junk" DNA, scientists are beginning to uncover the function of many of these intergenic sequences—the DNA found between genes. Sequences that code for proteins are called structural genes. Although it is true that proteins are the major components of structural elements in a cell, proteins are also the real workhorses of the cell. They perform such functions as transporting nutrients into the cell; synthesizing new DNA, RNA, and protein molecules; and transmitting chemical signals from outside to inside the cell, as well as throughout the cell—both critical to the process of A class of sequences called regulatory sequences makes up a numerically insignificant fraction of the genome but provides critical functions. For example, certain sequences indicate the beginning and end of genes, sites for initiating replication and recombination, or provide landing sites for proteins that turn genes on and off. Like structural genes, regulatory sequences are inherited; however, they are not commonly referred to as genes. Other DNA Regions Forty to forty-five percent of our genome is made up of short sequences that are repeated, sometimes hundreds of times. There are numerous forms of this "repetitive DNA", and a few have known functions, such as stabilizing the chromosome structure or inactivating one of the two X chromosomes in developing females, a process called X-inactivation. 
The most highly repeated sequences found so far in mammals are called "satellite DNA" because their unusual composition allows them to be easily separated from other DNA. These sequences are associated with chromosome structure and are found at the centromeres (or centers) and telomeres (ends) of chromosomes. Although they do not play a role in the coding of proteins, they do play a significant role in chromosome structure, duplication, and cell division. The highly variable nature of these sequences makes them an excellent "marker" by which individuals can be identified based on their unique pattern of their satellite DNA. Figure 3. A chromosome. A chromosome is composed of a very long molecule of DNA and associated proteins that carry hereditary information. The centromere, shown at the center of this chromosome, is a specialized structure that appears during cell division and ensures the correct distribution of duplicated chromosomes to daughter cells. Telomeres are the structures that seal the end of a chromosome. Telomeres play a critical role in chromosome replication and maintenance by counteracting the tendency of the chromosome to otherwise shorten with each round of replication. Another class of non-coding DNA is the "pseudogene", so named because it is believed to be a remnant of a real gene that has suffered mutations and is no longer functional. Pseudogenes may have arisen through the duplication of a functional gene, followed by inactivation of one of the copies. Comparing the presence or absence of pseudogenes is one method used by evolutionary geneticists to group species and to determine relatedness. Thus, these sequences are thought to carry a record of our evolutionary history. How Many Genes Do Humans Have? In February 2001, two largely independent draft versions of the human genome were published. Both studies estimated that there are 30,000 to 40,000 genes in the human genome, roughly one-third the number of previous estimates. More recently scientists estimated that there are less than 30,000 human genes. However, we still have to make guesses at the actual number of genes, because not all of the human genome sequence is annotated and not all of the known sequence has been assigned a particular position in the genome. So, how do scientists estimate the number of genes in a genome? For the most part, they look for tell-tale signs of genes in a DNA sequence. These include: open reading frames, stretches of DNA, usually greater than 100 bases, that are not interrupted by a stop codon such as TAA, TAG or TGA; start codons such as ATG; specific sequences found at splice junctions, a location in the DNA sequence where RNA removes the non-coding areas to form a continuous gene transcript for translation into a protein; and gene regulatory sequences. This process is dependent on computer programs that search for these patterns in various sequence databases and then make predictions about the existence of a gene. From One Gene–One Protein to a More Global Perspective Only a small percentage of the 3 billion bases in the human genome becomes an expressed gene product. However, of the approximately 1 percent of our genome that is expressed, 40 percent is alternatively spliced to produce multiple proteins from a single gene. Alternative splicing refers to the cutting and pasting of the primary mRNA transcript into various combinations of mature mRNA. Therefore the one gene–one protein theory, originally framed as "one gene–one enzyme", does not precisely hold. 
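One of the gene-finding signals listed a few sentences above, the open reading frame, is easy to sketch: scan each reading frame for an ATG start codon followed by an uninterrupted run of codons ending in TAA, TAG, or TGA. The sequence and the minimum-length cutoff below are made up for illustration (real ORF filters use a threshold of roughly 100 bases, as noted above), and real gene-prediction software combines many additional signals.

STOPS = {"TAA", "TAG", "TGA"}

def open_reading_frames(dna, min_codons=4):
    # yield (start, end, frame) for each ATG...stop stretch on this strand
    for frame in range(3):
        i = frame
        while i < len(dna) - 2:
            if dna[i:i + 3] == "ATG":
                for j in range(i + 3, len(dna) - 2, 3):
                    if dna[j:j + 3] in STOPS:
                        if (j - i) // 3 >= min_codons:
                            yield i, j + 3, frame
                        i = j          # resume scanning after this stretch
                        break
            i += 3

seq = "CCATGAAATTTGGGTCCTAACGATGTTTTAG"        # made-up sequence
for start, end, frame in open_reading_frames(seq):
    print(f"frame {frame}: {seq[start:end]}")  # frame 2: ATGAAATTTGGGTCCTAA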
With so much DNA in the genome, why restrict transcription to a tiny portion, and why make that tiny portion work overtime to produce many alternate transcripts? This process may have evolved as a way to limit the deleterious effects of mutations. Genetic mutations occur randomly, and the effect of a small number of mutations on a single gene may be minimal. However, an individual having many genes each with small changes could weaken the individual, and thus the species. On the other hand, if a single mutation affects several alternate transcripts at once, it is more likely that the effect will be devastating—the individual may not survive to contribute to the next generation. Thus, alternate transcripts from a single gene could reduce the chances that a mutated gene is transmitted. Gene Switching: Turning Genes On and Off The estimated number of genes for humans, less than 30,000, is not so different from the 25,300 known genes of Arabidopsis thaliana, commonly called mustard grass. Yet, we appear, at least at first glance, to be a far more complex organism. A person may wonder how this increased complexity is achieved. One answer lies in the regulatory system that turns genes on and off. This system also precisely controls the amount of a gene product that is produced and can further modify the product after it is made. This exquisite control requires multiple regulatory input points. One very efficient point occurs at transcription, such that an mRNA is produced only when a gene product is needed. Cells also regulate gene expression by post-transcriptional modification; by allowing only a subset of the mRNAs to go on to translation; or by restricting translation of specific mRNAs to only when the product is needed. At other levels, cells regulate gene expression through DNA folding, chemical modification of the nucleotide bases, and intricate "feedback mechanisms" in which some of the gene's own protein product directs the cell to cease further protein production. Promoters and Regulatory Sequences Transcription is the process whereby RNA is made from DNA. It is initiated when an enzyme, RNA polymerase, binds to a site on the DNA called a promoter sequence. In most cases, the polymerase is aided by a group of proteins called "transcription factors" that perform specialized functions, such as DNA sequence recognition and regulation of the polymerase's enzyme activity. Other regulatory sequences include activators, repressors, and enhancers. These sequences can be cis-acting (affecting genes that are adjacent to the sequence) or trans-acting (affecting expression of the gene from a distant site), even on another chromosome. Globin Genes: An Example of Transcriptional Regulation An example of transcriptional control occurs in the family of genes responsible for the production of globin. Globin is the protein that complexes with the iron-containing heme molecule to make hemoglobin. Hemoglobin transports oxygen to our tissues via red blood cells. In the adult, red blood cells do not contain DNA for making new globin; they are ready-made with all of the hemoglobin they will need. During the first few weeks of life, embryonic globin is expressed in the yolk sac of the egg. By week five of gestation, globin is expressed in early liver cells. By birth, red blood cells are being produced, and globin is expressed in the bone marrow. Yet, the globin found in the yolk is not produced from the same gene as is the globin found in the liver or bone marrow stem cells. 
In fact, at each stage of development, different globin genes are turned on and off through a process of transcriptional regulation called "switching". To further complicate matters, globin is made from two different protein chains: an alpha-like chain coded for on chromosome 16; and a beta-like chain coded for on chromosome 11. Each chromosome has the embryonic, fetal, and adult form lined up on the chromosome in a sequential order for developmental expression. The developmentally regulated transcription of globin is controlled by a number of cis-acting DNA sequences, and although there remains a lot to be learned about the interaction of these sequences, one known control sequence is an enhancer called the Locus Control Region (LCR). The LCR sits far upstream on the sequence and controls the alpha genes on chromosome 16. It may also interact with other factors to determine which alpha gene is turned on. Thalassemias are a group of diseases characterized by the absence or decreased production of normal globin, and thus hemoglobin, leading to decreased oxygen in the system. There are alpha and beta thalassemias, defined by the defective gene, and there are variations of each of these, depending on whether the embryonic, fetal, or adult forms are affected and/or expressed. Although there is no known cure for the thalassemias, there are medical treatments that have been developed based on our current understanding of both gene regulation and cell differentiation. Treatments include blood transfusions, iron chelators, and bone marrow transplants. With continuing research in the areas of gene regulation and cell differentiation, new and more effective treatments may soon be on the horizon, such as the advent of gene transfer therapies. The Influence of DNA Structure and Binding Domains Sequences that are important in regulating transcription do not necessarily code for transcription factors or other proteins. Transcription can also be regulated by subtle variations in DNA structure and by chemical changes in the bases to which transcription factors bind. As stated previously, the chemical properties of the four DNA bases differ slightly, providing each base with unique opportunities to chemically react with other molecules. One chemical modification of DNA, called methylation, involves the addition of a methyl group (-CH3). Methylation frequently occurs at cytosine residues that are preceded by guanine bases, oftentimes in the vicinity of promoter sequences. The methylation status of DNA often correlates with its functional activity, where inactive genes tend to be more heavily methylated. This is because the methyl group serves to inhibit transcription by attracting a protein that binds specifically to methylated DNA, thereby interfering with polymerase binding. Methylation also plays an important role in genomic imprinting, which occurs when both maternal and paternal alleles are present but only one allele is expressed while the other remains inactive. Another way to think of genomic imprinting is as "parent of origin differences" in the expression of inherited traits. Considerable intrigue surrounds the effects of DNA methylation, and many researchers are working to unlock the mystery behind this concept. Translation is the process whereby the genetic code carried by an mRNA directs the synthesis of proteins. Translational regulation occurs through the binding of specific molecules, called repressor proteins, to a sequence found on an RNA molecule. 
Repressor proteins prevent a gene from being expressed. As we have just discussed, the default state for a gene is that of being expressed via the recognition of its promoter by RNA polymerase. Close to the promoter region is another cis-acting site called the operator, the target for the repressor protein. When the repressor protein binds to the operator, RNA polymerase is prevented from initiating transcription, and gene expression is Translational control plays a significant role in the process of embryonic development and cell differentiation. Upon fertilization, an egg cell begins to multiply to produce a ball of cells that are all the same. At some point, however, these cells begin to differentiate, or change into specific cell types. Some will become blood cells or kidney cells, whereas others may become nerve or brain cells. When all of the cells formed are alike, the same genes are turned on. However, once differentiation begins, various genes in different cells must become active to meet the needs of that cell type. In some organisms, the egg houses store immature mRNAs that become translationally active only after fertilization. Fertilization then serves to trigger mechanisms that initiate the efficient translation of mRNA into proteins. Similar mechanisms serve to activate mRNAs at other stages of development and differentiation, such as when specific protein products are needed. Mechanisms of Genetic Variation and Heredity Does Everyone Have the Same Genes? When you look at the human species, you see evidence of a process called genetic variation, that is, there are immediately recognizable differences in human traits, such as hair and eye color, skin pigment, and height. Then there are the not so obvious genetic variations, such as blood type. These expressed, or phenotypic, traits are attributable to genotypic variation in a person's DNA sequence. When two individuals display different phenotypes of the same trait, they are said to have two different alleles for the same gene. This means that the gene's sequence is slightly different in the two individuals, and the gene is said to be polymorphic, "poly" meaning many and "morph" meaning shape or form. Therefore, although people generally have the same genes, the genes do not have exactly the same DNA sequence. These polymorphic sites influence gene expression and also serve as markers for genomic research |The cell cycle is the process that a cell undergoes to replicate. Most genetic variation occurs during the phases of the cell cycle when DNA is duplicated. Mutations in the new DNA strand can manifest as base substitutions, such as when a single base gets replaced with another; deletions, where one or more bases are left out; or insertions, where one or more bases are added. Mutations can either be synonymous, in which the variation still results in a codon for the same amino acid or non-synonymous, in which the variation results in a codon for a different amino acid. Mutations can also cause a frame shift, which occurs when the variation bumps the reference point for reading the genetic code down a base or two and results in loss of part, or sometimes all, of that gene product. DNA mutations can also be introduced by toxic chemicals and, particularly in skin cells, exposure to ultraviolet radiation. |The manner in which a cell replicates differs with the various classes of life forms, as well as with the end purpose of the cell replication. 
Cells that compose tissues in multicellular organisms typically replicate by organized duplication and spatial separation of their cellular genetic material, a process called mitosis. Meiosis is the mode of cell replication for the formation of sperm and egg cells in plants, animals, and many other multicellular life forms. Meiosis differs significantly from mitosis in that the cellular progeny have their complement of genetic material reduced to half that of the parent cell. |Mutations that occur in somatic cells—any cell in the body except gametes and their precursors—will not be passed on to the next generation. This does not mean, however, that somatic cell mutations, sometimes called acquired mutations, are benign. For example, as your skin cells prepare to divide and produce new skin cells, errors may be inadvertently introduced when the DNA is duplicated, resulting in a daughter cell that contains the error. Although most defective cells die quickly, some can persist and may even become cancerous if the mutation affects the ability to regulate Mutations and the Next Generation There are two places where mutations can be introduced and carried into the next generation. In the first stages of development, a sperm cell and egg cell fuse. They then begin to divide, giving rise to cells that differentiate into tissue-specific cell types. One early type of differentiated cell is the germ line cell, which may ultimately develop into mature gametes. If a mutation occurs in the developing germ line cell, it may persist until that individual reaches reproductive age. Now the mutation has the potential to be passed on to the next generation. Mutations may also be introduced during meiosis, the mode of cell replication for the formation of sperm and egg cells. In this case, the germ line cell is healthy, and the mutation is introduced during the actual process of gamete replication. Once again, the sperm or egg will contain the mutation, and during the reproductive process, this mutation may then be passed on to the offspring. One should bear in mind that not all mutations are bad. Mutations also provide a species with the opportunity to adapt to new environments, as well as to protect a species from new pathogens. Mutations are what lie behind the popular saying of "survival of the fittest", the basic theory of evolution proposed by Charles Darwin in 1859. This theory proposes that as new environments arise, individuals carrying certain mutations that enable an evolutionary advantage will survive to pass this mutation on to its offspring. It does not suggest that a mutation is derived from the environment, but that survival in that environment is enhanced by a particular mutation. Some genes, and even some organisms, have evolved to tolerate mutations better than others. For example, some viral genes are known to have high mutation rates. Mutations serve the virus well by enabling adaptive traits, such as changes in the outer protein coat so that it can escape detection and thereby destruction by the host's immune system. Viruses also produce certain enzymes that are necessary for infection of a host cell. A mutation within such an enzyme may result in a new form that still allows the virus to infect its host but that is no longer blocked by an anti-viral drug. This will allow the virus to propagate freely in its environment. 
Mendel's Laws—How We Inherit Our Genes In 1866, Gregor Mendel studied the transmission of seven different pea traits by carefully test-crossing many distinct varieties of peas. Studying garden peas might seem trivial to those of us who live in a modern world of cloned sheep and gene transfer, but Mendel's simple approach led to fundamental insights into genetic inheritance, known today as Mendel's Laws. Mendel did not actually know or understand the cellular mechanisms that produced the results he observed. Nonetheless, he correctly surmised the behavior of traits and the mathematical predictions of their transmission, the independent segregation of alleles during gamete production, and the independent assortment of genes. Perhaps as amazing as Mendel's discoveries was the fact that his work was largely ignored by the scientific community for over 30 years! Principles of Genetic Inheritance Law of Segregation: Each of the two inherited factors (alleles) possessed by the parent will segregate and pass into separate gametes (eggs or sperm) during meiosis, which will each carry only one of the factors. Law of Independent Assortment: In the gametes, alleles of one gene separate independently of those of another gene, and thus all possible combinations of alleles are equally Law of Dominance: Each trait is determined by two factors (alleles), inherited one from each parent. These factors each exhibit a characteristic dominant, co-dominant, or recessive expression, and those that are dominant will mask the expression of those that are recessive. How Does Inheritance Work? Our discussion here is restricted to sexually reproducing organisms where each gene in an individual is represented by two copies, called alleles—one on each chromosome pair. There may be more than two alleles, or variants, for a given gene in a population, but only two alleles can be found in an individual. Therefore, the probability that a particular allele will be inherited is 50:50, that is, alleles randomly and independently segregate into daughter cells, although there are some exceptions to this rule. The term diploid describes a state in which a cell has two sets of homologous chromosomes, or two chromosomes that are the same. The maturation of germ line stem cells into gametes requires the diploid number of each chromosome be reduced by half. Hence, gametes are said to be haploid—having only a single set of homologous chromosomes. This reduction is accomplished through a process called meiosis, where one chromosome in a diploid pair is sent to each daughter gamete. Human gametes, therefore, contain 23 chromosomes, half the number of somatic cells—all the other cells of the body. Because the chromosome in one pair separates independently of all other chromosomes, each new gamete has the potential for a totally new combination of chromosomes. In humans, the independent segregation of the 23 chromosomes can lead to as many as 16 to 17 million different combinations in one individual's gametes. Only one of these gametes will combine with one of the nearly 17 million possible combinations from the other parent, generating a staggering potential for individual variation. Yet, this is just the beginning. Even more variation is possible when you consider the recombination between sections of chromosomes during meiosis as well as the random mutation that can occur during DNA replication. With such a range of possibilities, it is amazing that siblings look so much alike! 
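The laws of segregation and dominance listed above can be illustrated with a short sketch that enumerates the equally likely allele combinations of a cross. The allele symbols and the Bb x Bb cross are chosen only as an example.

from collections import Counter
from itertools import product

def cross(parent1, parent2):
    # each parent contributes one of its two alleles; all four pairings are equally likely
    offspring = ("".join(sorted(a + b)) for a, b in product(parent1, parent2))
    return Counter(offspring)

genotypes = cross("Bb", "Bb")
print(genotypes)                             # Counter({'Bb': 2, 'BB': 1, 'bb': 1})

dominant = sum(n for g, n in genotypes.items() if "B" in g)
print(f"{dominant} of {sum(genotypes.values())} offspring show the dominant trait")

The printed counts reproduce the familiar 1:2:1 genotype ratio and the 3:1 dominant-to-recessive phenotype ratio for a heterozygous cross.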
Expression of Inherited Genes

Gene expression, as reflected in an organism's phenotype, is based on conditions specific for each copy of a gene. As we just discussed, for every human gene there are two copies, and for every gene there can be several variants or alleles. If both alleles are the same, the gene is said to be homozygous. If the alleles are different, they are said to be heterozygous. For some alleles, their influence on phenotype takes precedence over all other alleles. For others, expression depends on whether the gene appears in the homozygous or heterozygous state. Still other phenotypic traits are a combination of several alleles from several different genes.

Determining the allelic condition used to be accomplished solely through the analysis of pedigrees, much the way Mendel carried out his experiments on peas. However, this method can leave many questions unanswered, particularly for traits that are a result of the interaction between several different genes. Today, molecular genetic techniques exist that can assist researchers in tracking the transmission of traits by pinpointing the location of individual genes, identifying allelic variants, and identifying those traits that are caused by the interaction of multiple genes.

Nature of Alleles

A dominant allele is an allele that is almost always expressed, even if only one copy is present. Dominant alleles express their phenotype even when paired with a different allele, that is, when heterozygous. In this case, the phenotype appears the same in both the heterozygous and homozygous states. Just how the dominant allele overshadows the other allele depends on the gene, but in some cases the dominant allele produces a gene product that the other allele does not. Well-known dominant alleles occur in the human genes for Huntington disease, a form of dwarfism called achondroplasia, and polydactylism (extra fingers and toes).

On the other hand, a recessive allele will be expressed only if there are two identical copies of that allele or, for a male, if one copy is present on the X chromosome. The phenotype of a recessive allele is only seen when both alleles are the same. When an individual has one dominant allele and one recessive allele, the trait is not expressed because it is overshadowed by the dominant allele; the individual is said to be a carrier for that trait. Examples of recessive disorders in humans include sickle cell anemia and Tay-Sachs disease.

A particularly important category of genetic linkage has to do with the X and Y sex chromosomes. These chromosomes not only carry the genes that determine male and female traits, but also those for some other characteristics as well. Genes that are carried by either sex chromosome are said to be sex linked. Men normally have an X and a Y combination of sex chromosomes, whereas women have two X's. Because only men inherit Y chromosomes, they are the only ones to inherit Y-linked traits. Both men and women can have X-linked traits because both inherit X chromosomes.

X-linked traits not related to feminine body characteristics are primarily expressed in the phenotype of men. This is because men have only one X chromosome, so genes on that chromosome that do not code for gender are expressed in the male phenotype even if they are recessive. In women, a recessive allele on one X chromosome is often masked in their phenotype by a dominant normal allele on the other. This explains why women are frequently carriers of X-linked traits but more rarely have them expressed in their own phenotypes.
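As a rough illustration of the carrier idea (my own sketch, not from the primer), a few lines of R can enumerate the offspring of two carriers of a recessive allele; "A" and "a" are placeholder names for the dominant and recessive alleles:

# Offspring of two heterozygous (Aa) parents for a recessive trait
parent <- c("A", "a")
offspring <- expand.grid(from_mother = parent, from_father = parent,
                         stringsAsFactors = FALSE)
offspring$phenotype <- ifelse(offspring$from_mother == "A" | offspring$from_father == "A",
                              "dominant trait shown", "recessive trait shown")
table(offspring$phenotype) / nrow(offspring)   # 3/4 dominant, 1/4 recessive

On average, one in four children of two carriers is expected to show the recessive trait, which matches the pedigree reasoning described above.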
In humans, at least 320 genes are X-linked. These include the genes for hemophilia, red–green color blindness, and congenital night blindness. There are at least a dozen Y-linked genes, in addition to those that code for masculine physical traits.

It is now known that one of the X chromosomes in the cells of human females is completely, or mostly, inactivated early in embryonic life. This is a normal self-preservation action to prevent a potentially harmful double dose of genes. Recent research points to the "Xist" gene on the X chromosome as being responsible for a sequence of events that silences one of the X chromosomes in women. The inactivated X chromosomes become highly compacted structures known as Barr bodies. The presence of Barr bodies has been used at international sport competitions as a test to determine whether an athlete is a male or a female.

Exceptions to Mendel's Laws

There are many examples of inheritance that appear to be exceptions to Mendel's laws. Usually, they turn out to represent complex interactions among various allelic conditions. For example, co-dominant alleles both contribute to a phenotype; neither is dominant over the other. Control of the human blood group system provides a good example of co-dominant alleles.

Four Basic Blood Types

There are four basic blood types: O, A, B, and AB. Our blood type is determined by the alleles that we inherit from our parents. For the blood type gene, there are three basic alleles: A, B, and O. We all have two alleles, one inherited from each parent. The possible combinations of the three alleles are OO, AO, BO, AB, AA, and BB. Blood types A and B are co-dominant alleles, whereas O is recessive. A co-dominant allele is apparent even if only one copy is present; a recessive allele is apparent only if two recessive alleles are present. Because blood type O is recessive, it is not apparent if the person inherits an A or B allele along with it. So, the possible allele combinations result in a particular blood type in this way:

OO = blood type O
AO = blood type A
BO = blood type B
AB = blood type AB
AA = blood type A
BB = blood type B

You can see that a person with blood type B may have a B and an O allele, or they may have two B alleles. If both parents are blood type B and both have a B and a recessive O, then their children will be either BB, BO, or OO. If the child is BB or BO, he or she has blood type B. If the child is OO, he or she will have blood type O.

Pleiotropism, or pleiotropy, refers to the phenomenon in which a single gene is responsible for producing multiple, distinct, and apparently unrelated phenotypic traits; that is, an individual can exhibit many different phenotypic outcomes. This is because the gene product is active in many places in the body. An example is Marfan's syndrome, where there is a defect in the gene coding for a connective tissue protein. Individuals with Marfan's syndrome exhibit abnormalities in their eyes, skeletal system, and cardiovascular system.

Some genes mask the expression of other genes just as a fully dominant allele masks the expression of its recessive counterpart. A gene that masks the phenotypic effect of another gene is called an epistatic gene; the gene it subordinates is the hypostatic gene. The gene for albinism in humans is an epistatic gene. It is not part of the interacting skin-color genes.
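A short R sketch (my own illustration, following only the allele rules listed above) makes the BO × BO example explicit; the function name blood_type is just a placeholder:

# Blood type from two alleles: A and B are co-dominant, O is recessive
blood_type <- function(a1, a2) {
  alleles <- sort(c(a1, a2))
  if (identical(alleles, c("A", "B"))) return("AB")
  if ("A" %in% alleles) return("A")
  if ("B" %in% alleles) return("B")
  "O"
}

# Children of two BO parents: each parent passes on either B or O
children <- expand.grid(from_mother = c("B", "O"), from_father = c("B", "O"),
                        stringsAsFactors = FALSE)
children$type <- mapply(blood_type, children$from_mother, children$from_father)
table(children$type)   # three chances in four of type B, one in four of type O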
Rather, its dominant allele is necessary for the development of any skin pigment, and its recessive homozygous state results in the albino condition, regardless of how many other pigment genes may be present. Because of the effects of an epistatic gene, some individuals who inherit the dominant, disease-causing gene show only partial symptoms of the disease. Some, in fact, may show no expression of the disease-causing gene, a condition referred to as nonpenetrance. The individual in whom such a nonpenetrant mutant gene exists will be phenotypically normal but still capable of passing the deleterious gene on to offspring, who may exhibit the full-blown disease. Penetrance refers to the degree to which a particular allele is expressed in a population's phenotype: if every individual carrying a dominant mutant gene demonstrates the mutant phenotype, the gene is said to show complete penetrance.

Then we have traits that are multigenic, that is, they result from the expression of several different genes. This is true for human eye color, in which at least three different genes are involved. A brown/blue gene and a central brown gene are both found on chromosome 15, whereas a green/blue gene is found on chromosome 19. The interaction between these genes is not well understood. It is speculated that there may be other genes that control other factors, such as the amount of pigment deposited in the iris. This multigenic system explains why two blue-eyed individuals can have a brown-eyed child.

Speaking of eye color, have you ever seen someone with one green eye and one brown eye? In this case, somatic mosaicism may be the culprit. This is probably easier to describe than explain. In multicellular organisms, every cell in the adult is ultimately derived from the single-cell fertilized egg. Therefore, every cell in the adult normally carries the same genetic information. However, what would happen if a mutation occurred in only one cell at the two-cell stage of development? Then the adult would be composed of two types of cells: cells with the mutation and cells without. If a mutation affecting melanin production occurred in one of the cells in the cell lineage of one eye but not the other, then the eyes would have different genetic potential for melanin synthesis. This could produce eyes of two different colors.

Molecular Genetics: The Study of Heredity, Genes, and DNA

As we have just learned, DNA provides a blueprint that directs all cellular activities and specifies the developmental plan of multicellular organisms. Therefore, an understanding of DNA, gene structure, and function is fundamental for an appreciation of the molecular biology of the cell. Yet, it is important to recognize that progress in any scientific field depends on the availability of experimental tools that allow researchers to make new scientific observations and conduct novel experiments. The last section of the genetics primer concludes with a discussion of some of the laboratory tools and technologies that allow researchers to study cells and their DNA.
http://www.ncbi.nlm.nih.gov/About/primer/genetics_genome.html
13
55
Proof of the area of a trapezoid

A first good way to start off with the proof of the area of a trapezoid is to draw a trapezoid and turn the trapezoid into a rectangle. Look at the trapezoid ABCD above. How would you turn this into a rectangle?

Draw the average base (shown in red), which connects the midpoints of the two sides that are not parallel. Then, make 4 triangles as shown below. Let's call the two parallel sides in blue (the bases) b1 and b2.

Since triangles EDI and CFI are congruent and triangles KAJ and RBJ are congruent, you could make a rectangle by rotating triangle EDI 180 degrees counterclockwise around point I and rotating triangle KAJ 180 degrees clockwise around point J. Because you could make a rectangle with the trapezoid, both figures have the same area.

The reason that triangle EDI is congruent to triangle IFC is that we can find two angles inside the triangles that are the same. If two angles are the same, then the third or last angle must be the same. The angles that are the same are shown below, in red and green. The angles in green are right angles; the angles in red are vertical angles. This is important because if these two triangles were not congruent, we could not make the rectangle from the trapezoid by rotating triangle EDI; it would not fit properly. Again, this same argument applies for the two triangles on the left.

Therefore, if we can find the area of the rectangle, the trapezoid will have the same area. Let us find the area of the rectangle. We will need the following figure again. First, make these important observations:

b1 = RC
BF = BR + b1 + CF
b2 = AD
KE = AD − AK − ED, so KE = b2 − AK − ED
AK = BR and ED = CF

Notice also that you can find the length of the line in red (the average base) by taking the average of length BF and length KE. Since the length of the line in red is the same as the base of the rectangle, we can just multiply that by the height to get the area of the trapezoid. Finally, we get:

area of trapezoid = ((b1 + b2)/2) × h

An alternative proof of the area of a trapezoid could be done this way. Start with the same trapezoid. Draw heights from vertices B and C. This will break the trapezoid down into 3 shapes: 2 triangles and a rectangle. Label the base of the small triangle x and the base of the bigger triangle y. Label the shorter base of the trapezoid b1 and the longer base b2. Then b1 = b2 − (x + y), so x + y = b2 − b1.

The area of the rectangle is b1 × h, and the areas of the triangles with bases x and y are (1/2) × x × h and (1/2) × y × h. To get the total area, just add these areas together:

b1 × h + (1/2) × x × h + (1/2) × y × h
= b1 × h + (1/2) × (x + y) × h
= b1 × h + (1/2) × (b2 − b1) × h
= ((b1 + b2)/2) × h

The proof of the area of a trapezoid is complete. Any questions, contact me.
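As a quick numerical sanity check (my own addition, with made-up dimensions), the formula can be compared in R against the rectangle-plus-two-triangles decomposition from the second proof:

# Trapezoid area two ways: the formula, and the decomposition from the proof
trapezoid_area <- function(b1, b2, h) (b1 + b2) / 2 * h

b1 <- 4; b2 <- 10; h <- 3          # example dimensions
x_plus_y <- b2 - b1                # combined bases of the two corner triangles
rectangle <- b1 * h
triangles <- (1 / 2) * x_plus_y * h
trapezoid_area(b1, b2, h)          # 21
rectangle + triangles              # also 21, as the proof predicts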
http://www.basic-mathematics.com/proof-of-the-area-of-a-trapezoid.html
13
124
[Figure: schematic diagram of a high-bypass turbofan engine]

A turbofan is a type of aircraft gas turbine engine that provides thrust using a combination of a ducted fan and a jet exhaust nozzle. Part of the airstream from the ducted fan passes through the core, providing oxygen to burn fuel to create power. The rest of the air flow bypasses the engine core and mixes with the faster stream from the core, significantly reducing exhaust noise. The slower bypass airflow produces thrust more efficiently than the high-speed air from the core, and this reduces the specific fuel consumption. A few designs work slightly differently and have the fan blades as a radial extension of an aft-mounted low-pressure turbine unit.

Turbofans have a net exhaust speed that is much lower than that of a turbojet. This makes them much more efficient at subsonic speeds than turbojets, and somewhat more efficient at supersonic speeds up to roughly Mach 1.6, but they have also been found to be efficient when used with continuous afterburner at Mach 3 and above. However, the lower exhaust speed also reduces thrust at high flight speeds. All of the jet engines used in currently manufactured commercial jet aircraft are turbofans. They are used commercially mainly because they are highly efficient and relatively quiet in operation. Turbofans are also used in many military jet aircraft, such as the F-15 Eagle.

Unlike a reciprocating engine, a turbojet undertakes a continuous-flow combustion process. In a single-spool (or single-shaft) turbojet, which is the most basic form and the earliest type of turbojet to be developed, air enters an intake before being compressed to a higher pressure by a rotating (fan-like) compressor. The compressed air passes on to a combustor, where it is mixed with a fuel (e.g. kerosene) and ignited. The hot combustion gases then enter a windmill-like turbine, where power is extracted to drive the compressor. Although the expansion process in the turbine reduces the gas pressure (and temperature) somewhat, the remaining energy and pressure is employed to provide a high-velocity jet by passing the gas through a propelling nozzle. This process produces a net thrust opposite in direction to that of the jet flow.

After World War II, 2-spool (or 2-shaft) turbojets were developed to make it easier to throttle back compression systems with a high design overall pressure ratio (i.e., combustor inlet pressure/intake delivery pressure). Adopting the 2-spool arrangement enables the compression system to be split in two, with a Low Pressure (LP) Compressor supercharging a High Pressure (HP) Compressor. Each compressor is mounted on a separate (co-axial) shaft, driven by its own turbine (i.e. HP Turbine and LP Turbine). Otherwise a 2-spool turbojet is much like a single-spool engine.

Modern turbofans evolved from the 2-spool axial-flow turbojet engine, essentially by increasing the relative size of the Low Pressure (LP) Compressor to the point where some (if not most) of the air exiting the unit actually bypasses the core (or gas-generator) stream, which passes through the main combustor. This bypass air either expands through a separate propelling nozzle, or is mixed with the hot gases leaving the Low Pressure (LP) Turbine, before expanding through a Mixed Stream Propelling Nozzle. Owing to a lower jet velocity, a modern civil turbofan is quieter than the equivalent turbojet. Turbofans also have a better thermal efficiency, which is explained later in the article.
In a turbofan, the LP Compressor is often called a fan. Civil-aviation turbofans usually have a single fan stage, whereas most military-aviation turbofans (e.g. combat and trainer aircraft applications) have multi-stage fans. Modern military transport turbofan engines are, however, similar to those that propel civil jetliners.

Turboprop engines are gas-turbine engines that deliver almost all of their power to a shaft to drive a propeller. Turboprops remain popular on very small or slow aircraft, such as small commuter airliners, for their fuel efficiency at lower speeds, as well as on medium military transports and patrol planes, such as the C-130 Hercules and P-3 Orion, for their high takeoff performance and mission endurance benefits respectively.

If the turboprop is better at moderate flight speeds and the turbojet is better at very high speeds, it might be imagined that at some speed range in the middle a mixture of the two is best. Such an engine is the turbofan (originally termed bypass turbojet by the inventors at Rolls Royce). Another name sometimes used is ducted fan, though that term is also used for propellers and fans used in vertical-flight applications. The difference between a turbofan and a propeller, besides direct thrust, is that the intake duct of the former slows the air before it arrives at the fan face. As both propeller and fan blades must operate at subsonic inlet velocities to be efficient, ducted fans allow efficient operation at higher vehicle speeds. Depending on specific thrust (i.e. net thrust/intake airflow), ducted fans operate best from about 400 to 2000 km/h (250 to 1300 mph), which is why turbofans are the most common type of engine for aviation use today in airliners as well as subsonic/supersonic military fighter and trainer aircraft. Note, however, that turbofans use extensive ducting to force incoming air to subsonic velocities (thus reducing shock waves throughout the engine).

The noise of any type of jet engine is strongly related to the velocity of the exhaust gases, typically being proportional to the eighth power of the jet velocity. High-bypass-ratio (i.e., low-specific-thrust) turbofans are relatively quiet compared to turbojets and low-bypass-ratio (i.e., high-specific-thrust) turbofans. A low-specific-thrust engine has a low jet velocity by definition, as the following approximate equation for net thrust implies:

net thrust ≈ intake airflow × (jet velocity − flight velocity)

Rearranging the above equation, specific thrust is given by:

specific thrust = net thrust / intake airflow ≈ jet velocity − flight velocity

So for zero flight velocity, specific thrust is directly proportional to jet velocity. Relatively speaking, low-specific-thrust engines are large in diameter to accommodate the high airflow required for a given thrust. Jet aircraft are often considered loud, but a conventional piston engine or a turboprop engine delivering the same thrust would be much louder.

Early turbojet engines were very fuel-inefficient, as their overall pressure ratio and turbine inlet temperature were severely limited by the technology available at the time. The very first running turbofan was the German Daimler-Benz DB 670 (also known as 109-007), which was operated on its testbed on April 1, 1943. The engine was later abandoned as the war went on and its problems could not be solved. The British wartime Metrovick F.2 axial flow jet was given a fan to create the first British turbofan.
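To illustrate the trade-off, here is a hedged R sketch of the approximate relation above (my own numbers, chosen only for illustration): a low jet velocity must be compensated by a much larger airflow to deliver the same thrust, which is exactly the high-bypass, large-diameter, quieter design point.

# Approximate net thrust, ignoring fuel mass flow and pressure terms
net_thrust <- function(airflow, v_jet, v_flight) airflow * (v_jet - v_flight)

# Two hypothetical engines delivering the same thrust at the same flight speed
net_thrust(airflow = 600, v_jet = 350,  v_flight = 250)   # high-bypass style: 60000 N
net_thrust(airflow = 75,  v_jet = 1050, v_flight = 250)   # turbojet style:    60000 N
# The quieter engine is the one with the lower jet velocity (and larger diameter).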
Improved materials, and the introduction of twin compressors such as in the Pratt & Whitney JT3C engine, increased the overall pressure ratio and thus the thermodynamic efficiency of engines, but they also led to a poor propulsive efficiency, as pure turbojets have a high specific thrust/high velocity exhaust better suited to supersonic flight. The original low-bypass turbofan engines were designed to improve propulsive efficiency by reducing the exhaust velocity to a value closer to that of the aircraft. The Rolls-Royce Conway, the first production turbofan, had a bypass ratio of 0.3, similar to the modern General Electric F404 fighter engine. Civilian turbofan engines of the 1960s, such as the Pratt & Whitney JT8D and the Rolls-Royce Spey had bypass ratios closer to 1, but were not dissimilar to their military equivalents. The unusual General Electric CF700 turbofan engine was developed as an aft-fan engine with a 2.0 bypass ratio. This was derived from the T-38 Talon and the Learjet General Electric J85/CJ610 turbojet (2,850 lbf or 12,650 N) to power the larger Rockwell Sabreliner 75/80 model aircraft, as well as the Dassault Falcon 20 with about a 50% increase in thrust (4,200 lbf or 18,700 N). The CF700 was the first small turbofan in the world to be certified by the Federal Aviation Administration (FAA). There are now over 400 CF700 aircraft in operation around the world, with an experience base of over 10 million service hours. The CF700 turbofan engine was also used to train Moon-bound astronauts in Project Apollo as the powerplant for the Lunar Landing Research Vehicle. A high specific thrust/low bypass ratio turbofan normally has a multi-stage fan, developing a relatively high pressure ratio and, thus, yielding a high (mixed or cold) exhaust velocity. The core airflow needs to be large enough to give sufficient core power to drive the fan. A smaller core flow/higher bypass ratio cycle can be achieved by raising the (HP) turbine rotor inlet temperature. Imagine a retrofit situation where a new low bypass ratio, mixed exhaust, turbofan is replacing an old turbojet, in a particular military application. Say the new engine is to have the same airflow and net thrust (i.e. same specific thrust) as the one it is replacing. A bypass flow can only be introduced if the turbine inlet temperature is allowed to increase, to compensate for a correspondingly smaller core flow. Improvements in turbine cooling/material technology would facilitate the use of a higher turbine inlet temperature, despite increases in cooling air temperature, resulting from a probable increase in overall pressure ratio. Efficiently done, the resulting turbofan would probably operate at a higher nozzle pressure ratio than the turbojet, but with a lower exhaust temperature to retain net thrust. Since the temperature rise across the whole engine (intake to nozzle) would be lower, the (dry power) fuel flow would also be reduced, resulting in a better specific fuel consumption (SFC). A few low-bypass ratio military turbofans (e.g. F404) have Variable Inlet Guide Vanes, with piano-style hinges, to direct air onto the first rotor stage. This improves the fan surge margin (see compressor map) in the mid-flow range. The swing wing F-111 achieved a very high range / payload capability by pioneering the use of this engine, and it was also the heart of the famous F-14 Tomcat air superiority fighter which used the same engines in a smaller, more agile airframe to achieve efficient cruise and Mach 2 speed. 
Since the 1970s, most jet fighter engines have been low/medium bypass turbofans with a mixed exhaust, afterburner and variable area final nozzle. An afterburner is a combustor located downstream of the turbine blades and directly upstream of the nozzle, which burns fuel from afterburner-specific fuel injectors. When lit, prodigious amounts of fuel are burnt in the afterburner, raising the temperature of exhaust gases by a significant degree, resulting in a higher exhaust velocity/engine specific thrust. The variable geometry nozzle must open to a larger throat area to accommodate the extra volume flow when the afterburner is lit. Afterburning is often designed to give a significant thrust boost for take off, transonic acceleration and combat maneuvers, but is very fuel intensive. Consequently afterburning can only be used for short portions of a mission. However the Mach 3 SR-71 was designed for continuous operation and to be efficient with the afterburner lit. Unlike the main combustor, where the downstream turbine blades must not be damaged by high temperatures, an afterburner can operate at the ideal maximum (stoichiometric) temperature (i.e. about 2100K/3780Ra/3320F). At a fixed total applied fuel:air ratio, the total fuel flow for a given fan airflow will be the same, regardless of the dry specific thrust of the engine. However, a high specific thrust turbofan will, by definition, have a higher nozzle pressure ratio, resulting in a higher afterburning net thrust and, therefore, a lower afterburning specific fuel consumption. However, high specific thrust engines have a high dry SFC. The situation is reversed for a medium specific thrust afterburning turbofan: i.e. poor afterburning SFC/good dry SFC. The former engine is suitable for a combat aircraft which must remain in afterburning combat for a fairly long period, but only has to fight fairly close to the airfield (e.g. cross border skirmishes) The latter engine is better for an aircraft that has to fly some distance, or loiter for a long time, before going into combat. However, the pilot can only afford to stay in afterburning for a short period, before his/her fuel reserves become dangerously low. Modern low-bypass military turbofans include the Pratt & Whitney F119, the Eurojet EJ200 and the General Electric F110 and F414, all of which feature a mixed exhaust, afterburner and variable area propelling nozzle. Non-afterburning engines include the Rolls-Royce/Turbomeca Adour (afterburning in the SEPECAT Jaguar) and the unmixed, vectored thrust, Rolls-Royce Pegasus. The low specific thrust/high bypass ratio turbofans used in today's civil jetliners (and some military transport aircraft) evolved from the high specific thrust/low bypass ratio turbofans used in such aircraft back in the 1960s. Low specific thrust is achieved by replacing the multi-stage fan with a single stage unit. Unlike some military engines, modern civil turbofans do not have any stationary inlet guide vanes in front of the fan rotor. The fan is scaled to achieve the desired net thrust. The core (or gas generator) of the engine must generate sufficient core power to at least drive the fan at its design flow and pressure ratio. Through improvements in turbine cooling/material technology, a higher (HP) turbine rotor inlet temperature can be used, thus facilitating a smaller (and lighter) core and (potentially) improving the core thermal efficiency. 
Reducing the core mass flow tends to increase the load on the LP turbine, so this unit may require additional stages to reduce the average stage loading and to maintain LP turbine efficiency. Reducing core flow also increases bypass ratio (5:1, or more, is now common). Further improvements in core thermal efficiency can be achieved by raising the overall pressure ratio of the core. Improved blade aerodynamics reduces the number of extra compressor stages required. With multiple compressors (i.e. LPC, IPC, HPC) dramatic increases in overall pressure ratio have become possible. Variable geometry (i.e. stators) enable high pressure ratio compressors to work surge-free at all throttle settings. The first high-bypass turbofan engine was the General Electric TF39, designed in mid 1960s to power the Lockheed C-5 Galaxy military transport aircraft. The civil General Electric CF6 engine used a derived design. Other high-bypass turbofans are the Pratt & Whitney JT9D, the three-shaft Rolls-Royce RB211 and the CFM International CFM56. More recent large high-bypass turbofans include the Pratt & Whitney PW4000, the three-shaft Rolls-Royce Trent, the General Electric GE90/GEnx and the GP7000, produced jointly by GE and P&W. High-bypass turbofan engines are generally quieter than the earlier low bypass ratio civil engines. This is not so much due to the higher bypass ratio, as to the use of a low pressure ratio, single stage, fan, which significantly reduces specific thrust and, thereby, jet velocity. The combination of a higher overall pressure ratio and turbine inlet temperature improves thermal efficiency. This, together with a lower specific thrust (better propulsive efficiency), leads to a lower specific fuel consumption. For reasons of fuel economy, and also of reduced noise, almost all of today's jet airliners are powered by high-bypass turbofans. Although modern combat aircraft tend to use low bypass ratio turbofans, military transport aircraft (e.g. C-17 ) mainly use high bypass ratio turbofans (or turboprops) for fuel efficiency. Because of the implied low mean jet velocity, a high bypass ratio/low specific thrust turbofan has a high thrust lapse rate (with rising flight speed). Consequently the engine must be over-sized to give sufficient thrust during climb/cruise at high flight speeds (e.g. Mach 0.83). Because of the high thrust lapse rate, the static (i.e. Mach 0) thrust is consequently relatively high. This enables heavily laden, wide body aircraft to accelerate quickly during take-off and consequently lift-off within a reasonable runway length. The turbofans on twin engined airliners are further over-sized to cope with losing one engine during take-off, which reduces the aircraft's net thrust by 50%. Modern twin engined airliners normally climb very steeply immediately after take-off. If one engine is lost, the climb-out is much shallower, but sufficient to clear obstacles in the flightpath. The Soviet Union's engine technology was less advanced than the West's and its first wide-body aircraft, the Ilyushin Il-86, was powered by low-bypass engines. The Yakovlev Yak-42, a medium-range, rear-engined aircraft seating up to 120 passengers introduced in 1980 was the first Soviet aircraft to use high-bypass engines. Turbofan engines come in a variety of engine configurations. For a given engine cycle (i.e. 
same airflow, bypass ratio, fan pressure ratio, overall pressure ratio and HP turbine rotor inlet temperature), the choice of turbofan configuration has little impact upon the design point performance (e.g. net thrust, SFC), as long as overall component performance is maintained. Off-design performance and stability is, however, affected by engine configuration. As the design overall pressure ratio of an engine cycle increases, it becomes more difficult to throttle the compression system, without encountering an instability known as compressor surge. This occurs when some of the compressor aerofoils stall (like the wings of an aircraft) causing a violent change in the direction of the airflow. However, compressor stall can be avoided, at throttled conditions, by progressively: 1) opening interstage/intercompressor blow-off valves (inefficient) 2) closing variable stators within the compressor Most modern American civil turbofans employ a relatively high pressure ratio High Pressure (HP) Compressor, with many rows of variable stators to control surge margin at part-throttle. In the three-spool RB211/Trent the core compression system is split into two, with the IP compressor, which supercharges the HP compressor, being on a different coaxial shaft and driven by a separate (IP) turbine. As the HP Compressor has a modest pressure ratio it can be throttled-back surge-free, without employing variable geometry. However, because a shallow IP compressor working line is inevitable, the IPC requires at least one stage of variable geometry. Although far from common, the Single Shaft Turbofan is probably the simplest configuration, comprising a fan and high pressure compressor driven by a single turbine unit, all on the same shaft. The SNECMA M53, which powers Mirage fighter aircraft, is an example of a Single Shaft Turbofan. Despite the simplicity of the turbomachinery configuration, the M53 requires a variable area mixer to facilitate part-throttle operation. One of the earliest turbofans was a derivative of the General Electric J79 turbojet, known as the CJ805, which featured an integrated aft fan/low pressure (LP) turbine unit located in the turbojet exhaust jetpipe. Hot gas from the turbojet turbine exhaust expanded through the LP turbine, the fan blades being a radial extension of the turbine blades. This Aft Fan configuration was later exploited in the General Electric GE-36 UDF (propfan) Demonstrator of the early 80's. One of the problems with the Aft Fan configuration is hot gas leakage from the LP turbine to the fan. Many turbofans have the Basic Two Spool configuration where both the fan and LP turbine (i.e. LP spool) are mounted on a second (LP) shaft, running concentrically with the HP spool (i.e. HP compressor driven by HP turbine). The BR710 is typical of this configuration. At the smaller thrust sizes, instead of all-axial blading, the HP compressor configuration may be axial-centrifugal (e.g. General Electric CFE738), double-centrifugal or even diagonal/centrifugal (e.g. Pratt & Whitney Canada PW600). Higher overall pressure ratios can be achieved by either raising the HP compressor pressure ratio or adding an Intermediate Pressure (IP) Compressor between the fan and HP compressor, to supercharge or boost the latter unit helping to raise the overall pressure ratio of the engine cycle to the very high levels employed today (i.e. greater than 40:1, typically). All of the large American turbofans (e.g. 
General Electric CF6, GE90 and GEnx plus Pratt & Whitney JT9D and PW4000) feature an IP compressor mounted on the LP shaft and driven, like the fan, by the LP turbine, the mechanical speed of which is dictated by the tip speed and diameter of the fan. The high bypass ratios (i.e. fan duct flow/core flow) used in modern civil turbofans tends to reduce the relative diameter of the attached IP compressor, causing its mean tip speed to decrease. Consequently more IPC stages are required to develop the necessary IPC pressure rise. Rolls-Royce chose a Three Spool configuration for their large civil turbofans (i.e. the RB211 and Trent families), where the Intermediate Pressure (IP) compressor is mounted on a separate (IP) shaft, running concentrically with the LP and HP shafts, and is driven by a separate IP Turbine. Consequently, the IP compressor can rotate faster than the fan, increasing its mean tip speed, thereby reducing the number of IP stages required for a given IPC pressure rise. Because the RB211/Trent designs have a higher IPC pressure rise than the American engines, the HPC pressure rise is less resulting in a shorter, lighter engine. However, three spool engines are harder to both build and maintain. As bypass ratio increases, the mean radius ratio of the fan and LP turbine increases. Consequently, if the fan is to rotate at its optimum blade speed the LP turbine blading will spin slowly, so additional LPT stages will be required, to extract sufficient energy to drive the fan. Introducing a (planetary) reduction gearbox, with a suitable gear ratio, between the LP shaft and the fan, enables both the fan and LP turbine to operate at their optimum speeds. Typical of this configuration are the long-established Honeywell TFE731, the Honeywell ALF 502/507, and the recent Pratt & Whitney PW1000G. Most of the configurations discussed above are used in civil turbofans, while modern military turbofans (e.g. SNECMA M88) are usually Basic Two Spool. Most civil turbofans use a high efficiency, 2-stage HP turbine to drive the HP compressor. The CFM56 uses an alternative approach: a single stage, high-work unit. While this approach is probably less efficient, there are savings on cooling air, weight and cost. In the RB211 and Trent series, Rolls-Royce split the two stages into two discrete units; one on the HP shaft driving the HP compressor; the other on the IP shaft driving the IP (Intermediate Pressure) Compressor. Modern military turbofans tend to use single stage HP turbines. Modern civil turbofans have multi-stage LP turbines (e.g. 3, 4, 5, 6, 7). The number of stages required depends on the engine cycle bypass ratio and how much supercharging (i.e. IP compression) is on the LP shaft, behind the fan. A geared fan may reduce the number of required LPT stages in some applications. Because of the much lower bypass ratios employed, military turbofans only require one or two LP turbine stages. Consider a mixed turbofan with a fixed bypass ratio and airflow. Increasing the overall pressure ratio of the compression system raises the combustor entry temperature. Therefore, at a fixed fuel flow there is an increase in (HP) turbine rotor inlet temperature. Although the higher temperature rise across the compression system implies a larger temperature drop over the turbine system, the mixed nozzle temperature is unaffected, because the same amount of heat is being added to the system. 
There is, however, a rise in nozzle pressure, because overall pressure ratio increases faster than the turbine expansion ratio, causing an increase in the hot mixer entry pressure. Consequently, net thrust increases, whilst specific fuel consumption (fuel flow/net thrust) decreases. A similar trend occurs with unmixed turbofans. So turbofans can be made more fuel efficient by raising overall pressure ratio and turbine rotor inlet temperature in unison. However, better turbine materials and/or improved vane/blade cooling are required to cope with increases in both turbine rotor inlet temperature and compressor delivery temperature. Increasing the latter may require better compressor materials. Overall pressure ratio can be increased by improving fan (or) LP compressor pressure ratio and/or HP compressor pressure ratio. If the latter is held constant, the increase in (HP) compressor delivery temperature (from raising overall pressure ratio) implies an increase in HP mechanical speed. However, stressing considerations might limit this parameter, implying, despite an increase in overall pressure ratio, a reduction in HP compressor pressure ratio. According to simple theory, if the ratio turbine rotor inlet temperature/(HP) compressor delivery temperature is maintained, the HP turbine throat area can be retained. However, this assumes that cycle improvements are obtained, whilst retaining the datum (HP) compressor exit flow function (non-dimensional flow). In practise, changes to the non-dimensional speed of the (HP) compressor and cooling bleed extraction would probably make this assumption invalid, making some adjustment to HP turbine throat area unavoidable. This means the HP turbine nozzle guide vanes would have to be different from the original! In all probability, the downstream LP turbine nozzle guide vanes would have to be changed anyway. Thrust growth is obtained by increasing core power. There are two basic routes available: a) hot route: increase HP turbine rotor inlet temperature b) cold route: increase core mass flow Both routes require an increase in the combustor fuel flow and, therefore, the heat energy added to the core stream. The hot route may require changes in turbine blade/vane materials and/or better blade/vane cooling. The cold route can be obtained by one of the following: all of which increase both overall pressure ratio and core airflow. Alternatively, the core size can be increased, to raise core airflow, without changing overall pressure ratio. This route is expensive, since a new (upflowed) turbine system (and possibly a larger IP compressor) is also required. Changes must also be made to the fan to absorb the extra core power. On a civil engine, jet noise considerations mean that any significant increase in Take-off thrust must be accompanied by a corresponding increase in fan mass flow (to maintain a T/O specific thrust of about 30lbf/lb/s), usually by increasing fan diameter. On military engines, the fan pressure ratio would probably be increased to improve specific thrust, jet noise not normally being an important factor. The turbine blades in a turbofan engine are subject to high heat and stress, and require special fabrication. New material construction methods and material science have allowed blades, which were originally polycrystalline (regular metal), to be made from lined up metallic crystals and more recently mono-crystalline (i.e. single crystal) blades, which can operate at higher temperatures with less distortion. 
Nickel-based superalloys are used for HP turbine blades in almost all of the modern jet engines. The temperature capabilities of turbine blades have increased mainly through four approaches: the manufacturing (casting) process, cooling path design, thermal barrier coating (TBC), and alloy development. Although turbine blade (and vane) materials have improved over the years, much of the increase in (HP) turbine inlet temperatures is due to improvements in blade/vane cooling technology. Relatively cool air is bled from the compression system, bypassing the combustion process, and enters the hollow blade or vane. After picking up heat from the blade/vane, the cooling air is dumped into the main gas stream. If the local gas temperatures are low enough, downstream blades/vanes are uncooled and solid. Strictly speaking, cycle-wise the HP Turbine Rotor Inlet Temperature (after the temperature drop across the HPT stator) is more important than the (HP) turbine inlet temperature. Although some modern military and civil engines have peak RITs of the order of 3300 °R (2840 °F) or 1833 K (1560 °C), such temperatures are only experienced for a short time (during take-off) on civil engines. The turbofan engine market is dominated by General Electric, Rolls-Royce plc and Pratt & Whitney, in order of market share. GE and SNECMA of France have a joint venture, CFM International which, as the 3rd largest manufacturer in terms of market share, fits between Rolls Royce and Pratt & Whitney. Rolls Royce and Pratt & Whitney also have a joint venture, International Aero Engines, specializing in engines for the Airbus A320 family, whilst finally, Pratt & Whitney and General Electric have a joint venture, Engine Alliance marketing a range of engines for aircraft such as the Airbus A380. Williams International is the world leader in smaller business jet turbofans. GE Aviation, part of the General Electric Conglomerate, currently has the largest share of the turbofan engine market. Some of their engine models include the CF6 (available on the Boeing 767, Boeing 747, Airbus A330 and more), GE90 (only the Boeing 777) and GEnx (developed for the Boeing 747-8 & Boeing 787 and proposed for the Airbus A350, currently in development) engines. On the military side, GE engines power many U.S. military aircraft, including the F110, powering 80% of the US Air Force's F-16 Fighting Falcons and the F404 and F414 engines, which power the Navy's F/A-18 Hornet and Super Hornet. Rolls Royce and General Electric are jointly developing the F136 engine to power the Joint Strike Fighter. CFM International is a joint venture between GE Aircraft Engines and SNECMA of France. They have created the very successful CFM56 series, used on Boeing 737, Airbus A340, and Airbus A320 family aircraft. Rolls-Royce plc is the second largest manufacturer of turbofans and is most noted for their RB211 and Trent series, as well as their joint venture engines for the Airbus A320 and Boeing MD-90 families (IAE V2500 with Pratt & Whitney and others), the Panavia Tornado (Turbo-Union RB199) and the Boeing 717 (BR700). Rolls Royce, as owners of the Allison Engine Company, have their engines powering the C-130 Hercules and several Embraer regional jets. Rolls-Royce Trent 970s were the first engines to power the new Airbus A380. It was also Rolls-Royce Olympus/SNECMA jets that powered the now retired Concorde although they were turbojets rather than turbofans. 
The famous thrust vectoring Pegasus engine is the primary powerplant of the Harrier "Jump Jet" and its derivatives. Pratt & Whitney is third behind GE and Rolls-Royce in market share. The JT9D has the distinction of being chosen by Boeing to power the original Boeing 747 "Jumbo Jet". The PW4000 series is the successor to the JT9D, and powers some Airbus A310, Airbus A300, Boeing 747, Boeing 767, Boeing 777, Airbus A330 and MD-11 aircraft. The PW4000 is certified for 180-minute ETOPS when used in twinjets. The first family has a 94 inch (2.4 m) fan diameter and is designed to power the Boeing 767, Boeing 747, MD-11, and the Airbus A300. The second family is the 100 inch (2.5 m) fan engine developed specifically for the Airbus A330 twinjet, and the third family has a 112 inch (2.8 m) fan diameter designed to power the Boeing 777. The Pratt & Whitney F119 and its derivative, the F135, power the United States Air Force's F-22 Raptor and the international F-35 Lightning II, respectively. Rolls-Royce are responsible for the lift fan which will provide the F-35B variants with a STOVL capability. The F100 engine was first used on the F-15 Eagle and F-16 Fighting Falcon. Newer Eagles and Falcons also come with the GE F110 as an option, and the two are in competition.

Aviadvigatel (Russian: Авиационный двигатель) is the Russian aircraft engine company that succeeded the Soviet Soloviev Design Bureau. It currently has one engine on the market, the Aviadvigatel PS-90. The engine is used on the Ilyushin Il-96-300, -400 and T, the Tupolev Tu-204 and Tu-214 series, and the Ilyushin Il-76MD-90. The company later changed its name to Perm Engine Company.

Ivchenko-Progress is the Ukrainian aircraft engine company that succeeded the Soviet Ivchenko Design Bureau. Some of their engine models include the Progress D-436, available on the Antonov An-72/74, Yakovlev Yak-42, Beriev Be-200, Antonov An-148 and Tupolev Tu-334, and the Progress D-18T, which powers two of the world's largest airplanes, the Antonov An-124 and Antonov An-225.

In the 1970s Rolls-Royce/SNECMA tested an M45SD-02 turbofan fitted with variable pitch fan blades to improve handling at ultra low fan pressure ratios and to provide thrust reverse down to zero aircraft speed. The engine was aimed at ultra quiet STOL aircraft operating from city centre airports.

In a bid for increased efficiency with speed, a development of the turbofan and turboprop known as the propfan engine was created with an unducted fan. The fan blades are situated outside of the duct, so that it appears like a turboprop with wide scimitar-like blades. Both General Electric and Pratt & Whitney/Allison demonstrated propfan engines in the 1980s. Excessive cabin noise and relatively cheap jet fuel prevented the engines from being put into service.

The Unicode standard includes a turbofan-like character, U+274B, in the Dingbats range. Its official name is "HEAVY EIGHT TEARDROP-SPOKED PROPELLER ASTERISK", with "turbofan" listed as an informal alias.
http://www.thefullwiki.org/Turbofan
13
58
Introduction to Tension

An introduction to tension. Solving for the tension(s) in a set of wires when a weight is hanging from them.

- I will now introduce you to the concept of tension.
- So tension is really just the force that exists either
- within or applied by a string or wire.
- It's usually lifting something or pulling on something.
- So let's say I had a weight.
- Let's say I have a weight here.
- And let's say it's 100 Newtons.
- And it's suspended from this wire, which is right here.
- Let's say it's attached to the ceiling right there.
- Well we already know that the force-- if we're on this
- planet that this weight is being pulled down by gravity.
- So we already know that there's a downward force on
- this weight, which is a force of gravity.
- And that equals 100 Newtons.
- But we also know that this weight isn't accelerating,
- it's actually stationary.
- It also has no velocity.
- But the important thing is it's not accelerating.
- But given that, we know that the net force on it must be 0
- by Newton's laws.
- So what is the counteracting force?
- You didn't have to know about tension to say well, the
- string's pulling on it.
- The string is what's keeping the weight from falling.
- So the force that the string or this wire applies on this
- weight you can view as the force of tension.
- Another way to think about it is that's also the force
- that's within the wire.
- And that is going to exactly offset the force of gravity on
- this weight.
- And that's what keeps this point right here stationary
- and keeps it from accelerating.
- That's pretty straightforward.
- Tension, it's just the force of a string.
- And just so you can conceptualize it, on a guitar,
- the more you pull on some of those higher-- what was it?
- The really thin strings that sound higher pitched.
- The more you pull on it, the higher the tension.
- It actually creates a higher pitched note.
- So you've dealt with tension a lot.
- I think actually when they sell wires or strings they'll
- probably tell you the tension that that wire or string can
- support, which is important if you're going to build a bridge
- or a swing or something.
- So tension is something that should be hopefully, a little
- bit intuitive to you.
- So let's, with that fairly simple example done, let's
- create a slightly more complicated example.
- So let's take the same weight.
- Instead of making the ceiling here, let's
- add two more strings.
- Let's add this green string.
- Green string there.
- And it's attached to the ceiling up here.
- That's the ceiling now.
- And let's see.
- This is the wall.
- And let's say there's another string right here
- attached to the wall.
- So my question to you is, what is the tension in these two
- strings? So let's call this T1 and T2.
- Well like the first problem, this point right here, this
- red point, is stationary.
- It's not accelerating in either the left/right
- directions and it's not accelerating in the up/down
- direction. So we know that the net forces in both the x and y
- dimensions must be 0.
- My second question to you is, what is
- going to be the offset?
- Because we know already that at this point right here,
- there's going to be a downward force, which is the force of
- gravity again.
- The weight of this whole thing.
- We can assume that the wires have no weight for simplicity.
- So we know that there's going to be a downward force here, - this is the force of gravity, right? - The whole weight of this entire object of weight plus - wire is pulling down. - So what is going to be the upward force here? - Well let's look at each of the wires. - This second wire, T2, or we could call it w2, I guess. - The second wire is just pulling to the left. - It has no y components. - It's not lifting up at all. - So it's just pulling to the left. - So all of the upward lifting, all of that's going to occur - from this first wire, from T1. - So we know that the y component of T1, so let's - call-- so if we say that this vector here. - Let me do it in a different color. - Because I know when I draw these diagrams it starts to - get confusing. - Let me actually use the line tool. - So I have this. - Let me make a thicker line. - So we have this vector here, which is T1. - And we would need to figure out what that is. - And then we have the other vector, which is its y - component, and I'll draw that like here. - This is its y component. - We could call this T1 sub y. - And then of course, it has an x component too, and I'll do - that in-- let's see. - I'll do that in red. - Once again, this is just breaking up a force into its - component vectors like we've-- a vector force into its x and - y components like we've been doing in the last several - problems. And these are just trigonometry problems, right? - We could actually now, visually see that this is T - sub 1 x and this is T sub 1 sub y. - Oh, and I forgot to give you an important property of this - problem that you needed to know before solving it. - Is that the angle that the first wire forms with the - ceiling, this is 30 degrees. - So if that is 30 degrees, we also know that this is a - parallel line to this. - So if this is 30 degrees, this is also - going to be 30 degrees. - So this angle right here is also going to be 30 degrees. - And that's from our-- you know, we know about parallel - lines and alternate interior angles. - We could have done it the other way. - We could have said that if this angle is 30 degrees, this - angle is 60 degrees. - This is a right angle, so this is also 30. - But that's just review of geometry - that you already know. - But anyway, we know that this angle is 30 degrees, so what's - its y component? - Well the y component, let's see. - What involves the hypotenuse and the opposite side? - Let me write soh cah toa at the top because this is really - just trigonometry. - soh cah toa in blood red. - So what involves the opposite and the hypotenuse? - So opposite over hypotenuse. - So that we know the sine-- let me switch to the sine of 30 - degrees is equal to T1 sub y over the tension in the string - going in this direction. - So if we solve for T1 sub y we get T1 sine of 30 degrees is - equal to T1 sub y. - And what did we just say before we kind of - dived into the math? - We said all of the lifting on this point is being done by - the y component of T1. - Because T2 is not doing any lifting up or down, it's only - pulling to the left. - So the entire component that's keeping this object up, - keeping it from falling is the y component of - this tension vector. - So that has to equal the force of gravity pulling down. - This has to equal the force of gravity. - That has to equal this or this point. - So that's 100 Newtons. - And I really want to hit this point home because it might be - a little confusing to you. - We just said, this point is stationery. - It's not moving up or down. 
- It's not accelerating up or down.
- And so we know that there's a downward force of 100 Newtons,
- so there must be an upward force that's being provided by
- these two wires.
- This wire is providing no upward force.
- So all of the upward force must be the y component or the
- upward component of this force vector on the first wire.
- So given that, we can now solve for the tension in this
- first wire because we have T1-- what's sine of 30?
- Sine of 30 degrees, in case you haven't memorized it, sine
- of 30 degrees is 1/2.
- So T1 times 1/2 is equal to 100 Newtons.
- Divide both sides by 1/2 and you get T1 is
- equal to 200 Newtons.
- So now we've got to figure out what the tension in this
- second wire is.
- And we also, there's another clue here.
- This point isn't moving left or right, it's stationary.
- So we know that whatever the tension in this wire must be,
- it must be being offset by a tension or some other force in
- the opposite direction.
- And that force in the opposite direction is the x component
- of the first wire's tension.
- So it's this.
- So T2 is equal to the x component of the
- first wire's tension.
- And what's the x component?
- Well, it's going to be the tension in the first wire, 200
- Newtons times the cosine of 30 degrees.
- It's adjacent over hypotenuse.
- And that's square root of 3 over 2.
- So it's 200 times the square root of 3 over 2, which equals
- 100 square root of 3.
- So the tension in this wire is 100 square root of 3, which
- completely offsets to the left and the x component of this
- wire is 100 square root of 3 Newtons to the right.
- Hopefully I didn't confuse you.
- See you in the next video.
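Reworking the video's numbers in a few lines of R (my own sketch of the same calculation): the y component of T1 must balance the 100 N weight, and the x component of T1 must balance T2.

weight <- 100                # Newtons, the hanging weight
angle  <- 30 * pi / 180      # wire angle with the ceiling, in radians
T1 <- weight / sin(angle)    # 200 N, since sin(30 degrees) = 1/2
T2 <- T1 * cos(angle)        # 100 * sqrt(3), about 173.2 N
c(T1 = T1, T2 = T2)

These match the values worked out in the transcript: 200 Newtons in the angled wire and 100 square root of 3 Newtons in the horizontal wire.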
http://www.khanacademy.org/science/physics/forces-newtons-laws/tension-tutorial/v/introduction-to-tension
Length, width, height, depth

- People often ask about the "correct" use of length, width, height, and depth.
- In the most common contexts in elementary school, mathematics does not have a formal definition for these terms, nor does it have rules about "proper" use. But natural usage does follow some conventions. (See below.)
- In some contexts, two of these terms -- length and height -- do also have a specific mathematical meaning. Length, when referring to measurement in one dimension (as in the length of a line segment or a piece of string), has a specialized meaning. So does height when it is used in conjunction with base. (See those articles.)

Mathematics uses the term length to name the measure of one-dimensional objects like these. Though we might straighten out a piece of string to measure its length against a ruler, the length of the string is the same even when it is curled or folded. (For more about length and distance see either of those topics.) We call the measure of a two-dimensional object its area. We may also measure the length of the boundary of the two-dimensional object -- the line segments or curves that enclose it. The length of the entire boundary is called the perimeter or circumference.

Question: Rectangles may be drawn in various sizes, shapes, and positions. Do we label the two dimensions length and width; or do we use width and height; or even length and height? Is there a best way?

Answer: There is more than one correct way! Any pair of these words can be used, as long as your words are used sensibly.

Length: If you use the word length, it should certainly be for the longest sides of the rectangle. Think of how you would describe the distance along a road: it is the long distance, the length of the road. (The words along, long, and length are all related.) The distance across the road tells how wide the road is from one side to the other. That is the width of the road. (The words wide and width are related, too.)

Height: If the rectangle is drawn with horizontal and vertical sides, people often use the word height to describe how high (how tall) the rectangle is. It is then perfectly correct to describe how wide the rectangle is from side to side by using the word width.

As for naming the dimensions of a three-dimensional figure, the only rule is to be sensible and clear. When the figure is "level," height (if you use the word) refers consistently to how tall the figure is, even if that dimension is greatest; length (if you use the word) refers to the longer of the other two dimensions. But you may also refer to the other dimensions as width and depth (and these are pretty much interchangeable, depending on what "seems" wide or deep about the figure). When the figure is not "level," people cannot know what is meant by width, depth, or height without being told (or without labels), although length is generally still assumed to mean the direction in which the figure is "longest."

About the words

Length, width, height, and depth are nouns derived from the adjectives long, wide, high, and deep, and follow a common English pattern that involves a vowel change (often to a shorter vowel) and the addition of th. (The lone t in height is modern. Obsolete forms include heighth and highth, and it is still common to hear people pronounce it that way.) Other English adjective-noun pairs are related in this way, too: e.g., hale as in "hale and hearty" and health (but hale, except in that expression, has now been replaced almost totally with "healthy").
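As a small, hypothetical illustration of these naming conventions (not part of the original article), here is how the perimeter and area of a rectangle might be computed in R, calling the longer side the length and the shorter side the width:

len <- 6                        # length: the longer side of the rectangle
wid <- 4                        # width: the shorter side, measured across
perimeter <- 2 * (len + wid)    # total length of the boundary: 20
area <- len * wid               # measure of the enclosed surface, in square units: 24
perimeter; area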
http://thinkmath.edc.org/index.php/Length,_width,_height,_depth
Ratio And Proportion Activities DOC Proportion Activity (One Foot Tale) Class: Algebra I. Summary: In this lesson, students apply proportions to examine real life questions. ... Write a ratio using your height in inches: 12” : _____ (actual height) A ratio is a comparison of two quantities that tells the scale between them. Ratios can be expressed as quotients, fractions, decimals, percents, or in the form of a:b. Here are some examples: The ratio of girls to boys on the swim team is 2:3 or . Number / Rate Ratio Proportion / Video Interactive / Print Activity. Name: ( adding or subtracting multiplying or dividing. Title: Rate Ratio Proportion PRINT ACTIVITY Author: Alberta Learning Last modified by: Mike.Olsson Created Date: This can be done playing concentration with the words and definitions or similar activities. (D) Ratio, Proportion, Scale: What is the connection? Using the FWL (Fifty Words or Less) strategy (attachment 8) the students will explain the connection between ratio, proportion, and scale. Activities (P/E ratio) Definition. The P/E ratio of a stock is the price of a company's stock divided by the company's earnings per share. ... Tutorial (RATIO AND PROPORTION) Author: 1234 Last modified by: vtc Created Date: 9/15/2003 1:39:00 AM Group activities. Question & Answer during the session.. Learner engagement during session. Worksheet Linked Functional Skills: ... Ratio, Proportion and Scale (Sample Lesson Plan) Functional Skills: Mathematics Level 2. Scale Card 2. Ratio and Proportion Chapter 10 in the Impact Text. Home Activities. Title: Ratio and Proportion Chapter 10 in the Impact Text Author: DOE Last modified by: DOE Created Date: 11/15/2012 3:03:00 PM Company: DOE Other titles: Ratio (Proportion), Percent (Share), and Rate. 1) Ratio and Proportion. ... When you examine “GDP Composition by Economic Activities in Major Countries (2003)” in page 18, how could you describe the main character of US GDP composition? A: To know ratio and proportion. References to the Framework: Year 5 - Numbers and the number system, Ratio and Proportion, p26 – To solve simple problems involving ratio and proportion. Activities: Organisation. Whole class for mental starter and teacher input; Unit 10 Ratio, proportion, ... 6 Oral and Mental Main Teaching Plenary Objectives and Vocabulary Teaching Activities Objectives and Vocabulary Teaching Activities Teaching Activities/ Focus Questions Find pairs of numbers with a sum of 100; ... Ratio for one serving, (i.e. if the recipe uses 1 cup of sugar, and the recipe serves 8, the ratio for one serving equals 1/8 c. sugar). Proportion used to increase recipe to 30 servings. 1/8 servings=x/30 servings. Show the work to solve proportion. Ratios and Proportion Online activities 3.25. ... a subset : set ratio of 4:9 can be expressed equivalently as 4/9 = 0.‾ 4 ˜ 44.44%) Balance the blobs 5.0 understand ratio as both set: set comparison (for example, number of boys : ... RATIO/PROPORTION WORD PROBLEMS A . ratio/rate. is the comparison of two quantities. Examples: The ratio of 12 miles to 18 miles = 12 mi / 18 mi = 2 mi / 3 mi. The rate of $22.50 for 3 hours = $22.50 / 3 hrs. NOTE: 1 ... If we’re interested in the proportion of all M&Ms© that are orange, what are the parameter,, and the statistic,? What is your value of ? p = proportion of all M&Ms that are orange = proportion of M&Ms in my sample of size n that are orange = x/n. Write the proportion. 8 = 192 . 3 n. 2. Write the cross products 8 * n = 192 * 3. 3. Multiply 8n = 576. 4. Undo ... 
the male to female ratio is 6:6. If there are 160 players in the league, how many are female? 22. RATE/RATIO/PROPORTION /% UNIT. Day 1: Monday. Objective: Students will be able to find the unit rate in various situations. Warm-Up: (Review from last week) Put on board or overhead. Express each decimal as a fraction in simplest form:.60 2) 1.25 3) .35. Students complete Independent Practice activities that serve as a summative assessment, since instructional feedback is not provided at this level. ... Ratio and Proportion Level 1- Level 6 (Individually) http://www.thinkingblocks.com/TB_Ratio/tb_ratio1.html . Fractions, decimals and percentages, ratio and proportion. Year 6 Autumn term Unit ... 6 Oral and Mental Main Teaching Plenary Objectives and Vocabulary Teaching Activities Objectives and Vocabulary Teaching Activities Teaching Activities/ Focus Questions Understand the ratio concept and use ratio reasoning to solve related problems. ... John A. Van de Walle’s Developing Concepts of Ratio and Proportion gives the instructor activities for the development of proportional reasoning in the student. The activities which follow include a good introduction to graphing as well as a great application of ratio and proportion. Directions: What’s the Story, What’s the Math . Proportion. Ratio. Similarity. Generic. Activities: Visit Little Studio Lincoln Room. View Standing Lincoln Exhibit. Examine Resin cast of Volk’s life mask of Lincoln. ... Additional Activities: Visit Atrium Gallery. View Bust Cast from Standing Lincoln Statue in 1910. Activities: Divide the class into small groups. Have the students create in Geogebra create Fibonacci Rectangles and the Shell Spirals. ... which is the Golden ratio, which has many applications in the human body, architecture, and nature. Activities: The following exercises meet the Gateway Standards for Algebra I – 3.0 (Patterns, Functions and Algebraic Thinking) ... The exterior of the Parthenon likewise fits into the golden proportion so that the ratio of the entablature ... Concept Development Activities . 7- 1 Ratio and proportion: Restless rectangles . activity require students to compare and contrast rectangles of the same shapes but different sizes and make a discovery of their lengths and width to discover the properties of similar rectangles. Definitions of ratio, proportion, extremes, means and cross products Write and simplify ratios The difference between a ratio and a proportion ... Learning Activities ... Once completed, students should calculate the ratio of the length to the width by dividing. In both cases, students should calculate a ratio of L :W approximately equal to 1.6 if rounded to the nearest tenth. Ratio, Proportion, Data Handling and Problem Solving. Five Daily Lessons. Unit ... Year Group 5/6 Oral and Mental Main Teaching Plenary Objectives and Vocabulary Teaching Activities Objectives and Vocabulary Teaching Activities Teaching Activities / Focus Questions Find pairs of numbers ... Apply the concept of ratio, proportion, and similarity* in problem-solving situations*. 4.5.a. ... Planning Daily Lesson Activities that Incorporate the MYP: (AOIs, LP, rigor, holistic learning, communication, internationalism, etc) Activities 1) Use guided practice to refresh the concept of ratio and proportion, and the process for solving proportions for a missing value. 2) ... ... Gulliver's Travels Swift 7-12 Ratio, proportion, measurement Webpage Holes Sachar 6-8 Ratio, proportion, data collection , percent ... 
Activities, Stories, Puzzles, and Games Gonzales, Mitchell & Stone Mathematical activities, puzzles, stories & games from history Moja ... Apply knowledge of ratio and proportion to solve relationships between similar geometric figures. ... Digital Cameras Activities– Heather Sparks. Literature: If you Hopped Like a Frog and Lesson. Materials: Jim and the Bean Stalk Book. They calculate the ratio of circumference to diameter for each object in an attempt ... The interactive Paper Pool game provides an opportunity for students to develop their understanding of ratio, proportion, ... The three activities in this investigation center on situations involving rational ... LessonTitle: Proportion Activities (One Foot Tale, ... Application of Ratio and Proportion Vocabulary Focus. Proportion Materials . A literature book such as Gulliver’s Travels or If You Hopped Like a Frog, Catalogues, Measuring tools. Assess ratio. is a comparison of two numbers. How much sugar do you put in your favorite cookie recipe? How much flour? ... What is the ratio of browns to rainbows? Proportion. A . proportion. is two equal ratios. Look at the first example on page 1 again. Algebra: Ratio & proportion, formulas (Statistics & Prob.: ... questions, and student activities associated with the delivery of the lesson. Nothing should be left to the imagination. Other teachers should be able to reproduce this exact lesson using this lesson plan. Math Forum: Clearinghouse of ratio and proportion activities for 6th grade. http://mathforum.org/mathtools/cell/m6,8.9,ALL,ALL/ Middle School Portal: Here you will find games, problems, ... What is a ratio and how do you use it to solve problems? ... Read and write a proportion. Determining how to solve proportions by cross multiplying. ... Activities. Day 1. Jumping Jacks: the test of endurance: • Is the approach to ratio, proportion and percentages compatible with work in mathematics? ... Athletic activities use measurement of height, distance and time, and data-logging devices to quantify, explore, and improve performance. This material can also be used in everyday problem solving that stems from activities such as baking. Goals and Standards. ... I also expect students to be familiar with the word ratio, ... Write the proportion you find from number 1 in 4 different ways. (Use properties 2-4) (Write and solve a proportion using the scale as one ratio.) ... CDGOALS\BK 6-8\Chp3\AA\Activities\Making a Scale Drawing (n = 1( x 4; n = 14 ft. 2 cm n cm. 25 km 80 km. Title: GRADE SIX-CONTENT STANDARD #4 Author: Compaq Last modified by: Unit 5 Fractions, decimals, percentages, ratio and proportion Term: Summer Year Group: 4/5 Oral and Mental Main Teaching Plenary Objectives and Vocabulary Teaching Activities Objectives and Vocabulary Teaching Activities Teaching Activities/ Focus Questions Y4 ... ... Algebra, 5701, Trade and Industrial, Measurement, Circle, Area, Estimation, Ratio, Proportion, Scale. June, 2005 Career and Technical Education Sample Lesson Plan Format. Title: Constructing a Holiday Wreath. ... Activities: Sell stock. Purchase supplies. Identify the audience in writing ... ACTIVITIES: ICT: RESOURCES: MATHSWATCH: Clip 101 Estimating Answers. Clip 160 Upper & Lower Bounds Difficult Questions. B-A. ... Solve simple ratio and proportion problems such as finding the ratio of teachers to students in a school. Ratio Method of Comparison Significance ... cash plus cash equivalents plus cash flow from operating activities Average Collection Period. ... 
Total Assets Shows proportion of all assets that are financed with debt Long Term Debt to Total Capitalization Long Term Debt. Ratio, proportion, fraction, equivalent, lowest terms, simplify, percentage, ... ACTIVITIES ICT RESOURCES How to get more pupils from L3 to L5 in mathematics part 2: Learning from misconceptions: Fractions and Decimals Resource sheet A5. formulate how and when a ratio is used . write appropriate labels . apply knowledge of ratios to the project (ie. Holocaust Ratio Project) identify basic rates . differentiate between rates and ratios. ... Proportion. Differentiated Learning Activities. KS3 Framework reference Targeted activities for the introduction or plenary part of lesson Activity Ref: Simplify or transform linear expressions by collecting like terms; multiply a single term over a bracket. ... Ratio & Proportion Date: 2 HRS Ratio and proportion; e. Scale factor; f. Dilations; g. Real-life examples of similarity and congruency; h. Angle measures; j. ... Activities exploring similarity and congruence in three-dimensional figures and analyze the relationship of the area, ... Ratio and proportion. Topic/Sub-topic: Proportions of the human body. Foundational objective(s): ... contribute positively in group learning activities, and treat with respect themselves, others, and the learning materials used (PSVS) UEN- Lesson “Ratio, Rate, and Proportion” Activities 1 and 2 from . http://mypages.iit.edu/~smart/dvorber/lesson3.htm. Sample Formative Assessment Tasks Skill-based task. Identify (given examples) the difference between a ratio and a rate. Problem Task.
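Several of the worksheet excerpts above solve a proportion by cross-multiplying (for example, 8/3 = 192/n, so 8 × n = 3 × 192 = 576 and n = 72). A minimal R sketch of that procedure, with an illustrative function name, might look like this:

# solve a/b = c/x for x by cross-multiplying: a * x = b * c
solve_proportion <- function(a, b, c) {
  (b * c) / a
}
solve_proportion(8, 3, 192)   # returns 72, since 8/3 = 192/72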
http://freepdfdb.com/doc/ratio-and-proportion-activities
An angle is acute if it is less than 90°. A triangle with three internal angles, each less than 90°, is called an acute-angled triangle. The amplitude of a wave is half the vertical distance from a crest to a trough. An angle is a measure of turning. Angles are measured in degrees. The symbol for an angle is ∠. Under a rotation about a centre, C, an original point, A, will produce the image, A', with angle ACA' being the angle of rotation. An arc is the part of a circle or curve lying between two points on the curve. Area is a measure of surface size and is calculated in square units (e.g. cm², m², sq. ft.). ASA stands for angle-side-angle and refers to the known details of a possible triangle. A back-bearing provides a return direction for a given bearing, e.g. 080° is the back-bearing of 260°. A bearing is the direction (as an angle measured clockwise from north) that a point lies from a given location. It is usually given as a three-figure bearing, e.g. 087°. To bisect something is to cut it in half. In mathematics, lines and angles are often bisected. In a Cartesian coordinate system, two axes that intersect at right angles are used to identify points in a plane. Conventionally, the horizontal axis is labelled as the x axis and the vertical axis as the y axis. This creates a numbered grid on which points are defined by an ordered pair of numbers (x, y). The system is named after the French scholar, René Descartes. Under a rotation, an original point, A, and its image, A', are always the same distance away from the centre of rotation, C. The angle ACA' is the angle of rotation. A chord is a straight line segment joining two points of a circle (or any curve). The circumference is the perimeter (or length of perimeter) of a circle. A circle is circumscribed about a shape if each vertex of the shape lies on the circumference of the circle. The shape is said to be inscribed in the circle. A closed curve is one that is continuous and that begins and ends in the same place. A numerical value which multiplies a term of an expression is called a coefficient. For example, the coefficient of x in the expression 4x is 4. A factor that is shared by two or more different numbers is called a common factor. If two angles sum to 90° then they are complementary angles. A polygon is concave if one or more of its interior angles is greater than 180°. A cone is a solid with a circular base. All points on the circumference of the base are joined to a vertex in a different plane than the base. Two shapes are congruent if their lengths and angles are identical (i.e. if one can 'fit' exactly over the other). Mirror images, for example, are congruent. A constant is a quantity (such as a number or symbol) that has a fixed value, in contrast to a variable. The difference between consecutive terms of a linear sequence is called the constant difference. A polygon is convex if all of its interior angles are less than 180°. A pair of numbers that determine the location of a point on the x-y plane are called coordinates. Any set of points, lines, curves and/or shapes are coplanar if they exist in the same plane. For any right-angled triangle, the cosine ratio for an angle of x° is: cos x° = adjacent ÷ hypotenuse. A counting number is a positive whole number greater than zero: 1, 2, 3, ... Counting numbers are also called natural numbers. Algebraically, a number multiplied by itself three times is called a cube. The cube root of a number yields that number when multiplied by itself three times, that is, when it is cubed. For example, 2 is the cube root of 8. A cuboid is a solid with six rectangular faces.
A polygon is cyclic if all of its vertices lie on a circle. A cylinder is a solid with a circular base and a parallel circular top, in which every parallel slice in between is also a congruent circle. d.p. is the standard abbreviation for decimal places, the number of digits that appear after the decimal point in a decimal number. A decagon is a ten-sided polygon. A decimal number is a number that includes tenths, hundredths, thousandths and so on, represented by digits after a decimal point. Decimal places are the digits representing tenths, hundredths, thousandths and so on that appear after the decimal point in a decimal number. In a fraction, the denominator is the number written below the dividing line. A diameter is a straight line segment joining two points on the circumference of a circle and passing through its centre (or the length of this line). It is equal to twice the radius length. A digit is one of the symbols 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. Numbers are made up of one or more digits. For example, the number 72 has 2 digits and the number 1.807 has 4 digits. A dodecagon is a 12-sided polygon. An enlargement is a type of transformation in which lengths are multiplied whilst directions and angles are preserved. The transformation is specified by a scale factor of enlargement and a centre of enlargement. For every point in the original shape, the transformation multiplies the distance between the point and the centre of enlargement by the scale factor. An equation is a mathematical statement that two expressions have equal value. The expressions are linked with the 'equals' symbol (=). Items that are at an equal distance from an identified point, line or plane are said to be equidistant from it. An equilateral triangle has three sides of the same length and hence three angles of 60°. A fraction with the same value as another fraction is called an equivalent fraction. To evaluate a numerical or an algebraic expression means to find its value. The number written as a superscript above another number is called the exponent. It indicates the number of times the first number is to be multiplied by itself. This is also known as the index or the power of the first number. In mathematics, an expression is a combination of known symbols including variables, constants and/or numbers. Algebraic equations are made up of expressions. When the side of a convex polygon is produced (lengthened), the exterior angle is the angle between this line and an adjacent side. A factor is any of two or more numbers (or other quantities) that are multiplied together. For example, 2 and 5 are factors of 10. A formula is a general equation that shows the algebraic connection between related quantities. The gradient is the mathematical measure of slope. HCF is the standard abbreviation for the highest common factor, the factor of highest value that is shared by two or more different numbers. A heptagon is a seven-sided polygon. A hexagon is a six-sided polygon. The factor of highest value that is shared by two or more different numbers is called the highest common factor. This is often abbreviated to HCF. A horizontal line runs parallel to the Earth's surface and at right angles to a vertical line. The horizontal axis of a graph runs from left to right. The hypotenuse of a right-angled triangle is the side opposite the right angle. An image is a shape that is the result of a transformation on the coordinate plane. The number written as a superscript above another number is called the index.
It indicates the number of times the first number is to be multiplied by itself. This is also known as the exponent or the power of the first number. Index notation is a shorthand way of writing a number repeatedly multiplied by itself. For example, 3 × 3 × 3 × 3 can be written as 3⁴ (in words: 3 to the power 4). A shape is inscribed within a circle if each vertex of the shape lies on the circumference of the circle. The circle is then said to be circumscribed about the shape. An integer is any of the natural numbers plus zero and the negative numbers: ..., -3, -2, -1, 0, 1, 2, 3, ... In the Cartesian coordinate system, an intercept is the positive or negative distance from the origin to the point where a line or curve cuts a given axis. An interior angle is the angle between adjacent sides at a vertex of a polygon. To intersect is to have a common point or points. For example, two lines intersect at a point and two planes intersect at a straight line. The point at which two or more lines intersect is called a vertex. Any number that cannot be expressed as the ratio of two integers is an irrational number. For example, the square root of 2 and π are both irrational numbers. An isosceles triangle is one that has two sides of equal length and hence two angles of equal size. The latitude measurement of a point on the Earth's surface is the angular distance north or south of the Equator. LCM is the standard abbreviation for the least common multiple, the smallest-value multiple that is shared by two different numbers. The smallest-value multiple that is shared by two different numbers is called the least common multiple. This is often abbreviated to LCM. A line segment is the set of points on the straight line between any two points, including the two endpoints themselves. In a linear sequence, consecutive terms of the sequence are generated by adding or subtracting the same number each time. A locus is a set of points that all satisfy a particular condition. For instance, the two-dimensional locus of points equidistant from two points A and B is the perpendicular bisector of the line segment AB. The longitude measurement of a point on the Earth's surface is the angular distance measured east or west from the zero line at Greenwich, England. The lowest common multiple of the denominators of two or more fractions is called the lowest common denominator. In the context of vectors, magnitude means the length of a vector. The major arc is the bigger of the two arcs formed by two points on a circle. The midpoint of a line is the point halfway along it. The minor arc is the smaller of the two arcs formed by two points on a circle. A multiple of an integer is the product of that number and another integer. An n-gon is an n-sided polygon. A natural number is a positive whole number greater than zero: 1, 2, 3, ... Natural numbers are also called counting numbers. Any number that is less than zero is a negative number. A net is a flat pattern of polygons which, when folded up, creates a polyhedron (a many-sided solid). A nonagon is a nine-sided polygon. Mathematical notation is a convention for writing down ideas in mathematics. Some examples are fraction notation, vector notation and index notation. A line on which numbers are represented graphically is called a number line. In a fraction, the numerator is the number written above the dividing line. An angle is obtuse if it is over 90° but less than 180°. A triangle with one internal angle of between 90° and 180° is called an obtuse-angled triangle. An octagon is an eight-sided polygon.
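Since the glossary defines both the highest common factor and the least common multiple, here is a small illustrative R sketch (not part of the original glossary) that computes them using Euclid's algorithm:

hcf <- function(a, b) {            # highest common factor by Euclid's algorithm
  while (b != 0) {
    r <- a %% b                    # remainder when a is divided by b
    a <- b
    b <- r
  }
  a
}
lcm <- function(a, b) (a / hcf(a, b)) * b   # least common multiple via the HCF
hcf(12, 18)   # 6
lcm(12, 18)   # 36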
The origin is the point of intersection of the x axis and the y axis. It has the coordinates (0,0). Two lines, curves or planes are said to be parallel if the perpendicular distance between them is always the same. A parallelogram is a quadrilateral with two sets of parallel sides. A pentagon is a five-sided polygon. A perfect square is a natural number which is equal to the square of another natural number. For example, 4, 9 and 16 are perfect squares as they are equal to 2, 3 and 4 squared respectively. The perimeter of a shape is the total length of its outside edge(s). A function f has a period, p, if f(x + p) = f(x) for all values of x. So the period of the cosine function is 360° since cos(x + 360°) = cos x. A periodic function is one for which f(x + p) = f(x) for all values of x (for some particular value of p). For instance, the sine function is a periodic function since sin(x + 360°) = sin x for all values of x. Two lines or planes are perpendicular if they are at right angles to one another. A perpendicular bisector is a line that cuts in half a given line segment and forms a 90° angle with it. The irrational number pi represents the ratio of the lengths of the circumference to the diameter of a circle. It has the approximate value 3.14159265 and is always written using the symbol π. A planar figure is one that exists in a single plane. A plane has position, length and width but no height. It is an object with two dimensions. A point has no properties except position. It is an object with zero dimensions. Points in the x-y plane can be specified using x and y coordinates. A polygon is a closed, planar figure bounded by straight line segments. A number is raised to a particular power when it is multiplied by itself that number of times. Powers are written as a superscript above the number that is multiplied by itself. For example, 3⁴ means 3 to the power of 4, or 3 × 3 × 3 × 3. Any factor of a number that is a prime number is called a prime factor. A prime number is a positive integer that has exactly two factors: itself and 1. A prism is a solid whose ends are two parallel congruent polygons. Corresponding points in each shape are joined by a straight line. When a line segment is produced it is extended in the same direction. A product is the result of multiplying one quantity by another. A proof is an argument consisting of a sequence of justified conclusions that is used to universally demonstrate the validity of a statement. A pyramid has a polygon as a base and one other vertex in another plane. This vertex is joined to each of the polygon's edges by a triangle. Pythagoras' theorem states that, for any right-angled triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the other two sides. A Pythagorean triple is a set of three positive integers where the square of the largest is equal to the sum of the squares of the other two numbers. Such a set of numbers conforms to Pythagoras' theorem and could therefore be the lengths of the sides of a right-angled triangle. An example of a Pythagorean triple is (3, 4, 5). A plane is divided into quarters called quadrants by the axes in a Cartesian coordinate system. The quadrants are numbered first, second, third and fourth in an anticlockwise direction starting in the upper right quadrant. A quadratic equation has the general form ax² + bx + c = 0, where a, b and c are constants. A quadrilateral is a polygon with four sides. A radius is a straight line segment joining the centre of a circle with a point on its circumference (or the length of this line). A ratio compares two quantities. The ratio of a to b is often written a:b.
For example, if the ratio of the width to the length of a swimming pool is 1:3, the length is three times the width. Any number that can be written as the exact ratio of two integers is a rational number. For example, -5, 12 and 3/4 are all rational numbers. A ray is the set of all points lying on a given side of a certain point on a straight line. The reciprocal of a number is 1 divided by that number. For example, 1/3 is the reciprocal of 3. The product of any number and its reciprocal is always 1. A rectangle is a quadrilateral with four interior angles of 90°. A decimal number with an infinitely repeating digit or group of digits is called a recurring decimal. The repeating group is indicated by a dot above the first and last digit. For example, 3.125 written with dots over the 1 and the 5 means 3.125125125... A reflex angle is over 180° but less than 360°. A regular polygon has sides of equal length and interior angles of the same size. The amount left over when an integer is divided by another integer is called the remainder. If an integer is divided by one of its factors, then the remainder is zero. A right angle is an angle of 90°. A right pyramid has a regular polygon as its base and its vertex is directly over the centre of the base. A triangle with one internal angle of 90° is called a right-angled triangle. A rotation is a transformation specified by a centre and angle of rotation. To round a number is to express it to a required degree of accuracy. In general, a rule is a procedure for performing a process. In the context of sequences, a rule describes the sequence and can be used to generate or extend it. s.f. is the standard abbreviation for significant figures, the digits used to write a number to a given accuracy, rather than to denote place value. SAS stands for side-angle-side and refers to the known details of a possible triangle. In the context of vectors, a scalar is a quantity that has magnitude only. The scale factor is the ratio of distances between equivalent points on two geometrically similar shapes. A scalene triangle is one with no equal-length sides and therefore no equal-size angles. A sector is the region of a circle bounded by an arc and the two radii joining its end points to the centre. A segment is the region of a circle bounded by an arc and the chord joining its two end points. A sequence is a set of numbers in a given order. All the numbers in a sequence are generated by applying a rule. Significant figures are the digits used to write a number to a given accuracy, rather than to denote place value. Two shapes are similar if one is congruent to an enlargement of the other. All squares are similar, as are all circles. Simplifying a fraction means to rewrite it so that the numerator and denominator have as small a value as possible. For any right-angled triangle, the sine ratio for an angle of x° is: sin x° = opposite ÷ hypotenuse. Three-dimensional shapes are often referred to as 'solids'. A sphere is a ball-shaped solid whose points are all equidistant from a central point. A square number is a natural number which is equal to the square of another natural number. For example, 4, 9 and 16 are square numbers as they are equal to 2, 3 and 4 squared respectively. A square root is a number whose square is equal to the given number. For example, 2 is the square root of 4. SSA stands for side-side-angle and refers to the known details of a possible triangle. SSS stands for side-side-side and refers to the known details of a possible triangle. A straight line is a set of points related by an equation of the form y = ax + c. It has length and position, but no breadth and is therefore one-dimensional.
In algebra, to substitute means to replace a given symbol in an expression by its numerical value. For example, substituting 5 for n in the expression x = 3n gives x = 3 × 5 = 15. An angle is subtended by a line, A, if the lines forming the angle extend from the endpoints of line A. If two angles sum to 180° then they are supplementary angles. A symbol is a letter or other mark that represents a quantity. The symbol x is often used to denote a variable quantity, while other letters are used to represent constant numbers. A plane figure has symmetry if the effect of a reflection or rotation is to produce an identical-looking figure in the same position. For any right-angled triangle, the tangent ratio for an angle of x° is: tan x° = opposite ÷ adjacent. A tangent is a line that touches a circle or curve at only one point. Each of the numbers in a sequence is called a term. In the sequence 3, 6, 9,... 6 is the second term and 9 is the third term. A decimal number that has a finite number of digits is called a terminating decimal. All terminating decimals can be expressed as fractions in which the denominator, in its lowest terms, has no prime factors other than 2 and 5. A tetrahedron is a four-sided solid shape. A transformation on a shape is any operation which alters the appearance of the shape in a well defined manner. Geometric translation is a transformation on the coordinate plane which changes the position of a shape while retaining its lengths, angles, and orientation. A transversal line intersects two or more coplanar lines. A trapezium is a quadrilateral with one set of parallel sides and one set of non-parallel sides. A triangle is a three-sided polygon. The trigonometric ratios (also called the trigonometric functions) are, for any right-angled triangle and an angle of x°: sin x° = opposite ÷ hypotenuse, cos x° = adjacent ÷ hypotenuse and tan x° = opposite ÷ adjacent. A turning point on a curve is a point at which the gradient is 0 but the points either side have a non-zero gradient. So a quadratic curve always has a minimum or maximum point which is a turning point. An undecagon is an 11-sided polygon. A fraction in which the numerator is equal to 1 is called a unitary fraction. A variable is a non-specified number which can be represented by a letter. The letters x and y are commonly used to represent variables. A vector is a quantity that has magnitude and direction. A vertical line runs at a right angle to a horizontal line. The vertical axis of a graph runs from top to bottom of a page. The x-y plane is a two-dimensional grid on which points and curves can be plotted. The x axis is normally the horizontal axis and the y axis the vertical one.
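To make a few of these definitions concrete, here is a short R sketch (not part of the original glossary) using the 3-4-5 Pythagorean triple mentioned above to illustrate Pythagoras' theorem and the sine, cosine and tangent ratios:

a <- 3; b <- 4; h <- 5        # legs and hypotenuse of a right-angled triangle
a^2 + b^2 == h^2              # TRUE: Pythagoras' theorem holds, so (3, 4, 5) is a Pythagorean triple
x <- atan(a / b)              # the angle (in radians) whose opposite side is a and adjacent side is b
sin(x)                        # opposite / hypotenuse = 3/5 = 0.6
cos(x)                        # adjacent / hypotenuse = 4/5 = 0.8
tan(x)                        # opposite / adjacent   = 3/4 = 0.75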
http://www.absorblearning.com/mathematics/glossary.html
To receive more information about up-to-date research on micronutrients, sign up for the free, semi-annual LPI Research Newsletter here. The immune system protects the body against infection and disease. It is a complex and integrated system of cells, tissues, and organs that have specialized roles in defending against foreign substances and pathogenic microorganisms, including bacteria, viruses, and fungi. The immune system also functions to guard against the development of cancer. For these actions, the immune system must recognize foreign invaders as well as abnormal cells and distinguish them from self (1). However, the immune system is a double-edged sword in that host tissues can be damaged in the process of combating and destroying invading pathogens. A key component of the immediate immune response is inflammation, which can cause damage to host tissues, although the damage is usually not significant (2). Inflammation is discussed in a separate article; this article focuses on nutrition and immunity. Cells of the immune system originate in the bone marrow and circulate to peripheral tissues through the blood and lymph. Organs of the immune system include the thymus, spleen, and lymph nodes (3). T-lymphocytes develop in the thymus, which is located in the chest directly above the heart. The spleen, which is located in the upper abdomen, functions to coordinate secretion of antibodies into the blood and also removes old and damaged red blood cells from the circulation (4). Lymph nodes serve as local sentinel stations in tissues throughout the body, trapping antigens and infectious agents and promoting organized immune cell activation. The immune system is broadly divided into two major components: innate immunity and adaptive immunity. Innate immunity involves immediate, nonspecific responses to foreign invaders, while adaptive immunity requires more time to develop its complex, specific responses (1). Innate immunity is the first line of defense against foreign substances and pathogenic microorganisms. It is an immediate, nonspecific defense that does not involve immunologic memory of pathogens. Because of the lack of specificity, the actions of the innate immune system can result in damage to the body’s tissues (5). A lack of immunologic memory means that the same response is mounted regardless of how often a specific antigen is encountered (6). The innate immune system is comprised of various anatomical barriers to infection, including physical barriers (e.g., the skin), chemical barriers (e.g., acidity of stomach secretions), and biological barriers (e.g., normal microflora of the gastrointestinal tract) (1). In addition to anatomical barriers, the innate immune system is comprised of soluble factors and phagocytic cells that form the first line of defense against pathogens. Soluble factors include the complement system, acute phase reactant proteins, and messenger proteins called cytokines (6). The complement system, a biochemical network of more than 30 proteins in plasma and on cellular surfaces, is a key component of innate immunity. The complement system elicits responses that kill invading pathogens by direct lysis (cell rupture) or by promoting phagocytosis. Complement proteins also regulate inflammatory responses, which are an important part of innate immunity (7-9). Acute phase reactant proteins are a class of plasma proteins that are important in inflammation. 
Cytokines secreted by immune cells in the early stages of inflammation stimulate the synthesis of acute phase reactant proteins in the liver (10). Cytokines are chemical messengers that have important roles in regulating the immune response; some cytokines directly fight pathogens. For example, interferons have antiviral activity (6). These soluble factors are important in recruiting phagocytic cells to local areas of infection. Monocytes, macrophages, and neutrophils are key immune cells that engulf and digest invading microorganisms in the process called phagocytosis. These cells express pattern recognition receptors that identify pathogen-associated molecular patterns (PAMPs) that are unique to pathogenic microorganisms but conserved across several families of pathogens (see figure). For more information about the innate immune response, see the article on Inflammation. Adaptive immunity (also called acquired immunity), a second line of defense against pathogens, takes several days or weeks to fully develop. However, adaptive immunity is much more complex than innate immunity because it involves antigen-specific responses and immunologic “memory.” Exposure to a specific antigen on an invading pathogen stimulates production of immune cells that target the pathogen for destruction (1). Immunologic “memory” means that immune responses upon a second exposure to the same pathogen are faster and stronger because antigens are “remembered.” Primary mediators of the adaptive immune response are B lymphocytes (B cells) and T lymphocytes (T cells). B cells produce antibodies, which are specialized proteins that recognize and bind to foreign proteins or pathogens in order to neutralize them or mark them for destruction by macrophages. The response mediated by antibodies is called humoral immunity. In contrast, cell-mediated immunity is carried out by T cells, lymphocytes that develop in the thymus. Different subgroups of T cells have different roles in adaptive immunity. For instance, cytotoxic T cells (killer T cells) directly attack and kill infected cells, while helper T cells enhance the responses and thus aid in the function of other lymphocytes (5, 6). Regulatory T cells, sometimes called suppressor T cells, suppress immune responses (12). In addition to its vital role in innate immunity, the complement system modulates adaptive immune responses and is one example of the interplay between the innate and adaptive immune systems (7, 13). Components of both innate and adaptive immunity interact and work together to protect the body from infection and disease. Nutritional status can modulate the actions of the immune system; therefore, the sciences of nutrition and immunology are tightly linked. In fact, malnutrition is the most common cause of immunodeficiency in the world (14), and chronic malnutrition is a major risk factor for global morbidity and mortality (15). More than 800 million people are estimated to be undernourished, most in the developing world (16), but undernutrition is also a problem in industrialized nations, especially in hospitalized individuals and the elderly (17). Poor overall nutrition can lead to inadequate intake of energy and macronutrients, as well as deficiencies in certain micronutrients that are required for proper immune function. Such nutrient deficiencies can result in immunosuppression and dysregulation of immune responses. 
In particular, deficiencies in certain nutrients can impair phagocytic function in innate immunity and adversely affect several aspects of adaptive immunity, including cytokine production as well as antibody- and cell-mediated immunities (18, 19). Overnutrition, a form of malnutrition where nutrients, specifically macronutrients, are provided in excess of dietary requirements, also negatively impacts immune system functions (see Overnutrition and Obesity below). Impaired immune responses induced by malnutrition can increase one’s susceptibility to infection and illness. Infection and illness can, in turn, exacerbate states of malnutrition, for example, by reducing nutrient intake through diminished appetite, impairing nutrient absorption, increasing nutrient losses, or altering the body’s metabolism such that nutrient requirements are increased (19). Thus, states of malnutrition and infection can aggravate each other and lead to a vicious cycle (14). Protein-energy malnutrition (PEM; also sometimes called protein-calorie malnutrition) is a common nutritional problem that principally affects young children and the elderly (20). Clinical conditions of severe PEM are termed marasmus, kwashiorkor, or a hybrid of these two syndromes. Marasmus is a wasting disorder that is characterized by depletion of fat stores and muscle wasting. It results from a deficiency in both protein and calories (i.e., all nutrients). Individuals afflicted with marasmus appear emaciated and are grossly underweight and do not present with edema (21). In contrast, a hallmark of kwashiorkor is the presence of edema. Kwashiorkor is primarily caused by a deficiency in dietary protein, while overall caloric intake may be normal (21, 22). Both forms are more common in developing nations, but certain types of PEM are also present in various subgroups in industrialized nations, such as the elderly and individuals who are hospitalized (17). In the developed world, PEM more commonly occurs secondary to a chronic disease that interferes with nutrient metabolism, such as inflammatory bowel disease, chronic renal failure, or cancer (22). Regardless of the specific cause, PEM significantly increases susceptibility to infection by adversely affecting aspects of both innate immunity and adaptive immunity (15). With respect to innate immunity, PEM has been associated with reduced production of certain cytokines and several complement proteins, as well as impaired phagocyte function (20, 23, 24). Such malnutrition disorders can also compromise the integrity of mucosal barriers, increasing vulnerability to infections of the respiratory, gastrointestinal, and urinary tracts (21). With respect to adaptive immunity, PEM primarily affects cell-mediated aspects instead of components of humoral immunity. In particular, PEM leads to atrophy of the thymus, the organ that produces T cells, which reduces the number of circulating T cells and decreases the effectiveness of the memory response to antigens (21, 24). PEM also compromises functions of other lymphoid tissues, including the spleen and lymph nodes (20). While humoral immunity is affected to a lesser extent, antibody affinity and response is generally decreased in PEM (24). It is important to note that PEM usually occurs in combination with deficiencies in essential micronutrients, especially vitamin A, vitamin B6, folate, vitamin E, zinc, iron, copper, and selenium (21). 
Experimental studies have shown that several types of dietary lipids (fatty acids) can modulate the immune response (25). Fatty acids that have this role include the long-chain polyunsaturated fatty acids (PUFAs) of the omega-3 and omega-6 classes. PUFAs are fatty acids with more than one double bond between carbons. In all omega-3 fatty acids, the first double bond is located between the third and fourth carbon atom counting from the methyl end of the fatty acid (n-3). Similarly, the first double bond in all omega-6 fatty acids is located between the sixth and seventh carbon atom from the methyl end of the fatty acid (n-6) (26). Humans lack the ability to place a double bond at the n-3 or n-6 positions of a fatty acid; therefore, fatty acids of both classes are considered essential nutrients and must be derived from the diet (26). More information is available in the article on Essential fatty acids. Alpha-linolenic acid (ALA) is a nutritionally essential n-3 fatty acid, and linoleic acid (LA) is a nutritionally essential n-6 fatty acid; dietary intake recommendations for essential fatty acids are for ALA and LA. Other fatty acids in the n-3 and n-6 classes can be endogenously synthesized from ALA or LA (see the figure in a separate article on essential fatty acids). For instance the long-chain n-6 PUFA, arachidonic acid, can be synthesized from LA, and the long-chain n-3 PUFAs, eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), can be synthesized from ALA (26). However, synthesis of EPA and, especially, DHA may be insufficient under certain conditions, such as during pregnancy and lactation (27, 28). EPA and DHA, like other PUFAs, modulate cellular function, including immune and inflammatory responses (29). Long-chain PUFAs are incorporated into membrane phospholipids of immune cells, where they modulate cell signaling of immune and inflammatory responses, such as phagocytosis and T-cell signaling. They also modulate the production of eicosanoids and other lipid mediators (29, 30). Eicosanoids are 20-carbon PUFA derivatives that play key roles in inflammatory and immune responses. During an inflammatory response, long-chain PUFAs (e.g., arachidonic acid [AA] of the n-6 series and EPA of the n-3 series) in immune cell membranes can be metabolized by enzymes to form eicosanoids (e.g., prostaglandins, leukotrienes, and thromboxanes), which have varying effects on inflammation (29). Eicosanoids derived from AA can also regulate B- and T-cell functions. Resolvins are lipid mediators derived from EPA and DHA that appear to have anti-inflammatory properties (30). To a certain extent, the relative production of these lipid mediators can be altered by dietary and supplemental intake of lipids. In those who consume a typical Western diet, the amount of AA in immune cell membranes is much greater than the amount of EPA, which results in the formation of more eicosanoids derived from AA than EPA. However, increasing n-3 fatty acid intake dose-dependently increases the EPA content of immune cell membranes. The resulting effect would be increased production of eicosanoids derived from EPA and decreased production of eicosanoids derived from AA, leading to an overall anti-inflammatory effect (30, 31). While eicosanoids derived from EPA are less biologically active than AA-derived eicosanoids (32), supplementation with EPA and other n-3 PUFAs may nevertheless have utility in treating various inflammatory diseases. 
This is a currently active area of investigation; see the article on Essential fatty acids. While n-3 PUFA supplementation may benefit individuals with inflammatory or autoimmune diseases, high n-3 PUFA intakes could possibly impair host-defense mechanisms and increase vulnerability to infectious disease (for more information, see the article on Essential fatty acids) (25, 33). In addition to PUFAs, isomers of LA called conjugated linoleic acid (CLA) have been shown to modulate immune function, mainly in animal and in vitro studies (34). CLA is found naturally in meat and milk of ruminant animals, but it is also available as a dietary supplement that contains two isomers, cis-9,trans-11 CLA and trans-10,cis-12 CLA. One study in 28 men and women found that CLA supplementation (3 g/day of a 50:50 mixture of the two main CLA isomers) was associated with an increase in plasma levels of IgA and IgM (35), two classes of antibodies. CLA supplementation was also associated with a decrease in levels of two pro-inflammatory cytokines and an increase in levels of an anti-inflammatory cytokine (35). Similar effects on the immune response have been observed in some animal studies (36, 37); however, a few other human studies have not found beneficial effects of CLA on various measures of immune status and function (38-40). More research is needed to understand the effects of CLA on the human immune response. Further, lipids in general have a number of other roles in immunity besides being the precursors of eicosanoids and similar immune mediators. For instance, lipids are metabolized by immune cells to generate energy and are also important structural and functional components of cell membranes. Moreover, lipids can regulate gene expression through stimulation of membrane receptors or through modification of transcription factor activity. Further, lipids can covalently modify proteins, thereby affecting their function (30). Deficiencies in select micronutrients (vitamins and nutritionally-essential minerals) can adversely affect aspects of both innate and adaptive immunity, increasing vulnerability to infection and disease. Micronutrient inadequacies are quite common in the general U.S. population, but especially in the poor, the elderly, and those who are obese (see Overnutrition and Obesity below) (41, 42). According to data from the U.S. National Health and Nutrition Examination Survey (NHANES), 93% of the U.S. population do not meet the estimated average requirement (EAR) for vitamin E, 56% for magnesium, 44% for vitamin A, 31% for vitamin C, 14% for vitamin B6, and 12% for zinc (43). Moreover, vitamin D deficiency is a major problem in the U.S. and elsewhere; it has been estimated that 1 billion people in the world have either vitamin D deficiency or insufficiency (44). Because micronutrients play crucial roles in the development and expression of immune responses, selected micronutrient deficiencies can cause immunosuppression and thus increased susceptibility to infection and disease. The roles of several micronutrients in immune function are addressed below. Vitamin A and its metabolites play critical roles in both innate and adaptive immunity. In innate immunity, the skin and mucosal cells of the eye and respiratory, gastrointestinal, and genitourinary tracts function as a barrier against infections. Vitamin A helps to maintain the structural and functional integrity of these mucosal cells. 
Vitamin A is also important to the normal function of several types of immune cells important in the innate response, including natural killer (NK) cells, macrophages, and neutrophils. Moreover, vitamin A is needed for proper function of cells that mediate adaptive immunity, such as T and B cells; thus, vitamin A is necessary for the generation of antibody responses to specific antigens (45). Most of the immune effects of vitamin A are carried out by vitamin A derivatives, namely isomers of retinoic acid. Isomers of retinoic acid are steroid hormones that bind to retinoid receptors that belong to two different classes: retinoic acid receptors (RARs) and retinoid X receptors (RXRs). In the classical pathway, RAR must first heterodimerize with RXR and then bind to small sequences of DNA called retinoic acid response elements (RAREs) to initiate a cascade of molecular interactions that modulate the transcription of specific genes (46). More than 500 genes are directly or indirectly regulated by retinoic acid (47). Several of these genes control cellular proliferation and differentiation; thus, vitamin A has obvious importance in immunity. Vitamin A deficiency is a major public health problem worldwide, especially in developing nations, where availability of foods containing preformed vitamin A is limited (for information on sources of vitamin A, see the separate article on Vitamin A). Experimental studies in animal models, along with epidemiological studies, have shown that vitamin A deficiency leads to immunodeficiency and increases the risk of infectious diseases (45). In fact, deficiency in this micronutrient is a leading cause of morbidity and mortality among infants, children, and women in developing nations. Vitamin A-deficient individuals are vulnerable to certain infections, such as measles, malaria, and diarrheal diseases (45). Subclinical vitamin A deficiency might increase risk of infection as well (48). Infections can, in turn, lead to vitamin A deficiency in a number of different ways, for example, by reducing food intake, impairing vitamin absorption, increasing vitamin excretion, interfering with vitamin utilization, or increasing metabolic requirements of vitamin A (49). Many of the specific effects of vitamin A deficiency on the immune system have been elucidated using animal models. Vitamin A deficiency impairs components of innate immunity. As mentioned above, vitamin A is essential in maintaining the mucosal barriers of the innate immune system. Thus, vitamin A deficiency compromises the integrity of this first line of defense, thereby increasing susceptibility to some types of infection, such as eye, respiratory, gastrointestinal, and genitourinary infections (50-56). Vitamin A deficiency results in reductions in both the number and killing activity of NK cells, as well as the function of neutrophils and other cells that phagocytose pathogens like macrophages. Specific measures of functional activity affected appear to include chemotaxis, phagocytosis, and immune cell ability to generate oxidants that kill invading pathogens (45). In addition, cytokine signaling may be altered in vitamin A deficiency, which would affect inflammatory responses of innate immunity. Additionally, vitamin A deficiency impairs various aspects of adaptive immunity, including humoral and cell-mediated immunity. In particular, vitamin A deficiency negatively affects the growth and differentiation of B cells, which are dependent on retinol and its metabolites (57, 58). 
Vitamin A deficiency also affects B cell function; for example, animal experiments have shown that vitamin A deficiency impairs antibody responses (59-61). With respect to cell-mediated immunity, retinol is important in the activation of T cells (62), and vitamin A deficiency may affect cell-mediated immunity by decreasing the number or distribution of T cells, altering cytokine production, or by decreasing the expression of cell-surface receptors that mediate T-cell signaling (45). Vitamin A supplementation enhances immunity and has been shown to reduce the infection-related morbidity and mortality associated with vitamin A deficiency. A meta-analysis of 12 controlled trials found that vitamin A supplementation in children decreased the risk of all-cause mortality by 30%; this analysis also found that vitamin A supplementation in hospitalized children with measles was associated with a 61% reduced risk of mortality (63). Vitamin A supplementation has been shown to decrease the severity of diarrheal diseases in several studies (64) and has also been shown to decrease the severity, but not the incidence, of other infections, such as measles, malaria, and HIV (45). Moreover, vitamin A supplementation can improve or reverse many of the abovementioned, untoward effects on immune function, such as lowered antibody production and an exacerbated inflammatory response (65). However, vitamin A supplementation is not beneficial in those with lower respiratory infections, such as pneumonia, and supplementation may actually aggravate the condition (45, 66, 67). Because of potential adverse effects, vitamin A supplements should be reserved for undernourished populations and those with evidence of vitamin A deficiency (64). For information on vitamin A toxicity, see the separate article on Vitamin A. Like vitamin A, the active form of vitamin D, 1,25-dihydroxyvitamin D3, functions as a steroid hormone to regulate expression of target genes. Many of the biological effects of 1,25-dihydroxyvitamin D3 are mediated through a nuclear transcription factor known as the vitamin D receptor (VDR) (68). Upon entering the nucleus of a cell, 1,25-dihydroxyvitamin D3 associates with the VDR and promotes its association with the retinoid X receptor (RXR). In the presence of 1,25-dihydroxyvitamin D3, the VDR/RXR complex binds small sequences of DNA known as vitamin D response elements (VDREs) and initiates a cascade of molecular interactions that modulate the transcription of specific genes. More than 200 genes in tissues throughout the body are known to be regulated either directly or indirectly by 1,25-dihydroxyvitamin D3 (44). In addition to its effects on mineral homeostasis and bone metabolism, 1,25-dihydroxyvitamin D3 is now recognized to be a potent modulator of the immune system. The VDR is expressed in several types of immune cells, including monocytes, macrophages, dendritic cells, and activated T cells (69-72). Macrophages also produce the 25-hydroxyvitamin D3-1-hydroxylase enzyme, allowing for local conversion of vitamin D to its active form (73). Studies have demonstrated that 1,25-dihydroxyvitamin D3 modulates both innate and adaptive immune responses. Antimicrobial peptides (AMPs) and proteins are critical components of the innate immune system because they directly kill pathogens, especially bacteria, and thereby enhance immunity (74). AMPs also modulate immune functions through cell-signaling effects (75). 
The active form of vitamin D regulates an important antimicrobial protein called cathelicidin (76-78). Vitamin D has also been shown to stimulate other components of innate immunity, including immune cell proliferation and cytokine production (79). Through these roles, vitamin D helps protect against infections caused by pathogens. Vitamin D has mainly inhibitory effects on adaptive immunity. In particular, 1,25-dihydroxyvitamin D3 suppresses antibody production by B cells and also inhibits proliferation of T cells in vitro (80-82). Moreover, 1,25-dihydroxyvitamin D3 has been shown to modulate the functional phenotype of helper T cells as well as dendritic cells (75). T cells that express the cell-surface protein CD4 are divided into two subsets depending on the particular cytokines that they produce: T helper (Th)1 cells are primarily involved in activating macrophages and inflammatory responses and Th2 cells are primarily involved in stimulating antibody production by B cells (12). Some studies have shown that 1,25-dihydroxyvitamin D3 inhibits the development and function of Th1 cells (83, 84) but enhances the development and function of Th2 cells (85, 86) and regulatory T cells (87, 88). Because these latter cell types are important regulators in autoimmune disease and graft rejections, vitamin D is suggested to have utility in preventing and treating such conditions (89). Studies employing various animal models of autoimmune diseases and transplantation have reported beneficial effects of 1,25-dihydroxyvitamin D3 (reviewed in 84). Indeed, vitamin D deficiency has been implicated in the development of certain autoimmune diseases, such as insulin-dependent diabetes mellitus (IDDM; type 1 diabetes mellitus), multiple sclerosis (MS), and rheumatoid arthritis (RA). Autoimmune diseases occur when the body mounts an immune response against its own tissues instead of a foreign pathogen. The targets of the inappropriate immune response are the insulin-producing beta-cells of the pancreas in IDDM, the myelin-producing cells of the central nervous system in MS, and the collagen-producing cells of the joints in RA (90). Some epidemiological studies have found the prevalence of various autoimmune conditions increases as latitude increases (91). This suggests that lower exposure to ultraviolet-B radiation (the type of radiation needed to induce vitamin D synthesis in skin) and the associated decrease in endogenous vitamin D synthesis may play a role in the pathology of autoimmune diseases. Additionally, results of several case-control and prospective cohort studies have associated higher vitamin D intake or serum levels with decreased incidence, progression, or symptoms of IDDM (92), MS (93-96), and RA (97). For more information, see the separate article on Vitamin D. It is not yet known whether vitamin D supplementation will reduce the risk of certain autoimmune disorders. Interestingly, a recent systematic review and meta-analysis of observational studies found that vitamin D supplementation during early childhood was associated with a 29% lower risk of developing IDDM (98). More research is needed to determine the role of vitamin D in various autoimmune conditions. Vitamin C is a highly effective antioxidant that protects the body’s cells against reactive oxygen species (ROS) that are generated by immune cells to kill pathogens. 
Primarily through this role, the vitamin affects several components of innate and adaptive immunity; for example, vitamin C has been shown to stimulate both the production (99-103) and function (104, 105) of leukocytes (white blood cells), especially neutrophils, lymphocytes, and phagocytes. Specific measures of functions stimulated by vitamin C include cellular motility (104), chemotaxis (104, 105), and phagocytosis (105). Neutrophils, which attack foreign bacteria and viruses, seem to be the primary cell type stimulated by vitamin C, but lymphocytes and other phagocytes are also affected (106). Additionally, several studies have shown that supplemental vitamin C increases serum levels of antibodies (107, 108) and C1q complement proteins (109-111) in guinea pigs, which—like humans—cannot synthesize vitamin C and hence depend on dietary vitamin C. However, some studies have reported no beneficial changes in leukocyte production or function with vitamin C treatment (112-115). Vitamin C may also protect the integrity of immune cells. Neutrophils, mononuclear phagocytes, and lymphocytes accumulate vitamin C to high concentrations, which can protect these cell types from oxidative damage (103, 116, 117). In response to invading microorganisms, phagocytic leukocytes release non-specific toxins, such as superoxide radicals, hypochlorous acid (“bleach”), and peroxynitrite; these ROS kill pathogens and, in the process, can damage the leukocytes themselves (118). Vitamin C, through its antioxidant functions, has been shown to protect leukocytes from such effects of autooxidation (119). Phagocytic leukocytes also produce and release cytokines, including interferons, which have antiviral activity (120). Vitamin C has been shown to increase interferon levels in vitro (121). Further, vitamin C regenerates the antioxidant vitamin E from its oxidized form (122). It is widely thought by the general public that vitamin C boosts the function of the immune system, and accordingly, may protect against viral infections and perhaps other diseases. While some studies suggest the biological plausibility of vitamin C as an immune enhancer, human studies published to date are conflicting. Controlled clinical trials of appropriate statistical power would be necessary to determine if supplemental vitamin C boosts the immune system. For a review of vitamin C and the common cold, see the separate article on Vitamin C. Vitamin E is a lipid-soluble antioxidant that protects the integrity of cell membranes from damage caused by free radicals (123). In particular, the alpha-tocopherol form of vitamin E protects against peroxidation of polyunsaturated fatty acids, which can potentially cause cellular damage and subsequently lead to improper immune responses (124). Several studies in animal models as well as humans indicate that vitamin E deficiency impairs both humoral and cell-mediated aspects of adaptive immunity, including B and T cell function (reviewed in 124). Moreover, vitamin E supplementation in excess of current intake recommendations has been shown to enhance immunity and decrease susceptibility to certain infections, especially in elderly individuals. Aging is associated with immune senescence (125). For example, T-cell function declines with increasing age, evidenced by decreased T-cell proliferation and decreased T-cell production of the cytokine, interleukin-2 (126). Studies in mice have found that vitamin E ameliorates these two age-related, immune effects (127, 128). 
Similar effects have been observed in some human studies (129). A few clinical trials of alpha-tocopherol supplementation in elderly subjects have demonstrated improvements in immunity. For example, elderly adults given 200 mg/day of synthetic alpha-tocopherol (equivalent to 100 mg of RRR-alpha-tocopherol or 150 IU of RRR-tocopherol; RRR-alpha-tocopherol is also referred to as "natural" or d-alpha-tocopherol) for several months displayed increased formation of antibodies in response to hepatitis B vaccine and tetanus vaccine (130). However, it is not known if such enhancements in the immune response of older adults actually translate to increased resistance to infections like the flu (influenza virus) (131). A randomized, placebo-controlled trial in elderly nursing home residents reported that daily supplementation with 200 IU of synthetic alpha-tocopherol (equivalent to 90 mg of RRR-alpha-tocopherol) for one year significantly lowered the risk of contracting upper respiratory tract infections, especially the common cold, but had no effect on lower respiratory tract (lung) infections (132). Yet, other trials have not reported an overall beneficial effect of vitamin E supplements on respiratory tract infections in older adults (133-136). More research is needed to determine whether supplemental vitamin E may protect the elderly against the common cold or other infections. Vitamin B6 is required in the endogenous synthesis and metabolism of amino acids—the building blocks of proteins like cytokines and antibodies. Animal and human studies have demonstrated that vitamin B6 deficiency impairs aspects adaptive immunity, including both humoral and cell-mediated immunity. Specifically, deficiency in this micronutrient has been shown to affect lymphocyte proliferation, differentiation, and maturation as well as cytokine and antibody production (137-139). Correcting the vitamin deficiency restores the affected immune functions (139). The B vitamin, folate, is required as a coenzyme to mediate the transfer of one-carbon units. Folate coenzymes act as acceptors and donors of one-carbon units in a variety of reactions critical to the endogenous synthesis and metabolism of nucleic acids (DNA and RNA) and amino acids (140, 141). Thus, folate has obvious importance in immunity. Folate deficiency results in impaired immune responses, primarily affecting cell-mediated immunity. However, antibody responses of humoral immunity may also be impaired in folate deficiency (142). In humans, vitamin B12 functions as a coenzyme for two enzymatic reactions. One of the vitamin B12-dependent enzymes is involved in the synthesis of the amino acid, methionine, from homocysteine. Methionine in turn is required for the synthesis of S-adenosylmethionine, a methyl group donor used in many biological methylation reactions, including the methylation of a number of sites within DNA and RNA. The other vitamin B12-dependent enzyme, L-methylmalonyl-CoA mutase, converts L-methylmalonyl-CoA to succinyl-CoA, a compound that is important in the production of energy from fats and proteins as well as in the synthesis of hemoglobin, the oxygen carrying pigment in red blood cells (143). Patients with diagnosed vitamin B12 deficiency have been reported to have suppressed natural killer cell activity and decreased numbers of circulating lymphocytes (144, 145). One study found that these immunomodulatory effects were corrected by treating the vitamin deficiency (144). 
Zinc is critical for normal development and function of cells that mediate both innate and adaptive immunity (146). The cellular functions of zinc can be divided into three categories: 1) catalytic, 2) structural, and 3) regulatory (see Function in the separate article on zinc) (147). Because zinc is not stored in the body, regular dietary intake of the mineral is important in maintaining the integrity of the immune system. Thus, inadequate intake can lead to zinc deficiency and compromised immune responses (148). With respect to innate immunity, zinc deficiency impairs the complement system, cytotoxicity of natural killer cells, phagocytic activity of neutrophils and macrophages, and immune cell ability to generate oxidants that kill invading pathogens (149-151). Zinc deficiency also compromises adaptive immune function, including lymphocyte number and function (152). Even marginal zinc deficiency, which is more common than severe zinc deficiency, can suppress aspects of immunity (148). Zinc-deficient individuals are known to experience increased susceptibility to a variety of infectious agents (see the separate article on Zinc). Adequate selenium intake is essential for the host to mount a proper immune response because it is required for the function of several selenium-dependent enzymes known as selenoproteins (see the separate article on Selenium). For example, the glutathione peroxidases (GPx) are selenoproteins that function as important redox regulators and cellular antioxidants, which reduce potentially damaging reactive oxygen species, such as hydrogen peroxide and lipid hydroperoxides, to harmless products like water and alcohols by coupling their reduction with the oxidation of glutathione (see the diagram in the article on selenium) (153). These roles have implications for immune function and cancer prevention. Selenium deficiency impairs aspects of innate as well as adaptive immunity (154, 155), adversely affecting both humoral immunity (i.e., antibody production) and cell-mediated immunity (156). Selenium deficiency appears to enhance the virulence or progression of some viral infections (see separate article on Selenium). Moreover, selenium supplementation in individuals who are not overtly selenium deficient appears to stimulate the immune response. In two small studies, healthy (157, 158) and immunosuppressed individuals (159) supplemented with 200 micrograms (mcg)/day of selenium as sodium selenite for eight weeks showed an enhanced immune cell response to foreign antigens compared with those taking a placebo. A considerable amount of basic research also indicates that selenium plays a role in regulating the expression of cytokines that orchestrate the immune response (160). Iron is an essential component of hundreds of proteins and enzymes that are involved in oxygen transport and storage, electron transport and energy generation, antioxidant and beneficial pro-oxidant functions, and DNA synthesis (see Function in the article on iron) (161-163). Iron is required by the host in order to mount effective immune responses to invading pathogens, and iron deficiency impairs immune responses (164). Sufficient iron is critical to several immune functions, including the differentiation and proliferation of T lymphocytes and generation of reactive oxygen species (ROS) that kill pathogens. However, iron is also required by most infectious agents for replication and survival. 
During an acute inflammatory response, serum iron levels decrease while levels of ferritin (the iron storage protein) increase, suggesting that sequestering iron from pathogens is an important host response to infection (162, 165). Moreover, conditions of iron overload (e.g., hereditary hemochromatosis) can have detrimental consequences to immune function, such as impairments in phagocytic function, cytokine production, complement system activation, and T and B lymphocyte function (164). Further, data from the first National Health and Nutrition Examination Survey (NHANES), a U.S. national survey, indicate that elevated iron levels may be a risk factor for cancer and death, especially in men (167). For men and women combined, there were significant trends for increasing risk of cancer and mortality with increasing transferrin saturation, with risks being higher in those with transferrin saturation >40% compared to ≤30% (167). Despite the critical functions of iron in the immune system, the nature of the relationship between iron deficiency and susceptibility to infection, especially with respect to malaria, remains controversial. High-dose iron supplementation of children residing in the tropics has been associated with increased risk of clinical malaria and other infections, such as pneumonia. Studies in cell cultures and animals suggest that the survival of infectious agents that spend part of their life cycle within host cells, such as plasmodia (malaria) and mycobacteria (tuberculosis), may be enhanced by iron therapy. Controlled clinical studies are needed to determine the appropriate use of iron supplementation in regions where malaria is common, as well as in the presence of infectious diseases, such as HIV, tuberculosis, and typhoid (168). Copper is a critical functional component of a number of essential enzymes known as cuproenzymes (see the separate article on Copper). The mineral plays an important role in the development and maintenance of immune system function, but the exact mechanism of its action is not yet known. Copper deficiency results in neutropenia, an abnormally low number of neutrophils (169), which may increase one’s susceptibility to infection. Adverse effects of insufficient copper on immune function appear most pronounced in infants. Infants with Menkes disease, a genetic disorder that results in severe copper deficiency, suffer from frequent and severe infections (170, 171). In a study of 11 malnourished infants with evidence of copper deficiency, the ability of certain white blood cells to engulf pathogens increased significantly after one month of copper supplementation (172). Immune effects have also been observed in adults with low intake of dietary copper. In one study, 11 men on a low-copper diet (0.66 mg copper/day for 24 days and 0.38 mg/day for another 40 days) showed a reduced proliferation response when white blood cells, called mononuclear cells, were isolated from blood and presented with an immune challenge in cell culture (173). While it is known that severe copper deficiency has adverse effects on immune function, the effects of marginal copper deficiency in humans are not yet clear (174). However, long-term high intakes of copper can result in adverse effects on immune function (175). Probiotics are usually defined as live microorganisms that, when administered in sufficient amounts, benefit the overall health of the host (176). 
Common examples belong to the Lactobacilli and Bifidobacteria species; these probiotics are consumed in yogurt and other fermented foods. Ingested probiotics that survive digestion can transiently inhabit the lower part of the gastrointestinal tract (177). Here, they can modulate immune functions by interacting with various receptors on intestinal epithelial cells and other gut-associated immune cells, including dendritic cells and M-cells (178). Immune modulation requires regular consumption because probiotics have not been shown to permanently alter intestinal microflora (179). Probiotics have been shown to benefit both innate and adaptive immune responses of the host (180). For example, probiotics can strengthen the gut epithelial barrier—an important innate defense—through a number of ways, such as by inhibiting apoptosis and promoting the survival of intestinal epithelial cells (181). Probiotics can also stimulate the production of antibodies and T lymphocytes, which are critical in the adaptive immune response (180). Several immune effects of probiotics are mediated through altering cell-signaling cascades that modify cytokine and other protein expression (181). However, probiotics exert diverse effects on the immune system that are dependent not only on the specific strain but also on the dose, route, and frequency of delivery (182). Probiotics may have utility in the prevention of inflammatory bowel disorders, diarrheal diseases, allergic diseases, gastrointestinal and other types of infections, and certain cancers. However, more clinical research is needed in order to elucidate the health effects of probiotics (180). Overnutrition is a form of malnutrition where nutrients are supplied in excess of the body’s needs. Overnutrition can create an imbalance between energy intake and energy expenditure and lead to excessive energy storage, resulting in obesity (15). Obesity is a major public health problem worldwide, especially in industrialized nations. Obese individuals are at increased risk of morbidity from a number of chronic diseases, including hypertension and cardiovascular diseases, type 2 diabetes, liver and gallbladder disease, osteoarthritis, sleep apnea, and certain cancers (183). Obesity has also been linked to increased risk of mortality (184). Overnutrition and obesity have been shown to alter immunocompetence. Obesity is associated with macrophage infiltration of adipose tissue; macrophage accumulation in adipose tissue is directly proportional to the degree of obesity (185). Studies in mouse models of genetic and high-fat diet-induced obesity have documented a marked up-regulation in expression of inflammation and macrophage-specific genes in white adipose tissue (186). In fact, obesity is characterized by chronic, low-grade inflammation, and inflammation is thought to be an important contributor in the pathogenesis of insulin resistance—a condition that is strongly linked to obesity. Adipose tissue secretes fatty acids and other molecules, including various hormones and cytokines (called adipocytokines or adipokines), that trigger inflammatory processes (185). Leptin is one such hormone and adipokine that plays a key role in the regulation of food intake, body weight, and energy homeostasis (187, 188). Leptin is secreted from adipose tissue and circulates in direct proportion to the amount of fat stores. Normally, higher levels of circulating leptin suppress appetite and thereby lead to a reduction in food intake (189). 
Leptin has a number of other functions as well, such as modulation of inflammatory responses and aspects of humoral and cell-mediated responses of the adaptive immune system (187, 190). Specific effects of leptin, elucidated in animal and in vitro studies, include the promotion of phagocytic function of immune cells; stimulation of pro-inflammatory cytokine production; and regulation of neutrophil, natural killer (NK) cell, and dendritic cell functions (reviewed in 190). Leptin also affects aspects of cell-mediated immunity; for example, leptin promotes T helper (Th)1 immune responses and thus may have implications in the development of autoimmune disease (191). Th1 cells are primarily involved in activating macrophages and inflammatory responses (12). Obese individuals have been reported to have higher plasma leptin concentrations compared to lean individuals. However, in the obese, the elevated leptin signal is not associated with the normal responses of reduced food intake and increased energy expenditure, suggesting obesity is associated with a state of leptin resistance. Leptin resistance has been documented in mouse models of obesity, but more research is needed to better understand leptin resistance in human obesity (189). Obese individuals may exhibit increased susceptibility to various infections. Some epidemiological studies have shown that obese patients have a higher incidence of postoperative and other nosocomial infections compared with patients of normal weight (192, 193; reviewed in 194). Obesity has been linked to poor wound healing and increased occurrence of skin infections (195-197). A higher body mass index (BMI) may also be associated with increased susceptibility to respiratory, gastrointestinal, liver, and biliary infections (reviewed in 194). In obesity, the increased vulnerability, severity, or complications of certain infections may be related to a number of factors, such as select micronutrient deficiencies. For example, one study in obese children and adolescents associated impairments in cell-mediated immunity with deficiencies in zinc and iron (198). Deficiencies or inadequacies of other micronutrients, including the B vitamins and vitamins A, C, D, and E, have also been associated with obesity (41). Overall, immune responses appear to be compromised in obesity, but more research is needed to clarify the relationship between obesity and infection-related morbidity and mortality. Written in August 2010 by: Victoria J. Drake, Ph.D. Linus Pauling Institute Oregon State University Reviewed in August 2010 by: Adrian F. Gombart, Ph.D. Department of Biochemistry and Biophysics Principal Investigator, Linus Pauling Institute Oregon State University Reviewed in August 2010 by: Malcolm B. Lowry, Ph.D. Department of Microbiology Oregon State University This article was underwritten, in part, by a grant from Bayer Consumer Care AG, Basel, Switzerland. Last updated 9/2/10 Copyright 2010-2013 Linus Pauling Institute The Linus Pauling Institute Micronutrient Information Center provides scientific information on the health aspects of dietary factors and supplements, foods, and beverages for the general public. The information is made available with the understanding that the author and publisher are not providing medical, psychological, or nutritional counseling services on this site. The information should not be used in place of a consultation with a competent health care or nutrition professional. 
The information on dietary factors and supplements, foods, and beverages contained on this Web site does not cover all possible uses, actions, precautions, side effects, and interactions. It is not intended as nutritional or medical advice for individual problems. Liability for individual actions or omissions based upon the contents of this site is expressly disclaimed.
http://lpi.oregonstate.edu/infocenter/immunity.html
Glossary of Motor and Motion Related Terms

This motor terminology glossary is a guide to explain and define a variety of terms and characteristics that apply to AC and DC electric motors and motion control related terms. AC (Alternating Current) - The commonly available electric power supplied by an AC generator and is distributed in single- or three-phase forms. AC current changes its direction of flow (cycles). Acceleration - rate of increase in velocity with respect to time; equal to net torque divided by inertia. Accuracy - difference between the actual value and the measured or expected value. Actuator - A device that creates mechanical motion by converting various forms of energy to rotating or linear mechanical energy. Alternating Current (AC) - The standard power supply available from electric utilities. Ambient temperature - temperature of the surroundings. The standard NEMA rating for ambient temperature is not to exceed 40 degrees C. Ampere (Amp) - The standard unit of electric current. The current produced by a pressure of one volt in a circuit having a resistance of one ohm. Amplifier - electronics that convert low level inputs to high level outputs. Armature - The rotating part of a DC or universal motor. Armature Current - Armature current is the DC current required by a DC motor to produce torque and drive a load. The maximum safe, continuous current is stamped on the motor nameplate. This can only be exceeded for initial acceleration, and for short periods of time. Armature current is proportional to the amount of torque being produced; therefore, it rises and falls as the torque demand rises and falls. Armature Reaction - The current that flows in the armature winding of a DC motor tends to produce magnetic flux in addition to that produced by the field current. This effect, which reduces the torque capacity, is called armature reaction and can affect the commutation and the magnitude of the motor's generated voltage. Axial Movement - Often called "endplay." The endwise movement of motor or gear shafts. Usually expressed in thousandths of an inch. Axial Thrust - The force or loads that are applied to the motor shaft in a direction parallel to the axis of the shaft (such as from a fan or pump). Back-EMF - Electromotive force generated when a conductor passes through a magnetic field. In a motor it is generated any time the armature is moving in the field, whether the motor is under power or not. The term "back" or "counter" EMF refers to the polarity of the voltage and the direction of the current flow as being opposed to the supply voltage and current to the motor under power. Back EMF constant - [mV/rpm] The constant corresponding to the relationship between the induced voltage in the rotor and the speed of rotation. In brushless motors the back-EMF constant is the constant corresponding to the relationship between the induced voltage in the motor phases and the rotational speed. Backlash - The typically undesirable quality of "play" or "slop" in a mechanical system. Gearboxes, depending on the level of precision of the parts and the type of gearing system involved, can have varying degrees of backlash internally. Usually expressed in thousandths of an inch and measured at a specific radius at the output shaft. Back of a Motor - The back of a motor is the end which carries the coupling or driving pulley (NEMA). This is sometimes called the drive end (D.E.) or pulley end (P.E.).
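To make two of the quantitative entries above concrete (Acceleration, and the Back EMF constant), here is a minimal Python sketch. All numbers are hypothetical and chosen only to illustrate the arithmetic and units, not any particular motor.

    # Hypothetical figures, used only to illustrate the two definitions above.
    net_torque = 2.0                               # net torque, N*m
    inertia = 0.05                                 # rotor plus load inertia, kg*m^2
    acceleration = net_torque / inertia            # acceleration = torque / inertia -> 40.0 rad/s^2

    ke_mv_per_rpm = 5.2                            # assumed back-EMF constant, mV/rpm
    speed_rpm = 3000.0
    back_emf = ke_mv_per_rpm * speed_rpm / 1000.0  # induced voltage at that speed -> 15.6 V

    print(acceleration, back_emf)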
Base Speed - Base speed is the manufacturer's nameplate rating where the motor will develop rated HP at rated load and voltage. With DC drives, it is commonly the point where full armature voltage is applied with full rated field current. Bearings - Bearings reduce friction and wear while supporting rotating elements. When used in a motor, they must provide a relatively rigid support for the output shaft. Bearings act as the connection point between the rotating and stationary elements of a motor. There are various types such as roller, ball, sleeve (journal) and needle. Ball bearings are used in virtually all types and sizes of electric motors. They exhibit low friction loss, are suited for high-speed operation and are compatible with a wide range of temperatures. Bifilar winding - indicates two distinct windings in the same physical arrangement; these windings are usually wired together, either in series or in parallel, to form one phase. Bipolar chopper drive - drive that uses the switch mode method to control motor current and polarity. Braking - Braking provides a means of stopping an AC or DC motor and can be accomplished in several ways - A. Dynamic Braking slows the motor by applying a resistive load across the armature leads after disconnection from the DC supply. This must be done while the motor field is energized. The motor then acts as a generator until the energy of the rotating armature is dissipated. This is not a holding brake. B. Regenerative Braking is similar to Dynamic Braking, but is accomplished electronically. The generated power is returned to the line through the power converter. It may also be just dissipated as losses in the converter (within its limitations). Breakdown Torque - The maximum torque a motor can achieve with rated voltage applied at rated frequency, without a sudden drop in speed or stalling. Breakaway Torque - The torque required to start a machine from standstill. It is always greater than the torque needed to maintain motion. Bridge Rectifier - A full-wave rectifier that conducts current in only one direction of the input current. AC applied to the input results in approximate DC at the output. Bridge Rectifier (Diode, SCR) - A diode bridge rectifier is a non-controlled full wave rectifier that produces a constant rectified DC voltage. An SCR bridge rectifier is a full wave rectifier with an output that can be controlled by switching on the gate control element. Brush - A brush is a conductor, usually composed of some element of carbon, serving to maintain an electrical connection between stationary and moving parts of a machine (the commutator of a DC motor). The brush is mounted in a spring-loaded holder and positioned tangent to the commutator segments against which it "brushes". Pairs of brushes are equally spaced around the circumference of the commutator. Brushed DC motor - class of motors that has a permanent magnet stator and a wound iron-core armature, as well as mechanical brushes for commutation; capable of variable speed control, but not readily adaptable to different environments. Brushless servomotor - class of servomotors that uses electrical feedback rather than mechanical brushes for commutation; durable and adaptable to many different environments. Canadian Standards Association (CSA) - The agency that sets safety standards for motors and other electrical equipment used in Canada. Capacitance - The measure of the electrical storage potential of a capacitor; the unit of capacitance is the farad, but typical values are expressed in microfarads.
Capacitor - A device that stores electrical energy. Used on single-phase motors, a capacitor can provide a starting "boost" or allow lower current during operation. Capacitor Motor - A single-phase induction motor with a main winding arranged for direct connection to the power source, and an auxiliary winding connected in series with a capacitor. There are three types of capacitor motors - capacitor start, in which the capacitor phase is in Capacitor Start - The capacitor start single-phase motor is basically the same as the split phase start, except that it has a capacitor in series with the starting winding. The addition of the capacitor provides better phase relation and results in greater starting torque with much less power input. As in the case of the split phase motor, this type can be reversed at rest, but not while running unless special starting and reversing switches are used. When properly equipped for reversing while running, the motor is much more suitable for this service than the split phase start since it provides greater reversing ability at less watts input. Case temperature rating - maximum temperature the motor case can reach without the inside of the motor exceeding its internal temperature rating Center Distance - A basic measurement or size reference for worm gear reducers, measured from the centerline of the worm to the centerline of the worm wheel. Centrifugal Cutout Switch - A centrifugally operated automatic mechanism used in conjunction with split phase and other types of single-phase induction motors. Centrifugal cutout switches will open or disconnect the starting winding when the rotor has reached a predetermined speed and reconnect it when the motor speed falls below it. Without such a device, the starting winding would be susceptible to rapid overheating and subsequent burnout. Closed-loop - describes a system where a measured output value is compared to a desired input value and corrected accordingly (e.g., a servomotor system). Cogging - A condition in which a motor does not rotate smoothly but “steps” or “jerks” from one position to another during shaft revolution. Cogging is most pronounced at low motor speeds and can cause objectionable vibrations in the driven machine. Commutation - A term that refers to the action of steering currents or voltage to the proper motor phases so as to produce optimum motor torque. In brush type motors, commutation is done electromechanically via the brushes and commutator. In brushless motors, commutation is done by the switching electronics using rotor position information typically obtained by hall sensors, a tachsyn, a resolve, or an encoder. Commutator - The commutator is mechanical device in a brushed DC or universal motor that passes current from the brushes to the windings and is fastened to the motor shaft and is considered part of the armature assembly. It consists of segments or “bars” that are electrically connected to two ends of one (or more) armature coils. Current flows from the power supply through the brushes, to the commutator and hence through the armature coils. The arrangement of commutator segments is such that the magnetic polarity of each coil changes a number of times per revolution (the number of times depends on the number of poles in the motor). Continuous stall current - amount of current applied to the motor to achieve the continuous stall torque. Continuous stall torque - maximum amount of torque a motor can provide at zero speed without exceeding its thermal capacity. 
Continuous Duty (CONT) - A motor that can continue to operate within the insulation temperature limits after it has reached normal operating (equilibrium) temperature. Controller - used to describe collective group of electronics that control the motor (e.g. drive, indexer, etc.). Converter - The process of changing AC to DC. This is accomplished through use of a diode rectifier or thyristor rectifier circuit. The term “converter” may also refer to the process of changing AC to DC to AC (e.g. adjustable frequency drive). A “frequency converter”, such as that found in an adjustable frequency drive, consists of a Rectifier, a DC Intermediate Circuit, and Inverter and a Control Unit Current, AC - The standard power supply available from electric utilities or alternators. Current, DC - The power supply available from batteries, generators (not alternators), or a rectified source used for special purpose applications. Coupling - The mechanical connector joining the motor shaft to the equipment to be driven. Current - The flow of electrons through a conducting material. By convention, current is considered to flow from positive to negative potential. The electrons, however, actually flow in the opposite direction. The unit of measurement is the Ampere and 1 Amp is defined as the constant current produced between two straight infinitely long parallel conductors with negligible cross section diameter and spaced one meter apart in a vacuum. Current Constant - The constant corresponding to the relationship between motor current and motor output torque. Current at peak torque - amount of current required to produce peak torque. DC (Direct Current) - Is the type of current where all electrons are flowing in the same direction continuously. If the flow of electrons reverses periodically, the current is called AC (Alternating Current). Deceleration - rate of decrease in velocity with respect to time. Decibel (dB) - A logarithmic measurement of gain. If G is a system gain (ratio of output to input) then 20 log G = gain in decibels (dB). Demagnetization (Current) - When a permanent magnet DC motor is subjected to high current pulses at which the motor permanent magnets will be demagnetized. This is an irreversible effect which will alter the motor characteristics and degrade performance. Detent torque - torque that is present in a non-energized motor. Drive - amplifier that converts step and direction input to motor currents and voltages. Drive Controller - (also called a Variable Speed Drive) An electronic device that can control the speed, torque horsepower and direction of an AC or DC motor. Drive, PWM - A motor drive utilizing Pulse-Width Modulation techniques to control power to the motor. Typically a high efficiency drive that can be used for high response applications. Drive, SCR - A DC motor drive which utilizes internal silicon controlled rectifiers as the power control elements. Usually used for low bandwidths, higher power applications. Drive, Servo - A motor drive which utilizes internal feedback loops for accurate control of motor current and/or velocity. Drive, Stepper - Electronics which convert step and direction inputs to high power currents and voltages to drive a stepping motor. The stepping motor driver is analogous to the servo motor amplifier. Duty Cycle - The relationship between the operating and rest times or repeatable operation at different loads. 
A motor which can continue to operate within the temperature limits of its insulation system after it has reached normal operating (equilibrium) temperature is considered to have a continuous duty (CONT.) rating. A motor which neverreaches equilibrium temperature but is permitted to cool down between operations, is operating under intermittent (INT) duty. Conditions such as a crane and hoist motor are often rated 15 or 30 minute intermittent duty. Dynamic Braking - A passive technique for stopping a permanent magnet brush or brushless motor. The motor windings are shorted together through a resistor which results in motor braking with an exponential decrease in speed. Eddy Current - Localized currents induced in an iron core by alternating magnetic flux. These currents translate into losses (heat) and their minimization is an important factor in lamination design. Efficiency - Ratio of mechanical output to electrical input indicated by a percent. In motors, it is the effectiveness with which a motor converts electrical energy into mechanical energy. EMF - The initials of electromotive force which is another term for voltage or potential difference. In DC adjustable speed drives, voltage applied to the motor armature from power supply is the EMF and the voltage generated by the motor is the counter-EMF or CEMF. EMI (Electro-Magnetic Interference) - EMI is noise which, when coupled into sensitive electronic circuits, may cause problems. Enclosure - The term used to describe the motor housing. The most common industrial types are Open Drip Proof (ODP), Totally Enclosed Fan Cooled (TEFC), Totally Enclosed Non-Ventilated (TENV), and Totally Enclosed Air Over (TEAO). Encoder - A type of feedback device which converts mechanical motion into electrical signals to indicate actuator position. Typical encoders are designed with a printed disc and a light source. As the disc turns with the actuator shaft, the light source shines through the printed pattern onto a sensor. The light transmission is interrupted by the pattern on the disc. These interruptions are sensed and converted to electric pulses. By counting the pulses, actuator shaft position is determined. End play - amount of axial displacement resulting from the application of a load equal to the stated maximum axial load. End shield - The part of a motor that houses the bearing supporting the rotor and acts as a protective guard to the internal parts of the motor; sometimes called endbell, endplate or end bracket. Error - Difference between the set point signal and the feedback signal. An error is necessary before a correction can be made in a controlled system. Feedback - The element of a control system that provides an actual operation signal for comparison with the set point to establish an error signal used by the regulator circuit. Field Weakening - The action of reducing the current applied to a DC motor shunt field. This action weakens the strength of the magnetic field and thereby increases the motor speed. Filter - A device that passes a signal or a range of signals and eliminates all others. Floating Ground - A circuit whose electrical common point is not at earth ground potential or the same ground potential as circuitry it is associated with. A voltage difference can exist between the floating ground and earth ground. Force - The tendency to change the motion or position of an object with a push or pull. Force is measured in ounces or pounds. 
Form Factor - A figure of merit which indicates how much rectified current deviates from pur (nonpulsating) DC. A large departure from unity form factor (pure DC) increases the heating effect of the motor. Mathematically, it is expressed as Irms/Iav (Motor heating current / Torque producing current). Four-Quadrant Operation - The four combinations of forward and reverse rotation and forward and reverse torque of which a regenerative drive is capable. The four combinations are - 1. Forward rotation / forward torque (motoring). 2. Forward rotation / reverse torque (regeneration). 3. Reverse rotation / reverse torque (motoring). 4. Reverse rotation / forward torque (regeneration). Frame - The supporting structure for the stator parts of an AC motor. In a DC motor, the frame usually forms a part of the magnetic coil. The frame also determines mounting. Frequency - Alternating electric current frequency is an expression of how often a complete cycle occurs. Cycles per second describe how many complete cycles occur in a given time increment. Hertz (hz) has been adopted to describe cycles per second so that time as well as number of cycles is specified. The standard power supply in North America is 60 hz. Most of the rest of the world has 50 hz power. Friction Torque - The sum of torque losses independent of motor speed. These losses include those caused by static mechanical friction of the ball bearings and magnetic hysteresis of the stator. Front of a Motor - The end opposite the coupling or driving pulley (NEMA). This is sometimes called the opposite pulley end (O.P.E.) or commutator end (C.E.). Full Load Amperes - Line current (amperage) drawn by a motor when operating at rated load and voltage on motor nameplate. Important for proper wire size selection, and motor starter or drive selection. Also called full load current. Full Load Torque - The torque a motor produces at its rated horsepower and full-load speed. Generator - Any machine that converts mechanical energy into electrical energy. Grounded Circuit - An electrical circuit coupled to earth ground to establish a reference point. An electric circuit malfunction caused by insulation breakdown, allowing current flow to ground rather than through the intended circuit. Horsepower - A measure of the amount of work that a motor can perform in a given period of time. Hysteresis Loss - The resistance offered by materials to becoming magnetized (magnetic orientation of molecular structure) results in energy being expended and corresponding loss. Hysteresis loss in a magnetic circuit is the energy expended to magnetize and demagnetize the core. Inductance - The characteristic of an electric circuit by which varying current in it produces a varying magnetic field which causes voltages in the same circuit or in a nearby circuit Induction Motor - The simplest and most rugged electric motor, it consists of a wound stator and a rotor assembly. The AC induction motor is named because the electric current flowing in its secondary member (the rotor) is induced by the alternating current flowing in its primary member (the stator). The power supply is connected only to the stator. The combined electromagnetic effects of the two currents produce the force to create rotation. Inertia - A measure of a body’s resistance to changes in velocity, whether the body is at rest or moving at a constant velocity. The velocity can be either linear or rotational. 
The moment of Inertia (WK²) is the product of the weight (W) of an object and the square of the radius of gyration (K²). The radius of gyration is a measure of how the mass of the object is distributed about the axis of rotation. WK² is usually expressed in units of lb-ft². Insulation - In motors, classified by maximum allowable operating temperature. NEMA classifications include - Class A = 105°C, Class B = 130°C, Class F = 155°C and Class H = 180°C. Integral Horsepower Motor - A motor built in a frame having a continuous rating of 1 HP or more. Intermittent Duty (INT) - A motor that never reaches equilibrium temperature but is permitted to cool down between operations. For example, a crane, hoist or machine tool motor is often rated for 15 or 30 minute intermittent duty. International Electrotechnical Commission (IEC) - The worldwide organization that promotes international unification of standards or norms. Its formal decisions on technical matters express, as nearly as possible, an international consensus. Inverter - An electronic device that converts fixed frequency and fixed voltage to variable frequency and voltage. Enables the user to electrically adjust the speed of an AC motor. IR Compensation - A way to compensate for the voltage drop across the resistance of the AC or DC motor circuit and the resultant reduction in speed. This compensation also provides a way to improve the speed regulation characteristics of the motor, especially at low speeds. Drives that use a tachometer-generator for speed feedback generally do not require an IR Compensation circuit because the tachometer will inherently compensate for the loss of speed. Laminations - The steel portion of the rotor and stator cores is made up of a series of thin laminations (sheets) which are stacked and fastened together by cleats, rivets or welds. Laminations are used instead of a solid piece in order to reduce eddy-current losses. Locked Rotor Current - Measured current with the rotor locked and with rated voltage and frequency applied to the motor. Locked Rotor Torque - Measured torque with the rotor locked and with rated voltage and frequency applied to the motor. Megger Test - A test used to measure an insulation system's resistance. This is usually measured in megohms and tested by passing a high voltage at low current through the motor windings and measuring the resistance of the various insulation systems. Motor - A device that takes electrical energy and converts it into mechanical energy to turn a shaft. Mechanical Time Constant - [ms] The time required by the motor to reach a speed of 63% of its final no-load speed from standstill. NEMA - The National Electrical Manufacturers Association is a nonprofit organization organized and supported by manufacturers of electrical equipment and supplies. Some of the standards NEMA specifies are - HP ratings, speeds, frame sizes and dimensions, torques and enclosures. Nameplate - The plate on the outside of the motor describing the motor horsepower, voltage, speed, efficiency, design, enclosure, etc. Nominal Voltage - [V DC] The voltage applied to the armature at which the nominal motor specifications are measured or calculated. No-load speed - [rpm] The maximum speed the motor attains with no additional torque load at a given voltage. This value varies according to the voltage applied to the motor. No-load current - [A] The current consumption of the motor at nominal voltage and under no-load conditions.
This value varies proportionally to speed and is influenced by temperature Open Loop - A control system that lacks feedback Output Power - [W] The mechanical power that the motor generates based on a given input power. Mechanical power can be calculated in a few different ways. For motors, one common way is the multiplication of the output speed and torque and conversion factor. Power - Work done per unit of time. Measured in horsepower or watts - 1 HP = 33,000 ft-lb / min. = 746 watts. Plugging - A method of braking a motor that involves applying partial or full voltage in reverse in order to bring the motor to zero speed. Power Factor - A measurement of the time phase difference between the voltage and current in an AC circuit. It is represented by the cosine of the angle of this phase difference. Power factor is the ratio of Real Power (kW) to total kVA or the ratio of actual power (W) to apparent power (volt-amperes). PID - Proportional-Integral-Derivative. An acronym that describes the compensation structure that can be used in a closed-loop system. PMDC Motor - A motor consisting of a permanent magnet stator and a wound iron-core rotor. These are brush type motors and are operated by application of DC current. Prime Mover - In industry, prime mover is most often an electric motor. Occasionally engines, hydraulic or air motors are used. Special application considerations are called for when other than an electric motor is the prime mover. Pull Out Torque - Also called breakdown torque or maximum torque, this is the maximum torque a motor can deliver without stalling. Pull Up Torque - The minimum torque delivered by a motor between zero and the rated RPM, equal to the maximum load a motor can accelerate to rated RPM. PWM - Pulse width modulation. An acronym which describes a switch-mode control technique used in amplifiers and drivers to control motor voltage and current. This control technique is used in contrast to linear control and offers the advantages of greatly improved efficiency. Rectifier - A device that transforms alternating-current to direct-current. Regeneration - The characteristic of a motor to act as a generator when the CEMF is larger than the drive’s applied voltage (DC drives) or when the rotor synchronous frequency is greater than the applied frequency (AC drives). Reluctance - The characteristics of a magnetic field which resist the flow of magnetic lines of force through it. Resistor - A device that resists the flow of electrical current for the purpose of operation, protection or control. There are two types of resistors - fixed and variable. A fixed resistor has a fixed value of ohms while a variable resistor is adjustable. Resolution - The smallest distinguishable increment into which a quantity can be divided (e.g. position or shaft speed). It is also the degree to which nearly equal values of a quantity can be discriminated. For encoders, it is the number of unique electrically identified positions occurring in 360 degrees of input shaft rotation. Ramping - The acceleration and deceleration of a motor. May also refer to the change in frequency of the applied step pulse signal. Regeneration - The action during motor braking, in which the motor acts as a generator and takes kinetic energy from the load, converts it to electrical energy, and returns it to the amplifier. Resistance - [Ohm] It is the measure of opposition to current flow through a given medium. 
Substances with high resistances are called insulators and those with low resistances are called conductors. Those in between are known as semiconductors. The unit is the Ohm. 1 Ohm is defined as the resistance between two points on a conductor when an electric potential difference of one volt applied between those points produces a current of one Amp and when that conductor is not the source of any electro motive force. Resonance - The effect of a periodic driving force that causes large amplitude increases at a particular frequency. (Resonance frequency.) RFI - Radio frequency interference. Rise Time - The time required for a signal to rise from 10% of its final value to 90% of its final value. RMS Current - Root mean square current. In an intermittent duty cycle application, the RMS current is equal to the value of steady state current which would produce the equivalent resistive heating over a long period of time. RMS Torque - Root mean square torque. For an intermittent duty cycle application, the RMS torque is equal to the steady state torque which would produce the same amount of motor heating over long periods of time. Rotor - The rotating component of an induction AC motor. It is typically constructed of a laminated, cylindrical iron core with slots for cast-aluminum conductors. Short-circuiting end rings complete the "squirrel cage," which rotates when the moving magnetic field induces a current in the shorted conductors. Self-Locking - The inability of a reducer to be driven backwards by its load. As a matter of safety, no LEESON reducer should be considered self-locking. Servo System - An automatic feedback control system for mechanical motion in which the controlled or output quantity is position, velocity, or acceleration. Servo systems are closed loop systems. Service Factor - When used on a motor nameplate, a number which indicates how much above the nameplate rating a motor can be loaded without causing serious degradation (i.e. A motor with 1.15 S-F can produce 15% greater torque than one with 1.0 S-F). When used in applying motors or gear motors, it is a figure of merit which is used to adjust measured loads in an attempt to compensate for conditions which are difficult to measure or define. Settling Time - The time required for a step response of a system parameter to stop oscillating or ringing and reach its final value. Silicon Controlled Rectifier (SCR) A solid-state switch, sometimes referred to as a thyristor. The SCR has an anode, cathode and control element called the gate. The device provides controlled rectification since it can be turned on at will. The SCR can rapidly switch large currents at high voltages. They are small in size and low in weight. Shock Load - The load seen by a clutch, brake or motor in a system which transmits high peak loads. This type of load is present in crushers, separators, grinders, conveyors, winches and cranes. Short Circuit - A fault or defect in a winding causing part of the normal electrical circuit to be bypassed, frequently resulting in overheating of the winding and burnout. Shunt Resistor - A device located in a servo amplifier for controlling regenerative energy generated when braking a motor. This device dissipates or "dumps" the kinetic energy as heat. Skew - The arrangement of laminations on a rotor or armature to provide a slight angular pattern of their slots with respect to the shaft axis. 
This pattern helps to eliminate low speed cogging in an armature and minimize induced vibration in a rotor, as well as reduce associated noise. Slip - The difference between the RPM of the rotating magnetic field and the RPM of the rotor in an induction motor. Slip is expressed as a percentage and may be calculated by the following formula - Slip (%) = (Synchronous Speed - Running Speed) x 100 / Synchronous Speed. Speed constant - [rpm/V] The speed variation per volt applied to the motor phases at constant load. Speed Range - The minimum and maximum speeds at which a motor must operate under constant or variable torque load conditions. Speed Regulation - In adjustable speed drive systems, speed regulation measures the motor and control's ability to maintain a constant preset speed despite changes in load from zero to 100%. It is expressed as a percentage of the drive system's rated full load speed. Stall torque - The torque developed by the motor at zero speed and nominal voltage. Starting Current - Amount of current drawn at the instant a motor is energized - in most cases much higher than that required for running. Same as locked rotor current. Starting Torque - The torque or twisting force delivered by a motor at the instant it is energized. Starting torque is often higher than rated running or full load torque. Stator - The non-rotating part of a magnetic structure. In a motor the stator usually contains the mounting surface, bearings, and non-rotating windings or permanent magnets. Stiffness - The ability of a device to resist deviation due to load change. Terminal inductance, phase to phase - [µH] The inductance measured between two phases at 1 kHz. Terminal resistance, phase to phase - The resistance measured between two motor phases. The coil temperature directly affects the value. Thermal resistance Rth 1 / Rth 2 - [K/W] Rth 1 corresponds to the value between the coil and housing. Rth 2 corresponds to the value between the housing and the ambient air. Rth 2 can be reduced by enabling exchange of heat between the motor and the ambient air (for example, using a heat sink or forced air cooling). Thermal Protector - A device, sensitive to current and heat, which protects the motor against overheating due to overload or failure to start. Basic types include automatic reset, manual reset and resistance temperature detectors. Thrust Load - Force imposed on a shaft parallel to the shaft's axis. Thrust loads are often induced by the driven machine. Take care to be sure the thrust load rating of the reducer is sufficient so that its shafts and bearings can absorb the load without premature failure. Torque - A turning force applied to a shaft, tending to cause rotation. Torque Constant (in-lbs) - This motor parameter provides a relationship between input current and output torque. For each ampere of current applied to the rotor, a fixed amount of torque will result. Torque Control - A method of using current limit circuitry to regulate torque instead of speed. Transducer - A device that converts one energy form to another (e.g., mechanical to electrical). Also, a device that, when actuated by signals from one or more systems or media, can supply related signals to one or more other systems or media. Transient - A momentary deviation in an electrical or mechanical system. Transistor - A solid-state three-terminal device that allows amplification of signals and can be used for switching and control. The three terminals are called the emitter, base and collector.
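As a numerical companion to the Slip and Torque Constant entries above, here is a short Python sketch. The motor figures (pole count, speeds, torque constant, current) are assumed purely for illustration, and the synchronous-speed relation (120 x line frequency / poles) is the standard one for induction machines rather than something stated in this glossary.

    # Assumed figures: a 4-pole, 60 Hz induction motor and a small brushed DC motor.
    synchronous_speed = 120 * 60 / 4                     # rpm = 120 * line frequency / poles -> 1800 rpm
    running_speed = 1750.0                               # measured rotor speed, rpm
    slip_percent = (synchronous_speed - running_speed) * 100 / synchronous_speed  # about 2.78 %

    torque_constant = 6.0                                # assumed torque constant, in-lb per ampere
    armature_current = 2.5                               # A
    output_torque = torque_constant * armature_current   # 15.0 in-lb

    print(round(slip_percent, 2), output_torque)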
Totally Enclosed Enclosure - A motor enclosure, which prevents free exchange of air between the inside and the outside of the enclosure but is not airtight. Different methods of cooling can be used with this enclosure. Totally Enclosed Non-Ventilated (TENV) - No vent openings, tightly enclosed to prevent the free exchange of air, but not airtight. Has no external cooling fan and relies on convection for cooling. Suitable for use where exposed to dirt or dampness, but not for hazardous (explosive) locations. Totally Enclosed Fan Cooled (TEFC) - Same as the TENV except has external fan as an integral part of the motor, to provide cooling by blowing air around the outside frame of the motor. Underwriters Laboratories (UL) - Independent United States testing organization that sets safety standards for motors and other electrical equipment. Voltage - The force that causes a current to flow in an electrical circuit. The unit is the Volt. 1 Volt is defined as the difference of electric potential between two points on a conductor that is carrying a constant current of one ampere when the power dissipated between those points is one watt. Watt - The amount of power required to maintain a current of 1 ampere at a pressure of one volt when the two are in phase with each other. One horsepower is equal to 746 watts. Work - A force moving an object over a distance. Work = force x distance.
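To make two of the formula-based entries above concrete (Slip and RMS Torque), here is a small illustrative Python sketch; the motor speeds and duty-cycle numbers are invented for the example and are not from any particular machine.

def slip_percent(sync_rpm, running_rpm):
    # Slip = (Synchronous Speed - Running Speed) / Synchronous Speed x 100
    return (sync_rpm - running_rpm) / sync_rpm * 100.0

def rms_torque(segments):
    # segments: list of (torque, duration) pairs over one duty cycle.
    # The RMS torque produces the same motor heating as the actual
    # intermittent cycle over a long period of time.
    num = sum(t * t * dt for t, dt in segments)
    den = sum(dt for _, dt in segments)
    return (num / den) ** 0.5

print(slip_percent(1800, 1750))               # about 2.8% for a 4-pole 60 Hz motor
print(rms_torque([(12, 2), (4, 6), (0, 2)]))  # accelerate / run / rest cycle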
http://www.sdtdrivetechnology.co.uk/glossary-of-related-terms/
13
86
Visualization of the quicksort algorithm. The horizontal lines are pivot values.
Worst case performance: O(n^2)
Best case performance: O(n log n)
Average case performance: O(n log n)
Worst case space complexity: O(n) auxiliary (naive); O(log n) auxiliary (Sedgewick 1978)
Quicksort, or partition-exchange sort, is a sorting algorithm developed by Tony Hoare that, on average, makes O(n log n) comparisons to sort n items. In the worst case, it makes O(n^2) comparisons, though this behavior is rare. Quicksort is often faster in practice than other O(n log n) algorithms. Additionally, quicksort's sequential and localized memory references work well with a cache. Quicksort is a comparison sort and, in efficient implementations, is not a stable sort. Quicksort can be implemented with an in-place partitioning algorithm, so the entire sort can be done with only O(log n) additional space used by the stack during the recursion.
The quicksort algorithm was developed in 1960 by Tony Hoare while in the Soviet Union, as a visiting student at Moscow State University. At that time, Hoare worked on a machine translation project for the National Physical Laboratory. He developed the algorithm in order to sort the words to be translated, to make them more easily matched to an already-sorted Russian-to-English dictionary that was stored on magnetic tape.
Quicksort is a divide and conquer algorithm. Quicksort first divides a large list into two smaller sub-lists: the low elements and the high elements. Quicksort can then recursively sort the sub-lists. The steps are:
- Pick an element, called a pivot, from the list.
- Reorder the list so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation.
- Recursively apply the above steps to the sub-list of elements with smaller values and separately to the sub-list of elements with greater values.
The base cases of the recursion are lists of size zero or one, which never need to be sorted.
Simple version
In simple pseudocode, the algorithm might be expressed as this:
function quicksort('array')
    if length('array') ≤ 1
        return 'array'  // an array of zero or one elements is already sorted
    select and remove a pivot value 'pivot' from 'array'
    create empty lists 'less' and 'greater'
    for each 'x' in 'array'
        if 'x' ≤ 'pivot' then append 'x' to 'less'
        else append 'x' to 'greater'
    return concatenate(quicksort('less'), 'pivot', quicksort('greater'))  // two recursive calls
Notice that we only examine elements by comparing them to other elements. This makes quicksort a comparison sort. This version is also a stable sort (assuming that the "for each" method retrieves elements in original order, and the pivot selected is the last among those of equal value).
The correctness of the partition algorithm is based on the following two arguments:
- At each iteration, all the elements processed so far are in the desired position: before the pivot if less than the pivot's value, after the pivot if greater than the pivot's value (loop invariant).
- Each iteration leaves one fewer element to be processed (loop variant).
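The simple pseudocode above translates almost line for line into Python. The following is only an illustrative sketch, not the article's own code; like the pseudocode, it pays for its simplicity with O(n) extra lists per call.

def quicksort_simple(array):
    # An array of zero or one elements is already sorted.
    if len(array) <= 1:
        return array
    # Take the last element as the pivot, mirroring the note above that
    # picking the last among equal values keeps this version stable.
    pivot = array[-1]
    rest = array[:-1]
    less = [x for x in rest if x <= pivot]
    greater = [x for x in rest if x > pivot]
    # Two recursive calls, concatenated around the pivot.
    return quicksort_simple(less) + [pivot] + quicksort_simple(greater)

print(quicksort_simple([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]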
The correctness of the overall algorithm can be proven via induction: for zero or one element, the algorithm leaves the data unchanged; for a larger data set it produces the concatenation of two parts, elements less than the pivot and elements greater than it, themselves sorted by the recursive hypothesis.
In-place version
The disadvantage of the simple version above is that it requires O(n) extra storage space, which is as bad as merge sort. The additional memory allocations required can also drastically impact speed and cache performance in practical implementations. There is a more complex version which uses an in-place partition algorithm and can achieve the complete sort using O(log n) space (not counting the input) on average (for the call stack). We start with a partition function:
// left is the index of the leftmost element of the subarray
// right is the index of the rightmost element of the subarray (inclusive)
// number of elements in subarray = right-left+1
function partition(array, left, right, pivotIndex)
    pivotValue := array[pivotIndex]
    swap array[pivotIndex] and array[right]  // Move pivot to end
    storeIndex := left
    for i from left to right - 1  // left ≤ i < right
        if array[i] <= pivotValue
            swap array[i] and array[storeIndex]
            storeIndex := storeIndex + 1
    swap array[storeIndex] and array[right]  // Move pivot to its final place
    return storeIndex
This is the in-place partition algorithm. It partitions the portion of the array between indexes left and right, inclusively, by moving all elements less than array[pivotIndex] before the pivot, and the equal or greater elements after it. In the process it also finds the final position for the pivot element, which it returns. It temporarily moves the pivot element to the end of the subarray, so that it doesn't get in the way. Because it only uses exchanges, the final list has the same elements as the original list. Notice that an element may be exchanged multiple times before reaching its final place. Also, in case of pivot duplicates in the input array, they can be spread across the right subarray, in any order. This doesn't represent a partitioning failure, as further sorting will reposition and finally "glue" them together.
This form of the partition algorithm is not the original form; multiple variations can be found in various textbooks, such as versions not having the storeIndex. However, this form is probably the easiest to understand.
Once we have this, writing quicksort itself is easy:
function quicksort(array, left, right)
    // If the list has 2 or more items
    if left < right
        // See "Choice of pivot" section below for possible choices
        choose any pivotIndex such that left ≤ pivotIndex ≤ right
        // Get lists of bigger and smaller items and final position of pivot
        pivotNewIndex := partition(array, left, right, pivotIndex)
        // Recursively sort elements smaller than the pivot
        quicksort(array, left, pivotNewIndex - 1)
        // Recursively sort elements at least as big as the pivot
        quicksort(array, pivotNewIndex + 1, right)
Each recursive call to this quicksort function reduces the size of the array being sorted by at least one element, since in each invocation the element at pivotNewIndex is placed in its final position. Therefore, this algorithm is guaranteed to terminate after at most n recursive calls. However, since partition reorders elements within a partition, this version of quicksort is not a stable sort.
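A Python sketch of this in-place version, assuming the same Lomuto-style partition as the pseudocode above, might look like the following; the middle element is used as the pivot purely for illustration.

def partition(array, left, right, pivot_index):
    pivot_value = array[pivot_index]
    # Move pivot to end
    array[pivot_index], array[right] = array[right], array[pivot_index]
    store_index = left
    for i in range(left, right):               # left <= i < right
        if array[i] <= pivot_value:
            array[i], array[store_index] = array[store_index], array[i]
            store_index += 1
    # Move pivot to its final place
    array[store_index], array[right] = array[right], array[store_index]
    return store_index

def quicksort_inplace(array, left=0, right=None):
    if right is None:
        right = len(array) - 1
    if left < right:                           # the list has 2 or more items
        pivot_index = left + (right - left) // 2   # middle element as pivot
        new_index = partition(array, left, right, pivot_index)
        quicksort_inplace(array, left, new_index - 1)
        quicksort_inplace(array, new_index + 1, right)

data = [3, 1, 4, 1, 5, 9, 2, 6]
quicksort_inplace(data)
print(data)   # [1, 1, 2, 3, 4, 5, 6, 9]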
Implementation issues
Choice of pivot
In very early versions of quicksort, the leftmost element of the partition would often be chosen as the pivot element. Unfortunately, this causes worst-case behavior on already sorted arrays, which is a rather common use-case. The problem was easily solved by choosing either a random index for the pivot, choosing the middle index of the partition, or (especially for longer partitions) choosing the median of the first, middle and last element of the partition for the pivot (as recommended by R. Sedgewick).
Selecting a pivot element is also complicated by the existence of integer overflow. If the boundary indices of the subarray being sorted are sufficiently large, the naïve expression for the middle index, (left + right)/2, will cause overflow and provide an invalid pivot index. This can be overcome by using, for example, left + (right-left)/2 to index the middle element, at the cost of more complex arithmetic. Similar issues arise in some other methods of selecting the pivot element.
- To make sure at most O(log N) space is used, recurse first into the smaller half of the array, and use a tail call to recurse into the other.
- Use insertion sort, which has a smaller constant factor and is thus faster on small arrays, for invocations on such small arrays (i.e. where the length is less than a threshold t determined experimentally). This can be implemented by leaving such arrays unsorted and running a single insertion sort pass at the end, because insertion sort handles nearly sorted arrays efficiently. A separate insertion sort of each small segment as they are identified adds the overhead of starting and stopping many small sorts, but avoids wasting effort comparing keys across the many segment boundaries, which keys will be in order due to the workings of the quicksort process. It also improves the cache use.
Like merge sort, quicksort can also be parallelized due to its divide-and-conquer nature. Individual in-place partition operations are difficult to parallelize, but once divided, different sections of the list can be sorted in parallel. The following is a straightforward approach: if we have p processors, we can divide a list of n elements into p sublists in O(n) average time, then sort each of these in O((n/p) log(n/p)) average time. Ignoring the O(n) preprocessing and merge times, this is linear speedup. If the split is blind, ignoring the values, the merge naïvely costs O(n). If the split partitions based on a succession of pivots, it is tricky to parallelize and naïvely costs O(n). Given O(log n) or more processors, only O(n) time is required overall, whereas an approach with linear speedup would achieve O(log n) time for n processors. One advantage of this simple parallel quicksort over other parallel sort algorithms is that no synchronization is required, but the disadvantage is that sorting is still O(n) and only a sublinear speedup of O(log n) is achieved. A new thread is started as soon as a sublist is available for it to work on and it does not communicate with other threads. When all threads complete, the sort is done.
Other more sophisticated parallel sorting algorithms can achieve even better time bounds. For example, in 1991 David Powers described a parallelized quicksort (and a related radix sort) that can operate in O(log n) time on a CRCW PRAM with n processors by performing partitioning implicitly.
Formal analysis
Average-case analysis using discrete probability
Quicksort takes O(n log n) time on average, when the input is a random permutation. Why?
For a start, it is not hard to see that the partition operation takes O(n) time. In the most unbalanced case, each time we perform a partition we divide the list into two sublists of size 0 and n-1 (for example, if all elements of the array are equal). This means each recursive call processes a list of size one less than the previous list. Consequently, we can make n-1 nested calls before we reach a list of size 1. This means that the call tree is a linear chain of n-1 nested calls. The i-th call does O(n-i) work to do the partition, and n + (n-1) + ... + 1 = O(n^2), so in that case Quicksort takes O(n^2) time. That is the worst case: given knowledge of which comparisons are performed by the sort, there are adaptive algorithms that are effective at generating worst-case input for quicksort on-the-fly, regardless of the pivot selection strategy.
In the most balanced case, each time we perform a partition we divide the list into two nearly equal pieces. This means each recursive call processes a list of half the size. Consequently, we can make only about log2(n) nested calls before we reach a list of size 1. This means that the depth of the call tree is O(log n). But no two calls at the same level of the call tree process the same part of the original list; thus, each level of calls needs only O(n) time all together (each call has some constant overhead, but since there are only O(n) calls at each level, this is subsumed in the O(n) factor). The result is that the algorithm uses only O(n log n) time.
In fact, it's not necessary to be perfectly balanced; even if each pivot splits the elements with 75% on one side and 25% on the other side (or any other fixed fraction), the call depth is still limited to O(log n), so the total running time is still O(n log n).
So what happens on average? If the pivot has rank somewhere in the middle 50 percent, that is, between the 25th percentile and the 75th percentile, then it splits the elements with at least 25% and at most 75% on each side. If we could consistently choose a pivot from the middle 50 percent, we would only have to split the list O(log n) times before reaching lists of size 1, yielding an O(n log n) algorithm.
When the input is a random permutation, the pivot has a random rank, and so it is not guaranteed to be in the middle 50 percent. However, when we start from a random permutation, in each recursive call the pivot has a random rank in its list, and so it is in the middle 50 percent about half the time. That is good enough. Imagine that you flip a coin: heads means that the rank of the pivot is in the middle 50 percent, tails means that it isn't. Imagine that you are flipping a coin over and over until you get k heads. Although this could take a long time, on average only 2k flips are required, and the chance that you won't get k heads after a much larger number of flips is highly improbable (this can be made rigorous using Chernoff bounds). By the same argument, Quicksort's recursion will terminate on average at a call depth of only O(log n). But if its average call depth is O(log n), and each level of the call tree processes at most n elements, the total amount of work done on average is the product, O(n log n). Note that the algorithm does not have to verify that the pivot is in the middle half; if we hit it any constant fraction of the times, that is enough for the desired complexity.
Average-case analysis using recurrences
An alternative approach is to set up a recurrence relation for T(n), the time needed to sort a list of size n.
In the most unbalanced case, a single Quicksort call involves O(n) work plus two recursive calls on lists of size 0 and n-1, so the recurrence relation is
T(n) = O(n) + T(0) + T(n-1) = O(n) + T(n-1),
which solves to T(n) = O(n^2). In the most balanced case, a single quicksort call involves O(n) work plus two recursive calls on lists of size n/2, so the recurrence relation is
T(n) = O(n) + 2T(n/2).
The master theorem tells us that T(n) = O(n log n).
The outline of a formal proof of the O(n log n) expected time complexity follows. Assume that there are no duplicates, as duplicates could be handled with linear time pre- and post-processing, or treated as cases easier than the one analyzed. When the input is a random permutation, the rank of the pivot is uniform random from 0 to n-1. Then the resulting parts of the partition have sizes i and n-i-1, and i is uniform random from 0 to n-1. So, averaging over all possible splits and noting that the number of comparisons for the partition is n-1, the average number of comparisons over all permutations of the input sequence can be estimated accurately by solving the recurrence relation:
C(n) = n - 1 + (1/n) Σ [C(i) + C(n-i-1)], summing i from 0 to n-1.
Solving the recurrence gives C(n) = 2n ln n, which is about 1.39 n log2 n. This means that, on average, quicksort performs only about 39% worse than in its best case. In this sense it is closer to the best case than the worst case. Also note that a comparison sort cannot use less than log2(n!) comparisons on average to sort n items (as explained in the article Comparison sort) and in case of large n, Stirling's approximation yields log2(n!) ≈ n(log2 n - log2 e), so quicksort is not much worse than an ideal comparison sort. This fast average runtime is another reason for quicksort's practical dominance over other sorting algorithms.
Analysis of Randomized quicksort
Using the same analysis, one can show that Randomized quicksort has the desirable property that, for any input, it requires only O(n log n) expected time (averaged over all choices of pivots). However, there exists a combinatorial proof, more elegant than both the analysis using discrete probability and the analysis using recurrences.
To each execution of Quicksort corresponds the following binary search tree (BST): the initial pivot is the root node; the pivot of the left half is the root of the left subtree, the pivot of the right half is the root of the right subtree, and so on. The number of comparisons of the execution of Quicksort equals the number of comparisons during the construction of the BST by a sequence of insertions. So, the average number of comparisons for randomized Quicksort equals the average cost of constructing a BST when the values inserted form a random permutation.
Consider a BST created by insertion of a sequence x1, x2, ..., xn of values forming a random permutation. Let C denote the cost of creation of the BST. We have C = Σ over i, j<i of c(i, j), where c(i, j) is an indicator variable (whether during the insertion of xi there was a comparison to xj). By linearity of expectation, the expected value E(C) of C is the Σ over i, j<i of Pr(during the insertion of xi there was a comparison to xj).
Fix i and j<i. The values x1, ..., xj, once sorted, define j+1 intervals. The core structural observation is that xi is compared to xj in the algorithm if and only if xi falls inside one of the two intervals adjacent to xj. Observe that since x1, ..., xn is a random permutation, x1, ..., xj, xi is also a random permutation, so the probability that xi is adjacent to xj is exactly 2/(j+1). We end with a short calculation:
E(C) = Σ over i, j<i of 2/(j+1) = O(Σ over i of log i) = O(n log n).
Space complexity
The space used by quicksort depends on the version used. The in-place version of quicksort has a space complexity of O(log n), even in the worst case, when it is carefully implemented using the following strategies:
- in-place partitioning is used. This unstable partition requires O(1) space.
- After partitioning, the partition with the fewest elements is (recursively) sorted first, requiring at most O(log n) space. Then the other partition is sorted using tail recursion or iteration, which doesn't add to the call stack. This idea, as discussed above, was described by R. Sedgewick, and keeps the stack depth bounded by O(log n).
Quicksort with in-place and unstable partitioning uses only constant additional space before making any recursive call. Quicksort must store a constant amount of information for each nested recursive call. Since the best case makes at most O(log n) nested recursive calls, it uses O(log n) space. However, without Sedgewick's trick to limit the recursive calls, in the worst case quicksort could make O(n) nested recursive calls and need O(n) auxiliary space.
From a bit complexity viewpoint, variables such as left and right do not use constant space; it takes O(log n) bits to index into a list of n items. Because there are such variables in every stack frame, quicksort using Sedgewick's trick requires O((log n)^2) bits of space. This space requirement isn't too terrible, though, since if the list contained n distinct elements, it would need at least O(n log n) bits of space.
Another, less common, not-in-place, version of quicksort uses O(n) space for working storage and can implement a stable sort. The working storage allows the input array to be easily partitioned in a stable manner and then copied back to the input array for successive recursive calls. Sedgewick's optimization is still appropriate.
Selection-based pivoting
A selection algorithm chooses the kth smallest of a list of numbers; this is an easier problem in general than sorting. One simple but effective selection algorithm works nearly in the same manner as quicksort, except instead of making recursive calls on both sublists, it only makes a single tail-recursive call on the sublist which contains the desired element. This small change lowers the average complexity to linear or O(n) time, and makes it an in-place algorithm. A variation on this algorithm brings the worst-case time down to O(n) (see selection algorithm for more information). Conversely, once we know a worst-case O(n) selection algorithm is available, we can use it to find the ideal pivot (the median) at every step of quicksort, producing a variant with worst-case O(n log n) running time. In practical implementations, however, this variant is considerably slower on average.
There are four well known variants of quicksort:
- Balanced quicksort: choose a pivot likely to represent the middle of the values to be sorted, and then follow the regular quicksort algorithm.
- External quicksort: The same as regular quicksort except the pivot is replaced by a buffer. First, read the M/2 first and last elements into the buffer and sort them. Read the next element from the beginning or end to balance writing. If the next element is less than the least of the buffer, write it to available space at the beginning. If greater than the greatest, write it to the end. Otherwise write the greatest or least of the buffer, and put the next element in the buffer. Keep the maximum lower and minimum upper keys written to avoid resorting middle elements that are in order. When done, write the buffer. Recursively sort the smaller partition, and loop to sort the remaining partition. This is a kind of three-way quicksort in which the middle partition (buffer) represents a sorted subarray of elements that are approximately equal to the pivot.
- Three-way radix quicksort (developed by Sedgewick and also known as multikey quicksort): is a combination of radix sort and quicksort. Pick an element from the array (the pivot) and consider the first character (key) of the string (multikey). Partition the remaining elements into three sets: those whose corresponding character is less than, equal to, and greater than the pivot's character. Recursively sort the "less than" and "greater than" partitions on the same character. Recursively sort the "equal to" partition by the next character (key). Given we sort using bytes or words of length W bits, the best case is O(KN) and the worst case O(2KN) or at least O(N2) as for standard quicksort, given for unique keys N<2K, and K is a hidden constant in all standard comparison sort algorithms including quicksort. This is a kind of three-way quicksort in which the middle partition represents a (trivially) sorted subarray of elements that are exactly equal to the pivot. - Quick radix sort (also developed by Powers as a o(K) parallel PRAM algorithm). This is again a combination of radix sort and quicksort but the quicksort left/right partition decision is made on successive bits of the key, and is thus O(KN) for N K-bit keys. Note that all comparison sort algorithms effectively assume an ideal K of O(logN) as if k is smaller we can sort in O(N) using a hash table or integer sorting, and if K >> logN but elements are unique within O(logN) bits, the remaining bits will not be looked at by either quicksort or quick radix sort, and otherwise all comparison sorting algorithms will also have the same overhead of looking through O(K) relatively useless bits but quick radix sort will avoid the worst case O(N2) behaviours of standard quicksort and quick radix sort, and will be faster even in the best case of those comparison algorithms under these conditions of uniqueprefix(K) >> logN. See Powers for further discussion of the hidden overheads in comparison, radix and parallel sorting. Comparison with other sorting algorithms Quicksort is a space-optimized version of the binary tree sort. Instead of inserting items sequentially into an explicit tree, quicksort organizes them concurrently into a tree that is implied by the recursive calls. The algorithms make exactly the same comparisons, but in a different order. An often desirable property of a sorting algorithm is stability - that is the order of elements that compare equal is not changed, allowing controlling order of multikey tables (e.g. directory or folder listings) in a natural way. This property is hard to maintain for in situ (or in place) quicksort (that uses only constant additional space for pointers and buffers, and logN additional space for the management of explicit or implicit recursion). For variant quicksorts involving extra memory due to representations using pointers (e.g. lists or trees) or files (effectively lists), it is trivial to maintain stability. The more complex, or disk-bound, data structures tend to increase time cost, in general making increasing use of virtual memory or disk. The most direct competitor of quicksort is heapsort. Heapsort's worst-case running time is always O(n log n). But, heapsort is assumed to be on average somewhat slower than standard in-place quicksort. This is still debated and in research, with some publications indicating the opposite. Introsort is a variant of quicksort that switches to heapsort when a bad case is detected to avoid quicksort's worst-case running time. 
If it is known in advance that heapsort is going to be necessary, using it directly will be faster than waiting for introsort to switch to it. Quicksort also competes with mergesort, another recursive sort algorithm but with the benefit of worst-case O(n log n) running time. Mergesort is a stable sort, unlike standard in-place quicksort and heapsort, and can be easily adapted to operate on linked lists and very large lists stored on slow-to-access media such as disk storage or network attached storage. Like mergesort, quicksort can be implemented as an in-place stable sort, but this is seldom done. Although quicksort can be written to operate on linked lists, it will often suffer from poor pivot choices without random access. The main disadvantage of mergesort is that, when operating on arrays, efficient implementations require O(n) auxiliary space, whereas the variant of quicksort with in-place partitioning and tail recursion uses only O(log n) space. (Note that when operating on linked lists, mergesort only requires a small, constant amount of auxiliary storage.) Bucket sort with two buckets is very similar to quicksort; the pivot in this case is effectively the value in the middle of the value range, which does well on average for uniformly distributed inputs. See also - Steven S. Skiena (27 April 2011). The Algorithm Design Manual. Springer. p. 129. ISBN 978-1-84800-069-8. Retrieved 27 November 2012. - "Data structures and algorithm: Quicksort". Auckland University. - Shustek, L. (2009). "Interview: An interview with C.A.R. Hoare". Comm. ACM 52 (3): 38–41. doi:10.1145/1467247.1467261. More than one of - Sedgewick, Robert (1 September 1998). Algorithms In C: Fundamentals, Data Structures, Sorting, Searching, Parts 1-4 (3 ed.). Pearson Education. ISBN 978-81-317-1291-7. Retrieved 27 November 2012. - Sedgewick, R. (1978). "Implementing Quicksort programs". Comm. ACM 21 (10): 847–857. doi:10.1145/359619.359631. - qsort.c in GNU libc: , - Miller, Russ; Boxer, Laurence (2000). Algorithms sequential & parallel: a unified approach. Prentice Hall. ISBN 978-0-13-086373-7. Retrieved 27 November 2012. - David M. W. Powers, Parallelized Quicksort and Radixsort with Optimal Speedup, Proceedings of International Conference on Parallel Computing Technologies. Novosibirsk. 1991. - McIlroy, M. D. (1999). "A killer adversary for quicksort". Software: Practice and Experience 29 (4): 341–237. doi:10.1002/(SICI)1097-024X(19990410)29:4<341::AID-SPE237>3.3.CO;2-I. - David M. W. Powers, Parallel Unification: Practical Complexity, Australasian Computer Architecture Workshop, Flinders University, January 1995 - Hsieh, Paul (2004). "Sorting revisited.". www.azillionmonkeys.com. Retrieved 26 April 2010. - MacKay, David (1 December 2005). "Heapsort, Quicksort, and Entropy". users.aims.ac.za/~mackay. Retrieved 26 April 2010. - A Java implementation of in-place stable quicksort - Sedgewick, R. (1978). "Implementing Quicksort programs". Comm. ACM 21 (10): 847–857. doi:10.1145/359619.359631. - Dean, B. C. (2006). "A simple expected running time analysis for randomized "divide and conquer" algorithms". Discrete Applied Mathematics 154: 1–5. doi:10.1016/j.dam.2005.07.005. - Hoare, C. A. R. (1961). "Algorithm 63: Partition". Comm. ACM 4 (7): 321. doi:10.1145/366622.366642. - Hoare, C. A. R. (1961). "Algorithm 64: Quicksort". Comm. ACM 4 (7): 321. doi:10.1145/366622.366644. - Hoare, C. A. R. (1961). "Algorithm 65: Find". Comm. ACM 4 (7): 321–322. doi:10.1145/366622.366647. - Hoare, C. A. R. (1962). "Quicksort". Comput. J. 
5 (1): 10–16. doi:10.1093/comjnl/5.1.10. (Reprinted in Hoare and Jones: Essays in computing science, 1989.) - Musser, D. R. (1997). "Introspective Sorting and Selection Algorithms". Software: Practice and Experience 27 (8): 983–993. doi:10.1002/(SICI)1097-024X(199708)27:8<983::AID-SPE117>3.0.CO;2-#. - Donald Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89685-0. Pages 113–122 of section 5.2.2: Sorting by Exchanging. - Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Chapter 7: Quicksort, pp. 145–164. - A. LaMarca and R. E. Ladner. "The Influence of Caches on the Performance of Sorting." Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, 1997. pp. 370–379. - Faron Moller. Analysis of Quicksort. CS 332: Designing Algorithms. Department of Computer Science, Swansea University. - Martínez, C.; Roura, S. (2001). "Optimal Sampling Strategies in Quicksort and Quickselect". SIAM J. Comput. 31 (3): 683–705. doi:10.1137/S0097539700382108. - Bentley, J. L.; McIlroy, M. D. (1993). "Engineering a sort function". Software: Practice and Experience 23 (11): 1249–1265. doi:10.1002/spe.4380231105. |The Wikibook Algorithm implementation has a page on the topic of: Quicksort| - Animated Sorting Algorithms: Quick Sort – graphical demonstration and discussion of quick sort - Animated Sorting Algorithms: 3-Way Partition Quick Sort – graphical demonstration and discussion of 3-way partition quick sort - Interactive Tutorial for Quicksort - Quicksort applet with "level-order" recursive calls to help improve algorithm analysis - Open Data Structures - Section 11.1.2 - Quicksort - Multidimensional quicksort in Java - Literate implementations of Quicksort in various languages on LiteratePrograms - A colored graphical Java applet which allows experimentation with initial state and shows statistics
http://en.wikipedia.org/wiki/Quicksort
13
71
A magnetic field will exert a force on a single moving charge, so it follows that it will also exert a force on a current, which is a collection of moving charges. The force experienced by a wire of length l carrying a current I in a magnetic field B is given by Again, the right-hand rule can be used to find the direction of the force. In this case, your thumb points in the direction of the current, your fingers point in the direction of B. Your palm gives the direction of F. Parallel wires carrying currents will exert forces on each other. One wire sets up a magnetic field that influences the other wire, and vice versa. When the current goes the same way in the two wires, the force is attractive. When the currents go opposite ways, the force is repulsive. You should be able to confirm this by looking at the magnetic field set up by one current at the location of the other wire, and by applying the right-hand rule. Here's the approach. In the picture above, both wires carry current in the same direction. To find the force on wire 1, look first at the magnetic field produced by the current in wire 2. Everywhere to the right of wire 2, the field due to that current is into the page. Everywhere to the left, the field is out of the page. Thus, wire 1 experiences a field that is out of the page. Now apply the right hand rule to get the direction of the force experienced by wire 1. The current is up (that's your fingers) and the field is out of the page (curl your fingers that way). Your thumb should point right, towards wire 2. The same process can be used to figure out the force on wire 2, which points toward wire 1. Reversing one of the currents reverses the direction of the forces. The magnitude of the force in this situation is given by F = IlB. To get the force on wire 1, the current is the current in wire 1. The field comes from the other wire, and is proportional to the current in wire 2. In other words, both currents come into play. Using the expression for the field from a long straight wire, the force is given by: Note that it is often the force per unit length, F / l, that is asked for rather than the force. A very useful effect is the torque exerted on a loop by a magnetic field, which tends to make the loop rotate. Many motors are based on this effect. The torque on a coil with N turns of area A carrying a current I is given by: The combination NIA is usually referred to as the magnetic moment of the coil. It is a vector normal (i.e., perpendicular) to the loop. If you curl your fingers in the direction of the current around the loop, your thumb will point in the direction of the magnetic moment. There are a number of good applications of the principle that a magnetic field exerts a force on a moving charge. One of these is the mass spectrometer : a mass spectrometer separates charged particles (usually ions) based on their mass. The mass spectrometer involves three steps. First the ions are accelerated to a particular velocity; then just those ions going a particular velocity are passed through to the third and final stage where the separation based on mass takes place. It's worth looking at all three stages because they all rely on principles we've learned in this course. In physics, we usually talk about charged particles (or ions) being accelerated through a potential difference of so many volts. 
What this means is that we're applying a voltage across a set of parallel plates, and then injecting the ions at negligible speed into the are between the plates near the plate that has the same sign charge as the ions. The ions will be repelled from that plate, attracted to the other one, and if we cut a hole in the second one they will emerge with a speed that depends on the voltage. The simplest way to figure out how fast the ions are going is to analyze it in terms of energy. When the ions enter the region between the plates, the ions have negligible kinetic energy, but plenty of potential energy. If the plates have a potential difference of V, the potential energy is simply U = qV. When the ions reach the other plate, all this energy has been converted into kinetic energy, so the speed can be calculated from: The ions emerge from the acceleration stage with a range of speeds. To distinguish between the ions based on their masses, they must enter the mass separation stage with identical velocities. This is done using a velocity selector, which is designed to allow ions of only a particular velocity to pass through undeflected. Slower ions will generally be deflected one way, while faster ions will deflect another way. The velocity selector uses both an electric field and a magnetic field, with the fields at right angles to each other, as well as to the velocity of the incoming charges. Let's say the ions are positively charged, and move from left to right across the page. An electric field pointing down the page will tend to deflect the ions down the page with a force of F = qE. Now, add a magnetic field pointing into the page. By the right hand rule, this gives a force of F = qvB which is directed up the page. Note that the magnetic force depends on the velocity, so there will be some particular velocity where the electric force qE and the magnetic force qvB are equal and opposite. Setting the forces equal, qE = qvB, and solving for this velocity gives v = E / B. So, a charge of velocity v = E / B will experience no net force, and will pass through the velocity selector undeflected. Any charge moving slower than this will have the magnetic force reduced, and will bend in the direction of the electric force. A charge moving faster will have a larger magnetic force, and will bend in the direction of the magnetic force. A velocity selector works just as well for negative charges, the only difference being that the forces are in the opposite direction to the way they are for positive charges. All these ions, with the same charge and velocity, enter the mass separation stage, which is simply a region with a uniform magnetic field at right angles to the velocity of the ions. Such a magnetic field causes the charges to follow circular paths of radius r = mv / qB. The only thing different for these particles is the mass, so the heavier ions travel in a circular path of larger radius than the lighter ones. The particles are collected after they have traveled half a circle in the mass separator. All the particles enter the mass separator at the same point, so if a particle of mass m1 follows a circular path of radius r1, and a second mass m2 follows a circular path of radius r2, after half a circle they will be separated by the difference between the diameters of the paths after half a circle. The separation is Another good application of the force exerted by moving charges is the Hall effect. 
The Hall effect is very interesting, because it is one of the few physics phenomena that tell us that current in wires is made up of negative charges. It is also a common way of measuring the strength of a magnetic field. Start by picturing a wire of square cross-section, carrying a current out of the page. We want to figure out whether the charges flowing in that wire are positive, and out of the page, or negative, flowing in to the page. There is a uniform magnetic field pointing down the page. First assume that the current is made up of positive charges flowing out of the page. With a magnetic field down the page, the right-hand rule indicates that these positive charges experience a force to the right. This will deflect the charges to the right, piling up positive charge on the right and leaving a deficit of positive charge (i.e., a net negative charge) on the left. This looks like a set of charged parallel plates, so an electric field pointing from right to left is set up inside the wire by these charges. The field builds up until the force experienced by the charges in this electric field is equal and opposite to the force applied on the charges by the magnetic field. With an electric field, there is a potential difference across the wire that can be measured with a voltmeter. This is known as the Hall voltage, and in the case of the positive charges, the sign on the Hall voltage would indicate that the right side of the wire is positive. Now, what if the charges flowing through the wire are really negative, flowing into the page? Applying the right-hand rule indicates a magnetic force pointing right. This tends to pile up negative charges on the right, resulting in a deficit of negative charge (i.e., a net positive charge) on the left. As above, an electric field is the result, but this time it points from left to right. Measuring the Hall voltage this time would indicate that the left side of the wire is negative. So, the potential difference set up across the wire is of one sign for negative charges, and the other sign for positive charges, allowing us to distinguish between the two, and to tell that when charges flow in wires, they are negative. Note that the electric field, and the Hall voltage, increases as the magnetic field increases, which is why the Hall effect can be used to measure magnetic fields. Back to the course note home page
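As a quick numerical illustration of the velocity-selector and mass-separation relations discussed above (v = E / B and r = mv / qB), here is a short Python sketch; the field values and ion masses are invented for the example and are not taken from the notes.

# Hypothetical numbers, chosen only to illustrate v = E/B and r = m*v/(q*B).
q = 1.6e-19          # charge of a singly ionized atom, in coulombs
E = 2.0e5            # electric field in the velocity selector, V/m
B = 0.50             # magnetic field, T (same field assumed in the separator)

v = E / B            # only ions at this speed pass through undeflected
print("selected speed:", v, "m/s")         # 4.0e5 m/s

# Two isotopes with slightly different masses (kg)
m1 = 3.32e-26        # roughly a 20 u ion
m2 = 3.65e-26        # roughly a 22 u ion

r1 = m1 * v / (q * B)
r2 = m2 * v / (q * B)

# After half a circle each ion lands a distance 2*r from the entry point,
# so the isotopes are separated by the difference between the diameters.
print("separation:", 2 * (r2 - r1), "m")   # about 3.3 cm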
http://physics.bu.edu/~duffy/PY106/MagForce.html
13
51
There are three basic ways in which heat is transferred. In fluids, heat is often transferred by convection, in which the motion of the fluid itself carries heat from one place to another. Another way to transfer heat is by conduction, which does not involve any motion of a substance, but rather is a transfer of energy within a substance (or between substances in contact). The third way to transfer energy is by radiation, which involves absorbing or giving off electromagnetic waves. Heat transfer in fluids generally takes place via convection. Convection currents are set up in the fluid because the hotter part of the fluid is not as dense as the cooler part, so there is an upward buoyant force on the hotter fluid, making it rise while the cooler, denser, fluid sinks. Birds and gliders make use of upward convection currents to rise, and we also rely on convection to remove ground-level pollution. Forced convection, where the fluid does not flow of its own accord but is pushed, is often used for heating (e.g., forced-air furnaces) or cooling (e.g., fans, automobile cooling systems). When heat is transferred via conduction, the substance itself does not flow; rather, heat is transferred internally, by vibrations of atoms and molecules. Electrons can also carry heat, which is the reason metals are generally very good conductors of heat. Metals have many free electrons, which move around randomly; these can transfer heat from one part of the metal to another. The equation governing heat conduction along something of length (or thickness) L and cross-sectional area A, in a time t is: k is the thermal conductivity, a constant depending only on the material, and having units of J / (s m °C). Copper, a good thermal conductor, which is why some pots and pans have copper bases, has a thermal conductivity of 390 J / (s m °C). Styrofoam, on the other hand, a good insulator, has a thermal conductivity of 0.01 J / (s m °C). Consider what happens when a layer of ice builds up in a freezer. When this happens, the freezer is much less efficient at keeping food frozen. Under normal operation, a freezer keeps food frozen by transferring heat through the aluminum walls of the freezer. The inside of the freezer is kept at -10 °C; this temperature is maintained by having the other side of the aluminum at a temperature of -25 °C. The aluminum is 1.5 mm thick, and the thermal conductivity of aluminum is 240 J / (s m °C). With a temperature difference of 15°, the amount of heat conducted through the aluminum per second per square meter can be calculated from the conductivity equation: This is quite a large heat-transfer rate. What happens if 5 mm of ice builds up inside the freezer, however? Now the heat must be transferred from the freezer, at -10 °C, through 5 mm of ice, then through 1.5 mm of aluminum, to the outside of the aluminum at -25 °C. The rate of heat transfer must be the same through the ice and the aluminum; this allows the temperature at the ice-aluminum interface to be calculated. Setting the heat-transfer rates equal gives: The thermal conductivity of ice is 2.2 J / (s m °C). Solving for T gives: Now, instead of heat being transferred through the aluminum with a temperature difference of 15°, the difference is only 0.041°. This gives a heat transfer rate of: With a layer of ice covering the walls, the rate of heat transfer is reduced by a factor of more than 300! It's no wonder the freezer has to work much harder to keep the food cold. 
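The freezer example above can be checked numerically. The sketch below simply applies the standard conduction relation, heat per second = k A (T1 - T2) / L, with the figures quoted in the text (per square metre of wall), and reproduces the 0.041° interface difference and the factor-of-300 reduction.

# Heat conducted per second: Q/t = k * A * dT / L
def heat_rate(k, area, dT, thickness):
    return k * area * dT / thickness

k_al, L_al = 240.0, 0.0015      # aluminum: J/(s m C), 1.5 mm thick
k_ice, L_ice = 2.2, 0.005       # ice: J/(s m C), 5 mm thick
A = 1.0                          # per square metre of freezer wall

# Bare aluminum wall, -10 C inside to -25 C outside:
print(heat_rate(k_al, A, 15.0, L_al))            # 2.4e6 J/s per m^2

# With 5 mm of ice, the same rate must flow through ice and aluminum in
# series; solve k_ice*(-10 - T)/L_ice = k_al*(T + 25)/L_al for the
# temperature T at the ice-aluminum interface:
c_ice, c_al = k_ice / L_ice, k_al / L_al
T = (c_ice * (-10.0) - c_al * 25.0) / (c_ice + c_al)
print(T)                                          # about -24.96 C

rate_iced = heat_rate(k_al, A, T + 25.0, L_al)
print(rate_iced)                                  # about 6.6e3 J/s per m^2
print(heat_rate(k_al, A, 15.0, L_al) / rate_iced) # reduced by a factor > 300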
The third way to transfer heat, in addition to convection and conduction, is by radiation, in which energy is transferred in the form of electromagnetic waves. We'll talk about electromagnetic waves in a lot more detail in PY106; an electromagnetic wave is basically an oscillating electric and magnetic field traveling through space at the speed of light. Don't worry if that definition goes over your head, because you're already familiar with many kinds of electromagnetic waves, such as radio waves, microwaves, the light we see, X-rays, and ultraviolet rays. The only difference between the different kinds is the frequency and wavelength of the wave. Note that the radiation we're talking about here, in regard to heat transfer, is not the same thing as the dangerous radiation associated with nuclear bombs, etc. That radiation comes in the form of very high energy electromagnetic waves, as well as nuclear particles. The radiation associated with heat transfer is entirely electromagnetic waves, with a relatively low (and therefore relatively safe) energy. Everything around us takes in energy from radiation, and gives it off in the form of radiation. When everything is at the same temperature, the amount of energy received is equal to the amount given off. Because there is no net change in energy, no temperature changes occur. When things are at different temperatures, however, the hotter objects give off more energy in the form of radiation than they take in; the reverse is true for the colder objects. The amount of energy an object radiates depends strongly on temperature. For an object with a temperature T (in Kelvin) and a surface area A, the energy radiated in a time t is given by the Stefan-Boltzmann law of radiation: The constant e is known as the emissivity, and it's a measure of the fraction of incident radiation energy is absorbed and radiated by the object. This depends to a large extent on how shiny it is. If an object reflects a lot of energy, it will absorb (and radiate) very little; if it reflects very little energy, it will absorb and radiate quite efficiently. Black objects, for example, generally absorb radiation very well, and would have emissivities close to 1. This is the largest possible value for the emissivity, and an object with e = 1 is called a perfect blackbody, Note that the emissivity of an object depends on the wavelength of radiation. A shiny object may reflect a great deal of visible light, but it may be a good absorber(and therefore emitter) of radiation of a different wavelength, such as ultraviolet or infrared light. Note that the emissivity of an object is a measure of not just how well it absorbs radiation, but also of how well it radiates the energy. This means a black object that absorbs most of the radiation it is exposed to will also radiate energy away at a higher rate than a shiny object with a low emissivity. The Stefan-Boltzmann law tells you how much energy is radiated from an object at temperature T. It can also be used to calculate how much energy is absorbed by an object in an environment where everything around it is at a particular temperature : The net energy change is simply the difference between the radiated energy and the absorbed energy. This can be expressed as a power by dividing the energy by the time. The net power output of an object of temperature T is thus: We've looked at the three types of heat transfer. 
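To put numbers on the radiation discussion, here is a small sketch using the standard Stefan-Boltzmann form for net radiated power, e σ A (T^4 - Ts^4); the object, emissivity, and temperatures below are made up purely for illustration.

SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_radiated_power(e, area, T_obj, T_surroundings):
    # Positive result: the object loses more energy by radiation than it absorbs.
    return e * SIGMA * area * (T_obj**4 - T_surroundings**4)

# Hypothetical example: a person (skin area ~1.5 m^2, emissivity ~0.97,
# skin temperature ~33 C = 306 K) in a 20 C = 293 K room.
print(net_radiated_power(0.97, 1.5, 306.0, 293.0))   # roughly 115 W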
Conduction and convection rely on temperature differences; radiation does, too, but with radiation the absolute temperature is important. In some cases one method of heat transfer may dominate over the other two, but often heat transfer occurs via two, or even all three, processes simultaneously. A stove and oven are perfect examples of the different kinds of heat transfer. If you boil water in a pot on the stove, heat is conducted from the hot burner through the base of the pot to the water. Heat can also be conducted along the handle of the pot, which is why you need to be careful picking the pot up, and why most pots don't have metal handles. In the water in the pot, convection currents are set up, helping to heat the water uniformly. If you cook something in the oven, on the other hand, heat is transferred from the glowing elements in the oven to the food via radiation. Thermodynamics is the study of systems involving energy in the form of heat and work. A good example of a thermodynamic system is gas confined by a piston in a cylinder. If the gas is heated, it will expand, doing work on the piston; this is one example of how a thermodynamic system can do work. Thermal equilibrium is an important concept in thermodynamics. When two systems are in thermal equilibrium, there is no net heat transfer between them. This occurs when the systems are at the same temperature. In other words, systems at the same temperature will be in thermal equilibrium with each other. The first law of thermodynamics relates changes in internal energy to heat added to a system and the work done by a system. The first law is simply a conservation of energy equation: The internal energy has the symbol U. Q is positive if heat is added to the system, and negative if heat is removed; W is positive if work is done by the system, and negative if work is done on the system. We've talked about how heat can be transferred, so you probably have a good idea about what Q means in the first law. What does it mean for the system to do work? Work is simply a force multiplied by the distance moved in the direction of the force. A good example of a thermodynamic system that can do work is the gas confined by a piston in a cylinder, as shown in the diagram. If the gas is heated, it will expand and push the piston up, thereby doing work on the piston. If the piston is pushed down, on the other hand, the piston does work on the gas and the gas does negative work on the piston. This is an example of how work is done by a thermodynamic system. An example with numbers might make this clearer. Consider a gas in a cylinder at room temperature (T = 293 K), with a volume of 0.065 m3. The gas is confined by a piston with a weight of 100 N and an area of 0.65 m2. The pressure above the piston is atmospheric pressure. (a) What is the pressure of the gas? This can be determined from a free-body diagram of the piston. The weight of the piston acts down, and the atmosphere exerts a downward force as well, coming from force = pressure x area. These two forces are balanced by the upward force coming from the gas pressure. The piston is in equilibrium, so the forces balance. Therefore: Solving for the pressure of the gas gives: The pressure in the gas isn't much bigger than atmospheric pressure, just enough to support the weight of the piston. (b) The gas is heated, expanding it and moving the piston up. If the volume occupied by the gas doubles, how much work has the gas done? An assumption to make here is that the pressure is constant. 
Once the gas has expanded, the pressure will certainly be the same as before because the same free-body diagram applies. As long as the expansion takes place slowly, it is reasonable to assume that the pressure is constant. If the volume has doubled, then, and the pressure has remained the same, the ideal gas law tells us that the temperature must have doubled too. The work done by the gas can be determined by working out the force applied by the gas and calculating the distance. However, the force applied by the gas is the pressure times the area, so: W = F s = P A s and the area multiplied by the distance is a volume, specifically the change in volume of the gas. So, at constant pressure, work is just the pressure multiplied by the change in volume: This is positive because the force and the distance moved are in the same direction, so this is work done by the gas. As has been discussed, a gas enclosed by a piston in a cylinder can do work on the piston, the work being the pressure multiplied by the change in volume. If the volume doesn't change, no work is done. If the pressure stays constant while the volume changes, the work done is easy to calculate. On the other hand, if pressure and volume are both changing it's somewhat harder to calculate the work done. As an aid in calculating the work done, it's a good idea to draw a pressure-volume graph (with pressure on the y axis and volume on the x-axis). If a system moves from one point on the graph to another and a line is drawn to connect the points, the work done is the area underneath this line. We'll go through some different thermodynamic processes and see how this works. There are a number of different thermodynamic processes that can change the pressure and/or the volume and/or the temperature of a system. To simplify matters, consider what happens when something is kept constant. The different processes are then categorized as follows : If the volume increases while the temperature is constant, the pressure must decrease, and if the volume decreases the pressure must increase. The isothermal and adiabatic processes should be examined in a little more detail. In an isothermal process, the temperature stays constant, so the pressure and volume are inversely proportional to one another. The P-V graph for an isothermal process looks like this: The work done by the system is still the area under the P-V curve, but because this is not a straight line the calculation is a little tricky, and really can only properly be done using calculus. The internal energy of an ideal gas is proportional to the temperature, so if the temperature is kept fixed the internal energy does not change. The first law, which deals with changes in the internal energy, thus becomes 0 = Q - W, so Q = W. If the system does work, the energy comes from heat flowing into the system from the reservoir; if work is done on the system, heat flows out of the system to the reservoir. In an adiabatic process, no heat is added or removed from a system. The first law of thermodynamics is thus reduced to saying that the change in the internal energy of a system undergoing an adiabatic change is equal to -W. Since the internal energy is directly proportional to temperature, the work becomes: An example of an adiabatic process is a gas expanding so quickly that no heat can be transferred. The expansion does work, and the temperature drops. 
This is exactly what happens with a carbon dioxide fire extinguisher, with the gas coming out at high pressure and cooling as it expands at atmospheric pressure. With liquids and solids that are changing temperature, the heat associated with a temperature change is given by the equation: A similar equation holds for an ideal gas, only instead of writing the equation in terms of the mass of the gas it is written in terms of the number of moles of gas, and use a capital C for the heat capacity, with units of J / (mol K): For an ideal gas, the heat capacity depends on what kind of thermodynamic process the gas is experiencing. Generally, two different heat capacities are stated for a gas, the heat capacity at constant pressure (Cp) and the heat capacity at constant volume (Cv). The value at constant pressure is larger than the value at constant volume because at constant pressure not all of the heat goes into changing the temperature; some goes into doing work. On the other hand, at constant volume no work is done, so all the heat goes into changing the temperature. In other words, it takes less heat to produce a given temperature change at constant volume than it does at constant pressure, so Cv < Cp. That's a qualitative statement about the two different heat capacities, but it's very easy to examine them quantitatively. The first law says: We also know that PV = nRT, and at constant pressure the work done is: Note that this applies for a monatomic ideal gas. For all gases, though, the following is true: Another important number is the ratio of the two specific heats, represented by the Greek letter gamma (g). For a monatomic ideal gas this ratio is: Back to the lecture schedule home page
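As a compact numerical companion to the piston example and the heat-capacity discussion above, the sketch below uses the standard relations W = P ΔV at constant pressure and, for a monatomic ideal gas, Cv = 3R/2 and Cp = Cv + R; the piston numbers echo the worked example in the text.

R = 8.314            # J/(mol K)
P_atm = 1.013e5      # atmospheric pressure, Pa

# Piston example: 100 N piston of area 0.65 m^2 confining 0.065 m^3 of gas.
A_piston, weight, V1 = 0.65, 100.0, 0.065
P_gas = P_atm + weight / A_piston        # gas pressure balances atmosphere + piston weight
print(P_gas)                              # only slightly above atmospheric pressure

# Heated at constant pressure until the volume doubles:
V2 = 2 * V1
work_by_gas = P_gas * (V2 - V1)           # W = P * dV at constant pressure
print(work_by_gas)                        # about 6.6e3 J of work done by the gas

# Monatomic ideal gas heat capacities:
Cv = 1.5 * R
Cp = Cv + R
print(Cp / Cv)                            # gamma = 5/3 for a monatomic ideal gas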
http://physics.bu.edu/~duffy/py105/notes/Heattransfer.html
13
124
A cache is a small amount of memory which operates more quickly than main memory. Data is moved from the main memory to the cache, so that it can be accessed faster. Modern chip designers put several caches on the same die as the processor; designers often allocate more die area to caches than to the CPU itself. Increasing chip performance is typically achieved by increasing the speed and efficiency of the chip cache. Cache memory performance is the most significant factor in achieving high processor performance.
Cache works by storing a small subset of the external memory contents, typically out of its original order. Data and instructions that are being used frequently, such as a data array or a small instruction loop, are stored in the cache and can be read quickly without having to access the main memory. Cache runs at the same speed as the rest of the processor, which is typically much faster than the speed at which the external RAM operates. This means that if data is in the cache, accessing it is faster than accessing memory. Cache helps to speed up processors because it works on the principle of locality.
In this chapter, we will discuss several possible cache arrangements, in increasing order of complexity:
- No cache, single-CPU, physical addressing
- Single cache, single-CPU, physical addressing
- Cache hierarchy: L1, L2, L3, etc.
- cache replacement policies: associativity, random replacement, LRU, etc.
- Split cache: I-cache and D-cache, on top of a unified cache hierarchy
- caching with multiple CPUs
- cache hardware that supports virtual memory addressing
- the TLB as a kind of cache
- how single-address-space virtual memory addressing interacts with cache hardware
- how per-process virtual memory addressing interacts with cache hardware
No cache
Most processors today, such as the processors inside standard keyboards and mice, don't have any cache. Many historically important computers, such as Cray supercomputers, don't have any cache. The vast majority of software neither knows nor cares about the specific details of the cache, or whether there is even a cache at all. Processors without a cache are usually limited in performance by the main memory access time. Without a cache, the processor fetches each instruction, one at a time, from main memory, and every LOAD or STORE goes to main memory before executing the next instruction. One way to improve performance is to substitute faster main memory. Alas, that usually has a financial limit: hardly anyone is willing to pay a penny a bit for a gigabyte of really fast main memory. Even if money is no object, eventually one reaches physical limits to main memory access time. Even with the fastest possible memory money can buy, the memory access time for a unified 1 gigabyte main memory is limited by the time it takes a signal to get from the CPU to the most distant part of the memory and back.
Single cache
Using exactly the same technology, it takes less time for a signal to traverse a small block of memory than a large block of memory. The performance of a processor with a cache is no longer limited by the main memory access time; it is usually limited by the (much faster) cache memory access time: if the cache access time of a processor could be decreased, the processor would have higher performance. However, cache memory is generally much easier to speed up than main memory: really fast memory is much more affordable when we only buy small amounts of it.
If it will improve the performance of a system significantly, lots of people are willing to pay a penny a bit for a kilobyte of really fast cache memory.
Principle of Locality
There are two types of locality, spatial and temporal. Modern computer programs are typically loop-based, and therefore we have two rules about locality:
- Spatial Locality - When a data item is accessed, it is likely that data items in sequential memory locations will also be accessed. Consider the traversal of an array, or the act of storing local variables on a stack. In these cases, when one data item is accessed, it is a good idea to load the surrounding memory area into the cache at the same time.
- Temporal Locality - When a data item is accessed, it is likely that the same data item will be accessed again. For instance, variables are typically read and written to in rapid succession. It is a good idea to keep recently used items in the cache, and not to overwrite data that has been recently used.
Hit or Miss
A cache hit occurs when the processor finds the data it is looking for in the cache. A cache miss occurs when the processor looks for data in the cache, but the data is not available. In the event of a miss, the cache controller unit must gather the data from main memory, which costs the processor more time.
Measurements of "the hit ratio" are typically performed on benchmark applications. The actual hit ratio varies widely from one application to another. In particular, video and audio streaming applications often have a hit ratio close to zero, because each bit of data in the stream is read once for the first time (a compulsory miss), used, and then never read or written again. Even worse, many cache algorithms (in particular, LRU) allow this streaming data to fill the cache, pushing out of the cache information that will be used again soon (cache pollution).
Cache performance
A processor with a cache first looks in the cache for data (or instructions). On a miss, the processor then fetches the data (or instructions) from main memory. On a miss, this process takes *longer* than an equivalent processor without a cache.
There are three ways a cache gives better net performance than a processor without a cache:
- A hit (read from the cache) is faster than the time it takes a processor without a cache to fetch from main memory. The trick is to design the cache so we get hits often enough that their increase in performance more than makes up for the loss in performance on the occasional miss. (This requires a cache that is faster than main memory.)
- Multiprocessor computers with a shared main memory often have a bottleneck accessing main memory. When a local cache succeeds in satisfying memory operations without going all the way to main memory, main memory bandwidth is freed up for the other processors, and the local processor doesn't need to wait for the other processors to finish their memory operations.
- Many systems are designed so the processor often reads multiple items from cache simultaneously -- either 3 separate caches for instruction, data, and TLB; or a multiported cache; or both -- which takes less time than reading the same items from main memory one at a time.
The last two ways improve overall performance even if the cache is no faster than main memory.
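To get a feel for these trade-offs, here is a small Python sketch (the access times and miss ratios are invented placeholders, not measurements of any real processor) that computes the average access time and the resulting speedup over a cache-less processor; it reproduces the rule of thumb that a cache only helps when hits are common:

# Illustrative only: made-up timings for a hypothetical processor.
def average_access_time(hit_time, miss_ratio, memory_time):
    # Every access pays the cache lookup; misses additionally pay main memory.
    return hit_time + miss_ratio * memory_time

memory_time = 100.0   # ns, hypothetical main-memory reference
hit_time = 2.0        # ns, hypothetical cache reference
for miss_ratio in (0.02, 0.10, 0.50, 1.00):
    t = average_access_time(hit_time, miss_ratio, memory_time)
    print(f"miss ratio {miss_ratio:4.2f}: {t:6.1f} ns per access, "
          f"{memory_time / t:4.2f}x speedup vs. no cache")

With a miss ratio of 1.00 (the streaming case described above), the "speedup" drops below 1: the cache lookup is pure overhead.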
A processor without a cache has a constant memory reference time T of
T = Tm + E
A processor with a cache has an average memory access time of
T = m × Tm + Th + E
where:
- m is the miss ratio
- Tm is the time to make a main memory reference
- Th is the time to make a cache reference on a hit
- E accounts for various secondary factors (memory refresh time, multiprocessor contention, etc.)
Flushing the Cache
When the processor needs data, it looks in the cache. If the data is not in the cache, it will then go to memory to find the data. Data from memory is moved to the cache and then used by the processor. Sometimes the entire cache contains useless or old data, and it needs to be flushed. Flushing occurs when the cache controller determines that the cache contains more potential misses than hits. Flushing the cache takes several processor cycles, so much research has gone into developing algorithms to keep the cache up to date.
Cache Hierarchy
Cache is typically divided between multiple levels. The most common levels are L1, L2, and L3. L1 is the smallest but the fastest. L3 is the largest but the slowest. Many chips do not have L3 cache. Some chips that do have an L3 cache actually have an external L3 module that exists on the motherboard between the microprocessor and the RAM.
Inclusive, exclusive, and other cache hierarchies
When there are several levels of cache, and a copy of the data in some location in main memory has been cached in the L1 cache, is there another copy of that data in the L2 cache?
- No. Some systems are designed to have strictly exclusive cache levels: any particular location in main memory is cached in at most one cache level.
- Yes. Other systems are designed to have strictly inclusive cache levels: whenever some location in main memory is cached in any one level, the same location is also cached in all higher levels. All the data in the L2 cache can also be found in L3 (and also in main memory). All the data in an L1 cache can also be found in L2 and L3 (and also in main memory).
- Maybe. In some systems, such as the Intel Pentium 4, some data in the L1 cache is also in the L2 cache, while other data in the L1 cache is not in the L2 cache. This kind of cache policy does not yet have a popular name.
Size of Cache
There are a number of factors that affect the size of cache on a chip:
- Moore's law provides an increasing number of transistors per chip. After around 1989, more transistors are available per chip than a designer can use to make a CPU. These extra transistors are easily converted to large caches.
- Processor components become smaller as transistors become smaller. This means there is more area on the die for additional cache.
- More cache means fewer delays in accessing data, and therefore better performance.
Because of these factors, chip caches tend to get larger and larger with each generation of chip.
Cache Tagging
Cache can contain non-sequential data items in no particular order. A block of memory in the cache might be empty and contain no data at all. In order for hardware to check the validity of entries in the cache, every cache entry needs to maintain the following pieces of information:
- A status bit to determine if the block is empty or full
- The memory address of the data in the block
- The data from the specified memory address (a "block in the cache", also called a "line in the cache")
When the processor looks for data in the cache, it sends a memory address to the cache controller. The cache controller checks the address against all the address fields in the cache.
If there is a hit, the cache controller returns the data. If there is a miss, the cache controller must pass the request to the next level of cache or to the main memory unit. The cache controller splits an effective memory address (MSB to LSB) into the tag, the index, and the block offset. Some authors refer to the block offset as simply the "offset" or the "displacement". The memory address of the data in the cache is known as the tag.
Memory Stall Cycles
If the cache misses, the processor will need to stall the current instruction until the cache can fetch the correct data from a higher level. The amount of time lost by the stall is dependent on a number of factors. The number of memory accesses in a particular program is denoted as Am; some of those accesses will hit the cache, and the rest will miss the cache. The rate of misses, equal to the probability that any particular access will miss, is denoted rm. The average amount of time lost for each miss is known as the miss penalty, and is denoted as Pm. We can calculate the amount of time wasted because of cache miss stalls as:
Tstall = Am × rm × Pm
Likewise, if we have the total number of instructions in a program, N, and the average number of misses per instruction, MPI, we can calculate the lost time as:
Tstall = N × MPI × Pm
If instead of lost time we measure the miss penalty in the amount of lost cycles, the calculation will instead produce the number of cycles lost to memory stalls, instead of the amount of time lost to memory stalls.
Read Stall Times
To calculate the amount of time lost to cache read misses, we can perform the same basic calculation as above:
Tread = Ar × rr × Pr
Ar is the average number of read accesses, rr is the miss rate on reads, and Pr is the time or cycle penalty associated with a read miss.
Write Stall Times
Determining the amount of time lost to write stalls is similar, but an additional additive term that represents stalls in the write buffer needs to be included:
Twrite = Aw × rw × Pw + Twb
where Aw, rw and Pw are defined for writes in the same way that Ar, rr and Pr are defined for reads, and Twb is the amount of time lost because of stalls in the write buffer. The write buffer can stall when the cache attempts to synchronize with main memory.
Hierarchy Stall Times
In a hierarchical cache system, miss time penalties can be compounded when data is missed in multiple levels of cache. If data is missed in the L1 cache, it will be looked for in the L2 cache. However, if it also misses in the L2 cache, there will be a double-penalty. The L2 needs to load the data from the main memory (or the L3 cache, if the system has one), and then the data needs to be loaded into the L1. Notice that missing in two cache levels and then having to access main memory takes longer than if we had just accessed memory directly.
Design Considerations
L1 cache is typically designed with the intent of minimizing the time it takes to make a hit. If hit times are sufficiently fast, a sizable miss rate can be accepted. Misses in the L1 will be redirected to the L2, and that is still significantly faster than an access to main memory. L1 cache tends to have smaller block sizes, but benefits from having more available blocks for the same amount of space. In order to make L1 hit times minimal, L1 caches are typically direct-mapped or even narrowly 2-way set associative.
L2 cache, on the other hand, needs to have a lower miss rate to help avoid accesses to main memory. Accesses to L2 cache are much faster than accesses to memory, so we should do everything possible to ensure that we maximize our hit rate. For this reason, L2 cache tends to be fully associative with large block sizes.
This is because memory is typically read and written in sequential memory cells, so large block sizes can take advantage of that sequentiality. L3 cache further continues this trend, with even larger block sizes and a further minimized miss rate.
block size
A very small cache block size increases the miss ratio, since a miss will fetch less data at a time. A very large cache block size also increases the miss ratio, since it causes the system to fetch a bunch of extra information that is used less than the data it displaces in the cache.
In order to increase the read speed in a cache, many cache designers implement some level of associativity. An associative cache creates a relationship between the original memory location and the location in the cache where that data is stored. The relationship between the address in main memory and the location where the data is stored is known as the mapping of the cache. In this way, if the data exists in the cache at all, the cache controller knows that it can only be in certain locations that satisfy the mapping.
A direct-mapped system uses a hashing algorithm to assign an identifier to a memory address. A common hashing algorithm for this purpose is the modulo operation. The modulo operation divides the address by a certain number, p, and takes the remainder r as the result. If a is the main memory address, and n is an arbitrary non-negative integer, then the hashing algorithm must satisfy the following equation:
a = n × p + r, with 0 ≤ r < p (that is, r = a mod p)
If p is chosen properly by the designer, data will be evenly distributed throughout the cache. In a direct-mapped system, each memory address corresponds to only a single cache location, but a single cache location can correspond to many memory locations. Consider, for example, a simple cache with 8 blocks: every memory address a is then mapped to cache block a mod 8, so memory addresses 0, 8, and 16 will all map to block 0 in the cache. Cache performance is worst when multiple data items with the same hash value are read, and performance is best when data items are close together in memory (such as a sequential block of program instructions, or a sequential array).
Most external caches (located on the motherboard, but external to the CPU) are direct-mapped or occasionally 2-way set associative, because it's complicated to build higher-associativity caches out of standard components. If there is such a cache, typically there is only one external cache on the motherboard, shared between all CPUs.
The replacement policy for a direct-mapped cache is the simplest possible replacement policy: the new data must go in the one and only one place in the cache it corresponds to. (The old data at that location in the cache, if its dirty bit is set, must be written to main memory first.)
2-Way Set Associative
In a 2-way set associative cache system, the data value is hashed, but each hash value corresponds to a set of two cache blocks. A data value that is assigned to a set can be inserted into either block of that set. The read speeds are quick because the cache controller can immediately narrow down its search area to the set that matches the address hash value.
The LRU replacement policy for a 2-way set associative cache is one of the simplest replacement policies: the new data must go in one of a set of 2 possible locations.
Those 2 locations share an LRU bit that is updated whenever either one is read or written, indicating which one of the two entries in the set was the most-recently used. The new data goes in the *other* location (the least-recently used location). (The old data at that LRU location in the cache, if its dirty bit is set, must be written to main memory first.)
2 way skewed associative
The 2-way skewed associative cache is "the best tradeoff for .... caches whose sizes are in the range 4K-8K bytes" -- André Seznec. "A Case for Two-Way Skewed-Associative Caches". http://citeseer.ist.psu.edu/seznec93case.html. Retrieved 2007-12-13.
Fully Associative
In a fully-associative cache, hash algorithms are not employed and data can be inserted anywhere in the cache that is available. A typical algorithm will write a new data value over the oldest unused data value in the cache. This scheme, however, requires the time an item is loaded or accessed to be stored, which can require lots of additional storage.
Cache Misses
There are three basic types of misses in a cache:
- Conflict Misses
- Compulsory Misses
- Capacity Misses
Conflict Misses
A conflict miss occurs in a direct-mapped or 2-way set associative cache when two data items are mapped to the same cache locations. In a conflict miss, a recently used data item is overwritten with a new data item.
Compulsory Misses
A compulsory miss is an instance where the cache must miss because it does not yet contain any data for that location. For instance, when a processor is first powered-on, there is no valid data in the cache and the first few reads will always miss. The compulsory miss demonstrates the need for a cache to differentiate between a space that is empty and one that is full. Consider what happens when we turn the processor on and all the tag and address values are reset to zero: an attempt to read a memory location with a hash value of zero would then appear to hit, even though the block holds no valid data. We do not want the cache to hit if the blocks are empty.
Capacity Misses
Capacity misses occur when the cache is not large enough to hold all of the data the program is actively using, so blocks must be evicted and then fetched again later.
Cache Write Policy
Data writes require the same time delay as a data read. For this reason, caching systems typically will write data to the cache as well. However, when writing to the cache, it is important to ensure that the data is also written to the main memory, so it is not overwritten by the next cache read. If data in the cache is overwritten without being stored in main memory, the data will be lost. Caches must therefore eventually write their data to the main memory; exactly when that data is written to the main memory is determined by the write policy. There are two write policies: write through and write back.
Write Through
When data is written to memory, a write request is sent simultaneously to the main memory and to the cache. This way, the result data is available in the cache before it can be written (and then read again) from the main memory. When writing to the cache, it's important to make sure the main memory and the cache are synchronized and they contain the same data. In a write through system, data that is written to the cache is immediately written to the main memory as well. If many writes need to occur in sequential instructions, the write buffer may get backed up and cause a stall.
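The following Python sketch ties together the ideas above -- 2-way set associativity, the shared LRU bit, compulsory and conflict misses, and a write-through policy. The geometry (4 sets, one-word blocks) and the address trace are invented purely for illustration; real caches track whole blocks and many more sets.

# Toy 2-way set-associative, write-through cache (illustrative sizes only).
NUM_SETS = 4

class TwoWayCache:
    def __init__(self):
        self.ways = [[None, None] for _ in range(NUM_SETS)]  # stored tags; None = empty
        self.lru = [0] * NUM_SETS       # which way of each set is least-recently used
        self.hits = self.misses = self.memory_writes = 0

    def _access(self, address):
        index, tag = address % NUM_SETS, address // NUM_SETS
        ways = self.ways[index]
        if tag in ways:                              # hit: the other way becomes LRU
            self.hits += 1
            self.lru[index] = 1 - ways.index(tag)
        else:                                        # miss: evict the LRU way
            self.misses += 1
            victim = self.lru[index]
            ways[victim] = tag
            self.lru[index] = 1 - victim

    def read(self, address):
        self._access(address)

    def write(self, address):
        self._access(address)                        # allocate on a write miss, then...
        self.memory_writes += 1                      # ...write through to main memory

cache = TwoWayCache()
for addr in (0, 4, 8, 0, 4, 8, 1, 1):   # 0, 4 and 8 all compete for set 0
    cache.read(addr)
cache.write(2)
print(cache.hits, cache.misses, cache.memory_writes)   # 1 hit, 8 misses, 1 memory write

Three addresses fighting over a 2-way set produce a string of conflict misses; the repeated access to address 1 is the only hit.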
Write Back In a write back system, the cache controller keeps track of which data items have been synchronized to main memory. The data items which have not been synchronized are called "dirty", and the cache controller prevents dirty data from being overwritten. The cache controller will synchronize data during processor cycles where no other data is being written to the cache. Write bypass Some processors send writes directly to main memory, bypassing the cache. If that location is *not* already cached, then nothing more needs to be done. If that location *is* already cached, then the old data in the cache(s) needs to be marked "invalid" ("stale") so if the CPU ever reads that location, the CPU will read the latest value from main memory rather than some earlier value(s) in the cache(s). Stale Data It is possible for the data in main memory to be changed by a component besides the microcontroller. For instance, many computer systems have memory-mapped I/O, or a DMA controller that can alter the data. Some computer systems have several CPUs connected to a common main memory. It is important that the cache controller check that data in the cache is correct. Data in the cache that is old and may be incorrect is called "stale". The three most popular approaches to dealing with stale data ("cache coherency protocols") are: - Use simple cache hardware that ignores what the other CPUs are doing. - Set all caches to write-through all STOREs (write-through policy). Use additional cache hardware to listen in ("snoop") whenever some other device writes to main memory, and invalidate local cache line whenever some other device writes to the corresponding cached location in main memory. - Design caches to use the MESI protocol. With simple cache hardware that ignores what the other CPUs are doing, cache coherency is maintained by the OS software. The OS sets up each page in memory as either (a) exclusive to one particular CPU (which is allowed to read, write, and cache it); all other CPUs are not allowed to read or write or cache that page; (b) shared read/write between CPUs, and set to "non-cacheable", in the same way that memory-mapped I/O devices are set to non-cacheable; or (c) shared read-only; all CPUs are allowed to cache but not write that page. Split cache High-performance processors invariably have 2 separate L1 caches, the instruction cache and the data cache (I-cache and D-cache). This "split cache" has several advantages over a unified cache: - Wiring simplicity: the decoder and scheduler are only hooked to the I-cache; the registers and ALU and FPU are only hooked to the D-cache. - Speed: the CPU can be reading data from the D-cache, while simultaneously loading the next instruction(s) from the I-cache. Multi-CPU systems typically have a separate L1 I-cache and L1 D-cache for each CPU, each one direct-mapped for speed. Open question: To speed up running Java applications in a JVM (and similar interpreters and CPU emulators), would it help to have 3 separate caches -- a machine instruction cache indexed by the program counter PC, a byte code cache indexed by the VM's instruction pointer IP, and a data cache ? On the other hand, in a high-performance processor, other levels of cache, if any -- L2, L3, etc. -- as well as main memory -- are typically unified, although there are several exceptions (such as the Itanium 2 Montecito). 
The advantages of a unified cache (and a unified main memory) are: - Some programs spend most of their time in a small part of the program processing lots of data. Other programs run lots of different subroutines against a small amount of data. A unified cache automatically balances the proportion of the cache used for instructions and the proportion used for data -- to get the same performance on a split cache would require a larger cache. - when instructions are written to memory -- by an OS loading an executable file from storage, or from a just-in-time compiler translating bytecode to executable code -- a split cache requires the CPU to flush and reload the instruction cache; a unified cache doesn't require that. error detection Each cache row entry typically has error detection bits. Since the cache only holds a copy of information in the main memory (except for the write-back queue), when an error is detected, the desired data can be re-fetched from the main memory -- treated as a kind of miss-on-invalid -- and the system can continue as if no error occurred. A few computer systems use Hamming error correction to correct single-bit errors in the "data" field of the cache without going all the way back to main memory. Specialized cache features Many CPUs use exactly the same hardware for the instruction cache and the data cache. (And, of course, the same hardware is used for instructions as for data in a unified cache. The revolutionary idea of a Von Neumann architecture is to use the same hardware for instructions and for data in the main memory itself). For example, the Fairchild CLIPPER used 2 identical CAMMU chips, one for the instruction cache and one for the data cache. Because the various caches are used slightly differently, some CPU designers customize each cache in different ways. - Some CPU designers put the "branch history bits" used for branch prediction in the instruction cache. There's no point to adding such information to a data-only cache. - Many instruction caches are designed in such a way that the only way to deal with stale instructions is to invalidate the entire cache and reload. Data caches are typically designed with more fine-grained response, with extra hardware that can invalidate and reload only the particular cache lines that have gone stale. - The virtual-to-physical address translation process often has a lot of specialized hardware associated with it to make it go faster -- the TLB cache, hardware page-walkers, etc. We will discuss this in more detail in the next chapter, Virtual Memory. - Alan Jay Smith. "Design of CPU Cache Memories". Proc. IEEE TENCON, 1987. - Paul V. Bolotoff. "Functional Principles of Cache Memory". 2007. - John L. Hennessy, David A. Patterson. "Computer Architecture: A Quantitative Approach". 2011. ISBN 012383872X, ISBN 9780123838728. page B-9. - David A. Patterson, John L. Hennessy. "Computer organization and design: the hardware/software interface". 2009. ISBN 0123744938, ISBN 9780123744937 "Chapter 5: Large and Fast: Exploiting the Memory Hierarchy". p. 484. - Gene Cooperman. "Cache Basics". 2003. - Ben Dugan. "Concerning Caches". 2002. - Harvey G. Cragon. "Memory systems and pipelined processors". 1996. ISBN 0867204745, ISBN 9780867204742. "Chapter 4.1: Cache Addressing, Virtual or Real" p. 209 - Paul V. Bolotoff. "Functional Principles of Cache Memory". 2007. - Micro-Architecture "Skewed-associative caches have ... major advantages over conventional set-associative caches." - Paul V. Bolotoff. 
Functional Principles of Cache Memory. 2007. Further reading - Parallel Computing and Computer Clusters/Memory - simulators available for download at University of Maryland: Memory-Systems Research: "Computational Artifacts" can be used to measure cache performance and power dissipation for a microprocessor design without having to actually build it. This makes it much quicker and cheaper to explore various tradeoffs involved in cache design. ("Given a fixed size chip, if I sacrifice some L2 cache in order to make the L1 cache larger, will that make the overall performance better or worse?" "Is it better to use an extremely fast cycle time cache with low associativity, or a somewhat slower cycle time cache with high associativity giving a better hit rate?")
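In the same spirit as the simulators mentioned above, the toy Python experiment below measures the hit ratio of a direct-mapped cache with multi-word blocks under two access patterns; the cache geometry and the traces are arbitrary choices made for this sketch, not a model of any particular machine.

import random

def hit_ratio(addresses, num_blocks=64, block_size=8):
    cache = [None] * num_blocks                 # one stored tag per block; None = empty
    hits = 0
    for addr in addresses:
        block_number = addr // block_size       # whole blocks are fetched on a miss
        index = block_number % num_blocks       # direct-mapped: modulo hashing
        tag = block_number // num_blocks
        if cache[index] == tag:
            hits += 1
        else:
            cache[index] = tag                  # miss: fetch the block, evict the old one
    return hits / len(addresses)

sequential = list(range(10_000))                                    # strong spatial locality
scattered = [random.randrange(1_000_000) for _ in range(10_000)]    # almost no locality
print("sequential:", hit_ratio(sequential))     # about 0.875 (7 of every 8 accesses hit)
print("scattered: ", hit_ratio(scattered))      # close to zero

Sequential addresses hit about 7 times out of 8 because each miss brings in an 8-word block; the scattered trace misses almost every time, which mirrors the near-zero hit ratios described earlier for access patterns with little locality.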
http://en.wikibooks.org/wiki/Microprocessor_Design/Cache
13
80
1 These tables present experimental statistics showing the Gross Value of Irrigated Agricultural Production (GVIAP). Annual data are presented for the reference periods from 2000–01 to 2008–09 for Australia, States and Territories, for the Murray-Darling Basin for selected years (2000–01, 2005–06, 2006–07, 2007–08 and 2008–09) and for Natural Resource Management (NRM) regions from 2005–06 to 2008–09, for key agricultural commodity groups. 2 The tables also present the total gross value of agricultural commodities (GVAP) and the Volume of Water Applied (in megalitres) to irrigated crops and pastures. WHAT IS GVIAP? 3 GVIAP refers to the gross value of agricultural commodities that are produced with the assistance of irrigation. The gross value of commodities produced is the value placed on recorded production at the wholesale prices realised in the marketplace. Note that this definition of GVIAP does not refer to the value that irrigation adds to production, or the "net effect" that irrigation has on production (i.e. the value of a particular commodity that has been irrigated "minus" the value of that commodity had it not been irrigated) - rather, it simply describes the gross value of agricultural commodities produced with the assistance of irrigation. 4 ABS estimates of GVIAP attribute all of the gross value of production from irrigated land to irrigated agricultural production. For this reason, extreme care must be taken when attempting to use GVIAP figures to compare different commodities - that is, the gross value of irrigated production should not be used as a proxy for determining the highest value water uses. Rather, it is a more effective tool for measuring changes over time or comparing regional differences in irrigated agricultural production. 5 Estimating the value that irrigation adds to agricultural production is difficult. This is because water used to grow crops and irrigate pastures comes from a variety of sources. In particular, rainwater is usually a component of the water used in irrigated agriculture, and the timing and location of rainfall affects the amount of irrigation water required. Other factors such as evaporation and soil moisture also affect irrigation water requirements. These factors contribute to regional and temporal variations in the use of water for irrigation. In addition, water is not the only input to agricultural production from irrigated land - fertiliser, land, labour, machinery and other inputs are also used. To separate the contribution that these factors make to total production is not currently possible. Gross value of agricultural production 6 These estimates are based on data from Value of Agricultural Commodities Produced (cat. no. 7503.0), which are derived from ABS agricultural censuses and surveys. During the processing phase of the collections, data checking was undertaken to ensure key priority outputs were produced to high quality standards. As a result, some estimates will have been checked more comprehensively than others. 7 It is not feasible to check every item reported by every business, and therefore some anomalies may arise, particularly for small area estimates (e.g. NRM regions). To present these items geographically, agricultural businesses are allocated to a custom region based on where the business reports the location of their 'main agricultural property'. Anomalies can occur if location details for agricultural businesses are not reported precisely enough to accurately code their geographic location. 
In addition, some businesses operate more than one property, and some large farms may operate across custom region and NRM boundaries, but are coded to a single location. As a result, in some cases, a particular activity may not necessarily occur in the area specified and the Area of Holding and other estimates of agricultural activity may exceed or not account for all activities within that area. For these reasons, the quality of estimates may be lower for some NRMs and other small area geographies. 8 Gross value of agricultural production (GVAP) is the value placed on recorded production of agricultural commodities at the wholesale prices realised in the market place. It is also referred to as the Value of Agricultural Commodities Produced (VACP). 9 In 2005–06, the ABS moved to a business register sourced from the Australian Taxation Office's Australian Business Register (ABR). Previously the ABS had maintained its own register of agricultural establishments. 10 The ABR-based register consists of all businesses on the ABR classified to an 'agricultural' industry, as well as businesses which have indicated they undertake agricultural activities. All businesses with a turnover of $50,000 or more are required to register on the ABR. Many agricultural businesses with a turnover of less than $50,000 have also chosen to register on the ABR. 11 Moving to the ABR-based register required changes to many of the methods used for compiling agriculture commodity and water statistics. These changes included: using new methods for determining whether agricultural businesses were 'in-scope' of the collection; compiling the data in different ways; and improving estimation and imputation techniques. 12 The ABR-based frame was used for the first time to conduct the 2005–06 Agricultural Census. This means that Value of Agricultural Commodities Produced (VACP) data are not directly comparable with historical time series for most commodities. For detailed information about these estimates please refer to the Explanatory Notes in Value of Agricultural Commodities Produced (cat. no. 7503.0). 13 Statistics on area and production of crops relate in the main to crops sown during the reference year ended 30 June. Statistics of perennial crops and livestock relate to the position as at 30 June and the production during the year ended on that date, or of fruit set by that date. Statistics for vegetables, apples, pears and for grapes, which in some states are harvested after 30 June, are collected by supplementary collections. For 2005–06 to 2007–08, the statistics for vegetables, apples, pears and for grapes included in this product are those collected in the 2005–06 Agriculture Census at 30 June 2006, the 2006–07 Agricultural Survey at 30 June 2007 and the Agricultural Resource Management Survey 2007–08 at 30 June 2008, not those collected by the supplementary collections. For this reason the GVAP (VACP) estimates may differ from the published estimates in the products Agricultural Commodities: Small Area Data, Australia, 2005–06 (cat. no. 7125.0) and Value of Agricultural Commodities Produced, Australia (cat. no. 7503.0). 14 Further, the GVAP (Gross Value of Agricultural Production, also referred to as VACP) and GVIAP estimates for 2005–06 and 2006–07 shown in this product have been revised where necessary, for example, when a new price has become available for a commodity after previous publications. 
15 The VACP Market Prices survey collected separate prices for undercover and outdoor production for the first time in 2005–06. This enabled the ABS to better reflect the value of undercover and outdoor production for nurseries and cut flowers. The value of the commodity group “nurseries, cut flowers and cultivated turf” was significantly greater from 2005–06, reflecting an increase in production and an improved valuation of undercover production for nurseries and cut flowers. Volume of water applied 16 'Volume of water applied' refers to the volume of water applied to crops and pastures through irrigation. 17 This information is sourced from the ABS Agriculture Census for 2000–01 and 2005–06 and from the ABS Agricultural Survey for all other years, except for 2002–03 when ABS conducted the Water Survey, Agriculture. As explained above in paragraphs 9–12, there was a change to the register of businesses used for these collections, which may have some impact on the estimates. For further information refer to the Explanatory Notes for Water Use on Australian Farms (cat. no. 4618.0). 18 Volume of water applied is expressed in megalitres. A megalitre is one million litres, or one thousand kilolitres. AGRICULTURAL COMMODITY GROUPS 19 GVIAP is calculated for each irrigated 'commodity group' produced by agricultural businesses. That is, GVIAP is generally not calculated for individual commodities, rather for groups of "like" commodities according to irrigated commodity grouping on the ABS Agricultural Census/Survey form. The irrigated commodity groups vary slightly on the survey form from year-to-year. The commodity groups presented in this publication are: - cereals for grain and seed - total hay production - cereals for hay - pastures cut for hay or silage (including lucerne for hay) - pastures for seed production - sugar cane - other broadacre crops (see Appendix 1 for detail) - fruit trees, nut trees, plantation or berry fruits (excluding grapes) - vegetables for human consumption and seed - nurseries, cut flowers and cultivated turf - dairy production - production from meat cattle - production from sheep and other livestock (excluding cattle) 20 Note that the ABS Agricultural Census/Survey collects area and production data for a wide range of individual commodities within the irrigated commodity groups displayed in the list above. Appendix 1 provides more detail of which commodities comprise these groupings. 21 There were differences in data items (for production, area grown and area irrigated) collected on the Agricultural Census/Surveys in different years. This affects the availability of some commodities for some years. Appendix 2 outlines some of the specific differences and how they have been treated in compiling the estimates for this publication, thereby enabling the production of GVIAP estimates for each of the commodity groups displayed in the list above for every year from 2000–01 to 2008–09. 22 Note that in all GVAP tables, “Total GVAP” includes production from pigs, poultry, eggs, honey (2001 only) and beeswax (2001 only), for completeness. These commodities are not included in GVIAP estimates at all because irrigation is not applicable to them. METHOD USED TO CALCULATE GVIAP 23 The statistics presented here calculate GVIAP at the unit (farm) level, using three simple rules: a. If the area of the commodity group irrigated = the total area of the commodity group grown/sown, then GVIAP = GVAP for that commodity group; b. 
If the area of the commodity group irrigated is greater than zero but less than the total area of the commodity group grown/sown, then a “yield formula” is applied, with a “yield difference factor”, to calculate GVIAP for the irrigated area of the commodity group; c. If the area of the commodity group irrigated = 0, then GVIAP = 0 for that commodity group.
24 These three rules apply to most commodities; however there are some exceptions as outlined below in paragraph 26. It is important to note that the majority of cases follow rules 1 and 3; that is, the commodity group on a particular farm is either 100% irrigated or not irrigated at all. For example, in 2004–05, 90% of total GVAP came from commodity groups that were totally irrigated or not irrigated at all. Therefore, only 10% of GVAP had to be "split" into either "irrigated" or "non-irrigated" using the “yield formula” (described below). The yield formula is explained in full in the information paper Methods of estimating the Gross Value of Irrigated Agricultural Production (cat. no. 4610.0.55.006).
25 Outlined here is the yield formula referred to in paragraph 20. Total production is apportioned between the irrigated and non-irrigated areas, with the irrigated area weighted by the yield difference factor:
Yi = Q × (Ai × Ydiff) / ((Ai × Ydiff) + Ad), and GVIAP for the commodity group = Yi × P
where:
Ai = area of the commodity under irrigation (ha)
Yi = estimated irrigated production for the commodity (t or kg)
P = unit price of production for the commodity ($ per t or kg)
Q = total quantity of the commodity produced (t or kg)
Ad = area of the commodity that is not irrigated (ha)
Ydiff = yield difference factor, i.e. estimated ratio of irrigated to non-irrigated yield for the commodity produced
Yield difference factors
26 Yield difference factors are the estimated ratio of irrigated to non-irrigated yield for a given commodity group. They are calculated for a particular commodity group by taking the yield (production per hectare sown/grown) of all farms that fully irrigated the commodity group and dividing this "irrigated" yield by the yield of all farms that did not irrigate the commodity group. The yield difference factors used here were determined by analysing data from 2000–01 to 2004–05 and are reported for each commodity group in Appendix 1 of the information paper Methods of estimating the Gross Value of Irrigated Agricultural Production (cat. no. 4610.0.55.006). It is anticipated that the yield difference factors will be reviewed following release of data from the 2010–11 Agriculture Census.
27 In this report "yield" is defined as the production of the commodity (in tonnes, kilograms or as a dollar value) per area grown/sown (in hectares).
Commodity groups for which the yield formula is used
28 The GVIAP for the following commodities has been calculated using the yield formula, with varying yield differences:
- Cereals for grain/seed - yield formula with yield difference of 2
- Cereals for hay - yield formula with yield difference of 1.5
- Pastures for hay - yield formula with yield difference of 2
- Pastures for seed - yield formula with yield difference of 2
- Sugar cane - yield formula with yield difference of 1.3 (except for 2008–09 - see paragraphs 29 and 31 below)
- Other broadacre crops - yield formula with yield difference of 2
- Fruit and nuts - yield formula with yield difference of 2
- Grapes - yield formula with yield difference of 1.2 (except for 2008–09 - see paragraphs 29 and 31 below)
- Vegetables for human consumption and seed - yield formula with yield difference of 1
- Nurseries, cut flowers and cultivated turf - yield formula with yield difference of 1
Note: a yield difference of 1 implies no difference in yield between irrigated and non-irrigated production.
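As a rough illustration only (this is not ABS processing code, and the example figures are invented), the three rules and the yield formula above can be expressed in a few lines of Python:

def gviap_yield_formula(gvap, area_irrigated, area_not_irrigated, ydiff):
    # Rule (c): nothing irrigated, so no irrigated value.
    if area_irrigated == 0:
        return 0.0
    # Rule (a): the whole area is irrigated, so all of GVAP counts.
    if area_not_irrigated == 0:
        return gvap
    # Rule (b): apportion GVAP, weighting irrigated area by the yield difference factor.
    weighted = area_irrigated * ydiff
    return gvap * weighted / (weighted + area_not_irrigated)

# A hypothetical farm: 100 ha of cereals for grain (Ydiff = 2), 40 ha irrigated,
# gross value of production of $300,000 for the commodity group.
print(gviap_yield_formula(300_000, 40, 60, 2))   # 171428.57...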
29 However, the GVIAP of some agricultural commodity groups cannot be satisfactorily calculated using this formula, so the GVIAP for a number of commodity groups has been calculated using other methods:
- Rice - assume all rice production is irrigated.
- Cotton - production formula (see paragraph 31).
- Grapes - production formula (2008–09 only - see paragraph 31).
- Sugar - production formula (2008–09 only - see paragraph 31).
- Dairy production - assume that if there is any irrigation of grazing land on a farm that is involved in any dairy production, then all dairy production on that farm is classified as irrigated.
- Meat cattle, sheep and other livestock - take the average of two other methods: 1. calculate the ratio of the area of irrigated grazing land to the total area of grazing land and multiply this ratio by the total production for the commodity group (this is referred to as the “area formula”); 2. if the farm has any irrigation of grazing land then assume that all livestock production on the farm is irrigated.
30 For more information on the “area formula” for calculating GVIAP please refer to the information paper Methods of estimating the Gross Value of Irrigated Agricultural Production (cat. no. 4610.0.55.006).
31 In 2008–09, cotton, grapes and sugar were the only commodities for which the production formula was used to estimate GVIAP. This formula is based on the ratio of irrigated production (kg or tonnes) to total production (kg or tonnes) and is outlined in the information paper Methods of estimating the Gross Value of Irrigated Agricultural Production (cat. no. 4610.0.55.006). The production formula is used for these three commodities because in 2008–09 they were the only commodities for which actual irrigated production was collected on the ABS agricultural censuses and surveys. Note that prior to 2008–09, cotton was the only commodity for which irrigated production data was collected, except in 2007–08, when there were no commodities for which this data was collected. In the case of cotton:
Qi = irrigated production of cotton (kg)
Qd = non-irrigated production of cotton (kg)
P = unit price of production for cotton ($ per kg)
Qt = total quantity of cotton produced (kg) = Qi + Qd
and GVIAP for cotton = P × Qi (equivalently, GVAP × Qi / Qt).
32 Most of the irrigated commodity groups included in these tables are irrigated simply by the application of water directly on to the commodity itself, or the soil in which it is grown. The exception relates to livestock, which obviously includes dairy. For example, the GVIAP of "dairy" simply refers to all dairy production from dairy cattle that grazed on irrigated pastures or crops. Estimates of GVIAP for dairy must be used with caution, because in this case the irrigation is not applied directly to the commodity; rather, it is applied to a pasture or crop which is then eaten by the animal from which the commodity is derived (milk). Therefore, for dairy production, the true net contribution of irrigation (i.e. the value added by irrigation, or the difference between irrigated and non-irrigated production) will be much lower than the total irrigation-assisted production (the GVIAP estimate).
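The other two apportioning methods described in paragraphs 29–31 can be sketched the same way (again with invented figures, purely for illustration):

def gviap_production_formula(unit_price, irrigated_quantity):
    # Cotton, grapes and sugar (2008-09): value only the quantity actually irrigated.
    return unit_price * irrigated_quantity

def gviap_area_formula(total_value, irrigated_grazing_area, total_grazing_area):
    # The "area formula" component used for meat cattle, sheep and other livestock.
    return total_value * irrigated_grazing_area / total_grazing_area

print(gviap_production_formula(0.45, 2_000_000))   # $900,000 of irrigated cotton
print(gviap_area_formula(500_000, 120, 400))       # $150,000 attributed via the area ratio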
34 Similarly, estimates of GVIAP for all other livestock (meat cattle, sheep and other livestock) must be treated with caution, because as for dairy production, the issues around irrigation not being directly applied to the commodity also apply to these commodity groups. 35 The estimates presented in this product are underpinned by estimates of the Value of Agricultural Commodities Produced (VACP), published annually in the ABS publication Value of Agricultural Commodities Produced (cat. no. 7503.0). VACP estimates (referred to as GVAP in this product) are calculated by multiplying the wholesale price by the quantity of agricultural commodities produced. The price used in this calculation is the average unit value of a given commodity realised in the marketplace. Price information for livestock slaughterings and wool is obtained from ABS collections. Price information for other commodities is obtained from non-ABS sources, including marketing authorities and industry sources. It is important to note that prices are state-based average unit values. 36 Sources of price data and the costs of marketing these commodities vary considerably between states and commodities. Where a statutory authority handles marketing of the whole or a portion of a product, data are usually obtained from this source. Information is also obtained from marketing reports, wholesalers, brokers and auctioneers. For all commodities, values are in respect of production during the year (or season) irrespective of when payments were made. For that portion of production not marketed (e.g. hay grown on farm for own use, milk used in farm household, etc.), estimates are made from the best available information and, in general, are valued on a local value basis. 37 It should be noted that the estimates for GVIAP are presented in current prices; that is, estimates are valued at the commodity prices of the period to which the observation relates. Therefore changes between the years shown in these tables reflect the effects of price change. MURRAY-DARLING BASIN (MDB) 38 The gross value of irrigated agricultural production for the MDB is presented for 2000–01 and 2005–06 through to 2008–09. The 2000–01 and 2005–06 data are available because they are sourced from the Agricultural Census which supports finer regional estimates, while the 2006–07, 2007–08 and 2008–09 data are able to be produced because of the improved register of agricultural businesses (described in paragraphs 9–12). 39 The data for the Murray-Darling Basin (MDB) presented in this publication for 2000–01 were derived from a concordance of Statistical Local Area (SLA) regions falling mostly within the MDB. The data for the MDB for 2006–07, 2007–08 and 2008–09 were derived from a concordance of National Resource Management (NRM) regions falling mostly within the MDB. The MDB data for 2005–06 were derived from geo-coded data. As a result, there will be small differences in MDB data across years and this should be taken into consideration when comparisons are made between years. COMPARABILITY WITH PREVIOUSLY PUBLISHED ESTIMATES 40 Because of this new methodology, the experimental estimates presented here are not directly comparable with other estimates of GVIAP released by ABS in Water Account, Australia, 2000–01 (cat. no. 4610), Characteristics of Australia’s Irrigated Farms, 2000–01 to 2003–04 (cat. no. 4623.0), Water Account, Australia, 2004–05 (cat. no. 4610) and Water and the Murray-Darling Basin, A Statistical Profile 2000–01 to 2005–06 (cat. no. 4610.0.55.007). 
However, the GVIAP estimates published in the Water Account Australia 2008–09 are the same as those published in this publication. 41 As described above, 'Volume of water applied' refers to the volume of water applied to crops and pastures through irrigation. The estimates of 'Volume of water applied' presented in this publication are sourced directly from ABS Agricultural Censuses and Surveys and are the same as those presented in Water Use On Australian Farms (cat.no. 4618.0). Note that these volumes are different to the estimates of agricultural water consumption published in the 2008–09 Water Account Australia (cat. no. 4610.0) as the Water Account Australia estimates focus on total agricultural consumption (i.e. irrigation plus other agricultural water uses) and are compiled using multiple data sources (not just ABS Agricultural Censuses and Surveys). 42 The differences between the methods used to calculate the GVIAP estimates previously released and the method used to produce the estimates presented in this product, are explained in detail in the information paper Methods of estimating the Gross Value of Irrigated Agricultural Production, 2008 (cat. no. 4610.0.55.006). 43 In particular some commodity groups will show significant differences with what was previously published. These commodity groups include dairy production, meat production and sheep and other livestock production. 44 The main reason for these differences is that previous methods of calculating GVIAP estimates for these commodity groups were based on businesses being classified to a particular industry class (according to the industry classification ANZSIC), however the new method is based on activity. For example, for dairy production, previous methods of calculating GVIAP only considered dairy production from dairy farms which were categorised as such according to ANZSIC. The new method defines dairy production, in terms of GVIAP, as “all dairy production on farms on which any grazing land (pastures or crops used for grazing) has been irrigated”. Therefore, if there is any irrigation of grazing land on a farm that is involved in any dairy production (regardless of the ANZSIC classification of that farm), then all dairy production on that particular farm is classified as irrigated. 45 Where figures for individual states or territories have been suppressed for reasons of confidentiality, they have been included in relevant totals. RELIABILITY OF THE ESTIMATES 46 The experimental estimates in this product are derived from estimates collected in surveys and censuses, and are subject to sampling and non-sampling error. 47 The estimates for gross value of irrigated agricultural production are based on information obtained from respondents to the ABS Agricultural Censuses and Surveys. These estimates are therefore subject to sampling variability (even in the case of the censuses, because the response rate is less than 100%); that is, they may differ from the figures that would have been produced if all agricultural businesses had been included in the Agricultural Survey or responded in the Agricultural Census. 48 One measure of the likely difference is given by the standard error (SE) which indicates the extent to which an estimate might have varied by chance because only a sample was taken or received. 
There are about two chances in three that a sample estimate will differ by less than one SE from the figure that would have been obtained if all establishments had been reported for, and about nineteen chances in twenty that the difference will be less than two SEs.
49 In this publication, sampling variability of the estimates is measured by the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate to which it refers. Most national estimates have RSEs less than 10%. For some States and Territories, and for many Natural Resource Management regions with limited production of certain commodities, RSEs are greater than 10%. Estimates that have an estimated relative standard error higher than 10% are flagged with a comment in the publication tables. If a data cell has an RSE of between 10% and 25%, the estimate should be used with caution as it is subject to sampling variability too high for some purposes. For data cells with an RSE between 25% and 50% the estimate should be used with caution as it is subject to sampling variability too high for most practical purposes. Those data cells with an RSE greater than 50% indicate that the sampling variability causes the estimates to be considered too unreliable for general use.
50 Errors other than those due to sampling may occur because of deficiencies in the list of units from which the sample was selected, non-response, and errors in reporting by providers. Inaccuracies of this kind are referred to as non-sampling error, which may occur in any collection, whether it be a census or a sample. Every effort has been made to reduce non-sampling error to a minimum in the collections by careful design and testing of questionnaires, operating procedures and systems used to compile the statistics.
51 Where figures have been rounded, discrepancies may occur between sums of the component items and totals.
52 ABS publications draw extensively on information provided freely by individuals, businesses, governments and other organisations. Their continued cooperation is very much appreciated: without it, the wide range of statistics published by the ABS would not be available. Information received by the ABS is treated in strict confidence as required by the Census and Statistics Act 1905.
FUTURE DATA RELEASES
53 It is anticipated that ABS will release these estimates on an annual basis.
Agricultural Commodities, Australia (cat. no. 7121.0)
Agricultural Commodities: Small Area Data, Australia (cat. no. 7125.0)
Characteristics of Australia’s Irrigated Farms, 2000–01 to 2003–04 (cat. no. 4623.0)
Methods of estimating the Gross Value of Irrigated Agricultural Production (Information Paper) (cat. no. 4610.0.55.006)
Value of Agricultural Commodities Produced, Australia (cat. no. 7503.0)
Water Account Australia (cat. no. 4610.0)
Water and the Murray-Darling Basin, A Statistical Profile, 2000–01 to 2005–06 (cat. no. 4610.0.55.007)
Water Use on Australian Farms, Australia (cat. no. 4618.0)
http://www.abs.gov.au/AUSSTATS/abs@.nsf/Lookup/4610.0.55.008Explanatory%20Notes12000%E2%80%9301%20-%202008%E2%80%9309?OpenDocument
13
278
(2002-06-23) The Basics: What is a derivative?
Well, let me give you the traditional approach first. This will be complemented by an abstract glimpse of the bigger picture, which is more closely related to the way people actually use derivatives, once they are familiar with them. For a given real-valued function f of a real variable, consider the slope (m) of its graph at some point. That is to say, some straight line of equation y = mx+b (for some irrelevant constant b) is tangent to the graph of f at that point. In some definite sense, mx+b is the best linear approximation to f(x) when x is close to the point under consideration... The tangent line at point x may be defined as the limit of a secant line intersecting a curve at point x and point x+h, when h tends to 0. When the curve is the graph of f, the slope of such a secant is equal to [ f(x+h) - f(x) ] / h, and the derivative (m) at point x is therefore the limit of that quantity, as h tends to 0. The above limit may or may not exist, so the derivative of f at point x may or may not be defined. We'll skip that discussion. The popular trivia question concerning the choice of the letter "m" to denote the slope of a straight line (in most US textbooks) is discussed elsewhere.
Way beyond this introductory scope, we would remark that the quantity we called h is of a vectorial nature (think of a function of several variables), so the derivative at point x is in fact a tensor whose components are called partial derivatives. Also beyond the scope of this article are functions of a complex variable, in which case the above quantity h is simply a complex number, and the above division by h remains thus purely numerical (albeit complex). However, a complex number h (a point on the plane) may approach zero in a variety of ways that are unknown in the realm of real numbers (points on the line). This happens to severely restrict the class of functions for which the above limit exists. Actually, the only functions of a complex variable which have a derivative are the so-called analytic functions [essentially: the convergent sums of power series].
The above is the usual way the concept of derivative is introduced. This traditional presentation may be quite a hurdle to overcome, when given to someone who may not yet be thoroughly familiar with functions and/or limits. Having defined the derivative of f at point x, we define the derivative function g = f ' = D( f ) of the function f, as the function g whose value g(x) at point x is the derivative of f at point x. We could then prove, one by one, the algebraic rules listed in the first lines of the following table. These simple rules allow most derivatives to be easily computed from the derivatives of just a few elementary functions, like those tabulated below (the above theoretical definition is thus rarely used in practice). In the table, u and v are functions of x, whereas a, b and n are constants.
Function f → Derivative D( f ) = f '
- Linearity: a u + b v → a u' + b v'
- Product: u × v → u' × v + u × v'
- Quotient: u / v → [ u' × v − u × v' ] / v²
- Composition: u(v) → v' × u'(v)
- Inversion: v = u⁻¹ → 1 / u'(v)
- x^n → n x^(n−1)
- ln x → 1/x = x⁻¹
- Exponentials: e^x → e^x ; a^x → ln(a) a^x
- sin x → cos x
- cos x → − sin x
- tg x → 1 + (tg x)²
- ln | cos x | → − tg x
- sh x → ch x
- ch x → sh x
- th x → 1 − (th x)²
- ln ( ch x ) → th x
- arcsin x → 1 / √(1 − x²)
- arccos x = π/2 − arcsin x → −1 / √(1 − x²)
- arctg x → 1 / (1 + x²)
- argsh x → 1 / √(1 + x²)
- argch x (for |x| > 1) → 1 / √(x² − 1)
- argth x (for |x| < 1) → 1 / (1 − x²)
- gd x = 2 arctg e^x − π/2 → 1 / ch x
- gd⁻¹ x = ln tg (x/2 + π/4) → 1 / cos x
One abstract approach to the derivative concept would be to bypass (at first) the relevance to slopes, and study the properties of some derivative operator D, in a linear space of abstract functions endowed with an internal product (×), where D is only known to satisfy the following two axioms (which we may call linearity and Leibniz' law, as in the above table):
D( a u + b v ) = a D(u) + b D(v)
D( u × v ) = D(u) × v + u × D(v)
For example, the product rule imposes that D(1) is zero [in the argument of D, we do not distinguish between a function and its value at point x, so that "1" denotes the function whose value is the number 1 at any point x]. The linearity then imposes that D(a) is zero, for any constant a. Repeated applications of the product rule give the derivative of x raised to the power of any integer, so we obtain (by linearity) the correct derivative for any polynomial. (The two rules may also be used to prove the chain rule for polynomials.) A function that has a derivative at point x (defined as a limit) also has arbitrarily close polynomial approximations about x. We could use this fact to show that both definitions of the D operator coincide, whenever both are valid (if we only assume D to be continuous, in a sense which we won't make more precise here). This abstract approach is mostly for educational purposes at the elementary level. For theoretical purposes (at the research level) the abstract viewpoint which has proven to be the most fruitful is totally different: In the Theory of Distributions, a pointwise product like the above (×) is not even defined, whereas everything revolves around the so-called convolution product (*), which has the following strange property concerning the operator D:
D( u * v ) = D(u) * v = u * D(v)
To differentiate a convolution product (u*v), differentiate either factor!
What's the "Fundamental Theorem of Calculus" ?
Once known as Barrow's rule, it states that, if f is the derivative of F, then:
F(b) − F(a) = ∫ f (x) dx  (the integral being taken from x = a to x = b)
In this, if f and F are real-valued functions of a real variable, the right-hand side represents the area between the curve y = f (x) and the x-axis (y = 0), counting positively what's above the axis and negatively [negative area!] what's below it. Any function F whose derivative is equal to f is called a primitive of f (all such primitives simply differ by an arbitrary additive constant, often called constant of integration). A primitive function is often called an indefinite integral (as opposed to a definite integral which is a mere number, not a function, usually obtained as the difference of the values of the primitive at two different points). The usual indefinite notation is:
∫ f (x) dx
At a more abstract level, we may also call "Fundamental Theorem of Calculus" the generalization of the above expressed in the language of differential forms, which is also known as Stokes' Theorem.
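Readers who want to check such rules numerically can do so with a few lines of Python; the functions and evaluation points below are arbitrary choices (note that tg is written tan in most programming languages):

import math

def numerical_derivative(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)      # symmetric difference quotient

x = 0.3
print(numerical_derivative(math.tan, x), 1 + math.tan(x) ** 2)        # tg' = 1 + tg^2
print(numerical_derivative(math.asin, x), 1 / math.sqrt(1 - x ** 2))  # arcsin' = 1/sqrt(1-x^2)

# Fundamental Theorem of Calculus: a midpoint Riemann sum of cos over [0, 1.2]
# should approach sin(1.2) - sin(0).
a, b, n = 0.0, 1.2, 100_000
riemann = sum(math.cos(a + (k + 0.5) * (b - a) / n) for k in range(n)) * (b - a) / n
print(riemann, math.sin(b) - math.sin(a))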
Fundamental Theorem of Calculus (Theorem of the Day #2) by Robin Whitty Example involving complex exponentials What is the indefinite integral of cos(2x) e 3x ? That function is the real part of a complex function of a real variable: (cos 2x + i sin 2x) e 3x = e i (2x) e 3x = e (3+2i) x Since the derivative of exp(a x) / a is exp(a x) we obtain, conversely: e(3+2i) x dx = e(3+2i) x / (3+2i) = e3x (cos 2x + i sin 2x) (3-2i) / 13 The relation we were after is obtained as the real part of the above: cos(2x) e 3x dx = (3 cos 2x + 2 sin 2x) e 3x / 13 Integration by parts A useful technique to reduce the computation of one integral to another. This method was first published in 1715 by Brook Taylor (1685-1731). The product rule states that the derivative (uv)' of a product of two functions is u'v+uv'. When the integral of some function f is sought, integration by parts is a minor art form which attempts to use this backwards, by writing f as a product u'v of two functions, one of which (u') has a known integral (u). In which case: ò f dx = ò u'v dx - ò uv' dx This reduces the computation of the integral of f to that of uv'. The tricky part, of course, is to guess what choice of u would make the latter simpler... The choice u' = 1 (i.e., u = x and v = f ) is occasionally useful. Example: ò ln(x) dx x ln(x) - ò (x/x) dx x ln(x) - x Another classical example pertains to Laplace transforms ( p > 0 ) and/or Heaviside's operational calculus, where all integrals are understood to be definite integrals from 0 to +¥ (with a subexponential function ò f '(t) exp(-pt) dt - f (0) + p ò f (t) exp(-pt) dt Integration by parts What is the perimeter of a parabolic curve, given the base length and height of [the] parabola? Choose the coordinate axes so that your parabola has equation y = x2/2p for some constant parameter p. The length element ds along the parabola is such that (ds)2 = (dx)2 + (dy)2, or ds/dx = Ö(1+(dy/dx)2) = Ö(1 + x2/p2). The length s of the arc of parabola from the apex (0,0) to the point (x, y = x2/2p) is simply the following integral of this (in which we may eliminate x or p, using 2py = x2 ). |s || = ||1 + x2/p2 || + (p/2) ln( ||1 + x2/p2 || + x/p ) ||1 + p/2y || + (p/2) ln( ||1 + 2y/p ||1 + (2y/x)2 || + (x2/4y) ln( ||1 + (2y/x)2 || + 2y/x ) For a symmetrical arc extending on both sides of the parabola's axis, the length is 2s (twice the above). If needed, the whole "perimeter" is 2s+2x. What's the top height of a (parabolic) bridge? If a curved bridge is a foot longer than its mile-long horizontal span... Let's express all distances in feet (a mile is 5280 ft). Using the notations of the previous article, 2x = 5280, 2s = 5281, u = x/p = 2y/x = y/1320 |s / x = 5281 / 5280|| = ½ ||1 + u2 || + (1/2u) ln( ||1 + u2 || + u ) For small values of u, the right-hand side is roughly 1+u2/6. Solving for u the equation thus simplified, we obtain The height y is thus roughly equal to that quantity multiplied by 1320 ft ,or about 44.4972 ft. This approximation is valid for any type of smooth enough curve. It can be refined for the parabolic case using successive approximations to solve for u the above equation. This yields u = 0.0337128658566... which exceeds the above by about 85.2 ppm (ppm = parts per million) for a final result of about 44.5010 ft. The previous solution would have satisfied any engineer before the computer era. (2008-03-27; e-mail) Length of a sagging horizontal cable: How long is a cable which spans 28 m horizontally and sags 300 mm? Answer : Surprisingly, just about 28.00857 m... 
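The bridge numbers quoted above are easy to reproduce. The sketch below is a plain-Python illustration (the bisection bracket and names are my own choices): it solves the exact ratio s/x = 5281/5280 for u = x/p and compares the resulting height 1320 u with the small-u approximation.

import math

def arc_ratio(u):
    # s/x = (1/2) sqrt(1+u^2) + (1/(2u)) ln( u + sqrt(1+u^2) )
    return 0.5 * math.sqrt(1 + u * u) + math.asinh(u) / (2 * u)

target = 5281 / 5280

# arc_ratio is increasing, so a simple bisection on [0.01, 0.1] brackets the root.
lo, hi = 0.01, 0.1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if arc_ratio(mid) < target:
        lo = mid
    else:
        hi = mid
u = 0.5 * (lo + hi)

print(u, 1320 * u)                  # u = 0.0337128..., height about 44.501 ft
print(1320 * math.sqrt(6 / 5280))   # small-u approximation: about 44.497 ft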
Under its own weight, a uniform cable without any rigidity (a "chain") would actually assume the shape of a In a coordinate system with a vertical y-axis and centered on its apex, the catenary has the following cartesian equation: y/a = ch (x/a) -1 ½ (ex/a - 2 + e-x/a ) 2 sh2 (x/2a) Measured from the apex at x = y = 0, the arclength s along the cable is: s = a sh (x/a) Those formulas are not easy to work with, unless the parameter a is given. For example, in the case at hand (a 28 m span with a 0.3 m sag) all we know is: x = 14 y = 0.3 So, we must solve for a (numerically) the transcendantal 0.3 / a = 2 sh2 (7/a) This yields a = 326.716654425... 2s = 2a sh (14 / a) Thus, an 8.57 mm slack produces a 30 cm sag for a 28 m span. In similar cases, the parameter a is also large (it's equal to the radius of curvature at the curve's apex). So, we may find a good approximation to the relevant transcendental equation by equating the sh function to its (small) 2 sh2 (x/2a) a » x2 / 2y whereby s = a sh (x/a) x ( 1 + x2 / 6a2 ) x ( 1 + 2y2 / 3x2 ) This gives 2s 2x ( 1 + 8/3 (y/2x)2 ) = 28.0085714... in the above case. This is indeed a good approximation to the aforementioned exact result. Parabolic Approximation : If we plug the values x = 14 and y = 0.3 in the above formula for the exact length of a parabolic arc, we obtain: 2s = 28.0085690686... Circular Approximation : A thin circular arc of width 2x and of height y has a length || arcsin ( || ) = 28.00857064... In fact, all smooth enough approximations to a flat enough catenary will have a comparable precision, because this is what results from equating a curve to its osculating circle at the lowest point. The approximative expression we derived above in the case of the catenary is indeed quite general: 2 x [ 1 + 8/3 (y/2x) 2 ] Find the ratio, over one revolution, of the distance moved by a wheel rolling on a flat surface to the distance traced out by a point on its circumference. As a wheel of unit radius rolls (on the x-axis), the trajectory of a point on its circumference is a cycloid, whose parametric equation is not difficult to establish: x = t - sin(t) y = 1 - cos(t) In this, the parameter t is the abscissa [x-coordinate] of the center of the wheel. In the first revolution of the wheel (one arch of the cycloid), t goes from 0 to 2p. The length of one full arch of a cycloid ("cycloidal arch") was first worked out in the 17th century by Evangelista Torricelli (1608-1647), just before the advent of the calculus. Let's do it again with modern tools: Calling s the curvilinear abscissa (the length along the curve), we have: (dx)2 + (dy)2 = [(1-cos(t))2 + (sin(t))2](dt)2 (ds/dt)2 = 2 - 2 cos(t) = 4 sin2(t/2) so, if 0 ≤ t ≤ 2p: ds/dt = 2 sin(t/2) ≥ 0 The length of the whole arch is the integral of this when t goes from 0 to 2p and it is therefore equal to 8, [since the indefinite integral is -4 cos(t/2)]. On the other hand, the length of the trajectory of the wheel's center (a straight line) is clearly 2p (the circumference of the wheel). In other words, the trajectory of a point on the circumference is 4/p times as long as the trajectory of the center, for any whole number of revolutions (that's about 27.324% longer, if you prefer). The ratio you asked for is the reciprocal of that, namely p/4 (which is about 0.7853981633974...), the ratio of the circumference of the wheel to the length of the cycloidal arch. However, the result is best memorized as: "The length of a cycloidal arch is 4 times the diameter of the wheel." (from Schenectady, NY. 
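The same kind of quick numerical check reproduces the cable numbers. This is a plain-Python sketch under my own choices (bisection bracket, variable names); it solves 0.3/a = 2 sh^2(7/a) for the parameter a, evaluates the exact length 2a sh(14/a), and compares it with the flat-curve approximation 2x [ 1 + 8/3 (y/2x)^2 ].

import math

def sag_excess(a):
    # 2 sh^2(7/a) - 0.3/a : positive below the root, negative above it.
    return 2 * math.sinh(7 / a)**2 - 0.3 / a

lo, hi = 100.0, 1000.0      # bracket chosen so that sag_excess changes sign
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if sag_excess(mid) > 0:
        lo = mid
    else:
        hi = mid
a = 0.5 * (lo + hi)

print(a)                                        # about 326.7167
print(2 * a * math.sinh(14 / a))                # exact length, about 28.00857 m
print(2 * 14 * (1 + (8 / 3) * (0.3 / 28)**2))   # flat-curve approximation, 28.00857 m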
2003-04-07; e-mail) What is the [indefinite] integral of (tan x)1/3 dx ? An obvious change of variable is to introduce y = tan x [ dy = (1+y2 ) dx ], so the integrand becomes y1/3 dy / (1+y2 ). This suggests a better change of variable, namely: z = y2/3 = (tan x)2/3 [ dz = (2/3)y-1/3 dy ], which yields z dz = (2/3)y1/3 dy, and makes the integrand equal to the following rational function of z, which may be integrated using standard methods (featuring a decomposition into 3 easy-to-integrate terms): (3/2) z dz / (1+z3 ) = ¼ (2z-1) dz / (1-z+z2 ) + (3/4) dz / (1-z+z2 ) - ½ dz / (1+z) As (1-z+z2 ) is equal to the positive quantity ¼ [(2z - 1)2 + 3] , we obtain: ò (tan x)1/3 dx ¼ ln(1-z+z2 ) - ½ ln(1+z) where z stands for | tan x | 2/3 (D. B. of Grand Junction, CO. A particle moves from right to left along the parabola y = Ö(-x) in such a way that its x coordinate decreases at the rate of 8 m/s. When x = -4, how fast is the change in the angle of inclination of the line joining the particle to the origin? We assume all distances are in meters. When the particle is at a negative abscissa x, the (negative) slope of the line in question is y/x = Ö(-x)/x and the corresponding (negative) angle is thus: a = arctg(Ö(-x)/x) [In this, "arctg" is the "Arctangent" function, which is also spelled "atan" in US textbooks.] Therefore, a varies with x at a (negative) rate: da/dx = -1/(2´Ö(-x)(1-x)) (rad/m) If x varies with time as stated, we have dx/dt = -8 m/s, so the angle a varies with time at a (positive) rate: da/dt = 4/(Ö(-x)(1-x)) (rad/s) When x is -4 m, the rate dA/dt is therefore 4/(Ö4 ´5) rad/s = 0.4 rad/s. The angle a, which is always negative, is thus increasing at a rate of 0.4 rad/s when the particle is 4 meters to the left of the origin (rad/s = radian per second). What's the area bounded by the following curves? - y = f(x) = x3 - 9x - y = g(x) = x + 3 The curves intersect when f(x) = g(x), which translates into x3 - 10x - 3 = 0. This cubic equation factors nicely into (x + 3) (x2 - 3x - 1) = 0 , so we're faced with only a quadratic equation... To find if there's a "trivial" integer which is a root of a polynomial with integer coefficients [whose leading coefficient is ±1], observe that such a root would have to divide the constant term. In the above case, we only had 4 possibilities to try, namely -3, -1, +1, +3. The abscissas A < B < C of the three intersections are therefore: A = -3 , B = ½ (3 - Ö13) C = ½ (3 + Ö13) Answering an Ambiguous Question : The best thing to do for a "figure 8", like the one at hand, is to compute the (positive) areas of each of the two lobes. The understanding is that you may add or subtract these, according to your chosen orientation of the boundary: - The area of the lobe from A to B (where f(x) is above g(x)) is the integral of f(x)-g(x) = x3 - 10x - 3 [whose primitive is x4/4 - 5x2 - 3x] from A to B, namely (39Ö13 - 11)/8, or about 16.202... - The area of the lobe from B to C (where f(x) is below g(x)) is the integral of g(x)-f(x) from B to C, namely (39Ö13)/4, or about 35.154... The area we're after is thus either the sum (±51.356...) or the difference (±18.952...) of these two, depending on an ambiguous boundary orientation... If you don't switch curves at point B, the algebraic area may also be obtained as the integral of g(x)-f(x) from A to C (up to a change of sign). Signed Planar Areas Consistently Defined A net planar area is best defined as the apparent area of a 3D loop. 
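The two lobe areas can be checked numerically from the primitive given above; the short Python sketch below (my own illustration) simply evaluates P(x) = x^4/4 - 5x^2 - 3x at the three intersection abscissas.

import math

def P(x):
    # primitive of f(x) - g(x) = x^3 - 10x - 3
    return x**4 / 4 - 5 * x**2 - 3 * x

A = -3.0
B = (3 - math.sqrt(13)) / 2
C = (3 + math.sqrt(13)) / 2

lobe_AB = P(B) - P(A)      # f above g on [A, B]
lobe_BC = P(B) - P(C)      # g above f on [B, C], hence the reversed difference
print(lobe_AB, (39 * math.sqrt(13) - 11) / 8)   # about 16.202
print(lobe_BC, 39 * math.sqrt(13) / 4)          # about 35.154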
The area surrounded by a closed planar curve may be defined in general terms, even when the curve does cross itself The usual algebraic definition of areas depends on the orientation (clockwise or counterclockwise) given to the closed boundary of a simple planar surface. The area is positive if the boundary runs counterclockwise around the surface, and negative otherwise the positive direction of planar angles is always counterclockwise). In the case of a simple closed curve [without any multiple points] this is often overlooked, since we normally consider only whichever orientation of the curve makes the area of its interior positive... The clear fact that there is such an "interior" bounded by any given closed planar curve is known as "Jordan's Theorem". It's a classical example of an "obvious" fact with a rather However, when the boundary has multiple points (like the center of a "figure 8"), there may be more than two oriented boundaries for it, since we may have a choice at a double point: Either the boundary crosses itself or it does not (in the latter case, we make a sharp turn, unless there's an unusual configuration about the intersection). Not all sets of such choices lead to a complete tracing of the whole loop. At left is the easy-to-prove "coloring rule" for a true self-crossing of the boundary, concerning the number of times the ordinary area is to be counted in the "algebraic area" dicussed here. It's nice to consider a given oriented closed boundary as a projection of a three-dimensional loop whose apparent area is defined as a path integral. x dy - y dx - y dx of Hickory, NC. 2001-04-13/email) [How do you generalize the method] of variation of parameters when solving differential equations (DE) of 3rd and higher order? For example: x''' - 3x'' + 4x = exp(2t) In memory of | taught me this and much more, many years ago. As shown below, a high-order linear DE can be reduced to a system of first-order linear differential equations in several variables. Such a system is of the form: X' = dX/dt = AX + B X is a column vector of n unknown functions of t. The square matrix A may depend explicitely on t. B is a vector of n explicit functions of t, called forcing terms. The associated homogeneous system is obtained by letting B = 0. For a nonconstant A, it may be quite difficult to find n independent solutions of this homogeneous system (an art form in itself) but, once you have them, a solution of the forced system may be obtained by generalizing to n variables the method (called "variation of parameters") commonly used for a single variable. Let's do this using only n-dimensional notations: The fundamental object is the square matrix W formed with the n columns corresponding to the n independent solutions of the homogeneous system. Clearly, W itself verifies the homogeneous equation: W' = AW It's an interesting exercise in the manipulation of determinants to prove that det(W)' = tr(A) det(W) (HINT: Differentiating just the i-th line of W gives a matrix whose determinant is the product of det(W) by the i-th component in the diagonal of the matrix A). Since det(W), the so-called "Wronskian", is thus solution of a first-order linear DE, it's proportional to the exponential of some function and is therefore either nonzero everywhere or zero everywhere. (Also, the Wronskians for different sets of homogeneous solutions must be proportional.) Homogeneous solutions that are linearly independent at some point are therefore independent everywhere and W(t) has an inverse for any t. 
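The path-integral definition of signed area, (1/2) times the loop integral of (x dy - y dx), has a familiar discrete counterpart: the "shoelace" formula for a closed polygon, positive when the boundary is traversed counterclockwise and negative otherwise. The sketch below is my own illustration of that sign convention, not something taken from the text.

def signed_area(points):
    # Discrete version of (1/2) * integral of (x dy - y dx) around the boundary.
    total = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        total += x0 * y1 - y0 * x1
    return total / 2

unit_square_ccw = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(signed_area(unit_square_ccw))         # +1.0 (counterclockwise)
print(signed_area(unit_square_ccw[::-1]))   # -1.0 (clockwise)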
We may thus look for the solution X to the nonhomogeneous system in the form X = WY : AX + B = X' = W'Y + WY' = AWY + WY' = AX + WY' Therefore, B = WY' So, Y is simply obtained by integrating W-1 B and the general solution of the forced system may be expressed as follows, with a constant vector K (whose n components are the n "constants of integration"). This looks very much like the corresponding formula for a single variable : X(t) = W(t) [ K + W-1(u) B(u) du ] Linear Differential Equation of Order n : A linear differential equation of order n has the following form (where ak and b are explicit functions of t): an-1 x(n-1) + ... + a3 x(3) + a2 x" + a1 x' + a0 x This reduces to the above system X' = AX + B with the following notations : || X = || B = The first n-1 components in the equation X' = AX+B merely define each component of X as the derivative of the previous one, whereas the last component expresses the original high-order differential equation. Now, the general discussion above applies fully with a W matrix whose first line consists of n independent solutions of the homogeneous equation (each subsequent line is simply the derivative of its predecessor). Here comes the Green function... We need not work out every component of W-1 since we're only interested in the first component of X... The above boxed formula tells us that we only need the first component of W(t)W-1(u)B(u) which may be written G(t,u)b(u), by calling G(t,u) the first component of W(t)W-1(u)Z, where Z is a vector whose component are all zero, except the last one which is one. G(t,u) is called the Green function associated to the given homogeneous equation. It has a simple expression (given below) in terms of a ratio of determinants computed for independent solutions of the homogeneous equation. (Such an expression makes it easy to prove that the Green function is indeed associated to the equation itself and not to a particular set of independent solutions, as it is clearly invariant if you replace any solution by some linear combination in which it appears with a nonzero coefficient.) For a third-order equation with homogeneous solutions A(t), B(t) and C(t), the expression of the Green function (which generalizes to any order) is simply: It's also a good idea to define G(t,u) to be zero when u>t, since such values of G(t,u) are not used in the integral ò t G(t,u) b(u) du. This convention allows us to drop the upper limit of the integral, so we may write a special solution of the inhomogeneous equation as the definite integral (from -¥ to +¥, whenever it converges): ò G(t,u) b(u) du. If this integral does not converge (the issue may only arise when u goes to -¥), we may still use this formal expression by considering that the forcing term b(u) is zero at any time t earlier than whatever happens to be the earliest time we wish to consider. (This is one unsatisfying way to reestablish some kind of fixed arbitrary lower bound for the integral of interest when the only natural one, namely -¥, is not acceptable.) In the case of the equation x''' - 3x" + 4x = exp(2t), three independent solutions are A(t) = exp(-t), B(t) = exp(2t), and C(t) = t exp(2t). This makes the denominator in the above (the "Wronskian") equal to 9 exp(3u) whereas the numerator is With those values, the integral of G(t,u)exp(2u)(u)du when u goes from 0 to t turns out to be equal to f(t) = [ (9t2-6t+2)exp(2t) - 2 exp(-t) ]/54, which is therefore a special solution of your equation. 
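Both quoted facts, the Wronskian 9 exp(3u) and the special solution f(t), can be verified symbolically. The sketch below assumes SymPy is available; the three homogeneous solutions and the forced equation themselves are taken from the text.

import sympy as sp

t = sp.symbols('t')

# Wronskian of the homogeneous solutions exp(-t), exp(2t), t*exp(2t)
sols = [sp.exp(-t), sp.exp(2 * t), t * sp.exp(2 * t)]
W = sp.Matrix([[sp.diff(s, t, k) for s in sols] for k in range(3)])
print(sp.simplify(W.det()))      # 9*exp(3*t)

# The stated special solution of x''' - 3x'' + 4x = exp(2t)
x = ((9 * t**2 - 6 * t + 2) * sp.exp(2 * t) - 2 * sp.exp(-t)) / 54
residual = sp.diff(x, t, 3) - 3 * sp.diff(x, t, 2) + 4 * x - sp.exp(2 * t)
print(sp.simplify(residual))     # 0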
The general solution may be expressed as: x(t) = (a + bt + t2/6) exp(2t) + c exp(-t) [ a, b and c are constant ] Clearly, this result could have been obtained without this heavy artillery: Once you've solved the homogeneous equation and realized that the forcing term is a solution of it, it is very natural to look for an inhomogeneous solution of the form z exp(2t) and find that z"=1/3 works. That's far less tedious than computing and using the associated Green's function. However, efficiency in this special case is not what the question was all about... Convolutions and the Theory of Distributions An introduction to the epoch-making approach of Laurent Schwartz. The above may be dealt with using the elegant idea of convolution products among distributions. The notorious Theory of Distributions occurred to the late Schwartz (1915-2002) "one night in 1944". For this, he received the first ever awarded to a Frenchman, in 1950. (Schwartz taught me functional analysis in the Fall of 1977.) A linear differential equation with constant coefficients (an important special case) may be expressed as a convolution a * x = b. The convolution operator * is bilinear, associative and commutative. Its identity element is the Delta distribution d (dubbed Dirac's "function"). Loosely speaking, the Delta distribution d would correspond to a "function" whose integral is 1, but whose value at every point except zero is zero. The integral of an ordinary function which is zero almost everywhere would necessarily be zero. Therefore, the d distribution cannot possibly be an ordinary function: Convolutions must be put in the proper context of the Theory of Distributions. A strong case can be made that the convolution product is the notion that gives rise to the very concept of distribution. Distributions had been used loosely by physicists for a long time, when Schwartz finally found a very simple mathematical definition for them: Considering a (very restricted) space D of so-called test functions, a distribution is simply a linear function which associates a scalar to every test function. Although other possibilities have been studied (which give rise to less general distributions) D is normally the so-called Schwartz space of infinitely derivable functions of compact support These are perfectly smooth functions vanishing outside of a bounded domain, like the function of x which is exp(-1 / (1-x 2 )) in [-1,+1] and 0 elsewhere. What could be denoted f(g) is written This hint of an ultimate symmetry between the rôles of f and g is fulfilled by the following relation, which holds whenever the integral exists for ordinary functions f and g. ò f(t-u)g(u) du This relation may be used to establish commutativity (switch the variable to v = t-u, going from +¥ to -¥ when u goes from -¥ to +¥). The associativity of the convolution product is obtained by figuring out a double integral. Convolutions have many stunning properties. In particular, the Fourier transform of the convolution product of two functions is the ordinary product of their Fourier transforms. 
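That last property has an exact discrete analogue which is easy to test: for circular convolution of finite sequences, the discrete Fourier transform of the convolution equals the pointwise product of the transforms. The check below assumes NumPy is available; the sequence length and random seed are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(64)
b = rng.standard_normal(64)

# circular convolution, written out explicitly
conv = np.array([sum(a[k] * b[(n - k) % 64] for k in range(64)) for n in range(64)])

lhs = np.fft.fft(conv)
rhs = np.fft.fft(a) * np.fft.fft(b)
print(np.allclose(lhs, rhs))     # True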
Another key property is that the derivative of a convolution product may be obtained by differentiating either one of its factors: This means the derivatives of a function f can be expressed as convolutions, using the derivatives of the d distribution (strange but useful beasts): f = d * f f' = d' f'' = d'' If the n-th order linear differential equation discussed above has constant coefficients, we may write it as f*x = b by introducing the distribution f = d(n) + an-1 d(n-1) + ... + a3 d(3) + a2 d" + a1 d' + Clearly, if we we have a function such that we will obtain a special solution of the inhomogeneous equation as If you translate the convolution product into an integral, what you obtain is thus the general expression involving a Green function G(t,u)=g(t-u), where g(v) is zero for negative values of v. The case where coefficients are constant is therefore much simpler than the general case: Where you had a two-variable integrator, you now have a single-variable one. Not only that, but the homogeneous solutions are well-known (if z is an eigenvalue of multiplicity n+1 for the matrix involved, the product of exp(zt) by any polynomial of degree n, or less, is a solution). In the important special case where all the eigenvalues are distinct, the determinants involved in the expression of G(t,u)=g(t-u) are essentially or Vandermonde cofactors (a Vandermonde determinant is a determinant where each column consists of the successive powers of a particular number). The expression is thus fairly easy to work out and may be put into the following simple form, involving the characteristic polynomial P for the equation (it's also the characteristic polynomial of the matrix we called A in the above). For any eigenvalue z, the derivative P'(z) is the product of the all the differences between that eigenvalue and each of the others (which is what Vandermonde expressions entail): exp(z1v) / P'(z1) + exp(z2v) / P'(z2) + ... + exp(znv) / P'(zn) With this, x = g*b is indeed a special solution of our original equation f*x = b (Brent Watts of Hickory, NC. do you use Laplace transforms to solve this differential system? Initial conditions, for t=0 : w=0, w'=1, y=0, y'=0, z= -1, z'=1. - w" + y + z = -1 - w + y" - z = 0 - -w' -y' + z"=0 The (unilateral) Laplace transform g(p) of a function f(t) is given by: g(p) = òo¥ f(t) exp(-pt) dt This is defined, for a positive p, whenever the integral makes sense. For example, the Laplace transform of a constant k is the function g such that g(p) = k/p. Integrating by parts f '(t) exp(-pt) dt gives a simple relation, which may be iterated, between the respective Laplace transforms L(f ') and L(f) of f ' and f : L(f ')[p] = -f(0) + p L(f)[p] L(f")[p] = -f '(0) + p L(f ')[p] = -f '(0) - p f(0) + p2 L(f)[p] This is the basis of the so-called Operational Calculus, invented by Oliver Heaviside (1850-1925), which translates many practical systems of differential equations into algebraic ones. (Originally, Heaviside was interested in the transient solutions to the simple differential equations arising in electrical circuits). In this particular case, we may use capital letters to denote Laplace transforms of lowercase functions (W=L(w), Y=L(y), Z=L(z)...) 
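Here is a hedged illustration of that formula on a small example of my own choosing (not the third-order equation above; SymPy is assumed): for x'' - 3x' + 2x = b, the characteristic polynomial P(z) = z^2 - 3z + 2 has simple roots 1 and 2, so g(v) = exp(v)/P'(1) + exp(2v)/P'(2) = exp(2v) - exp(v), and x = g * b should solve the forced equation.

import sympy as sp

t, u = sp.symbols('t u')
g = sp.exp(2 * (t - u)) - sp.exp(t - u)    # g(t-u) from the P'(z) formula
b = u                                      # arbitrary test forcing term b(u) = u

x = sp.integrate(g * b, (u, 0, t))         # x(t) = integral of g(t-u) b(u) du
residual = sp.diff(x, t, 2) - 3 * sp.diff(x, t) + 2 * x - t
print(sp.simplify(residual))               # 0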
and your differential system translates into: In other words: - (p2 W - 1 - 0p)+ Y + Z = -1/p - W + (p2 Y - 0 - 0p) - Z = 0 - -(pW - 0) -(pY - 0) + (p2 Z - 1 + p) = 0 Solve for W,Y and Z and express the results as simple sums (that's usually the tedious part, but this example is clearly designed to be simpler than usual): - p2 W + Y + Z = 1 -1/p - W + p2 Y - Z = 0 - -pW -pY + p2 Z = 1-p The last step is to go from these Laplace transforms back to the original (lowercase) functions of t, with a reverse lookup using a table of Laplace transforms, similar to the (short) one provided below. - W = 1/(p2 +1) - Y = p/(p2 +1) - 1/p - Z = 1/(p2 +1) - p/(p2 +1)2 - w = sin(t) - y = cos(t) - 1 - z = sin(t) - cos(t) With other initial conditions, solutions may involve various linear combinations of no fewer than 5 different types of functions (namely: sin(t), cos(t), exp(-t), t and the constant 1), which would make a better showcase for Operational Calculus than this particularly simple example... Below is a small table of Laplace transforms. This table enables a reverse lookup which is more than sufficient to solve the above for any set of initial conditions: = òo¥ f(t) exp(-pt) dt |1 = t 0||1/p| |t n||n! / pn+1| |exp(at)||1 / (p-a)| |sin(kt)||k / (p2 + k2 )| |cos(kt)||p / (p2 + k2 )| |exp(at) sin(kt)||k / ([p-a]2 + k2 )| |exp(at) cos(kt)||[p-a] / ([p-a]2 + k2 )| |d [Dirac Delta]||1| |f '(t)||p g(p) - f(0)| |f ''(t)||p2 g(p) - p f '(0)| Brent Watts of Hickory, NC. 1) What is an example of a function for which the integral from -¥ to +¥ of |f(x)| dx exists, but [that of] of f(x)dx does not? 2) [What is an example of a function f ] for which the opposite is true? The integral from -¥ to +¥ exists for f(x)dx but not for |f(x)|dx . 1) Consider any nonmeasurable set E within the interval [0,1] (the existence of such a set is guaranteed by Zermelo's Axiom of Choice) and define f(x) to be: The function f is not Lebesgue-integrable, but its absolute value clearly is (|f(x)| is equal to 1 on [0,1] and - +1 if x is in E - -1 if x is in [0,1] but not in E - 0 if x is outside [0,1] That was for Lebesgue integration. For Riemann integration, you may construct a simpler example by letting the above E be the set of rationals between 0 and 1. 2) On the other hand, the function sin(x)/x is a simple example of a function which is Riemann-integrable over (Riemann integration can be defined over an infinite interval, although it's not usually done in basic textbooks), whereas the absolute value |sin(x)/x| is not. Neither function is Lebesgue-integrable over although both are over any finite interval. Show that: f (D)[eax y] = eax f (D+a)[y] , where D is the operator d/dx. The notation has to be explained to readers not familiar with If f (x) is the converging sum of all terms (for some scalar sequence f is called an analytic function [about zero] and it can be defined for some nonnumerical things that can be added, scaled or "exponentiated"... The possibility of exponentiation to the power of a nonnegative integer reasonably requires the definition of some kind of with a neutral element (in order to define the zeroth power) but that multiplication need not be commutative or even associative. The lesser requirement of alternativity suffices (as is observed in the case of the octonions). Here we shall focus on the multiplication of square matrices of finite sizes which corresponds to the composition of linear functions in a vector space of finitely many dimensions. 
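Before doing the reverse lookup, it is reassuring to confirm that w = sin t, y = cos t - 1 and z = sin t - cos t do satisfy the three coupled equations and the stated initial conditions. A short check, assuming SymPy is available:

import sympy as sp

t = sp.symbols('t')
w, y, z = sp.sin(t), sp.cos(t) - 1, sp.sin(t) - sp.cos(t)

eq1 = sp.diff(w, t, 2) + y + z + 1                          # w'' + y + z = -1
eq2 = w + sp.diff(y, t, 2) - z                              # w  + y'' - z = 0
eq3 = -sp.diff(w, t) - sp.diff(y, t) + sp.diff(z, t, 2)     # -w' - y' + z'' = 0
print([sp.simplify(e) for e in (eq1, eq2, eq3)])            # [0, 0, 0]

# initial values of w, w', y, y', z, z' at t = 0  ->  [0, 1, 0, 0, -1, 1]
print([f.subs(t, 0) for f in (w, sp.diff(w, t), y, sp.diff(y, t), z, sp.diff(z, t))])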
If M is a finite square matrix representing some linear operator (which we shall denote by the same symbol M for convenience) f (M) is defined as a power series of M. If there's a vector basis in which the operator M is diagonal, f (M) is diagonal in that same basis, with f (z) appearing on the diagonal of f (M) wherever z appears in the diagonal of M. Now, the differential operator D is a linear operator like any other, whether it operates on a space of finitely many dimensions (for example, polynomials of degree 57 or less) or infinitely many dimensions (polynomials, formal series...). f (D) may thus be defined the same way. It's a formal definition which may or may not have a numerical counterpart, as the formal series involved may or may not converge. The same thing applies to any other differential operator, and this is how f (D) and f (D+a) are to be interpreted. To prove that a linear relation holds when f appears homogeneously (as is the case here), it is enough to prove that it holds for any n when f (x)=xn : - The relation is trivial for n=0 (the zeroth power of any operator is the identity operator) as the relation translates into exp(ax)y = exp(ax)y. - The case n=1 is: D[exp(ax)y] = a exp(ax)y + exp(ax)D[y] = exp(ax)(D+a)[y]. - The case n=2 is obtained by differentiating the case n=1 exactly like the case n+1 is obtained by differentiating case n, namely: Dn+1[exp(ax)y] = D[exp(ax)(D+a)n(y)] = a exp(ax)(D+a)n[y] + exp(ax) D[(D+a)n(y)] = exp(ax) (D+a)[(D+a)n(y)] = exp(ax) (D+a)n+1[y]. This completes a proof by induction for any f (x) = xn, which establishes the relation for any analytic function f, through summation of such elementary results.
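The identity can also be spot-checked on a concrete case. In the sketch below (SymPy assumed; the choices f(s) = s^2 + s, a = 3 and y = sin x are mine), f(D) means D^2 + D and f(D+a) means D^2 + (2a+1)D + (a^2 + a).

import sympy as sp

x = sp.symbols('x')
a = 3
y = sp.sin(x)

def f_of_D(expr):
    # f(D) = D^2 + D
    return sp.diff(expr, x, 2) + sp.diff(expr, x)

def f_of_D_plus_a(expr):
    # f(D+a) = (D+a)^2 + (D+a) = D^2 + (2a+1) D + (a^2 + a)
    return sp.diff(expr, x, 2) + (2 * a + 1) * sp.diff(expr, x) + (a * a + a) * expr

lhs = f_of_D(sp.exp(a * x) * y)
rhs = sp.exp(a * x) * f_of_D_plus_a(y)
print(sp.simplify(lhs - rhs))    # 0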
An earthquake (also known as a quake, tremor or temblor) is the result of a sudden release of energy in the Earth's crust that creates seismic waves. The seismicity, seismism or seismic activity of an area refers to the frequency, type and size of earthquakes experienced over a period of time. Earthquakes are measured using observations from seismometers. The moment magnitude is the most common scale on which earthquakes larger than approximately 5 are reported for the entire globe. The more numerous earthquakes smaller than magnitude 5 reported by national seismological observatories are measured mostly on the local magnitude scale, also referred to as the Richter scale. These two scales are numerically similar over their range of validity. Magnitude 3 or lower earthquakes are mostly imperceptible or weak, while magnitude 7 and over potentially cause serious damage over large areas, depending on their depth. The largest earthquakes in historic times have been of magnitude slightly over 9, although there is no limit to the possible magnitude. The most recent earthquake of magnitude 9.0 or larger was the 2011 earthquake in Japan (as of October 2012), the largest Japanese earthquake since records began. Intensity of shaking is measured on the modified Mercalli scale. The shallower an earthquake, the more damage to structures it causes, all else being equal. At the Earth's surface, earthquakes manifest themselves by shaking and sometimes displacement of the ground. When the epicenter of a large earthquake is located offshore, the seabed may be displaced sufficiently to cause a tsunami. Earthquakes can also trigger landslides and, occasionally, volcanic activity. In its most general sense, the word earthquake is used to describe any seismic event, whether natural or caused by humans, that generates seismic waves. Earthquakes are caused mostly by rupture of geological faults, but also by other events such as volcanic activity, landslides, mine blasts, and nuclear tests. An earthquake's point of initial rupture is called its focus or hypocenter. The epicenter is the point at ground level directly above the hypocenter.

Naturally occurring earthquakes

Tectonic earthquakes occur anywhere in the earth where there is sufficient stored elastic strain energy to drive fracture propagation along a fault plane. The sides of a fault move past each other smoothly and aseismically only if there are no irregularities or asperities along the fault surface that increase the frictional resistance. Most fault surfaces do have such asperities, and this leads to a form of stick-slip behaviour. Once the fault has locked, continued relative motion between the plates leads to increasing stress, and therefore stored strain energy, in the volume around the fault surface. This continues until the stress has risen sufficiently to break through the asperity, suddenly allowing sliding over the locked portion of the fault and releasing the stored energy. This energy is released as a combination of radiated elastic strain seismic waves, frictional heating of the fault surface, and cracking of the rock, thus causing an earthquake. This process of gradual build-up of strain and stress, punctuated by occasional sudden earthquake failure, is referred to as the elastic-rebound theory. It is estimated that only 10 percent or less of an earthquake's total energy is radiated as seismic energy. 
Most of the earthquake's energy is used to power the earthquake fracture growth or is converted into heat generated by friction. Therefore, earthquakes lower the Earth's available elastic potential energy and raise its temperature, though these changes are negligible compared to the conductive and convective flow of heat out from the Earth's deep interior. Earthquake fault types There are three main types of fault, all of which may cause an earthquake: normal, reverse (thrust) and strike-slip. Normal and reverse faulting are examples of dip-slip, where the displacement along the fault is in the direction of dip and movement on them involves a vertical component. Normal faults occur mainly in areas where the crust is being extended such as a divergent boundary. Reverse faults occur in areas where the crust is being shortened such as at a convergent boundary. Strike-slip faults are steep structures where the two sides of the fault slip horizontally past each other; transform boundaries are a particular type of strike-slip fault. Many earthquakes are caused by movement on faults that have components of both dip-slip and strike-slip; this is known as oblique slip. Reverse faults, particularly those along convergent plate boundaries are associated with the most powerful earthquakes, including almost all of those of magnitude 8 or more. Strike-slip faults, particularly continental transforms can produce major earthquakes up to about magnitude 8. Earthquakes associated with normal faults are generally less than magnitude 7. This is so because the energy released in an earthquake, and thus its magnitude, is proportional to the area of the fault that ruptures and the stress drop. Therefore, the longer the length and the wider the width of the faulted area, the larger the resulting magnitude. The topmost, brittle part of the Earth's crust, and the cool slabs of the tectonic plates that are descending down into the hot mantle, are the only parts of our planet which can store elastic energy and release it in fault ruptures. Rocks hotter than about 300 degrees Celsius flow in response to stress; they do not rupture in earthquakes. The maximum observed lengths of ruptures and mapped faults, which may break in one go are approximately 1000 km. Examples are the earthquakes in Chile, 1960; Alaska, 1957; Sumatra, 2004, all in subduction zones. The longest earthquake ruptures on strike-slip faults, like the San Andreas Fault (1857, 1906), the North Anatolian Fault in Turkey (1939) and the Denali Fault in Alaska (2002), are about half to one third as long as the lengths along subducting plate margins, and those along normal faults are even shorter. The most important parameter controlling the maximum earthquake magnitude on a fault is however not the maximum available length, but the available width because the latter varies by a factor of 20. Along converging plate margins, the dip angle of the rupture plane is very shallow, typically about 10 degrees. Thus the width of the plane within the top brittle crust of the Earth can become 50 to 100 km (Japan, 2011; Alaska, 1964), making the most powerful earthquakes possible. Strike-slip faults tend to be oriented near vertically, resulting in an approximate width of 10 km within the brittle crust, thus earthquakes with magnitudes much larger than 8 are not possible. Maximum magnitudes along many normal faults are even more limited because many of them are located along spreading centers, as in Iceland, where the thickness of the brittle layer is only about 6 km. 
In addition, there exists a hierarchy of stress level in the three fault types. Thrust faults are generated by the highest, strike slip by intermediate, and normal faults by the lowest stress levels. This can easily be understood by considering the direction of the greatest principal stress, the direction of the force that 'pushes' the rock mass during the faulting. In the case of normal faults, the rock mass is pushed down in a vertical direction, thus the pushing force (greatest principal stress) equals the weight of the rock mass itself. In the case of thrusting, the rock mass 'escapes' in the direction of the least principal stress, namely upward, lifting the rock mass up, thus the overburden equals the least principal stress. Strike-slip faulting is intermediate between the other two types described above. This difference in stress regime in the three faulting environments can contribute to differences in stress drop during faulting, which contributes to differences in the radiated energy, regardless of fault dimensions. Earthquakes away from plate boundaries Where plate boundaries occur within continental lithosphere, deformation is spread out over a much larger area than the plate boundary itself. In the case of the San Andreas fault continental transform, many earthquakes occur away from the plate boundary and are related to strains developed within the broader zone of deformation caused by major irregularities in the fault trace (e.g., the "Big bend" region). The Northridge earthquake was associated with movement on a blind thrust within such a zone. Another example is the strongly oblique convergent plate boundary between the Arabian and Eurasian plates where it runs through the northwestern part of the Zagros mountains. The deformation associated with this plate boundary is partitioned into nearly pure thrust sense movements perpendicular to the boundary over a wide zone to the southwest and nearly pure strike-slip motion along the Main Recent Fault close to the actual plate boundary itself. This is demonstrated by earthquake focal mechanisms. All tectonic plates have internal stress fields caused by their interactions with neighbouring plates and sedimentary loading or unloading (e.g. deglaciation). These stresses may be sufficient to cause failure along existing fault planes, giving rise to intraplate earthquakes. Shallow-focus and deep-focus earthquakes The majority of tectonic earthquakes originate at the ring of fire in depths not exceeding tens of kilometers. Earthquakes occurring at a depth of less than 70 km are classified as 'shallow-focus' earthquakes, while those with a focal-depth between 70 and 300 km are commonly termed 'mid-focus' or 'intermediate-depth' earthquakes. In subduction zones, where older and colder oceanic crust descends beneath another tectonic plate, deep-focus earthquakes may occur at much greater depths (ranging from 300 up to 700 kilometers). These seismically active areas of subduction are known as Wadati-Benioff zones. Deep-focus earthquakes occur at a depth where the subducted lithosphere should no longer be brittle, due to the high temperature and pressure. A possible mechanism for the generation of deep-focus earthquakes is faulting caused by olivine undergoing a phase transition into a spinel structure. Earthquakes and volcanic activity Earthquakes often occur in volcanic regions and are caused there, both by tectonic faults and the movement of magma in volcanoes. 
Such earthquakes can serve as an early warning of volcanic eruptions, as during the Mount St. Helens eruption of 1980. Earthquake swarms can serve as markers for the location of the flowing magma throughout the volcanoes. These swarms can be recorded by seismometers and tiltmeters (a device that measures ground slope) and used as sensors to predict imminent or upcoming eruptions. A tectonic earthquake begins by an initial rupture at a point on the fault surface, a process known as nucleation. The scale of the nucleation zone is uncertain, with some evidence, such as the rupture dimensions of the smallest earthquakes, suggesting that it is smaller than 100 m while other evidence, such as a slow component revealed by low-frequency spectra of some earthquakes, suggest that it is larger. The possibility that the nucleation involves some sort of preparation process is supported by the observation that about 40% of earthquakes are preceded by foreshocks. Once the rupture has initiated it begins to propagate along the fault surface. The mechanics of this process are poorly understood, partly because it is difficult to recreate the high sliding velocities in a laboratory. Also the effects of strong ground motion make it very difficult to record information close to a nucleation zone. Rupture propagation is generally modeled using a fracture mechanics approach, likening the rupture to a propagating mixed mode shear crack. The rupture velocity is a function of the fracture energy in the volume around the crack tip, increasing with decreasing fracture energy. The velocity of rupture propagation is orders of magnitude faster than the displacement velocity across the fault. Earthquake ruptures typically propagate at velocities that are in the range 70–90% of the S-wave velocity and this is independent of earthquake size. A small subset of earthquake ruptures appear to have propagated at speeds greater than the S-wave velocity. These supershear earthquakes have all been observed during large strike-slip events. The unusually wide zone of coseismic damage caused by the 2001 Kunlun earthquake has been attributed to the effects of the sonic boom developed in such earthquakes. Some earthquake ruptures travel at unusually low velocities and are referred to as slow earthquakes. A particularly dangerous form of slow earthquake is the tsunami earthquake, observed where the relatively low felt intensities, caused by the slow propagation speed of some great earthquakes, fail to alert the population of the neighbouring coast, as in the 1896 Meiji-Sanriku earthquake. Most earthquakes form part of a sequence, related to each other in terms of location and time. Most earthquake clusters consist of small tremors that cause little to no damage, but there is a theory that earthquakes can recur in a regular pattern. An aftershock is an earthquake that occurs after a previous earthquake, the mainshock. An aftershock is in the same region of the main shock but always of a smaller magnitude. If an aftershock is larger than the main shock, the aftershock is redesignated as the main shock and the original main shock is redesignated as a foreshock. Aftershocks are formed as the crust around the displaced fault plane adjusts to the effects of the main shock. Earthquake swarms are sequences of earthquakes striking in a specific area within a short period of time. 
They are different from earthquakes followed by a series of aftershocks by the fact that no single earthquake in the sequence is obviously the main shock, therefore none have notable higher magnitudes than the other. An example of an earthquake swarm is the 2004 activity at Yellowstone National Park. In August 2012, a swarm of earthquakes shook Southern California's Imperial Valley, showing the most recorded activity in the area since the 1970s. Sometimes a series of earthquakes occur in a sort of earthquake storm, where the earthquakes strike a fault in clusters, each triggered by the shaking or stress redistribution of the previous earthquakes. Similar to aftershocks but on adjacent segments of fault, these storms occur over the course of years, and with some of the later earthquakes as damaging as the early ones. Such a pattern was observed in the sequence of about a dozen earthquakes that struck the North Anatolian Fault in Turkey in the 20th century and has been inferred for older anomalous clusters of large earthquakes in the Middle East. Size and frequency of occurrence It is estimated that around 500,000 earthquakes occur each year, detectable with current instrumentation. About 100,000 of these can be felt. Minor earthquakes occur nearly constantly around the world in places like California and Alaska in the U.S., as well as in Mexico, Guatemala, Chile, Peru, Indonesia, Iran, Pakistan, the Azores in Portugal, Turkey, New Zealand, Greece, Italy, India and Japan, but earthquakes can occur almost anywhere, including New York City, London, and Australia. Larger earthquakes occur less frequently, the relationship being exponential; for example, roughly ten times as many earthquakes larger than magnitude 4 occur in a particular time period than earthquakes larger than magnitude 5. In the (low seismicity) United Kingdom, for example, it has been calculated that the average recurrences are: an earthquake of 3.7–4.6 every year, an earthquake of 4.7–5.5 every 10 years, and an earthquake of 5.6 or larger every 100 years. This is an example of the Gutenberg–Richter law. The number of seismic stations has increased from about 350 in 1931 to many thousands today. As a result, many more earthquakes are reported than in the past, but this is because of the vast improvement in instrumentation, rather than an increase in the number of earthquakes. The United States Geological Survey estimates that, since 1900, there have been an average of 18 major earthquakes (magnitude 7.0–7.9) and one great earthquake (magnitude 8.0 or greater) per year, and that this average has been relatively stable. In recent years, the number of major earthquakes per year has decreased, though this is probably a statistical fluctuation rather than a systematic trend. More detailed statistics on the size and frequency of earthquakes is available from the United States Geological Survey (USGS). A recent increase in the number of major earthquakes has been noted, which could be explained by a cyclical pattern of periods of intense tectonic activity, interspersed with longer periods of low-intensity. However, accurate recordings of earthquakes only began in the early 1900s, so it is too early to categorically state that this is the case. Most of the world's earthquakes (90%, and 81% of the largest) take place in the 40,000 km long, horseshoe-shaped zone called the circum-Pacific seismic belt, known as the Pacific Ring of Fire, which for the most part bounds the Pacific Plate. 
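The tenfold drop in frequency per unit of magnitude corresponds to a Gutenberg-Richter exponent b of about 1. The sketch below is purely illustrative; the constants a and b are assumptions for demonstration, not values taken from the text.

def yearly_count(magnitude, a=8.0, b=1.0):
    # Gutenberg-Richter relation: log10 N(>= M) = a - b*M, with illustrative a and b.
    return 10 ** (a - b * magnitude)

print(yearly_count(4) / yearly_count(5))   # 10.0: ten times as many above magnitude 4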
Massive earthquakes tend to occur along other plate boundaries, too, such as along the Himalayan Mountains. With the rapid growth of mega-cities such as Mexico City, Tokyo and Tehran, in areas of high seismic risk, some seismologists are warning that a single quake may claim the lives of up to 3 million people. While most earthquakes are caused by movement of the Earth's tectonic plates, human activity can also produce earthquakes. Four main activities contribute to this phenomenon: storing large amounts of water behind a dam (and possibly building an extremely heavy building), drilling and injecting liquid into wells, and by coal mining and oil drilling. Perhaps the best known example is the 2008 Sichuan earthquake in China's Sichuan Province in May; this tremor resulted in 69,227 fatalities and is the 19th deadliest earthquake of all time. The Zipingpu Dam is believed to have fluctuated the pressure of the fault 1,650 feet (503 m) away; this pressure probably increased the power of the earthquake and accelerated the rate of movement for the fault. The greatest earthquake in Australia's history is also claimed to be induced by humanity, through coal mining. The city of Newcastle was built over a large sector of coal mining areas. The earthquake has been reported to be spawned from a fault that reactivated due to the millions of tonnes of rock removed in the mining process. Measuring and locating earthquakes Earthquakes can be recorded by seismometers up to great distances, because seismic waves travel through the whole Earth's interior. The absolute magnitude of a quake is conventionally reported by numbers on the Moment magnitude scale (formerly Richter scale, magnitude 7 causing serious damage over large areas), whereas the felt magnitude is reported using the modified Mercalli intensity scale (intensity II–XII). Every tremor produces different types of seismic waves, which travel through rock with different velocities: - Longitudinal P-waves (shock- or pressure waves) - Transverse S-waves (both body waves) - Surface waves — (Rayleigh and Love waves) Propagation velocity of the seismic waves ranges from approx. 3 km/s up to 13 km/s, depending on the density and elasticity of the medium. In the Earth's interior the shock- or P waves travel much faster than the S waves (approx. relation 1.7 : 1). The differences in travel time from the epicentre to the observatory are a measure of the distance and can be used to image both sources of quakes and structures within the Earth. Also the depth of the hypocenter can be computed roughly. In solid rock P-waves travel at about 6 to 7 km per second; the velocity increases within the deep mantle to ~13 km/s. The velocity of S-waves ranges from 2–3 km/s in light sediments and 4–5 km/s in the Earth's crust up to 7 km/s in the deep mantle. As a consequence, the first waves of a distant earthquake arrive at an observatory via the Earth's mantle. On average, the kilometer distance to the earthquake is the number of seconds between the P and S wave times 8. Slight deviations are caused by inhomogeneities of subsurface structure. By such analyses of seismograms the Earth's core was located in 1913 by Beno Gutenberg. Earthquakes are not only categorized by their magnitude but also by the place where they occur. The world is divided into 754 Flinn-Engdahl regions (F-E regions), which are based on political and geographical boundaries as well as seismic activity. 
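The "seconds between the P and S wave times 8" rule follows from the difference in travel times over the same path: distance = delay / (1/vS - 1/vP). The sketch below uses assumed typical crustal velocities (vP = 6 km/s, vS = 3.5 km/s, both my own round numbers), which give roughly 8 km of distance per second of S-minus-P delay.

def distance_km(sp_delay_seconds, vp=6.0, vs=3.5):
    # Assumed average P and S velocities; distance from the S-minus-P interval.
    return sp_delay_seconds / (1.0 / vs - 1.0 / vp)

print(distance_km(10.0))         # about 84 km for a 10-second delay
print(distance_km(10.0) / 10)    # about 8.4 km per second of delay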
More active zones are divided into smaller F-E regions whereas less active zones belong to larger F-E regions. Standard reporting of earthquakes includes its magnitude, date and time of occurrence, geographic coordinates of its epicenter, depth of the epicenter, geographical region, distances to population centers, location uncertainty, a number of parameters that are included in USGS earthquake reports (number of stations reporting, number of observations, etc.), and a unique event ID. Effects of earthquakes The effects of earthquakes include, but are not limited to, the following: Shaking and ground rupture Shaking and ground rupture are the main effects created by earthquakes, principally resulting in more or less severe damage to buildings and other rigid structures. The severity of the local effects depends on the complex combination of the earthquake magnitude, the distance from the epicenter, and the local geological and geomorphological conditions, which may amplify or reduce wave propagation. The ground-shaking is measured by ground acceleration. Specific local geological, geomorphological, and geostructural features can induce high levels of shaking on the ground surface even from low-intensity earthquakes. This effect is called site or local amplification. It is principally due to the transfer of the seismic motion from hard deep soils to soft superficial soils and to effects of seismic energy focalization owing to typical geometrical setting of the deposits. Ground rupture is a visible breaking and displacement of the Earth's surface along the trace of the fault, which may be of the order of several metres in the case of major earthquakes. Ground rupture is a major risk for large engineering structures such as dams, bridges and nuclear power stations and requires careful mapping of existing faults to identify any which are likely to break the ground surface within the life of the structure. Landslides and avalanches Earthquakes, along with severe storms, volcanic activity, coastal wave attack, and wildfires, can produce slope instability leading to landslides, a major geological hazard. Landslide danger may persist while emergency personnel are attempting rescue. Earthquakes can cause fires by damaging electrical power or gas lines. In the event of water mains rupturing and a loss of pressure, it may also become difficult to stop the spread of a fire once it has started. For example, more deaths in the 1906 San Francisco earthquake were caused by fire than by the earthquake itself. Soil liquefaction occurs when, because of the shaking, water-saturated granular material (such as sand) temporarily loses its strength and transforms from a solid to a liquid. Soil liquefaction may cause rigid structures, like buildings and bridges, to tilt or sink into the liquefied deposits. For example, in the 1964 Alaska earthquake, soil liquefaction caused many buildings to sink into the ground, eventually collapsing upon themselves. Tsunamis are long-wavelength, long-period sea waves produced by the sudden or abrupt movement of large volumes of water. In the open ocean the distance between wave crests can surpass 100 kilometers (62 mi), and the wave periods can vary from five minutes to one hour. Such tsunamis travel 600-800 kilometers per hour (373–497 miles per hour), depending on water depth. Large waves produced by an earthquake or a submarine landslide can overrun nearby coastal areas in a matter of minutes. 
Tsunamis can also travel thousands of kilometers across open ocean and wreak destruction on far shores hours after the earthquake that generated them. Ordinarily, subduction earthquakes under magnitude 7.5 on the Richter scale do not cause tsunamis, although some instances of this have been recorded. Most destructive tsunamis are caused by earthquakes of magnitude 7.5 or more. A flood is an overflow of any amount of water that reaches land. Floods occur usually when the volume of water within a body of water, such as a river or lake, exceeds the total capacity of the formation, and as a result some of the water flows or sits outside of the normal perimeter of the body. However, floods may be secondary effects of earthquakes, if dams are damaged. Earthquakes may cause landslips to dam rivers, which collapse and cause floods. The terrain below the Sarez Lake in Tajikistan is in danger of catastrophic flood if the landslide dam formed by the earthquake, known as the Usoi Dam, were to fail during a future earthquake. Impact projections suggest the flood could affect roughly 5 million people. An earthquake may cause injury and loss of life, road and bridge damage, general property damage (which may or may not be covered by earthquake insurance), and collapse or destabilization (potentially leading to future collapse) of buildings. The aftermath may bring disease, lack of basic necessities, and higher insurance premiums. One of the most devastating earthquakes in recorded history occurred on 23 January 1556 in the Shaanxi province, China, killing more than 830,000 people (see 1556 Shaanxi earthquake). Most of the population in the area at the time lived in yaodongs, artificial caves in loess cliffs, many of which collapsed during the catastrophe with great loss of life. The 1976 Tangshan earthquake, with a death toll estimated to be between 240,000 to 655,000, is believed to be the largest earthquake of the 20th century by death toll. The 1960 Chilean Earthquake is the largest earthquake that has been measured on a seismograph, reaching 9.5 magnitude on 22 May 1960. Its epicenter was near Cañete, Chile. The energy released was approximately twice that of the next most powerful earthquake, the Good Friday Earthquake, which was centered in Prince William Sound, Alaska. The ten largest recorded earthquakes have all been megathrust earthquakes; however, of these ten, only the 2004 Indian Ocean earthquake is simultaneously one of the deadliest earthquakes in history. Earthquakes that caused the greatest loss of life, while powerful, were deadly because of their proximity to either heavily populated areas or the ocean, where earthquakes often create tsunamis that can devastate communities thousands of kilometers away. Regions most at risk for great loss of life include those where earthquakes are relatively rare but powerful, and poor regions with lax, unenforced, or nonexistent seismic building codes. Many methods have been developed for predicting the time and place in which earthquakes will occur. Despite considerable research efforts by seismologists, scientifically reproducible predictions cannot yet be made to a specific day or month. However, for well-understood faults the probability that a segment may rupture during the next few decades can be estimated. 
Earthquake warning systems have been developed that can provide regional notification of an earthquake in progress, but before the ground surface has begun to move, potentially allowing people within the system's range to seek shelter before the earthquake's impact is felt. The objective of earthquake engineering is to foresee the impact of earthquakes on buildings and other structures and to design such structures to minimize the risk of damage. Existing structures can be modified by seismic retrofitting to improve their resistance to earthquakes. Earthquake insurance can provide building owners with financial protection against losses resulting from earthquakes. Emergency management strategies can be employed by a government or organization to mitigate risks and prepare for consequences. Ways to Survive an Earthquake - Be Prepared: Before, During and After an Earthquake Earthquakes do not last for a long time, generally a few seconds to a minute. The 1989 San Francisco earthquake only lasted 15 seconds. - Securing water heaters, major appliances and tall, heavy furniture to prevent them from toppling are prudent steps. So, too, are storing hazardous or flammable liquids, heavy objects and breakables on low shelves or in secure cabinets. - If you're indoors, stay there. Get under -- and hold onto --a desk or table, or stand against an interior wall. Stay clear of exterior walls, glass, heavy furniture, fireplaces and appliances. The kitchen is a particularly dangerous spot. If you’re in an office building, stay away from windows and outside walls and do not use the elevator. Stay low and cover your head and neck with your hands and arms. Bracing yourself to a wall or heavy furniture when weaker earthquakes strike usually works. - Cover your head and neck. Use your hands and arms. If you have any respiratory disease, make sure that you cover your head with a t-shirt or bandana, until all the debris and dust has settled. Inhaled dirty air is not good for your lungs. - DO NOT stand in a doorway: An enduring earthquake image of California is a collapsed adobe home with the door frame as the only standing part. From this came our belief that a doorway is the safest place to be during an earthquake. True- if you live in an old, unreinforced adobe house or some older woodframe houses. In modern houses, doorways are no stronger than any other part of the house, and the doorway does not protect you from the most likely source of injury- falling or flying objects. You also may not be able to brace yourself in the door during strong shaking. You are safer under a table. Many are certain that standing in a doorway during the shaking is a good idea. That’s false, unless you live in an unreinforced adode structure; otherwise, you're more likely to be hurt by the door swinging wildly in a doorway. - Inspect your house for anything that might be in a dangerous condition. Glass fragments, the smell of gas, or damaged electrical appliances are examples of hazards. - Do not move. If it is safe to do so, stay where you are for a minute or two, until you are sure the shaking has stopped. Slowly get out of the house. Wait until the shaking has stopped to evacuate the building carefully. - PRACTICE THE RIGHT THING TO DO… IT COULD SAVE YOUR LIFE, You will be more likely to react quickly when shaking begins if you have actually practiced how to protect yourself on a regular basis. A great time to practice Drop, Cover, and Hold. - If you're outside, get into the open. 
Stay clear of buildings, power lines or anything else that could fall on you. Glass looks smooth and still, but when broken apart, a small piece can damage your foot. This is why you wear heavy shoes to protect your feet at such times. - Be aware that items may fall out of cupboards or closets when the door is opened, and also that chimneys can be weakened and fall with a touch. Check for cracks and damage to the roof and foundation of your home. - Things You'll Need: Blanket, Sturdy shoes, Dust mask to help filter contaminated air and plastic sheeting and duct tape to shelter-in-place, basic hygiene supplies, e.g. soap, Feminine supplies and personal hygiene items. From the lifetime of the Greek philosopher Anaxagoras in the 5th century BCE to the 14th century CE, earthquakes were usually attributed to "air (vapors) in the cavities of the Earth." Thales of Miletus, who lived from 625–547 (BCE) was the only documented person who believed that earthquakes were caused by tension between the earth and water. Other theories existed, including the Greek philosopher Anaxamines' (585–526 BCE) beliefs that short incline episodes of dryness and wetness caused seismic activity. The Greek philosopher Democritus (460–371 BCE) blamed water in general for earthquakes. Pliny the Elder called earthquakes "underground thunderstorms." Earthquakes in culture Mythology and religion In Norse mythology, earthquakes were explained as the violent struggling of the god Loki. When Loki, god of mischief and strife, murdered Baldr, god of beauty and light, he was punished by being bound in a cave with a poisonous serpent placed above his head dripping venom. Loki's wife Sigyn stood by him with a bowl to catch the poison, but whenever she had to empty the bowl the poison dripped on Loki's face, forcing him to jerk his head away and thrash against his bonds, which caused the earth to tremble. In Greek mythology, Poseidon was the cause and god of earthquakes. When he was in a bad mood, he struck the ground with a trident, causing earthquakes and other calamities. He also used earthquakes to punish and inflict fear upon people as revenge. In Japanese mythology, Namazu (鯰) is a giant catfish who causes earthquakes. Namazu lives in the mud beneath the earth, and is guarded by the god Kashima who restrains the fish with a stone. When Kashima lets his guard fall, Namazu thrashes about, causing violent earthquakes. In modern popular culture, the portrayal of earthquakes is shaped by the memory of great cities laid waste, such as Kobe in 1995 or San Francisco in 1906. Fictional earthquakes tend to strike suddenly and without warning. For this reason, stories about earthquakes generally begin with the disaster and focus on its immediate aftermath, as in Short Walk to Daylight (1972), The Ragged Edge (1968) or Aftershock: Earthquake in New York (1998). A notable example is Heinrich von Kleist's classic novella, The Earthquake in Chile, which describes the destruction of Santiago in 1647. Haruki Murakami's short fiction collection after the quake depicts the consequences of the Kobe earthquake of 1995. The most popular single earthquake in fiction is the hypothetical "Big One" expected of California's San Andreas Fault someday, as depicted in the novels Richter 10 (1996) and Goodbye California (1977) among other works. Jacob M. Appel's widely anthologized short story, A Comparative Seismology, features a con artist who convinces an elderly woman that an apocalyptic earthquake is imminent. 
Contemporary depictions of earthquakes in film are variable in the manner in which they reflect human psychological reactions to the actual trauma that can be caused to directly afflicted families and their loved ones. Disaster mental health response research emphasizes the need to be aware of the different roles of loss of family and key community members, loss of home and familiar surroundings, loss of essential supplies and services to maintain survival. Particularly for children, the clear availability of caregiving adults who are able to protect, nourish, and clothe them in the aftermath of the earthquake, and to help them make sense of what has befallen them has been shown even more important to their emotional and physical health than the simple giving of provisions. As was observed after other disasters involving destruction and loss of life and their media depictions, such as those of the 2001 World Trade Center Attacks or Hurricane Katrina—and has been recently observed in the 2010 Haiti earthquake, it is also important not to pathologize the reactions to loss and displacement or disruption of governmental administration and services, but rather to validate these reactions, to support constructive problem-solving and reflection as to how one might improve the conditions of those affected. - "Earthquake FAQ". Crustal.ucsb.edu. Retrieved 2011-07-24. - Spence, William; S. A. Sipkin, G. L. Choy (1989). "Measuring the Size of an Earthquake". United States Geological Survey. Retrieved 2006-11-03. - Wyss, M. (1979). "Estimating expectable maximum magnitude of earthquakes from fault dimensions". Geology 7 (7): 336–340. Bibcode:1979Geo.....7..336W. doi:10.1130/0091-7613(1979)7<336:EMEMOE>2.0.CO;2. - Sibson R. H. (1982) "Fault Zone Models, Heat Flow, and the Depth Distribution of Earthquakes in the Continental Crust of the United States", Bulletin of the Seismological Society of America, Vol 72, No. 1, pp. 151–163 - Sibson, R. H. (2002) "Geology of the crustal earthquake source" International handbook of earthquake and engineering seismology, Volume 1, Part 1, page 455, eds. W H K Lee, H Kanamori, P C Jennings, and C. Kisslinger, Academic Press, ISBN / ASIN: 0124406521 - "Global Centroid Moment Tensor Catalog". Globalcmt.org. Retrieved 2011-07-24. - "Instrumental California Earthquake Catalog". WGCEP. Retrieved 2011-07-24. - Hjaltadóttir S., 2010, "Use of relatively located microearthquakes to map fault patterns and estimate the thickness of the brittle crust in Southwest Iceland" - "Reports and publications | Seismicity | Icelandic Meteorological office". En.vedur.is. Retrieved 2011-07-24. - Schorlemmer, D.; Wiemer, S.; Wyss, M. (2005). "Variations in earthquake-size distribution across different stress regimes". Nature 437 (7058): 539–542. Bibcode:2005Natur.437..539S. doi:10.1038/nature04094. PMID 16177788. - Talebian, M; Jackson, J (2004). "A reappraisal of earthquake focal mechanisms and active shortening in the Zagros mountains of Iran". Geophysical Journal International 156 (3): 506–526. Bibcode:2004GeoJI.156..506T. doi:10.1111/j.1365-246X.2004.02092.x. - Nettles, M.; Ekström, G. (May 2010). "Glacial Earthquakes in Greenland and Antarctica". Annual Review of Earth and Planetary Sciences 38 (1): 467–491. Bibcode:2010AREPS..38..467N. doi:10.1146/annurev-earth-040809-152414. Avinash Kumar - Noson, Qamar, and Thorsen (1988). Washington State Earthquake Hazards: Washington State Department of Natural Resources. Washington Division of Geology and Earth Resources Information Circular 85. 
- "M7.5 Northern Peru Earthquake of 26 September 2005" (PDF). National Earthquake Information Center. 17 October 2005. Retrieved 2008-08-01. - Greene II, H. W.; Burnley, P. C. (October 26, 1989). "A new self-organizing mechanism for deep-focus earthquakes". Nature 341 (6244): 733–737. Bibcode:1989Natur.341..733G. doi:10.1038/341733a0. - Foxworthy and Hill (1982). Volcanic Eruptions of 1980 at Mount St. Helens, The First 100 Days: USGS Professional Paper 1249. - Watson, John; Watson, Kathie (January 7, 1998). "Volcanoes and Earthquakes". United States Geological Survey. Retrieved May 9, 2009. - National Research Council (U.S.). Committee on the Science of Earthquakes (2003). "5. Earthquake Physics and Fault-System Science". Living on an Active Earth: Perspectives on Earthquake Science. Washington D.C.: National Academies Press. p. 418. ISBN 978-0-309-06562-7. Retrieved 8 July 2010. - Thomas, Amanda M.; Nadeau, Robert M.; Bürgmann, Roland (December 24, 2009). "Tremor-tide correlations and near-lithostatic pore pressure on the deep San Andreas fault". Nature 462 (7276): 1048–51. Bibcode:2009Natur.462.1048T. doi:10.1038/nature08654. PMID 20033046. - "Gezeitenkräfte: Sonne und Mond lassen Kalifornien erzittern" SPIEGEL online, 29.12.2009 - Tamrazyan, Gurgen P. (1967). "Tide-forming forces and earthquakes". Icarus 7 (1–3): 59–65. Bibcode:1967Icar....7...59T. doi:10.1016/0019-1035(67)90047-4. - Tamrazyan, Gurgen P. (1968). "Principal regularities in the distribution of major earthquakes relative to solar and lunar tides and other cosmic forces". Icarus 9 (1–3): 574–92. Bibcode:1968Icar....9..574T. doi:10.1016/0019-1035(68)90050-X. - "What are Aftershocks, Foreshocks, and Earthquake Clusters?". - "Repeating Earthquakes". United States Geological Survey. January 29, 2009. Retrieved May 11, 2009. - "Earthquake Swarms at Yellowstone". United States Geological Survey. Retrieved 2008-09-15. - Duke, Alan. "Quake 'swarm' shakes Southern California". CNN. Retrieved 27 August 2012. - Amos Nur; Cline, Eric H. (2000). "Poseidon's Horses: Plate Tectonics and Earthquake Storms in the Late Bronze Age Aegean and Eastern Mediterranean". Journal of Archaeological Science 27 (1): 43–63. doi:10.1006/jasc.1999.0431. ISSN 0305-4403. - "Earthquake Storms". Horizon. 1 April 2003. Retrieved 2007-05-02. - "Earthquake Facts". United States Geological Survey. Retrieved 2010-04-25. - Pressler, Margaret Webb (14 April 2010). "More earthquakes than usual? Not really.". KidsPost (Washington Post: Washington Post). pp. C10. - "Earthquake Hazards Program". United States Geological Survey. Retrieved 2006-08-14. - "Seismicity and earthquake hazard in the UK". Quakes.bgs.ac.uk. Retrieved 2010-08-23. - "Italy's earthquake history." BBC News. October 31, 2002. - "Common Myths about Earthquakes". United States Geological Survey. Retrieved 2006-08-14. - "Earthquake Facts and Statistics: Are earthquakes increasing?". United States Geological Survey. Retrieved 2006-08-14. - The 10 biggest earthquakes in history, Australian Geographic, March 14, 2011. - "Historic Earthquakes and Earthquake Statistics: Where do earthquakes occur?". United States Geological Survey. Retrieved 2006-08-14. - "Visual Glossary — Ring of Fire". United States Geological Survey. Retrieved 2006-08-14. - Jackson, James, "Fatal attraction: living with earthquakes, the growth of villages into megacities, and earthquake vulnerability in the modern world," Philosophical Transactions of the Royal Society, doi:10.1098/rsta.2006.1805 Phil. Trans. R. Soc. 
A 15 August 2006 vol. 364 no. 1845 1911–1925. - "Global urban seismic risk." Cooperative Institute for Research in Environmental Science. - Madrigal, Alexis (4 June 2008). "Top 5 Ways to Cause a Man-Made Earthquake". Wired News (CondéNet). Retrieved 2008-06-05. - "How Humans Can Trigger Earthquakes". National Geographic. February 10, 2009. Retrieved April 24, 2009. - Brendan Trembath (January 9, 2007). "Researcher claims mining triggered 1989 Newcastle earthquake". Australian Broadcasting Corporation. Retrieved April 24, 2009. - "Speed of Sound through the Earth". Hypertextbook.com. Retrieved 2010-08-23. - Geographic.org. "Magnitude 8.0 - SANTA CRUZ ISLANDS Earthquake Details". Gobal Earthquake Epicenters with Maps. Retrieved 2013-03-13. - "On Shaky Ground, Association of Bay Area Governments, San Francisco, reports 1995,1998 (updated 2003)". Abag.ca.gov. Retrieved 2010-08-23. - "Guidelines for evaluating the hazard of surface fault rupture, California Geological Survey". California Department of Conservation. 2002. - "Natural Hazards — Landslides". United States Geological Survey. Retrieved 2008-09-15. - "The Great 1906 San Francisco earthquake of 1906". United States Geological Survey. Retrieved 2008-09-15. - "Historic Earthquakes — 1946 Anchorage Earthquake". United States Geological Survey. Retrieved 2008-09-15. - Noson, Qamar, and Thorsen (1988). Washington Division of Geology and Earth Resources Information Circular 85. Washington State Earthquake Hazards. - MSN Encarta Dictionary. Flood. Retrieved on 2006-12-28. Archived 2009-10-31. - "Notes on Historical Earthquakes". British Geological Survey. Retrieved 2008-09-15. - "Fresh alert over Tajik flood threat". BBC News. 2003-08-03. Retrieved 2008-09-15. - USGS: Magnitude 8 and Greater Earthquakes Since 1900 - "Earthquakes with 50,000 or More Deaths". U.S. Geological Survey - Spignesi, Stephen J. (2005). Catastrophe!: The 100 Greatest Disasters of All Time. ISBN 0-8065-2558-4 - Kanamori Hiroo. "The Energy Release in Great Earthquakes". Journal of Geophysical Research. Retrieved 2010-10-10. - USGS. "How Much Bigger?". United States Geological Survey. Retrieved 2010-10-10. - Earthquake Prediction. Ruth Ludwin, U.S. Geological Survey. - Working Group on California Earthquake Probabilities in the San Francisco Bay Region, 2003 to 2032, 2003, http://earthquake.usgs.gov/regional/nca/wg02/index.php. - "Earthquakes". Encyclopedia of World Environmental History 1. Encyclopedia of World Environmental History. 2003. pp. 358–364. - Sturluson, Snorri (1220). Prose Edda. ISBN 1-156-78621-5. - Sellers, Paige (1997-03-03). "Poseidon". Encyclopedia Mythica. Retrieved 2008-09-02. - Van Riper, A. Bowdoin (2002). Science in popular culture: a reference guide. Westport: Greenwood Press. p. 60. ISBN 0-313-31822-0. - JM Appel. A Comparative Seismology. Weber Studies (first publication), Volume 18, Number 2. - Goenjian, Najarian; Pynoos, Steinberg; Manoukian, Tavosian; Fairbanks, AM; Manoukian, G; Tavosian, A; Fairbanks, LA (1994). "Posttraumatic stress disorder in elderly and younger adults after the 1988 earthquake in Armenia". Am J Psychiatry 151 (6): 895–901. PMID 8185000. - Wang, Gao; Shinfuku, Zhang; Zhao, Shen; Zhang, H; Zhao, C; Shen, Y (2000). "Longitudinal Study of Earthquake-Related PTSD in a Randomly Selected Community Sample in North China". Am J Psychiatry 157 (8): 1260–1266. doi:10.1176/appi.ajp.157.8.1260. PMID 10910788. - Goenjian, Steinberg; Najarian, Fairbanks; Tashjian, Pynoos (2000). 
"Prospective Study of Posttraumatic Stress, Anxiety, and Depressive Reactions After Earthquake and Political Violence". Am J Psychiatry 157 (6): 911–895. doi:10.1176/appi.ajp.157.6.911. - Coates SW, Schechter D (2004). Preschoolers' traumatic stress post-9/11: relational and developmental perspectives. Disaster Psychiatry Issue. Psychiatric Clinics of North America, 27(3), 473–489. - Schechter, DS; Coates, SW; First, E (2002). "Observations of acute reactions of young children and their families to the World Trade Center attacks". Journal of ZERO-TO-THREE: National Center for Infants, Toddlers, and Families 22 (3): 9–13. - Deborah R. Coen. The Earthquake Observers: Disaster Science From Lisbon to Richter (University of Chicago Press; 2012) 348 pages; explores both scientific and popular coverage - Donald Hyndman, David Hyndman (2009). "Chapter 3: Earthquakes and their causes". Natural Hazards and Disasters (2nd ed.). Brooks/Cole: Cengage Learning. ISBN 0-495-31667-9. |Wikimedia Commons has media related to: Earthquake| - Earthquake Hazards Program of the U.S. Geological Survey - European-Mediterranean Seismological Centre a real-time earthquake information website - Seismological Society of America - Incorporated Research Institutions for Seismology - Open Directory - Earthquakes - World earthquake map captures every rumble since 1898 —Mother Nature Network (MNN) (29 June 2012)
http://en.wikipedia.org/wiki/Earthquakes
13
91
A value is one of the fundamental things — like a letter or a number — that a program manipulates. The values we have seen so far are 4 (the result when we added 2 + 2), and "Hello, World!". These values are classified into different classes, or data types: 4 is an integer, and "Hello, World!" is a string, so-called because it contains a string of letters. You (and the interpreter) can identify strings because they are enclosed in quotation marks. If you are not sure what class a value falls into, Python has a function called type which can tell you. >>> type("Hello, World!") <class 'str'> >>> type(17) <class 'int'> Not surprisingly, strings belong to the class str and integers belong to the class int. Less obviously, numbers with a decimal point belong to a class called float, because these numbers are represented in a format called floating-point. At this stage, you can treat the words class and type interchangeably. We’ll come back to a deeper understanding of what a class is in later chapters. >>> type(3.2) <class 'float'> What about values like "17" and "3.2"? They look like numbers, but they are in quotation marks like strings. >>> type("17") <class 'str'> >>> type("3.2") <class 'str'> Strings in Python can be enclosed in either single quotes (') or double quotes ("), or three of each (''' or """) >>> type('This is a string.') <class 'str'> >>> type("And so is this.") <class 'str'> >>> type("""and this.""") <class 'str'> >>> type('''and even this...''') <class 'str'> Double quoted strings can contain single quotes inside them, as in "Bruce's beard", and single quoted strings can have double quotes inside them, as in 'The knights who say "Ni!"'. Strings enclosed with three occurrences of either quote symbol are called triple quoted strings. They can contain either single or double quotes: >>> print('''"Oh no", she exclaimed, "Ben's bike is broken!"''') "Oh no", she exclaimed, "Ben's bike is broken!" >>> Triple quoted strings can even span multiple lines: >>> message = """This message will ... span several ... lines.""" >>> print(message) This message will span several lines. >>> Python doesn’t care whether you use single or double quotes or the three-of-a-kind quotes to surround your strings: once it has parsed the text of your program or command, the way it stores the value is identical in all cases, and the surrounding quotes are not part of the value. But when the interpreter wants to display a string, it has to decide which quotes to use to make it look like a string. >>> 'This is a string.' 'This is a string.' >>> """And so is this.""" 'And so is this.' So the Python language designers usually chose to surround their strings by single quotes. What do think would happen if the string already contained single quotes? When you type a large integer, you might be tempted to use commas between groups of three digits, as in 42,000. This is not a legal integer in Python, but it does mean something else, which is legal: >>> 42000 42000 >>> 42,000 (42, 0) Well, that’s not what we expected at all! Because of the comma, Python chose to treat this as a pair of values. We’ll come back to learn about pairs later. But, for the moment, remember not to put commas or spaces in your integers, no matter how big they are. Also revisit what we said in the previous chapter: formal languages are strict, the notation is concise, and even the smallest change might mean something quite different from what you intended. 
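The type checks above can also be collected into a small script rather than typed one line at a time at the prompt. The following sketch only consolidates the examples already shown; the variable names are chosen purely for illustration.

# Consolidation of the type examples above: print the class of each value.
for value in [17, 3.2, "17", "3.2", "Hello, World!"]:
    print(repr(value), "is of", type(value))

# The comma pitfall: this is a pair of values, not the integer 42000.
x = 42,000
print(x, type(x))    # (42, 0) <class 'tuple'>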
One of the most powerful features of a programming language is the ability to manipulate variables. A variable is a name that refers to a value. The assignment statement gives a value to a variable: >>> message = "What's up, Doc?" >>> n = 17 >>> pi = 3.14159 This example makes three assignments. The first assigns the string value "What's up, Doc?" to a variable named message. The second gives the integer 17 to n, and the third assigns the floating-point number 3.14159 to a variable called pi. The assignment token, =, should not be confused with equals, which uses the token ==. The assignment statement binds a name, on the left-hand side of the operator, to a value, on the right-hand side. This is why you will get an error if you enter: >>> 17 = n File "<interactive input>", line 1 SyntaxError: can't assign to literal When reading or writing code, say to yourself “n is assigned 17” or “n gets the value 17”. Don’t say “n equals 17”. A common way to represent variables on paper is to write the name with an arrow pointing to the variable’s value. This kind of figure is called a state snapshot because it shows what state each of the variables is in at a particular instant in time. (Think of it as the variable’s state of mind). This diagram shows the result of executing the assignment statements: If you ask the interpreter to evaluate a variable, it will produce the value that is currently linked to the variable: >>> message 'What's up, Doc?' >>> n 17 >>> pi 3.14159 We use variables in a program to “remember” things, perhaps the current score at the football game. But variables are variable. This means they can change over time, just like the scoreboard at a football game. You can assign a value to a variable, and later assign a different value to the same variable. (This is different from maths. In maths, if you give `x` the value 3, it cannot change to link to a different value half-way through your calculations!) >>> day = "Thursday" >>> day 'Thursday' >>> day = "Friday" >>> day 'Friday' >>> day = 21 >>> day 21 You’ll notice we changed the value of day three times, and on the third assignment we even made it refer to a value that was of a different type. A great deal of programming is about having the computer remember things, e.g. The number of missed calls on your phone, and then arranging to update or change the variable when you miss another call. Variable names can be arbitrarily long. They can contain both letters and digits, but they have to begin with a letter or an underscore. Although it is legal to use uppercase letters, by convention we don’t. If you do, remember that case matters. Bruce and bruce are different variables. The underscore character ( _) can appear in a name. It is often used in names with multiple words, such as my_name or price_of_tea_in_china. There are some situations in which names beginning with an underscore have special meaning, so a safe rule for beginners is to start all names with a letter. If you give a variable an illegal name, you get a syntax error: >>> 76trombones = "big parade" SyntaxError: invalid syntax >>> more$ = 1000000 SyntaxError: invalid syntax >>> class = "Computer Science 101" SyntaxError: invalid syntax 76trombones is illegal because it does not begin with a letter. more$ is illegal because it contains an illegal character, the dollar sign. But what’s wrong with class? It turns out that class is one of the Python keywords. Keywords define the language’s syntax rules and structure, and they cannot be used as variable names. 
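As a quick recap of assignment and re-assignment, here is a short sketch that can be run as a script. The illegal names are left as comments because they would stop the program with a SyntaxError; the particular names are only examples.

# Assignment binds a name to a value; a later assignment can rebind the
# same name, even to a value of a different type.
day = "Thursday"
print(day)              # Thursday
day = "Friday"
print(day)              # Friday
day = 21                # rebinding to an int is allowed
print(day, type(day))   # 21 <class 'int'>

# These would be syntax errors if uncommented:
# 76trombones = "big parade"       # names cannot start with a digit
# class = "Computer Science 101"   # 'class' is a keyword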
Python has thirty-something keywords (and every now and again improvements to Python introduce or eliminate one or two): You might want to keep this list handy. If the interpreter complains about one of your variable names and you don’t know why, see if it is on this list. Programmers generally choose names for their variables that are meaningful to the human readers of the program — they help the programmer document, or remember, what the variable is used for. Beginners sometimes confuse “meaningful to the human readers” with “meaningful to the computer”. So they’ll wrongly think that because they’ve called some variable average or pi, it will somehow magically calculate an average, or magically know that the variable pi should have a value like 3.14159. No! The computer doesn’t understand what you intend the variable to mean. So you’ll find some instructors who deliberately don’t choose meaningful names when they teach beginners — not because we don’t think it is a good habit, but because we’re trying to reinforce the message that you — the programmer — must write the program code to calculate the average, and you must write an assignment statement to give the variable pi the value you want it to have. A statement is an instruction that the Python interpreter can execute. We have only seen the assignment statement so far. Some other kinds of statements that we’ll see shortly are while statements, for statements, if statements, and import statements. (There are other kinds too!) When you type a statement on the command line, Python executes it. Statements don’t produce any result. An expression is a combination of values, variables, operators, and calls to functions. If you type an expression at the Python prompt, the interpreter evaluates it and displays the result: >>> 1 + 1 2 >>> len("hello") 5 In this example len is a built-in Python function that returns the number of characters in a string. We’ve previously seen the print and the type functions, so this is our third example of a function! The evaluation of an expression produces a value, which is why expressions can appear on the right hand side of assignment statements. A value all by itself is a simple expression, and so is a variable. >>> 17 17 >>> y = 3.14 >>> x = len("hello") >>> x 5 >>> y 3.14 Operators are special tokens that represent computations like addition, multiplication and division. The values the operator uses are called operands. The following are all legal Python expressions whose meaning is more or less clear: 20+32 hour-1 hour*60+minute minute/60 5**2 (5+9)*(15-7) The tokens +, -, and *, and the use of parenthesis for grouping, mean in Python what they mean in mathematics. The asterisk (*) is the token for multiplication, and ** is the token for exponentiation. >>> 2 ** 3 8 >>> 3 ** 2 9 When a variable name appears in the place of an operand, it is replaced with its value before the operation is performed. Addition, subtraction, multiplication, and exponentiation all do what you expect. Example: so let us convert 645 minutes into hours: >>> minutes = 645 >>> hours = minutes / 60 >>> hours 10.75 Oops! In Python 3, the division operator / always yields a floating point result. What we might have wanted to know was how many whole hours there are, and how many minutes remain. Python gives us two different flavors of the division operator. The second, called floor division uses the token //. Its result is always a whole number — and if it has to adjust the number it always moves it to the left on the number line. 
So 6 // 4 yields 1, but -6 // 4 might surprise you! >>> 7 / 4 1.75 >>> 7 // 4 1 >>> minutes = 645 >>> hours = minutes // 60 >>> hours 10 Take care that you choose the correct flavor of the division operator. If you’re working with expressions where you need floating point values, use the division operator that does the division accurately. Here we’ll look at three more Python functions, int, float and str, which will (attempt to) convert their arguments into types int, float and str respectively. We call these type converter functions. The int function can take a floating point number or a string, and turn it into an int. For floating point numbers, it discards the decimal portion of the number — a process we call truncation towards zero on the number line. Let us see this in action: >>> int(3.14) 3 >>> int(3.9999) # This doesn't round to the closest int! 3 >>> int(3.0) 3 >>> int(-3.999) # Note that the result is closer to zero -3 >>> int(minutes / 60) 10 >>> int("2345") # Parse a string to produce an int 2345 >>> int(17) # It even works if arg is already an int 17 >>> int("23 bottles") This last case doesn’t look like a number — what do we expect? Traceback (most recent call last): File "<interactive input>", line 1, in <module> ValueError: invalid literal for int() with base 10: '23 bottles' The type converter float can turn an integer, a float, or a syntactically legal string into a float: >>> float(17) 17.0 >>> float("123.45") 123.45 The type converter str turns its argument into a string: >>> str(17) '17' >>> str(123.45) '123.45' When more than one operator appears in an expression, the order of evaluation depends on the rules of precedence. Python follows the same precedence rules for its mathematical operators that mathematics does. The acronym PEMDAS is a useful way to remember the order of operations: Parentheses have the highest precedence and can be used to force an expression to evaluate in the order you want. Since expressions in parentheses are evaluated first, 2 * (3-1) is 4, and (1+1)**(5-2) is 8. You can also use parentheses to make an expression easier to read, as in (minute * 100) / 60, even though it doesn’t change the result. Exponentiation has the next highest precedence, so 2**1+1 is 3 and not 4, and 3*1**3 is 3 and not 27. Multiplication and both Division operators have the same precedence, which is higher than Addition and Subtraction, which also have the same precedence. So 2*3-1 yields 5 rather than 4, and 5-2*2 is 1, not 6. Operators with the same precedence are evaluated from left-to-right. In algebra we say they are left-associative. So in the expression 6-3+2, the subtraction happens first, yielding 3. We then add 2 to get the result 5. If the operations had been evaluated from right to left, the result would have been 6-(3+2), which is 1. (The acronym PEDMAS could mislead you to thinking that division has higher precedence than multiplication, and addition is done ahead of subtraction - don’t be misled. Subtraction and addition are at the same precedence, and the left-to-right rule applies.) Due to some historical quirk, an exception to the left-to-right left-associative rule is the exponentiation operator **, so a useful hint is to always use parentheses to force exactly the order you want when exponentiation is involved: >>> 2 ** 3 ** 2 # The right-most ** operator gets done first! 512 >>> (2 ** 3) ** 2 # Use parentheses to force the order you want! 
64 The immediate mode command prompt of Python is great for exploring and experimenting with expressions like this. In general, you cannot perform mathematical operations on strings, even if the strings look like numbers. The following are illegal (assuming that message has type string): >>> message - 1 # Error >>> "Hello" / 123 # Error >>> message * "Hello" # Error >>> "15" + 2 # Error Interestingly, the + operator does work with strings, but for strings, the + operator represents concatenation, not addition. Concatenation means joining the two operands by linking them end-to-end. For example: 1 2 3 fruit = "banana" baked_good = " nut bread" print(fruit + baked_good) The output of this program is banana nut bread. The space before the word nut is part of the string, and is necessary to produce the space between the concatenated strings. The * operator also works on strings; it performs repetition. For example, 'Fun'*3 is 'FunFunFun'. One of the operands has to be a string; the other has to be an integer. On one hand, this interpretation of + and * makes sense by analogy with addition and multiplication. Just as 4*3 is equivalent to 4+4+4, we expect "Fun"*3 to be the same as "Fun"+"Fun"+"Fun", and it is. On the other hand, there is a significant way in which string concatenation and repetition are different from integer addition and multiplication. Can you think of a property that addition and multiplication have that string concatenation and repetition do not? There is a built-in function in Python for getting input from the user: 1 n = input("Please enter your name: ") A sample run of this script in PyScripter would pop up a dialog window like this: The user of the program can enter the name and click OK, and when this happens the text that has been entered is returned from the input function, and in this case assigned to the variable n. Even if you asked the user to enter their age, you would get back a string like "17". It would be your job, as the programmer, to convert that string into a int or a float, using the int or float converter functions we saw earlier. So far, we have looked at the elements of a program — variables, expressions, statements, and function calls — in isolation, without talking about how to combine them. One of the most useful features of programming languages is their ability to take small building blocks and compose them into larger chunks. For example, we know how to get the user to enter some input, we know how to convert the string we get into a float, we know how to write a complex expression, and we know how to print values. Let’s put these together in a small four-step program that asks the user to input a value for the radius of a circle, and then computes the area of the circle from the formula Firstly, we’ll do the four steps one at a time: 1 2 3 4 response = input("What is your radius? ") r = float(response) area = 3.14159 * r**2 print("The area is ", area) Now let’s compose the first two lines into a single line of code, and compose the second two lines into another line of code. 1 2 r = float( input("What is your radius? ") ) print("The area is ", 3.14159 * r**2) If we really wanted to be tricky, we could write it all in one statement: 1 print("The area is ", 3.14159*float(input("What is your radius?"))**2) Such compact code may not be most understandable for humans, but it does illustrate how we can compose bigger chunks from our building blocks. 
If you’re ever in doubt about whether to compose code or fragment it into smaller steps, try to make it as simple as you can for the human to follow. My choice would be the first case above, with four separate steps. The modulus operator works on integers (and integer expressions) and gives the remainder when the first number is divided by the second. In Python, the modulus operator is a percent sign (%). The syntax is the same as for other operators. It has the same precedence as the multiplication operator. >>> q = 7 // 3 # This is integer division operator >>> print(q) 2 >>> r = 7 % 3 >>> print(r) 1 So 7 divided by 3 is 2 with a remainder of 1. The modulus operator turns out to be surprisingly useful. For example, you can check whether one number is divisible by another—if x % y is zero, then x is divisible by y. Also, you can extract the right-most digit or digits from a number. For example, x % 10 yields the right-most digit of x (in base 10). Similarly x % 100 yields the last two digits. It is also extremely useful for doing conversions, say from seconds, to hours, minutes and seconds. So let’s write a program to ask the user to enter some seconds, and we’ll convert them into hours, minutes, and remaining seconds. 1 2 3 4 5 6 7 8 total_secs = int(input("How many seconds, in total?")) hours = total_secs // 3600 secs_still_remaining = total_secs % 3600 minutes = secs_still_remaining // 60 secs_finally_remaining = secs_still_remaining % 60 print("Hrs=", hours, " mins=", minutes, "secs=", secs_finally_remaining) A statement that assigns a value to a name (variable). To the left of the assignment operator, =, is a name. To the right of the assignment token is an expression which is evaluated by the Python interpreter and then assigned to the name. The difference between the left and right hand sides of the assignment statement is often confusing to new programmers. In the following assignment: n = n + 1 n plays a very different role on each side of the =. On the right it is a value and makes up part of the expression which will be evaluated by the Python interpreter before assigning it to the name on the left. Take the sentence: All work and no play makes Jack a dull boy. Store each word in a separate variable, then print out the sentence on one line using print. Add parenthesis to the expression 6 * 1 - 2 to change its value from 4 to -6. Place a comment before a line of code that previously worked, and record what happens when you rerun the program. Start the Python interpreter and enter bruce + 4 at the prompt. This will give you an error: NameError: name 'bruce' is not defined Assign a value to bruce so that bruce + 4 evaluates to 10. The formula for computing the final amount if one is earning compound interest is given on Wikipedia as Write a Python program that assigns the principal amount of $10000 to variable P, assign to n the value 12, and assign to r the interest rate of 8%. Then have the program prompt the user for the number of years t that the money will be compounded for. Calculate and print the final amount after t years. Evaluate the following numerical expressions in your head, then use the Python interpreter to check your results: - >>> 5 % 2 - >>> 9 % 5 - >>> 15 % 12 - >>> 12 % 15 - >>> 6 % 6 - >>> 0 % 7 - >>> 7 % 0 What happened with the last example? Why? If you were able to correctly anticipate the computer’s response in all but the last one, it is time to move on. If not, take time now to make up examples of your own. 
Explore the modulus operator until you are confident you understand how it works. You look at the clock and it is exactly 2pm. You set an alarm to go off in 51 hours. At what time does the alarm go off? (Hint: you could count on your fingers, but this is not what we’re after. If you are tempted to count on your fingers, change the 51 to 5100.) Write a Python program to solve the general version of the above problem. Ask the user for the time now (in hours), and ask for the number of hours to wait. Your program should output what the time will be on the clock when the alarm goes off.
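For the alarm-clock exercise above, the modulus operator from the previous section does the wrapping around midnight for us. The sketch below is only one possible solution and assumes a 24-hour clock; the exercise itself leaves the convention open.

# Clock arithmetic with the modulus operator (24-hour clock assumed).
now = int(input("What hour is it now (0-23)? "))        # e.g. 14 for 2pm
wait = int(input("How many hours until the alarm? "))   # e.g. 51

alarm = (now + wait) % 24    # hours wrap around every 24
print("The alarm goes off at hour", alarm)   # (14 + 51) % 24 is 17, i.e. 5pm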
http://www.openbookproject.net/thinkcs/python/english3e/variables_expressions_statements.html
13
139
||This article needs additional citations for verification. (April 2012)| Multiplication (often denoted by the cross symbol "×") is the mathematical operation of scaling one number by another. It is one of the four basic operations in elementary arithmetic (the others being addition, subtraction and division). Because the result of scaling by whole numbers can be thought of as consisting of some number of copies of the original, whole-number products greater than 1 can be computed by repeated addition; for example, 3 multiplied by 4 (often said as "3 times 4") can be calculated by adding 4 copies of 3 together: Here 3 and 4 are the "factors" and 12 is the "product". Educators differ as to which number should normally be considered as the number of copies, and whether multiplication should even be introduced as repeated addition. For example 3 multiplied by 4 can also be calculated by adding 3 copies of 4 together: Multiplication can also be visualized as counting objects arranged in a rectangle (for whole numbers) or as finding the area of a rectangle whose sides have given lengths (for numbers generally). The area of a rectangle does not depend on which side you measure first, which illustrates that the order numbers are multiplied together in doesn't matter. In general the result of multiplying two measurements gives a result of a new type depending on the measurements. For instance: The inverse operation of multiplication is division. For example, 4 multiplied by 3 equals 12. Then 12 divided by 3 equals 4. Multiplication by 3, followed by division by 3, yields the original number. Multiplication is also defined for other types of numbers (such as complex numbers), and for more abstract constructs such as matrices. For these more abstract constructs, the order that the operands are multiplied in sometimes does matter. Notation and terminology |This section does not cite any references or sources. (August 2011)| |addend + addend =||sum| |minuend − subtrahend =||difference| |multiplicand × multiplier =||product| |dividend ÷ divisor =||quotient| |nth root (√)| |degree √ =||root| - (verbally, "two times three equals six") There are several other common notations for multiplication. Many of these are intended to reduce confusion between the multiplication sign × and the commonly used variable x: - The middle dot is standard in the United States, the United Kingdom, and other countries where the period is used as a decimal point. In other countries that use a comma as a decimal point, either the period or a middle dot is used for multiplication. Internationally, the middle dot is commonly connotated with a more advanced or scientific use. - The asterisk (as in 5*2) is often used in programming languages because it appears on every keyboard. This usage originated in the FORTRAN programming language. - In algebra, multiplication involving variables is often written as a juxtaposition (e.g., xy for x times y or 5x for five times x). This notation can also be used for quantities that are surrounded by parentheses (e.g., 5(2) or (5)(2) for five times two). - In matrix multiplication, there is actually a distinction between the cross and the dot symbols. The cross symbol generally denotes a vector multiplication, while the dot denotes a scalar multiplication. A similar convention distinguishes between the cross product and the dot product of two vectors. The numbers to be multiplied are generally called the "factors" or "multiplicands". 
When thinking of multiplication as repeated addition, the number to be multiplied is called the "multiplicand", while the number of multiples is called the "multiplier". In algebra, a number that is the multiplier of a variable or expression (e.g., the 3 in 3xy2) is called a coefficient. The result of a multiplication is called a product, and is a multiple of each factor if the other factor is an integer. For example, 15 is the product of 3 and 5, and is both a multiple of 3 and a multiple of 5. The common methods for multiplying numbers using pencil and paper require a multiplication table of memorized or consulted products of small numbers (typically any two numbers from 0 to 9), however one method, the peasant multiplication algorithm, does not. Multiplying numbers to more than a couple of decimal places by hand is tedious and error prone. Common logarithms were invented to simplify such calculations. The slide rule allowed numbers to be quickly multiplied to about three places of accuracy. Beginning in the early twentieth century, mechanical calculators, such as the Marchant, automated multiplication of up to 10 digit numbers. Modern electronic computers and calculators have greatly reduced the need for multiplication by hand. Historical algorithms The Egyptian method of multiplication of integers and fractions, documented in the Ahmes Papyrus, was by successive additions and doubling. For instance, to find the product of 13 and 21 one had to double 21 three times, obtaining 1 × 21 = 21, 2 × 21 = 42, 4 × 21 = 84, 8 × 21 = 168. The full product could then be found by adding the appropriate terms found in the doubling sequence: - 13 × 21 = (1 + 4 + 8) × 21 = (1 × 21) + (4 × 21) + (8 × 21) = 21 + 84 + 168 = 273. The Babylonians used a sexagesimal positional number system, analogous to the modern day decimal system. Thus, Babylonian multiplication was very similar to modern decimal multiplication. Because of the relative difficulty of remembering 60 × 60 different products, Babylonian mathematicians employed multiplication tables. These tables consisted of a list of the first twenty multiples of a certain principal number n: n, 2n, ..., 20n; followed by the multiples of 10n: 30n 40n, and 50n. Then to compute any sexagesimal product, say 53n, one only needed to add 50n and 3n computed from the table. In the mathematical text Zhou Bi Suan Jing, dated prior to 300 BC, and the Nine Chapters on the Mathematical Art, multiplication calculations were written out in words, although the early Chinese mathematicians employed Rod calculus involving place value addition, subtraction, multiplication and division. These place value decimal arithmetic algorithms were introduced by Al Khwarizmi to Arab countries in the early 9th century. Modern method The modern method of multiplication based on the Hindu–Arabic numeral system was first described by Brahmagupta. Brahmagupta gave rules for addition, subtraction, multiplication and division. Henry Burchard Fine, then professor of Mathematics at Princeton University, wrote the following: - The Indians are the inventors not only of the positional decimal system itself, but of most of the processes involved in elementary reckoning with the system. Addition and subtraction they performed quite as they are performed nowadays; multiplication they effected in many ways, ours among them, but division they did cumbrously. Computer algorithms The standard method of multiplying two n-digit numbers requires n2 simple multiplications. 
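The Egyptian doubling-and-adding procedure described earlier in this section maps directly onto a short program. The sketch below (the function name and structure are mine, not something taken from the historical sources) reproduces the 13 × 21 example.

def egyptian_multiply(a, b):
    # Multiply a and b by successive doubling, as in the Ahmes Papyrus:
    # add up the doublings of b that correspond to the powers of two in a.
    total = 0
    power, multiple = 1, b
    while power <= a:
        if a & power:            # is this power of two part of a?
            total += multiple
        power *= 2
        multiple *= 2            # 1*b, 2*b, 4*b, 8*b, ...
    return total

print(egyptian_multiply(13, 21))   # 273, i.e. (1 + 4 + 8) * 21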
Multiplication algorithms have been designed that reduce the computation time considerably when multiplying large numbers. In particular for very large numbers methods based on the Discrete Fourier Transform can reduce the number of simple multiplications to the order of n log2(n). Products of measurements When two measurements are multiplied together the product is of a type depending on the types of the measurements. The general theory is given by dimensional analysis. This analysis is routinely applied in physics but has also found applications in finance. One can only meaningfully add or subtract quantities of the same type but can multiply or divide quantities of different types. A common example is multiplying speed by time gives distance, so - 50 kilometers per hour × 3 hours = 150 kilometers. Products of sequences Capital Pi notation The product of a sequence of terms can be written with the product symbol, which derives from the capital letter Π (Pi) in the Greek alphabet. Unicode position U+220F (∏) contains a glyph for denoting such a product, distinct from U+03A0 (Π), the letter. The meaning of this notation is given by: The subscript gives the symbol for a dummy variable (i in this case), called the "index of multiplication" together with its lower bound (m), whereas the superscript (here n) gives its upper bound. The lower and upper bound are expressions denoting integers. The factors of the product are obtained by taking the expression following the product operator, with successive integer values substituted for the index of multiplication, starting from the lower bound and incremented by 1 up to and including the upper bound. So, for example: In case m = n, the value of the product is the same as that of the single factor xm. If m > n, the product is the empty product, with the value 1. Infinite products One may also consider products of infinitely many terms; these are called infinite products. Notationally, we would replace n above by the lemniscate ∞. The product of such a series is defined as the limit of the product of the first n terms, as n grows without bound. That is, by definition, One can similarly replace m with negative infinity, and define: provided both limits exist. For the natural numbers, integers, fractions, and real and complex numbers, multiplication has certain properties: - Commutative property - The order in which two numbers are multiplied does not matter: - Associative property - Expressions solely involving multiplication or addition are invariant with respect to order of operations: - Distributive property - Holds with respect to multiplication over addition. This identity is of prime importance in simplifying algebraic expressions: - Identity element - The multiplicative identity is 1; anything multiplied by one is itself. This is known as the identity property: - Zero element - Any number multiplied by zero is zero. This is known as the zero property of multiplication: - Zero is sometimes not included amongst the natural numbers. There are a number of further properties of multiplication not satisfied by all types of numbers. - Negative one times any number is equal to the opposite of that number. - Negative one times negative one is positive one. - The natural numbers do not include negative numbers. - Order preservation - Multiplication by a positive number preserves order: if a > 0, then if b > c then ab > ac. Multiplication by a negative number reverses order: if a < 0 and b > c then ab < ac. 
- The complex numbers do not have an order predicate. Other mathematical systems that include a multiplication operation may not have all these properties. For example, multiplication is not, in general, commutative for matrices and quaternions. In the book Arithmetices principia, nova methodo exposita, Giuseppe Peano proposed axioms for arithmetic based on his axioms for natural numbers. Peano arithmetic has two axioms for multiplication: Here S(y) represents the successor of y, or the natural number that follows y. The various properties like associativity can be proved from these and the other axioms of Peano arithmetic including induction. For instance S(0). denoted by 1, is a multiplicative identity because The axioms for integers typically define them as equivalence classes of ordered pairs of natural numbers. The model is based on treating (x,y) as equivalent to x−y when x and y are treated as integers. Thus both (0,1) and (1,2) are equivalent to −1. The multiplication axiom for integers defined this way is The rule that −1 × −1 = 1 can then be deduced from Multiplication with set theory It is possible, though difficult, to create a recursive definition of multiplication with set theory. Such a system usually relies on the Peano definition of multiplication. Cartesian product if the n copies of a are to be combined in disjoint union then clearly they must be made disjoint; an obvious way to do this is to use either a or n as the indexing set for the other. Then, the members of are exactly those of the Cartesian product . The properties of the multiplicative operation as applying to natural numbers then follow trivially from the corresponding properties of the Cartesian product. Multiplication in group theory There are many sets that, under the operation of multiplication, satisfy the axioms that define group structure. These axioms are closure, associativity, and the inclusion of an identity element and inverses. A simple example is the set of non-zero rational numbers. Here we have identity 1, as opposed to groups under addition where the identity is typically 0. Note that with the rationals, we must exclude zero because, under multiplication, it does not have an inverse: there is no rational number that can be multiplied by zero to result in 1. In this example we have an abelian group, but that is not always the case. To see this, look at the set of invertible square matrices of a given dimension, over a given field. Now it is straightforward to verify closure, associativity, and inclusion of identity (the identity matrix) and inverses. However, matrix multiplication is not commutative, therefore this group is nonabelian. Another fact of note is that the integers under multiplication is not a group, even if we exclude zero. This is easily seen by the nonexistence of an inverse for all elements other than 1 and -1. Multiplication in group theory is typically notated either by a dot, or by juxtaposition (the omission of an operation symbol between elements). So multiplying element a by element b could be notated a b or ab. 
When referring to a group via the indication of the set and operation, the dot is used, e.g., our first example could be indicated by Multiplication of different kinds of numbers Numbers can count (3 apples), order (the 3rd apple), or measure (3.5 feet high); as the history of mathematics has progressed from counting on our fingers to modelling quantum mechanics, multiplication has been generalized to more complicated and abstract types of numbers, and to things that are not numbers (such as matrices) or do not look much like numbers (such as quaternions). - is the sum of M copies of N when N and M are positive whole numbers. This gives the number of things in an array N wide and M high. Generalization to negative numbers can be done by and . The same sign rules apply to rational and real numbers. - Rational numbers - Generalization to fractions is by multiplying the numerators and denominators respectively: . This gives the area of a rectangle high and wide, and is the same as the number of things in an array when the rational numbers happen to be whole numbers. - Real numbers - is the limit of the products of the corresponding terms in certain sequences of rationals that converge to x and y, respectively, and is significant in calculus. This gives the area of a rectangle x high and y wide. See Products of sequences, above. - Complex numbers - Considering complex numbers and as ordered pairs of real numbers and , the product is . This is the same as for reals, , when the imaginary parts and are zero. - Further generalizations - See Multiplication in group theory, above, and Multiplicative Group, which for example includes matrix multiplication. A very general, and abstract, concept of multiplication is as the "multiplicatively denoted" (second) binary operation in a ring. An example of a ring that is not any of the above number systems is a polynomial ring (you can add and multiply polynomials, but polynomials are not numbers in any usual sense.) - Often division, , is the same as multiplication by an inverse, . Multiplication for some types of "numbers" may have corresponding division, without inverses; in an integral domain x may have no inverse "" but may be defined. In a division ring there are inverses but they are not commutative (since is not the same as , may be ambiguous). When multiplication is repeated, the resulting operation is known as exponentiation. For instance, the product of three factors of two (2×2×2) is "two raised to the third power", and is denoted by 23, a two with a superscript three. In this example, the number two is the base, and three is the exponent. In general, the exponent (or superscript) indicates how many times to multiply base by itself, so that the expression indicates that the base a to be multiplied by itself n times. See also - Makoto Yoshida (2009). "Is Multiplication Just Repeated Addition?". - Henry B. Fine. The Number System of Algebra – Treated Theoretically and Historically, (2nd edition, with corrections, 1907), page 90, http://www.archive.org/download/numbersystemofal00fineuoft/numbersystemofal00fineuoft.pdf - PlanetMath: Peano arithmetic - Boyer, Carl B. (revised by Merzbach, Uta C.) (1991). History of Mathematics. John Wiley and Sons, Inc. ISBN 0-471-54397-7. - Multiplication and Arithmetic Operations In Various Number Systems at cut-the-knot - Modern Chinese Multiplication Techniques on an Abacus
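The ordered-pair rule for complex numbers mentioned in the section on multiplying different kinds of numbers above is (a, b) × (c, d) = (ac − bd, ad + bc). As a quick check, the sketch below compares a hand-written version of that rule with Python's built-in complex type; the function name is mine.

def complex_product(p, q):
    # Multiply two complex numbers given as (real, imaginary) pairs.
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

print(complex_product((1, 2), (3, 4)))   # (-5, 10)
print((1 + 2j) * (3 + 4j))               # (-5+10j): Python's built-in agrees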
http://en.wikipedia.org/wiki/Multiplication
13
89
A triangle is a type of polygon having three sides and, therefore, three angles. The triangle is a closed figure formed from three straight line segments joined at their ends. The points at the ends can be called the corners, angles, or vertices of the triangle. Since any given triangle lies completely within a plane, triangles are often treated as two-dimensional geometric figures. As such, a triangle has no volume and, because it is a two-dimensionally closed figure, the flat part of the plane inside the triangle has an area, typically referred to as the area of the triangle. Triangles are always convex polygons. A triangle must have at least some area, so all three corner points of a triangle cannot lie in the same line. The sum of the lengths of any two sides of a triangle is always greater than the length of the third side. The preceding statement is sometimes called the Triangle Inequality.
Certain types of triangles
Categorized by angle
The sum of the interior angles in a triangle always equals 180°. This means that no more than one of the angles can be 90° or more. All three angles can be less than 90°; then the triangle is called an acute triangle. One of the angles can be 90° and the other two less than 90°; then the triangle is called a right triangle. Finally, one of the angles can be more than 90° and the other two less; then the triangle is called an obtuse triangle.
Categorized by sides
If all three of the sides of a triangle are of different length, then the triangle is called a scalene triangle. If two of the sides of a triangle are of equal length, then it is called an isosceles triangle. In an isosceles triangle, the angle between the two equal sides can be more than, equal to, or less than 90°. The other two angles are both less than 90°. If all three sides of a triangle are of equal length, then it is called an equilateral triangle, and all three of the interior angles must be 60°, making it equiangular. Because the interior angles are all equal, all equilateral triangles are also the three-sided variety of a regular polygon and they are all similar, though not necessarily congruent. However, polygons having four or more equal sides might not have equal interior angles, might not be regular polygons, and might not be similar or congruent. Of course, pairs of triangles which are not equilateral might be similar or congruent.
Opposite corners and sides in triangles
If one of the sides of a triangle is chosen, the interior angles of the corners at the side's endpoints can be called adjacent angles. The corner which is not one of these endpoints can be called the corner opposite to the side. The interior angle whose vertex is the opposite corner can be called the angle opposite to the side. Likewise, if a corner or its angle is chosen, then the two sides sharing an endpoint at that corner can be called adjacent sides. The side not having this corner as one of its two endpoints can be called the side opposite to the corner. The sides of a triangle, or their lengths, are typically labeled with lower case letters. The corners or their corresponding angles can be labeled with capital letters. The triangle as a whole can be labeled by a small triangle symbol and its corner points. In a triangle, the largest interior angle is opposite to the longest side, and vice versa.
Any triangle can be divided into two right triangles by taking the longest side as a base, and extending a line segment from the opposite corner to a point on the base such that it is perpendicular to the base.
Any triangle can be divided into two right triangles by taking the longest side as a base, and extending a line segment from the opposite corner to a point on the base such that it is perpendicular to the base. Such a line segment would be considered the height or altitude (h) for that particular base (b). The two right triangles resulting from this division would both share the height as one of their sides. The interior angles at the meeting of the height and base would be 90° for each new right triangle. For acute triangles, any of the three sides can act as the base and have a corresponding height. For more information on right triangles, see Right Triangles and Pythagorean Theorem.

Area of Triangles
If the base and height of a triangle are known, then the area of the triangle can be calculated by the formula A = ½ × b × h (A is the symbol for area). Ways of calculating the area inside of a triangle are further discussed under Area.

The centroid is constructed by drawing all the medians of the triangle. All three medians intersect at the same point: this crossing point is the centroid. Centroids are always inside a triangle. They are also the centre of gravity of the triangle. The three angle bisectors of the triangle intersect at a single point, called the incentre. Incentres are always inside the triangle. The three sides are equidistant from the incentre. The incentre is also the centre of the inscribed circle (incircle) of a triangle, or the interior circle which touches all three sides of the triangle. The circumcentre is the intersection of all three perpendicular bisectors. Unlike the incentre, it is outside the triangle if the triangle is obtuse. Acute triangles always have circumcentres inside, while the circumcentre of a right triangle is the midpoint of the hypotenuse. The vertices of the triangle are equidistant from the circumcentre. The circumcentre is so called because it is the centre of the circumcircle, or the exterior circle which touches all three vertices of the triangle. The orthocentre is the crossing point of the three altitudes. It is always inside acute triangles, outside obtuse triangles, and at the vertex of the right angle in a right triangle. Please note that the centres of an equilateral triangle are always the same point.
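As a quick illustration of the area formula and of the centroid described above, here is a small R sketch; the function names are mine, and the centroid is computed as the average of the three vertex coordinates:

triangle_area <- function(base, height) {
  0.5 * base * height               # A = 1/2 * b * h
}
triangle_centroid <- function(x, y) {
  c(mean(x), mean(y))               # x and y each hold the three vertex coordinates
}
triangle_area(10, 6)                        # 30
triangle_centroid(c(0, 10, 0), c(0, 0, 6))  # 3.33 2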
http://en.wikibooks.org/wiki/Geometry/Triangle
A statistical hypothesis test is a method of making statistical decisions from and about experimental data. Null-hypothesis testing just answers the question of "how well the findings fit the possibility that chance factors alone might be responsible." This is done by asking and answering a hypothetical question. One use is deciding whether experimental results contain enough information to cast doubt on conventional wisdom. As an example, consider determining whether a suitcase contains some radioactive material. Placed under a Geiger counter, it produces 10 counts per minute. The null hypothesis is that no radioactive material is in the suitcase and that all measured counts are due to ambient radioactivity typical of the surrounding air and harmless objects in a suitcase. We can then calculate how likely it is that the null hypothesis produces 10 counts per minute. If it is likely, for example if the null hypothesis predicts on average 9 counts per minute with a standard deviation of 1 count per minute, we say that the suitcase is compatible with the null hypothesis (which does not imply that there is no radioactive material; we simply cannot tell from this measurement). On the other hand, if the null hypothesis predicts, for example, 1 count per minute with a standard deviation of 1 count per minute, then the suitcase is not compatible with the null hypothesis and other factors are likely responsible for the measurements.

The test described here is more fully the null-hypothesis statistical significance test. The null hypothesis is a conjecture that exists solely to be falsified by the sample. Statistical significance is a possible finding of the test - that the sample is unlikely to have occurred by chance given the truth of the null hypothesis. The name of the test describes its formulation and its possible outcome. One characteristic of the test is its crisp decision: reject or do not reject (which is not the same as accept). A calculated value is compared to a threshold. One may be faced with the problem of making a definite decision with respect to an uncertain hypothesis which is known only through its observable consequences. A statistical hypothesis test, or more briefly, a hypothesis test, is an algorithm to choose between the alternatives (for or against the hypothesis) which minimizes certain risks. This article describes the commonly used frequentist treatment of hypothesis testing. From the Bayesian point of view, it is appropriate to treat hypothesis testing as a special case of normative decision theory (specifically a model selection problem), and it is possible to accumulate evidence in favor of (or against) a hypothesis using concepts such as likelihood ratios, known as Bayes factors.

There are several preparations we make before we observe the data.
- The null hypothesis must be stated in mathematical/statistical terms that make it possible to calculate the probability of possible samples assuming the hypothesis is correct. For example: The mean response to the treatment being tested is equal to the mean response to the placebo in the control group. Both responses have the normal distribution with this unknown mean and the same known standard deviation ... (value).
- A test statistic must be chosen that will summarize the information in the sample that is relevant to the hypothesis. In the example given above, it might be the numerical difference between the two sample means, m1 − m2.
- The distribution of the test statistic is used to calculate the probability of sets of possible values (usually an interval or union of intervals). In this example, the difference between sample means would have a normal distribution with a standard deviation equal to the common standard deviation times the factor √(1/n1 + 1/n2), where n1 and n2 are the sample sizes.
- Among all the sets of possible values, we must choose one that we think represents the most extreme evidence against the hypothesis. That is called the critical region of the test statistic. The probability of the test statistic falling in the critical region when the null hypothesis is correct is called the alpha value (or size) of the test.
- The probability that a sample falls in the critical region when the parameter is θ, where θ is a value covered by the alternative hypothesis, is called the power of the test at θ. The power function of a critical region is the function that maps each such θ to the power of the test at θ. (These quantities are illustrated numerically in the short sketch after the definitions below.)

After the data are available, the test statistic is calculated and we determine whether it is inside the critical region. If the test statistic is inside the critical region, then our conclusion is one of the following:
- Reject the null hypothesis. (Therefore the critical region is sometimes called the rejection region, while its complement is the acceptance region.)
- An event of probability less than or equal to alpha has occurred.
The researcher has to choose between these logical alternatives. In the example we would say: the observed response to treatment is statistically significant. If the test statistic is outside the critical region, the only conclusion is that there is not enough evidence to reject the null hypothesis. This is not the same as evidence in favor of the null hypothesis. That we cannot obtain using these arguments, since lack of evidence against a hypothesis is not evidence for it. On this basis, statistical research progresses by eliminating error, not by finding the truth.

Definition of terms
Following the exposition in Lehmann and Romano, we shall make some definitions:
- Simple hypothesis - Any hypothesis which specifies the population distribution completely.
- Composite hypothesis - Any hypothesis which does not specify the population distribution completely.
- Statistical test - A decision function that takes its values in the set of hypotheses.
- Region of acceptance - The set of values for which we fail to reject the null hypothesis.
- Region of rejection / Critical region - The set of values of the test statistic for which the null hypothesis is rejected.
- Power of a test (1 − β) - The test's probability of correctly rejecting the null hypothesis. The complement of the false negative rate.
- Size / Significance level of a test (α) - For simple hypotheses, this is the test's probability of incorrectly rejecting the null hypothesis: the false positive rate. For composite hypotheses this is the upper bound of the probability of rejecting the null hypothesis over all cases covered by the null hypothesis.
- Most powerful test - For a given size or significance level, the test with the greatest power.
- Uniformly most powerful test (UMP) - A test with the greatest power for all values of the parameter being tested.
- Unbiased test - For a specific alternative hypothesis, a test is said to be unbiased when the probability of rejecting the null hypothesis is not less than the significance level when the alternative is true, and is less than or equal to the significance level when the null hypothesis is true.
- Uniformly most powerful unbiased (UMPU) - A test which is UMP in the set of all unbiased tests.
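As a rough numerical sketch of the size and power ideas just defined, the R lines below work through the two-sample example; the sigma, sample sizes, alpha and assumed true difference are made-up illustrative values, not numbers taken from the article:

sigma <- 1.0        # common known standard deviation
n1 <- 30; n2 <- 30  # sample sizes
alpha <- 0.05       # size of the test
se <- sigma * sqrt(1/n1 + 1/n2)       # standard deviation of the difference in sample means
z_crit <- qnorm(1 - alpha/2)          # two-sided critical region: |z| > z_crit
true_diff <- 0.5                      # an assumed true difference under the alternative
# power: probability that the z statistic lands in the critical region
power <- pnorm(-z_crit - true_diff/se) +
         pnorm(z_crit - true_diff/se, lower.tail = FALSE)
round(c(se = se, z_crit = z_crit, power = power), 3)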
Common test statistics
The conditions under which each of the common test statistics applies:
- One-sample z-test: (normal distribution or n > 30) and σ known. (z is the distance from the mean in standard deviations. It is possible to calculate a minimum proportion of a population that falls within n standard deviations; see Chebyshev's inequality.)
- Two-sample z-test: normal distribution, independent observations, and σ1 and σ2 known.
- One-sample t-test: (normal population or n > 30) and σ unknown.
- Paired t-test: (normal population of differences or n > 30) and σ unknown.
- One-proportion z-test: n·p > 10 and n(1 − p) > 10.
- Two-proportion z-test, equal variances: n1·p1 > 5 and n1(1 − p1) > 5 and n2·p2 > 5 and n2(1 − p2) > 5, and independent observations.
- Two-proportion z-test, unequal variances: n1·p1 > 5 and n1(1 − p1) > 5 and n2·p2 > 5 and n2(1 − p2) > 5, and independent observations.
- Two-sample pooled t-test: (normal populations or n1 + n2 > 40), independent observations, σ1 = σ2, and σ1 and σ2 unknown.
- Two-sample unpooled t-test: (normal populations or n1 + n2 > 40), independent observations, σ1 ≠ σ2, and σ1 and σ2 unknown.

Definition of symbols (conventional notation):
- n = sample size
- x̄ = sample mean
- μ = population mean
- σ = population standard deviation
- t = t statistic
- df = degrees of freedom
- n1 = sample 1 size
- n2 = sample 2 size
- s1 = sample 1 std. deviation
- s2 = sample 2 std. deviation
- d̄ = sample mean of differences
- d0 = population mean difference
- sd = std. deviation of differences
- p̂1 = proportion 1
- p̂2 = proportion 2
- μ1 = population 1 mean
- μ2 = population 2 mean
- min(n1, n2) = minimum of n1 and n2

Hypothesis testing is largely the product of Ronald Fisher, Jerzy Neyman, Karl Pearson and (son) Egon Pearson. Fisher was an agricultural statistician who emphasized rigorous experimental design and methods to extract a result from few samples assuming Gaussian distributions. Neyman (who teamed with the younger Pearson) emphasized mathematical rigor and methods to obtain more results from many samples and a wider range of distributions. Modern hypothesis testing is an (extended) hybrid of the Fisher vs Neyman/Pearson formulation, methods and terminology developed in the early 20th century.

The following example is summarized from Fisher. Fisher thoroughly explained his method in a proposed experiment to test a Lady's claimed ability to determine the means of tea preparation by taste. The article is less than 10 pages in length and is notable for its simplicity and completeness regarding terminology, calculations and design of the experiment. The example is loosely based on an event in Fisher's life. The Lady proved him wrong.
- The null hypothesis was that the Lady had no such ability.
- The test statistic was a simple count of the number of successes in 8 trials.
- The distribution associated with the null hypothesis was the binomial distribution familiar from coin-flipping experiments.
- The critical region was the single case of 8 successes in 8 trials, based on a conventional probability criterion (< 5%).
- Fisher asserted that no alternative hypothesis was (ever) required.
If, and only if, the 8 trials produced 8 successes was Fisher willing to reject the null hypothesis - effectively acknowledging the Lady's ability with > 98% confidence (but without quantifying her ability). Fisher later discussed the benefits of more trials and repeated tests.
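A minimal sketch of the calculation the article describes, using R's built-in binomial functions: under the null hypothesis of pure guessing (success probability 1/2), the chance of 8 successes in 8 independent trials is

dbinom(8, size = 8, prob = 0.5)
# 0.00390625, well below the conventional 5% criterion
# The same idea packaged as a one-sided exact binomial test:
binom.test(x = 8, n = 8, p = 0.5, alternative = "greater")$p.value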
Little criticism of the technique appears in introductory statistics texts. Criticism is of the application or of the interpretation rather than of the method. Criticism of null-hypothesis significance testing is available in other articles (null-hypothesis and statistical significance) and their references. Attacks and defenses of the null-hypothesis significance test are collected in Harlow et al. The original purpose of Fisher's formulation, as a tool for the experimenter, was to plan the experiment and to easily assess the information content of the small sample. There is little criticism, Bayesian in nature, of the formulation in its original context. In other contexts, complaints focus on flawed interpretations of the results and over-dependence/emphasis on one test. Numerous attacks on the formulation have failed to supplant it as a criterion for publication in scholarly journals. The most persistent attacks originated from the field of Psychology. After review, the American Psychological Association did not explicitly deprecate the use of null-hypothesis significance testing, but adopted enhanced publication guidelines which implicitly reduced the relative importance of such testing. The International Committee of Medical Journal Editors recognizes an obligation to publish negative (not statistically significant) studies under some circumstances. The applicability of null-hypothesis testing to the publication of observational (as contrasted to experimental) studies is doubtful.

Some statisticians have commented that pure "significance testing" has what is actually a rather strange goal of detecting the existence of a "real" difference between two populations. In practice a difference can almost always be found given a large enough sample; what is typically the more relevant goal of science is a determination of causal effect size. The amount and nature of the difference, in other words, is what should be studied. Many researchers also feel that hypothesis testing is something of a misnomer. In practice a single statistical test in a single study never "proves" anything. "Hypothesis testing: generally speaking, this is a misnomer since much of what is described as hypothesis testing is really null-hypothesis testing." "Statistics do not prove anything." "Billions of supporting examples for absolute truth are outweighed by a single exception." "...in statistics, we can only try to disprove or falsify." Even when you reject a null hypothesis, effect sizes should be taken into consideration. If the effect is statistically significant but the effect size is very small, then it is a stretch to consider the effect theoretically important.

Philosophical criticism
Philosophical criticism of hypothesis testing includes consideration of borderline cases. Any process that produces a crisp decision from uncertainty is subject to claims of unfairness near the decision threshold. (Consider close election results.) The premature death of a laboratory rat during testing can impact doctoral theses and academic tenure decisions. Clotho, Lachesis and Atropos yet spin, weave and cut the threads of life under the guise of Probability. "... surely, God loves the .06 nearly as much as the .05" The statistical significance required for publication has no mathematical basis, but is based on long tradition.
"It is usual and convenient for experimenters to take 5% as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard, and, by this means, to eliminate from further discussion the greater part of the fluctuations which chance causes have introduced into their experimental results." Fisher, in the cited article, designed an experiment to achieve a statistically significant result based on sampling 8 cups of tea. Ambivalence attacks all forms of decision making. A mathematical decision-making process is attractive because it is objective and transparent. It is repulsive because it allows authority to avoid taking personal responsibility for decisions. Pedagogic criticism Edit Pedagogic criticism of the null-hypothesis testing includes the counter-intuitive formulation, the terminology and confusion about the interpretation of results. "Despite the stranglehold that hypothesis testing has on experimental psychology, I find it difficult to imagine a less insightful means of transiting from data to conclusions." Students find it difficult to understand the formulation of statistical null-hypothesis testing. In rhetoric, examples often support an argument, but a mathematical proof "is a logical argument, not an empirical one". A single counterexample results in the rejection of a conjecture. Karl Popper defined science by its vulnerability to dis-proof by data. Null-hypothesis testing shares the mathematical and scientific perspective rather the more familiar rhetorical one. Students expect hypothesis testing to be a statistical tool for illumination of the research hypothesis by the sample; It is not. The test asks indirectly whether the sample can illuminate the research hypothesis. Students also find the terminology confusing. While Fisher disagreed with Neyman and Pearson about the theory of testing, their terminologies have been blended. The blend is not seamless or standardized. While this article teaches a pure Fisher formulation, even it mentions Neyman and Pearson terminology (Type II error and the alternative hypothesis). The typical introductory statistics text is less consistent. The Sage Dictionary of Statistics would not agree with the title of this article, which it would call null-hypothesis testing. "...there is no alternate hypothesis in Fisher's scheme: Indeed, he violently opposed its inclusion by Neyman and Pearson." In discussing test results, "significance" often has two distinct meanings in the same sentence; One is a probability, the other is a subject-matter measurement (such as currency). The significance (meaning) of (statistical) significance is significant (important). There is widespread and fundamental disagreement on the interpretation of test results. "A little thought reveals a fact widely understood among statisticians: The null hypothesis, taken literally (and that's the only way you can take it in formal hypothesis testing), is almost always false in the real world.... If it is false, even to a tiny degree, it must be the case that a large enough sample will produce a significant result and lead to its rejection. So if the null hypothesis is always false, what's the big deal about rejecting it?" (The above criticism only applies to point hypothesis tests. If one were testing, for example, whether a parameter is greater than zero, it would not apply.) 
"How has the virtually barren technique of hypothesis testing come to assume such importance in the process by which we arrive at our conclusions from our data?" Null-hypothesis testing just answers the question of "how well the findings fit the possibility that chance factors alone might be responsible." Null-hypothesis significance testing does not determine the truth or falseness of claims. It determines whether confidence in a claim based solely on a sample-based estimate exceeds a threshold. It is a research quality assurance test, widely used as one requirement for publication of experimental research with statistical results. It is uniformly agreed that statistical significance is not the only consideration in assessing the importance of research results. Rejecting the null hypothesis is not a sufficient condition for publication. "Statistical significance does not necessarily imply practical significance!" Practical criticism Edit Practical criticism of hypothesis testing includes the sobering observation that published test results are often contradicted. Mathematical models support the conjecture that most published medical research test results are flawed. Null-hypothesis testing has not achieved the goal of a low error probability in medical journals. "Contradiction and initially stronger effects are not unusual in highly cited research of clinical interventions and their outcomes." "Most Research Findings Are False for Most Research Designs and for Most Fields" Jones and Tukey suggested a modest improvement in the original null-hypothesis formulation to formalize handling of one-tail tests. Fisher ignored the 8-failure case (equally improbable as the 8-success case) in the example tea test which altered the claimed significance by a factor of 2. Killeen proposed an alternative statistic that estimates the probability of duplicating an experimental result. It "provides all of the information now used in evaluating research, while avoiding many of the pitfalls of traditional statistical inference." - Comparing means test decision tree - Confidence limits (statistics) - Multiple comparisons - Omnibus test - Behrens-Fisher problem - Bootstrapping (statistics) - Fisher's method for combining independent tests of significance - Null hypothesis testing - Predictability (measurement) - Prediction errors - Statistical power - Statistical theory - Statistical significance - Theory formulation - Theory verification - Type I error, Type II error - ↑ 1.0 1.1 1.2 1.3 The Sage Dictionary of Statistics, pg. 76, Duncan Cramer, Dennis Howitt, 2004, ISBN 076194138X - ↑ Testing Statistical Hypotheses, 3E. - ↑ 3.0 3.1 Fisher, Sir Ronald A. (1956). "Mathematics of a Lady Tasting Tea" James Roy Newman The World of Mathematics, volume 3. - ↑ What If There Were No Significance Tests? (Harlow, Mulaik & Steiger, 1997, ISBN 978-0-8058-2634-0 - ↑ The Tao of Statistics, pg. 91, Keller, 2006, ISBN 1-4129-2473-1 - ↑ Rosnow, R.L. & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist, 44, 1276-1284. - ↑ 7.0 7.1 Loftus, G.R. 1991. On the tyranny of hypothesis testing in the social sciences. Contemporary Psychology 36: 102-105. - ↑ 8.0 8.1 Cohen, J. 1990. Things I have learned (so far). American Psychologist 45: 1304-1312. - ↑ Introductory Statistics, Fifth Edition, 1999, pg. 521, Neil A. Weiss, ISBN 0-201-59877-9 - ↑ Ioannidis JPA (2005) Contradicted and initially stronger effects in highly cited clinical research. JAMA 294: 218-228. 
- Ioannidis JPA (2005) Why most published research findings are false. PLoS Med 2(8): e124.
- A Sensible Formulation of the Significance Test, Jones and Tukey, Psychological Methods 2000, Vol. 5, No. 4, pg. 411-414
- An Alternative to Null-Hypothesis Significance Tests, Killeen, Psychol Sci. 2005 May; 16(5): 345-353.

External links
- A Guide to Understanding Hypothesis Testing - A good introduction
- Bayesian critique of classical hypothesis testing
- Critique of classical hypothesis testing highlighting long-standing qualms of statisticians
- Analytical argumentations of probability and statistics
- Laws of Chance Tables - used for testing claims of success greater than what can be attributed to random chance

This page uses Creative Commons Licensed content from Wikipedia (view authors).
http://psychology.wikia.com/wiki/Statistical_test
Using Living Math Materials and Plans with Structured, Incremental or Various Other Math Teaching Approaches

Note: This article was written for families using the Living Math Lesson plans in response to questions by families wanting to integrate the material with their math approach. Most of the comments, however, are applicable to families using various living math resources, and are not limited to lesson plans users.

The Living Math lesson plans were written in a format to facilitate teaching mathematics to all ages within a framework of its historical development. Materials facilitating this have been available for advanced high school levels on up, but I have yet to run across materials beyond Luetta Reimer's "Mathematicians Are People, Too" support materials, Historical Connections in Mathematics (available from AIMS Education), that attempt to provide more than tidbits to students who have not mastered high school level algebra and geometry. Because the lesson plans follow math development through history, they refer to more and more complex ideas. This structure / organization does not directly facilitate the sequence of elementary math learning topics that a traditional math curriculum does. It becomes more naturally sequential by the advanced / high school levels, because the math skills and cognitive development necessary to understand advanced ideas have more likely been attained. As such, the materials aren't written to be used back to back in levels. The Primary Level plans suggest readings and activities appropriate for early elementary students, but it would be entirely appropriate to use the materials over a three to four year period, often repetitively as will be explained below, rather than used once through in order with the assumption that the student is then ready to move on to the Intermediate Level. The same goes for Intermediate and, to a degree, for Advanced. High School is different, in that the skills needed to complete this level of work may be attained to allow for a sequential, college-model study of mathematics in the full context of history. Therefore the comments below contain more for the elementary years than beyond.

When I taught weekly co-op classes using the plans and activities, parents approached the material in several ways. Unschoolers, or advocates of delayed formal academics, tend to find it easiest to adapt the plan materials, because the philosophy of this learning style is not generally incremental learning based. For a child who enjoys reading aloud with a parent and doing hands on activities, it can work quite well to provide math exposure in a wide range of real life situations. It can also provide the parent with many experiences to enrich their ability to stimulate interest in mathematics, especially if they enjoy history. Relaxed or "eclectic" schoolers often took the opportunity of the classes or lesson plans to take a completely different approach to math for a given period of time. For many families, it was a break from a structured approach to help shift attitudes in a more positive direction toward math learning. Some families continued with math curriculum on certain days of the week, and did "living math" on other days. Others immersed themselves in the math history studies and left curriculum aside for months, if not a year. If their children enjoyed the materials, this was a beneficial process.
Many went back to curriculum work after a time, reporting their children enjoyed it more, and together they had "hooks" to place ideas they were encountering in the curriculum on that were simply abstract before. Some families continued a fully structured approach to math, supplementing with Living Math readings and activities for enrichment. They still used a daily curriculum such as Saxon, but reported that they could reduce the number of problems they assigned their children because they were "getting" the math more quickly after the classes or doing activities and readings at home. Many of these families chose to use the Living Math plans as their history curriculum, not worrying about the order of math concepts reported, as they continued to use the curriculum for sequential learning. The most challenging model to teach and/or describe would be one wherein the sequential teaching is done through the Living Math plan materials, without using a formal math curriculum as the base. This is, however, in essence how I myself use the materials with my own children now, although we still use texts and workbooks that appeal to my children. Living math has been a feature of our home for so many years, my children have had read to them and/or have read to themselves the math readers that are scheduled in the lesson plans, often repeatedly. We do the activities – the best ones are often repeated at least once, if not more times over the course of several years. We as a family read the historical and thematic readers and refer to them as we experience things that relate to what we've read. The reason we can repeat an activity in as short a period of a year is the fact we are not trying to limit the learning objective. The emphasis of a repeated activity tends to follow whatever concepts I know my children are working on learning at a given point in their math development. This natural approach took a few years to develop, and required me to be familiar with the tools that were out there to use. I will give examples below. Dialoguing with families about the various ways they used the materials, and the experiences I've had with my own children, have given me confidence that the lesson plan materials can be used with a wide range of homeschooling and teaching approaches. I hope to give parents ideas of ways the materials can be adapted in various situations. Primary/ Elementary Levels (approximately ages 6 to 8) In order to use the Living Math materials as the primary basis to teach incrementally, it requires the parent to be familiar with the math activities available, and be able to identify when activities can be used to facilitate learning of the concepts the parent wants to emphasize in a given period. In other words, get to know your toolkit. When I first began using these materials myself, I had no guides or manuals as to how to teach math through history at the pre-high school levels. The first step I took years ago was that of keeping us on a curriculum through the week, but having math history days wherein we would not work on curriculum at all, but rather read the materials and follow any bunny paths they led us on. This allowed me to keep the structure I felt I needed, but blend in the other materials for interest and relevancy. The bunny paths we took often involved math concepts the kids hadn't yet fully learned, yet they wanted to keep going. 
We found ourselves spending hours and hours on math ideas and activities, whereas had we set that time aside for working math curriculum we would have spent far less time. Interest and relevancy provided energy to spend many more hours on math learning, and in context. Most of the plan activities in the early lesson plans came from these bunny path explorations. I became more and more educated myself about these ideas to be able to naturally insert them when the opportunity came up with a younger child, or if a different concept that was linked to it came up. As we spent more and more time on bunny paths, we cut back on the curriculum use, as it was simply becoming unnecessary and a distraction from our highly productive and enjoyable studies. I also realized many of these activities did not require math skills as advanced as I had assumed. Many times doing an activity with my 9 year old and 6 year old, I could see that the difference was only in how much of the work I myself did to complete the activity, or whether we completed the activity at all. It was okay to stop when substantial learning had occurred and interest began to wane with a younger child, just like leaving some food on your plate when you're full. Similarly, I might be able to do an activity written for an older child with a younger child if I rounded every number to numbers the younger child could comprehend, or if the fractions we encountered were rounded to whole numbers, or to easy fractions they could easily work with.

To give an example, in the Pythagoras lessons, ideas of number theory are introduced. Number theory is usually considered a high-level, complex mathematical area. But the simple idea of even and odd numbers is number theory that originated with Pythagoras. And while simple, it is profound; math theorists often rely on even and odd properties when constructing complex proofs. While I might spend time with my youngest working on even and odd numbers, and introduce figurate numbers to them in concrete / pictorial ways, I would investigate further with my older child relationships between figurate numbers to the level they could understand. We would simply stop when it was clear the child could not comprehend anything more.

In the Ancient Mathematics units different number bases are introduced. Again, in the traditional curriculum, number bases are usually considered middle school level math. But in co-op classes with children as young as 4 or 5 years old, I could demonstrate the Mayan base 20 system quite effectively, if they understood ones, fives and tens, as the numbers are written pictorially. Binary would be more difficult as it is more abstract, but it could be presented concretely with objects, games and activities. More advanced number bases would be an idea for older children. The relationships between numbers can be analyzed by very young children in terms of their additive properties, or their multiplicative properties – big words, but concepts that are shown in math picture books quite easily. The doubling sequence is so prevalent in the history of mathematics it comes up in many activities and stories. The ubiquity of the idea itself communicates to a child the notion that it is a very important idea. I've had young children who could recall and chant the binary sequence as well as or better than they could skip count by 2s. A young child might go as far as 1, 2, 4, 8 . . . my older child may go up to 32 . . . and the oldest as far as they like.
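For parents who like to see the pattern written out, the doubling sequence the children chant is simply the powers of 2, which are also the place values of the binary system. The small R sketch below is mine, not part of the lesson plans, and the helper to_base2 is purely illustrative:

2^(0:10)
# 1 2 4 8 16 32 64 128 256 512 1024
# A hypothetical helper that rewrites an ordinary number in base 2 by
# repeatedly halving and keeping the remainders:
to_base2 <- function(n) {
  digits <- c()
  while (n > 0) {
    digits <- c(n %% 2, digits)   # each remainder becomes the next binary digit
    n <- n %/% 2                  # integer division by 2
  }
  paste(digits, collapse = "")
}
to_base2(13)   # "1101", because 13 = 8 + 4 + 1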
And of course, a middle school child on up can understand that this is an exponential pattern, and relate it to other bases. I would not attempt to teach exponents to a younger child, relying on concrete examples, but they may in fact pick it up themselves, especially if it is compared to our base 10 system. Having taught my oldest children math using more standard math programs such as Math U See and Singapore Math, I found that the activities over time were working the same principles that were presented in the curriculum. But it was up to me as the math mentor to bring the concept teaching in when it was appropriate for their level. As I worked with the materials and ideas more and more, I became better at identifying when a specific activity might work well with my child to work on a concept they were learning. I became an expert at adapting an activity to a child's level, because by exposing myself to these repetitively, I began to intuitively see the basic math structure underneath them. When you yourself actually do an activity with your child, you can observe and participate in the process required to do it. When you realize that multiplication is simply fast addition, any activity involving multiplication can be converted to an activity in addition, by bringing the numbers down to simple terms the child can add, or ending the activity when the terms become too large. Any simple multiplication activity can be used if a child can skip count, reinforcing the upcoming link to multiplication. Once I'd read through the historical materials sequentially with my children, I did not need to stick to the plan order anymore for activities. We can read Hans Magnus Enzensberger's The Number Devil: A Mathematical Adventure and decide to go back to Pythagoras ideas we visited a few months or even a year earlier. If you read the book and did the activity with your children, you can make the connections with them. We can read Theoni Pappas's Penrose the Mathematical Cat books, and revisit the numerous activities and ideas we covered in other units. Repetition in these activities is usually well tolerated, and even welcomed, if it isn't immediately after the first exposure, because in the repetition they see and understand things they didn't understand the first time. Because the activities take more time than a typical math worksheet, they are often remembered for a long time, but even more so if they are repeated at a later date. The "ah-ha" moments are very empowering, showing them how much more they are able to understand than when they saw an idea presented before. This happens to my children very often when a younger child reads an older reader repeatedly over years.

Here are some examples of blending living math materials with incremental teaching at younger levels, and how I've identified opportunities quite often by teaching to older children. My 7th grader was going through the Harold Jacobs Elementary Algebra text with a friend of his who planned on going to high school the next year. As such, his friend's goal was to complete the course, and my son committed to the same goal as long as he was able to keep up. We met twice a week to learn concepts and work problems, and they did homework between our meetings. As my son was familiar with the math history topics from the younger levels, we blended in the advanced reading materials and some activities as they matched the ideas in the algebra course, vs. strictly following the lesson plans.
One activity suggested in the Harold Jacobs text to demonstrate the idea of a direct variation function involved experimenting with dropping bouncy balls from various heights, and recording the data in tables to generate an algebraic formula. It was an extremely effective activity for these middle school boys. We completed the experiments and generated formulas to describe the direct variation between the height we dropped the ball from and the bounce height. I observed that this experiment was similar to one I had done in the Galileo lesson in our math history studies, but it was different in that we were measuring the bounce height, rather than the time. I realized this was easier for younger children to measure. Removing the more abstract aspect of the formula, I realized I could do this with my fourth grade daughter's math group. The girls were working on multiple digit multiplication, easy division and easy fractions / proportions in word problems. We were using a Singapore word problem book to provide a sequential framework for them to work on these skills between our meetings. So whatever activity we did, I emphasized the math skills they were working on, even though other math skills, and many logical reasoning skills, may have come into play.

One day the girls were doing some Hands On Equations work which involved solving equations with "x" and "(-x)." One of the girls wanted to know, is there such a thing as a "y" or "z"? What a great lead-in, I thought, to the bouncy ball experiment I had already planned. I could say, yes, we'll get a "y" in there today. So the girls did the same activity the boys did – dropping the balls, recording the heights, and finding patterns. They estimated the relationships between the two different balls – one ball bounced on average about 2/3 of the way up, the other bounced three fourths of the way up. If we were careful with our measurements, the relationships were strikingly consistent. We converted the bounces into percentages of the original drop height using calculators at first. They have not technically learned percentages, but we've encountered them many times in activities, and I put percentage ideas in terms of cents and dollars which they do understand – i.e., three quarters is the same as 75 cents out of a dollar. We put up a table of their results where they could see that no matter what height they dropped the ball from, the bounces were about the same fraction of the drop height: two thirds for the first ball, three quarters for the second. I made sure we were rounding the figures to significant numbers they could understand. When presented this way, my 9 year old daughter could easily answer the question: if the ball is dropped from 10 feet, how high will the bounce be? Initially she said 7 feet, drew the picture up to three fourths, and then, realizing it was 7-1/2 feet, corrected her answer. She also could figure out that if I dropped the other ball from 9 feet, it would bounce up to 6 feet high. She could do this if I kept the numbers round and simple. Now she has another concrete "hook" to continue to refer to as we work on these skills. The key with activities like this with younger children is to keep the numbers simple and intuitive, so they do not have to rely on more complex algorithms such as long division to get the answers, confusing the lesson to be learned.
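For readers who want to see what the girls' table is doing in more formal terms, here is a sketch of the same direct variation fit in R. The drop and bounce numbers below are hypothetical stand-ins, not the girls' actual measurements:

drop   <- c(24, 36, 48, 60, 72)            # drop heights in inches
bounce <- c(18, 27, 35, 46, 53)            # measured bounce heights in inches
fit <- lm(bounce ~ 0 + drop)               # "0 +" forces the line through the origin
coef(fit)                                  # slope of about 0.75, i.e. three quarters
predict(fit, data.frame(drop = 120))       # a 10-foot drop bounces back about 90 inches

The fitted slope is the "bounce is always the same fraction of the drop" relationship the girls saw in their table, which is all a direct variation function is.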
When children can begin to comprehend simple relationships such as basic fractions of halves, quarters and thirds (and many children can begin to comprehend these in terms of dividing up food or items by age 5 or 6), and can do simple addition, they can begin doing these activities, and the parent need not worry about the fact they can't complete them. I allow younger children to use calculators to complete activities when the math is beyond their comprehension, to again facilitate what they are to learn without confusing it with what they aren't ready for. When we did the Cheops Pyramid activity in the Thales lesson (from Mark Wahl's Mathematical Mystery Tour), an activity I have done successfully many times, the math can become complicated for all but middle schoolers on up. But if I give younger kids a calculator they can complete it. It gives them an idea of how to use a calculator, the importance of a decimal point, and experience with rounding. For these kids, the learning objective isn't how to do division in repeating decimals. It's to see that mathematical relationships can be built into spatial objects. Once the calculations are done, they compare the results and see that the numbers are very similar.

As an illustration of how powerful some of these lessons can be in terms of retention of ideas, my oldest, who is entering his junior year in high school, still recalls many of the concrete lessons he learned. He homeschooled for 9 years before entering high school two years ago. Recently he saw the pyramid my daughter made and commented, "Oh, that's the pyramid that has pi built into it, because it's basically a half sphere, right?" He was 11 years old when he first did the pyramid activity in one of our co-op classes. He tells me that he recalls the formula for the circumference of a circle because he remembers our Egyptian rope-stretching activities. The circumference of a circle is the diameter times three and a "little bit" – a funny idea from his beloved Murderous Maths – and he routinely expresses that in the abstract form, C = d pi or C = 2r pi, while recalling its meaning; it's not just a formula to him. I have done the rope stretching activity twice now with my 9 year old and will likely do it again before she gets to this point. Each time we've done it, she enjoys it and learns something more from it. The last time we did this activity, we practiced division using factors of 12 to get the proportions of the right triangle in place. We also practiced multiplication when finding Pythagorean triples. My 12 year old now has a series of these memorized from doing the activity and then extending it to a chapter in Ed Zaccaro's Challenge Math where he solved a number of right triangle problems that used Pythagorean triples to keep the answers in whole numbers.

Another rather obvious example for early elementary incremental learning is reading math readers or playing around with manipulatives for fun and exposure, but filing away in your mind what the lesson of the activity is if they aren't ready to master it. My youngest daughter was working on addition with regrouping when she was 7 years old. At 5, she read the MathStart reader "A Fair Bear Share," which focuses on regrouping, and has read it many times since then. We also worked quite a bit with an abacus at one time. While she could follow along a year or so ago, she couldn't reproduce the process if given a problem in a workbook. We later encountered regrouping again reading "Mr. Base Ten Invents Mathematics."
She exhibited more conceptual understanding in following it on the page, but no interest in attempting to do it herself on paper. Then, at 7, she encountered regrouping in a Singapore workbook. We brought out the Fair Bear Share book, Mr. Base Ten and the abacus again, and used these old friends to help learn the concept as it was presented in her Singapore book. The books turned out to be more effective than the abacus for her at this point, as she is a print oriented learner. We could refer to the characters and objects in the books when going through the idea and developing a process for her to figure out her answer. She quickly developed her own personal notation to make sure she does not lose track of her ones to be carried over, and in a matter of a couple of days she had this idea fully mastered. A month later, she mastered regrouping of tens and hundreds, realizing the same idea applied that she had learned for her ones. She moved from numbers and quantities she could concretely understand to applying the idea to more abstract numbers and quantities. The idea here is that when we first pulled out these materials in the context of the ideas presented in the math history lessons, I was not attempting to teach her the lesson to mastery, nor did I wait to introduce materials to her because she wasn't ready to master the concepts. She was having fun and enjoying the ideas presented. When she was ready, we pulled out these same materials she was already familiar with, and the lesson was very quickly learned to mastery. I filed away in my mind that her next logical step could be multiple place values of regrouping, which she herself discovered in the Singapore book a few days later. One more lesson to show her that the same process applies to other place values and she understood it, in large part because she really does understand place value concretely through many hours of exploration with base 10 blocks. Therefore she understands that she is carrying tens, or hundreds, not ones, which is a point many children get confused about when taught the regrouping algorithm. If she did not appear to be ready to understand this, I would have waited and kept her supplied with other math activities. Her "next" learning objective might be what she encounters in her Singapore book again, or it might be what she is learning with Hands On Equations, a program that teaches algebraic ideas in a logical sequential manner. I am prepared with the materials that will blend well with what I see her working on next.

My goal, and what I hope will be a benefit to others using these materials, is to become a better and better math mentor to my children with this constant exposure to math in contextual and interesting applications that are far off the page of what I was taught. For many parents, elementary math concepts are no longer routine, but can become exciting and interesting in these contexts, giving us a fresh and new perspective to share with our children. If one wants to teach incrementally using activities, the Primary Level readers and lesson plans contain multiple activities for all basic concepts in early elementary. One would need to separate, however, the activities from the readings to present them incrementally. This is fine; the plans are written as guides, not strict methodology. In fact, the only setting in which the plans really would be strictly followed would be a classroom setting, wherein everyone needs to be on the same page. In a home setting, you have total control over how to use the materials.
Your own comfort level in working activities that contain ideas you yourself never really learned may be a factor. My ability to teach with these materials has improved dramatically over the years because I myself understand them well. I did not have anything more than a typical math education myself until a few years ago. I never heard of Fibonacci numbers, Pascal's Triangle, or Pythagorean triples before embarking on this study with my children. I could not naturally and comfortably present this material to my kids unless I had read, investigated and understood it myself to some degree. In understanding it myself, I could see the underpinnings of the math ideas – that Pascal's Triangle is built on a very simple addition process that a first grader can understand up to a certain level, and that I can go even further if we make it into an art activity, because visual representations of relationships in the triangle become apparent. But if I don't understand it myself, I can't see these underpinnings. So just as with any study, learning ahead of your kids will make you much more comfortable presenting to your children. As you know your own child, you'll see connections they are likely to make based on their current development. And likely they'll make many more connections you won't expect them to make if you do not limit them by not exposing them to ideas beyond where you assess their development to be. If you enjoy the material, you will be much more likely to inspire them with your own enthusiasm. Knowing your children is also instrumental in how much of what type of resource to use and the timing to use it. For wiggly children who do not have long attention spans, abbreviated readings make sense, and possibly you may linger more on hands on activities, or reserve the more challenging history readings for bedtime when they can become quite attentive, especially if it means they might be able to go to bed a little later :o) If readings are too challenging, consider putting the book away for six months or a year, rather than allowing them to develop a negative attitude toward the book that would mean future exposure will be resisted. Focus on the kinds of readers or activities your child is enjoying. In homeschooling, it seems to me, timing is everything. Seasoned homeschoolers will tell you, what is "wrong" for a child now may be totally "right" a year later. Middle School / Pre-Algebra Level Middle school often tends to be a period where curriculum and classrooms keep kids in the pre-algebra territory until they seem ready for algebra, recycling concepts in progressively more difficult settings. This can be a wise strategy in terms of delaying formal algebra instruction until they have all the tools necessary to complete a full course, but it can be boring for a child that has essentially learned basic mathematics, but who has not yet fully developed the level of "proportional reasoning" needed to move to the abstract level formal algebra requires. This is a level that the Living Math Plans lend themselves quite well to use as written. The course provides a review of all basic pre-algebra ideas from counting and place value on up, but in contexts they've likely never seen before. Many of the activities are generated to exercise pre-algebra skills in real contexts. When algebra is referred to, it is usually possible to get the answers without it. Decimals, percents, and ratios are used extensively in activities. 
Exponents, radicals and other important concepts for algebra success are woven in. Links between geometry and algebra are brought in to give students more of an idea where all this math they are learning is heading. The plans can just as easily be used, however, in a similar way as Primary Level if a family desires, especially if a child is borderline for the level, or highly asynchronous with their reading and math skills. Historical readings can be scheduled while the activities can be done in a different order based on the child's skill level. Readings can be done through the week and a family can schedule an activity day, since the amount of time required to complete these activities increases with the skill level. After a year of co-op classes, my middle schoolers grazed on reading multi-concept books such as the Murderous Maths series by Kjartan Poskitt and The Penrose series by Theoni Pappas. My oldest son had read these before, but understood the math in them much better after having gone through the math history activities. Advanced Level and Up: Algebra and Beyond This level offers up a number of ways to use the Living Math materials as well. If an advanced level student has never encountered the ideas in the plans, as is the case for many parents, then working through all the material at a comfortable pace is beneficial. There are many opportunities to learn algebraic ideas that are embedded in the plans. It can be used as a pre-cursor to a formal algebra course, as in fact this level was for my oldest son who in his pre-secondary years was more language oriented than math oriented (this has changed in his high school years to being evenly balanced in skill and interest). He completed a formal algebra course with ease after spending nearly two years with the Living Math plans. The materials can also be used as a conceptual review after an algebra course is completed, as the contexts will be different from most algebra texts, and this was the situation with some of my co-op students as well. Finally, a challenging algebra book is suggested through the lessons (Gelfand's Algebra) for students wanting to learn algebra in a problem-solving framework that is not a typical textbook. Even students who have had an algebra course may find this challenging, it is recommended by the Art of Problem Solving staff for gifted students and students who really want to understand algebra, vs. learn it procedurally. If a student is ready for algebra and will be working through an algebra course concurrently, the pace will need to be slow enough to make room for the time required to learn the algebra material. One way to accomplish this is to make the math history lessons the basis of social studies, and treat the readings and activities as that subject. It might mean that the algebra course would take more than the usual year if you wish to get the full benefit out of both programs. Homeschooling allows us to pace a course this way. My middle son went through the Intermediate Level math history materials a couple years ago, and even that was his second round, as we'd done quite a lot of reading and activities since he was 5 or 6 years old. He has been doing an abbreviated version of the Advanced Level plans, going through materials such as String Straightedge and Shadow that he never read before. He is picking up ideas he did not fully understand, or ideas he forgot, and the familiarity of the previous exposure makes them feel like old friends. 
His choice to work on a formal algebra course last year was due to the opportunity created by his friend's goal to be ready for high school next year, his own realization that he is enjoying algebra after going through Hands on Equations the past few months, and his learning style, which is less print oriented than his brother's; he learns better with me teaching than by trying to teach himself. After half a year of formal algebra, we decided to table that for next year, and he picked up Ed Zaccaro's Challenge Math and 25 Real Life Math Investigations books for the rest of his 7th grade year. So in his situation, we used a fully sequential math textbook as the basis of his math learning for part of the year, laying over it the math history reading and activities that match up to it. The Harold Jacobs Algebra was a good choice for this, since Jacobs does bring in a lot of number patterns and other tools for being able to learn algebra in an analytical way, vs. simply learning via rote practice of processes introduced. Taking the time to work on the binary system worked well with understanding the difference between exponential growth vs. pure multiplicative growth or additive growth – and these ideas are presented early in the text as they learn to differentiate between different sorts of functions and their graphs. If you would like a look at how I've blended these in this fashion, I posted a tentative syllabus here: http://www.livingmath.net/JacobsAlgebraYear/tabid/1000/Default.aspx I could do the same thing with a geometry text. Two years before my oldest son took Algebra I, he took a high school Euclidean geometry class. This provided structure for his math learning that we laid our math history studies on as well.

Incremental Learning Objective Tagging
It is a goal of mine to go through all activities and "tag" them with the concepts they focus on. This is very time consuming, but the project is moving along. I have a concern that activities might be tagged as only being beneficial in teaching certain concepts. In reality, numerous activities can present ideas a kindergartner can learn as well as a high schooler if they have never been exposed to the idea (the Egyptian rope stretching is a great example, or the King's Chessboard, etc.). Parents exposed to these ideas for the first time can understand this. In the meantime, the Primary levels have grown considerably since I originally wrote them to provide a suggested rotation of all primary level concepts through a Cycle of lesson plans. It is not possible, however, that every child will be working on the same concepts at the same time. So it is up to parental discretion as to how much of the various concepts they cover with the child and how they do it. Book lists are posted for each unit, which include extensive reader lists by concept. So even if you only purchased the first unit, if your child is working on skip counting, going through all four units of reading lists for skip counting resources is fine. If they are working on division, look at the Unit 2 list of readers and incorporate those in your living math studies. You won't ruin a scheduled reader by reading it ahead of time, as most Primary children enjoy reading math picture books multiple times. A to-do of mine is to include a list of the basic concepts included in each unit; while this can be derived by looking at the book lists, it would be easier to see if the rotation were shown visually on the website.
It's one of the projects I am fitting in as I can with my own homeschooling.
http://www.livingmath.net/LessonPlans/UsingLivingMathArticle/tabid/1026/language/en-US/Default.aspx
An optical telescope is a telescope which is used to gather and focus light mainly from the visible part of the electromagnetic spectrum to directly view a magnified image for making a photograph, or collecting data through electronic image sensors. There are three primary types of optical telescope: refractors which use lenses (dioptrics), reflectors which use mirrors (catoptrics), and catadioptric telescopes which use both lenses and mirrors in combination. A telescope's light gathering power and ability to resolve small detail is directly related to the diameter (or aperture) of its objective (the primary lens or mirror that collects and focuses the light). The larger the objective, the more light the telescope can collect and the finer detail it can resolve. The telescope is more a discovery of optical craftsmen than an invention of scientist. The lens and the properties of refracting and reflecting light had been known since antiquity and theory on how they worked were developed by ancient Greek philosophers, preserved and expanded on in the medieval Islamic world, and had reached a significantly advanced state by the time of the telescope's invention in early modern Europe. But the most significant step cited in the invention of the telescope was the development of lens manufacture for spectacles, first in Venice and Florence in the thirteenth century, and later in the spectacle making centers in both the Netherlands and Germany. It is in the Netherlands in 1608 where the first recorded optical telescopes (refracting telescopes) appeared. The invention is credited to the spectacle makers Hans Lippershey and Zacharias Janssen in Middelburg, and the instrument-maker and optician Jacob Metius of Alkmaar. Galileo greatly improved upon these designs the following year and is generally credited with being the first to use a telescope for astronomical purposes. Galileo's telescope used Hans Lippershey's design of a convex objective lens and a concave eye lens and this design has come to be called a Galilean telescope. Johannes Kepler proposed an improvement on the design that used a convex eyepiece, often called the Keplerian Telescope. The next big step in the development of refractors was the advent of the Achromatic lens in the early 18th century that corrected chromatic aberration seen in Keplerian telescopes up to that time, allowing for much shorter instruments with much larger objectives. For reflecting telescopes, which use a curved mirror in place of the objective lens, theory preceded practice. The theoretical basis for curved mirrors behaving similar to lenses was probably established by Alhazen, whose theories had been widely disseminated in Latin translations of his work. Soon after the invention of the refracting telescope Galileo, Giovanni Francesco Sagredo, and others, spurred on by their knowledge that curved mirrors had similar properties as lenses, discussed the idea of building a telescope using a mirror as the image forming objective. The potential advantages of using parabolic mirrors (primarily a reduction of spherical aberration with elimination of chromatic aberration) led to several proposed designs for reflecting telescopes, the most notable of which was published in 1663 by James Gregory and came to be called the Gregorian telescope, but no working models were built. 
Isaac Newton has been generally credited with constructing the first practical reflecting telescope, the Newtonian telescope, in 1668, although due to their difficulty of construction and the poor performance of the speculum metal mirrors used, it took over 100 years for reflectors to become popular. Many of the advances in reflecting telescopes included the perfection of parabolic mirror fabrication in the 18th century, silver coated glass mirrors in the 19th century, long-lasting aluminum coatings in the 20th century, segmented mirrors to allow larger diameters, and active optics to compensate for gravitational deformation. A mid-20th century innovation was catadioptric telescopes such as the Schmidt camera, which uses both a lens (corrector plate) and mirror as primary optical elements, mainly used for wide field imaging without spherical aberration. The basic scheme is that the primary light-gathering element, the objective (1) (the convex lens or concave mirror used to gather the incoming light), focuses the light from the distant object (4) to a focal plane where it forms a real image (5). This image may be recorded or viewed through an eyepiece (2), which acts like a magnifying glass. The eye (3) then sees an inverted magnified virtual image (6) of the object.
Inverted images
Most telescope designs produce an inverted image at the focal plane; these are referred to as inverting telescopes. In fact, the image is both inverted and reverted, or rotated 180 degrees from the object orientation. In astronomical telescopes the rotated view is normally not corrected, since it does not affect how the telescope is used. However, a mirror diagonal is often used to place the eyepiece in a more convenient viewing location, and in that case the image is erect but everted (reversed left to right). In terrestrial telescopes such as spotting scopes, monoculars and binoculars, prisms (e.g., Porro prisms) or a relay lens between objective and eyepiece are used to correct the image orientation. There are telescope designs that do not present an inverted image, such as the Galilean refractor and the Gregorian reflector. These are referred to as erecting telescopes.
Design variants
Many types of telescope fold or divert the optical path with secondary or tertiary mirrors. These may be an integral part of the optical design (Newtonian telescope, Cassegrain reflector or similar types), or may simply be used to place the eyepiece or detector at a more convenient position. Telescope designs may also use specially designed additional lenses or mirrors to improve image quality over a larger field of view.
Angular resolution
Ignoring blurring of the image by turbulence in the atmosphere (atmospheric seeing) and optical imperfections of the telescope, the angular resolution of an optical telescope is determined by the diameter of the primary mirror or lens gathering the light (also termed its "aperture"). The Rayleigh criterion gives the resolution limit αR (in radians) as sin αR = 1.22 λ/D, where λ is the wavelength of the light and D is the aperture. For visible light (λ ≈ 550 nm) this works out to approximately αR = 138/D. Here, αR denotes the resolution limit in arcseconds and D is in millimeters. In the ideal case, the two components of a double star system can be discerned even if separated by slightly less than αR. This is taken into account by the Dawes limit, approximately R = 116/D. The equations show that, all else being equal, the larger the aperture, the better the angular resolution. The resolution is not given by the maximum magnification (or "power") of a telescope. Telescopes marketed by giving high values of the maximum power often deliver poor images. For large ground-based telescopes, the resolution is limited by atmospheric seeing.
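To make these formulas concrete, here is a small R sketch (an illustration added for this edition, not part of the original article; the example apertures and the 1-arcsecond seeing figure are assumed round numbers):

rayleigh <- function(D_mm) 138 / D_mm   # Rayleigh criterion in arcseconds, visible light (~550 nm)
dawes    <- function(D_mm) 116 / D_mm   # Dawes limit in arcseconds
D <- c(60, 150, 250, 1000)              # example apertures in millimeters
data.frame(aperture_mm = D,
           rayleigh_arcsec = round(rayleigh(D), 2),
           dawes_arcsec = round(dawes(D), 2))
138 / 1                                 # solving 138/D = 1 arcsec of seeing gives D of about 138 mm

For apertures much beyond roughly 140 mm at a typical ground-based site, it is atmospheric seeing rather than diffraction that sets the practical resolution limit.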
This limit can be overcome by placing the telescopes above the atmosphere, e.g., on the summits of high mountains, on balloon and high-flying airplanes, or in space. Resolution limits can also be overcome by adaptive optics, speckle imaging or lucky imaging for ground-based telescopes. Recently, it has become practical to perform aperture synthesis with arrays of optical telescopes. Very high resolution images can be obtained with groups of widely-spaced smaller telescopes, linked together by carefully controlled optical paths, but these interferometers can only be used for imaging bright objects such as stars or measuring the bright cores of active galaxies. Example images of starspots on Betelgeuse can be seen here. Focal length and f-ratio The focal length determines how wide an angle the telescope can view with a given eyepiece or size of a CCD detector or photographic plate. The f-ratio (or focal ratio, or f-number) of a telescope is the ratio between the focal length and the diameter (i.e., aperture) of the objective. Thus, for a given objective diameter, low f-ratios indicate wide fields of view. Wide-field telescopes (such as astrographs) are used to track satellites and asteroids, for cosmic-ray research, and for astronomical surveys of the sky. It is more difficult to reduce optical aberrations in telescopes with low f-ratio than in telescopes with larger f-ratio. Light-gathering power The light-gathering power of an optical telescope is proportional to the area of the objective lens or mirror, or proportional to the square of the diameter (or aperture). For example, a telescope with a lens which has a diameter three times that of another will have nine times the light-gathering power. A bigger telescope can have an advantage over a smaller one, because their sensitivity increases as the square of the entrance diameter. For example, a 7 meter telescope would be about ten times more sensitive than a 2.4 meter telescope. For a survey of a given area, the field of view is just as important as raw light gathering power. Survey telescopes such as Large Synoptic Survey Telescope therefore try to maximize the product of mirror area and field of view (or etendue) rather than raw light gathering ability alone. Imperfect images No telescope can form a perfect image. Even if a reflecting telescope could have a perfect mirror, or a refracting telescope could have a perfect lens, the effects of aperture diffraction are unavoidable. In reality, perfect mirrors and perfect lenses do not exist, so image aberrations in addition to aperture diffraction must be taken into account. Image aberrations can be broken down into two main classes, monochromatic, and polychromatic. In 1857, Philipp Ludwig von Seidel (1821–1896) decomposed the first order monochromatic aberrations into five constituent aberrations. They are now commonly referred to as the five Seidel Aberrations. The five Seidel aberrations - Spherical aberration - The difference in focal length between paraxial rays and marginal rays, proportional to the square of the objective diameter. - A most objectionable defect by which points are imaged as comet-like asymmetrical patches of light with tails, which makes measurement very imprecise. Its magnitude is usually deduced from the optical sine theorem. - The image of a point forms focal lines at the sagittal and tangental foci and in between (in the absence of coma) an elliptical shape. 
- Curvature of Field - The Petzval field curvature means that the image instead of lying in a plane actually lies on a curved surface which is described as hollow or round. This causes problems when a flat imaging device is used e.g. a photographic plate or CCD image sensor. - Either barrel or pincushion, a radial distortion which must be corrected for if multiple images are to be combined (similar to stitching multiple photos into a panoramic photo). They are always listed in the above order since this expresses their interdependence as first order aberrations via moves of the exit/entrance pupils. The first Seidel aberration, Spherical Aberration, is independent of the position of the exit pupil (as it is the same for axial and extra-axial pencils). The second, coma, changes as a function of pupil distance and spherical aberration, hence the well-known result that it is impossible to correct the coma in a lens free of spherical aberration by simply moving the pupil. Similar dependencies affect the remaining aberrations in the list. The chromatic aberrations - Longitudinal chromatic aberration: As with spherical aberration this is the same for axial and oblique pencils. - Transverse chromatic aberration (chromatic aberration of magnification) Astronomical research telescopes Optical telescopes have been used in astronomical research since the time of their invention in the early 17th century. Many types have be constructed over the years depending on the optical technology, such as refracting and reflecting, the nature of the light or object being imaged, and even where they are placed, such as space telescopes. Some are classified by the task they perform such as Solar telescopes, Large reflectors Nearly all large research-grade astronomical telescopes are reflectors. Some reasons are: - In a lens the entire volume of material has to be free of imperfection and inhomogeneities, whereas in a mirror, only one surface has to be perfectly polished. - Light of different colors travels through a medium other than vacuum at different speeds. This causes chromatic aberration. - Reflectors work in a wider spectrum of light since certain wavelengths are absorbed when passing through glass elements like those found in a refractor or catadioptric. - There are technical difficulties involved in manufacturing and manipulating large-diameter lenses. One of them is that all real materials sag in gravity. A lens can only be held by its perimeter. A mirror, on the other hand, can be supported by the whole side opposite to its reflecting face. Most large research reflectors operate at different focal planes, depending on the type and size of the instrument being used. These including the prime focus of the main mirror, the cassegrain focus (light bounced back down behind the primary mirror), and even external to the telescope all together (such as the Nasmyth and coudé focus). A new era of telescope making was inaugurated by the Multiple Mirror Telescope (MMT), with a mirror composed of six segments synthesizing a mirror of 4.5 meters diameter. This has now been replaced by a single 6.5 m mirror. Its example was followed by the Keck telescopes with 10 m segmented mirrors. The largest current ground-based telescopes have a primary mirror of between 6 and 11 meters in diameter. In this generation of telescopes, the mirror is usually very thin, and is kept in an optimal shape by an array of actuators (see active optics). 
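The push toward ever larger mirrors follows directly from the square-law scaling of light-gathering power described earlier. A brief R sketch (my own illustration, not from the original article) checks the 7 m versus 2.4 m comparison quoted above and adds two of the apertures just mentioned:

light_ratio <- function(d1, d2) (d1 / d2)^2   # collecting area scales with diameter squared
light_ratio(7, 2.4)     # about 8.5, i.e. "about ten times" the sensitivity
light_ratio(10, 2.4)    # a 10 m Keck-class aperture gathers roughly 17 times as much light
light_ratio(10, 6.5)    # and about 2.4 times as much as the rebuilt 6.5 m MMT

Because the gain grows quadratically with diameter, thin, actively supported and segmented primary mirrors repay their engineering cost very quickly.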
This technology has driven new designs for future telescopes with diameters of 30, 50 and even 100 meters. Relatively cheap, mass-produced ~2 meter telescopes have recently been developed and have made a significant impact on astronomy research. These allow many astronomical targets to be monitored continuously, and for large areas of sky to be surveyed. Many are robotic telescopes, computer controlled over the internet (see e.g. the Liverpool Telescope and the Faulkes Telescope North and South), allowing automated follow-up of astronomical events. Initially the detector used in telescopes was the human eye. Later, the sensitized photographic plate took its place, and the spectrograph was introduced, allowing the gathering of spectral information. After the photographic plate, successive generations of electronic detectors, such as the charge-coupled device (CCDs), have been perfected, each with more sensitivity and resolution, and often with a wider wavelength coverage. Current research telescopes have several instruments to choose from such as: - imagers, of different spectral responses - spectrographs, useful in different regions of the spectrum - polarimeters, that detect light polarization. The phenomenon of optical diffraction sets a limit to the resolution and image quality that a telescope can achieve, which is the effective area of the Airy disc, which limits how close two such discs can be placed. This absolute limit is called the diffraction limit (and may be approximated by the Rayleigh criterion, Dawes limit or Sparrow's resolution limit). This limit depends on the wavelength of the studied light (so that the limit for red light comes much earlier than the limit for blue light) and on the diameter of the telescope mirror. This means that a telescope with a certain mirror diameter can theoretically resolve up to a certain limit at a certain wavelength. For conventional telescopes on Earth, the diffraction limit is not relevant for telescopes bigger than about 10 cm. Instead, the seeing, or blur caused by the atmosphere, sets the resolution limit. But in space, or if adaptive optics are used, then reaching the diffraction limit is sometimes possible. At this point, if greater resolution is needed at that wavelength, a wider mirror has to be built or aperture synthesis performed using an array of nearby telescopes. In recent years, a number of technologies to overcome the distortions caused by atmosphere on ground-based telescopes have been developed, with good results. See adaptive optics, speckle imaging and optical interferometry. See also - Amateur telescope making - Depth of field - Globe effect - Bahtinov mask - Carey mask - Hartmann mask - History of optics - List of optical telescopes - List of largest optical reflecting telescopes - List of largest optical refracting telescopes - List of largest optical telescopes historically - List of solar telescopes - List of space telescopes - List of telescope types - galileo.rice.edu The Galileo Project > Science > The Telescope by Al Van Helden – “the telescope was not the invention of scientists; rather, it was the product of craftsmen.” - Fred Watson, Stargazer (page 55) - The History of the Telescope By Henry C. King, Page 25-29 - progression is followed through Robert Grosseteste Witelo, Roger Bacon, through Johannes Kepler, D. C. Lindberg, Theories of Vision from al-Kindi to Kepler, (Chicago: Univ. of Chicago Pr., 1976), pp. 
94-99 - galileo.rice.edu The Galileo Project > Science > The Telescope by Al Van Helden - Renaissance Vision from Spectacles to Telescopes By Vincent Ilardi, page 210 - galileo.rice.edu The Galileo Project > Science > The Telescope by Al Van Helden - The History of the Telescope By Henry C. King, Page 27 "(spectacles) invention, an important step in the history of the telescope" - galileo.rice.edu The Galileo Project > Science > The Telescope by Al Van Helden "The Hague discussed the patent applications first of Hans Lipperhey of Middelburg, and then of Jacob Metius of Alkmaar... another citizen of Middelburg, Sacharias Janssen had a telescope at about the same time but was at the Frankfurt Fair where he tried to sell it" - See his books Astronomiae Pars Optica and Dioptrice - Sphaera - Peter Dollond answers Jesse Ramsden - A review of the events of the invention of the achromatic doublet with emphasis on the roles of Hall, Bass, John Dollond and others. - Stargazer - By Fred Watson, Inc NetLibrary, Page 108 - Stargazer - By Fred Watson, Inc NetLibrary, Page 109 - works by Bonaventura Cavalieri and Marin Mersenne among others have designs for reflecting telescopes - Stargazer - By Fred Watson, Inc NetLibrary, Page 117 - The History of the Telescope By Henry C. King, Page 71 - Isaac Newton: adventurer in thought, by Alfred Rupert Hall, page 67 - Parabolic mirrors were used much earlier, but James Short perfected their construction. See "Reflecting Telescopes (Newtonian Type)". Astronomy Department, University of Michigan. - Silvering was introduced by Léon Foucault in 1857, see madehow.com - Inventor Biographies - Jean-Bernard-Léon Foucault Biography (1819-1868), and the adoption of long lasting aluminized coatings on reflector mirrors in 1932. Bakich sample pages Chapter 2, Page 3 "John Donavan Strong, a young physicist at the California Institute of Technology, was one of the first to coat a mirror with aluminum. He did it by thermal vacuum evaporation. The first mirror he aluminized, in 1932, is the earliest known example of a telescope mirror coated by this technique." - http://optics.nasa.gov/concept/vlst.html NASA - SOMTC- Advanced Concepts Studies – The Very Large Space Telescope (VLST) - S. McLean, Electronic imaging in astronomy: detectors and instrumentation, page 91 - Notes on AMATEUR TELESCOPE OPTICS - Online Telescope Math Calculator - The Resolution of a Telescope - skyandtelescope.com - What To Know (about telescopes)
http://en.wikipedia.org/wiki/Optical_telescope
APEC/CANR, University of Delaware In 1763-67 Charles Mason and Jeremiah Dixon surveyed and marked most of the boundaries between Maryland, Pennsylvania and the Three Lower Counties that became Delaware. The survey, commissioned by the Penn and Calvert families to settle their long-running boundary dispute, provides an interesting reference point in the region’s history. This paper summarizes the historical background of the boundary dispute, the execution of Mason and Dixon’s survey, and the symbolic role of the Mason-Dixon Line in American civil rights history. English claims to North America originated with John Cabot's letters patent from King Henry VII (1496) to explore and claim territories for England. ("John Cabot" was actually a Venetian named Giovanni Caboto.) Cabot almost certainly sailed past Cape Breton, Nova Scotia, and Newfoundland. He stepped ashore only once in North America, in the summer of 1497 at an unknown location, to claim the region for England. It is highly unlikely that Cabot came anywhere near the mid-Atlantic coast, however. The first Europeans to explore the Chesapeake Bay in the 1500's were Spanish explorers and Jesuit missionaries. But based on Cabot's prior claim, Queen Elizabeth I granted Sir Walter Ralegh a land patent in 1584 to establish the first English colony in America. The first English colonists settled on Roanoke Island inside the Outer Banks of North Carolina in 1585 (see colonist John White's map). Most of the colonists returned to England the following year; the remaining settlers had disappeared when White's re-supply ship finally returned to the island in 1590. The Virginia Company of London, a joint stock venture, established the first permanent English colony at "James Fort," aka Jamestown, in 1607. The fort was erected on a peninsula on the James River. The colony was supposed to extract gold from the Indians, or mine for it, and it only survived by switching its economic focus to tobacco (introduced to England by John Rolfe), furs, etc. John Smith published his famous Map of Virginia (1612) based on his 1608 exploration of the Chesapeake. Smith's map includes the "Smyths fales" on the Susquehanna River (now the Conowingo dam), "Gunters Harbour" (North East, MD), and "Pergryns mount" (Iron Hill near Newark, DE). Notice that the latitude markings at the top of the map are surprisingly accurate. The Maryland colony When George Calvert, England’s Secretary of State under King James I, publicly declared his Catholicism in 1625, English law required that he resign. James awarded him an Irish baronetcy, making him the first Lord Baltimore. Although Calvert was an investor in the Virginia Company, he was barred from Virginia because of his religion. He then started his own "Avalon" colony in Newfoundland, but the climate proved inhospitable. So Calvert persuaded James’s successor, Charles I, to grant his family the land north of the Virginia colony that became Maryland. The 1632 grant gave the Calverts everything north of the Potomac to the 40th parallel, and from the Atlantic west to the source of the Potomac. George Calvert died later in 1632, and his sons started the Maryland colony, named in honor of Charles I’s consort Henrietta Maria On May 27th, 1634, Leonard Calvert and about 300 settlers arrived in the Chesapeake Bay at St. Mary’s. George Alsop published a "Land-skip" map of the new colony (1666). 
But while the Calverts were settling on the Chesapeake Bay, Dutch and Swedish colonists were settling on the Delaware Bay (named by Captain Thomas Argyll in honor of Lord De La Warr, the governor of the Virginia colony). At the bottom of the Delaware Bay, Dutch colonists established a settlement at Zwaanendael (now Lewes) and a trading post at Fort Nassau in 1631, although these settlers were killed in a dispute with local Indians within a year. The Swedish colonists were led by Peter Minuit, who had purchased Manhattan Island in 1626 for the Dutch West India Company and directed the New Netherlands colony, including New Amsterdam (New York), from 1626 until 1633, when he was dismissed from the Company. He then negotiated with the Swedish government to create the New Sweden colony on the Delaware River. Minuit and a first group of Swedish colonists on two Swedish ships, the Kalmar Nyckel and the Fogel Grip, arrived at Swedes Landing in 1638 and established Fort Christina (Wilmington) as the principal town in the new colony. The political and economic chaos of the English Civil Wars (1642-51) and the Commonwealth and Protectorate periods stalled English colonial expansion. Charles I had not called a Parliament for a decade, until the Bishops' War in Scotland (1639) bankrupted the crown and forced him to call a new Parliament in 1640. This "Long Parliament" (which lasted eight years!) could only be adjourned by itself. Having lost control of it, Charles left London, raised a Royalist army and sought help from Scottish and Irish Catholic sympathizers. After a series of battles with Parliamentarian forces, Charles was imprisoned in 1648. The "Rump Parliament" under the control of the "New Model Army" ordered his trial for treason. He was convicted and beheaded in 1649. A Parliamentary Commonwealth (1649-53) was replaced by the Protectorate under Oliver Cromwell. After military campaigns in Scotland and Ireland, Cromwell had to deal with the first and second Dutch Wars (1652-54 and 1655-57). After Cromwell died in 1658, the army replaced his son Richard with another Parliamentary Commonwealth under a dysfunctional Rump Parliament (1659-60) before restoring the monarchy and inviting Charles II back from exile. While England was in chaos, the Dutch kept expanding their American colonies. Colonial governor Peter Stuyvesant purchased the land between the Christina River and Bombay Hook from the Indians, and established Fort Casimir at what is now New Castle in 1651. The Swedes, just a few miles up the river, captured Fort Casimir in 1654, but Dutch soldiers from New Amsterdam (Manhattan) took control of the entire New Sweden colony in 1655. After the Restoration brought Charles II to the English throne, English colonial expansion resumed. In 1664 the Duke of York, Charles II's brother James, captured New Amsterdam, renaming it New York, and he seized the Swedish-Dutch colonies on the Delaware River as well. The Dutch briefly recaptured New York in 1673, but after their 1674 defeat in Europe in the third Dutch War, they ceded all their American claims to England in the Treaty of Westminster. Having regained his American territories, the Duke of York granted the land between the Hudson and Delaware rivers to his friends George Carteret and John Berkeley in 1675, and they established the colony of New Jersey.
The Pennsylvania colony
Sir William Penn had served the Duke of York in the Dutch wars, and had loaned the crown about £16,000.
His son William Penn, who had become a Quaker, petitioned Charles for a grant of land north of the Maryland colony as repayment of the debt. In 1681 Charles granted Penn all the land extending five degrees west from the Delaware River between the 40th and 43rd parallels, excluding the lands held by the Duke of York within a "twelve-mile circle" centered on New Castle, plus the lands to the south that had been ceded by the Dutch. Was this to be a twelve-mile radius circle, or a twelve-mile diameter circle, or maybe a twelve-mile circumference circle? The language was unclear, and the question turned out not to matter: even a twelve-mile radius circle centered on New Castle lies entirely below the 40th parallel. The Calvert family had ample opportunity to get the 40th parallel surveyed and marked, but never bothered to do so. Philadelphia was established at the upper limit of deep-water navigability on the Delaware, although it was below the 40th parallel, and Pennsylvania colonists settled areas west and south of the city with no resistance from the Calverts. Penn needed to get his colony better access to the Atlantic, and in 1682 he leased the Duke of York's lands from New Castle down to Cape Henlopen. Penn arrived in New Castle in October 1682 to take official possession of the "Three Lower Counties" on the Delaware Bay. He renamed St. Jones County to Kent County, and Deale County to Sussex County, and the Three Lower Counties were annexed to the Pennsylvania colony. Penn negotiated with the third Lord Baltimore at the end of 1682 at Annapolis, and in April 1683 at New Castle, to establish and mark a formal boundary between Maryland and Pennsylvania including the Three Lower Counties. The Calverts wanted to determine the 40th parallel by astronomical survey, while Penn suggested measuring northward from the southern tip of the Delmarva peninsula (about 37° 5' N), assuming 60 miles per degree as Charles II had suggested. (The true distance of one degree of latitude is about 69 miles.) This would have given Pennsylvania the uppermost part of the Chesapeake Bay. After the negotiations failed, Penn took his case to the Commission for Trade and Plantations. In 1685 the Commission determined that the land lying north of Cape Henlopen between the Delaware Bay and the Chesapeake should be divided equally; the western half belonged to the Calverts, while the eastern half belonged to the crown, i.e., to the Duke of York, and thus to Pennsylvania under Penn's lease. So the north-south boundary between Maryland and the Three Lower Counties was now legally defined, but the east-west boundary between Pennsylvania and Maryland remained unresolved. Charles II died in 1685, and the Duke of York, a Catholic convert, succeeded him as James II. But three years later, William of Orange, the Dutch grandson of Charles I and husband of James II's Protestant daughter Mary, seized the English throne in the "Glorious Revolution." The Calverts lost control of their Maryland holdings, and Maryland was declared a royal colony. Penn's ownership of Pennsylvania and the Lower Three Counties was also suspended from 1691 to 1694. The Calverts did not regain their proprietorship of Maryland until 1713 when Charles Calvert, the fifth Lord Baltimore, renounced Catholicism. Penn revisited America in 1699-1701, and reluctantly granted Pennsylvania and the Lower Three Counties separate elected legislatures under the Charter of Privileges.
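Looking back at those 1683 negotiations for a moment, a rough calculation shows what was at stake in Penn's 60-miles-to-a-degree proposal. The short R sketch below is my own illustration; the 69.1-mile length of a true degree of latitude is an approximate modern figure, and the starting latitude is the rounded value quoted above:

start_lat  <- 37 + 5/60            # southern tip of the Delmarva peninsula, degrees north
deg_needed <- 40 - start_lat       # degrees of latitude up to the true 40th parallel
miles_run  <- deg_needed * 60      # distance Penn proposed to measure off: about 175 miles
true_lat   <- start_lat + miles_run / 69.1   # latitude actually reached: about 39.6 degrees north
c(miles_run = miles_run, true_lat = true_lat, miles_short = (40 - true_lat) * 69.1)

A "40th parallel" fixed Penn's way would therefore have fallen roughly 26 miles south of the real one, near the head of the Chesapeake, which is exactly the advantage Penn had been after.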
He also commissioned local surveyors Thomas Pierson and Isaac Taylor to survey and demarcate the twelve-mile radius arc boundary between New Castle and Chester counties. Pierson and Taylor completed the survey in ten days using just a chain and compass. The survey marks were tree blazes, and once these disappeared, the location of the arc boundary was mostly a matter of fuzzy recall and conjecture. Geodetic science was in its infancy. Latitude could be estimated reasonably accurately with sextant and compass, but longitude was largely guesswork. As England’s naval power and colonial holdings continued to expand, the demand for better maps and navigation intensified. Parliament set a prize of £20,000 for a solution to the "longitude problem" in 1712. The challenge was to determine a longitude in the West Indies onboard a ship with less than half a degree of longitude error. Dava Sobel’s book Longitude (1996) details how clock-maker John Harrison eventually won the prize with his "H4" precision chronometer. Penn died in 1718, disinheriting his alcoholic eldest son William Jr., and leaving the colonies to his second wife Hannah, who transferred the lands to her sons Thomas, John, Richard and Dennis. Thomas outlived the others and accumulated a two-thirds interest in the holdings. In 1731, the fifth Lord Baltimore petitioned King George II for an official resolution of the boundary dispute. In the ensuing negotiations the Calverts tried to hold out for the 40th parallel, but Pennsylvania colonists had settled enough land to the west and southward of Philadelphia that this was no longer practical. In 1732 the parties agreed that the boundary line should run east from Cape Henlopen to the midpoint of the peninsula, then north to a tangency with the west side of the twelve-mile radius arc around New Castle, then around the arc to its northernmost point, then due north to an east-west line 15 miles south of Philadelphia. It was a bad deal for the Calverts. The east-west line would turn out to be about 19 miles south of the 40th parallel, and, as the map appended to the agreement shows (Senex, 1732), would intersect the arc. The map placed "Cape Hinlopen" at what is now Fenwick Island, almost 20 miles to the south as well; this error was an attempt at deception, not ignorance (compare the 1670 map from more that 50 years earlier). But litigation over interpretation and details dragged on. The border conflict led to sporadic local violence. In 1736 a mob of Pennsylvanians attacked a Maryland farmstead. A survey party commissioned by the Calverts was run off by another mob in 1743. In 1750, the Court of Chancery established a bipartisan commission to survey and mark the boundaries per the 1732 agreement. The commissioners hired local surveyors to mark an east-west transpeninsular line from Fenwick Island to the Chesapeake in 1750-51, and then determine the middle point of this line, which would mark the southwest corner of the Three Lower Counties. As the survey team worked from Fenwick Island westward the rivers, swamps and dense vegetation made the work difficult, and there were continuing disputes, e.g., should distances be determined by horizontal measures or on the slopes of the terrain? Should the transpeninsular line stop at the Slaughter Creek estuary or continue across that peninsula, known as Taylor’s Island, to the open Chesapeake? Should the line stop at the inundated marsh line of the Chesapeake or at open water? 
The transpeninsular survey and its middle point were not officially approved in London until 1760. In 1761, the colonial surveyors began running the north-south "tangency" line from the middle point toward a target tangent point on the twelve-mile arc. With poor equipment and some miscalculations, their first try at a tangency line passed a half-mile east of the target point on the arc. Their second try was 350 yards to the west. The disputants required much higher standards of accuracy, and they consulted the royal astronomer James Bradley at the Greenwich observatory for advice on getting the survey done right.
The Mason and Dixon survey
Bradley recommended Charles Mason and Jeremiah Dixon to complete the boundary survey. Mason was Bradley's assistant at the observatory, an Anglican widower with two sons. Dixon was a skilled surveyor from Durham, a Quaker bachelor whose Meeting had ousted him for his unwillingness to abstain from liquor. In 1761 Mason and Dixon had sailed together for Sumatra, but only made it to the Cape of Good Hope, to record a transit of Venus across the sun to support the Royal Society's calculations of distance by parallax between the Earth and sun. Their major tasks in America would be to survey the exact tangent line northward from the middle point of the transpeninsular line to the twelve-mile arc, and survey the east-west boundary five degrees westward along a line of latitude passing fifteen miles south of the southernmost part of Philadelphia (Figure 6). It would be one of the great technological feats of the century. Mason and Dixon arrived in Philadelphia on November 15th 1763 during a tense period. The Seven Years' War had spilled over to North America as the French and Indian Wars, and although the Treaty of Paris, signed in February 1763, had put an official end to the hostilities, conflicts between colonists and Indians continued. The Iroquois League, or Six Nations (Mohawk, Onondaga, Cayuga, Seneca, Oneida and Tuscarora), had supported the British against their longtime enemies, the Cherokee, Huron, Algonquin and Ottawa, whom the French had supported in their attacks on colonists. Pontiac, chief of the Ottawa, had organized a large-scale attack on Fort Detroit on May 5th 1763, and some 200 settlers were massacred along the western frontier. Local reaction to the news was brutal. In Lancaster, Pennsylvania, a mob of mostly Scots-Irish immigrants known as the "Paxton Boys" attacked a small Conestoga Indian village in December, hacking their victims to death and scalping them. The remaining Conestogas were brought to the town jail for protection, but when the mob attacked the jail the regiment assigned to protect the Indians did nothing to stop them. The helpless Indians—men, women and children—were all hacked to pieces and scalped in their cells. The Paxton Boys then went after local Moravian Indians, who were taken to Philadelphia for protection. Enraged that the government would "protect Indians but not settlers," about 500 Paxton Boys actually invaded Philadelphia on February 6, 1764, although Benjamin Franklin was able to calm the mob. Mason and Dixon were shocked at the violence, and Mason would visit the scene of the Lancaster murders a year later. As the survey progressed, racial violence and the relentless dispossession of Indians were frequent background themes. Mason had brought along state-of-the-art equipment for the survey.
This included a "transit and equal altitude instrument," a telescope with cross-hairs, mounted with precision adjustment screws, to sight exact horizontal points using a mounted spirit level, and also to determine true north by tracking stars to their maximum heights in the sky where they crossed the meridian. The famous "zenith sector," built by London instrument-maker John Bird, was a six-foot telescope mounted on a six-foot radius protractor scale, with fine tangent screws to adjust its position; it was used to measure the angles of reference stars from the zenith of the sky as they crossed the meridian. These measurements could be compared against the same stars' published declinations (their angular distances from the celestial equator) to determine latitude. These were more reliable than measurements of azimuth against a plumb bob, which were already known to be subject to local gravitational anomalies. The zenith sector traveled on a mattress laid on a cart with a spring suspension. Mason and Dixon also brought a Hadley quadrant, used to measure angular distances; high-quality survey telescopes; 66-foot long Gunter chains made up of 100 links each (1 chain = 4 rods; 1 chain × 10 chains = 43,560 square feet = 1 acre; 80 chains = 1 mile), along with a precision brass measure to calibrate the chain lengths; and wood measuring rods or "levels" to measure level distances across sloping ground. A large wooden chest contained a collection of star almanacs, seven-figure logarithm tables, trigonometric tables and other reference materials; Mason was skilled at spherical trigonometry. Mason had acquired a precision clock so that the local times of predicted astronomical events could be compared against published Greenwich times. Each one-minute difference in local time implies a 15-arcminute difference in longitude, or 15 arcseconds of longitude per second of time. John Harrison's "H4" chronometer had sailed to Jamaica and back in 1761, losing only 39 seconds on the round trip; the longitude calculations in Jamaica based on his clock were well within the accuracy standards Parliament had set for the £20,000 longitude prize. But Nevil Maskelyne, who had succeeded Bradley as royal astronomer, and the Royal Society remained skeptical about the reliability of chronometers in complementing astronomical calculations of longitude. Maskelyne insisted on the superiority of a purely astronomical approach, a computationally complex "lunar distance" method based on angular distances between the moon and various reference stars. Harrison wouldn't collect his entire prize until 1773. Mason and Dixon would test the reliability of chronometric positioning, although Mason was skeptical of it. The southernmost part of Philadelphia was determined by the survey commissioners to be the north wall of a house on the south side of Cedar Street (the address is now 30 South Street) near Second Street. Mason and Dixon had a temporary observatory erected 55 yards northwest of the house, and after detailed celestial observations and calculations, they determined the latitude of the house wall to be 39°56'29.1"N. Since going straight south would take them through the Delaware River, they then surveyed and measured an arbitrary distance (31 miles) west to a farm owned by John Harland in Embreeville, Pennsylvania, at the "Forks of the Brandywine." They negotiated with Harland to set up an observatory, and set a reference stone, now known as the Stargazer's Stone, at the same latitude.
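The chain and clock conversions above do a lot of work in what follows, so a quick numerical check may help (an R sketch added for this edition; only the definitions quoted above are used, plus an approximate 53 miles per degree of longitude at this latitude):

chain_ft <- 66                 # one 100-link Gunter chain is 66 feet, i.e. 4 rods
chain_ft * (10 * chain_ft)     # 66 ft x 660 ft = 43,560 square feet = 1 acre
80 * chain_ft                  # 80 chains = 5,280 feet = 1 statute mile
360 * 3600 / 86400             # the Earth turns 15 arcseconds of longitude per second of time
(360 / 24) * 60 / 60           # equivalently, 15 arcminutes of longitude per minute of time

At the latitude of the survey, where a degree of longitude spans roughly 53 miles, one second of clock error therefore corresponds to about a fifth of a mile on the ground.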
They spent the winter at Harland's farm making astronomical observations on clear nights and enjoying local taverns on cloudy nights. The Harland house still stands at the intersection of Embreeville and Stargazer Roads, and the Stargazers' Stone is in a stone enclosure just up Stargazer Road on the right. Its latitude is 39°56'18.9"N, which they calculated to be 356.8 yards south of the parallel determined in Philadelphia. At Harland's they observed and timed predicted transits of Jupiter's moons, as well as a lunar eclipse on March 17th 1764. The average (sun) time of these events at the Stargazers' Stone was 5 hours 12 minutes and 54 seconds earlier than published predicted times for the Paris observatory (longitude 2°20'14"E). So they were able to estimate their longitude as (5:12:54)/(24:00:00) × 360° = 78°13'30" west of Paris, and thus 78°13'30" - 2°20'14" = 75°53'6" west of Greenwich. They published these findings in the Royal Society's Philosophical Transactions in 1769. The clock used in this experiment was actually 37 seconds fast, so at fifteen arc seconds of longitude per clock second, their calculated longitude was 9'15", or about eight miles, too far west. That was more accurate than Parliament's longitude prize had required, but the margin of error was still a thousand times larger than the margin of error in their latitude calculations. Fortunately, Mason and Dixon's principal tasks involved more local positioning than global positioning. They proposed measuring a degree of longitude for the Royal Society as part of their survey of the parallel between Pennsylvania and Maryland; although the Society never funded that project, it would fund their measurement of a degree of latitude in 1768. In the spring of 1764 the survey party ran a line due south from Harland's farm, measured with the survey chains and levels, with a team of axmen clearing a "visto" or line of sight eight or nine yards wide the entire way. They arrived in April 1764 at a farm field owned by Alexander Bryan in what is now the Possum Hill section of Delaware's White Clay Creek State Park. They placed an oak post called "Post mark'd West" at a latitude of 39°43'26.4"N, after verifying that this point was exactly 15 miles below the 39°56'29.1"N latitude they had determined in Philadelphia. This point is now marked by a stone monument accessible by a short spur trail off the Bryan's Field trail, about 600 yards downhill (due south) from the ruins of the farmstead. The easiest access point is from the east (gravel road) parking lot at Possum Hill off Paper Mill Road. The Post mark'd West would be the eastern origin and reference latitude point for the west line. Mason and Dixon then headed south to the middle point of the transpeninsular line that the colonial surveyors had marked, and they spent the rest of 1764 surveying the north-south boundary line. With a team of axmen clearing the vistos ahead of them, they resurveyed and marked the tangency line northward from the middle point toward the target tangency point on the twelve-mile arc 82 miles to the north. They crossed the Nanticoke River, Marshyhope Creek, Choptank River, Bohemia River, and Broad Creek. Where their survey chains could not span a river, they measured the river width by triangulation, using the Hadley quadrant on its side to calculate the angle between two points on the opposite side. They arrived at the 82-mile point in August 1764.
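As an aside, the longitude figures quoted above can be reproduced in a few lines of R (my own check; the 53.2 miles per degree of longitude at this latitude is my approximation, not a figure from the text):

time_diff_h <- 5 + 12/60 + 54/3600               # events observed 5 h 12 m 54 s earlier than at Paris
time_diff_h / 24 * 360                           # 78.225 degrees = 78 deg 13' 30" west of Paris
time_diff_h / 24 * 360 - (2 + 20/60 + 14/3600)   # about 75.9 degrees west of Greenwich
37 * 15 / 60                                     # a 37-second clock error at 15"/s is 9.25' = 9' 15"
37 * 15 / 3600 * 53.2                            # about 8 miles of east-west error at this latitude

None of this affected the boundary work itself, which depended on latitude rather than longitude.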
Mason and Dixon then ran an exact twelve-mile line from the New Castle courthouse to the tangency line, setting the tangent point marker at the 82-mile point of the tangency line; this is located by a small drainage pond at the edge of an apartment complex, about 600 meters south of the Delaware-Maryland boundary on Elkton Road, about 100 meters north of the rail lines. It was 17 chains and 25 links west of the tangency point targeted by the 1761 survey. Since the tangency line runs slightly west of true north, the tangent point lies south and slightly east of the arc’s westernmost point. After joining the tangency line perpendicularly to the twelve-mile radius line from New Castle in August 1764, they returned south to the middle point, checking and correcting marks as they went. On this re-check, their final error at the middle point, after 82 miles, was 26 inches. They returned north again, making final placements of the marks into November. During this phase of the survey, their base of operations in Delaware was St. Patrick’s Tavern in Newark, where the Deer Park Tavern now stands. Tavern scenes in Thomas Pynchon’s 1997 novel Mason & Dixon are consistent with at least one contemporary account of their enjoyment of the taproom. In January 1765 Mason visited Lancaster (and the jail where the Tuscaroras had been slaughtered) and "Pechway" (Pequa). In February, he toured Princeton NJ and New York. Mason and Dixon began the survey of the west line from the "Post mark’d West" in April 1765. The Arc Corner Monument, located at the north side of the W.S. Carpenter Recreation Area of White Clay Creek State Park, just off Hopkins Bridge Road, marks the intersection of the west line with the 12-mile arc, and is the start of the actual Maryland-Pennsylvania boundary line. Mason and Dixon spent the next couple of years surveying this line westward. Again, their axmen cleared vistos, generally eight yards wide. They would survey straight 12-mile line segments, starting at headings about 9 minutes northward of true west and sighting linear chords to the true latitude curve, then make detailed astronomical calculations to adjust the intermediate mile mark southward to the exact 39o 43’ 17.4" N latitude. The true latitude is not a straight line: looking westward in the northern hemisphere it gradually curves to the right. It was exacting work. The survey crossed the two branches of the Christina Creek, the Elk River, and the winding Octoraro several times. The survey party reached the Susquehanna in May 1765. At the end of May they interrupted the survey of the west line, and returned to Newark to survey the north line from the tangent point through the western edge of the 12-mile arc to its intersection with the west line. From the tangent point, the survey proceeded due north, intersecting the arc boundary again about a mile and a half further up at a point now marked by an "intersection stone." The location is behind the DuPont Company’s Stine-Haskell labs north of Elkton Road very near the Conrail rail line. The north line ended at a perpendicular intersection with the west line in a tobacco field owned by Captain John Singleton. This is the northeast corner of Maryland. The boundaries between Maryland and the Three Lower Counties were now complete. The locations of the final mile points on the tangent and north lines, and the discernible inflections of the Maryland/Delaware boundary at the tangent point, are shown on the Newark West 7.5-minute USGS topographic map. 
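For a sense of the precision involved, the closure error reported above works out to about one part in 200,000, as a one-line R check shows (my own arithmetic):

(82 * 5280 * 12) / 26     # 82 miles expressed in inches, divided by the 26-inch closure error

That is roughly equivalent to mislocating the far end of a mile-long line by a third of an inch.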
The tri-state marker is located about 150 meters east of Rt. 896 behind a blue industrial building at the MD/PA boundary. The thin sliver of land (secant) west of the North Line but within the 12-mile arc west of the North Line was assigned to New Castle County (PA, now DE) per the 1732 agreement. The "Wedge" between the North Line and the 12-mile arc just below the West Line was assigned to Chester County, PA, but later ceded to Delaware. In June 1765 Mason and Dixon reported their progress to the survey commissioners representing the Penn and Calvert families at Christiana Bridge (now the village of Christiana). They then resumed the survey of the west line from the Susquehanna. As they went along, the locals learned whether they were Marylanders or Pennsylvanians. They reached South Mountain (mile 61) at the end of August, crossed Antietam Creek and the Potomac River in late September, and continued westward to North (aka Cove) Mountain near Mercersburg PA in late October, completing a total of 117 miles of the west line that year. From the summit of North Mountain they could see that their west line would pass about two miles north of the northernmost bend in the Potomac. Had the Potomac looped further north into Pennsylvania, the western piece of Maryland would have been cut off from the rest of the colony. The survey party stored their instruments at the house of a Captain Shelby near North Mountain, and returned east in the fall, checking and resetting their marks. In November 1765 they returned to the middle point to place the first 50 mile markers along the tangent line. These had been quarried and carved in England, and were delivered via the Nanticoke and Choptank rivers. They spent January 1766 at the Harland farm. In February and March, Mason traveled "for curiosity" to York PA; Frederick MD; Alexandria, Port Royal, Williamsburg and Annapolis VA. The survey party rendezvoused at North Mountain in March 1766 and resumed the survey from there, reaching Sideling Hill at mile 135 at the end of April. There were long periods of rain and snow through late April. West of Sideling Hill was almost unbroken wilderness, and the wagons with the marker stones couldn’t make it over the mountain so they marked with oak posts from there onward. They reached mile 165 in June, near the eastern continental divide, and spent the rest of the summer backtracking for corrections and final placement of marks. Mason noted the gradual curvature of the visto along the latitude, as seen from several summits including the top of South Mountain: From any Eminence in the Line where 15 or 20 Miles of the Visto can be seen (of which there are many): The said Line or Visto very apparently shows itself to form a Parallel of Northern Latitude. The Line is measured Horizontal; the Hills and Mountains measured with a 16 ½ feet Level and besides the Mile Posts; we have set Posts in the true Line, marked W, on the West side, all along the Line opposite the Stationary Points where the Sector and Transit Instrument stood. The said Posts stand in the Middle of the Visto; which in general is about Eight yards wide. The number of Posts set in the West Line is 303. (Journal entry for 25 September 1766) Back in Newark in October, they got permission from the commissioners to measure the distance of a degree of latitude as a side project for the Royal Society. 
They returned to the middle point of the transpeninsular line for astronomical observations in preparation for this, then returned to Newark and began setting 100 stone mile markers along tangent and west lines. The stones in the west line were set at mile intervals starting from the northeast corner of Maryland. At the end of November, at the request of the commissioners, they measured the eastward extension of the west line from the Post mark’d West across Pike, Mill, Red Clay and Christiana (Christina) creeks to the Delaware River. The southern boundary of Pennsylvania was to extend 5 degrees of longitude west from this point. They spent parts of the winter of 1766-67 at Harland’s farm making astronomical observations, using a clock on loan from the Royal Society to time ephemera. Mason spent the late winter and early spring traveling through Pennsylvania, Maryland and Virginia. He met the chief of the Tuscaroras in Williamsburg. The survey was supposed to extend a full five degrees of longitude (about 265 miles) to the west, but the Iroquois wanted the survey stopped. Negotiations between the Six Nations and William Johnson, the commissioner of Indian Affairs, lasted well into 1767. After a payment of £500 to the Indians, Mason and Dixon finally got authorization in June 1767 to continue the survey from the forks of the Potomac near Cumberland. They started out with more than 100 men that summer, including an Indian escort party and a translator, Hugh Crawford, as they continued the survey westward from mile 162. A.H. Mason’s edition of the survey journal (1969) includes a long undated memorandum written by Mason describing the terrain crossed by the west line. West of the Monongahela they met Catfish, a Delaware; then a party of Seneca warriors on a raid against the Cherokees; then Prisqueetom, an 86-year-old Delaware who "had a great mind to go and see the great King over the Waters; and make a perpetual Peace with him; but was afraid he should not be sent back to his own Country." The memorandum includes Crawford’s detailed descriptions of the Allegheny, Ohio and Mississippi rivers and many of their tributaries. As the survey party opened the visto further westward, the Indians grew increasingly resentful of the intrusion into their lands. The survey team reached mile 219 at the Monongahela River in September. Twenty-six men quit the crew in fear of reprisals from Indians, leaving only fifteen axmen to continue clearing vistos for the survey until additional axmen could be sent from Fort Cumberland. On October 9th, 231 miles from the Post mark’d West, the survey crossed the Great Warrior Path, the principal north-south Indian footpath in eastern North America. The Mohawks accompanying the survey said the warpath was the western extent of the commission with the chiefs of the Six Nations, and insisted the survey be terminated there. Realizing they had gone as far as they could, Mason and Dixon set up their zenith sector and corrected their latitude, and backtracked about 25 miles to reset their last marks. They left a stone pyramid at the westernmost point of their survey, 233 miles 17 chains and 48 links west of the Post mark’d West in Bryan’s field. Mason and Dixon returned east, arriving back at Bryan’s farm on December 9th 1767, and reported their work to the commissioners at Christiana Bridge later that month. They had hoped the Royal Society would sponsor a measurement of a degree of longitude along the west line, but that proposal was never approved. 
Mason calculated that if the earth were a perfect spheroid of uniform density (which it is not) the measurement would be 53.5549 miles. They spent about four months in early 1768 working on the latitude measurement project for the Royal society, using high-precision measuring levels with adjustments for temperature. They worked their way southward from the tangent point reaching the middle point in early June 1768, then working northward again. In Mason’s final calculation, published in the Royal Society’s Philosophical Transactions in 1768, a degree of latitude on the Delmarva Peninsula from the middle point northward was 68.7291 miles. On August 16th 1768 they delivered 200 printed copies of the map of their surveys, as drawn by Dixon, to the commissioners at a meeting at New Town on the Chester River. They were elected to the American Philosophical Society in April 1768. After settling their accounts, they enjoyed ten days of socializing in Philadelphia and then left for New York, sailing on the Halifax Packet to Falmouth, England, on September 11th 1768. Mason and Dixon never worked together again. In May 1769 the Royal Society sent Dixon to Hammerfest, above the Arctic Circle in Norway, and Mason to Cavan, Ireland, to record the June 4th transit of Venus, which occurred simultaneously with a lunar eclipse. David Rittenhouse and members of the American Philosophical Society conducted simultaneous observations in America. Dixon was elected a fellow of the Royal Society in 1773. He remained a bachelor, retired to Cockfield, Durham, and died in 1779 at age 45. Mason remarried in 1770, and continued to work for Nevil Maskelyne at the Royal Observatory, although he was never elected to the Royal Society. He returned to Philadelphia with his second wife and eight children in July 1786, died there on October 25th, and was buried in the Christ Church burial ground on Arch Street. His widow and her six children returned to England. His two sons from his first marriage remained in America. Less than a decade after the 1763-67 survey settled their long-running boundary dispute, the Penns and Calverts lost their colonies to the American Revolution. On June 15, 1776, the "Assembly of the Lower Counties of Pennsylvania" declared that New Castle, Kent and Sussex Counties would be separate and independent of both Pennsylvania and Britain. So Mason and Dixon’s tangent, north and west lines became the boundaries between the three new states of Delaware, Maryland and Pennsylvania. The Mason-Dixon Line The west line would not become famous as the "Mason-Dixon Line" for another fifty years as America slowly and haltingly addressed longstanding inequities in civil rights. In the east, the piedmont Lenni Lenape tribes of Delaware and Pennsylvania were completely dispossessed, and the remnants of the tribes were eventually relocated by a series of forced marches: to Ohio, Indiana, Missouri, Kansas, and finally to the Indian Territory which became Oklahoma. Hannah Freeman (1730-1802), known as "Indian Hannah," was the last of the Lenni Lenape in Chester County, Pennsylvania. The tidewater Nanticoke communities were dispersed from Delaware and Maryland by 1750, and the last tribal speaker of the Nanticoke, Lydia Clark, died before 1850. Some migrated as far north as Canada and were assimilated into other tribes, and some were relocated west. The remnant that remains in the area holds an inter-tribal pow-wow each September in Sussex County. 
With Indians almost entirely displaced from the eastern states, the national debate focused on slavery and abolition, and whether new states entering the Union should be free or slave states. The Missouri Compromise of 1820 designated Mason and Dixon’s west line as the national divide between the "free" and "slave" states east of the Ohio River, and the line suddenly acquired new significance. Delaware’s 1776 state constitution had banned the importation of slaves, and state legislation in 1797 effectively stopped the export of slaves by declaring exported slaves automatically free. The state’s population in the 1790 census was 15 percent black, and only 30 percent of these were free blacks. By the 1820 census, 78 percent of Delaware’s blacks were free. By 1840, 87 percent were free. Both escaped slaves and legally free blacks living anywhere near the line were vulnerable to kidnapping by slave-catchers operating out of Maryland. One of the most famous kidnappers was Patty Cannon, a notoriously violent woman who, with her son-in-law Joe Johnson, ran a tavern on the Delaware-Maryland line near the Nanticoke River. The Cannon-Johnson gang seized blacks as far north as Philadelphia and transported them south for sale, hiding them in her house or supposedly shackled to trees on a small island in the Nanticoke River, and then transporting across the Woodland ferry or loading them onto a schooner to be shipped down the Nanticoke for eventual sale in Georgia. In 1829 Cannon and Johnson were arrested and charged with kidnapping, and Cannon was charged with several murders, including the murder of a slave buyer for his money. Johnson was flogged, and Cannon died in jail before trial, reportedly a suicide by poison. Her skull is kept in a hatbox at the Dover Public Library. It does not circulate via inter-library loan. For free blacks in Delaware, freedom was quite restricted. Blacks could not vote, or testify in court against whites. After Nat Turner’s 1831 rebellion in Virginia triggered rumors and panic about a black insurrection in Sussex County, the Delaware legislature banned blacks from owning weapons, or meeting in groups larger than twelve. Through the first half of the 19th century the Mason-Dixon Line represented the line of freedom for tens of thousands of blacks escaping slavery in the south. The Underground Railroad provided food and temporary shelter at secret way-stations, and guided or sometimes transported northbound slaves across the Line. The spirituals sung by these slaves included coded references for escapees: the song "Follow the drinking gourd" referred to the Big Dipper from which runaways could sight the North Star; the River Jordan was the Mason-Dixon Line; Pennsylvania was the Promised Land. After the Fugitive Slave Act of 1850 allowed slave owners to pursue their escaped slaves into the north, the line of freedom became the Canadian border, "Canaan" in the spirituals, and abolitionists created Underground Railroad stops all the way to Canada. Thomas Garrett, a member of Wilmington’s Quaker community, was one of the most prominent conductors on the Underground Railroad. In 1813, while Garrett was still living in Upper Darby, Pennsylvania, a free black employee of his family’s was kidnapped and taken into Maryland. Garrett succeeded in rescuing her, but the experience reportedly made him a committed abolitionist, and he dedicated the next fifty years of his life to helping others escape slavery. 
Garrett moved to Wilmington in 1822 and lived at 227 Shipley Street, where he ran a successful iron business. He befriended and helped Harriet Tubman as she brought group after group of escaping slaves over the line; his house was the final step to freedom. Garrett was caught in 1848, prosecuted and convicted, forthrightly telling the court he had helped over 1,400 slaves escape. Judge Roger Taney ordered Garrett to reimburse the owners of slaves he was known to have helped, and it bankrupted him, but he continued in his work, assisting approximately 1,300 more slaves to freedom by 1864. Taney, by then already Chief Justice of the US Supreme Court, went on to write the majority decision in Dred Scott v. Sandford (1857), declaring that no blacks, slave or free, could ever be US citizens, and striking down the Missouri Compromise. In the buildup to the Civil War, Delaware was a microcosm of the country, sharply split between abolitionists in New Castle County and pro-slavery interests in Sussex County. A series of abolition bills were defeated in the state legislature by a single vote. Like other Union border states, Delaware remained a slave state during the war, although its slave population had fallen to only a few hundred. President Abraham Lincoln offered a federal reimbursement of $500 per slave (far more than their market value) to Delaware slave-owners if Delaware would abolish slavery, but the state legislature stubbornly refused. Lincoln's January 1st 1863 Emancipation Proclamation abolished slavery in the Confederate states, but not in the Union border states. After the Civil War, Maryland, Pennsylvania, West Virginia, Ohio, Indiana, Illinois, Missouri and Arkansas outlawed slavery on or before their individual ratifications of the Thirteenth Amendment in 1865. New Jersey had technically abolished slavery in 1846, although it only ratified the Amendment in 1866. So as the Thirteenth Amendment neared ratification by 27 of the 36 states on December 6th 1865, America's last two remaining slave states were Kentucky and Delaware. Delaware didn't ratify the Thirteenth, Fourteenth or Fifteenth amendments until 1901. In the middle of the 20th century the Mason-Dixon Line was the backdrop for one of the five school desegregation cases that were eventually consolidated into the US Supreme Court's Brown v. Board of Education of Topeka case. Until 1952, public education in Delaware was strictly segregated. Since the late 19th century, property taxes paid by whites in Delaware had funded whites-only schools, while property taxes paid by blacks funded blacks-only schools. In the 1910's, P.S. duPont had financed the construction of schools for black children throughout Delaware, and effectively shamed the Legislature into providing better school facilities for whites as well. There was only one high school for black children in the entire state—Howard High School. Persistent income disparities between blacks and whites ensured persistent inequalities in public education. In 1950 the Bulah family had a vegetable stand at the corner of Valley Road and Limestone Road, and Shirley Bulah attended Hockessin Colored Elementary School 107, which had no bus service. The bus to Hockessin School 29, the white school, went right past the Bulah farm, and the Bulahs merely asked if Shirley could ride the bus to her own school. But Delaware law prohibited black and white children on the same school bus. Shirley's mother Sarah Bulah contacted Wilmington lawyer Louis Redding, who had recently won the Parker v.
University of Delaware case forcing the University to admit blacks. In 1950, the Wilmington chapter of the NAACP had launched an effort to get black parents in and around Wilmington to register their children in white schools, but the children were turned away. Redding chose the Bulahs as plaintiffs in one of two test cases, and convinced Sarah Bulah to sue in Delaware’s Chancery Court for Shirley’s right to attend the white school (Bulah v. Gebhart). Parents of eight black children from Claymont filed a parallel suit (Belton v. Gebhart). The complaints argued that the school system violated the "separate but equal" clause in Delaware’s Constitution (taken from Plessy v. Ferguson) because the white and black schools clearly were not equal. Redding knew that a court venue on the Mason-Dixon Line, with its local legacies of slavery and abolitionism, would be most likely to support integration. He argued the cases pro bono and the Wilmington NAACP paid the court costs. In 1952, Judge Collins Seitz found that the plaintiffs’ black schools were not equal to the white schools, and ordered the white schools to admit the plaintiff children. The Bulah v. Gebhart decision did not challenge the "separate but equal" doctrine directly, but it was the first time an American court found racial segregation in public schools to be unconstitutional. The state appealed Seitz’s decision to the Delaware Supreme Court, where it was upheld. The state’s appeal to the US Supreme Court was consolidated into the Brown v. Board case, which also upheld the decision. The town of Milford, Delaware, had riots when it integrated its schools immediately after the Brown decision. Elsewhere in Delaware, school integration proceeded slowly; the resistance to it was passive but pervasive. A decade after Brown, Delaware still had seventeen blacks-only school districts. As Wilmington’s schools were integrated, upscale families, both black and white, were moving to the suburbs, leaving behind high-poverty, black-majority city neighborhoods. Wilmington’s public school system, now serving a predominantly black, low-income population, was mired in corruption and failure. Following a second round of civil rights litigation in the 1970’s, the US Third Circuit court imposed a desegregation plan on New Castle County in 1976, under which schools in Wilmington would teach grades 4, 5 and 6 for all children in the northern half of the county, while suburban schools would teach grades 1-3 and 7-12. Wilmington children would have nine years of busing to the suburbs; suburban children would have three years of busing to Wilmington. After the 1976 desegregation order, a spate of new private schools popped up in the suburbs. One third of all schoolchildren living within four districts around Wilmington now attend non-public schools. In 1978 the Delaware legislature split the northern half of New Castle County into four large suburban districts, each to include a slice of Wilmington. The Brandywine, Red Clay Consolidated and Colonial districts are contiguous to Wilmington and serve adjacent city neighborhoods. The Christina district has two non-contiguous areas: the large Newark-Bear-Glasgow area and a high-poverty section of Wilmington about 10 miles distant on I-95. In 1995, the federal court lifted the desegregation order, declaring that the county had achieved "unitary status." Wilmington’s poorest communities remain predominantly black, but the urbanized Newark-New Castle corridor now has far more minority households than Wilmington. 
The school districts are committed to reducing black-white school achievement gaps as mandated under the federal No Child Left Behind Act (the 2001 reauthorization of the Elementary and Secondary Education Act). Louis Redding and Collins Seitz both died in 1998. The city government building at 800 North French St. in Wilmington is named in Redding's honor. The 800-acre triangular area known as the Wedge lies below the eastern end of Mason and Dixon's West line, bounded by Mason and Dixon's North line on the west and the 12-mile arc on the east. It is located west of Wedgewood Road, and is intersected by Rt. 896 in Delaware just before the road crosses the very northeast tip of Maryland into Pennsylvania. Although the Delaware legislature had representatives from the Wedge in the mid-19th century, jurisdiction over the Wedge remained ambiguous. A joint Delaware-Pennsylvania commission assigned it to Delaware in 1889, and Pennsylvania ratified the assignment in 1897, but Delaware, sensitive to Wedge residents who considered themselves Pennsylvanians, didn't vote to accept the Wedge as part of Delaware until 1921. Congress ratified the compact in 1921. Through most of the 19th century the Wedge was a popular hideout for criminals, and a place for duels, prize-fighting, gambling and other recreations, conveniently outside any official jurisdiction. A historic marker on Rt. 896 summarizes its history. An 1849 stone marker replaced the stone Mason and Dixon used to mark the intersection of the North line with the West line; when the Wedge was annexed to Delaware in 1921 this became the MD/PA/DE tri-state marker. Until fairly recently, the area around Rising Sun, Maryland, had sporadic activity from a local Ku Klux Klan group whose occasional requests for parade permits attracted a lot of media attention. In his book Walkin' the Line, William Ecenbarger recounts watching a Klan rally in Rising Sun in 1995. Local Klan leader Chester Doles served a prison sentence for assault, and then left Cecil County for Georgia. Whatever Klan is left in this area has been very quiet since. The Mason-Dixon Trail is a 193-mile hiking trail, marked in light blue paint blazes. It begins at the intersection of Pennsylvania Route 1 and the Brandywine River in Chadds Ford, PA; runs southeast through Hockessin and Newark, DE; eastward through Elkton to Perryville and Havre de Grace, MD (although pedestrians are not allowed on the Rt. 40 bridge!); then northward up the west side of the Susquehanna into York County, PA, and proceeding northwest through York County through Gifford Pinchot State Park to connect with the Appalachian Trail at Whiskey Springs. The Mason-Dixon Trail does not actually follow any line that Mason and Dixon surveyed, but it's an interesting trail over diverse terrain. The stone markers used in the Mason-Dixon survey were quarried and carved in England and shipped to America. The ordinary mile markers placed by the survey party are inscribed with "M" and "P" on opposite sides. Every fifth mile was marked with a crownstone with the Calvert and Penn coats of arms on opposite sides. The locations of many of these markers are noted on USGS 7.5-minute topographic maps. Roger Nathan and William Ecenbarger have both explored these markers and written readable histories of them. Many markers are lost, but some are still accessible (with landowner permission). Cope, Thomas D. 1949. Degrees along the west line, the parallel between Maryland and Pennsylvania.
Proceedings of the American Philosophical Society 93(2):127-133 (May 1949). Thomas Cope, a physics professor at the University of Pennsylvania, published a number of articles about the survey.
Cummings, Hubertis Maurice. 1962. The Mason and Dixon line, story for a bicentenary, 1763-1963. Commonwealth of Pennsylvania, Dept. of Internal Affairs, Harrisburg, PA. Written for the bicentennial of the survey, this book provides a good mix of technical detail and narrative.
Danson, Edwin. 2001. Drawing the line: How Mason and Dixon surveyed the most famous border in America. John Wiley & Sons, New York. Provides the clearest technical explanations of the survey along with a readable narrative of it.
Ecenbarger, William. 2000. Walkin' the line: a journey from past to present along the Mason-Dixon. M. Evans, New York. Ecenbarger describes his tour of the tangent, north and west lines, and intertwines local vignettes of slavery and civil rights with brief descriptions of the actual survey.
Latrobe, John H. B. 1882. "The history of Mason and Dixon's line" contained in an address, delivered by John H. B. Latrobe of Maryland, before the Historical Society of Pennsylvania, November 8, 1854. G. Bower, Oakland, DE.
Mason, A.H. (ed.). 1969. Journal of Charles Mason [1728-1786] and Jeremiah Dixon [1733-1779] (Memoirs of the American Philosophical Society, vol. 76). American Philosophical Society, Philadelphia. The survey journal, written in Mason's hand, was lost for most of a century, turning up in Halifax, Nova Scotia, in 1860; the original is now in the National Archives in Washington DC. A transcription edited by A. Hughlett Mason was published in 1969 by the American Philosophical Society. The journal is mostly technical notes of the survey, with letters received and comments by Mason on his travels interspersed. An abridged fair copy of the journal, titled "Field Notes and Astronomical Observations of Charles Mason and Jeremiah Dixon," is in Maryland's Hall of Records in Annapolis.
Nathan, Roger E. 2000. East of the Mason-Dixon Line: a history of the Delaware boundaries. Delaware Heritage Press, Wilmington, DE. Focuses on the history of Delaware's boundaries, in which Mason and Dixon played the largest part.
Pynchon, Thomas. 1997. Mason & Dixon. Henry Holt, New York. Pynchon's novel mixes historically accurate details with wild fantasies. Mason and Dixon are portrayed as naïve, picaresque characters, the Laurel and Hardy of the 18th century, surrounded by an odd cast including a talking dog, a mechanical duck in love with an insane French chef, an electric eel, a renegade Chinese Jesuit feng-shui master, and a narrator who swallowed a perpetual motion watch. Mason and Dixon personify America's confused moral compass, slowly realizing how their survey line defiles a wild, innocent landscape, and opens the west to the violence and moral ambiguities that accompany "civilization."
Sobel, Dava. 1996. Longitude: the true story of a lone genius who solved the greatest scientific problem of his time. Walker & Co., New York.
http://www.udel.edu/johnmack/mason_dixon/
Suppose that we are asked to find the area enclosed by a circle of given radius. A simple way to go about this is to draw such a circle on graph paper and count the number of small squares within it. Then

area contained ≈ number of small squares within circle × area of a small square.

(Figure: a circle drawn on graph paper; the area inside is approximately the number of small squares times the area of each small square.)

We notice that if we doubled all the lengths involved then the new circle would have twice the radius and each of the small squares would have four times the area. Thus

area contained in new circle ≈ number of small squares × area of a new small square = number of small squares × 4 × area of an old small square ≈ 4 × area contained in old circle.

(Figure: if we doubled all the lengths involved then the new circle would have 4 times the area contained in the old circle.)

By imagining what would happen if we used finer and finer graph paper, we conclude that doubling the radius of a circle increases the area by a factor of 4. The same argument shows that, if we multiply the radius of a circle by k, its area is multiplied by k². Thus, writing π for the area contained in a circle of radius 1,

area contained in a circle of radius r = πr².

We can now play another trick. Consider our circle of radius r as a cake and divide it into a large number of equal slices. By reassembling the slices so that their pointed ends are alternately up and down, we obtain something which looks like a rectangle of height r and width half the length of the circle. The area covered by a cake is unchanged by cutting it up and moving the pieces about. But the area of a rectangle is width × height. So

(length of circle / 2) × r = πr², and hence the length of the circle is 2πr.

Approximating Pi

One approximation goes back to the ancient Greeks, who looked at the length of a regular polygon inscribed in a circle of unit radius. As we increase the number of sides of the polygon, the length of the polygon will get closer and closer to the length of the circle, that is, to 2π. Can you compute the total length of an inscribed square? Of an inscribed equilateral triangle? Many of the ideas in this article go back to Archimedes, but developments in mathematical notation and computation enabled the great 16th century mathematician Vieta to put them in a more accessible form. (Among other things, Vieta broke codes for the King of France. The King of Spain, who believed his codes to be unbreakable, complained to the Pope that black magic was being employed against his country.)

(Figure: we can approximate the circle with an n-sided polygon, in this case a hexagon with n = 6.)

A regular n-sided polygon inscribed in a circle of unit radius has total length 2n sin(180°/n), so half of that length,

s_n = n sin(180°/n),

is an approximation to π. If you calculated the length of the perimeter for an inscribed square or triangle, does our general formula for an n-sided polygon agree for n = 3 and 4? If you try to use your calculator to calculate s_3, s_4, s_5, ..., you'll observe that the results aren't a very good approximation for π. There are two problems with our formula for π. The first is that we need to take n large to get a good approximation to π. The second is that we cheated when we used our calculator to evaluate sin(180°/n), since the calculator uses hidden mathematics substantially more complicated than occurs in our discussion.

Doubling sides

How can we calculate sin(180°/n)? The answer is that we cannot with the tools presented here. However, if instead of trying to calculate it for all n, we concentrate on n = 6, 12, 24, 48, ... (in other words we double the number of sides each time), we begin to see a way forward. Ideally, we would like to know how to calculate s_2n from s_n.
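A minimal R sketch of the naive estimate just described may help make these two problems concrete. It evaluates n sin(180°/n) directly with R's built-in sine (working in radians, so sin(pi/n)), which is exactly the kind of hidden machinery the text above calls cheating; the function name is mine.

# Half the perimeter of a regular n-gon inscribed in a unit circle is n*sin(pi/n),
# which approaches pi as the number of sides n grows.
half_perimeter <- function(n) n * sin(pi / n)

for (n in c(3, 4, 6, 12, 96, 1000)) {
  cat(sprintf("n = %4.0f   approximation = %.8f\n", n, half_perimeter(n)))
}
cat(sprintf("pi           = %.8f\n", pi))

Even n = 96, the polygon Archimedes used, gives only three or four correct figures; this slow convergence is what the doubling idea below is meant to address.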
We cannot quite do that, but we do know the formulae (from the standard trigonometric identities)

cos(x/2) = √((1 + cos x)/2) and sin(x/2) = sin x / (2 cos(x/2)).

We can make the pattern of the calculation clear by writing things algebraically. Let us take c_n = cos(180°/n) and write s_n = n sin(180°/n) as before.

Vieta's formula and beyond

Although Vieta lived long before computers, this approach is admirably suited to writing a short computer program. The definitions of c_n and s_n lead to the rules

c_2n = √((1 + c_n)/2) and s_2n = s_n / c_2n,

so that, starting from c_6 = √3/2 and s_6 = 3 and doubling the number of sides again and again, each new approximation to π costs nothing worse than a square root. Nowadays we leave computation of square roots to electronic calculators, but, already in Vieta's time, there were methods for calculating square roots by hand which were little more complicated than long division. Vieta used his method to calculate π to 10 decimal places. We have shown (starting the doubling from a degenerate two-sided "polygon") that

2/π = √(1/2) × √(1/2 + (1/2)√(1/2)) × √(1/2 + (1/2)√(1/2 + (1/2)√(1/2))) × ...

From an elementary point of view this formula is nonsense, but it is beautiful nonsense and the theory of the calculus shows that, from a more advanced standpoint, we can make complete sense of such formulae. (The Rhind Papyrus, from around 1650 BC, is thought to contain the earliest approximation of pi, as 3.16.) The accuracy of this approximation increases fairly steadily, as you will have seen if you used your calculator to compute the successive values of s_n. Roughly speaking, the number of correct decimal places after the nth step is proportional to n. Other methods were developed using calculus, but it remained true for these methods that the number of correct decimal places after the nth step was proportional to n. In the 1970's the mathematical world was stunned by the discovery by Brent and Salamin of a method which roughly doubled the number of correct decimal places at each step. To show that it works requires hard work and first year university calculus. However, the method is simple enough to be given here. Take a_0 = 1 and b_0 = 1/√2. Let

a_{k+1} = (a_k + b_k)/2,  b_{k+1} = √(a_k b_k),  t_{k+1} = t_k - p_k (a_k - a_{k+1})²,  p_{k+1} = 2 p_k,

with t_0 = 1/4 and p_0 = 1; then (a_k + b_k)²/(4 t_k) converges very rapidly to π. (A short computational sketch of both the square-root doubling and this iteration follows the article.) Since then even faster methods have been discovered. Although π has now been calculated to a trillion (that is to say 1,000,000,000,000) places, the hunt for better ways to do the computation will continue.

About the author

Tom Körner is a lecturer in the Department of Pure Mathematics and Mathematical Statistics at the University of Cambridge, and Director of Mathematical Studies at Trinity Hall, Cambridge. His popular mathematics book, The Pleasures of Counting, was reviewed in Issue 13 of Plus. He works in the field of classical analysis and has a particular interest in Fourier analysis. Much of his research deals with the construction of fairly intricate counter-examples to plausible conjectures.
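As promised above, here is a minimal R sketch of both iterations. The first loop is the square-root-only doubling, started, as in Vieta's product, from a degenerate two-sided "polygon"; the second is the Brent-Salamin method in its usual Gauss-Legendre form, which I have assumed since the article's own presentation was lost in extraction. All variable names are mine.

# Doubling the number of sides using only square roots (Vieta's approach).
# Track c = cos(180/n degrees) and s = n*sin(180/n degrees); doubling n uses
# the half-angle rule for c and then divides s by the new c.
c_val <- 0   # cos(90 degrees): start from a two-sided "polygon"
s_val <- 2   # 2 * sin(90 degrees)
for (step in 1:20) {
  c_val <- sqrt((1 + c_val) / 2)
  s_val <- s_val / c_val
  cat(sprintf("sides = %9.0f   approximation = %.15f\n", 2^(step + 1), s_val))
}

# Brent-Salamin (Gauss-Legendre) iteration: the number of correct digits
# roughly doubles at every step (standard form assumed).
a <- 1; b <- 1 / sqrt(2); t_k <- 1 / 4; p_k <- 1
for (step in 1:4) {
  a_new <- (a + b) / 2
  b     <- sqrt(a * b)
  t_k   <- t_k - p_k * (a - a_new)^2
  p_k   <- 2 * p_k
  a     <- a_new
  cat(sprintf("step %d   approximation = %.15f\n", step, (a + b)^2 / (4 * t_k)))
}

Twenty doublings give about a dozen correct digits, roughly one extra digit per step, while three or four Brent-Salamin steps already exhaust ordinary double-precision arithmetic, matching the contrast the article draws between the two kinds of method.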
http://plus.maths.org/content/os/issue43/features/korner/index
The Delivery System

Our first objective is to describe why the orbits of Uranus and Neptune came to be located 1.8 and 2.8 billion miles from the Sun. These distances are 19.2 and 30.1 a.u. from the Sun. Some groundwork must be laid. Earlier, evidence was presented in the form of twin spins that both planets co-orbited in space somewhere beyond 900 a.u. from the Sun. How did planets such as Neptune and Uranus relocate from there to "here" in the visible, "inner" solar system? Our question is one of relocation, not one of creation. A delivery by implication involves either a one-time delivery or, more often, a delivery system. If a package is dropped on the doorstep with a "United Parcel" delivery label, that implies a delivery system. If such deliveries happen repeatedly, it implies a repeating route, and perhaps a periodic delivery schedule. Postal deliveries fall into this periodic pattern. So do deliveries in our cosmological theory.

A Comparison with Early 20th Century Cosmologies

In the 1910's, it occurred to James Jeans that the total of the planetary mass was only 0.14% of the Sun's mass. Yet these nine planet systems contained the bulk of the angular momentum in the Solar System. In fact, Jupiter alone has 60%. The four giant planets carry 98% of the angular momentum of all matter this side of Pluto. The gigantic Sun carries only 2%. The law of conservation of angular momentum would seem to suggest that if the Sun and its plasma were the genesis of the Solar System, then the Sun should retain most of the angular momentum that is observed. The Sun isn't even close to conforming. This is why Jeans and others began searching for an extra-solar source, a passing star perhaps. This principle is demonstrated by an ice skater, who spins with her arms extended. As her arms are withdrawn, she spins all the faster. But if that were an example of the Sun, the Sun is one of the most slowly spinning bodies of the Solar System. Its photosphere requires 26.8 days for a rotation at its equator. Even more strange, the Sun's photosphere displays even slower rotation rates in its higher latitudes. At 60° latitude, it spins once in 30.8 days and at 75° latitude, it spins in 31.8 days. Although the Nebular Hypothesis is still being taught as cosmological fact to many students, Press and Siever have published a damaging assessment of it. To address this defect, Moulton and Chamberlin proposed that the Sun was approached by a much larger star, to a proximity of 4 or 5 billion miles. That huge star allegedly pulled out from the Sun a filament of material, providing material that eventually condensed into the planets and their satellites, with their spin rates, etc. In the 1930's, Henry N. Russell recognized further defects. Instead, Russell postulated the Sun had been approached by a pair of stars, a binary system. Together they did the job whereas, he felt, Jeans' theory and Moulton's both failed. Note that early 20th century cosmologists continued to assume that the planets were formed from solar ejecta, billions of years ago. This was a major conceptual mistake. Our concept of a delivery system contains the idea that the planets were delivered to the Sun. They were delivered from a region more distant than 1,000 a.u. The planets of this Solar System never were part of a gaseous filament pulled out of the Sun (or any other star). Our concept is different, and the details of our cosmology are far different from the "standard fare" of the 19th and 20th centuries. DIFFERENCE # 1.
One difference is that their approaching star, or binary pair of stars, approached from interstellar space, from beyond the nearer stars. We offer that the Sun was approached from a region less than 5% of the distance to the nearest star, that is, from between 1,000 and 2,000 a.u., or possibly from 3,000 a.u. DIFFERENCE # 2. A second difference is that our delivery system body was not, and is not, luminous. Their concept was one, or two co-rotating, luminous stars, at least one of which was larger than the Sun. Had such a luminous star been in the neighborhood four billion years ago, it could still be seen and its path charted. No trace of such can be seen in the Milky Way. DIFFERENCE # 3. A third difference is that our delivery system body is ONLY 3% TO 4% as massive as the Sun, + or - 1%. The view of Jeans, Russell and Lyttleton was that the approaching star or pair of stars was much heftier than the Sun. DIFFERENCE # 4. A fourth difference is that in our delivery system, the delivering body came much closer to the Sun. They suggested such a theoretical approach was several billion miles from the Sun, and considerably beyond Neptune's orbit. Neptune is three billion miles distant. Evidence exists that the intruder approached as close as 15,000,000 miles from the Sun, more than twice as close as Mercury. This evidence will be cited and presented, along with its ramifications, in chapters 7 through 10. DIFFERENCE # 5. A fifth difference is conceptual. Traditional 18th, 19th and 20th century cosmologies have the Sun as the mother of the planets in a natal sense. In a natal sense, the planets are like afterbirth material which, for some reason, the Sun expelled. In contrast, we offer that the Sun is the mother of the nine planets only in an adoptive sense, not a natal sense. Some writers, including science fiction authors, have speculated on Solar System disturbances coming in from beyond the realm of visibility, beyond the orbit of Pluto. Those writers have given their fictitious intruders such names as "Planet X" and "the Nemesis Star", etc. We choose the name "Little Brother" since it evidently was 3% to 4% as massive as the Sun, and it penetrated deeply into the hot, inner region. DIFFERENCE # 6. A sixth difference, if our hypothesis is correct, is that Little Brother continues to orbit the Sun. And on its own schedule, whatever that is, it will return in due time. When it returns, it will realign any planet that gets in its path. And when it returns, it could bring in a new package of planets and drop them off in the Inner Solar System. Evidence, not science fiction writers, indicates Little Brother exists. We choose this name because the Sun is "Big Brother." Its "nickname" is "L. B." This nickname has nothing to do with any prominent politician from Texas. If the Sun stripped the planets away from Little Brother, and if Little Brother delivered them, then that capture must have followed certain mathematical constraints. One constraint is that the planets Mercury through Neptune were all dropped off on the same plane, the orbit plane of "L. B." A second constraint is the "Radius of Action" principle, the zone of control of Little Brother. As "L. B." approached the Sun, this zone inexorably kept shrinking. And as it returned to its aphelion, perhaps 1,000 to 3,000 a.u. distant (5 to 15 light-days), the zone expanded again. How expansive would be Little Brother's "zone of control" out there where the Sun's attraction is so faint?
How extensive is that "zone of control" which allowed Neptune and Uranus, at three billion miles from the Sun, to begin to get away? The Sun's mass is 332,000 times as massive as the Earth, and 1,050 times as massive as Jupiter. Thus, Little Brother, if our analysis is correct, is about 30 to 40 times as massive as the giant Jupiter. Control versus capture in our Solar System follows a principle which Gerard Kuiper called the "radius of action". As geographers and engineers, we prefer to call it the "zone of control". It is the same thing. For instance, in our present orbit, the Earth's zone of control, its radius of action, is out to 750,000 miles. Beyond this distance, theoretically, the Earth would automatically lose any satellite forever to the Sun. At this distance, a satellite merely exchanges its orbital prime focus, the Earth, for the Sun. For math buffs, Kuiper's equation is an approximation, not a rule, not a mathematical law. It is an approximation that merits some elaboration and qualifications. The approximation of a zone of control anywhere in this solar system is

RA = a × μ^(2/5)

In this equation, RA is the radius of action, μ is the mass of the planet (here, Little Brother) divided by the total of the mass of the Sun and Little Brother, and a is the planet's distance from the Sun, measured in astronomical units of 93,000,000 miles.

The Capture of the Neptune-Uranus System

Neptune and Uranus had to co-orbit in two long, narrow, highly eccentric orbits. Both revolved around a "barycenter," a point that is the common center of mass. Most of the time, Neptune and Uranus were co-orbiting with a considerable distance in between; but with such highly eccentric elliptical orbits, fast flybys and sharp spasms of catastrophism occurred every few years. As was discussed earlier, each flyby increased the spin rates of each planet, and the increase was in a reciprocal manner. We suggest that when these two planets were co-orbiting, their barycenter orbited Little Brother at a distance of about 600,000,000 miles. At about 2,500,000,000 miles (or 27 a.u.) from the Sun, the Sun stripped this binary away from "L. B." It captured them and dispersed or separated them. Uranus was sent nearer, Neptune farther. In this cosmology, Little Brother performed the job of a delivery service. The Sun proceeded to separate Neptune from Uranus and redirect them into new, virgin, capture orbits. One ended up 1.8 billion miles from the Sun and the other 2.8 billion miles. Their twin spins are clues of their former co-orbiting relationship, when they were much, much deeper in dark, frigid, remote, debris-strewn space. When were the planets Neptune and Uranus dropped off at their present location? That is the $64,000 question for which we do not have the answer. However, there is no evidence for such an event being billions of years ago. There is evidence friendly to the thought that they were delivered less than 100,000 years ago. This evidence involves the capture of other planets, satellites and icy ring systems. We are tempted to get ahead of our story. The story of the Sun's capture of the Neptune system and the Uranian system is story 6 in our new skyscraper cosmology. It involves planetary catastrophism deep in space, before capture by the Sun. We cannot agree that these planets are so far out because of "chance" or "coincidence". They are so remote from the Sun because they were co-orbiting at a similarly remote distance from Little Brother. That "similarly remote" distance from L. B. was some 600,000,000 miles, compared to their present remoteness of 1.8 and 2.8 billion miles.
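A small R sketch makes the radius-of-action idea concrete. The two-fifths-power form used here is my assumption (it is the standard sphere-of-influence exponent usually attached to Kuiper's formula); note that it yields roughly 580,000 miles for the Earth rather than the 750,000 miles quoted above, so the author's coefficient may differ slightly. The function name is illustrative.

# Kuiper-style "radius of action": RA = a * mu^(2/5), with
#   mu = m / (M_sun + m)  and  a = the body's distance from the Sun in miles.
# (The 2/5 exponent is assumed.)
radius_of_action_miles <- function(a_miles, mass_ratio_to_sun) {
  mu <- mass_ratio_to_sun / (1 + mass_ratio_to_sun)
  a_miles * mu^(2 / 5)
}

# The Earth at 1 a.u., with 1/332,000 of the Sun's mass:
radius_of_action_miles(93e6, 1 / 332000)       # about 5.8e5 miles

# A "Little Brother" of 3.5% solar mass out near 1,000 a.u.:
radius_of_action_miles(1000 * 93e6, 0.035)     # about 2.4e10 miles, roughly 260 a.u.

On these assumptions, a body of 3% to 4% solar mass parked 1,000 a.u. out would control a region hundreds of a.u. across, while that zone collapses rapidly as the body dives toward the Sun, which is the stripping sequence this chapter describes.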
The Capture of the Jupiter-Saturn System

In a like manner, Little Brother once "owned" the Jupiter-Saturn binary, a co-orbiting pair whose barycenter was perhaps 200,000,000 miles from "L. B." That is about as far as the asteroid belt is today from the Sun. When Little Brother approached the Sun to a distance of some 600,000,000 miles to 700,000,000 miles (6.5 to 7.5 a.u.), the gravitational competition increased. It increased to the threshold where "L.B." was no longer able to retain this second co-orbiting binary either. First, Saturn was stripped from Jupiter, and then from L. B. also. Following that loss, Jupiter was next, stripped away from L. B. as it inexorably kept approaching the Sun. As it was with Saturn, so also with Jupiter; each planet was wrenched from L.B. together with its satellite systems. As Jupiter and Saturn were lost by the Little Brother system, that incoming system was stripped of about 0.3% of its mass; it also lost a similar amount of energy. Its orbit shifted just a wiggle. The L. B. system, now separated, also lost a little angular momentum. That angular momentum in the bodies of Jupiter and Saturn was relocated inward from one to three thousand a.u. all the way down to five to ten a.u. The two captures by the Sun, Neptune-Uranus and Jupiter-Saturn, did not necessarily occur during the same incoming flyby of Little Brother. But it is a distinct possibility. There is evidence that Little Brother has made either one or just a very few such "delivery trips" to the doorstep of the Sun. That evidence will be presented in a few chapters. The evidence of a paucity of trips by L. B. around the Sun is critical to our understanding as to how, and how recently, this Solar System was organized. Thus we will present evidence suggesting that the Solar System is recent, less than a million years old, less than a hundred thousand years perhaps. From the gradualist dogma of four billion years, this could be a reduction in time requirements of 99.975%. The question becomes, "Did Little Brother acquire the Jupiter-Saturn binary during the same orbit into deep space that it acquired the Neptune-Uranus binary?" If the answer were "yes", it follows that the Sun acquired all four planet systems within the same score of years. If the answer were "no", it follows that they were captured in different eras. Which is more probable? At this point in time, we do not know. What we are sure of is that Jupiter and Saturn once co-orbited as a binary pair in remote space. Their twin spins are a solid pair of clues. Coupled with the spins of Uranus and Neptune, we now gather two pairs of solid clues. The Earth's acquisition of the Moon, likely in remote space, is a fifth clue that our "capture cosmology" is the best approach. The seventh story of our catastrophic cosmology is the acquisition of their modern orbits by Jupiter and Saturn. It will be noticed that Uranus and Neptune were closely related in remote space. Today, Uranus and Neptune are still fairly closely related. They are the seventh and eighth closest planets. They still are next to each other, though not as closely as when under the dominion of Little Brother. Jupiter and Saturn are our fifth and sixth planets out from the Sun. Although they were separated by the Sun, they also, as Inner Solar System planets, still remain next to each other, although they are not as close as when under Little Brother's dominion.
Their place in the solar system, next to each other, fifth and sixth, is no accident, no coincidence, not a result of chance. The neighboring relationships of both Neptune-Uranus and Saturn-Jupiter are vestiges of the former age when they co-orbited and created reciprocal spins.

A Dating Clue: The Icy Rings of Saturn

The ice in the icy rings of Saturn does effervesce away constantly into space. Various estimates have been made of the rate of thinning of these icy rings. So far as we are aware, there is no consensus. It could make an exciting study to examine the celestial brilliance of the rings of Saturn as they were on early photographic plates over 100 years ago. Comparing that to the present reflectivity, one could then estimate the minute rate of effervescence, the thinning of the ring system. Estimates have been made as to how long into the future Saturn's rings will last. Those estimates range from 10,000 years to 100,000 years for the remaining life span of those icy rings. The icy rings had a genesis when an icy satellite, an ice ball, was rerouted too close to Saturn. "Too close" is 2.5 radii, as was defined by Roche's Limit. In 1850, Edouard Roche studied the tidal effects of two planets, or a planet and a moon, that theoretically were on a collision course. He found that, due to sudden internal tidal stresses that would be generated, the smaller of two planets on a collision course would fragment before collision. He calculated the distance of fragmentation at 2.44 radii. He assumed two bodies of equal density, and with circular orbits. Saturn has a radius of almost 36,000 miles. Thus, for an ice ball, and given Saturn's low density, its "Roche Limit" is about 85,000 miles. This distance is also the outer boundary of the icy ring system, a confirmation of the case for an icy fragmentation. Most likely, this ice ball was redirected during a close Saturn-Jupiter flyby in the former age when they co-orbited Little Brother. The icy rings would begin to effervesce when Saturn was delivered to the Inner Solar System. We don't know how much ice originally orbited Saturn; we do know the masses of its inner ice balls, which may be similar in size to the fragmented ice ball. In this way, the icy rings of Saturn suggest some degree of recentness for Saturn's delivery to its present orbit. After more study, if the estimate of 100,000 years for the present longevity of Saturn's rings holds up, then it points to a Solar System that is "shockingly youthful" (to gradualists). For Saturn's icy moons, it was chaos to experience a close flyby of Jupiter in the previous era. Mimas is Saturn's innermost surviving satellite, at 115,000 miles. It is just beyond the Saturnian Roche Limit. As was mentioned earlier, Mimas is an icy satellite pocked with craters and pitlets. Mimas has one crater that is one-third of its own radius. Perhaps some of its craters came from hits by icy debris from Saturn's icy fragmentation. The density of little Mimas is 1.2 compared to water, at a density of 1.0. No one knows, or even wonders, how much ice formerly was in the ring system of Saturn. Little Mimas has an orbit radius of some 115,000 miles. It has a physical radius of about 121 miles. Largely composed of ice, its volume exceeds 7,000,000 cubic miles. No one knows whether the ice ball that did fragment was of a similar size, but it is a reasonable conjecture. Mimas might be an indication of how much ice originally might have been in Saturn's icy fragmentation.
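The figures quoted in this section reduce to simple arithmetic, sketched below in R using the radius values given in the text; the small differences from the text's rounded numbers are just rounding.

# Roche limit for Saturn, using the text's ~36,000-mile radius and Roche's 2.44 factor.
saturn_radius_miles <- 36000
roche_limit_miles   <- 2.44 * saturn_radius_miles            # about 88,000 miles (text rounds to ~85,000)

# Volume of Mimas from its ~121-mile radius.
mimas_radius_miles  <- 121
mimas_volume_cu_mi  <- (4 / 3) * pi * mimas_radius_miles^3   # about 7.4 million cubic miles

print(c(roche_limit_miles = roche_limit_miles, mimas_volume_cu_mi = mimas_volume_cu_mi))

Both results line up with the text's claims: the ring system's outer edge sits near the Roche distance, and Mimas holds upwards of 7,000,000 cubic miles of ice.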
A fraction of that amount of ice settled into Saturn's resplendent icy rings. As was the case with the capture of Neptune and Uranus, the dating of L. B.'s delivery of Saturn and Jupiter to the doorstep of the Sun is a $64,000 question. The icy rings of Saturn, and their rate of effervescence, are an indication of recentness as gradualist astronomers assess time past. Saturn's rings point to our first theme, planetary catastrophism. These icy rings also point toward our second theme, a shockingly young solar system. And, Dr. Watson, the plot is about to thicken. The clue of Saturn's rings, and their rate of effervescence, is our story 8 of the new, catastrophic cosmology. That leaves some 62 stories yet to be erected.

The Capture of the Four Inner Planets: The Most Recent of the Snatches

We have modeled that the Uranus and Neptune pair formerly co-orbited L. B., as did the Saturn-Jupiter pair. The model of capture of the inner planets by the Sun also works very well if we model Venus and the Earth formerly co-orbiting in orbits of low eccentricity. Venus has a mass 81.5% of the Earth's. It has a density of 5.24 compared to 5.52 for the Earth. Venus has a polar diameter of 7,517 miles compared to the Earth's at 7,900 miles (polar). Physically, Venus is the Earth's twin. Mars, Mercury and the Moon, on the other hand, have masses, with reference to the Earth, of only .107, .055 and .012 respectively.

The Original Quintet in the Pre-Capture Era

In size, the Venus and Earth pair are virtually twin planets. However, there are no twin spins, which means no close flybys in the previous age for Venus and the Earth. On the other hand, Mars has a twin spin with the Earth, indicating a third case of repeated planetary catastrophism in the former era. The model also works wonderfully well if we assume that, in orbiting L. B., Venus co-orbited with the Earth, and, like the Moon, Venus was spinless - it showed the same face constantly to the Earth. In addition, our model functions best if Venus co-orbited the Earth in the clockwise direction (as viewed from Polaris). This is the same direction it slowly rotates today, backward. All of the nine planets today orbit the Sun counter-clockwise. And eight of the nine rotate in the counter-clockwise mode, all except Venus. Today, although Venus hardly rotates at all, what little spin it does have is backward. The model of delivery by Little Brother and capture by the Sun included a package of five small bodies - Earth, Venus, Mars, Mercury and the Moon. The model works best if the conditions of this little group of five were the following:

Subset A. Originally in deep space, the Moon orbited the Earth in a roundish orbit at a distance of roughly 250,000 miles. This is similar to today. In so doing, the Moon rotated so as always to show the same face to the Earth. It still does. From the Earth's viewpoint, the Moon does not rotate. But from the Sun's viewpoint, the Moon rotates once in 29+ days, with one side always facing the Earth. However, it has no spin axis.

Subset B. Second, in deep space, in the former age, Mercury orbited Venus, also in a roundish orbit, at a distance of some 300,000 miles. It also behaved like the Moon; it showed one face and one face only to Venus. It, too, lacked a spin axis.

Subset C. Third, in deep space, Mars orbited the Earth on a slightly different plane than the Moon. The orbit of Mars must have been long and narrow, i.e. highly eccentric.
This is evident because the two developed reciprocal twin spins, just like Neptune-Uranus and Jupiter-Saturn. Twin spins developed from those close flybys long before the two planets were delivered to the doorstep of the Sun. We model Mars in deep space in the previous age coming within 30,000 miles of the Earth but retreating to a distance of several million miles.

Subset D. In the previous era, perhaps 1,000 a.u. from the Sun, Venus and the Earth co-orbited at a distance of perhaps 950,000 miles to 1,000,000 miles from each other. Venus's slow, backward rotation today corresponds to a slow, circular, backward revolving around the Earth in the previous age. The direction of co-orbiting was clockwise for Venus. Thus, Venus orbited the Earth in the opposite direction that Little Brother orbited the Sun. This we call "retrograde" (uncommon) or "clockwise," as it is viewed from Polaris, the North Star.

Thus, in deep space, the Earth had a co-orbiting partner (Venus) plus two satellites, Mars and the Moon. Its partner, Venus, also had a non-rotating satellite some 300,000 miles distant, Mercury. This was a sticky little quintet. This quintet also was relatively close in to Little Brother (compared to Jupiter-Saturn and Neptune-Uranus). Hence, when stripping time came, if all the planet-stripping was done in one flyby, the quintet was the last system to be stripped off "L. B." and dismembered by the Sun, the Moon excepted. Hence these five comprise what some consider to be the "inner solar system" of today. All orbit within 160,000,000 miles of the Sun, compared to Jupiter's 480,000,000 miles. In this last capture process, for the sticky quintet, first the Sun separated Mars from the Earth. Shortly, perhaps within days, the Earth was divorced from its co-orbiting partner, spinless Venus. Within a couple of weeks more, as "L. B." inexorably approached the Sun, little spinless Mercury was stripped from Venus. Venus was deposited on the brink of Hell's Kitchen, and the other, Mercury, as it was separated, was sent into an orbit inside Hell's Kitchen itself, where temperatures rise to 700° and 800° F. Only the Earth-Moon system had survived the process of dismemberment and realignment around the Sun. This process of capture can be modeled. Figure 3 illustrates the last and the nearest to the Sun of the three packages of celestial captives.

The Delivery Orbit for Mars

Story 9 is about vestiges and the geographical relationships of the planets today. Mars was delivered to the inner Solar System. Evidently, it was delivered with a long, narrow, highly eccentric orbit, and it maintained that orbital trait into its second and even its third age. The "First Orbit of Mars" was when it orbited the Earth in the remote region 1,000 a.u. or more from the Sun. The "Second Orbit of Mars" ended when Mars met Astra in space, some 230,000,000 miles distant. Astra fragmented. Mars gained a little mass, and some angular momentum. But it lost some energy in the crisis. But we are getting ahead of ourselves. "The Scars of Mars" is the title for Volume II, where the details of the Second Orbit, the Third Orbit, and the Fourth Orbit of Mars are analyzed, along with the reasons for the shifts. To summarize, somewhere between 150,000,000 and 200,000,000 miles from the Sun, both Little Brother and the Earth lost Mars. About 92,000,000 miles from the Sun, Little Brother lost the Earth, and shortly, at some 67,000,000 miles, Venus (already stripped from the Earth) was also lost by "L. B."
Finally, some 35,000,000 miles from the Sun, Mercury, already stripped from Venus, was also lost by Little Brother. Little Brother was picked clean of its satellite systems. Figure 3 illustrates. Earlier, it was noted that Neptune and Uranus once co-orbited, and they are still in the same neighborhood in the Solar System, still next to each other. This is a vestige of the ancient age. Next it was noted that Saturn and Jupiter once co-orbited, and they also are still next to each other, a second vestige of the primordial age. Now, we see that Mars and the Earth once co-orbited in the remote frigid region. And when the Sun stripped them, they continued to be next to each other. This is a third vestige. In addition, Venus and the Earth co-orbited, and they are still next to each other, a fourth vestige. Finally, little Mercury was a satellite of Venus, and after it was stripped from Venus, it also settled down into an orbit next to Venus. Such is our fifth vestige. All five of these vestiges belong to geographical catastrophism, the geography of the cosmos. This series of separations from L. B., and repositionings of the various planets, may seem complicated. It isn't. There are only three bunches of planets that were separated from L. B. and (or) delivered to the Sun. First was Uranus-Neptune with satellites, second was Jupiter-Saturn with satellites, and third was Venus-Earth, of which two of the three satellites were stripped off, Mars and Mercury. Because of their greater distance from the Sun, Neptune, Uranus, Saturn and Jupiter each retained their satellite systems. But because Venus and the Earth were separated so close to the Sun (within 100,000,000 miles), two of their three satellites, Mercury and Mars, were stripped off. Today we call them planets, tiny ones to be sure. In science, there is a maxim that is almost always valid. When science is faced with two explanations for a phenomenon, a simple answer and a convoluted one, the simple answer is almost always the correct one. It is known as "Occam's Razor." In the 1300's, William of Occam (Ockham) wrote, "Entia non sunt multiplicanda praeter necessitatem." Loosely translated, it says that complications ought not to be multiplied except out of necessity. In his century, Occam was scientifically quite correct, although he was politically incorrect (and he paid the price of that age). Our relatively simple, straightforward theory of capture of three clusters of planets needs to be compared with, and contrasted to, the many convolutions, and revision after revision, of the nebular hypothesis. The nebular hypothesis, still a favorite of gradualists some 200 years later, tries to affirm that all planetary components were extruded from the Sun. More on this convoluted, "popular" (frequently taught) approach is reserved for Chapter 10.

The Placement of the Dismembered Venus-Earth Binary

Earlier, it was noted that Uranus and Neptune were separated from each other, but in that separation still remain somewhat close by each other. The same can be said for the Jupiter-Saturn binary; they too are still somewhat close by each other. Now, once again, we note that even though Venus and the Earth were separated from each other, they still remain fairly close to each other, side by side in the order of the planets. These are vestiges of delivery and capture; they are not three coincidences. The Earth formed a 360-day orbit, its "second orbit." Its first orbit was around Little Brother.
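The 360-day figure can be set against Kepler's third law with a few lines of R. This sketch assumes simple circular orbits and today's astronomical unit; the function name is mine.

# Kepler's third law: period in days for a circular orbit of radius a_miles,
# scaled from the Earth's present orbit (taken as 92,956,000 miles and 365.256 days).
period_days <- function(a_miles) 365.256 * (a_miles / 92.956e6)^1.5

period_days(92.25e6)   # about 361 days for the proposed 92,250,000-mile "second orbit"
period_days(68e6)      # about 229 days for Venus at roughly 68,000,000 miles

A period of about 361 days does fall out of the 92,250,000-mile radius quoted in the next sentence, and the roughly 68,000,000-mile orbit later assigned to Venus corresponds to a period of about 229 days.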
This second orbit around the Sun was some 92,250,000 miles from the Sun, almost 1% closer than is the present arrangement. The "second orbit" of Mars in our model has Mars in a new, capture orbit where it may have come in to a region some 64,000,000 miles from the Sun, yet returned out to approximately 230,000,000 miles. Today this region, 230,000,000 miles from the Sun, is known as the heart of the asteroid belt. As was mentioned earlier, the "Second Orbit of Mars" will be discussed at some length in the next volume. Then, and in that context, occurred its sudden conversion, or deterioration, into the infamous "Third Orbit of Mars." In the process Mars acquired an interesting display of scars on its surface. Thus, the Sun's radius of action broke up both the ancient Earth-Venus and the Earth-Mars relationships. It also broke up the Venus-Mercury relationship. But it did not break up the Earth-Moon relationship, merely because the Moon originally was so close to the Earth. The Moon never ventured out anywhere near 750,000 miles from the Earth, where it, too, could have been picked off. Figure 3 illustrates this original quintet as the group orbited L.B. Mars, the Earth-Moon system, Venus, Mercury: such was the order of separation and delivery to the Sun, or, put another way, the "order of capture" by the Sun. It was something like an adoption agency, sending five siblings all from the same family off in four different directions, allowing only the smallest of the five siblings to stay with the largest. Mars was separated from the Earth, yet it maintained its ancient long, narrow orbit. Gravities attract, and Mars continued to cross the Earth's orbit. In a sense, Mars continued to "search for" and to seek the Earth, its former major focus. But with little success, at first.

The Backward Slow Rotation of Venus

The slow, backward rotation of Venus has been a mammoth-like conundrum, probably the greatest conundrum of all for gradualists during the 20th century. Venus, deep in the Inner Solar System, together with Mercury, is right there where accretion from the Sun's ejecta was supposed to have condensed to the maximum, but instead somehow it has functioned to the minimum. This failure can no longer be swept under the rug. Somehow, some way, Venus, the morning star, rotates backward. And ever so slowly. Its backward (retrograde) rotation rate is once in 243.01 days - once in 5,832 hours. Its equatorial rotation measures only 4.05 mph, walking speed. The Earth rotates 1,037.6 mph in the other direction, prograde, counter-clockwise. This conundrum is easily solved. All that is needed is a well thought out model of capture and delivery. The key is the pre-capture era. Venus in the pre-capture era co-orbited with the Earth, at a distance of almost 1,000,000 miles. Both moved from the outer solar region to the inner region by orbiting, or revolving around, L.B. clockwise - backward, or retrograde. In addition, Venus did not rotate, but, like the Moon, its ancient "face" "looked at" the Earth constantly. The model works best if the two planets co-orbited around "L. B." in the clockwise mode, opposite to the mode "L.B." orbited the Sun. Given this model, Venus would be picked off by the Sun, separating it first from the Earth, and second from "L. B." Venus was sent into an orbit with an average radius of some 68,000,000 miles. With its retrograde direction of orbiting, at the moment of separation and capture, Venus kept facing the Earth.
After separation, even today, Venus still in fact looks back to the Earth. (Gradualists, please note.) Venus was like a lover being separated from her husband during World War II. He, the soldier, boarded the troop train, or the boat, leaving forever. She, the wife left behind on the dock, slowly threw BACKWARD a final kiss to her departing beloved. This kind of thing happened many times to GIs and their brides in the early 1940's. And many soldiers in fact never did return. The liberty (and ability) to think in terms of planetary catastrophism frees us, as cosmologists, from the straitjacket and the jail cell of gradualism. This is something like Copernicus and Kepler being freed from the tragedy of geocentricity. Copernicus and Kepler went on to provide the first two birth pangs of something entirely new to the history of man. It was the discovery of a system of natural law, which we now call "science". The tenth story in our cosmological construction is the acquisition of backward rotation by Venus. If gradualists choose to refute planetary catastrophism, this is where they should begin. This is certainly one of their biggest dilemmas, and we know a secret. It, the backward, slow rotation of Venus, will continue to be their foremost dilemma until they chuck gradualism.

The Prograde Slow Rotation of Mercury

Story 11 of our skyscraper is concerned with the ever-so-slow rotation of Mercury, prograde. Mercury rotates once in 58.65 days. At Mercury's equator, it rotates 6.7 mph. It compares to Venus, which rotates at 4.1 mph at its equator. Both have such slow rotations because they were non-rotating satellites in the primordial age, when they revolved around "L. B." Mercury was dropped off into Hell's Kitchen because it was the last of the satellites to be stripped. This means that Little Brother approached at least as close as 28,000,000 miles to the Sun, because such is Mercury's distance today. Mercury's orbit period is 87.97 days. For reasons presently unknown, Mercury's rotation and its orbit period are in 3:2 resonance. Recently, it was determined that Mercury is not a liquid planet with a crust like Venus, the Earth and Mars. It is a solid planet. This is an indication that Mercury's center was very cold when it was delivered, and it hasn't warmed up a great deal since the time of delivery.

How the Earth-Moon System Acquired Its Ancient 360-Day Orbit

The twelfth story of our celestial skyscraper of cosmology concerns how the Earth-Moon system acquired its ancient 360-day orbit, some 92,250,000 miles from the Sun. This location for the Earth is in the middle of a 15,000,000-mile slot in the Solar System. In this narrow slot, and only in this slot, water neither boils constantly (as on Venus) nor freezes permanently (as on Mars). This "slot" happens to be the one and only favorable location in the solar system where chloroplasts and chlorophyll can function. And where a planet can be greened. But we are getting ahead of our story. Whether by chance or design, the Earth was dropped into that marvelous, advantageous slot. It was 92,250,000 miles from the Sun, just 25,000,000 miles from Venus, where surface temperatures rise to 700° F. The Earth was dropped off into "the slot" due to its previous distance from Little Brother and due to the geometry (and geography) of capture by the Sun.
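The rotation figures scattered through the last two stories can be reproduced in a few lines of R. Venus's 7,517-mile diameter and all of the periods are the text's; the Mercury and Earth equatorial diameters are standard approximate values that I have supplied.

# Equatorial rotation speed in mph from diameter (miles) and rotation period (days).
equatorial_mph <- function(diameter_miles, period_days) {
  (pi * diameter_miles) / (period_days * 24)
}

equatorial_mph(7517, 243.01)   # Venus: about 4.0 mph, retrograde
equatorial_mph(3032, 58.65)    # Mercury: about 6.8 mph (diameter assumed)
equatorial_mph(7926, 1.0)      # Earth: about 1,037 mph (equatorial diameter assumed)

# Mercury's 3:2 spin-orbit resonance: orbital period over rotation period.
87.97 / 58.65                  # very nearly 1.5

The small differences from the text's 4.05, 6.7 and 1,037.6 mph come only from rounding in the diameters, and the 87.97/58.65 ratio shows the 3:2 spin-orbit resonance mentioned above.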
Compared to vanity, humility is better, and an age of humility would be best. Job learned this, before it was too late, long ago. Job, viewing the grandeur of creation in a new light, was utterly speechless. The sixth story in our skyscraper is how the orbits of Neptune and Uranus came to be, and why Neptune and Uranus still are neighbors. Their ancient spin rates were nearly identical before separation and still are. The seventh story is how and why, if not when Jupiter and Saturn were picked off and captured by the Sun, some 480,000,000 miles and 880,000,000 miles respectively from the Sun. Part of the story is why they, too, are still neighbors. Like the Neptune-Uranus case, their spin rates also were nearly identical and still are. The eighth story is related to the seventh. It is probable that Saturn already had its icy rings before it was ferried close to the Sun and delivered. Since then, the solar radiation has been effervescing away those splendid rings; they are a mere shadow of what they once were. The rings of Saturn are one kind of dating mechanism for the origin of the solar system, and as such, indicate recentness. The eighth story is the breakup of the quintet in the inner region of the solar system. A two-planet co-orbiting binary, with three satellites, revolving around "L. B." was converted to four planets and just one satellite, all revolving around the Sun. The ninth story of our skyscraper is how an early orbit of Mars and Earth was changed; Mars was liberated first from the Earth and next, from Little Brother. It features the new second orbit of Mars. It was still long and narrow, but Mars now orbited the Sun instead of L. B. In the earlier age, Mars had sought the Earth repeatedly. And next, despite having been separated from the Earth, it continued to seek our planet. The tenth story of our skyscraper of cosmology addresses why Venus rotates so slowly, and why it rotates in the backward mode. Gradualists have pondered this for 100 years and have yet to gain even an inkling. This story also reveals why Venus orbits on the other side, on the edge of Hell's Kitchen, only 65,000,000+ miles from the Sun. We are pleased to announce that understanding this condition is merely by understanding its previous co-orbiting of the Earth and related conditions. Given a good model, the unique, backward, slow rotation of Venus isn't that hard to solve. The eleventh story of our skyscraper is why tiny Mercury rotates so slowly. The reason is because formerly it had no spin axis at all when it orbited Venus in the previous, primordial era. Once non-rotating, its geometry of capture dictated a very, very slow prograde rotation. Mercury rotates in 58.65 days. The twelfth story in our catastrophic cosmology is an observation that the Earth-Moon was dropped off in "the slot." It acquired a new orbit around the Sun, one conquered neither by superheated waters like Venus, nor by perpetual ices like Mars. Coincidence? Perhaps. By design? More likely. For some 200+ years, gradualists have always looked to the Sun for cosmic supplies to stock the Solar System. Having admitted their error in part (after over 100 years), they now have settled for claiming the Sun and planets formed simultaneously out of a cloud. Sublime in misdirection, the gradualists have been looking in just exactly the wrong region for the origin of the planets, the region of inner space, near to and in "Hell's Kitchen." They should have been looking to the region 1,000 a.u. or so from the Sun in dark, remote, frigid space. 
However, evidence indicates that spin rates were acquired in a remote region at or beyond 1,000 a.u. from the Sun. So were satellite systems and craters in abundance. A delivery into the Inner Solar System requires a properly modeled delivery system, and a logical route. The logical route is simply the ecliptic plane. The delivery system is some super-planet along the lines we have modeled, a super-planet 30 to 40 times as massive as Jupiter, or 9,000 to 12,000 times the Earth's mass. If its density is similar to the Earth's, Little Brother could have a diameter of 190,000 miles.

If these eight planets were delivered into the inner Solar System, there must be a delivery system, a United Parcel Service of the cosmos. The deliveries of the Neptune-Uranus pair and the Saturn-Jupiter pair are two items in evidence. The delivery of the quintet is a third indication that a delivery system exists. But is there more evidence?

PREVIEW. Surprisingly, as one proceeds into an analysis of the planets in "Hell's Kitchen," including the Sun itself, three or four more scars, or clues, can be observed, scars of Little Brother's last flyby around the Sun. Read on. We offer our model with logic interspersed with various kinds of evidence.

PREVIEW. As it so happens in good movies, while the previously mentioned clues are good evidence of this delivery system, the United Parcel Service of the cosmos, nevertheless the best clues are left for last. Those clues, three or four of them, can be seen inside the orbit of Venus. Read on.

F2 Kuiper, Gerard P., Planets and Satellites. Chicago, Univ. of Chicago Press, 1961, pp. 577-578.

The Recent Organization of The Solar System by Patten & Windsor
http://www.creationism.org/patten/PattenRecOrgSolSys/PattenRootssCh06.html
To receive more information about up-to-date research on micronutrients, sign up for the free, semi-annual LPI Research Newsletter here. Structure and Physiology Bone Composition and Structure. Our skeleton may seem an inert structure, but it is an active organ, made up of tissue and cells in a continual state of activity throughout a lifetime. Bone tissue is comprised of a mixture of minerals deposited around a protein matrix, which together contribute to the strength and flexibility of our skeletons. Sixty-five percent of bone tissue is inorganic mineral, which provides the hardness of bone. The major minerals found in bone are calcium and phosphorus in the form of an insoluble salt called hydroxyapatite (HA) [chemical formula: (Ca)10(PO4)6(OH)2)]. HA crystals lie adjacent and bound to the organic protein matrix. Magnesium, sodium, potassium, and citrate ions are also present, conjugated to HA crystals rather than forming distinct crystals of their own (1). The remaining 35% of bone tissue is an organic protein matrix, 90-95% of which is type I collagen. Collagen fibers twist around each other and provide the interior scaffolding upon which bone minerals are deposited (1). Types of Bone. There are two types of bone tissue: cortical (compact) bone and trabecular (spongy or cancellous) bone (2). Eighty percent of the skeleton is cortical bone, which forms the outer surface of all bones. The small bones of the wrists, hands, and feet are entirely cortical bone. Cortical bone looks solid but actually has microscopic openings that allow for the passage of blood vessels and nerves. The other 20% of skeleton is trabecular bone, found within the ends of long bones and inside flat bones (skull, pelvis, sternum, ribs, and scapula) and spinal vertebrae. Both cortical and trabecular bone have the same mineral and matrix components but differ in their porosity and microstructure: trabecular bone is much less dense, has a greater surface area, and undergoes more rapid rates of turnover (see Bone Remodeling/Turnover below). There are three phases of bone development: growth, modeling (or consolidation), and remodeling (see figure). During the growth phase, the size of our bones increases. Bone growth is rapid from birth to age two, continues in spurts throughout childhood and adolescence, and eventually ceases in the late teens and early twenties. Although bones stop growing in length by about 20 years of age, they change shape and thickness and continue accruing mass when stressed during the modeling phase. For example, weight training and body weight exert mechanical stresses that influence the shape of bones. Thus, acquisition of bone mass occurs during both the growth and modeling/consolidation phases of bone development. The remodeling phase consists of a constant process of bone resorption (breakdown) and formation that predominates during adulthood and continues throughout life. Beginning around age 34, the rate of bone resorption exceeds that of bone formation, leading to an inevitable loss of bone mass with age (3). Peak Bone Mass. Bone mass refers to the quantity of bone present, both matrix and mineral. Bone mass increases through adolescence and peaks in the late teen years and into our twenties. The maximum amount of bone acquired is known as peak bone mass (PBM) (see figure) (4, 5). Achieving one’s genetically determined PBM is influenced by several environmental factors, discussed more extensively below (see Determinants of Adult Bone Health below). 
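Returning for a moment to the hydroxyapatite formula given under Bone Composition and Structure, a short calculation makes the mineral chemistry concrete. This is a sketch using standard atomic masses; the roughly 40% calcium-by-mass figure and the 1.67 Ca:P ratio are derived here for illustration and are not quoted from the article.

# Rough composition of hydroxyapatite, Ca10(PO4)6(OH)2, from standard atomic masses.
atomic_mass = {"Ca": 40.08, "P": 30.97, "O": 16.00, "H": 1.008}

ca = 10 * atomic_mass["Ca"]
p = 6 * atomic_mass["P"]
o = (6 * 4 + 2) * atomic_mass["O"]
h = 2 * atomic_mass["H"]
molar_mass = ca + p + o + h                       # ~1005 g/mol

print(f"Molar mass: {molar_mass:.0f} g/mol")
print(f"Calcium by mass: {ca / molar_mass:.1%}")  # ~39.9%
print(f"Ca:P molar ratio: {10 / 6:.2f}")          # ~1.67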
Technically, we cannot detect the matrix component of bone, so bone mass cannot be measured directly. We can, however, detect bone mineral by using dual X-ray absorptiometry (DEXA). In this technique, the absorption of photons from an X-ray is a function of the amount of mineral present in the path of the beam. Therefore, bone mineral density (BMD) measures the quantity of mineral present in a given section of bone and is used as a proxy for bone mass (6). Although BMD is a convenient clinical marker to assess bone mass and is associated with osteoporotic fracture risk, it is not the sole determinant of fracture risk. Bone quality (architecture, strength) and propensity to fall (balance, mobility) also factor into risk assessment and should be considered when deciding upon an intervention strategy (see Osteoporosis). Bone Remodeling/Turnover. Bone tissue, both mineral and organic matrix, is continually being broken down and rebuilt in a process known as remodeling or turnover. During remodeling, bone resorption and formation are always “coupled”—osteoclasts first dissolve a section of bone and osteoblasts then invade the newly created space and secrete bone matrix (6). The goal of remodeling is to repair and maintain a healthy skeleton, adapt bone structure to new loads, and regulate calcium concentration in extracellular fluids (7). The bone remodeling cycle, which refers to the time required to complete the entire series of cellular events from resorption to final mineralization, lasts approximately 40 weeks (8, 9). Additionally, remodeling units cycle at staggered stages. Thus, any intervention that influences bone remodeling will affect newly initiated remodeling cycles at first, and there is a lag time, known as the “bone remodeling transient,” until all remodeling cycles are synchronized to the treatment exposure (8). Considering the bone remodeling transient and the length of time required to complete a remodeling cycle, a minimum of two years is needed to realize steady-state treatment effects on BMD (10). The rates of bone tissue turnover differ depending on the type of bone: trabecular bone has a faster rate of turnover than cortical bone. Osteoporotic fracture manifests in trabecular bone, primarily as fractures of the hip and spine, and many osteoporotic therapies target remodeling activities in order to alter bone mass (11). Bone Cells. The cells responsible for bone formation and resorption are osteoblasts and osteoclasts, respectively. Osteoblasts prompt the formation of new bone by secreting the collagen-containing component of bone that is subsequently mineralized (1). The enzyme alkaline phosphatase is secreted by osteoblasts while they are actively depositing bone matrix; alkaline phosphatase travels to the bloodstream and is therefore used as a clinical marker of bone formation rate. Osteoblasts have receptors for vitamin D, estrogen, and parathyroid hormone (PTH). As a result, these hormones have potent effects on bone health through their regulation of osteoblastic activity. Once they have finished secreting matrix, osteoblasts either die, become lining cells, or transform into osteocytes, a type of bone cell embedded deep within the organic matrix (9, 12). Osteocytes make up 90-95% of all bone cells and are very long-lived (up to decades) (12). They secrete soluble factors that influence osteoclastic and osteoblastic activity and play a central role in bone remodeling in response to mechanical stress (9, 12, 13). 
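Before moving on to the cells that resorb bone, the earlier point about BMD as a clinical marker can be made concrete. The sketch below is a simplified illustration, not a clinical tool: it uses the WHO T-score cutoffs described later in this article (see Osteopenia and Osteoporosis), the approximation quoted there that fracture risk roughly doubles for each SD reduction in BMD, and invented numbers for the reference mean, reference SD, and measured BMD.

# Simplified illustration of WHO T-score categories and the "risk doubles per SD" rule of thumb.
def t_score(bmd_g_cm2, young_adult_mean, young_adult_sd):
    # T-score: standard deviations above or below the young-adult reference mean.
    return (bmd_g_cm2 - young_adult_mean) / young_adult_sd

def who_category(t):
    if t <= -2.5:
        return "osteoporosis"
    if t <= -1.0:
        return "osteopenia"
    return "normal"

def relative_fracture_risk(t):
    # Fracture risk approximately doubles for each SD reduction in BMD.
    return 2 ** max(0.0, -t)

# Hypothetical example measurement against a hypothetical reference population.
t = t_score(bmd_g_cm2=0.70, young_adult_mean=0.94, young_adult_sd=0.12)
print(f"T-score: {t:.1f}, category: {who_category(t)}, "
      f"relative fracture risk: {relative_fracture_risk(t):.1f}x")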
Osteoclasts erode the surface of bones by secreting enzymes and acids that dissolve bone. More specifically, enzymes degrade the organic matrix and acids solubilize bone mineral salts (1). Osteoclasts work in small, concentrated masses and take approximately three weeks to dissolve bone, at which point they die and osteoblasts invade the space to form new bone tissue. In this way, bone resorption and formation are always “coupled.” End products of bone matrix breakdown (hydroxyproline and amino-terminal collagen peptides) are excreted in the urine and can be used as convenient biochemical measures of bone resorption rates. Maximum Attainment of Peak Bone Mass. The majority of bone mass is acquired during the growth phase of bone development (see figure) (4, 6). Attaining one’s peak bone mass (PBM) (i.e., the maximum amount of bone) is the product of genetic, lifestyle, and environmental factors (5, 14). Sixty to 80% of PBM is determined by genetics, while the remaining 20-40% is influenced by lifestyle factors, primarily nutrition and physical activity (15). In other words, diet and exercise are known to contribute to bone mass acquisition but can only augment PBM within an individual’s genetic potential. Acquisition of bone mass during the growth phase is sometimes likened to a “bone bank account” (4, 5). As such, maximizing PBM is important when we are young in order to protect against the consequences of age-related bone loss. However, improvements in bone mineral density (BMD) generally do not persist once a supplement or exercise intervention is terminated (16, 17). Thus, attention to diet and physical activity during all phases of bone development is beneficial for bone mass accrual and skeletal health. Rate of Bone Loss with Aging. Bone remodeling is a lifelong process, with resorption and formation linked in space and time. Yet the scales tip such that bone loss outpaces bone gain as we age. Beginning around age 34, the rate of bone resorption exceeds the rate of bone formation, leading to an inevitable loss of bone mass with age (see figure) (18). Age-related estrogen reduction is associated with increased bone remodeling activity—both resorption and formation—in both sexes (13). However, the altered rate of bone formation does not match that of resorption; thus, estrogen deficiency contributes to loss of bone mass over time (9, 13). The first three to five years following the onset of menopause ('early menopause') are associated with an accelerated, self-limiting loss of bone mass (3, 18, 19). Subsequent postmenopausal bone loss occurs at a linear rate as we age (3). As we continue to lose bone, we near the threshold for osteoporosis and are at high-risk for fractures of the hip and spine. Osteomalacia. Osteomalacia, also known as “adult rickets,” is a failure to mineralize bone. Stereotypically, osteomalacia results from vitamin D deficiency (serum 25-hydroxyvitamin D levels <20 nmol/L or <8 ng/mL) and the associated inability to absorb dietary calcium and phosphorus across the small intestine. Plasma calcium concentration is tightly controlled, and the body has a number of mechanisms in place to adjust to fluctuating blood calcium levels. In response to low blood calcium, PTH levels increase and vitamin D is activated. The increase in PTH stimulates bone remodeling activity—both resorption and formation, which are always coupled. 
Thus, osteoclasts release calcium and phosphorus from bone in order to restore blood calcium levels, and osteoblasts mobilize to replace the resorbed bone. During osteomalacia, however, the deficiency of calcium and phosphorus results in incomplete mineralization of the newly secreted bone matrix. In severe cases, newly formed, unmineralized bone loses its stiffness and can become deformed under the strain of body weight. Osteopenia. Simply put, osteopenia and osteoporosis are varying degrees of low bone mass. Whereas osteomalacia is characterized by low-mineral and high-matrix content, osteopenia and osteoporosis result from low levels of both. As defined by the World Health Organization (WHO), osteopenia precedes osteoporosis and occurs when one’s bone mineral density (BMD) is between 1 and 2.5 standard deviations (SD) below that of the average young adult (30 years of age) woman (see figure). Osteoporosis. Osteoporosis is a condition of increased bone fragility and susceptibility to fracture due to loss of bone mass. Clinically, osteoporosis is defined as a BMD that is greater than 2.5 SD below the mean for young adult women (see figure). It has been estimated that fracture risk in adults is approximately doubled for each SD reduction in BMD (6). Common sites of osteoporotic fracture are the hip, femoral neck, and vertebrae of spinal column—skeletal sites rich in trabecular bone. BMD, the quantity of mineral present per given area/volume of bone, is only a surrogate for bone strength. Although it is a convenient biomarker used in clinical and research settings to predict fracture risk, the likelihood of experiencing an osteoporotic fracture cannot be predicted solely by BMD (6). The risk of osteoporotic fracture is influenced by additional factors, including bone quality (microarchitecture, geometry) and propensity to fall (balance, mobility, muscular strength). Other modifiable and non-modifiable factors also play into osteoporotic fracture risk, and they are generally additive (21). The WHO Fracture Risk Assessment Tool was designed to account for some of these additional risk factors. Once you have your BMD measurement, visit the WHO Web site to calculate your 10-year probability of fracture, taking some of these additional risk factors into account. Paying attention to modifiable risk factors for osteoporosis is an important component of fracture prevention strategies. For more details about individual dietary factors and osteoporosis, see the Micronutrient Information Center's Disease Index and the LPI Research Newsletter article by Dr. Jane Higdon. Micronutrient supply plays a prominent role in bone health. Several minerals have direct roles in hydroxyapatite (HA) crystal formation and structure; other nutrients have indirect roles as cofactors or as regulators of cellular activity (22, 23).Table 1 below lists the dietary reference intakes (DRIs) for micronutrients important to bone health. The average dietary intake of Americans (aged 2 years and older) is also provided for comparative purposes (24). |Table 1. 
DRIs for Micronutrients Important to Bone Health|

| Micronutrient | RDA or AI* | UL (≥19 y) | Mean intake (≥2 y, all food sources) (24) |
|---|---|---|---|
| Calcium | Men: 1,000 mg/d (19-70y); 1,200 mg/d (>70y). Women: 1,000 mg/d (19-50y); 1,200 mg/d (>50y) | Men & Women: 2,500 mg/d (19-50y); 2,000 mg/d (>50y) | |
| Phosphorus | Men & Women: 700 mg/d | Men & Women: 4 g/d (19-70y); 3 g/d (>70y) | |
| Fluoride | Men: 4 mg/d*; Women: 3 mg/d* | Men & Women: 10 mg/d | |
| Magnesium | Men: 400 mg/d (19-30y); 420 mg/d (>31y). Women: 310 mg/d (19-30y); 320 mg/d (>31y) | Men & Women: 350 mg/d^a | |
| Sodium | Men & Women: 1.5 g/d (19-50y); 1.3 g/d (51-70y); 1.2 g/d (>70y)* | Men & Women: 2.3 g/d | |
| Vitamin D | Men & Women: 15 mcg (600 IU)/d (19-70y); 20 mcg (800 IU)/d (>70y) | Men & Women: 100 mcg (4,000 IU)/d | |
| Vitamin A | Men: 900 mcg (3,000 IU)/d; Women: 700 mcg (2,333 IU)/d | Men & Women: 3,000 mcg (10,000 IU)/d^b | |
| Vitamin K | Men: 120 mcg/d*; Women: 90 mcg/d* | ND | 80 mcg/d |
| Vitamin C | Men: 90 mg/d; Women: 75 mg/d | Men & Women: 2,000 mg/d | |
| Vitamin B6 | Men: 1.3 mg/d (19-50y); 1.7 mg/d (>50y). Women: 1.3 mg/d (19-50y); 1.5 mg/d (>50y) | Men & Women: 100 mg/d | |
| Folate | Men & Women: 400 mcg/d | Men & Women: 1,000 mcg/d^c | |
| Vitamin B12 | Men & Women: 2.4 mcg/d | ND | |

|Abbreviations: RDA, recommended dietary allowance; AI, adequate intake; UL, tolerable upper intake level; y, years; d, day; g, gram; mg, milligram; mcg, microgram; IU, international units; ND, not determinable|
*Adequate intake (AI)
^a Applies only to the supplemental form
^b Applies only to preformed retinol
^c Applies to the synthetic form in fortified foods and supplements

Calcium. Calcium is the most common mineral in the human body. About 99% of the calcium in the body is found in bones and teeth, while the other 1% is found in blood and soft tissues. Calcium levels in the blood must be maintained within a very narrow concentration range for normal physiological functioning, namely muscle contraction and nerve impulse conduction. These functions are so vital to survival that the body will demineralize bone to maintain normal blood calcium levels when calcium intake is inadequate. In response to low blood calcium, parathyroid hormone (PTH) is secreted. PTH targets three main axes in order to restore blood calcium concentration: (1) vitamin D is activated (see the section on vitamin D below), (2) filtered calcium is retained by the kidneys, and (3) bone resorption is induced (1). It is critical to obtain enough dietary calcium in order to balance the calcium taken from our bones in response to fluctuating blood calcium concentrations.

Several randomized, placebo-controlled trials (RCTs) have tested whether calcium supplementation reduces age-related bone loss and fracture incidence in postmenopausal women. In the Women’s Health Initiative (WHI), 36,282 healthy, postmenopausal women (aged 50 to 79 years; mean age 62 years) were randomly assigned to receive placebo or 1,000 mg calcium carbonate and 400 IU vitamin D3 daily (25). After a mean of seven years of follow-up, the supplement group had significantly less bone loss at the hip. A 12% reduction in the incidence of hip fracture in the supplement group did not reach statistical significance, possibly due to the low rates of absolute hip fracture in the 50 to 60 year age range. The main adverse event reported in the supplement group was an increased proportion of women with kidney stones.

Another RCT assessed the effect of 1,000 mg of calcium citrate versus placebo on bone density and fracture incidence in 1,472 healthy postmenopausal women (aged 74±4 years) (26). Calcium had a significant beneficial effect on bone mineral density (BMD) but an uncertain effect on fracture rates. The high incidence of constipation with calcium supplementation may have contributed to poor compliance, which limits data interpretation and clinical efficacy.
Hip fracture was significantly reduced in an RCT involving 1,765 healthy, elderly women living in nursing homes (mean age 86±6 years) given 1,200 mg calcium triphosphate and 800 IU vitamin D3 daily for 18 months (27). The number of hip fractures was 43% lower and the number of nonvertebral fractures was 32% lower in women treated with calcium and vitamin D3 supplements compared to placebo. While there is a clear treatment benefit in this trial, the institutionalized elderly population is known to be at high risk for vitamin deficiencies and fracture rates and may not be representative of the general population. Overall, the majority of calcium supplementation trials (and meta-analyses thereof) show a positive effect on BMD, although the size of the effect is modest (3, 7, 28, 29). Furthermore, the response to calcium supplementation may depend on habitual calcium intake and age: those with chronic low intakes will benefit most from supplementation (7, 29), and women within the first five years after menopause are somewhat resistant to calcium supplementation (7, 10). The current recommendations in the U.S. for calcium are based on a combination of balance data and clinical trial evidence, and they appear to be set at levels that support bone health (see table 1 above) (30, 31). Aside from the importance of meeting the RDA, calcium is a critical adjuvant for therapeutic regimens used to treat osteoporosis (7, 11). The therapy (e.g., estrogen replacement, pharmaceutical agent, and physical activity) provides a bone-building stimulus that must be matched by raw materials (nutrients) obtained from the diet. Thus, calcium supplements are a necessary component of any osteoporosis treatment strategy. A recent meta-analysis (32) and prospective study (33) have raised concern over the safety of calcium supplements, either alone or with vitamin D, on the risk of cardiovascular events. Although these analyses raise an issue that needs further attention, there is insufficient evidence available at this time to definitely refute or support the claims that calcium supplementation increases the risk of cardiovascular disease. For more extensive discussion of this issue, visit the LPI Spring/Summer 2012 Research Newsletter or the LPI News Article. Phosphorus. More than half the mass of bone mineral is comprised of phosphorus, which combines with calcium to form HA crystals. In addition to this structural role, osteoblastic activity relies heavily on local phosphate concentrations in the bone matrix (11, 34). Given its prominent functions in bone, phosphorus deficiency could contribute to impaired bone mineralization (34). However, in healthy individuals, phosphorus deficiency is uncommon, and there is little evidence that phosphorus deficiency affects the incidence of osteoporosis (23). Excess phosphorus intake has negligible affects on calcium excretion and has not been linked to a negative impact on bone (35). Fluoride. Fluoride has a high affinity for calcium, and 99% of our body fluoride is stored in calcified tissues, i.e., teeth and bones (36). In our teeth, very dense HA crystals are embedded in collagen fibers. The presence of fluoride in the HA crystals (fluoroapatite) enhances resistance to destruction by plaque bacteria (1, 36), and fluoride has proven efficacy in the prevention of dental caries (37). While fluoride is known to stimulate bone formation through direct effects on osteoblasts (38), high-dose fluoride supplementation may not benefit BMD or reduce fracture rates (39, 40). 
The presence of fluoride in HA increases the crystal size and contributes to bone fragility; thus, uncertainties remain about the quality of newly formed bone tissue with fluoride supplementation (9, 23). Chronic intake of fluoridated water, on the other hand, may benefit bone health (9, 36). Two large prospective studies comparing fracture rates between fluoridated and non-fluoridated communities demonstrate that long-term, continuous exposure to fluoridated water (1 mg/L) is safe and associated with reduced incidence of fracture in elderly individuals (41, 42). Magnesium. Magnesium (Mg) is a major mineral with essential structural and functional roles in the body. It is a critical component of our skeleton, with 50-60% of total body Mg found in bone where it colocalizes with HA, influencing the size and strength of HA crystals (23). Mg also serves a regulatory role in mineral metabolism. Mg deficiency is associated with impaired secretion of PTH and end-organ resistance to the actions of PTH and 1,25-dihydroxyvitamin D3 (43). Low dietary intake of Mg is common in the U.S. population (24), and it has therefore been suggested that Mg deficiency could impair bone mineralization and represent a risk factor for osteoporosis. However, observational studies of the association between Mg intake and bone mass or bone loss have produced mixed results, with most showing no association (34). The effect of Mg supplementation on trabecular bone density in postmenopausal women was assessed in one controlled intervention trial (44). Thirty-one postmenopausal women (mean age, 57.6±10.6 years) received two to six tablets of 125 mg each magnesium hydroxide (depending on individual tolerance levels) for six months, followed by two tablets daily for another 18 months. Twenty-three age-matched osteoporotic women who refused treatment served as controls. After one year of Mg supplementation, there was either an increase or no change in bone density in 27 out of 31 patients; bone density was significantly decreased in controls after one year. Although encouraging, this is a very small study, and only ten Mg-supplemented patients persisted into the second year. Sodium. Sodium is thought to influence skeletal health through its impact on urinary calcium excretion (34). High-sodium intake increases calcium excretion by the kidneys. If the urinary calcium loss is not compensated for by increased intestinal absorption from dietary sources, bone calcium will be mobilized and could potentially affect skeletal health. However, even with the typical high sodium intakes of Americans (2,500 mg or more per day), the body apparently increases calcium absorption efficiency to account for renal losses, and a direct connection between sodium intake and abnormal bone status in humans has not been reported (34, 45). Nonetheless, compensatory mechanisms in calcium balance may diminish with age (11), and keeping sodium within recommended levels is associated with numerous health benefits. Vitamin A. Both vitamin A deficiency and excess can negatively affect skeletal health. Vitamin A deficiency is a major public health concern worldwide, especially in developing nations. In growing animals, vitamin A deficiency causes bone abnormalities due to impaired osteoclastic and osteoblastic activity (46). These abnormalities can be reversed upon vitamin A repletion (47). In animals, vitamin A toxicity (hypervitaminosis A) is associated with poor bone growth, loss of bone mineral content, and increased rate of fractures (22). 
Case studies in humans have indicated that extremely high vitamin A intakes (100,000 IU/day or more, several fold above the tolerable upper intake level [UL] (see table 1 above) are associated with hypercalcemia and bone resorption (48-50). The question remains, however, if habitual, excessive vitamin A intake has a negative effect on bone (22, 51, 52). There is some observational evidence that high vitamin A intake (generally in supplement users and at intake levels >1,500 mcg [5,000 IU]/day) is associated with an increased risk of osteoporosis and hip fracture (53-55). However, methods to assess vitamin A intake and status are notoriously unreliable (56), and the observational studies evaluating the association between vitamin A status or vitamin A intake with bone health report inconsistent results (57, 58). At this time, striving for the recommended dietary intake (RDA) for vitamin A (see table 1 above) is an important and safe goal for optimizing skeletal health. Vitamin D. The primary function of vitamin D is to maintain calcium and phosphorus absorption in order to supply the raw materials of bone mineralization (9, 59). In response to low blood calcium, vitamin D is activated and promotes the active absorption of calcium across the intestinal cell (59). In conjunction with PTH, activated 1,25-dihydroxyvitamin D3 retains filtered calcium by the kidneys. By increasing calcium absorption and retention, 1,25-dihydroxyvitamin D3 helps to offset calcium lost from the skeleton. Low circulating 25-hydoxyvitamin D3 (the storage form of vitamin D3) triggers a compensatory increase in PTH, a signal to resorb bone. The Institute of Medicine determined that maintaining a serum 25-hydroxyvitamin D3 level of 50 nmol/L (20 ng/ml) benefits bone health across all age groups (31). However, debate remains over the level of serum 25-hydroxyvitamin D3 that corresponds to optimum bone health. Based on a recent review of clinical trial data, the authors concluded that serum 25-hydroxyvitamin D3 should be maintained at 75-110 nmol/L (30-44 ng/ml) for optimal protection against fracture and falls with minimal risk of hypercalcemia (60). The level of intake associated with this higher serum 25-hydroxyvitamin D3 range is 1,800 to 4,000 IU per day, significantly higher than the current RDA (see table 1 above) (60). As mentioned in the Calcium section above, several randomized controlled trials (and meta-analyses) have shown that combined calcium and vitamin D supplementation decreases fracture incidence in older adults (29, 61-63). The efficacy of vitamin D supplementation may depend on habitual calcium intake and the dose of vitamin D used. In combination with calcium supplementation, the dose of vitamin D associated with a protective effect is 800 IU or more per day (29, 64). In further support of this value, a recent dosing study performed in 167 healthy, postmenopausal, white women (aged 57 to 90 years old) with vitamin D insufficiency (15.6 ng/mL at baseline) demonstrated that 800 IU/d of vitamin D3 achieved a serum 25-hydoxyvitamin D3 level greater than 20 ng/mL (65). The dosing study, which included seven groups ranging from 0 to 4,800 IU per day of vitamin D3 plus calcium supplementation for one year, also revealed that serum 25-hydroxyvitamin D3 response was curvilinear and plateaued at approximately 112 nmol/L (45 ng/mL) in subjects receiving more than 3,200 IU per day of vitamin D3. Some trials have evaluated the effect of high-dose vitamin D supplementation on bone health outcomes. 
In one RCT, high-dose vitamin D supplementation was no better than the standard dose of 800 IU/d for improving bone mineral density (BMD) at the hip and lumbar spine (66). In particular, 297 postmenopausal women with low bone mass (T-score ≤-2.0) were randomized to receive high-dose (20,000 IU vitamin D3 twice per week plus 800 IU per day) or standard-dose (placebo plus 800 IU per day) for one year; both groups also received 1,000 mg elemental calcium per day. After one year, both groups had reduced serum PTH, increased serum 25-hydroxyvitamin D3, and increased urinary calcium/creatinine ratio, although to a significantly greater extent in the high-dose group. BMD was similarly unchanged or slightly improved in both groups at all measurement sites. In the Vital D study, 2,256 elderly women (aged 70 years and older) received a single annual dose of 500,000 IU of vitamin D3 or placebo administered orally in the autumn or winter for three to five years (67). Calcium intake was quantified annually by questionnaire; both groups had a median daily calcium intake of 976 mg. The vitamin D group experienced significantly more falls and fractures compared to placebo, particularly within the first three months after dosing. Not only was this regimen ineffective at lowering risk, it suggests that the safety of infrequent, high-dose vitamin D supplementation warrants further study. The RDAs for calcium and vitamin D go together, and the requirement for one nutrient assumes that the need for the other nutrient is being met (31). Thus, the evidence supports the use of combined calcium and vitamin D supplements in the prevention of osteoporosis in older adults. Vitamin K. The major function of vitamin K1 (phylloquinone) is as a cofactor for a specific enzymatic reaction that modifies proteins to a form that facilitates calcium-binding (68). Although only a small number of vitamin-K-dependent proteins have been identified, four are present in bone tissue: osteocalcin (also called bone GLA protein), matrix GLA protein (MGP), protein S, and Gas 6 (68, 69). The putative role of vitamin K in bone biology is attributed to its role as cofactor in the carboxylation of these glutamic acid (GLA)-containing proteins (70). There is observational evidence that diets rich in vitamin K are associated with a decreased risk of hip fracture in both men and women; however, the association between vitamin K intake and BMD is less certain (70). It is possible that a higher intake of vitamin K1, which is present in green leafy vegetables, is a marker of a healthy lifestyle that is responsible for driving the beneficial effect on fracture risk (68, 70). Furthermore, a protective effect of vitamin K1 supplementation on bone loss has not been confirmed in randomized controlled trials (69-71). Vitamin K2 (menaquinone) at therapeutic doses (45 mg/day) is used in Japan to treat osteoporosis (see the Micronutrient Information Center’s Disease Index). Although a 2006 meta-analysis reported an overall protective effect of menaquinone-4 (MK-4) supplementation on fracture risk at the hip and spine (72), more recent data have not corroborated a protective effect of MK-4 and may change the outcome of the meta-analysis if included in the dataset (70). A double-blind, placebo-controlled intervention performed in 2009 observed no effect of either vitamin K1 (1 mg/d) or MK-4 (45 mg/d) supplementation on markers of bone turnover or BMD among healthy, postmenopausal women (N=381) receiving calcium and vitamin D supplements (69). 
In the Postmenopausal Health Study II, the effect of supplemental calcium, vitamin D, and vitamin K (in fortified dairy products) and lifestyle counseling on bone health was examined in healthy, postmenopausal women (73, 74). One hundred fifty women (mean age 62 years) were randomly assigned to one of four groups: (1) 800 mg calcium plus 10 mcg vitamin D3 (N=26); (2) 800 mg calcium, 10 mcg vitamin D3, plus 100 mcg vitamin K1 (N=26); (3) 800 mg calcium, 10 mcg vitamin D3, plus 100 mcg MK-7 (N=24); and (4) control group receiving no dietary intervention or counseling. Supplemental nutrients were delivered via fortified milk and yoghurt, and subjects were advised to consume one portion of each on a daily basis and to attend biweekly counseling sessions during the one-year intervention. BMD significantly increased in all three treatments compared to controls. Between the three diet groups, a significant effect of K1 or MK-7 on BMD remained only at the lumbar spine (not at hip and total body) after controlling for serum vitamin D and calcium intake. Overall, the positive influence on BMD was attributed to the combined effect of diet and lifestyle changes associated with the intervention, rather than with an isolated effect of vitamin K or MK-7 (73). We often discuss the mineral aspect of bone, but the organic matrix is also an integral aspect of bone quality and health. Collagen makes up 90% of the organic matrix of bone. Type I collagen fibers twist around each other in a triple helix and become the scaffold upon which minerals are deposited. Vitamin C is a required cofactor for the hydroxylation of lysine and proline during collagen synthesis by osteoblasts (75). In guinea pigs, vitamin C deficiency is associated with defective bone matrix production, both quantity and quality (76). Unlike humans and guinea pigs, rats can synthesize ascorbic acid on their own. Using a special strain of rats with a genetic defect in ascorbic acid synthesis (Osteogenic Disorder Shionogi [ODS] rats), researchers can mimic human scurvy by feeding these animals a vitamin C-deficient diet (77). Ascorbic acid-deficient ODS rats have a marked reduction in bone formation with no defect in bone mineralization (78). More specifically, ascorbic acid deficiency impairs collagen synthesis, the hydroxylation of collagenous proline and lysine residues, and osteoblastic adhesion to bone matrix (78). In observational studies, vitamin C intake and status is inconsistently associated with bone mineral density and fracture risk (22). A double-blind, placebo-controlled trial was performed with the premise that improving the collagenous bone matrix will enhance the efficacy of mineral supplementation to counteract bone loss (75). Sixty osteopenic women (35 to 55 years of age) received a placebo comprised of calcium and vitamin D (1,000 mg calcium carbonate plus 250 IU vitamin D) or this placebo plus CB6Pro (500 mg vitamin C, 75 mg vitamin B6, and 500 mg proline) daily for one year. In contrast to controls receiving calcium plus vitamin D alone, there was no bone loss detected in the spine and femur in the CB6Pro group. High levels of a metabolite known as homocysteine (hcy) are an independent risk factor for cardiovascular disease (CVD) (see the Disease Index) and may also be a modifiable risk factor for osteoporotic fracture (22). A link between hcy and the skeleton was first noted in studies of hyperhomocysteinuria, a metabolic disorder characterized by exceedingly high levels of hcy in the plasma and urine. 
Individuals with hyperhomocysteinuria exhibit numerous skeletal defects, including reduced bone mineral density (BMD) and osteopenia (79). In vitro studies indicate that a metabolite of hcy inhibits lysyl oxidase, an enzyme involved in collagen cross-linking, and that elevated hcy itself may stimulate osteoclastic activity (80-82). The effect of more subtle elevations of plasma hcy on bone health is more difficult to demonstrate, and observational studies in humans report conflicting results (79, 83). Some report an association between elevated plasma hcy and fracture risk (84-86), while others find no relationship (87-89). A recent meta-analysis of 12 observational studies reported that elevated plasma homocysteine is associated with increased risk of incident fracture (90). Folate, vitamin B12, and vitamin B6 help keep blood levels of hcy low; thus, efforts to reduce plasma hcy levels by meeting recommended intake levels for these vitamins may benefit bone health (83). Few intervention trials evaluating hcy-lowering therapy on bone health outcomes have been conducted. In one trial, 5,522 participants (aged 55 years and older) in the Heart Outcomes Prevention Evaluation (HOPE) 2 trial were randomized to receive daily hcy level-lowering therapy (2.5 mg folic acid, 50 mg vitamin B6, and 1 mg vitamin B12) or placebo for a mean duration of five years (91). Notably, HOPE 2 participants were at high-risk for cardiovascular disease and have preexisting CVD, diabetes mellitus, or another CVD risk factor. Although plasma hcy levels were reduced in the treatment group, there were no significant differences between treatment and placebo on the incidence of skeletal fracture. A randomized, double-blind, placebo-controlled intervention is under way that will assess the effect of vitamin B12 and folate supplementation on fracture incidence in elderly individuals (92). During the B-PROOF (B-vitamins for the Prevention Of Osteoporotic Fracture) trial, 2,919 subjects (65 years and older) with elevated hcy (≥12 micromol/L) will receive placebo or a daily tablet with 500 mcg B12 plus 400 mcg folic acid for two years (both groups also receive 15 mcg [600 IU] vitamin D daily). The first results are expected in 2013 and may help clarify the relationship between hcy, B-vitamin status, and osteoporotic hip fracture. Smoking. Cigarette smoking has an independent, negative effect on bone mineral density (BMD) and fracture risk in both men and women (93, 94). Several meta-analyses have been conducted to assess the relationship between cigarette smoking and bone health. After pooling data from a number of similar studies, there is a consistent, significant reduction in bone mass and increased risk of fracture in smokers compared to non-smokers (95-97). The effects were dose-dependent and had a strong association with age. Smoking cessation may slow or partially reverse the bone loss caused by years of smoking. Unhealthy lifestyle habits and low body weight present in smokers may contribute to the negative impact on bone health (93, 94). Additionally, smoking leads to alterations in hormone (e.g., 1,25-dihydroxyvitamin D3 and estrogen) production and metabolism that could affect bone cell activity and function (93, 94). The deleterious effects of smoking on bone appear to be reversible; thus, efforts to stop smoking will benefit many aspects of general health, including bone health. Alcohol. Chronic light alcohol intake is associated with a positive effect on bone density (98). 
If one standard drink contains 10 g ethanol, then this level of intake translates to one drink per day for women and two drinks per day for men (98). The effect of higher alcohol intakes (11-30 g ethanol per day) on BMD is more variable and may depend on age, gender, hormonal status, and type of alcoholic beverage consumed (98). At the other end of the spectrum, chronic alcoholism has a documented negative effect on bone and increases fracture risk (98). Alcoholics consuming 100-200 g ethanol per day have low bone density, impaired osteoblastic activity, and metabolic abnormalities that compromise bone health (98, 99).

Physical Activity. Physical activity is highly beneficial to skeletal health across all stages of bone development. Regular resistance exercise helps to reduce osteoporotic fracture risk for two reasons: it both directly and indirectly increases bone mass, and it reduces falling risk by improving strength, balance, and coordination (100). Physical activity increases bone mass because mechanical forces imposed on bone induce an adaptive osteogenic (bone-forming) response. Bone adjusts its strength in proportion to the degree of bone stress (1), and the intensity and novelty of the load, rather than the number of repetitions or sets, matter for building bone mass (101). The American College of Sports Medicine suggests that adults engage in the following exercise regimen in order to maintain bone health (see table 2 below) (100):

|Table 2. Exercise recommendations for bone health according to the American College of Sports Medicine|

| | Recommendation | Examples |
|---|---|---|
| MODE | Weight-bearing endurance activities | Tennis, stair climbing, jogging |
| | Activities that involve jumping | Volleyball, basketball |
| | Resistance exercise | Weight lifting |
| INTENSITY | Moderate to high | |
| FREQUENCY | Weight-bearing endurance activities: 3-5 times per week; resistance exercise: 2-3 times per week | |
| DURATION | 30-60 minutes per day | Combination of weight-bearing endurance activities, activities that involve jumping, and resistance exercise that targets all major muscle groups |

Additionally, the ability of the skeleton to respond to physical activity can be either constrained or enabled by nutritional factors. For example, calcium insufficiency diminishes the effectiveness of mechanical loading to increase bone mass, and highly active people who are malnourished are at increased fracture risk (2, 100). Thus, exercise can be detrimental to bone health when the body is not receiving the nutrients it needs to remodel bone tissue in response to physical activity.

Micronutrients play a prominent role in bone health. The emerging theme with supplementation trials seems to be that habitual intake influences the efficacy of the intervention. In other words, correcting a deficiency and meeting the RDAs of micronutrients involved in bone health will improve bone mineral density (BMD) and benefit the skeleton (see table 1). To realize lasting effects on bone, the intervention must persist throughout a lifetime. At all stages of life, high-impact and resistance exercise in conjunction with adequate intake of nutrients involved in bone health are critical factors in maintaining a healthy skeleton and minimizing bone loss. The preponderance of clinical trial data supports supplementation with calcium and vitamin D in older adults as a preventive strategy against osteoporosis. Habitual, high intake of vitamin A at doses >1,500 mcg (5,000 IU) per day may negatively impact bone.
Although low dietary vitamin K intake is associated with increased fracture risk, RCTs have not supported a direct role for vitamin K1 (phylloquinone) or vitamin K2 (menaquinone) supplementation in fracture risk reduction. The other micronutrients important to bone health (phosphorus, fluoride, magnesium, sodium, and vitamin C) have essential roles in bone, but clinical evidence in support of supplementation beyond recommended levels of intake to improve BMD or reduce fracture incidence is lacking. Many Americans, especially the elderly, are at high risk for deficiencies of several micronutrients (24). Some of these nutrients are critical for bone health, and the LPI recommends supplemental calcium, vitamin D, and magnesium for healthy adults (see the LPI Rx for Health).

Written in August 2012 by: Giana Angelo, Ph.D., Linus Pauling Institute, Oregon State University

Reviewed in August 2012 by: Connie M. Weaver, Ph.D., Distinguished Professor and Department Head, Department of Nutrition Science

This article was underwritten, in part, by a grant from Bayer Consumer Care AG, Basel, Switzerland.

Copyright 2012-2013 Linus Pauling Institute

The Linus Pauling Institute Micronutrient Information Center provides scientific information on the health aspects of dietary factors and supplements, foods, and beverages for the general public. The information is made available with the understanding that the author and publisher are not providing medical, psychological, or nutritional counseling services on this site. The information should not be used in place of a consultation with a competent health care or nutrition professional. The information on dietary factors and supplements, foods, and beverages contained on this Web site does not cover all possible uses, actions, precautions, side effects, and interactions. It is not intended as nutritional or medical advice for individual problems. Liability for individual actions or omissions based upon the contents of this site is expressly disclaimed.
http://lpi.oregonstate.edu/infocenter/bonehealth.html
Heat transfer coefficient

The heat transfer coefficient, in thermodynamics and in mechanical and chemical engineering, is used in calculating the heat transfer, typically by convection or phase transition between a fluid and a solid:

Q = h A \Delta T

where
- Q = heat flow in input or lost heat flow, J/s = W
- h = heat transfer coefficient, W/(m2 K)
- A = heat transfer surface area, m2
- \Delta T = difference in temperature between the solid surface and surrounding fluid area

From the above equation, the heat transfer coefficient is the proportionality coefficient between the heat flux, that is heat flow per unit area, q/A, and the thermodynamic driving force for the flow of heat (i.e., the temperature difference, ΔT):

h = \frac{q}{A\,\Delta T}

The heat transfer coefficient has SI units of watts per square meter kelvin: W/(m2 K). There are numerous methods for calculating the heat transfer coefficient in different heat transfer modes, different fluids, flow regimes, and under different thermohydraulic conditions. Often it can be estimated by dividing the thermal conductivity of the convection fluid by a length scale. The heat transfer coefficient is often calculated from the Nusselt number (a dimensionless number). There are also online calculators available specifically for heat transfer fluid applications.

An understanding of convection boundary layers is necessary to understanding convective heat transfer between a surface and a fluid flowing past it. A thermal boundary layer develops if the fluid free stream temperature and the surface temperatures differ. A temperature profile exists due to the energy exchange resulting from this temperature difference. The heat transfer rate can then be written as

q = h A (T_s - T_\infty)

and because heat transfer at the surface is by conduction,

q = -k_f A \left.\frac{\partial (T - T_s)}{\partial y}\right|_{y=0}

These two terms are equal; thus

h = -\frac{k_f}{T_s - T_\infty}\left.\frac{\partial (T - T_s)}{\partial y}\right|_{y=0}

Making it dimensionless by multiplying by a representative length L,

\frac{h L}{k_f} = -\frac{L}{T_s - T_\infty}\left.\frac{\partial (T - T_s)}{\partial y}\right|_{y=0}

The right hand side is now the ratio of the temperature gradient at the surface to the reference temperature gradient, while the left hand side is similar to the Biot modulus. This becomes the ratio of conductive thermal resistance to the convective thermal resistance of the fluid, otherwise known as the Nusselt number, Nu.

Alternative Method (A simple method for determining the overall heat transfer coefficient)

A simple method for determining an overall heat transfer coefficient that is useful to find the heat transfer between simple elements such as walls in buildings or across heat exchangers is shown below. Note that this method only accounts for conduction within materials; it does not take into account heat transfer through methods such as radiation. The method is as follows:

\frac{1}{U A} = \frac{1}{h_1 A_1} + \frac{dx_w}{k A} + \frac{1}{h_2 A_2}

where
- U = the overall heat transfer coefficient (W/(m2 K))
- A = the contact area for each fluid side (m2) (with A_1 and A_2 expressing either surface)
- k = the thermal conductivity of the material (W/(m K))
- h = the individual convection heat transfer coefficient for each fluid (W/(m2 K))
- dx_w = the wall thickness (m)

As the areas for each surface approach being equal, the equation can be written as the transfer coefficient per unit area, as shown below:

\frac{1}{U} = \frac{1}{h_1} + \frac{dx_w}{k} + \frac{1}{h_2}

NOTE: Often the value for dx_w is referred to as the difference of two radii, where the inner and outer radii are used to define the thickness of a pipe carrying a fluid; however, this figure may also be considered as a wall thickness in a flat plate transfer mechanism or other common flat surfaces such as a wall in a building when the area difference between each edge of the transmission surface approaches zero.
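As a concrete illustration of the per-unit-area form above, the sketch below evaluates U for a single-layer wall separating two fluids and then the resulting heat flux. The film coefficients, wall thickness, and conductivity are arbitrary example values, not figures taken from this article.

# Overall heat transfer coefficient per unit area for a single-layer wall:
# 1/U = 1/h_1 + dx_w/k + 1/h_2   (convection - conduction - convection in series)
def overall_u(h_inside, h_outside, wall_thickness_m, wall_conductivity):
    resistance = 1.0 / h_inside + wall_thickness_m / wall_conductivity + 1.0 / h_outside
    return 1.0 / resistance

# Example numbers (illustrative only): still indoor air, 0.2 m of brick, windy outdoor air.
U = overall_u(h_inside=8.0, h_outside=25.0, wall_thickness_m=0.2, wall_conductivity=0.7)
q_per_m2 = U * (20.0 - 0.0)   # heat flux in W/m2 for a 20 K temperature difference

# 1/U is the total thermal resistance per unit area, i.e. the R-value discussed below.
print(f"U = {U:.2f} W/(m2 K), R = {1/U:.2f} m2 K/W, q = {q_per_m2:.0f} W/m2")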
In the walls of buildings the above formula can be used to derive the formula commonly used to calculate the heat through building components. Architects and engineers call the resulting values either the U-Value or the R-Value of a construction assembly like a wall. Each type of value (R or U) are related as the inverse of each other such that R-Value = 1/U-Value and both are more fully understood through the concept of an overall heat transfer coefficient described in lower section of this document. Convective heat transfer Correlations Although convective heat transfer can be derived analytically through dimensional analysis, exact analysis of the boundary layer, approximate integral analysis of the boundary layer and analogies between energy and momentum transfer, these analytic approaches may not offer practical solutions to all problems when there are no mathematical models applicable. As such, many correlations were developed by various authors to estimate the convective heat transfer coefficient in various cases including natural convection, forced convection for internal flow and forced convection for external flow. These empirical correlations are presented for their particular geometry and flow conditions. As the fluid properties are temperature dependent, they are evaluated at the film temperature , which is the average of the surface and the surrounding bulk temperature, . Natural convection External flow, Vertical plane Churchill and Chu correlation for natural convection adjacent to vertical planes. NuL applies to all fluids for both laminar and turbulent flows. L is the characteristic length with respect to the direction of gravity, and RaL is the Rayleigh Number with respect to this length. For laminar flows in the range of , the following equation can be further improved. External flow, Vertical cylinders For cylinders with their axes vertical, the expressions for plane surfaces can be used provided the curvature effect is not too significant. This represents the limit where boundary layer thickness is small relative to cylinder diameter D. The correlations for vertical plane walls can be used when where is the Grashof number. External flow, Horizontal plates W.H. McAdams suggested the following correlations. The induced buoyancy will be different depending upon whether the hot surface is facing up or down. For a hot surface facing up or a cold surface facing down, For a hot surface facing down or a cold surface facing up, The length is the ratio of the plate surface area to perimeter. If the plane surface is inclined at an angle θ, the equations for vertical plane by Churchill and Chu may be used for θ up to . When boundary layer flow is laminar, the gravitational constant g is replaced with g cosθ for calculating the Ra in the equation for laminar flow External flow, Horizontal cylinder For cylinders of sufficient length and negligible end effects, Churchill and Chu has the following correlation for External flow, Spheres For spheres, T. Yuge has the following correlation. for Pr≃1 and Forced convection Internal flow, Laminar flow Sieder and Tate has the following correlation for laminar flow in tubes where D is the internal diameter, μ_b is the fluid viscosity at the bulk mean temperature, μ_w is the viscosity at the tube wall surface temperature. Internal flow, Turbulent flow The Dittus-Boelter correlation (1930) is a common and particularly simple correlation useful for many applications. 
This correlation is applicable when forced convection is the only mode of heat transfer; i.e., there is no boiling, condensation, significant radiation, etc. The accuracy of this correlation is anticipated to be ±15%. For a fluid flowing in a straight circular pipe with a Reynolds number between 10 000 and 120 000 (in the turbulent pipe flow range), when the fluid's Prandtl number is between 0.7 and 120, for a location far from the pipe entrance (more than 10 pipe diameters; more than 50 diameters according to many authors) or other flow disturbances, and when the pipe surface is hydraulically smooth, the heat transfer coefficient between the bulk of the fluid and the pipe surface can be expressed as: - - thermal conductivity of the bulk fluid - - - Hydraulic diameter - Nu - Nusselt number - (Dittus-Boelter correlation) - Pr - Prandtl number - Re - Reynolds number - n = 0.4 for heating (wall hotter than the bulk fluid) and 0.33 for cooling (wall cooler than the bulk fluid). The fluid properties necessary for the application of this equation are evaluated at the bulk temperature thus avoiding iteration Forced convection, External flow In analyzing the heat transfer associated with the flow past the exterior surface of a solid, the situation is complicated by phenomena such as boundary layer separation. Various authors have correlated charts and graphs for different geometries and flow conditions. For Flow parallel to a Plane Surface, where x is the distance from the edge and L is the height of the boundary layer, a mean Nusselt number can be calculated using the Colburn analogy. Thom correlation There exist simple fluid-specific correlations for heat transfer coefficient in boiling. The Thom correlation is for flow boiling of water (subcooled or saturated at pressures up to about 20 MPa) under conditions where the nucleate boiling contribution predominates over forced convection. This correlation is useful for rough estimation of expected temperature difference given the heat flux: - is the wall temperature elevation above the saturation temperature, K - q is the heat flux, MW/m2 - P is the pressure of water, MPa Note that this empirical correlation is specific to the units given. Heat transfer coefficient of pipe wall The resistance to the flow of heat by the material of pipe wall can be expressed as a "heat transfer coefficient of the pipe wall". However, one needs to select if the heat flux is based on the pipe inner or the outer diameter. where k is the effective thermal conductivity of the wall material and x is the wall thickness. If the above assumption does not hold, then the wall heat transfer coefficient can be calculated using the following expression: where di and do are the inner and outer diameters of the pipe, respectively. The thermal conductivity of the tube material usually depends on temperature; the mean thermal conductivity is often used. Combining heat transfer coefficients For two or more heat transfer processes acting in parallel, heat transfer coefficients simply add: For two or more heat transfer processes connected in series, heat transfer coefficients add inversely: For example, consider a pipe with a fluid flowing inside. The rate of heat transfer between the bulk of the fluid inside the pipe and the pipe external surface is: - q = heat transfer rate (W) - h = heat transfer coefficient (W/(m2·K)) - t = wall thickness (m) - k = wall thermal conductivity (W/m·K) - A = area (m2) - = difference in temperature. 
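To tie the pipe example above to the Dittus-Boelter correlation described earlier in this section, the sketch below first estimates the inside film coefficient using the correlation in its standard form (Nu = 0.023 Re^0.8 Pr^n) and then combines it in series with the wall conduction term. The water properties, pipe dimensions, and wall data are illustrative assumptions, not values from the article, and the correlation only applies within the Reynolds number, Prandtl number, and entry-length limits quoted above.

import math

def dittus_boelter_h(re, pr, k_fluid, d_hydraulic, heating=True):
    n = 0.4 if heating else 0.33          # exponent values as given in the text
    nu = 0.023 * re**0.8 * pr**n          # Nusselt number
    return nu * k_fluid / d_hydraulic     # h = Nu * k / D

# Illustrative values roughly representative of warm water in a 25 mm tube.
re, pr = 50_000, 4.0
k_water = 0.62        # W/(m K)
d = 0.025             # m
h_inside = dittus_boelter_h(re, pr, k_water, d, heating=True)

# Series combination per unit area with a 2 mm steel wall (k ~ 45 W/(m K)),
# ignoring the film on the outside of the pipe for simplicity.
t_wall, k_wall = 0.002, 45.0
u = 1.0 / (1.0 / h_inside + t_wall / k_wall)

print(f"h_inside ~ {h_inside:.0f} W/(m2 K), U (inside film + wall only) ~ {u:.0f} W/(m2 K)")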
Overall heat transfer coefficient The overall heat transfer coefficient is a measure of the overall ability of a series of conductive and convective barriers to transfer heat. It is commonly applied to the calculation of heat transfer in heat exchangers, but can be applied equally well to other problems. For the case of a heat exchanger, can be used to determine the total heat transfer between the two streams in the heat exchanger by the following relationship: - = heat transfer rate (W) - = overall heat transfer coefficient (W/(m²·K)) - = heat transfer surface area (m2) - = log mean temperature difference (K) The overall heat transfer coefficient takes into account the individual heat transfer coefficients of each stream and the resistance of the pipe material. It can be calculated as the reciprocal of the sum of a series of thermal resistances (but more complex relationships exist, for example when heat transfer takes place by different routes in parallel): - R = Resistance(s) to heat flow in pipe wall (K/W) - Other parameters are as above. The heat transfer coefficient is the heat transferred per unit area per kelvin. Thus area is included in the equation as it represents the area over which the transfer of heat takes place. The areas for each flow will be different as they represent the contact area for each fluid side. The thermal resistance due to the pipe wall is calculated by the following relationship: - x = the wall thickness (m) - k = the thermal conductivity of the material (W/(m·K)) - A = the total area of the heat exchanger (m2) This represents the heat transfer by conduction in the pipe. As mentioned earlier in the article the convection heat transfer coefficient for each stream depends on the type of fluid, flow properties and temperature properties. Some typical heat transfer coefficients include: - Air - h = 10 to 100 W/(m2K) - Water - h = 500 to 10,000 W/(m2K) Thermal resistance due to fouling deposits Surface coatings can build on heat transfer surfaces during heat exchanger operation due to fouling. These add extra thermal resistance to the wall and may noticeably decrease the overall heat transfer coefficient and thus performance. (Fouling can also cause other problems.) The additional thermal resistance due to fouling can be found by comparing the overall heat transfer coefficient determined from laboratory readings with calculations based on theoretical correlations. They can also be evaluated from the development of the overall heat transfer coefficient with time (assuming the heat exchanger operates under otherwise identical conditions). This is commonly applied in practice, e.g. The following relationship is often used: - = overall heat transfer coefficient based on experimental data for the heat exchanger in the "fouled" state, - = overall heat transfer coefficient based on calculated or measured ("clean heat exchanger") data, - = thermal resistance due to fouling, See also - Convective heat transfer - Heat sink - Churchill-Bernstein Equation - Heat pump - Heisler Chart - Thermal conductivity - Fourier number - Nusselt number - James R. Welty; Charles E. Wicks; Robert E. Wilson; Gregory L. Rorrer., "Fundamentals of Momentum, Heat and Mass transfer" 5th edition, John Wiley and Sons - S.S. Kutateladze and V.M. Borishanskii, A Concise Encyclopedia of Heat Transfer, Pergamon Press, 1966. - F. Kreith (editor), "The CRC Handbook of Thermal Engineering", CRC Press, 2000. - W.Rohsenow, J.Hartnet, Y.Cho, "Handbook of Heat Transfer", 3rd edition, McGraw-Hill, 1998. 
- This relationship is similar to the harmonic mean; however, note that it is not multiplied by the number n of terms.
- Coulson and Richardson, "Chemical Engineering", Volume 1, Elsevier, 2000
- Turner C.W.; Klimas S.J.; Brideau M.G., "Thermal resistance of steam-generator tube deposits under single-phase forced convection and flow-boiling heat transfer", Canadian Journal of Chemical Engineering, 2000, vol. 78, No. 1, pp. 53-60
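As a closing numerical illustration of the series-resistance relationships above, including the fouling allowance, the following Python sketch combines an assumed inner film coefficient, a plane-wall conduction term, an outer film coefficient, and an assumed fouling resistance into an overall coefficient U. It simplifies the general expression by treating the wall as a thin plane layer with equal inner and outer areas; all numbers are placeholders chosen only to keep the example self-contained.

# Overall heat transfer coefficient for resistances in series (per unit area):
#   1/U = 1/h_inner + x/k_wall + 1/h_outer + R_fouling
# All input values below are illustrative assumptions, not data from the article.

def overall_u(h_inner, h_outer, wall_thickness, k_wall, r_fouling=0.0):
    """Return U in W/(m^2*K), treating the wall as a thin plane layer."""
    resistance = 1.0 / h_inner + wall_thickness / k_wall + 1.0 / h_outer + r_fouling
    return 1.0 / resistance

u_clean = overall_u(h_inner=4000.0, h_outer=800.0,
                    wall_thickness=0.002, k_wall=16.0)        # "clean" exchanger
u_fouled = overall_u(h_inner=4000.0, h_outer=800.0,
                     wall_thickness=0.002, k_wall=16.0,
                     r_fouling=2e-4)                          # assumed fouling layer

print(f"U (clean)  = {u_clean:.0f} W/(m^2*K)")
print(f"U (fouled) = {u_fouled:.0f} W/(m^2*K)")
print(f"Implied fouling resistance = {1/u_fouled - 1/u_clean:.2e} m^2*K/W")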
http://en.wikipedia.org/wiki/Heat_transfer_coefficient
Measurements of AC magnitude So far we know that AC voltage alternates in polarity and AC current alternates in direction. We also know that AC can alternate in a variety of different ways, and by tracing the alternation over time we can plot it as a “waveform.” We can measure the rate of alternation by measuring the time it takes for a wave to evolve before it repeats itself (the “period”), and express this as cycles per unit time, or “frequency.” In music, frequency is the same as pitch, which is the essential property distinguishing one note from another. However, we encounter a measurement problem if we try to express how large or small an AC quantity is. With DC, where quantities of voltage and current are generally stable, we have little trouble expressing how much voltage or current we have in any part of a circuit. But how do you grant a single measurement of magnitude to something that is constantly changing? One way to express the intensity, or magnitude (also called the amplitude), of an AC quantity is to measure its peak height on a waveform graph. This is known as the peak or crest value of an AC waveform: Figure below Peak voltage of a waveform. Another way is to measure the total height between opposite peaks. This is known as the peak-to-peak (P-P) value of an AC waveform: Figure below Peak-to-peak voltage of a waveform. Unfortunately, either one of these expressions of waveform amplitude can be misleading when comparing two different types of waves. For example, a square wave peaking at 10 volts is obviously a greater amount of voltage for a greater amount of time than a triangle wave peaking at 10 volts. The effects of these two AC voltages powering a load would be quite different: Figure below A square wave produces a greater heating effect than the same peak voltage triangle wave. One way of expressing the amplitude of different waveshapes in a more equivalent fashion is to mathematically average the values of all the points on a waveform's graph to a single, aggregate number. This amplitude measure is known simply as the average value of the waveform. If we average all the points on the waveform algebraically (that is, to consider their sign, either positive or negative), the average value for most waveforms is technically zero, because all the positive points cancel out all the negative points over a full cycle: Figure below The average value of a sinewave is zero. This, of course, will be true for any waveform having equal-area portions above and below the “zero” line of a plot. However, as a practical measure of a waveform's aggregate value, “average” is usually defined as the mathematical mean of all the points' absolute values over a cycle. In other words, we calculate the practical average value of the waveform by considering all points on the wave as positive quantities, as if the waveform looked like this: Figure below Waveform seen by AC “average responding” meter. Polarity-insensitive mechanical meter movements (meters designed to respond equally to the positive and negative half-cycles of an alternating voltage or current) register in proportion to the waveform's (practical) average value, because the inertia of the pointer against the tension of the spring naturally averages the force produced by the varying voltage/current values over time. 
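The difference between the algebraic average (zero) and the practical, rectified average is easy to see numerically. The following Python sketch is a minimal illustration, assuming a unit-amplitude sine wave sampled over exactly one cycle.

import math

N = 10000
# Sample one full cycle of a unit-amplitude sine wave.
samples = [math.sin(2 * math.pi * n / N) for n in range(N)]

algebraic_avg = sum(samples) / N                        # positive and negative halves cancel
practical_avg = sum(abs(s) for s in samples) / N        # absolute values, as an
                                                        # "average responding" meter sees them

print(f"Algebraic average: {algebraic_avg:.4f}")   # ~0
print(f"Practical average: {practical_avg:.4f}")   # ~0.6366 of peak (2/pi)

The practical average comes out near 0.637 of the peak, the sine-wave figure used later in this section.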
Conversely, polarity-sensitive meter movements vibrate uselessly if exposed to AC voltage or current, their needles oscillating rapidly about the zero mark, indicating the true (algebraic) average value of zero for a symmetrical waveform. When the “average” value of a waveform is referenced in this text, it will be assumed that the “practical” definition of average is intended unless otherwise specified. Another method of deriving an aggregate value for waveform amplitude is based on the waveform's ability to do useful work when applied to a load resistance. Unfortunately, an AC measurement based on work performed by a waveform is not the same as that waveform's “average” value, because the power dissipated by a given load (work performed per unit time) is not directly proportional to the magnitude of either the voltage or current impressed upon it. Rather, power is proportional to the square of the voltage or current applied to a resistance (P = E2/R, and P = I2R). Although the mathematics of such an amplitude measurement might not be straightforward, the utility of it is. Consider a bandsaw and a jigsaw, two pieces of modern woodworking equipment. Both types of saws cut with a thin, toothed, motor-powered metal blade to cut wood. But while the bandsaw uses a continuous motion of the blade to cut, the jigsaw uses a back-and-forth motion. The comparison of alternating current (AC) to direct current (DC) may be likened to the comparison of these two saw types: Figure below Bandsaw-jigsaw analogy of DC vs AC. The problem of trying to describe the changing quantities of AC voltage or current in a single, aggregate measurement is also present in this saw analogy: how might we express the speed of a jigsaw blade? A bandsaw blade moves with a constant speed, similar to the way DC voltage pushes or DC current moves with a constant magnitude. A jigsaw blade, on the other hand, moves back and forth, its blade speed constantly changing. What is more, the back-and-forth motion of any two jigsaws may not be of the same type, depending on the mechanical design of the saws. One jigsaw might move its blade with a sine-wave motion, while another with a triangle-wave motion. To rate a jigsaw based on its peak blade speed would be quite misleading when comparing one jigsaw to another (or a jigsaw with a bandsaw!). Despite the fact that these different saws move their blades in different manners, they are equal in one respect: they all cut wood, and a quantitative comparison of this common function can serve as a common basis for which to rate blade speed. Picture a jigsaw and bandsaw side-by-side, equipped with identical blades (same tooth pitch, angle, etc.), equally capable of cutting the same thickness of the same type of wood at the same rate. We might say that the two saws were equivalent or equal in their cutting capacity. Might this comparison be used to assign a “bandsaw equivalent” blade speed to the jigsaw's back-and-forth blade motion; to relate the wood-cutting effectiveness of one to the other? This is the general idea used to assign a “DC equivalent” measurement to any AC voltage or current: whatever magnitude of DC voltage or current would produce the same amount of heat energy dissipation through an equal resistance:Figure below An RMS voltage produces the same heating effect as a the same DC voltage In the two circuits above, we have the same amount of load resistance (2 Ω) dissipating the same amount of power in the form of heat (50 watts), one powered by AC and the other by DC. 
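The heating equivalence in the circuit comparison above can be checked with a few lines of code. The Python sketch below is only an illustration: it averages the instantaneous power of a sine wave whose RMS value is 10 volts across the same 2 Ω load and compares the result with a 10 volt DC source.

import math

R = 2.0                          # load resistance, ohms (from the comparison above)
V_RMS = 10.0                     # RMS value of the AC source
V_PEAK = V_RMS * math.sqrt(2)    # peak of the equivalent sine wave (~14.14 V)

N = 100000
# Instantaneous power p(t) = v(t)^2 / R, averaged over one full cycle
avg_ac_power = sum((V_PEAK * math.sin(2 * math.pi * n / N)) ** 2 / R
                   for n in range(N)) / N

dc_power = 10.0 ** 2 / R         # a 10 volt DC battery across the same resistor

print(f"Average AC power: {avg_ac_power:.1f} W")   # ~50 W
print(f"DC power:         {dc_power:.1f} W")       # 50 W

Both sources deliver 50 watts to the 2 Ω load, which is exactly what the "10 volt RMS" label asserts.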
Because the AC voltage source pictured above is equivalent (in terms of power delivered to a load) to a 10 volt DC battery, we would call this a “10 volt” AC source. More specifically, we would denote its voltage value as being 10 volts RMS. The qualifier “RMS” stands for Root Mean Square, the algorithm used to obtain the DC equivalent value from points on a graph (essentially, the procedure consists of squaring all the positive and negative points on a waveform graph, averaging those squared values, then taking the square root of that average to obtain the final answer). Sometimes the alternative terms equivalent or DC equivalent are used instead of “RMS,” but the quantity and principle are both the same. RMS amplitude measurement is the best way to relate AC quantities to DC quantities, or other AC quantities of differing waveform shapes, when dealing with measurements of electric power. For other considerations, peak or peak-to-peak measurements may be the best to employ. For instance, when determining the proper size of wire (ampacity) to conduct electric power from a source to a load, RMS current measurement is the best to use, because the principal concern with current is overheating of the wire, which is a function of power dissipation caused by current through the resistance of the wire. However, when rating insulators for service in high-voltage AC applications, peak voltage measurements are the most appropriate, because the principal concern here is insulator “flashover” caused by brief spikes of voltage, irrespective of time. Peak and peak-to-peak measurements are best performed with an oscilloscope, which can capture the crests of the waveform with a high degree of accuracy due to the fast action of the cathode-ray-tube in response to changes in voltage. For RMS measurements, analog meter movements (D'Arsonval, Weston, iron vane, electrodynamometer) will work so long as they have been calibrated in RMS figures. Because the mechanical inertia and dampening effects of an electromechanical meter movement makes the deflection of the needle naturally proportional to the average value of the AC, not the true RMS value, analog meters must be specifically calibrated (or mis-calibrated, depending on how you look at it) to indicate voltage or current in RMS units. The accuracy of this calibration depends on an assumed waveshape, usually a sine wave. Electronic meters specifically designed for RMS measurement are best for the task. Some instrument manufacturers have designed ingenious methods for determining the RMS value of any waveform. One such manufacturer produces “True-RMS” meters with a tiny resistive heating element powered by a voltage proportional to that being measured. The heating effect of that resistance element is measured thermally to give a true RMS value with no mathematical calculations whatsoever, just the laws of physics in action in fulfillment of the definition of RMS. The accuracy of this type of RMS measurement is independent of waveshape. For “pure” waveforms, simple conversion coefficients exist for equating Peak, Peak-to-Peak, Average (practical, not algebraic), and RMS measurements to one another: Figure below Conversion factors for common waveforms. In addition to RMS, average, peak (crest), and peak-to-peak measures of an AC waveform, there are ratios expressing the proportionality between some of these fundamental measurements. The crest factor of an AC waveform, for instance, is the ratio of its peak (crest) value divided by its RMS value. 
The form factor of an AC waveform is the ratio of its RMS value divided by its average value. Square-shaped waveforms always have crest and form factors equal to 1, since the peak is the same as the RMS and average values. Sinusoidal waveforms have an RMS value of 0.707 (the reciprocal of the square root of 2) and a form factor of 1.11 (0.707/0.636). Triangle- and sawtooth-shaped waveforms have RMS values of 0.577 (the reciprocal of square root of 3) and form factors of 1.15 (0.577/0.5). Bear in mind that the conversion constants shown here for peak, RMS, and average amplitudes of sine waves, square waves, and triangle waves hold true only for pure forms of these waveshapes. The RMS and average values of distorted waveshapes are not related by the same ratios: Figure below Arbitrary waveforms have no simple conversions. This is a very important concept to understand when using an analog D'Arsonval meter movement to measure AC voltage or current. An analog D'Arsonval movement, calibrated to indicate sine-wave RMS amplitude, will only be accurate when measuring pure sine waves. If the waveform of the voltage or current being measured is anything but a pure sine wave, the indication given by the meter will not be the true RMS value of the waveform, because the degree of needle deflection in an analog D'Arsonval meter movement is proportional to the average value of the waveform, not the RMS. RMS meter calibration is obtained by “skewing” the span of the meter so that it displays a small multiple of the average value, which will be equal to be the RMS value for a particular waveshape and a particular waveshape only. Since the sine-wave shape is most common in electrical measurements, it is the waveshape assumed for analog meter calibration, and the small multiple used in the calibration of the meter is 1.1107 (the form factor: 0.707/0.636: the ratio of RMS divided by average for a sinusoidal waveform). Any waveshape other than a pure sine wave will have a different ratio of RMS and average values, and thus a meter calibrated for sine-wave voltage or current will not indicate true RMS when reading a non-sinusoidal wave. Bear in mind that this limitation applies only to simple, analog AC meters not employing “True-RMS” technology. - The amplitude of an AC waveform is its height as depicted on a graph over time. An amplitude measurement can take the form of peak, peak-to-peak, average, or RMS quantity. - Peak amplitude is the height of an AC waveform as measured from the zero mark to the highest positive or lowest negative point on a graph. Also known as the crest amplitude of a wave. - Peak-to-peak amplitude is the total height of an AC waveform as measured from maximum positive to maximum negative peaks on a graph. Often abbreviated as “P-P”. - Average amplitude is the mathematical “mean” of all a waveform's points over the period of one cycle. Technically, the average amplitude of any waveform with equal-area portions above and below the “zero” line on a graph is zero. However, as a practical measure of amplitude, a waveform's average value is often calculated as the mathematical mean of all the points' absolute values (taking all the negative values and considering them as positive). For a sine wave, the average value so calculated is approximately 0.637 of its peak value. - “RMS” stands for Root Mean Square, and is a way of expressing an AC quantity of voltage or current in terms functionally equivalent to DC. 
For example, 10 volts AC RMS is the amount of voltage that would produce the same amount of heat dissipation across a resistor of given value as a 10 volt DC power supply. Also known as the “equivalent” or “DC equivalent” value of an AC voltage or current. For a sine wave, the RMS value is approximately 0.707 of its peak value. - The crest factor of an AC waveform is the ratio of its peak (crest) to its RMS value. - The form factor of an AC waveform is the ratio of its RMS value to its average value. - Analog, electromechanical meter movements respond proportionally to the average value of an AC voltage or current. When RMS indication is desired, the meter's calibration must be “skewed” accordingly. This means that the accuracy of an electromechanical meter's RMS indication is dependent on the purity of the waveform: whether it is the exact same waveshape as the waveform used in calibrating.
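The summary figures above (0.707, 0.637, the 1.11 form factor, and the calibration error of average-responding meters on non-sine waves) can all be reproduced numerically. The Python sketch below is a minimal illustration using ideal unit-amplitude waveforms; the 1.1107 scaling mimics an analog meter calibrated for sine-wave RMS, and the helper names are invented for the example.

import math

N = 100000
t = [n / N for n in range(N)]    # one cycle, normalized time 0..1

def sine(x):     return math.sin(2 * math.pi * x)
def square(x):   return 1.0 if x < 0.5 else -1.0
def triangle(x): return 4 * x - 1 if x < 0.5 else 3 - 4 * x   # unit-amplitude triangle

def rms(w):      return math.sqrt(sum(w(x) ** 2 for x in t) / N)
def avg_abs(w):  return sum(abs(w(x)) for x in t) / N

for name, w in [("sine", sine), ("square", square), ("triangle", triangle)]:
    r, a = rms(w), avg_abs(w)
    meter_reading = 1.1107 * a   # average-responding meter scaled for sine waves
    # Peak is 1 for all three shapes, so crest factor = 1/RMS here.
    print(f"{name:8s}  RMS={r:.3f}  avg={a:.3f}  crest={1/r:.3f}  "
          f"form={r/a:.3f}  analog meter reads {meter_reading:.3f}")

The output shows the meter reading the sine wave correctly, the square wave about 11% high, and the triangle wave a few percent low, just as the discussion of calibration error predicts.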
http://www.allaboutcircuits.com/vol_2/chpt_1/3.html
In this chapter fluids at rest will be studied. Mass density, weight density, pressure, fluid pressure, buoyancy, and Pascal's principle will be discussed. In the following, the symbol ( ρ ) is pronounced " rho ." Example 1 : The mass density of steel is 7.8 gr /cm3. A chunk of steel has a volume of 141cm3. Determine (a) its mass in grams and (b) its weight density in N/m3. Solve before looking at the solution. Use horizontal fraction bars. Solution: (a) Since ρ = M / V ; M = ρV ; M = (7.8 gr / cm3) (141 cm3 ) ; M = 1100 grams. Before going to Part (b), let's first convert (gr/cm3) to its Metric version (kg/m3). Use horizontal fraction bars. 7.8 gr / cm3 = 7.8 (0.001kg) / (0.01m)3 = 7800 kg/m3. 1 kg is equal to 1000gr. This means that 1 gr is 0.001kg as is used above. Also, 1 m is 100cm. This means that 1cm is 0.01m. Cubing each, results in : 1cm3 = 0.000001m3 as is used above. Now, let's solve Part (b). (b) D = ρg ; D = [7800 kg /m3] [ 9.8 m/s2] = 76000 N /m3. Not only you should write Part (b) with horizontal fraction bars, but also check the correctness of the units as well. Example 2 : A piece of aluminum weighs 31.75N. Determine (a) its mass and (b) its volume if the mass density of aluminum is 2.7gr/cm3. Solution: (a) w = Mg ; M = w / g ; M = 31.75N / [9.8 m/s2] ; M = 3.2 kg ; M = 3200 grams. (b) ρ = M / V ; V = M / ρ ; V = 3200gr / [2.7 gr /cm3] = 1200 cm3. Example 3 : The mass densities of gold and copper are 19.3 gr/cm3 and 8.9 gr/cm3, respectively. A piece of gold that is known to be an alloy of gold and copper has a mass of 7.55kg and a volume of 534 cm3. Calculate the mass percentage of gold in the alloy assuming that the volume of the alloy is equal to the volume of copper plus the volume of gold. In other words, no volume is lost or gained as a result of the alloying process. Do your best to solve it by yourself first. Solution: Two equations can definitely be written down. The sum of masses as well as the sum of volumes are given. The formula M = ρV is applicable to both metals. Mgold = ρgoldVgold and Mcopper = ρcopperVcopper . Look at the following as a system of two equations in two unknowns: |Mg + Mc = 7550 gr||ρgVg + ρcVc = 7550||19.3 Vg+ 8.9Vc = 7550||19.3 Vg + 8.9Vc =7550| |Vg + Vc = 534 cm3||Vg + Vc = 534||Vg + Vc = 534||Vg = 534 - Vc| Substituting for Vg in the first equation yields: 19.3 (534 - Vc) + 8.9 Vc = 7550 ; 10306.2 -10.4Vc = 7550 ; 2756.2 = 10.4Vc ; Vc = 265 cm3. Since Vg = 534 - Vc ; therefore, Vg = 534 - 265 = 269 cm3. The masses are: Mg = ρgVg ; Mg = (19.3 gr/cm3) ( 269 cm3 ) = 5190 gr ; Mc = 2360 gr. The mass percentage of gold in the alloy (7750gr) is Mgold / Malloy = (5190/7550) = 0.687 = 68.7 % Karat means the number of portions out of 24 portions. [68.7 / 100] = [ x / 24] ; x = 16.5 karat. Pressure is defined as force per unit area. Let's use lower case p for pressure; therefore, p = F / A. The SI unit for pressure is N/m2 called " Pascal." The American unit is lbf / ft2. Two useful commercial units are: kgf / cm2 and lbf / in2 or psi. Example 4: Calculate the average pressure that a 120-lbf table exerts on the floor by each of its four legs if the cross-sectional area of each leg is 1.5 in2. Solution: p = F / A ; p = 120lbf / (4x 1.5 in2) = 20 lbf / in2 or 20 psi. Example 5: (a) Calculate the weight of a 102-gram mass piece of metal. If this metal piece is rolled to a square sheet that is 1.0m on each side, and then spread over the same size (1.0m x 1.0m ) table, (b) what pressure would it exert on the square table? 
Solution: (a) w = Mg ; w = (0.102 kg)(9.8 m/s2) ; w = 1.0 N (b) p = F / A ; p = 1.0N / (1.0m x 1.0m) ; p = 1.0 N/m2 ; p = 1.0 Pascal (1.0Pa) As you may have noticed, 1 Pa is a small amount of pressure. The atmospheric pressure is 101,300 Pa. We may say that the atmospheric pressure is roughly 100,000 Pa, or 100kPa We will calculate this later. Fluid Pressure: Both liquids and gases are considered fluids. The study of fluids at rest is called Fluid Statics. The pressure in stationary fluids depends on weight density, D, of the fluid and depth, h, at which pressure is to be calculated. Of course, as we go deeper in a fluid, its density increases slightly because at lower points, there are more layers of fluid pressing down causing the fluid to be denser. For liquids, the variation of density with depth is very small for relatively small depths and may be neglected. This is because of the fact that liquids are incompressible. For gases, the density increase with depth becomes significant and may not be neglected. Gases are called compressible fluids. If we assume that the density of a fluid remains fairly constant for relatively small depths, the formula for fluid pressure my be written as: p = hD or p = h ρg where ρ is the mass density and D is the weight density of the fluid. Example 6: Calculate (a) the pressure due to just water at a depth of 15.0m below lake surface. (b) What is the total pressure at that depth if the atmospheric pressure is 101kPa? (c) Also find the total external force on a spherical research chamber which external diameter is 5.0m. Water has a mass density of ρ = 1000 kg/m3. Solution: (a) p = hD ; p = h ρg ; p = (15.0m)(1000 kg/m3)(9.8 m/s2) = 150,000 N /m2 or Pa. (b) [p total] external = p liquid + p atmosphere ; [p total] external = 150,000Pa + 101,000Pa = 250,000Pa. (c) p = F / A ; solving for F, yields: F = pA ; Fexternal = (250,000 N/m2)(4π)(2.5m)2 = 20,000,000 N. F = 2.0x107N ( How may millions?!) Chapter 11 Test Yourself 1: 1) Average mass density, ρ is defined as (a) mass of unit volume (b) mass per unit volume (c) a &b. click here 2) Average weight density, D is defined as (a) weight per unit volume (b) mass of unit volume times g (c) both a & b. 3) D = ρg is correct because (a) w = Mg (b) D is weight density and ρ is mass density (c) both a & b. 4) 4.0cm3 of substance A has a mass of 33.0grams, and 8.0cm3 of substance B has a mass of 56.0 grams. (a) A is denser than B (b) B is denser than A (c) Both A and B have the same density. click here Problem: 1gram was originally defined to be the mass of 1cm3 of pure water. Answer the following question by first doing the calculations. Make sure to write down neatly with horizontal fraction bars. click here 5) On this basis, one suitable unit for the mass density of water is (a) 1 cm3/gr (b) 1 gr/cm3 (c) both a & b. 6) We know that 1kg = 1000gr. We may say that (a) 1gr = (1/1000)kg (b) 1gr = 0.001kg (c) both a & b. 7) We know that 1m = 100cm. We may say that (a) 1m3 = 100cm3 (b) 1m3 = 10000cm3 (c) 1m3 = 1000,000cm3. 8) We know that 1cm = 0.01m. We may write (a) 1cm3 = 0.000001m3 (b) 1cm3 = 0.001m3 (c) 1cm3 = 0.01m3. 9) Converting gr/cm3 to kg/m3 yields: (a)1gr/cm3 = 1000 kg/m3 (b)1gr/cm3 = 100 kg/m3 (c)1gr/cm3 = 10 kg/m3. 10) From Q9, the mass density of water is also (a) 1000 kg/m3 (b) 1 ton/m3, because 1ton=1000kg (c) both a & b. 11) Aluminum is 2.7 times denser than water. Since ρwater = 1000kg/m3 ; therefore, ρAlum. = (a) 2700kg/m3 (b) 27kg/m3 (c) 27000kg/m3. 
click here 12) Mercury has a mass density of 13.6 gr/cm3. In Metric units (kg/m3), its density is (a) 1360 kg/m3 (b) 13600 kg/m3 (c) 0.00136kg/m3. 13) The weight density of water is (a) 9.8 kg/m3 (b) 9800kg/m3 (c) 9800N/m3. click here 14) The volume of a piece of copper is 0.00247m3. Knowing that copper is 8.9 times denser than water, first find the mass density of copper in Metric units and then find the mass of the copper piece. Ans. : (a) 44kg (b) 22kg (c) 16kg. Problem: The weight of a gold sphere is 1.26N. The mass density of gold is ρgold = 19300kg/m3. 15) The weight density, D, of gold is (a) 1970 N/m3 (b) 189000 N/m3 (c) 100,000 N/m3. 16) The volume of the gold sphere is (a) 6.66x10-6m3 (b) 6.66cm3 (c) both a & b. click here 17) The radius of the gold sphere is (a) 1.167cm (b) 0.9523cm (c) 2.209cm. 18) Pressure is defined as (a) force times area (b) force per unit area (c) force per length. 19) The Metric unit for pressure is (a) N/m3 (b) N/cm3 (c) N/m2. click here 20) Pascal is the same thing as (a) lbf / ft2 (b) N/m2 (c) lbf / in2. 21) psi is (a) lbf / ft2 (b) N/m2 (c) lbm / in2 (d) none of a, b, or c. 22) A solid brick may be placed on a flat surface on three different sides that have three different surface areas. To create the greatest pressure it must be placed on its (a) largest side (b) smallest side (c) middle-size side. Problem: 113 grams is about 4.00 ounces. A 102 gram mass is 0.102kg. The weight of a 0.102kg mass is 1.00N. Verify this weight. If a 0.102gram piece of say copper that weighs 1.00N, is hammered or rolled to a flat sheet (1.00m by 1.00m), how thin would that be? May be one tenth of 1 mm? Note that a (1m) by (1m) rectangular sheet of metal may be viewed as a rectangular box which height or thickness is very small, like a sheet of paper. If you place your hand under such thin sheet of copper, do you hardly feel any pressure? Answer the following questions: 23) The weight density of copper that is 8.9 times denser than water is (a) 8900N/m2 (b) 1000N/m3 (c) 87220N/m3. 24) The volume of a 0.102kg or 1.00N piece (sheet) of copper is (a) 1.15x10-5m3 (b) 1.15x105m3 (c) 8900m3. 25) For a (1m)(1m) = 1m2 base area of the sheet, its height or thickness is (a) 1.15x10-5m (b) 1.15x105m (c) 8900m. 26) The small height (thickness) in Question 25 is (a) 0.0115mm (b) 0.0115cm (c) 890cm. 27) The pressure (force / area) or (weight / area) that the above sheet generates is (a) 1N/1m2 (b) 1 Pascal (c) both a & b. 28) Compared to pressures in water pipes or car tires, 1 Pascal of pressure is (a) a great pressure (b) a medium pressure (c) a quite small pressure. 29) The atmospheric pressure is roughly (a) 100Pa (b) 100,000 Pa (c) 100kPa (d) both b & c. 30) The atmospheric pressure is (a) 14.7 psi (b) 1.0 kgf/m2 (c) 1.0 kgf/cm2 (d) a & c. |Gravity pulls the air molecules around the Earth toward the Earth's center. This makes the air layers denser and denser as we move from outer space toward the Earth's surface. It is the weight of the atmosphere that causes the atmospheric pressure. The depth of the atmosphere is about 60 miles. If we go 60 miles above the Earth surface, air molecules become very scarce to where we might travel one meter and not collide with even a single molecule (a good vacuum!). Vacuum establishes the basis for absolute zero pressure. Any gas pressure measured with respect to vacuum is called " absolute pressure. 
"|| Calculation of the Atmospheric Pressure: The trick to calculate the atmospheric pressure is to place a 1-m long test tube filled with mercury inverted over a pot of mercury such that air can not get in the tube. Torricelli ( Italian) was the first to try this. The figure is shown above. In doing this, we will see that the mercury level drops to 76.0cm or 30.0 inches if the experiment is performed at ocean level. The top of the tube lacks air and does not build up air pressure. This device acts as a balance. If it is taken to the top of a high mountain where there is a smaller number of air layers above one's head, the mercury level goes down. This device can even be calibrated to measure elevation for us based on air pressure. The pressure that the 76.0-cm column of mercury generates is equal the pressure that the same diameter column of air generates but with a length of 60 miles ( from the Earth's surface all the way up to the no-air region). Using the formula for pressure ( p = F / A ), the pressure of the mercury column or the atmospheric pressure can be calculated as follows: patm = the mercury weight / the tube cross-sectional Area. ( Write using horiz. fraction bars). patm = (VHg)(DHg) / A = (A)(hHg)(DHg) / A = hHgDHg . Note that the tube's volume = VHg. = (base area) (height) = (A)(hHg.). patm = hHgDHg ( This further verifies the formula for for pressure in a fluid). In Torricelli's experiment, hHg = 76.0cm and DHg = 13.6 grf /cm3 ; therefore , patm = ( 76.0cm )( 13.6 grf /cm3 ) = 1033.6 grf / cm2 Converting grf to kgf results in patm = 1.0336 kgf / cm2 To 2 significant figures, this result is a coincidence: patm = 1.0 kgf /cm2. If you softly place a 2.2 lbf (or 1.0 kgf ) weight over your finger nail ( A = 1 cm2 almost), you will experience a pressure of 1.0 kgf / cm2 (somewhat painful) that is equivalent to the atmospheric pressure. The atmosphere is pressing with a force of 1 kgf = 9.8 N on every cm2 of our bodies and we are used to it. This pressure acts from all directions perpendicular to our bodies surfaces at any point. An astronaut working outside a space station must be in a very strong suit that can hold 1 atmosphere of pressure inside compared to the zero pressure outside and not explode. Example 7: Convert the atmospheric pressure from 1.0336 kgf / cm2 to lbf / in2 or psi. Solution: 1 kgf = 2.2 lbf and 1 in. = 2.54 cm. Convert and show that patm = 14.7 psi. Example 8: Convert the atmospheric pressure from 1.0336 kgf / cm2 to N / m2 or Pascals (Pa). Solution: 1 kgf = 9.8N and 1 m = 100 cm. Convert and show that patm = 101,300 Pa. Example 9: The surface area of an average size person is almost 1m2. Calculate the total force that the atmosphere exerts on such person. Solution: p = F / A ; F = pA ; F = ( 101,300 N/m2 )( 1 m2 ) = 100,000 N. F = ( 1.0336 kgf / cm2 )( 10,000 cm2 ) = 10,000 kgf = 10 ton force. Example 10: A submarine with a total outer area of 2200m2 is at a depth of 65.0m below ocean surface. The density of ocean water is 1030 kg/m3. Calculate (a) the pressure due to water at that depth, (b) the total external pressure at that depth, and (c) the total external force on it. Let g = 9.81 m/s2. Solution: (a) p = hD ; p = h ρg ; p = (65.0m)(1030 kg/m3)(9.81 m/s2) = 657,000 N /m2 or Pa. (b) [p total] external = p liquid + p atmosphere ; [ p total ]external = 657,000Pa + 101,000Pa = 758,000Pa. (c) p = F / A ; solving for F, yields: F = pA ; F = (758,000 N/m2)(2200m) = 1.67x109 N. 
Buoyancy, Archimedes' Principle: When a non-dissolving object is submerged in a fluid (liquid or gas), the fluid exerts an upward force onto the object that is called the buoyancy force (B). The magnitude of the buoyancy force is equal to the weight of displaced fluid. The formula for buoyancy is therefore, B = Vobject Dfluid Example 11: Calculate the downward force necessary to keep a 1.0-lbf basketball submerged under water knowing that its diameter is 1.0ft. The American unit for the weight density of water is Dwater = 62.4 lbf /ft3. Solution: The volume of the basketball (sphere) is: Vobject = (4/3) π R3 = (4/3)(3.14)(0.50 ft)3 = 0.523 ft3. The upward force (buoyancy) on the basketball is: B = Vobject Dfluid = (0.523 ft3)(62.4 lbf / ft3) = 33 lbf . Water pushes the basketball up with a force of magnitude 33 lbf while gravity pulls it down with a force of 1.0 lbf (its weight); therefore, a downward force of 32 lbf is needed to keep the basketball fully under water. The force diagram is shown below: A Good Link to Try: http://www.mhhe.com/physsci/physical/giambattista/fluids/fluids.html . Example12: Calculate the necessary upward force to keep a (5.0cm)(4.0cm)(2.0cm)-rectangular aluminum bar from sinking when submerged in water knowing that Dwater = 1 grf / cm3 and DAl = 2.7 grf / cm3. Solution: The volume of the bar is Vobject = (5.0cm)(4.0cm)(2.0cm) = 40cm3. The buoyancy force is: B = Vobject Dfluid = (40cm3)(1 grf / cm3) = 40grf. The weight of the bar in air is w = Vobject Dobject = (40cm3)(2.7 grf / cm3) = 110grf. Water pushes the bar up with a force of magnitude 40. grf while gravity pulls it down with 110grf ; therefore, an upward force of 70 grf is needed to keep the bar fully under water and to avoid it from sinking. The force diagram is shown below: Example13: A boat has a volume of 40.0m3 and a mass of 2.00 tons. What load will push 75.0% of its volume into water? Each metric ton is 1000 kg. Let g = 9.81 m/s2. Solution: Vobj = 0.750 x 40.0m3 = 30.0m3. B =Vobject Dfluid = (30.0m3)(1000 kg /m3)(9.81 m/s2) = 294,000N. w = Mg = (2.00 x 103 kg)(9.81 m/s2) = 19600N. F = B - w = 294,000N - 19600N = 274,000N. An important and useful principle in fluid statics is the " Pascal's Principle." Its statement is as follows: The pressure imposed at any point of a confined fluid transmits itself to all points of that fluid without significant losses. One application of the Pascal's principle is the mechanism in hydraulic jacks. As shown in the figure, a small force, f, applied to a small piston of area, a ,imposes a pressure onto the liquid (oil) equal to f/a. This pressure transmits throughout the oil as well as onto the internal boundaries of the jack specially under the big piston. On the big piston, the big load F, pushes down over the big area A. This pressure is F/A . The two pressures must be equal, according to Pascal's principle. We may write: f /a = F/A Although, for balance, the force that pushes down on the big piston is much greater in magnitude than the force that pushes down on the small piston; however, the small piston goes through a big displacement in order for the big piston to go through a small displacement. Example14: In a hydraulic jack the diameters of the small and big pistons are 2.00cm and 26.00cm respectively. A truck that weighs 33800N is to be lifted by the big piston. Find (a) the force that has to push the smaller piston down, and (b) the pressure under each piston. 
Solution: (a) a = π r2 = π (1.00cm)2 = 3.14 cm2 ; A = π R2 = π (13.00cm)2 = 530.66 cm2 f / a = F / A ; f / 3.14cm2 = 33800N / 530.66cm2 ; f = 200N (b) p = f /a = 63.7 N/cm2 ; p = F / A = 63.7 N/cm2. Chapter 11 Test Yourself 2: 1) In Torricelli's experiment of measuring the atmospheric pressure at the ocean level, the height of mercury in the tube is (a) 76.0cm (b) 7.6cm (c) 760mm (d) a & c. click here 2) The space above the tube in the Torricelli's experiment is (a) at regular pressure (b) almost vacuum (c) at a very small amount of mercury vapor pressure, because due to vacuum, a slight amount of mercury evaporates and creates minimal mercury vapor pressure (d) both b & c. 3) The pressure inside a stationary fluid (liquid or gas) depends on (a) mass density and depth (b) weight density and depth (c)depth only regardless of the fluid type. click here 4) A pressure gauge placed 50.0m below ocean surface measures (a) a higher (b) a lower (c) the same pressure compared to a gauge that is placed at the same depth in a lake. 5) The actual pressure at a certain depth in an ocean on a planet that has an atmosphere is equal to (a) just the liquid pressure (b) the liquid pressure + the atmospheric pressure (c) the atmospheric pressure only. click here 6) The formula that calculates the pressure at a certain depth in a fluid is (a) p = h ρ (b) p = hD (c) p = h ρg (d) both b & c. Problem: Mercury is a liquid metal that is 13.6 times denser than water. Answer the following questions: 7) The mass density of mercury is (a) 13600 kg/m3 (b) 13.6 ton / m3 (c) both a & b. click here 8) The weight density of mercury is (a) 130,000N/m3 (b) 1400N/m3 (c) 20,100N/m3. 9) In a mercury tank, the liquid pressure at a depth if 2.4m below mercury surface is (a) 213000N/m2 (b) 312000N/m3 (c) 312000N/m2. click here 10) In the previous question, the total pressure at that depth is (a) 412000 N/m3 (b) 212000N/m2 (c) 412000 N/m2. Problem: This problem shows the effect of depth due to liquid pressure. A long vertical and narrow steel pipe of 1.0cm in diameter is connected to a spherical barrel of internal diameter of 1.00m. The barrel is also made of steel and can withstand an internal total force of 4,00,000N! The barrel is gradually filled with water through the thin and long pipe on its top while allowing the air out of the tank. When the spherical part of the tank is full of water, further filling makes the water level in the thin pipe to go up fast (Refer to Problem 7 at the very end of this chapter for a suitable figure). As the water level goes up, it quickly builds up pressure. p = hD. If the pipe is 15.5m long, for example, answer Questions 11 through 14. 11) The liquid pressure at the center of the barrel is (a) 151900N/m2 (b) 156800N/m2 (c) 16000kg/m2. 12) The total internal area of the barrel is (a) 0.785m2. (b) 3.141m2. (a) 1.57m2. click here 13) The total force on the internal surface of the sphere is (a) 246000N (b) 123000N (c) 492000N. 14) Based on the results in the previous question, the barrel (a) withstands the pressure (b) does not withstand the pressure. click here 15) The liquid pressure at a depth of 10m ( 33ft ) below water on this planet is roughly (a) 200,000Pa (b) 100,000Pa (c) 300,000Pa. 16) Since the atmospheric pressure is also roughly 100,000 Pa, we may say that every 10m of water depth or height is equivalent to (a) 1 atmosphere of pressure (b) 2 atmospheres of pressure (c) 3 atmospheres of pressure. 
17) In the Torricelli's experiment, if the formula P = hD is used to calculate the pressure caused by 0.760m of mercury the value of atmospheric pressure becomes (a) 9800 N/m2 (b) 98000 N/m2 (c) 101,000 N/m2. Perform the calculation. ρmercury = 13600kg/m3. click here 18) To convert 101,000 N/m2 or the atmospheric pressure to lbf /in2 or psi, one may replace (N) by 0.224 lbf and (m) by 39.37 in. The result of the conversion is (a) 25.4ps (b) 14.7psi (c) 16.2psi. Perform the calculation. 19) To convert 101,000 N/m2 or the atmospheric pressure to kgf /cm2, one may replace (N) by 0.102 kgf and (m) by 100cm. The result of the conversion is (a) 1.0 kgf /cm2 (b) 2.0 kgf /cm2 (c) 3.0kgf /cm2. Perform the calculation. 20) Due to the atmospheric pressure, every cm2 of our bodies is under a force of (a) 1.0kgf (b) 9.8N (c) both a & b. click here 21) An example of an area approximately close to 1cm2 is the size of (a) a finger nail (b) a quarter (c) a dollar coin. 22) The formula that calculates the area of sphere is Asphere = (a) πr2 (b) 2πr2 (c) 4πr2. 23) The force due to liquid pressure on a 5.0m diameter spherical chamber that is at a depth of 40.0m below ocean surface is (a) 3.14x106N (b) 3.08x107N (c) 6.16x106N. click here 24) Buoyancy for a submerged object in a non-dissolving liquid is (a) the upward force that the liquid exerts on that object (b) equal to the mass of the displaced fluid (c) equal to the weight of the displaced fluid (d) a & c. 25) The direction of the buoyancy force is (a) always downward (b) always upward (c) sometimes upward and sometimes downward. click here 26) The buoyancy on a cube 0.080m on each side and fully submerged in water is (a) 5.02N (b) 63N (c) 0.512N. 27) If the cube in the previous question is made of aluminum ( ρ = 2700 kg/m3), it has a weight of (a) 13.5N (b) 170N (c) 0.189N. click here 28) The force necessary to keep the cube in the previous question from sinking in water is (a) 107N (b) 8.5N (c) 7.0N. Problem: A (12.0m)(50.0m)(8.0m-height)-barge has an empty mass of 1250 tons. For safety reasons and preventing it from sinking, only 6.0m of its height is allowed to go under water. Answer the following questions: 29) The total volume of the barge is (a) 480m3 (b) 60m3 (c) 4800m3. click here 30) The effective (safe) volume of the barge that can be submerged in water is (a) 3600m3 (b) 50m3 (c) 360m3. 31) The buoyancy force on the barge when submerged in water to its safe height is (a) 1.83x106N (b) 5.43x108N (c) 3.53x107N. click here 32) The safe load that the barge can carry is (a) Buoyancy + its empty weight (b) Buoyancy - its empty weight (c) Buoyancy - its volume. 33) The mass of the barge in kg is (a) 1.25x103 ton (b) 1.25x106 kg (c) a & b. 34) The weight of the barge in N is (a) 1.23x107 N (b) 2.26x107 N (c) neither a nor b. click here 35) The safe load in N that the barge can carry is (a) 3.41x107N (b) 2.3x107N (c) 2.53x107N. 36) The safe load in Metric tons is (a) 1370 ton (b) 2350 ton (a) 5000 ton. 37) According to Pascal's principle, a pressure imposed (a) on any fluid (b) on a confined fluid (c) on a mono-atomic fluid, transmits itself to all points of that fluid without any significant loss. click here Problem: In a hydraulic jack the diameter of the big cylinder is 10.0 times the diameter of the small cylinder. Answer the following questions: 38) The ratio of the areas (of the big piston to the small piston) is (a) 10.0 (b) 100 (c) 50.0. 
click here 39) The ratio of the applied forces (on the small piston to that of the big piston) is (a) 1/100 (b) 1/10 (c) 1/25. click here 40) If the applied force to the small piston is 147.0N, the mass of the car it can lift is (a) 1200kg (b) 3500kg (c) 1500kg. 1) The mass density of mercury is 13.6 gr /cm3. A cylindrical vessel that has a height of 8.00cm and a base radius of 4.00cm is filled with mercury. Find (a) the volume of the vessel. Calculate the mass of mercury in (b) grams, (c) kg, and (d) find its weight both in N and kgf. Note that 1kgf = 9.81N. 2) A piece of copper weighs 49N. Determine (a) its mass and (b) its volume. The mass density of copper is 8.9gr/cm3. 3) The mass densities of gold and copper are 19.3 gr/cm3 and 8.9 gr/cm3, respectively. A piece of gold necklace has a mass of 51.0 grams and a volume of 3.50 cm3. Calculate (a) the mass percentage and (b) the karat of gold in the alloy assuming that the volume of the alloy is equal to the volume of copper plus the volume of gold. In other words, no volume is lost or gained as a result of the alloying process. 4) Calculate the average pressure that a 32-ton ten-wheeler truck exerts on the ground by each of its ten tires if the contact area of each tire with the ground is 750 cm2. 1 ton = 1000kg. Express your answers in (a) Pascal, (b) kgf/cm2, and (c) psi. 5) Calculate (a) the water pressure at a depth of 22.0m below ocean surface. (b)What is the total pressure at that depth if the atmospheric pressure is 101,300Pa? (c) Find the total external force on a shark that has an external total surface area of 32.8 ft2. Ocean water has a mass density of ρ = 1030 kg/m3. 6) A submarine with a total outer area of 1720m2 is at a depth of 33.0m below ocean surface. The mass density of ocean water is 1025 kg/m3. Calculate (a) the pressure due to water at that depth. (b) the total external pressure at that depth, and (c) the total external force on it. Let g = 9.81 m/s2. 7) In the figure shown, calculate the liquid pressure at the center of the barrel if the narrow pipe is filled up to (a) Point A, (b) Point B, and (c) Point C. Using each pressure you find (in Parts a, b, and c) as the average pressure inside the barrel, calculate (d) the corresponding internal force on the barrel in each case. If it takes 4.00x107N for the barrel to rupture, (e) at what height of water in the pipe will that happen? A sphere = 4πR2 and g = 9.81m/s2. 8) In problem 7, why is it not necessary to add the atmospheric pressure to the pressure you find for each case? 9) A volleyball has a diameter of 25.0cm and weighs 2.0N. Find (a) its volume. What downward force can keep it submerged in a type of alcohol that has a mass density of 834 kg/m3 (b) in Newtons and (c) lb-force? Vsphere=(4/3)πR3. 10) What upward force is needed to keep a 560cm3 solid piece of aluminum completely under water avoiding it from sinking (a) in Newtons, and (b) in lbf.? The mass density of aluminum is 2700kg/m3. 11) A boat has a volume of 127m3 and weighs 7.0 x104N. For safety, no more than 67.0% of its volume should be in water. What maximum load (a) in Newtons, (b) in kgf, (c) in ton-force, and (d) in lbf can be put in it? 
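The pressure problems above (Problems 5 and 6 in particular) follow the same steps as Examples 6 and 10. As an illustration of how such a calculation can be laid out, the Python sketch below re-runs Example 10's numbers rather than the exercises themselves; it is only a sketch and the variable names are arbitrary.

G = 9.81                                # m/s^2, as used in the examples above

# Example 10 revisited: submarine at 65.0 m, ocean water density 1030 kg/m^3
p_water = 65.0 * 1030.0 * G             # p = h * rho * g
p_total = p_water + 101000.0            # add the atmospheric pressure
force = p_total * 2200.0                # total outer area, m^2
print(f"Water pressure: {p_water:,.0f} Pa")
print(f"Total pressure: {p_total:,.0f} Pa")
print(f"Total external force: {force:.2e} N")

# Example 7/8 style unit checks on the atmospheric pressure (76.0 cm of mercury)
p_atm = 0.760 * 13600.0 * G
print(f"p_atm = {p_atm:,.0f} Pa "
      f"= {p_atm / G / 1e4:.2f} kgf/cm^2 "       # dividing by g converts N to kgf
      f"= {p_atm * 0.224 / 39.37**2:.1f} psi")   # 0.224 lbf per N, 39.37 in per m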
1) 402 cm3, 5470 grams, 5.47 kg, 53.7 N & 5.47 kgf
2) 5.0 kg, 562 cm3
3) 72.3%, 17.4 karat
4) 420 kPa, 4.3 kgf/cm2, 61 psi
5) 222 kPa, 323 kPa, 969 kN
6) 332 kPa, 433 kPa, 7.45x10^8 N
7) 176 kPa, 225 kPa, 255 kPa, 2.00x10^7 N, 2.55x10^7 N, 2.88x10^7 N, 36.1 m above the barrel's center
8) For students to answer
9) 8.18x10^-3 m3, 64.9 N, 14.6 lbf
10) 9.3 N, 2.1 lbf
11) 765,000 N, 78,000 kgf, 78 ton-force, 170,000 lbf
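As a quick check of the buoyancy formula B = V D used in the examples and problems above, the Python sketch below reproduces Problem 9 (the 25.0 cm volleyball weighing 2.0 N held under alcohol of density 834 kg/m3) and arrives at the answers listed above. It is only an illustration of the same arithmetic.

import math

G = 9.81                         # m/s^2
rho_alcohol = 834.0              # kg/m^3, given in Problem 9

# Volume of a 25.0 cm diameter sphere: V = (4/3) * pi * R^3
radius = 0.250 / 2
volume = (4.0 / 3.0) * math.pi * radius ** 3
print(f"Volume: {volume:.2e} m^3")                         # ~8.18x10^-3 m^3

# Buoyancy B = V * rho_fluid * g; the ball's weight is given as 2.0 N
buoyancy = volume * rho_alcohol * G
weight = 2.0
hold_down_force = buoyancy - weight                        # downward force needed
print(f"Buoyancy: {buoyancy:.1f} N")
print(f"Downward force needed: {hold_down_force:.1f} N")   # ~64.9 N
print(f"  = {hold_down_force * 0.224:.1f} lbf")            # ~14.6 lbf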
http://www.pstcc.edu/departments/natural_behavioral_sciences/Web%20Physics/Chapter11.htm
From Math Images - An involute of a circle can be obtained by rolling a line around the circle in a special way. Basic DescriptionImagine you have a string attached to a point on a fixed curve. Then, tautly wind the string onto the curve. The trace of the end point on the string gives an involute of the original curve, and the original curve is called the evolute of its involute. The animation on the right gives an example of an involute of a circle. We can see that a straight line is rotating around the center circle. This is like unwinding a string from a pole and keeping it taut the whole time. The trace of the end point of the line is the involute of the circle. In the image, it is the red spiral. The involute is also the roulette of a selected point on a line that rolls (as a tangent) along a given curve. Notice that while the rolling object can be anything (point, line, curve) for roulettes, it has to be a line to roll out involutes. For more information about roulettes, you can refer to the roulette image page. History of Involute The involute was first introduced by Huygens in 1673 in his Horologium Oscillatorium sive de motu pendulorum, in which he focused on theories about pendulum motion. Christiaan Huygens was a Dutch mathematician, astronomer and physicist. In 1656, he invented and built the world's first pendulum clock, which was the basic design for more accurate clocks in the following 300 years. Huygens had long realized that the period of a simple pendulum's oscillations was not constant, but rather depended on the magnitude of the movements. In other words, if the releasing height of the pendulum changes, the time for one oscillation changes as well. In search of improvements to the pendulum clocks, Huygens showed that the cycloid was the solution to this issue in 1659. He found that for a particle (subject only to gravity) to slide down a cycloidal curve without friction, the time it took to reach the lowest point of the curve was independent of its starting point. But how can we make sure the pendulum oscillates along a cycloid, instead of a circular arc? This was the point where the involute came in. In Horologium Oscillatorium sive de motu pendulorum, Huygens proved that the involute of a cycloid is another cycloid (you will see an example in the More Mathematical Explanation section). To force the pendulum bob to swing along a cycloid, the string needed to "unwrap" from the evolute of the cycloid. As shown in the image on the right, he suspended the pendulum from the cusp point between two cycloidal semi-arcs. As a result, the pendulum bob traveled along a cycloidal path that was exactly the same as the cycloid to which the semi-arcs belonged. Thus, the time needed for the pendulum to complete swings was the same regardless of the swing amplitude. To Draw an Involute The involute of a given curve can be approximately drawn following the instructions below (there will be an example afterwards): - Draw a number of tangent lines to the given curve. - Pick a pair of neighboring tangent lines and set their intersection as the center. Then, with an endpoint at the point where one of those tangent lines meets the curve, draw an arc bounded by the two tangent lines. We call the line whose tangent point to the curve is also on the arc L1, the other tangent line L2, and the newly constructed arc Arc1. - Pick the neighboring tangent of L2 and call it L3. Set the intersection of L2 and L3 as the center. 
Then draw an arc bounded by these two tangents, using a radius that will make this arc join Arc1. In other words, the radius would be the distance between the point where L2 meets L3 and the point where L2 meets Arc1. - Repeat the step above for the rest of the tangents: pick the neighboring tangent, draw the arc; pick the next neighboring tangent, draw another arc... Here is an illustration of the construction procedure above: This method does not produce the accurate involute curve because, for each of the line segments between the original curve and the involute (e.g. BA1, CA2, DA3...), instead of using the length of arc of the original curve as its length, we use the sum of the segments of the tangents. To be more concrete, the length of segment BA1 should equal the length of arc AB; the length of segment CA2 should equal the length of arc BC. This is because the line segment represents the part of the string that has just been unwound. It was originally wrapped along the fixed curve. As a result, its length should equal the length of the part of the original curve that it "covered" before. However, with the construction process described above, the lengths of these line segments are in fact sums of tangent segments. For example, the length of BA1 is found by adding XA and BX, which is shorter than it is supposed to be. Therefore, the distance between the involute and the original curve is not perfectly accurate. Fortunately, this error gets smaller as we construct more tangent lines that are closer to each other. A More Mathematical Explanation In this section, we will derive the general formula for involutes, [...] General Formula for Involutes This section derives the general equation for involutes of curves: As an example, the equation of the involute of a circle is going to be derived immediately in the examples of involutes section. Recall that we can think of the involute as the path of the end point of a string that is unwound from a fixed curve. So the tangent line segment we see between a point on the original curve and its corresponding point on the involute can be considered the part of the string that has just been unwound from the fixed curve. Therefore, the length of this line segment equals the distance traveled by the contact point between the unwound string and the fixed curve (the tangent point). If we are given a curve, we construct its involute by unwinding a string tautly from the curve. For the point on the original curve that has Cartesian coordinates (f(t ), g(t )), its corresponding point on the involute (the endpoint of the string) can be found with this formula: where is the position vector for a point on the original curve (where the unwound part of the string touches the curve), is the unit tangent vector to the original curve at this point (the current direction of the string), s is the distance traveled by this point so far (how much of the string has been unwound), and is the position vector for the corresponding point on the involute curve (the position of the end of the string). The image on the right takes a circle as an example and explains these variables visually. How each part is calculated is going to be explained below. Note: For the following lines, we will use f to represent f(t ) so that the formulas look neater. Similarly, g(t ), f′(t ), g′(t ) are shortened to be g, f′ and g′ respectively. Generally, the tangent vector for a curve with a position vector is defined as . 
If you are not very familiar with calculus, you can check these webpages to learn more about tangent vectors, derivatives, and velocity vector. In the case we have, the tangent vector is Thus, the unit tangent vector is: The distance traveled by the contact point can be calculated by taking the integral of its speed. In the horizontal direction, its velocity is the derivative f′; in the vertical direction, its velocity is g′. Therefore, the velocity of the point written as a vector is (f′, g′) (We can see that its velocity is exactly the tangent vector). Its speed is: Thus, the point has traveled a distance of where f(a ) is the point where the involute and the curve intersect (the starting point of the unwinding process). Therefore, if we write the vectors using Cartesian coordinates and plug in the results we get above for each term in the equation, we have Hence, the parametric equation for the involute is In the examples of involutes section, you will see the derivation of the equation for the involute of a circle as an example. Examples of Involutes The image on the left shows the Involute of a Circle. It resembles an Archimedean spiral. We know that the parametric equation for a circle with radius a is: For any point on the circle, its position vector is Then its tangent vector (also the velocity of the point on the circle) is The unit tangent vector is The speed of the point on the circle is The distance traveled by the point on the circle from its starting point is calculated by taking the integral of its speed: Plugging these numbers into the general formula, which is , we get the position vector of the corresponding point on the involute: Therefore, the parametric equation for the involute of a circle with radius a is: We can go through the same procedure to get the equations for all the involute curves below, but we are not going to derive all these equations in this page. |The involute of a parabola looks like the images on the left.| For example, if the parabola is its involute curve is |On the other hand, if the parabola is its involute curve is |The involute of an astroid is another astroid that is half of its original size and rotated 1/8 of a turn. | For example, if the parametric equation for the astroid is The involute of the astroid is |The involute of a cycloid is a shifted copy of the original cycloid.| If the cycloid is its involute curve is |The involute of a cardioid is a mirrored, but bigger cardioid. For example, the cardioid is given as Its involute is For more examples of involutes, you can visit WolframMathWorld -- Involutes and Evolutes Properties of Involutes - Why are the various involutes of a given circle parallel to each other? We can approach this question through string unwinding again. - Since each involute of the circle is symmetrical, we can look at one side of the cusp first. That is, we unwind two strings from a circle following the same direction. The endpoints of the strings are initially at points A and B (see the image below on the left). After some unwinding, the line that starts at point A will pass point B. Then, it is easy to see that the involute curves are parallel as desired. Starting at point B, we can also think of the case as a long string (the one drawing out the red involute) and a short string (the one drawing out the blue involute) being unwound simultaneously from the circle, and the starting point for both of them is point B (see the image below on the right). Therefore, they are always a constant distance apart. 
- For similar reasons, curves on the other side of the cusps (formed by unwinding the string in the opposite direction) are a constant distance apart as well. If we put the two parts of the involute curves together, we can see that all the involutes of a circle are parallel to each other.

Why It's Interesting

One of the most commonly used gearing systems today is the involute gear. The image on the right is an example of such a gear. In an involute gear system, it is desired that the two wheels revolve as if the two pitch-circles were rolling against each other. This effect can be achieved if the teeth profiles are drawn as involutes of the base-circles and the tops of the teeth are arcs of circles that are concentric with, and bigger than, the pitch-circles. Due to how the involute is constructed (unwinding the string), the points Q′ and R′ move the same distance in the same time interval, so we can conclude that Q and R move with equal velocities. Therefore, points on the pitch-circles will also move with the same velocities.

In the animation to the right, we can see that each pair of gear teeth has an instant contact point, called the pitch-point (point P in the hidden section above), and it moves along one single line as the gears rotate. This line is called the line of action and it is the common internal tangent of the two circles. In other words, the involutes of the two circles are always tangent to each other at a point on their common internal tangent. Because of this design, a constant velocity ratio is transmitted and the fundamental law of gearing is satisfied. Also, having all the contact points on a single straight line results in a constant force and pressure, while for gear teeth of other shapes, the relative speeds and forces change as the teeth engage, resulting in vibration, noise, and excessive wear. Lastly, involute gears have the advantage that they are easy to manufacture, since all the teeth are uniform.

References
- ↑ Christiaan Huygens. (n.d.). Retrieved from http://www.robertnowlan.com/pdfs/Huygens,%20Christiaan.pdf
- ↑ 2.0 2.1 Wikipedia (List of gear nomenclature). (n.d.). List of gear nomenclature. Retrieved from http://en.wikipedia.org/wiki/List_of_gear_nomenclature
- ↑ Wikiversity (Gears). (n.d.). Gears. Retrieved from http://en.wikiversity.org/wiki/Gears
- Lockwood, E. H. (1967). A Book of Curves. The Syndics of the Cambridge University Press.
- Yates, Robert C. (1952). A Handbook on Curves and Their Properties. Edwards Brothers, Inc.
- Wikipedia (Involute gear). (n.d.). Involute gear. Retrieved from http://en.wikipedia.org/wiki/Involute_gear
- Wikipedia (Involute). (n.d.). Involute. Retrieved from http://en.wikipedia.org/wiki/Involute
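As a small numerical appendix to the derivation and examples above, the following R sketch computes the involute of a circle in two ways: from the closed-form parametric equations x = a(cos t + t sin t), y = a(sin t - t cos t), and directly from the general recipe (point on the curve minus the unwound arc length times the unit tangent). The radius a = 2, the range of t, and the use of R are illustrative choices, not part of the original page.

# Involute of a circle of radius a, computed two ways and compared.
a <- 2                                   # circle radius (arbitrary choice)
t <- seq(0, 4 * pi, length.out = 400)

# Closed-form parametric equations derived above
x_closed <- a * (cos(t) + t * sin(t))
y_closed <- a * (sin(t) - t * cos(t))

# General recipe: involute point = point on curve - (arc length unwound) * unit tangent
fx <- a * cos(t); fy <- a * sin(t)       # position on the circle
tx <- -sin(t);    ty <- cos(t)           # unit tangent (the speed is a, so divide (f', g') by a)
s  <- a * t                              # arc length unwound so far
x_general <- fx - s * tx
y_general <- fy - s * ty

max(abs(x_closed - x_general), abs(y_closed - y_general))   # essentially 0: the two agree

plot(x_closed, y_closed, type = "l", asp = 1, xlab = "x", ylab = "y",
     main = "Involute of a circle (a = 2)")
lines(a * cos(t), a * sin(t), col = "gray")                 # the base circle

The printed difference is at the level of rounding error, which is one quick way to check that the closed form really is the curve described by the general formula.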
http://mathforum.org/mathimages/index.php/Involute
13
58
What is a parametric curve? A parametric curve in the plane is a pair of functions, x = f(t) and y = g(t), where the two continuous functions define ordered pairs (x, y). These two equations are usually called the parametric equations of a curve. The extent of the curve will depend on the range of t. We will pay close attention to the range of t in our investigations.

Graph x = cos(t) and y = sin(t) for 0 < t < 2pi. As you can see, this pair of functions produces a parametric curve that is a circle with a radius of 1. Now, let's make some changes to this graph and see some other transformations! Let's graph x = cos(at) and y = sin(bt) for different values of a and b. We are still dealing with t between 0 and 2pi. Note: When a and b = 1, we get the graph that we started with! Let's check out what happens for values other than 1.

I noticed something really interesting. As long as a and b are equal, they produce the same graph that x = cos(t) and y = sin(t) did, only thicker, because a and b affect how many times around the circle the function goes. Notice how this function is thicker when a and b = 10 than when a and b = 1. Notice how thick this is when a and b = 100. The circle is much thicker. I was not sure at first why there are some indentations in the circle as the curve is parameterized; the indentations, and the apparent thickness, are really artifacts of the graphing software, which samples only finitely many values of t and joins them with straight segments, so the many trips around the circle are drawn as overlapping polygons rather than a true circle. I did notice that as a and b get really large, the indentations go away somewhat. However, the circle does become really thick. See the graph below when a and b = 1000. This graph is in yellow. Do you think that when a and b are at something like 1,000,000 the circle would be completely filled in? Let's see. The hole at the center of the circle does become smaller as the curve goes around 1,000,000 times in the graph below. Notice the cool graph this pair of functions produces. Notice the presence of the segments whose endpoints form the boundary of the circular curve.

I found a neat website where you can go and plot any parametric curve of your choice. Click here to go to that website and do some exploring of your own. You may get a syntax error when you pull it up, but you can type in the values below to get it working! For f, you can put in cos(t). For g, you can put in sin(t). Then for the parameter, type t. For a and b, put 0 to 2*pi. Use * for multiplying two values.

Let's look further and see how a and b affect the parametric curve when we plug a and b into the functions as coefficients, x = a cos(t) and y = b sin(t). Notice below that when a and b are equal to 2, the effect these values have is to scale the radius of the original circle to 2. Where x = cos(t) and y = sin(t) have a radius equal to 1, the graph of x = 2cos(t) and y = 2sin(t) has a radius equal to 2. Let's try it with a and b = 5 and see if this produces a radius of 5. It does; therefore, altering a and b in this way changes the radius of the circle. I know that some of the circle is cut off in the picture, but it does have a radius of 5.

What happens when a and b are negative? Do you think the radius is the same as when a and b were positive? Let's take a look at the parametric curve when a and b = -3. The negative sign has no effect on the radius of the circle. This is because x = -3cos(t), y = -3sin(t) traces out the same set of points as x = 3cos(t), y = 3sin(t) (each point is simply reached half a period later), so the radius is |a| = 3.

Let's investigate the following functions for different values of a and b, particularly when a = b, a < b, and a > b. We will start off where a = b. I picked the values of a and b to be equal to 2. Remember, the a value corresponds to the x-axis and the b value corresponds to the y-axis.
With a and b = 2, we get a circle with a radius of 2. The blue function represents the square of the first function and the yellow function represents the cube of the first function. Notice that the square function (the second function) is in the first quadrant, where it is always positive. This is because anything squared is always positive. This function appears to be a segment and has endpoints at (2,0) and (0,2). (Here x = a cos^2(t) and y = b sin^2(t), so x/a + y/b = cos^2(t) + sin^2(t) = 1, which is the equation of a straight line; only its first-quadrant piece is traced.) Finally, the cube function looks like a diamond. It is in yellow and crosses the axes at the points (2,0), (0,2), (-2,0), and (0,-2).

Let's see what looks different when we take a > b. Let's let a = 3 and b = 2. See the graph below. Now that a > b, where 3 > 2, we see that for the value of a along the x-axis, the first function intersects the x-axis at the points (3,0) and (-3,0). For the value of b along the y-axis, the function intersects the y-axis at the points (0,2) and (0,-2). This function appears to be an ellipse when a > b. The second function, in blue, still appears to be a segment, and it intersects the point (0,2) along the y-axis and the point (3,0) along the x-axis. It is still in the first quadrant because all values of the function will be positive. The third function, the cube function in yellow, still looks like a diamond. Its vertices are at the points (0,2) and (0,-2) along the y-axis for the value of b, and at the points (3,0) and (-3,0) along the x-axis for the value of a. Notice how all of the functions cross the x- and y-axes at the same points.

Now let's take a < b, with a = 2 and b = 3. Now that a < b, where 2 < 3, we see that for the value of a along the x-axis, the first function intersects the x-axis at the points (2,0) and (-2,0). For the value of b along the y-axis, the function intersects the y-axis at the points (0,3) and (0,-3). This function appears to be an ellipse when a < b. The second function, in blue, still appears to be a segment, and it intersects the point (0,3) along the y-axis and the point (2,0) along the x-axis. It is still in the first quadrant because all values of the function will be positive. The third function, the cube function in yellow, still looks like a diamond. Its vertices are at the points (0,3) and (0,-3) along the y-axis for the value of b, and at the points (2,0) and (-2,0) along the x-axis for the value of a.

To sum it up, if a > b, then the functions will be longer, or more spread out, along the x-axis. If a < b, then the functions will stretch farther along the y-axis. Only when a = b does the first function form a circle; otherwise, it forms an ellipse. All three functions cross the x- and y-axes at the same points in all cases. This concludes my investigation of parametric curves.
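For readers who want to reproduce these pictures, here is a minimal R sketch of the three families investigated above. It assumes the second and third curves are x = a cos^2(t), y = b sin^2(t) and x = a cos^3(t), y = b sin^3(t), which is consistent with the intercepts described; the values a = 3 and b = 2 reproduce the a > b case.

# The three families investigated above, drawn on one set of axes.
#   x = a cos(t),   y = b sin(t)      (ellipse; a circle when a = b)
#   x = a cos^2(t), y = b sin^2(t)    (the "segment": x/a + y/b = 1 in quadrant I)
#   x = a cos^3(t), y = b sin^3(t)    (the "diamond")
t <- seq(0, 2 * pi, length.out = 500)
a <- 3; b <- 2                                     # the a > b case from the text

plot(a * cos(t), b * sin(t), type = "l", asp = 1,
     xlab = "x", ylab = "y", main = "a = 3, b = 2")
lines(a * cos(t)^2, b * sin(t)^2, col = "blue")    # segment from (3, 0) to (0, 2)
lines(a * cos(t)^3, b * sin(t)^3, col = "orange")  # diamond through (3, 0), (0, 2), (-3, 0), (0, -2)
abline(h = 0, v = 0, col = "gray")

Changing a and b to equal values, or to a < b, reproduces the other two cases discussed.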
http://jwilson.coe.uga.edu/EMT668/EMAT6680.2003.Su/Fagler/AssignmentsBF/Assignment10BF/Assignment10BF.html
13
52
11. Investigating the shapes of graphs It is a useful skill to be able to draw an accurate (representative) sketch graph of a function of a variable. It can aid in the understanding of a topic, and moreover, it can aid those who might find the mental envisaging of some of the more complex functions very difficult. Often one refers to the "vertex" of a quadratic function, but what is this? The vertex is the point where the graph changes the direction, but with the new skills of differentiation this can be generalised (rather helpfully): A stationary point is a point on a graph of: This is simple to explain in words. One is basically finding all values of "x" (and hence the coordinates of the corresponding points, if it is required) of the places on the graph where the gradient is equal to zero. First one must calculate the derivative, such that one is able to calculate the value of "x" for which this is zero, and hence the gradient is zero. Hence, one now uses the original function to obtain the "y" value, and thence the coordinate of the point: Hence there is a stationary point, or vertex at (-1, 2). One can check this using the rules about the transformation of graphs, along with the completion of the square technique. Maximum and minimum points It is evident that there are different types of these stationary points. One can envisage simply that there are those point whose gradient is positive, and then becomes zero, after which they are negative (maxima), and those points whose gradient change is from negative to zero to positive (minima) One could perform an analysis upon these points to check whether they are maxima, or minima. 1. For the stationary point calculated in the previous example, deduce whether it is a point of local maximum, or local minimum. One obtained the point (-1, 2) on the graph of: One can therefore take an "x" value either side of this stationary point, and calculate the gradient. This is evidently negative. This is evidently positive. Hence the gradient has gone from negative, to zero, to positive; and therefore the stationary point is a local minimum. It is important that one understands that these "minima" and "maxima" are with reference to the local domain. This means that one can have several points of local minimum, or several points of local maximum on the same graph (the maximum is not the single point whose value of "y" is greatest, and the minimum is not the single point whose value of "y" is least). An application to roots of equations It is evident, and has been shown previously, that one can obtain the roots of an equation through the analysis, and calculation of the points of intersection of two functions (when graphed). It is evident why this is true; for example: It is therefore simple to deduce that: Are the real roots to the equation. This is correct, and is useful knowledge when conjoined with the knowledge of stationary points, and basic sketching skills. Consider that one wishes to calculate the roots of the equation: These roots (if they are real) are graphically described as the intersections of the lines: Hence one would plot both graphs, and calculate the points of intersection. However, it is often the case that one will merely want to know how many real roots there are to an equation, and hence the work on sketch graphs is relevant. One does not need to know the accurate roots, merely the number of them, and hence it is useful to learn how to plot a good sketch graph. 
First one would calculate the stationary points of one of the functions, and then one could deduce their type. This could then be sketched onto a pair of axes. Repetition of this with the second function would lead to a clear idea of where the intersections may (or may not be), and therefore one can not only give the number of real roots to the equation, but also approximations as to the answer (these are usually given as inequalities relating to the positioning of the x-coordinate of the intersection with those of the stationary points). There is a much better, and usually more powerful technique for calculating the type of stationary point one is dealing with than the method described earlier. If one is to think of the derivative of a function to be the "rate of change" of the function, the second derivative is the "rate of change, of the rate of change" of a function. This is a difficult sounding phrase, but it is a rather easy concept. In some functions, one will have noticed that the derivative involves a value of "x", and hence there is a change to the gradient. A straight line has a constant value as the derivative, and hence it has a constant gradient. A constant gradient is not changing, there is no "rate of change of the gradient, other than zero". A derivative that has a term of a variable within it will have a second derivative other than zero. This is because at any given value of "x" on the curve, there is a different gradient, and hence one can calculate how this changes. 1. What is the second derivative of the function of "x": This is simple; being by calculating the first derivative: Now, one would like to know what the rate of change, of this gradient is, hence one can calculate the derivative of the derivative (the second derivative): One should be aware that the second derivative is notated in two ways: The former is pronounced "f two-dashed x", and the latter, "d two y, d x squared". The application of this to minima and maxima is useful. In many cases, this is a much more powerful tool than the original testing with values above and below the stationary point. 1. Demonstrate why (through the use of second derivative) the stationary point calculated in the example earlier (in this section) produced a local minimum. First one can assert the function: Now, the derivative: Finally, the second derivative: Hence the point is a local minimum, and moreover, the graph bends upwards, and does not have a local maximum. Graphs of other functions Functions other than the simple polynomial one that has been considered can use the same method. One is already aware that these graphs of fractional, or negative indices of "x" are differentiable using the same rule for differentiating powers of "x" as the positive, integer powers use. One does have to be slightly more careful however, as there are some points on these graphs that are undefined (the square root of any negative value, for instance, is not defined in the set of real numbers). One should simply apply the same principles. First one must find the derivative (it might be a good idea to find the second derivative at the same time, so as to do all of the calculus first): (Note, in this example one might wish to write down the original expression of "y" as a positive, and negative power of "x", it will aid, one would imagine, the understanding of the situation). Now one can find the stationary points: Hence, there are stationary points at (-1, -2), and (1, 2). 
Now one can identify them, in turn: Hence there is a point of local maximum at (-1, -2), and a point of local minimum at (1, 2). One thing that one should be aware of is that sometimes one will encounter a change in the gradient of a curve that is from a positive to a positive, or from a negative to a negative, this is a point of inflexion. Although this is not strictly in the syllabus, it is useful to know, and can help to explain the stationary point that is found in graphs such as . Read these other OCR Core 1 notes: - Coordinates, points, and lines - Some important graphs - Index Notation - Graphs of nth power functions - Transforming graphs - Investigating the shapes of graphs - Applications of differentiation
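As a closing illustration for these notes, the worked example above is consistent with the curve y = x + 1/x, whose stationary points are (-1, -2) and (1, 2). Assuming that function (an assumption, since the original expression was not reproduced here), the short R sketch below finds the stationary points numerically and classifies them with the second derivative, mirroring the procedure described in these notes.

# Locating and classifying stationary points, assuming the curve is y = x + 1/x.
f  <- expression(x + 1/x)
d1 <- D(f, "x")         # first derivative: 1 - 1/x^2
d2 <- D(d1, "x")        # second derivative (simplifies to 2/x^3)

# Solve dy/dx = 0 numerically on either side of the break at x = 0.
left  <- uniroot(function(x) eval(d1, list(x = x)), lower = -5,  upper = -0.1)$root
right <- uniroot(function(x) eval(d1, list(x = x)), lower =  0.1, upper =  5)$root

classify <- function(x) {
  value  <- eval(f[[1]], list(x = x))
  second <- eval(d2,     list(x = x))
  type <- if (second > 0) "local minimum" else if (second < 0) "local maximum" else "test inconclusive"
  sprintf("(%.2f, %.2f): %s (second derivative = %.2f)", x, value, type, second)
}

classify(left)    # "(-1.00, -2.00): local maximum (second derivative = -2.00)"
classify(right)   # "(1.00, 2.00): local minimum (second derivative = 2.00)"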
http://www.thestudentroom.co.uk/wiki/Revision:OCR_Core_1_-_Investigating_the_shapes_of_graphs
13
152
These video lectures of Professor Gilbert Strang teaching 18.06 were recorded in Fall 1999 and do not correspond precisely to the current edition of the textbook. However, this book is still the best reference for more information on the topics covered in each lecture. Instructor/speaker: Prof. Gilbert Strang I've been multiplying matrices already, but certainly time for me to discuss the rules for matrix multiplication. And the interesting part is the many ways you can do it, and they all give the same answer. And they're all important. So matrix multiplication, and then, come inverses. So we mentioned the inverse of a matrix. That's a big deal. Lots to do about inverses and how to find them. Okay, so I'll begin with how to multiply two matrices. First way, okay, so suppose I have a matrix A multiplying a matrix B and -- giving me a result -- well, I could call it C. A times B. Okay. So, let me just review the rule for this entry. That's the entry in row i and column j. So that's the i j entry. Right there is C i j. We always write the row number and then the column number. So I might -- I might -- maybe I take it C 3 4, just to make it specific. So instead of i j, let me use numbers. C 3 4. So where does that come from, the three four entry? It comes from row three, here, row three and column four, as you know. Column four. And can I just write down, or can we write down the formula for it? If we look at the whole row and the whole column, the quick way for me to say it is row three of A -- I could use a dot for dot product. I won't often use that, actually. Dot column four of B. But this gives us a chance to just, like, use a little matrix notation. What are the entries? What's this first entry in row three? That number that's sitting right there is... A, so it's got two indices and what are they? 3 1. So there's an a 3 1 there. Now what's the first guy at the top of column four? So what's sitting up there? B 1 4, right. So that this dot product starts with A 3 1 times B 1 4. And then what's the next -- so this is like I'm accumulating this sum, then comes the next guy, A 3 2, second column, times B 2 4, second row. So it's b A 3 2, B 2 4 and so on. Just practice with indices. Oh, let me even practice with a summation formula. So this is -- most of the course, I use whole vectors. I very seldom, get down to the details of these particular entries, but here we'd better do it. So it's some kind of a sum, right? Of things in row three, column K shall I say? Times things in row K, column four. Do you see that that's what we're seeing here? This is K is one, here K is two, on along -- so the sum goes all the way along the row and down the column, say, one to N. So that's what the C three four entry looks like. A sum of a three K b K four. Just takes a little practice to do that. Okay. And -- well, maybe I should say -- when are we allowed to multiply these matrices? What are the shapes of these things? The shapes are -- if we allow them to be not necessarily square matrices. If they're square, they've got to be the same size. If they're rectangular, they're not the same size. If they're rectangular, this might be -- well, I always think of A as m by n. m rows, n columns. So that sum goes to n. Now what's the point -- how many rows does B have to have? n. The number of rows in B, the number of guys that we meet coming down has to match the number of ones across. So B will have to be n by something. Whatever. P. 
So the number of columns here has to match the number of rows there, and then what's the result? What's the shape of the result? What's the shape of C, the output? Well, it's got these same m rows -- it's got m rows. And how many columns? P. m by P. Okay. So there are m times P little numbers in there, entries, and each one, looks like that. Okay. So that's the standard rule. That's the way people think of multiplying matrices. I do it too. But I want to talk about other ways to look at that same calculation, looking at whole columns and whole rows. Okay. So can I do A B C again? A B equaling C again? But now, tell me about... I'll put it up here. So here goes A, again, times B producing C. And again, this is m by n. This is n by P and this is m by P. Okay. Now I want to look at whole columns. I want to look at the columns of -- here's the second way to multiply matrices. Because I'm going to build on what I know already. How do I multiply a matrix by a column? I know how to multiply this matrix by that column. Shall I call that column one? That tells me column one of the answer. The matrix times the first column is that first column. Because none of this stuff entered that part of the answer. The matrix times the second column is the second column of the answer. Do you see what I'm saying? That I could think of multiplying a matrix by a vector, which I already knew how to do, and I can think of just P columns sitting side by side, just like resting next to each other. And I multiply A times each one of those. And I get the P columns of the answer. Do you see this as -- this is quite nice, to be able to think, okay, matrix multiplication works so that I can just think of having several columns, multiplying by A and getting the columns of the answer. So, like, here's column one shall I call that column one? And what's going in there is A times column one. Okay. So that's the picture a column at a time. So what does that tell me? What does that tell me about these columns? These columns of C are combinations, because we've seen that before, of columns of A. Every one of these comes from A times this, and A times a vector is a combination of the columns of A. And it makes sense, because the columns of A have length m and the columns of C have length m. And every column of C is some combination of the columns of A. And it's these numbers in here that tell me what combination it is. Do you see that? That in that answer, C, I'm seeing stuff that's combinations of these columns. Now, suppose I look at it -- that's two ways now. The third way is look at it by rows. So now let me change to rows. Okay. So now I can think of a row of A -- a row of A multiplying all these rows here and producing a row of the product. So this row takes a combination of these rows and that's the answer. So these rows of C are combinations of what? Tell me how to finish that. The rows of C, when I have a matrix B, it's got its rows and I multiply by A, and what does that do? It mixes the rows up. It creates combinations of the rows of B, thanks. Rows of B. That's what I wanted to see, that this answer -- I can see where the pieces are coming from. The rows in the answer are coming as combinations of these rows. The columns in the answer are coming as combinations of those columns. And so that's three ways. Now you can say, okay, what's the fourth way? The fourth way -- so that's -- now we've got, like, the regular way, the column way, the row way and -- what's left? 
The one that I can -- well, one way is columns times rows. What happens if I multiply -- So this was row times column, it gave a number. Okay. Now I want to ask you about column times row. If I multiply a column of A times a row of B, what shape I ending up with? So if I take a column times a row, that's definitely different from taking a row times a column. So a column of A was -- what's the shape of a column of A? n by one. A column of A is a column. It's got m entries and one column. And what's a row of B? It's got one row and P columns. So what's the shape -- what do I get if I multiply a column by a row? I get a big matrix. I get a full-sized matrix. If I multiply a column by a row -- should we just do one? Let me take the column two three four times the row one six. That product there -- I mean, when I'm just following the rules of matrix multiplication, those rules are just looking like -- kind of petite, kind of small, because the rows here are so short and the columns there are so short, but they're the same length, one entry. So what's the answer? What's the answer if I do two three four times one six, just for practice? Well, what's the first row of the answer? Two twelve. And the second row of the answer is three eighteen. And the third row of the answer is four twenty four. That's a very special matrix, there. Very special matrix. What can you tell me about its columns, the columns of that matrix? They're multiples of this guy, right? They're multiples of that one. Which follows our rule. We said that the columns of the answer were combinations, but there's only -- to take a combination of one guy, it's just a multiple. The rows of the answer, what can you tell me about those three rows? They're all multiples of this row. They're all multiples of one six, as we expected. But I'm getting a full-sized matrix. And now, just to complete this thought, if I have -- let me write down the fourth way. A B is a sum of columns of A times rows of B. So that, for example, if my matrix was two three four and then had another column, say, seven eight nine, and my matrix here has -- say, started with one six and then had another column like zero zero, then -- here's the fourth way, okay? I've got two columns there, I've got two rows there. So the beautiful rule is -- see, the whole thing by columns and rows is that I can take the first column times the first row and add the second column times the second row. So that's the fourth way -- that I can take columns times rows, first column times first row, second column times second row and add. Actually, what will I get? What will the answer be for that matrix multiplication? Well, this one it's just going to give us zero, so in fact I'm back to this -- that's the answer, for that matrix multiplication. I'm happy to put up here these facts about matrix multiplication, because it gives me a chance to write down special matrices like this. This is a special matrix. All those rows lie on the same line. All those rows lie on the line through one six. If I draw a picture of all these row vectors, they're all the same direction. If I draw a picture of these two column vectors, they're in the same direction. Later, I would use this language. Not too much later, either. I would say the row space, which is like all the combinations of the rows, is just a line for this matrix. The row space is the line through the vector one six. All the rows lie on that line. And the column space is also a line. 
All the columns lie on the line through the vector two three four. So this is like a really minimal matrix. And it's because of these ones. Okay. So that's a third way. Now I want to say one more thing about matrix multiplication while we're on the subject. And it's this. You could also multiply -- You could also cut the matrix into blocks and do the multiplication by blocks. Yet that's actually so, useful that I want to mention it. Block multiplication. So I could take my matrix A and I could chop it up, like, maybe just for simplicity, let me chop it into two -- into four square blocks. Suppose it's square. Let's just take a nice case. And B, suppose it's square also, same size. So these sizes don't have to be the same. What they have to do is match properly. Here they certainly will match. So here's the rule for block multiplication, that if this has blocks like, A -- so maybe A1, A2, A3, A4 are the blocks here, and these blocks are B1, B2,3 and B4? Then the answer I can find block. And if you tell me what's in that block, then I'm going to be quiet about matrix multiplication for the rest of the day. What goes into that block? You see, these might be -- this matrix might be -- these matrices might be, like, twenty by twenty with blocks that are ten by ten, to take the easy case where all the blocks are the same shape. And the point is that I could multiply those by blocks. And what goes in here? What's that block in the answer? A1 B1, that's a matrix times a matrix, it's the right size, ten by ten. Any more? Plus, what else goes in there? A2 B3, right? It's just like block rows times block columns. Nobody, I think, not even Gauss could see instantly that it works. But somehow, if we check it through, all five ways we're doing the same multiplications. So this familiar multiplication is what we're really doing when we do it by columns, by rows by columns times rows and by blocks. Okay. I just have to, like, get the rules straight for matrix multiplication. Okay. All right, I'm ready for the second topic, which is inverses. Okay. Ready for inverses. And let me do it for square matrices first. Okay. So I've got a square matrix A. And it may or may not have an inverse, right? Not all matrices have inverses. In fact, that's the most important question you can ask about the matrix, is if it's -- if you know it's square, is it invertible or not? If it is invertible, then there is some other matrix, shall I call it A inverse? And what's the -- if A inverse exists -- there's a big "if" here. If this matrix exists, and it'll be really central to figure out when does it exist? And then if it does exist, how would you find it? But what's the equation here that I haven't -- that I have to finish now? This matrix, if it exists multiplies A and produces, I think, the identity. But a real -- an inverse for a square matrix could be on the right as well -- this is true, too, that it's -- if I have a -- yeah in fact, this is not -- this is probably the -- this is something that's not easy to prove, but it works. That a left -- square matrices, a left inverse is also a right inverse. If I can find a matrix on the left that gets the identity, then also that matrix on the right will produce that identity. For rectangular matrices, we'll see a left inverse that isn't a right inverse. In fact, the shapes wouldn't allow it. But for square matrices, the shapes allow it and it happens, if A has an inverse. Okay, so give me some cases -- let's see. 
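[Editor's note: the short R sketch below checks, on an arbitrary pair of 4 x 4 matrices, that the ways of multiplying described so far in the lecture (the entry formula, the column picture, the row picture, columns times rows, and blocks) all give the same product. It is an illustration added to the transcript, not part of the lecture.]

# The same product A %*% B computed several ways; the matrices are arbitrary examples.
set.seed(1)
A <- matrix(sample(1:9, 16, replace = TRUE), 4, 4)
B <- matrix(sample(1:9, 16, replace = TRUE), 4, 4)

C_standard <- A %*% B                                    # the usual rule

# Entry (3, 4) as row 3 of A dotted with column 4 of B
sum(A[3, ] * B[, 4]) == C_standard[3, 4]

# Column picture: column j of C is A times column j of B
C_by_cols <- sapply(1:ncol(B), function(j) A %*% B[, j])
all(C_by_cols == C_standard)

# Row picture: row i of C is row i of A times B
C_by_rows <- t(sapply(1:nrow(A), function(i) A[i, ] %*% B))
all(C_by_rows == C_standard)

# Columns times rows: sum over k of (column k of A) times (row k of B)
C_outer <- Reduce(`+`, lapply(1:ncol(A), function(k) A[, k] %o% B[k, ]))
all(C_outer == C_standard)

# Block picture: split into 2 x 2 blocks and check the (1,1) block
i1 <- 1:2; i2 <- 3:4
C_block_11 <- A[i1, i1] %*% B[i1, i1] + A[i1, i2] %*% B[i2, i1]
all(C_block_11 == C_standard[i1, i1])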
I hate to be negative here, but let's talk about the case with no inverse. So -- these matrices are called invertible or non-singular -- those are the good ones. And we want to be able to identify how -- if we're given a matrix, has it got an inverse? Can I talk about the singular case? No inverse. All right. Best to start with an example. Tell me an example -- let's get an example up here. Let's make it two by two -- of a matrix that has not got an inverse. And let's see why. Let me write one up. No inverse. Let's see why. Let me write up -- one three two six. Why does that matrix have no inverse? You could answer that various ways. Give me one reason. Well, you could -- if you know about determinants, which you're not supposed to, you could take its determinant and you would get -- Zero. Okay. Now -- all right. Let me ask you other reasons. I mean, as for other reasons that that matrix isn't invertible. Here, I could use what I'm saying here. Suppose A times other matrix gave the identity. Why is that not possible? Because -- oh, yeah -- I'm thinking about columns here. If I multiply this matrix A by some other matrix, then the -- the result -- what can you tell me about the columns? They're all multiples of those columns, right? If I multiply A by another matrix that -- the product has columns that come from those columns. So can I get the identity matrix? No way. The columns of the identity matrix, like one zero -- it's not a combination of those columns, because those two columns lie on the -- both lie on the same line. Every combination is just going to be on that line and I can't get one zero. So, do you see that sort of column picture of the matrix not being invertible. In fact, here's another reason. This is even a more important reason. Well, how can I say more important? All those are important. This is another way to see it. A matrix has no inverse -- yeah -- here -- now this is important. A matrix has no -- a square matrix won't have an inverse if there's no inverse because I can solve -- I can find an X of -- a vector X with A times -- this A times X giving zero. This is the reason I like best. That matrix won't have an inverse. Can you -- well, let me change I to U. So tell me a vector X that, solves A X equals zero. I mean, this is, like, the key equation. In mathematics, all the key equations have zero on the right-hand side. So what's the X? Tell me an X here -- so now I'm going to put -- slip in the X that you tell me and I'm going to get zero. What X would do that job? Three and negative one? Is that the one you picked, or -- yeah. Or another -- well, if you picked zero with zero, I'm not so excited, right? Because that would always work. So it's really the fact that this vector isn't zero that's important. It's a non-zero vector and three negative one would do it. That just says three of this column minus one of that column is the zero column. Okay. So now I know that A couldn't be invertible. But what's the reasoning? If A X is zero, suppose I multiplied by A inverse. Yeah, well here's the reason. Here -- this is why this spells disaster for an inverse. The matrix can't have an inverse if some combination of the columns gives z- it gives nothing. Because, I could take A X equals zero, I could multiply by A inverse and what would I discover? 
Suppose I take that equation and I multiply by -- if A inverse existed, which of course I'm going to come to the conclusion it can't because if it existed, if there was an A inverse to this dopey matrix, I would multiply that equation by that inverse and I would discover X is zero. If I multiply A by A inverse on the left, I get X. If I multiply by A inverse on the right, I get zero. So I would discover X was zero. But it -- X is not zero. X -- this guy wasn't zero. There it is. It's three minus one. So, conclusion -- only, it takes us some time to really work with that conclusion -- our conclusion will be that non-invertible matrices, singular matrices, some combinations of their columns gives the zero column. They they take some vector X into zero. And there's no way A inverse can recover, right? That's what this equation says. This equation says I take this vector X and multiplying by A gives zero. But then when I multiply by A inverse, I can never escape from zero. So there couldn't be an A inverse. Where here -- okay, now fix -- all right. Now let me take -- all right, back to the positive side. Let's take a matrix that does have an inverse. And why not invert it? Okay. Can I -- so let me take on this third board a matrix -- shall I fix that up a little? Tell me a matrix that has got an inverse. Well, let me say one three two -- what shall I put there? Well, don't put six, I guess is -- right? Do I any favorites here? One? Or eight? I don't care. What, seven? Seven. Okay. Seven is a lucky number. All right, seven, okay. Okay. So -- now what's our idea? We believe that this matrix is invertible. Those who like determinants have quickly taken its determinant and found it wasn't zero. Those who like columns, and probably that -- that department is not totally popular yet -- but those who like columns will look at those two columns and say, hey, they point in different directions. So I can get anything. Now, let me see, what do I mean? How I going to computer A inverse? So A inverse -- here's A inverse, now, and I have to find it. And what do I get when I do this multiplication? The identity. You know, forgive me for taking two by two-s, but -- lt's good to keep the computations manageable and let the ideas come out. Okay, now what's the idea I want? I'm looking for this matrix A inverse, how I going to find it? Right now, I've got four numbers to find. I'm going to look at the first column. Let me take this first column, A B. What's up there? What -- tell me this. What equation does the first column satisfy? The first column satisfies A times that column is one zero. The first column of the answer. And the second column, C D, satisfies A times that second column is zero one. You see that finding the inverse is like solving two systems. One system, when the right-hand side is one zero -- I'm just going to split it into two pieces. I don't even need to rewrite it. I can take A times -- so let me put it here. A times column j of A inverse is column j of the identity. I've got n equations. I've got, well, two in this case. And they have the same matrix, A, but they have different right-hand sides. The right-hand sides are just the columns of the identity, this guy and this guy. And these are the two solutions. Do you see what I'm going -- I'm looking at that equation by columns. I'm looking at A times this column, giving that guy, and A times that column giving that guy. So -- Essentially -- so this is like the Gauss -- we're back to Gauss. 
We're back to solving systems of equations, but we're solving -- we've got two right-hand sides instead of one. That's where Jordan comes in. So at the very beginning of the lecture, I mentioned Gauss-Jordan, let me write it up again. Okay. Here's the Gauss-Jordan idea. Gauss-Jordan solve two equations at once. Okay. Let me show you how the mechanics go. How do I solve a single equation? So the two equations are one three two seven, multiplying A B gives one zero. And the other equation is the same one three two seven multiplying C D gives zero one. Okay. That'll tell me the two columns of the inverse. I'll have inverse. In other words, if I can solve with this matrix A, if I can solve with that right-hand side and that right-hand side, I'm invertible. I've got it. Okay. And Jordan sort of said to Gauss, solve them together, look at the matrix -- if we just solve this one, I would look at one three two seven, and how do I deal with the right-hand side? I stick it on as an extra column, right? That's this augmented matrix. That's the matrix when I'm watching the right-hand side at the same time, doing the same thing to the right side that I do to the left? So I just carry it along as an extra column. Now I'm going to carry along two extra columns. And I'm going to do whatever Gauss wants, right? I'm going to do elimination. I'm going to get this to be simple and this thing will turn into the inverse. This is what's coming. I'm going to do elimination steps to make this into the identity, and lo and behold, the inverse will show up here. K--- let's do it. Okay. So what are the elimination steps? So you see -- here's my matrix A and here's the identity, like, stuck on, augmented on. STUDENT: I'm sorry... STRANG: Yeah? STUDENT: -- is the two and the three supposed to be switched? STRANG: Did I -- oh, no, they weren't supposed to be switched. Sorry. Thank you very much. And there -- I've got them right. Okay, thanks. Okay. So let's do elimination. All right, it's going to be simple, right? So I take two of this row away from this row. So this row stays the same and two of those come away from this. That leaves me with a zero and a one and two of these away from this is that what you're getting -- after one elimination step -- Let me sort of separate the -- the left half from the right half. So two of that first row got subtracted from the second row. Now this is an upper triangular form. Gauss would quit, but Jordan says keeps going. Use elimination upwards. Subtract a multiple of equation two from equation one to get rid of the three. So let's go the whole way. So now I'm going to -- this guy is fine, but I'm going to -- what do I do now? What's my final step that produces the inverse? I multiply this by the right number to get up to ther to remove that three. So I guess, I -- since this is a one, there's the pivot sitting there. I multiply it by three and subtract from that, so what do I get? I'll have one zero -- oh, yeah that was my whole point. I'll multiply this by three and subtract from that, which will give me seven. And I multiply this by three and subtract from that, which gives me a minus three. And what's my hope, belief? Here I started with A and the identity, and I ended up with the identity and who? That better be A inverse. That's the Gauss Jordan idea. Start with this long matrix, double-length A I, eliminate, eliminate until this part is down to I, then this one will -- must be for some reason, and we've got to find the reason -- must be A inverse. 
Shall I just check that it works? Let me just check that -- can I multiply this matrix this part times A, I'll carry A over here and just do that multiplication. You'll see I'll do it the old fashioned way. Seven minus six is a one. Twenty one minus twenty one is a zero, minus two plus two is a zero, minus six plus seven is a one. Check. So that is the inverse. That's the Gauss-Jordan idea. So, you'll -- one of the homework problems or more than one for Wednesday will ask you to go through those steps. I think you just got to go through Gauss-Jordan a couple of times, but I -- yeah -- just to see the mechanics. But the, important thing is, why -- is, like, what happened? Why did we -- why did we get A inverse there? Let me ask you that. We got -- so we take -- We do row reduction, we do elimination on this long matrix A I until the first half is up. Then a second half is A inverse. Well, how do I see that? Let me put up here how I see that. So here's my Gauss-Jordan thing, and I'm doing stuff to it. So I'm -- well, whole lot of E's. Remember those are those elimination matrices. Those are the -- those are the things that we figured out last time. Yes, that's what an elimination step is it's in matrix form, I'm multiplying by some Es. And the result -- well, so I'm multiplying by a whole bunch of Es. So, I get a -- can I call the overall matrix E? That's the elimination matrix, the product of all those little pieces. What do I mean by little pieces? Well, there was an elimination matrix that subtracted two of that away from that. Then there was an elimination matrix that subtracted three of that away from that. I guess in this case, that was all. So there were just two Es in this case, one that did this step and one that did this step and together they gave me an E that does both steps. And the net result was to get an I here. And you can tell me what that has to be. This is, like, the picture of what happened. If E multiplied A, whatever that E is -- we never figured it out in this way. But whatever that E times that E is, E times A is -- What's E times A? It's I. That E, whatever the heck it was, multiplied A and produced I. So E must be -- E A equaling I tells us what E is, namely it is -- STUDENT: It's the inverse of A. STRANG: It's the inverse of A. Great. And therefore, when the second half, when E multiplies I, it's E -- Put this A inverse. You see the picture looking that way? E times A is the identity. It tells us what E has to be. It has to be the inverse, and therefore, on the right-hand side, where E -- where we just smartly tucked on the identity, it's turning in, step by step -- It's turning into A inverse. There is the statement of Gauss-Jordan elimination. That's how you find the inverse. Where we can look at it as elimination, as solving n equations at the same time -- -- and tacking on n columns, solving those equations and up goes the n columns of A inverse. Okay, thanks. See you on Wednesday.
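[Editor's note: the following R sketch reproduces the lecture's Gauss-Jordan example on the matrix A = [1 3; 2 7]. It is an added illustration, not part of the lecture.]

# Gauss-Jordan on the lecture's 2 x 2 example.
A  <- matrix(c(1, 2, 3, 7), nrow = 2)   # columns are (1, 2) and (3, 7)
AI <- cbind(A, diag(2))                 # the augmented matrix [A | I]

AI[2, ] <- AI[2, ] - 2 * AI[1, ]        # subtract 2 * row 1 from row 2  ->  [1 3 1 0; 0 1 -2 1]
AI[1, ] <- AI[1, ] - 3 * AI[2, ]        # subtract 3 * row 2 from row 1  ->  [1 0 7 -3; 0 1 -2 1]

A_inv <- AI[, 3:4]
A_inv          # matches the lecture:  7 -3
               #                      -2  1
A_inv %*% A    # the identity, as a check
solve(A)       # R's built-in inverse agrees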
http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/lecture-3-multiplication-and-inverse-matrices/
13
56
Mechanics: Work, Energy and Power

Work, Energy and Power: Problem Set Overview

This set of 32 problems targets your ability to use equations related to work and power, to calculate the kinetic, potential and total mechanical energy, and to use the work-energy relationship in order to determine the final speed, stopping distance or final height of an object. The more difficult problems are color-coded as blue problems. Work results when a force acts upon an object to cause a displacement (or a motion) or, in some instances, to hinder a motion. Three variables are of importance in this definition - force, displacement, and the extent to which the force causes or hinders the displacement. Each of these three variables finds its way into the equation for work. That equation is: Work = Force • Displacement • Cosine(theta) W = F • d • cos(theta) Since the standard metric unit of force is the Newton and the standard metric unit of displacement is the meter, the standard metric unit of work is a Newton•meter, defined as a Joule and abbreviated with a J. The most complicated part of the work equation and work calculations is the meaning of the angle theta in the above equation. The angle is not just any stated angle in the problem; it is the angle between the F and the d vectors. In solving work problems, one must always be aware of this definition - theta is the angle between the force and the displacement which it causes. If the force is in the same direction as the displacement, then the angle is 0 degrees. If the force is in the opposite direction as the displacement, then the angle is 180 degrees. If the force is up and the displacement is to the right, then the angle is 90 degrees. This is summarized in the graphic below. Power is defined as the rate at which work is done upon an object. Like all rate quantities, power is a time-based quantity. Power is related to how fast a job is done. Two identical jobs or tasks can be done at different rates - one slowly and one rapidly. The work is the same in each case (since they are identical jobs) but the power is different. The equation for power shows the importance of time: Power = Work / time P = W / t The unit for standard metric work is the Joule and the standard metric unit for time is the second, so the standard metric unit for power is a Joule / second, defined as a Watt and abbreviated W. Special attention should be taken so as not to confuse the unit Watt, abbreviated W, with the quantity work, also abbreviated by the letter W. Combining the equations for power and work can lead to a second equation for power. Power is W/t and work is F•d•cos(theta). Substituting the expression for work into the power equation yields P = F•d•cos(theta)/t. If this equation is re-written as P = F • cos(theta) • (d/t), one notices a simplification which could be made. The d/t ratio is the speed value for a constant speed motion or the average speed for an accelerated motion. Thus, the equation can be re-written as P = F • v • cos(theta) where v is the constant speed or the average speed value. A few of the problems in this set of problems will utilize this derived equation for power.

Mechanical, Kinetic and Potential Energies

There are two forms of mechanical energy - potential energy and kinetic energy. Potential energy is the stored energy of position. In this set of problems, we will be most concerned with the stored energy due to the vertical position of an object within Earth's gravitational field.
Such energy is known as the gravitational potential energy (PEgrav) and is calculated using the equation PEgrav = m•g•h where m is the mass of the object (with standard units of kilograms), g is the acceleration of gravity (9.8 m/s/s) and h is the height of the object (with standard units of meters) above some arbitrarily defined zero level (such as the ground or the top of a lab table in a physics room). Kinetic energy is defined as the energy possessed by an object due to its motion. An object must be moving to possess kinetic energy. The amount of kinetic energy (KE) possessed by a moving object is dependent upon mass and speed. The equation for kinetic energy is KE = 0.5 • m • v^2 where m is the mass of the object (with standard units of kilograms) and v is the speed of the object (with standard units of m/s). The total mechanical energy possessed by an object is the sum of its kinetic and potential energies. There is a relationship between work and total mechanical energy. The relationship is best expressed by the equation TMEi + Wnc = TMEf In words, this equation says that the initial amount of total mechanical energy (TMEi) of a system is altered by the work which is done to it by non-conservative forces (Wnc). The final amount of total mechanical energy (TMEf) possessed by the system is equivalent to the initial amount of energy (TMEi) plus the work done by these non-conservative forces (Wnc). The mechanical energy possessed by a system is the sum of the kinetic energy and the potential energy. Thus the above equation can be re-arranged to the form of
KEi + PEi + Wnc = KEf + PEf
0.5 • m • vi^2 + m • g • hi + F • d • cos(theta) = 0.5 • m • vf^2 + m • g • hf
The work done to a system by non-conservative forces (Wnc) can be described as either positive work or negative work. Positive work is done on a system when the force doing the work acts in the direction of the motion of the object. Negative work is done when the force doing the work opposes the motion of the object. When a positive value for work is substituted into the work-energy equation above, the final amount of energy will be greater than the initial amount of energy; the system is said to have gained mechanical energy. When a negative value for work is substituted into the work-energy equation above, the final amount of energy will be less than the initial amount of energy; the system is said to have lost mechanical energy. There are occasions in which the only forces doing work are conservative forces (sometimes referred to as internal forces). Typically, such conservative forces include gravitational forces, elastic or spring forces, electrical forces and magnetic forces. When the only forces doing work are conservative forces, then the Wnc term in the equation above is zero. In such instances, the system is said to have conserved its mechanical energy. The proper approach to a work-energy problem involves carefully reading the problem description and substituting values from it into the work-energy equation listed above. Inferences about certain terms will have to be made based on a conceptual understanding of kinetic and potential energy. For instance, if the object is initially on the ground, then it can be inferred that the PEi is 0 and that term can be canceled from the work-energy equation. In other instances, the height of the object is the same in the initial state as in the final state, so the PEi and the PEf terms are the same. As such, they can be mathematically canceled from each side of the equation.
In other instances, the speed is constant during the motion, so the KEi and KEf terms are the same and can thus be mathematically canceled from each side of the equation. Finally, there are instances in which the KE and/or the PE terms are not stated; rather, the mass (m), speed (v), and height (h) are given. In such instances, the KE and PE terms can be determined using their respective equations. Make it your habit from the beginning to simply start with the work and energy equation, to cancel terms which are zero or unchanging, to substitute values of energy and work into the equation and to solve for the stated unknown.

Habits of an Effective Problem-Solver

An effective problem solver by habit approaches a physics problem in a manner that reflects a collection of disciplined habits. While not every effective problem solver employs the same approach, they all have habits which they share in common. These habits are described briefly here. An effective problem-solver...
- ...reads the problem carefully and develops a mental picture of the physical situation. If needed, they sketch a simple diagram of the physical situation to help visualize it.
- ...identifies the known and unknown quantities in an organized manner, often times recording them on the diagram itself. They equate given values to the symbols used to represent the corresponding quantity (e.g., m = 1.50 kg, vi = 2.68 m/s, F = 4.98 N, t = 0.133 s, vf = ???).
- ...plots a strategy for solving for the unknown quantity; the strategy will typically center around the use of physics equations and be heavily dependent upon an understanding of physics principles.
- ...identifies the appropriate formula(s) to use, often times writing them down. Where needed, they perform the needed conversion of quantities into the proper unit.
- ...performs substitutions and algebraic manipulations in order to solve for the unknown quantity.

Additional Readings/Study Aids:

The following pages from The Physics Classroom tutorial may be useful in helping you understand the concepts and mathematics associated with these problems.
- Sample Work Calculations
- Resolving the Weight Vector on an Inclined Plane
- Potential Energy
- Kinetic Energy
- Mechanical Energy
- Situations Involving External Forces
- Situations Involving Energy Conservation
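To make the recommended habit concrete, here is a minimal R sketch of the bookkeeping described above, applied to a made-up scenario (an object sliding to rest on a level surface, so PEi = PEf and KEf = 0, leaving KEi + Wnc = 0). The numbers are arbitrary and the code is an illustration, not part of the original problem set.

# Work-energy bookkeeping for a made-up example: object sliding to rest on level ground.
m      <- 5.0    # mass in kg
vi     <- 8.0    # initial speed in m/s
F_fric <- 20     # friction force in N (opposes the motion, so theta = 180 degrees)

KE_i <- 0.5 * m * vi^2        # initial kinetic energy in J (160 J here)
# Wnc = F * d * cos(180 deg) = -F * d, so KE_i - F * d = 0 and d = KE_i / F
d_stop <- KE_i / F_fric
d_stop                        # stopping distance in m (8 m here)

# The same bookkeeping for power: average power dissipated if it stops in t seconds
t_stop <- 4.0                 # assumed stopping time in s
P_avg  <- KE_i / t_stop       # work done against friction per unit time, in W
P_avg                         # 40 W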
http://www.physicsclassroom.com/calcpad/energy/index.cfm
13