Through this post I am going to explain how linear regression works. Let us start with what regression is and how it works. Regression is widely used for prediction and forecasting in the field of machine learning. The focus of regression is on the relationship between a dependent variable and one or more independent variables. The “dependent variable” represents the output or effect, or is tested to see if it is the effect. The “independent variables” represent the inputs or causes, or are tested to see if they are the cause. Regression analysis helps us understand how the value of the dependent variable changes when any one of the independent variables is varied while the other independent variables are held fixed. In regression, the dependent variable is estimated as a function of the independent variables; this function is called the regression function. A regression model involves the following variables:
- Independent variables X.
- Dependent variable Y
- Unknown parameter θ
In the regression model, Y is a function of X and θ. There are many techniques for regression analysis, but here we will consider linear regression.
In linear regression, the dependent variable (Y) is a linear combination of the independent variables (X). Here the regression function is known as the hypothesis, which is defined as below.
hθ(X) = f(X,θ)
Suppose we have only one independent variable (x); then our hypothesis takes the form

hθ(x) = θ0 + θ1·x

where θ0 is the intercept term and θ1 is the coefficient (slope).
The goal is to find values of θ (known as coefficients) that minimize the difference between the real and predicted values of the dependent variable (y). If we take all θ to be zero, then our predicted value will always be zero. The cost function is used as the measure of fit of a linear regression model; it calculates the average squared error over the m observations. The cost function is denoted by J(θ) and is defined (in its usual form) as

J(θ) = (1/2m) · Σ(i=1..m) (hθ(x(i)) − y(i))²
As we can see from the above formula, if the cost is large then the predicted values are far from the real values, and if the cost is small then the predicted values are near the real values. Therefore, we have to minimize the cost to obtain more accurate predictions.
Linear regression in R
R is a language and environment for statistical computing, and it has powerful and comprehensive features for fitting regression models. We will discuss how linear regression works in R. In R, the basic function for fitting a linear model is lm(). The format is
fit <- lm(formula, data)
where formula describes the model (in our case a linear model) and data specifies the data used to fit the model. The resulting object (fit in this case) is a list that contains information about the fitted model. The formula is typically written as
Y ~ x1 + x2 + … + xk
where ~ separates the dependent variable (y) on the left from the independent variables (x1, x2, ..., xk) on the right, and the independent variables are separated by + signs. Let's look at a simple regression example (the example is from the book R in Action). We have the dataset women, which contains the heights and weights of 15 women aged 30 to 39. We want to predict weight from height. The R code to fit this model is below.
> fit <- lm(weight ~ height, data=women)
> summary(fit)
The output of the summary function gives information about the object fit:
Call:
lm(formula = weight ~ height, data = women)

Residuals:
    Min      1Q  Median      3Q     Max
-1.7333 -1.1333 -0.3833  0.7417  3.1167

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -87.51667    5.93694  -14.74 1.71e-09 ***
height        3.45000    0.09114   37.85 1.09e-14 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.525 on 13 degrees of freedom
Multiple R-squared: 0.991, Adjusted R-squared: 0.9903
F-statistic: 1433 on 1 and 13 DF, p-value: 1.091e-14
Let’s understand the output. The values of the coefficients (the θs) are -87.51667 and 3.45000, hence the prediction equation for the model is as below.
Weight = -87.52 + 3.45*height
In the output, the residual standard error, 1.525, plays the role of the cost. Now we will look first at the real weights of the 15 women and then at the predicted values. The actual weights of the 15 women are as below.
Output:

115 117 120 123 126 129 132 135 139 142 146 150 154 159 164
The predicted values for the 15 women are as below.
Output:

       1        2        3        4        5        6        7        8        9
112.5833 116.0333 119.4833 122.9333 126.3833 129.8333 133.2833 136.7333 140.1833
      10       11       12       13       14       15
143.6333 147.0833 150.5333 153.9833 157.4333 160.8833
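If you want to reproduce these two vectors yourself, both are one call away (women is a dataset built into R, and fitted() extracts the predicted values from the fitted model):

> women$weight
> fitted(fit)

You could also predict the weight for a new height, for example with predict(fit, data.frame(height=70)).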
We can see that the predicted values are close to the actual values. Finally, we have seen what regression is, how it works, and how to run a regression in R.
Here, I want to warn you about a common misunderstanding concerning correlation and causation. In regression, the dependent variable is correlated with the independent variable: as the value of the independent variable changes, the value of the dependent variable also changes. But this does not mean that the independent variable causes the change in the value of the dependent variable. Causation implies correlation, but the reverse is not true. For example, smoking causes lung cancer, and smoking is correlated with alcoholism. There are many discussions of this topic, and going deep into it would take more than one blog post. But we will keep in mind that what regression captures is the correlation between the dependent variable and the independent variables.
In the next blog post, I will discuss a real-world business problem and how to apply regression to it.
Source: http://www.tatvic.com/blog/linear-regression-using-r/
Pre-Algebra, Algebra 1, and Algebra 2 all require you to master new math skills. Do you find solving equations and word problems difficult in Algebra class? Are the exponents, proportions, and variables of Algebra keeping you up at night? Intercepts, functions, and expressions can be confusing to most Algebra students, but a qualified tutor can clear it all up! Our Algebra tutors are experts in math and specialize in helping students like you understand Algebra. If you are worried about an upcoming Algebra test or fear not passing your Algebra class for the term, getting an Algebra tutor will make all the difference.
Pre-algebra - The goal of Pre-algebra is to develop fluency with rational numbers and proportional relationships. Students will: extend their elementary skills and begin to learn algebra concepts that serve as a transition into formal Algebra and Geometry; learn to think flexibly about relationships among fractions, decimals, and percents; learn to recognize and generate equivalent expressions and solve single-variable equations and inequalities; investigate and explore mathematical ideas and develop multiple strategies for analyzing complex situations; analyze situations verbally, numerically, graphically, and symbolically; and apply mathematical skills and make meaningful connections to life's experiences.
Algebra I - The main goal of Algebra is to develop fluency in working with linear equations. Students will: extend their experiences with tables, graphs, and equations and solve linear equations and inequalities and systems of linear equations and inequalities; extend their knowledge of the number system to include irrational numbers; generate equivalent expressions and use formulas; simplify polynomials and begin to study quadratic relationships; and use technology and models to investigate and explore mathematical ideas and relationships and develop multiple strategies for analyzing complex situations.
Algebra II - A primary goal of Algebra II is for students to conceptualize, analyze, and identify relationships among functions. Students will: develop proficiency in analyzing and solving quadratic functions using complex numbers; investigate and make conjectures about absolute value, radical, exponential, logarithmic and sine and cosine functions algebraically, numerically, and graphically, with and without technology; extend their algebraic skills to compute with rational expressions and rational exponents; work with and build an understanding of complex numbers and systems of equations and inequalities; analyze statistical data and apply concepts of probability using permutations and combinations; and use technology such as graphing calculators.
College Algebra – Topics for this course include basic concepts of algebra; linear, quadratic, rational, radical, logarithmic, exponential, and absolute value equations; equations reducible to quadratic form; linear, polynomial, rational, and absolute value inequalities, and complex number system; graphs of linear, polynomial, exponential, logarithmic, rational, and absolute value functions; conic sections; inverse functions; operations and compositions of functions; systems of equations; sequences and series; and the binomial theorem.
No matter the level of the algebra course that the student is taking, we have expert tutors available and ready to help. All of our algebra tutors have a degree in mathematics, science, or a related field (like accounting). We are so confident in our algebra tutors that you can meet with them for free. Just ask your tutoring coordinator about our Meet and Greet program.
Our Tutoring Service
We offer our clients choice when searching for a tutor, and we work with you all the way through the selection process. When you choose to work with one of our tutors, expect quality, professionalism, and experience. We will never offer you a tutor who is not qualified in the specific subject area you request. We will provide you with the degrees, credentials, and certifications each selected tutor holds so that you have the same confidence in them that we do. And for your peace of mind, we conduct a nationwide criminal background check, sexual predator check, and social security verification on every single tutor we offer you. We will find you the right tutor so that you can find success!

Source: http://www.advancedlearners.com/albuquerque/algebra/tutor/find.aspx
Not Just Black and White
Not Just Black and White is a science project that teaches kids about color and light. Different colors will appear when you and your kids view spinning black-and-white circles.
What You'll Need:
- White paper
- Black paper
- Black marker
- Knitting needle
- Paper plate
Learn About Not Just Black and White:

Step 1: Draw and cut out 3 circles of white paper that are each 5-1/2 inches in diameter. Put a small hole in the center of each circle.
Step 2: Draw and cut out a circle of black paper that is 5-1/2 inches in diameter. Cut the black circle in half. Cut 1 of the halves in half.
Step 3: Use these materials to make several different disks. Glue a black half-circle onto a white circle so that the disk is 1/2 black and 1/2 white. Glue a black quarter-circle onto a white circle so that the disk is 1/4 black and 3/4 white.
Step 4: Using a black marker, divide 1 white disk into 8 pie-wedge shapes. Color some of the pie wedges black, leaving others white.
Step 5: Wrap some tape around the middle of a knitting needle. Put the knitting needle through the middle of a 6-inch paper plate, and push the plate down to rest on the tape.
Step 6: Spin the plate. Be sure it spins smoothly and doesn't wobble. Use this as your spinner. Poke the knitting needle through the hole in the center of 1 disk, and let the disk rest on the paper plate.
Step 7: Spin the plate, and look at the disk as it spins. What colors do you see? Do you see different colors when the disk is spinning quickly or slowly? Spin the other disks to see what colors they produce.
Colors at a Distance is a science project that teaches kids about visual perception. Learn about Colors at a Distance on the next page of science projects for kids: spectrum of colors.

Source: http://tlc.howstuffworks.com/family/science-projects-for-kids-spectrum-of-colors1.htm
3.8 Related Rates
Two variables, perhaps x and y, are both functions of a third variable, time, t, and x and y are related by an equation.
Example A fire has started in a dry, open field and spreads in the form of a circle. The radius of the circle increases at a rate of 6 ft/min. Find the rate at which the fire area is increasing when the radius is 150 ft.
Strategy: Draw and label picture. What are we finding? Name variables and equations involved. (Substitute) and differentiate, then "plug-in" values.
Implicitly differentiate with respect to t. Note: The area and radius are both functions of t.
Given: dr/dt = 6 ft/min and r = 150 ft
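Working the computation through, with A = πr² for the area of the circular burn region:

dA/dt = 2πr (dr/dt) = 2π(150)(6) = 1800π ≈ 5655 ft²/min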
Example: A ladder 26 ft long leans against a vertical wall. The foot of the ladder is drawn away from the wall at a rate of 4 ft/s. How fast is the top of the ladder sliding down the wall, when the foot of the ladder is 10 ft from the wall?
Strategy: Draw and label pictures. What are we finding? Name variables and equations involved. (Substitute) and differentiate, then "plug-in" values.
Differentiate implicitly with respect to the variable t. Note: x and y are both functions of t.
Given: dx/dt = 4 ft/s and x = 10 ft. Must find y using x² + y² = 26².
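Working it through: differentiating x² + y² = 26² gives 2x(dx/dt) + 2y(dy/dt) = 0, so dy/dt = −(x/y)(dx/dt). When x = 10, y = √(676 − 100) = 24, and with dx/dt = 4:

dy/dt = −(10/24)(4) = −5/3 ft/s

so the top of the ladder is sliding down the wall at 5/3 ft/s.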
Example: Water runs into a conical tank shown at a constant rate of 2 ft3 per minute. The dimensions of the tank are altitude of 12ft and base radius of 6 ft. How fast is the water level rising when the water is 6 feet deep?
Draw a picture. Need both the volume of a cone and similar triangle proportions.
Find: dh/dt when h = 6. Volume of cone: V = (1/3)πr²h. Similar triangles: r/h = 6/12, so r = h/2.
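Working it through: substituting r = h/2 gives V = (1/12)πh³, so dV/dt = (π/4)h²(dh/dt). With dV/dt = 2 and h = 6:

2 = (π/4)(36)(dh/dt), so dh/dt = 2/(9π) ≈ 0.071 ft/min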
Example: A spherical balloon is inflated with gas at the rate of 100 ft3/min. Assuming the gas pressure remains constant, how fast is the radius of the balloon increasing when the radius is 3 ft?
Find: dr/dt. Volume of sphere: V = (4/3)πr³
Given: r = 3 ft. Know: dV/dt = 100 ft³/min
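Working it through: dV/dt = 4πr²(dr/dt), so 100 = 4π(3²)(dr/dt) = 36π(dr/dt), giving dr/dt = 25/(9π) ≈ 0.88 ft/min.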
Example: A man 6 ft tall walks at the rate of 5 ft/sec. toward a street light that is 16 ft. above the ground. At what rate is the tip of his shadow moving?
Find: dy/dt, the speed of the tip of the shadow. Use similar triangles.
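Working it through: let x be the man's distance from the light pole and y the distance from the pole to the tip of his shadow. Similar triangles give 16/y = 6/(y − x), so 16(y − x) = 6y, which simplifies to y = (8/5)x. Then

dy/dt = (8/5)(dx/dt) = (8/5)(−5) = −8 ft/s

so the tip of the shadow moves toward the light at 8 ft/s.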
Assignment 3.8: pg 186; 1-11 odd, 12 [ans: -1/(20π)], 13, 15, 16 [ans: 0.6 m/s], 19, 20 [ans: ], 21, 27, 29, 30 [ans: -1/8 rad/s], 31

Source: http://homepages.ius.edu/MEHRINGE/M215/Fall%2007%20Notesr/Section3.8.htm
Dec. 8, 2010

Thirteen billion years ago our universe was dark. There were neither stars nor galaxies; there was only hydrogen gas left over after the Big Bang. Eventually that mysterious time came to an end as the first stars ignited and their radiation transformed the nearby gas atoms into ions. This phase of the universe's history is called the Epoch of Reionization (EoR), and it is intimately linked to many fundamental questions in cosmology. But looking back so far in time presents numerous observational challenges.
Arizona State University's Judd Bowman and Alan Rogers of Massachusetts Institute of Technology have developed a small-scale radio astronomy experiment designed to detect a never-before-seen signal from the early universe during this period of time, a development that has the potential to revolutionize the understanding of how the first galaxies formed and evolved.
"Our goal is to detect a signal from the time of the Epoch of Reionization. We want to pin down when the first galaxies formed and then understand what types of stars existed in them and how they affected their environments," says Bowman, an assistant professor at the School of Earth and Space Exploration in ASU's College of Liberal Arts and Sciences.
Bowman and Rogers deployed a custom-built radio spectrometer called EDGES to the Murchison Radio-astronomy Observatory in Western Australia to measure the radio spectrum between 100 and 200 MHz. Though simple in design -- consisting of just an antenna, an amplifier, some calibration circuits, and a computer, all connected to a solar-powered energy source -- its task is highly complex. Instead of looking for early galaxies themselves, the experiment looks for the hydrogen gas that existed between the galaxies. Though an extremely difficult observation to make, it isn't impossible, as Bowman and Rogers have demonstrated in their paper published in Nature on Dec. 9.
"This gas would have emitted a radio line at a wavelength of 21 cm -- stretched to about 2 meters by the time we see it today, which is about the size of a person," explains Bowman. "As galaxies formed, they would have ionized the primordial hydrogen around them and caused the radio line to disappear. Therefore, by constraining when the line was present or not present, we can learn indirectly about the first galaxies and how they evolved in the early universe." Because the amount of stretching, or redshifting, of the 21 cm line increases for earlier times in the Universe's history, the disappearance of the inter-galactic hydrogen gas should produce a step-like feature in the radio spectrum that Bowman and Rogers measured with their experiment.
Radio measurements of the redshifted 21 cm line are anticipated to be an extremely powerful probe of the reionization history, but they are very challenging. The experiment ran for three months, a rather lengthy observation time, but a necessity given the faintness of the signal compared to the other sources of emission from the sky.
"We carefully designed and built this simple instrument and took it out to observe the radio spectrum and we saw all kinds of astronomical emission but it was 10,000 times stronger than the theoretical expectation for the signal we are looking for," explains Bowman. "That didn't surprise us because we knew that going into it, but it means it's very hard to see the signal we want to see."
The low frequency radio sky is dominated by intense emission from our own galaxy that is many times brighter than the cosmological signal. Add to that the interference from television, FM radio, low earth orbit satellites, and other telecommunications radio transmitters (present even in remote areas like Australia's Outback) and it is a real challenge. Filtering out or subtracting these troublesome foreground signals is a principal focus of instrument design and data analysis techniques. Fortunately, many of the strongest foregrounds have spectral properties that make them possible to separate from the expected EoR signals.
After careful analysis of their observations, Bowman and Rogers were able to show that the gas between galaxies could not have been ionized extremely rapidly. This marks the first time that radio observations have directly probed the properties of primordial gas during the EoR and paves the way for future studies. "We're breaking down barriers to open an entirely new window into the early universe," Bowman says.
The next generation of large radio telescopes is under construction right now to attempt much more sophisticated measurements of the 21 cm line from the EoR. Bowman is the project scientist for one of the telescopes called the Murchison Widefield Array. According to him, the most likely physical picture for the EoR looked like a lot of bubbles that started percolating out from galaxies and then grew together -- but that idea needs to be tested. If lots of galaxies all put out a little bit of radiation, then there would be many little bubbles everywhere and those would grow and eventually merge like a really fizzy and frothy foam. On the other hand, if there were just a few big galaxies that each emitted a lot of radiation then there would have been only a few big bubbles that grew together.
"Our goal, eventually, is to make radio maps of the sky showing how and when reionization occurred. Since we can't make those maps yet, we are starting with these simple experiments to begin to constrain the basic properties of the gas and how long it took for galaxies to change it," explains Bowman. "This will improve our understanding of the large-scale evolution of the universe."
- Judd D. Bowman, Alan E. E. Rogers. A lower limit of Δz > 0.06 for the duration of the reionization epoch. Nature, 2010; 468 (7325): 796 DOI: 10.1038/nature09601
Source: http://www.sciencedaily.com/releases/2010/12/101208132210.htm
Solution to Additional Practice
Unit 1: Analyzing Lines on a Graph
In the graph below, the straight line S is given by the equation y = c + dx. If the line shifts from its initial position S0 to a new position S1, what must have changed in the equation?

- In this graph, the line has changed in steepness, which means the slope must have changed.
- In the equation y = c + dx, "d" is the slope of the line and "c" is the y-intercept. Since the slope must have changed, the constant "d" must have changed; and since S1 is steeper than S0, "d" must have increased.
- In the graph, the lines have not been extended to where they intercept the y-axis, so it is hard to tell whether "c" changed. Unless you extend the lines to the y-axis and can be certain that both lines intercept it in the same place, you cannot tell whether "c" changed or not, but you can be certain that "d" did change.
- If you do extend both lines through the y-axis, you will find they have the same y-intercept, which means "c" does not change.
If you feel comfortable with this material, move on to the next unit. If you still do not understand this practice, you may need more review than is offered by this book. You may wish to review Book I of this series before moving on.

Source: http://cstl.syr.edu/fipse/graphb/unit6/SolnT2Full.html
ENGLISH AND LANGUAGE ARTS
Sixth Graders review parts of speech and verb tenses and write detailed reports and compositions. Grammar emphasis is on clauses, phrases and the formulation of good sentences and paragraphs. Oral presentations of reports and research are given with an artistic component. Students practice lengthy recitation of epic poems such as “Horatio at the Bridge” or “Hiawatha.” Class plays usually come from Roman or Medieval history. Biographies are assigned for reports, and readers include: The Bronze Bow, King Arthur legends, and Otto of the Silver Hand.
The seventh grade grammar lessons emphasize different styles of writing, use of an outline, paragraph format, self-editing, organization of compositions, note taking and the development of compound and complex sentences. Creative writing is practiced in the Wish, Wonder and Surprise block. For the first time in an English block, the students are graded on quizzes, tests, essays, artwork, class participation, and timeliness. Poetry continues to be spoken daily, and oral reports are given to the class. The class play is usually placed in the Renaissance or late medieval times. Independent reading with regular book reports gives the students an opportunity to explore different literature. Often choices include The Giver, Education of Little Tree, Midwives Apprentice, Wrinkle in Time and Robin Hood.
Eighth Graders learn to edit their writing, summarize written work, and solidify their grammar skills (passive and active verbs, direct and indirect objects, clauses and phrases, pronouns). The spoken work continues with more oral reports including biographies, modern history and geography. Poetry continues to be a lively part of the main lesson. The class play is often Shakespeare or a modern play with rich use of language. Each individual now begins to understand a point of view and the dramatic themes used in acting. Eighth grade continues with some assigned reading, book reports and short stories such as Dragonwings, The Master Puppeteer, and Johnny Tremain.
MATHEMATICS

The sixth grade Math curriculum is based on an intense review of previously taught material. This review is done in such a way that there is always something new. A continual theme through the year is the sense of number and the interrelationship between division, fractions, decimals, and percents, with fractions playing the central role. Another theme in sixth grade math is developing good work habits. Weekly homework assignments, organization skills, and keeping a good notebook are emphasized. Percents, business math, and algebraic formulas are introduced in sixth grade, as well as drawing geometric figures exactly with Euclidean tools: the compass and the straight edge.
The seventh graders’ introduction to algebra (done in one three-week Main Lesson block) is an important milestone in the development of the students’ abstract thinking. This serves as a crucial foundation for studying mathematics in high school. Another central theme for the seventh grade year is ratios, through which π and irrational numbers are introduced. The study of geometry continues with the Euclidean constructions that were introduced in sixth grade, and then moves on to theorems and proofs, culminating in the Pythagorean theorem. The year often ends with the students learning how to calculate the square roots of numbers by hand.
Instead of devoting a large portion of the eighth grade year to algebra in order to get the students “ahead,” the bulk of the material found in a traditional Algebra I course is kept for ninth grade, the year that we feel most students are ripe for algebra. Much of our eighth grade year is dedicated to non-traditional topics, such as number bases, in order to develop abstract thinking, and stereometry (the study of three-dimensional solids) and loci (the study of two-dimensional curves such as the conic sections), in order to develop the capacity of “exact” imagination. The traditional topics covered in eighth grade include volumes, proportions, dimensional analysis, percents and exponential growth.
Middle School Science
In the next three grades, the study of science turns to the lawfulness that comes from cause-and-effect relationships in the physical world. The focus now shifts to a threefold approach to the phenomena: observation, evaluation, and conceptualization. There is an emphasis on the hands-on and visual approaches in the middle school, by doing experiments that speak to the kinesthetic learners and drawings on the board that serve the visual.
In sixth grade, the threefold approach is now applied to electricity, magnetism, optics, acoustics, and heat in physics. Geography expands again, spiraling out to include either Europe (paralleling the study of Rome in history) or South America (as an extension of the North American studies in fifth grade). The polarity between the heights and the depths is explored in the complementary studies of Astronomy and Mineralogy.
In seventh grade, a mathematical approach is applied for the first time to physics content in mechanics, acoustics, electricity, heat and optics. In mechanics, for example, fulcrums are studied by first approaching the phenomena with seesaws and weights, and by identifying levers all around them in their homes and lives, then developing a rule or law. The students then use the rule to predict leverage and mechanical advantage for new arrangements. In chemistry, combustion, the lime cycle, and acids and bases form the content. The transformation of a substance through burning is an important highlight in this course. Nutrition, as well as Physiology, is taught in Main Lesson. In Geography, Africa is studied, continuing the expansion outward from the local to the farther extents of the world.
In eighth grade, Geography either focuses on a study of Asia, or of world religions. In physics, students learn how certain concepts are applied to technology or natural systems. The content areas (heat, light, electricity, acoustics, and mechanics) manifest as convection systems, refraction and lenses, the electric motor, musical instruments, and fluid mechanics and hydraulics. Fats, carbohydrates and proteins are studied in chemistry both in terms of what is happening in their own metabolisms and what can be achieved externally, such as by making personal care products (lip balm, soap, lotion, etc.). In biology, the human anatomy is studied, for example the musculoskeletal and nervous systems, to complement and complete the work done in seventh grade. Eighth Graders also study Meteorology.
SOCIAL STUDIES AND HISTORY
Sixth grade history often begins with the life and conquests of Alexander the Great. In two three-week blocks, important highlights of life in the Roman Empire are studied, including the rise of the Empire, the emperors, the Republic, conquests, government, building and construction, barbarian incursions and the fall of the empire. Also included, are the life of Jesus of Nazareth and the influence of Christianity on the Empire. The Sixth Grader is left with a strong impression of all we have inherited from ancient Rome.
Later in the year, a three-week block delves into the life of medieval Europe. This includes, but is not limited to feudalism, peasant life, knighthood and the life of the monasteries. The life of Mohammed and the rise of Islam as a counterforce to Christianity are studied. This naturally brings in the Crusades. Parallels to modern life become evident in this block. The geography of Latin America is the focus this year. Each country is handled much like the states in our study of the U.S., but in one three-week block. Each student will write a report on one of the countries in this region.
Some of the books that may be read during this year to further support these studies include The Sword and the Circle, by Rosemary Sutcliff, The Bronze Bow, by Elizabeth George Speare, Otto of the Silver Hand, by Howard Pyle, and Secret of the Andes, by Ann Nolan Clark.
In seventh grade the students study European history from the late Middle Ages through the Renaissance. There are usually three, three or four week Main Lesson blocks. Key biographies of either people who were forerunners of the times or individuals who particularly exemplified a character type from that time are studied in depth. In the Late Middle Ages, Marco Polo, Eleanor of Aquitaine, and Joan of Arc are typical biographies. As the curriculum moves towards the Reformation, the role of the Roman Catholic Church is explored with emphasis on the developments that took place within the church that contributed to the turbulence of the times. Martin Luther is typical of a key biography for this time period. Not only are the changes that took place in the religious/political life studied, but also the explorers in science, art, and world travel. Copernicus, Galileo, Columbus, Magellan, da Vinci, and Michelangelo are some of the fascinating biographies that tell the story of the times. The students deeply immerse themselves in the art of the times through their own reproductions of the work of “the renaissance masters.” The geography of Africa and Europe are covered in seventh grade. Typically, students write a report related to some aspect of a particular country. Some of the books related to history that are read in seventh grade include: Robin Hood, Adam of the Road, and Young Joan.
The eighth grade History curriculum spans the time from Elizabethan England through the modern times, with particular emphasis on the founding of America. First, the social, political, and economic climates in Europe set a stage for the mass migration to the American continent. The Revolutionary War, the Declaration of Independence, and the Constitution of the U.S. are studied in depth through biographies, art, literature, and pertinent readings. The settling of America, including the interaction of the settlers with the Native American people, is explored. Biographies of great Americans, such as Abraham Lincoln, lead the students into the Civil War and the Industrial Revolution. Rockefeller and Carnegie are two major biographies juxtaposed to the life of the common factory worker or miner. Through student presentations on the inventions of the 1900’s, the class is introduced to the genius of the modern world. The students are led through history to the two World Wars as well as the Civil Rights Movement and the biography of Martin Luther King Jr. Geography focuses on the Asian continent. Students continue to write reports on a country or on some aspect of world geography related to commerce. A wide variety of readers can be used in eighth grade depending on the focus of the teacher. Some examples related to history and geography may include: Johnny Tremain, Dragonwings, The Master Puppeteer, and My Brother Sam is Dead.
Studying music gives children an inspiring aesthetic experience while it develops focus, discipline, and social skills. Both singing and playing in ensembles strengthens students’ ability to work as individuals within a group. Middle school students become aware of their individual responsibility to the group as they work together to create a meaningful musical experience. They have many opportunities to perform in concerts, assemblies, and festivals throughout the school year.
Sixth grade students continue to develop their musical skills in choir, band and orchestra. They begin to explore how music developed throughout history by studying and performing music of different styles and eras.
Students continue to participate in choir, band and orchestra classes, bringing musical concepts and skill acquisition together in rehearsing and performing. Seventh graders are introduced to music related to the historical and geographical eras they study—such as the Renaissance and Africa.
In their ongoing musical education, eighth grade students benefit from the opportunity to experience more intense and varied emotions through the music they create together. Study and performance of good music of various styles enhances their aesthetic development and helps them begin to develop musical judgment and an understanding of the profound effects music can have on human beings.
WORLD LANGUAGES

MWS offers German and Spanish in Grades 1-12. Grades 1-7 have three lessons each week in blocks. At the end of a block, students switch to the alternate language. In Grade 8 students choose between Spanish and German and continue with this selection in the High School. Beginning in Grade 9 each student has four World Language classes per week. The World Language teachers strive to integrate Morning Lesson topics into the World Language lessons in support of our interdisciplinary approach to teaching.
By Grade 6, writing and reading has become a focal point. Beginning elements of grammar are taught. The language teacher uses dialogues, storytelling, verses, songs, tongue twisters, and small plays during instruction. Throughout these years, the students’ vocabulary comprehension increases,and they are able to say simple descriptive sentences, perform dialogues, and retell simple stories. In Grade 6 German, students use Zusammen Lesen by Roswitha Garff. In Spanish, they use an easy reader called Piratas del Caribe.
In Grades 7 and 8 teachers emphasize the languages’ phonetic structure so that students can read and write correctly. Teachers also place emphasis on listening comprehension and oral competence.
EURYTHMY

The sixth grader has changed physically from the well-proportioned fifth grader into the developing adolescent, often with limbs akimbo. The eurythmy curriculum for this grade is designed to meet the physical and emotional changes that accompany this challenging developmental time. One way to work with these changes is to introduce the orderly forms of geometry, with their accompanying laws, such as the five-pointed star, hexagon, square, and figure eight. The students use a capacity they are just beginning to develop: cause and effect thinking.
Students learn to listen to and identify the major intervals. They then learned to form the eurythmy gestures for these intervals, forming the gestures for the tonic to the octave, where they must reach upwards, out of the narrow confines of themselves.
Some of the eurythmy elements include the vowel and consonant forms, and mirroring. Copper rod exercises continue, including: the seven-fold, waterfall, spiral, spinning, and tossing. Copper rod exercises help improve the students’ posture, as well as enhance their spatial orientation.
The seventh grade eurythmy curriculum is full of the dark and light aspects in movement that reflect the turbulent emotional climate of the developing adolescent. Humor and drama are key elements in expressing this range. Head and foot gestures are learned as a kind of punctuation to enhance the understanding of poetry and music. The work with the copper rods becomes more challenging. Forms learned in years past become more complicated in their execution, e.g., the figure-eight form and the seven-pointed star.
The 8th grade year reviews forms learned in previous years, but now taken up in new ways, with the students beginning to apply their own understanding in the creation of the forms. They learn the deeper meaning of the gestures for the sounds of the alphabet and create their own poetry to move to. They often perform a story set to eurythmy for the younger students.
HANDWORK AND PRACTICAL ARTS

Following the lower school years, in Middle School students expand on their skills with increasingly sophisticated and complex projects. Sixth grade brings the opportunity to design and hand-sew an animal. Seventh grade progresses to hand-sewn dolls and doll clothing. In the eighth grade, while students are studying the Industrial Age, the Handwork curriculum involves sewing clothes on a treadle sewing machine.
Middle school students are combined weekly for a double period of Practical Arts. During these classes, mixed-age groups of students rotate through a variety of project-based classes. This provides an opportunity for our middle school students to learn and work together, and encourages greater familiarity among the grades. Through performing, fine and practical arts students deepen and transform experience. Every creation bears the stamp of individuality and expresses the student’s response to the world. The student uses imagination, cognition, and skill to bring each artistic or practical task to fruition. Experiencing this process repeatedly builds confidence for setting and implementing goals later in adult life.
Middle School Practical Arts activities include: watercolor painting in both veil and wet-on-wet technique, needle and wet felting, baking/cooking, batik, pastel drawing, charcoal drawing, figure drawing, mosaics, stained glass, folk dancing, basketry, bead work, metal work, printmaking, pottery, gymnastics, geometric string art, clay work, mountain biking, and outdoor education skills (gardening, earth-based skills and winter skills).
Here's a very simple GNU Make function: it takes three arguments and makes a 'date' out of them by inserting / between the first and second arguments and between the second and third:
make_date = $1/$2/$3
The first thing to notice is that make_date is defined just like any other GNU Make macro (you must use = and not := for reasons we'll see below).
To use make_date we $(call) it like this:
today = $(call make_date,19,12,2007)
That will result in today containing 19/12/2007.
The macro uses special macros $1, $2, and $3. These macros contain the arguments specified in the $(call): $1 is the first argument, $2 the second, and so on.
There's no maximum number of arguments, but once you reach 10 you need parens: you can't write $10, you must write $(10). There's also no minimum number. Arguments that are missing are just undefined and will typically be treated as an empty string.
The special argument $0 contains the name of the function. In the example above $0 is make_date.
Since functions are just macros with some special automatic macros filled in (if you use the $(origin) function on any of the argument macros ($1 etc.) you'll find that they are classed as automatic just like $@), you can use GNU Make built in functions to build up complex functions.
Here's a function that turns every / into a \ in a path:
unix_to_dos = $(subst /,\,$1)
using the built-in $(subst) function. Don't be worried about the use of / and \ there: GNU Make does very little escaping, and a literal \ is most of the time just a \.
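For example (a hypothetical path, just to show the call):

DOS_PATH := $(call unix_to_dos,p:/some/path)

leaves DOS_PATH containing p:\some\path.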
Some argument handling gotchas
When GNU Make is processing a $(call) it starts by splitting the argument list on commas to set $1 etc. The arguments are expanded so that $1 etc. are completely expanded before they are ever referenced (it's as if GNU Make used := to set them). This means that if an argument has a side-effect (such as calling $(shell)) then that side-effect will always occur as soon as the $(call) is executed, even if the argument was never actually used by the function.
One common problem is that if an argument contains a comma, the splitting of arguments can go wrong. For example, here's a simple function that swaps its two arguments:
swap = $2 $1
If you do $(call swap,first,argument,second), GNU Make doesn't have any way to know that the first argument was meant to be first,argument, and swap ends up returning argument first instead of second first,argument.
There are two ways around this. You could simply hide the first argument inside a macro. Since GNU Make doesn't expand the arguments until after splitting, a comma inside a macro will not cause any confusion:
FIRST := first,argument
SWAPPED := $(call swap,$(FIRST),second)
The other way to do this is to create a simple macro that just contains a comma and use that instead:
c := ,
SWAPPED := $(call swap,first$cargument,second)
Or even call that macro , and use it (with parens):
, := ,
SWAPPED := $(call swap,first$(,)argument,second)
Calling built-in functions
It's possible to use the $(call) syntax with built-in GNU Make functions. For example, you could call $(warning) like this:

$(call warning,danger danger)

This is useful because it means that you can pass any function name as an argument to a user-defined function and $(call) it without needing to know whether it's built in or not.
This gives you the ability to create functions that act on functions. The classic functional-programming map function (which applies a function to every member of a list, returning the resulting list) can be created in a single line.
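One common way to write it (this is a standard idiom, not necessarily the article's exact definition):

map = $(foreach a,$2,$(call $1,$a))

For example, $(call map,origin,MAKE HOME FOO) calls the built-in $(origin) on each word of the list and returns something like default environment undefined.

Source: http://www.agileconnection.com/article/gnu-make-user-defined-functions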
Least common multiple
In arithmetic and number theory, the least common multiple (also called the lowest common multiple or smallest common multiple) of two integers a and b, usually denoted by LCM(a, b), is the smallest positive integer that is divisible by both a and b. If either a or b is 0, LCM(a, b) is defined to be zero.
The LCM of more than two integers is also well-defined: it is the smallest integer that is divisible by each of them.
A multiple of a number is the product of that number and an integer. For example, 10 is a multiple of 5 because 5 × 2 = 10, so 10 is divisible by 5 and 2. Because 10 is the smallest positive integer that is divisible by both 5 and 2, it is the least common multiple of 5 and 2. By the same principle, 10 is the least common multiple of −5 and 2 as well.
What is the LCM of 4 and 6?
Multiples of 4 are:
- 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64, 68, 72, 76, ...
and the multiples of 6 are:
- 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 72, ...
Common multiples of 4 and 6 are simply the numbers that are in both lists:
- 12, 24, 36, 48, 60, 72, ....
So, from this list of the first few common multiples of the numbers 4 and 6, their least common multiple is 12.
When adding, subtracting, or comparing vulgar fractions, it is useful to find the least common multiple of the denominators, often called the lowest common denominator, because each of the fractions can then be expressed as a fraction with this denominator. For instance,

2/21 + 1/6 = 4/42 + 7/42 = 11/42

where the denominator 42 was used because it is the least common multiple of 21 and 6.
Computing the least common multiple
Reduction by the greatest common divisor
Many school-age children are taught the term greatest common factor (GCF) instead of the greatest common divisor (GCD); therefore, for those familiar with the concept of GCF, substitute GCF wherever GCD is used below.
The following formula reduces the problem of computing the least common multiple to the problem of computing the greatest common divisor (GCD):

lcm(a, b) = |a · b| / gcd(a, b)

This formula is also valid when exactly one of a and b is 0, since gcd(a, 0) = |a|.
Because gcd(a, b) is a divisor of both a and b, it is more efficient to compute the LCM by dividing before multiplying:

lcm(a, b) = (|a| / gcd(a, b)) · |b|

This reduces the size of one input for both the division and the multiplication, and reduces the required storage needed for intermediate results (it avoids overflow in the a · b computation). Because gcd(a, b) is a divisor of both a and b, the division is guaranteed to yield an integer, so the intermediate result can be stored in an integer. Done this way, the previous example becomes:

lcm(21, 6) = (21 / gcd(21, 6)) · 6 = (21 / 3) · 6 = 7 · 6 = 42
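In code, this reduction is essentially a one-liner; here is a sketch in Python (math.gcd is in the standard library, and Python 3.9+ also ships math.lcm directly):

    import math

    def lcm(a, b):
        # LCM via the GCD reduction; defined as 0 if either input is 0
        if a == 0 or b == 0:
            return 0
        return abs(a // math.gcd(a, b) * b)  # divide first to keep intermediates small

    print(lcm(21, 6))  # 42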
Finding least common multiples by prime factorization
The unique factorization theorem says that every positive integer greater than 1 can be written in only one way as a product of prime numbers. The prime numbers can be considered as the atomic elements which, when combined together, make up a composite number.
For example, 90 = 2 × 3 × 3 × 5. Here we have the composite number 90 made up of one atom of the prime number 2, two atoms of the prime number 3, and one atom of the prime number 5.
This knowledge can be used to find the LCM of a set of numbers.
Example: Find the value of lcm(8,9,21).
First, factor out each number and express it as a product of prime number powers:

8 = 2^3, 9 = 3^2, 21 = 3^1 × 7^1
The lcm will be the product of multiplying the highest power of each prime number together. The highest powers of the three prime numbers 2, 3, and 7 are 2^3, 3^2, and 7^1, respectively. Thus,

lcm(8, 9, 21) = 2^3 × 3^2 × 7^1 = 504
This method is not as efficient as reducing to the greatest common divisor, since there is no known general efficient algorithm for integer factorization, but is useful for illustrating concepts.
This method can be illustrated using a Venn diagram as follows. Find the prime factorization of each of the two numbers. Put the prime factors into a Venn diagram with one circle for each of the two numbers, and all factors they share in common in the intersection. To find the LCM, just multiply all of the prime numbers in the diagram.
Here is an example:
- 48 = 2 × 2 × 2 × 2 × 3,
- 180 = 2 × 2 × 3 × 3 × 5,
and what they share in common is two "2"s and a "3":
- Least common multiple = 2 × 2 × 2 × 2 × 3 × 3 × 5 = 720
- Greatest common divisor = 2 × 2 × 3 = 12
This also works for the greatest common divisor (GCD), except that instead of multiplying all of the numbers in the Venn diagram, one multiplies only the prime factors that are in the intersection. Thus the GCD of 48 and 180 is 2 × 2 × 3 = 12.
A simple algorithm
This method works as easily for finding the LCM of several integers.
Let there be a finite sequence of positive integers X = (x1, x2, ..., xn), n > 1. The algorithm proceeds in steps as follows: on each step m it examines and updates the sequence X(m) = (x1(m), x2(m), ..., xn(m)), X(1) = X. The purpose of the examination is to pick up the least (perhaps, one of many) element of the sequence X(m). Assuming xk0(m) is the selected element, the sequence X(m+1) is defined as
- xk(m+1) = xk(m), k ≠ k0
- xk0(m+1) = xk0(m) + xk0.
In other words, the least element is increased by the corresponding x whereas the rest of the elements pass from X(m) to X(m+1) unchanged.
The algorithm stops when all elements in sequence X(m) are equal. Their common value L is exactly LCM(X). (For a proof and an interactive simulation see reference below, Algorithm for Computing the LCM.)
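A direct transcription of this algorithm, sketched in Python for illustration:

    def lcm_seq(xs):
        # xs: finite sequence of positive integers, len(xs) > 1
        cur = list(xs)                # X(1) = X
        while len(set(cur)) > 1:      # stop when all elements are equal
            k = cur.index(min(cur))   # examine: pick (one of) the least element(s)
            cur[k] += xs[k]           # update: increase it by the corresponding x
        return cur[0]                 # the common value L is exactly LCM(X)

    print(lcm_seq([4, 6]))  # 12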
A method using a table
This method works for any number of factors. One begins by listing all of the numbers vertically in a table (in this example 4, 7, 12, 21, and 42):

 4
 7
12
21
42

The process begins by dividing all of the factors by 2. If any of them divides evenly, write 2 at the top of the table and the result of division by 2 of each factor in the space to the right of each factor and below the 2. If a number does not divide evenly, just rewrite the number again. If 2 does not divide evenly into any of the numbers, try 3.

     2
 4   2
 7   7
12   6
21  21
42  21

Now, check if 2 divides again:

     2   2
 4   2   1
 7   7   7
12   6   3
21  21  21
42  21  21

Once 2 no longer divides, divide by 3. If 3 no longer divides, try 5 and 7. Keep going until all of the numbers have been reduced to 1.

     2   2   3   7
 4   2   1   1   1
 7   7   7   7   1
12   6   3   1   1
21  21  21   7   1
42  21  21   7   1
Now, multiply the numbers on the top and you have the LCM. In this case, it is 2 × 2 × 3 × 7 = 84. You will get to the LCM the quickest if you use prime numbers and start from the lowest prime, 2.
Fundamental theorem of arithmetic
According to the fundamental theorem of arithmetic, every positive integer greater than 1 can be represented uniquely as a product of prime powers:

n = 2^n2 × 3^n3 × 5^n5 × 7^n7 ···

where the exponents n2, n3, n5, ... are non-negative integers (all but finitely many of them zero); for example, 84 = 2^2 × 3^1 × 5^0 × 7^1 × 11^0 × 13^0 ···
Given two positive integers a = ∏ p^(a_p) and b = ∏ p^(b_p), their least common multiple and greatest common divisor are given by the formulas

lcm(a, b) = ∏ p^max(a_p, b_p)    and    gcd(a, b) = ∏ p^min(a_p, b_p)
In fact, any rational number can be written uniquely as the product of primes if negative exponents are allowed. When this is done, the above formulas remain valid. Using the same examples as above:

48 = 2^4 × 3^1 × 5^0 and 180 = 2^2 × 3^2 × 5^1, so

lcm(48, 180) = 2^4 × 3^2 × 5^1 = 720 and gcd(48, 180) = 2^2 × 3^1 × 5^0 = 12
The positive integers may be partially ordered by divisibility: if a divides b (i.e. if b is an integer multiple of a) write a ≤ b (or equivalently, b ≥ a). (Forget the usual magnitude-based definition of ≤ in this section - it isn't used.)
Under this ordering, the positive integers become a lattice with meet given by the gcd and join given by the lcm. The proof is straightforward, if a bit tedious; it amounts to checking that lcm and gcd satisfy the axioms for meet and join. Putting the lcm and gcd into this more general context establishes a duality between them:
- If a formula involving integer variables, gcd, lcm, ≤ and ≥ is true, then the formula obtained by switching gcd with lcm and switching ≥ with ≤ is also true. (Remember ≤ is defined as divides).
The following pairs of dual formulas are special cases of general lattice-theoretic identities:

- Commutative laws: lcm(a, b) = lcm(b, a) and gcd(a, b) = gcd(b, a)
- Associative laws: lcm(a, lcm(b, c)) = lcm(lcm(a, b), c) and gcd(a, gcd(b, c)) = gcd(gcd(a, b), c)
- Absorption laws: lcm(a, gcd(a, b)) = a and gcd(a, lcm(a, b)) = a
This identity is self-dual:

gcd(lcm(a, b), lcm(b, c), lcm(a, c)) = lcm(gcd(a, b), gcd(b, c), gcd(a, c))
Let D be the product of ω(D) distinct prime numbers (i.e. D is squarefree).
Then the number of ordered pairs (x, y) of positive integers with lcm(x, y) = D is

|{(x, y) : lcm(x, y) = D}| = 3^ω(D)

where the absolute bars || denote the cardinality of a set.
The LCM in commutative rings
The least common multiple can be defined generally over commutative rings as follows: Let a and b be elements of a commutative ring R. A common multiple of a and b is an element m of R such that both a and b divide m (i.e. there exist elements x and y of R such that ax = m and by = m). A least common multiple of a and b is a common multiple that is minimal in the sense that for any other common multiple n of a and b, m divides n.
In general, two elements in a commutative ring can have no least common multiple or more than one. However, any two least common multiples of the same pair of elements are associates. In a unique factorization domain, any two elements have a least common multiple. In a principal ideal domain, the least common multiple of a and b can be characterised as a generator of the intersection of the ideals generated by a and b (the intersection of a collection of ideals is always an ideal). In principal ideal domains, one can even talk about the least common multiple of arbitrary collections of elements: it is a generator of the intersection of the ideals generated by the elements of the collection.
Source: http://en.wikipedia.org/wiki/Least_common_multiple
2. Sample Prolog Programs
In this chapter we provide several sample Prolog programs. The programs
are given in a progression from fairly simple programs to more complex
programs. The key goals of the presentation are to show several important
methods of knowledge representation in Prolog and the declarative programming
methodology of Prolog.
2.1 Map colorings
This section uses a famous mathematical problem -- that of coloring planar
maps -- to motivate logical representations of facts and rules in Prolog.
The Prolog program developed provides a representation for adjacent regions
in a map, shows a way to represent colorings, and gives a definition of when
a coloring is in conflict; that is, when two adjacent regions have the
same color. The section introduces the concept of a semantic program
clause tree to motivate the issue of semantics for logic-based programming.
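As a small taste of what this section develops, here is a sketch of the kind of representation involved (the predicate names are illustrative, not necessarily the section's exact ones):

adjacent(1,2).   adjacent(2,3).   adjacent(1,3).

color(1,red,a).   color(2,blue,a).   color(3,green,a).

conflict(Coloring) :-
    adjacent(X,Y),
    color(X,Color,Coloring),
    color(Y,Color,Coloring).

A coloring is in conflict if some pair of adjacent regions X and Y have the same Color in it.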
2.2 Two factorial definitions
This section introduces the student to computations of mathematical functions
using Prolog. Various built-in arithmetic operators are discussed. Also
discussed is the concept of a Prolog derivation tree, and how derivation
trees are related to tracings of Prolog.
2.3 Towers of Hanoi puzzle
This famous puzzle is formulated in Prolog. The discussion concerns both
the declarative and the procedural meanings of the program. The program
writes puzzle solutions to the screen.
2.4 Loading programs, editing programs
Examples show various ways to load programs into Prolog, and an example
of a program calling a system editor is given. The reader is encouraged
to read sections 3.1 and 3.2 on How Prolog Works before continuing.
2.5 Negation as failure
The section gives an introduction to Prolog's negation-as-failure feature,
with some simple examples. Further examples show some of the difficulties
that can be encountered for programs with negation as failure.
2.6 Tree data and relations
This section shows Prolog operator definitions for a simple tree structure.
Tree processing relations are defined and corresponding goals are studied.
2.7 Prolog lists
This section contains some of the most useful Prolog list accessing and
processing relations. Prolog's primary dynamic structure is the list, and
this structure will be used repeatedly in later sections.
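A typical example of the kind of relation covered there is the textbook definition of member/2, which both tests membership and, on backtracking, enumerates the elements of a list:

member(X,[X|_]).
member(X,[_|T]) :- member(X,T).

?- member(E,[a,b,c]).   % E = a ; E = b ; E = c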
2.8 Change for a dollar
A simple change maker program is studied. The important observation here
is how a Prolog predicate like 'member' can be used to generate choices,
the choices are checked to see whether they solve the problem, and then
backtracking on 'member' generates additional choices. This fundamental
generate and test strategy is very natural in Prolog.
2.9 Map coloring redux
We take another look at the map coloring problem introduced in Section
2.1. This time, the data representing region adjacency is stored in a list,
colors are supplied in a list, and the program generates colorings which
are then checked for correctness.
2.10 Simple I/O
This section discusses opening and closing files, and the reading and writing of terms.
2.11 Chess queens challenge puzzle.
This familiar puzzle is formulated in Prolog using a permutation generation
program from Section 2.7. Backtracking on permutations produces all solutions.
2.12 Set of answers
Prolog's 'setof' and 'bagof' predicates are presented. An implementation
of 'bagof' using 'assert' and 'retract' is given.
2.13 Truth table maker
This section designs a recursive evaluator for infix Boolean expressions,
and a program which prints a truth table for a Boolean expression. The
variables are extracted from the expression and the truth assignments are generated automatically.
2.14 DFA parser
A generic DFA parser is designed. Particular DFAs are represented as Prolog clauses.
2.15 Graph structures and paths
This section designs a path generator for graphs represented using a static
Prolog representation. This section serves as an introduction to and motivation
for the next section, where dynamic search grows the search graph as it proceeds.
2.16 Dynamic graph search
The previous section discussed path generation in a static graph. This
section develops a general Prolog framework for graph searching, where
the search graph is constructed as the search proceeds. This can be the
basis for some of the more sophisticated graph searching techniques used in artificial intelligence.
2.17 Animal identification game
This is a toy program for animal identification that has appeared in several
references in some form or another. We take the opportunity to give a unique
formulation using Prolog clauses as the rule database. The implementation
of verification of askable goals (questions) is especially clean. This
example is a good motivation for expert systems, which are studied in a later chapter.
2.18 Clauses as data
This section develops a Prolog program analysis tool. The program analyzes
a Prolog program to determine which procedures (predicates) use, or call,
which other procedures in the program. The program to be analyzed is loaded
dynamically and its clauses are processed as first-class data.
2.19 Actions and plans
An interesting prototype for action specifications and plan generation
is presented, using the toy blocks world. This important subject is continued
and expanded in Chapter 7.
The structure of DNA and RNA
Warming up the brain: As you probably recall from your genetics or cell
biology class, nucleic acids are made up of nucleotides, consisting of
bases (purines and pyrimidines), sugars (ribose or deoxyribose),
and a phosphate backbone.
Remember that we have some rules, called "Watson-Crick" base pairing, by
which adenylate nucleotides can hydrogen bond to thymidylate nucleotides (or
uridylate in RNA), while guanylate nucleotides hydrogen bond to cytidylate nucleotides.
A pairs with T (or U); G pairs with C.
Is this all starting to come back to you now? Let's find out.
About the bases
Stop me if you've heard this one...
A guy walks into a bar and says "My name's Chargaff, and 22% of my DNA is "A"
nucleotides. I'll bet anyone that they can't guess what percentage of my DNA is "C"
nucleotides!" You say "I'm thirsty, so I'll take that bet!" and then
Yes, it can be done! As Erwin explains, we have double-stranded DNA genomes, so if 22%
is "A", then there must also be 22% "T", because every "A"
base will be paired with a "T" base. You with me? So 22%+22%=44% is the
percentage of the DNA that is either "A" or "T". That implies
that the percentage that is "G" or "C" must be whatever is left,
or 100%-44%=56%. Every "G" must be base paired with a "C" and
every "C" must be base paired with a "G", so exactly half of
that 56% must be "C" bases. That is, 28% are "C" bases and 28%
are "G" bases.
[Photo gallery of the bases: red = oxygen, blue = nitrogen, white = hydrogen,
gray = carbon. What atom does the amber color represent?]
The nucleotide bases make up the core
of the double helix, as you can see in the picture below.
This snapshot comes from a site you'll probably want to investigate, an "Interactive
Animated Nonlinear Tutorial"
by Eric Martz, from the Department of Microbiology at the University of Massachusetts-Amherst.
Here's another good site to visit, to learn about the Chime plug-in,
and to study the overall structure of DNA. Chime is pronounced with a hard "K"
sound as in "kind", not a "Ch" sound as in "chair."
You can develop a real "feel" for molecules if you familiarize yourself
with the shareware RasMol (RasMac) program. With this program, you can
inspect crystallographic structures downloaded from Brookhaven National Labs, turning
the molecules on the screen so you can see them from every side and angle. Downloading
instructions are available on
the Web, as are instructions for finding molecules to play with.
If you have the Chime plug-in working, you may be able to see the following two examples,
generated by GLACTONE (http://chemistry.gsu.edu/glactone/). You may also download them directly and use them offline.
An AT base pair
How does hydrogen bonding come to pass?
Well, suppose this is a cherry, and you're going to make chocolate cupcakes with
cherries on top. You make the cake mix, fill the little cupcake holders and bake
the cupcakes. Then you put a cherry on top of each, and whip up a batch of chocolate
icing. Here is one, ready to cover with frosting!
Here's one that was covered well; in fact, it was so evenly covered with frosting
that you can no longer see the cherry!
Then, an interesting thing happens. On some of the cupcakes, the chocolate icing
is very thin. It dribbles down onto the cake, leaving the cherry somewhat visible
through the frosting.
It is almost as if the cupcake and the cherry are fighting for the frosting, and
the cupcake is winning!
In fact, sometimes the frosting gets so thin, that there's nothing left to hold the
cherry in place, so it pops out, leaving the frosting still stuck to the cake.
Hmmm... What does this make us think of? Why, polar covalent bonds, of course!
You see, some atoms are more electronegative
than others. Oxygen is more electronegative than hydrogen, so in an -OH group, the
oxygen takes more than its fair share of electrons. That's just like the cupcake
taking more than its fair share of frosting. The electrons get very thinly distributed
over the hydrogen and get more thickly distributed over the oxygen.
That gives a partial negative charge to the oxygen and a partial positive charge
to the hydrogen. Why? Because electrons are charged, and if more of their
density is distributed in one place, that place will carry a bit of charge.
Nitrogen can play the same trick, because it is also more electronegative than hydrogen.
On the other hand, carbon and hydrogen are about the same in electronegativity, so
they share the electrons pretty fairly. There will not be a partial charge on the
carbon, because the electrons are distributed evenly in the bond. The carbon-hydrogen
bond reminds us of the well-frosted cake - all neutrally distributed:
On the other hand, the oxygen-hydrogen and nitrogen-hydrogen bonds remind us of the
thinly-frosted cake, and the thin frosting leads to a "dipole moment",
or partial charge:
What's the difference between DNA and RNA?
DNA contains the sugar deoxyribose while RNA is made with the sugar ribose. It's
just a matter of a single 2' hydroxyl, which deoxyribose doesn't have, and ribose
does have. Of course, you all remember that RNA uses the base uracil instead
of thymine.
Cytosine naturally has a high rate of deamination to give uracil.
Cytosine deamination (i.e. water attacks!)
Uracil in DNA is a big no-no,
and there are specific enzymes called uracil N-glycosylases (from the gene called
ung, about which we'll have much more to say in a later lecture) that excise
the offending deoxyuridylate nucleotide so that it can be replaced. If the uracil
had arisen by deamination, then what will be the nucleotide base across from it?
There will be a G nucleotide across from it, if the mutation just occurred. That's
because the G was paired with the C that deaminated to a U. On the other hand, if
there is a round of DNA replication before the uracil N-glycosylase arrives on the
scene, then there will be an A nucleotide across from the U. That's because the U
will have had a chance to be a template in DNA replication, and U base pairs with A, just as T does.
If you're an organism that doesn't want
to end up looking like the Teenage
Mutant Ninja Turtles (who, as
you may recall, were suffering from the effects of a "retromutagen" that
made them behave like adolescent boys), then you should keep a sharp eye out for
deoxyuridylate nucleotides. The dU should be excised rapidly and replaced with a
C, so that these deamination events do not become "fixed" as a mutation.
Some types of mutations change a pyrimidine
to a different pyrimidine, or a purine to a different purine. We call these transition mutations. If a purine is mutated to a pyrimidine, then
it is a transversion
mutation. So, for example, a mutation
of A to T or C to A would be what? Right! A transversion, and a mutation of A to
G or T to G would be a transition.
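Since the classification is purely mechanical, a tiny Python sketch (an
illustration of mine, not from the lecture) can do it:

    PURINES = {"A", "G"}
    PYRIMIDINES = {"C", "T"}

    def mutation_type(old_base, new_base):
        # Transition: purine -> purine or pyrimidine -> pyrimidine.
        # Transversion: a purine swapped for a pyrimidine, or vice versa.
        if old_base == new_base:
            return "not a mutation"
        same_class = ({old_base, new_base} <= PURINES or
                      {old_base, new_base} <= PYRIMIDINES)
        return "transition" if same_class else "transversion"

    print(mutation_type("A", "G"))   # transition
    print(mutation_type("C", "A"))   # transversion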
Sometimes deoxycytosine is methylated on its "5 position," so what would
happen to the coding content of deoxy-5-methyl-cytosine if it were unlucky enough
to be naturally deaminated?
[Figure: Deamination of 5-methyl cytosine gives you... do you know my name?]
So you see the problem...the 5-methyl cytosine is deaminated to thymidine. The new
thymidine looks like any other thymidine - it's a mutation! A transition mutation,
because it is a pyrimidine changed to another pyrimidine.
Perhaps that is why there are so few CG dinucleotides in mammalian genomes. CG dinucleotides
are frequently methylated on the C base, so CG may frequently mutate to TG, leaving
CG "under-represented". In fact, clusters of CG dinucleotides are sometimes associated
with regulatory regions of genes, and we call them "CG islands" because
they stand out against a genome in which CG is otherwise so rare.
About the sugars
Now let's look at the sugar
component of nucleic acids. Remember which is ribose and which is deoxyribose?
There is a 5' end
and a 3' end to a nucleic acid. The 5' end frequently has
a phosphate attached, while the 3' end is typically a hydroxyl group. A single strand
of DNA has a "polarity" or "directionality."
It isn't like a piece of string, in which you cannot distinguish one end from the other.
[Figures: DNA vs. RNA sugars; deoxyribose with thymine base; ribose with
uracil base. Study the phosphate at the 5' end and the hydroxyl at the 3' end.]
Synthesize? Degrade? Sit and wait? How
does an enzyme like DNA polymerase Klenow Fragment know what to do next? Well, there
are some general rules of conduct that these enzymes learn in school, and you can
learn them too.
Rules of conduct for Klenow and T4 DNA polymerases
1. Remember your base
pairing rules: G goes with C and A goes with T.
2. The 5' ends are strictly
off limits, unless you have your holoenzyme license (and for your information, you
don't!)
3. There will be no
synthesis without a free 3' end, unless you have your RNA polymerase license (and
for your information, you don't!)
4. There will be no
degradation without a free 3' end, unless you have your endonuclease license (and
for your information, you don't!)
5. There will be no
synthesis without an underlying template, unless you have your terminal transferase
license (and for your information, you don't!). Excess nucleotide substrates are NOT
accepted as an excuse for untemplated additions to the 3' end.
6. Under no circumstances
may you make a synthetic addition to the 5' end (even holoenzymes are not permitted
to do that!). Having a template or substrate available is not an excuse for 3' to
7. There will be no
reconstruction of a broken phosphodiester bond, unless you have your ligase license
(and for your information, you don't!). If you are synthesizing DNA and run into
an obstruction on your template, you must stop and leave the nick unrepaired. You
may not excise the 5' nucleotide that is obstructing your path (see rule 2).
8. If you have no remaining
template, then you must excise the nucleotide at the 3' end (and don't be tempted
to break rule 5!). (repeat rule 8 until it does not apply).
9. If you have been
provided with a free 3' end, a template, and a substrate molecule that is correct,
you must add that nucleotide to the growing end of the strand (i.e. to the 3' end.)
10. If you have a free
3' end and a template, but after waiting for the appropriate number of milliseconds
you are still missing the appropriate nucleotide substrate for the next synthetic
step, you may go back and remove the one preceding nucleotide. Either of rules 9
or 10 may apply thereafter.
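To make a few of these rules concrete, here is a toy Python sketch of
template-directed extension (my illustration; it deliberately ignores
proofreading, rule 8, and everything concerning the 5' end):

    COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def extend_primer(template, primer):
        # template is written 3'->5' and primer 5'->3', so primer[i] pairs
        # with template[i]. Synthesis extends the primer's free 3' end
        # (rules 3 and 9) and stops when the template runs out (rule 5).
        new_strand = list(primer)
        for base in template[len(primer):]:
            new_strand.append(COMPLEMENT[base])   # rule 1: base pairing
        return "".join(new_strand)

    #                     3'-TACGGATC-5'
    print(extend_primer("TACGGATC", "ATG"))   # ATGCCTAG, written 5'->3'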
Geometry is heavily tested on the GRE Math section, and a thorough review of geometrical concepts is essential to a high score. Consider the following problem:
“If the length of an edge of a cube X is twice the length of an edge of cube Y, what is the ratio of the volume of cube Y to the volume of cube X?”
The easiest way to solve this is to pick a number for the initial edge length and plug it into the problem. For instance, let’s say cube X is a 4x4x4 cube. Cube X would have a volume of 64. Cube Y would have to be a 2x2x2 cube, since 2 is half of 4, and it would have a volume of 8. The ratio of the volume of cube Y to the volume of cube X would thus be 8 to 64, or 1/8.
However, you really should have known that to begin with. Imagine that Cube X had edges that were three times as long as those of Cube Y. Then Cube X would now be a 6x6x6 cube if Cube Y remains a 2x2x2 cube, and the volume ratio would be 8 to 216, or 1/27. Notice something? 8 is 2^3, and 27 is 3^3. If the ratio of the sides is 1:4, the ratio of the volumes will be 1:64. If the ratio of the sides is 1:5, the ratio of the volumes will be 1:125. Since these are cubes, you just cube the ratios. 1^3 is 1, and 4^3 is 64; 5^3 is 125. If you know this simple property of the relationship between length and volume, it will take a problem that would take 30 seconds to solve and turn it into a problem that takes 5 seconds to solve. On a timed exam, that could be the difference between getting another, harder question right or wrong. Memorizing these kinds of mathematical facts is something that the GRE test writers expect top scorers to do, and they write the questions so that they can be solved quickly if you know them. It also pays to memorize the squares and cubes of the numbers 1 through 12.
So with cubes, you cube the ratio of the sides. What about squares? If you guessed that you square the ratio of the side lengths in order to get the ratio of the areas, you’d be right, as you can see from a quick demonstration. If the original square has side lengths of 1 and the new square has side lengths of 2, the side ratio is 1:2 and the area ratio is 1:4. If the new square has side lengths of 3, then the side ratio is 1:3 and the area ratio is 1:9. If the new square has side lengths of 4, then the side ratio is 1:4 and the area ratio is 1:16, and so on. Sure enough, you just square the original ratio.
So now you know about cubes and squares, but what about tesseracts? "Tessawhats?" you say? A tesseract is to a cube what a cube is to a square, just as a cube is to a square what a square is to a line. Still confused? Let me explain it this way: say you draw a line a foot long running from east to west. This line only exists in one dimension: east-west. Then, you decide to square it by adding three more lines: two perpendicular to it running north to south and one parallel to it running east to west. This square exists in two dimensions: east-west and north-south. Now you decide to turn the square into a cube by adding lines in the up-down dimension, so that each edge of the original square is now the edge of another square emanating from it. This cube exists in three spatial dimensions: east-west, north-south, and up-down. Now you take this cube you've made and decide to square it…in a fourth spatial dimension.
What is this fourth dimension? Who knows. We live in a world in which we experience only three spatial dimensions, so it is impossible for us to imagine what a four-dimensional object would look like. That hasn't stopped mathematicians from naming four-dimensional objects, and this hypercube I've just described to you is called a tesseract. As you know, even though a cube is a three-dimensional object, it is possible to draw a cube on a piece of paper in only two dimensions by using perspective and all those other artistic illusions. Likewise, some have attempted to render tesseracts in three dimensions in order to give some approximation of what they might look like. Having never seen an actual tesseract, though, you might still find these representations confusing.
In terms of doing calculations, though, tesseracts are simple as can be. For a square with side lengths of 1 and another square with side lengths of 2, the ratio of side lengths is 1:2^1 (since sides are 1 dimensional), or 1:2, and the ratio of areas will be 1:2^2 (since squares are 2 dimensional) or 1:4. For a cube with side lengths of 1 and another cube with side lengths of 2, the ratio of volumes is 1:2^3 (since cubes are 3 dimensional), or 1:8. So, for a tesseract with side lengths of 1 and another tesseract with side lengths of 2, the ratio of hypervolumes(?) is 1:2^4 (since tesseracts are 4 dimensional), or 1:16. It just follows the pattern. Try not to think about it too much.
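The whole pattern fits in a one-line Python function (my sketch, not from
the post):

    def measure_ratio(side_ratio, dimension):
        # Ratio of areas (d=2), volumes (d=3), hypervolumes (d=4), ...
        # for similar figures whose sides are in the given ratio.
        return side_ratio ** dimension

    print(measure_ratio(2, 2))   # squares: 4, i.e. 1:4
    print(measure_ratio(2, 3))   # cubes: 8, i.e. 1:8
    print(measure_ratio(2, 4))   # tesseracts: 16, i.e. 1:16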
If you’re having trouble with tesseracts, don’t worry. They’re not on the test. I just wrote about them to mess with your head.
Remember, if you ever want extra help getting ready for the GRE, you can always study with experts like me through Test Masters. Until then, happy studying!
Smooth muscle is responsible for the contractility of hollow organs, such as blood vessels, the gastrointestinal tract, the bladder, or the uterus. Its structure differs greatly from that of skeletal muscle, although it can develop isometric force per cross-sectional area that is equal to that of skeletal muscle. However, the speed of smooth muscle contraction is only a small fraction of that of skeletal muscle.
Structure: The most striking feature of smooth muscle is the lack of visible cross striations (hence the name smooth). Smooth muscle fibers are much smaller (2-10 µm in diameter) than skeletal muscle fibers (10-100 µm). It is customary to classify smooth muscle as single-unit and multi-unit smooth muscle (Fig. SM1). The fibers are assembled in different ways. The muscle fibers making up the single-unit muscle are gathered into dense sheets or bands. Though the fibers run roughly parallel, they are densely and irregularly packed together, most often so that the narrower portion of one fiber lies against the wider portion of its neighbor. These fibers have connections: the plasma membranes of two neighboring fibers form gap junctions that act as low-resistance pathways for the rapid spread of electrical signals throughout the tissue. The multi-unit smooth muscle fibers have no interconnecting bridges. They are mingled with connective tissue fibers.
Fig. SM1. Single-unit and multi-unit smooth muscle.
Innervation and stimulation: Smooth muscle is primarily under the control of the autonomic nervous system, whereas skeletal muscle is under the control of the somatic nervous system. The single-unit smooth muscle has pacemaker regions where contractions are spontaneously and rhythmically generated. The fibers contract in unison; that is, single-unit smooth muscle is syncytial. The fibers of multi-unit smooth muscle are innervated by sympathetic and parasympathetic nerve fibers and respond independently of each other upon nerve stimulation.
Nerve stimulation in smooth muscle causes membrane depolarization, as in skeletal muscle. Excitation, the electrochemical event occurring at the membrane, is followed by the mechanical event, contraction. In the case of smooth muscle, this excitation-contraction coupling is termed electromechanical coupling; the link for the coupling is Ca2+ that permeates from the extracellular space into the intracellular water of smooth muscle. There is another excitation mechanism in smooth muscle, which is independent of the membrane potential change; it is based on receptor activation by drugs or hormones followed by muscle contraction. This is termed pharmacomechanical coupling. The link is Ca2+ that is released from an internal source, the sarcoplasmic reticulum.
The role of mechanical events of smooth muscle in the wall of hollow organs is twofold: 1) Its tonic contraction maintains organ dimensions against imposed load. 2) Force development and muscle shortening, like in skeletal muscle.
Myofibril proteins: In general, smooth muscle contains much less protein (~110 mg/g muscle) than skeletal muscle (~200 mg/g). Notable is the decreased myosin content, ~20 mg/g in smooth muscle versus ~80 mg/g in skeletal muscle. On the other hand, the amounts of actin and tropomyosin are the same in both types of muscle. Smooth muscle does not contain troponin; instead, there are two other thin filament proteins, caldesmon and calponin.
The amino acid sequence of smooth muscle actin is very similar to that of its skeletal muscle counterpart, and it seems likely that their three-dimensional structures are also similar. Smooth muscle actin combines with either smooth or skeletal muscle myosin. However, there is a major difference in the activation of myosin ATPase by actin: smooth muscle myosin has to be phosphorylated for actin-activation to occur.
The size and shape of the smooth muscle myosin molecule are similar to those of the skeletal muscle myosin (Fig. M1). There is a small difference in the light chain composition; out of the four light chains of the smooth muscle myosin, two have a molecular weight of 20,000 and two of 17,000. The 20,000 light chain is phosphorylatable. Upon phosphorylation of the light chain, the actin-activated smooth muscle myosin ATPase increases about 50-fold, to about 0.16 mol ATP hydrolyzed per mol of myosin head per sec, at physiological ionic strength and temperature. (Under the same conditions, the actin-activated skeletal muscle myosin ATPase is 10-20 mol/mol/sec.) The ionic strength dependence of smooth muscle myosin Ca2+-activated ATPase also differs from that of skeletal muscle myosin (Fig. M5): increasing ionic strength increases the smooth muscle myosin ATPase but decreases the skeletal muscle myosin ATPase.
Four smooth muscle specific myosin heavy chain isoforms are known (described in Quevillon-Cheruel et al., 1999). Two isoforms (named SMB and SMA) are defined by the presence or the absence of an insert of seven amino acids in the N-terminal globular head region. The two others (SM1 and SM2) differ at their C-termini by 43 versus 9 amino acids. To understand the role of the C-terminal extremities of SM1 and SM2 in smooth muscle thick filament assembly, various fragments of these myosins, such as the rod region, the rod with no tailpiece, or light meromyosins, were prepared as recombinant proteins in bacterial cells (Rovner et al., 2002; Quevillon-Cheruel et al., 1999). The results showed that the smooth muscle myosin tailpieces differentially affect filament assembly and suggested that homogeneous thick filaments containing SM1 or SM2 myosin could serve distinct functions within smooth muscle cells.
Although the mechanism of thick filament assembly for purified smooth muscle myosins in vitro has been described, the regulation of thick filament formation in intact muscle is poorly understood. Cross-sectional density of the thick filaments measured electron microscopically in intact airway smooth muscle (Herrera et al., 2002) showed that the density increased substantially (144%) when the muscle was activated. In resting muscle, in the absence of Ca2+, the filament density decreased by 35%. It appears that in smooth muscle filamentous myosin exists in equilibrium with monomeric myosin; activation favors filament formation.
Kathleen Trybus pioneered in expressing and purifying smooth muscle myosin subfragments using the baculovirus/insect cell expression system. This procedure and the methods needed to characterize the new proteins (gel assays, ATPase activity determinations, transient state kinetic parameters, and the in vitro motility assay) are described in her review (Trybus, 2000). Studies on engineered smooth muscle myosin and heavy meromyosin showed: the interaction between the regulatory light chain domains on two heads is critical for regulation of smooth muscle myosin (Li et al., 2000; Sweeney et al., 2000), a long, weakly charged actin-binding loop is required for phosphorylation-dependent regulation of smooth muscle myosin (Rovner, 1998), and coiled-coil unwinding at the smooth muscle myosin head-rod junction is required for optimal mechanical performance (Lauzon et al., 2001).
In vitro, both caldesmon and calponin inhibit the actin-activated ATPase activity of phosphorylated smooth muscle myosin. In the case of calponin, this inhibitory activity is reversed by the binding of Ca2+-calmodulin or by phosphorylation. Calponin is a 34-kDa protein containing binding sites for actin, tropomyosin and Ca2+-calmodulin. Caldesmon is a long, flexible, 87-kDa protein containing binding sites for myosin, as well as actin, tropomyosin, and Ca2+-calmodulin. Electron microscopy and three-dimensional image reconstruction of isolated smooth muscle thin filaments revealed that calponin and caldesmon are located peripherally along the long-pitch actin helix (Hodgkinson et al., 1997; Lehman et al., 1997). The physiological role of caldesmon or calponin is not known.
Phosphorylation and Dephosphorylation of the 20-kDa Myosin Light Chain
Myosin light chain kinase and myosin light chain phosphatase: Smooth muscle (as well as skeletal and cardiac muscle) contains myosin light chain kinase (MLCK), activated by Ca2+-calmodulin, the enzyme which transfers the terminal phosphate group of ATP to serine (and/or threonine) hydroxyl groups of phosphorylatable light chain (LC) according to the following reaction:
LC-OH + MgATP2- → LC-O-PO32- + MgADP- + H+   (1)
Dephosphorylation is brought about by smooth muscle myosin light chain phosphatase (MLCP) according to the following reaction:
LC-O-PO32- + H2O → LC-OH + HPO42-   (2)
The properties of MLCK are reviewed by Stull et al. (1996) and the properties of MLCP are reviewed by Hartshorne et al. (1998).
It is generally believed that LC phosphorylation-dephosphorylation controls the contraction-relaxation cycle of smooth muscle.
For a long time, research focused on the role of MLCK in smooth muscle contractility, but recently the interest shifted to MLCP. It turned out that MLCP is composed of three subunits: a catalytic subunit of 37-38-kDa of the type 1 phosphatase, a subunit of about 20-kDa whose function is not known, and a larger 110-130-kDa subunit that targets MLCP to myosin. The phosphatase activity of the catalytic subunit is low and it is enhanced significantly by addition of the targeting subunit. Upon phosphorylation of serine and threonine residues in the targeting subunit, its activating effect on the catalytic subunit is lost, and thereby the MLCP holoenzyme is inhibited.
Recent reports (Feng et al., 1999; Kaibuchi et al., 1999; Nagumo et al., 2000; Somlyo and Somlyo, 2000; Sward et al., 2000) indicate that in smooth muscle a Rho-regulated system of MLCP exists. Rho-kinase is the major player in this system; the enzyme phosphorylates the 130-kDa myosin binding subunit of MLCP and thereby inhibits MLCP activity. Due to the antagonism between MLCK and MLCP, inhibition of MLCP results in an increase in the phosphoryl content of LC with a concomitant increase in muscle force. Under these conditions, submaximal Ca2+ levels are sufficient for maximal force, a phenomenon called increased Ca2+ sensitivity (Somlyo and Somlyo, 1994). Specific inhibitors for Rho-kinase, Y-27632 (Feng et al., 1999; Kaibuchi et al., 1999) and HA-1077 (Nagumo et al., 2000; Sward et al., 2000), are available.
MLCP activity can also be inhibited by a 17-kDa myosin phosphatase inhibitor protein called CPI-17 (Kitazawa et al., 2000), which inhibits the catalytic subunit of MLCP and the holoenzyme MLCP. Phosphorylation of CPI-17 at Thr38 increases its inhibitory potency 1000-fold. The solution NMR structure of CPI-17 has been determined (Ohki et al., 2001); it forms a novel four-helix bundle. Phosphorylation of Thr38 induces a conformational change involving displacement of one helix without significant movement of the other three helices. Rho-kinases and PKC are responsible for the phosphorylation of CPI-17.
A rich array of second messengers regulates MLCP activity under physiological and pathological conditions (Solaro, 2000) through phosphorylation of either the targeting subunit of MLCP or CPI-17.
Myosin light chain phosphorylation in intact smooth muscle: 32P-labeling of the muscle is a reliable method for such studies. When a dissected smooth muscle, e.g. artery or a uterine strip, is incubated at 37°C in physiological salt solution containing radioactive inorganic phosphate, the 32P permeates the plasma membrane and enters the intracellular space of the muscle. Through the oxidative phosphorylation mechanism the 32P incorporates into the terminal P group of ATP:
ADP + 32P → ADP32P
Transfer of the terminal 32P of ATP to LC-OH by MLCK (equation 1) yields the radioactive LC-O-32PO32- species that can be isolated and quantified. The isolation involves two-dimensional (2D) gel electrophoresis and the quantification requires measuring the specific radioactivity of the terminal P of ATP.
Smooth muscle contraction is correlated with LC phosphorylation (reviewed by Bárány and Bárány, 1996c). Fig. SM2 illustrates an experiment: Two carotid arteries were dissected from freshly killed pigs and labeled with 32P. One artery was contracted with KCl for 30 sec then frozen in liquid nitrogen, while the other artery was frozen in the resting state. The arteries were pulverized, washed with perchloric acid to precipitate the muscle proteins and remove 32P-containing phosphate metabolites from the muscle. The washed residue was neutralized with a NaOH solution then dissolved in sodium dodecyl sulfate (SDS). After centrifugation at high speed to remove insoluble particles, the protein content of the supernatant was determined and aliquots of 360 mg protein were subjected to 2D polyacrylamide gel electrophoresis. This procedure separates the proteins according to their charge (pH 4-6) in the first dimension and according to their size (SDS ) in the second dimension. After staining, the profile of the arterial proteins appeared, shown in the upper row of Fig. SM2. LC, is in the lower middle part of the gel, it contains multiple spots. The LC spots were scanned, the staining intensities are shown in the lower row of the Figure. The radioactive spots on the gel were detected by autoradiography, the middle row of Fig. SM2 shows the black spots on the film corresponding to the radioactive spots on the gel.
Visual inspection of the radioactive LC spots in the Figure shows much more radioactivity in LC from the contracting muscle (right) than from the resting muscle (left). One can calculate the incorporation of the 32P-phosphate into LC as follows. First one has to determine the specific radioactivity of the terminal P of ATP from the muscle. The ATP is in the perchloric acid extract of the frozen and pulverized muscle, described before, and Bárány and Bárány (1996c) describe the determination of the specific radioactivity. The next step is the determination of the radioactivity in LC: the gel spots are excised, digested with H2O2, and after the gel is dissolved, radioactivity (counts per minute) is measured. The extent of LC phosphorylation can be calculated from the radioactivity in the LC spots and in the terminal phosphate of ATP, from the total protein applied onto the gel, and from the LC content of the total protein (Bárány and Bárány, 1996c). Such a calculation shows that under conditions of Fig. SM2, the LC of the resting muscle contained 0.25 mol 32P-phosphate/mol LC, whereas the LC of the contracting muscle contained 0.70 mol. Thus, 0.45 mol 32P-phosphate was transferred by MLCK from the terminal phosphate of ADP32P to free LC-OH groups as the result of muscle contraction.
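The arithmetic behind that calculation can be sketched in a few lines of
Python (the function, its parameter names, and the example values are mine;
in particular, LC's share of the total protein is an assumed input):

    def mol_phosphate_per_mol_lc(lc_cpm, sa_terminal_p_cpm_per_mol,
                                 protein_on_gel_g, lc_fraction,
                                 lc_mol_weight=20000.0):
        # Counts in the LC spots divided by the specific activity of the
        # ATP terminal phosphate give mol of transferred 32P-phosphate;
        # the protein load times LC's share of it gives mol of LC.
        mol_p = lc_cpm / sa_terminal_p_cpm_per_mol
        mol_lc = protein_on_gel_g * lc_fraction / lc_mol_weight
        return mol_p / mol_lc

    # Illustrative numbers only:
    print(mol_phosphate_per_mol_lc(450, 1.0e12, 360e-6, 0.05))   # 0.5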
Fig. SM2. Light chain phosphorylation during smooth muscle contraction as studied by 2D gel electrophoresis. (Bárány and Bárány, 1996a, with permission from Biochemistry of Smooth Muscle Contraction, 1996, Academic Press). Left, 32P-labeled porcine carotid arterial muscle was frozen at rest. Right, 32P-labeled porcine carotid arterial muscle was frozen 30 sec after 100 mM KCl challenge. Upper panel shows the Coomassie blue staining pattern of the arterial proteins; middle panel shows the corresponding autoradiograms; bottom panel shows the corresponding densitometric scans of LC.
Isoforms of the 20-kDa myosin light chain: Protein isoforms have the same size but different charge. They are generated either by protein modification or genetic alteration. Protein phosphorylation is the physiological protein modification, because phosphorylation of a protein increases its negative charge. Thus, LC has at least two isoforms, a non-phosphorylated and a phosphorylated one. Genetic alteration changes the amino acid composition of a protein, thereby providing at least two isoforms. For instance, completely dephosphorylated LC exhibits two spots on 2D gels (Fig. SM3) with a percentage distribution of 85% and 15%, corresponding to the major and minor LC isoforms.
Fig. SM3. Myosin light chain isoforms as analyzed by 2D gel electrophoresis. LC was dephosphorylated by homogenizing porcine carotid arteries in 150 mM NaCl and 1 mM EGTA, followed by incubation at 25°C for 2 hours. Top, stained gel; LC spots are numbered as 2 and 4, corresponding to their isoform number. Bottom, densitometric tracing of the LC spots.
Figure SM4 illustrates the formation of LC isoforms as a result of phosphorylation. The major isoform (LCa), when mono-phosphorylated (PLCa), moves into Spot 3, and when it is di-phosphorylated (2PLCa) moves into Spot 2. The same Spot 2 also contains the non-phosphorylated minor isoform (LCb); thus the comigration of the di-phosphorylated LC isoform with the minor isoform makes Spot 2 radioactive. This explains why, out of the four LC spots, three are phosphorylated. The mono-phosphorylated minor isoform (PLCb) moves into Spot 1, which is the most acidic spot.
Fig. SM4. Scheme for the explanation of four stained and three radioactive LC spots, shown on Fig. SM2. (Bárány and Bárány, 1996a, with permission from Biochemistry of Smooth Muscle Contraction, 1996, Academic Press).
Phosphorylation site: The amino acid sequence of LC exhibits similarity among LCs from various smooth muscles. Such a conserved sequence suggests a functional significance for the protein. The phosphorylation sites are located at the amino-terminal part of the LC molecule, shown in Fig. SM5. Serine 19 is the site that is phosphorylated by MLCK in the intact muscle. Threonine 18 is only rarely phosphorylated by MLCK. Besides MLCK, protein kinase C (PKC) also phosphorylates LC; the sites involve Serine 1, Serine 2, and Threonine 9.
Fig. SM5. Phosphorylation sites of LC.
Two-dimensional tryptic peptide mapping: Phosphopeptide maps differentiate MLCK-catalyzed LC phosphorylation from that catalyzed by PKC (Erdödi et al., 1988). Fig. SM6 illustrates the experiment: With ADP32P as a substrate, pure LC was phosphorylated either by MLCK (middle panel) or PKC (right panel). Actomyosin, which contains endogenous LC, MLCK, and PKC, was also phosphorylated (left panel). The 32P-LC was isolated by 2D gel electrophoresis, digested by trypsin, and the peptides were separated by 2D peptide mapping. The map of LC phosphorylated by MLCK exhibits four peptides: A and B, both containing serine residues, corresponding to the Ser-19 site, and C and D, both containing threonine, corresponding to the Thr-18 site. When LC is phosphorylated by PKC, the map exhibits two peptides: E, containing serine, corresponding to the Ser-1 or Ser-2 site, and F, containing threonine, corresponding to the Thr-9 site. When LC is phosphorylated in actomyosin, peptides characteristic of both MLCK and PKC phosphorylation are present.
Fig. SM6. Autoradiograms of 2D phosphopeptide maps of LC tryptic digests.
Fig. SM7. Phosphopeptide maps of LC from K+-contracted muscle versus PDBu-treated muscle.
The role of Ca2+ in light chain phosphorylation: As in skeletal muscle, Ca2+ also plays a central role in the contractility of smooth muscle. In skeletal muscle TN-C is the target of the myoplasmic Ca2+, whereas in smooth muscle Ca2+ activates MLCK. Actually, the Ca2+ complexed to calmodulin is the activator of the enzyme. In agreement with the in vitro studies, intact smooth muscle ceases contracting when Ca2+ is omitted from the bathing solution, or when it is complexed with EGTA. Furthermore, inhibitors of calmodulin, such as trifluoperazine or chlorpromazine, inhibit smooth muscle contraction.
In the resting muscle there is about 0.1 µM Ca2+; upon stimulation the Ca2+ concentration increases about 100-fold through electromechanical or pharmacomechanical coupling. It is conventional to use fluorescent indicators to follow changes in the intracellular Ca2+ concentration immediately after the stimulation and during the plateau of the mechanical activity. Large variations are reported, depending on the nature of the smooth muscle, the tissue preparation, or the drug used. However, all investigators agree that in order to elicit relaxation the Ca2+ level in the sarcoplasm must be returned near to the resting value. Two mechanisms participate in decreasing the Ca2+ level: 1) The plasma membrane Ca2+-transporting ATPase pumps Ca2+ from the inside into the extracellular space. 2) The sarco(endo)plasmic reticulum Ca2+-transporting ATPase pumps Ca2+ into the SR.
Stretch-induced light chain phosphorylation: As discussed before, smooth muscle can be stimulated electrically or by chemical agents. Here we describe the mechanochemical activation of smooth muscle. Stretching of arterial or uterine muscles induced light chain phosphorylation to the same extent as was observed in muscles contracted by K+ or norepinephrine (Bárány and Bárány, 1996c). Muscles which were stretched 1.6 times their resting length did not develop tension, but contracted normally when the stretch was released and the muscles were allowed to return to their rest length. Importantly, this contraction was spontaneous, indicating that the stretch-induced activation carries all the information necessary for normal contraction. Mobilization of Ca2+ was necessary for the stretch-induced light chain phosphorylation and contraction to occur. When EGTA (the strong Ca2+ complexing agent) was added to the muscle bath both the stretch-induced phosphorylation and the stretch-release-induced tension were inhibited; however, upon removal of EGTA by washings, both processes were fully restored. Treatment of the muscle with chlorpromazine (the calmodulin inhibitor) also abolished both the stretch-induced LC phosphorylation and the stretch-release-induced tension development. These results suggest the presence of mechanosensitive receptors in smooth muscle that are interacting with Ca2+ release channels in SR.
Further comments are warranted on the finding that 1.6 times stretched muscles, which are unable to contract (because there is no overlap between actin and myosin filaments), are able to fully phosphorylate their LC. Accordingly, smooth muscle contraction and LC phosphorylation are not coupled. Time-course experiments also demonstrated that LC phosphorylation precedes tension development. Thus, LC phosphorylation plays a role in the activation process but not in the contraction per se. Furthermore, K+-contracted muscle maintains its tension for a prolonged time although its LC becomes dephosphorylated. This is another example of the lack of coupling between the phosphate content of LC and the contractility of muscle.
Phosphorylation of Heat-Shock Proteins
Low molecular weight heat shock proteins are phosphorylated in smooth muscle: A 27-28-kDa protein is phosphorylated in various intact smooth muscles and smooth muscle cells (reviewed in Bárány and Bárány, 1996c). Cyclic nucleotide-dependent vasorelaxation is associated with the phosphorylation of a 20-kDa heat shock protein, called HSP20 (Beall et al., 1997; Rembold et al., 2000). It was found that HSP20 is an actin-associated protein (Brophy et al., 1999; Rembold et al., 2000), suggesting that smooth muscle relaxation may be brought about by the binding of the phosphorylated HSP20 to the actin filaments.
The binding of an agonist (e.g. norepinephrine or oxytocin) to the surface receptor of smooth muscle induces a signal that spreads from the outside to the inside of the plasma membrane and activates several effectors that ultimately initiate contraction. There are three components of this system that we discuss: 1) Inositol 1,4,5-trisphosphate, 2) G-proteins, 3) Phosphoinositide-specific phospholipase C.
Inositol 1,4,5-trisphosphate: The inositol ring contains six hydroxyl residues, most of which can be phosphorylated by specific kinases. Inositol 1-monophosphate is the constituent of phosphatidylinositol (PI), one of the phospholipids in animal cell membranes. PI is sequentially phosphorylated by PI 4-kinase and PI(4)P 5-kinase to generate PI(4)P and PI(4,5)P2, respectively. Inside the cell membrane resides a phosphoinositide-specific phospholipase C; one of its hydrolytic products is inositol 1,4,5-trisphosphate (IP3) (see Fig. SM8).
Fig. SM8. D-myo-inositol 1,4,5-trisphosphate. (Bárány and Bárány, 1996b, with permission from Biochemistry of Smooth Muscle Contraction, 1996, Academic Press). The arrow indicates the site of the ester link with diacylglycerol in phosphatidylinositol. The negative charge of the phosphate group is not indicated.
G-proteins: The guanine nucleotide binding proteins (G-proteins) are heterotrimers consisting of α-, β- and γ-subunits. The α-subunits appear to be most diverse and are believed to be responsible for the specificity of the interaction of different G-proteins with their effectors. Fig. SM9 depicts a simple model for the activation of G-proteins. In the basal state, the α-subunit contains bound GDP and association of α- and βγ-subunits is highly favored, keeping the G-protein in the inactive form. Stimulation of the G-protein results when it binds GTP rather than GDP. Receptors interact most efficiently with the heterotrimeric form of the G-protein and accelerate activation by increasing the rate of dissociation of GDP and enhancing the association of GTP. Activation of a G-protein coupled receptor results in the dissociation of heterotrimeric G-proteins into α-subunits and βγ-dimers. Finally, the G-protein α-subunit has an intrinsic hydrolytic activity that slowly converts GTP to GDP and returns the G-protein to its inactive form.
Fig. SM9. Model for the activation of G-proteins. (Bárány and Bárány, 1996b, with permission from Biochemistry of Smooth Muscle Contraction, 1996, Academic Press).
Phosphoinositide-specific phospholipase C: This term refers to a family of enzymes all specific for the phosphoinositide moiety of the phosphatidylinositol, but differing in their specificity depending on the number of the phosphoryl groups on the inositol ring. The β-, γ- and δ-isoforms of PI-phospholipase C (PI-PLC) show the greatest specificity for the trisphosphorylated phospholipid (PIP2). There are two basic mechanisms by which agonists activate PIP2 hydrolysis (Fig. SM10). In the case of hormones, neurotransmitters, and certain other agonists, the signal is transduced to β-isozymes of PI-PLC. The upper left row of Fig. SM10 shows the most common pathway for PI-PLC-β isoform activation, initiated by stimulation of an α1-adrenergic receptor (α1-R) with norepinephrine (NE), and involving Gαq-proteins. The lower left row shows the activation of PI-PLC-β isoforms, initiated by acetylcholine (ACH) stimulation of the M2-muscarinic receptor (M2-R), and mediated by the βγ-subunit of the pertussis toxin-sensitive G-protein (Gi). Concerning the other basic activating mechanism, e.g. in the case of growth factors, activation of their receptors results in enhanced tyrosine kinase activity. The right part of Fig. SM10 shows the activation of PI-PLC-γ isoforms, initiated by the binding of epidermal growth factor (EGF) to its receptors, and executed by the tyrosine phosphorylation (YP) of PI-PLC-γ. In all three examples, the activated PI-PLC hydrolyzes PIP2 to form the messengers IP3 and diacylglycerol (DAG). IP3 releases Ca2+ from the sarcoplasmic reticulum and thereby initiates smooth muscle contraction. DAG activates protein kinase C; the exact result of this activation is not known at the cellular level.
Fig. SM10. Pathways for activation of PI-PLC isoforms. (Bárány and Bárány, 1996b, with permission from Biochemistry of Smooth Muscle Contraction, 1996, Academic Press).
The Contractile Event of Smooth Muscle
A scheme for smooth muscle contraction is shown in Fig. SM11. Contraction is initiated by the increase of Ca2+ in the myoplasm; this happens in the following ways:
- Ca2+ may enter from the extracellular fluid through channels in the plasmalemma. These channels open when the muscle is electrically stimulated or the plasmalemma is depolarized by excess K+.
- Due to agonist-induced receptor activation, Ca2+ may be released from the sarcoplasmic reticulum (SR). In this pathway, the activated receptor interacts with a G-protein (G), which in turn activates phospholipase C (PLC). The activated PLC hydrolyzes phosphatidylinositol bisphosphate; one product of the hydrolysis is inositol 1,4,5-trisphosphate (IP3). IP3 binds to its receptor on the surface of SR; this opens Ca2+ channels and Ca2+ from SR enters the myoplasm.
- Ca2+ combines with calmodulin (CaM) and the Ca2+-CaM complex activates MLCK, which in turn phosphorylates LC. The phosphorylated myosin filament combines with the actin filament and the muscle contracts.
Fig. SM11. A scheme for smooth muscle contraction. (Bárány, 1996, with permission from Biochemistry of Smooth Muscle Contraction, 1996, Academic Press).
Two books (Bárány, 1996; Kao and Carsten, 1997) and a special journal issue (Murphy, 1999) are recommended for further study of the mechanism of smooth muscle contraction and relaxation.
Monomer (G) to Polymer (F) Transformation of Actin in Smooth Muscle
Mehta and Gunst (1999) and Jones et al. (1999) reported the existence of G-actin in smooth muscle, based on the method of DNase I inhibition and phalloidin staining, respectively. Subsequently, Bárány et al. (2001) showed the exchange of the actin-bound nucleotide in intact smooth muscle. This was based on the separation of the actin-bound nucleotides from the cytoplasmic nucleotides with 50% ethanol (Fig. SM12).
Fig. SM12. Extraction of nucleotides and radioactivity from 32P-labeled arterial smooth muscles (From Bárány et al., 2001). The percentage of the total absorbance and counts eluted from the muscles in 8 extractions is shown on the ordinate.
The composition of the PCA extract is shown in Fig. SM13.
Fig. SM13. Dowex-1 chromatography of Extracts No. 7 and 8, shown in Fig. SM12 (From Bárány et al., 2001). Squares correspond to Counts per ml and triangles correspond to Absorbance.
In order to quantify the extent of exchange of the actin-bound nucleotide and Pi, one has to determine their specific activity (counts/min/mol nucleotide or Pi) and compare it with the specific activities (s.a.) of the γ- and β-phosphates of the cytoplasmic ATP and that of PCr (Bárány et al., 2001). With this knowledge one can calculate the percentage exchange for each of the actin components; for instance, the percentage exchange of the actin-bound ADP is:
(s.a. of actin-ADP / s.a. of β-P of cytoplasmic ATP) × 100
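As a Python one-liner (my sketch), with the reference specific activity
chosen per component:

    def percent_exchange(sa_bound, sa_reference):
        # sa_bound:     specific activity (cpm/mol) of the actin-bound species
        # sa_reference: specific activity of its cytoplasmic donor, i.e. the
        #               beta-P of ATP for actin-ADP, or PCr for actin-bound Pi
        return 100.0 * sa_bound / sa_reference

    print(percent_exchange(4.5e5, 9.0e5))   # 50.0 (illustrative numbers only)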
Fig. SM14 compares the exchange of the actin-bound ADP between smooth and skeletal muscles. The exchange is rapid in smooth muscle, with a half-time of about 15 min, whereas the exchange is slow in skeletal muscle, about 15% in three hours, in agreement with the studies of Martonosi et al. (1960) in live animals.
Fig. SM14. Time course of the exchange of the actin-bound ADP in smooth (porcine carotid artery) and skeletal (rat vastus lateralis) muscle. (From Bárány et al., 2001).
Characteristics of the exchange of the actin-bound nucleotide in smooth muscle:
ATP is a prerequisite for the exchange to take place. If ATP synthesis is inhibited by azide or iodoacetamide, the exchange is also inhibited. If ATP synthesis is reduced, by incubation of the muscles with deoxyglucose instead of glucose, the exchange is also reduced.
Ca2+ is not required for the exchange, i.e. full exchange is observed in the muscle in the presence of EGTA.
Several smooth muscles (arteries, uteri, urinary bladder, and stomach) exhibited the exchange of the actin-bound nucleotide and phosphate, suggesting that the exchange is a property of every smooth muscle.
Upon contraction of smooth muscle, the exchange of the bound nucleotide and phosphate decreased, and upon relaxation from the contracted state it increased, suggesting that polymerization-depolymerization of actin is a part of the contraction-relaxation cycle of smooth muscle.
Bárány, M. (1996). Biochemistry of Smooth Muscle Contraction. Academic Press.
Bárány, K. and Bárány, M. (1996a). Myosin light chains. In Biochemistry of Smooth Muscle Contraction (M. Bárány , Ed.), pp. 21-35, Academic Press.
Bárány, M. and Bárány, K. (1996b). Inositol 1,4,5-trisphosphate production. In Biochemistry of Smooth Muscle Contraction (M. Bárány, Ed.), pp. 269-282, Academic Press.
Bárány, M. and Bárány, K. (1996c). Protein phosphorylation during contraction and relaxation. In Biochemistry of Smooth Muscle Contraction (M. Bárány, Ed.), pp. 321-339, Academic Press.
Bárány, M., Barron, J.T., Gu, L., and Bárány, K. (2001). Exchange of the actin-bound nucleotide in intact arterial smooth muscle. J. Biol. Chem., 276, 48398-48403.
Beall, A.C., Kato, K., Goldenring, J.R., Rasmussen, R., and Brophy, C.M. (1997) Cyclic nucleotide-dependent vasorelaxation is associated with the phosphorylation of a small heat shock-related protein. J. Biol. Chem. 272, 11283-11287.
Brophy, C.M., Lamb, S., and Graham, A. (1999). The small heat shock-related protein-20 is an actin-associated protein. J. Vasc. Surg. 29, 326-333.
Erdödi, F., Rokolya, A., Bárány, M., and Bárány, K. (1988). Phosphorylation of the 20,000 dalton myosin light chain isoforms of arterial smooth muscle by myosin light chain kinase and protein kinase C. Arch. Biochem. Biophys. 266, 583-591.
Feng, J., Ito, M., Ichikawa, K., Isaka, N., Nishikawa, M., Hartshorne, D.J., and Nakano, T. (1999). Inhibitory phosphorylation site for rho-associated kinase on smooth muscle myosin phosphatase. J. Biol. Chem. 274, 37385-37390.
Hartshorne, D.J., Ito, M., and Erdödi, F. (1998). Myosin light chain phosphatase: subunit composition, interactions and regulation. J. Muscle Res. Cell Motil. 19, 325-341.
Herrera, A.M., Kuo, K-H., and Seow, C.Y. (2002). Influence of calcium on myosin thick filament formation in intact airway smooth muscle. Am. J. Physiol. Cell Physiol., 282, C310-C316.
Hodgkinson, J.L., el-Mezgueldi, M., Craig, R., Vibert, P., Marston, S.B., and Lehman, W. (1997). 3-D image reconstruction of reconstituted smooth muscle thin filaments containing calponin: visualization of interactions between F-actin and calponin. J. Mol. Biol., 273, 159-159.
Jones, K.A., Perkins, W.J., Lorenz, R.R., Prakash, Y.S., Sieck, G.C., Warner, D.O. (1999). F-actin stabilization increases tension cost during contraction of permeabilized airway smooth muscles in dog. J.Physiol., 519, 527-538.
Kaibuchi, K., Kuroda, S., and Amano, M. (1999). Regulation of the cytoskeleton and cell adhesion by the rho family GTPases in mammalian cells. Annu. Rev. Biochem. 68, 459-486.
Kao, C.Y. and Carsten, M. E. (1997). Cellular Aspects of Smooth Muscle Function. Cambridge University Press.
Kitazawa, T., Eto, M., Woodsome, T.P., and Brautigan, D.L. (2000). Agonists trigger G protein-mediated activation of the CPI-17 inhibitor phosphoprotein of myosin light chain phosphatase to enhance vascular smooth muscle contractility. J. Biol. Chem., 275, 9897-9900.
Lauzon, A-M., Fagnant, P.M., Warshaw, D.M., and Trybus, K.M. (2001) Coiled-coil unwinding at the smooth muscle myosin head-rod junction is required for optimal mechanical performance. Biophys. J. 80, 1900-1904.
Lehman, W., Vibert, P., Craig, R. (1997). Visualization of caldesmon on smooth muscle thin filaments. J. Mol. Biol., 274, 310-317.
Li, X-D., Saito, J., Ikebe, R., Mabuchi, K., and Ikebe, M. (2000). The interaction between the regulatory light chain domains on two heads is critical for regulation of smooth muscle myosin. Biochemistry, 39, 2254-2260.
Martonosi, A., Gouvea, M.A., and Gergely, J. (1960). Studies on actin. III. G-F transformation of actin and muscular contraction (experiments in vivo). J. Biol. Chem. 235, 1707-1710.
Mehta, D. and Gunst, S.J. (1999). Actin polymerization stimulated by contractile activation regulates force development in canine tracheal smooth muscle. J. Physiol., 519, 820-840.
Murphy, R.A. (1999). Signal transduction in smooth muscle. Reviews of Physiology, Biochemistry and Pharmacology, vol. 134.
Nagumo, H., Sasaki, Y., Ono, Y., Okamoto, H., Seto, M., and Takuwa, Y. (2000). Rho-kinase inhibitor HA-1077 prevents rho-mediated myosin phosphatase inhibition in smooth muscle cells. Am. J. Physiol., 278, C57-C65.
Ohki, S-Y., Eto, M., Kariya, A., Hayano, T., Hayashi, Y., Yazawa, M., Brautigan, D., and Kainosho, M. (2001). Solution NMR structure of the myosin phosphatase inhibitor protein CPI-17 shows phosphorylation-induced conformational changes responsible for activation. J. Mol. Biol. 314, 839-849.
Quevillon-Cheruel, S., Foucault, G., Desmadril, M., Lompre, A-M., and Bechet, J-J. (1999). Role of the C-terminal extremities of the smooth muscle myosin heavy chains: implication for assembly properties. FEBS Letters 454, 303-306.
Rembold, C.M., Foster, D.B., Strauss, J.D., Wingard, C.J., Van Eyk, J.E. (2000). cGMP-mediated phosphorylation of heat shock protein 20 may cause smooth muscle relaxation without myosin light chain dephosphorylation in swine carotid artery. J. Physiol., 524, 865-878.
Rovner, A.S., Fagnant, P.M., Lowey, S, and Trybus, K.M. (2002). The carboxyl-terminal isoforms of smooth muscle myosin heavy chain determine thick filament assembly properties. J. Cell. Biol. 156, 113-124.
Rovner, A.S. (1998). A long, weakly charged actin-binding loop is required for phosphorylation-dependent regulation of smooth muscle myosin. J. Biol. Chem. 273, 27939-27944.
Solaro, R.J. (2000). Myosin light chain phosphatase: a Cinderella of cellular signaling. Circ. Res. 87, 173-175.
Somlyo, A.P. and Somlyo, A.V. (1994). Signal transduction and regulation in smooth muscle. Nature, 372, 231-236.
Somlyo, A.P. and Somlyo, A.V. (2000). Signal transduction by G-proteins, Rho-kinase and protein phosphatase to smooth muscle and non-muscle myosin II. J. Physiol., 522, 177-185.
Stull, J.T., Krueger, J.K., Kamm, K.E., Gao, Z-H., Zhi, G., and Padre, R. (1996). Myosin light chain kinase. In Biochemistry of Smooth Muscle Contraction (M. Bárány, Ed.), pp. 119-130. Academic Press.
Sward, K., Dreja, K., Susnjar, M., Hellstrand, P., Hartshorne, D.J., and Walsh, M.P. (2000). Inhibition of rho-associated kinase blocks agonist-induced Ca2+ sensitization of myosin phosphorylation and force in guinea pig ileum. J. Physiol. 522, 33-49.
Sweeney, H.L., Chen, L-Q., and Trybus, K.M. (2000). Regulation of asymmetric smooth muscle myosin II molecules. J. Biol. Chem. 275, 41273-41277.
Trybus, K. (2000). Biochemical studies of myosin. Methods, 22, 327-335.
Knowing how and where birds migrate and breed is an important part of understanding how and why their numbers increase or decrease over time. However, we don't know much about the exact migratory patterns of most birds. After all, they are one of the most itinerant animals on earth, coming and going from one place to another as regularly as the seasons change. Where are they going? How will they get there? Why do they go? Based on what we currently know about migration, we can assume that they head toward areas where the weather is more conducive to survival and breeding. Now, a new technique to track birds is helping researchers understand an important concept called migratory connectivity.
Migratory connectivity is the degree to which breeding and non-breeding populations of birds are linked to one another; it is the relationship that helps us understand how these interactions contribute to the natural ecology of the animals' habitats. Until recently, the mark-recapture method was the only technique available for measuring it. This method involves tagging individual birds and recording where they were recaptured, if they were recaptured at all. Unfortunately, this method was not always successful, since many birds were never caught again.
Researchers recently tried a new method to track gray catbirds (Dumetella carolinensis), in addition to the older mark-recapture technique. The scientists fit 13 male and nine female birds with geolocators, which are special devices that resemble tags on the birds' legs and that can record the estimated latitude and longitude of the wearer based on sunlight levels every 10 minutes.
Birds wore these during the breeding and non-breeding seasons from July 2009 through May/June of 2010, when researchers recovered the devices. Only three males and three females were recaptured, so the data from the six geolocators was the only new information that the researchers had to work with. They were able to successfully download this data and use special software to correct and calibrate any errors in the information. They were also aware that the readings were sometimes slightly skewed if the bird had been perching in a shady area rather than in direct sunlight, but despite this, the team was able to get a good impression as to where the birds had been.
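The article does not spell out how light levels become coordinates, but the
standard light-level method is simple in outline: the time of local solar noon
gives longitude, and day length gives latitude through the sunrise equation.
A rough Python sketch under those assumptions (it ignores calibration,
refraction, and shading, and it fails near the equinoxes, when day length
barely varies with latitude):

    import math

    def estimate_position(sunrise_utc, sunset_utc, sun_declination_deg):
        # Longitude: solar noon falls at 12:00 UTC on the prime meridian
        # and shifts by 15 degrees of longitude per hour of clock offset.
        solar_noon = (sunrise_utc + sunset_utc) / 2.0
        longitude = (12.0 - solar_noon) * 15.0            # degrees east

        # Latitude: sunrise equation cos(H) = -tan(lat) * tan(declination),
        # where H is half the day length as an angle (15 degrees per hour).
        half_day_deg = (sunset_utc - sunrise_utc) / 2.0 * 15.0
        decl = math.radians(sun_declination_deg)
        latitude = math.degrees(math.atan(
            -math.cos(math.radians(half_day_deg)) / math.tan(decl)))
        return latitude, longitude

    # Illustrative: a 13.4-hour day centered on 17:00 UTC in late April
    print(estimate_position(10.3, 23.7, 13.0))   # about (38.3, -75.0)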
Next, the data was compared to previously documented mark-recapture records from 1914 to 2009 in order to provide a wide-ranging view of migratory connectivity. The data from both were similar, indicating strong connectivity. It also showed that the gray catbirds that bred in Washington D.C. migrated to Florida and the Caribbean during the winters. In addition, long-term mark-recapture data from the U.S. Geological Survey Bird Banding Lab indicated that gray catbirds from the Midwest migrate down to Central America during the winters.
Although the data was consistent across both sources, each has limitations. The geolocators require proper light so that data is not misrepresented. The birds wearing these devices may also have trouble getting around, as weight and drag are increased and a piece protrudes unnaturally; this can ultimately affect their survival. The authors did state, however, that statistically the recapture and return rates of birds from both the geolocator study and the historical records were about the same. Mark-recapture data, for its part, only seems to be meaningful when it is collected over a long period of time.
This article summarizes the information in this publication:
Ryder, Thomas B., Fox, James W., and Peter P. Marra. 2011. Estimating Migratory Connectivity of Gray Catbirds (Dumetella carolinensis) Using Geolocator and Mark-Recapture Data. The Auk 128(3):448-453.
Understanding the connectivity between breeding and nonbreeding populations of migratory birds is fundamental to our knowledge of biological phenomena such as population dynamics and dispersal. Moreover, our ability to quantify migratory connectivity has inevitable consequences for both conservation and management of species that utilize distinct geographic locations. Technology is rapidly advancing our ability to track birds throughout the annual cycle and to collect data on the degree of connectivity among breeding and nonbreeding populations. We combined two direct methods, mark–recapture (n = 17) and geolocation (n = 6), to estimate the migratory connectivity of breeding and nonbreeding populations of Gray Catbirds (Dumetella carolinensis). Data from geolocators show that birds breeding in the Mid-Atlantic overwinter in both Cuba and southern Florida. Mark–recapture data supported our geolocator results but also provided a broader spatial perspective by documenting that Mid-Atlantic and Midwestern populations occupy distinct geographic localities during the nonbreeding period. This research underscores the importance of geolocators, as well as other tools, to advance our understanding of migratory connectivity. Finally, our results highlight the potential value of U.S. Geological Survey (USGS) Bird Banding Laboratory mark–recapture data, which are often underutilized in ornithological research.
Teachers, Standards of Learning, as they apply to these articles, are available for each state. | http://nationalzoo.si.edu/scbi/migratorybirds/science_article/default.cfm?id=137 | 13 |
11 | This figure from NASA's Dawn mission shows the varied minerals on the surface of the giant asteroid Vesta in false color. The colors, derived from data obtained by Dawn's visible and infrared mapping spectrometer, have been chosen to emphasize mineral differences on a half-mile (kilometer) scale. Data from the spectrometer also demonstrate that Vesta's surface and subsurface show localized areas of bright and dark hues.
Geological structures at scales of tens of miles (kilometers) often show mineralogical differences. The differences can be seen particularly around craters that are surrounded by ejected material and that have experienced landslides. Oppia Crater is highlighted in the white box.
Colors were assigned to ratios of particular infrared wavelengths to emphasize differences not visible to the human eye. In this color scheme, green shows the relative strength of a particular mineralogical characteristic -- the absorption of pyroxene, an iron- and magnesium-rich mineral. Brighter green signifies a higher relative strength of this band, which indicates chemistry involving pyroxene. On the other hand, reddish colors indicate a different mineral composition.
The data used to create this mosaic were collected in August 2011, at an average altitude of 1,700 miles (2,700 kilometers). The visible and infrared mapping spectrometer data lie over a mosaic made by Dawn's framing camera.
The Dawn mission to Vesta and Ceres is managed by NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, for NASA's Science Mission Directorate, Washington. UCLA is responsible for overall Dawn mission science. The visible and infrared mapping spectrometer was provided by the Italian Space Agency and is managed by Italy's National Institute for Astrophysics, Rome, in collaboration with Selex Galileo, where it was built.
More information about Dawn is online at http://www.nasa.gov/dawn and http://dawn.jpl.nasa.gov. | http://photojournal.jpl.nasa.gov/catalog/PIA15671 | 13 |
14 | Arithmetic coding actually refers to only half of an arithmetic coding data compression system. It has two parts:
- An arithmetic coder
- A data model (e.g., a Markovian model)

At each step of the coding process, the encoder is given two things:
- The next data character (a fixed number of bits per character)
- A set of probabilities for each possible character

The coder then narrows the output range in proportion to the probability of the character that actually occurs.
The larger the range assigned to a character, the fewer bits it takes to code that character; the smaller the range, the more bits it takes.
Typically, the model used to code the data changes based on the data input stream contents. This is known as adaptive coding.
An arithmetic encoder takes a string of symbols as input and produces a rational number in the interval [0, 1) as output. As each symbol is processed, the encoder will restrict the output to a smaller interval.
Let N be the number of distinct symbols in the input; let x1, x2 ... xN represent the symbols, and let P1, P2 ... PN represent the probability of each symbol appearing. At each step in the process, the output is restricted to the current interval [y, y+R). Partition this interval into N disjoint subintervals:
- I1 = [y, y + P1R)
- I2 = [y + P1R, y + P1R + P2R)
- ...
- IN = [y + (P1 + ... + PN-1)R, y + R)

If the next symbol to be encoded is xk, the interval for the remainder of the process becomes Ik.
Note that at each stage, all the possible intervals are pairwise disjoint. Therefore a specific sequence of symbols produces exactly one unique output range, and the process can be reversed.
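As a concrete illustration, the sketch below implements this interval narrowing in R with fixed (non-adaptive) probabilities. The symbol set and probabilities are invented for the example, and a practical coder would use scaled integer arithmetic rather than floating point to avoid precision loss:

# Minimal arithmetic encoder: narrows [y, y + R) once per input symbol.
arithmetic_encode <- function(input, symbols, probs) {
  y <- 0
  R <- 1
  cum <- c(0, cumsum(probs))     # subinterval boundaries: 0, P1, P1 + P2, ...
  for (s in input) {
    k <- match(s, symbols)       # which subinterval Ik this symbol selects
    y <- y + cum[k] * R          # move the lower bound into Ik
    R <- R * probs[k]            # the width shrinks by a factor of Pk
  }
  c(lower = y, upper = y + R)    # any number in this interval decodes back
}

arithmetic_encode(c("a", "b", "a"), c("a", "b"), c(0.75, 0.25))
# lower = 0.5625, upper = 0.703125; the width is 0.75 * 0.25 * 0.75

Decoding reverses the process: starting from the same model, find which subinterval contains the encoded number, emit that symbol, and rescale.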
Since arithmetic encoders are typically implemented on binary computers, the actual output of the encoder is generally the shortest sequence of bits representing the fractional part of a rational number in the final interval.
Suppose our entire input string contains M symbols: then xi appears exactly PiM times in the input. Therefore, the size of the final interval will be P1^(P1M) * P2^(P2M) * ... * PN^(PNM), and the number of bits needed to write a binary fraction lying in that interval is about -log2 of this product, which works out to M * (-P1 log2 P1 - ... - PN log2 PN), i.e., the message length times the Shannon entropy of the model.
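A quick numeric check of that claim in R, reusing the invented two-symbol source from the sketch above:

probs <- c(0.75, 0.25)
M <- 100
width <- prod(probs^(probs * M))    # size of the final interval
-log2(width)                        # ~81.1 bits to specify a fraction inside it
M * sum(-probs * log2(probs))       # the same value: M times the entropy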
However, IBM and other companies hold patents in the United States and other countries on algorithms essential for implementing an arithmetic encoder, and it is unclear whether those patent holders are willing to license the patents royalty-free for use in open-source software. | http://www.encyclopedia4u.com/a/arithmetic-coding.html | 13
25 | August 30 - By observing the collision of two distant galaxies, scientists say that they now have direct evidence of dark matter's existence. For decades scientists have proposed the existence of dark matter as an explanation for how galaxies rotate at their observed velocities.
Dark matter emits no light and can only be detected by how it interacts with ordinary matter through gravity. One of the ways dark matter can be detected is by a phenomenon called gravitational lensing, which occurs when an object's gravitational field distorts light from background galaxies. However, dark matter is often embedded in galaxies, making it difficult to isolate the lensing it causes.
Researchers were able to directly detect dark matter by observing the collision between an enormous cluster of galaxies and a smaller galaxy cluster more than 3 billion light years away. The team reasoned that when the clusters hit each other, the vast volumes of gas in each would slow down, but the dark matter would continue to speed along. Images from NASA's Chandra X-ray Observatory, the Hubble Space Telescope, and other instruments showed gravitational lensing in an area where there was no visible matter, indicating the presence of dark matter.
There are several National Research Council reports dealing with dark matter. Connecting Quarks with the Cosmos: Eleven Science Questions for the New Century examines 11 questions that need to be and can be answered in the next decade, including "what is the nature of dark matter and energy." Astronomy and Astrophysics in the New Millennium recommends further research into dark matter and into developing dark matter detectors. Revealing the Hidden Nature of Space and Time: Charting the Course for Elementary Particle Physics affirms how particle physics research is necessary to maintain the United States' position as a scientific world leader and recommends several frontiers for further research, including dark matter.
| http://www.nationalacademies.org/printer/headlines/20060830.html | 13
16 | Language is a means of communication. By using a language, people can communicate with each other. Using a language is not as simple as we might think, because there is a set of rules that must be followed, which is called grammar. Grammar refers to the structure of a language. It is an essential part of the process of using language, both spoken and written. According to the Digital Library of PETRA University, the grammar of a language is a description of the ways in which the language uses patterns of structure to convey meaning. It would be impossible to learn a language effectively without knowing its grammar, because grammar helps learners to identify grammatical forms, which serve to enhance and sharpen the expression of meaning.
Having a good command of the grammar of a language helps learners deliver their ideas, messages, and feelings to listeners or readers. Language without grammar would be disorganized and would cause communicative problems, such as grammatical errors in writing. Hence, learners need to know the grammatical system of a language so that they can communicate with others and transfer messages properly.
In order to use a language well, learners should learn the rules of the language and know how they work. They cannot avoid errors, because errors mostly occur in the learning process. This happens because they use different forms to deliver their ideas, feelings, or messages, so they need a considerable amount of time to master the target language well. Besides, by making errors, learners build new knowledge of the target language; as Littlewood stated, making errors while studying a second language can be considered a means of building learners' abilities, because they can learn something from making errors (Littlewood, 1992).
According to Robert and Freida in Yulianti's thesis (1972: 154), learning English is not easy, and language learners may have difficulties. The difficulties encountered by every student will vary according to his or her native language. Because of this, errors can be found in their learning.
These errors will influence their communication. Therefore, it is important to analyze the errors, because studying them has many advantages: (a) errors are a device which the learner uses in order to learn (Selinker in Soesanti's thesis, 1992: 150); (b) analysis helps us fully grasp and understand the nature of errors; and (c) instead of just being able to recognize errors, learners become able to explain the rules and correct the errors (Mei Lin Ho, 2003: 1).
Errors usually occur in the productive skills, speaking and writing, but analyzing errors in both productive skills in a short time is not easy. It takes much time and money, and requires a highly able analyst. Therefore, the writer decided to analyze only the grammatical errors in students' writing.
The writer chooses the students of the Writing IV class as the subject of the research because they are expected to produce writing that is grammatically correct, so it is important to know whether the students make grammatical errors and what kinds of grammatical errors they make. The writer hopes the result of the research will be useful, not only for the students of the Writing IV class, but also for the lecturers.
The grammatical errors that will be analyzed concern subject and verb; verb agreement, tense, and form; and pronoun form, agreement, and reference.
1.2 Research Problem
The central problem of this research is: "What grammatical errors are made by the students taking the Writing IV class at the English Department in the academic year 2009/2010?"
1.3 Objective Of The Study
Based on the problem above, this research intends to find out the grammatical errors made by the students of the Writing IV class at the English Department in their argumentative essay writing in the academic year 2009/2010.
1.4 Significances Of The Study
This research has significances as follows:
1. To help the teachers of the English Department, by giving them an important contribution to the English teaching process, namely which parts of grammar they should pay attention to.
2. To help students, by giving them valuable input about the errors they encounter and how to overcome them.
3. It is hoped that this thesis will help other researchers to do related research more deeply, further, and with better techniques.
1.5 Scope Of The Study
The scope of this study is the grammatical errors made by the students taking the Writing IV class at the English Department in their three argumentative essay assignments in the academic year 2009/2010.
The errors which the researcher will analyze are only those which fall into the following three categories of problem areas:
1. Subject and verb
e.g. There is some glasses on the table
2. Verb agreement, tense, and form
e.g. I will coming soon.
3. Pronoun form, agreement, and reference
e.g. Julie likes the flower. He will buy it.
1.6 Definitions of Key Terms
• An error is a part of a conversation or a composition that deviates from some selected norm of mature language performance.
• Error analysis is identifying and classifying the errors of a foreign language and giving solutions.
• Grammatical errors are errors in grammar which occur in writing.
• Students of the English Department academic year 2007 Regular A are the students who were registered in 2007 in the English Department and are particularly taking the Writing IV course in their fifth semester.
• The English Department is one of the departments in the Faculty of Teacher Training and Education of Lambung Mangkurat University, Banjarmasin, which is located on Jl. H. Hasan Bashri Kayu Tangi Banjarmasin.
REVIEW OF LITERATURE
2.1 The Nature of Writing
2.1.1 Definition of Writing
According to Cohen and Riel in Yulianti's thesis (1989), writing is a communicative act, a way of sharing observations, information, thoughts, or ideas with others. Meanwhile, Bryne in Yulianti's thesis (1979) defined writing as transforming our thoughts into language. In other words, writing can be defined as a way of communicating by transforming observations, information, thoughts, or ideas into language so that they can be shared with others. Also, Bryne (1979) added that it is neither easy nor spontaneous; it requires conscious mental effort. Writing is not only transforming our thoughts or ideas into written form; it also relates to the process of monitoring every single word or feature that we have written, and the process of rereading and revising our writing.
Voss and Keene (1992:2-3) explain why we should bother with writing, and give purposes for writing, as follows:
1. Writing is a way of thinking and learning. Writing gives unique opportunities to explore ideas and acquire information. By writing, we come to know subjects well and make them our own.
2. Writing is a way of discovering. The act of writing allows us to make unexpected connections among ideas and language.
3. Writing creates reading. Writing creates a permanent, visible record of our ideas for others to read and ponder. Writing is a powerful means of communication for relaying information and shaping human thought.
4. Writing ability is needed by educated people. Our writing skill is often considered to reflect our level of education.
Purposes for writing:
- To express yourself
- To provide information for your reader
- To persuade your reader
- To create a literary work
In Wikipedia’s website, it is stated that according to William Caslon, writing may refer to two activities:
1. The inscribing of characters on a medium, with the intention of forming words and other constructs that represent language or record information.
2. The creation of material to be conveyed through written language (there are some exceptions; for example, the use of a typewriter to record language is generally called typing, rather than writing).
Therefore, there are some writing components that should be considered by a writer before he begins to write, because without considering these components we will not produce a good piece of writing.
According to Raimes (1983), there are eight writing components that should be considered by a writer in order to produce good writing. The components are:
1. Grammar : rules of verbs, agreement, pronouns.
2. Mechanics : handwriting, spelling, punctuation.
3. Organization : paragraphs, topics and supports, cohesion and unity.
4. Word choice : vocabulary and idiom.
5. Purpose : the reason for writing.
6. Audience : the reader(s).
7. The writer's process : getting ideas, getting started, writing drafts, revising.
8. Content : relevance, clarity, originality, logic.
In order to get a good writing result, the writer should consider these components when writing a paragraph or an essay.
2.1.2 Definition of a Paragraph and an Essay
A paragraph is a basic unit of organization in writing in which a group of related sentences develops one main idea. It can be as short as one sentence or as long as ten sentences. However, the number of sentences is unimportant; the paragraph should be long enough to develop the main idea clearly (Oshima and Hogue, 1999:16). A paragraph consists of several related sentences that develop one unit of content. A paragraph may stand alone as a brief work, but usually it functions as part of a longer piece of writing (Dornan and Dawe, 1987:244).
A paragraph consists of one topic sentence and some supporting sentences. Several paragraphs can form an essay; an essay opens with some general statements and a thesis statement, and there is also a concluding paragraph which sums up the main points in the body of the essay.
An essay is a piece of writing of several paragraphs. It covers more than one main idea, so it needs more than one paragraph to cover the ideas (Oshima and Hogue, 1999:100).
In this research, the writer will analyze the students' essay writing in their three argumentative essay assignments of the academic year 2009/2010. The students were required to write argumentative essays on topics that had been prepared by the lecturers, and they developed those topics into essays.
2.2 The Nature of Error
2.2.1 Definition of Error
An error is different from a mistake, so we have to be careful to differentiate them. According to Yulianti (2007: 9):
- A mistake is a performance error, which is either a random guess or a ‘slip’, i.e. a failure to utilize a known system correctly.
- An error is a noticeable deviation from the adult grammar of a native speaker, reflecting the interlanguage competence of the learner.
She also clearly differentiated a mistake from an error. She stated:
- A mistake is a slip that a learner can self-correct.
- An error is what a learner can not self-correct.
From the definitions above, the writer concludes that a mistake is just a slip, where the learner forgets the right form, while an error is a deviation which is made by the learner because he or she does not know the rule, and he or she will make it repeatedly.
2.2.2 The Sources of Error Occurrence
The sources of error occurrence according to Ancker (2000: 1):
(1) Interference from the native language
The learner may assume that the target language and his native language are similar. Then, he will overgeneralize, applying the rules of his native language to the target language.
(2) An incomplete knowledge of the target language
Because of incomplete knowledge, the learner may make guesses. When there is something he does not know, he may guess what should be there. Lengo (1995:1) added that foreign language learners commit errors largely because of the paucity of their knowledge of the target language, whereas deviant forms produced by native speakers are dismissed as slips of the tongue or slips of the pen.
(3) The complexity of the target language
Certain aspects of English are difficult for some learners; this may be because the rules of their native language are quite different from those of English, or because English is more complex than their native language.
2.2.3 The Benefits of Analyzing Errors
Errors are normal and unavoidable during the learning process, as Richards (1974: 95) mentioned that no one can learn without making errors. Meanwhile, Lengo (1995: 1) mentioned that errors are believed to be an indicator of the learners' stages in their target language development. So, it is important to analyze errors, because there are many benefits in doing so, such as:
(1) errors are a device which the learner uses in order to learn (Selinker in Soesanti's thesis, 1992: 150),
(2) error analysis helps us to fully grasp and understand the nature of the errors made, and
(3) instead of just being able to recognize errors, the learners are now able to explain the rules and correct the errors (Mei Lin Ho, 2003: 1).
Grammar can be defined as a set of shared assumptions about how language works (Yulianti, 2007:11). The assessment of whether learners have mastered some grammatical points should not be based on their ability to state the rules of grammar, but on their ability to use the grammatical points to share their ideas, emotions, feelings, or observations with other people. Especially in the context of the teaching of English in Indonesia, the teaching of grammar should be integrated into the development of the four language skills.
Knowing how grammar works means understanding more about how grammar is used and misused (Yulianti, 2007:12). It means that there is a possibility of errors occurring in students' learning. In this research, an error in grammar will be called a grammatical error. The writer has chosen only three categories of problem areas in grammatical errors; they are:
1. Subject and verb
In a sentence, there is at least one subject and one verb. The subject may be a noun or a pronoun, and the predicate may be a verb or a form of be.
Some types of errors that might appear in this category are:
a. Subject missing
e.g., From the text above, can be concluded that book is important.
It should be: From the text above, it can be concluded that the book is important.
b. Simple predicate be missing
e.g., Water very important for human being.
It should be: Water is very important for human being.
c. Wrong form of be
e.g., There are student in the library.
It should be: There is a student in the library.
d. Superfluous be
e.g. John and Taylor are do their homework.
It should be: John and Taylor do their homework.
2. Verb agreement, tense, and form.
Every sentence has at least one verb. It indicates the number of the subject, the tense, etc., wherever it stands in a sentence.
a. Misformation of passive form
e.g., Andi was borrow it two days ago.
It should be: Andi borrowed it two days ago.
b. Passive order, but active form
e.g., The wedding will held tomorrow.
It should be: The wedding will be held tomorrow.
c. Active order, but passive form
e.g., The police is caught by the thief.
It should be: The police caught the thief.
d. Misformation of the verb after a modal
e.g., We will coming soon
It should be: We will come soon.
e. The verb comes after the subject
e.g., Jane look at herself in a mirror.
It should be: Jane looks at herself in a mirror.
f. A form of have/ has
e.g., She have a book.
It should be: She has a book.
g. A form of do / does
e.g., Andi do not know the rules
it should be: andi doesn’t know the rules.
3. Pronoun form, agreement, reference
A pronoun is a word that is used to replace a noun in a sentence or a paragraph, so that there is no repetition of the noun that might bore the audience, that is, the reader or the listener.
The example of the error that might appear in this area is:
e.g., He borrows the books. It will be returned soon.
It should be: He borrows the books. They will be returned soon.
METHODOLOGY OF RESEARCH
3.1 The Research Method
This research uses a descriptive method to describe the grammatical errors made by the students taking the Writing IV class at the English Department in their writing assignments in the academic year 2009/2010.
3.2 The Research Variable
The variable of this research is the grammatical errors which occur in the students' writing in their three argumentative essay assignments.
3.3 Data Sources
The population of this research is the students of Regular A who take the Writing IV class at the English Department in the academic year 2009/2010. The total number of students is about 30.
According to Suharsimi Arikunto (2002:1200), as a rough guideline, if the subjects number fewer than 100, it is better to take all of them. Therefore, from the 30 students of the Writing IV class, the writer takes all of the students as samples.
3.4 Technique of Data Collection
The data used in this research are the writings of all English Department students taking the Writing IV class, from their three argumentative essay assignments of the academic year 2009/2010. In order to collect the data, the writer asks the lecturers of the Writing IV class for their permission; then the writer borrows the students' writings to make copies.
3.5 Technique of Data Analysis
The technique used in analyzing the data is qualitative. The data will be classified into three categories of problem areas: subject and verb; verb agreement, tense, and form; and pronoun form, agreement, and reference. The first category is divided into four types of errors: subject missing; simple predicate be missing; wrong form of be; and superfluous be. The second is divided into seven types of errors: misformation of passive form; passive order but active form; active order but passive form; misformation of the verb after a modal; the verb comes after the subject; a form of have/has; and a form of do/does. The third has only one type of error: wrong pronouns.
3.6 Method of Drawing Conclusion
The writer uses the inductive method in drawing the final conclusion. The conclusion is drawn from the data analysis as the result of the research and the answer to the research problem.
Ancker, William. 2000. Errors and Corrective Feedback: Updated Theory and Classroom Practice. Forum (online), Vol. 38, No. 4, (http://exchanges.states.gov/forum/).
Arikunto, Suharsimi. 2002. Prosedur Penelitian: Suatu Pendekatan Praktek. Jakarta: PT. Rineka Cipta.
Azar, Betty S. 1941. Fundamentals of English Grammar. London: Regents/Prentice Hall.
Burt, Marina K. & Kiparsky, Carol. 1975. The Gooficon. Massachusetts: Newbury House Publishers.
Dornan, Edward A. and Charles W. Dawe. 1987. The Brief English Handbook. Boston: Houghton Mifflin Company.
Haris, Abdul. 2003. A Descriptive Study of Grammatical Errors Made by English Department Students Who Take Seminar Class in Their Seminar Paper Academic Year 2002-2003. A Thesis. English Department, Unlam.
Humphries, Richard. 1996. Regaining Accuracy in a Fluency Oriented Writing Class. English Teaching Forum, July/October 1996: 79-82.
Hutchinson, Tom & Waters, Alan. 1986. English for Specific Purposes: A Learning-Centered Approach.
Lengo, Nsakala. 1995. What is an Error? Forum (online), Vol. 33, No. 3, (http://exchanges.state.gov/forum/).
Mei Lin Ho, Caroline. 2003. Empowering English Teachers to Grapple with Errors in Grammar. TESL (online), Vol. 9, No. 3, (http://itesl.org/).
Murphy, Raymond. 1994. English Grammar in Use. Cambridge University Press.
Oshima, Alice & Hogue, Ann. 1999. Writing Academic English. London: Longman.
Ozbek, Nurdan. 1995. Integrating Grammar into the Teaching of Paragraph Level Composition. Forum (online), Vol. 33, No. 1, (http://exchanges.state.gov/forum/).
Raimes, A. 1983. Techniques in Teaching Writing. Oxford: Oxford University Press.
Richards, Jack C. 1974. Error Analysis. London: Longman.
Saukah, Ali. 2000. The Teaching of Writing and Grammar in English. Jurnal Ilmu Pendidikan, 28(2): 191-199.
Seliger, Herbert W. & Shohamy, Eliana. 1989. Second Language Research Method. New York: Oxford University Press.
Yulianty. 2007. A Descriptive Study of Grammatical Errors Made by the Students of Writing III Class at the English Department of FKIP UNLAM Academic Year 2003-2004. A Thesis. English Department of FKIP Unlam.
Voss, Ralph F. and Michael L. Keene. 1992. The Heath Guide to College Writing. D.C. Heath and Company.
Wikipedia. (online), (http://en.wikipedia.org/wiki/writing, accessed on March 21, 2006). | http://cupep.blogspot.com/2010/01/skripsi-analysis-of-grammatical-error.html | 13
17 | Topics covered: Exceptions to Lewis structure rules; Ionic bonds
Instructor: Catherine Drennan, Elizabeth Vogel Taylor
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: OK, let's get started here. Go ahead and take 10 more seconds on the clicker question, which probably looks all too familiar at this point, if you went to recitation yesterday. All right, and let's see how we do here.
OK. So, let's talk about this for one second. So what we're asking here, if we can settle down and listen up, is which equations can be used if we're talking about converting wavelength to energy for an electron. Remember, the key word here is electron. This might look familiar to the first part of problem one on the exam, and problem one on the exam is what tended to be the huge problem on the exam. I think over 2/3 of you decided on the exam to use this first equation, e equals h c over wavelength.
So I just want to reiterate one more time, why can we not use this equation if we're talking about an electron? C. OK, good, good, I'm hearing it. So the answer is c. What you need to do is you need to ask yourself if you're trying to convert from wavelength to energy for an electron, and you are tempted, because we are all tempted to use this equation, and if you were tempted, say, does an electron travel at the speed of light? And if the answer is no, an electron does not travel at the speed of light, light travels at the speed of light, then you want to stay away from using this equation. And I know how tempting it is to do that, but we have other equations we can use -- the DeBroglie wavelength, and this is just a combination of energy equals 1/2 m v squared, and the definition of momentum, so we can combine those things to get it.
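Since this exact conversion is singled out as exam material, a worked version in R may help; the constants are standard SI values, and the 1 eV kinetic energy is just an example input:

# Wavelength of an ELECTRON: use de Broglie, not E = hc/lambda.
h  <- 6.626e-34    # Planck constant, J s
me <- 9.109e-31    # electron mass, kg
E  <- 1.602e-19    # example kinetic energy: 1 eV, in joules

v      <- sqrt(2 * E / me)    # speed from E = (1/2) m v^2
lambda <- h / (me * v)        # de Broglie: lambda = h / p
lambda                        # ~1.23e-9 m, about 1.2 nm

# A photon of that same wavelength obeys E = hc/lambda instead:
(h * 2.998e8) / lambda / 1.602e-19    # ~1000 eV, a very different energy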
You might be wondering why I'm telling you this now, you've already -- if you've lost points on that, lost the points on it, and what I'm saying to you is if there are parts of exam 1 that you did not do well on, you will have a chance to show us again that you now understand that material on the final. One quarter of the final is going to be exam 1 material, and what that means is when we look at your grade at the end of the semester, and we take a look at what you got on exam 1, and you're right at that borderline, and we say well, what happened, did they understand more at the end of the semester, did the concepts kind of solidify over the semester? And if they did and if you showed us that they did, then you're going to get bumped up into that next grade category.
So keep that in mind as you're reviewing the exam, sometimes if things don't go as well as you want them to, the temptation is just to put that exam away forever and ever. But the reality is that new material builds on that material, and specifically exam 1 a, question 1 a that deals with converting wavelength to energy for an electron. I really want you guys know this and to understand it, so I can guarantee you that you will see this on the final. Specifically, question 1, part a. You will see something very, very similar to this on the final. If you are thinking about 1 thing to go back and study on exam 1, 1 a is a really good choice for that. This is important to me, so you're going to see it on the final.
So if you have friends that aren't here, you might want to mention it to them, or maybe not, maybe this is your reward for coming to class, which is fine with me as well.
All right. So I want to talk a little bit about exam 1. I know most you picked up your examine in recitation. If you didn't, any extra exams can be picked up in the Chemistry Education office, that's room 2204.
So, the class average for the exam was a 68%, which is actually a strong, solid average for an exam 1 grade in the fall semester of 511-1. What we typically see is something right in this range, either ranging from the 50's for an exam 1 average to occasionally getting into the 70's, but most commonly what we've seen for exam 1 averages is 60, 61 -- those low 60's. So in many ways, seeing this 68 here, this is a great sign that we are off to a good start for this semester. And I do want to address, because I know many of you, this is only your second exam at MIT, and perhaps you've never gotten an exam back that didn't start with a 90 or start with an 80 in terms of the grades. So one thing you need to keep in mind is don't just look at the number grade. The reason that we give you these letters grade categories is that you can understand what it actually means, what your exam score actually says in terms of how we perceive you as understanding the material.
So, for example, and this is the same categories that were shared in recitation, so I apologize for repeating, but I know sometimes when you get an exam back, no more information comes into your head except obsessing over the exam, so I'm just going to say it one more time, and that is between 88 and 100, so that's 20% of you got an A. This is just absolutely fantastic, you really nailed this very hard material and these hard questions on the exam where you had to both use equations and solve problems, but also understand the concept in order to get yourself started on solving the problem.
The same with the B, the B range was between 69 and 87 -- anywhere in between those ranges, you've got a B, some sort of B on the exam. So again, if you're in the A or the B category here, this is really something to be proud of, you really earned these grades. You know these exams, our 511-1 exams, we're not giving you points here, there are no give me, easy points, you earned every single one of these points. So, A and B here, these are refrigerator-worthy grades, hang those up in your dorm. This is something to feel good about.
All right. So, for those of you that got between a 51 and a 68, this is somewhere in the C range. For some people, they feel comfortable being in the C range, other people really do not like being in this range. We understand that, there is plenty of room up there with the A's and the B's. You are welcome to come up to these higher ranges starting with the next exam. And what I want to tell you if you are in the C range, and this is not a place that you want to be in, anyone that's got below the class average, so below a 68 -- or a 68 or below, is eligible for free tutoring, and I put the website on the front page of your notes. This means you get a one-on-one tutor paid for by the Chemistry Department to help you if it's concepts you're not quite up on, if it's exam strategy that you need to work on more. Whatever it is that you need to work on, we want to help you get there.
So, if you have a grade that you're not happy with, that you're feeling upset or discouraged about, please, I'm happy to talk to all of you about your grades individually. You can come talk to me, bring your exam, and we'll go over what the strategy should be in terms of you succeeding on the next exam. You can do the same thing with all of your TAs are more than happy to meet with each and every one of you. And then in addition to that, we can set you up with a tutor if you are in the C range or below, in terms of this first exam.
All right. So 44 to 50, this is going to be in the D range. And then anything below a 44 is going to be failing on this exam. And also keep in mind, for those of you that are freshman, you need at least a C to pass the class. So, if you did get a D or an F on the first exam, you are going to need to really evaluate why that happened and make some changes, and we're absolutely here to help you do that. So the real key is identifying where the problem is -- is it with understanding the concepts, are you in a study group that's dragging you along but you're not understanding? Do you kind of panic when you get in the exam? There are all sorts of scenarios we can talk about and we want to talk about them with you.
Seriously, even if we had a huge range in this exam from 17 to 100, if you're sitting there and you're the 17, and actually there's more than 1 so don't feel alone, if you're a 17 or you're a 20, it's not time to give up, it's not time to drop the class and say I'm no good at chemistry, I can't do this. You still can, this is your first couple of exams, certainly your first in this class, potentially one of your first at MIT, so there's tons of room to improve from here on out. This is only 100 points out of 750. So, the same thing goes if you did really well, you still have 650 other points that you need to deal with. So, make sure you don't just rest on your high score from this first exam.
So, OK, so that's pretty much what I wanted to say about the exam, and in terms of there's tons of resources if things didn't work out quite as you wanted. If you feel upset in any way, please come and talk to me. We want you to love chemistry and feel good about your ability to do it. Nobody get into MIT by mistake, so you all deserve to be sitting here, and you all can pass this class and do well in it, so we can help you get there no matter what. You all absolutely can do this.
And then one more time, to reiterate, in case anyone missed it, 1 a, make sure you understand that, I feel like that's important. And actually all of 1 -- I really feel like the photoelectric effect is important for understanding all of these energy concepts. So, as you go on in this class, make sure you don't go on before you go back and make sure you understand that problem.
All right, so let's move on to material for exam 2 now, and we're already three lectures into exam 2 material. And I do want to say that in terms of 511-1, what tends to happen is the exam scores go up and up and up, in terms of as we go from exam 1, to exam 2, to exam 3. One of these reasons is we are building on material, the other reason is you'll be shocked at how much better you are at taking an exam just a few weeks from now. So this will be on, starting with the Lewis structures, so go back in your notes -- if this doesn't sound familiar, if you spent too much time -- or not too much time, spent a lot of time studying exam 1 and didn't move on here.
Today we're going to talk about the breakdown of the octet rule. Cases where we don't have eight electrons around our Lewis structures, then we'll move on to talking about ionic bonds. We had already talked about covalent bonds, and then we talked about Lewis structures, which describe the electron configuration in covalent bonds. So now let's think about the other extreme of ionic bonds, and then we'll talk about polar covalent bonds to end, if we get there or will start with that in class on Monday.
Also, public service announcement for all of you, voter registration in Massachusetts, which is where we are, is on Monday, the deadline if you want to register to vote. There's some websites up there that can guide you through registering and also can guide you through, if you need an absentee ballot for your home state. And I actually saw, and I saw a 5.111 student manning, there's some booths around MIT that will register you or get you an absentee ballot. So, the deadline's coming soon, so patriotic duty, I need to remind you of that as your chemistry teacher -- chemistry issues are important in politics as well. So make sure you get registered to vote.
I just remembered one more announcement, too, that I did want to mention, some of you may have friends in 511-2 and have heard their class average for exam 1. And I want to tell you, this happens every year, their average was 15 points higher than our average. Last year, their average was 15 points higher than our average. This is for exam 1. This is what tends to happen to 511-2 grades as the exam goes on. This is what happens to 511-1. You guys are in a good spot. Also, I want to point out that what's not important is just that number grade, but also the letter that goes with it.
So, for example, if you got a 69 in this class on this exam, that's a B minus. If you got a 69 on your exam in 511-2, that's a D, you didn't pass the exam. So keep that in mind when your friend might have gotten a higher number grade than you and you know you understand the similar material just as well. Similarly, an 80 in this class on the exam was a B plus, a very high B. An 80 in that class is going to be a C. So, just don't worry so much about exactly where that average lies, you really want to think about what the letter grade means. OK, I've said enough. I just -- I hate to see people discouraged, and I know that a few people have been feeling discouraged, so that's my long-winded explanation of exam 1 grades.
All right. So, let's move on with life though, so talking about the breakdown of the octet rule. The first example where we're going to see a breakdown is any time we have an odd number of valence electrons. This is probably the easiest to explain and to think about, because if we have an odd number that means that we can't have our octet rule, because our octet rule works by pairing electrons. And if we have an odd number, we automatically have an odd electron out.
So, if we look at an example, the methyl radical, we can first think about how we draw the Lewis structure -- we draw the skeletal structure here. And then what we're going to do is add up our valence electrons -- we have 3 times 1 for the hydrogen atoms, carbon has 4 valence electrons, so we have a total of 7. If we want to fill all of our valence shells in each of these atoms, we're going to need a total of 14 electrons. So, what we see we're left with is that we have 7 bonding electrons. So we can fill in 6 of those straightforward here, because we know that we need to make 3 different bonds. And now we're left over with 1 electron, we can't make a bond.
So, what we'll do is carbon does not have an octet yet. We can't get it one, but we can do the best we can and help it out with adding that extra electron onto the carbon atom, so that at least we're getting as close as possible to filling our octets.
This is what we call a radical species or a free radical. Free radical or radical species is essentially any type of a molecule that has this unpaired electron on one of the atoms. This might look really strange, we're used to seeing octets. But you'll realize, if you calculate the formal charge on this molecule, that it's not the worst situation ever for carbon. At least it's formal charge is zero, even if it doesn't have -- it would rather have an extra bond and have a full octet. But it's not the worst scenario that we can imagine. But still, radicals tend to be incredibly reactive because they do want to fill that octet.
So, what happens when you have a radical is it tends to react with the first thing that it runs into, especially highly reactive radicals that are not stabilized in some other way, which you'll tend to talk about it organic chemistry -- how you can stabilize radicals.
So the term free radical should sound familiar to you, whether you've heard it in chemistry before, or you haven't heard it in chemistry, but maybe have heard it, I don't know, commercials for facial products or other things. People like to talk about free radicals, and they're sort of the hero that gets rid of free radicals, which are antioxidants. So you hear in a lot of different creams or products or vitamins that they have antioxidants in them, which get rid of free radicals. The reason you would want to get rid of free radicals is that free radicals can damage DNA, so they're incredibly reactive. It makes sense that if they hit a strand of DNA, they're going to react with the DNA, you end up breaking the strands of DNA and causing DNA damage.
So, this is actually what happens in aging because we have a lot of free radicals in our body. We can introduce them artificially, for example, cigarette smoke has a lot of really dangerous free radicals that get into the cells in your lungs, which damage your lung DNA, which can cause lung cancer. But also, all of us are living and breathing, which means we're having metabolism go on in our body, which means that as we use oxygen and as we metabolize our food, we are actually producing free radicals as well. So it's kind of a paradox because we need them because they are a natural by-product of these important processes, but then they can go on and damage cells, which is what kind of is causing aging and can lead to cancer.
We have enzymes in our body that repair damage that is done by free radicals, that will put the strands of DNA back together. And we also have antioxidants in our body. So, you might know that, for example, very brightly colored fruit is full of antioxidants, they're full of chemicals that will neutralize free radicals. Lots of vitamins are also antioxidants, so we have vitamin A on the top there and vitamin E.
So, the most common thing we think of when we think of free radicals is very reactive, bad for your body, causes DNA damage. But the reality is that free radicals are also essential for life. So this is kind of interesting to think about. And, for example, certain enzymes or proteins actually use free radicals in order to carry out the reactions that they carry out in your body. So, for example, this is a picture or a snapshot of a protein, this is a crystal structure of ribonucleotide reductase is what it's called. It's an enzyme that catalyzes the reaction of an essential step in both DNA synthesis and also DNA repair, and it requires having radicals within its active site in order to carry out the chemistry.
So, this is kind of a neat paradox, because radicals damage DNA, but in order to repair your DNA, you need certain enzymes, and those enzymes require different types of free radicals. So, free radicals are definitely very interesting, and once we get -- or hopefully you will get into organic chemistry at some point and get to really think about what they do in terms of a radical mechanism.
We can think about radicals that are also more stable, so let's do another example with the molecule nitric oxide. So we can again draw the skeleton here, and just by looking at it we might not know it's a radical, but as we start to count valence electrons, we should be able to figure it out very quickly, because what we have is 11 valence electrons. We need 16 electrons to have full octets. So, we're left with 5 bonding electrons. We put a double bond in between our nitrogen and our oxygen, so what we're left over with is this single bonding electron, and we'll put that on the nitrogen here. And I'll explain why we put it on the nitrogen and not the oxygen in just a minute.
But what we find is then once we fill in the rest of the valence electrons in terms of lone pairs, this is the structure that we get. And if you add up all of the formal charges on the nitrogen and on the oxygen, what you'll see is they're both 0. So if you happen to try drawing this structure and you put the lone pair on oxygen and then you figured out the formal charge and saw that you had a split charge, a plus 1 and a minus 1, the first thing you might want to try is putting it on the other atom, and once you did that you'd see that you had a better structure with no formal charge.
I have to mention what nitric oxide does, because it's a very interesting molecule. Don't get it confused with nitrous oxide, which is happy gas, that's n 2 o. This is nitric oxide, and it's actually much more interesting than nitrous oxide. It's a signaling molecule in your body, it's one of the very few signaling molecules that is a gas, and obviously, it's also a radical. What happens with n o is that it's produced in the endothelium of your blood vessels, so the inner lining of your blood vessels, and it signals for the smooth muscles that line your blood vessels to relax, which causes vasodilation, and by vasodilation, I just mean a widening of the blood vessels. So, n o signals for your blood vessels to get wider and allow more blood to flow through. And if you think about what consequences this could have, in terms of places where they have high altitude, so they have lower oxygen levels, do you think that they produce more or less n o in their body? More? Yeah, it turns out they do produce more. The reason they produce more is that they want to have more blood flowing through their veins so that they can get more oxygenated blood into different parts of their body.
N o is also a target in the pharmaceutical industry. A very famous one that became famous I guess over 10 years ago now, and this is from a drug that actually targets one of n o's receptors, and this drug has the net effect of vasodilation or widening of blood vessels in a certain area in the body. So this is viagra, some of you may be familiar, I think everyone's heard of viagra. Now you know how viagra works. Viagra breaks down, or it inhibits the breakdown of n o's binding partner in just certain areas, not everywhere in your body. So, in those areas, what happens is you get more n o signaling, you get more vasodilation, you get increased blood flow. So that's a little bit of pharmacology for you here today.
All right, so let's talk about one more example in terms of the breakdown of the octet rule with radicals. Let's think about molecular oxygen. So let's go ahead and quickly draw this Lewis structure. We have o 2. The second thing we need to do is figure out valence electrons. 6 plus 6, so we would expect to see 12. For a complete octet we would need 8 electrons each, so 16. So in terms of bonding electrons, what we have is 4 bonding electrons. So, we can go ahead and fill those in as a double bond between the two oxygens.
So, what we end up having left, and this would be step six then because five was just filling in that, is 12 minus 4, so we have 8 lone pair electrons left. So we can just fill it in to our oxygens like this.
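The counting recipe these examples keep repeating is mechanical enough to script. Here is a small R sketch of it; the function name and the odd-electron check are my own shorthand, not something from the lecture:

# Lewis bookkeeping: bonding electrons = electrons needed for full octets
# minus the valence electrons available; the rest become lone pairs.
lewis_count <- function(available, needed) {
  bonding <- needed - available
  list(bonding   = bonding,
       lone_pair = available - bonding,
       radical   = available %% 2 == 1)    # odd valence count -> a radical
}

lewis_count(7, 14)    # CH3: 7 bonding e- (6 in bonds plus 1 unpaired)
lewis_count(11, 16)   # NO:  5 bonding, 6 lone-pair, radical = TRUE
lewis_count(12, 16)   # O2:  4 bonding, 8 lone-pair by this recipe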
All right, so using everything we've learned about Lewis structures, we here have the structure of molecular oxygen. And I just want to point out for anyone that gets confused, when we talk about oxygen as an atom, that's o, but molecular oxygen is actually o 2, the same for molecular hydrogen, for example.
All right, so let's look at what the actual Lewis structure is for molecular oxygen, and it turns out that actually we don't have a double bond, we have a single bond, and we have two radicals. And any time we have two radicals, we talk about what's called a biradical. And while using this exception to the Lewis structure rule, to the octet rule for odd numbers of valence electrons can clue us into the fact that we have a radical, there's really no way for us to use Lewis structures to predict when we have a biradical, right, because we would just predict that we would get this Lewis structure here.
So, when I first introduced Lewis structures, I said these are great, they're really easy to use and they work about 90% of the time. This falls into that 10% that Lewis structures don't work for us. It turns out, in order to understand that this is the electron configuration for o 2, we need to use something called molecular orbital theory, and just wait till next Wednesday and we will tell you what that is, and we will, in fact, use it for oxygen. But until that point, I'll just tell you that molecular orbital theory takes into account quantum mechanics, which Lewis theory does not. So that's why, in fact, there are those 10% of cases that Lewis structures don't work for.
All right, the second case of exceptions to the octet rule are when we have octet deficient molecules. So basically, this means we're going to have a molecule that's stable, even though it doesn't have a complete octet. And these tend to happen in group 13 molecules, and actually happen almost exclusively in group 13 molecules, specifically with boron and aluminum. So, any time you see a Lewis structure with boron or aluminum, you want to just remember that I should look out to make sure that these might have an incomplete octet, so look out for that when you see those atoms.
So, let's look at b f 3 as our example here. And what we see for b f 3 is that the number of valence electrons that we have is 24, because the valence number of electrons for boron is 3, and then 3 times 7 for each fluorine. For total filled octets we need 32, so that means we need 8 bonding electrons. So, let's assign two to each bond here, and then we're going to have two extra bonding electrons, so let's just arbitrarily pick a fluorine to give a double bond to. And then we can fill in the lone pair electrons, we have 16 left over. So thinking about what the formal charge is, if we want to figure out the formal charge for the boron here, what we're talking about is the valence number for boron, which is 3, minus 0 because there are no lone pairs, minus 1/2 of 8 because there are eight shared electrons. We get a formal charge of minus 1.
What is our formal charge, since we learned this on Monday, for the double bonded fluorine in b f 3? So, look at your notes and look at the fluorine that has a double bond with it, and I want you to go ahead and tell me what that formal charge should be.
All right, let's take 10 more seconds on that. OK, so 49%. So, let's go look back at the notes, we'll talk about why about 50% of you are right, and 50% need to review, which I totally understand you haven't had time to do yet, your formal charge rules from Monday's class, there were other things going on. But let's talk about how we figure out formal charge. Formal charge is just the number of valence electrons you have. So fluorine has 7. You should be able to look at a periodic table and see that fluorine has seven. What we subtract from that is the number of lone pair electrons, and there are four lone pair electrons on this double bonded fluorine, so it's minus 4. Then we subtract 1/2 of the shared electrons. Well we have a double bond with boron here, so we have a total of 4 shared electrons. And when we do the subtraction here, what we end up with is a formal charge plus 1 on the double bonded fluorine.
Without even doing a calculation, what do you think that the formal charge should be on your single bonded fluorines? Good. OK, it should be and it is 0. The reason it's zero in terms of calculating it is 7 minus 6 lone pair electrons minus 1/2 of 2 shared electrons is 0. The reason that you all told me, I think, and I hope, is that you know that the formal charge on individual atoms has to equal the total charge on the molecule. So if we already have a minus 1 and a plus 1, and we know we have no charge in the molecule, and we only have one type of atom left to talk about, that formal charge had better be 0.
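The formal charge arithmetic in this discussion fits in one line of R; this helper simply restates the lecture's formula, valence electrons minus lone-pair electrons minus half the shared electrons:

formal_charge <- function(valence, lone_pair, shared) {
  valence - lone_pair - shared / 2
}

formal_charge(3, 0, 8)    # boron in the double-bonded b f 3 structure: -1
formal_charge(7, 4, 4)    # the double-bonded fluorine: +1
formal_charge(7, 6, 2)    # each single-bonded fluorine: 0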
OK. So this looks pretty good in terms of a Lewis structure, we figured out our formal charges. These also look pretty good, too, we don't have too much charge separation. But what actually it turns out is that if you experimentally look at what type of bonds you have, it turns out that all three of the b f bonds are equal in length, and they all have a length that would correspond to a single bond. So, experimentally, we know we have to throw out this Lewis structure here, we have some more information, let's think about how this could happen.
So this could happen, for example, is if we take this two of the electrons that are in the b f double bond and we put it right on to the fluorine here, so now we have all single bonds. And let's think about what the formal charge situation would be in this case here. What happens here is now we would have a formal charge of on the boron, we'd have a formal charge of on all of the fluorine molecules as well. So, it turns out that actually looking at formal charge, even though the first case didn't look too bad, this case actually looks a lot better. We have absolutely no formal charge separation whatsoever. It turns out again, boron and aluminum, those are the two that you want to look out for. They can be perfectly happy without a full octet, they're perfectly happy with 6 instead of 8 in terms of electrons in their valence shell. So that is our exception the number two.
We have one more exception and this is a valence shell expansion, and this can be the hardest to look out for, students tend to forget to look for this one, but it's very important as well, because there are a lot of structures that are affected for this . And this is only applicable if we're talking about a central atom that has an n value or a principle quantum number that's equal to or greater than three. What happens when we have n that's equal to or greater to three, is that now, in addition to s orbitals and p orbitals, what else do we have available to us? D orbitals, great. So what we see is we have some empty d orbitals, which means that we can have more than eight electrons that fit around that central atom.
If you're looking to see if this is going to happen, do you think this would happen with a large or small central atom? So think of it in terms of just fitting. We've got to fit more than 8 electrons around here. Yeah, so it's going to be, we need to have a large central atom in order for this to take place. Literally, we just need to fit everything around is probably the easiest way to think about it. And what happens is it also tends to have small atoms that it's bonded to. Again, just think of it in terms of all fitting in there.
So, let's take an example p c l 5. The first example is the more straightforward example, because let's start to draw the Lewis structure, and what we see is that phosphorous has five chlorines around it. So we already know if we want to form five bonds we've broken our octet rule. But let's go through and figure this out and see how that happens.
What we know is we need 40 valence electrons, we have those -- 5 from the phosphorous, and we have 7 from each of the chlorine atoms. If we were to fill out all of those octets, that would be 48 electrons. So what we end up with when we do our Lewis structure calculation is that we only have 8 bonding electrons available to us. So we can fill those in between the phosphorous and the chlorine, those 8 bonding electrons.
So, this is obviously a problem. To make 5 p c l bonds we need 10 shared electrons, and we know that that's the situation because it's called p c l 5 and not p c l 4, so we can go right ahead and add in that extra electron pair. So we've used up 10 for bonding, so that means what we have left is 30 lone pair electrons, and I would not recommend filling all of these in your notes right now, you can go back and do that, but just know the rest end up filling up the octets for all of the chlorines.
So, in this first case where you actually need to make more than for bonds, you will immediately know you need to use this exception to the Lewis structure octet rule, but sometimes it won't be as obvious. So, let's look at c r o 4, the 2 minus version here, so a chromate ion, and if we draw the skeletal structure, we have four things that the chromate needs to bond to.
So, let's do the Lewis structure again. When we figure out the valence electrons, we have total, we have 6 from the chromium, we have 6 from each of the different oxygens, and where did this 2 come from? Yup, the negative charge. So, remember, we have 2 extra electrons hanging out in our molecule, so we need to include those. We have a total of 32. 40 are needed to fill up octets. So again, we have 8 bonding electrons available, so we can go ahead and fill these in between each of the bonds. What happens is that we then have 24 lone pair electrons left, and we can fill those in like this. And the problem comes now when we figure out the formal charge.
So, when we do that what we find is that the chromium has a formal charge of plus 1, and that each of the oxygens has a total charge of minus 1. So we actually have a bit of charge separation here. Without even doing a calculation, what is the total charge of these that are added up? OK, it's minus 2, that's right. We know that the total charge of each of the formal charges has to add up to minus 2, because that's the charge in our molecule. We can also just calculate it -- the chromate gives us a plus 2, then we have 4 times minus 1 for each of the oxygens, so we have a minus 2.
So, we have some charge separation here, and in some cases, if we're not at n equals 3 or higher, there's really nothing we can do about it, this would be the best structure we can do. But since we have these d orbitals available, we can use them, and it turns out that experimentally this is what's found, that the length and the strength are not single bonds, but they're actually something between a single bond and a double bond.
So how do we get a 1 and 1/2 bond, for example, what's the term that let's us do that? Resonance. That's right. So that's exactly what's happening here. So, if we went ahead and drew this structure here where we have now two double bonds and two single bonds, that would be in resonance with another structure where we have two double bonds instead to these two oxygens, and now, single bonds to these two oxygens. We can actually also have several other resonance structures as well. Remember, the definition of a resonance structure is where all the atoms stay the same, but what we can do is move around the electrons -- we're moving around those extra two electrons that can be in double bonds.
So, why don't you tell me how many other resonance structures you would expect to see for this chromate ion? All right, let's take 10 more seconds on this.
All right. This is good. I know this is a real split response, but the right answer is the one that is indicated in the graph here that it's four. This takes a little bit of time to get used to thinking about all the different Lewis structures you can have. So, you guys should all go back home if you can't see it immediately right now and try drawing out those four other Lewis structures, for chromate, there are four others. You'll probably get a chance to literally do this example in recitation where you draw out all four, but it's even better to make sure you understand it before you get to that point. So, we can go back to the class notes.
So it turns out there's four other Lewis structures, so basically just think about all the other different combinations where you can have single and double bonds, and when you draw those out, you end up with four. So, for every single one of these Lewis structures, we could figure out what the formal charges are, and what we would find is that it's on the chromium, it's for the double bonded oxygens, and it's going to be negative 1 for the single bonded oxygens.
So, what you can see is that in this situation, we end up having less formal charge separation, and that's what we're looking for, that's the more stable structure. So any time you can have an expanded octet -- an expanded valence shell, where you have n is equal to or greater than 3, and by expanding and adding more electrons into that valence shell, you lower the charge separation, you want to do that.
I also want to point out, I basically said there's 6 different ways we can draw this in terms of drawing all the resonance structures. You might be wondering if you have to figure out the formal charge for each structure individually, and the answer is no, you can pick any single structure and the formal charges will work out the same. So, for example, if you pick this structure and your friend picks this structure, you'll both get the right answer that there's just the negative 1 on the oxygens and no other formal charges in the molecule.
All right. So those are the end of our exceptions to the octet rule for Lewis structures, that's everything we're going to say about Lewis structures. And remember, that when we talk about Lewis structures, what they tell us is the electron configuration in covalent bonds, so that valence shell electron configuration. So we talked a lot about covalent bonds before we got into Lewis structures, and then how to represent covalent bonds by Lewis structures.
So now I'll say a little bit about ionic bonds, which are the other extreme, and when you have an ionic bond, what you have now is a complete transfer of either one or many electrons between two atoms. So the key word for covalent bond was electron sharing, the key word for ionic bonds is electron transfer. And the bonding between the two atoms ends up resulting from an attraction that we're very familiar with, which is the Coulomb or the electrostatic attraction between the negatively charged and the positively charged ions.
So let's take an example. The easiest one to think about is where we have a negative 1 and a positive 1 ion. So this is salt, n a c l -- actually lots of things are call salt, but this is what we think of a table salt. So, let's think about what we have to do if we want the form sodium chloride from the neutral sodium and chlorine atoms. So, the first thing that we're going to need to do is we need to convert sodium into sodium plus.
What does this process look like to you? Is this one of those periodic trends, perhaps? Can anyone name what we're looking at here? Exactly, ionization energy. So, if we're going to talk about the energy difference here, what we're going to be talking about is the ionization energy, or the energy it takes to rip off an electron from sodium in order to form the sodium plus ion. So, we can just put right here, that's 494 kilojoules per mole.
The next thing that we want to look at is chlorine, so in terms of chlorine we need to go to chlorine minus, so we actually need to add an electron. This is actually the reverse of one of the periodic trends we talked about. Which trend is that this is the reverse of? Electron affinity, right. Because if we go backwards we're saying how badly does chlorine want to grab an electron? Chlorine wants to do this very badly, and it turns out the electron affinity for chlorine is huge, it's 349 kilojoules per mole, but remember, we're going in reverse, so we need to talk about it as negative 349 kilojoules per mole.
So if we talk about the sum of what's happening here, what we need to do is think about going from the neutrals to the ions, so we can just add those two energies together, and what we end up with is plus 145 kilojoules per mole, in order to go from neutral sodium in chlorine to the ions.
So, the problem here is that we have to actually put energy into our system, so this doesn't seem favorable, right. What's favorable is when we actually get energy out and our energy gets lower, but what we're saying here is that we actually need to put in energy. So another way to say this is this process actually requires energy. It does not emit energy, it does not give off excess energy, it requires energy.
So, we need to think about how can we solve this problem in terms of thinking about ionic bonds, and the answer is Coulomb attraction. So there's one more force that we need to talk about, and that is when we talk about the attraction between the negatively and the positively charged ions, such that we form sodium chloride. So this process here has a delta energy, a change in energy of negative 589 kilojoules per mole. So that's huge, we're giving off a lot of energy by this attraction. So if we add up the net energy for all of this process, all we need to do is add negative 589 to plus 145. So what we end up getting is the net energy change is going to be negative 444 kilojoules per mole, so you can see that, in fact, it is very favorable for neutral sodium and neutral chloride to form sodium chloride in an ionic bond. And the net increase then, is a decrease in energy.
So, I just gave you the number in terms of what that Coulomb potential would be in attraction, but we can I easily calculate it as well using this equation here where the energy is equal to the charge on each of the ions, and this is just multiplied by the value of charge for an electron divided by 4 pi epsilon nought times r, are r is just the distance in terms of the bond length we could talk about.
So, let's calculate and make sure that I didn't tell you a false number here. Let's say we do the calculation with the bond length that we've looked up, which is 2 . 3 6 angstroms for the bond length between sodium and chloride. So we should be able to figure out the Coulombic attraction for this.
So, if we talk about the energy of attraction, we need to multiply plus 1, that's the charge on the sodium, times minus 1, the charge on the chlorine, times the charge in an electron, 1 . 6 2 times 10 the negative 19 Coulombs, and that's all divided by 4 pi, and then I've written out epsilon nought in your notes, so I won't write it on the board. And then r, so r is going to be 2 . 3 6 and times -- what is angstrom, everyone? Yup, 10 to the negative 10. So 10 to the negative 10 meters. So, if we do this calculation here, what we end up with is negative 9.774 times 10 to the negative 19 joules.
So that's what we have in terms of our energy. That does not look the same as what we saw -- yup, do you have a question?
PROFESSOR: OK. Luckily, although, I did not write it in my own notes, I did it when I put in my calculator, thank you. So you need to square this value here and then you should get this value right here, negative 9.77.
All right, so what we need to do though is convert from joules into kilojoules per mole, because that's what we were using. So if we multiply that number there by kilojoules per mole -- or excuse me, first kilojoules per joule, so we have 1,000 joules in every kilojoule. And then we multiply that by Avagadro's number, 6.022 times 10 to the 23 per mole. What we end up with is negative 589 kilojoules per mole. So this is that same Coulombic attraction that we saw in the first place.
So, notice that you will naturally get out a negative charge here, remember negative means an attractive force in this case, because you have the plus and the minus 1 in here. So we should be able to easily do that calculation, and what we end up getting matches up with what I just told you, luckily, and thank you for catching the square, that's an important part in getting the right answer. So, experimentally then, what we find is that the change in energy for this reaction is negative 444 kilojoules per mole.
If we look experimentally what we see, it's actually a little bit different, it's negative 411 kilojoules per mole. So, in terms of this class, this is the method that we're going to use, and we're going to say this gets us close enough such that we can make comparisons and have a meaningful conversations about different types of ionic bonds and the attraction between them.
But let's think about where this discrepancy comes from, and before I do that I want to point out, one term we use a lot is change in energy for a reaction where, for example, you break a bond. Remember that the negative of the change in energy is what's called delta e sub d. We first saw this when we first introduced the idea of covalent bonds. Do you remember what this term here means, delta e sub d? A little bit and some no's, which this was pre-exam, I understand, you still need to review those notes, it's dissociation energy. So you get a negative energy out by breaking the bond. The dissociation energy means how much energy that bond is worth in terms of strength, so it's the opposite of the energy you get out of breaking the bond -- or excuse me, the energy that you get out of forming the bond. It's the amount of energy you need to put in to break the bond is dissociation energy. It takes this much energy to dissociate your bond, excuse me.
All right. So, let's take a look here at our predictions, so I just put them both ways so we don't get confused. The dissociation energy is 444. The change in energy for forming the bond is negative 444. We made the following approximations, which explain why, in fact, we got a different experimental energy, if we look at that.
The first thing is that we ignored any repulsive interactions. If you think about salt, it's not just two single atoms that you are talking about. It's actually in a whole network or whole lattice of other molecules, so you actually have some other chlorines around that are going to be having repulsive interactions with our chlorine that we're talking about. We're going to ignore those, make the approximation that those don't matter, at this point, in these calculations. And the result for that is that we end up with a larger dissociation energy than the experimental value. That's because the bond is going to be a little bit more broken than it was in our calculation, because we do have these repulsive interactions.
The other thing that we did is that we treated both sodium and the chlorine as point charges. And this is what actually allowed us to make this calculation and calculate the Coulomb potential so easily, we just treated them as if they're point charges. We're ignoring quantum mechanics in this -- this is sort of the class where we ignore quantum mechanics, we ignored it for Lewis structures, we're ignoring it here. We will be back to paying a lot of attention to quantum mechanics in lecture 14 when we talk about MO theory, but for now, these are approximations, these are models where we don't take it into consideration. And I think you'll agree that we come reasonably close such that we'll be able to make comparisons between different kinds of ionic bonds.
All right. So, the last thing I want to introduce today is talking about polar covalent bonds. We've now covered the two extremes. One extreme is complete total electron sharing -- if we have a perfectly covalent bond, we have perfect sharing. The other is electron transfer in terms of ionic bonds. So when we talk about a polar covalent bond, what we're now talking about is an unequal sharing of electrons between two atoms.
So, this is essentially something we've seen before, we just never formally talked about what we would call it. This is any time you have a bond forming between two non-metals that have different electronegativities, so, for example, hydrogen choride, h c l. The electronegativity for hydrogen is 2.2, for chlorine it's 3.2. And in general, what we say is we consider a difference in terms of a first approximation if the difference in electronegativity is more than 0. 5, so this is on the Pauling electronegativity scale. So what we end up having is we sort of have a kind of, and what we call it is a partial negative charge on the chlorine, and a partial positive charge in the hydrogen. The reason we have that is because the chlorine's more electronegative, it wants to pull more of that shared electron density to itself. If it has more electron density, it's going to have a little bit of a negative charge and the hydrogen's going to be left with a little bit of a positive charge.
So, we can compare this, for example to, molecular hydrogen where they're going to have that complete sharing, so there's not going to be a delta plus or a delta minus, delta is going to be equal to zero on each of the atoms. They are completely sharing their electrons.
And we can also explain this in another way by talking about a dipole moment where we have a charged distribution that results in this dipole, this electric dipole. And we talk about this using the term mu, which is a measurement of what the dipole is. A dipole is always written in terms of writing an arrow from the positive charge to the negative charge. In chemistry, we are always incredibly interested in what the electrons are doing, so we tend to pay attention to them in terms of arrows. Oh, the electrons are going over to the chlorine, so we're going to draw our arrow toward the chlorine atom.
So, we measure this here, so mu is equal to q times r, the distance between the two. And q, that charge is just equal to the partial negative or the partial positive times the charge on the electron. So this is measured in Coulomb meters, you won't ever see a measurement of electronegativity in Coulomb meters -- we tend to talk about it in terms of debye or 1 d, or sometimes there's no units at all, so the d is just assumed, and it's because 1 debye is just equal to this very tiny number of Coulomb meters and it's a lot easier to work with debye's here.
So, when we talk about polar molecules, we can actually extend our idea of talking about polar bonds to talking about polar molecules. So, actually let's start with that on Monday. So everyone have a great weekend. | http://ocw.mit.edu/courses/chemistry/5-111-principles-of-chemical-science-fall-2008/video-lectures/lecture-12/ | 13 |
CHALCOLITHIC ERA in Persia. Chalcolithic (< Gk. khalkos “copper” + lithos “stone”) is a term adopted for the Near East early in this century as part of an attempt to refine the framework of cultural developmental “stages” (Paleolithic, Mesolithic, Neolithic, Bronze, and Iron Ages) used by students of western European prehistory (E. F. Henrickson, 1983, pp. 68-79). In Near Eastern archeology it now generally refers to the “evolutionary” interval between two “revolutionary” eras of cultural development: the Neolithic (ca. 10,000-5500 b.c.e., but varying from area to area), during which techniques of food production and permanent village settlement were established in the highlands and adjacent regions, and the Bronze Age (ca. 3500-1500 b.c.e., also varying with the area), during which the first cities and state organizations arose.
Although archeologists have devoted less attention to the Chalcolithic, it was an era of fundamental economic, social, political, and cultural development, made possible by the economic advances of the Neolithic and providing in turn the essential basis for the innovations of the Bronze Age. The era can be divided into three general phases, Early, Middle, and Late Chalcolithic, approximately equivalent respectively to the Early, Middle, and Late Village periods identified by Frank Hole (1987a; 1987b; for more detailed discussion of the internal chronology of the Persian Chalcolithic, see Voigt; idem and Dyson). Those aspects most directly attested by archeological evidence (primarily demographic and economic) will be emphasized here, with some attention to less clearly identifiable social, political, and ideological trends. Persia is essentially a vast desert plateau surrounded by discontinuous habitable areas, limited in size and ecologically and geographically diverse, few of them archeologically well known, especially in the eastern half of the country. The evidence is highly uneven and drawn primarily from surveys and excavations in western and southwestern Persia.
Settlement patterns. It is remarkable that in so geographically diverse and discontinuous a country a single distinctive pattern of settlement development characterized the Chalcolithic era in most of the agriculturally exploitable highland valleys and lowland plains that have been surveyed. During the early phase most habitable areas were sparsely settled; small, undifferentiated village sites were located near streams or springs. This pattern was essentially an extension of the prevailing Neolithic settlement pattern and in a few areas (e.g., northwestern Iran; Swiny) appears to have continued throughout the Chalcolithic. In the great majority of the arable mountain valleys and lowland plains, however, it developed in several significant ways through the Middle and Late Chalcolithic. The number of villages increased substantially (in many areas strikingly so) at the end of the Early and especially in the Middle Chalcolithic; then, in the Late Chalcolithic the trend was abruptly reversed, and the number of permanent settlements had dropped precipitously by the end of the era. On the Susiana plain, an eastern extension of the Mesopotamian lowlands in southwestern Persia, Hole (1987a, p. 42) recorded sixteen sites of the Early (= Susiana a) and eighty-six of the Middle Chalcolithic (= Susiana d). In the Late Chalcolithic the number declined to fifty-eight (= early Susa A), then thirty-one (= later Susa A), and finally eighteen (= terminal Susa A). In the much smaller and slightly higher adjacent Deh Luran (Dehlorān) plain the pattern was similar but developed somewhat earlier. Fewer than ten settlement sites were recorded from the early phase of Early Chalcolithic (Chogha Mami Transitional phase 5, Sabz phase 8), approximately twenty from the later Early and early Middle Chalcolithic (Khazineh [Ḵazīna] phase 20, Mehmeh 18), followed by a steady decline through the later Middle and Late Chalcolithic, with only a few permanent settlements by the end of the era (Bayat 14, Farukh [Farroḵ] 12, Susa A 5, Sargarab [Sargarāb]/Terminal Susa A 2; Hole, 1987a; idem, 1987b, p. 100). The best survey data available from southern Persia come from the Marvdašt plain in the broad Kor river drainage basin (Sumner, 1972; idem, 1977) and the smaller Fasā and Dārāb plains (Hole, 1987a, pp. 52-55; idem, 1987b, p. 101). In all three areas the overall settlement pattern was the same: The number of villages increased gradually through the Neolithic and the Early Chalcolithic to an impressive peak in the Middle Chalcolithic Bakun (Bakūn) period (e.g., 146 sites in the Kor river basin), only to drop off dramatically during the Late Chalcolithic and Bronze Age levels. In a survey of the Rūd-e Gošk (Kūšk) near Tepe Yahya (Yaḥyā) Martha Prickett (1976; 1986) found a similar pattern, with the peak in the Yahya VA phase and the sharp drop immediately afterward in the Aliabad (ʿAlīābād) phase (both Late Chalcolithic). In the central Zagros highlands of western Persia the three most comprehensively surveyed valleys revealed a generally similar settlement pattern, though the timing of the peak differed somewhat. In the Māhīdašt, one of the broadest and richest stretches of arable level land in the Zagros, alluviation has added as much as 10 m to the late prehistoric land surface, and many Chalcolithic sites are undoubtedly still buried (Brookes et al.).
Nevertheless, the number of known villages shows a marked increase from the Neolithic (ten in Sarāb) to the Early Chalcolithic; an abrupt and complete change in the ceramic assemblage, with the appearance at seventy sites of J ware, showing definite generic influence of Halaf (Ḥalaf) pottery in neighboring Mesopotamia (see CERAMICS iv. The Chalcolithic Period in the Zagros), suggests that the increase may have been caused by an influx of people from the north and west. In the Middle Chalcolithic the number of sites at which black-on-buff and related monochrome-painted wares were found rose sharply to a prehistoric peak of 134. A small number of sites yielded pottery from the purely highland Dalma (Dalmā) tradition, indicating another source of external cultural influence (E. F. Henrickson, 1986; idem, 1990; idem and Vitali). Some degree of indirect outside influence from the Ubaid (ʿObayd) culture of lowland Mesopotamia is also apparent in several of the locally made monochrome-painted wares (E. F. Henrickson, 1986; idem, 1990). In the Late Chalcolithic the flourishing village life in the Māhīdašt seems to have declined; only a handful of sites have yielded pottery characteristic of this period (E. F. Henrickson, 1983, chap. 6; idem, 1985b). Either the settled population dropped considerably at this time, owing to emigration, increased mortality, or adoption of a more mobile and less archeologically visible life style like pastoralism, or the monochrome-painted buff-ceramic tradition persisted until the end of the Chalcolithic. Definitive answers await further investigations in the field. In the Kangāvar valley, 100 km east of the Māhīdašt on the great road to Khorasan, the pattern was noticeably different from that in the western and southern Zagros. The number of villages rose from a single Neolithic example, Shahnabad (Šahnābād) on mound C at Seh Gabi (Se Gābī; McDonald), to twenty in the early Middle Chalcolithic (Dalma phase), located almost exclusively near the streams crossing the central valley floor. All these villages were small, typically covering about 0.5 ha. In the Middle and early Late Chalcolithic the number and location of sites remained relatively stable (seventeen in the Seh Gabi phase, twenty-three contemporary with Godin [Gowdīn] VII), even though the ceramics and other aspects of material culture changed abruptly between these two phases. This stability probably reflects a similar stability in subsistence strategy, as well as greater isolation from external cultural influences. Only toward the end of the Late Chalcolithic was there a notable increase in the number of villages (thirty-nine sites contemporary with Godin VI). The delayed and less marked population increase in Kangāvar, anomalous compared to most well-surveyed areas of western Persia, may have resulted from the cooler, drier climate, established from both ancient and modern ecological data and from the marked clustering of sites on the valley floor near sources of irrigation water (E. F. Henrickson, 1983, pp. 9-36, 466-68). Sociopolitical developments and external connections with the lowlands may also have accounted for a local increase or influx of population during the Godin VI period (E. F. Henrickson, forthcoming; Weiss and Young). The smaller and more marginal Holaylān valley south of the Māhīdašt has been more intensively surveyed.
Permanent settlement peaked there in the Middle Chalcolithic; subsistence strategies appear to have become more diversified in the Late Chalcolithic, followed by a marked decline in preserved sites of all types. Peder Mortensen (1974; 1976) found three cave sites, one open-air site, and five village settlements dating to the Neolithic, reflecting a diverse and not completely sedentary system in which both the valley floor and the surrounding hills were exploited economically. Neither J nor Dalma wares were found that far south, and the developments in the Early and early Middle Chalcolithic are thus unclear. Eleven sites with Middle Chalcolithic black-on-buff pottery resembling Seh Gabi painted and Māhīdašt black-on-buff wares were recorded, all on the valley floor (Mortensen, 1976, fig. 11). By the early Late Chalcolithic settlement had again been diversified to include two open-air and two village sites in the hills, as well as seven villages on the valley floor, all yielding ceramics related to generic Susa A wares, including black-on-red; the number of sites remained quite stable (Mortensen, 1976, fig. 13, legend erroneously exchanged with that of fig. 12). The sharp decline in settlement occurred later; only two villages on the valley floor, two cave sites, and two open-air camps, all yielding ceramics related to those of Sargarab and Godin VI, are known (Mortensen, 1976, fig. 12), suggesting a destabilization of village life and a concomitant increase in pastoralism in this area, as in others where the same general pattern has been observed (E. F. Henrickson, 1985a).
Modest settlement hierarchies seem to have developed in some highland valleys during the Chalcolithic, though such geological processes as alluviation and water and wind erosion have undoubtedly obscured the evidence in some areas. Normally a few larger villages seem to have grown up among a preponderance of small villages. In the Māhīdašt the average size of sites without heavy overburden was 1.6 ha in the Early and just over 1 ha in the Middle Chalcolithic, but several sites covering more than 3 ha existed in both phases (E. F. Henrickson, 1983, pp. 458-60). Nothing more is known about these sites, as none have been excavated. Tepe Giyan (Gīān) in the Nehāvand valley was a relatively large highland site (in the 3-ha range) from Early Chalcolithic times; seals and copper objects were found there (Contenau and Ghirshman; Hole, 1987a, pp. 87-89). At Godin Tepe, a small town in the Bronze Age (R. Henrickson, 1984), the Chalcolithic is buried under deep Bronze and Iron Age overburden, and it is not known how large or important it was in relation to the rest of Kangāvar during most of that era (Young, 1969; idem and Levine). During the Late Chalcolithic, however, an oval enclosure (Godin V) was located there, the seat of an enclave of people from the lowlands apparently involved in long-distance commodity exchange, contemporary with the latter part of the prosperous period VI occupation at Godin and in Kangāvar generally (Weiss and Young; Levine and Young). Elsewhere in the central Zagros, especially in northeastern Luristan, several large and strategically located Late Chalcolithic sites developed just at the time when the number of smaller settlements was abruptly declining (Goff, 1966; idem, 1971). In the southwestern lowlands of Ḵūzestān the evolution of a settlement hierarchy progressed farther than anywhere else in Chalcolithic Persia. In Dehlorān two settlement centers grew up. In the Farukh phase of the Middle Chalcolithic Farukhabad (Farroḵābād), estimated to have originally covered approximately 2 ha, contained at least one thick-walled, elaborately bonded brick building, constructed on a low platform (Wright, 1981, pp. 19-21), and in the Susa A period of the Late Chalcolithic the large site of Mussian (Mūsīān; Gautier and Lampre) dominated Dehlorān. Farther south, on the Susiana plain, two “primate” settlement centers developed during the Chalcolithic. Chogha Mish (Čoḡā Mīš, q.v.) in the east flourished in the Middle Chalcolithic, when the number of sites on the plain reached its peak; it covered an area of 11 ha and included domestic architecture and at least one large, thick-walled monumental public building with buttresses, containing many small rooms, including a pottery storeroom and a possible flint-working room (Delougaz; Delougaz and Kantor, 1972; idem, 1975; Kantor, 1976a; idem, 1976b). The contemporaneous settlement at Jaffarabad (Jaʿfarābād) was a specialized pottery-manufacturing site with many kilns (Dollfus, 1975). After the demise of Chogha Mish the settlement on the acropolis at Susa in western Susiana gained prominence, developing into the most impressive Chalcolithic center yet known in Persia, with an area of approximately 20 ha. The high platform was about 70 m square and stood more than 10 m high. Its brick facing was adorned with rows of inset ceramic “nails,” cylinders with flaring heads (Canal, 1978a; idem, 1978b).
Fragmentary architectural remains atop the platform suggest storage rooms and a larger structure that may have been a temple (Steve and Gasche), but the evidence for its function is inconclusive (Pollock). Beside one corner of the terrace was a mortuary structure analogous to a mass mausoleum (de Morgan; de Mecquenem; Canal, 1978a), containing an unknown number of burials, recently estimated at 1,000-2,000 (Hole, 1987a, pp. 41-42; idem, 1990). This burial facility was apparently not intended only for the elite: Only some of the burials were in brick-lined tombs, and a wide range of grave goods were included with individual bodies, from ordinary cooking pots to luxury objects, particularly eggshell-thin Susa A fine painted-ware goblets and copper axes (Canal, 1978a; Hole, 1983). The acropolis at Susa was thus a unique multipurpose Chalcolithic settlement and ceremonial center, a focal point for the region. It may not have had a large resident population, but it nevertheless served a series of complex centralizing sociopolitical functions, presumably both religious and secular. Centers like Chogha Mish and Susa, like the late Ubaid center at Eridu, presaged the rise of the first true cities in the Mesopotamian lowlands in the subsequent Uruk period.
Strategies for subsistence. Irrigation appears to have been utilized throughout the arable highland valleys and lowland plains of Persia for the first time during the Middle Chalcolithic. The best-documented area is Dehlorān, where careful collection and interpretation of botanical, settlement, and geomorphological data by several different expeditions have resulted in an unusually clear picture both of flourishing irrigation agriculture and of the subsequent abuse of the land and decline of permanent agricultural settlement in the Late Chalcolithic (Hole, Flannery, and Neely; Hole, 1977; Wright, 1975). Direct botanical evidence of Chalcolithic irrigation is not as rich for other sites in Persia, but in surveys of the Māhīdašt (Levine, 1974; idem, 1976; idem and McDonald), Kangāvar (Young, 1974), Susiana (Hole, 1987a; idem, 1987b), Kāna-Mīrzā (Zagarell), the Kor river basin (Sumner, 1983), and elsewhere linear alignment of contemporaneous sites along ancient watercourses provides strong indirect evidence. In the Rūd-e Gošk survey Prickett (1976) also noted a strong association between many Middle Chalcolithic (Yahya VB and VA) sites and alluvial fans and ancient terraces used for flood irrigation. Of course, not all Middle Chalcolithic villages required irrigation; many were located in areas with sufficient rainfall for dry farming.
In the western highlands there is strong evidence of specialized mobile pastoralism, apparently distinct from settled village farming, during the Middle and especially the Late Chalcolithic (E. F. Henrickson, 1985a). It includes the isolated Paṛčīna and Hakalān cemeteries in the Pošt-e Kūh, located far from any ancient village site (Vanden Berghe, 1973; idem, 1974; idem, 1975a; idem, 1975b; idem, forthcoming); an increased number of open-air and cave sites located near sometimes seasonal sources of fresh water, in Holaylān, Ḵorramābād (Wright et al.), the Pošt-e Kūh (Kalleh Nissar [Kalla-Nesār]; Vanden Berghe, 1973), the hinterlands south and east of Susiana, including Īza and Qaḷʿa-ye Tal (Wright, 1987), and the Baḵtīārī region (Zagarell); and the appearance of at least one distinctive pottery type, black-on-red ware, which was widely but sparsely distributed in Luristan, Ḵūzestān, and adjacent areas, probably carried by mobile pastoralists (E. F. Henrickson, 1985a). The pervasive Late Chalcolithic decline in the number of villages provides indirect support for the hypothesis of increased diversification and mobility in subsistence strategies. In areas like the Kor river basin, where this decline appears to have been more gradual, many of the remaining sites are adjacent to natural grazing land, suggesting increased reliance on herding even among villagers (Hole, 1987a, pp. 54-55). Some degree of ecological or climatic deterioration may have contributed to this shift in certain areas, and political and economic pressures from the adjacent lowlands may also have increased (Lees and Bates; Bates and Lees; Adams; E. F. Henrickson, 1985a).
Crafts and “trade.” The Chalcolithic era was distinguished from other eras of prehistory by the variety of painted pottery that was produced, most of it utilitarian and probably made in village homes or by part-time potters who did not earn their livelihoods entirely from their craft. With a few notable exceptions, each highland valley system and lowland plain produced a distinctive ceramic assemblage over time; although there was some resemblance to pottery from nearby areas, typically each assemblage was recognizable as the work of a separate community, with different approaches and expectations. Technical and aesthetic quality, though variable, tended to improve over time, culminating in the Bakun painted ware of the Middle Chalcolithic and the Susa A fine ware of the Late Chalcolithic. Both were produced in prosperous and heavily populated areas during phases in which village settlement had reached or just passed its prehistoric zenith and pronounced settlement hierarchies had developed; their demise was associated with the subsequent rapid decline in permanent village settlement. Both were of extremely fine buff fabric without inclusions, skillfully decorated with a variety of standardized geometric patterns in dark paint; each, however, was characterized by a unique “grammar,” “syntax,” and symbolic “semantics” of design (Hole, 1984). It is not yet clear, however, that either or both of these wares resulted from occupational specialization. Archeological evidence for specialized ceramic production in the Persian Chalcolithic is extremely rare. At Tal-e Bakun, the type site for Bakun painted ware, one Middle Chalcolithic residential area of twelve buildings was excavated (Langsdorff and McCown). Several appear to have been potters’ workshops, in which work tables with nearby clay supplies and storage boxes for ash temper were found. In addition, three large kilns were associated with this group of houses (Langsdorff and McCown, pp. 8-15, figs. 2, 4). Hole (1987b, p. 86) has pointed out that the published plans imply that only one of the kilns was in use at any one time, which suggests specialized production, most likely of Bakun painted ware, perhaps partially for export: The ware was quite widespread in the Kor river basin and adjacent areas of southern Persia. The technical prowess and artistic sophistication involved are arguments for specialized production, possibly involving full-time artisans. From Susa itself there is no direct evidence of specialized ceramic production in the Susa A period, but many of the sites surveyed in Susiana have yielded remains of kilns and many wasters, evidence of widespread localized pottery production in Middle and Late Chalcolithic times. Although some excavated sites have also revealed houses with kilns (e.g., Tepe Bendebal [Band-e Bāll]; Dollfus, 1983), only one is known to have been devoted exclusively to ceramic production: Middle Chalcolithic (Chogha Mish phase) Jaffarabad (Dollfus, 1975). As with Bakun painted ware, however, the exceptionally high technical and aesthetic quality of Susa A fine ware strongly suggests production by full-time specialists at Susa itself and perhaps at other sites as well.
Wide geographic distribution of a distinctive ware or pottery style does not automatically indicate a centralized network of commodity distribution. The absence of efficient transportation in the Chalcolithic, especially in the highlands, must have precluded systematic, high-volume ceramic exchange, even between the few relatively highly organized centers. For example, in the early Middle Chalcolithic the full Dalma ceramic assemblage, characterized by painted and impressed wares, was remarkably widespread, dominating the Soldūz-Ošnū area of Azerbaijan and the Kangāvar and Nehāvand valleys of northeastern Luristan. The latter ware also occurred in conjunction with Dalma plain red-slipped ware in the Māhīdašt. This distribution pattern was almost certainly not the result of organized long-distance trade in Dalma pottery, which was not a “luxury” ware and was far too heavy and bulky to have been transported economically through the Zagros mountains, especially in the absence of wheeled vehicles and beasts of burden. Furthermore, Dalma settlement data reveal a strictly village economy with no sociopolitical or economic settlement hierarchy. The wide distribution of the pottery must therefore be explained sociologically, rather than economically, as reflecting the distribution of a people, probably a kin-based ethnic group that may have shared a common dialect or religion and produced a distinctive utilitarian pottery, as well as other visible but perishable items of material culture; these items would have served as group markers, analogous to the distinctive dress and rug patterns of today’s Zagros Kurds (E. F. Henrickson and Vitali). Similar situations in the Early Chalcolithic include the spread of Chogha Mami (Čoḡā Māmī) transitional pottery from eastern Mesopotamia into Dehlorān (Hole, 1977) and probably the appearance of J ware in the Māhīdašt (Levine and McDonald). Any pottery “exchange” over a considerable distance was probably a coincidental result of contact for other reasons; late Middle Chalcolithic-Late Chalcolithic black-on-red ware is a good example (E. F. Henrickson, 1985a). In other instances “related” pottery assemblages from adjacent areas are not identical, which implies that, instead of actual movement of vessels, indirect “exchange” took place involving assimilation of selected elements from an external ceramic style into local tradition. One example is the diluted and locally “edited” influence of Ubaid ceramics on otherwise diverse highland Māhīdašt pottery (E. F. Henrickson, 1983; idem, 1986; idem, 1990) in the Middle and Late Chalcolithic. In the eastern central Zagros and adjacent plateau area a different ceramic tradition, labeled Godin VI in the mountains and Sialk (Sīalk) III/6-7 (Ghirshman, 1938) and Ghabristan (Qabrestān) IV (Majidzadeh, 1976; idem, 1977; idem, 1978; idem, 1981) farther east, developed in the Late Chalcolithic. Other archeological evidence suggests that this particular phenomenon may have coincided with an attempt at organizing a regional economic or sociopolitical entity (E. F. Henrickson, forthcoming). The broad distribution of these distinctive ceramics, taken together with glyptic evidence (E. F. Henrickson, 1988) and the remains in several eastern Luristan valleys of large settlements (Goff, 1971), at least one of which permitted the apparently peaceful establishment of a lowland trading enclave in its midst (Weiss and Young), supports an economic explanation.
The special cases of Susa A fine and Bakun painted ware have been discussed above; as true “art” wares, they are probably the best candidates for medium- to long-distance ceramic exchange in the Iranian Chalcolithic, but available data are inconclusive, and strictly local production (probably by specialists at a few sites in each area) cannot be ruled out.
There are almost no archeological data for craft production other than ceramics in Chalcolithic Persia.
Only a few widely scattered examples of copper, stone, and glyptic work have been excavated. There are a number of sources for copper (q.v.) in central Persia, but copper processing is known from only one site of this period, Tal-i Iblis (Tal-e Eblīs) near Kermān (Caldwell, 1967; idem and Shahmirzadi). In Iblis I (Early Chalcolithic) and II (late Middle-Late Chalcolithic) hundreds of slag-stained crucible fragments were recovered, along with chunks of slag and rejected copper ore. Although the accompanying ceramics do not reflect outside contact, the presence of large quantities of pyrometallurgical debris and the remote location near copper sources strongly suggest that the site was established specifically to process locally mined copper ore in quantity for export (Caldwell, p. 34). Sialk, from which copper artifacts were recovered in various Chalcolithic levels (Ghirshman, 1938), was also located in a copper-bearing area, near Kāšān; there is no known direct evidence of copper processing at the site, but cast copper tools and ornaments (e.g., round-sectioned pins) were found (Ghirshman, 1938, pl. LXXXIV). In Chalcolithic Giyan V, west of Sialk in northeastern Luristan, copper objects included borers, small spirals, tubes, rectangular-sectioned pins, and a rectangular axe (Contenau and Ghirshman, pp. 16-45, 64ff.). Only a few other sites have yielded copper objects, including the axes from burial hoards at Susa. Copper thus seems to have been a rare and presumably expensive material throughout the Persian Chalcolithic. Direct, unequivocal evidence for other craft production and exchange (e.g., stone, glyptic, and textile work) is either rare or lacking altogether, though scattered small finds from various houses and graves suggest at least a low level of such craft activity in certain areas during certain phases. The exception is obsidian, which was obtained from Anatolian sources in small quantities throughout the Neolithic and Chalcolithic (see Hole, 1987b, pp. 86-87).
Burial practices. Outside the realm of economics and subsistence, available archeological data and their interpretation are extremely problematic. The only evidence consists of sparse and unevenly preserved burials and associated structures and goods (for detailed discussion, see Hole, 1987b; idem, 1990). In the Early Chalcolithic all known highland and lowland burials (fewer than a dozen, from three sites: Seh Gabi, Jaffarabad, and Chogha Mish) are of infants or children, who were deposited under the floors of houses, a possible indication of family continuity and settlement stability. As in the Neolithic, grave goods were limited to a few modest personal items, mainly pots and simple jewelry, suggesting a relatively egalitarian society. These data reflect continuation of the predominant Neolithic pattern in southwestern Persia and in lowland Mesopotamia as well. Burial customs for adults are unknown; the burials must have been extramural, but no Early Chalcolithic cemetery has been identified. In the northern and central Zagros the Early Chalcolithic pattern continued to evolve in the next phase. At Dalma Tepe, Seh Gabi, and Kozagaran (Kūzagarān) children were buried under house floors but were first placed in pots or bowls. In contrast, a completely new burial form developed in Ḵūzestān. At Jaffarabad, Chogha Mish, Jowi (Jovī), and Bendebal infants (and a very few adults out of a relatively large sample) have been found in brick tombs outside the houses. Grave goods still consisted of a few simple utilitarian objects, primarily pots, with nothing to indicate differences in status. In the Pošt-e Kūh just north of Dehlorān, abundant data have been recovered from almost 200 stone-lined tomb burials, mostly of adults, in the two pastoralist cemeteries, Parchineh and Hakalan. These cemeteries appear to reflect the adoption of lowland burial customs in the outer ranges of the Zagros, lending support to speculation about migration routes between the two areas and interaction between pastoralists and villagers. Grave goods were limited almost entirely to utilitarian ceramics and a few stone tools, weapons, and pieces of jewelry, insufficient to suggest significant differences in status.
The Late Chalcolithic burial sample is very small, except for the large mortuary at Susa. The few known burials were all of children or infants and generally continued the two Middle Chalcolithic patterns: Those from Seh Gabi and Giyan in the central highlands were in jars or pots without burial goods, though architectural context was unclear at both sites. Two infant burials from lowland Jaffarabad were in mat-lined mud “boxes,” accompanied only by pottery and a single seal; it is impossible to interpret this one instance as a status item. Although the large Susa A burial facility appears to have been unique in Chalcolithic Persia, it nevertheless reflected the Middle-Late Chalcolithic lowland custom of burial in brick tombs, demonstrating a formal standardization in the treatment of the dead: one corpse to a tomb, supine in an extended position. Grave goods were much more elaborate than elsewhere, but, with a few striking exceptions (hoards of copper objects), they, too, seem to have been standardized, consisting primarily of ceramic vessels ranging in quality from utilitarian “cooking pots” to distinctive Susa A fine painted goblets (often in the same tombs). The absence of an excavation record for this part of Susa is frustrating, but, even though the size and architectural elaboration of the site are evidence of its function as a regional center, the burials do not seem to reflect a society in which status differences were structurally the most important; rather, an emphasis on the unity of the regional “community” is suggested. It is possible, however, that only individuals or families of high status were buried at Susa and that the majority of those in the economic “sustaining area” were buried elsewhere, probably near their own homes. If so, then the simple fact of burial at the regional center, rather than elaborate individual tombs or grave goods, would have been the primary mark of high status. The rest of the population of Chalcolithic Persia seems to have lived in egalitarian villages or pastoral groups. Larger local settlement centers, involving development of sociopolitical and economic differences in status, were clearly the exception.
R. M. Adams, “The Mesopotamian Social Landscape. A View from the Frontier,” in C. B. Moore, ed., Reconstructing Complex Societies, Cambridge, Mass., 1974, pp. 1-20.
F. Bagherzadeh, ed., Proceedings of the IInd Annual Symposium on Archaeological Research in Iran, Tehran, 1974.
Idem, ed., Proceedings of the IIIrd Annual Symposium on Archaeological Research in Iran, Tehran, 1975.
Idem, ed., Proceedings of the IVth Annual Symposium on Archaeological Research in Iran, Tehran, 1976.
D. G. Bates and S. H. Lees, “The Role of Exchange in Productive Specialization,” American Anthropologist 79/4, 1977, pp. 824-41.
I. A. Brookes, L. D. Levine, and R. Dennell, “Alluvial Sequence in Central West Iran and Implications for Archaeological Survey,” Journal of Field Archaeology 9, 1982, pp. 285-99.
J. R. Caldwell, ed., Investigations at Tall-i Iblis, Illinois State Museum Preliminary Report 9, Springfield, Ill., 1967.
Idem and S. M. Shahmirzadi, Tal-i Iblis. The Kerman Range and the Beginnings of Smelting, Illinois State Museum Preliminary Report 7, Springfield, Ill., 1966.
D. Canal, “La haute terrasse de l’Acropole de Suse,” Paléorient 4, 1978a, pp. 39-46.
Idem, “La terrasse haute de l’Acropole de Suse,” CDAFI 9, 1978b, pp. 11-55.
G. Contenau and R. Ghirshman, Fouilles du Tépé Giyan près de Néhavend, 1931, 1932, Paris, 1935.
P. Delougaz, “The Prehistoric Architecture at Choga Mish,” in The Memorial Volume of the VIth International Congress of Iranian Art and Archaeology, Oxford, 1972, Tehran, 1976, pp. 31-48.
Idem and H. Kantor, “New Evidence for the Prehistoric and Protoliterate Culture Development of Khuzestan,” in The Memorial Volume of the Vth International Congress of Iranian Art and Archaeology, Tehran, 1972, pp. 14-33.
Idem, “The 1973-74 Excavations at Čoqā Mīš,” in Bagherzadeh, ed., 1975, pp. 93-102.
G. Dollfus, “Les fouilles à Djaffarabad de 1972 à 1974. Djaffarabad, périodes I et II,” CDAFI 5, 1975, pp. 11-220.
Idem, “Djowi et Bendebal. Deux villages de la plaine centrale du Khuzistan (Iran),” CDAFI 13, 1983, pp. 17-275.
J. E. Gautier and G. Lampre, “Fouilles de Moussian,” MDP 8, 1905, pp. 59-149.
R. Ghirshman, Fouilles de Sialk près de Kashan, 1933, 1934, 1937 I, Paris, 1938.
C. Goff, New Evidence of Cultural Development in Luristan in the Late 2nd and Early 1st Millennium, Ph.D. diss., University of London, 1966.
Idem, “Luristan before the Iron Age,” Iran 9, 1971, pp. 131-52.
E. F. Henrickson, Ceramic Styles and Cultural Interaction in the Early and Middle Chalcolithic of the Central Zagros, Iran, Ph.D. diss., University of Toronto, 1983.
Idem, “The Early Development of Pastoralism in the Central Zagros Highlands (Luristan),” Iranica Antiqua 20, 1985a, pp. 1-42.
Idem, “An Updated Chronology of the Early and Middle Chalcolithic of the Central Zagros Highlands, Western Iran,” Iran 23, 1985b, pp. 63-108.
Idem, “Ceramic Evidence for Cultural Interaction between Chalcolithic Mesopotamia and Western Iran,” in W. D. Kingery, ed., Technology and Style. Ceramics and Civilization II, Columbus, Oh., 1986, pp. 87-133.
Idem, “Chalcolithic Seals and Sealings from Seh Gabi, Central Western Iran,” Iranica Antiqua 23, 1988, pp. 1-19.
Idem, “Stylistic Similarity and Cultural Interaction between the ʿUbaid Tradition and the Central Zagros Highlands,” in E. F. Henrickson and I. Thuesen, eds., 1990, pp. 368-402.
Idem, “The Outer Limits. Settlement and Economic Strategies in the Zagros Highlands during the Uruk Era,” in G. Stein and M. Rothman, eds., Chiefdoms and Early States in the Near East. The Organizational Dynamics of Complexity, Albuquerque, forthcoming.
Idem and I. Thuesen, eds., Upon This Foundation. The ʿUbaid Reconsidered, Copenhagen, Carsten Niebuhr Institute Publication 8, 1990.
Idem and V. Vitali, “The Dalma Tradition. Prehistoric Interregional Cultural Integration in Highland Western Iran,” Paléorient 13/2, 1987, pp. 37-46.
R. C. Henrickson, Godin III, Godin Tepe, and Central Western Iran, Ph.D. diss., University of Toronto, 1984.
F. Hole, Studies in the Archaeological History of the Deh Luran Plain. The Excavation of Chogha Sefid, The University of Michigan Museum of Anthropology Memoirs 9, Ann Arbor, Mich., 1977.
Idem, “Symbols of Religion and Social Organization at Susa,” in L. Braidwood et al., eds., The Hilly Flanks and Beyond. Essays on the Prehistory of Southwestern Asia, The University of Chicago Oriental Institute Studies in Ancient Oriental Civilization 36, Chicago, 1983, pp. 233-84.
Idem, “Analysis of Structure and Design in Prehistoric Ceramics,” World Archaeology, 15/3, 1984, pp. 326-47.
Idem, “Archaeology of the Village Period,” in F. Hole, ed., 1987a, pp. 29-78.
Idem, “Settlement and Society in the Village Period,” in F. Hole, ed., 1987b, pp. 79-106.
Idem, “Patterns of Burial in the Fifth Millennium,” in E. F. Henricksen and I. Thuesen, eds. (forthcoming).
Idem, ed., The Archaeology of Western Iran. Settlement and Society from Prehistory to the Islamic Conquest, Washington, D.C., 1987.
F. Hole, K. V. Flannery, and J. A. Neely, Prehistory and Human Ecology of the Deh Luran Plain, The University of Michigan Museum of Anthropology Memoirs 1, Ann Arbor, Mich., 1969.
H. Kantor, “The Excavations at Coqa Mish, 1974-75,” in Bagherzadeh, ed., 1976a, pp. 23-41.
Idem, “Prehistoric Cultures at Choga Mish and Boneh Fazili (Khuzistan),” in Memorial Volume of the VIth International Congress on Iranian Art and Archaeology, Oxford, 1972, Tehran, 1976b, pp. 177-94.
A. Langsdorff and D. E. McCown, Tal-i Bakun A, The University of Chicago Oriental Institute Publications 59, Chicago, 1942.
S. H. Lees and D. G. Bates, “The Origins of Specialized Pastoralism. A Systemic Model,” American Antiquity 39, 1974, pp. 187-93.
L. D. Levine, “Archaeological Investigations in the Mahidasht, Western Iran, 1975,” Paléorient 2/2, 1974, pp. 487-90.
Idem, “Survey in the Province of Kermanshahan 1975.
Mahidasht in the Prehistoric and Early Historic Periods,” in Bagherzadeh, ed., 1976, pp. 284-97.
Idem and M. M. A. McDonald, “The Neolithic and Chalcolithic Periods in the Mahidasht,” Iran 15, 1977, pp. 39-50.
L. D. Levine and T. C. Young, Jr., “A Summary of the Ceramic Assemblages of the Central Western Zagros from the Middle Neolithic to the Late Third Millennium B.C.,” in J. L. Huot, ed., Préhistoire de la Mésopotamie. La Mésopotamie préhistorique et l’exploration récente du Djebel Hamrin, Paris, 1987, pp. 15-53.
M. M. A. McDonald, An Examination of Mid-Holocene Settlement Patterns in the Central Zagros Region of Western Iran, Ph.D. diss., University of Toronto, 1979.
Y. Majidzadeh, The Early Prehistoric Cultures of the Central Plateau of Iran. An Archaeological History of Its Development during the Fifth and Fourth Millennia B.C., Ph.D. diss., The University of Chicago, 1976.
Idem, “Excavations in Tepe Ghabristan. The First Two Seasons, 1970 and 1971,” Marlik 2, 1977, pp. 45-61.
Idem, “Corrections of the Chronology for the Sialk III Period on the Basis of the Pottery Sequence at Tepe Ghabristan,” Iran 16, 1978, pp. 93-101.
Idem, “Sialk III and the Pottery Sequence at Tepe Ghabristan,” Iran 19, 1981, pp. 141-46.
R. de Mecquenem, “Fouilles préhistoriques en Asie occidentale. 1931-1934,” l’Anthropologie 45, 1935, pp. 93-104.
J. de Morgan, “Observations sur les couches profondes de l’Acropole de Suse,” MDP 13, 1912, pp. 1-25.
P. Mortensen, “A Survey of Prehistoric Settlements in Northern Luristan,” Acta Archaeologica 45, 1974, pp. 1-47.
Idem, “Chalcolithic Settlements in the Holailan Valley,” in Bagherzadeh, ed.,1976, pp. 42-62.
S. Pollock, “Power Politics in the Susa A Period,” in E. F. Henricksen and I. Thuesen, eds. (forthcoming).
M. E. Prickett, “Tepe Yahya Project. Upper Rud-i Gushk Survey,” Iran 14, 1976, pp. 175-76.
Idem, Man, Land, and Water. Settlement Distribution and the Development of Irrigation Agriculture in the Upper Rud-i Gushk Drainage, Southeastern Iran, Ph.D. diss., Harvard University, 1986.
M. J. Steve and H. Gasche, L’Acropole de Suse, MDAFI 46, 1971.
W. Sumner, Cultural Development in the Kur River Basin, Iran. An Archaeological Analysis of Settlement Patterns, Ph.D. diss., University of Pennsylvania, Philadelphia, 1972.
Idem, “Early Settlements in Fars Province, Iran,” in L. D. Levine and T. C. Young, Jr., eds., Mountains and Lowlands. Essays in the Archaeology of Greater Mesopotamia, Malibu, Calif., 1977, pp. 291-305.
S. Swiny, “Survey in Northwest Iran, 1971,” East and West 25/1-2, 1975, pp. 77-96.
L. Vanden Berghe, “Excavations in Luristan. Kalleh Nissar,” Bulletin of the Asia Institute of Pahlavi University 3, 1973a, pp. 25-56.
Idem, “Le Luristan avant l’Age du Bronze. Le nécropole du Hakalan,” Archaeologia 57, 1973b, pp. 49-58.
Idem, “Le Lorestan avant l’Age du Bronze. La nécropole de Hakalan,” in Bagherzadeh, ed., 1974, pp. 66-79.
Idem, “Fouilles au Lorestan, la nécropole de Dum Gar Parchineh,” in Bagherzadeh, 1975a, pp. 45-62.
Idem, “La nécropole de Dum Gar Parchinah,” Archaeologia 79, 1975b, pp. 46-61. Idem, Mission
Archéologique dons le Pusht-i Kuh, Luristan. IXe Campagne 1973. La nécropole de Dum Gar Parchinah (Rapport préliminaire), 2 vols., forthcoming.
M. Voigt, “Relative and Absolute Chronologies for Iran between 6500 and 3500 cal. B. C.,” in O. Aurenche, J. Evin, and F. Hours, eds., Chronologies in the Near East. Relative Chronologies and Absolute Chronology. 16,000-4,000 B.P., British Archaeological Reports International Series 379, Oxford, 1987, pp. 615-46.
Idem and R. H. Dyson, Jr., “The Chronology of Iran, ca. 8000-2000 B.C.,” in R. W. Ehrich, ed., Chronologies in Old World Archaeology, Chicago, forthcoming.
H. Weiss and T. C. Young, Jr., “The Merchants of Susa. Godin V and Plateau-Lowland Relations in the Late Fourth Millennium B.C.,” Iran 13, 1975, pp. 1-18.
H. T. Wright, An Early Town on the Deh Luran Plain. Excavations at Tepe Farukhabad, The University of Michigan Museum of Anthropology Memoirs 13, Ann Arbor, Mich., 1981.
Idem, “The Susiana Hinterlands during the Era of Primary State Formation,” in F. Hole, ed., 1987, pp. 141-56.
Idem et al., “Early Fourth Millennium Developments in Southwestern Iran,” Iran 13, 1975, pp. 129-48.
T. C. Young, Jr., Excavations at Godin Tepe, Royal Ontario Museum Art and Archaeology Occasional Papers 17, Toronto, 1969.
Idem, “An Archaeological Survey in Kangavar Valley,” in Bagherzadeh, ed., 1975, pp. 23-30.
Idem and L. D. Levine, Excavations at the Godin Project. Second Progress Report, Royal Ontario Museum Art and Archaeology Occasional Papers 26, Toronto, 1974.
A. Zagarell, The Prehistory of the Northeast Baḫtiyari Mountains, Iran, TAVO, Beihefte B42, Wiesbaden, 1982.
(Elizabeth F. Henrickson)
Originally Published: December 15, 1991
Last Updated: October 13, 2011
This article is available in print.
Vol. V, Fasc. 4, pp. 347-353 | http://www.iranicaonline.org/articles/chalcolithic-era-in-persia | 13 |
20 | simple machine
simple machine, any of several devices with few or no moving parts that are used to modify motion and force in order to perform work. The simple machines are the inclined plane, lever, wedge, wheel and axle, pulley, and screw.
The inclined plane
An inclined plane consists of a sloping surface; it is used for raising heavy bodies. The plane offers a mechanical advantage in that the force required to move an object up the incline is less than the weight being raised (discounting friction). The steeper the slope, or incline, the more nearly the required force approaches the actual weight. Expressed mathematically, the force F required to move a block up an inclined plane without friction is equal to its weight W times the sine of the angle the inclined plane makes with the horizontal (θ). The equation is F = W sin θ.
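Since F = W sin θ is a purely trigonometric relation, it is easy to check numerically. Below is a minimal R sketch; the weight and angle are arbitrary values chosen only for illustration:

# Force needed to push a block up a frictionless incline: F = W * sin(theta)
incline_force <- function(W, theta_deg) {
  W * sin(theta_deg * pi / 180)  # sin() expects radians, so convert degrees
}
incline_force(100, 30)  # a 100 N block on a 30-degree slope needs only 50 N
incline_force(100, 90)  # a vertical "slope" requires the full weight, 100 N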
The principle of the inclined plane is used widely—for example, in ramps and switchback roads, where a small force acting for a distance along a slope can do a large amount of work.
A lever is a bar or board that rests on a support called a fulcrum. A downward force exerted on one end of the lever can be transferred and increased in an upward direction at the other end, allowing a small force to lift a heavy weight.
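The passage above does not state the lever law explicitly; the balance condition is that force times distance from the fulcrum is equal on both sides (F1 × d1 = F2 × d2). A minimal R sketch with made-up numbers:

# Law of the lever: at balance, load * load_arm = effort * effort_arm
lever_effort <- function(load, load_arm, effort_arm) {
  load * load_arm / effort_arm
}
lever_effort(load = 600, load_arm = 0.5, effort_arm = 3)  # 100 N lifts 600 N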
All early people used the lever in some form, for example, for moving heavy stones or as digging sticks for land cultivation. The principle of the lever was used in the swape, or shadoof, a long lever pivoted near one end with a platform or water container hanging from the short arm and counterweights attached to the long arm. A man could lift several times his own weight by pulling down on the long arm. This device is said to have been used in Egypt and India for raising water and lifting soldiers over battlements as early as 1500 bce.
A wedge is an object that tapers to a thin edge. Pushing the wedge in one direction creates a force in a sideways direction. It is usually made of metal or wood and is used for splitting, lifting, or tightening, as in securing a hammer head onto its handle.
The wedge was used in prehistoric times to split logs and rocks; an ax is also a wedge, as are the teeth on a saw. In terms of its mechanical function, the screw may be thought of as a wedge wrapped around a cylinder.
The wheel and axle
A wheel and axle is made up of a circular frame (the wheel) that revolves on a shaft or rod (the axle). In its earliest form it was probably used for raising weights or water buckets from wells.
Its principle of operation is best explained by way of a device with a large gear and a small gear attached to the same shaft. The tendency of a force, F, applied at the radius R on the large gear to turn the shaft is sufficient to overcome the larger force W at the radius r on the small gear. The force amplification, or mechanical advantage, is equal to the ratio of the two forces (W:F) and also equal to the ratio of the radii of the two gears (R:r).
If the large and small gears are replaced with large- and small-diameter drums that are wrapped with ropes, the wheel and axle becomes capable of raising weights. The weight being lifted is attached to the rope on the small drum, and the operator pulls the rope on the large drum. In this arrangement the mechanical advantage is the radius of the large drum divided by the radius of the small drum. An increase in the mechanical advantage can be obtained by using a small drum with two radii, r1 and r2, and a pulley block. When a force is applied to the large drum, the rope on the small drum winds onto its larger section, D (radius r2), and off its smaller section, d (radius r1).
A measure of the force amplification available with the pulley-and-rope system is the velocity ratio, or the ratio of the velocity at which the force is applied to the rope (VF) to the velocity at which the weight is raised (VW). This ratio is equal to twice the radius of the large drum divided by the difference in the radii of the smaller drums D and d. Expressed mathematically, the equation is VF/VW = 2R/(r2 - r1). The actual mechanical advantage W/F is less than this velocity ratio, depending on friction. A very large mechanical advantage may be obtained with this arrangement by making the two smaller drums D and d of nearly equal radius.
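The velocity-ratio formula is easy to explore numerically. In the R sketch below the drum radii are assumed values, chosen to show how nearly equal small-drum radii inflate the available force amplification:

# Velocity ratio of the differential wheel and axle: VF/VW = 2R / (r2 - r1)
velocity_ratio <- function(R, r1, r2) {
  2 * R / (r2 - r1)
}
velocity_ratio(R = 0.5, r1 = 0.10, r2 = 0.12)   # ratio of 50
velocity_ratio(R = 0.5, r1 = 0.10, r2 = 0.101)  # ratio of 1000 when r1 is close to r2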
A pulley is a wheel that carries a flexible rope, cord, cable, chain, or belt on its rim. Pulleys are used singly or in combination to transmit energy and motion. Pulleys with grooved rims are called sheaves. In belt drive, pulleys are affixed to shafts at their axes, and power is transmitted between the shafts by means of endless belts running over the pulleys.
One or more independently rotating pulleys can be used to gain mechanical advantage, especially for lifting weights. The shafts about which the pulleys turn may affix them to frames or blocks, and a combination of pulleys, blocks, and rope or other flexible material is referred to as a block and tackle. The Greek mathematician Archimedes (3rd century bce) is reported to have used compound pulleys to pull a ship onto dry land.
| http://www.britannica.com/EBchecked/topic/1194584/simple-machine | 13
11 | The traditional method of food drying is to spread the foodstuffs in the sun in the open air. This method, called sun drying, is effective for small amounts of food. The area needed for sun drying expands with food quantity, and since the food is placed in the open air, it is easily contaminated. Monitoring and oversight also become increasingly difficult as quantities grow, which is one major reason why sun drying is not easily performed with larger quantities of food.
In contrast to sun drying, where the meat is exposed directly to the sun, the solar drying method uses indirect solar radiation. The principle of the solar drying technique is to collect solar energy by heating up the air volume in solar collectors and to conduct the hot air from the collector to an attached enclosure, the meat drying chamber. Here the products to be dried are laid out.
In this closed system, consisting of a solar collector and a meat drying chamber, without direct exposure of the meat to the environment, meat drying is more hygienic as there is no secondary contamination of the products through rain, dust, insects, rodents or birds. The products are dried by hot air only. There is no direct impact of solar radiation (sunshine) on the product. The solar energy produces hot air in the solar collectors. Increasing the temperature in a given volume of air decreases the relative air humidity and increases the water absorption capacity of the air. A steady stream of hot air into the drying chamber circulating through and over the meat pieces results in continuous and efficient dehydration.
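The claim that heating a given volume of air lowers its relative humidity can be illustrated with the Magnus approximation for saturation vapour pressure. This is only a sketch, not part of the source; the temperatures and starting humidity are assumed values:

# Saturation vapour pressure in hPa (Magnus approximation), T in degrees C
e_sat <- function(T) 6.112 * exp(17.62 * T / (243.12 + T))

# Heating at constant vapour content: the actual vapour pressure stays the
# same while the saturation pressure rises, so relative humidity drops.
rh_after_heating <- function(rh0, T0, T1) rh0 * e_sat(T0) / e_sat(T1)
rh_after_heating(rh0 = 80, T0 = 25, T1 = 45)  # about 26% RH after heating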
The solar dryer is a relatively simple concept. The basic principles employed in a solar dryer are:
- Converting light to heat: Any black surface on the inside of a solar dryer will improve the effectiveness of turning light into heat.
- Trapping heat: Isolating the air inside the dryer from the air outside the dryer makes an important difference. Using a clear solid, like a plastic bag or a glass cover, will allow light to enter, but once the light is absorbed and converted to heat, a plastic bag or glass cover will trap the heat inside. This makes it possible to reach similar temperatures on cold and windy days as on hot days.
- Moving the heat to the food. Both the natural convection dryer and the forced convection dryer use the convection of the heated air to move the heat to the food.
There are a variety of solar dryer designs. Principally, solar dryers can be categorized into three groups: (a) natural convection dryers, which use the natural vertical convection that occurs when air is heated; (b) forced convection dryers, in which the heated air is forced over the food by a fan; and (c) tunnel dryers.
While several different designs of the solar dryers exist, the basic components of a solar dryer are illustrated in Figure 1. In the case of a forced convection dryer, an additional component would be the fan.
The structure of a tunnel dryer is relatively simple. The basic design components of a tunnel dryer are the following:
- A semi circular shaped solar tunnel in the form of a poly house framed structure with UV stabilized polythene sheet
- The structure is, in contrast to the other dryer designs, large enough for a person to enter
The design of a tunnel dryer is illustrated in Figure 2. In addition, the technology teaser image at the top of this description is an image of the inside of a tunnel dryer.
Natural Convection Dryer: Large-Scale Design
Generally, natural convection dryers are sized appropriately for on-farm use. One design that has undergone considerable development by the Asian Institute of Technology in Bangkok, Thailand is shown in Figure 3. This natural convection dryer is a large scale structure: the collector is 4.5 meters long and 7 meters wide and the drying bin is 1 meter long and 7 meters wide. The structure consists of three main components: a solar collector, a drying bin and a solar chimney. The drying bin in this design is made of bamboo matting. In addition to the collector, air inside the solar chimney is heated which also increases the thermal draught through the dryer. The solar chimney is covered with black plastic sheet in order to increase the thermal absorption. A disadvantage of the dryer is its high structural profile which poses stability problems in windy conditions, and the need to replace the plastic sheet every 1-2 years.
Figure 4 shows a smaller design for a natural convection dryer. The capacity of this dryer is ten times smaller than the capacity for food drying in the larger design. However, the design is simple to build and is less susceptible to stability problems.
Natural Convection Dryer: Small-Scale Design
These solar food dryers are basically wooden boxes with vents at the top and bottom. Food is placed on screened frames which slide into the boxes. A properly sized solar air heater with south-facing plastic glazing and a black metal absorber is connected to the bottom of the boxes. Air enters the bottom of the solar air heater and is heated by the black metal absorber. The warm air rises up past the food and out through the vents at the top (see Figure 5). While operating, these dryers produce temperatures of 130–180° F (54–82° C), which is a desirable range for most food drying and for pasteurization. With these dryers, it’s possible to dry food in one day, even when it is partly cloudy, hazy, and very humid. Inside, there are thirteen shelves that will hold 35 to 40 medium sized apples or peaches cut into thin slices.
In the case of forced convection dryers, the structure can be relatively similar, but a power source is required for the fans that provide the air flow. The forced convection dryer does not require an incline for the air flow; the collector can be placed horizontally, with the fan at one end and the drying bin at the other. In addition, because it provides the air flow itself, the forced convection dryer is less dependent on solar energy, which allows it to work in weather conditions in which the natural convection dryer does not. As inadequate ventilation is a primary cause of food loss in solar food dryers, and is made worse by intermittent heating, proper ventilation is essential. Adding a forced convection flow, for instance through a fan powered by a PV solar cell, helps prevent such losses.
Drying is an important step in the food production process. The main argument for food drying is to preserve the food for longer periods of time. However, it is important to note that the process is not just concerned with the removal of moisture content from the food. Additional quality factors are influenced by the selection of drying conditions and equipment:
- Moisture Content. It is essential that the foodstuff after drying is at a moisture content suitable for storage. The desired moisture content will depend on the type of food, duration of storage and the storage conditions available. The drying operation is also essential in minimizing the range of moisture levels in the batch of food as portions of under-dried food can lead to deterioration of the entire batch.
- Nutritive value. Food constituents can be adversely affected when excessive temperatures are reached.
- Mould growth. The rate of development of micro-organisms is dependent on the food moisture content, temperature and the degree of physical damage to the food.
- Appearance and smell of the food. For example, the colour of milled rice can be adversely affected if the paddy is dried with direct heated dryers with poorly maintained or operated burners or furnaces.
Therefore, it is essential to not only monitor the moisture content of the foodstuffs, but to also monitor temperature, mould growth, appearance and smell of food, air flow, etc. Whether a natural convection dryer, a forced convection dryer or a tunnel dryer is appropriate depends on the amount of food, the climate and the demands placed on the end-product (how long does it need to be stored, in what quantities, etc.). A typical pattern of several of these factors is shown in Figure 6.
In addition, an important feature of solar drying devices is the size of the solar collectors. Depending on the quantity of goods to be dried, collectors must have the capacity to provide sufficient quantities of hot air to the drying chamber. Collectors which are too small in proportion to the amount of food to be dried will result in failed attempts and spoiled food.
According to the FAO (no date), the most common drying method of grain in tropical developing countries is sun drying. The process of sun drying starts when the crop is standing in the field prior to harvest; maize may be left on the standing plant for several weeks after attaining maturity. However, this may render the grain subject to insect infestation and mould growth. In addition, it prevents the land being prepared for the next crop and is vulnerable to theft and damage from animals.
A more controlled practice is to bring the foodstuffs into a structure which is specifically designed for food drying. This removes the issue of bacterial contamination, theft and insect infestation. Modern variations are to dry food in special enclosed drying racks or cabinets and expose the food to a flow of dry air heated by electricity, propane or solar radiation.
Although it is difficult to establish the current status of the technology in terms of market penetration as data on this technology is insufficient, some general remarks can be made about the market potential.
There seem to be no major design barriers to a solar dryer: the design is easy to build with a minimum of materials required. This is especially true for the natural convection dryer, which doesn't require any machinery or energy source (apart from the solar energy source). In contrast, the forced convection dryer, the electricity-heated design and the propane-fuelled dryers all require some form of machinery and an external energy source (in the form of electricity or propane). This complicates their designs and raises their operational costs. However, these designs possibly do have lower food loss rates due to more constant air flow.
Related to the previous remark, the easy design cuts costs. The design can be made primarily from materials found in the local surroundings. For instance the frame of the structure can be constructed from wood, bamboo or any other natural product that is strong enough. This characteristic enhances the market potential of this product.
The technology provides several socio-economic benefits. As the FAO (2010) notes, one of the main issues facing developing countries today is the issue of food security. The solar food dryer can improve food security through allowing the longer storage of food after drying compared to food that hasn't been dried.
The solar dryer can save fuel and electricity when it replaces dryer variations that require an external energy source in the form of electricity or fossil fuel. In addition solar food dryers cut drying times in comparison to sun drying. While fossil fuel or electrically powered dryers might provide certain benefits (more consistent air flow and higher temperatures), the financial barriers that these technologies provide might be too high for marginal farmers. For instance, electricity might be not available or too expensive and fossil fuel powered drying might pose large initial and running costs.
Fruits, vegetables and meat dried in a solar dryer are better in quality and hygiene compared to fruits, vegetables and meat dried in sun drying conditions. As mentioned, due to the closed system design, contamination of food is prevented or minimized. In addition, the food is not vulnerable to rain and dust, compared to the open system design of sun drying.
In rural areas where farmers grow fruits and vegetables without proper food drying facilities, the farmers need to sell the food in the market shortly after harvesting. When food production is high, the farmers have to sell the food at low price to prevent the food from losing value through decomposition. Therefore, the solar food dryer might be able to prevent the financial losses farmers in these situations face. Dried food can be stored longer and retain quality longer. Moreover, dried fruits and vegetables might be sold as differentiated products which possibly enhances their market value. For example, dried meat can be processed into a variety of different products.
Drying food reduces its volume. Therefore, in combination to longer storage times, the food is also more easily transported after drying which potentially opens up additional markets to the producer of the food.
While there is insufficient data at the moment to elaborate fully on the financial requirements and costs of this technology, certain general remarks can be made.
For natural convection dryers, the financial requirements are low. The structure is made from components that are mostly easily available (wood, bamboo, or other strong construction materials). The major cost components are likely to be the glass and plastic sheets required to trap the heat. Operational costs of the natural convection technology are limited to labour costs. Forced convection dryers have higher initial costs and higher operational costs, as the fan needs to be purchased and operated.
As mentioned, dried food products might yield a higher price on the market as they can be sold out-of-season (the fresh version might no longer be on the market in a particular season, which might increase the price of the dried version of the food).
FAO, 2010. “Climate-Smart” Agriculture - Policies, Practices and Financing for Food Security, Adaptation and Mitigation. Food and Agriculture Organization of the United Nations 2010. Document can be found at: http://www.fao.org
FAO, no date. Information retrieved from the following websites: http://www.fao.org/docrep/t0395e/T0395E04.htm , http://www.fao.org/docrep/x0209e/x0209e06.htm and http://www.fao.org/docrep/t1838e/T1838E0v.htm | http://climatetechwiki.org/print/technology/jiqweb-edf | 13 |
10 | How far have these students walked by the time the teacher's car reaches them after their bus broke down?
Experiment with the interactivity of "rolling" regular polygons, and explore how the different positions of the red dot affect its vertical and horizontal movement at each stage.
Position the lines so that they are perpendicular to each other. What can you say about the equations of perpendicular lines?
Experiment with the interactivity of "rolling" regular polygons, and explore how the different positions of the red dot affect the distance it travels at each stage.
Two cyclists, practising on a track, pass each other at the starting line and go at constant speeds... Can you find lap times that are such that the cyclists will meet exactly half way round the...
On the grid provided, we can draw lines with different gradients. How many different gradients can you find? Can you arrange them in order of steepness?
Explore the relationship between resistance and temperature.
Collect as many diamonds as you can by drawing three straight lines.
Two buses leave at the same time from two towns Shipton and Veston on the same long road, travelling towards each other. At each mile along the road are milestones. The buses' speeds are constant...
How does the position of the line affect the equation of the line? What can you say about the equations of parallel lines?
I took the graph y=4x+7 and performed four transformations. Can you find the order in which I could have carried out the transformations?
Investigate what happens to the equations of different lines when you reflect them in one of the axes. Try to predict what will happen. Explain your findings.
Can you decide whether two lines are perpendicular or not? Can you do this without drawing them?
Can you adjust the curve so the bead drops with near constant speed?
Looking at the graph - when was the person moving fastest? Slowest?
Investigate what happens to the equation of different lines when you translate them. Try to predict what will happen. Explain your findings.
When I park my car in Mathstown, there are two car parks to choose from. Which car park should I use?
Follow the instructions and you can take a rectangle, cut it into 4 pieces, discard two small triangles, put together the remaining two pieces and end up with a rectangle the same size. Try it!
You can move the 4 pieces of the jigsaw and fit them into both outlines. Explain what has happened to the missing one unit of area.
A 1 metre cube has one face on the ground and one face against a wall. A 4 metre ladder leans against the wall and just touches the cube. How high is the top of the ladder above the ground?
Straight lines are drawn from each corner of a square to the mid points of the opposite sides. Express the area of the octagon that is formed at the centre as a fraction of the area of the square.
Logo helps us to understand gradients of lines and why Muggles Magic is not magic but mathematics. See the problem Muggles magic.
In a snooker game the brown ball was on the lip of the pocket but it could not be hit directly as the black ball was in the way. How could it be potted by playing the white ball off a cushion?
Which is bigger, n+10 or 2n+3? Can you find a good method of answering similar questions?
If you take two tests and get a marks out of a maximum b in the first and c marks out of d in the second, does the mediant (a+c)/(b+d) lie between the results for the two tests separately?
In this problem we are faced with an apparently easy area problem, but it has gone horribly wrong! What happened? | http://nrich.maths.org/public/leg.php?code=68&cl=3&cldcmpid=4954 | 13
29 | 7. Ratio and Proportion
We need to be a bit careful because lots of people use the words "ratio", "fraction" and "proportion" to mean the same thing in everyday speech.
This makes it difficult when we meet the terms in mathematics, because they are not necessarily used to mean the same thing.
Ratios and Fractions
Ethanol or methanol (wood-based methyl alcohol) is sometimes added to gasoline to reduce pollution and cost. Car engines can typically run on a petrol-ethanol mixture in a ratio of `9:1`. The "`9:1`" means that for each nine units of petrol, there is 1 unit of ethanol.
For example, if we had 9 L of petrol, we would need 1 L of ethanol.
We can see that altogether we would have 10 L of the mixture.
As fractions, the proportion of each liquid is: petrol `9/10` of the mixture and ethanol `1/10` of the mixture.
You can see a more advanced question involving ratios and gasoline in Applied Verbal Problems, in the algebra chapter.
Concrete is a mixture of gravel, sand and cement, usually in the ratio `3:2:1`.
We can see that there are 3 + 2 + 1 = 6 items altogether. As fractions, the amount of each component of the concrete is: gravel `3/6 = 1/2`, sand `2/6 = 1/3`, and cement `1/6`.
One of the most famous ratios is the ratio of the circumference of a circle to its diameter.
The value of that ratio cannot be determined exactly. It is approximately 3.141592654... We call it `pi` (the Greek letter "pi").
See more on Pi.
We can talk about the proportion of one quantity compared to another.
In mathematics, we define proportion as an equation with a ratio on each side.
Considering our ethanol/petrol example above, if we have 54 L of petrol, then we need 6 L of ethanol to give us a 9:1 mix.
We could write this as:
`54:6 = 9:1`
We could use fractions to write our proportion, as follows: `54/6 = 9/1`.
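Solving a proportion like this is a one-liner in R. The sketch below is an illustration (not from the original lesson); it finds the ethanol needed for any amount of petrol at a 9:1 mix:

# For a petrol:ethanol ratio of 9:1, solve petrol/x = 9/1 for x
ethanol_needed <- function(petrol, ratio = 9) petrol / ratio
ethanol_needed(54)  # 6 L of ethanol for 54 L of petrol
ethanol_needed(9)   # 1 L, matching the original 9 L : 1 L example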
A normal walking speed is 1 km in 10 min. This is a rate, where we are comparing how far we can go in a certain amount of time.
Since there are six 10-minute intervals in an hour, our walking rate is equivalent to 6 km/h.
Example 6 - Conversion of Units
A bullet leaves a gun travelling at 500 m/s. Convert this speed into km/h.
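Since 1 km = 1000 m and 1 h = 3600 s, multiplying by 3600/1000 = 3.6 converts m/s to km/h, so the bullet travels at 500 × 3.6 = 1800 km/h. A quick R check (the function name is just for illustration):

ms_to_kmh <- function(v) v * 3600 / 1000  # metres per second to kilometres per hour
ms_to_kmh(500)       # 1800 km/h for the bullet
ms_to_kmh(1000/600)  # walking 1 km per 10 min comes out as 6 km/h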
A famous ratio: `Phi`
Now let's move on to an interesting ratio called `Phi` ("phi"), in Math of Beauty.
| http://www.intmath.com/numbers/7-ratio-proportion.php | 13
16 | Winning Lesson Plans
Browse these science lesson plans submitted by teachers for Science WoRx contests.
Building Working Models of the Human Heart
In this activity by David Brock, students design and build functioning artificial "hearts" to study and to demonstrate their knowledge of the circulatory system.
The Cell as a System
The animal cell can be seen as a comparison to many systems that exist in our world today. In this lesson plan by Diana VonEyes, students will use the process of scientific literacy and scientific inquiry to design the animal cell as a city system.
Using Simple Machines to Create an Obstacle Course
Combine a lesson on simple machines with physical fitness in this fun science activity by Michelle Brooks. Build an obstacle course using simple machines!
Inheritance and Natural Selection
In this jam-packed lesson plan by Kristina Woods, students will explore how genes are inherited, meiosis, genetic mutations and natural selection.
Searching for Lead in Our Environment - An Environmental Science Lab
Lead is a common and deadly contaminant in the environment, especially in older, urban environments. In this lab by Bill Felinski III, students will measure the concentration of lead in water and soil samples.
Measuring the Speed of Light - A Physics Lab
Break out the s'mores and a microwave and calculate the speed of light with this physics lab by Brian Heglund!
Magnet Car Contest - A Physics Lab
This lesson by Randy Moehnke provides students with a fun and challenging hands-on activity where they apply the principles of electricity and magnetism to creating toy cars!
This educational game created by John Sowash reinforces a student's understanding of meiosis in a fun and challenging way!
The Scientific Method and Basic Microbiology
Molly Jean Woofter shares this lab experiment in which students form hypotheses about the effectiveness of household cleaners in inhibiting the growth of bacteria.
Ancient Indian Gravity Sewers - A Physics and Social Studies Lab
This cross-curriculum experiment by Michael Ryan combines information on the creation of sewers in the Ancient Indus Valley with the hands-on experience of a science experiment dealing with gravity.
Creating an Ecosystem
In this creative, hands-on environmental science activity by Marsha Fischer, students create a shoebox diarama of an ecosystem, including both biotic and abiotic factors.
The Three Pigs Construction Company - A Lab on Force and Motion
In this hands-on lab by Lisa Milenkovic, students will try to build a "house" that can withstand big winds - such as the huffing and puffing of the big bad wolf, or even a hurricane!
Identify a Real-Life Monster! Student Worksheet
If it's scaly, creepy-crawly and has lots of legs, then it's definitely a monster - or an insect! Some of the most bizarre-looking creatures are insects, but don't be afraid: What can be named can be conquered! Use your skills of observation and the dichotomous key to identify the Order of these scary-looking insects.
Identify a Real-Life Monster! Answer Key
Here's an answer key for our Identify a Real-Life Monster! activity!
Video Lesson Guides
Looking for the Science Pro Lab videos? Find the full lesson plans and each video here.
Science Lesson Plan: Ice on Fire Experiment
Science Pro Robert showcases an amazing chemistry experiment that helps kids identify the coefficients and subscripts in a chemical equation. This video is most appropriate for science classes in grades 9-12.
Science Lab: Extracting DNA from a Strawberry
Watch Science Pros Laurie and Carrie as they conduct a science lab that will extract so much strawberry DNA that you can see it with the naked eye! It's a great idea for a science fair project! This video is most appropriate for science classes in grades 7-10.
Science Lab: Testing pH with Cabbage
Join Science Pros Marya and Angie to test the pH of substances we use every day with a most unusual ingredient: cabbage juice! This video is most appropriate for science classes in grades 5-8.
Science Lab: Chromatography: Why Do Leaves Change Color?
Check out this science lab with Science Pros Margaret and Jeff as they use everyday items to separate the colored pigments in a leaf using chromatography! This video is most appropriate for science classes in grades 10-12.
| http://www.scienceworx.org/Resources.aspx | 13
11 | In mathematics, a projection is a mapping of a set (or other mathematical structure) into a subset (or sub-structure), which is equal to its square for mapping composition (or, in other words, which is idempotent). The restriction to a subspace of a projection is also called a projection, even if the idempotence property is lost. An everyday example of a projection is the casting of shadows onto a plane (paper sheet). The projection of a point is its shadow on the paper sheet. The shadow of a point of the paper sheet is the point itself (idempotence). The shadow of a three dimensional sphere is a circle. Originally, the notion of projection was introduced in Euclidean geometry to denote the projection of the Euclidean space of three dimensions onto a plane in it, like the shadow example. The two main projections of this kind are:
- The projection from a point onto a plane, or central projection: if C is the point, called the center of projection, the projection of a point P different from C is the intersection of the line CP with the plane. The point C itself and the points P for which the line CP is parallel to the plane have no image under the projection.
- The projection parallel to a direction D, onto a plane: The image of a point P is the intersection with the plane of the line parallel to D passing through P.
The concept of projection in mathematics is a very old one, most likely having its roots in the phenomenon of the shadows cast by real world objects on the ground. This rudimentary idea was refined and abstracted, first in a geometric context and later in other branches of mathematics. Over time differing versions of the concept developed, but today, in a sufficiently abstract setting, we can unify these variations.
In cartography, a map projection is a map of a part of the surface of the Earth onto a plane, which, in some cases, but not always, is the restriction of a projection in the above meaning. The 3D projections are also at the basis of the theory of perspective.
The need to unify the two kinds of projections, and to define the image under a central projection of any point different from the center of projection, is at the origin of projective geometry.
In an abstract setting we can generally say that a projection is a mapping of a set (or of a mathematical structure) which is idempotent, which means that a projection is equal to its composition with itself. A projection may also refer to a mapping which has a left inverse. Both notions are strongly related, as follows. Let p be an idempotent map from a set E into itself (thus p∘p = p) and F = p(E) be the image of p. If we denote by π the map p viewed as a map from E onto F and by i the injection of F into E, then we have π∘i = IdF. Conversely, π∘i = IdF implies that i∘π is idempotent.
The original notion of projection has been extended or generalized to various mathematical situations, frequently, but not always, related to geometry, for example:
- In set theory:
- An operation typified by the jth projection map, written projj, that takes an element x = (x1, ..., xj, ..., xk) of the cartesian product X1 × … × Xj × … × Xk to the value projj(x) = xj. This map is always surjective.
- A mapping that takes an element to its equivalence class under a given equivalence relation is known as the canonical projection.
- The evaluation map sends a function f to the value f(x) for a fixed x. The space of functions YX can be identified with the cartesian product of copies of Y indexed by X, and the evaluation map at x is then the projection from this cartesian product onto the factor corresponding to x.
- In category theory, the above notion of cartesian product of sets can be generalized to arbitrary categories. The product of some objects has a canonical projection morphism to each factor. This projection will take many forms in different categories. The projection from the Cartesian product of sets, the product topology of topological spaces (which is always surjective and open), or from the direct product of groups, etc. Although these morphisms are often epimorphisms and even surjective, they do not have to be.
- In linear algebra, a linear transformation that remains unchanged if applied twice (p(u) = p(p(u))), in other words, an idempotent operator. For example, the mapping that takes a point (x, y, z) in three dimensions to the point (x, y, 0) in the plane is a projection. This type of projection naturally generalizes to any number of dimensions n for the source and k ≤ n for the target of the mapping. See orthogonal projection, projection (linear algebra). In the case of orthogonal projections, the space admits a decomposition as a product, and the projection operator is a projection in that sense as well. (A short numerical sketch of this case follows this list.)
- In differential topology, any fiber bundle includes a projection map as part of its definition. Locally at least this map looks like a projection map in the sense of the product topology, and is therefore open and surjective.
- In topology, a retract is a continuous map r: X → X which restricts to the identity map on its image. This satisfies a similar idempotency condition r2 = r and can be considered a generalization of the projection map. A retract which is homotopic to the identity is known as a deformation retract. This term is also used in category theory to refer to any split epimorphism.
- The scalar projection (or resolute) of one vector onto another.
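As a concrete illustration of the linear algebra case mentioned above, here is a short R sketch (not from the article) of the projection onto the xy-plane; applying the matrix twice gives the same result as applying it once, which is exactly the idempotence property:

# Orthogonal projection of R^3 onto the xy-plane: drops the z-coordinate
P <- matrix(c(1, 0, 0,
              0, 1, 0,
              0, 0, 0), nrow = 3, byrow = TRUE)
v <- c(2, -5, 7)
P %*% v                # (2, -5, 0): the "shadow" of v on the plane
all.equal(P %*% P, P)  # TRUE: P is idempotent, P composed with P equals P

| http://en.wikipedia.org/wiki/Projection_(mathematics) | 11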
11 | A & P Study Guide
Exam # 2
Chapter 6 – Skeletal System
Skeleton: Overview (p. 79)
at least five functions of the skeleton.
a classification of bones based on their shapes.
the anatomy of a long bone.
the growth and development of bones.
and describe eight types of fractures, and state the four steps in fracture repair.
Axial Skeleton (p. 82)
between the axial and appendicular skeletons.
the bones of the skull, and state the important features of each bone.
the structure and function of the hyoid bone.
the bones of the vertebral column and the thoracic cage. Be able to label diagrams of them.
a typical vertebra, the atlas and axis, and the sacrum and coccyx.
the three types of ribs and the three parts of the sternum.
the bones of the pectoral girdle and the pelvic girdle. Be able to label diagrams of them.
the bones of the upper limb (arm) and the lower limb (leg). Be able to label diagrams that include surface features.
- Cite at least five differences between the female and male pelvises.
Joints (Articulations) (p. 100)
how joints are classified, and give examples of each type of joint.
the types of movements that occur at synovial joints.
Effects of Aging (p. 105)
what anatomical and physiological changes occur in the skeletal system as we age.
Chapter 7 – Muscular System
Types and Functions of Muscles (p. 111)
the three types of muscles, and indicate whether each type is voluntary or involuntary.
and discuss four functions of muscles.
Skeletal Muscle Structure and Contraction (p. 111)
the anatomy of a whole muscle and a muscle fiber.
the manner in which a muscle fiber contracts.
a muscle twitch, summation, and tetanus.
how ATP is made available for muscle contraction.
how muscles work together to achieve movement.
muscle tone and the effect of contraction on the size of a muscle.
Skeletal Muscles of the Body (p. 117)
the superficial muscles of the head, neck, and trunk; shoulder and upper limb (arm); and thigh and lower limb (leg). Indicate their origins and insertions, and give their functions.
Chapter 8 – Nervous System
Nervous System (p. 136)
the three functions of the nervous system.
the structure and function of the three types of neurons and four types of neuroglia.
how a nerve impulse is conducted along a nerve and across a synapse.
the structure of a nerve and the differences between the three different types of nerves.
the structure of a reflex arc and the function of a reflex.
Central Nervous System (p. 144)
the major parts of the brain and the lobes of the cerebral cortex. State functions for each structure.
in detail the structure of the spinal cord, and state its functions.
the three layers of meninges, and state the function
how cerebrospinal fluid is formed and circulates.
Peripheral Nervous System (p. 152)
the twelve pairs of cranial nerves, and give a function for each.
the structure and function of spinal nerves.
and describe the autonomic nervous system.
between the sympathetic and parasympathetic divisions in four ways, and give examples of their respective effects on specific organs.
Effects of Aging (p. 158)
what anatomical and physiological changes occur in the nervous system as we age.
Chapter 9 – Sensory System
General Receptors (p. 166)
sensory receptors according to the system used in the text.
the four senses of the skin, and state the location of their receptors.
the function of visceral receptors.
the function of proprioceptors.
Chemoreceptors (p. 168)
the chemoreceptors, and state their location, anatomy, and mechanism of action.
Photoreceptors (p. 169)
the anatomy and function of the accessory organs of the eye.
the anatomy of the eye and the function of each part.
the sensory receptors for sight, their mechanism of action, and the mechanism for stereoscopic vision.
common disorders of sight discussed in the text.
Mechanoreceptors (p. 179)
the anatomy of the ear and the function of each part.
the sensory receptors for balance and hearing, and their mechanism of action.
Effects of Aging (p. 183)
what anatomical and physiological changes occur in the sensory system as we age.
Myasthenia Gravis (p. 171)
Lenses (p. 178)
Damage and Deafness (p. 182) | http://www2.thomas.edu/faculty/hansenj/sc321/Study2.htm | 13 |
12 | Highlights DNA Replication (continued)
1. Replication of linear eukaryotic chromosomes is more complex than replicating the circles of prokaryotic cells. During each round of eukaryotic DNA replication, a small portion of the DNA at the end of the chromosome (known as a telomere) is lost. Telomeres contain thousands of copies of the same short nucleotide sequence. Telomere length may be related to the cellular lifespan.
2. Telomerases are enzymes that make telomeres, and they are active in fetal cells. They serve to elongate the ends of linear chromosomal DNAs, adding thousands of repeats of a short sequence ("junk DNA"). This "junk DNA" is called a telomere. At each round of eukaryotic DNA replication, a short stretch at the end of the DNA is lost, shortening the telomere. The longer a telomere is, the more times a cell can divide before it starts losing important DNA sequences.
3. Tumor cells are another cell type that has an active telomerase. This probably is a factor that enables them to be "immortal".
4. Telomerase acts as a reverse transcriptase, using an RNA template that it carries with it to synthesize the repetitive sequences of the telomere.
5. Eukaryotic cells tightly control the process that leads to their division. The cycle is called the cell cycle and the protein p53 plays an important role. If p53 detects that replication has not completed properly, it stimulates production of repair proteins that try to fix the damage. If the damage is fixed, the cell cycle continues and the cell ultimately divides. If the damage cannot be fixed, p53 stimulates the cell to commit suicide - a phenomenon called apoptosis.
1. Transcription is the making of RNA using DNA as a template. Transcription requires an RNA polymerase, a DNA template and 4 ribonucleoside triphosphates (ATP, GTP, UTP, and CTP). Prokaryotic cells have only a single RNA polymerase. Transcription occurs in the 5' to 3' direction. RNA polymerases differ from DNA polymerases in the RNA polymerases do NOT require a primer.
2. Transcription requires DNA strands to be opened to allow the RNA polymerase to enter and begin making RNA. Transcription starts near special DNA sequences called promoters.
3.A factor known as sigma associates with the RNA polymerase in E. coli and helps it to recognize and bind to the promoter. A promoter is a sequence in DNA that is recognized by the RNA Polymerase-Sigma complex. (Note that sigma factor binds to BOTH the RNA Polymerase and to the promoter sequence in the DNA. Note also that sigma factor is a PROTEIN). Genes that are to be transcribed have a promoter close by to facilitate RNA Polymerase binding to begin transcription.
4. Promoters in E. coli have two common features. The first is a sequence usually located about 10 base pairs "upstream" of the transcription start site (the transcription start site is the location where the first base of RNA starts). This sequence is known as the "-10" sequence or the Pribnow (TATA) box, which is so named because the most common version of it (known as a consensus sequence) is 5'-TATAAT-3'. The second common feature of E. coli promoters (the "-35" sequence) is located about 35 base pairs upstream of the transcription start site. Eukaryotic promoters also frequently have a TATA box, but in a slightly different position.
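To illustrate what searching for a consensus sequence looks like, here is a small R sketch; the promoter string below is made up for the example, and a real search would have to tolerate mismatches, since natural -10 sequences often differ from the consensus at one or more positions:

# Scan a (hypothetical) promoter region for the exact Pribnow box consensus
promoter <- "AGGCTTGACAATTAATCATCGAACTAGTTAACTAGTACGCTATAATGCAGGT"
hits <- gregexpr("TATAAT", promoter)[[1]]
hits  # start position(s) of TATAAT, or -1 if the exact consensus is absent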
5. Transcription occurs in three phases - initiation, elongation, and termination. Binding of RNA Polymerase and sigma is the first step in transcription (initiation). After polymerization starts, sigma factor leaves the RNA polymerase and the elongation process continues.
6. Termination of transcription in E. coli occurs by several mechanisms. One I discussed in class is factor independent transcription termination, which occurs as a result of a hairpin loop forming in the sequence of an RNA. When it forms, it "lifts" the RNA polymerase off the DNA and everything falls apart and transcription stops at that point.
7. Factor-dependent termination is caused by a protein called rho. Rho works by binding to the 5' end of the RNA and sliding up the RNA faster than the RNA polymerase makes RNA. When rho catches the RNA polymerase, it causes the RNA polymerase to dissociate from (come off of) the DNA and release the RNA.
8. An operon is a collection of genes all under the control of the same promoter. When an operon is transcribed, all of the genes on the operon are on the same mRNA. Operons occur in prokaryotes, but not eukaryotes. In eukaryotes, each gene is made on individual mRNAs and each gene has its own promoter. | http://oregonstate.edu/instruction/bb350/spring12/highlightstranscription.html | 13 |
16 | In Kepler, NASA has an exoplanet hunter. In the Curiosity rover, the space agency has a finely tuned mechanism for tracking down geological signs of past life on Mars. It even has an asteroid hunter capable of chasing down hurtling chunks of rock from millions of miles away. Now, NASA wants a comet hunter. Literally. In a lab at NASA’s Goddard Space Flight Center in Maryland researchers have constructed a massive crossbow in which they are testing huge harpoons that they hope will one day blast through the surface of a speeding comet.
The overarching idea: to fly a spacecraft close enough to a comet that it can fire a harpoon into the comet’s surface, acquire material samples from within it, and recover the samples to the spacecraft for return to Earth. But before they can do that, researchers have to prove that their harpoon will work. That’s why in a closet-sized lab space at GSFC there sits a six-foot-tall crossbow--it’s technically a ballista, a siege weapon invented by the ancient Greeks to hurl large missiles at their foes--made from a pair of truck leaf springs and equipped with a half-inch-thick steel bowstring.
The ballista is positioned pointing downward for obvious reasons. Its bowstring is pulled back mechanically to create up to 1,000 pounds of force, which can launch projectiles upwards of 100 feet per second. It’s here that GSFC researchers are firing various harpoon designs into 55-gallon drums of simulated comet material (usually some mix of pebbles, salt, sand, or the like) to see what sticks--and what doesn’t. It’s the first phase of a long design project aimed at proving that harpooning a comet and returning the samples to Earth is feasible. The Japanese space agency JAXA has succeeded in returning asteroid samples to Earth and NASA’s Stardust has collected samples from the tail of a comet, but researchers really want to see what’s inside. That’s where they might find a bit of the “primordial ooze” that could’ve seeded life on this planet millions of years ago through comet strikes here on Earth.
But landing on a comet and drilling core samples isn’t so easy. Unlike large asteroids, comets exert very little gravity and are basically just huge chunks of ice and dust leftover from the solar system’s formation. In order to land a spacecraft on one, NASA would likely have to somehow tether a spacecraft to the comet and pull itself onto the surface. In other words, it would need a harpoon of sorts anyhow. The idea here is to simply go ahead and make the harpoon the subsurface sampling device, circumventing the need to actually land.
To do so, GSFC researchers are trying to figure out and demonstrate the best tip designs, cross-section, ideal velocities, and explosive charges to propel the harpoon (the actual mission wouldn’t pack a crossbow, but some kind of chemical propellant to launch the harpoons). Once it has penetrated the surface, researchers need to show their harpoon can gather a sample, detach (probably leaving the tip behind), and ferry the sample back up to the spacecraft. And they will have to demonstrate the ability to do this in a variety of possible materials, because there’s no way to know what the comet’s composition will be like until the spacecraft gets there.
It’s an ambitious project, and it starts with a huge ballista sitting in a closet at Goddard.
| http://www.popsci.com/technology/article/2011-12/hunt-comets-nasa-building-giant-harpoon | 13
45 | In the preceding posts, I mentioned infinite products as approximations for π. These may be seen geometrically as exhaustion methods, where the area of a polygon approaches the circular area alternately from above, from below, from above, from below, etc.
There are also integral representations of pi. In such integral representations, π appears in the quantitative value of the integral of a mathematical function. Visually, this is often represented as the area delimited by the graph of the function. However, the relation with the circle is lost when viewed under Cartesian coordinates. For example, the graph of the simplest instance of the Cauchy-Lorentz distribution, f(x)=1/(1+x²), "has nothing at all to do with circles or geometry in any obvious way", as quoted from last Pi Day's Sunday Function post on Matt Springer's Built on Facts blog.
In order to view the role of the circle in integral representations of π, we need to switch to alternative ways to visualize math functions. As an example, let's take the constant function y=f(x)=2. The function f maps an element x from a domain to the element y of the target. In this case, for every x, the target y has the constant value 2. With Cartesian coordinates, we are used to represent this function as a horizontal straight line, like in Figure 1a (click on the figure to view it enlarged). If however we write it as R=f(r)=2, where the function f maps any circle of radius r of the domain to a target circle of radius R=2, the same function can be viewed as a circle of constant radius, like in Figure 1b. So the same function f can be equally well viewed as a straight line or as a circle (x, y, r or R are only dummy variables).
Now if we take another example, the linear function y=f(x)=2x, we usually view it in Cartesian coordinates as a straight line with slope 2, as in Figure 1c. In the circular representation R=f(r)=2r, this works differently. Because we are relating circles of the input domain to other circles of the target, for each circle of radius r we need to draw the target circle of radius 2r. A single line won't do: for one value of r, we need to draw two circles. If we use blue circles for elements of the input domain and red circles for elements of the target, we can visualize it for successive values of r as an animation, as in Figure 1d. In that way, we view the progression of the target circle as the input circle becomes larger.
Unlike the Cartesian representation, which shows the progression of a function in a static graph, this circular representation needs a dynamic or recurrent process to convey the progression of the function. It is therefore not well suited to illustrations in print media. On the other hand, it has the advantage of keeping track of the geometrical form of the circle, and that's exactly what we need in order to perceive the circular nature of π when it shows up in mathematical functions. The relation of the integral of the Cauchy-Lorentz distribution f(r)=1/(1+r²) with the circle can then be seen with the help of the geometric counterparts of arithmetic operations like addition, squaring and dividing. A convenient procedure is illustrated in the successive steps of Figure 2.
Step 1. Draw the input circle of radius r and the reference circle of radius unity.
Step 2. Determine r².
Step 5. Find the target ring related to the input ring ranging over [r, r + dr]. This yields a ring of width dr/(1+r²). The location of this ring depends on the relative progression rates of r and r² (I've not yet found a straightforward explanation for this determination).
Step 6. Integrate dr/(1+r²) for r running over all space. For r becoming larger and larger, the summed area tends towards the area of a circle of radius 1. For the positive half plane, this corresponds to the π/2 value found analytically.
The tricky step seems to be how to relate the progression between r and 1/(1+r²) in steps 5 and 6. One can verify, for example, the value of the integral at intermediate steps: for the integral from r=0 to 1, the value in the positive half plane must be π/4, which can be verified on the figure.
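Those intermediate values are also easy to check numerically. The following Python sketch (added here for illustration; it is not part of the original post) approximates the integral of 1/(1+r²) with a midpoint rule and compares the results against π/4 and π/2.

import math

def integral(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f = lambda r: 1.0 / (1.0 + r * r)

print(integral(f, 0.0, 1.0))      # ~0.785398... = pi/4
print(math.pi / 4)
print(integral(f, 0.0, 1000.0))   # tends towards pi/2 as the upper bound grows
print(math.pi / 2)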
In order to gain more insight on π, it could be of interest to develop skills for this circular representation. | http://commonsensequantum.blogspot.com/2010/04/keeping-track-of-circle-for-integral.html | 13 |
23 | LCROSS Finds Water on the Moon
November 13, 2009: The argument that the Moon is a dry, desolate place no longer holds water.
At a press conference today, researchers revealed preliminary data from NASA's Lunar Crater Observation and Sensing Satellite, or LCROSS, indicating that water exists in a permanently shadowed lunar crater. The discovery opens a new chapter in our understanding of the Moon.
Above: Visible camera images showing the ejecta plume at about 20 seconds after impact. Credit: LCROSS/NASA
"We are ecstatic," said Anthony Colaprete, LCROSS project scientist and principal investigator at NASA's Ames Research Center.
On Oct. 9th, the LCROSS spacecraft and a companion rocket stage made twin impacts in crater Cabeus near the Moon's south pole. A plume of debris traveled at a high angle beyond the rim of Cabeus and into sunlight, while an additional curtain of debris was ejected more laterally.
Since the impacts, the LCROSS science team has been analyzing the huge amount of data the spacecraft collected. The team concentrated on data from the satellite's spectrometers, which provide the most definitive information about the presence of water. A spectrometer helps identify the composition of materials by examining light they emit or absorb.
The team took the known near-infrared spectral signatures of water and other materials and compared them to the impact spectra the LCROSS near-infrared spectrometer collected.
"We were able to match the spectra from LCROSS data only when we inserted the spectra for water," Colaprete said. "No other reasonable combination of other compounds that we tried matched the observations. The possibility of contamination from the Centaur also was ruled out."
Right: Data from LCROSS's near-infrared spectrometer taken 20 to 60 seconds after the impact of the Centaur booster. The smooth curve corresponds to a model containing water and other compounds--some of which remain unidentified. Credit: NASA
Additional confirmation came from an emission in the ultraviolet spectrum that was attributed to hydroxyl (OH), one product from the break-up of water (H2O) by sunlight.
Data from the other LCROSS instruments are being analyzed for additional clues about the state and distribution of the material at the impact site. The LCROSS science team and colleagues are poring over the data to understand the entire impact event, from flash to crater. The goal is to understand the distribution of all materials within the soil at the impact site.
"The full understanding of the LCROSS data may take some time. The data is that rich," Colaprete said. "Along with the water in Cabeus, there are hints of other intriguing substances. The permanently shadowed regions of the Moon are truly cold traps, collecting and preserving material over billions of years."
Stay tuned for updates.
| http://science.nasa.gov/science-news/science-at-nasa/2009/13nov_lcrossresults/ | 13
18 | Fallacies are defects in an argument, other than false premises, which cause an argument to be invalid, unsound or weak. Fallacies can be separated into two general groups: formal and informal. A formal fallacy is a defect which can be identified merely by looking at the logical structure of an argument, rather than at any specific statements.
Formal fallacies are found only in deductive arguments with identifiable forms. One of the things which makes them appear reasonable is the fact that they look like and mimic valid logical arguments, but are in fact invalid. Here is an example:
1. All humans are mammals. (premise)
2. All cats are mammals. (premise)
3. All humans are cats. (conclusion)
Both premises in this argument are true, but the conclusion is false. The defect is a formal fallacy, and can be demonstrated by reducing the argument to its bare structure:
1. All A are C
2. All B are C
3. All A are B
It does not really matter what A, B and C stand for; we could replace them with wines, milk and beverages. The argument would still be invalid, and for the exact same reason. Sometimes, therefore, it is helpful to reduce an argument to its structure and ignore content in order to see if it is valid.
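One way to make this structural check concrete is to search mechanically for a counterexample to the form. The following Python sketch (an illustration added here, not part of the original article) interprets "All X are Y" as subset inclusion over a tiny universe and looks for sets A, B, C that make both premises true and the conclusion false.

from itertools import product

def all_are(x, y):
    """Interpret 'All X are Y' as subset inclusion."""
    return x <= y

subsets = [frozenset(s) for s in (set(), {0}, {1}, {0, 1})]

# Look for a counterexample: both premises true, conclusion false.
# (The first one found may involve an empty set, which is still valid.)
for a, b, c in product(subsets, repeat=3):
    if all_are(a, c) and all_are(b, c) and not all_are(a, b):
        print("Invalid form. Counterexample:",
              "A =", set(a), "B =", set(b), "C =", set(c))
        break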
Informal fallacies are defects which can be identified only through an analysis of the actual content of the argument rather than through its structure. Here is an example:
1. Geological events produce rock. (premise)
2. Rock is a type of music. (premise)
3. Geological events produce music. (conclusion)
The premises in this argument are true, but clearly the conclusion is false. Is the defect a formal fallacy or an informal fallacy? To see if this is actually a formal fallacy, we have to break it down to its basic structure:
1. A = B
2. B = C
3. A = C
As we can see, this structure is valid, therefore the defect cannot be a formal fallacy identifiable from the structure. Therefore, the defect must be an informal fallacy identifiable from the content. In fact, when we examine the content, we find that a key term, rock, is being used with two different definitions (the technical term for this sort of fallacy is Equivocation).
Informal fallacies can work in several ways. Some distract the reader from what is really going on. Some, like in the above example, make use of vagueness or ambiguity to cause confusion. Some appeal to emotions rather than logic and reason.
Categorizing fallacies can be done in a number of different ways. Aristotle was the first to try to systematically describe and categorize fallacies, identifying thirteen fallacies divided into two groups. Since then many more have been described and the categorization has become more complicated. Thus, the categorization used here should prove useful. | http://atheism.about.com/od/logicalarguments/a/fallacy.htm | 13
10 | Evidence of String Theory: Gamma Ray Bursts
Among the various phenomena in the universe, two types produce large amounts of energy and may provide some insight into string theory: gamma ray bursts (GRBs) and cosmic rays.
Exactly what causes a gamma ray burst is disputed, but it seems to happen when massive objects, such as a pair of neutron stars or a neutron star and a black hole (the most probable theories), collide with each other. These objects orbit around each other for billions of years, but finally collapse together, releasing energy in the most powerful events observed in the universe.
The name gamma ray bursts clearly implies that most of this energy leaves the event in the form of gamma rays, but not all of it does. These objects release bursts of light across a range of different energies (or frequencies — energy and frequency of photons are related).
According to Einstein, all the photons from a single burst should arrive at the same time, because light (regardless of frequency or energy) travels at the same speed. By studying GRBs, it may be possible to tell if this is true.
Calculations based on Amelino-Camelia's work have shown that photons of different energy that have traveled for billions of years could, due to (estimated and possibly over-optimistic) quantum gravity effects at the Planck scale, arrive with differences of about one one-thousandth of a second (0.001 s).
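To see where numbers like this come from, here is a rough order-of-magnitude sketch (added here for illustration, not from the article) of the first-order dispersion estimate used in this kind of work: the lag scales as the photon energy divided by the quantum-gravity energy scale, times the light travel time. It ignores cosmological expansion, and the photon energy and distance below are assumed sample values, so treat the output as order-of-magnitude only.

E_PLANCK_GEV = 1.22e19        # Planck energy in GeV
SECONDS_PER_YEAR = 3.156e7

def time_lag_s(photon_energy_gev, distance_ly, e_qg_gev=E_PLANCK_GEV):
    """First-order (linear) dispersion lag: dt ~ (E / E_QG) * (D / c).

    For a distance given in light years, D/c is simply that many years.
    """
    travel_time_s = distance_ly * SECONDS_PER_YEAR
    return (photon_energy_gev / e_qg_gev) * travel_time_s

# A GeV-scale photon from a burst a few billion light years away:
print(time_lag_s(1.0, 7e9))   # ~0.02 s; softer photons give millisecond-scale lags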
The Fermi Gamma-ray Space Telescope (formerly the Gamma-ray Large Area Space Telescope, or GLAST) was launched in June 2008 as a joint venture between NASA, the U.S. Department of Energy, and French, German, Italian, Japanese, and Swedish government agencies. Fermi is a low-Earth orbit observatory with the precision required to detect differences this small.
So far, there’s no evidence that Fermi has identified Planck scale breakdown of general relativity. To date it’s identified a dozen gamma ray–only pulsars, a phenomenon that had never been observed before Fermi. (Prior to Fermi, pulsars — spinning and highly magnetized neutron stars that emit energy pulses — were believed to emit their energy primarily through radio waves.)
If Fermi (or some other means) does detect a Planck scale breakdown of relativity, then that will only increase the need for a successful theory of quantum gravity, because it will be the first experimental evidence that the theory does break down at these scales. String theorists would then be able to incorporate this knowledge into their theories and models, perhaps narrowing the string theory landscape to regions that are more feasible to work with. | http://www.dummies.com/how-to/content/evidence-of-string-theory-gamma-ray-bursts.html | 13 |
22 | Copyright © Peter Wakefield Sault 2003-2005
All rights reserved worldwide
The shape of the Earth is an oblate spheroid, which for most practical purposes can be approximated as an ellipsoid. Due to the Earth's spin it is wider across the Equator than across the Poles, the difference in diameters amounting to some 27 miles.

The first step in approximating a great circle circumference is to ascertain the minor radius, R. The major radius (a) is always the equatorial radius of the Earth. The great circle tilted 0° is the Equator itself (EW, coloured red in Figure D-1) and is here treated as though it were a true circle. As the angle of tilt increases, the minor radius - which is that from the centre of the Earth to the vertex of the great circle - decreases. The length of the radius is calculated by taking it as an intermediate radius of a meridian (EVN, coloured green in Figure D-1), an ellipse which passes through the North and South Poles and whose major and minor radii are known, being the equatorial (a) and polar (b) radii respectively, using the following formula, where q is the angle of tilt to the Equator (i.e. the dihedral angle between the plane of the equator and that of the tilted great circle):-

R = ab / √(a²·sin²q + b²·cos²q)
The circumference P of the tilted great circle (VW, coloured blue in Figure D-1) is then calculated by substituting R into Ramanujan's second approximation (which, for ellipses comparable in size and eccentricity to great circles of the earth, is accurate to within a Bohr radius):-

P ≈ π(a + R)·[1 + 3h/(10 + √(4 - 3h))], where h = ((a - R)/(a + R))²
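Putting the two formulas together, a short script can reproduce the familiar equatorial and meridional circumferences as sanity checks. This Python sketch builds on the formulas as reconstructed above (the original equation images are missing) and uses WGS84 radii, which may differ slightly from the values the author worked with.

import math

A_EQ = 6378.137   # Earth's equatorial radius, km (WGS84; an assumed value)
B_PO = 6356.752   # Earth's polar radius, km

def minor_radius(q_deg, a=A_EQ, b=B_PO):
    """Radius from Earth's centre to the vertex of a great circle
    tilted q degrees to the equator, treating the meridian as an ellipse."""
    q = math.radians(q_deg)
    return a * b / math.sqrt((a * math.sin(q)) ** 2 + (b * math.cos(q)) ** 2)

def circumference(q_deg, a=A_EQ):
    """Ramanujan's second approximation applied to the tilted great circle."""
    r = minor_radius(q_deg)
    h = ((a - r) / (a + r)) ** 2
    return math.pi * (a + r) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

print(circumference(0.0))    # equator: ~40075 km
print(circumference(90.0))   # meridian: ~40008 km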
Ignoring topographical features, the Moon is an almost perfect sphere. It has a very slight bulge, amounting to no more than a few metres, in the side which faces the Earth. This bulge, which dates from the solidification of the Moon before the features which we see resulting from meteoric bombardment appeared, is sufficient to keep the same side facing towards the Earth at all times. It is a curious accident that the apparent angular diameter of the Moon as viewed from the Earth, about ½°, is almost identical to that of the Sun, allowing the Solar Corona to be viewed at the periphery of the Lunar disk during total eclipses of the Sun. The Moon's mean radius, k, is expressed in units of Earth's equatorial radius.

The Moon's orbit about the Earth is an ellipse of eccentricity 0.0549 inclined at 5°8' to the Ecliptic, the plane of the Earth's orbit. The points where the Moon's orbit crosses the Ecliptic are the nodes, which move westward, taking 18.6 years to go all the way round the Earth. The point where the Moon is nearest the Earth, the perigee, moves eastward, taking 8.8 years for a complete circuit. The movement of the Moon against the stars is, therefore, quite complicated and variable. Nevertheless, the Moon remains within the zodiacal band along the Ecliptic. The highest latitude where the Moon ever passes directly overhead is 28°35', comprising the sum of the inclinations of the terrestrial Equator and the orbit of the Moon, 23°27' and 5°8' respectively, to the Ecliptic.
| http://www.odeion.org/atlantis/appendix-d.html | 13
11 | In geometry, a surface S is ruled (also called a scroll) if through every point of S there is a straight line that lies on S. The most familiar examples are the plane and the curved surface of a cylinder or cone. Other examples are a conical surface with elliptical directrix, the right conoid, the helicoid, and the tangent developable of a smooth curve in space.
A ruled surface can always be described (at least locally) as the set of points swept by a moving straight line. For example, a cone is formed by keeping one point of a line fixed whilst moving another point along a circle.
A surface is doubly ruled if through every one of its points there are two distinct lines that lie on the surface. The hyperbolic paraboloid and the hyperboloid of one sheet are doubly ruled surfaces. The plane is the only surface which contains three distinct lines through each of its points.
The properties of being ruled or doubly ruled are preserved by projective maps, and therefore are concepts of projective geometry. In algebraic geometry ruled surfaces are sometimes considered to be surfaces in affine or projective space over a field, but they are also sometimes considered as abstract algebraic surfaces without an embedding into affine or projective space, in which case "straight line" is understood to mean an affine or projective line.
Ruled surfaces in differential geometry
Parametric representation
The "moving line" view means that a ruled surface has a parametric representation of the form
one obtains a ruled surface that contains the Möbius strip.
Alternatively, a ruled surface can be parametrized as x(t, u) = (1 − u)·c(t) + u·d(t), where c(t) and d(t) are two non-intersecting curves lying on the surface. In particular, when c(t) and d(t) move with constant speed along two skew lines, the surface is a hyperbolic paraboloid, or a piece of a hyperboloid of one sheet.
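As a concrete illustration of such a parametrization (added here; not from the original article), the following Python sketch builds a hyperboloid of one sheet from a circular directrix p(t) and ruling direction r(t), and checks numerically that every generated point satisfies x² + y² − z² = 1. Choosing the mirrored direction (sin t, −cos t, 1) instead gives the second family of rulings, which is why this surface is doubly ruled.

import numpy as np

def ruled_surface(p, r, t, u):
    """Points x(t, u) = p(t) + u * r(t) of a ruled surface."""
    return p(t)[:, None, :] + u[None, :, None] * r(t)[:, None, :]

# Hyperboloid of one sheet x^2 + y^2 - z^2 = 1:
# directrix p(t) on the unit circle, ruling direction tangent plus vertical.
p = lambda t: np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=-1)
r = lambda t: np.stack([-np.sin(t), np.cos(t), np.ones_like(t)], axis=-1)

t = np.linspace(0, 2 * np.pi, 60)
u = np.linspace(-1, 1, 20)
x, y, z = np.moveaxis(ruled_surface(p, r, t, u), -1, 0)
print(np.allclose(x**2 + y**2 - z**2, 1.0))   # True: every point lies on it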
Developable surface
A developable surface is a surface that can be (locally) unrolled onto a flat plane without tearing or stretching it. If a developable surface lies in three-dimensional Euclidean space, and is complete, then it is necessarily ruled, but the converse is not always true. For instance, the cylinder and cone are developable, but the general hyperboloid of one sheet is not. More generally, any developable surface in three dimensions is part of a complete ruled surface, and so itself must be locally ruled. There are developable surfaces embedded in four dimensions which are however not ruled. (Hilbert & Cohn-Vossen 1952, pp. 341–342)
Ruled surfaces in algebraic geometry
In algebraic geometry, ruled surfaces were originally defined as projective surfaces in projective space containing a straight line through any given point. This immediately implies that there is a projective line on the surface through any given point, and this condition is now often used as the definition of a ruled surface: ruled surfaces are defined to be abstract projective surfaces satisfying this condition that there is a projective line through any point. This is equivalent to saying that they are birational to the product of a curve and a projective line. Sometimes a ruled surface is defined to be one satisfying the stronger condition that it has a fibration over a curve with fibers that are projective lines. This excludes the projective plane, which has a projective line through every point but cannot be written as such a fibration.
Ruled surfaces appear in the Enriques classification of projective complex surfaces, because every algebraic surface of Kodaira dimension −∞ is a ruled surface (or a projective plane, if one uses the restrictive definition of ruled surface). Every minimal projective ruled surface other than the projective plane is the projective bundle of a 2-dimensional vector bundle over some curve. The ruled surfaces with base curve of genus 0 are the Hirzebruch surfaces.
Ruled surfaces in architecture
- Hyperbolic paraboloids, such as saddle roofs.
- Hyperboloids of one sheet, such as cooling towers and some trash bins.
See also
- Differential geometry of ruled surfaces
- Rational normal scroll, ruled surface built from two rational normal curves
- Barth, Wolf P.; Hulek, Klaus; Peters, Chris A.M.; Van de Ven, Antonius (2004), Compact Complex Surfaces, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. 4, Springer-Verlag, Berlin, ISBN 978-3-540-00832-3, MR2030225
- Beauville, Arnaud (1996), Complex algebraic surfaces, London Mathematical Society Student Texts 34 (2nd ed.), Cambridge University Press, ISBN 978-0-521-49510-3, MR1406314
- Sharp, John (2008), D-Forms, Tarquin. Models exploring ruled surfaces. Review: Journal of Mathematics and the Arts 3 (2009), 229-230. ISBN 978-1-899618-87-3
- Edge, W. L. (1931), The Theory of Ruled Surfaces, Cambridge, University Press. Review: Bull. Amer. Math. Soc. 37 (1931), 791-793, doi:10.1090/S0002-9904-1931-05248-4
- Hilbert, David; Cohn-Vossen, Stephan (1952), Geometry and the Imagination (2nd ed.), New York: Chelsea, ISBN 978-0-8284-1087-8.
- Iskovskikh, V.A. (2001), "Ruled surface", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4 | http://en.wikipedia.org/wiki/Ruled_surface | 13 |
11 | Molecular Geometry
The specific three dimensional arrangement of atoms in molecules is referred to as molecular geometry. We also define molecular geometry as the positions of the atomic nuclei in a molecule. There are various instrumental techniques such as X-Ray crystallography and other experimental techniques which can be used to tell us where the atoms are located in a molecule. Using advanced techniques, very complicated structures for proteins, enzymes, DNA, and RNA have been determined. Molecular geometry is associated with the chemistry of vision, smell and odors, taste, drug reactions and enzyme controlled reactions to name a few.
Molecular geometry is associated with the specific orientation of bonding atoms. A careful analysis of electron distributions in orbitals will usually result in correct molecular geometry determinations. In addition, the simple writing of Lewis diagrams can also provide important clues for the determination of molecular geometry.
Valence Shell Electron Pair Repulsion (VSEPR) theory
Electron pairs around a central atom arrange themselves so that they can be as far apart as possible from each other. The valence shell is the outermost electron-occupied shell of an atom that holds the electrons involved in bonding. In a covalent bond, a pair of electrons is shared between two atoms. In a polyatomic molecule, several atoms are bonded to a central atom using two or more electron pairs. The repulsion between negatively charged electron pairs in bonds or as lone pairs causes them to spread apart as much as possible.
The idea of "electron pair repulsion can be demonstrated by tying several inflated balloons together at their necks. Each balloon represents an electron pair. The balloons will try to minimize the crowding and will spread as far apart as possible. According to VSEPR theory, molecular geometry can be predicted by starting with the electron pair geometry about the central atom and adding atoms to some or all of the electron pairs. This model produces good agreement with experimental determinations for simple molecules. With this model in mind, the molecular geometry can be determined in a systematic way.
Molecules can then be divided into two groups:
| http://chemwiki.ucdavis.edu/index.php?title=Inorganic_Chemistry/Molecular_Geometry&bc=0 | 13
12 | Mother Nature provided a surprise light show over Russia early on February 14. That's when a major meteor entered Earth's atmosphere. The object was originally 17 meters (55 feet) in diameter. That's as wide as a 5-story building is high. It also weighed a whopping 10,000 metric tons, according to the National Aeronautics and Space Administration (NASA).
The meteor was traveling about 65,000 kilometers (40,000 miles) per hour. It created a brilliant streak as it traveled across the sky for nearly 33 seconds. The meteor then exploded about 20 to 25 kilometers (12 to 15 miles) above Earth’s surface.
When the meteor exploded, it created a flash brighter than the sun. The explosion released nearly 500 kilotons of energy. That’s about 70 times as much energy as was released just a few days earlier when North Korea tested a nuclear bomb.
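The quoted energy can be sanity-checked with the kinetic energy formula KE = ½mv², using the mass and speed given above. This back-of-envelope sketch is added here for illustration; it lands in the same ballpark as the quoted 500 kilotons, with the gap reflecting the rounded inputs.

# Back-of-envelope check of the quoted energy: KE = 1/2 * m * v^2.
mass_kg = 10_000 * 1_000            # 10,000 metric tons
speed_ms = 65_000 * 1000 / 3600     # 65,000 km/h in m/s (~18 km/s)
JOULES_PER_KILOTON = 4.184e12       # energy of 1 kiloton of TNT

ke_joules = 0.5 * mass_kg * speed_ms ** 2
print(ke_joules / JOULES_PER_KILOTON)   # ~390 kt, same order as the ~500 kt quoted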
NASA maintains an orbital debris office in Texas. It tracks anything that might damage satellites or knock them out of orbit. Asteroids and other rocky objects sailing through space are prime candidates. In fact, on the day the Russian meteor hit, NASA and a number of other space centers around the world had been tracking a very different space rock. Experts were watching for an asteroid called 2012 DA14, about 50 meters (164 feet) in diameter. It was due to pass very close to Earth — within 27,000 kilometers (about 16,800 miles) — later on the same day. While the scientists were watching for DA14, the meteor that broke up over Chelyabinsk, Russia, caught them by surprise.
Astronomers rely on telescopes to view such objects approaching from space. But those objects must reflect sunlight for scientists to see them, Margaret Campbell-Brown explained to Science News. She’s an astronomer at Canada’s Western Ontario University in London. The Russian meteor approached Earth from a sunward direction. That means the sun’s reflection would be on the side of the meteor facing away from Earth. So it escaped detection until it began racing through Earth’s atmosphere. At that point, the meteor was impossible to miss. It created a fireball and multiple thunderous sonic booms.
A special microphone network that picks up infrasound — sound below the threshold of human hearing — detected the incoming object long before anyone heard it. That infrasound pinpointed the breakup of the meteor at just 32.5 seconds after it entered Earth's atmosphere.
The meteor’s breakup created shock waves, heard on the ground as sonic booms. The shock waves collapsed walls and shattered windows. Glass went flying. An estimated 1,200 Russians suffered cuts and other injuries.
NASA scientists estimate that a meteorite this big hits Earth about once every 100 years. This was the biggest to hit since a far larger meteorite hit in 1908. It also struck Russia.
Most of the recent meteor disintegrated in the atmosphere. Some fragments certainly survived the fall to Earth. Usually only about 1 to 5 percent of a meteor’s mass is found on the ground in the form of meteorites, according to the Russian Academy of Sciences.
atmosphere The envelope of gases surrounding Earth or another planet.
debris Scattered fragments, typically of something wrecked or destroyed. Space debris includes the wreckage of defunct satellites and spacecraft.
friction The resistance to movement that occurs when two things — solids, gases, liquids or a combination of two of these — are in contact. Friction generally causes a heating, which can damage the surface of the materials rubbing against one another.
asteroid A small, usually rocky celestial body in orbit around the sun.
meteor An asteroid or other small celestial body from outer space that enters Earth’s atmosphere. The friction of the atmosphere causes intense heating that will cause a meteor to at least partially burn up or break apart. As a meteor passes through the atmosphere, it appears as a streak of light.
meteorite The remains of a meteor that reach the ground.
orbit The curved path of an object or spacecraft around a star, planet or moon. | http://www.sciencenewsforkids.org/2013/02/meteor-explodes-over-russia/ | 13 |
18 | Video 1. Statistics As Problem Solving
Consider statistics as a problem-solving process and examine its four components: asking questions, collecting appropriate data, analyzing the data, and interpreting the results. This session investigates the nature of data and its potential sources of variation. Variables, bias, and random sampling are introduced. Go to this unit.
Video 2. Data Organization and Representation
Explore different ways of representing, analyzing, and interpreting data, including line plots, frequency tables, cumulative and relative frequency tables, and bar graphs. Learn how to use intervals to describe variation in data. Learn how to determine and understand the median. Go to this unit.
Video 3. Describing Distributions
Continue learning about organizing and grouping data in different graphs and tables. Learn how to analyze and interpret variation in data by using stem and leaf plots and histograms. Learn about relative and cumulative frequency. Go to this unit.
Video 4. The Five-Number Summary
Investigate various approaches for summarizing variation in data, and learn how dividing data into groups can help provide other types of answers to statistical questions. Understand numerical and graphic representations of the minimum, the maximum, the median, and quartiles. Learn how to create a box plot. Go to this unit.
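For readers who want to try the five-number summary themselves, here is a small Python sketch (added for illustration; it is not part of the course materials). Note that quartile conventions vary slightly between textbooks; this version takes the medians of the lower and upper halves, excluding the overall median for odd-length data.

def five_number_summary(data):
    """Minimum, lower quartile, median, upper quartile, maximum."""
    xs = sorted(data)

    def median(vals):
        n = len(vals)
        mid = n // 2
        return vals[mid] if n % 2 else (vals[mid - 1] + vals[mid]) / 2

    lower = xs[: len(xs) // 2]          # values below the median
    upper = xs[(len(xs) + 1) // 2 :]    # values above the median
    return xs[0], median(lower), median(xs), median(upper), xs[-1]

print(five_number_summary([7, 1, 3, 9, 4, 6, 2]))   # (1, 2, 4, 7, 9)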
Video 5. Variation About the Mean
Explore the concept of the mean and how variation in data can be described relative to the mean. Concepts include fair and unfair allocations, and how to measure variation about the mean. Go to this unit.
Video 6. Designing Experiments
Examine how to collect and compare data from observational and experimental studies, and learn how to set up your own experimental studies. Go to this unit.
Video 7. Bivariate Data and Analysis
Analyze bivariate data and understand the concepts of association and co-variation between two quantitative variables. Explore scatter plots, the least squares line, and modeling linear relationships. Go to this unit.
Video 8. Probability
Investigate some basic concepts of probability and the relationship between statistics and probability. Learn about random events, games of chance, mathematical and experimental probability, tree diagrams, and the binomial probability model. Go to this unit.
Video 9. Random Sampling and Estimation
Learn how to select a random sample and use it to estimate characteristics of an entire population. Learn how to describe variation in estimates, and the effect of sample size on an estimate's accuracy. Go to this unit.
Video 10. Classroom Case Studies, Grades K-2
Explore how the concepts developed in this course can be applied through a case study of a K-2 teacher, Ellen Sabanosh, a former course participant who has adapted her new knowledge to her classroom. Go to this unit.
Video 11. Classroom Case Studies, Grades 3-5 and 6-8
Explore how the concepts developed in this course can be applied through case studies of a grade 3-5 teacher, Suzanne L'Esperance, and a grade 6-8 teacher, Paul Snowden, both former courses participant who have adapted their new knowledge to their classrooms. Go to this unit. | http://www.learner.org/resources/series158.html | 13 |
15 | Communities and governments that work for and support human rights for children provide ethnic and national knowledge and roots for their children. They name their children, and help them acquire a sense of belonging in their family, nation, and world. Through this belonging, their children become invested in the positive development of their family and nation (CRC Articles 7, 8).

Rights: Children have the right to a name; to a nationality.

Responsibilities: Children are responsible for respecting the rights of those who live in or come from other countries; standing up for their own and others' rights to a name, nationality, and other indicators of identity; working toward the positive development of their nation.

* gain a beginning understanding of the
* increase their understanding about their nationality, race, ethnicity, gender, and life role;
* increase respect for their own and others'
* share information with their children that helps them understand their heritage;
* teach and role model, according to their child's evolving capacity to learn, responsibilities pertaining to their name and their nationality;
* explore ways to augment children's self-concept with knowledge about their name and heritage;
* explore the formation of our collective identity.
- Flags of the World chart;
- Native Cultures flag chart;
- Colored paper, scissors, glue (red, white and blue), and markers;
- Rice, scoops, cups and spoons (you might provide a variety of rice, so participants can see the differences);
- Red and blue paint, white paper, paint brushes, star stamp;
- Construction paper American flags with instructions on them (see Parent/Child Interactive Activities, Name and Nation Walk);
- Chart paper and markers;
- Name cards (from early childhood);
- Wee Sing Around The World audiotape;
- Raffi's One Light, One Sun audiotape, "Like Me and You" song;
- Colorful, thin-point markers;
- Extra copies of the Convention on the Rights of the Child.
Greet as usual. Make sure everyone gets a name tag.

Parent/Child Interactive Activities

- 1. FLAGS (CREATIVE EXPRESSION)
The flag from our country symbolizes the nation that we call our homeland.
- Families make flags of the place (country, tribe, area, region) from where their ancestors came.
- Supply charts which show various flags of nations and tribes.

- 2. RICE (SENSORY)
Rice is a food with which a majority of the world's people are familiar.
- Place uncooked rice of several varieties and scoops, etc. into the sensory table. Suggest parents help children in sorting and naming the varieties.
- 3. AMERICAN FLAG (COOPERATIVE ART)
The American flag is the symbol for the United States of America (USA). The 50 stars represent the 50 states. The 13 stripes represent the original 13 colonies.
- Provide a star stamp and red and white paint. Parents and children will make the American flag together by making red stripes and stamping stars onto blue paper in the top left hand corner.

- 4. NAME AND NATION WALK (SMALL AND LARGE MUSCLES)
Provides a vehicle for parent-child discussion about name and nationality. This discussion is preparatory for the parent discussion topic of the day.
- Use flag shapes and write instructions on them. Put the flags around the room. Have parents and children walk around the room, read them, and do the actions requested on the flag.
Name and Nation Walk Preparation: Make flags with some, or all, of the below instructions on them.
- Tell your child your full name, and ask him or her to say their full name.
- Tell your child whom he or she is named after.
- Tell your child the meaning of his or her name.
- Tell what you know about the ethnic origin of your child's name.
- Finger spell your name to your child (use the American Sign Language Finger Spelling Chart). Finger spell your child's name, and encourage children to finger spell their own names.
- Tell your child what your nickname is and how you got it. Tell your child how he or she got their nickname or why they don't have one.
- Tell your child what country your ancestors came from.
- Show your child the flags of all the countries your ancestors came from.
- 5. COLOR NAME (CREATIVE EXPRESSION)
Gives a visual way to celebrate names and the value of each person as an individual. It also reminds children that they are part of this country.
- This activity uses the cue card from Session 3. Fold an 8½ inch piece of paper in half. Then write your child's name above the crease with different colors of glue. Red, white and blue glue are provided. Fold the paper again and pat down, then open the paper and have your child sprinkle glitter to create a mirror image of their name. Using the red, white and blue glue will remind children of the flag of the United States.

- 6. BOOK CORNER (LANGUAGE)
- Everybody Cooks Rice, by Norah Dooley
- A Flag for Our Country
- Families Are Different
- I Hate English, by Ellen Levine
- 1. Transition: The early childhood teacher speaks to each child, and/or touches them on the shoulder, and reminds them that circle time will begin soon. After connecting with each child, the teacher begins a gathering song.
- 2. "I'm happy to see all of you!" Sing a get-acquainted song of your choice, or sing "Shake Hands With Friends and Say Hello" and the "Name Chant."
- 3. "Today, our theme is name and nationality. Let's sing a song to recognize all the children here today."
- 4. Explain: "Your name is special. Even if you know someone with a name like yours, or the same as your name, your parents gave you a name that they thought was just right for who you are. Your name is as precious as a jewel. So is every other person's name. Names are precious and need to be protected. It's very important that no one ever makes fun of someone's name. Later we will talk more about names."
"Parents, as we go around this circle will you shout out your child's complete name? You say the first, middle and last names, and then we will sing this song using their first name only."
- 5. Sing: "If your name is ______, stand up tall." The teacher introduces the "Name Game." Invite everyone to stand up. Invite parents to help their children point to the person being named. It goes like this:
a. The teacher selects a child to begin by singing: "Ann, Ann, look at everyone. Point to Sue and then you're done."
b. After the child points to Sue, she/he sits down, and the teacher continues: "Sue, Sue, look at everyone. Point to Bill and then you're done."
c. Continue in this way until all the children are named. If your group is small enough (eight or less), name parents as well.
- 6. Sing "Shake Hands With Friends" again, and ask that participants say, "Hello, _____ (child's name), I hope that we can be friends today," as they are singing the song. In other words, participants use people's names with their handshake.
- 7. "I'd like to go around the circle one more time and have each child, with help from their parent, tell us which countries you or your ancestors came from. Here in the United States there are people from all over the world. Let's find out which countries are represented in our class."
Begin by stating which country/ies you or your parents, grandparents or great grandparents came from. Then the child to the right or left tells about his or her ancestry, and so on, around the circle. After the last child/parent has shared, thank everyone for sharing.
- 8. Sing "The More We Get Together" (using sign language signs, if possible) and "This Land is Your Land."
- 9. Close with "This Little Light of Mine."

NOTE: Adults may have to help children begin this game. As children get comfortable, they will not be shy.
Separate learning time

Children's Learning Circle Session

- 1. Invite children to the circle with a gathering song.
- 2. Teacher says: "Remember when we talked about names in the big circle today? Let's remember everybody's name again." Go around the circle and, as a group, say everyone's name together.
- 3. Sing: "The Name Chant" or "Everybody Stand up Tall."
- 4. Ask the children if they can remember what country their ancestors came from. Ask the children what country they live in now.
- 5. Share the American flag with the children. Count the stripes and stars together.
- 6. "The American flag is the symbol for our country, the United States of America. Sometimes it is called America, or the USA. Those are different names for the same country. There are fifty stars on our flag. Each star represents, or stands for, a state in our country. The state we live in is _________. There are thirteen stripes on our flag. Each stripe counts for one of the thirteen colonies that were the original states when our country was born."
- 7. Sing: "This Land is Your Land" or "This Little Light of Mine."
- 8. "Now, we have a color flag game to help us learn about the colors in our flag." (From the Hap Palmer record Learning Basic Skills Through Music, AR 514, Vol. 1. Original words and music by Hap Palmer.) Hand out red, blue, green and yellow flags to all the children.
"Let's listen to what this song says and follow the directions. It will tell us to stand up or sit down. Let's all try that now. We will need to listen very carefully. Look at what color your flag is. When you hear your color name, then stand up or sit down according to what the song says."
- 9. Ask for favorite songs from the children and sing them.
- 10. Close the circle with "The More We Get Together."
Parent Education Session 4

Preparation: "Name and nationality." Write this topic title on chart paper or a chalkboard. As parents enter the room, have the Wee Sing Around the World audiotape playing. Write on newsprint, "SIGN IN, PLEASE! Please write your entire name on this newsprint." Provide thin-pointed markers for participants to write their names. As soon as everyone is assembled, turn off the tape.

- 1. Greeting: "Shalom! Bonjour! Buenas Tardes! G'day! Guten Tag! Ciao! Nyob zoo! We are ready to begin. Today our topic is 'Being and Belonging.'"
- 2. Names Group instructions:
a. Say your entire name as it is written on the newsprint, as well as the full name of your child.
b. If you know how to say hello in one of the languages of your family's origin, share it.
c. State something you believe about names.
- 3. Discussion and questions: How/why does your name hold importance to you? What do our names give us? What does our language of origin give us?
- 4. Name art: Some of you did Name Art cards with your child today. Please share your creation and your "name story" with the group, if you have one. For example, you might tell us the significance or meaning of your name, whom you are named after, and so on. You might also tell us about your choice of colors (if you provided them).
- 5. Think about the activities you and your child just worked on together. What did you notice about your child's interest in or reaction to one of the activities (flags, color name art, and so on)?
Parents focus on children's feelings or discussion during interaction time activities. Parents interpret children's responses.
- 6. How does a name relate to identity?
Parents make connections between who they are known as and how they know themselves. Names identify who we are to others.
- 7. Nationality: "We are addressing the 'Right to a name and nationality' during this session. We just talked about our own names and how our names may affect us. Now let's talk about how our nationality impacts our lives. We often take our nationality for granted, rather than recognize how powerful an impact it has on how we see ourselves, how we see each other, and how we see the world. For example, the Pledge of Allegiance is a defining document for us in the United States. Does everyone know it?" If not, recite it for them.
"Does anyone want to share their thoughts about this pledge? . . . How does this pledge describe us? . . . What does it say about our nationality? . . . How do we feel about that?"
Invite open discussion. Remind participants, if necessary, to appreciate each person's perspective.
- 8. Brainstorming activity: Collective identity
a. Write the word "nationality" on the chalkboard and give parents time to reflect on its meaning. Chart their responses.
b. Together identify as many things as possible that we share because we live in the United States. Make a list. When finished say, "This list tells us about our collective identity. Are these things that make you proud that you live in this country?"
c. List things people wish were not part of our collective identity.
- 9. Brainstorming activity: Standing up for one's country
Ways we typically stand up for the country we consider our homeland. (List.)
Ways we listed that we can use while also honoring the rights of people in other countries.
Put a star by the few that meet this criterion. For example, we may stand up for our country by going to war to protect her. However, this would not receive a star because it is not good for other countries. When we stand up for our country through peaceful means we can show our support for our country without showing disrespect for other countries.
"How can we stand up for our country in ways that teach our children about compassion, embracing differences, peacemaking, and so on, and generally role model what we want them to learn?"
"How can we impact our collective identity and make a statement about who we want to be in this country, while standing up for our country?"
- 10. Summary: "Human beings have a basic need to belong. They must know themselves and how they fit into the world. They must know who they are and to whom and what they belong, or of what they are a part. For these reasons, having a name and a nationality are basic human rights. When these rights are honored, children can know themselves and their country. Through developing a deeper understanding of their name and their nationality, they can go beyond blind acceptance of that identity and learn to question it. This questioning is part of our identity in this country.
"Our children, and we, their parents, can make a difference for our homeland by standing up for what is right, knowing that part of our collective identity is honoring liberty and justice for all. In this way we increase our respect for ourselves, and we impact the collective identity in positive ways."
a. What are those things you hope your child values about his or her family or about his or her country?
b. While listening to Raffi's "Like Me and You" song:
Reflect on the music and words to this song.
Record some thoughts about your child's name and your family's national ties. | http://www1.umn.edu/humanrts/edumat/hreduseries/rrr/sess4.html | 13
11 | COMMON AMATEUR ASTRONOMY TERMS
The user is assisted in exploiting the formulas found in this book through usage notes, definitions, and examples provided throughout the Astro Functions and Methods. This sheet lists some common terms and concepts used throughout the work.
· UT is Universal Time, which is the standard time at the prime meridian (0-degrees longitude) running through Greenwich, England. UT times are given on a 24-hour clock. In the Americas a number of hours must be added to local time to calculate UT. In the continental USA the standard corrections are +5 (Eastern), +6 (Central), +7 (Mountain), and +8 (Pacific) hours. Add one hour less when daylight savings is in effect. Note that, if this addition causes the time to pass midnight (exceeds 24-hours), you must increment your calendar date. For instance, Central Standard Time (CST) is 6-hours behind UT; if it were 8:44 PM CST on May 20th you would determine UT as follows: 8:44 PM is 20:44 on the 24-hour clock, and 20:44 + 6:00 = 26:44, which passes midnight, so the result is 02:44 UT on May 21st.
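The same bookkeeping is easy to automate. The following small Python function is an illustration added here (it is not from the original page); it applies the offset and rolls the calendar date when the sum passes midnight.

def to_universal_time(hour_24, minute, utc_offset_hours, day_of_month):
    """Convert a local 24-hour time to UT, rolling the date if needed."""
    total = hour_24 + utc_offset_hours
    day = day_of_month + total // 24
    return total % 24, minute, day

# 8:44 PM CST (+6 hour correction) on May 20th:
hour, minute, day = to_universal_time(20, 44, 6, 20)
print(f"{hour:02d}:{minute:02d} UT on May {day}")   # 02:44 UT on May 21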
· The celestial equator is a circle of reference created by an extension of the Earth's equator into space. For an observer standing on the equator, it would run dead east-west through the zenith -- the highest point in the sky. Observers at the poles would have the celestial equator running along the horizon. The ecliptic is another reference circle, created by using the plane of the Earth's orbit. The path taken by the sun across the sky traces a section of the ecliptic each day. The moon and planets move some degrees north or south of this circle. The ecliptic and celestial equator would be the same circle if the Earth's axis of rotation were perpendicular to its orbit. But, the planet is tilted: so these circles intersect each other at the two equinoxes and form an angle called the obliquity of the ecliptic. The vernal equinox is the intersection point that the sun reaches in spring and is used as the starting point for measuring angular distances along the ecliptic or equator.
· Right ascension (RA) and declination (DEC) form the celestial equatorial system of measure, that uses the vernal equinox and celestial equator as starting points. It is similar to the system of longitude and latitude on the surface of the globe. RA is measured eastward along the celestial equator from the vernal equinox. It is given in units of hours, which correspond to 15 degrees of arc. In this way, 24 hours of RA equal 360 degrees of arc (24 x 15 = 360). These hours are subdivided into minutes and seconds, just like the hours on a clock (see the Mean Solar Day to Sidereal Day function at the end of Basic Conversions on why you can't use your watch to measure off RA). For purposes of calculation these hour:minute:second of position are first converted into degrees of arc. Declination is measured in degrees from the celestial equator (0-degrees) north (+) and south (-) to the celestial poles, which reside at +/-90 degrees of declination and coincide with the rotation axis of the planet.
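For calculation, the hour-minute-second form is converted to degrees exactly as described: one hour of right ascension is 15 degrees of arc. A minimal Python sketch, added here for illustration (the Sirius-like coordinates are just sample inputs):

def ra_to_degrees(hours, minutes, seconds):
    """Right ascension h:m:s to degrees of arc (1 hour = 15 degrees)."""
    return 15.0 * (hours + minutes / 60.0 + seconds / 3600.0)

def dec_to_degrees(degrees, arcmin, arcsec):
    """Declination d:m:s to decimal degrees, keeping the sign."""
    sign = -1.0 if degrees < 0 else 1.0
    return sign * (abs(degrees) + arcmin / 60.0 + arcsec / 3600.0)

print(ra_to_degrees(6, 45, 8.9))     # Sirius-like RA: ~101.29 degrees
print(dec_to_degrees(-16, 42, 58))   # ~-16.716 degrees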
· The next most common positional system encountered by the amateur astronomer is celestial ecliptic. It shares with the RA and DEC system in the use of the vernal equinox as a starting point for positive eastward measurement. Measurement, though, is along or perpendicular to the circle of the ecliptic. The angular distance north or south of this baseline is ecliptic or celestial latitude. It runs up to ±90-degrees ending at the ecliptic poles. Ecliptic or celestial longitude is the angular distance eastward along the ecliptic from the vernal equinox point. Unlike RA and DEC, both measurements are usually given in degrees.
· Gravitational effects on the Earth, mainly from the Sun and Moon, cause the equinox points to shift along the celestial equator. A long term effect, known as precession, causes the celestial poles to rotate around the ecliptic poles in a cycle of 26,000 years. This has the unfortunate effect of allowing celestial coordinates for an object to change over time. Therefore, all such coordinates are given in terms of a date epoch. Currently the standard epoch is known as J2000.0. This is equivalent to noontime on the first day of the year 2000. You may also see epochs for quarter and midyears: 1991.25, 1999.5, etc., as well as instantaneous epochs.
Astro Utilities Electronic Book Copyright © 1999 Pietro Carboni. All rights reserved. | http://www.pietro.org/Astro_Util_StaticDemo/CommonAstronomyTerms.htm | 13 |
495 | This is the print version of Geometry
Part I- Euclidean Geometry
Chapter 1: Points, Lines, Line Segments and Rays
Points and lines are two of the most fundamental concepts in Geometry, but they are also the most difficult to define. We can describe intuitively their characteristics, but there is no set definition for them: they, along with the plane, are the undefined terms of geometry. All other geometric definitions and concepts are built on the undefined ideas of the point, line and plane. Nevertheless, we shall try to define them.
A point is an exact location in space. Points are dimensionless. That is, a point has no width, length, or height. We locate points relative to some arbitrary standard point, often called the "origin". Many physical objects suggest the idea of a point. Examples include the tip of a pencil, the corner of a cube, or a dot on a sheet of paper.
As with a line segment, we specify a line with two points. Starting with the corresponding line segment, we find other line segments that share at least two points with the original line segment. In this way we extend the original line segment indefinitely. The set of all possible line segments findable in this way constitutes a line. A line extends indefinitely in a single dimension. Its length, having no limit, is infinite. Like the line segments that constitute it, it has no width or height. You may specify a line by specifying any two points within the line. For any two points, only one line passes through both points. On the other hand, an unlimited number of lines pass through any single point.
We construct a ray similarly to the way we constructed a line, but we extend the line segment beyond only one of the original two points. A ray extends indefinitely in one direction, but ends at a single point in the other direction. That point is called the end-point of the ray. Note that a line segment has two end-points, a ray one, and a line none.
A point exists in zero dimensions. A line exists in one dimension, and we specify a line with two points. A plane exists in two dimensions. We specify a plane with three points. Any two of the points specify a line. All possible lines that pass through the third point and any point in the line make up a plane. In more obvious language, a plane is a flat surface that extends indefinitely in its two dimensions, length and width. A plane has no height.
Space exists in three dimensions. Space is made up of all possible planes, lines, and points. It extends indefinitely in all directions.
Mathematics can extend space beyond the three dimensions of length, width, and height. We then refer to "normal" space as 3-dimensional space. A 4-dimensional space consists of an infinite number of 3-dimensional spaces. Etc.
[How we label and reference points, lines, and planes.]
Chapter 2: Angles
An angle is the union of two rays with a common endpoint, called the vertex. The angles formed by vertical and horizontal lines are called right angles; lines, segments, or rays that intersect in right angles are said to be perpendicular.
Angles, for our purposes, can be measured in either degrees (from 0 to 360) or radians (from 0 to 2π). An angle's measure can be determined by the length of the arc it maps out on a circle. Since the circumference of a circle is 2π times its radius, a full circle is 2π radians, and a right angle is π/2 radians. In degrees, the full circle is 360 degrees, and so a right angle would be 90 degrees.
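Converting between the two units is just a matter of scaling by π/180. A minimal Python sketch, added here for illustration:

import math

def deg_to_rad(degrees):
    return degrees * math.pi / 180.0

def rad_to_deg(radians):
    return radians * 180.0 / math.pi

print(deg_to_rad(90))            # pi/2 ~ 1.5708: a right angle in radians
print(rad_to_deg(2 * math.pi))   # 360.0: a full circle in degrees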
Angles are named in several ways.
- By naming the vertex of the angle (only if there is only one angle formed at that vertex; the name must be non-ambiguous)
- By naming a point on each side of the angle with the vertex in between.
- By placing a small number on the interior of the angle near the vertex.
Classification of Angles by Degree Measure
- an angle is said to be acute if it measures between 0 and 90 degrees, exclusive.
- an angle is said to be right if it measures 90 degrees.
- notice the small box placed in the corner of a right angle; unless the box is present, the angle is not assumed to be 90 degrees.
- all right angles are congruent
- an angle is said to be obtuse if it measures between 90 and 180 degrees, exclusive.
Special Pairs of Angles
- adjacent angles
- adjacent angles are angles with a common vertex and a common side.
- adjacent angles have no interior points in common.
- complementary angles
- complementary angles are two angles whose sum is 90 degrees.
- complementary angles may or may not be adjacent.
- if two complementary angles are adjacent, then their exterior sides are perpendicular.
- supplementary angles
- two angles are said to be supplementary if their sum is 180 degrees.
- supplementary angles need not be adjacent.
- if supplementary angles are adjacent, then the sides they do not share form a line.
- linear pair
- if a pair of angles is both adjacent and supplementary, they are said to form a linear pair.
- vertical angles
- angles with a common vertex whose sides form opposite rays are called vertical angles.
- vertical angles are congruent.
Side-Side-Side (SSS) (Postulate 12) If three sides of one triangle are congruent to three sides of a second triangle, then the two triangles are congruent.
Side-Angle-Side (SAS) (Postulate 13)
If two sides and the included angle of one triangle are congruent to two sides and the included angle of a second triangle, then the two triangles are congruent.
Angle-Side-Angle (ASA)
If two angles and the included side of one triangle are congruent to two angles and the included side of a second triangle, then the two triangles are congruent.
Angle-Angle-Side (AAS)
If two angles and a non-included side of one triangle are congruent to two angles and the corresponding non-included side of a second triangle, then the two triangles are congruent.
NO - Angle-Side-Side (ASS)
The "ASS" postulate does not work, unlike the other ones. A way that students can remember this is that "ass" is not a nice word, so we don't use it in geometry (since it does not work).
There are two approaches to furthering knowledge: reasoning from known ideas and synthesizing observations. In inductive reasoning you observe the world, and attempt to explain based on your observations. You start with no prior assumptions. Deductive reasoning consists of logical assertions from known facts.
What you need to know
Before one can start to understand logic, and thereby begin to prove geometric theorems, one must first know a few vocabulary words and symbols.
Conditional: a conditional is something which states that one statement implies another. A conditional contains two parts: the condition and the conclusion, where the former implies the latter. A conditional is always in the form "If statement 1, then statement 2." In most mathematical notation, a conditional is often written in the form p ⇒ q, which is read as "If p, then q" where p and q are statements.
Converse: the converse of a logical statement is when the conclusion becomes the condition and vice versa; i.e., p ⇒ q becomes q ⇒ p. For example, the converse of the statement "If someone is a woman, then they are a human" would be "If someone is a human, then they are a woman." The converse of a conditional does not necessarily have the same truth value as the original, though it sometimes does, as will become apparent later.
AND: And is a logical operator which is true only when both statements are true. For example, the statement "Diamond is the hardest substance known to man AND a diamond is a metal" is false. While the former statement is true, the latter is not. However, the statement "Diamond is the hardest substance known to man AND diamonds are made of carbon" would be true, because both parts are true.
OR: If two statements are joined together by "or," then the truth of the "or" statement is dependant upon whether one or both of the statements from which it is composed is true. For example, the statement "Tuesday is the day after Monday OR Thursday is the day after Saturday" would have a truth value of "true," because even though the latter statement is false, the former is true.
NOT: If a statement is preceded by "NOT," then it is evaluating the opposite truth value of that statement. The symbol for "NOT" is "¬". For example, if the statement p is "Elvis is dead," then ¬p would be "Elvis is not dead." The concept of "NOT" can cause some confusion when it relates to statements which contain the word "all." For example, if r is "All men have hair," then ¬r would be "All men do not have hair" or "No men have hair." Do not confuse this with "Not all men have hair" or "Some men have hair." The "NOT" should apply to the verb in the statement: in this case, "have." ¬p can also be written as NOT p or ~p. NOT p may also be referred to as the "negation of p."
Inverse: The inverse of a conditional says that the negation of the condition implies the negation of the conclusion. For example, the inverse of p ⇒ q is ¬p ⇒ ¬q. Like a converse, an inverse does not necessarily have the same truth value as the original conditional.
Biconditional: A biconditional is a conditional where the condition and the conclusion imply one another. A biconditional starts with the words "if and only if." For example, "If and only if p, then q" means both that p implies q and that q implies p.
Premise: A premise is a statement whose truth value is known initially. For example, if one were to say "If today is Thursday, then the cafeteria will serve burritos," and one knew what day it was, then the premise would be "Today is Thursday" or "Today is not Thursday."
⇒: The symbol which denotes a conditional. p ⇒ q is read as "if p, then q."
Iff: Iff is a shortened form of "if and only if." It is read as "if and only if."
⇔: The symbol which denotes a biconditional. p ⇔ q is read as "If and only if p, then q."
∴: The symbol for "therefore." p ∴ q means that one knows that p is true (p is true is the premise), and has logically concluded that q must also be true.
∧: The symbol for "and."
∨: The symbol for "or."
There are a few forms of deductive logic. One of the most common deductive logical arguments is modus ponens, which states that:
- p ⇒ q
- p ∴ q
- (If p, then q)
- (p, therefore q)
An example of modus ponens:
- If I stub my toe, then I will be in pain.
- I stub my toe.
- Therefore, I am in pain.
Another form of deductive logic is modus tollens, which states the following.
- p ⇒ q
- ¬q ∴ ¬p
- (If p, then q)
- (not q, therefore not p)
Modus tollens is just as valid a form of logic as modus ponens. The following is an example which uses modus tollens.
- If today is Thursday, then the cafeteria will be serving burritos.
- The cafeteria is not serving burritos, therefore today is not Thursday.
Another form of deductive logic is known as the If-Then Transitive Property. Simply put, it means that there can be chains of logic where one thing implies another thing. The If-Then Transitive Property states:
- p ⇒ q
- (q ⇒ r) ∴ (p ⇒ r)
- (If p, then q)
- ((If q, then r), therefore (if p, then r))
For example, consider the following chain of if-then statements.
- If today is Thursday, then the cafeteria will be serving burritos.
- If the cafeteria will be serving burritos, then I will be happy.
- Therefore, if today is Thursday, then I will be happy.
Inductive reasoning is a logical argument which does not definitively prove a statement, but rather supports it as probable. Inductive reasoning is used often in life. Polling is an example of the use of inductive reasoning. If one were to poll one thousand people, and 300 of those people selected choice A, then one would infer that 30% of any population might also select choice A. This would be using inductive logic, because it does not definitively prove that 30% of any population would select choice A.
Because of this factor of uncertainty, inductive reasoning should be avoided when possible when attempting to prove geometric properties.
Truth tables are a way that one can display all the possibilities that a logical system may have when given certain premises. The following is a truth table with two premises (p and q), which shows the truth value of some basic logical statements. (NOTE: T = true; F = false)
|p|q|¬p|¬q|p ⇒ q|p ⇔ q|p ∧ q|p ∨ q|
|T|T|F|F|T|T|T|T|
|T|F|F|T|F|F|F|T|
|F|T|T|F|T|F|F|T|
|F|F|T|T|T|T|F|F|
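To make the table concrete, here is one way to generate it in R (the language used elsewhere in this document); the column names are chosen here for readability and are not standard notation:

tt <- expand.grid(p = c(TRUE, FALSE), q = c(TRUE, FALSE))
tt$not_p <- !tt$p                  # ¬p
tt$not_q <- !tt$q                  # ¬q
tt$p_implies_q <- !tt$p | tt$q     # p ⇒ q is equivalent to ¬p ∨ q
tt$p_iff_q <- tt$p == tt$q         # p ⇔ q: true when p and q have the same truth value
tt$p_and_q <- tt$p & tt$q          # p ∧ q
tt$p_or_q <- tt$p | tt$q           # p ∨ q
print(tt)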
Unlike science which has theories, mathematics has a definite notion of proof. Mathematics applies deductive reasoning to create a series of logical statements which show that one thing implies another.
Consider a triangle, which we define as a shape with three vertices joined by three lines. We know that we can arbitrarily pick some point on a page, and make that into a vertex. We repeat that process and pick a second point. Using a ruler, we can connect these two points. We now make a third point, and using the ruler connect it to each of the other points. We have constructed a triangle.
In mathematics we formalize this process into axioms, and carefully lay out the sequence of statements to show what follows. All definitions are clearly defined. In modern mathematics, we are always working within some system where various axioms hold.
The most common form of explicit proof in high school geometry is the two-column proof, which consists of five parts: the given, the proposition, the statement column, the reason column, and the diagram (if one is given).
Example of a Two-Column Proof
Now, suppose a problem tells you to solve for x, given x + 1 = 2, showing all steps made to get to the answer. A proof shows how this is done:
Given: x + 1 = 2
Prove: x = 1
|Statement|Reason|
|x + 1 = 2|Given|
|x + 1 − 1 = 2 − 1|Subtraction property of equality|
|x = 1|Property of subtraction|
We use "Given" as the first reason, because it is "given" to us in the problem.
Written proofs (also known as informal proofs, paragraph proofs, or 'plans for proof') are written in paragraph form. Other than this formatting difference, they are similar to two-column proofs.
Sometimes it is helpful to start with a written proof, before formalizing the proof in two-column form. If you're having trouble putting your proof into two column form, try "talking it out" in a written proof first.
Example of a Written Proof
We are given that x + 1 = 2, so if we subtract one from each side of the equation (x + 1 - 1 = 2 - 1), then we can see that x = 1 by the definition of subtraction.
A flowchart proof, or more simply a flow proof, is a graphical representation of a two-column proof. Each statement and its reason are recorded in a box, and arrows are drawn from one step to another. This method shows how different ideas come together to formulate the proof.
Postulates in geometry are very similar to axioms, self-evident truths, and beliefs in logic, political philosophy and personal decision-making. The five postulates of Euclidean Geometry define the basic rules governing the creation and extension of geometric figures with ruler and compass. Together with the five axioms (or "common notions") and twenty-three definitions at the beginning of Euclid's Elements, they form the basis for the extensive proofs given in this masterful compilation of ancient Greek geometric knowledge. They are as follows:
- A straight line may be drawn from any given point to any other.
- A straight line may be extended to any finite length.
- A circle may be described with any given point as its center and any distance as its radius.
- All right angles are congruent.
- If a straight line intersects two other straight lines, and so makes the two interior angles on one side of it together less than two right angles, then the other straight lines will meet at a point if extended far enough on the side on which the angles are less than two right angles.
Postulate 5, the so-called Parallel Postulate was the source of much annoyance, probably even to Euclid, for being so relatively prolix. Mathematicians have a peculiar sense of aesthetics that values simplicity arising from simplicity, with the long complicated proofs, equations and calculations needed for rigorous certainty done behind the scenes, and to have such a long sentence amidst such other straightforward, intuitive statements seems awkward. As a result, many mathematicians over the centuries have tried to prove the results of the Elements without using the Parallel Postulate, but to no avail. However, in the past two centuries, assorted non-Euclidean geometries have been derived based on using the first four Euclidean postulates together with various negations of the fifth.
Chapter 7. Vertical Angles
Vertical angles are a pair of angles with a common vertex whose sides form opposite rays. An extensively useful fact about vertical angles is that they are congruent. Aside from saying that any pair of vertical angles "obviously" have the same measure by inspection, we can prove this fact with some simple algebra and an observation about supplementary angles. Let two lines intersect at a point, and angles A1 and A2 be a pair of vertical angles thus formed. At the point of intersection, two other angles are also formed, and we'll call either one of them B1 without loss of generality. Since B1 and A1 are supplementary, we can say that the measure of B1 plus the measure of A1 is 180. Similarly, the measure of B1 plus the measure of A2 is 180. Thus the measure of A1 plus the measure of B1 equals the measure of A2 plus the measure of B1, by substitution. Then by subtracting the measure of B1 from each side of this equality, we have that the measure of A1 equals the measure of A2.
Parallel Lines in a Plane
Two coplanar lines are said to be parallel if they never intersect. For any given point on the first line, its distance to the second line is equal to the distance between any other point on the first line and the second line. The common notation for parallel lines is "||" (a double pipe); it is not unusual to see "//" as well. If line m is parallel to line n, we write "m || n". Lines in a plane either coincide, intersect in a point, or are parallel. Controversies surrounding the Parallel Postulate led to the development of non-Euclidean geometries.
Parallel Lines and Special Pairs of Angles
When two (or more) parallel lines are cut by a transversal, the following angle relationships hold:
- corresponding angles are congruent
- alternate interior angles are congruent
- alternate exterior angles are congruent
- same-side interior angles are supplementary
Theorems Involving Parallel Lines
- If a line in a plane is perpendicular to one of two parallel lines, it is perpendicular to the other line as well.
- If a line in a plane is parallel to one of two parallel lines, it is parallel to both parallel lines.
- If three or more parallel lines are intersected by two or more transversals, then they divide the transversals proportionally.
Congruent shapes are the same size with corresponding lengths and angles equal. In other words, they are exactly the same size and shape. They will fit on top of each other perfectly. Therefore if you know the size and shape of one you know the size and shape of the others. For example:
Each of the above shapes is congruent to each other. The only difference is in their orientation, or the way they are rotated. If you traced them onto paper and cut them out, you could see that they fit over each other exactly.
If, on comparing two triangles this way, the angles correspond in size and position but the sides do not, then the triangles are not congruent, since congruence requires corresponding sides to be equal as well.
Similar shapes are like congruent shapes in that they must be the same shape, but they don't have to be the same size. Their corresponding angles are congruent and their corresponding sides are in proportion.
Methods of Determining Congruence
Two triangles are congruent if:
- each pair of corresponding sides is congruent
- two pairs of corresponding angles are congruent and a pair of corresponding sides are congruent
- two pairs of corresponding sides and the angles included between them are congruent
Tips for Proofs
Commonly used prerequisite knowledge in determining the congruence of two triangles includes:
- by the reflexive property, a segment is congruent to itself
- vertical angles are congruent
- when parallel lines are cut by a transversal corresponding angles are congruent
- when parallel lines are cut by a transversal alternate interior angles are congruent
- midpoints and bisectors divide segments and angles into two congruent parts
For two triangles to be similar, all 3 corresponding angles must be congruent, and all three pairs of corresponding sides must be in proportion. Two triangles are similar if...
- Two angles of one triangle are congruent to two angles of the other triangle.
- An acute angle of one right triangle is congruent to an acute angle of another right triangle.
- The two triangles are congruent. Note here that congruency implies similarity.
A quadrilateral is a polygon that has four sides.
Special Types of Quadrilaterals
- A parallelogram is a quadrilateral having two pairs of parallel sides.
- A square, a rhombus, and a rectangle are all examples of parallelograms.
- A rhombus is a quadrilateral of which all four sides are the same length.
- A rectangle is a parallelogram of which all four angles are 90 degrees.
- A square is a quadrilateral of which all four sides are of the same length, and all four angles are 90 degrees.
- A square is a rectangle, a rhombus, and a parallelogram.
- A trapezoid is a quadrilateral which has two parallel sides (U.S.)
- U.S. usage: A trapezium is a quadrilateral which has no parallel sides.
- U.K usage: A trapezium is a quadrilateral with two parallel sides (same as US trapezoid definition).
- A kite is a quadrilateral with two pairs of congruent adjacent sides.
One of the most important properties used in proofs is that the sum of the angles of the quadrilateral is always 360 degrees. This can easily be proven too:
If you draw a random quadrilateral, and one of its diagonals, you'll split it up into two triangles. Given that the sum of the angles of a triangle is 180 degrees, you can sum them up, and it'll give 360 degrees.
A parallelogram is a geometric figure with two pairs of parallel sides. Parallelograms are a special type of quadrilateral. The opposite sides are equal in length and the opposite angles are also equal. The area is equal to the product of any side and the distance between that side and the line containing the opposite side.
Properties of Parallelograms
The following properties are common to all parallelograms (parallelogram, rhombus, rectangle, square)
- both pairs of opposite sides are parallel
- both pairs of opposite sides are congruent
- both pairs of opposite angles are congruent
- the diagonals bisect each other
- A rhombus is a parallelogram with four congruent sides.
- The diagonals of a rhombus are perpendicular.
- Each diagonal of a rhombus bisects two angles of the rhombus.
- A rhombus may or may not be a square.
- A square is a parallelogram with four right angles and four congruent sides.
- A square is both a rectangle and a rhombus and inherits all of their properties.
A Trapezoid (American English) or Trapezium (British English) is a quadrilateral that has two parallel sides and two non-parallel sides.
Some properties of trapezoids:
- The interior angles sum to 360° as in any quadrilateral.
- The parallel sides are unequal.
- Each of the parallel sides is called a base (b) of the trapezoid. The two angles that join one base are called 'base angles'.
- If the two non-parallel sides are equal, the trapezoid is called an isosceles trapezoid.
- In an isosceles trapezoid, each pair of base angles are equal.
- If one pair of base angles of a trapezoid are equal, the trapezoid is isosceles.
- A line segment connecting the midpoints of the non-parallel sides is called the median (m) of the trapezoid.
- The median of a trapezoid is equal to one half the sum of the bases (called b1 and b2).
- A line segment perpendicular to the bases is called an altitude (h) of the trapezoid.
The area (A) of a trapezoid is equal to the product of an altitude and the median: A = mh.
Recall though that the median is half of the sum of the bases: m = (b1 + b2)/2.
Substituting for m, we get: A = ((b1 + b2)/2)h.
A circle is a set of all points in a plane that are equidistant from a single point; that single point is called the centre of the circle, and the distance between any point on the circle and the centre is called the radius of the circle.
A chord is an internal segment of a circle that has both of its endpoints on the circumference of the circle.
- the diameter of a circle is the largest chord possible
A secant of a circle is any line that intersects a circle in two places.
- a secant contains any chord of the circle
A tangent to a circle is a line that intersects a circle in exactly one point, called the point of tangency.
- at the point of tangency the tangent line and the radius of the circle are perpendicular
Chapter 16. Circles/Arcs
An arc is a segment of the perimeter of a given circle. The measure of an arc is given as an angle, which could be in radians or degrees (more on radians later). The exact measure of the arc is determined by the measure of the angle formed when a line is drawn from the center of the circle to each end point. As an example, the circle below has an arc cut out of it with a measure of 30 degrees.
As mentioned before, an arc can be measured in degrees or radians. A radian is merely a different method for measuring an angle. If we take a unit circle (which has a radius of 1 unit), take an arc with a length equal to 1 unit, and draw a line from each endpoint to the center of the circle, the angle formed is equal to 1 radian. This concept is displayed below: in this circle an arc has been cut off by an angle of 1 radian, and therefore the length of the arc is equal to 1 because the radius is 1.
From this definition we can say that on the unit circle a full revolution is equal to 2π radians, because the perimeter of a unit circle is equal to 2π. Another useful property of this definition that will be extremely useful to anyone who studies arcs is that the length of an arc is equal to its measure in radians multiplied by the radius of the circle.
Converting to and from radians is a fairly simple process. Two facts are required to do so: first, a circle is equal to 360 degrees, and it is also equal to 2π radians. Using these two facts we can form the following formula:
360° = 2π radians, thus 1 degree is equal to π/180 radians.
From here we can simply multiply by the number of degrees to convert to radians. For example, if we have 20 degrees and want to convert to radians, then we proceed as follows:
20° × (π/180) = π/9 ≈ 0.349 radians
The same sort of argument can be used to show the formula for getting 1 radian:
2π radians = 360°, thus 1 radian is equal to 180/π ≈ 57.3 degrees.
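As a quick sketch of these conversions in R (the function names below are ours, not from any package):

deg2rad <- function(deg) deg * pi / 180   # 1 degree = pi/180 radians
rad2deg <- function(rad) rad * 180 / pi   # 1 radian = 180/pi degrees
deg2rad(20)       # 0.3490659, i.e. pi/9
rad2deg(1)        # 57.29578
# arc length = radius * angle in radians, as noted above
arc_length <- function(r, theta) r * theta
arc_length(1, 1)  # 1, as on the unit circle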
A tangent is a line in the same plane as a given circle that meets that circle in exactly one point. That point is called the point of tangency. A tangent cannot pass through a circle; a line that intersects the circle in two points is instead a secant. A secant is a line containing a chord.
A common tangent is a line tangent to two circles in the same plane. If the tangent does not intersect the line containing and connecting the centers of the circles, it is an external tangent. If it does, it is an internal tangent.
Two circles are tangent to one another if, in a plane, they are both tangent to the same line at the same point.
Sector of a circle
A sector of a circle can be thought of as a pie piece. In the picture below, a sector of the circle is shaded yellow.
To find the area of a sector, find the area of the whole circle and then multiply by the angle of the sector over 360 degrees.
A more intuitive approach can be used when the sector is half the circle. In this case the area of the sector would just be the area of the circle divided by 2.
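A minimal R sketch of this rule (the function name is illustrative):

sector_area <- function(r, angle_deg) pi * r^2 * (angle_deg / 360)
sector_area(2, 180)   # 6.283185
pi * 2^2 / 2          # same value: the circle's area divided by 2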
- See Angle
Addition Property of Equality
For any real numbers a, b, and c, if a = b, then a + c = b + c.
A figure is an angle if and only if it is composed of two rays which share a common endpoint. Each of these rays (or segments, as the case may be) is known as a side of the angle (for example, in the illustration at right), and the common point is known as the angle's vertex (point B in the illustration). An angle is measured by the amount of rotation between its two sides. The units for angle measure are radians and degrees. Angles may be classified by their degree measure.
- Acute Angle: an angle is an acute angle if and only if it has a measure of less than 90°
- Right Angle: an angle is a right angle if and only if it has a measure of exactly 90°
- Obtuse Angle: an angle is an obtuse angle if and only if it has a measure of greater than 90°
Angle Addition Postulate
If P is in the interior of ∠ABC, then m∠ABP + m∠PBC = m∠ABC.
Center of a circle
Point P is the center of circle C if and only if all points in circle C are equidistant from point P and point P is contained in the same plane as circle C.
A collection of points is said to be a circle with a center at point P and a radius of some distance r if and only if it is the collection of all points which are a distance of r away from point P and are contained by a plane which contains point P.
A polygon is said to be concave if and only if it contains at least one interior angle with a measure greater than 180° and less than 360°.
Two angles formed by a transversal intersecting with two lines are corresponding angles if and only if one is on the inside of the two lines, the other is on the outside of the two lines, and both are on the same side of the transversal.
Corresponding Angles Postulate
If two lines cut by a transversal are parallel, then their corresponding angles are congruent.
Corresponding Parts of Congruent Triangles are Congruent Postulate
The Corresponding Parts of Congruent Triangles are Congruent Postulate (CPCTC) states:
- If ∆ABC ≅ ∆XYZ, then all parts of ∆ABC are congruent to their corresponding parts in ∆XYZ. For example:
- ∠ABC ≅ ∠XYZ
- ∠BCA ≅ ∠YZX
- ∠CAB ≅ ∠ZXY
CPCTC also applies to all other parts of the triangles, such as a triangle's altitude, median, circumcenter, et al.
A line segment is the diameter of a circle if and only if it is a chord of the circle which contains the circle's center.
- See Circle
A collection of points is a line if and only if the collection of points is perfectly straight (aligned), is infinitely long, and is infinitely thin. Between any two points on a line, there exists an infinite number of points which are also contained by the line. Lines are usually written by two points in the line, such as line AB, or
A collection of points is a line segment if and only if it is perfectly straight, is infinitely thin, and has a finite length. A line segment is measured by the shortest distance between the two extreme points on the line segment, known as endpoints. Between any two points on a line segment, there exists an infinite number of points which are also contained by the line segment.
Two lines or line segments are said to be parallel if and only if the lines are contained by the same plane and have no points in common if continued infinitely.
Two planes are said to be parallel if and only if the planes have no points in common when continued infinitely.
Two lines that intersect at a 90° angle.
Given a line ℓ and a point P not on ℓ, there is one and only one line that goes through point P perpendicular to ℓ.
An object is a plane if and only if it is a two-dimensional object which has no thickness or curvature and continues infinitely. A plane can be defined by three points. A plane may be considered to be analogous to a piece of paper.
A point is a zero-dimensional mathematical object representing a location in one or more dimensions. A point has no size; it has only location.
A polygon is a closed plane figure composed of at least 3 straight line segments. Each side intersects another side at each of its endpoints, and no two intersecting sides are collinear.
The radius of a circle is the distance between any given point on the circle and the circle's center.
- See Circle
A ray is a straight collection of points which continues infinitely in one direction. The point at which the ray stops is known as the ray's endpoint. Between any two points on a ray, there exists an infinite number of points which are also contained by the ray.
The points on a line can be matched one to one with the real numbers. The real number that corresponds to a point is the point's coordinate. The distance between two points is the absolute value of the difference between the two coordinates of the two points.
Geometry/Synthetic versus analytic geometry
- Two and Three-Dimensional Geometry and Other Geometric Figures
Perimeter and Arclength
Perimeter of Circle
The circle's perimeter (also called the circumference) can be calculated using the following formula:
C = 2πr
where π ≈ 3.14159 and r is the radius of the circle.
Perimeter of Polygons
The perimeter of a polygon with n sides can be calculated as the sum of its side lengths: P = s1 + s2 + … + sn. For a regular polygon whose n sides each have length s, this reduces to P = ns.
Arclength of Circles
The arclength s of a given circle with radius r can be calculated using
s = rθ
where θ is the angle given in radians.
Arclength of Curves
If a curve in the plane has a parameter form (x(t), y(t)) for a ≤ t ≤ b, then the arclength can be calculated using the following formula:
L = ∫[a, b] √(x′(t)² + y′(t)²) dt
The derivation of this formula can be found by applying differential geometry to infinitely small triangles.
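For curves without a closed-form integral, the arclength can be approximated numerically by summing many small chords. A sketch in R, assuming the curve is given by two vectorized functions x(t) and y(t):

arclength <- function(x, y, a, b, n = 10000) {
  t <- seq(a, b, length.out = n + 1)
  sum(sqrt(diff(x(t))^2 + diff(y(t))^2))   # total length of the small chords
}
# Sanity check on the unit circle: should be close to 2*pi = 6.283185
arclength(function(t) cos(t), function(t) sin(t), 0, 2 * pi)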
Area of Circles
The method for finding the area of a circle is
A = πr²
where π is a constant roughly equal to 3.14159265358979 and r is the radius of the circle: a line drawn from any point on the circle to its center.
Area of Triangles
Three ways of calculating the area inside of a triangle are mentioned here.
If one of the sides of the triangle is chosen as a base, then a height for the triangle and that particular base can be defined. The height is a line segment perpendicular to the base or the line formed by extending the base, whose endpoints are the corner point not on the base and a point on the base or line extending the base. Let B = the length of the side chosen as the base, and let h = the distance between the endpoints of the height segment which is perpendicular to the base. Then the area of the triangle is given by:
A = (1/2)Bh
This method of calculating the area is good if the value of a base and its corresponding height in the triangle is easily determined. This is particularly true if the triangle is a right triangle, and the lengths of the two sides sharing the 90° angle can be determined.
- A = √(s(s − a)(s − b)(s − c)), also known as Heron's Formula
If the lengths of all three sides of a triangle are known, Heron's formula may be used to calculate the area of the triangle. First, the semiperimeter, s, must be calculated by dividing the sum of the lengths of all three sides by 2. For a triangle having side lengths a, b, and c:
s = (a + b + c)/2
Then the triangle's area is given by:
A = √(s(s − a)(s − b)(s − c))
If the triangle is needle-shaped, that is, one of the sides is very much shorter than the other two, then it can be difficult to compute the area because the precision needed is greater than that available in the calculator or computer that is used. In other words, Heron's formula is numerically unstable. Another formula that is much more stable is:
A = (1/4)√((a + (b + c))(c − (a − b))(c + (a − b))(a + (b − c)))
where a, b, and c have been sorted so that a ≥ b ≥ c, and the parentheses are evaluated exactly as written.
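Both versions are easy to compare in R; heron_stable below is a sketch of the sorted, stable formula quoted above:

heron <- function(a, b, c) {
  s <- (a + b + c) / 2                         # semiperimeter
  sqrt(s * (s - a) * (s - b) * (s - c))
}
heron_stable <- function(a, b, c) {
  sides <- sort(c(a, b, c), decreasing = TRUE)  # ensure a >= b >= c
  a <- sides[1]; b <- sides[2]; c <- sides[3]
  0.25 * sqrt((a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c)))
}
heron(3, 4, 5)          # 6
heron_stable(3, 4, 5)   # 6, and far more accurate for needle-shaped triangles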
In a triangle with sides of length a, b, and c and angles A, B, and C opposite them,
Area = (1/2)ab·sin(C)
This formula is true because, in the base-height formula, the height on the side of length a is b·sin(C). It is useful because you don't need to find the height from an angle in a separate step, and it is also used to prove the law of sines (divide all terms in the above equation by abc and you'll get it directly!)
Area of Rectangles
The area calculation of a rectangle is simple and easy to understand. One of the sides is chosen as the base, with a length b. An adjacent side is then the height, with a length h, because in a rectangle the adjacent sides are perpendicular to the side chosen as the base. The rectangle's area is given by:
A = bh
Sometimes, the base length may be referred to as the length of the rectangle, l, and the height as the width of the rectangle, w. Then the area formula becomes:
A = lw
Regardless of the labels used for the sides, it is apparent that the two formulas are equivalent.
Of course, the area of a square with sides having length s would be:
A = s²
Area of Parallelograms
The area of a parallelogram can be determined using the equation for the area of a rectangle. The formula is:
A = bh
A is the area of a parallelogram. b is the base. h is the height.
The height is a perpendicular line segment that connects one of the vertices to its opposite side (the base).
Area of Rhombus
Remember that in a rhombus all sides are equal in length. The area is given by:
A = (1/2)d1d2
where d1 and d2 represent the diagonals.
Area of Trapezoids
The area of a trapezoid is derived from taking the arithmetic mean of its two parallel sides to form a rectangle of equal area:
A = ((b1 + b2)/2)h
where b1 and b2 are the lengths of the two parallel bases and h is the height.
Area of Kites
The area of a kite is based on splitting the kite into four pieces by halving it along each diagonal and using these pieces to form a rectangle of equal area:
A = (1/2)ab
where a and b are the diagonals of the kite.
Alternatively, the kite may be divided into two halves, each of which is a triangle, by the longer of its diagonals, a. The area of each triangle is thus
(1/2)·a·(b/2) = ab/4
where b is the other (shorter) diagonal of the kite. And the total area of the kite (which is composed of two identical such triangles) is
2·(ab/4) = ab/2
which is the same as
A = (1/2)ab
Areas of other Quadrilaterals
The areas of other quadrilaterals are slightly more complex to calculate, but can still be found if the quadrilateral is well-defined. For example, a quadrilateral can be divided into two triangles, or some combination of triangles and rectangles. The areas of the constituent polygons can be found and added up with arithmetic.
Volume is like area expanded out into 3 dimensions. Area deals with only 2 dimensions. For volume we have to consider another dimension. Area can be thought of as how much space some drawing takes up on a flat piece of paper. Volume can be thought of as how much space an object takes up.
|Common equations for volume:|
|A cube: V = s³|s = length of a side|
|A rectangular prism: V = lwh|l = length, w = width, h = height|
|A cylinder (circular prism): V = πr²h|r = radius of circular face, h = height|
|Any prism that has a constant cross sectional area along the height: V = Ah|A = area of the base, h = height|
|A sphere: V = (4/3)πr³|r = radius of sphere; this is the integral of the surface area of a sphere|
|An ellipsoid: V = (4/3)πabc|a, b, c = semi-axes of ellipsoid|
|A pyramid: V = (1/3)Ah|A = area of the base, h = height of pyramid|
|A cone (circular-based pyramid): V = (1/3)πr²h|r = radius of circle at base, h = distance from base to tip|
(The units of volume depend on the units of length - if the lengths are in meters, the volume will be in cubic meters, etc.)
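The table translates directly into small R helpers (the function names are ours):

vol_cube      <- function(s) s^3
vol_box       <- function(l, w, h) l * w * h
vol_cylinder  <- function(r, h) pi * r^2 * h
vol_prism     <- function(A, h) A * h            # constant cross-section A
vol_sphere    <- function(r) (4 / 3) * pi * r^3
vol_ellipsoid <- function(a, b, c) (4 / 3) * pi * a * b * c
vol_pyramid   <- function(A, h) A * h / 3
vol_cone      <- function(r, h) pi * r^2 * h / 3
vol_sphere(1)   # 4.18879 cubic units for a unit sphere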
The volume of any solid whose cross sectional areas are all the same is equal to that cross sectional area times the distance the centroid (the center of gravity in a physical object) would travel through the solid.
If two solids are contained between two parallel planes and every plane parallel to these two planes has equal cross sections through these two solids, then their volumes are equal (this is known as Cavalieri's principle).
A Polygon is a two-dimensional figure, meaning all of the lines in the figure are contained within one plane. They are classified by the number of angles, which is also the number of sides.
One key point to note is that a polygon must have at least three sides. Normally, three- to ten-sided figures are referred to by their names (below), while figures with eleven or more sides are called n-gons, where n is the number of sides. Hence a forty-sided polygon is called a 40-gon.
- Triangle: a polygon with three angles and sides.
- Quadrilateral: a polygon with four angles and sides.
- Pentagon: a polygon with five angles and sides.
- Hexagon: a polygon with six angles and sides.
- Heptagon: a polygon with seven angles and sides.
- Octagon: a polygon with eight angles and sides.
- Nonagon: a polygon with nine angles and sides.
- Decagon: a polygon with ten angles and sides.
Polygons are also classified as convex or concave. A convex polygon has interior angles less than 180 degrees, thus all triangles are convex. If a polygon has at least one internal angle greater than 180 degrees, then it is concave. An easy way to tell if a polygon is concave is if one side can be extended and crosses the interior of the polygon. Concave polygons can be divided into several convex polygons by drawing diagonals. Regular polygons are polygons in which all sides and angles are congruent.
A triangle is a type of polygon having three sides and, therefore, three angles. The triangle is a closed figure formed from three straight line segments joined at their ends. The points at the ends can be called the corners, angles, or vertices of the triangle. Since any given triangle lies completely within a plane, triangles are often treated as two-dimensional geometric figures. As such, a triangle has no volume and, because it is a two-dimensionally closed figure, the flat part of the plane inside the triangle has an area, typically referred to as the area of the triangle. Triangles are always convex polygons.
A triangle must have at least some area, so all three corner points of a triangle cannot lie in the same line. The sum of the lengths of any two sides of a triangle is always greater than the length of the third side. The preceding statement is sometimes called the Triangle Inequality.
Certain types of triangles
Categorized by angle
The sum of the interior angles in a triangle always equals 180°. This means that no more than one of the angles can be 90° or more. All three angles can be less than 90°; then the triangle is called an acute triangle. One of the angles can be 90° and the other two less than 90°; then the triangle is called a right triangle. Finally, one of the angles can be more than 90° and the other two less; then the triangle is called an obtuse triangle.
Categorized by sides
If all three of the sides of a triangle are of different length, then the triangle is called a scalene triangle.
If two of the sides of a triangle are of equal length, then it is called an isosceles triangle. In an isosceles triangle, the angle between the two equal sides can be more than, equal to, or less than 90°. The other two angles are both less than 90°.
If all three sides of a triangle are of equal length, then it is called an equilateral triangle and all three of the interior angles must be 60°, making it equiangular. Because the interior angles are all equal, all equilateral triangles are also the three-sided variety of a regular polygon and they are all similar, but might not be congruent. However, polygons having four or more equal sides might not have equal interior angles, might not be regular polygons, and might not be similar or congruent. Of course, pairs of triangles which are not equilateral might be similar or congruent.
Opposite corners and sides in triangles
If one of the sides of a triangle is chosen, the interior angles of the corners at the side's endpoints can be called adjacent angles. The corner which is not one of these endpoints can be called the corner opposite to the side. The interior angle whose vertex is the opposite corner can be called the angle opposite to the side.
Likewise, if a corner or its angle is chosen, then the two sides sharing an endpoint at that corner can be called adjacent sides. The side not having this corner as one of its two endpoints can be called the side opposite to the corner.
The sides or their lengths of a triangle are typically labeled with lower case letters. The corners or their corresponding angles can be labeled with capital letters. The triangle as a whole can be labeled by a small triangle symbol and its corner points. In a triangle, the largest interior angle is opposite to longest side, and vice versa.
Any triangle can be divided into two right triangles by taking the longest side as a base, and extending a line segment from the opposite corner to a point on the base such that it is perpendicular to the base. Such a line segment would be considered the height or altitude (h) for that particular base (b). The two right triangles resulting from this division would both share the height as one of their sides. The interior angles at the meeting of the height and base would be 90° for each new right triangle. For acute triangles, any of the three sides can act as the base and have a corresponding height. For more information on right triangles, see Right Triangles and Pythagorean Theorem.
Area of Triangles
If the base and height of a triangle are known, then the area of the triangle can be calculated by the formula:
A = (1/2)bh
(A is the symbol for area, b the base, and h the height.)
Ways of calculating the area inside of a triangle are further discussed under Area.
The centroid is constructed by drawing all the medians of the triangle. All three medians intersect at the same point: this crossing point is the centroid. Centroids are always inside a triangle. They are also the centre of gravity of the triangle.
The three angle bisectors of the triangle intersect at a single point, called the incentre. Incentres are always inside the triangle. The three sides are equidistant from the incentre. The incentre is also the centre of the inscribed circle (incircle) of a triangle, or the interior circle which touches all three sides of the triangle.
The circumcentre is the intersection of all three perpendicular bisectors. Unlike the incentre, it is outside the triangle if the triangle is obtuse. Acute triangles always have circumcentres inside, while the circumcentre of a right triangle is the midpoint of the hypotenuse. The vertices of the triangle are equidistant from the circumcentre. The circumcentre is so called because it is the centre of the circumcircle, or the exterior circle which touches all three vertices of the triangle.
The orthocentre is the crossing point of the three altitudes. It is always inside acute triangles, outside obtuse triangles, and at the vertex of the right angle in a right triangle.
Please note that the centres of an equilateral triangle are always the same point.
Right Triangles and Pythagorean Theorem
Right triangles are triangles in which one of the interior angles is 90°. A 90° angle is called a right angle. Right triangles are sometimes called right-angled triangles. The other two interior angles are complementary, i.e. their sum equals 90°. Right triangles have special properties which make it easier to conceptualize and calculate their parameters in many cases.
The side opposite of the right angle is called the hypotenuse. The sides adjacent to the right angle are the legs. When using the Pythagorean Theorem, the hypotenuse or its length is often labeled with a lower case c. The legs (or their lengths) are often labeled a and b.
Either of the legs can be considered a base and the other leg would be considered the height (or altitude), because the right angle automatically makes them perpendicular. If the lengths of both the legs are known, then by setting one of these sides as the base ( b ) and the other as the height ( h ), the area of the right triangle is very easy to calculate using this formula:
This is intuitively logical because another congruent right triangle can be placed against it so that the hypotenuses are the same line segment, forming a rectangle with sides having length b and width h. The area of the rectangle is b × h, so either one of the congruent right triangles forming it has an area equal to half of that rectangle.
Right triangles can be neither equilateral, acute, nor obtuse triangles. Isosceles right triangles have two 45° angles as well as the 90° angle. All isosceles right triangles are similar since corresponding angles in isosceles right triangles are equal. If another triangle can be divided into two right triangles (see Triangle), then the area of the triangle may be able to be determined from the sum of the two constituent right triangles. Also, a generalization of the Pythagorean theorem, the law of cosines (c² = a² + b² − 2ab·cos C), can be used for non-right triangles.
For history regarding the Pythagorean Theorem, see Pythagorean theorem. The Pythagorean Theorem states that:
- In a right triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the other two sides.
Let's take a right triangle as shown here and set c equal to the length of the hypotenuse and set a and b each equal to the lengths of the other two sides. Then the Pythagorean Theorem can be stated as this equation:
a² + b² = c²
Using the Pythagorean Theorem, if the lengths of any two of the sides of a right triangle are known and it is known which side is the hypotenuse, then the length of the third side can be determined from the formula.
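For example, in R (solving for the hypotenuse, or for a leg when the hypotenuse is known):

hypotenuse <- function(a, b) sqrt(a^2 + b^2)
hypotenuse(3, 4)       # 5
other_leg <- function(hyp, a) sqrt(hyp^2 - a^2)   # hyp must be the hypotenuse
other_leg(13, 5)       # 12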
Sine, Cosine, and Tangent for Right Triangles
Sine, Cosine, and Tangent are all functions of an angle, and they are useful in right triangle calculations. For an angle designated as θ, the sine function is abbreviated as sin θ, the cosine function is abbreviated as cos θ, and the tangent function is abbreviated as tan θ. For any angle θ, sin θ, cos θ, and tan θ are each single determined values, and if θ is a known value, sin θ, cos θ, and tan θ can be looked up in a table or found with a calculator. There is a table listing these function values at the end of this section. For an angle between listed values, the sine, cosine, or tangent of that angle can be estimated from the values in the table. Conversely, if a number is known to be the sine, cosine, or tangent of an angle, then such tables can be used in reverse to find (or estimate) the value of a corresponding angle.
These three functions are related to right triangles in the following ways:
In a right triangle,
- the sine of a non-right angle equals the length of the leg opposite that angle divided by the length of the hypotenuse.
- the cosine of a non-right angle equals the length of the leg adjacent to it divided by the length of the hypotenuse.
- the tangent of a non-right angle equals the length of the leg opposite that angle divided by the length of the leg adjacent to it.
For any value of θ where cos θ ≠ 0,
tan θ = sin θ / cos θ
If one considers the diagram representing a right triangle with the two non-right angles θ1 and θ2, and the side lengths a, b, c as shown here (taking a as the leg opposite θ1, b as the leg opposite θ2, and c as the hypotenuse):
For the functions of angle θ1:
sin θ1 = a/c, cos θ1 = b/c, tan θ1 = a/b
Analogously, for the functions of angle θ2:
sin θ2 = b/c, cos θ2 = a/c, tan θ2 = b/a
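These ratios are easy to verify numerically in R for a concrete 3-4-5 triangle (assuming, as above, that side a is opposite θ1):

a <- 3; b <- 4; c <- sqrt(a^2 + b^2)   # c = 5, the hypotenuse
theta1 <- atan(a / b)                  # the angle opposite side a, in radians
sin(theta1); a / c                     # both 0.6
cos(theta1); b / c                     # both 0.8
tan(theta1); a / b                     # both 0.75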
Table of sine, cosine, and tangent for angles θ from 0 to 90°
|θ in degrees|θ in radians|sin θ|cos θ|tan θ|
|0°|0|0|1|0|
|30°|π/6|1/2|√3/2|√3/3|
|45°|π/4|√2/2|√2/2|1|
|60°|π/3|√3/2|1/2|√3|
|90°|π/2|1|0|undefined|
General rules for important angles: the exact values above are worth memorizing.
Polyominoes are shapes made from connecting unit squares together edge to edge; squares joined only at corners or along partial edges are not allowed.
A domino is the shape made from attaching two unit squares so that they share one full edge. The term polyomino is based on the word domino. There is only one possible domino.
A polyomino made from three squares is called a tromino. There are two possible trominoes: straight and L-shaped.
A polyomino made from four squares is called a tetromino. There are five possible combinations and two reflections:
A polyomino made from five squares is called a pentomino. There are twelve possible pentominoes, excluding mirror images and rotations.
Ellipses are sometimes called ovals. An ellipse contains two foci. The sum of the distance from a point on the ellipse to one focus and from that same point to the other focus is constant.
Area Shapes Extended into 3rd Dimension
Geometry/Area Shapes Extended into 3rd Dimension
Area Shapes Extended into 3rd Dimension Linearly to a Line or Point
Geometry/Area Shapes Extended into 3rd Dimension Linearly to a Line or Point
Ellipsoids and Spheres
Geometry/Ellipsoids and Spheres
Suppose you are an astronomer in America. You observe an exciting event (say, a supernova) in the sky and would like to tell your colleagues in Europe about it. Suppose the supernova appeared at your zenith. You can't tell astronomers in Europe to look at their zenith because their zenith points in a different direction. You might tell them which constellation to look in. This might not work, though, because it might be too hard to find the supernova by searching an entire constellation. The best solution would be to give them an exact position by using a coordinate system.
On Earth, you can specify a location using latitude and longitude. This system works by measuring the angles separating the location from two great circles on Earth (namely, the equator and the prime meridian). Coordinate systems in the sky work in the same way.
The equatorial coordinate system is the most commonly used. The equatorial system defines two coordinates: right ascension and declination, based on the axis of the Earth's rotation. The declination is the angle of an object north or south of the celestial equator. Declination on the celestial sphere corresponds to latitude on the Earth. The right ascension of an object is defined by the position of a point on the celestial sphere called the vernal equinox. The further an object is east of the vernal equinox, the greater its right ascension.
A coordinate system is a system designed to establish positions with respect to given reference points. The coordinate system consists of one or more reference points, the styles of measurement (linear measurement or angular measurement) from those reference points, and the directions (or axes) in which those measurements will be taken. In astronomy, various coordinate systems are used to precisely define the locations of astronomical objects.
Latitude and longitude are used to locate a certain position on the Earth's surface. The lines of latitude (horizontal) and the lines of longitude (vertical) make up an invisible grid over the Earth. Lines of latitude are called parallels. Lines of longitude run from the exact point of the north pole to the exact point of the south pole and are called meridians. 0 degrees latitude is the Earth's middle, called the equator. 0 degrees longitude was trickier to define, because there is no natural vertical middle of the Earth. It was finally agreed that the observatory in Greenwich, U.K. would mark 0 degrees longitude, due to its significant role in scientific discoveries and in creating the latitude and longitude system. 0 degrees longitude is called the prime meridian.
Latitude and longitude are measured in degrees. One degree of latitude is about 69 miles. There are sixty minutes (') in a degree and sixty seconds (") in a minute. These tiny units make GPS (Global Positioning System) readings much more exact.
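A small R helper makes the degrees-minutes-seconds arithmetic explicit (the sample coordinates are approximate):

dms2deg <- function(d, m, s) sign(d) * (abs(d) + m / 60 + s / 3600)
dms2deg(51, 28, 38)   # about 51.477 degrees, roughly the latitude of Greenwich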
There are a few main lines of latitude: the Arctic Circle, the Antarctic Circle, the Tropic of Cancer, and the Tropic of Capricorn. The Antarctic Circle is 66.5 degrees south of the equator, and it separates the temperate zone from the Antarctic zone. The Arctic Circle is an exact mirror of it in the north. The Tropic of Cancer separates the tropics from the temperate zone. It is 23.5 degrees north of the equator. It is mirrored in the south by the Tropic of Capricorn.
Horizontal coordinate system
One of the simplest ways of placing a star on the night sky is the coordinate system based on altitude or azimuth, thus called the Alt-Az or horizontal coordinate system. The reference circles for this system are the horizon and the celestial meridian, both of which may be most easily graphed for a given location using the celestial sphere.
In simplest terms, the altitude is the angle made from the position of the celestial object (e.g. star) to the point nearest it on the horizon. The azimuth is the angle from the northernmost point of the horizon (which is also its intersection with the celestial meridian) to the point on the horizon nearest the celestial object. Usually azimuth is measured eastwards from due north. So east has az=90°, south has az=180°, west has az=270° and north has az=360° (or 0°). An object's altitude and azimuth change as the earth rotates.
Equatorial coordinate system
The equatorial coordinate system is another system that uses two angles to place an object on the sky: right ascension and declination.
Ecliptic coordinate system
The ecliptic coordinate system is based on the ecliptic plane, i.e., the plane which contains our Sun and Earth's average orbit around it, which is tilted at 23°26' from the plane of Earth's equator. The great circle at which this plane intersects the celestial sphere is the ecliptic, and one of the coordinates used in the ecliptic coordinate system, the ecliptic latitude, describes how far an object is to ecliptic north or to ecliptic south of this circle. On this circle lies the point of the vernal equinox (also called the first point of Aries); ecliptic longitude is measured as the angle of an object relative to this point to ecliptic east. Ecliptic latitude is generally indicated by φ, whereas ecliptic longitude is usually indicated by λ.
Galactic coordinate system
As a member of the Milky Way Galaxy, we have a clear view of the Milky Way from Earth. Since we are inside the Milky Way, we don't see the galaxy's spiral arms, central bulge and so forth directly as we do for other galaxies. Instead, the Milky Way completely encircles us. We see the Milky Way as a band of faint starlight forming a ring around us on the celestial sphere. The disk of the galaxy forms this ring, and the bulge forms a bright patch in the ring. You can easily see the Milky Way's faint band from a dark, rural location.
Our galaxy defines another useful coordinate system — the galactic coordinate system. This system works just like the others we've discussed. It also uses two coordinates to specify the position of an object on the celestial sphere. The galactic coordinate system first defines a galactic latitude, the angle an object makes with the galactic equator. The galactic equator has been selected to run through the center of the Milky Way's band. The second coordinate is galactic longitude, which is the angular separation of the object from the galaxy's "prime meridian," the great circle that passes through the Galactic center and the galactic poles. The galactic coordinate system is useful for describing an object's position with respect to the galaxy's center. For example, if an object has high galactic latitude, you might expect it to be less obstructed by interstellar dust.
Transformations between coordinate systems
One can use the principles of spherical trigonometry as applied to triangles on the celestial sphere to derive formulas for transforming coordinates in one system to those in another. These formulas generally rely on the spherical law of cosines, known also as the cosine rule for sides. By substituting various angles on the celestial sphere for the angles in the law of cosines and by thereafter applying basic trigonometric identities, most of the formulas necessary for coordinate transformations can be found. The law of cosines is stated thus:
cos(c) = cos(a)·cos(b) + sin(a)·sin(b)·cos(C)
where a, b, and c are the sides (arcs) of a spherical triangle and C is the angle opposite side c.
To transform from horizontal to equatorial coordinates, the relevant formulas are as follows (taking azimuth measured eastward from north, and writing HA = LST − RA for the hour angle):
sin(Dec) = sin(Alt)·sin(Lat) + cos(Alt)·cos(Lat)·cos(Az)
cos(HA) = (sin(Alt) − sin(Dec)·sin(Lat)) / (cos(Dec)·cos(Lat))
where RA is the right ascension, Dec is the declination, LST is the local sidereal time, Alt is the altitude, Az is the azimuth, and Lat is the observer's latitude. Using the same symbols and formulas, one can also derive formulas to transform from equatorial to horizontal coordinates:
sin(Alt) = sin(Dec)·sin(Lat) + cos(Dec)·cos(Lat)·cos(HA)
cos(Az) = (sin(Dec) − sin(Alt)·sin(Lat)) / (cos(Alt)·cos(Lat))
Transformation from equatorial to ecliptic coordinate systems can similarly be accomplished using the following formulas:
sin(φ) = sin(Dec)·cos(ε) − cos(Dec)·sin(ε)·sin(RA)
tan(λ) = (sin(RA)·cos(ε) + tan(Dec)·sin(ε)) / cos(RA)
where RA is the right ascension, Dec is the declination, φ is the ecliptic latitude, λ is the ecliptic longitude, and ε is the tilt of Earth's axis relative to the ecliptic plane. Again, using the same formulas and symbols, new formulas for transforming ecliptic to equatorial coordinate systems can be found:
sin(Dec) = sin(φ)·cos(ε) + cos(φ)·sin(ε)·sin(λ)
tan(RA) = (sin(λ)·cos(ε) − tan(φ)·sin(ε)) / cos(λ)
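As an illustration, here is a minimal R sketch of the horizontal-to-equatorial transformation, assuming azimuth is measured eastward from north as described above; angles are in degrees, LST and RA in hours. The conversion helpers are redefined here so the sketch is self-contained:

deg2rad <- function(x) x * pi / 180
rad2deg <- function(x) x * 180 / pi
hor2eq <- function(alt, az, lat, lst) {
  alt <- deg2rad(alt); az <- deg2rad(az); lat <- deg2rad(lat)
  dec <- asin(sin(alt) * sin(lat) + cos(alt) * cos(lat) * cos(az))
  ha  <- atan2(-cos(alt) * sin(az),
               sin(alt) * cos(lat) - cos(alt) * sin(lat) * cos(az))
  list(dec = rad2deg(dec), ra = (lst - rad2deg(ha) / 15) %% 24)
}
hor2eq(alt = 90, az = 0, lat = 40, lst = 12)   # at the zenith: dec = 40, ra = 12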
- Traditional Geometry:
A topological space is a set X together with a collection C of subsets of X such that both the empty set and X are contained in C, and such that the union of any subcollection of sets in C and the intersection of any finite subcollection of sets in C are also contained within C. The sets in C are called open sets. Their complements relative to X are called closed sets.
Given two topological spaces, X and Y, a map f from X to Y is continuous if for every open set U of Y, f−1(U) is an open set of X.
Hyperbolic and Elliptic Geometry
There are precisely three different classes of three-dimensional constant-curvature geometry: Euclidean, hyperbolic and elliptic geometry. The three geometries are all built on the same first four axioms, but each has a unique version of the fifth axiom, also known as the parallel postulate. The 1868 Essay on an Interpretation of Non-Euclidean Geometry by Eugenio Beltrami (1835 - 1900) proved the logical consistency of the two Non-Euclidean geometries, hyperbolic and elliptic.
The Parallel Postulate
The parallel postulate is as follows for the corresponding geometries.
Euclidean geometry: Playfair's version: "Given a line l and a point P not on l, there exists a unique line m through P that is parallel to l." Euclid's version: "Suppose that a line l meets two other lines m and n so that the sum of the interior angles on one side of l is less than 180°. Then m and n intersect in a point on that side of l." These two versions are equivalent; though Playfair's may be easier to conceive, Euclid's is often useful for proofs.
Hyperbolic geometry: Given an arbitrary infinite line l and any point P not on l, there exist two or more distinct lines which pass through P and are parallel to l.
Elliptic geometry: Given an arbitrary infinite line l and any point P not on l, there does not exist a line which passes through P and is parallel to l.
Hyperbolic geometry is also known as saddle geometry or Lobachevskian geometry. It differs in many ways to Euclidean geometry, often leading to quite counter-intuitive results. Some of these remarkable consequences of this geometry's unique fifth postulate include:
1. The sum of the three interior angles in a triangle is strictly less than 180°. Moreover, the angle sums of two distinct triangles are not necessarily the same.
2. Two triangles with the same interior angles have the same area.
Models of Hyperbolic Space
The following are four of the most common models used to describe hyperbolic space.
1. The Poincaré Disc Model. Also known as the conformal disc model. In it, the hyperbolic plane is represented by the interior of a circle, and lines are represented by arcs of circles that are orthogonal to the boundary circle and by diameters of the boundary circle. Preserves hyperbolic angles.
2. The Klein Model. Also known as the Beltrami-Klein model or projective disc model. In it, the hyperbolic plane is represented by the interior of a circle, and lines are represented by chords of the circle. This model gives a misleading visual representation of the magnitude of angles.
3. The Poincaré Half-Plane Model. The hyperbolic plane is represented by one-half of the Euclidean plane, as defined by a given Euclidean line l, where l is not considered part of the hyperbolic space. Lines are represented by half-circles orthogonal to l or rays perpendicular to l. Preserves hyperbolic angles.
4. The Lorentz Model. Spheres in Lorentzian four-space. The hyperbolic plane is represented by a two-dimensional hyperboloid of revolution embedded in three-dimensional Minkowski space.
Based on this geometry's definition of the fifth axiom, what does parallel mean? The following definitions are made for this geometry. If a line l and a line m do not intersect in the hyperbolic plane, but intersect at the plane's boundary of infinity, then l and m are said to be parallel. If a line p and a line q neither intersect in the hyperbolic plane nor at the boundary at infinity, then p and q are said to be ultraparallel.
The Ultraparallel Theorem
For any two lines m and n in the hyperbolic plane such that m and n are ultraparallel, there exists a unique line l that is perpendicular to both m and n.
Elliptic geometry differs in many ways to Euclidean geometry, often leading to quite counter-intuitive results. For example, directly from this geometry's fifth axiom we have that there exist no parallel lines. Some of the other remarkable consequences of the parallel postulate include: The sum of the three interior angles in a triangle is strictly greater than 180°.
Models of Elliptic Space
Spherical geometry gives us perhaps the simplest model of elliptic geometry. Points are represented by points on the sphere (with antipodal points identified). Lines are represented by great circles on the sphere.
- Euclid's First Four Postulates
- Euclid's Fifth Postulate
- Incidence Geometry
- Projective and Affine Planes (necessary?)
- Axioms of Betweenness
- Pasch and Crossbar
- Axioms of Congruence
- Continuity (necessary?)
- Hilbert Planes
- Neutral Geometry
- Modern geometry
- An Alternative Way and Alternative Geometric Means of Calculating the Area of a Circle
Geometry/An Alternative Way and Alternative Geometric Means of Calculating the Area of a Circle | http://en.m.wikibooks.org/wiki/Geometry/Print_version | 13 |
19 | A theorem generally has a set-up - a number of conditions, which may be listed in the theorem or described beforehand. Then it has a conclusion - a mathematical statement which is true under the given set-up. The proof, though necessary to the statement's classification as a theorem, is not considered part of the theorem.
In general mathematics a statement must be interesting or important in some way to be called a theorem. Less important statements are called:
- lemma: a statement that forms part of the proof of a larger theorem. Of course, the distinction between theorems and lemmas is rather arbitrary, since one mathematician's major result is another's minor claim. Gauss' Lemma and Zorn's Lemma, for example, are interesting enough per se for some authors to stop at the nominal lemma without going on to use that result in any "major" theorem.
- corollary: a statement which follows immediately or very simply from a theorem. A proposition A is a corollary of a proposition or theorem B if A can be deduced quickly and easily from B.
- proposition: a result not associated with any particular theorem.
- claim: a very minor, but necessary or interesting result, which may be part of the proof of another statement. Despite the name, claims are proven.
- remark: similar to claim. Probably presented without proof, which is assumed to be obvious.
As noted above, a theorem requires some sort of logical framework; this will consist of a basic set of axioms (see axiomatic system), as well as a process of inference, which allows one to derive new theorems from axioms and other theorems that have been derived earlier. In propositional logic, any proven statement is called a theorem.
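As a toy illustration (mine, not from the original article): take the axioms p and p → q, with modus ponens as the only rule of inference. Then
1. p (axiom)
2. p → q (axiom)
3. q (modus ponens applied to 1 and 2)
is a derivation, so q is a theorem of this small system, while a statement such as q → p is not derivable from these axioms.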
- See mathematics for a list of famous theorems and conjectures.
- Gödel's incompleteness theorem | http://www.encyclopedia4u.com/t/theorem.html | 13 |
239 | The Origin of Asteroids
by Dr. Walt Brown
(This article has been reproduced with permission from the Center for Scientific Creation. The original article can be found here.)
NOTE - In order to fully understand the content of this article (and its companion article The Origin of Comets), you should read the book, In the Beginning by Dr. Walt Brown. This book fully explains Dr. Brown’s Hydroplate Theory, which is the foundation upon which this article is written. In fact, this “article” is actually a chapter in the book, In the Beginning. Members of the 4th Day Alliance can download the complete PDF copy of this chapter by clicking here.
Figure 156: Asteroid Ida and Its Moon, Dactyl. In 1993, the Galileo spacecraft, heading toward Jupiter, took this picture 2,000 miles from asteroid Ida. To the surprise of most, Ida had a moon (about 1 mile in diameter) orbiting 60 miles away! Both Ida and Dactyl are composed of earthlike rock. We now know of 68 other asteroids that have moons.1 According to the laws of orbital mechanics (described in the preceding chapter), capturing a moon in space is unbelievably difficult—unless both the asteroid and a nearby potential moon had very similar speeds and directions and unless gases surrounded the asteroid during capture. If so, the asteroid, its moon, and each gas molecule were probably coming from the same place and were launched at about the same time. Within a million years, passing bodies would have stripped the moons away, so these asteroid-moon captures must have been recent.
From a distance, large asteroids look like big rocks. However, many show, by their low density, that they contain either much empty space or something light, such as water ice.2 Also, the best close-up pictures of an asteroid show millions of smaller rocks on its surface. Therefore, asteroids are flying rock piles held together by gravity. Ida, about 35 miles long, does not have enough gravity to squeeze itself into a spherical shape.
SUMMARY: The fountains of the great deep launched rocks as well as muddy water. As rocks moved farther from Earth, Earth’s gravity became less significant to them, and the gravity of nearby rocks became increasingly significant. Consequently, many rocks, assisted by their mutual gravity and surrounding clouds of water vapor, merged to become asteroids. Isolated rocks in space are meteoroids. Drag forces caused by water vapor and thrust forces produced by the radiometer effect concentrated asteroids in what is now the asteroid belt. All the so-called “mavericks of the solar system” (asteroids, meteoroids, and comets) resulted from the explosive events at the beginning of the flood.
Asteroids, also called minor planets, are rocky bodies orbiting the Sun. The orbits of most asteroids lie between those of Mars and Jupiter, a region called the asteroid belt. The largest asteroid, Ceres, is almost 600 miles in diameter and has about one-third the volume of all other asteroids combined. Orbits of almost 30,000 asteroids have been calculated. Many more asteroids have been detected, some less than 20 feet in diameter. A few that cross the Earth’s orbit would do great damage if they ever collided with Earth.
Two explanations are given for the origin of asteroids: (1) they were produced by an exploded planet, and (2) a planet failed to evolve completely. Experts recognize the problems with each explanation and are puzzled. The hydroplate theory offers a simple and complete—but quite different—solution that also answers other questions.
Meteorites, Meteors, and Meteoroids
In space, solid bodies smaller than an asteroid but larger than a molecule are called “meteoroids.” They are renamed “meteors” as they travel through Earth’s atmosphere, and “meteorites” if they hit the ground.
Exploded-Planet Explanation. Smaller asteroids are more numerous than larger asteroids, a pattern typical of fragmented bodies. Seeing this pattern led to the early belief that asteroids are remains of an exploded planet. Later, scientists realized that all the fragments combined would not make up one small planet.3 Besides, too much energy is needed to explode and scatter even the smallest planet.
Failed-Planet Explanation. The most popular explanation today for asteroids is that they are bodies that did not merge to become a planet. Never explained is how, in nearly empty space, matter merged to become these rocky bodies in the first place,4 why rocky bodies started to form a planet but stopped,5 or why it happened only between the orbits of Mars and Jupiter. Also, because only vague explanations have been given for how planets formed, any claim to understand how one planet failed to form lacks credibility. [In general, orbiting rocks do not merge to become either planets or asteroids. Special conditions are required, as explained on page 267 and Endnote 23 on page 288.] Today, collisions and near collisions fragment and scatter asteroids, just the opposite of this “failed-planet explanation.” In fact, during the 4,600,000,000 years evolutionists say asteroids have existed, asteroids would have had so many collisions that they should be much more fragmented than they are today.6
Hydroplate Explanation. Asteroids are composed of rocks expelled from Earth. The size distribution of asteroids does show that at least part of a planet fragmented. Although an energy source is not available to explode and disperse an entire Earth-size planet, the eruption of so much supercritical water from the subterranean chambers could have launched one 2,300th of the Earth—the mass of all asteroids combined. Astronomers have tried to describe the exploded planet, not realizing they were standing on the remaining 99.95% of it—too close to see it.7
As flood waters escaped from the subterranean chambers, pillars, forced to carry more and more of the weight of the overlying crust, were crushed. Also, the almost 10-mile-high walls of the rupture were unstable, because rock is not strong enough to support a cliff more than 5 miles high. As lower portions of the walls were crushed, large blocks8 were swept up and launched by the jetting fountains. Unsupported rock in the top 5 miles then fragmented. The smaller the rock, the faster it accelerated and the farther it went, just as a rapidly flowing stream carries smaller dirt particles faster and farther.
Water droplets in the fountains partially evaporated and quickly froze. Large rocks had large spheres of influence which grew as the rocks traveled away from Earth. Larger rocks became “seeds” around which other rocks and ice collected as spheres of influence expanded. Because of all the evaporated water vapor and the resulting aerobraking, even more mass concentrated around the “seeds.” Clumps of rocks became asteroids.
Question 1: Why did some clumps of rocks and ice in space become asteroids and others become comets?
Imagine living in a part of the world where heavy frost settled each night, but the Sun shone daily. After many decades, would the countryside be buried in hundreds of feet of frost?
The answer depends on several things besides the obvious need for a large source of water. If dark rocks initially covered the ground, the Sun would heat them during the day, so frost from the previous night would tend to evaporate. However, if the sunlight was dim or the frost was thick (thereby reflecting more sunlight during the day), little frost would evaporate. More frost would accumulate the next night. Frost thickness would increase every 24 hours.
Now imagine living on a newly formed asteroid. Its spin would give you day-night cycles. After sunset, surface temperatures would plummet toward nearly absolute zero (-460°F), because asteroids do not have enough gravity to hold an atmosphere for long. With little atmosphere to insulate the asteroid, the day’s heat would quickly radiate, unimpeded, into outer space. Conversely, when the Sun rose, its rays would have little atmosphere to warm, so temperatures at the asteroid’s surface would rise rapidly.
As the fountains of the great deep launched rocks and water droplets, evaporation in space dispersed an “ocean” of water molecules and other gases in the inner solar system. Gas molecules that struck the cold side of your spinning asteroid would become frost.9 Sunlight would usually be dim on rocks in larger, more elongated orbits. Therefore, little frost would evaporate during the day, and the frost’s thickness would increase. Your “world” would become a comet. However, if your “world” orbited relatively near the Sun, its rays would evaporate each night’s frost, so your “world” would remain an asteroid.
Heavier rocks could not be launched with as much velocity as smaller particles (dirt, water droplets, and smaller rocks). The heavier rocks merged to become asteroids, while the smaller particles, primarily water, merged to become comets, which generally have larger orbits. No “sharp line” separates asteroids and comets.
PREDICTION 33: Asteroids are rock piles, often with ice acting as a weak “glue” inside. Large rocks that began the capture process are nearer the centers of asteroids. Comets, which are primarily ice, have rocks in their cores. Four years after this prediction was published in 2001 (In the Beginning, 7th edition, page 220), measurements of the largest asteroid, Ceres, found that it does indeed have a dense, rocky core and primarily a water-ice mantle.10
Question 2: Wasn’t asteroid Eros found to be primarily a large, solid rock?
A pile of dry sand here on Earth cannot maintain a slope greater than about 30 degrees. If it were steeper, the sand grains would roll downhill. Likewise, a pile of dry pebbles or rocks on an asteroid cannot have a slope exceeding about 30 degrees. However, 4% of Eros’ surface exceeds this slope, so some scientists concluded that much of Eros must be a large, solid rock. This conclusion overlooks the possibility that ice is present between some rocks and acts as a weak glue—as predicted above. Ice in asteroids would also explain their low density. Endnote 8 gives another reason why asteroids are probably flying rock piles.
Question 3: Objects launched from Earth should travel in elliptical, cometlike orbits. How could rocky bodies launched from Earth become concentrated in almost circular orbits between Mars and Jupiter?
Gases, such as water vapor and its components,11 were abundant in the inner solar system for many years after the flood. Hot gas molecules striking each asteroid’s hot side were repelled with great force. This jetting action was like air rapidly escaping from a balloon, applying a thrust in a direction opposite to the escaping gas.12 Cold molecules striking each asteroid’s cold side produced less jetting. This thrusting, efficiently powered by solar energy, pushed asteroids outward, away from the sun, concentrating them between the orbits of Mars and Jupiter.13 [See Figures 157 and 158.]
Figure 157: Thrust and Drag Acted on Asteroids (Sun, asteroid, gas molecules, and orbit are not to scale.) The fountains of the great deep launched rocks and muddy water from Earth. The larger rocks, assisted by water vapor and other gases within the spheres of influence of these rocks, captured other rocks and ice particles. Those growing bodies that were primarily rocks became asteroids.
The Sun heats an asteroid’s near side, while the far side radiates its heat into cold outer space. Therefore, large temperature differences exist on opposite sides of each rocky, orbiting body. The slower the body spins, the darker the body,14 and the closer it is to the Sun, the greater the temperature difference. (For example, temperatures on the sunny side of our Moon reach a searing 260°F, while on the dark side, temperatures can drop to a frigid -280°F.) Also, gas molecules (small blue circles) between the Sun and asteroid, especially those coming from very near the Sun, are hotter and faster than those on the far side of an asteroid. Hot gas molecules hitting the hot side of an asteroid bounce off with much higher velocity and momentum than cold gas molecules bouncing off the cold side. Those impacts slowly expanded asteroid orbits until too little gas remained in the inner solar system to provide much thrust. The closer an asteroid was to the Sun, the greater the outward thrust. Gas molecules, densely concentrated near Earth’s orbit, created a drag on asteroids. My computer simulations have shown how gas, throughout the inner solar system for years after the flood, herded asteroids into a tight region near Earth’s orbital plane—an asteroid belt.15 Thrust primarily expanded the orbits. Drag circularized orbits and reduced their angles of inclination.
Figure 158: The Radiometer Effect. This well-known novelty, called a radiometer, demonstrates the unusual thrust that pushed asteroids into their present orbits. Sunlight warms the dark side of each vane more than the light side. The partial vacuum inside the bulb approaches that found in outer space, so gas molecules travel relatively long distances before striking other molecules. Gas molecules bounce off the hotter, black side with greater velocity than off the colder, white side. This turns the vanes away from the dark side.
The black side also radiates heat faster when it is warmer than its surroundings. This can be demonstrated by briefly placing the radiometer in a freezer. There the black side cools faster, making the white side warmer than the black, so the vanes turn away from the white side. In summary, the black side gains heat faster when in a hot environment and loses heat faster when in a cold environment. Higher gas pressure always pushes on the warmer side.
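The caption's claim that hot-side molecules rebound with more momentum can be illustrated with basic kinetic theory (a sketch of mine, not the author's calculation; it uses the lunar temperature extremes quoted in Figure 157 and assumes water vapor as the gas):

import math

k = 1.381e-23            # Boltzmann constant, J/K
m_h2o = 2.99e-26         # mass of one water molecule, kg

def mean_speed(T):
    # mean speed of a Maxwell-Boltzmann gas: sqrt(8kT / (pi * m))
    return math.sqrt(8 * k * T / (math.pi * m_h2o))

T_hot = (260 - 32) * 5 / 9 + 273.15     # 260 F, about 400 K
T_cold = (-280 - 32) * 5 / 9 + 273.15   # -280 F, about 100 K

print(mean_speed(T_hot))                       # ~690 m/s
print(mean_speed(T_cold))                      # ~340 m/s
print(mean_speed(T_hot) / mean_speed(T_cold))  # ~2x momentum per rebounding molecule

Because mean speed scales as the square root of temperature, a 4:1 temperature ratio gives roughly twice the momentum transfer per molecule on the hot face, producing a net push away from it.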
Question 4: Could the radiometer effect push asteroids 1–2 astronomical units (AU) farther from the Sun?
Each asteroid began as a swarm of particles (rocks, ice, and gas molecules) orbiting within a large sphere of influence. Because a swarm’s volume was quite large, its spin was much slower than it would be as it shrank to become an asteroid—perhaps orders of magnitude slower. The slow spin produced extreme temperature differences between the hot and cold sides. The cold side would have been so cold that gas molecules striking it would tend to stick, thereby adding “fuel” to the developing asteroid. Because the swarm’s volume was large, the radiometer pressure acted over a large area and produced a large thrust. The swarm’s large thrust and low density caused the swarm to rapidly accelerate—much like a feather placed in a gentle breeze. Also, the Sun’s gravity 93,000,000 miles from the Sun (the Earth-Sun distance) is 1,600 times weaker than Earth’s gravity here on Earth.17 So, pushing a swarm of rocks and debris farther from the Sun was surprisingly easy, because there is almost no resistance in outer space.
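The 1,600-times figure checks out against standard constants (a quick sketch of mine, not from the article):

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # mass of the Sun, kg
AU = 1.496e11      # Earth-Sun distance, m

a_sun = G * M_sun / AU**2   # Sun's gravitational acceleration at 1 AU
print(a_sun)                # ~0.0059 m/s^2
print(9.81 / a_sun)         # ~1,650, close to the quoted 1,600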
Question 5: Why are 4% of meteorites almost entirely iron and nickel? Also, why do meteorites rarely contain quartz, which constitutes about 27% of granite’s volume?
Pillars were formed in the subterranean chamber when the thicker portions of the crust were squeezed downward onto the chamber floor. Twice daily, during the centuries before the flood, these pillars were stretched and compressed by tides in the subterranean water. This gigantic heating process steadily raised pillar temperatures. [See “What Triggered the Flood?” here.] As explained in Figure 159, temperatures in what are now iron-nickel meteorites once exceeded 1,300°F, enough to dissolve quartz and allow iron and nickel to settle downward and become concentrated in the pillar tips.18 (A similar gravitational settling process concentrated iron and nickel in the Earth’s core after the flood began. See “Melting the Inner Earth” here.)
Evolutionists have great difficulty explaining iron-nickel meteorites. First, everyone recognizes that a powerful heating mechanism must first melt at least some of the parent body from which the iron-nickel meteorites came, so iron and nickel can sink and be concentrated. How this could have occurred in the weak gravity of extremely cold asteroids has defied explanation.19 Second, the concentrated iron and nickel, which evolutionists visualize in the core of a large asteroid, must then be excavated and blasted into space. Available evidence shows that this has not happened.20
Figure 159: Widmanstätten Patterns. Most iron-nickel meteorites display Widmanstätten patterns. That is, if an iron-nickel meteorite is cut and its face is polished and then etched with acid, the surface has the strange crisscross pattern shown above. This shows that temperatures throughout those meteorites exceeded 1,300°F.16 Why were so many meteoroids, drifting in cold space, at one time so uniformly hot? An impact would not produce such uniformity, nor would a blowtorch. The heating a meteor experiences in passing through the atmosphere is barely felt more than a fraction of an inch beneath the surface. If radioactive decay generated the heat, certain daughter products should be present; they are not. Question 5 explains how these high temperatures were probably reached.
Question 6: Aren’t meteoroids chips from asteroids?
This commonly taught idea is based on an error in logic. Asteroids and meteoroids have some similarities, but that does not mean that one came from the other. Maybe a common event produced both asteroids and meteoroids.
Also, three major discoveries suggest that meteoroids came not from asteroids, but from Earth.
1. In the mid-1970s, the Pioneer 10 and 11 spacecraft traveled out through the asteroid belt. NASA expected that the particle detection experiments on board would find 10 times more meteoroids in the belt than are present near Earth’s orbit.21 Surprisingly, the number of meteoroids diminished as the asteroid belt was approached.22 This showed that meteoroids are not coming from asteroids but from nearer the Earth’s orbit.
2. A faint glow of light, called the zodiacal light, extends from the orbit of Venus out to the asteroid belt. The light is reflected sunlight bouncing off dust-size particles. This lens-shaped swarm of particles orbits the Sun, near Earth’s orbital plane. (On dark, moonless nights, zodiacal light can be seen in the spring in the western sky after sunset and in the fall in the eastern sky before sunrise.) Debris chipped off asteroids would have a wide range of sizes and would not be as uniform and fine as the particles reflecting the zodiacal light. Debris expelled by the fountains of the great deep would place fine dust particles in the Earth's orbital plane.
3. Many meteorites have remanent magnetism, so they must have come from a larger magnetized body. Eros, the only asteroid on which a spacecraft has landed and taken magnetic measurements, has no net magnetic field. If this is true of other asteroids as well, meteorites probably did not come from asteroids.30 If asteroids are flying rock piles, as it now appears, any magnetic fields in the randomly oriented rocks would be largely self-canceling, so the asteroid would have no net magnetic field. Therefore, instead of coming from asteroids, meteorites likely came from a magnetized body such as a planet. Because Earth’s magnetic field is 2,000 times greater than that of all other rocky planets combined, meteorites probably came from Earth.
Remanent magnetism decays, so meteorites must have recently broken away from their parent magnetized body. Those who believe that meteorites were chipped off asteroids say this happened millions of years ago.
PREDICTION 34: Most rocks comprising asteroids will be found to be magnetized.
Two Interpretations
With a transmission electron microscope, Japanese scientist Kazushige Tomeoka identified several major events in the life of one meteorite. Initially, this meteorite was part of a much larger parent body orbiting the Sun. The parent body had many thin cracks, through which mineral-rich water cycled. Extremely thin mineral layers were deposited on the walls of these cracks. These deposits, sometimes hundreds of layers thick, contained calcium, magnesium, carbonates, and other chemicals. Mild thermal metamorphism in this rock shows that temperatures increased before it experienced some final cracks and was blasted into space.31
Hydroplate Interpretation. Earth was the parent body of all meteorites, most of which came from pillars. [Pages 381–386 explain how, why, when, and where pillars formed.] Twice a day before the flood, tides in the subterranean water compressed and stretched these pillars. Compressive heating occurred and cracks developed. Just as water circulates through a submerged sponge that is squeezed and stretched, mineral-laden water circulated through cracks in pillars for years before they broke up. Pillar fragments, launched into space by the fountains of the great deep, became meteoroids. In summary, water did it.
Tomeoka’s (and Most Evolutionists’) Interpretation. Impacts on an asteroid cracked the rock that was to become this meteorite. Ice was deposited on the asteroid. Impacts melted the ice, allowing liquid water to circulate through the cracks and deposit hundreds of layers of magnesium, calcium, and carbonate bearing minerals. A final impact blasted rocks from this asteroid into space. In summary, impacts did it.
Figure 160: Shatter Cone. When a large, crater-forming meteorite strikes the Earth, a shock wave radiates outward from the impact point. The passing shock wave breaks the rock surrounding the crater into meteorite-size fragments having distinctive patterns called shatter cones. (Until shatter cones were associated with impact craters by Robert S. Dietz in 1969, impact craters were often difficult to identify.)
If large impacts on asteroids launched asteroid fragments toward Earth as meteorites, a few meteorites should have shatter cone patterns. None have ever been reported. Therefore, meteorites are probably not derived from asteroids. Likewise, impacts have not launched meteorites from Mars.
Question 7: Does other evidence support this hypothesis that asteroids and meteoroids came from Earth?
Yes. Here are seventeen additional observations that either support the proposed explanation or are inconsistent with other current theories on the origin of asteroids and meteoroids:
1. The materials in meteorites and meteoroids are remarkably similar to those in the Earth’s crust.32 Some meteorites contain very dense elements, such as nickel and iron. Those heavy elements seem compatible only with the denser rocky planets: Mercury, Venus, and Earth—Earth being the densest.
A few asteroid densities have been calculated. They are generally low, ranging from 1.2 to 3.3 gm/cm3. The higher densities match those of the Earth’s crust. The lower densities imply the presence of empty space between loosely held rocks or something light such as water ice.33
PREDICTION 35: Rocks in asteroids are typical of the Earth’s crust. Expensive efforts to mine asteroids34 to recover strategic or precious metals will be a waste of money.
2. Meteorites contain different varieties (isotopes) of the chemical element molybdenum, each isotope having a slightly different atomic weight. If, as evolutionists teach, a swirling cloud of gas and dust mixed for millions of years and produced the Sun, its planets, and meteorites, then each meteorite should have about the same combination of these molybdenum isotopes. Because this is not the case,35 meteorites did not come from a swirling dust cloud or any source that mixed for millions of years.
3. Most meteorites36 and some asteroids37 contain metamorphosed minerals, showing that those bodies reached extremely high temperatures, despite a lifetime in the “deep freeze” of outer space. Radioactive decay within such relatively small bodies could not have produced the necessary heating, because too much heat would have escaped from their surfaces. Stranger still, liquid water altered some meteorites38 while they and their parent bodies were heated—sometimes heated multiple times.39
Impacts in space are often proposed to explain this mysterious heating throughout an asteroid or meteorite. However, an impact would raise the temperature only near the point of impact. Before gravel-size fragments from an impact could become uniformly hot, they would radiate their heat into outer space.40
For centuries before the flood, heat was steadily generated within pillars in the subterranean water chamber. As the flood began, the powerful jetting water launched rock fragments into space—fragments of hot, crushed pillars and fragments from the crumbling walls of the ruptured crust. Those rocks became meteoroids and asteroids.
4. Because asteroids came from Earth, they typically spin in the same direction as Earth (counterclockwise, as seen from the North). However, collisions have undoubtedly randomized the spins of many smaller asteroids in the last few thousand years.41
5. Some asteroids have captured one or more moons. [See Figure 156 at top of this page.] Sometimes the “moon” and asteroid are similar in size. Impacts would not create equal-size fragments that could capture each other.42 The only conceivable way for this to happen is if a potential moon enters an asteroid’s expanding sphere of influence while traveling about the same speed and direction as the asteroid. If even a thin gas surrounds the asteroid, the moon will be drawn closer to the asteroid, preventing the moon from being stripped away later. An “exploded planet” would disperse relatively little gas. The “failed planet explanation” meets none of the requirements. The hydroplate theory satisfies all the requirements.
Figure 161: Chondrules. The central chondrule above is 2.2 millimeters in diameter, the size of this circle: o. This picture was taken in reflected light. However, meteorites containing chondrules can be thinly sliced and polished, allowing light from below to pass through the thin slice and into the microscope. Such light becomes polarized as it passes through the minerals. The resulting colors identify minerals in and around the chondrules. [Meteorite from Hammada al Hamra Plateau, Libya.]
Chondrules (CON-drools) are strange, spherical, BB-size objects found in 86% of all meteorites. To understand the origin of meteorites we must also understand how chondrules formed.
Their spherical shape and texture show they were once molten, but to melt chondrules requires temperatures exceeding 3,000°F. How could chondrules get that hot without melting the surrounding rock, which usually has a lower melting temperature? Because chondrules contain volatile substances that would have bubbled out of melted rock, chondrules must have melted and cooled quite rapidly.23 By one estimate, melting occurred in about one-hundredth of a second.24
The standard explanation for chondrules is that small pieces of rock, moving in outer space billions of years ago, before the Sun and Earth formed, suddenly and mysteriously melted. These liquid droplets quickly cooled, solidified, and then were encased inside the rock that now surrounds them. Such vague conditions, hidden behind a veil of space and time, make it nearly impossible to test this explanation in a laboratory. Scientists recognize that this standard story does not explain the rapid melting and cooling of chondrules or how they were encased uniformly in rocks which are radiometrically older than the chondrules.25 As one scientist wrote, “The heat source of chondrule melting remains uncertain. We know from the petrological data that we are looking for a very rapid heating source, but what?”26
Frequently, minerals grade (gradually change) across the boundaries between chondrules and surrounding material.27 This suggests that chondrules melted while encased in rock. If so, the heating sources must have acted briefly and been localized near the center of what are now chondrules. But how could this have happened?
The most common mineral in chondrules is olivine.28 Deep rocks contain many BB-size pockets of olivine. Pillars within the subterranean water probably had similar pockets. As the subterranean water escaped from under the crust, pillars had to carry more of the crust’s weight. When olivine reaches a certain level of compression, it suddenly changes into another mineral, called spinel (spin-EL), and shrinks in volume by about 10%.29 (Material surrounding each pocket would not shrink.)
Tiny, collapsing pockets of olivine transforming into spinel would generate great heat, for two reasons. First, the transformation is exothermic; that is, it releases heat chemically. Second, it releases heat mechanically, by friction. Here’s why. At the atomic level, each pocket would collapse in many stages—much like falling dominos or the section-by-section crushing of a giant scaffolding holding up an overloaded roof. Within each pocket, as each microscopic crystal slid over adjacent crystals at these extreme pressures, melting would occur along sliding surfaces. The remaining solid structures in the olivine pocket would then carry the entire compressive load—quickly collapsing and melting other parts of the “scaffolding.”
The fountains of the great deep expelled pieces of crushed pillars into outer space, where they rapidly cooled. Their tumbling action, especially in the weightlessness of space, would have prevented volatiles from bubbling out of the encased liquid pockets within each rock. In summary, chondrules are a by-product of the mechanism that produced meteorites—a rapid process that started under the Earth’s crust as the flood began.
Also, tidal effects, as described on pages 425–428, limit the lifetime of the moons of asteroids to about 100,000 years.43 This fact and the problems in capturing a moon caused evolutionist astronomers to scoff at early reports that some asteroids have moons.
Figure 162: Peanut Asteroids. The fountains of the great deep expelled dirt, rocks, and considerable water from Earth. About half of that water quickly evaporated into the vacuum of space; the remainder froze. Each evaporated gas molecule became an orbiting body in the solar system. Asteroids then formed as explained on pages 298–302. Many are shaped like peanuts.
Gas molecules captured by asteroids or released by icy asteroids became their atmospheres. Asteroids with thick atmospheres sometimes captured smaller asteroids as moons. If an atmosphere remained long enough, the moon would lose altitude and gently merge with the low-gravity asteroid, forming a peanut-shaped asteroid. (We see merging when a satellite or spacecraft reenters Earth’s atmosphere, slowly loses altitude, and eventually falls to Earth.) Without an atmosphere, merging becomes almost impossible.
Japan’s Hayabusa spacecraft orbited asteroid Itokawa (shown above) for two months in 2005. Scientists studying Itokawa concluded that it consists of two smaller asteroids that merged. Donald Yeomans, a mission scientist and member of NASA’s Jet Propulsion Laboratory, admitted, “It’s a major mystery how two objects each the size of skyscrapers could collide without blowing each other to smithereens. This is especially puzzling in a region of the solar system where gravitational forces would normally involve collision speeds of 2 km/sec.”45 The mystery is easily solved when one understands the role that water played in the origin of comets and asteroids.
Notice the myriad of rounded boulders, some 150 feet in diameter, littering Itokawa’s surface. High-velocity water produces rounded boulders; an exploded planet or impacts on asteroids would produce angular rocks.
6. The smaller moons of the giant planets (Jupiter, Saturn, Uranus, and Neptune) are captured asteroids. Most astronomers probably accept this conclusion, but have no idea how these captures could occur.44
As explained earlier in this chapter, for decades to centuries after the flood the radiometer effect, powered by the Sun’s energy, spiraled asteroids outward from Earth’s orbit. Water vapor, around asteroids and in interplanetary space, temporarily thickened asteroid and planet atmospheres. This facilitated aerobraking which allowed massive planets to capture asteroids.
Recent discoveries indicate that Saturn’s 313-mile-wide moon, Enceladus (en-SELL-uh-duhs), is a captured asteroid. Geysers at Enceladus’ south pole are expelling water vapor and ice crystals which escape Enceladus and supply Saturn’s E ring.46 That water contains salts resembling Earth’s ocean waters.47 Because asteroids are icy and weak, they would experience strong tides if captured by a giant planet. Strong tides would have recently48 generated considerable internal heat, slowed the moon’s spin, melted ice, and boiled deep reservoirs of water. Enceladus’ spin has almost stopped, its internal water is being launched (some so hot that it becomes a plasma),49 and its surface near the geysers has buckled, probably due to the loss of internal water. Because the material for asteroids and their organic matter came recently from Earth, water is still jetting from cold Enceladus’ surprisingly warm south pole, and “dark green organic material”50 is on its surface.
7. A few asteroids suddenly develop comet tails, so they are considered both asteroid and comet. The hydroplate theory says that asteroids are weakly joined piles of rocks and ice. If such a pile cracked slightly, perhaps due to an impact by space debris, then internal ice, suddenly exposed to the vacuum of space, would violently vent water vapor and produce a comet tail. The hydroplate theory explains why comets are so similar to asteroids.
8. A few comets have nearly circular orbits within the asteroid belt. Their tails lengthen as they approach perihelion and recede as they approach aphelion. If comets formed beyond the planet Neptune, it is highly improbable that they could end up in nearly circular orbits in the asteroid belt.51 So, these comets almost certainly did not form in the outer solar system. Also, comet ice that near the Sun would evaporate relatively quickly. Only the hydroplate theory explains how comets (icy rock piles) recently entered the asteroid belt.
9. If asteroids passing near Earth came from the asteroid belt, too many of them have diameters less than 50 meters,52 and too many have circular orbits.53 However, we would expect this if the rocks that formed asteroids were launched from Earth.
10. Computer simulations, both forward and backward in time, show that asteroids traveling near Earth have a maximum expected lifetime of only about a million years. They “quickly” collide with the Sun.54 This raises doubts that all asteroids began 4,600,000,000 years ago as evolutionists claim—living 4,600 times longer than the expected lifetime of near-Earth asteroids.
11. Earth has one big moon and several small moons—up to 650 feet in diameter.55 The easiest explanation for the small moons is that they were launched from Earth with barely enough velocity to escape Earth’s gravity. (To understand why the largest of these small moons is about 650 feet in diameter, see Endnote 8.)
12. Asteroids 3753 Cruithne and 2000 AA29 are traveling companions of Earth.56 They delicately oscillate, in a horseshoe pattern, around two points that lie 60° (as viewed from the Sun) forward and 60° behind the Earth but on Earth’s nearly circular orbit. These points, predicted by Lagrange in 1764 and called Lagrange points, are stable places where an object would not move relative to the Earth and Sun if it could once occupy either point going at zero velocity relative to the Earth and Sun. But how could a slowly moving object ever reach, or get near, either point? Most likely, it barely escaped from Earth.
Also, Asteroid 3753 could not have been in its present orbit for long, because it is so easy for a passing gravitational body to perturb it out of its stable niche. Time permitting, Venus will pass near this asteroid 8,000 years from now and may dislodge it.57
13. Furthermore, Jupiter has two Lagrange points on its nearly circular orbit. The first, called L4, lies 60° (as seen from the Sun) in the direction of Jupiter’s motion. The second, called L5, lies 60° behind Jupiter.
Visualize planets and asteroids as large and small marbles rolling in orbitlike paths around the Sun on a large frictionless table. At each Lagrange point is a bowl-shaped depression that moves along with each planet. Because there is no friction, small marbles (asteroids) that roll down into a bowl normally pick up enough speed to roll back out. However, if a chance gravitational encounter slowed one marble right after it entered a bowl, it might not exit the bowl. Marbles trapped in a bowl would normally stay 60° ahead of or behind their planet, gently rolling around near the bottom of their moving bowl.
One might think an asteroid is just as likely to get trapped in Jupiter’s leading bowl as its trailing bowl—a 50–50 chance, as with the flip of a coin. Surprisingly, 1068 asteroids are in Jupiter’s leading (L4) bowl, but only 681 are in the trailing bowl.69 This shouldn’t happen in a trillion trials if an asteroid is just as likely to get trapped at L4 as L5. What concentrated so many asteroids near the L4 Lagrange point?
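The "trillion trials" figure can be checked with a simple binomial tail estimate (a sketch of mine, using the asteroid counts quoted above; the normal approximation with continuity correction is a standard technique, not the author's):

import math

n = 1068 + 681                    # total trapped asteroids counted
mean, sd = n * 0.5, math.sqrt(n * 0.25)
z = (1068 - 0.5 - mean) / sd      # ~9.2 standard deviations above a 50-50 split
p = 0.5 * math.erfc(z / math.sqrt(2))
print(z, p)                       # p ~ 1e-20, indeed far rarer than one in a trillion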
According to the hydroplate theory, asteroids formed near Earth’s orbit. Then, the radiometer effect spiraled them outward, toward the orbits of Mars and Jupiter. Some spiraled through Jupiter’s circular orbit and passed near both L4 and L5. Jupiter’s huge gravity would have slowed those asteroids that were moving away from Jupiter but toward L4. That braking action would have helped some asteroids settle into the L4 bowl. Conversely, asteroids that entered L5 were accelerated toward Jupiter, so they would quickly be pulled out of L5 by Jupiter’s gravity. The surprising excess of asteroids near Jupiter’s L4 is what we would expect based on the hydroplate theory.
Figure 163: Asteroid Belt and Jupiter’s L4 and L5. The size of the Sun, planets, and especially asteroids are magnified, but their relative positions are accurate. About 90% of the 30,000 precisely known asteroids lie between the orbits of Mars and Jupiter, a doughnut-shaped region called the asteroid belt. A few small asteroids cross Earth’s orbit.
Jupiter’s Lagrange points, L4 and L5, lie 60° ahead and 60° behind Jupiter, respectively. They move about the Sun at the same velocity as Jupiter, as if they were fixed at the corners of the two equilateral triangles shown. Items 12 and 13 explain why so many asteroids have settled near L4 and L5, and why significantly more oscillate around L4 than L5.
14. Without the hydroplate theory, one has difficulty imagining situations in which an asteroid would (a) settle into one of Jupiter’s Lagrange points, (b) capture a moon, especially a moon with about the same mass as the asteroid, or (c) have a circular orbit, along with its moon, about their common center of mass. If all three happened to an asteroid, astronomers would be shocked; no astronomer would have predicted that it could happen to a comet. Nevertheless, an “asteroid” discovered earlier, named 617 Patroclus, satisfies (a)–(c). Patroclus and its moon, Menoetius, have such low densities that they would float in water; therefore, both are probably comets70—dirty, fluffy snowballs. Paragraphs 5, 7, 8, and 13 (above) explain why these observations make perfect sense with the hydroplate theory.
15. As explained in “Shallow Meteorites,” meteorites are almost always found surprisingly near Earth’s surface. The one known exception is in southern Sweden, where 40 meteorites and thousands of grain-size fragments of one particular type of meteorite have been found at different depths in a few limestone quarries. The standard explanation is that all these meteorites somehow struck this same small area over a 1–2-million-year period about 480 million years ago.71
A more likely explanation is that some meteorites, not launched with enough velocity to escape Earth during the flood, fell back to Earth. One or more meteorites fragmented on reentering Earth’s atmosphere. The pieces landed in mushy, recently-deposited limestone layers in southern Sweden.
16. Light spectra (detailed color patterns, much like a long bar code) from certain asteroids in the outer asteroid belt imply the presence of organic compounds, especially kerogen, a coal-tar residue.72 No doubt the kerogen came from plant life. Life as we know it could not survive in such a cold region of space, but common organic matter launched from Earth could have been preserved.
17. Many asteroids are reddish and have light characteristics showing the presence of iron.73 On Earth, reddish rocks almost always imply iron oxidized (rusted) by oxygen gas. Today, oxygen is rare in outer space. If iron on asteroids is oxidized, what was the source of the oxygen? Answer: Water molecules, surrounding and impacting asteroids, dissociated (broke apart), releasing oxygen. That oxygen then combined chemically with iron on the asteroid’s surface, giving the reddish color.
Mars, often called the red planet, derives its red color from oxidized iron. Again, oxygen contained in water vapor launched from Earth during the flood, probably accounts for Mars’ red color.
Mars’ topsoil is richer in iron and magnesium than Martian rocks beneath the surface. The dusty surface of Mars also contains carbonates, such as limestone.74 Because meteorites and Earth’s subterranean water contained considerable iron, magnesium, and carbonates, it appears that Mars was heavily bombarded by meteorites and water launched from Earth’s subterranean chamber. [See “The Origin of Limestone” on pages 224–229.]
Those who believe that meteorites came from asteroids have wondered why meteorites do not have the red color of most asteroids.75 The answer is twofold: (a) as explained on page 301, meteorites did not come from asteroids but both came from Earth, and (b) asteroids contain oxidized iron, as explained above, but meteorites are too small to attract an atmosphere gravitationally.
Figure 164: Salt of the Earth. On 22 March 1998, this 2 3/4 pound meteorite landed 40 feet from boys playing basketball in Monahans, Texas. While the rock was still warm, police were called. Hours later, NASA scientists cracked the meteorite open in a clean-room laboratory, eliminating any possibility of contamination. Inside were salt (NaCl) crystals 0.1 inch (3 mm) in diameter and liquid water!58 Some of these salt crystals are shown in the blue circle, highly magnified and in true color. Bubble (B) is inside a liquid, which itself is inside a salt crystal. Eleven quivering bubbles were found in about 40 fluid pockets. Shown in the green circle is another bubble (V) inside a liquid (L). The length of the horizontal black bar represents 0.005 mm, about 1/25 the diameter of a human hair.
NASA scientists who investigated this meteorite believe that it came from an asteroid, but that is highly unlikely. Asteroids, having little gravity and being in the vacuum of space, cannot sustain liquid water, which is required to form salt crystals. (Earth is the only planet, indeed the only body in the solar system, that can sustain liquid water on its surface.) Nor could surface water (gas, liquid, or solid) on asteroids withstand high-velocity impacts. Even more perplexing for the evolutionist: What is the salt’s origin? Also, what accounts for the meteorite’s other contents: potassium, magnesium, iron, and calcium—elements abundant on Earth, but as far as we know, not beyond Earth?59 Dust-sized meteoroids often come from comets. Most larger meteoroids are rock fragments that never merged into a comet or asteroid.
Much evidence supports Earth as the origin of meteorites.
- Minerals and isotopes in meteorites are remarkably similar to those on Earth.32
- Some meteorites contain sugars,60 salt crystals containing liquid water,61 and possible cellulose.62
- Other meteorites contain limestone,63 which, on Earth, forms only in liquid water.
- Three meteorites contain excess amounts of left-handed amino acids64—a sign of once-living matter.
- A few meteorites show that “salt-rich fluids analogous to terrestrial brines” flowed through their veins.65
- Some meteorites have about twice the heavy hydrogen concentration as Earth’s water today.66 As explained in the preceding chapter and in “Energy in the Subterranean Water” here, this heavy hydrogen came from the subterranean chambers.
- About 86% of all meteorites contain chondrules, which are best explained by the hydroplate theory.
- Seventy-eight types of living bacteria have been found in two meteorites after extreme precautions were taken to avoid contamination.67 Bacteria need liquid water to live, grow, and reproduce. Obviously, liquid water does not exist inside meteoroids whose temperatures in outer space are near absolute zero (-460°F). Therefore, the bacteria must have been living in the presence of liquid water before being launched into space. Once in space, they quickly froze and became dormant. Had bacteria originated in outer space, what would they have eaten?
Water on Mars
Water recently and briefly flowed at various locations on Mars.76 Photographic comparisons show that some water flowed within the last 2–5 years!77 Water is now stored as ice at Mars’ poles78 and in surface soil. Mars’ stream beds usually originate on crater walls rather than in ever smaller tributaries as on Earth.79 Rain formed other channels.80 Martian drainage channels and layered strata are found at almost 200 isolated locations.81 Most gullies are on crater slopes at high latitudes82—extremely cold slopes that receive little sunlight. One set of erosion gullies is on the central peak of an impact crater!83
Figure 165: Erosion Channels on Mars. These channels frequently originate in scooped-out regions, called amphitheaters, high on a crater wall. On Earth, where water falls as rain, erosion channels begin with narrow tributaries that merge with larger tributaries and finally, rivers. Could impacts of comets or icy asteroids have formed these craters, gouged out amphitheaters, and melted the ice—each within seconds? Mars, which is much colder than Antarctica in the winter, would need a heating source, such as impacts, to produce liquid water.
Today, Mars is cold, averaging -80°F (112 Fahrenheit degrees below freezing). Water on Mars should be ice, not liquid water. Mars’ low atmospheric pressures would hasten freezing even more.84
Water probably came from above. Soon after Earth’s global flood, the radiometer effect caused asteroids to spiral out to the asteroid belt, just beyond Mars. This gave asteroids frequent opportunities to collide with Mars. When crater-forming impacts occurred, large amounts of debris were thrown into Mars’ atmosphere. Mars’ thin atmosphere and low gravity allowed the debris to settle back to the surface in vast layers of thin sheets—strata.
PREDICTION 36: Most sediments taken from layered strata on Mars and returned to Earth will show that they were deposited through Mars’ atmosphere, not through water. (Under a microscope, water-deposited grains have nicks and gouges, showing that they received many blows as they tumbled along stream bottoms. Sediments deposited through an atmosphere receive few nicks.)
Impact energy (and heat) from icy asteroids and comets bombarding Mars released liquid water, which often pooled inside craters or flowed downhill and eroded the planet’s surface.87 (Most liquid water soaked into the soil and froze.) Each impact was like the bursting of a large dam here on Earth. Brief periods of intense, hot rain and localized flash floods followed.88 These Martian hydrodynamic cycles quickly “ran out of steam,” because Mars receives relatively little heat from the Sun. While the consequences were large for Mars, the total water was small by Earth’s standards—about twice the water in Lake Michigan.
Today, when meteorites strike icy soil on Mars, some of that ice melts. When this happens on a crater wall, liquid water flows down the crater wall, leaving the telltale gullies that have shocked the scientific community.77
PREDICTION 37: As has been discovered on the Moon and apparently on Mercury, frost will be found within asteroids and in permanently shadowed craters on Mars. This frost will be rich in heavy hydrogen.
Are Some Meteorites from Mars?
Widely publicized claims have been made that at least 30 meteorites from Mars have been found. With international media coverage in 1996, a few scientists also proposed that one of these meteorites, named ALH84001, contained fossils of primitive life. Later study rejected that claim.
The wormy-looking shapes discovered in a meteorite [supposedly] from Mars turned out to be purely mineralogical and never were alive.89
The 30 meteorites are presumed to have come from the same place, because they contain similar ratios of three types of oxygen: oxygen weighing 16, 17, and 18 atomic mass units. (That presumption is not necessarily true, is it?) A chemical argument then indirectly links one of those meteorites to Mars, but the link is more tenuous than most realize.90 That single meteorite had tiny glass nodules containing dissolved gases. A few of these gases (basically the noble gases: argon, krypton, neon, and xenon) had the same relative abundances as those found in Mars’ atmosphere in 1976. (Actually, a later discovery shows that the mineralogy of these meteorites differs from that of almost all Martian rock.91) Besides, if two things are similar, it does not mean that one came from the other. Similarity in the relative abundances of the noble gases in Mars’ atmosphere and in one meteorite may be because those gases originated in Earth’s preflood subterranean chamber. Rocks and water from the subterranean chamber may have transported those gases to Mars.
Could those 30 meteorites have come from Mars? To escape the gravity of Mars requires a launch velocity of 3 miles per second. Additional velocity is then needed to transfer to an orbit intersecting Earth, 34–236 million miles away. Supposedly, one or more asteroids slammed into Mars and blasted off millions of meteoroids. Millions are needed, because less than one in a million92 would ever hit Earth, be large enough to survive reentry, be found, be turned over to scientists, and be analyzed in detail. Besides, if meteorites can come to Earth from Mars, many more should have come from the Moon—but haven’t.93
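The quoted launch velocity agrees with the standard escape-velocity formula v = sqrt(2GM/R); a quick check of mine, using textbook values for Mars:

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_mars = 6.417e23    # mass of Mars, kg
R_mars = 3.390e6     # mean radius of Mars, m

v_esc = math.sqrt(2 * G * M_mars / R_mars)
print(v_esc)              # ~5,000 m/s
print(v_esc / 1609.34)    # ~3.1 miles per second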
For an impact suddenly to accelerate, in a fraction of a second, any solid from rest to a velocity of 3 miles per second requires such extreme shock pressures that much of the material would melt, if not vaporize.94 All 30 meteorites should at least show shock effects. Some do not. Also, Mars should have at least six giant craters if such powerful blasts occurred, because six different launch dates are needed to explain the six age groupings the meteorites fall into (based on evolutionary dating methods). Such craters are hard to find, and large, recent impacts on Mars should have been rare.
Then there are energy questions. Almost all impact energy is lost as shock waves and ultimately as heat. Little energy remains to lift rocks off Mars. Even with enough energy, the fragments must be large enough to pass through Mars’ atmosphere. To see the difficulty, imagine throwing a ball high into the air. Then visualize how hard it would be to throw a handful of dust that high. Atmospheric drag, even in Mars’ thin atmosphere, absorbs too much of the smaller particles’ kinetic energy. Finally, for large particles to escape Mars, the expelling forces must be focused, as occurs in a gun barrel or rocket nozzle. For best results, this should be aimed straight up, to minimize the path length through the atmosphere.
A desire to believe in life on Mars produced a type of “Martian mythology” that continues today. In 1877, Italian astronomer Giovanni Schiaparelli reported seeing grooves on Mars. The Italian word for groove is “canali”; therefore, many of us grew up hearing about “canals” on Mars—a mistranslation. Because canals are man-made structures, people started thinking about “little green men” on Mars.
In 1894, Percival Lowell, a wealthy, amateur astronomer with a vivid imagination, built Lowell Observatory primarily to study Mars. Lowell published a map showing and naming Martian canals, and wrote several books: Mars (1895), Mars and Its Canals (1906), and Mars As the Abode of Life (1908). Even into the 1960s, textbooks displayed his map, described vegetative cycles on Mars, and explained how Martians may use canals to convey water from the polar ice caps to their parched cities. Few scientists publicly disagreed with the myth, even after 1949 when excellent pictures from the 200-inch telescope on Mount Palomar were available. Those of us in school before 1960 were directly influenced by such myths; almost everyone has been indirectly influenced.
Artists, science fiction writers, and Hollywood helped fuel this “Martian mania.” In 1898, H. G. Wells wrote The War of the Worlds telling of strange-looking Martians invading Earth. In 1938, Orson Welles, in a famous radio broadcast, panicked many Americans into thinking New Jersey was being invaded by Martians. In 1975, two Viking spacecraft were sent to Mars to look for life. Carl Sagan announced, shortly before the tests were completed, that he was certain life would be discovered—a reasonable conclusion, if life evolved. The prediction failed. In 1996, United States President Clinton read to a global television audience, “More than 4 billion years ago this piece of rock [ALH84001] was formed as a part of the original crust of Mars. After billions of years, it broke from the surface and began a 16-million-year journey through space that would end here on Earth.” “... broke from the surface ...”? The myth is still alive.
Final Thoughts
As with the 24 other major features listed on page 106 [of the book, In the Beginning], we have examined the origin of asteroids and meteoroids from two directions: “cause-to-effect” and “effect-to-cause.”
Cause-to-Effect. We saw that, given the assumption listed on page 115 [of the book, In the Beginning], consequences naturally followed: subterranean water became supercritical; the fountains of the great deep erupted; large rocks, muddy water, and water vapor were launched into space; gas and gravity assembled asteroids; and gas pressure powered by the Sun’s energy (the radiometer effect) herded asteroids into the asteroid belt. Isolated rocks still moving in the solar system are meteoroids.
Effect-to-Cause. We considered seventeen effects (pages 302–306)[of the book, In the Beginning], each incompatible with present theories on the origin of asteroids and meteoroids. Each effect was evidence that many rocks and large volumes of water vapor were launched from Earth.
Portions of Part III will examine this global flood from a third direction: historical records from claimed eyewitnesses. All three perspectives reinforce each other, illuminating in different ways this catastrophic event.
To access the footnotes for this article, click here. | http://4thdayalliance.com/articles/solar-system/origin-of-asteroids/ | 13 |
27 | Applying the Doppler Effect to Moving Galaxies
Overview: Students make the observation that farther galaxies move away faster, and check that a model of an expanding universe makes predictions that match those observations.
Physical resources: Expanding universe model
Electronic resources: Virtual spectroscopy
Observations of moving galaxies:
- Motivating question: How can we use the idea of redshift to figure out the velocity of objects? Students brainstorm ideas with their groups. (A numerical sketch of this calculation follows this list.)
- How do we know what was emitted? Introduce spectral lines as the photon we know must have been emitted with a certain energy, in our case, we'll look at line emission from hydrogen atoms.
- Virtual spectroscope activity: (MiniSpectroscopy)
- Examine the hydrogen spectrum at rest, then predict how the "example galaxy" is moving relative to Earth. (Peak is at a longer wavelength, so it is moving away from us.)
- Give students only the spectra of galaxies A through D
- What direction are they moving? (away from Earth, because peak of emission is at a longer wavelength than it is when hydrogen is at rest)
- Challenge: put them in order by the speed (slowest to fastest) they are moving away from Earth.
- Now, give students the images of galaxies A through D
- What's different about these galaxies? (angular diameter)
- Given that most galaxies are about the same linear diameter, put them in order by their distance from Earth, closest to furthest.
- Students should describe the pattern in these observations, and put their description on the whiteboard. (The order is the same, further galaxies move away faster.)
- Instructor introduces Hubble's law as a restatement of this observation: Galaxies that are farther away move away from us faster.
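To make the redshift-to-velocity step concrete, here is a minimal R sketch. The wavelengths are invented placeholders, not the MiniSpectroscopy data, and the sketch assumes the non-relativistic approximation v = c*z:

c_kms <- 3e5                                                  # speed of light in km/s
lambda_rest <- 656.3                                          # rest wavelength of the hydrogen H-alpha line, nm
lambda_obs <- c(A = 657.6, B = 659.0, C = 661.2, D = 664.5)   # hypothetical galaxies A-D
z <- (lambda_obs - lambda_rest) / lambda_rest                 # redshift: fractional shift of the line
v <- c_kms * z                                                # recession velocity in km/s; positive means moving away
sort(v)                                                       # orders the galaxies slowest to fastest, as in the challenge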
Model of expanding universe, to explain Hubble's law observations above:
- Introduce the two-dimensional "expanding universe" model, of which we've taken an image at two different times
- Label "smaller" universe as time t = 0, and "larger" universe as time t = 10 seconds.
- Have groups of students "live" in galaxy A, B or C and have them make predictions of the following for each of the two other labeled galaxies, as well as another galaxy of their choice:
- Distance from your galaxy to other galaxy at time t = 0 seconds (cm)
- Distance from your galaxy to other galaxy at time t = 10 seconds (cm)
- Change in distance (cm)
- Change in time (sec, all should be 10 seconds)
- Speed = change in distance / change in time (cm / sec) (see the R sketch after this list)
- Direction of motion (description, or arrow)
- Have students populate classroom prediction table
- Summarize important patterns seen in predictions: Galaxies at a greater distance move faster.
- Have students line up their "home galaxy" while holding both "universes" up to the light, and describe what has happened to all the other galaxies (they have moved away from the home galaxy, on a line connecting the home galaxy to the other galaxy.) Then have them switch their "Home galaxy" to the other two labeled galaxies, in turn. (All galaxies will see this pattern of all others moving away).
- Refined prediction: Galaxies at a greater distance move faster, and move away from each other along a line connecting the two. From any galaxy, all others look like they are moving away.
- These predictions match up with the observations we've made about actual galaxies in our universe, so we can't rule out the "expanding universe" model.
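To see the arithmetic behind the prediction table, here is a small R sketch; the distances are made-up measurements from a hypothetical home galaxy A, so only the pattern matters, not the numbers:

d0  <- c(B = 4.0, C = 6.5, other = 9.0)    # distance (cm) from home galaxy A at t = 0 s
d10 <- c(B = 8.0, C = 13.0, other = 18.0)  # distance (cm) from home galaxy A at t = 10 s
speed <- (d10 - d0) / 10                   # change in distance / change in time, cm/s
speed                                      # 0.40, 0.65, 0.90: galaxies at a greater distance move faster
speed / d0                                 # a constant ratio, which is the Hubble's-law pattern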
- Some students have difficulty identifying what information they should extract from the spectra when comparing the sample of galaxies. They may think the intensity of the peak is what they should order by, instead of the location of the peak on the wavelength (energy) scale.
- Many students have difficulty separating the observations from the models in this activity. If so, clarify with the assessment question below.
- Can we determine redshifts for galaxies that do not have emission lines? (no, we must know the energy at which the photons were originally emitted).
- What if we observed every galaxy moving toward us, with further galaxies moving toward us faster? How would that change our model to explain the observations? (contracting universe).
- Which is a statement of the Doppler effect, and which is a statement of Hubble's Law?
- When we observe galaxies moving away from us, we receive lower energy photons compared to what that galaxy actually emits. (Doppler effect)
- Galaxies moving away from us faster are also further away (Hubble's Law).
- Galaxies moving towards us give us photons that are higher energy than when they were emitted (Doppler effect).
- Closer galaxies are moving away from us slower (Hubble's Law).
- Discuss what each deals with: Hubble's law relates speed to distance, and Doppler effect relates change in energy of photons to speed of motion.
- Image of review page of notes: (Hubble's law 2)
| http://ocw.mit.edu/high-school/courses/chandra-astrophysics-institute/investigations/investigation-6/activity-3/ | 13
46 | GMAT Coordinate Geometry
August 1, 2012
The key to many GMAT coordinate geometry questions is to remember that coordinate geometry is just another way of expressing the possible solutions to a two variable equation. Each point on the line in a coordinate plane corresponds to a solution for the equation of that line.
The base equation for a line is y = mx + b, where b is the y intercept, or the point at which the line crosses the y-axis, and m is the slope, or the steepness of the line. More specifically, the slope of a line is the change in the y coordinates divided by the change in the x coordinates between any two points on the line.
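As a quick illustration of that definition, the following R snippet recovers the slope m and intercept b from two points; the points themselves are arbitrary examples:

p1 <- c(x = 1, y = 3)                                    # two example points on a line
p2 <- c(x = 4, y = 9)
m <- (p2[["y"]] - p1[["y"]]) / (p2[["x"]] - p1[["x"]])   # slope: change in y over change in x
b <- p1[["y"]] - m * p1[["x"]]                           # intercept, from y = mx + b at either point
c(slope = m, intercept = b)                              # slope 2, intercept 1, so the line is y = 2x + 1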
While understanding the basic format for an equation of a line can be very useful on the GMAT quantitative section, you will encounter GMAT problems in which it is faster and easier to think of the problem in algebraic terms. In such cases you should think of the equation as an algorithm that will produce the y value given any x value. This is the reason that the x values are sometimes referred to as inputs and the y values as outputs.
For example, if your answer choices are solution sets and you are asked to determine which option is on the line given in the y = mx + b form, rather than graphing the line and trying to determine which point falls on it, which is especially difficult as you will not have graph paper, you can plug each x value into the equation and determine which one produces the appropriate y value.
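Here is a sketch of that plug-in tactic in R, using the invented line y = 2x + 1 and invented candidate solution sets:

on_line <- function(pt, m, b) pt[["y"]] == m * pt[["x"]] + b   # does y equal mx + b?
candidates <- list(A = c(x = 2, y = 6), B = c(x = 3, y = 7), C = c(x = 4, y = 8))
sapply(candidates, on_line, m = 2, b = 1)                      # only B is on the line: 2*3 + 1 = 7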
On test day, the key is to remember that coordinate geometry is just a way of expressing algebraic concepts visually. Thus, we can often treat these problems as algebra rather than as geometry. To see this in action, try the problem below.
In the xy-coordinate system, if (m, n) and (m + 2, n + k) are two points on the line with the equation x = 2y + 5, then k =
Step 1: Analyze the Question
For any question involving the equation of a line, a good place to start is the slope-intercept form of the line, y = mx + b. Remember that if you have two points on a line, you can derive the entire equation, and if you have an equation of the line, you can calculate any points on that line.
Step 2: State the Task
We are solving for k, which is the amount by which the y-coordinate increases when the x-coordinate increases by 2.
Step 3: Approach Strategically
The slope of a line is the ratio between the change in y and the change in x. In other words, every time the x-coordinate increases by 1, the y-coordinate increases by the amount of the slope.
The equation of the line in the question stem is defined as x = 2y + 5. We must isolate y to have slope-intercept form: y = (1/2)x - 5/2. So the slope of this line is 1/2. This means that for every change of +1 in the x direction, there is a change of +1/2 in the y direction. Then we know that, because there is an increase of 2 units in the x direction when moving from m to m + 2, there must be a change of 1 unit in the y direction when moving from n to n + k. So k = 1.
Since there are variables that eventually cancel (m and n are not part of the answers), we can Pick Numbers. Let's say that you choose the y-coordinate of the point (m, n) to be 0 to allow for easier calculations. Using the equation we're given to relate x- and y-coordinates, we can calculate x = 2(0) + 5 = 5. So (m, n) is the point (5, 0). Now we'll plug our values of m and n into the next point: (m + 2, n + k). That yields (7, k). All we have to do is plug an x-coordinate of 7 into the equation to solve for k: 7 = 2k + 5, so k = 1.
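The picked numbers can also be checked mechanically; this R sketch simply replays the arithmetic above:

y_of <- function(x) (x - 5) / 2   # the line x = 2y + 5, solved for y
m <- 5; n <- y_of(5)              # n = 0, so (m, n) = (5, 0) lies on the line
k <- y_of(m + 2) - n              # the change in y when x goes from 5 to 7
k                                 # 1, matching the slope argument: 2 * (1/2) = 1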
10 | Reduplication is used in inflections to convey a grammatical function, such as plurality, intensification, etc., and in lexical derivation to create new words. It is often used when a speaker adopts a tone more "expressive" or figurative than ordinary speech and is also often, but not exclusively, iconic in meaning. Reduplication is found in a wide range of languages and language groups, though its level of linguistic productivity varies.
Reduplication is the standard term for this phenomenon in the linguistics literature. Other terms that are occasionally used include cloning, doubling, duplication, repetition, and tautonym.
Typological description
Reduplication is often described phonologically in one of two different ways: either (1) as reduplicated segments (sequences of consonants/vowels) or (2) as reduplicated prosodic units (syllables or moras). In addition to phonological description, reduplication often needs to be described morphologically as a reduplication of linguistic constituents (i.e. words, stems, roots). As a result, reduplication is interesting theoretically as it involves the interface between phonology and morphology.
The base is the word (or part of the word) that is to be copied. The reduplicated element is called the reduplicant, often abbreviated as RED or sometimes just R.
In reduplication, the reduplicant is most often repeated only once. However, in some languages, reduplication can occur more than once, resulting in a tripled form rather than the doubled form found in most reduplication. Triplication is the term for this phenomenon of copying twice. Pingelapese has both reduplication and triplication.
|kɔul 'to sing'||kɔukɔul 'singing'||kɔukɔukɔul 'still singing'|
|mejr 'to sleep'||mejmejr 'sleeping'||mejmejmejr 'still sleeping'|
Sometimes gemination (i.e. the doubling of consonants or vowels) is considered to be a form of reduplication. The term dupleme has been used (after morpheme) to refer to different types of reduplication that have the same meaning.
Full and partial reduplication
Full reduplication involves reduplicating the entire word, as in these forms from Kham:
|[ɡin]||'ourselves'||→||[ɡinɡin]||'we (to) us'||(ɡin-ɡin)|
|[jaː]||'themselves'||→||[jaːjaː]||'they (to) them'||(jaː-jaː)||(Watters 2002)|
|[kʼʷə́ɬ]||'to capsize'||→||[kʼʷə́ɬkʼʷəɬ]||'likely to capsize'||(kʼʷə́ɬ-kʼʷəɬ)|
|[qʷél]||'to speak'||→||[qʷélqʷel]||'talkative'||(qʷél-qʷel)||(Shaw 2004)|
Partial reduplication involves a reduplication of only part of the word. For example, Marshallese forms words meaning 'to wear X' by reduplicating the last consonant-vowel-consonant (CVC) sequence of a base, i.e. base+CVC:
|kagir||'belt'||→||kagirgir||'to wear a belt'||(kagir-gir)|
|takin||'sock'||→||takinkin||'to wear socks'||(takin-kin)||(Moravcsik 1978)|
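As a rough computational illustration only (assuming plain ASCII transliterations and the five vowel letters a, e, i, o, u; real Marshallese orthography is more involved), an R function for this base+CVC pattern might look like:

reduplicate_cvc <- function(base) {
  cvc <- regmatches(base, regexpr("[^aeiou][aeiou][^aeiou]$", base))  # final C-V-C sequence
  paste0(base, cvc)                                                   # append the copy to the base
}
reduplicate_cvc("kagir")   # "kagirgir", 'to wear a belt'
reduplicate_cvc("takin")   # "takinkin", 'to wear socks'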
Many languages often use both full and partial reduplication, as in the Motu example below:
|Base Verb||Full reduplication||Partial reduplication|
|mahuta 'to sleep'||mahutamahuta 'to sleep constantly'||mamahuta 'to sleep (plural)'|
Reduplicant position
Initial reduplication in Agta (CV- prefix):
|[ŋaŋaj]||'a long time'||→||[ŋaŋaŋaj]||'a long time (in years)'||(ŋa-ŋaŋaj)||(Healey 1960)|
Final reduplication in Dakota (-CCV suffix):
|[hãska]||'tall (singular)'||→||[hãskaska]||'tall (plural)'||(hãska-ska)|
|[waʃte]||'good (singular)'||→||[waʃteʃte]||'good (plural)'||(waʃte-ʃte)||(Shaw 1980, Marantz 1982, Albright 2002)|
Internal reduplication in Samoan (-CV- infix):
|savali||'he/she walks' (singular)||→||savavali||'they walk' (plural)||(sa-va-vali)|
|alofa||'he/she loves' (singular)||→||alolofa||'they love' (plural)||(a-lo-lofa)||(Moravcsik 1978, Broselow and McCarthy 1984)|
|le tamaloa||'the man' (singular)||→||tamaloloa||'men' (plural)||(tama-lo-loa)|
Internal reduplication is much less common than the initial and final types.
Copying direction
A reduplicant can copy from either the left edge of a word (left-to-right copying) or from the right edge (right-to-left copying). There is a tendency for prefixing reduplicants to copy left-to-right and for suffixing reduplicants to copy right-to-left:
Final R → L copying in Sirionó:
|ñimbuchao||→||ñimbuchaochao||'to come apart'||(ñimbuchao-chao)||(McCarthy and Prince 1996)|
Copying from the other direction is possible although less common:
Initial R → L copying in Tillamook:
|[təq]||'break'||→||[qtəq]||'they break'||(q-təq)||(Reichard 1959)|
Final L → R copying in Chukchi:
|nute-||'ground'||→||nutenut||'ground (abs. sg.)'||(nute-nut)|
|jilʔe-||'gopher'||→||jilʔejil||'gopher (abs. sg.)'||(jilʔe-jil)||(Marantz 1982)|
Internal reduplication can also involve copying the beginning or end of the base. In Quileute, the first consonant of the base is copied and inserted after the first vowel of the base.
Internal L → R copying in Quileute:
|[tsiko]||'he put it on'||→||[tsitsko]||'he put it on (frequentative)'||(tsi-ts-ko)|
|[tukoːjoʔ]||'snow'||→||[tutkoːjoʔ]||'snow here and there'||(tu-t-ko:jo’)||(Broselow and McCarthy 1984)|
In Temiar, the last consonant of the root is copied and inserted before the medial consonant of the root.
|[sluh]||'to shoot (perfective)'||→||[shluh]||'to shoot (continuative)'||(s-h-luh)|
|[slɔɡ]||'to marry (perfective)'||→||[sɡlɔɡ]||'to marry (continuative)'||(s-ɡ-lɔɡ)||(Broselow and McCarthy 1984, Walther 2000)|
A rare type of reduplication is found in Semai (an Austroasiatic language of Malaysia). "Expressive minor reduplication" is formed with an initial reduplicant that copies the first and last segment of the base:
|[dŋɔh]||→||[dhdŋɔh]||'appearance of nodding constantly'||(dh-dŋɔh)|
|[cruhaːw]||→||[cwcruhaːw]||'monsoon rain'||(cw-cruhaːw)||(Diffloth 1973)|
Reduplication and other morphological processes
All of the examples above consist of only reduplication. However, reduplication often occurs with other phonological and morphological processes, such as deletion, affixation of non-reduplicating material, etc.
For instance, in Tz'utujil a new '-ish' adjective form is derived from other words by suffixing the reduplicated first consonant of the base followed by the segment [oχ]. This can be written succinctly as -Coχ. Below are some examples:
- [kaq] 'red' → [kaqkoχ] 'reddish' (kaq-k-oχ)
- [qʼan] 'yellow' → [qʼanqʼoχ] 'yellowish' (qʼan-qʼ-oχ)
- [jaʔ] 'water' → [jaʔjoχ] 'watery' (jaʔ-j-oχ) (Dayley 1985)
Somali has a similar suffix that is used in forming the plural of some nouns: -aC (where C is the last consonant of the base):
- [toɡ] 'ditch' → [toɡaɡ] 'ditches' (toɡ-a-ɡ)
- [ʕad] 'lump of meat' → [ʕadad] 'lumps of meat' (ʕad-a-d)
- [wɪːl] 'boy' → [wɪːlal] 'boys' (wɪːl-a-l) (Abraham 1964)
This combination of reduplication and affixation is commonly referred to as fixed-segment reduplication.
- [nowiu] 'ox' → [nonnowiu] 'ox (distributive)' (no-n-nowiu)
- [hódai] 'rock' → [hohhodai] 'rock (distributive)' (ho-h-hodai)
- [kow] 'dig out of ground (unitative)' → [kokkow] 'dig out of ground (repetitive)' (ko-k-kow)
- [ɡɨw] 'hit (unitative)' → [ɡɨɡɡɨw] 'hit (repetitive)' (ɡɨ-ɡ-ɡɨw) (Haugen forthcoming)
Sometimes gemination can be analyzed as a type of reduplication.
Phonological processes, environment, and reduplicant-base relations
- base-reduplicant "identity" (OT terminology: BR-faithfulness)
- tonal transfer/non-transfer
Function and meaning
In the Malayo-Polynesian family, reduplication is used to form plurals (among many other functions):
- Malay rumah "house", rumah-rumah "houses".
In pre-1972 Indonesian and Malay orthography, 2 was shorthand for the reduplication that forms plurals: orang "person", orang-orang or orang2 "people". This orthography has resurfaced widely in text messaging and other forms of electronic communication.
Chinese also uses reduplication: 人 rén for "person", 人人 rénrén for "everybody". Japanese does it too: 時 toki "time", tokidoki 時々 "sometimes, from time to time". Both languages can use a special written iteration mark 々 to indicate reduplication, although in Chinese the iteration mark is no longer used in standard writing and is often found only in calligraphy.
- spondeo, spopondi (Latin, "I vow, I vowed")
- λείπω, λέλοιπα (Greek, "I leave, I left")
- δέρκομαι, δέδορκα (Greek, "I see, I saw"; these Greek examples exhibit ablaut as well as reduplication)
- háitan, haíháit (Gothic, "to name, I named")
None of these sorts of forms survive in modern English, although they existed in its parent Germanic languages. A number of verbs in the Indo-European languages exhibit reduplication in the present stem rather than the perfect stem, often with a different vowel from that used for the perfect: Latin gigno, genui ("I beget, I begat") and Greek τίθημι, ἔθηκα, τέθηκα (I place, I placed, I have placed). Other Indo-European verbs used reduplication as a derivational process; compare Latin sto ("I stand") and sisto ("I remain"). All of these Indo-European inherited reduplicating forms are subject to reduction by other phonological laws.
Contemporary spoken Finnish uses reduplicated nouns to indicate genuineness, completeness, originality and being uncomplicated, as opposed to being fake, incomplete, complicated or fussy. It can be thought of as compound word formation. For example, Söin viisi jäätelöä, pullapitkon ja karkkia, sekä tietysti ruokaruokaa. "I ate five choc-ices, a long loaf of coffee bread and candy, and of course food-food". Here, the "food-food" is contrasted with "junk food": the principal role of food is nutrition, and junk food isn't nutritious, so "food-food" is nutritious food, exclusively. One may say "En ollut eilen koulussa, koska olin kipeä. Siis kipeäkipeä" ("I wasn't at school yesterday because I was sick. Sick-sick, that is"), meaning one was actually suffering from an illness and is not making up excuses as usual.
- ruoka "food", ruokaruoka "proper food", as opposed to snacks
- peli "game", pelipeli "complete game", as opposed to a mod
- puhelin "phone", puhelinpuhelin "phone for talking", as opposed to a pocket computer
- kauas "far away", kauaskauas "unquestionably far away"
- koti "home", kotikoti "home of your parents", as opposed to one's current place of residence
These sorts of reduplicative forms, such as "food-food," are not merely literal translations of the Finnish but in fact have some frequency in contemporary English for emphasising, as in Finnish, an "authentic" form of a certain thing. "Food-food" is one of the most common, along with such possibilities as "car-car," to describe a vehicle which is actually a car (a small automobile) and not something else such as a truck, or "house-house," for a stand-alone house structure as opposed to an apartment, for instance.
Reduplication comes after inflection in Finnish. Young adults may ask one another Menetkö kotiin vai kotiinkotiin? "Are you going home or home-home?" The reduplicated home refers to the old home that used to be their home before they moved out to their new home.
In Swiss German, the verbs gah or goh "go", cho "come", la or lo "let" and aafa or aafo "begin" reduplicate when combined with other verbs.
|literal translation:||she||comes||our||Christmas tree||come||adorn|
|translation||She comes to adorn our Christmas tree.|
|translation:||She doesn't let him sleep.|
In some Salishan languages, reduplication is used to mark both diminution and plurality, one process applying to each end of the word, as in the following example from Shuswap. Note that the data was transcribed in a way that is not comparable to the IPA, but the reduplication of both initial and final portions of the root is clear: ṣōk!Emē'’n 'knife' reduplicated as ṣuk!ṣuk!Emen'’me’n 'plural small knives' (Haeberlin 1918:159).
Reduplicative babbling in child language acquisition
During the period 25–50 weeks after birth, all typically developing infants go through a stage of reduplicated or canonical babbling (Stark 1978; Oller 1980). Canonical babbling is characterized by repetition of identical or nearly identical consonant-vowel combinations, such as 'nanana' or 'didididi'. It appears as a progression of language development as infants experiment with their vocal apparatus and home in on the sounds used in their native language. Canonical/reduplicated babbling also appears at a time when general rhythmic behavior, such as rhythmic hand movements and rhythmic kicking, appears. Canonical babbling is distinguished from earlier syllabic and vocal play, which has less structure.
The Proto-Indo-European language used partial reduplication of a consonant and e in many stative aspect verb forms. The perfect or preterite (past) tense of some Ancient Greek, Gothic, and Latin verbs preserves this reduplication:
- λύω lúō 'I free' vs. λέλυκα léluka "I have freed"
- hald "I hold" vs. haíhald (hĕhald) "I/he held"
- currō "I run" vs. cucurrī "I ran" or "have run"
Proto-Indo-European also used reduplication for imperfective aspect. Ancient Greek preserves this reduplication in the present tense of some verbs. Usually, but not always, this is reduplication of a consonant and i, and contrasts with e-reduplication in the perfect:
- δίδωμι dídōmi "I give" (present)
- δέδωκα dédōka "I have given" (perfect)
- *σίσδω sísdō → ἵζω hízō "I set" (present)
- *σέσδομαι sésdomai → ἕζομαι hézomai "I sit down" (present; from sd-, zero-grade of root in *sed-os → ἕδος hédos "seat, abode")
English has several types of reduplication, ranging from informal expressive vocabulary (the first four forms below) to grammatically meaningful forms (the last two below).
- Rhyming reduplication: hokey-pokey, razzle-dazzle, super-duper, boogie-woogie, teenie-weenie, walkie-talkie, wingding. (Although at first glance "Abracadabra" appears to be an English rhyming reduplication, it in fact is not; instead, it is derived from the Aramaic formula "Abəra kaDavəra," meaning "I would create as I spoke.")
- Exact reduplications (baby-talk-like): bye-bye, choo-choo, night-night, no-no, pee-pee, poo-poo. Couscous is not an English example of reduplication, since it is taken from a French word of Maghrebi origin.
- Ablaut reduplications: bric-a-brac, chit-chat, criss-cross, ding-dong, jibber-jabber, kitty-cat, knick-knack, pitter-patter, splish-splash, zig-zag. In the ablaut reduplications, the first vowel is almost always a high vowel and the reduplicated ablaut variant of the vowel is a low vowel.
- Shm-reduplication can be used with most any word; e.g. baby-shmaby, cancer-schmancer and fancy-schmancy. This process is a feature of American English from Yiddish, starting among the American Jews of New York City, then the New York dialect and then the whole country.
Only the last of the above types is productive, meaning that examples of the first three are fixed forms and new forms are not easily accepted.
- Comparative reduplication: In the sentence "John's apple looked redder and redder," the reduplication of the comparative indicates that the comparative is becoming more true over time, meaning roughly "John's apple looked progressively redder as time went on." In particular, this construction does not mean that John's apple is redder than some other apple, which would be a possible interpretation in the absence of reduplication, e.g. in "John's apple looked redder." With reduplication, the comparison is of the object being compared to itself over time. Comparative reduplication always combines the reduplicated comparative with "and". This construction is common in speech and is used even in formal speech settings, but it is less common in formal written texts. Although English has simple constructs with similar meanings, such as "John's apple looked ever redder," these simpler constructs are rarely used in comparison with the reduplicative form. Comparative reduplication is fully productive and clearly changes the meaning of any comparative to a temporal one, despite the absence of any time-related words in the construction. For example, the temporal meaning of "The frug seemed wuggier and wuggier" is clear: Despite not knowing what a frug is or what wugginess is, we know that the apparent wugginess of the frug was increasing over time, as indicated by the reduplication of the comparative "wuggier".
- Contrastive focus reduplication: Exact reduplication can be used with contrastive focus (generally where the first noun is stressed) to indicate a literal, as opposed to figurative, example of a noun, or perhaps a sort of Platonic ideal of the noun, as in "Is that carrot cheesecake or carrot CAKE-cake?". This is similar to the Finnish use mentioned above. An extensive list of such examples is found in Ghomeshi et al. (2004).
More can be learned about English reduplication in Thun (1963), Cooper and Ross (1975), and Nevins and Vaux (2003).
While not common in Dutch, reduplication does exist. Most, but not all (e.g., pipi, blauwblauw (laten), taaitaai (gingerbread)) reduplications in Dutch are loanwords (e.g., koeskoes, bonbon, (ik hoorde het) via via) or imitative (e.g., tamtam, tomtom). Another example is a former safe sex campaign slogan in Flanders: Eerst bla-bla, dan boem-boem (First talk, then have sex). In Dutch the verb "gaan" (to go) can be used as an auxiliary verb, which can lead to a triplication: we gaan (eens) gaan gaan (we are going to get going). The use of gaan as an auxiliary verb with itself is considered incorrect, but is commonly used in Flanders. Numerous examples of reduplication in Dutch (and other languages) are discussed by Daniëls (2000).
Afrikaans regularly utilizes reduplication to emphasize the meaning of the word repeated. For example, krap means "to scratch one's self," while krap-krap-krap means "to scratch one's self vigorously." Reduplication in Afrikaans has been described extensively in the literature; see, for example, Botha (1988), Van Huyssteen (2004) and Van Huyssteen & Wissing (2007). Further examples of this include: "koes" (to dodge) being reduplicated in the sentence "Piet hardloop koes-koes weg" (Piet is running away while constantly dodging / cringing); "sukkel" (to struggle) becoming "sukkel-sukkel" (making slow progress; struggling on); and "kierang" (to cheat) becoming "kierang-kierang" to indicate being cheated on repeatedly.
In Italian, reduplication was used both to create new words or word associations (tran-tran, via via, leccalecca) and to intensify the meaning (corri!, corri! "run!, run!").
Common in Lingua Franca, particularly but not exclusively for onomatopoeic action descriptions: "Spagnoli venir...boum boum...andar; Inglis venir...boum boum bezef...andar; Francés venir...tru tru tru...chapar." ("The Spaniards came, cannonaded, and left. The English came, cannonaded heavily, and left. The French came, trumpeted on bugles, and captured it.")
Common uses for reduplication in French are the creation of hypocoristics for names, whereby Louise becomes "Loulou", and Zinedine Zidane becomes Zizou; and in many infantile words, like dada, 'horse' (standard cheval), tati, 'aunt' (standard tante), or tonton, 'uncle' (standard oncle).
- Romanian: mormăi, ţurţur, dârdâi, expressions talmeş-balmeş, harcea-parcea, terchea-berchea, ţac-pac, calea-valea, hodoronc-tronc, and recent slang, trendy-flendy.
- Catalan: balandrim-balandram, baliga-balaga, banzim-banzam, barliqui-barloqui, barrija-barreja, bitllo-bitllo, bub-bub, bum-bum, but-but, catric-catrac, cloc-cloc, cloc-piu, corre-corrents, de nyigui-nyogui, farrigo-farrago, flist-flast, fru-fru, gara-gara, gloc-gloc, gori-gori, leri-leri, nap-buf, ning-nang, ning-ning, non-non, nyam-nyam, nyau-nyau, nyec-nyec, nyeu-nyeu, nyic-nyic, nyigo-nyigo, nyigui-nyogui, passa-passa, pengim-penjam, pif-paf, ping-pong, piu-piu, poti-poti, rau-rau, ringo-rango, rum-rum, taf-taf, tam-tam, tau-tau, tic-tac, tol·le-tol·le, tric-trac, trip-trap, tris-tras, viu-viu, xano-xano, xau-xau, xerric-xerrac, xim-xim, xino-xano, xip-xap, xiu-xiu, xup-xup, zig-zag, ziga-zaga, zim-zam, zing-zing, zub-zub, zum-zum.
In colloquial Mexican Spanish, the use of reduplicated adverbs is common, such as luego luego (after after), meaning "immediately", or casi casi (almost almost), which intensifies the meaning of "almost".
Slavic languages
The reduplication in the Russian language serves for various kinds of intensifying of the meaning and exists in several forms: a hyphenated or repeated word (either exact or inflected reduplication), and forms similar to shm-reduplication.
Reduplication is a very common practice in Persian, to the extent that there are jokes about it. Mainly due to the mixed nature of the Persian language, most of the reduplication comes in the form of a phrase consisting of a Persian word -va- (and) and an Arabic word, like "Taghdir-Maghdir". Reduplication is particularly common in the city of Shiraz in southwestern Iran. One can further categorize the reduplicative words into "true" and "quasi" ones. In true reduplicative words, both words are actually real words and have meaning in the language in which they are used. In quasi-reduplicative words, at least one of the words does not have a meaning. Some examples of true reduplicative words in Persian are: "Xert-o-Pert" (odds and ends); "Čert-o-Pert" (nonsense); "Čarand-o-Parand" (nonsense); "Āb-o-Tāb" (much detail). Among the quasi-reduplicative words are "Zan-o-man" (wife); "Davā-Mavā" (argument); "Talā-malā" (jewelry); and "Raxt-o-Paxt" (items of clothing). In general, reduplication in Persian is mainly a mockery of words with non-Persian origins.
Indo-Aryan (and Dravidian) languages
Typically all Indo-Aryan languages, like Hindi, Punjabi, Gujarati and Bengali, use reduplication in some form or other. It is usually used to sound casual, or in a suggestive manner. It is often used to mean et cetera. For example, in Hindi, chai-shai (chai means tea, while this phrase means tea or any other supplementary drink, or tea along with snacks). Quite common in casual conversations are a few more examples like shopping-wopping and khana-wana. Reduplication is also used in Dravidian languages like Telugu for the same purpose.
A number of Nepalese nouns are formed by reduplication. As in other languages, the meaning is not that of a true plural, but collectives that refer to a set of the same or related objects, often in a particular situation.
For example, "rangi changi"* describes an object that is extremely or vividly colorful, like a crazy mix of colors and/or patterns, perhaps dizzying to the eye. The phrase "hina mina" means "scattered," like a large collection of objects spilled (or scampering, as in small animals) in all different directions. The basic Nepalese word for food, "khana" becomes "khana sana" to refer to the broad generality of anything served at a meal. Likewise, "chiya" or tea (conventionally made with milk and sugar) becomes "chiya siya": tea and snacks (such as biscuits or cookies). *Please note, these examples of Nepalese words are spelled with a simplified Latin transliteration only, not as exact spellings.
In Turkish, a word can be reduplicated while replacing the initial consonants (not being m, and possibly missing) with m. The effect is that the meaning of the original word is broadened. For example, tabak means "plate(s)", and tabak mabak then means "plates, dishes and such". This can be applied not only to nouns but to all kinds of words, as in yeşil meşil meaning "green, greenish, whatever". Although not used in formal written Turkish, it is a completely standard and fully accepted construction.
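A rough R sketch of that rule follows; the vowel inventory and the handling of vowel-initial words are simplifying assumptions:

m_echo <- function(word) {
  rest <- sub("^[^aeıioöuü]+", "", word)  # strip any initial consonants (Turkish vowels assumed)
  paste(word, paste0("m", rest))          # original word followed by the m-initial echo
}
m_echo("tabak")  # "tabak mabak"
m_echo("yeşil")  # "yeşil meşil"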
Reduplication is commonly used only with suurensuuri 'big of big', pienenpieni 'small of small' and hienonhieno 'fine of fine', but other adjectives may sometimes be duplicated as well, where a superlative is too strong an expression, somewhat similarly to Slavic languages. The structure may also be written separately as genitive + nominative, which may create confusion on occasion (e.g. suurensuuri jalka 'big of big foot' vs. suuren suuri jalka 'big foot of a big one').
Reduplication is usually rhyming. It can add emphasis: 'pici' (tiny) -> ici-pici (very tiny), and it can modify meaning: 'néha-néha' ('seldom-seldom': seldom but repeatedly), 'erre-arra' ('this way-that way', meaning movement without a definite direction), 'ezt-azt' ('this-that', meaning 'all sorts of things'). Reduplication often evokes a sense of playfulness and is quite common when talking to small children.
Bantu languages
- Swahili piga 'to strike'; pigapiga 'to strike repeatedly'
- Ganda okukuba (oku-kuba) 'to strike'; okukubaakuba (oku-kuba-kuba) 'to strike repeatedly, to batter'
- Chewa tambalalá 'to stretch one's legs'; tambalalá-tambalalá to stretch one's legs repeatedly'
Popular names that have reduplication include
Semitic languages frequently reduplicate consonants, though often not the vowels that appear next to the consonants in some verb form. This can take the shape of reduplicating the antepenultimate consonant (usually the second of three), the last of two consonants, or the last two consonants.
In Hebrew, reduplication is used in nouns and adjectives. For stress, as in גבר גבר (gever gever), where the noun גבר 'man' is duplicated to mean a manly man, a man among men; or as in לאט לאט (le-aht le-aht), where the adverb לאט 'slowly' is duplicated to mean very slowly.
It can also mean 'every', as in יום יום (yom yom), where the noun יום 'day' is duplicated to mean every day, day in day out, day by day.
Some nouns and adjectives can also be made into diminutives by reduplication of the last two consonants (biconsonantal reduplication), e.g.
- כלב (Kelev) = Dog
- כלבלב (Klavlav) = Puppy
- חתול (Chatul) = Cat
- חתלתול (Chataltul) = Kitten
- לבן (Lavan) = White
- לבנבן (Levanban) = Whitish
- קטן (Katan) = Small
- קטנטן (Ktantan) = Tiny
Reduplication in Hebrew is also productive for the creation of verbs, by reduplicating the root or part of it e.g.:
dal (דל) 'poor,spare' > dilel (דלל) 'to dilute' but also dildel (דלדל) 'to impoverish, to weaken'; nad (נד) 'to move, to nod' > nadad (נדד) 'to wander' but also nidned (נדנד) 'to swing, to nag'.
In Amharic, verb roots can be reduplicated three different ways. These can result in verbs, nouns, or adjectives (which are often derived from verbs).
From the root sbr 'break', antepenultimate reduplication produces täsäbabbärä 'it was shattered' and biconsonantal reduplication produces täsbäräbbärä 'it was shattered repeatedly' and səbərbari 'a shard, a shattered piece'.
From the root kHb 'pile stones into a wall', since the second radical is not fully specified, what some call "hollow", the antepenultimate reduplication process reduplicates the k, which is by some criteria antepenultimate, and produces akakabä 'pile stones repeatedly'.
In Burmese, reduplication is used in verbs and adjectives to form adverbs. Many Burmese words, especially adjectives such as လှပ ('beautiful' [l̥a̰pa̰]), which consist of two syllables (when reduplicated, each syllable is reduplicated separately), when reduplicated (လှပ → လှလှပပ 'beautifully' [l̥a̰l̥a̰ pa̰pa̰]) become adverbs. This is also true of many Burmese verbs, which become adverbs when reduplicated.
Some nouns are also reduplicated to indicate plurality. For instance, ပြည်, means "country," but when reduplicated to အပြည်ပြည်, it means "many countries" (as in အပြည်ပြည်ဆိုင်ရာ, "international"). Another example is အမျိုး, which means "kinds," but the reduplicated form အမျိုးမျိုး means "multiple kinds."
A few measure words can also be reduplicated to indicate "one or the other":
- ယောက် (measure word for people) → တစ်ယောက်ယောက် (someone)
- ခု (measure word for things) → တစ်ခုခု (something)
Adjective reduplication is common in Standard Chinese, typically denoting emphasis, less acute degree of the quality described, or an attempt at more indirect speech: xiǎoxiǎo de 小小的 (small), chòuchòu de 臭臭的 (smelly) (this can also reflect a "cute", juvenile or informal register). In the case of adjectives composed of two characters (morphemes), generally each of the two characters is reduplicated separately: piàoliang 漂亮 (beautiful) reduplicates as piàopiàoliangliang 漂漂亮亮.
Verb reduplication is also common in Standard Chinese, conveying the meaning of informal and temporary character of the action. It is often used in imperative expressions, in which it lessens the degree of imperativity: zuòzuò 坐坐 (sit (for a while)), děngděng 等等 (wait (for a while)). Compound verbs are reduplicated as a whole word: xiūxixiūxi 休息休息 (rest (for a while)). This can be analyzed as an instance of omission of "一" (originally, e.g., "坐一坐" or "等一等" ) or "一下" (originally, e.g., "坐一下").
Noun reduplication, though nearly absent in Standard Chinese, is found in the southwestern dialect of Mandarin. For instance, in Sichuan Mandarin, bāobāo 包包 (handbag) is used, whereas Beijing Mandarin uses bāor 包儿 (one exception is the colloquial use of bāobāo 包包 by non-Sichuan Mandarin speakers to reflect a perceived fancy or attractive purse). However, there are few nouns that can be reduplicated in Standard Chinese, and reduplication denotes generalisation and uniformity: rén 人 (human being) and rénrén 人人 (everybody (in general, in common)), jiājiāhùhù 家家户户 (every household (uniformly)) - in the latter, jiā and hù additionally duplicate the meaning of household, which is a common way of creating compound words in Chinese.
A small number of native Japanese nouns have collective forms produced by reduplication (possibly with rendaku), such as 人々 hitobito "people" (h → b is rendaku) – these are written with the iteration mark "々" to indicate duplication. This formation is not productive and is limited to a small set of nouns. Similarly to Standard Chinese, the meaning is not that of a true plural, but collectives that refer to a large, given set of the same object; for example, the formal English equivalent of 人々 would be "people" (collective), rather than "persons" (plural individuals).
Japanese also contains a large number of mimetic words formed by reduplication of a syllable. These words include not only onomatopoeia, but also words intended to invoke non-auditory senses or psychological states. By one count, approximately 43% of Japanese mimetic words are formed by full reduplication, and many others are formed by partial reduplication, as in がささ〜 ga-sa-sa- (rustling) – compare English "a-ha-ha-ha".
Words called từ láy are found abundantly in Vietnamese. They are formed by repeating a part of a word to form new words, altering the meaning of the original word. Its effect is to sometimes either increase or decrease the intensity of the adjective, and is often used as a literary device (like alliteration) in poetry and other compositions, as well as in everyday speech.
Examples of reduplication increasing intensity:
- đau → đau điếng: hurt → hurt horribly
- mạnh → mạnh mẽ: strong → very strong
- rực → rực rỡ: flaring → blazing
Examples of reduplication decreasing intensity:
- nhẹ → nhè nhẹ: soft → soft (less)
- xinh → xinh xinh: pretty → cute
- đỏ → đo đỏ: red → somewhat red
- xanh → xanh xanh: blue/green → somewhat blue/green
Examples of blunt sounds or physical conditions:
- loảng xoảng — sound of glass breaking to pieces or metallic objects falling to the ground
- hớt hơ hớt hải- (also hớt ha hớt hải) — hard gasps -> in extreme hurry, in panic, panic-stricken
- lục đục — the sound of hard, blunt (and likely wooden) objects hitting against each other -> disagreements and conflicts inside a group or an organisation
Khmer uses reduplication for several purposes, including emphasis and pluralization. Reduplication in Khmer, like many Mon–Khmer languages, can express complex thoughts. Khmer also uses a form of reduplication known as "synonym compounding", in which two phonologically distinct words with similar or identical meanings are combined, either to form the same term or to form a new term altogether.
The wide use of reduplication is certainly one of the most prominent grammatical features of Indonesian and Malay (as well as of other South-East Asian and Austronesian languages).
Malay and Indonesian
In Malay and Indonesian, reduplication is a very productive process. It is used for expression of various grammatical functions (such as verbal aspect) and it is part in a number of complex morphological models. Simple reduplication of nouns and pronouns can express at least 3 meanings:
- Diversity or non-exhaustive plurality:
- Burung-burung itu juga diekspor ke luar negeri = "All those birds are also exported out of the country".
- Conceptual similarity:
- langit-langit = "ceiling; palate; etc." < langit = "sky";
- jari-jari = "spoke; bar; radius; etc." < jari = "finger" etc.
- Pragmatic accentuation:
- Saya bukan anak-anak lagi! "I am not a child anymore!" (anak = "child")
Reduplication of an adjective can express different things:
- Adverbialisation: Jangan bicara keras-keras! = "Don't speak loudly!" (keras = hard)
- Plurality of the corresponding noun: Rumah di sini besar-besar = "The houses here are big" (besar = "big").
Reduplication of a verb can express various things:
- Simple reduplication:
- Pragmatic accentuation: Kenapa orang tidak datang-datang? = "Why aren't people coming?"
- Reduplication with me- prefixation, depending on the position of the prefix me-:
- Repetition or continuation of the action: Orang itu memukul-mukul anaknya: "That man continuously beat his child";
- Reciprocity: Kedua orang itu pukul-memukul = "Those two men would beat each other".
Notice that in the first case, the nasalisation of the initial consonant (whereby /p/ becomes /m/) is repeated, while in the second case, it only applies in the repeated word.
Reduplication can convey a simple plural meaning, for instance wahine "woman", waahine "women"; tangata "person", taangata "people". Biggs calls this "infixed reduplication". It occurs in a small subset of words referring to people in most Polynesian languages.
Reduplication can convey emphasis or repetition, for example mate "die", matemate "die in numbers"; and de-emphasis, for example wera "hot" and werawera "warm".
Reduplication can also extend the meaning of a word; for instance paki "pat" becomes papaki "slap or clap once" and pakipaki "applaud"; kimo "blink" becomes kikimo "close eyes firmly".
In Japanese, the imperative oit'oite ('leave behind') of the compound verb oitoku is pseudo-reduplication. It appears to be 'oit' repeated, especially when spoken quickly, but the root is 'oite' ('leave') + 'oku' ('to place something'). The 'oit' sound is therefore repeated twice by chance of placement, not by true reduplication (the two parts have different meanings).
Australian Aboriginal languages
Reduplication is common in many Australian place names due to their Aboriginal origins. Examples: Turramurra, Parramatta, Woolloomooloo. In the language of the Wiradjuri people of south-eastern Australia, plurals are formed by doubling a word; hence 'Wagga', meaning crow, becomes Wagga Wagga, meaning 'place of many crows'. This occurs in other place names deriving from the Wiradjuri language, including Gumly Gumly, Grong Grong and Book Book.
See also
- Language acquisition
- Syntactic doubling
- For an example of a language with many types of reduplication see: St'at'imcets language#Reduplication.
- Word word
- List of people with reduplicated names
- Pratt, George (1984) . A Grammar and Dictionary of the Samoan Language, with English and Samoan vocabulary (3rd and revised ed.). Papakura, New Zealand: R. McMillan. ISBN 0-908712-09-X. Retrieved 8 June 2010.
- The Malay Spelling Reform, Asmah Haji Omar, (Journal of the Simplified Spelling Society, 1989-2 pp.9-13 later designated J11)
- Jila Ghomeshi, Ray Jackendoff, Nicole Rosen, and Kevin Russell (2004). "Contrastive focus reduplication in English (the Salad-Salad paper)". Natural Language & Linguistic Theory 22 (2): 307–357. doi:10.1023/B:NALA.0000015789.98638.f9.
- A Glossary of Lingua Franca, 5th ed.
- Peter Unseth. 2003. Surveying bi-consonantal reduplication in Semitic. In Selected Comparative-Historical Afrasian Linguistic Studies in Memory of Igor M. Diakonoff, ed. by M. Lionel Bender, 257-273. Munich: Lincom Europa.
- p. 1029. Wolf Leslau. 1995. Reference Grammar of Amharic. Wiesbaden: Harrassowitz.
- Peter Unseth. 2002. Biconsonantal reduplication in Amharic. Doctoral dissertation, University of Texas at Arlington.
- p. 1035. Wolf Leslau. 1995. Reference Grammar of Amharic. Wiesbaden: Harrassowitz.
- Tamamura, Fumio. 1979. Nihongo to chuugokugo ni okeru onshoochoogo [Sound-symbolic words in Japanese and Chinese]. Ootani Joshidai Kokubun 9:208-216.
- Tamamura, Fumio. 1989. Gokei [Word forms]. In Kooza nihongo to nihongo kyooiku 6, ed. Fumio Tamamura, 23-51. Tokyo: Meiji Shoin.
- Reduplicants and Prefixes in Japanese Onomatopoeia, Akio Nasu
- Yury A. Lande, "Nominal reduplication in Indonesian challenging the theory of grammatical change", International Symposium on Malay/Indonesian Linguistics, Nijmegen, The Netherlands, 27–29 June 2003.
- Biggs, Bruce, 1998. Let's learn Maori: a guide to the study of the Maori language. Auckland: Auckland University Press, p137.
- Abraham, Roy. (1964). Somali-English dictionary. London, England: University of London Press.
- Albright, Adam. (2002). A restricted model of UR discovery: Evidence from Lakhota. (Draft version).
- Alderete, John; Benua, Laura; Gnanadesikan, Amalia E.; Beckman, Jill N.; McCarthy, John J.; and Urbanczyk, Suzanne. (1999). Reduplication with fixed segmentism. Linguistic Inquiry, 30, 327-364. (Online version ROA 226-1097).
- Botha, Rudi P. (1988). Form and meaning in word formation : a study of Afrikaans reduplication. Cambridge: Cambridge University Press.
- Broselow, Ellen; and McCarthy, John J. (1984). A theory of internal reduplication. The linguistic review, 3, 25-88.
- Cooper, William E.; and Ross, "Háj" John R. (1975). World order. In R. E. Grossman, L. J. San, and T. J. Vance (Eds.), Papers from the parasession on functionalism (pp. 63–111). Chicago, IL: Chicago Linguistic Society.
- Dayley, Jon P. (1985). Tzutujil grammar. Berkeley, CA: University of California Press.
- Diffloth, Gérald. (1973). Expressives in Semai. In P. N. Jenner, L. C. Thompson, and S. Starsota (Eds.), Austroasiatic studies part I (pp. 249–264). University Press of Hawaii.
- Fabricius, Anne H. (2006). A comparative survey of reduplication in Australian languages. LINCOM Studies in Australian Languages (No. 03). Lincom. ISBN 3-89586-531-1.
- Haeberlin, Herman. (1918). “Types of Reduplication in Salish Dialects.” International Journal of American Linguistics 1: 154-174.
- Haugen, Jason D. (forthcoming). Reduplicative allomorphy and language prehistory in Uto-Aztecan. (Paper presented at Graz Reduplication Conference 2002, November 3–6).
- Harlow, Ray. (2007) Māori: a linguistic introduction Cambridge University Press. ISBN 978-0-521-80861-3. 127-129
- Healey, Phyllis M. (1960). An Agta grammar. Manila: The Institute of National Language and The Summer Institute of Linguistics.
- Hurch, Bernhard (Ed.). (2005). Studies on reduplication. Empirical approaches to language typology (No. 28). Mouton de Gruyter. ISBN 3-11-018119-3.
- Inkelas, Sharon; & Zoll, Cheryl. (2005). Reduplication: Doubling in morphology. Cambridge studies in linguistics (No. 106). Cambridge University Press. ISBN 0-521-80649-6.
- Key, Harold. (1965). Some semantic functions of reduplication in various languages. Anthropological Linguistics, 7(3), 88-101.
- Marantz, Alec. (1982). Re reduplication. Linguistic Inquiry 13: 435-482.
- McCarthy, John J. and Alan S. Prince. (1986). Prosodic morphology 1986. Technical report #32. Rutgers University Center for Cognitive Science. (Unpublished revised version of the 1986 paper available online on McCarthy's website: http://ruccs.rutgers.edu/pub/papers/pm86all.pdf).
- McCarthy, John J.; and Prince, Alan S. (1995). Faithfulness and reduplicative identity. In J. Beckman, S. Urbanczyk, and L. W. Dickey (Eds.), University of Massachusetts occasional papers in linguistics 18: Papers in optimality theory (pp. 249–384). Amherst, MA: Graduate Linguistics Students Association. (Available online on the Rutgers Optimality Archive website: http://roa.rutgers.edu/view.php3?id=568).
- McCarthy, John J.; and Prince, Alan S. (1999). Faithfulness and identity in prosodic morphology. In R. Kager, H. van der Hulst, and W. Zonneveld (Eds.), The prosody morphology interface (pp. 218–309). Cambridge: Cambridge University Press. (Available online on the Rutgers Optimality Archive website: http://roa.rutgers.edu/view.php3?id=562).
- Moravcsik, Edith. (1978). Reduplicative constructions. In J. H. Greenberg (Ed.), Universals of human language: Word structure (Vol. 3, pp. 297–334). Stanford, CA: Stanford University Press.
- Nevins, Andrew; and Vaux, Bert. (2003). Metalinguistic, shmetalinguistic: The phonology of shm-reduplication. (Presented at the Chicago Linguistics Society, April 2003). (Online version: http://ling.auf.net/lingbuzz/@qclBWVDkyQupkDAI/yuTibEgY?78).
- Oller, D. Kimbrough. 1980. The emergence of the sounds of speech in infancy, in Child Phonology Vol. I, edited by G. H. Yeni-Komshian, J. F. Kavanaugh, and C. A. Ferguson. Academic Press, New York. pp. 93–112.
- Raimy, Eric. (2000). Remarks on backcopying. Linguistic Inquiry 31:541-552.
- Rehg, Kenneth L. (1981). Ponapean reference grammar. Honolulu: The University Press of Hawaii.
- Reichard, Gladys A. (1959). A comparison of five Salish languages. International Journal of American Linguistics, 25, 239-253.
- Shaw, Patricia A. (1980). Theoretical Issues in Dakota Phonology and Morphology. Garland Publ: New York. pp. ix + 396.
- Shaw, Patricia A. (2004). Reduplicant order and identity: Never trust a Salish CVC either?. In D. Gerdts and L. Matthewson (Eds.), Studies in Salish linguistics in honor of M. Dale Kinkade. University of Montana Occasional Papers in Linguistics (Vol. 17). Missoula, MT: University of Montana.
- Stark, Rachel E. (1978). Features of infant sounds: The emergence of cooing. Journal of Child Language, 5(3), 379-390.
- Thun, Nils. (1963). Reduplicative words in English: A study of formations of the types tick-tock, hurly-burly, and shilly-shally. Uppsala.
- Van Huyssteen, Gerhard B. (2004). Motivating the composition of Afrikaans reduplications: a cognitive grammar analysis. In: Radden, G & Panther, K-U. (eds.). Studies in Linguistic Motivation. ISBN 3-11-018245-9. Berlin: Mouton de Gruyter. pp. 269–292.
- Van Huyssteen, Gerhard B and Wissing, Daan P. (2007). Datagebaseerde Aspekte van Afrikaanse Reduplikasies. [Data-based Aspects of Afrikaans Reduplications]. Southern African Linguistics and Applied Language Studies. 25(3): 419–439.
- Watters, David E. (2002). A grammar of Kham. Cambridge grammatical descriptions. Cambridge: Cambridge University Press. ISBN 0-521-81245-3.
- Wilbur, Ronnie B. (1973). The phonology of reduplication. Doctoral dissertation, University of Illinois. (Also published by Indiana University Linguistics Club in 1973, republished 1997.)
- Reduplication (Lexicon of Linguistics)
- What is reduplication? (SIL)
- Echo-Word Reduplication Lexicon
- Exhaustive list of reduplications in English
- List of contrastive focus reduplications in English
- graz database on reduplication (gdr) Institute of Linguistics, University of Graz
- La réduplication à m dans l’arabe parlé à Mardin | http://en.wikipedia.org/wiki/Reduplication | 13 |
13 | The Digestive System - Design: parts of the digestive system
The digestive system may be broken into two parts: a long, winding, muscular tube accompanied by accessory digestive organs and glands. That open-ended tube, known as the alimentary canal or digestive tract, is composed of various organs. These organs are, in order, the mouth, pharynx, esophagus, stomach, small intestine, and large intestine. The rectum and anus form the end of the large intestine. The accessory digestive organs and glands that help in the digestive process include the tongue, teeth, salivary glands, pancreas, liver, and gall bladder.
The walls of the alimentary canal from the esophagus through the large intestine are made up of four tissue layers. The innermost layer is the mucosa, coated with mucus. This protects the alimentary canal from chemicals and enzymes (proteins that speed up the rate of chemical reactions) that break down food and from germs and parasites that might be in that food. Around the mucosa is the submucosa, which contains blood vessels, nerves, and lymph vessels. Wrapped around the submucosa are two layers of muscles that help move food along the canal. The outermost layer, the serosa, is moist, fibrous tissue that protects the alimentary canal and helps it move against the surrounding organs in the body.
Food enters the body through the mouth, or oral cavity. The lips form and protect the opening of the mouth, the cheeks form its sides, the tongue forms its floor, and the hard and soft palates form its roof. The hard palate is at the front; the soft palate is in the rear. Attached to the soft palate is a fleshy, fingerlike projection called the uvula (from the Latin word meaning "little grape"). Two U-shaped rows of teeth line the mouth—one above and one below. Three pairs of salivary glands open at various points into the mouth.
- Alimentary canal (al-i-MEN-tah-ree ka-NAL):
- Also known as the digestive tract, the series of muscular structures through which food passes while being converted to nutrients and waste products; includes the oral cavity, pharynx, esophagus, stomach, small intestine, and large intestine.
- Amylase (am-i-LACE):
- Any of various digestive enzymes that convert starches to sugars.
- Appendix (ah-PEN-dix):
- Small, apparently useless organ extending from the cecum.
- Greenish yellow liquid produced by the liver that neutralizes acids and emulsifies fats in the duodenum.
- Bolus (BO-lus):
- Rounded mass of food prepared by the mouth for swallowing.
- Cecum (SEE-kum):
- Blind pouch at the beginning of the large intestine.
- Chyle (KILE):
- Thick, whitish liquid consisting of lymph and tiny fat globules absorbed from the small intestine during digestion.
- Chyme (KIME):
- Soupylike mixture of partially digested food and stomach secretions.
- Colon (KOH-lun):
- Largest region of the large intestine, divided into four sections: ascending, transverse, descending, and sigmoid (colon is sometimes used to describe the entire large intestine).
- Colostomy (kuh-LAS-tuh-mee):
- Surgical procedure where a portion of the large intestine is brought through the abdominal wall and attached to a bag to collect feces.
- Defecation (def-e-KAY-shun):
- Elimination of feces from the large intestine through the anus.
- Dentin (DEN-tin):
- Bonelike material underneath the enamel of teeth, forming the main part.
- Duodenum (doo-o-DEE-num or doo-AH-de-num):
- First section of the small intestine.
- Emulsify (e-MULL-si-fie):
- To break down large fat globules into smaller droplets that stay suspended in water.
- Enamel (e-NAM-el):
- Whitish, hard, glossy outer layer of teeth.
- Enzymes (EN-zimes):
- Proteins that speed up the rate of chemical reactions.
- Epiglottis (ep-i-GLAH-tis):
- Flaplike piece of tissue at the top of the larynx that covers its opening when swallowing is occurring.
- Esophagus (e-SOF-ah-gus):
- Muscular tube connecting the pharynx and stomach.
- Feces (FEE-seez):
- Solid body wastes formed in the large intestine.
- Flatus (FLAY-tus):
- Gas generated by bacteria in the large intestine.
- Gastric juice (GAS-trick JOOSE):
- Secretion of the gastric glands of the stomach, containing hydrochloric acid, pepsin, and mucus.
- Ileocecal valve (ill-ee-oh-SEE-kal VALV):
- Sphincter or ring of muscle that controls the flow of chyme from the ileum to the large intestine.
- Ileum (ILL-ee-um):
- Final section of the small intestine.
- Jejunum (je-JOO-num):
- Middle section of the small intestine.
- Lacteals (LAK-tee-als):
- Specialized lymph capillaries in the villi of the small intestine.
- Larynx (LAR-ingks):
- Organ between the pharynx and trachea that contains the vocal cords.
- Lipase (LIE-pace):
- Digestive enzyme that converts lipids (fats) into fatty acids.
- Lower esophageal sphincter (LOW-er i-sof-ah-GEE-al SFINGK-ter):
- Strong ring of muscle at the base of the esophagus that contracts to prevent stomach contents from moving back into the esophagus.
- Palate (PAL-uht):
- Roof of the mouth, divided into hard and soft portions, that separates the mouth from the nasal cavities.
- Papillae (pah-PILL-ee):
- Small projections on the upper surface of the tongue that contain taste buds.
- Peristalsis (per-i-STALL-sis):
- Series of wavelike muscular contractions that move material in one direction through a hollow organ.
- Pharynx (FAR-inks):
- Short, muscular tube extending from the mouth and nasal cavities to the trachea and esophagus.
- Plaque (PLACK):
- Sticky, whitish film on teeth formed by a protein in saliva and sugary substances in the mouth.
- Pyloric sphincter (pie-LOR-ick SFINGK-ter):
- Strong ring of muscle at the junction of the stomach and the small intestine that regulates the flow of material between them.
- Rugae (ROO-jee):
- Folds of the inner mucous membrane of organs, such as the stomach, that allow those organs to expand.
- Trypsin (TRIP-sin):
- Digestive enzyme that converts proteins into amino acids; inactive form is trypsinogen.
- Uvula (U-vue-lah):
- Fleshy projection hanging from the soft palate that raises to close off the nasal passages during swallowing.
- Vestigial organ (ves-TIJ-ee-al OR-gan):
- Organ that is reduced in size and function when compared with that of evolutionary ancestors.
- Villi (VILL-eye):
- Tiny, fingerlike projections on the inner lining of the small intestine that increase the rate of nutrient absorption by greatly increasing the intestine's surface area.
THE TONGUE. The muscular tongue is attached to the base of the mouth by a fold of mucous membrane. On the upper surface of the tongue are small projections called papillae, many of which contain taste buds (for a discussion of taste, see chapter 12). Most of the tongue lies within the mouth, but its base extends into the pharynx. Located at the base of the tongue are the lingual tonsils, small masses of lymphatic tissue that serve to prevent infection.
TEETH. Humans have two sets of teeth: deciduous and permanent. The deciduous teeth (also known as baby or milk teeth) start to erupt through the gums in the mouth when a child is about six months old. By the age of two, the full set of twenty teeth has developed. Between the ages of six and twelve, the roots of these teeth are reabsorbed into the body and the teeth begin to fall out. They are quickly replaced by the thirty-two permanent adult teeth. (The third molars, the wisdom teeth, may not erupt because of inadequate space in the jaw. In such cases, they become impacted or embedded in the jawbone and must be removed surgically.)
Teeth are classified according to shape and function. Incisors, the chisel-shaped front teeth, are used for cutting. Cuspids or canines, the pointed teeth next to the incisors, are used for tearing or piercing. Bicuspids (or premolars) and molars, the back teeth with flattened tops and rounded, raised tips, are used for grinding.
Each tooth consists of two major portions: the crown and the root. The crown is the exposed part of the tooth above the gum line; the root is enclosed in a socket in the jaw. The outermost layer of the crown is the whitish enamel. Made mainly of calcium, enamel is the hardest substance in the body.
Underneath the enamel is a yellowish, bonelike material called dentin. It forms the bulk of the tooth. Within the dentin is the pulp cavity, which receives blood vessels and nerves through a narrow root canal at the base of the tooth.
THE SALIVARY GLANDS. Three pairs of salivary glands produce saliva on a continuous basis to keep the mouth and throat moist. The largest pair, the parotid glands, are located just below and in front of the ears. The next largest pair, the submaxillary or submandibular glands, are located in the lower jaw. The smallest pair, the sublingual glands, are located under the tongue.
Ivan Petrovich Pavlov (1849–1936) was a Russian physiologist (a person who studies the physical and chemical processes of living organisms) who conducted pioneering research into the digestive activities of mammals. His now-famous experiments with a dog ("Pavlov's dog") to show how the central nervous system affects digestion earned him the Nobel Prize for Medicine or Physiology in 1904.
Interested in the actions of digestion and gland secretion, Pavlov set up an ingenious experiment. In a laboratory, he severed a dog's throat (Pavlov was a skillful surgeon and the animal was unharmed). When the dog ate food, the food dropped out of the animal's throat before reaching its stomach. Through this simulated feeding, Pavlov discovered that the sight, smell, and swallowing of food was enough to cause the secretion of gastric juice. He demonstrated that the stimulation of the vagus nerve (one of the major nerves of the brain) influences the actions of the gastric glands.
In another famous study, Pavlov set out to determine whether he could turn unconditioned (naturally occurring) reflexes or responses of the central nervous system into conditioned (learned) reflexes. He had noticed that laboratory dogs would sometimes salivate merely at the approach of lab assistants who fed them. Pavlov then decided to ring a bell each time a dog was given food. After a while, he rang the bell without feeding the dog. He discovered that the dog salivated at the sound of the bell, even though food was not present. Through this experiment, Pavlov demonstrated that unconditioned reflexes (salivation and gastric activity) could become conditioned reflexes that were triggered by a stimulus (the bell) that previously had no connection with the event (eating).
Ducts or tiny tubes carry saliva from these glands into the mouth. Ducts from the parotid glands open into the upper portion of the mouth; ducts from the submaxillary and sublingual glands open into the mouth beneath the tongue.
The salivary glands are controlled by the autonomic nervous system, a division of the nervous system that functions involuntarily (meaning the processes it controls occur without conscious effort on the part of an individual). The glands produce between 1.1 and 1.6 quarts (1 and 1.5 liters) of saliva each day. Although the flow is continuous, the amount varies. Food (or anything else) in the mouth increases the amount produced. Even the sight or smell of food will increase the flow.
Saliva is mostly water (about 99 percent), with waste products, antibodies, and enzymes making up the small remaining portion. At mealtimes, saliva contains large quantities of digestive enzymes that help break down food. Saliva also controls the temperature of food (cooling it down or warming it up), cleans surfaces in the mouth, and kills certain bacteria present in the mouth.
The pharynx, or throat, is a short, muscular tube extending about 5 inches (12.7 centimeters) from the mouth and nasal cavities to the esophagus and trachea (windpipe). It serves two separate systems: the digestive system (by allowing the passage of solid food and liquids) and the respiratory system (by allowing the passage of air).
The esophagus, sometimes referred to as the gullet, is the muscular tube connecting the pharynx and stomach. It is approximately 10 inches (25 centimeters) in length and 1 inch (2.5 centimeters) in diameter. In the thorax (area of the body between the neck and the abdomen), the esophagus lies behind the trachea. At the base of the esophagus, where it connects with the stomach, is a strong ring of muscle called the lower esophageal sphincter. Normally, this circular muscle is contracted, preventing contents in the stomach from moving back into the esophagus.
The stomach is located on the left side of the abdominal cavity just under the diaphragm (a membrane of muscle separating the chest cavity from the abdominal cavity). When empty, the stomach is shaped like the letter J and its inner walls are drawn up into long, soft folds called rugae. When the stomach expands, the rugae flatten out and disappear. This allows the average adult stomach to hold as much as 1.6 quarts (1.5 liters) of material.
The dome-shaped portion of the stomach to the left of the lower esophageal sphincter is the fundus. The large central portion of the stomach is the body. The part of the stomach connected to the small intestine (the curve of the J) is the pylorus. The pyloric sphincter is a muscular ring that regulates the flow of material from the stomach into the small intestine by variously opening and contracting. That material, a soupy mixture of partially digested food and stomach secretions, is called chyme.
The stomach wall contains three layers of smooth muscle. These layers contract in a regular rhythm—usually three contractions per minute—to mix and churn stomach contents. Mucous membrane lines the stomach. Mucus, the thick, gooey liquid produced by the cells of that membrane, helps protect the stomach from its own secretions. Those secretions—acids and enzymes—enter the stomach through millions of shallow pits that open onto the surface of the inner stomach. Called gastric pits, these openings lead to gastric glands, which secrete about 1.6 quarts (1.5 liters) of gastric juice each day.
Gastric juice contains hydrochloric acid and pepsin. Pepsin is an enzyme that breaks down proteins; hydrochloric acid kills microorganisms and breaks down cell walls and connective tissue in food. The acid is strong enough to burn a hole in carpet, yet the mucus produced by the mucous membrane prevents it from dissolving the lining of the stomach. Even so, the cells of the mucous membrane wear out quickly: the entire stomach lining is replaced every three days. Mucus also aids in digestion by keeping food moist.
William Beaumont (1785–1853) was an American surgeon who served as an army surgeon during the War of 1812 (1812–15) and at various posts after the war. It was at one of these posts that he saw what perhaps no one before him had seen: the inner workings of the stomach.
In 1822, while serving at Fort Mackinac in northern Michigan, Beaumont was presented with a patient named Alexis St. Martin. The French Canadian trapper, only nineteen at the time, had been accidentally shot in the stomach. The bullet had torn a deep chunk out of the left side of St. Martin's lower chest. At first, no one thought he would survive, but amazingly he did. However, his wound never completely healed, leaving a 1-inch-wide (2.5-centimeter-wide) opening. This opening allowed Beaumont to put his finger all the way into St. Martin's stomach.
Beaumont decided to take advantage of the opening into St. Martin's side to study human digestion. He started by taking small chunks of food, tying them to a string, then inserting them directly into the young man's stomach. At irregular intervals, he pulled the food out to observe the varying actions of digestion. Later, using a hand-held lens, Beaumont peered into St. Martin's stomach. He observed how the human stomach behaved at various stages of digestion and under differing circumstances.
Beaumont conducted almost 240 experiments on St. Martin. In 1833, he published his findings in Experiments and Observations on the Gastric Juice and the Physiology of Digestion, a book that provided invaluable information on the digestive process.
The small intestine
The small intestine is the body's major digestive organ. Looped and coiled within the abdominal cavity, it extends about 20 feet (6 meters) from the stomach to the large intestine. At its junction with the stomach, it measures about 1.5 inches (4 centimeters) in diameter. By the time it meets the large intestine, its diameter has been reduced to 1 inch (2.5 centimeters). Although much longer than the large intestine, the small intestine is called "small" because its overall diameter is smaller.
The small intestine is divided into three regions or sections. The first section, the duodenum, is the initial 10 inches (25 centimeters) closest to the stomach. Chyme from the stomach and secretions from the pancreas and liver empty into this section. The middle section, the jejunum, measures about 8.2 feet (2.5 meters) in length. Digestion and the absorption of nutrients occur mainly in the jejunum. The final section, the ileum, is also the longest, measuring about 11 feet (3.4 meters) in length. The ileum ends at the ileocecal valve, a sphincter that controls the flow of chyme from the ileum to the large intestine.
The inner lining of the small intestine is covered with tiny, fingerlike projections called villi (giving it an appearance much like the nap of a plush, soft towel). The villi greatly increase the intestinal surface area available for absorbing digested material. Within each villus (singular for villi) are blood capillaries and a lymph capillary called a lacteal. Digested food molecules are absorbed through the walls of the villus into both the capillaries and the lacteal. At the bases of the villi are openings of intestinal glands, which secrete a watery intestinal juice. This juice contains digestive enzymes that convert food materials into simple nutrients the body can readily use. On average, about 2 quarts (1.8 liters) of intestinal juice are secreted into the small intestine each day.
As with the lining of the stomach, a coating of mucus helps protect the lining of the small intestine. Yet again, the digestive enzymes prove too strong for the delicate cells of that lining. They wear out and are replaced about every two days.
The large intestine
Extending from the end of the small intestine to the anus, the large intestine measures about 5 feet (1.5 meters) in length and 3 inches (7.5 centimeters) in diameter. It almost completely frames the small intestine. The large intestine is divided into three major regions: the cecum, colon, and rectum.
Cecum comes from the Latin word caecum, meaning "blind." Shaped like a rounded pouch, the cecum lies immediately below the area where the ileum empties into the large intestine. Attached to the cecum is the slender, fingerlike appendix, which measures about 3.5 inches (9 centimeters) in the average adult. Composed of lymphatic tissue, the appendix seems to have no function in present-day humans. For that reason, scientists refer to it as a vestigial organ (an organ that is reduced in size and function when compared with that of evolutionary ancestors).
Sometimes used to describe the entire large intestine, the colon is actually the organ's main part. It is divided into four sections: ascending, transverse, descending, and sigmoid. The ascending colon travels from the cecum up the right side of the abdominal cavity until it reaches the liver. It then makes a turn, becoming the transverse colon, which travels horizontally across the abdominal cavity. Near the spleen on the left side, it turns down to form the descending colon. At about where it enters the pelvis, it becomes the S-shaped sigmoid colon.
After curving and recurving, the sigmoid colon empties into the rectum, a fairly straight, 6-inch (15-centimeter) tube ending at the anus, the opening to the outside. Two sphincters (rings of muscle) control the opening and closing of the anus.
Roughly 1.6 quarts (1.5 liters) of watery material enters the large intestine each day. No digestion takes place in the large intestine, only the reabsorption or recovery of water. Mucus produced by the cells in the lining of the large intestine helps move the waste material along. As more and more water is removed from that material, it becomes compacted into soft masses called feces. Feces are composed of water, cellulose and other indigestible material, and dead and living bacteria. The remnants of worn red blood cells give feces their brown color. Only about 3 to 7 ounces (85 to 200 grams) of solid fecal material remains after the large intestine has recovered most of the water. That material is then eliminated through the anus, a process called defecation.
The pancreas is a soft, pink, triangular-shaped gland that measures about 6 inches (15 centimeters) in length. It lies behind the stomach, extending from the curve of the duodenum to the spleen. While a part of the digestive system, the pancreas is also a part of the endocrine system, producing the hormones insulin and glucagon (for a further discussion of this process, see chapter 3).
Primarily a digestive organ, the pancreas produces pancreatic juice that helps break down all three types of complex food molecules in the small intestine. The enzymes contained in that juice include pancreatic amylase, pancreatic lipase, and trypsinogen. Amylase breaks down starches into simple sugars, such as maltose (malt sugar). Lipase breaks down fats into simpler fatty acids and glycerol (an alcohol). Trypsinogen is the inactive form of the enzyme trypsin, which breaks down proteins into amino acids. Trypsin is so powerful that if produced in the pancreas, it would digest the organ itself. To prevent this, the pancreas produces trypsinogen, which is then changed in the duodenum to its active form.
Pancreatic juice is collected from all parts of the pancreas through microscopic ducts. These ducts merge to form larger ducts, which eventually combine to form the main pancreatic duct. This duct, which runs the length of the pancreas, then transports pancreatic juice to the duodenum of the small intestine.
The largest glandular organ in the body, the liver weighs between 3 and 4 pounds (1.4 and 1.8 kilograms). It lies on the right side of the abdominal cavity just beneath the diaphragm. In this position, it overlies and almost completely covers the stomach. Deep reddish brown in color, the liver is divided into four unequal lobes: two large right and left lobes and two smaller lobes visible only from the back.
The liver is an extremely important organ. Scientists have discovered that it performs over 200 different functions in the body. Among its many functions are processing nutrients, making plasma proteins and blood-clotting chemicals, detoxifying (transforming into less harmful substances) alcohol and drugs, storing vitamins and iron, and producing cholesterol.
One of the liver's main digestive functions is the production of bile. A watery, greenish yellow liquid, bile consists mostly of water, bile salts, cholesterol, and assorted lipids or fats. Liver cells produce roughly 1 quart (1 liter) of bile each day. Bile leaves the liver through the common hepatic duct. This duct unites with the cystic duct from the gall bladder to form the common bile duct, which delivers bile to the duodenum.
In the small intestine, bile salts emulsify fats, breaking them down from large globules into smaller droplets that stay suspended in the watery fluid in the small intestine. Bile salts are not enzymes and, therefore, do not digest fats. By breaking down the fats into smaller units, bile salts aid the fat-digesting enzymes present in the small intestine.
The gall bladder
The gall bladder is a small, pouchlike, green organ located on the undersurface of the right lobe of the liver. It measures 3 to 4 inches (7.6 to 10 centimeters) in length. The gall bladder's function is to store bile, of which it can hold about 1.2 to 1.7 ounces (35 to 50 milliliters).
The liver continuously produces bile. When digestion is not occurring, bile backs up the cystic duct and enters the gall bladder. While holding the bile, the gall bladder removes water from it, making it more concentrated. When fatty food enters the duodenum once again, the gall bladder is stimulated to contract and spurt out the stored bile.
Adding It Up: Helping Children Learn Mathematics
teachers should know. Many of these ideas are treated in more detail in textbooks intended for prospective elementary school teachers.
A major theme of the chapter is that numbers are ideas—abstractions that apply to a broad range of real and imagined situations. Operations on numbers, such as addition and multiplication, are also abstractions. Yet in order to communicate about numbers and operations, people need representations—something physical, spoken, or written. And in order to carry out any of these operations, they need algorithms: step-by-step procedures for computation. The chapter closes with a discussion of the relationship between number and other important mathematical domains such as algebra, geometry, and probability.
At first, school arithmetic is mostly concerned with the whole numbers: 0, 1, 2, 3, and so on.1 The child’s focus is on counting and on calculating— adding and subtracting, multiplying and dividing. Later, other numbers are introduced: negative numbers and rational numbers (fractions and mixed numbers, including finite decimals). Children expend considerable effort learning to calculate with these less intuitive kinds of numbers. Another theme in school mathematics is measurement, which forms a bridge between number and geometry.
Mathematicians like to take a bird’s-eye view of the process of developing an understanding of number. Rather than take numbers a pair at a time and worry in detail about the mechanics of adding them or multiplying them, they like to think about whole classes of numbers at once and about the properties of addition (or of multiplication) as a way of combining pairs of numbers in the class. This view leads to the idea of a number system. A number system is a collection of numbers, together with some operations (which, for purposes of this discussion, will always be addition and multiplication), that combine pairs of numbers in the collection to make other numbers in the same collection. The main number systems of arithmetic are (a) the whole numbers, (b) the integers (i.e., the positive whole numbers, their negative counterparts, and zero), and (c) the rational numbers—positive and negative ratios of whole numbers, except for those ratios of a whole number and zero.
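A quick way to make the closure idea concrete is to compute with exact rationals. The sketch below is illustrative only (the numbers are arbitrary examples, and the original text contains no code); it uses Python's Fraction type to show that sums and products of rational numbers are again rational, while the whole numbers fail to be closed under subtraction and division.

```python
from fractions import Fraction

# A number system is closed under its operations: combining any two
# members with + or * yields another member of the same collection.
a = Fraction(1, 3)    # a rational number: a ratio of whole numbers
b = Fraction(-2, 5)   # rationals include negative ratios

print(a + b)   # -1/15 -- still a ratio of integers
print(a * b)   # -2/15 -- still a ratio of integers

# The whole numbers are NOT closed under subtraction or division,
# one motivation for enlarging them to the integers and rationals:
print(3 - 5)           # -2: outside the whole numbers
print(Fraction(3, 5))  # 3/5: outside the integers
```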
Thinking in terms of number systems helps one clarify the basic ideas involved in arithmetic. This approach was an important mathematical discovery in the late nineteenth and early twentieth centuries. Some ideas of arithmetic are fairly subtle and cause problems for students, so it is useful to have a viewpoint from which the connections between ideas can be surveyed.
Greening of the Red Planet
A hardy microbe from Earth might one day transform the barren ground of Mars into arable soil.
January 26, 2001 -- Although Mars may once have been warm and wet, the Red Planet today is a frozen wasteland. Most scientists agree, it's highly unlikely that any living creature --even a microbe-- could survive for long on the surface of Mars.
When the first humans travel there to explore the Red Planet up close, they will have to grow their food in airtight, heated greenhouses. The Martian atmosphere is far too cold and dry for edible plants to grow in the open air. But if humans ever hope to establish long-term colonies on their planetary neighbor, they will no doubt want to find a way to farm outdoors. Imre Friedmann has an idea of how they might take the first step.
Above: Artists James Graham and Kandis Elliot's impression of a more habitable Mars. [more from ThinkQuest.org]
Mars is covered by a layer of ground-up rock and fine dust, known as regolith. To convert regolith into soil, it will be necessary to add organic matter, much as organic farmers on Earth fertilize their soil by adding compost to it.
On Earth, compost is made up primarily of decayed vegetable matter. Microorganisms play an important role in breaking down dead plants, recycling their nutrients back into the soil so that living plants can reuse them. But on Mars, says Friedmann, where there is no vegetation to decay, the dead bodies of the microorganisms themselves will provide the organic matter needed to build up the soil.
The trick is finding the right microbe.
"Among the organisms that are known today," says Friedmann, "Chroococcidiopsis is most suitable" for the task.
Chroococcidiopsis is one of the most primitive cyanobacteria known. What makes it such a good candidate is its ability to survive in a wide range of extreme environments that are hostile to most other forms of life. Chroococcidiopsis has been found growing in hot springs, in hypersaline (high-salt) habitats, in a number of hot, arid deserts throughout the world, and in the frigid Ross Desert in Antarctica.
Above: A photomicrograph of Chroococcidiopsis, enlarged 100 times.
"Chroococcidiopsis is the constantly appearing organism in nearly all extreme environments," Friedmann points out, "at least extreme dry, extreme cold, and extremely salty environments. This is the one which always comes up."
Moreover, where Chroococcidiopsis survives, it is often the only living thing that does. But it gladly gives up its dominance when conditions enable other, more complex forms of life to thrive.
For clues on how to farm Chroococcidiopsis on Mars, Friedmann looks to its growth habits in arid regions on Earth. In desert environments, Chroococcidiopsis grows either inside porous rocks, or just underground, on the lower surfaces of translucent pebbles.
Above: In many desert environments, Chroococcidiopsis grows on the undersides of transparent rocks, just below the surface.
The pebbles provide an ideal microenvironment for Chroococcidiopsis in two ways. First, they trap moisture underneath them. Experiments have shown that small amounts of moisture can cling to the undersurfaces of rocks for weeks after their above-ground surfaces have dried out. Second, because the pebbles are translucent, they allow just enough light to reach the organisms to sustain growth.
Friedmann envisions large farms where the bacteria are cultured on the underside of strips of glass that are treated to achieve the proper light-transmission characteristics. Mars today, however, is too cold for this technique to work effectively. Before even as hardy a microbe as Chroococcidiopsis could be farmed on Mars, the planet would have to be warmed up considerably, to just below the freezing point.
Friedmann, pictured left, admits that his ideas about growing Chroococcidiopsis are, at this point, merely a thought experiment.
"I don't think any of us alive today will see this happen," he muses. When the time does come to make Mars a more habitable place, "the technology will be so different that everything we plan today... will be ridiculously outdated."
Friedmann fully expects that genetic engineering will eventually develop designer organisms to do the job. Even if Chroococcidiopsis is ultimately used as the basis, it will be a vastly improved version of today's microbe.
The Physics and Biology of Making Mars Habitable -- Web page for the conference where Friedmann presented his research
Bibliography on terraforming -- extensive list of publications about terraforming, compiled by Chris McKay of NASA's Astrobiology Institute
The Terraforming Information Pages -- links to a variety of resources about terraforming
Meet Conan the Bacterium -- Science@NASA article: a humble microbe could become "The Accidental (Space) Tourist"
NASA Astrobiology Institute -- home page
The graph of a quadratic function of the form f(x) = ax² + bx + c is a parabola.
Properties of Graphs of Quadratic Functions.
a) If a > 0, the parabola opens upward; if a < 0, the parabola opens downward.
b) As |a| increases, the parabola becomes narrower; as |a| decreases, the parabola becomes wider.
c) The lowest point of a parabola (when a > 0) or the highest point (when a < 0) is called the vertex.
d) The domain of a quadratic function is R, because the graph extends indefinitely to the right and to the left. If (h, k) is the vertex of the parabola, then the range of the function is [k, +∞) when a > 0 and (−∞, k] when a < 0.
e) The graph of a quadratic function is symmetric with respect to a vertical line containing the vertex. This line is called the axis of symmetry. If (h, k) is the vertex of a parabola, then the equation of the axis of symmetry is x = h.
How to calculate the vertex of a parabola
To determine the vertex of the graph of a quadratic function f(x) = ax² + bx + c, we can either:
a) Complete the square to rewrite the function in the form f(x) = a(x − h)² + k. The vertex is (h, k).
b) Use the formula x = −b/(2a) to find the x-coordinate of the vertex; the y-coordinate can then be determined by evaluating f(−b/(2a)). The vertex is (−b/(2a), f(−b/(2a))).
How to calculate the intercepts of a parabola.
To find the y-intercept of the parabola, find f(0); to find the x-intercepts, solve the quadratic equation ax² + bx + c = 0.
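Both procedures are easy to check numerically. The sketch below uses an arbitrary example quadratic (not one from the original page) to compute the vertex from x = −b/(2a) and the intercepts from f(0) and the quadratic formula:

```python
import math

# Example quadratic: f(x) = 2x^2 - 4x - 6 (coefficients chosen arbitrarily)
a, b, c = 2, -4, -6

def f(x):
    return a * x**2 + b * x + c

# Vertex: x = -b/(2a), y = f(-b/(2a)); the axis of symmetry is x = h
h = -b / (2 * a)
k = f(h)
print("vertex:", (h, k))            # (1.0, -8.0)
print("axis of symmetry: x =", h)

# y-intercept: evaluate f(0)
print("y-intercept:", f(0))         # -6

# x-intercepts: solve ax^2 + bx + c = 0 with the quadratic formula
disc = b**2 - 4 * a * c
if disc >= 0:
    r1 = (-b + math.sqrt(disc)) / (2 * a)
    r2 = (-b - math.sqrt(disc)) / (2 * a)
    print("x-intercepts:", r1, r2)  # 3.0 and -1.0
else:
    print("no real x-intercepts")   # parabola does not cross the x-axis
```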
Mathematics » High School: Geometry » Similarity, Right Triangles, & Trigonometry
Standards in this domain:
Understand similarity in terms of similarity transformations
- CCSS.Math.Content.HSG-SRT.A.1 Verify experimentally the properties of dilations given by a center and a scale factor:
- CCSS.Math.Content.HSG-SRT.A.1a A dilation takes a line not passing through the center of the dilation to a parallel line, and leaves a line passing through the center unchanged.
- CCSS.Math.Content.HSG-SRT.A.1b The dilation of a line segment is longer or shorter in the ratio given by the scale factor.
- CCSS.Math.Content.HSG-SRT.A.2 Given two figures, use the definition of similarity in terms of similarity transformations to decide if they are similar; explain using similarity transformations the meaning of similarity for triangles as the equality of all corresponding pairs of angles and the proportionality of all corresponding pairs of sides.
- CCSS.Math.Content.HSG-SRT.A.3 Use the properties of similarity transformations to establish the AA criterion for two triangles to be similar.
Prove theorems involving similarity
- CCSS.Math.Content.HSG-SRT.B.4 Prove theorems about triangles. Theorems include: a line parallel to one side of a triangle divides the other two proportionally, and conversely; the Pythagorean Theorem proved using triangle similarity.
- CCSS.Math.Content.HSG-SRT.B.5 Use congruence and similarity criteria for triangles to solve problems and to prove relationships in geometric figures.
Define trigonometric ratios and solve problems involving right triangles
- CCSS.Math.Content.HSG-SRT.C.6 Understand that by similarity, side ratios in right triangles are properties of the angles in the triangle, leading to definitions of trigonometric ratios for acute angles.
- CCSS.Math.Content.HSG-SRT.C.7 Explain and use the relationship between the sine and cosine of complementary angles.
- CCSS.Math.Content.HSG-SRT.C.8 Use trigonometric ratios and the Pythagorean Theorem to solve right triangles in applied problems.★
Apply trigonometry to general triangles
- CCSS.Math.Content.HSG-SRT.D.9 (+) Derive the formula A = 1/2 ab sin(C) for the area of a triangle by drawing an auxiliary line from a vertex perpendicular to the opposite side.
- CCSS.Math.Content.HSG-SRT.D.10 (+) Prove the Laws of Sines and Cosines and use them to solve problems.
- CCSS.Math.Content.HSG-SRT.D.11 (+) Understand and apply the Law of Sines and the Law of Cosines to find unknown measurements in right and non-right triangles (e.g., surveying problems, resultant forces).
Exoplanet CoRoT-7b is five times heavier than the Earth
Even in ancient times, people observed the planets that orbit our Sun. (See also the astronomy question from week 1: Why are there seven days in a week?) Nowadays we know that there are many trillions of other stars in the Universe, in addition to the Sun. It seems likely that planets orbit many of these stars too. The evidence that extrasolar planets (exoplanets for short) exist was obtained for the first time in the 1990s. However, exoplanets are small, non-luminous bodies that are light years away and as a rule indiscernible to us – how are we able to prove that they exist?
Since 1995, over 370 exoplanets have been found – and there appears to be no end to the discoveries. Although astronomers have now succeeded in making a direct optical verification, two indirect astronomical measuring techniques have been shown to be particularly reliable in the search for exoplanets: the ‘radial velocity’ method and the 'transit' method.
Methods to verify the existence of extrasolar planets
Corot-Mission: Exoplanets can be discovered using the transit method
The radial velocity method is based on the premise that a star and the planet orbiting it have a reciprocal influence on each other due to their gravity. For this reason, the star moves periodically (in synchrony with movement of the planet around it) a little towards the observer and a little away from the observer along the line of sight. Due to the Doppler effect, in the electromagnetic spectrum of the star a radial movement such as this leads to a small periodic shift in the spectral lines – first towards the blue wavelength range, then back towards the red. (See also the astronomy question from week 38: How quickly is the Universe expanding?) If we analyse this movement of the spectral lines quantitatively, what is known as the radial-velocity curve can be derived from it. This yields parameters for the planetary orbit and the maximum mass of the planetary candidates. If the latter is less than the mass that a heavenly body requires to initiate thermonuclear fusion, the body is regarded as a planet.
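To get a feel for the scale of this effect, the non-relativistic Doppler relation Δλ/λ = v/c can be evaluated for typical numbers. The velocity and spectral line below are illustrative values, not measurements from this article:

```python
# Radial-velocity method: spectral-line shift from a star's reflex motion.
c = 299_792_458.0    # speed of light in m/s
v = 50.0             # example stellar reflex velocity in m/s
lam = 656.28e-9      # H-alpha rest wavelength in metres (example line)

delta_lam = lam * v / c                      # non-relativistic Doppler shift
print(f"shift: {delta_lam * 1e12:.3f} pm")   # ~0.109 picometres -- tiny!
```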
The transit method works if the orbit of the planet is such that, when viewed from the Earth, it passes in front of the star. During the planet's passage across the star disk, known as a transit, the planet's presence reduces the amount of radiation from the stellar disc that reaches the observer and a decrease in the apparent brightness of the star can be measured. The radius of the planet and its density can be calculated from these measurements together with other data (such as the distance of the star from Earth) – astronomers then know whether the planet in question is a rocky planet or a gas planet. Such findings are incorporated into models of how planets are formed and help us to better understand how planetary systems develop.
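For the transit method, the fractional dip in brightness is approximately the ratio of the disc areas, ΔF/F ≈ (R_planet / R_star)². A rough sketch using textbook radii (assumed values for illustration; they do not come from this article):

```python
# Transit method: depth of the brightness dip during a transit.
R_SUN     = 6.957e8   # metres
R_JUPITER = 7.149e7   # metres
R_EARTH   = 6.371e6   # metres

def transit_depth(r_planet, r_star=R_SUN):
    """Fractional flux drop when the planet crosses the stellar disc."""
    return (r_planet / r_star) ** 2

print(f"Jupiter-size planet: {transit_depth(R_JUPITER):.4%}")  # ~1.06%
print(f"Earth-size planet:   {transit_depth(R_EARTH):.4%}")    # ~0.0084%
```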
A circle is a simple shape of Euclidean geometry consisting of the set of points in a plane that are a given distance from a given point, the centre. The distance between any of the points and the centre is called the radius.

Circles are simple closed curves which divide the plane into two regions: an interior and an exterior. In everyday use, the term "circle" may be used interchangeably to refer to either the boundary of the figure, or to the whole figure including its interior; in strict technical usage, the circle is the former and the latter is called a disk.

A circle is a special ellipse in which the two foci are coincident and the eccentricity is 0. Circles are conic sections attained when a right circular cone is intersected by a plane perpendicular to the axis of the cone.
Area of the circle = π × area of the shaded square
As proved by Archimedes, the area enclosed by a circle is equal to that of a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, which comes to π multiplied by the radius squared:

A = πr²

(Our solved example in mathguru.com uses this concept).
Equivalently, denoting diameter by d,

A = πd²/4 ≈ 0.7854 d²

that is, approximately 79 percent of the circumscribing square (whose side is of length d).

The circle is the plane curve enclosing the maximum area for a given arc length. This relates the circle to a problem in the calculus of variations, namely the isoperimetric inequality.
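Both forms of the area formula, and the roughly-79-percent figure, are easy to verify numerically; the radius below is an arbitrary example:

```python
import math

r = 3.0        # example radius
d = 2 * r      # diameter

area = math.pi * r**2          # A = pi * r^2
area_d = math.pi * d**2 / 4    # equivalent form A = pi * d^2 / 4
print(area, area_d)            # both ~28.274: the two forms agree

# Ratio of the circle's area to its circumscribing square (side d):
print(area / d**2)             # pi/4 ~ 0.7854, i.e. about 79 percent
```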
In Euclidean plane geometry, a rectangle is any quadrilateral with four right angles. The term "oblong" is occasionally used to refer to a non-square rectangle. A rectangle with vertices ABCD would be denoted as ABCD.

A so-called crossed rectangle is a crossed (self-intersecting) quadrilateral which consists of two opposite sides of a rectangle along with the two diagonals. Its angles are not right angles. Other geometries, such as spherical, elliptic, and hyperbolic, have so-called rectangles with opposite sides equal in length and equal angles that are not right angles.

If a rectangle has length l and width w, it has area A = lw (Our solved example in mathguru.com uses this concept).

The above explanation is copied from Wikipedia, the free encyclopedia, and is remixed as allowed under the Creative Commons Attribution-ShareAlike 3.0 Unported license.
Dust is everywhere in space, but the pervasive stuff is one thing astronomers know little about. Cosmic dust is also elusive, as it lasts only about 10,000 years, a brief period in the life of a star. “We not only do not know what the stuff is, but we do not know where it is made or how it gets into space,” said Donald York, a professor at the University of Chicago. But now York and a group of collaborators have observed a double-star system, HD 44179, that may be creating a fountain of dust. The discovery has wide-ranging implications, because dust is critical to scientific theories about how stars form.
The double star system sits within what astronomers call the Red Rectangle, a nebula full of gas and dust located approximately 2,300 light years from Earth.
One of the double stars is a post-asymptotic giant branch (post-AGB) star, a type of star astronomers regard as a likely source of dust. These stars, unlike the sun, have already burned all the hydrogen in their cores and have collapsed, burning a new fuel, helium.
During the transition between burning hydrogen and helium, which takes place over tens of thousands of years, these stars lose an outer layer of their atmosphere. Dust may form in this cooling layer; radiation pressure coming from the star's interior then pushes the dust away from the star, along with a fair amount of gas.
In double-star systems, a disk of material from the post-AGB star may form around the second smaller, more slowly evolving star. “When disks form in astronomy, they often form jets that blow part of the material out of the original system, distributing the material in space,” York explained.
“If a cloud of gas and dust collapses under its own gravity, it immediately gets hotter and starts to evaporate,” York said. Something, possibly dust, must immediately cool the cloud to prevent it from reheating.
The giant star sitting in the Red Rectangle is among those that are far too hot to allow dust condensation within their atmospheres. And yet a giant ring of dusty gas encircles it.
Witt’s team made approximately 15 hours of observations on the double star over a seven-year period with the 3.5-meter telescope at Apache Point Observatory in New Mexico. “Our observations have shown that it is most likely the gravitational or tidal interaction between our Red Rectangle giant star and a close sun-like companion star that causes material to leave the envelope of the giant,” said collaborator Adolph Witt, from the University of Toledo.
Some of this material ends up in a disk of accumulating dust that surrounds that smaller companion star. Gradually, over a period of approximately 500 years, the material spirals into the smaller star.
Just before this happens, the smaller star ejects a small fraction of the accumulated matter in opposite directions via two gaseous jets, called “bipolar jets.”
Other quantities of the matter pulled from the envelope of the giant end up in a disk that skirts both stars, where it cools. “The heavy elements like iron, nickel, silicon, calcium and carbon condense out into solid grains, which we see as interstellar dust, once they leave the system,” Witt explained.
Cosmic dust production has eluded telescopic detection because it only lasts for perhaps 10,000 years—a brief period in the lifetime of a star. Astronomers have observed other objects similar to the Red Rectangle in Earth’s neighborhood of the Milky Way. This suggests that the process Witt’s team has observed is quite common when viewed over the lifetime of the galaxy.
“Processes very similar to what we are observing in the Red Rectangle nebula have happened maybe hundreds of millions of times since the formation of the Milky Way,” said Witt, who teamed up with longtime friends at Chicago for the study.
The team had set out to achieve a relatively modest goal: find the Red Rectangle’s source of far-ultraviolet radiation. The Red Rectangle displays several phenomena that require far-ultraviolet radiation as a power source. “The trouble is that the very luminous central star in the Red Rectangle is not hot enough to produce the required UV radiation,” Witt said, so he and his colleagues set out to find it.
It turned out neither star in the binary system is the source of the UV radiation, but rather the hot, inner region of the disk swirling around the secondary, which reaches temperatures near 20,000 degrees. Their observations, Witt said, “have been greatly more productive than we could have imagined in our wildest dreams.”
Source: University of Chicago
- Experiment With Transformations In The Plane
G.CO.1 Know precise definitions of angle, circle, perpendicular line, parallel line, and line segment, based on the undefined notions of point, line, distance along a line, and distance around a circular arc.
G.CO.2 Represent transformations in the plane using, e.g., transparencies and geometry software; describe transformations as functions that take points in the plane as inputs and give other points as outputs. Compare transformations that preserve distance and angle to those that do not (e.g., translation versus horizontal stretch).
G.CO.3 Given a rectangle, parallelogram, trapezoid, or regular polygon, describe the rotations and reflections that carry it onto itself.
G.CO.4 Develop definitions of rotations, reflections, and translations in terms of angles, circles, perpendicular lines, parallel lines, and line segments.
G.CO.5 Given a geometric figure and a rotation, reflection, or translation, draw the transformed figure using, e.g., graph paper, tracing paper, or geometry software. Specify a sequence of transformations that will carry a given figure onto another.
- Understand Congruence In Terms Of Rigid Motions
G.CO.6 Use geometric descriptions of rigid motions to transform figures and to predict the effect of a given rigid motion on a given figure; given two figures, use the definition of congruence in terms of rigid motions to decide if they are congruent.
G.CO.7 Use the definition of congruence in terms of rigid motions to show that two triangles are congruent if and only if corresponding pairs of sides and corresponding pairs of angles are congruent.
G.CO.8 Explain how the criteria for triangle congruence (ASA, SAS, and SSS) follow from the definition of congruence in terms of rigid motions.
- Prove Geometric Theorems
G.CO.9 Prove theorems about lines and angles. Theorems include: vertical angles are congruent; when a transversal crosses parallel lines, alternate interior angles are congruent and corresponding angles are congruent; points on a perpendicular bisector of a line segment are exactly those equidistant from the segment's endpoints.
G.CO.10 Prove theorems about triangles. Theorems include: measures of interior angles of a triangle sum to 180°; base angles of isosceles triangles are congruent; the segment joining midpoints of two sides of a triangle is parallel to the third side and half the length; the medians of a triangle meet at a point.
G.CO.11 Prove theorems about parallelograms. Theorems include: opposite sides are congruent, opposite angles are congruent, the diagonals of a parallelogram bisect each other, and conversely, rectangles are parallelograms with congruent diagonals.
- Make Geometric Constructions
G.CO.12 Make formal geometric constructions with a variety of tools and methods (compass and straightedge, string, reflective devices, paper folding, dynamic geometric software, etc.). Copying a segment; copying an angle; bisecting a segment; bisecting an angle; constructing perpendicular lines, including the perpendicular bisector of a line segment; and constructing a line parallel to a given line through a point not on the line.
G.CO.13 Construct an equilateral triangle, a square, and a regular hexagon inscribed in a circle.
The Major clusters will make up a majority of the assessment; Supporting clusters will be assessed through their success at supporting the Major clusters; and Additional clusters will be assessed as well. The assessments will focus strongly where the standards focus strongly.
47 | Novell's Networking Primer
Although we routinely use the terms "data" and "information" interchangeably, they are not technically the same thing. Computer data is a series of electrical charges arranged in patterns to represent information. In other words, the term "data" refers to the form of the information (the electrical patterns), not the information itself.
Conversely, the term "information" refers to data that has been decoded. In other words, information is the real-world, useful form of data. For example, the data in an electronic file can be decoded and displayed on a computer screen or printed onto paper as a business letter.
Encoding and Decoding Data
To store meaningful information as data and to retrieve the information, computers use encoding schemes: series of electrical patterns that represent each of the discrete pieces of information to be stored and retrieved. For example, a particular series of electrical patterns represents the alphabetic character "A." There are many encoding schemes in use. One common data-encoding scheme is American Standard Code for Information Interchange (ASCII).
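As an illustration of how an encoding scheme works (the primer itself contains no code, so the example below is a sketch), Python's built-in functions can map characters to their ASCII codes and bit patterns, and decode the numbers back into text:

```python
# ASCII maps each character to a number; the number's bit pattern
# is what is actually stored or signalled as electrical charges.
for ch in "Hi!":
    code = ord(ch)                       # character -> numeric code
    print(ch, code, format(code, "08b"))  # and its 8-bit pattern

# H 72 01001000
# i 105 01101001
# ! 33 00100001

# Decoding reverses the mapping: data back into information.
print(bytes([72, 105, 33]).decode("ascii"))  # Hi!
```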
To encode information into data and later decode that data back into information, we use electronic devices, such as the computer, that generate electronic signals. Signals are simply the electric or electromagnetic encoding of data. Various components in a computer enable it to generate signals to perform encoding and decoding tasks.
To guarantee reliable transmission of this data across a network, there must be an agreed-on method that governs how data is sent, received, and decoded. That method must address questions such as: How does a sending computer indicate to which computer it is sending data? If the data will be passed through intervening devices, how are these devices to understand how to handle the data so that it will get to the intended destination? What if the sending and receiving computers use different data formats and data exchange conventions—how will data be translated to allow its exchange?
In response to these questions, a communication model known as the OSI model was developed. It is the basis for controlling data transmission on computer networks. Understanding the OSI model will allow you to understand how data can be transferred between two networked computers.
ISO and the OSI Model
The OSI model was developed by the International Organization for Standardization (ISO) as a guideline for developing standards to enable the interconnection of dissimilar computing devices. It is important to understand that the OSI model is not itself a communication standard. In other words, it is not an agreed-on method that governs how data is sent and received; it is only a guideline for developing such standards.
The Importance of the OSI Model
It would be difficult to overstate the importance of the OSI model. Virtually all networking vendors and users understand how important it is that network computing products adhere to and fully support the networking standards this model has generated.
When a vendor's products adhere to the standards the OSI model has generated, connecting those products to other vendors' products is relatively simple. Conversely, the further a vendor departs from those standards, the more difficult it becomes to connect that vendor's products to those of other vendors.
In addition, if a vendor were to depart from the communication standards the model has engendered, software development efforts would be very difficult because the vendor would have to build every part of all necessary software, rather than being able to build on the existing work of other vendors.
The first two problems give rise to a third significant problem for vendors: a vendor's products become less marketable as they become more difficult to connect with other vendors' products.
The Seven Layers of the OSI Model
Because the task of controlling communications across a computer network is too complex to be defined by one standard, the ISO divided the task into seven subtasks. Thus, the OSI model contains seven layers, each named to correspond to one of the seven defined subtasks.
Each layer of the OSI model contains a logically grouped subset of the functions required for controlling network communications. The seven layers of the OSI model and the general purpose of each are shown in Figure 2.
Figure 2: The OSI model
Network Communications through the OSI Model
Using the seven layers of the OSI model, we can explore more fully how data can be transferred between two networked computers. Figure 3 uses the OSI model to illustrate how such communications are accomplished.
Figure 3: Networked computers communicating through the OSI model
The figure represents two networked computers. They are running identical operating systems and applications and are using identical protocols (or rules) at all OSI layers. Working in conjunction, the applications, the OS, and the hardware implement the seven functions described in the OSI model.
Each computer is also running an e-mail program that is independent of the OSI layers. The e-mail program enables the users of the two computers to exchange messages. Our figure represents the transmission of one brief message from Sam to Charlie.
The transmission starts when Sam types in a message to Charlie and presses the "send" key. Sam's operating system appends to the message (or "encapsulates") a set of application-layer instructions (OSI Layer 7) that will be read and executed by the application layer on Charlie's computer. The message with its Layer 7 header is then transferred to the part of the operating system that deals with presentation issues (OSI Layer 6) where a Layer 6 header is appended to the message. The process repeats through all the layers until each layer has appended a header. The headers function as an escort for the message so that it can successfully negotiate the software and hardware in the network and arrive intact at its destination.
When the data-link-layer header is added at Layer 2, the data unit is known as a "frame." The final header, the physical-layer header (OSI Layer 1) tells the hardware in Sam's computer the electrical specifics of how the message will be sent (which medium, at which voltage, at which speed, etc.). Although it is the final header to be added, the Layer 1 header is the first in line when the message travels through the medium to the receiving computer.
When the message with its seven headers arrives at Charlie's computer, the hardware in his computer is the first to handle the message. It reads the instructions in the Layer 1 header, executes them, and strips off the header before passing the message to the Layer 2 components. These Layer 2 components execute those instructions, strip off the header, and pass the message to Layer 3, and so on. Each layer's header is successively stripped off after its instructions have been read so that by the time the message arrives at Charlie's e-mail application, the message has been properly received, authenticated, decoded, and presented.
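The header-wrapping and header-stripping sequence just described can be modelled in a few lines. The sketch below is purely illustrative: real headers are binary structures defined by each protocol, not text tags.

```python
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link", "physical"]

def encapsulate(message: str) -> str:
    """Sam's side: each layer prepends its header, Layer 7 first,
    so the physical-layer header ends up outermost (first in line)."""
    for layer in LAYERS:
        message = f"[{layer}]" + message
    return message

def decapsulate(frame: str) -> str:
    """Charlie's side: each layer reads and strips its header,
    Layer 1 first, until only the original message remains."""
    for layer in reversed(LAYERS):
        header = f"[{layer}]"
        assert frame.startswith(header), f"expected {header}"
        frame = frame[len(header):]
    return frame

wire = encapsulate("Hello, Charlie!")
print(wire)               # [physical][data link]...[application]Hello, Charlie!
print(decapsulate(wire))  # Hello, Charlie!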
Commonly Used Standards and Protocols
National and international standards organizations have developed standards for each of the seven OSI layers. These standards define methods for controlling the communication functions of one or more layers of the OSI model and, if necessary, for interfacing those functions with the layers above and below.
A standard for any layer of the OSI model specifies the communication services to be provided and a protocol that will be used as a means to provide those services. A protocol is a set of rules network devices must follow (at any OSI layer) to communicate. A protocol consists of the control functions, control codes, and procedures necessary for the successful transfer of data.
More than one protocol standard exists for every layer of the OSI model. This is because a number of standards were proposed for each layer, and because the various organizations that defined those standards—specifically, the standards committees inside these organizations—decided that more than one of the proposed standards had real merit. Thus, they allowed for the use of different standards to satisfy different networking needs. As technologies develop and change, some standards win a larger share of the market than others, and some dominate to the point of becoming "de facto" standards.
To understand the capabilities of computer networking products, it will help to know the OSI layer at which particular protocols operate and why the standard for each layer is important. By converting protocols or using multiple protocols at different layers of the OSI model, it becomes possible for different computer systems to share data, even if they use different software applications, operating systems, and data-encoding techniques.
Figure 4 shows some commonly used standards and the OSI layer at which they operate.
Figure 4: Important standards at various OSI layers
Layer 7 and Layer 6 Standards: Application and Presentation
The application layer performs high-level services such as making sure necessary resources are present (such as a modem on the receiving computer) and authenticating users when appropriate (to authenticate is to grant access after verifying that the you are who you say you are). The presentation layer, usually part of an operating system, converts incoming and outgoing data from one presentation format to another. Presentation-layer services include data encryption and text compression. Most standards at this level specify Layer 7 and Layer 6 functions in one standard.
The predominant standards at Layer 7 and Layer 6 were developed by the Department of Defense (DoD) as part of the Transmission Control Protocol/Internet Protocol (TCP/IP) suite. This suite consists of the following protocols, among others: File Transfer Protocol (FTP), the protocol most often used to download files from the Internet; Telnet, which enables you to connect to mainframe computers over the Internet; HyperText Transfer Protocol (HTTP), which delivers Web pages; and Simple Mail Transfer Protocol (SMTP), which is used to send e-mail messages. These are all Layer 7 protocols; the TCP/IP suite consists of more than 40 protocols at several layers of the OSI model.
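These application-layer protocols are, at bottom, agreed-on conversations carried over the lower layers. As an illustration, the sketch below sends a bare-bones HTTP/1.0 request over a TCP connection using Python's standard socket module; example.com is a placeholder host chosen for the example:

```python
import socket

# HTTP (OSI Layer 7) rides on TCP (Layer 4), which rides on IP (Layer 3).
host = "example.com"   # illustrative host
with socket.create_connection((host, 80), timeout=5) as sock:
    request = (
        "GET / HTTP/1.0\r\n"   # the application-layer protocol is plain text
        f"Host: {host}\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))
    response = sock.recv(4096)

# Print the status line of the server's reply.
print(response.decode("ascii", errors="replace").splitlines()[0])
# e.g. "HTTP/1.0 200 OK"
```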
X.400 is an International Telecommunication Union (ITU) standard that encompasses both the presentation and application layers. X.400 provides message handling and e-mail services. It is the basis for a number of e-mail applications (primarily in Europe and Canada) as well as for other messaging products. Another ITU standard in the presentation layer is the X.500 protocol, which provides directory access and management.
File Transfer, Access, and Management (FTAM) and Virtual Terminal Protocol (VTP) are ISO standards that encompass the application layer. FTAM provides user applications with useful file transfer and management functions. VTP is similar to Telnet; it specifies how to connect to a mainframe over the Internet via a "virtual terminal" or terminal emulation. In other words, you can see and use a mainframe's terminal display on your own PC. These two standards have been largely eclipsed by the DoD standards.
Compact HTML is defined by the World Wide Web Consortium (W3C) and is a subset of HTML. Like WAP, it addresses small-client limitations by excluding functions such as JPEG images, tables, image maps, multiple character fonts and styles, background colors and images, frames, and style sheets.
Layer 5 Standards: Session
As its name implies, the session layer establishes, manages, and terminates sessions between applications. Sessions consist of dialogue between the presentation layer (OSI Layer 6) of the sending computer and the presentation layer of the receiving computer. The session layer synchronizes dialogue between these presentation layer entities and manages their data exchange. In addition to basic regulation of conversations (sessions), the session layer offers provisions for data expedition, class of service, and exception reporting of problems in the session, presentation, and application layers.
Transmission Control Protocol (TCP)—part of the TCP/IP suite—performs important functions at this layer as does the ISO session standard, named simply "session." In a NetWare environment the NetWare Core Protocol™ (NCP™) provides most of the necessary session-layer functions. The Service Advertising Protocol (SAP) also provides functions at this layer. Both NCP and SAP are discussed in greater detail in the "Internetworking" section of this primer.
Wireless Session Protocol (WSP), part of the WAP suite, provides WAE with two session services: a connection-oriented session over Wireless Transaction Protocol (WTP) and a connectionless session over Wireless Datagram Protocol (WDP).
Wireless Transaction Protocol (WTP), also part of the WAP suite, runs on top of UDP and performs many of the same tasks as TCP but in a way optimized for wireless devices. For example, WTP does not include a provision for rearranging out-of-order packets; because there is only one route between the WAP proxy and the handset, packets will not arrive out of order as they might on a wired network.
Layer 4 Standards: Transport
Standards at this OSI layer work to ensure that all packets have arrived. This layer also isolates the upper three layers—which handle user and application requirements—from the details that are required to manage the end-to-end connection.
IBM's Network Basic Input/Output System (NetBIOS) protocol is an important protocol at this layer and at the session layer. However, designed specifically for a single network, this protocol does not support a routing mechanism to allow messages to travel from one network to another. For routing to take place, NetBIOS must be used in conjunction with another "transport mechanism" such as TCP. TCP provides all functions required for the transport layer.
WDP is the transport-layer protocol for WAP that allows WAP to be bearer-independent; that is, regardless of which protocol is used for Layer 3—USSD, SMS, FLEX, or CDMA—WDP adapts the transport-layer protocols so that WAP can operate on top of them.
Layer 3 Standards: Network
The function of the network layer is to manage communications: principally, the routing and relaying of data between nodes. (A node is a device such as a workstation or a server that is connected to a network and is capable of communicating with other network devices.) Probably the most important network-layer standard is Internet Protocol (IP), another part of the TCP/IP suite. This protocol is the basis for the Internet and for all intranet technology. IP has also become the standard for many LANs.
The ITU X.25 standard has been a common fixture in the network layer, but newer, faster standards are quickly replacing it, especially in the United States. It specifies the interface for connecting computers on different networks by means of an intermediate connection made through a packet-switched network (for example, a common carrier network such as Tymnet). The X.25 standard includes X.21, the physical-layer protocol, and Link Access Protocol Balanced (LAPB), the data-link-layer protocol.
Layer 2 Standards: Data-Link (Media Access Control and Logical Link Control)
The most commonly used Layer 2 protocols are those specified by the Institute of Electrical and Electronics Engineers (IEEE): 802.2 Logical Link Control, 802.3 Ethernet, 802.4 Token Bus, and 802.5 Token Ring. Most PC networking products use one of these standards. A few Layer 2 standards under development or that have recently been proposed to IEEE are 802.1P Generic Attribute Registration Protocol (GARP) for virtual bridge LANs, 802.1Q Virtual LAN (VLAN), and 802.15 Wireless Personal Area Network (WPAN), which will define standards used to link mobile computers, mobile phones, and other portable handheld devices, and to provide connectivity to the Internet. Another Layer 2 standard is Cells In Frames (CIF), which provides a way to send Asynchronous Transfer Mode (ATM) cells over legacy LAN frames.
ATM is another important technology at Layer 2, as are 100Base-T (IEEE 802.3u) and frame relay. These technologies are treated in greater detail in the "Important WAN and High-Speed Technologies" section.
Layer 2 standards encompass two sublayers: media access control (MAC) and logical link control.
Media Access Control
The media access control protocol specifies how workstations cooperatively share the transmission medium. Within the MAC sublayer there are several standards governing how data accesses the transmission medium.
The IEEE 802.3 standard specifies a media access method known as "carrier sense multiple access with collision detection" (CSMA/CD), and the IEEE 802.4, 802.5, and fiber distributed data interface (FDDI) standards all specify some form of token passing as the MAC method. These standards are discussed in greater detail in the "Network Topologies" section.
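As a sketch of how CSMA/CD resolves contention, the Python snippet below models the truncated binary exponential backoff that IEEE 802.3 stations use after detecting a collision. The 51.2-microsecond slot time is the classic 10-Mbps Ethernet figure; this is an illustrative model, not a full MAC implementation (real stations abandon the frame after 16 failed attempts).

import random

SLOT_TIME_US = 51.2  # slot time for classic 10-Mbps Ethernet, in microseconds

def backoff_delay_us(collision_count: int) -> float:
    """Random delay before the next retransmission attempt, in microseconds."""
    exponent = min(collision_count, 10)           # backoff range is truncated at 10
    slots = random.randint(0, 2**exponent - 1)    # pick a slot uniformly at random
    return slots * SLOT_TIME_US

# Example: possible delays after the 1st, 3rd, and 10th collisions.
for n in (1, 3, 10):
    print(f"after collision {n}: wait {backoff_delay_us(n):.1f} us")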
The token-ring MAC method is not as prominent in computer networks as it once was: Ethernet, which uses CSMA/CD, has become the more popular networking protocol for linking workstations and servers. The token-passing technology of ARCnet (Attached Resource Computer network), however, has become the preferred method for embedded and real-time systems such as automobiles, factory control systems, casino games, and heating, ventilation, and cooling systems.
Logical Link Control
The function of the logical link control sublayer is to ensure the reliability of the physical connection. The IEEE 802.2 standard (also called Logical Link Control or LLC) is the most commonly used logical link control standard because it works with either the CSMA/CD or token-ring standards. The Point-to-Point Protocol (PPP) is another standard at this OSI level. This protocol is typically used to connect two computers through a serial interface, such as when connecting a personal computer to a server through a phone line or a T1 or T3 line. PPP encapsulates TCP/IP packets and forwards them to a server, which then forwards them to the Internet. The advantage to using PPP is that it is a "full-duplex" protocol, which means that it can carry a sending and a receiving signal simultaneously over the same line. It can also be used over twisted-pair wiring, fiber optic cable, and satellite transmissions.
Layer 1 Standards: Physical
Standards at the physical layer include protocols for transmitting a bitstream over media such as baseband coaxial cable, unshielded twisted-pair wiring, optical fiber cable, or through the air. The most commonly used are those specified in the IEEE 802.3, 802.4, and 802.5 standards. Use of the American National Standards Institute (ANSI) FDDI standard has declined as Ethernet has replaced token-ring technologies. Much of the FDDI market has largely been replaced by Synchronous Optical Network (SONET) and Asynchronous Transfer Mode (ATM). The different types of network cable and other network hardware will be discussed in greater detail in the "Hardware Technology" section.
Further Perspective: Standards and Open Systems
You probably noticed from looking at Figure 4 that most accepted standards do not include all (and only) those services specified for any OSI layer. In fact, most common standards encompass parts of multiple OSI layers.
Product vendors' actual implementation of OSI layers is divided less neatly. Vendors implement accepted standards—which already include mixed services from multiple layers—in different ways.
The OSI model was never intended to foster a rigid, unbreakable set of rules: it was expected that networking vendors would be free to use whichever standard for each layer they deemed most appropriate. They would also be free to implement each standard in the manner best suited to the purposes of their products.
However, it is clearly in a vendor's best interest to manufacture products that conform to the intentions behind the OSI model. To do this, a vendor must provide the services required at each OSI model layer in a manner that will enable the vendor's system to be connected to the systems of other vendors easily. Systems that conform to these standards and offer a high degree of interoperability with heterogeneous environments are called open systems. Systems that provide interoperability with components from only one vendor are called proprietary systems. These systems use standards created or modified by the vendor and are designed to operate in a homogeneous or single-vendor environment.
© NASA/Zuber, M.T. et al., Nature, 2012
Elevation (left) and shaded relief (right) image of Shackleton, a 21-km-diameter (12.5-mile-diameter) permanently shadowed crater adjacent to the lunar south pole. The structure of the crater's interior was revealed by a digital elevation model constructed from over 5 million elevation measurements from the Lunar Orbiter Laser Altimeter.
NASA said its Lunar Reconnaissance Orbiter (LRO) spacecraft has found that a crater at the Moon's south pole, dubbed Shackleton, may have as much as 22% of its surface covered in ice.
Shackleton, named after the Antarctic explorer Ernest Shackleton, is two miles deep and more than 12 miles wide, and because of the Moon's tilt its interior is always in the dark. Using laser light from LRO's laser altimeter, NASA said it found the crater's floor is brighter than those of other nearby craters, which is consistent with the presence of small amounts of ice. This information will help researchers understand crater formation and study other uncharted areas of the Moon, NASA said.
NASA said the LRO mapped Shackleton crater with unprecedented detail, with the laser light measuring to a depth comparable to its wavelength, about a micron (a millionth of a meter, or less than one ten-thousandth of an inch). The team also used the instrument to map the relief of the crater's terrain based on the time it took for laser light to bounce back from the Moon's surface. The longer it took, the lower the terrain's elevation, NASA said.
NASA said that in addition to the possible evidence of ice, the study of Shackleton revealed a remarkably preserved crater that has remained relatively unscathed since its formation more than three billion years ago. The crater's floor is itself pocked with several small craters, which may have formed as part of the collision that created Shackleton.
Maria Zuber, the team's lead investigator from the Massachusetts Institute of Technology, said that while the crater's floor was relatively bright, its walls were even brighter. The finding was at first puzzling because scientists had thought that if ice were anywhere in a crater, it would be on the floor, where no direct sunlight penetrates. The upper walls of Shackleton crater are occasionally illuminated, which could evaporate any ice that accumulates. A theory offered by the team to explain the puzzle is that "moonquakes" -- seismic shaking brought on by meteorite impacts or gravitational tides from Earth -- may have caused Shackleton's walls to slough off older, darker soil, revealing newer, brighter soil underneath. Zuber's team's ultra-high-resolution map provides strong evidence for ice on both the crater's floor and walls.
"There may be multiple explanations for the observed brightness throughout the crater," Zuber said in a statement. "For example, newer material may be exposed along its walls, while ice may be mixed in with its floor."
This is not the first time NASA has found ice on the moon. The space agency's Lunar CRater Observation and Sensing Satellite (LCROSS) and Lunar Reconnaissance Orbiter, which in 2009 slammed into the Moon as part of an experiment to find out what the orb was really made of, found ice in the debris plume kicked up by the impact.
NASA said the mission found evidence that the lunar soil within craters is rich in useful materials, and the moon is chemically active and has a water cycle. Scientists also confirmed the water was in the form of mostly pure ice crystals in some places.
In 2010, using data from a NASA radar that flew aboard India's Chandrayaan-1 spacecraft, scientists detected ice deposits near the moon's north pole. NASA's Mini-SAR instrument found more than 40 small craters with water ice. The craters range in size from 1 to 9 miles (2 to 15 km) in diameter. Although the total amount of ice depends on its thickness in each crater, it's estimated there could be at least 1.3 trillion pounds (600 million metric tons) of water ice.
In computer science and technology, a database cursor is a control structure that enables traversal over the records in a database. Cursors facilitate subsequent processing in conjunction with the traversal, such as retrieval, addition and removal of database records. The database cursor characteristic of traversal makes cursors akin to the programming language concept of iterator.
Cursors are used by database programmers to process individual rows returned by database system queries. Whereas ordinary SQL statements manipulate whole result sets at once, a cursor enables the rows in a result set to be processed sequentially, one at a time.
In SQL procedures, a cursor makes it possible to define a result set (a set of data rows) and perform complex logic on a row by row basis. By using the same mechanics, an SQL procedure can also define a result set and return it directly to the caller of the SQL procedure or to a client application.
A cursor can be viewed as a pointer to one row in a set of rows. The cursor can only reference one row at a time, but can move to other rows of the result set as needed.
To use cursors in SQL procedures, you need to do the following (a minimal client-side sketch follows this list):
- Declare a cursor that defines a result set.
- Open the cursor to establish the result set.
- Fetch the data into local variables as needed from the cursor, one row at a time.
- Close the cursor when done.
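These same four steps map directly onto most client database APIs as well. The following sketch uses Python's built-in sqlite3 module (a DB-API implementation); the employees table and its rows are hypothetical, created in memory purely for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [(1, "Ada"), (2, "Grace")])

cur = conn.cursor()                               # 1. declare/create the cursor
cur.execute("SELECT id, name FROM employees")     # 2. "open": establish the result set
while (row := cur.fetchone()) is not None:        # 3. fetch rows one at a time
    print(row)
cur.close()                                       # 4. close the cursor when done
conn.close()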
To work with cursors you must use the following SQL statements: DECLARE, OPEN, FETCH, and CLOSE.
This section introduces the ways the SQL:2003 standard defines how to use cursors in applications in embedded SQL. Not all application bindings for relational database systems adhere to that standard, and some (such as CLI or JDBC) use a different interface.
A programmer makes a cursor known to the DBMS by using a DECLARE ... CURSOR statement and assigning the cursor a (compulsory) name:
DECLARE cursor_name CURSOR FOR SELECT ... FROM ...
Before code can access the data, it must open the cursor with the OPEN statement. Directly following a successful opening, the cursor is positioned before the first row in the result set.
Programs position cursors on a specific row in the result set with the FETCH statement. A fetch operation transfers the data of the row into the application.
FETCH cursor_name INTO ...
Once an application has processed all available rows or the fetch operation is to be positioned on a non-existing row (compare scrollable cursors below), the DBMS returns a SQLSTATE '02000' (usually accompanied by an SQLCODE +100) to indicate the end of the result set.
The final step involves closing the cursor using the CLOSE statement.
After closing a cursor, a program can open it again, which implies that the DBMS re-evaluates the same query or a different query and builds a new result set.
Programmers may declare cursors as scrollable or not scrollable. The scrollability indicates the direction in which a cursor can move.
With a non-scrollable (or forward-only) cursor, you can FETCH each row at most once, and the cursor automatically moves to the next row. After you fetch the last row, if you fetch again, you will put the cursor after the last row and get the following code: SQLSTATE 02000 (SQLCODE +100).
A program may position a scrollable cursor anywhere in the result set using the FETCH SQL statement. The keyword SCROLL must be specified when declaring the cursor. The default is NO SCROLL, although different language bindings like JDBC may apply a different default.
DECLARE cursor_name [ SENSITIVE | INSENSITIVE | ASENSITIVE ] SCROLL CURSOR FOR SELECT ... FROM ...
The target position for a scrollable cursor can be specified relatively (from the current cursor position) or absolutely (from the beginning of the result set).
FETCH [ NEXT | PRIOR | FIRST | LAST ] FROM cursor_name
FETCH ABSOLUTE n FROM cursor_name
FETCH RELATIVE n FROM cursor_name
Scrollable cursors can potentially access the same row in the result set multiple times. Thus, data modifications (insert, update, delete operations) from other transactions could have an impact on the result set. A cursor can be SENSITIVE or INSENSITIVE to such data modifications. A sensitive cursor picks up data modifications impacting the result set of the cursor, and an insensitive cursor does not. Additionally, a cursor may be ASENSITIVE, in which case the DBMS tries to apply sensitivity as much as possible.
Cursors are usually closed automatically at the end of a transaction, i.e. when a COMMIT or ROLLBACK (or an implicit termination of the transaction) occurs. That behavior can be changed if the cursor is declared using the WITH HOLD clause. (The default is WITHOUT HOLD.) A holdable cursor is kept open over COMMIT and closed upon ROLLBACK. (Some DBMS deviate from this standard behavior and also keep holdable cursors open over ROLLBACK.)
DECLARE cursor_name CURSOR WITH HOLD FOR SELECT ... FROM ...
When a COMMIT occurs, a holdable cursor is positioned before the next row. Thus, a positioned UPDATE or positioned DELETE statement will only succeed after a FETCH operation occurred first in the transaction.
Note that JDBC defines cursors as holdable per default. This is done because JDBC also activates auto-commit per default. Due to the usual overhead associated with auto-commit and holdable cursors, both features should be explicitly deactivated at the connection level.
Positioned update/delete statements
Cursors can not only be used to fetch data from the DBMS into an application but also to identify a row in a table to be updated or deleted. The SQL:2003 standard defines positioned update and positioned delete SQL statements for that purpose. Such statements do not use a regular WHERE clause with predicates. Instead, a cursor identifies the row. The cursor must be opened and already positioned on a row by means of the FETCH statement.
UPDATE table_name SET ... WHERE CURRENT OF cursor_name
DELETE FROM table_name WHERE CURRENT OF cursor_name
The cursor must operate on an updatable result set in order to successfully execute a positioned update or delete statement. Otherwise, the DBMS would not know how to apply the data changes to the underlying tables referred to in the cursor.
Cursors in distributed transactions
Using cursors in distributed transactions (X/Open XA Environments), which are controlled using a transaction monitor, is no different than cursors in non-distributed transactions.
One has to pay attention when using holdable cursors, however. Connections can be used by different applications. Thus, once a transaction has been ended and committed, a subsequent transaction (running in a different application) could inherit existing holdable cursors. Therefore, an application developer has to be aware of that situation.
Cursors in XQuery
The XQuery language allows cursors to be created using the subsequence() function.
The format is:
let $displayed-sequence := subsequence($result, $start, $item-count)
Where $result is the result of the initial XQuery, $start is the item number to start and $item-count is the number of items to return.
Equivalently this can also be done using a predicate:
let $displayed-sequence := $result[$start to $end]
Where $end is the end sequence.
For complete examples see the XQuery Wikibook.
Disadvantages of cursors
The following information may vary depending on the specific database system.
Fetching a row from the cursor may result in a network round trip each time. This uses much more network bandwidth than would ordinarily be needed for the execution of a single SQL statement like DELETE. Repeated network round trips can severely impact the speed of the operation using the cursor. Some DBMSs try to reduce this impact by using block fetch. Block fetch implies that multiple rows are sent together from the server to the client. The client stores a whole block of rows in a local buffer and retrieves the rows from there until that buffer is exhausted.
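On the client side, the DB-API exposes this pattern through fetchmany(), which retrieves a block of rows per call. The sketch below, with a hypothetical readings table, shows only the calling pattern; whether rows are actually batched over the network depends on the DBMS and driver (sqlite3 is an in-process library, so no network is involved).

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (v REAL)")
conn.executemany("INSERT INTO readings VALUES (?)", [(i,) for i in range(10)])

cur = conn.cursor()
cur.execute("SELECT v FROM readings")
total = 0.0
while True:
    block = cur.fetchmany(256)     # up to 256 rows per call instead of one
    if not block:
        break
    for (v,) in block:             # process rows from the locally buffered block
        total += v
print(f"sum of readings: {total}")
cur.close()
conn.close()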
Cursors allocate resources on the server, for instance locks, packages, processes, temporary storage, etc. For example, Microsoft SQL Server implements cursors by creating a temporary table and populating it with the query's result set. If a cursor is not properly closed (deallocated), the resources will not be freed until the SQL session (connection) itself is closed. This wasting of resources on the server can not only lead to performance degradations but also to failures.
EMPLOYEES TABLE

SQL> DESC EMPLOYEES_DETAILS;
 Name                                      NULL?    TYPE
 ----------------------------------------- -------- --------------------
 EMPLOYEE_ID                               NOT NULL NUMBER(6)
 FIRST_NAME                                         VARCHAR2(20)
 LAST_NAME                                 NOT NULL VARCHAR2(25)
 EMAIL                                     NOT NULL VARCHAR2(30)
 PHONE_NUMBER                                       VARCHAR2(20)
 HIRE_DATE                                 NOT NULL DATE
 JOB_ID                                    NOT NULL VARCHAR2(10)
 SALARY                                             NUMBER(8,2)
 COMMISSION_PCT                                     NUMBER(2,2)
 MANAGER_ID                                         NUMBER(6)
 DEPARTMENT_ID                                      NUMBER(4)

SAMPLE CURSOR KNOWN AS EE

CREATE OR REPLACE PROCEDURE EE AS
BEGIN
  DECLARE
    v_employeeID EMPLOYEES_DETAILS.EMPLOYEE_ID%TYPE;
    v_FirstName  EMPLOYEES_DETAILS.FIRST_NAME%TYPE;
    v_LastName   EMPLOYEES_DETAILS.LAST_NAME%TYPE;
    v_JobID      EMPLOYEES_DETAILS.JOB_ID%TYPE := 'IT_PROG';

    CURSOR c_EMPLOYEES_DETAILS IS
      SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME
        FROM EMPLOYEES_DETAILS
       WHERE JOB_ID = v_JobID;  -- compare against the variable, not the string literal 'v_JOB_ID'
  BEGIN
    OPEN c_EMPLOYEES_DETAILS;
    LOOP
      FETCH c_EMPLOYEES_DETAILS INTO v_employeeID, v_FirstName, v_LastName;
      EXIT WHEN c_EMPLOYEES_DETAILS%NOTFOUND;  -- test right after the fetch, before using the values
      DBMS_OUTPUT.put_line(v_employeeID);
      DBMS_OUTPUT.put_line(v_FirstName);
      DBMS_OUTPUT.put_line(v_LastName);
    END LOOP;
    CLOSE c_EMPLOYEES_DETAILS;
  END;
END;
Just because harmonics are becoming a more prevalent problem doesn't mean the subject is getting any easier to understand
Harmonics are AC voltages and currents with frequencies that are integer multiples of the fundamental frequency. On a 60-Hz system, this could include 2nd order harmonics (120 Hz), 3rd order harmonics (180 Hz), 4th order harmonics (240 Hz), and so on. Normally, only odd-order harmonics (3rd, 5th, 7th, 9th) occur on a 3-phase power system. If you observe even-order harmonics on a 3-phase system, you more than likely have a defective rectifier in your system.
If you connect an oscilloscope to a 120V receptacle, the image on the screen usually isn't a perfect sine wave. It may be very close, but it will likely be different in one of several ways. It might be slightly flattened or dimpled as the magnitude approaches its positive and negative maximum values (Fig. 1). Or perhaps the sine wave is narrowed near the extreme values, giving the waveform a peaky appearance (Fig. 2 below). More than likely, random deviations from the perfect sinusoid occur at specific locations on the sine wave during every cycle (Fig. 3 below).
The flattened and dimpled sinusoid in Fig. 1 has the mathematical equation y = sin(x) + 0.25 sin(3x). This means a 60-Hz sinusoid (the fundamental frequency) added to a second sinusoid with a frequency three times greater than the fundamental (180 Hz) and an amplitude ¼ (0.25 times) that of the fundamental produces a waveform like the one in Fig. 1. The 180-Hz sinusoid is called the third harmonic, since its frequency is three times that of the fundamental frequency.
Similarly, the peaky sinusoid in Fig. 2 has the mathematical equation y = sin(x) - 0.25 sin(3x). This waveform has the same composition as the first waveform, except the third harmonic component is out of phase with the fundamental frequency, as indicated by the negative sign preceding the "0.25 sin(3x)" term. This subtle mathematical difference produces a very different appearance in the waveform.
The waveform in Fig. 3 contains several other harmonics in addition to the third harmonic. Some are in phase with the fundamental frequency and others out of phase. As the harmonic spectrum becomes richer in harmonics, the waveform takes on a more complex appearance, indicating more deviation from the ideal sinusoid. A rich harmonic spectrum may completely obscure the fundamental frequency sinusoid, making a sine wave unrecognizable.
Analyzing harmonics. When the magnitudes and orders of harmonics are known, reconstructing the distorted waveform is simple. Adding the harmonics together, point by point, produces the distorted waveform. The waveform in Fig. 1 is synthesized in Fig. 4 by adding the magnitudes of the two components, the fundamental frequency (red waveform) and the third harmonic (blue waveform), for each value of x, which results in the green waveform.
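Both the point-by-point synthesis just described and the Fourier decomposition discussed next can be sketched in a few lines of Python with NumPy. This assumes one exactly sampled fundamental cycle; the scaling by 2/n converts FFT bin magnitudes back to sinusoid amplitudes.

import numpy as np

n = 1024                                    # samples in one fundamental cycle
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
y = np.sin(x) + 0.25 * np.sin(3.0 * x)      # Fig. 1: fundamental plus 3rd harmonic

spectrum = np.fft.rfft(y)                   # Fourier analysis of the samples
amplitudes = 2.0 * np.abs(spectrum) / n     # scale bins to sinusoid amplitudes

for k in (1, 2, 3):
    print(f"harmonic {k}: amplitude {amplitudes[k]:.3f}")
# prints ~1.000 for the fundamental, ~0.000 for the 2nd, ~0.250 for the 3rd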
Decomposing a distorted waveform into its harmonic components is considerably more difficult. This process requires Fourier analysis, which involves a fair amount of calculus. However, electronic equipment has been developed to perform this analysis on a real-time basis. One manufacturer offers a 3-phase power analyzer that can digitally capture 3-phase waveforms and perform a host of analysis functions, including Fourier analysis, to determine harmonic content. Another manufacturer offers similar capabilities for single-phase applications. Easy-to-use analyzers like these can help detect and diagnose harmonic-related problems on most power systems.

What causes harmonics? If harmonic voltages aren't generated intentionally, where do they come from? One common source of harmonics is iron core devices like transformers. The magnetic characteristics of iron are almost linear over a certain range of flux density, but quickly saturate as the flux density increases. This nonlinear magnetic characteristic is described by a hysteresis curve. Because of the nonlinear hysteresis curve, the excitation current waveform isn't sinusoidal. A Fourier analysis of the excitation current waveform reveals a significant third harmonic component, making it similar to the waveform shown in Fig. 2.
Core iron isn't the only source of harmonics. Generators themselves produce some 5th harmonic voltages due to magnetic flux distortions that occur near the stator slots and nonsinusoidal flux distribution across the air gap. Other producers of harmonics include nonlinear loads like rectifiers, inverters, adjustable-speed motor drives, welders, arc furnaces, voltage controllers, and frequency converters.
Semiconductor switching devices produce significant harmonic voltages as they abruptly chop voltage waveforms during their transition between conducting and cutoff states. Inverter circuits are notorious for producing harmonics, and are in widespread use today. An adjustable-speed motor drive is one application that makes use of inverter circuits, often using pulse width modulation (PWM) synthesis to produce the AC output voltage. Various synthesis methods produce different harmonic spectra. Regardless of the method used to produce an AC output voltage from a DC input voltage, harmonics will be present on both sides of the inverter and must often be mitigated.

Effects of harmonics. Besides distorting the shape of the voltage and current sinusoids, what other effects do harmonics cause? Since harmonic voltages produce harmonic currents with frequencies considerably higher than the power system fundamental frequency, these currents encounter much higher impedances as they propagate through the power system than does the fundamental frequency current. This is due to "skin effect," which is the tendency for higher frequency currents to flow near the surface of the conductor. Since little of the high-frequency current penetrates far beneath the surface of the conductor, less cross-sectional area is used by the current. As the effective cross section of the conductor is reduced, the effective resistance of the conductor is increased. This is expressed in the following equation:
R = ρL/A

where R is the resistance of the conductor, ρ is the resistivity of the conductor material, L is the length of the conductor, and A is the cross-sectional area of the conductor. The higher resistance encountered by the harmonic currents will produce significant heating of the conductor, since heat produced — or power lost — in a conductor is I²R, where I is the current flowing through the conductor.
This increased heating effect is often noticed in two particular parts of the power system: neutral conductors and transformer windings. Harmonics with orders that are odd multiples of the number three (3rd, 9th, 15th, and so on) are particularly troublesome, since they behave like zero-sequence currents. These harmonics, called triplen harmonics, are additive due to their zero-sequence-like behavior. They flow in the system neutral and circulate in delta-connected transformer windings, generating excessive conductor heating in their wake.

Reducing the effects of harmonics. Because of the adverse effect of harmonics on power system components, the IEEE developed standard 519-1992 to define recommended practices for harmonic control. This standard also stipulates the maximum harmonic distortion allowed in the voltage and current waveforms on various types of systems.
Two approaches are available for mitigating the effects of excessive heating due to harmonics, and a combination of the two approaches is often implemented. One strategy is to reduce the magnitude of the harmonic waveforms, usually by filtering. The other method is to use system components that can handle the harmonics more effectively, such as finely stranded conductors and k-factor transformers.
Harmonic filters can be constructed by adding an inductance (L) in series with a power factor correction capacitor (C). The series L-C circuit can be tuned for a frequency close to that of the troublesome harmonic, which is often the 5th. By tuning the filter in this way, you can attenuate the unwanted harmonic.
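A back-of-the-envelope sizing, assuming ideal components, follows from the series-resonance formula f = 1/(2π√(LC)). The 100-µF capacitor below is purely an example figure, and the branch is tuned slightly below the 5th harmonic (300 Hz on a 60-Hz system), as such filters often are in practice, rather than exactly on it.

import math

C = 100e-6               # farads: hypothetical power factor correction capacitor
f_target = 4.7 * 60.0    # Hz: tuned just below the 5th harmonic

# Solve f = 1 / (2*pi*sqrt(L*C)) for the series inductance L.
L = 1.0 / (C * (2.0 * math.pi * f_target) ** 2)
f_check = 1.0 / (2.0 * math.pi * math.sqrt(L * C))

print(f"series inductance: {L * 1e3:.2f} mH (resonates at {f_check:.1f} Hz)")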
Filtering isn't the only means of reducing harmonics. The switching angles of an inverter can be preselected to eliminate some harmonics in the output voltage. This can be a very cost-effective means of reducing inverter-produced harmonics.
Since skin effect is responsible for the increased heating caused by harmonic currents, using conductors with larger surface areas will lessen the heating effects. This can be done by using finely stranded conductors, since the effective surface area of the conductor is the sum of the surface area of each strand.
Specially designed transformers called k-factor transformers are also advantageous when harmonic currents are prevalent. They parallel small conductors in their windings to reduce skin effect and incorporate special core designs to reduce the saturation effects at the higher flux frequencies produced by the harmonics.
You should also increase the size of neutral conductors to better accommodate triplen harmonics. Per the FPN in 210.4(A) and 220.22 of the 2002 NEC, “A 3-phase, 4-wire wye-connected power system used to supply power to nonlinear loads may necessitate that the power system design allow for the possibility of high harmonic neutral currents.” And per 310.15(B)(4)(c), “On a 4-wire, 3-phase wye circuit where the major portion of the load consists of nonlinear loads, harmonic currents are present on the neutral conductor: the neutral shall therefore be considered a current-carrying conductor.” It's important to note that the duct bank ampacity tables in B.310.5 through B.310.7 are designed for a maximum harmonic loading on the neutral conductor of 50% of the phase currents.
Harmonics will undoubtedly continue to become more of a concern as more equipment that produces them is added to electrical systems. But if adequately considered during the initial design of the system, harmonics can be managed and their detrimental effects avoided.
Fehr is an independent engineering consultant located in Clearwater, Fla. | http://ecmweb.com/print/archive/harmonics-made-simple | 13 |
In meteorology, precipitation (also known as one of the classes of hydrometeors, which are atmospheric water phenomena) is any product of the condensation of atmospheric water vapour that falls under gravity. The main forms of precipitation include drizzle, rain, sleet, snow, graupel and hail. Precipitation occurs when a local portion of the atmosphere becomes saturated with water vapour, so that the water condenses and "precipitates". Thus, fog and mist are not precipitation but suspensions, because the water vapour does not condense sufficiently to precipitate. Two processes, possibly acting together, can lead to air becoming saturated: cooling the air or adding water vapour to the air. Generally, precipitation will fall to the surface; an exception is virga, which evaporates before reaching the surface. Precipitation forms as smaller droplets coalesce via collision with other rain drops or ice crystals within a cloud. Rain drops range in size from oblate, pancake-like shapes for larger drops, to small spheres for smaller drops. Unlike raindrops, snowflakes grow in a variety of different shapes and patterns, determined by the temperature and humidity characteristics of the air the snowflake moves through on its way to the ground. While snow and ice pellets require temperatures close to the ground to be near or below freezing, hail can occur during much warmer temperature regimes due to the process of its formation.
Moisture overriding associated with weather fronts is an overall major method of precipitation production. If enough moisture and upward motion is present, precipitation falls from convective clouds such as cumulonimbus and can organize into narrow rainbands. Where relatively warm water bodies are present, for example due to water evaporation from lakes, lake-effect snowfall becomes a concern downwind of the warm lakes within the cold cyclonic flow around the backside of extratropical cyclones. Lake-effect snowfall can be locally heavy. Thundersnow is possible within a cyclone's comma head and within lake effect precipitation bands. In mountainous areas, heavy precipitation is possible where upslope flow is maximized within windward sides of the terrain at elevation. On the leeward side of mountains, desert climates can exist due to the dry air caused by compressional heating. The movement of the monsoon trough, or intertropical convergence zone, brings rainy seasons to savannah climes.
Precipitation is a major component of the water cycle, and is responsible for depositing the fresh water on the planet. Approximately 505,000 cubic kilometres (121,000 cu mi) of water falls as precipitation each year; 398,000 cubic kilometres (95,000 cu mi) of it over the oceans and 107,000 cubic kilometres (26,000 cu mi) over land. Given the Earth's surface area, that means the globally averaged annual precipitation is 990 millimetres (39 in), but over land it is only 715 millimetres (28.1 in). Climate classification systems such as the Köppen climate classification system use average annual rainfall to help differentiate between differing climate regimes.
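A quick sanity check of these figures, assuming an Earth surface area of roughly 510 million square kilometres:

volume_km3 = 505_000        # annual global precipitation, cubic kilometres
area_km2 = 510_000_000      # approximate surface area of the Earth

depth_km = volume_km3 / area_km2
print(f"{depth_km * 1e6:.0f} mm per year")   # ~990 mm, matching the text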
Any phenomenon which was at some point produced due to condensation or precipitation of moisture within the Earth's atmosphere is known as a hydrometeor. Particles composed of fallen precipitation which fell onto the Earth's surface can become hydrometeors if blown off the landscape by wind. Formations due to condensation such as clouds, haze, fog, and mist are composed of hydrometeors. All precipitation types are hydrometeors by definition, including virga, which is precipitation which evaporates before reaching the ground. Particles removed from the Earth's surface by wind such as blowing snow and blowing sea spray are also hydrometeors.
Precipitation is a major component of the water cycle, and is responsible for depositing most of the fresh water on the planet. Approximately 505,000 km³ (121,000 cu mi) of water falls as precipitation each year, 398,000 km³ (95,000 cu mi) of it over the oceans. Given the Earth's surface area, that means the globally averaged annual precipitation is 990 millimetres (39 in).
Mechanisms of producing precipitation include convective, stratiform, and orographic rainfall. Convective processes involve strong vertical motions that can cause the overturning of the atmosphere in that location within an hour and cause heavy precipitation, while stratiform processes involve weaker upward motions and less intense precipitation. Precipitation can be divided into three categories, based on whether it falls as liquid water, liquid water that freezes on contact with the surface, or ice. Mixtures of different types of precipitation, including types in different categories, can fall simultaneously. Liquid forms of precipitation include rain and drizzle. Rain or drizzle that freezes on contact within a subfreezing air mass is called "freezing rain" or "freezing drizzle". Frozen forms of precipitation include snow, ice needles, ice pellets, hail, and graupel.
How the air becomes saturated
Cooling air to its dew point
The dew point is the temperature to which a parcel must be cooled in order to become saturated, and (unless super-saturation occurs) condenses to water. Water vapour normally begins to condense on condensation nuclei such as dust, ice, and salt in order to form clouds. An elevated portion of a frontal zone forces broad areas of lift, which form cloud decks such as altostratus or cirrostratus. Stratus is a stable cloud deck which tends to form when a cool, stable air mass is trapped underneath a warm air mass. It can also form due to the lifting of advection fog during breezy conditions.
There are four main mechanisms for cooling the air to its dew point: adiabatic cooling, conductive cooling, radiational cooling, and evaporative cooling. Adiabatic cooling occurs when air rises and expands. The air can rise due to convection, large-scale atmospheric motions, or a physical barrier such as a mountain (orographic lift). Conductive cooling occurs when the air comes into contact with a colder surface, usually by being blown from one surface to another, for example from a liquid water surface to colder land. Radiational cooling occurs due to the emission of infrared radiation, either by the air or by the surface underneath. Evaporative cooling occurs when moisture is added to the air through evaporation, which forces the air temperature to cool to its wet-bulb temperature, or until it reaches saturation.
Adding moisture to the air
The main ways water vapour is added to the air are: wind convergence into areas of upward motion, precipitation or virga falling from above, daytime heating evaporating water from the surface of oceans, water bodies or wet land, transpiration from plants, cool or dry air moving over warmer water, and lifting air over mountains.
Coalescence occurs when water droplets fuse to create larger water droplets, or when water droplets freeze onto an ice crystal, which is known as the Bergeron process. The fall rate of very small droplets is negligible, hence clouds do not fall out of the sky; precipitation will only occur when these coalesce into larger drops. When air turbulence occurs, water droplets collide, producing larger droplets. As these larger water droplets descend, coalescence continues, so that drops become heavy enough to overcome air resistance and fall as rain.
Raindrops have sizes ranging from 0.1 millimetres (0.0039 in) to 9 millimetres (0.35 in) mean diameter, above which they tend to break up. Smaller drops are called cloud droplets, and their shape is spherical. As a raindrop increases in size, its shape becomes more oblate, with its largest cross-section facing the oncoming airflow. Contrary to the cartoon pictures of raindrops, their shape does not resemble a teardrop. Intensity and duration of rainfall are usually inversely related, i.e., high intensity storms are likely to be of short duration and low intensity storms can have a long duration. Rain drops associated with melting hail tend to be larger than other rain drops. The METAR code for rain is RA, while the coding for rain showers is SHRA.
Ice pellets
Ice pellets or sleet are a form of precipitation consisting of small, translucent balls of ice. Ice pellets are usually (but not always) smaller than hailstones. They often bounce when they hit the ground, and generally do not freeze into a solid mass unless mixed with freezing rain. The METAR code for ice pellets is PL.
Ice pellets form when a layer of above-freezing air exists with sub-freezing air both above and below. This causes the partial or complete melting of any snowflakes falling through the warm layer. As they fall back into the sub-freezing layer closer to the surface, they re-freeze into ice pellets. However, if the sub-freezing layer beneath the warm layer is too small, the precipitation will not have time to re-freeze, and freezing rain will be the result at the surface. A temperature profile showing a warm layer above the ground is most likely to be found in advance of a warm front during the cold season, but can occasionally be found behind a passing cold front.
Like other precipitation, hail forms in storm clouds when supercooled water droplets freeze on contact with condensation nuclei, such as dust or dirt. The storm's updraft blows the hailstones to the upper part of the cloud. The updraft dissipates and the hailstones fall down, back into the updraft, and are lifted again. Hail has a diameter of 5 millimetres (0.20 in) or more. Within METAR code, GR is used to indicate larger hail, of a diameter of at least 6.4 millimetres (0.25 in). GR is derived from the French word grêle. Smaller-sized hail, as well as snow pellets, use the coding of GS, which is short for the French word grésil. Stones just larger than golf ball-sized are one of the most frequently reported hail sizes. Hailstones can grow to 15 centimetres (6 in) and weigh more than 0.5 kilograms (1.1 lb). In large hailstones, latent heat released by further freezing may melt the outer shell of the hailstone. The hailstone then may undergo 'wet growth', where the liquid outer shell collects other smaller hailstones. The hailstone gains an ice layer and grows increasingly larger with each ascent. Once a hailstone becomes too heavy to be supported by the storm's updraft, it falls from the cloud.
Snow crystals form when tiny supercooled cloud droplets (about 10 μm in diameter) freeze. Once a droplet has frozen, it grows in the supersaturated environment. Because water droplets are more numerous than ice crystals, the crystals can grow to hundreds of micrometers or millimeters in size at the expense of the water droplets: the corresponding depletion of water vapor causes the droplets to evaporate, so the ice crystals grow at the droplets' expense. This is known as the Wegener-Bergeron-Findeisen process. These large crystals are an efficient source of precipitation, since they fall through the atmosphere due to their mass, and may collide and stick together in clusters, or aggregates. These aggregates are snowflakes, and are usually the type of ice particle that falls to the ground. Guinness World Records lists the world's largest snowflakes as those of January 1887 at Fort Keogh, Montana; allegedly one measured 38 cm (15 inches) wide. The exact details of the sticking mechanism remain a subject of research.
Although the ice is clear, scattering of light by the crystal facets and hollows/imperfections means that the crystals often appear white in color due to diffuse reflection of the whole spectrum of light by the small ice particles. The shape of the snowflake is determined broadly by the temperature and humidity at which it is formed. Rarely, at a temperature of around −2 °C (28 °F), snowflakes can form in threefold symmetry—triangular snowflakes. The most common snow particles are visibly irregular, although near-perfect snowflakes may be more common in pictures because they are more visually appealing. No two snowflakes are alike: each grows at a different rate and in a different pattern depending on the changing temperature and humidity within the atmosphere that it falls through on its way to the ground. The METAR code for snow is SN, while snow showers are coded SHSN.
Diamond dust
Diamond dust, also known as ice needles or ice crystals, forms at temperatures approaching −40 °F (−40 °C) due to air with slightly higher moisture from aloft mixing with colder, surface based air. They are made of simple ice crystals that are hexagonal in shape. The METAR identifier for diamond dust within international hourly weather reports is IC.
Frontal activity
Stratiform or dynamic precipitation occurs as a consequence of slow ascent of air in synoptic systems (on the order of cm/s), such as over surface cold fronts, and over and ahead of warm fronts. Similar ascent is seen around tropical cyclones outside of the eyewall, and in comma-head precipitation patterns around mid-latitude cyclones. A wide variety of weather can be found along an occluded front, with thunderstorms possible, but usually their passage is associated with a drying of the air mass. Occluded fronts usually form around mature low-pressure areas. Precipitation may also occur on celestial bodies other than Earth: when it gets cold, Mars has precipitation that most likely takes the form of ice needles rather than rain or snow.
Convective rain, or showery precipitation, occurs from convective clouds, e.g., cumulonimbus or cumulus congestus. It falls as showers with rapidly changing intensity. Convective precipitation falls over a certain area for a relatively short time, as convective clouds have limited horizontal extent. Most precipitation in the tropics appears to be convective; however, it has been suggested that stratiform precipitation also occurs. Graupel and hail indicate convection. In mid-latitudes, convective precipitation is intermittent and often associated with baroclinic boundaries such as cold fronts, squall lines, and warm fronts.
Orographic effects
Orographic precipitation occurs on the windward side of mountains and is caused by the rising air motion of a large-scale flow of moist air across the mountain ridge, resulting in adiabatic cooling and condensation. In mountainous parts of the world subjected to relatively consistent winds (for example, the trade winds), a more moist climate usually prevails on the windward side of a mountain than on the leeward or downwind side. Moisture is removed by orographic lift, leaving drier air (see katabatic wind) on the descending and generally warming, leeward side where a rain shadow is observed.
In Hawaii, Mount Waiʻaleʻale, on the island of Kauai, is notable for its extreme rainfall, as it has the second highest average annual rainfall on Earth, with 460 inches (12,000 mm). Storm systems affect the state with heavy rains between October and March. Local climates vary considerably on each island due to their topography, divisible into windward (Koʻolau) and leeward (Kona) regions based upon location relative to the higher mountains. Windward sides face the east to northeast trade winds and receive much more rainfall; leeward sides are drier and sunnier, with less rain and less cloud cover.
In South America, the Andes mountain range blocks Pacific moisture that arrives in that continent, resulting in a desertlike climate just downwind across western Argentina. The Sierra Nevada range creates the same effect in North America, forming the Great Basin and Mojave Deserts.
Extratropical cyclones can bring cold and dangerous conditions with heavy rain and snow, with winds exceeding 119 km/h (74 mph) (sometimes referred to as windstorms in Europe). The band of precipitation that is associated with their warm front is often extensive, forced by weak upward vertical motion of air over the frontal boundary, which condenses as it cools and produces precipitation within an elongated band that is wide and stratiform, meaning it falls from nimbostratus clouds. When moist air tries to dislodge an arctic air mass, overrunning snow can result within the poleward side of the elongated precipitation band. In the Northern Hemisphere, poleward is towards the North Pole, or north. Within the Southern Hemisphere, poleward is towards the South Pole, or south.
Southwest of extratropical cyclones, curved cyclonic flow bringing cold air across the relatively warm water bodies can lead to narrow lake-effect snow bands. Those bands bring strong localized snowfall which can be understood as follows: Large water bodies such as lakes efficiently store heat that results in significant temperature differences (larger than 13 °C or 23 °F) between the water surface and the air above. Because of this temperature difference, warmth and moisture are transported upward, condensing into vertically oriented clouds (see satellite picture) which produce snow showers. The temperature decrease with height and cloud depth are directly affected by both the water temperature and the large-scale environment. The stronger the temperature decrease with height, the deeper the clouds get, and the greater the precipitation rate becomes.
In mountainous areas, heavy snowfall accumulates when air is forced to ascend the mountains and squeeze out precipitation along their windward slopes, which in cold conditions, falls in the form of snow. Because of the ruggedness of terrain, forecasting the location of heavy snowfall remains a significant challenge.
Within the tropics
The wet, or rainy, season is the time of year, covering one or more months, when most of the average annual rainfall in a region falls. The term green season is also sometimes used as a euphemism by tourist authorities. Areas with wet seasons are dispersed across portions of the tropics and subtropics. Savanna climates and areas with monsoon regimes have wet summers and dry winters. Tropical rainforests technically do not have dry or wet seasons, since their rainfall is equally distributed through the year. Some areas with pronounced rainy seasons will see a break in rainfall mid-season when the intertropical convergence zone or monsoon trough move poleward of their location during the middle of the warm season. When the wet season occurs during the warm season, or summer, rain falls mainly during the late afternoon and early evening hours. The wet season is a time when air quality improves, freshwater quality improves, and vegetation grows significantly. Soil nutrients diminish and erosion increases. Animals have adaptation and survival strategies for the wetter regime. The previous dry season leads to food shortages into the wet season, as the crops have yet to mature. Developing countries have noted that their populations show seasonal weight fluctuations due to food shortages seen before the first harvest, which occurs late in the wet season.
Tropical cyclones, a source of very heavy rainfall, consist of large air masses several hundred miles across with low pressure at the centre and with winds blowing inward towards the centre in either a clockwise direction (southern hemisphere) or counterclockwise (northern hemisphere). Although cyclones can take an enormous toll in lives and personal property, they may be important factors in the precipitation regimes of places they impact, as they may bring much-needed precipitation to otherwise dry regions. Areas in their path can receive a year's worth of rainfall from a tropical cyclone passage.
Large-scale geographical distribution
On the large scale, the highest precipitation amounts outside topography fall in the tropics, closely tied to the Intertropical Convergence Zone, itself the ascending branch of the Hadley cell. Mountainous locales near the equator in Colombia are amongst the wettest places on Earth. North and south of this are regions of descending air that form subtropical ridges where precipitation is low; the land surface underneath is usually arid, which forms most of the Earth's deserts. An exception to this rule is in Hawaii, where upslope flow due to the trade winds leads to one of the wettest locations on Earth. Otherwise, the flow of the Westerlies into the Rocky Mountains leads to the wettest, and at elevation snowiest, locations within North America. In Asia during the wet season, the flow of moist air into the Himalayas leads to some of the greatest rainfall amounts measured on Earth in northeast India.
The standard way of measuring rainfall or snowfall is the standard rain gauge, which can be found in 100 mm (4 in) plastic and 200 mm (8 in) metal varieties. The inner cylinder is filled by 25 mm (1 in) of rain, with overflow flowing into the outer cylinder. Plastic gauges have markings on the inner cylinder down to 0.25 mm (0.01 in) resolution, while metal gauges require use of a stick designed with the appropriate 0.25 mm (0.01 in) markings. After the inner cylinder is filled, the amount inside it is discarded, then filled with the remaining rainfall in the outer cylinder until all the fluid in the outer cylinder is gone, adding to the overall total until the outer cylinder is empty. These gauges are used in the winter by removing the funnel and inner cylinder and allowing snow and freezing rain to collect inside the outer cylinder. Some add anti-freeze to their gauge so they do not have to melt the snow or ice that falls into the gauge. Once the snowfall/ice is finished accumulating, or as 300 mm (12 in) is approached, one can either bring the gauge inside to melt its contents, or fill the inner cylinder with lukewarm water to melt the frozen precipitation in the outer cylinder, keeping track of the warm fluid added, which is subsequently subtracted from the overall total once all the ice/snow has melted.
Other types of gauges include the popular wedge gauge (the cheapest rain gauge and most fragile), the tipping bucket rain gauge, and the weighing rain gauge. The wedge and tipping bucket gauges will have problems with snow. Attempts to compensate for snow/ice by warming the tipping bucket meet with limited success, since snow may sublimate if the gauge is kept much above freezing. Weighing gauges with antifreeze should do fine with snow, but again, the funnel needs to be removed before the event begins. For those looking to measure rainfall most inexpensively, a cylindrical can with straight sides will act as a rain gauge if left out in the open, but its accuracy will depend on the ruler used to measure the rain. Any of the above rain gauges can be made at home, with enough know-how.
When a precipitation measurement is made, various networks exist across the United States and elsewhere where rainfall measurements can be submitted through the Internet, such as CoCoRAHS or GLOBE. If a network is not available in the area where one lives, the nearest local weather office will likely be interested in the measurement.
Return period
The likelihood or probability of an event with a specified intensity and duration is called the return period or frequency. The intensity of a storm can be predicted for any return period and storm duration from charts based on historic data for the location. The term "1 in 10 year storm" describes a rainfall event which is rare and likely to occur only once every 10 years on average, so it has a 10 percent likelihood in any given year. The rainfall will be greater and the flooding worse than in the worst storm expected in any single year. The term "1 in 100 year storm" describes a rainfall event which is extremely rare and which will occur with a likelihood of only once in a century, so it has a 1 percent likelihood in any given year. The rainfall will be extreme and the flooding worse than in a 1 in 10 year event. As with all probability events, it is possible to have multiple "1 in 100 year storms" in a single year.
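The annual percentages quoted above follow from treating each year as an independent trial, so the chance of seeing at least one "1 in n year" event over m years is 1 - (1 - 1/n)^m. A minimal R sketch (the independence assumption is an idealization):

# Probability of at least one "1 in n year" event over m years,
# assuming independent years with annual probability 1/n.
p_at_least_one <- function(n, m) 1 - (1 - 1/n)^m

p_at_least_one(100, 1)    # 0.01  -> 1% chance in any single year
p_at_least_one(100, 30)   # ~0.26 -> about a 26% chance over 30 years
p_at_least_one(10, 10)    # ~0.65 -> a "10 year" storm is far from guaranteed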
Role in Köppen climate classification
The Köppen classification depends on average monthly values of temperature and precipitation. The most commonly used form of the Köppen classification has five primary types labeled A through E. Specifically, the primary types are A, tropical; B, dry; C, mild mid-latitude; D, cold mid-latitude; and E, polar. The five primary classifications can be further divided into secondary classifications such as rain forest, monsoon, tropical savanna, humid subtropical, humid continental, oceanic climate, Mediterranean climate, steppe, subarctic climate, tundra, polar ice cap, and desert.
Rain forests are characterized by high rainfall, with definitions setting minimum normal annual rainfall between 1,750 millimetres (69 in) and 2,000 millimetres (79 in). A tropical savanna is a grassland biome located in semi-arid to semi-humid climate regions of subtropical and tropical latitudes, with rainfall between 750 millimetres (30 in) and 1,270 millimetres (50 in) a year. Savannas are widespread in Africa, and are also found in India, the northern parts of South America, Malaysia, and Australia. In the humid subtropical climate zone, winter rainfall (and sometimes snowfall) is associated with large storms that the westerlies steer from west to east. Most summer rainfall occurs during thunderstorms and from occasional tropical cyclones. Humid subtropical climates lie on the east sides of continents, roughly between latitudes 20° and 40° away from the equator.
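The rainfall thresholds quoted above can be expressed as a simple lookup. The R sketch below is deliberately simplified and rainfall-only; the real Köppen scheme also uses monthly temperature and precipitation, so this is illustrative, not a faithful classifier:

# Simplified, rainfall-only sketch of the thresholds quoted above.
classify_by_rainfall <- function(annual_mm) {
  if (annual_mm >= 1750) "rain forest range (>= ~1,750-2,000 mm)"
  else if (annual_mm >= 750 && annual_mm <= 1270) "tropical savanna range"
  else "outside the two ranges quoted above"
}
classify_by_rainfall(2100)   # rain forest range
classify_by_rainfall(900)    # tropical savanna range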
An oceanic (or maritime) climate is typically found along the west coasts at the middle latitudes of all the world's continents, bordering cool oceans, as well as southeastern Australia, and is accompanied by plentiful precipitation year round. The Mediterranean climate regime resembles the climate of the lands in the Mediterranean Basin, parts of western North America, parts of Western and South Australia, in southwestern South Africa and in parts of central Chile. The climate is characterized by hot, dry summers and cool, wet winters. A steppe is a dry grassland. Subarctic climates are cold with continuous permafrost and little precipitation.
Effect on agriculture
Precipitation, especially rain, has a dramatic effect on agriculture. All plants need at least some water to survive; therefore rain (being the most effective means of watering) is important to agriculture. While a regular rain pattern is usually vital to healthy plants, too much or too little rainfall can be harmful, even devastating, to crops. Drought can kill crops and increase erosion, while overly wet weather can cause harmful fungus growth. Plants need varying amounts of rainfall to survive. For example, certain cacti require small amounts of water, while tropical plants may need up to hundreds of inches of rain per year to survive.
In areas with wet and dry seasons, soil nutrients diminish and erosion increases during the wet season. Animals have adaptation and survival strategies for the wetter regime. The previous dry season leads to food shortages into the wet season, as the crops have yet to mature. Developing countries have noted that their populations show seasonal weight fluctuations due to food shortages seen before the first harvest, which occurs late in the wet season.
Changes due to global warming
Increasing temperatures tend to increase evaporation which leads to more precipitation. Precipitation has generally increased over land north of 30°N from 1900 through 2005 but has declined over the tropics since the 1970s. Globally there has been no statistically significant overall trend in precipitation over the past century, although trends have varied widely by region and over time. Eastern portions of North and South America, northern Europe, and northern and central Asia have become wetter. The Sahel, the Mediterranean, southern Africa and parts of southern Asia have become drier. There has been an increase in the number of heavy precipitation events over many areas during the past century, as well as an increase since the 1970s in the prevalence of droughts—especially in the tropics and subtropics. Changes in precipitation and evaporation over the oceans are suggested by the decreased salinity of mid- and high-latitude waters (implying more precipitation), along with increased salinity in lower latitudes (implying less precipitation, more evaporation, or both). Over the contiguous United States, total annual precipitation increased at an average rate of 6.1 percent per century since 1900, with the greatest increases within the East North Central climate region (11.6 percent per century) and the South (11.1 percent). Hawaii was the only region to show a decrease (-9.25 percent).
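The per-century rates quoted above are, in effect, slopes of linear trends fitted to annual totals. A hedged R sketch of such a fit, using synthetic data invented purely for illustration:

# Fitting a linear trend to annual precipitation totals (synthetic data).
set.seed(1)
year <- 1900:2005
precip <- 900 + 0.55 * (year - 1900) + rnorm(length(year), sd = 40)
fit <- lm(precip ~ year)
coef(fit)["year"]          # estimated trend in mm per year
coef(fit)["year"] * 100    # expressed per century, as in the figures above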
Changes due to urban heat island
The urban heat island warms cities 0.6 °C (1.1 °F) to 5.6 °C (10.1 °F) above surrounding suburbs and rural areas. This extra heat leads to greater upward motion, which can induce additional shower and thunderstorm activity. Rainfall rates downwind of cities are increased between 48% and 116%. Partly as a result of this warming, monthly rainfall is about 28% greater between 20 miles (32 km) and 40 miles (64 km) downwind of cities, compared with upwind. Some cities induce a total precipitation increase of 51%.
Forecasting
The Quantitative Precipitation Forecast (abbreviated QPF) is the expected amount of liquid precipitation accumulated over a specified time period over a specified area. A QPF will be specified when a measurable precipitation type reaching a minimum threshold is forecast for any hour during a QPF valid period. Precipitation forecasts tend to be bound by synoptic hours such as 0000, 0600, 1200 and 1800 GMT. Terrain is considered in QPFs by use of topography or based upon climatological precipitation patterns from observations with fine detail. Starting in the mid to late 1990s, QPFs were used within hydrologic forecast models to simulate impact to rivers throughout the United States. Forecast models show significant sensitivity to humidity levels within the planetary boundary layer, or in the lowest levels of the atmosphere, which decreases with height. A QPF can be generated on a quantitative basis (forecasting amounts) or a qualitative basis (forecasting the probability of reaching a specific amount). Radar imagery forecasting techniques show higher skill than model forecasts within six to seven hours of the time of the radar image. The forecasts can be verified through use of rain gauge measurements, weather radar estimates, or a combination of both. Various skill scores can be determined to measure the value of the rainfall forecast.
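One simple way to verify a QPF against rain gauge measurements is with mean absolute error and bias. The R sketch below uses purely illustrative numbers; real verification uses a range of more elaborate skill scores:

# Verifying a QPF against rain gauge observations (illustrative values, mm).
forecast <- c(5, 12, 0, 25, 8)
observed <- c(3, 15, 1, 19, 8)
mae  <- mean(abs(forecast - observed))   # mean absolute error
bias <- mean(forecast - observed)        # positive = over-forecasting
c(MAE = mae, bias = bias)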
See also
- List of meteorology topics
- Basic precipitation
- Mango showers, pre-monsoon showers in the Indian states of Karnataka and Kerala that help in the ripening of mangoes.
- Sunshower, an unusual meteorological phenomenon in which rain falls while the sun is shining.
- Wintry showers, an informal meteorological term for various mixtures of rain, freezing rain, sleet and snow.
- "Precipitation". Glossary of Meteorology. American Meteorological Society. 2009. Retrieved 2009-01-02.
- Dr. Chowdhury's Guide to Planet Earth (2005). "The Water Cycle". WestEd. Retrieved 2006-10-24.
- Dr. Jim Lochner (1998). "Ask an Astrophysicist". NASA Goddard Space Flight Center. Retrieved 2009-01-16.
- Glossary of Meteorology (2009). "Hydrometeor". American Meteorological Society. Retrieved 2009-07-16.
- The American Meteor Society (2001-08-27). "Definition of terms by the IAU Commission 22, 1961". Archived from the original on 2009-04-20. Retrieved 2009-07-16.
- Emmanouil N. Anagnostou (2004). "A convective/stratiform precipitation classification algorithm for volume scanning weather radar observations". Meteorological Applications (Cambridge University Press) 11 (4): 291–300. Bibcode:2004MeApp..11..291A. doi:10.1017/S1350482704001409.
- A.J. Dore, M. Mousavi-Baygi, R.I. Smith, J. Hall, D. Fowler and T.W. Choularton (June 2006). "A model of annual orographic precipitation and acid deposition and its application to Snowdonia". Atmosphere Environment 40 (18): 3316–3326. doi:10.1016/j.atmosenv.2006.01.043.
- Robert Penrose Pearce (2002). Meteorology at the Millennium. Academic Press. p. 66. ISBN 978-0-12-548035-2. Retrieved 2009-01-02.
- Jan Jackson (2008). "All About Mixed Winter Precipitation". National Weather Service. Retrieved 2009-02-07.
- Glossary of Meteorology (June 2000). "Dewpoint". American Meteorological Society. Retrieved 2011-01-31.
- FMI (2007). "Fog And Stratus - Meteorological Physical Background". Zentralanstalt für Meteorologie und Geodynamik. Retrieved 2009-02-07.
- Glossary of Meteorology (2009). "Adiabatic Process". American Meteorological Society. Retrieved 2008-12-27.
- TE Technology, Inc (2009). "Peltier Cold Plate". Retrieved 2008-12-27.
- Glossary of Meteorology (2009). "Radiational cooling". American Meteorological Society. Retrieved 2008-12-27.
- Robert Fovell (2004). "Approaches to saturation". University of California in Los Angelese. Retrieved 2009-02-07.
- National Weather Service Office, Spokane, Washington (2009). "Virga and Dry Thunderstorms". Retrieved 2009-01-02.
- Bart van den Hurk and Eleanor Blyth (2008). "Global maps of Local Land-Atmosphere coupling". KNMI. Retrieved 2009-01-02.
- H. Edward Reiley, Carroll L. Shry (2002). Introductory horticulture. Cengage Learning. p. 40. ISBN 978-0-7668-1567-4. Retrieved 2011-01-31.
- National Weather Service JetStream (2008). "Air Masses". Retrieved 2009-01-02.
- Dr. Michael Pidwirny (2008). "CHAPTER 8: Introduction to the Hydrosphere (e). Cloud Formation Processes". Physical Geography. Retrieved 2009-01-01.
- Paul Sirvatka (2003). "Cloud Physics: Collision/Coalescence; The Bergeron Process". College of DuPage. Retrieved 2009-01-01.
- United States Geological Survey (2009). "Are raindrops tear shaped?". United States Department of the Interior. Retrieved 2008-12-27.
- J . S. 0guntoyinbo and F. 0. Akintola (1983). "Rainstorm characteristics affecting water availability for agriculture". IAHS Publication Number 140. Retrieved 2008-12-27.
- Robert A. Houze Jr (1997). "Stratiform Precipitation in Regions of Convection: A Meteorological Paradox?". Bulletin of the American Meteorological Society 78 (10): 2179–2196. Bibcode:1997BAMS...78.2179H. doi:10.1175/1520-0477(1997)078<2179:SPIROC>2.0.CO;2.
- Norman W. Junker (2008). "An ingredients based methodology for forecasting precipitation associated with MCS’s". Hydrometeorological Prediction Center. Retrieved 2009-02-07.
- Alaska Air Flight Service Station (2007-04-10). "SA-METAR". Federal Aviation Administration via the Internet Wayback Machine. Archived from the original on 2008-05-01. Retrieved 2009-08-29.
- "Hail (glossary entry)". National Oceanic and Atmospheric Administration's National Weather Service. Retrieved 2007-03-20.
- Weatherquestions.com. "What causes ice pellets (sleet)?". Retrieved 2007-12-08.
- Glossary of Meteorology (2009). "Hail". American Meteorological Society. Retrieved 2009-07-15.
- Ryan Jewell and Julian Brimelow (2004-08-17). "P9.5 Evaluation of an Alberta Hail Growth Model Using Severe Hail Proximity Soundings in the United States". Retrieved 2009-07-15.
- National Severe Storms Laboratory (2007-04-23). "Aggregate hailstone". National Oceanic and Atmospheric Administration. Retrieved 2009-07-15.
- Julian C. Brimelow, Gerhard W. Reuter, and Eugene R. Poolman (October 2002). "Modeling Maximum Hail Size in Alberta Thunderstorms". Weather and Forecasting 17 (5): 1048–1062. Bibcode:2002WtFor..17.1048B. doi:10.1175/1520-0434(2002)017<1048:MMHSIA>2.0.CO;2.
- Jacque Marshall (2000-04-10). "Hail Fact Sheet". University Corporation for Atmospheric Research. Retrieved 2009-07-15.
- M. Klesius (2007). "The Mystery of Snowflakes". National Geographic 211 (1): 20. ISSN 0027-9358.
- William J. Broad (2007-03-20). "Giant Snowflakes as Big as Frisbees? Could Be". New York Times. Retrieved 2009-07-12.
- Jennifer E. Lawson (2001). Hands-on Science: Light, Physical Science (matter) - Chapter 5: The Colors of Light. Portage & Main Press. p. 39. ISBN 978-1-894110-63-1. Retrieved 2009-06-28.
- Kenneth G. Libbrecht (2006-09-11). "Guide to Snowflakes". California Institute of Technology. Retrieved 2009-06-28.
- John Roach (2007-02-13). ""No Two Snowflakes the Same" Likely True, Research Reveals". National Geographic News. Retrieved 2009-07-14.
- Kenneth Libbrecht (Winter 2004/2005). "Snowflake Science". American Educator. Retrieved 2009-07-14.
- Glossary of Meteorology (June 2000). "Diamond Dust". American Meteorological Society. Retrieved 2010-01-21.
- Kenneth G. Libbrecht (2001). "Morphogenesis on Ice: The Physics of Snow Crystals". Engineering & Science (California Institute of Technology) (1): 12. Retrieved 2010-01-21.
- B. Geerts (2002). "Convective and stratiform rainfall in the tropics". University of Wyoming. Retrieved 2007-11-27.
- David Roth (2006). "Unified Surface Analysis Manual". Hydrometeorological Prediction Center. Retrieved 2006-10-22.
- Glossary of Meteorology (2009). "Graupel". American Meteorological Society. Retrieved 2009-01-02.
- Toby N. Carlson (1991). Mid-latitude Weather Systems. Routledge. p. 216. ISBN 978-0-04-551115-0. Retrieved 2009-02-07.
- Diana Leone (2002). "Rain supreme". Honolulu Star-Bulletin. Retrieved 2008-03-19.
- Western Regional Climate Center (2002). "Climate of Hawaii". Retrieved 2008-03-19.
- Paul E. Lydolph (1985). The Climate of the Earth. Rowman & Littlefield. p. 333. ISBN 978-0-86598-119-5. Retrieved 2009-01-02.
- Michael A. Mares (1999). Encyclopedia of Deserts. University of Oklahoma Press. p. 252. ISBN 978-0-8061-3146-7. Retrieved 2009-01-02.
- Adam Ganson (2003). "Geology of Death Valley". Indiana University. Retrieved 2009-02-07.
- Joan Von Ahn; Joe Sienkiewicz; Greggory McFadden (2005-04). "Hurricane Force Extratropical Cyclones Observed Using QuikSCAT Near Real Time Winds". Mariners Weather Log (Voluntary Observing Ship Program) 49 (1). Retrieved 2009-07-07.
- Owen Hertzman (1988). Three-Dimensional Kinematics of Rainbands in Midlatitude Cyclones Abstract. PhD thesis. University of Washington. Bibcode:1988PhDT.......110H.
- Yuh-Lang Lin (2007). Mesoscale Dynamics. Cambridge University Press. p. 405. ISBN 978-0-521-80875-0. Retrieved 2009-07-07.
- B. Geerts (1998). "Lake Effect Snow.". University of Wyoming. Retrieved 2008-12-24.
- Greg Byrd (1998-06-03). "Lake Effect Snow". University Corporation for Atmospheric Research. Retrieved 2009-07-12.
- Karl W. Birkeland and Cary J. Mock (1996). "Atmospheric Circulation Patterns Associated With Heavy Snowfall Events, Bridger Bowl, Montana, USA". Mountain Research and Development (International Mountain Society) 16 (3): 281–286. doi:10.2307/3673951. JSTOR 3673951.
- Glossary of Meteorology (2009). "Rainy season". American Meteorological Society. Retrieved 2008-12-27.
- Costa Rica Guide (2005). "When to Travel to Costa Rica". ToucanGuides. Retrieved 2008-12-27.
- Michael Pidwirny (2008). "CHAPTER 9: Introduction to the Biosphere". PhysicalGeography.net. Retrieved 2008-12-27.
- Elisabeth M. Benders-Hyde (2003). "World Climates". Blue Planet Biomes. Retrieved 2008-12-27.
- Mei Zheng (2000). "The sources and characteristics of atmospheric particulates during the wet and dry seasons in Hong Kong". University of Rhode Island. Retrieved 2008-12-27.
- S. I. Efe, F. E. Ogban, M. J. Horsfall, E. E. Akporhonor (2005). "Seasonal Variations of Physico-chemical Characteristics in Water Resources Quality in Western Niger Delta Region, Nigeria". Journal of Applied Scientific Environmental Management 9 (1): 191–195. ISSN 1119-8362. Retrieved 2008-12-27.
- C. D. Haynes, M. G. Ridpath, M. A. J. Williams (1991). Monsoonal Australia. Taylor & Francis. p. 90. ISBN 978-90-6191-638-3. Retrieved 2008-12-27.
- Marti J. Van Liere, Eric-Alain D. Ategbo, Jan Hoorweg, Adel P. Den Hartog, and Joseph G. A. J. Hautvast (1994). "The significance of socio-economic characteristics for adult seasonal body-weight fluctuations: a study in north-western Benin". British Journal of Nutrition (Cambridge University Press) 72 (3): 479–488. doi:10.1079/BJN19940049. PMID 7947661.
- Chris Landsea (2007). "Subject: D3 - Why do tropical cyclones' winds rotate counter-clockwise (clockwise) in the Northern (Southern) Hemisphere?". National Hurricane Center. Retrieved 2009-01-02.
- Climate Prediction Center (2005). "2005 Tropical Eastern North Pacific Hurricane Outlook". National Oceanic and Atmospheric Administration. Retrieved 2006-05-02.
- Jack Williams (2005-05-17). "Background: California's tropical storms". USA Today. Retrieved 2009-02-07.
- National Climatic Data Center (2005-08-09). "Global Measured Extremes of Temperature and Precipitation". National Oceanic and Atmospheric Administration. Retrieved 2007-01-18.
- Dr. Owen E. Thompson (1996). Hadley Circulation Cell. Channel Video Productions. Retrieved on 2007-02-11.
- ThinkQuest team 26634 (1999). The Formation of Deserts. Oracle ThinkQuest Education Foundation. Retrieved on 2009-02-16.
- "USGS 220427159300201 1047.0 Mt. Waialeale Rain Gage nr Lihue, Kauai, HI". USGS Real-time rainfall data at Waiʻaleʻale Raingauge. Retrieved 2008-12-11.
- USA Today. Mt. Baker snowfall record sticks. Retrieved on 2008-02-29.
- National Weather Service Office, Northern Indiana (2009). "8 Inch Non-Recording Standard Rain Gauge". Retrieved 2009-01-02.
- Chris Lehmann (2009). "10/00". Central Analytical Laboratory. Retrieved 2009-01-02.
- National Weather Service Office Binghamton, New York (2009). "Rainguage Information". Retrieved 2009-01-02.
- National Weather Service (2009). "Glossary: W". Retrieved 2009-01-01.
- Discovery School (2009). "Build Your Own Weather Station". Discovery Education. Archived from the original on 2008-12-26. Retrieved 2009-01-02.
- "Community Collaborative Rain, Hail & Snow Network Main Page". Colorado Climate Center. 2009. Retrieved 2009-01-02.
- The Globe Program (2009). "Global Learning and Observations to Benefit the Environment Program". Retrieved 2009-01-02.
- National Weather Service (2009). "NOAA's National Weather Service Main Page". Retrieved 2009-01-01.
- Glossary of Meteorology (June 2000). "Return period". American Meteorological Society. Retrieved 2009-01-02.
- Glossary of Meteorology (June 2000). "Rainfall intensity return period". American Meteorological Society. Retrieved 2009-01-02.
- Boulder Area Sustainability Information Network (2005). "What is a 100 year flood?". Boulder Community Network. Retrieved 2009-01-02.
- Peel, M. C. and Finlayson, B. L. and McMahon, T. A. (2007). "Updated world map of the Köppen-Geiger climate classification". Hydrol. Earth Syst. Sci. 11: 1633–1644. doi:10.5194/hess-11-1633-2007. ISSN 1027-5606. (direct: Final Revised Paper)
- Susan Woodward (1997-10-29). "Tropical Broadleaf Evergreen Forest: The Rainforest". Radford University. Retrieved 2008-03-14.
- Susan Woodward (2005-02-02). "Tropical Savannas". Radford University. Retrieved 2008-03-16.
- "Humid subtropical climate". Encyclopædia Britannica. Encyclopædia Britannica Online. 2008. Retrieved 2008-05-14.
- Michael Ritter (2008-12-24). "Humid Subtropical Climate". University of Wisconsin–Stevens Point. Retrieved 2008-03-16.
- Lauren Springer Ogden (2008). Plant-Driven Design. Timber Press. p. 78. ISBN 978-0-88192-877-8.
- Michael Ritter (2008-12-24). "Mediterranean or Dry Summer Subtropical Climate". University of Wisconsin–Stevens Point. Retrieved 2009-07-17.
- Brynn Schaffner and Kenneth Robinson (2003-06-06). "Steppe Climate". West Tisbury Elementary School. Retrieved 2008-04-15.
- Michael Ritter (2008-12-24). "Subarctic Climate". University of Wisconsin–Stevens Point. Retrieved 2008-04-16.
- Bureau of Meteorology (2010). "Living With Drought". Commonwealth of Australia. Retrieved 2010-01-15.
- Robert Burns (2007-06-06). "Texas Crop and Weather". Texas A&M University. Retrieved 2010-01-15.
- James D. Mauseth (2006-07-07). "Mauseth Research: Cacti". University of Texas. Retrieved 2010-01-15.
- A. Roberto Frisancho (1993). Human Adaptation and Accommodation. University of Michigan Press, pp. 388. ISBN 978-0-472-09511-7. Retrieved on 2008-12-27.
- Climate Change Division (2008-12-17). "Precipitation and Storm Changes". United States Environmental Protection Agency. Retrieved 2009-07-17.
- Dale Fuchs (2005-06-28). "Spain goes hi-tech to beat drought". London: The Guardian. Retrieved 2007-08-02.
- Goddard Space Flight Center (2002-06-18). "[[NASA]] Satellite Confirms Urban Heat Islands Increase Rainfall Around Cities". National Aeronautics and Space Administration. Retrieved 2009-07-17. Wikilink embedded in URL title (help)[dead link]
- Jack S. Bushong (1999). "Quantitative Precipitation Forecast: Its Generation and Verification at the Southeast River Forecast Center". University of Georgia. Retrieved 2008-12-31.
- Daniel Weygand (2008). "Optimizing Output From QPF Helper". National Weather Service Western Region. Retrieved 2008-12-31.
- Noreen O. Schwein (2009). "Optimization of quantitative precipitation forecast time horizons used in river forecasts". American Meteorological Society. Retrieved 2008-12-31.
- Christian Keil, Andreas Röpnack, George C. Craig, and Ulrich Schumann (2008-12-31). "Sensitivity of quantitative precipitation forecast to height dependent changes in humidity". Geophysical Research Letters 35 (9): L09812. Bibcode:2008GeoRL..3509812K. doi:10.1029/2008GL033657.
- P. Reggiani and A. H. Weerts (2007). "Probabilistic Quantitative Precipitation Forecast for Flood Prediction: An Application". Journal of Hydrometeorology 9 (1): 76–95. Bibcode:2008JHyMe...9...76R. doi:10.1175/2007JHM858.1. Retrieved 2008-12-31.
- Charles Lin (2005). "Quantitative Precipitation Forecast (QPF) from Weather Prediction Models and Radar Nowcasts, and Atmospheric Hydrological Modelling for Flood Simulation". Achieving Technological Innovation in Flood Forecasting Project. Retrieved 2009-01-01.
New Earth-like planets: How did astronomers find them?
NASA's Kepler spacecraft has spotted a pair of rocky Earth-sized planets orbiting a distant star. How do you find a new planet?
Since 2009, NASA's Kepler spacecraft has been sitting in space, pointing its telescope at a patch of the sky near the constellations Cygnus and Lyra. Its field of view, a region of the Milky Way galaxy about the size of two open hands raised to the cosmos, contains roughly 160,000 stars. Scientists on the Kepler team are interested not in these stars themselves, but in the planets that may orbit them.
"The goal of the Kepler mission is to find planets like Earth in the habitable zones of their parent stars," said Guillermo Torres, a member of the Kepler team based at the Harvard-Smithsonian Center for Astrophysics. They are looking for Earth twins, because these are the likeliest candidates for worlds that could host extraterrestrial life.
To find these alien Earths, the Kepler team uses a technique called the "transit method." They scour the data collected by the Kepler telescope looking for slight drops in the intensity of light coming from any of the stars in its line of sight. About 90 percent of the time, these dips in brightness signify that a planet "has passed in front of its star, essentially eclipsing the light," Torres told Life's Little Mysteries.
Planets the size of Earth passing in front of a star typically cause it to dim by only one-hundredth of a percent — akin to the drop in brightness of a car's headlight when a fly crosses in front of it, the scientists say. To detect these faint and faraway eclipses, the Kepler telescope must be extremely sensitive and it must be stationed in space, away from the glare and turbulence of Earth's atmosphere.
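That "one-hundredth of a percent" figure follows from comparing the planet's and star's cross-sectional areas: the fractional dip is roughly (planet radius / star radius) squared. A quick R check, assuming an Earth-sized planet crossing a Sun-sized star:

# Transit depth ~ (planet radius / star radius)^2.
r_earth_km <- 6371
r_sun_km   <- 696000
depth <- (r_earth_km / r_sun_km)^2
depth * 100   # ~0.0084 percent, roughly the one-hundredth of a percent quoted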
Using the transit method, Kepler has detected 2,326 "candidate planets" so far, Torres said. Those are dips detected in starlight that are probably caused by passing planets, but for which other alternative explanations haven't yet been ruled out.
"The signals are an indication that something is crossing in front of the star and then you have to confirm it's a planet, not something else," he said. "Roughly 90 percent of the signals that Kepler detects are true planets. The other 10 percent of the cases are false positives. We're not happy with leaving the probability at 90 percent — we've set a higher bar — so, even though a priori we know a signal is 90-percent sure [to be a planet], we do more work."
To confirm that a candidate is a true exoplanet — a planet outside our solar system — the Kepler scientists use the world's largest ground-based telescopes to study the star in question, looking for alternative explanations for the transit signal. "One example is an eclipsing binary in the background of the star. There could be two stars behind the star [of interest] that are orbiting each other and eclipsing each other, but because they're in the background they're much fainter. So their light is diluted by the brighter star," he said.
With today's (Dec. 20) announcement of five new confirmed exoplanets orbiting a star called Kepler-20, located 950 light-years away, including two that are Earth-size, the number of confirmed Kepler exoplanets has moved up to 33.
Space rendezvous
A space rendezvous between two spacecraft, often between a spacecraft and a space station, is an orbital maneuver in which the two arrive in the same orbit, match orbital velocities, and are brought close together (an approach, or taxiing, maneuver); it may or may not include docking.
- A visit to the International Space Station by a Space Shuttle or Soyuz spacecraft (manned), or by a Progress resupply spacecraft (unmanned)
- Visit to the Hubble Space Telescope (unmanned), for servicing, by Space Shuttle (manned), and possibly in future by the Hubble Robotic Vehicle (HRV) to be developed (unmanned)
- Moon landing crew returning from the Moon in the ascent stage of the Apollo Lunar Module (LM), to the Apollo Command/Service Module (CSM) orbiting the Moon (Project Apollo) (both manned)
- The STS-49 crew attached a rocket motor to the Intelsat VI (F-3) communications satellite to allow it an orbital maneuver
Alternatively the two are already together, and just undock and dock in a different way:
- Soyuz spacecraft from one docking point to another on the ISS
- in the Apollo program, an hour or so after trans-lunar injection, the stack consisted of the third stage of the Saturn V rocket, the LM inside the LM adapter, and the CSM (in order from bottom to top at launch, which was also the order from back to front with respect to the current motion), with the CSM manned and the LM at this stage unmanned:
- the CSM separated, while the four upper panels of the LM adapter were disposed of
- the CSM turned 180 degrees (from engine pointing backward, toward the LM, to engine pointing forward)
- the CSM connected to the LM while that was still connected to the third stage
- the CSM/LM combination then separated from the third stage
Another kind of "rendezvous" took place in 1969, when the Apollo 12 mission involved a manned landing on the Moon within walking distance of the unmanned Surveyor 3, which had made a soft landing in 1967. Parts of the Surveyor were brought back. Later analysis suggested that bacteria had survived their stay on the Moon.
On August 12, 1962 Vostok 3 and Vostok 4 were placed into adjacent orbits and passed within several kilometers of each other, but did not have the orbital maneuvering capability to perform a space rendezvous. This was also the case on June 16, 1963 when Vostok 5 and Vostok 6 were launched into adjacent orbits.
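Rendezvous maneuvering exploits the fact that orbital period grows with altitude, so a chaser in a slightly lower orbit circles faster and gradually catches up with its target; without the ability to change orbit, as in the Vostok flights above, two spacecraft simply drift apart. A minimal R sketch using the standard circular-orbit period formula for Earth (altitudes are illustrative):

# Circular-orbit period around Earth: T = 2*pi*sqrt(a^3 / mu).
mu <- 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
r_earth <- 6371e3       # mean Earth radius, m
period_s <- function(alt_km) 2 * pi * sqrt((r_earth + alt_km * 1e3)^3 / mu)

period_s(400) / 60   # ~92.4 min at 400 km (an ISS-like orbit)
period_s(380) / 60   # slightly shorter: a lower chaser slowly laps the target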
An example of an undesired rendezvous in space is an uncontrolled one with space debris.
Absorption is the process whereby toxicants gain entrance to the body. Ingested and inhaled materials, however, are considered outside the body until they cross the cellular barriers of the gastrointestinal tract or the respiratory system. To exert an effect on internal organs a toxicant must be absorbed, although local toxicity, such as irritation, may occur.
Absorption varies greatly with specific chemicals and with the route of exposure. For skin, oral or respiratory exposure, the exposure dose (or, "outside" dose) is usually only a fraction of the absorbed dose (that is, the internal dose). For substances injected or implanted directly into the body, exposure dose is the same as the absorbed or internal dose.
Several factors affect the likelihood that a foreign chemical or, xenobiotic, will be absorbed. The most important are:
- route of exposure;
- concentration of the substance at the site of contact; and
- chemical and physical properties of the substance.
The relative roles of concentration and properties of the substance vary with the route of exposure. In some cases, a high percentage of a substance may not be absorbed from one route whereas a low amount may be absorbed via another route. For example, very little DDT powder will penetrate the skin whereas a high percentage will be absorbed when it is swallowed. Due to such route-specific differences in absorption, xenobiotics are often ranked for hazard in accordance with the route of exposure. A substance may be categorized as relatively non-toxic by one route and highly toxic via another route.
The primary routes of exposure by which xenobiotics can gain entry into the body are:
Other routes of exposure—used primarily for specific medical purposes—are:
For a xenobiotic to enter the body (as well as move within, and leave the body) it must pass across cell membranes (cell walls). Cell membranes are formidable barriers and major body defenses that prevent foreign invaders or substances from gaining entry into body tissues. Normally, cells in solid tissues (for example, skin or mucous membranes of the lung or intestine) are so tightly compacted that substances can not pass between them. Entry, therefore, requires that the xenobiotic have some capaability to penetrate cell membranes. Also, the substance must cross several membranes in order to go from one area of the body to another.
In essence, for a substance to move through one cell requires that it first move across the cell membrane into the cell, pass across the cell, and then cross the cell membrane again in order to leave the cell. This is true whether the cells are in the skin, the lining of a blood vessel, or an internal organ (for example, the liver). In many cases, in order for a substance to reach its site of toxic action, it must pass through several membrane barriers.
A foreign chemical will pass through several membranes before it comes into contact with, and can damage, the nucleus of a liver cell.
Cell membranes (often referred to as "plasma membranes") surround all body cells and are basically similar in structure. They consist of two layers of phospholipid molecules arranged like a sandwich (referred to as a "phospholipid bilayer"). Each phospholipid molecule consists of a phosphate head and a lipid tail. The phosphate head is polar; that is, it is hydrophilic (attracted to water). In contrast, the lipid tail is lipophilic (attracted to lipid-soluble substances).
The two phospholipid layers are oriented on opposing sides of the membrane so that they are approximate mirror images of each other. The polar heads face outward and the lipid tails inward in the membrane sandwich, as illustrated in Figure 2.
The cell membrane is tightly packed with these phospholipid molecules—interspersed with various proteins and cholesterol molecules. Some proteins span across the entire membrane providing for the formation of aqueous channels or pores.
Some toxicants move across a membrane barrier with relative ease while others find it difficult or impossible. Those that can cross the membrane, do so by one of two general methods: either passive transfer or facilitated transport.
Passive transfer consists of simple diffusion (or osmotic filtration) and is "passive" in that there is no requirement for cellular energy or assistance.
Some toxicants can not simply diffuse across the membrane. They require assistance that is facilitated by specialized transport mechanisms. The primary types of specialized transport mechanisms are:
- facilitated diffusion;
- active transport; and
- endocytosis (phagocytosis and pinocytosis).
Passive transfer is the most common way that xenobiotics cross cell membranes. Two factors determine the rate of passive transfer:
- differences in concentrations of the substance on opposite sides of the membrane (the substance moves from a region of high concentration to one of lower concentration, and diffusion continues until the concentration is equal on both sides of the membrane; see the sketch below); and
- ability of the substance to move either through the small pores in the membrane or through the lipophilic interior of the membrane.
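That concentration dependence can be sketched in the spirit of Fick's law, where the rate of passive transfer is proportional to the concentration difference across the membrane. In the R sketch below, the permeability value is an arbitrary placeholder, not a measured constant:

# Fick's-law-style sketch: flux across a membrane is proportional to the
# concentration difference (illustrative units and parameter value).
flux <- function(c_out, c_in, permeability = 0.2) {
  permeability * (c_out - c_in)   # positive flux = net movement into the cell
}
flux(10, 2)   # large gradient -> fast transfer
flux(10, 9)   # small gradient -> slow transfer; zero at equilibrium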
Properties of the chemical substance that affect its ability for passive transfer are:
- lipid solubility;
- molecular size; and
- degree of ionization (that is, the electrical charge of an atom).
Substances with high lipid solubility readily diffuse through the phospholipid membrane. Small water-soluble molecules can pass across a membrane through the aqueous pores, along with normal intracellular water flow.
Large water-soluble molecules usually can not make it through the small pores, although some may diffuse through the lipid portion of the membrane, but at a slow rate. In general, highly ionized chemicals have low lipid solubility and pass with difficulty through the lipid membrane.
Most aqueous pores are about 4 ångström (Å) in size and allow chemicals of molecular weight 100-200 to pass through. Exceptions are membranes of capillaries and kidney glomeruli, which have relatively large pores (about 40 Å) that allow molecules up to a molecular weight of about 50,000 (molecules slightly smaller than albumin, which has a molecular weight of 60,000) to pass through.
Facilitated diffusion is similar to simple diffusion in that it does not require energy and follows a concentration gradient. The difference is that it is a carrier-mediated transport mechanism. The results are similar to passive transport but faster and capable of moving larger molecules that have difficulty diffusing through the membrane without a carrier. Examples are the transport of sugar and amino acids into red blood cells (RBCs), and into the central nervous system (CNS).
Some substances are unable to move with diffusion, unable to dissolve in the lipid layer, and too large to pass through the aqueous channels. For some of these substances, active transport processes exist in which movement through the membrane may be against the concentration gradient: they move from low to higher concentrations. Cellular energy from adenosine triphosphate (ATP) is required in order to accomplish this. The transported substance can move from one side of the membrane to the other side by this energy process. Active transport is important in the transport of xenobiotics into the liver, kidney, and central nervous system and for maintenance of electrolyte and nutrient balance.
Many large molecules and particles can not enter cells via passive or active mechanisms. However, some may still enter by a process known as endocytosis.
In endocytosis, the cell surrounds the substance with a section of its cell wall. This engulfed substance and section of membrane then separates from the membrane and moves into the interior of the cell. The two main forms of endocytosis are phagocytosis and pinocytosis.
In phagocytosis (cell eating), large particles suspended in the extracellular fluid are engulfed and either transported into cells or are destroyed within the cell. This is a very important process for lung phagocytes and certain liver and spleen cells. Pinocytosis (cell drinking) is a similar process but involves the engulfing of liquids or very small particles that are in suspension within the extracellular fluid.
The gastrointestinal tract (GI tract, the major portion of the alimentary canal) can be viewed as a tube going through the body. Its contents are considered exterior to the body until absorbed. Salivary glands, the liver, and the pancreas are considered accessory glands of the GI tract as they have ducts entering the GI tract and secrete enzymes and other substances. For foreign substances to enter the body, they must pass through the gastrointestinal mucosa, crossing several membranes before entering the blood stream.
Substances must be absorbed from the gastrointestinal tract in order to exert a systemic toxic effect, although local gastrointestinal damage may occur. Absorption can occur at any place along the entire gastrointestinal tract. However, the degree of absorption is strongly site-dependent.
Three main factors affect absorption within the various sites of the gastrointestinal tract:
- type of cells at the specific site;
- period of time that the substance remains at the site; and
- pH of stomach or intestinal contents at the site.
Under normal conditions, xenobiotics are poorly absorbed within the mouth and esophagus, due mainly to the very short time that a substance resides within these portions of the gastrointestinal tract. There are some notable exceptions. For example, nicotine readily penetrates the mouth mucosa. Also, nitroglycerin is placed under the tongue (sublingual) for immediate absorption and treatment of heart conditions. The sublingual mucosa under the tongue and in some other areas of the mouth is thin and highly vascularized so that some substances will be rapidly absorbed.
The stomach, having high acidity (pH 1-3), is a significant site for absorption of weak organic acids, which exist in a diffusible, nonionized and lipid-soluble form. In contrast, weak bases will be highly ionized and therefore are absorbed poorly. Chemically, the acidic stomach may break down some substances. For this reason those substances must be administered in gelatin capsules or coated tablets that can pass through the acidic stomach into the intestine before they dissolve and release their contents.
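The ionized/nonionized balance described above follows the Henderson-Hasselbalch relationship: for a weak acid, the nonionized (lipid-soluble) fraction is 1 / (1 + 10^(pH - pKa)). A small R sketch, using a pKa of 3.5 as an illustrative value (roughly that of aspirin):

# Nonionized (lipid-soluble) fraction of a weak acid at a given pH,
# from the Henderson-Hasselbalch relationship.
frac_nonionized_acid <- function(pH, pKa) 1 / (1 + 10^(pH - pKa))

frac_nonionized_acid(pH = 2, pKa = 3.5)  # ~0.97 in the acidic stomach
frac_nonionized_acid(pH = 7, pKa = 3.5)  # ~0.0003 at near-neutral intestinal pH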
Another determinant that affects the amount of a substance that will be absorbed in the stomach is the presence of food. Food ingested at the same time as the xenobiotic may result in a considerable difference in absorption of the xenobiotic. For example, the LD50 for Dimethline (a respiratory stimulant) in rats is 30 mg/kg when ingested along with food, but only 12 mg/kg when it is administered to fasting rats.
The greatest absorption of chemicals, as with nutrients, takes place in the intestine, particularly in the small intestine. The intestine has a large surface area consisting of outward projections of the thin (one-cell thick) mucosa into the lumen of the intestine (the villi). This large surface area facilitates diffusion of substances across the cell membranes of the intestinal mucosa.
Since the intestinal pH is near neutral (pH 5-8), both weak bases and weak acids are nonionized and are usually readily absorbed by passive diffusion. Lipid soluble, small molecules effectively enter the body from the intestine by passive diffusion.
In addition to passive diffusion, facilitated and active transport mechanisms exist to move certain substances across the intestinal cells into the body, including such essential nutrients as glucose, amino acids and calcium. Also, strong acids, strong bases, large molecules, and metals (and some important toxins) are transported by these mechanisms. For example, lead, thallium, and paraquat (herbicide) are toxicants that are transported across the intestinal wall by active transport systems.
The high degree of absorption of ingested xenobiotics is also due to the slow movement of substances through the intestinal tract. This slow passage increases the length of time that a compound is available for absorption at the intestinal membrane barrier.
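A common simplification treats absorption as a first-order process, in which a longer residence time yields a larger absorbed fraction. The R sketch below uses an arbitrary illustrative rate constant, not a measured value for any real compound:

# First-order absorption sketch: fraction absorbed after time t (hours)
# with absorption rate constant ka (per hour). Illustrative values only.
frac_absorbed <- function(t_hr, ka = 0.8) 1 - exp(-ka * t_hr)

frac_absorbed(0.5)  # ~0.33 after a brief residence
frac_absorbed(4)    # ~0.96 after a slow passage through the small intestine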
Intestinal microflora and gastrointestinal enzymes can affect the toxicity of ingested substances. Some ingested substances may be only poorly absorbed but they may be biotransformed within the gastrointestinal tract. In some cases, their biotransformed products may be absorbed and be more toxic than the ingested substance. An important example is the formation of carcinogenic nitrosamines from non-carcinogenic amines by intestinal flora.
Very little absorption takes place in the colon and rectum. As a general rule, if a xenobiotic has not been absorbed after passing through the stomach or small intestine, very little further absorption will occur. However, there are some exceptions, as some medicines may be administered as rectal suppositories with significant absorption. An example is Anusol (a hydrocortisone preparation) used for treatment of local inflammation, which is partially absorbed (about 25%).
Many environmental and occupational agents as well as some pharmaceuticals are inhaled and enter the respiratory tract. Absorption can occur at any place within the upper respiratory tract. However, the amount of a particular xenobiotic that can be absorbed at a specific location is highly dependent upon its physical form and solubility.
There are three basic regions to the respiratory tract:
- nasopharyngeal region;
- tracheobronchial region; and
- pulmonary region.
By far the most important site for absorption is the pulmonary region consisting of the very small airways (bronchioles) and the alveolar sacs of the lung.
The alveolar region has a very large surface area (about 50 times that of the skin). In addition, the alveoli consist of only a single layer of cells with very thin membranes that separate the inhaled air from the blood stream. Oxygen, carbon dioxide and other gases pass readily through this membrane. In contrast to absorption via the gastrointestinal tract or through the skin, gases and particles, which are water-soluble (and thus blood soluble), will be absorbed more efficiently from the lung alveoli. Water-soluble gases and liquid aerosols can pass through the alveolar cell membrane by simple passive diffusion.
In addition to solubility, the ability to be absorbed is highly dependent on the physical form of the agent (that is, whether the agent is a gas/vapor or a particle). The physical form determines penetration into the deep lung.
A gas or vapor can be inhaled deep into the lung and if it has high solubility in the blood, it is almost completely absorbed in one respiration. Absorption through the alveolar membrane is by passive diffusion, following the concentration gradient. As the agent dissolves in the circulating blood, it is taken away so that the amount that is absorbed and enters the body may be quite large.
The only way to increase the amount absorbed is to increase the rate and depth of breathing. This is known as ventilation-limitation. For blood-soluble gases, equilibrium between the concentration of the agent in the inhaled air and that in the blood is difficult to achieve. Inhaled gases or vapors, which have poor solubility in the blood, have quite limited capacity for absorption. The reason for this is that the blood can become quickly saturated. Once saturated, blood will not be able to accept the gas and it will remain in the inhaled air and then exhaled.
The only way to increase absorption would be to increase the rate of blood supply to the lung. This is known as flow-limitation. Equilibrium between blood and the air is reached more quickly for relatively insoluble gases than for soluble gases.
The absorption of airborne particles is usually quite different from that of gases or vapors. The absorption of solid particles, regardless of solubility, is dependent upon particle size.
Large particles (>5 µm) are generally deposited in the nasopharyngeal region (head airways region) with little absorption. Particles 2-5 µm can penetrate into the tracheobronchial region. Very small particles (<1 µm) are able to penetrate deep into the alveolar sacs where they can deposit and be absorbed.
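Those size cutoffs can be written as a simple lookup. The R sketch below follows the text's ranges and labels the 1-2 µm range, which the text leaves unspecified, as transitional:

# Simplified mapping from particle diameter (micrometers) to the region
# of the respiratory tract where deposition is most likely, per the text.
deposition_region <- function(d_um) {
  if (d_um > 5) "nasopharyngeal region (little absorption)"
  else if (d_um >= 2) "tracheobronchial region"
  else if (d_um < 1) "alveolar region (deep lung)"
  else "transitional (1-2 micrometers; not specified in the text)"
}
deposition_region(8)    # nasopharyngeal
deposition_region(0.5)  # alveolar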
Minimal absorption takes place in the nasopharyngeal region due to the cell thickness of the mucosa and the rapid movement of gases and particles through the region. Within the tracheobronchial region, relatively soluble gases can quickly enter the blood stream. Most deposited particles are moved back up to the mouth where they are swallowed.
Absorption in the alveoli is quite efficient compared to other areas of the respiratory tract. Relatively soluble material (gases or particles) is quickly absorbed into systemic circulation. Pulmonary macrophages exist on the surface of the alveoli. They are not fixed and not a part of the alveolar wall. They can engulf particles just as they engulf and kill microorganisms. Some non-soluble particles are scavenged by these alveolar macrophages and cleared into the lymphatic system.
The nature of toxicity of inhaled materials depends on whether the material is absorbed or remains within the alveoli and small bronchioles. If the agent is absorbed and is also lipid soluble, it can rapidly distribute throughout the body passing through the cell membranes of various organs or into fat depots. The time to reach equilibrium is even greater for the lipid soluble substances. Chloroform and ether are examples of lipid-soluble substances with high blood solubility.
Non-absorbed foreign material can also cause severe toxic reactions within the respiratory system. This may take the form of chronic bronchitis, alveolar breakdown (emphysema), fibrotic lung disease, and even lung cancer. In some cases, the toxic particles can kill the alveolar macrophages, which results in a lowering of the bodies' respiratory defense mechanism.
In contrast to the thin membranes of the respiratory alveoli and the gastrointestinal villi, the skin is a complex, multilayer tissue. For this reason, it is relatively impermeable to most ions as well as aqueous solutions. It represents, therefore, a barrier to most xenobiotics. Some notable toxicants, however, can gain entry into the body following skin contamination.
For example, certain commonly used organophosphate pesticides have poisoned agricultural workers following dermal exposure. The neurological warfare agent, Sarin, readily passes through the skin and can produce quick death to exposed persons. Several industrial solvents can cause systemic toxicity by penetration through the skin. For example, carbon tetrachloride penetrates the skin and causes liver injury. Hexane can pass through the skin and cause nerve damage.
The skin consists of three main layers of cells:
- epidermis;
- dermis; and
- subcutaneous tissue.
The epidermis (and particularly the stratum corneum) is the only layer that is important in regulating penetration of a skin contaminant. It consists of an outer layer of cells, packed with keratin, known as the stratum corneum layer. The stratum corneum is devoid of blood vessels. The cell walls of the keratinized cells are apparently double in thickness due to the presence of the keratin, which is chemically resistant and an impenetrable material. The blood vessels are usually about 100 µm from the skin surface. To enter a blood vessel, an agent must pass through several layers of cells that are generally resistant to penetration by chemicals.
[Figure: diagram of human skin layers. Image credit: Daniel de Souza Telles, via Wikimedia Commons]
The thickness of the stratum corneum varies greatly with regions of the body. The stratum corneum of the palms and soles is very thick (400-600 µm) whereas that of the arms, back, legs, and abdomen is much thinner (8-15 µm). The stratum corneum of the axillary (underarm) and inguinal (groin) regions is the thinnest, with the scrotum especially thin. As expected, the efficiency of penetration of toxicants is inversely related to the thickness of the epidermis.
Any process that removes or damages the stratum corneum can enhance penetration of a xenobiotic. Abrasion, scratching, or cuts to the skin will make it more penetrable. Some acids, alkalis, and corrosives can injure the stratum corneum and increase penetration to themselves or other agents. The most prevalent skin conditions that enhance dermal absorption are skin burns and dermatitis.
Toxicants move across the stratum corneum by passive diffusion. There are no known active transport mechanisms functioning within the epidermis. Polar and nonpolar toxicants diffuse through the stratum corneum by different mechanisms. Polar compounds (which are water-soluble) appear to diffuse through the outer surface of the hydrated keratinized layer. Nonpolar compounds (which are lipid-soluble) dissolve in and diffuse through the lipid material between the keratin filaments.
Water plays an important role in dermal absorption. Normally, the stratum corneum is partially hydrated (~7% by weight). Penetration of polar substances is about 10 times as effective as when the skin is completely dry. Additional hydration can increase penetration 3-5 times, which further increases the ability of a polar compound to penetrate the epidermis.
A solvent sometimes used to promote skin penetration of drugs is dimethyl sulfoxide (DMSO). It facilitates penetration of chemicals by an unknown mechanism. Removal of the lipid material creates holes in the epidermis. This results in a reversible change in protein structure due to substitution of water molecules.
Considerable species differences exist in skin penetration and can influence the selection of species used for safety testing. Penetration of chemicals through the skin of the monkey, pig, and guinea pig is often similar to that of humans. The skin of the rat and rabbit is generally more permeable whereas the skin of the cat is generally less permeable. For practical reasons and to assure adequate safety, the rat and rabbit are normally used for dermal toxicity safety tests.
In addition to the stratum corneum, small amounts of chemicals may be absorbed through the sweat glands, sebaceous glands, and hair follicles. Since these structures represent, however, only a very small percentage of the total surface area, they are not ordinarily important in dermal absorption.
Once a substance penetrates through the stratum corneum, it enters lower layers of the epidermis, the dermis, and subcutaneous tissue. These layers are far less resistant to further diffusion. They contain a porous, nonselective aqueous diffusion medium, that can be penetrated by simple diffusion. Most toxicants that have passed through the stratum corneum can now readily move on through the remainder of the skin and enter the circulatory system via the large numbers of venous and lymphatic capillaries in the dermis.
Other Routes of Exposure
In addition to the common routes of environmental, occupational, and medical exposure (oral, respiratory, and dermal), other routes of exposure may be used for medical purposes. Many pharmaceuticals are administered by parenteral routes, that is, by injection into the body, usually via syringe and hollow needle.
Intradermal injections are made directly into the skin, just under the stratum corneum. Tissue reactions are minimal and absorption is usually slow. If the injection is beneath the skin, the route is referred to as a subcutaneous injection. Since the subcutaneous tissue is quite vascular, absorption into the systemic circulation is generally rapid. Tissue sensitivity is also high and thus irritating substances may induce pain and an inflammatory reaction.
Many pharmaceuticals, especially antibiotics and vaccines are administered directly into muscle tissue (the intramuscular route). It is an easy procedure and the muscle tissue is less likely to become inflamed compared to subcutaneous tissue. Absorption from muscle is about the same as from subcutaneous tissue.
Substances may be injected directly into large blood vessels when they are irritating or when an immediate action is desired, such as anesthesia. These are known as intravenous or intraarterial routes depending on whether the vessel is a vein or artery.
Parenteral injections may also be made directly into body cavities, rarely in humans but frequently in laboratory animal studies. Injection into the abdominal cavity is known as intraperitoneal injection. If it is injected directly into the chest cavity, it is referred to as an intrapleural injection. Since the pleura and peritoneum have minimal blood vessels, irritation is usually minimal and absorption is relatively slow.
Implantation is another route of exposure of increasing concern. A large number of pharmaceuticals and medical devices are now implanted in various areas of the body. Implants may be used to allow slow, time-release of a substance (e.g., hormones). In other cases, no absorption is desired. For example, for implanted medical devices and materials (e.g., artificial lens, tendons and joints, and cosmetic reconstruction).
Some materials enter the body via skin penetration as the result of accidents or violence (for example, weapons). The absorption in these cases is highly dependent on the nature of the substance. Metallic objects (such as bullets) may be poorly absorbed whereas more soluble materials that are thrust through the skin and into the body from accidents may be absorbed rapidly into the circulation.
Novel methods of introducing substances into specific areas of the body are often used in medicine. For example, conjunctival instillations (eye drops) are used for treatment of ocular conditions where high concentrations are needed on the outer surface of the eye, not possible by other routes.
Therapy for certain conditions require that a substance be deposited in body openings where high concentrations and slow release may be needed while keeping systemic absorption to a minimum. For these substances, the pharmaceutical agent is suspended in a poorly absorbed material such as beeswax with the material known as a suppository. The usual locations for use of suppositories are the rectum and vagina.
National Library of Medicine
Toxicology Tutor II, Toxicogenetics, Absorption
Math in Music Lesson Plan
MEDIA RESOURCES FROM THE GET THE MATH WEBSITE
- The Setup (video) Optional
An introduction to Get the Math and the professionals and student teams featured in the program.
- Math in Music: Introduction (video)
Manny Dominguez and Luis Lopez of DobleFlo talk about how their duo got started, how they use math in producing hip-hop music, and set up a music-related algebra challenge.
- Math in Music: Take the challenge (web interactive)
In this interactive activity, users try to solve the challenge presented in the video segment, “Math in Music: Introduction,” by matching the tempo of the electronic drum track to the tempo of the instrumental sample.
- Math in Music: See how the teams solved the challenge (video)
The teams use algebra to match the tempo of an electronic drum track to the tempo of an instrumental sample created by DobleFlo.
- Math in Music: Try other music challenges (web interactive)
In this activity students select from several options of instrumental samples and drum tracks and then try to match the tempo of the selected drum track to that of the selected instrumental sample.
For the class:
- Computer, projection screen, and speakers (for class viewing of online/downloaded video segments)
- One copy of “Math in Music: Take the Challenge” answer key (download DOC | PDF)
- One copy of the “Math in Music: Try other music challenges” answer key (download DOC | PDF)
For each student:
- One copy of the “Math in Music: Take the challenge” handout (download DOC | PDF)
- One copy of the “Math in Music: Try other music challenges” handout (download DOC | PDF)
- One calculator for use in Learning Activities 1 and 2 (Optional)
- Grid paper, chart paper, whiteboards/markers or other materials for students to display their math strategies used to solve the challenges in the Learning Activities.
- Computers with internet access for Learning Activities 1 and 2 (Note: These activities can either be conducted with one computer and an LCD screen or by dividing students into small groups and using multiple computers.)
BEFORE THE LESSON
Prior to teaching this lesson, you will need to:
- Preview all of the video segments and web interactives used in this lesson.
- Download the video clips used in the lesson to your classroom computer(s) or prepare to watch them using your classroom’s internet connection.
- Bookmark all web interactives you plan to use in the lesson on each computer in your classroom. Using an online bookmarking tool (such as delicious, diigo, or portaportal) will allow you to organize all the links in a central location.
- Make one copy of the “Math in Music: Take the challenge” and “Math in Music: Try other music challenges” handouts for each student.
- Print out one copy of the “Math in Music: Take the challenge” and the “Math in Music: Try other music challenges” answer keys.
- Begin with a brief discussion about music. For example, ask students to tell you their favorite genres of music (jazz, hip-hop, pop, classical, etc.).
- Explain that today’s lesson will be focusing on the use of math in music. Ask students where they think mathematics might be used in music. (Possible answers include: in counting the beat, in calculating the tempo, writing rhymes, in digital music programs, etc.) Ask your students if they play a musical instrument and, if so, to describe how math can be helpful in mastering music.
- Explain that today’s lesson features video segments and interactives from Get the Math, a program that highlights how math is used in the real world. If this is your first time using the program with this class, you may choose to play the video segment The Setup, which introduces the professionals and student teams featured in Get the Math.
- Introduce the video segment Math in Music: Introduction by letting students know that you will now be showing them a segment which features musicians Manny Dominguez and Luis Lopez from Brooklyn, NY, who have formed a hip-hop duo named DobleFlo. Ask students to watch for the math that the artists are using and to write down their observations as they watch the video.
- Play Math in Music: Introduction. After showing the segment, ask students to discuss the different ways that Manny and Luis use math in their music. (Sample responses: counting, decimals, numerical operations, ratios, rates, subtraction, elapsed time, problem solving using proportions.)
- Ask students to describe the challenge that Manny and Luis posed to the teens in the video segment. (In the featured sample of music, the tempo of the drum track doesn’t match the tempo of the instrumental sample. The tempo, or speed, is measured in beats per minute (BPM). Since the drum beat is programmed electronically, it is possible to use the computer to speed up or slow down this beat to match the instrumental sample. In order to correctly match the drum beat to the sample, it is necessary to figure out the tempo of the sample. DobleFlo asked the students to calculate the BPM of the instrumental sample to determine the tempo.)
LEARNING ACTIVITY 1
- Explain that the students will now have an opportunity to solve the problem. Ask students what common rates they are familiar with in daily life. (Sample responses: miles per gallon; miles per hour, etc.)
- Ask students if they have ever had their pulse taken at the doctor’s office. Ask whether the doctor or nurse holds a finger on the patient’s pulse for a full minute or for several minutes to find beats per minute. (Part of a minute is enough time.) Discuss why only part of a minute is needed to calculate the pulse rate. (You can compare a part to a whole using ratios/proportions.)
- Explain that the word “per” means “for each” (for example, miles per gallon, miles per hour) and that a rate can be represented by division. (For example, to calculate miles per gallon, the equation would be miles divided by gallons.)
- Explain that just as the doctor or nurse only needs to count the pulse for a few seconds to figure out the pulse rate, the same is true for calculating the beats per minute in music. Students only need to listen to the music for a few seconds to calculate the beats per minute. (A short computational sketch of this proportion appears at the end of this activity.)
- Review the following terminology with your students:
- Tempo: the speed at which music is played, or the “beat” of the song.
- BPM: beats per minute
- Distribute the “Math in Music: Take the challenge” handout.
Note: The handout is designed to be used in conjunction with the Math in Music: Take the challenge interactive here on the web site.
- Let your students know that it is now their turn to solve the challenge DobleFlo presented to the teams in the video. Ask students to work together to explore the Math in Music: Take the challenge interactive and complete the handout. Use the “Math in Music: Take the Challenge” answer key as a guide to help students explore the interactive.
- If you have multiple computers, ask students to work in small groups to explore the interactive and complete the handout.
- If you only have one computer, conduct the activity with your students as a group, so that they can all hear the instrumental sample and count the total number of beats together.
- As students complete the challenge, encourage them to use the following 6-step mathematical modeling cycle to develop a plan:
- Step 1: Understand the problem: Identify variables in the situation that represent essential features (For example, let “b” represent the number of beats and “t” represent the time, or specify in either seconds “s” or minutes “m”).
- Step 2: Formulate a model by creating and selecting multiple representations (For example, students may use symbolic representations such as a proportion, or may use a chart or table to record information).
- Step 3: Compute by analyzing and performing operations on relationships to draw conclusions (For example, operations include multiplication and algebraic transformations used to determine cross products as they solve a proportion).
- Step 4: Interpret the results in terms of the original situation (The results of the first three steps should be examined in the context of the challenge to mix the music tracks).
- Step 5: Ask students to validate their conclusions by comparing them with the situation, and then either improve the model or, if the model is acceptable, move on to the final step.
- Step 6: Report on the conclusions and the reasoning behind them. (This step allows a student to explain their strategy and justify their choices in a specific context.)
Assess the reasoning process and product by asking students to articulate how they are solving the challenge:
- What strategy are you using to find the solution? How will your strategy help you to calculate the beats per minute?
- After students have completed the handout, ask each group to share their solutions and problem solving strategies with the class using whiteboards, overhead transparencies, chart paper, or other tools to illustrate how they solved the challenge.
- As students present their solutions, ask them to discuss the mathematics they used in solving the challenge. (Sample responses: counting beats, numerical operations, ratios, rates, problem solving using proportions.)
- Introduce the Math in Music: See how the teams solved the challenge video segment by letting students know that they will now be seeing how the teams in the video calculated the BPM. Ask students to observe what strategies the teams used and whether they were similar to or different from the strategies presented by the class.
- Play Math in Music: See how the teams solved the challenge. After showing the video, ask students to discuss the strategies the teams used and to compare them to the strategies presented by the class. During the discussion, point out that the two teams in the video solved the music challenge in two distinct ways. Ask students to discuss why one team ended up with an incorrect answer. Discuss the strategies listed in the “Math in Music: Take the challenge” answer key, which the class has not yet discussed (if any).
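Optional teacher note: the proportion at the heart of this challenge can also be checked with a few lines of code. The sketch below is only an illustration; the function name and the sample numbers (22 beats counted in 15 seconds) are hypothetical, not taken from DobleFlo’s actual track.

def beats_per_minute(beats_counted, seconds_counted):
    # Scale a partial count up to a whole minute: beats/seconds = BPM/60
    return beats_counted / seconds_counted * 60

print(beats_per_minute(22, 15))  # prints 88.0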
LEARNING ACTIVITY 2:
- Go to Math in Music: Try other music challenges. Let your students know that they will now calculate the Beats Per Minute using other music samples on the Get the Math website. Explain that this interactive provides students with additional opportunities to match the tempo of an electronic drum track to the tempo of the instrumental sample.
Note: As in the previous challenge, you can conduct this activity with one computer and an LCD projector in front of the entire class or your students can work in small groups on computers. This can also be assigned to students to complete as an independent project or as homework using the accompanying handout as a guide.
- Distribute the “Math in Music: Try other music challenges” handout. Clarify and discuss the directions. Ask students to explore the “Math in Music: Try other music challenges” interactive on the Get the Math website, using the handout as a guide. Ask students to complete all of the steps listed on the handout.
- As in Learning Activity 1, encourage your students to use the 6-step mathematical modeling cycle as they develop a strategy to solve the challenge.
- After students have completed the activity, lead a group discussion where students can share the strategies they used to find the correct tempo for each combination. Refer to and discuss the strategies and solutions presented in the “Math in Music: Try other music challenges” answer key, as desired.
- Assess deeper understanding: Ask your students to reflect upon and write down their thoughts about the following:
- How did you determine an effective strategy for the problem situation? What are your conclusions and the reasoning behind them? (Sample answers: by looking for relationships between the number of beats and the time; by setting up a proportion and/or an equation to solve the problem you can compare part of a minute to a whole minute, or the number of samples in a whole minute, to find the solution.)
- Compare and contrast the various numerical and algebraic representations possible for the problem. How does the approach used to solve the challenge affect the choice of representations? (Sample answers: some approaches use numerical operations in a sequence or order; another approach is to use symbols or variables to represent what is unknown and then write a proportion to solve the problem.) Are all equivalent? (Yes.) Why do you think this is the case? (There are many different ways to represent and solve a problem; a proportion is an equation that can be written using ratios that are equivalent but in a different order as long as some common element ties the numerators together and a common element ties the denominators together, such as beats and minutes.)
- What is proportionality? How does using this concept help you to understand and solve problems? (Sample answer: When two quantities are proportional, a change in one quantity corresponds to a predictable change in the other. This helps to set up a comparison of the two quantities, or a ratio, that can be used to solve a problem by increasing or decreasing the ratio by the same factor.)
- Why is it useful to represent real-life situations algebraically? (Sample responses: Symbols or variables can be used to represent missing values to set up and solve equations to find a solution. Using algebra can be a simpler and efficient way to set up and solve problems by using ratios, rates, or proportions.)
- What are some ways to represent, describe, and analyze patterns that occur in our world? (Sample responses: Patterns can be represented with numbers, symbols, expressions/equations, words, and pictures or graphs.)
- After students have written their reflections, lead a group discussion where students can discuss their journal entries. During the discussion, ask students to share their thoughts about how algebra can be applied to music. Ask students to brainstorm other real-world situations which involve the type of math and problem solving that they used in this lesson to calculate the beats per minute (for example, miles per gallon, pulse rate, etc.).
Selection bias is a statistical bias in which there is an error in choosing the individuals or groups to take part in a scientific study. It is sometimes referred to as the selection effect. The phrase "selection bias" most often refers to the distortion of a statistical analysis, resulting from the method of collecting samples. If the selection bias is not taken into account then certain conclusions drawn may be wrong.
There are many types of possible selection bias, including:
Sampling bias is a systematic error due to a non-random sample of a population, in which some members of the population are less likely to be included than others. The result is a biased sample: a statistical sample in which the members of the population are not equally or objectively represented. Sampling bias is mostly classified as a subtype of selection bias, sometimes specifically termed sample selection bias, but some classify it as a separate type of bias.
A distinction, albeit not universally accepted, of sampling bias is that it undermines the external validity of a test (the ability of its results to be generalized to the rest of the population), while selection bias mainly addresses internal validity for differences or similarities found in the sample at hand. In this sense, errors occurring in the process of gathering the sample or cohort cause sampling bias, while errors in any process thereafter cause selection bias.
Examples of sampling bias include self-selection, pre-screening of trial participants, discounting trial subjects/tests that did not run to completion and migration bias by excluding subjects who have recently moved into or out of the study area.
- Early termination of a trial at a time when its results support a desired conclusion.
- A trial may be terminated early at an extreme value (often for ethical reasons), but the extreme value is likely to be reached by the variable with the largest variance, even if all variables have a similar mean.
- Susceptibility bias
- Clinical susceptibility bias, when one disease predisposes for a second disease, and the treatment for the first disease erroneously appears to predispose to the second disease. For example, postmenopausal syndrome gives a higher likelihood of also developing endometrial cancer, so estrogens given for the postmenopausal syndrome may receive a higher than actual blame for causing endometrial cancer.
- Protopathic bias, when a treatment for the first symptoms of a disease or other outcome appear to cause the outcome. It is a potential bias when there is a lag time from the first symptoms and start of treatment before actual diagnosis. It can be mitigated by lagging, that is, exclusion of exposures that occurred in a certain time period before diagnosis.
- Indication bias, a potential mix up between cause and effect when exposure is dependent on indication, e.g. a treatment is given to people in high risk of acquiring a disease, potentially causing a preponderance of treated people among those acquiring the disease. This may cause an erroneous appearance of the treatment being a cause of the disease.
- Partitioning data with knowledge of the contents of the partitions, and then analyzing them with tests designed for blindly chosen partitions.
- Rejection of "bad" data on arbitrary grounds, instead of according to previously stated or generally agreed criteria.
- Rejection of "outliers" on statistical grounds that fail to take into account important information that could be derived from "wild" observations.
- Selection of which studies to include in a meta-analysis (see also combinatorial meta-analysis).
- Performing repeated experiments and reporting only the most favorable results, perhaps relabelling lab records of other experiments as "calibration tests", "instrumentation errors" or "preliminary surveys".
- Presenting the most significant result of a data dredge as if it were a single experiment (which is logically the same as the previous item, but is seen as much less dishonest).
Attrition bias is a kind of selection bias caused by attrition (loss of participants): discounting trial subjects/tests that did not run to completion. It includes dropout, nonresponse (lower response rate), withdrawal, and protocol deviators. It biases results when attrition is unequal with regard to exposure and/or outcome. For example, in a test of a dieting program, the researcher may simply reject everyone who drops out of the trial, but most of those who drop out are those for whom it was not working. Differential loss of subjects in the intervention and comparison groups may change the characteristics of these groups and their outcomes irrespective of the studied intervention.
Data are filtered not only by study design and measurement, but by the necessary precondition that there has to be someone doing a study. In situations where the existence of the observer or the study is correlated with the data, observation selection effects occur, and anthropic reasoning is required.
An example is the past impact event record of Earth: if large impacts cause mass extinctions and ecological disruptions precluding the evolution of intelligent observers for long periods, no one will observe any evidence of large impacts in the recent past (since they would have prevented intelligent observers from evolving). Hence there is a potential bias in the impact record of Earth. Astronomical existential risks might similarly be underestimated due to selection bias, and an anthropic correction has to be introduced.
In the general case, selection biases cannot be overcome with statistical analysis of existing data alone, though Heckman correction may be used in special cases. An informal assessment of the degree of selection bias can be made by examining correlations between exogenous (background) variables and a treatment indicator. However, in regression models, it is correlation between unobserved determinants of the outcome and unobserved determinants of selection into the sample which bias estimates, and this correlation between unobservables cannot be directly assessed by the observed determinants of treatment.
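As a toy illustration of why the selected data alone cannot reveal the bias, consider a small simulation in which a hypothetical unobserved variable drives both the outcome and selection into the sample. All names and numbers here are invented for illustration; this is not the Heckman estimator itself:

import numpy as np

rng = np.random.default_rng(0)
ability = rng.normal(size=100_000)             # unobserved determinant of the outcome
wage = 2.0 + 1.5 * ability + rng.normal(size=100_000)
in_sample = wage > 2.0                         # selection is correlated with the outcome

print(wage.mean())             # population mean, about 2.0
print(wage[in_sample].mean())  # selected-sample mean, biased upward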
Selection bias is closely related to:
- publication bias or reporting bias, the distortion produced in community perception or meta-analyses by not publishing uninteresting (usually negative) results, or results which go against the experimenter's prejudices, a sponsor's interests, or community expectations.
- confirmation bias, the distortion produced by experiments that are designed to seek confirmatory evidence instead of trying to disprove the hypothesis.
- exclusion bias, results from applying different criteria to cases and controls in regards to participation eligibility for a study/different variables serving as basis for exclusion.
- Berkson's paradox
- Black Swan theory
- Cherry picking (fallacy)
- Funding bias
- List of cognitive biases
- Reporting bias
- Sampling bias
- Self-fulfilling prophecy
- Publication bias
- Participation bias
- Survivorship bias
- Dictionary of Cancer Terms → selection bias. Retrieved on September 23, 2009.
- Medical Dictionary - 'Sampling Bias' Retrieved on September 23, 2009
- TheFreeDictionary → biased sample. Retrieved on 2009-09-23. Site in turn cites: Mosby's Medical Dictionary, 8th edition.
- Dictionary of Cancer Terms → Selection Bias. Retrieved on September 23, 2009.
- The effects of sample selection bias on racial differences in child abuse reporting Ards S, Chung C, Myers SL Jr. Child Abuse Negl. 1999 Dec;23(12):1209; author reply 1211-5. PMID 9504213.
- Sample Selection Bias Correction Theory Corinna Cortes, Mehryar Mohri, Michael Riley, and Afshin Rostamizadeh. New York University.
- Domain Adaptation and Sample Bias Correction Theory and Algorithm for Regression Corinna Cortes, Mehryar Mohri. New York University.
- Page 262 in: Behavioral Science. Board Review Series. By Barbara Fadem. ISBN 0-7817-8257-0, ISBN 978-0-7817-8257-9. 216 pages
- Feinstein AR, Horwitz RI (November 1978). "A critique of the statistical evidence associating estrogens with endometrial cancer". Cancer Res. 38 (11 Pt 2): 4001–5. PMID 698947.
- Tamim H, Monfared AA, LeLorier J (March 2007). "Application of lag-time into exposure definitions to control for protopathic bias". Pharmacoepidemiol Drug Saf 16 (3): 250–8. doi:10.1002/pds.1360. PMID 17245804.
- Matthew R. Weir (2005). Hypertension (Key Diseases) (Acp Key Diseases Series). Philadelphia, Pa: American College of Physicians. p. 159. ISBN 1-930513-58-5.
- Kruskal, W. (1960) Some notes on wild observations, Technometrics.
- Jüni P, Egger M. Empirical evidence of attrition bias in clinical trials. Int J Epidemiol. 2005 Feb;34(1):87-8.
- Nick Bostrom, Anthropic Bias: Observation selection effects in science and philosophy. Routledge, New York 2002
- Milan M. Církovic, Anders Sandberg, and Nick Bostrom. Anthropic Shadow: Observation Selection Effects and Human Extinction Risks. Risk Analysis, Vol. 30, No. 10, 2010.
- Max Tegmark and Nick Bostrom, How unlikely is a doomsday catastrophe? Nature, Vol. 438 (2005): 75. arXiv:astro-ph/0512204
- Heckman, J. (1979) Sample selection bias as a specification error. Econometrica, 47, 153–61.
This article describes the formula syntax and usage of the CONCATENATE function (function: A prewritten formula that takes a value or values, performs an operation, and returns a value or values. Use functions to simplify and shorten formulas on a worksheet, especially those that perform lengthy or complex calculations.) in Microsoft Office Excel.
The CONCATENATE function joins up to 255 text strings into one text string. The joined items can be text, numbers, cell references, or a combination of those items. For example, if your worksheet contains a person's first name in cell A1 and the person's last name in cell B1, you can combine the two values in another cell by using the following formula:
The second argument in this example (" ") is a space character. You must specify any spaces or punctuation that you want to appear in the results as an argument that is enclosed in quotation marks.
CONCATENATE(text1, [text2], ...)
The CONCATENATE function syntax has the following arguments (argument: A value that provides information to an action, an event, a method, a property, a function, or a procedure.):
- text1 Required. The first text item to be concatenated.
- text2 ... Optional. Additional text items, up to a maximum of 255 items. The items must be separated by commas.
Note: You can also use the ampersand (&) calculation operator instead of the CONCATENATE function to join text items. For example, =A1 & B1 returns the same value as =CONCATENATE(A1, B1).
The example may be easier to understand if you copy it to a blank worksheet.
How do I copy an example?
- Select the example in this article. If you are copying the example in Excel Web App, copy and paste one cell at a time. Important: Do not select the row or column headers.
Selecting an example from Help
- Press CTRL+C.
- Create a blank workbook or worksheet.
- In the worksheet, select cell A1, and press CTRL+V. If you are working in Excel Web App, repeat copying and pasting for each cell in the example.
Important: For the example to work properly, you must paste it into cell A1 of the worksheet.
- To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click the Show Formulas button.
After you copy the example to a blank worksheet, you can adapt it to suit your needs.
|=CONCATENATE("Stream population for ",A2," ",A3," is ",A4,"/mile")
||Creates a sentence by concatenating the data in column A with other text.
||Stream population for brook trout species is 32/mile
|=CONCATENATE(B2, " ", C2)
||Concatenates the string in cell B2, a space character, and the value in cell C2.
|=CONCATENATE(C2, ", ", B2)
||Concatenates the string in cell C2, a string consisting of a comma and a space character, and the value in cell B2.
|=CONCATENATE(B3," & ",C3)
||Concatenates the string in cell B3, a string consisting of a space, an ampersand, another space, and the value in cell C3.
||Fourth & Pine
|=B3 & " & " & C3
||Concatenates the same items as the previous example, but by using the ampersand (&) calculation operator instead of the CONCATENATE function.
||Fourth & Pine | http://office.microsoft.com/en-us/excel-help/concatenate-function-HP010062562.aspx?CTT=5&origin=HA010248390 | 13 |
Initializing a variable is considered very helpful while making programs. We can initialize variables of primitive types at the time of their declarations. For example:
int a = 10;
In an object-oriented programming language (OOPL) like Java, the need to initialize the fields of a new object is even more common. We have already done this using two approaches.
In the first approach, we used a dot operator to access and assign values to the instance variables individually for each object. However, it can be a tedious job to initialize the instance variables of all the objects individually. Moreover, it does not promote data hiding.
r1.length = 5;
r2.length = 7;
where r1 and r2 are objects of a Rectangle class.
In another approach, we made use of a method setData() to assign values to the fields of each object individually. But it would have to be called explicitly for each object, which becomes inconvenient if the number of objects is very large.
r1.setData(5, 6); // sets length and breadth of the r1 Rectangle object
The above two approaches do not model the problem properly. A better solution is to initialize the object at the time of its creation, in the same way as we initialize a variable of a primitive data type. This is accomplished using a special method in Java known as a constructor, which enables an object to initialize itself at the time of its creation without the need to make a separate call to an instance method.
A constructor is a special method that is called whenever an object is created using the new keyword. It contains a block of statements that can be used to initialize the instance variables of an object before the reference to this object is returned by new. A constructor does look and feel a lot like a method, but it differs from a method in two ways.
A constructor always has the same name as the class whose instances it initializes, and it does not have a return type, not even void. This is because the constructor is called automatically whenever an object of the class is created.
The syntax for constructor is as follows.
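ConstructorName(parameterList)
{
    // statements that initialize the newly created object
}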
Here, ConstructorName is the same as the name of the class it belongs to.
The parameterList is an optional list of zero or more parameters specified after the class name in parentheses. Each parameter specification consists of a type and a name, and parameters are separated from each other by commas.
Now let us consider a program
// use of Constructor
class Rectangle {
    int length, breadth;
    Rectangle() {  // constructor: runs automatically when the object is created
        length = 5;
        breadth = 6;
    }
    int area() { return length * breadth; }
    public static void main(String[] args) {
        Rectangle firstRect = new Rectangle();
        System.out.println("Area of Rectangle = " + firstRect.area());
    }
}
Output: Area of Rectangle = 30
Explanation: In this program, when the statement
Rectangle firstRect = new Rectangle();
is executed, the new operator creates a new but uninitialized object of the class. Then the constructor Rectangle() is called and the statements in its body are executed. As a result, the instance variables length and breadth of the object firstRect are initialized to the integer literals 5 and 6 respectively.
Then the address of the allocated Rectangle object is returned and assigned to the reference variable firstRect. This method of initializing instance variable(s) of an object(s) using constructor is very simple and concise as there is no need to explicitly call the method for each object separately.
There are three types of constructors:

1) Default constructor: also called an empty constructor, it has no arguments and is called automatically when an object of the class is created. Remember that the name of the constructor is the same as the name of the class.

2) Parameterized constructor: a constructor that has the same name as the class but takes one or more arguments. To use it, we create the object of the class by passing the arguments at the time of creation.

3) Copy constructor: as the name suggests, an existing object of the class is passed to the constructor, and its values are copied into the new object. To call a copy constructor, we pass the object whose values we want to copy.
For comparison, a destructor (as found in C++) has the following characteristics:

1. It is a member function whose name is the same as the class name, but preceded by a ~ (tilde) symbol.

2. It has no return type.

3. It cannot have parameters.

4. It is implicitly called when the object is no longer required.

5. It is used to release memory and close files and database connections.

Note: Java does not support destructors; they are a feature of languages such as C++.
Math with manipulatives: Preschool number activities designed to foster your child’s number sense
© 2008 -2013 Gwen Dewar, Ph.D., all rights reserved
Never mind the talking toys and fancy video games. These preschool number activities require only a dose of imagination and a few household supplies.
Discoveries in cognitive psychology and neuroscience suggest that preschool number activities should address more than verbal counting.
Young children need to develop an intuitive feeling for numerosity: the “how many-ness” associated with specific numbers.
These activities are designed to help kids sharpen their “number sense” and provide them with opportunities to put several math concepts into practice, including
• the notion of relative magnitudes
• the one-to-one principle of numerosity (two sets are equal if the items in each set can be matched one-to-one with no items left over)
• the one-to-one principle of counting (each item to be counted is counted once and only once)
• the stable order principle (number words must be recited in the same order)
• the principle of increasing magnitudes (the later number words refer to greater numerosities)
• the cardinal principle (the last word counted represents the numerosity of the set)
Most of these preschool number activities rely on a set of cards and a set of tokens.
Here’s what you need to get started.
Making a set of cards and manipulatives
The cards will be used in two ways—as displays of dots for kids to count, and as templates for kids to cover with tokens.
For this group of preschool number activities, you’ll need
• 10 or more tokens (each over 1.25” in diameter to avoid a choking hazard)
• 10 or more sheets of heavy-stock paper or large index cards
• Felt-tip pen
• Optional: a set of small stickers
Finding tokens that aren't distracting or hazardous
A variety of objects can be used for tokens, but keep in mind: Kids can get distracted if your tokens are too interesting, so it's best to avoid the fancy plastic frogs and fully-embellished coins (Petersen and McNeil 2012).
Also, you need to be conscious of choking hazards for kids under 3. According to the U.S. Consumer Product Safety Commission, a ball-shaped object is unsafe if it is smaller than a 1.75” diameter golf ball. Other objects are unsafe if they can fit inside a tube with a diameter of 1.25” inches. I’ve used plastic poker chips. You can also use something safe and edible, like “O”-shaped cereal pieces.
Creating the cards
Each card will be marked by an Arabic numeral and corresponding number of dots. Make the dots with a felt tip marker. Alternatively, you can use stickers to make your dots. The dots should be spaced far enough apart for your child to place a token over each dot. The larger your tokens, the larger your cards will need to be.
Make at least one card for each number between 1 and 10. In addition, make multiple cards for the same number—each card bearing dots arranged in different configurations. For example, one “three” card might show three dots arranged in a triangular configuration. Another might show the dots arranged in a line. Still another might show the dots that appear to have been placed randomly. Whatever your configuration, leave enough space between dots for your child to place a token over each dot.
Preschool number activities: Mix and match
Once you have your cards and tokens, you can play any of the preschool number activities below. As you play, keep in mind the points raised in my guide to preschool math lessons:
• Start small. It’s important to adjust the game to your child’s attention span and developmental level. For beginners, this means counting tasks that focus on very small numbers (up to 3 or 4).
• Keep it fun. If it’s not playful and fun, it’s time to stop.
• Be patient. It takes kids about a year to learn how the counting system works.
The basic game: One-to-one matching
Place a card, face up, before your child. Then ask your child to place the correct number of tokens on the card—one token over each dot.
After the child has finished the task, replace the card and tokens and start again with a new card.
Once your child has got the hang of this, you can modify the game by helping your child count each token as he puts it in place.
The Tea Party: Relative magnitudes
Choose two cards that display a different number of dots, taking care that the cards differ by a ratio of at least 2:1. For instance, try 1 vs. 2, 2 vs. 4, and 2 vs. 5. You can also try larger numbers, like 6 vs. 12.
Then set each card down in front of a toy creature / doll / teddy bear, and show your child how to cover the dots with a token. When I’ve played this game, I used poker chips and called them cookies. But you could also use edible tokens, like pieces of cereal.
After your child has covered each dot with a token, ask him
“Which (creature) has more (cookies / treats)?”
After he answers you, you can count each “tray” of treats to check the answer. But I’d skip this step if you are working with larger numbers (like 6 vs. 12) that are beyond your child’s current grasp. You don’t want to make this game feel like a tedious exercise.
As your child becomes better at this game, you can try somewhat smaller ratios (like 5 vs. 9).
Bigger and bigger: Increasing magnitudes
Instead of playing with the tokens, have your child place the cards side-by-side in correct numeric sequence.
For beginners, try this with very small numbers (1, 2, 3) and with numbers that vary by a large degree (e.g., 1, 3, 6, 12).
Sharing at the tea party: The one-to-one principle
I’ve stolen this one directly from experiments done by Brian Butterworth and his colleagues (2008). Choose three toy creatures as party attendees and have your child set the table, providing one and only one plate, cup, and spoon to each toy. Then give your child a set of “cookies” (tokens or real edibles) and ask her to share these among the party guests so they each receive the same amount. Make it simple by giving your child 6 or 9 tokens so that none will be left over.
As always, go at your child’s pace and quit if it isn’t fun.
If your child makes a mistake and gives one creature too many tokens, you can play the part of another creature and complain that it isn’t fair.
You can also play the part of tea party host and deliberately make a mistake. Ask for your child’s help: did someone get too many tokens, or not enough? Have your child fix it.
Once your child gets the hang of things, try providing him with one token too many and discuss what to do about this "leftover."
One solution is to divide the remainder into three equal bits. But your child may come up with other, non-mathematical solutions, like eating the extra bit himself.
Matching patterns: Counting and numerosity
Play the basic game as described above, but instead of having your child place the tokens directly over the dots, have your child place the tokens alongside the card. Ask your child to arrange his tokens in the same pattern that is illustrated on the card. And count!
Matching patterns: Conservation of number
For this game, use cards bearing dots only (no numerals). To play, place two cards side by side, each bearing the same number of dots arranged in a different pattern.
Ask your child to recreate each pattern using his tokens. When she’s done, help her count the number of tokens in each pattern. The patterns look different, but they use the same number of dots/tokens.
Spotting the goof: The one-to-one and cardinal principles
Here’s another activity swiped from the experimental literature.
In one study, researchers asked preschoolers to watch, and help, a rather incompetent puppet count a set of objects (Gelman et al 1986). The puppet would occasionally violate the one-to-one principle by double-counting (e.g., “one, two, three, three, four…”). He also sometimes skipped an object or repeated the wrong cardinal value.
Kids ranging in age from 3 to 5 were pretty good at detecting these violations. So your child might have fun correcting your own “goof up” puppet or toy at home. Have the puppet count the number of tokens in a set, and, sometimes, make mistakes. If your child doesn’t notice the error, you can correct the puppet yourself.
But either way, ask your child to explain what went wrong. Experimenters working with 4- and 5-year olds found that kids didn’t make conceptual progress unless they were asked to explain either their own or the experimenter's reasoning (Muldoon et al 2007).
The cookie maker: Making predictions about changes to a set
Even before kids master counting, they can learn about the concepts of addition and subtraction. Here are some research-inspired preschool number activities that ask kids to make predictions about addition and subtraction.
For these games, have a puppet or toy “bake cookies” (a set of tokens). Ask your child to count the cookies (helping if necessary) and then have the puppet bake one more cookie and add it to the set.
Are there more cookies or fewer cookies now? Ask your child to predict how many cookies are left. Then count again to check the answer.
Try the same thing with subtraction by having the puppet eat a cookie.
Don’t expect answers that are precise and correct. But you may find that your child is good at getting the gist. When researchers asked 3-, 4- and 5-year olds to perform similar tasks, they found that 90% of the predictions were in the right direction (Zur and Gelman 2004).
The Big Race: Increasing magnitudes and the number line
As your child begins to master the first few number words, you can also try these research-tested preschool number activities for teaching kids about the number line.
References: Preschool number activities
Butterworth B, Reeve R, and Lloyd D. 2008. Numerical thought with and without words: Evidence from indigenous Australian children. Proceedings of the National Academy of Sciences 105(35): 13179-13184.
Gelman R, Meck E, and Merkin S. 1986. Young children’s numerical competence. Cognitive Development 1(1): 1-29.
Muldoon KP, Lewis C, Francis B. 2007. Using cardinality to compare quantities: the role of social-cognitive conflict in early numeracy. Developmental Psychology 10(5):694-711.
Petersen LA and McNeil NM. 2012. Effects of Perceptually Rich Manipulatives on Preschoolers' Counting Performance: Established Knowledge Counts. Child Dev. 2012 Dec 13. doi: 10.1111/cdev.12028. [Epub ahead of print]
Zur O and Gelman R. 2004. Young children can add and subtract by predicting and checking. Early childhood Research Quarterly 19: 121-137.
Gorilla Black Hole in the Mist
This false-color image from NASA's Spitzer Space Telescope shows a distant galaxy (yellow) that houses a quasar, a super-massive black hole circled by a ring, or torus, of gas and dust. Spitzer's infrared eyes cut through the dust to find this hidden object, which appears to be a member of the long-sought population of missing quasars. The green and blue splotches are galaxies that do not hold quasars.
Astronomers had predicted that most quasars are blocked from our view by their tori, or by surrounding dust-drenched galaxies, making them difficult to find. Because infrared light can travel through gas and dust, Spitzer was able to detect enough of these objects to show that there is most likely a large population of obscured quasars.
In addition to the quasar-bearing galaxy shown here, Spitzer discovered 20 others in a small patch of sky. Astronomers identified the quasars with the help of radio data from the National Radio Astronomy Observatory's Very Large Array radio telescope in New Mexico. While normal galaxies do not produce strong radio waves, many galaxies with quasars appear bright when viewed with radio telescopes.
In this image, infrared data from Spitzer is colored both blue (3.6 microns) and green (24 microns), and radio data from the Very Large Array telescope is colored red. The quasar-bearing galaxy stands out in yellow because it emits both infrared and radio light.
Of the 21 quasars uncovered by Spitzer, astronomers believe that 10 are hidden by their dusty tori, while the rest are altogether buried in dusty galaxies. The quasar inside the galaxy pictured here is of the type that is obscured by its torus.
Mercury may have harbored an ancient magma ocean
Massive lava flows may have given rise to two distinct rock types on Mercury’s surface.
February 22, 2013
By analyzing Mercury’s rocky surface, scientists have been able to partially reconstruct the planet’s history over billions of years. Now, drawing upon the chemical composition of rock features on the planet’s surface, scientists at the Massachusetts Institute of Technology (MIT) in Cambridge have proposed that Mercury may have harbored a large, roiling ocean of magma early in its history, shortly after its formation about 4.5 billion years ago.
(Image credit: United States Geological Survey)
The scientists analyzed data gathered by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER), a NASA probe that has orbited the planet since March 2011. Later that year, a group of scientists analyzed X-ray fluorescence data from the probe and identified two distinct compositions of rocks on the planet’s surface. The discovery unearthed a planetary puzzle: What geological processes could have given rise to such distinct surface compositions?
To answer that question, the MIT team used the compositional data to recreate the two rock types in the lab and subjected each synthetic rock to high temperatures and pressures to simulate various geological processes. From their experiments, the scientists came up with only one phenomenon to explain the two compositions: a vast magma ocean that created two different layers of crystals, solidified, and eventually remelted into magma that then erupted onto Mercury’s surface.
“The thing that’s really amazing on Mercury is this didn’t happen yesterday,” said Timothy Grove from MIT. “The crust is probably more than 4 billion years old, so this magma ocean is a really ancient feature.”
Making Mercury’s rocks
MESSENGER entered Mercury’s orbit during a period of intense solar-flare activity; as the solar system’s innermost planet, Mercury takes the brunt of the Sun’s rays. The rocks on its surface reflect an intense fluorescent spectrum that scientists can measure with X-ray spectrometers to determine the chemical composition of surface materials.
As the spacecraft orbited the planet, an onboard X-ray spectrometer measured the X-ray radiation generated by Mercury’s surface. In September 2011, the MESSENGER science team parsed these energy spectra into peaks, with each peak signifying a certain chemical element in the rocks. From this research, the group identified two main rock types on Mercury’s surface.
Grove and his team set out to find an explanation for the differences in rock compositions. They translated the chemical element ratios into the corresponding building blocks that make up rocks, such as magnesium oxide, silicon dioxide, and aluminum oxide. The researchers then consulted what Grove refers to as a “pantry of oxides” — finely powdered chemicals — to recreate the rocks in the lab.
“We just mix these together in the right proportions, and we’ve got a synthetic copy of what’s on the surface of Mercury,” Grove said.
Crystals in the melt
The researchers then melted the samples of synthetic rock in a furnace, cranking the heat up and down to simulate geological processes that would cause crystals, and eventually rocks, to form in the melt.
“You can tell what would happen as the melt cools and crystals form and change the chemical composition of the remaining melted rock,” Grove said. “The leftover melt changes composition.”
After cooling the samples, the researchers picked out tiny crystals and melt pockets for analysis. The scientists initially looked for scenarios in which both original rock compositions might be related. For example, both rock types may have come from one region: One rock may have crystallized more than the other, creating distinct but related compositions.
But Grove found the two compositions were too different to have originated from the same region and, instead, may have come from two separate areas within the planet. The easiest explanation for what created these distinct regions, Grove said, is a large magma ocean, which over time likely formed different compositions of crystals as it solidified. This molten ocean eventually remelted, spewing lava onto the surface of the planet in massive volcanic eruptions.
Grove estimates that this magma ocean likely existed early in Mercury’s existence — possibly within the first 1 million to 10 million years — and may have been created from the violent processes that formed the planet. As the solar nebula condensed, small pieces of matter collided into larger chunks to form tiny, and then larger, planets. That process of colliding and accreting may produce enough energy to completely melt the planet — a scenario that would make an early magma ocean plausible.
“The acquisition of data by spacecraft must be combined with laboratory experiments,” said Bernard Charlier from MIT. “Although these data are valuable by themselves, experimental studies on these compositions enable scientists to reach the next level in the interpretation of planetary evolution.”
2.1 The Strength of Gravity and Electric Forces
Gravity is a relatively very weak force. The electric Coulomb force between a proton and an electron is of the order of 10^39 (that’s 1 with 39 zeros after it) times stronger than the gravitational force between them.
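As a rough check, that ratio can be computed from standard physical constants. The short sketch below (constant values rounded) is illustrative only:

k = 8.988e9      # Coulomb constant, N m^2 / C^2
e = 1.602e-19    # elementary charge, C
G = 6.674e-11    # gravitational constant, N m^2 / kg^2
m_p = 1.673e-27  # proton mass, kg
m_e = 9.109e-31  # electron mass, kg

# Both forces fall off as 1/r^2, so the ratio does not depend on distance
print((k * e**2) / (G * m_p * m_e))  # about 2.3e39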
We can get a hint of the relative strength of electromagnetic forces when we use a small magnet to pick up an iron object, say, a ball bearing. Even though the whole of Earth’s gravitation attraction is acting upon the ball bearing, the magnet overcomes this easily when close enough to the ball bearing. In space, gravity only becomes significant in those places where the electromagnetic forces are shielded or neutralized.
For spherical masses and charges, both the gravity force and the electric Coulomb force vary inversely with the square of the distance and so decrease rapidly with distance. For other geometries/configurations, the forces decrease more slowly with distance. For example, the force between two relatively long and thin electric currents moving parallel to each other varies inversely with the first power of the distance between them.
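For reference, the standard magnetostatic result for two long, thin parallel currents I1 and I2 separated by a distance d is a force per unit length of F/L = μ0 I1 I2 / (2π d), which indeed falls off with the first power of the distance rather than its square.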
Electric currents can transport energy over huge distances before using that energy to create some detectable result, just like we use energy from a distant power station to boil a kettle in our kitchen. This means that, over longer distances, electromagnetic forces and electric currents together can be much more effective than either the puny force of gravity or even the stronger electrostatic Coulomb force.
Remember that, just in order to explain the behavior of the matter we can detect, the Gravity Model needs to imagine twenty-four times more matter than we can see, in special locations, and of a special invisible type. It seems much more reasonable to investigate whether the known physics of electromagnetic forces and electric currents can bring about the observed effects instead of having to invent what may not exist.
2.2 The “Vacuum” of Space
Until about 100 years ago, space was thought to be empty. The words “vacuum” and “emptiness” were interchangeable. But probes have found that space contains atoms, dust, ions, and electrons. Although the density of matter in space is very low, it is not zero. Therefore, space is not a vacuum in the conventional sense of there being “nothing there at all”. For example, the Solar “wind” is known to be a flow of charged particles coming from the Sun and sweeping round the Earth, ultimately causing visible effects like the Northern (and Southern) Lights.
The dust particles in space are thought to be 2 to 200 nanometers in size, and many of them are also electrically charged, along with the ions and electrons. This mixture of neutral and charged matter is called plasma, and it is suffused with electromagnetic fields. We will discuss plasma and its unique interactions with electromagnetic fields in more detail in Chapter 3. The “empty” spaces between planets or stars or galaxies are very different from what astronomers assumed in the earlier part of the 20th century.
(Note about terminology in links: astronomers often refer to matter in the plasma state as “gas,” “winds,” “hot, ionized gas,” “clouds,” etc. This fails to distinguish between the two differently-behaving states of matter in space, the first of which is electrically-charged plasma and the other of which may be neutral gas which is just widely-dispersed, non-ionized molecules or atoms.)
The existence of charged particles and electromagnetic fields in space is accepted in both the Gravity Model and the Electric Model. But the emphasis placed on them and their behavior is one distinctive difference between the models. We will therefore discuss magnetic fields next.
2.3 Introduction to Magnetic Fields
What do we mean by the terms “magnetic field” and “magnetic field lines”? In order to understand the concept of a field, let’s start with a more familiar example: gravity.
We know that gravity is a force of attraction between bodies or particles having mass. We say that the Earth’s gravity is all around us here on the surface of the Earth and that the Earth’s gravity extends out into space. We can express the same idea more economically by saying that the Earth has a gravitational field which extends into space in all directions. In other words, a gravitational field is a region where a gravitational force of attraction will be exerted between bodies with mass.
Similarly, a magnetic field is a region in which a magnetic force would act on a magnetized or charged body. (We will look at the origin of magnetic fields later). The effect of the magnetic force is most obvious on ferromagnetic materials. For example, iron filings placed on a surface in a magnetic field align themselves in the direction of the field like compass needles.
Because the iron filings tend to align themselves south pole to north pole, the pattern they make could be drawn as a series of concentric lines, which would indicate the direction and, indirectly, strength of the field at any point.
Therefore magnetic field lines are one convenient way to represent the direction of the field, and serve as guiding centers for trajectories of charged particles moving in the field (ref. Fundamentals of Plasma Physics, Cambridge University Press, 2006, Paul Bellan, Ph.D.).
It is important to remember that field lines do not exist as physical objects. Each iron filing in a magnetic field is acting like a compass: you could move it over a bit and it would still point magnetic north-south from its new position. Similarly, a plumb bob (a string with a weight at one end) will indicate the local direction of the gravitational field. Lines drawn longitudinally through a series of plumb bobs would make a set of gravitational field lines. Such lines do not really exist; they are just a convenient, imaginary means of visualizing or depicting the direction of force applied by the field. See Appendix I for more discussion of this subject, or here, at Fizzics Fizzle.
A field line does not necessarily indicate the direction of the force exerted by whatever is causing the field. Field lines may be drawn to indicate direction or polarity of a force, or may be drawn as contours of equal intensities of a force, in the same way as contour lines on a map connect points of equal elevation above, say, sea level. Often, around 3-dimensional bodies with magnetic fields, imaginary surfaces are used to represent the area of equal force, instead of lines.
By consensus, the definition of the direction of a magnetic field at some point is from the north to the south pole.
In a gravitational field, one could choose to draw contour lines of equal gravitational force instead of the lines of the direction of the force. These lines of equal gravitational force would vary with height (that is, with distance from the center of the body), rather like contour lines on a map. To find the direction of the force using these elevation contour lines, one would have to work out which way a body would move. Placed on the side of a hill, a stone rolls downhill, across the contours. In other words the gravitational force is perpendicular to the field lines of equal gravitational force.
Magnetic fields are more complicated than gravity in that they can either attract or repel. Two permanent bar magnets with their opposite ends (opposite “poles”, or N-S) facing each other will attract each other along the direction indicated by the field lines of the combined field from them both (see image above). Magnets with the same polarity (N-N or S-S) repel one other along the same direction.
Magnetic fields also exert forces on charged particles that are in motion. Because the force that the charged particle experiences is at right angles to both the magnetic field line and the particle’s direction, a charged particle moving across a magnetic field is made to change direction (i.e. to accelerate) by the action of the field. Its speed remains unchanged to conserve kinetic energy. The following image shows what happens to an electron beam in a vacuum tube before and after a magnetic field is applied, in a lab demonstration.
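In symbols, this is the magnetic part of the standard Lorentz force law, F = q v × B: the force on a charge q moving with velocity v is perpendicular to both v and the field B, so it bends the particle’s path without changing its speed.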
The magnetic force on a charged particle in motion is analogous to the gyroscopic force. A charged particle moving directly along or “with” a magnetic field line won’t experience a force trying to change its direction, just as pushing on a spinning gyroscope directly along its axis of rotation will not cause it to turn or “precess”.
Even though the force on different charged particles varies, the concept of visualizing the direction of the magnetic field as a set of imaginary field lines is useful because the direction of the force on any one material, such as a moving charged particle, can be worked out from the field direction.
2.4 The Origin of Magnetic Fields
There is only one way that magnetic fields can be generated: by moving electric charges. In permanent magnets, the fields are generated by the motion and spin of electrons within the atoms. A strong magnet results when the spins of these electrons are aligned, so that their individual magnetic fields combine. If the magnet is heated to its Curie temperature, the thermal motion of the atoms breaks down the orderly spin alignment, greatly reducing the net magnetic field. In a metal wire carrying a current, the magnetic field is generated by electrons moving down the length of the wire. A more detailed introduction to the complex subject of exchange coupling and ferromagnetism can be found here.
Either way, any time electric charges move, they generate magnetic fields. Without moving electric charges, magnetic fields cannot exist. Ampère’s Law states that a moving charge generates a magnetic field with circular lines of force, on a plane that is perpendicular to the movement of the charge.
Since electric currents made up of moving electric charges can be invisible and difficult to detect at a distance, detecting a magnetic field at a location in space (by well-known methods in astronomy, see below) is a sure sign that it is accompanied by an electric current.
If a current flows in a conductor, such as a long straight wire or a plasma filament, then each charged particle in the current will have a small magnetic field around it. When all the individual small magnetic fields are added together, the result is a continuous magnetic field around the whole length of the conductor. The surfaces around the wire on which the field strength is equal are cylinders concentric with the wire.
Time-varying electric and magnetic fields are considered later. (See Chapter IV and Appendix III)
The question of the origin of magnetic fields in space is one of the key differences between the Gravity Model and the Electric Model.
The Gravity Model allows for the existence of magnetic fields in space because they are routinely observed, but they are said to be caused by dynamos inside stars. For most researchers today, neither electric fields nor electric currents in space play any significant part in generating magnetic fields.
In contrast, the Electric Model, as we shall see in more detail later, argues that magnetic fields must be generated by the movement of charged particles in space in the same way that magnetic fields are generated by moving charged particles here on Earth. Of course, the Electric Model accepts that stars and planets have magnetic fields, too, evidenced by magnetospheres and other observations. The new insight has been to explain a different origin for these magnetic fields in space if they are not created by dynamos in stars.
2.5 Detecting Magnetic Fields in Space
Since the start of the space age, spacecraft have been able to measure magnetic fields in the solar system using instruments on board the spacecraft. We can “see” magnetic fields beyond the range of spacecraft because of the effect that the fields have on light and other radiation passing through them. We can even estimate the strength of the magnetic fields by measuring the amount of that effect.
[Figure: optical image of a galaxy (left); magnetic field intensity and direction (right)]
We have known about the Earth’s magnetic field for centuries. We can now detect such fields in space, so the concept of magnetic fields in space is intuitively easy to understand, although astronomers have difficulty in explaining the origination of these magnetic fields.
Magnetic fields can be detected at many wavelengths by observing the amount of symmetrical spectrographic emission line or absorption line splitting that the magnetic field induces. This is known as the Zeeman effect, after Dutch physicist and 1902 Nobel laureate, Pieter Zeeman, (1865—1943). Note in the right image above how closely the field direction aligns with the galactic arms visible in the optical image, left.
Another indicator of the presence of magnetic fields is the polarization of synchrotron emission radiated by electrons in magnetic fields, useful at galactic scales. See Beck’s article on Galactic Magnetic Fields, in Scholarpedia, plus Beck and Sherwood’s Atlas of Magnetic Fields in Nearby Galaxies. Measurement of the degree of polarization makes use of the Faraday effect. The Faraday rotation in turn leads to the derivation of the strength of the magnetic field through which the polarized light is passing.
The highly instructional paper by Phillip Kronberg et al, Measurement of the Electric Current in a Kpc-Scale Jet, provides a compelling insight into the direct link between the measured Faraday rotation in the powerful “knots” in a large galactic jet, the resultant magnetic field strength, and the electric current present in the jet.
Magnetic fields are included in both the Gravity Model and the Electric Model of the Universe. The essential difference is that the Electric Model recognizes that magnetic fields in space always accompany electric currents. We will take up electric fields and currents next.
2.6 Introduction to Electric Fields
An electric charge has polarity. That is, it is either positive or negative. By agreement, the elementary (smallest) unit of charge is equal to that of an electron (-e) or a proton (+e). Electric charge is quantized; it is always an integer multiple of e.
The fundamental unit of charge is the coulomb (C), where e = 1.60×10^-19 coulomb. Taking the inverse of that tiny value, one coulomb is 6.25×10^18 singly-charged particles. One ampere (A) of electric current is one coulomb per second. A 20 A current is thus 20 C of charge per second, or the passage of 1.25×10^20 electrons per second past a fixed point.
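Those conversions are simple enough to verify directly; in R, for example, with no inputs beyond the values just quoted:

e <- 1.60e-19      # elementary charge in coulombs
1 / e              # singly-charged particles per coulomb: 6.25e18
I <- 20            # a 20 A current, i.e. 20 coulombs per second
I / e              # electrons passing per second: 1.25e20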
Every charge has an electric field associated with it. An electric field is similar to a magnetic field in that it is caused by the fundamental force of electromagnetic interaction and its “range” or extent of influence is infinite, or indefinitely large. The electric field surrounding a single charged particle is spherical, like the gravitational acceleration field around a small point mass or a large spherical mass.
The strength of an electric field at a point is defined as the force in newtons (N) that would be exerted on a positive test charge of 1 coulomb placed at that point. Like gravity, the force from one charge is inversely proportional to the square of the distance to the test (or any other) charge.
The point in defining a test charge as positive is to consistently define the direction of the force due to one charge acting upon another charge. Since like charges repel and opposites attract, just like magnetic poles, the imaginary electric field lines tend to point away from positive charges and toward negative charges. See a short YouTube video on the electric field here.
Here is a user-controlled demonstration of 2 charges and their associated lines of force in this Mathematica application.
You may need to download Mathematica Player (just once, and it's free) from the linked web site to play with the demo. Click on “Download Live Demo” after you install Mathematica Player. You can adjust strength and polarity of charge (+ or -) with the sliders, and drag the charged particles around the screen. Give the field lines time to smooth out between changes.
Electromagnetic forces are commonly stronger than gravitational forces on plasma in space. Electromagnetism can be shielded, while gravity can not, so far as is known. The common argument in the standard model is that most of the electrons in one region or body are paired with protons in the nuclei of atoms and molecules, so the net forces of the positive charges and negative charges cancel out so perfectly that “for large bodies gravity can dominate” (link: Wikipedia, Fundamental Interactions, look under the Electromagnetism sub-heading).
What is overlooked above is that, with the occasional exception of relatively cool, stable and near-neutral planetary environments like those found here on Earth, most other matter in the Universe consists of plasma; i.e., charged particles and neutral particles moving in a complex symphony of charge separation and the electric and magnetic fields of their own making. Gravity, while always present, is not typically the dominant force.
Far from consisting of mostly neutralized charge and weak magnetic and electric fields and their associated weak currents, electric fields and currents in plasma can and often do become very large and powerful in space. The Electric Model holds that phenomena in space such as magnetospheres, Birkeland currents, stars, pulsars, galaxies, galactic and stellar jets, planetary nebulas, “black holes”, energetic particles such as gamma rays and X-rays and more, are fundamentally electric events in plasma physics. Even the rocky bodies – planets, asteroids, moons and comets, and the gas bodies in a solar system – exist in the heliospheres of their stars, and are not exempt from electromagnetic forces and their effects.
Each separate charged particle contributes to the total electric field. The net force at any point in a complex electromagnetic field can be calculated using vectors, if the charges are assumed stationary. If charged particles are moving (and they always are), however, they “create” – are accompanied by – magnetic fields, too, and this changes the magnetic configuration. Changes in a magnetic field in turn create electric fields and thereby affect currents themselves, so fields that start with moving particles represent very complex interactions, feedback loops and messy mathematics.
Charges in space may be distributed spatially in any configuration. If, instead of a point or a sphere, the charges are distributed in a linear fashion so that the length of a charged area is much longer than its width or diameter, it can be shown that the surfaces of equal field strength surrounding the linear shape are concentric cylinders, and that the field from this configuration decreases with distance as the inverse of the distance (not the inverse square of the distance) from the centerline. This is important in studying the effects of electric and magnetic fields in filamentary currents such as lightning strokes, a plasma focus, or large Birkeland currents in space.
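A toy comparison of the two geometries makes the point (constants set to 1 here, since only the scaling with distance matters):

r <- c(1, 2, 4, 8)                 # distances in arbitrary units
point_field <- 1 / r^2             # spherical geometry: inverse square
line_field  <- 1 / r               # cylindrical geometry: inverse distance
rbind(r, point_field, line_field)
# Doubling the distance quarters the point-charge field, but only halves
# the field of the filamentary (line) configuration.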
Remember that electric field lines, which give the direction of the force applied to a positive charge, start on positive charge and terminate on negative charge, or, failing a negative charge, extend indefinitely far. Even a small charge imbalance with, say, more positively-charged particles here and more negatively-charged particles a distance away leads to a region of force, or electric field, between the areas of separated dissimilar charges. The importance of this arrangement will become clearer in the discussion of double layers in plasma, further on.
Think of an electrical capacitor where there are two separated, oppositely charged plates or layers, similar to the two charged plates “B” in the diagram above. There will be an electric field between the layers. Any charged particle moving or placed between the layers will be accelerated towards the oppositely charged layer. Electrons (which are negatively charged) accelerate toward the positively charged layer, and positive ions and protons toward the negatively charged layer.
According to Newton’s Laws, force results in acceleration. Therefore electric fields will result in charged particles’ acquiring velocity. Oppositely charged particles will move in opposite directions. An electric current is, by definition, movement of charge past a point. Electric fields therefore cause electric currents by giving charged particles a velocity.
If an electric field is strong enough, then charged particles will be accelerated to very high velocities by the field. For a little further reading on electric fields see this.
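As a rough illustration of how quickly this happens, here is a sketch for an electron in an arbitrarily chosen uniform field (the 100 V/m field strength and the nanosecond interval are invented for the example):

q <- 1.602e-19     # electron charge magnitude (C)
m <- 9.109e-31     # electron mass (kg)
E <- 100           # assumed uniform field, volts per metre
a <- q * E / m     # acceleration from F = qE and Newton's second law
t <- 1e-9          # after one nanosecond
a * t              # velocity acquired: ~1.8e4 m/s from even this modest field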
2.7 Detecting Electric Fields and Currents in Space
Electric fields and currents are more difficult to detect without putting a measuring instrument directly into the field, but we have detected currents in the solar system using spacecraft. One of the first was the low-altitude polar orbit TRIAD satellite in the 1970s, which found currents interacting with the Earth’s upper atmosphere. In 1981 Hannes Alfvén described a heliospheric current model in his book, Cosmic Plasma.
Since then, a region of electric current called the heliospheric current sheet (HCS) has been found that separates the positive and negative regions of the Sun’s magnetic field. It is tilted approximately 15 degrees to the solar equator. During one half of a solar cycle, outward-pointing magnetic fields lie above the HCS and inward-pointing fields below it. This is reversed when the Sun’s magnetic field reverses its polarity halfway through the solar cycle. As the Sun rotates, the HCS rotates with it, “dragging” its undulations into what NASA terms “the standard Parker spiral”.
Spacecraft have measured changes over time in the current sheet at various locations since the 1980s. They have detected near-Earth and solar currents as well. The Gravity Model accepts that these currents exist in space but assumes they are a result of the magnetic field. We will return to this point later.
Electric fields outside the reach of spacecraft are not detectable in precisely the same way as magnetic fields. Line-splitting or broadening in electric fields occurs, but it is asymmetrical line splitting that indicates the presence of an electric field, in contrast to the symmetric line splitting in magnetic fields. Further, electric field line broadening is sensitive to the mass of the elements emitting light (the lighter elements being readily broadened or split, the heavier elements less affected), while Zeeman (magnetic field) broadening is indifferent to mass. Asymmetric bright-line splitting or broadening is called the Stark effect, after Johannes Stark (1874–1957).
Another way in which we can detect electric fields is by inference from the behavior of charged particles, especially those that are accelerated to high velocities, and the existence of electromagnetic radiation such as X-rays in space, which we have long known from Earth-bound experience are generated by strong electric fields.
Electric currents in low density plasmas in space operate like fluorescent lights or evacuated Crookes Tubes. In a weak current state, the plasma is dark and radiates little visible light (although cold, thin plasma can radiate a lot in the radio and far infrared wavelengths). As current increases, plasma enters a glow mode, radiating a modest amount of electromagnetic energy in the visible spectrum. This is visible in the image at the end of this chapter. When electrical current becomes very intense in a plasma, the plasma radiates in the arc mode. Other than scale, there is little significant difference between lightning and the radiating surface of a star’s photosphere.
This means, of course, that alternative explanations for these effects are also possible, at least in theory. The Gravity Model often assumes that the weak force of gravity multiplied by supernatural densities that are hypothesized to make up black holes or neutron stars creates these types of effect. Or maybe particles are accelerated to near-light-speed by supernovae explosions. The question is whether “multiplied gravity” or lab-testable electromagnetism is more consistent with observations that the Universe is composed of plasma.
The Electric Model argues that electrical effects are not limited to those parts of the solar system that spacecraft have been able to reach; it supposes that similar electrical effects also occur outside the solar system. After all, it would be odd if the solar system were the only place in the Universe where electrical effects occur in space.
End of Chapter 2 | http://www.thunderbolts.info/wp/2011/10/17/essential-guide-to-the-eu-chapter-2/ | 13 |
32 | Fourier analysis is named after Jean Baptiste Joseph Fourier (1768-1830), a French mathematician and physicist. (Fourier is pronounced for-YAY, and is always capitalized.) While many contributed to the field, Fourier is honored for his mathematical discoveries and insight into the practical usefulness of the techniques. Fourier was interested in heat propagation, and presented a paper in 1807 to the Institut de France on the use of sinusoids to represent temperature distributions. The paper contained the controversial claim that any continuous periodic signal could be represented as the sum of properly chosen sinusoidal waves. Among the reviewers were two of history's most famous mathematicians, Joseph Louis Lagrange (1736-1813), and Pierre Simon de Laplace (1749-1827).
While Laplace and the other reviewers voted to publish the paper, Lagrange adamantly protested. For nearly 50 years, Lagrange had insisted that such an approach could not be used to represent signals with corners, i.e., discontinuous slopes, such as in square waves. The Institut de France bowed to the prestige of Lagrange, and rejected Fourier's work. It was only after Lagrange died that the paper was finally published, some 15 years later. Luckily, Fourier had other things to keep him busy, political activities, expeditions to Egypt with Napoleon, and trying to avoid the guillotine after the French Revolution (literally!).
Who was right? It's a split decision. Lagrange was correct in his assertion that a summation of sinusoids cannot form a signal with a corner. However, you can get very close. So close that the difference between the two has zero energy. In this sense, Fourier was right, although 18th century science knew little about the concept of energy. This phenomenon now goes by the name: Gibbs Effect, and will be discussed in Chapter 11.
Figure 8-1 illustrates how a signal can be decomposed into sine and cosine waves. Figure (a) shows an example signal, 16 points long, running from sample number 0 to 15. Figure (b) shows the Fourier decomposition of this signal, nine cosine waves and nine sine waves, each with a different frequency and amplitude. Although far from obvious, these 18 sinusoids add to produce the waveform in (a). It should be noted that the objection made by Lagrange only applies to continuous signals. For discrete signals, this decomposition is mathematically exact. There is no difference between the signal in (a) and the sum of the signals in (b), just as there is no difference between 7 and 3+4.
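That exactness is easy to confirm numerically. Here is a minimal sketch using R's built-in fft() function with an arbitrary 16-point signal (the sample values below are invented; any 16 points would do):

x <- c(3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3)  # 16 samples
X <- fft(x)                                        # decomposition into sinusoids
x_back <- Re(fft(X, inverse = TRUE)) / length(x)   # synthesis (inverse DFT)
max(abs(x - x_back))                               # ~1e-15: exact to rounding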
Why are sinusoids used instead of, for instance, square or triangular waves? Remember, there are an infinite number of ways that a signal can be decomposed. The goal of decomposition is to end up with something easier to deal with than the original signal. For example, impulse decomposition allows signals to be examined one point at a time, leading to the powerful technique of convolution. The component sine and cosine waves are simpler than the original signal because they have a property that the original signal does not have: sinusoidal fidelity. As discussed in Chapter 5, a sinusoid input to a system is guaranteed to produce a sinusoidal output. Only the amplitude and phase of the signal can change; the frequency and wave shape must remain the same. Sinusoids are the only waveform that have this useful property. While square and triangular decompositions are possible, there is no general reason for them to be useful.
The general term: Fourier transform, can be broken into four categories, resulting from the four basic types of signals that can be encountered.
A signal can be either continuous or discrete, and it can be either periodic or aperiodic. The combination of these two features generates the four categories, described below and illustrated in Fig. 8-2.
Aperiodic-Continuous. This includes, for example, decaying exponentials and the Gaussian curve. These signals extend to both positive and negative infinity without repeating in a periodic pattern. The Fourier transform for this type of signal is simply called the Fourier Transform.
Periodic-Continuous. Here the examples include: sine waves, square waves, and any waveform that repeats itself in a regular pattern from negative to positive infinity. This version of the Fourier transform is called the Fourier Series.
Aperiodic-Discrete. These signals are only defined at discrete points between positive and negative infinity, and do not repeat themselves in a periodic fashion. This type of Fourier transform is called the Discrete Time Fourier Transform.
Periodic-Discrete. These are discrete signals that repeat themselves in a periodic fashion from negative to positive infinity. This class of Fourier transform is sometimes called the Discrete Fourier Series, but is most often called the Discrete Fourier Transform.
You might be thinking that the names given to these four types of Fourier transforms are confusing and poorly organized. You're right, the names have evolved rather haphazardly over 200 years. There is nothing you can do but memorize them and move on.
These four classes of signals all extend to positive and negative infinity. Hold on, you say! What if you only have a finite number of samples stored in your computer, say a signal formed from 1024 points. Isn't there a version of the Fourier Transform that uses finite length signals? No, there isn't. Sine and cosine waves are defined as extending from negative infinity to positive infinity. You cannot use a group of infinitely long signals to synthesize something finite in length. The way around this dilemma is to make the finite data look like an infinite length signal. This is done by imagining that the signal has an infinite number of samples on the left and right of the actual points. If all these imaginary samples have a value of zero, the signal looks discrete and aperiodic, and the Discrete Time Fourier Transform applies. As an alternative, the imaginary samples can be a duplication of the actual 1024 points. In this case, the signal looks discrete and periodic, with a period of 1024 samples. This calls for the Discrete Fourier Transform to be used.
As it turns out, an infinite number of sinusoids are required to synthesize a signal that is aperiodic. This makes it impossible to calculate the Discrete Time Fourier Transform in a computer algorithm. By elimination, the only type of Fourier transform that can be used in DSP is the DFT. In other words, digital computers can only work with information that is discrete and finite in length. When you struggle with theoretical issues, grapple with homework problems, and ponder mathematical mysteries, you may find yourself using the first three members of the Fourier transform family. When you sit down to your computer, you will only use the DFT. We will briefly look at these other Fourier transforms in future chapters. For now, concentrate on understanding the Discrete Fourier Transform.
Look back at the example DFT decomposition in Fig. 8-1. On the face of it, it appears to be a 16 point signal being decomposed into 18 sinusoids, each consisting of 16 points. In more formal terms, the 16 point signal, shown in (a), must be viewed as a single period of an infinitely long periodic signal. Likewise, each of the 18 sinusoids, shown in (b), represents a 16 point segment from an infinitely long sinusoid. Does it really matter if we view this as a 16 point signal being synthesized from 16 point sinusoids, or as an infinitely long periodic signal being synthesized from infinitely long sinusoids? The answer is: usually no, but sometimes, yes. In upcoming chapters we will encounter properties of the DFT that seem baffling if the signals are viewed as finite, but become obvious when the periodic nature is considered. The key point to understand is that this periodicity is invoked in order to use a mathematical tool, i.e., the DFT. It is usually meaningless in terms of where the signal originated or how it was acquired.
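One way to make the periodicity concrete is to analyze two back-to-back copies of a record and confirm that nothing new appears. A small sketch in R, again with an arbitrary 16-point signal:

x <- c(3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3)
X <- fft(x)                      # spectrum of one period
Y <- fft(rep(x, 2))              # spectrum of two periods laid end to end
max(abs(Y[seq(1, 32, by = 2)] - 2 * X))  # even bins: just twice the originals
max(abs(Y[seq(2, 32, by = 2)]))          # odd bins: zero; nothing new appears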
Each of the four Fourier Transforms can be subdivided into real and complex versions. The real version is the simplest, using ordinary numbers and algebra for the synthesis and decomposition. For instance, Fig. 8-1 is an example of the real DFT. The complex versions of the four Fourier transforms are immensely more complicated, requiring the use of complex numbers. These are numbers such as: 3 + 4j, where j is equal to √-1 (electrical engineers use the variable j, while mathematicians use the variable, i). Complex mathematics can quickly become overwhelming, even to those that specialize in DSP. In fact, a primary goal of this book is to present the fundamentals of DSP without the use of complex math, allowing the material to be understood by a wider range of scientists and engineers. The complex Fourier transforms are the realm of those that specialize in DSP, and are willing to sink to their necks in the swamp of mathematics. If you are so inclined, Chapters 28-31 will take you there.
The mathematical term: transform, is extensively used in Digital Signal Processing, such as: Fourier transform, Laplace transform, Z transform, Hilbert transform, Discrete Cosine transform, etc. Just what is a transform? To answer this question, remember what a function is. A function is an algorithm or procedure that changes one value into another value. For example, y = 2x + 1 is a function. You pick some value for x, plug it into the equation, and out pops a value for y. Functions can also change several values into a single value, such as: y = 2a + 3b + 4c, where a, b and c are changed into y.
Transforms are a direct extension of this, allowing both the input and output to have multiple values. Suppose you have a signal composed of 100 samples. If you devise some equation, algorithm, or procedure for changing these 100 samples into another 100 samples, you have yourself a transform. If you think it is useful enough, you have the perfect right to attach your last name to it and expound its merits to your colleagues. (This works best if you are an eminent 18th century French mathematician). Transforms are not limited to any specific type or number of data. For example, you might have 100 samples of discrete data for the input and 200 samples of discrete data for the output. Likewise, you might have a continuous signal for the input and a continuous signal for the output. Mixed signals are also allowed, discrete in and continuous out, and vice versa. In short, a transform is any fixed procedure that changes one chunk of data into another chunk of data. Let's see how this applies to the topic at hand: the Discrete Fourier transform. | http://www.dspguide.com/ch8/1.htm | 13 |
44 | The Rate Law
When studying a chemical reaction, it is important to consider not only the chemical properties of the reactants, but also the conditions under which the reaction occurs, the mechanism by which it takes place, the rate at which it occurs, and the equilibrium toward which it proceeds. According to the law of mass action, the rate of a chemical reaction at a constant temperature depends only on the concentrations of the substances that influence the rate (Wikipedia). The substances that influence the rate of reaction are usually one or more of the reactants, but can occasionally be a product. Another influence on the rate of reaction can be a catalyst that does not appear in the balanced overall chemical equation. The rate law can only be experimentally determined and can be used to predict the relationship between the rate of a reaction and the concentrations of reactants.
How fast a reaction occurs depends on the reaction mechanism, the step-by-step molecular pathway leading from reactants to products. Chemical kinetics is concerned with how rates of chemical reactions are measured, how they can be predicted, and how reaction-rate data are used to deduce probable reaction mechanisms. The reaction rate, or speed, refers to something that happens in a unit of time. Consider driving from point A to point B: the distance covered is the product of rate and time. The same relationship holds here, with concentration playing the role of distance: the concentration change equals the product of the rate (in M/sec) and the time (in sec, min, or hours).
The rate itself is defined as the change in concentration of a reactant or product per unit of time. If A is a reactant and C a product, the rate might be expressed as:
rate = -Δ[A]/Δt = Δ[C]/Δt
In other words, the reaction rate is the change in concentration of reactant A or product C, over the change in time. This is an average rate of change and the minus sign is used to express the rate in terms of a reactant concentration. The reason for this is that by the conservation of mass, the rate of generation of product C must be equal to the rate of consumption of reactant A.
One may also wish to consider the instantaneous rate by taking the limit of the average rate as Δt approaches 0. This gives the instantaneous rate as:
rate = -d[A]/dt = d[C]/dt
Now the reaction rate is expressed as a derivative of the concentration of reactant A or product C, with respect to time, t.
Consider a reaction 2A + B → C, in which one mole of C is produced from every 2 moles of A and one mole of B. The rate of this reaction may be described in terms of either the disappearance of reactants over time, or the appearance of products over time:
rate = (decrease in concentration of reactants)/(time) = (increase in concentration of products)/(time)
Because the concentration of a reactant decreases during the reaction, a minus sign is placed before a rate that is expressed in terms of reactants. For the reaction above, the rate of reaction with respect to A is -Δ[A]/Δt, with respect to B is -Δ[B]/Δt, and with respect to C is Δ[C]/Δt. In this particular reaction, the three rates are not equal. According to the stoichiometry of the reaction, A is used up twice as fast as B, and A is consumed twice as fast as C is produced. To show a standard rate of reaction in which the rates with respect to all substances are equal, the rate for each substance should be divided by its stoichiometric coefficient.
Rate = -(1/2)(Δ[A]/Δt) = -Δ[B]/Δt = Δ[C]/Δt
Rate (as well as the Rate Law) is expressed in the units of molarity per second.
Rate Law (Rate Equation)
For nearly all forward, irreversible reactions, the rate is proportional to the product of the concentrations of the reactants, each raised to some power. For the general reaction:
aA + bB → cC + dD
The rate is proportional to [A]^m[B]^n; that is:
rate = k[A]^m[B]^n
This expression is the rate law for the general reaction above, where k is the rate constant. Multiplying the units of k by the concentration factors raised to the appropriate powers gives the rate in units of concentration/time.
The dependence of the rate of reaction on the concentrations can often be expressed as a direct proportionality in which the concentrations may appear to the zero, first, or second power. The power to which the concentration of a substance is raised in the rate law is the order of the reaction with respect to that substance. In the reaction above, the overall order of reaction is:
m + n
The order of the reaction can only be determined by experiment. In other words, one cannot determine what m and n are by just looking at a balanced chemical equation; m and n must be determined by the use of data. The overall order of a reaction is the sum of the orders with respect to each substance, that is, the sum of the exponents. Furthermore, the order of a reaction is stated with respect to a named substance in the reaction. The exponents in the rate law are not equal to the stoichiometric coefficients unless the reaction actually occurs via a single-step mechanism; the exponents are, however, equal to the stoichiometric coefficients of the rate-determining step. In general, the rate law can be used to calculate the rate of reaction from known reactant concentrations and to derive an equation that expresses a reactant concentration as a function of time.
The proportionality factor k, called the rate constant, is a constant at a fixed temperature; nonetheless, the rate constant varies with temperature. The rate constant k has dimensions, which can be determined by simple dimensional analysis of the particular rate law. The units should be expressed when k-values are tabulated. The higher the k value, the faster the reaction proceeds.
Experimental Determination of Rate Law
The values of k, x, and y in the rate law equation (r = k[A]^x[B]^y) must be determined experimentally for a given reaction at a given temperature. The rate is usually measured as a function of the initial concentrations of the reactants, A and B.
Example: Given the data below, find the rate law for the following reaction at 300 K.
A + B → C + D
Trial    [A] (M)    [B] (M)    Initial rate (M/sec)
1        1.00       1.00       2.0
2        1.00       2.00       8.0
3        2.00       1.00       4.0
Solution: First, look for two trials in which the concentrations of all but one of the substances are held constant.
a. In trials 1 and 2, the concentration of A is kept constant while the concentration of B is doubled. The rate increases by a factor of approximately 4. Write down the rate expression of the two trials.
Trial 1: r1 = k[A]^x[B]^y = k(1.00)^x(1.00)^y
Trial 2: r2 = k[A]^x[B]^y = k(1.00)^x(2.00)^y
Divide the second equation by the first which yields:
4 = (2.00)^y
y = 2
b. In trials 1 and 3, the concentration of B is kept constant while the concentration of A is doubled; the rate increases by a factor of approximately 2. The rate expressions of the two trials are:
Trial 1: r1 = k[A]^x[B]^y = k(1.00)^x(1.00)^y
Trial 3: r3 = k[A]^x[B]^y = k(2.00)^x(1.00)^y
Divide the second equation by the first, which yields:
2 = (2.00)^x
x = 1
So r = k[A][B]^2
The order of the reaction with respect to A is 1 and with respect to B is 2; the overall reaction order is:
1 + 2 = 3
To calculate k, substitute the values from any one of the above trials into the rate law:
2.0 M/sec = k(1.00 M)(1.00 M)^2
k = 2.0 M^-2 sec^-1
Therefore the rate law is r = 2.0[A][B]^2
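The same algebra is easily scripted. A sketch in R, reading the three trials off the table above:

conc_A <- c(1.00, 1.00, 2.00)   # M, trials 1-3
conc_B <- c(1.00, 2.00, 1.00)   # M
rate   <- c(2.0, 8.0, 4.0)      # M/sec
y <- log(rate[2] / rate[1]) / log(conc_B[2] / conc_B[1])  # order in B: 2
x <- log(rate[3] / rate[1]) / log(conc_A[3] / conc_A[1])  # order in A: 1
k <- rate[1] / (conc_A[1]^x * conc_B[1]^y)                # 2.0 M^-2 sec^-1
c(x = x, y = y, k = k)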
Order of Reactions
Chemical reactions are often classified on the basis of kinetics as zero-order, first-order, second-order, mixed-order, or higher-order reactions. The general reaction aA + bB → cC + dD will be used in the discussion that follows.
First, let's note what each of these orders means in terms of its effect on the initial rate of reaction:
A zero-order reaction has a constant rate, which is independent of the reactant's concentrations. Thus the rate law is:
rate = k = constant
where k has units of M·sec^-1. In other words, a zero-order reaction has a rate law in which the sum of the exponents is equal to zero. A change in temperature is the only factor that can change the rate of a zero-order reaction. In addition, a reaction is zero order if concentration data plotted versus time give a straight line. The slope of this resulting line is the negative of the zero-order rate constant k.
At times, chemists and researchers are also concerned with the relationship between the concentration of a reactant and time. Such an expression is called the integrated rate law: an equation that expresses the concentration of a reactant as a function of time (remember, each order of reaction has its own unique integrated rate law). The integrated rate law of a zero-order reaction is:
[A]t = -kt + [A]0 (See page on zero-order reactions to see how this is derived)
Notice, however, that this model cannot be entirely accurate, since the equation predicts negative concentrations at sufficiently large times. In other words, if one were to graph the concentration of A as a function of time, at some point the line would cross below 0. That is, of course, physically impossible, since concentrations cannot be negative. Nevertheless, the model is sufficient over the range of times for which the predicted concentration remains greater than zero.
The half-life (t1/2) of a reaction is the time needed for the concentration of a reactant to decrease to one-half of its original value. The half-life of a zero-order reaction can be derived as follows:
Given a reaction involving reactant A, and from the definition of a half-life, we know that t1/2 is the time it takes for half of the initial concentration of reactant A to react. Substituting these conditions ([A]t = [A]0/2 at t = t1/2) into the integrated rate law gives:
[A]0/2 = -k·t1/2 + [A]0
Solving for t1/2 then yields:
t1/2 = [A]0/(2k)
A first-order reaction has a rate proportional to the concentration of one reactant.
rate = k[A] or rate = k[B]
First-order rate constants have units of sec^-1. In other words, a first-order reaction has a rate law in which the sum of the exponents is equal to 1.
The integrated rate law of a first-order reaction is:
ln[A]t = -kt + ln[A]0
ln([A]t/[A]0) = -kt
Moreover, a first-order reaction can be identified by plotting ln[A] versus time t: a straight line with slope -k is produced.
The classic example of a first-order reaction is the process of radioactive decay. The concentration of radioactive substance A at any time t can be expressed mathematically as:
[A]t = [A]0·e^(-kt)
where [A]0 = initial concentration of A
[A]t = concentration of A at time t
k = rate constant
t = elapsed time
The half-life of a first-order reaction can be calculated in a similar fashion to that of the zero-order reaction, and one obtains:
t1/2 = ln(2)/k ≈ 0.693/k
where k is the first order rate constant. Notice that the half-life associated with the first-order reaction is the only case where half-life is independent of concentration of a reactant or product. In other words, [A] does not appear in the half-life formula above.
A second-order reaction has a rate proportional to the product of the concentration of two reactants, or to the square of the concentration of a single reactant. For example:
rate = k[A]^2
rate = k[B]^2
rate = k[A][B]
are all second-order reactions. Therefore, a second-order reaction has a rate law in which the sum of the exponents is equal to 2.
The integrated rate law of a second-order reaction (with a single reactant A) is as follows:
1/[A]t = kt + 1/[A]0
(See page on second-order reactions to see how this is derived)
The half-life of a second-order reaction is:
t1/2 = 1/(k[A]0)
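For comparison, the three half-life expressions can be written as one-line R functions (the k and [A]0 values passed in below are arbitrary illustrations):

half_life_zero   <- function(k, A0) A0 / (2 * k)  # depends on [A]0
half_life_first  <- function(k)     log(2) / k    # independent of [A]0
half_life_second <- function(k, A0) 1 / (k * A0)  # depends on [A]0
half_life_zero(0.05, 1.0)      # 10 time units
half_life_first(0.05)          # ~13.9 time units
half_life_second(0.05, 1.0)    # 20 time units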
Determining Reaction Rate
In the laboratory, one may collect a sample of data consisting of measured concentrations of a certain reactant A at different times. This sample data may look like the following (sample data obtained from ChemElements Post-Laboratory Exercises):
One can then plot [A] versus time, ln[A] versus time, and 1/[A] versus time to see which plot yields a straight line. The reaction order is the order associated with the plot that gives a straight line. While this may seem tedious, the process becomes quite simple with the use of Excel, or any other similar program.
By utilizing the formula capabilities of Excel, we can obtain two more data tables of ln[A] vs. time and 1/[A] vs. time very easily.
Plotting the three data sets, we can see clearly that the graph of ln[A] versus time is a straight line. Therefore the reaction associated with the given data is a first-order reaction.
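With a statistics package the straight-line test is a one-liner. The sketch below generates hypothetical first-order data with k = 0.2 (real measured concentrations would replace the simulated ones), fits the three candidate plots with lm(), and recovers the order and the rate constant:

time <- 0:10
A    <- 1.0 * exp(-0.2 * time)   # made-up data from a first-order decay
fit0 <- lm(A ~ time)             # zero order:   [A]   vs t
fit1 <- lm(log(A) ~ time)        # first order:  ln[A] vs t
fit2 <- lm(I(1/A) ~ time)        # second order: 1/[A] vs t
summary(fit1)$r.squared          # essentially 1: this plot is the straight line
coef(fit1)["time"]               # slope = -k = -0.2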
1. In a third-order reaction involving two reactants and two products, doubling the concentration of the first reactant causes the rate to increase by a factor of 2. If the concentration of the second reactant is cut in half, what will happen to the rate of this reaction?
Solution: The rate is directly proportional to the concentration of the first reactant: when the concentration of that reactant doubles, the rate also doubles. Because the reaction is third-order, the sum of the exponents in the rate law must equal 3. Therefore, the rate law is: rate = k[A][B]^2. Reactant A has an exponent of 1 because its concentration is directly proportional to the rate; for this reason, the concentration of reactant B must be squared in order to write a law that represents a third-order reaction. When the concentration of reactant B is multiplied by 1/2, the rate is multiplied by (1/2)^2 = 1/4. Therefore, the rate of reaction will decrease by a factor of 4.
2. A certain chemical reaction follows the rate law, rate = k[NO][Cl2]. Which of the following statements describe the kinetics of this reaction:
3. The data in the following table is collected for the combustion of the theoretical compound XH4:
XH4 + 2O2 → XO2 + 2H2O
What is the rate law for the reaction described?
| http://chemwiki.ucdavis.edu/index.php?title=Physical_Chemistry/Kinetics/Rate_Laws/The_Rate_Law&bc=0 | 13
26 | Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make?
A game for 2 players that can be played online. Players take it in turns to select a word from the 9 words given. The aim is to select all the occurrences of the same letter.
Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.
ABC is an equilateral triangle and P is a point in the interior of the triangle. We know that AP = 3cm and BP = 4cm. Prove that CP must be less than 10 cm.
Can you cross each of the seven bridges that join the north and south of the river to the two islands, once and once only, without retracing your steps?
A huge wheel is rolling past your window. What do you see?
Show that among the interior angles of a convex polygon there cannot be more than three acute angles.
If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable. Decide which of these diagrams are traversable.
How many pairs of numbers can you find that add up to a multiple of 11? Do you notice anything interesting about your results?
Do you know how to find the area of a triangle? You can count the squares. What happens if we turn the triangle on end? Press the button and see. Try counting the number of units in the triangle now. . . .
Is it possible to rearrange the numbers 1,2......12 around a clock face in such a way that every two numbers in adjacent positions differ by any of 3, 4 or 5 hours?
Make a set of numbers that use all the digits from 1 to 9, once and once only. Add them up. The result is divisible by 9. Add each of the digits in the new number. What is their sum? Now try some. . . .
You can work out the number someone else is thinking of as follows. Ask a friend to think of any natural number less than 100. Then ask them to tell you the remainders when this number is divided by. . . .
Find some triples of whole numbers a, b and c such that a^2 + b^2 + c^2 is a multiple of 4. Is it necessarily the case that a, b and c must all be even? If so, can you explain why?
These formulae are often quoted, but rarely proved. In this article, we derive the formulae for the volumes of a square-based pyramid and a cone, using relatively simple mathematical concepts.
Blue Flibbins are so jealous of their red partners that they will not leave them on their own with any other blue Flibbin. What is the quickest way of getting the five pairs of Flibbins safely to. . . .
Is it true that any convex hexagon will tessellate if it has a pair of opposite sides that are equal, and three adjacent angles that add up to 360 degrees?
In how many distinct ways can six islands be joined by bridges so that each island can be reached from every other island...
Can you find all the 4-ball shuffles?
Powers of numbers behave in surprising ways. Take a look at some of these and try to explain why they are true.
Can you arrange the numbers 1 to 17 in a row so that each adjacent pair adds up to a square number?
Imagine we have four bags containing numbers from a sequence. What numbers can we make now?
This article invites you to get familiar with a strategic game called "sprouts". The game is simple enough for younger children to understand, and has also provided experienced mathematicians with. . . .
There are four children in a family, two girls, Kate and Sally, and two boys, Tom and Ben. How old are the children?
Caroline and James pick sets of five numbers. Charlie chooses three of them that add together to make a multiple of three. Can they stop him?
Can you discover whether this is a fair game?
You have been given nine weights, one of which is slightly heavier than the rest. Can you work out which weight is heavier in just two weighings of the balance?
Take any two digit number, for example 58. What do you have to do to reverse the order of the digits? Can you find a rule for reversing the order of digits for any two digit number?
Prove that if a^2+b^2 is a multiple of 3 then both a and b are multiples of 3.
On a standard die the numbers 1, 2 and 3 are opposite 6, 5 and 4 respectively, so that opposite faces add to 7. If you make standard dice by writing 1, 2, 3, 4, 5, 6 on blank cubes you will find. . . .
Factorial one hundred (written 100!) has 24 noughts when written in full and that 1000! has 249 noughts? Convince yourself that the above is true. Perhaps your methodology will help you find the. . . .
The nth term of a sequence is given by the formula n^3 + 11n. Find the first four terms of the sequence given by this formula and the first term of the sequence which is bigger than one million. . . .
I start with a red, a blue, a green and a yellow marble. I can trade any of my marbles for three others, one of each colour. Can I end up with exactly two marbles of each colour?
The picture illustrates the sum 1 + 2 + 3 + 4 = (4 x 5)/2. Prove the general formula for the sum of the first n natural numbers and the formula for the sum of the cubes of the first n natural. . . .
Show that if three prime numbers, all greater than 3, form an arithmetic progression then the common difference is divisible by 6. What if one of the terms is 3?
Can you see how this picture illustrates the formula for the sum of the first six cube numbers?
A little bit of algebra explains this 'magic'. Ask a friend to pick 3 consecutive numbers and to tell you a multiple of 3. Then ask them to add the four numbers and multiply by 67, and to tell you. . . .
Replace each letter with a digit to make this addition correct.
Carry out cyclic permutations of nine digit numbers containing the digits from 1 to 9 (until you get back to the first number). Prove that whatever number you choose, they will add to the same total.
What happens to the perimeter of triangle ABC as the two smaller circles change size and roll around inside the bigger circle?
Some puzzles requiring no knowledge of knot theory, just a careful inspection of the patterns. A glimpse of the classification of knots and a little about prime knots, crossing numbers and. . . .
What are the missing numbers in the pyramids?
We are given a regular icosahedron having three red vertices. Show that it has a vertex that has at least two red neighbours.
Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information.
Advent Calendar 2011 - a mathematical activity for each day during the run-up to Christmas.
Can you fit Ls together to make larger versions of themselves?
Pick a square within a multiplication square and add the numbers on each diagonal. What do you notice?
Choose a couple of the sequences. Try to picture how to make the next, and the next, and the next... Can you describe your reasoning?
Consider the equation 1/a + 1/b + 1/c = 1 where a, b and c are natural numbers and 0 < a < b < c. Prove that there is only one set of values which satisfy this equation.
Three frogs hopped onto the table. A red frog on the left a green in the middle and a blue frog on the right. Then frogs started jumping randomly over any adjacent frog. Is it possible for them to. . . . | http://nrich.maths.org/public/leg.php?code=71&cl=3&cldcmpid=4928 | 13 |
20 | The science of eclipses
Did you know that the ancient Greeks were able to work out the diameter of the Earth using data from lunar eclipses?
The study of Earth's shadow projected on the Moon allows us to deduce that Earth is spherical. The ancient Greeks worked this out. Using lunar eclipse timing, as far back as the third century BC, Aristarchus of Samos estimated the lunar diameter.
Using Eratosthenes' earlier measurement of Earth's diameter, he deduced the Earth-Moon distance. Hipparchus (150 BC) and Ptolemy (2nd century AD) improved with impressive precision the measurements of the lunar diameter and the Earth-Moon distance.
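The flavour of the calculation can be reproduced in a few lines. The sketch below (in R) uses deliberately round, assumed numbers (an Earth diameter of 12,700 km, a shadow about 2.5 lunar diameters wide, a half-degree lunar disk), not the Greeks' actual figures:

earth_d <- 12700                         # km, an Eratosthenes-style round value
shadow_ratio <- 2.5                      # assumed shadow width in lunar diameters
moon_d <- earth_d / (shadow_ratio + 1)   # shadow geometry, allowing for the Sun's size
moon_d                                   # ~3630 km (modern value ~3470 km)
moon_dist <- moon_d / (0.5 * pi / 180)   # small-angle rule: distance = size / angle
moon_dist                                # ~416,000 km (modern mean ~384,000 km)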
In the 17th century, in order to improve longitude determination, absolute cartography made use of lunar eclipse phenomena, which were observable simultaneously from different points.
Today, during lunar eclipses, laser-ranging measurements can be made with great accuracy using reflectors placed on the Moon during the Apollo and Lunokhod missions. This has allowed more precise measurement of lunar acceleration and the slowing down in Earth's rotation.
Analysis of the refracted light of Earth's atmosphere during lunar eclipses has also made it possible to show that atmospheric ozone is confined to a layer between 50 and 80 kilometres above Earth's surface.
Eclipses and scientific discovery
The ancient Greeks and Romans used dated references to eclipses to improve the calendar. They also noted phenomena related to eclipses. The corona seen during eclipses was only identified as a solar phenomenon in the middle of the 19th century.
Until then it was thought that the corona might come from terrestrial smoke, or that it indicated a lunar atmosphere. Kepler attributed it to solar light refracted by the atmosphere of the Moon. Even Halley (who predicted with success the path of the 1715 eclipse) and Arago interpreted the corona to be lunar in origin.
It was Cassini who established a link with the solar zodiacal light in 1683. The British amateur astronomer Francis Baily observed the irregularities of the lunar limb during the 1836 annular eclipse.
The first successful total eclipse photograph was taken on July 1851 by Berkowski from Königsberg. At the 1860 eclipse, photographs obtained by W. De La Rue and A. Secchi from two sites 500 km apart showed that prominences could not belong to the Moon, but were in fact of solar origin.
From 1842, the use of spectroscopes allowed the recognition of hydrogen emission as well as a new, unknown emission line measured by Janssen at the 1868 eclipse from India. This was shown later by William Ramsay (in 1895) to come from an element then unknown on Earth, which had therefore been given the name 'helium' - now measured as the second most abundant element in the Universe.
Coronal eclipse spectra taken in 1869 also showed mysterious green and red spectral lines, first attributed to another unknown element given the name 'coronium'. It was only much later, after the development of quantum mechanics and the measurement of spark discharge spectra by Bowen and Edlén (1939), that the physicist Grotrian was able to solve the mystery of the coronium.
Grotrian showed that these mysterious transitions in fact show iron in a very high state of ionisation due to the extreme temperature (iron having lost nine electrons for the red coronal line and 13 electrons for the green coronal line) of the corona.
This can only occur at temperatures exceeding a million degrees. This discovery has led to another puzzle still unsolved today, but to which SOHO has unveiled fundamental clues: what heats the corona?
Another famous eclipse in 1919 allowed Arthur Eddington to confirm Einstein's prediction of general relativity space-time distortion in a gravity field.
An earlier German expedition to conduct this test in August 1914 failed when the team was taken prisoner in Russia before being able to perform the key experiment. In 1919, Eddington selected two sites of observation, in Brazil and on Principe Island. The eclipse pictures showed an offset in the positions of stars due to solar gravitational bending of light that confirmed Einstein's theory exactly.
What can be measured during solar eclipses?
Eclipses made it possible to determine with precision the shape of the Moon. Their study improved the prediction of ephemerides. Even today, a total solar eclipse still allows astrophysicists to make valuable scientific measurements, particularly when co-ordinated with measurements from observatories in space.
Solar eclipses enable scientists to measure accurately the diameter of the Sun and to search for variations in that diameter over long time scales. Geophysicists measure eclipse phenomena induced in the high terrestrial atmosphere.
Total solar eclipses allow the observation of structures of the solar corona that cannot usually be studied due to the higher normal luminosity of skylight during the day.
The structures in the corona are similar to patterns seen around a magnet. In fact sunspots were shown to be solar surface magnetic structures, which have their counterpart in the corona. The study of the solar corona gives us much information about the Sun's surface and its global variations. The morphology of the corona is changing due to the reorganisation of the surface magnetic field during the solar cycle, which can be seen in eclipse pictures taken at different epochs. The re-analysis of historical eclipse reports and documents could help to understand long term solar magnetic variations.
One can follow these magnetically confined structures deep into the interplanetary medium. Eclipses make it possible to diagnose the physical conditions of temperature (at more than 1 million degrees), densities and dynamics, both in the corona and at the base of the sources of the solar wind. These dynamic instabilities and the solar wind pervade the whole solar system and interact with Earth's magnetosphere.
Artificial eclipses and coronagraphs
Until the invention of the coronagraph in 1930, the rare glimpses from solar eclipses were the only opportunities to observe and study the solar corona.
French astrophysicist Bernard Lyot developed the coronagraph, an instrument which made it possible for the first time to occult the solar disk in order to study the inner corona (creating an artificial eclipse). Such observations are still limited by the stray light scattered by the daytime atmosphere, and only work from clean, high-altitude sites such as the Pic du Midi, Sacramento Peak and Hawaii observatories.
Used with additional filtering techniques to isolate specific emission this has given interesting results. Lyot made a spectacular movie using the Pic du Midi instrument showing giant prominences, arches and ejections of coronal mass. Unfortunately he died soon after a total solar eclipse expedition in Khartoum in 1952.
The solar corona from space
A revolution in the study of the corona came with the space age. In early sounding rocket experiments extreme-ultraviolet (EUV) and X-ray telescopes gave a view of the Sun very different from that previously seen in visible light.
X-ray radiation arises from high-temperature coronal plasmas, and with these telescopes the corona can be mapped over the whole solar disk, and not only above the limb as in the eclipses. The X-ray and EUV instruments on the Skylab platform provided motion pictures of the solar corona, with the discovery of coronal holes of low X-ray emission, and changes in strongly emitting active magnetic regions.
Also, space coronagraphs on the Solar Maximum Mission, launched in 1980, mapped the outer corona in visible light, extending to long timescales the previously rare coronal snapshots obtained during eclipses.
The Yohkoh Japanese X-ray satellite, launched in 1991, has obtained millions of X-ray images of the dynamic solar corona. The ultimate observations of the solar corona are now being obtained with SOHO, the Solar Heliospheric Observatory.
This includes data obtained with the Extreme UV Imaging Telescope (EIT), spectro-imagers (CDS, SUMER), a UV coronagraph (UVCS) measuring intensities and flows in the corona, and a three-channel visible coronagraph in the visible (LASCO) covering an impressive range of distances from 0.2 to 30 solar radii above the limb.
In addition, experiments map the surface magnetic field (MDI) and in-situ particle detectors measure the solar wind and instabilities a million kilometres before they reach Earth. SOHO now gives us a continuous view of the solar corona.
Co-ordinated eclipse-space observations
In this era of orbiting solar observatories, is there still a scientific benefit in making eclipse observations from Earth?
The biggest benefit comes from co-ordinating modern ground-based eclipse observations with space measurements. There are still new discoveries to be made from eclipses, by using the latest methods of investigation (very accurate timing, fast rate of measurements, wavelengths not covered from space such as infrared or visible ranges, new experimental techniques).
The interpretation of eclipse data together with space data gives us new insights into earlier eclipse observations, and also allows the study of long-term historical variability of the solar corona, and of the solar magnetic cycle.
Since the launch of SOHO in 1995, co-ordinated campaigns have been conducted during the total solar eclipse of 26 February 1998, and the 11 August 1999 eclipse. SOHO measurements are analysed, together with ground based eclipse results, providing important insights into the nature of our Sun.
| http://www.esa.int/Our_Activities/Space_Science/The_science_of_eclipses/(print) | 13
21 | Speed of gravity
Isaac Newton's mechanical systems included the concept of a force that operated between two objects: gravity. The quantity of force depended on the masses of the two objects, with more massive objects exerting more force. This led to a problem: it seemed that each object had to "know" about the other in order to exert the proper amount of force on it. This troubled Newton, who commented that he made no claims about how it could work.
Given two bodies attracting each other, the question then arises as to the speed of propagation of the force itself. Newton demonstrated that unless the force was instantaneous, relative motion would lead to the non-conservation of angular momentum. He could observe that angular momentum was in fact conserved; indeed, the conservation of momentum was one of the observations that led to his theory of gravitation in the first place. He therefore concluded that gravity acted instantaneously.
Michael Faraday's work on electromagnetism in the mid-1800s provided a new framework for understanding electromagnetic forces. In these "field theories" the objects in question do not act on each other, but on space itself. Other objects react to that field, not to the distant object itself. There is no requirement for one object to have any "knowledge" of the other. With this simple change, many of the philosophical problems of Newton's seminal work simply disappeared while the answers stayed the same, and in many cases the answers were easier to calculate.
By viewing gravity as being transmitted by a field rather than a force, it is possible for gravity to be transmitted at a finite speed without running into the problems that concerned Newton. If gravity is transmitted by a field, a moving object will cause the field potentials to be non-circular. Hence, by using a delayed field rather than a delayed force, one can show that the force will point to where an object is currently, rather than to where it was in the past. Gravity still travels at a finite speed, because a sudden change in the direction of an object will not be noticed by the object it is pulling until after a delay.
A similar effect occurs in electromagnetic fields. This view has major implications for how physicists view the world. Until the mid-19th century, the standard view among physicists was that forces are the fundamental entity and fields merely mathematical shorthand to describe their behavior. Since the late 19th century, physicists have gradually come to view fields as the more fundamental entity, and forces as manifestations of the behavior of fields. The fact that a delayed-force theory leads to wrong answers, while a delayed-field theory leads to right ones, is one reason why.
The belief that fields rather than forces were the fundamental entity was one of the main motivating factors that led Albert Einstein to develop his theory of general relativity in the early 20th century, replacing Newtonian gravity, which was widely considered defective because it relied on instantaneous forces rather than fields to transmit gravity. In general relativity (GR), the field is elevated to the only real concern. The gravitational field is equated with the curvature of space-time, and propagations (including gravitational waves) can be shown, according to this theory, to travel at a single speed, cg.
This finite speed may at first seem to lead to exactly the same sorts of problems that Newton was originally concerned with. Although the calculations are considerably more complicated, one can show that general relativity does not suffer from these problems just as classical delayed potential theory does not. However, Tom Van Flandern has made a name for himself by insisting that this proves general relativity incorrect. Other physicists who have interacted with him argue that the objection he presents was resolved in the 19th century.
In 1999 a Russian researcher from Izhevsk, Yuri Ivanov, confirmed the results of Roland Eötvös's experiments, according to which a copper ball should fall with an acceleration about one part in 10^9 (0.000000001) greater than that of a water drop.
Ivanov discovered a phenomenon which he called the shady gravitational effect (Russian: теневой гравитационный эффект): the gravity of the Sun, passing through the denser layers of the Earth's core, is partially weakened and forms a sort of gravitational shadow on the opposite side of the planet. At the winter solstice of 2002, a torsion balance reacted to the change in the Sun's attractive force eight minutes before midnight. That meant that the propagation speed of gravity is practically instantaneous (according to the calculations of Laplace, this speed should exceed the speed of light by a factor of at least six million).
In September 2002, Sergei Kopeikin made an indirect experimental measurement of the speed of gravity, using Ed Fomalont's data from a transit of Jupiter across the line of sight to a bright radio source. The speed of gravity, presented in January 2003, was found to lie somewhere in the range between 0.8 and 1.2 times the speed of light, which is consistent with the theoretical prediction of general relativity that the speed of gravity is exactly the same as the speed of light.
Some physicists have criticised the conclusions drawn from this experiment on the grounds that, as it was structured, the experiment was incapable of finding any results other than agreement with the speed of light. This criticism originates from the belief that the fundamental speed c is electromagnetic in origin, so that according to those physicists the Einstein equations must depend on the physical speed of light, which explains why gravity always propagates in that theory at the speed of light. An alternative point of view is that the Einstein equations describe the origin and evolution of space-time curvature and gravitational waves, which are conceptually independent of the electromagnetic field; hence the fundamental speed c in the Einstein equations cannot be interpreted as the physical speed of light, even though it must have the same numerical value as the speed of light in vacuum if general relativity is correct. Perhaps the most illustrative way to distinguish the two speeds is to denote the speed of gravity in the Einstein equations as cg and the speed of light in Maxwell's equations as c. The Kopeikin-Fomalont experiment observed the bending of the quasar's light caused by the time-dependent gravitational field of Jupiter and measured the ratio c/cg. This observation shows that the ratio is unity to a precision of 20%.
On the other hand, Tom Van Flandern, at the U.S. Army's Research Lab at the University of Maryland, calculates the speed of gravity as ≥ 2x10^10 c (that is, at least twenty billion times the speed of light). If true, this could explain why gravity appears to be an instantaneous (i.e., infinite-speed) force rather than one of finite speed. Van Flandern's theories are controversial and not well accepted among physicists.
- Does Gravity Travel at the Speed of Light? in The Physics FAQ
- New Scientist story on experimental measurement of the speed of gravity
- Testing Relativistic Effect of Propagation of Gravity by Very-Long Baseline Interferometry
- Measuring the Speed of Propagation of Gravity
- The Measurement of the Light Deflection from Jupiter: Experimental Results
- A criticism of the above, and another
- Aberration and the Speed of Gravity
- Aberration and the Speed of Gravity in the Jovian Deflection Experiment
- The post-Newtonian treatment of the VLBI experiment on September 8, 2002
- The Speed of Gravity in General Relativity and Theoretical Interpretation of the Jovian Deflection Experiment
- The Speed of Gravity - Repeal of the Speed Limit, By Tom Van Flandern, Meta Research
- The Speed of Gravity - What the Experiments Say - Tom Van Flandern views
- Experiments indicate that the speed of gravity is minimum 20 billion times c. by Alfonso Leon Guillen Gomez | http://www.exampleproblems.com/wiki/index.php/Speed_of_gravity | 13 |
89 | An Introduction to MATLAB: Basic Operations
MATLAB is a programming language that is very useful for numerical simulation and data analysis. The following tutorials are intended to give you an introduction to scientific computing in MATLAB.
Lots of MATLAB demos are available online at
You can work through these at your leisure, if you want. Everything you need for EOS 225 should be included in the following tutorials.
At its simplest, we can use MATLAB as a calculator. Type a simple sum, e.g.

2+3

What do you get?

ans = 5
Now try a product, e.g.

3*7

What do you get?

ans = 21
Can also do more complicated operations, like taking exponents: for "3 squared" type
3^2

ans = 9
For "two to the fourth power" type
2^4

ans = 16
"Scientific notation" is expressed with "10^" replaced by "e" - that is, 10^7 is written 1e7 and 2.15x10^-3 is written 2.15e-3. For example:
1.5e-2

ans = 0.0150
2e-3 * 1000
ans = 2
MATLAB has all of the basic arithmetic operations built in:
+ addition
- subtraction
* multiplication
/ division
^ exponentiation
as well as many more complicated functions (e.g. trigonometric, exponential):
sin(x)  sine of x (in radians)
cos(x)  cosine of x (in radians)
exp(x)  exponential of x
log(x)  base e logarithm of x (normally written ln)
The above are just a sample - MATLAB has lots of built-in functions.
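As a quick illustration, here is a minimal sketch exercising a few of these built-in functions (the particular inputs are arbitrary example choices, not from the original tutorial; expected outputs are shown as comments):

sin(pi/2)    % ans = 1
exp(1)       % ans = 2.7183 (the number e)
log(exp(2))  % ans = 2 (log is the natural logarithm)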
When working with arithmetic operations, it's important to be clear about the order in which they are to be carried out. This can be specified by the use of brackets. For example, if you want to multiply 5 by 2 then add 3, we can type
(5*2)+3

ans = 13
and we get the correct value. If we want to multiply 5 by the sum of 2 and 3, we type
5*(2+3)

ans = 25
and this gives us the correct value. Carefully note the placement of the brackets. If you don't put brackets, Matlab has its own built in order of operations: multiplication/division first, then addition/subtraction. For example:
5*2+3

ans = 13
gives the same answer as (5*2)+3. As another example, if we want to divide 8 by 2 and then subtract 3, we type
(8/2)-3

ans = 1
and get the right answer. To divide 8 by the difference between 2 and 3, we type
8/(2-3)

ans = -8
and again get the right answer. If we type
8/2-3

ans = 1
we get the first answer - the order of operations was division first, then subtraction.
In general, it's good to use brackets - they involve more typing, and may make a computation look more cumbersome, but they help reduce ambiguity regarding what you want the computation to do.
This is a good point to make a general comment about computing. Computers are actually quite stupid - they do what you tell them to, not what you want them to do. When you type any commands into a computer program like MATLAB, you need to be very careful that these two things match exactly.
You can always get help in MATLAB by typing "help". Type this alone and you'll get a big list of directories you can get more information about - which is not always too useful. It's more useful to type "help" with some other command that you'd like to know more about. E.g.:
help sin

SIN Sine of argument in radians.
SIN(X) is the sine of the elements of X.
See also ASIN, SIND.
Reference page in Help browser: doc sin
help atan

ATAN Inverse tangent, result in radians.
ATAN(X) is the arctangent of the elements of X.
See also ATAN2, TAN, ATAND.
Reference page in Help browser: doc atan
You can get a list of all the built-in functions by typing

help elfun
Elementary math functions. Trigonometric. sin - Sine. sind - Sine of argument in degrees. sinh - Hyperbolic sine. asin - Inverse sine. asind - Inverse sine, result in degrees. asinh - Inverse hyperbolic sine. cos - Cosine. cosd - Cosine of argument in degrees. cosh - Hyperbolic cosine. acos - Inverse cosine. acosd - Inverse cosine, result in degrees. acosh - Inverse hyperbolic cosine. tan - Tangent. tand - Tangent of argument in degrees. tanh - Hyperbolic tangent. atan - Inverse tangent. atand - Inverse tangent, result in degrees. atan2 - Four quadrant inverse tangent. atanh - Inverse hyperbolic tangent. sec - Secant. secd - Secant of argument in degrees. sech - Hyperbolic secant. asec - Inverse secant. asecd - Inverse secant, result in degrees. asech - Inverse hyperbolic secant. csc - Cosecant. cscd - Cosecant of argument in degrees. csch - Hyperbolic cosecant. acsc - Inverse cosecant. acscd - Inverse cosecant, result in degrees. acsch - Inverse hyperbolic cosecant. cot - Cotangent. cotd - Cotangent of argument in degrees. coth - Hyperbolic cotangent. acot - Inverse cotangent. acotd - Inverse cotangent, result in degrees. acoth - Inverse hyperbolic cotangent. hypot - Square root of sum of squares. Exponential. exp - Exponential. expm1 - Compute exp(x)-1 accurately. log - Natural logarithm. log1p - Compute log(1+x) accurately. log10 - Common (base 10) logarithm. log2 - Base 2 logarithm and dissect floating point number. pow2 - Base 2 power and scale floating point number. realpow - Power that will error out on complex result. reallog - Natural logarithm of real number. realsqrt - Square root of number greater than or equal to zero. sqrt - Square root. nthroot - Real n-th root of real numbers. nextpow2 - Next higher power of 2. Complex. abs - Absolute value. angle - Phase angle. complex - Construct complex data from real and imaginary parts. conj - Complex conjugate. imag - Complex imaginary part. real - Complex real part. unwrap - Unwrap phase angle. isreal - True for real array. cplxpair - Sort numbers into complex conjugate pairs. Rounding and remainder. fix - Round towards zero. floor - Round towards minus infinity. ceil - Round towards plus infinity. round - Round towards nearest integer. mod - Modulus (signed remainder after division). rem - Remainder after division. sign - Signum.
MATLAB can be used like a calculator - but it's much more. It's also a programming language, with all of the basic components of any such language.
The first and most basic of these components is one that we use all the time in math - the variable. Like in math, variables are generally denoted symbolically by individual characters (like "a" or "x") or by strings of characters (like "var1" or "new_value").
In class we've distinguished between variables and parameters - but denoted both of these by characters. MATLAB doesn't make this distinction - any numerical quantity given a symbolic "name" is a variable.
How do we assign a value to a variable? Easy - just use the equality sign. For example
a = 3
a = 3
sets the value 3 to the variable a. As another example
b = 2
b = 2
sets the value 2 to the variable b. We can carry out mathematical operations with these variables: e.g.
a+b

ans = 5

a*b

ans = 6

a^b

ans = 9

exp(a)

ans = 20.0855
Although operation of setting a value to a variable looks like an algebraic equality like we use all the time in math, in fact it's something quite different. The statement
a = 3
should not be interpreted as "a is equal to 3". It should be interpreted as "take the value 3 and assign it to the variable a". This difference in interpretation has important consequences. In algebra, we can write a = 3 or 3 = a -- these are equivalent. The = symbol in MATLAB is not symmetric - the command
a = b
should be interpreted as "take the value of b and assign it to the variable a" - there's a single directionality. And so, for example, we can type
a = 3
a = 3
with no problem, if we type
we get an error message. The value 3 is fixed - we can't assign another number to it. It is what it is.
Another consequence of the way that the = operator works is that a statement like
a = a+1
makes perfect sense. In algebra, this would imply that 0 = 1, which is of course nonsense. In MATLAB, it means "take the value that a has, add one to it, then assign that value to a". This changes the value of a, but that's allowed. For example, type:
a = 3
a = a+1
a = 3
a = 4
First a is assigned the value 3, then (by adding one) it becomes 4.
There are some built in variables; one of the most useful is pi:
pi

ans = 3.1416
We can also assign the output of a mathematical operation to a new variable: e.g.
b = a*exp(a)
b = 218.3926
If you want MATLAB to just assign the value of a calculation to a variable without telling you the answer right away, all you have to do is put a semicolon after the calculation:
b = a*exp(a);
Being able to use variables is very convenient, particularly when you're doing a multi-step calculation with the same quantity and want to be able to change the value. For example:
a = 1; b = 3*a; c = a*b^2; d = c*b-a;
d

d = 26
Now say I want to do the same calculation with a = 3; all I need to do is make one change
a = 3; b = 3*a; c = a*b^2; d = c*b-a;
How does this make things any easier? Well, it didn't really here - we still had to type out the equations for b, c, and d all over again. But we'll see that in a stand-alone computer program it's very useful to be able to do this.
In fact, the sequence of operations above is an example of a computer program. Operations are carried out in a particular order, with the results of earlier computations being fed into later ones.
It is very important to understand this sequential structure of programming. In a program, things happen in a very particular order: the order you tell them to have. It's very important to make sure you get this order right. This is pretty straightforward in the above example, but can be much more complicated in more complicated programs.
Any time a variable is created, it's kept in memory until you purposefully get rid of it (or quit the program). This can be useful - you can always use the variable again later. It can also make things harder - for example, in a long program you may try using a variable name that you've already used for another variable earlier in the program, leading to confusion.
It can therefore be useful to sometimes make MATLAB forget about a variable; for this the "clear" command is used. For example, define
b = 3;
Now if we ask what b is, we'll get back that it's 3
b

b = 3
Using the clear command to remove b from memory:

clear b
now if we ask about b
b

we get the error message that it's not a variable in memory - we've succeeded in getting rid of it. To get rid of everything in memory, just type

clear all
An important idea in programming is that of an array (or matrix). This is just an ordered sequence of numbers (known as elements): e.g.
M = [1, 22, -0.4]
is a 3-element array in which the first element is 1, the second element is 22, and the third element is -0.4. These are ordered - in this particular array, these numbers always occur in this sequence - but this doesn't mean that there's any particular structure or ordering in general. That is - in an array, the numbers don't have to increase or decrease or anything like that. The elements can be in any order - but that order partly defines the array. Also note that the numbers can be integers or rational numbers, positive or negative.
While the elements of the array can be any kind of number, their positions are identified by integers: there is a first, a second, a third, a fourth, etc. up until the end of the array. It's standard to indicate the position of the array using bracket notation: in the above example, the first element is
M(1) = 1
the second element is
M(2) = 22
and the third element is
M(3) = -0.4.
These integers counting off position in the array are known as "indices" (singular "index").
All programming languages use arrays, but MATLAB is designed to make them particularly easy to work with (the MAT is for "matrix"). To make the array above in MATLAB all you need to do is type
M = [1 22 -0.4]
M = 1.0000 22.0000 -0.4000
Then to look at individual elements of the array, just ask for them by index number:
M(1)

ans = 1

M(2)

ans = 22

M(3)

ans = -0.4000
We can also ask for certain ranges of an array, using the "colon" operator. For an array M we can ask for element i through element j by typing M(i:j). For example:

M(1:2)

ans = 1 22

M(2:3)

ans = 22.0000 -0.4000
If we want all elements of the array, we can type the colon on its own
M(:)

ans = 1.0000 22.0000 -0.4000
We can also use this notation to make arrays with a particular structure. Typing
M = a:b:c
makes an array that starts with first element M(1) = a and increases with increment b:

M(2) = a+b
M(3) = a+2b
M(4) = a+3b
The array stops at the largest value of N for which M(N) <= c.
M = 1:1:3
M = 1 2 3
The array starts with 1, increases by 1, and ends at 3
M = 1:.5:3
M = 1.0000 1.5000 2.0000 2.5000 3.0000
The array starts at 1, increases by 0.5, and ends at 3
M = 1:.6:3
M = 1.0000 1.6000 2.2000 2.8000
Here the array starts at 1, increases by 0.6, and ends at 2.8 - because making one more step in the array would make the last element bigger than 3.
M = 3:-.5:1
M = 3.0000 2.5000 2.0000 1.5000 1.0000
This kind of array can also be decreasing.
If the increment size b isn't specified, a default value of 1 is used:
M = 1:5
M = 1 2 3 4 5
That is, the array a:c is the same as the array a:1:c
It is important to note that while the elements of an array can be any kind of number, the indices must be positive integers (1 and bigger). Trying a non-positive or fractional index (e.g., M(0) or M(2.5)) will result in an error message:
Each of the elements of an array is a variable on its own, which can be used in a mathematical operation. E.g.:
ans = 4
ans = 6
The array itself is also a kind of variable - an array variable. You need to be careful with arithmetic operations (addition, subtraction, multiplication, division, exponentiation) when it comes to arrays - these things can be defined, but they have to be defined correctly. We'll look at this later.
In MATLAB, when most functions are fed an array as an argument they give back an array of the function acting on each element. That is, for the function f and the array M, g=f(M) is an array such that
g(i) = f(M(i)).
a = 0:4; b = exp(a)
b = 1.0000 2.7183 7.3891 20.0855 54.5982
Let's define two arrays of the same size
a = 1:5; b = exp(a); plot(a,b)
and what we get is a plot of the array a versus the array b - in this case, a discrete version of the exponential function exp(x) over the range x=1 to x=5.
We can plot all sorts of things: the program
a = 0:.01:5; b = cos(2*pi*a); plot(a,b)
sets the variable a as a fine discretisation of the range from x=0 to x=5, defines b as the cosine of 2 pi x over that range, and plots a against b - showing us the familiar sinusoidal waves.
We can also do all sorts of things with plots - stretch them vertically and horizontally, flip them upside down, give them titles and label the axes, have multiple subplots in a single plot ... but we'll come to these as we need them.
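For instance, here is a minimal sketch of a titled, labelled, two-panel figure (the layout and title strings are illustrative choices, not part of the original tutorial):

a = 0:.01:5;
subplot(2,1,1)        % top panel of a two-row figure
plot(a, cos(2*pi*a))
title('cos(2 pi x)')  % give the panel a title
subplot(2,1,2)        % bottom panel
plot(a, sin(2*pi*a))
xlabel('x')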
Arithmetic operations (addition, subtraction, multiplication, division) between an array and a scalar (a single number) are straightforward. If we add an array and a scalar, every element in the array is added to that scalar: the ith element of the sum of the array M and the scalar a is M(i)+a.
M = [1 3 -.5 7]; M2 = M+1
M2 = 2.0000 4.0000 0.5000 8.0000
Similarly, we can subtract, multiply by, and divide by a scalar.
M3 = 3*M
M3 = 3.0000 9.0000 -1.5000 21.0000
M4 = M/10
M4 = 0.1000 0.3000 -0.0500 0.7000
It's even possible to add, subtract, multiply and divide arrays with other arrays - but we have to be careful doing this. In particular, we can only do these things between arrays of the same size: that is, we can't add a 5-element array to a 10-element array.
If the arrays are the same size, these arithmetic operations are straightforward. For example, the sum of the N-element array a and the N-element array b is an N-element array c whose ith element is
c(i) = a(i)+b(i)
a = [1 2 3]; b = [2 -1 4]; c = a+b; c
c = 3 1 7
That is, addition is element-wise. It's just the same with subtraction.
d = a-b; d
d = -1 3 -1
With multiplication we use a somewhat different notation. Mathematics defines a special kind of multiplication between arrays - matrix multiplication - which is not what we're doing here. However, it's what MATLAB thinks you're doing if you use the * sign between arrays. To multiply arrays element-wise (like with addition), we need to use the .* notation (note the "." before the "*"):
e = a.*b; e
e = 2 -2 12
Similarly, to divide, we don't use /, but rather ./
f = a./b; f
f = 0.5000 -2.0000 0.7500
(once again, note the dot). As we'll see over and over again, it's very useful to be able to carry out arithmetic operations between arrays.
For example, say we want to make a plot of x versus 1/x between x = 2 and x = 4. Then we can type in the program
x = 2:.1:4; y = 1./x; plot(x,y) xlabel('x'); ylabel('y');
Note how we put the labels on the axes - using the commands xlabel and ylabel, with the arguments 'x' and 'y'. Because the arguments are character strings - not numbers - they need to be in single quotes. The axis labels can be more complicated, e.g.
xlabel('x (between 1 and 5)') ylabel('y = 1/x')
We haven't talked yet about how to exponentiate an array. To take the array M to the power b element-wise, we type M.^b Note again the "." before the "^" in the exponentiation. As an example
x = [1 2 3 4]; y = x.^2
y = 1 4 9 16
As another example, we can redo the earlier program:
x = 2:.1:4; y = x.^(-1); plot(x,y) xlabel('x'); ylabel('y');
Note that we put the "-1" in brackets - this makes sure that the minus sign associated with making the exponent negative is applied before the "^" of the exponentiation. In this case, we don't have to do this - but when programming it doesn't hurt to be as specific as possible.
These are the basic tools that we'll need to use MATLAB. Subsequent tutorials will cover other aspects of writing a program - but what we've talked about above forms the core. Everything that follows will build upon the material in this tutorial.
The following exercises will use the tools we've learned above and are designed to get you thinking about programming.
In writing your programs, you'll need to be very careful to think through:
(1) what is the goal of the program (what do I need it to do?)
(2) what do I need to tell MATLAB?
(3) what order do I need to tell it in?
It might be useful to sketch the program out first, before typing anything into MATLAB. It can even be useful to write the program out on paper first and walk through it step by step, seeing if it will do what you think it should.
Plot the following functions (a worked sketch for part (a) is given after the list):
(a) y = 3x+2 with x = 0, 0.25, 0.5, ...., 7.75, 8
(b) y = exp(-x^2) with x = 0, 0.1, 0.2, ..., 2
(c) y = ln(exp(x^-1)) with x = 1, 1.5, 2, ..., 4
(d) y = (ln(exp(x)))^-1 with x = 1, 1.5, 2, ..., 4
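As a worked sketch for part (a), using only the tools introduced above (one possible solution, not the only one):

x = 0:0.25:8;    % x = 0, 0.25, 0.5, ..., 8
y = 3*x + 2;     % multiplying an array by a scalar, then adding a scalar
plot(x,y)
xlabel('x'); ylabel('y = 3x+2');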
A mountain range has a tectonic uplift rate of 1 mm/yr and erosional timescale of 1 million years. If the mountain range starts with a height h(0) = 0 at time t = 0, write a program that predicts and plots the height h(t) at t=0, t=1 million years, t=2 million years, t=3 million years, t=4 million years, and t=5 million years (neglecting isostatic effects). Label the axes of this plot, including units.
Repeat Exercise 2 in the case that the erosional timescale is 500,000 years.
Repeat Exercise 3 in the case that the tectonic uplift rate is 2 mm/yr. | http://web.uvic.ca/~monahana/eos225/matlab_tutorial/tutorial_1/introduction_to_matlab.html | 13 |
11 | Backlash (engineering)
In mechanical engineering, backlash is the striking back of connected wheels in a piece of mechanism when pressure is applied. Another source defines it as the maximum distance through which one part of something can be moved without moving a connected part. In the context of gears backlash, sometimes called lash or play, is clearance between mating components, or the amount of lost motion due to clearance or slackness when movement is reversed and contact is re-established. For example, in a pair of gears backlash is the amount of clearance between mated gear teeth.
Theoretically, the backlash should be zero, but in actual practice some backlash must be allowed to prevent jamming. It is unavoidable for nearly all reversing mechanical couplings, although its effects can be negated. Depending on the application it may or may not be desirable. Reasons for requiring backlash include allowing for lubrication, manufacturing errors, deflection under load and thermal expansion.
Factors affecting the amount of backlash required in a gear train include errors in profile, pitch, tooth thickness, helix angle and center distance, and runout. The greater the accuracy, the smaller the backlash needed. Backlash is most commonly created by cutting the teeth deeper into the gears than the ideal depth. Another way of introducing backlash is by increasing the center distance between the gears.
Backlash due to tooth thickness changes is typically measured along the pitch circle and is defined by:

b_t = t − t_a

where b_t is the backlash due to tooth thickness modifications, t is the tooth thickness on the pitch circle for ideal gearing (no backlash), and t_a is the actual tooth thickness.
Backlash, measured on the pitch circle, due to operating center modifications is defined by:

b_c = 2 Δc tan(φ)

where b_c is the backlash due to operating center distance modifications, Δc is the difference between actual and ideal operating center distances, and φ is the pressure angle.
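A quick numeric check of this relation (a sketch in MATLAB; the 0.05 mm center-distance change is an arbitrary example value):

dC  = 0.05;          % extra center distance, mm (assumed example)
phi = 20*pi/180;     % 20 degree pressure angle, in radians
b_c = 2*dC*tan(phi)  % backlash, roughly 0.036 mm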
Standard practice is to make allowance for half the backlash in the tooth thickness of each gear. However, if the pinion (the smaller of the two gears) is significantly smaller than the gear it is meshing with then it is common practice to account for all of the backlash in the larger gear. This maintains as much strength as possible in the pinion's teeth. The amount of additional material removed when making the gears depends on the pressure angle of the teeth. For a 14.5° pressure angle the extra distance the cutting tool is moved in equals the amount of backlash desired. For a 20° pressure angle the distance equals 0.73 times the amount of backlash desired.
In a gear train, backlash is cumulative. When a gear train is reversed, the driving gear is turned a short distance, equal to the total of all the backlashes, before the final driven gear begins to rotate. At low power outputs, backlash results in inaccurate positioning due to the small errors introduced at each change of direction; at large power outputs, backlash sends shocks through the whole system and can damage teeth and other components.
Anti-backlash designs
In certain applications, backlash is an undesirable characteristic and should be minimized.
Gear trains where positioning is key but power transmission is light
The best example here is an analog radio tuning dial where one may make precise tuning movements both forwards and backwards. Specialized gear designs allow this. One of the more common designs splits the gear into two gears, each half the thickness of the original. One half of the gear is fixed to its shaft while the other half of the gear is allowed to turn on the shaft, but pre-loaded in rotation by small coil springs that rotate the free gear relative to the fixed gear. In this way, the spring tension rotates the free gear until all of the backlash in the system has been taken out; the teeth of the fixed gear press against one side of the teeth of the pinion while the teeth of the free gear press against the other side of the teeth on the pinion. Loads smaller than the force of the springs do not compress the springs and with no gaps between the teeth to be taken up, backlash is eliminated.
Leadscrews where positioning and power are both important
Another area where backlash matters is in leadscrews. Again, as with the gear train example, the culprit is lost motion when reversing a mechanism that is supposed to transmit motion accurately. Instead of gear teeth, the context is screw threads. The linear sliding axes (machine slides) of machine tools are an example application.
Most machine slides for many decades, and many even today, were simple-but-accurate cast iron linear bearing surfaces, such as a dovetail slide or box slide, with an Acme leadscrew drive. With just a simple nut, some backlash is inevitable. On manual (non-CNC) machine tools, the way that machinists compensate for the effect of backlash is to approach all precise positions using the same direction of travel. This means that if they have been dialing left, and now they want to move to a rightward point, they move rightward all the way past it and then dial leftward back to it. The setups, tool approaches, and toolpaths are designed around this constraint.
The next step up from the simple nut is a split nut, whose halves can be adjusted and locked with screws so that one side rides leftward thread faces, and the other side rides rightward faces. Notice the analogy here with the radio dial example using split gears, where the split halves are pushed in opposing directions. Unlike in the radio dial example, the spring tension idea is not useful here, because machine tools taking a cut put too much force against the screw. Any spring light enough to allow slide movement at all would allow cutter chatter at best and slide movement at worst. These screw-adjusted split-nut-on-an-Acme-leadscrew designs cannot eliminate all backlash on a machine slide unless they are adjusted so tight that the travel starts to bind. Therefore this idea can't totally obviate the always-approach-from-the-same-direction concept; but backlash can be held to a small amount (1 or 2 thousandths of an inch), which is more convenient and in some non-precise work is enough to allow one to ignore the backlash (i.e., act as if there weren't any).
CNCs can be programmed to use the always-approach-from-the-same-direction concept, but that is not the normal way they are used today, because hydraulic anti-backlash split nuts and newer forms of leadscrew other than Acme/trapezoidal, such as recirculating ball screws or duplex worm gear sets, effectively eliminate the backlash. The axis can move in either direction without the go-past-and-come-back motion.
The simplest CNCs, such as microlathes or manual-to-CNC conversions, use just the simple old nut-and-Acme-screw drive. The controls can be programmed with a parameter value entered for the total backlash on each axis, and the machine will automatically add that much to the program's distance-to-go when it changes directions. This [programmatic] "backlash compensation", as it's called, is a useful trick for capital-frugal applications. "Professional-grade" CNCs, though, use the more expensive backlash-eliminating drives mentioned above. This allows them to do 3D contouring with a ball-nosed endmill, for example, where the endmill travels around in many directions with ease and constant rigidity.
Some motion controllers include backlash compensation. Compensation may be achieved by simply adding extra compensating motion (as described earlier) or by sensing the load's position in a closed loop control scheme. The dynamic response of backlash itself, essentially a delay, makes the position loop less stable and prone to oscillation.
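The "extra compensating motion" idea can be sketched as follows (a deliberately simplified illustration in MATLAB; the function and variable names are hypothetical, not any real controller API, and a real controller would also track the accumulated motor/load offset between moves):

function [cmd, new_dir] = backlash_comp(target, prev_target, prev_dir, lash)
% Fold the measured per-axis lash into the commanded move whenever
% the direction of travel reverses (state handling kept minimal here).
new_dir = sign(target - prev_target);  % +1, -1, or 0 (no move)
if new_dir == 0
    new_dir = prev_dir;                % no move: keep previous direction
end
cmd = target;
if new_dir ~= prev_dir                 % reversal: take up the lash first
    cmd = target + new_dir*lash;
end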
Minimum backlash
Minimum backlash is the minimum transverse backlash at the operating pitch circle allowable when the gear tooth with the greatest allowable functional tooth thickness is in mesh with the pinion tooth having its greatest allowable functional tooth thickness, at the tightest allowable center distance, under static conditions.
Backlash variation is the difference between the maximum and minimum backlash occurring in a whole revolution of the larger of a pair of mating gears.
Gear couplings use backlash to allow for angular misalignment.
Backlash is undesirable in precision positioning applications such as machine tool tables. It can be minimized by tighter design features such as ball screws instead of leadscrews, and by using preloaded bearings. A preloaded bearing uses a spring or other compressive force to maintain bearing surfaces in contact despite reversal of direction.
There can be significant backlash in unsynchronized transmissions because of the intentional gap between dog gears (also known as dog clutches). The gap is necessary so that the driver or electronics can engage the gears easily while synchronizing the engine speed with the driveshaft speed. If the clearance were small, it would be nearly impossible to engage the gears, because the teeth would interfere with each other in most configurations. In synchronized transmissions, synchromesh solves this problem.
| http://en.wikipedia.org/wiki/Backlash_(engineering) | 13
13 | The Big Bang
The Big Bang theory states that the universe arose from a singularity of virtually no size, which gave rise to the dimensions of space and time, in addition to all matter and energy. At the beginning of the Big Bang, the four fundamental forces began to separate from each other. Early in its history (10^-36 to 10^-32 seconds), the universe underwent a period of short, but dramatic, hyper-inflationary expansion. The cause of this inflation is unknown, but was required for life to be possible in the universe.
Quarks and antiquarks combined to annihilate each other. Originally, the ratio of quarks to antiquarks was expected to be exactly one, since neither would be expected to have been produced in preference to the other. If the ratio were exactly one, the universe would have consisted solely of energy - not very conducive to the existence of life. However, recent research showed that the charge–parity violation could have resulted naturally given the three known masses of the quark families.1 Still, this just pushes the fine tuning a level down, to ask why the quarks display the masses they have. Those masses must be fine tuned in order to achieve a universe that contains any matter at all.
Large, just right-sized universe
Even so, the universe is enormous compared to the size of our Solar System. Isn't the immense size of the universe evidence that humans are really insignificant, contradicting the idea that a God concerned with humanity created the universe? It turns out that the universe could not have been much smaller than it is in order for nuclear fusion to have occurred during the first 3 minutes after the Big Bang. Without this brief period of nucleosynthesis, the early universe would have consisted entirely of hydrogen.2 Likewise, the universe could not have been much larger than it is, or life would not have been possible. If the universe were just one part in 10^59 larger,3 the universe would have collapsed before life was possible. Since there are only 10^80 baryons in the universe, this means that an addition of just 10^21 baryons (about the mass of a grain of sand) would have made life impossible. The universe is exactly the size it must be for life to exist at all.
Early evolution of the universe
Cosmologists assume that the universe could have evolved in any of a number of ways, and that the process is entirely random. Based upon this assumption, nearly all possible universes would consist solely of thermal radiation (no matter). Of the tiny subset of universes that would contain matter, a small subset would be similar to ours. A very small subset of those would have originated through inflationary conditions. Therefore, universes that are conducive to life "are almost always created by fluctuations into the[se] 'miraculous' states," according to atheist cosmologist Dr. L. Dyson.4
Just right laws of physics
The laws of physics must have values very close to those observed or the universe does not work "well enough" to support life. What happens when we vary the constants? The strong nuclear force (which holds atomic nuclei together) has a value such that when two hydrogen atoms fuse, 0.7% of the mass is converted into energy. If the value were 0.6%, then a proton could not bond to a neutron, and the universe would consist only of hydrogen. If the value were 0.8%, then fusion would happen so readily that no hydrogen would have survived from the Big Bang. Other constants must be fine-tuned to an even more stringent degree. The cosmic microwave background varies by one part in 100,000. If this factor were slightly smaller, the universe would exist only as a collection of diffuse gas, since no stars or galaxies could ever form. If this factor were slightly larger, the universe would consist solely of large black holes. Likewise, the ratio of electrons to protons cannot vary by more than 1 part in 10^37 or else electromagnetic interactions would prevent chemical reactions. In addition, if the ratio of the electromagnetic force constant to the gravitational constant were greater by more than 1 part in 10^40, then electromagnetism would dominate gravity, preventing the formation of stars and galaxies. If the expansion rate of the universe were 1 part in 10^55 less than what it is, then the universe would have already collapsed. The most recently discovered physical law, the cosmological constant or dark energy, is the closest to zero of all the physical constants. In fact, a change of only 1 part in 10^120 would completely negate the effect.
Universal probability bounds
"Unlikely things happen all the time." This is the mantra of the anti-design movement. However, there is an absolute physical limit for improbable events to happen in our universe. The universe contains only 1080 baryons and has only been around for 13.7 billion years (1018 sec). Since the smallest unit of time is Planck time (10-45 sec),5 the lowest probability event that can ever happen in the history of the universe is:
1080 x 1018 x 1045 =10143
So, although it would be possible that one or two constants might require unusual fine-tuning by chance, it would be virtually impossible that all of them would require such fine-tuning. Some physicists have indicated that any of a number of different physical laws would be compatible with our present universe. However, it is not just the current state of the universe that must be compatible with the physical laws. Even more stringent are the initial conditions of the universe, since even minor deviations would have completely disrupted the process. For example, adding a grain of sand to the weight of the universe now would have no effect. However, adding even this small amount of weight at the beginning of the universe would have resulted in its collapse early in its history.
What do cosmologists say?
Even though many atheists would like to dismiss such evidence of design, cosmologists know better, and have made statements such as the following, which reveal the depth of the problem for the atheistic worldview:
*"This type of universe, however, seems to require a degree of fine-tuning of the initial conditions that is in apparent conflict with 'common wisdom'."6
*"Polarization is predicted. It's been detected and it's in line with theoretical predictions. We're stuck with this preposterous universe."7
*"In all of these worlds statistically miraculous (but not impossible) events would be necessary to assemble and preserve the fragile nuclei that would ordinarily be destroyed by the higher temperatures. However, although each of the corresponding histories is extremely unlikely, there are so many more of them than those that evolve without "miracles," that they would vastly dominate the livable universes that would be created by Poincare recurrences. We are forced to conclude that in a recurrent world like de Sitter space our universe would be extraordinarily unlikely."8 | http://reasonablekansans.blogspot.com/2010/08/fine-tuning-of-universe.html | 13 |
A line is the shortest path between two points; it is straight, infinitely long and infinitely thin. In the coordinate plane, the location of a line is defined by known points through which the line passes, extending infinitely far in both directions. A line has no endpoints.
In elementary geometry it is important to distinguish between a line and a line segment: a line segment includes its two endpoints, while a line has no endpoints.
Given two distinct lines 'A' and 'B', either the two lines intersect each other or they are parallel. Two intersecting lines have a unique common point, the point of intersection, which lies on both lines.
In the case of parallel lines there are no common points. Any point on a line divides it into two parts, each of which is known as a ray. A ray has exactly one endpoint. Rays are used in defining angles.
A line, unlike a segment, has no ends. The line is a basic design tool in geometry: it suggests a direction through which a path can be found easily. If a drawn line is not straight, it is usually known as a curve or arc.
In plane geometry, 'line' indicates a straight line: an object which is straight and infinitely long. A geometric line is one-dimensional; its width is always zero. A line drawn with a pencil appears to have width only because the pencil mark has a measurable width.
According to a basic theorem of geometry, if two distinct points lie in a plane then there is exactly one line that passes through them. So, to write the equation of a line we need two points. The equation of a line can be written as
y = mx + c
'x' and 'y' are the coordinates of a point on the line.
'm' is the slope.
'c' is the y-intercept.
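As a small illustration of how two points determine the line, here is a sketch in MATLAB (matching the tutorial earlier in this collection; the two points are arbitrary example choices):

% line through the points (1,2) and (4,8)
x1 = 1; y1 = 2;
x2 = 4; y2 = 8;
m = (y2 - y1)/(x2 - x1);  % slope, here m = 2
c = y1 - m*x1;            % y-intercept, here c = 0
x = 0:0.1:5;
plot(x, m*x + c)          % the line y = mx + c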
A line or straight line can be used to represent the basic dimensions of an object, such as its height, width and length. When an object is placed in two-dimensional space, straight lines are used to measure its height and length. When an object is placed in three-dimensional space, its height, length and width are calculated using st…
We come across various lines in our daily life, for example while lining up for assembly sessions at school or while standing in a queue at the ticket counter. Lines can be of the following types: straight…
A line is a set of infinitely many points which join together to form the line. A line extends in both directions, and so we say that it has no fixed length. When we say that two given lines are perpendicular lines, it …
A line is a set of points which extends endlessly in both directions. We say that a line has no fixed length, and so it cannot be drawn in full on a plane. We say that a pair of lines is parallel if they are at equal dis…
Parallel planes are two plates or planes that do not intersect. If there are two planes 'A' and 'B' which are parallel to each other, we write A || B; and if there are three planes and two planes are parallel to … | http://www.tutorcircle.com/line-t3Ijp.html | 13
170 | Area And Perimeter Powerpoint PPT
Therefore, Family A has the pool with the bigger swimming area. The perimeter of Family A's pool is 12 units long. ...
Area and Perimeter By Christine Berg Edited by V T Hamilton Perimeter The sum of the lengths of all sides of the figure Area The number of square units inside a figure Calculating Area Area Abbreviations: A = Area b = base h = height Rectangle To find the area multiply the length of the base ...
Finding Area and Perimeter of Polygons Area Definition: The number of square units needed to cover a surface. (INSIDE) Length x width Perimeter Definition: The sum of the length of all the sides of a polygon.
The distance around the outside of a shape is called the perimeter. 8 cm 6 cm 8 cm 6 cm The perimeter of the shape is 8 + 6 + 8 + 6 = 28cm. First we need to find the length of each side by counting the squares.
Perimeter The perimeter of a closed plane figure is the length of its boundary. 10cm 8cm 7cm 8cm 15cm Perimeter = 15 + 8 + 10 + 8 + 7 = 48cm Perimeter Rectangle Area Area is the space within the boundary of a figure.
Jeopardy Perimeter & Area Perimeter Triangles Circles Toss Up Q $100 Q $100 Q $100 Q $100 Q$600 Q $200 Q $200 Q $200 Q $200 Q $600 Q $300 Q $300 Q $300 Q $300 Q $600
Finding the Perimeter Take a walk around the edge! 6cm 10 cm The perimeter is… 32cm ! 6 16 22 32 Take a walk around the edge! 8cm 10 cm The perimeter is… 26 cm ! 8cm 8 16 26 Take a walk around the edge!
Area of a Rectangle www.mathxtc.com This is one in a series of Powerpoint slideshows to illustrate how to calculate the area of various 2-D shapes.
1.7 Notes Perimeter, Circumference, Area. These formulas are found on p. 49 of your textbook. †Ask Dr. Math for a discussion on "square units" vs. "units squared."
Formulas for Geometry Mr. Ryan Don't Get Scared!!! ... x 5 = Area 4 x 5 = 20 If the radius is 5, then the diameter is 10 Radius 5 Area=3.14 x (5 x 5) Perimeter = 3.14 x 10 ...
AREA OF A TRIANGLE You probably already know the formula for the area of a triangle. Area is one-half the base times the height. ...
Surface Area What does it mean to ... Prism SA You can find the SA of any prism by using the basic formula for SA which is 2B + LSA= SA LSA= lateral Surface area LSA= perimeter of the base x height of the prism B = the base of ...
The Area and Perimeter of a Circle The Area and Perimeter of a Circle Diameter Radius centre What is the formula relating the circumference to the diameter?
One of the great areas of confusion for students in the measurement strand is Area and Perimeter, in fact it sometimes seems that there is another term out there, “Arimeter”.
Area and Perimeter Math 7 Area and Perimeter There are two measurements that tell you about the size of a polygon. The number of squares needed to cover the polygon is the area of the shape.
Area, Perimeter and Volume Section 3-4-3 Perimeter Perimeter is measuring around the outside of something. Perimeter requires the addition of all sides of the shape.
Area and . Perimeter . Triangles, Parallelograms, Rhombus, and Trapezoids. Mr. Miller. Geometry. Chapter 10 . Sections 1 and 2
The Area and Perimeter of a Circle The Area and Perimeter of a Circle The Area and Perimeter of a Circle A circle is defined by its diameter or radius Diameter radius The perimeter or circumference of a circle is the distance around the outside The area of a circle is the space inside it The ...
The perimeter of a triangle is the measure around the triangle = a + b + c To find the area of a triangle: The height = the ...
Understanding Area and Perimeter Amy Boesen CS255 Perimeter Perimeter is simply the distance around an object. ...
Area and Perimeter Perimeter and Area Perimeter Find the Area of these shapes Doing your Work! Plenary How can we find the perimeter of this shape ? Now try these ...
Area and perimeter of irregular shapes 19yd 30yd 37yd 23 yd 7yd 18yd What is the perimeter of this irregular shape? To find the perimeter, you first need to make sure you have all of the information you need.
Area is the amount of square units needed to cover the face (or flat side) of a figure When the area is found, it is reported in square units.
... Circumference as length Calculate the Surface Area of a Gear Head Motor 2.00" Dia. 1.55" 3.850" 1.367" Perimeter and Area of Basic Shapes b h s P = s1 + s2 + s3 A = ½ bh s s ...
... 56" 0.190" dia. typ. 0.5" 1.00" 45 deg. 0.71" 1.41" 0.190" typ. Calculate the Perimeter of this Component Perimeter Worksheet Perimeter and Area of Basic Shapes b h s P = s1 + s2 + s3 A = ½ ...
Squares Perimeter = 4l The area of a square is given as: ...
Area Formulas Rectangle What is the area formula? ... Answers ... Trapezoid ... Practice! Answers ...
Effect of Change The effects on perimeter, area, and volume when dimensions are changed proportionally. * * Perimeter of a rectangle How would the perimeter change if the dimensions of the rectangle are doubled? 7 ft. 4 ft. 14 ft. 8 ft.
8cm 4cm P= _____ 2L + 2W P= (2 x 8cm) + (2 x 4cm) = 24 cm P= S+S+S+S 16cm+8cm = Practice Find the Area and Perimeter of the following rectangle: 10cm ...
Perimeter Area Applications Objectives By: April Rodriguez, Betty Williams, ... Now, what should we do? You did say area, right? Remember, area is the number of square units, or units2, needed to cover the surface.
... 3.14 x 6 = 18.84 x 10 = 188.4 SA = 244.92 2B + LSA = SA Rectangular Prism A B C 7 6 in 9 2B + LSA = SA Area of Base x 2 = LSA = perimeter x Height = Total SA = Triangular Prism 8 17 22 m 15 2B + LSA = SA Area of Base x 2 ...
Geometry: Perimeter and Area Lesson 6-8 Find the perimeter and area of figures Perimeter The distance around a figure. One way to find the perimeter of a figure is to add the length of all the sides. When finding the perimeter of a rectangle we use a common formula.
Perimeter & Area of Rectangles, Squares ... PowerPoint Presentation Area of a Rectangle Area of a Rectangle Area of a Rectangle Area of a Rectangle PowerPoint Presentation Area of a Square Area of a Square Take Out Your Learning Targets LT #8 Perimeter of a Rectangle & Square PowerPoint ...
Definition Circumference is the distance around a circle or the perimeter. Formula = Pi x diameter Area is the measure of the amount of surface enclosed by the boundary of a figure. ... Definition from Connected Mathematics.
Section 6.1: Perimeter & Area of Rectangles & Parallelograms Perimeter – the distance around the OUTSIDE of a figure Area – the number of square units INSIDE a figure Finding the Perimeter of Rectangles and Parallelograms Find the perimeter of each figure.
Ruggedized Unattended Sensor Systems for Perimeter or Area Security Key Features & Benefits: Expands Physical Security & Surveillance Footprint
Irregular shapes perimeter Objective 0506.4.2 Decompose irregular shapes to find perimeter and area. Remember!!! Perimeter all sides added together Review of perimeter http://www.jogtheweb.com/run/deYhohv5NJMJ/Area-and-Perimeter#1 Find the perimeter Find the perimeter What is an irregular shape?
Area & Perimeter of Quadrilaterals & Parallelogram Perimeter Add up all the sides Quadrilateral has 4 sides Add them up Or use P = 2L + 2W Perimeter of a square is P = 4s Ex 1 Ex 2 Ex 3 Ex 4 Ex 5 Area A = L * W Rectangle A = S2 Square A = B * h Parallelogram Note: base and height will ...
Estimate Perimeter, Circumference, and Area When estimating perimeter of shapes on a grid, use the length of one grid square to approximate the length of each side of the figure.
Lesson 8 Perimeter and Area Perimeter The perimeter of a closed figure is the distance around the outside of the figure. In the case of a polygon, the perimeter is found by adding the lengths of all of its sides.
Area of a circle Area examples ... PowerPoint Presentation Author: Project 2002 Last modified by: Bernie Lafferty Created Date: 1/28/2002 2:58:41 PM Document presentation format: On-screen Show Company: Glasgow City Council
Perimeter, area and volume Aim To introduce approaches to working out perimeter, area and volume of 2D and 3D shapes. ... PowerPoint Presentation Author: David King Last modified by: ruthc Created Date: 9/22/2003 11:25:06 AM Document presentation format:
Inicios in Mathematics NCCEP - UT Austin More Algebra with some geometry Geometry (Cabri Jr) and Navigator Activities Using both Activity Center and Screen Capture Area Invariance for Triangles, Parallelism and More (A Dynamic Geometry Interpretation of 1/2 b * h) More Area and Perimeter ...
Sec. 1-9 Perimeter, Circumference, and Area Objective: Find the perimeters and areas of rectangles, squares, & the circumferences of circles.
... the sum of the areas of all of its surfaces Formulas for PRISMS LA = Ph Lateral Area = Perimeter of Base height of prism SA = Ph + 2B Surface Area = Perimeter of Base height of prism + 2 Area ... PowerPoint Presentation Author: cee13931 Last modified by: Griesemer, Sarah Created ... | http://freepdfdb.com/ppt/area-and-perimeter-powerpoint | 13 |
19 | SOURCES OF GAMMA RAYS
Brighter colors in the Cygus region indicate greater numbers of gamma rays detected by the Fermi gamma-ray space telescope. Credit: NASA/DOE/International LAT Team
Gamma rays have the smallest wavelengths and the most energy of any wave in the electromagnetic spectrum. They are produced by the hottest and most energetic objects in the universe, such as neutron stars and pulsars, supernova explosions, and regions around black holes. On Earth, gamma waves are generated by nuclear explosions, lightning, and the less dramatic activity of radioactive decay.
DETECTING GAMMA RAYS
Unlike optical light and x-rays, gamma rays cannot be captured and reflected by mirrors. Gamma-ray wavelengths are so short that they can pass through the space within the atoms of a detector. Gamma-ray detectors typically contain densely packed crystal blocks. As gamma rays pass through, they collide with electrons in the crystal. This process is called Compton scattering, wherein a gamma ray strikes an electron and loses energy, similar to what happens when a cue ball strikes an eight ball. These collisions create charged particles that can be detected by the sensor.
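For reference, the wavelength shift in Compton scattering follows a standard relation (this formula is textbook physics, not taken from the article): a photon scattering off an electron at angle θ emerges with wavelength

```latex
\lambda' - \lambda = \frac{h}{m_e c}\,\bigl(1 - \cos\theta\bigr)
```

where h is Planck's constant, m_e is the electron mass and c is the speed of light. The larger the scattering angle, the more energy the gamma ray gives up to the electron, which is what makes the collisions detectable.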
GAMMA RAY BURSTS
Gamma-ray bursts are the most energetic and luminous electromagnetic events since the Big Bang and can release more energy in 10 seconds than our Sun will emit in its entire 10-billion-year expected lifetime! Gamma-ray astronomy presents unique opportunities to explore these exotic objects. By exploring the universe at these high energies, scientists can search for new physics, testing theories and performing experiments that are not possible in Earth-bound laboratories.
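As a rough check on that comparison (the solar figures below are standard reference values, not from the article): the Sun's luminosity is about 3.8 × 10²⁶ W, so over a 10-billion-year lifetime it radiates roughly

```latex
E_\odot \approx 3.8 \times 10^{26}\,\mathrm{W} \times \bigl(10^{10}\,\mathrm{yr} \times 3.15 \times 10^{7}\,\mathrm{s/yr}\bigr) \approx 1.2 \times 10^{44}\,\mathrm{J}
```

which is indeed comparable to the energy scales attributed to gamma-ray bursts.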
If we could see gamma rays, the night sky would look strange and unfamiliar. The familiar view of constantly shining constellations would be replaced by ever-changing bursts of high-energy gamma radiation that last fractions of a second to minutes, popping like cosmic flashbulbs, momentarily dominating the gamma-ray sky and then fading.
NASA's Swift satellite recorded the gamma-ray blast caused by a black hole being born 12.8 billion light years away (below). This object is among the most distant objects ever detected.
Credit: NASA/Swift/Stefan Immler, et al.
COMPOSITION OF PLANETS
Scientists can use gamma rays to determine the elements on other planets. The Mercury Surface, Space Environment, Geochemistry, and Ranging (MESSENGER) Gamma-Ray Spectrometer (GRS) can measure gamma rays emitted by the nuclei of atoms on planet Mercury's surface that are struck by cosmic rays. When struck by cosmic rays, chemical elements in soils and rocks emit uniquely identifiable signatures of energy in the form of gamma rays. These data can help scientists look for geologically important elements such as hydrogen, magnesium, silicon, oxygen, iron, titanium, sodium, and calcium.
The gamma-ray spectrometer on NASA's Mars Odyssey Orbiter detects and maps these signatures, such as this map (below) showing hydrogen concentrations of Martian surface soils.
Credit: NASA/Goddard Space Flight Center Scientific Visualization Studio
GAMMA RAY SKY
Gamma rays also stream from stars, supernovas, pulsars, and black hole accretion disks to wash our sky with gamma-ray light. These gamma-ray streams were imaged using NASA's Fermi gamma-ray space telescope to map out the Milky Way galaxy by creating a full 360-degree view of the galaxy from our perspective here on Earth.
Credit: NASA/DOE/International LAT Team
A FULL-SPECTRUM IMAGE
The composite image below of the Cas A supernova remnant shows the full spectrum in one image. Gamma rays from Fermi are shown in magenta; x-rays from the Chandra Observatory are blue and green. The visible light data captured by the Hubble space telescope are displayed in yellow. Infrared data from the Spitzer space telescope are shown in red; and radio data from the Very Large Array are displayed in orange.
Credit: NASA/DOE/Fermi LAT Collaboration, CXC/SAO/JPL-Caltech/Steward/O. Krause et al., and NRAO/AUI
National Aeronautics and Space Administration, Science Mission Directorate. (2010). Gamma Rays. Retrieved , from Mission:Science website:
Science Mission Directorate. "Gamma Rays" Mission:Science. 2010. National Aeronautics and Space Administration. | http://missionscience.nasa.gov/ems/12_gammarays.html | 13 |
11 | Mercury's prime meridian, or 0° longitude, crosses through the left side of this image. The prime meridian was defined as the longitude where the Sun was directly overhead as Mercury passed through its first perihelion in the year 1950. The area was first seen by a spacecraft during MESSENGER's second Mercury flyby and is located to the northwest of the impact crater Derain. The image here has been placed into a map projection with north to the top. The original image was binned on the spacecraft from its original 1024 x 1024 pixel size to 512 x 512. This type of image compression helps to reduce the amount of data that must be downlinked across interplanetary space from MESSENGER to the Deep Space Network on Earth.
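The on-board binning mentioned above (1024 x 1024 down to 512 x 512) amounts to collapsing each 2 x 2 block of pixels into one. A minimal sketch of that idea, assuming the blocks are averaged (whether the spacecraft averages or sums the pixel values is a detail assumed here):

```python
import numpy as np

def bin_image(image: np.ndarray, factor: int = 2) -> np.ndarray:
    """Downsample an image by averaging each factor x factor block of pixels."""
    h, w = image.shape
    # Split each axis into (blocks, pixels-per-block), then average within blocks.
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

frame = np.zeros((1024, 1024))
print(bin_image(frame).shape)  # (512, 512) -- one quarter of the original data volume
```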
On March 17, 2011 (March 18, 2011, UTC), MESSENGER became the first spacecraft ever to orbit the planet Mercury. The mission is currently in its commissioning phase, during which spacecraft and instrument performance are verified through a series of specially designed checkout activities. In the course of the one-year primary mission, the spacecraft's seven scientific instruments and radio science investigation will unravel the history and evolution of the Solar System's innermost planet. Visit the Why Mercury? section of this website to learn more about the science questions that the MESSENGER mission has set out to answer.
Image Mission Elapsed Time (MET): 209937428
Image ID: 67124
Instrument: Wide Angle Camera (WAC) of the Mercury Dual Imaging System (MDIS)
WAC filter: 7 (748 nanometers)
Center Latitude: 5.9°
Center Longitude: 4.6° E
Resolution: 1253 meters/pixel
Scale: The horizontal width of scene is about 875 kilometers (550 miles)
These images are from MESSENGER, a NASA Discovery mission to conduct the first orbital study of the innermost planet, Mercury. For information regarding the use of images, see the MESSENGER image use policy. | http://photojournal.jpl.nasa.gov/catalog/PIA14197 | 13 |
A major advantage of positional numeral systems over other systems of writing down numbers is that they facilitate the usual grade-school method of long multiplication: multiply the first number with every digit of the second number and then add up all the properly shifted results. In order to perform this algorithm, one needs to know the products of all possible digits, which is why multiplication tables have to be memorized. Humans use this algorithm in base 10, while computers employ the same algorithm in base 2. The algorithm is a lot simpler in base 2, since the multiplication table has only 4 entries. Rather than first computing the products, and then adding them all together in a second phase, computers add the products to the result as soon as they are computed. Modern chips implement this algorithm for 32-bit or 64-bit numbers in hardware or in microcode. To multiply two numbers with n digits using this method, one needs about n² operations. More formally: the time complexity of multiplying two n-digit numbers using long multiplication is Θ(n²).
An old method for multiplication, that doesn't require multiplication tables, is the Peasant multiplication algorithm; this is actually a method of multiplication using base 2.
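A minimal sketch of the peasant method: halve one factor, double the other, and add the doubled value to a running total whenever the halved value is odd. Written over the binary representation, this is exactly long multiplication in base 2:

```python
def peasant_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers by repeated halving and doubling."""
    result = 0
    while a > 0:
        if a & 1:          # low bit of a is set: this shifted copy of b contributes
            result += b
        a >>= 1            # halve a (discard its low bit)
        b <<= 1            # double b
    return result

print(peasant_multiply(238, 13))  # 3094
```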
For systems that need to multiply huge numbers in the range of several thousand digits, such as computer algebra systems and bignum libraries, this algorithm is too slow. These systems employ Karatsuba multiplication, which was discovered in 1962 and proceeds as follows: suppose you work in base 10 (unlike most computer implementations) and want to multiply two n-digit numbers x and y, and assume n = 2m is even (if not, add zeros at the left end). We can write

x = x1 · 10^m + x2 and y = y1 · 10^m + y2

with m-digit numbers x1, x2, y1 and y2. The product is then

xy = x1y1 · 10^(2m) + (x1y2 + x2y1) · 10^m + x2y2,

and the key observation is that the middle coefficient can be computed as (x1 + x2)(y1 + y2) - x1y1 - x2y2, so only three multiplications of m-digit numbers are needed instead of four.
If T(n) denotes the time it takes to multiply two n-digit numbers with Karatsuba's method, then we can write

T(n) = 3 T(n/2) + cn + d

for some constants c and d, since each multiplication of two n-digit numbers requires three multiplications of half-size numbers plus some additions and shifts. Solving this recurrence gives T(n) = Θ(n^(log₂ 3)) ≈ Θ(n^1.585).
It is possible to experimentally verify whether a given system uses Karatsuba's method or long multiplication: take your favorite two 100,000 digit numbers, multiply them and measure the time it takes. Then take your favorite two 200,000 digit numbers and measure the time it takes to multiply those. If Karatsuba's method is being used, the second time will be about three times as long as the first; if long multiplication is being used, it will be about four times as long.
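A sketch of Karatsuba's method for non-negative integers, splitting on bit length (base 2) rather than the base-10 presentation above; the cutoff of 16 for falling back to built-in multiplication is an arbitrary choice:

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers using three half-size multiplications."""
    if x < 16 or y < 16:                   # small inputs: multiply directly
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    x1, x2 = x >> m, x & ((1 << m) - 1)    # x = x1 * 2^m + x2
    y1, y2 = y >> m, y & ((1 << m) - 1)    # y = y1 * 2^m + y2
    a = karatsuba(x1, y1)
    c = karatsuba(x2, y2)
    b = karatsuba(x1 + x2, y1 + y2) - a - c    # middle term from one extra product
    return (a << (2 * m)) + (b << m) + c

print(karatsuba(123456789, 987654321))  # 121932631112635269
```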
Another method of multiplication is called Toom-Cook, or Toom-3.
There exist even faster algorithms, based on the fast Fourier transform. The idea, due to Strassen (1968), is the following: multiplying two numbers represented as digit strings is virtually the same as computing the convolution of those two digit strings. Instead of computing a convolution, one can instead first compute the discrete Fourier transforms, multiply them entry by entry, and then compute the inverse Fourier transform of the result. (See convolution theorem.) The fastest known method based on this idea was described in 1972 by Schönhage/Strassen and has a time complexity of Θ(n ln(n) ln(ln(n))). These approaches are not used in computer algebra systems and bignum libraries because they are difficult to implement and don't provide speed benefits for the sizes of numbers typically encountered in those systems. The GIMPS distributed Internet prime search project deals with numbers having several million digits and employs a Fourier transform based multiplication algorithm. Using number-theoretic transforms instead of discrete Fourier transforms should avoid any rounding error problems by using modular arithmetic instead of complex numbers.
All the above multiplication algorithms can also be used to multiply polynomials.
A simple improvement to the basic recursive multiplication algorithm:
This may not help so much for multiplication by real or complex values, but is useful for multiplication of very large integers which are supported in some programming languages such as Haskell, Ruby, and Common Lisp. | http://www.fact-index.com/m/mu/multiplication_algorithm.html | 13 |
11 | Article Summary: These are all good tips for developing a plan of attack in math problem solving. If you use these 20 tips as a basis for developing your own problem-solving technique, you will be successful. Most students use the tips described above for a few problems, and then adapt them to fit their style of learning and problem solving.
Solving problems, especially word problems, is always a challenge. To become a good problem solver you need to have a plan or method which is easy to follow to determine what needs to be solved. Then the plan is carried out to solve the problem. The key is to have a plan which works in any math problem solving situation. For students having problems with problem solving, the following 20 tips are provided for helping children become good problem solvers.
Tip 1: When given a problem to solve look for clues to determine what math operation is needed to solve the problem, for example addition, subtraction, etc.
Tip 2: Read the problem carefully as you look for clues and important information. Write down the clues, underline, or highlight the clues.
Tip 3: Look for key words like sum, difference, product, perimeter, area, etc. They will lead you to what operation you need to use. Rewrite the problem if necessary.
Tip 4: Look for what you need to find out, for example: how many will you have left, the total will be, everyone gets red, everyone gets one of each, etc. They will also lead you to the type of operation needed to solve the problem.
Tip 5: Use variable symbols, such as "X" for missing information.
Tip 6: Eliminate all non-essential information by drawing a line through this distracting information.
Tip 7: Addition problems use words like sum, total, in all, and perimeter.
Tip 8: Subtraction problems use words like difference, how much more, and exceeds.
Tip 9: Multiplication problems use words like product, total, area, and times.
Tip 10: Division problems use words like share, distribute, quotient, and average.
Tip 11: Draw sketches, drawings, and models to see the problem.
Tip 12: Use guess and check techniques to see if you are on the right track.
Tip 13: Ask yourself if you have ever seen a problem like this before, if so how did you solve it.
Tip 14: Use a formula for solving the problem, for example for finding the area of a circle.
Tip 15: Develop a plan based on the information that you have determined to be important to solving the problem.
Tip 16: Carry out the plan using the math operations you determined would find the answer.
Tip 17: See if the answer seems reasonable; if it does, then you are probably on track - if not, then check your work.
Tip 18: Work the problem in reverse or backwards starting with the answer to see if you wind up with your original problem.
Tip 19: Do not forget about units of measure as you work the problem, such as: inches, pounds, ounces, feet, yard, meter, etc. Not using units of measure may result in the wrong answer.
Tip 20: Ask yourself did you answer the problem? Are you sure? How do you know you are sure?
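To see several of these tips working together, here is a short worked example (the problem itself is made up for illustration): a rectangular garden has a perimeter of 24 feet and a length of 8 feet; what is its width? The word "perimeter" points to addition and a known formula (Tips 3, 7, and 14), and the unknown width becomes a variable (Tip 5):

```latex
P = 2l + 2w \;\Rightarrow\; 24 = 2(8) + 2w \;\Rightarrow\; 2w = 24 - 16 = 8 \;\Rightarrow\; w = 4 \text{ feet}
```

Checking the answer (Tips 17 and 18): 2(8) + 2(4) = 24, so the result is reasonable, and the units of measure (Tip 19) come out in feet.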
These are all good tips for developing a plan of attack in math problem solving. If you use these 20 tips as a basis for developing your own problem-solving technique you will be successful. Most students use the tips described above for a few problems, and then adapt them to fit their style of learning and problem solving. This is perfectly fine, because these 20 tips are only meant as a starting point for learning how to solve problems.
One tip that is not mentioned above is that as you develop a strategy for solving math problems, then this strategy will become your strategy for solving problems in other subjects and dealing with life's problems you will encounter as you continue to grow. | http://www.mathworksheetscenter.com/mathtips/goodproblemsolvers.html | 13 |
10 | What Is a Boolean Array?
A Boolean array in computer programming is a sequence of values that can only hold the values of true or false. By definition, a Boolean can only be true or false and is unable to hold any other intermediary value. An array is a sequence of data types that occupy numerical positions in a linear memory space. While the actual implementation of a Boolean array is often left up to the compiler or computer language libraries, it is most efficiently done by using bits instead of complete bytes or words. There are several uses for a Boolean array, including keeping track of property flags and aligning settings for physical hardware interfaces.
The idea of a Boolean array stems from original methods that were used to store information on computers where there was very little available memory. The first implementation of a Boolean array took the form of a bit array. This used larger data types such as bytes or long integers to hold information by setting the bits of the data type to true or false. In this way, a single byte that is eight bits long could hold eight different true or false values, saving space and allowing for efficient bitwise operations.
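A minimal sketch of a bit array built on a single integer, showing how eight or more true/false values pack into the bits of one word; the class and method names here are illustrative, not from any particular library:

```python
class BitArray:
    """Store n boolean flags in the bits of one integer."""

    def __init__(self, n: int):
        self.n = n
        self.bits = 0

    def set(self, i: int, value: bool) -> None:
        if value:
            self.bits |= (1 << i)    # turn bit i on
        else:
            self.bits &= ~(1 << i)   # turn bit i off

    def get(self, i: int) -> bool:
        return bool((self.bits >> i) & 1)

flags = BitArray(8)   # eight flags packed into what is conceptually a single byte
flags.set(3, True)
print(flags.get(3), flags.get(4))  # True False
```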
As the size of computer memory increased, the need to use bit arrays declined. While using bits does offer the possibility for bit shifting and using logical operators that allow incredibly fast processing, it also requires custom code to handle these types of operations. Using a standard array structure to hold a sequence of bytes is a simpler solution, but it takes much more memory during program execution. This can be seen when creating an array of 32 Boolean values. With a bit array, the data will only occupy four bytes of memory, but a Boolean type array might occupy anywhere from 32 to 128 bytes, depending on the system implementation.
Some computer programming languages do actually implement a bit array when a Boolean array type is used, although this is not common. A Boolean array has the advantage of being very easy to read when viewing source code. Comparisons and assignments are presented clearly, whereas with a bit array the logical operators "and", "or" and "not" must be used, potentially creating confusing code.
Despite the ease of use, one feature that cannot be used with a Boolean array is a bitmask. A bitmask is a single byte or larger data type that contains a sequence of true and false values relating to multiple conditions. In a single operation, multiple bits can be checked for their true or false states, all at once. With an integer-based array of Boolean values, the same operation would need to be performed with a loop. | http://www.wisegeek.com/what-is-a-boolean-array.htm | 13 |
11 | This interactive activity for grades 8-12 features eight models that explore atomic arrangements for gases, solids, and liquids. Highlight an atom and view its trajectory to see how the motion differs in each of the three primary phases. As the lesson progresses, students observe and manipulate differences in attractions among atoms in each state and experiment with adding energy to produce state changes. More advanced students can explore models of latent heat and evaporative cooling. See Related Materials for a Teacher's Guide developed specifically to accompany this activity.
This item is part of the Concord Consortium, a nonprofit research and development organization dedicated to transforming education through technology. The Concord Consortium develops deeply digital learning innovations for science, mathematics, and engineering.
6-8: 4D/M1a. All matter is made up of atoms, which are far too small to see directly through a microscope.
6-8: 4D/M1cd. Atoms may link together in well-defined molecules, or may be packed together in crystal patterns. Different arrangements of atoms into groups compose all substances and determine the characteristic properties of substances.
6-8: 4D/M3ab. Atoms and molecules are perpetually in motion. Increased temperature means greater average energy of motion, so most substances expand when heated.
6-8: 4D/M3cd. In solids, the atoms or molecules are closely locked in position and can only vibrate. In liquids, they have higher energy, are more loosely connected, and can slide past one another; some molecules may get enough energy to escape into a gas. In gases, the atoms or molecules have still more energy and are free of one another except during occasional collisions.
4E. Energy Transformations
6-8: 4E/M3. Thermal energy is transferred through a material by the collisions of atoms within the material. Over time, the thermal energy tends to spread out through a material and from one material to another if they are in contact. Thermal energy can also be transferred by means of currents in air, water, or other fluids. In addition, some thermal energy in all materials is transformed into light energy and radiated into the environment by electromagnetic waves; that light energy can be transformed back into thermal energy when the electromagnetic waves strike another material. As a result, a material tends to cool down unless some other form of energy is converted to thermal energy in the material.
9-12: 4E/H9. Many forms of energy can be considered to be either kinetic energy, which is the energy of motion, or potential energy, which depends on the separation between mutually attracting or repelling objects.
11. Common Themes
6-8: 11B/M1. Models are often used to think about processes that happen too slowly, too quickly, or on too small a scale to observe directly. They are also used for processes that are too vast, too complex, or too dangerous to study.
6-8: 11B/M4. Simulations are often useful in modeling events and processes.
Common Core State Standards for Mathematics Alignments
Standards for Mathematical Practice (K-12)
MP.4 Model with mathematics.
Define, evaluate, and compare functions. (8)
8.F.2 Compare properties of two functions each represented in a different way (algebraically, graphically, numerically in tables, or by verbal descriptions).
Use functions to model relationships between quantities. (8)
8.F.5 Describe qualitatively the functional relationship between two quantities by analyzing a graph (e.g., where the function is increasing or decreasing, linear or nonlinear). Sketch a graph that exhibits the qualitative features of a function that has been described verbally.
High School — Functions (9-12)
Interpreting Functions (9-12)
F-IF.5 Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes.
Disclaimer: ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the Citation Source Information area for clarifications. | http://www.compadre.org/portal/items/detail.cfm?ID=11191 | 13 |
19 | In this example, a rational function with one vertical asymptote greater than zero is graphed.
Review of graphs of rational functions:
A rational function can be written as the ratio of two functions, f(x) and g(x). Rational functions are usually written in this form:
y = f(x)/g(x)
Since division by zero is undefined, then with all rational functions written in the form shown above, the function g(x) cannot equal zero. In fact, the graphs of rational functions have a characteristic shape around the values of x where g(x) cannot equal zero.
The graphs of rational functions have vertical asymptotes, based on the values of x where g(x) equals zero. Each vertical asymptote is a boundary line at such a value of x: the graph of the rational function approaches this line but never reaches it.
In this set of Math Tutorials we analyze the rational functions, identify the vertical asymptotes, and then graph the function. You should use a graphing calculator to graph the rational function and identify the asymptotes.
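As a quick worked example in the same spirit (the particular function is chosen here for illustration and is not necessarily the one graphed in the original figure): for

```latex
y = \frac{x + 1}{x - 2}
```

the denominator equals zero when x - 2 = 0, that is, at x = 2. The graph therefore has a single vertical asymptote at x = 2, a value greater than zero; the curve approaches the line x = 2 from both sides but never reaches it.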
Learn More About Math Tutorials
The library of Math Tutorials is a comprehensive collection of worked-out solutions to common math problems. This overcomes a common limitation of most textbooks: the handful of worked-out examples for a given concept. We provide the full array of examples and solutions, allowing students to identify patterns among the solutions, in order to aid concept retention. We also have quizzes for many of these topics.
Our current inventory of Math Examples includes:
- Math Tutorial: Examples Using Algebra Tiles
- Math Tutorial: Solving Equations in One Variable
- Math Tutorial: Solving Equations with Fractions
- Math Tutorial: Solving Equations with Percents
- Math Tutorial: Slope formula
- Math Tutorial: Midpoint formula
- Math Tutorial: Distance formula
- Math Tutorial: Graphing linear functions, given m and b
- Math Tutorial: Graphing absolute value functions
- Math Tutorial: Graphing linear inequalities
- Math Tutorial: Using the Point-Slope form
- Math Tutorial: Finding the equation of a line given two points
- Math Tutorial: Graphing parallel and perpendicular lines
- Math Tutorial: Solving quadratics graphically
- Math Tutorial: Solving quadratics by completing the square
- Math Tutorial: Factoring Quadratics
- Math Tutorial: Polynomial Expansion
- Math Tutorial: Solving quadratics using the quadratic formula
- Math Tutorial: Using FOIL
- Math Tutorial: Graphs of Exponential Functions
- Math Tutorial: Laws of Exponents
- Math Tutorial: Graphs of Logarithmic Functions
- Math Tutorial: Graphs of Rational Functions
- Math Tutorial: Combining Rational Expressions
- Math Tutorial: Graphs of Conic Sections | http://media4math.com/Examples/GraphingRationalFunctions/GraphingRationalFunctions8.html | 13 |
17 | July 21, 2004 Using ESA’s Integral and XMM-Newton observatories, an international team of astronomers has found more evidence that massive black holes are surrounded by a doughnut-shaped gas cloud, called a torus. Depending on our line of sight, the torus can block the view of the black hole in the centre. The team looked `edge on’ into this doughnut to see features never before revealed in such a clarity.
Black holes are objects so compact and with gravity so strong that not even light can escape from them. Scientists think that `supermassive’ black holes are located in the cores of most galaxies, including our Milky Way galaxy. They can contain the mass of thousands of millions of suns, confined within a region no larger than our Solar System. They appear to be surrounded by a hot, thin disk of accreting gas and, farther out, the thick doughnut-shaped torus. Depending on the inclination of the torus, it can hide the black hole and the hot accretion disc from the line of sight. Galaxies in which a torus blocks the light from the central accretion disc are called `Seyfert 2’ types and are usually faint to optical telescopes. Another theory, however, is that these galaxies appear rather faint because the central black hole is not actively accreting gas and the disc surrounding it is therefore faint. An international team of astronomers led by Dr Volker Beckmann, Goddard Space Flight Center (Greenbelt, USA) has studied one of the nearest objects of this type, a spiral galaxy called NGC 4388, located 65 million light years away in the constellation Virgo. Since NGC 4388 is relatively close, and therefore unusually bright for its class, it is easier to study.
Astronomers often study black holes that are aligned face-on, thus avoiding the enshrouding torus. However, Beckmann's group took the path less trodden and studied the central black hole by peering through the torus. With XMM-Newton and Integral, they could detect some of the X-rays and gamma rays, emitted by the accretion disc, which partially penetrate the torus. "By peering right into the torus, we see the black hole phenomenon in a whole new light, or lack of light, as the case may be here," Beckmann said.
Beckmann's group saw how different processes around a black hole produce light at different wavelengths. For example, some of the gamma rays produced close to the black hole get absorbed by iron atoms in the torus and are re-emitted at a lower energy. This in fact is how the scientists knew they were seeing `reprocessed’ light farther out. Also, because of the line of sight towards NGC 4388, they knew this iron was from a torus on the same plane as the accretion disk, and not from gas clouds `above’ or `below’ the accretion disk.
This new view through the haze has provided valuable insight into the relationship between the black hole, its accretion disc and the doughnut, and supports the torus model in several ways.
Gas in the accretion disc close to the black hole reaches high speeds and temperatures (over 100 million degrees, hotter than the Sun) as it races toward the void. The gas radiates predominantly at high energies, in the X-ray wavelengths.
According to Beckmann, this light is able to escape the black hole because it is still outside of its border, but ultimately collides with matter in the torus. Some of it is absorbed; some of it is reflected at different wavelengths, like sunlight penetrating a cloud; and the very energetic gamma rays pierce through. "This torus is not as dense as a real doughnut or a true German Krapfen, but it is far hotter - up to a thousand degrees - and loaded with many more calories," Beckmann said.
The new observations also pinpoint the origin of the high-energy emission from NGC 4388. While the lower-energy X-rays seen by XMM-Newton appear to come from a diffuse emission, far away from the black hole, the higher-energy X-rays detected by Integral are directly related to the black hole activity.
The team could infer the doughnut’s structure and its distance from the black hole by virtue of light that was either reflected or completely absorbed. The torus itself appears to be several hundred light years from the black hole, although the observation could not gauge its diameter from inside to outside.
The result marks the clearest observation of an obscured black hole in X-ray and gamma-ray `colours’, a span of energy nearly a million times wider than the window of visible light, from red to violet. Multi-wavelength studies are increasingly important to understanding black holes, as already demonstrated earlier this year. In May 2004, the European project known as the Astrophysical Virtual Observatory, in which ESA plays a major role, found 30 supermassive black holes that had previously escaped detection behind masking dust clouds.
This result will appear in The Astrophysical Journal. Besides Volker Beckmann, the author list includes Neil Gehrels, Pascal Favre, Roland Walter, Thierry Courvoisier, Pierre-Olivier Petrucci and Julien Malzac.
For more information about the Astrophysical Virtual Observatory programme and how it has allowed European scientists to discover a number of previously hidden black holes, see:
More about Integral
The International Gamma Ray Astrophysics Laboratory (Integral) is the first space observatory that can simultaneously observe celestial objects in gamma rays, X-rays and visible light. Integral was launched on a Russian Proton rocket on 17 October 2002 into a highly elliptical orbit around Earth. Its principal targets include regions of the galaxy where chemical elements are being produced and compact objects, such as black holes.
More information on Integral can be found at:
More about XMM-Newton
XMM-Newton can detect more X-ray sources than any previous observatory and is helping to solve many cosmic mysteries of the violent Universe, from black holes to the formation of galaxies.
It was launched on 10 December 1999, using an Ariane-5 rocket from French Guiana. It is expected to return data for a decade. XMM-Newton’s high-tech design uses over 170 wafer-thin cylindrical mirrors spread over three telescopes.
Its orbit takes it almost a third of the way to the Moon, so that astronomers can enjoy long, uninterrupted views of celestial objects.
More information on XMM-Newton can be found at: | http://www.sciencedaily.com/releases/2004/07/040721085717.htm | 13 |
23 | Record-Breaking Dark Matter Web Structure Observed Spanning 270 Million Light Years
It is well documented that dark matter makes up the majority of the mass in our universe. The big problem comes when trying to prove dark matter really is out there. It is dark, and therefore cannot be seen. Dark matter may come in many shapes and sizes (from the massive black hole to the tiny neutrino), but regardless of size, no light is emitted and therefore it cannot be observed directly. Astronomers have many tricks up their sleeves and are now able to indirectly observe massive black holes (by observing the gravitational, or lensing, effect on light passing by). Now, large-scale structures have been observed by analyzing how light from distant galaxies changes as it passes through the cosmic web of dark matter hundreds of millions of light years across…
Dark matter is believed to hold over 80% of the Universe’s total mass, leaving the remaining 20% for “normal” matter we know, understand and observe. Although we can observe billions of stars throughout space, this is only the tip of the iceberg for the total cosmic mass.
Using the influence of gravity on space-time as a tool, astronomers have observed halos of distant stars and galaxies, as their light is bent around invisible, but massive objects (such as black holes) between us and the distant light sources. Gravitational lensing has most famously been observed in the Hubble Space Telescope (HST) images where arcs of light from young and distant galaxies are warped around older galaxies in the foreground. This technique now has a use when indirectly observing the large-scale structure of dark matter intertwining its way between galaxies and clusters.
Astronomers from the University of British Columbia (UBC) in Canada have observed the largest structures ever seen of a web of dark matter stretching 270 million light years across, or 2000 times the size of the Milky Way. If we could see the web in the night sky, it would be eight times the area of the Moon's disk.
This impressive observation was made possible by using the gravity of dark matter to signal its presence. A method similar to HST gravitational lensing is employed: called "weak gravitational lensing", it takes a portion of the sky and plots the distortion of the observed light from each distant galaxy. The results are then mapped to build a picture of the dark matter structure between us and the galaxies.
The team uses the Canada-France-Hawaii-Telescope (CFHT) for the observations and their technique has been developed over the last few years. The CFHT is a non-profit project that runs a 3.6 meter telescope on top of Mauna Kia in Hawaii.
Understanding the structure of dark matter as it stretches across the cosmos is essential for us to understand how the Universe was formed, how dark matter influences stars and galaxies, and will help us determine how the Universe will develop in the future.
“This new knowledge is crucial for us to understand the history and evolution of the cosmos [...] Such a tool will also enable us to glimpse a little more of the nature of dark matter.” – Ludovic Van Waerbeke, Assistant Professor, Department of Physics and Astronomy, UBC
Source: UBC Press Release | http://www.universetoday.com/12939/record-breaking-dark-matter-web-structure-observed-spanning-270-million-light-years/ | 13 |
12 | Solving Multi-Step Linear Equations
In this lesson, we are going to look at a few worked examples while putting emphasis on the key steps in solving multi-step equations. You will have the opportunity to practice on your own by trying some problems and compare your answers to the solutions provided. If you just want to practice and skip the lesson itself, go ahead, and click the button below.
To solve multi-step equations, you will still need the techniques you learned in solving one-step and two-step equations. This type of equation requires additional steps in order to solve for the value of the unknown variable. Usually the variable involved is x, but that is not always the case. It could be any letter, such as m, n, h or z.
The main goal in solving multi-step equations is to keep the unknown variable on one side of the equal symbol while keeping the constant or pure number on the opposite side. More importantly, there is no rule where to keep the variable. It all depends on your preference. The "standard" way is to have it on the left side, but there are cases when it is convenient to leave it on the right side of the equation.
Finally, since we are dealing with equations, we need to keep in mind that whatever we do on one side must be applied to the other side to keep everything balanced. For instance, adding 5 on the left should force you to add 5 on the right side. To get rid of numbers in the process of solving equations, ALWAYS remember the idea of opposite operations because they are used to cancel or move around numbers.
Key steps to remember
1) Eliminate parenthesis by applying the Distributive Property
2) Simplify both sides of the equation by Combining Like Terms. In other words, combine similar variables and constants together.
3) Decide where you want to keep the variable; that helps you decide where to keep the constants (opposite side where the variable is located).
4) Cancel out numbers by applying opposite operations: addition and subtraction are opposite operations as in the case of multiplication and division.
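Putting the four key steps together on a sample equation (the equation is made up for illustration), solve 2(x + 3) = 4x - 6:

```latex
\begin{aligned}
2(x + 3) &= 4x - 6 && \text{Step 1: distribute to remove the parenthesis} \\
2x + 6 &= 4x - 6 && \text{Step 2: no like terms to combine on either side} \\
6 + 6 &= 4x - 2x && \text{Step 3: keep the variable on the right, where } 4x \text{ is larger} \\
12 &= 2x && \text{Step 4: cancel with opposite operations} \\
6 &= x
\end{aligned}
```

Substituting back, 2(6 + 3) = 18 and 4(6) - 6 = 18, so both sides balance.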
Now it's time to take a look at some examples!
Example 1: Solve the multi-step equation
This is a typical problem in multi-step equations where there are variables on both sides. Notice that there are no parenthesis in this equation and nothing to combine like terms in either both sides of the equation. Clearly, our first step is to decide where to keep or isolate the unknown variable x. Since 7x is "larger" than 2x, then we might as well keep it on the left side.
This means we have to get rid of the 2x on the right side. To do that, we need to subtract 2x from both sides, because the opposite of +2x is -2x.
After simplifying by subtracting 2x from both sides, we have...
It's nice to see just the variable x on the left side. This implies that we have to move all the constants to the right side and that +3 on the left must be removed. The opposite of +3 is -3; therefore, we will subtract 3 from both sides.
After subtracting 3 from both sides, we get...
The last step is to isolate variable x by itself on the left side of the equation. Since +5 is multiplying x, then its opposite operation is to divide by +5. So, we are going to divide both sides by 5 and then we are done! | http://chilimath.com/algebra/intermediate/linear_equations/revisited/linear_multistep1.html | 13 |
11 | How to Find the Centroid of a Triangle
The three medians of a triangle intersect at its centroid. The centroid is the triangle’s balance point, or center of gravity. (In other words, if you made the triangle out of cardboard, and put its centroid on your finger, it would balance.) On each median, the distance from the vertex to the centroid is twice as long as the distance from the centroid to the midpoint of the side opposite the vertex. That means that the centroid is exactly 1/3 of the way from the midpoint of the side to the vertex of the triangle. Take a look at the following figure.
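In coordinates, this 2:1 division of each median is equivalent to a simple formula (a standard result, stated here for reference): the centroid of a triangle with vertices (x₁, y₁), (x₂, y₂) and (x₃, y₃) is the average of the vertices,

```latex
G = \left(\frac{x_1 + x_2 + x_3}{3},\; \frac{y_1 + y_2 + y_3}{3}\right)
```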
If you’re from Missouri (the Show-Me State), you might want to actually see how a triangle balances on its centroid. Cut a triangle of any shape out of a fairly stiff piece of cardboard. Carefully find the midpoints of two of the sides, and then draw the two medians to those midpoints. The centroid is where these medians cross. (You can draw in the third median if you like, but you don’t need it to find the centroid.) Now, using something with a small, flat top such as an unsharpened pencil, the triangle will balance if you place the centroid right in the center of the pencil’s tip. | http://www.dummies.com/how-to/content/how-to-find-the-centroid-of-a-triangle.html | 13 |
13 | According to our best models for the formation of the Solar System, comets and meteorites bombarded the planets in their early days. This bombardment brought water to Earth, Mars, and the other terrestrial worlds. But Mercury is an airless body and is much closer to the Sun, so its daytime surface temperature reaches 426°C. That's hot enough to make sure that any exposed surface water would have evaporated long ago, along with other volatile substances.
However, a new set of measurements from the MESSENGER probe has revealed the probable presence of water ice in shadowed craters near Mercury's poles. Using laser and radar reflection along with measurements of neutron emissions, MESSENGER scientists found patches of reflective material that alternate with much darker regions than the average planet surface. This data suggests the presence of both ice and complex organic molecules, both of them probably left over from when the Solar System was young.
The presence of water on Mercury had been suspected since as early as 1992, due to Earth-based radar reflection experiments. These measurements were ambiguous enough that more direct observation was desirable. Enter MESSENGER: the MErcury Surface Space ENvironment, GEochemistry, and Ranging probe, which has been orbiting Mercury since 2011. MESSENGER carries a variety of instruments for measuring surface features and magnetic fields, along with more ordinary cameras for imaging.
Among those devices are radar and laser range-finders, which (in addition to registering distance) measure the reflective properties of the surface. It's also got a neutron detector. Neutrons are emitted by radioactive materials on Mercury, including those that are made radioactive by cosmic rays. Measuring the neutron spectrum (numbers and energy) helps determine the chemical composition of surface materials, particularly the hydrogen content.
The results from both the reflection and neutron analyses were consistent: several craters in Mercury's polar regions provide sufficient shadow for stable water ice. The large craters named Prokofiev and Kandinsky were both found to contain significant radar-bright (RB) patches, indicating highly reflective materials. (Craters on Mercury are commonly named for famous artists, authors, composers, and the like. As a fan of both Prokofiev and Kandinsky, I approve.)
The size of the reflective patches matched the total proportion of each crater that lies in permanent shadow. Unlike Earth, Mercury has almost no axial tilt, so it doesn't experience seasons. This leaves many deep craters in the polar regions untouched by sunlight and means that if they're shadowed now, they will generally remain that way.
The radar reflection study found that some of Mercury's ice is in the form of frozen ponds. Other ice—still detectable through scattered light and neutron emission—is covered in dark, highly nonreflective material up to 20cm deep. The researchers determined this dark layer contains far less hydrogen than should be present if it were a water-saturated material. Complex organic—meaning carbon-containing—molecules are both dark in color and relatively common components of asteroids, meteorites, and comets.
The clear discovery of water ice on Mercury meshes nicely with similar finds on the planetoid/asteroid Vesta and the Moon. The abundance of both water and organic materials is consistent with models of the early Solar System, in which bombardment by comets and meteorites deposited both types of molecules onto the terrestrial planets and moons. | http://arstechnica.com/science/2012/11/craters-on-mercury-are-oases-for-water-ice-organic-molecules/ | 13 |
33 | Compounding Functions and Graphing Functions of Functions
- 0:06 Functions
- 0:58 Composite Functions
- 4:01 Domain and Range of Composite Functions
- 7:06 Lesson Summary
We know that functions map numbers to other numbers, so what happens when you have a function of a function? Welcome to functions within functions, the realm of composite functions!
Recall that functions are like a black box; they map numbers to other numbers. If y is a function of x, then we write it as y=f(x). And for this function, we have an input, x, and an output, y. So x is our independent variable, and y is our dependent variable. Our input will be anywhere within the domain of the function, and our output will be anywhere within the range of the function. So perhaps it's not too much of a stretch to know that you can combine functions into a big function.
In math, this is known as a composition of functions. Here you start with x, and you use it as input to a function, y=f(x). And you're going to put that as input into a second function, g. So if we have a function y=f(x), and we want to plug it into z=g(y), we can end up with z=g(f(x)). This is a composite function.
When you're looking at composite functions, there are two main points to keep in mind. First, you need to evaluate the function from the inside out. You need to figure out what f(x) is before you figure out what g is. Say we have the function f(x)=3x, and we have another function g(x)= 4 + x. I'm going to find z when x=2. We're going to find f(x) when x=2 for f(2)= 3 * 2, which is 6. Saying g(f(2)) is like saying g(6). We do the same thing and say g(6) = 4 + 6. Well, that's 10, so z is just 10.
The second thing to keep in mind is that g(f(x)) does not equal f(g(x)). There are some cases where it can, but in general, it does not. So if we use f(x)=3x and g(x)=x + 4, then let's look at the case where x=0. Then g(f(0)), where f(0) is 0 * 3 - well that's just zero, so I'm looking at g(0). I plug zero in for x here, and it's just 4. Now, if I look at f(g(0)), that's like saying f(4), and that gives me 12. f(g(0))=12, and g(f(0))=4. Those are not the same. So, g(f(x)) does not equal f(g(x)).
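The two numerical examples above can be reproduced directly in code; a minimal sketch (the function names simply mirror the lesson's f and g):

```python
def f(x):
    return 3 * x      # f(x) = 3x

def g(x):
    return x + 4      # g(x) = x + 4

# Evaluate from the inside out:
print(g(f(2)))  # f(2) = 6, then g(6) = 10
print(g(f(0)))  # f(0) = 0, then g(0) = 4
print(f(g(0)))  # g(0) = 4, then f(4) = 12 -- not equal to g(f(0))
```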
Domain and Range of Composite Functions
What happens to the domain and range of a composite function? Well, if we have the function g(x), we have some domain and some range for g(x). Separately we've got a domain for f(x) and a range for g(x). If I write f(g(x)), then the output of g(x), which is the range, has to be somewhere in the domain of f(x). Otherwise, we could get a number here that f(x) really doesn't know what to do with. What does all this really mean? Consider the function f(x)=sin(x).
The domain of sin(x) is going to be all of x, and the range is going to be between -1 and 1. Now let's look at the function g(x) equals the absolute value of x, or g(x)=abs(x). Again the domain is all of x, and the range is everything greater than or equal to 0. If I take those two - here's my range of sin(x) - what happens to g(f(x))? So g is the absolute value, so I'll have abs(sin(x)).
What's the domain and range of that composite function? If I'm graphing g(f(x)), I'm graphing the absolute value of sin(x), so the graph looks like this. I have a range here that goes from 0 to 1 and a domain that covers all of x. Well, this makes sense. What if I look at f(g(x)), so the function is going to be sine of the absolute value of x, sin(abs(x)).
For the absolute value of x, you can take anything as input, so the domain is going to be all values of x, and the range of abs(x) is going to be zero and up, so anything that's a positive number. Now, sine can take anything, so the range of abs(x) is within the domain of sin(x), but what happens to the output? What is the range of this composite function? Let's graph it - is that unexpected? Now the range is in between -1 and 1, which just so happens to be the range of f(x).
To recap, we know functions map numbers to other numbers, like y=f(x). The domain and range tell us the possible values for the input and output of a function.
Composite functions take the output of one function and use it as input for another function, and we write this f(g(x)). We're going to evaluate f(g(x)) from the inside out, so we're going to evaluate g(x) before we evaluate f(x). And we also know that f(g(x)) does not equal g(f(x)).
| http://education-portal.com/academy/lesson/compounding-functions-and-graphing-functions-of-functions.html | 13 |
11 | In calculus, an antiderivative or primitive function of a given real valued function f is a function F whose derivative is equal to f, i.e. F ' = f. The process of finding antiderivatives is antidifferentiation (or indefinite integration).
For example: F(x) = x³ / 3 is an antiderivative of f(x) = x². As the derivative of a constant is zero, x² will have an infinite number of antiderivatives; such as (x³ / 3) + 0 and (x³ / 3) + 7 and (x³ / 3) - 36...thus; the antiderivative family of x² is collectively referred to by F(x) = (x³ / 3) + C; where C is any constant. Essentially, related antiderivatives are vertical translations of each other; each graph's location depending upon the value of C.
Every continuous function f has an antiderivative, and one antiderivative F is given by the integral of f with variable upper boundary: F(x) = ∫ₐˣ f(t) dt, where a is any fixed point in the domain of f.
There are also some non-continuous functions which have an antiderivative, for example f(x) = 2x sin (1/x) - cos(1/x) with f(0) = 0 is not continuous at x = 0 but has the antiderivative F(x) = x² sin(1/x) with F(0) = 0.
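Differentiating F with the product and chain rules confirms this for x ≠ 0:

```latex
\frac{d}{dx}\Bigl[x^2 \sin\tfrac{1}{x}\Bigr]
= 2x \sin\tfrac{1}{x} + x^2 \cos\tfrac{1}{x}\cdot\Bigl(-\tfrac{1}{x^2}\Bigr)
= 2x \sin\tfrac{1}{x} - \cos\tfrac{1}{x}
```

At x = 0 the derivative must instead be taken as the limit of the difference quotient, F'(0) = lim h·sin(1/h) as h → 0, which equals 0 and agrees with f(0) = 0.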
There are many functions whose antiderivatives, even though they exist, cannot be expressed in terms of elementary functions (like polynomials, exponential functions, logarithms, trigonometric functions, inverse trigonometric functions and their combinations). Examples of these are e^(-x²), sin(x)/x and x^x.
Techniques of integration
Finding antiderivatives is considerably harder than finding derivatives. We have various methods at our disposal:
- the linearity of integration allows us to break complicated integrals into simpler ones,
- integration by substitution, often combined with trigonometric identities
- integration by parts to integrate products of functions,
- the inverse chain rule method, a special case of integration by substitution
- the method of partial fractions in integration allows us to integrate all rational functions (fractions of two polynomials),
- the natural logarithm integral condition,
- the Risch algorithm,
- integrals can also be looked up in a table of integrals.
- When integrating multiple times, we can use certain additional techniques; see for instance double integrals and polar co-ordinates, the Jacobian and the Stokes theorem.
- If a function has no elementary antiderivative (for instance, exp(x²)), an area integral can be approximated using numerical integration. | http://www.encyclopedia4u.com/a/antiderivative.html | 13 |
40 | An ice core is a core sample that is typically removed from an ice sheet, most commonly from the polar ice caps of Antarctica, Greenland or from high mountain glaciers elsewhere. As the ice forms from the incremental build up of annual layers of snow, lower layers are older than upper, and an ice core contains ice formed over a range of years. The properties of the ice and the recrystallized inclusions within the ice can then be used to reconstruct a climatic record over the age range of the core, normally through isotopic analysis. This enables the reconstruction of local temperature records and the history of atmospheric composition.
Ice cores contain an abundance of information about climate. Inclusions in the snow of each year remain in the ice, such as wind-blown dust, ash, bubbles of atmospheric gas and radioactive substances. The variety of climatic proxies is greater than in any other natural recorder of climate, such as tree rings or sediment layers. These include (proxies for) temperature, ocean volume, precipitation, chemistry and gas composition of the lower atmosphere, volcanic eruptions, solar variability, sea-surface productivity, desert extent and forest fires.
The length of the record depends on the depth of the ice core and varies from a few years up to 800 kyr (800,000 years) for the EPICA core. The time resolution (i.e. the shortest time period which can be accurately distinguished) depends on the amount of annual snowfall, and reduces with depth as the ice compacts under the weight of layers accumulating on top of it. Upper layers of ice in a core correspond to a single year or sometimes a single season. Deeper into the ice the layers thin and annual layers become indistinguishable.
An ice core from the right site can be used to reconstruct an uninterrupted and detailed climate record extending over hundreds of thousands of years, providing information on a wide variety of aspects of climate at each point in time. It is the simultaneity of these properties recorded in the ice that makes ice cores such a powerful tool in paleoclimate research.
Structure of ice sheets and cores
Ice sheets are formed from snow. Because an ice sheet survives summer, the temperature in that location usually does not warm much above freezing. In many locations in Antarctica the air temperature is always well below the freezing point of water. If the summer temperatures do get above freezing, any ice core record will be severely degraded or completely useless, since meltwater will percolate into the snow.
The surface layer is snow in various forms, with air gaps between snowflakes. As snow continues to accumulate, the buried snow is compressed and forms firn, a grainy material with a texture similar to granulated sugar. Air gaps remain, and some circulation of air continues. As snow accumulates above, the firn continues to densify, and at some point the pores close off and the air is trapped. Because the air continues to circulate until then, the ice age and the age of the gas enclosed are not the same, and may differ by hundreds of years. The gas age–ice age difference is as great as 7 kyr in glacial ice from Vostok.
Under increasing pressure, at some depth the firn is compressed into ice. This depth may range between a few to several tens of meters to typically 100 m for Antarctic cores. Below this level material is frozen in the ice. Ice may appear clear or blue.
Layers can be visually distinguished in firn and in ice to significant depths. In a location on the summit of an ice sheet where there is little flow, accumulation tends to move down and away, creating layers with minimal disturbance. In a location where underlying ice is flowing, deeper layers may have increasingly different characteristics and distortion. Drill cores near bedrock often are challenging to analyze due to distorted flow patterns and composition likely to include materials from the underlying surface.
Characteristics of firn
The layer of porous firn on Antarctic ice sheets is 50–150 m deep. It is much less deep on glaciers.
Air in the atmosphere and firn are slowly exchanged by molecular diffusion through pore spaces, because gases move toward regions of lower concentration. Thermal diffusion causes isotope fractionation in firn when there is rapid temperature variation, creating isotope differences which are captured in bubbles when ice is created at the base of firn. There is gas movement due to diffusion in firn, but not convection except very near the surface.
Below the firn is a zone in which seasonal layers alternately have open and closed porosity. These layers are sealed with respect to diffusion. Gas ages increase rapidly with depth in these layers. Various gases are fractionated while bubbles are trapped where firn is converted to ice.
A core is collected by separating it from the surrounding material. For material which is sufficiently soft, coring may be done with a hollow tube. Deep core drilling into hard ice, and perhaps underlying bedrock, involves using a hollow drill which actively cuts a cylindrical pathway downward around the core.
When a drill is used, the cutting apparatus is on the bottom end of a drill barrel, the tube which surrounds the core as the drill cuts downward around the edge of the cylindrical core. The length of the drill barrel determines the maximum length of a core sample (6 m at GISP2 and Vostok). Collection of a long core record thus requires many cycles of lowering a drill/barrel assembly, cutting a core 4–6 m in length, raising the assembly to the surface, emptying the core barrel, and preparing a drill/barrel for drilling.
Because deep ice is under pressure and can deform, for cores deeper than about 300 m the hole will tend to close if there is nothing to supply back pressure. The hole is filled with a fluid to keep it from closing. The fluid, or mixture of fluids, must simultaneously satisfy criteria for density, low viscosity, frost resistance, as well as workplace safety and environmental compliance. The fluid must also satisfy other criteria, for example those stemming from the analytical methods employed on the ice core. A number of different fluids and fluid combinations have been tried in the past. Since GISP2 (1990–1993) the US Polar Program has utilized a single-component fluid system, n-butyl acetate, but the toxicology, flammability, aggressive solvent nature, and long-term liabilities of n-butyl acetate raise serious questions about its continued application. The European community, including the Russian program, has concentrated on the use of a two-component drilling fluid consisting of a low-density hydrocarbon base (brown kerosene was used at Vostok) boosted to the density of ice by the addition of a halogenated-hydrocarbon densifier. Many of the proven densifier products are now considered too toxic, or are no longer available due to efforts to enforce the Montreal Protocol on ozone-depleting substances. In April 1998 on the Devon Ice Cap, filtered lamp oil was used as a drilling fluid. In the Devon core it was observed that below about 150 m the stratigraphy was obscured by microfractures.
Core processing
Modern practice is to ensure that cores remain uncontaminated, since they are analysed for trace quantities of chemicals and isotopes. They are sealed in plastic bags after drilling and analysed in clean rooms.
The core is carefully extruded from the barrel; often facilities are designed to accommodate the entire length of the core on a horizontal surface. Drilling fluid will be cleaned off before the core is cut into 1-2 meter sections. Various measurements may be taken during preliminary core processing.
Current practices to avoid contamination of ice include:
- Keeping ice well below the freezing point.
- At Greenland and Antarctic sites, temperature is maintained by having storage and work areas under the snow/ice surface.
- At GISP2, cores were never allowed to rise above -15 °C, partly to prevent microcracks from forming and allowing present-day air to contaminate the fossil air trapped in the ice fabric, and partly to inhibit recrystallization of the ice structure.
- Wearing special clean suits over cold weather clothing.
- Mittens or gloves.
- Filtered respirators.
- Plastic bags, often polyethylene, around ice cores. Some drill barrels include a liner.
- Proper cleaning of tools and laboratory equipment.
- Use of laminar-flow bench to isolate core from room particulates.
For shipping, cores are packed in Styrofoam boxes protected by shock-absorbing bubble wrap.
Due to the many types of analysis done on core samples, sections of the core are scheduled for specific uses. After the core is ready for further analysis, each section is cut as required for tests. Some testing is done on site, other study will be done later, and a significant fraction of each core segment is reserved for archival storage for future needs.
Projects have used different core-processing strategies. Some projects have only done studies of physical properties in the field, while others have done significantly more study in the field. These differences are reflected in the core processing facilities.
Ice relaxation
Deep ice is under great pressure. When brought to the surface, there is a drastic change in pressure. Due to the internal pressure and varying composition, particularly bubbles, cores are sometimes very brittle and can break or shatter during handling. At Dome C, the first 1000 m were brittle ice. At Siple Dome, brittle ice was encountered from 400 to 1000 m. It has been found that allowing ice cores to rest for some time (sometimes for a year) makes them much less brittle.
Decompression causes significant volume expansion (called relaxation) due to microcracking and the exsolving of enclathratized gases. Relaxation may last for months. During this time, ice cores are stored below -10 °C to prevent cracking due to expansion at higher temperatures. At drilling sites, a relaxation area is often built within existing ice at a depth which allows ice core storage at temperatures below -20 °C.
It has been observed that the internal structure of ice undergoes distinct changes during relaxation. Changes include much more pronounced cloudy bands and much higher density of "white patches" and bubbles.
Several drilling techniques have been examined for their effect on relaxation. Cores obtained by hot water drilling at Siple Dome in 1997–1998 underwent appreciably more relaxation than cores obtained with the PICO electro-mechanical drill. In addition, the fact that those cores were allowed to remain at the surface at elevated temperature for several days likely promoted the onset of rapid relaxation.
Ice core data
Many materials can appear in an ice core. Layers can be measured in several ways to identify changes in composition. Small meteorites may be embedded in the ice. Volcanic eruptions leave identifiable ash layers. Dust in the core can be linked to increased desert area or wind speed.
Isotopic analysis of the ice in the core can be linked to temperature and global sea level variations. Analysis of the air contained in bubbles in the ice can reveal the palaeocomposition of the atmosphere, in particular CO2 variations. There are great problems relating the dating of the included bubbles to the dating of the ice, since the bubbles only slowly "close off" after the ice has been deposited. Nonetheless, recent work has tended to show that during deglaciations CO2 increases lag temperature increases by 600 ± 400 years. Beryllium-10 concentrations are linked to cosmic ray intensity which can be a proxy for solar strength.
There may be an association between atmospheric nitrates in ice and solar activity. However, recently it was discovered that sunlight triggers chemical changes within the top levels of firn which significantly alter the pore air composition, raising levels of formaldehyde and NOx. Although the remaining levels of nitrates may indeed be indicators of solar activity, there is ongoing investigation of these and related effects upon ice core data.
Core contamination
Some contamination has been detected in ice cores. The levels of lead on the outside of ice cores are much higher than on the inside. In ice from the Vostok core (Antarctica), the outer portion of the cores has up to 3 and 2 orders of magnitude higher bacterial density and dissolved organic carbon, respectively, than the inner portion, as a result of drilling and handling.
Paleoatmospheric sampling
As porous snow consolidates into ice, the air within it is trapped in bubbles in the ice. This process continuously preserves samples of the atmosphere. In order to retrieve these natural samples the ice is ground at low temperatures, allowing the trapped air to escape. It is then condensed for analysis by gas chromatography or mass spectrometry, revealing gas concentrations and their isotopic composition respectively. Apart from the intrinsic importance of knowing relative gas concentrations (e.g. to estimate the extent of greenhouse warming), their isotopic composition can provide information on the sources of gases. For example CO2 from fossil-fuel or biomass burning is relatively depleted in 13C. See Friedli et al., 1986.
Dating the air with respect to the ice it is trapped in is problematic. The consolidation of snow to ice necessary to trap the air takes place at depth (the 'trapping depth') once the pressure of overlying snow is great enough. Since air can freely diffuse from the overlying atmosphere throughout the upper unconsolidated layer (the 'firn'), trapped air is younger than the ice surrounding it.
Trapping depth varies with climatic conditions, so the air-ice age difference could vary between 2500 and 6000 years (Barnola et al., 1991). However, air from the overlying atmosphere may not mix uniformly throughout the firn (Battle et al., 1996) as earlier assumed, meaning estimates of the air-ice age difference could be smaller than previously thought. Either way, this age difference is a critical uncertainty in dating ice-core air samples. In addition, gas movement differs between gases; for example, larger molecules may be blocked at depths where smaller molecules can still move, so the ages of different gases at a given depth may differ. Some gases also have characteristics which affect their inclusion, such as helium not being trapped because it is soluble in ice.
In Law Dome ice cores, the trapping depth at DE08 was found to be 72 m where the age of the ice is 40±1 years; at DE08-2 to be 72 m depth and 40 years; and at DSS to be 66 m depth and 68 years.
Paleoatmospheric firn studies
At the South Pole, the firn-ice transition depth is at 122 m, with a CO2 age of about 100 years. Gases involved in ozone depletion (CFCs, chlorocarbons, and bromocarbons) were measured in firn; levels fell to almost zero in air dating from around 1880, except for CH3Br, which is known to have natural sources. A similar study of Greenland firn found that CFCs vanished at a depth of 69 m (CO2 age of 1929).
Analysis of the Upper Fremont Glacier ice core showed large levels of chlorine-36 that definitely correspond to the production of that isotope during atmospheric testing of nuclear weapons. This result is interesting because the signal exists despite being on a glacier and undergoing the effects of thawing, refreezing, and associated meltwater percolation. 36Cl has also been detected in the Dye-3 ice core (Greenland), and in firn at Vostok.
Studies of gases in firn often involve estimates of changes in gases due to physical processes such as diffusion. However, it has been noted that there also are populations of bacteria in surface snow and firn at the South Pole, although this study has been challenged. It had previously been pointed out that anomalies in some trace gases may be explained as due to accumulation of in-situ metabolic trace gas byproducts.
Dating cores
Shallow cores, or the upper parts of cores in high-accumulation areas, can be dated exactly by counting individual layers, each representing a year. These layers may be visible, related to the nature of the ice; or they may be chemical, related to differential transport in different seasons; or they may be isotopic, reflecting the annual temperature signal (for example, snow from colder periods has less of the heavier isotopes of H and O). Deeper into the core the layers thin out due to ice flow and high pressure, and eventually individual years cannot be distinguished. It may be possible to identify events in the upper levels, such as radioisotope layers from atmospheric nuclear weapons testing, and ash layers corresponding to known volcanic eruptions. Volcanic eruptions may be detected by visible ash layers, acidic chemistry, or electrical resistance change. Some composition changes are detected by high-resolution scans of electrical resistance. Lower down, the ages are reconstructed by modeling accumulation rate variations and ice flow.
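Lower in the core, such flow models yield an age-depth relation. As an illustration only, the following Python sketch implements the classic Nye approximation, in which annual layers thin linearly to zero at the bed; the 3000 m thickness and 0.1 m/yr accumulation rate are made-up example values, not those of any particular core.

import math

def nye_age(depth_m, thickness_m, accumulation_m_per_yr):
    """Age of ice at a given depth under the Nye flow model.

    Assumes annual layers thin linearly from the surface value (the
    accumulation rate, in ice equivalent) to zero at the bed, giving
    age(z) = -(H / a) * ln(1 - z / H).
    """
    if not 0 <= depth_m < thickness_m:
        raise ValueError("depth must lie between the surface and the bed")
    return -(thickness_m / accumulation_m_per_yr) * math.log(1 - depth_m / thickness_m)

# Illustrative numbers only: a 3000 m thick sheet, 0.1 m/yr accumulation.
for z in (100, 1000, 2000, 2900):
    print(f"{z:4d} m -> {nye_age(z, 3000.0, 0.1):9,.0f} years")

The rapid growth of age near the bed mirrors the thinning of annual layers described above.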
Dating is a difficult task. Five different dating methods have been used for Vostok cores, with differences such as 300 yr at 100 m depth, 600 yr at 200 m, 7000 yr at 400 m, 5000 yr at 800 m, 6000 yr at 1600 m, and 5000 yr at 1934 m.
The use of different dating methods makes comparison and interpretation difficult. Matching peaks by visual examination of the Moulton and Vostok ice cores suggests a time difference of about 10,000 years, but proper interpretation requires knowing the reasons for the differences.
Ice core storage and transport
Ice cores are typically stored and transported in refrigerated ISO container systems. Due to the high value and the temperature-sensitive nature of the samples, container systems with primary and back-up refrigeration units and generator sets are often used. In such a system, known in the industry as a Redundant Container System, the refrigeration unit and generator set automatically switch to their back-ups in the case of a loss of performance or power.
Ice core sites
Ice cores have been taken from many locations around the world. Major efforts have taken place on Greenland and Antarctica.
Sites on Greenland are more susceptible to snow melt than those in Antarctica. In the Antarctic, areas around the Antarctic Peninsula and seas to the west have been found to be affected by ENSO effects. Both of these characteristics have been used to study such variations over long spans of time.
The first to winter on the inland ice were J.P. Koch and Alfred Wegener, in a hut they built on the ice in Northeast Greenland. Inside the hut they drilled to a depth of 25 m with an auger similar to an oversized corkscrew.
Station Eismitte
Eismitte means Ice-Center in German. The Greenland campsite was located 402 kilometers (250 mi) from the coast at an estimated altitude of 3,000 meters (9,843 feet).
As a member of the Alfred Wegener Expedition to Eismitte in central Greenland from July 1930 to August 1931, Ernst Sorge hand-dug a 15 m deep pit adjacent to his beneath-the-surface snow cave. Sorge was the first to systematically and quantitatively study the near-surface snow/firn strata from inside his pit. His research validated the feasibility of measuring the preserved annual snow accumulation cycles, like measuring frozen precipitation in a rain gauge.
Camp VI
During 1950-1951 members of Expeditions Polaires Francaises (EPF) led by Paul Emile Victor reported boring two holes to depths of 126 and 150 m on the central Greenland inland ice at Camp VI and Station Central (Centrale). Camp VI is in the western part of Greenland on the EPF-EGIG line at an elevation of 1598 masl.
Station Centrale
The Station Centrale was not far from station Eismitte. Centrale is on a line between Milcent (70°18’N 45°35’W, 2410 masl) and Crête (71°7’N 37°19’W), at about (70°43'N 41°26'W), whereas Eismitte is at (71°10’N 39°56’W, ~3000 masl).
Site 2
In 1956, before the International Geophysical Year (IGY) of 1957-58, a 10 cm diameter core was recovered to a depth of 305 m using a rotary mechanical drill (US).
A second 10 cm diameter core was recovered to 411 m in 1957 by the same drill rig: a commercially modified, mechanical-rotary Failing-1500 rock-coring rig fitted with special ice-cutting bits.
Camp Century
Three cores were attempted at Camp Century in 1961, 1962, and again in 1963. The third hole was started in 1963 and reached 264 m. The 1963 hole was re-entered using the thermal drill (US) in 1964 and extended to 535 m. In mid-1965 the thermal drill was replaced with an electro-mechanical drill, 9.1 cm diameter, that reached the base of the ice sheet in July 1966 at 1387 m. The Camp Century, Greenland, (77°10’N 61°08’W, 1885 masl) ice core (cored from 1963–1966) is 1390 m deep and contains climatic oscillations with periods of 120, 940, and 13,000 years.
Another core in 1977 was drilled at Camp Century using a Shallow (Dane) drill type, 7.6 cm diameter, to 100 m.
North Site
At the North Site (75°46’N 42°27’W, 2870 masl) drilling began in 1972 using a SIPRE (US) drill type, 7.6 cm diameter to 25 m. The North Site was 500 km north of the EGIG line. At a depth of 6–7 m diffusion had obliterated some of the seasonal cycles.
North Central
The first core at North Central (74°37’N 39°36’W) was drilled in 1972 using a Shallow (Dane) drill type, 7.6 cm diameter to 100 m.
At Crête in central Greenland (71°7’N 37°19’W) drilling began in 1972 on the first core using a SIPRE (US) drill type, 7.6 cm diameter to 15 m.
The Crête core was drilled in central Greenland (1974) and reached a depth of 404.64 meters, extending back only about fifteen centuries. Annual cycle counting showed that the oldest layer was deposited in 534 AD.
The Crête 1984 ice cores consist of 8 short cores drilled in the 1984-85 field season as part of the post-GISP campaigns. Glaciological investigations were carried out in the field at eight core sites (A-H).
"The first core drilled at Station Milcent in central Greenland covers the past 780 years." Milcent core was drilled at 70.3°N, 44.6°W, 2410 masl. The Milcent core (398 m) was 12.4 cm in diameter, using a Thermal (US) drill type, in 1973.
Dye 2
Drilling with a Shallow (Swiss) drill type at Dye 2 (66°23’N 46°11’W, 2338 masl) began in 1973. The core was 7.6 cm in diameter to a depth of 50 m. A second core, 10.2 cm in diameter, was drilled to 101 m in 1974. An additional core at Dye 2 was drilled in 1977 using a Shallow (US) drill type, 7.6 cm diameter, to 84 m.
Summit Camp
The camp is located approximately 360 km from the east coast and 500 km from the west coast of Greenland, and 200 km NNE of the historical ice sheet camp Eismitte. The closest town is Ittoqqortoormiit, 460 km ESE of the station. The station, however, is not part of Sermersooq municipality, but falls within the bounds of the Northeast Greenland National Park.
An initial core at Summit (71°17’N 37°56’W, 3212 masl) using a Shallow (Swiss) drill type was 7.6 cm in diameter for 31 m in 1974. Summit Camp, also Summit Station, is a year-round research station on the apex of the Greenland Ice Sheet. Its coordinates are variable, since the ice is moving. The coordinates provided here (72°34’45”N 38°27’26”W, 3212 masl) are as of 2006.
South Dome
The first core at South Dome (63°33’N 44°36’W, 2850 masl) used a Shallow (Swiss) drill type for a 7.6 cm diameter core to 80 m in 1975.
Hans Tausen (or Hans Tavsen)
The first GISP core drilled at Hans Tausen Iskappe (82°30’N 38°20’W, 1270 masl) was in 1975 using a Shallow (Swiss) drill type, 7.6 cm diameter core to 60 m. The second core at Hans Tausen was drilled in 1976 using a Shallow (Dane) drill type, 7.6 cm diameter to 50 m. The drilling team reported that the drill was stuck in the drill hole and lost.
The Hans Tausen ice cap in Peary Land was drilled again with a new deep drill to 325 m. The ice core contained distinct melt layers all the way to bedrock, indicating that Hans Tausen contains no ice from the last glaciation; i.e., the world’s northernmost ice cap melted away during the post-glacial climatic optimum and was rebuilt when the climate got colder some 4000 years ago.
Camp III
The first core at Camp III (69°43’N 50°8’W) was drilled in 1977 using a Shallow (Swiss) drill type, 7.6 cm, to 49 m. The last core at Camp III was drilled in 1978 using a Shallow (Swiss) drill type, 7.6 cm diameter, 80 m depth.
Dye 3
The Renland ice core from East Greenland apparently covers a full glacial cycle from the Holocene into the previous Eemian interglacial. It was drilled in 1985 to a length of 325 m. From the delta-profile, the Renland ice cap in the Scoresbysund Fiord has always been separated from the inland ice, yet all the delta-leaps revealed in the Camp Century 1963 core recurred in the Renland ice core.
The GRIP and GISP cores, each about 3000 m long, were drilled by European and US teams respectively on the summit of Greenland. Their usable record stretches back more than 100,000 years into the last interglacial. They agree (in the climatic history recovered) to a few metres above bedrock. However, the lowest portion of these cores cannot be interpreted, probably due to disturbed flow close to the bedrock. There is evidence the GISP2 cores contain an increasing structural disturbance which casts suspicion on features lasting centuries or more in the bottom 10% of the ice sheet. The more recent NorthGRIP ice core provides an undisturbed record to approx. 123,000 years before present. The results indicate that Holocene climate has been remarkably stable and have confirmed the occurrence of rapid climatic variation during the last ice age.
The NGRIP drilling site is near the center of Greenland (elevation 2917 m, ice thickness 3085 m). Drilling began in 1999 and was completed at bedrock in 2003. The NGRIP site was chosen to extract a long and undisturbed record stretching into the last glacial. NGRIP covers 5 kyr of the Eemian, and shows that temperatures then were roughly as stable as pre-industrial Holocene temperatures.
The North Greenland Eemian Ice Drilling (NEEM) site is located at 77°27’N 51°3.6’W. Drilling started in June 2009. The ice at NEEM was expected to be 2545 m thick. On July 26, 2010, drilling reached bedrock at 2537.36 m.
For a list of ice cores, visit the IceReader web site.
Plateau Station
Plateau Station is an inactive American research and Queen Maud Land traverse support base on the central Antarctic Plateau. The base was in continuous use until January 29, 1969. Ice cores were drilled there, but with mixed success.
Byrd Station
Marie Byrd Land formerly hosted the Operation Deep Freeze base Byrd Station (NBY), beginning in 1957, in the hinterland of Bakutis Coast. Byrd Station was the only major base in the interior of West Antarctica. In 1968, the first ice core to fully penetrate the Antarctic Ice Sheet was drilled here.
Dolleman Island
The British Antarctic Survey (BAS) used Dolleman Island as an ice core drilling site in 1976, 1986 and 1993.
Berkner Island
In the 1994/1995 field season the British Antarctic Survey, Alfred Wegener Institute and the Forschungsstelle für Physikalische Glaziologie of the University of Münster cooperated in a project drilling ice cores on the North and South Domes of Berkner Island.
Cape Roberts Project
Between 1997 and 1999 the international Cape Roberts Project (CRP) recovered drill cores up to 1000 m long in the Ross Sea, Antarctica, to reconstruct the glaciation history of Antarctica.
International Trans-Antarctic Scientific Expedition (ITASE)
The International Trans-Antarctic Scientific Expedition (ITASE) was created in 1990 with the purpose of studying climate change through research conducted in Antarctica. A 1990 meeting held in Grenoble, France, served as a site of discussion regarding efforts to study the surface and subsurface record of Antarctica’s ice cores.
Lake Vida
The lake gained widespread recognition in December 2002 when a research team, led by the University of Illinois at Chicago's Peter Doran, announced the discovery of 2,800 year old halophile microbes (primarily filamentous cyanobacteria) preserved in ice layer core samples drilled in 1996.
As of 2003, the longest core drilled was at Vostok station. It reached back 420,000 years and revealed 4 past glacial cycles. Drilling stopped just above Lake Vostok. The Vostok core was not drilled at a summit, hence ice from deeper down has flowed from upslope; this slightly complicates dating and interpretation. Vostok core data are available.
EPICA/Dome C and Kohnen Station
The European Project for Ice Coring in Antarctica (EPICA) first drilled a core near Dome C, 560 km from Vostok, at an altitude of 3,233 m. The ice thickness is 3,309 ± 22 m and the core was drilled to 3,190 m. It is the longest ice core on record, with ice sampled to an age of 800 kyr BP (Before Present). The present-day annual average air temperature is -54.5 °C and snow accumulation is 25 mm/y. Information about the core was first published in Nature on June 10, 2004. The core revealed 8 previous glacial cycles. A further core was subsequently drilled at Kohnen Station in 2006.
Although the major events of the last glacial period recorded in the Vostok, EPICA, NGRIP, and GRIP cores are present in all four, some variations with depth (both shallower and deeper) occur between the Antarctic and Greenland cores.
Dome F
Two deep ice cores were drilled near the Dome F summit (altitude 3,810 m). The first drilling started in August 1995, reached a depth of 2503 m in December 1996, and covers a period back to 320,000 years. The second drilling started in 2003, was carried out during four subsequent austral summers from 2003/2004 until 2006/2007, and reached a depth of 3,035.22 m. This core greatly extends the climatic record of the first core and, according to a first, preliminary dating, reaches back 720,000 years.
WAIS Divide
The West Antarctic Ice Sheet Divide (WAIS Divide) Ice Core Drilling Project began drilling over the 2005 and 2006 seasons, recovering ice cores up to a depth of 300 m for the purposes of gas collection, other chemical applications, and to test the site for use with the Deep Ice Sheet Coring (DISC) Drill. Sampling with the DISC Drill will begin over the 2007 season, and researchers expect that these new ice cores will provide data to establish a greenhouse gas record back over 40,000 years.
The TAlos Dome Ice CorE (TALDICE) Project is a 1620 m deep ice core drilled at Talos Dome that provides a paleoclimate record covering at least the last 250,000 years. The TALDICE coring site (159°11'E 72°49'S; 2315 m a.s.l.; annual mean temperature -41 °C) is located near the dome summit and is characterised by an annual snow accumulation rate of 80 mm water equivalent.
Non-polar cores
The non-polar ice caps, such as those found on mountain tops, were traditionally ignored as serious places to drill ice cores because it was generally believed the ice would not be more than a few thousand years old. Since the 1970s, however, older ice has been found, with clear chronological dating and climate signals going as far back as the beginning of the most recent ice age. Although polar cores have the clearest and longest chronological record, four times or more as long, ice cores from tropical regions offer data and insights not available from polar cores and have been very influential in advancing understanding of the planet's climate history and mechanisms.
Mountain ice cores have been retrieved in the Andes in South America, on Mount Kilimanjaro in Africa, in Tibet, at various locations in the Himalayas, in Alaska, in Russia, and elsewhere. Mountain ice cores are logistically very difficult to obtain. The drilling equipment must be carried by hand, organized as a mountaineering expedition with multiple stage camps, to altitudes upwards of 20,000 feet (helicopters are not safe at such altitudes), and the multi-ton ice cores must then be transported back down the mountain. All of this requires mountaineering skills, equipment, and logistics, and working at low oxygen in extreme environments in remote countries. Scientists may stay at high altitude on the ice caps for 20 to 50 days, setting altitude endurance records that even professional climbers do not attain. American scientist Lonnie Thompson has been pioneering this area since the 1970s, developing lightweight drilling equipment that can be carried by porters, solar-powered electricity, and a team of mountaineering scientists. The ice core drilled in the Guliya ice cap in western China in the 1990s reaches back to 760,000 years before the present, farther back than any other core at the time, though the EPICA core in Antarctica equalled that extreme in 2003.
Because glaciers are retreating rapidly worldwide, some important glaciers are no longer scientifically viable for taking cores, and many more glacier sites will continue to be lost; "The Snows of Kilimanjaro" (Hemingway), for example, could be gone by 2015.
Upper Fremont Glacier
Ice core samples were taken from Upper Fremont Glacier in 1990-1991. These cores were analyzed for climatic changes as well as alterations in atmospheric chemistry. In 1998 an unbroken 164 m ice core sample was taken from the glacier, and subsequent analysis showed an abrupt change in the ratio of oxygen-18 to oxygen-16 in conjunction with the end of the Little Ice Age, a period of cooler global temperatures between the years 1550 and 1850. A linkage was established with a similar ice core study on the Quelccaya Ice Cap in Peru, which demonstrated the same changes in the oxygen isotope ratio during the same period.
Nevado Sajama
Quelccaya Ice Cap
Mount Kilimanjaro ice fields
These cores provide a ~11.7 ka record of Holocene climate and environmental variability including three periods of abrupt climate change at ~8.3, ~5.2 and ~4 ka. These three periods correlate with similar events in the Greenland GRIP and GISP2 cores.
East Rongbuk Glacier
See also
- Core drill
- Core sample in general from ocean floor, rocks and ice.
- Greenland ice cores
- Ice core brittle zone
- Jean Robert Petit
- Scientific drilling
- WAIS Divide Ice Core Drilling Project.
- ^ Bender M, Sowers T, Brook E (August 1997). "Gases in ice cores". Proc. Natl. Acad. Sci. U.S.A. 94 (16): 8343–9. Bibcode:1997PNAS...94.8343B. doi:10.1073/pnas.94.16.8343. PMC 33751. PMID 11607743.
- ^ Kaspers, Karsten Adriaan. "Chemical and physical analyses of firn and firn air: from Dronning Maud Land, Antarctica; 2004-10-04". DAREnet. Retrieved October 14, 2005.
- ^ "The Composition of Air in the Firn of Ice Sheets and the Reconstruction of Anthropogenic Changes in Atmospheric Chemistry". Retrieved October 14, 2005.
- ^ "http://www.ssec.wisc.edu/icds/reports/Drill_Fluid.pdf" (PDF). Retrieved October 14, 2005.
- ^ "http://pubs.usgs.gov/prof/p1386j/history/history-lores.pdf" (PDF). Retrieved October 14, 2005.
- ^ Journal of Geophysical Research (Oceans and Atmospheres) Special Issue [Full Text]. Retrieved October 14, 2005.
- ^ "Physical Properties Research on the GISP2 Ice Core". Retrieved October 14, 2005.
- ^ Svensson, A., S. W. Nielsen, S. Kipfstuhl, S. J. Johnsen, J. P. Steffensen, M. Bigler, U. Ruth, and R. Röthlisberger (2005). "Visual stratigraphy of the North Greenland Ice Core Project (NorthGRIP) ice core during the last glacial period". J. Geophys. Res. 110 (D02108): D02108. Bibcode:2005JGRD..11002108S. doi:10.1029/2004JD005134.
- ^ A.J. Gow and D.A. Meese. "The Physical and Structural Properties of the Siple Dome Ice Cores". WAISCORES. Retrieved October 14, 2005.
- ^ "Purdue study rethinks atmospheric chemistry from ground up". Archived from the original on December 28, 2005. Retrieved October 14, 2005.
- "Summit_ACS.html". Retrieved October 14, 2005.
- ^ Amy Ng and Clair Patterson (1981). "Natural concentrations of lead in ancient Arctic and Antarctic ice". Geochimica et Cosmochimica Acta 45 (11): 2109–21. Bibcode:1981GeCoA..45.2109N. doi:10.1016/0016-7037(81)90064-8.
- ^ "Glacial ice cores: a model system for developing extraterrestrial decontamination protocols". Publications of Brent Christner. Archived from the original on March 7, 2005. Retrieved May 23, 2005.
- ^ Bender M, Sowers T, Brook E (1997). "Gases in ice cores". Proc. Natl. Acad. Sci. USA 94 (16): 8343–9. Bibcode:1997PNAS...94.8343B. doi:10.1073/pnas.94.16.8343. PMC 33751. PMID 11607743.
- ^ "TRENDS: ATMOSPHERIC CARBON DIOXIDE". Retrieved October 14, 2005.
- ^ "CMDL Annual Report 23: 5.6. MEASUREMENT OF AIR FROM SOUTH POLE FIRN". Retrieved October 14, 2005.
- ^ "Climate Prediction Center — Expert Assessments". Retrieved October 14, 2005.
- ^ M.M. Reddy, D.L. Naftz, P.F. Schuster. "FUTURE WORK". ICE-CORE EVIDENCE OF RAPID CLIMATE SHIFT DURING THE TERMINATION OF THE LITTLE ICE AGE. Archived from the original on September 13, 2005. Retrieved October 14, 2005.
- ^ "Thermonuclear 36Cl". Archived from the original on May 23, 2005. Retrieved October 14, 2005.
- ^ Delmas RJ, J Beer, HA Synal, et al (2004). "Bomb-test 36Cl measurements in Vostok snow (Antarctica) and the use of 36Cl as a dating tool for deep ice cores". Tellus B 36 (5): 492. Bibcode:2004TellB..56..492D. doi:10.1111/j.1600-0889.2004.00109.x.
- ^ Carpenter EJ, Lin S, Capone DG (October 2000). "Bacterial Activity in South Pole Snow". Appl. Environ. Microbiol. 66 (10): 4514–7. doi:10.1128/AEM.66.10.4514-4517.2000. PMC 92333. PMID 11010907.
- ^ Warren SG, Hudson SR (October 2003). "Bacterial Activity in South Pole Snow Is Questionable". Appl. Environ. Microbiol. 69 (10): 6340–1; author reply 6341. doi:10.1128/AEM.69.10.6340-6341.2003. PMC 201231. PMID 14532104.
- ^ Sowers, T. (2003). "Evidence for in-situ metabolic activity in ice sheets based on anomalous trace gas records from the Vostok and other ice cores". EGS - AGU - EUG Joint Assembly: 1994. Bibcode:2003EAEJA.....1994S.
- ^ "NOAA Paleoclimatology Program — Vostok Ice Core Timescales". Retrieved October 14, 2005.
- ^ "Polar Paleo-Climate Interests". Retrieved October 14, 2005.
- ^ Jim White and Eric Steig. "Siple Dome Highlights: Stable isotopes". WAISCORES. Retrieved October 14, 2005.
- ^ "GISP2 and GRIP Records Prior to 110 kyr BP". Archived from the original on September 9, 2005. Retrieved October 14, 2005.
- ^ Gow, A. J., D. A. Meese, R. B. Alley, J. J. Fitzpatrick, S. Anandakrishnan, G. A. Woods, and B. C. Elder (1997). "Physical and structural properties of the Greenland Ice Sheet Project 2 ice core: A review". J. Geophys. Res. 102 (C12): 26559–76. Bibcode:1997JGR...10226559G. doi:10.1029/97JC00165.
- ^ Whitehouse, David (14 October 2005). "Breaking through Greenland's ice cap". BBC.
- ^ "NOAA Paleoclimatology Program — Vostok Ice Core". Retrieved October 14, 2005.
- ^ Bowen, Mark (2005). Thin Ice. Henry Holt Company, ISBN 0-8050-6443-5
- British Antarctic Survey, The ice man cometh - ice cores reveal past climates
- Hubertus Fischer, Martin Wahlen, Jesse Smith, Derek Mastroianni, Bruce Deck (1999-03-12). "Ice Core Records of Atmospheric CO2 Around the Last Three Glacial Terminations". Science (Science) 283 (5408): 1712–4. Bibcode:1999Sci...283.1712F. doi:10.1126/science.283.5408.1712. PMID 10073931. Retrieved 2010-06-20.
- Dansgaard W. Frozen Annals Greenland Ice Sheet Research. Odder, Denmark: Narayana Press. p. 124. ISBN 87-990078-0-0.
- Langway CC Jr. (Jan 2008). "The History of Early Polar Ice Cores". Cold Regions Science and Technology 52 (2): 101. doi:10.1016/j.coldregions.2008.01.001.
- Wegener K (Sep 1955). "Die temperatur in Grönlandischen inlandeis". Pure Appl Geophys. 32 (1): 102–6. Bibcode:1955GeoPA..32..102W. doi:10.1007/BF01993599.
- Rose LE. "The Greenland Ice Cores". Kronos 12 (1): 55–68.
- "Crete Ice Core".
- Oeschger H, Beer J, Andree M; Beer; Andree (Aug 1987). "10Be and 14C in the Earth system". Phil Trans R Soc Lond A. 323 (1569): 45–56. Bibcode:1987RSPTA.323...45O. doi:10.1098/rsta.1987.0071. JSTOR 38000.
- "NOAA Paleoclimatology World Data Centers Dye 3 Ice Core".
- Hansson M, Holmén K (Nov 2001). "High latitude biospheric activity during the Last Glacial Cycle revealed by ammonium variations in Greenland Ice Cores". Geophy Res Lett. 28 (22): 4239–42. Bibcode:2001GeoRL..28.4239H. doi:10.1029/2000GL012317.
- National Science Foundation press release for Doran et al. (2003)
- "Deep ice tells long climate story". BBC News. September 4, 2006. Retrieved May 4, 2010.
- Peplow, Mark (25 January 2006). "Ice core shows its age". Nature (journal). doi:10.1038/news060123-3. "part of the European Project for Ice Coring in Antarctica (EPICA) ... Both cores were ... Dome C ... the Kohnen core ..."
- "Deciphering the ice". CNN. 12 September 2001. Archived from the original on 13 June 2008. Retrieved 8 July 2010.
- Thompson LG, Mosley-Thompson EM, Henderson KA (2000). "Ice-core palaeoclimate records in tropical South America since the Last Glacial Maximum". J Quaternary Sci. 15 (4): 377–94. Bibcode:2000JQS....15..377T. doi:10.1002/1099-1417(200005)15:4<377::AID-JQS542>3.0.CO;2-L.
- Thompson LG, Mosley-Thompson EM, Davis ME, Henderson KA, Brecher HH, Zagorodnov VS, Mashlotta TA, Lin PN, Mikhalenko VN, Hardy DR, Beer J (2002). "Kilimanjaro ice core records: evidence of Holocene climate change in tropical Africa". Science. 298 (5593): 589–93. Bibcode:2002Sci...298..589T. doi:10.1126/science.1073198. PMID 12386332.
- Ming J, Cachier H, Xiao C, et al. (2008). Atmospheric Chemistry and Physics 8 (5): 1343–52.
- http://www.tonderai.co.uk/earth/ice_cores.php "The Chemistry of Ice Cores" literature review
- Barnola J, Pimienta P, Raynaud D, Korotkevich Y (1991). "CO2-Climate relationship as deduced from the Vostok ice core – a reexamination based on new measurements and on a reevaluation of the air dating". Tellus Series B-Chemical and Physical Meteorology 43 (2): 83–90. Bibcode:1991TellB..43...83B. doi:10.1034/j.1600-0889.1991.t01-1-00002.x.
- Battle M, Bender M, Sowers T, et al (1996). "Atmospheric gas concentrations over the past century measured in air from firn at the South Pole". Nature 383 (6597): 231–5. Bibcode:1996Natur.383..231B. doi:10.1038/383231a0.
- Friedli H, Lotscher H, Oeschger H, et al (1986). "Ice core record of the C13/C12 ratio of atmospheric CO2 in the past two centuries". Nature 324 (6094): 237–8. Bibcode:1986Natur.324..237F. doi:10.1038/324237a0.
- Andersen KK, Azuma N, Barnola JM, et al. (September 2004). "High-resolution record of Northern Hemisphere climate extending into the last interglacial period" (PDF). Nature 431 (7005): 147–51. Bibcode:2004Natur.431..147A. doi:10.1038/nature02805. PMID 15356621.
- Ice Core Gateway
- National Ice Core Laboratory - Facility for storing, curating, and studying ice cores recovered from the polar regions.
- Ice-core evidence of rapid climate shift during the termination of the Little Ice Age - Upper Fremont Glacier study
- Byrd Polar Research Center - Ice Core Paleoclimatology Research Group
- National Ice Core Laboratory - Science Management Office
- West Antarctic Ice Sheet Divide Ice Core Project
- PNAS Collection of Articles on the Rapid Climate Change
- Map of some worldwide ice core drilling locations
- Map of some ice core drilling locations in Antarctica
- Alley RB (February 2000). "Ice-core evidence of abrupt climate changes". Proc. Natl. Acad. Sci. U.S.A. 97 (4): 1331–4. Bibcode:2000PNAS...97.1331A. doi:10.1073/pnas.97.4.1331. PMC 34297. PMID 10677460.
- August 2010: Ice Cores: A Window into Climate History interview with Eric Wolff, British Antarctic Survey from Allianz Knowledge
- September 2006: BBC: Core reveals carbon dioxide levels are highest for 800,000 years
- June 2004: "Ice cores unlock climate secrets" from the BBC
- June 2004: "Frozen time" from Nature
- June 2004: "New Ice Core Record Will Help Understanding of Ice Ages, Global Warming" from NASA
- September 2003: "Oldest ever ice core promises climate revelations" - from New Scientist | http://en.wikipedia.org/wiki/Ice_core | 13 |
In electronics, a voltage divider (also known as a potential divider) is a linear circuit that produces an output voltage (Vout) that is a fraction of its input voltage (Vin). Voltage division refers to the partitioning of a voltage among the components of the divider.
An example of a voltage divider consists of two resistors in series or a potentiometer. It is commonly used to create a reference voltage, or to get a low voltage signal proportional to the voltage to be measured, and may also be used as a signal attenuator at low frequencies. For direct current and relatively low frequencies, a voltage divider may be sufficiently accurate if made only of resistors; where frequency response over a wide range is required, (such as in an oscilloscope probe), the voltage divider may have capacitive elements added to allow compensation for load capacitance. In electric power transmission, a capacitive voltage divider is used for measurement of high voltage.
A voltage divider referenced to ground is created by connecting two electrical impedances in series, as shown in Figure 1. The input voltage is applied across the series impedances Z1 and Z2 and the output is the voltage across Z2. Z1 and Z2 may be composed of any combination of elements such as resistors, inductors and capacitors.
Applying Ohm's Law, the relationship between the input voltage, Vin, and the output voltage, Vout, can be found:
Vout = Vin · Z2 / (Z1 + Z2)
The transfer function (also known as the divider's voltage ratio) of this circuit is simply:
H = Vout / Vin = Z2 / (Z1 + Z2)
A resistive divider is the case where both impedances, Z1 and Z2, are purely resistive (Figure 2).
Substituting Z1 = R1 and Z2 = R2 into the previous expression gives:
Vout = Vin · R2 / (R1 + R2)
If R1 = R2, then Vout = Vin / 2.
If Vout = 6 V and Vin = 9 V (both commonly used voltages), then:
Vout / Vin = 6/9 = 2/3 = R2 / (R1 + R2)
and by solving using algebra, R2 must be twice the value of R1.
To solve for R1: R1 = R2 · (Vin - Vout) / Vout
To solve for R2: R2 = R1 · Vout / (Vin - Vout)
Any ratio between 0 and 1 is possible. That is, using resistors alone it is not possible to either invert the voltage or increase Vout above Vin.
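As a quick numeric check of the algebra above, here is a minimal Python sketch; the function names and the 10 kΩ example value are illustrative choices, not from any standard.

def divider_vout(vin, r1, r2):
    """Output of an unloaded resistive divider: Vout = Vin * R2 / (R1 + R2)."""
    return vin * r2 / (r1 + r2)

def solve_r2(vin, vout, r1):
    """Choose R2 for a desired Vout given Vin and R1: R2 = R1 * Vout / (Vin - Vout)."""
    return r1 * vout / (vin - vout)

vin, vout_target, r1 = 9.0, 6.0, 10_000.0   # arbitrary example values
r2 = solve_r2(vin, vout_target, r1)
print(r2)                          # 20000.0 -> R2 is twice R1, as derived above
print(divider_vout(vin, r1, r2))   # 6.0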
Low-pass RC filter
Consider a divider consisting of a resistor and capacitor as shown in Figure 3.
Comparing with the general case, we see Z1 = R and Z2 is the impedance of the capacitor, given by
Z2 = 1 / (jωC)
where j is the imaginary unit and ω is the angular frequency of the input signal.
This divider will then have the voltage ratio:
Vout / Vin = Z2 / (Z1 + Z2) = 1 / (1 + jωRC)
The product τ (tau) = RC is called the time constant of the circuit.
The ratio then depends on frequency, in this case decreasing as frequency increases. This circuit is, in fact, a basic (first-order) lowpass filter. The ratio contains an imaginary number, and actually contains both the amplitude and phase shift information of the filter. To extract just the amplitude ratio, calculate the magnitude of the ratio, that is:
|Vout / Vin| = 1 / √(1 + (ωRC)²)
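A short Python sketch of this magnitude calculation; the 1 kΩ and 1 µF values are arbitrary examples, and the cutoff frequency printed is simply the point where the ratio falls to 1/√2.

import math

def rc_lowpass_gain(freq_hz, r_ohms, c_farads):
    """Magnitude of Vout/Vin for the RC divider: 1 / sqrt(1 + (2*pi*f*R*C)^2)."""
    w = 2 * math.pi * freq_hz
    return 1 / math.sqrt(1 + (w * r_ohms * c_farads) ** 2)

r, c = 1_000.0, 1e-6             # illustrative values: 1 kOhm, 1 uF
fc = 1 / (2 * math.pi * r * c)   # cutoff frequency, about 159 Hz
for f in (10.0, fc, 10_000.0):
    print(f"{f:10.1f} Hz -> gain {rc_lowpass_gain(f, r, c):.3f}")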
Inductive dividers split AC input according to inductance:
Vout = Vin · L2 / (L1 + L2)
The above equation is for non-interacting inductors; mutual inductance will alter the results.
Inductive dividers split DC input according to the resistance of the elements as for the resistive divider above.
Capacitive dividers do not pass DC input.
For an AC input a simple capacitive equation is:
Vout = Vin · C1 / (C1 + C2)
Any leakage current in the capacitive elements requires use of the generalized expression with two impedances. By selection of parallel R and C elements in the proper proportions, the same division ratio can be maintained over a useful range of frequencies. This is the principle applied in compensated oscilloscope probes to increase measurement bandwidth.
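The compensation idea can be sketched with complex impedances in Python. The usual compensation condition is R1·C1 = R2·C2; the 9 MΩ/10 pF and 1 MΩ/90 pF values below are illustrative of a 10:1 attenuator, not a specific product.

import math

def z_parallel_rc(r, c, w):
    """Impedance of a resistor in parallel with a capacitor at angular frequency w."""
    zc = 1 / (1j * w * c)
    return (r * zc) / (r + zc)

def divider_ratio(r1, c1, r2, c2, freq_hz):
    """|Vout/Vin| for two parallel-RC branches in series, output taken across branch 2."""
    w = 2 * math.pi * freq_hz
    z1 = z_parallel_rc(r1, c1, w)
    z2 = z_parallel_rc(r2, c2, w)
    return abs(z2 / (z1 + z2))

# Compensated: R1*C1 == R2*C2, so the ratio stays 0.1 at every frequency.
r1, c1 = 9e6, 10e-12
r2, c2 = 1e6, 90e-12
for f in (10, 1_000, 100_000):
    print(f"{f:7d} Hz -> ratio {divider_ratio(r1, c1, r2, c2, f):.4f}")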
The voltage output of a voltage divider is not fixed but varies according to the load. To obtain a reasonably stable output voltage the output current should be a small fraction of the input current. The drawback of this is that most of the input current is wasted as heat in the divider.
Voltage dividers are used for adjusting the level of a signal, for bias of active devices in amplifiers, and for measurement of voltages. A Wheatstone bridge and a multimeter both include voltage dividers. A potentiometer is used as a variable voltage divider in the volume control of a radio.
- Voltage divider or potentiometer calculations
- Voltage divider tutorial video in HD
- Online calculator to choose the values by series E24, E96
- Online voltage divider calculator: chooses the best pair from a given series and also gives the color code
- Voltage divider theory - RC low-pass filter example and voltage divider using Thévenin's theorem | http://en.wikipedia.org/wiki/Voltage_divider | 13 |
41 | In a new finding that could have game-changing effects if borne out, two astrophysicists think they've finally tracked down the elusive signature of dark matter.
This invisible substance is thought to make up much of the universe but scientists have little idea what it is. They can only infer the existence of dark matter by measuring its gravitational tug on the normal matter that they can see.
Now, after sifting through observations of the center of our Milky Way galaxy, two researchers think they've found evidence of the annihilation of dark matter particles in powerful explosions.
"Nothing we tried besides dark matter came anywhere close to being able to accommodate the features of the observation," Dan Hooper, of the Fermi National Accelerator Laboratory in Batavia, Ill., and the University of Chicago, told SPACE.com. "It's always hard to be sure there isn't something you just haven't thought of. But I've talked to a lot of experts and so far I haven't heard anything that was a plausible alternative."
Hooper conducted the analysis with Lisa Goodenough, a graduate student at New York University.
Dark matter destruction
The idea of dark matter was first proposed in the 1930s, after the velocities of galaxies and stars suggested the universe contained much more mass than what could be seen. Dark matter would not reflect light, so it couldn't be observed directly by telescopes.
Now scientists calculate dark matter makes up roughly 80 percent of all matter, with regular atoms contributing a puny 20 percent.
The Fermi Gamma-ray Space Telescope, which has scanned the heavens in high-energy gamma-ray light since it was launched in 2008, has observed a signal of gamma-rays at the very center of the galaxy that was brighter than expected. Hooper and Goodenough tested many models to explain what could be creating this light. They ultimately concluded it must be caused by dark matter particles that are packed in so densely that they are destroying each other and releasing energy in the form of light.
Physicists have theorized that dark matter particles might be their own antimatter partners, and thus when two dark matter particles meet under the right circumstances, they would destroy each other. Alternatively, dark matter particles might be meeting anti-dark matter particles at the galactic center.
Either way, the researchers think the Milky Way's gamma-ray glow is caused by dark matter explosions.
By studying the data on this radiation, Hooper and Goodenough calculated that dark matter must be made of particles called WIMPs (weakly interacting massive particles) with masses between 7.3 and 9.2 GeV (giga-electron volts), roughly eight to ten times the mass of a proton. They also calculated a property known as the cross-section, which describes how likely the particle is to interact with others.
Knowing these two properties would represent a huge leap forward in our understanding of dark matter.
"It's the biggest thing that's happened in dark matter since we learned it existed," Hooper said. "So long as no unexpected alternative explanations come forward, I think yes, we've finally found it."
The researchers have submitted a paper describing their findings to the journal Physics Letters B, but it has not yet gone through the peer-review process.
Some skepticism remains
Not everyone is ready to accept that dark matter has been found.
Hooper and Goodenough based their analysis on data released to the public from the Fermi observatory's Large Area Telescope. However, the official Fermi team, a large collaboration of international scientists, has not finished studying the intriguing glow. While they don't exclude the possibility that it is dark matter, team members are not ready to dismiss the possibility of another explanation.
"We feel that astrophysical interpretations for the gamma-ray signals from the region of the galactic center have to be further explored," said Seth Digel, analysis coordinator for the Large Area Telescope collaboration and a staff physicist at the SLAC National Accelerator Laboratory in Menlo Park, Calif. "I can't and won't say what they've done is wrong, but as a collaboration we dont have our own final understanding of the data."
Fermi scientists stressed that the analysis of the Milky Way's center is very complex, because there are so many bright sources of gamma-ray light in this crowded region. Various types of spinning stars called pulsars, as well as remnants left over from supernovas, also contribute confusing signals.
"More work needs to be done in this direction, and people within the collaboration are working hard to accomplish this goal. Until this is done, it is too difficult to interpret the data," said Simona Murgia, another SLAC scientist and Fermi science team member.
Hooper agreed that the case is not yet closed.
"I want a lot of people who are experts to think about this hard and try to make it go away," he said. "If we all agree we can't, then we'll have our answer."
One reason he and Goodenough think they are on the right track is that their calculation of the mass of dark matter particles aligns with some promising hints from other studies, he said.
Two ground-based experiments aimed at detecting dark matter have found preliminary indications of particles with roughly the same mass. The University of Chicago's CoGeNT project, buried deep in the Soudan iron mine in northeastern Minnesota, and DAMA, an Italian experiment underground near the Gran Sasso Mountains outside of Rome, both found signals that they can't completely attribute to normal particles, but can't prove are from dark matter.
"Part of why this picture is so compelling has to do with those in fact," Hooper said. "I would argue that it's likely that all three of these experiments are seeing the same dark matter particle."
The Sagan standard
Still, it will take a lot of work to convince most astrophysicists that such a slippery substance has been captured at last.
"It's a complicated task to interpret what Dan and Lisa are seeing," said Doug Finkbeiner, a researcher at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass. "I do not find it persuasive, but that doesn't mean it is wrong."
Some scientists said we finally may be getting close to solving the mystery of dark matter. Michael Turner, director of the Kavli Institute for Cosmological Physics at the University of Chicago, said that between Fermi, the ground-based experiments, and the recently opened Large Hadron Collider particle accelerator at the CERN laboratory in Switzerland, scientists will likely confirm the existence of dark matter within the next decade.
For now, though, he's still waiting.
"This result is very intriguing but doesn't yet rise to the Sagan standard extraordinary claims require extraordinary evidence," Turner said. Other explanations would have to be eliminated, he said. "Nature knows many ways to make gamma rays."
50 | Teaching Plan 3
Explore the Circumcenter of a Triangle
This lesson plan introduces the concept of the circumcenter by using computers with sketchpad software to explore. Students are able to observe and explore possible results (images) through computers by carrying out their ideas in front of the screen.
IL Learning Standards
1. Understand the concept of the circumcenter of a triangle and other related knowledge.
2. Be able to use computers with Geometer's Sketchpad to observe possible results and solve geometric problems.
1. Computers and Geometer's Sketchpad software
2. Papers, pencils, and rulers
Lesson Plan
Day 1 - Introduction of basic definitions, review of related concepts, and class discussion
Day 2 - Group activity to answer questions by using computers with sketchpad
Day 3 - Group discussion, sharing results, and making conclusion
1. The instructor introduces the basic definition of the circumcenter and should also review related concepts such as the centroid, incenter, and orthocenter of a triangle.
2. Discuss students' thoughts and other related questions about the circumcenter, such as: How many circumcenters are there in a triangle? Is the circumcenter always inside the triangle? If not, describe the possible results and on what kind of triangle they depend.
3. Then, the instructor and students turn to the computers to experiment, and discuss how to draw graphs and find their answers by using computers.
The instructor has students form teams of 2-3 to work through computers to collect data in order to reach conclusions for the questions. The instructor should circulate among the groups to observe students' learning and offer help if students have problems operating the computers with the sketchpad software.
1. Is there only one circumcenter in a triangle? Explain your possible reasons.
2. Is the circumcenter always inside the triangle? If not, please describe the possible results and on what kind of triangle they depend. Worksheet#1 and GSP file.
3. What are the different properties among centroid, incenter, orthocenter, and circumcenter?
4. What kind of triangle results in the centroid, incenter, orthocenter, and circumcenter overlapping at the same point? GSP file
5. Which three points among the centroid, incenter, orthocenter, and circumcenter lie on a line? (This line is called the Euler line.) Describe your experimental result and explain it. GSP file.
6. In a triangle ABC, suppose that O is the circumcenter of triangle ABC. Observe the relation between angle ABC and angle AOC. Make a conclusion and explain it. Worksheet#2 and GSP file.
7. In a triangle ABC, suppose that O is the circumcenter of triangle ABC. Observe the lengths of OA, OB, and OC. Are they equal? Explain it. Let O be the center, and the length of OA be the radius, to draw a circle. Observe the positions of points B and C and explain them. GSP file. (This circle is called the circumscribed circle of triangle ABC.)
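For groups that want to check question 7 numerically outside Sketchpad, the following Python sketch computes the circumcenter from coordinates and verifies that OA = OB = OC; the 3-4-5 right triangle and its coordinate placement are just an example.

import math

def circumcenter(a, b, c):
    """Circumcenter of triangle ABC: the point equidistant from all three vertices.

    Solves the two linear equations from |P-A|^2 = |P-B|^2 and |P-A|^2 = |P-C|^2,
    i.e. the perpendicular bisectors of AB and AC.
    """
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        raise ValueError("points are collinear; no circumcenter exists")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

# Example: a 3-4-5 right triangle with the right angle at B.
a, b, c = (0, 0), (3, 0), (3, 4)
o = circumcenter(a, b, c)
print(o)  # (1.5, 2.0) -- the midpoint of the hypotenuse AC
for p in (a, b, c):
    print(math.dist(o, p))  # all 2.5: OA = OB = OC, the circumradius

The same computation also answers the arc question in the worksheets: pick any three points on the arc, and their circumcenter is the center of the circle.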
In this class, students present their results, discuss and share among groups, and make the final conclusions for the Day 2 questions. Finally, if possible, the instructor should ask students to develop a geometric proof for each of the above questions, and remind them that many results from dynamic models do not constitute a proof.
In a triangle ABC, AB= 3 cm, BC= 4 cm, CA= 5 cm.
1) What kind of triangle is it? Why?
2) Suppose that O is the circumcenter of triangle ABC. The sum of OA, OB, and OC is ______.
1) In a acute triangle ABC, suppose that O is the point of circumcenter of triangle ABC, and the angle BAC is 65 degrees, then the angle BOC is ________ degrees.
2) In a triangle DEF, angle DEF is obtuse angle. Suppose O is the point of circumcenter of triangle DEF, and the angle DEF is 130 degrees, then the angle DOF is ________ degrees.
In a triangle ABC, let A' be the midpoint of BC, B' the midpoint of AC, and C' the midpoint of AB. Let O be the circumcenter of triangle ABC. Please explain why O is the orthocenter of triangle A'B'C'. (Hint: perpendicular lines)
There is an arc BCD which is part of a circle. Could you find the center of this circle and draw the other part of the circle? Explain your method. (Hint: three points form a triangle and determine a circle.)
1. Replaces traditional geometry teaching, in which geometry is taught by verbal description, with dynamic drawing.
2. Helps the teacher teach, replacing traditional instruction that uses blackboards and chalk to draw graphs.
3. Computers with sketchpad software not only allow students to manipulate geometric shapes to discover and explore geometric relationships, but also to verify possible results; they provide a creative outlet for students' ideas and enhance students' geometric intuition.
4. Facilitates the creation of a rich mathematical learning environment to assist students' geometric proofs and establish geometric concepts.
1. It cannot replace traditional logical geometric proof - many examples do not make a proof.
2. Students cannot get the full potential learning benefit from computers if the instructor does not offer appropriate direction and guidance. The instructor should also know what kind of computer-based learning environment is most likely to encourage and stimulate students' learning.
Nova was a high-power laser built at the Lawrence Livermore National Laboratory (LLNL) in 1984, which conducted advanced inertial confinement fusion (ICF) experiments until its dismantling in 1999. Nova was the first ICF experiment built with the intention of reaching "ignition", a chain reaction of nuclear fusion that releases a large amount of energy. Although Nova failed in this goal, the data it generated clearly defined the problem as being mostly a result of hydrodynamic (Rayleigh–Taylor) instability, leading to the design of the National Ignition Facility, Nova's successor. Nova also generated considerable amounts of data on high-density matter physics, regardless of the lack of ignition, which is useful both in fusion power and nuclear weapons research.
Inertial confinement fusion (ICF) devices use drivers to rapidly heat the outer layers of a target in order to compress it. The target is a small spherical pellet containing a few milligrams of fusion fuel, typically a mix of deuterium and tritium. The heat of the laser burns the surface of the pellet into a plasma, which explodes off the surface. The remaining portion of the target is driven inwards due to Newton's Third Law, eventually collapsing into a small point of very high density. The rapid blowoff also creates a shock wave that travels towards the center of the compressed fuel. When it reaches the center of the fuel and meets the shock from the other side of the target, the energy in the shock wave further heats and compresses the tiny volume around it. If the temperature and density of that small spot can be raised high enough, fusion reactions will occur.
The fusion reactions release high-energy particles, some of which (primarily alpha particles) collide with the high density fuel around it and slow down. This heats the fuel further, and can potentially cause that fuel to undergo fusion as well. Given the right overall conditions of the compressed fuel—high enough density and temperature—this heating process can result in a chain reaction, burning outward from the center where the shock wave started the reaction. This is a condition known as ignition, which can lead to a significant portion of the fuel in the target undergoing fusion, and the release of significant amounts of energy.
To date most ICF experiments have used lasers to heat the targets. Calculations show that the energy must be delivered quickly in order to compress the core before it disassembles, as well as creating a suitable shock wave. The energy must also be focused extremely evenly across the target's outer surface in order to collapse the fuel into a symmetric core. Although other "drivers" have been suggested, notably heavy ions driven in particle accelerators, lasers are currently the only devices with the right combination of features.
LLNL's history with the ICF program starts with physicist John Nuckolls, who predicted in 1972 that ignition could be achieved with laser energies of about 1 kJ, while "high gain" would require energies of around 1 MJ. Although this sounds very low-powered compared to modern machines, at the time it was just beyond the state of the art, and led to a number of programs to produce lasers in this power range.
Prior to the construction of Nova, LLNL had designed and built a series of ever-larger lasers that explored the problems of basic ICF design. LLNL was primarily interested in the Nd:glass laser, which, at the time, was one of a very few high-energy laser designs known. LLNL had decided early on to concentrate on glass lasers, while other facilities studied gas lasers using carbon dioxide (e.g. Antares laser, Los Alamos National Laboratory) or KrF (e.g. Nike laser, Naval Research Laboratory). Building large Nd:glass lasers had not been attempted before, and LLNL's early research focused primarily on how to make these devices.
One problem was the homogeneity of the beams. Even minor variations in intensity of the beams would result in "self-focusing" in the air and glass optics in a process known as Kerr lensing. The resulting beam included small "filaments" of extremely high light intensity, so high it would damage the glass optics of the device. This problem was solved in the Cyclops laser with the introduction of the spatial filtering technique. Cyclops was followed by the Argus laser of greater power, which explored the problems of controlling more than one beam and illuminating a target more evenly. All of this work culminated in the Shiva laser, a proof-of-concept design for a high power system that included 20 separate "laser amplifiers" that were directed around the target to illuminate it.
It was during experiments with Shiva that another serious unexpected problem appeared. The infrared light generated by the Nd:glass lasers was found to interact very strongly with the electrons in the plasma created during the initial heating through the process of stimulated Raman scattering. This process, referred to as "hot electron pre-heating", carried away a great amount of the laser's energy, and also caused the core of the target to heat before it reached maximum compression. This meant that much less energy was being deposited in the center of the collapse, both due to the reduction in implosion energy, as well as the outward force of the heated core. Although it was known that shorter wavelengths would reduce this problem, it had earlier been expected that the IR frequencies used in Shiva would be "short enough". This proved not to be the case.
A solution to this problem was explored in the form of efficient frequency multipliers, optical devices that combine several photons into one of higher energy, and thus frequency. These devices were quickly introduced and tested experimentally on the OMEGA laser and others, proving effective. Although the process is only about 50% efficient, and half the original laser power is lost, the resulting ultraviolet light couples much more efficiently to the target plasma and is much more effective in collapsing the target to high density.
With these solutions in hand, LLNL decided to build a device with the power needed to produce ignition conditions. Design started in the late 1970s, with construction following shortly after, starting with the testbed Novette laser to validate the basic beamline and frequency multiplier design. This was a time of repeated energy crises in the U.S., and funding was not difficult to find given the large amounts of money available for alternative energy and nuclear weapons research.
During the initial construction phase, Nuckolls found an error in his calculations, and an October 1979 review chaired by John Foster Jr. of TRW confirmed that there was no way Nova would reach ignition. The Nova design was then modified into a smaller design that added frequency conversion to 351 nm light, which would increase coupling efficiency. The "new Nova" emerged as a system with ten laser amplifiers, or beamlines. Each beamline consisted of a series of Nd:glass amplifiers separated by spatial filters and other optics for cleaning up the resulting beams. Although techniques for folding the beamlines were known as early as Shiva, they were not well developed at this point in time. Nova ended up with a single fold in its layout, and the laser bay containing the beamlines was 300 feet (91 m) long. To the casual observer it appears to contain twenty 300-foot (91 m) long beamlines, but due to the fold each of the ten is actually almost 600 feet (180 m) long in terms of optical path length.
Prior to firing, the Nd:glass amplifiers are first pumped with a series of xenon flash lamps surrounding them. Some of the light produced by the lamps is captured in the glass, leading to a population inversion that allows for amplification via stimulated emission. This process is quite inefficient, and only about 1 to 1.5% of the power fed into the lamps is actually turned into laser energy. In order to produce the sort of laser power required for Nova, the lamps had to be very large, fed power from a large bank of capacitors located under the laser bay. The flash also generates a large amount of heat which distorts the glass, requiring time for the lamps and glass to cool before they can be fired again. This limits Nova to about six firings a day at the maximum.
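The quoted efficiency gives a feel for the scale of that capacitor bank. A back-of-the-envelope R sketch (the 100 kJ output mentioned later in the text and the ~1.5% figure are the only inputs; the rest is rounding):

laser_out_kJ <- 100     # approximate IR output per shot
efficiency   <- 0.015   # ~1.5% lamp-to-laser conversion
laser_out_kJ / efficiency / 1000   # ~6.7 MJ of stored electrical energy per shot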
Once pumped and ready for firing, a small pulse of laser light is fed into the beamlines. The Nd:glass disks each dump additional power into the beam as it passes through them. After passing through a number of amplifiers the light pulse is "cleaned up" in a spatial filter before being fed into another series of amplifiers. At each stage additional optics were used to increase the diameter of the beam and allow the use of larger and larger amplifier disks. In total, Nova contained fifteen amplifiers and five filters of increasing size in the beamlines, with an option to add an additional amplifier on the last stage, although it is not clear if these were used in practice.
From there all ten beams pass into the experiment area at one end of the laser bay. Here a series of mirrors reflects the beams to impinge in the center of the bay from all angles. Optical devices in some of the paths slow the beams so that they all reach the center at the same time (within about a picosecond), as some of the beams have longer paths to the center than others. Frequency multipliers upconvert the light to green and blue (UV) just prior to entering the "target chamber". Nova is arranged so any remaining IR or green light is focused short of the center of the chamber.
The Nova laser as a whole was capable of delivering approximately 100 kilojoules of infrared light at 1054 nm, or 40-45 kilojoules of frequency-tripled light at 351 nm (the third harmonic of the Nd:glass fundamental line at 1054 nm), in a pulse duration of about 2 to 4 nanoseconds, and was thus capable of producing a UV pulse in the range of 16 trillion watts.
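That peak power follows directly from the quoted pulse figures; a one-line R check, using mid-range values from the quoted intervals:

energy_J <- 45e3     # ~45 kJ of 351 nm light
pulse_s  <- 2.8e-9   # within the quoted 2-4 ns range
energy_J / pulse_s   # ~1.6e13 W, i.e. roughly 16 trillion watts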
Fusion in Nova
Research on Nova was focused on the "indirect drive" approach, where the lasers shine on the inside surface of a thin metal foil, typically made of gold, lead, or another "high-Z" metal. When heated by the laser, the metal re-radiates this energy as diffuse x-rays, which are more efficient than UV light at compressing the fuel pellet. In order to emit x-rays, the metal must be heated to very high temperatures, which uses up a considerable amount of the laser energy. So while the compression is more efficient, the overall energy delivered to the target is much smaller. The point of the x-ray conversion is not to improve energy delivery but to "smooth" the energy profile; since the metal foil spreads out the heat somewhat, the anisotropies in the original laser are greatly reduced.
The foil shells, or "hohlraums", are generally formed as small open-ended cylinders, with the laser arranged to shine in the open ends at an oblique angle in order to strike the inner surface. In order to support the indirect drive research at Nova, a second experimental area was built "past" the main one, opposite the laser bay. The system was arranged to focus all ten beams into two sets of five each, which passed into this second area and then into either end of the target chamber, and from there into the hohlraums.
Confusingly, the indirect drive approach was not made widely public until 1993. Documents from the era published in general science magazines and similar material either gloss over the issue or imply that Nova was using the direct drive approach, without a hohlraum. The topic only became public during the design of NIF, by which point Nova was old news.
As had happened with the earlier Shiva, Nova failed to meet expectations in terms of fusion output. In this case the problem was tracked to instabilities that "mixed" the fuel during collapse and upset the formation and transmission of the shock wave. The maximum fusion yield on Nova was about 10^13 neutrons per shot. The problem was caused by Nova's inability to closely match the output energy of each of the beamlines, which meant that different areas of the pellet received different amounts of heating across its surface. This led to "hot spots" on the pellet which were imprinted into the imploding plasma, seeding Rayleigh–Taylor instabilities and thereby mixing the plasma so the center did not collapse uniformly.
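To put that neutron count in perspective, a rough R estimate (assuming D-T fuel, as described earlier, at 17.6 MeV per reaction; constants rounded) shows how far below breakeven this yield is:

neutrons         <- 1e13       # maximum yield per shot
MeV_per_reaction <- 17.6       # energy released per D-T fusion
J_per_MeV        <- 1.602e-13
neutrons * MeV_per_reaction * J_per_MeV   # ~28 J, versus tens of kJ of laser energy delivered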
Nevertheless, Nova remained a useful instrument even in its original form, and the main target chamber and beamlines were used for many years even after it was modified as outlined below. A number of different techniques for smoothing the beams were attempted over its lifetime, both to improve Nova as well as better understand NIF. These experiments added considerably not only to the understanding of ICF, but also to high-density physics in general, and even the evolution of the galaxy and supernovas.
Two beam
Shortly after completion of Nova, modifications were made to improve it as an experimental device. One problem was that the experimental chamber took a long time to refit for another "shot", longer than the time needed to cool down the lasers.
In order to improve utilization of the laser, a second experimental chamber was built "past" the original, with optics that combined the ten beamlines into two. Nova had been built up against the older Shiva buildings, with the two experimental chambers "back to back" and the beamlines extending outward from the center target areas. The Two Beam system was installed by passing the beamguides and related optics through the now unused Shiva experimental area and placing the smaller experimental chamber in Shiva's beam bay.
LMF and Nova Upgrade
Nova's partial success, combined with other experimental numbers, prompted the Department of Energy to request a custom military ICF facility they called the "Laboratory Microfusion Facility" (LMF), which could achieve fusion yields between 100 and 1000 MJ. Based on the LASNEX computer models, it was estimated that LMF would require a driver of about 10 MJ, in spite of nuclear tests that suggested a higher power. Building such a device was within the state of the art, but would be expensive, on the order of $1 billion. LLNL returned a design with a 5 MJ 350 nm (UV) driver laser that would be able to reach about 200 MJ yield, enough to meet the majority of the LMF goals. The program was estimated to cost about $600 million in FY 1989 dollars, plus an additional $250 million to upgrade it to a full 1000 MJ if needed, and would grow to well over $1 billion if LMF was to meet all of the goals the DOE asked for. Other labs also proposed their own LMF designs using other technologies.
Faced with this enormous project, in 1989/90 the National Academy of Sciences conducted a second review of the US ICF efforts on behalf of the US Congress. The report concluded that "considering the extrapolations required in target physics and driver performance, as well as the likely $1 billion cost, the committee believes that an LMF [i.e. a Laser Microfusion Facility with yields to one gigajoule] is too large a step to take directly from the present program." Their report suggested that the primary goal of the program in the short term should be resolving the various issues related to ignition, and that a full-scale LMF should not be attempted until those problems were resolved. The report was also critical of the gas laser experiments being carried out at LANL, and suggested that they, and similar projects at other labs, be dropped. The report accepted the LASNEX numbers and continued to approve an approach with laser energy around 10 MJ. Nevertheless, the authors were aware of the potential for higher energy requirements, noting: "Indeed, if it did turn out that a 100-MJ driver were required for ignition and gain, one would have to rethink the entire approach to, and rationale for, ICF."
In July 1992 LLNL responded to these suggestions with the Nova Upgrade, which would reuse the majority of the existing Nova facility, along with the adjacent Shiva facility. The resulting system would be much lower power than the LMF concept, with a driver of about 1 to 2 MJ. The new design included a number of features that advanced the state of the art in the driver section, including the multi-pass design in the main amplifiers, and 18 beamlines (up from 10) that were split into 288 "beamlets" as they entered the target area in order to improve the uniformity of illumination. The plans called for the installation of two main banks of laser beamlines, one in the existing Nova beamline room and the other in the older Shiva building next door, extending through its laser bay and target area into an upgraded Nova target area. The lasers would deliver about 500 TW in a 4 ns pulse. The upgrades were expected to allow the new Nova to produce fusion yields between 2 and 20 MJ. The 1992 estimates put construction costs at about $400 million, with construction taking place from 1995 to 1999.
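As a quick consistency check (a sketch using only the figures quoted above), the driver energy implied by those pulse numbers can be confirmed in R:

power_W <- 500e12   # 500 TW peak power, as quoted
pulse_s <- 4e-9     # 4 ns pulse
power_W * pulse_s   # 2e6 J = 2 MJ, consistent with the "1 to 2 MJ" driver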
For reasons that are not well recorded in the historical record, later in 1992 LLNL updated their Nova Upgrade proposal and stated that the existing Nova/Shiva buildings would no longer be able to contain the new system, and that a new building about three times as large would be needed. From then on the plans evolved into the current National Ignition Facility.
Petawatt
Starting in the late 1980s, a new method of creating very short but very high power laser pulses was developed, known as chirped pulse amplification, or CPA. Starting in 1992, LLNL staff modified one of Nova's existing arms to build an experimental CPA laser that produced up to 1.25 PW. Known simply as Petawatt, it operated until 1999, when Nova was dismantled to make way for NIF.
The basic amplification system used in Nova and other high-power lasers of its era was limited in terms of power density and pulse length. One problem was that the amplifier glass responded over a period of time, not instantaneously, so very short pulses would not be strongly amplified. Another problem was that the high power densities led to the same sorts of self-focusing problems that had caused trouble in earlier designs, but at such a magnitude that even measures like spatial filtering would not be enough; in fact, the power densities were high enough to cause filaments to form in air.
CPA avoids both of these problems by spreading out the laser pulse in time. It does this by reflecting a relatively multi-chromatic (as compared to most lasers) pulse off a pair of diffraction gratings, which spread it spatially into its different frequency components, essentially what a simple prism does with visible light. These individual frequencies travel different distances when reflected back into the beamline, resulting in the pulse being "stretched out" in time. This longer pulse is fed into the amplifiers as normal, which now have time to respond normally. After amplification the beams are sent into a second pair of gratings "in reverse" to recombine them into a single short pulse with high power. In order to avoid filamentation or damage to the optical elements, the entire end of the beamline is placed in a large vacuum chamber.
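The arithmetic behind the approach is simple. The R sketch below uses illustrative numbers (assumed for this example, not Petawatt's published specifications) to show how stretching divides the peak power the amplifiers must tolerate:

energy_J    <- 600       # assumed pulse energy
short_s     <- 0.5e-12   # ~0.5 ps recompressed pulse
stretched_s <- 3e-9      # stretched to ~3 ns for amplification
energy_J / short_s       # ~1.2e15 W (petawatt scale) after recompression
energy_J / stretched_s   # ~2e11 W while inside the amplifiers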
Although Petawatt was instrumental in advancing the practical basis for the concept of "fast ignition fusion", by the time it was operational as a proof-of-concept device, the decision to move ahead with NIF had already been taken. Further work on the fast ignition approach continues, and will potentially reach a level of development far in advance of NIF at HiPER, an experimental system under development in the European Union. If successful, HiPER should generate fusion energy over twice that of NIF, while requiring a laser system of less than one-quarter the power and one-tenth the cost. Fast ignition is one of the more promising approaches to fusion power.
"Death" of Nova
When Nova was being dismantled to make way for NIF, the target chamber was lent to France for temporary use during the development of Laser Megajoule, a system similar to NIF in many ways. This loan was controversial, as the only other operational laser at LLNL at the time, Beamlet (a single experimental beamline for NIF), had recently been sent to Sandia National Laboratory in New Mexico. This left LLNL with no large laser facility until NIF started operation, which was then estimated as being 2003 at the earliest. Work on NIF was not declared formally completed until March 31, 2009.
- "How NIF works", Lawrence Livermore National Laboratory. Retrieved on October 2, 2007.
- Per F. Peterson, "Inertial Fusion Energy: A Tutorial on the Technology and Economics", University of California, Berkeley, 1998. Retrieved on May 7, 2008.
- Per F. Peterson, "How IFE Targets Work", University of California, Berkeley, 1998. Retrieved on May 8, 2008.
- Per F. Peterson, "Drivers for Inertial Fusion Energy", University of California, Berkeley, 1998. Retrieved on May 8, 2008.
- Nuckolls et al., "Laser Compression of Matter to Super-High Densities: Thermonuclear (CTR) Applications", Nature Vol. 239, 1972, pp. 129
- John Lindl, "The Edward Teller Medal Lecture: The Evolution Toward Indirect Drive and Two Decades of Progress Toward ICF Ignition and Burn", 11th International Workshop on Laser Interaction and Related Plasma Phenomena, December 1994. Retrieved on May 7, 2008.
- "Building increasingly powerful lasers", Year of Physics, 2005, Lawrence Livermore National Laboratory
- J. A. Glaze, "Shiva: A 30 terawatt glass laser for fusion research", presented at the ANS Annual Meeting, San Diego, 18–23 June 1978
- "Empowering Light: Historical Accomplishments in Laser Research", Science & Technology Review, September 2002, pp. 20-29
- Matthew McKinzie and Christopher Paine, "When Peer Review Fails", NDRC. Retrieved on May 7, 2008.
- Ted Perry, Bruce Remington, "Nova Laser Experiments and Stockpile Stewardship", Science & Technology Review, September 1997, pp. 5-13
- "A Virtual Reality Tour of Nova", Lawrence Livermore National Laboratory– opening diagram shows the modified beamline arrangement.
- Moody et all, "Beam smoothing effects on stimulated Raman and Brillouin backscattering in laser-produced plasmas", Journal of Fusion Energy, Vol. 12, No. 3, September 1993, DOI 10.1007/BF01079677, pp. 323-330
- Dixit et all, "Random phase plates for beam smoothing on the Nova laser", Applied Optics, Vol. 32, Issue 14, pp. 2543-2554
- "Colossal Laser Headed for Scrap Heap", ScienceNOW, November 14, 1997
- "Nova Upgrade– A Proposed ICF Facility to Demonstrate Ignition and Gain", Lawrence Livermore National Laboratory ICF Program, July 1992
- "Review of the Department of Energy’s Inertial Confinement Fusion Program, Final Report", National Academy of Sciences
- Tobin, M.T et all, "Target area for Nova Upgrade: containing ignition and beyond", Fusion Engineering, 1991, pg. 650–655. Retrieved on May 7, 2008.
- An image of the design can be found in "Progress Toward Ignition and Burn Propagation in Interial Confinement Fusion", Physics Today, September 1992, p. 40
- Letter from Charles Curtis, Undersecretary of Energy, June 15, 1995
- Michael Perry, "The Amazing Power of the Petawatt", Science & Technology Review, March 2000, pp. 4-12
- Michael Perry, "Crossing the Petawatt Threshold", Science & Technology Review, December 1996, pp. 4-11
- "US sends Livermore laser target chamber to France on loan", Nature, Vol. 402, pp. 709-710, doi:10.1038/45336
- Kilkenny, J.D.; et al. (May 1992). "Recent Nova Experimental Results". Fusion Technology 21 (3): 1340–1343 Part 2A.
- Hammel, B.A. (December 2006). "The NIF Ignition Program: progress and planning". Plasma Physics and Controlled Fusion 48 (12B): B497–B506 Sp. Iss. SI. doi:10.1088/0741-3335/48/12B/S47.
- Coleman, L.W. (December 1987). "Recent Experiments With The Nova Laser". Journal of Fusion Energy 6 (4): 319–327. doi:10.1007/BF01052066. | http://en.wikipedia.org/wiki/Nova_(laser) | 13 |