By Allen C. Guelzo

What's in a name? A great deal, if it happens to be Stephen A. Douglas. A hundred and fifty years ago, Stephen Arnold Douglas was the most powerful politician in America. He had begun his political career as a hyper-loyal Andrew Jackson Democrat, snatched up one of Illinois' U.S. Senate seats in 1846, and rose from there to the heights of Congressional stardom by helping the great Henry Clay cobble together the Compromise of 1850 - which effectively averted civil war over the expansion of slavery into the West for another decade. No man was a more obvious presidential candidate than Douglas, and in 1860, he won his party's nomination to the presidency. That, unhappily for Douglas, was when the cheering stopped. He had made the magnificent mistake, when running for re-election to the Senate in 1858, of agreeing to debate the new Republican party's anti-slavery candidate, Abraham Lincoln. Although Douglas managed to win the election, Lincoln handled him so relentlessly, exposing the failure of Douglas's policies on slavery during the duo's seven open-air debates, that Lincoln emerged as a national contender, while Douglas lost legions of disappointed supporters. When Douglas faced Lincoln again in the presidential campaign of 1860, Douglas's party fractured into three pieces and guaranteed Lincoln's election by default. Douglas died only eight months later. Still, Douglas's name was revered by Illinois Democrats for a generation afterward. In the 1920s, Progressive Democrats adopted Douglas as a model of the moderate statesman who had tried to hold off the destructive fanaticism of both pro- and anti-slavery radicals. Biographers of Lincoln and Douglas alike - Albert Beveridge on the Lincoln side, Robert Johannsen on Douglas's side - praised Douglas for taking the practical road to compromise, unlike the ideologues whose fervor eventually triggered the Civil War. In 1963, Illinois governor Otto Kerner, who would later chair the famous National Advisory Commission on Civil Disorders, praised Douglas for creating the Illinois Central railroad and the University of Chicago, and in 1975, Chicago mayor Richard Daley, a life-long admirer of Douglas, founded a Stephen A. Douglas Association to promote the observance of Douglas's birthday. And, on top of it all, a residence hall - one of three named for famous Illinois politicians - was built at Eastern Illinois University with Douglas's name on it. Douglas Hall, a 200-bed residence hall built in the 1950s, may have been the most innocuous of all the memorializations of Stephen A. Douglas. But not after November 9th. That's when the EIU Faculty Senate, acting on a proposal from Associate Professor of English Christopher Hanlon, voted to remove Douglas's name from the residence hall. This was not the first time the "Douglas" in Douglas Hall had been challenged, but this time, Hanlon cast his objections as a statement on race. Stephen A. Douglas, Hanlon argued, "gave voice to a contemptuous view of African Americans, a view that has long since been recognized as incompatible with modern American democracy." And it is true that Douglas supported the Fugitive Slave Law of 1850 (it was part of the Compromise), backed legislation which tore up laws banning the expansion of slavery into the Western territories, and, in his debates with Lincoln, spoke of blacks in terms so demeaning they would make a Klansman blush. Douglas addressed Illinoisans in 1858 with the ringing affirmation that "this government of ours is founded on the white basis."
During the debates with Lincoln, Douglas made a specialty of race-baiting in the most foul-mouthed fashion: "I ask you, are you in favor of conferring upon the negro the rights and privileges of citizenship? Do you desire to strike out of our state's constitution that clause which keeps slaves and free negroes out of the state, and allow the free negroes to flow in, and cover your prairies with black settlements? Do you desire to turn this beautiful state into a free negro colony in order that when Missouri abolishes slavery, she can send one hundred thousand emancipated slaves into Illinois to become citizens and voters on an equality with yourselves? If you desire negro citizenship, if you desire to allow them to come into the state and settle with the white man, if you desire them to vote on an equality with yourselves, and to make them eligible to office, to serve on juries, and to judge your rights, then support Mr. Lincoln and the Black Republican party, who are in favor of the citizenship of the negro." And although he tried to distract notice from it, Douglas was even the legal owner of slaves, who had come to him through the estate of his father-in-law and whom he managed as a trust for his two under-age sons. It is easy to read Hanlon's charge that Douglas "bears a dishonorable record of public service and is hence undeserving of public acclaim and honor" as simply another sanctimonious exercise in air-brushing the racially insensitive and diversity-intolerant out of college and university histories. And this has provoked some equally predictable push-back from historians like EIU's Mark Summers, who argues that "trying to find historical actors who fully abided my own moral judgments was a fruitless exercise... because our world today is too different from the world occupied by predecessors who spoke and acted in the past." Douglas was, in other words, a man of his time, when white supremacist attitudes were actually mainstream, and should not be judged by "presentist" attitudes. In the case of Stephen A. Douglas, however, I'm on the side of removing the name, and not just because of race. Douglas's entire policy toward race and slavery arose from an even more toxic assumption, which Douglas deified as the principle of "popular sovereignty." In Douglas's dictionary, democracy is an end in itself, and the democratic process amounts entirely to consulting what a majority of the people want at any given time. If the voters wanted to legalize slavery, so be it; if not, that was up to them, too, so long as they did not attempt to force this conviction on others. "The principle of self-government is, that each community shall settle this question for itself... and we have no right to complain, either in the North or the South, whichever they do." Douglas liked to speak of this as an example of what he called "diversity"; but in the context of the crisis over slavery in the 1850s, what it meant in practical terms was that "If Kansas wants a slave-State constitution she has a right to it.... I do not care whether it is voted down or voted up." Lincoln, by contrast, believed that democracy was a means, not an end in itself - a means toward realizing in the fullest fashion the natural rights of life, liberty, and the pursuit of happiness which Nature and Nature's God had hard-wired into every human being. There was a line, drawn by natural right, beyond which no majority and no democracy could or should go, and on the far side of that line was slavery.
Douglas's politics were the politics of the pitchfork, devoid of moral principle and respecting nothing except force. Lincoln believed that "moral principle is all that unites us," and that Douglas's popular sovereignty was merely a kind face on mob rule. Wed that notion to white supremacy, and you get the real offense of Stephen A. Douglas. In the long run, the EIU resolution is little more than a pin-prick. It would have been far better if they had gone to the root of Douglas's problem, which was his utter political amorality. That's what needs banishing, not only from the addresses of residence halls, but from our halls of representatives, too.

Allen C. Guelzo, a prominent Lincoln scholar, is William Garwood Visiting Professor of Politics at Princeton.
The genus Solenopsis includes both the "fire ants", known for their aggressive nature and potent sting, and the minute "thief ants", many of which are lestobiotic subterranean or arboreal species that are rarely collected. Many species may be polygynous. Generic-level identification of Solenopsis is relatively straightforward, although sizes are greatly variable, ranging from approximately 1.0 mm to over 4.0 mm. The genus can be basically characterized by the following: mandible with four teeth (usually); bicarinate clypeus with 0-5 teeth; median part of clypeus with a pair of longitudinal carinae medially or at the lateral edges; 10-segmented antennae that terminate in a distinctive 2-segmented club; overall shiny appearance and general lack of, or reduced, sculpture (when present, usually restricted to rugulae or striae on the head, alitrunk, petiole, and postpetiole); lack of propodeal spines or other protuberances on the alitrunk; well-developed petiole and postpetiole; and a well-developed sting. Workers are either polymorphic (especially in the fire ant group) or monomorphic (especially thief ants). The thief ant group shares these characteristics, but workers are minute (usually under 2.0 mm in total length), usually have minute eyes (usually with only 1-5 ommatidia, rarely more than 18, except for S. globularia in our region), and have minor funicular segments 2-3 typically wider than long (usually longer than wide in the fire ant group).
The second-generation Titans Prometheus (the son of Iapetus and a cousin of Zeus) and Epimetheus were initially not punished, unlike relatives such as Atlas who had taken part in the Titanomachy, the war against Zeus, the father of all gods. Prometheus and his brother Epimetheus ("gifted with afterthought") were given the task of forming man from water and earth (Prometheus shaped the figure while Athena breathed life into it), which Prometheus did, but in the process he became fonder of men than Zeus had anticipated. Zeus did not share Prometheus' feelings and wanted to keep men from having power, especially the power of fire. Fire symbolizes knowledge, and knowledge and intelligence, together with arrogance ("Blessed are the poor in spirit"), would probably make Zeus and the other gods obsolete. Prometheus cared more for man than for the wrath of the increasingly powerful and autocratic Zeus, so he stole fire from Zeus' lightning, concealed it in a hollow stalk of fennel, and brought it to man. He also stole skills from Hephaestus and Athena to give to man. Zeus reacted by presenting to man Pandora (also called Anesidora, "sender up of gifts"), the first woman. While Prometheus may have crafted man, woman was a different sort of creature. She came from the forge of Hephaestus, beautiful as a goddess and beguiling. Note that she is not like her Jewish counterpart: Eve was created to soothe Adam's loneliness, and to help him as a partner. Pandora, the first woman in Greek myth, was created as a punishment to mankind. Her name does not mean "giver of all gifts" - rather "she to whom all gifts were given" - the gods gave her beauty (Aphrodite) and skill (Athena), while Hermes gave her a doglike (bitch-like?) mind and a thieving nature. "All (pantes) the gods gave her gifts (dora), a sorrow to men who live on bread." Zeus wanted to punish mankind for Prometheus' theft of fire - he decided to give them a "beautiful evil" (kalon kakon) "to pay for fire" (anti pyros). Hephaestus makes the woman out of earth and water, to look like a goddess. Thus Pandora - and through her all women, who are her descendants - has a beautiful exterior, but is worthless inside. Prometheus, whose name means "forethought," or "he who thinks ahead," is a figure whom Steiner refers to as the Greek Lucifer. Prometheus awakened a consciousness in humans that was too dangerous in the eyes of Zeus, so Zeus had Prometheus chained to a rock in the Caucasus mountains. But Prometheus is patient, for he knows a secret that is not known to Zeus. In the future, Zeus will lie with a mortal woman, Io, and she will give birth to a son, who will start a line of descent leading to the birth of Hercules or Heracles, meaning "he who is called by Hera." This great hero, whom Steiner indicates is a portent of Christ Jesus, will grow up to succeed Zeus in his position of authority as Law-giver in the heavens. Heracles will also kill the vulture that eats Prometheus' liver, and then liberate the great Greek Lucifer. (Tom Mellett, "Albert Einstein's Theory of Relativity as Rudolf Steiner's Final 'Riddle of Philosophy'," Journal for Anthroposophy, Number 60, Spring 1995, pp. 51-63.) Zeus presented her as a bride to Prometheus' brother Epimetheus (whom Prometheus, expecting retribution for his audacity, had warned against accepting gifts from Zeus), along with a box that they were instructed to keep closed. Epimetheus was dazzled by Pandora and forgot the advice of his prescient brother. Unfortunately, one day while her husband was away, Pandora opened the box.
In the process, she unleashed all the evils now known to man, retaining only Hope. No longer could man loll about all day; he would have to work and would succumb to illnesses. So for the Greeks, woman was created as a punishment for man. What is the reason for the creation of woman according to the Bible? Why did God produce male and female versions of all the animals, but initially only a man, Adam? Why did he not also create Eve separately, and why did he use material from Adam? Both stories, the Greek and the Christian version, are mythological stories to explain the supposed superiority of men over women. The Greek version says that people were created by throwing stones; in the Bible, Eve and Adam had two sons - but how were their children produced? Her act ends the Golden Age. She serves the same mythic function as Eve in Genesis. In both the Bible and Greek myth, humanity pays a price for knowledge: loss of innocence, peace and loss of paradise. "While saving him from mental darkness, Prometheus brought to man all the tortures which accompany self-consciousness: the knowledge of his responsibility to the whole of nature; the painful results of all wrong choices made in the past, since free-will and the power of choice go hand in hand with self-consciousness; all of the sorrows and sufferings -- physical, mental and moral -- to which thinking man is heir. Prometheus accepted these tortures as inevitable under the Law, knowing that the soul can develop only through its own experience, willing to pay the price for every experience gained." (THEOSOPHY, Vol. 27, No. 12, October 1939.) (Deucalion and Pyrrha, Metamorphoses Bk I: 367-415, Solis.) Eventually Hercules rescued Prometheus, and Zeus and the Titan were reconciled. Zeus became angry enough with conditions on earth that he summoned the other gods to a conference at his heavenly palace. Zeus (like the Christian God) decided to destroy humankind and to provide the earth with a new race of mortals more worthy of life and more reverent to him. Zeus feared that the destruction of humankind by fire might set heaven itself aflame, so he called for assistance from a god of the sea, and humankind was instead swept away by a great flood. The great flood is a common mythological story in many cultures - for example, the myth of Noah in the Old Testament - and probably there was once such a great flood in reality, an event that was incorporated into various mythological stories. Meanwhile, Prometheus had sired the human Deucalion (some say as his son with Clymene or Celaeno), one of the noble couple whom Zeus had spared when he caused the creatures of the earth to be destroyed by a flood. Deucalion was married to his cousin Pyrrha, the daughter of Epimetheus and Pandora. During the flood, Deucalion and Pyrrha had stayed safe on a boat. When all the other, evil humans had been destroyed, Zeus caused the waters to recede so that Deucalion and Pyrrha could come to land on Mount Parnassus. While they had each other for company, and could produce new children, they were lonely and sought help from the oracle of Themis. Deucalion was told to "toss the bones of his mighty mother over his shoulder," which he and Pyrrha understood to mean Gaia, the mother of all living things, the bones being rocks. Following the advice, they threw stones over their shoulders. From those thrown by Deucalion sprang men, and from those thrown by Pyrrha came women. That is why people are called laoi, from laas, "a stone."
[Apollodorus 1.7.2] "...I remember in Plutarch's works, what is worth relating that I read there, that by the Pigeon sent forth of the Ark, in Deucalions flood, was shown, that the waters were sunk down, and the storms past...." Then they had their own child, a boy whom they called Hellen, after whom the Greeks were named Hellenes. Other children were Thyia and the Leleges, all forming the various groups of Greeks. For Greeks, it is left as an exercise to find the group to which they belong and their corresponding root! For others, i.e. barbarians :-), see the corresponding myth (Noah, ..., etc.). From myth back to history: between 1900 and 1600 BC the Greeks or Hellenes, a branch of the Indo-European-speaking people, were simple nomadic herdsmen. They came from east of the Caspian Sea and entered the Greek peninsula from the north in small groups. The first invaders were the fair-haired Achaeans of whom Homer wrote. The Dorians came 3-4 centuries later and subjugated their Achaean kinsmen. Other tribes, the Aeolians and the Ionians, found homes chiefly on the islands in the Aegean Sea and on the coast of Asia Minor. The land that these tribes invaded was the site of a well-developed civilization. The people who lived there had cities and palaces. They used gold and bronze and made pottery and paintings. The Greek invaders were still in the barbarian stage. They plundered and destroyed the Aegean cities. Gradually, as they settled and intermarried with the people they conquered, they absorbed some of the Aegean culture. As for the etymology of Prometheus, the name could be derived from Pramanthas, a member of the Vedic family of fire-worshipping priests of the fire god Agni. Adam: in Greek this word is compounded of the four initial letters of the cardinal quarters: Anatole (east), Dysis (west), Arktos (north), and Mesembria (south). The Hebrew word ADM forms the anagram of A[dam], D[avid], M[essiah]. Adam, how made: God created the body of Adam of Salzal, i.e. dry, unbaked clay, and left it forty nights without a soul. The clay was collected by Azrael from the four quarters of the earth, and God, to show His approval of Azrael's choice, constituted him the angel of death. —Rabadan. Adam, Eve, and the Serpent: after the fall, Adam was placed on mount Vassem in the east; Eve was banished to Djidda (now Gedda, on the Arabian coast); and the Serpent was exiled to the coast of Eblehh. After the lapse of 100 years, Adam rejoined Eve on mount Arafaith [place of Remembrance], near Mecca. —D'Ohsson. Death of Adam: Adam died on Friday, April 7, at the age of 930 years. Michael swathed his body, and Gabriel discharged the funeral rites. The body was buried at Ghar'ul-Kenz [the grotto of treasure], which overlooks Mecca. Apollodorus, The Library, Sir James G. Frazer (transl.), Harvard University Press, Cambridge, 1921, 1976.
MDH Fact Sheet/Brochure

Why Does My Water Smell Like Rotten Eggs?
HYDROGEN SULFIDE AND SULFUR BACTERIA IN WELL WATER

Hydrogen sulfide gas (H2S) can occur in wells anywhere in Minnesota, and gives the water a characteristic "rotten egg" taste or odor. This brochure provides basic information about hydrogen sulfide gas and sulfur bacteria and discusses actions that you can take to minimize their effects.

What are the sources of hydrogen sulfide in well water and the water distribution system?

Hydrogen sulfide gas can result from a number of different sources. It can occur naturally in groundwater. It can be produced by certain "sulfur bacteria" in the groundwater, in the well, or in the water distribution system. It can also be produced by sulfur bacteria or chemical reactions inside water heaters. In rare instances, it can result from pollution. The source of the gas is important when considering treatment options.

Are sulfur bacteria or hydrogen sulfide harmful?

In most cases, the rotten egg smell does not relate to the sanitary quality of the water. However, in rare instances the gas may result from sewage or other pollution. It is a good idea to have the well tested for coliform bacteria and nitrate, the standard sanitary tests. Sulfur bacteria are not harmful, but hydrogen sulfide gas in the air can be hazardous at high levels. It is important to take steps to remove the gas from the water, or vent the gas to the atmosphere so that it will not collect in low-lying spaces, such as well pits and basements, or enclosed spaces, such as well houses. Only qualified people who have received special training and use proper safety procedures should enter a well pit or other enclosed space where hydrogen sulfide gas may be present.

Are there other problems associated with sulfur bacteria or hydrogen sulfide?

Yes. Sulfur bacteria produce a slime and can promote the growth of other bacteria, such as iron bacteria. The slime can clog wells, plumbing, and irrigation systems. Bacterial slime may be white, grey, black, or reddish brown if associated with iron bacteria. Hydrogen sulfide gas in water can cause black stains on silverware and plumbing fixtures. It can also corrode pipes and other metal components of the water distribution system.

What causes hydrogen sulfide gas to form in groundwater?

Decay of organic matter, such as vegetation, or chemical reactions with some sulfur-containing minerals in the soil and rock may naturally create hydrogen sulfide gas in groundwater. As groundwater moves through soil and rock formations containing minerals of sulfate, some of these minerals dissolve in the water. A unique group of bacteria, called "sulfur bacteria" or "sulfate-reducing bacteria," can change sulfate and other sulfur-containing compounds, including natural organic materials, to hydrogen sulfide gas.

How is hydrogen sulfide gas produced in a water heater?

A water heater can provide an ideal environment for the conversion of sulfate to hydrogen sulfide gas. The water heater can produce hydrogen sulfide gas in two ways - creating a warm environment where sulfur bacteria can live, and sustaining a reaction between sulfate in the water and the water heater anode. A water heater usually contains a metal rod called an "anode," which is installed to reduce corrosion of the water heater tank. The anode is usually made of magnesium metal, which can supply electrons that aid in the conversion of sulfate to hydrogen sulfide gas. The anode is 1/2 to 3/4 inches in diameter and 30 to 40 inches long.
How can I find the source of a hydrogen sulfide problem, and what can I do to eliminate it?

The odor of hydrogen sulfide gas can be detected in water at a very low level. Smell the water coming out of the hot and cold water faucets, and determine which faucets have the odor. The "rotten egg" smell will often be more noticeable from the hot water because more of the gas is vaporized. Your sense of smell becomes dulled quickly, so the best time to check is after you have been away from your home for a few hours. You can also have the water tested for hydrogen sulfide, sulfate, sulfur bacteria, and iron bacteria at an environmental testing laboratory. The cost of testing for hydrogen sulfide ranges from $20 to $50, depending on the type of test.

- If the smell is only from the hot water faucet, the problem is likely to be in the water heater.
- If the smell is in both the hot and cold faucets, but only in the water treated by a water softener and not in the untreated water, the problem is likely to be sulfur bacteria in the water softener.
- If the smell is strong when the water in both the hot and cold faucets is first turned on, and it diminishes or goes away after the water has run, or if the smell varies through time, the problem is likely to be sulfur bacteria in the well or distribution system.
- If the smell is strong when the water in both the hot and cold faucets is first turned on, and is more or less constant and persists with use, the problem is likely to be hydrogen sulfide gas in the groundwater.

What can I do about a problem water heater?

Unless you are very familiar with the operation and maintenance of the water heater, you should contact a water system professional, such as a plumber, to do the following:

- Replace or remove the magnesium anode. Many water heaters have a magnesium anode, which is attached to a plug located on top of the water heater. It can be removed by turning off the water, releasing the pressure from the water heater, and unscrewing the plug. Be sure to plug the hole. Removal of the anode, however, may significantly decrease the life of the water heater. You may wish to consult with a reputable water heater dealer to determine if a replacement anode made of a different material, such as aluminum, can be installed. A replacement anode may provide corrosion protection without contributing to the production of hydrogen sulfide gas.
- Disinfect and flush the water heater with a chlorine bleach solution. Chlorination can kill sulfur bacteria, if done properly. If all bacteria are not destroyed by chlorination, the problem may return within a few weeks.
- Increase the water heater temperature to 160 degrees Fahrenheit (71 degrees Celsius) for several hours. This will destroy the sulfur bacteria. Flushing to remove the dead bacteria after treatment should control the odor problem. CAUTION: Increasing the water heater temperature can be dangerous. Before proceeding, consult with the manufacturer or dealer regarding an operable pressure relief valve, and for other recommendations. Be sure to lower the thermostat setting and make certain the water temperature is reduced following treatment to prevent injury from scalding hot water and to avoid high energy costs.

What if sulfur bacteria are present in the well, the water distribution system, or the water softener?

- Have the well and distribution system disinfected by flushing with a strong chlorine solution (shock chlorination), as indicated in the "Well Disinfection" fact sheet from the Minnesota Department of Health (MDH).
Sulfur bacteria can be difficult to remove once established in a well. Physical scrubbing of the well casing, use of special treatment chemicals, and agitation of the water may be necessary prior to chlorination to remove the bacteria, particularly if they are associated with another type of bacteria known as "iron bacteria." Contact a licensed well contractor or an MDH well specialist for details.

- If the bacteria are in water treatment devices, such as a water softener, contact the manufacturer, the installer, or the MDH for information on the procedure for disinfecting the treatment device.

What if hydrogen sulfide gas is in the groundwater?

The problem may only be eliminated by drilling a well into a different formation capable of producing water that is free of hydrogen sulfide gas, or by connecting to an alternate water source, if available. However, there are several options available for treatment of water with hydrogen sulfide gas.

- Install an activated carbon filter. This option is effective only for low hydrogen sulfide levels, usually less than 1 milligram per liter (mg/L). The gas is trapped by the carbon until the filter is saturated. Since the carbon filter can remove substances in addition to hydrogen sulfide gas, it is difficult to predict its service life. Some large carbon filters have been known to last for years, while some small filters may last for only weeks or even days.
- Install an oxidizing filter, such as a "manganese greensand" filter. This option is effective for hydrogen sulfide levels up to about 6 mg/L. Manganese greensand filters are often used to treat iron problems in water. The device consists of manganese greensand media, which is sand coated with manganese dioxide. The hydrogen sulfide gas in the water is changed to tiny particles of sulfur as it passes through the filter. The filter must be periodically regenerated, using potassium permanganate, before the capacity of the greensand is exhausted.
- Install an oxidation-filtration system. This option is effective for hydrogen sulfide levels up to and exceeding 6 mg/L. These systems utilize a chemical feed pump to inject an oxidizing chemical, such as chlorine, into the water supply line prior to a storage or mixing tank. When sufficient contact time is allowed, the oxidizing chemical changes the hydrogen sulfide to sulfur, which is then removed by a particulate filter, such as a manganese greensand filter. Excess chlorine can be removed by activated carbon filtration.

Other related references available from MDH include: Iron Bacteria in Well Water; Sulfate in Well Water; Well Owner's Handbook.

If you have any questions, please contact a licensed well contractor, a reputable water treatment company, or a well specialist at one of the following offices of the MDH:

Minnesota Department of Health
Well Management Section
PO Box 64975
St. Paul, Minnesota 55164-0975

Source: Minnesota Department of Health Fact Sheet/Brochure "Why Does My Water Smell Like Rotten Eggs? Hydrogen Sulfide and Sulfur Bacteria in Well Water"
How would our lives be different if we could only buy foods native to New York State? This and similar questions are the focus of a museum-wide quest where students work in teams to gather data about the things they use each day. After collecting the data, each team creates a graphic representation of its findings and leads a discussion about interdependence around the globe. Please divide your students into three different research teams and assign each team at least one chaperon or teacher. Each team will be going to a separate area of the museum to gather data: the “foods” team will work in Super Kids Market; the “fashions” team in the museum’s collections; and the “fads” team in TimeLab. After collecting data, each team will prepare a brief presentation to share its findings. All three groups will gather together for the presentations.

Lesson extensions for before or after your visit

The following activities are designed for your class to enjoy before or after your museum visit. Familiarizing students with the lesson concepts can enrich your museum experience. How do the choices we make reflect the concept of globalization?

Review maps of the United States and the world prior to your visit. Here’s one way to do that: Post the four cardinal directions in your room: North, South, East, and West. Give each student an index card and the instruction to write the name of one of the 50 states on the card. When everyone is ready, ask students to stand where they think their state would be if the room were the U.S. Allow students to make adjustments to their positions after talking with each other. Provide a real map for students to use as a reference if needed. Once in position, ask students to describe their thinking in deciding where to stand. Do the same thing using continents, oceans, and countries with a map of the world.

Ask students to spend one week keeping a “globalization journal.” They should record each piece of evidence they see to suggest that globalization is occurring. This evidence may appear in items they have at home or have seen at stores; things they’ve heard about in news stories; music they listen to; food they eat at restaurants; holiday customs their family has adopted; or other sources. Some examples of what they might find include:
- imported cheese in the grocery store
- imported CDs in a music store
- a television program produced in another country
- a television news story about international business or another topic related to globalization

Post a large map of the world in the classroom. Invite students to collect objects and images of things they like that represent different countries around the world. Post the objects and images on the map and use the ongoing project as a springboard for discussion each week. Have students generate their own questions about the data they collect and invite them to do research to further their understanding of globalization.
Our Celtic Family

Who or what is threatening the village? Is it the Romans? Wild animals? The Vikings? Or is it other Celts?

The Celts used this weapon against other Celts and wild animals. The Romans came to Britain some time after the birth of Christ. The Vikings came to Britain centuries later still. A spear was a good weapon against wolves, bears and wild boars. These were a threat to the people and animals that lived in the village.

© the National Museum of Wales
Advance decision, or living will

An advance decision, more commonly known as a living will, allows you to make important choices to be carried out when end-of-life medical decisions have to be taken. Jessica Tomlin describes the benefits.

What is an advance decision?

An advance decision - formerly known as a 'living will' - is a document that enables health care professionals to know your treatment wishes should you no longer be able to communicate them. An advance decision is also known as an advance directive. The most relevant feature of an advance decision is the section that enables you to state that you wish to refuse treatment, including life-sustaining treatment, should you lose the capacity to make these decisions in the future. Compassion in Dying have produced an advance decision document, including very helpful guidance notes. My Last Song suggest you download the form. Before you fill it in, discuss your decisions with your GP and close family members.

Why make an advance decision?

There are many benefits associated with making an advance decision, or living will:

- Taking control: Advance decisions give you more control over your end-of-life care and treatment;
- Peace of mind: Advance decisions provide you and your family with the peace of mind that your wishes will be respected and that, in certain circumstances, your life will not be prolonged against your wishes;
- Legally binding: Advance decisions were given statutory force under the Mental Capacity Act in October 2007, so that any decision to refuse treatment is legally binding;
- Better decision making: Making an advance decision is a good opportunity to discuss your wishes with close family and your doctor.

Limitations of advance decisions

An advance decision can't be used to:

- ask for your life to be ended;
- force doctors to act against their professional judgement;
- nominate someone else to decide about treatment on your behalf.

As with advance statements, bear in mind that new drugs or treatments may be introduced in the future, so you may wish to allow for new treatments even if refusing a current one.

Who can make an advance decision?

Anyone who is mentally competent and over 18 can make an advance decision. You should review your advance decision regularly - perhaps every couple of years - to check that you are still happy with what it contains. You may want to change certain treatment requests or change your mind about refusing treatment.

Who should have a copy?

Five copies may seem a lot, but as with a will, it is vital that this document is found as soon as it is required, to ensure your wishes are carried out. These are the people you should give a copy to:

- One copy to your GP;
- One copy to a trusted friend;
- One copy to a trusted family member;
- One copy to your solicitor;
- One copy to keep yourself.

You can also carry a credit-card-sized card stating that you have an advance decision. Consider keeping a digital copy in your Lifebox. You will be the only person able to access it, and can then update it if required. You can give your second key holder permission to open your Lifebox when you feel you are losing the capacity to make cohesive decisions about your end-of-life care. Remember to replace old copies with new copies if you update your advance decision.

An advance decision enables you to state the scenarios in which you would or would not wish to receive further treatment.
- if you are terminally ill with no reasonable prospect of recovery;
- if you suffer a serious mental impairment; or
- if you are persistently unconscious for X weeks.

In a Pro-Choice advance decision - which also lets you state which treatments you do consent to - you can also state any spiritual and personal wishes that you wish to be taken into account, although this section is purely advisory and not legally binding.
HIGHER EDUCATION AND EE

On college campuses across North America, young people with passion for environmental causes are taking action to make the world more sustainable. How do we turn these motivated students into future environmental educators? NAAEE's Guidelines for the Preparation and Professional Development of Environmental Educators provide the basis for these campus initiatives.

Pre-12 Teacher Preparation

NAAEE has partnered with the National Council for Accreditation of Teacher Education (NCATE), which accredits more than half of the 1,200 or so colleges of teacher education in the U.S. How does this impact universities?

• Sixty percent of all the colleges and universities that certify teachers are accredited by NCATE. NAAEE's partnership with NCATE demonstrates acceptance of EE as an important part of formal education training. As teachers are better trained, they will be more comfortable teaching about the environment, particularly in interdisciplinary teams.

"Adding NAAEE standards to the NCATE protocols will encourage teacher education programs to take environmental education seriously. The issues facing our society require teachers that are prepared to teach about the environment, based upon the standards of our profession." - Dean of the College of Education, Western Kentucky University

Training in the EE Standards is offered each year at the NAAEE annual conference.

Nonformal Educator Preparation - Departments other than the College of Education also offer training for future environmental educators. EE providers lead activities for children and adults at nonformal educational institutions such as nature centers, zoos, museums, and parks. They develop curriculum materials and administer national, state, and local community EE programs. They work in corporate sustainability departments, teaching employees and customers about the environment, or in media, reaching millions of readers and viewers. Regardless of the setting, NAAEE's Guidelines for the Preparation and Professional Development of Environmental Educators outline the experiences and learning that will help them deliver their messages in ways that effectively foster environmental literacy. NAAEE is drafting EE Standards to provide recognition to these programs through a mechanism known as a Certificate of Distinction.
Recovery is a process, beginning with diagnosis and eventually moving into successful management of your illness. Successful recovery involves learning about your illness and the treatments available, empowering yourself through the support of peers and family members, and finally moving to a point where you take action to manage your own illness by helping others.

Untreated Mental Illness: A Needless Human Tragedy

Severe mental illnesses are treatable disorders of the brain. Left untreated, however, they are among the most disabling and destructive illnesses known to humankind. Millions of Americans struggling with severe mental illnesses, such as schizophrenia, bipolar disorder, and major depression, know only too well the personal costs of these debilitating illnesses. Stigma, shame, discrimination, unemployment, homelessness, criminalization, social isolation, poverty, and premature death mark the lives of most individuals with the most severe and persistent mental illnesses.

Mental Illness Recovery: A Reality Within Our Grasp

The real tragedy of mental illness in this country is that we know how to put things right. We know how to give people back their lives, to give them back their self-respect, to help them become contributing members of our society. NAMI's In Our Own Voice, a live presentation by consumers, offers living proof that recovery from mental illness is an ongoing reality. Science has greatly expanded our understanding and treatment of severe mental illnesses. Once forgotten in the back wards of mental institutions, individuals with brain disorders have a real chance at reclaiming full, productive lives, but only if they have access to the treatments, services, and programs so vital to recovery.

- Newer classes of medications can better treat individuals with severe mental illnesses and with far fewer side effects. Eighty percent of those suffering from bipolar disorder and 65 percent of those with major depression respond quickly to treatment; additionally, 60 percent of those with schizophrenia can be relieved of acute symptoms with proper medication.
- Assertive community treatment, a proven model treatment program that provides round-the-clock support to individuals with the most severe and persistent mental illnesses, significantly reduces hospitalizations, incarceration, and homelessness, and increases employment, decent housing, and quality of life.
- The involvement of consumers and family members in all aspects of planning, organizing, financing, and implementing service-delivery systems results in more responsiveness and accountability, and far fewer grievances.
Posted: Tue Aug 04, 2009 2:40 pm | Post subject: Immune Responses Jolted into Action by Nanohorns

The immune response triggered by carbon nanotube-like structures could be harnessed to help treat infectious diseases and cancers, say researchers. The way tiny structures like nanotubes can trigger sometimes severe immune reactions has troubled researchers trying to use them as vehicles to deliver drugs inside the body in a targeted way. White blood cells can efficiently detect and capture nanostructures, so much research is focused on allowing nanotubes and similar structures to pass unmolested in the body. But a French-Italian research team plans to use nanohorns, a cone-shaped variety of carbon nanotubes, to deliberately provoke the immune system. They think that the usually unwelcome immune response could kick-start the body into fighting a disease or cancer more effectively. To test their theory, Alberto Bianco and Hélène Dumortier at the CNRS Institute in Strasbourg, France, in collaboration with Maurizio Prato at the University of Trieste, Italy, gave carbon nanohorns to mouse white blood cells in a Petri dish. The macrophage cells' job is to swallow foreign particles. After 24 hours, most of the macrophages had swallowed some nanohorns. But they had also begun to release reactive oxygen compounds and other small molecules that signal to other parts of the immune system to become more active. The researchers think they could tune that cellular distress call to a particular disease or cancer by filling the interior of nanohorns with particular antigens, like ice cream filling a cone. "The nanohorns would deliver the antigen to the macrophages while also triggering a cascade of pro-inflammatory effects," Dumortier says. "This process should initiate an antigen-specific immune response." "There is still a long way to go before this interesting approach might become safe and effective," says Ruth Duncan at Cardiff University, UK. "Safety would ultimately depend on proposed dose, the frequency of dose and the route of administration," she says. Dumortier agrees more work is needed, but adds that the results so far suggest that nanohorns are less toxic to cells than normal nanotubes can be. "No sign of cell death was visible upon three days of macrophage culture in the presence of nanohorns," Dumortier says. Recent headline-grabbing results suggest that nanotubes much longer than they are wide can cause similar inflammation to asbestos. But nanohorns do not take on such proportions and so would not be expected to have such an effect.

Journal reference: Advanced Materials (DOI: 10.1002/adma.200702753)

Source: New Scientist
In this study, Ni and Cu nanowire arrays and Ni/Cu superlattice nanowire arrays are fabricated using standard techniques such as electrochemical deposition of metals into porous anodic alumina oxide templates having pore diameters of about 50 nm. We perform optical measurements on these nanowire array structures. Optical reflectance (OR) of the as-prepared samples is recorded using an imaging spectrometer in the wavelength range from 400 to 2,000 nm (i.e., from the visible to the near-infrared bandwidth). The measurements are carried out at temperatures set to 4.2, 70, 150, and 200 K and at room temperature. We find that the intensity of the OR spectrum for nanowire arrays depends strongly on temperature. The strongest OR can be observed at about T = 200 K for all samples in the visible regime. The OR spectra for these samples show different features in the visible and near-infrared bandwidths. We discuss the physical mechanisms responsible for these interesting experimental findings. This study is relevant to the application of metal nanowire arrays as optical and optoelectronic devices.

Keywords: Nanowire array; Optical properties; Visible and near-infrared; Temperature dependence

In recent years, quasi one-dimensional (1D) nanostructured materials have received much attention owing to their interesting physical properties, which stand in sharp contrast to those of the bulk, and to their potential applications as electronic, magnetic, photonic, and optoelectronic devices [1-4]. From a viewpoint of physics, the basic physical properties of nanostructured materials differ significantly from those of bulk materials with the same chemical components. In particular, quantum confinement effects can be observed in dimensionally reduced nanomaterial systems. Therefore, nanowires have been a major focus of research on nanoscaled materials and can be taken as a fundamental building block of nanotechnology and practical nanodevices. It should be noticed that metal nanowires display unique optical and optoelectronic properties due to surface plasmon resonance (SPR), which is a resonant oscillation of the conducting electrons within the metallic nanostructure. The SPR effect in nanowire structures can cause a tremendous enhancement of the electromagnetic near-field in the immediate vicinity of the particles and can give rise to enhanced scattering and absorption of light radiation. The SPR in metal nanowires and related phenomena (such as surface-enhanced Raman spectroscopy, nonlinear optical response, and plasmonic excitation, to mention but a few) contribute greatly to their promising applications in biosensors, optical devices, and photonic and plasmonic devices [5-8]. Moreover, metal nanowire waveguides can excite and emit terahertz (10^12 Hz, or THz) surface plasmon polaritons, which can fill the gap of terahertz electronics and optoelectronics. On the other hand, superlattice nanowires have even richer physical properties owing to further quantum confinement of electron motion along the wire direction. They have been proposed as advanced electronic device systems in which to observe novel effects such as giant magnetoresistance and even a high thermoelectric figure of merit [10,11]. Furthermore, with the rapid development of nanotechnology, it is now possible to fabricate nanowire arrays and superlattice nanowire arrays [12,13].
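To anchor the SPR picture above, a standard quasistatic textbook estimate for a long metal cylinder is worth writing down; this is an illustration of the resonance condition, not a calculation taken from this paper. For light polarized perpendicular to the wire axis, the dipolar polarizability per unit length of a wire with dielectric function ε(ω) in a host medium ε_m is resonant when its denominator vanishes:

```latex
% Quasistatic surface-plasmon resonance of a long metal cylinder
% (illustrative textbook estimate; parameter values are assumptions).
\alpha(\omega) \;\propto\; \frac{\varepsilon(\omega)-\varepsilon_m}{\varepsilon(\omega)+\varepsilon_m},
\qquad \text{resonance at } \varepsilon(\omega) = -\varepsilon_m .
% For a simple Drude metal, \varepsilon(\omega) = 1 - \omega_p^2/\omega^2,
% in vacuum (\varepsilon_m = 1) this gives
\omega_{\mathrm{SPR}} \;=\; \frac{\omega_p}{\sqrt{2}} .
```

In the actual samples the wires sit in an alumina matrix (ε_m > 1) and interact with their neighbors, both of which shift the resonance, so this condition should be read only as a qualitative guide.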
One of the major advantages of applying nanowire arrays and superlattice nanowire arrays as optical and optoelectronic devices is that the optical response of the array structures can be tuned and modulated by varying sample parameters such as the diameter of the wire and the pattern of the array structure. Given the potential applications of nanowire arrays and superlattice nanowire arrays as optical devices, it is important to examine their basic optical properties. In this article, we present a detailed experimental study of the optical properties of three kinds of nanowire array structures: Ni nanowire arrays, Cu nanowire arrays, and Ni/Cu superlattice nanowire arrays. We examine how these advanced nanostructured material systems respond to light radiation, how their optical properties depend on temperature and radiation wavelength, and why the optical properties of the nanowire arrays differ from those observed in bulk materials.

Samples and measurements

In this study, three kinds of nanowire array structures are fabricated: Ni arrays, Cu arrays, and Ni/Cu superlattice arrays. Samples are prepared by direct-current electrodeposition [14-16] of metal into the holes of a porous anodic alumina membrane (PAAM) with a pore size of about 50 nm; the diameter of the nanowires, set by the pores, is accordingly about 50 nm. The length of the nanowires is about 30 μm. The pores of the PAAM are arranged periodically in a hexagonal pattern, so the membrane can serve as a template. The distance between adjacent wires is about 60 nm. Because of the confinement of the PAAM material, metal nanowires grow only along the direction of the nanopores of the PAAM template and therefore form an array structure. In these samples, a layer of Au film (about 200-nm thick) is sputtered onto one side of the PAAM template to serve as the working electrode. A schematic diagram of the Ni or Cu nanowire array in the PAAM template is shown in Figure 1. For the fabrication of the Ni/Cu superlattice arrays, Ni and Cu are deposited alternately into the PAAM holes. The details of the sample fabrication are documented in [14-16].

Figure 1. Schematic diagram of Ni or Cu nanowire arrays in a porous anodic alumina membrane. In the measurement of the optical reflection spectrum, the incident light is set at a 45° angle to the sample surface, and the emergent light beam is also at a 45° angle to the sample surface.

For the measurement of the optical reflection (OR) spectrum, the incident and emergent light beams are set at an angle of 45° to the sample surface (see Figure 1). The measurements are carried out in the visible (400 to 800 nm in wavelength) and near-infrared (1 to 2 μm in wavelength) bandwidths. A tungsten halogen lamp is used as the white incident light source for the measurements in the visible bandwidth, and a silicon carbide rod is employed as the broadband infrared incident light source for the measurements in the near-infrared bandwidth. The OR spectrum is recorded using an imaging spectrometer (iHR320, HORIBA Jobin Yvon Inc., Edison, NJ, USA), where a photomultiplier tube (PMT) is used for detection in the 400- to 800-nm wavelength regime and an InGaAs photodetector is employed for the 1- to 2-μm wavelength regime. For measurements in the visible regime, the temperatures are set at 4.2, 70, 150, and 200 K and at room temperature. The change of temperature is achieved in an Oxford cooling system. The measurements in the near-infrared regime are undertaken at room temperature.
Results and discussion

The OR spectra for Ni and Cu nanowire arrays and Ni/Cu superlattice nanowire arrays in the visible bandwidth are shown in Figure 2 for temperatures of 4.2, 70, 150, 200, and 297 K. As can be seen, the intensity of OR in the nanowire array structures depends strongly on temperature. When the temperature T < 200 K, the intensity of OR for the Ni nanowire array sample increases with temperature; when T > 200 K, the OR intensity decreases with increasing temperature. The strongest OR is observed at about 200 K. A similar behavior is found for the Ni/Cu superlattice nanowire array sample. In contrast, the OR spectra for Cu nanowire arrays (see Figure 2c) show a different temperature dependence. With increasing temperature, the intensity of OR for a Cu nanowire array first decreases in the 4.2- to 70-K regime, then increases in the 70- to 200-K regime, and decreases again when T > 200 K. Again, the strongest OR for Cu nanowire arrays is observed at about T = 200 K. These experimental findings suggest that 200 K is an appropriate temperature for the enhancement of optical reflection from Cu, Ni, and Ni/Cu superlattice nanowire array structures. This can provide a basis for further investigation of other optical properties, such as optical absorption and emission from metal nanowire arrays in the visible regime. We find that when T > 200 K, the OR spectrum for the Ni/Cu superlattice nanowire array lies between those for the Cu and Ni nanowire arrays. However, at lower temperatures (e.g., at 150 K), the intensity of the OR spectrum for the Ni/Cu superlattice nanowire array is lower than those for the Cu and Ni nanowire arrays.

Figure 2. The spectra of optical reflection for nanowire arrays measured at temperatures of 4.2, 70, 150, 200, and 297 K, as indicated. The results for a Ni nanowire array (a), a Ni/Cu superlattice nanowire array (b), and a Cu nanowire array (c) are shown.

In Figure 3, the OR spectra at room temperature are shown for the three metal nanowire array samples in the visible and near-infrared bandwidths. In the visible regime (see Figure 3a), two relatively wide reflection peaks can be observed for all samples, at about 500 to 650 nm and 650 to 700 nm, respectively. The 650- to 700-nm peaks for the three samples appear at almost the same position (at about 667 nm), while the 500- to 650-nm peaks redshift slightly with respect to that of the incident light source. The peak position of the light source is at about 554 nm, whereas the peaks for the Cu and Ni/Cu superlattice nanowire arrays are at about 585 nm and that for the Ni nanowire arrays is at about 600 nm. It should be noted that the visible light source provided by the tungsten halogen lamp has two main peaks in the 400- to 800-nm wavelength regime, and that the intensity of the infrared light source given by the silicon carbide rod decreases as the radiation wavelength approaches 2 μm; this variation of the light-source intensity is further enhanced by the response of the measurement systems. We notice that the Ni nanowire arrays reflect visible light most strongly, the Cu nanowire arrays reflect relatively weakly, and the OR spectrum for the Ni/Cu superlattice nanowire arrays lies in between. In the near-infrared range of 1,000 to 2,000 nm (see Figure 3b), the peaks of the OR spectra for the Cu nanowire arrays and the Ni/Cu superlattice nanowire arrays are at about 1,808 nm, while those for the Ni nanowire arrays and the light source are at about 1,727 nm. The OR spectra for the nanowire arrays redshift slightly with respect to the spectrum of the light source.
In contrast to the visible regime, the Cu nanowire array reflects infrared radiation more strongly than the Ni nanowire array does. Interestingly, the OR spectrum for the Ni/Cu superlattice nanowire array lies below that for the Ni nanowire array when the radiation wavelength is less than 1,730 nm, and it is located between the OR spectra for the Ni and Cu arrays when the radiation wavelength is larger than 1,730 nm.

Figure 3. The OR spectra for three kinds of nanowire arrays in the visible (a) and near-infrared (b) bandwidths. The measurements are carried out at room temperature. The intensity of the incident light source is shown as a reference. The peak positions are marked to guide the eye.

It is known that the OR spectrum of a metal nanostructure is determined mainly by surface plasmon modes and the corresponding surface plasmon resonance (SPR). Our results indicate that the Cu, Ni, and Ni/Cu superlattice nanowire arrays show roughly the same OR spectra when the diameter and the length of the wires are the same. This implies that the features of the SPR in Ni, Cu, and Ni/Cu superlattice nanowire arrays have some similarities. From the fact that a strong optical reflection implies weakened optical absorption and transmission, we can predict that Cu nanowire arrays have stronger (weaker) optical absorption than Ni nanowire arrays in the visible (near-infrared) regime.

The strong temperature dependence of the OR spectra for these array structures implies that there is strong electron-phonon scattering in the nanowire array samples. In the presence of a radiation field and phonon scattering, the electrons in an array structure can gain energy from the radiation field and lose energy via the emission of phonons and the excitation of plasmons and surface plasmons. At relatively low temperatures, the electron-phonon interaction proceeds mainly via phonon emission channels, and the strength of the scattering increases with temperature. Strong phonon scattering implies a small electronic conductivity, or a weak optical absorption and thus a strong OR. This is the main reason why the OR in metal nanowire arrays increases with temperature in the low-temperature regime. At relatively high temperatures, because the phonon occupation number increases rapidly with temperature, the electron-phonon interaction proceeds not only through phonon emission but also through phonon absorption. Phonon absorption can result in a gain of electron energy and an increase in electronic conductivity. In this case, the effective strength of electron-phonon scattering decreases with increasing temperature, and therefore the intensity of OR decreases with increasing temperature. It is interesting to note that such a mechanism is responsible for the temperature-dependent electronic and optical properties of polar-semiconductor-based electronic systems. For example, it was found that the strongest magneto-phonon resonance can be observed at about 180 K for GaAs-based bulk and low-dimensional systems. However, we do not know the exact mechanism responsible for the decrease in OR for Cu nanowire arrays with increasing temperature in the 4.2- to 70-K regime. This may suggest strong metallic optical conduction in the Cu nanowire array samples in this temperature regime. We note that in a metal nanowire array, the visible (infrared) OR is caused mainly by SPR via interband (intraband) electronic transitions.
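The role of the phonon occupation number in the argument above can be made quantitative with the Bose-Einstein factor n = 1/(exp(E/kT) - 1): phonon absorption only becomes an important channel once thermal occupation is appreciable. The sketch below evaluates this factor at the temperatures used in the measurements; note that the 30-meV phonon energy is an assumed, illustrative value, not a number extracted from these samples.

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def phonon_occupation(energy_ev: float, temperature_k: float) -> float:
    """Bose-Einstein occupation number n = 1 / (exp(E / kT) - 1)."""
    return 1.0 / math.expm1(energy_ev / (K_B_EV * temperature_k))

# Illustrative phonon energy of 30 meV (an assumed value, for demonstration only).
E_PHONON = 0.030
for T in (4.2, 70.0, 150.0, 200.0, 297.0):
    print(f"T = {T:5.1f} K: n ~ {phonon_occupation(E_PHONON, T):.4f}")
# Occupation is essentially zero at 4.2 K and grows rapidly toward room
# temperature, which is when phonon absorption starts to matter.
```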
Due to the quantum confinement effect in the nanowire array structure, the surface plasmon and surface plasmon polariton modes induced by inter- and intraband transitions can have different features. For bulk metals, the interband SPR, induced mainly by electronic transitions from the higher-energy sp-band to the lower-energy d-band, determines the color of the metal. At the same time, the intraband SPR within the sp- and d-bands gives free-carrier optical absorption, which leads mainly to a lower-frequency background optical reflection. Because Cu is a better conductor than Ni, Ni normally reflects visible light more strongly than Cu does. However, for nanowire arrays, the electronic states in the different bands are quantized. An intraband electronic transition accompanied by the absorption of a photon can then proceed via inter-subband transition events, which can result in resonant optical absorption when the photon energy approaches the energy spacing between two subbands. Thus, intraband optical absorption can be enhanced in nanowire arrays. The results shown in Figure 3b indicate that this enhancement of intraband optical absorption is stronger in Ni nanowire arrays than in Cu nanowire arrays. As a result, Cu nanowire arrays reflect infrared radiation more strongly than Ni arrays do. Because the quantum confinement effect acts mainly on the electronic states within the individual bands of the array structure, the main features of the OR due to interband electronic transitions do not change very significantly. This is why Ni nanowire arrays reflect visible radiation more strongly than Cu arrays, as shown in Figure 3a, similar to the case for bulk materials.

Moreover, our results show that in the visible regime and when T > 200 K, the OR spectrum for the Ni/Cu superlattice nanowire arrays lies between those for the Cu and Ni nanowire arrays, whereas at relatively lower temperatures (e.g., at 150 K) the intensity of the OR spectrum for the Ni/Cu superlattice nanowire array is lower than those for the Cu and Ni nanowire arrays. We believe that this may result from the different features of the phonon modes and of electron-phonon scattering in nanowire and superlattice nanowire structures. In superlattice nanowire systems formed from different host materials, the phonon modes can be quantized and the conducting electrons are confined along the wire direction. The quantized phonon modes can weaken the electron-phonon scattering, because a scattering event requires momentum and energy conservation. On the other hand, the localized electrons can interact more strongly with phonons. Our results suggest that when T > 200 K the former effect is dominant, and when T ≃ 150 K the latter effect is stronger.
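To attach a rough energy scale to the inter-subband transitions invoked in this discussion, one can estimate confinement energies for an electron in a 50-nm-wide well. The sketch below deliberately uses the textbook one-dimensional infinite square well and the free-electron mass as stand-ins for the actual cylindrical wire geometry and band masses, so the numbers are order-of-magnitude only; the 5-eV Fermi energy is likewise an assumed, generic metallic value rather than a property of these samples.

```python
import math

H = 6.62607015e-34      # Planck constant, J·s
M_E = 9.1093837015e-31  # free-electron mass, kg
EV = 1.602176634e-19    # J per eV

def well_level_ev(n: int, width_m: float) -> float:
    """Energy of level n in a 1-D infinite square well: E_n = n^2 h^2 / (8 m L^2)."""
    return (n ** 2) * H ** 2 / (8.0 * M_E * width_m ** 2) / EV

L = 50e-9  # wire diameter from the text, used as the well width
e1 = well_level_ev(1, L)
print(f"ground-state confinement energy: {e1 * 1e3:.3f} meV")

# Because E_n grows as n^2, the spacing between adjacent levels near an
# energy E is roughly 2*sqrt(E * E1); evaluated at an assumed Fermi
# energy of ~5 eV, typical of simple metals:
e_fermi = 5.0
spacing = 2.0 * math.sqrt(e_fermi * e1)
print(f"subband spacing near E_F ~ {spacing * 1e3:.0f} meV")
```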
Conclusions

In this study, Cu, Ni, and Ni/Cu nanowire arrays have been fabricated using state-of-the-art nanotechnology. Optical measurements on these nanowire arrays have been carried out in the visible and near-infrared bandwidths at different temperatures. We have found that the optical reflection spectra of these samples depend strongly on temperature and on radiation wavelength. In particular, (1) the strongest OR in the visible regime is observed at about 200 K for all samples, and (2) the OR for Cu nanowire arrays shows a different dependence on temperature and radiation wavelength from that for Ni nanowire arrays. These results indicate that the surface plasmon resonances induced by inter- and intraband electronic transitions, the electron-phonon interaction, and the quantum confinement effect all play important roles in determining the optical properties of metal nanowire array structures. We hope that the experimental findings from this study can provide a deeper understanding of the optical properties of Cu and Ni nanowire arrays and Cu/Ni superlattice nanowire arrays, and a physical basis for the application of metal nanowire arrays in advanced optical and optoelectronic devices.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

WX proposed the research work, coordinated the collaboration, and carried out the analyses of the experimental results. YYZ designed the experiment and the experimental setup, carried out the measurements, and drafted the manuscript. SHX and GTF fabricated the nanowire and superlattice nanowire array samples. YMX and JGH participated in the experimental measurements, the results and discussion, and the analyses. All authors read and approved the final manuscript.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (grant no. 10974206), the Department of Science and Technology of Yunnan Province, and the Chinese Academy of Sciences.

References
- J Crystal Growth 2003, 254:14.
- J Appl Phys 2002, 91:4590-4594.
- Appl Surf Sci 2008, 255:1901.
- Surf Coat Technol 2010, 205:2432-2437.
- C R Physique 2008, 9:215-231.
- Appl Phys Lett 1994, 65:2484.
- Appl Phys Lett 1994, 65:3019.
- Nano Lett 2002, 2:83.
The knowledge, skills and understandings relating to students' writing have been drawn from the Statements of Learning for English (MCEECDYA 2005). Students are taught a variety of forms of writing at school. The three main forms (also called genres or text types) that are taught are narrative writing, informative writing and persuasive writing.

In the Writing tests, students are provided with a 'writing stimulus' (sometimes called a prompt – an idea or topic) and asked to write a response in a particular genre or text type. In 2013, students will be required to complete a persuasive writing task.

The Writing task targets the full range of student capabilities expected of students from Years 3 to 9. The same stimulus is used for students in Years 3, 5, 7 and 9. The lines in the response booklet for Year 3 students are more widely spaced than for Years 5, 7 and 9, and more capable students will address the topic at a higher level. The same marking guide is used to assess all students' writing, allowing for a national comparison of student writing capabilities across these year levels.

Assessing the Writing task

Students' writing will be marked by assessors who have received intensive training in the application of a set of ten writing criteria, summarised below. The full Persuasive Writing Marking Guide (5.7 MB) and the writing stimulus used to prompt the writing samples in the Marking Guide are both available for download.

Descriptions of the Writing criteria
- Audience: the writer's capacity to orient, engage and persuade the reader
- Text structure: the organisation of the structural components of a persuasive text (introduction, body and conclusion) into an appropriate and effective text structure
- Ideas: the selection, relevance and elaboration of ideas for a persuasive argument
- Persuasive devices: the use of a range of persuasive devices to enhance the writer's position and persuade the reader
- Vocabulary: the range and precision of contextually appropriate language choices
- Cohesion: the control of multiple threads and relationships across the text, achieved through the use of grammatical elements (referring words, text connectives, conjunctions) and lexical elements (substitutions, repetitions, word associations)
- Paragraphing: the segmenting of text into paragraphs that assists the reader to follow the line of argument
- Sentence structure: the production of grammatically correct, structurally sound and meaningful sentences
- Punctuation: the use of correct and appropriate punctuation to aid the reading of the text
- Spelling: the accuracy of spelling and the difficulty of the words used

The Narrative Writing Marking Guide (used in 2008-2010) is also available.

Use of formulaic structures

Beginning writers can benefit from being taught how to use structured scaffolds. One such scaffold that is commonly used is the five-paragraph argument essay. However, when students become more competent, the use of this structure can be limiting. As writers develop their capabilities, they should be encouraged to move away from formulaic structures and to use a variety of different persuasive text types, styles and language features, as appropriate to different topics.

Students are required to write their opinion and to draw on personal knowledge and experience when responding to test topics. Students are not expected to have detailed knowledge about the topic. Students should feel free to use any knowledge that they have on the topic, but should not feel the need to manufacture evidence to support their argument.
In fact, students who do so may undermine the credibility of their argument by making statements that are implausible.

Example topics and different styles:

City or country (see example prompt)
A beginning writer could write their opinion about living in either the city or the country and give reasons for it. A more capable writer might also choose to take one side and argue for it. However, this topic also lends itself to a comparative-style response from a more capable writer: it can be argued that there are benefits and limitations to living in the city and to living in the country. A writer could also choose to introduce other options, for example living in a large country town that might have the benefits of both city and rural life. Positions taken on this topic are likely to elicit logical, practical reasons and anecdotes based on writers' experiences.

Books or TV (see example prompt)
A beginning writer could write about their opinion of one aspect and give reasons for it. However, this topic lends itself to a comparative-style response from a more capable writer: it can be argued that there are benefits and limitations to both books and TV. The reasons for either side of the topic are likely to elicit logical, practical reasons and personal anecdotes based on the writer's experiences of both books and TV.

It is cruel to keep animals in cages and zoos (see example prompt)
A beginning writer could take one side of the topic and give reasons for it. However, this topic lends itself to further redefinition. For example, a more capable writer might develop the difference between open-range zoos and small cages and then argue the merits of one and the limitations of the other. The animal welfare issues raised by this topic are likely to elicit very empathetic and emotive arguments based on the writer's knowledge about zoos and animals.

More information on persuasive writing can be found in the FAQ section for the NAPLAN Writing test.

National minimum standards

The national minimum standards for writing describe some of the skills and understandings students can generally demonstrate at their particular year of schooling. The standards are intended to be a snapshot of typical achievement and do not describe the full range of what students are taught or what they may achieve. For further information on the national minimum standards, see Performance Standards.
Atomic oxygen, a corrosive space gas, finds many applications on Earth.

An Atomic Innovation for Artwork

Oxygen may be one of the most common substances on the planet, but recent space research has unveiled a surprising number of new applications for the gas, including restoring damaged artwork.

It all started with a critical problem facing would-be spacecraft: the gases just outside the Earth's atmosphere are highly corrosive. While most oxygen atoms on Earth's surface occur in pairs, in space the pair is often split apart by short-wave solar radiation, producing singular atoms. Because oxygen so easily bonds with other substances, it is highly corrosive in atomic form, and it gradually wears away the protective layering on orbiting objects such as satellites and the International Space Station (ISS).

To combat this destructive gas, NASA recreated it on Earth and applied it to different materials to see what would prove most resistant. The coatings developed through these experiments are currently used on the ISS. During the tests, however, scientists also discovered applications for atomic oxygen that have since proved a success in the private sector.

Breathing New Life into Damaged Art

In their experiments, NASA researchers quickly realized that atomic oxygen interacted primarily with organic materials. Soon after, they partnered with churches and museums to test the gas's ability to restore fire-damaged or vandalized art. Atomic oxygen was able to remove soot from fire-damaged artworks without altering the paint. It was first tested on oil paintings: in 1989, an arson fire at St. Alban's Episcopal Church in Cleveland nearly destroyed a painting of Mary Magdalene. Although the paint was blistered and charred, atomic oxygen treatment plus a reapplication of varnish revitalized it. And in 2002, a fire at St. Stanislaus Church (also in Cleveland) left two paintings with soot damage, but atomic oxygen removed it.

Buoyed by the successes with oil paints, the engineers also applied the restoration technique to acrylics, watercolors, and ink. At Pittsburgh's Carnegie Museum of Art, where an Andy Warhol painting, Bathtub, had been kissed by a lipstick-wearing vandal, a technician successfully removed the offending pink mark with a portable atomic oxygen gun. The only evidence that the painting had been treated—a lightened spot of paint—was easily restored by a conservator.

A Genuine Difference-maker

When the successes in art restoration were publicized, forensic analysts who study documents became curious about using atomic oxygen to detect forgeries. They found that it can assist analysts in figuring out whether important documents such as checks or wills have been altered, by revealing areas of overlapping ink created in the modifications.

The gas has biomedical applications as well. Atomic oxygen technology can be used to decontaminate orthopedic surgical hip and knee implants prior to surgery. Such contaminants contribute to inflammation that can lead to joint loosening and pain, or even necessitate removing the implant. Previously, there was no known chemical process that fully removed these inflammatory toxins without damaging the implants. Atomic oxygen, however, can oxidize any organic contaminants and convert them into harmless gases, leaving a contaminant-free surface.

Thanks to NASA's work, atomic oxygen—once studied in order to keep it at bay in space—is being employed in surprising, powerful ways here on Earth. To learn more about this NASA spinoff, read the original article.
The Nature Conservancy's successful island restoration and innovative conservation practices have inspired countless scientists and conservationists around the world. And now, Santa Cruz Island has motivated novelists as well. Best-selling author T.C. Boyle—author of such works as The Women, The Tortilla Curtain, and A Friend of the Earth—has written a new novel principally set on Santa Cruz Island and inspired by the work of Conservancy and National Park Service scientists.

The story of Santa Cruz Island—and its incredible return from the brink of ecological collapse—is nothing short of remarkable. The Conservancy is announcing a contest to win a trip to this iconic place. Come meet the scientists who are saving Santa Cruz Island and learn more about our work firsthand. You and a guest could win a trip to see animals like the island fox that exist nowhere else on Earth and stay overnight at a historic ranch. To enter the contest to win an island adventure, go to www.nature.org/sci or text "TNC" to 5055.

At 96 square miles, Santa Cruz Island is the largest and most biodiverse of California's eight Channel Islands. It is graced with a nearly unimaginable 77-mile stretch of California coastline surrounding two mountain ranges, which flank a central valley. Often referred to as the "Galapagos of North America," Santa Cruz Island is home to animals and plants found nowhere else on Earth, including the island fox and the island scrub-jay.

Author T.C. Boyle was inspired by his own trip to Santa Cruz Island with The Conservancy. View his interview at www.nature.org/sci. Boyle's new book, When the Killing's Done, tells the fictionalized tale of the island's restoration. What isn't fictional is the conscious struggle scientists universally face as we attempt to exert control over the natural world for the good of the whole. This has been a real challenge for The Conservancy's scientists, both personally and professionally, on Santa Cruz Island.

"It's our job to preserve nature, but sometimes that requires making hard choices. There is nothing pleasant about having to kill an animal," said The Nature Conservancy's Santa Cruz Island director Lotus Vermeer. "My love of nature is what brought me to this job in the first place. But what is very clear to me is that when native plants and animals like the Santa Cruz Island fox are at risk, and natural systems are threatened, we are morally obligated to take responsibility for undoing the damage that we have caused."

The Nature Conservancy is proud to be featured in this book, which highlights our collaborative work with our partner, the National Park Service, on Santa Cruz Island. Since 1978, The Nature Conservancy has achieved extraordinary restoration success on Santa Cruz Island, including the re-establishment of bald eagles, the removal of all feral sheep and pigs, the vaccination of island scrub-jays against West Nile virus, and bringing the native Santa Cruz Island fox back from the brink of extinction.

• To enter the contest, visit www.nature.org/sci or text "TNC" to 5055
• Official contest rules—http://www.nature.org/wherewework/northamerica/states/california/features/scisweepstakes.html
• To learn more about the Conservancy's Santa Cruz Island work—http://www.nature.org/wherewework/northamerica/states/california/preserves/art6335.html

The Nature Conservancy is a leading conservation organization working around the world to protect ecologically important lands and waters for nature and people.
The Conservancy and its more than 1 million members have protected nearly 120 million acres worldwide. Visit The Nature Conservancy on the Web at www.nature.org.
Take a $20 bill. Now strike a match and watch the money go up in smoke. Pretty crazy idea, right? Yet many of us do something similar every time we drive. We fill up our gas tanks, then burn through extra fuel - and money - that we could be saving.

The good news is that it doesn't take much to start saving money at the gas pump. By tweaking your driving habits and adopting a few simple car maintenance tips, you can easily cut your fuel consumption and get more mileage out of your vehicle. Getting 30 MPG instead of 20 MPG saves the average driver about $990 per year in fuel costs!

There are other benefits, too. Reducing the amount of fuel you use improves air quality, since motor vehicles account for about half of all greenhouse gas emissions in North Carolina and up to 70 percent in urban areas. That means everyone - you, your grandma, the family next door - can breathe easier. No matter what you drive, you can reduce carbon dioxide and save money - right now. This page will show you how to start driving green and saving green.

Download and print these free posters to spread the word about Drive Green, Save Green. Every five miles per hour you go over 60 can cost you an extra 20 cents per gallon. Drivers can save $40 a year and help the environment by clearing out their trunks. Learn how small changes in your driving habits, like going a little slower, using cruise control or cutting off the AC, can add up to big savings. Find tips for simple, regular maintenance that can help you save on gas and avoid more costly repairs. Find public transit in your area, locate a carpool buddy, and get information on biking and walking in North Carolina.

Did you know that:
- keeping tires properly inflated saves you one tank of gas a year?
- for every 5 mph you go over 60 mph, you're paying 20 cents more per gallon for gas?
- your air conditioner can consume up to one gallon of gas per tank to cool the vehicle?
- using cruise control on 10,000 of the miles driven in a year could save you nearly $200?
- you can lose 30 gallons of gasoline annually by not tightening your fuel cap?
- on a 10-minute trip, rushing to get to your destination - i.e., flooring it at every green light and slamming on your brakes - will get you there only 24 seconds sooner, but reduce your fuel efficiency from 25 mpg to 17 mpg?
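The $990 figure above is straightforward arithmetic once an annual mileage and a gas price are fixed. The sketch below reproduces it; note that the 15,000 miles per year and $3.96 per gallon are assumed inputs chosen to match the quoted savings, since the page itself does not state them.

```python
def annual_fuel_cost(miles_per_year: float, mpg: float, price_per_gallon: float) -> float:
    """Yearly fuel spend: gallons burned times price per gallon."""
    return miles_per_year / mpg * price_per_gallon

# Assumed inputs (not stated on the page): 15,000 miles/year at $3.96/gallon.
MILES, PRICE = 15_000, 3.96

cost_20 = annual_fuel_cost(MILES, 20, PRICE)
cost_30 = annual_fuel_cost(MILES, 30, PRICE)
print(f"20 MPG: ${cost_20:,.0f}/yr, 30 MPG: ${cost_30:,.0f}/yr, "
      f"savings: ${cost_20 - cost_30:,.0f}/yr")  # savings ~ $990
```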
A Brief Introduction To OpenFlow, November 14, 2011

OpenFlow is a specification, now managed by the Open Networking Foundation, which defines the functions and protocols used to centrally manage switches via a centralized controller. While OpenFlow has a centralized controller, that doesn't mean that each new flow has to result in a controller lookup. If a new flow matches an existing rule, it will be processed according to that rule's actions. Rules can be pre-populated, reducing the number of lookups that occur, and intelligent policy development should mean a reduced number of controller lookups. In addition, rules have an associated time to live, so if the switch is disconnected from the controller for some reason, it can still process existing and new flows; only those flows that would require a controller lookup would fail. Controller technology is not new, either: enterprises have been successfully using controller-based wireless and network access control for years.
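The match-then-act pipeline described above can be made concrete with a toy model. The Python sketch below is purely illustrative: it uses no real OpenFlow library, and the field names and API shape are invented for the example. What it shows are the three behaviours the article highlights: rules that can be pre-populated to avoid lookups, a controller consulted only on a table miss, and per-rule time-to-live expiry.

```python
import time

class FlowRule:
    def __init__(self, match: dict, actions: list, ttl_s: float):
        self.match, self.actions = match, actions
        self.expires_at = time.time() + ttl_s  # rules age out after their TTL

    def matches(self, pkt: dict) -> bool:
        return all(pkt.get(k) == v for k, v in self.match.items())

class Switch:
    def __init__(self, controller):
        self.table = []               # the flow table; may be pre-populated
        self.controller = controller  # consulted only on a table miss

    def handle(self, pkt: dict) -> list:
        now = time.time()
        self.table = [r for r in self.table if r.expires_at > now]  # expire old rules
        for rule in self.table:       # existing rule -> no controller lookup
            if rule.matches(pkt):
                return rule.actions
        rule = self.controller(pkt)   # table miss -> ask the controller
        self.table.append(rule)
        return rule.actions

# A toy controller policy: forward web traffic out port 2, drop everything else.
def controller(pkt: dict) -> FlowRule:
    if pkt.get("tcp_dst") == 80:
        return FlowRule({"tcp_dst": 80}, ["output:2"], ttl_s=30)
    return FlowRule({"tcp_dst": pkt.get("tcp_dst")}, ["drop"], ttl_s=30)

sw = Switch(controller)
print(sw.handle({"tcp_dst": 80}))  # miss -> controller lookup, then ['output:2']
print(sw.handle({"tcp_dst": 80}))  # hit  -> served straight from the flow table
```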
Dementia and the Sniff Magnitude Test

Sniff Test May Signal Disorders' Early Stages
By Elizabeth Svoboda
Published: August 14, 2007

The Sniff Magnitude Test, developed with the aid of a $1.3 million grant from the National Institutes of Health, consists of a nasal tube called a cannula attached to a plastic container about the size and shape of a coffee thermos. Chemical vapors inside the canister are released through the tube, exposing subjects to a series of smells, some more objectionable than others. "People describe some of the smells as skunky or sewerlike," said Jason Bailie, a University of Cincinnati graduate student working on the test. "There's also one that smells like banana."

As patients take whiffs of each new fragrance, sensors in the thermos unit measure the negative pressure the inhalations produce. The size and intensity of these sniffs turn out to be important gauges of olfactory ability. After detecting a strong or disagreeable odor, people with a normal sense of smell take very small sniffs to avoid smelling it. Subjects with an impaired sense of smell, on the other hand, continue taking deep whiffs, because the scent does not register in their brains.

The Cincinnati team's efforts have piqued the interest of other researchers, including Dr. Doty and Alan Hirsch of the Smell and Taste Research and Treatment Foundation, who is using the Sniff test in his clinical practice. "They've chosen some very good odors that stimulate the olfactory system effectively," Dr. Doty said. "This is a very novel approach — it just needs to be tested more broadly."

Still, Dr. Doty added, the Sniff Magnitude Test may not be the ideal way to assess every patient with cognitive deficits. "Very early in life, we make a connection between an odor and its source," he said. "We give it a name. If the connection between the name of an odor and the odor itself is what's breaking down in an Alzheimer's patient, this test might not be as helpful," because it does not tell evaluators how a patient identifies and categorizes smells. The Sniff Magnitude Test is likely to raise red flags only if an impending cognitive disorder directly affects a patient's olfactory abilities.

[ ... Read the full article ... ]
More Indian Musical Instruments
by V.A. Ponmelil

Chitra Veena / Gotu Vadhyam
The Chitraveena, which is also referred to as the gotuvadhyam, is one of the most exquisite instruments. It is a 21-stringed fretless lute similar to the Vichitraveena. It contains a flat top, two resonant chambers, and a hollow stem of wood. While the right hand's plectrums pluck the strings, the left hand slides a piece of wood over the strings. It is one of the oldest instruments of the world and the forerunner of the fretted Saraswati Veena.

Jaltarang
The word Jaltarang means "waves in water". The jaltarang is an interesting ancient musical instrument consisting of a series of tuned bowls arranged in a semicircle around the performer. The bowls are of different sizes and are tuned precisely to the pitches of various ragas by adding appropriate amounts of water. The instrument is played by striking the inside edge of the bowls with two small wooden sticks, one held in each hand. The jaltarang is not very common and is normally found in the accompaniment of kathak dancers.

Morsing
The Morsing is a tiny instrument which is held in the left hand, with the prongs against the upper and lower front teeth. Its tongue, which protrudes from the frame, is made of spring steel and is plucked with the index finger of the right hand (backwards, not forwards), while the tone and timbre are adjusted by changing the shape of the mouth cavity and moving the player's tongue. Further control of the sound can be achieved with the breath. Like the mridangam, the morsing is tuned to the shruti, and fine tuning is achieved by placing small amounts of beeswax on the end of the steel tongue.

Shankh
The Shankh is one of the ancient instruments of India. It is also referred to as a sushirvadya and is associated with religious functions. In India it is considered very sacred, being regarded as one of the attributes of Lord Vishnu. Before use, the shankh is drilled in such a way as to produce a hole at the base, taking care that the natural hole is not disturbed. One finds reference to the shankh in the Atharva Veda, though it existed long before. In the Bhagavad Gita, the shankh played an important role during the time of war. It also has different names, such as the Panchajanya Shankh, the Devadatta Shankh, the Mahashan Ponder Shankh, and more. Even in Valmiki's Ramayana, mention of the shankh can be traced. In temples, the shankh is played in the mornings and evenings during prayers. In homes, it is played before the start of a havan, yagnopavit, marriage, and so on.

Kombu
The Kombu is a wind instrument, a kind of trumpet, which is usually played along with the Panchavadyam, the Pandi Melam, or the Panchari Melam. This musical instrument is like a long horn and is usually seen in the state of Kerala in South India.
On December 21, 2012, our calendar will align with the Maya date 13.0.0.0.0, completing a great Maya cycle of time. There's been a lot of hoopla that we are about to face doomsday -- better known as the Maya apocalypse. There are television specials and panic buying of disaster supplies in Russia, a reminder of the stockpiling that took place for Y2K back in 1999. While the Maya date will coincide with the solstice, the shortest day of the year, will it also coincide with the end of the world? I'm no seer, but I am confident that December 22 will see the dawn.

The ancient Maya of Mexico, Guatemala, Belize and Honduras were close keepers of time. They charted every day, organizing them into 20- and 400-year periods. Using a base-20 counting system (ours is base 10) and zero, they easily calculated thousands of dates, some noting the existence of millions of years. During the height of their civilization in the 8th century, the Maya recorded dates and deeds at dozens of city-states, from births and battles to the triumphant wrenching of trophies from enemies. Artists inscribed their signatures on painted pots and stone sculptures. Stucco inscriptions adorned monumental pyramids that crested over the rainforest canopy.

But to continue building ever grander structures, the Maya needed natural resources. Most of all, they needed timber to burn limestone in order to make cement. By the late 8th century, the rainforest was in retreat, fuel was scarce and recurrent drought led to desperation, which then led to chronic warfare. And so around 800 AD, one of the most extraordinary civilizations came to a crushing halt. Small groups of desperate dwellers in some cities held out behind hastily thrown-up palisades. Elsewhere, foes burned enemy cities to the ground and smashed monuments, leaving them scattered across the surface to be found in recent times. Scrub jungle overtook what had been sparkling white plazas. Compact ball courts that had seen raucous competition in a team sport played like soccer went silent. Wildlife scavenged lavish furnishings for their nests and dens. The reasons were many, and the outcome was shocking. The Maya civilization collapsed in most of its southern lowlands, leaving only abandoned pyramids in silent cities. This was the true face of apocalypse.

Did they see it coming? Just a few years before the rot set in, Maya painters at the site of Bonampak, a small city in Chiapas, Mexico, covered the walls of a small three-room palace with extraordinary murals. They painted more individuals -- men, mainly, but women and children, too -- than had been rendered before, numbering more than 250. They deployed more fancy pigments than had been used before -- more than would ever be used again in ancient Mexico -- some 47 vibrant blues, reds and yellows. The paintings reveal the social layers of courtiers and lords, musicians and dwarves, victims and their blade-wielding sacrificers. Musicians, singers and performers lined up to perform on plazas and pyramids. None of these activities or materials was new, but what was new was the rapidly crumbling world around the Bonampak painters. No one could change -- the paintings seem to tell us. The Maya ignored the crisis in front of them, instead dancing with great panaches of precious quetzal feathers on pyramids, as if the present would hold forever.

Now in the 21st century, perhaps we have also reached a precipice. Global warming is not just fearful thinking -- it's real.
Weeks after Superstorm Sandy, scientists are now predicting that the near-term and long-term effects of global warming will be more dire than previously thought. Some, perhaps like our Maya predecessors, would rather not see the writing on the walls of our flooded cities. The crises pile up in front of us, one after another, and we ignore them at our peril. Acknowledging and doing something about the problems in front of us seems hard. Give us more feathers. Build more walls. Stockpile canned goods and buy a generator.

As for December 21, rest easy. This day will pass as if it were nothing more than the Maya Y2K, the nonevent of the decade. We'll wake up on December 22, and the world will still be here. And so will our pressing environmental challenges. We need to make some hard decisions and resolve that we will confront our own brewing apocalypse before it's too late.
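As a footnote on the calendar arithmetic mentioned at the start of this piece: the Long Count is a positional system, base 20 in every digit except one (the tun position holds 18 uinals), and converting a date such as 13.0.0.0.0 into elapsed days takes only a few lines. The sketch below is illustrative; the place values are the standard ones from Maya calendrics.

```python
# Place values of the Maya Long Count, in days. The system is base 20
# except at the tun position, which holds 18 uinals (so 1 tun = 360 days).
PLACE_VALUES = [144_000, 7_200, 360, 20, 1]  # baktun, katun, tun, uinal, kin

def long_count_to_days(date: str) -> int:
    """Total days since the Long Count era base for a dotted date string."""
    digits = [int(d) for d in date.split(".")]
    return sum(d * v for d, v in zip(PLACE_VALUES, digits))

# 13.0.0.0.0 -- the date completed on December 21, 2012.
days = long_count_to_days("13.0.0.0.0")
print(days, "days")                      # 1,872,000 days
print(round(days / 365.2425), "years")   # ~5,125 years since the era base
```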
Young goats learn new and distinctive bleating "accents" once they begin to socialise with other kids. The discovery is a surprise because the sounds most mammals make were thought to be too primitive to allow subtle variations to emerge or be learned. The only known exceptions are humans, bats and cetaceans – although many birds, including songbirds, parrots and hummingbirds, have legendary song-learning or mimicry abilities. Now, goats have joined the club. "It's the first ungulate to show evidence of this," says Alan McElligott of Queen Mary, University of London.

McElligott and his colleague, Elodie Briefer, made the discovery using 23 newborn kids. To reduce the effect of genetics, all were born to the same father but to several mothers, so the kids were a mixture of full siblings plus their half-brothers and sisters. The researchers allowed the kids to stay close to their mothers, and recorded their bleats at the age of 1 week. Then the 23 kids were split randomly into four separate "gangs" of five to seven animals each. When all the kids reached 5 weeks, their bleats were recorded again. "We had about 10 to 15 calls per kid to analyse," says McElligott.

Some of the calls are clearly different to the human ear, but the full analysis picked out more subtle variations, based on 23 acoustic parameters. What emerged was that each kid gang had developed its own distinctive patois. "It probably helps with group cohesion," says McElligott. "People presumed this didn't exist in most mammals, but hopefully now they'll check it out in others," he adds. "It wouldn't surprise me if it's found in other ungulates and mammals."

Erich Jarvis of Duke University Medical Center in Durham, North Carolina, says the results fit with an idea he has developed with colleague Gustavo Arriaga, arguing that vocal learning is a feature of many species. "I would call this an example of limited vocal learning," says Jarvis. "It involves small modifications to innately specified learning, as opposed to complex vocal learning, which would involve imitation of entirely novel sounds."

Journal reference: Animal Behaviour, DOI: 10.1016/j.anbehav.2012.01.020
Evolution can fall well short of perfection. Claire Ainsworth and Michael Le Page assess where life has gone spectacularly wrong.

The ascent of Mount Everest's 8848 metres without bottled oxygen in 1978 suggests that human lungs are pretty impressive organs. But that achievement pales in comparison with the feat of the griffon vulture that set the record for the highest recorded bird flight in 1975, when it was sucked into the engine of a plane flying at 11,264 metres.

Birds can fly so high partly because of the way their lungs work. Air flows through bird lungs in one direction only, pumped through by interlinked air sacs on either side. This gives them numerous advantages over lungs like our own. In mammals' two-way lungs, not as much fresh air reaches the deepest parts of the lungs, and incoming air is diluted by the oxygen-poor air that remains after ...
Walter Bagehot (February 3, 1826 – March 24, 1877) was a British journalist, political analyst and economist, famous for his analysis of the British Parliament and the money market. Under his leadership, The Economist became one of the world's leading business and political journals. Bagehot recognized that economics is not just a matter of the external, material aspects of financial transactions, but also involves the internal aspects of people's desires, motivations, and personality. Thus, he always emphasized social issues in his writings and endeavored to make issues of government transparent to the public. Bagehot had an original and insightful mind, recognizing that the character of leaders was often more important than their political affiliation or beliefs. His work has continued to inform and inspire debate, contributing to our understanding of the functioning of human society and its improvement.

Walter Bagehot was born on February 3, 1826, in Langport, Somerset, England, the son of a local banker. He attended University College London, where he earned a master's degree in mathematics in 1848. He studied law and was called to the Bar, but decided not to practice, instead joining his father's banking business, Stuckey & Co., in the west of England. While still working as a banker, Bagehot started to write, first for some periodicals and then for The National Review, of which he soon became the editor. In 1857, he met James Wilson, founder and editor of The Economist, a political and financial weekly newsmagazine, and in 1858 Bagehot married Wilson's daughter.

In 1860, Bagehot succeeded his father-in-law, James Wilson, as editor of The Economist. After taking over, he expanded the publication's reporting on the United States and on politics, and he is considered to have increased its influence among policymakers. Bagehot became influential in both politics and economics; his friends included the statesmen George Cornewall Lewis and Grant Duff, Lord Carnarvon, Prime Minister William Ewart Gladstone, and the governor and directors of the Bank of England. Bagehot made several attempts to be elected as a Member of Parliament, but without success. He remained at the head of The Economist for the rest of his life. He died suddenly on March 24, 1877, in his home in Langport, Somerset, England, at the age of 51.

Bagehot was a person with a wide variety of interests. He wrote on economics, politics, law, literature, and other topics. He remains most famous, however, for his three books: The English Constitution (1867), Physics and Politics (1872), and Lombard Street (1873). In addition to these volumes, he commanded substantial influence through his editorship of The Economist.

The English Constitution

In 1867, Bagehot wrote The English Constitution, which explored the constitution of the United Kingdom, specifically the functioning of the British Parliament and the British monarchy, and the contrasts between British and American government. Bagehot revealed how Parliament operated as it were "behind a curtain," hidden from public knowledge. He divided the constitution into two components:
- The Dignified – the symbolic side of the constitution, and
- The Efficient – the real face of the constitution, the way things actually work and get done.
Instead of describing the constitution from the point of view of the law, as a lawyer would, Bagehot focused on its practical implications, as experienced by the common man.
The book soon became widely popular, earning Bagehot worldwide fame. He criticized the American presidential system, claiming that it lacked flexibility and accountability. While real debates took place in the English Parliament, after which changes could follow, debates in the American Congress had no such power, since the President made the final decision. In Bagehot's view:

a parliamentary system educates the public, while a presidential system corrupts it. (The English Constitution 1867)

He also criticized the way American presidents are chosen, saying:

Under a presidential constitution the preliminary caucuses that choose the president need not care as to the ultimate fitness of the man they choose. They are solely concerned with his attractiveness as a candidate. (The English Constitution, 1867)

Physics and Politics

Bagehot wrote Physics and Politics in 1872, in which he tried to apply the principles of evolution to human societies. The subtitle of the book reads: Thoughts on the Application of the Principles of "Natural Selection" and "Inheritance" to Political Society. The book represented a pioneering effort to establish a relationship between the natural and the social sciences. Bagehot explained the functioning of the market and how it affects people's behavior. For example, he believed that people tend to invest money when the mood of the market is positive, and refrain from doing so when it enters a negative phase.

In this book Bagehot also reflected on the psychology of politics, especially on the personality of a leader. He stressed two things as essential for leadership: the personality of a leader and his motivation. Bagehot believed that motivation played one of the key roles in good leadership, and that the personality of a leader often counted for more than the policy he endorsed:

It is the life of teachers which is catching, not their tenets. (Physics and Politics 1872)

Bagehot claimed that the personal example of the leader sets the tone for the whole governance. That is why "character issues" are so important for any government. Character issues still play an important role in judging potential candidates for any leadership position in today's modern world.

Bagehot coined the expression "the cake of custom," denoting the set of customs in which any society is rooted. He believed that customs develop and evolve throughout human history, with the best-organized groups overthrowing the poorly organized ones. In this sense Bagehot's views are a clear example of cultural selection, closer to Lamarckian than to Darwinian evolution. The central problem of his book was to understand why Europeans could break away from tradition and "the cake of custom" and instead focus on progress and novelty. He saw tradition as important in keeping societies cohesive, but also believed that diversity was essential for progress:

The great difficulty which history records is not that of the first step, but that of the second step. What is most evident is not the difficulty of getting a fixed law, but getting out of a fixed law; not of cementing (as upon a former occasion I phrased it) a cake of custom, but of breaking the cake of custom; not of making the first preservative habit, but of breaking through it, and reaching something better. (Physics and Politics 1872)

Lombard Street

In his famous Lombard Street (1873), Bagehot explained the theory behind the banking system, using insights from the English money market.
As with his analysis of the English constitution six years earlier, Bagehot described the English banking system through the eyes of a simple person, as experienced in everyday life. Bagehot showed that the English money system relied solely on the central bank, the Bank of England. He warned that the whole reserve was held in the central bank, under no effectual penalty of failure, and he proposed several ideas on how to improve that system.

Bagehot's work can be closely associated with the English historicist tradition. He did not directly oppose Classical economics, but advocated its reorganization. He claimed that economics needed to incorporate more factors in its theory, such as cultural and social factors, in order to be more accurate in theorizing about economic processes. Bagehot was one of the first to study the relationship between the physical and social sciences from a sociological perspective. In his contributions to sociological theory through historical studies, Bagehot may be compared to his contemporary Henry Maine. He also developed a distinct theory of central banking, many points of which continue to be valued.

With his analysis of the English and United States political systems in The English Constitution, Bagehot influenced Woodrow Wilson to write his Congressional Government. In honor of his achievements and his work as its editor, The Economist named its weekly column on British politics after him. Every year the British Political Studies Association awards the Walter Bagehot Prize for the best dissertation in the field of government and public administration.

Publications
- Bagehot, Walter. 1848. Review of Mill's Principles of Political Economy. Prospective Review, 4(16), 460-502.
- Bagehot, Walter. 1858. Estimates of Some Englishmen and Scotchmen. London: Chapman and Hall.
- Bagehot, Walter. 1875. A New Standard of Value. The Economist, November 20.
- Bagehot, Walter. 1879. Literary Studies. London: Longmans, Green and Co.
- Bagehot, Walter. 1998 (original 1880). Economic Studies. Augustus M. Kelley. ISBN 0678008523
- Bagehot, Walter. 2001 (original 1867). The English Constitution. Oxford University Press. ISBN 0192839756
- Bagehot, Walter. 2001 (original 1873). Lombard Street: A Description of the Money Market. Adamant Media Corporation. ISBN 140210006X
- Bagehot, Walter. 2001 (original 1877). Some Articles on the Depreciation of Silver and on Topics Connected with It. Adamant Media Corporation. ISBN 140216288X
- Bagehot, Walter. 2001 (original 1889). The Works of Walter Bagehot. Adamant Media Corporation. ISBN 1421254530
- Bagehot, Walter. 2006 (original 1881). Biographical Studies. Kessinger Publishing. ISBN 1428608400
- Bagehot, Walter. 2006 (original 1872). Physics and Politics. Dodo Press. ISBN 1406504408
- Bagehot, Walter. 2006 (original 1885). The Postulates of English Political Economy. Cosimo. ISBN 1596053771

References
- Barrington, Russell. 1914. Life of Walter Bagehot. Longmans, Green and Co.
- Buchan, Alastair. 1960. The Spare Chancellor: The Life of Walter Bagehot. Michigan State University Press. ISBN 087013051X
- Cousin, John William. 1910. A Short Biographical Dictionary of English Literature. New York: E.P. Dutton.
- Morgan, Forrest. 1995. The Works of Walter Bagehot. Routledge. ISBN 0415131545
- Orel, Harold. 1984. Victorian Literary Critics: George Henry Lewes, Walter Bagehot, Richard Holt Hutton, Leslie Stephen, Andrew Lang, George Saintsbury, and Edmund Gosse. Palgrave Macmillan. ISBN 0312843046
- Sisson, C. H. 1972. The Case of Walter Bagehot. Faber and Faber Ltd. ISBN 0571095011
- Stevas, Norman. 1959. Walter Bagehot: A Study of His Life and Thought Together with a Selection from His Political Writings. Indiana University Press.
- Sullivan, Harry R. 1975. Walter Bagehot. Twayne Publishers. ISBN 0805710183

External links (all links retrieved December 6, 2012)
- Bagehot and the Age of Discussion – Commentary on Bagehot's Physics and Politics
- Major Works – Some full-text works of Walter Bagehot
- Quotations from Walter Bagehot
- Walter Bagehot – Biography
- Works by Walter Bagehot. Project Gutenberg
Collins Field Guide to the Birds of South America: Non-Passerines: From Rheas to Woodpeckers

The only field guide to illustrate and describe every non-passerine bird species in South America, this superbly illustrated guide covers all the non-passerines, from divers to woodpeckers. All plumages for each species are illustrated, including males, females and juveniles. Featuring 1,273 species, the text gives information on key identification features, habitat, and songs and calls. The 156 colour plates appear opposite their relevant text for quick and easy reference and include all field-identifiable species, including subspecies and colour morphs. Distribution maps are included, showing where each species can be found and how common it is, to further aid identification.
The best gifts are handmade. Make this craft together and give it as a gift to a parent, grandparent, or your child's classmates. This craft is best suited for parents to make on their own or with minimal help from kids. You'll need to allow extra time for glue or paint to dry. Create with us skills involve self-expression, experimentation, and imagination through visual arts (like painting and sculpting), dramatic play, cooking, and dance. Read with us skills focus on early literacy and include: listening, comprehension, speech, reading, writing, vocabulary, letters and their sounds, and spelling.
In January 1968, Nixon decided to once again seek the nomination of the Republican Party for president. Portraying himself as a figure of stability in a time of national upheaval, Nixon promised a return to traditional values and "law and order." He fended off challenges from other candidates such as California Governor Ronald Reagan, New York Governor Nelson Rockefeller, and Michigan Governor George Romney to secure the nomination at the Republican convention in Miami. Nixon unexpectedly chose Governor Spiro Agnew of Maryland as his running mate.

Nixon's campaign was helped by the tumult within the Democratic Party in 1968. Consumed by the war in Vietnam, President Lyndon B. Johnson announced on March 31 that he would not seek re-election. On June 5, immediately after winning the California primaries, former attorney general and then-U.S. Senator Robert F. Kennedy (brother of the late president John F. Kennedy) was assassinated in Los Angeles. The campaign of Vice President Hubert Humphrey, the Democratic nominee for president, went into a tailspin after the Democratic national convention in Chicago was marred by mass protests and violence. By contrast, Nixon appeared to represent a calmer society, and his campaign promised peace at home and abroad. Despite a late surge by Humphrey, Nixon won by nearly 500,000 popular votes. Third-party candidate George Wallace, the once and future governor of Alabama, won nearly ten million popular votes and 46 electoral votes, principally in the Deep South.

Once in office, Nixon and his staff faced the problem of how to end the Vietnam War, which had broken his predecessor's administration and threatened to cause major unrest at home. As protesters in America's cities called for an immediate withdrawal from Southeast Asia, Nixon made a nationally televised address on November 3, 1969, calling on the "silent majority" of Americans to renew their confidence in the American government and back his policy of seeking a negotiated peace in Vietnam. Earlier that year, Nixon and his Defense Secretary Melvin Laird had unveiled the policy of "Vietnamization," which entailed reducing American troop levels in Vietnam and transferring the burden of fighting to South Vietnam; accordingly, U.S. troop strength in Vietnam fell from 543,000 in April 1969 to zero on March 29, 1973. Nevertheless, the Nixon administration was harshly criticized for its use of American military force in Cambodia and its stepped-up bombing raids during the later years of the first term.

Nixon's foreign policy aimed to reduce international tensions by forging new links with old rivals. In February 1972, Nixon traveled to Beijing, Hangzhou, and Shanghai in China for talks with Chinese leaders Chairman Mao Zedong and Premier Zhou Enlai. Nixon's trip was the first high-level contact between the United States and the People's Republic of China in more than twenty years, and it ushered in a new era of relations between Washington and Beijing. Several weeks later, in May 1972, Nixon visited Moscow for a summit meeting with Leonid Brezhnev, general secretary of the Communist Party of the Soviet Union, and other Soviet leaders. Their talks led to the signing of the Strategic Arms Limitation Treaty, the first comprehensive and detailed nuclear weapons limitation pact between the two superpowers. Foreign policy initiatives represented only one aspect of Nixon's presidency during his first term.
In August 1969, Nixon proposed the Family Assistance Plan, a welfare reform that would have guaranteed an income to all Americans. The plan, however, did not receive congressional approval. In August 1971, spurred by high inflation rates, Nixon imposed wage and price controls in an effort to gain control of price levels in the U.S. economy; at the same time, prompted by worries over the soundness of U.S. currency, Nixon took the dollar off the gold standard and let it float against other countries' currencies.

On July 20, 1969, astronauts Neil Armstrong and Buzz Aldrin became the first humans to walk on the Earth's moon, while fellow astronaut Michael Collins orbited in the Apollo 11 command module. Nixon placed what has been termed the longest-distance telephone call ever made, speaking with the astronauts from the Oval Office. And on September 28, 1971, Nixon signed legislation abolishing the military draft.

In addition to such weighty affairs of state, Nixon's first term was also full of lighter-hearted moments. On April 29, 1969, Nixon awarded the Presidential Medal of Freedom, the nation's highest civilian honor, to Duke Ellington, and then led hundreds of guests in singing "Happy Birthday" to the famed band leader. On June 12, 1971, Tricia became the sixteenth White House bride when she and Edward Finch Cox of New York married in the Rose Garden. (Julie had wed Dwight David Eisenhower II, grandson of President Eisenhower, on December 22, 1968, in New York's Marble Collegiate Church, while her father was President-elect.) Perhaps most famous was Nixon's meeting with Elvis Presley on December 21, 1970, when the president and the king discussed the drug problem facing American youth.

Re-election, Second Term, and Watergate

In his 1972 bid for re-election, Nixon defeated South Dakota Senator George McGovern, the Democratic candidate for president, by one of the widest electoral margins ever, winning 520 electoral college votes to McGovern's 17 and nearly 61 percent of the popular vote. Just a few months later, investigations and public controversy over the Watergate scandal had sapped Nixon's popularity. The Watergate scandal began with the June 1972 discovery of a break-in at the Democratic National Committee offices in the Watergate office complex in Washington, D.C., but media and official investigations soon revealed a broader pattern of abuse of power by the Nixon administration, leading to his resignation.

The Watergate burglars were soon linked to officials of the Committee to Re-elect the President, the group that had run Nixon's 1972 re-election campaign. Soon thereafter, several administration officials resigned; some, including former attorney general John Mitchell, were later convicted of offenses connected with the break-in and other crimes and went to jail. Nixon denied any personal involvement with the Watergate burglary, but the courts forced him to yield tape recordings of conversations between the president and his advisers indicating that the president had, in fact, participated in the cover-up, including an attempt to use the Central Intelligence Agency to divert the FBI's investigation into the break-in. (For more information about Watergate, please visit the Ford Presidential Library and Museum's online Watergate exhibit.) Investigations into Watergate also revealed other abuses of power, including numerous warrantless wiretaps on reporters and others, campaign "dirty tricks," and the creation of a "Plumbers" unit within the White House.
The Plumbers, formed in response to the leaking of the Pentagon Papers to news organizations by former Pentagon official Daniel Ellsberg, broke into the office of Ellsberg's psychiatrist. Adding to Nixon's worries was an investigation into Vice President Agnew's ties to several campaign contributors. The Department of Justice found that Agnew had taken bribes from Maryland construction firms, leading Agnew to resign in October 1973 and enter a plea of no contest to income tax evasion. Nixon nominated Gerald Ford, Republican leader in the House of Representatives, to succeed Agnew. Ford was confirmed by both houses of Congress and took office on December 6, 1973.

Such controversies all but overshadowed Nixon's other initiatives in his second term, such as the signing of the Paris peace accords ending American involvement in the Vietnam War in January 1973; two summit meetings with Brezhnev, in June 1973 in Washington and in June and July 1974 in Moscow; and the administration's efforts to secure a general peace in the Middle East following the Yom Kippur War of 1973.

The revelations from the Watergate tapes, combined with actions such as Nixon's firing of Watergate special prosecutor Archibald Cox, badly eroded the president's standing with the public and Congress. Facing certain impeachment and removal from office, Nixon announced his decision to resign in a nationally televised address on the evening of August 8, 1974. He resigned effective at noon the next day, August 9, 1974. Vice President Ford then became president of the United States. On September 8, 1974, Ford pardoned Nixon for "all offenses against the United States" which Nixon "has committed or may have committed or taken part in" during his presidency. In response, Nixon issued a statement in which he said that he regretted "not acting more decisively and forthrightly in dealing with Watergate."
Coastal Clash: Defining Public Property and the History of the Public Trust Doctrine
"Coastal Clash" is a one-hour documentary focusing on the urbanization of California's coastline. The activities and lesson plans for the film target students at the high school level and align with the California State Standards for Government. In this lesson plan, students will do research and group work related to the concept of the Public Trust Doctrine.

Enhancing Modern Languages Teaching: Student Participation and Motivation

The Icarus Syndrome: A history of American hubris
The Icarus Syndrome tells a tale as old as the Greeks: a story about the seductions of success. In conversation with Associate Professor Brendan O'Connor from the US Studies Centre, Peter Beinart portrays three extraordinary generations... (Running Time 60:06)

Oberlin History as American History
This site offers exhibits that tell about the lives and histories of the people of Oberlin, Ohio. The website features the story of an Amistad captive, Oberlin women and the struggle for equality, and the city's cooperative tradition. It also includes city maps and pictures, letters and essays related to the city's founding and development, newspaper articles regarding the Niagara movement, and census data.

Ancient and Medieval Philosophy, Fall 2006
This course will concentrate on major figures and persistent themes in ancient and medieval philosophy. A balance will be sought between scope and depth, the latter ensured by a close reading of selected texts.

Ancient Wisdom and Modern Love, Spring 2007
Built around Plato's Symposium, Shakespeare (including A Midsummer Night's Dream), Catholic writings (including Humanae Vitae), and several movies, this course explores the nature of romance and erotic love. We will examine such topics as sexuality, marriage, and procreation with an eye towards how we can be better at being in love. The course generally tries to integrate the analytic approach of philosophy with the imaginative approach of literature.

Medicine and Public Health in American History, Fall 2007
This course offers an introduction to differing conceptions of disease, health, and healing throughout American history; the changing role and image of medicine and medical professionals in American life; and the changing social and cultural meanings and entanglements of medical science and practice throughout American history.

Creating People Centred Schools: Section Two, School organization: a brief history
This provides an overview of organizational styles and the importance of cultures as well as structures in organizational models and change.

Welsh history and its sources
This unit is a teaching and learning resource for anyone interested in Welsh history. It contains study materials, links to some of the most important institutions that contribute to our understanding of the history of Wales, and a pool of resources.

Great Unsolved Mysteries in Canadian History
This site includes a collection of nine historical mysteries which draw students into Canadian history, critical thinking and archival research through the enticement of solving historical cold cases. Each of the mystery archives includes an average of 100,000 words in English (and in French), as well as up to several hundred images plus maps. Some of the mystery websites also include 3-D recreations, videos and oral history interviews.
Site users can look at the collections of archival materials.

He who destroyes a good Booke, kills reason itselfe: an exhibition of books which have survived Fire
In 1955, Robert Vosper of the University of Kansas Libraries put together what would become an internationally recognized exhibit of materials that have been banned and/or censored. This catalog of the exhibit explains why each item was of concern in its time, and includes images of many. Works date from the 1500s to the mid-1950s.

Research Guide for Doing Undergraduate History
A website designed to help undergraduates use internet (and printed) resources in researching and writing history papers at a more sophisticated level than the traditional term paper based on secondary materials.

History of Migraine and Risk of Pregnancy Induced Hypertension
This peer-reviewed article studies the relationship between a history of migraine headaches and the development of preeclampsia or gestational hypertension during pregnancy. The study included 172 women with preeclampsia and 254 with gestational hypertension. The control group included 505 women with no history of hypertension before pregnancy. The study concluded that women with a history of migraines may be at a higher risk of developing hypertension during pregnancy.

History of Science in Latin America and the Caribbean: A Virtual Archive
This site is "a comprehensive database of primary sources on the history of science in Latin America and the Caribbean. The site, launched in January 2010, provides a virtual archive of over 200 primary sources along with introductions based on the latest scholarly findings." According to the site, it "is organized into Topics that are organized approximately chronologically, but each one stands alone. The archive, or database of primary sources, is designed in a modular fashion ..."

East Asia in World History
This site is designed as a resource site for teachers of world history, world geography, and world cultures. It provides background information and curriculum materials, including primary source documents for students. The material is arranged in 14 topic sections. The topics and the historical periods into which they are divided follow the National Standards in World History and the Content Outline for the Advanced Placement Course in World History.

Seventeen Moments in Soviet History
Begins with the Bolshevik seizure of power in 1917 and ends with the dissolution of the Soviet Union in 1991. It includes the Kronstadt uprising (1921), the death of Lenin (1924), the liquidation of the kulaks as a class (1929), the year of the Stakhanovite (1936), the end of rationing (1947), the virgin lands campaign (1954), Khrushchev's secret speech (1956), the first cosmonaut (1961), the intervention in Czechoslovakia (1968), and Chernobyl (1986). (NEH)

The Mongols in World History
A sophisticated web site on the history and impact of the Mongols. Separate pages deal with such topics as the nature of nomadic life, key figures, the Mongol conquests, and the impact of the Mongols on China and the world. An image gallery and a set of historical maps as well as other class materials and readings add to the value of the site. That one of the leading experts on the Mongols, Morris Rossabi, was a consultant gives the site much credibility.

A Radically Modern Approach to Introductory Physics, Volume 2
This is the second part (chapters 13-24) of a PDF textbook for a one-year introductory physics course.
The text was developed out of an alternate beginning physics course at New Mexico Tech designed for students with a strong interest in physics. A broad outline of the text is as follows: Newton's Law of Gravitation; Forces in Relativity; Electromagnetic Forces; Generation of Electromagnetic Fields; Capacitors, Inductors, and Resistors; Measuring the Very Small; Atoms; The Standard Model; Atomic ...

MacTutor History of Mathematics Archive
An award-winning site concerning the history of mathematics. In-depth coverage of numerous people, topics, mathematical curves, and more. Extensively cross-linked; powerful search engine. Rich and growing source of materials.

This Land is Your Land? This Land is My Land! Mapping the History of Territory Acquisition in the US
In this lesson, students will research the many territory acquisitions in United States history and create an annotated map that tells the history of U.S. expansion.
National Teachers Initiative

The National Teachers Initiative is a project of StoryCorps, the American oral history project. Each month this school year, "Weekend Edition Sunday" will celebrate stories of public school teachers across the country.

Learning Works charter school in California takes an unorthodox approach to getting young people to graduate. Students who had previously dropped out get mentors who help with everything from getting to class on time to staying up late studying. Now, some of those who graduated are helping others.

December 25, 2011
Teacher John Hunter invented the World Peace Game to get his elementary students to think about major world issues. He also wanted to teach them compassion and kindness. At least two of his former students are on the path he helped to pave.

October 30, 2011
Ayodeji Ogunniyi's family came to the U.S. from Nigeria in 1990. His father worked as a cab driver in Chicago, and he always wanted his son to become a doctor. But while Ogunniyi was studying pre-med in college, his father was murdered on the job. At that point, he says, his life changed course.

September 25, 2011
As a middle-school student in the '80s, Lee Buono stayed after school one day to remove the brain and spinal cord from a frog. He did such a good job that his science teacher told him he might be a neurosurgeon someday. That's exactly what Buono did.

September 25, 2011
StoryCorps is homing in on lessons about learning with a new project for the academic year called the National Teachers Initiative. It'll feature conversations with teachers across the country — teachers talking to each other, students interviewing the teachers who changed their lives, and more.
Why separation of powers matters: Is freedom inevitable?

The answer to that question is obvious but essential. Freedom is not inevitable. Historically, freedom is a temporary condition enjoyed by only a fraction of the earth's population. Since freedom is not inevitable - indeed, the opposite is true; freedom is rare - we must ask, "Why are we free when others are not?"

As a nation (and state) of immigrants, we can't claim we are free because of our genetics. Our nation (and state) is blessed with natural resources, but so is Russia. Wealth does not produce freedom.

In America (and in Nevada), we are free because our founders recognized that, as Lord Acton stated, "Power tends to corrupt, and absolute power corrupts absolutely," and designed a government with three branches. While these branches each have different functions, they also have the ability to check the power exercised by another branch. To ensure that no person or group would amass too much power, the founders established a government in which the powers to create, implement, and adjudicate laws were separated. Each branch of government is balanced by powers in the other two coequal branches: The President can veto the laws of the Congress; the Congress confirms or rejects the President's appointments and can remove the President from office in exceptional circumstances; and the justices of the Supreme Court, who can overturn unconstitutional laws, are appointed by the President and confirmed by the Senate.

Because we're so used to this system of government, it's easy to forget how important this system is to ensuring freedom. Government is needed to secure an individual's right to life, liberty and property. But those wielding governmental power tend toward corruption, which harms the very rights government was created to defend. By using the checks and balances contained within three separate branches of government, you have a system where the tendency of government officials to amass power is checked by other government officials who usually aren't interested in giving up their own power.

And that's also why it's so dangerous for one individual to work in two branches of government at the same time. Both the separation of powers and the checks and balances in the system go out the window if one person has authority in two branches of government. Instead of separating power, power is consolidated. Instead of one branch checking another, it could collude with it.

The idea of separating powers is so important that it's explicitly required in Nevada's constitution in Article 3, Section 1:

The powers of the Government of the State of Nevada shall be divided into three separate departments,-the Legislative,-the Executive and the Judicial; and no persons charged with the exercise of powers properly belonging to one of these departments shall exercise any functions, appertaining to either of the others...

And that's exactly why NPRI's Center for Justice and Constitutional Litigation has sued Mo Denis, the Public Utilities Commission, and the State of Nevada for violating the separation-of-powers clause in Nevada's constitution. Even the smallest encroachment on the separation-of-powers clause opens the door for larger and larger encroachments. Hello, Wendell Williams, Chris Giunchigliani, and Mark Manendo. Once you remove the bright-line standard, it's only a matter of time before incremental "exceptions" render the provision meaningless.
And once you've removed the structural protections against what James Madison called "tyranny," you're left with a system of government dependent entirely on the character of its elected officials to keep it free from corruption and abuse of power. As "power tends to corrupt, and absolute power corrupts absolutely," this is a problem.

Freedom isn't inevitable. Freedom is rare, and we should do everything in our power to protect the form and structure of our government - including a clear separation-of-powers provision - which has provided us with freedom.
Reading 1: Three Days of Carnage at Gettysburg

(Refer to Map 2 as you read the description of the battle.)

Units of the Union and the Confederate armies met near Gettysburg on June 30, 1863, and each quickly requested reinforcements. The main battle opened on July 1, with early morning attacks by the Confederates on Union troops on McPherson Ridge, west of the town. Though outnumbered, the Union forces held their position. The fighting escalated throughout the day as more soldiers from each army reached the battle area. By 4 p.m., the Union troops were overpowered, and they retreated through the town, where many were quickly captured. The remnants of the Union force fell back to Cemetery Hill and Culp's Hill, south of town. The Southerners failed to pursue their advantage, however, and the Northerners labored long into the night regrouping their men. Throughout the night, both armies moved their men to Gettysburg and took up positions in preparation for the next day.

By the morning of July 2, the main strength of both armies had arrived on the field. Battle lines were drawn up in sweeping arcs similar to a "J," or fishhook shape. The main portions of both armies were nearly a mile apart on parallel ridges: Union forces on Cemetery Ridge, Confederate forces on Seminary Ridge, to the west. General Robert E. Lee, commanding the Confederate troops, ordered attacks against the Union left and right flanks (ends of the lines). Starting in late afternoon, Confederate General James Longstreet's attacks on the Union left made progress, but they were checked by Union reinforcements brought to the fighting from the Culp's Hill area and other uncontested parts of the Union battle line. To the north, at the bend and barb of the fishhook (the other flank), Confederate General Richard Ewell launched his attack in the evening as the fighting at the other end of the fishhook was subsiding. Ewell's men seized part of Culp's Hill, but elsewhere they were repulsed. The day's results were indecisive for both armies.

In the very early morning of July 3, the Union army forced out the Confederates who had successfully taken Culp's Hill the previous evening. Then General Lee, having attacked the ends of the Union line the previous day, decided to assail the Union center. The attack was preceded by a two-hour artillery bombardment of Cemetery Hill and Ridge. For a time, the massed guns of both armies were engaged in a thunderous duel for supremacy. The Union defensive position held. In a final attempt to gain the initiative and win the battle, Lee sent approximately 12,000 soldiers across the one mile of open fields that separated the two armies near the Union center. General George Meade, commander of the Union forces, anticipated such a move and had readied his army. The Union lines did not break. Barely half of the Southerners who took part in the assault returned to safety. Despite great courage, the attack (sometimes called Pickett's Charge or Longstreet's assault) was repulsed with heavy losses. Crippled by extremely heavy casualties in the three days at Gettysburg, the Confederates could no longer continue the battle, and on July 4 they began to withdraw from Gettysburg.

1. Which army had the advantage after the first day of fighting? What were some reasons for their success? Could they have been even more successful?
2. What was the situation by the evening of July 2?
3. What evidence from the previous day's fighting brought General Lee to decide on the strategy for Pickett's Charge on July 3? What was the result of that assault?
4. Why did General Lee decide to withdraw from Gettysburg?

Reading 1 was adapted from the National Park Service's visitor's guide for Gettysburg National Military Park.
Biomass Technology Analysis

Conducting full life-cycle assessments for biomass products, including electricity, biodiesel, and ethanol, is important for determining environmental benefits. NREL analysts use a life-cycle inventory modeling package and supporting databases to conduct life-cycle assessments. These tools can be applied on a global, regional, local, or project basis.

Integrated system analyses, technoeconomic analyses, life-cycle assessments (LCAs), and other analysis tools are essential to our research and development efforts. They provide an understanding of the economic, technical, and even global impacts of renewable technologies. These analyses also provide direction, focus, and support to the development and commercialization of various biomass conversion technologies. The economic feasibility and environmental benefits of biomass technologies revealed by these analyses are useful for the government, regulators, and the private sector.

Technoeconomic analyses (TEAs) are performed to determine the potential economic viability of a research process. Comparing the costs of a given process with those of the current technology establishes the economic feasibility of a project. These analyses can be useful in determining which emerging technologies have the highest potential for near-, mid-, and long-term success. The results of a TEA are also useful in directing research toward areas in which improvements will result in the greatest cost reductions. As the economics of a process are evaluated throughout the life of the project, advancement toward the final goal of commercialization can be measured. TEAs performed in previous years have determined the technical and economic feasibility of various biomass-based systems, including:
- Direct combustion
- Gasification combined-cycle power systems

NREL's analysis capabilities include proficiency with the following software packages:
- ASPEN Plus©: models continuous processes to obtain material and energy balances
- GateCycle™: performs detailed steady-state and off-design analyses of thermal power systems
- Questimate©: performs detailed process plant cost estimates
- MATLAB® and MathCAD®: perform numeric calculations and mathematical solutions
- Crystal Ball®: operates within Microsoft Excel® and incorporates uncertainties in forecasting analysis results

Life-cycle assessment (LCA) is an analytic method for identifying, evaluating, and minimizing the environmental impacts of emissions and resource depletion associated with a specific process. When such an assessment is performed in conjunction with a technoeconomic feasibility study, the total economic and environmental benefits and drawbacks of a process can be quantified. Material and energy balances are used to quantify the emissions, resource depletion, and energy consumption of all processes, including raw material extraction, processing, and final disposal of products and by-products, required to make the process of interest operate. The results of this inventory are then used to evaluate the environmental impacts of the process so efforts can focus on mitigation.

LCA studies have been conducted on the following systems:
- Biomass-fired integrated gasification combined-cycle system using a biomass energy crop
- Pulverized coal boiler representing an average U.S. coal-fired power plant
- Cofiring biomass residue with coal
- Natural gas combined-cycle power plant
- Direct-fired biomass power plant using biomass residue
- Anaerobic digestion of animal waste

Biofuels production technologies:
- Ethanol from corn stover
- Comparison of biodiesel and petroleum diesel used in an urban bus

Hydrogen production technologies:
- Natural gas-hydrogen production

For these analyses, the software package used to track the material and energy flows between the process blocks in each system was Tools for Environmental Analysis and Management (TEAM®).

Learn more about our Biomass capabilities and current projects in this area. Access more information on all of our Staff Analysts.
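To illustrate the role a package like Crystal Ball plays in these TEAs, propagating uncertainty in cost inputs through to a banded forecast, here is a minimal stand-alone sketch of the same idea in Python. It is not an NREL model: the plant size, cost figures, probability distributions, and the simple fixed-charge-rate levelized-cost formula are all invented placeholders used purely for illustration.

```python
import random

def levelized_cost(capital, capacity_kw, capacity_factor,
                   o_and_m, feedstock, fixed_charge_rate=0.10):
    """Levelized cost of electricity in $/kWh (simplified fixed-charge-rate form)."""
    annual_kwh = capacity_kw * capacity_factor * 8760
    annual_cost = capital * fixed_charge_rate + o_and_m + feedstock
    return annual_cost / annual_kwh

# Monte Carlo propagation of input uncertainty, in the spirit of what
# Crystal Ball does inside an Excel-based TEA model. All numbers invented.
random.seed(1)
samples = sorted(
    levelized_cost(
        capital=random.triangular(40e6, 60e6, 50e6),       # $ installed
        capacity_kw=20_000,                                # hypothetical 20 MW plant
        capacity_factor=random.uniform(0.75, 0.90),
        o_and_m=random.triangular(1.5e6, 2.5e6, 2.0e6),    # $/yr
        feedstock=random.triangular(2.0e6, 4.0e6, 3.0e6),  # $/yr
    )
    for _ in range(10_000)
)

median = samples[len(samples) // 2]
p5, p95 = samples[int(0.05 * len(samples))], samples[int(0.95 * len(samples))]
print(f"LCOE median {median:.3f} $/kWh, 90% interval [{p5:.3f}, {p95:.3f}]")
```

The resulting 5th-to-95th percentile band is the kind of cost range that determines whether an emerging conversion technology looks viable to the government, regulators, and lenders.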
Algorithm Positions Solar Trackers, Movie Stars

March 30, 2011

Math and programming experts at a federal laboratory took an algorithm used to track the stars and rewrote its code to precisely follow the sun, even taking into consideration the vagaries of the occasional leap second. Now, the algorithm and its software are helping solar power manufacturers build more precise trackers, orchards keep their apples spotless, and movie makers keep the shadows off movie stars.

The Solar Position Algorithm (SPA) was developed at the U.S. Department of Energy's National Renewable Energy Laboratory to calculate the sun's position with unmatched low uncertainty of +/- 0.0003 degrees at vertex, in the period of years from -2000 to 6000 (or 2001 B.C. until just short of 4,000 years from now). That's more than 30 times more precise than the uncertainty levels for all other algorithms used in solar energy applications, which claim no better than +/- 0.01 degrees and are only valid for a maximum of 50 years. And those uncertainty claims cannot be validated because of the need to add an occasional leap second, a consequence of the randomly increasing length of the mean solar day. The SPA does account for the leap second.

That difference in uncertainty levels is no small change, because an error of 0.01 degrees at noon can throw calculations off by 2 or 3 percent at sunrise or sunset, said NREL Senior Scientist Ibrahim Reda, the leader on the project. "Every uncertainty of 1 percent in the energy budget is millions of dollars uncertainty for utility companies and bankers," Reda said. "Accuracy is translated into dollars. When you can be more accurate, you save a lot of money."

"Siemens Industry Inc. uses NREL's SPA in its newest and smallest S7-1200 compact controller," says Paul Ruland of Siemens Industry, Inc. "Siemens took that very complex calculation, systemized it into our code and made a usable function block that its customers can use with their particular technologies to track the sun in the most efficient way. The end result is a 30 percent increase in accuracy compared to other technologies."

Science, Engineering and Math All Add to Breakthroughs

An algorithm is a set of rules for solving a mathematical problem in a finite number of steps, even though those steps can number in the hundreds or thousands. NREL is known more for its solar, wind, and biofuel researchers than for its work in advanced math. But algorithms are key to so many scientific and technological breakthroughs today that a scientist well-versed in the math of algorithms is behind many of NREL's big innovations.

Since SPA was published on NREL's website, more than 4,000 users from around the world have downloaded it. In the European Union, for the past three years, it has been the reference algorithm to calculate the sun's position both for solar energy and atmospheric science applications. It has been licensed to, and downloaded by, major U.S. manufacturers of sun trackers, military equipment and cell phones. It has been used to boost agriculture and to help forecast the weather. Archaeologists, universities and religious organizations have employed SPA, as have other national laboratories.

Fewer Dropped Cell-Phone Calls

Billions of cell-phone calls are made each day, and they stay connected only because algorithms help determine exactly when to switch signals from one satellite to another.
Cell-phone companies can use the SPA to know exactly the moments when the phone, satellite, and the bothersome sun are in the same alignment, vulnerable to disconnections or lost calls. "The cell phone guys use SPA to know the specific moment to switch to another satellite so you're not disconnected," said Reda, who has a master's degree in electrical engineering/measurement from the University of Colorado. "Think of how many millions of people would be disconnected if there's too much uncertainty about the sun's position."

From a Tool for Solar Scientists to Widespread Uses

SPA sprang from NREL's need to calibrate solar measuring instruments at its Solar Radiation Research Laboratory. "We characterize the instruments based on the solar angle," Reda said. "It's vital that instruments get a precise read on the amount of energy they are getting from the sun at precise solar angle." That will become even more critical in the future when utilities add more energy garnered from the sun to the smart grid. "The smart grid has to know precisely what your budget is for each resource you are using — oil, coal, solar, wind," Reda said.

Making an Astronomy Algorithm One for the Sun

Reda borrowed from the "Astronomical Algorithms," which is based on the Variations Séculaires des Orbites Planétaires Theory (VSOP87), developed in 1982 and then modified in 1987. Astronomers trust it to let them know exactly where to point their telescopes to get the best views of Jupiter, Alpha Centauri, the Magellanic Clouds or whatever celestial bodies they are studying.

"We were able to separate and modify that global astronomical algorithm and apply it just to solar energy, while making it less complex and easy to implement," said Reda, highlighting the role of his colleague, Afshin Andreas, who has a degree in engineering physics from the Colorado School of Mines, as well as expertise in computer programming.

They spent an intense three or four weeks of programming to make sure the equations were accurate before distributing the 1,100 lines of code, Andreas said. They used almanacs and historical data to ensure that what the algorithm was calculating agreed with what observers from previous generations said about the sun's position on a particular day. "We did spot checks so we would have a good comfort level that the future projections are accurate," Reda said. "We used our independent math and programming skills to make sure that our results agreed," Reda said.

Available for Licensing, Free Public Use

The new SPA algorithm simply served the needs of NREL scientists, until the day it was put on NREL's public website. "A lot of people started downloading it," so NREL established some rules of use, Reda said. Individuals and universities could use SPA free of charge, but companies with commercial interests would have to pay for the software.

Factoring in Leap Seconds Improves Accuracy

NREL's SPA knows the position of the sun in the sky over an 8,000-year period partly because it has learned when to add those confounding leap seconds. Solar positioners that don't factor in the leap second can only calculate a few years or a few decades. The length of an Earth day isn't determined by an expensive watch, but by the actual rotation of the Earth. Almost immeasurably, the Earth's rotation is slowing down, meaning the solar day is getting just a tiny bit longer. But it's not doing so at a constant rate. "It happens in unpredictable ways," Reda said.
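NREL distributes SPA itself as roughly 1,100 lines of code, so no short listing can reproduce it. Purely to give a flavor of what "calculating the sun's position" involves, the sketch below implements a standard textbook low-accuracy approximation, good to roughly half a degree, thousands of times coarser than SPA's stated uncertainty. It also skips the leap-second and Earth-rotation bookkeeping this article describes, which is one reason for that gap. The date and the Golden, Colorado coordinates are example inputs only.

```python
import math
from datetime import datetime, timezone

def rough_solar_position(when_utc, lat_deg, lon_deg):
    """Low-accuracy solar declination and elevation (Spencer 1971 series).

    Roughly +/-0.5 degree accuracy -- a toy sketch, not NREL's SPA.
    Longitude is degrees east (negative in the western hemisphere).
    """
    doy = when_utc.timetuple().tm_yday
    hour = when_utc.hour + when_utc.minute / 60 + when_utc.second / 3600

    # Fractional year, in radians
    g = 2 * math.pi / 365 * (doy - 1 + (hour - 12) / 24)

    # Solar declination, radians
    decl = (0.006918 - 0.399912 * math.cos(g) + 0.070257 * math.sin(g)
            - 0.006758 * math.cos(2 * g) + 0.000907 * math.sin(2 * g)
            - 0.002697 * math.cos(3 * g) + 0.00148 * math.sin(3 * g))

    # Equation of time, minutes
    eot = 229.18 * (0.000075 + 0.001868 * math.cos(g) - 0.032077 * math.sin(g)
                    - 0.014615 * math.cos(2 * g) - 0.040849 * math.sin(2 * g))

    # True solar time (minutes) and hour angle (radians from solar noon)
    tst = hour * 60 + eot + 4 * lon_deg
    hour_angle = math.radians(tst / 4 - 180)

    lat = math.radians(lat_deg)
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_elev)), math.degrees(decl)

elev, decl = rough_solar_position(
    datetime(2011, 3, 30, 19, 0, tzinfo=timezone.utc),  # 1 p.m. MDT
    39.74, -105.18)                                      # Golden, Colorado
print(f"solar elevation {elev:.1f} deg, declination {decl:.1f} deg")
```

For real tracker control or instrument calibration, the place to start is NREL's own SPA code (or a vetted port of it), not an approximation like this one.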
Sometimes a leap second is added every year; sometimes there isn't a need for another leap second for three or four years. For example, the International Earth Rotation and Reference Systems Service (IERS) added six leap seconds over the course of seven years between 1992 and 1998, but has added just one extra second since 2006. The algorithm calculates exactly when to add a leap second because its equations incorporate the rapid, monthly, and long-term data on the solar day provided by the IERS. "IERS receives the data from many observatories around the world," Reda added. "Each observatory has its own measuring instruments to measure the Earth's rotation. A consensus correction is then calculated for the fraction of second. As long as we know the time, and how much the Earth's rotation has slowed, we know the sun's position precisely." That precision has proved useful in unexpected fields.

Practical Uses in Agriculture, Movie Making

One person who bought a license for the SPA software has an apple orchard and wanted to keep the black spots off the apples that turn off finicky consumers and make wholesale buyers hesitate, Reda said. The black spots appear when too much sun hits a particular apple, a particular tree or a particular row of trees in an orchard. The spots can be prevented by showering the apples with water, but growers don't want to use more water than necessary. SPA's precise tracking of the sun tells the grower exactly when the automatic sprinkler should spray for a few moments on a particular set of trees, and when it's OK to shut off that sprayer and turn on the next one. SPA communicates with the sprinkler system so, "instead of spraying the whole orchard, the spray moves minute by minute," Reda said. "He takes our tool and plugs it into the software that controls the sprinkler system. And he saves a lot of water."

Religious groups with traditions of praying at a particular time of day have even turned to SPA to help with precision.

A movie-camera manufacturer has purchased the SPA software to help cinematographers combat the costly waste of time and money when shadows disrupt outdoor shooting. "They have cameras on those big cranes and booms, and typically they'd have to manually change them based on the shadows," Reda said. "This company that bought it has an automatic camera positioner." Combining the positioner with the SPA's calculations, the camera can tell the precise moment when the sun will, say, peek above the tall buildings of an outdoor set. "They don't have to make so many judgments on their own about where the camera should be positioned," Reda said. "It gives them a clearer picture."

Learn more about NREL's solar radiation research and the Electricity, Resources, and Building Systems Integration Center.

— Bill Scanlon
NEW ULM - A historical impersonator of St. Paul's very first public school teacher provided a snapshot of a time and place in history Saturday during the Junior Pioneers Winter Social.

Suzanne de la Houssaye, of the Minnesota Historical Society, performed as Harriet Bishop, who was instrumental in making St. Paul's school into a public school, promoting the national profile of Minnesota and pushing the temperance movement. She was also part of the initial rush of writers and intellectuals to write about the U.S.-Dakota Conflict, writing her own record titled "Dakota War Whoop."

Bishop was an intensely religious Baptist woman who believed in the imminent coming of the Rapture and the need to exuberantly preach the Gospel to all who could hear it. She was also considered an intellectual in the Twin Cities at that time and was instrumental in helping to build up the major cities. She started the St. Paul school in 1847, beginning in essentially a log cabin and rapidly growing the number of students until her successful efforts made it the town's first public school. She even raised Minnesota's profile as a healthy destination to settle due to its "sturdy weather," erroneously claiming certain diseases of the time simply did not exist in Minnesota.

Staff photo by Josh Moniz: Suzanne de la Houssaye, of the Minnesota Historical Society, performed as Harriet Bishop, St. Paul's first public school teacher, on Saturday at the Junior Pioneers Winter Social.

She showed compassion for the Dakota people in several ways, but was equally a product of her time in believing the only right way forward was for them to wholly adopt European culture and traditions. She wrote her book from a very emotional standpoint, aimed at conjuring the image of women and children hiding in the basements of New Ulm. Her book also carried many inaccuracies believed at the time and tried to paint Charles Flandreau as the sole savior of the battle at New Ulm at the start of the Conflict. Her emotional slant on the Conflict is largely believed to stem from the serious impact its events had on her. Still, she fell in the middle of white settlers' beliefs after the Conflict, neither advocating for the extermination of the Dakota people nor being among those who fully accepted the Dakota's right to an independent heritage.

Interestingly, she had an almost comically stern view of New Ulm citizens at the time, believing the myths that they were all progressive atheists who forbade priests in their city limits. She literally referred to them as "the infidel Germans" in her book and alluded to a belief, somewhat held at the time, that the Conflict was God's judgment on the town. She also judged New Ulm for having dance halls, which some strict religious sects objected to around that time, and for what she believed was the residents' taboo habit of drinking on holy days.

She married a widower who served in the U.S. Civil War. The common practice at the time was for widowers to quickly remarry, sometimes as a sheer matter of survival. Such remarriages were all the more common because of the high rate of deaths during childbirth, which was often caused by doctors trying to help women without knowing about the deadly diseases hiding on their hands. However, she eventually undertook the uncommon act of divorcing her husband due to his abusive alcoholism.
Her husband's circumstance was frighteningly common, largely because numerous Civil War veterans returned without any help for psychological issues stemming from their service. This led to her heavy advocacy for the temperance movement, which would eventually see Prohibition passed after her death. She personally was one of the founding members of the Woman's Christian Temperance Union.

Her cause in that infamous movement was aimed at combating a very real issue of the day: the prospect of the husband, the only one allowed at that time to earn the household's wages, drinking away all the month's food money due to alcoholism. People of that era routinely drank more than three times as much alcohol per week as most people drink today. The movement believed the end of alcohol would address the majority of the terrible acts of abuse inflicted on women and children at the time. Bishop's temperance convictions also largely shaped how she saw alcohol negatively affecting the Dakota people. She never lived long enough to see Prohibition, but she did see a roughly one-year implementation of the "Maine law" that banned alcohol in select locations in the Twin Cities. The temperance movement was also intrinsically linked to intense advocacy for suffrage and abolition.

Suzanne de la Houssaye said Bishop was a fascinating woman of her time. She said the Minnesota Historical Society is interested in telling her story, as well as not glossing over her flaws in its depiction of the Dakota Conflict, in order to provide a better dialogue about the events.

Josh Moniz can be e-mailed at email@example.com.
Does Thinness Raise Alzheimer's Risk?

Nov. 23, 2011 -- In the search for early markers of Alzheimer's disease - in hopes of eventually preventing it - researchers have found that low body weight may somehow play a role. A study published this week in the journal Neurology found that people with early signs of Alzheimer's disease were more likely to be underweight or have a low body mass index (BMI).

Earlier studies found that people who are overweight in middle age or earlier are at higher risk for Alzheimer's later in life. Other studies have shown that being overweight later in life seems to protect against the disease.

More research needed

What the latest study findings mean for diagnosing or preventing Alzheimer's disease is unclear. "A long history of declining weight or BMI could aid the diagnostic process," says study author Eric Vidoni, Ph.D., at the University of Kansas. But, he adds, it's too early "to make body composition part of the diagnostic toolbox."

Dr. Vidoni and colleagues studied brain imaging and analyzed cerebrospinal fluid in 506 people. Study participants ranged from those with no memory problems to others with Alzheimer's.

Impact of body weight

People who had evidence of Alzheimer's - either in brain scans or protein levels in the cerebrospinal fluid - were more likely to have a lower BMI than those who did not show early evidence of the disease. The researchers aren't sure why body weight might have a bearing on Alzheimer's risk. They speculate that the disease may affect the hippocampus, the area of the brain that controls metabolism and appetite. Or, they say, perhaps inflammation is driving both the drop in BMI and the cognitive changes that are the hallmark of Alzheimer's.

Although you can't control certain risk factors for Alzheimer's disease, like advancing age, the latest findings suggest there are ways to reduce your odds of developing the condition. Always talk with your health care provider to find out more information.
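For reference, BMI is weight in kilograms divided by the square of height in meters: BMI = kg / m^2. A 1.75 m (5-foot-9) adult weighing 55 kg (121 pounds), for example, has a BMI of about 18.0, just under the conventional underweight cutoff of 18.5.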
Guide to Tanzanian Legal System and Legal Research

By Bahame Tom Nyanduga and Christabel Manning

Bahame Tom Nyanduga is an Advocate of the High Court of Tanzania and was President of the East Africa Law Society from October 2004 to October 2006. The main research for this compilation has been conducted by Ms. Christabel Manning, LL.B, a graduate of the University of Dar es Salaam currently working in the Legal Department at KPMG (T) Limited and a member of the Tanzania Women Lawyers Association (TAWLA).

The United Republic of Tanzania is situated on the eastern seaboard of the African continent, about one degree south of the Equator. Its eastern border is the Indian Ocean; it shares its northern border with the Republic of Kenya and Uganda, and to the west it borders the Democratic Republic of the Congo, the Republic of Rwanda and the Republic of Burundi. The Republic of Zambia and the Republic of Malawi share its borders on the southwest, while in the south it shares a border with Mozambique. It is the union of two historical countries, Tanganyika and Zanzibar.

The United Republic of Tanzania was formed in 1964 through the union of two independent states, namely the Republic of Tanganyika and the Peoples' Republic of Zanzibar. Zanzibar is an autonomous part of the United Republic and is made up of two islands, namely Unguja and Pemba, which are found in the territorial waters of the United Republic, in the Indian Ocean. Another island to the southeast of Tanzania, Mafia, is an integral part of mainland Tanzania.

Tanganyika gained its independence on 9th December 1961 from the British, who had administered her after the end of WWII under United Nations Trusteeship; she became a Republic on 9th December 1962. Zanzibar became independent on 12th December 1963. Prior to her independence, Zanzibar was ruled by an Arab Sultanate and enjoyed protectorate status under the British. One month after she gained independence, the Arab Sultanate regime of Zanzibar was overthrown by a popular revolution on 12th January 1964, which led to the creation of the Revolutionary Government of Zanzibar.

The Republic of Tanganyika and the Peoples' Republic of Zanzibar entered into a union on 26th April 1964 to form the United Republic of Tanganyika and Zanzibar, which was renamed the United Republic of Tanzania on 29th October 1964.

At the time of the Union, Tanganyika was governed by a political party known as the Tanganyika African National Union (TANU), the nationalist party which won the country its independence, while Zanzibar was ruled by the Afro-Shirazi Party (ASP), which had led the popular revolution. The two states were by then governed under the one-party system, i.e. one-party state democracy, which was then very prevalent in Africa. In 1977 TANU and the ASP merged to form the Chama Cha Mapinduzi (CCM) party (otherwise known as the Revolutionary Party), which continued to exercise political control throughout the country under the one-party regime.

The United Republic of Tanzania remained under the one-party system until 1992, when she adopted a new constitutional framework enabling the organization of pluralist political parties; hence in 1995 the first multiparty democratic elections were held in the country. Since 1995 the country has held such multiparty elections in 2000 and 2005.

Tanzania's legal system is based on the English common law system.
It derived this system from its British colonial legacy, as it does its system of government, which is based to a large degree on the Westminster parliamentary model.

Unlike the unwritten British constitutional system, the first source of law for the United Republic of Tanzania is the 1977 Constitution. The constitutional history of Tanganyika traces its background to the 1961 Independence Constitution, which was adopted at the time of independence. In 1962 Tanganyika adopted the Republican Constitution, which operated from 1962 up to 1965. These two were based on the traditional Lancaster House-style constitutions negotiated at independence by the British upon handover of state power to newly independent states. In 1965 Tanganyika adopted an Interim Constitution while the country awaited the drafting of a new constitution, after it abolished the multiparty political system and adopted a one-party state system. The process lingered longer than intended, and thus the Interim Constitution lasted from 1965 up to 1977, when a new constitution was adopted; it has remained applicable to date, with fourteen subsequent amendments.

The Constitution provides for a bill of rights, notwithstanding the fact that it also makes provision for a number of claw-back clauses. In other words, the enjoyment of certain rights and freedoms under the Constitution is not absolute but is subject to legal regulation. The Bill of Rights is found in Part Three of the first Chapter of the Constitution: the fundamental rights and freedoms are stipulated in Articles 12 to 24, while Articles 25 to 28 impose on every individual duties and obligations to respect the rights of others and society. Article 29 establishes the obligations of society to every individual. Article 30 of the Constitution limits the application of these rights subject to law and due process of law, as the case may be. The Constitution allows any person to challenge any law, act or omission which contravenes his or her rights or the Constitution.

The second source of law is statutes, or Acts of Parliament. The Laws Revision Act of 1994, Chapter 4 of the Laws of Tanzania [R.E. 2002], established that all legislation previously known as Ordinances, i.e. enacted by the pre-independence colonial administration as Orders in Council, is now legally recognized as Acts. These principal legislations, and subsidiary legislation made under them, are published in the Government Gazette and printed by the Tanzania Government Printers.

The third source is case law: cases from the High Court and Court of Appeal, whether reported or unreported, are used as precedents and bind the lower courts. Reported Tanzanian cases are found in the Tanzania Law Reports, High Court Digests and East Africa Law Reports.

The fourth source is the received laws, established under section 2(3) of the Judicature and Application of Laws Act, Chapter 358 of the Laws of Tanzania [R.E. 2002] (JALA); these include the common law, the doctrines of equity, and the statutes of general application of England applicable before 22nd July 1920, which is deemed to be the reception date for English law in Tanzania.

The fifth source is customary and Islamic law, both recognized under section 9 of JALA.
Customary law has effect only where it does not conflict with statutory law, while Islamic law is applicable to Muslims under the Judicature and Application of Laws Act, which empowers courts to apply Islamic law to matters of succession in communities that generally follow Islamic law in matters of personal status and inheritance. International law, that is, treaties and conventions, is not self-executing: treaties and conventions to which Tanzania is a party can be applied by the courts in Tanzania only after ratification and incorporation by an Act of Parliament.

The United Republic of Tanzania is a unitary state based on a multiparty parliamentary democracy. In 1992 the Tanzanian government introduced constitutional reforms permitting the establishment of opposition political parties. All matters of state in the United Republic are exercised and controlled by the Government of the United Republic of Tanzania and the Revolutionary Government of Zanzibar. The Government of the United Republic of Tanzania has authority over all Union matters in the United Republic, as stipulated under the Constitution, and it also runs all non-Union matters on mainland Tanzania, i.e. the territory formerly known as Tanganyika. Non-Union matters are all those which do not appear in the Schedule to the Constitution stipulating the list of Union matters. The Revolutionary Government of Zanzibar, similarly, has authority in Tanzania Zanzibar, i.e. the territory composed of the islands of Unguja and Pemba, over all matters which are not Union matters. In this respect the Revolutionary Government of Zanzibar has a separate Executive; a legislature, known as the House of Representatives; and a judicial structure, which functions from the Primary Court level up to the High Court of Zanzibar, all provided for under the 1984 Constitution of Zanzibar.

There are three organs of the central government of the United Republic of Tanzania: the Executive, the Judiciary and the Legislature. Local government authority is exercised through Regional and District Commissioners. The functions and powers of each of the three organs are laid out in the 1977 Constitution of the United Republic of Tanzania: the Executive is established under Chapter Two, Parliament under Chapter Three and the Judiciary under Chapter Five.

The Executive of the United Republic comprises the President, the Vice-President, the President of Zanzibar, the Prime Minister and the Cabinet Ministers. The President of the United Republic is the Head of State, the Head of Government and the Commander-in-Chief of the Armed Forces, and is the leader of the Executive of the United Republic of Tanzania. The Vice-President is the principal assistant to the President in all matters of the United Republic. The Prime Minister of the United Republic is the leader of Government business in the National Assembly; he controls, supervises and executes the daily functions and affairs of the Government of the United Republic, together with any other matters the President directs to be done. The President of Zanzibar is the Head of the Executive for Zanzibar, i.e. the Revolutionary Government of Zanzibar, and is the Chairman of the Zanzibar Revolutionary Council. The Cabinet of Ministers, which includes the Prime Minister, is appointed by the President from among members of the National Assembly. The Government executes its functions through Ministries led by Cabinet Ministers.
President Jakaya M. Kikwete became President of the United Republic on 21st December 2005 after a historic victory in which he won 80.3% of the total votes. Dr. Ali Mohammed Shein is the Vice President of the United Republic of Tanzania, having previously served in that office since 5th July 2001, prior to the 2005 general elections.

Since independence, Tanzania has held peaceful elections. One-party elections were held in 1965, 1970, 1975, 1980, 1985 and 1990; in the first elections, held in 1962, the ruling party captured all seats, and the de facto one-party state which thus emerged was later regularized by law in 1965. Since 1992, following the constitutional reforms described above, the formation and organization of political parties has been conducted under the Political Parties Act 1992. About 18 political parties have been registered since then, and multiparty general elections were held under the new multiparty system in 1995, 2000 and 2005.

The Legislature, or Parliament, of the United Republic of Tanzania consists of two parts, i.e. the President and the National Assembly. The President exercises the authority vested in him by the Constitution to assent to bills passed by Parliament in order to complete the enactment process before they become law. The National Assembly, which is the principal legislative organ of the United Republic, has authority on behalf of the people to oversee the Government of the United Republic and all its organs and to hold them accountable for their particular duties. Parliament is headed by the Speaker, who is assisted by the Deputy Speaker and by the Clerk as head of the Secretariat of the National Assembly. The National Assembly also has various standing committees to support it in its various functions. The National Assembly of Tanzania is constituted by one chamber, with members elected from various constituencies across mainland Tanzania and Zanzibar. Under the Constitution, women's representation is provided for as a special category, in order to increase the participation of women in national politics. Elections are supervised by the National Electoral Commission, which is established under the Constitution.

The legal system of Tanzania is largely based on the common law, as stated previously, but it also accommodates Islamic and customary laws, the latter sources of law being called upon in personal or family matters. The judiciary is formed by the various courts of judicature and is independent of the government. Tanzania adheres to and respects the constitutional principle of the separation of powers. The Constitution makes provision for the establishment of an independent judiciary and for respect for the principles of the rule of law, human rights and good governance.

The Judiciary in Tanzania has four tiers: the Court of Appeal of the United Republic of Tanzania; the High Courts for mainland Tanzania and Tanzania Zanzibar; the Magistrates Courts, which sit at two levels, i.e. the Resident Magistrates Courts and the District Courts, both of which have concurrent jurisdiction; and the Primary Courts, which are the lowest in the judicial hierarchy. The structure can be illustrated as follows:

- Court of Appeal
- High Court of Tanzania (with its specialized divisions) and High Court of Zanzibar
- Resident Magistrates Courts and District Courts
- Primary Courts

Court of Appeal

The Court of Appeal of Tanzania, established under Article 108 of the Constitution, is the highest court in the hierarchy of the judiciary in Tanzania.
It consists of the Chief Justice and other Justices of Appeal. The Court of Appeal of Tanzania is the court of final appeal at the apex of the judiciary in Tanzania. The High Court of Tanzania (for mainland Tanzania) and the High Court of Zanzibar are courts of unlimited original jurisdiction, and appeals from them go to the Court of Appeal.

The High Court of Tanzania was established under Article 107 of the Constitution and has unlimited original jurisdiction to entertain all types of cases. The High Courts exercise original jurisdiction in matters of a constitutional nature and have powers to entertain election petitions. The High Court's Main Registry (which includes the sub-registries) caters for all civil and criminal matters. The High Court (mainland Tanzania) has established 10 sub-registries in different zones of the country. It also has two specialised divisions, the Commercial Division and the Land Division.

All appeals from subordinate courts go to the High Court of Tanzania. These subordinate courts include the Resident Magistrates Courts and the District Courts, which enjoy concurrent jurisdiction and are established under the Magistrates Courts Act of 1984. The District Courts, unlike the Resident Magistrates Courts, are found throughout all the districts in Tanzania (the district being the local government unit). They receive appeals from the Primary Courts, several of which will be found in one district. The Resident Magistrates Courts are located in the major towns, municipalities and cities which serve as regional (provincial) headquarters.

The Primary Courts are the lowest courts in the hierarchy and are likewise established under the Magistrates Courts Act of 1984. They deal with both criminal and civil cases. Civil cases on property and family law matters to which customary law or Islamic law applies must be initiated at the level of the Primary Court, where the magistrate sits with lay assessors. (The jury system does not apply in Tanzania.)

There are specialized tribunals which form part of the judicial structure. These include, for example, the District Land and Housing Tribunals, the Tax Tribunal and the Tax Appeals Tribunal, the Labour Reconciliation Board, the Tanzania Industrial Court, and military tribunals for the armed forces. Military courts do not try civilians. A party dissatisfied with a decision of one of these tribunals may refer it to the High Court for judicial review.

The High Court of Zanzibar has exclusive original jurisdiction for all matters in Zanzibar, as is the case for the High Court on mainland Tanzania. The Zanzibar court system is quite similar to the mainland system, except that Zanzibar retains Islamic courts, which adjudicate Muslim family cases such as divorce, child custody and inheritance. All other appeals from the High Court of Zanzibar go to the Court of Appeal of Tanzania. The structure of the Zanzibar court system is as follows:

- Court of Appeal of Tanzania
- High Court of Zanzibar
- Magistrates Courts ↔ Kadhi's Appeal Courts
- Primary Courts ↔ Kadhi's Courts

The Court of Appeal of Tanzania handles all appeals from the High Court of Zanzibar. The High Court of Zanzibar has the same structure as the High Court of mainland Tanzania, and it handles all appeals from the subordinate courts below it. These courts have jurisdiction to entertain cases of different natures, except for cases under Islamic law, which they have no jurisdiction to try and which are tried in the Kadhi's courts.
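Because two parallel hierarchies are easy to misread in prose, the appeal routes just described can also be set out as a small data structure. The sketch below, in Python, is an informal reading aid only: the court labels are abbreviations chosen for this example rather than official designations, and the routing simply restates the text above. A plain dictionary suffices here because, on the account given above, each court has a single ordinary appellate destination.

```python
# Minimal sketch of the appeal routes described above (illustrative only;
# labels are informal abbreviations, not official court designations).
APPEAL_ROUTES = {
    # Mainland Tanzania: Primary Court -> District Court (the Resident
    # Magistrates Court has concurrent jurisdiction) -> High Court
    "Primary Court": "District Court",
    "District Court": "High Court of Tanzania",
    "Resident Magistrates Court": "High Court of Tanzania",
    "High Court of Tanzania": "Court of Appeal of Tanzania",
    # Zanzibar
    "Kadhi's Court": "Kadhi's Appeal Court",
    "High Court of Zanzibar": "Court of Appeal of Tanzania",
}

def appeal_path(court):
    """Follow the routing table from a court up to the apex court."""
    path = [court]
    while court in APPEAL_ROUTES:
        court = APPEAL_ROUTES[court]
        path.append(court)
    return path

print(" -> ".join(appeal_path("Primary Court")))
# Primary Court -> District Court -> High Court of Tanzania
#   -> Court of Appeal of Tanzania
```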
Kadhi's Appeal Court

The main role of the Kadhi's Appeal Court of Zanzibar is to hear all appeals from the Kadhi's Courts, which adjudicate on Islamic law. The Kadhi's Courts are the lowest courts in Zanzibar; they adjudicate all Islamic family matters, such as divorce, distribution of matrimonial assets, custody of children and inheritance, but only within Muslim families. The Primary Courts have the same rank as the Kadhi's Courts, and they deal with criminal and civil cases of a customary nature.

There are a number of places where one can obtain legal materials in Tanzania: the Library of the Court of Appeal of Tanzania, the High Court Library, the High Court Land Division Library, the Commercial Division of the High Court Library, the Attorney General's Office at the Ministry of Justice and Constitutional Affairs, the University of Dar es Salaam, the National Archives, the Government Bookshop, the Dar es Salaam Bookshop, the United Nations Information Centre, the International Criminal Tribunal for Rwanda in Arusha, Mzumbe University and many others.

Reported cases in Tanzania can be found in a number of law reports. Between 1957 and 1977, cases from the High Court of Tanzania and the East African Court of Appeal were reported in the East Africa Law Reports. Law Africa, a law report publishing company, has updated the reports for cases from the three East African jurisdictions of Kenya, Uganda and Tanzania up to 2007. Current editions of the law reports can be sourced from Law Africa Publishers, email email@example.com. Their corporate headquarters address is: Law Africa Publishing (K) Ltd, Coop Trust Plaza, 1st Floor, Lower Hill Road, P.O. Box 4260-00100, GPO, Nairobi, Kenya.

The Tanzania Law Reports for 1983 to 1997 can be bought online from firstname.lastname@example.org. A complete set of the statutes of Tanzania, the Laws of Tanzania - Revised Edition of 2002 (21 volumes), including supplementary and subsidiary legislation, can be bought online from the same sources.

The Tanzania Government Printer publishes the government's Official Gazette, which publishes bills, legislative enactments before and after assent, subsidiary legislation, announcements of all official government appointments and the dates of entry into force of all legislation. The same can be ordered through the Government Publications Agency.

Other information on Tanzania can be accessed online. This includes the website of Parliament, where one can access parliamentary information, including Acts and Bills. Other sites include the government's public administration page and the Tanzania Law Reform Commission website.

Textbooks are available in fields such as:

- Constitutional and Administrative Law
- Contract, Commercial and Company Law
- Criminal Law and Procedure
- Civil Law and Procedure
- Family Law, Equity and Succession

To pursue a legal career in Tanzania one may start with a Certificate in Law, particularly for persons who have discontinued secondary education, followed by a Diploma in Law and a degree in law (LL.B), and continue with a Postgraduate Diploma in Law (PGDL), a Master of Laws (LL.M), the degree of Doctor of Philosophy (Ph.D) and the Doctor of Laws (LL.D), which is the highest doctorate awarded. Students who have successfully completed advanced secondary education and who qualify with good academic grades can also join law degree courses offered at any of the universities in the country.
There are a number of universities which offer courses in law, such as the University of Dar es Salaam, Mzumbe University, the Open University, Tumaini University and Ruaha University College (under St. Augustine University), as well as other institutes which offer diplomas in law, such as Mzumbe University and the Lushoto Institute of Judicial Administration. Certificate in Law courses are taught at other institutes of learning, such as the Police College, and have enabled successful candidates to go on to law degree courses.

An LL.B degree holder who has completed a two-year internship and pupillage may apply to sit the Bar examination, which is held three times a year. The Bar examination is an oral interview conducted by a panel of the Council for Legal Education, which is composed of representatives of the Chief Justice of the United Republic of Tanzania, the Attorney General of the United Republic, the Dean of the Faculty of Law of the University of Dar es Salaam, and two representatives of the Law Society. A successful candidate is sworn in and enrolled as an Advocate of the High Court of Tanzania and the courts subordinate to it. Advocates do not have the right of audience before the Primary Courts in Tanzania. More information can be found at the University of Dar es Salaam's website.

Any person enrolled as an advocate under the Advocates Act, Chapter 341 of the Laws of Tanzania [R.E. 2002], and listed as a member of the Tanganyika Law Society, established pursuant to the Tanganyika Law Society Act, Chapter 307 of the Laws of Tanzania [R.E. 2002], can practice law as an Advocate, and shall be subject to the disciplinary rules and etiquette promulgated under the said laws and to the Ethics Committee of the Law Society and the Advocates Disciplinary Committee established under the Advocates Act, Cap. 341. Any inquiries as to the practice of law in Tanzania may be addressed to the Executive Secretary, Tanganyika Law Society; email: email@example.com

* I wish to acknowledge with thanks the industry and time of Christabel Manning in conducting the research and putting together the basic draft for this compilation; without her assistance this article would not have been possible.
Guidelines for small-scale fruit and vegetable processors (FAO Agricultural Services Bulletin - 127) (1997)
Part 2 - Processing for sale
2.6. Contracts with suppliers and retailers

Many small-scale processors buy fruits and vegetables daily from their nearest public market. Although this is simple and straightforward, it creates a number of problems for a business. For example, the processors have little control over the price charged by traders each day, and the large seasonal price fluctuations that characterise these raw materials make financial planning and control over cashflow more difficult (Section 2.3.4). The processor is also unable to schedule the raw materials in the quantities required, and it is common for production to fail to meet a target simply because there are not enough fruits and vegetables for sale on a particular day. Additionally, the processor has no control over the way fruits and vegetables are handled during harvest and transport to the markets, and therefore no influence over the quality of the raw materials that are available (see also Section 2.7.2).

To address these problems, a processor can arrange contracts with either traders or farmers, in an attempt to gain greater control over the amount of raw materials available for processing each day and over their quality and price. This is not at present a common arrangement in most developing countries, possibly because commercial food processing is a relatively recent activity and there is no history of collaboration and formal contracts. However, where this has been done, there are benefits to both processor and suppliers, provided that the arrangements are made honourably and there is mutual trust.

The benefits to farmers are a guaranteed price for their crop, based on a sliding scale of quality, and a guaranteed market when it is harvested. However, the traders who tour an area to buy crops provide a number of benefits to farmers that processors should not ignore when arranging contracts. For example, the traders frequently buy the whole crop, regardless of quality, and either sort it themselves for different markets or sell it on to wholesalers who do the sorting. From the farmers' perspective, they receive payment at the farm, without having to worry about marketing their crop or disposing of substandard items. Although farmers have a guaranteed market by selling to traders, they have virtually no control over the prices offered and can be exploited, particularly at the peak of a growing season when there is an over-supply of a particular crop.

Traders also provide a number of other services that farmers may find difficult to obtain elsewhere: traders may be the only realistic source of farming tools and other inputs such as seeds, and they are also a source of immediate informal credit, which farmers may require to buy inputs or for other needs such as funerals and weddings. Although the interest payments on such loans may be much higher than those charged on commercial loans, farmers often have no access to banks or other lenders and in practice have no choice. In many countries, large numbers of farmers are permanently indebted to traders for their lifetimes and are only released from the debt by the sale of land.
When processors begin to negotiate contracts with farmers, they should therefore be aware that farmers may be unwilling to break their existing arrangements with traders, either because of genuine fears that they will lose the services provided or because they are indebted to traders and have no ability to make other arrangements. The local power of traders should not be under-estimated: their responses may range from a refusal to offer further loans to farmers, a threat not to buy the crop again if sales are made directly to processors, or a demand that farmers repay loans immediately, to, in extreme cases, physical violence.

Despite the problems described above, there are possibilities for processors to agree contracts with individual farmers, or with groups of farmers who may be working cooperatively, to supply fruits and vegetables of a specified variety and quality. Typically a specification would include the variety to be grown, the degree of maturity at harvest, freedom from infection, etc. The price paid for the crop is agreed in advance and may be set between the mid-season lowest point and the pre- and post-season high points. Alternatively, a sliding scale of prices is agreed, based on one or more easily measurable characteristics such as minimum size or an agreed colour range (a worked sketch of such a sliding scale is given at the end of this section), with an independent person being present to confirm the agreement in case of later disputes. The agreement may also specify the minimum or maximum amount that will be bought. In a formal contract, these agreements are written down and signed by both parties, although such formal contracts are rare in most developing countries.

Processors should also consider the other forms of assistance that could be offered to farmers. For example, in some larger-scale processing operations, such as tea and coffee production, processors offer training and an extension service to address problems with the crop as they arise throughout the growing season. Although this may be beyond the resources of small-scale processors, more limited types of assistance may include purchasing tools, fertilizer or other requirements in bulk, with the savings being passed on to farmers. Alternatively, part-payment for the crop can be made in advance so that farmers can buy inputs without the need for credit and the consequent indebtedness.

The advantages to the processor are greater control over the quality of raw materials and the varieties that are planted, some control over the amounts supplied, and an advance indication of likely raw material costs, which assists in both financial control and production planning (Sections 2.3.4 and 2.7.1). The advantage to the farmer is the security of having a guaranteed market for the crop at a known price, together with any other incentives that may be offered by processors. However, this type of arrangement can only operate successfully when both processors and farmers honour their side of the agreement. In the author's experience, there have been a number of occasions when these forms of agreement have been tried but have failed because one party broke its part of the contract. Typically, this happens when farmers sell part of their crop to traders at each end of the season, when the price is higher than that offered by the processor. The expected volume of crop is then not available to the processor and planned production capacity cannot be achieved, seriously damaging both sales and cashflow. Alternatively, the processor delays payment to farmers, resulting in the need for them to take another loan and greater indebtedness.
The processor may also fail to buy the agreed amount of crop and farmers are left to find alternative markets without the option of supplying traders who may refuse to buy it or may offer an insignificant price. A slightly different approach is that in which a processor takes a greater degree of control over production of the crop and specifies the types of fruit or vegetable to be grown, supplies seeds and other inputs, even including labour. In effect farmers are paid by the processor for the use of their land. Although this involves greater organisational complexity and higher operating costs for the processor, the benefits of an assured supply of raw materials having the correct qualities for processing may outweigh the disadvantages, particularly in situations where the demand for a crop outstrips the supply. A further development of the approach is for the processor to rent or buy land and set up a separate operation to supply the processing unit. This often happens in reverse when an existing farmer diversifies into processing but retains the farm. In either case the processor hires the labour and supplies all inputs needed to operate the farm. The bulk of the produce supplies the processing unit with any excess being sold in local markets or to traders.
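The sliding-scale pricing arrangement discussed earlier in this section can be made concrete with a small worked example. The sketch below, in Python, is hypothetical: the crop, the size bands and the per-kilogram prices are invented for illustration and are not taken from the guideline.

```python
# Hypothetical sliding-scale contract price, as discussed above. The size
# bands and per-kg prices are invented for illustration only; a real
# contract would specify its own measurable criteria and values.
GRADE_BANDS = [
    # (minimum fruit diameter in mm, agreed price per kg)
    (60, 0.30),  # premium grade
    (50, 0.25),  # standard grade
    (40, 0.18),  # processing grade
]

def contract_price(diameter_mm):
    """Return the agreed price per kg for fruit of the given size."""
    for min_size, price in GRADE_BANDS:
        if diameter_mm >= min_size:
            return price
    return 0.0  # below the smallest band, the contract may allow rejection

# Each delivered lot is graded and priced, so both parties can verify the total.
delivery = [(55, 120.0), (62, 80.0), (45, 40.0)]  # (diameter mm, kg) per lot
total = sum(kg * contract_price(d) for d, kg in delivery)
print("Payment due: %.2f" % total)  # 30.00 + 24.00 + 7.20 = 61.20
```

The value of such a scheme is that the measurable characteristic, here fruit size, can be checked by either party, or by the independent witness the text recommends, if a dispute arises later.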
The Department of Energy (DOE) is committed to expanding the conversation on energy issues and upholding open government principles of transparency, participation and collaboration. One of the key ways we seek to accomplish this is through the use of social media. "Social media" is a broad term for the wide spectrum of interactive and user-driven content technologies (e.g., social networks, blogs, wikis, podcasts, online videos, etc.). Like many government agencies, the Department is exploring how best to use social media to accomplish our mission, engage the public in discussion, include people in the governing process and collaborate internally and externally. The Office of Digital Strategy and Communications (formerly the New Media Office) in the Office of Public Affairs is leading the Department's social media efforts.

The purpose of this document is to provide guidance on how to take advantage of these social media platforms by defining the broad Department of Energy vision and strategy for social media use, detailing the means by which to contribute to the Department's social media presence, outlining the various rules of the road for utilizing social media in the government space and, last but not least, sharing best practices for various social media tools. It is worth noting that while the primary focus of this guidance is on external-facing social media, many of the principles and requirements outlined below can be used as a roadmap for inward-facing social media activities.

Vision and Strategy

"The Department of Energy has an urgent role to play in creating a new, clean energy economy that will spark job creation and reduce our dependence on oil, while cutting our greenhouse gas emissions. The Department will also meet its critical responsibilities of reducing nuclear dangers and environmental risks. The foundation of all our work is a commitment to lead the world in science, technology and engineering." - Secretary Steven Chu

The Department of Energy's mission is to become the "department of innovators" and discover the solutions to power and secure America's future now. We're building the new clean energy economy, reducing nuclear dangers and environmental risk and expanding the frontiers of knowledge with innovative scientific research. The objective of the Digital Office in Public Affairs is not only to communicate our mission online but also to develop and foster relationships with the public, outside stakeholders and each other around that mission. With that focus, the primary goals of the Digital Office are to amplify the Department's message, promote transparency and accessibility and provide services and engagement opportunities. Social media is integral to achieving these goals, providing the platform for real-time conversation, collaboration and idea sharing.

You know how the saying goes - our whole is much greater than the sum of our parts. The entire Department benefits from a strong enterprise brand. And in many ways, that enterprise brand and culture of brand cultivation already exists throughout the Department, powered by the Office of Public Affairs. We're just extending it online and into the social media sphere. The Digital Office in the Office of Public Affairs is responsible for managing the Department's enterprise brand online, including social media. Leading by example, this office will push the Department into new social media spaces and drive innovation and online communication programming in this arena.
Offices and labs across the Department should help build the enterprise brand by contributing content and ideas to the Digital Office. A strong, well developed, supported and executed enterprise social media brand is the primary tier of the Department's social media strategy. The Digital Office also serves as a support center for driving the second core component of the Department's social media strategy: empowering social media innovation across the Department. Program offices, field offices and labs are encouraged to take full advantage of the opportunities social media offers. The Digital Office provides clear guidance on how to do so -- assisting with compliance with federal rules regulating social media in government, sharing social media best practices and helping offices develop and execute high quality social media strategies.

Contributing to the Department of Energy Enterprise Social Media Accounts

The foundation of the Department of Energy enterprise social media brand is our mission - and the work being done every day across the Department to achieve that mission drives the content for our social media accounts. Offices and labs are enthusiastically encouraged to contribute to our enterprise social media accounts and share what they're doing to achieve our mission. These contributions are integral to the success of our enterprise brand. One of the primary reasons Department of Energy enterprise social media accounts were established was to break down some of the resource and regulatory barriers to communicating in this sphere. In that spirit, it's also simple to contribute to our core enterprise accounts: YouTube, Flickr, Twitter and Facebook. Just submit your suggestion to the Digital Office in the Office of Public Affairs via the Department of Energy Social Media Hub (http://energy.gov/socialmedia) and a member of the Digital Office will follow up as needed within a reasonable timeframe.

Establishing an Official DOE Social Media Account

To streamline the process of social media account creation, a dedicated Department of Energy Social Media Hub (http://energy.gov/socialmedia) has been developed to empower program offices and labs to review the social media and application vendors with whom we currently have GSA-approved terms of service and to request permission to create a new account or verify an existing one. All social media sites require active oversight to ensure proper management. Department personnel should take these commitments into account when weighing whether to create a new social media presence. Before requesting an account, personnel should consult with the appropriate actors within their program office or lab to ensure that the proper authorizations and procedures are in place. This includes reaching out to supervisors and the points of contact for records management, privacy, communications/new media and the program's representative from General Counsel.

To be granted an account or have your current account recognized by the Department, fill out the Social Media Request form, which includes fields such as:

For all requests
- Name of the person submitting the request
- Title of the person submitting the request/office
- Contact e-mail
- Contact phone number
- Are you authorized to make this request?
- Social media application(s) you want to utilize
- Existing account? (y/n)
- Justification for needing an account
- Proposed or current account username/URL
- Proposed or current account bio
- Criteria for following others, friending others, etc.
- Content and feedback strategy
- Staff management plan, including post frequency
- Sample post (if applicable)

For new accounts only
- Desired launch date
- Roll-out plan

For existing accounts only
- Length of existence
- Have you completed a Privacy Impact Assessment (PIA)?
- Are you currently covered under DOE's amended terms of service?
- What is your current records process?

The Digital Office in the Office of Public Affairs will assess and respond to requests within a reasonable time period. The Digital Office approves accounts and will assist as needed with implementation and compliance. Accounts that consistently fail to meet the best practices outlined in this document are subject to review by the Digital Office, which will work with supervisors in that program office or lab to determine appropriate next steps. You can also use the online form to request that the Department pursue a terms of service agreement with a social media tool or application that is offered by apps.gov but not currently part of our portfolio. Should you determine that you would like to forgo the account creation process and simply have your content featured as part of the larger enterprise presence, you can contact the Digital Office to discuss options for assisting with outreach and amplifying your message.

From the Privacy Act of 1974 to the Office of Management and Budget policies on third-party sites and multi-session cookies, federal agencies have specific requirements regarding privacy and Personally Identifiable Information (PII). These policies require the Department to file Privacy Impact Assessments (PIAs) in order to utilize social media platforms like Facebook or Uservoice or Twitter for official business. The Digital Office in the Office of Public Affairs has filed several PIAs for the Department as a whole in order to empower others to take advantage of these communication tools. They include the following:

- Google Analytics

Personnel seeking to verify existing social media presences or establish new ones on the platforms above must consult the existing PIA for that platform to make sure that presence is compliant. If you're interested in using a social media platform that's not on this list, or have questions about any of the PIAs above, reach out to the Digital Office for assistance. And if you have questions about federal privacy requirements, contact the privacy officer assigned to your office.

The Freedom of Information Act (FOIA), 5 U.S.C. 552, provides a right of access to federal agency records, including any information created or maintained by the Department. Voluntary disclosure of information through a social media platform outside the federal government may waive the application of statutory privileges under federal law and compromise the Department's ability to withhold such information in the future. If you are concerned about making information publicly available through social media or have any questions regarding federal information law, contact the Office of General Counsel or the Office of Public Affairs.

Comment Policy and Moderation

The Department of Energy respects different opinions and hopes to foster conversation within our online presences. To that end, the Department does not pre-moderate users' comments on our enterprise accounts. This means that users' comments are automatically published, but they may be removed by a Department of Energy official if they violate our commenting policy.
Comments may be removed from Department of Energy blogs or social media accounts if they:

- Contain obscene, indecent, or profane language;
- Contain threats or defamatory statements;
- Contain hate speech directed at race, color, sex, sexual orientation, national origin, ethnicity, age, religion, or disability;
- Contain sensitive or personally identifiable information; and/or
- Promote or endorse specific commercial services or products.

All Department of Energy generated content is subject to National Archives and Records Administration (NARA) requirements for retention, storage and publication. Federal records management policies regarding social media are still evolving. The CIO has issued interim guidance for the Department of Energy regarding the management of social media records. We can expect additional updates to these policies as our work continues to evolve in the social media sphere. For specific questions regarding records management, contact the records management officer assigned to your office.

Access to and Use of Social Media

The Department of Energy encourages the responsible use of social media consistent with current laws, policies and guidance that govern information and information technology. Department organizations will not arbitrarily ban access to or the use of social media. Department of Energy personnel are encouraged to access and contribute content on social media sites in their official capacity. However, personnel should obtain supervisory approval prior to creating or contributing significant content to external social media sites or to engaging in recurring exchanges with the public. Employees are subject to the applicable Standards of Conduct for Employees of the Executive Branch (5 C.F.R. Part 2635) and the Hatch Act (5 U.S.C. 7321-7326), which governs partisan political activity of Executive Branch employees. Personnel are encouraged to review the Office of Special Counsel's "Frequently Asked Questions Regarding Social Media and the Hatch Act" for further guidance or to contact the Office of the Assistant General Counsel for General Law (GC-77). Non-public, sensitive, Personally Identifiable Information (PII) and classified information should not be disclosed on public social media platforms. Personal use of social media while on government time is subject to DOE Order 203.1, Limited Personal Use of Office Equipment Including Information Technology, which provides guidance on "appropriate and inappropriate" use of Government resources. If you have questions about this section, please contact GC-77.

Security Requirements and Risk Management

The Federal CIO Council's Guidelines for Secure Use of Social Media by Federal Departments and Agencies outlines recommendations for using social media technologies in a manner that minimizes risk while also embracing the opportunities these technologies provide. Federal Government information systems are targeted by persistent, pervasive, aggressive threats. In order to defend against rapidly evolving social media threats, Department of Energy program offices, laboratories, and sites should adopt a defense-in-depth, multi-layered risk management approach, addressing risks to the user, risks to the Department and risks to the federal infrastructure.
Organizations should incorporate risk mitigation strategies such as (1) controlled access to social media, (2) user awareness and training, (3) user rules of behavior, (4) host and/or network controls and (5) secure configuration of social media software when determining their overall risk tolerance for the use of social media technologies. Cyber security personnel should be consulted before the implementation of any social media technology, to provide the opportunity to incorporate the new technology into the current risk management framework. In addition, cyber security staff should help determine secure technical configurations and monitor published vulnerabilities in social media software. For questions regarding cyber security, contact your security officer.

In the event of an emergency, social media tools should be utilized in accordance with the forthcoming Emergency Public Affairs Plan, which calls for a coordinated messaging effort between the Headquarters Office of Public Affairs and any programs, sites or facilities that may be involved:

"When Department of Energy headquarters or a DOE site/facility declares an emergency, it is expected to meet the public information obligations of the Department of Energy Orders, guidance and requirements and the comprehensive emergency management plans developed by each site. This guidance and requirement includes the timely provision of media informational materials to the Public Affairs staff at Department headquarters. Every effort should be made by the designated public affairs officers at the site level to consult with the Headquarters Public Affairs Office on the initial dissemination of information to the public and media."

From DOE O 151.1C, "Comprehensive Emergency Management System":

"Initial news releases or public statements must be approved by the Cognizant Field Element official responsible for emergency public information review and dissemination. Following initial news releases and public statements, updates must be coordinated with the DOE/NNSA (as appropriate) Director of Public Affairs and the Headquarters Emergency Manager."

For more information on emergency communication protocols, reference the Emergency Public Affairs Plan or contact your public affairs representative.
Global climate change presents challenges associated with balancing potential environmental impacts with a wide variety of economic, technical, and lifestyle changes that may be necessary to address the issue. A government-industry task force is working to develop technologies and infrastructure for carbon capture and sequestration with the goal of reducing greenhouse gas (GHG) emissions that can contribute to global climate change. US Outer Continental Shelf oil and gas development opponents complained that the Department of the Interior and Minerals Management Service’s preliminary final 5-year OCS plan goes too far, while proponents declared that it doesn’t go far enough. Newark East field in North Texas, center of the Mississippian Barnett shale play, was Texas’s largest gas-producing field in 2006 and could become the largest in terms of ultimate recovery in the Lower 48. A recent study of the European refining industry from Concawe (Conservation of Clean Air and Water in Europe) concludes that the imbalance between demand for gasoline and middle distillates will continue to increase. Changes in the vertical relative position of two liquids pipelines laid in the same trench (one crude, one products) produce only small changes in the temperature of the crude oil, allowing this approach to be used as a viable alternative to dual trenching.
"Almanac"—the word comes from the Arabic al-manakh, meaning "the calendar," earlier "the weather," deriving ultimately from ma-, "a place," and nakha, "to kneel," or a place where camels kneel, a seasonal stopping place, a camp or settlement. Coming as it does from a nomadic human society, it is a fitting word as we talk about our bird life, and their travels and destinations, all as they are influenced by the season of the year. In order to understand the vital interplay of time and space as they determine which birds they'll bring us, let us first set aside time to deal with space. For birds, Ohio's longitude has less to do with time, except as it determines the diurnal rhythms of night and day, and as it figured eons ago in the shifting of continents, where our present longitude marks our place between mountain ranges, at the edge of the feathering-out of the great prairies and the great forests, and consequently midway between the great north- and southbound rivers of birds in the Mississippi and Atlantic flyways. Our latitude, by contrast, is all about time for birds—their seasonal movements north and south, their life cycles along the way, the timing of migrations and even vagrancy, the changing length of daylight and the intensity of Earth's magnetic fields, even their habitats as developed in the topography of our land as formed by mile-high glaciers moving latitudinally thousands of years ago, forming our plains and hills, Lake Erie, and the Ohio River. Survival for birds means successful breeding, and for this success timing is everything. For migrants, early arrival at the breeding grounds is balanced against the risk of arriving too soon to find adequate food; attempting a second brood must be balanced by the risk of an early reduction in food sources. The phenology of predators, frosts, food sources, leafing of local plants, rain cycles, etc., all affect breeding success, and the species we see have successfully adapted to these influences to remain with us today. Humans have recently (here, over the past two hundred years) radically influenced some of these influences, upsetting delicate balances, and our bird life is changing as a result. We have removed some predators, and encouraged the proliferation of others. We have apparently caused climatic warming, with earlier springs and later winters. We have introduced exotic animals and plants. We have bulldozed and burned and filled in and poisoned bird habitats. We allow birds to be killed in great numbers, but not, we reassure ourselves, in numbers too great to diminish them. Our effect on the life cycles of birds is dramatic, ongoing, and uncertain as to ultimate outcome. Reassuringly, it is still possible to discern primeval patterns of birds' natural life cycles throughout the year. Birders find the continuation of these cycles deeply satisfying as a continuous manifestation of the renewal of life, and a way to measure and better understand, during our short span, the passage of time. Fifty or more species of our birds remain pretty much equally abundant year-round, present in good numbers in every month. Many are the most familiar of our familiar birds, but even to them the calendar brings profound changes. The crows, robins, blue jays, and song sparrows we see year-round are not always the same birds, as these are at least in part migratory species, with different cohorts inhabiting different places at different times of year. 
Their behavior, too, may change radically over the calendar year: robins that are solitary worm-eaters in summer will flock in winter to eat fruit. The breeding cycle, with all its changes over time, governs all—migrating, singing, incubating, fledging, flocking, molting. Many species have expanded their ranges over recent time—mockingbirds, titmice, cardinals, house finches—and many once-common birds have receded beyond Ohio's borders: prairie-chickens, Bachman's sparrows, and Bewick's wrens are no longer to be found here. Time has claimed some of our birds forever—the passenger pigeon, the Eskimo curlew, the Carolina parakeet—but there is time to save the rest.
1933 Unemployment Relief

New! Search the database of more than 100,000 individuals listed in the Unemployment Relief records. There are 27 Oklahoma counties included. Search now »

1940 US Census

The 1940 US Federal Census records for Oklahoma have now been indexed. Search and view census records online now at familysearch.org/1940census/1940-census-oklahoma/

1890 Oklahoma Territorial Census

The OHS Research Center has completed the index to the 1890 Oklahoma Territorial Census. While the previous index listed only the head of household, this index includes every individual included in the census. Most of the 1890 US Federal Census was destroyed by fire in 1921, making the 1890 Oklahoma Territorial Census one of the few remaining census records from the time. The Oklahoma Historical Society Research Division collections include the original 1890 OT Census pages. Search the index »

Own the Complete 1890 Oklahoma Territorial Census

Now you can access the 1890 Oklahoma Territorial Census in its entirety as part of 1890 Resources, a newly released DVD from the OHS Research Center. This easy-to-use disc includes:

- A complete index to the 1890 OT Census and more than 1,200 color pages of the census, scanned from the original documents. Just locate your ancestor in the index and click on the page number to see the original document. View a sample census page.
- Smith's First Directory of Oklahoma Territory for the Year Commencing August 1, 1890, complete with an index/name-finding list linked to color scans of the entire directory. View a sample page from Smith's.
- A PDF of Bunky's The First Eight Months of Oklahoma City. Beginning with the land run of 1889, this publication explores area businesses, churches, newspapers, politics and citizens.

This resource is available for $45 plus $2 shipping & handling. To order, use our printable order form or call (405) 522-5225 - please have your credit card ready.

Special Census on Microfilm at OHS

- 1890 Oklahoma Territorial Census
- 1860 Lands West of Arkansas
- 1890 Union Veterans & Widows Census
- 1900 US Census - Oklahoma Territory
- 1900 US Census - Indian Schedule
- Various Mortality Schedules
- Additional special censuses for numerous states

Online Subscription Services

The Research Center offers free access to Ancestry Library Edition® and HeritageQuest Online™. These sites allow patrons visiting the Research Center to search, view and print various items pertaining to genealogy. Ancestry Library® offers US Census records, ship logs and passenger indexes, WWI draft registration cards, vital records, and the Social Security Death Index. HeritageQuest™ also includes US Census records, as well as Revolutionary War pension & bounty-land warrant applications; the Freedman's Bank (1856-1874); and PERSI (Periodical Source Index), an index of almost 2 million genealogical and local history articles.
Dear OncoLink "Ask The Experts,"

Carolyn Vachani RN, MSN, AOCN, OncoLink's Medical Correspondent, responds:

Magnification colonoscopy uses fiberoptic technology to magnify the view of the colon to about 75 to 100 times its normal size. As a point of comparison, standard colonoscopy uses 45-fold magnification. This test can be particularly helpful in diagnosing "flat adenomas" (cancers that do not form as a polyp) and dysplasia (abnormal-appearing tissue). During the colonoscopy, the physician sprays a dye into the colon which highlights areas of dysplasia, based on the shape and appearance of the colon surface (called the "pit pattern") and the uptake of dye. Magnification is needed to help the physician better and more fully visualize any areas of dye deposition.

This procedure is used to screen for dysplasia and/or cancer in patients who are at high risk for colon cancer. High-risk patients typically have chronic colon inflammation (e.g., inflammatory bowel disease such as Crohn's disease or ulcerative colitis, primary sclerosing cholangitis, etc.). Magnification colonoscopy is not yet used for general polyps. The test is currently available only at a limited number of medical centers, and the physician performing it must be trained to recognize the "pit patterns" that signify dysplasia. If this sounds like a test for you, I would try the Gastroenterology department at large academic centers in your area.
A safe place to play

If you ask your child what he likes most about school, the answer you are likely to get is, "Recess!" It is important for kids to be active, get some fresh air, and release their pent-up energy during and after the school day, and playgrounds are a great place to do so. However, faulty equipment, unsafe surfaces, and lack of appropriate supervision can result in injury. Each year, more than 200,000 children are treated in hospital emergency rooms for playground-related injuries. Schools are addressing this by developing rules for safe outdoor play on and off the playground. There are also a few things that you should keep in mind, and convey to any other caregivers of your child, about play on and around the playground.

Tips for injury-free outdoor fun

- Know the school rules. Depending on the amount of outdoor space, the size of the student body, and staff limitations, your child's school may limit the games students can play on the playground. Games like tag and unsupervised sports such as dodgeball are increasingly being banned due to injuries. Find out what your school's playground rules are and explain them to your child. If your child wants to take a ball, jump rope, or other equipment to share with friends, be sure to check with the school first.
- Find out about supervision. Adequate supervision is the best way to reduce the number of injuries on the playground. The National Program for Playground Safety advises that children be supervised when playing on playground structures, whether these are located in your home, in the community, or at school. Adults in charge should be able to direct children to use playground equipment properly and respond to emergencies appropriately. Make sure your child is supervised on the playground at all times, at and outside of school.
- Know what is age appropriate. The Consumer Product Safety Commission requires that playground equipment be separated for 2-5 year-olds and 5-12 year-olds. It is recommended that children be further separated according to age group: Pre-K, grades K-2, grades 3-4, and grades 5-6. Most schools separate outdoor play times by grade. If you take your child to the playground, make sure he is playing on equipment that he is able to use comfortably. Encourage your child to use equipment appropriately and to take turns. Beware of clothing that could get caught or that your child could trip over, such as untied shoelaces, hoods, or drawstrings.
- Keep an eye on the equipment. Before you let your child play on playground structures, check the equipment and its surrounding area to make sure that it is safe. Check the structure to make sure it is not damaged or broken. Look out for any objects that can cause injuries, such as broken glass, rocks, animal feces, or other debris. According to the National Program for Playground Safety, the surface under a play structure should be of loose or soft materials that will cushion a fall, such as wood chips or rubber.
- Know how to respond. Even a fall of one foot can cause a broken bone or concussion. If your child is injured while playing on the playground, check him carefully for bruises. If you are not sure of the extent of your child's injury, take him to the pediatrician or the emergency room. If you think your child may have a head or neck injury, or if he appears to have a broken bone and you are afraid to move him, call for help.
For a playground safety checklist, visit the Consumer Product Safety Commission at http://www.cpsc.gov. This information was compiled by Sunindia Bhalla and reviewed by the Program Staff of the Massachusetts Children's Trust Fund.
Online Physiology Degree

An Online Physiology Degree is a degree in Physiology provided by online universities situated in different parts of the world. With this kind of degree you can remain in some remote corner of the world and take lessons from universities located elsewhere. These degrees are very much in demand and are recognized by organizations around the world.

What is Physiology? The term Physiology refers to the study of the mechanical, physical, and biochemical functions of living organisms. In other words, the physical, mechanical, and biochemical functions that take place in the body of any living organism can be studied in the subject of Physiology. Traditionally, Physiology can be divided into two broader parts – plant physiology and animal physiology. Physiology degrees are much sought after by students interested in the life sciences. However, the principles followed in physiology are universal, irrespective of any particular organism. Human physiology is an important part of the study of animal physiology, too. There are some other major branches that originated from physiology and that can be studied individually nowadays; these include biochemistry, paleobiology, biomechanics, pharmacology, and biophysics. In an Online Physiology Degree you may also have a chance to study a few of these branches.

Who is eligible to study for an Online Physiology Degree? An Online Physiology Degree is also beneficial to working professionals who find it hard to devote a certain amount of time every day, or even once or twice a week, to part-time courses. An Online Physiology Degree allows them to study at night or even between working hours. With the course material available online, an Online Physiology Degree gives them the option of flexible timings for their studies. Thus, Online Physiology Degree courses are very beneficial to them.

Why choose an Online Physiology Degree? Opting for an Online Physiology Degree can be a wise decision for a busy professional as well as for any modern-day student. The course material used in an Online Physiology Degree is designed with a global approach in mind, so the degrees are usually recognized across the globe. Online Physiology Degree courses also offer you the flexibility to choose your own time and pace of study. Thus an Online Physiology Degree course has an edge over the conventional degrees that are available. To know more about online science degrees, keep surfing the links of ONLINEDEGREESHUB.
Analogue Tachographs: A Brief History

Note: Since May 2006, analogue tachographs are being phased out in favour of digital versions which record data on a smart card. Find out more about Digital Tachographs.

A tachograph displays vehicle speed and makes a record of all speeds during an entire trip. The name ‘tachograph’ comes from the graphical recording of the tachometer, or engine speed. Analogue units record the driver’s periods of duty on a waxed paper disc – a tachograph chart. An ink pen records the engine speed on circular graph paper that automatically advances according to the internal clock of the tachograph. This graph paper is removed on a regular basis and maintained by the fleet owner for government records.

In the 1950s, there was an increasing number of road accidents attributed to sleep-deprived and tired truck drivers. Concerns for safety led to the rapid spread of the tachograph in the commercial vehicle market, but at this point its use was voluntary, not legislated. Fleet operators then found that tachographs helped them to monitor driver hours more reliably, and safety also improved.

In Europe, use of tachographs has been compulsory for all trucks over 3.5 tonnes since 1970. For safety reasons, most countries also have limits on the working hours of drivers of commercial vehicles. Tachographs are used to monitor drivers’ working hours and ensure that appropriate breaks are taken. Legislation relating to tachographs has been in force in the UK for 16 years.

The tachograph is now an indispensable tool for managing fleets and ensuring the safety of drivers of commercial vehicles. Find out more about Digital Tachographs.
LESSON ONE: Transforming Everyday Objects

Marcel Duchamp: Bicycle Wheel, bicycle wheel on wooden stool, 1963 (Henley-on-Thames, Richard Hamilton Collection); © 2007 Artists Rights Society (ARS), New York/ADAGP, Paris, photo credit: Cameraphoto/Art Resource, NY

Man Ray: Rayograph, gelatin silver print, 29.4×23.2 cm, 1923 (New York, Museum of Modern Art); © 2007 Man Ray Trust/Artists Rights Society (ARS), New York/ADAGP, Paris, photo © The Museum of Modern Art, New York

Meret Oppenheim: Object (Le Déjeuner en fourrure), fur-lined cup, diam. 109 mm, saucer, diam. 237 mm, spoon, l. 202 mm, overall, h. 73 mm, 1936 (New York, Museum of Modern Art); © 2007 Artists Rights Society (ARS), New York/ProLitteris, Zurich, photo © Museum of Modern Art/Licensed by SCALA/Art Resource, NY

Dada and Surrealist artists questioned long-held assumptions about what a work of art should be about and how it should be made. Rather than creating every element of their artworks, they boldly selected everyday, manufactured objects and either modified and combined them with other items or simply selected them and called them “art.” In this lesson students will consider their own criteria for something to be called a work of art, and then explore three works of art that may challenge their definitions.

Students will consider their own definitions of art. Students will consider how Dada and Surrealist artists challenged conventional ideas of art. Students will be introduced to Readymades and photograms.

Ask your students to take a moment to think about what makes something a work of art. Does art have to be seen in a specific place? Where does one encounter art? What is art supposed to accomplish? Who is it for? Ask your students to create an individual list of their criteria. Then, divide your students into small groups to discuss and debate the results and come up with a final list. Finally, ask each group to share with the class what they think is the most important criterion and what is the most contested criterion for something to be called a work of art. Write these on the chalkboard for the class to review and discuss.

Show your students the image of Bicycle Wheel. Ask your students if Marcel Duchamp’s sculpture fulfills any of their criteria for something to be called a work of art. Ask them to support their observations with visual evidence.

Inform your students that Duchamp made this work by fastening a bicycle wheel to a kitchen stool. Ask your students to consider the fact that Duchamp rendered these two functional objects unusable. Make certain that your students notice that there is no tire on the bicycle wheel.

To challenge accepted notions of art, Duchamp selected mass-produced, often functional objects from everyday life for his artworks, which he called Readymades. He did this to shift viewers’ engagement with a work of art from what he called the “retinal” (there to please the eye) to the “intellectual” (“in the service of the mind.”) [H. H. Arnason and Marla F. Prather, History of Modern Art: Painting, Sculpture, Architecture, Photography (Fourth Edition) (New York: Harry N. Abrams, Inc., 1998), 274.] By doing so, Duchamp subverted the traditional notion that beauty is a defining characteristic of art.

Inform your students that Bicycle Wheel is the third version of this work. The first, now lost, was made in 1913, almost forty years earlier.

Because the materials Duchamp selected to be Readymades were mass-produced, he did not consider any Readymade to be “original.” Ask your students to revisit their list of criteria for something to be called a work of art. Ask them to list criteria related specifically to the visual aspects of a work of art (such as “beauty” or realistic rendering).

Duchamp said of Bicycle Wheel, “In 1913 I had the happy idea to fasten a bicycle wheel to a kitchen stool and watch it turn.” [John Elderfield, ed., Studies in Modern Art 2: Essays on Assemblage (New York: The Museum of Modern Art, 1992), 135.] Bicycle Wheel is a kinetic sculpture that depends on motion for effect. Although Duchamp selected items for his Readymades without regard to their so-called beauty, he said, “To see that wheel turning was very soothing, very comforting . . . I enjoyed looking at it, just as I enjoy looking at the flames dancing in a fireplace.” [Francis M. Naumann, The Mary and William Sisler Collection (New York: The Museum of Modern Art, 1984), 160.] By encouraging viewers to spin Bicycle Wheel, Duchamp challenged the common expectation that works of art should not be touched.

Show your students Rayograph. Ask your students to name recognizable shapes in this work. Ask them to support their findings with visual evidence. How do they think this image was made?

Inform your students that Rayograph was made by Man Ray, an American artist who was well-known for his portrait and fashion photography. Man Ray transformed everyday objects into mysterious images by placing them on photographic paper, exposing them to light, and oftentimes repeating this process with additional objects and exposures. When photographic paper is developed in chemicals, the areas blocked from light by objects placed on the paper earlier on will remain light, and the areas exposed to light will turn black. Man Ray discovered the technique of making photograms by chance, when he placed some objects in his darkroom on light-sensitive paper and accidentally exposed them to light. He liked the resulting images and experimented with the process for years to come. He likened the technique, now known as the photogram, to “painting with light,” calling the images rayographs, after his assumed name.

Now that your students have identified some recognizable objects used to make Rayograph, ask them to consider which of those objects might have been translucent and which might have been opaque, based on the tone of the shapes in the photogram.

Now show your students Meret Oppenheim’s sculpture Object (Déjeuner en fourrure). Both Rayograph and Object were made using everyday objects and materials not traditionally used for making art, which, when combined, challenge ideas of reality in unexpected ways. Ask your students what those everyday objects are and how they have been transformed by the artists. Ask your students to name some traditional uses for the individual materials (cup, spoon, saucer, fur) used to make Object. Ask your students what choices they think Oppenheim made to transform these materials and objects.

In 1936, the Swiss artist Oppenheim was at a café in Paris with her friends Pablo Picasso and Dora Maar. Oppenheim was wearing a bracelet she had made from fur-lined, polished metal tubing. Picasso joked that one could cover anything with fur, to which Oppenheim replied, “Even this cup and saucer.” [Bice Curiger, Meret Oppenheim: Defiance in the Face of Freedom (Zurich, Frankfurt, New York: PARKETT Publishers Inc., 1989), 39.]

Her tea was getting cold, and she reportedly called out, “Waiter, a little more fur!” Soon after, when asked to participate in a Surrealist exhibition, she bought a cup, saucer, and spoon at a department store and lined them with the fur of a Chinese gazelle. [Josephine Withers, “The Famous Fur-Lined Teacup and the Anonymous Meret Oppenheim” (New York: Arts Magazine, Vol. 52, November 1977), 88-93.]

Duchamp, Oppenheim, and Man Ray transformed everyday objects into Readymades, Surrealist objects, and photograms. Ask your students to review the images of the three artworks in this lesson and discuss the similarities and differences between these artists’ transformation of everyday objects.

Art and Controversy

At the time they were made, works of art like Duchamp’s Bicycle Wheel and Oppenheim’s Object were controversial. Critics called Duchamp’s Readymades immoral and vulgar—even plagiaristic. Overwhelmed by the publicity Object received, Oppenheim sank into a twenty-year depression that greatly inhibited her creative production.

Ask your students to conduct research on a work of art that has recently been met with controversy. Each student should find at least two articles that critique the work of art. Have your students write a one-page summary of the issues addressed in these articles. Students should consider how and why the work challenged and upset critics. Was the controversial reception related to the representation, the medium, the scale, the cost, or the location of the work? After completing the assignment, ask your students to share their findings with the class. Keep a list of shared critiques among the work’s various receptions.

Make a Photogram

If your school has a darkroom, have your students make photograms. Each student should collect several small objects from school, home, and the outside to place on photographic paper. Their collection should include a range of translucent and opaque objects to allow different levels of light to shine through. Students may want to overlap objects or use their hands to cover parts of the light-sensitive paper. Once the objects are arranged on the paper in a darkroom, have your students expose the paper to light for several seconds (probably about five to ten seconds, depending on the level of light), then develop, fix, rinse, and dry the paper. Allow for a few sheets of photographic paper per student so that they can experiment with different arrangements and exposures. After the photograms are complete, have your students discuss the different results that they achieved. Students may also make negatives of their photograms by placing them on top of a fresh sheet of photographic paper and covering the two with a sheet of glass. After exposing this to light, they can develop the paper to get the negative of the original photogram. Encourage your students to try FAUXtogram, an activity available on Red Studio, MoMA's Web site for teens.

GROVE ART ONLINE: Suggested Reading

Below is a list of selected articles which provide more information on the specific topics discussed in this lesson.
What is endometriosis?

Endometriosis (say "en-doh-mee-tree-OH-sus") is a problem many women have during their childbearing years. It means that a type of tissue that lines your uterus is also growing outside your uterus. This does not always cause symptoms. And it usually isn't dangerous. But it can cause pain and other problems.

The clumps of tissue that grow outside your uterus are called implants. They usually grow on the ovaries, the fallopian tubes, the outer wall of the uterus, the intestines, or other organs in the belly. In rare cases they spread to areas beyond the belly.

How does endometriosis cause problems?

Your uterus is lined with a type of tissue called endometrium (say "en-doh-MEE-tree-um"). Each month, your body releases hormones that cause the endometrium to thicken and get ready for an egg. If you get pregnant, the fertilized egg attaches to the endometrium and starts to grow. If you do not get pregnant, the endometrium breaks down, and your body sheds it as blood. This is your menstrual period.

When you have endometriosis, the implants of tissue outside your uterus act just like the tissue lining your uterus. During your menstrual cycle, they get thicker, then break down and bleed. But the implants are outside your uterus, so the blood cannot flow out of your body. The implants can get irritated and painful. Sometimes they form scar tissue or fluid-filled sacs (cysts). Scar tissue may make it hard to get pregnant.

What causes endometriosis?

Experts don't know what causes endometrial tissue to grow outside your uterus. But they do know that the female hormone estrogen makes the problem worse. Women have high levels of estrogen during their childbearing years. It is during these years—usually from their teens into their 40s—that women have endometriosis. Estrogen levels drop when menstrual periods stop (menopause). Symptoms usually go away then.

What are the symptoms?

The most common symptoms are:
- Pain. Where it hurts depends on where the implants are growing. You may have pain in your lower belly, your rectum or vagina, or your lower back. You may have pain only before and during your periods or all the time. Some women have more pain during sex, when they have a bowel movement, or when their ovaries release an egg (ovulation).
- Abnormal bleeding. Some women have heavy periods, spotting or bleeding between periods, bleeding after sex, or blood in their urine or stool.
- Trouble getting pregnant (infertility). This is the only symptom some women have.

Endometriosis varies from woman to woman. Some women don't know that they have it until they go to see a doctor because they can't get pregnant or have a procedure for another problem. Some have mild cramping that they think is normal for them. In other women, the pain and bleeding are so bad that they aren't able to work or go to school.

How is endometriosis diagnosed?

Many different problems can cause painful or heavy periods. To find out if you have endometriosis, your doctor will:
- Ask questions about your symptoms, your periods, your past health, and your family history. Endometriosis sometimes runs in families.
- Do a pelvic exam. This may include checking both your vagina and rectum.

If it seems like you have endometriosis, your doctor may suggest that you try medicine for a few months. If you get better using medicine, you probably have endometriosis.

To find out if you have a cyst on an ovary, you might have an imaging test like an ultrasound, an MRI, or a CT scan. These tests show pictures of what is inside your belly.

The only way to be sure you have endometriosis is to have a type of surgery called laparoscopy (say "lap-uh-ROSS-kuh-pee"). During this surgery, the doctor puts a thin, lighted tube through a small cut in your belly. This lets the doctor see what is inside your belly. If the doctor finds implants, scar tissue, or cysts, he or she can remove them during the same surgery.

How is it treated?

There is no cure for endometriosis, but there are good treatments. You may need to try several treatments to find what works best for you. With any treatment, there is a chance that your symptoms could come back.

Treatment choices depend on whether you want to control pain or you want to get pregnant. For pain and bleeding, you can try medicines or surgery. If you want to get pregnant, you may need surgery to remove the implants.

Treatments for endometriosis include:
- Over-the-counter pain medicines like ibuprofen (such as Advil or Motrin) or naproxen (such as Aleve). These medicines are called anti-inflammatory drugs, or NSAIDs. They can reduce bleeding and pain.
- Birth control pills. They are the best treatment to control pain and shrink implants. Most women can use them safely for years. But you cannot use them if you want to get pregnant.
- Hormone therapy. This stops your periods and shrinks implants. But it can cause side effects, and pain may come back after treatment ends. Like birth control pills, hormone therapy will keep you from getting pregnant.
- Laparoscopy to remove implants and scar tissue. This may reduce pain, and it may also help you get pregnant.

As a last resort for severe pain, some women have their uterus and ovaries removed (hysterectomy and oophorectomy). If you have your ovaries taken out, your estrogen level will drop and your symptoms will probably go away. But you may have symptoms of menopause, and you will not be able to get pregnant.

If you are getting close to menopause, you may want to try to manage your symptoms with medicines rather than surgery. Endometriosis usually stops causing problems when you stop having periods.

By: Healthwise Staff. Last revised: July 7, 2011. Medical review: Adam Husney, MD - Family Medicine; Kirtly Jones, MD - Obstetrics and Gynecology.
What is pancreatitis?

Pancreatitis is inflammation of the pancreas, an organ in your belly that makes the hormones insulin and glucagon. These two hormones control how your body uses the sugar found in the food you eat. Your pancreas also makes other hormones and enzymes that help you break down food.

Usually the digestive enzymes stay in one part of the pancreas. But if these enzymes leak into other parts of the pancreas, they can irritate it and cause pain and swelling. This may happen suddenly or over many years. Over time, it can damage and scar the pancreas.

What causes pancreatitis?

Most cases are caused by gallstones or alcohol abuse. The disease can also be caused by an injury, an infection, or certain medicines.

Long-term, or chronic, pancreatitis may occur after one attack. But it can also happen over many years. In Western countries, alcohol abuse causes most chronic cases. In some cases doctors don't know what caused the disease.

What are the symptoms?

The main symptom of pancreatitis is medium to severe pain in the upper belly. Pain may also spread to your back. Some people have other symptoms too, such as nausea, vomiting, a fever, and sweating.

How is pancreatitis diagnosed?

Your doctor will do a physical exam and ask you questions about your symptoms and past health. You may also have blood tests to see if your levels of certain enzymes are higher than normal. This can mean that you have pancreatitis. Your doctor may also want you to have a complete blood count (CBC), a liver test, or a stool test.

Other tests include an MRI, a CT scan, or an ultrasound of your belly (abdominal ultrasound) to look for gallstones. A test called endoscopic retrograde cholangiopancreatogram, or ERCP, may help your doctor see if you have chronic pancreatitis. During this test, the doctor can also remove gallstones that are stuck in the bile duct.

How is it treated?

Most attacks of pancreatitis need treatment in the hospital. Your doctor will give you pain medicine and fluids through a vein (IV) until the pain and swelling go away.

Fluids and air can build up in your stomach when there are problems with your pancreas. This buildup can cause severe vomiting. If buildup occurs, your doctor may place a tube through your nose and into your stomach to remove the extra fluids and air. This will help make the pancreas less active and swollen.

Although most people get well after an attack of pancreatitis, problems can occur. Problems may include cysts, infection, or death of tissue in the pancreas. You may need surgery to remove your gallbladder or a part of the pancreas that has been damaged.

If your pancreas has been severely damaged, you may need to take insulin to help your body control blood sugar. You also may need to take pancreatic enzyme pills to help your body digest fat and protein.

If you have chronic pancreatitis, you will need to follow a low-fat diet and stop drinking alcohol. You may also take medicine to manage your pain. Making changes like these may seem hard. But with planning, talking with your doctor, and getting support from family and friends, these changes are possible.

By: Healthwise Staff. Last revised: October 31, 2011. Medical review: Kathleen Romito, MD - Family Medicine; Peter J. Kahrilas, MD - Gastroenterology.
Gary McConkey from Knightdale, N.C., writes: I often park my car in the sun. When I get back inside, it feels warmer than the outside temperature. Why is that?

This is a good example of the “greenhouse effect,” which is essential to life on Earth. Without it, our planet wouldn’t be warm enough for living things to survive. In the case of a car, the sun’s rays enter through the window glass. Some of the heat is absorbed by interior components, such as the dashboard, seats, and carpeting. But the heat those components radiate back is of a longer wavelength than the sunlight that entered, and the glass doesn’t let as much of that energy pass back out. As a result, more energy goes into the car than goes out, and the inside temperature increases.
Work Out With Your Dog - How Animal Agility Training Can Burn Calories For You

The University of Massachusetts studied human oxygen consumption during canine agility training.

John Ales

Vigorous Exercise for Dog and Human

Researchers at the University of Massachusetts Department of Kinesiology have studied the impact on humans during canine agility training, and their findings were recently highlighted on the Zoom Room Dog Agility Training Center's website. The researchers looked at oxygen consumption (using a face mask and a battery-operated, portable metabolic system that measures breath-by-breath gas exchange) as well as heart rate (detected and recorded using a Polar heart rate monitor). The data collected were translated into Metabolic Equivalents, or METs, a way of comparing how much energy a person expends at rest versus during a given activity.
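For readers curious how a MET value translates into calories, a common rule-of-thumb formula can be sketched in a few lines of code. The MET value, body weight, and session length below are illustrative assumptions, not figures from the University of Massachusetts study.

```python
# Rough illustration: estimating energy burned from a MET value.
# Uses the common approximation kcal/min = MET x 3.5 x weight_kg / 200.
# All inputs are illustrative assumptions, not study findings.
met = 4.0          # assumed intensity of an agility session
weight_kg = 70.0   # assumed handler body weight
minutes = 30       # assumed session length

kcal = met * 3.5 * weight_kg / 200 * minutes
print(f"Estimated energy burned: {kcal:.0f} kcal")  # about 147 kcal here
```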
Geology and Geography Information about Portage County Wisconsin

We will provide as much historical map information as possible about the county. Google Map of Our Museums.

How Wisconsin Was Surveyed

The methods used to survey land are largely unknown to the general public. But the Wisconsin Public Land Survey Records: Original Field Notes and Plat Maps site offers a complete explanation of this method as well as access to the original field notes and maps compiled by the surveyors.

This section has links to our maps as well as external links to free printable map providers.
- Map of Portage County (33k).
- Map of Wisconsin (285k).
- Map of Central Wisconsin (30k).
- Map of Townships (7k).

A Portage County Plat Book for 1895 has been photographed using a digital camera.

The following maps are from "Page-Size Maps of Wisconsin," published by the University of Wisconsin - Extension and Wisconsin Geological and Natural History Survey, 3817 Mineral Point Road, Madison WI, 53705-5100.
- Bedrock Geology of Wisconsin (168k).
- Ice Age Deposits of Wisconsin (150k).
- Early Vegetation of Wisconsin (126k).
- Landforms of Wisconsin (109k).
- Soil Regions of Wisconsin (180k).

The following maps are in pdf format. Maps are also available from the Wisconsin Historical Society.
- British Era fur trading posts 1760-1815.
- American Era fur trading posts 1815-1850.
- American Forts and Exploration ca 1820.
- Military Roads 1815-1862.
- Wisconsin counties 1835.
- Wisconsin counties 1850.
- Wisconsin counties 1870.
- Wisconsin counties 1901.
- Wisconsin Railroads 1865.
- Wisconsin Railroads 1873.
- Wisconsin Railroads 1936.
- From National Atlas, a government agency, are printable maps of all the states and more.
- This link is located in France and provides free printable maps covering all countries.

The Society will embark on a project during the summer of 2009, continuing onward, to provide county maps with geotag information locating:
- Small Communities.
- Cemetery Locations.
- Locations of School Houses, one-room and others of historic value.
- Catholic Churches.
- Lutheran Churches.
- Other Churches.
- Historic sites within the communities.

Portage County Ice Age Trail

Here is a list of all the Historical Markers in the State of Wisconsin.
Peoria Tribe of Indians of Oklahoma

The Peoria Tribe of Indians of Oklahoma is a confederation of Kaskaskia, Peoria, Piankeshaw and Wea Indians united into a single tribe in 1854. The tribes which constitute The Confederated Peorias, as they were then called, originated in the lands bordering the Great Lakes and drained by the mighty Mississippi. They are Illinois or Illini Indians, descendants of those who created the great mound civilizations in the central United States two thousand to three thousand years ago. Forced from their ancestral lands in Illinois, Michigan, Ohio and Missouri, the Peorias were relocated first in Missouri, then in Kansas and, finally, in northeastern Oklahoma. There, in Miami, Ottawa County, Oklahoma, is their tribal headquarters.

The Peoria Tribe of Indians of Oklahoma is a federally recognized sovereign Indian tribe, functioning under the constitution and by-laws approved by the Secretary of the U.S. Department of the Interior on August 13, 1997. Under Article VIII, Section 1 of the Peoria Constitution, the Peoria Tribal Business Committee is empowered to research and pursue economic and business development opportunities for the Tribe.

The increased pressure from white settlers in the 1840's and 1850's in Kansas brought cooperation among the Peoria, Kaskaskia, Piankashaw and Wea Tribes to protect their holdings. By the Treaty of May 30, 1854, 10 Stat. 1082, the United States recognized the cooperation and consented to their formal union as the Confederated Peoria. In addition to this recognition, the treaty also provided for the disposition of the lands of the constituent tribes set aside by the treaties of the 1830's: ten sections were to be held in common by the new Confederation, each tribal member received an allotment of 160 acres, and the remaining or "surplus" land was to be sold to settlers and the proceeds used by the tribes.

The Civil War caused considerable turmoil among all the people of Kansas, especially the Indians. After the war, most members of the Confederation agreed to remove to the Indian Territory under the provisions of the so-called Omnibus Treaty of February 23, 1867, 15 Stat. 513. Some of the members elected at this time to remain in Kansas, separate from the Confederated Tribes, and become citizens of the United States.

The lands of the Confederation members in the Indian Territory were subject to the provisions of the General Allotment Act of 1887. The allotment of all the tribal land was made by 1893, and by 1915, the tribe had no tribal lands or any lands in restricted status. Under the provisions of the Oklahoma Indian Welfare Act of 1936, 49 Stat. 1967, the tribes adopted a constitution and by-laws, which was ratified on October 10, 1939, and they became known as the Peoria Tribe of Indians of Oklahoma.

As a result of the "Termination Policy" of the Federal Government in the 1950's, the Federal Trust relationship over the affairs of the Peoria Tribe of Indians of Oklahoma and its members, except for claims then pending before the Indian Claims Commission and Court of Claims, was ended on August 2, 1959, pursuant to the provisions of the Act of August 2, 1956, 70 Stat. 937, and Federal services were no longer provided to the individual members of the tribe. More recently, however, the Peoria Tribe of Indians of Oklahoma was reinstated as a federally recognized tribe by the Act of May 15, 1978, 92 Stat. 246.
The scabs that won’t heal: Racial injustice, stereotypes and social ills

By DeShuna Spencer, New America Media

America is in denial. Every day millions of us of various ethnicities, religions and sexual orientations congregate in our places of work, at our schools and universities, and in public spaces, exchanging politically correct pleasantries as we interact with each other. All the while, boiling deep down inside many of us lie unconscious, deep-seated stereotypes and misconceptions about the very people (co-workers, neighbors, store patrons, etc.) we come in contact with on a daily basis; and we've been carrying these racial wounds since childhood.

Don't believe me? Two years ago, CNN conducted a study on children's attitudes on race. In one of the segments, a 5-year-old white girl from Georgia was asked a series of questions based on a board that had pictures of identical-looking cartoon-type girls that ranged in skin color from light to dark. When the interviewer asked the girl who is smart, the 5-year-old pointed to the lightest child. When she was asked who was mean, she pointed to the darkest child. According to CNN, the 5-year-old's answers were a reflection of one of the major findings of the survey. It revealed that "white children have an overwhelming bias to whites and black children ALSO have a bias toward whites, but not nearly as [strong as] the bias shown by white children."

In a world where whites feel as if they have to walk on eggshells when discussing race for fear of being classified as narrow-minded racists, and where blacks are afraid to report or verbally express when they have experienced a form of prejudice for fear of "pulling out the race card," people have decided to remain silent on the subject. It is not until a tragedy—like the Trayvon Martin case—happens that people come out of the shadows. This case has forced many Americans to face this country's painful, dysfunctional relationship with race and prejudice, a subject that is rarely discussed in some households. In 2007, the Journal of Marriage and Family found that 75 percent of white families with kindergartners never, or almost never, talk about race. The stats were reversed for black parents: seventy-five percent of them discuss race with their children.

Just when we think the racial scabs of this country are finally healing, something happens that reopens an already slow-healing wound, causing further pain. Not since the arrest of Dr. Henry Louis Gates, which resulted in the "beer summit," has the issue of race so polarized the American public. The Trayvon Martin murder has sparked outrage from people of all races questioning how someone who killed an unarmed young man could still walk around freely; it has pushed people to look at themselves in the mirror and question how they stereotype others; and it has unfortunately turned into a political circus, with players on both sides of the aisle using his death as a way to take on other issues. Through all of this, Trayvon's family is seeking just one thing: justice.

While many see this as an opportunity for a great debate, I'm sure that if Trayvon's parents had their ultimate wish—instead of the TV specials, editorials (like this one) and radio commentary on this issue—their son would be alive, and they would be helping him sort through college acceptance letters instead of sorting through dozens of media appearance requests from every Tom, Dick and Harry news outlet looking to get a piece of this story.

But unfortunately this is a cruel world, and unfair things happen to innocent people. So here we are in a supposedly post-racial America debating a decades-old issue: racial profiling. How we address this tragedy can either help America turn over a new leaf or drive us further apart as a nation.

Mirror Mirror On The Wall, Who's The Most Prejudiced Of Them All?

If a national poll asked every American of all ethnic backgrounds whether they were racist, it would be safe to say that most people would answer that they are in fact not prejudiced. But is that reality? While people are attacking George Zimmerman for how his preconceived notions about black males caused someone's death, many of us are blind to our own prejudices. Don't we all harbor some form of prejudice (great or small)?

I was having this conversation with a group of friends one weekday evening over dinner. A black male admitted that he felt uncomfortable getting on a plane with someone who looked Middle Eastern. I proposed a question: What if a white person did not want to ride in your carpool for fear of getting robbed? "Well, that's racist," he said. And your thoughts aren't?

In a way it's silly if you think about it. No one in their right mind would assume that a well-dressed African American male who pulls up to a DC Metro (subway) station at 7 a.m. to drive people to the District would attack or rob them. But many people look at the thousands of people from Middle Eastern countries that way every day on airplanes. They are law-abiding citizens who want to safely land at their destination just as much as you do. Just because extremists who want to harm others exist in the Muslim community, we can't assume that every person who has olive skin or wears certain religious attire is out to take down the plane.

I'm sure George Zimmerman never considered himself to be a racist, just as my friend doesn't. I don't know Zimmerman personally, so I can't possibly know what he harbors in his heart, but the reality is that we let images we see in the media dictate how we view others. You see the faces of black males in mug shots on the nightly news, so when one walks toward you on the street you clutch your purse a little tighter just in case he tries to snatch it. You see images of Latino men standing in front of Home Depots looking for work, or read stories about them getting pulled over without a license, so you assume that all Hispanics are illegal day laborers. You hear about another terrorist threat from a Muslim extremist group, so when a Middle Eastern man sits next to you on the plane, for a split second you wonder if he's wearing a bomb. You see images of black teens participating in flash mobs, so you follow a group of black females who walk into your store just in case someone slips an item into a bag.

Zimmerman took one look at Trayvon and assumed the worst about him: he was on drugs and up to no good (recorded on the 911 tape). His paranoia came after a string of burglaries—in a span of 15 months—that were all committed by young black males, according to his neighbor and supporter Frank Taafee. All along, Zimmerman's friends and family have contended that this shooting was not about race but self-defense. But in an interview with Soledad O'Brien, Taafee's own account suggested that Zimmerman judged Trayvon based on previous incidents in the gated community.

When O'Brien pressed him on how the prior incidents related to Trayvon's death, Taafee responded with, "There's an old saying if you plant corn, you get corn." He then went on to say later in the interview, "It is what it is. It is what it is."

Now, how do you judge others?
Don't know if you knew this but there are other places that have sweet onions too that are indigenous to their area. Texas has their 1015 Super Sweet (planted on Oct. 15) and Washington has what's called Walla Walla's. I think Hawaii has a particular sweet onion too.

That's not a post I'd expect from someone who likes to chide people for not reading posts before posting. Expanding on what I wrote two posts before yours (where I commented on both 1015's and Maui onions), none of the sweet onions we have in the US today are indigenous.

It all started in 1898 when Bermuda onions were first planted in Texas. Ironically, the seeds were from the Canary Islands, not Bermuda. By the 1920's they were growing so many onions in Texas, the demand for seed brought in new, inexperienced seed growers who drove the Canary Island seed quality down. Per-acre yields in Texas became so low that growers began looking at other varieties - the most important of which was the Grano from Spain. Because of low yields, there have been few, if any, Bermuda onions grown commercially in the US since the late 1940's, despite what you might see advertised at your grocery store.

One of the most important super sweet onions is the Granex, an F1 hybrid that was developed in Texas from the Excel Yellow Bermuda and the Texas Early Grano 951. This onion has a host of names including Vidalia, Maui, Noonday, etc. The Grano 1015Y (a.k.a. Texas 1015) is not a hybrid - rather an improved Grano 951 that was developed for resistance to pink root - not sweetness per se - while maintaining early maturity. Attempts to further improve the 1015Y have resulted in later maturity, which is highly undesirable from a commercial growing perspective.

The Walla Walla onion, on the other hand, was developed from seed brought from Corsica off the coast of Italy. Interestingly, Bermuda onions are also of Italian origin.
In any large city just a handful of bars give the police far more trouble than all the rest put together. The same is true of many other types of establishments, such as schools, convenience stores, and parking lots. In each case, just a few produce far more crime, disorder, and calls for police assistance than the rest of the group combined. This phenomenon—called “risky facilities”—has important implications for many problem-oriented policing projects. In particular, it can help police focus their energies where they are needed most and can help in selecting appropriate preventive measures.

This guide serves as an introduction to risky facilities and shows how the concept can aid problem-oriented policing efforts by answering a series of key questions. We open with a definition of facilities and provide some examples. We then discuss risky facilities and explain how this concept is related to other crime concentration theories.

Facilities are places with specific public or private functions, such as stores, bars, restaurants, mobile home parks, bus stops, apartment buildings, public swimming pools, ATM locations, libraries, hospitals, schools, parking lots, railway stations, marinas, and shopping malls.

Facilities vary greatly in the crimes they experience. Medical facilities, for example, are likely to have different types and levels of crime than do police booking facilities. In addition, there is likely to be great variation within any broad category of facility. For example, although both are medical facilities, dental offices are likely to have different levels and types of crime than are emergency rooms. Because such distinctions are critical to the success of risky facility analyses, it is important to begin by carefully defining the type of facility that is to be examined; only then proceed to an examination of the type and frequency of crime that the particular type of facility experiences.

One important principle of crime prevention holds that crime is highly concentrated among particular people, places, and things; as this principle suggests, focusing resources on these concentrations is likely to yield the greatest preventive benefits. This principle has spawned a number of related concepts that are routinely used by police in problem-solving projects, including hot spots (concentration among places), repeat victimization (concentration among victims), and hot products (concentration among things).

Risky facilities is another recently described theory of crime concentration that holds great promise for problem-oriented policing.1 The theory postulates that only a small proportion of any specific type of facility will account for the majority of crime and disorder problems experienced or produced by the group of facilities as a whole. As a rule of thumb, about 20 percent of the total group will account for 80 percent of the problems. This is known as the 80/20 rule: in theory, 20 percent of any particular group of things is responsible for 80 percent of outcomes involving those things.2

The 80/20 rule is not peculiar to crime and disorder; rather, it is almost a universal law. For example, a small portion of the earth’s surface holds the majority of life on the planet; a small proportion of earthquakes cause most earthquake damage; a small number of people hold most of the earth’s wealth; a small proportion of police officers produce the most arrests; and so forth. In practice, of course, the proportion is seldom exactly 80/20; however, it is always true that some small percentage of a group produces a large percentage of any particular result involving that group.
Later in the guide we will show you how to determine whether the 80/20 rule holds true for any particular group of facilities. The 80/20 rule can be a useful initial assumption: when confronting a problem, start by assuming that most of the problem is created by a few individuals, places, or events. Although this first approximation is not always correct, it is probably correct more often than assuming that the problem is spread evenly across individuals, places, or events. Careful analysis can then test whether this starting assumption is correct.

The first paper to discuss the concept of risky facilities identified nearly 40 studies of specific types of facilities that included data about variations in the risks of crime, disorder, or misconduct.3 These studies covered a wide range of facilities and many different types of crime and deviance, including robbery, theft, assault, and simple disorder. All the studies showed wide variations in risk in the facilities studied, and in many there was clear evidence of high concentrations of risk consistent with the definition of risky facilities.†

† Not every study provided clear evidence that a small proportion of the facilities accounted for a large proportion of the crime, disorder, or misconduct. Rather, some reported differences between facilities in crime numbers or rates; for example, Matthews, Pease & Pease (2001) [PDF] reported that “4 percent of banks had robbery rates four to six times that of other banks.” Although consistent with the concept of risky facilities, these figures do not satisfy a key component of the definition: they do not demonstrate that a small number of high-risk banks accounted for a large part of the robbery problem. However, this does not mean that risks for the facilities studied were not highly skewed. Rather, it only means that the data did not allow the distribution of risk to be examined.

Although such studies are just a few of those that have produced evidence of risky facilities, their results make it clear that this form of crime concentration is quite widespread.

Low Cost Motel: The risk of crime varies a great deal among facilities of the same type. Photo Credit: John Eck

When analysts plot the number of crimes at each facility under investigation, they almost always create a graph with a reclining-J shape. This can be seen in the example in Figure 1, based on the work of crime analysts in Chula Vista, California. In that study, all parks over two acres in Chula Vista were ranked from the most crime (on left) to the least. The heights of the bars show the number of crimes in each park. As can be seen, three parks had far more crime than any of the rest and most parks had very little crime.
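To check for this reclining-J pattern in your own data, a few lines of code suffice. The sketch below uses invented counts, not the Chula Vista figures.

```python
# Sketch: rank facilities by incident count and plot the "reclining-J".
# The counts are invented for illustration, not the Chula Vista data.
import matplotlib.pyplot as plt

counts = {"Park A": 95, "Park B": 71, "Park C": 55, "Park D": 12,
          "Park E": 8, "Park F": 4, "Park G": 2, "Park H": 1}

# Sort from most to least crime, as in Figure 1.
ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
names, values = zip(*ranked)

plt.bar(names, values)
plt.ylabel("Number of crimes")
plt.title("Crimes per park, ranked from most to least")
plt.show()
```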
Risky facilities can show up as hot spots on a city’s crime map. Indeed, specific hospitals, schools, and train stations are often well-known examples. But simply treating these facilities as hot spots misses an important analytical opportunity: comparing the risky facilities with other like facilities. Such a comparison can reveal important differences between facilities that account for the differences in risk, thereby providing important pointers to preventive action.

In addition, risky facilities are sometimes treated as examples of repeat victimization. However, this can create confusion when it is not the facilities that are being victimized, but rather the people who are using them. Thus, a tavern that repeatedly requests police assistance in dealing with fights is not itself being repeatedly victimized, unless it routinely suffers damage in the course of these fights or its members of staff are regularly assaulted. Even those participating in the fights may not be repeat victims, as different patrons might be involved each time. Indeed, no one need be victimized at all, as would be the case if the calls were about drugs, prostitution, or stolen property sales. Calling the tavern a repeat victim can be more than just confusing, however, because it might also divert attention from the role mismanagement or poor design plays in causing the fights. By keeping the concepts of repeat victimization and risky facilities separate, it may be possible to determine whether or not repeat victimization is the cause of a risky facility and thereby to design responses accordingly.

The concept of risky facilities can be helpful in two types of policing projects.

First, the concept can be useful in crime prevention projects that focus on a particular class of facilities, such as low-rent apartment complexes or downtown parking lots. In the scanning stage, the objective is to list the facilities involved along with the corresponding number of problem incidents in order to see which facilities experience the most and which the fewest problems. This might immediately suggest some contributing factors. For example, a study of car break-ins and thefts in downtown parking facilities in Charlotte, North Carolina revealed that the number of offenses in each parking lot was not merely a function of size.14 Rather, it was discovered that some smaller facilities experienced a large number of thefts because of some fairly obvious security deficiencies. This finding was explored in more depth in the analysis stage by computing theft rates for each facility based on its number of parking spaces. The analysis found that the risk of theft was far greater in surface lots than in parking garages, a fact that had not been known previously. Subsequent analysis compared security features between the multilevel and surface lots, and then within the members of each category, in an effort to determine which aspects of security (e.g., attendants, lighting, security guards) explained the variation. This analysis guided the selection of measures that were to have been introduced at the response stage; and had these been implemented as planned (which was not the case), the assessment stage would have examined not merely whether theft rates declined overall, but whether those at the previously riskiest facilities had declined most. Obviously, this type of analysis can be conducted within any group of facilities.

Second, risky facilities analysis can be helpful to crime prevention efforts that focus on a particular troublesome facility. In this sort of analysis, the scanning stage consists of comparing the problems at a particular facility with those at similar nearby facilities. For example, in a project that won the Herman Goldstein Award for Excellence in Problem-Oriented Policing in 2003,15 police in Oakland, California discovered that a particular motel experienced nearly 10 times as many criminal incidents as did any other comparable motel in the area. Although in this case the analysis convinced Oakland police to address the problems at the motel in question, in other cases analysis might reveal that some other facilities have far greater problems than the one which was the initial focus of the project.
Comparing the facility being addressed in the project with other group members can also be useful in the analysis, response, and assessment stages described above.

Police reports and calls for service data are the most common sources of information about crime and disorder events. However, using these data can lead to errors if care is not taken to check for a number of potential problems.†

† Many of these data problems are also encountered when studying hot spots and repeat victimization. For further information see Deborah Weisel (2005), Analyzing Repeat Victimization, Problem-Solving Tools Series No. 4.

Incident reporting forms and police records can be revised to improve geographical information gathering; moreover, the increased use of geocoding for crime reports will gradually help resolve some of these difficulties.

A study in England in 1964 found that absconding rates for residents in 17 training schools for delinquent boys ranged from 10 percent to 75 percent. To determine whether this variation was random, researchers reexamined the absconding rates two years later (1966) to see if the variation was much the same. They found that by and large the variation was consistent between the two years. For example, School 1 had the lowest absconding rate and School 17 the highest rate in both years (see the table below). In fact, the correlation was 0.65 between the two years.† Because the variation was relatively stable, and because very few boys would have been residents in both years, researchers determined that the variation was probably due to differences in management practices rather than to differences in the student populations.

† Correlation coefficients can be calculated quite simply from an Excel spreadsheet.

| Training School | Absconding Rate |

Adapted from: Clarke and Martin (1975).
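The same calculation is just as simple in code as in a spreadsheet. In this sketch the rates are invented stand-ins for the 1964 and 1966 figures, which the table above no longer preserves; a correlation near +1 indicates stable between-school differences, while a value near 0 would suggest random variation.

```python
# Sketch: test whether between-facility variation is stable over time by
# correlating each facility's rate in year 1 with its rate in year 2.
# The rates below are invented for illustration only.
from statistics import correlation  # available in Python 3.10+

rates_1964 = [10, 24, 28, 35, 41, 55, 75]  # percent absconding (abridged list)
rates_1966 = [12, 31, 22, 40, 38, 49, 70]

r = correlation(rates_1964, rates_1966)
print(f"Correlation between years: {r:.2f}")
```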
Once a satisfactory measure of the problematic events for a defined group of facilities has been obtained, a six-step procedure can be used to determine whether the 80/20 rule applies.†

† Reproduced with permission from Clarke and Eck (2003).

In order to analyze crime concentrations, it is first necessary to define the type of facility to be examined; only then is it possible to create a list of facilities that meets the definition. Ideally, all places that fit the definition and that are in the area of study will be on the list once and only once. In addition, facilities that do not fit the definition will not be on the list. The further the list departs from this ideal, the more likely it is that the results will be misleading. Identifying all facilities of a particular type in any given area can be troublesome: not only can it sometimes be difficult to develop an appropriate working definition of the type of facility at issue, but problems can also arise in regard to the data management practices of relevant public and private agencies.

Here is an example of creating a list of facilities that illustrates these points. A research team at the University of Cincinnati, Ohio, wanted to determine why a few bars had numerous violent incidents, whereas most of the others had none or only a very few. To do this, they needed a definition of "bar" and a list of facilities that met this definition. Researchers defined "bar" as a place that met four conditions: (1) it had to be open to the general public, rather than restricted to members or rented out to private parties; (2) it had to serve alcohol for onsite consumption; (3) some patrons had to come to the place for the primary purpose of consuming alcohol; and (4) there had to be a designated physical area within the place that served as a drinking area. Locations that did not meet all four conditions were excluded from the study.

To obtain a list of locations meeting this definition, researchers began by consulting records from the Ohio Division of Liquor Control. These records showed that 633 places within the city limits were licensed to serve hard liquor. Based upon their personal knowledge, researchers were able to exclude a number of locations from consideration, reducing the list to 391 possible bars. To isolate the real bars, researchers then compared the remaining locations to the most recent bar guide in a local weekly tabloid that catered to young adults, which contained both brief written descriptions of the locations and numerous commercial advertisements. The tabloid information revealed that at least 198 of the 391 places fit the definition used. The tabloid list was incomplete, however, as an unknown number of city bars had not been reviewed by the tabloid staff. A check of the online Yellow Pages verified several more bars. Private fraternal organizations were eliminated from consideration because they were not open to the general public. For most of the remaining places, researchers phoned or visited the sites, examining the physical locations and interviewing owners and employees. Onsite visits revealed that several restaurants had areas that looked like bars, but these were eventually eliminated from consideration when it became clear from interviews that they were more decorative than functional or that they were used for other purposes (e.g., to hold carryout orders for customer pickup or to provide overflow seating where customers could eat). Ultimately, researchers identified 264 facilities that fit the definition of bar. These then became the subjects of the study.

Table 1: The Distribution of 121 Assaults in 30 Pubs (selected rows; intermediate pubs omitted)
|Pub||No. of Assaults||% of Assaults||Cumulative % Assaults||Cumulative % Pubs|
|George & Dragon||6||5.0||76.9||23.3|
|Hare & Hounds||1||0.8||96.7||46.7|
|Rose & Crown||0||0||100||63.3|
|Dog and Fox||0||0||100||76.7|

Because there is no single reason why facilities vary in risk, it is important to determine which reasons are in operation in each particular case. The most important sources of variation in risk are discussed below.

Table 2: Reported Shopliftings by Store, Danvers, Mass., October 2003 to September 2004 (selected rows; the highest-incident stores are omitted)
|Store||Shopliftings||Percent of Shopliftings||Cumulative % of Shopliftings||Cumulative % of Stores||Shopliftings per 1,000 Sq. Ft.|
|7 stores with 2 incidents||14||4.7||90.6||30.8||0.08|
|28 stores with 1 incident||28||9.4||100.0||66.7||0.06|
|26 stores with 0 incidents||0||0.0||100.0||100.0||0.00|
|Total stores = 78||298||100.0||100.0||100.0||0.15|

Unfortunately, it is not always easy to obtain the data needed to correct for the size of the facilities under study. For example, a study of downtown parking lot thefts in Charlotte, North Carolina, was impeded when the city was unable to provide data about the number of spaces in each lot.16 As a result, police officers had to visit each lot and count the spaces by hand.

† See Clarke, Ronald (1999). Hot Products. Police Research Series, Paper 112. London: Home Office.
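The cumulative columns in Tables 1 and 2 come from one standard computation: sort the facilities from worst to best, then accumulate counts. A generic sketch, using invented figures and only the Python standard library:

```python
def cumulative_concentration(counts):
    """Sort facilities worst-first and yield cumulative percentages."""
    counts = sorted(counts, reverse=True)
    total, running = sum(counts), 0
    for i, c in enumerate(counts, start=1):
        running += c
        yield 100.0 * i / len(counts), 100.0 * running / total

# Hypothetical assault counts for ten pubs (invented figures).
assaults = [35, 22, 18, 12, 6, 3, 1, 0, 0, 0]
for pct_pubs, pct_assaults in cumulative_concentration(assaults):
    print(f"{pct_pubs:5.1f}% of pubs account for {pct_assaults:5.1f}% of assaults")
```

If roughly 20 percent of the facilities turn out to account for about 80 percent of the incidents, the 80/20 rule applies and the worst facilities become the natural targets for the response stage.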
† See Mike Scott, The Problem of Robbery at Automated Teller Machines, Problem-Specific Guide No. 8 (Washington, D.C.: Office of Community Oriented Policing Services, U.S. Department of Justice, 2001).

A Sign Outside a Bar – How managers regulate patron conduct can have a big influence on crime risk. Credit: John Eck

In every large city, a few low-cost rental apartment buildings make extraordinary demands on police time. These "risky facilities" are often owned by slumlords — unscrupulous landlords who purchase properties in poor neighborhoods and who make a minimal investment in management and maintenance. Building services deteriorate, respectable tenants move out, and their place is taken by less respectable ones — drug dealers, pimps, and prostitutes — who can afford to pay the rent but who cannot pass the background checks made by more responsible managements. In the course of a problem-oriented policing project in Santa Barbara, California, Officers Kim Frylsie and Mike Apsland analyzed arrests made at 14 rental apartment buildings owned by a slumlord, before and after he had purchased them. The table clearly shows a large increase in the number of people arrested at the properties in the years after he acquired them. There was also some evidence that the increased crime and disorder in these properties spilled over to infect other nearby apartment buildings — a finding that supports the widespread belief that slumlords contribute to neighborhood blight.

Table: Arrests at 14 slumlord-owned properties (property-by-property figures are not reproduced here)
|Property||Year Acquired||No. of Units||Yearly Arrests Pre-Owning (avg.)||Yearly Arrests Post-Owning|
Source: Clarke, Ronald and Gisela Bichler-Robertson (1998). "Place Managers, Slumlords and Crime in Low Rent Apartment Buildings." Security Journal 11: 11-19.

Table 3: Responses to Risky Facilities
|Factor||Explanation||Possible Responses|
|Size||Facility is large and attracts many users, some of whom become victims.||If the number of crimes per user is very small compared to most other facilities, then one option is to do nothing. Alternatively, identify those most likely to become victims and the circumstances associated with their victimization, then focus on these individuals and circumstances.|
|Hot Products||Facility contains a large number of things that are particularly vulnerable to theft or vandalism.||Remove hot products. Provide additional protection to hot products.|
|Location||Facility may be located in close proximity to offenders.||Hire additional security. Tailor management practices to the peculiarities of the area.|
|Repeat Victims||Facility contains a few victims who are involved in a large proportion of crimes.||Provide victims with the information or inducements they need to make behavioral changes that will reduce their likelihood of victimization. Provide information or protection to victims so that they are not victimized again.|
|Crime Attractor||Facility attracts many offenders or a few high-rate offenders.||Remove offenders through enforcement and incapacitation or rehabilitation. Deny access to repeat offenders.|
|Poor Design||Physical layout makes offending easy, rewarding, or low-risk.||Change the physical layout in conformity with principles of Crime Prevention Through Environmental Design (CPTED).†|
|Poor Management||Management practices or processes enable or encourage offending.||Change management procedures, paying particular attention to practices that influence repeat victimization.|

† For additional information on CPTED principles see Response Guide No. 6.
There is no single reason that explains why some facilities have far more crime than other facilities of the same type. Rather, the full explanation usually involves a combination of the seven factors discussed above; remember, though, that the relative contribution of each will vary from case to case. In many problem-oriented projects it might not be possible to explain completely the variations in risk between facilities, because such analysis is usually only possible after detailed research that can take weeks or months to complete. However, it is usually possible to get some idea of how each of the seven factors contributes to the problem by comparing high- and low-crime facilities. We previously explained how to do this when we discussed the various ways of testing the influence of location, hot products, repeat victimization, and crime attractors.

In some cases, quantitative data such as facility size will be readily available. In others, it might be necessary to survey the facilities to discover the relevant information. For example, in the project mentioned above that focused on thefts from cars in Charlotte's downtown parking facilities, police surveyed the lots to gather information about hours of operation, attendants, fencing, lighting, and other security measures (a small sketch of this kind of feature comparison follows below). This provided many ideas for reducing crime in the riskiest facilities. In another Charlotte study, a police survey found that the theft of household appliances from construction sites was much lower when builders delayed installation until the homes were ready for occupancy.19

Direct observation and discussions with managers and police familiar with the facilities can yield valuable insights into the reasons for variations in risk between facilities. In addition, interviews with apprehended offenders can reveal how they evaluate the difficulties, rewards, and risks of preying upon the facilities in the sample.† Similarly, interviews with victims — particularly repeat victims — can be revealing.

† See Scott Decker, Using Offender Interviews to Inform Police Problem Solving, Problem Solving Tools Series No. 3 (Washington, D.C.: Office of Community Oriented Policing Services, 2005).
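Once survey data on security features are in hand, the high/low comparison described above amounts to grouping facilities by feature and comparing rates. A minimal sketch with hypothetical survey results follows; the numbers and the "attendant" feature are illustrative assumptions, not findings from the Charlotte project.

```python
# Hypothetical survey results: (theft rate per 100 spaces, has_attendant).
# Invented numbers, standard library only.
lots = [
    (8.1, False), (6.4, False), (7.7, False),
    (1.9, True),  (2.8, True),  (1.2, True),
]

def mean_rate(has_attendant):
    rates = [rate for rate, att in lots if att == has_attendant]
    return sum(rates) / len(rates)

print(f"Mean theft rate, attended lots:   {mean_rate(True):.1f} per 100 spaces")
print(f"Mean theft rate, unattended lots: {mean_rate(False):.1f} per 100 spaces")
# A large gap flags the feature as a candidate explanation; repeat the
# comparison for lighting, fencing, hours of operation, and so on.
```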
In Newark, New Jersey, a project funded by the U.S. Department of Justice Office of Community Oriented Policing Services (the COPS Office) focused on drug dealing in low-cost private rental apartment complexes.20 During the scanning stage, 22 possible sites for intervention (out of a total of 506 private apartment complexes) were identified through an analysis of police data and interviews with officers in the Newark Police Department's Safer Cities Task Force and Special Investigations Unit. Subsequent interviews with district commanders revealed a special problem with four apartment complexes located close to entry and exit ramps for Interstate 78, which provided out-of-town buyers with easy access to drug markets. The buyers could briefly enter the city, purchase drugs at the complexes, drive around in a loop, and quickly exit again. Authorities implemented a traffic management plan that disrupted the loop by creating one-way streets and dead ends. The traffic plan was reinforced with additional enforcement at the four sites and will eventually dovetail with a long-term project by the state to rebuild the ramps to route traffic away from residential areas.

Your ability to understand the reasons for the variations in risk will be greatly assisted where there is an existing Problem-Oriented Policing Guide that deals with the facilities that are the focus of your own project. Although it will not tell you which factors are important in your sample, it will provide more specific suggestions than the general discussion above. As of June 2006, ten guides focused on problems within specific types of facilities.†

† New guides are constantly being added; a list of those in preparation is available at www.popcenter.org.

Although there are many ways to reduce risk (see Table 3), it is important to focus on those that are most likely to succeed. For example, it is usually impossible to do anything about the size and location of specific facilities. Similarly, changing a facility's physical design can be difficult or costly and would only be justified in an extreme case. On the other hand, it may be easier to change business practices that facilitate or encourage crime and disorder; this, however, cannot be done without the full cooperation of those who own or manage the facilities, as they are usually the ones who must implement and pay for the measures. Before moving on to a discussion of the various ways of convincing facility managers to make the changes necessary to reduce crime or disorder, it is important to understand some of the reasons why they might not have done these things on their own.

Although it is always best to assume that managers and owners want to reduce crime and disorder in their facilities and that they will be open to working with the police and others to implement the necessary changes, they will sometimes resist implementing remedial measures. Consequently, it will sometimes be necessary to exert a certain amount of coercion, either directly or indirectly. There are several ways that this can be done.†

† See Clarke, Ronald (1999). Hot Products. Police Research Series, Paper 112. London: Home Office. (Accessible at www.popcenter.org.)

Demolition of a Former Bar and Drug Dealing Hot Spot: Removing a very risky facility can be the best way to reduce crime. Credit: John Eck

Table 4: Calls for Police Service, Oakland Airport Motel
|Year||Calls for Service|
(Year-by-year figures are not reproduced here; *through March 2003.)

In practice, a combination of approaches—both a carrot and a stick—might be the most effective strategy. Because business owners can be politically powerful, it may be far easier to reduce crime if management is induced to cooperate without engaging in a political battle. In this regard, it is important to recall the guiding principle of this guide, the 80/20 rule: most of the problem is likely to be the result of a few facilities. Enlisting the support of the majority of facility owners and managers—whose contributions to the problem are minor—to change the behavior of the few—whose contributions are major—can help police win the political struggle. This can also reduce costs by focusing resources where they are needed most, which aids in tailoring responses to particular settings and thereby increases the chances that interventions will be effective.

Endnotes: Koch (1999). National Association of Convenience Stores (1991). Sherman, Schmidt, and Velke (1992). Lindstrom (1997). Bowers et al. (1998). Hirschfield and Bowers (1998). Newton (2004); Loukaitou-Sideris and Eck (in press). Chula Vista Police Department (2004). Madensen et al. (2005). Eck (2002). Chula Vista Police Department (2004).
Bowers, K., A. Hirschfield and S. Johnson (1998). "Victimization Revisited: A Case Study of Non-Residential Repeat Burglary in Merseyside." British Journal of Criminology 38(3): 429-452.

Chula Vista Police Department, Chief's Community Advisory Committee (2004). The Chula Vista Motel Project. Chula Vista, Calif.: Chula Vista Police Department.

Clarke, R.V. (1999). Hot Products: Understanding, Anticipating and Reducing Demand for Stolen Goods. Police Research Series, Paper 112. London: Home Office, Research Development and Statistics Directorate. [Full Text]

---- (2002). Shoplifting. Problem-Oriented Guides for Police Series, Problem-Specific Guide No. 11. Washington, D.C.: U.S. Department of Justice, Office of Community Oriented Policing Services. [Full Text]

Clarke, R.V., and G. Bichler-Robertson (1998). "Place Managers, Slumlords and Crime in Low Rent Apartment Buildings." Security Journal 11(1): 11-19.

Clarke, R.V., and J.E. Eck (2003). Become a Problem-Solving Crime Analyst: In 55 Small Steps. London: Jill Dando Institute of Crime Science. [Full Text]

Clarke, R.V., and H. Goldstein (2002). "Reducing Theft at Construction Sites: Lessons from a Problem-Oriented Project." In N. Tilley (ed.), Analysis for Crime Prevention, Crime Prevention Studies, Vol. 13. Monsey, N.Y.: Criminal Justice Press. [Full Text]

---- (2003). "Thefts from Cars in Center-City Parking Facilities: A Case Study in Implementing Problem-Oriented Policing." In J. Knutsson (ed.), Problem-Oriented Policing: From Innovation to Mainstream, Crime Prevention Studies, Vol. 15. Monsey, N.Y.: Criminal Justice Press. [Full Text]

Clarke, R.V., and D. Martin (1975). "A Study of Absconding and Its Implications for the Residential Treatment of Delinquents." In J. Tizard, I. Sinclair and R.V. Clarke (eds.), Varieties of Residential Experience. London: Routledge and Kegan Paul.

Decker, S. (2005). Using Offender Interviews to Inform Police Problem Solving. Problem-Oriented Guides for Police Series, Problem Solving Tools Series No. 3. Washington, D.C.: U.S. Department of Justice, Office of Community Oriented Policing Services. [Full Text]

Eck, J.E. (2002). "Preventing Crime at Places." In L.W. Sherman, D. Farrington, B. Welsh and D.L. MacKenzie (eds.), Evidence-Based Crime Prevention. New York: Routledge.

---- (2003). "Police Problems: The Complexity of Problem Theory, Research and Evaluation." In J. Knutsson (ed.), Problem-Oriented Policing: From Innovation to Mainstream, Crime Prevention Studies, Vol. 15. Monsey, N.Y.: Criminal Justice Press. [Full Text]

Eck, J., R.V. Clarke and R. Guerette (2007). "Risky Facilities: Crime Concentration in Homogeneous Sets of Facilities." Crime Prevention Studies, Vol. 21. Monsey, N.Y.: Criminal Justice Press. [Full Text]

Felson, M., R. Berends, B. Richardson and A. Veno (1997). "Reducing Pub Hopping and Related Crime." In R. Homel (ed.), Policing for Prevention: Reducing Crime, Public Intoxication and Injury, Crime Prevention Studies, Vol. 7. Monsey, N.Y.: Criminal Justice Press. [Full Text]

Hirschfield, A., and K. Bowers (1998). "Monitoring, Measuring and Mapping Community Safety." In A. Marlow and J. Pitts (eds.), Planning Safer Communities. Lyme Regis: Russell House Publishing.

Homel, R., M. Hauritz, G. McIlwain, R. Wortley and R. Carvolth (1997). "Preventing Drunkenness and Violence Around Nightclubs in a Tourist Resort." In R.V. Clarke (ed.), Situational Crime Prevention: Successful Case Studies (2nd ed.).
Guilderland, N.Y.: Harrow and Heston.

Koch, R. (1999). The 80/20 Principle: The Secret of Achieving More with Less. New York: Doubleday.

La Vigne, N. (1994). "Gasoline Drive-Offs: Designing a Less Convenient Environment." In R.V. Clarke (ed.), Crime Prevention Studies, Vol. 2. Monsey, N.Y.: Criminal Justice Press. [Full Text]

Lindstrom, P. (1997). "Patterns of School Crime: A Replication and Empirical Extension." British Journal of Criminology 37(1): 121-130.

Loukaitou-Sideris, A., and J.E. Eck (in press). "Crime Prevention and Active Living." American Journal of Health Promotion.

Madensen, T., M. Skubak, D. Morgan and J.E. Eck (2005). Open-Air Drug Dealing in Cincinnati, Ohio: Executive Summary and Final Recommendations. Cincinnati, Ohio: University of Cincinnati, Division of Criminal Justice. (Available at www.uc.edu/criminaljustice/ProjectReports/FINAL_RECOMMENDATIONS.pdf)

Matthews, R., C. Pease and K. Pease (2001). "Repeat Bank Robbery: Theme and Variations." In G. Farrell and K. Pease (eds.), Repeat Victimization, Crime Prevention Studies, Vol. 12. Monsey, N.Y.: Criminal Justice Press. [Full Text]

National Association of Convenience Stores (1991). Convenience Store Security Report and Recommendations. Alexandria, Va.: National Association of Convenience Stores.

Newton, A. (2004). Crime and Disorder on Buses: Toward an Evidence Base for Effective Crime Prevention. PhD dissertation, University of Liverpool.

Oakland Police Department (2003). "The Oakland Airport Motel Project." Submission for the Herman Goldstein Award for Excellence in Problem-Oriented Policing. [Full Text]

Perrone, S. (2000). Crimes Against Small Business in Australia: A Preliminary Analysis. Trends & Issues in Crime and Criminal Justice, No. 184. Canberra: Australian Institute of Criminology. [Full Text]

Scott, M. (2001). The Problem of Robbery at Automated Teller Machines. Problem-Oriented Guides for Police Series, Problem-Specific Guide No. 8. Washington, D.C.: U.S. Department of Justice, Office of Community Oriented Policing Services. [Full Text]

Scott, M., and H. Goldstein (2005). Shifting and Sharing Responsibility for Public Safety Problems. Problem-Oriented Guides for Police, Response Guide Series No. 3. Washington, D.C.: U.S. Department of Justice, Office of Community Oriented Policing Services. [Full Text]

Sherman, L., J. Schmidt and R. Velke (1992). High Crime Taverns: A RECAP Project in Problem-Oriented Policing. Washington, D.C.: Crime Control Institute.

Smith, D., M. Gregson and J. Morgan (2003). Between the Lines: An Evaluation of the Secured Park Award Scheme. Home Office Research Study, No. 266. London: Home Office Research, Development and Statistics Directorate. [Full Text]

Stedman, J. (2005). "Alcohol Issues in City Parks." Unpublished presentation to the Chula Vista City Council. Chula Vista, Calif.: Chula Vista Police Department (November).

Weisel, D. (2005). Analyzing Repeat Victimization. Problem-Oriented Guides for Police, Problem Solving Tools Series No. 4. Washington, D.C.: U.S. Department of Justice, Office of Community Oriented Policing Services. [Full Text]

Zanin, N., J. Shane and R.V. Clarke (2004). Reducing Drug Dealing in Private Apartment Complexes in Newark, New Jersey. A final report to the U.S. Department of Justice, Office of Community Oriented Policing Services, on the field applications of the Problem-Oriented Guides for Police project. Washington, D.C.: Office of Community Oriented Policing Services, U.S. Department of Justice. [Full Text]
Q. What's wrong with hot dogs?
A. Nitrite additives in hot dogs form carcinogens.

Three different studies have come out in the past year finding that the consumption of hot dogs can be a risk factor for childhood cancer.

Peters et al. studied the relationship between the intake of certain foods and the risk of leukemia in children from birth to age 10 in Los Angeles County between 1980 and 1987. The study found that children eating more than 12 hot dogs per month have nine times the normal risk of developing childhood leukemia. A strong risk for childhood leukemia also existed for those children whose fathers' intake of hot dogs was 12 or more per month.

Researchers Sarasua and Savitz studied childhood cancer cases in Denver and found that children born to mothers who consumed hot dogs one or more times during pregnancy had approximately double the risk of developing brain tumors. Children who ate hot dogs one or more times per week were also at higher risk of brain cancer.

Bunin et al. also found that maternal consumption of hot dogs during pregnancy was associated with an excess risk of childhood brain tumors.

Q. How could hot dogs cause cancer?
A. Hot dogs contain nitrites, which are used as preservatives, primarily to combat botulism. During the cooking process, nitrites combine with amines naturally present in meat to form carcinogenic N-nitroso compounds. Nitrites can also combine with amines in the human stomach to form N-nitroso compounds. These compounds are known carcinogens and have been associated with cancer of the oral cavity, urinary bladder, esophagus, stomach and brain.

Q. Some vegetables contain nitrites; do they cause cancer too?
A. It is true that nitrites are commonly found in many green vegetables, especially spinach, celery and green lettuce. However, the consumption of vegetables appears to be effective in reducing the risk of cancer. How is this possible? The explanation lies in the formation of N-nitroso compounds from nitrites and amines. Nitrite-containing vegetables also have Vitamin C and D, which serve to inhibit the formation of N-nitroso compounds. Consequently, vegetables are quite safe and serve to reduce your cancer risk.

Q. Do other food products contain nitrites?
A. Yes, all cured meats contain nitrites. These include bacon and fish.

Q. Are all hot dogs a risk for childhood cancer?
A. No. Not all hot dogs on the market contain nitrites. Because of modern refrigeration methods, nitrites are now used more for the red color they produce (which is associated with freshness) than for preservation. Nitrite-free hot dogs, while they taste the same as nitrite hot dogs, have a brownish color that has limited their popularity among consumers. When cooked, nitrite-free hot dogs are perfectly safe and healthy.

HERE ARE FOUR THINGS THAT YOU CAN DO:
- Do not buy hot dogs containing nitrite. It is especially important that children and potential parents do not consume 12 or more of these hot dogs per month.
- Request that your supermarket carry nitrite-free hot dogs.
- Contact your local school board and find out whether children are being served nitrite hot dogs in the cafeteria.
- Write the FDA and express your concern that nitrite hot dogs are not labeled for their cancer risk to children. You can cite the petition to ban nitrite hot dogs, docket #: 95P 0112/CP1.
Cancer Prevention Coalition
School of Public Health, M/C 922
University of Illinois at Chicago
2121 West Taylor Street
Chicago, IL 60612
Tel: (312) 996-2297, Fax: (312) 413-9898

1. Peters J, et al. "Processed meats and risk of childhood leukemia (California, USA)." Cancer Causes & Control 5: 195-202, 1994.
2. Sarasua S, Savitz D. "Cured and broiled meat consumption in relation to childhood cancer: Denver, Colorado (United States)." Cancer Causes & Control 5: 141-8, 1994.
3. Bunin GR, et al. "Maternal diet and risk of astrocytic glioma in children: a report from the Children's Cancer Group (United States and Canada)." Cancer Causes & Control 5: 177-87, 1994.
4. Lijinsky W, Epstein S. "Nitrosamines as environmental carcinogens." Nature 225 (5227): 21-23, 1970.
PPPL scientists propose a solution to a critical barrier to producing fusion

Posted April 23, 2012; 05:00 p.m.

Physicists from the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) have discovered a possible solution to a mystery that has long baffled researchers working to harness fusion. If confirmed by experiment, the finding could help scientists eliminate a major impediment to the development of fusion as a clean and abundant source of energy for producing electric power.

An in-depth analysis by PPPL scientists zeroed in on tiny, bubble-like islands that appear in the hot, charged gases — or plasmas — during experiments. These minute islands collect impurities that cool the plasma. And these islands, the scientists report in the April 20 issue of the journal Physical Review Letters, are at the root of a longstanding problem known as the "density limit" that can prevent fusion reactors from operating at maximum efficiency.

Fusion occurs when plasmas become hot and dense enough for the atomic nuclei contained within the hot gas to combine and release energy. But when the plasmas in experimental reactors called tokamaks reach the mysterious density limit, they can spiral apart into a flash of light.

"The big mystery is why adding more heating power to the plasma doesn't get you to higher density," said David Gates, a principal research physicist at PPPL and co-author of the proposed solution with Luis Delgado-Aparicio, a postdoctoral fellow at PPPL and a visiting scientist at the Massachusetts Institute of Technology's Plasma Science Fusion Center. "This is critical because density is the key parameter in reaching fusion, and people have been puzzling about this for more than 30 years."

A discovery by Princeton Plasma Physics Laboratory physicists Luis Delgado-Aparicio (left) and David Gates could help scientists eliminate a major impediment to the development of fusion as a clean and abundant source of energy for producing electric power. (Photo by Elle Starkman)

The scientists hit upon their theory in what Gates called "a 10-minute 'Aha!' moment." Working out equations on a whiteboard in Gates' office, the physicists focused on the islands and the impurities that drive away energy. The impurities stem from particles that the plasma kicks up from the tokamak wall. "When you hit this magical density limit, the islands grow and coalesce, and the plasma ends up in a disruption," said Delgado-Aparicio.

These islands actually inflict double damage, the scientists said. Besides cooling the plasma, the islands act as shields that block out added power. The balance tips when more power escapes from the islands than researchers can pump into the plasma through a process called ohmic heating — the same process that heats a toaster when electricity passes through it. When the islands grow large enough, the electric current that helps to heat and confine the plasma collapses, allowing the plasma to fly apart.

Gates and Delgado-Aparicio now hope to test their theory with experiments on a tokamak called Alcator C-Mod at MIT, and on the DIII-D tokamak at General Atomics in San Diego. Among other things, they intend to see if injecting power directly into the islands will lead to higher density. If so, that could help future tokamaks reach the extreme density and 100-million-degree temperatures that fusion requires.
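For quantitative orientation: the density limit discussed here has a widely cited empirical form, the Greenwald scaling named in the next paragraph. The formula below is supplied from the general fusion literature rather than from the PPPL paper, and the ITER-scale inputs are rough illustrative values, not design data.

```python
import math

def greenwald_density(plasma_current_ma, minor_radius_m):
    """Empirical Greenwald density limit, in units of 10^20 particles/m^3.

    n_G = I_p / (pi * a**2), with plasma current I_p in megaamperes
    and minor radius a in meters.
    """
    return plasma_current_ma / (math.pi * minor_radius_m ** 2)

# Roughly ITER-scale inputs: about 15 MA of plasma current, 2 m minor radius.
print(f"n_G is about {greenwald_density(15.0, 2.0):.2f} x 10^20 particles/m^3")
```

Operating near or above this value without disruption is precisely what a workable explanation of the limit, such as the island mechanism proposed here, might eventually make possible.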
The scientists' theory represents a fresh approach to the density limit, which is also known as the "Greenwald limit" after MIT physicist Martin Greenwald, who has derived an equation that describes it. Greenwald has another potential explanation for the source of the limit. He thinks it may occur when turbulence creates fluctuations that cool the edge of the plasma and squeeze too much current into too little space in the core of the plasma, causing the current to become unstable and crash. "There is a fair amount of evidence for this," Greenwald said. However, he added, "We don't have a nice story with a beginning and end and we should always be open to new ideas."

Gates and Delgado-Aparicio pieced together their model from a variety of clues that have developed in recent decades. Gates first heard of the density limit while working as a postdoctoral fellow at the Culham Centre for Fusion Energy in Abingdon, England, in 1993. The limit had previously been named for Culham scientist Jan Hugill, who described it to Gates in detail.

Separately, papers on plasma islands were beginning to surface in scientific circles. French physicist Paul-Henri Rebut described radiation-driven islands in a mid-1980s conference paper, but not in a periodical. German physicist Wolfgang Suttrop speculated a decade later that the islands were associated with the density limit. "The paper he wrote was actually the trigger for our idea, but he didn't relate the islands directly to the Greenwald limit," said Gates, who had worked with Suttrop on a tokamak experiment at the Max Planck Institute for Plasma Physics in Garching, Germany, in 1996 before joining PPPL the following year.

In early 2011, the topic of plasma islands had mostly receded from Gates' mind. But a talk by Delgado-Aparicio about the possibility of such islands erupting in the plasmas contained within the Alcator C-Mod tokamak reignited his interest. Delgado-Aparicio spoke of corkscrew-shaped phenomena called snakes that had first been observed by PPPL scientists in the 1980s and initially reported by German physicist Arthur Weller. Intrigued by the talk, Gates urged Delgado-Aparicio to read the papers on islands by Rebut and Suttrop.

An email from Delgado-Aparicio landed in Gates' inbox some eight months later. In it was a paper that described the behavior of snakes in a way that fit nicely with the C-Mod data. "I said, 'Wow! He's made a lot of progress,'" Gates remembered. "I said, 'You should come down and talk about this.'"

What most excited Gates was an equation for the growth of islands that hinted at the density limit by modifying a formula that British physicist Paul Harding Rutherford had derived back in the 1980s. "I thought, 'If Wolfgang (Suttrop) was right about the islands, this equation should be telling us the Greenwald limit,'" Gates said. "So when Luis arrived I pulled him into my office."

Then a curious thing happened. "It turns out that we didn't even need the entire equation," Gates said. "It was much simpler than that." By focusing solely on the density of the electrons in a plasma and the heat radiating from the islands, the researchers devised a formula for when the heat loss would surpass the electron density. That in turn pinpointed a possible mechanism behind the Greenwald limit.

Delgado-Aparicio became so absorbed in the scientists' new ideas that he missed several turnoffs while driving back to Cambridge, Mass., that night. "It's intriguing to try to explain Mother Nature," he said. "When you understand a theory you can try to find a way to beat it. By that I mean find a way to work at densities higher than the limit."

Conquering the limit could provide essential improvements for future tokamaks that will need to produce self-sustaining fusion reactions, or "burning plasmas," to generate electric power. Such machines include proposed successors to ITER, a $20 billion experimental reactor that is being built in Cadarache, France, by the European Union, the United States and five other countries.

Why hadn't researchers pieced together a similar theory of the density-limit puzzle before? The answer, said Gates, lies in how ideas percolate through the scientific community. "The radiation-driven islands idea never got a lot of press," he said. "People thought of them as curiosities. The way we disseminate information is through publications, and this idea had a weak initial push."

PPPL, in Plainsboro, N.J., is devoted both to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Through the process of fusion, which is constantly occurring in the sun and other stars, energy is created when the nuclei of two lightweight atoms, such as those of hydrogen, combine in plasma at very high temperatures. When this happens, a burst of energy is released, which can be used to generate electricity. PPPL is managed by Princeton University for the U.S. Department of Energy's Office of Science.
PRLog (Press Release) - Apr. 10, 2012 - Many are not aware that on the day Titanic collided with an iceberg in the North Atlantic, the ship had received no fewer than six wireless transmissions describing the extent of the dangerous ice fields and bergs, but that not all of these messages made it to the bridge. The captain therefore had an incorrect mental picture that did not match the reality on the ocean in front of him.

Author David Warner Mathisen, a professional analyst and former US Army Infantry officer, observes that this type of failure to "connect the dots" is well known in the Army, and that military concepts such as "situational awareness" and Clausewitz's phrase "the fog of war" are valuable tools for extracting lessons from the disaster that we can apply today.

He points out that in many situations the information needed for accurate analysis is actually available, but is overlooked or not placed into the proper framework or context, so that the dots are not connected. This happens so often that we can conclude that gaining true situational awareness is exceedingly difficult, even though it might at first appear simple. He then goes on to argue that the data we may be overlooking from a civilizational perspective could be creating a dangerously false picture, which should encourage greater efforts to "connect the dots" using tools that can facilitate better analysis.

While many theories of greater or lesser merit have been put forward to explain the 1912 Titanic disaster, including some recent analysis suggesting that the position of the earth in relation to both the moon and the sun may have played a role, ultimately the sinking and the tragic loss of life were the result of a lack of situational awareness, not just prior to the collision but in the fatal aftermath as well.

# # #

David Warner Mathisen is a professional analyst and former US Army officer, and the author of the book "The Mathisen Corollary" and of the recently-released essay "Titanic and the Fall of Civilizations."
CAMBRIDGE – Public-opinion polls show that citizens in many democracies are unhappy with their leaders. This is particularly true in Great Britain, where a number of members of Parliament have used their housing allowances to enhance their income, sometimes legally and sometimes not. Some analysts predict that only half of Britain's MPs will be returned in next year's election.

But, whatever the failures of particular British legislators, the issues go further than merely allowing voters to "throw the rascals out." There is also a question of how successful leadership is taught and learned in a democracy. A successful democracy requires leadership to be widespread throughout government and civil society. Citizens who express concern about leadership need to learn not only how to judge it, but how to practice it themselves.

Many observers say that leadership is an art rather than a science. Good leadership is situational. In my book The Powers to Lead, I call this skill "contextual intelligence." The ability to mobilize a group effectively is certainly an art rather than a predictive science, and varies with situations, but that does not mean that it cannot be profitably studied and learned. Music and painting are based in part on innate skills, but also on training and practice. And artists can benefit not merely from studio courses, but also from art appreciation lessons that introduce them to the full repertoires and palettes of past masters.

Learning leadership occurs in a variety of ways. Learning from experience is the most common and most powerful. It produces the tacit knowledge that is crucial in a crisis. But experience and intuition can be supplemented by analytics, which is the purpose of my book. As Mark Twain once observed, a cat that sits on a hot stove will not sit on a hot stove again, but it won't sit on a cold one, either. Consequently, learning to analyze situations and contexts is an important leadership skill.

The United States Army categorizes leadership learning under three words: "be, know, do." "Be" refers to the shaping of character and values, and it comes partly from training and partly from experience. "Know" refers to analysis and skills, which can be trained. "Do" refers to action and requires both training and fieldwork. Most important, however, is experience and the emphasis on learning from mistakes and a continuous process that results from what the military calls "after-action reviews."

Learning can also occur in the classroom, whether through case studies, historical and analytic approaches, or experiential teaching that simulates situations that train students to increase self-awareness, distinguish their roles from their selves, and use their selves as a barometer for understanding a larger group. Similarly, students can learn from the results of scientific studies, limited though they may be, and by studying the range of behaviors and contexts that historical episodes can illuminate.

In practice, of course, few people occupy top positions in groups or organizations. Most people "lead from the middle." Effective leadership from the middle often requires attracting and persuading those above, below, and beside you. Indeed, leaders in the middle frequently find themselves in a policy vacuum, with few clear directives from the top. A passive follower keeps his head down, shuns risk, and avoids criticism. An opportunist uses the slack to feather his own nest rather than help the leader or the public.
Bureaucratic entrepreneurs, on the other hand, take advantage of such opportunities to adjust and promote policies. The key moral question is whether, and at what point, their entrepreneurial activity exceeds the bounds of policies set from the top. Since they lack the legitimate authority of elected or high-level appointed officials, bureaucratic entrepreneurs must remain cognizant of the need to balance initiative with loyalty.

Leaders should encourage such entrepreneurship among their followers as a means of increasing their effectiveness. After all, the key to successful leadership is to surround oneself with good people, empower them by delegating authority, and then claim credit for their accomplishments. To make this formula work, however, requires a good deal of soft power. Without the soft power that produces attraction and loyalty to the leader's goals, entrepreneurs run off in all directions and dissipate a group's energies. With soft power, however, the energy of empowered followers strengthens leaders.

Leadership is broadly distributed throughout healthy democracies, and all citizens need to learn more about what makes good and bad leaders. Potential leaders, in turn, can learn more about the sources and limits of the soft-power skills of emotional IQ, vision, and communication, as well as hard-power political and organizational skills. They must also better understand the nature of the contextual intelligence they will need to educate their hunches and sustain strategies of smart power. Most important, in today's age of globalization, revolutionary information technology, and broadened participation, citizens in democracies must learn more about the nature and limits of the new demands on leadership.
As millions across India thronged Durga Puja marquees on the penultimate day of the festival Wednesday, so did the Jaintias, an indigenous tribe of Meghalaya, many of whose members are Christians, continuing a unique 400-year-old tradition. Worshipping Goddess Durga with the same fervour and devotion but with a different set of rituals, hundreds of Jaintias, both Christians and believers of an indigenous faith, thronged the ancient temple at Nartiang, about 65 km east of Shillong. The Pnar people, as Jaintias are known, were also joined by tourists.

The tradition goes back over 400 years. Perched on a hilltop overlooking the Myntang stream, the Durga Bari at Nartiang in the Jaintia Hills district was built by the Jaintia kings in the 16th-17th centuries. "Twenty-two generations of Jaintia kings worshipped Durga and Jayanteswari, the ancestral deity of the Jaintia kings," said the young temple priest, Molay Desmukh. Desmukh, 20, took charge of the Durga temple five years ago after the demise of his father Gopendra Desmukh. Interestingly, Desmukh priests were brought to Nartiang by the Jaintia kings from Bengal, not Maharashtra as the surname may suggest.

The dilapidated centuries-old temple structure was demolished recently, and a new one was built in its place with minimal change in design and material. Durga and Jayanteswari are placed side by side and worshipped together. Both idols are made of astadhatu (an alloy of eight metals), and each is about six to eight inches tall. "The rituals and religious functions during the Durga Puja are performed as per the Hindu way," the priest said.

The ceremony begins with the ablution of both idols, which are then draped in colourful new attire and ornaments before the rituals. On the fourth day of the five-day festival, animal sacrifice is carried out. "However, during the royal Jaintia rule there used to be a scary practice of human sacrifice," the priest said, pointing to a small square hole. He has been told by his father that "the severed head used to be rolled through the hole connected to a secret tunnel that falls into the adjacent river Myntang". It's believed that the practice was stopped by the British, after the sacrifice of a British subject. "Instead, now water gourds are sacrificed, along with animals and birds such as goats, chicken and pigeons," Desmukh said. A human mask is placed on the gourds as a symbolic act of human sacrifice.

Apart from this unique tradition, there is another indigenous feature that marks Durga Puja at Nartiang: the Durga idol is permanent and is not sent for immersion after the last day of worship. Instead, the priest installs a young banana plant beside the Durga idol, which is taken out after the completion of the worship and immersed in the nearby river Myntang. The entire expenditure of the Durga Puja is borne by the Dolloi (the traditional village chief, who is non-Christian) of Nartiang.

Even though the majority of the tribal population in the state of Meghalaya has embraced Christianity, a sizeable section of the community has retained its indigenous culture, religion and customs. "Nartiang was the summer capital of the Jaintia kingdom, which was set up at Jaintiapur, now in Sylhet district of Bangladesh," said historian J.B. Bhattacharjee. "The palace, though in ruins, still stands there as a testimony to the Jaintia heritage," he said. The Jaintia kings spent the summer in the hills to escape the unbearable heat in the plains and returned to Jaintiapur after Durga Puja.
The royal tradition continued till the British annexed the Jaintia territories in 1835, thereby ending Jaintia reign in the plains.
Trees and Shrubs that Tolerate Saline Soils and Salt Spray Drift (May 1, 2009; publication 430-031)

Concentrated sodium (Na), a component of salt, can damage plant tissue whether it contacts above-ground or below-ground parts. High salinity can reduce plant growth and may even cause plant death. Care should be taken to avoid excessive salt accumulation from any source on tree and shrub roots, leaves or stems. Sites with saline (salty) soils, and those that are exposed to coastal salt spray or paving de-icing materials, present challenges to landscapers and homeowners.

Related publications:
|Urban Forestry Issues||May 1, 2009||420-180|
|Value, Benefits, and Costs of Urban Trees||May 1, 2009||420-181|
Lama Ole Nydahl

The six liberating actions are a motivational teaching for direct use in one's life. As is generally known, Buddhism has a very practical aim and its view is exceedingly clear. No one gets enlightened from only hearing teachings. Lasting results come from real experiences and the changes they bring about. Because this is so important, Buddha gave much practical advice, which should never be seen as commandments but as help from a friend. Being neither a creator nor a judging god, he wants neither followers nor students who are a flock of sheep. Instead, his real goal is colleagues: mature people who will share his enlightenment and the massive responsibility it entails.

For those who mainly think of themselves, his advice is contained in the Noble Eightfold Path. Starting with a useful lifestyle, it culminates in proper concentration. Whoever has reached the level of compassion and insight, and wishes to be useful to others, finds the Six Paramitas or Six Liberating Actions more useful. 'Ita' means 'gone' and 'Param' means 'beyond'. The paramitas develop love which takes one beyond the personal. It is the view which sets one free: the deep insight that seer, things seen, and the act of seeing are interdependent and one, that subject, object and action cannot be separated. The Paramitas liberate not because bad pictures in the mirror of one's mind are replaced with good ones, but because the confident states the latter produce allow one to go behind the good and the bad and recognize the mirror itself: shining, perfect and more fantastic than anything that it may reflect. The actions are liberating because they bring a recognition of the ultimate nature of mind. If one only fills the mind with good impressions, that would of course bring future happiness, but it would not go beyond the conditioned. With the view of the oneness of subject, object and action, whatever is undertaken for the benefit of others will bring the doer timeless benefit.

The First Liberating Action: Generosity.

Generosity opens up every situation. The world is full of spontaneous richness, but no matter how good the music is, there is no party if no one dances. If no one shares anything of themselves, nothing meaningful will happen. That is why generosity is so important. In Buddha's time, people were much less complicated than today. They also did not have amazing machines working for them. At that time, generosity was a question of helping others survive, of assuring that they had enough to eat. This meant the act was often focused on material things. Today, in the free and non-overpopulated part of the world, this is not the case; one usually dies from too much fat around the heart. Due to a lack of clear thinking, people develop inner problems as the outer ones diminish, and start to feel lonely and insecure. Instead of worrying about necessities, they develop complicated inner lives, and many have never tasted the joy of their physical freedom. Thus in the Western world and parts of Asia, where material things are abundant, generosity refers mostly to the emotional. It means sharing one's power, joy and love with others, from the beyond-personal levels from which there is no falling down. If one meditates well and taps into the unconditioned states of mind, there is no end to the good that one may pass on to others. Sharing one's ultimate certainty is the finest gift of all - giving beings one's warmth - and though one cannot take one's car or fame past the grave, not everything is lost at death.
The qualities developed during former lives are easily regained in later ones, and there is no richness that is passed more directly from one existence to another than joyful energy. Squeezing the juice out of life pays, and a few more mantras or prostrations, or some more love for one's partner than usual, not only bring power here and now, but speed up enlightenment. As already mentioned, the finest and only lasting richness one may bring beings is an insight into their unconditioned nature. But how to do that? How does one show others their innate perfection? The best mirror is Buddha's teachings, and this is why no activity is more beneficial than the making of meditation centers. The practical wisdom they disseminate acquaints many with the clear light of their consciousness, and the seeds thus planted will grow over all future lives until enlightenment. Though many socially minded people claim that such teachings are a luxury and that first one should give people something to eat, this is not true. There is ample space for both. When the mind functions well, the stomach will digest the food better, and maybe then one can understand the reasons for having fewer children. In any case, the body will disappear while the mind continues on.

The Second Paramita: A life that is aware, meaningful and useful to others.

As terms like morality and ethics are employed by governing classes to control those below, many prefer not to use them. People are consciously intimidated by this and often think, "If the state doesn't get you in this life, the church will get you afterwards." Even when only advice is given, as in the case of the Buddha, and the full development of beings is the only goal, one has to choose words which instruct clearly, without employing fear. The best definition of the second liberating action is probably living meaningfully and for the benefit of others. So what does this mean? How can one encompass the countless actions, words and thoughts during just one single day? Buddha, seeing everything from the state of timeless wisdom, had a few unique ideas. Because people have ten fingers for counting and then remembering, he gave ten pieces of advice concerning what is useful and what is not. Encompassing body, speech and mind, they become meaningful also to independent people when one recognizes that Buddha is not a boss, but a friend wishing one happiness. Understanding that everybody is a Buddha who has not realized it yet, and recognizing the outer world to be a pure land, all experience becomes the expression of highest wisdom simply because it can happen. How else could the Buddha act? He never teaches by dogma or from above but shares his wisdom with beings whom he knows to be his equals in essence.

Due to the good Karma of those surrounding him, Buddha taught for a full 45 years and died with a smile. He taught many extraordinary students. The questions they asked him were on the level of Socrates, Aristotle and Plato; the best minds of an amazing generation came to test him with the complete range of their philosophical tools and found not only convincing words, but a power so skillful that it changed them in lasting ways. Beyond perfecting their logical abilities, he influenced their whole mind. Introducing them to the timeless experiencer behind the experiences, he left no space for doubt.
On the levels of body, speech and mind, it is not difficult to understand what is useful to avoid. When people have problems with the police, usually they have caused some trouble with their body: killing, stealing, or harming others sexually are the main points here. When they are lonely, usually they say things which disturb others: lying with the intent to harm, spreading gossip, splitting friends, or confusing people. When people are unhappy, it is usually because of a tendency to dislike others, to feel envy, and to permit states of confusion to drag on.

The opposite of these are the ten positive actions of body, speech and mind, which only bring happiness. They make one powerful and useful to others. Here the Buddha advises using one's body as a tool to protect beings, to give them love and whatever else they need. Whoever has success with others now has developed that potential during earlier lives, so the quicker one starts, the better. With today's means of communication, one's speech may touch many more beings. Kind words previously spoken create pleasant experiences now and strengthen good karma: if people listen to one now, it is because of kind and clear speech before. So one should tell the truth whenever possible, avoid lies that harm others, show people how things work in the world, and bring them calm. And finally, what to do with one's mind? Good wishes, joy in the good that others do, and clear thinking are the way to go. These qualities brought us the mental happiness we enjoy today, and making a habit of them ensures happiness until enlightenment. The mind is most important of all. Thoughts today become words tomorrow and actions the day after. Every moment here and now is important. If one watches the mind, nothing can stop one's progress.

The Third Paramita: How not to lose future happiness through anger.

When one is accumulating spiritual richness through generosity and directing it with the right understanding, the third quality needed on one's way is patience: not to lose the good energy at work for others and oneself. How may one lose it? Through anger. Anger is the only luxury mind cannot afford. Good impressions gathered over lifetimes - mind's capital and the only source of lasting happiness - may be burnt in no time through fits of hot or cold rage. Buddha said that avoiding anger is the most difficult and most beautiful robe one can wear, and he gave many means to obtain that goal. One which is very useful today is experiencing a situation as a series of separate events to which one reacts without any evaluation. This "salami tactic" or "strobe-light view" is very effective when reacting to a physical danger. Other methods are also beneficial: feeling empathy with whoever creates bad Karma, knowing it will return to them; being aware of the impermanent and conditioned nature of every experience; and imagining how deluded people must be to cause such trouble. Reacting to whatever appears without anger will set free the timeless wisdom of body, speech, and mind, and one's reactions will be right. On the highest level of practice, called the Diamond Way, one lets unwanted emotions float on a carpet of mantras, letting them fall away without causing any bad habits. One may also let the thief "come to an empty house" by simply being aware of the feeling while doing nothing unusual. When it has visited a few times without receiving any energy, it will come less frequently and then stay away.
Whoever can be aware as anger appears, plays around and then disappears will discover a radiant state of mind, showing all things clearly like a mirror. In any case, it is wise to avoid anger as well as one can, and when it bites, to let it go quickly. The decision to stop anger and remove it whenever it appears is the support for the "inner" or Bodhisattva vow.

Force is useful to protect and teach, but the feeling of anger is always difficult and causes most of the suffering in the world today. The Buddhist protectors removing harm, or Tilopa and Marpa polishing off their students in record time, fall under the category of forceful action. Probably no teacher could survive without having to resort to it. Meditation centers need this view for a balanced policy towards their visitors. If people appear drunk, on drugs or unwashed, or behave badly, one should make them leave quickly: they disturb others, and the next day they will not remember what they have learned anyway. The function of a Buddhist center, and especially of the Karma Kagyu lineage, is to offer a spiritual way to those who are too critical and independent for anything else; there are enough churches and places for people searching for help. Not everybody brings the necessary conditions for entering Buddhist practice, however. To practice the Diamond Way one needs a foundation of being at least well-behaved, able not to take things personally, and able to think of others.

The Fourth Paramita: Joyful energy ensuring our growth.

Next follows joyful energy. Without it, life has no "zap" and one will get older but not wiser. Here one should be conscious and keep feeding body, speech and mind the impressions which give an appetite for further conquest and joy. As most have a strong tendency towards inertia and the status quo, one should make sure to stay alive from the inside out, which actually happens best through the pure view of the Diamond Way. Knowing that all beings are Buddhas just waiting to be shown their richness and that all existence is the free play of enlightened space, what could be more inspiring than making all that come true? There is an immense joy inherent in constant growth, in never allowing anything to become stale or used. Real development lies beyond the comfort zone, and it pays well to demand little from others and much from oneself.

The Fifth Paramita: Meditation which makes life meaningful.

The former four points should be evident to everybody. Whoever wants to give life power and meaning has to involve others, and this happens best through generosity with body, speech and mind. One needs to direct the energy thus arising through skillful thoughts, words and actions, and then to avoid the anger which destroys all good seeds one may have planted. Energy also gives that extra push which opens new dimensions. But why meditation? Because one cannot willfully keep the states so joyfully reached at times. Unwanted emotions often lurk in dark corners of beings' consciousness and may bring them to do, say or experience things they would rather have avoided. Here, the pacifying meditation of calming and holding the mind gives the necessary distance to choose taking roles in life's comedies and avoiding its tragedies.

The Sixth Paramita: Wisdom - recognizing the true nature of mind.

So far, the five actions mentioned have mainly been kind deeds which fill the mind with good impressions and thus produce conditioned happiness. In themselves, they go no further than that.
What makes them liberating or "gone beyond" paramitas is the sixth point, the enlightening wisdom which the Buddha supplies. In its fullness it means the understanding of the sixteen levels of "emptiness," or the interdependent origination of all phenomena, outer and inner, which is the subject of many weighty books. In a few short words, it may be expressed as the understanding that doing good is natural: because subject, object and action are all parts of the same totality, what else could one do? They condition one another and share the same space, while no lasting ego, self or essence can be found either in them or elsewhere. This insight makes one realize how all beings wish for happiness, and one will act to bring them benefit in the long run.
Python's flexible, duck-typed object system lowers the cost of architectural options that are more difficult to exercise in more rigid languages (yes, we are thinking of C++). One of these is carefully separating your data model (the classes and data structures that represent whatever state your application is designed to manipulate) from your controller (the classes that implement your user interface). In Python, a design pattern that frequently applies is to have one master editor/controller class that encapsulates your user interface (with, possibly, small helper classes for stateful widgets) and one master model class that encapsulates your application state (probably with some members that are themselves instances of small data-representation classes). The controller calls methods in the model to do all its data manipulation; the model delegates screen-painting and input-event processing to the controller. Narrowing the interface between model and controller makes it easier to avoid being locked into early decisions about either part by adhesions with the other one. It also makes downstream maintenance and bug diagnosis easier.
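To make the pattern concrete, here is a minimal sketch of the idea (the class names, the to-do-list domain, and the print-based "painting" are invented for illustration; a real PyGTK controller would route input events and repaints through the toolkit's widgets and signals):

```python
class TodoModel:
    """Model: owns application state; knows nothing about widgets."""

    def __init__(self):
        self.items = []          # the state the application manipulates
        self._listeners = []     # callbacks fired when state changes

    def subscribe(self, callback):
        self._listeners.append(callback)

    def add_item(self, text):
        self.items.append(text)
        self._changed()

    def remove_item(self, index):
        del self.items[index]
        self._changed()

    def _changed(self):
        for callback in self._listeners:
            callback(self)


class TodoController:
    """Controller: owns the user interface. All data changes go through
    model methods; all repainting is driven by model notifications."""

    def __init__(self, model):
        self.model = model
        model.subscribe(self.repaint)

    def on_user_added_item(self, text):
        # Translate an input event into a model call; no state lives here.
        self.model.add_item(text)

    def repaint(self, model):
        # A real controller would update widgets; printing stands in here.
        for i, item in enumerate(model.items):
            print(f"{i}: {item}")


controller = TodoController(TodoModel())
controller.on_user_added_item("separate the model from the controller")
```

Note how narrow the seam is: the controller knows only the model's public methods, and the model knows only that someone subscribed to change notifications. Either side can be rewritten without disturbing the other.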
ActiveMQ via C# using Apache.NMS Part 1

Java Message Service (JMS) is the de facto standard for asynchronous messaging between loosely coupled, distributed applications. Per the specification, it provides a common way for Java applications to create, send, receive and read messages. This is great for enterprises or organizations whose architecture depends upon a single platform (Java), but the reality is that most organizations have hybrid architectures consisting of Java and .NET (and others). Oftentimes these systems need to communicate using common messaging semantics: ActiveMQ and Apache.NMS satisfy this integration requirement.

The JMS specification outlines the requirements for system communication between Java messaging middleware and the clients that use it. Products that implement the JMS specification do so by developing a provider that supports the set of JMS interfaces and messaging semantics. Examples of JMS providers include open source offerings such as ActiveMQ, HornetQ and GlassFish and proprietary offerings such as SonicMQ and WebSphere MQ. The specification simply makes it easier for third parties to develop providers.

All messaging in JMS is peer-to-peer; clients are either JMS or non-JMS applications that send and receive messages via a provider. JMS applications are pure Java applications, whereas non-JMS clients use JMS-styled APIs such as Apache.NMS, which uses OpenWire, a cross-language wire protocol that allows native access to the ActiveMQ provider.

JMS messaging semantics are divided into two separate domains: queue-based and topic-based applications. Queue-based or, more formally, point-to-point (PTP) clients rely on “senders” sending messages to specific queues and “receivers” registering as listeners to the queue. In scenarios where a queue has more than one listener, the messages are delivered in a round-robin fashion between the listeners; only one copy of each message is delivered. Think of this as something like a phone call between you and another person. Topic-based applications follow the publish/subscribe metaphor, in which (in most cases) a single publisher client publishes a message to a topic and all subscribers to that topic receive a copy. This type of messaging is often referred to as broadcast messaging because a single client sends messages to all client subscribers. It is somewhat analogous to a TV station broadcasting a television show to you and any other people who wish to “subscribe” to a specific channel.

JMS API Basics

The JMS standard defines a series of interfaces that client applications and providers use to send and receive messages. From a client perspective, this makes learning the various JMS implementations relatively easy, since once you learn one you can apply what you learned to another implementation relatively easily, and NMS is no exception. The core components of JMS are as follows: ConnectionFactory, Connection, Destination, Session, MessageProducer, and MessageConsumer. [Diagram omitted: the communication and creational relationships between these objects.]

NMS supplies similar interfaces to the .NET world, which allows clients to send messages to and receive messages from ActiveMQ via OpenWire. [Table omitted: a quick rundown of the NMS interfaces.] Note that the Apache.NMS namespace contains several more interfaces and classes, but these are the essential interfaces that map to the JMS specification.
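As a rough illustration of that creational chain from the .NET side (the broker URI and topic name are placeholders, error handling is omitted, and the calls shown are the Apache.NMS 1.x surface as I understand it):

```csharp
using Apache.NMS;
using Apache.NMS.ActiveMQ;

public class NmsQuickStart
{
    public static void Main()
    {
        // ConnectionFactory -> Connection -> Session -> Producer/Consumer,
        // mirroring the JMS object model described above.
        IConnectionFactory factory =
            new ConnectionFactory("tcp://localhost:61616"); // default ActiveMQ port

        using (IConnection connection = factory.CreateConnection())
        using (ISession session =
            connection.CreateSession(AcknowledgementMode.AutoAcknowledge))
        {
            connection.Start();

            IDestination destination = session.GetTopic("TestTopic");

            using (IMessageProducer producer = session.CreateProducer(destination))
            using (IMessageConsumer consumer = session.CreateConsumer(destination))
            {
                // Consumers receive asynchronously via the Listener event.
                consumer.Listener += message =>
                    System.Console.WriteLine(((ITextMessage)message).Text);

                producer.Send(session.CreateTextMessage("Hello over OpenWire"));
                System.Threading.Thread.Sleep(500); // crude wait for delivery
            }
        }
    }
}
```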
[Diagram omitted: the signature that each NMS interface provides.] The interfaces above are all part of the Apache.NMS 1.3.0 API available for download here. In order to use NMS in your .NET code you also need to download the Apache.NMS.ActiveMQ client, and to test your code you will need to download and install the ActiveMQ broker, which is written in Java and therefore requires the JRE to be installed as well. [Table omitted: links to each download.]

For my examples I will be using the latest releases of Apache.NMS and Apache.NMS.ActiveMQ as of this writing. You should simply pick the latest version that is stable. The same applies for ActiveMQ and the JDK/JRE; note that you only need the Java Runtime Environment (JRE) to install and host ActiveMQ. Install the JDK if you want to take advantage of some of the tools it offers for working with JMS providers.

To start ActiveMQ, install the JRE (if you do not already have it installed – most people do already) and unzip the ActiveMQ release into a directory; any directory will do. Open a command prompt, navigate to the folder with the ActiveMQ release, locate the “bin” folder, then type “activemq”. You should see the broker start up in the console.

Download and install the Apache.NMS and Apache.NMS.ActiveMQ libraries from the links defined in the table above. Unzip them into a directory on your hard drive so that you can reference them from Visual Studio. Open Visual Studio 2008/2010 and create a new Windows project of type “Class Library”. Once the project is created, use the “Add Reference” dialog to browse to the directory where you unzipped the Apache.NMS files defined above and add a reference to the Apache.NMS.dll. Do the same for the Apache.NMS.ActiveMQ download. Note that each download contains builds for several different .NET versions; I chose the “net-3.5” version of each dll since I am using VS 2008 and targeting the 3.5 version of .NET. For my examples you will also need to install the latest and greatest version of NUnit from www.nunit.org. After you have installed NUnit, add a reference to the nunit.framework.dll. Note that any unit testing framework should work.

Add three classes to the project:

- A test harness class (ApacheNMSActiveMQTests.cs)
- A publisher class (TopicPublisher.cs)
- A subscriber class (TopicSubscriber.cs)

The test harness will be used to demonstrate the use of the two other classes. The TopicPublisher class represents a container for a message producer and the TopicSubscriber represents a container for a message consumer.

The publisher, TopicPublisher, is a simple container/wrapper class that allows a client to easily send messages to a topic. Remember from my previous discussion about topics that topics allow for broadcast messaging scenarios: a single publisher sends a message to one or more subscribers, and all subscribers receive a copy of the message. Message producers typically have a lifetime equal to the amount of time it takes to send a message; however, for performance reasons you can extend that to the length of the application’s lifetime. Like the TopicPublisher, the TopicSubscriber class is a container/wrapper class that allows clients to “listen in” on, or “subscribe” to, a topic. The TopicSubscriber typically has a lifetime equal to the lifetime of the application.
The reason is pretty obvious: a publisher always knows when it will publish, but a subscriber never knows when the publisher will send a message. So the subscriber creates a permanent “listener” on the topic; when a publisher sends a message to the topic, the subscriber receives and processes it.

The following unit test shows the classes above used in conjunction with the Apache.NMS and Apache.NMS.ActiveMQ APIs to send and receive messages through ActiveMQ (which is Java based) from the .NET world! Here is a quick rundown of the ApacheNMSActiveMQTests class:

- Declare variables for the required NMS objects and the TopicSubscriber.
- Declare variables for the broker URI, the topic to subscribe/publish to, and the client and consumer ids.
- Create a ConnectionFactory object, create and start a Connection, and then create a Session to work with.
- Create and start the TopicSubscriber, which will be a listener/subscriber to the “TestTopic” topic. To receive messages you must register an event handler or lambda expression with the MessageReceivedDelegate delegate; in this example I inlined a lambda expression for simplicity.
- In the test method, create a temporary publisher and send a message to the topic.
- Tear down and dispose of the subscriber and Session.
- Tear down and dispose of the Connection.

After you run the unit test you should see the sent message echoed back through the subscriber. Note that ActiveMQ must be up and running for the example to work.
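The wrapper classes and the test fixture appeared in the original post only as screenshots, so the code below is a reconstruction consistent with the steps listed above, not the author's exact source; in particular the MessageReceivedDelegate signature, the durable-consumer choice and the one-second wait are my own guesses:

```csharp
using System;
using Apache.NMS;

public delegate void MessageReceivedDelegate(ITextMessage message);

public class TopicPublisher : IDisposable
{
    private readonly ISession session;
    private readonly IMessageProducer producer;

    public TopicPublisher(ISession session, string topicName)
    {
        this.session = session;
        producer = session.CreateProducer(session.GetTopic(topicName));
    }

    public void SendMessage(string text)
    {
        producer.Send(session.CreateTextMessage(text));
    }

    public void Dispose()
    {
        producer.Close();
    }
}

public class TopicSubscriber : IDisposable
{
    private readonly ISession session;
    private readonly string topicName;
    private readonly string consumerId;
    private IMessageConsumer consumer;

    public event MessageReceivedDelegate OnMessageReceived;

    public TopicSubscriber(ISession session, string topicName, string consumerId)
    {
        this.session = session;
        this.topicName = topicName;
        this.consumerId = consumerId;
    }

    public void Start()
    {
        // A durable consumer keeps the subscription alive across restarts.
        ITopic topic = session.GetTopic(topicName);
        consumer = session.CreateDurableConsumer(topic, consumerId, null, false);
        consumer.Listener += message =>
        {
            if (OnMessageReceived != null)
                OnMessageReceived((ITextMessage)message);
        };
    }

    public void Dispose()
    {
        if (consumer != null) consumer.Close();
    }
}
```

And a matching NUnit fixture following the rundown above:

```csharp
using System.Threading;
using Apache.NMS;
using Apache.NMS.ActiveMQ;
using NUnit.Framework;

[TestFixture]
public class ApacheNMSActiveMQTests
{
    private IConnection connection;
    private ISession session;
    private TopicSubscriber subscriber;
    private string receivedText;

    private const string BrokerUri = "tcp://localhost:61616";
    private const string TopicName = "TestTopic";
    private const string ClientId = "TestClientId";
    private const string ConsumerId = "TestConsumerId";

    [SetUp]
    public void SetUp()
    {
        IConnectionFactory factory = new ConnectionFactory(BrokerUri);
        connection = factory.CreateConnection();
        connection.ClientId = ClientId; // required for durable subscriptions
        connection.Start();
        session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge);

        subscriber = new TopicSubscriber(session, TopicName, ConsumerId);
        subscriber.OnMessageReceived += message => receivedText = message.Text;
        subscriber.Start();
    }

    [Test]
    public void PublishedMessageReachesSubscriber()
    {
        using (TopicPublisher publisher = new TopicPublisher(session, TopicName))
        {
            publisher.SendMessage("Hello from the .NET world!");
        }
        Thread.Sleep(1000); // crude wait for asynchronous delivery
        Assert.AreEqual("Hello from the .NET world!", receivedText);
    }

    [TearDown]
    public void TearDown()
    {
        subscriber.Dispose();
        session.Close();
        connection.Close();
    }
}
```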
Nearly everybody with more than the minimum amount of computer knowledge will have used the built-in Windows Task Manager and knows what an important tool it can sometimes be. Whenever a program crashes, hangs, consumes too many resources or just shouldn’t be there, often the quickest and easiest way to solve the problem is using Task Manager to forcefully close the program. The problem with Task Manager is that, because it’s such a vital troubleshooting component, malware often targets it and tries to block its use so the malicious process cannot be terminated. Some more sophisticated malware can even block third-party task management software such as Process Explorer from running.

If you’re stuck and the default Task Manager has been blocked, or you can’t run a third-party task manager tool, then things can become quite tricky. There is, however, a rather interesting solution to get around this problem, which is to use a task manager tool built to run in a Microsoft Excel spreadsheet. Most people would expect a utility like this to be an executable .exe file, but this one is actually a standard Office 97 – 2003 Worksheet .xls file with some built-in trickery.

TaskManager.xls is a small (41KB) and simple task manager that has been created using the Visual Basic for Applications (VBA) programming language component built into Excel and other Office applications. While it doesn’t show you things like running services, performance graphs or network activity, it can list the currently running processes and terminate, suspend or resume any of them, which is the most important part when dealing with malware.

For this to run you have to make sure macros are enabled in Excel, because their usage is disabled by default to protect against potential macro viruses. If macros are disabled (in Excel 2003, for instance) and you don’t get asked if you want to enable them for the current sheet, go to Tools -> Options -> Security -> Macro Security, and set the level to Medium, which will always ask before running a macro in future.

There are only two buttons and a blank window in TaskManager.xls to start with. The List processes button will populate the window with a list of all running and active processes on your computer, and the Execute commands button will perform one of the three tasks available: terminate, suspend or resume a process. These are used by entering t, s or r into column A of the worksheet, then pressing the button. For example, entering s next to MaliciousProcess.exe and t next to Ransomware.exe, then clicking the Execute commands button, will suspend the first and terminate the second; press the List processes button again to update the list. Do note that, like a traditional task manager tool, TaskManager.xls is unable to terminate protected processes. For example, nothing will happen if you try to terminate the Client Server Runtime Process (csrss.exe) from TaskManager.xls.

TaskManager.xls is very useful, but unfortunately it does have problems working in other office suites. In LibreOffice v4, clicking the List processes button triggers a runtime error, and the free version of Softmaker Office doesn’t support VBA. The free version of Kingsoft Office doesn’t support VBA either, so the tool won’t run, although the professional version does support VBA and might work. Even the free Excel Viewer provided by Microsoft doesn’t work, so it appears that, sadly, the TaskManager.xls tool is only compatible with the real Microsoft Excel.
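TaskManager.xls ships with its VBA hidden away, so the macros below are only a sketch of the general technique rather than the tool's actual code: from VBA you can reach the Windows process list through WMI, which supports enumerating and terminating processes (suspending and resuming require lower-level Windows API calls that WMI's Win32_Process class does not expose):

```vb
' Sketch: list running processes into columns B and C of the active sheet.
Sub ListProcesses()
    Dim wmi As Object, procs As Object, proc As Object
    Dim row As Long
    Set wmi = GetObject("winmgmts:\\.\root\cimv2")
    Set procs = wmi.ExecQuery("SELECT Name, ProcessId FROM Win32_Process")
    row = 2
    For Each proc In procs
        Cells(row, 2).Value = proc.Name
        Cells(row, 3).Value = proc.ProcessId
        row = row + 1
    Next proc
End Sub

' Sketch: terminate every process whose row is marked "t" in column A.
Sub ExecuteCommands()
    Dim wmi As Object, procs As Object, proc As Object
    Dim row As Long
    Set wmi = GetObject("winmgmts:\\.\root\cimv2")
    row = 2
    Do While Cells(row, 3).Value <> ""
        If LCase(Trim(Cells(row, 1).Value)) = "t" Then
            Set procs = wmi.ExecQuery( _
                "SELECT * FROM Win32_Process WHERE ProcessId = " & _
                Cells(row, 3).Value)
            For Each proc In procs
                proc.Terminate ' fails on protected processes, as noted above
            Next proc
        End If
        row = row + 1
    Loop
End Sub
```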
Below you will find several recent observations about the relationship between reading and science process skills.

- Significant improvement in both science and reading scores occurred when the regular basal reading program was replaced with reading in science that correlated with the science curriculum (Romance and Vitale, 2001).
- Teachers should help students recognize the important role that prior knowledge plays and teach them to use that knowledge when learning science through reading (Barton and Jordan, 2001).
- Most students arrive at the science teacher's classroom knowing how to read, but few understand how to use reading for learning science content (Santa, Havens, and Harrison, 1996).
- The same skills that make good scientists also make good readers: engaging prior knowledge, forming hypotheses, establishing plans, evaluating understanding, determining the relative importance of information, describing patterns, comparing and contrasting, making inferences, drawing conclusions, generalizing, evaluating sources, and so on (Armbruster, 1993).
- The skills in science are remarkably similar to those used in other subjects, especially reading. When students are doing science, following scientific procedures, and thinking as scientists, they are developing skills that are necessary for effective reading and understanding (Padilla, Muth and Lund Padilla, 1991).
- Students engaging in hands-on activities are forced to confront currently held cognitive frameworks with new ideas, and thus actively reconstruct meaning from experience (Shymansky, 1989).
- Because hands-on activities encourage students to generate their own questions whose answers are found by subsequent reading of their science textbook or other science materials, such activities can provide students with both a meaningful purpose for reading (Ulerick, 1989) and context-valid cognitive frames of reference from which to construct meaning from text (Nelson-Herber, 1986).
- Reading and activity-oriented sciences emphasize the same intellectual skills and are both concerned with thinking processes. When a teacher helps students develop science process skills, reading processes are simultaneously being developed (Mechling & Oliver, 1983 and Simon & Zimmerman, 1980).
- Research indicates that a strong experience-based science program, one in which students directly manipulate materials, can facilitate the development of language arts skills (Wellman, 1978).
- Science process skills have reading counterparts. For example, when a teacher is working on "describing" in science, students are learning to isolate important characteristics, enumerate characteristics, use appropriate terminology, and use synonyms, all of which are important reading skills (Carter & Simpson, 1978).
- When students have used the process skills of observing, identifying, and classifying, they are better able to discriminate between vowels and consonants and to learn the sounds represented by letters, letter blends, and syllables (Murray & Pikulski, 1978).
- Science instruction provides an alternative teaching strategy that motivates students who may have reading difficulties (Wellman, 1978).
- Children's involvement with process skills enables them to recognize more easily the contextual and structural clues in attacking new words and better equips them to interpret data in a paragraph. Science process skills are essential to logical thinking, as well as to forming the basic skills for learning to read (Barufaldi & Swift, 1977).
- Guszak defines reading readiness as a skill-complex.
Of the three areas within the skill-complex, two can be directly enhanced by science process skills: (1) physical factors (health, auditory, visual, speech, and motor); and (2) understanding factors (concepts, processes). When students see, hear, and talk about science experiences, their understanding, perception, and comprehension of concepts and processes may improve (Barufaldi & Swift, 1977 and Bethel, 1974).
- The hands-on manipulative experiences science provides are the key to the relationship between process skills in both science and reading (Lucas & Burlando, 1975). Science activities provide opportunities for manipulating large quantities of multi-sensory materials which promote perceptual skills, i.e., tactile, kinesthetic, auditory, and visual (Neuman, 1969). These skills then contribute to the development of the concepts, vocabulary, and oral language skills (listening and speaking) necessary for learning to read (Wellman, 1978).
- Studies viewed cumulatively suggest that science instruction at the intermediate and upper elementary grades does improve the attainment of reading skills. The findings reveal that students have derived benefits in the areas of vocabulary enrichment, increased verbal fluency, enhanced ability to think logically, and improved concept formation and communication skills (Campbell, 1972; Kraft, 1961; Olson, 1971; Quinn & Kessler, 1976).
Defining Dry Eyes: Doctors Agree That Dry Eyes Involve Water Loss in the Tear Film’s Aqueous Layer

Physicians look for a series of symptoms for dry eyes, not an exact cause or condition, says Bio-Logic Aqua Research Founder Sharon Kleyne.

Grants Pass, OR (PRWEB) April 09, 2012

In a recent interview, Mrs. Kleyne discussed the latest attempts to define “dry eyes,” “dry eye syndrome” and “dry eye disease.” According to Mrs. Kleyne, the only agreement is that dry eyes involve a loss of water in the tear film’s “aqueous layer,” due either to excessive evaporation or to poor tear production. The causes and symptoms of dry eyes are so complex and variable that doctors have not agreed on a precise clinical definition of the syndrome. Dry eyes are the most frequently cited reason for visiting an eye doctor, and so common that ophthalmologists find it difficult to draw a precise line between normal eyes and abnormal eyes with dry eye disease (Mathers, 2005). That was the conclusion of eye health advocate Sharon Kleyne, host of the Sharon Kleyne Hour Power of Water syndicated radio show and founder of Bio-Logic Aqua Research.

The three-layered tear film covering the eye’s exposed portions is 99% water and extremely complex. The overlying “lipid layer” helps prevent water evaporation from the middle “aqueous (water) layer,” while the lower “mucin layer” adheres the tear film to the eye. Dry eyes are experienced by nearly everyone, says Mrs. Kleyne. Tear film dehydration (water loss) begins at the moment of birth, when you first open your eyes, and eyes require constant hydration throughout life. Because we are all unique, no two individuals are affected in exactly the same way by eye dehydration.

Doctors agree that maintaining a healthy, fully hydrated tear film is becoming an increasing challenge for everyone. According to Ula Jurkunas, MD, corneal stem cell researcher at Harvard University, “To function well, the cornea (clear part of the eye) must be well hydrated by the tear film. Hydration is also essential to successful corneal stem cell transplants” (Jurkunas, 2011).

Sharon Kleyne notes that no physiologic variable correlates exactly with dry eye symptoms, although most measurable variables correlate to some degree. Instead, she explains, physicians look for a series of symptoms. The presence of one or more symptoms could indicate a dry eye condition (Korb, 2000). The most common dry eye symptoms include eye irritation; a feeling of dryness in the eyes; itching, burning and grainy or scratchy eyes; increased eye allergies; and blurred vision (especially late in the day). Symptoms such as fatigue, headache, muscle aches and an elevated stress level may not even directly involve the eyes (Mathers, 2005).

This symptom-based definition works reasonably well, according to Mrs. Kleyne. The degree and duration of symptoms are critical, since a large percentage of the adult population complains of at least mild dry eye symptoms at any given time. This includes 50% of adult females and a significant percentage of computer users and contact lens patients (Mathers, 2005). In addition to symptoms, most (but not all) dry eye patients have at least one physiologic parameter outside the range of normal. Typically, tear production has decreased, tear film volume is low, tear film evaporation is high, and/or tear film osmolarity is elevated (Mathers, 2004).
In addition, tears produced in dry eyes contain elevated levels of substances (metalloproteases and other proteinaceous compounds) that increase surface inflammation (Barton, 1995).

© 2012 Bio-Logic Aqua Research

For the original version on PRWeb visit: http://www.prweb.com/releases/prweb2012/4/prweb9381612.htm
A canticle (from the Latin canticulum, a diminutive of canticum, song) is a hymn (strictly excluding the Psalms) taken from the Bible. The term is often expanded to include ancient non-biblical hymns such as the Te Deum and certain psalms used liturgically.

These three canticles (the "Magnificat", the "Benedictus" and the "Nunc dimittis") are sometimes referred to as the "evangelical canticles", as they are taken from the Gospel of St Luke. They are sung every day (unlike those from the Old Testament which, as is shown above, are only of weekly occurrence). They are placed not amongst the psalms (as are the seven from the Old Testament), but separated from them by the Chapter, the Hymn, the Versicle and Response, and thus come immediately before the Prayer (or before the preces, if these are to be said). They are thus given an importance and distinction elevating them into great prominence, which is further heightened by the rubric which requires the singers and congregations to stand while they are being sung (in honour of the mystery of the Incarnation, to which they refer). Further, while the "Magnificat" is being sung at Solemn Vespers, the altar is incensed as at Solemn Mass. All three canticles are in use in the Greek and Anglican churches.

In the Breviary the above-named ten canticles are provided with antiphons and are sung in the same eight psalm-tones and in the same alternating manner as the psalms. To make the seven taken from the Old Testament suitable for this manner of singing, nos. 2-7 sometimes divide a verse of the Bible into two verses, thus increasing the number of Breviary verses. No. 1, however, goes much farther than this. It uses only a portion of the long canticle in Daniel, and condenses, expands, omits, and interverts verses and portions of verses. In the Breviary the canticle begins with verse 57, and ends with verse 56 (Dan., iii); and the penultimate verse is clearly an interpolation, "Benedicamus Patrem, et Filium . . .".

In addition to their Breviary use some of the canticles are used in other connections in the liturgy; e.g. the "Nunc dimittis" as a tract at the Mass of the Feast of the Purification (when 2 February comes after Septuagesima); the "Benedictus" in the burial of the dead and in various processions. The use of the "Benedictus" and the "Benedicite" at the old Gallican Mass is interestingly described by Duchesne (Christian Worship: Its Origin and Evolution, London, 1903, 191-196). In the Office of the Greek Church the canticles numbered 1, 3, 5, 6, 7, 8, 9 are used at Lauds, but are not assigned to the same days as in the Roman Breviary. Two others (Isaiah 26:9-20, and Jonah 2:2-9) are added for Friday and Saturday respectively.

The ten canticles so far mentioned do not exhaust the portions of Sacred Scripture which are styled "canticles". There are, for example, those of Deborah and Barac, Judith, the "canticle of Canticles"; and many psalms (e.g. xvii, 1, "this canticle"; xxxviii, 1, "canticle of David"; xliv, 1, "canticle for the beloved"; and the first verse of Pss. lxiv, lxv, lxvi, lxvii, etc.). In the first verse of some psalms the phrase psalmus cantici (the psalm of a canticle) is found, and in others the phrase canticum psalmi (a canticle of a psalm). Cardinal Bona thinks that psalmus cantici indicated that the voice was to precede the instrumental accompaniment, while canticum psalmi indicated an instrumental prelude to the voice. This distinction follows from his view of a canticle as an unaccompanied vocal song, and of a psalm as an accompanied vocal song.
It is not easy to distinguish satisfactorily the meanings of psalm, hymn, canticle, as referred to by St. Paul in two places. Canticum appears to be generic - a song, whether sacred or secular; and there is reason to think that his admonition did not contemplate religious assemblies of the Christians, but their social gatherings. In these the Christians were to sing "spiritual songs", and not the profane or lascivious songs common amongst the pagans. These spiritual songs were not exactly psalms or hymns. The hymn may then be defined as a metrical or rhythmical praise of God; and the psalm, accompanied sacred song or canticle, either taken from the Psalms or from some less authoritative source (St. Augustine declaring that a canticle may be without a psalm but not a psalm without a canticle).

In addition to the ten canticles enumerated above, the Roman Breviary places in its index, under the heading "Cantica", the "Te Deum" (at the end of Matins for Sundays and Festivals, but there styled "Hymnus SS. Ambrosii et Augustini") and the "Quicumque vult salvus esse" (Sundays at Prime, but there styled "Symbolum S. Athanasii", the "Creed of St. Athanasius"). To these are sometimes added by writers the "Gloria in excelsis", the "Trisagion", and the "Gloria Patri" (the Lesser Doxology). In the "Psalter and Canticles Pointed for chanting" (Philadelphia, 1901), for the use of the Evangelical Lutheran Congregations, occurs a "Table of canticles" embracing Nos. 1, 3, 8, 9, 10, besides certain psalms, and the "Te Deum" and "Venite" (Ps. xciv, used at the beginning of Matins in the Roman Breviary). The word Canticles is thus seen to be somewhat elastic in its comprehension. On the one hand, while it is used in the common parlance in the Church of England to cover several of the enumerated canticles, the Prayer Book applies it only to the "Benedicite", while in its Calendar the word Canticles is applied to what is commonly known as the "Song of Solomon" (the Catholic "Canticle of Canticles", Vulgate, "Canticum canticorum").

The nine Canticles are as follows:

1. The Song of Moses after the crossing of the Red Sea (Exodus 15:1-19)
2. The Song of Moses before his death (Deuteronomy 32:1-43)
3. The Prayer of Anna, the mother of Samuel (1 Samuel 2:1-10)
4. The Prayer of Habakkuk (Habakkuk 3:2-19)
5. The Prayer of Isaiah (Isaiah 26:9-20)
6. The Prayer of Jonah (Jonah 2:2-9)
7. The Prayer of the Three Holy Children (Daniel 3:26-56)
8. The Song of the Three Holy Children, the "Benedicite" (Daniel 3:57-88)
9. The Magnificat and the Benedictus (Luke 1:46-55; 1:68-79)

Originally, these Canticles were chanted in their entirety every day, with a short refrain inserted between each verse. Eventually, short verses (troparia) were composed to replace these refrains, a process traditionally inaugurated by Saint Andrew of Crete. Gradually over the centuries, the verses of the Biblical Canticles were omitted (except for the Magnificat) and only the composed troparia were read, linked to the original canticles by an Irmos. During Great Lent, however, the original Biblical Canticles are still read. Another Biblical Canticle, the Nunc Dimittis, is either read or sung at Vespers.
When a child who has been hospitalized with a serious infection is sent home to complete a prolonged course of antibiotics, they can receive their medicine in two ways — by mouth, or intravenously, via a peripherally inserted central catheter (PICC) line. Though PICC lines can be scary for pediatric patients, and require caregivers to be trained in their use and care, many doctors prefer them to oral medicines for long-term antibiotic treatments.

One CHOP researcher, Ron Keren, MD, MPH, director of the Center for Pediatric Clinical Effectiveness, was recently awarded nearly two million dollars from the Patient-Centered Outcomes Research Institute (PCORI) to lead a study examining whether oral antibiotics are as effective as PICC lines at treating infection over an extended period.

“These two antibiotic treatment options have major implications for the overall experience of the child, families and caregivers, but there is a lack of real-world evidence on their benefits and drawbacks to help clinicians and patient families make an informed choice,” said Dr. Keren.

A type of intravenous (IV) catheter, a PICC line is a long, flexible tube that is inserted in a peripheral vein, often in the arm or neck, and advanced until its tip rests near the heart. Because they tap directly into the circulatory system, PICC lines offer maximum drug delivery. Unlike regular IV catheters, PICC lines can stay in the body for weeks to months, but they require regular maintenance: PICC lines must be flushed daily, their dressings have to be inspected and changed, and patients with PICC lines must avoid getting them wet or dirty — a tall order for some active pediatric patients. In addition, a variety of equipment is required to use and maintain PICC lines, including infusion pumps and portable IV poles.

PICC lines do have some risks. They can clot, break, or become dislodged. And because they sit in large blood vessels directly above the heart, any bacteria that are inadvertently introduced into the catheter go directly to the heart and are pumped throughout the body, which can lead to a dangerous infection called sepsis.

Oral antibiotics, on the other hand, are much easier for patients to take and caregivers to manage. However, because oral medications must pass through the digestive system, to have the same efficacy as IV medications oral antibiotics must have high “bioavailability” — the percentage of the drug that reaches the blood. Drugs administered via PICC lines have, by definition, 100 percent bioavailability.

“If we find that the prolonged IV option is no better than the oral route, we think that most families would prefer for their child to take oral antibiotics,” Dr. Keren noted. “However, if IV antibiotics are marginally better than oral antibiotics, then that benefit will need to be weighed against any reduction in quality of life and complications that we anticipate with the PICC lines.”
When the last oil well runs dry

Just as certain as death and taxes is the knowledge that we shall one day be forced to learn to live without oil. Exactly when that day will dawn nobody knows, but people in middle age today can probably expect to be here for it.

Long before it arrives we shall have had to commit ourselves to one or more of several possible energy futures. And the momentous decisions we take in the next few years will determine whether our heirs thank or curse us for the energy choices we bequeath to them.

There will always be some oil somewhere, but it may soon cost too much to extract and burn it. It may be too technically difficult, too expensive compared with other fuels, or too polluting.

An article in Scientific American in March 1998 by Dr Colin Campbell and Jean Laherrere concluded: "The world is not running out of oil - at least not yet.

"What our society does face, and soon, is the end of the abundant and cheap oil on which all industrial nations depend."

They suggested there were perhaps 1,000 billion barrels of conventional oil still to be produced, though the US Geological Survey's World Petroleum Assessment 2000 put the figure at about 3,000 billion barrels.

Too good to burn

The world is now producing about 75 million barrels per day (bpd). Conservative (for which read pessimistic) analysts say global oil production from all possible sources, including shale, bitumen and deep-water wells, will peak at around 2015 at about 90 million bpd, allowing a fairly modest increase in consumption.

On Campbell and Laherrere's downbeat estimate, that should last about 30 years at 90 million bpd (1,000 billion barrels divided by 90 million bpd comes to roughly 11,000 days, or just over 30 years), so drastic change could be necessary soon after 2030.

And it would be drastic: 90% of the world's transport depends on oil, for a start. Most of the chemical and plastic trappings of life which we scarcely notice - furniture, pharmaceuticals, communications - need oil as a feedstock. The real pessimists want us to stop using oil for transport immediately and keep it for irreplaceable purposes like these.

In May 2003 the Association for the Study of Peak Oil and Gas (Aspo), founded by Colin Campbell, held a workshop on oil depletion in Paris. One of the speakers was an investment banker, Matthew Simmons, a former adviser to President Bush's administration.

From The Wilderness Publications reported him as saying: "Any serious analysis now shows solid evidence that the non-FSU [former Soviet Union], non-Opec [Organisation of Petroleum Exporting Countries] oil has certainly petered out and has probably peaked...

No cheap oil, no cheap food

"I think basically that peaking of oil will never be accurately predicted until after the fact. But the event will occur, and my analysis is... that peaking is at hand, not years away.

"If I'm right, the unforeseen consequences are devastating... If the world's oil supply does peak, the world's issues start to look very different.

"There really aren't any good energy solutions for bridges, to buy some time, from oil and gas to the alternatives. The only alternative right now is to shrink our economies."

Planning pays off

Aspo suggests the key date is not when the oil runs out, but when production peaks, meaning supplies decline. It believes the peak may come by about 2010. Fundamental change may be closing on us fast. And even if the oil is there, we may do better to leave it untouched.
Many scientists are arguing for cuts in emissions of the main greenhouse gas we produce, carbon dioxide, by at least 60% by mid-century, to try to avoid runaway climate change. That would mean burning far less oil than today, not looking for more.

There are other forms of energy, and many are falling fast in price and will soon compete with oil on cost, if not for convenience. So there is every reason to plan for the post-oil age.

Does it have to be devastating? Different, yes - but our forebears lived without oil and thought themselves none the worse. We shall have to do the same, so we might as well make the best of it. And the best might even be an improvement on today.
Water and sediment testing

EPA is currently collecting and analyzing water and sediment samples to help states and other federal agencies understand the immediate and long-term impacts of oil contamination along the Gulf coast. The results and the interpretation of all data collected by EPA will be posted to www.epa.gov/bpspill.

Water and sediment samples are being taken prior to oil reaching the area to determine water quality and sediment conditions that are typical of selected bays and beaches in Louisiana, Mississippi, Alabama, and the Florida panhandle. This data will be used to supplement existing data generated from previous water quality surveys conducted by states, EPA, and others. Water sampling will continue once the oil reaches the shore; periodic samples will be collected to document water quality changes. EPA will make data publicly available as quickly as possible. Other state and federal agencies make beach closure and seafood harvesting and consumption determinations, but the data generated by EPA will assist in their evaluations.

Why is EPA sampling and monitoring the water?

EPA is tracking the prevalence of potentially harmful chemicals in the water as a result of this spill to determine the level of risk posed to fish and other wildlife. While these chemicals can impact ecosystems, drinking water supplies are not expected to be affected. The oil itself can cause direct effects on fish and wildlife, for example when it coats the feathers of waterfowl and other types of birds. In addition, other chemical compounds can have detrimental effects. Monitoring information allows EPA to estimate the amount of these compounds that may reach ecological systems. When combined with available information on the toxicity of these compounds, EPA scientists can estimate the likely magnitude of effects on fish, wildlife, and human health.
All-metal hip implants can damage soft tissue: FDA

(Reuters) - Metal-on-metal hip implants can cause soft-tissue damage and pain, which could lead to further surgery to replace the implant, the U.S. health regulator said, following several recalls of the artificial hip parts.

All-metal hip implants were developed to be more durable than traditional implants but have become a major cause of concern following several safety problems and patient discomfort. The traditional implants combine a ceramic or metal ball with a plastic socket.

The U.S. Food and Drug Administration said all-metal implants can shed metal where two components connect, such as the ball and the cup that slide against each other during walking or running. Such release of metal will cause wear and tear of the implant and can damage bone and soft tissue surrounding the implant.

The agency said surgeons should select a metal-on-metal hip implant for their patient only after determining that its benefits outweigh those of an alternative hip system.

Johnson & Johnson, the biggest manufacturer of all-metal devices, recalled its ASR hip implant in 2010 following safety problems. Smith & Nephew withdrew a component of one of its all-metal artificial hip systems last June, following a higher level of patient problems with the device. Stryker Corp began recalling some components of its implant in July due to risks associated with corrosion. Other hip implant makers include Zimmer Holdings Inc and Wright Medical Group.

The regulator, however, added that it does not have enough data to specify the concentration of metal ions in a patient's body or blood necessary to produce adverse effects. The reaction seemed to be specific to individual patients, the FDA said on its website.

(Reporting by Esha Dey in Bangalore; Editing by Don Sebastian)
Word of the Day, Website of the Day, Number to Know, This Day in History, Today’s Featured Birthday and Daily Quote.

Word of the Day

Vermicular ver-MIK-yuh-ler (adjective) Resembling a worm in form or motion; of, relating to or caused by worms - www.merriam-webster.com

Website of the Day

Martin Luther King Research and Education Institute

As we approach Martin Luther King Jr. Day, take a moment to learn more about the great American leader. This site collects all kinds of documents pertaining to King, lists the latest news and much more.

Number to Know

1983: Year when President Ronald Reagan signed Martin Luther King Jr. Day into law.

This Day in History

Jan. 16, 2003: The Space Shuttle Columbia takes off for mission STS-107, which would be its final one. Columbia disintegrated 16 days later on re-entry.

Today’s Featured Birthday

Baseball star Albert Pujols (33)

Daily Quote

“I look to a day when people will not be judged by the color of their skin, but by the content of their character.” – Dr. Martin Luther King Jr.
Leri-Weil syndrome (medical condition): A rare genetic disorder characterized by short forearms.

Leri-Weil syndrome is listed as a "rare disease" by the Office of Rare Diseases (ORD) of the National Institutes of Health (NIH). This means that Leri-Weil syndrome, or a subtype of Leri-Weil syndrome, affects less than 200,000 people in the US population. Source - National Institutes of Health (NIH)
Healthy Affordable Food RWJF Priority: Increase access to high-quality, affordable foods through new or improved grocery stores and healthier corner stores and bodegas Research shows that having a supermarket or grocery store in a neighborhood increases residents’ fruit and vegetable consumption and is associated with lower body mass index (BMI) among adolescents. Yet, many families do not have access to healthy affordable foods in their neighborhoods. This is especially true in lower-income communities, where convenience stores and fast-food restaurants are widespread, but supermarkets and farmers’ markets are scarce.
Last Friday, The Hill’s Congress Blog highlighted the innovative ways governments, NGOs and the private sector are funding global health. Programs like the Global Alliance for Vaccines and Immunization (GAVI) and The Global Fund to Fight AIDS, TB and Malaria are not only ensuring that health interventions get to the people who need them most; they are helping to promote market growth and drive down prices.

Here’s an excerpt on public-private partnerships from the blog:

“Millions of lives are saved today in developing countries because of bold, innovative financing arrangements over the last 10 years. These financing mechanisms are good examples of private sector partnership with the public sector for the common good. These financing initiatives have pooled large public sector funding with private sector resources, thus allowing taxpayer funds to have much larger impact than would otherwise be possible. Some of the examples are given below.”

USAID’s Neglected Tropical Disease (NTD) Program is one such collaboration. In a press statement released last fall, Dr. Ariel Pablos-Mendez, Assistant Administrator for USAID’s Global Health Bureau, states:

“To date, USAID’s NTD program is the largest public-private partnership collaboration in our 50 year history. Over the past six years, USAID has leveraged over $3 billion in donated medicines, reflecting one of the most cost effective public health programs. Because of this support, we are beginning to document control and elimination of these diseases in our focus countries and we are on track to meet the 2020 goals.”

You can also read about how Sabin is helping countries create sustainable access to immunization financing here.
Stat: Of all infertile women, an estimated 15 percent are infertile because of PID.

What is it exactly? "Pelvic inflammatory disease" is shorthand for any serious, non-specific bacterial infection of the reproductive organs that are housed in the pelvis: the uterus, uterine lining, fallopian tubes, and/or ovaries. These infections usually start in the vagina and, when left untreated, can progressively infect other reproductive organs. 20% of PID cases are found in teens, who often are afraid or unable to get reproductive health care. PID can result in permanent infertility and chronic pain.

About how many people have it? About one million cases of PID are reported in the United States annually.

How is it spread? In most cases, other sexually transmitted diseases and infections such as gonorrhea and chlamydia are at the root of PID, especially when they are left untreated. Some cases of PID are due to infections with more than one type of bacteria.

What are its symptoms?
• painful periods that may last longer than previous cycles
• unusual vaginal discharge
• spotting or cramping between periods
• pain or cramping during urination, or blood in the urine
• lower back or abdominal pain
• nausea or vomiting
• pain during vaginal intercourse

How is it diagnosed? PID is often difficult to diagnose, and it is widely thought that millions of cases each year go undiagnosed. To diagnose PID, you will need a pelvic exam which includes a Pap smear, and possibly a laparoscopy (a diagnostic microsurgical procedure that can usually be done in an office visit) so that your doctor or clinician can take a close look at your reproductive system. It is also imperative that you tell your doctor or clinician if you have been sexually active with a partner and what your sexual history has been.

Is it treatable? In some cases, antibiotics, bed rest, and sexual celibacy are prescribed. In other cases, surgery may be required, including the possible removal of some reproductive organs.

Is it curable? In some cases, but it can recur even once treated if the person becomes reinfected.

Can it affect fertility? PID can lead to permanent sterility or ectopic pregnancy.

Can it cause death? Almost any bacterial infection, if it becomes serious enough or affects enough of the body's systems, can potentially cause severe injury or death.

How can we protect against it? Using condoms during vaginal intercourse offers a very high level of protection from PID. Annual STD screenings also reduce the risk by finding other STDs or STIs and treating them before they can progress to cause PID. Because PID is caused by other untreated infections, it is one of many reasons why it is so important for women to get gynecological exams and full STI screenings at least once every year, without fail.
Ever wonder what would happen if every single adult in the U.S. took a few hours each month to support a program that promotes the well-being of children? Perhaps you would choose to advocate for a child in an unstable environment; or a child in poor health; or one who is struggling academically; or one who is facing a bully? What kind of impact would that make on the future of our country?

I recently came across some information about National Make a Difference in Children Month, a grassroots call to action sponsored by long-time child advocate Kim Ratz. The intention of this annual observance is to raise awareness of how our actions can make a positive difference to a child. Ms. Ratz outlines 4 key actions we can take to have a direct impact on the life of a child on her website:

1. Pick one (or more) event or activity to do with a child … that will make some kind of positive difference or impact on that child. Need ideas? Read 100+ Ways to Make a Difference to Children.

2. Support an organization that serves children … It could be your local community ed. or schools, YMCA, Boy or Girl Scouts, place of worship, park and recreation or any other organization that serves kids.

3. Tell your policy makers to support initiatives that are good for kids … like your school board, city council, county commissioners, state legislators & congressional delegation; summer is generally a more relaxed time to communicate with them. Share your own story about making a difference to children … and WHY it’s important to support programs for children …

4. Tell other people about this campaign … like your neighbors, relatives, friends, people at work, worship, school or play.

Here are some more ideas from Early Childhood News and Resources on how you can make a difference to a child this month:
- Volunteer at a local center that helps teen or single mothers (or fathers)
- Volunteer with your local elementary school
- Help at a soup kitchen for needy families
- Help at church with Sunday School, VBS or another faith-based program
- Locate a service in your area that assists homeless children with school supplies, medical care or social-emotional development
- Volunteer to read for kids at your local library
- Teach classes at a local rec center or community center: arts, crafts, reading, sports, ASL, music, etc.
- Offer your time at the Foundation for the Blind (they often run children’s classes)
- Find a local farm that hosts classes for special needs kiddos and volunteer there (horse therapy, etc)
- Don’t have time to volunteer your time? How about a simple donation?

What can YOU do to help a child in need? Share your ideas and inspiration!
XML and the Second-Generation Web, by Jon Bosak and Tim Bray; Scientific American, May 1999 (5 pages)

Give people a few hints, and they can figure out the rest. They can look at this page, see some large type followed by blocks of small type and know that they are looking at the start of a magazine article. They can look at a list of groceries and see shopping instructions. They can look at some rows of numbers and understand the state of their bank account. Computers, of course, are not that smart; they need to be told exactly what things are, how they are related and how to deal with them. Extensible Markup Language (XML for short) is a new language designed to do just that, to make information self-describing. This simple-sounding change in how computers communicate has the potential to extend the Internet beyond information delivery to many other kinds of human activity. Indeed, since XML was completed in early 1998 by the World Wide Web Consortium (usually called the W3C), the standard has spread like wildfire through science and into industries ranging from manufacturing to medicine.
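To make “self-describing” concrete, here is a minimal illustration: the grocery-list idea above, marked up in hypothetical tags and read back with Python’s standard library (the tag and attribute names are invented for this sketch).

```python
import xml.etree.ElementTree as ET

# A grocery list in hypothetical XML markup: each tag says what its
# content *is*, so no prior layout conventions are needed to read it.
doc = """
<groceries>
  <item quantity="2" unit="loaves">bread</item>
  <item quantity="1" unit="dozen">eggs</item>
</groceries>
"""

root = ET.fromstring(doc)
for item in root.findall("item"):
    # The structure itself says which number is a quantity and which
    # string names the thing being bought.
    print(item.get("quantity"), item.get("unit"), item.text)
```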
Absolute and Relational Theories of Space and Motion

Since antiquity, natural philosophers have struggled to comprehend the nature of three tightly interconnected concepts: space, time, and motion. A proper understanding of motion, in particular, has been seen to be crucial for deciding questions about the natures of space and time, and their interconnections. Since the time of Newton and Leibniz, philosophers’ struggles to comprehend these concepts have often appeared to take the form of a dispute between absolute conceptions of space, time and motion, and relational conceptions. This article guides the reader through some of the history of these philosophical struggles. Rather than taking sides in the (alleged) ongoing debates, or reproducing the standard dialectic recounted in most introductory texts, we have chosen to scrutinize carefully the history of the thinking of the canonical participants in these debates — principally Descartes, Newton, Leibniz, Mach and Einstein. Readers interested in following up either the historical questions or current debates about the natures of space, time and motion will find ample links and references scattered through the discussion and in the Other Internet Resources section below.

- 1. Introduction
- 2. Aristotle
- 3. Descartes
- 4. Newton
- 5. Absolute Space in the Twentieth Century
- 6. Leibniz
- 7. ‘Not-Newton’ versus ‘Be-Leibniz’
- 8. Mach and Later Machians
- 9. Relativity and Motion
- 10. Conclusion
- Other Internet Resources
- Related Entries

Things change. A platitude perhaps, but still a crucial feature of the world, and one which causes many philosophical perplexities — see for instance the entry on Zeno's Paradoxes. For Aristotle, motion (he would have called it ‘locomotion’) was just one kind of change, like generation, growth, decay, fabrication and so on. The atomists held on the contrary that all change was in reality the motion of atoms into new configurations, an idea that was not to begin to realize its full potential until the Seventeenth Century, particularly in the work of Descartes. (Of course, modern physics seems to show that the physical state of a system goes well beyond the geometrical configuration of bodies. Fields, while determined by the states of bodies, are not themselves configurations of bodies if interpreted literally, and in quantum mechanics bodies have ‘internal states’ such as particle spin.) Not all changes seem to be merely the (loco)motions of bodies in physical space. Yet since antiquity, in the western tradition, this kind of motion has been absolutely central to the understanding of change. And since motion is a crucial concept in physical theories, one is forced to address the question of what exactly it is.

The question might seem trivial, for surely what is usually meant by saying that something is moving is to say that it is moving relative to something, often tacitly understood between speakers. For instance: the car is moving at 60 mph (relative to the road and things along it), the plane is flying (relative) to London, the rocket is lifting off (the ground), or the passenger is moving (to the front of the speeding train). Typically the relative reference body is either the surroundings of the speakers, or the Earth, but this is not always the case.
For instance, it seems to make sense to ask whether the Earth rotates about its axis West-East diurnally or whether it is instead the heavens that rotate East-West; but if all motions are to be reckoned relative to the Earth, then its rotation seems impossible. But if the Earth does not offer a unique frame of reference for the description of motion, then we may wonder whether any arbitrary object can be used for the definition of motions: are all such motions on a par, none privileged over any other? It is unclear whether anyone has really, consistently espoused this view: Aristotle, perhaps, in the Metaphysics; Descartes and Leibniz are often thought to have, but, as we'll see, those claims are suspect; possibly Huygens, though his remarks remain cryptic; Mach at some moments perhaps. If this view were correct, then the question of whether the Earth or heavens rotate would be meaningless, merely different but equivalent expressions of the facts.

But suppose, like Aristotle, you take ordinary language accurately to reflect the structure of the world; then you could recognize systematic everyday uses of ‘up’ and ‘down’ that require some privileged standards — uses that treat things closer to a point at the center of the Earth as more ‘down’ and motions towards that point as ‘downwards’. Of course we would likely explain this usage in terms of the fact that we and our language evolved in a very noticeable gravitational field directed towards the center of the Earth, but for Aristotle, as we shall see, this usage helped identify an important structural feature of the universe, which itself was required for the explanation of weight.

Now a further question arises: how should a structure, such as a preferred point in the universe, which privileges certain motions, be understood? What makes that point privileged? One might expect that Aristotle simply identified it with the center of the Earth, and so defined it relative to that particular body; but in fact he did not adopt that tacit convention as fundamental, for he thought it possible for the Earth to move from the ‘down’ point. Thus the question arises (although Aristotle does not address it explicitly) of whether the preferred point is somehow picked out in some other way by the bodies in the universe — the center of the heavens perhaps? Or is it picked out quite independently of the arrangements of matter?

The issues that arise in this simple theory help frame the debates between later physicists and philosophers concerning the nature of motion; in particular, we will focus on the theories of Descartes, Newton, Leibniz, Mach and Einstein, and their interpretations. But similar issues circulate through the different contexts: is there any kind of privileged sense of motion, a sense in which things can be said to move or not, not just relative to this or that reference body, but ‘truly’? If so, can this true motion be analyzed in terms of motions relative to other bodies — to some special body, or to the entire universe perhaps? (And in relativity, in which distances, times and measures of relative motion are frame-dependent, what relations are relevant?) If not, then how is the privileged kind of motion to be understood, as relative to space itself — something physical but non-material — perhaps? Or can some kinds of motion be best understood as not being spatial changes — changes of relative location or of place — at all?
To see that the problem of the interpretation of spatiotemporal quantities as absolute or relative is endemic to almost any kind of mechanics one can imagine, we can look to one of the simplest theories — Aristotle's account of natural motion (e.g., On the Heavens I.2). According to this theory it is because of their natures, and not because of ‘unnatural’ forces, that heavy bodies move down, and ‘light’ things (air and fire) move up; it is their natures, or ‘forms’, that constitute the gravity or weight of the former and the levity of the latter. This account only makes sense if ‘up’ and ‘down’ can be unequivocally determined for each body. According to Aristotle, up and down are fixed by the position of the body in question relative to the center of the universe, a point coincident with the center of the Earth. That is, the theory holds that heavy bodies naturally move towards the center, while light bodies naturally move away.

Does this theory involve absolute or merely relative quantities? It depends on how the center is conceived. If the center were identified with the center of the Earth, then the theory could be taken to eschew absolute quantities: it would simply hold that the natural motions of any body depend on its position relative to another, namely the Earth. But Aristotle is explicit that the center of the universe is not identical with, but merely coincident with the center of the Earth (e.g., On the Heavens II.14): since the Earth itself is heavy, if it were not at the center it would move there! So the center is not identified with any body, and so perhaps direction-to-center is an absolute quantity in the theory, not understood fundamentally as direction to some body (merely contingently as such if some body happens to occupy the center). But this conclusion is not clear either. In On the Heavens II.13, admittedly in response to a different issue, Aristotle suggests that the center itself is ‘determined’ by the outer spherical shell of the universe (the aetherial region of the fixed stars). If this is what he intends, then the natural law prescribes motion relative to another body after all — namely up or down with respect to the mathematical center of the stars.

It would be to push Aristotle's writings too hard to suggest that he was consciously wrestling with the issue of whether mechanics required absolute or relative quantities of motion, but what is clear is that these questions arise in his physics and his remarks impinge on them. His theory also gives a simple model of how these questions arise: a physical theory of motion will say that ‘under such-and-such circumstances, motion of so-and-so a kind will occur’ — and the question of whether that kind of motion makes sense in terms of the relations between bodies alone arises automatically. Aristotle may not have recognized the question explicitly, but we see it as one issue in the background of his discussion of the center.

The issues are, however, far more explicit in Descartes' physics; and since the form of his theory is different, the ‘kinds of motion’ in question are quite different — as they change with all the different theories that we discuss. For Descartes argued in his 1644 Principles of Philosophy (see Book II) that the essence of matter was extension (i.e., size and shape), because any other attribute of bodies could be imagined away without imagining away matter itself.
But he also held that extension constitutes the nature of space, and hence he concluded that space and matter were one and the same thing. An immediate consequence of the identification is the impossibility of the vacuum; if every region of space is a region of matter, then there can be no space without matter. Thus Descartes' universe is ‘hydrodynamical’ — completely full of mobile matter in different-sized pieces in motion, rather like a bucket full of water and lumps of ice of different sizes, which has been stirred around. Since fundamentally the pieces of matter are nothing but extension, the universe is in fact nothing but a system of geometric bodies in motion without any gaps. (Descartes held that all other properties arise from the configurations and motions of such bodies — from geometric complexes. See Garber 1992 for a comprehensive study.)

The identification of space and matter poses a puzzle about motion: if the space that a body occupies literally is the matter of the body, then when the body — i.e., the matter — moves, so does the space that it occupies. Thus it doesn't change place, which is to say that it doesn't move after all! Descartes resolved this difficulty by taking all motion to be the motion of bodies relative to one another, not a literal change of space.

Now, a body has as many relative motions as there are bodies, but it does not follow that all are equally significant. Indeed, Descartes uses several different concepts of relational motion. First there is ‘change of place’, which is nothing but motion relative to this or that arbitrary reference body (II.13). In this sense no motion of a body is privileged, since the speed, direction, and even curve of a trajectory depends on the reference body, and none is singled out. Next, he discusses motion in ‘the ordinary sense’ (II.24). This is often conflated with mere change of arbitrary place, but it in fact differs because according to the rules of ordinary speech one properly attributes motion only to bodies whose motion is caused by some action, not to any relative motion. (For instance, a person sitting on a speeding boat is ordinarily said to be at rest, since ‘he feels no action in himself’.) Finally, he defined motion ‘properly speaking’ (II.25) to be a body's motion relative to the matter contiguously surrounding it, which the impossibility of a vacuum guarantees to exist. (Descartes’ definition is complicated by the fact that he modifies this technical concept to make it conform more closely to the pre-theoretical sense of ‘motion’; however, in our discussion transference is all that matters, so we will ignore those complications.) Since a body can only be touching one set of surroundings, Descartes (dubiously) argued that this standard of motion was unique.

What we see here is that Descartes, despite holding motion to be the motion of bodies relative to one another, also held there to be a privileged sense of motion; in a terminology sometimes employed by writers of the period, he held there to be a sense of ‘true motion’, over and above the merely relative motions. Equivalently, we can say that Descartes took motion (‘properly speaking’) to be a complete predicate: that is, moves-properly-speaking is a one-place predicate. (In contrast, moves-relative-to is a two-place predicate.) And note that the predicate is complete despite the fact that it is analyzed in terms of relative motion.
(Formally, let contiguous-surroundings be a function from bodies to their contiguous surroundings; then x moves-properly-speaking is analyzed as x moves-relative-to contiguous-surroundings(x).) This example illustrates why it is crucial to keep two questions distinct: on the one hand, is motion to be understood in terms of relations between bodies or by invoking something additional, something absolute; on the other hand, are all relative motions equally significant, or is there some ‘true’, privileged notion of motion? Descartes' views show that eschewing absolute motion is logically compatible with accepting true motion; which is of course not to say that his definitions of motion are themselves tenable.

There is an interpretational tradition which holds that Descartes only took the first, ‘ordinary’ sense of motion seriously, and introduced the second notion to avoid conflict with the Catholic Church. Such conflict was a real concern, since the censure of Galileo's Copernicanism took place only 11 years before publication of the Principles, and had in fact dissuaded Descartes from publishing an earlier work, The World. Indeed, in the Principles (III.28) he is at pains to explain how ‘properly speaking’ the Earth does not move, because it is swept around the Sun in a giant vortex of matter — the Earth does not move relative to its surroundings in the vortex.

The difficulty with the reading, aside from the imputation of cowardice to the old soldier, is that it makes nonsense of Descartes' mechanics, a theory of collisions. For instance, according to his laws of collision if two equal bodies strike each other at equal and opposite velocities then they will bounce off at equal and opposite velocities (Rule I). On the other hand, if the very same bodies approach each other with the very same relative speed, but at different speeds, then they will move off together in the direction of the faster one (Rule III). But if the operative meaning of motion in the Rules is the ordinary sense, then these two situations are just the same situation, differing only in the choice of reference frame, and so could not have different outcomes — bouncing apart versus moving off together. It seems inconceivable that Descartes could have been confused in such a trivial way. (Additionally, as Pooley 2002 points out, just after he claims that the Earth is at rest ‘properly speaking’, Descartes argues that the Earth is stationary in the ordinary sense, because common practice is to determine the positions of the stars relative to the Earth. Descartes simply didn't need motion properly speaking to avoid religious conflict, which again suggests that it has some other significance in his system of thought.)

Thus Garber (1992, Chapters 6–8) proposes that Descartes actually took the unequivocal notion of motion properly speaking to be the correct sense of motion in mechanics. Then Rule I covers the case in which the two bodies have equal and opposite motions relative to their contiguous surroundings, while Rule VI covers the case in which the bodies have different motions relative to those surroundings — one is perhaps at rest in its surroundings. That is, exactly what is needed to make the rules consistent is the kind of privileged, true, sense of motion provided by Descartes' second definition. Insurmountable problems with the rules remain, but rejecting the traditional interpretation and taking motion properly speaking seriously in Descartes' philosophy clearly gives a more charitable reading.
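Before moving on to Newton, the logical shape of Descartes' two relational senses of motion can be put schematically. The following is a toy sketch in code, with invented names and stand-in data, not anything found in the Principles:

```python
# A toy, self-contained rendering of Descartes' two relational senses of
# motion. Velocities are stand-in vectors; `surroundings` is a
# hypothetical map from each body to the matter contiguous with it.

velocity = {"boat": (5.0, 0.0), "water": (5.0, 0.0), "passenger": (5.0, 0.0)}
surroundings = {"passenger": "boat", "boat": "water"}

def moves_relative_to(body, reference):
    """Change of place: motion relative to an arbitrary reference body."""
    return velocity[body] != velocity[reference]

def moves_properly_speaking(body):
    """A complete, one-place predicate, though analyzed via the two-place
    relation plus the body's contiguous surroundings."""
    return moves_relative_to(body, surroundings[body])

print(moves_properly_speaking("passenger"))  # False: at rest in the boat
```

The point of the sketch is just the logical form: moves_properly_speaking takes a single argument, even though its analysis appeals to a two-place relation.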
In an unpublished essay — De Gravitatione (Newton, 2004) — and in a Scholium to the definitions given in his 1687 Mathematical Principles of Natural Philosophy (see Newton, 1999 for an up-to-date translation), Newton attacked both of Descartes' notions of motion as candidates for the operative notion in mechanics. (See Stein 1967 and Rynasiewicz 1995 for important, and differing, views on the issue; this critique is studied in more detail in the entry Newton's views on space, time, and motion.) The most famous argument invokes the so-called ‘Newton's bucket’ experiment. Stripped to its basic elements one compares:

- (i) a bucket of water hanging from a cord as the bucket is set spinning about the cord's axis, with
- (ii) the same bucket and water when they are rotating at the same rate about the cord's axis.

As is familiar from any rotating system, there will be a tendency for the water to recede from the axis of rotation in the latter case: in (i) the surface of the water will be flat (because of the Earth's gravitational field) while in (ii) it will be concave. The analysis of such ‘inertial effects’ due to rotation was a major topic of enquiry of ‘natural philosophers’ of the time, including Descartes and his followers, and they would certainly have agreed with Newton that the concave surface of the water in the second case demonstrated that the water was moving in a mechanically significant sense. There is thus an immediate problem for the claim that proper motion is the correct mechanical sense of motion: in (i) and (ii) proper motion is anti-correlated with the mechanically significant motion revealed by the surface of the water. That is, the water is flat in (i) when it is in motion relative to its immediate surroundings — the inner sides of the bucket — but curved in (ii) when it is at rest relative to its immediate surroundings. Thus the mechanically relevant meaning of rotation is not that of proper motion. (You may have noticed a small lacuna in Newton's argument: in (i) the water is at rest and in (ii) in motion relative to that part of its surroundings constituted by the air above it. It's not hard to imagine small modifications to the example to fill this gap.)

Newton also points out that the height that the water climbs up the inside of the bucket provides a measure of the rate of rotation of bucket and water: the higher the water rises up the sides, the greater the tendency to recede must be, and so the faster the water must be rotating in the mechanically significant sense. But suppose, very plausibly, that the measure is unique — that any particular height indicates a particular rate of rotation. Then the unique height that the water reaches at any moment implies a unique rate of rotation in a mechanically significant sense. And thus motion in the sense of motion relative to an arbitrary reference body is not the mechanical sense, since that kind of rotation is not unique at all, but depends on the motion of the reference body. And so Descartes’ change of place (and for similar reasons, motion in the ordinary sense) is not the mechanically significant sense of motion. In our discussion of Descartes we called the sense of motion operative in the science of mechanics ‘true motion’, and the phrase is used in this way by Newton in the Scholium.
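The uniqueness claim can be backed by a modern hydrostatic calculation (a sketch in present-day notation, not Newton's own, where ω is the angular speed of the water, g the gravitational acceleration, and R the inner radius of the bucket): the free surface of uniformly rotating water is a paraboloid.

```latex
% Rise of the free surface at distance r from the axis, above its
% lowest point, for water in uniform rotation at angular speed \omega:
z(r) - z(0) = \frac{\omega^{2} r^{2}}{2g},
% so the climb at the wall of the bucket (r = R) is
z(R) - z(0) = \frac{\omega^{2} R^{2}}{2g}.
```

Since the climb at the wall increases strictly with ω, a given height does indeed correspond to exactly one rate of rotation, just as the argument requires.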
Thus Newton's bucket shows that true (rotational) motion is anti-correlated with, and so not identical with, proper motion (as Descartes proposed according to the Garber reading); and Newton further argues that the rate of true (rotational) motion is unique, and so not identical with change of place, which is multiple. Newton proposed instead that true motion is motion relative to a temporally enduring, rigid, 3-dimensional Euclidean space, which he dubbed ‘absolute space’. Of course, Descartes also defined motion as relative to an enduring 3-dimensional Euclidean space; the difference is that Descartes' space was divided into parts (his space was identical with a plenum of corpuscles) in motion, not a rigid structure in which (mobile) material bodies are embedded. So according to Newton, the rate of true rotation of the bucket (and water) is the rate at which it rotates relative to absolute space. Or put another way, Newton effectively defines the complete predicate x moves-absolutely as x moves-relative-to absolute space; both Newton and Descartes offer the competing complete predicates as analyses of x moves-truly.

Newton's proposal for understanding motion solves the problems that he posed for Descartes, and provides an interpretation of the concepts of constant motion and acceleration that appear in his laws of motion. However, it suffers from two notable interpretational problems, both of which were pressed forcefully by Leibniz (in the Leibniz-Clarke Correspondence, 1715–1716) — which is not to say that Leibniz himself offered a superior account of motion (see below). (Of course, there are other features of Newton's proposal that turned out to be empirically inadequate, and are rejected by relativity: Newton's account violates the relativity of simultaneity and postulates a non-dynamical spacetime structure.)

First, according to this account, absolute velocity is a well-defined quantity: more simply, the absolute speed of a body is the rate of change of its position relative to an arbitrary point of absolute space. But the Galilean relativity of Newton's laws means that the evolution of a closed system is unaffected by constant changes in velocity; Galileo's experimenter cannot determine from observations inside his cabin whether the boat is at rest in harbor or sailing smoothly. Put another way, according to Newtonian mechanics, in principle Newton's absolute velocity cannot be experimentally determined. So in this regard absolute velocity is quite unlike acceleration (including rotation); Newtonian acceleration is understood in absolute space as the rate of change of absolute velocity, and is, according to Newtonian mechanics, in general measurable, for instance by measuring the height that the water ascends the sides of the bucket. (It is worth noting that Newton was well aware of these facts; the Galilean relativity of his theory is demonstrated in Corollary V of the laws of the Principia, while Corollary VI shows that acceleration is unobservable if all parts of the system accelerate in parallel at the same rate, as they do in a homogeneous gravitational field.) Leibniz argued (rather inconsistently, as we shall see) that since differences in absolute velocity were unobservable, they could not be genuine differences at all; and hence that Newton's absolute space, whose existence would entail the reality of such differences, must also be a fiction.
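The unobservability of absolute velocity can be stated in one line. As a sketch in modern notation (not in Newton's or Leibniz's own terms), a ‘Galilean boost’ by a constant velocity v shifts trajectories but leaves accelerations untouched:

```latex
% Boost by a constant velocity v: position and velocity transform,
% while acceleration (and hence F = ma) is invariant.
x \;\mapsto\; x + vt, \qquad
\dot{x} \;\mapsto\; \dot{x} + v, \qquad
\ddot{x} \;\mapsto\; \ddot{x}.
```

So no experiment governed by laws of the form F = ma can reveal which constant velocity a closed system has, though changes of velocity, including rotations, do show up.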
Few contemporary philosophers would immediately reject a quantity as meaningless simply because it was not experimentally determinable, but this fact does justify genuine doubts about the reality of absolute velocity, and hence of absolute space.

The second problem concerns the nature of absolute space. Newton quite clearly distinguished his account from Descartes' — in particular with regard to absolute space's rigidity versus Descartes' ‘hydrodynamical’ space, and the possibility of the vacuum in absolute space. Thus absolute space is definitely not material. On the other hand, presumably it is supposed to be part of the physical, not mental, realm. In De Gravitatione, Newton rejected both the standard philosophical categories of substance and attribute as suitable characterizations. Absolute space is not a substance, for it lacks causal powers and does not have a fully independent existence, and yet it is not an attribute either, since it would exist even in a vacuum, which by definition is a place where there are no bodies in which it might inhere. Newton proposes that space is what we might call a ‘pseudo-substance’, more like a substance than a property, yet not quite a substance. (Note that Samuel Clarke, in his Correspondence with Leibniz, which Newton had some role in composing, advocates the property view, and note further that when Leibniz objects because of the vacuum problem, Clarke suggests that there might be non-material beings in the vacuum in which space might inhere.) In fact, Newton accepted the principle that everything that exists, exists somewhere — i.e., in absolute space. Thus he viewed absolute space as a necessary consequence of the existence of anything, and of God's existence in particular — hence space's ontological dependence. Leibniz was presumably unaware of the unpublished De Gravitatione in which these particular ideas were developed, but as we shall see, his later works are characterized by a robust rejection of any notion of space as a real thing rather than an ideal, purely mental entity. This is a view that attracts even fewer contemporary adherents, but there is something deeply peculiar about a non-material but physical entity, a worry that has influenced many philosophical opponents of absolute space.

After the development of relativity (which we will take up below), and its interpretation as a spacetime theory, it was realized that the notion of spacetime had applicability to a range of theories of mechanics, classical as well as relativistic. In particular, there is a spacetime geometry — ‘Galilean’ or ‘neo-Newtonian’ spacetime — for Newtonian mechanics that solves the problem of absolute velocity; an idea exploited by a number of philosophers from the late 1960s (e.g., Earman 1970, Friedman 1983, Sklar 1974 and Stein 1968). For details the reader is referred to the entry on spacetime: inertial frames, but the general idea is that although a spatial distance is well-defined between any two simultaneous points of this spacetime, only the temporal interval is well-defined between non-simultaneous points. Thus things are rather unlike Newton's absolute space, whose points persist through time and maintain their distances; in absolute space the distance between p-now and q-then (where p and q are points) is just the distance between p-now and q-now.
However, Galilean spacetime has an ‘affine connection’ which effectively specifies, for every point of every continuous curve, the rate at which the curve is changing from straightness at that point; for instance, the straight lines are picked out as those curves whose rate of change from straightness is zero at every point. (Another way of thinking about this space is as possessing — in addition to a distance between any two simultaneous points and a temporal interval between any points — a three-place relation of colinearity, satisfied by three points just in case they lie on a straight line.)

Since the trajectories of bodies are curves in spacetime, the affine connection determines the rate of change from straightness at every point of every possible trajectory. The straight trajectories thus defined can be interpreted as the trajectories of bodies moving inertially, and the rate of change from straightness of any trajectory can be interpreted as the acceleration of a body following that trajectory. That is, Newton's Second Law can be given a geometric formulation as ‘the rate of change from straightness of a body's trajectory is equal to the forces acting on the body divided by its mass’. The significance of this geometry is that while acceleration is well-defined, velocity is not — in accord with the empirical determinability of acceleration but not velocity according to Newtonian mechanics. (A simple analogy helps one see how such a thing is possible: betweenness but not ‘up’ is a well-defined concept in Euclidean space.) Thus Galilean spacetime gives a very nice interpretation of the choice that nature makes when it decides that the laws of mechanics should be formulated in terms of accelerations, not velocities (as Aristotle and Descartes proposed).

Put another way, we can define the complete predicate x accelerates as trajectory(x) has-non-zero-rate-of-change-from-straightness, where trajectory maps bodies onto their trajectories in Galilean spacetime. And this predicate, defined this way, applies to the water in the bucket if and only if it is rotating, according to Newtonian mechanics formulated in terms of the geometry of Galilean spacetime; it is the mechanically relevant sense of the word in this theory. But all of this formulation and definition has been given in terms of the geometry of spacetime, not relations between bodies; acceleration is ‘absolute’ in the sense that there is a preferred (true) sense of acceleration in mechanics which is not defined in terms of the motions of bodies relative to one another. (Note that this sense of ‘absolute’ is broader than that of motion relative to absolute space, which we defined earlier. In the remainder of this article we will use it in the broader sense. The reader should be aware that the term is used in many ways in the literature, and such equivocation often leads to massive misunderstandings.)

Thus if any of this analysis of motion is taken literally then one arrives at a position regarding the ontology of spacetime rather like that of Newton's regarding space: it is some kind of ‘substantial’ (or maybe pseudo-substantial) thing with the geometry of Galilean spacetime, just as absolute space possessed Euclidean geometry. This view regarding the ontology of spacetime is usually called ‘substantivalism’ (Sklar, 1974).
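The geometric reading of the Second Law just given can be written out explicitly. As a sketch in modern notation, where the Γ are the coefficients of the affine connection in an arbitrary coordinate system:

```latex
% 'Rate of change from straightness' of a trajectory x(t), set equal to
% force over mass; in inertial coordinates the \Gamma vanish and this
% reduces to the familiar d^2 x / dt^2 = F / m.
\frac{d^{2}x^{i}}{dt^{2}}
  + \Gamma^{i}{}_{jk}\,\frac{dx^{j}}{dt}\,\frac{dx^{k}}{dt}
  = \frac{F^{i}}{m}
```

Velocity, by contrast, enters no such coordinate-independent combination, which is the formal counterpart of its empirical indeterminability.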
The Galilean substantivalist usually sees himself as adopting a more sophisticated geometry than Newton but sharing his substantivalism (though there is room for debate on Newton's exact ontological views, see DiSalle, 2002). The advantage of the more sophisticated geometry is that although it allows the absolute sense of acceleration apparently required by Newtonian mechanics to be defined, it does not allow one to define a similar absolute speed or velocity — x accelerates can be defined as a complete predicate in terms of the geometry of Galilean spacetime but not x moves in general — and so the first of Leibniz's problems is resolved. Of course we see that the solution depends on a crucial shift from speed and velocity to acceleration as the relevant senses of ‘motion’: from the rate of change of position to the rate of rate of change.

While this proposal solves the first kind of problem posed by Leibniz, it seems just as vulnerable to the second. While it is true that it involves the rejection of absolute space as Newton conceived it, and with it the need to explicate the nature of an enduring space, the postulation of Galilean spacetime poses the parallel question of the nature of spacetime. Again, it is a physical but non-material something, the points of which may be coincident with material bodies. What kind of thing is it? Could we do without it? As we shall see below, some contemporary philosophers believe so.

There is a ‘folk reading’ of Leibniz that one finds either explicitly or implicitly in the philosophy of physics literature, which takes account of only some of his remarks on space and motion. The reading underlies vast swathes of the literature: for instance, the quantities captured by Earman's (1999) ‘Leibnizian spacetime’ do not do justice to Leibniz's view of motion (as Earman acknowledges). But it is perhaps most obvious in introductory texts (e.g., Ray 1991, Huggett 2000 to mention a couple). According to this view, the only quantities of motion are relative quantities — relative velocity, acceleration and so on — and all relative motions are equal, so there is no true sense of motion. However, Leibniz is explicit that other quantities are also ‘real’, and his mechanics implicitly — but obviously — depends on yet others. The length of this section is a measure not so much of the importance of Leibniz's actual views as of the importance of showing what the prevalent folk view leaves out regarding Leibniz's views on the metaphysics of motion and interpretation of mechanics.

That said, we shall also see that no one has yet discovered a fully satisfactory way of reconciling the numerous conflicting things that Leibniz says about motion. Some of these tensions can be put down simply to his changing his mind (see Cover and Hartz 1988 for an explication of how Leibniz's views on space developed). However, we will concentrate on the fairly short period in the mid 1680-90s during which Leibniz developed his theory of mechanics, and was most concerned with its interpretation. We will supplement this discussion with the important remarks that he made in his Correspondence with Samuel Clarke around 30 years later (1715–1716); this discussion is broadly in line with the earlier period, and the intervening period is one in which he turned to other matters, rather than one in which his views on space were dramatically evolving.
Arguably, Leibniz's views concerning space and motion do not have a completely linear logic, starting from some logically sufficient basic premises, but instead form a collection of mutually supporting doctrines. If one starts questioning why Leibniz held certain views — concerning the ideality of space, for instance — one is apt to be led in a circle. Still, exposition requires starting somewhere, and Leibniz's argument for the ideality of space in the Correspondence with Clarke is a good place to begin. But bear in mind the caveats made here — this argument was made later than a number of other relevant writings, and its logical relation to Leibniz's views on motion is complex.

Leibniz (LV.47 — this notation means Leibniz's Fifth letter, section 47, and so on) says that (i) a body comes to have the ‘same place’ as another once did, when it comes to stand in the same relations to bodies we ‘suppose’ to be unchanged (more on this later); (ii) that we can define ‘a place’ to be that which any such two bodies have in common (here he claims an analogy with the Euclidean/Eudoxan definition of a rational number in terms of an identity relation between ratios); and finally that (iii) space is all such places taken together. However, he also holds that properties are particular, incapable of being instantiated by more than one individual, even at different times; hence it is impossible for the two bodies to be in literally the same relations to the unchanged bodies. Thus the thing that we take to be the same for the two bodies — the place — is something added by our minds to the situation, and only ideal. As a result, space, which is after all constructed from these ideal places, is itself ideal: ‘a certain order, wherein the mind conceives the application of relations’.

It's worth pausing briefly to contrast this view of space with those of Descartes and of Newton. Both Descartes and Newton claim that space is a real, mind-independent entity; for Descartes it is matter, and for Newton a ‘pseudo-substance’, distinct from matter. And of course for both, these views are intimately tied up with their accounts of motion. Leibniz simply denies the mind-independent reality of space, and this too is bound up with his views concerning motion. (Note that fundamentally, in the metaphysics of monads that Leibniz was developing contemporaneously with his mechanics, everything is in the mind of the monads; but the point that Leibniz is making here is that even within the world that is logically constructed from the contents of the minds of monads, space is ideal.)

So far (apart from that remark about ‘unchanged’ bodies) we have not seen Leibniz introduce anything more than relations of distance between bodies, which is certainly consistent with the folk view of his philosophy. However, Leibniz sought to provide a foundation for the Cartesian/mechanical philosophy in terms of the Aristotelian/scholastic metaphysics of substantial forms (here we discuss the views laid out in Sections 17-22 of the 1686 Discourse on Metaphysics and the 1695 Specimen of Dynamics, both in Garber and Ariew 1989). In particular, he identifies primary matter with what he calls its ‘primitive passive force’ of resistance to changes in motion and to penetration, and the substantial form of a body with its ‘primitive active force’. It is important to realize that these forces are not mere properties of matter, but actually constitute it in some sense, and further that they are not themselves quantifiable.
However, because of the collisions of bodies with one another, these forces ‘suffer limitation’, and ‘derivative’ passive and active forces result. (There's a real puzzle here. Collision presupposes space, but primitive forces constitute matter prior to any spatial concepts — the primitive active and passive forces ground motion and extension respectively. See Garber and Rauzy, 2004.) Derivative passive force shows up in the different degrees of resistance to change of different kinds of matter (of ‘secondary matter’ in scholastic terms), and apparently is measurable. Derivative active force, however, is considerably more problematic for Leibniz. On the one hand, it is fundamental to his account of motion and theory of mechanics — motion fundamentally is possession of force. But on the other hand, Leibniz endorses the mechanical philosophy, which precisely sought to abolish Aristotelian substantial form, which is what force represents. Leibniz's goal was to reconcile the two philosophies, by providing an Aristotelian metaphysical foundation for modern mechanical science; as we shall see, it is ultimately an open question exactly how Leibniz intended to deal with the inherent tensions in such a view.

The texts are sufficiently ambiguous to permit dissent, but arguably Leibniz intends that one manifestation of derivative active force is what he calls vis viva — ‘living force’. Leibniz had a famous argument with the Cartesians over the correct definition of this quantity. Descartes defined it as size times speed — effectively as the magnitude of the momentum of a body. Leibniz gave a brilliant argument (repeated in a number of places, for instance Section 17 of the Discourse on Metaphysics) that it was size times speed² — so (proportional to) kinetic energy. If the proposed identification is correct then kinetic energy quantifies derivative active force according to Leibniz; or looked at the other way, the quantity of virtus (another term used by Leibniz for active force) associated with a body determines its kinetic energy and hence its speed.

As far as the authors know, Leibniz never explicitly says anything conclusive about the relativity of virtus, but it is certainly consistent to read him (as Roberts 2003 does) to claim that there is a unique quantity of virtus and hence ‘true’ (as we have been using the term) speed associated with each body. At the very least, Leibniz does say that there is a real difference between possession and non-possession of vis viva (e.g., in Section 18 of the Discourse), and it is a small step from there to true, privileged speed. Indeed, for Leibniz, mere change of relative position is not ‘entirely real’ (as we saw for instance in the Correspondence) and only when it has vis viva as its immediate cause is there some reality to it. (However, just to muddy the waters, Leibniz also claims that as a matter of fact no body ever has zero force, which on the reading proposed means no body is ever at rest, which would be surprising given all the collisions bodies undergo.) An alternative interpretation to the one suggested here might say that Leibniz intends that while there is a difference between motion/virtus and no motion/virtus, there is somehow no difference between any strictly positive values of those quantities.

It is important to emphasize two points about the preceding account of motion in Leibniz's philosophy. First, motion in the everyday sense — motion relative to something else — is not really real.
Fundamentally motion is possession of virtus, something that is ultimately non-spatial (modulo its interpretation as primitive force limited by collision). If this reading is right — and something along these lines seems necessary if we aren't simply to ignore important statements by Leibniz on motion — then Leibniz is offering an interpretation of motion that is radically different from the obvious understanding. One might even say that for Leibniz motion is not movement at all! (We will leave to one side the question of whether his account is ultimately coherent.) The second point is that however we should understand Leibniz, the folk reading simply does not and cannot take account of his clearly and repeatedly stated view that what is real in motion is force, not relative motion, for the folk reading allows Leibniz only relative motion (and of course additionally, motion in the sense of force is a variety of true motion, again contrary to the folk reading).

However, from what has been said so far it is still possible that the folk reading is accurate when it comes to Leibniz's views on the phenomena of motion, the subject of his theory of mechanics. The case for the folk reading is in fact supported by Leibniz's resolution of the tension that we mentioned earlier, between the fundamental role of force/virtus (which we will now take to mean mass times speed²) and its identification with Aristotelian form. Leibniz's way out (e.g., Specimen of Dynamics) is to require that while considerations of force must somehow determine the form of the laws of motion, the laws themselves should be such as not to allow one to determine the value of the force (and hence true speed). One might conclude that in this case Leibniz held that the only quantities which can be determined are those of relative position and motion, as the folk reading says. But even in this circumscribed context, it is at best questionable whether the interpretation is correct.

Consider first Leibniz's mechanics. Since his laws are what is now (ironically) often called ‘Newtonian’ elastic collision theory, it seems that they satisfy both of his requirements. The laws include conservation of kinetic energy (which we identify with virtus), but they hold in all inertial frames, so the kinetic energy of any arbitrary body can be set to any initial value. But they do not permit the kinetic energy of a body to take on arbitrary values throughout a process. The laws are only Galilean relativistic, and so are not true in every frame. Furthermore, according to the laws of collision, in an inertial frame, if a body does not collide then its Leibnizian force is conserved, while if (except in special cases) it does collide then its force changes. According to Leibniz's laws one cannot determine initial kinetic energies, but one certainly can tell when they change. At the very least, there are quantities of motion implicit in Leibniz's mechanics — change in force and true speed — that are not merely relative; the folk reading is committed to Leibniz simply missing this obvious fact.

That said, when Leibniz discusses the relativity of motion — which he calls the ‘equivalence of hypotheses’ about the states of motion of bodies — some of his statements do suggest that he was confused in this way.
For another way of stating the problem for the folk reading is that the claim that relative motions alone suffice for mechanics and that all relative motions are equal is a principle of general relativity, and could Leibniz — a mathematical genius — really have failed to notice that his laws hold only in special frames? Well, just maybe. On the one hand, when he explicitly articulates the principle of the equivalence of hypotheses (for instance in the Specimen of Dynamics) he tends to say only that one cannot assign initial velocities on the basis of the outcome of a collision, which requires only Galilean relativity. However, he confusingly also claimed (On Copernicanism and the Relativity of Motion, also in Garber and Ariew 1989) that the Tychonic and Copernican hypotheses were equivalent. But if the Earth orbits the Sun in an inertial frame (Copernicus), then there is no inertial frame according to which the Sun orbits the Earth (Tycho Brahe), and vice versa: these hypotheses are simply not Galilean equivalent (something else Leibniz could hardly have failed to notice). So there is some textual support for Leibniz endorsing general relativity, as the folk reading maintains. A number of commentators have suggested solutions to the puzzle of the conflicting pronouncements that Leibniz makes on the subject, but arguably none is completely successful in reconciling all of them (Stein 1977 argues for general relativity, while Roberts 2003 argues the opposite; see also Lodge 2003).

So the folk reading simply ignores Leibniz's metaphysics of motion, it commits Leibniz to a mathematical howler regarding his laws, and it is arguable whether it is the best rendering of his pronouncements concerning relativity; it certainly cannot be accepted unquestioningly. However, it is not hard to understand the temptation of the folk reading. In his Correspondence with Clarke, Leibniz says that he believes space to be “something merely relative, as time is, … an order of coexistences, as time is an order of successions” (LIII.4), which is naturally taken to mean that space is at base nothing but the distance and temporal relations between bodies. (Though even this passage has its subtleties, because of the ideality of space discussed above, and because in Leibniz's conception space determines what sets of relations are possible.) And if relative distances and times exhaust the spatiotemporal in this way, then shouldn't all quantities of motion be defined in terms of those relations? We have seen two ways in which this would be the wrong conclusion to draw: force seems to involve a notion of speed that is not identified with any relative speed, and (unless the equivalence of hypotheses is after all a principle of general relativity) the laws pick out a standard of constant motion that need not be any constant relative motion. Of course, it is hard to reconcile these quantities with the view of space and time that Leibniz proposes — what is speed in size times speed², or constant speed, if not speed relative to some body or to absolute space? Given Leibniz's view that space is literally ideal (and indeed that even relative motion is not ‘entirely real’) perhaps the best answer is that he took force, and hence motion in its real sense, not to be determined by motion in a relative sense at all, but to be primitive monadic quantities.
That is, he took x moves to be a complete predicate, but he believed that it could be fully analyzed in terms of strictly monadic predicates: x moves iff x possesses-non-zero-derivative-active-force. And this reading explains just what Leibniz took us to be supposing when we ‘supposed certain bodies to be unchanged’ in the construction of the idea of space: that they had no force, nothing causing, or making real, any motion.

It's again helpful to compare Leibniz with Descartes and Newton, this time regarding motion. Commentators often express frustration at Leibniz's response to Newton's arguments for absolute space: “I find nothing … in the Scholium that proves or can prove the reality of space in itself. However, I grant that there is a difference between an absolute true motion of a body and a mere relative change …” (LV.53). Not only does Leibniz apparently fail to take the argument seriously, he then goes on to concede the step in the argument that seems to require absolute space! But with our understanding of Newton and Leibniz, we can see that what he says makes perfect sense (or at least that it is not as disingenuous as it is often taken to be). Newton argues in the Scholium that true motion cannot be identified with the kinds of motion that Descartes considers; but both of these are purely relative motions, and Leibniz is in complete agreement that merely relative motions are not true (i.e., ‘entirely real’). Leibniz's ‘concession’ merely registers his agreement with Newton against Descartes on the difference between true and relative motion; he surely understood who and what Newton was refuting, and it was a position that he had himself, in different terms, publicly argued against at length. But as we have seen, Leibniz had a very different analysis of the difference from Newton's; true motion was not, for him, a matter of motion relative to absolute space, but the possession of quantity of force, ontologically prior to any spatiotemporal quantities at all. There is indeed nothing in the Scholium explicitly directed against that view, and since it does potentially offer an alternative way of understanding true motion, it is not unreasonable for Leibniz to claim that there is no deductive inference from true motion to absolute space.

The folk reading, which belies Leibniz, has it that he sought a theory of mechanics formulated in terms only of the relations between bodies. As we'll see presently, in the Nineteenth Century, Ernst Mach indeed proposed such an approach, but Leibniz clearly did not; though certain similarities between Leibniz and Mach — especially the rejection of absolute space — surely help explain the confusion between the two. But not only is Leibniz often misunderstood; there are also influential misreadings of Newton's arguments in the Scholium, influenced by the idea that he is addressing Leibniz in some way. Of course the Principia was written 30 years before the Correspondence, and the arguments of the Scholium were not written with Leibniz in mind, but Clarke himself suggests (CIV.13) that those arguments — specifically those concerning the bucket — are telling against Leibniz. That argument is indeed devastating to a general principle of relativity — the parity of all relative motions — but we have seen that it is highly questionable whether Leibniz's equivalence of hypotheses amounts to such a view.
That said, his statements in the first four letters of the Correspondence could understandably mislead Clarke on this point — it is in reply to Clarke's challenge that Leibniz explicitly denies the parity of relative motions. But interestingly, Clarke does not present a true version of Newton's argument — despite some involvement of Newton in writing the replies. Instead of the argument from the uniqueness of the rate of rotation, he argues that systems with different velocities must be different because the effects observed if they were brought to rest would be different. This argument is of course utterly question-begging against a view that holds that there is no privileged standard of rest!

As we discuss in Section 8, Mach attributed to Newton the fallacious argument that because the surface of the water curved even when it was not in motion relative to the bucket, it must be rotating relative to absolute space. Our discussion of Newton showed how misleading such a reading is. In the first place, he also argues that there must be some privileged sense of rotation, and hence not all relative motions are equal. Second, the argument is ad hominem against Descartes, in which context a disjunctive syllogism — motion is either proper or ordinary or relative to absolute space — is argumentatively legitimate. On the other hand, Mach is quite correct that Newton's argument in the Scholium leaves open the logical possibility that the privileged, true sense of rotation (and acceleration more generally) is some species of relative motion; if not motion properly speaking, then relative to the fixed stars perhaps. (In fact Newton rejects this possibility in De Gravitatione (1962) on the grounds that it would involve an odious action at a distance; an ironic position given his theory of universal gravity.)

However, the kind of folk reading of Newton that underlies much of the contemporary literature replaces Mach's interpretation with a more charitable one. According to this reading, Newton's point is that his mechanics — unlike Descartes' — could explain why the surface of the rotating water is curved, that his explanation involves a privileged sense of rotation, and that absent an alternative hypothesis about its relative nature, we should accept absolute space. But our discussion of Newton's argument showed that it simply does not have an ‘abductive’, ‘best explanation’ form, but shows deductively, from Cartesian premises, that rotation is neither proper nor ordinary motion.

That is not to say that Newton had no understanding of how such effects would be explained in his mechanics. For instance, in Corollaries 5 and 6 to the laws of the Principia he states in general terms the conditions under which different states of motion are not — and so by implication are — discernible according to his laws of mechanics. Nor is it to say that Newton's contemporaries weren't seriously concerned with explaining inertial effects. Leibniz, for instance, analyzed a rotating body (in the Specimen). In short, parts of a rotating system collide with the surrounding matter and are continuously deflected into a series of linear motions that form a curved path. But the system as Leibniz envisions it — comprised of a plenum of elastic particles of matter — is far too complex for him to offer any quantitative model based on this qualitative picture.
(In the context of the proposed ‘abductive’ reading of Newton, note that this point is telling against a rejection of intrinsic rigidity or forces acting at a distance, not narrow relationism; it is the complexity of collisions in a plenum that stymies analysis. And since Leibniz's collision theory requires a standard of inertial motion, even if he had explained inertial effects, he would not have thereby shown that all motions are relative, much less that all are equal.)

Although the argument is then not Newton's, it is still an important response to the kind of relationism proposed by the folk-Leibniz, especially when it is extended by bringing in a further example from Newton's Scholium. Newton considered a pair of identical spheres, connected by a cord, too far from any bodies to observe any relative motions; he pointed out that their rate and direction of rotation could still be experimentally determined by measuring the tension in the cord, and by pushing on opposite faces of the two globes to see whether the tension increased or decreased. He offered this simple example to demonstrate that the project he undertook in the Principia, of determining the absolute accelerations and hence gravitational forces on the planets from their relative motions, was possible. However, if we further specify that the spheres and cord are rigid and that they are the only things in their universe, then the example can be used to point out that there are infinitely many different rates of rotation, all of which agree on the relations between bodies. Since there are no differences in the relations between bodies in the different situations, it follows that the observable differences between the states of rotation cannot be explained in terms of the relations between bodies. Therefore, a theory of the kind attributed to the folk's Leibniz cannot explain all the phenomena of Newtonian mechanics, and again we can argue abductively for absolute space. (Of course, the argument works by showing that, granted the different states of rotation, there are states of rotation that cannot merely be relative rotations of any kind; for the differences cannot be traced to any relational differences. That is, granted the assumptions of the argument, rotation is not true relative motion of any kind.)

Neither the premises nor the conclusion of this argument are Newton's, and it must not be taken as a historically accurate reading. However, that is not to say that the argument is fallacious, and indeed many have found it attractive, particularly as a defense not of Newton's absolute space, but of Galilean spacetime. That is, Newtonian mechanics with Galilean spacetime can explain the phenomena associated with rotation, while theories of the kind proposed by Mach cannot explain the differences between situations allowed by Newtonian mechanics; but these explanations rely on the geometric structure of Galilean spacetime — particularly its connection — to interpret acceleration. And thus — the argument goes — those explanations commit us to the reality of spacetime — a manifold of points — whose properties include the appropriate geometric ones. This final doctrine, of the reality of spacetime with its component points or regions, distinct from matter, with geometric properties, is what we earlier identified as ‘substantivalism’.
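Before assessing this line of argument, the relational indiscernibility at its heart can be made vivid with a toy computation (ours, purely illustrative; the masses, cord length and rotation rates are made up). For a two-globe system, the only inter-body relation, the globes' separation, is constant over time whatever the rotation rate, while the measurable cord tension varies with that rate:

```python
import math

def globe_positions(omega, t, r=1.0):
    """Positions of two identical globes rotating at angular rate omega
    about their common center, each at distance r from it."""
    x1 = (r * math.cos(omega * t), r * math.sin(omega * t))
    x2 = (-x1[0], -x1[1])  # the diametrically opposite globe
    return x1, x2

def separation(omega, t):
    """The sole spatial relation between the two bodies: their distance."""
    (ax, ay), (bx, by) = globe_positions(omega, t)
    return math.hypot(ax - bx, ay - by)

def cord_tension(omega, m=1.0, r=1.0):
    """Centripetal force the cord must supply to each globe: m * omega**2 * r."""
    return m * omega**2 * r

for omega in (0.0, 1.0, 5.0):
    seps = {round(separation(omega, t), 9) for t in (0.0, 0.3, 0.7)}
    print(f"omega = {omega}: separations = {seps}, tension = {cord_tension(omega)}")
```

Every rotation rate yields exactly the same relational history, a constant separation of 2r, yet the tensions (0, 1 and 25 units here) differ observably; the differences therefore cannot supervene on the inter-body relations. There are two points to make about this line of argument.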
First, the relationist could reply that he need not explain all situations which are possible according to Newtonian mechanics, because that theory is to be rejected in favor of one which invokes only distance and time relations between bodies, but which approximates to Newton's if matter is distributed suitably. Such a relationist would be following Mach's proposal, which we will discuss next. That position is satisfactory only to the extent that a suitable concrete replacement for Newton's theory is developed; Mach never offered such a theory, but recently more progress has been made.

Second, one must be careful in understanding just how the argument works, for it is tempting to gloss it by saying that in Newtonian mechanics the connection is a crucial part of the explanation of the surface of the water in the bucket, and if the spacetime which carries the connection is denied, then the explanation fails too. But this gloss tacitly assumes that Newtonian mechanics can only be understood in a substantival Galilean spacetime; if an interpretation of Newtonian mechanics that does not assume substantivalism can be constructed, then all Newtonian explanations can be given without a literal connection. Both Sklar (1974) and van Fraassen (1985) have made proposals along these lines. Sklar proposes interpreting ‘true’ acceleration as a primitive quantity not defined in terms of motion relative to anything, be it absolute space, a connection or other bodies. (Notice the family resemblance between this proposal and Leibniz's view of force and speed.) Van Fraassen proposes formulating mechanics as ‘Newton's Laws hold in some frame’, so that the form of the laws and the ways bodies move pick out a standard of inertial motion, not absolute space or a connection, or any instantaneous relations. These proposals aim to keep the full explanatory resources of Newtonian mechanics, and hence admit ‘true acceleration’, but deny any relations between bodies and spacetime itself. Like the actual Leibniz, they allow absolute quantities of motion, but claim that space and time themselves are nothing but the relations between bodies. Of course, such views raise the question of how a motion can be not relative to anything at all, and how we are to understand the privileging of frames; Huggett (2006) contains a proposal for addressing these problems. (Note that Sklar and van Fraassen are committed to the idea that in some sense Newton's laws are capable of explaining all the phenomena without recourse to spacetime geometry; that the connection and the metrical properties are explanatorily redundant. A similar view is defended in the context of relativity in Brown 2005.)

Between the time of Newton and Leibniz and the 20th century, Newton's mechanics and gravitation theory reigned essentially unchallenged, and with that long period of dominance, absolute space came to be widely accepted. At least, no natural philosopher or physicist offered a serious challenge to Newton's absolute space, in the sense of offering a rival theory that dispenses with it. But like the action at a distance in Newtonian gravity, absolute space continued to provoke metaphysical unease. Seeking a replacement for the unobservable Newtonian space, Neumann (1870) and Lange (1885) developed more concrete definitions of the reference frames in which Newton's laws hold.
In these and a few other works, the concept of the set of inertial frames was first clearly expressed, though it was implicit in both remarks and procedures to be found in the Principia. (See the entries on space and time: inertial frames and Newton's views on space, time, and motion.)

The most sustained, comprehensive, and influential attack on absolute space was made by Ernst Mach in his Science of Mechanics (1883). In a lengthy discussion of Newton's Scholium on absolute space, Mach accuses Newton of violating his own methodological precepts by going well beyond what the observational facts teach us concerning motion and acceleration. Mach at least partly misinterpreted Newton's aims in the Scholium, and inaugurated a reading of the bucket argument (and by extension the globes argument) that has largely persisted in the literature since. Mach viewed the argument as directed against a ‘strict’ or ‘general-relativity’ form of relationism, and as an attempt to establish the existence of absolute space. Mach points out the obvious gap in the argument when so construed: the experiment only establishes that acceleration (rotation) of the water with respect to the Earth, or the frame of the fixed stars, produces the tendency to recede from the center; it does not prove that a strict relationist theory cannot account for the bucket phenomena, much less the existence of absolute space. (The reader will recall that Newton's actual aim was simply to show that Descartes' two kinds of motion are not adequate to account for rotational phenomena.) Although Mach does not mention the globes thought experiment specifically, it is easy to read an implicit response to it in the things he does say: nobody is competent to say what would happen, or what would be possible, in a universe devoid of matter other than two globes. So neither the bucket nor the globes can establish the existence of absolute space.

Both in Mach's interpretations of Newton's arguments and in his replies, one can already see two anti-absolute-space viewpoints emerge, though Mach himself never fully kept them apart. The first strain, which we may call ‘Mach-lite’, criticizes Newton's postulation of absolute space as a metaphysical leap that is neither justified by actual experiments, nor methodologically sound. The remedy offered by Mach-lite is simple: we should retain Newton's mechanics and use it just as we already do, but eliminate the unnecessary posit of absolute space. In its place we need only substitute the frame of the fixed stars, as is the practice in astronomy in any case. If we find the incorporation of a reference to contingent circumstances (the existence of a single reference frame in which the stars are more or less stationary) in the fundamental laws of nature problematic (as Mach himself need not, given his official positivist account of scientific laws), then Mach suggests that we replace the 1st law with an empirically equivalent mathematical rival, Mach's Equation (1960, 287):

d²(Σmr/Σm)/dt² = 0

The sums in this equation are to be taken over all massive bodies in the universe. Since the top sum is weighted by distance, distant masses count much more than near ones. In a world with a (reasonably) static distribution of heavy distant bodies, such as we appear to live in, the equation entails local conservation of linear momentum in ‘inertial’ frames. The upshot of this equation is that the frame of the fixed stars plays exactly the role of absolute space in the statement of the 1st law.
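To get a feel for how Mach's surrogate law behaves, here is a small numerical sketch (our illustration, not Mach's; the shell of ‘fixed stars’, both trajectories, and every parameter value are invented for the purpose). It computes the Machian weighted mean distance Σmr/Σm from a moving body to the distant masses, and estimates its second time derivative by finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'fixed stars': N equal masses scattered over a distant spherical shell.
N, R = 4000, 1.0e4
directions = rng.normal(size=(N, 3))
stars = R * directions / np.linalg.norm(directions, axis=1, keepdims=True)
masses = np.ones(N)

def mach_mean_distance(x):
    """Mach's weighted mean distance sum(m*r)/sum(m) from a body at x to the stars."""
    r = np.linalg.norm(stars - x, axis=1)
    return (masses @ r) / masses.sum()

def second_time_derivative(trajectory, dt):
    """Centered finite-difference estimate of d^2/dt^2 of the Machian quantity."""
    q = np.array([mach_mean_distance(x) for x in trajectory])
    return (q[2:] - 2 * q[1:-1] + q[:-2]) / dt**2

dt = 0.01
ts = np.arange(0.0, 1.0, dt)
inertial = np.array([[1.0 * t, 0.5, 0.0] for t in ts])        # uniform velocity
accelerated = np.array([[5.0 * t**2, 0.5, 0.0] for t in ts])  # constant acceleration

print("max |d2/dt2|, inertial:   ", np.abs(second_time_derivative(inertial, dt)).max())
print("max |d2/dt2|, accelerated:", np.abs(second_time_derivative(accelerated, dt)).max())
```

In this toy world the inertial trajectory satisfies Mach's equation up to a small error that shrinks as the shell is made larger and more uniform, while the accelerated trajectory violates it by orders of magnitude; this is the sense in which the surrogate is empirically equivalent to the first law, given a static distribution of distant masses.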
(Notice that this equation, unlike Newton's first law, is not vectorial.) This proposal does not, by itself, offer an alternative to Newtonian mechanics, and as Mach himself pointed out, the law is not well-behaved in an infinite universe filled with stars; but the same can perhaps be said of Newton's law of gravitation (see Malament 1995 and Norton 1993). But Mach did not offer this equation as a proposed law valid in any circumstances; he avers, “it is impossible to say whether the new expression would still represent the true condition of things if the stars were to perform rapid movements among one another.” (p. 289)

It is not clear whether Mach offered this revised first law as a first step toward a theory that would replace Newton's mechanics, deriving inertial effects from only relative motions, as Leibniz desired. But many other remarks made by Mach in his chapter criticizing absolute space point in this direction, and they have given birth to the Mach-heavy view, later to be christened “Mach's Principle” by Albert Einstein. The Mach-heavy viewpoint calls for a new mechanics that invokes only relative distances and (perhaps) their 1st and 2nd time derivatives, and is thus ‘generally relativistic’ in the sense sometimes read into Leibniz's remarks about motion. Mach wished to eliminate absolute time from physics too, so he would have wanted a proper relationist reduction of these derivatives also. The Barbour-Bertotti theories, discussed below, provide this.

Mach-heavy apparently involves the prediction of novel effects due to ‘merely’ relative accelerations. Mach hints at such effects in his criticism of Newton's bucket:

Newton's experiment with the rotating vessel of water simply informs us that the relative rotation of the water with respect to the sides of the vessel produces no noticeable centrifugal forces, but that such forces are produced by its relative rotation with respect to the mass of the earth and the other celestial bodies. No one is competent to say how the experiment would turn out if the sides of the vessel [were] increased until they were ultimately several leagues thick. (1883, 284)

The suggestion here seems to be that the relative rotation in stage (i) of the experiment might immediately generate an outward force (before any rotation is communicated to the water), if the sides of the bucket were massive enough. More generally, Mach-heavy involves the view that all inertial effects should be derived from the motions of the body in question relative to all other massive bodies in the universe. The water in Newton's bucket feels an outward pull due (mainly) to the relative rotation of all the fixed stars around it. Mach-heavy is a speculation that an effect something like electromagnetic induction should be built into gravity theory. (Such an effect does exist according to the General Theory of Relativity, and is called ‘gravitomagnetic induction’. The recently finished Gravity Probe B mission was designed to measure the gravitomagnetic induction effect due to the Earth's rotation.) Its specific form must fall off with distance much more slowly than 1/r², if it is to be empirically similar to Newtonian physics; but it will certainly predict experimentally testable novel behaviors. A theory that satisfies all the goals of Mach-heavy would appear to be ideal for the vindication of strict relationism and the elimination of absolute quantities of motion from mechanics.
Direct assault on the problem of satisfying Mach-heavy in a classical framework proved unsuccessful, despite the efforts of others besides Mach (e.g., Friedländer 1896, Föppl 1904, Reissner 1914, 1915), until the work of Barbour and Bertotti in the 1970s and 80s. (Between the late 19th century and the 1970s, there was of course one extremely important attempt to satisfy Mach-heavy: the work of Einstein that led to the General Theory of Relativity. Since Einstein's efforts took place in a non-classical (Lorentz/Einstein/Minkowski) spacetime setting, we discuss them in the next section.)

Rather than formulating a revised law of gravity/inertia using relative quantities, Barbour and Bertotti attacked the problem using the framework of Lagrangian mechanics, replacing the elements of the action that involve absolute quantities of motion with new terms invoking only relative distances, velocities, etc. Their first (1977) theory uses a very simple and elegant action, and satisfies everything one could wish for from a Mach-heavy theory: it is relationally pure (even with respect to time: while simultaneity is absolute, the temporal metric is derived from the field equations); it is nearly empirically equivalent to Newton's theory in a world such as ours (with a large-scale uniform, near-stationary matter distribution); yet it does predict novel effects such as the ones Mach posited with his thick bucket. Among these is an ‘anisotropy of inertia’ effect — accelerating a body away from the galactic center requires more force than accelerating it perpendicular to the galactic plane — large enough to be ruled out empirically.

Barbour and Bertotti's second attempt (1982) at a relational Lagrangian mechanics was arguably less Machian, but more empirically adequate. In it, solutions are sought beginning with two temporally-nearby, instantaneous relational configurations of the bodies in the universe. Barbour and Bertotti define an ‘intrinsic difference’ parameter that measures how different the two configurations are. In the solutions of the theory, this intrinsic difference quantity gets minimized, as well as the ordinary action, and in this way full solutions are derived despite not starting from a privileged inertial-frame description. The theory they end up with turns out to be, in effect, a fragment of Newtonian theory: the set of models of Newtonian mechanics and gravitation in which there is zero net angular momentum.

This result makes perfect sense in terms of strict relationist aims. In a Newtonian world in which there is a nonzero net angular momentum (e.g., a lone rotating island galaxy), this fact reveals itself in the classic “tendency to recede from the center”. Since a strict relationist demands that bodies obey the same mechanical laws even in ‘rotating’ coordinate systems, there cannot be any such tendency to recede from the center (other than in a local subsystem), in any of the relational theory's models. Since cosmological observations, even today, reveal no net angular momentum in our world, the second Barbour and Bertotti theory can lay claim to exactly the same empirical successes (and problems) that Newtonian physics had. The second theory does not predict the (empirically falsified) anisotropy of inertia derivable from the first; but neither does it allow a derivation of the precession of the orbit of Mercury, which the first theory does (for appropriately chosen cosmic parameters).
Mach-lite, like the relational interpretations of Newtonian physics reviewed in section 5, offers us a way of understanding Newtonian physics without accepting absolute position, velocity or acceleration. But it does so in a way that lacks theoretical clarity and elegance, since it does not delimit a clear set of cosmological models. We know that Mach-lite makes the same predictions as Newton for worlds in which there is a static frame associated with the stars and galaxies; but if asked how things will behave in a world with no frame of fixed stars, or in which the stars are far from ‘fixed’, it shrugs and refuses to answer. (Recall that Mach-lite simply says: “Newton's laws hold in the frame of reference of the fixed stars.”) This is perfectly acceptable according to Mach's philosophy of science, since the job of mechanics is simply to summarize observable facts in an economical way. But it is unsatisfying to those with stronger realist intuitions about laws of nature.

If there is, in fact, a distinguishable privileged frame of reference in which the laws of mechanics take on a specially simple form, without that frame being determined in any way by relation to the matter distribution, a realist will find it hard to resist the temptation to view motions described in that frame as the ‘true’ or ‘absolute’ motions. If there is a family of such frames, disagreeing about velocity but all agreeing about acceleration, she will feel a temptation to think of at least acceleration as ‘true’ or ‘absolute’. If such a realist believes motion to be by nature a relation rather than a property (and as we saw in the introduction, not all philosophers accept this), then she will feel obliged to accord some sort of existence or reality to the structure — e.g., the structure of Galilean spacetime — in relation to which these motions are defined. For philosophers with such realist inclinations, the ideal relational account of motion would therefore be some version of Mach-heavy.

The Special Theory of Relativity (STR) is notionally based on a principle of relativity of motion; but that principle is ‘special’ — meaning, restricted. The relativity principle built into STR is in fact nothing other than the Galilean principle of relativity, which is built into Newtonian physics. In other words, while there is no privileged standard of velocity, there is nevertheless a determinate fact of the matter about whether a body has accelerated or non-accelerated (i.e., inertial) motion. In this regard, the spacetime of STR is exactly like Galilean spacetime (defined in section 5 above). In terms of the question of whether all motion can be considered purely relative, one could argue that there is nothing new brought to the table by the introduction of Einstein's STR — at least, as far as mechanics is concerned. As Dorling (1978) first pointed out, however, there is a sense in which the standard absolutist arguments against ‘strict’ relationism using rotating objects (buckets or globes) fail in the context of STR. Maudlin (1993) used the same considerations to show that there is a way of recasting relationism in STR that appears to be very successful.
STR incorporates certain novelties concerning the nature of time and space, and how they mesh together; perhaps the best-known examples are the phenomena of ‘length contraction’, ‘time dilation’, and the ‘relativity of simultaneity’. Since in STR both spatial distances and time intervals — when measured in the standard ways — are observer-relative (observers in different states of motion ‘disagreeing’ about their sizes), it is arguably most natural to restrict oneself to the invariant spacetime separation given by the interval between two points: (dx² + dy² + dz² − dt²) — the four-dimensional analog of the Pythagorean theorem, for spacetime distances. If one regards the spacetime interval relations between masses-at-times as one's basis on which space-time is built up as an ideal entity, then with only mild caveats relationism works: the ‘relationally pure’ facts suffice to uniquely fix how the material systems are embeddable (up to isomorphism) in the ‘Minkowski’ spacetime of STR.

The modern variants of Newton's bucket and globes arguments no longer stymie the relationist because (for example) the spacetime interval relations among bits of matter in Newton's bucket at rest are quite different from the spacetime interval relations found among those same bits of matter after the bucket is rotating. For example, the spacetime interval relation between a bit of water near the side of the bucket, at one time, and itself (say) a second later is smaller than the interval relation between a center-bucket bit of water and itself one second later (times referred to inertial-frame clocks). The upshot is that, unlike the situation in classical physics, a body at rest cannot have all the same spatial relations among its parts as a similar body in rotation. We cannot put a body or system into a state of rotation (or other acceleration) without thereby changing the spacetime interval relations between the various bits of matter at different moments of time. Rotation and acceleration supervene on spacetime interval relations.

It is worth pausing to consider to what extent this victory for (some form of) relationism satisfies the classical ‘strict’ relationism traditionally ascribed to Mach and Leibniz. The spatiotemporal relations that save the day against the bucket and globes are, so to speak, mixed spatial and temporal distances. They are thus quite different from the spatial-distances-at-a-time presupposed by classical relationists; moreover they do not correspond to relative velocities (-at-a-time) either. Their oddity is forcefully captured by noticing that if we choose appropriate bits of matter at ‘times’ eight minutes apart, I-now am at zero distance from the surface of the sun (of eight minutes ‘past’, since it took 8 minutes for light from the sun to reach me-now). So we are by no means dealing here with an innocuous, ‘natural’ translation of classical relationist quantities into the STR setting. On the other hand, in light of the relativity of simultaneity (see note), it can be argued that the absolute simultaneity presupposed by classical relationists and absolutists alike was, in fact, something that relationists should always have regarded with misgivings. From this perspective, instantaneous relational configurations — precisely what one starts with in the theories of Barbour and Bertotti — would be the things that should be treated with suspicion.
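The supervenience claim just made can be checked with a minimal calculation (ours, with deliberately exaggerated numbers: a ‘bucket’ light-seconds across, so that the effect is visible at a glance). It uses the timelike-positive sign convention dt² − dx² − dy² − dz² (with c = 1), the opposite sign to the bracketed expression above, so that the squared interval is positive for the comparisons at issue:

```python
import math

def interval_squared(event_a, event_b):
    """Squared Minkowski interval between events (t, x, y, z), with c = 1,
    in the timelike-positive convention: dt**2 - dx**2 - dy**2 - dz**2."""
    dt, dx, dy, dz = (b - a for a, b in zip(event_a, event_b))
    return dt**2 - dx**2 - dy**2 - dz**2

def water_bit_event(rho, omega, t):
    """Event occupied at inertial-frame time t by a bit of water at radius rho
    in a bucket rotating at angular velocity omega."""
    return (t, rho * math.cos(omega * t), rho * math.sin(omega * t), 0.0)

T = 1.0        # one second of inertial-frame time
omega = 0.5    # rotation rate; keep rho * omega < 1 (below light speed)
for rho in (0.0, 0.5, 0.9):
    s2 = interval_squared(water_bit_event(rho, omega, 0.0),
                          water_bit_event(rho, omega, T))
    print(f"rho = {rho}: squared interval, bit to itself one second later = {s2:.4f}")
```

The farther a bit of water sits from the axis, the smaller the interval between it and itself one second later (here roughly 1.0, 0.94 and 0.80); set omega = 0 and the three values coincide. The rotating and non-rotating buckets thus differ in their interval relations, which is all this form of relationism needs.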
If we now return to our questions about motions — about the nature of velocities and accelerations — we find, as noted above, that matters in the interval-relational interpretation of STR are much the same as in Newtonian mechanics in Galilean spacetime. There are no well-defined absolute velocities, but there are indeed well-defined absolute accelerations and rotations. In fact, the difference between an accelerating body (e.g., a rocket) and an inertially moving body is codified directly in the cross-temporal interval relations of the body with itself. So we are very far from being able to conclude that all motion is relative motion of a body with respect to other bodies. It is true that the absolute motions are in 1-1 correlation with patterns of spacetime interval relations, but it is not at all correct to say that they are, for that reason, eliminable in favor of merely relative motions. Rather we should simply say that no absolute acceleration can fail to have an effect on the material body or bodies accelerated. But this was already true in classical physics if matter is modeled realistically: the cord connecting the globes does not merely tense, but also stretches; and so does the bucket, even if imperceptibly, i.e., the spatial relations change.

Maudlin does not claim this version of relationism to be victorious over an absolutist or substantivalist conception of Minkowski spacetime, when it comes time to make judgments about the theory's ontology. There may be more to vindicating relationism than merely establishing a 1-1 correlation between absolute motions and patterns of spatiotemporal relations.

The simple comparison made above between STR and Newtonian physics in Galilean spacetime is somewhat deceptive. For one thing, Galilean spacetime is a mathematical innovation posterior to Einstein's 1905 theory; before then, it had not been conceived, and full acceptance of Newtonian mechanics implied accepting absolute velocities and, arguably, absolute positions, just as laid down in the Scholium. So Einstein's elimination of absolute velocity was a genuine conceptual advance. Moreover, the Scholium was not the only reason for supposing that there existed a privileged reference frame of ‘rest’: the working assumption of almost all physicists in the latter half of the 19th century was that, in order to understand the wave theory of light, one had to postulate an aetherial medium filling all space, wave-like disturbances in which constituted electromagnetic radiation. It was assumed that the aether rest frame would be an inertial reference frame; and physicists felt some temptation to equate its frame with the absolute rest frame, though this was not necessary. Regardless of this equation of the aether with absolute space, it was assumed by all 19th century physicists that the equations of electrodynamic theory would have to look different in a reference frame moving with respect to the aether than they did in the aether's rest frame (where they presumably take their canonical form, i.e., Maxwell's equations and the Lorentz force law). So while theoreticians labored to find plausible transformation rules for the electrodynamics of moving bodies, experimentalists tried to detect the Earth's motion in the aether. Experiment and theory played collaborative roles, with experimental results ruling out certain theoretical moves and suggesting new ones, while theoretical advances called for new experimental tests for their confirmation or — as it happened — disconfirmation.
As is well known, attempts to detect the Earth's velocity in the aether were unsuccessful. On the theory side, attempts to formulate the transformation laws for electrodynamics in moving frames — in such a way as to be compatible with experimental results — were complicated and inelegant. A simplified way of seeing how Einstein swept away a host of problems at a stroke is this: he proposed that the Galilean principle of relativity holds for Maxwell's theory, not just for mechanics. The canonical (‘rest-frame’) form of Maxwell's equations should be their form in any inertial reference frame. Since the Maxwell equations dictate the velocity c of electromagnetic radiation (light), this entails that any inertial observer, no matter how fast she is moving, will measure the velocity of a light ray as c — no matter what the relative velocity of its emitter. Einstein worked out the logical consequences of this application of the special relativity principle, and discovered that space and time must be rather different from how Newton described them. STR undermined Newton's absolute time just as decisively as it undermined his absolute space (see note).

Einstein's STR was the first clear and empirically successful physical theory to overtly eliminate the concepts of absolute rest and absolute velocity while recovering most of the successes of classical mechanics and 19th century electrodynamics. It therefore deserves to be considered the first highly successful theory to explicitly relativize motion, albeit only partially. But STR only recovered most of the successes of classical physics: crucially, it left out gravity. And there was certainly reason to be concerned that Newtonian gravity and STR would prove incompatible: classical gravity acted instantaneously at a distance, while STR eliminated the privileged absolute simultaneity that this instantaneous action presupposes.

Several ways of modifying Newtonian gravity to make it compatible with the spacetime structure of STR suggested themselves to physicists in the years 1905-1912, and a number of interesting Lorentz-covariant theories were proposed (set in the Minkowski spacetime of STR). Einstein rejected these efforts one and all, for violating either empirical facts or theoretical desiderata. But Einstein's chief reason for not pursuing the reconciliation of gravitation with STR's spacetime appears to have been his desire, beginning in 1907, to replace STR with a theory in which not only velocity could be considered merely relative, but also acceleration. That is to say, Einstein wanted if possible to completely eliminate all absolute quantities of motion from physics, thus realizing a theory that satisfies at least one kind of ‘strict’ relationism. (Regarding Einstein's rejection of Lorentz-covariant gravity theories, see Norton 1992; regarding Einstein's quest to fully relativize motion, see Hoefer 1994.)

Einstein began to see this complete relativization as possible in 1907, thanks to his discovery of the Equivalence Principle. Imagine we are far out in space, in a rocket ship accelerating at a constant rate g = 9.8 m/s². Things will feel just like they do on the surface of the Earth; we will feel a clear up-down direction, bodies will fall to the floor when released, etc.
Indeed, due to the well-known empirical fact that gravity affects all bodies by imparting a force proportional to their matter (and energy) content, independent of their internal constitution, we know that any experiment performed on this rocket will give the same results that the same experiment would give if performed on the Earth. Now, Newtonian theory teaches us to consider the apparent downward, gravity-like forces in the rocket ship as ‘pseudo-forces’ or ‘inertial forces’, and insists that they are to be explained by the fact that the ship is accelerating in absolute space. But Einstein asked: “Is there any way for the person in the rocket to regard him/herself as being ‘at rest’ rather than in absolute (accelerated) motion?” And the answer he gave is: Yes. The rocket traveler may regard him/herself as being ‘at rest’ in a homogeneous and uniform gravitational field. This will explain all the observational facts just as well as the supposition that he/she is accelerating relative to absolute space (or, absolutely accelerating in Minkowski spacetime). But is it not clear that the latter is the truth, while the former is a fiction? By no means; if there were a uniform gravitational field filling all space, then it would affect all the other bodies in the world — the Earth, the stars, etc. — imparting to them a downward acceleration away from the rocket; and that is exactly what the traveler observes.

In 1907, Einstein published his first gravitation theory (Einstein 1907), treating the gravitational field as a scalar field that also represented the (now variable and frame-dependent) speed of light. Einstein viewed the theory as only a first step on the road to eliminating absolute motion. In the 1907 theory, the equations take the same form in any inertial or uniformly accelerating frame of reference. One might say that this theory reduces the class of absolute motions, leaving only rotation and other non-uniform accelerations as absolute. But, Einstein reasoned, if uniform acceleration can be regarded as equivalent to being at rest in a constant gravitational field, why should it not be possible also to regard inertial effects from these other, non-uniform motions as similarly equivalent to “being at rest in a (variable) gravitational field”? Thus Einstein set himself the goal of expanding the principle of equivalence to embrace all forms of ‘accelerated’ motion.

Einstein thought that the key to achieving this aim lay in further expanding the range of reference frames in which the laws of physics take their canonical form, to include frames adapted to any arbitrary motions. More specifically, since the class of all continuous and differentiable coordinate systems includes as a subclass the coordinate systems adapted to any such frame of reference, if he could achieve a theory of gravitation, electromagnetism and mechanics that was generally covariant — its equations taking the same form in any coordinate system from this general class — then the complete relativity of motion would be achieved. If there are no special frames of reference in which the laws take on a simpler canonical form, there is no physical reason to consider any particular state or states of motion as privileged, nor deviations from those as representing ‘absolute motion’. (Here we are just laying out Einstein's train of thought; later we will see reasons to question the last step.) And in 1915, Einstein achieved his aim in the General Theory of Relativity (GTR).
There is one key element left out of this success story, however, and it is crucial to understanding why most physicists reject Einstein's claim to have eliminated absolute states of motion in GTR. Going back to our accelerating rocket, we accepted Einstein's claim that we could regard the ship as hovering at rest in a universe-filling gravitational field. But a gravitational field, we usually suppose, is generated by matter. What matter, then, generates this universe-filling field? The answer may be supplied by Mach-heavy. Regarding the ‘accelerating’ rocket which we decide to regard as ‘at rest’ in a gravitational field, the Machian says: all those stars and galaxies, etc., jointly accelerating downward (relative to the rocket), ‘produce’ that gravitational field. The mathematical specifics of how this field is generated will have to be different from Newton's law of gravity, of course; but it should give essentially the same results when applied to low-mass, slow-moving problems such as the orbits of the planets, so as to capture the empirical successes of Newtonian gravity. Einstein thought, in 1916 at least, that the field equations of GTR are precisely this mathematical replacement for Newton's law of gravity, and that they fully satisfied the desiderata of Mach-heavy relationism. But it was not so. (See the entry on early philosophical interpretations of general relativity.)

In GTR, spacetime is locally very much like flat Minkowski spacetime. There is no absolute velocity locally, but there are clear local standards of accelerated vs non-accelerated motion, i.e., local inertial frames. In these ‘freely falling’ frames bodies obey the usual rules for non-gravitational physics familiar from STR, albeit only approximately. But overall spacetime is curved, and local inertial frames may tip, bend and twist as we move from one region to another. The structure of curved spacetime is encoded in the metric field tensor g_ab, with the curvature encoding gravity at the same time: gravitational forces are so to speak ‘built into’ the metric field, geometrized away. Since the spacetime structure encodes gravity and inertia, and in a Mach-heavy theory these phenomena should be completely determined by the relational distribution of matter (and relative motions), Einstein wished to see the metric as entirely determined by the distribution of matter and energy. But what the GTR field equations entail is, in general, only a partial-determination relation.

We cannot go into the mathematical details necessary for a full discussion of the successes and failures of Mach-heavy in the GTR context. But one can see why the Machian interpretation Einstein hoped he could give to the curved spacetimes of his theory fails to be plausible, by considering a few simple ‘worlds’ permitted by GTR. In the first place, for our hovering rocket ship, if we are to attribute the gravity field it feels to matter, there has got to be all this other matter in the universe. But if we regard the rocket as a mere ‘test body’ (not itself substantially affecting the gravity present or absent in the universe), then we can note that according to GTR, if we remove all the stars, galaxies, planets etc. from the world, the gravitational field does not disappear. On the contrary, it stays basically the same locally, and globally it takes the form of empty Minkowski spacetime, precisely the quasi-absolute structure Einstein was hoping to eliminate.
Solutions of the GTR field equations for arbitrary realistic configurations of matter (e.g., a rocket ship ejecting a stream of particles to push itself forward) are hard to come by, and in fact a realistic two-body exact solution has yet to be discovered. But numerical methods can be applied for many purposes, and physicists do not doubt that something like our accelerating rocket — in otherwise empty space — is possible according to the theory. We see clearly, then, that GTR fails to satisfy Einstein's own understanding of Mach's Principle, according to which, in the absence of matter, space itself should not be able to exist.

A second example: GTR allows us to model a single rotating object in an otherwise empty universe (e.g., a neutron star). Relationism of the Machian variety says that such rotation is impossible, since it can only be understood as rotation relative to some sort of absolute space. In the case of GTR, this is basically right: the rotation is best understood as rotation relative to a ‘background’ spacetime that is identical to the Minkowski spacetime of STR, only ‘curved’ by the presence of matter in the region of the star.

On the other hand, there is one charge of failure-to-relativize-motion sometimes leveled at GTR that is unfair. It is sometimes asserted that the simple fact that the metric field (or the connection it determines) distinguishes, at every location, motions that are ‘absolutely’ accelerated and/or ‘absolutely rotating’ from those that are not, by itself entails that GTR fails to embody a folk-Leibniz style general relativity of motion (e.g., Earman 1989, ch. 5). We think this is incorrect, and leads to unfairly harsh judgments about confusion on Einstein's part. The local inertial structure encoded in the metric would not be ‘absolute’ in any meaningful sense, if that structure were in some clear sense fully determined by the relationally specified matter-energy distribution. Einstein was not simply confused when he named his gravity theory. (Just what is to be understood by “the relationally specified matter-energy distribution” is a further, thorny issue, which we cannot enter into here.)

GTR does not fulfill all the goals of Mach-heavy, at least as understood by Einstein, and he recognized this fact by 1918 (Einstein 1918). And yet … GTR comes tantalizingly close to achieving those goals, in certain striking ways. For one thing, GTR does predict Mach-heavy effects, known as ‘frame-dragging’: if we could model Mach's thick-walled bucket in GTR, it seems clear that it would pull the water slightly outward, and give it a slight tendency to begin rotating in the same sense as the bucket (even if the big bucket's walls were not actually touching the water). While GTR does permit us to model a lone rotating object, if we model the object as a shell of mass (instead of a solid sphere) and let the size of the shell increase (to model the ‘sphere of the fixed stars’ we see around us), then as Brill & Cohen (1966) showed, the frame-dragging becomes complete inside the shell. In other words: our original Minkowski background structure effectively disappears, and inertia becomes wholly determined by the shell of matter, just as Mach posited was the case. This complete determination of inertia by the global matter distribution appears to be a feature of other models, including the Friedmann-Robertson-Walker-Lemaître Big Bang models that best match observations of our universe.
Finally, it is important to recognize that GTR is generally covariant in a very special sense: unlike all other prior theories (and unlike many subsequent quantum theories), it postulates no fixed ‘prior’ or ‘background’ spacetime structure. As mathematicians and physicists realized early on, other theories, e.g., Newtonian mechanics and STR, can be put into a generally covariant form. But when this is done, there are inevitably mathematical objects postulated as part of the formalism, whose role is to represent absolute elements of spacetime structure. What is unique about GTR is that it was the first, and is still the only ‘core’ physical theory, to have no such absolute elements in its covariant equations. The spacetime structure in GTR, represented by the metric field (which determines the connection), is at least partly ‘shaped’ by the distribution of matter and energy. And in certain models of the theory, such as the Big Bang cosmological models, some authors have claimed that the local standards of inertial motion — the local ‘gravitational field’ of Einstein's equivalence principle — are entirely fixed by the matter distribution throughout space and time, just as Mach-heavy requires (see, for example, Wheeler and Ciufolini 1995).

Absolutists and relationists are thus left in a frustrating and perplexing quandary by GTR. Considering its anti-Machian models, we are inclined to say that motions such as rotation and acceleration remain absolute, or nearly-totally-absolute, according to the theory. On the other hand, considering its most Mach-friendly models, which include all the models taken to be good candidates for representing the actual universe, we may be inclined to say: motion in our world is entirely relative; the inertial effects normally used to argue for absolute motion are all understandable as effects of rotations and accelerations relative to the cosmic matter, just as Mach hoped. But even if we agree that motions in our world are in fact all relative in this sense, this does not automatically settle the traditional relationist/absolutist debate, much less the relationist/substantivalist debate. Many philosophers would be happy to acknowledge the Mach-friendly status of our spacetime, and argue nevertheless that we should understand that spacetime as a real thing, more like a substance than the mere ideal construct of the mind that Leibniz took it to be; Nerlich (1994) and Earman (1989), we suspect, would take this stance. Some, though not all, attempts to convert GTR into a quantum theory would accord spacetime this same sort of substantiality that other quantum fields possess.

This article has been concerned with tracing the history and philosophy of ‘absolute’ and ‘relative’ theories of space and motion. Along the way we have been at pains to introduce some clear terminology for various different concepts (e.g., ‘true’ motion, ‘substantivalism’, ‘absolute space’), but what we have not really done is say what the difference between absolute and relative space and motion is: just what is at stake? Recently Rynasiewicz (2000) has argued that there simply are no constant issues running through the history that we have discussed here; that there is no stable meaning for either ‘absolute motion’ or ‘relative motion’ (or ‘substantival space’ vs ‘relational space’).
While we agree to a certain extent, we think that nevertheless there are a series of issues that have motivated thinkers again and again; indeed, those that we identified in the introduction. (One quick remark: Rynasiewicz is probably right that the issues cannot be expressed in formally precise terms, but that does not mean that there are no looser philosophical affinities that shed useful light on the history.) Our discussion has revealed several different issues, of which we will highlight three as components of the ‘absolute-relative debate’.

(i) There is the question of whether all motions and all possible descriptions of motions are equal, or whether some are ‘real’ — what we have called, in Seventeenth Century parlance, ‘true’. There is a natural temptation for those who hold that there is ‘nothing but the relative positions and motions between bodies’ (and more so for their readers) to add ‘and all such motions are equal’, thus denying the existence of true motion. However, arguably — perhaps surprisingly — no one we have discussed has unreservedly held this view (at least not consistently): Descartes considered motion ‘properly speaking’ to be privileged, Leibniz introduced ‘active force’ to ground motion (arguably in his mechanics as well as metaphysically), and Mach's view seems to be that the distribution of matter in the universe determines a preferred standard of inertial motion. (Again, in general relativity, there is a distinction between inertial and accelerated motion.) That is, relationists can allow true motions if they offer an analysis of them in terms of the relations between bodies. Given this logical point, and given the historical ways thinkers have understood themselves, it seems unhelpful to characterize the issues in (i) as constituting an absolute-relative debate, hence our use of the term ‘true’ instead of ‘absolute’.

So we are led to the second question: (ii) is true motion definable in terms of relations or not? (Of course the answer depends on what kind of definitions will count, and absent an explicit definition — Descartes' proper motion for example — the issue is often taken to be that of whether true motions supervene on relations, as Newton's globes are often supposed to refute.) It seems reasonable to call this issue that of whether motion is absolute or relative. Descartes and Mach are relationists about motion in this sense, while Newton is an absolutist. Leibniz is also an absolutist about motion in his metaphysics, and if our reading is correct, also about the interpretation of motion in the laws of collision. This classification of Leibniz's views runs contrary to his customary identification as relationist-in-chief, but we will clarify his relationist credentials below.

Finally, we have discussed (ii) in the context of relativity, first examining Maudlin's proposal that the embedding of a relationally-specified system in Minkowski spacetime is in general unique once all the spacetime interval-distance relations are given. This proposal may or may not be held to satisfy the relational-definability question of (ii), but in any case it cannot be carried over to the context of general relativity theory. In the case of GTR we linked relational motion to the satisfaction of Mach's Principle, just as Einstein did in the early years of the theory. Despite some promising features displayed by GTR, and certain of its models, we saw that Mach's Principle is not fully satisfied in GTR as a whole.
We also noted that in the absence of absolute simultaneity, it becomes an open question what relations are to be permitted in the definition (or supervenience base) — spacetime interval relations? Instantaneous spatial distances and velocities on a 3-d hypersurface? (In recent works, Barbour has argued that GTR is fully Machian, using a 3-d relational-configuration approach. See Barbour, Foster and Ó Murchadha 2002.)

The final issue is that of (iii) whether absolute motion is motion with respect to substantival space or not. Of course this is how Newton understood acceleration — as acceleration relative to absolute space. More recent Newtonians share this view, although motion for them is with respect to substantival Galilean spacetime (or rather, since they know Newtonian mechanics is false, they hold that this is the best interpretation of that theory). Leibniz denied that motion was relative to space itself, since he denied the reality of space; for him true motion was the possession of active force. So despite his ‘absolutism’ (our adjective, not his) about motion he was simultaneously a relationist about space: ‘space is merely relative’. Following Leibniz's lead we can call this debate the question of whether space is absolute or relative. The drawback of this name is that it suggests a separation between motion and space, which exists in Leibniz's views, but which is otherwise problematic; still, no better description presents itself.

Others who are absolutists about motion but relationists about space include Sklar (1974) and van Fraassen (1985); Sklar introduced a primitive quantity of acceleration, not supervenient on motions relative to anything at all, while van Fraassen let the laws themselves pick out the inertial frames. It is of course arguable whether any of these three proposals is successful; (even) stripped of Leibniz's Aristotelian packaging, can absolute quantities of motion ‘stand on their own feet’? And under what understanding of laws can they ground a standard of inertial motion? Huggett (2006) defends a similar position of absolutism about motion, but relationism about space; he argues — in the case of Newtonian physics — that fundamentally there is nothing to space but relations between bodies, but that absolute motions supervene — not on the relations at any one time — but on the entire history of relations.

Works cited in text

- Aristotle, 1984, The Complete Works of Aristotle: The Revised Oxford Translation, J. Barnes (ed.), Princeton: Princeton University Press.
- Barbour, J. and Bertotti, B., 1982, “Mach's Principle and the Structure of Dynamical Theories,” Proceedings of the Royal Society (London), 382: 295-306.
- –––, 1977, “Gravity and Inertia in a Machian Framework,” Nuovo Cimento, 38B: 1-27.
- Brill, D. R. and Cohen, J., 1966, “Rotating Masses and their Effects on Inertial Frames,” Physical Review, 143: 1011-1015.
- Brown, H. R., 2005, Physical Relativity: Space-Time Structure from a Dynamical Perspective, Oxford: Oxford University Press.
- Descartes, R., 1983, Principles of Philosophy, R. P. Miller and V. R. Miller (trans.), Dordrecht, London: Reidel.
- Dorling, J., 1978, “Did Einstein need General Relativity to solve the Problem of Space? Or had the Problem already been solved by Special Relativity?,” British Journal for the Philosophy of Science, 29: 311-323.
- Earman, J., 1989, World Enough and Space-Time: Absolute versus Relational Theories of Space and Time, Boston: MIT Press.
- –––, 1970, “Who's Afraid of Absolute Space?,” Australasian Journal of Philosophy, 48: 287-319.
- Einstein, A., 1918, “Prinzipielles zur allgemeinen Relativitätstheorie,” Annalen der Physik, 51: 639-642.
- –––, 1907, “Über das Relativitätsprinzip und die aus demselben gezogenen Folgerungen,” Jahrbuch der Radioaktivität und Elektronik, 4: 411-462.
- Einstein, A., Lorentz, H. A., Minkowski, H. and Weyl, H., 1952, The Principle of Relativity, W. Perrett and G. B. Jeffery (trans.), New York: Dover Books.
- Föppl, A., 1904, “Über absolute und relative Bewegung,” Sitzungsberichte der Münchener Akad., 35: 383.
- Friedländer, B. and J., 1896, Absolute und relative Bewegung, Berlin: Leonhard Simion.
- Friedman, M., 1983, Foundations of Space-Time Theories: Relativistic Physics and Philosophy of Science, Princeton: Princeton University Press.
- Garber, D., 1992, Descartes' Metaphysical Physics, Chicago: University of Chicago Press.
- Garber, D. and J. B. Rauzy, 2004, “Leibniz on Body, Matter and Extension,” The Aristotelian Society (Supplementary Volume), 78: 23-40.
- Hartz, G. A. and J. A. Cover, 1988, “Space and Time in the Leibnizian Metaphysic,” Noûs, 22: 493-519.
- Hoefer, C., 1994, “Einstein's Struggle for a Machian Gravitation Theory,” Studies in History and Philosophy of Science, 25: 287-336.
- Huggett, N., 2006, “The Regularity Account of Relational Spacetime,” Mind, 115: 41-74.
- –––, 2000, “Space from Zeno to Einstein: Classic Readings with a Contemporary Commentary,” International Studies in the Philosophy of Science, 14: 327-329.
- Lange, L., 1885, “Ueber das Beharrungsgesetz,” Berichte der Königlichen Sachsischen Gesellschaft der Wissenschaften zu Leipzig, Mathematisch-physische Classe, 37: 333-351.
- Leibniz, G. W., 1989, Philosophical Essays, R. Ariew and D. Garber (trans.), Indianapolis: Hackett Pub. Co.
- Leibniz, G. W. and Samuel Clarke, 1715-1716, “Correspondence,” in The Leibniz-Clarke Correspondence, Together with Extracts from Newton's “Principia” and “Opticks”, H. G. Alexander (ed.), Manchester: Manchester University Press, 1956.
- Lodge, P., 2003, “Leibniz on Relativity and the Motion of Bodies,” Philosophical Topics, 31: 277-308.
- Mach, E., 1883, Die Mechanik in ihrer Entwickelung, historisch-kritisch dargestellt, 2nd edition, Leipzig: Brockhaus; English translation (6th edition, 1960): The Science of Mechanics, La Salle, Illinois: Open Court Press.
- Malament, D., 1995, “Is Newtonian Cosmology Really Inconsistent?,” Philosophy of Science, 62(4).
- Maudlin, T., 1993, “Buckets of Water and Waves of Space: Why Space-Time is Probably a Substance,” Philosophy of Science, 60: 183-203.
- Minkowski, H., 1908, “Space and Time,” in Einstein et al. (1952), pp. 75-91.
- Nerlich, G., 1994, The Shape of Space (2nd edition), Cambridge: Cambridge University Press.
- Neumann, C., 1870, Ueber die Principien der Galilei-Newton'schen Theorie, Leipzig: B. G. Teubner.
- Newton, I., 2004, Newton: Philosophical Writings, A. Janiak (ed.), Cambridge: Cambridge University Press.
- Newton, I. and I. B. Cohen, 1999, The Principia: Mathematical Principles of Natural Philosophy, I. B. Cohen and A. M. Whitman (trans.), Berkeley and London: University of California Press.
- Norton, J., 1995, “Mach's Principle before Einstein,” in J. Barbour and H. Pfister (eds.), Mach's Principle: From Newton's Bucket to Quantum Gravity (Einstein Studies, Vol. 6), Boston: Birkhäuser, pp. 9-57.
- Norton, J., 1993, “A Paradox in Newtonian Cosmology,” in M. Forbes, D. Hull and K. Okruhlik (eds.),
PSA 1992: Proceedings of the 1992 Biennial Meeting of the Philosophy of Science Association (Volume 2), East Lansing, MI: Philosophy of Science Association, pp. 412-420.
- –––, 1992, “Einstein, Nordström and the Early Demise of Scalar, Lorentz-Covariant Theories of Gravitation,” Archive for History of Exact Sciences, 45: 17-94.
- Pooley, O., 2002, The Reality of Spacetime, D.Phil. thesis, Oxford University.
- Ray, C., 1991, Time, Space and Philosophy, New York: Routledge.
- Roberts, J. T., 2003, “Leibniz on Force and Absolute Motion,” Philosophy of Science, 70: 553-573.
- Rynasiewicz, R., 1995, “By their Properties, Causes, and Effects: Newton's Scholium on Time, Space, Place, and Motion — I. The Text,” Studies in History and Philosophy of Science, 26: 133-153.
- Sklar, L., 1974, Space, Time and Spacetime, Berkeley: University of California Press.
- Stein, H., 1977, “Some Philosophical Prehistory of General Relativity,” in Foundations of Space-Time Theories (Minnesota Studies in the Philosophy of Science, Volume 8), J. Earman, C. Glymour and J. Stachel (eds.), Minneapolis: University of Minnesota Press.
- –––, 1967, “Newtonian Space-Time,” Texas Quarterly, 10: 174-200.
- Wheeler, J. A. and Ciufolini, I., 1995, Gravitation and Inertia, Princeton, NJ: Princeton University Press.

Notable Philosophical Discussions of the Absolute-Relative Debates

- Barbour, J. B., 1982, “Relational Concepts of Space and Time,” British Journal for the Philosophy of Science, 33: 251-274.
- Belot, G., 2000, “Geometry and Motion,” British Journal for the Philosophy of Science, 51: 561-595.
- Butterfield, J., 1984, “Relationism and Possible Worlds,” British Journal for the Philosophy of Science, 35: 101-112.
- Callender, C., 2002, “Philosophy of Space-Time Physics,” in The Blackwell Guide to the Philosophy of Science, P. Machamer (ed.), Cambridge: Blackwell, pp. 173-198.
- Carrier, M., 1992, “Kant's Relational Theory of Absolute Space,” Kant-Studien, 83: 399-416.
- Dieks, D., 2001, “Space-Time Relationism in Newtonian and Relativistic Physics,” International Studies in the Philosophy of Science, 15: 5-17.
- DiSalle, R., 1995, “Spacetime Theory as Physical Geometry,” Erkenntnis, 42: 317-337.
- Earman, J., 1986, “Why Space is Not a Substance (at Least Not to First Degree),” Pacific Philosophical Quarterly, 67: 225-244.
- –––, 1970, “Who's Afraid of Absolute Space?,” Australasian Journal of Philosophy, 48: 287-319.
- Earman, J. and J. Norton, 1987, “What Price Spacetime Substantivalism: The Hole Story,” British Journal for the Philosophy of Science, 38: 515-525.
- Hoefer, C., 2000, “Kant's Hands and Earman's Pions: Chirality Arguments for Substantival Space,” International Studies in the Philosophy of Science, 14: 237-256.
- –––, 1998, “Absolute Versus Relational Spacetime: For Better Or Worse, the Debate Goes on,” British Journal for the Philosophy of Science, 49: 451-467.
- –––, 1996, “The Metaphysics of Space-Time Substantialism,” Journal of Philosophy, 93: 5-27.
- Huggett, N., 2000, “Reflections on Parity Nonconservation,” Philosophy of Science, 67: 219-241.
- Le Poidevin, R., 2004, “Space, Supervenience and Substantivalism,” Analysis, 64: 191-198.
- Malament, D., 1985, “Discussion: A Modest Remark about Reichenbach, Rotation, and General Relativity,” Philosophy of Science, 52: 615-620.
- Maudlin, T., 1993, “Buckets of Water and Waves of Space: Why Space-Time is Probably a Substance,” Philosophy of Science, 60: 183-203.
- –––, 1990, “Substances and Space-Time: What Aristotle would have Said to Einstein,” Studies in History and Philosophy of Science, 531-561. - Mundy, B., 1992, “Space-Time and Isomorphism,” Proceedings of the Biennial Meetings of the Philosophy of Science Association, 1: 515-527. - –––, 1983, “Relational Theories of Euclidean Space and Minkowski Space-Time,” Philosophy of Science, 50: 205-226. - Nerlich, G., 2003, “Space-Time Substantivalism,” in The Oxford Handbook of Metaphysics, M. J. Loux (ed.), Oxford: Oxford Univ Pr. 281-314. - –––, 1996, “What Spacetime Explains,” Philosophical Quarterly, 46: 127-131. - –––, 1994, What Spacetime Explains: Metaphysical Essays on Space and Time, New York: Cambridge Univ Pr. - –––, 1973, “Hands, Knees, and Absolute Space,” Journal of Philosophy, 70: 337-351. - Rynasiewicz, R., 2000, “On the Distinction between Absolute and Relative Motion,” Philosophy of Science, 67: 70-93. - –––, 1996, “Absolute Versus Relational Space-Time: An Outmoded Debate?,” Journal of Philosophy, 93: 279-306. - Teller, P., 1991, “Substance, Relations, and Arguments about the Nature of Space-Time,” Philosophical Review, 363-397. - Torretti, R., 2000, “Spacetime Models for the World,” Studies in History and Philosophy of Modern Physics, 31B: 171-186. - St. Andrews School of Mathematics and Statistics Index of Biographies - The Pittsburgh Phil-Sci Archive of pre-publication articles in philosophy of science - Ned Wright's Special Relativity tutorial - Andrew Hamilton's Special Relativity pages Descartes, René: physics | general relativity: early philosophical interpretations of | Newton, Isaac: views on space, time, and motion | space and time: inertial frames | space and time: the hole argument | Zeno of Elea: Zeno's paradoxes
Feb. 8, 2006 Older Americans with high blood pressure and moderate to severe chronic kidney disease have a greater chance of developing heart disease than people with normal kidney function. This finding is one of three in a new paper published in the Feb. 7 issue of the Annals of Internal Medicine. The study also found that these patients are more likely to develop heart disease than to progress to kidney failure (end-stage renal disease). Lastly, it found for the first time that newer types of drugs such as ACE inhibitors and calcium-channel blockers are no better than older diuretic drugs, also called water pills, in preventing heart disease, and may be even less effective at preventing heart failure in patients with chronic kidney disease.

Lead author of the study is Mahboob Rahman, M.D., M.S., of Case Western Reserve University School of Medicine, University Hospitals of Cleveland and the Louis Stokes Cleveland VA Medical Center. The study was sponsored by the National Heart Lung and Blood Institute and coordinated by the Clinical Trials Center at the University of Texas School of Public Health in Houston.

The study looked at more than 31,000 men and women 55 years and older who have high blood pressure and one other risk factor for cardiovascular disease, such as diabetes. A blood test was used to determine kidney function and severity of disease. Patients with moderate chronic kidney disease had a 38 percent greater chance of developing heart disease and a 35 percent increase in overall cardiovascular disease (which includes heart disease, stroke, heart failure and others) than those with normal kidney function. In addition, patients with moderate to severe chronic kidney disease were twice as likely to develop heart disease as to experience kidney failure.

Rahman said the researchers are not quite sure why moderate and severe kidney disease leads to greater risk of heart disease. "It may be related to other factors associated with renal failure, such as anemia or abnormalities of calcium or phosphorus metabolism, for example. We are participating in other ongoing studies to establish the connections," he said.

The study also confirmed earlier findings that diuretics are as effective as, or better than, newer drugs for preventing cardiovascular disease. "Overall, ACE inhibitors and diuretics were about equally likely to protect against heart attacks," said Rahman, "but diuretics seemed more effective at preventing other kinds of cardiovascular diseases, such as stroke and heart failure." Calcium-channel blockers were about equal in protecting against all cardiovascular disease, but diuretics were more effective at preventing heart failure. These results held for all participants regardless of kidney function.

Rahman cautioned patients not to stop taking their medications after reading these results, however, and to consult their physicians. He added, "Exercise, maintaining optimal body weight, smoking avoidance, and maintaining low cholesterol levels -- these are all things that should be done with renewed emphasis in most patients with high blood pressure. Most patients with hypertension and chronic kidney disease will require multiple medications to control blood pressure. Our results demonstrate that the risk for cardiovascular disease is lower if one of the medications is a diuretic."

He recommends patients who have high blood pressure talk to their doctors about measuring their kidney function to determine if they are suffering from chronic kidney disease.
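The article does not say which estimating equation the study used to stage kidney function from the blood test. A common choice in studies of this period is the four-variable MDRD equation; the sketch below is purely an illustration of that equation, not the study's actual protocol. The coefficients are the standard published ones, and the cutoffs follow conventional CKD staging (eGFR 30-59 counts as moderate, below 30 as severe).

```python
def egfr_mdrd(serum_creatinine_mg_dl, age_years, female, black):
    """Four-variable MDRD estimate of glomerular filtration rate
    (mL/min per 1.73 m^2), using the standard published coefficients."""
    egfr = 186.0 * (serum_creatinine_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def ckd_stage(egfr):
    # Conventional cutoffs: 30-59 = moderate CKD, below 30 = severe
    if egfr >= 60:
        return "normal or mildly reduced kidney function"
    if egfr >= 30:
        return "moderate chronic kidney disease"
    return "severe chronic kidney disease"

# Hypothetical example: a 70-year-old woman with serum creatinine 1.4 mg/dL
e = egfr_mdrd(1.4, 70, female=True, black=False)
print(f"eGFR = {e:.0f} -> {ckd_stage(e)}")   # ~40 -> moderate CKD
```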
Jan. 30, 2009 A new way of making LEDs could see household lighting bills reduced by up to 75% within five years. Gallium Nitride (GaN), a man-made semiconductor used to make LEDs (light emitting diodes), emits brilliant light but uses very little electricity. Until now, high production costs have made GaN lighting too expensive for widespread use in homes and offices.

However, the Cambridge University-based Centre for Gallium Nitride has developed a new way of making GaN which could produce LEDs for a tenth of current prices. GaN, grown in labs on expensive sapphire wafers since the 1990s, can now be grown on silicon wafers. This lower cost method could mean cheap mass-produced LEDs become widely available for lighting homes and offices in the next five years.

Based on current results, GaN LED lights in every home and office could cut the proportion of UK electricity used for lights from 20% to 5%. That means we could close or not need to replace eight power stations. A GaN LED can burn for 100,000 hours so, on average, it only needs replacing after 60 years. And, unlike currently available energy-saving bulbs, GaN LEDs do not contain mercury, so disposal is less damaging to the environment. GaN LEDs also have the advantage of turning on instantly and being dimmable.

Professor Colin Humphreys, lead scientist on the project, said: “This could well be the holy grail in terms of providing our lighting needs for the future. We are very close to achieving highly efficient, low cost white LEDs that can take the place of both traditional and currently available low energy light bulbs. That won’t just be good news for the environment. It will also benefit consumers by cutting their electricity bills.”

GaN LEDs, used to illuminate landmarks like Buckingham Palace and the Severn Bridge, are also appearing in camera flashes, mobile phones, torches, bicycle lights and interior bus, train and plane lighting. Parallel research is also being carried out into how GaN lights could mimic sunlight to help 3m people in the UK with Seasonal Affective Disorder (SAD). Ultraviolet rays made from GaN lighting could also aid water purification and disease control in developing countries, identify the spread of cancer tumours and help fight hospital ‘super bugs’.

Funding was provided by the Engineering and Physical Sciences Research Council (EPSRC).

About GaN LEDs

A light-emitting diode (LED) is a semiconductor diode that emits light when charged with electricity. LEDs are used for display and lighting in a whole range of electrical and electronic products. Although GaN was first produced over 30 years ago, it is only in the last ten years that GaN lighting has started to enter real-world applications. Currently, the brilliant light produced by GaN LEDs is blue or green in colour. A phosphor coating is applied to the LED to transform this into a more practical white light.

GaN LEDs are currently grown on 2-inch sapphire wafers. Manufacturers can get 9 times as many LEDs on a 6-inch silicon wafer as on a 2-inch sapphire wafer. In addition, edge effects are smaller, so the number of good LEDs is about 10 times higher. The processing costs for a 2-inch wafer are essentially the same as for a 6-inch wafer, and a 6-inch silicon wafer is much cheaper to produce than a 2-inch sapphire wafer. Together these factors result in a cost reduction of about a factor of 10 (see the quick check after this article).

Possible Future Applications

- Cancer surgery. Currently, it is very difficult to detect exactly where a tumour ends.
As a result, patients undergoing cancer surgery have to be kept under anaesthetic while cells are taken away for laboratory tests to see whether or not they are healthy. This may need to happen several times during an operation, prolonging the procedure extensively. But in the future, patients could be given harmless drugs that attach themselves to cancer cells, which can be distinguished when a blue GaN LED is shone on them. The tumour’s edge will be revealed, quickly and unmistakably, to the surgeon.

- Water purification. GaN may revolutionise drinking water provision in developing countries. If aluminium is added to GaN then deep ultra-violet light can be produced, and this kills all viruses and bacteria, so fitting such a GaN LED to the inside of a water pipe will instantly eradicate diseases, as well as killing mosquito larvae and other harmful organisms.

- Hospital-acquired infections. Shining an ultra-violet GaN torch beam could kill viruses and bacteria, boosting the fight against MRSA and C Difficile. Simply shining a GaN torch at a hospital wall or trolley, for example, could kill any ‘superbugs’ lurking there.

The above story is reprinted from materials provided by the Engineering and Physical Sciences Research Council (EPSRC).
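As a rough cross-check of the wafer economics described above, the factor-of-10 cost claim follows from simple area arithmetic plus the stated yield gain. The per-wafer costs in this sketch are illustrative placeholders, not figures from the article:

```python
# Wafer area scales with diameter squared: (6/2)^2 = 9x more LEDs fit.
area_ratio = (6.0 / 2.0) ** 2
print(f"LEDs per wafer (area alone): {area_ratio:.0f}x")       # 9x

# The article adds that reduced edge losses raise the yield of good
# LEDs to about 10x, at essentially equal processing cost per wafer.
good_led_ratio = 10.0

# Hypothetical per-wafer substrate costs, chosen only to show the
# direction: silicon is also cheaper than sapphire to begin with.
cost_sapphire_wafer = 100.0
cost_silicon_wafer = 80.0
relative_cost_per_led = (cost_silicon_wafer / cost_sapphire_wafer) / good_led_ratio
print(f"Cost per good LED on silicon: ~{relative_cost_per_led:.2f}x sapphire")

# The lighting claim: cutting lighting from 20% to 5% of UK electricity
print(f"Share of national electricity freed up: {0.20 - 0.05:.0%}")
```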
Mar. 6, 2013 Boys are right-handed, girls are left ... Well, at least this is true for sugar gliders (Petaurus breviceps) and grey short-tailed opossums (Monodelphis domestica), according to an article in BioMed Central’s open access journal BMC Evolutionary Biology showing that handedness in marsupials is dependent on gender. This preference for one hand over the other has developed despite the absence of a corpus callosum, the part of the brain which in placental mammals allows one half of the brain to communicate with the other.

Many animals show a distinct preference for using one hand/paw/hoof over another. This is often related to posture (an animal is more likely to show manual laterality if it is upright), to the difficulty of the task (more complex tasks show a stronger hand preference), and even to age. As an example of all three: crawling human babies show less hand preference than toddlers.

Some species also show a distinct sex effect in handedness, but among non-marsupial mammals this tendency is for left-handed males and right-handed females. In contrast, researchers from St Petersburg State University show that male quadrupedal marsupials (those that walk on all fours) tend to be right-handed while the females are left-handed, especially as tasks become more difficult.

Dr Yegor Malashichev from Saint Petersburg State University, who led this study, explained why they think this has evolved: “Marsupials do not have a corpus callosum – which connects the two halves of the mammalian brain together. Reversed sex related handedness is an indication of how the marsupial brain has developed different ways of the two halves of the brain communicating in the absence of the corpus callosum.”

- Andrey Giljov, Karina Karenina, Yegor Malashichev. Forelimb preferences in quadrupedal marsupials and their implications for laterality evolution in mammals. BMC Evolutionary Biology, 2013; 13 (1): 61 DOI: 10.1186/1471-2148-13-61
Web edition: March 4, 2013

Pregnant women taking DHA, an omega-3 fatty acid in fish oil, give birth to babies that score slightly better on several health measurements than those born to women who don’t take the supplement, a study has found. DHA, or docosahexaenoic acid, is a nutrient that promotes brain development (SN Online: 1/13/2009).

Susan Carlson of the University of Kansas Medical Center in Kansas City and her colleagues randomly assigned 350 women to take daily capsules of either a placebo or DHA starting midway through pregnancy. Babies born to the women who took DHA were slightly longer and heavier than the other babies and were less apt to spend time in the intensive care unit. Overall rates of preterm birth, defined as birth before the 37th week of gestation, didn’t differ substantially between the groups. But among preterm babies, those in the DHA group spent an average of nine days in the hospital compared with 41 days for those in the placebo group. While only one of 154 babies in the DHA group was born very early — before 34 weeks’ gestation — seven of 147 babies born to non-DHA mothers were born that early, Carlson and colleagues report in the April American Journal of Clinical Nutrition.

S. E. Carlson et al. DHA supplementation and pregnancy outcomes. American Journal of Clinical Nutrition. April 2013, in press. doi: 10.3945/ajcn.112.050021.
N. Seppa. Omega-3 fatty acid is early boost for female preemies. Science News Online. January 13, 2009.
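From the very-preterm counts reported above, a back-of-the-envelope relative risk can be computed. With only eight events in total the estimate is statistically imprecise (the paper itself should be consulted for confidence intervals); this sketch is just the arithmetic:

```python
# Births before 34 weeks' gestation, as reported above
dha_events, dha_n = 1, 154
placebo_events, placebo_n = 7, 147

risk_dha = dha_events / dha_n              # about 0.6%
risk_placebo = placebo_events / placebo_n  # about 4.8%

print(f"Very-preterm risk, DHA group:     {risk_dha:.1%}")
print(f"Very-preterm risk, placebo group: {risk_placebo:.1%}")
print(f"Relative risk (DHA vs placebo):   {risk_dha / risk_placebo:.2f}")  # ~0.14
```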
Anthony Stocks, chairman and professor of anthropology at Idaho State University, responds: "The evolution of smiles is opaque and, as with many evolutionary accounts of social behavior, fraught with just-soism. Among human babies, however, the 'tooth-baring' smile is associated less with friendship than with fright--which, one might argue, is related to the tooth-baring threats of baboons. On the other hand, a non-toothy, not-so-broad-but-open-lipped smile is associated with pleasure in human infants. Somehow we seem to have taken the fright-threat sort of smile and extended it to strangers as a presumably friendly smile. Maybe it is not as innocent as it seems. "All cultures recognize a variety of mouth gestures as indexes of inner emotional states. As in our own culture, however, smiles come in many varieties, not all of them interpreted as friendly." Frank McAndrew, professor of psychology at Knox College in Galesburg, Ill., has done extensive research on facial expressions. He answers as follows: "Baring one's teeth is not always a threat. In primates, showing the teeth, especially teeth held together, is almost always a sign of submission. The human smile probably has evolved from that. "In the primate threat, the lips are curled back and the teeth are apart--you are ready to bite. But if the teeth are pressed together and the lips are relaxed, then clearly you are not prepared to do any damage. These displays are combined with other facial features, such as what you do with your eyes, to express a whole range of feelings. In a lot of human smiling, it is something you do in public, but it does not reflect true 'friendly' feelings--think of politicians smiling for photographers. "What is especially interesting is that you do not have to learn to do any of this--it is preprogrammed behavior. Kids who are born blind never see anybody smile, but they show the same kinds of smiles under the same situations as sighted people." McAndrew suggests several books that will be of interest to readers seeking more information on this topic: 'Non-Verbal Communication.' Edited by R. A. Hinde. Cambridge University Press, 1972. 'Emotion: A Psychoevolutionary Synthesis.' Robert Plutchik. Harper and Row, 1980. 'Emotion in the Human Face.' Second edition. Edited by Paul Ekman. Cambridge University Press, 1982
Plants can pull carbon dioxide, the planet-warming greenhouse gas, out of Earth’s atmosphere. But these aren’t the only living organisms that affect carbon dioxide levels, and thus global warming. Nope, I’m not talking about humans. Humble sea otters can also reduce greenhouse gases, by indirectly helping kelp plants. That finding is in the journal Frontiers in Ecology and the Environment. [Christopher C. Wilmers et al., Do trophic cascades affect the storage and flux of atmospheric carbon? An analysis of sea otters and kelp forests]

Researchers used 40 years of data to look at the effect of sea otter populations on kelp. Depending on the plant density, one square meter of kelp forest can absorb anywhere from tens to hundreds of grams of carbon per year. But when sea otters are around, kelp density is high and the plants can suck up more than 12 times as much carbon. That’s because otters nosh on kelp-eating sea urchins. In the otters’ presence, the urchins hide away and feed on kelp detritus rather than living, carbon-absorbing plants.

So climate researchers need to note that the herbivores that eat plants, and the predators that eat them, also have roles to play in the carbon cycle.

[The above text is a transcript of this podcast.]
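Putting the quoted figures together gives a feel for the size of the otter effect. The endpoints below are just one illustrative reading of "tens to hundreds" of grams:

```python
low, high = 10.0, 100.0   # g of carbon per m^2 per year, illustrative endpoints
otter_factor = 12.0       # "more than 12 times as much" with otters present

print(f"Without otters: {low:.0f}-{high:.0f} g C/m^2/yr")
print(f"With otters:    {low * otter_factor:.0f}-{high * otter_factor:.0f} g C/m^2/yr")
```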
Own an Actual Piece of an American Space Travel Icon The Space Shuttle Atlantis is a retired Space Shuttle orbiter in the Space Shuttle fleet belonging to NASA. The last mission of Atlantis was STS-135, the last flight before the Shuttle program ended. By the end of its final mission, Atlantis had orbited the Earth 4,848 times, traveling nearly 126,000,000 mi in space or more than 525 times the distance from the Earth to the Moon. This photograph of the Atlantis taking off contains a piece of cargo bay liner from the actual space shuttle. Certificate of Authenticity is included. Dimensions: 8"x10" photograph, 13"x16" wooden frame.
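The mileage claims in the listing are easy to sanity-check. The Earth-Moon figure below is the commonly quoted average distance, an assumption the listing itself does not state:

```python
miles_traveled = 126_000_000
orbits = 4_848
earth_moon_miles = 238_855   # commonly quoted average Earth-Moon distance

print(f"Earth-Moon multiples:    {miles_traveled / earth_moon_miles:.0f}")  # ~527
print(f"Average miles per orbit: {miles_traveled / orbits:,.0f}")           # ~26,000
```

The roughly 26,000 miles per orbit is consistent with the circumference of a low Earth orbit a few hundred miles up.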
In 1866 the Russian government offered to sell the territory of Alaska to the United States. Secretary of State William H. Seward, enthusiastic about the prospects of American expansion, negotiated the deal for the Americans. Edouard de Stoeckl, Russian minister to the United States, negotiated for the Russians. On March 30, 1867, the two parties agreed that the United States would pay Russia $7.2 million for the territory of Alaska. For less than 2 cents an acre, the United States acquired nearly 600,000 square miles.

Opponents of the Alaska Purchase persisted in calling it “Seward’s Folly” or “Seward’s Icebox” until 1896, when the great Klondike Gold Strike convinced even the harshest critics that Alaska was a valuable addition to American territory.

The check for $7.2 million was made payable to the Russian Minister to the United States Edouard de Stoeckl, who negotiated the deal for the Russians. Also shown here is the Treaty of Cession, signed by Tzar Alexander II, which formally concluded the agreement for the purchase of Alaska from Russia.
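The "less than 2 cents an acre" figure follows directly from the price and area given above, using the standard conversion of 640 acres per square mile:

```python
price_dollars = 7_200_000
square_miles = 600_000
acres = square_miles * 640                      # 384,000,000 acres

cents_per_acre = 100 * price_dollars / acres
print(f"{cents_per_acre:.2f} cents per acre")   # ~1.88
```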
Filed under: Foundational Hand

After studying the proportions of the Foundational Hand letters, the next step is to start writing the letters. Each letter is constructed rather than written. The letters are made up of a combination of pen strokes, which are only made in a top – down or left – right direction. The pen is never pushed up.

When we studied the proportions of the Foundational Hand we could group the letters according to their widths. Now, we can group them according to the order and direction of the pen strokes. You may find it useful to look at the construction grid whilst studying the order and direction of the letters.

The first group consists of the letters c, e, and o. These letters are based on the circle shape. This shape is produced with two pen strokes. Visualise a clock face and start the first stroke at approximately the 11, and finish it in an anti-clockwise direction at 5. The second stroke starts again at the 11 and finishes in a clockwise direction on the 5 to complete the letter o. The first pen-stroke for the letters c and e is the same as the first stroke of the letter o. The second pen-stroke on the c and e is shorter and finishes around the 1 position on the imaginary clock face. Finally, the letter e has a third stroke, which starts at the end of the second stroke and finishes when it touches the first stroke.

The next group of letters is d, q, b and p. All these letters combine curved and straight pen strokes. When writing these letters it can be useful to think of the underlying circle shape, which your pen will leave or join at certain points depending upon which letter is being written.

The first stroke of the b starts at the ascender height of the letter, which can be eyed in at just under half the x-height (the body height of letters with no ascender or descender) above the x-height line. Continue the ascender stroke of the b until it 'picks up' the circle shape, then follow round the circle until the pen reaches the 5 on the imaginary clock face. The second stroke starts on the first stroke, following the circle round until it touches the end of the first stroke.

The letter d is similar to the c except it has a third stroke for the ascender, which touches the ends of the first and second strokes before finishing on the write-line.

Letter p starts with a vertical stroke from the x-height down to the imaginary descender line, which is just under half the x-height below the write-line. The second and third strokes are curved, starting on the descender stroke and following round the imaginary circle.

The letter q is almost the same as the d, except it has a descender stroke rather than an ascender stroke.

Letters a, h, m, n, r

All these letters combine curved and straight pen strokes. Once again, think of the underlying circle shape, which your pen will leave or join at certain points depending upon the letter being written.

The letter h consists of two pen strokes. The first is a vertical ascender stroke. The second stroke starts curved, follows the circle round, then leaves it and becomes straight. The letter n is produced exactly the same way as the letter h, except the first stroke is not so tall as it starts on the x-height line. The first two pen strokes of the letter m are the same as the letter n. Then a third stroke is added which is identical to the second stroke. The letter r is also written the same way as the letter n, except the second stroke finishes at the point where it would otherwise leave the circle and straighten.
The first stroke of letter a is the same as the second stroke of the letters h, m and n. The second stroke follows the circle. Finally, the third stroke starts at the same point as the second stroke, but is a straight line at a 30° angle and touches the first stroke.

The next group of letters is l, u and t. These letters are straightforward. The letter l is the same as the first stroke of letter b. The letter u is also similar to the first stroke of letter b except it starts lower down. The second stroke starts on the x-height line and finishes on the write-line. Letter t has the same first stroke as letter u. It is completed by a second, horizontal stroke.

The following letters k, v, w, x, y and z are made of at least one diagonal pen stroke.

The letter k starts with a vertical ascender stroke, then a second, diagonal stroke which joins the vertical stroke. The final stroke is also diagonal; it starts where the first and second strokes meet and stops when it touches the write-line. If you look closely you will see it goes further out than the second stroke. This makes the letter look more balanced. If the ends of these two pen-strokes lined up, the letter would look like it is about to fall over.

Letter v is simply two diagonal strokes, and these are repeated to produce the letter w. The letter y is the same as the v except the second stroke is extended to create a descender stroke. Letter x is a little different: you need to create it in such a way that the two strokes cross slightly above the half-way mark on the x-height. This means the top part will be slightly smaller than the bottom, which gives the letter a better balance. Finally, in this group is letter z. The easiest way to produce this is with the two horizontal pen strokes, then join these two strokes with a diagonal pen-stroke to complete the letter.

Now for the hardest letters: f, g and s. Out of these three letters, f is the simplest. It starts with a vertical ascender stroke – except this is not as tall as the other ascender strokes we have produced so far. This is because we have to allow for the second, curved stroke. The overall height of these two strokes should be the same as other letters that have an ascender. Finally, we need a horizontal stroke to complete the letter.

Which will you find the hardest, letter g or s? These are trickier because, unlike all the other letters we have written, they do not relate so well to the grid. The letter g is made of a circle shape, with an oval/bowl shape under the write-line. You can see the letter g is made of three pen-strokes. The first stroke is just like the first stroke of the letter o, for example, except it is smaller. The second stroke starts like the second stroke of the letter o, but when it joins the first stroke it continues and changes direction in the gap between the bottom of the shape and the write-line. The third stroke completes the oval shape. Finally, we have a little fourth stroke to complete the letter.

The letter s is made up of three strokes. The first stroke is sort of an s shape! The second and third strokes complete the letter s. These are easier to get right than the first stroke because they basically follow the circle shape on our construction grid. The secret to this letter is to make both 'ends' of the first stroke not too curved. Because the other two strokes are curved, they will compensate and give the overall correct shape.

Finally, we are left with the letters i and j, which are made from one pen-stroke.
You just need to remember to curve the end of the stroke when writing the letter j.
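The clock-face description of the letter o translates directly into two arcs, which makes it easy to sketch by machine. The short script below is an illustration only, not part of the original lesson: it traces the stroke geometry (both strokes start near the 11 and meet at the 5) as plain arcs. A real broad-edged pen held at a 30° angle would give the strokes their characteristic thick-and-thin variation.

```python
import numpy as np
import matplotlib.pyplot as plt

def clock_angle(hour):
    """Convert a clock-face position to a math angle in radians
    (12 o'clock points straight up; angles advance with the hours)."""
    return np.deg2rad(90.0 - 30.0 * hour)

a11 = clock_angle(11)   # start of both strokes (upper left)
a5 = clock_angle(5)     # where the strokes meet (lower right)

# Stroke 1: from the 11 anti-clockwise round to the 5 (increasing angle).
t1 = np.linspace(a11, a5, 100)
# Stroke 2: from the 11 clockwise to the 5 (decreasing angle; a full
# turn is added to the start so the sweep runs down the right side).
t2 = np.linspace(a11 + 2 * np.pi, a5, 100)

plt.plot(np.cos(t1), np.sin(t1), lw=6, label="stroke 1 (anti-clockwise)")
plt.plot(np.cos(t2), np.sin(t2), lw=6, ls="--", label="stroke 2 (clockwise)")
plt.gca().set_aspect("equal")
plt.legend()
plt.title("The two strokes of the Foundational Hand letter o")
plt.show()
```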
When it comes to Spanish-style colonial charm, few cities in the Western Hemisphere can rival Old San Juan. But that doesn’t mean that Puerto Rico’s historical significance is exclusively within the capital city’s walls. Roughly 100 miles southwest of San Juan, the lovely town of San Germán holds the venerable distinction of being Puerto Rico’s second oldest city.

Founded in 1573 and named after King Ferdinand the Catholic’s second wife, Germaine of Foix, San Germán became the island’s first settlement outside of San Juan. Its significance was such that the island was first divided into the San Juan Party and the San Germán Party. The town also became the focal point from which other settlements were established, thus earning the nickname ‘Ciudad Fundadora de Pueblos’ (roughly, Town-Founding City).

But while San Juan went on to grow exponentially beyond the old city walls, and other cities like Ponce, Mayagüez, Arecibo or Caguas grew in population and importance, San Germán remained a sleepy colonial town and one of the best-kept secrets within the island.

From a historical perspective, San Germán’s most famous landmark is Porta Coeli Church. One of the earliest examples of Gothic architecture in the Americas, the chapel was originally built as a convent in 1609 by the Dominican Order. It was reconstructed during the 18th century and expanded with a single nave church of rubble masonry. Listed in 1976 in the U.S. National Register of Historic Places, Porta Coeli was restored by the Institute of Puerto Rican Culture and now houses the Museo de Arte Religioso, which showcases religious paintings and wooden carvings dating back to the 18th and 19th centuries.

Porta Coeli overlooks quaint Plazuela Santo Domingo, an elongated, cobblestoned square enclosed by pastel-colored, colonial-style houses. A block away sits the town’s main square, Plaza Francisco Mariano Quiñones, where the operational church of San Germán de Auxerre is located. Both Porta Coeli and San Germán de Auxerre are part of the San Germán Historic District, which was also listed in the U.S. National Register of Historic Places in 1994 and includes about 100 significant buildings.

Though San Germán has long since lost its 16th-century designation as Puerto Rico’s most important city after San Juan, the town is nonetheless a regional powerhouse in southwestern Puerto Rico, housing important institutions such as the main campus of Universidad Interamericana (Interamerican University). Sports enthusiasts will also appreciate that the city is considered “The Cradle of Puerto Rican Basketball,” as it is home to one of the island’s oldest and most successful basketball franchises, Atléticos de San Germán (San Germán Athletics).
The basic element in solar modules

The wafers are further processed into solar cells in the third production step. They form the basic element of the resulting solar modules. The cells already possess all of the technical attributes necessary to generate electricity from sunlight. Positive and negative charge carriers are released in the cells through light radiation, causing electrical current (direct current) to flow.

The "Cell" business division is part of SolarWorld subsidiary Deutsche Cell GmbH and SolarWorld Industries America LP. Here, solar cells are produced from the preliminary product, the solar silicon wafer. The group manufactures both monocrystalline and polycrystalline solar cells.

The monocrystalline and polycrystalline solar cells are produced around the clock in one of the most advanced solar cell production facilities. The wafers are processed in the clean rooms of Deutsche Cell GmbH using the most cutting-edge process facilities with the highest level of automation. Through the fully integrated production concept, it is possible to flexibly control the use of all auxiliary materials necessary for production and to continuously optimize material utilization during operation. This concept allows us to assure the unique quality standard of our solar cells and simultaneously reduce the loss rate compared to conventional processes. This not only lowers production costs, it adds to the expertise in solar cell production for the SolarWorld group.

The wafer is first cleaned of all damage caused by cutting and then textured. A p/n junction is created by means of phosphorus diffusion, which makes the silicon conductive. In the next step, the phosphorus glass layer produced by diffusion is removed. An anti-reflection layer, which reduces optical losses and ensures electrical passivation of the surface, is then added. Then the contacts are attached to the front, along with a rear contact. Finally, every individual solar cell is tested for its optical qualities and its electrical efficiency measured.
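The text notes that each finished cell's electrical efficiency is measured. A standard way to express that measurement is the cell's maximum output power divided by the incident light power under standard test conditions. The sketch below shows the calculation with illustrative numbers; SolarWorld's actual test procedure and values are not described here:

```python
def cell_efficiency(v_oc, i_sc, fill_factor, cell_area_m2,
                    irradiance_w_m2=1000.0):
    """Efficiency = maximum electrical power / incident optical power.
    Standard test conditions assume 1000 W/m^2 irradiance."""
    p_max = v_oc * i_sc * fill_factor       # watts at the maximum power point
    p_in = irradiance_w_m2 * cell_area_m2   # watts of incident sunlight
    return p_max / p_in

# Illustrative numbers for a 156 mm x 156 mm crystalline cell
eff = cell_efficiency(v_oc=0.63, i_sc=8.6, fill_factor=0.78,
                      cell_area_m2=0.156 ** 2)
print(f"Cell efficiency: {eff:.1%}")   # about 17%
```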
by Staff Writers
Chicago IL (SPX) Jan 11, 2013

Technologically valuable ultrastable glasses can be produced in days or hours with properties corresponding to those that have been aged for thousands of years, computational and laboratory studies have confirmed. Aging makes for higher quality glassy materials because they have slowly evolved toward a more stable molecular condition. This evolution can take thousands or millions of years, but manufacturers must work faster.

Armed with a better understanding of how glasses age and evolve, researchers at the universities of Chicago and Wisconsin-Madison raise the possibility of designing a new class of materials at the molecular level via a vapor-deposition process.

"In attempts to work with aged glasses, for example, people have examined amber," said Juan de Pablo, UChicago's Liew Family Professor in Molecular Theory and Simulations. "Amber is a glass that has been aged millions of years, but you cannot engineer that material. You get what you get."

de Pablo and Wisconsin co-authors Sadanand Singh and Mark Ediger report their findings in the latest issue of Nature Materials. Ultrastable glasses could find potential applications in the production of stronger metals and in faster-acting pharmaceuticals.

The latter may sound surprising, but drugs with the amorphous molecular structure of ultrastable glass could avoid crystallization during storage and be delivered more rapidly in the bloodstream than pharmaceuticals with a semi-crystalline structure. Amorphous metals, likewise, are better for high-impact applications than crystalline metals because of their greater strength.

The Nature Materials paper describes computer simulations that Singh, a doctoral student in chemical engineering at UW-Madison, carried out with de Pablo to follow up some intriguing results from Ediger's laboratory.

Growing stable glasses

Several years ago, Ediger discovered that glasses grown by vapor deposition onto a specially prepared surface that is kept within a certain temperature range exhibit far more stability than ordinary glasses. Previous researchers must have grown this material under the same temperature conditions, but failed to recognize the significance of what they had done, Ediger said.

Ediger speculated that growing glasses under these conditions, which he compares to the Tetris video game, gives molecules extra room to arrange themselves into a more stable configuration. But he needed Singh and de Pablo's computer simulations to confirm his suspicions that he had actually produced a highly evolved, ordinary glass rather than an entirely new material.

"There's interest in making these materials on the computer because you have direct access to the structure, and you can therefore determine the relationship between the arrangement of the molecules and the physical properties that you measure," said de Pablo, a former UW-Madison faculty member who joined UChicago's new Institute for Molecular Engineering earlier this year.

There are challenges, though, to simulating the evolution of glasses on a computer. Scientists can cool a glassy material at the rate of one degree per second in the laboratory, but the slowest computational studies can only simulate cooling at a rate of 100 million degrees per second. "We cannot cool it any slower because the calculations would take forever," de Pablo said.
"It had been believed until now that there is no correlation between the mechanical properties of a glass and the molecular structure; that somehow the properties of a glass are "hidden" somewhere and that there are no obvious structural signatures," de Pablo said. Creating better materials Ultrastable glasses achieve their stability in a manner analogous to the most efficiently packed, multishaped objects in Tetris, each consisting of four squares in various configurations that rain from the top of the screen. "This is a little bit like the molecules in my deposition apparatus raining down onto this surface, and the goal is to perfectly pack a film, not to have any voids left," Ediger said. The object of Tetris is to manipulate the objects so that they pack into a perfectly tight pattern at the bottom of the screen. "The difference is, when you play the game, you have to actively manipulate the pieces in order to build a well-packed solid," Ediger said. "In the vapor deposition, nature does it for us." But in Tetris and experiments alike, when the objects or molecules descend too quickly, the result is a poorly packed, void-riddled pattern. "In the experiment, if you either rain the molecules too fast or choose a low temperature at which there's no mobility at the surface, then this trick doesn't work," Ediger said. Then it would be like taking a bucket of odd-shaped pieces and just dumping them on the floor. There are all sorts of voids and gaps because the molecules didn't have any opportunity to find a good way of packing." "Ultrastable glasses from in silico vapor deposition," by Sadamand Singh, M.D. Ediger and Juan J. de Pablo," Nature Materials. National Science Foundation and the U.S. Department of Energy. University of Chicago Space Technology News - Applications and Research |The content herein, unless otherwise known to be public domain, are Copyright 1995-2012 - Space Media Network. AFP, UPI and IANS news wire stories are copyright Agence France-Presse, United Press International and Indo-Asia News Service. ESA Portal Reports are copyright European Space Agency. All NASA sourced material is public domain. Additional copyrights may apply in whole or part to other bona fide parties. Advertising does not imply endorsement,agreement or approval of any opinions, statements or information provided by Space Media Network on any Web page published or hosted by Space Media Network. Privacy Statement|
Mission controllers received confirmation today that NASA's Dawn spacecraft has escaped from the gentle gravitational grip of the giant asteroid Vesta. Dawn is now officially on its way to its second destination, the dwarf planet Ceres.

Dawn departed from Vesta at about 11:26 p.m. PDT on Sept. 4 (2:26 a.m. EDT on Sept. 5). Communications from the spacecraft via NASA's Deep Space Network confirmed the departure and that the spacecraft is now traveling toward Ceres.

"As we respectfully say goodbye to Vesta and reflect on the amazing discoveries over the past year, we eagerly look forward to the next phase of our adventure at Ceres, where even more exciting discoveries await," said Robert Mase, Dawn project manager, based at NASA's Jet Propulsion Laboratory, Pasadena, Calif.

Launched on Sept. 27, 2007, Dawn slipped into orbit around Vesta on July 15, 2011 PDT (July 16 EDT). Over the past year, Dawn has comprehensively mapped this previously uncharted world, revealing an exotic and diverse planetary building block. The findings are helping scientists unlock some of the secrets of how the solar system, including our own Earth, was formed.

A web video celebrating Dawn's "greatest hits" at Vesta is available at http://www.nasa.gov/multimedia/videogallery/index.html?media_id=151669301 . Two of Dawn's last looks at Vesta are also now available, revealing the creeping dawn over the north pole.

Dawn spiraled away from Vesta as gently as it arrived. It is expected to pull into its next port of call, Ceres, in early 2015.

Dawn's mission is managed by JPL for NASA's Science Mission Directorate in Washington. Dawn is a project of the directorate's Discovery Program, managed by NASA's Marshall Space Flight Center in Huntsville, Ala. UCLA is responsible for overall Dawn mission science. Orbital Sciences Corp. in Dulles, Va., designed and built the spacecraft. The German Aerospace Center, the Max Planck Institute for Solar System Research, the Italian Space Agency and the Italian National Astrophysical Institute are international partners on the mission team. The California Institute of Technology in Pasadena manages JPL for NASA.

More information about Dawn: http://www.nasa.gov/dawn http://dawn.jpl.nasa.gov
Beating Swords Into Plowshares: Converting Military Intercontinental Ballistic Missiles to Peaceful Space Launchers

Russian Submarine Novomoskovsk Launches Satellites From Barents Sea

The Russian nuclear submarine Novomoskovsk used a converted sea-launched ballistic missile to fire two small environmental research satellites into Earth orbit from beneath the Barents Sea in 1998.

Photo: A Russian Typhoon-class nuclear submarine

The unusual launch was the first time a commercial payload had ever been sent from Earth into orbit from a submarine and the first commercial space launch in the history of the Russian Navy. The satellites, named TUBSAT, were launched on a Shtil rocket, which was a converted sea-launched ballistic missile (SLBM).

The Shtil Rocket

The Shtil rocket family is one of a range of space launch vehicles derived from decommissioned ballistic missiles offered for sale by Russia after the Cold War. The industrial design bureau Makeyev OKB had been formed by the former Soviet Union in the 1950s to produce a storable liquid fuel rocket family. Back then, those missiles were known as R-11 for use on land and R-11FM for use by the navy. Makeyev went on to design and manufacture descendants of the R-11 family, including the infamous Scud-B missile and nearly all of Russia's submarine-launched ballistic missiles (SLBMs). In the 1990s, Makeyev and other OKBs marketed a variety of space rockets converted from surplus SLBMs, which could be launched from the ground, air, sea surface or underwater.

During the Cold War, the military SLBM which later would become the Shtil space rocket was known as the R-29RM and, to NATO, as the SS-N-23. The missile's industry designation was RSM-54. Shtil is a three-stage liquid-fuel rocket. The satellites replaced the nuclear warhead inside a standard R-29RM re-entry vehicle atop the SS-N-23.

The submarine launch platform was Novomoskovsk K-407, a 667BDRM Delta-IV-class or Delfin-class submarine of the Russian Northern Fleet's 3rd Flotilla. The Shtil's maiden flight took place July 7, 1998, while the submarine was in a Barents Sea firing range off the coast of the Kolskiy Peninsula at 69.3 degrees N by 35.3 degrees E. Prior to launch, the space flight had been viewed as a risk because a different one of the Northern fleet's Delta-class submarines had suffered an accident in one of its rocket tubes on May 5, 1998.

The Shtil's former warhead fairing housed an Israeli instrument package and the German satellites TUBSAT-N and TUBSAT N-1. The tiny satellites, referred to as nanosatellites, were built and operated by the Technische Universitat Berlin (TUB). Each TUBSAT carried small store-and-forward communications payloads used to track transmitters placed on vehicles, migrating animals and marine buoys. The satellites were dropped off in elliptical orbits ranging from 250 to 500 miles above Earth. They traveled around Earth every 96 minutes. Tubsat-N, designated internationally as 1998-042A, weighed eighteen lbs. while Tubsat-N1, designated 1998-042B, weighed seven lbs.

Technically, putting satellites in low Earth orbits is only a small step from delivering long-range warheads. The Russians had been offering the submarine launch facility as a commercial service for some time and previously had conducted sub-orbital test flights.
The benefits of a submarine launch are safety and ease of putting a payload into a particular orbit. By comparison, there are safety restrictions on the directions toward which land-based rockets can be launched. On the other hand, these submarine-based missiles converted to space rockets are only big enough to launch small research satellites. They aren't able to launch very large and heavy communications satellites or interplanetary space probes. However, the success of the Shtil launch could open up a valuable small-satellite niche in the space-launch market for the Russians.

The Northern fleet reportedly was paid $111,000 for the launch, which helped the submarine crew sharpen skills diminished by a shortage of training funds. Berlin Technical University's Transport and Applied Mechanics Department plans to launch two more TUBSATs.

Learn more about nuclear submarines and missiles:
- Taking Nuclear Weapons off Hair-Trigger Alert, Scientific American, November 1997
- Launchers from decommissioned missiles sold by Russia, Encyclopedia Astronautica
- DLR German Aerospace Center
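The quoted 96-minute period is consistent with Kepler's third law. Taking the midpoint of the stated 250-500 mile altitude range as a stand-in circular orbit (the real orbits were elliptical, so this is only an approximation):

```python
import math

MU_EARTH = 3.986e14   # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6.371e6     # m, mean Earth radius
MILE = 1609.34        # m

altitude = (250 + 500) / 2 * MILE   # ~375 miles, mid-range altitude
a = R_EARTH + altitude              # orbit radius for a circular orbit
period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
print(f"Orbital period: {period_s / 60:.0f} minutes")   # ~97
```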
In January 1992, a container ship near the International Date Line, headed to Tacoma, Washington from Hong Kong, lost 12 containers during severe storm conditions. One of these containers held a shipment of 29,000 bathtub toys. Ten months later, the first of these plastic toys began to wash up onto the coast of Alaska. Driven by the wind and ocean currents, these toys continued to wash ashore over the next several years, and some even drifted into the Atlantic Ocean.

The ultimate reason for the world's surface ocean currents is the sun. The heating of the earth by the sun has produced semi-permanent pressure centers near the surface. When wind blows over the ocean around these pressure centers, surface waves are generated by transferring some of the wind's energy, in the form of momentum, from the air to the water. This constant push on the surface of the ocean is the force that forms the surface currents.

Learning Lesson: How it is Currently Done

Around the world, there are some similarities in the currents. For example, along the west coasts of the continents, the currents flow toward the equator in both hemispheres. These are called cold currents as they bring cool water from the polar regions into the tropical regions. The cold current off the west coast of the United States is called the California Current.

The opposite is true as well: along the east coasts of the continents, the currents flow from the equator toward the poles. These are called warm currents as they bring the warm tropical water north. The Gulf Stream, off the southeast United States coast, is one of the strongest currents known anywhere in the world, with water speeds up to 3 mph (5 kph).

These currents have a huge impact on the long-term weather a location experiences. The overall climate of Norway and the British Isles is about 18°F (10°C) warmer in the winter than other cities located at the same latitude due to the Gulf Stream.

Take it to the MAX! Keeping Current

While ocean currents are shallow circulations, there is a global circulation, extending to the depths of the sea, called the Great Ocean Conveyor. Also called the thermohaline circulation, it is driven by differences in the density of the sea water, which is controlled by temperature (thermal) and salinity (haline).

In the northern Atlantic Ocean, as water flows north it cools considerably, increasing its density. As it cools to the freezing point, sea ice forms, with the "salts" extracted from the frozen water making the water below more dense. The very salty water sinks to the ocean floor.

Learning Lesson: That Sinking Feeling

It is not static, but a slowly southward flowing current. The route of the deep water flow is through the Atlantic Basin around South Africa and into the Indian Ocean and on past Australia into the Pacific Ocean Basin. If the water is sinking in the North Atlantic Ocean then it must rise somewhere else. This upwelling is relatively widespread. However, water samples taken around the world indicate that most of the upwelling takes place in the North Pacific Ocean. It is estimated that once the water sinks in the North Atlantic Ocean, it takes 1,000-1,200 years before that deep, salty bottom water rises to the upper levels of the ocean.
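The density differences that drive the conveyor are small but systematic, and a linearized equation of state is enough to show both effects at work. The coefficients below are rough textbook values (real oceanographic work uses the full nonlinear equation of state):

```python
def seawater_density(temp_c, salinity_psu):
    """Linearized equation of state: density rises as the water cools
    and as salinity increases. Coefficients are approximate."""
    rho0, t0, s0 = 1027.0, 10.0, 35.0  # reference density (kg/m^3), T, S
    alpha = 2.0e-4   # thermal expansion coefficient, per deg C
    beta = 8.0e-4    # haline contraction coefficient, per psu
    return rho0 * (1 - alpha * (temp_c - t0) + beta * (salinity_psu - s0))

cases = [
    ("warm tropical surface water",        20.0, 35.0),
    ("cooled polar water",                 -1.0, 35.0),
    ("cooled water + brine from sea ice",  -1.0, 36.5),
]
for name, t, s in cases:
    print(f"{name:36s} {seawater_density(t, s):8.2f} kg/m^3")
```

Each step (cooling, then brine rejection during sea-ice formation) makes the water denser, which is exactly why it sinks in the North Atlantic.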
Michele Johnson, Ames Research Center

Astronomers have discovered a pair of neighboring planets with dissimilar densities orbiting very close to each other. The planets are too close to their star to be in the so-called "habitable zone," the region in a system where liquid water might exist on the surface, but they have the closest-spaced orbits ever confirmed. The findings are published today in the journal Science.

The research team, led by Josh Carter, a Hubble fellow at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., and Eric Agol, a professor of astronomy at the University of Washington in Seattle, used data from NASA's Kepler space telescope, which measures dips in the brightness of more than 150,000 stars, to search for transiting planets.

The inner planet, Kepler-36b, orbits its host star every 13.8 days and the outer planet, Kepler-36c, every 16.2 days. On their closest approach, the neighboring duo comes within about 1.2 million miles of each other. This is only five times the Earth-moon distance and about 20 times closer to one another than any two planets in our solar system.

Kepler-36b is a rocky world measuring 1.5 times the radius and 4.5 times the mass of Earth. Kepler-36c is a gaseous giant measuring 3.7 times the radius and eight times the mass of Earth. The planetary odd couple orbits a star slightly hotter and a couple billion years older than our sun, located 1,200 light-years from Earth.

To read more about the discovery, visit the Harvard-Smithsonian Center for Astrophysics and University of Washington press releases.

Ames Research Center in Moffett Field, Calif., manages Kepler's ground system development, mission operations and science data analysis. NASA’s Jet Propulsion Laboratory, Pasadena, Calif., managed the Kepler mission's development. Ball Aerospace and Technologies Corp. in Boulder, Colo., developed the Kepler flight system and supports mission operations with the Laboratory for Atmospheric and Space Physics at the University of Colorado in Boulder. The Space Telescope Science Institute in Baltimore archives, hosts and distributes Kepler science data. Kepler is NASA's 10th Discovery Mission and is funded by NASA's Science Mission Directorate at the agency's headquarters in Washington.
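The two quoted periods sit close to a 7:6 ratio, and a little of Kepler's third law shows why such similar periods imply such a small gap between orbits. The star's mass is not given in this release, so the sketch below assumes a roughly solar-mass star; under that assumption the computed gap lands near the quoted 1.2 million miles:

```python
import math

GM_STAR = 1.327e20   # m^3/s^2; assumes a roughly solar-mass star
DAY, MILE, AU = 86400.0, 1609.34, 1.496e11

def semi_major_axis(period_days):
    """Kepler's third law: a^3 = GM * T^2 / (4 pi^2)."""
    t = period_days * DAY
    return (GM_STAR * t**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

a_b = semi_major_axis(13.8)   # Kepler-36b
a_c = semi_major_axis(16.2)   # Kepler-36c

print(f"Period ratio: {16.2 / 13.8:.3f} (7/6 = {7 / 6:.3f})")
print(f"Orbit radii: {a_b / AU:.3f} AU and {a_c / AU:.3f} AU")
print(f"Gap between orbits: {(a_c - a_b) / MILE / 1e6:.2f} million miles")
```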
During the last 25 years, there has been debate about the value of corporate social responsibility (CSR), particularly as it relates to the rise of “ethical consumers.” These are shoppers who base purchasing decisions on whether a product’s social and ethical positioning — for example, its environmental impact or the labor practices used to manufacture it — aligns with their values. Many surveys purport to show that even the average consumer is demanding so-called ethical products, such as fair trade–certified coffee and chocolate, fair labor–certified garments, cosmetics produced without animal testing, and products made through the use of sustainable technologies. Yet when companies offer such products, they are invariably met with indifference by all but a select group of consumers.

Is the consumer a cause-driven liberal when surveyed, but an economic conservative at the checkout line? Is the ethical consumer little more than a myth? Although many individuals bring their values and beliefs into purchasing decisions, when we examined actual consumer behavior, we found that the percentage of shopping choices made on a truly ethical basis proved far smaller than most observers believe, and far smaller than is suggested by the anecdotal data presented by advocacy groups.

The trouble with the data on ethical consumerism is that the majority of research relies on people reporting on their own purchasing habits or intentions, whether in surveys or through interviews. But there is little if any validation of what consumers report in these surveys, and individuals tend to dramatically overstate the importance of social and ethical responsibility when it comes to their purchasing habits. As noted by John Drummond, CEO of Corporate Culture, a CSR consultancy, “Most consumer research is highly dubious, because there is a gap between what people say and what they do.” The purchasing statistics on ethical products in the marketplace support this assertion: most of these products have attained only niche market positions.

The exceptions tend to be relatively rare circumstances in which a multinational corporation has acquired a company with an ethical product or service, and invested in its growth as a separate business, without altering its other business lines (or the nature of its operations). For example, Unilever’s purchase of Ben & Jerry’s Homemade Inc. allowed for the expansion of the Ben & Jerry’s ice cream franchise within the United States, but the rest of Unilever’s businesses remained largely unaffected. Companies that try to engage in proactive, cause-oriented product development often find themselves at a disadvantage: Either their target market proves significantly smaller than predicted by their focus groups and surveys, or their costs of providing ethical product features are not covered by the prices consumers are willing to pay. (For a different perspective on these issues, see “The Power of the Post-Recession Consumer,” by John Gerzema and Michael D’Antonio, s+b, Spring 2011.)

To understand the true nature of the ethical consumer, we set up a series of generalized experimental polling studies over nearly 10 years that allowed us to gather the social and ethical preferences of large samples of individuals. We then conducted 120 in-depth interviews with consumers from eight countries (Australia, China, Germany, India, Spain, Sweden, Turkey, and the United States).
We asked them not just to confirm that they might purchase a product, but to consider scenarios under which they might buy an athletic shoe from a company with lax labor standards, a soap produced in ways that might harm the environment, and a counterfeit brand-name wallet or suitcase. They were also asked how they thought other people from their country might respond to these products — a well-established “projective technique” that often reveals more accurate answers than questions about the respondent’s direct purchases. And they were asked about their own past behavior; for example, all the interviewees admitted purchasing counterfeit goods at some point. The interviews asked participants explicitly about the ramifications of these ethical issues, and the inconsistencies between their words and their actions.
History of the Red Mass

The "Red Mass" is a historical tradition within the Catholic Church dating back to the thirteenth century, when it officially opened the term of the court in most European countries. The first recorded Red Mass was celebrated in the Cathedral of Paris in 1245, and from there the practice spread across Europe. Around 1310, during the reign of Edward I, the tradition began in England, with the Mass offered at Westminster Abbey at the opening of the Michaelmas term. The Mass received its name from the fact that the celebrant was vested in red and the Lord High Justices were robed in brilliant scarlet; they were joined by university professors, the doctors among them displaying red in their academic gowns. The Red Mass has also traditionally been identified with the opening of the Sacred Roman Rota, the supreme judicial body of the Catholic Church.

In the United States, the first Red Mass occurred in New York City on October 6, 1928, celebrated at Old St. Andrew's Church with Cardinal Patrick Hayes presiding. Today, well over 25 cities in the United States celebrate the Red Mass each year, with not only Catholic but also Protestant and Jewish members of the judiciary and legal profession attending. One of the better-known Red Masses is celebrated each fall at the Cathedral of St. Matthew the Apostle in Washington, D.C. It is attended by Justices of the Supreme Court, members of Congress, the diplomatic corps, the Cabinet, and other government departments and, sometimes, the President of the United States. All officials attend in their capacity as private individuals, rather than as government representatives, in order to prevent any issues over the separation of church and state.

For the most part, the Red Mass is like any other Roman Catholic Mass. A sermon is given, usually with a message that has an overlapping political and religious theme, and the Mass is an opportunity for the Catholic Church to express its goals for the coming year. One significant difference between the Red Mass and a traditional Mass is that the prayers and blessings focus on the leadership roles of those present and invoke divine guidance and strength for the coming term of court. The Mass is celebrated in honor of the Holy Spirit as the source of wisdom, understanding, counsel and fortitude, gifts which shine forth preeminently in the dispensing of justice in the courtroom as well as in the individual lawyer's office.
Reading Classic Literature

Classic literature, even though it was written fifty or a hundred years ago, still has the power to affect readers. The gift of literature to educate and inspire transcends time. Unfortunately, not everyone likes to read classic literature; sometimes you have to be mature enough to enjoy and comprehend these writings. Although we often read classic literature because we have to write a report for school, we can also read it for enjoyment.

You may have heard of famous authors of classic novels on television and the internet; you can check out their writings and their books. If you really want to get into the habit of reading classic literature, you can start by reading 30 minutes every day. Keep a dictionary near you when reading classic novels, since the vocabulary is often difficult or the meanings of words have changed over time. To better understand the setting and the plot of the story, you can do a little background research on the era or time period in which it is set. You can also research the background of the author.

You really have to follow the structure of the story. Most classic literature has complex storylines and plots, which can make the story hard to follow, and the character development is often extensive. Seeing the overall theme of the story is very important, as is following the basic development of the characters and their story.

There are literature companions that you can buy to help you get started with classic literature; one example is the "Oxford Companion to Classical Literature." Another key to understanding classic literature is making use of the footnotes. These works are full of footnotes that reference the social and cultural elements of their time.
Many of us act as though we all see the same reality, yet the truth is we don't. Human beings have cognitive biases, or blind spots. Blind spots are ways that our mind becomes blocked from seeing reality as it is - blinding us to the real truth about ourselves in relation to others. Once we form a conclusion, we become blind to alternatives, even if they are right in front of our eyes. Emily Pronin, a social psychologist, along with colleagues Daniel Lin and Lee Ross at Princeton University's Department of Psychology, coined the term "bias blind spot," named after the visual blind spot.

Passing the Ball

There is a classic experiment that demonstrates one level of blind spots, attributable to awareness and focused attention. When people are instructed to count how many passes the players in white shirts make on the basketball court, they often get the number of passes correct, but fail to see the person in the black bear suit walking right in front of their eyes. Hard to believe, but true!

Blind Spots & Denial

The story of blind spots gets more interesting, however, when we factor in the cognitive biases that come from our social need to look good in the eyes of others. When people operate with blind spots, coupled with a strong ego, they often refuse to adjust their course even in the face of opposition from trusted advisors, or incontrovertible evidence to the contrary. Two well-known examples of blind spots are Henry Ford and A&P.
Electrical research institute selects IPP as participant in carbon dioxide study

The Intermountain Power Project has been selected as one of five electric utilities in the United States and Canada to participate in a study of technology for capturing carbon dioxide emissions from coal-fueled electricity generation facilities.

Conducted by the Electric Power Research Institute, the study will examine the impacts of retrofitting advanced amine-based post-combustion carbon dioxide capture technology onto existing coal-fired power plants, indicated EPRI representatives. As global demand for electricity increases and regulators worldwide look at ways to reduce carbon dioxide emissions, post-combustion capture for new and existing power plants could be an important option. However, retrofitting such systems to an existing plant presents significant challenges, including limited space for new plant equipment, limited heat available for process integration, additional cooling water requirements and potential steam turbine modifications.

"EPRI's analyses have shown carbon capture and storage will be an essential part of the solution if we are to achieve meaningful carbon dioxide emissions reductions at a cost that can be accommodated by our economy," pointed out Bryan Hannegan, vice president of generation and environment at the research institute. "Projects such as this, in which a number of utility companies come forward to offer their facilities and form a collaborative to share the costs of research, are critical to establishing real momentum for the technologies that we will need."

In addition to IPP, power plants in Ohio, Illinois, North Dakota, and Nova Scotia will participate in the project. Each site offers a unique combination of unit sizes and ages, existing and planned emissions controls, fuel types, steam conditions, boilers, turbines, cooling systems and options for carbon dioxide storage, pointed out EPRI representatives.

The study - to be completed during 2009 - will provide the participants with valuable information applicable to their own individual power plants. A report for an individual operation will:

• Assess the most practical carbon dioxide capture efficiency configuration based on site constraints.
• Determine the space required for the carbon dioxide capture technology and the interfaces with existing systems.
• Estimate performance and costs for the post-combustion capture plant.
• Assess the features of the facility that materially affect the cost and feasibility of the retrofit.

"The participants in the Intermountain Power Project are committed to maintaining high environmental standards," said general manager James Hewlet. "This study will help us evaluate options for managing the emissions of greenhouse gases in the future. It is a meaningful step in our three-decade track record of continually improving the power plant's environmental performance."
Breast-Feeding a Sick Baby

If your baby becomes ill or develops a minor viral illness, such as a cold, flu, or diarrhea, it is best to continue your breast-feeding routine. Breast milk provides your baby with the best possible nutrition.

If your baby is too ill to breast-feed, try cup-feeding. With this technique, you feed your baby collected breast milk. Take your baby to visit a health professional if he or she eats very little or not at all.

Even if your baby does not have much appetite or is in the hospital on intravenous (IV) fluids, use a pump or hand express your milk on your normal schedule. This will help to maintain your milk production until your baby's appetite returns.

By: Healthwise Staff | Last Revised: April 14, 2011
Medical Review: Sarah Marshall, MD - Family Medicine; Kirtly Jones, MD - Obstetrics and Gynecology