The Pangaea Resource Mountain
Commodities / Energy Resources, Dec 13, 2012 - 09:55 AM GMT
The supercontinent Pangaea was, above all, vast, and its formation and later break-up have left enormous resources of hydrocarbons, often located more than 2000 metres below current land surfaces and ocean floors. Plate tectonics helps us understand the processes.
One example of how different the world was about 220 million years ago is India's status as an island separated from the Asian continent by the vast Tethys Sea. When Pangaea broke apart, India began moving north-east away from the future Africa, taking about 170 million years to collide with Asia.
When India rammed into Asia about 50 million years ago, India's northward advance slowed as it folded and buckled the crust into the Himalayan mountain chain, a process that continues today. One trace of this is the rise of Everest and other mountains in the Himalayan chain at about 5-40 mm per year, and their north-eastward drift at about 5 cm a year. How and why the Earth's crust forms, breaks up and then spreads is the focus of geological research and theory, but current research suggests the main driver of tectonic change is mantle plumes, which are the motors of mountain building and continent spreading. The plumes originate as little as 2750 kilometres below the Earth's surface, in large regions of low mantle thickness such as beneath the Pacific ocean floor, at the contact zone where the upper parts of the liquid metal outer core meet the lowermost parts of the semi-fixed rocky mantle, neither of these parts being smooth and spherical.
The probable mechanism is that core spikes channel heat flows to produce plumes of molten rock capable of penetrating the thick mantle and, in the form of volcanoes, reaching the Earth's surface, creating new islands on a regular basis in the Pacific Ocean.
The link with oil and gas is important. In a January 2006 'Scientific American' article, Roger N. Anderson of the Lamont-Doherty Earth Observatory at Columbia University wrote: "Plate tectonics determine the location of oil and gas reservoirs and is the best key we have to understanding why deserts and arctic areas seem to hold the largest hydrocarbon reserves on earth... other important locations of large reserves (are): river deltas and continental margins offshore. Together, these four types of areas hold most of the oil and gas in the world today".
The critical roles of depth, time and heat - rather than the carbon and hydrogen "raw materials", which are ultra-abundant - have until recently been taken as setting relatively fixed and certain amounts, or "endowments", of the hydrocarbon fuel energy resources of the Earth's crust.
Coal, for example, is usually considered a deeper, more compacted and older resource than either oil or gas, but improving knowledge of plate tectonics shatters this easy categorization. New techniques, especially "fracking" or hydraulic fracturing, have also shattered older ideas on what amounts of energy, on a net basis, can be extracted from hydrocarbon resources. Plate tectonics creates the "pressure cooker" that generates coal, oil and gas from organic and inorganic materials, especially their carbon and hydrogen content.
For oil and gas, this process can take from tens of millions to more than 175 million years, and for coal it can take 350 million years, also giving the hydrocarbon deposits plenty of time to move around with tectonic plate spreading. The constant movement of the plates and mantle plume action, as well as rifting, collisions between land masses, and other tectonic forces such as crustal warping and local collapse, can trap these deposits. In the case of oil and gas, they are trapped in what we call oil and gas fields and provinces.
Depending on depth, time and heat and the interactions of these parameters, it is possible to have Permian era natural gas (300-250 million years before present), but also oil from the Triassic, or even the Jurassic era, which occurred more than 100 million years after the Permian. Hard or "old" coals from the Carboniferous can be nearly 350 million years old and are often highly mineralised and metallized, that is, containing heavy, toxic or radioactive metals. Geologically young coals extend to the brown coal and lignite of the Eocene, about 55 million years before present, which can outcrop very close to present land surfaces. "Immature" lignite, called peat, can be less than 50 000 years old and located at the present land surface.
THE VARISCAN-HERCYNIAN OROGENY
For world unconventional shale-based oil and gas resources the key event was what is called the Variscan-Hercynian orogeny, dating to a mega-change in Earth tectonics at the end of the Paleozoic era and the start of the Mesozoic era, around 240-220 million years before present.
The processes involved were tectonic plate movements, as well as horizontal, vertical and shearing orogenic action. This orogeny produced or modified mountain ranges from the Alleghenies of the USA and the Atlas mountains of Morocco to the European Alps, the Urals, the Pamir, the Tian Shan and other east Asian fold belts, as well as causing massive Atlantic and Pacific seafloor spreading. Probably all present continents were affected. Very large and geologically rapid seafloor spreading also occurred outside the Atlantic and Pacific, over several tens of millions of years.
One clear example of the link between this orogeny and conventional, and later unconventional, oil and gas resources occurred in the future Europe. The orogeny formed and then trapped North Sea oil and gas resources, along the feature which petroleum geologists call the "Iapetus suture", or joint. This occurred in the late Caledonian phase of the Variscan/Hercynian global orogeny, and this volcanic fault zone in the central North Sea region is the major hydrocarbon-bearing zone of the oil and gas province.
In addition, as with most other oil and gas provinces, the North Sea conventional oil-gas zone sits above older phases of the Variscan/Hercynian orogeny, making it certain that the North Sea region has large unconventional oil and gas reserves, though often at considerably deeper levels. Some of these reserves can already be extracted, including at Total's Elgin-Franklin fields, which suffered a blowout in early 2012 and which operate at around 5500 metres depth and 600-1100 bar pressure, with a typical fluid temperature of around 193 degrees C.
This could be considered "young" or "fresh volcanic" oil and gas, dating from as recently as 130 million years before present. Above all, the so-called source rocks for North Sea oil and gas resources were formed during the Variscan/Hercynian orogeny, and this orogeny had global reach.
This orogeny generated massive hydrocarbon-bearing areas almost worldwide, and certainly in all current oil and gas provinces. Most of these resources are deep-located and need fracking plus horizontal drilling to extract the oil and gas. Well-explored shale oil deposits, either in production or in many cases moving fast to recoverable reserve status, include deposits in the western United States, Australia, Sweden, Germany, France, Estonia, Poland, Jordan, Turkey, Brazil, Argentina, China, and Russia. This is not an exhaustive list, but it is based on very approximate analysis of deposits able to yield at least 40 litres of liquid hydrocarbons (a quarter of a barrel) per metric ton of shale. On this basis, and including only the above land-based deposits, a first estimate is about 5 trillion barrels of "potentially recoverable" shale oil.
One of the simplest ways to understand why these figures are massive relative to conventional oil, gas and coal reserves is the thickness of the Earth's crust involved. Unlike conventional oil resources and recoverable reserves, located in stratigraphic zones only one or a few hundred metres thick, shale oil and gas resources are - in large areas of the world - often found in zones that are 500-1500 metres thick. Worldwide remaining conventional oil reserves are of course controversial, and already include some unconventional resources, but the total can be considered as about 1.25 trillion barrels.
At least 25%-33% of these conventional oil resources are located offshore, with the offshore component constantly growing, making it reasonable to include potentially recoverable offshore unconventional shale oil resources for comparison. Extending the shale oil resource to include offshore resources, the roughly 5 trillion barrel onshore estimate given above can easily be extended to at least 7.5 trillion barrels of potentially recoverable oil. Present world consumption of oil is about 31 billion barrels per year.
Again due to geological fundamentals, especially the vertical thickness of gas-bearing shale-dominant strata, which is greater than that of shale oil strata, the oil equivalent of world potentially recoverable shale gas resources is probably far above 5-7.5 trillion barrels of oil equivalent, and possibly 10 trillion boe. Present world total consumption of natural gas is about 24 billion barrels of oil equivalent per year.
Unconventional coal resources - "deep coal", meaning coal seams and strata located at depths of more than 500 metres and up to 3000 metres, which are currently impossible to utilise - probably total at least 25 trillion barrels of oil equivalent on a highly conservative basis. Some estimates go well above 150 trillion barrels of oil equivalent. Present world total consumption of energy coal is about 30 billion barrels of oil equivalent per year.
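As a rough way to see why these estimates dwarf current demand, the sketch below simply divides the article's own resource figures by its stated annual consumption figures. It is a back-of-the-envelope illustration only: the inputs are the article's round estimates, and the ratio ignores net-energy losses and how quickly the resources could actually be extracted.

```python
# Back-of-envelope check of the article's own round figures:
# implied years of supply = estimated recoverable resource / current annual use.
# These are the article's estimates, not independent data.

resources_billion_boe = {
    "shale oil (onshore + offshore)": 7_500,    # ~7.5 trillion barrels
    "shale gas": 10_000,                        # ~10 trillion boe
    "deep coal (conservative case)": 25_000,    # ~25 trillion boe
}

annual_use_billion_boe = {
    "shale oil (onshore + offshore)": 31,       # world oil use, ~31 billion bbl/yr
    "shale gas": 24,                            # world gas use, ~24 billion boe/yr
    "deep coal (conservative case)": 30,        # world energy-coal use, ~30 billion boe/yr
}

for fuel, resource in resources_billion_boe.items():
    years = resource / annual_use_billion_boe[fuel]
    print(f"{fuel}: roughly {years:.0f} years at current consumption")
```

On these figures the implied ratios run to several centuries for each fuel, which is the sense in which the article speaks of a "resource mountain"; the net-energy and pricing caveats discussed below still apply.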
THE RESOURCE MOUNTAIN
Only since about 2005 has the world become aware of what plate tectonics and "unconventional oil and gas" mean for global hydrocarbon resources and potentially recoverable reserves. To be sure, the "net energy" of shale hydrocarbons tends to be lower, sometimes a lot lower, than that of conventional coal, oil and gas production, but as the US shale gas revolution already clearly shows, it is possible to produce cheap gas energy, and output can be sustained and increased, probably for several decades ahead.
Other advantages of shale-based hydrocarbons include the generally much lower level of contaminants in the extracted resource, that is "light sweet" crude and low sulphur gas. In situ extraction of gas from deep coal seams and strata (sometimes several hundred metres thick, over areas as large as 100 square kilometres) can avoid the multiple and dangerous contaminants found in "old" coals.
Still largely ignored at present, but as shown by Total's Elgin-Franklin ultra-deep field and similar oil and gas production fields, the extremely high temperatures of the fluids produced point to a future for geothermal energy. Temperatures near 200 degrees C enable the efficient operation of an increasing range of equipment, such as that used in OTEC (ocean thermal energy conversion) power plants. Even existing technology permits this "post-hydrocarbon" geothermal energy production, the main problems being sustaining heat output and then utilising the heat for economic benefit.
For the present and near-term future decades, with remaining high dependence on hydrocarbon energy, the Pangaea Resource Mountain totally changes the outlook for coal, oil or gas depletion and shortage. The only economic question concerns the pricing of this energy bounty.
(Shortened version of an article to appear in 'Energy and Environment' journal, 2013)
By Andrew McKillop
Former chief policy analyst, Division A Policy, DG XVII Energy, European Commission.

Andrew McKillop Biographic Highlights
Co-author 'The Doomsday Machine', Palgrave Macmillan USA, 2012
Andrew McKillop has more than 30 years of experience in the energy, economics and finance domains. Trained at University College London, UK, he has especially long experience of energy policy, project administration, and the development and financing of alternative energy. This included his role as in-house Expert on Policy and Programming at DG XVII-Energy of the European Commission, Director of Information at AREC, the OAPEC technology transfer subsidiary, and researcher for UN agencies including the ILO.
© 2012 Copyright Andrew McKillop - All Rights Reserved

Disclaimer: The above is a matter of opinion provided for general information purposes only and is not intended as investment advice. Information and analysis above are derived from sources and utilising methods believed to be reliable, but we cannot accept responsibility for any losses you may incur as a result of this analysis. Individuals should consult with their personal financial advisor.
© 2005-2013 http://www.MarketOracle.co.uk - The Market Oracle is a free daily financial markets analysis and forecasting online publication.

Source: http://www.marketoracle.co.uk/Article38035.html
A Reference Resource
The Nixon administration marked the end of America's long period of post-World War II prosperity and the onset of a period of high inflation and unemployment, known as "stagflation." Unemployment was unusually low when Nixon took office in January 1969 (3.3 percent), but inflation was rising. Nixon adopted a policy of monetary restraint to cool what his advisers saw as an overheating economy. "Gradualism," as it was called, placed its hopes in restricting the growth of the money supply to rein in the economic boom that occurred during Lyndon Johnson's last year in office.
But gradualism, as its name implied, did not produce quick results. As the congressional election year of 1970 began, Nixon, according to Haldeman's diary, repeatedly asked the chairman of his Council of Economic Advisers "to explain why we hadn't solved the inflation problem." The President also said "that he never heard of losing an election because of inflation, but lots were lost because unemployment or recession. Point is, he's determined not to let the war on inflation get carried to the point that it will lose us House or Senate seats in November." Political concerns would play an overriding role in the economic decisions of Nixon's first term.
Nixon's fears proved well-founded. By the end of 1970, unemployment had risen to the politically damaging level of 6 percent. In that year, Nixon appointed his chief economic adviser, Arthur Burns, chairman of the Federal Reserve; Burns quickly asserted his independence by giving the President an ultimatum: if Nixon failed to hold federal spending under $200 billion, Burns would continue to keep the money supply tight to fight inflation. Nixon acceded to Burns's demands. To save money, he delayed pay raises to federal employees by six months. One result was a strike by the nation's postal workers. Although Nixon used the U.S. Army to keep the postal system going, he ultimately yielded to the postal workers' wage demands, undoing some of the budget-balancing that Burns demanded.
Nixon found himself entering the congressional campaign season faced with unemployment, inflation, and Democratic demands for an "incomes policy" to check spiraling prices and wages. Some called for wage and price controls. In the fall, Republicans picked up two seats in the Senate but lost nine in the House, a development that Nixon blamed on the economy.
The economy continued to deteriorate. By the middle of 1971, unemployment reached 6.2 percent while inflation raged unchecked. Nixon decided his administration needed a single economic spokesman and tapped Treasury Secretary John Connally as its mouthpiece. Connally made sweeping statements about the President's intentions: "Number one, he is not going to initiate a wage-price board. Number two, he is not going to impose mandatory price and wage controls. Number three, he is not going to ask Congress for any tax relief. And number four, he is not going to increase federal spending."
Within a matter of weeks, the Treasury Secretary and the President would reverse course. In August 1971, Nixon gathered all of his economic advisers at Camp David and emerged with a New Economic Policy that stood the old one on its head. The NEP violated most of Nixon's long-held economic principles, but he was never one to let principle stand in the way of politics, and his dramatic turnaround on economic issues was immediately and enormously popular. One participant in the Camp David meeting, Herb Stein, thought the assemblage of advisers "acquired the attitude of scriptwriters preparing a TV special to be broadcast on Sunday evening." The announcement had to be as dramatic as possible. "After the special," as Stein put it, "regular programming would be resumed."
Nixon came up with a smash hit. He announced a wage-and-price freeze, tax cuts, and a temporary closure of the "gold window," preventing other nations from demanding American gold in exchange for American dollars. To improve the nation's balance of trade, Nixon called for a 10 percent import tax. Public approval was overwhelming.
Nixon then became the beneficiary of some good luck. An economic boom, which began late in 1971, lasted well into the 1972 campaign season, long enough for Nixon to parlay its effects into reelection that November.
The downturn resumed, however, in 1973. Expansive fiscal and monetary policies combined with a shortage of food (aggravated by massive Soviet purchases of American wheat) to fuel inflation. And then came the oil shock. Oil prices were rising even before the onset of the Arab oil boycott in October of 1973. Ultimately, inflation would climb to 12.1 percent in 1974 and help push the economy into recession. When Nixon left office, the economy was in the tank, with rising unemployment and inflation, lengthening gas lines, and a crashing stock market.
Regulation and Social Legislation
"Probably more new regulation was imposed on the economy," wrote Herb Stein, the chairman of Nixon's Council of Economic Advisers, "than in any other presidency since the New Deal."
The federal government took an active role in preventing on-the-job accidents and deaths when Nixon in 1970 signed into law a bill to create the Occupational Safety and Health Administration (OSHA). That same year, rising concern about the environment led him to propose an Environmental Protection Agency (EPA) and a National Oceanic and Atmospheric Administration (NOAA), and to sign amendments to the 1967 Clean Air Act calling for reductions in automobile emissions and the national testing of air quality. Other significant environmental legislation enacted during Nixon's presidency included the 1972 Noise Control Act, the 1972 Marine Mammal Protection Act, the 1973 Endangered Species Act, and the 1974 Safe Drinking Water Act.
Despite this blizzard of legislation, environmentalists found much to criticize in Nixon's record. The President impounded billions of dollars Congress had authorized to implement the Clean Air Act, lobbied hard for the air-polluting Supersonic Transport, and subjected environmental regulation to cost-benefit analyses which highlighted the economic costs of preserving a healthy ecosystem.
Nixon proposed more ambitious programs than he enacted, including the National Health Insurance Partnership Program, which promoted health maintenance organizations (HMOs). He also proposed a massive overhaul of federal welfare programs. The centerpiece of Nixon's welfare reform was the replacement of much of the welfare system with a negative income tax, a favorite proposal of conservative economist Milton Friedman. The purpose of the negative income tax was to provide both a safety net for the poor and a financial incentive for welfare recipients to work.
Nixon also proposed an expansion of the Food Stamp program. His Family Assistance Plan was bold, innovative, even radical, and, apparently, insincere. "About Family Assistance Plan," Haldeman wrote in his diary, the President "wants to be sure it's killed by Democrats and that we make big play for it, but don't let it pass, can't afford it." One part of Nixon's welfare reform proposal did pass and become a lasting part of the system: Supplemental Security Income (SSI) provides a guaranteed income for elderly and disabled citizens. The Nixon years also brought large increases in Social Security, Medicare, and Medicaid benefits.
Watergate

Watergate was so much more than a single crime and cover-up that it is impossible to summarize the tangle of abuses of presidential power that today are grouped under the name of the hotel where the Democratic National Committee had its offices. The arrest of five men in those offices on June 17, 1972, was the first step toward unearthing a host of administration misdeeds. It was to hide those other crimes that Nixon and his men launched the cover-up, the investigation of which helped to unravel that string of illegal conduct.
Indeed, Watergate was far from the first break-in. A year earlier, Nixon had unconstitutionally created his own secret police organization, the Special Investigations Unit, to unearth a conspiracy that he feared would leak some of his most damaging foreign policy secrets, including the secret bombing of Cambodia and Laos. The President, however, could not convince FBI Director J. Edgar Hoover that such a conspiracy actually existed. Nixon also wanted to expose the alleged conspiracy in the press, something the Justice Department could not legally do. He decided he needed his own team to investigate the conspiracy and leak damaging stories about it. Thus was born the SIU, better known by its nickname, "The Plumbers," an inside joke about its mission to fix leaks.
The immediate cause of Nixon's concern was the publication of the Pentagon Papers, a massive study of the Vietnam War as it was conducted by Nixon's predecessors. The study was commissioned by Robert S. McNamara, the secretary of defense under Kennedy and Johnson. It did not contain a word about the Nixon administration but it did reference top secret documents from the two prior presidencies. The leak ignited Nixon's fear that his own politically damaging secrets would be exposed before the 1972 election. He suspected a conspiracy and resolved to destroy it before it destroyed him.
It was to find out more about this imaginary conspiracy that two of the Plumbers, ex-CIA agent E. Howard Hunt and ex-FBI agent G. Gordon Liddy, planned and carried out an operation to discredit Daniel Ellsberg, the man who leaked the Pentagon Papers. Hunt and Liddy burglarized the offices of Ellsberg's psychiatrist, looking for damaging information on the former Pentagon aide and military operative. Hunt recruited the break-in team through members of the Cuban expatriate community in Florida he knew from his time as a CIA agent working on the Bay of Pigs invasion. When some of those same Cuban expatriates were arrested in the Watergate complex, and when it was discovered that Hunt and Liddy were behind the Watergate break-in as well, Nixon sought desperately to cover up their earlier misdeeds.
In fact, it was his concern with those earlier transgressions that gave rise to a post-Watergate political axiom: that the cover-up of the crime can be more damaging than the crime itself. Nixon's creation of a secret police organization without congressional authorization—one that carried out an illegal break-in without a warrant, no less— would ultimately become a basis for one of the articles of impeachment brought against him by the House Judiciary Committee. As Howard Hunt would put it in an angry memo as his prosecution moved forward, "The Watergate bugging is only one of a number of highly illegal conspiracies engaged in by one or more of the defendants at the behest of senior White House officials. These as yet undisclosed crimes can be proved." Nixon's chief aide, Bob Haldeman, was caught on tape alluding to this very issue:
Haldeman: The problem is that there are all kinds of other involvements and if they started a fishing thing on this they're going to start picking up other tracks. That's what appeals to me about trying to get one jump ahead of them and hopefully cut the whole thing off and sink all of it. [June 21, 1972, quoted in Stanley Kutler's Abuse of Power]
The hope of cutting off the investigation led, two days later, to the conversation between Nixon and Haldeman that became known as the "smoking gun" tape when the White House released it under court order in 1974. In this exchange, Nixon decided to have the CIA tell the FBI to, in Haldeman's words, "Stay the hell out of this." Nixon suggested that the CIA say that "the problem is that this will open the whole, the whole Bay of Pigs thing, and the President just feels that, ah, without going into the details—don't, don't lie to them to the extent to say no involvement, but just say this is a comedy of errors, without getting into it, the President believes that it is going to open the whole Bay of Pigs thing up again."
The White House managed to prevent Watergate's political fallout from affecting the 1972 election. But Nixon had hardly begun his second term when the dam broke. In February 1973, L. Patrick Gray, Nixon's nominee to succeed the late J. Edgar Hoover as head of the FBI, revealed during his confirmation hearings that he had allowed John W. Dean, a White House legal counsel, to sit in on FBI interviews of Watergate suspects. Nixon refused to allow Dean to testify before the Senate Watergate committee chaired by Sam Ervin (D-N.C.), citing the doctrine of executive privilege. Gray's nomination was all but dead. "Let him hang there," John D. Ehrlichman said memorably. "Let him twist slowly, slowly, in the wind."
At the sentencing hearing for the Watergate burglars on March 23, 1973, Judge John J. Sirica read a letter from James McCord, an ex-CIA man and the security chief for Nixon's reelection campaign until his arrest in the burglary. The letter made four points:
1. There was political pressure applied to the defendants to plead guilty and remain silent.
2. Perjury occurred during the trial of matters highly material to the very structure, orientation, and impact of the government's case, and to the motivation and intent of the defendants.
3. Others involved in the Watergate operation were not identified during the trial, when they could have been by those testifying.
4. The Watergate operation was not a CIA operation. The Cubans may have been misled by others into believing that it was a CIA operation. I know for a fact that it was not.
The investigation began to close in on Dean, who, unbeknownst to the President, decided to turn state's evidence. By the end of April, Nixon announced the resignations of Dean, Haldeman, and Ehrlichman. The next month brought the Senate Watergate hearings, televised and widely watched. As witness after witness revealed more details about scandals old and new, Nixon's approval rating sank like a stone. One witness, Alexander P. Butterfield, a former Haldeman aide who then headed the Federal Aviation Administration, revealed the existence of Nixon's White House taping system. Objective evidence existed to determine who was telling the truth—the White House or its accusers.
Watergate special prosecutor Archibald Cox subpoenaed the tapes. In October, Nixon fired Cox, a move that prompted the resignations of Attorney General Elliot Richardson and his deputy, William Ruckelshaus. The "Saturday Night Massacre," as it quickly became known, backfired on Nixon. The outrage it inspired among the American public led him to reverse course and agree to turn over the tapes to Judge Sirica. A new special prosecutor, Leon Jaworski, was appointed and subpoenaed 64 more tapes, including the June 23, 1972, "smoking gun" tape. Jaworski took the case all the way to the Supreme Court, which voted 8-0 to uphold the subpoena. With the release of the tapes, the bottom fell out of Nixon's political support. Senator Barry Goldwater, the conservative leader, told the President that at most 18 senators might vote against his conviction on the articles of impeachment—too few to save him. The Nixon presidency was over.
Nixon announced his resignation on August 8, 1974, to take effect at noon the next day.
In his inaugural address, incoming President Gerald R. Ford declared, "Our long national nightmare is over." One month later, he granted Richard Nixon a full pardon.

Source: http://millercenter.org/president/nixon/essays/biography/4
New Orleans: A background
In the Eighteenth Century

The city of New Orleans was founded in 1718 by the French Mississippi Company, which named it after the French Regent of the period, Philippe d'Orleans. The French colony came under Spanish control for a brief spell, from the 1763 Treaty of Paris until 1801. Much of the famed Vieux Carre architecture can be traced back to this period.
Napoleon sold the New Orleans territory to the United States in the Louisiana Purchase of 1803, thus severing ties to the Old World. However, continued immigration from Germany, the Americas, Ireland, and France replenished the population and ensured a flourishing cultural mix. Sugar and cotton were produced in nearby plantations reliant on slaves, guaranteeing a degree of prosperity for the region.
In the Nineteenth Century

The Haitian Revolution of 1804 brought about the creation of the first black republic in the Western hemisphere. For New Orleans, this meant a larger French-speaking population, as many seeking change settled here after escaping the Caribbean island. The Boston Globe has since named it the 'Northernmost Caribbean City,' alluding to its past and cultural affinity with the islands.
A British attempt to seize the city was repelled in 1815 at the Battle of New Orleans. The city played a key role in the Atlantic slave trade, given its position as a principal port. Nevertheless, a large and prosperous community of free persons co-existed with the slave trade, with a blossoming educated middle class. The Mississippi river served to distribute all the commodities passing through New Orleans, so that in 1840 the city was the wealthiest and third most populous in the nation. Meanwhile, its early capture in the American Civil War meant that it was spared the destruction wrought on other southern cities.
By the 1850s, French instruction in schools had been forbidden by the Union, much to the despair of the Creole elite. By the end of the nineteenth century, there had been a dramatic reduction in French usage across the city. The French religious legacy shows little sign of abating: though neighboring cities belong to the Protestant Bible belt, New Orleans retains its Catholic identity.
New Orleans has long harbored historic figures: in 1872 P.B.S. Pinchback became the first non-white governor of a US state, a feat not to be repeated for another century.
In the Twentieth Century

Soon after, burgeoning industrialization stripped New Orleans of its economic position as the Mid-West sped ahead with manufacturing. The nature of the fast-evolving US economy complicated growth in the city; its role in the south was increasingly jeopardized by new manufacturing powerhouses in the Sun Belt. At the same time, the city was christened 'The Big Easy,' as musicians found opportunities there with ease: cultural richness found its home even as big industry moved elsewhere. Notable residents or natives include literary giants such as Tennessee Williams, Truman Capote and William Faulkner, and jazz heroes Louis Armstrong and Noah Howard.
New Orleans again attracted attention as a central focus of the civil rights struggle, with arguably the most significant twentieth century development being the enfranchisement of the black population and the desegregation of the schooling system. Despite apparent legal equality between ethnic groups at the end of the 1960s, the gap in attainment and income stubbornly persists even today.
It was during the twentieth century that tourism became pivotal to the city's economic success. However, warning signs of the city's vulnerability and susceptibility to storms began to surface: in 1965 Hurricane Betsy killed dozens of residents. On several occasions, flooding threatened livelihoods, though nobody could fully anticipate the impact of Katrina.
New Orleans in the Twenty-First Century - Hurricane Katrina

Hurricane Katrina was the costliest hurricane in United States history, and its fatalities place it among the five deadliest storms ever to have hit US territory. At least 1,836 people lost their lives in the 2005 hurricane and subsequent floods, with untold further damage to livelihoods and property. Total property damage was estimated at $81 billion in 2005. Neighboring states also suffered losses, notably Mississippi with 238 dead.
Hurricane Katrina formed over the Bahamas on August 23, 2005, crossing southern Florida as a moderate Category 1 hurricane and causing some deaths and flooding there before strengthening rapidly in the Gulf of Mexico. Its second landfall was as a Category 3 storm on the morning of Monday, August 29 in southeast Louisiana, and it wrought severe destruction along the Gulf coastline. The tragic loss of life suffered in New Orleans, Louisiana, can be principally attributed to the gross failure of the levee system, which collapsed as the hurricane moved inland. Eighty percent of the city was flooded, with floodwaters lingering for weeks, carrying disease and further contamination. The worst property damage occurred in coastal areas, including Mississippi beachfront towns, over 90% of which were flooded within hours, with waters reaching 10–19 km inland from the beach. Boats and cargo ships docked along the coast were rammed further inland, flattening houses and destroying all in their wake.
The devastating failures in protecting the city of New Orleans prompted a lawsuit against the US Army Corps of Engineers (USACE), the builders of the levee system under the Flood Control Act of 1965. The Army Corps was assigned the blame and responsibility for the damage in January 2008, though the federal agency could not be held financially liable due to the sovereign immunity granted in the Flood Control Act of 1928. An investigation of the responses from federal, state and local governments resulted in the resignation of Federal Emergency Management Agency (FEMA) director Michael D. Brown and of New Orleans Police Department (NOPD) Superintendent Eddie Compass. Conversely, the United States Coast Guard (USCG), National Hurricane Center (NHC) and National Weather Service (NWS) were widely applauded for their actions and accurate forecasts.
Thousands of displaced residents in Mississippi and Louisiana were still living in temporary accommodation five years after the disaster struck. Reconstruction of each section of the southern portion of Louisiana is being addressed in the Army Corps' LACPR Final Technical Report, which identifies areas not to be rebuilt and areas and buildings that need to be elevated.
The Aftermath of Hurricane Katrina
In New Orleans, the deadliest aspect of the storm was the storm surge, which led to 53 breaches in the federally built levee system that had been put in place to protect metro New Orleans. The water that tore through the breaches flooded approximately 80% of the city. This water rose to the rooftops of the poorest neighborhoods, while the hurricane's howling winds tore buildings apart.
On a human level, Katrina displaced over one million people from the central Gulf coast to elsewhere in the United States. Though many have returned, five years later thousands of displaced residents were still living in temporary accommodation. Crime was also rampant in post-Katrina New Orleans. Looting, in many cases the result of the desperation of people who had no other way of finding food or water, was rife. The crime rate increased not only in New Orleans but also in cities that took in large numbers of refugees, such as Houston. Federal disaster declarations covered 90,000 square miles, and an estimated three million people were left without access to electricity.
Economically, Katrina was equally devastating. The Bush Administration requested a minimum of $105 billion for repairs and reconstruction in the region, and it is estimated that the total economic impact in Louisiana and Mississippi may exceed $150 billion. Hundreds of thousands of residents were left unemployed, and the forestry and agricultural industries in the area were heavily affected. One of the most damaging economic effects was the destruction of 30 oil platforms and the closure of 9 refineries in the Gulf of Mexico. This was a blow not only to the economy but also to the environment, as resulting oil spills from 44 facilities in the Gulf led to over 7 million US gallons of oil leaking into the sea.
The damage resulting from Hurricane Katrina proved unfathomable. Millions have been affected, and the region's economy and environment received a powerful beating. The reconstruction of the city has been a long, expensive, and painful process that still continues today.
Recovery from Katrina and the current situation

As of March 2007, the city's population was approximately 60% of what it was before Katrina struck, a new total of 274 000. Areas that suffered no flooding have in some cases exceeded their pre-Katrina population, showing renewed energy and vigor in the city and hope for its future. The 2006/07 season welcomed back college football fixtures, signaling a return to normality.
BP's disastrous oil spill in the Gulf of Mexico has created a further recovery issue in the region. The spill, though now finally contained, has seriously damaged two of the most significant contributors to the southeastern Louisiana economy: oil and fishing. It is unclear what the long-term impact may be. Unfortunately, this year's hurricane season promises to be active according to leading US meteorologists. Several days after Hurricane Alex made landfall earlier this year, tarballs started turning up in Lake Pontchartrain, which flanks the northern side of New Orleans.
Interestingly, a thick slick of oil on the sea could potentially limit a hurricane by acting as a shield, preventing water from evaporating into the storm. Piers Chapman, an oceanographer at Texas A&M University, discusses the possibility of strong winds scattering the oil and hastening its evaporation. That means little, sadly, given the scale of the environmental catastrophe. BP is still working on drilling two relief wells to siphon off the oil that is still gushing out.
On a more positive note, the famed New Orleans festivals were never cancelled: Mardi Gras and the Jazz and Heritage Festival continued to stand their ground, despite the catastrophic effects unleashed by the storm. In 2007 a new festival was added to the repertoire: 'Running of the Bulls New Orleans!' Tourism continues to support the city, with over 31 000 rooms offered in over 140 hotel establishments in May 2007. The city continues to open its arms to an average of ten million visitors annually.
A 2009 Travel + Leisure poll of "America's Favorite Cities" ranked New Orleans top in ten categories, a feat matched by no other city in the USA. According to the poll, New Orleans ranks top as a spring break destination and for "wild weekends," stylish boutique hotels, cocktail hours, singles/bar scenes, live music/concerts and bands, antique and vintage shops, cafés/coffee bars, neighborhood restaurants, and people-watching.

Source: http://www.culturaldiplomacy.org/academy/index.php?en_conferences_floodwall_background
This topic provides
information about asthma in teens and adults. If you are looking for
information about asthma in children age 12 and younger, see the topic
Asthma in Children.
Asthma causes swelling and
inflammation in the airways that lead to your lungs.
When asthma flares up, the airways tighten and become narrower. This keeps the
air from passing through easily and makes it hard for you to breathe. These
flare-ups are also called asthma attacks or exacerbations (say "ig-zas-er-BAY-shuns").
Asthma affects people in different ways. Some people only have asthma attacks during
allergy season, or when they breathe in cold air, or when they exercise. Others
have many bad attacks that send them to the doctor often.
Even if you have few asthma attacks, you still need to treat your asthma. The swelling
and inflammation in your airways can lead to permanent changes in your airways
and harm your lungs.
Many people with asthma live active, full
lives. Even though asthma is a lifelong disease, treatment can control it and
keep you healthy.
Experts don't know exactly
what causes asthma. But there are some things we do know:
Symptoms of asthma can be
mild or severe. You may have mild attacks now and then, or you may have severe
symptoms every day. Or you may have something in between. How often you have
symptoms can also change. When you have asthma, you may:
Your symptoms may be worse at night.
Severe asthma attacks can be life-threatening and need emergency treatment.
Along with doing a
physical exam and asking about your health, your doctor may order lung function
tests. These tests include:
You will need routine checkups with your doctor to keep
track of your asthma and decide on treatment.
There are two parts to treating
asthma, which are outlined in your asthma action plan. The goals are to:
If you need to use the quick-relief inhaler more often
than usual, talk to your doctor. This is a sign that your asthma is not
controlled and can cause problems.
Asthma attacks can be
life-threatening, but you may be able to prevent them if you follow a plan.
Your doctor can teach you the skills you need to use your asthma action plan.
You can prevent some asthma attacks by avoiding those things that cause them. These are
called triggers. A trigger can be:
The cause of asthma isn't known. Health experts believe that inherited, environmental, and immune system factors combine to cause inflammation of the airways. This can lead to asthma and asthma symptoms.
Asthma may run in families (be inherited). If
this is the case in your family, you may be more likely than other people to
get long-lasting (chronic) inflammation in the airways.
In some people, an allergic reaction causes asthma symptoms. An allergen makes the immune system cells release chemicals that cause inflammation.
Studies show that exposure to
allergens such as
dust mites, cockroaches, and
animal dander may influence asthma's
development.1 Asthma is much more common in people
with allergies, although not all those who have allergies get asthma. And not
all people with asthma have allergies.
Environmental factors and
today's germ-conscious lifestyle may play a role in the development of asthma.
Some experts believe that there are more cases of asthma because of pollution
and less exposure to certain types of bacteria or infections.2 As a result, children's immune systems may develop in a way
that makes it more likely they will also get allergies and asthma.
Asthma in adults also can be related to work. This is called occupational asthma.
Symptoms of asthma can be mild or severe. You may have no
symptoms, severe symptoms every day, or something in between. How often you have
symptoms can also change. Symptoms of asthma may include:
An asthma attack occurs when your symptoms suddenly increase. Factors that can lead to an asthma attack or make it worse include:
Many people have symptoms that become worse at night
(nocturnal asthma), such as cough and shortness of breath.
In general, waking at night because of shortness of breath or a cough is a sign of
poorly controlled asthma.
Asthma usually begins during infancy or childhood, but it can start at any age. It may last throughout your life.
At times, the inflammation from asthma causes a narrowing of your airways and increased mucus production. This causes asthma symptoms such as shortness of breath.
Your airways narrow when they overreact to certain substances. These are known as asthma triggers. What triggers asthma symptoms varies from person to person.
When asthma symptoms suddenly occur, it is called an asthma attack (also called a flare-up or exacerbation). Asthma attacks can occur rarely or frequently. They may be mild to severe.
Although some asthma attacks occur very suddenly, many become worse
gradually over a period of several days. In general, you can take care of
symptoms at home by following your
asthma action plan. A severe attack may
need emergency treatment and in rare cases can be fatal.
Asthma is classified as intermittent, mild persistent, moderate persistent, and severe persistent.
Asthma can raise your risk for complications from lung infections, such as acute bronchitis and pneumonia.
Even mild asthma may cause changes to the airway
system. It may speed up and worsen the natural decrease
in lung function that occurs as we age.3
Some experts believe that asthma may
raise your risk for chronic obstructive pulmonary disease (COPD).4
Asthma can occur for the first time during pregnancy, or it may change during pregnancy.
When asthma is properly controlled, a
woman can have a normal pregnancy with little or no increased risk
to herself or the baby. But if the asthma isn't well controlled, there
are risks to the pregnant woman and the baby.
Many things can increase
your risk for
asthma. Some of these are not within your control. Others you can control.
The main things that put you at risk for getting asthma as an adult are ongoing (chronic) wheezing when you were a child and cigarette smoking.
Triggers that may make asthma worse and may lead to
asthma attacks include:
Experts aren't yet sure:
Call 911 or other emergency services right away if:
Call your doctor now or seek immediate medical care if:
Call your doctor if:
If you have not been diagnosed with asthma but have mild
asthma symptoms, call your doctor and make an appointment for an evaluation.
If your teenager has symptoms of asthma, it is
important to see a doctor. Many teens with frequent wheezing may
have asthma but aren't diagnosed with the disease. Teens who have asthma but
are less likely to be diagnosed are most often:14
Watchful waiting is a "wait and see" approach.
Watchful waiting may be
appropriate if you follow your
asthma action plan and stay within the
green zone. Watch your symptoms, and continue to avoid
your asthma triggers.
If you have been getting
treatment for 1 to 3 months but aren't improving, ask your doctor if you
need to see an asthma specialist.
Doctors who can diagnose and treat asthma include:
You may need to see a specialist (allergist or
pulmonologist) if you have:
A diagnosis of
asthma is based on your
medical history, a
physical exam, and lung function tests.
Lung function tests can diagnose asthma, show how
severe it is, and check for complications.
Asthma can be hard
to diagnose because the symptoms vary widely. And asthma-like symptoms can also be caused by other conditions, such
as a viral lung infection or a
vocal cord problem. So your doctor may want to do one or more extra tests.
You need to
monitor your condition and have regular checkups to
keep asthma under control and to review and possibly update your
asthma action plan. Checkups are recommended every 1
to 6 months, depending on how well your asthma is controlled.
During checkups, your doctor will ask about information you may have tracked in an
asthma diary, such as:
Based on the results, your asthma category may change, and your doctor may change the medicines you use or how much medicine you take.
If you have persistent
asthma and take medicine every day, your doctor may ask about your exposure to
substances (allergens) that cause an allergic reaction. For more
information about testing for triggers, see the topic
Allergy tests can include skin tests and a blood test. Skin tests are needed if you are interested in allergy shots (immunotherapy).
It's important to treat asthma, because even mild asthma can damage your airways.
By following your treatment plan, you can meet your goals to:15
An asthma action plan tells you which medicines to take
every day and how to treat
asthma attacks. It also may include an
asthma diary where you record your
peak expiratory flow (PEF), symptoms, and triggers.
This helps you identify triggers that can be changed or avoided. It also lets you be aware of
your symptoms and know how to make quick decisions about medicine and
treatment. See an
example of an asthma action plan.
You'll likely take several medicines to control your asthma and to prevent attacks. Your doctor may adjust your medicines depending on
how well your asthma is controlled. Medicines include:
Inhalers deliver medicine directly to the lungs. To get the best asthma control possible, be sure you know how to use your inhaler. Use a spacer with your inhaler if your doctor recommends it.
Be sure to monitor your asthma and have regular checkups. Checkups are recommended
every 1 to 6 months, depending on how well your asthma is controlled.
It's easy to underestimate how severe your symptoms are. You may
not notice them until your lungs are functioning at 50% of your
personal best peak expiratory flow (PEF).
PEF is a way to keep track of asthma symptoms at home. It can help you know
when your lung function is getting worse before it drops to a dangerously low
level. You can do this with a
peak flow meter.
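As a simple illustration of the arithmetic behind the green, yellow, and red zones mentioned in this topic, the sketch below compares a peak flow reading with a personal best and reports the zone. The 50% level is mentioned above; the 80% cutoff for the green zone is a commonly used example value that is not given in this topic, and the zone boundaries written in your own asthma action plan are the ones to follow.

```python
# Illustrative only: maps a peak expiratory flow (PEF) reading to an
# action-plan zone as a percentage of personal best. The 80% and 50%
# cutoffs are common example thresholds, not values from this topic;
# follow the zone values written in your own asthma action plan.

def pef_zone(reading_l_per_min: float, personal_best_l_per_min: float) -> str:
    """Return the action-plan zone for a peak flow reading."""
    percent_of_best = 100.0 * reading_l_per_min / personal_best_l_per_min
    if percent_of_best >= 80:
        return "green zone (doing well)"
    if percent_of_best >= 50:
        return "yellow zone (caution - follow your action plan)"
    return "red zone (medical alert - get help now)"

# Example: personal best 500 L/min, today's reading 320 L/min -> yellow zone.
print(pef_zone(320, 500))
```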
Being around your asthma triggers increases symptoms. Try to avoid irritants (such as smoke or air pollution) or things
that you may be allergic to (such as
animal dander). If
something at work is causing your asthma or making it worse (occupational asthma), you may have to change jobs.
If you have persistent asthma and react to
allergens, you may need to have
skin testing for allergies.
Allergy shots (immunotherapy) may be helpful.
Other considerations in treating asthma include:
If your asthma isn't improving, make an appointment with your doctor.
If your medicine isn't working to control airway
inflammation, your doctor will first check to see if you are using the
inhaler correctly. If you are using it the right way, your
doctor may increase the dosage, switch to another medicine, or add a medicine
to your treatment.
For severe asthma that cannot be controlled with medicines, a newer treatment called bronchial thermoplasty may be used. For this treatment, heat is applied to the airways. This reduces the thickness of the airways and improves the ability to breathe.16, 17
If you have a severe asthma attack (one in the red zone of your asthma action plan), use medicine based on your
action plan and talk with a doctor right away about
what to do next. This is especially important if your
peak expiratory flow (PEF) doesn't return to the
green zone or if it stays in the
yellow zone after you take medicine.
You may have to
go to the hospital or an emergency room for treatment. Be sure to tell the
emergency staff if you are pregnant.
At the hospital, you will
probably receive inhaled beta2-agonists and
corticosteroids. You may be given
oxygen therapy. Your lung function and condition will
be checked. You may need more treatment in the emergency
room or a stay in the hospital.
Some people are
at increased risk of death from asthma, such as people
who have been admitted to an intensive care unit for asthma or who have needed
a breathing tube (intubation) for asthma. If you are high-risk, seek medical
care early when you have symptoms.
Although there is no certain way to prevent asthma, you can reduce airway inflammation and your risk of asthma attacks. The goal is to reduce the number, length, and severity of asthma attacks. Start by avoiding your asthma triggers. Also be sure to:
Common irritants in the air, such as tobacco smoke and air pollution, can trigger asthma attacks in some
people. They include:
Exercise is an asthma trigger for some people. If you often have asthma attacks when you
exercise, use your inhaler 10 to 30 minutes before you start the activity so you can
avoid an attack.
Avoid exercising outdoors in cold weather. If you are outdoors in cold weather, wear a scarf around your
face and breathe through your nose.
You can control the impact
of asthma with an asthma action plan. A good action plan reminds you to take your daily controller medicines and to be aware of your symptoms. It also tells you how to make
quick decisions about medicine and treatment when you need to.
To manage your asthma and get the most out of your asthma
action plan, know how to monitor your peak airflow, identify
asthma triggers, and take your asthma medicine correctly.
It's easy to
underestimate how severe your symptoms are. You may not notice symptoms
until your lungs are functioning at 50% of your personal best measurement.
Measuring your peak expiratory flow (PEF) is a way to keep track of asthma symptoms at home. Doing this can help you know when your lung function is getting worse before it drops to a dangerously low level. You can do this with a peak flow meter.
A trigger is anything that can lead to an asthma attack. A trigger can be smoke, air pollution, allergens, some medicines, or even stress. Avoiding triggers will help decrease the chance of
having an asthma attack.
In the case of allergy triggers, avoiding them will help control
inflammation in the airways. If you have asthma triggered by an allergen, taking
antihistamine medicine may help you manage the allergy. It may limit the allergy's effect on your asthma.
Taking medicines is an
important part of asthma treatment. But because you may need to take more than
one medicine, it can be hard to remember to take them. To help yourself
remember, understand the reasons people don't take their asthma medicines. Then find
ways to overcome those obstacles, such as taping a
note to your refrigerator.
Most people with asthma can travel freely.
But if you travel to remote areas and take part in intense physical
activity, such as long hikes, you may be at increased risk for an asthma attack
in an area where emergency help may be hard to find.
When traveling, keep your medicine with you, carry the prescription for it,
and use it as prescribed. Also carry your asthma action plan so you know what
medicines to take every day and what to do if you have an asthma attack.
Teens who have asthma
may view the disease as cutting into their independence and setting them apart
from their peers. Parents and other adults can offer support and
encouragement to help teens stick with a treatment program. It's important
Medicine doesn't cure
asthma. But it is an important part of managing it. Medicines for asthma treatment are used to:
Asthma medicines are divided into two groups: those for prevention and long-term control of inflammation, and those that provide quick relief for asthma attacks.
Most medicines for asthma are
inhaled. Inhaled medicines are used because a specific dose can
be given directly to the airways.
Delivery systems include metered-dose and dry powder inhalers and nebulizers. A metered-dose inhaler (MDI) is used most often.
Sometimes doctors recommend attaching a
spacer to an MDI to better deliver the medicine to the lungs. For many people, a spacer makes an MDI
easier to use.
The most important asthma medicines are:
There are other long-term medicines for daily treatment. They include:
Other medicines may be given in some cases.
Medicine treatment for asthma depends on your age and type of asthma, and how well the treatment is controlling your asthma symptoms.
Your doctor will work with you to help find the number and
dose of medicines that work best.
One of the best tools for managing asthma is a daily controller medicine that has a corticosteroid ("steroid"). But some people worry about taking steroid medicines because of myths they've heard about them. If you're making a decision about a steroid inhaler, it helps to know the facts.
At the start of asthma
treatment, the number and dosage of medicines are chosen to get the asthma
under control. Your doctor may start you at a higher dose within your asthma
classification so that the inflammation is controlled right away. After the asthma has been controlled for several months, the dose
of the last medicine added is reduced to the lowest possible dose that prevents
symptoms. This is known as step-down care. Step-down care is believed to be a
better way to control inflammation in the airways than starting at
lower doses of medicine and increasing the dose if it is not enough.18
Because quick-relief medicine quickly reduces
symptoms, people sometimes overuse these medicines instead of using the
slower-acting long-term medicines. But
overuse of quick-relief medicines may have harmful
effects, such as reducing how well these
medicines will work for you in the future.19
You may have to take more than one
medicine each day to manage your asthma. Help yourself remember when to take each medicine, such as taping a
note to your refrigerator to remind yourself.
Tell your doctor about all the medicines you are taking, so he or she can choose asthma medicines that won't interfere with them.
Some people only have symptoms during certain
times of the year (seasonal asthma). If you know when you will most likely have
symptoms, start using a medicine to decrease inflammation before the symptoms begin.
A new treatment called bronchial thermoplasty is available for adults with severe asthma. For this treatment, bronchoscopy is used to apply heat to the airways. This reduces the thickness of the airways and improves the ability to breathe.16, 17
Allergy shots (immunotherapy) may be recommended for people who have
asthma symptoms that are triggered by allergens.
For some people, allergy
shots reduce asthma symptoms and the need for
medicines.20 But allergy shots don't work equally well for all allergens. Allergy shots should not be given when asthma is poorly controlled.
Some people have used
ephedra—a stimulant sold for weight loss and sports
performance—to try to treat asthma symptoms. But the U.S. Food and Drug
Administration (FDA) has banned the sale of this dietary supplement because of
concerns about safety. Ephedra, also called ma huang, has been linked to
strokes and some deaths.
Alternative treatments such
as homeopathy, acupuncture, and breathing exercises have been used to treat
asthma. The research on these treatments is limited, and reviews of the research are inconclusive. A review of
complementary and alternative treatments for treating asthma in children
concluded that none have been proved to reduce asthma symptoms and some may
have harmful side effects.23 Some of these studies
included teenagers and adults. The treatments reviewed included therapies such as homeopathy, acupuncture, and breathing exercises.
Talk to your doctor before trying a complementary or alternative treatment. For more information, see the related topic on alternative treatments.
The American Academy of Allergy, Asthma, and Immunology
publishes an excellent series of pamphlets on allergies, asthma, and related
information. It also provides physician referrals.
The American Lung Association provides programs of
education, community service, and advocacy. Some of the topics available
include asthma, tobacco control, emphysema, infectious disease, asbestos, carbon monoxide, and radon.
The Asthma and Allergy Foundation of America (AAFA)
provides information and support for people who have allergies or asthma. The
AAFA has local chapters and support groups. And its Web site has online
resources, such as fact sheets, brochures, and newsletters, both free and for purchase.
The Centers for Disease Control and Prevention (CDC) is
an agency of the U.S. Department of Health and Human Services. The CDC works
with state and local health officials and the public to achieve better health
for all people. The CDC creates the expertise, information, and tools that
people and communities need to protect their health—by promoting health,
preventing disease, injury, and disability, and being prepared for new health threats.
The U.S. National Heart, Lung, and Blood Institute
(NHLBI) information center offers information and publications about preventing and treating diseases of the heart, lungs, and blood.
Bush RK (2002). Environmental controls on the
management of allergic asthma. Medical Clinics of North America, 86(3): 973–989.
McGeady SJ (2004). Immunocompetence and allergy.
Pediatrics, 113(4): 1107–1113.
Jarjour NN, Kelly EAB (2002). Pathogenesis of asthma.
Medical Clinics of North America, 86(3):
Silva GE, et al. (2004). Asthma as a risk factor for
COPD in a longitudinal study. Chest, 126(1):
Guilbert T, Krawiec M (2003). Natural history of
asthma. Pediatric Clinics of North America, 50(3):
Stern DA, et al. (2008). Wheezing and bronchial
hyper-responsiveness in early childhood as predictors of newly diagnosed asthma
in early adulthood: A longitudinal birth-cohort study. Lancet, 372(9643): 1058–1064.
Etzel RA (2003). How environmental exposures influence
the development and exacerbation of asthma. Pediatrics,
Rodriguez MA, et al. (2002). Identification of
population subgroups of children and adolescents with high asthma prevalence:
Findings from the Third National Health and Nutrition Examination Survey.
Archives of Pediatrics and Adolescent Medicine, 156(3):
Lemanske RF Jr (2003). Viruses and asthma: Inception,
exacerbations, and possible prevention. Proceedings from the Consensus
Conference on Treatment of Viral Respiratory Infection-Induced Asthma in
Children. Journal of Pediatrics, 142(2, Suppl): S3–S7.
Sutherland ER, Martin RJ (2002). Is infection
important in the pathogenesis and clinical expression of asthma? In SL
Johnston, ST Holgate, eds., Asthma: Critical Debates,
pp. 69–84. London: Blackwell Science.
Burgess SW, et al. (2006). Breastfeeding does not increase the risk of asthma at 14 years. Pediatrics, 117(4): 787–792.
Jaakkola JJK, et al. (2002). Pets, parental atopy, and asthma in adults. Journal of Allergy and Clinical Immunology, 109(5): 784–788.
Ownby DR, et al. (2002). Exposure to dogs and cats in the first year of life and risk of allergic sensitization at 6 to 7 years of age. JAMA, 288(8): 963–972.
Yeatts K, et al. (2003). Who gets diagnosed with
asthma? Frequent wheeze among adolescents with and without a diagnosis of
asthma. Pediatrics, 111(5): 1046–1054.
Joint Task Force on Practice Parameters (2005).
Attaining optimal asthma control: A practice parameter. Journal of Allergy and Clinical Immunology, 116(5): S3–S11. Available online:
Cox G, et al. (2007). Asthma control during the year after bronchial thermoplasty. New England Journal of Medicine, 356(13): 1327–1337.
Castro M, et al. (2010). Effectiveness and safety of bronchial thermoplasty in the treatment of severe asthma: A multicenter, randomized, double-blind, sham-controlled clinical trial.
American Journal of Respiratory and Critical Care Medicine, 181(2): 116–124.
National Institutes of Health (2007). National Asthma Education and Prevention Program Expert Panel Report 3: Guidelines for the Diagnosis and Management of Asthma (NIH
Publication No. 08–5846). Available online:
Salpeter SR, et al. (2004). Meta-analysis: Respiratory
tolerance to regular beta2-agonist use in patients with
asthma. Annals of Internal Medicine, 140(10): 802–813.
Abramson MJ, et al. (2010). Injection allergen immunotherapy for
asthma. Cochrane Database of Systematic Reviews (8).
Oxford: Update Software.
Györik SA, Brutsche MH (2004). Complementary and
alternative medicine for bronchial asthma: Is there new evidence?
Current Opinion in Pulmonary Medicine, 10(1): 37–43.
Passalacqua G, et al. (2006). ARIA update:
I—Systematic review of complementary and alternative medicine for rhinitis and
asthma. Journal of Allergy and Clinical Immunology,
Bukutu C, et al. (2008). Asthma: A review of
complementary and alternative therapies. Pediatrics in Review, 29(8): e44–e49.
Other Works Consulted
Grayson MH, Holtzman MJ (2007). Asthma. In EG Nabel, ed., ACP Medicine, section 14, chap.
19. Hamilton, ON: BC Decker.
Jaeschke R, et al. (2008). The safety of long-acting
beta-agonists among patients with asthma using inhaled corticosteroids.
American Journal of Respiratory and Critical Care Medicine, 178(10): 1009–1016.
October 22, 2012
E. Gregory Thompson, MD - Internal Medicine
& Rohit K Katial, MD - Allergy and Immunology
Polish History - The Rise and Fall of the First Republic
"It is by unrule that Poland stands."
The 1569 Union of Lublin represented the final stage in the creation of the Polish-Lithuanian federation, but the death of the last Jagiellon king in 1572 presented the first real test of the evolving political system.
As was common during past successional interregnums, the clergy and magnates provisionally wielded royal authority until a new king was crowned. Although the magnates (rich nobles) had been actively involved in the selection of kings since at least the death of Casimir the Great in 1370, the process was a confusing mixture of precedent and personality. Technically free to select a king to their liking, for a host of practical reasons the nobility were limited to a short list of dynastic candidates typically recommended by the prior monarch. In effect, the Sejm confirmed the senior heir of the preceding king in a ceremonial pseudo-election. There was, in fact, no uniform legal procedure to assure a smooth transition, and many feared that the occasion of Sigismund II's death would be used by separatist elements to divide and weaken the new union. It was widely recognized that a more rational and orderly successional procedure was needed.
The crisis was averted when the nobility decided to use their temporary authority to define a new political system for the Commonwealth. In actuality, the majority of the "new" system was the comprehensive codification of past privileges, precedents, and customs that resulted from over five hundred years of political evolution. Poland became a parliamentary republic of the szlachta, a gentry proto-democracy, by increments. The death of the last Jagiellon king was an opportunity to comprehensively define governmental institutions to create a constitutional united republic with a freely elected sovereign.
Senator Jakub Uchanski, the Archbishop of Gniezno and Primate, head of the Catholic Church of Poland, assumed the role of Interrex and summoned a convocational Sejm called the Confederation of Warsaw to hammer out the details of the new government. The term "confederation" was not selected casually. In general terms a confederation is an association of sovereign states, and since each Polish nobleman believed in the sovereignty of their individual liberty the term was indeed appropriate. Moreover, in Poland a confederation was an ancient institution which had been employed many times in the past. A Polish confederation was a voluntary organization usually comprised of armed men who swore on their lives to champion a noble cause or correct an injustice. The confederation was considered an incipient and ephemeral political party, formed ad hoc to address specific issues. The Solidarity movement in 1980 Poland, a spontaneous organization of concerned patriots opposed to tyrannical rule, has strong parallels with the confederations of the First Republic.
The confederation was in essence a form of legalized rebellion, oddly enough necessitated by the tradition of unanimity among the nobility. The szlachta believed that the noble estate was indivisible, each nobleman possessing sovereign power that could not be arbitrarily usurped by a majority or the edicts of a king. However, in the inevitable event of disagreement, it was considered perfectly acceptable for the discontented parties to withdraw from debate and form an opposing confederation. Significantly, confederations were bound to strict majority voting, perhaps a tacit admission that absolute unanimity, though ideal, was ultimately unworkable.
Any government in Poland-Lithuania had to consider the unique nature of the Commonwealth, a cacophony of cultures that boasted one of the most religiously diverse societies in Europe. Although Poland later became almost pervasively and militantly Catholic, at the time of the Union of Lublin the Roman Church constituted only a statistically dominant minority, and an extremely tolerant one at that. Protestants, Orthodox, Moslems, and Jews were well represented and unmolested in the Republic. In fact, by the mid-16th century the majority of the non-clergy members of the Senate were Protestant, as the elite seemed disproportionately susceptible to the Reformation.
The Jewish community in Poland pre-dated Mieszko's conversion to Christianity in 966 and was granted a general charter of Jewish liberties as early as 1264. Fleeing persecutions associated with the Crusades, the Black Death, and the Inquisition, to name a few, major influxes of Jews into Poland occurred between the late 11th and 15th centuries, vastly expanding the earlier settlements that developed as a result of a Polish special invitation to Jewish settlers in 1133. Another invitation was extended in the wake of the devastation following the Mongolian invasion, and Casimir the Great issued a special charter in 1345 to safeguard Jewish liberties. Jews migrating from coastal Europe via Germany became known as Ashkenazim (German) Jews while those emanating from Spain were known as Sephardim (Spanish) Jews, but both groups were widely accepted in their new home. Tolerant and relatively under-developed Poland, in need of highly skilled craftsmen and merchants, welcomed the Jews who eventually formed ten percent of the population. By the mid-16th century Poland had the largest concentration of Jews in the world.
Given the circumstances it is perhaps not surprising that one of the most significant outcomes of the Confederation of Warsaw was a guarantee of religious freedom. Nonetheless the concept was one of the truly admirable and unique features of a government formed at a time of widespread violence associated with the Reformation and Counter-Reformation. The religiously mixed Sejm was particularly alarmed by the August 24, 1572, St. Bartholomew's Day Massacre, in which a violent Catholic mob in Paris, outraged at a recent inter-denominational royal wedding, rampaged through the city killing Huguenots, French Calvinist Protestants. The violence soon spread throughout much of France and lasted several months, killing as many as twenty thousand people. The Poles considered the massacres a despicable act of intolerance and were determined to prevent a similar occurrence in their country. In a memorable passage in which freedom of religion was enshrined in the constitution, the Toleration Act of Warsaw stated:
Whereas in our Common Wealth there is no small disagreement in the matter of the Christian faith, and in order to prevent that any harmful contention should arise from this, as we see clearly taking place in other kingdoms, we swear to each other, in our name and in that of our descendants for ever more, on our honor, our faith, our love and our consciences, that albeit we are dissidents in religion, we will keep the peace between ourselves, and that we will not, for the sake of our various faith and difference of church, either shed blood or confiscate property, deny favor, imprison or banish, and that furthermore we will not aid or abet any power or office which strives to this in any way whatsoever...
The Poles were as good as their word. That year the Inquisition was banned in Poland. In the next hundred years, no more than twelve sectarian killings were recorded in the Republic, while elsewhere in Europe thousands were burnt at the stake for perceived heresies. In a telling example, in 1580 a radical Calvinist walked into a Catholic Church during mass and snatched the consecrated host, believed to be the actual body of Christ, from an astonished priest. The Protestant protester then spat on the Blessed Sacrament, threw it on the ground and stomped on it, and finally fed the soiled wafer to a passing mongrel. In virtually any other Catholic country the transgressor would have been invited to an unpleasant public barbeque, but in Poland the king simply reprimanded the miscreant and asked that he not do it again. Perhaps as a consequence of toleration, anti-Catholic demonstrations were relatively rare in the Republic, where the stimulating effects of persecution were noticeably lacking.
The Poles recognized that the separation of church and state was desirable in a Republic. Religion, anchored by non-negotiable dogma, is by its nature uncompromising while the essence of a representative form of government is compromise. Although most individual Poles were religious, they collectively believed that their government should be secular.
Another hallmark of the new Republic was a respect for freedom of expression, even during trying times. During the Great Cossack uprising in the mid-17th century, Poland was ridiculed by the Czar for allowing the publication of subversive material. The Senate responded:
The King and we do not order books printed, nor do we forbid it: if a printer publishes good and fair material, we praise it; if fools publish something inferior, unworthy and untrue, we at the Council laugh at it. If no one were to publish books, our descendants would know nothing about us...Printing is free in our country, by law and by the custom of nations.
Although the nobles did not share a common lifestyle, they held a common viewpoint on government and the prerogatives of their class that was reflected in the social contract constructed by the Confederation of Warsaw. It is perhaps surprising that Polish-Lithuanian nobles manifested such impressive group solidarity given their profound ethnic, religious, educational, and socio-economic differences.
The szlachta (nobles) comprised numerous ethnic groups, including Lithuanian and Ruthenian boyars, Germans, Prussians, Baltic gentry, Tatars, Italians, and eventually even one American. Yet most nobles were considered "Polish", more as a symbol of class than ethnicity. Although the concept of nobility was historically associated with land ownership, significant numbers of szlachta became landless indigents, some virtually indistinguishable from their peasant counterparts. The magnates, who comprised only a few percent of the nobility, were obnoxiously wealthy and disproportionately represented in prior interregnums. It was not uncommon for individual magnates to own several villages, complete with self-sufficient economies, and command private armies largely composed of their less fortunate fellow nobles. Despite the obvious disparity in wealth, the szlachta, who addressed each other as "brother", were theoretically socially and politically equal, a beautiful fiction that was pursued with passion.
The glaring contrast between the reality and the theory created a mania for equality among the lower nobility that led to the conclusion that the entire noble class by right should participate as equal partners in the election of the King. For the first time the lower nobility were included in the electoral process, the most destitute szlachta's vote counting no less than that of the most powerful magnate.
At the time, the political power accreted to the Noble estate over the past centuries was unmatched anywhere in Europe. Unfortunately, unlike the modern concept of freedom based on universal citizenship and inalienable rights, the Poles still based their liberties on negotiated privileges which, especially since the death of Casimir the Great in 1370, had come at the expense of the king. These privileges were typically negotiated, some would say extorted, during successional crises or times of war when the king's bargaining power was at low ebb. Once granted, these rights became the irrevocable sacred property of the nobility. Some of the more significant past concessions included:
- 1374- Statute of Koszyce: recognized that the nobility as a class were entitled to basic rights and privileges, including exemption from the land tax.
- 1422- Privilege of Czerwinsk: prevented the king from confiscating noble private property without a court verdict.
- 1430, 1433- Acts of Jedlina, Krakow: the statute of "Neminem Captivabimus" stated that the King could neither imprison nor punish any szlachta without a viable court order. This is the equivalent of Habeas Corpus, not codified in England until 1679.
- 1454- Statute of Nieszawa: stated that no new tax or army could be raised without the consent of the Sejm, the Polish version of "no taxation without representation".
- 1505- The Constitution of Nihil Novi (nothing new): forbade the king to pass new laws without the consent of both the Sejm and the Senate. This act effectively transferred legislative powers from the king to the nobility and is often cited as the world's first application of a democratic parliamentary system, anticipating the goals of the English Glorious Revolution (1688) and the American Revolution (1775) by centuries.
The Confederation of Warsaw provided the opportunity to reinforce, expand, and more importantly preserve in law the sacrosanct privileges of the noble class. These "Cardinal Laws", which long preceded many of their English counterparts, became the backbone of Poland's constitutional law. To assure the continued supremacy of the szlachta, the new government had the following new features:
- The king would be elected by the entire noble class. Eligible candidates included any Commonwealth noble or foreigner of royal blood.
- The king would be elected for life but must renounce any hereditary right of succession.
- The king could not marry without the consent of the Senate.
- Religious tolerance must be strictly observed.
- Parliament would be called into session at least once every two years.
- A group of senators would form an ongoing supervisory council to oversee governmental actions.
- The judiciary in the form of a Supreme Court would be independent from the king.
- Any declaration of war was to be approved by the Senate.
- Parliament would control foreign policy.
- If, in the nobility's opinion, the king violated any of the terms of the social contract, he would be subject to dethronement. Civil disobedience was in effect legalized.
- The king would be required to publicly swear to uphold these basic principles spelled out in a Pacta Conventa, essentially a Polish Bill of Rights, including any pre-election promises he may have made to win the crown.
The statutes were designed to officially transfer sovereignty from the king to the constituent members, nobles, of the political nation, who adopted the then radical notion of "one nobleman, one vote". The Cardinal Laws and the Pacta Conventa collectively became known as the "Golden Freedoms" that formed the political soul of the Noble Republic.
The new Republic was fashioned very consciously after ancient Rome. It is therefore no wonder that the political lexicon became saturated with terms such as "citizen", "senate", and "tribune". In fact the Latin term res publica, rzeczpospolita in Polish, was selected by the nobles to indicate that their political system was a direct descendant of the Roman Republic. The parallels were well deserved, for as the historian R.H. Lord observed, the Polish state was:
...the largest and the most ambitious experiment with a republican form of government since the days of the Romans. In the sixteenth and seventeenth centuries this republic was the freest state in Europe, the state in which the greatest degree of constitutional, civic, and intellectual liberty prevailed...
Unfortunately, Poland was to emulate Rome in a way she did not wish; the decline and fall of the noble republic was in many ways similar to the dissolution of the Roman Empire.
Despite coming to the party of Western Civilization relatively late, the Poles were determined to claim the cultural and political legacy of Rome as their own.
The glaring central theme in the new government was the limitation of the power of the king, who was perhaps more accurately described as a chief magistrate for life. The Chancellor Jan Zamoyski summed up this doctrine when he said, "Rex regnat et non gubernat", "The King reigns but does not govern". Moreover, the fact that the ruled freely chose their ruler was literally a liberating idea. As the palatinates of Sandomierz and Krakow declared in their resolution of December 12, 1572, "...it is fitting and proper of us to consider our freedoms and liberties, perceiving the basis of them to be the free election of our King and Lord." Although the king was elected for life and was technically responsible to no one, most high offices in the state also enjoyed life tenure. The lack of dependence was in theory another check on tyranny but often prevented reasonable compromise, as neither side possessed enough leverage to realize their goals.
Fear of central authority seemed to be in the Polish genes but was perhaps more influenced by events outside Poland than within. Many neighboring countries were teetering on the brink of tyranny, developing absolutist monarchies that often claimed a divine right to rule. Habsburg, Hohenzollern, and later Romanov, Stuart, and Bourbon dynasties created centralized administrations with efficient and draconian taxation systems, designed to support large standing armies capable of political oppression or territorial expansion. As the Confederation pondered the new Republic in 1573, the szlachta needed only to look at the neighboring monarch, Ivan the Terrible, to see the darkest side of absolutism.
In an effort to disperse authority to prevent tyranny, the monarchical republic was in theory to combine the best parts of three basic government structures. The Sejm represented democracy, championing the will of the people or at least the nobility. The Senate represented oligarchy, a council of learned elders serving as custodians of the law. The king represented monarchy as commander-in-chief and chief executive. The tripartite government of Poland was an early attempt at a separation of powers designed to prevent governmental abuse through a system of checks and balances, a concept refined in the American Constitution more than two hundred years later.
The Polish Nobility seized the opportunity afforded by the successional crisis to create a national government where power was concentrated in the parliament, a structure they believed secured and maximized individual liberty. Indeed, in the 16th century Poland was the freest state in Europe, and its Sejm was significantly more empowered than its English counterpart, the Parliament. Although the Magna Carta, literally "Great Charter", of 1215 limited the power of the king and established the basis of English civil rights, contemporary Poland had many of the same rights, and by the late 16th century the Noble Republic had advanced the concept much further.
Unlike the later Cromwellian upheaval in England, to the great credit of the Polish gentry these momentous reforms were accomplished not by bloodshed or revolution, but by the quiet reason and cooperation of responsible citizens. The new social contract represented the culmination of a long liberal tradition characterized by limited government, property rights, parliamentary representation, government by the consent of the governed, and broad civil freedoms. No other European country can claim a liberal democratic pedigree more senior than Poland. Although some argue that the institutions of the Polish-Lithuanian Commonwealth, lacking full rights for the lower classes, were proto-liberal at best, in the context of the 16th century Poland was much advanced.
Yet the Noble Republic failed. Many came to believe that this outcome was the inevitable result of excessively liberal, in the sense of maximum personal freedom, principles applied in an increasingly irresponsible and self-serving society. But this is only hindsight history; the collapse of the Polish-Lithuanian Commonwealth was not pre-ordained. Many elements of the system existed for two hundred years before the Confederation of Warsaw and the Republic itself endured for another two hundred years. Poland's democratic traditions were hardly an unworkable flash-in-the-pan.
Democratic governments are notoriously difficult to implement, even in our age, but the fact that the process is difficult does not mean that the effort is unworthy. The failure of new parliamentary systems in Italy, Yugoslavia, Germany, Austria, Czechoslovakia, and Pilsudski's Poland within the twenty years after World War I is attributed to widespread economic distress and militant nationalism, but the Noble Republic was subjected to conditions at least as difficult as those faced by these advanced European countries, and for a longer time. Similar failures of democratic governments in Africa after World War II, despite ample natural resources and foreign aid, demonstrate the difficulty of representative rule. Westerners can point with pride to successful democracies in America and Great Britain, but these governments were allowed to develop relatively unmolested during prolonged periods of prosperity. Poland had no oceans to protect her, yet by historical standards the Noble Republic was, if anything, remarkable for its longevity.
The failure of the Polish Commonwealth was the result of a cruel combination of related factors. It has been famously said the American Constitution is not a suicide pact. Under the circumstances, the Polish constitution proved to be just that.
An often-cited underlying problem was that the Noble Republic was never able or willing to create a strong central government. In fact, Polish political institutions were based on maximizing local control, with the goal of preserving the feudal system controlled by the nobility. The Polish elite who constituted the political nation never allowed the development of the universal bureaucracy needed to efficiently administer government and instead insisted upon radical decentralization as a way of preserving their own power. Poland's government was more concerned with obstruction than construction, always on the alert for any stirrings of tyranny that might threaten the supremacy of the szlachta. Healthy skepticism was replaced with outright paranoia, which blocked most collective action out of an irrational fear of being manipulated by hidden evil forces. But as the British statesman Edmund Burke observed, inaction does not assure good government; often the only thing required for evil to triumph is for good men to do nothing.
The obsessive desire to keep the king weak and his administration minimal made the Sejm, by default, the most powerful governing body, but even its effectiveness was limited by several curious parliamentary quirks. Debate in the Sejm was frequently pointless as deputies were commonly given irrevocable voting instructions from the dietines, local assemblies. The Sejm, therefore, resembled an assembly of sovereign states or a federation of neighborhoods more than a national parliament. More problematically, the szlachta were committed to the principle of unanimity, based on the belief that no genuine dissenting opinion could be ignored and that any measure not freely agreed to by all lacked full authority. The Poles feared the tyranny of the many as much as the tyranny of the few. Incredibly, unanimity was commonly achieved as the minority recognized a moral obligation to submit to the majority after their grievances were aired and sufficiently considered. An individual's dissenting vote was more similar to a filibuster than a veto, designed to delay a decision for reconsideration or to ensure general consent. Many believed that if the cause was just it was the obligation of the majority to convince the minority. If agreement was ultimately impossible the minority withdrew and invoked an alternate form of minority rights, the Confederation.
It may seem that the concept of unanimity limited the Polish deputies to a choice of compliance or rebellion, but in practice legislators usually found common ground. Unanimity survives today in the English jury system, where it functions quite well in limited applications. The Polish system worked because opposition was truly principled, unwilling to take unfair advantage of what was in effect a procedural flaw, and because the majority respected the opinions of the minority. The principle was both bold and delicate, relying on honorable men to act reasonably for the good of the nation. But ultimately there was nothing but honor to prevent an obnoxious deputy from destructive actions. The fact that the system worked so well for so long says volumes about the integrity of the Polish noble class.
However, the Sejm came to be plagued by a pernicious parliamentary device that mutated from honorable unanimity. The liberum veto, "I freely forbid" or "I am free to veto", came into the Polish lexicon after an unfortunate incident in 1652. The Sejm was near the conclusion of an exhausting six-week session during the height of the Cossack Rebellion. The Marshal of the Sejm announced that the session was extended for another day to address unfinished business. As the deputies prepared to leave, a solitary voice, an obscure deputy from Lithuania, Wladyslaw Sicinski, secretly in the pay of the disruptive Radziwills, shouted, "I do not allow it!" After registering his vote the novice deputy mounted his horse and rode away. Many argued to simply ignore the absurd proclamation, but the Marshal, after consulting legal experts, ruled the veto legal. Attempts to locate Sicinski to convince him to change his mind failed as the young man had left to go home. Despite displaying a general disdain for authority, the Poles paradoxically were obsessed with legality. Not only did the Sejm fail to reconvene, but the entire legislative work of the extended war-time session was declared null and void. Most szlachta were shocked by the ruling but came to respect it as law.
It is important to note that despite the dangerous precedent, the liberum veto was not used for another seventeen years, and not again for another ten years. At this time the veto was considered a dishonorable method of disruption that the civic-minded szlachta were reluctant to use. Many considered the liberum veto a vehicle for lobbying or merely a device to cut short debates that became pointless or interminable, and besides, most debates were carried out in private before a formal vote in the Sejm to ensure unanimity. But as Poland descended into chaos in later years the liberum veto was applied recklessly, typically on the orders of foreign powers with an interest in keeping Poland weak and disorganized. The principle of unanimity was shockingly simple to corrupt; only one bribe was needed to derail any legislation. Lacking mechanisms to reform thanks to the paralyzing veto, Poland became trapped in a static governmental system. By the mid-18th century during the reign of Augustus III, only one Sejm in thirty years was able to pass legislation of any kind due to the illiberal application of the liberum veto. When Poland finally tried to outlaw its use, foreign powers made sure the liberum veto was preserved, cynically citing their commitment to the Polish Golden Freedoms that were, of course, denied in their own countries.
Power was concentrated in the Sejm to prevent the monarchy from devolving into tyranny, yet the Sejm lacked the ability to effectively govern, hamstrung by a fatal idealism. The szlachta believed that no man had a right to tell another man what he could or could not do, with the monumental exception of anyone outside of their own privileged class. Unfortunately the opposite of tyranny is not freedom, but anarchy. On the eve of the modern state, Poland became a world power operating largely on auto-pilot, relying on the individual good will of thousands of uncoordinated local interests. The "invisible hand" of capitalism did not appear as dexterous when at the controls of government. In what became known as the "Republic of Anarchy", the Polish government's inability to muster a collective response had ominous repercussions.
The fear of tyranny reached absurd proportions. The Noble Republic was without equal in the theory of the law; however, enforcement, at least for the privileged class, was another matter. Poland retained separate courts for the five estates which handed out harsh punishment for the lower classes. The pugnacious Poles were famous for their brawling, in which half of the cases were between women. Whippings, banishment, torture, hangings, beheadings, and drownings were liberally dispensed. The nobility, however, were not subject to these infringements on their sacrosanct rights.
The noble judiciary system consisted of an elaborate network of elective judges and courts of appeal, which, although independent of the executive branch, were in fact often powerless because of a lack of enforcement mechanisms. As in all things, the szlachta believed that no man or institution could tell them what to do. The extreme rejection of central authority prevented the creation of any policing authority, which might transform into Oprichnina-like terror. Consequently the law was enforced haphazardly or arbitrarily, creating a general disrespect for the institutions of justice. Because the threat of injustice was greater than the fear of lawlessness, in many cases the law was not enforced by the state, but by the offended party. Besides, the szlachta were expected to be fully armed and able to take care of themselves. There was no means to compel the accused to appear in court, and judgments were typically lenient and relatively meaningless since they were not physically enforced. Consequently, vigilante justice and private wars were the norm. Piotr Skarga (1536-1612), first rector of the University of Wilno and the Chaplain at the court of Sigismund III, noted in 1597 that "Discipline and self-restraint have perished in this kingdom. No one fears the laws or institutions, no one even thinks of punishment. Everyone defends our noble freedom, whilst honest liberty is turned into disobedience and harlotry". According to Skarga, there were only three good freedoms: to refrain from sin, to decline a foreign master, and to resist a tyrant. Prophetically he proclaimed that to live without law was a "Satanic freedom" that doomed the Republic to failure.
To be effective, the rule of law requires that the government have a monopoly on the use of force, and be willing to use it when necessary. But Poland preferred a Wild West mentality rather than risk any abuse that might arise out of central authority.
The Sejm proved a poor forum for formulating foreign policy, as national security frequently was subordinated to conflicting local interests. Diplomatic missions abroad, which peaked during the reign of Sigismund I, decreased in both quantity and quality as necessary funding was withheld by the parochial-minded parliament. Even in time of war, policy was subject to protracted debate that required a unanimous, and, therefore, almost certainly compromised, conclusion. Discretion and frank discussion, opposite but necessary diplomatic tools, were impaired in the Republic by the very public and widely published debates. Then as now, legislatures in a public forum were often more interested in grandstanding than solving problems. Americans were to discover that sometimes parliamentary protocol was inadequate, even life-threatening, during wartime. In the Revolutionary War, as Americans stared at defeat during "the times that try men's souls", the Continental Congress recognized that it was ill suited to manage a desperate war effort and appointed George Washington as Commander-in-Chief with broad decision-making powers. As General Nathanael Greene observed, "The fate of the war is so uncertain, dependant on so many contingencies. A day, nay an hour is so important in the crisis of public affairs that it would be folly to wait for relief from the deliberative councils of legislative bodies". But Poland did wait, frequently with disastrous results. At a time when her enemies were becoming increasingly aggressive, Poland's foreign policy became inconsistent, passive, and slow to respond. Thanks in part to an absence of central authority, Poland's diplomacy became dangerously unbalanced.
Moreover, although it was assumed that a democratic Republic would have an unaggressive foreign policy that would prevent unnecessary wars, paradoxically it created more conflicts as Poland's neighbors interpreted her passivity as weakness and an invitation to attack. In the difficult world of international competition, cautious passivity offers no greater guarantee of safety than bold action; too much of either is provocative.
Perhaps worse in the long term, Poland's new government did not establish an adequate or reliable source of income. To their credit, the Poles implemented the concept of "no taxation without representation" several centuries before the phrase was coined by the American Reverend Jonathan Mayhew in 1750, but all too frequently the Republic refined the practice to simply "no taxes". The plea of the Bishop of Wilno to a member of the Sejm is typical of the Polish elite: "For the love of the Lord on the cross, for His sacred glory, do not allow any taxation of the clergy". The Bishop went on to imply that collection of any tax in arrears would also not meet with heavenly approval.
The king could not, and the Sejm would not, tax except in extraordinary circumstances. Even when taxes were approved, they were rarely voted for more than a year at a time, after which the revenue battle began anew. Local councils often over-ruled the Sejm by invoking an institution known as the "appeal to the brethren," which simply meant they refused to pay national taxes. Taxes needed to fund armies were frequently denied as Poles removed from the threatened areas were reluctant to pay for an enterprise that did not directly or immediately affect local interests. Beginning in 1573, the king was denied traditional revenues from mining operations as the nobility granted themselves the exclusive right to natural resources on their property. Taxes were called the price of civilization but, characteristically, the szlachta considered taxes an infringement on their personal freedoms.
For comparison, contemporary France supplied ten times as much revenue to the state as Poland did. This chronic lack of funding became the Achilles' heel of the Republic. While neighboring countries were spending up to sixty percent of their gross domestic product on military expenditures, the King fought for every zloty from the tight-fisted deputies in the Sejm. In the past the Nobility justified their generous tax exemptions based on their obligatory military service, which they were expected to self-fund; however, by the end of the 16th century many of the szlachta refused to serve in any but a local cause. In many cases the King was forced to draft the previously exempt peasants or attempt to hire foreign mercenaries with money he could not guarantee.
Although the concept of an elected king was admirable, the Polish manifestation had several disturbing weaknesses. As in all matters, the king had to be elected unanimously. Not surprisingly, this proved difficult and resulted in several dually, as opposed to duly, elected kings and associated civil wars. Even when unanimity was achieved, frequently after the violent death of many of the contesting electors, the king's position was tenuous. Although elected for life, the king's limited powers were subject to termination at any time. The nobles pledged their loyalty only as long as the king honored the conditions elucidated in the Pacta Conventa and Henrician Articles, social contracts guaranteeing the fundamental principles of governance outlined by the nobility that the king was required to sign before his coronation; however, whether or not the king was in compliance was subject to highly individualistic interpretation.
The Poles recognized that the right of civil disobedience was an essential element of a democratic society almost three centuries before Henry David Thoreau contemplated the subject at Walden Pond; however, the unconstrained application of the principle proved unnecessarily disruptive. As with many institutions in Poland, there were few legal means to prevent arbitrary abuse. If any szlachta, for any reason, believed that the king violated any of the conditions of his enthronement, then the nobleman invoked the right of de non praestanda obedientia, releasing him from obligation to the crown. Although the nobility's right of armed resistance against the king was first established in Hungary as specified in the famous "Golden Bull" of 1222, the Poles took the principle to the extreme.
The highly subjective criterion proved easy to abuse, a virtual invitation to legal rebellion that resulted in frequent anti-King confederations called a Rokosz, named after the site of a 14th century Hungarian revolt in the field of Rakos outside Buda. Sanctified by the Golden Freedoms, many of the Polish rebellions were based on obscure points of honor, needlessly dividing Poland at times when unified action was critical. The szlachta demanded a fatal purity that often ignored pressing realities. The perfect became the enemy of the good and allowed much greater evil to prevail.
In many ways the ancient Polish confederation presaged the principles that John Locke espoused in his 1689 masterpiece of political philosophy, The Two Treatises of Government. Locke believed that the relationship between society and its citizens took the form of a contract which was valid only as long as its terms were fulfilled. If the government, which could only be legitimately formed by the consent of the governed, overstepped its limits, citizens were not only free to but morally compelled to revolt. But Polish civil disobedience ignored some of the finer points of Locke's thesis, which recognized that the contract required sacrifices from, and had to be honored by, both sides. In exchange for the protection and order that only the state, under the guidelines of the rule of law, can provide, citizens must agree to surrender some individual freedoms. This is not because government is inherently evil, but because a small but irreducible number of individuals and foreign states are evil. In addition, according to Locke it was the duty of citizens to resist the arbitrary overthrow of the government by dissatisfied members of society, which are always present to some degree. But by the 18th century an unfortunately large portion of the Polish elite became unwilling to sacrifice any individual freedom for collective order, or to stand by the administration in times of trouble, and therefore, the social contract between the government and its citizens became a dead letter.
The king was elected for life, but was not allowed hereditary succession or to name the new King. The szlachta believed this policy served as a check against tyranny; however, the negative consequences were never properly addressed. Since the election occurred only after the king's death, the transition period was never smooth. The lack of fixed timetables added to the uncertainty. Although the Primate officially served as Interrex during the interregnum his authority was subject to interpretation. The process resulted in the steady erosion of the king's power as each new king was forced to make debilitating concessions to the nobility in order to get elected. Unlike much of European nobility, the szlachta had no tradition of service to the crown and believed that their main loyalty was to their own privileged class. As the Crown weakened, the political system became dangerously unbalanced. Worse, the power vacuum created after the death of the king served as an open invitation to political intrigue and foreign intervention, a temptation that was rarely declined.
Another problem with the elected monarchy was the Polish gentry's distinct preference to select a foreigner as king; to elect one of their own was offensive to their sense of equality. Only four of eleven elected Kings were native Poles. It seems incomprehensible to the modern mind to trust one's country to a foreign national; remember, however, that 19th century nationalism did not exist yet. During the time of the Noble Republic kingship was considered a perfectly exportable commodity. A foreigner of noble blood was considered a kindred spirit more aligned with the szlachta's interests than a lesser-born native Pole. That Poles alternately served as kings of Hungary and Bohemia was not considered exceptional or contradictory as the limited scope of medieval politics was much more of a cosmopolitan than domestic affair. In fact, foreign-born kings were expected to bring to the table valuable military alliances, and therefore had a distinct advantage in the Republic's elections. The unfortunate tendency to seek foreign solutions to domestic problems was a major contributing factor in Poland's loss of sovereignty in the 18th century.
Even if a qualified Polish candidate was available, the nobility was reluctant to elevate a domestic candidate to king for fear of giving unfair advantage to local factions that might result in a hereditary monarchy. The fear of dynasty seems overstated, given the fact that dynasties in the Polish past were hardly tyrannical. Apparently, the szlachta did not consider that a King who aspires to create a dynasty has a vested interest in the future of the country.
Although one of Poland's greatest kings, Stephen Bathory, was of foreign birth, most non-native rulers proved disappointing, even disastrous. Particularly dangerous was the military adventurism of foreign kings that embroiled the Republic in virtually continuous warfare, frequently with little advantage to Poland. The Saxon Kings who ruled Poland for most of the 18th century were largely absentee and negligent rulers, but when they bothered with foreign affairs they formed alliances with countries distinctly hostile to Poland. Foreign control eventually corrupted the electoral process, often blatantly ignoring the will of the majority of Polish nobility. A weak king, subject to highly conditional loyalty, frequently misaligned with the Republic's interests, was not a component of effective government.
Paranoid oligarchic resistance to a strong central government produced an aristocratic parliamentary democracy made impotent by the unanimity rule, creating an internal power vacuum that could not support a structured society and served as an irresistible temptation for outside powers to fill.
Although "equality" was the catchphrase of the Nobility, it was a standing joke among the disenfranchised members of society. The szlachta were a very rights conscious group, but only for their own closed "estate". Even within the nobility fully half were denied full rights as women were not eligible to vote or hold office; of course, neither were they required to serve in the military. However, to the Republic's credit, noble women enjoyed many freedoms including inheritance and property rights that were typically denied in most of the 16th century world.
Whether the social contract outlined by the Confederation of Warsaw was based on man's inherent right to self-government or the desire of the gentry to control society is up for debate, but the fact that the majority of the Polish population was excluded from these lofty ideals casts doubt on the szlachta's altruistic motives. The primary function of Poland's government was to preserve the personal liberty of its citizens, rigidly defined as the szlachta, who comprised no more than ten percent of the population.
In fact, the nobility collectively referred to itself as narod, meaning "the nation". Everyone else was referred to as "the people", who were in many regards looked upon as mere possessions of the political state. Of course, no other society at the time came close to recognizing as many rights for as many people as Poland. Other sincere promoters of democratic ideals, such as ancient Athens or the American Founding Fathers, somehow reconciled the political equality of the ruling class with the enslavement of the underclass. Politics is after all the art of the possible that must be judged in historical context. But in Poland, as elsewhere, these inconsistencies eventually came to haunt their proponents.
Polish society was stratified not predominantly economically but functionally into five broad, largely hereditary "estates". Despite the vital role played by each estate, all but the Nobility, and the numerically insignificant Bishops in the Senate, were excluded from the institutions of government. Four of these estates (the Nobility, the Clergy, the Burghers, and the Jews) were at least partially autonomous, protected by their own royal charter. But most of society was populated by the fifth estate, the Peasants. Although some freeholding landed peasants were better off economically than the lower nobility, they possessed no rights. The vast majority of the peasants, however, were debilitatingly poor. Peasants were largely at the mercy of their szlachta overlords, particularly after 1521 when the royal courts no longer had jurisdiction on Noble land. Equality before the law, a basic requirement in any free society, was not possible because each estate had different laws, applied unequally.
The peasants, unable to protect themselves or seek redress from the king, became increasingly exploited. Polish government became dominated by what was in effect one special interest group, the Nobility. While earlier there was some mobility between the estates, most notably by an expansion of the szlachta's ranks by several vehicles of ennoblement, by the mid-17th century the estate structure was frozen in place, making the inequities even harder to bear. The unbalanced, ossified society eventually wobbled and spun out of control.
In an unfortunate feedback system, as noble power grew, so did the incentives to enserf the peasants. The dramatic increase in the Polish grain trade led to a phenomenon called "export-led serfdom". The lucrative Vistula trade supplying Polish grain to an eager and expanding world market provided irresistible incentives for landowners to extract more work from their virtually captive labor force. The szlachta, who controlled a monopoly on the land and the law, pressed their considerable advantage and inflated the already considerable inequities to obscene dimensions. Polish nobility bound the peasants to the land, stunting the growth of urban centers while demoting the masses from mere poverty to penury. The "normal" medieval social relationship between the peasants and their masters, which entailed mutual obligations, became perverted into an ingrained injustice in which one class ruthlessly exploited another. The magnates' huge gains had no appreciable trickle-down effects as neither the captive peasants nor the country at large benefited from the largely tax-free profits generated from the new grain economy.
While earlier Polish political development stressed the rise of broad rights for the entire noble class, as the Republic aged government was dominated by an all-powerful oligarchy of magnates interested in preserving their advantage. In the face of glaring inequities, even the Orwellian facade of, "we are all equal but some are more equal than others" within the privileged class became increasingly difficult to maintain. Wealth became concentrated in the hands of a few large landowners, creating powerful "families" that operated as states within the state. Making matters worse, the magnates themselves were deeply divided and often began making foreign policy based on their individual and conflicting self-interest.
Hence, the "golden freedoms" had a superficial appearance of modernity because they protected individual rights, relied on legal precedent, and emphasized limited government, but as applied in Poland these principles were actually ancient, a reaffirmation of a feudal system designed to limit the power of the king and keep the masses in check.
The Poles unwittingly created a magnatial oligarchy controlled by a relatively small number of self-anointed families, whose vices rivaled the autocratic governments they so feared. The same nobility that served as safeguards against political tyranny were the engineers of an economic tyranny that was every bit as oppressive to the majority of Poles.
Technically serfs did not exist in Poland; no one owned anyone and work was theoretically a contractual arrangement for rent, however, the peasants had no objective legal recourse and had little choice but to work on szlachta land. Although not technically serfs, it would have been difficult to explain the fine distinction to the peasants toiling in the fields.
The situation is reminiscent of how the lucrative cotton trade perpetuated and expanded what would otherwise have been the dying institution of slavery in the American South. Given the parallels, it is perhaps not a coincidence that the southern states, after withdrawing from the Union rather than submit to the will of the majority, called their new country "the Confederacy". Like the South, the over-reliance on cheap agrarian labor, which required no great technical, mechanical, or manufacturing skill, ruralized the nation and retarded industrial development. By the time the one-dimensional trade dwindled, undercut by foreign sources and lack of demand, society was trapped in an antiquated economic system unable to compete. American slavery and Polish serfdom developed at about the same time and lasted about as long. Like the Southern aristocracy, the Polish gentry entertained lavishly and developed elaborate social customs. Both borrowed huge sums for foreign manufactured goods to furnish their ostentatious estates. Both considered themselves a warrior class and established an elite lifestyle isolated from the central government based on the servitude of the rural labor force. Both displayed an exaggerated sense of honor and excessive sensitivity to any perceived slight, perhaps due to the unspoken contradiction between their lofty ideals and the blatant hypocrisy behind their wealth. To preserve their privileged lifestyles, both became involved in disastrous wars that destroyed their nation's sovereignty.
The powerful Polish economy that had been instrumental in allowing liberal political development faltered in the late 16th century for reasons other than the collapse of the grain trade. Spain's discovery of massive silver deposits in the New World caused the Polish silver florin to lose value, producing a damaging inflation. With their products worth less, magnates demanded that their peasants work more, further exacerbating social tension.
The vicious cycle of peasant enserfment was accelerated by the Polish tradition of partible inheritance. While most countries practiced primogeniture, in which the eldest son inherits the entire estate, the egalitarian Poles divided inheritance equally among all surviving children, including females. Although the Polish practice sounds admirable to modern ears, the tradition actually deepened impoverishment. Farms became unworkably divided, leaving an increasing number of landless rural laborers at the mercy of the magnates.
In addition to alienating Polish peasants, the Noble Republic was highly successful at creating long-term animosity among the minorities in its peripheral lands. Although the Poles were relatively tolerant, and certainly more benevolent than the later Russian and German occupiers, the Ukrainian and Belarusian people under the control of Polish landlords did not necessarily consider the Commonwealth one big happy family. These people were predominantly Eastern and Orthodox, while their masters were Western and Catholic. In some ways the Polish drive to spread its superior culture to the east is reminiscent of the German drive to the east, and it was about as well received. The cultural divide between the Polish lords and their Orthodox servants did not heal with time, and this rift remained a serious problem along Poland's eastern borders for centuries. In a sense the demise of the Republic was a result of the age-old problem of imperial over-reach to which multi-ethnic, multi-national, and multi-cultural empires are prone. The Habsburgs later grappled with the same problem, with a similar result.
Poland in the late 16th century became a polar anachronism, ahead of its time in one respect and behind it in another. While her neighbors developed centralized autocratic governments and distanced themselves from feudalism to lay the foundations of diversified, proto-capitalist economies, Poland was experimenting with decentralized democratic rule while resurrecting serfdom based on an antiquated economic model. Perhaps the greatest difference in these divergent developments was that in Poland the nobility remained in power longer than anywhere else, resulting in a persistence of feudal structures that promoted political institutions based on weak central authority. Poland's elite was freer than her neighbors', but she was less modern.
The szlachta's philosophy of minimal government was theoretically appealing, but its application in Poland proved disastrous. Weak central governments do not always fail, and absolutist governments do not always succeed, but extreme examples of either are typically short-lived.
In the final analysis, the problem with the Noble Republic was probably timing. Poland developed its hyper-liberal government at a time when her enemies were weak and she was strong. Poland had the luxury of idealism because she was wealthy and smugly confident of her national security, believing that the nobility could rise up as a spontaneous levée en masse and repel any invader. Polish nobles simply did not believe that their existence, or that of their state, could be seriously threatened, and therefore they refused to allow the king to form a standing army, which they feared might eventually be used against their interests. But the traditional feudal reliance on the szlachta for military protection was no longer adequate in the face of new realities, namely the professional standing armies of Poland's neighbors, and in any case the nobles often withheld their support based on self-centered interpretations of their Golden Freedoms.
This false sense of security led to a warped, or at least antiquated, political model: an uncompromising and extreme end member that ignored the dark side of human nature. Unconstrained democracy progressed to demagoguery and eventually decadence, weakening the Republic from within long before she fell prey to her neighbors. The Pogo dictum, "we have met the enemy and he is us," seems perfectly applicable to the Noble Republic.
Poland's national government, its powers dispersed to the point of impotency, lacked sufficient tax revenues even under normal circumstances and was bankrupted by an almost continual series of foreign-induced wars. The Golden Freedoms facilitated not only liberty but apathy, as the self-focused nobility was neither compelled by civic responsibility to make hard choices nor possessed of the power to implement the choices it dared to make. The end result was perhaps the nearest approximation of anarchy that any great nation has achieved.
But Poland did not drift into anarchy, rather it embraced political anarchy as a guiding principle, reflected in its motto Nierzadem Polska Stoi, it is by unrule that Poland stands. Poland chose anarchy because the nobles were convinced that the alternative was worse, and given the grotesque abuse of civil liberties in neighboring countries, perhaps they had a point. Miraculously the system functioned fairly well as long as the delicate inner workings of the Republic of Anarchy were protected by the hard outer shell of Polish militarism.
Particularly after the 17th century, Poland's predatory neighbors became increasingly powerful while the Republic steadily declined. While her neighbors developed modern centralized administrations, either as "enlightened" absolutism in France, Austria, and Prussia, or as naked despotism in czarist Russia, Poland maintained an essentially feudal system that was unable to centralize its institutions of administration, finance, or defense. The purposeful withering of state authority in the Commonwealth was not seen by its neighbors as an idealistic statement about the Golden Freedoms, but as a golden opportunity. Poland's inclusive nature was used by her enemies to infiltrate, emasculate, and ultimately destroy the Republic. Exploiting the weakness of the "Republic of Anarchy," they cynically used the Golden Freedoms against the Poles, recognizing that the love of individual freedom, taken to an extreme, threatens collective security.
On the eve of the partitions, the sometime king of Poland, Stanislas Leszczynski, recognized the consequences of the lack of an effective military, saying,
I reflect with dread the perils that surround us; what force have we to resist our neighbors? And on what do we found this extreme confidence which keeps us chained, slumbering in disgraceful repose? Do we trust to the faith of treaties? How many examples have we of the frequent neglect of even the most solemn agreements! Either we shall be the prey of some famous conqueror, or, perhaps, even the neighboring powers will combine to divide our states.
Unfortunately for Poland, her time of maximum vulnerability coincided with the French Revolution. The autocratic governments of Russia, Prussia, and Austria, which were deciding Poland's fate, feared a Jacobin revolution in Poland that might, by example, inspire their own oppressed people. In this context, Poland's harsh treatment and eventual demise were justified as self-defense.
Some historians fault Poland for being unwilling to reform its political system, but this is not entirely true. For centuries the szlachta were more concerned with maintaining privileges and ensuring only the "execution" of existing laws, but by the late 1700s it was evident that progress, not simply preservation, was required. By the time Poland realized that reform was needed, she was not allowed normal political development because her foreign masters saw it as a threat to their power.
Ironically, it was not complacency but the recognition that constructive change was needed that doomed Poland. Valiant attempts at internal reform, culminating in the first modern European constitution and only the second written constitution in history, post-dating the American Constitution by just four years, were brutally suppressed by the partitioning powers. Prussia and Russia could no more tolerate a free Poland as part of their empires than they tolerated the freedom of their own people. When Poland lost the ability to defend itself, all the precious personal liberties became meaningless in the face of collective enslavement. Ultimately, due to the nature of man, liberty is not possible without sovereignty.
Poland's experience was not unique. As Will Durant, the critically acclaimed author of the eleven-volume classic The Story of Civilization noted, "the freedom of individuals in society requires some regulation of conduct, the first condition of freedom is its limitation; make it absolute and it dies in chaos". Absolutism and the human condition never mix well.
The szlachta's uncompromising stance resulted in Poland's liberty being completely compromised. Polish patriots were so fearful of the theoretical tyranny of their leaders that they were easy prey for the actual tyranny of their neighbors, who simply gamed the system with their unscrupulous legality. The Poles could not balance the preservation of human rights with national security needs, which requires some reasonable compromises. For example, even in the rights-conscious United States, virtually everyone who travels on commercial jets submits to a warrantless search, a clear violation of constitutional rights, for obvious security reasons. Reality has a nasty way of intruding on our absolutes, yet the secular absolutism of Polish political idealism was pursued with a purity that rivaled any religious fervor.
Unfortunately, the freedom and welfare of the individual and the freedom and welfare of the nation are sometimes in conflict, and a society must choose which has priority under which circumstances. An interesting insight on this conundrum was offered by Stanislaw Staszic (1755-1826), a Polish priest, philosopher, statesman, and geologist. Defending his decision to support the provision in the 1791 Constitution that reinstated the hereditary monarchy, Staszic wrote:
...there was never any doubt, namely, that election [of a king] is more suitable to the freedom of the nation than succession to the throne...we should for the good of Poland, in order to prepare us for true freedom, for a length of time come under autocracy which would make us more equal to each other and erase these persistent superstitions and prejudices...True, succession to the throne is one step towards losing freedom. But the election of kings is halfway towards losing the nation. First, the nation − then freedom. First, life − then comfort.
Ungoverned liberty is always short-lived; maximum sustainable liberty requires a wise freedom anchored in individual responsibility and bound by law. But in the Polish-Lithuanian Commonwealth personal freedom superseded all other concerns. Without the boundaries provided by the existence and enforcement of law, individual freedom is ultimately not possible. Even the idealistic French revolutionaries recognized this basic requirement when they declared in the 1789 "Rights of Man and Citizen" that the limits of liberty must be determined by law. The Poles considered virtually every law an infraction of liberty, as perhaps every law is, but freedom paradoxically requires some accepted rules in order to exist. The paradox exists, in part, because liberal societies, founded on the bedrock of personal liberty, need the power of government to transform natural rights into civil rights. The rules of society must be enforced by an efficient and impartial government because ultimately goodwill alone is insufficient. Left unprotected by anything but honor, society will always be corrupted by a small but irreducible number of evil men. The trick is to make sure that the government is not itself corrupted by that same ever-present faction. In a perfect world, the limitation of laws and government would be unnecessary, but human nature is maddeningly consistent and can be ignored only at one's peril. Imperfect human society is still searching for the perfect mix of maximum personal freedom and minimum government constraint.
Perhaps with the Polish example in mind, the American Constitution established a federal government not only to secure the blessings of liberty, but "to establish justice, insure domestic tranquility, provide for the common defense, and promote the general welfare", elements woefully absent in the Noble Republic.
Pawel Jasienica, Calamity of the Realm: The Commonwealth of Both Nations II, Alexander Jordan, trans. (Miami, FL: American Institute of Polish Culture, 1992), 29.
Jędruch, Constitutions, Elections and Legislatures of Poland, 73.
Davies, God's Playground: A History of Poland, I, 153.
Davies, Heart of Europe: The Past in Poland's Present, 259.
Davies, God's Playground: A History of Poland, I, 154.
Davies, God's Playground: A History of Poland, I, 184.
Haller's Army-p384 Need book info
Taras, Consolidating Democracy in Poland, 29. The concept of the sovereignty of "the people" would have to wait another two hundred years
C.H. Haskins and R.H. Lord, Some Problems of the Peace Conference, (Cambridge MA.: PUBLISHER, 1920), 160-167.
Interestingly, the Magna Carta, touted as the cornerstone of civil liberties, was actually a very limited document that only applied to the privileged class, similar in many ways to the Pacta Conventa. At the time, the English King John (ruled 1199-1216) was exceedingly unpopular. He had been excommunicated by the pope, defeated in France, and vilified for punishing vassals without trial. The English barons demanded that the King accept a list of demands or face revolt. The concessions granted, or extorted, were specific to the special interests of the feudal class, not a general statement about the rights of man. It was only much later, during the English Civil War of 1642-1649, that the clause "no scutage or aid, save the customary feudal ones, shall be levied except by the common consent of the realm" was interpreted to mean that taxation without representation was tyranny. However, even the radicals of the 17th century believed that this requirement only meant that the King had to consult a council of barons and bishops before levying taxes. Crane Brinton, John B. Christopher and Robert Lee Wolff, Civilization in the West: Part 2 1600 to the Present, 4th ed. (Englewood Cliffs, NJ: Prentice-Hall, Inc., 1964), 150.
Janowski, Polish Liberal Thought before 1918, 5.
Lukowski and Zawadzki, A Concise History of Poland, 71.
Davies, God's Playground: A History of Poland, I,265.
Davies, God's Playground: A History of Poland, I, 273.
Davies, God's Playground: A History of Poland, I, 185.
This number is actually impressively high as in Western Europe at this time only about one percent of the population was considered "noble". Hungary was the only other country with an unusually large percentage of nobility, numbering four to five percent. Johnson, Central Europe: Enemies, Neighbors, Friends, 49.
An exception to this phenomenon was the Grand Duchy's 1588 Statute, which ennobled Jewish converts to Christianity. Snyder, The Reconstruction of Nations, 20.
Davies, God's Playground: A History of Poland, I, 215.
Davies, God's Playground: A History of Poland, I, 286.
James Fletcher, The History of Poland: From the Earliest Period to the Present Time (New York: Bradley Company Publishers, n.d.), 209.
Janowski, Polish Liberal Thought before 1918, 13-14. | http://josephpilsudski.com/polish-history-the-rise-and-fall-of-the-first-republic_282.html | 13 |
14 | Practice in Action
Finding and Solving Problems is an inquiry approach to learning that starts with posing a question or problem. The instructor uses questions to help students identify a real-world problem or issue that concerns them (for example, ways to improve recycling or water conservation in their school or community). Students are asked to develop strategies that employ technology tools to help solve the problem. This practice promotes critical thinking and supports the development of math and science content knowledge and skills.
The key to the success of this practice is identifying a manageable and interesting problem to solve. Start small, and consider expanding or extending the activity if it proves successful. Working with students to identify a problem that interests them will increase their sense of ownership and willingness to participate. Encourage students to explore examples of projects from the Internet or from their school-day classes. Find out which technology tools are needed to make the activity successful.
Form teams that are appropriate for the activity and your students' age and skill level. As the project progresses, make any necessary adjustments and look for extension opportunities. When the project is complete, evaluate and plan the next one.
Remember that assessing student skills, completing the activity, and determining computer needs are all part of the planning process. Getting Started: Considerations for Activity Planning (PDF) will help you get underway.
Finding and Solving Problems works because students are actively engaged in interdisciplinary, collaborative, open-ended, and challenging problems that are meaningful to them. Students can build independent thinking skills, and learn from one another. Further, aligning problem solving activities to the school day curriculum can enrich many content areas.
Planning Your Lesson
Great afterschool lessons start with having a clear intention about who your students
are, what they are learning or need to work on, and crafting activities that engage students while supporting their academic growth. Great afterschool lessons also require planning and preparation, as there is a lot of work involved in successfully managing kids, materials, and time.
Below are suggested questions to consider while preparing your afterschool lessons.
The questions are grouped into topics that correspond to the Lesson Planning
Template. You can print out the template and use it as a worksheet to plan and
refine your afterschool lessons, to share lesson ideas with colleagues, or to help in professional development sessions with staff.
Lesson Planning Template (PDF)
Lesson Planning Template (Word document)
What grade level(s) is this lesson geared to?
How long will it take to complete the lesson? One hour? One and a half hours? Will
it be divided into two or more parts, over a week, or over several weeks?
What do you want students to learn or be able to do after completing this activity? What skills do you want students to develop or hone? What tasks do they need to accomplish?
List all of the materials that will be needed to complete the activity.
Include materials that each student will need, as well as materials that students
may need to share (such as books or a computer). Also include any materials that students or instructors will need for record keeping or evaluation. Will you need to store materials for future sessions? If so, how will you do this?
What do you need to do to prepare for this activity? Will you need to gather
materials? Will the materials need to be sorted for students or will you assign students to be "materials managers"? Are there any books or instructions that you need to read in order to prepare? Do you need a refresher in a content area? Are there questions you need to develop to help students explore or discuss the activity? Are there props that you need to have assembled in advance of the activity? Do you need to enlist another adult to help run the activity?
Think about how you might divide up groups―who works well together? Which students could assist other peers? What roles will you assign to different members of the group so that each student participates?
Now, think about the Practice that you are basing your lesson on. Reread the
Practice. Are there ways in which you need to amend your lesson plan to better
address the key goal(s) of the Practice? If this is your first time doing the activity, consider doing a "run through" with friends or colleagues to see what works and what you may need to change. Alternatively, you could ask a colleague to read over your lesson plan and give you feedback and suggestions for revisions.
What to Do
Think about the progression of the activity from start to finish. One model that
might be useful—and which was originally developed for science
education—is the 5E's instructional model. Each phase of the learning
sequence can be described using five words that begin with "E": engage, explore, explain, extend, and evaluate. For more information, see
the 5E's Instructional Model.
Outcomes to Look For
How will you know that students learned what you intended them to learn through this
activity? What will be your signs or benchmarks of learning? What questions might you ask to assess their understanding? What, if any, product will they produce?
After you conduct the activity, take a few minutes to reflect on what took place.
How do you think the lesson went? Are there things that you wish you had done differently? What will you change next time? Would you do this activity again?
Friendship Bracelets (2-5)
An odd-numbered group of students is asked to pair up and create "friendship bracelets" for one another, and to problem-solve so that everyone can participate equally.
5 to 6 sessions, 45 to 60 minutes each
- Work together to solve a simple problem using everyday math and logic skills and a visual learning application
- Learn more about each other by creating friendship bracelets
- Learn about the purpose and design of friendship bracelets
Instructors should determine students' computer skill level and select the appropriate technology tools. Instructors should also be familiar with any software or equipment used in the lesson, or enlist the help of a volunteer who is.
- Computers (1 per 2 students is best)
- Word-processing software, and Inspiration or Kidspiration visual learning (or a similar application) installed on each computer
- Digital projector connected to instructor's computer (optional)
- Digital camera
- Selection of beads and cords for the bracelets; materials will depend on student ages
- Tape measures or rulers, crayons, and paper
Engage students by talking about friendship bracelets.
- Depending on the ages and skills of students, decide what type of bracelet they should create. Will this be a simple bracelet with colored beads, or a woven bracelet with a pattern? Do you want to include a brief introduction to designs and crafts of other cultures? For example, the knot-craft and hand weaving used to create traditional patterns stem from Native American handcrafts. For some helpful background information, see the Resources tab.
- Practice with the visual learning application. You can download a free 30-day trial of Kidspiration.
- Gather the beads, cord, and supplies needed for creating the bracelets.
- Practice by making several sample bracelets. The design and complexity will depend on the age and skill level of students.
Begin the activity.
- Ask if they have a friendship bracelet or have ever made one.
- Explain that friendship bracelets are:
- special and usually handmade;
- given from one person to another as a symbol of friendship;
- typically made from embroidery thread, wool, beads, and other materials—although styles will vary; and
- not meant to be removed from the wearer's wrist until they fall off naturally. By tradition, if a recipient wears a bracelet until it falls off naturally, he or she is entitled to a wish for having honored the hard work and love that went into the bracelet. If a bracelet is intentionally removed, however, the friendship is said to be over.
- Introduce the materials students will use for their bracelets and show the samples you made. You may distribute handouts that demonstrate how to make certain patterns and knots.
- Tell students that they are going to make friendship bracelets for each other.
Important: This activity only works with an odd number of students. If you have an even number of students, invite another student to participate, or participate yourself.
Find and explain the solution.
- Have them choose partners to discuss design ideas.
- In the pairing process, students discover that there are an odd number of students and one student is without a partner.
- Stop the activity and point out the dilemma. Then ask students how they can solve this problem. Tell them that they cannot work in a group of three, have one person make an extra bracelet, or have you be a partner (unless you are participating to make it an odd-numbered group).
- Have students discuss this amongst themselves, and ask them to share with the group any solutions they might propose.
Make the bracelets.
- Introduce Inspiration or Kidspiration software and show how it can be used to "map" their solutions. Depending on their age and skill level, help students create symbols, and demonstrate how to move them around the screen.
- Question students and guide them through their problem-solving logic. If appropriate for the group's size, you may use a computer connected to a digital projector for the demonstration and sharing of ideas and solutions.
- Arrange the group in a circle. Each student will make a bracelet for the person next to him or her, going around either clock-wise or counter-clockwise.
- Have students learn more about each other and take each other's picture with the digital camera.
- Demonstrate the steps for making bracelets and have each student make a bracelet for his or her identified partner. If time allows, students may make an additional bracelet for a person of their choice.
Note: Students may not want to make bracelets for partners they do not know. However, an important point of this activity is to meet someone new and learn something about him or her.
Write about the activity.
- After students have finished making their bracelets, have each write a short paragraph about the person for whom they made a bracelet. They can create the story on the computer and add digital pictures of each other and the bracelets.
- Explore other types of bracelets and types of materials. For example, find out more about cyclist Lance Armstrong's LIVESTRONG bracelet.
- Student engagement and participation
- Students share information about themselves and learn about others
- Ideas and comments that reflect an understanding of a problem, the use of problem-solving skills, and creative solutions
For more information and ideas to support this lesson, see the Resources tab.
Solution 1: List student names with the mapping software. Draw a vertical line connecting each name to the next, and then connect the last name to the first.
Solution 2: Arrange students' names in a circular pattern as shown below.
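For instructors who want to test the circular-pairing logic before class, the short Python sketch below is an optional aid (the roster names are made up, and the sketch is not part of the student activity). It prints who makes a bracelet for whom; any group size, odd or even, works the same way.

```python
# Hypothetical roster -- replace with your students' names.
students = ["Ana", "Ben", "Chloe", "Dev", "Ema"]

# Arrange the names in a circle: each student makes a bracelet for the next
# person, and the last student makes one for the first, so no one is left out.
pairs = [(students[i], students[(i + 1) % len(students)]) for i in range(len(students))]

for giver, receiver in pairs:
    print(f"{giver} makes a bracelet for {receiver}")
```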
Hide and Seek with Geocaching (3-12)
Students locate objects hidden outdoors using a handheld global positioning system (GPS) receiver, learning about longitude and latitude, and global navigation.
2 to 3 sessions of 60 to 90 minutes each
- Enhance and extend students' understanding of global geography
- Increase visual acuity
- Develop technology skills using handheld Global Positioning System (GPS) receivers
Instructors should determine students' understanding of longitude and latitude and their technology skill levels. Basic keyboarding skills will be required. Instructors should also have basic computer skills, basic geography skills, and experience operating both a digital projector and a handheld GPS receiver.
- Computers with Internet access
- Digital projector (optional)
- Handheld GPS receivers, which may be purchased from a local discount or sporting goods store (one GPS unit for every 2 to 3 students)
- Digital cameras (optional)
- Paper and pencil for field notes and journal entries
- Objects to hide in caches (for example, Mr. Potato Head parts, other trinkets)
Introduce students to geocaching:
- Become familiar with your GPS handheld unit by following the instructions that are included with it.
- Familiarize yourself with the resources available at the Geocaching Web site.
- Create caches and either report them to the Geocaching Web site registry, or record coordinates manually.
For younger students:
- Using a computer with Internet access and projector (optional), access the Geocaching Web site (www.geocaching.com) and provide an overview of this worldwide recreational activity.
- Generate and build on student interest by entering your afterschool location to see what caches might be nearby.
- If possible, arrange to take students to look for one of the caches registered on the Web site.
For older students:
- Before the session begins, hide parts of a Mr. Potato Head in caches in the schoolyard and record their coordinates.
- Divide students into small groups, each with an adult supervisor, or have them work together without an adult.
- Using the handheld GPS receivers and the coordinates provided, the group or groups should locate the caches. Discuss units of distance (for example, miles, feet) during the search process, and define geocaching vocabulary (for example, cache, satellite, waypoints, coordinates); a short distance-calculation sketch follows this list.
- Have students make an entry in the logbook at each cache to reinforce proper geocaching etiquette.
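If you want to show older students how a receiver estimates how far away a cache is, the Python sketch below uses the standard haversine formula with made-up coordinates. It is an optional illustration for the instructor, not a required part of the lesson.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in meters."""
    r = 6371000.0                                   # mean Earth radius, in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical cache and starting positions (decimal degrees).
cache = (45.5251, -122.6800)
start = (45.5231, -122.6765)
print(round(haversine_m(*start, *cache)), "meters to the cache")
```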
Back in the classroom:
- Before the session begins, create a cache or several caches in the schoolyard or in nearby locations. Record the coordinates from the GPS. Helpful hints on creating your first cache can also be found at the Geocaching Web site. This site also includes information on listing your cache so that others may find it. Note: You must be a registered user of the site to list a cache in the online registry.
- Divide students into groups of three, with one GPS receiver per group. In order to involve each student, assign roles, such as GPS handler, logbook keeper, and photographer (if you use cameras). Change roles so that everyone has a chance to use the GPS receiver and enter coordinates.
- Give each group coordinates to a cache and have them find it. Document the search using notes and photos (optional).
- Discuss the geocaching activity.
- Have older students write a description of the experience on a computer, including the search coordinates they used, and any photos of their find.
- If this was a registered cache, have students make an entry in the Geocaching Web site.
- Take geocaching beyond treasure hunting, and make it a nature study, creative writing opportunity, or other activity. For some ideas, see the Resources page.
- Understanding of plotting techniques and geographical terminology, including latitude and longitude
- Comfort using technology tools, including the Internet and handheld GPS receivers to enhance learning
Building Robotic Machines (8-12)
This lesson provides ideas for setting up a LEGO lab classroom, introducing students to robotics, and leading students in their first construction.
Multiple 60-minute sessions, twice a week (depending on the difficulty of the activity and the experience level of the instructor and students). Projects could require several weeks to complete.
- Apply practical math and scientific concepts while learning design, mechanical construction, and computer programming
- Learn to read and follow directions carefully
- Visualize, think, and problem solve in a three-dimensional perspective
- Discover properties and use of basic electronic light, rotation, touch, and temperature sensors
- Choose and purchase a LEGO system that is age appropriate and suitable to your budget. To help make this investment choice, see these reviews in the Consumer Guide for Afterschool Science Resources
- Storage space: If you are familiar with LEGO building sets, you know there are hundreds of pieces to organize and store. Therefore, having a secure place to store student projects and parts is essential. Experience suggests an investment in some type of storage cabinet with multiple drawers for organizing all parts.
- Work space: Most of the projects require small-group work, so you will need a workspace with tables and chairs to accommodate up to four students. For the first session, set up a large table in the center of room with several boxes and trays holding a variety of LEGO blocks, wheels, rods, and gadgets.
- Assistants or volunteers: Experienced instructors suggest one instructor or teaching assistant for every two teams. Teams can vary in size from two to four students. Collaborations among instructors and students are encouraged. Once students become experienced, they can serve as instructor assistants.
- Sheets of white paper and colored pencils for drawing
- Computer with Internet access and projection to a large screen or interactive whiteboard
Introduce the activity. What is a robot?
- Study the LEGO Teacher Guide for a clear understanding of the project and process you have chosen.
- Create a "Parts Inventory" of LEGO pieces in a Word document.
- Decide how students will be paired for the activity.
Introduce the project.
- Engage students by asking them what they think a robot is. Discuss what students know and believe about robots.
- Distribute paper and colored pencils.
- Ask students to draw a picture of a robot or robots using their imaginations.
- After students have completed their drawings, ask them to look at the robots they have drawn and consider the following questions:
How does your robot move?
Does it have arms? If so, does it have joints? Where?
Does it have any other moving pieces? How and where do they move?
Does your robot have a purpose?
- As students start to think differently about their drawings, walk around the room and ask individual students questions about their drawings.
- Ask students to think like an engineer or a scientist would think—more analytically and in more detail than they might normally think.
- Using the pictures previously bookmarked online, use a computer with projection to show students a variety of advanced robots. Ask students how they think these robots move and are powered. Ask students to visualize how their own arms, hands, legs, and feet move.
- Ask students to look at their robot drawings again and think more about their designs.
- Have students share what they might do differently now that they have seen pictures of other robots.
- With your LEGO "Parts Inventory" projected for all students to see, explain that as a scientist or engineer, it is important to keep tools and parts in the laboratory well organized and neat. Even though this class might not have a cabinet with drawers for each kind of piece, which would be ideal, there are bins for the different sizes and types of blocks, beams, plates, wheels, connector plugs, gears, axles, sensors, and lamps.
- Show students all the different parts as you talk and explain the organization of your classroom laboratory.
- Explain they will work in pairs and will share responsibility for gathering the necessary parts, keeping track of those parts, and returning them to their project bin with project instructions.
- Group students in teams of two for this project.
- Have students collect their materials from the parts trays.
- After they collect the materials and read the instructions, have students begin to build their beginning project.
- As students complete this introductory project, look at what they've made together as a class. Discuss what practical functions it might have in society.
- Ask how computer automations would improve their project.
- Ask students what they have learned as a result of this first effort and what they will do differently on their next LEGO project.
- Emphasize the importance of reading carefully and visualizing thoughtfully.
- Teacher guides, lesson plans, curriculum, worksheets, and video examples in various LEGO sets, which you can obtain from commercial publishers and the Internet, provide hours, if not years, of projects for students and teachers who are interested in technology and engineering.
- More advanced projects utilize motorized levers, gears, and pulleys as well as multiple types of sensors and advanced computer programming.
- Local and national robotic competitions will challenge students’ problem solving, practical math, scientific, mechanical construction, and computer programming skills.
- Students read and follow directions
- Students assemble a working robotic arm
- Students work responsibly with a teammate
- Students recognize the difference between a robot and robotics
For more information and ideas to support this lesson, see the Resources tab.
Ringstaff, C., & Kelley, L. (2003).
The learning return on our educational technology investment: A review of findings from research. San Francisco, CA: WestEd Regional Technology in Education Consortium in the Southwest.
George Lucas Educational Foundation. (2001).
Project-based learning research. Retrieved June 22, 2007, from http://www.edutopia.org/project-based-learning-research
Jonassen, D. H., & Stollenwerk, D. (1999).
Computers as mindtools for schools: Engaging critical thinking. NY: Pearson Education.
The following resources are related to the "Finding and Mapping with GPS" sample lesson.
Geocaching - The Official Global GPS Cache Hunt Site
This is the place to begin any geocaching project: register your cache and track travel bugs here.
Geocaching for Kids
Website authors share their experience in creating and finding caches with kids. They answer questions commonly associated with geocaching and offer their personal favorite caches.
GPS Product Reviews
Before purchasing GPS units for geocaching, check the product reviews on this site. An online forum also provides answers to questions about GPS and geocaching.
GPS in Education
The United States Geological Survey and the Rocky Mountain Mapping Center sponsor this site, which provides GPS lesson ideas (including specific directions and website links explaining GPS satellites and the Degree Confluence Project) as well as a mail group for teachers using GPS.
Find locations around the world
National Geographic Xpeditions
The following resources are related to the "Friendship Bracelets" sample lesson.
Maps for printing and copying
Wikipedia: Friendship Bracelet
A nice history and ideas for integrating other subjects into this project.
Student- and teacher-inspired ideas and step-by-step instructions.
Inspiration Software, Inc.
Lots of ideas for using this easy-to-use computer software.
The following resources are related to the "Building Robotic Machines" sample lesson.
Choose and purchase a LEGO system that is age appropriate and suitable to your budget. To help make this investment choice, see these reviews in the Consumer Guide for Afterschool Science Resources:
Mindstorms for Schools (LEGO)
Secondary Robotics Initiative | http://www.sedl.org/afterschool/toolkits/technology/pr_finding_solving.html | 13 |
14 | This topic applies to ArcGIS for Desktop Standard and ArcGIS for Desktop Advanced only.
Topology is a collection of rules that, coupled with a set of editing tools and techniques, enables the geodatabase to more accurately model geometric relationships. ArcGIS implements topology through a set of rules that define how features may share a geographic space and a set of editing tools that work with features that share geometry in an integrated fashion. A topology is stored in a geodatabase as one or more relationships that define how the features in one or more feature classes share geometry. The features participating in a topology are still simple feature classes—rather than modifying the definition of the feature class, a topology serves as a description of how the features can be spatially related.
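As a rough illustration of how such rules can be defined with geoprocessing tools, the sketch below builds a topology with arcpy. The geodatabase path, feature class names, and ranks are placeholder assumptions, and the rule strings should be checked against the tool documentation for your release.

```python
import arcpy

# Hypothetical feature dataset containing the simple feature classes.
fds = r"C:\data\landbase.gdb\Cadastral"

# Create the topology and register the feature classes that will share geometry.
topo = arcpy.CreateTopology_management(fds, "Cadastral_Topology")
arcpy.AddFeatureClassToTopology_management(topo, fds + r"\Parcels", 1, 1)
arcpy.AddFeatureClassToTopology_management(topo, fds + r"\Blocks", 2, 1)

# Describe how the features may share space: parcels must not overlap one
# another, and every parcel must be covered by a block.
arcpy.AddRuleToTopology_management(topo, "Must Not Overlap (Area)", fds + r"\Parcels")
arcpy.AddRuleToTopology_management(topo, "Must Be Covered By Feature Class Of (Area-Area)",
                                   fds + r"\Parcels", "", fds + r"\Blocks", "")
```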
Topology has long been a key GIS requirement for data management and integrity. In general, a topological data model manages spatial relationships by representing spatial objects (point, line, and area features) as an underlying graph of topological primitives—nodes, faces, and edges. These primitives, together with their relationships to one another and to the features whose boundaries they represent, are defined by representing the feature geometries in a planar graph of topological elements.
Topology is fundamentally used to ensure data quality of the spatial relationships and to aid in data compilation. Topology is also used for analyzing spatial relationships in many situations, such as dissolving the boundaries between adjacent polygons with the same attribute values or traversing a network of the elements in a topology graph.
Topology can also be used to model how the geometry from a number of feature classes can be integrated. Some refer to this as vertical integration of feature classes.
Ways that features share geometry in a topology
Features can share geometry within a topology. Here are some examples among adjacent features:
- Area features can share boundaries (polygon topology).
- Line features can share endpoints (edge-node topology).
In addition, shared geometry can be managed between feature classes using a geodatabase topology. For example:
- Line features can share segments with other line features.
- Area features can be coincident with other area features. For example, parcels can nest within blocks.
- Line features can share endpoint vertices with other point features (node topology).
- Point features can be coincident with line features (point events).
Parcels have commonly been managed using simple feature classes and geodatabase topology, so that the set of feature classes needed to model parcels, boundaries, corner points, and control points obey the required coincidence rules. Another way to manage parcels is with a parcel fabric, which automatically provides these layers for you. A fabric manages its internal topology, with no requirement to maintain a geodatabase topology or perform any topological editing for the set of layers used by parcels.
A key difference between parcels modeled as simple features and parcels in a fabric is that fabric parcel boundaries (lines in a fabric) are not shared—there is a complete set of lines on the boundary of each parcel; fabric lines for adjacent parcels overlap and are coincident.
Parcel fabrics may still participate in geodatabase topology; where overlapping boundary lines have differing geometry, the lines are cracked, and the topology graph is built as usual.
Two views: Features and topological elements
A layer of polygons can be described and used:
- As collections of geographic features (points, lines, and polygons)
- As a graph of topological elements (nodes, edges, faces, and their relationships)
This means that there are two alternatives for working with features—one in which features are defined by their coordinates and another in which features are represented as an ordered graph of their topological elements.
The evolution of geodatabase topology from coverages
Reading this large topic is not necessary to implement geodatabase topologies. However, you may want to spend some time reading this if you are interested in the historical evolution and motivations for how topology is managed in the geodatabase.
The genesis of Arc-node and Georelational
ArcInfo Workstation coverage users have a long history and appreciation for the role that topology plays in maintaining the spatial integrity of their data.
Here are the elements of the coverage data model.
In a coverage, the feature boundaries and points were stored in a few main files that were managed and owned by ArcInfo Workstation. The ARC file held the linear or polygon boundary geometry as topological edges, which were referred to as arcs. The LAB file held point locations, which were used as label points for polygons or as individual point features such as for a wells feature layer. Other files were used to define and maintain the topological relationships between each of the edges and the polygons.
For example, one file called the PAL file (which stands for Polygon-arc list) listed the order and direction of the arcs in each polygon. In ArcInfo Workstation, software logic was used to assemble the coordinates for each polygon for display, analysis, and query operations. The ordered list of edges in the PAL file was used to look up and assemble the edge coordinates held in the ARC file. The polygons were assembled during runtime when needed.
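To make the assembly idea concrete, here is a toy Python sketch (not the actual coverage file format) of reconstructing a polygon's coordinates from an ARC-style edge table and a PAL-style ordered edge list.

```python
# "ARC file": each edge is stored once, as a list of coordinates.
arcs = {
    1: [(0, 0), (4, 0)],          # shared southern edge
    2: [(4, 0), (4, 3), (0, 3)],  # eastern and northern edges
    3: [(0, 3), (0, 0)],          # western edge
}

# "PAL file": each polygon is an ordered list of arc ids; a negative id
# would mean the arc is traversed in reverse.
pal = {"A": [1, 2, 3]}

def assemble(polygon_id):
    coords = []
    for arc_id in pal[polygon_id]:
        pts = arcs[abs(arc_id)]
        if arc_id < 0:
            pts = list(reversed(pts))
        # Drop the duplicated junction point between consecutive arcs.
        coords.extend(pts if not coords else pts[1:])
    return coords

print(assemble("A"))   # the full boundary of polygon A, built on the fly
```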
The coverage model had several advantages:
- It used a simple structure to maintain topology.
- It enabled edges to be digitized and stored only once and shared by many features.
- It could represent polygons of enormous size (with thousands of coordinates) because polygons were really defined as an ordered set of edges (arcs).
- The Topology storage structure of the coverage was intuitive. Its physical topological files were readily understood by ArcInfo Workstation users.
An interesting historical fact: Arc, when coupled with the table manager Info, was the genesis of the product name ArcInfo Workstation, which led to all subsequent Arc products in the Esri product family—ArcInfo, ArcIMS, ArcGIS, and so on.
Coverages also had some disadvantages:
- Some operations were slow because many features had to be assembled on the fly when they needed to be used. This included all polygons and multipart features such as regions (the coverage term for multipart polygons) and routes (the term for multipart line features).
- Topological features (such as polygons, regions, and routes) were not ready to use until the coverage topology was built. If edges were edited, the topology had to be rebuilt. (Note: Partial processing was eventually used, which required rebuilding only the changed portions of the coverage topology.) In general, when edits are made to features in a topological dataset, a geometric analysis algorithm must be executed to rebuild the topological relationships regardless of the storage model.
- Coverages were limited to single-user editing. Because of the need to ensure that the topological graph was in synchronization with the feature geometries, only a single user could update a topology at a time. Users would tile their coverages and maintain a tiled database for editing. This enabled individual users to lock down and edit one tile at a time. For general data use and deployment, users would append copies of their tiles into a mosaicked data layer. In other words, the tiled datasets they edited were not directly used across the organization. They had to be converted, which meant extra work and extra time.
Shapefiles and simple geometry storage
In the early 1980s, coverages were seen as a major improvement over the older polygon and line-based systems in which polygons were held as complete loops. In these older systems, all the coordinates for a feature were stored in each feature's geometry. Before the coverage and ArcInfo Workstation came along, these simple polygon and line structures were used. These data structures were simple but had the disadvantage of double digitized boundaries. That is, two copies of the coordinates of the adjacent portions of polygons with shared edges would be contained in each polygon's geometry. The main disadvantage was that GIS software at the time could not maintain shared edge integrity. Plus, storage costs were enormous, and each byte of storage came at a premium. During the early 1980s, a 300 MB disk drive was the size of a washing machine and cost $30,000. Holding two or more representations of coordinates was expensive, and the computations took too much compute time. Thus, the use of a coverage topology had real advantages.
During the mid 1990s, interest in simple geometric structures grew because disk storage and hardware costs in general were coming down while computational speed was growing. At the same time, existing GIS datasets were more readily available, and the work of GIS users was evolving from primarily data compilation activities to include data use, analysis, and sharing.
Users wanted faster performance for data use (for example, don't spend computer time to derive polygon geometries when we need them. Just deliver the feature coordinates of these 1,200 polygons as fast as possible). Having the full feature geometry readily available was more efficient. Thousands of geographic information systems were in use, and numerous datasets were readily available.
Around this time, Esri developed and published its shapefile format. Shapefiles used a very simple storage model for feature coordinates. Each shapefile represented a single feature class (of points, lines, or polygons) and used a simple storage model for the feature's coordinates. Shapefiles could be easily created from coverages as well as many other geographic information systems. They were widely adopted as a de facto standard and are still massively used and deployed to this day.
A few years later, ArcSDE pioneered a similar simple storage model in relational database tables. A feature table could hold one feature per row with the geometry in one of its columns along with other feature attribute columns.
A sample feature table of state polygons is shown below. Each row represents a state. The shape column holds the polygon geometry of each state.
This simple features model fits the SQL processing engine very well. Through the use of relational databases, we began to see GIS data scale to unprecedented sizes and numbers of users without degrading performance. We were beginning to leverage RDBMS for GIS data management.
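A minimal sketch of reading such a table with arcpy (the geodatabase path and field name are assumptions) shows the appeal: each row delivers a complete, ready-to-use geometry with no topological assembly step.

```python
import arcpy

# Hypothetical feature class of state polygons stored as simple features.
states = r"C:\data\usa.gdb\States"

with arcpy.da.SearchCursor(states, ["STATE_NAME", "SHAPE@"]) as cursor:
    for name, shape in cursor:
        # The full polygon geometry is carried in the row's shape column.
        print(name, round(shape.area, 1))
```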
Shapefiles became ubiquitous, and using ArcSDE, this simple features mechanism became the fundamental feature storage model in RDBMSs. (To support interoperability, Esri was the lead author of the OGC and ISO simple features specification).
Simple feature storage had clear advantages:
- The complete geometry for each feature is held in one record. No assembly is required.
- The data structure (physical schema) is very simple, fast, and scalable.
- It is easy for programmers to write interfaces.
- It is interoperable. Many wrote simple converters to move data in and out of these simple geometries from numerous other formats. Shapefiles were widely applied as a data use and interchange format.
The main disadvantage was that the data integrity readily provided by topology was not as easy to enforce on simple features. As a consequence, users applied one data model for editing and maintenance (such as coverages) and another for deployment (such as shapefiles or ArcSDE layers).
Users began to use this hybrid approach for editing and data deployment. For example, users would edit their data in coverages, CAD files, or other formats. Then, they would convert their data into shapefiles for deployment and use. Thus, even though the simple features structure was an excellent direct use format, it did not support the topological editing and data management of shared geometry. Direct use databases would use the simple structures, but another topological form was used for editing. This had advantages for deployment. But the disadvantage was that data would become out-of-date and have to be refreshed. It worked, but there was a lag time for information update. Bottom line—topology was missing.
What GIS required and what the geodatabase topology model implements now is a mechanism that stores features using the simple feature geometry but enables topologies to be used on this simple, open data structure. This means that users can have the best of both worlds—a transactional data model that enables topological query, shared geometry editing, rich data modeling, and data integrity, but also a simple, highly scalable data storage mechanism that is based on open, simple feature geometry.
This direct use data model is fast, simple, and efficient. It can also be directly edited and maintained by any number of simultaneous users.
The topology framework in ArcGIS
In effect, topology has been considered as more than a data storage problem. The complete solution includes the following:
- A complete data model (objects, integrity rules, editing and validation tools, a topology and geometry engine that can process datasets of any size and complexity, and a rich set of topological operators, map display, and query tools)
- An open storage format using a set of record types for simple features and a topological interface to query simple features, retrieve topological elements, and navigate their spatial relationships (that is, find adjacent areas and their shared edge, route along connected lines)
- The ability to provide the features (points, lines, and polygons) as well as the topological elements (nodes, edges, and faces) and their relationships to one another
- A mechanism that can support the following
- Massively large datasets with millions of features
- Ability to perform editing and maintenance by many simultaneous editors
- Ready-to-use, always available feature geometry
- Support for topological integrity and behavior
- A system that goes fast and scales for many users and many editors
- A system that is flexible and simple
- A system that leverages the RDBMS SQL engine and transaction framework
- A system that can support multiple editors, long transactions, historical archiving, and replication
In a geodatabase topology, the validation process identifies shared coordinates between features (both in the same feature class and across feature classes). A clustering algorithm is used to ensure that the shared coordinates have the same location. These shared coordinates are stored as part of each feature's simple geometry.
This enables very fast and scalable lookup of topological elements (nodes, edges, and faces). This has the added advantage of working quite well and scaling with the RDBMS's SQL engine and transaction management framework.
During editing and update, as features are added, they are directly usable. The updated areas on the map, dirty areas, are flagged and tracked as updates are made to each feature class. At any time, users can choose to topologically analyze and validate the dirty areas to generate clean topology. Only the topology for the dirty areas needs rebuilding, saving processing time.
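In a geoprocessing script, the validate step might look like the hedged sketch below (paths are placeholders): only the dirty areas within the validated extent are re-analyzed, and any remaining rule violations can be exported for review.

```python
import arcpy

# Hypothetical topology created earlier in the same geodatabase.
topo = r"C:\data\landbase.gdb\Cadastral\Cadastral_Topology"

# Analyze the dirty areas and rebuild clean topology for them.
arcpy.ValidateTopology_management(topo, "Full_Extent")

# Write remaining rule violations to point, line, and polygon error feature
# classes so editors can inspect and fix them.
arcpy.ExportTopologyErrors_management(topo, r"C:\data\landbase.gdb", "Cadastral_errors")
```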
The results are that topological primitives (nodes, edges, and faces) and their relationships to one another and their features can be efficiently discovered and assembled. This has several advantages:
- Simple feature geometry storage is used for features. This storage model is open, efficient, and scales to large sizes and numbers of users.
- This simple features data model is transactional and is multiuser. By contrast, the older topological storage models will not scale and have difficulties supporting multiple editor transactions and numerous other GIS data management workflows.
- Geodatabase topologies fully support all the long transaction and versioning capabilities of the geodatabase. Geodatabase topologies need not be tiled, and many users can simultaneously edit the topological database—even their individual versions of the same features if necessary.
- Feature classes can grow to any size (hundreds of millions of features) with very strong performance.
- This topology implementation is additive. You can typically add this to an existing schema of spatially related feature classes. The alternative is that you must redefine and convert all your existing feature classes to new data schemas holding topological primitives.
- There need only be one data model for geometry editing and data use, not two or more.
- It is interoperable because all feature geometry storage adheres to simple features specifications from the Open Geospatial Consortium and ISO.
- Data modeling is more natural because it is based on user features (such as parcels, streets, soil types, and watersheds) instead of topological primitives (such as nodes, edges, and faces). Users will begin to think about the integrity rules and behavior of their actual features instead of the integrity rules of the topological primitives. For example, how do parcels behave? This will enable stronger modeling for all kinds of geographic features. It will improve our thinking about streets, soils types, census units, watersheds, rail systems, geology, forest stands, land forms, physical features, and on and on.
- Geodatabase topologies provide the same information content as maintained topological implementations—either you store a topological line graph and discover the feature geometry (like coverages) or you store the feature geometry and discover the topological elements and relationships (like geodatabases).
In cases where users want to store the topological primitives, it is easy to create and post topologies and their relationships to tables for various analytic and interoperability purposes (such as users who want to post their features into an Oracle Spatial warehouse that stores tables of topological primitives).
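Where a topology needs to be built or extended programmatically rather than through the editing tools, an outline along these lines might be used (again with arcpy; the dataset, feature class, and the single rule shown are illustrative assumptions only):

# Minimal sketch: create a topology in an existing feature dataset and add one rule.
# All names, paths, and the chosen rule are placeholder assumptions.
import arcpy

dataset = r"C:\data\landbase.gdb\LandRecords"
parcels = dataset + r"\Parcels"

topology = arcpy.management.CreateTopology(dataset, "LandRecords_Topology").getOutput(0)
arcpy.management.AddFeatureClassToTopology(topology, parcels, 1, 1)   # xy rank, z rank
arcpy.management.AddRuleToTopology(topology, "Must Not Overlap (Area)", parcels)
arcpy.management.ValidateTopology(topology, "Full_Extent")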
At a pragmatic level, the ArcGIS topology implementation works. It scales to extremely large geodatabases and multiuser systems without loss of performance. It includes validation and editing tools for building and maintaining topologies in geodatabases. It includes rich and flexible data modeling tools that enable users to assemble practical, working systems on file systems, in any relational database, and on any number of schemas. | http://resources.arcgis.com/en/help/main/10.1/0062/006200000002000000.htm | 13 |
27 | Electronic Warfare and Radar Systems Engineering Handbook
- Transforms / Wavelets -
TRANSFORMS / WAVELETS
Signal processing using a transform analysis for calculations is a technique used to simplify or accelerate problem solution. For example, instead of dividing two large numbers, we might convert them to logarithms, subtract them, then look up the anti-log to obtain the result. While this may seem a three-step process as opposed to a one-step division, consider that long-hand division of a four-digit number by a three-digit number, carried out to four places, requires three divisions, three or four multiplications, and three subtractions. Computers process additions or subtractions much faster than multiplications or divisions, so transforms are sought which provide the desired signal processing using these faster operations.
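A quick numerical sketch of that idea in Python (not part of the handbook; the two numbers are arbitrary) shows the divide-by-subtracting-logarithms trick:

# Sketch of the logarithm "transform": divide two numbers by subtracting their logs.
import math

a, b = 8675.0, 309.0                          # arbitrary example values
log_result = math.log10(a) - math.log10(b)    # subtract in the transform (log) domain
quotient = 10 ** log_result                   # take the anti-log to get the answer back
print(round(quotient, 4), round(a / b, 4))    # both print 28.0744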
Other types of transforms include the Fourier transform, which is used to decompose or separate a waveform into a sum of sinusoids of different frequencies. It transforms our view of a signal from time based to frequency based. Figure 1 depicts how a square wave is formed by summing certain particular sine waves. The waveform must be continuous, periodic, and almost everywhere differentiable. The Fourier transform of a sequence of rectangular pulses is a series of sinusoids. The envelope of the amplitude of the coefficients of this series is a waveform with a Sin X/X shape. For the special case of a single pulse, the Fourier series has an infinite series of sinusoids that are present for the duration of the pulse.
Digital Sampling of Waveforms
In order to process a signal digitally, we need to sample the signal frequently enough to create a complete “picture” of the signal. The discrete Fourier transform (DFT) may be used in this regard. Samples are taken at uniform time intervals as shown in Figure 2 and processed.
If the digital information is multiplied by the Fourier coefficients, a digital filter is created as shown in Figure 3. If the sum of the resultant components is zero, the filter has ignored (notched out) that frequency sample. If the sum is a relatively large number, the filter has passed the signal. With the single sinusoid shown, there should be only one resultant. (Note that being “zero” or relatively large may just mean below or above the filter's cutoff threshold.)
Figure 4 depicts the process pictorially: The vectors in the figure just happen to be pointing in a cardinal direction because the strobe frequencies are all multiples of the vector (phasor) rotation rate, but that is not normally the case. Usually the vectors will point in a number of different directions, with a resultant in some direction other than straight up.
In addition, sampling normally has to be performed at or above twice the highest frequency of interest (the Nyquist rate); otherwise ambiguous (aliased) results may be obtained.
Figure 4. Phasor Representation
Fast Fourier Transforms
One problem with this type of processing is the large number of additions, subtractions, and multiplications which are required to reconstruct the output waveform. The Fast Fourier transform (FFT) was developed to reduce this problem. It recognizes that because the filter coefficients are sine and cosine waves, they are symmetrical about 90, 180, 270, and 360 degrees. They also have a number of coefficients equal either to one or zero, and duplicate coefficients from filter to filter in a multibank arrangement. By waiting for all of the inputs for the bank to be received, adding together those inputs for which coefficients are the same before performing multiplications, and separately summing those combinations of inputs and products which are common to more than one filter, the required amount of computing may be cut drastically.
- The number of computations for a DFT is on the order of N squared.
- The number of computations for a FFT when N is a power of two is on the order of N log2 N.
For example, in an eight filter bank, a DFT would require 512 computations, while an FFT would only require 56, significantly speeding up processing time.
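As a rough feel for that scaling, the short Python sketch below (not part of the handbook) prints the generic N² and N·log2 N rules of thumb, which use a simpler bookkeeping than the handbook's figures, and then runs an actual 8-point FFT with NumPy:

# Rule-of-thumb operation counts for a DFT (N^2) versus an FFT (N log2 N),
# followed by an actual FFT of 8 samples using NumPy.
import math
import numpy as np

for n in (8, 64, 1024):
    print(n, "points:", n ** 2, "ops (DFT) vs", int(n * math.log2(n)), "ops (FFT)")

samples = np.sin(2 * np.pi * 2 * np.arange(8) / 8)   # two cycles across eight samples
spectrum = np.fft.fft(samples)
print(np.round(np.abs(spectrum), 3))                 # energy appears in bins 2 and 6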
Windowed Fourier Transform
The Fourier transform is continuous, so a windowed Fourier transform (WFT) is used to analyze non-periodic signals as shown in Figure 5. With the WFT, the signal is divided into sections (one such section is shown in Figure 5) and each section is analyzed for frequency content. If the signal has sharp transitions, the input data is windowed so that the sections converge to zero at the endpoints. Because a single window is used for all frequencies in the WFT, the resolution of the analysis is the same (equally spaced) at all locations in the time-frequency domain.
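A windowed (short-time) Fourier transform of this kind can be experimented with using SciPy; the sketch below is only illustrative, and the sample rate, test signal, and window length are arbitrary choices:

# Sketch of a windowed Fourier transform: FFTs taken over successive fixed-length sections.
import numpy as np
from scipy.signal import stft

fs = 1000.0                                    # sample rate in Hz (arbitrary)
t = np.arange(0, 1.0, 1 / fs)
x = np.where(t < 0.5,
             np.sin(2 * np.pi * 50 * t),       # 50 Hz during the first half second
             np.sin(2 * np.pi * 200 * t))      # 200 Hz during the second half

f, seg_times, Z = stft(x, fs=fs, nperseg=128)  # one fixed window for all frequencies
print(Z.shape)                                 # (frequency bins, time segments)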
The FFT works well for signals with smooth or uniform frequencies, but it has been found that other transforms work better with signals having pulse type characteristics, time-varying (non-stationary) frequencies, or odd shapes.
The FFT also does not distinguish sequence or timing information. For example, if a signal has two frequencies (a high followed by a low or vice versa), the Fourier transform only reveals the frequencies and relative amplitude, not the order in which they occurred. So Fourier analysis works well with stationary, continuous, periodic, differentiable signals, but other methods are needed to deal with non-periodic or non-stationary signals.
The wavelet transform has been evolving for some time. Mathematicians theorized its use in the early 1900s. While the Fourier transform deals with transforming the time domain components to the frequency domain and with frequency analysis, the wavelet transform deals with scale analysis, that is, creating mathematical structures that provide varying time/frequency/amplitude slices for analysis. The analyzing function of this transform is a portion (one or a few cycles) of a complete waveform, hence the term wavelet.
The wavelet transform has the ability to identify frequency (or scale) components, simultaneously with their location(s) in time. Additionally, computations are directly proportional to the length of the input signal. They require only N multiplications (times a small constant) to convert the waveform. For the previous eight filter bank example, this would be about twenty calculations, vice 56 for the FFT.
In wavelet analysis, the scale that one uses in looking at data plays a special role. Wavelet algorithms process data at different scales or resolutions. If we look at a signal with a large "window," we would notice gross features. Similarly, if we look at a signal with a small "window," we would notice small discontinuities as shown in Figure 6. The result in wavelet analysis is to "see the forest and the trees." A way to achieve this is to have short high-frequency fine scale functions and long low-frequency ones. This approach is known as multi-resolution analysis.
For many decades, scientists have wanted more appropriate functions than the sines and cosines (base functions) which comprise Fourier analysis, to approximate choppy signals. (Although Walsh transforms work if the waveform is periodic and stationary). By their definition, sine and cosine functions are non-local (and stretch out to infinity), and therefore do a very poor job in approximating sharp spikes. But with wavelet analysis, we can use approximating functions that are contained neatly in finite (time/frequency) domains. Wavelets are well-suited for approximating data with sharp discontinuities.
The wavelet analysis procedure is to adopt a wavelet prototype function, called an "analyzing wavelet" or "mother wavelet." Temporal analysis is performed with a contracted, high-frequency version of the prototype wavelet, while frequency analysis is performed with a dilated, low-frequency version of the prototype wavelet. Because the original signal or function can be represented in terms of a wavelet expansion (using coefficients in a linear combination of the wavelet functions), data operations can be performed using just the corresponding wavelet coefficients as shown in Figure 7.
If one further chooses the best wavelets adapted to the data, or truncates the coefficients below some given threshold, the data is sparsely represented. This "sparse coding" makes wavelets an excellent tool in the field of data compression. For instance, the FBI uses wavelet coding to store fingerprints. Hence, the concept of wavelets is to look at a signal at various scales and analyze it with various resolutions.
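The sparse-coding idea can be tried with the PyWavelets package (an outside tool, not something the handbook prescribes); the wavelet, decomposition depth, and threshold below are arbitrary assumptions:

# Sketch of wavelet sparse coding: decompose, discard small coefficients, reconstruct.
import numpy as np
import pywt

t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 5 * t) + (t > 0.5) * 1.0          # smooth tone plus a sharp step

coeffs = pywt.wavedec(signal, "db4", level=5)                 # multi-level decomposition
flat = np.concatenate(coeffs)
print(np.sum(np.abs(flat) > 0.1), "of", flat.size, "coefficients exceed the threshold")

kept = [pywt.threshold(c, 0.1, mode="hard") for c in coeffs]  # zero the small coefficients
approx = pywt.waverec(kept, "db4")[: signal.size]             # rebuild from the sparse set
print(np.max(np.abs(approx - signal)))                        # error from discarding them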
Analyzing Wavelet Functions
Fourier transforms deal with just two basis functions (sine and cosine), while there are an infinite number of wavelet basis functions. The freedom in choosing the analyzing wavelet is a major difference between the two types of analyses and is important in determining the results of the analysis. The “wrong” wavelet may be no better (or even far worse) than Fourier analysis. A successful application presupposes some expertise on the part of the user: some prior knowledge about the signal is generally needed in order to select the most suitable wavelet and adapt its parameters to the signal. Some of the more common wavelet families are shown in Figure 8. There are several wavelets in each family, and they may look different than those shown. Somewhat longer in duration than these functions, but significantly shorter than infinite sinusoids, is the cosine packet shown in Figure 9.
Wavelet Comparison With Fourier Analysis
While a typical Fourier transform provides frequency content information for samples within a given time interval, a perfect wavelet transform records the start of one frequency (or event), then the start of a second event, with amplitude added to, or subtracted from, the base event.
Wavelets are especially useful in analyzing transients or time-varying signals. The input signal shown in Figure 9 consists of a sinusoid whose frequency changes in stepped increments over time. The power of the spectrum is also shown. Classical Fourier analysis will resolve the frequencies but cannot provide any information about the times at which each occurs. Wavelets provide an efficient means of analyzing the input signal so that frequencies and the times at which they occur can be resolved. Wavelets have finite duration and must also satisfy additional properties beyond those normally associated with standard windows used with Fourier analysis. The result after the wavelet transform is applied is the plot shown in the lower right. The wavelet analysis correctly resolves each of the frequencies and the time when it occurs. A series of wavelets is used in example 2.
Example 2. Figure 10 shows the input of a clean signal, and one with noise. It also shows the output of a number of “filters” with each signal. A 6 dB S/N improvement can be seen from the d4 output. (Recall from Section 4.3 that 6 dB corresponds to doubling of detection range.) In the filter cascade, the HPFs and LPFs are the same at each level. The wavelet shape is related to the HPF and LPF in that it is the “impulse response” of an infinite cascade of the HPFs and LPFs. Different wavelets have different HPFs and LPFs. As a result of decimating by 2, the number of output samples equals the number of input samples.
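A comparable filtering experiment can be sketched with a Daubechies-4 ("db4") wavelet, one common reading of the "d4" label; the noise level and threshold rule used here are assumptions rather than the handbook's own settings:

# Sketch of wavelet denoising, loosely mirroring the d4 filter-cascade example.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sign(np.sin(2 * np.pi * 4 * t))              # a "choppy" square-wave signal
noisy = clean + 0.3 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest details
thr = sigma * np.sqrt(2 * np.log(noisy.size))           # "universal" threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: t.size]

print(np.std(noisy - clean), np.std(denoised - clean))  # compare residual error before and after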
Wavelet Applications
Some fields that are making use of wavelets are: astronomy, acoustics, nuclear engineering, signal and image processing (including fingerprinting), neurophysiology, music, magnetic resonance imaging, speech discrimination, optics, fractals, turbulence, earthquake prediction, radar, human vision, and pure mathematics applications. See the October 1996 IEEE Spectrum article entitled “Wavelet Analysis”, by Bruce, Donoho, and Gao.
This HTML version may be printed but not reproduced on websites. | http://www.rfcafe.com/references/electrical/ew-radar-handbook/transforms-wavelets.htm | 13 |
23 | Table of Contents
Tags are elements of the HTML language. Almost every kind of tag has an opening symbol and a closing symbol. For example, the <HEAD> tag identifies the beginning of heading information. It also has a closing tag </HEAD>.
This element tells browsers that the file is an HTML document. Each HTML document starts with the tag <HTML>. This tag should be the first thing in the document. It has an associated closing tag </HTML>, which must be the last tag in the file.
The head contains important information about the document.
The title tag is an important tag. It is used to display a title on the top of your browser window. Both the opening and the closing tags go between the head tags.
The following example shows how to use the tags:
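A representative example (the title text is only a placeholder):
<HEAD>
<TITLE>HTML For Beginners</TITLE>
</HEAD>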
Another tag that can be added in the head is a <META> tag. It is used to help search engines index a page. There are several different meta names.
The author meta:
<META NAME="author" CONTENT="Nongjian Zhou">
The description meta:
CONTENT="A very easy tutorial for HTML beginners">
The keyword meta. Note that keywords should always be separated with commas:
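A representative example (the keyword list is only a placeholder):
<META NAME="keywords" CONTENT="HTML, tutorial, beginners">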
The following example shows how these tags are coded:
<head>
<title>HTML For Beginners</title>
<meta name="description"
content="A very easy tutorial for HTML beginners">
</head>
The Body Tag is used to identify the start of the main portion of your webpage. Between <BODY> </BODY> tags you will place all images, links, text, paragraphs, and forms. We will explain each tag that is used within the body of the HTML file.
Character, Paragraph and Position
There are six levels of headings, numbered 1 through 6. These tags are used for the characters in the outlines. The biggest heading is <H1> and smallest one is <H6>:
Paragraph tags (<P> opening tag and </P> closing tag) allow you to place a paragraph. For example:
The </P> closing tag may be omitted.
The defaulted position is left justification. You can also use "ALIGN" for justification:
<p ALIGN="center"> Paragraph will be centered</p>
<p ALIGN="left"> Paragraph will be left justified</p>
<p ALIGN="right">Paragraph will be right justified</p>
The <CENTER> tag allows you to center text on the page.
<center><p> Paragraph will be centered</p></center>
The <BR> tag breaks the line, so whatever follows it starts on the next line. The following is an example:
<p>Welcome To<br> My Homepage!</p>
This tag adds a horizontal line or divider to your web site. An <HR> tag makes the following divider:
The <hr> tag can be set as:
<hr width="450" align="right" size="5">
You can add extra spaces in your text by using the non-breaking space entity &nbsp;.
You can use the <BLOCKQUOTE> tag to indent a block of text by moving in both the left and right sides of the paragraph.
<H1>Welcome To John's Homepage!</H1>
The <PRE> tag preformats the text of a paragraph so that the Web browser displays exactly what you typed. For example:
Item Price quantity
A 34.99 23
B 25.95 13
The comment tag looks like this:
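A representative comment (the text inside is arbitrary):
<!-- This is a comment -->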
Nothing inside the comment tags will show up when your page is viewed.
Character styles include physical and logical character styles, and the font Face, Size, and Color attributes. The following is the character style table:
- <B> </B> : Make text bold.
- <I> </I> : Make text italic.
- <U> </U> : Make text underline.
- <SUP> </SUP> : Make text superscript.
- <SUB> </SUB> : Make text subscript.
- <TT> </TT> : Make text teletype.
- <STRONG> </STRONG> : Indicate the text is very important.
- <EM> </EM> : Indicate the text is important.
- <CITE> </CITE> : Indicate that the text is from a book or other document.
- <ADDRESS> </ADDRESS> : Indicate that the text is an address.
- <DFN> </DFN> : Indicate that the text is a definition.
- <SAMP> </SAMP> : Indicate that the text is a sequence of literal characters.
- <KBD> </KBD> : Indicate that the text is keyboard input.
- <VAR> </VAR> : Indicate that the text is a variable.
- <CODE> </CODE> : Indicate that the text is code.
The <FONT> tag's FACE, SIZE, and COLOR attributes control the typeface, size, and color of text:
- FACE : With no face given, text displays in the default font (Times New Roman) of the Web browser. You can type a list of fonts separated by commas (for example, Helvetica, Arial, Courier); the text will display in the first listed font found on the browser's system. If a specified font is not available on the browser's system, another font will be substituted.
- SIZE : Takes the values 1 through 7 (3 is the default), formatting text with 7 sizes where 7 is the largest size and 1 is the smallest; the largest setting is the same as 7 and the smallest the same as 1.
- COLOR : Takes "#xxxxxx" or a color name (White, Red, Blue and others) and makes the text a different color.
The tags below have the effect shown on the text in between.
<TT>Monospaced typewriter text</TT>
This <SUB>makes a subscript.</SUB>
This <SUP>makes a superscript.</SUP>
<FONT FACE="Arial">This is a test</FONT>
<FONT COLOR="#00FF00">Text is in the color of Green</FONT>
<FONT SIZE="+2">This is a test</FONT>
You may use this tag to set default font face, size or color for your page and save your time of coding. For example:
<basefont face="Arial" size="7" color="red">
There are three kinds of lists in HTML:
Unordered lists <UL></UL>
Ordered lists <OL></OL>
Definition lists <DL></DL>
This list starts with an opening list <UL> tag and ends the list with a closing list </UL> tag. Between the <UL> and </UL>, you enter the <LI> (list item) tag followed by the individual item; no closing </LI> tag is needed. For example:
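One possible list, with placeholder items:
<UL>
<LI> College
<LI> High School
<LI> Elementary School
</UL>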
In the web browser, the above code would appear as a bulleted list of the three items.
An ordered list is similar to an unordered list, except it uses <OL> instead of <UL>:
The output is:
- High School
- Elementary School
A definition list starts with <DL> and ends with </DL>. This list consists of alternating a definition term and a definition definition. The definition term is enclosed in <DT> </DT> and should precede the definition definition. The definition definition is enclosed in <DD> </DD>. So, a whole definition list looks like:
<DT> term </DT>
<DD> definition </DD>
<DT> term </DT>
<DD> definition </DD>
Links allow you to navigate from one page to another on the internet or on your local machine. Before you add a link to your page you need the URL of another web site or the path of the local file that you want to link to. The link tag can also be used to link to an e-mail address. To link to another file in your current directory, use <A HREF="name.html"> anchor text </A>. For example:
<A HREF="bscInfo.html">Basic Information</A>
If you want to link to a file that is in another directory, you can write the code like this:
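One possible form, where the directory name is a placeholder:
<A HREF="tutorial/bscInfo.html">Basic Information</A>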
You can create links from your webpage to other webpages on internet:
<A HREF="http://internetcollege.virtualave.net/">Internet College</A>
If you want to link to an email address so that clicking it starts the mail program, you can write the link like this:
<A HREF="mailto:email@example.com">Email us</A>
If a file has a large size, you may want to create links to different parts of the page. To do that, first you must leave a pointer to the place in the file you want to link to. The pointer looks like <A NAME="xyz">. Then use <A HREF="#xyz"> tags. For example, you want to have a link from the section D to the section "My current project" of your page. Right before "My current project" you need to type <A NAME="M">. At the section D of your page you add the following link: <A HREF="#M">. The # symbol tells your browser to look for the link within the same document instead of looking for another file. You can use any number or letter to replace "M":
<A NAME="M"></A>My current projects
<A HREF="#M"></A>Click here to see my projects</A>
You can link to any place in other documents by the same way:
<A HREF="people.html#F3">Faculty Infomation</A>
You also can link a part of another page on the Internet if you can put a pointer <A NAME=""> in it:
Most Web browsers can display images that are in GIF or JPEG format. To include an image, enter:
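The general form, where ImageName stands for the image's file name or URL:
<IMG SRC="ImageName">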
For example: <IMG SRC="monky.gif"> The <IMG> tag is used to define an image. This tag does not have a closing tag. The IMG part tells the browser to add an image, The SRC tells your browser where to find the image. You should include two other attributes on <IMG> tags to tell your browser the size of the images. The HEIGHT and WIDTH attributes let your browser set aside the appropriate space (in pixels) for the images. For example:
<IMG SRC="monky.gif" HEIGHT=80 WIDTH=100>
You can put an image on the left or right of a page by using ALIGN. For example:
<IMG SRC="ImageName" ALIGN="right">
By default the bottom of an image is aligned with the following text. You can align images to the top, bottom or middle of a paragraph by using one of three values: TOP, MIDDLE, or BOTTOM. For example:
<IMG SRC="monky.gif" ALIGN="top">
Note: You must use "align", not "valign" to set for TOP, MIDDLE, BOTTOM. It's different from the table alignment. We can use "vspace" and "hspace" to adjust space around the picture:
<IMG SRC="monky.gif" vspace="50" hspace="80">
The ALT attribute is one of IMG attributes. You can use the ALT attribute to specify text to be displayed instead of an image. For example:
<IMG SRC="monky.gif" ALT="[monky]">
In this example, if someone cannot see the image, at least they will be able to read the name of the image and know that it is a monkey, because the word "[monky]" is shown in its place.
An image can be used as hyperlinks just like plain text. The following is the HTML code:
<A HREF="animal.html"><IMG SRC="monky.gif"></A>
The blue border that surrounds the image indicates that it's a clickable hyperlink. If you do not want to display this border, you can add the BORDER attribute and setting it to zero:
<A HREF="animal.html"><IMG SRC="monky.gif" BORDER=0></A>
You can load an image from another webpage into your page. To display an image from someone else's page, you need to find its URL:
You also can use an image as a background. The tag to include a background image is included in the <BODY> statement as an attribute:
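A representative example, with a placeholder file name:
<BODY BACKGROUND="background.gif">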
A large inline image would slow down the loading of the main document. To avoid it, you may have an image open as an external image. To include a reference to an external image, enter:
<A HREF="ImageName">link anchor</A>
You can also use a smaller image as a link to a larger image. Enter:
<A HREF="LargerImageName"><IMG SRC="SmallImageName"></A>
You may want to have a specific color for the background, text, links, visited links, and active links. In HTML, colors are coded as a 6-digit hexadecimal RGB (red, green, blue) number, with each pair being a hexadecimal value in the range 00-FF. For example, 000000 is black (no color at all), FFFFFF is white (all three colors fully saturated), FF0000 is bright red, 0000FF is bright blue, and 00FF00 is bright green. You must put the "#" sign before the actual code. You can use the attributes of the <BODY> tag to change the color of text, links, vlinks (visited links), and alinks (active links). For example:
<BODY bgcolor="#FFFFFF" text="#000000"
link="#0000FF" vlink="#800000" alink="#808000">
You can also use the name of the color instead of the corresponding RGB value to indicate some basic colors. For example, "black", "red", "blue", and "green" are all valid for use in place of RGB values. Coloring specific text is done very much like changing the font size. The tag is like:
<FONT color="code"> text </FONT>
This tag can be combined with the font size. For example:
<FONT color="#00FF00" size="+3"> text </FONT>
The format of a table is:
<TABLE>
<TR> <TD> Table Entry </TD> ... <TD> Table Entry </TD> </TR>
<TR> <TD> Table Entry </TD> ... <TD> Table Entry </TD> </TR>
</TABLE>
The whole table is opened and closed with <TABLE> </TABLE>. Each row is encapsulated in <TR> </TR>. Within the row are cells, enclosed in <TD> </TD>. There can be as many rows and columns as you want and as will fit on the screen. The browser will autoformat the rows, vertically centering cell contents if necessary. If you want a cell to span more than one column, enclose it in <TD COLSPAN=X> </TD>, where X indicate the number of columns to span. Similarly, <TD ROWSPAN=X> </TD> will cause the cell to span X rows. A border can be placed around all the cells by using <TABLE BORDER=X> </TABLE>, where X is the number of pixels thick the border should be. Let's see an example:
<CENTER><TABLE BORDER=1 WIDTH="62%" HEIGHT=90>
<TD WIDTH=82><CENTER> Name</CENTER></TD>
<TD WIDTH=82><CENTER>John Lee</CENTER></TD>
<TD WIDTH=82><CENTER>Cherry Heitz</CENTER></TD>
<TD WIDTH=91> <CENTER>908743</CENTER></TD>
The value of width and height can be "xx%" or XX. For example: WIDTH="80%" or WIDTH=450. "xx%" allow the table size changing while the window size is changing. The value of Border can be zero. In this case, the table will have no border. You can make a table looking like this:
The following is the code of this table.
<TABLE BORDER=10 CELLSPACING=10 CELLPADDING=2>
The CELLSPACING attribute refers to the space between cells and should be in pixels. The CELLPADDING attribute refers to the spacing within the cell in pixels (the space between the cell walls and the contents of the cell).
- <TABLE> </TABLE> : defines a table in HTML. If the BORDER attribute is present, your browser displays the table with a border.
- <CAPTION> </CAPTION> : defines the caption for the title of the table. The default position of the title is centered at the top of the table. The attribute ALIGN=BOTTOM can be used to position the caption below the table.
NOTE: Any kind of markup tag can be used in the caption.
- <TR> </TR> : specifies a table row within a table. You may define default attributes for the entire row: ALIGN (LEFT, CENTER, RIGHT) and/or VALIGN (TOP, MIDDLE, BOTTOM). See Table Attributes at the end of this table for more information.
- <TH> </TH> : defines a table header cell. By default the text in this cell is bold and centered. Table header cells may contain other attributes to determine the characteristics of the cell and/or its contents. See Table Attributes at the end of this table for more information.
- <TD> </TD> : defines a table data cell. By default the text in this cell is aligned left and centered vertically. Table data cells may contain other attributes to determine the characteristics of the cell and/or its contents. See Table Attributes at the end of this table for more information.
Table Attributes:
- ALIGN (LEFT, CENTER, RIGHT) : Horizontal alignment of a cell.
- VALIGN (TOP, MIDDLE, BOTTOM) : Vertical alignment of a cell.
- COLSPAN=n : The number (n) of columns a cell spans.
- ROWSPAN=n : The number (n) of rows a cell spans.
- NOWRAP : Turn off word wrapping within a cell.
Forms allow the user to enter information. For example, you can use forms to collect user's names and email addresses. Forms begin with the tag <FORM> and end with </FORM>.
<FORM ACTION="path/script.pl" METHOD="">
Two attributes you should set for your form are the form ACTION and METHOD:
<FORM ACTION="http://www.abc.com/cgi-bin/login.pl" METHOD="post">
You can use "input" for single line information:
<INPUT TYPE="input" NAME=name SIZE=##>
<INPUT TYPE="input" NAME="email" SIZE=26>Your Email Address
<INPUT TYPE="input" NAME="name" SIZE=26>Your Name
<INPUT TYPE="input" NAME="subject" SIZE=26>Subject
Here is what the result shows:
<input size="26" name="email" /> Your Email Address
<input size="26" name="firstname" /> Your Name
<input size="26" name="subject" /> Subject
The value of size is in characters, so "SIZE=26" means the width of the input box is 26 characters.
Text Area can be as big as you'd like. Text Area begins with <TEXTAREA NAME=name ROWS=## COLS=##>and end with </TEXTAREA>. For example:
<TEXTAREA Rows=2 Cols=25 NAME="comments"></TEXTAREA>
The result is:
<textarea cols="25" name="comments" />
You can use radio buttons to ask a question with one answer. For example, if you wanted to ask "Which picture do you like?" and you wanted to have the choices "monky", "flower", "girl", "building", you would type:
<INPUT TYPE="radio" checked NAME="picture" VALUE="monky">Monky<P>
<INPUT TYPE="radio" NAME="picture" VALUE="flower">Flower<P>
<INPUT TYPE="radio" NAME="picture" VALUE="girl">Girl<P>
<INPUT TYPE="radio" NAME="picture" VALUE="building">Building<P>
The Result is:
<input checked="true" type="radio" name="picture" /> Monky
<input type="radio" name="picture" /> Flower
<input type="radio" name="picture" /> Girl
<input type="radio" name="picture" /> Building
Checkboxes let the user check things from a list. The form is:
<INPUT TYPE="checkbox" NAME="name" VALUE="text">
Notice that the difference between check boxes and radio buttons is that any number of check boxes can be checked at one time while only one radio button can be checked at a time. For example, if you wanted to ask "Which picture do you like?" and you allow any number of check boxes to be checked at one time, you would type:
<INPUT TYPE="checkbox" NAME="picture" VALUE="monky">Monky<P>
<INPUT TYPE="checkbox" NAME="picture" VALUE="flower">Flower<P>
<INPUT TYPE="checkbox" NAME="picture" VALUE="girl">Girl<P>
<INPUT TYPE="checkbox" NAME="picture" VALUE="building">Building<P>
The result is:
Which picture do you like?
<input type="checkbox" name="picture" /> Monky
<input type="checkbox" name="picture" /> Flower
<input type="checkbox" name="picture" /> Girl
<input type="checkbox" name="picture" /> Building
Submit and Reset
Other button types include submit and reset. "submit" is the button the user presses to send in the form. "reset" clears the entire form so the user can start over. For example:
<INPUT TYPE="submit" NAME="submit" VALUE="Send">
<INPUT TYPE="reset" NAME="reset" VALUE="Clear">
The result is:
<input type="submit" name="submit" /><input type="reset" name="reset" />
This type allows users to type in text, but instead of displaying the characters they type, asterisks are displayed:
<INPUT TYPE="password" NAME="pass" SIZE="20">
Another way to ask a question with only one answer is by using a pull-down menu. For example:
How old are you?
<OPTION SELECTED >16-21
The result is:
How old are you?
<select size="1" name="age"> <option>1-15</option>
<option>31-45</option> <option>46-65</option> <option>66-80</option>
There are two kinds of scroll-down menus. With the first kind, you can only select one item:
How old are you?
<SELECT NAME="age" SIZE=5>
The result is:
How old are you?
<select size="5" name="age"> <option value="1-15">1-15</option>
<option value="16-21">16-21</option> <option value="22-30">22-30</option>
<option value="31-45">31-45</option> <option value="46-65">46-65</option>
<option value="66-80">66-80</option> <option value="80-up">80-up</option></select>
The other kind allows you to select one or more items by holding down Shift. For example:
What is your favorite thing?
(Hold <i>shift</i> to select more that one)
<SELECT NAME="reading" MULTIPLE size="3">reading
The key attribute is MULTIPLE.
With frames, you can put a number of HTML pages into a single window, with each frame displaying its own page. Frames start and end with the <FRAMESET></FRAMESET> tags. The <FRAMESET> tag can have two modifiers, ROWS and COLS, to define how big the frames will be. For example:
<frame name="banner" scrolling="no" noresize
<frame name="contents" target="main"
<frame name="main" src="home.htm">
<p>This page uses frames, but your browser doesn't
Let's explain each element:
rows="64,*" means that the the first frame will take up 64 rows of the window and the second frame will take up the rest. An asterisk means that the row will take up whatever space is left. You can use percentage to replace length. For example: cols="30%,60%"
<frame> defines each individual frame.
name="..." gives the frame a name.
src="..." tells which page will be loaded in the frame.
target="..." allows you to make links appear in specific frames or windows.
scrolling="yes|no|auto" allows you to control the scroll bars on the frame. "yes" forces the frame to always have scroll bars. "no" forces the frame to never have scroll bars. "auto" allows the browser to decide if scroll bars are necessary. The default value is "auto".
noresize allows you to keep the frame from being resizable by the viewer.
<noframes></noframes> is used to create a frameless alternative. When the page is viewed by a browser that does not support frames, everything EXCEPT what is between the <noframes> tags is ignored.
There are also some "magic" TARGETs.
"_blank" will always open the link in a new window.
"_top" will open the link in the whole page, instead of in a single frame.
"_self" makes the link open in the frame it's called from. This is useful when the <BASE...> tag is used.
"_parent" opens the link in the immediate frameset parent of the frame the link is called from.
<A HREF="ah.html" TARGET="_blank">text</A>
And, TARGET can also be added to the <FORM> tag to make the output from the script got to the specified frame or window.
HTML for CodeProject Articles
(Added by Chris Maunder)
If you wish to submit articles to the Code Project, and you want to see your article up ASAP, then the easier you make it for us, the faster it gets posted.
We use style sheets for our articles, so you do not need to add any formatting at all. Typically we only use <h1> - <h5> for headings, <p> for paragraphs, <code> for function names within text paragraphs, and <pre> for blocks of code. That's it - no fancy fonts, no colors - it is all taken care of for you.
For more information on posting articles see the submission guidelines. | http://www.codeproject.com/Articles/775/HTML-For-Beginners?fid=1344&df=90&mpp=10&sort=Position&spc=None&tid=3407250 | 13 |
19 | The Dawes Plan (as proposed by the Dawes Committee, chaired by Charles G. Dawes) was an attempt in 1924 to solve the reparations problem, which had bedeviled international politics following World War I.
The Allies' occupation of the Ruhr industrial area contributed to the hyperinflation crisis in Germany. The plan provided for their leaving the Ruhr, and a staggered payment plan for Germany's payment of war reparations. Because the Plan resolved a serious international crisis, Dawes shared the Nobel Peace Prize in 1925 for his work.
It was an interim measure and proved unworkable; the Young Plan was adopted in 1929 to replace it.
Background: World War I Europe
The initial German debt default
At the conclusion of World War I, the Triple Entente included in the Treaty of Versailles a plan for reparations to be paid by Germany. The amount of these initial payments was reduced from 269 to 226 billion German gold marks in 1921, but in 1923 Germany defaulted on its obligation to deliver further amounts of coal and steel. In response to this, French and Belgian troops occupied the Ruhr River valley inside the borders of Germany. This occupation of the centre of the German coal and steel industries outraged the German people. They passively resisted the occupation, and the economy suffered, significantly contributing to the hyperinflation that followed in Germany.
The Dawes Committee is established
To simultaneously defuse this situation and increase the chances of Germany resuming reparation payments, the Allied Reparations Commission asked Dawes to find a solution fast. The Dawes committee, which was urged into action by Britain and the United States, consisted of ten informal expert representatives, two each from Belgium (Baron Maurice Houtart, Emile Francqui), France (Jean Parmentier, Edgard Allix), Britain (Sir Josiah C. Stamp, Sir Robert M. Kindersley), Italy (Alberto Pirelli, Federico Flora), and the United States (Dawes and Owen D. Young). It was entrusted with finding a solution for the collection of the German reparations debt, which was determined to be 132 billion gold marks, as well as declaring that America would provide loans to the Germans so that they could make reparations payments to Britain, France, and the other Allied powers.
Main points of the Dawes Plan
In an agreement of August 1924, the main points of The Dawes Plan were:
- The Ruhr area was to be evacuated by Allied occupation troops.
- Reparation payments would begin at one billion marks the first year, increasing annually to two and a half billion marks after five years.
- The Reichsbank would be reorganized under Allied supervision.
- The sources for the reparation money would include transportation, excise, and custom taxes.
The Dawes Plan relied on capital lent to Germany by a consortium of American investment banks, led by the Morgan Guaranty Trust Company, under supervision by the US State Department. The German economic state was precarious. The Dawes plan was based on the help of loans from the US that were unrelated to the previous war.
The plan was accepted by Germany, which was in no position to refuse, and by the Triple Entente, and went into effect in September 1924. German business began to rebound during the mid-1920s and it made prompt reparation payments. Regulators realized that the German economy could not long sustain the enormous annual payments, which the Allies had deliberately set at a crushing level. As a result, the Young Plan was substituted in 1929.
Results of the Dawes Plan
The Dawes Plan provided short-term economic benefits to the German economy and softened the burdens of war reparations. By stabilizing the currency, it brought increased foreign investments and loans to the German market. But, it made the German economy dependent on foreign markets and economies. As the U.S. economy developed problems under the Great Depression, Germany and other countries involved economically with it also suffered. The Allies owed the US debt repayments for loans.
After World War I, this cycle of money from U.S. loans to Germany, which made reparations to other European nations, who paid off their debts to the United States, locked the western world's economy into that of the U.S.
Dawes shared the Nobel Peace Prize in 1925, in recognition of his work on the Plan.
See also
World War I
World War II
- Industrial plans for Germany
- Morgenthau Plan, 1945–47
- Marshall Plan, 1948–51
- Agreement on German External Debts, debt agreement, 1953
- Noakes, Jeremy. Documents on Nazism, 1919-1945, pg 53
- Rostow, Eugene V. Breakfast for Bonaparte U.S. national security interests from the Heights of Abraham to the nuclear age. Washington, D.C: National Defense UP, For sale by the Supt. of Docs., U.S. G.P.O., 1993.
Further reading
- Gilbert, Felix (1970). The End of the European Era: 1890 to the present. New York: Norton. ISBN 0-393-05413-6.
- McKercher, B. J. C. (1990). Anglo-American Relations in the 1920s: The Struggle for Supremacy. Edmonton: University of Alberta Press. ISBN 0-88864-224-5.
- Schuker, Stephen A. (1976). The End of French Predominance in Europe: The Financial Crisis of 1924 and the Adoption of the Dawes Plan. Chapel Hill: University of North Carolina Press. ISBN 0-8078-1253-6. | http://en.wikipedia.org/wiki/Dawes_Plan | 13 |
14 | Social psychology can be defined as “the scientific investigation of how the thoughts, feelings and behaviours of individuals are influenced by the actual, imagined or implied presence of others” (Gordon Allport, 1935).
unit 1: social psychology
Section A: Obedience
1.2 Milgram’s Study of Obedience (1963)
1.3 Evaluation of Milgram’s Study of Obedience
1.4 Variations of the Milgram Experiment
1.5 Meeus and Raaijmakers (1986)
1.6 Agency Theory
1.7 Hofling et al. (1966)
Section B: Prejudice
1.8 Social Identity Theory as an Explanation of Prejudice
1.9 Tajfel et al. (1970, 1971)
1.10 Sherif et al. (1954)
1.11 Reicher and Haslam (2006)
1.12 Asch (1951, 1952, 1956)
Unit 1 Key Issues
Unit 1 Revision Notes
Unit 1 Questions
To obey someone means to follow direct orders from an individual more often than not in a position of authority. There are three types of obedience in general:
- compliance – following instructions without necessarily agreeing with them (an example of this might be wearing a school uniform – although you don’t want to, you comply with the rules and do anyway because it causes you no harm)
- conformity – adopting the attitudes and behaviours of others, even if they are against an individual’s own inclinations (an example of this might be the Nazis during the Holocaust, they were instructed to do what they did, and some of them may not have wanted to do it but conformed to the rules anyway)
- internalising – this is carrying out orders with agreement
The term destructive obedience refers to the idea of an individual following the orders which they consider to be immoral, which will cause them a lot of distress and regret. This often occurs with conformity.
Taking the example of the Holocaust further, think of Adolf Eichmann. He was the officer probably most responsible for what happened during the Holocaust, and he always said that he only did what he did because he was carrying out orders. Whether or not that was true, this is an example of how obedience can work, and it is particularly frightening because it makes people wonder whether they would do the same thing if it ever happened again and they were in his position. This thought is what has encouraged numerous psychologists to carry out studies into the nature of obedience, probably the most famous being Stanley Milgram, who was specifically concerned about potential repetitions of the Holocaust: he wanted to test whether the Germans in particular were different from other people by first studying obedience in other populations.
1.2 milgram’s study of obedience (1963)
Aim: To investigate how far people will go in obeying an authority figure
1 Volunteers responded to an advertisement in a paper for an experiment at Yale which investigated the effects of punishment in learning, they were paid $4.50 for participating
2 Via a fixed lottery, the subjects were chosen to play the role of teacher, while an actor (a confederate, or accomplice, of the experimenter), posing as another volunteer participant, became the learner
3 The learner was strapped into a chair and had electrodes attached to him, and the teacher was informed that the shocks would result in no permanent damage. To prove the equipment was working, the subject (teacher) received an initial 45 volt shock themselves
4 The teacher is taken next door to the shock generator room, where they are told to administer, over an intercom, a shock of increasing severity to the learner for each incorrect answer he gives in a word game based on memory
5 The actor frequently gave wrong answers and would receive a shock for each one, each time the voltage would increase by 15 volts. After each shock, a recording of a painful scream was played back to the teacher over the intercom
6 After 300 volts there was silence from the learner – as far as the teacher knew, he was either unconscious or dead
7 The experiment came to an end when the teacher refused to continue or they reached the full voltage (450V)
8 After the experiment finished, the teacher was fully debriefed about the true nature of the experiment and was reintroduced to the learner, who had come to no harm
Milgram chose 40 males between the ages of 20 and 50 with a wide range of jobs from the New Haven area. Using only males avoided any interference from a particular reluctance of men to harm a woman
The learner was a 47-year old American-Irish actor who acted as ‘Mr Wallace’ – a mild-mannered and likeable accountant. He was an average person
The experimenter watched the teacher as he administered the shocks, and if the teacher hesitated because they found it uncomfortable, he would use one of his standardised prompts from “please continue” to “you must go on.” He was a 31-year old dressed in a grey lab coat to give the appearance of an important, authoritative figure. He would be impassive during the experiment. The experimenter would not force the teacher to continue, but would sternly encourage them to carry on
Levels of obedience expected
When psychology students and professional psychologists were asked what percentage of the people participating in the experiment would go right through and administer the highest voltage of shock (450 volts – lethal), the answers ranged from 1 to 3; the mean value was 1.2
Levels of obedience obtained
When the study was carried out:
- 65% of participants continued to the maximum shock level of 450 volts
- Not one participant stopped the experiment before 300 volts
According to Milgram himself, the degree of tension reached extremes for some subjects as some were “observed to sweat, tremble, stutter, bite their lips, groan and dig their fingers into their flesh.” What is interesting is how these quite clear signs of body language show that the study was making them uncomfortable, and even though they were under no obligation to continue (the experimenter wasn’t forcing them to continue), most subjects obeyed the experimenter throughout the entire 450 volts, simply because he appeared to be a figure of authority.
“One sign of tension was the regular occurrence of nervous laughing fits… Full-blown, uncontrollable seizures were observed for 3 subjects. On one occasion we observed a fit so violently convulsive that it was necessary to call a halt to the experiment. In post experimental interviews, subjects took pains to point out that they were not sadistic types, and that the laughter did not mean they enjoyed shocking the victim.” Milgram, 1963
Generalisability refers to the idea that the findings can be applied to the target population as a whole
Reliability refers to the idea that repeating the experiment would obtain similar or identical findings
Application refers to the idea that the findings can be useful in a real-life application in society
Validity refers to the idea that results should measure what they initially were supposed to measure
Ethics refers to the idea that an experiment should be carried out whilst taking into consideration ethical grounds
In terms of generalisability, the test subjects were all males within a specific age group, so the data obtained from the experiment cannot necessarily apply to a whole plethora of people. However, Milgram purposely chose not to use all college students, but instead wanted a range of men with varied jobs to get a good range of data. His experiment was reliable, because the experiment was repeated a number of times, and different variations of the studies went out. Milgram experimented with changes in gender and nationality. Other psychologists (Sheridan and King, 1972) even tried altering the species, using animals as the learners (victims).
Can the findings from Milgram’s experiment be applied to society and be useful in everyday situations? The supposed experiment which the subjects believed they were originally signing up for would have been, experimenting on the effect of punishment on learning, in terms of memory and forgetfulness. However, what uses did the findings from the data have that are implemented today?
Milgram’s study was well standardised and obedience was accurately operationalised as the amount of voltage given – so the study was experimentally valid. However, two psychologists, Orne and Holland (1968) said that they believed the subjects knew that they were not causing the learners any harm. Because the experiment was an artificial test, and because the test subjects were aware that they were being studied, it was argued that the study lacked “mundane realism” and was therefore not ecologically valid. However, one might argue that because the subjects were not actually aware of what the real study was investigating, the nature of the subjects was more natural, as they were less suspecting that it was their part being investigated, even if the environment of the university was not a natural place.
You might also say that because the test subjects were completely unaware of the true nature of the experiment, it was not an ethical study. This may also be the case because the experience the subjects went through may have a negative effect on them post-investigation when they realised how they behaved.
1.3 evaluation of milgram’s study of obedience
The main measure of how reliable a psychological study is will more often than not be its replicability. Milgram used a standardised procedure for each participant – for example, the same script was used by the learner and experimenter; the same rooms were used during the experiment; and identical equipment was used each time. This ensured that all the participants had a similar experience, which reduced bias in the experiment. The strong controls meant that the studies could be repeated, to test whether the findings were reliable – and the experiment was, indeed, repeated by Milgram himself, among other psychologists, afterwards.
Real World Application
Milgram’s work was of practical value because it showed that individual’s have a tendency towards destructive obedience. He believed that, by showing this, his work had wider benefits to society as it could avoid such incidents in the future, as the one which triggered Milgram’s investigations – the Holocaust.
The study helps us to understand how historical events such as this could happen, where people obeyed orders against the moral code they normally lived by.
1 The participants had to complete an artificial task by asking the learner to remember word pairs and then administer an electric shock whenever they didn’t remember correctly. Many theories suggest that most participants felt protected from their actions because they assumed whatever happened at Yale was fine and so trusted the study. Thus, it could be argued the experiment lacked experimental validity
However, Milgram tried to ensure the participants thought the situation was real, for example, by giving them a 45 volt shock at the start. The obvious stress experienced by participants implies that most did believe that what was happening was real, so this would suggest that in fact there was some experimental validity in his method
2 The study took place in a laboratory in Yale University, a very well-respected university with an extremely popular reputation. This is an unnatural setting for most people, which suggests that normal behaviour wouldn’t necessarily be usual. This means that the experiment lacked ecological validity
3 As Milgram’s sample of participants consisted of adult males from a range of backgrounds, it could be said that the experiment had some population validity, but only for American male adults
However, Milgram later repeated the study in a large number of variations (see 1.4 Variations of the Milgram Experiment), and many other psychologists have repeated the experiment. What was noticed is how the results tended to produce similar patterns (the number of participants who continued to the full 450V shock when it was all women in the experiment was almost the same as with the original men’s experiment), and so you might say it did in fact have definite population validity
The biggest criticism of Milgram’s study has always been on ethical grounds. There are 5 important guidelines to consider: informed consent, deceit, right to withdraw, debriefing and competence. Here, you will see in-depth analysis of each of these guidelines.
Informed Consent – In the study, the participants were not given the full details on the true nature of the experiment, so it initially sounds as though the experimenters did not gain correct informed consent, but you have to consider that had the participants been aware that the electric shocks were not real, the results gathered would not have been a clear indication of their obedience and behaviour because they would have known that the
consequences of their actions were not real. Milgram therefore could not ask for fully informed consent, but did try to be ethical: he asked a similar group of people whether they would be willing to take part in such a study, and they said they would – this is presumptive consent. Another way of remaining ethical is to ask the participants before the study if they agree to take part, but inform them that sometimes deception is necessary – this is prior consent
Deception – There was a severe amount of deception in Milgram’s experiment, but (as before) this was all necessary for the results of the experiment to be valid. Examples of the deception used include faking the shocks, leading participants to believe they were given the teacher role by chance, telling them it was for a study on memory and forgetfulness, telling them the learner and experimenter were real and not actors, and many more
Right to Withdraw – There is a lot of controversy over ethics regarding the right to withdraw. Whilst the participants were free to leave and were not being forced to continue, they were strongly encouraged to carry on by the experimenter, and the experimenter even had a script with lines to tell the teacher such as “the experiment requires that you continue” which almost made the subject feel they had to go on. When the participants said that they wanted to stop, they were strongly urged to continue, thus it might be argued they did not have a true right to withdraw, making the study unethical
Debriefing – Because the experiment was very stressful for the participants and it involved a lot of deception, the debriefing process was essential. Additionally, the participants would have come to realise that had the fake “memory improving” experiment been real, they would have administered lethal shocks to random strangers, showing them they had the capability to commit murder. Therefore it is important for them and the experimenter to fully evaluate the experiment to ensure they are in a safe mental state before going home
Competence – Milgram knew the possible implications of the study; understood the ethical guidelines, did not feel the need to get advice from others; was suitably qualified as a scientist who had his PhD for three years; made sure that nobody would come to any immediate harm as a result of the experiment; adhered to the Data Protection Act and easily and correctly stored the data. However, the participants became distressed, making the experiment less ethical as a whole, but the fact that Milgram was competent to run the experiment and knew what he was doing means it wasn’t necessarily unethical as a whole
1.4 variations of the milgram experiment
In Milgram’s book, Obedience to Authority: An Experimental View (1974), he outlines 19 different variations of the original study of obedience, some of which were previously unreported. Each of the variations had one thing in common; they all led to a reduction in obedience. Some of the variations are listed below:
Evaluating the Variations
One of the strengths of the variations is its strong controls. This means that the studies are replicable and so reliability can be tested. Having strong controls means that there is a lack of bias, which allows you to draw more accurate conclusions about cause and effect.
Of course, the most important weakness to consider, which is similar to the original experiment, is how unethical the variations were. Again, there was a lot of deception involved in each experiment, and there is always a certain risk when dealing with subjects in a way that could cause them distress, as finding out the true nature of the experiment might well do.
Also, the experimental validity, ecological validity and population validity are all questionable. The results cannot necessarily be applied to the population as a whole, because essentially the same category of people (20–40 year old men) was used as subjects throughout, although in one variation women were used instead of men. The results of that variation were not significantly different from the original study, although the women reported higher levels of stress than the men did.
1.5 meeus and raaijmakers (1986)
Aim: To investigate destructive obedience in the everyday situation of a job interview
Wim Meeus and Quinten Raaijmakers wanted to replicate Milgram’s original study, but first wanted to address two problems they saw with repeating it in exactly the same way:
- Milgram’s participants were assured that there would be no permanent damage to the “learners”
- The form of punishment would have been ‘old-fashioned’ according to Meeus and Raaijmakers
The aim of their experiment was to assess how the participants would handle destructive obedience in the everyday situation of a job interview, specifically, to see to what extent people would obey orders to psychologically abuse a job interviewee.
1 There were three people involved: a university researcher, a “job applicant” (who was an actor, similar to the role of the learner in the Milgram experiment), and a participant, who would issue the abuse
2 The applicant was following a script, and had to pass a test of 32 oral multiple-choice questions to “get the job”
3 The participants were told the job required the ability to handle stress, so they had to cause the applicant stress during the interview by psychologically abusing them. This cover story was essential because it gave the participant a reason to be involved; had they not been told this, they would have wondered why they were being asked to verbally abuse the interviewee
4 Participants were also informed that it was part of an investigation to find out the relationship between psychological stress and test success, and the applicant didn’t know about the research (of course, none of this was true)
5 After the interview had begun, the participant had to make a series of 15 negative comments about the interviewee’s performance and personality during the test, delivered as the applicant got answers wrong
6 The applicant would occasionally interrupt the interview to object about the participant’s negative comments, but the participant was instructed to ignore these objections and continue with the interview
7 The “stress remarks” led to the applicant failing the test, and so he did not get the job
Altogether, 39 participants were used, both male and female, aged between 18 and 55. They were split into two groups: a control group of 15 and an experimental group of 24. Each participant had at least Dutch high school education (roughly equivalent to GCSEs) and was recruited through a newspaper advertisement offering $13 for the 30-minute experiment
The Stress Remarks
A TV monitor instructed participants when to make each stress remark about the job interviewee. The applicant had electrodes stuck on his skull to make the participant believe his stress levels were being measured. If the participant refused to continue making the comments, the experimenter had 4 prods to say to the participant. The questions for the interview were asked in four sets:
- Set A: no stress remarks made by the participant, to provide baseline measures
- Sets B and C: five stress remarks were made in each of these sets
- There were ten errors in the last three sets
Participants in the control group chose when to make the comments and when to stop the experiment
The remarks started at question 9 (“your answer is wrong”) and went up to the end (“I think you’d be better suited for a lower function job”)
The study was conducted at a university, similar to Milgram’s study, and the experimenter was a well-dressed man of about 30 years who was friendly but stern, and who was present in the room with the participant, but the applicant was in another room and had to listen to questions via a speaker.
Results of Experiment
22 of the 24 participants in the experimental group obeyed until the end, making all 15 stress remarks. Most had some discussion with the experimenter, but continued when ordered to do so. Follow-up questionnaires showed that the participants did not like the experiment and were upset by the procedure
- 73% of the total participants believed the experiment to be real and only 4% thought it was a hoax; the remaining 23% were not sure
- In terms of who the participants blamed for the applicant not getting the job, 45% blamed the experimenter; 33% blamed themselves, and the other 22% blamed the applicant
- Within the control group, who could choose when to make comments and when to stop, none of the participants made the stress comments
As with the Milgram experiment, Meeus and Raaijmakers decided to conduct other variations of the original experiment to see how the results were affected by altering some of the variables:
Variation 1 – the experimenter ordered the participants to make the stress remarks and then left the room for the experiment (22 participants used)
Variation 2 – two other actor participants (confederates) present, who rebelled – the first quit after the tenth stress remark, and the second then did the same, so the experimenter ordered the real participant to continue alone (19 participants used)
The table below shows a comparison of the results from the three experiments:
These were the three main conclusions reached by Meeus and Raaijmakers which tried to explain why they achieved higher levels of obedience than Milgram did with his study:
1 physical violence has more of an immediate impact than psychological harm -
the participants could hear the cries of the learner in Milgram’s study, but the real impact of psychological abuse only tends to become evident later (i.e. after they became upset and did not get the job)
2 consent levels were different -
the participants’ consent to take part in the experiment carried more weight as they knew they were going to harm the applicant verbally and had agreed to participate; in Milgram’s study, the participants had not explicitly agreed to administer physical harm to the learners
3 the victim was more dependent on the outcome -
in Meeus’ and Raaijmakers’ study, the applicant had to continue with the test to get the job, even if they objected to the stress remarks, whilst the learner in Milgram’s study could refuse to answer as there was no gain from continuing
Evaluation of the Dutch Study of Obedience
The main strengths of the Meeus and Raaijmakers experiment were:
- The study builds on Milgram’s study by focusing deliberately on two areas that Meeus and Raaijmakers saw as needing attention. They used similar variations to Milgram to see if the levels of obedience fluctuated in the same way. Their study, therefore, is all the more useful because the findings can be compared with those of Milgram
- Due to the attention to detail, the study is replicable and can be tested for reliability. There are controls, which mean that the details are clear and the study can be judged carefully. A study with good controls is easier to draw cause-and-effect conclusions from
Some of the weaknesses of the study are shown below:
- The study is an experiment, and is therefore artificial. The need for controls, such as an applicant taking a test in a laboratory, means that the findings may not be valid. The situation is not very realistic and this might have affected the results
- Although the findings were compared with Milgram’s findings, which is useful, there are differences between the two studies which make such comparisons difficult. One difference is that the studies were in different cultures (even though they are both western cultures); another is that the studies were twenty years apart, which could have affected obedience levels
The table below shows a comparison of the results between the main Milgram and Meeus and Raaijmakers studies to make these comparisons evident:
1.6 agency theory
In Milgram’s studies of obedience, participants who obeyed to the end tended to say that they were only doing what they did because they were being ordered to do so by a member of authority and would not have done it otherwise. They said that they knew what they were doing was wrong. The participants felt moral strain, in that they were aware that following the order was immoral, but they felt unable to disobey. Moral strain arises when people become uncomfortable with their behaviour, because they feel that it is wrong and goes against their better values.
In the Milgram study, all the participants obeyed until the shock level reached 300 volts. It was as if, having simply agreed to take part, they were in an agentic state. This meant that they were the agents of the experimenter and so obeyed his orders. Being in an agentic state is the opposite of autonomy. Being in an autonomous state is being under one’s own control and having the power to make one’s own decisions.
Milgram used the idea of being in an agentic state to put forward his agency theory. This is the idea that our social system leads to obedience. If people see themselves as individuals, they will respond as individuals in an autonomous state in a situation.
For example, in a threatening situation, many people avoid aggression and turn away. This is likely to happen because avoiding aggression avoids being hurt and so aids survival; evolutionary theory suggests that early humans had a better chance of survival if they lived in social groups, with leaders and followers, and a tendency to have leaders and followers may also have been passed on genetically. A hierarchical social system, such as the one Milgram’s participants were used to, requires some people to act as agents for those above them. According to agency theory, the agentic state is what led the participants to obey in Milgram’s study.
Milgram suggested that this system of obedience is present not only as a survival strategy but also because we are taught from a young age that it is the correct way to behave. Obedience is hammered into children by their parents, and there are very strict hierarchical systems in place in schools – it is clear who has the power, and so children learn exactly the same lessons there.
In the agentic state, people do not feel responsible for their actions. They feel that they have no power, so they may act against their own moral code, as happened in Milgram’s basic study. In the variation in which the victim was nearer to the teacher, and the teacher had to hold the victim’s hand to the shock plate, there was less obedience. This suggests that the participants felt they had to take greater responsibility for what they were doing.
Evaluation of the Agency Theory of Obedience
These are some of the strengths of the agency theory:
- The agency theory explains the different levels of obedience found in the variations to the basic study by explaining the relationship between the level of responsibility felt by the participant and the levels of obedience obtained
- The theory helps (or at least tries) to explain the issue that triggered Milgram’s research into obedience: the Holocaust. One of the officers most responsible for organising the Holocaust was Eichmann, who claimed he was merely obeying orders, and agency theory suggests why he, and so many others, would obey to such a degree
- The theory offers similar explanations to events such as the My Lai massacre
However, one of the weaknesses of the theory is that there are other possible explanations for obedience, such as social power. French and Raven (1959) proposed five different kinds of power:
Legitimate power is held by those in certain roles, usually those of authority; Milgram’s role would have had legitimate power
Reward power is held by those with certain resources; Milgram may have had reward power as he was paying the participants
Coercive power is held by those who can punish another; Milgram gave the participants a small sample shock, so they may have felt he could punish them
Expert power is held by those with knowledge; the participants would have seen Milgram as someone with knowledge
Referent power is held by those who are able to win people over; the participants would not have seen Milgram to hold this type of power
Also, one of the biggest criticisms of agency theory is that it is a description rather than an explanation: many see it as describing how society works rather than explaining obedience. It suggests that the participants obeyed because they were agents of authority, but obedience is defined as obeying authority figures, so a theory of obedience should say more about why people follow orders against their better judgement in particular situations.
1.7 hofling et al. (1966)
Aim: To investigate the levels of obedience shown by nurses to doctors in hospitals
Hofling et al. (1966) decided to investigate the reactions of nurses to orders from a person who they believed to be a doctor. They decided to test how far they would be willing to obey the doctor in unusual and unethical practices. The study took place in a hospital, and so was a field study.
Hofling et al. wanted to study the doctor-nurse relationship. They wanted to specifically look at health care, and many of the involved researchers were medical personnel. In particular, they were interested to see how nurses would respond to a doctor giving them orders which went against their usual professional standards, as this was an occupational issue
To make the orders contrary to the nurses’ professional standards, the doctor’s requests involved:
- asking the nurse to give an excessive dosage of medicine (actually a placebo)
- transmitting the order over the phone (against hospital policy)
- using an unauthorised drug (either one not on the ward stock list or one not yet cleared for use)
- having the order given to the nurse by an unfamiliar voice
The situation for the main study involved 12 wards in public hospitals and 10 wards in private hospitals. Questionnaires were distributed to graduate nurses at a separate hospital for use as a matched control; the questions asked what they would do in the situations the nurses in the real study experienced, to see what ordinary nurses believed they would do. The same questionnaire was also given to some student nurses to see how less experienced nurses would respond to the same situations on paper
Procedure of the Main Study
Pill boxes were central props in the study, each labelled “Astroten, 5mg capsules. Usual dose, 5mg. Maximum daily dose, 10mg.” The boxes contained placebo capsules and were placed on the wards. The doctor would give the nurse the orders via phone, and this would follow a script. Standard responses to potential questions were prepared. The caller, a supposed doctor the nurses had not heard of before, was always courteous yet self-confident. Researchers would always monitor the phone calls to check the tone was appropriate
There was an observer on each ward, who would stop the experiment:
- if the nurse had the medication ready and moved towards the patient’s bed
- the nurse refused and ended the conversation
- the nurse began to contact another professional person
The observer would then interview the nurse to obtain more information, and also offered “psychiatric first aid”
The experiment was run on medical, surgical, paediatric and psychiatric wards from 7pm to 9pm, when administration of medication does not normally happen, and doctors are not normally present, so the nurses would have to make their own decisions
The Phone Call
Circumstances to end the phone call:
- participant complies
- participant refuses
- participant insists on referring to someone else
- participant becomes upset
- participant is unable to find the medication
- the call lasts longer than ten minutes
After the incident, a nurse-investigator would follow up within half an hour and request a follow-up interview. The interviews were unstructured (but the nurse-investigator would have had the tape recording of the call, as well as the observer’s report). Information asked for was:
1 Unguided narrative (what happened…?)
2 Emotions (what are your feelings…?)
3 Discrepancies (are you sure it happened that way…?)
4 Any similar incidents (has this happened before…?)
5 Retrospective view (what do you feel about it now…?)
6 Biographical data (what is your age, religion, etc…?)
Questionnaires were sent to graduate and student nurses. The respondents were closely matched to the nurses in the main study for age, sex, race, area of origin, marital status and work experience. Twelve graduate nurses were given the questionnaire, with a doctor explaining the whole imaginary scenario to them. They were asked not only what they would do, but also what they predicted the majority of other nurses would do in the same situation. The same questionnaire was handed out to 21 nursing students on a degree programme
An example question might have been: “You are the only nurse on the ward. Now will you please give Mr Jones a stat dose of 20mg – that’s four capsules – of Astroten? I will be up within ten minutes and I will sign the order for them then.” Respondents then had to write down what they would do.
Results of the Main Study and Questionnaire Research
The researchers drew the following conclusions:
- Almost all of the nurses in the main study (21 of the 22) obeyed the order, yet none of those who answered the questionnaires thought that nearly all the nurses would obey. This obedience showed the strength of the doctor-nurse relationship, and how a patient can suffer as a consequence. The researchers say that instead of two “intelligences” – the doctor and the nurse – working for the patient, one of them seems to be non-functioning
- The nurses were affected by the study: they were upset that they had been observed without their permission and also that their specific behaviour had been noted
- Nurses think that they will defend their patients and are proud of being professionals. However, the reality seems to be different – the evidence being the discrepancy between what the questionnaire respondents said they would do and what the nurses in the main study actually did
- The nurses appeared to trust the doctors, which may be a valuable trait. They were willing to act promptly and efficiently, again a valuable trait. However, this study suggests nurses need to be encouraged to use their own intellectual and ethical resources
The researchers behind the experiment concluded that there was definite potential for nurses to be encouraged to question and think more clearly about orders, especially in these types of circumstance, without being disloyal or discourteous to doctors.
The experiment took place in a hospital, where the nurses would not feel out of place. They were also unaware that they were being observed by researchers, so normal behaviour would have occurred, giving the experiment ecological validity. The nurses were going about their usual work (the researchers discovered that phone calls from unfamiliar doctors were not an unusual experience for them), and because it was not strictly unusual for something against the rules to happen, the situation was realistic and true to life, giving the study experimental validity.
The study was replicable, i.e. could be repeated many times to find similar or identical results. It was replicable because of such strong controls on the experiment. Examples of these controls include the phone call following the same script, the type of drug and how much to be “prescribed”, the voice and tone of the caller and the place to put the fake pill boxes – all kept the same throughout. Replicability is a good test for reliability, therefore the study is reliable.
However, there are numerous faults with the experiment in terms of ethical issues. The main issue is that the nurses were being observed and their actions were being noted without their permission. This upset the vast majority of the nurses, and even angered a few of them, as they felt themselves it was very unethical. On the other hand, the counterpoint of this argument is that this withholding of information was necessary to maintain experimental validity. Another ethical issue breached by the experiment, tying into the lack of information to the nurses, is the lack of informed consent. This also meant that they had no specific right to withdraw from the study.
Extraneous variables (those other than the ones being tested) could also have interfered with the results. For example, because the study was done in 1966, when nearly all doctors were male and nearly all nurses were female, the results may partly reflect a female-obeying-male relationship rather than a nurse-obeying-doctor relationship. The experiment could also be said to be ethnocentric in that it was only carried out in one area, so there is no guarantee the results would be the same elsewhere. The experiment may therefore lack population validity (generalisability).
1.8 social identity theory as an explanation of prejudice
The word prejudice derives from ‘pre’, meaning ‘before’, and the Latin ‘judicium’, meaning ‘judgement’ – literally, judging beforehand. Prejudice refers to judgements made about other people based on their membership of a particular group, rather than their individual nature. Discrimination refers to treating others differently according to their group membership as a result of prejudice.
Prejudice consists of three elements:
Social Identity Theory
Social identity theory is one of a number of theories suggesting that prejudice can be explained by our tendency to see ourselves as part of a group, and to view others as either part of, or not part of, the same group as us. People are thus judged as “us” or “them”. It is seen as part of human nature to view oneself as a member of one or more groups – these are our in-groups – and this leads us to discriminate against out-groups for no logical reason, i.e. there does not have to be any conflict or competition for ill feeling to develop.
Tajfel et al. (1970, 1971) conducted a series of lab experiments called the minimal group studies which led Tajfel and Turner (1979) to propose that there are three cognitive processes in deciding whether someone is part of the in-group or out-group, leading to the development of prejudice:
- Social categorisation – the process of deciding which group you belong to: you see yourself as part of that group, where any group will do and you see no need for conflict between yours and other groups
- Social identification – identifying yourself with the in-group more overtly; this is when you begin to take on the norms and attitudes of other members of the group
- Social comparison – one’s self-concept becomes so wrapped up with the in-group that self-esteem is enhanced by the perception that the in-group is better than the out-group
For more information on Tajfel’s minimal group studies, see 1.9 Tajfel et al. (1970, 1971)
According to social identity theory, there are three variables contributing to in-group favouritism:
1 the extent to which individuals identify with the in-group
2 the extent to which there are grounds for making comparison with the out-group
3 the relevance of the comparison group in relation to the in-group
The ideas of in-group favouritism and out-group prejudices have been confirmed in a number of studies…
- Tajfel et al. (1970, 1971) conducted the minimal group studies, in which boys aged 14 and 15 were split into groups and given the chance to reward other boys with money or punish them by taking money away, even though they themselves gained or lost nothing by the decision; in-group favouritism soon became apparent, as the boys gave more to members of their own group and punished members of the other group
- Lalonde (1992) studied a hockey team with poor performance and asked them about it, and the players claimed that it was down to other teams using “dirtier” tactics – however, Lalonde observed several of the team’s matches and concluded that the opponents’ teams were not using “dirtier” tactics, and so he had come across in-group bias from the poor team
- Reicher and Haslam (2006) conducted their own variation on, and improvement of, the famous Stanford prison experiment, in which prisoners initially had the chance to be promoted to guards, the superior group in the study – clear group identification and in-group favouritism developed (see 1.11 for details of the study)
Evaluation of Social Identity Theory as an Explanation of Prejudice
A range of studies have shown support for the idea that people are willing to see their own group as better in some way than other groups (as in the examples above). Tajfel, for example, replicated his experiment with a variation to show that his findings were reliable. There is also a practical application, in that the theory helps to explain a wide range of social phenomena.
Social identity theory doesn’t take into account other factors which might be influencing behaviour; for example, Dobbs and Crano (2001) have shown that under some circumstances there is much less in-group favouritism than Tajfel suggested. The theory also doesn’t explain why there are individual differences in the level of prejudice shown. There are other possible explanations which might offer a fuller account of prejudice, for example realistic conflict theory, which sees social identity theory as only part of the explanation: it suggests that it is not just the creation of two groups that leads to prejudice, but that the groups also need a goal in sight for conflict and prejudice to develop.
1.9 tajfel et al. (1970, 1971)
Aim: To test the idea that prejudice and discrimination can occur even without group history
Tajfel carried out a number of studies to develop and test social identity theory. Tajfel et al. wanted to test the idea that prejudice and discrimination can occur between groups even if there is no history between them, and no competition. Having found prejudice between such minimal groups, Tajfel et al. wanted to investigate further into the possible causes.
Experiment 1: Estimating Numbers of Dots
For the first of two experiments, 64 boys aged 14 and 15 were used. They were all from a comprehensive school in Bristol. They all knew each other very well and were split up into eight groups of eight boys each. The experiment was run in a laboratory. The experiment was designed to establish in-group categorisation (formation of the groups) and to assess the effect on behaviour of the group formations. To form the two groups, the boys were taken into a lecture room where forty clusters of varying numbers of dots were flashed onto a screen. They were asked to write down how many dots they thought there were each time on a score sheet. After they had estimated the number of dots:
- in condition 1, they were told that people constantly overestimate or underestimate the number
- in condition 2, they were told that some people are more accurate than others
Their judgements were then scored by one of the experimenters, and they were then randomly split into groups. They were told, in condition 1, that one group was the overestimators, and the other the underestimators; and in condition 2, they were told that one group was the better group at making judgements, and the other group worse.
The boys were told that the task used real money for rewards and punishments. They would know the code number of each boy and which group they were in, and would have to decide whether or not to allocate money to the other boys. They had to choose how much to reward or punish another boy in either their own group or the other group.
The experimenters showed the boys the type of matrix they would be using, each one with 2 rows of 14 numbers. The positive figures represented amounts potentially rewarded to the boys; the negative numbers were the amounts to be taken away from them. The boys could not allocate money to themselves, and had to work through a booklet of matrices.
The experimenter would call out “These are the rewards and punishments for member XX of your group” or “These are the rewards and punishments for member XX of the other group”. They had to decide which pair of numbers to allocate to the boys, because one number from each pair would affect one boy and the other affecting another.
The boys had to make decisions about the rewards and punishments they would impose. There were three types of decision: ‘in-group/in-group’, ‘in-group/out-group’ or ‘out-group/out-group’. If a boy allocated as much as possible to one boy, that choice was given a score of 14 (because there were 14 possible choices on each row of each matrix); if he allocated as little as possible, the score was 1. Each decision allocated money to two boys, so a fair score would be 7, meaning the rewards (or punishments) had been allocated equally.
When decisions involved two boys, one from each group (an in-group/out-group decision), the average score was 9 out of 14. When boys were making in-group/in-group or out-group/out-group decisions, the average score was 7.5
It seemed that decisions about boys in the same groups were fairer than decisions when one boy was in the same group as the boy making the judgements and one boy was in the other group. A large majority gave more money to their own groups and showed in-group favouritism. This was found in all trials of this study.
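To make the scoring described above concrete, here is a minimal sketch (in Python, written for this guide rather than taken from Tajfel's materials) of how allocation scores could be averaged: each choice is scored from 1 to 14, a perfectly even split scores 7, and in-group favouritism shows up as an average in-group/out-group score noticeably above 7. The particular numbers and the helper function are illustrative assumptions.

```python
# Illustrative sketch only: scoring Tajfel-style matrix choices.
# A choice is recorded as the score (1-14) given to the first of the
# two boys named in the decision; 7 represents an even-handed split.

def mean(scores):
    return sum(scores) / len(scores)

# Hypothetical sets of decisions made by one participant
in_out_choices = [9, 10, 8, 9, 10]     # in-group/out-group decisions
in_in_choices = [7, 8, 7, 7, 8]        # in-group/in-group decisions
out_out_choices = [7, 7, 8, 7, 7]      # out-group/out-group decisions

print("in-group/out-group average:", mean(in_out_choices))   # well above 7
print("in-group/in-group average:", mean(in_in_choices))     # close to 7
print("out-group/out-group average:", mean(out_out_choices)) # close to 7

# An average noticeably above the fair point of 7 on in-group/out-group
# decisions (Tajfel reported about 9, against roughly 7.5 for same-group
# decisions) is what the study treats as evidence of in-group favouritism.
```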
Experiment 2: Klee and Kandinsky Preferences
This second experiment involved three new groups of 16 boys each. The boys were shown twelve slides of paintings by the foreign artists Klee and Kandinsky, six by each, and had to express a preference for one of the painters. The paintings were unsigned, so the boys could in fact be randomly assigned to groups: group membership again had nothing to do with their stated preferences, even though the boys were led to believe that it did.
The first experiment showed that forming groups led to in-group favouritism. The experimenters wanted to investigate this further by examining the factors leading to the boys making their decisions. They chose to investigate:
- maximum joint profit – what was the most the two boys represented by each matrix would ‘receive’ from the boys?
- maximum in-group profit – what was the most the boys would give to their in-group members?
- maximum difference – what was the largest possible difference in allocations between an in-group member and an out-group member, in favour of the in-group?
As in the first experiment, there were the same three conditions when making the choices. There were matrices as before, and again a choice was made of one pair of ‘rewards and punishments’. The experimenters could see if the boy had chosen the highest possible for his own group member, the lowest possible for a member of the other group, or a decision that was the lowest for both (or other similar patterns).
Maximum joint profit did not seem to guide the boys’ choices. Maximum in-group profit and maximum difference in favour of the in-group worked against maximum joint profit. If the boys had a choice between maximum joint profit for all and maximum profit for their in-group, they acted on behalf of their own group. Even if giving more to the other group did not mean giving less to their own group, they still gave more to their own.
- Out-group discrimination was found and is easily triggered
- There is no need for groups to be in intense competition for discrimination to occur; this goes against realistic conflict theory
- In the two experiments, all the boys needed was to see themselves as in an in-group and out-group situation, and discrimination ensued
- People acted according to the social norms that they had learnt, such as favouring the in-group
- The boys responded to the social norms of “groupness” and fairness and in general kept a balance between the two
- In real life “groupness” may override fairness, for example, if the group is more important than counting dots, or choosing a preference between Klee and Kandinsky
- Given the side effects of discrimination that were found in these experiments, teams in schools may not be a good idea
1.10 sherif et al. (1954)
Aim: To study the origin of prejudice arising from the formation of social groups
Sherif carried out research into groups, leadership and the effect groups had on attitudes and behaviour. The Robbers Cave Study built upon his previous work. He thought that social behaviour could not be studied properly by looking at individuals in isolation. He recognised how social organisation differs between cultures and affects group practices, so he claimed that groups have to be understood as part of a social structure. The Robbers Cave Study used two groups of young boys to find: how the groups developed; if and how conflict between the groups arose; and how to reduce any such friction. Three terms defined according to Sherif are:
small group - individuals share a common goal that fosters interaction; individuals are affected differently by being in a group; an in-group develops with its own hierarchy and a set of norms is standardised
norm - a product of group interaction that regulates member behaviour in terms of expected or ideal behaviour
group - a social unit with a number of individuals who are interdependent and have a set of norms and values for self regulation; individuals have roles within the unit
The participants were 22 boys, aged 11, who did not know each other prior to the study. All were from Protestant Oklahoma families, to eliminate family problems and to match the boys as closely as possible. They were also matched on ratings from their teachers, including IQ, and were finally reassessed and matched on issues such as sporting ability before the experiment began. A nominal fee was charged for the children to attend the camp, and they were not informed that they were being used for a piece of research, in order to obtain “true” results
The experiment is called the Robbers Cave Study because it took place at a camp in Robbers Cave State Park, Oklahoma. The location was a 200-acre Boy Scouts of America camp completely surrounded by the State Park. The site was isolated, and keeping the two groups apart (at first) was easy because of the layout of the site
There was a wide range of data collection methods:
- observer – participant observer allocated to each group for 12 hours a day
- sociometric analysis – issues such as friendship patterns were noted and studied
- experiment – boys had to collect beans and estimate how many each boy had collected
- tape recordings – words and phrases used to describe their own group were studied
The observers were trained not to influence the boys’ decisions but to help them once a decision was reached
Three Stage Experiment
- The two groups were formed and set up norms and hierarchies (to see how they became in-groups)
- The two groups were introduced and competition was set up, as a tournament (to test for friction, name-calling and hostility to the out-group)
- The two groups were set goals that they needed each other to achieve
Stage 1: in-group formation
The two groups were kept apart for one week to help the formation of group norms and relations. They had to work as a group to achieve common goals that required cooperation. Data was gathered by observation, including rating of emerging relationships, sociometric measures and experimental judgements. Status positions and roles in the groups were studied. There is much detail about how hierarchies within each group developed. The measurements were thought to be both valid and reliable because different data collection methods produced similar results. For example, in the bean-collecting task, the boys tended to overestimate the number of beans their own group members had collected and underestimate the number collected by the other group (the number of beans was actually the same).
Stage 2: inter-group relations, the friction phase
After the first week, the two groups were told about one another and a tournament was set up with competitive activities. Points could be earned for the group and there were rewards. As soon as they heard about each other, the two groups became hostile. They wanted to play each other at baseball, so they effectively set up their own tournament, which was what the researchers wanted.
The aim of the experiment was to make one group frustrated because of the other group, to see if negative attitudes developed. Adjectives and phrases were recorded to see if they were derogatory and behaviour was observed as previously. The researchers introduced the collecting the beans experiment: the boys had to collect beans and then judge how many each boy had collected. This was to see if the boys overestimated the abilities of the in-group members and minimised the abilities of the out-group members. As was mentioned before, this was the case.
Stage 3: inter-group relations, the integration
The researchers wanted to achieve harmony between the two groups, which they did by introducing superordinate goals. This meant that the groups would have to work together to achieve the goals. At first, they introduced tasks that simply brought the two groups together so that they could communicate. They then introduced the superordinate goals, which included:
- fixing the water tank and pump when the water supply was threatened
- a truck that would not start, so they had to pull together to try and start it
- pooling resources so that they could afford a film that they all wanted to watch
The researchers measured the use of derogatory terms and used observation and rating of stereotyping.
Stage 1: in-group formation
By the end of the first Stage, the boys had given themselves names: the Rattlers and the Eagles. The groups developed similarly, but this was expected due to how carefully they had been matched. Any differences present were most likely due to the different decisions they had to make based on their cabins being located in different areas. For both groups, status positions were settled over days five and six of the first week, and a clear group leader was in place.
The Rattlers often discussed the situation of the Eagles, saying things such as “They had better not be swimming in our swimming hole”. Although the Eagles did not refer to the Rattlers so often, they wanted to play a competition game with them. It seems that even only knowing another group existed was enough reason for hostility to develop, even though neither group had been introduced yet.
Stage 2: inter-group relations, the friction phase
As soon as the groups found out about each other, they wanted to play baseball in a group competition, and so both groups naturally moved into Stage 2. The Rattlers were excited and discussed issues such as protecting their flag. The Eagles were less excited, but made comments such as “we will beat them”. The Eagle selected as captain for the baseball competition became the group leader of the Eagles for all of Stage 2, even though he had not been the group leader at the end of Stage 1.
When the two groups first met, there was a lot of name-calling. Evidence was collected on what the boys said, who they were friends with and practical incidents (such as the burning of a flag). It was found that there were clearly negative attitudes towards out-group members.
Stage 3: inter-group relations, the integration
During the initial contacts of this Stage, the hostility remained. There were comments such as “ladies first” and when they watched a group movie together, they sat separated in their individual groups. After seven contact activities, there were superordinate goals set up:
1 The staff turned off the valve to the water pump and placed two large boulders over it. The children were told that vandals had damaged it in the past. The two groups worked together to fix the damage and rejoiced together when they were successful
2 The second goal was to watch a movie together, with both groups having to chip in to pay for it. They eventually agreed to go halves, even though one group had fewer members than the other. This agreement showed that the two groups could cooperate to arrive at a final decision that both were happy with
3 The boys all went on an organised trip to Cedar Lake, where the truck suddenly ‘developed’ a problem meaning the boys had to use the tug-of-war rope to try and pull it out and get it started
It was noticeable how friendships differed between Stage 2 and 3. More out-group members were chosen as friends by the end of Stage 3, which is evidence that friction was reduced by the superordinate goals outlined.
Most of the hypotheses put forward by the researchers at the beginning of the study were confirmed. Some of the conclusions drawn from the experiment include:
- The groups developed social hierarchies and group norms, even though they were not stable throughout the study
- Each group had a clear leadership structure by the end of the first week
- When two groups meet in competition, in-group solidarity and cooperation increase and inter-group hostility is strong
- People tend to overestimate the abilities of their own group members and to minimise the abilities of out-group members
- Contact between two groups is not enough to reduce hostility
- When groups needed to work together, exchanged tools, shared responsibilities and agreed how to solve problems, friction was reduced – working towards a superordinate goal once was not sufficient, there needed to be numerous cooperation tasks to achieve this
- There were controls, such as the careful sampling and the briefing of observers so that they all followed the same procedures; this meant that cause-and-effect conclusions could be drawn more justifiably than when observing naturally occurring groups
- There were several data collection methods and the findings agreed, so validity was claimed – for example, observation recorded derogatory behaviour and the tape recordings captured derogatory remarks against the out-group
- The group conflict could be seen as prejudice; reduction of friction would be reducing the prejudice, therefore the study has a practical application
- It was unethical in the sense that there was no informed consent obtained, there was no right to withdraw for the participants (also, the boys’ parents were not allowed to visit – to prevent them feeling homesick – but this meant they could not check on their children’s welfare)
- It was hard to generalise to other situations because the sample was restricted to boys with a specific background
1.11 reicher and haslam (2006)
Aim: To investigate tyranny at a group level
In 1971, Zimbardo carried out the famous Stanford Prison experiment, in which one group of participants acted as guards and another as prisoners. The study looked at the psychological effects of becoming a prisoner or a prison guard. The experiment was conducted at Stanford University, where 24 undergraduates were selected to play the roles in a mock prison in the basement of the Stanford psychology building. They were chosen for their lack of psychological issues, criminal history and medical disabilities, in order to obtain a representative sample.
Roles were assigned based on a coin toss. Prisoners and guards rapidly adapted to their roles, stepping beyond the boundaries of what had been predicted, and leading to dangerous and psychologically-damaging situations. One third of the guards were judged to have genuine sadistic tendencies, while many of the prisoners were emotionally traumatised and two had to be removed early on. The study was meant to last for two weeks, but after Zimbardo’s girlfriend pointed out that he was allowing unethical acts to happen directly under his supervision, he concluded that both prisoners and guards had become too engrossed in their roles and terminated the experiment after only six days for their safety.
The BBC Prison Study
Reicher and Haslam (2002, 2006) wanted to test ideas about social identification and to examine how people come to condone tyranny or become tyrannical themselves, following on from the events of World War II. The study builds on the work of Milgram, Tajfel and Zimbardo. It builds upon the Stanford Prison experiment, but is not an exact replication, partly because Zimbardo’s procedure was considered unethical.
Reicher and Haslam called it an experimental case study, as they set up a one-off situation and then studied it to collect in-depth, detailed data using observational studying, video and tape recording, analysis of conversations and psychological and physiological assessments.
The study was discussed with colleagues, a university ethics committee and the British Psychological Society (BPS). Safeguards used within the experiment included:
- thorough screening of the participants
- a signed, detailed consent form which told participants that they could be at risk of stress and confinement
- independent monitoring of the study by two clinical psychologists and an ethics committee
- security guards, able to intervene if the behaviour ever became dangerous
The BBC recorded the study and organised it into four programmes. They were broadcast in May 2002. The participants knew they would be appearing on national television. A detailed explanation of the study is provided by Reicher and Haslam, in conjunction with the BBC, at http://www.bbcprisonstudy.org
In economics, comparative advantage refers to the ability of a party to produce a particular good or service at a lower marginal and opportunity cost than another party. Even if one country is more efficient in the production of all goods than the other (i.e. has an absolute advantage in all goods), both countries will still gain by trading with each other, as long as they have different relative efficiencies.
For example, if, using machinery, a worker in one country can produce both shoes and shirts at 6 per hour, and a worker in a country with less machinery can produce either 2 shoes or 4 shirts in an hour, each country can gain from trade because their internal trade-offs between shoes and shirts are different. The less-efficient country has a comparative advantage in shirts, so it finds it more efficient to produce shirts and trade them to the more-efficient country for shoes. Without trade, its opportunity cost per shoe was 2 shirts; by trading, its cost per shoe can reduce to as low as 1 shirt depending on how much trade occurs (since the more-efficient country has a 1:1 trade-off). The more-efficient country has a comparative advantage in shoes, so it can gain in efficiency by moving some workers from shirt-production to shoe-production and trading some shoes for shirts. Without trade, its cost to make a shirt was 1 shoe; by trading, its cost per shirt can go as low as 1/2 shoe depending on how much trade occurs.
The net benefits to each country are called the gains from trade.
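A minimal sketch can make the arithmetic of the shoes-and-shirts example explicit. The Python below is illustrative only (the country labels, dictionary layout and printed wording are assumptions, not from any source); it computes each country's opportunity costs and shows why the less-efficient country holds the comparative advantage in shirts.

```python
# Minimal sketch of the shoes/shirts example: opportunity costs decide
# comparative advantage, not absolute productivity.

# Output per worker-hour (illustrative numbers from the example above)
productivity = {
    "Efficient": {"shoes": 6, "shirts": 6},
    "Less efficient": {"shoes": 2, "shirts": 4},
}

for country, p in productivity.items():
    # Opportunity cost of one shoe = shirts given up per shoe produced
    shoe_cost_in_shirts = p["shirts"] / p["shoes"]
    shirt_cost_in_shoes = p["shoes"] / p["shirts"]
    print(f"{country}: 1 shoe costs {shoe_cost_in_shirts} shirts, "
          f"1 shirt costs {shirt_cost_in_shoes} shoes")

# Efficient country:      1 shoe costs 1 shirt   -> comparative advantage in shoes
# Less efficient country: 1 shoe costs 2 shirts  -> comparative advantage in shirts
# Any trade ratio between 1 and 2 shirts per shoe leaves both better off,
# and the resulting surplus is what the text calls the gains from trade.
```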
Origins of the theory
The idea of comparative advantage was first mentioned in Adam Smith's book The Wealth of Nations: "If a foreign country can supply us with a commodity cheaper than we ourselves can make it, better buy it of them with some part of the produce of our own industry, employed in a way in which we have some advantage." The law of comparative advantage, however, was formulated by David Ricardo, who investigated it in detail in his 1817 book On the Principles of Political Economy and Taxation, using an example involving England and Portugal. In Portugal it is possible to produce both wine and cloth with less labor than it would take to produce the same quantities in England. However, the relative costs of producing the two goods differ between the countries. In England it is very hard to produce wine and only moderately difficult to produce cloth; in Portugal both are easy to produce. Therefore, while it is cheaper to produce cloth in Portugal than in England, it is cheaper still for Portugal to produce excess wine and trade it for English cloth. Conversely, England benefits from this trade because its cost of producing cloth has not changed, but it can now obtain wine at a lower price, closer to the cost of cloth. The conclusion drawn is that each country can gain by specializing in the good in which it has a comparative advantage, and trading that good for the other.
Modern Theories
Classical comparative advantage theory was extended in two directions: Ricardian theory and Heckscher-Ohlin-Samuelson (HOS) theory. In both theories, the comparative advantage concept is formulated for the 2-country, 2-commodity case, and it can easily be extended to the 2-country, many-commodity case or the many-country, 2-commodity case. But in the case with many countries (more than 3) and many commodities (more than 3), the notion of comparative advantage loses these simple features and requires a quite different formulation; in these general cases, HOS theory depends entirely on Arrow-Debreu-type general equilibrium theory and gives little information beyond general statements. The Ricardian theory for this general case was formulated in Jones' 1961 paper, but it was limited to the case with no traded intermediate goods. In view of growing outsourcing and global procurement, it is necessary to extend the theory to the case with traded intermediate goods; this was done in Shiozawa's 2007 paper, which to date is the only general theory that accounts for traded intermediate goods.
Effect of trade costs
Using Ricardo's classic example (labor cost per unit: England – 100 for cloth, 110 for wine; Portugal – 90 for cloth, 80 for wine):
In the absence of transportation costs, it is efficient for Britain to produce cloth and for Portugal to produce wine as, assuming that these trade at equal price (1 unit of cloth for 1 unit of wine), Britain can then obtain wine at a cost of 100 labor units by producing cloth and trading, rather than 110 units by producing the wine itself, and Portugal can obtain cloth at a cost of 80 units by trade rather than 90 by production.
However, in the presence of trade costs of 15 units of labor to import a good (alternatively a mix of export labor costs and import labor costs, such as 5 units to export and 10 units to import), it then costs Britain 115 units of labor to obtain wine by trade – 100 units for producing the cloth, 15 units for importing the wine, which is more expensive than producing the wine locally, and likewise for Portugal. Thus, if trade costs exceed the production advantage, it is not advantageous to trade.
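The break-even reasoning above can be sketched in the same illustrative way. The Python below is only an assumption-laden illustration, using the labour costs recoverable from the text (Britain: cloth 100, wine 110; Portugal: cloth 90, wine 80) and a hypothetical helper `wine_via_trade_cost`; it checks, for several trade costs, whether Britain is still better off obtaining wine by trade.

```python
# Sketch: does trading for wine still pay for Britain once trade costs
# are added? Labour costs per unit are taken from the example above.

labour = {
    "Britain": {"cloth": 100, "wine": 110},
    "Portugal": {"cloth": 90, "wine": 80},
}

def wine_via_trade_cost(trade_cost, price_ratio=1.0):
    """Britain's labour cost of obtaining one unit of wine by making
    cloth and trading it (at `price_ratio` cloth per wine), plus the
    labour-equivalent cost of importing."""
    return labour["Britain"]["cloth"] * price_ratio + trade_cost

for trade_cost in (0, 5, 10, 15):
    via_trade = wine_via_trade_cost(trade_cost)
    direct = labour["Britain"]["wine"]
    verdict = "trade" if via_trade < direct else "produce at home"
    print(f"trade cost {trade_cost:>2}: {via_trade} vs {direct} -> {verdict}")

# With a trade cost of 15, obtaining wine by trade costs 115 labour units
# against 110 for home production, so trade is no longer worthwhile,
# matching the argument in the text.
```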
Paul Krugman argues, more speculatively, that changes in the cost of trade (particularly transportation) relative to the cost of production may be a factor in changes in global patterns of trade: if trade costs decrease, as with the advent of steam-powered shipping, trade should be expected to increase, since more comparative advantages in production can be realized. Conversely, if trade costs increase, or if production costs fall faster than trade costs (for example through the electrification of factories), then trade should be expected to decrease, as trade costs become a more significant barrier.
Effects on the economy
Conditions that maximize comparative advantage do not automatically resolve trade deficits. In fact, many real world examples where comparative advantage is attainable may require a trade deficit. For example, the amount of goods produced can be maximized, yet it may involve a net transfer of wealth from one country to the other, often because economic agents have widely different rates of saving.
As markets change over time, the ratio of goods produced by one country relative to another changes while the benefits of comparative advantage are maintained. This can cause one nation's currency to accumulate as bank deposits in foreign countries where a separate currency is used.
Macroeconomic monetary policy is often adapted to address the depletion of a nation's currency from domestic hands by the issuance of more money, leading to a wide range of historical successes and failures.
Development economics
The theory of comparative advantage, and the corollary that nations should specialize, is criticized on pragmatic grounds within the import substitution industrialization theory of development economics, on empirical grounds by the Singer–Prebisch thesis which states that terms of trade between primary producers and manufactured goods deteriorate over time, and on theoretical grounds of infant industry and Keynesian economics. In older economic terms, comparative advantage has been opposed by mercantilism and economic nationalism. These argue instead that while a country may initially be comparatively disadvantaged in a given industry (such as Japanese cars in the 1950s), countries should shelter and invest in industries until they become globally competitive. Further, they argue that comparative advantage, as stated, is a static theory – it does not account for the possibility of advantage changing through investment or economic development, and thus does not provide guidance for long-term economic development.
Much has been written since Ricardo as commerce has evolved and cross-border trade has become more complicated. Today trade policy tends to focus more on "competitive advantage" as opposed to "comparative advantage". One of the most in-depth research undertakings on "competitive advantage" was conducted in the 1980s as part of the Reagan administration's Project Socrates, which aimed to establish the foundation for a technology-based competitive strategy development system that could be used to guide international trade policy.
Free mobility of capital in a globalized world
Ricardo explicitly bases his argument on an assumed immobility of capital: "... if capital freely flowed towards those countries where it could be most profitably employed, there could be no difference in the rate of profit, and no other difference in the real or labor price of commodities, than the additional quantity of labor required to convey them to the various markets where they were to be sold."
He explains why, from his point of view (anno 1817), this is a reasonable assumption: "Experience, however, shows, that the fancied or real insecurity of capital, when not under the immediate control of its owner, together with the natural disinclination which every man has to quit the country of his birth and connexions, and entrust himself with all his habits fixed, to a strange government and new laws, checks the emigration of capital."
Some scholars, notably Herman Daly, an American ecological economist and professor at the School of Public Policy of the University of Maryland, have voiced concern over the applicability of Ricardo's theory of comparative advantage in light of a perceived increase in the mobility of capital: "International trade (governed by comparative advantage) becomes, with the introduction of free capital mobility, interregional trade (governed by Absolute advantage)."
Adam Smith developed the principle of absolute advantage. The economist Paul Craig Roberts notes that the comparative advantage principles developed by David Ricardo do not hold where the factors of production are internationally mobile. Limitations to the theory may also exist if there is only a single kind of utility; yet the human need for food and shelter already indicates that multiple utilities are present in human desire. The moment the model expands from one good to multiple goods, an absolute advantage may turn into a comparative advantage. The opportunity cost of a forgone tax base may outweigh perceived gains, especially where the presence of artificial currency pegs and manipulations distorts trade.
Economist Ha-Joon Chang criticized the comparative advantage principle, contending that it may have helped developed countries maintain relatively advanced technology and industry compared to developing countries. In his book Kicking Away the Ladder, Chang argued that all major developed countries, including the United States and United Kingdom, used interventionist, protectionist economic policies in order to get rich and then tried to forbid other countries from doing the same. For example, according to the comparative advantage principle, developing countries with a comparative advantage in agriculture should continue to specialize in agriculture and import high-technology products from developed countries with a comparative advantage in high technology. In the long run, developing countries would lag behind developed countries, and polarization of wealth would set in. Chang asserts that premature free trade has been one of the fundamental obstacles to the alleviation of poverty in the developing world. More recently, Asian countries such as South Korea, Japan and China have utilized protectionist economic policies in their economic development.
Philosopher and professor of evolutionary psychology Bruce Charlton has argued that comparative advantage is a metaphysical assumption rather than a discovery. Beyond the principle's unfalsifiable character, he notes that it relies on several assumptions that are not necessarily operative, in particular "conditions of stability and trust - where cycles of exchange tend to repeat themselves, where there is high trust between producers, and dependability of producers, where costs of exchange such as transport are relatively small etc."
See also
- Competitive advantage
- Revealed comparative advantage
- Heckscher-Ohlin model
- Bureau of Labor Statistics
- Resource curse
- Baumol, William J. and Alan S. Blinder, Economics: Principles and Policy, p. 50 (http://books.google.com/books?id=6Kedl8ZTTe0C&lpg=PA49&dq=%22law%20of%20comparative%20advantage%22&pg=PA50#v=onepage&q=%22law%20of%20comparative%20advantage%22&f=false). 2009.
- "BLS Information". Glossary. U.S. Bureau of Labor Statistics Division of Information Services. February 28, 2008. Retrieved 2009-05-05.
- O'Sullivan, Arthur; Sheffrin, Steven M. (2003) [January 2002]. Economics: Principles in Action. The Wall Street Journal:Classroom Edition (2nd ed.). Upper Saddle River, New Jersey 07458: Pearson Prentice Hall: Addison Wesley Longman. p. 444. ISBN 0-13-063085-3. Retrieved May 3, 2009.
- The exact phrase is not found in an online version of that book.
- For example, R. Dornbusch, S. Fischer and P. A. Samuelson, Comparative Advantage, Trade, and Payments in a Ricardian Model with a Continuum of Goods, The American Economic Review, Vol. 67, No. 5, Dec., 1977, pages 823-839. Rudiger Dornbusch, Stanley Fischer and Paul A. Samuelson, Heckscher-Ohlin Trade Theory with a Continuum of Goods, Quarterly Journal of Economics Volume 95 Issue 2, pages 203-224.
- Alan V. Deardorff, How Robust is Comparative Advantage, Review of International Economics, Volume 13, Issue 5, pages 1004–1016, November 2005.
- Ronald Jones, Comparative Advantage and the Theory of Tariffs: A Multi-Country, Multi-Commodity Model, Review of Economic Studies, pages 161-175, June 1961.
- Yoshinori Shiozawa, A New Construction of Ricardian Trade Theory / a Many-Country, Many-Commodity Case with Intermediate Goods and Choice of Production Techniques, Evolutionary and Institutional Economics Review, Volume 3 Issue 2, pages 141-187, March 2007. Andrew J. Cassey, An Application of the Ricardian Trade Model with Trade Costs, Applied Economics Letters, 2012, 19, 1227-1230.
- A Globalization Puzzle, Paul Krugman, February 21, 2010
- Ricardo (1817). On the Principles of Political Economy and Taxation. London, Chapter 7
- "Lecture by Sophie Prize winner Herman Daly, Oslo, 1999". Sophieprize.org. 1999-06-15. Retrieved 2009-04-07.
- Roberts, Paul Craig (August 7, 2003). Jobless in the USA. Newsmax. Retrieved on January 5, 2010.
- Hira, Ron and Anil Hira, with foreword by Lou Dobbs (May 2005). Outsourcing America: What's Behind Our National Crisis and How We Can Reclaim American Jobs. (AMACOM) American Management Association. Citing Paul Craig Roberts, Paul Samuelson, and Lou Dobbs, pp. 36-38.
- Bivens, Josh (September 25, 2006). China Manipulates Its Currency—A Response is Needed. Economic Policy Institute. Retrieved on February 2, 2010.
- Chang, Ha-Joon. Kicking Away the Ladder: Development Strategy in Historical Perspective. London: Anthem Press, 2002.
- "The metaphysical 'law' of Comparative Advantage - an assumption masquerading as a discovery", http://charltonteaching.blogspot.com/2013/05/the-metaphysical-law-of-comparative.html
- Chang, Ha-Joon (2002). Kicking Away the Ladder: Development Strategy in Historical Perspective, Anthem Press.
- Chang, Ha-Joon (2008). Bad Samaritans: The Myth of Free Trade and the Secret History of Capitalism, Bloomsbury Press.
- Ronald Findlay (1987). "comparative advantage," The New Palgrave: A Dictionary of Economics, v. 1, pp. 514–17.
- Hardwick, Khan and Langmead (1990). An Introduction to Modern Economics - 3rd Edn
- A. O'Sullivan & S.M. Sheffrin (2003). Economics. Principles & Tools.
- Comparative advantage in glossary, U.S. Bureau of Labor Statistics Division of Information Services
- David Ricardo's On the Principles of Political Economy and Taxation (original source text)
- Ricardo's Difficult Idea, Paul Krugman's exploration of why non-economists don't understand the idea of comparative advantage
- The Ricardian Model of Comparative Advantage
- J.G. Hülsmann's Capital Exports and Free Trade explanation of why the immobility of capital is not an essential condition.
- Matt Ridley, 'When Ideas Have Sex', a TED talk video in which he explains comparative advantage for a general audience (from 5:05 in the video). | http://en.wikipedia.org/wiki/Comparative_advantage | 13
21 | Deafness is a medical condition in which the afflicted individual cannot hear or distinguish sounds. It can refer to either a total inability to perceive sound or an inability to perceive certain pitch ranges. Deafness can cause major problems in communication and hinders a major sense often relied on for self-preservation.
What is Deafness? – Overview of the condition from The Open University.
Deafness and Hearing Problems – Overview information from BBC Health.
Causes of deafness include congenital factors such as family history, as well as acquired conditions such as otitis media, a middle ear inflammation common in young children. Prolonged exposure to high decibel levels can also damage sensitive organs within the ear or rupture the eardrum, causing hearing loss.
Causes of Hearing Loss in Children – From the American Speech-Language-Hearing Association.
Hearing Loss: Causes – From the Mayo Clinic.
Common Causes of Hearing Loss – Informational pamphlet from Harvard Medical School’s Center for Hereditary Deafness.
Hospital staff administer many tests to newborns to diagnose any possible deafness or hearing loss. Early testing is important because prompt treatment can help overcome some of the negative effects of hearing loss.
Hearing Loss: Symptoms, Causes, Tests – Health guide from the New York Times.
Hearing Loss: Tests and Diagnosis – From the Mayo Clinic.
Screening Newborns – Information on infant screening tests from an online resource maintained by the Children’s Hospital of Philadelphia.
Treatment options differ based on the underlying causes of the hearing loss. Mild hearing loss may be treated with hearing aids, which increase the volume of sound entering the ears, while cochlear implants may be necessary in cases of profound sensorineural hearing loss.
Medical Advances for Hearing Loss Treatment – Information on treatments from the not-for-profit Better Hearing Institute.
Treating Hearing Impairment – Treatment information from England’s National Health Service.
Deafness and Hearing Impairment – Global information on treatments from the World Health Organization.
Globally, more than 278 million people experience some form of hearing loss, a disproportionate share of them in developing countries with weaker medical infrastructure. In America, recent surveys by governmental organizations indicate that one in every 1,000 children is affected by hearing loss.
Hearing Loss in Children – Data and statistical information from the U.S. Centers for Disease Control and Prevention.
Hearing Loss in the Developing World – Global information from CBM International.
Individuals experiencing total deafness can overcome the communication barriers associated with this condition by learning sign language. Sign languages are specific to different countries, and within each country, different dialects of the sign language can be found in different geographical regions.
American Sign Language – Information on ASL from the National Institute on Deafness and Other Communication Disorders.
Comparison of American Sign Language – Presentation comparing ASL to other worldwide dialects from the website Deaf Education.
ASL Dictionary – Online English to ASL dictionary.
Many organizations run outreach programs to inform unaffected populations about the social concerns surrounding deafness and hearing loss. Festivities are coordinated for special deaf awareness weeks or days, while other organizations offer workshops to interested individuals who want more involvement.
Deaf Awareness for the General Public – Listing of events and workshops available through the Hearing and Speech Agency.
Deaf Awareness Fest – Annual event sponsored by the Wisconsin Association of the Deaf.
Deaf Awareness Week – From the UK Council on Deafness.
More information can be found by reviewing publications and journals dedicated to hearing health, deafness and relevant medical information. Publications range from commercial magazines with an interest in hearing loss to medical journals used to disseminate research in hearing loss.
Leading National Publications for Deaf People – Listing of different publications from Gallaudet University's National Deaf Education Center.
BHI: Publications – Information on publications released by the Better Hearing Institute.
American Deaf Culture – Bibliography of books, DVDs and conference proceedings related to deafness in America. | http://www.healthadministration.org/resources/deafness-resources/ | 13 |
17 | Overview of Muslim History and the Spread of Islam from the 7th to the 21st century
Overview: The purpose of this activity is to provide students with knowledge of how and when Islam spread to various regions, and to locate regions where Muslims form a demographic majority or significant minorities, from the 7th to the 21st centuries.
Students should be able to:
- relate the spread of Islam to historical events and processes of historical change
- trace the spread of Islam chronologically and regionally
- assess the importance of cultural and political factors in the spread of Islam
- evaluate the importance of shifts in economic and political power, and cultural influence among states and regions in the spread of Islam.
- use a map key to identify regions of the eastern hemisphere (Afroeurasia, a modern geography term that combines the contiguous continents of Africa, Europe and Asia), to locate regions of the world that have majority Muslim populations today, and to describe their geographical features.
- Assign or read as a class Handout 1a: "The Spread of Islam in History." Study Questions at the end of the reading give suggestions for comprehension and discussion activities.
- Draw particular attention to the difference between the rapid expansion of territory under Muslim rule and the spread of Islam among the populations. Discuss previous ideas students may have about the spread of Islam by the sword, or about "instant conversion" of regions to any world faith. Explain that conversion has usually been a gradual process. Ask students to list the reasons why people might have changed from the religion they grew up with, and what influences (social, political and economic) might play a role in their decision. Is it more challenging for individuals to join a faith when it is a minority or when many people are converting? How do the poverty and persecution, or the wealth and power, of members of the faith affect individual choice about conversion? How might people learn about the beliefs of a faith, and what role do spiritual leaders play? What other role models, such as traders, travelers, and teachers, might influence people? For further reading, see Jerry H. Bentley, Old World Encounters (Oxford University Press, 1993) on the spread of world religions.
- Adaptation: For middle school level or lower reading ability students, a modified version of the reading is provided in Handout 1a (adapted). Use the modified or regular Handout 1a in an alternative procedure: Read and discuss the first three introductory paragraphs as a class to explain the basic process by which Islam spread. Divide the rest of the handout into sections by headings or paragraphs, beginning with "The Process of Conversion" and subsequent sections. Assign each section or set of paragraphs to a group of students who will be responsible for explaining it and showing the regions it discusses on a classroom map. In a round robin format, groups each present their part of the spread of Islam narrative in chronological order. Each group can take questions and raise discussion points from the audience with the help of the teacher.
- Study Question #6 may be used for younger students to create a timeline. Older students may make notes for a preview timeline before they move to the chronology activity, and Question #7 anticipates work on the maps of the spread of Islam and modern Muslim regions. These activities may substitute for the chronology activity for middle school students.
- Distribute Handout 1b, "Chronology of the Spread of Islam." Discuss the introduction to preview the type of information the students will find in the chronology. Explain the difference between a chronology and a timeline. If not already discussed using the narrative in Handout 1a, explain or reinforce the difference between the historical concepts of expanding Muslim-ruled territory and the spread of Islam among the population of lands in Africa, Asia and Europe, and elsewhere. Discuss events in the first century of Muslim history, then the period from 750 to 1200 CE, then 1200 to 1500 CE. Students should note items on the chronology that represented advances as well as setbacks for the spread of Islam.
- Adaptation for middle school: See #3, above, for the adapted Handout 1a. Teachers may find it useful to break up the chronology into parts that correspond to the historical periods or geographic regions being studied, using it in conjunction with individual units. By doing so, students can focus on 5 or 6 items at a time. If the class is making a world history timeline on the wall or in a notebook, they can insert these events into the larger timeline. Discuss how these events may relate to events taking place in other regions and cultures.
- Correlating the chronology to geography: Make a master copy of the chronology Handout 1b by making an enlarged photocopy. Cut the chronology into strips with one item on each. Distribute the strips among members of the class. Color the strips with pink, yellow, green or blue highlighters, using one color for chronology items on the first century of Islam from 622 – 750 CE, a second color for 800-1500 CE, a third color for 1500-1900 CE, and a fourth color for the 20th century. Using removable tape, have students attach each strip to the classroom wall map of the world (preferably a physical map rather than a modern political map) on the appropriate location. By posting the strips on the map, the colors will show the sequence of the spread of Islam over the centuries. Make a map key using the same colors and post it near the map.
- Pre-modern and modern events in the spread of Islam: Discuss the second half of the chronology, from 1500 to the present, which includes political, military and economic milestones, and discuss how they affected social and religious conditions in Muslim regions. How did these events and historical trends affect the spread of Islam? Discuss ways in which the establishment of European economic dominance and colonial control affected the spread of Islam, or the relative strength of Muslim influence in their own and other lands. The latter items discuss the spread of Islam to the industrialized countries, and the post-colonial situation in Muslim countries.
- Media activity: Using the general trends described in the last 4-6 items in the chronology, have students collect national and international newspaper, TV or Internet news reports related to these issues. Each student should briefly present their news item and explain or invite discussion on how it relates to the spread of Islam and religious affairs in those countries. News about Islam in Europe and North America is of particular interest.
- People by the Numbers: Using the map of modern countries in Handout 1c, discuss which are majority, large-minority and small-minority Muslim countries. Using an atlas, gazetteer or other up-to-date reference, have students select several countries on the map and find their current populations. Using a calculator and the map key, figure out the percentage range of Muslim population in these countries (a short calculation sketch follows this list). Answers will be a range, such as "above 50%" of 50 million population = at least 25 million, or 1%-10% of 1 billion = 10 million to 100 million. Students will realize that Muslim minorities in countries with large populations may be more numerous than Muslim majorities in countries with small populations.
- Do the Math: Make a four-column chart on a whiteboard, flipchart or poster. In groups or as a class, list regions from the chronology, such as Syria, Iraq, Iran, Egypt, North Africa, Indonesia, etc. In the second column, list the dates when Islam was first introduced by conquest, trade or migration. In the third column, list the century (approximate date) when Islam became the majority faith in the region, or write "minority" in the space. In the fourth column, write the number of years the region or country has been majority Muslim. Extension: Do the same exercise for the spread of other world faiths such as Christianity, Buddhism, or Hinduism.
- Identify countries mentioned in the Frontline: Muslims video on the demographic map, Handout 1c. How might the different issues raised in the video relate to the location of these places? For example, which involve majority Muslim countries? Which countries are in the Middle East? What language is spoken in each country? Which countries in the video came most recently to Islam? In which countries do Muslims live as small minorities?
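For the "People by the Numbers" activity above, the percentage-range arithmetic can also be checked with a short script. The country figures below are placeholders invented purely for illustration, not current census data; students would substitute whatever populations and map-key ranges they look up:

# Hypothetical example data: total population and the map key's percentage range.
countries = {
    "Country A": (50_000_000, 50, 100),   # majority Muslim: above 50%
    "Country B": (1_000_000_000, 1, 10),  # small minority: 1%-10%
}

for name, (population, low_pct, high_pct) in countries.items():
    low = population * low_pct // 100
    high = population * high_pct // 100
    print(f"{name}: between {low:,} and {high:,} Muslims")

The output reproduces the ranges given in the example: at least 25 million in the first case, and 10 million to 100 million in the second.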
Handout 1a: The Spread of Islam in History
A Slow Process. Hearing that Muslims conquered territory "from the Atlantic to the borders of China," many people reading about Muslim history often wrongly imagine that this huge region instantly became "Islamic." The rapid conquests led to the idea that Islam spread by the sword, with people forced to become Muslims. In fact, however, the spread of Islam in these vast territories took centuries, and Muslims made up a small minority of the population for a long time. In other words, the expansion of territory under Muslim rule happened very rapidly, but the spread of Islam in those lands was a much slower process. The paragraphs below explain how and when that happened.
"Let there be no compulsion in religion." The Qur’an specifies, "Let there be no compulsion in religion" (2: 256). This verse states that no person can ever be forced to accept religion against their will. It tells Muslims never to force people to convert to Islam. Anyone who accepts Islam under pressure might not be sincere, and conversion in name only is useless to them, and harmful to members of the faith community.
Prophet Muhammad set a precedent as the leader of Madinah. Under his leadership, the Muslims practiced tolerance towards those of other religions. Jews, Christians and polytheists (believers in many gods) were parties to the Constitution of Madinah and to treaties with the Muslims, and they discussed religious ideas with Muhammad. The Qur’an records some of the questions that they put to Muhammad about Islam. Later Muslim leaders were required to be tolerant, based on the authority of both the Qur’an (in this and many other verses) and the Sunnah, or example of Muhammad. With few exceptions, Muslim leaders have adhered to this precedent over time.
Becoming Muslim. To accept Islam, a person only has to make the profession of faith (shahada) in front of two or more witnesses. Even after a person has accepted Islam, he or she may take a long time to learn and apply its practices, going through many different stages or levels of understanding and practice over time. As Islam spread among large populations, this process was multiplied across a whole population. Different individuals and social classes may have different understandings of Islam at the same time. Also, many local variations and pre-Islamic customs remained, even after societies had been majority Muslim for a long time. This has been a source of diversity among Muslim cultures and regions.
The Process of Conversion. The Prophet Muhammad preached Islam at Makkah and Madinah in Arabia for about twenty-three years. For the first ten years (612 to 622 CE), he preached publicly at Makkah. After the migration to Madinah he preached only in his own house—which was the first masjid—to people who came to hear him. Preaching in houses or in the masjid became the pattern in Islam.
The first two khalifahs required most of the inhabitants of Arabia who had been pagans to affirm their loyalty as Muslims. Christian and Jewish communities were allowed to continue practicing their faiths. In Yemen there are still Jewish communities. Outside Arabia, however, the khilafah did not force non-Arabs to become Muslims. Historians are surprised that they did not even encourage them to become Muslims. Only Khalifah ‘Umar ibn ‘Abd al-‘Aziz (ruled 717–720) made an effort to encourage people to accept Islam, and sent out missionaries to North Africa and other areas. During the early khilafah (632–750), non-Arabs began to accept Islam of their own free will. New Muslims migrated to Muslim garrison cities to learn about Islam, and possibly to get jobs and associate themselves with ruling groups. Whatever their reasons, their actions became more common over the years and expanded the Muslim population. These migrants became associates, or mawali, of Arab tribes. The mawali also tried to convince their relatives and members of their ethnic group to become Muslims. Some migrant Arab and mawali families made important contributions in preserving and spreading Islamic knowledge. They became scholars of Islamic law, history, literature and the sciences. In this way, Islam spread in spite of political rulers, not because of them.
During the years of the Umayyad khalifahs from 661–750 CE, the overwhelming majority of the non-Arab population of the Umayyad state—which stretched from Morocco to China—was not Muslim. Toward the end of that time, the North African Berbers became the first major non-Arab group to accept Islam.
Within a few centuries, Christianity disappeared almost completely from North Africa—as it did from no other place in the Muslim world. Jews remained as a small minority, with many living in Muslim Spain. The Iranians of Central Asia were the second major movement in the spread of Islam, beginning in about 720 CE. Both of these early groups of converts caused problems for the central government. In North Africa, Berbers set up an independent khalifah, breaking the political unity of Islam. In Central Asia, the revolution arose that replaced the Umayyad with the Abbasid dynasty. After this time, Islam was no longer the religion of a single ethnic group or of one ruling group.
Developing a Muslim culture. In the central lands, the gradual spread of Islam is difficult to trace. Some scholars, such as Richard Bulliet, think that in Egypt few Egyptians had become Muslims before the year 700, and that Islam reached 50 percent of the population in the 900s, three hundred years after the arrival of Islam. By about 1200, Muslims were more than 90 percent of the population. In Syria, Islam spread even more slowly. There, the 50-percent mark was not reached until 1200, nearly six hundred years after the arrival of Islam. Iraq and Iran probably reached a Muslim majority by around 900 CE, like Egypt. In much of Spain and Portugal, Islam became established between 711 and about 1250. After the Reconquista by Spanish Catholics was completed in 1492, and many Muslims and Jews were expelled from Spain, Islam continued to exist until after 1600. Islam may never have been the majority faith during the 700 years of Muslim rule. Spain, Portugal and Sicily are the only places from which Islam has ever been driven out.
In the East, Muslim law treated Zoroastrians, Buddhists, and Hindus just as it treated Jews and Christians. Muslim rulers offered them protection of life, property, and freedom of religious practice in exchange for the payment of a tax, as an alternative to military service. In Sind (India), the Buddhist population seems to have embraced Islam over about two centuries (712–900). Buddhism disappeared entirely. Hinduism in Sind declined much more slowly than Buddhism.
All of the lands described above were territories under Muslim rule. After the decline of unified Muslim rule, Islam spread to lands outside its boundaries. Anatolia (or Asia Minor), which makes up most of modern Turkey, came after 1071 under the rule of Turkish tribesmen who had become Muslims. Islam spread gradually there for centuries after that.
When the Ottoman Turks reached south-eastern Europe in the mid fourteenth century, most Albanians and Bosnians and some Bulgarians became Muslims. From the fifteenth century onward, however, Islam did not spread rapidly in this area, perhaps because the population resented or disliked the centralized government of the Ottoman Empire. Strong feelings about religion and ethnicity in the region may also have been a cause.
Continuing Spread. Beginning in 1192, other Muslim Turkish tribesmen conquered parts of India, including the area of present-day Bangladesh. The number of Muslims in India gradually increased from that time. The people of Bangladesh were Buddhists, and, beginning about 1300, they—like the Buddhists of Sind—rapidly embraced Islam, becoming a Muslim majority in that region. Elsewhere in India, except for Punjab and Kashmir in the north-west, Hinduism remained the religion of the majority.
In South India and Sri Lanka, traders and Sufis, or mystical followers of Islam, spread Islam and carried it to Southeast Asia by 1300 CE. Over the next two centuries in today’s Indonesia—the Spice Islands—Islam spread from Malaysia to Sumatra and reached the Moluccas in eastern Indonesia. Entering a land where Buddhism, Hinduism and the traditional faiths of the island peoples existed, it took several centuries before the practice of Islam became established as it was practiced in other Muslim lands. In Central Asia, Islam gradually spread to the original homelands of the Turks and Mongols, until it was the main religion of nearly all Turkic-speaking peoples. Islam spread into Xinjiang, the western part of China, where it was tolerated by the Chinese empire. Much earlier, in the 8th and 9th centuries, a group of ethnic Chinese Han had accepted Islam. These groups continue to practice Islam today. Islam also spread to China through seaports such as Guangzhou, where the earliest Chinese masjid exists.
Africa. Before 1500, Islam spread widely in sub-Saharan Africa. The first town south of the Sahara that became majority Muslim was Gao on the Niger River in Mali, before 990, when a ruler accepted Islam. Over the centuries, many rulers followed. By 1040, groups in Senegal became Muslims. From them Islam spread to the region of today’s Senegal, west Mali, and Guinea. After the Soninke of the Kingdom of Ghana became Muslims about 1076, Islam spread along the Niger River. Muslims established the kingdom of Mali in the thirteenth to fifteenth centuries, and Songhai from 1465 to 1600. Farther east, Kanem-Bornu near Lake Chad became Muslim after 1100. In West Africa, as in Turkestan, India, and Indonesia, it was traders and later Sufis who introduced Islam, and many rulers accepted it first, followed by others. African Muslim scholars became established in the major towns like Timbuktu, where they taught, wrote and practiced Islamic law as judges. By 1500, Islam was established in West Africa throughout the Sahel belt and along the Niger River into today’s Nigeria.
In East Africa, traders had spread Islam down the coast by the tenth century, and it gradually developed further in the following centuries. In the Sudan, south of Egypt, the population of Nubia gradually became Muslim during the fourteenth century, through the immigration of Muslim Arab tribesmen and the preaching of Islam, and because Christian rule had become weak in the region. Before 1500 CE, however, Muslim rule and influence did not extend south of Khartoum, where the Blue and White Niles meet.
Strong Governments and the Spread of Islam. By understanding that the expansion of Muslim rule was different from the spread of Islam among populations, we can see an interesting trend. Ironically, Islam has spread most widely and rapidly among the population at times when Muslim rule was weaker and less unified. When Muslim political regimes were weak, decentralized, disunited, or completely absent, Islam as a religion flourished and often spread to non-Muslims. The influence of traders, Sufis and Muslim culture in the cities aided the spread of Islam to new areas. On the other hand, strong states like the Ottoman Empire in the Balkans during the fifteenth century, or the Sultanate of Delhi and the Mughal empire in northern India, had little success in spreading Islam, though they did gain territory. Non-Muslim populations seem to have viewed these powerful Muslim rulers negatively, and so they resisted conversion to Islam. Whoever did embrace Islam in such circumstances, if not for material gain, usually did so because of the efforts of merchants, teachers and traveling Sufi preachers, who were not part of the government. Although the conversion of rulers has often influenced other people in a society to accept Islam, these conversions were not the result of conquests. As in West Africa, East Africa and Southeast Asia, such regions were far from the ruling centers, and people came to know about Islam through the example and teaching of the traders and travelers who came among them.
- In what important way was the conquest of territory by Muslims different from the spread of Islam?
- How many centuries do historians think it took from the time Islam was introduced until it became the religion of the majority population in Egypt, Syria, Iran, and Spain?
- To which regions did Islam spread mainly as a result of trade and travel?
- How do you think the development of Islamic law might have been affected by the fact that Islam was a minority faith at the time of the early Muslim scholars of law? How might laws tolerating other religions have affected the spread of Islam among the population?
- Construct a time line tracing the spread of Islam using the dates in the text above.
- Locate the regions mentioned in the text on a map, and make labels showing the dates when Islam was introduced and reached a majority of the population there. Compare your map with handout #XX, showing the spread of Islam by locating the places you identified on that map.
Resources for further reading:
Khalid Blankinship, "The Spread of Islam," in World Eras: Rise and Spread of Islam, 622-1500, S. L. Douglass, ed. (Farmington, MI: Gale, 2002), pp. 230-232.
Richard Bulliet, Conversion to Islam in the Medieval Period: An Essay in Quantitative History (Cambridge, Mass: Harvard University Press, 1979).
Bulliet, Islam: the View from the Edge (New York: Columbia University Press, 1994)
Handout 1a (adapted): The Spread of Islam in History
A Slow Process. In the first century after Muhammad died, Muslims conquered territory stretching from the Atlantic to the borders of China. People often assume that this huge region instantly became "Islamic" with the arrival of Muslims. This notion led to the idea that people were forced to become Muslims, and that Islam spread by the sword. In fact, the spread of Islam in these lands took many centuries. Although Muslims were the ruling group, they were a small minority of the population. In other words, the expansion of territory under Muslim rule happened very rapidly, but the spread of Islam in those lands was a much slower process. The paragraphs below explain how and when that happened.
"Let there be no compulsion in religion." The Qur’an states, "Let there be no compulsion in religion" (2: 256). This verse tells Muslims never to force people to convert to Islam. Anyone who accepts Islam under pressure might not be sincere. Converting to a religion by force, or only in name, would be useless and harmful to any faith community.
Prophet Muhammad set a precedent, or example, as the leader of Madinah. Under his leadership, Muslims practiced tolerance toward persons with other religious beliefs. Muslims made treaties and agreements with people of other religions. They discussed religious ideas with Jews, Christians and polytheists (believers in many gods). The Qur’an and Muhammad’s example required Muslim leaders to be tolerant of the People of the Book, or Jews and Christians, and to allow them freedom of worship. With few exceptions, Muslim leaders have followed these policies over time.
Becoming Muslim is a simple act. To accept Islam, a person only has to make the profession of faith (shahada) in front of two or more witnesses. After that, it may take a long time to learn and apply Islamic practices. As Islam spread, this process was multiplied across large populations. Many local variations in understanding as well as customs remained from people’s lives before accepting Islam. These continued even after societies had been majority Muslim for a long time. This has been a source of diversity among Muslim cultures and regions.
The Process of Conversion. The Prophet Muhammad preached Islam publicly at Makkah and from his home in Madinah for about twenty-three years. His house in Madinah became the first masjid. Christian and Jewish communities were allowed to continue practicing their faiths. Non-Arabs were neither forced nor expected to become Muslims. As people in lands under Muslim rule learned about the faith and traveled to Muslim cities, some began to accept Islam by choice. When they returned home, they shared their religious knowledge with family and friends. Many of the families of early non-Arab converts went on to become important scholars of Islamic knowledge. They played important roles in preserving and developing Islamic law, history, literature and sciences.
Although the rulers of the Umayyad khalifah (661-750 CE) were Muslim, the overwhelming majority of the non-Arab population of their state—which stretched from Morocco to China—was not Muslim. Eventually, the North African Berbers became the first major non-Arab group to accept Islam. The Iranians of Central Asia followed them. In time, both groups of converts broke away from the khalifah government and set up their own governments. Islam was no longer the religion of a single ethnic group. It was no longer ruled by one government.
Developing Muslim culture. In Egypt, Iran and Iraq, scholars believe that Islam reached approximately 50 percent of the population by the 900s, three hundred years after its arrival. From then on, conversion rates slowly increased in the region. Islam also spread to Spain and Portugal between 711 and about 1250. After the 1492 Spanish Reconquista, many Muslims and Jews were expelled from Spain. Islam spread in other places, however, such as Anatolia (Asia Minor) after 1071. When the Ottoman Turks reached south-eastern Europe in the mid 1300s, many Albanians, Bosnians and Bulgarians became Muslims.
Continuing Spread. Beginning in 1192, Muslims conquered parts of India, including lands in today’s Bangladesh. Although the number of Muslims in South Asia gradually increased, Hinduism remained the religion of the majority in India. Muslim rulers generally treated Zoroastrians, Buddhists, and Hindus just as they treated Jews and Christians. They were offered protection of life, property, and freedom of religious practice in exchange for paying a tax. Muslim citizens paid other types of taxes, and served in the army.
In South India and Sri Lanka, traders and Sufis, or mystical followers of Islam, spread Islam and carried it to Southeast Asia by 1300 CE. In Central Asia, Islam gradually spread to the original homelands of the Turks and Mongols. Islam spread into Xinjiang, the western part of China, where the Chinese empire tolerated it. Early in Muslim history, a group of ethnic Chinese, the Han, had accepted Islam. Both groups continue to practice Islam in China today.
Islam in Africa. Before 1500, Islam had already spread widely in sub-Saharan Africa. The first town south of the Sahara that became majority Muslim was Gao on the Niger River in Mali. After the Soninke of the Kingdom of Ghana became Muslims around 1076, Islam spread along the Niger River. Muslims established the kingdom of Mali in the thirteenth to fifteenth centuries, which was later taken over by the Songhai from 1465 to 1600. In Timbuktu, a thriving city of Mali, African Muslim scholars taught, wrote and practiced Islamic law as judges. Farther east, Islam spread to Kanem-Bornu near Lake Chad after 1100. In West Africa, as in Turkestan, India, and Indonesia, traders and later Sufis introduced Islam. Often, rulers in these places accepted it first, followed by others.
In East Africa, Arab traders had spread Islam down the coast by the tenth century. In the Sudan, during the fourteenth century, Islam spread through migration of Muslim Arab tribesmen.
Governments and the Spread of Islam. In summary, the expansion of Muslim rule was different from the spread of Islam among populations. Islam spread mainly among people in the cities and countryside, and not by the efforts of governments. Ironically, Islam has spread most widely and rapidly among the population at times when Muslim rule was weaker and less unified. When Muslim political regimes were weak, decentralized, disunited, or completely absent, Islam as a religion flourished and often spread to non-Muslims. For example, traders, Sufis and the influence of Muslim culture in cities aided the spread of Islam to new areas that were not ruled by Muslims. On the other hand, strong states like the Ottoman Empire in the Balkans during the fifteenth century, the Delhi Sultanate, or the Mughal Empire in northern India had little success in spreading Islam, even though their territory grew. In some places, a ruler’s conversion influenced other people in the society to accept Islam. These conversions, however, were not the result of conquests. Merchants, teachers, and traveling Sufi preachers were the agents who helped spread Islam. Finally, according to Islamic beliefs, it is not a Muslim who causes someone to accept Islam, but God who opens a person’s heart to faith.
- How was the growth of territory ruled by Muslims different from the spread of Islam among the people who lived in those lands?
- How long do historians think it took from the introduction of Islam until it became the religion of the majority population in Egypt, Syria, Iran, and Spain?
- Where did Islam spread mainly as a result of trade and travel?
- Make a time line that traces the spread of Islam, using the dates in the text above.
- Locate the regions mentioned in the text on a map, and make labels showing the dates when Islam was introduced there.
Handout 1b: Chronology of the Spread of Islam
Over the past 1425 years, Islam has spread from the small trading town of Makkah on the Arabian Peninsula to become a world religion practiced on every continent. Like other world religions, Islam has been spreading ever since its origin, both through migration of Muslims to new places, and by individuals who have accepted Islam as their religion, having chosen to convert from other religions.
During the first century after the Hijrah, rapid expansion of the territory under Muslim rule took place as a result of military campaigns. This territory did not instantly become "Islamic," meaning that most people rapidly became Muslims. In fact, the spread of Islam among the population took centuries, even in the regions conquered in the 7th century CE.
The following chronology marks dates when various regions were first introduced to Islam, gives the dates when Muslims probably became a majority of the population in those regions, and notes important dates in the past two hundred years or so, when Muslim-majority regions were conquered by groups of other faiths. During the past century, many Muslim regions were colonized by European nations, with Muslim countries formed after independence. Religious life in those countries was much affected by foreign rule. In turn, emigration by Muslims and travel by non-Muslims has resulted in introducing Islam to Europe and the Americas. The chronology also records trends in cultural and religious influence by Muslims and by non-Muslims that affect the spread of Islam.
622 Muhammad and the Muslims migrated from Makkah to Madinah at the invitation of the Madinans. Muhammad became the city’s leader, and the first Muslim community was established.
630 Makkah surrendered to the Muslim force, placing the city under Muslim rule. Many members of Quraysh accepted Islam shortly after.
632 Muhammad died, leaving much of the Arabian Peninsula under Muslim rule.
634-650 Muslim armies defeated Byzantine and Persian imperial armies, bringing Syria, Iraq, Egypt and Iran under Muslim rule, including the cities of Jerusalem, Damascus, and Alexandria.
711-715 Spain, Turkistan and Sind (northern India) were brought under Muslim rule.
750s Muslim soldiers settled in Chang’an (Xian), the largest city in China. Muslim merchants also visited and settled in southern Chinese ports.
*ca. 800-850 Islam became the faith of the majority of people in Iran.
819 The Samanids became the first independent Muslim state in northeastern Iran and Central Asia. By the 900s CE, Islam became the majority religion in that region.
*ca. 850-900 Islam became the majority religion in Iraq, Egypt and Tunisia.
*ca. 940-1000 Islam became the majority religion in Muslim-ruled parts of the Iberian Peninsula (today’s Spain and Portugal).
11th c Muslim traders in West Africa began to spread Islam. Muslims settled in the Champa region of Vietnam and introduced Islam.
1040s The Almoravids, a Muslim Berber ruling group, spread Islam in Mauritania and other parts of west Africa. They campaigned against the Soninke kings of Ghana.
1060s The Almoravids ruled in the Maghrib and Muslim Spain (al-Andalus). The empire of Ghana weakened.
1099-1187 Western European Crusader armies held Jerusalem.
*ca.1200 Islam became the majority religion in Syria.
13th c. Ghana’s empire collapsed and Mali rose. Rulers of Kanem, near Lake Chad, became Muslim.
End 13th c Muslims lived in northern ports of Sumatra (today’s Indonesia). Muslim traders had close trade and cultural contacts in the trading cities on the east Indian coast, such as Gujarat.
1295 The Ilkhan ruler Ghazan "the Reformer" was the first Mongol leader to become Muslim, along with most of his Mongol generals.
ca.1300 Islam became the majority faith in Anatolia (part of today’s Turkey).
1324-25 Mansa Musa, king of Mali, made the pilgrimage journey to Makkah, strengthening Mali’s links with Islam.
14th c. Mali, Gao, and Timbuktu, cities on the Niger River in west Africa became important centers of Muslim trade and scholarship
15th c. A ruler of Malacca converted to Islam, while that port city was becoming an important stop on the China-Indian Ocean trade routes. From Malacca, Islamic influence spread in the Malay peninsula and nearby islands.
1453 Ottoman forces conquered the city of Constantinople, ending the Byzantine Empire.
1085-1492 Spanish Christian forces carried out Reconquista in the Iberian Peninsula.
1495 Muslims and Jews were expelled from Spain, while others were forced to convert to Christianity.
1501-1600 Safavid rulers in Iran established a strong Shi’i Muslim state, arts and culture flourish.
1526-1707 Mughal India was established and reached its greatest size and cultural influence. Religious tolerance toward Hindus varied among rulers. Both Muslim and Hindu influences contributed to Mughal culture, politics and the arts.
1500-1570s Ottoman Muslim Turks united most of Southwest Asia and North Africa (often called the Middle East) under their rule. The Ottoman Empire expanded into Eastern Europe. Religious tolerance policies gave non-Muslim minorities autonomy in worship and religious law.
1500-1680 Muslim empires and small states expanded the territory under Muslim rule and influence, such as Kanem-Bornu, Songhai, Bondu, Nubia and Ethiopia. European economic and military pressure increased in coastal areas of West and East Africa.
1500-1600 Muslim rule replaced Hindu rule in the Indonesian islands of Sumatra and Java
1500-1600 Central Asian Muslim states weakened as overland trade on the old Silk Roads declined and sea trade by Europeans increased. The Russian Empire expanded into Central Asia, defeating Muslim states near the border between Europe and Asia.
1608-1670 Islamic political, religious and cultural influence grew in Malaysia and Indonesia, while Dutch economic and political pressure also grew.
1641 Dutch forces conquered Malacca, a major port in Southeast Asia, which was the gateway to the China Sea and the Pacific.
1669-1774 Ottoman territories in Eastern Europe were lost to Europeans and Russians. Ottoman government weakened, and European economic pressure grew.
1748-1800 The Safavid Empire in Iran ended. British and Russian military and economic influence in the region grew.
1761-1800 Hindu Marathas and Sikhs challenged Mughal rule over parts of India. British control of Indian territory expanded to the Ganges River plain.
1725-1898 Muslim states and reform movements extended Islamization in West Africa, North Africa and the Sudan, including Abd al-Qadir in Algeria, Uthman dan Fodio in Nigeria, Samori Ture in West Africa and Muhammad al-Mahdi in the Sudan. These movements, which included military challenges, opposed British and French political control of these African regions.
1830-1882 The French invaded and colonized Algeria and Tunisia. British forces occupied Egypt. North African nationalist and religious movements challenged British and French colonial power.
1803-1818 Delhi fell to the British in 1803, and British rule was established all over India.
1800-1910 Dutch control of the Indonesian islands expanded. Religious reform movements in Sumatra and Java opposed colonial rule. These movements helped spread Islam and Muslim cultural and political influence.
1802-1925 Wahhabi Muslim reformers called for a return to a more purist interpretation of Islam, and revolted in Iraq, Syria and Arabia in 1802. Wahhabi influence continued in Arabia, leading to the founding of Saudi Arabia by Ibn Saud.
1800-1920 Russia and China imposed direct rule on Central Asian Muslim states. Muslim revivalist movements, led by Sufi orders such as the Naqshbandi, opposed colonial rule. Attempts to assimilate Chinese Muslims to Confucianism added to pressure on Muslims from European economic and military power.
1917-1949 The Russian and Chinese Revolutions brought anti-religious and communist ideas and strong central governments. Persecution of Muslims and other religious groups brought cultural and religious disaster to those regions. Practice of religion was strongly limited.
1900-1912 Britain colonized Nigeria. France conquered Morocco and the Sahara. Italy conquered Libya. European rule contributed to the spread of Islam and the growth of Muslim institutions in these areas.
1908-1920 The Ottoman Empire was broken up at the end of World War I, ending more than six hundred years of Ottoman rule. Many of its territories were already under European colonial rule. Modern Turkey was carved out of Anatolia. The post-war settlements and the League of Nations established French mandates (temporary rule) over Lebanon and Syria, and British mandates over Iraq, Palestine and Jordan. The Jewish Zionist movement gained British support to establish a Jewish state in Palestine.
1800-1945 Traditional Muslim educational institutions declined with the European political and economic takeover. Islamic awqaf (charitable foundations) were taken over by governments. European influence over schools created a sharp division between religious and secular education, and many upper-class parents sent their children to European-model schools and to missionary schools established by churches in Muslim countries.
1900-1948 With the support of the Zionist movement and growing persecution of Jews in Russia and Europe, Jews acquired land and settled in Palestine under the British Mandate. The British exited their mandate, and the State of Israel was established in 1948. Many Muslim and Christian Palestinians lost their land, homes and lives, and became refugees.
1900-1938 Nationalist independence movements in Asia and Africa included the growth of Muslim political parties in India, Indonesia, Egypt, North Africa and China, along with efforts to retain Islamic education and preserve religious institutions.
1945-1990 Independence movements, aided by the weakening of the European colonial powers after World War II, won independence for Muslim countries from Central Asia to Africa and Europe. Borders often reflected former colonies. Post-colonial governments were committed to secularization and to controlling Islamic influence, believing that modernization could best be achieved with religion under state control. Muslim movements opposed these views and the secular governments.
1800-2000 European and American citizens learn about Islam and Muslim culture through popular media and education. European and American universities opened departments of Islamic and Muslim studies. Books, television, the Internet, movies and cultural institutions like museums provide information on Islam. Immigration also increased familiarity with Islam. By 1980, most European and US curricula included study of Islam and Muslim history. Muslim publications and organizations challenged western misunderstanding of Islam and Muslims.
1920-2000 Muslims emigrated to the European former colonial powers, the United States, and Latin America, especially after 1945 and, in the US, after 1975. African Americans joined movements influenced by Islam, and some embraced Islam. By 2000, nearly 40% of the American Muslim population of 4-6 million was African American. By 2000, Muslims formed large minorities in France, Germany, the Netherlands, Italy and the United Kingdom. Significant Muslim minorities in western industrialized countries led to increased participation of Muslims in those societies and the growth of religious, educational, civic and cultural institutions.
Richard W. Bulliet, Conversion to Islam in the Medieval Period: An Essay in Quantitative History (Harvard University Press, 1979) [The dates marked with an *asterisk are derived from this study]
Khalid Y. Blankinship, "Politics, Law and the Military," in S. L. Douglass, ed., World Eras: Rise and Spread of Islam, 622-1500 (Farmington Hills, MI: Gale Group, Inc., 2002), pp. 230-232.
Marshall G. S. Hodgson, The Venture of Islam: Conscience and History in a World Civilization, Vols. 1 & 2 (Chicago: University of Chicago Press, 1974)
Francis Robinson, ed., Atlas of the Islamic World Since 1500 (New York: Facts on File, Inc., 1982).
Handout 1c: Muslim Population Percentage by Country
(Source of data: Guardian Newspapers Ltd. 2001) | http://www.islamproject.org/education/B04_SpreadofIslam.htm | 13
20 | A standard definition of static equilibrium is:
- A system of particles is in static equilibrium when all the particles of the system are at rest and the total force on each particle is permanently zero.
This is a strict definition, and often the term "static equilibrium" is used in a more relaxed manner interchangeably with "mechanical equilibrium", as defined next.
A standard definition of mechanical equilibrium for a particle is:
- The necessary and sufficient condition for a particle to be in mechanical equilibrium is that the net force (the vector sum of all forces) acting upon the particle is zero.
The necessary conditions for mechanical equilibrium for a system of particles are:
- (i) The vector sum of all external forces is zero;
- (ii) The sum of the moments of all external forces about any line is zero.
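Written in the usual vector notation, these two conditions simply state that the external forces, and their moments about an arbitrary point or line, sum to zero:

\sum_i \vec{F}_i^{\,\mathrm{ext}} = \vec{0}, \qquad \sum_i \vec{M}_i^{\,\mathrm{ext}} = \vec{0}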
As applied to a rigid body, the necessary and sufficient conditions become:
- A rigid body (an idealized solid body whose deformation is neglected) is in mechanical equilibrium when the sum of all forces on all particles of the system is zero, and also the sum of all torques (moments of force) on all particles of the system is zero.
A rigid body in mechanical equilibrium is undergoing neither linear nor rotational acceleration; however it could be translating or rotating at a constant velocity.
However, this definition is of little use in continuum mechanics, for which the idea of a particle is foreign. In addition, this definition gives no information about one of the most important and interesting aspects of equilibrium states – their stability.
An alternative definition of equilibrium that applies to conservative systems and often proves more useful is:
- A system is in mechanical equilibrium if its position in configuration space is a point at which the gradient of the potential energy with respect to the generalized coordinates is zero.
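In symbols (writing V for the potential energy and q_1, ..., q_n for the generalized coordinates), the condition in the bullet above reads:

\frac{\partial V}{\partial q_1} = \frac{\partial V}{\partial q_2} = \cdots = \frac{\partial V}{\partial q_n} = 0, \qquad \text{i.e.} \qquad \nabla_q V = 0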
Because of the fundamental relationship between force and energy, this definition is equivalent to the first definition. However, the definition involving energy can be readily extended to yield information about the stability of the equilibrium state.
For example, from elementary calculus we know that a necessary condition for a local minimum or maximum of a differentiable function is a vanishing first derivative. To determine whether a point is a minimum or a maximum, one may be able to use the second derivative test. The consequences for the stability of the equilibrium state are as follows:
- Second derivative < 0: The potential energy is at a local maximum, which means that the system is in an unstable equilibrium state. If the system is displaced an arbitrarily small distance from the equilibrium state, the forces of the system cause it to move even farther away.
- Second derivative > 0: The potential energy is at a local minimum. This is a stable equilibrium. The response to a small perturbation is forces that tend to restore the equilibrium. If more than one stable equilibrium state is possible for a system, any equilibria whose potential energy is higher than the absolute minimum represent metastable states.
- Second derivative = 0 or does not exist: The second derivative test fails, and one must typically resort to using the first derivative test. Both of the previous results are still possible, as is a third: this could be a region in which the energy does not vary, in which case the equilibrium is called neutral, indifferent, or marginally stable. To lowest order, if the system is displaced a small amount, it will stay in the new state.
In more than one dimension, it is possible to get different results in different directions, for example stability with respect to displacements in the x-direction but instability in the y-direction, a case known as a saddle point (a stationary point that is not a local extremum). Without further qualification, an equilibrium is stable only if it is stable in all directions.
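As a concrete illustration of these cases, here is a minimal Python sketch (our own example, not from the original article) that classifies an equilibrium point of a two-dimensional potential from the eigenvalues of its Hessian; the potential V = x^2 - y^2 gives the saddle-point case just described:

```python
import numpy as np

def classify_equilibrium(hessian, tol=1e-9):
    """Classify a critical point of a potential V from its Hessian matrix."""
    eigvals = np.linalg.eigvalsh(hessian)  # real eigenvalues of the symmetric Hessian
    if np.all(eigvals > tol):
        return "stable (local minimum of V)"
    if np.all(eigvals < -tol):
        return "unstable (local maximum of V)"
    if np.any(eigvals > tol) and np.any(eigvals < -tol):
        return "unstable (saddle point)"
    return "inconclusive (second derivative test fails)"

# Example: V(x, y) = x**2 - y**2 has a critical point at the origin.
# Its Hessian there is [[2, 0], [0, -2]]: stable in x, unstable in y.
H_saddle = np.array([[2.0, 0.0],
                     [0.0, -2.0]])
print(classify_equilibrium(H_saddle))   # -> unstable (saddle point)

# Example: V(x, y) = x**2 + y**2 (a bowl) is stable at the origin.
H_bowl = np.array([[2.0, 0.0],
                   [0.0, 2.0]])
print(classify_equilibrium(H_bowl))     # -> stable (local minimum of V)
```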
The special case of mechanical equilibrium of a stationary object is static equilibrium. A paperweight on a desk would be in static equilibrium. The minimal number of static equilibria of homogeneous, convex bodies (when resting under gravity on a horizontal surface) is of special interest. In the planar case the minimal number is 4, while in three dimensions one can build an object with just one stable and one unstable balance point; such a body, called a gömböc, was conjectured to exist by the Russian mathematician Vladimir Arnold in 1995, and its existence was proven in 2006 by the Hungarian scientists Gábor Domokos and Péter Várkonyi. A child sliding down a playground slide at constant speed would be in mechanical equilibrium, but not in static equilibrium.
An example of mechanical equilibrium is a person compressing a spring. He or she can compress it up to a point at which the applied force and the resistive force from the spring are equal, so the spring cannot be compressed further. At this state the system is in mechanical equilibrium. When the compressing force is removed, the spring returns to its original state.
- Dynamic equilibrium: a state in which a reversible reaction no longer changes its ratio of reactants to products, although substances continue to move between them at equal rates, so there is no net change.
- Engineering mechanics
- Metastability: the extended duration of certain equilibria acquired by complex systems when leaving their most stable state after an external action.
- Statically indeterminate: in statics, a structure is statically indeterminate when the static equilibrium equations are insufficient for determining the internal forces and reactions on that structure.
- Statics: the branch of mechanics concerned with the analysis of loads on physical systems in static equilibrium, that is, in a state where the relative positions of subsystems do not vary over time.
- Marion & Thornton, Classical Dynamics of Particles and Systems. Fourth Edition, Harcourt Brace & Company (1995). | http://www.absoluteastronomy.com/topics/Mechanical_equilibrium | 13 |
21 | Some would say that the most important components in an RF system are the mixer, filter, amplifier, transmitter/receiver, and antenna. However, in addition to these components, there are less glamorous devices that play a pivotal role in the successful design of an RF system. Some of these components are used to either help send the signal in a different direction (or in multiple directions) within the system or change the shape of the RF signal. One of these important, yet sometimes taken for granted, parts is the RF switch.
As noted, the RF switch changes the direction of the RF signal. When a switch operates in both directions, it is referred to as bidirectional. Since the switch requires some sort of power supply, it is considered an active device.
There are two important parameters the designer should look at when selecting an RF switch for his/her design: insertion loss and isolation. When looking at a datasheet, the designer wants a switch with the lowest possible insertion loss and the highest possible isolation.
The “insertion loss” specification of a switch module is a measure of the power loss and attenuation a signal experiences in passing through the switch; such loss becomes significant when the length of the transmission line the signal propagates through is greater than about 0.01 of its wavelength. Insertion loss of a switch module at a particular frequency can be used to calculate the power loss or voltage attenuation caused by the switch on a signal at that frequency.
Every switch has some parasitic capacitance, inductance, resistance, and conductance. These parasitic components combine to attenuate and degrade the signal that the switch is being used to route. The power loss and voltage attenuation caused by these components varies with the frequency of the input signal, and can be quantified by the insertion loss specification of the switch module at that frequency. As a result, it is very important to ensure that the insertion loss of a switch is acceptable at the bandwidth requirement of the application.
Power loss can be calculated from the ratio of the power delivered to the switch input to the power measured at its output: insertion loss (dB) = 10 × log10(Pin / Pout). Similarly, voltage attenuation can be calculated as: attenuation (dB) = 20 × log10(Vin / Vout).
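As a quick illustration, here is a minimal sketch of both calculations in Python (the function names and example numbers are ours, not from any datasheet):

```python
import math

def power_loss_db(p_in_watts, p_out_watts):
    """Insertion (power) loss in dB from input and output power."""
    return 10.0 * math.log10(p_in_watts / p_out_watts)

def voltage_attenuation_db(v_in, v_out):
    """Voltage attenuation in dB from input and output voltage."""
    return 20.0 * math.log10(v_in / v_out)

# Example: a switch that passes 0.95 mW of a 1 mW input signal
print(round(power_loss_db(1.0e-3, 0.95e-3), 3))      # ~0.223 dB insertion loss
# Example: 1.0 V in, 0.95 V out across the same impedance
print(round(voltage_attenuation_db(1.0, 0.95), 3))   # ~0.446 dB attenuation
```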
Isolation is defined as the magnitude of a signal that gets coupled across an open circuit. Crosstalk (Figure 1) is defined as the magnitude of a signal that is coupled between circuits (such as separate multiplexer banks on an RF module).
Figure 1: Crosstalk versus isolation. (Courtesy of National Instruments.)
Another key parameter is switching speed – a measure of how long it takes for the part to go from one position to the other. In datasheets we usually see the terms from “on” to “off” and from “off” to “on.” Obviously the fastest possible switching speed is what the designer wants for his/her RF designs.
Other parameters a designer might be concerned with are the frequency range, return loss, settling time, power handling, termination, video leakage, and operating life.
There are basically two types of switches used today: electromechanical and solid-state switches. The basic operation of electromechanical switches is based on the simple theory of electromagnetic induction. They rely on mechanical contacts as their switching mechanism.
In electromechanical switches, the control signal causes the contacts to physically change position during the switching process. These parts can handle high-power RF signals since they have low insertion loss and high isolation. The disadvantage is that they can be bulky, heavy, and slow, with switching speeds in the millisecond range. We usually see these types of switches used in industrial and test-and-measurement applications.
As the name implies, a solid-state switch is an electronic switching device based on semiconductor technology, so nothing inside these parts moves (unlike electromechanical switches); this makes them faster, smaller, and lighter. Switching times for these parts are in the nanosecond range. Compared to electromechanical switches, these parts are inferior with regard to insertion loss, DC power consumption, isolation, and power handling (Table 1). A switch of this kind is made from diodes (low insertion loss) or transistors (faster switching time).
| Parameter | Electromechanical | Solid-state |
| Switching speed | In ms | In ns |
| Frequency range | From DC | From kHz |
| Settling time | < 15 ms | < 1 µs |
| Operating life | 5 million cycles | Infinite |
| Sensitive to | Vibration | RF power overstress |
Table 1: Comparison of electromechanical and solid-state switch parameters. (Courtesy of Wikipedia)
A new type of switch available today is the MEMS RF switch. This type promises better properties than either electromechanical or semiconductor switches. MEMS switches offer the high RF performance and low DC power consumption of electromechanical parts, and the small size, weight, and low-cost features of semiconductor parts.
MEMS switches are surface-micromachined devices which use a mechanical movement to achieve a short circuit or an open circuit in the RF transmission line. RF MEMS switches are designed to operate at RF to mm-wave frequencies (0.1 to 100 GHz). The advantages of MEMS switches over PIN diode or FET switches are:
- Near-Zero Power Consumption: Electrostatic actuation requires 30-80 V, but does not consume any current, leading to very low power dissipation (10 to 100 nJ per switching cycle).
- Very High Isolation: RF MEMS metal-contact switches are fabricated with air gaps, and therefore, have very low off-state capacitances (2 to 4 fF) resulting in excellent isolation at 0.1 to 60 GHz.
- Very Low Insertion Loss: RF MEMS metal-contact and capacitive switches have an insertion loss of 0.1 dB up to 100 GHz.
MEMS switches also have limitations compared with semiconductor switches:
- Relatively Low Speed: The switching speed of most electrostatic MEMS switches is 2 to 40 µs, and for thermal/magnetic switches, the switching speed is 200 to 3,000 µs. Certain communication and radar systems require much faster switches.
- High Voltage or High Current Drive: Electrostatic MEMS switches require 30 to 80 V for reliable operation, and this requires a voltage up-converter chip when used in portable telecommunication systems.
These types of switches would also benefit satellite applications, which not only demand high switching performance, but also mass and volume reduction. Another possible use is in beam forming networks, such as in the design of reconfigurable Butler matrices and phase shifters for multi-beam satellite communication systems. Down the road, MEMS switches should become more advantageous as frequency is increased.
Poles and throws
RF switches are categorized by their number of poles and throws. The number of poles is the number of separate circuits controlled by a switch. The number of throws is the number of separate positions that the switch can adopt. For example, the following are some RF switch definitions:
- An SPDT (single-pole, double-throw) switch routes RF signals from one input port to either of two selectable output ports.
- A terminated SPDT switch is a single-pole, double-throw switch whose open output RF port is internally terminated in a 50 Ω resistive load.
- A multiposition switch has one input and more than two outputs.
- A transfer or DPDT (double-pole, double-throw) switch has two independent paths that operate simultaneously in either of two selected positions.
- A bypass switch inserts or removes a test component from a signal path.
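To make the pole-and-throw terminology concrete, the following toy sketch (purely illustrative; the class and method names are our own invention) models an SPDT switch routing one input to one of two selectable outputs:

```python
class SPDTSwitch:
    """Toy model of a single-pole, double-throw (SPDT) RF switch."""

    def __init__(self):
        self.position = 1  # throw currently connected to the single pole (1 or 2)

    def select(self, throw):
        """Connect the pole (input) to the requested throw (output port 1 or 2)."""
        if throw not in (1, 2):
            raise ValueError("An SPDT switch has only two throws: 1 or 2")
        self.position = throw

    def route(self, signal):
        """Return (port, signal) showing where the input signal is delivered."""
        return (self.position, signal)

sw = SPDTSwitch()
sw.select(2)
print(sw.route("1 GHz carrier"))   # -> (2, '1 GHz carrier')
```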
The ADG918/ADG919 from Analog Devices (Figure 2) are wideband switches using a CMOS process to provide high isolation and low insertion loss to 1 GHz. The ADG918 is an absorptive (matched) switch having 50 Ω terminated shunt legs, whereas the ADG919 is a reflective switch. Housed in an 8-lead MSOP/LFCSP package, the parts offer -43 dB off isolation at 1 GHz and 0.8 dB insertion loss at 1 GHz.
Figure 2: Analog Devices' ADG918.
The SA58643 from NXP Semiconductor (Figure 3) is a wideband RF switch fabricated in BiCMOS technology and incorporating on-chip CMOS/TTL compatible drivers. Its primary function is to switch signals in the frequency range DC to 1 GHz from one 50 Ω channel to another. The switch is activated by a CMOS/TTL compatible signal applied to the enable channel 1 pin (ENCH1). The extremely low current consumption makes the SA58643 ideal for portable applications. The excellent isolation and low loss make this a suitable replacement for PIN diodes. It is available in an 8-pin TSSOP package.
Figure 3: NXP’s SA58643.
The Skyworks AS179-92LF is an IC FET SPDT switch in a miniature SC-70 6-lead plastic package. The AS179-92 features low insertion loss and positive voltage operation with very low DC power consumption. This general-purpose switch can be used in a variety of telecommunications applications.
In this article, we discussed the types of RF switches that are available today and the important parameters designers must consider when selecting a switch for his/her design. We also examined a relatively new category of RF switch, the MEMS switch, which promises the high RF performance and low DC power consumption of electromechanical switches and the small size, weight, and low-cost features of semiconductor designs.
Regardless of the type of switch you choose, selecting the best and most cost-effective RF switch for your application requires a thorough review of the datasheet of the product to determine whether its insertion loss, isolation, and other specifications meet the requirements of your system. Some vendors provide sweep charts to display these specifications for an entire range of frequencies, while others will only provide specifications for a particular frequency. In such cases, it is important to obtain complete specifications to determine if the product is suited to your application.
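One simple way to organize that review is sketched below (the field names and numbers are hypothetical, not taken from any vendor's datasheet): the specifications of a candidate switch at the operating frequency are compared against the system's requirements.

```python
def meets_requirements(spec, req):
    """Return True if a switch spec (at the operating frequency) satisfies the system requirements.

    spec and req are dicts of insertion loss (dB), isolation (dB), and switching time (s).
    Lower insertion loss and switching time are better; higher isolation is better.
    """
    return (spec["insertion_loss_db"] <= req["max_insertion_loss_db"]
            and spec["isolation_db"] >= req["min_isolation_db"]
            and spec["switching_time_s"] <= req["max_switching_time_s"])

# Hypothetical solid-state switch specs at 1 GHz versus example system requirements
switch_spec = {"insertion_loss_db": 0.8, "isolation_db": 43.0, "switching_time_s": 50e-9}
system_req = {"max_insertion_loss_db": 1.0, "min_isolation_db": 40.0, "max_switching_time_s": 1e-6}
print(meets_requirements(switch_spec, system_req))   # -> True
```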
- Carl J. Weisman, The Essential Guide to RF and Wireless (Upper Saddle River, NJ: Prentice Hall, 2002)
- P.D Grant, R.R. Mansour, and M.W Denfoff, A Comparison between RF MEMS Switches and Semiconductor Switches. ftp://22.214.171.124/cee/baganji/RF%20switch/mems-switch-2002.pdf
| http://www.digikey.com/us/en/techzone/wireless/resources/articles/choosing-an-rf-switch.html?WT.z_sm_link=Twitter_wirelsstzart_0712 | 13
70 | A Brief History of the Gold Standard
Congress established a mint in 1792 and defined the dollar in terms of a specific weight in both gold and silver. This put the new republic on a bimetallic standard, common at the time. Initially, the official ratio of gold to silver at the mint undervalued gold relative to silver, so gold tended to leave circulation and silver became the de facto standard.
In 1834, the dollar was devalued in terms of gold — from $19.39 per troy ounce to $20.67, shifting the United States to a de facto gold standard. The official gold price remained $20.67 until President Franklin Roosevelt devalued the dollar to $35 an ounce a century later.
Paper Money in a Domestic Gold and Silver Standard. The federal government did not initially issue paper money. Private state-chartered banks issued bank notes that circulated as paper money until the Civil War. In addition, the federally chartered First and Second Banks of the United States issued notes during 1791-1811 and 1816-1836, respectively. Note-issuing banks were expected to redeem their notes on demand in gold or silver at the official prices.
Banks were not required to hold 100 percent of their note and deposit liabilities as gold or silver reserves, but they were expected to hold enough specie reserves to be able to redeem them on demand at the official rates. This “convertibility” requirement was intended to prevent “over-issue” of notes or paper money.
The Return to Gold after the Civil War. During major economic disturbances creating inflation, especially wars, banks often became unable to redeem their notes at par, so they “suspended specie payments” and their notes traded at a discount. This happened early in the Civil War. In addition, for the first time in U.S. history, the government began issuing paper money in the form of U.S. Notes (greenbacks) to finance the Civil War. It also passed a legal tender law requiring people to accept greenbacks at their face value.
The Civil War ended with the market price of gold substantially above the old official price. Instead of returning to gold convertibility at a new price, the country went through severe deflation. The federal government ran budget surpluses and destroyed bank notes flowing into the Treasury as tax payments. The price level fell about 25 percent in 1865-1868. In 1868 the government decided to stop shrinking the money supply and instead let the economy “grow into” the existing money supply. Convertibility at the old parity was finally achieved in 1879, a date that many scholars regard as the beginning of the international gold standard.
Exchange Rates in an International Gold Standard. Two things fixed to the same thing are fixed to each other. Thus:
- Two currencies fixed to gold (or silver) at official prices have an implicit official exchange rate.
- If the official price of an ounce of gold is $20 in the United States and £2 in Britain, the implied exchange rate is $10 per pound or £1/10 per dollar.
- International arbitrage will keep the actual exchange rate as close to that price as the cost of shipping gold will allow.
An importer had the option of buying pounds in the foreign exchange market or shipping gold. An exporter had a similar choice in selling pounds. Therefore, no government action was necessary to peg the exchange rate.
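A small worked sketch of that arbitrage logic, using the article's illustrative $20 and £2 mint prices plus an assumed shipping cost (the cost figure is hypothetical):

```python
def implied_parity(gold_price_dollars, gold_price_pounds):
    """Dollars per pound implied by the two official gold prices."""
    return gold_price_dollars / gold_price_pounds

def gold_points(parity, shipping_cost_dollars_per_ounce, gold_price_pounds):
    """Band around parity within which shipping gold is not worth the cost.

    The shipping cost of an ounce is spread over the pounds that ounce represents.
    """
    cost_per_pound = shipping_cost_dollars_per_ounce / gold_price_pounds
    return (parity - cost_per_pound, parity + cost_per_pound)

parity = implied_parity(20.0, 2.0)              # $10 per pound
lower, upper = gold_points(parity, 0.10, 2.0)   # assume $0.10/oz to ship gold
print(parity)            # 10.0
print((lower, upper))    # approximately (9.95, 10.05): the market rate stays in this band
```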
Silver Populism. The deflation leading up to the return of gold convertibility was hard on debtors in general and farmers in particular. This contributed to a populist movement of debtors and silver interests demanding a return to a silver standard, which they perceived to be less harsh on debtors than the gold standard.
This populist movement culminated in the election campaign of 1896, made famous by presidential candidate William Jennings Bryan’s “Cross of Gold” speech favoring a silver standard over a gold standard. His speech, which put the crowd into a frenzy and won him the Democratic nomination, concluded with: “You shall not press down upon the brow of labor this crown of thorns. You shall not crucify mankind on a cross of gold.”
After World War I, however, the gold standard was substantially restored. The Federal Reserve Act was passed in 1913, and the issue of paper money was transferred from national banks to the 12 Federal Reserve Banks. The Federal Reserve was expected to operate within the framework and rules of the gold standard.
The Depression, World War II and After. The boom of the 1920s gave way to depression in the 1930s. President Roosevelt devalued the dollar in terms of gold in 1934 by raising its price to $35 per ounce. He also abrogated the gold clause in contracts and prohibited the private ownership of gold, requiring citizens to sell their gold to the government at the new rate. Congress did not restore the ability of private citizens to own gold until 1975.
Following World War II the western nations set up a modified gold standard at a meeting in Bretton Woods, New Hampshire. Other countries pegged their currencies to the dollar and the United States converted dollars to gold for foreign central banks and governments. In the early postwar years, the world was hungry for dollars and there were few demands to redeem dollars for gold.
By the 1960s, however, the world dollar shortage was over and official U.S. gold reserves declined as dollars were redeemed. On a couple of occasions, Congress reduced the gold reserve requirement on outstanding Federal Reserve notes. In 1971 President Nixon took the dollar off the gold standard and announced that the United States would no longer redeem dollars for gold, removing the last vestiges of the gold standard. The dollar was officially a fiat currency.
Robert McTeer is a distinguished fellow with the National Center for Policy Analysis. | http://www.ncpa.org/pub/ba746 | 13 |
19 | FARM TENANCY. Since the colonial period, there have always been some Texas farmers who rented the land they farmed rather than owning it. Although no statistical information was collected until 1880, when United States census officials began to include that information in their returns, it is clear from letters, court cases, and newspaper advertisements that tenants rented land for a variety of reasons and paid in a variety of ways. Some farmers who possessed the resources to buy land rented until they were more familiar with Texas before making a permanent commitment to a specific location, while others rented because they lacked the resources necessary to obtain land of their own. Some tenants, sharecroppers, paid for rented land by promising a share of the crop or labor, while others paid in cash. In antebellum Texas most farm tenants probably lived outside the plantation areas of the state, since most plantations involved in the production of commercial crops utilized slaves. Precise figures are impossible to obtain, but it seems clear that only a relatively small percentage of farmers were tenants. Thus they are rarely if ever mentioned in newspapers or descriptions of the state.
The end of the Civil War and the demise of slavery brought a need for new labor arrangements in the production of commercial crops. Texas plantation owners, like others in the South, had little or no cash, and they wanted to assure themselves of a stable labor supply throughout the growing and harvesting season. A system of tenant farming evolved that met these needs. The most common arrangement after the Civil War was a share tenant or sharecropping arrangement. Since the crop would not be split until after the harvest, tenants could only receive payment for their labor after the crops were in. Most tenants in the period just after the Civil War were black, and the Freedmen's Bureau supervised the signing and implementation of tenant-farming agreements in areas where it had local agents until it closed its local offices in December 1868. Although the agents sometimes complained that black women did not want to work as long in the fields as they had before the war and that many blacks did not want to work as many hours as they had as slaves, they generally reported that African Americans worked well as tenants when treated fairly. Blacks and, later, whites seem to have preferred tenancy arrangements over other forms of agricultural labor because tenancy gave them greater independence and flexibility than wage labor. It also directly rewarded them for their hard work with better crops. They seem to have viewed tenancy as an agricultural ladder that could lead to farm ownership under the right conditions.
As tenant farming became more common, it also became more systematic. In Texas, as in other Southern states, a hierarchy of tenant farmers developed, according to what tenants provided for themselves. At the top were share and cash tenants who supplied the mules, plows, seed, feed, and other supplies needed. Share tenants typically paid the landlord a third of the cotton crop and a fourth of the grain. At the bottom were sharecroppers who supplied only their labor. They typically received half the crops. The differences were critical, not only because share tenants received a larger portion of the crops, but also because they were considered the owners of the crops. Sharecroppers were generally considered laborers whose wages were paid with a share of the crops, which were owned by the landlord. A sharecropping arrangement gave owners greater control over how their land was worked. By 1880, when the first systematic data were collected, approximately 38 percent of all farmers were tenants. More than 80 percent of these rented for a share of the crops. Although statistics on farm tenure by race were not collected until 1900, blacks comprised a much higher proportion of the total number of tenant farmers than their proportion of the population. The highest percentages of tenant farmers were in counties with black majorities. In Fort Bend, Harrison, and Marion counties, for example, tenant farmers comprised 74, 60, and 51 percent of all farmers, respectively. Census returns did not differentiate between sharecroppers and share tenants until 1920, so it is impossible to determine what percentage of the group listed as share tenants were actually sharecroppers.
In addition to paying out a portion of the crop as rent, many tenants also mortgaged their share of the cotton crop to a furnishing merchant or their landlord for food and other supplies. Because the crops were of an uncertain size, and the price of cotton at harvest was unknown, cotton crops were risky collateral for the lender. Consequently, the interest on the loans was quite high, sometimes as high as 150 percent. Once forced to make a crop lien, many tenants could never get away from the system, as they found themselves just breaking even or owing more than the total received for their crops. One economist estimated that by 1914 half the tenants borrowed 100 percent of their income. As the population of the state grew and the state's vast lands were claimed, acquiring ownership of a farm became more difficult. This led to a corresponding increase in the proportion of tenants. By 1900, half of all Texas farmers were tenants. Again as in 1880, more than 80 percent of these were either share tenants or sharecroppers. The biggest proportional increase was probably the increase in white tenants. In 1900, 47 percent of all white farmers and 69 percent of all black farmers were tenants.
The conditions under which tenant farmers lived and worked became political issues with the rise of the People's party in the 1890s and became even more prominent when James E. Ferguson used them as part of his successful campaign for governor in 1914. Despite the rhetoric, except for the temporary prosperity the high cotton prices of World War I brought, conditions for tenant farmers did not materially change, and the number of tenants continued to rise. Each census recorded a larger proportion of tenants among Texas farmers. The census of 1930 recorded the highest percentage of tenants ever reported in the state. That year, almost 61 percent of all Texas farmers were tenant farmers, and one third of these were sharecroppers. In terms of the population as a whole, in 1930, nearly one quarter of all Texans lived on tenant farms. With the coming of the New Deal, however, the number of tenant farmers began to fall. Although Franklin D. Roosevelt saw tenant farming as evidence of economic problems and supported programs designed to make tenants into owners, the programs that had the highest impact on tenants were those that paid farmers to restrict crop acreage. These programs reduced the need for labor and caused many owners to push sharecroppers off the land. By 1935, the proportion of tenant farmers in Texas had dropped to 57 percent. This drop was due to a much larger decrease in the number of sharecroppers, as the number of other types of tenants rose slightly. The changes brought on by the New Deal signaled the beginning of a rapid drop in the proportion of Texas farmers who rented their land rather than owned it and a dramatic change in the types of farmers who were tenants. By the time of the 1945 census, the pull of the job market and the armed services in World War II had accelerated the changes already begun by the Great Depression and New Deal programs. The proportion of farmers who were tenants fell from almost 61 percent in 1930 to a little over 37½ percent by 1945. The number of sharecroppers fell from more than a third to 16 percent of all tenant farmers. By 1987 tenants comprised just under 12 percent of all farmers.
As the numbers fell, the very nature of tenant farming also changed. The most striking changes came in the number of part owners listed in the census. Part owners-that is, farmers who owned some of the land they farmed and rented the rest-comprised less than 10 percent of all Texas farmers until 1940, when they accounted for 11 percent of all farmers. By 1978 part owners made up almost 30 percent of all farm operators. During this same period, Texas farming became more highly mechanized. In 1929, for example, fewer than 10 percent of all Texas farms had tractors. By 1960, Texas had more tractors than farms. As farming became more mechanized and thus more capital-intensive, viable economic units became too large for one family to own enough farmland and provide machinery and working capital at the same time. Therefore, many of the larger operations were run by part owners or by tenants who owned no land at all. In 1987 full owners comprised 56 percent of all farmers but farmed only a quarter of all harvested cropland. Tenants, who comprised just 12 percent of all farmers, also farmed approximately a fourth of all harvested cropland. Part owners made up about 32 percent of all farmers and harvested a little over half of all harvested cropland. See also AGRICULTURE.
Barry A. Crouch, The Freedmen's Bureau and Black Texans (Austin: University of Texas Press, 1992). James Edward Ferguson, The Need of Outside Capital for Turning Landless Men of Texas into Home Owning Farmers (Temple, Texas: Telegram Print, 1915). Louis Ferleger, "Sharecropping Contracts in the Late Nineteenth Century South," Agricultural History 67 (Summer 1993). Neil F. Foley, The New South in the Southwest: Anglos, Blacks and Mexicans in Central Texas, 1880–1930 (Ph.D. dissertation, University of Michigan, 1990). Cecil Harper, Jr., Farming Someone Else's Land: Farm Tenancy in the Texas Brazos River Valley, 1850–1880 (Ph.D. dissertation, University of North Texas, 1988). Virgil Lee, Farm Mortgage Financing in Texas (College Station: Texas Agricultural Experiment Station, 1925). Richard G. Lowe and Randolph B. Campbell, Planters and Plain Folk: Agriculture in Antebellum Texas (Dallas: Southern Methodist University Press, 1987). Studies in Farm Tenancy in Texas (Austin: Division of Public Welfare, Department of Extension, 1915). Texas Historic Crop Statistics, 1866–1975 (Austin: Texas Crop and Livestock Reporting Service, 1976). Harold D. Woodman, King Cotton and His Retainers: Financing and Marketing the Cotton Crop of the South, 1800–1925 (Columbia: University of South Carolina Press, 1990). Robert Yantis, Farm Acreage, Values, Ownership and Tenancy (Austin: Texas Department of Agriculture, 1927).
The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article.Cecil Harper, Jr., and E. Dale Odom, "FARM TENANCY," Handbook of Texas Online (http://www.tshaonline.org/handbook/online/articles/aefmu), accessed June 19, 2013. Published by the Texas State Historical Association. | http://www.tshaonline.org/handbook/online/articles/aefmu | 13 |
32 | BEFORE FORT UNION
The American Southwest officially became part of the United States at the close of the Mexican War in 1848, although the infiltration of Anglo-American people and culture had begun more than a generation earlier with the opening of the Santa Fe Trail between New Mexico and Missouri. Political organization of the Mexican Cession was part of the famous Compromise of 1850 when California was admitted to the Union as a free state, New Mexico and Utah territories were established with the right of popular sovereignty regarding the institution of black slavery, and the boundary controversy between the State of Texas and New Mexico Territory was settled.
American military history in the region began with the outbreak of war between the United States and Mexico in 1846, and the United States Army would continue to be a major factor in political, social, cultural, and economic, as well as military developments in New Mexico Territory for nearly half a century. For a time New Mexico Territory included all of the present states of New Mexico and Arizona and portions of the present states of Colorado, Utah, and Nevada. The primary mission of the army in the region for four decades was to protect travelers and settlers (including the Pueblo Indians, Hispanic population, and Anglo residents) from hostile activities of some Indians. During the Civil War that responsibility was expanded to include Confederate troops who invaded the territory. The significance of the army in the region, however, extended far beyond protection, and the military establishment affected almost every institution and individual in New Mexico. Fort Union was one part of that vast system, and it was established at a time of extensive changes in the New Mexican political, social, economic, cultural, and military structure.
In 1851 Fort Union was established almost 100 miles from Santa Fe near the Santa Fe Trail and served briefly as command headquarters for the several other forts in the territory and longer as protector of the vicinity from Indians who resented the loss of their lands, power, and traditional ways of life. Most military engagements between soldiers and Indians, however, occurred beyond the immediate jurisdiction of Fort Union. Even so, troops stationed at Union were frequently sent to participate in campaigns in the Southwest and on the plains. The post was always closely associated with the Santa Fe Trail, the economic lifeline that tied New Mexico to the eastern States. An important part of the mission of troops stationed at Fort Union was to protect that route from Indian raids and warfare, to keep open the shipping lane to the Southwest.
Perhaps more important than fighting Indians over the years was Fort Union's role as the department (later district) quartermaster depot for military posts throughout the territory, 1851-1853 and 1861-1879 (it was a subdepot from 1853-1861), when much of the food, clothing, transportation, and shelter for the army was distributed from Fort Union store houses. This made Fort Union the hub of military freighting in the Southwest, an activity which also employed many civilians and has until recently been overlooked in evaluating the military history of the region. In addition, from 1851 to 1883, the department ordnance depot (known as the arsenal after the Civil War) was operated at Fort Union. Such logistical assignments at Fort Union were not as romantic in the public eye as fighting Indians, but they made the other military bases, field campaigns, and police actions possible. New Mexico was a large territory, it must be remembered, and Fort Union was not involved in everything going on there. One must be careful not to claim too much importance for Fort Union, just as one must be careful not to claim too much importance for the army in the region. It was just one part of a complex and changing society.
The Anglo-American troops and civilian employees of the army who came from the eastern states to the Southwest, including those at Fort Union, helped to modify and destroy the traditional ways of life of Indians and Hispanos in the Southwest, a process that has since been called the "Americanization" of the region. Marion Sloan Russell (1845-1936) first visited Fort Union in 1852 and was there on many other occasions. She met her husband, Lieutenant Richard D. Russell, and was married at the post. A few years before her death she dictated her memoirs, including fond recollections of Fort Union. "That fort," she proclaimed, "became the base for United States troops during the long period required to Americanize the territory of New Mexico."
That "Americanization," in part, was the result of the intrusion of Anglo institutions and values, including Protestantism, democratic ideals, political structures, public education, and a market economy into the combination of Indian and Hispanic cultures that had developed during previous centuries. It was a also the result of Anglo-American domination of the economy and government, which slowly affected the social structure and culture in the Southwest. This was not always a conscious goal or effort, but it resulted from circumstances in which Anglo power was enforced by the military (which also included some Hispanic soldiers and native New Mexican employees).
The army thus performed primary and secondary functions in that process of change over the years. The overall effect appeared far-reaching and dramatic because the histories, traditions, and cultures of the Indians and Hispanos of the Southwest were markedly different from those of the Anglo conquerors. As historian Marc Simmons proclaimed, "the entire history of New Mexico from 1850 to the present is interwoven with attempts by the Indian and Hispano populations to come to terms with an alien Anglo society." The history of Fort Union must be set into that perspective of cultural change to see it as more than just another frontier military post established to fight Indians.
The officers and men of the American army had to adapt to the peoples and cultures already in the Southwest, and they had to learn to survive and live productively in a geographical environment foreign to their earlier experiences but to which the native New Mexicans had already learned to accommodate their lives, ideas, and institutions. Because of Anglo beliefs in the superiority of their people and institutions over those of the Hispanics and Indians, army personnel often failed to assimilate native practices in dealing with the environment and misunderstood what was possible in the region. Americans from the United States were as determined to dominate the land as they were the people of the Southwest. The history of Fort Union is also part of that story.
Fort Union was established in the heart of a vast region of plains (where there were few trees) and mountains, embracing portions of the present states of New Mexico, Texas, Oklahoma, Kansas, Colorado, and Utah. This included the western plains, ranging from the flat grasslands of the Llano Estacado of western Texas and eastern New Mexico to the eroded prairies bordering eastward-flowing streams running out of the Rocky Mountains toward the Mississippi River, the volcanic mesas and isolated peaks of northeastern New Mexico and southeastern Colorado, and the foothills and mountains of the southern Rockies.
Fort Union was located in 1851 in the transition zone between the plains and the mountains, an area rich in several grasses which were excellent for grazing livestock and cutting for hay. The predominant grass was grama; also found were buffalo grass, switch grass, bluestem, antelope grass, and others. The military post was located west of the Turkey Mountains and east of the Sangre de Cristo Mountains. The Turkey Mountains comprise a circular group of timbered hills, formed by volcanic eruptions and igneous uplift, which were set aside as the Fort Union timber reservation. The Sangre de Cristos form the southernmost branch of the Rocky Mountain province. West of the Sangre de Cristos lies the Rio Grande, the fifth longest river in North America, the lifestream of New Mexico from early Indian occupation to the present.
One of the military officers stationed in New Mexico in the late 1850s, Lieutenant William Woods Averell, Regiment of Mounted Riflemen, later wrote in his memoirs that "the principal topographical feature of New Mexico is the Rio Grande which enters it from Colorado on the north and running along the backbone of the Rocky Mountains, like a half-developed spinal cord in embryo, leaves it at El Paso on the south." Averell clearly understood the primacy of the Rio Grande to the territory. "As the Nile to lower Egypt, so is the Rio Grande to the habitable portion of New Mexico," he wrote. "Agriculture waits upon its waters which are drained away by unnumbered acequias to irrigate its fertile but thirsty soil." In addition, "the Mexicans, for protection and defense against twenty thousand savages, lived in towns from Taos to El Paso."
The Sangre de Cristo range was an obstacle to travel between the plains where buffalo were plentiful and the agricultural settlements in the Rio Grande valley. There were several passes through the mountains, three of which were most important to plains Indians who visited the Pueblos and other New Mexican settlements and to the Pueblos and New Mexicans who ventured onto the plains to hunt buffalo and trade with the plains tribes. The Pueblos located at those three connections enjoyed a favored position in trade between the plains and the valley and prospered from the commerce. As points where different cultures met, they also faced special problems.
The northern pass, perhaps the most difficult of the three, connected with Taos, northernmost Pueblo in New Mexico, via either Rayado Creek or the Cimarron River of New Mexico on the eastern side of the Sangre de Cristo range and the Taos Valley on the west. The southern pass, the least difficult route of the three, connected Pecos Pueblo in the Pecos River valley with the river Pueblos and Santa Fe, after it was founded in 1610, via Glorieta Pass. It was the route followed by the Santa Fe Trail in the nineteenth century. The middle pass followed up the Mora River valley from the plains and connected with Picuris Pueblo on the Rio Grande side. Fort Union was established at the eastern end of that middle pass to Picuris. Each of those three routes, it should be noted, followed reliable water sources.
Transportation routes and settlements in the Southwest were located on or near flowing streams because of the general paucity of annual precipitation and its sporadic nature during any given year. All of the streams headed in the mountains and defined the patterns for permanent settlements. The Rio Grande was the largest and most important river in New Mexico, but a number of rivers and their tributary creeks were vital in the area surrounding Fort Union. None of these streams was navigable.
The Arkansas River flowing eastward from the Colorado Rockies and across present Kansas had served as the international boundary (west of the 100th meridian, present Dodge City, Kansas) between the United States and Mexico, 1819-1848. Its valley was an important avenue for Anglo westward migration. The Santa Fe Trail, the major overland connection between New Mexico and the Missouri River valley and the primary route of supply for Fort Union and the army in the Southwest, followed a stretch of the Arkansas (the original route, later known as the Cimarron Route, from present Ellinwood, Kansas, to a point near present Cimarron, Ingalls, or Lakin, Kansas, and the later Mountain Route from Ellinwood to present La Junta, Colorado). Several Indian tribes lived and hunted along the Arkansas, and Bent's Fort was established on that stream by Bent, St. Vrain & Co. (Charles and William Bent and Ceran St. Vrain) in 1833, in part, to trade with some of them. Troops from Fort Union were sometimes sent to protect routes of transportation along the Arkansas, especially during the 1850s and the Civil War years.
There are two Cimarron rivers in Fort Union country. One, a tributary of the Arkansas River, is formed by the joining of the Dry Cimarron (which begins in the Raton Mountains about 30 miles east of Raton Pass in New Mexico), Carrizozo Creek (heading in New Mexico), and Carrizo Creek (heading in Colorado) in the northwestern corner of the Oklahoma panhandle. Thus the main stream of this Cimarron is known as the Dry Cimarron in New Mexico (to distinguish it from the other Cimarron in New Mexico) and as the Cimarron River from Oklahoma eastward. The Dry Cimarron was also an appropriate name for the river because, in most years, its surface flow was only sporadic. Water could usually be found, however, by digging in the sandy bed. This Cimarron flows (when water is evident) eastward in present Oklahoma, Colorado, and Kansas, and back into Oklahoma where it joins the Arkansas west of present Tulsa. The Cimarron Route of the Santa Fe Trail followed this Cimarron River from Lower Spring south of present Ulysses, Kansas, to Willow Bar northeast of present Boise City, Oklahoma. The other Cimarron River flows eastward from the Sangre de Cristo range in New Mexico and joins the Canadian River just north of the famous Rock Crossing of the Canadian where the Cimarron Route of the Santa Fe Trail crossed on a streambed of solid stone. The Canadian River was also crossed farther upstream by the Bent's Fort or Raton Route (later known as the Mountain Route) of the Santa Fe Trail southwest of Raton Pass, and the Mountain Route crossed this Cimarron River at the present town of Cimarron, New Mexico, and other places. The Canadian, which flows through a deep canyon from a point a short distance south of the Rock Crossing until it reaches eastern New Mexico, was with few exceptions an obstacle to wagon travel to the east and northeast of Fort Union. The Canadian River was often called the Red River during the nineteenth century, which sometimes creates confusion because there are so many other Red rivers. The presence of two Cimarron rivers, plus the Dry Cimarron, also provides potential for a mix-up.
Ute or Utah Creek flows south into the Canadian River, joining that stream near the eastern boundary of New Mexico. The Cimarron Route of the Santa Fe Trail crossed Ute Creek, and Fort Bascom was later located near its mouth on the Canadian. Two small streams, Rayado and Ocate creeks, head in the Sangre de Cristo Mountains. The Rayado is an affluent of the New Mexico Cimarron River and was crossed by the Mountain Route of the Santa Fe Trail. The Ocate flows to the Canadian River and was crossed by both major branches of the Santa Fe Trail. Both creeks were closely related to Fort Union. Troops were stationed at the Rayado before Union was established, and detachments from Fort Union were sent there briefly afterward. The Fort Union farm was located on the Ocate.
The Pecos River flows south out of the Sangre de Cristos through New Mexico and Texas to the Rio Grande, and it drew settlers from all cultures which came into the area. Rio Gallinas, a tributary of the Pecos, runs through present Las Vegas, New Mexico. The Mora River and its tributary, Sapello River, which joins at present Watrous, New Mexico, drains eastward from the Sangre de Cristos to join the Canadian. Like the Pecos, the Mora valley drew settlers prior to the Anglo infiltration. It was a valley of rich soil which, with irrigation, produced fine crops of wheat, corn, other small grains, vegetables, and fruits. Fort Union was established on a tributary of the Mora, Wolf Creek (also known as Coyote Creek and occasionally as Dog Creek).
The importance of these streams in the region cannot be exaggerated. The overwhelming factor throughout the entire area is aridity; the limited supply of water has been critical regardless of the terrain and other geographical features. "Aridity," William deBuys succinctly declared, "more than any other single factor, shapes this stark world." All human activity, from procuring basic necessities to traveling through the region, always has been constrained by the scarcity of a reliable source of water. Annual precipitation in the region averages below twenty inches per year, but "the capricious timing of it" according to deBuys, "makes the Southwestern environment particularly difficult." Much of the precipitation occurs during the summer months, most of it the result of "local high-intensity storms of relatively short duration." These thunderstorms are frequently accompanied by hail. From records kept at Fort Union during a period of ten years, the following monthly mean temperatures (degrees F.) and mean precipitation (inches) were derived:
The record was clear that most precipitation occurred in July, August, and September, a period known in New Mexico as the "monsoon season" or "rainy season." Eveline M. Alexander, wife of Captain Andrew Jonathan Alexander, Third Cavalry, wrote in her diary in August 1866, following their trip from Fort Smith, Arkansas, to Fort Union: "We arrived here in the rainy season, . . . and every day we are treated to a shower of rain. However, you can see it coming so long before it reaches you that it is not much annoyance." A newcomer to the area, Mrs. Alexander had not yet felt the force of the violent thunderstorms with high winds and hail which were an annoyance according to the testimony of numerous residents in the territory.
The region also experiences an abundance of wind. Complaints about the wind and the dust it whipped through the post were common at Fort Union. Some residents referred to it as "Fort Windy." The soils were easily blown about most seasons of the year because of the shortage of moisture. One of the first residents of the post, Catherine Cary (Mrs. Isaac) Bowen, commonly known as Katie, wrote that "in this territory nearly all the time we have high winds and the soil becomes so dry and powdered that the air is filled with clouds of the most disagreeable kind of dust." Later she commented about "one or two days of high winds which nearly buried us in dust." Her explanation was that "the grass in this country forms no sod, consequently the ground is much like an ash heap on the surface."
On another occasion, Mrs. Bowen gave a more vivid description of the gales at Fort Union:
Another officer's wife, Lydia Spencer (Mrs. William B.) Lane, who lived at Fort Union before and after the Civil War, complained about how the third post "was swept by the winds all summer long" in 1867. Her views of the wind and descriptive talents were comparable to those of Mrs. Bowen fifteen years earlier. Of the omnipresent winds, Mrs. Lane wrote:
Soon after Private William Edward (Eddie) Matthews, Company L, Eighth Cavalry, arrived for duty at Fort Union in 1870, he reported to his family at Westminster, Maryland, about his new assignment: "The only objection I can find here is the miserable wind. Talk of March wind in the States, why it is not a comparison to this place. Wind, wind, and sand all the time. This Post is built on a plain, there is nothing to break the wind, therefore giving it full sway."
A couple of weeks later Matthews noted that, during the sand storms, almost everyone who had to be outside wore goggles to protect their eyes. In March 1874, with his talent for humorous exaggeration, Matthews again described the wind at Fort Union:
The persistent gales and resulting dust and sand storms at the third Fort Union were explained by yet another officer's wife, Frances A. (Mrs. Orsemus B.) Boyd, who resided at the post in 1872. Fort Union, she declared, "has always been noted for severe dust-storms. Situated on a barren plain, the nearest mountains, and those not very high, three miles distant, it has the most exposed position of any military fort in New Mexico." Mrs. Boyd also discerned that the fine soil and sand drifted like driven snow, especially against the buildings at the fort. "The sand-banks," she explained, "were famous playgrounds for the children." She believed that neither trees nor grass would grow at Fort Union because the abrasive dust either prevented plants from taking root or uprooted and scattered the plants. Despite the wind and dust, however, Mrs. Boyd considered Fort Union a place of much beauty, especially the surrounding area "where trees and green grass were to be found in abundance."
Most Anglo-Americans, who came to New Mexico from other regions, held strong opinions about the land and climate, some favorable and some not. Ovando J. Hollister, a Colorado Volunteer in the Civil War, gave his favorable impression of the area, expressing well an attitude hinted at by many others.
Lydia Lane enjoyed New Mexico and wrote of one of her several trips between Fort Union and Santa Fe, in 1867, as follows: "The road generally was excellent, the scenery beautiful, and at times grand. The breeze, filled with the odor of pine-trees, was exhilarating and delicious, you seemed to take in health with every breath of the pure air." Years later she also held fond memories of "the sights, sounds, and odors of the little Mexican towns!" She remembered, while passing through the communities, that "the barking of every dog in the village, bleating of terrified sheep and goats, and the unearthly bray of the ill-used burro (donkey) made a tremendous racket." Most of all she remembered "the smells! The smoke from the fires of cedar wood would have been as sweet as a perfume if it had reached us in its purity; but, mixed with heavy odors from sheep and goat corrals, it was indescribable." It was an impression that stayed with her. "I never get a whiff of burning cedar . . . that the whole panorama does not rise up before me, and it is with a thrill of pleasure I recall the past, scents and all."
Another point of view was provided by Lieutenant Henry B. Judd, Third Artillery, following his arrival for duty in New Mexico late in 1848:
Judd found nothing pretty, describing the "Country" as "the most dreary & desolate that ever caused the eye to ache by gazing upon."
Eddie Matthews expressed similar opinions and was never fond of New Mexico Territory nor its inhabitants. In his bigoted judgment, somewhat typical of Anglo-Americans from the eastern United States, the land was not fit for civilized people, and the Indians and Hispanos were not civilized. He noted that the "wind which blows in all seasons" kept the "sand in motion nearly all the time."
Many Anglo-Americans could not condone aridity, believing that to be a sign of a forsaken land. The Southwest experiences periodic droughts which affect all human cultures. Historian Charles L. Kenner concluded that drought has been "the Southwest's most persistent opponent of tranquility." Archaeologist J. Charles Kelley has conjectured that peace and war between the Pueblos and Indians of the plains was directly related to precipitation. When rainfall was adequate for agricultural surpluses in the Pueblos and an abundance of buffalo meat and robes on the plains, peaceful trade was predominant in their relations. During droughts, when neither culture had a surplus to trade, raiding and warfare predominated.
Such was the situation in New Mexico when Inspector General George A. McCall was sent to inspect the military posts in the Ninth Military Department and, so far as possible, determine the actual losses in lives and property to the Indians during the preceding 18 months, the capacity of the New Mexicans to resist the attacks, and the amount of military force required to provide adequate protection. There was a great need to know more about New Mexico, for in 1850 little was known about the territory by the American people or the government officials in the East; in fact, not much at all was known about the people of the region and their customs, the population, economic resources, geography, and almost everything else. In 1853 a former territorial governor of New Mexico, William Carr Lane, declared that "I find a deplorable state of ignorance to exist" about New Mexican affairs in Washington, D.C.
Although the military may have had more and better information about New Mexico than did any other government departments, because of reports from officers stationed there since the Mexican War, it must be understood that many of the decisions made regarding relations with New Mexicans and Indians, the establishment of Fort Union and missions assigned to it, and the administration of the Ninth Military Department which embraced New Mexico were often made with inadequate information and sometimes with considerable misinformation. When James S. Calhoun was appointed first Indian agent for New Mexico in 1849, Commissioner of Indian Affairs William Medill's letter of appointment declared: "So little is known here [Washington] of the condition and situation of the Indians in that region [New Mexico] that no specific instructions, relative to them can be given at present." Calhoun was requested to supply detailed reports about the Indians in the territory.
By 1850 there were a few publications about New Mexico to which government officials and others could turn for information (although much of what was available was prejudiced against the New Mexicans), but there was little evidence that these were read by people who needed the information. The available publications included George Wilkins Kendall's Narrative of the Texan-Santa Fe Expedition (1844), Josiah Gregg's classic Commerce of the Prairies (1844), Thomas James's Three Years Among the Mexicans and Indians (1846), George F. A. Ruxton's Adventures in Mexico and the Rocky Mountains (1847), Frederick Adolphus Wislizenus, Memoir of a Tour to Northern Mexico, Connected with Col. Doniphan's Expedition, in 1846 and 1847 (1848), Second Lieutenant James W. Abert's Report and Map of the Examination of New Mexico (1848), and Lewis H. Garrard's Wah-to-yah and the Taos Trail (1850). One newspaper published in the East, Niles' Weekly Register, carried many New Mexican items, often reprinted from western newspapers, including the Santa Fe Republican which began publication in 1847. In addition there were several reports prepared by military officials that had been published.
Until September 9, 1850, when Congress created New Mexico Territory, the boundaries of New Mexico had not been defined, and it would be some time before these were surveyed. James S. Calhoun, who had been appointed Indian agent for New Mexico in 1849, became the first territorial governor on March 3, 1851, ending the military rule of the region that had existed since General Kearny occupied Santa Fe on August 18, 1846. He had learned much about New Mexico during the previous two years, but many aspects of the region remained a mystery even to him. What he and others did know provided the basis for decisions in 1851 and after.
In summary, the Hispanic and Pueblo Indian settlements of New Mexico were located mostly along the Rio Grande, with a few settlements east of that valley and fewer still to the west. These settlements were virtually surrounded by the so-called "wild" tribes, including Utes, various bands of Apaches, Navajos, and Plains tribes, most of whom had raided almost at will for decades. The settled areas suffered great losses of property and life as crops were destroyed, livestock stolen, and people killed or captured.
The primary mission of the U. S. Army after successful occupation of the land, as declared by General Kearny at the time of the invasion and by other government officials many times later, was to protect New Mexican and Pueblo settlements from those Indians. In addition the Treaty of Guadalupe Hidalgo which ended the Mexican War provided that the United States would prevent raids on Mexican territory by Indians residing in the United States, or if these could not be prevented the U. S. would punish any Indians who did raid into Mexico. This was an impossible mission but required that the army make efforts to fulfill the agreement. At the same time, it was clear that the future of New Mexico, both its ability to attract settlers and its economic development, depended on control of the Indians.
The military occupation of New Mexico was followed by a policy of providing some protection of population centers by stationing troops at those locations and at points along routes of travel that Indians followed in their raids. The success of that policy required more troops than were available. After the withdrawal of volunteer troops at the close of the Mexican War, the number of soldiers in the department was reduced substantially, never adequate to deal effectively with Indian raiders. The annual report of the secretary of war showed there were 665 troops in New Mexico in 1848, 708 in 1849, and 1,019 in 1850.
Not only were the numbers small but, because of the vast territory, they were spread too thin to be effective. The largest concentration was at Santa Fe, where Fort Marcy, established in 1846, was the only fortification in the territory (the other military posts were simply bases of operation). The posts at El Paso and San Elizario, although located in Texas, were included in the department, but the troops at those places were of minor importance in the protection of most of the settlements in New Mexico. Other posts were located at Albuquerque, Socorro, Abiquiu, Dona Ana, Las Vegas, Rayado, Taos, and Cebolleta. The hope that such distribution of troops would protect the towns and help to block the routes of Indian raiders was accompanied by the belief that the protection of lives and property would stimulate economic growth and attract additional settlers. Not only were the troops unable to cover the territory, despite their wide distribution, but the cost of providing for them at so many locations rose far beyond what Congress wanted to appropriate for the job. The next policy, inaugurated in 1851 with the appointment of a new commanding officer and specific orders to economize, saw the removal of the troops from most of the towns.
The economy of New Mexico at mid-century operated mostly at a subsistence-level because of tradition, lack of capital, and perhaps most important because of the almost constant destruction perpetrated by Indian raids. It was not able to produce many supplies needed by the army. Even before the Mexican War, New Mexico had come to rely heavily upon the commerce of the Santa Fe Trail for manufactured items. The army had to depend on that same route. The need for economic development in New Mexico was clear, but that depended on the success of the military. New Mexican Governor Donaciano Vigil explained the situation in 1848: "The pacification of the Indians is another necessity of the first order, for as you already know the principal wealth of this country is the breeding of livestock, and the warfare of the Indians obstructs this almost completely."
The constant threat of Indian raids made subsistence agriculture much more difficult. Hispanic farmers, facing loss or destruction of their crops and livestock to Indian raiders, usually produced little more than required for their own household. Pueblo farmers, who had lived with Indian raids and periodic droughts for centuries, attempted to store any surplus in order to survive during bad times. The army thus found few sources of supply among the people of the territory because the Hispanics did not have surplus commodities to sell and the Pueblos usually refused to sell any surpluses they had. By providing a market and offering protection from Indian raids, the army stimulated New Mexican agricultural development. Even so, prices were high for limited supplies available. At the same time, the army introduced a cash system into what had been largely an economy based on barter.
The New Mexican livestock industry was dominated by the raising of sheep, primarily for meat and secondarily for wool. Sheep provided the major source of wealth in New Mexico, wealth that was concentrated in the hands of a few wealthy families (ricos). The remainder of the people were economically poor; some were peons. There were also cattle and horse herds which, as with sheep, were objects of Indian raids, but almost no swine or goats were raised. Most manufacturing in New Mexico was comprised of household handicrafts, there being almost no production for a market.
Several villages had a grist mill operated either by water or animal power. These were not capable of producing surplus flour and meal for a market beyond the local economy. The occupation of the area by U. S. troops apparently stimulated the establishment of a few larger grist mills, including one erected by Donaciano Vigil on the Pecos and another built by Ceran St. Vrain on the Mora, and these mills, in turn, stimulated additional production of cereal grains (especially wheat) to supply the demands of the mills and the market provided by the presence of the army. By 1850 a local supply of flour was available for the army. Other items available in the local markets included mutton, beans, vegetables, melons, fruits, salt, and firewood. The army was not the only beneficiary, however, for those heading for the California gold fields in 1849 and after also bought whatever was available as they passed through New Mexico (another factor accounting for the high prices of produce).
The army also relied, for the most part, on the local economy for facilities. With the exception of a portion of Fort Marcy at Santa Fe and the Post at San Elizario, the army rented most of the buildings used for quarters and storehouses in 1850. Almost everything else the military required had to be shipped in via the Santa Fe Trail or, in the case of the southern posts, across Texas. The result of all these factors was that it was tremendously expensive to supply the troops in New Mexico.
Military freight contractors carried 422 wagon loads of supplies from Fort Leavenworth to the posts of New Mexico during 1850, a total of 2.15 million pounds of food, clothing, and equipment. Rates per hundred pounds varied from just under $8.00 to more than $14.00.
In addition to transportation costs, rent for facilities and prices demanded for locally purchased supplies were considered to be exceptionally high in New Mexico. It fell on the new departmental commander, Lieutenant Colonel Edwin Vose Sumner, to try to reduce such costs to the military, beginning in 1851. Sumner and his superiors relied heavily on the information gathered and recommendations made by Inspector General McCall in 1850. McCall's reports comprised the most complete information about New Mexico that was available to the War Department at the time. Some of the things he found should have been revealing. For example, there was not one military veterinarian in the department that had to rely heavily on horses for dealing with Indians. Some of his recommendations, such as the removal of troops from the towns, were followed almost completely. Within two years after his inspection tour of New Mexico, all the posts he visited except Fort Marcy at Santa Fe were abandoned and new ones had been established at other locations.
McCall commented several times about the disastrous effects Indian raids were having on the economy of New Mexico. On July 15, 1850, he wrote as follows: "The hill sides and the plains that were in days past covered with sheep and cattle are now bare in many parts of the state, yet the work of plunder still goes on!" He noted that Apaches and Navajos were not afraid to steal livestock "in the close vicinity of our military posts." He estimated that during the previous three months several herders had been killed, between 15,000 and 20,000 sheep had been stolen, and "several hundred head of cattle and mules" driven from the settlements. The army had been ineffective. The Indians "were on several occasions pursued by the troops, but without success."
As directed, McCall gathered reports on the losses to Indians during the 18 months prior to September 1, 1850. He concluded that the loss in livestock included 181 horses, 402 mules, 788 cattle, and 47,300 sheep. Another estimate of New Mexican losses of livestock to Indians during five years, from 1846 through 1850, included 7,050 horses, 12,887 mules, 31,581 cattle, and 453,293 sheep. A further perspective of those estimates may be gained by comparison with the numbers of livestock recorded in the federal census of New Mexico in 1850: 5,079 horses, 8,654 mules, 32,977 cattle, and 377,271 sheep. The need for additional protection from Indians was evident.
McCall provided his assessment of the non-Pueblo Indians of the area. He thought the Navajos might be persuaded to adapt to a Pueblo way of life, and declared the several Apache tribes were considered the most destructive raiders because "they have nothing of their own and must plunder or starve." He thought the Apaches would be the most difficult to subdue "owing to their numerical strength, their bold and independent character, and their immemorial predatory habits."
McCall identified six bands of Apaches in New Mexico, enclosing the settlements on all sides with the aid of the Navajos and Utes. The Jicarilla Apaches to the northeast were considered "one of the most troublesome" because of their recent attacks along the Santa Fe Trail. The White Mountain and Sacramento Apaches "range the country extending north and south from the junction of the Gallinas with the Pecos to the lower end of the Jornada del Muerto. They continue to drive off stock and to kill the Mexican shepherds both in the vicinity of Vegas and along the Rio Grande." The Mescaleros to the southeast raided more into Texas and Mexico than in New Mexico. The Gila Apaches to the southwest also carried destruction to Mexico more than New Mexico. Peace with all bands of Apaches would require sufficient supplies of the means of life so that they might survive without stealing, for without aid, McCall reiterated, "they must continue to plunder, or they must starve."
According to McCall, the Utes ranged beyond New Mexico, but those living north of most settlements were considered "warlike" and raided as far south as Abiquiu, Taos, and Mora. They sometimes united with Jicarilla Apaches in their forays. The Cheyennes and Arapahos to the northeast were not considered a serious threat to New Mexican settlements. The Comanches to the east rarely struck in New Mexico, but they raided into Mexico and traded stolen property and captives with other tribes and the New Mexicans. The Kiowas were seldom seen in New Mexico. It was clear to McCall that the first priority for the army in New Mexico was to deal effectively with the Indians. Not until that problem was resolved could the territory grow and prosper.
McCall's primary duty in New Mexico was to inspect the military posts, evaluate the state of the army, and make recommendations for improvements. In addition to department headquarters at Santa Fe, McCall visited the ten other posts, reporting the number present and evaluating conditions. He found a total of 831 troops in the department, including 150 at Fort Marcy in Santa Fe, 44 at Taos, 41 at Rayado, and 82 at Las Vegas. His detailed inspection reports on the posts provided a thorough summary of the army in the department.
Of Las Vegas McCall wrote, "The consumption of corn at this post is very great, and a large depot should be established either here or in the vicinity." The demand for corn at Las Vegas was "caused by troops and government trains passing and repassing." Wagon trains were outfitted there for the trip across the plains and forage was sometimes sent to the relief of westbound trains as far away as the Cimarron River.
In addition to a supply depot in the area, a military post was needed to protect the route of supply from Fort Leavenworth and other wagon roads, including one from Las Vegas to Albuquerque via Anton Chico. Las Vegas, which McCall thought was a good location for a supply depot, was not a good location for such a garrison because it was too far from the homelands of the Indians causing the most problems and off the "line of march of the Comanches when they visit New Mexico." A better location, he thought, would be at Rayado or on the Pecos River. McCall was not impressed with Barclay's Fort as a possible army post, although the location was good, because it was too small for a depot or large garrison of troops and the owners wanted too much money to sell or rent it ($20,000.00 to sell or $2,000.00 per year rent). McCall thought Rayado was a good location for a military post.
McCall was critical of the overall military situation in the territory, calling it inadequate for the task at hand. He recommended a minimum of 2,200 troops with at least 1,400 of those mounted. He recommended that the troops be moved from the towns to "the heart of the Indian country." Because of the difficulty of maintaining horses for mounted troops, McCall recommended the establishment of "grazing farms" which, he believed, would result in great savings. Everything in the military department needed to be structured to deal with the serious Indian problem facing settlements in the territory.
Indian raids continued into 1851. In February Indian Agent Calhoun reported that, "during the past month the Indians have been active in every direction, and for no one month during the occupancy of the Territory by the American troops have they been more successful in their depredations." Late in January, near Pecos only 25 miles from Santa Fe, several large herds of sheep and other livestock were stolen and at least three herders were killed. The Utes had raided along the Arkansas River, and "the Apaches and Navajos have roamed in every direction through this Territory."
In March 1851 a band of Jicarillas took about 1,000 sheep near Anton Chico and more sheep were stolen from Chilili. Some of the Jicarillas, however, expressed a desire for peace. On April 2 two principal chiefs, Chacon and Lobo, came to Santa Fe, along with Mescalero Chief José Cito. On that date these Indians agreed to reside on lands assigned to them and not to go nearer than 50 miles from any settlement or route of transportation. In return the government would furnish them with farm equipment and annuities.
Some of the Jicarillas refused to be bound by the treaty, which was not approved by the U.S. government anyway, and in April they raided near Barclay's Fort and attacked the town of Mora, killing several people. When a large party of Jicarillas appeared along the Pecos Valley near San Miguel, La Cuesta, and Anton Chico, the residents were alarmed. No raids were reported, however, and Chacon declared that his people were starving and had to find food.
Chacon's band, as a demonstration of their commitment to peace, had recovered livestock taken by the Navajos and returned the stock to its owners. To avoid potential problems between Jicarillas and settlers, however, Calhoun wanted the Indians to move farther away from settlements. Chacon went to Santa Fe and agreed to move his people away from the settlements. But the move did not immediately occur. Other Indians were raiding settlements while major changes were taking place in the military organization with the appointment of a new department commander in 1851. This resulted in the establishment of Fort Union. The troops in New Mexico, it is important to understand, were part of the larger U. S. Army and functioned under its organization and limitations.
The Anglo-American tradition, begun during the colonial era, was that a standing army was a liability rather than an asset. Citizen-soldier volunteers could be raised temporarily for a crisis, such as an Indian war or a war for independence, but an army of permanent soldiers was expensive and a threat to freedom. After independence the army was a necessary part of frontier Indian policy, but it was kept small, often inadequate, and poor. As Don Russell pointed out, "had it not been for Indian wars there probably would have been no Regular Army, yet at no time was it organized and trained to fight Indians." Congress was reluctant to fund a military complex. The army that was designed for the early national period, when the western boundary was the Mississippi River, faced enormous new responsibility following the expansionist years during which the western boundary was pushed to the Pacific Ocean. An increase in size and monetary support of the army did not follow prior to the Civil War. Following that national calamity, fought primarily by citizen-soldiers on both sides, Congress determined to reduce military expenditures again, keeping the army handicapped until the frontier was settled.
Thus the greatest problem faced by the army in the Southwest was not the Indian threat to settlement, nor even the arid environment and vast distances, but a parsimonious Congress which refused to recognize that an expanding nation required an expanding military force to deal effectively with Indians, explore new lands, improve roads, provide its own facilities, and supply itself over long routes. Funds were never sufficient for the demands made on the army, and manpower and equipment were usually inadequate for the job faced. As military historian Robert Utley expressed so cogently, Congress refused "to pay the price of Manifest Destiny." Too often presidential administrations devoted to budget economy viewed the military as a good place to reduce expenditures. Such a move in 1851 resulted in orders for troops at western posts to become farmers and produce some of their own food and forage.
As a result of congressional limitations, the army was small in numbers, had substandard equipment and facilities, and experienced a difficult time recruiting and keeping competent soldiers. There was little honor but a lot of hardship connected with service on the frontier, one reason that the companies of most regiments were seldom if ever filled to authorized capacity and that the army experienced a high rate of desertion in the West. In most years more than 10% of the enlisted soldiers in the entire army, in some years more than 20%, deserted and, over time, some regiments lost more than 50% of those enlisted for five years before their term of service expired. As Utley concluded, "they simply got their fill of low pay, bad living conditions, and oppressive discipline that stood in such bold contrast to the seeming allurements of the civilian world."
Military justice often seemed arbitrary and severe. Punishment frequently varied for the same crime. In February 1851 a general court-martial in Santa Fe tried the cases of several Second Dragoons charged with forming a secret society in New Mexico known as the "Dark Riders," which included among its objectives "robbing and desertion." Of those found guilty, one was sentenced "to forfeit twelve dollars of his Pay, to work under charge of the Guard for one month & then be returned to duty." Two were sentenced to lose twenty-five dollars of their pay and, additionally, each was "to walk a ring daily six hours for one month twelve feet in diameter, then to labor two months with Ball & Chain attached to his Leg under charge of the Guard & be returned to duty." Each of four others faced a much more severe sentence, "to forfeit all pay and allowances that are now or may become due him, to have his Head shaved, to have his face blackened daily and placed standing on a Barrel from 9 to 12 O'clock A.M., and from 2 to 5 O'clock P.M. daily for twenty days, then placed under charge of the Guard at hard Labor, with Ball & Chain attached to his Leg until an opportunity affords to be marched on foot carrying his Ball & Chain to Fort Leavenworth and there be drummed out of the Service." Soldiers serving such penalties were not available for regular duty and contributed to the shortage of personnel.
Thus an under-strength army, always inadequate in authorized numbers, was further reduced in effectiveness and efficiency by being constantly undermanned. The army averaged only 82% of its mandated strength prior to 1850. In 1850 the authorized size of the army was four artillery regiments, eight infantry regiments, and three mounted regiments (two dragoons and one mounted riflemen). The artillery regiments were comprised of twelve companies and the cavalry and infantry regiments had ten companies.
The company strength varied by type of service. Each light artillery company was authorized to contain 64 privates, and each heavy artillery company was to have 42. Each infantry company was to have 42 privates; the dragoons were authorized 50 privates; and the mounted riflemen were assigned 64. In 1850 Congress authorized all companies of all branches stationed on the frontier to have 74 privates. Each company had three commissioned officers (captain, first lieutenant, and second lieutenant) and eight non-commissioned officers (four sergeants and four corporals). In addition, the field staff of a regiment included four commissioned officers (colonel, lieutenant colonel, and two majors), with an adjutant and a quartermaster selected from the subalterns. The noncommissioned staff included a sergeant major, quartermaster sergeant, and musicians (buglers for the cavalry and fifers, drummers, and bandsmen for the artillery and infantry regiments). In addition to the regiments there were the general staff officers and members of the following departments: medical, paymaster, military storekeepers, corps of engineers, corps of topographical engineers, and ordnance. If filled to authorized level, the entire army in 1850 would have totaled over 13,000 officers and men. Because most units were not up to capacity, the actual strength was 10,763, most of whom were stationed in the West.
Almost 10% of the army in 1850 was stationed among the eleven posts of the Ninth Military Department. There were two companies of Second Artillery, ten companies (the entire regiment) of Third Infantry, three companies of First Dragoons, and four companies of Second Dragoons. The total authorized strength for these units was 1,603 officers and men, but only 987 were actually present in the department. This was an average of just under 90 officers and men for each military post. A chronic problem in New Mexico was the absence of officers who should have been with their companies. Many officers could be away from their regimental duties because of a generous leave policy which permitted them to be absent from duty up to a year (occasionally longer). Vacancies also resulted from resignations and delays in appointing replacements, detached service with other units and in other places, courts-martial assignments, and recruiting duties.
Each military post comprised a highly structured society and operated under a disciplined routine in which every officer and enlisted man had his duties to perform. Despite the daily schedule, which ran by the clock with appropriate calls of drum or bugle, there was a considerable amount of leisure time with nothing provided for the men to do. There was little direct contact between commissioned and non-commissioned troops. The post commander ruled, assisted by the post adjutant and a sergeant major. The duties and training of enlisted men were directed by sergeants and corporals, under command of company officers. Several officers were in charge of specific departments: the post quartermaster was in charge of quarters, clothing, transportation, and all other supplies except food; the post commissary officer was in charge of rations; and the surgeon was in charge of the post hospital and sanitation. At some posts the quartermaster and commissary duties were performed by the same officer. Enlisted men, sometimes assisted by a few civilian employees, provided the labor force for a multitude of tasks at the post. Not all of them were available for duty, as Utley made clear: "Allowing for men in confinement, on guard, sick, and detailed to fatigue duties, a post commander could not often count enough men to man the fort, much less to take the field."
It was not easy to recruit skillful young men for the required five-year enlistment. By 1850 almost two-thirds of the enlisted men were foreign-born, many of them Irish and German, and one-fourth were illiterate. The pay for privates was $7.00 per month for infantrymen and $8.00 for cavalrymen. A sergeant drew $13.00 a month. Soldiers were supposed to be paid every two months, but at frontier posts it was sometimes as long as six months before the paymaster returned. The soldier required little cash, however, because most of his needs were furnished, including uniforms, rations, quarters, transportation, medical care, and equipment. Except for his expense to the company laundress and tailor (which could be avoided if the soldier washed his own clothing and made his own alterations), a soldier's pay was available for items such as additional food from the post or regimental sutler's store, tobacco, recreation, gambling, whiskey, and, if inclined, to send some home to his family.
The uniforms were probably sufficient, but rations and quarters were often inadequate. The daily ration, according to historian Robert Frazer, "was both uninviting and dietetically impoverished, designed to fill the stomach at minimum cost." The monotonous fare as prescribed by Army Regulations included meat (twelve ounces of salt pork or bacon, or twenty ounces of fresh or salt beef) and flour or bread (eighteen ounces of flour or bread, or twelve ounces of hard bread; sixteen ounces of corn meal could be substituted for flour or bread) each day. For each 100 rations there were also issued eight quarts of beans or ten pounds of rice, one pound of coffee or one and one-half pounds of tea, twelve pounds of sugar, two quarts of salt, and four quarts of vinegar. In addition, for each 100 rations, the soldier received one pound of sperm candles and four pounds of soap. Some of the food items shipped to New Mexico, such as bacon and flour, frequently deteriorated during the trip and the subsequent storage before issue. Other foods, except for the issue of vegetables when scurvy was found among the troops, had to be purchased by the individual soldier. Often the enlisted men had the opportunity to buy vegetables, fruits, milk, butter, and eggs at frontier posts, provided they chose to use their pay for such items. Many apparently preferred to use their limited funds for tobacco and whiskey. Drunkenness was a chronic problem at all levels of the service. Excessive drinking, like desertion, was a way many soldiers sought escape from the realities of garrison life.
Quarters varied from post to post, and soldiers sometimes were housed in tents because barracks were not available. They lived in tents, of course, when on field duty. Most company quarters, because of inadequate funds and unskilled labor, were poorly constructed, inadequately ventilated, hot in the summer, cold in the winter, and conducive to the spread of disease. The frontier army frequently experienced "a high rate of sickness and mortality." Medical care, intended to be part of the fringe benefits, was too often inadequate at frontier posts.
Although training was an important part of turning recruits into disciplined soldiers, the army did not have a standardized training program. Thus many recruits joined companies for duty without any "idea of the duties they will be called on to perform, or of the discipline they will be required to undergo." According to military historian Edward Coffman, the new soldier "often found the diet inadequate, the uniforms ill-fitting, and the quarters uncomfortable. Neither was the adjustment to discipline and drill and all that was involved in learning to be a soldier a pleasant experience." While drill dominated a recruit's training, usually there was no training in marksmanship. Perhaps it was not considered necessary since most troops became laborers at frontier posts and used axes, hammers, saws, picks, and shovels more than muskets, sabers, or cannon. Their main contact with a weapon came when they stood the ubiquitous guard duty.
Most of a soldier's time was spent on garrison duty at a small military post, the tedious routine of which was occasionally broken by field service. Time away from the fort was often spent as guard to a supply train, mail coach, or other group, and, at other times, marching from one duty station to service at another. They were also sent on scouts to investigate Indian "depredations" and on expeditions to locate and punish Indian offenders. Despite the images of an Indian-fighting army portrayed in popular media, enlisted men were seldom engaged in combat. On average, a frontier soldier might participate in battle with the enemy one time during a five-year enlistment. Only rarely were those engagements decisive, and military leaders had a difficult time trying to figure out how to deal most effectively with Indians. In the long run, many other factors besides the army contributed to the defeat and destruction of the Indians' traditional ways of life.
Meanwhile officers and soldiers held justifiable misgivings about their way of life, treatment, and importance on the frontier. William B. Lane, an officer who served in New Mexico and was stationed at Fort Union both before and after the Civil War, later explained the difficulties of soldiering in the 1850s.
The effectiveness of troops in the Ninth Military Department depended on their comfort, health, well-being, and training, but it also depended on the equipment with which they were supplied and the officers who led them. In battle the troops were only as good as their weapons and commanders. The Third Infantry was equipped with the .69 caliber percussion smoothbore musket, a reliable instrument with destructive impact (although not as accurate as a rifled musket). It was heavy to carry, weighing over nine pounds, and time-consuming to reload and fire during the heat of battle (it was a muzzle-loader). The musket was equipped for a bayonet which was sometimes attached for drill and in battle. Most of the time, however, it was detached and served a variety of purposes as a tool, especially in the field, and made a good candlestand.
The soldiers in the Second Artillery and Second Dragoons carried the musketoon, a shortened version of the .69 caliber musket used by the infantry. It weighed six and one-half pounds. According to Major General Zenas R. Bliss, the musketoon was "a sort of brevet musket. It was nothing but an old musket sawed off to about two-thirds of its original length, and the rammer fastened to the barrel by a swivel to prevent its being lost or dropped when loading on horseback; it used the same cartridge as the musket, kicked like blazes, and had neither range nor accuracy, and was not near as good as the musket, and was only used because it could be more conveniently carried on horseback." Almost everyone agreed that the musketoon was unsatisfactory. In 1853 Inspector General J. K. F. Mansfield declared the musketoon was "a worthless arm . . . with no advocates."
When McCall inspected the posts in New Mexico in 1850, he recorded that "the two batteries in possession of the Artillery companies are in good order and are complete, including carriages, limbers, caissons, harness, etc." Each battery, according to McCall, comprised one six-pounder gun, one twelve-pounder field howitzer, and three twelve-pounder mountain howitzers. Ammunition included fifty-six rounds for each gun and field howitzer and sixty rounds for each mountain howitzer.
Each of the artillery pieces had a bronze tube. The six-pounder gun had a bore diameter of 3.67 inches and fired a projectile weighing 6.10 pounds. It had a muzzle velocity of 1,439 feet per second and a range of 1,523 yards at a five-degree elevation. The twelve-pounder field howitzer had a bore diameter of 4.62 inches and fired a projectile weighing 8.9 pounds. It had a muzzle velocity of 1,054 feet per second and a range of 1,663 yards at a five-degree elevation. The twelve-pounder mountain howitzer was a lighter weight, mobile weapon designed for field duty. It had the same bore and fired the same projectile as the field howitzer. It utilized a powder charge of one-half pound, only half the charge of the field howitzer. It had a muzzle velocity of 650 feet per second and a range of 900 yards at a five-degree elevation. The twelve-pounder mountain howitzer was "the most popular and widely employed piece" during the 1850s and during and after the Civil War. It was mobile, when mounted on the prairie carriage as in New Mexico, and effective against Indians.
The First Dragoons in New Mexico were still using the .525 caliber Hall's percussion carbine, a breech-loading weapon issued when the dragoons were first organized in 1833. The musketoon was the replacement weapon for Hall's carbine, beginning in 1849. The First Dragoons in New Mexico had not yet received the "improvement" in 1850 and may have considered the Hall's carbine a more effective weapon, given the criticism of the musketoon. The troops of the First and Second Dragoons in New Mexico carried sabers. Inspector McCall did not identify the style, but most likely these were the Model 1840 dragoon sabers which were issued to both regiments. Members of both dragoon regiments in the Ninth Military Department also carried pistols, the Colt .44 caliber dragoon revolver, a cap-and-ball six-shooter.
A full complement of dragoon equipment and arms, including forty rounds of ammunition, weighed a total of seventy-eight pounds. When this was added to the weight of the trooper and the horse equipment (saddle and bridle), it made a heavy burden for the dragoon mounts and affected their efficiency in pursuit of Indians. Dragoons surrendered part of their mobility for the superiority of equipment. They did not always carry everything when engaged in chasing Indians.
With this combination of arms, Utley concluded, "the frontier army easily outmatched the Indians in weaponry. It was without doubt the most important single advantage the soldiers enjoyed over their adversary, and time and again, when a test of arms could be engineered, it carried the day." The problem was to catch the Indians and force an engagement, for they enjoyed the advantage of better knowledge of the land and greater mobility. They could be elusive to the point of frustration and use the landscape to their advantage. Indian soldiers usually stood and fought only when they believed they enjoyed superiority of numbers or position on the field or when surprised in camp. Successful engagements by the U.S. Army depended on perseverance, luck, and the officers who directed the troops.
Most of the officers in the Ninth Military Department were graduates of the Military Academy at West Point where they were trained to serve as officers, received general military education, and were provided special schooling in engineering. They were not taught how to fight Indians. It was not easy to keep officers in the army because pay was inadequate in comparison to similar civilian positions, there was no retirement plan available, promotion was exceedingly slow, and there was much quarreling and competition among them. Except in wartime, there were few opportunities for advancement, and, as military historian Coffman explained, "the tedious monotony of garrison life could be grindingly oppressive." The Ninth Military Department was comprised, as noted above, of minor military posts in a remote region of the nation. "The routine of small garrisons," wrote Coffman, "offered little in the way of professional development."
The incentives to make a career of officer life were not strong. The sister of West Point graduate Edmund Kirby Smith, also the wife of an officer, declared "The Army offers no career which a man of talent can desire. It to be sure (and I am sorry to say it) offers a safe harbour for indolence and imbecility." In 1847 Captain Edmund B. Alexander, who would become the first commanding officer of Fort Union four years later, wrote to his family: "I think if I had my profession to choose over I would select anything but the Army." Officers and enlisted men frequently turned to whiskey for escape from their conditions, and alcoholism was a serious problem for the army. Perhaps many of the officers assigned to duty in New Mexico felt there was little to be gained from service there.
The military organization demanded discipline of officers as well as enlisted men, and everything at the department level of the army was carried out by orders issued from the top down. Officers at military posts, from the commanding officer to the lowest lieutenant, were hesitant to take any action without specific orders. Although officers in command of field operations were usually given much individual discretion in dealing with whatever circumstances that might arise, there was guarded apprehension that any decision beyond specific instructions, which proved to be unsuccessful, might reflect badly on the officer and even lead to disciplinary action. The overall result was stifling for the officer corps, most of whom became mere functionaries in the chain of command. There was always an awareness among officers of who had rank over whom, which depended on the date of commission to a particular grade. The seeming simplicity of that system of seniority was complicated by the institution of brevet rank.
Brevet rank (usually a rank higher than the regular commission of an officer, awarded for a variety of purposes) was the cause of much controversy among officers in the army and of confusion among historians. The practice created all sorts of problems, as Secretary of War John B. Floyd pointed out in 1858, because of its "uncertain and ill-defined rights." The concept was borrowed from the British during the American Revolution to provide a temporary grade for an officer serving in an appointment away from his regular assignment. In the War of 1812 Congress established brevet appointments as honorary ranks to reward individual officers for gallant and meritorious service in battle or for faithful service in the same commissioned rank for ten years (a way to provide a "promotion" when there were no openings in the service at that level). As established at that time brevet rank was only an award of honor, the officer received the pay of his regular commission and held only the position of his regular commission in the chain of command. "Had the brevet system remained purely honorary," historian Utley observed, "it would have been harmless." It did not.
Many officers who held a brevet rank must have argued that such an appointment should be worth something, at least under some conditions. For whatever reasons, as Utley summarized, "brevet rank took effect, in both authority and pay, by special assignment of the President, in commands composed of different corps, on courts-martial [from 1829 to 1869], and in detachments composed of different corps." The resulting arrangement "had so many ramifications and nuances that it produced endless dispute and uncertainty, to say nothing of chaos in the computation of pay."
During the Mexican War brevet ranks were widely conferred as the primary method of extending recognition for achievement in battle. Most of the officers who remained in the service in 1850, including those in New Mexico, held one or more brevets. "Thus," wrote Utley, "under certain conditions a captain with no brevet might find himself serving under a lieutenant who had picked up a brevet of major in Mexico." In 1851 Senator Jefferson Davis, who would become secretary of war a few years later, spoke out against the brevet system that "has produced such confusion in the Army that many of its best soldiers wish it could be obliterated." The practice continued because it was a way to accord honor to deserving officers and, perhaps even more important, it compensated career officers for the inordinately slow promotions up the regular commissioned ranks.
In military correspondence, orders, and reports, it was customary during the nineteenth century that all officers were addressed as and signed their name over their brevet rank, whether they received pay and commanded at the brevet rank or not (although sometimes regular commission and brevet rank were both given). In 1870 officers who were not serving at their brevet rank were prohibited from wearing the uniform of their brevet rank and from using their brevet rank in official communication. The widespread use of brevet ranks remains confusing, and every student of the frontier army must be aware of the system. Throughout this study of the history of Fort Union, brevet ranks are given only when it was clear that the identified officer was actually serving in that rank, as during the Civil War or in the case of a brevet second lieutenant. Even then the use of the term is avoided as much as possible in an attempt to reduce misunderstanding.
Perhaps the best illustration of brevet rank was provided in a humorous poem by Captain Arthur T. Lee:
The army was firmly established in New Mexico Territory by 1851 and faced myriad problems. There were obstacles of terrain, climate, and distance from supplies. The territorial government was weak, and there were rumors of political unrest. The unique blend of Indian, Spanish, and Mexican heritage in New Mexico made it difficult to draw lines and determine who were the perpetrators and who the victims of a complex conflict that had developed for centuries. The injection of Anglo culture, with yet another system of priorities and values, made the situation less stable. The army's record in dealing with the Indian problem there, the primary mission of the troops stationed in the department, left much to be desired. A complete shakeup was about to occur, resulting in widespread reorganization and the establishment of Fort Union.
Gross domestic product (GDP) is the total market value of all the final goods and services produced within an economy in a given year. When all the components of GDP are valued at their current prices in the market, the result is called nominal gross domestic product. Nominal GDP measures national income at the prices ruling at the time and thus takes no account of inflation.
In many applications of macroeconomics, nominal GDP is not considered a reliable measure of growth or welfare. Why this is so can be explained with a simple example of a two-good economy over two years.
Let us assume that an economy produces 100 pens and 50 books in the year 2001. The price of one pen is $1 and that of one book is $2 in the market. The total value of the goods produced in the year 2001 is $200:

(100 pens x $1 per pen) + (50 books x $2 per book) = $100 + $100 = $200
Suppose that in the year 2002 the production of the two goods, pens and books, remains the same but their prices double. The total value of the goods would then be $400:

(100 pens x $2 per pen) + (50 books x $4 per book) = $200 + $200 = $400
The nominal GDP is $200 in the year 2001 and $400 in the year 2002. Nominal GDP has increased by 100% even though the physical production of goods has remained the same. So if we use nominal GDP to measure the growth of the economy, we will be misled into thinking that production has grown, when all that has really happened is a rise in the price level. The standard of living of the people increases only if (i) the economy produces a larger quantity of goods than in the previous year and (ii) the goods are sold at normal prices in the market.
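The arithmetic above is easy to check in a few lines of code. The following is a minimal Python sketch of the pen-and-book example; the prices and quantities mirror the figures given in the text, while the function name and data layout are illustrative choices, not part of the original.

```python
# Nominal GDP: each year's quantities valued at that same year's prices.
def nominal_gdp(prices, quantities):
    return sum(prices[good] * quantities[good] for good in quantities)

quantities = {"pen": 100, "book": 50}        # output unchanged between years
prices_2001 = {"pen": 1.0, "book": 2.0}
prices_2002 = {"pen": 2.0, "book": 4.0}      # prices double in 2002

print(nominal_gdp(prices_2001, quantities))  # 200.0
print(nominal_gdp(prices_2002, quantities))  # 400.0 -- the rise comes from prices alone
```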
Economists studying changes in the economy need a measure of output that shows an actual increase in the production of goods and is not affected by changes in prices. To solve this problem, economists use a measure called real GDP.
Explanation of Real GDP:
Real gross domestic product (real GDP) is the production of goods and services valued at constant prices. It is also defined as GDP adjusted for price changes. It is a measure of output that reflects actual changes in production, separate and apart from any price changes that may have occurred in the economy during the year.
Calculating Nominal GDP and Real GDP:

Let us take a simple example of a two-good economy over two years to explain the concept of real GDP. The table below gives the prices and quantities of the two goods for the years 2001 and 2002.

Price and Quantity
Year 2001: pens, $1 each, 100 produced; books, $2 each, 50 produced (nominal GDP = $200)
Year 2002: pens, $2 each, 150 produced; books, $3 each, 100 produced (nominal GDP = $600)
Calculating Real GDP:

Real GDP for 2001 (at 2001 prices): ($1 per pen x 100 pens) + ($2 per book x 50 books) = $200

Real GDP for 2002 (at 2001 prices): ($1 per pen x 150 pens) + ($2 per book x 100 books) = $350

We find that real GDP has increased from $200 in the year 2001 to $350 in the year 2002. This increase is due to the increase in the quantities of goods produced, because prices are held fixed at their base-year (2001) levels. Real GDP thus enables us to see how much real income has changed from one year to another.
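A minimal sketch of the same calculation follows, assuming the figures from the table above: each year's quantities are valued at the fixed 2001 (base-year) prices, so only quantity changes affect the result. Names are illustrative.

```python
# Real GDP: every year's output valued at base-year (2001) prices.
BASE_PRICES = {"pen": 1.0, "book": 2.0}

quantities_by_year = {
    2001: {"pen": 100, "book": 50},
    2002: {"pen": 150, "book": 100},
}

def real_gdp(year):
    return sum(BASE_PRICES[good] * qty for good, qty in quantities_by_year[year].items())

print(real_gdp(2001))  # 200.0
print(real_gdp(2002))  # 350.0 -- growth reflects quantities only
```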
Measuring Price Changes

We can measure the change in the prices of goods over time by an index called the GDP deflator.
Definition of GDP Deflator
GDP deflator is a measure of the price level calculated as the
ratio of nominal GDP to real GDP times 100.
Formula for GDP Deflator

GDP Deflator = (Nominal GDP / Real GDP) x 100
GDP deflator for 2001: $200/$200 x 100 = 100
GDP deflator for 2002: $600/$350 x 100 = 171
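The same two values can be reproduced with a short sketch; the function below simply encodes the formula above, and the nominal and real GDP figures are the ones from this example.

```python
# GDP deflator: nominal GDP relative to real GDP, scaled so the base year = 100.
def gdp_deflator(nominal, real):
    return nominal / real * 100

print(round(gdp_deflator(200, 200)))  # 100 for 2001, the base year
print(round(gdp_deflator(600, 350)))  # 171 for 2002
```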
The Confederate government of Kentucky was a shadow government established for the Commonwealth of Kentucky by a self-constituted group of Confederate sympathizers during the American Civil War. The shadow government never replaced the elected government in Frankfort, which had strong Union sympathies. Neither was it able to gain the whole support of Kentucky's citizens; its jurisdiction extended only as far as Confederate battle lines in the Commonwealth. Nevertheless, the provisional government was recognized by the Confederate States of America, and Kentucky was admitted to the Confederacy on December 10, 1861. Kentucky was represented by the central star on the Confederate battle flag.
Bowling Green was designated the Confederate capital of Kentucky. Due to the military situation in the state, the provisional government was exiled and traveled with the Army of Tennessee for most of its existence. For a short time in the autumn of 1862, the Confederate Army controlled Frankfort, the only time a Union capital was captured by Confederate forces. During this occupation, General Braxton Bragg attempted to install the provisional government as the permanent authority in the Commonwealth. However, Union General Don Carlos Buell ambushed the inauguration ceremony and drove the provisional government from the state for the final time. From that point forward, the government existed primarily on paper and was dissolved at the end of the war.
The provisional government elected two governors. George W. Johnson was elected at the Russellville Convention and served until his death at the Battle of Shiloh. Richard Hawes was elected to replace Johnson and served through the remainder of the war.
Kentucky's citizens were split regarding the issues central to the Civil War. The state had strong economic ties with Ohio River cities such as Pittsburgh and Cincinnati while at the same time sharing many cultural, social, and economic links with the South. Unionist traditions were strong throughout the Commonwealth's history, especially in the east. With economic ties to both the North and the South, Kentucky had little to gain and much to lose from a war between the states. Additionally, many slaveholders felt that the best protection for slavery was within the Union.
The presidential election of 1860 showed Kentucky's mixed sentiments when the state gave John Bell 45% of the popular vote, John C. Breckinridge 36%, Stephen Douglas 18%, and Abraham Lincoln less than 1%. Historian Allan Nevins interpreted the election results to mean that Kentuckians strongly opposed both secession and coercion against the secessionists. The majority coalition of Bell and Douglas supporters was seen as a solid moderate Unionist position that opposed precipitate action by extremists on either side.
The majority of Kentucky's citizens believed the state should be a mediator between the North and South. On December 9, 1860, Kentucky Governor Beriah Magoffin sent a letter to the other slave state governors, suggesting that they come to an agreement with the North that would include strict enforcement of the Fugitive Slave Act, a division of common territories at the 37th parallel, a guarantee of free use of the Mississippi River, and a Southern veto over slave legislation. Magoffin proposed a conference of slave states, followed by a conference of all the states to secure the concessions. Because of the escalating pace of events, neither conference was held.
Governor Magoffin called a special session of the Kentucky General Assembly on December 27, 1860, to ask the legislators for a convention to decide the Commonwealth's course in the sectional conflict. The Louisville Morning Courier on January 25, 1861 articulated the position that the secessionists faced in the legislature, "Too much time has already been wasted. The historic moment once past, never returns. For us and for Kentucky, the time to act is NOW OR NEVER." The Unionists, on the other hand, were unwilling to surrender the fate of the state to a convention that might "in a moment of excitement, adopt the extreme remedy of secession." The Unionist position carried after many of the states rights' legislators, opposing the idea of immediate secession, voted against the convention. The assembly did, however, send six delegates to a February 4 Peace Conference in Washington, D.C., and asked Congress to call a national convention to consider potential resolutions to the secession crisis, including the Crittenden Compromise, proposed by Kentuckian John J. Crittenden.
As a result of the firing on Fort Sumter, President Lincoln sent a telegram to Governor Magoffin requesting that the Commonwealth supply four regiments as its share of the overall request of 75,000 troops for the war. Magoffin, a Confederate sympathizer, replied, "President Lincoln, Washington, D.C. I will send not a man nor a dollar for the wicked purpose of subduing my sister Southern states. B. Magoffin" Both houses of the General Assembly met on May 7 and passed declarations of neutrality in the war, a position officially declared by Governor Magoffin on May 20.
In a special congressional election held June 20, Unionist candidates won nine of Kentucky's ten congressional seats. Confederate sympathizers won only the Jackson Purchase region, which was economically linked to Tennessee by the Cumberland and Tennessee Rivers. Believing defeat at the polls was certain, many Southern Rightists had boycotted the election; of the 125,000 votes cast, Unionists captured close to 90,000. Confederate sympathizers were dealt a further blow in the August 5 election for state legislators. This election resulted in veto-proof Unionist majorities of 76–24 in the House and 27–11 in the Senate. From then on, most of Magoffin's vetoes to protect southern interests were overridden in the General Assembly.
Historian Wilson Porter Shortridge made the following analysis:
"These elections demonstrated that a majority of the people of Kentucky were opposed to secession, but they could not be interpreted as an approval of the war policy of the Lincoln administration, as was quite generally done at the north at that time. Perhaps the best explanation at that time was that the people of Kentucky desired peace and thought that the election of the union candidates was the best way to get it."
With secession no longer considered a viable option, the pro-Confederate forces became the strongest supporters for neutrality. Unionists dismissed this as a front for a secessionist agenda. Unionists, on the other hand, struggled to find a way to move the large, moderate middle to a "definite and unqualified stand with the Washington government." The maneuvering between the two reached a decisive point on September 3 when Confederate forces were ordered from Tennessee to the Kentucky towns of Hickman and Columbus. Union forces responded by occupying Paducah.
On September 11, the legislature passed a resolution instructing Magoffin to order the Confederate forces (but not the Union forces) to leave the state. The Governor vetoed the resolution, but the General Assembly overrode his veto, and Magoffin gave the order. The next week, the assembly officially requested the assistance of the Union and asked the governor to call out the state militia to join the Federal forces. Magoffin also vetoed this request. Again the assembly overrode his veto and Magoffin acquiesced.
A pro-Confederate peace meeting, with Breckinridge as a speaker, was scheduled for September 21. Unionists feared the meeting would lead to actual military resistance, and dispatched troops from Camp Dick Robinson to disband the meeting and arrest Breckinridge. Breckinridge, as well as many other state leaders identified with the secessionists, fled the state. These leaders eventually served as the nucleus for a group that would create a shadow government for Kentucky. In his October 8 "Address to the People of Kentucky," Breckinridge declared, "The United States no longer exists. The Union is dissolved."
On October 29, 1861, 63 delegates representing 34 counties met at Russellville to discuss the formation of a Confederate government for the Commonwealth. Despite its defeats at the polls, this group believed that the Unionist government in Frankfort did not represent the will of the majority of Kentucky's citizens. Trigg County's Henry Burnett was elected chairman of the proceedings. Scott County farmer George W. Johnson chaired the committee that wrote the convention's final report and introduced some of its key resolutions. The report called for a sovereignty convention to sever ties with the Federal government. Both Breckinridge and Johnson served on the Committee of Ten that arranged the convention.
On November 18, 116 delegates from 68 counties met at the William Forst House in Russellville. Burnett was elected presiding officer. Fearing for the safety of the delegates, he first proposed postponing proceedings until January 8, 1862. Johnson convinced the majority of the delegates to continue. By the third day, the military situation was so tenuous that the entire convention had to be moved to a tower on the campus of Bethel Female College, a now-defunct institution in Hopkinsville.
The officers of the provisional government were: Governor, George W. Johnson; Lieutenant Governor, Horatio F. Simrall; Secretary of State, Robert McKee; and Treasurer, Theodore Legrand Burnett.
The first item was ratification of an ordinance of secession, which proceeded in short order. Next, being unable to flesh out a complete constitution and system of laws, the delegates voted that "the Constitution and laws of Kentucky, not inconsistent with the acts of this Convention, and the establishment of this Government, and the laws which may be enacted by the Governor and Council, shall be the laws of this state." The delegates proposed a provisional government to consist of a legislative council of ten members (one from each Kentucky congressional district); a governor, who had the power to appoint judicial and other officials; a treasurer and an auditor. The delegates designated Bowling Green (then under the control of Confederate general Albert Sidney Johnston) as the Confederate State capital, but had the foresight to provide for the government to meet anywhere deemed appropriate by the council and governor. The convention adopted a new state seal, an arm wearing mail with a star, extended from a circle of twelve other stars.
The convention unanimously elected Johnson as governor. Horatio F. Simrall was elected lieutenant governor but soon fled to Mississippi to escape Federal authorities. Robert McKee, who had served as secretary of both conventions, was appointed secretary of state. Theodore Legrand Burnett was elected treasurer but resigned on December 17 to accept a position in the Confederate Congress. He was replaced by Warren County native John Quincy Burnham. The position of auditor was first offered to former Congressman Richard Hawes, but Hawes declined in order to continue his military service under Humphrey Marshall. In his stead, the convention elected Josiah Pillsbury, also of Warren County. The legislative council elected Willis Benson Machen as its president.
On November 21, the day following the convention, Johnson wrote Confederate president Jefferson Davis to request Kentucky's admission to the Confederacy. Burnett, William Preston, and William E. Simms were chosen as the state's commissioners to the Confederacy. For reasons unexplained by the delegates, Dr. Luke P. Blackburn, a native Kentuckian living in Mississippi, was invited to accompany the commissioners to Richmond, Virginia. Though Davis had reservations about circumvention of the elected General Assembly in forming the Confederate government, he concluded that Johnson's request had merit, and on November 25, recommended Kentucky for admission to the Confederacy. Kentucky was admitted to the Confederacy on December 10, 1861.
On November 26, 1861, Governor Johnson issued an address to the citizens of the Commonwealth blaming abolitionists for the breakup of the United States. He asserted his belief that the Union and Confederacy were forces of equal strength, and that the only solution to the war was a free trade agreement between the two sovereign nations. He further announced his willingness to resign as provisional governor if the Kentucky General Assembly would agree to cooperate with Governor Magoffin. Magoffin himself denounced the Russellville Convention and the provisional government, stressing the need to abide by the will of the majority of the Commonwealth's citizens.
During the winter of 1861, Johnson tried to assert the legitimacy of the fledgling government, but its jurisdiction extended only as far as the area controlled by the Confederate Army. Johnson fell short of raising the 46,000 troops requested by the Confederate Congress. Efforts to levy taxes and to compel citizens to turn over their guns to the government were similarly unsuccessful. On January 3, 1862, Johnson requested a sum of $3 million ($69 million as of 2013) from the Confederate Congress to meet the provisional government's operating expenses. The Congress instead approved a sum of $2 million, the expenditure of which required approval of Secretary of War Judah P. Benjamin and President Davis. Much of the provisional government's operating capital was probably provided by Kentucky congressman Eli Metcalfe Bruce, who made a fortune from varied economic activities throughout the war.
The council met on December 14 to appoint representatives to the Confederacy's unicameral provisional congress. Those appointed would serve for only two months, as the provisional congress was replaced with a permanent bicameral legislature on February 17, 1862. Kentucky was entitled to two senators and 12 representatives in the permanent Confederate Congress. The usual day for general elections having passed, Governor Johnson and the legislative council set election day for Confederate Kentucky on January 22. Voters were allowed to vote in whichever county they occupied on election day, and could cast a general ballot for all positions. In an election that saw military votes outnumber civilian ones, only four of the provisional legislators were elected to seats in the Confederate House of Representatives. One provisional legislator, Henry Burnett, was elected to the Confederate Senate.
The provisional government took other minor actions during the winter of 1861. An act was passed to rename Wayne County to Zollicoffer County in honor of Felix Zollicoffer, who died at the Battle of Mill Springs. Local officials were appointed in areas controlled by Confederate forces, including many justices of the peace. When the Confederate government eventually disbanded, the legality of marriages performed by these justices was questioned, but eventually upheld.
Withdrawal from Kentucky and death of Governor Johnson
Following Ulysses S. Grant's victory at the Battle of Fort Henry, General Johnston withdrew from Bowling Green into Tennessee on February 7, 1862. A week later, Governor Johnson and the provisional government followed. On March 12, the New Orleans Picayune reported that "the capital of Kentucky [is] now being located in a Sibley tent."
Governor Johnson, despite his presumptive official position, his age (50), and a crippled arm, volunteered to serve under General John C. Breckinridge and Colonel Robert P. Trabue at the Battle of Shiloh. On April 7, Johnson was severely wounded in the thigh and abdomen, and lay on the battlefield until the following day. Johnson was recognized by an acquaintance and fellow Freemason, Alexander McDowell McCook, a Union general. Johnson died aboard the Union hospital ship Hannibal, and the provisional government of Kentucky was left leaderless.
Richard Hawes as governor
Prior to abandoning Bowling Green, Governor Johnson requested that Richard Hawes come to the city and help with the administration of the government, but Hawes was delayed due to a bout with typhoid fever. Following Johnson's death, the provisional government elected Hawes, who was still recovering from his illness, as governor. Following his recovery, Hawes joined the government in Corinth, Mississippi, and took the oath of office on May 31.
During the summer of 1862, word began to spread through the Army of Tennessee that Generals Bragg and Edmund Kirby Smith were planning an invasion of Kentucky. The legislative council voted to endorse the invasion plan, and on August 27, Governor Hawes was dispatched to Richmond to favorably recommend it to President Davis. Davis was non-committal, but Bragg and Smith proceeded, nonetheless.
On August 30, Smith commanded one of the most complete Confederate victories of the war against an inexperienced Union force at the Battle of Richmond. Bragg also won a decisive victory at the September 13 Battle of Munfordville, but the delay there cost him the larger prize of Louisville, which Don Carlos Buell moved to occupy on September 25. Having lost Louisville, Bragg spread his troops into defensive postures in the central Kentucky cities of Bardstown, Shelbyville and Danville and waited for something to happen, a move that historian Kenneth W. Noe called a "stupendously illogical decision."
Meanwhile, the leaders of Kentucky's Confederate government had remained in Chattanooga, Tennessee, awaiting Governor Hawes' return. They finally departed on September 18, and caught up with Bragg and Smith in Lexington, Kentucky on October 2. Bragg had been disappointed with the number of soldiers volunteering for Confederate service in Kentucky; wagon loads of weapons that had been shipped to the Commonwealth to arm the expected enlistees remained unissued. Desiring to enforce the Confederate Conscription Act to boost recruitment, Bragg decided to install the provisional government in the recently-captured state capital of Frankfort. On October 4, 1862, Hawes was inaugurated as governor by the Confederate legislative council. In the celebratory atmosphere of the inauguration ceremony, however, the Confederate forces let their guard down, and were ambushed and forced to retreat by Buell's artillery.
Decline and dissolution
Following the Battle of Perryville, the provisional government left Kentucky for the final time. Displaced from their home state, members of the legislative council dispersed to places where they could make a living or be supported by relatives until Governor Hawes called them into session. Scant records show that on December 30, 1862, Hawes summoned the council, auditor, and treasurer to his location at Athens, Tennessee for a meeting on January 15, 1863. Hawes himself unsuccessfully lobbied President Davis to remove Hawes' former superior, Humphrey Marshall, from command. On March 4, Hawes told Davis by letter that "our cause is steadily on the increase" and assured him that another foray into the Commonwealth would produce better results than the first had.
The government's financial woes also continued. Hawes was embarrassed to admit that neither he nor anyone else seemed to know what became of approximately $45,000 that had been sent from Columbus to Memphis, Tennessee during the Confederate occupation of Kentucky. Another major blow was Davis' 1864 decision not to allow Hawes to spend $1 million that had been secretly appropriated in August 1861 to help Kentucky maintain its neutrality. Davis reasoned that the money could not be spent for its intended purpose, since Kentucky had already been admitted to the Confederacy.
Late in the war, the provisional government existed mostly on paper. However, in the summer of 1864, Colonel R. A. Alston of the Ninth Tennessee Cavalry requested Governor Hawes' assistance in investigating crimes allegedly committed by Brigadier General John Hunt Morgan during his latest raid into Kentucky. Hawes never had to act on the request, however, as Morgan was suspended from command on August 10 and killed by Union troops on September 4, 1864.
There is no documentation detailing exactly when Kentucky's provisional government ceased operation. It is assumed to have dissolved upon the conclusion of the Civil War.
See also
- Border states (Civil War)
- Confederate government of Missouri
- Kentucky in the Civil War
- Upland South
- Western Theater of the American Civil War
- Kent Masterson Brown, ed. (2000). The Civil War in Kentucky: Battle for the Bluegrass. Mason City, Iowa: Savas Publishing Company. ISBN 1-882810-47-3.
- Encyclopedia Americana. Vol. 4 (1969 ed.). Americana Corporation. ISBN 0-7172-0100-7.
- Lowell H. Harrison, ed. (2004). Kentucky's Governors. Lexington, Kentucky: The University Press of Kentucky. ISBN 0-8131-2326-7.
- Harrison, Lowell H. (1975). The Civil War in Kentucky. Lexington, Kentucky: The University Press of Kentucky. ISBN 0-8131-0209-X.
- Harrison, Lowell Hayes (Winter 1981). "George W. Johnson and Richard Hawes: The Governors of Confederate Kentucky". The Register of the Kentucky Historical Society 79 (1): pp. 3–39.
- Heck, Frank H. (August 1955). "John C. Breckinridge in the Crisis of 1860-1861". The Journal of Southern History (Southern Historical Association) 21 (3): 316–346. doi:10.2307/2954954. JSTOR 2954954.
- Kleber, John E., ed. (1992). The Kentucky Encyclopedia. Associate editors: Thomas D. Clark, Lowell H. Harrison, and James C. Klotter. Lexington, Kentucky: The University Press of Kentucky. ISBN 0-8131-1772-0.
- Nevins, Allan (1959). The War for the Union: The Improvised War 1861-1862. Charles Scribner's Sons. ISBN 684104261.
- Noe, Kenneth W. (2001). Perryville: This Grand Havoc of Battle. Lexington, Kentucky: University Press of Kentucky. ISBN 978-0-8131-2209-0.
- Powell, Robert A. (1976). Kentucky Governors. Frankfort, Kentucky: Kentucky Images. OCLC 2690774.
- Jerlene Rose, ed. (2005). Kentucky's Civil War 1861 – 1865. Clay City, Kentucky: Back Home in Kentucky, Inc. ISBN 0-9769231-2-2.
- Shortridge, William Porter (March 1923). "Kentucky Neutrality in 1861". The Mississippi Valley Historical Review (Organization of American Historians) 9 (4): 283–301. doi:10.2307/1886256. JSTOR 1886256.
- Irby, Jr., Richard E. "A Concise History of the Flags of the Confederate States of America and the Sovereign State of Georgia". About North Georgia. Golden Ink. Retrieved 2006-11-29.
- Nevins, pp. 129–130
- Harrison in The Civil War in Kentucky, pp. 6–7
- Harrison in The Civil War in Kentucky, p. 7
- Shortridge, p. 290
- Heck, p. 333
- Shortridge, pp. 290–291
- Harrison in The Civil War in Kentucky, p. 8
- Harrison in Kentucky Governors, pp. 82–84
- Powell, p. 52
- Harrison in The Civil War in Kentucky, p. 9
- Rose, pp. 63–65
- Kleber, p. 193
- Harrison in The Civil War in Kentucky, p. 11
- Shortridge, p. 297
- Shortridge, pp. 298–300
- Shortridge, p. 300
- Heck, p. 343
- Brown, p. 80
- Brown, p. 83
- Kleber, p. 222
- Harrison in Register, p. 13
- Powell, p. 114
- Harrison in Register, p. 14
- Brown, p. 84
- Brown, p. 85
- Kleber, pp. 418–419
- Harrison in Register, p. 15
- Brown, p. 87
- Harrison in Register, p. 16
- Harrison in Register, p. 20
- Brown, p. 88
- Harrison in Register, p. 22
- Brown, p. 89
- Kleber, p. 473
- Rose, pp. 90–91
- Harrison in Kentucky Governors, pp. 85–88
- Brown, p. 93
- Kleber, pp. 772–773
- Harrison in The Civil War in Kentucky, p. 46
- Harrison in The Civil War in Kentucky, p. 48
- Noe, p. 124
- Harrison in The Civil War in Kentucky, p. 47
- Encyclopedia Americana, p. 407
- Powell, p. 115
- Encyclopedia Americana, p. 707
- Brown, p. 96
- Brown, pp. 96–97
- Brown, p. 97
- Proceedings of the convention establishing provisional government of Kentucky. Constitution of the provisional government. Letter of the governor to the president. President's message recommending the admission of Kentucky as a member of the confederate states
- Secession and the Union in Tennessee and Kentucky: A Comparative Analysis, James Copeland, Walters State Community College | http://en.wikipedia.org/wiki/Confederate_government_of_Kentucky | 13
51 | Is there such a thing as too much money?
by Fred E. Foldvary, Senior Editor

What is inflation? There are two economic meanings of inflation. The first meaning is monetary inflation, having to do with the money supply. To understand that, we need to understand that the impact of money on the economy depends not just on the amount of money but also on its rate of turnover.
We all know that money circulates. How fast it circulates is called its velocity. For example, suppose you get paid $4000 every four weeks. You are circulating $4000 13 times per year. Then suppose you instead get paid $1000 each week. Your total spending is the same, but now you are circulating $1000 52 times per year. The velocity of the money is 52, but the money you hold has been reduced to one fourth its previous amount, although the money held times the velocity is the same. The effect on the economy is the money supply times the velocity.
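To make the arithmetic above concrete, here is a minimal Python sketch; the function name money_flow is just an illustrative label, not anything from a real library.

```python
def money_flow(average_holding, turnovers_per_year):
    """Spending supported over a year by a money balance:
    the balance held times its velocity (turnovers per year)."""
    return average_holding * turnovers_per_year

# Paid $4,000 every four weeks: the $4,000 circulates 13 times a year.
four_weekly = money_flow(4000, 13)   # 52,000

# Paid $1,000 each week: one quarter the balance, four times the velocity.
weekly = money_flow(1000, 52)        # 52,000

print(four_weekly == weekly)  # True -- money held times velocity is the same
```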
Monetary inflation is an increase in the money supply, times the velocity, which is greater than the increase in the amount of transactions measured in constant dollars. Simply put, if velocity does not change, monetary inflation is an increase in money that is greater than the increase in goods.
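A rough sketch of that definition in code, treating the excess of nominal spending growth (money times velocity) over growth in real transactions as monetary inflation. The function is hypothetical and only restates the paragraph's arithmetic.

```python
def monetary_inflation(money_growth, velocity_growth, transaction_growth):
    """Growth of money times velocity in excess of the growth of
    transactions measured in constant dollars.
    All rates are decimals (0.23 means 23 percent)."""
    nominal_growth = (1 + money_growth) * (1 + velocity_growth) - 1
    return nominal_growth - transaction_growth

# With velocity unchanged, money growing 23 percent against goods growing
# 0.6 percent (figures of the kind quoted below) leaves a large excess.
print(round(monetary_inflation(0.23, 0.0, 0.006), 3))  # 0.224
```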
Price inflation is an on-going increase in the price level. The level of prices is measured by a price index, such as the consumer price index (CPI). Usually, price inflation is caused by monetary inflation. So let’s take a look at recent monetary inflation.
The broadest measure of money is MZM, which stands for money zero maturity, funds which can be readily spent. The Federal Reserve Bank of St. Louis keeps track of various measurements of money. Its data show that on an annual basis, MZM increased by 13 percent in January 2008, 36 percent in February, and 23 percent in March. These are huge increases, since gross domestic product, the total production of goods, increased at an annual rate of only .6 percent during these months. In 2006, MZM grew at an annual rate of only 4 percent.
High monetary inflation results in high price inflation. Indeed, in May 2008 the consumer price index rose by 4.2 percent from the level of May 2007. For the month, the increase for May was .6 percent, an annual rate of 7.2 percent. The “Consumer Price Index for All Urban Consumers” (CPI-U) increased 0.8 percent in May, before seasonal adjustment, for an annualized increase of 9.6 percent. The “Consumer Price Index for Urban Wage Earners and Clerical Workers” (CPI-W) increased 1.0 percent in May, prior to seasonal adjustment, for a whopping annual increase of 12 percent.
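The annual rates quoted above appear to be simple annualizations, the one-month change multiplied by 12; compounding the monthly change would give slightly higher figures. A small sketch under that assumption:

```python
def simple_annualized(monthly_pct):
    """Annualize a one-month percentage change by multiplying by 12."""
    return monthly_pct * 12

def compound_annualized(monthly_pct):
    """Annualize by compounding the monthly change over twelve months."""
    return ((1 + monthly_pct / 100) ** 12 - 1) * 100

for monthly in (0.6, 0.8, 1.0):
    print(monthly, round(simple_annualized(monthly), 1),
          round(compound_annualized(monthly), 1))
# 0.6 -> 7.2 simple vs about 7.4 compounded
# 0.8 -> 9.6 simple vs about 10.0 compounded
# 1.0 -> 12.0 simple vs about 12.7 compounded
```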
The rapid rise in oil prices fueled the increase in the price of gasoline, while the greater demand for grains made food prices rise, but beneath these rises is the monetary inflation that creates a higher demand for goods in general. The government reports that “core inflation,” not counting gasoline and food, is lower, but what counts for people is everything they buy, including food and fuel. If you have to pay much more for food and gasoline, there is less money for other things, so of course these will not rise in price as much.
In making monetary policy, the Federal Reserve targets the federal funds interest rate, which banks pay when they borrow funds from one another. During the financial troubles of the first few months of 2008, the Fed aggressively lowered the federal funds rate to 2 percent and also indicated that it would supply limitless credit to banks that borrowed directly from the Federal Reserve.
The Fed lowers the interest rate by increasing the supply of money that banks have to lend; to unload it, banks charge borrowers less interest. To start, the Fed buys U.S. Treasury bonds from the public. The Fed pays for the bonds not by using old money it has lying around but by increasing the reserves held by the banks in their accounts at their local Federal Reserve Bank then using that new money.
This increase in reserves or bank funds is a creation of money out of nothing. Actually, this does not violate the law of conservation, because this creation of money is at the expense of the value of all other money holdings. Every extra dollar created by the Fed decreases the value of the dollars you hold by a tiny amount.
Most monetary reformers stop there, but that is not enough. The current financial instability is also caused by the real estate boom-bust cycle, since even with sound money, an economic expansion would spark a speculative boom in land values. In a competitive market, when produced goods rise in price, producers usually supply more, bringing the price back down or limiting the rise. But land is not produced, so with increased demand, the price has nowhere to go but up. Speculators drive the price of land based on expectations of even higher future prices, but at the peak of the boom, the price becomes too high for those who want to use the land.
Real estate stops rising and then falls, and that brings the financial system down with it, as we have witnessed during the past year. To prevent the inflation in land prices, we need to remove the subsidy, the pumping up of land value from the civic benefits paid by returns on labor and capital goods. We can remove the land subsidy by tapping the land value or land rent for public revenue. Land-value tapping or taxation plus free-market money and banking would provide price and financial stability.
Only the free market can know the right money supply. Some people think the government could just print money and spend it. That is what is happening in Zimbabwe, which has an inflation rate of one hundred thousand percent. Much of the population has fled the country. Once government can create money at will, there is really no way to limit it, and if there is some limiting rule, then the money supply becomes too rigid. Only free market competition and production can combine price stability with money-supply flexibility.
-- Fred Foldvary
Copyright 2008 by Fred E. Foldvary. All rights reserved. No part of this material may be reproduced or transmitted in any form or by any means, electronic or mechanical, which includes but is not limited to facsimile transmission, photocopying, recording, rekeying, or using any information storage or retrieval system, without giving full credit to Fred Foldvary and The Progress Report.
| http://www.progress.org/2008/fold564.htm | 13
15 | At the root of England's difficulties with her American colonies was the mercantilist system of Britain's economy. For Great Britain to profit fully from her colonies and prevent the loss of wealth to her rivals, trade within the empire had to be closely regulated.
To control imperial trade, Parliament legislated a series of "navigation acts" that defined what goods could be shipped from colonial ports to those outside England's control. The acts also defined what goods could be shipped to an English port from a foreign one. Under the regulation and protection of the British government, the Americans prospered, but they also bridled at the controls placed on them. Strict enforcement of the navigation acts was often impossible. American merchants regularly traded with both the Dutch and French West Indies, and smuggling was widespread. Despite recognized American violations of the navigation acts, peace prevailed between England and her American colonies. However, following the French and Indian War in 1763, Parliament sought ways to raise revenues in the colonies to help pay war debts and cover the costs of defending the empire. Efforts to enforce parliamentary authority over the Americans ultimately led to open rebellion and the formation of the United States.
The following is a partial outline of some of the more important acts passed by Parliament. Known collectively as the "navigation acts," they were originally designed to regulate commerce within the British Empire, but ultimately ignited war between the American colonies and England.
1651--The Navigation Act of 1651, one of the earliest navigation acts, was designed to channel all exports from the colonies through an English port before continuing to a foreign harbor. The goods had to be carried on English ships and have English crews, and the ships had to pay duties on the goods before continuing.
1663--The Staple Act of 1663 altered preexisting regulations so that any goods picked up in foreign ports had to be taken back to England, unloaded, inspected, paid for in duties, and repacked for shipment to the colonies. This greatly increased the prices paid by colonial consumers.
1673--The Act of 1673 stated that all goods not bonded in England must have a duty and bond placed on them when the ship reached the colonies. The colonial governor collected the bond and duty and thus started a tradition that continued through the Revolution. Before going to sea, a ship was required to pay a bond guaranteeing that if certain enumerated goods were loaded at any port they would be brought to England or an English port and nowhere else. A shipowner or captain who did not go to an English port would be prosecuted and would usually lose the bond. The Crown thus hoped to channel all trade through English ports and receive income from duties and taxes. The English merchants would also benefit from having a monopoly on sales and increased prices in the colonies. The colonial traders would not be allowed to trade with foreign countries.
1733--The Molasses Act attempted to stop trade between the New England colonies and the French West Indies. Northern traders exchanged salted fish, beef, and pork for molasses, which they converted into rum. This was one leg in the triangular trade between the Americas, Europe, and Africa. The islanders, for their part, traded sugar for needed New England foods. The New Englanders then produced rum from the sugar and exported it to England. New England rum became in turn a key trade item in the slave trade, which finally brought yet more slaves to the West Indies to work on the sugar plantations.
1764--The Revenue Act (Sugar Act) actually reduced taxes on molasses from six to three pence a gallon, but it also added to the list of American exports that had to pass through English ports. The Revenue Act required American merchants to post bonds guaranteeing the observance of the trade regulations before loading their cargoes. This law also applied to any intercoastal trading and severely hurt the small local traders. Violations of these acts were prosecuted by the vice-admiralty courts. Parliament permitted the British navy to help the customs service enforce the regulation of American trade. Soon eight warships and twelve armed sloops arrived on the coast to patrol for smugglers.
1765--The Stamp Act became one of the most unpopular acts passed by Parliament. It required that a revenue stamp be placed on all newspapers, pamphlets, almanacs, legal documents, liquor licenses, college diplomas, playing cards, and even dice. Without the stamp, the document was not legal or admissible in court cases. Violent opposition caused Parliament to repeal the Stamp Act in 1766.
1767--The Townshend Acts added duties to the importation of paper, lead, painters' colors, and tea. This act inspired an effective American economic boycott of English goods. With fifty percent of English ships engaged in trade with colonial America and twenty-five percent of English manufactured goods being consumed in the colonies, the boycott proved effective. With the exception of the tea tax, the Townshend Acts were repealed in 1770.
1773--The Tea Act was designed to help the nearly bankrupt East India Company by giving it direct access to the American market. The act actually lowered tea prices by eliminating the middleman and lowering tea taxes. However, the desired effect was lost when the colonists argued that it granted the East India Company a monopoly injurious to American trade. The Boston Tea Party followed as a protest.
The Boston Tea Party. From the collections of The Mariners' Museum.
1774--The Coercive Acts (Intolerable Acts) were passed in response to the actions taken by the American colonists at the Boston Tea Party. The Coercive Acts were actually a series of acts that included the Port Act, which closed the port of Boston until the loss of the East India Company's tea was repaid; the Massachusetts Regulating Act, which essentially revoked Massachusetts's colonial charter; and the Quebec Act, which granted a centralized government to Quebec and extended the Canadian border to the Ohio River. British troops were ordered to Boston to enforce the Coercive Acts, and the Quartering Act requiring the billeting of British troops in civilian homes was renewed. | http://www.marinersmuseum.org/sites/micro/usnavy/02.htm | 13 |
14 | Cancer of the Larynx & Voice Box
The larynx is an organ at the front of your neck. It is also called the voice box. It is about 2 inches long and 2 inches wide. It is above the windpipe (trachea). Below and behind the larynx is the esophagus.
The larynx has two bands of muscle that form the vocal cords. The cartilage at the front of the larynx is sometimes called the Adam’s apple.
The larynx has three main parts:
- The top part of the larynx is the supraglottis.
- The glottis is in the middle. Your vocal cords are in the glottis.
- The subglottis is at the bottom. The subglottis connects to the windpipe.
The larynx plays a role in breathing, swallowing, and talking. The larynx acts like a valve over the windpipe. The valve opens and closes to allow breathing, swallowing, and speaking:
- Breathing: When you breathe, the vocal cords relax and open. When you hold your breath, the vocal cords shut tightly.
- Swallowing: The larynx protects the windpipe. When you swallow, a flap called the epiglottis covers the opening of your larynx to keep food out of your lungs. The food passes through the esophagus on its way from your mouth to your stomach.
- Talking: The larynx produces the sound of your voice. When you talk, your vocal cords tighten and move closer together. Air from your lungs is forced between them and makes them vibrate. This makes the sound of your voice. Your tongue, lips, and teeth form this sound into words.
Who’s at Risk?
No one knows the exact causes of cancer of the larynx. Doctors cannot explain why one person gets this disease and another does not. We do know that cancer is not contagious. You cannot “catch” cancer from another person.
People with certain risk factors are more likely to get cancer of the larynx. A risk factor is anything that increases your chance of developing this disease.
Studies have found the following risk factors:
- Age. Cancer of the larynx occurs most often in people over the age of 55.
- Gender. Men are four times more likely than women to get cancer of the larynx.
- Race. African Americans are more likely than whites to be diagnosed with cancer of the larynx.
- Smoking. Smokers are far more likely than nonsmokers to get cancer of the larynx. The risk is even higher for smokers who drink alcohol heavily.
People who stop smoking can greatly decrease their risk of cancer of the larynx, as well as cancer of the lung, mouth, pancreas, bladder, and esophagus. Also, quitting smoking reduces the chance that someone with cancer of the larynx will get a second cancer in the head and neck region. (Cancer of the larynx is part of a group of cancers called head and neck cancers.)
- Alcohol. People who drink alcohol are more likely to develop laryngeal cancer than people who don’t drink. The risk increases with the amount of alcohol that is consumed. The risk also increases if the person drinks alcohol and also smokes tobacco.
- A personal history of head and neck cancer. Almost one in four people who have had head and neck cancer will develop a second primary head and neck cancer.
- Occupation. Workers exposed to sulfuric acid mist or nickel have an increased risk of laryngeal cancer. Also, working with asbestos can increase the risk of this disease. Asbestos workers should follow work and safety rules to avoid inhaling asbestos fibers.
Other studies suggest that having certain viruses or a diet low in vitamin A may increase the chance of getting cancer of the larynx. Another risk factor is having gastroesophageal reflux disease (GERD), which causes stomach acid to flow up into the esophagus.
Most people who have these risk factors do not get cancer of the larynx. If you are concerned about your chance of getting cancer of the larynx, you should discuss this concern with your health care provider. Your health care provider may suggest ways to reduce your risk and can plan an appropriate schedule for checkups.
The symptoms of cancer of the larynx depend mainly on the size of the tumor and where it is in the larynx. Symptoms may include the following:
- Hoarseness or other voice changes
- A lump in the neck
- A sore throat or feeling that something is stuck in your throat
- A cough that does not go away
- Problems breathing
- Bad breath
- An earache
- Weight loss
These symptoms may be caused by cancer or by other, less serious problems. Only a doctor can tell for sure.
If you have symptoms of cancer of the larynx, the doctor may do some or all of the following exams:
- Physical exam. The doctor will feel your neck and check your thyroid, larynx, and lymph nodes for abnormal lumps or swelling. To see your throat, the doctor may press down on your tongue.
- Indirect laryngoscopy. The doctor looks down your throat using a small, long-handled mirror to check for abnormal areas and to see if your vocal cords move as they should. This test does not hurt. The doctor may spray a local anesthetic in your throat to keep you from gagging. This exam is done in the doctor's office.
- Direct laryngoscopy. The doctor inserts a thin, lighted tube called a laryngoscope through your nose or mouth. As the tube goes down your throat, the doctor can look at areas that cannot be seen with a mirror. A local anesthetic eases discomfort and prevents gagging. You may also receive a mild sedative to help you relax. Sometimes the doctor uses general anesthesia to put a person to sleep. This exam may be done in a doctor's office, an outpatient clinic, or a hospital.
- CT scan. An x-ray machine linked to a computer takes a series of detailed pictures of the neck area. You may receive an injection of a special dye so your larynx shows up clearly in the pictures. From the CT scan, the doctor may see tumors in your larynx or elsewhere in your neck.
- Biopsy. If an exam shows an abnormal area, the doctor may remove a small sample of tissue. Removing tissue to look for cancer cells is called a biopsy. For a biopsy, you receive local or general anesthesia, and the doctor removes tissue samples through a laryngoscope. A pathologist then looks at the tissue under a microscope to check for cancer cells. A biopsy is the only sure way to know if a tumor is cancerous.
If you need a biopsy, you may want to ask the doctor the following questions:
- What kind of biopsy will I have? Why?
- How long will it take? Will I be awake? Will it hurt?
- How soon will I know the results?
- Are there any risks? What are the chances of infection or bleeding after the biopsy?
- If I do have cancer, who will talk with me about treatment? When?
To plan the best treatment, your doctor needs to know the stage, or extent, of your disease. Staging is a careful attempt to learn whether the cancer has spread and, if so, to what parts of the body. The doctor may use x-rays, CT scans, or magnetic resonance imaging to find out whether the cancer has spread to lymph nodes, other areas in your neck, or distant sites.
People with cancer of the larynx often want to take an active part in making decisions about their medical care. It is natural to want to learn all you can about your disease and treatment choices. However, shock and stress after a diagnosis of cancer can make it hard to remember what you want to ask the doctor. Here are some ideas that might help:
- Make a list of questions.
- Take notes at the appointment.
- Ask the doctor if you may use a tape recorder during the appointment.
- Ask a family member or friend to come to the appointment with you.
Your doctor may refer you to a specialist who treats cancer of the larynx, such as a surgeon, otolaryngologist (an ear, nose, and throat doctor), radiation oncologist, or medical oncologist. You can also ask your doctor for a referral. Treatment usually begins within a few weeks of the diagnosis. Usually, there is time to talk to your doctor about treatment choices, get a second opinion, and learn more about the disease before making a treatment decision.
Methods of Treatment
Cancer of the larynx may be treated with radiation therapy, surgery, or chemotherapy. Some patients have a combination of therapies.
Radiation therapy (also called radiotherapy) uses high-energy x-rays to kill cancer cells. The rays are aimed at the tumor and the tissue around it. Radiation therapy is local therapy. It affects cells only in the treated area. Treatments are usually given 5 days a week for 5 to 8 weeks.
Laryngeal cancer may be treated with radiation therapy alone or in combination with surgery or chemotherapy:
- Radiation therapy alone: Radiation therapy is used alone for small tumors or for patients who cannot have surgery.
- Radiation therapy combined with surgery: Radiation therapy may be used to shrink a large tumor before surgery or to destroy cancer cells that may remain in the area after surgery. If a tumor grows back after surgery, it is often treated with radiation.
- Radiation therapy combined with chemotherapy: Radiation therapy may be used before, during, or after chemotherapy.
After radiation therapy, some people need feeding tubes placed into the abdomen. The feeding tube is usually temporary.
These are questions you may want to ask your doctor before having radiation therapy:
- Why do I need this treatment?
- What are the risks and side effects of this treatment?
- Are there any long-term effects?
- Should I see my dentist before I start treatment?
- When will the treatments begin? When will they end?
- How will I feel during therapy?
- What can I do to take care of myself during therapy?
- Can I continue my normal activities?
- How will my neck look afterward?
- What is the chance that the tumor will come back?
- How often will I need checkups?
Surgery is an operation in which a doctor removes the cancer using a scalpel or laser while the patient is asleep. When patients need surgery, the type of operation depends mainly on the size and exact location of the tumor.
There are several types of laryngectomy (surgery to remove part or all of the larynx):
- Total laryngectomy: The surgeon removes the entire larynx.
- Partial laryngectomy (hemilaryngectomy): The surgeon removes part of the larynx.
- Supraglottic laryngectomy: The surgeon takes out the supraglottis, the top part of the larynx.
- Cordectomy: The surgeon removes one or both vocal cords.
Sometimes the surgeon also removes the lymph nodes in the neck. This is called lymph node dissection. The surgeon also may remove the thyroid.
During surgery for cancer of the larynx, the surgeon may need to make a stoma. (This surgery is called a tracheostomy.) The stoma is a new airway through an opening in the front of the neck. Air enters and leaves the windpipe (trachea) and lungs through this opening. A tracheostomy tube, also called a trach (“trake”) tube, keeps the new airway open. For many patients, the stoma is temporary. It is needed only until the patient recovers from surgery. More information about stomas can be found in the “Living with a Stoma” section.
After surgery, some people may need a temporary feeding tube.
Chemotherapy is the use of drugs to kill cancer cells. Your doctor may suggest one drug or a combination of drugs. The drugs for cancer of the larynx are usually given by injection into the bloodstream. The drugs enter the bloodstream and travel throughout the body.
Chemotherapy is used to treat laryngeal cancer in several ways:
- Before surgery or radiation therapy: In some cases, drugs are given to try to shrink a large tumor before surgery or radiation therapy.
- After surgery or radiation therapy: Chemotherapy may be used after surgery or radiation therapy to kill any cancer cells that may be left. It also may be used for cancers that have spread.
- Instead of surgery: Chemotherapy may be used with radiation therapy instead of surgery. The larynx is not removed and the voice is spared.
Chemotherapy may be given in an outpatient part of the hospital, at the doctor’s office, or at home. Rarely, a hospital stay may be needed.
These are questions you may want to ask your doctor before having chemotherapy:
- Why do I need this treatment?
- What will it do?
- Will I have side effects? What can I do about them?
- How long will I be on this treatment?
- How often will I need checkups?
| http://www.tampaent.com/conditions-cancer-larynx-ent-doctors-wesley-chapel-fl.html | 13
23 | Thon Buri and Bangkok Period
Thon Buri Period (1767-1782)
King Taksin: Warfare and National Revival (1767-1782)
After the shattering defeat that had culminated in Ayutthaya's destruction, the death and capture of thousands of Thais by the victorious
Burmese, and the dispersal of several potential Thai leaders, the situation seemed hopeless. It was a time of darkness and of troubles for the Thai nation. Members of the old royal family of Ayutthaya had died, escaped, or been captured by the Burmese, and many rival claimants for the throne emerged, based in different areas of the country. But out of this national catastrophe emerged yet another savior of the Thai state: the half-Chinese general Phraya Taksin, former governor of Tak. Within a few years this determined warrior had defeated not only all his rivals but also the Burmese invaders and had set himself up as king. Since Ayutthaya had been so completely devastated, King Taksin chose to establish his capital at Thon Buri (across the river from Bangkok). Although a small town, Thon Buri was strategically situated near the mouth of the Chao Phraya River and therefore suitable as a seaport. The Thais needed weapons, and one way of acquiring them was through trade. Besides, foreign trade was also needed to bolster the Thai economy, which had suffered extensively during the war with Burma. Chinese and Chinese-Thai traders helped revive the economy by engaging in maritime trade with neighboring states, with China, and with some European nations. King Taksin's prowess as a general and as an inspirational leader meant that all attempts by the Burmese to reconquer Siam failed. The rallying of the Thai nation during a time of crisis was King Taksin's greatest achievement. However, he was also interested in cultural revival, in literature and the arts. He was deeply religious and studied meditation to an advanced level. The stress and strain of so much fighting and the responsibility of rebuilding a centralized Thai state took their toll on the king. Following an internal political conflict in 1782, King Taksin's fellow general Chao Phraya Chakri was chosen king. King Taksin's achievements have caused posterity to bestow on him the epithet "the Great".
King Rama I and the Reconstruction of the Thai State (1782-1809)
The new king, Phraphutthayotfa Chulalok, or Rama I, was, like King Taksin, a great general. He was also an accomplished statesman, a lawmaker, a poet, and a devout Buddhist. His reign has been called a "reconstruction" of the Thai state and Thai culture, using Ayutthaya as a model but at the same time not slavishly imitating all things Ayutthaya. He was the monarch who established Bangkok as the capital of Thailand and was also the founder of the Royal House of Chakri, of which the ruling monarch, King Bhumibol Adulyadej, is the ninth king. The significance of his reign in Thai history is therefore manifold.
King Rama I was intent on the firm reestablishment of the Buddhist monkhood, allying church to state and purifying the doctrine. The
Tripitaka, or Buddhist scriptures, were re-edited in a definitive text by a grand council of learned men convened by the king in 1788-9. This concern with codification and textual accuracy was also apparent in the collation and editing of laws, both old and new, which resulted in one of the major achievements of his reign: the "Three Seals Code" or Kotmai tra samduang. This too was the work of a panel of experts assembled by the king. King Rama I consistently explained all his reforms and actions in a rational way. This aspect of his reign has been interpreted as a major change in the intellectual outlook of the Thai elite, or a re-orientation of the Thai world-view. The organization of Thai society during the early Bangkok period was not fundamentally different from that of the late Ayutthaya period. Emphasis was still placed on manpower and on an extensive system of political and social patronage. The officials' main duty was still to provide the crown with corvee labor and to provide patronage to the commoners. The Burmese remained a threat to the Thai kingdom during this reign and launched several attacks on Thai territory. King Rama I was ably assisted by his brother and other generals in defeating the Burmese in 1785 and 1786, when the Burmese tried to invade Siam. King Rama I not only drove out these invading armies but also launched a bold counter-attack as retaliation, invading Tavoy in Lower Burma. During this reign, Chiang Mai was added to the Thai kingdom, and the Malay states of Kedah, Perlis, Kelantan, and Trengganu all sent tribute to King Rama I. The recovery of the Thai state's place and prestige in the region was one of King Rama I's major achievements.
The most long-lasting creation of King Rama I was perhaps the city of Bangkok (Rattanakosin). Before 1782, it was just a small trading community, but the first king transformed it into a thriving, cosmopolitan city based on Ayutthaya's example. He had a canal dug to make it an island-city and it contained Mon, Lao, Chinese, and Thai communities similar to Ayutthaya. He also had several Ayutthaya-style monasteries built in and around the city.
King Rama I was, indeed, a great builder-king. He endeavored to model his new palace closely on the Royal Palace at Ayutthaya and in doing so helped create one of Bangkok's enduring glories: the Grand Palace with its resplendent royal chapel, the Temple of the Emerald Buddha. King Rama I also completely rebuilt an old monastery, Wat Photharam, and had it renamed Wat Phra Chetuphon, which became not only an exemplar of classical Thai architecture but also a famous place of learning. The cosmopolitan outlook of the Thais during King Rama I's reign was also reflected in the arts of the period. Both painting and literature during the early Bangkok period showed a keen awareness of other cultures, though Thai traditional forms and conventions were adhered to. King Rama I's reconstruction of the Thai State and Thai culture was so comprehensive that it extended also to literature. The king and his court poets composed new versions of the Ramakian (the Thai version of the Indian Ramayana epic) and the Inao (based on the Javanese Panji story).
King Rama II and His Sons
King Rama I's son Phra Phutthaloetla Naphalai, or Rama II, acceded to the throne peacefully and was fortunate to have inherited the
crown during a time of stability. His reign was especially remarkable for the heights attained by Thai poetry, particularly in the works of the King himself and of Sunthon Phu, one of the court poets. King Rama II was a man gifted with an all-round artistic talent: he had a hand in the carving of Wat Suthat's vihara door-panels, considered to be the supreme masterpiece of Thai woodcarving. At the end of King Rama II's reign, two princes were in contention for the succession. Prince Chetsadabodin was lesser in rank than Prince Mongkut, but he was older, had greater experience of government, and had a wider power base. In a celebrated example of Thai crisis power management, Prince Mongkut (who had just entered the monkhood) remained a monk for the whole of his brother's reign (1824-1851). The avoidance of an open struggle between the princes worked out well for both the country and for the Royal House. While King Rama III ruled firmly and with wisdom, his half-brother was accumulating experience that was to prove invaluable to him during his years as king. The priest-prince Mongkut was able to travel extensively, to see for himself how ordinary Thais lived, and to lay the foundations for a reform of the Buddhist clergy. In the late 1830's he had set up what was to become the Thammayut sect or order (dhammayutika nikaya), an order of monks which became stronger under royal patronage. To this very day the royal family of Thailand is still closely associated with the Thammayut order.
The Growing Challenge of the West (1821-1868)
The major characteristic of Thai history during the 19th and 20th centuries may be summed up by the phrase "the challenge of the West."
The reigns of King Rama II and his two sons, Rama III and Rama IV, marked the first stage in the Thai kingdom's dealings with the West during the Age of Imperialism. During the Ayutthaya period, the Thais had more often than not chosen just how they wanted to deal with foreign countries, European states included. By the 19th century this freedom of choice became more and more constricted. The West had undergone a momentous change during the Industrial Revolution, and western technology and economy had begun to outstrip those of Asian and African nations. This fact was not readily apparent to the Asians of the early 19th century, but it became alarmingly obvious as the century wore on and several erstwhile proud kingdoms fell under the sway of the western powers. The early 19th century was a time when the Napoleonic Wars were preoccupying all the major European powers, but once the British had gained their victory in Europe, they resumed their quest for additional commerce and territory in Asia. King Rama III may have been "conservative" in outlook, striving hard to uphold Buddhism (he built or repaired many monasteries), and refusing to acknowledge the claims of Western powers to increased shares in the Thai trade, but he was above all a shrewd ruler. He was justifiably wary of
Western ambitions in Southeast Asia, but he was tolerant enough to come to an agreement with the British envoy Henry Burney, as well as to allow Christian missionaries to work in the kingdom. One of the men most intellectually stimulated by the Western missionaries was Prince Mongkut. The priest-prince had an inquiring mind, a philosophical nature, and a voracious appetite for new knowledge. He learnt Latin from the French Catholic bishop Jean-Baptiste Pallegoix and English from the American Protestant missionary Jesse Caswell. Prince Mongkut's intellectual interests were wide-ranging; he studied not only the Buddhist Pali scriptures but also Western astronomy, mathematics, science, geography, and culture. His wide knowledge of the West helped him to deal with Britain, France, and other powers when he reigned as King of Siam (1851-1868). King Mongkut was the first Chakri king to embark seriously on reform based on Western models. This did not mean wholesale structural change, since King Mongkut did not wish to undermine his own status and power as a traditional and absolute ruler. He concentrated on the technological and organizational aspects of reform. During this reign, there were road building, canal digging, shipbuilding, a reorganization of the Thai army and administration, and the minting of money to meet the demands of a growing money economy. The King employed Western experts and advisers at the court and in the administration. One of his employees at court was the English governess Anna H. Leonowens, whose books on Siam have resulted in several misunderstandings concerning King Mongkut's character and reign. Far from being the strutting "noble savage" figure portrayed by Hollywood in the musical "The King and I," King Mongkut was a scholarly, conscientious, and humane monarch who ruled at a difficult time in Thai history.
The Reign and Reforms of King Chulalongkorn (1868-1910)
The reforms and foreign policy of King Mongkut were carried on by his son and successor, King Chulalongkorn (Rama V), who came to the throne a frail youth of 16 and died one of Siam's most loved and revered kings, after a remarkable reign of 42 years. Indeed, modern Thailand may be said to be a product of the comprehensive and progressive reforms of his reign, for these touched almost every aspect of Thai life.
King Chulalongkorn faced the Western world with a positive, eager attitude: eager to learn about Western ideas and inventions, positively
working towards Western-style "progress" while at the same time resisting Western rule. He was the first Thai king to travel abroad; he went to the Dutch and British colonial territories in Java, Malaya, Burma, and India, and also made two extended trips to Europe towards the end of his reign. He did not just travel as an observer or tourist but worked hard during his trips to further Thai interests. For instance, during one of his European sojourns he obtained support from Tsar Nicholas II of Russia and the German Kaiser Wilhelm II to put Siam in a stronger international position, no longer dominated by Britain and France. The King also traveled within his own country. He was passionately interested in his subjects' welfare and was intent on the monarchy assuming a more visible role in society. He wanted to see at first-hand how his subjects lived and went outside his palace often, sometimes incognito. His progressive outlook led him, in what was his first official act, to forbid prostration in the royal presence. He considered that such prostration was humiliating to the subject and apt to engender arrogance in the ruler. Influenced by Buddhist morality and Western examples, he gradually abolished both the corvee system and the institution of slavery, a momentous and positive change for Thai society. During this reign, Siam's communications system was revolutionized. Post and telegraph services were introduced and a railway network was built. Such advances enabled the central government to improve its control over outlying provinces. One of the central issues of King Chulalongkorn's reign was the imposition of central authority over the more distant parts of the kingdom. The King initiated extensive reforms of the administration, both in the provinces and in Bangkok. Western-style ministries were set up, replacing older, traditional administrative bodies. The old units, which were remodeled according to the Western pattern, were those of the Interior, of War, of Foreign Affairs, of Finance, of Agriculture, of the Palace, and of Local Administration. Completely new ministries were also created, such as the ministries of Justice, of Public Instruction, and of Public Works. This new ministerial system of government was inaugurated in 1892.
King Chulalongkorn's contribution to education was also to prove of great significance to modern Thailand. During this reign, "public instruction," or education, became more secular than ever before in Thai history. Secular schools were established in the 1880's, aimed at producing the educated men necessary for the smooth functioning of a centralized administration. One of the pressing issues of the reign was the necessity to prove to the Western colonial powers that Siam had become a "modern" and "progressive" country: the problem, however, was that the King and his advisers had very little time in which to do so.
The King was eager to send Thais abroad for their education partly because the country needed skills and knowledge from the West and partly because Thai students abroad could come into direct contact with Europe's elite. Conversely, the King also hired several westerners to act as advisers to the Thai government in various fields, among them the Belgian Rolin-Jacquemyns (a "General Adviser" whose special knowledge was in jurisprudence) and the British Financial Advisers H. Rivett-Carnac and W.J.F. Williamson. Such policies were deemed to be essential for Siam's survival as a sovereign state and its progress to modernity.
Thai foreign policy during King Chulalongkorn's long reign was a series of precarious balancing acts, playing off one Western power against another, and trying to maintain both sovereignty and territorial integrity. Siam's heartland had to be preserved at all costs, even to the extent of conceding to Britain and France some peripheral territories whenever the pressure became too intense.
Even Siam's subtle and supple foreign policy was not always enough to offset the Western powers' appetite for territory. In 1893, Siam ceded all territories on the east ("left") bank of the Mekong River to France, then building up its Indochinese Empire. In 1904, the Thais had to cede all territories on the west bank of the Mekong to France.
The Thai government wanted to put an end to the clauses concerning extra-territoriality, land tax, and trade duties in the treaties concluded with Western countries during King Mongkut's reign. In return for the mitigation of treaty disabilities, the Thais had to cede several territories. For example, in 1907 the Khmer provinces of Siem Reap, Battambang, and Sisophon were ceded to France in return for French withdrawal from the eastern Thai province of Chanthaburi and the abandonment of French extraterritorial claims over their "protected persons" (mostly Asian and therefore not properly French at all). In 1909, Siam gave up its claims to the Malay states of Kedah, Perlis, Kelantan, and Trengganu, all of which became British protectorates. This cession of territory was again agreed to by Siam in return for a lessening of certain treaty disabilities. It was fortunate indeed for the Thai kingdom that Britain and France had agreed in 1896 to keep Siam as a "buffer zone" between British and French territorial possessions in Southeast Asia.
King Chulalongkorn kept Siam an independent sovereign state in spite of all these crises, and all the while he strove to uphold Thai cultural, artistic, and religious values. The Thammayut order of monks founded by King Mongkut thrived during this reign, extending its influence from Bangkok to the provinces.
When King Chulalongkorn died in 1910 a new Siam had come into being. The Thai kingdom was now a more centralized, bureaucratic state partly modeled on Western example. It was also a society without slaves, with a ruling class that was partly westernized in outlook and much more aware of what was going on in Europe and America. Technologically, too, there had been many advances: there were now railroads and trams, postage stamps and telegraphs.
With so many achievements to his credit, and a charisma that was enhanced by his longevity, it was no wonder that the Thai people grieved long and genuinely for King Chulalongkorn when he died. October 23, the date of his death, is still a national holiday, in honour of one of Siam's greatest and most beloved kings.
Nationalism and Constitution (1910-1932)
King Chulalongkorn's son and successor Vajiravudh (Rama VI) was the first Thai king to have been educated abroad, in his case at Harrow School and Oxford University in England. King Vajiravudh (r. 1910-1925) was notable for his accomplishments as a poet, dramatist (in both English and Thai), and polemicist. He was a convinced nationalist and was the first person to try to instill a western-style nationalistic fervor in his subjects. Like his father, he was determined to modernize Siam while still upholding traditional Thai values and royal authority.
King Vajiravudh chose to work on issues and problems that appealed to his personal interests, largely in the literary, educational, and
ideological fields. The King was also keenly interested in military affairs and formed his own paramilitary organization, the "Wild Tiger Corps," to inculcate nationalism and promote national unity. When the First World War broke out, he was determined to join the Allies in their struggle against Germany. His decision in 1917 to send Thai troops to fight in Europe was a felicitous piece of timing: although the Thai expeditionary force did not see much action, Siam's participation in the war on the Allied side earned the country and its king much praise and recognition from the international community. The major achievements of King Vajiravudh, however, lay in the area of education and related legislation. In 1913, he compelled his subjects by law to use surnames and thus be no different from the Western nations. As a measure of his personal commitment to this idea, he himself coined hundreds of family names. In 1921, the King issued a law on compulsory primary education, which was the first step in Siam's path towards universal primary education. Two of present-day Thailand's most prestigious educational establishments were founded by King Vajiravudh: Chulalongkorn University, Siam's first Western-style university, named in honour of King Chulalongkorn, and Vajiravudh College, a boarding school for boys modeled upon the English public school. The death of King Vajiravudh in 1925 meant that Prince Prajadhipok, his younger brother, succeeded to the throne since King Vajiravudh had no male heir. The new king (also known as Rama VII) began his reign at an unenviable juncture of both Thai and world history. The global economic depression of the late 1920's and early 1930's forced the Thai government to adopt economic measures that led to some discontent. As for Siam's internal development, the dilemma about when or whether to institute wide-ranging political reforms became more acute during this reign.
King Prajadhipok was a liberal and a conscientious man. A soldier by training, he nevertheless worked hard in addressing himself to Siam's problems, and his comments on various matters of government and administration in the state papers of this reign reveal him to be an admirable ruler in many ways. He was well aware of the desirability of establishing Siam in the international political community as a country with a "modern" and "liberal" constitutional system of government. The King, however, was still in the process of trying to convince the more conservative of his relatives in the Supreme State Council about the need to promulgate a constitution when matters were taken out of his hands by the bloodless "revolution", or coup d'etat, of 24 June 1932.
The 1932 coup d'etat put an end to absolute monarchy in Siam. Prior to this event, there had been an increased political awareness among the middle-ranking military officers and civilian officials who were to become the major figures in the coup group, which called itself the People's Party. Many of these men had been educated abroad, principally in France and Britain. There had also been a degree of discontent within the military and civilian bureaucracy resulting from the royal government's retrenchment program, which in turn had been dictated by the worldwide economic depression. Government expenditures had been cut by one-third in early 1932, salaries were also cut, and many government officials lost their jobs. All these factors were instrumental in motivating the coup group of 1932 to initiate a new system of government. A formal constitution was promulgated and a National Assembly set up. Siam thus became a constitutional monarchy without any bloodshed or wholesale changes in its society and economy.
After 1932: The Ascendancy of the Military
After June 1932, the country's governments alternated between democratically elected civilian rule and differing degrees of military rule. It was a period of transition, of trying to balance new political ideals and expectations with the pragmatism of power politics.
King Prajadhipok abdicated in March 1935, feeling that he could no longer cooperate with the People's Party in a constructive way. He went into exile in England, where he died in 1941. The new king was Ananda Mahidol, the ten-year-old son of Prince Mahidol of Songkla, one of King Chulalongkorn's sons. The extreme youth of the new king, and his absence from the country while pursuing his studies in Switzerland, left the People's Party with a relatively free hand in shaping the destiny of the kingdom. During the 1940's leading figures of the People's Party dominated Thai politics. Two men in particular stood out: the civilian leader Dr. Pridi Panomyong and the young officer Luang Pibulsongkram (later Field Marshal P. Pibulsongkram). While the country experimented with various forms and degrees of democracy and several constitutions were promulgated, the two groups which held power were, alternately, the military and the civilian bureaucratic elite. Dr. Pridi Panomyong tried to lay down the foundations of a socialistic society with his economic plan of 1933. This plan was considered to be too radical. It proposed to nationalize all land and labor resources and to have most people working for the state as government employees. These ideas were unacceptable to the more conservative elements both within the People's Party and also in the elite as a whole, which did not desire any sweeping structural changes in Thai society. Dr. Pridi was forced into temporary exile, and the National Assembly prorogued.
After 1933, Siam entered a long period of military ascendancy. The army that had been so carefully and systematically built up during the reign of King Chulalongkorn became a formidable institution. During King Vajiravudh's reign, in 1912, some officers had tried unsuccessfully to stage a coup d'etat, wanting to see Siam progress into modernity in terms of politics and government. In 1932 some senior and middle-ranking military officers had formed part of the People's Party. The most dynamic of these military officers was undoubtedly Luang Pibulsongkram, who came into prominence after he had played a crucial role in the defeat of a royalist counter-revolution in 1933. The Thai army was to be Field Marshal P. Pibulsongkram's power base during the next 25 years. The military had one vital advantage over other groups: an organizational strength born of being a strict and tightly-knit hierarchy. Once the military decided to involve itself in politics, it was inevitable that it would prove to be a dominant force.
The first governments of the post-1932 era tried to keep a balance between civilian and military elements so as not to alienate any important group. For instance, in 1934 the exiled Dr. Pridi Panomyong was brought back into the administration as Interior Minister largely because the Prime Minister, General Phraya Phahol Pholphayuhasena, was eager to preserve civilian support for his government. Phraya Phahol also used Luang Pibulsongkram as a minister. During the period 1934-1938 both Dr. Pridi and Luang Pibulsongkram strove hard to consolidate their political power, the former through the Thai intelligentsia and the latter through influence over the army. When Phraya Phahol resigned in 1938 Luang Pibulsongkram succeeded him as Prime Minister, signifying that the military had gained a decisive advantage in the struggle for dominance in Thai politics.
In conformity with his view that a strongly enforced discipline backed by military strength was vital for Thailand's development, he aimed at
focusing nationalism to maximum intensity. He continued this policy until, in 1941, he was forced into collaboration with the occupying Japanese. Dr. Pridi, during the same period, was sympathetic to the Allies and worked with Thailand's underground resistance movement. Towards the end of World War II, Field Marshal Pibul and his collaborative government resigned and Khuang Apaivongse became the Prime Minister in 1944. In the following year King Ananda Mahidol (Rama VIII) returned from Switzerland, and Dr. Pridi became Prime Minister in 1946. But the unexpected death of the young King generated popular dissatisfaction and once again the tide turned. Dr. Pridi was forced into exile and Field Marshal Pibul again assumed power. This time his period of leadership was to be a long one. It would witness the establishment of parliamentary democracy in Thailand and see the emergence of the country's students as a powerful political force whose protests contributed to Field Marshal Pibul's eventual overthrow.
In 1946, Thailand joined the United Nations, recognizing the future importance of the UN’s role in securing world peace. In 1950, shortly after the outbreak of war in Korea, Thailand announced its support of United Nations intervention and promptly sent a 2,000-man fighting force, naval and air force contingents, and several tons of rice.
Bhumibol Adulyadej (born 5 December 1927) is the current King of Thailand. Publicly acclaimed "the Great", he is also known as Rama IX. Having reigned since 9 June 1946, he is the world's longest-serving current head of state and the longest-reigning monarch in Thai history. Bhumibol was crowned King of Thailand on 5 May 1950 at the Royal Palace in Bangkok, where he pledged that he would "reign with righteousness for the benefit and happiness of the Siamese people". Notable elements associated with the coronation included the Bahadrabith Throne beneath the Great White Umbrella of State, and he was presented with the royal regalia and utensils. Economically, the establishment of the People's Republic of China discouraged Thailand's Chinese from sending monthly remittances and encouraged local assimilation, which in turn stimulated local growth and profits. As world demand for food products rose, the countryside began diversifying away from the rice monoculture. And in response to local demand, enterprising producers founded light manufacturing industries on city and town outskirts. In 1957, the premiership changed from Field Marshal Pibul to Field Marshal Sarit Thanarat. Under his vigorous personal leadership, the government apparently satisfied the requirements of the ever-burgeoning population by emphasizing economic development and national security. As a consequence of these decisive actions and policies, Field Marshal Sarit provided the nation with a sound infrastructure which successive governments could easily continue and adapt.
Following the sudden death of Field Marshal Sarit in 1963, Field Marshal Thanom Kittikachorn was appointed Prime Minister. The government led by Field Marshal Thanom not only concentrated on internal social and economic development but also promoted the stability of the region as a whole. Indeed, it was primarily through the initiative of Thailand that the Association of Southeast Asian Nations (ASEAN) was established in 1967 in accordance with the Bangkok Declaration. However, in response to unprecedented political confusion caused by a student uprising in October 1973 Field Marshal Thanom relinquished the premiership in favor of Professor Sanya Dharmasakti.
During the period 1973-1976, the Thai political arena witnessed successive governments headed by Professor Sanya Dharmasakti, M.R. Seni Pramoj, M.R. Kukrit Pramoj, again M.R. Seni Pramoj, and finally Dr. Tanin Kraivixian, each of whom strove to develop the country in his own way.
In 1977, General Kriengsak Chamanand became the Prime Minister. His government maintained political stability, which successfully encouraged foreigners to invest in Thailand.
General Prem Tinsulanonda became premier in 1979 and headed four governments between that time and 1988, when he declined another term. During these years, insurgency-caused conflicts were greatly reduced and many groups of insurgents emerged from their jungle hideouts to peacefully surrender to government officials. Moreover, national stability and successful foreign policies brought about a great many socio-political and economic developments. In 1982 Thailand celebrated the 2nd centennial anniversary of Bangkok.
An elected Prime Minister, Major General Chatichai Choonhavan, took office in August 1988. During his first year he continued the successful economic policies that had brought Thailand to the status of a newly industrialized country and was also active in foreign affairs, particularly those of neighboring Indochina.
In 1991 a military coup d'etat led by General Sunthorn Kongsompong ousted the democratically elected Chatichai cabinet. Mr. Anand Panyarachun, a diplomat and well-known businessman, was appointed as the next Prime Minister. He led his cabinet as an interim government until his term ended in accordance with the constitution. A general election took place and resulted in the appointment of General Suchinda Kraprayoon as the Prime Minister. The cabinet led by General Suchinda Kraprayoon was brought down by a mass demonstration for democracy. After the resignation of General Suchinda Kraprayoon, Mr. Anand Panyarachun was for the second time appointed as the Prime Minister. In his second term, Mr. Anand Panyarachun introduced several liberalization programs to enhance economic growth and the general advancement of the country. He also launched nationwide reforms and revised the country's outmoded laws, giving the business community greater facilitation and assurance.
The Anand II interim cabinet came to an end when Chuan Leekpai won the election in 1992 and served as Prime Minister from 1992 to 1995. In mid-1995, Banharn Silpa-Archa won the election and was Prime Minister until September 1996. He was replaced by Chavalit Yongchaiyuth, who was Prime Minister until November 1997, when Mr. Leekpai returned and again became Prime Minister. | http://www.thailandtravelinformation.info/travel/bangkok-period | 13
15 | The Federal Government of the United States is the national government of the United States, established by the United States Constitution. The federal government has three branches: the legislative, executive, and judicial. Through a system of separation of powers and the system of "checks and balances," each of these branches has some authority to act on its own, some authority to regulate the other two branches, and some of its own authority, in turn, regulated by the other branches. The policies of the federal government have a broad impact on both the domestic and foreign affairs of the United States. In addition, the powers of the federal government as a whole are limited by the Constitution, which, per the Tenth Amendment, reserves all powers not delegated to the national government to the states or to the people.
See main article: United States Congress.
The United States Congress is the legislative branch of the federal government. It is bicameral, comprising the House of Representatives and the Senate. The House of Representatives consists of 435 voting members, each of whom represents a congressional district and serves for a two-year term. In addition to the 435 voting members there are five non-voting members, consisting of four delegates and one resident commissioner. There is one delegate each from the District of Columbia, Guam, Virgin Islands, and American Samoa, and the resident commissioner is from Puerto Rico. House seats are apportioned among the states by population; in contrast, each state has two Senators, regardless of population. There are a total of 100 senators (as there are currently 50 states), who serve six-year terms (one third of the Senate stands for election every two years). Each congressional chamber (House or Senate) has particular exclusive powers: the Senate must give "advice and consent" to many important Presidential appointments, and the House must introduce any bills for the purpose of raising revenue. However, the consent of both chambers is required to make any law. The powers of Congress are limited to those enumerated in the Constitution; all other powers are reserved to the states and the people. The Constitution also includes the "Necessary and Proper Clause", which grants Congress the power to "make all laws which shall be necessary and proper for carrying into execution the foregoing powers." Members of the House and Senate are elected by first-past-the-post voting in every state except Louisiana and Washington, which have runoffs.
The Constitution does not specifically call for the establishment of Congressional committees. As the nation grew, however, so did the need for investigating pending legislation more thoroughly. The 108th Congress (2003-2005) had 19 standing committees in the House and 17 in the Senate, plus four joint permanent committees with members from both houses overseeing the Library of Congress, printing, taxation, and the economy. In addition, each house can name special, or select, committees to study specific problems. Because of an increase in workload, the standing committees have also spawned some 150 subcommittees.
See main article: Article One of the United States Constitution.
The Constitution grants numerous powers to Congress. These include the powers to levy and collect taxes and provide for the common defense and general welfare; to coin money and regulate its value; provide for punishment for counterfeiting; establish post offices and roads, promote the progress of science, create courts inferior to the Supreme Court, define and punish piracies and felonies, declare war, raise and support armies, provide and maintain a navy, make rules for the regulation of land and naval forces, provide for, arm, and discipline the militia, exercise exclusive legislation in the District of Columbia, and make laws necessary and proper to execute the powers of Congress.
See main article: Congressional oversight.
Congressional oversight is intended to prevent waste and fraud, protect civil liberties and individual rights, ensure executive compliance with the law, gather information for making laws and educating the public, and evaluate executive performance.
It applies to cabinet departments, executive agencies, regulatory commissions, and the presidency. Congress's oversight function takes many forms.
All executive power in the federal government is vested in the President of the United States, although power is often delegated to the Cabinet members and other officials. The President and Vice President are elected as 'running mates' for a maximum of two four-year terms by the Electoral College, for which each state, as well as the District of Columbia, is allocated a number of seats based on its representation (or ostensible representation, in the case of D.C.) in both houses of Congress.
See main article: President of the United States.
The Executive branch consists of the President and delegates. The President is both the head of state and government, as well as the military commander-in-chief (only when called into actual military services), chief diplomat and chief of party. The President, according to the Constitution, must "take care that the laws be faithfully executed." The President presides over the executive branch of the federal government, a vast organization numbering about 4 million people, including 1 million active-duty military personnel. The current president is Barack Obama.
The President may sign legislation passed by Congress into law, or may veto it, preventing it from becoming law unless two-thirds of both houses of Congress vote to override the veto. The President may, with the consent of two-thirds of the Senate, make treaties with foreign nations. The President may be impeached by a majority in the House and removed from office by a two-thirds majority in the Senate for "treason, bribery, or other high crimes and misdemeanors." The President may not dissolve Congress or call special elections, but does have the power to pardon, or release, criminals convicted of offenses against the federal government (except in cases of impeachment), enact executive orders, and (with the consent of the Senate) appoint Supreme Court justices and federal judges.
See main article: Vice President of the United States.
The Vice President is the second-highest executive official of the government. First in the United States presidential line of succession, the Vice President becomes President upon the death, resignation, or removal of the President, which has happened nine times in U.S. history. His only other constitutional duty is to serve as President of the Senate and break any tie votes in the Senate.
The relationship between the President and the Congress reflects that between the English monarchy and parliament at the time of the framing of the United States Constitution. While the President can directly propose legislation (for instance, the federal budget), he must rely on supporters in Congress to support and promote his legislative agenda. After identical copies of a particular bill have been approved by a majority of both houses of Congress, the President's signature is required to make these bills law; in this respect, the President has the power to veto congressional legislation. Congress can override a presidential veto with a two-thirds majority vote from both houses. The ultimate power of Congress over the President is that of impeachment or removal of the elected President through a House vote, a Senate trial, and a Senate vote (by two-thirds majority in favor). Nearly every president is threatened with the idea of impeachment, but only two Presidents (Andrew Johnson and Bill Clinton) have ever been successfully impeached, and neither was convicted by the Senate. Richard Nixon was not impeached in connection with the Watergate scandal, although the House Judiciary Committee had approved articles of impeachment against Nixon at the time he resigned.
The President makes around 2,000 executive appointments, including members of the Cabinet and ambassadors, which must be approved by the Senate; the President can also issue executive orders and pardons, and has other Constitutional duties, among them the requirement to give a State of the Union Address to Congress from time to time (usually once a year). (The Constitution does not specify that the State of the Union address be delivered in person; it can be in the form of a letter, as was the practice during most of the 19th century.) Although the President's constitutional role may appear to be constrained, in practice, the office carries enormous prestige that typically eclipses the power of Congress. The Vice President is first in the line of succession, and is the President of the Senate ex officio, with the ability to cast a tie-breaking vote. The members of the President's Cabinet are responsible for administering the various departments of state, including the Department of Defense, the Justice Department, and the State Department. These departments and department heads have considerable regulatory and political power, and it is they who are responsible for executing federal laws and regulations.
See main article: United States Cabinet.
See main article: List of United States federal agencies.
The day-to-day enforcement and administration of federal laws is in the hands of the various federal executive departments, created by Congress to deal with specific areas of national and international affairs. The heads of the 15 departments, chosen by the President and approved with the "advice and consent" of the U.S. Senate, form a council of advisers generally known as the President's "Cabinet". In addition to departments, there are a number of staff organizations grouped into the Executive Office of the President. These include the White House staff, the National Security Council, the Office of Management and Budget, the Council of Economic Advisers, the Office of the U.S. Trade Representative, the Office of National Drug Control Policy and the Office of Science and Technology Policy.
There are also independent agencies such as the United States Postal Service, the National Aeronautics and Space Administration (NASA), the Central Intelligence Agency (CIA), the Environmental Protection Agency, and the United States Agency for International Development. In addition, there are government-owned corporations such as the Federal Deposit Insurance Corporation and the National Railroad Passenger Corporation.
By law, each agency must submit an annual Section 300 report to the President's Office of Management & Budget.
This is part of a larger set of more extensive annual requirements called Circular A-11. Section 300 specifically covers planning, budgeting, acquisition, and management of capital assets. The details on how agencies collect and share information and how they are upgrading and improving their information technology decisions are becoming increasingly important. Within Section 300 there is a special exhibit called Exhibit 53, which gives extensive details on agency information technology investments. These investments make up most of the information technology investments from the annual budgets. For the fiscal year 2008 budget, that spending exceeded $66.4 billion.
See main article: United States federal courts.
The Supreme Court is the highest court in the federal court system. The court deals with matters pertaining to the federal government, disputes between states, and interpretation of the United States Constitution, and can declare legislation or executive action made at any level of the government as unconstitutional, nullifying the law and creating precedent for future law and decisions. Below the Supreme Court are the courts of appeals, and below them in turn are the district courts, which are the general trial courts for federal law.
Separate from, but not entirely independent of, this federal court system are the individual court systems of each state, each dealing with its own laws and having its own judicial rules and procedures.
The supreme court of each state is the final authority on the interpretation of that state's laws and constitution. A case may be appealed from a state court to the U.S. Supreme Court only if there is a federal question (an issue arising under the U.S. Constitution, or laws/treaties of the United States). The relationship between federal and state laws is quite complex; together, they form the U.S. law.
The federal judiciary consists of the U.S. Supreme Court, whose justices are appointed for life by the President and confirmed by the Senate, and various "lower" or "inferior courts," among which are the courts of appeals and district courts.
The first Congress divided the nation into judicial districts and created federal courts for each district. From that beginning has evolved the present structure: the Supreme Court, 13 courts of appeals, 94 district courts, and two courts of special jurisdiction. Congress retains the power to create and abolish federal courts, as well as to determine the number of judges in the federal judiciary system. It cannot, however, abolish the Supreme Court.
There are three levels of federal courts with general jurisdiction, meaning that these courts handle criminal cases and civil law suits between individuals. The other courts, such as the bankruptcy courts and the tax court, are specialized courts handling only certain kinds of cases. The bankruptcy courts are branches of the district courts, but technically are not considered part of the "Article III" judiciary because their judges do not have lifetime tenure. Similarly, the tax court is not an Article III court.
The U.S. district courts are the "trial courts" where cases are filed and decided. The United States courts of appeals are "appellate courts" that hear appeals of cases decided by the district courts, and some direct appeals from administrative agencies. The Supreme Court hears appeals from the decisions of the courts of appeals or state supreme courts (on constitutional matters), as well as having original jurisdiction over a very small number of cases.
The judicial power extends to cases arising under the Constitution, an Act of Congress, or a U.S. treaty; cases affecting ambassadors, ministers, and consuls of foreign countries in the U.S.; controversies in which the U.S. government is a party; controversies between states (or their citizens) and foreign nations (or their citizens or subjects); and bankruptcy cases. The Eleventh Amendment removed from federal jurisdiction cases in which citizens of one state were the plaintiffs and the government of another state was the defendant. It did not disturb federal jurisdiction in cases in which a state government is a plaintiff and a citizen of another state the defendant.
The power of the federal courts extends both to civil actions for damages and other redress, and to criminal cases arising under federal law. Article III has resulted in a complex set of relationships between state and federal courts. Ordinarily, federal courts do not hear cases arising under the laws of individual states. However, some cases over which federal courts have jurisdiction may also be heard and decided by state courts. Both court systems thus have exclusive jurisdiction in some areas and concurrent jurisdiction in others.
The Constitution safeguards judicial independence by providing that federal judges shall hold office "during good behaviour"; in practice, this usually means they serve until they die, retire, or resign. A judge who commits an offence whilst in office may be impeached in the same way as the President or other officials of the federal government. U.S. judges are appointed by the President, subject to confirmation by the Senate. Another Constitutional provision prohibits Congress from reducing the pay of any judge. Congress is able to set a lower salary for all future judges that take office after the reduction, but may not decrease the rate of pay for judges already in office.
See main article: Elections in the United States.
Suffrage, commonly known as the ability to vote, has changed significantly over time. In the early years of the United States, voting was considered a matter for state governments, and was commonly restricted to white men who owned land. Direct elections were mostly held only for the U.S. House of Representatives and state legislatures, although what specific bodies were elected by the electorate varied from state to state. Under this original system, both senators representing each state in the U.S. Senate were chosen by a majority vote of the state legislature. Since the ratification of the Seventeenth Amendment in 1913, members of both houses of Congress have been directly elected.
Today, partially due to the Twenty-sixth Amendment, U.S. citizens have almost universal suffrage from the age of 18, regardless of race, gender, or wealth, and both Houses of Congress are directly elected. The only exception to this is the disenfranchisement of convicted felons, and in some states former felons as well.
Currently, the national representation of territories and the federal district of Washington, D.C., in Congress is limited: residents of the District of Columbia are subject to federal laws and federal taxes, but their only congressional representative is a non-voting delegate. Residents of U.S. territories have varying rights; for example, residents of Puerto Rico do not pay federal taxes (on local income), but cannot vote for President and have no voting representatives in Congress.
See main article: U.S. state and Local government in the United States. The state governments tend to have the greatest influence over most Americans' daily lives because they handle the issues most relevant for an individual in that state. State governments also make budget cuts whenever the economy falters, and those cuts are felt most directly by the public they serve.
Each state has its own written constitution, government, and code of laws. There are sometimes great differences in law and procedure between individual states, concerning issues such as property, crime, health, and education. The highest elected official of each state is the Governor. Each state also has an elected state legislature (bicameralism is a feature of every state except Nebraska), whose members represent the voters of the state. Each state maintains its own state court system. In some states, supreme and lower court justices are elected by the people; in others, they are appointed, as they are in the federal system.
As a result of the Supreme Court case Worcester v. Georgia, Indian tribes are considered "domestic dependent nations" that operate as sovereign governments subject to federal authority but, generally and where possible, outside of the influence of state governments. Hundreds of laws, executive orders, and court cases have modified the governmental status of tribes vis-à-vis individual states, but the two have continued to be recognised as separate bodies. Tribal capacity to operate robust governments varies, from a simple council used to manage all aspects of tribal affairs, to large and complex bureaucracies with several branches of government. Tribes are empowered to form their own governments, with power resting in elected tribal councils, elected tribal chairpersons, or religiously appointed leaders (as is the case with pueblos). Tribal citizenship (and voting rights) is generally restricted to individuals of native descent, but tribes are free to set whatever membership requirements they wish.
The institutions that are responsible for local government in states are typically town, city, or county boards, water management districts, fire management districts, library districts, and other similar governmental units which make laws that affect their particular area. These laws concern issues such as traffic, the sale of alcohol, and the keeping of animals. The highest elected official of a town or city is usually the mayor. In New England, towns operate in a direct democratic fashion, and in some states, such as Rhode Island and Connecticut, counties have little or no power, existing only as geographic distinctions. In other areas, county governments have more power, such as to collect taxes and maintain law enforcement agencies. | http://everything.explained.at/Federal_government_of_the_United_States/ | 13 |
16 | To begin to understand what life is like in a country- to
know, for example, how many of its inhabitants are poor-
it is not enough to know that country's per capita income.
The number of poor people in a country and the average quality
of life also depend on how equallyor unequally-income
In Brazil and Hungary, for example, GNP per capita levels
are quite comparable, but the incidence of poverty in Brazil
is much higher. This observation can be explained with the
help of Figure 5.1, which shows the
percentages of national income received by equal percentiles
of individuals or households ranked by their income levels.
In Hungary the richest 20 percent (quintile) of the population
receives about 4 times more than the poorest quintile, while
in Brazil the richest quintile receives more than 30 times
more than the poorest quintile.
Compare these ratios to an average of about 6:1 in high-income
countries. In the developing world income inequality, measured
the same way, varies by region: it is 4:1 in South Asia,
6:1 in East Asia and the Middle East and North Africa, 10:1
in Sub-Saharan Africa, and 12:1 in Latin America.
To measure income inequality in a country and compare this
phenomenon among countries more accurately, economists use
Lorenz curves and Gini indexes. A Lorenz curve plots the
cumulative percentages of total income received against
the cumulative percentages of recipients, starting with
the poorest individual or household (Figure
5.2). How is it constructed?
First, economists rank all the individuals or households
in a country by their income level, from the poorest to
the richest. Then all of these individuals or households
are divided into 5 groups (20 percent in each) or 10 groups
(10 percent in each) and the income of each group is calculated
and expressed as a percentage of GDP (see Figure
5.1). Next economists plot the shares of GDP received
by these groups cumulatively- that is, plotting the income
share of the poorest quintile against 20 percent of population,
the income share of the poorest quintile and the next (fourth)
quintile against 40 percent of population, and so on, until
they plot the aggregate share of all five quintiles (which
equals 100 percent) against 100 percent of the population.
After connecting all the points on the chart- starting with
the 0 percent share of income received by 0 percent of the
population- they get the Lorenz curve for this country.
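As a concrete illustration of this construction, the short Python sketch below builds the cumulative points of a Lorenz curve from a set of hypothetical quintile income shares (the shares and the function name are invented for illustration; they are not taken from Figure 5.1).

    # A minimal sketch: cumulative points of a Lorenz curve from
    # hypothetical quintile income shares, poorest quintile first.

    def lorenz_points(quintile_shares):
        """Return (population %, cumulative income %) points of a Lorenz curve."""
        points = [(0.0, 0.0)]                 # the curve always starts at (0, 0)
        cumulative_income = 0.0
        for i, share in enumerate(quintile_shares, start=1):
            cumulative_income += share
            points.append((20.0 * i, cumulative_income))  # each quintile adds 20% of people
        return points

    # Invented shares for a high-inequality country: the poorest fifth
    # receives 2 percent of national income, the richest fifth 63 percent.
    shares = [2.0, 6.0, 11.0, 18.0, 63.0]
    for population, income in lorenz_points(shares):
        print(f"{population:5.1f}% of population receives {income:5.1f}% of income")

Connecting these points, starting at (0, 0), traces the Lorenz curve; plotting them against the straight 45-degree line of absolute equality reproduces the kind of picture described for Figure 5.2.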
The deeper a country's Lorenz curve, the less equal its
income distribution. For comparison, see on Figure 5.2 the
"curve" of absolutely equal income distribution. Under such
a distribution pattern, the first 20 percent of the population
would receive exactly 20 percent of the income, 40 percent
of the population would receive 40 percent of the income,
and so on. The corresponding Lorenz curve would therefore
be a straight line going from the lower left corner of the
figure (x = 0 percent, y = 0 percent) to the upper right
corner (x = 100 percent, y = 100 percent). Figure 5.2 shows
that Brazil's Lorenz curve deviates from the hypothetical
line of absolute equality much further than that of Hungary.
This means that of these two countries, Brazil has the higher level of income inequality.
A Gini index is even more convenient than a Lorenz curve
when the task is to compare income inequality among many
countries. The index is calculated as the area between a
Lorenz curve and the line of absolute equality, expressed
as a percentage of the triangle under the line (see the
two shaded areas on Figure 5.2). Thus
a Gini index of 0 percent represents perfect equality- the
Lorenz curve coincides with the straight line of absolute
equality. A Gini index of 100 implies perfect inequality-
the Lorenz curve coincides with the x axis and goes straight
upward against the last entry (that is, the richest individual
or household; see the thick dotted line on Figure
5.2). In reality, neither perfect equality, nor perfect
inequality is possible. Thus Gini indexes are always greater
than 0 percent but less than 100 percent (see Figure
5.3 and Data Table 1).
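Continuing the illustration above, a Gini index can be approximated from the same kind of quintile data. The Python sketch below (again with invented shares, and only five data points, so it is a coarse approximation of the true index) measures the area between the Lorenz curve and the equality line with the trapezoid rule and expresses it as a percentage of the triangle under that line.

    # A minimal sketch: approximate Gini index (in percent) from quintile
    # income shares, poorest first, expressed as percentages summing to 100.

    def gini_from_quintiles(quintile_shares):
        # Cumulative points of the Lorenz curve, with both axes as fractions of 1.
        points = [(0.0, 0.0)]
        cumulative = 0.0
        for i, share in enumerate(quintile_shares, start=1):
            cumulative += share
            points.append((0.2 * i, cumulative / 100.0))

        # Area under the Lorenz curve, by the trapezoid rule.
        area_under_curve = 0.0
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            area_under_curve += (x1 - x0) * (y0 + y1) / 2.0

        # The triangle under the line of absolute equality has area 0.5; the Gini
        # index is the gap between the two areas, as a share of that triangle.
        return 100.0 * (0.5 - area_under_curve) / 0.5

    print(gini_from_quintiles([2.0, 6.0, 11.0, 18.0, 63.0]))   # about 54: high inequality
    print(gini_from_quintiles([20.0] * 5))                     # 0.0: perfect equality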
Is a less equal distribution of income good or bad for a
country's development? There are different opinions about
the best patterns of distribution- about whether, for example,
the Gini index should be closer to 25 percent (as in Sweden)
or to 40 percent (as in the United States). Consider the following arguments.
An excessively equal income distribution can be bad for
economic efficiency. Take, for example, the experience of
socialist countries, where deliberately low inequality (with
no private profits and minimal differences in wages and
salaries) deprived people of the incentives needed for their
active participation in economic activities- for diligent
work and vigorous entrepreneurship. Among the consequences
of socialist equalization of incomes were poor discipline
and low initiative among workers, poor quality and limited
selection of goods and services, slow technical progress,
and eventually, slower economic growth leading to more poverty.
On the other hand, excessive inequality adversely affects
people's quality of life, leading to a higher incidence
of poverty and so impeding progress in health and education
and contributing to crime. Think also about the following
effects of high income inequality on some major factors of development:
- High inequality threatens a country's political stability
because more people are dissatisfied with their economic
status, which makes it harder to reach political consensus
among population groups with higher and lower incomes.
Political instability increases the risks of investing
in a country and so significantly undermines its development
potential (see Chapter 6).
- High inequality limits the use of important market instruments
such as changes in prices and fines. For example, higher
rates for electricity and hot water might promote energy
efficiency (see Chapter 15),
but in the face of serious inequality, governments introducing
even slightly higher rates risk causing extreme deprivation
among the poorest citizens.
- High inequality may discourage certain basic norms of
behavior among economic agents (individuals or enterprises)
such as trust and commitment. Higher business risks and
higher costs of contract enforcement impede economic growth
by slowing down all economic transactions.
These are among the reasons some international experts
recommend decreasing income inequality in developing countries
to help accelerate economic and human development. | http://www.worldbank.org/depweb/beyond/global/chapter5.html | 13 |
150 | In calculus, a branch of mathematics, the derivative is a measure of how a function changes as its input changes. Loosely speaking, a derivative can be thought of as how much one quantity is changing in response to changes in some other quantity; for example, the derivative of the position of a moving object with respect to time is the object's instantaneous velocity.
The derivative of a function at a chosen input value describes the best linear approximation of the function near that input value. Informally, the derivative is the ratio of the infinitesimal change of the output over the infinitesimal change of the input producing that change of output. For a real-valued function of a single real variable, the derivative at a point equals the slope of the tangent line to the graph of the function at that point. In higher dimensions, the derivative of a function at a point is a linear transformation called the linearization. A closely related notion is the differential of a function.
The process of finding a derivative is called differentiation. The reverse process is called antidifferentiation. The fundamental theorem of calculus states that antidifferentiation is the same as integration. Differentiation and integration constitute the two fundamental operations in single-variable calculus.
Differentiation and the derivative
Differentiation is a method to compute the rate at which a dependent output y changes with respect to the change in the independent input x. This rate of change is called the derivative of y with respect to x. In more precise language, the dependence of y upon x means that y is a function of x. This functional relationship is often denoted y = f(x), where f denotes the function. If x and y are real numbers, and if the graph of y is plotted against x, the derivative measures the slope of this graph at each point.
The simplest case is when y is a linear function of x, meaning that the graph of y against x is a straight line. In this case, y = f(x) = m x + b, for real numbers m and b, and the slope m is given by
- m = Δy / Δx = (change in y) / (change in x),
where the symbol Δ (the uppercase form of the Greek letter Delta) is an abbreviation for "change in." This formula is true because
- y + Δy = f(x+ Δx) = m (x + Δx) + b = m x + b + m Δx = y + mΔx.
It follows that Δy = m Δx.
This gives an exact value for the slope of a straight line. If the function f is not linear (i.e. its graph is not a straight line), however, then the change in y divided by the change in x varies: differentiation is a method to find an exact value for this rate of change at any given value of x.
In Leibniz's notation, an infinitesimal change in x is denoted by dx, and the derivative of y with respect to x is written dy/dx, suggesting the ratio of two infinitesimal quantities. (The expression dy/dx is read as "the derivative of y with respect to x", "d y by d x", or "d y over d x". The oral form "d y d x" is often used conversationally, although it may lead to confusion.)
Definition via difference quotients
Let f be a real-valued function. In classical geometry, the tangent line to the graph of the function f at a real number a was the unique line through the point (a, f(a)) that did not meet the graph of f transversally, meaning that the line did not pass straight through the graph. The derivative of y with respect to x at a is, geometrically, the slope of the tangent line to the graph of f at a. The slope of the tangent line is very close to the slope of the line through (a, f(a)) and a nearby point on the graph, for example (a + h, f(a + h)). These lines are called secant lines. A value of h close to zero gives a good approximation to the slope of the tangent line, and smaller values (in absolute value) of h will, in general, give better approximations. The slope m of the secant line is the difference between the y values of these points divided by the difference between the x values, that is,
- m = (f(a + h) - f(a)) / ((a + h) - a) = (f(a + h) - f(a)) / h.
This expression is Newton's difference quotient. The derivative is the value of the difference quotient as the secant lines approach the tangent line. Formally, the derivative of the function f at a is the limit
- f′(a) = lim_{h→0} (f(a + h) - f(a)) / h
of the difference quotient as h approaches zero, if this limit exists. If the limit exists, then f is differentiable at a. Here f′ (a) is one of several common notations for the derivative (see below).
Equivalently, the derivative satisfies the property that
- lim_{h→0} (f(a + h) - f(a) - f′(a) h) / h = 0,
which has the intuitive interpretation (see Figure 1) that the tangent line to f at a gives the best linear approximation
- f(a + h) ≈ f(a) + f′(a) h
to f near a (i.e., for small h). This interpretation is the easiest to generalize to other settings (see below).
Substituting 0 for h in the difference quotient causes division by zero, so the slope of the tangent line cannot be found directly using this method. Instead, define Q(h) to be the difference quotient as a function of h:
- Q(h) = (f(a + h) - f(a)) / h.
Q(h) is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). If f is a continuous function, meaning that its graph is an unbroken curve with no gaps, then Q is a continuous function away from h = 0. If the limit exists, meaning that there is a way of choosing a value for Q(0) that makes the graph of Q a continuous function, then the function f is differentiable at a, and its derivative at a equals Q(0).
In practice, the existence of a continuous extension of the difference quotient Q(h) to h = 0 is shown by modifying the numerator to cancel h in the denominator. Such manipulations can make the limiting value of Q for small h clear even though Q is still not defined at h = 0. This process can be long and tedious for complicated functions, and many shortcuts are commonly used to simplify the process.
The squaring function f(x) = x² is differentiable at x = 3, and its derivative there is 6. This result is established by calculating the limit as h approaches zero of the difference quotient of f(3):
- f′(3) = lim_{h→0} ((3 + h)² - 3²) / h = lim_{h→0} (9 + 6h + h² - 9) / h = lim_{h→0} (6h + h²) / h = lim_{h→0} (6 + h).
The last expression shows that the difference quotient equals 6 + h when h ≠ 0 and is undefined when h = 0, because of the definition of the difference quotient. However, the definition of the limit says the difference quotient does not need to be defined when h = 0. The limit is the result of letting h go to zero, meaning it is the value that 6 + h tends to as h becomes very small:
- lim_{h→0} (6 + h) = 6.
Hence the slope of the graph of the squaring function at the point (3, 9) is 6, and so its derivative at x = 3 is f′(3) = 6.
More generally, a similar computation shows that the derivative of the squaring function at x = a is f′(a) = 2a.
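The same limit can also be checked numerically. The short Python sketch below (an illustration added here, not part of the original text) evaluates Newton's difference quotient for the squaring function at a = 3 for smaller and smaller values of h; the printed values approach 6, in agreement with f′(3) = 6.

    # Difference quotient (f(a + h) - f(a)) / h for the squaring function at a = 3.
    def difference_quotient(f, a, h):
        return (f(a + h) - f(a)) / h

    square = lambda x: x ** 2
    for h in [1.0, 0.1, 0.01, 0.001]:
        print(h, difference_quotient(square, 3.0, h))   # 7.0, 6.1, 6.01, 6.001 (approximately)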
Continuity and differentiability
If y = f(x) is differentiable at a, then f must also be continuous at a. As an example, choose a point a and let f be the step function that returns a value, say 1, for all x less than a, and returns a different value, say 10, for all x greater than or equal to a. f cannot have a derivative at a. If h is negative, then a + h is on the low part of the step, so the secant line from a to a + h is very steep, and as h tends to zero the slope tends to infinity. If h is positive, then a + h is on the high part of the step, so the secant line from a to a + h has slope zero. Consequently the secant lines do not approach any single slope, so the limit of the difference quotient does not exist.
However, even if a function is continuous at a point, it may not be differentiable there. For example, the absolute value function y = |x| is continuous at x = 0, but it is not differentiable there. If h is positive, then the slope of the secant line from 0 to h is one, whereas if h is negative, then the slope of the secant line from 0 to h is negative one. This can be seen graphically as a "kink" or a "cusp" in the graph at x = 0. Even a function with a smooth graph is not differentiable at a point where its tangent is vertical: For instance, the function y = x^(1/3) is not differentiable at x = 0.
Most functions that occur in practice have derivatives at all points or at almost every point. Early in the history of calculus, many mathematicians assumed that a continuous function was differentiable at most points. Under mild conditions, for example if the function is a monotone function or a Lipschitz function, this is true. However, in 1872 Weierstrass found the first example of a function that is continuous everywhere but differentiable nowhere. This example is now known as the Weierstrass function. In 1931, Stefan Banach proved that the set of functions that have a derivative at some point is a meager set in the space of all continuous functions. Informally, this means that hardly any continuous functions have a derivative at even one point.
The derivative as a function
Let f be a function that has a derivative at every point a in the domain of f. Because every point a has a derivative, there is a function that sends the point a to the derivative of f at a. This function is written f′(x) and is called the derivative function or the derivative of f. The derivative of f collects all the derivatives of f at all the points in the domain of f.
Sometimes f has a derivative at most, but not all, points of its domain. The function whose value at a equals f′(a) whenever f′(a) is defined and elsewhere is undefined is also called the derivative of f. It is still a function, but its domain is strictly smaller than the domain of f.
Using this idea, differentiation becomes a function of functions: The derivative is an operator whose domain is the set of all functions that have derivatives at every point of their domain and whose range is a set of functions. If we denote this operator by D, then D(f) is the function f′(x). Since D(f) is a function, it can be evaluated at a point a. By the definition of the derivative function, D(f)(a) = f′(a).
For comparison, consider the doubling function f(x) = 2x; f is a real-valued function of a real number, meaning that it takes numbers as inputs and has numbers as outputs:
The operator D, however, is not defined on individual numbers. It is only defined on functions:
Because the output of D is a function, the output of D can be evaluated at a point. For instance, when D is applied to the squaring function,
D outputs the doubling function,
which we named f(x). This output function can then be evaluated to get f(1) = 2, f(2) = 4, and so on.
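The view of D as an operator that maps functions to functions can be mimicked in code. The Python sketch below is only an approximation of the idea: it uses a symmetric difference quotient with a small fixed h, so it returns a numerical estimate of the derivative function rather than an exact derivative, and the names D and square are chosen here purely for illustration.

    # D takes a function and returns (an approximation of) its derivative function.
    def D(f, h=1e-6):
        return lambda x: (f(x + h) - f(x - h)) / (2 * h)

    square = lambda x: x ** 2
    double = D(square)                             # approximately the doubling function x -> 2x
    print(double(1.0), double(2.0), double(3.0))   # close to 2.0, 4.0, 6.0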
Higher derivatives
Let f be a differentiable function, and let f′(x) be its derivative. The derivative of f′(x) (if it has one) is written f′′(x) and is called the second derivative of f. Similarly, the derivative of a second derivative, if it exists, is written f′′′(x) and is called the third derivative of f. These repeated derivatives are called higher-order derivatives.
If x(t) represents the position of an object at time t, then the higher-order derivatives of x have physical interpretations. The second derivative of x is the derivative of x′(t), the velocity, and by definition this is the object's acceleration. The third derivative of x is defined to be the jerk, and the fourth derivative is defined to be the jounce.
A function f need not have a derivative, for example, if it is not continuous. Similarly, even if f does have a derivative, it may not have a second derivative. For example, let
- f(x) = x |x|, that is, f(x) = x² for x ≥ 0 and f(x) = -x² for x < 0.
Calculation shows that f is a differentiable function whose derivative is
- f′(x) = 2 |x|.
f′(x) is twice the absolute value function, and it does not have a derivative at zero. Similar examples show that a function can have k derivatives for any non-negative integer k but no (k + 1)-order derivative. A function that has k successive derivatives is called k times differentiable. If in addition the kth derivative is continuous, then the function is said to be of differentiability class Ck. (This is a stronger condition than having k derivatives. For an example, see differentiability class.) A function that has infinitely many derivatives is called infinitely differentiable or smooth.
On the real line, every polynomial function is infinitely differentiable. By standard differentiation rules, if a polynomial of degree n is differentiated n times, then it becomes a constant function. All of its subsequent derivatives are identically zero. In particular, they exist, so polynomials are smooth functions.
The derivatives of a function f at a point x provide polynomial approximations to that function near x. For example, if f is twice differentiable, then
- f(x + h) ≈ f(x) + f′(x) h + (1/2) f′′(x) h²
in the sense that
- lim_{h→0} (f(x + h) - f(x) - f′(x) h - (1/2) f′′(x) h²) / h² = 0.
If f is infinitely differentiable, then this is the beginning of the Taylor series for f evaluated at x+h around x.
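As a numerical illustration of this second-order approximation (added here for clarity, with sine chosen arbitrarily as the test function), the Python sketch below compares sin(x + h) with the polynomial built from the first and second derivatives of sine at x; the error shrinks roughly like h³, much faster than h itself.

    import math

    # Second-order approximation f(x + h) ~ f(x) + f'(x) h + (1/2) f''(x) h^2
    # for f = sin, whose first and second derivatives at x are cos(x) and -sin(x).
    def second_order_approx(x, h):
        return math.sin(x) + math.cos(x) * h - 0.5 * math.sin(x) * h ** 2

    x = 1.0
    for h in [0.1, 0.01, 0.001]:
        error = abs(math.sin(x + h) - second_order_approx(x, h))
        print(h, error)   # the error falls by roughly a factor of 1000 at each step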
Inflection point
A point where the second derivative of a function changes sign is called an inflection point. At an inflection point, the second derivative may be zero, as in the case of the inflection point x = 0 of the function y = x³, or it may fail to exist, as in the case of the inflection point x = 0 of the function y = x^(1/3). At an inflection point, a function switches from being a convex function to being a concave function or vice versa.
Notations for differentiation
Leibniz's notation
The notation for derivatives introduced by Gottfried Leibniz is one of the earliest. It is still commonly used when the equation y = f(x) is viewed as a functional relationship between dependent and independent variables. Then the first derivative is denoted by
and was once thought of as an infinitesimal quotient. Higher derivatives are expressed using the notation
for the nth derivative of y = f(x) (with respect to x). These are abbreviations for multiple applications of the derivative operator. For example,
With Leibniz's notation, we can write the derivative of y at the point x = a in two different ways:
Leibniz's notation allows one to specify the variable for differentiation (in the denominator). This is especially relevant for partial differentiation. It also makes the chain rule easy to remember:
- dy/dx = (dy/du) (du/dx).
Lagrange's notation
Sometimes referred to as prime notation, one of the most common modern notations for differentiation is due to Joseph-Louis Lagrange and uses the prime mark, so that the derivative of a function f(x) is denoted f′(x) or simply f′. Similarly, the second and third derivatives are denoted
- f′′ and f′′′.
To denote the number of derivatives beyond this point, some authors use Roman numerals in superscript, whereas others place the number in parentheses:
The latter notation generalizes to yield the notation f (n) for the nth derivative of f — this notation is most useful when we wish to talk about the derivative as being a function itself, as in this case the Leibniz notation can become cumbersome.
Newton's notation
Newton's notation for differentiation, also called the dot notation, places a dot over the function name to represent a time derivative. If y = f(t), then
denote, respectively, the first and second derivatives of y with respect to t. This notation is used exclusively for time derivatives, meaning that the independent variable of the function represents time. It is very common in physics and in mathematical disciplines connected with physics such as differential equations. While the notation becomes unmanageable for high-order derivatives, in practice only very few derivatives are needed.
Euler's notation
Euler's notation uses a differential operator D, which is applied to a function f to give the first derivative Df; the nth derivative is denoted Dⁿf. If y = f(x) is a dependent variable, then often the subscript x is attached to the D to clarify the independent variable x. Euler's notation is then written
Dx y or Dx f(x),
although this subscript is often omitted when the variable x is understood, for instance when this is the only variable present in the expression.
Euler's notation is useful for stating and solving linear differential equations.
Computing the derivative
The derivative of a function can, in principle, be computed from the definition by considering the difference quotient, and computing its limit. In practice, once the derivatives of a few simple functions are known, the derivatives of other functions are more easily computed using rules for obtaining derivatives of more complicated functions from simpler ones.
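A minimal sketch of the "compute from the definition" route: a central difference quotient gives a numerical estimate of f′(x). The test function and step size are arbitrary choices for illustration.

```python
def derivative(f, x, h=1e-6):
    # Central difference quotient: (f(x + h) - f(x - h)) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

if __name__ == "__main__":
    f = lambda t: t**3          # d/dt t^3 = 3 t^2
    print(derivative(f, 2.0))   # close to 12.0
```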
Derivatives of elementary functions
Most derivative computations eventually require taking the derivative of some common functions. The following incomplete list gives some of the most frequently used functions of a single real variable and their derivatives.
- Derivatives of powers: if f(x) = x^r, where r is any real number, then
f′(x) = r·x^(r − 1),
wherever this function is defined. For example, if f(x) = x^(1/4), then
f′(x) = (1/4)·x^(−3/4),
and the derivative function is defined only for positive x, not for x = 0. When r = 0, this rule implies that f′(x) is zero for x ≠ 0, which is almost the constant rule (stated below).
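A symbolic check of the power rule with SymPy, covering the general rule and the x^(1/4) example as written above; the exponent choices are otherwise arbitrary.

```python
import sympy as sp

x, r = sp.symbols('x r', positive=True)

# General power rule: d/dx x**r = r * x**(r - 1)
print(sp.simplify(sp.diff(x**r, x) - r*x**(r - 1)))  # prints 0

# The specific example f(x) = x**(1/4)
print(sp.diff(x**sp.Rational(1, 4), x))  # equals (1/4) * x**(-3/4)
```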
Rules for finding the derivative
In many cases, complicated limit calculations by direct application of Newton's difference quotient can be avoided using differentiation rules. Some of the most basic rules are the following; a symbolic check of these rules appears after the list.
- Constant rule: if f(x) is constant, then f′(x) = 0.
- Sum rule: (αf + βg)′ = αf′ + βg′ for all functions f and g and all real numbers α and β.
- Product rule: (fg)′ = f′·g + f·g′ for all functions f and g. By extension, this means that the derivative of a constant times a function is the constant times the derivative of the function: (c·f)′ = c·f′.
- Quotient rule: (f/g)′ = (f′·g − f·g′)/g² for all functions f and g at all inputs where g ≠ 0.
- Chain rule: If f(x) = h(g(x)), then f′(x) = h′(g(x))·g′(x).
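A quick symbolic check of these rules with SymPy; f and g are left abstract, and the chain rule is checked on an arbitrary concrete composition. This is an added illustration, not part of the standard statement of the rules.

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
f = sp.Function('f')(x)
g = sp.Function('g')(x)

checks = {
    "sum":      sp.diff(a*f + b*g, x) - (a*sp.diff(f, x) + b*sp.diff(g, x)),
    "product":  sp.diff(f*g, x) - (sp.diff(f, x)*g + f*sp.diff(g, x)),
    "quotient": sp.diff(f/g, x) - (sp.diff(f, x)*g - f*sp.diff(g, x))/g**2,
}
for name, expr in checks.items():
    print(name, sp.simplify(expr))   # each difference simplifies to 0

# Chain rule on a concrete composition h(g(x)) with h = sin, g = x**2
print(sp.diff(sp.sin(x**2), x))      # 2*x*cos(x**2)
```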
Example computation
The derivative of
f(x) = x⁴ + sin(x²) − ln(x)·eˣ + 7
is
f′(x) = 4x³ + 2x·cos(x²) − (1/x)·eˣ − ln(x)·eˣ + 0.
Here the second term was computed using the chain rule and the third using the product rule. The known derivatives of the elementary functions x², x⁴, sin(x), ln(x) and exp(x) = eˣ, as well as the constant 7, were also used.
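The same derivative can be verified symbolically; this is only a check of the worked example as written above.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = x**4 + sp.sin(x**2) - sp.ln(x)*sp.exp(x) + 7

print(sp.diff(f, x))
# Matches 4x**3 + 2x*cos(x**2) - exp(x)/x - exp(x)*log(x); term order may differ.
```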
Derivatives in higher dimensions
Derivatives of vector valued functions
A vector-valued function y(t) of a real variable sends real numbers to vectors in some vector space Rn. A vector-valued function can be split up into its coordinate functions y1(t), y2(t), …, yn(t), meaning that y(t) = (y1(t), ..., yn(t)). This includes, for example, parametric curves in R2 or R3. The coordinate functions are real valued functions, so the above definition of derivative applies to them. The derivative of y(t) is defined to be the vector, called the tangent vector, whose coordinates are the derivatives of the coordinate functions. That is,
y′(t) = lim(h→0) [y(t + h) − y(t)] / h,
if the limit exists. The subtraction in the numerator is subtraction of vectors, not scalars. If the derivative of y exists for every value of t, then y′ is another vector valued function.
If e1, …, en is the standard basis for Rn, then y(t) can also be written as y1(t)e1 + … + yn(t)en. If we assume that the derivative of a vector-valued function retains the linearity property, then the derivative of y(t) must be
y′(t) = y1′(t)e1 + … + yn′(t)en,
because each of the basis vectors is a constant.
This generalization is useful, for example, if y(t) is the position vector of a particle at time t; then the derivative y′(t) is the velocity vector of the particle at time t.
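A small numerical sketch: for an arbitrary parametric curve (here a unit circle traversed at unit speed), a componentwise difference quotient approximates the tangent/velocity vector.

```python
import numpy as np

def velocity(curve, t, h=1e-6):
    # Componentwise central difference: (y(t + h) - y(t - h)) / (2h)
    return (curve(t + h) - curve(t - h)) / (2 * h)

circle = lambda t: np.array([np.cos(t), np.sin(t)])  # y(t) in R^2

t = 1.0
print(velocity(circle, t))                  # close to (-sin t, cos t)
print(np.array([-np.sin(t), np.cos(t)]))    # exact tangent vector for comparison
```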
Partial derivatives
Suppose that f is a function that depends on more than one variable. For instance,
f(x, y) = x² + xy + y².
f can be reinterpreted as a family of functions of one variable indexed by the other variables:
f(x, y) = fx(y) = x² + xy + y².
In other words, every value of x chooses a function, denoted fx, which is a function of one real number. That is,
x ↦ fx,  with  fx(y) = x² + xy + y².
Once a value of x is chosen, say a, then f(x, y) determines a function fa that sends y to a² + ay + y²:
fa(y) = a² + ay + y².
In this expression, a is a constant, not a variable, so fa is a function of only one real variable. Consequently the definition of the derivative for a function of one variable applies:
fa′(y) = a + 2y.
The above procedure can be performed for any choice of a. Assembling the derivatives together into a function gives a function that describes the variation of f in the y direction:
∂f/∂y (x, y) = x + 2y.
This is the partial derivative of f with respect to y. Here ∂ is a rounded d called the partial derivative symbol. To distinguish it from the letter d, ∂ is sometimes pronounced "der", "del", or "partial" instead of "dee".
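A symbolic check of this example: differentiating f(x, y) = x² + xy + y² with respect to y (holding x fixed) and, for comparison, with respect to x.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + x*y + y**2

print(sp.diff(f, y))   # x + 2*y, the partial derivative with respect to y
print(sp.diff(f, x))   # 2*x + y, the partial derivative with respect to x
```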
In general, the partial derivative of a function f(x1, …, xn) in the direction xi at the point (a1, …, an) is defined to be:
∂f/∂xi (a1, …, an) = lim(h→0) [f(a1, …, ai + h, …, an) − f(a1, …, an)] / h.
In the above difference quotient, all the variables except xi are held fixed. That choice of fixed values determines a function of one variable,
xi ↦ f(a1, …, ai−1, xi, ai+1, …, an),
and, by definition, its ordinary derivative at xi = ai is the partial derivative ∂f/∂xi (a1, …, an).
In other words, the different choices of a index a family of one-variable functions just as in the example above. This expression also shows that the computation of partial derivatives reduces to the computation of one-variable derivatives.
An important example of a function of several variables is the case of a scalar-valued function f(x1, …, xn) on a domain in Euclidean space Rn (e.g., on R² or R³). In this case f has a partial derivative ∂f/∂xj with respect to each variable xj. At the point a, these partial derivatives define the vector
∇f(a) = (∂f/∂x1(a), …, ∂f/∂xn(a)).
This vector is called the gradient of f at a. If f is differentiable at every point in some domain, then the gradient is a vector-valued function ∇f that takes the point a to the vector ∇f(a). Consequently the gradient determines a vector field.
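A minimal sketch of the gradient as the vector of partial derivatives, evaluated at an arbitrary point for the same example function.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + x*y + y**2

grad = [sp.diff(f, v) for v in (x, y)]        # (2x + y, x + 2y)
print(grad)

a = {x: 1, y: 2}                              # arbitrary point a = (1, 2)
print([g.subs(a) for g in grad])              # gradient vector at a: [4, 5]
```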
Directional derivatives
If f is a real-valued function on Rn, then the partial derivatives of f measure its variation in the direction of the coordinate axes. For example, if f is a function of x and y, then its partial derivatives measure the variation in f in the x direction and the y direction. They do not, however, directly measure the variation of f in any other direction, such as along the diagonal line y = x. These are measured using directional derivatives. Choose a vector
v = (v1, …, vn).
The directional derivative of f in the direction of v at the point x is the limit
Dv f(x) = lim(h→0) [f(x + hv) − f(x)] / h.
In some cases it may be easier to compute or estimate the directional derivative after changing the length of the vector. Often this is done to turn the problem into the computation of a directional derivative in the direction of a unit vector. To see how this works, suppose that v = λu, where u is a unit vector. Substitute h = k/λ into the difference quotient. The difference quotient becomes:
λ · [f(x + ku) − f(x)] / k.
This is λ times the difference quotient for the directional derivative of f with respect to u. Furthermore, taking the limit as h tends to zero is the same as taking the limit as k tends to zero because h and k are multiples of each other. Therefore Dv(f) = λDu(f). Because of this rescaling property, directional derivatives are frequently considered only for unit vectors.
If all the partial derivatives of f exist and are continuous at x, then they determine the directional derivative of f in the direction v by the formula:
Dv f(x) = Σj vj · ∂f/∂xj(x).
The same definition also works when f is a function with values in Rm. The above definition is applied to each component of the vectors. In this case, the directional derivative is a vector in Rm.
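A numerical sketch of the formula above: for an arbitrary function, point and direction, the gradient dot product agrees with a direct difference quotient along v.

```python
import numpy as np

f = lambda p: p[0]**2 + p[0]*p[1] + p[1]**2     # f(x, y) = x^2 + xy + y^2
grad_f = lambda p: np.array([2*p[0] + p[1], p[0] + 2*p[1]])

x = np.array([1.0, 2.0])    # arbitrary point
v = np.array([3.0, 4.0])    # arbitrary direction

via_gradient = grad_f(x) @ v                    # sum_j v_j * df/dx_j
h = 1e-6
via_quotient = (f(x + h*v) - f(x)) / h          # difference quotient in direction v

print(via_gradient, via_quotient)               # both close to 4*3 + 5*4 = 32
```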
Total derivative, total differential and Jacobian matrix
When f is a function from an open subset of Rn to Rm, then the directional derivative of f in a chosen direction is the best linear approximation to f at that point and in that direction. But when n > 1, no single directional derivative can give a complete picture of the behavior of f. The total derivative, also called the (total) differential, gives a complete picture by considering all directions at once. That is, for any vector v starting at a, the linear approximation formula holds:
f(a + v) ≈ f(a) + f′(a)v.
Just like the single-variable derivative, f ′(a) is chosen so that the error in this approximation is as small as possible.
If n and m are both one, then the derivative f ′(a) is a number and the expression f ′(a)v is the product of two numbers. But in higher dimensions, it is impossible for f ′(a) to be a number. If it were a number, then f ′(a)v would be a vector in Rn while the other terms would be vectors in Rm, and therefore the formula would not make sense. For the linear approximation formula to make sense, f ′(a) must be a function that sends vectors in Rn to vectors in Rm, and f ′(a)v must denote this function evaluated at v.
To determine what kind of function it is, notice that the linear approximation formula can be rewritten as
f(a + v) − f(a) ≈ f′(a)v.
Notice that if we choose another vector w, then this approximate equation determines another approximate equation by substituting w for v. It determines a third approximate equation by substituting both w for v and a + v for a. By subtracting these two new equations, we get
f(a + v + w) − f(a + v) − f(a + w) + f(a) ≈ f′(a + v)w − f′(a)w.
If we assume that v is small and that the derivative varies continuously in a, then f′(a + v) is approximately equal to f′(a), and therefore the right-hand side is approximately zero. The left-hand side can be rewritten in a different way using the linear approximation formula with v + w substituted for v. The linear approximation formula implies:
f′(a)v + f′(a)w ≈ f′(a)(v + w).
This suggests that f ′(a) is a linear transformation from the vector space Rn to the vector space Rm. In fact, it is possible to make this a precise derivation by measuring the error in the approximations. Assume that the error in these linear approximation formula is bounded by a constant times ||v||, where the constant is independent of v but depends continuously on a. Then, after adding an appropriate error term, all of the above approximate equalities can be rephrased as inequalities. In particular, f ′(a) is a linear transformation up to a small error term. In the limit as v and w tend to zero, it must therefore be a linear transformation. Since we define the total derivative by taking a limit as v goes to zero, f ′(a) must be a linear transformation.
In one variable, the fact that the derivative is the best linear approximation is expressed by the fact that it is the limit of difference quotients. However, the usual difference quotient does not make sense in higher dimensions because it is not usually possible to divide vectors. In particular, the numerator and denominator of the difference quotient are not even in the same vector space: The numerator lies in the codomain Rm while the denominator lies in the domain Rn. Furthermore, the derivative is a linear transformation, a different type of object from both the numerator and denominator. To make precise the idea that f ′ (a) is the best linear approximation, it is necessary to adapt a different formula for the one-variable derivative in which these problems disappear. If f : R → R, then the usual definition of the derivative may be manipulated to show that the derivative of f at a is the unique number f ′(a) such that
lim(h→0) [f(a + h) − f(a) − f′(a)h] / h = 0.
This is equivalent to
lim(h→0) |f(a + h) − f(a) − f′(a)h| / |h| = 0,
because the limit of a function tends to zero if and only if the limit of the absolute value of the function tends to zero. This last formula can be adapted to the many-variable situation by replacing the absolute values with norms.
The definition of the total derivative of f at a, therefore, is that it is the unique linear transformation f ′(a) : Rn → Rm such that
lim(h→0) ‖f(a + h) − f(a) − f′(a)h‖ / ‖h‖ = 0.
Here h is a vector in Rn, so the norm in the denominator is the standard length on Rn. However, f′(a)h is a vector in Rm, and the norm in the numerator is the standard length on Rm. If v is a vector starting at a, then f ′(a)v is called the pushforward of v by f and is sometimes written f*v.
If the total derivative exists at a, then all the partial derivatives and directional derivatives of f exist at a, and for all v, f ′(a)v is the directional derivative of f in the direction v. If we write f using coordinate functions, so that f = (f1, f2, ..., fm), then the total derivative can be expressed using the partial derivatives as a matrix. This matrix is called the Jacobian matrix of f at a:
f′(a) = (∂fi/∂xj(a)), the m×n matrix whose entry in row i and column j is the partial derivative ∂fi/∂xj evaluated at a.
The existence of the total derivative f′(a) is strictly stronger than the existence of all the partial derivatives, but if the partial derivatives exist and are continuous, then the total derivative exists, is given by the Jacobian, and depends continuously on a.
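A short SymPy sketch of a Jacobian matrix for an arbitrary map from R² to R³, using the library's jacobian method; the map and evaluation point are made-up examples.

```python
import sympy as sp

x, y = sp.symbols('x y')
# Arbitrary example map f : R^2 -> R^3
f = sp.Matrix([x**2 * y, sp.sin(x) + y, x * sp.exp(y)])

J = f.jacobian([x, y])        # 3 x 2 matrix of partial derivatives df_i / dx_j
print(J)

print(J.subs({x: 1, y: 0}))   # Jacobian evaluated at the point (1, 0)
```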
The definition of the total derivative subsumes the definition of the derivative in one variable. That is, if f is a real-valued function of a real variable, then the total derivative exists if and only if the usual derivative exists. The Jacobian matrix reduces to a 1×1 matrix whose only entry is the derivative f′(x). This 1×1 matrix satisfies the property that f(a + h) − f(a) − f ′(a)h is approximately zero, in other words that
f(a + h) ≈ f(a) + f′(a)h.
Up to changing variables, this is the statement that the function x ↦ f(a) + f′(a)(x − a) is the best linear approximation to f at a.
The total derivative of a function does not give another function in the same way as the one-variable case. This is because the total derivative of a multivariable function has to record much more information than the derivative of a single-variable function. Instead, the total derivative gives a function from the tangent bundle of the source to the tangent bundle of the target.
The natural analog of second, third, and higher-order total derivatives is not a linear transformation, is not a function on the tangent bundle, and is not built by repeatedly taking the total derivative. The analog of a higher-order derivative, called a jet, cannot be a linear transformation because higher-order derivatives reflect subtle geometric information, such as concavity, which cannot be described in terms of linear data such as vectors. It cannot be a function on the tangent bundle because the tangent bundle only has room for the base space and the directional derivatives. Because jets capture higher-order information, they take as arguments additional coordinates representing higher-order changes in direction. The space determined by these additional coordinates is called the jet bundle. The relation between the total derivative and the partial derivatives of a function is paralleled in the relation between the kth order jet of a function and its partial derivatives of order less than or equal to k.
By repeatedly taking the total derivative, one obtains higher versions of the Fréchet derivative, specialized to Rp. The kth order total derivative may be interpreted as a map
Dᵏf : Rn → Lᵏ(Rn × ⋯ × Rn; Rm),
which takes a point x in Rn and assigns to it an element of the space of k-linear maps from Rn to Rm, the "best" (in a certain precise sense) k-linear approximation to f at that point. By precomposing it with the diagonal map Δ, x → (x, x), a generalized Taylor series may be begun as
f(x) ≈ f(a) + (D f)(x − a) + (D² f)(x − a, x − a) + ⋯
  = f(a) + Σi (D f)i (x − a)i + Σj,k (D² f)j k (x − a)j (x − a)k + ⋯,
where f(a) is identified with a constant function, (x-a)i are the components of the vector x-a, and (D f)i and (D2 f)j k are the components of D f and D2 f as linear transformations.
The concept of a derivative can be extended to many other settings. The common thread is that the derivative of a function at a point serves as a linear approximation of the function at that point.
- An important generalization of the derivative concerns complex functions of complex variables, such as functions from (a domain in) the complex numbers C to C. The notion of the derivative of such a function is obtained by replacing real variables with complex variables in the definition. If C is identified with R² by writing a complex number z as x + i y, then a differentiable function from C to C is certainly differentiable as a function from R² to R² (in the sense that its partial derivatives all exist), but the converse is not true in general: the complex derivative only exists if the real derivative is complex linear and this imposes relations between the partial derivatives called the Cauchy Riemann equations — see holomorphic functions.
- Another generalization concerns functions between differentiable or smooth manifolds. Intuitively speaking such a manifold M is a space that can be approximated near each point x by a vector space called its tangent space: the prototypical example is a smooth surface in R³. The derivative (or differential) of a (differentiable) map f: M → N between manifolds, at a point x in M, is then a linear map from the tangent space of M at x to the tangent space of N at f(x). The derivative function becomes a map between the tangent bundles of M and N. This definition is fundamental in differential geometry and has many uses — see pushforward (differential) and pullback (differential geometry).
- Differentiation can also be defined for maps between infinite dimensional vector spaces such as Banach spaces and Fréchet spaces. There is a generalization both of the directional derivative, called the Gâteaux derivative, and of the differential, called the Fréchet derivative.
- One deficiency of the classical derivative is that not very many functions are differentiable. Nevertheless, there is a way of extending the notion of the derivative so that all continuous functions and many other functions can be differentiated using a concept known as the weak derivative. The idea is to embed the continuous functions in a larger space called the space of distributions and only require that a function is differentiable "on average".
- The properties of the derivative have inspired the introduction and study of many similar objects in algebra and topology — see, for example, differential algebra.
- The discrete equivalent of differentiation is finite differences. The study of differential calculus is unified with the calculus of finite differences in time scale calculus.
- Also see arithmetic derivative.
See also
- Applications of derivatives
- Automatic differentiation
- Differentiability class
- Generalizations of the derivative
- Multiplicative inverse
- Numerical differentiation
- Symmetric derivative
- Differentiation rules
- Fractal derivative
- Differential calculus, as discussed in this article, is a very well established mathematical discipline for which there are many sources. Almost all of the material in this article can be found in Apostol 1967, Apostol 1969, and Spivak 1994.
- Spivak 1994, chapter 10.
- See Differential (infinitesimal) for an overview. Further approaches include the Radon–Nikodym theorem, and the universal derivation (see Kähler differential).
- Despite this, it is still possible to take the derivative in the sense of distributions. The result is nine times the Dirac measure centered at a.
- Banach, S. (1931), "Über die Baire'sche Kategorie gewisser Funktionenmengen", Studia Math. (3): 174–179. Cited by Hewitt, E. and Stromberg, K. (1963), Real and Abstract Analysis, Springer-Verlag, Theorem 17.8.
- Apostol 1967, §4.18
- In the formulation of calculus in terms of limits, the du symbol has been assigned various meanings by various authors. Some authors do not assign a meaning to du by itself, but only as part of the symbol du/dx. Others define dx as an independent variable, and define du by du = dx•f′(x). In non-standard analysis du is defined as an infinitesimal. It is also interpreted as the exterior derivative of a function u. See differential (infinitesimal) for further information.
- "The Notation of Differentiation". MIT. 1998. Retrieved 24 October 2012.
- This can also be expressed as the adjointness between the product space and function space constructions.
- Anton, Howard; Bivens, Irl; Davis, Stephen (February 2, 2005), Calculus: Early Transcendentals Single and Multivariable (8th ed.), New York: Wiley, ISBN 978-0-471-47244-5
- Apostol, Tom M. (June 1967), Calculus, Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra 1 (2nd ed.), Wiley, ISBN 978-0-471-00005-1
- Apostol, Tom M. (June 1969), Calculus, Vol. 2: Multi-Variable Calculus and Linear Algebra with Applications 1 (2nd ed.), Wiley, ISBN 978-0-471-00007-5
- Courant, Richard; John, Fritz (December 22, 1998), Introduction to Calculus and Analysis, Vol. 1, Springer-Verlag, ISBN 978-3-540-65058-4
- Eves, Howard (January 2, 1990), An Introduction to the History of Mathematics (6th ed.), Brooks Cole, ISBN 978-0-03-029558-4
- Larson, Ron; Hostetler, Robert P.; Edwards, Bruce H. (February 28, 2006), Calculus: Early Transcendental Functions (4th ed.), Houghton Mifflin Company, ISBN 978-0-618-60624-5
- Spivak, Michael (September 1994), Calculus (3rd ed.), Publish or Perish, ISBN 978-0-914098-89-8
- Stewart, James (December 24, 2002), Calculus (5th ed.), Brooks Cole, ISBN 978-0-534-39339-7
- Thompson, Silvanus P. (September 8, 1998), Calculus Made Easy (Revised, Updated, Expanded ed.), New York: St. Martin's Press, ISBN 978-0-312-18548-0
Online books
- Crowell, Benjamin (2003), Calculus
- Garrett, Paul (2004), Notes on First-Year Calculus, University of Minnesota
- Hussain, Faraz (2006), Understanding Calculus
- Keisler, H. Jerome (2000), Elementary Calculus: An Approach Using Infinitesimals
- Mauch, Sean (2004), Unabridged Version of Sean's Applied Math Book
- Sloughter, Dan (2000), Difference Equations to Differential Equations
- Strang, Gilbert (1991), Calculus
- Stroyan, Keith D. (1997), A Brief Introduction to Infinitesimal Calculus
- Wikibooks, Calculus
Web pages
- Hazewinkel, Michiel, ed. (2001), "Derivative", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Khan Academy: Derivative lesson 1
- Weisstein, Eric W. "Derivative." From MathWorld
- Derivatives of Trigonometric functions, UBC
- Solved problems in derivatives | http://en.wikipedia.org/wiki/First_derivative | 13 |
Illinois Learning Standards
Stage E - Social Science
Students who meet the standard can understand and explain basic principles of the United States government.
- Give examples of civic and personal responsibilities of students and adults.
- Explain the characteristics of a "democracy."
- Justify why governments need to make rules and laws.
- Explain the importance of the Declaration of Independence and the Illinois and United States Constitutions.
- Define the concept of "unalienable" as it relates to rights expressed in the Declaration of Independence.
- Explain how the U.S. Constitution can be amended.
- Defend the idea of having a Bill of Rights to outline and protect the rights of citizens.
- Summarize the evolution of one of the amendments to the constitution (e.g., its origins, implementation, influence).
- Define rule of law.
Students who meet the standard can understand the structures and functions of the political systems of Illinois, the United States, and other nations.
- State the names of the two houses in the Illinois state legislature.
- Describe the purpose behind the principles of division and sharing of powers among the executive, legislative, and judicial branches.
- Describe the system of checks and balances between the three branches of the federal government.
- Differentiate between the characteristics of criminal and civil trials.
Students who meet the standard can understand election processes and responsibilities of citizens.
- Describe situations in their home, school, or community where the rights of minorities have been respected.
- Predict the consequences of ignoring the rights of other people in public places (e.g., smoking in a crowded theater).
- Explain how an individual or group has solved a problem in their community.
- Identify voting requirements.
Students who meet the standard can understand the roles and influences of individuals and interest groups in the political systems of Illinois, the United States, and other nations.
- Describe a situation where minority rights may supersede the wishes of the majority.
- Produce a plan to increase student and/or parent involvement in school activities.
- Define the concept of "lobbying" to influence public opinion or legislative decision-making.
- Explain ways that individuals and groups influence the shaping of public policy.
- Compare/contrast contemporary and traditional forms of political persuasion (e.g., speeches and parades with Internet, faxes, electronic mail).
Students who meet the standard can understand United States foreign policy as it relates to other nations and international issues.
- Summarize how nations interact to avoid conflict (e.g., diplomacy, trade, treaties).
- Identify government branches and offices at the federal level that are responsible for conducting foreign affairs.
- Identify a treaty the United States has signed with another nation.
- Describe how a specific issue (e.g., trade, resources, human rights) has affected a president's foreign policy.
Students who meet the standard can understand the development of United States political ideas and traditions.
- Describe examples of the development of basic freedoms for the people of the United States.
- Discuss consistencies and inconsistencies expressed in United States political traditions and actual practices (e.g., freedom of speech, the right to bear arms, slavery).
- Compare the similarities found in national symbols, legends, or stories that have emphasized the value of such principles as freedom, liberty, preservation of the Union, etc.
- Describe historical examples featuring the denial or extension of civil rights to various individuals or groups.
- Identify significant changes in communication or technology that have had an effect on the spread of political information and influence (e.g., telegraph, television, Internet).
Students who meet the standard understand economic systems, with an emphasis on the United States.
- Explain how a market economy answers the three basic economic questions: What to produce? How to produce? For whom to produce?
- Identify the productive resources people sell to earn income.
- Identify human resources in their community and the goods and services they produce.
- Analyze the relationship between what they learn in school and the skills they need for a job.
Students who meet the standard understand that scarcity necessitates choices by consumers.
- Apply the concept of opportunity cost to the choices consumers make.
- Identify factors that affect consumer choices (e.g., prices of goods and services; quality; income; preferences/tastes).
- Describe how a large increase or decrease in the price of a good or service would affect how much of that item would be purchased.
- Explain why consumers will buy more goods and services at lower prices and fewer at higher prices.
- Define prices as what consumers pay when buying goods or services and what sellers receive when selling goods or services.
Students who meet the standard understand that scarcity necessitates choices by producers.
- Predict how a large increase or decrease in the price of a good or service will affect how much producers will make and sell of that good or service.
- Analyze why producers will produce more goods and services at higher prices and fewer at lower prices.
- Identify markets in which there are very few sellers and markets in which there are many sellers.
- Identify the characteristics of effective entrepreneurs (e.g., why they are willing to take risks to start new businesses).
Students who meet the standard understand trade as an exchange of goods or services.
- Explain the benefits of exchanging with the use of money.
- Identify the primary functions and services of financial institutions.
- Predict how people's lives would be different if they did not trade with others for goods and services they use.
- Illustrate how division of labor in a production process can increase productivity.
- Explain how division of labor creates interdependence.
- Analyze the impact of interdependence on production.
Students who meet the standard understand the impact of government policies and decisions on production and consumption in the economy.
- Identify public goods and services in the community, state, and nation.
- List the types of taxes paid by individuals and by businesses.
Students who meet the standard can apply the skills of historical analysis and interpretation.
- Explain how life changed or stayed the same in a region or place using two historic maps that depict different times in that region or place.
- Describe trends during a time period using political, economic, environmental, and social data from appropriate graphs or charts.
- Distinguish between primary and secondary sources.
- Formulate a research question about the past that includes its "people", "space", and "time" dimensions.
- Identify sources in the school or local library that will help answer a research question.
- Locate on the World Wide Web one source pertaining to each of the "people", "space", and "time" dimensions of a research question.
Students who meet the standard understand the development of significant political events.
- Identify turning points in United States political history.
- Summarize the causes and effects of ideas and actions of significant political figures during the Colonial Period.
- Analyze political events, figures, and ideas in the colonies that led to the American Revolution. (US)
- List the key figures, events, and ideas in the development of the United States government during the Early National Period.
- Identify turning points in world political history. (W)
- Identify significant political leaders of the non-Western world (e.g., Genghis Khan, Gandhi, Mandela). (W)
- Describe major developments in the evolution of Western political systems (e.g., Greek democracy, Roman republic, Magna Carta and Common Law, the Enlightenment). (W)
- Analyze the consequences of political ideas and actions taken by significant individuals in the past. (W)
Students who meet the standard understand the development of economic systems.
- Identify turning points in United States economic history.
- Describe the economic choices people made or were forced to make during the development of the early economy of the United States. (US)
- Describe how slavery and indentured servitude were related to the wants of economic interest groups in the United States.
- Explain how the economic choices people made in the past affected their political and social lives and their environment.
- Identify turning points in world economic history (e.g., manorial system, cultural exchanges, capitalism, industrial revolution, information revolution). (W)
- Describe the impact of trade on the development of early civilizations.
- Identify the differences between agricultural and industrial economies.
Students who meet the standard understand Illinois, United States, and world social history.
- Compare and contrast family and community life in two or more American colonies in terms of the colonists' motives for settling there. (US)
- Use a variety of sources to describe how people organized colonial society. (US)
- Compare and contrast changes in family life as people moved from one geographic region to another during the period of westward expansion. (US)
- Assess the influence that significant people had on the social lives of others in Illinois or the United States.
- Identify turning points in world social history. (W)
- Discuss how the roles of men, women, and children in past cultures have changed over time. (W)
- Describe how a cultural exchange occurred between two societies of the past. (W)
Students who meet the standard understand Illinois, United States, and world environmental history.
- Identify turning points in United States environmental history.
- Explain how a community or state's location helps to understand its growth and development over time. (US)
- Organize a series of Illinois or United States maps on one environmental theme into an historical atlas. (US)
- Describe how various people around North America used human or animal power to cultivate crops before the onset of mechanized technology. (US)
- Provide an example of how some people continue to depend on human or animal power to survive in North America. (US)
- Describe the physical and cultural features of life in the pre-colonial Illinois country using images documenting the archaeological record. (US)
- Describe the effects of a significant invention or technological innovation on the physical and cultural environment of Illinois between 1700 and 1818. (US)
- Identify turning points in world environmental history.
- Identify on a map the location of the major world political powers, over time, and explain how their location fostered their growth and development. (W)
- Organize a series of maps on one environmental theme into an historical atlas. (W)
- Compare the cultural features of the environment of settled societies with those of hunter-gatherer cultures. (W)
- Describe how various people around the globe used animals to cultivate crops in early world history. (W)
- Provide examples of how some people continue to depend on animal power to survive in their environment. (W)
- Describe the physical and cultural features of life in the ancient world using images documenting the archaeological record.
Students who meet the standard can locate, describe and explain places, regions and features on Earth.
- Mark major ocean currents, wind patterns, landforms, and climate regions on a map.
- Create thematic maps and graphs of the students' local community, Illinois, United States, and the world using data and a variety of symbols and colors (e.g., to indicate patterns of population, disease, economic features, rainfall).
- Describe the locations of major physical and human features in the community.
- Explain how major urban centers in Illinois are connected to other urban centers in Illinois and the United States (e.g., transportation arteries, communication systems, cultural and recreational relationships).
- Design symbols as references for map interpretation and place them in a legend/key to be used on a map.
- Determine the absolute location of places chosen by the teacher and students using a map grid with latitude and longitude.
Students who meet the standard can analyze and explain characteristics and interactions of Earth's physical systems.
- Demonstrate understanding of the Earth/Sun relationship by preparing a model or by designing a demonstration to show the tilt of Earth in relation to the Sun in order to explain day/night and length of day at different locations on Earth.
- Explain how and why people alter the physical environment (e.g., by creating irrigation projects, clearing land to make room for houses and shopping centers, planting crops).
- Explain the process of erosion and the effects of rainfall on unprotected soil surfaces (e.g., newly tilled farm fields).
- Explain the relationship between plants and animals in a local ecosystem.
Students who meet the standard can understand relationships between geographic factors and society.
- Create a map showing the occurrence of natural hazards in Illinois and the United States.
- Map the location of students in your school by coloring the different areas (cafeteria, classrooms, gym, etc.) to show different population densities at a given time of day.
- Analyze maps and aerial photos of the local community and Illinois to determine how humans use, abuse, and protect the environment.
- Identify factors that influence the location of cities (e.g., transportation arteries, physical features, migration).
Students who meet the standard can understand the historical significance of geography.
- Compare maps of the United States showing landforms, climate, and natural vegetation regions to maps that show population distribution to identify the relationship between settlement and physical features.
- Analyze how customs and traditions of people from different parts of the world change over time.
- Describe how physical characteristics of a region or a nation influence people's point of view and the decisions they make over time (e.g., scarcity of water influences water usage, mining resources in mountainous regions, logging in forested areas).
Students who meet the standard can compare characteristics of culture as reflected in language, literature, the arts, traditions, and institutions.
- Describe how culture is shared through music, art, and literature throughout the world over time.
- Describe how an artistic tradition has been changed by technology (e.g., photography, music).
- Describe how social celebrations (parades, fairs) reinforce cultural values.
- Compare the celebration of holidays by cultures throughout the world.
- Compare cultural differences/similarities with students from a different part of the United States.
Students who meet the standard can understand the roles and interactions of individuals and groups in society.
- Analyze how social institutions or groups meet the needs of people.
- Explain how interactions of individuals and groups impact the local community.
- Describe how national institutions affect individuals in the local community.
- Give an example of how different social institutions or groups (e.g., religious, nonprofit and community groups) address the same social problem.
Students who meet the standard can understand how social systems form and develop over time.
- Define belief system.
- Describe ways school administrators, teachers, students, and parents can cooperate to address school issues.
- Identify historically significant people who affected social life or institutions.
Some shorelines experience two almost equal high tides and two low tides each day, called a semi-diurnal tide. Some locations experience only one high and one low tide each day, called a diurnal tide. Some locations experience two uneven tides a day, or sometimes one high and one low each day; this is called a mixed tide. The times and amplitude of the tides at a locale are influenced by the alignment of the Sun and Moon, by the pattern of tides in the deep ocean, by the amphidromic systems of the oceans, and by the shape of the coastline and near-shore bathymetry (see Timing).
Tides vary on timescales ranging from hours to years due to numerous influences. To make accurate records, tide gauges at fixed stations measure the water level over time. Gauges ignore variations caused by waves with periods shorter than minutes. These data are compared to the reference (or datum) level usually called mean sea level.
While tides are usually the largest source of short-term sea-level fluctuations, sea levels are also subject to forces such as wind and barometric pressure changes, resulting in storm surges, especially in shallow seas and near coasts.
Tidal phenomena are not limited to the oceans, but can occur in other systems whenever a gravitational field that varies in time and space is present. For example, the solid part of the Earth is affected by tides, though this is not as easily seen as the water tidal movements.
Tide changes proceed via the following stages:
- Sea level rises over several hours, covering the intertidal zone; flood tide.
- The water rises to its highest level, reaching high tide.
- Sea level falls over several hours, revealing the intertidal zone; ebb tide.
- The water stops falling, reaching low tide.
Tides produce oscillating currents known as tidal streams. The moment that the tidal current ceases is called slack water or slack tide. The tide then reverses direction and is said to be turning. Slack water usually occurs near high water and low water. But there are locations where the moments of slack tide differ significantly from those of high and low water.
Tides are most commonly semi-diurnal (two high waters and two low waters each day), or diurnal (one tidal cycle per day). The two high waters on a given day are typically not the same height (the daily inequality); these are the higher high water and the lower high water in tide tables. Similarly, the two low waters each day are the higher low water and the lower low water. The daily inequality is not consistent and is generally small when the Moon is over the equator.
Tidal changes are the net result of multiple influences that act over varying periods. These influences are called tidal constituents. The primary constituents are the Earth's rotation, the positions of the Moon and the Sun relative to Earth, the Moon's altitude (elevation) above the Earth's equator, and bathymetry.
Variations with periods of less than half a day are called harmonic constituents. Conversely, cycles of days, months, or years are referred to as long period constituents.
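In practice, tide predictions are built by summing harmonic constituents of the form A·cos(ωt + φ). The snippet below is a toy sketch with two invented constituents (labelled "M2-like" and "S2-like"); real amplitudes and phases are location-specific and are not taken from this article.

```python
import numpy as np

# (name, amplitude in metres, period in hours, phase in radians) -- illustrative values only
constituents = [
    ("M2-like", 1.2, 12.4206, 0.0),
    ("S2-like", 0.4, 12.0000, 1.0),
]

def tide_height(t_hours):
    """Sum of harmonic constituents, relative to mean sea level."""
    h = 0.0
    for _, amp, period, phase in constituents:
        omega = 2 * np.pi / period
        h += amp * np.cos(omega * t_hours + phase)
    return h

t = np.arange(0, 48, 0.5)                # two days, half-hour steps
heights = [tide_height(tt) for tt in t]
print(max(heights), min(heights))        # highest and lowest predicted levels over the two days
```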
The tidal forces affect the entire Earth, but the movement of the solid Earth amounts to only centimeters. The atmosphere is much more fluid and compressible, so its surface moves by kilometers, in the sense that the height of a given pressure level in the outer atmosphere shifts by that much.
Principal lunar semi-diurnal constituent
In most locations, the largest constituent is the "principal lunar semi-diurnal", also known as the M2 (or M2) tidal constituent. Its period is about 12 hours and 25.2 minutes, exactly half a tidal lunar day, which is the average time separating one lunar zenith from the next, and thus is the time required for the Earth to rotate once relative to the Moon. Simple tide clocks track this constituent. The lunar day is longer than the Earth day because the Moon orbits in the same direction the Earth spins. This is analogous to the minute hand on a watch crossing the hour hand at 12:00 and then again at about 1:05½ (not at 1:00).
The Moon orbits the Earth in the same direction as the Earth rotates on its axis, so it takes slightly more than a day—about 24 hours and 50 minutes—for the Moon to return to the same location in the sky. During this time, it has passed overhead (culmination) once and underfoot once (at an hour angle of 00:00 and 12:00 respectively), so in many places the period of strongest tidal forcing is the above mentioned, about 12 hours and 25 minutes. The moment of highest tide is not necessarily when the Moon is nearest to zenith or nadir, but the period of the forcing still determines the time between high tides.
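The quoted period can be checked with a short calculation: the tidal lunar day follows from the difference between the Earth's rotation rate and the Moon's orbital rate. The sidereal day and sidereal month values below are standard rounded constants supplied here, not figures from this article.

```python
# Sidereal rotation period of the Earth and sidereal orbital period of the Moon, in hours
sidereal_day = 23.9345
sidereal_month = 27.3217 * 24.0

# Angular rates in degrees per hour
omega_earth = 360.0 / sidereal_day
omega_moon = 360.0 / sidereal_month

# Time for the Earth to gain a full 360 degrees on the Moon: the tidal (lunar) day
lunar_day = 360.0 / (omega_earth - omega_moon)
print(lunar_day)        # about 24.84 h = 24 h 50 min
print(lunar_day / 2)    # about 12.42 h = 12 h 25 min, the M2 period
```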
Because the gravitational field created by the Moon weakens with distance from the Moon, it exerts a slightly stronger than average force on the side of the Earth facing the Moon, and a slightly weaker force on the opposite side. The Moon thus tends to "stretch" the Earth slightly along the line connecting the two bodies. The solid Earth deforms a bit, but ocean water, being fluid, is free to move much more in response to the tidal force, particularly horizontally. As the Earth rotates, the magnitude and direction of the tidal force at any particular point on the Earth's surface change constantly; although the ocean never reaches equilibrium—there is never time for the fluid to "catch up" to the state it would eventually reach if the tidal force were constant—the changing tidal force nonetheless causes rhythmic changes in sea surface height.
Semi-diurnal range differences
When there are two high tides each day with different heights (and two low tides also of different heights), the pattern is called a mixed semi-diurnal tide.
Range variation: springs and neaps
The semi-diurnal range (the difference in height between high and low waters over about half a day) varies in a two-week cycle. Approximately twice a month, around new moon and full moon when the Sun, Moon and Earth form a line (a condition known as syzygy) the tidal force due to the sun reinforces that due to the Moon. The tide's range is then at its maximum: this is called the spring tide, or just springs. It is not named after the season but, like that word, derives from the meaning "jump, burst forth, rise", as in a natural spring.
When the Moon is at first quarter or third quarter, the sun and Moon are separated by 90° when viewed from the Earth, and the solar tidal force partially cancels the Moon's. At these points in the lunar cycle, the tide's range is at its minimum: this is called the neap tide, or neaps (a word of uncertain origin).
Spring tides result in high waters that are higher than average, low waters that are lower than average, 'slack water' time that is shorter than average and stronger tidal currents than average. Neaps result in less extreme tidal conditions. There is about a seven-day interval between springs and neaps.
The changing distance separating the Moon and Earth also affects tide heights. When the Moon is closest, at perigee, the range increases, and when it is at apogee, the range shrinks. Every 7½ lunations (the full cycles from full moon to new to full), perigee coincides with either a new or full moon causing perigean spring tides with the largest tidal range. Even at its most powerful this force is still weak causing tidal differences of inches at most.
The shape of the shoreline and the ocean floor changes the way that tides propagate, so there is no simple, general rule that predicts the time of high water from the Moon's position in the sky. Coastal characteristics such as underwater bathymetry and coastline shape mean that individual location characteristics affect tide forecasting; actual high water time and height may differ from model predictions due to the coastal morphology's effects on tidal flow. However, for a given location the relationship between lunar altitude and the time of high or low tide (the lunitidal interval) is relatively constant and predictable, as is the time of high or low tide relative to other points on the same coast. For example, the high tide at Norfolk, Virginia, predictably occurs approximately two and a half hours before the Moon passes directly overhead.
Land masses and ocean basins act as barriers against water moving freely around the globe, and their varied shapes and sizes affect the size of tidal frequencies. As a result, tidal patterns vary. For example, in the U.S., the East coast has predominantly semi-diurnal tides, as do Europe's Atlantic coasts, while the West coast predominantly has mixed tides.
These include solar gravitational effects, the obliquity (tilt) of the Earth's equator and rotational axis, the inclination of the plane of the lunar orbit and the elliptical shape of the Earth's orbit of the sun.
A compound tide (or overtide) results from the shallow-water interaction of its two parent waves.
Phase and amplitude
Because the M2 tidal constituent dominates in most locations, the stage or phase of a tide, denoted by the time in hours after high water, is a useful concept. Tidal stage is also measured in degrees, with 360° per tidal cycle. Lines of constant tidal phase are called cotidal lines, which are analogous to contour lines of constant altitude on topographical maps. High water is reached simultaneously along the cotidal lines extending from the coast out into the ocean, and cotidal lines (and hence tidal phases) advance along the coast. Semi-diurnal and long phase constituents are measured from high water, diurnal from maximum flood tide. This and the discussion that follows is precisely true only for a single tidal constituent.
For an ocean in the shape of a circular basin enclosed by a coastline, the cotidal lines point radially inward and must eventually meet at a common point, the amphidromic point. The amphidromic point is at once cotidal with high and low waters, which is satisfied by zero tidal motion. (The rare exception occurs when the tide encircles an island, as it does around New Zealand, Iceland and Madagascar.) Tidal motion generally lessens moving away from continental coasts, so that crossing the cotidal lines are contours of constant amplitude (half the distance between high and low water) which decrease to zero at the amphidromic point. For a semi-diurnal tide the amphidromic point can be thought of roughly like the center of a clock face, with the hour hand pointing in the direction of the high water cotidal line, which is directly opposite the low water cotidal line. High water rotates about the amphidromic point once every 12 hours in the direction of rising cotidal lines, and away from ebbing cotidal lines. This rotation is generally clockwise in the southern hemisphere and counterclockwise in the northern hemisphere, and is caused by the Coriolis effect. The difference of cotidal phase from the phase of a reference tide is the epoch. The reference tide is the hypothetical constituent "equilibrium tide" on a landless Earth measured at 0° longitude, the Greenwich meridian.
In the North Atlantic, because the cotidal lines circulate counterclockwise around the amphidromic point, the high tide passes New York Harbor approximately an hour ahead of Norfolk Harbor. South of Cape Hatteras the tidal forces are more complex, and cannot be predicted reliably based on the North Atlantic cotidal lines.
History of tidal physics
Investigation into tidal physics was important in the early development of heliocentrism and celestial mechanics, with the existence of two daily tides being explained by the Moon's gravity. Later the daily tides were explained more precisely by the interaction of the Moon's and the sun's gravity.
Galileo Galilei in his 1632 Dialogue Concerning the Two Chief World Systems, whose working title was Dialogue on the Tides, gave an explanation of the tides. The resulting theory, however, was incorrect as he attributed the tides to the sloshing of water caused by the Earth's movement around the sun. He hoped to provide mechanical proof of the Earth's movement – the value of his tidal theory is disputed. At the same time Johannes Kepler correctly suggested that the Moon caused the tides, which he based upon ancient observations and correlations, an explanation which was rejected by Galileo. It was originally mentioned in Ptolemy's Tetrabiblos as having derived from ancient observation.
Isaac Newton (1642–1727) was the first person to explain tides as the product of the gravitational attraction of astronomical masses. His explanation of the tides (and many other phenomena) was published in the Principia (1687), which used his theory of universal gravitation to explain the lunar and solar attractions as the origin of the tide-generating forces. Newton and others before Pierre-Simon Laplace worked the problem from the perspective of a static system (equilibrium theory), which provided an approximation that described the tides that would occur in a non-inertial ocean evenly covering the whole Earth. The tide-generating force (or its corresponding potential) is still relevant to tidal theory, but as an intermediate quantity (forcing function) rather than as a final result; theory must also consider the Earth's accumulated dynamic tidal response to the applied forces, which response is influenced by bathymetry, Earth's rotation, and other factors.
Maclaurin used Newton’s theory to show that a smooth sphere covered by a sufficiently deep ocean under the tidal force of a single deforming body is a prolate spheroid (essentially a three dimensional oval) with major axis directed toward the deforming body. Maclaurin was the first to write about the Earth's rotational effects on motion. Euler realized that the tidal force's horizontal component (more than the vertical) drives the tide. In 1744 Jean le Rond d'Alembert studied tidal equations for the atmosphere which did not include rotation.
Pierre-Simon Laplace formulated a system of partial differential equations relating the ocean's horizontal flow to its surface height, the first major dynamic theory for water tides. The Laplace tidal equations are still in use today. William Thomson, 1st Baron Kelvin, rewrote Laplace's equations in terms of vorticity which allowed for solutions describing tidally driven coastally trapped waves, known as Kelvin waves.
Others including Kelvin and Henri Poincaré further developed Laplace's theory. Based on these developments and the lunar theory of E W Brown describing the motions of the Moon, Arthur Thomas Doodson developed and published in 1921 the first modern development of the tide-generating potential in harmonic form: Doodson distinguished 388 tidal frequencies. Some of his methods remain in use.
The tidal force produced by a massive object (Moon, hereafter) on a small particle located on or in an extensive body (Earth, hereafter) is the vector difference between the gravitational force exerted by the Moon on the particle, and the gravitational force that would be exerted on the particle if it were located at the Earth's center of mass. Thus, the tidal force depends not on the strength of the lunar gravitational field, but on its gradient (which falls off approximately as the inverse cube of the distance to the originating gravitational body). The solar gravitational force on the Earth is on average 179 times stronger than the lunar, but because the sun is on average 389 times farther from the Earth, its field gradient is weaker. The solar tidal force is 46% as large as the lunar. More precisely, the lunar tidal acceleration (along the Moon-Earth axis, at the Earth's surface) is about 1.1 × 10−7 g, while the solar tidal acceleration (along the Sun-Earth axis, at the Earth's surface) is about 0.52 × 10−7 g, where g is the gravitational acceleration at the Earth's surface. Venus has the largest effect of the other planets, at 0.000113 times the solar effect.
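The quoted accelerations and the 46% ratio can be reproduced from the inverse-cube scaling using standard rounded values for the masses and mean distances; these constants are assumptions supplied here, not taken from the article.

```python
G = 6.674e-11          # m^3 kg^-1 s^-2
g = 9.81               # m s^-2, surface gravity
R_earth = 6.371e6      # m

bodies = {
    # name: (mass in kg, mean distance from Earth in m)
    "Moon": (7.342e22, 3.844e8),
    "Sun":  (1.989e30, 1.496e11),
}

# Tidal acceleration at the sub-body point scales as 2 G M R / d^3
tidal = {name: 2 * G * M * R_earth / d**3 for name, (M, d) in bodies.items()}

for name, a in tidal.items():
    print(name, a, a / g)            # roughly 1.1e-7 g (Moon) and 0.5e-7 g (Sun)

print(tidal["Sun"] / tidal["Moon"])  # about 0.46
```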
The ocean's surface is closely approximated by an equipotential surface, (ignoring ocean currents) commonly referred to as the geoid. Since the gravitational force is equal to the potential's gradient, there are no tangential forces on such a surface, and the ocean surface is thus in gravitational equilibrium. Now consider the effect of massive external bodies such as the moon and sun. These bodies have strong gravitational fields that diminish with distance in space and which act to alter the shape of an equipotential surface on the Earth. This deformation has a fixed spatial orientation relative to the influencing body. The Earth's rotation relative to this shape causes the daily tidal cycle. Gravitational forces follow an inverse-square law (force is inversely proportional to the square of the distance), but tidal forces are inversely proportional to the cube of the distance. The ocean surface moves because of the changing tidal equipotential, rising when the tidal potential is high, which occurs on the parts of the Earth nearest to and furthest from the moon. When the tidal equipotential changes, the ocean surface is no longer aligned with it, so the apparent direction of the vertical shifts. The surface then experiences a down slope, in the direction that the equipotential has risen.
Laplace's tidal equations
- The vertical (or radial) velocity is negligible, and there is no vertical shear—this is a sheet flow.
- The forcing is only horizontal (tangential).
- The Coriolis effect appears as an inertial force (fictitious) acting laterally to the direction of flow and proportional to velocity.
- The surface height's rate of change is proportional to the negative divergence of velocity multiplied by the depth. As the horizontal velocity stretches or compresses the ocean as a sheet, the volume thins or thickens, respectively.
The boundary conditions dictate no flow across the coastline and free slip at the bottom.
The Coriolis effect (inertial force) steers currents moving towards the equator to the west and toward the east for flows moving away from the equator, allowing coastally trapped waves. Finally, a dissipation term can be added which is an analog to viscosity.
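As a very rough illustration of the sheet-flow idea only (not Laplace's equations themselves: there is no sphericity and no Coriolis term here), the following one-dimensional linearized shallow-water sketch steps surface height and velocity forward under a periodic body force, with a linear drag term standing in for the dissipation analog mentioned above. All parameter values are arbitrary.

```python
import numpy as np

# 1-D linearized shallow-water channel: du/dt = -g d(eta)/dx + F(t) - r*u,  d(eta)/dt = -H du/dx
g, H, r = 9.81, 100.0, 1e-4         # gravity, depth (m), linear friction (1/s) -- arbitrary
L, N = 400e3, 200                   # channel length (m) and number of grid cells
dx = L / N
dt = 10.0                           # time step (s)
omega = 2*np.pi / (12.42*3600)      # M2-like forcing frequency

u = np.zeros(N + 1)                 # velocity at cell edges (closed ends: u[0] = u[-1] = 0)
eta = np.zeros(N)                   # surface elevation at cell centres

for step in range(int(2*24*3600 / dt)):       # integrate for two days
    t = step * dt
    F = 1e-5 * np.cos(omega * t)              # tiny uniform body force standing in for tidal forcing
    deta_dx = np.diff(eta) / dx
    u[1:-1] += dt * (-g * deta_dx + F - r * u[1:-1])
    eta -= dt * H * np.diff(u) / dx           # height change from divergence of the flow

print(eta.max(), eta.min())          # modelled surface displacement after two days (m)
```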
Amplitude and cycle time
The theoretical amplitude of oceanic tides caused by the moon is about 54 centimetres (21 in) at the highest point, which corresponds to the amplitude that would be reached if the ocean possessed a uniform depth, there were no landmasses, and the Earth were rotating in step with the moon's orbit. The sun similarly causes tides, of which the theoretical amplitude is about 25 centimetres (9.8 in) (46% of that of the moon) with a cycle time of 12 hours. At spring tide the two effects add to each other to a theoretical level of 79 centimetres (31 in), while at neap tide the theoretical level is reduced to 29 centimetres (11 in). Since the orbits of the Earth about the sun, and the moon about the Earth, are elliptical, tidal amplitudes change somewhat as a result of the varying Earth–sun and Earth–moon distances. This causes a variation in the tidal force and theoretical amplitude of about ±18% for the moon and ±5% for the sun. If both the sun and moon were at their closest positions and aligned at new moon, the theoretical amplitude would reach 93 centimetres (37 in).
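The spring and neap figures quoted above are just the sum and difference of the lunar and solar theoretical amplitudes, which is easy to verify:

```python
lunar_cm = 54.0   # theoretical lunar tidal amplitude quoted above
solar_cm = 25.0   # theoretical solar tidal amplitude (about 46% of the lunar value)

print(lunar_cm + solar_cm)   # 79 cm, the quoted spring-tide level
print(lunar_cm - solar_cm)   # 29 cm, the quoted neap-tide level
print(solar_cm / lunar_cm)   # about 0.46
```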
Real amplitudes differ considerably, not only because of depth variations and continental obstacles, but also because wave propagation across the ocean has a natural period of the same order of magnitude as the rotation period: if there were no land masses, it would take about 30 hours for a long wavelength surface wave to propagate along the equator halfway around the Earth (by comparison, the Earth's lithosphere has a natural period of about 57 minutes). Earth tides, which raise and lower the bottom of the ocean, and the tide's own gravitational self attraction are both significant and further complicate the ocean's response to tidal forces.
Earth's tidal oscillations introduce dissipation at an average rate of about 3.75 terawatts. About 98% of this dissipation is by marine tidal movement. Dissipation arises as basin-scale tidal flows drive smaller-scale flows which experience turbulent dissipation. This tidal drag creates torque on the moon that gradually transfers angular momentum to its orbit, and a gradual increase in Earth–moon separation. The equal and opposite torque on the Earth correspondingly decreases its rotational velocity. Thus, over geologic time, the moon recedes from the Earth, at about 3.8 centimetres (1.5 in)/year, lengthening the terrestrial day. Day length has increased by about 2 hours in the last 600 million years. Assuming (as a crude approximation) that the deceleration rate has been constant, this would imply that 70 million years ago, day length was on the order of 1% shorter with about 4 more days per year.
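Those closing figures can be sanity-checked with simple arithmetic under the same crude constant-rate assumption; the 365.25-day year used below is an additional assumption of the sketch, not a value from the text.

```python
# Crude check of the day-length figures, assuming a constant slowdown rate.
slowdown_hours = 2.0                 # quoted lengthening over the last 600 Myr
per_myr = slowdown_hours / 600.0     # hours of lengthening per million years

shorter_70myr = 70 * per_myr                     # ~0.23 h (~14 minutes)
fraction = shorter_70myr / 24                    # ~1% shorter day
days_per_year_then = 365.25 * 24 / (24 - shorter_70myr)
extra_days = days_per_year_then - 365.25         # ~3.6 extra days per year

print(f"day ~{fraction:.1%} shorter 70 Myr ago, ~{extra_days:.1f} more days per year")
```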
Observation and prediction
From ancient times, tidal observation and discussion have increased in sophistication, first marking the daily recurrence, then tides' relationship to the sun and moon. Pytheas travelled to the British Isles about 325 BC and seems to be the first to have related spring tides to the phase of the moon.
In the 2nd century BC, the Babylonian astronomer Seleucus of Seleucia correctly described the phenomenon of tides in order to support his heliocentric theory. He correctly theorized that tides were caused by the moon, although he believed that the interaction was mediated by the pneuma. He noted that tides varied in time and strength in different parts of the world. According to Strabo (1.1.9), Seleucus was the first to link tides to the lunar attraction, and to state that the height of the tides depends on the moon's position relative to the sun.
The Naturalis Historia of Pliny the Elder collates many tidal observations, e.g., the spring tides are a few days after (or before) new and full moon and are highest around the equinoxes, though Pliny noted many relationships now regarded as fanciful. In his Geography, Strabo described tides in the Persian Gulf having their greatest range when the moon was furthest from the plane of the equator. All this despite the relatively small amplitude of Mediterranean basin tides. (The strong currents through the Euripus Strait and the Strait of Messina puzzled Aristotle.) Philostratus discussed tides in Book Five of The Life of Apollonius of Tyana. Philostratus mentions the moon, but attributes tides to "spirits". In Europe around 730 AD, the Venerable Bede described how the rising tide on one coast of the British Isles coincided with the fall on the other and described the time progression of high water along the Northumbrian coast.
The first tide table in China was recorded in 1056 AD primarily for visitors wishing to see the famous tidal bore in the Qiantang River. The first known British tide table is thought to be that of John Wallingford, who died Abbot of St. Albans in 1213, based on high water occurring 48 minutes later each day, and three hours earlier at the Thames mouth than upriver at London.
William Thomson (Lord Kelvin) led the first systematic harmonic analysis of tidal records starting in 1867. The main result was the building of a tide-predicting machine using a system of pulleys to add together six harmonic time functions. It was "programmed" by resetting gears and chains to adjust phasing and amplitudes. Similar machines were used until the 1960s.
The first known sea-level record of an entire spring–neap cycle was made in 1831 on the Navy Dock in the Thames Estuary. Many large ports had automatic tide gauge stations by 1850.
William Whewell first mapped co-tidal lines ending with a nearly global chart in 1836. In order to make these maps consistent, he hypothesized the existence of amphidromes where co-tidal lines meet in the mid-ocean. These points of no tide were confirmed by measurement in 1840 by Captain Hewett, RN, from careful soundings in the North Sea.
The tidal forces due to the Moon and Sun generate very long waves which travel all around the ocean following the paths shown in co-tidal charts. The time when the crest of the wave reaches a port then gives the time of high water at the port. The time taken for the wave to travel around the ocean also means that there is a delay between the phases of the moon and their effect on the tide. Springs and neaps in the North Sea, for example, are two days behind the new/full moon and first/third quarter moon. This is called the tide's age.
The ocean bathymetry greatly influences the tide's exact time and height at a particular coastal point. There are some extreme cases: the Bay of Fundy, on the east coast of Canada, is often stated to have the world's highest tides because of its shape, bathymetry and its distance from the continental shelf edge. Measurements made in November 1998 at Burntcoat Head in the Bay of Fundy recorded a maximum range of 16.3 metres (53 ft) and a highest predicted extreme of 17 metres (56 ft). Similar measurements made in March 2002 at Leaf Basin, Ungava Bay in northern Quebec gave similar values (allowing for measurement errors), a maximum range of 16.2 metres (53 ft) and a highest predicted extreme of 16.8 metres (55 ft). Ungava Bay and the Bay of Fundy lie similar distances from the continental shelf edge but Ungava Bay is free of pack ice for only about four months every year while the Bay of Fundy rarely freezes.
Southampton in the United Kingdom has a double high water caused by the interaction between the region's different tidal harmonics, caused primarily by the east/west orientation of the English Channel and the fact that when it is high water at Dover it is low water at Land's End (some 300 nautical miles distant) and vice versa. This is contrary to the popular belief that the flow of water around the Isle of Wight creates two high waters. The Isle of Wight is important, however, since it is responsible for the 'Young Flood Stand', which describes the pause of the incoming tide about three hours after low water.
Because the oscillation modes of the Mediterranean Sea and the Baltic Sea do not coincide with any significant astronomical forcing period, the largest tides are close to their narrow connections with the Atlantic Ocean. Extremely small tides also occur for the same reason in the Gulf of Mexico and Sea of Japan. Elsewhere, as along the southern coast of Australia, low tides can be due to the presence of a nearby amphidrome.
Isaac Newton's theory of gravitation first enabled an explanation of why there were generally two tides a day, not one, and offered hope for detailed understanding. Although it may seem that tides could be predicted via a sufficiently detailed knowledge of the instantaneous astronomical forcings, the actual tide at a given location is determined by astronomical forces accumulated over many days. Precise results require detailed knowledge of the shape of all the ocean basins—their bathymetry and coastline shape.
Current procedure for analysing tides follows the method of harmonic analysis introduced in the 1860s by William Thomson. It is based on the principle that the astronomical theories of the motions of sun and moon determine a large number of component frequencies, and at each frequency there is a component of force tending to produce tidal motion, but that at each place of interest on the Earth, the tides respond at each frequency with an amplitude and phase peculiar to that locality. At each place of interest, the tide heights are therefore measured for a period of time sufficiently long (usually more than a year in the case of a new port not previously studied) to enable the response at each significant tide-generating frequency to be distinguished by analysis, and to extract the tidal constants for a sufficient number of the strongest known components of the astronomical tidal forces to enable practical tide prediction. The tide heights are expected to follow the tidal force, with a constant amplitude and phase delay for each component. Because astronomical frequencies and phases can be calculated with certainty, the tide height at other times can then be predicted once the response to the harmonic components of the astronomical tide-generating forces has been found.
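In modern practice the response at each frequency is usually extracted with a least-squares fit of sine and cosine terms at the known astronomical frequencies. The sketch below fits just two constituents (M2 and S2) to a synthetic hourly record; the record, the noise level, and the 30-day span are assumptions made for illustration, and a real analysis would use a much longer record and many more constituents.

```python
import numpy as np

# Minimal harmonic-analysis sketch: fit amplitude and phase of two known
# constituents (M2, S2) to an hourly tide-height record by least squares.
periods_h = {"M2": 12.4206, "S2": 12.0000}
omegas = {k: 2 * np.pi / p for k, p in periods_h.items()}

t = np.arange(0, 30 * 24, 1.0)                    # 30 days of hourly samples
rng = np.random.default_rng(0)
true = {"M2": (1.20, 0.50), "S2": (0.40, 1.10)}   # (amplitude m, phase rad), assumed
h = sum(A * np.cos(omegas[k] * t - p) for k, (A, p) in true.items())
h += 0.05 * rng.standard_normal(t.size)           # observational noise

# Design matrix: a mean column plus one cosine and one sine column per constituent.
cols = [np.ones_like(t)]
for k in periods_h:
    cols += [np.cos(omegas[k] * t), np.sin(omegas[k] * t)]
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X, h, rcond=None)

for i, k in enumerate(periods_h):
    a, b = coef[1 + 2 * i], coef[2 + 2 * i]
    print(f"{k}: amplitude {np.hypot(a, b):.2f} m, phase {np.arctan2(b, a):.2f} rad")
```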
The main patterns in the tides are
- the twice-daily variation
- the difference between the first and second tide of a day
- the spring–neap cycle
- the annual variation
The Highest Astronomical Tide is the perigean spring tide when both the sun and the moon are closest to the Earth.
When confronted by a periodically varying function, the standard approach is to employ Fourier series, a form of analysis that uses sinusoidal functions as a basis set, having frequencies that are zero, one, two, three, etc. times the frequency of a particular fundamental cycle. These multiples are called harmonics of the fundamental frequency, and the process is termed harmonic analysis. If the basis set of sinusoidal functions suits the behaviour being modelled, relatively few harmonic terms need to be added. Orbital paths are very nearly circular, so sinusoidal variations are suitable for tides.
For the analysis of tide heights, the Fourier series approach has in practice to be made more elaborate than the use of a single frequency and its harmonics. The tidal patterns are decomposed into many sinusoids having many fundamental frequencies, corresponding (as in the lunar theory) to many different combinations of the motions of the Earth, the moon, and the angles that define the shape and location of their orbits.
For tides, then, harmonic analysis is not limited to harmonics of a single frequency. In other words, the harmonics are multiples of many fundamental frequencies, not just of the fundamental frequency of the simpler Fourier series approach. Their representation as a Fourier series having only one fundamental frequency and its (integer) multiples would require many terms, and would be severely limited in the time-range for which it would be valid.
The study of tide height by harmonic analysis was begun by Laplace, William Thomson (Lord Kelvin), and George Darwin. A.T. Doodson extended their work, introducing the Doodson Number notation to organise the hundreds of resulting terms. This approach has been the international standard ever since, and the complications arise as follows: the tide-raising force is notionally given by sums of several terms. Each term is of the form
- A·cos(w·t + p)
where A is the amplitude, w is the angular frequency, usually given in degrees per hour corresponding to t measured in hours, and p is the phase offset with regard to the astronomical state at time t = 0. There is one term for the moon and a second term for the sun. The phase p of the first harmonic for the moon term is called the lunitidal interval or high water interval. The next step is to accommodate the harmonic terms due to the elliptical shape of the orbits. Accordingly, the value of A is not a constant but varies slightly with time about some average figure. Replace it then by A(t), where A(t) varies sinusoidally about the average value A, similar to the cycles and epicycles of Ptolemaic theory. Accordingly,
- A(t) = A·(1 + Aa·cos(wa·t + pa)) ,
which is to say an average value A with a sinusoidal variation about it of magnitude Aa , with frequency wa and phase pa . Thus the simple term is now the product of two cosine factors:
- A·[1 + Aa·cos(wa ·t + pa)]·cos(w·t + p)
Given that for any x and y
- cos(x)·cos(y) = ½·cos( x + y ) + ½·cos( x–y ) ,
it is clear that a compound term involving the product of two cosine terms each with their own frequency is the same as three simple cosine terms that are to be added at the original frequency and also at frequencies which are the sum and difference of the two frequencies of the product term. (Three, not two terms, since the whole expression is (1 + cos(x))·cos(y) .) Consider further that the tidal force on a location depends also on whether the moon (or the sun) is above or below the plane of the equator, and that these attributes have their own periods also incommensurable with a day and a month, and it is clear that many combinations result. With a careful choice of the basic astronomical frequencies, the Doodson Number annotates the particular additions and differences to form the frequency of each simple cosine term.
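That splitting of one modulated term into constituents at the carrier, sum, and difference frequencies can be verified numerically; the particular amplitudes, frequencies, and phases in the snippet below are arbitrary choices, not values from the text.

```python
import numpy as np

# Numerical check that A*(1 + Aa*cos(wa*t + pa))*cos(w*t + p) equals the sum of
# a term at the original frequency plus two "sideband" terms at w + wa and w - wa.
A, Aa = 1.0, 0.3
w, wa = 2 * np.pi / 12.42, 2 * np.pi / (27.55 * 24)   # radians per hour (arbitrary)
p, pa = 0.7, 1.9
t = np.linspace(0, 1000, 20001)

modulated = A * (1 + Aa * np.cos(wa * t + pa)) * np.cos(w * t + p)
three_terms = (A * np.cos(w * t + p)
               + 0.5 * A * Aa * np.cos((w + wa) * t + (p + pa))
               + 0.5 * A * Aa * np.cos((w - wa) * t + (p - pa)))

print(np.allclose(modulated, three_terms))   # True
```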
Remember that astronomical tides do not include weather effects. Also, changes to local conditions (sandbank movement, dredging harbour mouths, etc.) away from those prevailing at the measurement time affect the tide's actual timing and magnitude. Organisations quoting a "highest astronomical tide" for some location may exaggerate the figure as a safety factor against analytical uncertainties, distance from the nearest measurement point, changes since the last observation time, ground subsidence, etc., to avert liability should an engineering work be overtopped. Special care is needed when assessing the size of a "weather surge" by subtracting the astronomical tide from the observed tide.
Careful Fourier data analysis over a nineteen-year period (the National Tidal Datum Epoch in the U.S.) uses frequencies called the tidal harmonic constituents. Nineteen years is preferred because the Earth, moon and sun's relative positions repeat almost exactly in the Metonic cycle of 19 years, which is long enough to include the 18.613-year lunar nodal tidal constituent. This analysis can be done using only the knowledge of the forcing period, but without detailed understanding of the mathematical derivation, which means that useful tidal tables have been constructed for centuries. The resulting amplitudes and phases can then be used to predict the expected tides. These are usually dominated by the constituents near 12 hours (the semi-diurnal constituents), but there are major constituents near 24 hours (diurnal) as well. Longer-term constituents are fortnightly (14-day), monthly, and semiannual. Semi-diurnal tides dominate most coastlines, but some areas such as the South China Sea and the Gulf of Mexico are primarily diurnal. In the semi-diurnal areas, the primary constituents M2 (lunar) and S2 (solar) periods differ slightly, so that the relative phases, and thus the amplitude of the combined tide, change fortnightly (14-day period).
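That fortnightly modulation is simply the beat between the slightly different M2 and S2 frequencies, and it can be computed directly from the two periods:

```python
# Spring-neap (beat) period implied by the slightly different M2 and S2 periods.
M2, S2 = 12.4206, 12.0000            # constituent periods in hours
f_m2, f_s2 = 1 / M2, 1 / S2          # cycles per hour
beat_hours = 1 / (f_s2 - f_m2)
print(f"beat period ~ {beat_hours / 24:.1f} days")   # ~14.8 days
```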
In a cotidal chart of the M2 constituent, each cotidal line differs by one hour from its neighbors, and the thicker lines show tides in phase with equilibrium at Greenwich. The lines rotate around the amphidromic points counterclockwise in the northern hemisphere so that from the Baja California Peninsula to Alaska and from France to Ireland the M2 tide propagates northward. In the southern hemisphere this direction is clockwise. On the other hand, the M2 tide propagates counterclockwise around New Zealand, but this is because the islands act as a dam and permit the tides to have different heights on the islands' opposite sides. (The tides do propagate northward on the east side and southward on the west coast, as predicted by theory.)
The exception is at Cook Strait where the tidal currents periodically link high to low water. This is because cotidal lines 180° around the amphidromes are in opposite phase, for example high water across from low water at each end of Cook Strait. Each tidal constituent has a different pattern of amplitudes, phases, and amphidromic points, so the M2 patterns cannot be used for other tide components.
Because the moon is moving in its orbit around the Earth and in the same sense as the Earth's rotation, a point on the Earth must rotate slightly further to catch up, so that the time between semidiurnal tides is not twelve but 12.4206 hours—a bit over twenty-five minutes extra. The two peaks are not equal. The two high tides a day alternate in maximum height: a lower high (in one example, just under three feet), then a higher high (just over three feet), and again a lower high. Likewise for the low tides.
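The 12.4206-hour figure follows from how long the Earth takes to bring the Moon back over the same meridian. A quick check, using rounded values for the sidereal day and sidereal month (assumed constants, not values from the text):

```python
# Mean interval between successive upper transits of the Moon (the tidal lunar
# day), computed from the sidereal day and the sidereal month.
sidereal_day_h = 23.9345
sidereal_month_h = 27.3217 * 24

lunar_day_h = 1 / (1 / sidereal_day_h - 1 / sidereal_month_h)
semidiurnal_h = lunar_day_h / 2
print(f"lunar day ~ {lunar_day_h:.4f} h, half ~ {semidiurnal_h:.4f} h")
print(f"extra over 12 h ~ {(semidiurnal_h - 12) * 60:.1f} minutes")   # ~25 min
```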
When the Earth, moon, and sun are in line (sun–Earth–moon, or sun–moon–Earth) the two main influences combine to produce spring tides; when the two forces are opposing each other as when the angle moon–Earth–sun is close to ninety degrees, neap tides result. As the moon moves around its orbit it changes from north of the equator to south of the equator. The alternation in high tide heights becomes smaller, until they are the same (at the lunar equinox, the moon is above the equator), then redevelop but with the other polarity, waxing to a maximum difference and then waning again.
The tides' influence on current flow is much more difficult to analyse, and data is much more difficult to collect. A tidal height is a simple number which applies to a wide region simultaneously. A flow has both a magnitude and a direction, both of which can vary substantially with depth and over short distances due to local bathymetry. Also, although a water channel's center is the most useful measuring site, mariners object when current-measuring equipment obstructs waterways. A flow proceeding up a curved channel is the same flow, even though its direction varies continuously along the channel. Surprisingly, flood and ebb flows are often not in opposite directions. Flow direction is determined by the upstream channel's shape, not the downstream channel's shape. Likewise, eddies may form in only one flow direction.
Nevertheless, current analysis is similar to tidal analysis: in the simple case, at a given location the flood flow is in mostly one direction, and the ebb flow in another direction. Flood velocities are given positive sign, and ebb velocities negative sign. Analysis proceeds as though these are tide heights.
In more complex situations, the main ebb and flood flows do not dominate. Instead, the flow direction and magnitude trace an ellipse over a tidal cycle (on a polar plot) instead of along the ebb and flood lines. In this case, analysis might proceed along pairs of directions, with the primary and secondary directions at right angles. An alternative is to treat the tidal flows as complex numbers, as each value has both a magnitude and a direction.
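The complex-number treatment can be sketched briefly: write each velocity observation as east + i·north, fit a single constituent as a pair of counter-rotating circular components, and read the ellipse axes off the fitted amplitudes. The synthetic current record and the choice of the M2 period below are assumptions made for illustration, not a prescribed procedure from the text.

```python
import numpy as np

# Sketch: treat a tidal current record as complex numbers (east + i*north) and
# fit one constituent (M2) as two counter-rotating circular components, from
# which the tidal-ellipse axes follow.
omega = 2 * np.pi / 12.4206                 # M2, radians per hour
t = np.arange(0, 15 * 24, 1.0)              # 15 days of hourly samples

# synthetic ellipse: semi-major 1.0 m/s (east), semi-minor 0.3 m/s (north)
w = 1.0 * np.cos(omega * t) + 1j * 0.3 * np.sin(omega * t)

# least-squares fit of w(t) = Wp*exp(+i*omega*t) + Wm*exp(-i*omega*t)
X = np.column_stack([np.exp(1j * omega * t), np.exp(-1j * omega * t)])
(Wp, Wm), *_ = np.linalg.lstsq(X, w, rcond=None)

semi_major = abs(Wp) + abs(Wm)
semi_minor = abs(abs(Wp) - abs(Wm))
print(f"semi-major ~ {semi_major:.2f} m/s, semi-minor ~ {semi_minor:.2f} m/s")
```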
Tide flow information is most commonly seen on nautical charts, presented as a table of flow speeds and bearings at hourly intervals, with separate tables for spring and neap tides. The timing is relative to high water at some harbour where the tidal behaviour is similar in pattern, though it may be far away.
As with tide height predictions, tide flow predictions based only on astronomical factors do not incorporate weather conditions, which can completely change the outcome.
The tidal flow through Cook Strait between the two main islands of New Zealand is particularly interesting, as the tides on each side of the strait are almost exactly out of phase, so that one side's high water is simultaneous with the other's low water. Strong currents result, with almost zero tidal height change in the strait's center. Yet, although the tidal surge normally flows in one direction for six hours and in the reverse direction for six hours, a particular surge might last eight or ten hours with the reverse surge enfeebled. In especially boisterous weather conditions, the reverse surge might be entirely overcome so that the flow continues in the same direction through three or more surge periods.
A further complication for Cook Strait's flow pattern is that the tide at the north side (e.g. at Nelson) follows the common bi-weekly spring–neap tide cycle (as found along the west side of the country), but the south side's tidal pattern has only one cycle per month, as on the east side: Wellington, and Napier.
The graph of Cook Strait's tides shows separately the high water and low water height and time, through November 2007; these are not measured values but instead are calculated from tidal parameters derived from years-old measurements. Cook Strait's nautical chart offers tidal current information. For instance the January 1979 edition for 41°13·9’S 174°29·6’E (north west of Cape Terawhiti) refers timings to Westport while the January 2004 issue refers to Wellington. Near Cape Terawhiti in the middle of Cook Strait the tidal height variation is almost nil while the tidal current reaches its maximum, especially near the notorious Karori Rip. Aside from weather effects, the actual currents through Cook Strait are influenced by the tidal height differences between the two ends of the strait and as can be seen, only one of the two spring tides at the north end (Nelson) has a counterpart spring tide at the south end (Wellington), so the resulting behaviour follows neither reference harbour.
Tidal energy can be extracted by two means: inserting a water turbine into a tidal current, or building ponds that release/admit water through a turbine. In the first case, the energy amount is entirely determined by the timing and tidal current magnitude. However, the best currents may be unavailable because the turbines would obstruct ships. In the second, the impoundment dams are expensive to construct, natural water cycles are completely disrupted, and ship navigation is impeded. However, with multiple ponds, power can be generated at chosen times. So far, there are few installed systems for tidal power generation (most famously, La Rance near Saint-Malo, France), and each faces many difficulties. Aside from environmental issues, simply withstanding corrosion and biological fouling poses engineering challenges.
Tidal power proponents point out that, unlike wind power systems, generation levels can be reliably predicted, save for weather effects. While some generation is possible for most of the tidal cycle, in practice turbines lose efficiency at lower operating rates. Since the power available from a flow is proportional to the cube of the flow speed, the times during which high power generation is possible are brief.
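The cube law is why the usable generation windows are brief: halving the current speed cuts the available power by a factor of eight. A sketch of the kinetic power flux through a turbine's swept area follows; the seawater density, rotor area, conversion efficiency, and speeds are illustrative assumptions, not figures from the text.

```python
# Kinetic power available to a turbine in a tidal current: P = 1/2 * rho * A * v^3.
rho = 1025.0                       # seawater density, kg/m^3
area = 300.0                       # swept rotor area, m^2 (~20 m diameter, assumed)
efficiency = 0.4                   # overall conversion efficiency (assumed)

for v in (0.5, 1.0, 2.0, 3.0):     # current speed, m/s
    p_kw = 0.5 * rho * area * v**3 * efficiency / 1000
    print(f"{v:.1f} m/s -> ~{p_kw:,.0f} kW")
```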
Tidal flows are important for navigation, and significant errors in position occur if they are not accommodated. Tidal heights are also important; for example many rivers and harbours have a shallow "bar" at the entrance which prevents boats with significant draft from entering at low tide.
Until the advent of automated navigation, competence in calculating tidal effects was important to naval officers. The certificate of examination for lieutenants in the Royal Navy once declared that the prospective officer was able to "shift his tides".
Tidal flow timings and velocities appear in tide charts or a tidal stream atlas. Tide charts come in sets. Each chart covers a single hour between one high water and another (they ignore the leftover 24 minutes) and show the average tidal flow for that hour. An arrow on the tidal chart indicates the direction and the average flow speed (usually in knots) for spring and neap tides. If a tide chart is not available, most nautical charts have "tidal diamonds" which relate specific points on the chart to a table giving tidal flow direction and speed.
The standard procedure to counteract tidal effects on navigation is to (1) calculate a "dead reckoning" position (or DR) from travel distance and direction, (2) mark the chart (with a vertical cross like a plus sign) and (3) draw a line from the DR in the tide's direction. The distance the tide moves the boat along this line is computed by the tidal speed, and this gives an "estimated position" or EP (traditionally marked with a dot in a triangle).
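Numerically, the estimated position is simply the dead-reckoning displacement plus the tidal set and drift added as vectors over the same interval. In the sketch below the course, speed, and tidal-stream values are made up for illustration.

```python
import math

# Estimated position (EP) = dead-reckoning displacement + tidal set/drift vector,
# here over one hour. All course, speed, and tidal-stream values are invented.
def displacement(bearing_deg, distance_nm):
    """North/East displacement in nautical miles for a true bearing."""
    rad = math.radians(bearing_deg)
    return distance_nm * math.cos(rad), distance_nm * math.sin(rad)

dr_n, dr_e = displacement(bearing_deg=70, distance_nm=6.0)       # boat: 070 deg, 6 kn
tide_n, tide_e = displacement(bearing_deg=180, distance_nm=1.5)  # tide sets 180 deg at 1.5 kn

ep_n, ep_e = dr_n + tide_n, dr_e + tide_e
ground_track = math.degrees(math.atan2(ep_e, ep_n)) % 360
ground_speed = math.hypot(ep_n, ep_e)
print(f"track made good ~ {ground_track:.0f} deg, speed over ground ~ {ground_speed:.1f} kn")
```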
Nautical charts display the water's "charted depth" at specific locations with "soundings" and the use of bathymetric contour lines to depict the submerged surface's shape. These depths are relative to a "chart datum", which is typically the water level at the lowest possible astronomical tide (although other datums are commonly used, especially historically, and tides may be lower or higher for meteorological reasons) and are therefore the minimum possible water depth during the tidal cycle. "Drying heights" may also be shown on the chart, which are the heights of the exposed seabed at the lowest astronomical tide.
Tide tables list each day's high and low water heights and times. To calculate the actual water depth, add the charted depth to the published tide height. Depth for other times can be derived from tidal curves published for major ports. The rule of twelfths can suffice if an accurate curve is not available. This approximation presumes that the increase in depth in the six hours between low and high water is: first hour — 1/12, second — 2/12, third — 3/12, fourth — 3/12, fifth — 2/12, sixth — 1/12.
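The rule of twelfths is easy to tabulate; in the sketch below the low- and high-water heights, and the assumption of an even six-hour interval, are invented for illustration.

```python
# Rule-of-twelfths interpolation between one low water and the next high water.
low, high = 1.0, 5.8                  # metres above chart datum (assumed values)
twelfths = [1, 2, 3, 3, 2, 1]
range_m = high - low

height = low
for hour, n in enumerate(twelfths, start=1):
    height += n * range_m / 12
    print(f"{hour} h after low water: ~{height:.2f} m")
```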
Intertidal ecology is the study of intertidal ecosystems, where organisms live between the low and high water lines. At low water, the intertidal is exposed (or ‘emersed’) whereas at high water, the intertidal is underwater (or ‘immersed’). Intertidal ecologists therefore study the interactions between intertidal organisms and their environment, as well as among the different species. The most important interactions may vary according to the type of intertidal community. The broadest classifications are based on substrates — rocky shore or soft bottom.
Intertidal organisms experience a highly variable and often hostile environment, and have adapted to cope with and even exploit these conditions. One easily visible feature is vertical zonation, in which the community divides into distinct horizontal bands of specific species at each elevation above low water. A species' ability to cope with desiccation determines its upper limit, while competition with other species sets its lower limit.
Humans use intertidal regions for food and recreation. Overexploitation can damage intertidals directly. Other anthropogenic actions such as introducing invasive species and climate change have large negative effects. Marine Protected Areas are one option communities can apply to protect these areas and aid scientific research.
The approximately fortnightly tidal cycle has large effects on intertidal and marine organisms. Hence their biological rhythms tend to occur in rough multiples of this period. Many other animals, such as the vertebrates, display similar rhythms. Examples include gestation and egg hatching. In humans, the menstrual cycle lasts roughly a lunar month, an even multiple of the tidal period. Such parallels at least hint at the common descent of all animals from a marine ancestor.
Shallow areas in otherwise open water can experience rotary tidal currents, flowing in directions that continually change, so that the flow direction (though not the flow itself) completes a full rotation in about 12½ hours (for example, the Nantucket Shoals).
In addition to oceanic tides, large lakes can experience small tides and even planets can experience atmospheric tides and Earth tides. These are continuum mechanical phenomena. The first two take place in fluids. The third affects the Earth's thin solid crust surrounding its semi-liquid interior (with various modifications).
Large lakes such as Superior and Erie can experience tides of 1 to 4 cm, but these can be masked by meteorologically induced phenomena such as seiche. The tide in Lake Michigan has been described as 0.5 to 1.5 inches (13 to 38 mm), with some accounts citing as much as 1¾ inches (44 mm).
Atmospheric tides are negligible at ground level and aviation altitudes, masked by weather's much more important effects. Atmospheric tides are both gravitational and thermal in origin and are the dominant dynamics from about 80 to 120 kilometres (50 to 75 mi), above which the molecular density becomes too low to support fluid behavior.
Earth tides or terrestrial tides affect the entire Earth's mass, which acts similarly to a liquid gyroscope with a very thin crust. The Earth's crust shifts (in/out, east/west, north/south) in response to lunar and solar gravitation, ocean tides, and atmospheric loading. While negligible for most human activities, terrestrial tides' semi-diurnal amplitude can reach about 55 centimetres (22 in) at the equator—15 centimetres (5.9 in) due to the sun—which is important in GPS calibration and VLBI measurements. Precise astronomical angular measurements require knowledge of the Earth's rotation rate and nutation, both of which are influenced by Earth tides. The semi-diurnal M2 Earth tides are nearly in phase with the moon with a lag of about two hours.
Some particle physics experiments must adjust for terrestrial tides. For instance, at CERN and SLAC, the very large particle accelerators account for terrestrial tides. Among the relevant effects are circumference deformation for circular accelerators and particle beam energy. Since tidal forces generate currents in conducting fluids in the Earth's interior, they in turn affect the Earth's magnetic field. Earth tides have also been linked to the triggering of earthquakes. See also earthquake prediction.
Galactic tides are the tidal forces exerted by galaxies on stars within them and satellite galaxies orbiting them. The galactic tide's effects on the Solar System's Oort cloud are believed to be responsible for about 90 percent of long-period comets.
Tsunamis, the large waves that occur after earthquakes, are sometimes called tidal waves, but this name reflects their resemblance to the tide rather than any actual link to it. Other phenomena unrelated to tides but described using the word tide include rip tide, storm tide, hurricane tide, and black or red tides.
- Famous Tidal Prediction Pioneers and Notable Contributions | http://en.wikipedia.org/wiki/Tide | 13 |
20 | A paradox is a statement or proposition that seems self-contradictory or absurd but in reality expresses a possible truth.
These questions present you with a paradox and ask you to resolve it or explain how that contradiction could exist. Paradox questions are rare and appear more often at the higher skill levels. These questions usually contain the keywords explanation, resolve, or account.
These questions often ask you to play the role of a top researcher where you have to reconcile conflicting data.
Here are some examples of the ways in which these questions are worded:
- Which of the following, if true, would help to resolve the apparent paradox presented above?
- Which of the following, if true, contributes most to an explanation of the apparent discrepancy described above?
- Each of the following could help account for this discrepancy, EXCEPT:
Sometimes paradox questions will have two speakers, or will present part of the text in bold, and ask you to compare the statements and resolve the conflict.
How to approach paradox questions:
- Read the argument and find the apparent paradox, discrepancy, or contradiction.
- State the apparent paradox, discrepancy, or contradiction in your own words.
- Use POE (process of elimination). The best answer will explain how both sides of the paradox, discrepancy, or contradiction can be true. Eliminate answers that are out of scope.
Inflation rose by 5.1% over the 2nd quarter, up from 4.1% during the first quarter of the year, and higher than the 3.3% recorded during the same time last year. However, the higher price index did not seem to alarm Wall Street and stock prices remained steady.
Which of the following, if true, could explain the reaction of Wall Street?
- Stock prices were steady because of a fear that inflation would continue.
- The President announced that he was concerned about rising inflation.
- Economists warned that inflation would persist.
- Much of the quarterly increase in the price level was due to a summer drought's effect on food prices.
- Other unfavorable economic news had overshadowed the fact of inflation.
Explanation: This is a paradox because the high inflation report would seem to indicate that the stock market should go down.
A fear that inflation would continue (A), an announcement by the president that he was concerned about inflation (B), economists' warnings about inflation (C), and other unfavorable economic news (E) would all tend to cause stock prices to decline and cause alarm on Wall Street.
What we are looking for instead is an explanation which suggests why a high-inflation report would not spook the markets. (D) is most appropriate. If most of the quarterly inflation was due to a rise in food prices caused by a drought, then other prices rose less or no more than in the last quarter. Since the drought is probably a temporary phenomenon, it may be expected that inflation will decline next quarter. Thus, there is no cause for alarm on Wall Street, and the high-inflation report should not scare the markets. | http://www.lsat-center.com/lsatc4s5b.htm | 13 |
19 | Hearing impairment is a disability affecting about 1 in 10 North Americans. Hearing impairment results from a structural abnormality (such as a hole in the eardrum) that may or may not produce a functional disability (such as diminished hearing).
Hearing loss can be conductive (due to faulty transmission of sound waves) or sensorineural (faulty sound reception by nerve cells), or both.
Common causes of conductive hearing loss are wax blocking the ear, a perforated eardrum, or fluid in the ears.
Common reasons for sensorineural deafness are noise exposure, age-related changes, and ototoxic drugs (that damage hearing).
Hearing loss can be:
- mild (a loss up to 40 dB) - with trouble in hearing ordinary conversation
- moderate (40-60 dB) - where voices must be raised to be heard
- severe (over 60 dB loss) - where people must shout to be heard.
According to the World Health Organization, the term "deaf" should only be applied to individuals with hearing impairment so severe that they cannot benefit from sound amplification or hearing aid assistance.
The most common cause of sensorineural deafness is aging which produces presbycusis - literally, "old hearing." Those with presbycusis often complain not only of hearing loss - usually in both ears - but also of associated tinnitus or ringing in the ears, and sometimes dizziness.
It takes only a slight loss of hearing to make life difficult: although conversation is audible at low frequencies (deeper voices), higher-pitched voices are not as easy to hear.
Typically, with hearing loss, the ability to hear high sounds goes first so that there is trouble hearing birds or women's voices, followed by the loss of low-tone reception.
The elderly may have trouble hearing the phone ring or distinguishing consonants. The problem is particularly acute when there is a lot of background noise, as on a bus, at the dinner table, or when standing next to an open window facing traffic.
Hearing impairment is measured by the level of loss in what are called decibels (dB) hearing level (HL). Decibels are like degrees on a thermometer. As temperature increases, so does the number of degrees. As the volume of sound increases, so does the number of decibels.
Normal conversation is usually between 45 to 55 dB. A baby crying hovers around 60 dB and downtown traffic can blister the ear at 90 dB.
If you can hear sounds between 0 and 25 dB HL most of the time, your hearing is normal or near normal and you probably do not need a hearing aid, although it may enhance your abilities in some situations.
If you only hear sounds above 25 dB HL, your hearing loss may be mild, moderate, severe, or profound. Hearing aids are designed in part to compensate for the level of loss.
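The banding described above can be written as a simple lookup. The cutoffs below follow the figures quoted in this article; the split of "severe" from "profound" at 90 dB HL is a commonly used convention assumed here, not something stated in the text.

```python
# Simple banding of hearing level (dB HL) using the cutoffs quoted in this article.
def classify_hearing_level(threshold_db_hl: float) -> str:
    if threshold_db_hl <= 25:
        return "normal / near normal"
    if threshold_db_hl <= 40:
        return "mild loss"
    if threshold_db_hl <= 60:
        return "moderate loss"
    if threshold_db_hl < 90:
        return "severe loss"
    return "profound loss (assumed cutoff)"

for db in (20, 35, 50, 70, 95):
    print(db, "dB HL ->", classify_hearing_level(db))
```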
The causes of hearing loss may be congenital (present at birth) - genetic, use of ototoxic drugs during pregnancy, prenatal rubella in expectant mothers, infections during pregnancy, perinatal anoxia (fetal oxygen lack), or Rh blood disease. Or, the cause may be acquired hearing loss - noise exposure, presbycusis, infections that affect the middle ear and inner ear such as mumps, measles and influenza, middle ear infections, ototoxicity (drugs that harm the inner ear), head injuries, benign tumors of the hearing nerve (acoustic neuroma), and cancer (rare).
Appropriately chosen, properly fitted, and regularly checked, hearing aids can greatly improve the quality of life for hearing-impaired persons. They are prescribed according to the type and severity of hearing loss, how well someone can manipulate the aid, and the condition of the ear canal. They work by amplifying sound and are most effective in quiet areas - as for one to one conversations and small group interaction.
Some devices for the hard of hearing are designed for a particular situation. For example, for watching TV a specific amplifier improves the sound signal and mutes background noise.
Most hearing aids currently sold are behind the ear (BTE), in the ear (ITE), or in the ear canal (ITC).
- Is there significant hearing loss?
- Is the loss conductive or sensorineural?
- What is the degree of loss - mild, moderate, or severe?
- What is causing the loss?
- Is a hearing aid needed?
- What kind of hearing aid do you recommend?
- What degree of improvement in hearing can be expected?
- What will a hearing aid cost? | http://www.healthcentral.com/encyclopedia/408/598.html | 13
21 | Plymouth Colony (sometimes New Plymouth or The Old Colony) was an English colonial venture in North America from 1620 until 1691. Founded by a group of separatists who later came to be known as the Pilgrim Fathers, Plymouth Colony was one of the earliest colonies to be founded by the English in North America. The citizens of Plymouth were fleeing religious persecution and searching for a place to worship God as they saw fit. The social and legal systems of the colony were thus closely tied to their religious beliefs. Many of the people and events surrounding Plymouth Colony have become part of American mythology, including the North American tradition known as Thanksgiving and the monument known as Plymouth Rock. The colonists believed that they were constructing a better society than the one they had left behind, one that would be characterized by caring, sharing and a concern for the common good. Their early collaboration with the indigenous people did not survive very long into the American experience. However, while it did last it was a significant aspect of the early settlement of the New World.
- See also: Pilgrim Fathers
Plymouth Colony was founded by a group of people who later came to be known as the "Pilgrims." The core group—roughly 40 percent of the adults and 56 percent of the family groupings—was part of a congregation of religious separatists led by pastor John Robinson, church elder William Brewster, and William Bradford. While still in the town of Scrooby in Nottinghamshire, England, the congregation began to feel the pressures of religious persecution. During the Hampton Court Conference, King James I had declared the Puritans and Protestant Separatists to be undesirable and, in 1607, the Archbishop of York raided the homes of, and imprisoned, several members of the congregation. The congregation thus left England and emigrated to the Netherlands, first to Amsterdam and finally to Leiden, in 1609.
In Leiden, the congregation found the freedom to worship as it chose, but Dutch society was unfamiliar to these immigrants. Scrooby had been an agricultural community, whereas Leiden was a thriving industrial center, and the pace of life was hard on the Pilgrims. Furthermore, though the community remained close-knit, their children began adopting the Dutch customs and language. The Pilgrims were also still not free from the persecutions of the English Crown; after William Brewster in 1618 published comments highly critical of the King of England and the Anglican Church, English authorities came to Leiden to arrest him. Though Brewster escaped arrest, the events spurred the congregation to move even further from England.
In June 1619, the Pilgrims obtained a land patent from the London Virginia Company, allowing them to settle at the mouth of the Hudson River. They then sought financing through the Merchant Adventurers, a group of Puritan businessmen who viewed colonization as a means of both spreading their religion and making a profit. Upon arriving in America, the Pilgrims began working to repay their debts.
Landings at Provincetown and Plymouth
The Mayflower anchored at Provincetown Harbor on November 11, 1620. The Pilgrims did not have a patent to settle this area, thus some passengers began to question their right to land; they complained that there was no legal authority to establish a colony. In response to this, a group of colonists, still aboard the ship as it lay off-shore, drafted and ratified the first governing document of the colony, the Mayflower Compact, the intent of which was to establish a means of governing the colony. Though it did little more than confirm that the colony would be governed like any English town, it did serve the purpose of relieving the concerns of many of the settlers.
The colonists dropped anchor in Plymouth Harbor on December 17 and spent three days surveying for a settlement site. They rejected several sites, including one on Clark's Island and another at the mouth of the Jones River, in favor of the site of a recently abandoned Native American settlement named Patuxet. The location was chosen largely for its defensive position; the settlement would be centered on two hills: Cole's Hill, where the village would be built, and Fort Hill, where a defensive cannon would be stationed. Also important in choosing the site was that the prior Indian villagers had cleared much of the land, making agriculture relatively easy. Although there are no contemporary accounts to verify the legend, Plymouth Rock is often hailed as the point where the colonists first set foot on their new homeland.
On December 21, 1620, the first landing party arrived at the site of what would become the settlement of Plymouth, Massachusetts. Plans to immediately begin building houses, however, were delayed by inclement weather until December 23. As the building progressed, 20 men always remained ashore for security purposes, while the rest of the work crews returned each night to the Mayflower. Women, children, and the infirm remained on board the Mayflower; many had not left the ship for six months. The first structure, a "common house" of wattle and daub, took two weeks to complete in the harsh New England winter. In the following weeks, the rest of the settlement slowly took shape. The living and working structures were built on the relatively flat top of Cole's Hill, and a wooden platform was constructed to support the cannon that would defend the settlement from nearby Fort Hill. Many of the men were too ill to work, and some died of their illnesses. Thus, only seven residences (of a planned 19) and four common houses were constructed during the first winter.
By the end of January, enough of the settlement had been built to begin unloading provisions from the Mayflower. In mid-February, after several tense encounters with local Native Americans, the male residents of the settlement organized themselves into military orders; Myles Standish was designated as the commanding officer. By the end of the month, five cannon had been defensively positioned on Fort Hill. John Carver was elected governor to replace Governor Martin.
On March 16, 1621, the first formal contact with the Native Americans occurred. A Native American named Samoset, originally from Pemaquid Point in modern Maine, walked boldly into the midst of the settlement and proclaimed, "Welcome, Englishmen!" He had learned some English from fishermen who worked off the coast of Maine and gave them a brief introduction to the region's history and geography. It was during this meeting that the Pilgrims found out that the previous residents of the Native American village, Patuxet, had probably died of smallpox. They also discovered that the supreme leader of the region was a Wampanoag Native American sachem (chief) by the name of Massasoit; and they learned of the existence of Squanto—also known by his full Massachusett name of Tisquantum—a Native American originally from Patuxet. Squanto had spent time in Europe and spoke English quite well. Samoset spent the night in Plymouth and agreed to arrange a meeting with some of Massasoit's men.
Massasoit and Squanto were apprehensive about the Pilgrims. In Massasoit's first contact with the English, several men of his tribe had been killed in an unprovoked attack by English sailors. He also knew of the Pilgrims' theft of the corn stores and grave robbing. Squanto had been abducted in 1614 by the English explorer Thomas Hunt and had spent five years in Europe, first as a slave for a group of Spanish monks, then in England. He had returned to New England in 1619, acting as a guide to the English explorer Ferdinando Gorges. Massasoit and his men had massacred the crew of the ship on which Squanto returned, and had taken in Squanto.
Samoset returned to Plymouth on March 22 with a delegation from Massasoit that included Squanto; Massasoit himself joined them shortly thereafter. After an exchange of gifts, Massasoit and Governor Carver established a formal treaty of peace. This treaty ensured that each people would not bring harm to the other, that Massasoit would send his allies to make peaceful negotiations with Plymouth, and that they would come to each other's aid in a time of war.
On April 5, 1621, after being anchored for almost four months in Plymouth Harbor, the Mayflower set sail for England. Nearly half of the original 102 passengers died during the first winter. As William Bradford wrote, "of these one hundred persons who came over in this first ship together, the greatest half died in the general mortality, and most of them in two or three months' time". Several of the graves on Cole's Hill were uncovered in 1855; their bodies were disinterred and moved to a site near Plymouth Rock.
The autumn celebration in late 1621 that has become known as "The First Thanksgiving" was not known as such to the Pilgrims. The Pilgrims did recognize a celebration known as a "Thanksgiving," which was a solemn ceremony of praise and thanks to God for a congregation's good fortune. The first such Thanksgiving as the Pilgrims would have called it did not occur until 1623, in response to the good news of the arrival of additional colonists and supplies. That event probably occurred in July and consisted of a full day of prayer and worship and probably very little revelry.
The event now commemorated by the United States at the end of November each year is more properly termed a "harvest festival." The festival was probably held in early October 1621 and was celebrated by the 51 surviving Pilgrims, along with Massasoit and 90 of his men. Two contemporary accounts of the event survive: Of Plimoth Plantation by William Bradford as well as Mourt's Relation by Edward Winslow. The celebration lasted three days and featured a feast that included numerous types of waterfowl, wild turkeys and fish procured by the colonists, as well as five deer brought by the Native Americans.
Early relations with the Native Americans
After the departure of Massasoit and his men, Squanto remained in Plymouth to teach the Pilgrims how to survive in New England, for example using dead fish to fertilize the soil. Shortly after the departure of the Mayflower, Governor Carver suddenly died. William Bradford was elected to replace him and would go on to lead the colony through much of its formative years.
As promised by Massasoit, numerous Native Americans arrived at Plymouth throughout the middle of 1621 with pledges of peace. On July 2, a party of Pilgrims, led by Edward Winslow (who would himself become the chief diplomat of the colony), set out to continue negotiations with the chief. The delegation also included Squanto, who acted as a translator. After traveling for several days, they arrived at Massasoit's capital, the village of Sowams near Narragansett Bay. After meals and an exchange of gifts, Massasoit agreed to an exclusive trading pact with the English, and thus the French, who were also frequent traders in the area, were no longer welcome. Squanto remained behind and traveled the area to establish trading relations with several local tribes.
In late July, a boy by the name of John Billington became lost for some time in the woods around the colony. It was reported he was found by the Nauset, the same group of Native Americans on Cape Cod from whom the Pilgrims had stolen corn seed the prior year upon their first explorations. The English organized a party to return Billington to Plymouth. The Pilgrims agreed to reimburse the Nauset for the stolen goods in return for the Billington boy. This negotiation would do much to secure further peace with the Native Americans in the area.
During their dealings with the Nausets over the release of John Billington, the Pilgrims learned of troubles that Massasoit was experiencing. Massasoit, Squanto, and several other Wampanoags had been captured by Corbitant, sachem of the Pocasset tribe. A party of ten men, under the leadership of Myles Standish, set out to find and execute Corbitant. While hunting for Corbitant, they learned that Squanto had escaped and Massasoit was back in power. Several Native Americans had been injured by Standish and his men, and were offered medical attention in Plymouth. Though they had failed to capture Corbitant, the show of force by Standish had garnered respect for the Pilgrims and, as a result, nine of the most powerful sachems in the area, including Massasoit and Corbitant, signed a treaty in September that pledged their loyalty to King James.
In May 1622, a vessel named the Sparrow arrived carrying seven men from the Merchant Adventurers whose purpose was to seek out a site for a new settlement in the area. Two ships followed shortly thereafter carrying sixty settlers, all men. They spent July and August in Plymouth before moving north to settle in modern Weymouth, Massachusetts at a settlement they named Wessagussett. Though short-lived, the settlement of Wessagussett would provide the spark for an event that would dramatically change the political landscape between the local Native American tribes and the English settlers. Responding to reports of a military threat to Wessagussett, Myles Standish organized a militia to defend it. However, he found that there had been no attack. He therefore decided on a pre-emptive strike. In an event called "Standish's raid" by historian Nathaniel Philbrick, he lured two prominent Massachusett military leaders into a house at Wessagussett under the pretense of sharing a meal and making negotiations. Standish and his men then stabbed and killed the two unsuspecting Native Americans. The local sachem, named Obtakiest, was pursued by Standish and his men but escaped with three English prisoners from Wessagussett, whom he then executed. Within a short time, Wessagussett was disbanded and the survivors were integrated into the town of Plymouth.
Word quickly spread among the Native American tribes of Standish's attack; many Native Americans abandoned their villages and fled the area. As noted by Philbrick: "Standish's raid had irreparably damaged the human ecology of the region…. It was some time before a new equilibrium came to the region." Edward Winslow, in his 1624 memoirs Good News from New England, reports that "they forsook their houses, running to and fro like men distracted, living in swamps and other desert places, and so brought manifold diseases amongst themselves, whereof very many are dead". Now lacking the trade in furs provided by the local tribes, the Pilgrims lost their main source of income for paying off their debts to the Merchant Adventurers. Rather than strengthening their position, Standish's raid had disastrous consequences for the colony, a fact noted by William Bradford, who in a letter to the Merchant Adventurers noted "We had much damaged our trade, for there where we had [the] most skins the Indians are run away from their habitations…." The only positive effect of Standish's raid seemed to be the increased power of the Massasoit-led Wampanoag, the Pilgrims' closest ally in the region.
Growth of Plymouth
In November 1621, almost exactly one year after the Pilgrims first set foot in New England, a second ship sent by the Merchant Adventurers arrived. Named the Fortune, it arrived with 37 new settlers for Plymouth. However, as the ship had arrived unexpectedly, and also without many supplies, the additional settlers put a strain on the resources of the colony. Among the passengers of the Fortune were several additional members of the original Leiden congregation, including William Brewster's son Jonathan, Edward Winslow's brother John, and Philip de la Noye (the family name was later changed to "Delano"), whose descendants would include President Franklin Delano Roosevelt. The Fortune also carried a letter from the Merchant Adventurers chastising the colony for failure to return goods with the Mayflower that had been promised in return for their support. The Fortune began its return to England laden with £500 worth of goods, more than enough to keep the colonists on schedule for repayment of their debt; however, the Fortune was captured by the French before she could deliver her cargo to England, creating an even larger deficit for the colony.
In July 1623, two more ships arrived, carrying 90 new settlers, among them Leideners, including William Bradford's future wife, Alice. Some of the settlers were unprepared for frontier life and returned to England the next year. In September 1623, another ship carrying settlers destined to refound the failed colony at Weymouth arrived and temporarily stayed at Plymouth. In March 1624, a ship bearing a few additional settlers and the first cattle arrived. A 1627 division of cattle lists 156 colonists divided into twelve lots of thirteen colonists each. Another ship also named the Mayflower arrived in August 1629 with 35 additional members of the Leiden congregation. Ships arrived throughout the period between 1629 and 1630 carrying numbers of passengers; though the exact number is unknown, contemporary documents claimed that by January 1630 the colony had almost 300 people. In 1643 the colony had an estimated 600 males fit for military service, implying a total population of about 2000. By 1690, on the eve of the dissolution of the colony, the estimated total population of Plymouth County, the most populous, was 3055 people. It is estimated that the entire population of the colony at the point of its dissolution was around 7000. For comparison it is estimated that between 1630 and 1640, a period known as the Great Migration, over 20,000 settlers had arrived in Massachusetts Bay Colony alone, and by 1678 the English population of all of New England was estimated to be in the range of 60,000. Despite the fact that Plymouth was the first colony in the region, by the time of its absorption it was much smaller than Massachusetts Bay Colony.
The first full-scale war in New England was the Pequot War of 1637. The war's roots go back to 1632, when a dispute over control of the Connecticut River Valley near modern Hartford, Connecticut arose between Dutch fur traders and Plymouth officials. Representatives from the Dutch West India Company and Plymouth Colony both had deeds that claimed they had rightfully purchased the land from the Pequot. A sort of land rush occurred as settlers from Massachusetts Bay and Plymouth colonies tried to beat the Dutch in settling the area; the influx of English settlers also threatened the Pequot. Other confederations in the area, including the Narragansett and Mohegan, were the natural enemies of the Pequot, and sided with the English. The event that sparked the start of formal hostilities was the capture of a boat and the murder of its captain, John Oldham, in 1636, an event blamed on allies of the Pequots. In April 1637, a raid on a Pequot village by John Endicott led to a retaliatory raid by Pequot warriors on the town of Wethersfield, Connecticut where some 30 English settlers were killed. This led to a further retaliation, where a raid led by Captain John Underhill and Captain John Mason burned a Pequot village to the ground near modern Mystic, Connecticut, killing 300 Pequots. Plymouth Colony had little to do with the actual fighting in the war.
In the wake of the Pequot War, four of the New England colonies (Massachusetts Bay, Connecticut, New Haven, and Plymouth) formed a defensive compact known as the United Colonies of New England. Edward Winslow, already known for his diplomatic skills, was the chief architect of the United Colonies. His experience in the United Provinces of the Netherlands during the Leiden years would be used in organizing the confederation. John Adams would later consider the United Colonies to be the prototype for the Articles of Confederation, which itself was the first attempt at a national government.
King Philip's War
Also known as Metacomet and other variations on that name, King Philip was the younger son of Massasoit and the heir to Massasoit's position as sachem of the Pokanoket and supreme leader of the Wampanoag. He became sachem upon the sudden death of his older brother Wamsutta, also known as Alexander, in 1662.
The roots of the war stem from the increasing numbers of English colonists and their demand for land. As more land was purchased from the Native Americans, they were restricted to smaller territories for themselves. Native American leaders such as King Philip resented the loss of land and looked for a means to slow or reverse it. Of specific concern was the founding of the town of Swansea, which was located only a few miles from the Wampanoag capital at Mount Hope. The General Court of Plymouth began using military force to coerce the sale of Wampanoag land to the settlers of the town. The proximate cause of the conflict was the death of a Praying Indian named John Sassamon in 1675. Sassamon had been an advisor and friend to King Philip; however Sassamon's conversion to Christianity had driven the two apart. Accused in the murder of Sassamon were some of Philip's most senior lieutenants. A jury of twelve Englishmen and six Praying Indians found the Native Americans guilty of murder and sentenced them to death. To this day, some debate exists whether or not King Philip's men actually committed the murder.
Philip had already begun war preparations at his home base near Mount Hope where he started raiding English farms and pillaging their property. In response, Governor Josiah Winslow called out the militia, and they organized and began to move on Philip's position. The war had started.
King Philip systematically attacked unarmed women and children. One such attack resulted in the capture of Mary Rowlandson and the murder of her small children. The memoirs of her capture would provide historians with much information on Native American culture during this time period.
The war continued through the rest of 1675 and into the next year. The English were constantly frustrated by the Native Americans' refusal to meet them in pitched battle. The Native Americans employed a form of guerrilla warfare that confounded the English. Captain Benjamin Church continuously campaigned to enlist the help of friendly Native Americans to help learn how to fight on an even footing with Philip's troops, but he was constantly rebuffed by the Plymouth leadership, who mistrusted all Native Americans, thinking them potential enemies. Eventually, Governor Winslow and Plymouth military commander Major William Bradford (son of the late Governor William Bradford) relented and gave Church permission to organize a combined force of English and Native Americans. After securing the alliance of the Sakonnet, he led his combined force in pursuit of Philip, who had thus far avoided any major battles in the war that bears his name. Throughout July 1676, Church's band would capture hundreds of Native American troops, often without much of a fight, though Philip eluded him. After Church was given permission to grant amnesty to any captured Native Americans who would agree to join the English side, his force grew immensely. Philip was killed by a Pocasset Indian; the war soon ended as an overwhelming English victory.
Eight percent of the English adult male population is estimated to have died during the war, a rather large percentage by most standards. The impact on the Native Americans was far higher, however. So many were killed, fled, or shipped off as slaves that the entire Native American population of New England fell by 60–80 percent.
In 1686, the entire region was reorganized under a single government known as the Dominion of New England; this included the colonies of Plymouth, Rhode Island, Massachusetts Bay, Connecticut, and New Hampshire. New York, West Jersey, and East Jersey were added in 1688. The President of the Dominion, Edmund Andros, was highly unpopular, and the union did not last. Plymouth Colony revolted and withdrew from the Dominion in April 1688; the entire union was dissolved during the Glorious Revolution of 1688.
The return of self-rule for Plymouth Colony was short-lived, however. A delegation of New Englanders, led by Increase Mather, went to England to negotiate for a return of the colonial charters that had been nullified during the Dominion years. The situation was particularly problematic for Plymouth Colony, as it had existed without a formal charter since its founding. Plymouth did not get its wish for a formal charter; instead, a new charter was issued, annexing Plymouth Colony to Massachusetts Bay Colony. The official date of the proclamation ending the existence of Plymouth Colony was October 17, 1691, though it was not put into force until the arrival of the new charter on May 14, 1692, carried by William Phips. The last official meeting of the Plymouth General Court occurred on June 8, 1692.
The Pilgrims themselves were a subset of an English religious movement known as Puritanism, which sought to "purify" the Anglican Church of its secular trappings. The movement sought to return the church to a more primitive state and to practice Christianity as was done by the earliest Church Fathers. Puritans believed that the Bible was the only true source of religious teaching and that any additions made to Christianity, especially with regard to church traditions, had no place in Christian practice. The Pilgrims distinguished themselves from the Puritans in that they sought to "separate" themselves from the Anglican Church, rather than reform it from within. It was this desire to worship from outside of the Anglican Communion that led them first to the Netherlands and ultimately to New England.
Each town in Plymouth colony was considered a single church congregation; in later years some of the larger towns split into two or three congregations. The church was undoubtedly the most important social institution in the colony. Not only was the Bible the primary religious document of the society, but it also served as the primary legal document as well. Church attendance was not only mandatory, but membership was socially vital. Education was carried out for almost purely religious purposes. The laws of the colony specifically asked parents to provide for the education of their children, to "at least to be able duly to read the Scriptures" and to understand "the main Grounds and Principles of Christian Religion." It was expected that the male head of the household be responsible for the religious well-being of all its members, children and servants alike.
Most churches utilized two acts to sanction their members: censure and excommunication. Censure was a formal reprimand for behavior that did not conform with accepted religious and social norms, while excommunication involved full removal from church membership. Many perceived social evils, from fornication to public drunkenness, were dealt with through church discipline rather than through civil punishment. Church sanctions seldom held official recognition outside church membership and seldom resulted in civil or criminal proceedings. Nevertheless, such sanctions were a powerful tool of social control.
Besides the Puritan theology espoused by their religious leaders, the people of Plymouth Colony had a strong belief in the supernatural. Richard Greenham, a Puritan theologian whose works were known to the Plymouth residents, counseled extensively against turning to magic or wizardry to solve problems. The Pilgrims saw Satan's work in nearly every calamity that befell them; the dark magical arts were very real and present for them. They believed in the presence of malevolent spirits who brought misfortune to people. For example, in 1660, a court inquest into the drowning death of Jeremiah Burroughs determined that a possessed canoe was to blame. While Massachusetts Bay Colony experienced an outbreak of witchcraft scares in the seventeenth century, there is little evidence that Plymouth was engulfed in anything similar. While witchcraft was listed as a capital crime in the 1636 codification of the laws by the Plymouth General Court, there were no actual convictions of witches in Plymouth Colony. The court records only show two formal accusations of witchcraft. The first, of Goodwife Holmes in 1661, never went to trial. The second, of Mary Ingram in 1677, resulted in trial and acquittal.
Marriage and family life
Edward Winslow and Susanna White, each of whom had lost a spouse during the harsh winter of 1620–1621, became the first couple to be married in Plymouth. Governor Bradford presided over the civil ceremony.
Family size in the colony was large by modern American standards, though childbirth was often spaced out, with an average of two years between children. Most families averaged five to six children living under the same roof, though it would not be uncommon for one family to have grown children moving out before the mother had finished giving birth. Mortality rates were high for both mother and child; one birth in thirty resulted in the death of the mother, and roughly one in five women eventually died in childbirth. Infant mortality rates were high, with 12 percent of children dying before their first birthday. By comparison, the infant mortality rate for the United States in 1995 was 0.76 percent.
The nuclear family was the most common familial structure in the colony, and while close relatives may have lived nearby, it was expected that upon reaching the age of maturity, older children would move out and establish their own households. In addition to parents and birth children living in the same household, many families took in children from other families or hired indentured servants. Some of the more wealthy families owned slaves.
Childhood, adolescence, and education
Children generally remained in the direct care of their mothers until the age of about eight years old, after which time it was not uncommon for the child to be placed in the foster care of another family. There were any number of reasons for a child to be "put-out" in this manner. Some children were placed into households to learn a trade, others to be taught to read and write. It seems that there was, as with almost every decision in the colony, a theological reason for fostering children. It was assumed that a child's own parents would love them too much and would not properly discipline them. By placing a child in the care of another family, there was little danger of a child being spoiled.
Adolescence was not a recognized phase of life in Plymouth colony, and there was not a single rite of passage that marked transition from youth to adulthood. Several important transitions occurred at various ages, but none marked a single "coming of age" event. As early as eight years old, children were expected to begin learning their adult roles in life, by taking on some of the family work or by being placed in foster homes to learn a trade. Most children experienced religious conversion around the age of eight as well, thus becoming church members. Orphaned children were given the right to choose their own guardians at age 14. At 16, males became eligible for military duty and were also considered adults for legal purposes, such as standing trial for crimes. Age 21 was the youngest at which a male could become a freeman, though for practical purposes this occurred sometime in a man's mid-twenties. Though 21 was the assumed age of inheritance as well, the law respected the rights of the deceased to name an earlier age in his will.
Actual schools were rare in Plymouth colony. The first true school was not founded until 40 years after the foundation of the colony. The General Court first authorized colony-wide funding for formal public schooling in 1673, but only one town, Plymouth, made use of these funds at that time. By 1683, though, five additional towns had received this funding.
Education of the young was never considered to be the primary domain of schools, even after they had become more common. Most education was carried out by a child's parents or foster parents. While formal apprenticeships were not the norm in Plymouth, it was expected that a foster family would teach the children whatever trades they themselves practiced. The church also played a central role in a child's education. As noted above, the primary purpose of teaching a child to read was so that they could read the Bible for themselves.
Government and laws
Plymouth Colony did not have a royal charter authorizing it to form a government. Still, some means of governance was needed; the Mayflower Compact, signed by the 41 able-bodied men aboard the Mayflower upon their arrival in Provincetown Harbor on November 21, 1620, was the colony's first governing document. Formal laws were not codified until 1636. The colony's laws were based on a hybrid of English common law and religious law as laid out in the Bible.
The colony offered nearly all adult males potential citizenship in the colony. Full citizens, or "freemen," were accorded full rights and privileges in areas such as voting and holding office. To be considered a freeman, adult males had to be sponsored by an existing freeman and accepted by the General Court. Later restrictions established a one-year waiting period between nominating and granting of freeman status and also placed religious restrictions on the colony's citizens, specifically preventing Quakers from becoming freemen. Freeman status was also restricted by age; while the official minimum age was 21, in practice most men were elevated to freeman status between the ages of 25 and 40, averaging somewhere in their early thirties.
The colony's most powerful executive was its Governor, who was originally elected by the freemen, but was later appointed by the General Court in an annual election. The General Court also elected seven "Assistants" to form a cabinet to assist the governor. The Governor and Assistants then appointed "Constables" who served as the chief administrators for the towns and "Messengers" who were the main civil servants of the colony. They were responsible for publishing announcements, performing land surveys, carrying out executions, and a host of other duties.
The General Court was both the chief legislative and judicial body of the colony. It was elected by the freemen from among their own number and met regularly in Plymouth, the capital town of the colony. As part of its judicial duties, it would periodically call a "Grand Enquest," which was a grand jury of sorts, elected from the freemen, who would hear complaints and swear out indictments for credible accusations. The General Court, and later lesser town and county courts, would preside over trials of accused criminals and over civil matters, but the ultimate decisions were made by a jury of freemen.
As a legislative body, the General Court could make proclamations of law as needed. In the early years of the colony, these laws were not formally compiled anywhere. In 1636 these laws were first organized and published in the 1636 Book of Laws. The book was reissued in 1658, 1672, and 1685. These laws included the levying of "rates," or taxes, and the distribution of colony lands. The General Court established townships as a means of providing local government over settlements, but reserved for itself the right to control specific distribution of land to individuals within those towns. When new land was granted to a freeman, it was directed that only the person to whom the land was granted was allowed to settle it. It was forbidden for individual settlers to purchase land from Native Americans without formal permission from the General Court. The government recognized the precarious peace that existed with the Wampanoag, and wished to avoid antagonizing them by buying up all of their land.
The laws also set out crimes and their associated punishments. There were several crimes that mandated the death penalty: treason, murder, witchcraft, arson, sodomy, rape, bestiality, adultery, and cursing or smiting one's parents. The actual exercise of the death penalty was fairly rare; only one sex-related crime, a 1642 incidence of bestiality by Thomas Granger, resulted in execution. One person, Edward Bumpus, was sentenced to death for "striking and abusing his parents" in 1679, but his sentence was commuted to a severe whipping by reason of insanity. Perhaps the most notable use of the death penalty was in the execution of the Native Americans convicted of the murder of John Sassamon; this helped lead to King Philip's War. Though nominally a capital crime, adultery was usually dealt with by public humiliation. Convicted adulterers were often forced to wear the letters "A.D." sewn into their garments, much in the manner of Hester Prynne in Nathaniel Hawthorne's novel The Scarlet Letter.
Several laws dealt with indentured servitude, a legal status whereby a person would work off debts or be given training in exchange for a period of unrecompensed service. The law required that all indentured servants had to be registered by the Governor or one of the Assistants, and that no period of indenture could be less than six months. Further laws forbade a master from shortening the length of time of service required for his servant, and also confirmed that any indentured servants whose period of service began in England would still be required to complete their service while in Plymouth.
Still used by the town of Plymouth, the seal of the Plymouth Colony was designed in 1629. It depicts four figures, apparently in Native American-style clothing, within a shield bearing Saint George's Cross, each carrying the burning heart symbol of John Calvin. The seal was also used by the County of Plymouth until 1931.
The settlers had no clear land patent for the area and no charter to form a government, so it was often unclear in the early years what land fell under the colony's jurisdiction. In 1644, "The Old Colony Line"—which had been surveyed in 1639—was formally accepted as the boundary between Massachusetts Bay and Plymouth.
The situation was more complicated along the border with Rhode Island. Roger Williams in 1636 settled in the area of Rehoboth, near modern Pawtucket. He was forcibly evicted in order to maintain Plymouth's claim to the area. Williams would move to the west side of the Pawtucket River to found the settlement of Providence, the nucleus for the colony of Rhode Island, which was formally established with the "Providence Plantations Patent" of 1644. As various settlers from both Rhode Island and Plymouth began to settle along the area, the exact nature of the western boundary of Plymouth became more unclear. The issue was not fully resolved until the 1740s, long after the dissolution of Plymouth Colony itself. Rhode Island had received a patent for the area in 1693, which had been disputed by Massachusetts Bay Colony. Rhode Island successfully defended the patent, and in 1746, a royal decree transferred the land along the eastern shore of the Narragansett Bay to Rhode Island, including the mainland portion of Newport County and all of modern Bristol County, Rhode Island.
The English in Plymouth Colony fit broadly into three categories: Pilgrims, Strangers, and Particulars. The Pilgrims, like the Puritans that would later found Massachusetts Bay Colony to the north, were a Protestant group that closely followed the teachings of John Calvin. However, unlike the Puritans, who wished to reform the Anglican Church from within, the Pilgrims saw it as a morally defunct organization, and sought to remove themselves from it. The name "Pilgrims" was actually not used by the separatists themselves. Though William Bradford used the term "pilgrims" to describe the group, he was using the term generically, to define the group as travelers on a religious mission. The term used by those we now call the Pilgrims was the "Saints." They used the term to indicate their special place among God's elect, as they subscribed to the Calvinist belief in predestination.
Besides the Pilgrims, or "Saints," the rest of the Mayflower settlers were known as the "Strangers." This group included the non-Pilgrim settlers placed on the Mayflower by the Merchant Adventurers, as well as later settlers who would come for other reasons throughout the history of the colony and who did not necessarily adhere to the Pilgrim religious ideals. A third group, known as the "Particulars," consisted of a group of later settlers that paid their own "particular" way to America, and thus were not obliged to pay the colony's debts.
The presence of the Strangers and the Particulars was a considerable annoyance to the Pilgrims. As early as 1623, a conflict between the two groups broke out over the celebration of Christmas, a day of no particular significance to the Pilgrims. Furthermore, when a group of Strangers founded the nearby settlement of Wessagusset, the Pilgrims were highly strained, both emotionally and in terms of resources, by their lack of discipline. They looked at the eventual failure of the Wessagusset settlement as Divine Providence against a sinful people.
The residents of Plymouth used terms to distinguish between the earliest settlers of the colony and those that came later. The first generation of settlers, generally thought to be those that arrived before 1627, called themselves the "Old Comers" or "Planters." Later generations of Plymouth residents would refer to this group as the "Forefathers".
The Native Americans in New England were organized into loose tribal confederations, sometimes called "nations." Among these confederations were the Nipmucks, the Massachusett, the Narragansett, the Niantics, the Mohegan, and the Wampanoag. Several significant events would dramatically alter the demographics of the Native American population in the region. The first was "Standish's raid" on Wessagussett, which frightened Native American leaders to the extent that many abandoned their settlements, resulting in many deaths through starvation and disease. The second, the Pequot War, resulted in the dissolution of its namesake tribe and a major shift in the local power structure. The third, King Philip's War, had the most dramatic effect on local populations, resulting in the death or displacement of as much as 80 percent of the total number of Native Americans of southern New England and the enslavement and removal of thousands of Native Americans to the Caribbean and other locales.
Following the tradition of England, some of the wealthier families in Plymouth Colony owned black slaves, who, unlike white indentured servants, were considered the property of their owners and passed on to heirs like any other property. Slave ownership was not widespread and very few families possessed the wealth necessary to own slaves. In 1674, the inventory of Capt. Thomas Willet of Marshfield includes "8 Negroes" at a value of £200. Other inventories of the time valued slaves at £24–25 each, well out of the financial ability of most families. A 1689 census of the town of Bristol shows that of the 70 families that lived there, only one had a black slave. So few were black slaves in the colony that the General Court never saw fit to pass any laws dealing with them.
The largest source of wealth for Plymouth Colony was the fur trade. The colonists attempted to supplement their income by fishing; the waters in Cape Cod Bay were known to be excellent fisheries. However, they lacked any skill in this area, and it did little to relieve their economic hardship. The colony traded throughout the region, establishing trading posts as far away as Penobscot, Maine. They were also frequent trading partners with the Dutch at New Amsterdam.
The economic situation improved with the arrival of cattle in the colony. It is unknown when the first cattle arrived, but the division of land for the grazing of cattle in 1627 represented one of the first moves towards private land ownership in the colony. Cattle became an important source of wealth in the colony; the average cow could sell for £28 in 1638. However, the flood of immigrants during the Great Migration drove the price of cattle down. The same cows sold at £28 in 1638 were valued in 1640 at only £5. Besides cattle, there were also pigs, sheep, and goats raised in the colony.
Agriculture also made up an important part of the Plymouth economy. The colonists adopted Native American agricultural practices and crops. They planted maize, squash, pumpkins, beans, and potatoes. Besides the crops themselves, the Pilgrims learned productive farming techniques from the Native Americans, such as proper crop rotation and the use of dead fish to fertilize the soil. In addition to these native crops, the colonists also successfully planted Old World crops such as turnips, carrots, peas, wheat, barley, and oats.
Despite its short history of fewer than 72 years, Plymouth Colony and the events surrounding its founding have had a lasting effect on the art, traditions, and mythology of the United States of America.
Art, literature and film
The earliest artistic depiction of the Pilgrims was actually done before their arrival in America—Dutch painter Adam Willaerts painted a portrait of their departure from Delfshaven in 1620. The same scene was repainted by Robert Walter Weir in 1844, and hangs in the Rotunda of the United States Capitol building. Numerous other paintings have been created memorializing various scenes from the life of Plymouth Colony, including their landing and the "First Thanksgiving," many of which have been collected by Pilgrim Hall, a museum and historical society founded in 1824 to preserve the history of the Colony.
Several contemporary accounts of life in Plymouth Colony have become both vital primary historical documents and literary classics. Of Plimoth Plantation by William Bradford and Mourt's Relation by Bradford and Edward Winslow are both accounts written by Mayflower passengers, and they provide much of the information we have today regarding the trans-Atlantic voyage and early years of the settlement. Benjamin Church wrote several accounts of King Philip's War, including Entertaining Passages Relating to Philip's War, which remained popular throughout the eighteenth century. An edition of the work was illustrated by Paul Revere in 1772. Another work, The Sovereignty and Goodness of God, provides an account of King Philip's War from the perspective of Mary Rowlandson, an Englishwoman who was captured and spent some time in the company of Native Americans during the war.
Each year the United States celebrates a holiday known as Thanksgiving on the fourth Thursday of November. It is a recognized federal holiday, and frequently involves a family gathering with a large feast, traditionally featuring a turkey. Civic recognition of the holiday typically includes parades and football games. The holiday is meant to honor the "First Thanksgiving," which was a harvest feast held in Plymouth in 1621.
One of the enduring symbols of the landing of the Pilgrims is Plymouth Rock, a large granite outcropping of rock that was near their landing site at Plymouth. However, none of the contemporary accounts of the actual landing makes any mention that the Rock was the specific place of landing. The Pilgrims chose the site for their landing not for the rock, but for a small brook nearby that was a source of fresh water and fish.
- ↑ Patricia Scott Deetz and James F. Deetz, Passengers on the Mayflower: Ages & Occupations, Origins & Connections. The Plymouth Colony Archive Project, 2000, accessdate 2006-05-19
- ↑ Nathaniel Philbrick. Mayflower: A Story of Courage, Community, and War. (New York: Penguin Group, 2006. ISBN 0670037605), 7–13
- ↑ Albert Christopher Addison. (1911), The Romantic Story of the Mayflower Pilgrims. The Plymouth Colony Archive Project accessdate 2007-04-30, foreword "From a Pilgrim Cell," xiii–xiv
- ↑ Addison, (1911), 51
- ↑ Philbrick, 2006, 16–18
- ↑ Due to hardships experienced during the early years of the settlement, as well as corruption and mismanagement by their representatives, the debt was not actually paid off until 1648. Philbrick, 2006, 19–20, 169
- ↑ Philbrick, 2006, 41
- ↑ Philbrick, 2006, 78–80
- ↑ Paul Johnson. A History of the American People. (New York: HarperCollins, 1997. ISBN 0060168366), 37
- ↑ Philbrick, 2006, 80–84
- ↑ Philbrick, 2006, 88–91
- ↑ Massasoit was specifically the sachem of a single tribe of Wampanoag Indians known as the Pokanoket, though he was recognized as the founder and leader of the entire confederation. Philbrick, 2006, 93, 155
- ↑ Philbrick, 2006, 93–94
- ↑ Philbrick, 2006, 94–96
- ↑ Elliot West, "Squanto", in Allen Weinstein and David Rubel. The Story of America: Freedom and Crisis from Settlement to Superpower. (New York: DK Publishing, 2002. ISBN 0789489031), 50–51
- ↑ Philbrick, 2006, 97–99
- ↑ Addison, 1911, 83–85
- ↑ Patricia Scott Deetz and James F. Deetz, Mayflower Passenger Deaths, 1620–1621. The Plymouth Colony Archive Project, 2000. accessdate 2007-04-19
- ↑ Addison, 1911, 83
- ↑ Carolyn Freeman Travers, "Fast and Thanksgiving Days of Plymouth Colony." Plimoth Plantation: Living, Breathing History. . Plimoth Plantation, accessdate 2007-05-02
- ↑ Primary Sources for "The First Thanksgiving" at Plymouth. Pilgrim Hall Museum 1998 accessdate 2007-03-30 note: this reference contains partial transcriptions of two documents, Winslow's Mourt's Relations and Bradford's Of Plimoth Plantation, which describe the events of the First Thanksgiving.
- ↑ Philbrick, 2006, 102–103
- ↑ Philbrick, 2006, 104–109
- ↑ Philbrick, 2006, 110–113
- ↑ Philbrick, 2006, 113–116
- ↑ Philbrick, 2006, 151–154
- ↑ 27.0 27.1 Patricia Scott Deetz, 2000, "Population of Plymouth Town, County, & Colony, 1620–1690". The Plymouth Colony Archive Project. Retrieved March 26, 2009.
- ↑ 28.0 28.1 28.2 28.3 Philbrick, 2006, 154–155
- ↑ Edward Winslow. The Plymouth Colony Archive Project 1624, Chapter 5 Good Newes From New England. accessdate 2007-05-17
- ↑ Philbrick (2006) pp 123–126, 134
- ↑ Plimoth Plantation: Living, Breathing History Residents of Plymouth according to the 1627 Division of Cattle. Plimoth Plantation. accessdate 2007-05-02
- ↑ Douglas Edward Leach, "The Military System of Plymouth Colony." The New England Quarterly 24 (3) (Sep., 1951): 342–364 doi = 10.2307/361908 . accessdate 2007-04-03. note: login required for access
- ↑ Norris Taylor, The Massachusetts Bay Colony 1998. accessdate 2007-03-30
- ↑ 34.0 34.1 34.2 The Descendants of Henry Doude. Perspectives: The Pequot War accessdate 2007-04-02
- ↑ Philbrick, 2006, 180–181
- ↑ Philbrick, 2006, 205
- ↑ Philbrick, 2006, 207–208
- ↑ 38.0 38.1 38.2 Jennifer L. Aultman, From Thanksgiving to War: Native Americans in Criminal Cases of Plymouth Colony, 1630–1675. The Plymouth Colony Archive Project 2001. accessdate 2007-05-17
- ↑ Philbrick, 2006, 221–223
- ↑ Philbrick, 2006, 229–237
- ↑ Philbrick, 2006, 288–289
- ↑ Philbrick, 2006, 311–323
- ↑ Philbrick, 2006, 331–337
- ↑ 44.0 44.1 Philbrick, 2006, 332, 345–346
- ↑ 45.0 45.1 Timeline of Plymouth Colony 1620–1692. Plimoth Plantation 2007 accessdate 2007-04-02.
- ↑ John Demos. A Little Commonwealth: Family Life in Plymouth Colony. (New York: Oxford University Press, 1970), 17
- ↑ Demos, 17–18
- ↑ Weinstein and Rubel, 64–65
- ↑ 49.0 49.1 Richard Howland Maxwell, "Pilgrim and Puritan: A Delicate Distinction." 2003, Pilgrim Society Note, Series Two. Pilgrim Hall Museum. accessdate 2003-04-04
- ↑ 50.0 50.1 50.2 50.3 50.4 50.5 Christopher Fennell, Plymouth Colony Legal Structure. The Plymouth Colony Archive Project 1998. accessdate 2007-04-02
- ↑ 51.0 51.1 Demos, 104–106, 140
- ↑ Demos, 8–9
- ↑ Deetz and Deetz, 87–100, and endnotes
- ↑ Deetz and Deetz, 2000, 2–98, and endnotes
- ↑ Heather Whipps, September 21, 2006, Census: U.S. household size shrinking MSNBC.com. accessdate 2007-05-11 A study reported by MSNBC found that the modern American household consisted of 2.6 people. Demos, 1970, 192 cites that by the third generation, the average family had 9.3 births, with 7.9 children living until adulthood. Since most families had two parents, this would extrapolate to an average of 10 people under one roof.
- ↑ Demos, 64–69
- ↑ Carolyn Freeman Travers, 2007, Common Myths: Dead at Forty Plimoth Plantation. accessdate 2007-05-11
- ↑ Demos, 62–81
- ↑ 59.0 59.1 Demos, 141
- ↑ Demos, 71–75
- ↑ Demos, 146
- ↑ Demos, 147–149
- ↑ Demos, 142–143
- ↑ Demos, 144
- ↑ Demos, 104
- ↑ Demos, 148
- ↑ Governors of Plymouth Colony 1998. Pilgrim Hall Museum accessdate 2007-04-02
- ↑ Demos, 7
- ↑ Demos, 10
- ↑ Demos, 14
- ↑ Philbrick, 2006, 214–215
- ↑ Deetz and Deetz, 2000, 133, cite the first eight examples (treason-adultery); Demos, 100 mentions the last.
- ↑ Deetz and Deetz, 2000, 135
- ↑ Demos, 102. Bumpus's actual sentence was to be "whipt att the post," with the note that "hee was crasey brained, ortherwise hee had bine put to death."
- ↑ Philbrick, 2006, 223
- ↑ Johnson, 53
- ↑ Demos, 96–98
- ↑ 78.0 78.1 Lillian Galle, 2000, Servants and Masters in the Plymouth Colony The Plymouth Colony Archive Project. accessdate 2007-05-17
- ↑ David Martucci, 1997, The Flag of New England accessdate 2007-04-03
- ↑ Morse Payne, 2006, The Survey System of the Old Colony. Slade and Associates. accessdate 2007-04-03
- ↑ The Rhode Islander: A depository of opinion, information, and pictures of the Ocean State. 2007, The Border is Where? Part II. blogspot.com. accessdate 2007-04-03
- ↑ EDC Profile (Rhode Island Economic Development Corporation, 2007) Town of Bristol. accessdate 2007-07-13
- ↑ Deetz and Deetz, 2000, 14
- ↑ Duane A. Cline, 2006, The Pilgrims and Plymouth Colony: 1620. Rootsweb. accessdate 2007-04-04
- ↑ Philbrick, 2006, 21–23
- ↑ Demos, 6
- ↑ Philbrick, 2006, 128, 151–154
- ↑ Deetz and Deetz, 2000, 14 and endnotes
- ↑ Demos, 110–111, also see Demos's footnote #10 on 110
- ↑ Philbrick, 2006, 136
- ↑ Philbrick, 2006, 199–200
- ↑ Deetz and Deetz, 2000, 77–78. The first mention of cattle occurs with the arrival of "three heifers and a bull" in 1624, but there is some doubt as to whether this was the first cattle in the colony.
- ↑ Charles S. Chartier, Livestock in Plymouth Colony. Plymouth Archaeological Rediscovery Project. accessdate 2007-05-03
- ↑ Johnson, 37
- ↑ Johnson, 36–37
- ↑ Philbrick, 2006, 22
- ↑ History Paintings Pilgrim Hall, 1998. accessdate 2007-04-05
- ↑ Philbrick, 2006, 75, 288, 357–358
- ↑ 2007 Federal Holidays U.S. Office of Personnel Management. accessdate 2007-04-04
- ↑ Philbrick, 2006, 75, 78–79
- Addison, Albert Christopher. The Romantic Story of the Mayflower Pilgrims. (1911), The Plymouth Colony Archive Project accessdate 2007-04-30
- Deetz, James, and Patricia Scott Deetz. The Times of Their Lives: Life, Love, and Death in Plymouth Colony. New York: W. H. Freeman and Company, 2000. ISBN 071673830-9.
- Demos, John. A Little Commonwealth: Family Life in Plymouth Colony. New York: Oxford University Press, 1970.
- Johnson, Paul. A History of the American People. New York: HarperCollins, 1997. ISBN 0060168366.
- Philbrick, Nathaniel. Mayflower: A Story of Courage, Community, and War. New York: Penguin Group, 2006. ISBN 0670037605.
- Weinstein, Allen, and David Rubel. The Story of America: Freedom and Crisis from Settlement to Superpower. New York: DK Publishing, 2002. ISBN 0789489031.
- Timeline of Plymouth Colony retrieved November 30, 2007.
- Colonial America:Plymouth Colony 1620 A short history of Plymouth Colony hosted at U-S-History.com, includes a map of all of the New England colonies. Retrieved November 30, 2007.
- The Plymouth Colony Archive Project a collection of primary sources documents and secondary source analysis related to Plymouth Colony. Retrieved November 30, 2007.
- Pilgrim ships from 1602 to 1638 Pilgrim ships searchable by ship name, sailing date and passengers. Retrieved November 30, 2007.
British Impositions and Colonial Resistance, 1763–1770
After the French and Indian War, Britain
was the premier colonial power in North America. The Treaty of Paris
(1763) more than doubled British territories in North America and eliminated
the French as a threat. While British power seemed more secure than
ever, there were signs of trouble brewing in the colonies. The main
problem concerned British finances. The British government had accumulated
a massive debt fighting the French and Indian War, and now looked
to the American colonies to help pay it. King George III and
his prime minister, George Grenville, noted that the colonists had
benefited most from the expensive war and yet had paid very little
in comparison to citizens living in England. To even this disparity,
Parliament passed a series of acts (listed below) designed to secure
revenue from the colonies. In addition, royal officials revoked
their policy of salutary neglect and began to enforce
the Navigation Acts, and newer taxation measures, with vigor. Angry colonists
chafed under such tight control after years of relative independence.
The Proclamation Line
In efforts to keep peace with the Native Americans, the
British government established the Proclamation Line in 1763, barring
colonial settlement west of the Allegheny Mountains in Pennsylvania.
The Proclamation declared that colonists already settled in this
region must remove themselves, negating colonists’ claims to the
West and thus inhibiting colonial expansion.
The Sugar Act
In 1764, Parliament passed the Sugar Act to
counter smuggling of foreign sugar and to establish a British monopoly
in the American sugar market. The act also allowed royal officials
to seize colonial cargo with little or no legal cause. Unlike previous
acts, which had regulated trade to boost the entire British imperial
economy, the Sugar Act was designed to benefit England at the expense
of the American colonists.
A major criticism of the Sugar Act was that it
aimed not to regulate the economy of the British Empire but to raise
revenue for the British government. This distinction became important
as the colonists determined which actions of the British government warranted resistance.
The Stamp Act
As a further measure to force the colonies to help pay
off the war debt, Prime Minister Grenville pushed the Stamp
Act through Parliament in March 1765. This act required Americans
to buy special watermarked paper for newspapers, playing cards,
and legal documents such as wills and marriage licenses. Violators
faced juryless trials in Nova Scotian vice-admiralty courts, where
guilt was presumed until innocence was proven.
Like the Sugar Act, the Stamp Act was aimed at raising
revenue from the colonists. As such, it elicited fierce colonial
resistance. In the colonies, legal pamphlets circulated condemning
the act on the grounds that it was “taxation without representation.” Colonists believed
they should not have to pay Parliamentary taxes because they did
not elect any members of Parliament. They argued that they should
be able to determine their own taxes independent of Parliament.
Prime Minister Grenville and his followers retorted
that Americans were obliged to pay Parliamentary taxes because they
shared the same status as many British males who did not have enough
property to be granted the vote or who lived in certain large cities
that had no seats in Parliament. He claimed that all of these people
were “virtually represented” in Parliament. This theory of virtual
representation held that the members of Parliament not only
represented their specific geographical constituencies, but they
also considered the well-being of all British subjects when deliberating legislation.
Opposition to the Stamp Act
The Stamp Act generated the first wave of
significant colonial resistance to British rule. In late May 1765,
the Virginia House of Burgesses passed the Virginia Resolves,
which denied Parliament’s right to tax the colonies under the Stamp
Act. By the end of the year, eight other colonial legislatures had
adopted similar positions.
As dissent spread through the colonies, it quickly became
more organized. Radical groups calling themselves the Sons
of Liberty formed throughout the colonies to channel the
widespread violence, often burning stamps and threatening British
officials. Merchants in New York began a boycott of British goods
and merchants in other cities soon joined in. Representatives of
nine colonial assemblies met in New York City at the Stamp
Act Congress, where they prepared a petition asking Parliament
to repeal the Stamp Act on the grounds that it violated the principle
of “no taxation without representation.” The congress argued that
Parliament could not tax anyone outside of Great Britain and could
not deny anyone a fair trial, both of which had been consequences
of the Stamp Act.
The Stamp Act Congress was a major step in uniting
the colonies against the British. Nine colonial delegations attended
and agreed that there could be no taxation without representation.
Under strong pressure from the colonies, and with their
economy slumping because of the American boycott of British goods,
Parliament repealed the Stamp Act in March 1766. But, at the same
time, Parliament passed the Declaratory Act to solidify
British rule in the colonies. The Declaratory Act stated that Parliament
had the power to tax and legislate for the colonies “in all cases
whatsoever,” denying the colonists’ desire to set up their own legislature.
The Townshend Duties
In 1767, Britain’s elite landowners exercised political
influence to cut their taxes by one-fourth, leaving the British
treasury short £500,000 from the previous year. By that time, Chancellor
Charles Townshend dominated government affairs. His superior, Prime
Minister William Pitt (who was the second prime minister after Grenville)
had become gravely ill, and Townshend had assumed leadership of
the government. Townshend proposed taxing imports into the American
colonies to recover Parliament’s lost revenue, and secured passage
of the Revenue Act of 1767. Popularly referred to as the Townshend
Duties, the Revenue Act taxed glass, lead, paint, paper,
and tea entering the colonies. The profits from these taxes were
to be used to pay the salaries of the royal governors in the colonies.
In practice, however, the Townshend Duties yielded little income
for the British; the taxes on tea brought in the only significant revenue.
Opposition to the Townshend Duties
While ineffective in raising revenue, the Townshend Duties
proved remarkably effective in stirring up political dissent, which
had lain dormant since the repeal of the Stamp Act. Protest against
the taxes first took the form of intellectual and legal dissents
and soon erupted in violence.
In December 1767, the colonist John Dickinson published Letters
From a Pennsylvania Farmer in the Pennsylvania Chronicle.
This series of twelve letters argued against the legality of the
Townshend Duties and soon appeared in nearly every colonial newspaper.
They were widely read and admired. Political opposition to the Townshend
Duties spread, as colonial assemblies passed resolves denouncing
the act and petitioning Parliament for its repeal.
Popular protest once again took the form of a boycott
of British goods. Although the colonial boycott was only moderately
successful at keeping British imports out of the colonies, it prompted
many British merchants and artisans to mount a significant movement in
Britain to repeal the Townshend Duties. Sailors joined the resistance
by rioting against corrupt customs officials. Many customs officials
exploited the ambiguous and confusing wording of the Townshend Act
to claim that small items stored in a sailor’s chest were undeclared
cargo. The customs officers then seized entire ships based on that
charge. Often, they pocketed the profits. Known as “customs racketeering,”
this behavior amounted to little more than legalized piracy.
In 1768, 1,700 British troops landed in Boston to stem
further violence, and the following year passed relatively peacefully.
But tension again flared with the Boston Massacre in March
1770, when an unruly mob bombarded British troops with rocks and
dared them to shoot. In the ensuing chaos, five colonists were killed.
The Boston Massacre marked the peak of colonial opposition to the Townshend Duties.
Parliament finally relented and repealed most of the Townshend
Duties in March 1770, partially because England was now led by a
new prime minister, Lord North. North eliminated most of the taxes
but insisted on maintaining the profitable tax on tea. In response, Americans
ended the policy of general non-importation, but maintained voluntary
agreements to boycott British tea. Non-consumption kept the tea
tax revenues far too low to pay the royal governors, effectively
nullifying what remained of the Townshend Duties. | http://www.sparknotes.com/testprep/books/sat2/history/chapter6section1.rhtml | 13 |
15 | Science Fair Project Encyclopedia
Congress of the United States
The Congress of the United States is the legislative branch of the federal government of the United States of America. It is established by Article One of the Constitution of the United States, which also delineates its structure and powers. Congress is a bicameral legislature, consisting of the House of Representatives (the "Lower House") and the Senate (the "Upper House").
The House of Representatives consists of 435 members, each of whom is elected from a congressional district and serves a two-year term. Seats in the House are divided among the states on the basis of population, with each state entitled to at least one seat. In the Senate, on the other hand, each state is represented by two members, regardless of population. As there are fifty states in the Union, the Senate consists of 100 members. Each Senator, who is elected by the whole state rather than by a district, serves a six-year term. Senatorial terms are staggered so that approximately one-third of the terms expire every two years.
The Constitution vests in Congress all the legislative powers of the federal government. The Congress, however, only possesses those powers enumerated in the Constitution; other powers are reserved to the states, except where the Constitution provides otherwise. Important powers of Congress include the authority to regulate interstate and foreign commerce, to levy taxes, to establish courts inferior to the Supreme Court, to maintain armed forces, and to declare war. Insofar as passing legislation is concerned, the Senate is fully equal to the House of Representatives. The Senate is not a mere "chamber of review," as is the case with the upper houses of the bicameral legislatures of most other nations.
The Senate currently has 100 seats, one-third being renewed every two years; two members are elected from each U.S. state by popular vote to serve six-year terms. Each state has equal representation in the Senate because at the Constitutional Convention, where every state had one vote, the small states refused to go along with any Constitution that did not give them an equal vote in at least one house of Congress. Because terms are staggered, every state will have a "junior" and "senior" Senator.
The House of Representatives currently has 435 seats for voting Members. Additionally, there are non-voting "delegates" from the District of Columbia, American Samoa, Guam, Puerto Rico, and the U.S. Virgin Islands. Members are directly elected by first-past-the-post voting to serve two-year terms from Congressional districts. Only the non-voting delegate from Puerto Rico (known as Resident Commissioner) is elected to a four-year term. States with very small populations—smaller than the population of a whole Congressional district elsewhere—are still guaranteed one whole seat. These seats are apportioned according to the population of each state, but the total number is fixed by statute at 435 (Public Law 62-5).
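The article does not name the apportionment formula; the method actually used since 1941 is the "method of equal proportions" (Huntington-Hill). The sketch below is illustrative only, with made-up state populations, and the function name and inputs are mine rather than anything defined in the article:

import math

def apportion(populations, total_seats=435):
    # Method of equal proportions: every state starts with one seat, and each
    # remaining seat goes to the state with the highest priority value
    # population / sqrt(n * (n + 1)), where n is the state's current seat count.
    seats = {state: 1 for state in populations}
    for _ in range(total_seats - len(populations)):
        state = max(populations,
                    key=lambda s: populations[s] / math.sqrt(seats[s] * (seats[s] + 1)))
        seats[state] += 1
    return seats

# Toy example: three hypothetical states sharing 10 seats.
print(apportion({"A": 8000000, "B": 3000000, "C": 1000000}, total_seats=10))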
The US Congress has fewer women in it than the legislatures of many other countries, as well as many more lawyers. The percentage of lawyers in Congress fluctuates around 45 percent; by contrast, in the Canadian House of Commons, the British House of Commons, and the Bundestag, approximately 15 percent of members have law degrees.
The first Congress under the current Constitution started its term in Federal Hall in New York City on March 4, 1789, and its first action was to declare that the new Constitution of the United States was in effect. The United States Capitol building in Washington, D.C. hosted its first session of Congress on November 17, 1800.
Proceedings of the United States Congress were televised for the first time on January 3, 1947. Proceedings of the general Congress are now regularly broadcast on C-SPAN, as are newsworthy meetings of committees and subcommittees. Details of the activities of Congress can also be found on the internet, on the legislative database THOMAS.
Specific powers held by the Congress
The powers of the Congress are set forth in Article 1 (particularly Article 1, Section 8) of the United States Constitution. The powers originally delegated to the Congress by the original version of the Constitution were supplemented by the post-Civil War amendments to the Constitution (Amendments 13, 14, and 15, each of which authorizes the Congress to enforce its provisions by appropriate legislation), and by the 16th Amendment, which authorizes an income tax.
Each house of Congress has the power to introduce legislation on any subject dealing with the powers of Congress, except for legislation dealing with gathering revenue (generally through taxes), which must originate in the House of Representatives (specifically the U.S. House Committee on Ways and Means). The large states may thus appear to have more influence over the public purse than the small states. In practice, however, each house can vote against legislation passed by the other house. The Senate may disapprove a House revenue bill—or any bill, for that matter—or add amendments that change its nature. In that event, a conference committee made up of members from both houses must work out a compromise acceptable to both sides before the bill becomes the law of the land.
The broad powers of the whole Congress are spelled out in Article I of the Constitution:
- To levy and collect taxes
- To borrow money for the public treasury
- To make rules and regulations governing commerce among the states and with foreign countries
- To make uniform rules for the naturalization of foreign citizens
- To coin money, state its value, and provide for the punishment of counterfeiters
- To set the standards for weights and measures
- To establish bankruptcy laws for the country as a whole
- To establish post offices and post roads
- To issue patents and copyrights
- To punish piracy
- To declare war
- To raise and support armies
- To provide for a navy
- To call out the militia to enforce federal laws, suppress lawlessness, or repel invasions
- To make all laws for the seat of government (Washington, D.C.)
- To make all laws necessary to enforce the Constitution
Some powers are added in other parts of the Constitution:
- To set up a system of federal courts (set out in Article III)
- To prohibit slavery (set out in the Thirteenth Amendment)
- To enforce the right of citizens to vote, irrespective of race (set out in the Fifteenth Amendment)
The Tenth Amendment sets definite limits on congressional authority, by providing that powers not delegated to the national government are reserved to the states or to the people.
In addition, the Constitution specifically forbids certain acts by Congress. It may not:
- Suspend the writ of habeas corpus—a requirement that those accused of crimes be brought before a judge or court before being imprisoned—unless necessary in time of rebellion or invasion
- Pass laws that condemn persons for crimes or unlawful acts without a trial (attainder)
- Pass any law that retroactively makes a specific act a crime
- Levy direct taxes on citizens, except on the basis of a census already taken (This was overridden by the Sixteenth Amendment)
- Tax exports from any one state
- Give specially favorable treatment in commerce or taxation to the seaports of any state or to the vessels using them
- Authorize any titles of nobility
The Congress also has sole jurisdiction over impeachment of federal officials. The House has the sole right to bring the charges of misconduct which would be considered at an impeachment trial, and the Senate has the sole power to try impeachment cases and to find officials guilty or not guilty. A guilty verdict requires a two-thirds majority and results in the removal of the federal official from public office.
The Senate has further oversight powers over the executive branch. For those, see United States Senate.
Officers of the Congress
The Constitution provides that the vice president shall be President of the Senate. The vice president has no vote, except in the case of a tie. The Senate chooses a President pro tempore to preside when the vice president is absent. The most powerful person in the Senate is not the president pro tempore, but the Senate Majority Leader.
The House of Representatives chooses its own presiding officer—the Speaker of the House. The speaker and the president pro tempore are always members of the political party with the largest representation in each house, that is, the majority party.
At the beginning of each new Congress, members of the political parties select floor leaders and other officials to manage the flow of proposed legislation. These officials, along with the presiding officers and committee chairpersons, exercise strong influence over the making of laws.
|Position||Senate||Current Office Holder||House||Current Office Holder|
|Presiding Officer||President of the Senate (symbolic); President pro tempore of the United States Senate (acting)|| ||Speaker of the United States House of Representatives||Dennis Hastert|
|Majority Leader||United States Senate Majority Leader||Bill Frist||Majority Leader of the United States House of Representatives||Tom DeLay|
|Minority Leader||United States Senate Minority Leader||Harry Reid||Minority Leader of the United States House of Representatives||Nancy Pelosi|
|Majority Whip||United States Senate Majority Whip||Mitch McConnell||Majority Whip of the United States House of Representatives||Roy Blunt|
|Minority Whip||United States Senate Minority Whip||Richard Durbin||Minority Whip of the United States House of Representatives||Steny H. Hoyer|
The committee process
One of the major characteristics of the Congress is the dominant role that Congressional committees play in its proceedings. Committees have assumed their present-day importance by evolution, not by constitutional design, since the Constitution makes no provision for their establishment. In 1885, when Woodrow Wilson wrote Congressional Government, there were only 60-odd legislative committees and subcommittees; by the 1990's there were about 300. There are so many subcommittees that Morris Udall of Arizona could joke that he could address any Democrat whose name he had forgotten, "Good morning, Mr. Chairman," and half the time be right. (Frozen Republic, 191)
At present the Senate has 16 full-fledged standing (or permanent) committees; the House of Representatives has 20 standing committees. Each specializes in specific areas of legislation: foreign affairs, defense, banking, agriculture, commerce, appropriations, etc. Almost every bill introduced in either house is referred to a committee for study and recommendation. The committee may approve, revise, kill or ignore any measure referred to it. It is nearly impossible for a bill to reach the House or Senate floor without first winning committee approval. In the House, a petition to release a bill from a committee to the floor requires the signatures of 218 members; in the Senate, a majority of all members is required. In practice, such discharge motions only rarely receive the required support.
The majority party in each house controls the committee process. Committee chairpersons are selected by a caucus of party members or specially designated groups of members. Minority parties are proportionally represented on the committees according to their strength in each house.
Bills are introduced by a variety of methods. Some are drawn up by standing committees; some by special committees created to deal with specific legislative issues; and some may be suggested by the president or other executive officers. Citizens and organizations outside the Congress may suggest legislation to members, and individual members themselves may initiate bills. After introduction, bills are sent to designated committees that, in most cases, schedule a series of public hearings to permit presentation of views by persons who support or oppose the legislation. The hearing process, which can last several weeks or months, theoretically opens the legislative process to public participation.
One virtue of the committee system is that it permits members of Congress and their staffs to amass a considerable degree of expertise in various legislative fields. In the early days of the republic, when the population was small and the duties of the federal government were narrowly defined, such expertise was not as important. Each representative was a generalist and dealt knowledgeably with all fields of interest. The complexity of national life today calls for special knowledge, which means that elected representatives often acquire expertise in one or two areas of public policy.
When a committee has acted favorably on a bill, the proposed legislation is then sent to the floor for open debate. In the Senate, the rules permit virtually unlimited debate. In the House, because of the large number of members, the Rules Committee usually sets limits. When debate is ended, members vote either to approve the bill, defeat it, table it (which means setting it aside and is tantamount to defeat) or return it to committee. A bill passed by one house is sent to the other for action. If the bill is amended by the second house, a conference committee composed of members of both houses attempts to reconcile the differences.
Conference committees are not supposed to add provisions that neither house approved or to delete provisions that both houses approved, but in practice conference committees make substantial changes to legislation. According to Citizens Against Government Waste, conference committees even add pork to legislation. For the 2005 budget, conference committees added 3,407 pork barrel appropriations, up from 47 in 1994.
Once passed by both houses, the bill is sent to the president, since the Constitution gives the president a role in the final step of lawmaking. The president has the option of signing the bill—at which point it becomes national law—or vetoing it. A bill vetoed by the president must be reapproved by a two-thirds vote of both houses to become law; this is called overriding a veto.
The president may also refuse either to sign or veto a bill. In that case, the bill becomes law without his signature 10 days after it reaches him (not counting Sundays). The single exception to this rule is when Congress adjourns after sending a bill to the president and before the 10-day period has expired; his refusal to take any action then negates the bill — a process known as the "pocket veto."
Congressional powers of investigation
One of the most important nonlegislative functions of the Congress is the power to investigate. This power is usually delegated to committees—either to the standing committees, to special committees set up for a specific purpose, or to joint committees composed of members of both houses. Investigations are conducted to gather information on the need for future legislation, to test the effectiveness of laws already passed, to inquire into the qualifications and performance of members and officials of the other branches, and, on rare occasions, to lay the groundwork for impeachment proceedings. Frequently, committees call on outside experts to assist in conducting investigative hearings and to make detailed studies of issues.
There are important corollaries to the investigative power. One is the power to publicize investigations and their results. Most committee hearings are open to the public and are widely reported in the mass media. Congressional investigations thus represent one important tool available to lawmakers to inform the citizenry and arouse public interest in national issues. Congressional committees also have the power to compel testimony from unwilling witnesses and to cite for contempt of Congress witnesses who refuse to testify and for perjury those who give false testimony.
Informal practices of Congress
In contrast to European parliamentary systems, the selection and behavior of U.S. legislators has little to do with central party discipline. Each of the major American political parties is a coalition of local and state organizations that join together as a national party—Republicans and Democrats. Thus, traditionally members of Congress owe their positions to their districtwide or statewide electorate, not to the national party leadership nor to their congressional colleagues. As a result, the legislative behavior of representatives and senators tends to be individualistic and idiosyncratic, reflecting the great variety of electorates represented and the freedom that comes from having built a loyal personal constituency.
Congress is thus a collegial and not a hierarchical body. Power does not flow from the top down, as in a corporation, but in practically every direction. There is comparatively minimal centralized authority, since the power to punish or reward is slight. Congressional policies are made by shifting coalitions that may vary from issue to issue. Sometimes, where there are conflicting pressures—from the White House and from important interest groups—legislators will use the rules of procedure to delay a decision so as to avoid alienating an influential sector. A matter may be postponed on the grounds that the relevant committee held insufficient public hearings. Or Congress may direct an agency to prepare a detailed report before an issue is considered. Or a measure may be put aside by either house, thus effectively defeating it without rendering a judgment on its substance.
There are informal or unwritten norms of behavior that often determine the assignments and influence of a particular member. "Insiders," representatives and senators who concentrate on their legislative duties, may be more powerful within the halls of Congress than "outsiders," who gain recognition by speaking out on national issues. Members are expected to show courtesy toward their colleagues and to avoid personal attacks, no matter how unpalatable their opponents' policies may be, though in recent years this norm has been called into question. Still, members daily refer to one another as the "Gentlewoman from Tennessee" or the "distinguished Senator from Michigan," reflecting a traditionalist etiquette found in few other domains of American life. Members usually specialize in a few policy areas rather than claim expertise in the whole range of legislative concerns. Those who conform to these informal rules are more likely to be appointed to prestigious committees or at least to committees that affect the interests of a significant portion of their constituents.
A recently emerged Congressional practice is for the Speaker of the House to support only legislation that is backed by his own party, regardless of whether he personally supports it or a majority of the whole House supports it. Dennis Hastert believes that the Speaker should only let pass legislation that is supported by the "majority of the majority," not necessarily the entire House. In a 2003 speech Hastert said "On occasion, a particular issue might excite a majority made up mostly of the minority . . . Campaign finance is a particularly good example of this phenomenon. The job of speaker is not to expedite legislation that runs counter to the wishes of the majority of his majority."
The traditional independence of members of Congress has both positive and negative aspects. One benefit is that a system that allows legislators to vote their consciences or their constituents’ wishes is inherently more democratic than one that does not. In European party systems legislators are beholden to the party leadership through the slating process; in America's party system they are responsible to voters through primaries.
The independence of Congressmen and Senators also allows much greater diversity of opinion than would exist if Congressmen had to obey their leaders. If legislators had to vote with the party leadership, presumably America would become a multi-party system. Thus although there are only two parties represented in Congress, America’s Congress represents virtually every shade of opinion that exists in the land. (see third party)
The problem of independence is that there is less accountability for voters than there would be if Congressmen took responsibility for their party’s actions.
When in the majority, congressional leaders in both houses and both parties use a technique that is sometimes called "catch and release." In "catch and release," if a piece of pending legislation is unpopular in a member’s district or state, that member of Congress will be allowed to vote against the law if his or her vote will not affect the outcome. If the vote will be close, Congressmen will be "reeled in" and required to vote for the party’s legislation. Because of catch and release, it is possible for Congressmen to hide their true political stances from their constituents until an extremely close vote comes up. As an example, in 2002, several members of Congress who had never voted for free trade in the past voted to authorize presidential Trade Promotion Authority, formerly known as "Fast Track." Apparently, they had always supported free trade, but had been able to conceal it from their anti-trade constituents. On the 2003 Prescription Drug Benefit, 13 Republicans voted affirmatively in an extremely close 6:00 AM initial vote only to vote against the conference bill when it returned a few weeks later, thereby being able to tell their constituents whatever they needed to tell them.
Congressional freedom of action also allows Congressmen and Senators to hold out on certain bills in order to pull down pork for their districts. Often a reluctant vote has to be won over by pet projects or jobs for allies. In the Senate, small state Senators are more likely to hold out than large state Senators are. Congressional freedom of action also gives more power to lobbyists. Compared to America, parliamentary nations have far fewer lobbyists per capita.
The practice of districts choosing their own Congressmen also results in members of Congress being the best fundraisers and best campaigners, not necessarily the best qualified.
Women in the 109th Congress of the United States of America
The 109th Congress of the United States of America takes place in the years 2005 and 2006. Like each Congress, it encompasses both the House of Representatives and the United States Senate. Representation in the House reflects each state’s population, while the Senate consists of two senators from each state, regardless of its population. Both chambers are made up mostly of men, although women are slowly gaining seats.
In the 109th Congress, there are 69 women in the House of Representatives, 14 women in the United States Senate, and two women on George W. Bush’s Cabinet (Imbornoni). As of 2005, this is the greatest number of women to serve in Congress at one time. White males, on the other hand, still make up the majority of Congress. Rebekah Herrick, author of “Gender Effects on Job Satisfaction in the House of Representatives,” comments on this trend by saying: “Congress is a masculine institution in that it was created by men and is composed largely of men with a masculine bias that affects its power structure and norms today. Similarly, congresswomen, even in the 1990’s, see the House as a male institution and have complained about their difficulty in gaining positions of power, sexual harassment, patronizing behavior, and being left out of social and recreational opportunities” (Herrick 87).
- Imbornoni, Ann-Marie; Johnson, David; and Haney, Elissa. “Famous Firsts by American Women” Infoplease. 2005. 03-01-05 <http://www.infoplease.com/spot/womensfirsts1.html>
- Herrick, Rebekah. “Gender effects on job satisfaction in the House of Representatives.” Women and Politics 23.4 (2001), 85-98.
Lobbying has been called the fourth branch of the American government. Many observers of Congress consider lobbying to be a corrupting practice, but others appreciate the fact that lobbyists provide information. Lobbyists also help write complicated legislation.
Lobbyists must be registered in a central database and only sometimes actually work in lobbies. Virtually every group - from corporations to foreign governments to states to grass-roots organizations - employs lobbyists.
As of 1987, there were 23,000 registered lobbyists, a sixty-fold increase from 1961. (Power Game, Hedrick Smith, 29-31) Many lobbyists are former Congressmen and Senators, or relatives of sitting Congressmen. Former Congressmen are advantaged because they retain special access to the Capitol, office buildings, and even the Congressional gym.
Elections for members of both houses of Congress are invariably held in November of every even-numbered year, on that month's first Tuesday following its first Monday (that is to say, on the Tuesday that falls between the second and eighth days, inclusive), a day known as Election Day.
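As a small illustrative sketch (the code and names below are mine, not from the article), the date rule above can be computed directly:

from datetime import date, timedelta

def election_day(year):
    # The first Tuesday after the first Monday in November (always November 2-8).
    d = date(year, 11, 1)
    while d.weekday() != 0:        # 0 means Monday
        d += timedelta(days=1)     # advance to the first Monday of November
    return d + timedelta(days=1)   # the Tuesday that follows it

print(election_day(2004))   # 2004-11-02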
In the case of the House of Representatives, these elections occur in every state, and in every district of the states that are divided into Congressional districts. Occasionally a special election is held within a state, or district of a state, that has an unscheduled vacancy in its corresponding seat.
A candidate typically earns a place on the general election ballot by winning a party primary.
In the case of the Senate, however, since terms of office last six years and each state has two senators, it follows mathematically that Senate elections can occur in a given state no more often than twice for every three Congressional-election years. In fact, no state has elections for both its senators in the same year (with possible exceptions in cases of unscheduled vacancies); every state elects one senator two years after the other, and then next elects a senator after four additional years. Replacements for vacant Senate seats are usually appointed by state governors, rather than chosen by special election. Before the passage of the Seventeenth Amendment to the United States Constitution, providing for direct elections, Senators were chosen by state legislatures.
Seats by party (109th Congress, 2005-2007)
Senate
- Independent: 1 (Senator James Jeffords (I-VT) votes with the Democrats on procedural matters)
House of Representatives
- Republicans: 232 (53%)
- Democrats: 202 (46%)
- Independent: 1 (Bernard Sanders of VT)
List of United States Congresses by Session
For a detailed list of congressional members or information on particular congressional sessions, see the List of United States Congresses.
- U.S. House of Representatives
- U.S. Senate
- Library of Congress: Thomas Legislative Information
- Teaching about the U.S. Congress
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/U.S._Congress | 13
31 | Basic Aspects of Microeconomics
Microeconomics examines the economy from the viewpoint of the interests of its smallest economic units.
I. Demand, supply and market equilibrium
Demand theory explains the behavior of buyers in demanding a good.
1. The demand function
The demand function is an equation showing the relationship between the quantity demanded of a certain good and all the factors that influence it.
Based on the factors that affect demand, a general demand function can be written as follows:
Qd = f (Pq, Ps.i, Y, S, D), where:
Qd = quantity of the good demanded
Pq = price of the good itself
Ps.i = prices of substitute goods (i = 1, 2, ..., n)
Y = income
S = taste
D = total population.
The theory of supply explains the behavior of sellers in offering goods for sale.
3. Supply function
The supply function is an equation showing the relationship between the quantity of a good offered by sellers and all the factors that influence it. The supply function is generally written:
Qs = f (Pq, Pl.i, C, O, T), where:
Qs = quantity of the good offered
Pq = price of the good itself
Pl.i = prices of other goods (i = 1, 2, ..., n)
O = corporate objectives
T = level of technology used.
4. Market equilibrium
The price and quantity of a good traded in a market are determined by the demand for and the supply of that good. Analysis of the price and quantity traded in a market should therefore be based on an analysis of demand and supply simultaneously. The market price, or equilibrium price, is the price at which the quantity offered by sellers equals the quantity demanded by buyers. In such a condition the market is said to be in a state of balance, or equilibrium.
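As a purely illustrative sketch (not from the original text), suppose demand and supply take hypothetical linear forms, Qd = a - b*P and Qs = c + d*P; setting Qd = Qs gives the equilibrium price directly. The function and the numbers below are made up for illustration:

def equilibrium(a, b, c, d):
    # Hypothetical linear curves: Qd = a - b*P (demand) and Qs = c + d*P (supply).
    price = (a - c) / (b + d)    # price at which quantity demanded equals quantity supplied
    quantity = a - b * price     # equilibrium quantity traded at that price
    return price, quantity

# Example with made-up numbers: Qd = 100 - 2P and Qs = 10 + 4P give P* = 15, Q* = 70.
print(equilibrium(100, 2, 10, 4))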
II. Elasticities of demand and supply
What will happen to the demand for or supply of a good if its price goes down or up by one cent? The answer depends on the degree of sensitivity of each good in response to price changes. The measure of this sensitivity is called elasticity. The sensitivity of the demand for a good to changes in the factors that influence it is called the elasticity of demand, while the sensitivity of its supply to changes in the factors that influence it is called the elasticity of supply.
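As an illustration only (the text gives no formula), the price elasticity of demand is commonly measured as the percentage change in quantity demanded divided by the percentage change in price; a midpoint (arc) version, with made-up numbers, might look like this:

def price_elasticity_of_demand(q0, q1, p0, p1):
    # Arc (midpoint) elasticity: percentage change in quantity / percentage change in price.
    pct_change_q = (q1 - q0) / ((q0 + q1) / 2)
    pct_change_p = (p1 - p0) / ((p0 + p1) / 2)
    return pct_change_q / pct_change_p

# Price rises from 4 to 5 while quantity demanded falls from 120 to 100:
print(price_elasticity_of_demand(120, 100, 4, 5))   # about -0.82, i.e. relatively inelastic demand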
III. Consumer behavior
The theory of consumer behavior basically studies why consumers behave as stated in the law of demand. It therefore explains why consumers buy more of a good at lower prices and reduce their purchases at higher prices, and how a consumer determines the number and combination of goods to be purchased from a given income.
1. Cardinal approach
The cardinal utility, or marginal utility, approach starts from the assumption that the satisfaction of each consumer can be measured in money or in other units (cardinal utility), just as we measure the volume of water, the length of a road, or the weight of a bag of rice.
2. Ordinal approach
The ordinal utility, or indifference curve, approach starts from the assumption that the level of consumer satisfaction can be said to be higher or lower without saying how much higher or lower (ordinal utility).
IV. Perfect competition
Perfect competition is the most ideal market structure, because this market structure ensures that production activities are carried on with a high level of efficiency. Economic analysis therefore often uses the assumption that the economy is a perfectly competitive market. In practice, however, it is not easy to determine whether an industry can really be classified as a perfectly competitive market (in the theoretical sense).
Generally, real markets only come close to these structural characteristics. Nevertheless, studying the characteristics of perfect competition is very important as a theoretical basis for economic analysis.
V. Monopoly
The market structure opposite to perfect competition is monopoly. Monopoly is a market structure in which there is only one seller, there are no close substitutes for the product, and there are barriers to entry into the market.
1. Characteristics of a monopoly market
• There is only one seller. Since there is only one seller, the buyer has no other choice; the buyer simply accepts the terms of sale specified by the seller.
• There are no close substitutes for the product.
• There are barriers to entry into the market. These barriers may take the form of laws, requirements for sophisticated technology, or requirements for very large capital.
• The monopolist is a price setter. By controlling the level of production and the volume of products offered, the monopolist can set the price it desires.
VI. Monopolistic competition and oligopoly
1. Monopolistic competition
A monopolistic market is basically a market that lies between two extreme types of market forms, namely perfect competition and monopoly. The properties of this market therefore contain elements of both the monopoly market and the perfectly competitive market. In general, monopolistic competition can be defined as a market in which there are many producers/sellers who produce and sell differentiated products.
2. Oligopoly
Oligopoly is a situation in which several companies dominate the market, but few enough that the actions of one firm will affect the policies of the other firms. | http://www.shvoong.com/social-sciences/economics/2097503-aspects-basic-micro-economics/ | 13
14 | Report of the Select Committee on Assassinations of the U.S. House of Representatives
Findings in the Assassination of Dr. Martin Luther King, Jr.
- Introduction: The civil rights movement and Dr. King
- A history of civil rights violence
- Equality in education-- the 20th century objective
- A leader emerges
- A philosophy of nonviolence
- 1960: The year of the sit-ins
- 1963: The year of triumph and despair
- The road to Memphis
- The last moments: Memphis, Tenn., April 4, 1968
Dr. Martin Luther King, Jr., an eloquent Baptist minister from Atlanta, Ga., was one of the most prominent figures in the civil rights movement in America during its period of most visible achievement, 1955 to 1968. A disciple of nonviolence and love, Dr. King became the victim of savage violence, killed by a sniper's bullet as he stood on the balcony of a Memphis, Tenn., motel on April 4, 1968. His death signaled the seeming end of a period of civil rights progress that he had led and for which his life had become a symbol. Dr. King's legacy is one of profound change in the social fabric, not only for Black Americans, but for all citizens. But for some, after his death, as a Washington Post writer observed, "...his army of conscience disbanded, the banners fell, the movement unraveled..."
History of Civil Rights Violence (1)
Dr. King's tragic death in Memphis in 1968 was not, unfortunately, a historical aberration. The first Blacks arrived in colonial America at Jamestown, Va., in 1619 as slaves from Africa. As they were dispersed among Southern plantations, they were deprived of their traditions and separated from the rest of the population by custom, and their fate was determined by the white majority.
Civil rights violence dates back at least to the mid-18th century, with the slave revolts of that period and their brutal suppression by whites. Roaming bands of runaway slaves in the South attacked plantations, and, in 1775, fears of a general slave uprising led to the annihilation of at least one group of Blacks by white soldiers in Georgia.
After the American Revolution, with the invention of the cotton gin, slavery in the South intensified. Black Americans provided most of the labor to support the economy of that region. Laws restricting Black mobility and educational opportunity were adopted by Southern legislatures, while the rights of slaveholders were jealously protected. Involuntary servitude was, however, outlawed in the North, and leaders of the new Nation such as Benjamin Franklin, John Jay, and John Woolman called for an end to slavery.
During the 1830's, sentiment for emancipation of slaves solidified. The movement for the abolition of slavery, led by "radicals," sparked violence throughout the United States. In 1835, a proslavery band seized abolitionist William Lloyd Garrison and dragged him through the streets of Boston. Two years later, the presses of the radical Alton, Ill., Observer were destroyed, and its editor, Elijah P. Lovejoy, was shot to death by white vandals.
In the 1850's, violence presaged the struggle that was to tear the Union asunder. The pillaging and burning of Lawrence, Kans., by a proslavery mob on May 21, 1856, led abolitionist John Brown to launch a bloody retaliatory raid on Pottawatomie, Kans., 3 days later. The massacre touched off a guerrilla war that lasted until Kansas was granted statehood in 1861. In 1859, Brown seized the Federal arsenal at Harpers Ferry, W. Va., in the hope of arming a Black force that would free slaves in the South. The arsenal was recaptured 2 days after Brown's raid, and Brown was hanged following his trial and conviction of treason, conspiracy, and murder.
Sectional differences led to the Civil War that fractured the Union in 1861; it lasted 4 years and became one of the bloodiest military conflicts in U.S. history. Blacks served a limited role in the Union Army; over 200,000 of them were inducted. Their presence in battle infuriated Confederate military leaders, some of whom approved a no-prisoner policy for Blacks. Combat reports indicate that, Black prisoners were murdered by Southern troops following, for example, the 1864 Battles of Fort Pillow, Tenn., Poison Spring, Ark., and the Crater at Petersburg, Va.
In the decade following the Northern victory in 1865 and the freeing of slaves from bondage, a spate of laws, engineered to guarantee the rights of newly emancipated Blacks, were adopted. They included the 13th, 14th, and 15th amendments and 7 civil rights acts. The promise of equality during postwar Reconstruction, the period of reestablishment of the seceded States into the Union, however, was not realized. Reforms were ultimately defeated by Southern white intransigence and violence. With emancipation, a wave of murders swept the South, and Reconstruction became the bloodiest period of civil rights violence in U.S. history, as the caste system of segregation was violently institutionalized. Militant groups such as the White Leagues and the Ku Klux Klan organized to oppose the new challenge to white supremacy.
Outbursts of violence were commonplace throughout the South during this period:
According to General Philip Sheridan, commander of troops in Louisiana and Texas during Reconstruction, 3,500 civil rights advocates were slain in Louisiana alone in the decade following the Civil War, 1,884 of them in 1868 alone.
When Blacks in Memphis, Tenn., appealed for their civil rights in 1866, rampaging white terrorists burned homes and churches in the Black section of that city and massacred 47 Blacks.
The killing of 27 delegates by a white mob at the Louisiana State Convention in New Orleans in 1866 was described by one observer as "systematic massacre of Negroes by whites."
Of 16 Blacks elected as delegates to the Mississippi Constitutional Convention in 1868, two were assassinated by whites.
In the Alabama election campaign of 1870, four Black civil rights leaders were murdered when they attended a Republican rally.
White terrorists took control of Meridian, Miss., in 1871 after they killed a Republican judge and lynched an interracial group of civil rights leaders.
In the Mississippi election campaign of 1874, several Black leaders in Vicksburg were attacked and murdered by members of the Ku Klux Klan.
During the Louisiana election campaign of 1878, Klan gunmen fired on Blacks in Caddo Parish, killing 40 by one account, as many as 75 by another.
Systematic violence, designed to terrify Blacks asserting their right to vote, led Attorney General Alphonso Taft to declare in 1876, "It is the fixed purpose of the Democratic Party in the South that the Negro shall not vote and murder is a common means of intimidation to prevent them."
Radical Reconstruction in the South was defeated by 1877, and the last of the Black militias in the South were dissolved. Southern legislatures adopted laws to deprive Blacks of all opportunity for political or civil participation and to segregate all facilities for education, travel, and public accommodation. Despite the waning of Reconstruction, mob violence and lynching occurred almost unchecked in the South until World War I. Blacks were removed from public affairs by intimidation.
In the 1890's, the legislatures of all Southern States disenfranchised Black citizens. With its 1903 ruling in Giles v. Harris, the U.S. Supreme Court sanctioned this practice. A few years earlier, in 1896, the Court had also approved racial segregation, finding in Plessy v. Ferguson that "separate but equal" facilities were acceptable under the Constitution. As the Black vote disappeared in the South, the murder of civil rights leaders decreased dramatically, only to be replaced by other forms of white terrorism: riots and lynching. The National Association for the Advancement of Colored People (NAACP) was founded in 1909 to deal with this intimidation at the expense of further assertion of Black political authority.
Equality in Education---The 20th Century Objective(2)
The civil rights movement that became a major social and political force in the 1950's, and matured in the 1960's, grew out of the efforts of organizations founded during the first half of the 20th century. One prominent organization of this period, the NAACP, was responsible for the gradual emergence of the Black protest movement. It sought an end to racial segregation primarily through the court system by providing counsel to Blacks whose rights had been denied. It also pushed for reform in the Congress and in State legislatures and initiated programs to educate the public about existing racial injustice. The National Urban League worked on behalf of middle-class Blacks. The Congress of Racial Equality (CORE), a pacifist organization founded in 1942, attacked discrimination in places of public accommodation in Northern and Border States. CORE took the lead in nonviolent direct action, organizing, for example, sit-ins in Chicago in 1943, bus rides and stand-ins at Chicago's Palisades Pool in 1947-48, and, in 1947, the Journey of Reconciliation, a harbinger of later freedom rides.
These activities of CORE, in fact, presaged the work of Dr. Martin Luther King's Southern Christian Leadership Conference in the late 1950's and 1960's.
With the signs of civil rights progress in the 1940's, particularly judicial responses to the NAACP, a mass movement began to develop. The U.S. Supreme Court prohibited all-white primary elections and declared racially restrictive real estate covenants unconstitutional. In 1941, President Franklin D. Roosevelt issued an Executive order urging fair employment practices in response to the threats of mass demonstrations from A. Philip Randolph, president of the Brotherhood of Sleeping Car Porters. The President's Committee on Civil Rights recommended the enactment of fair employment legislation in 1947, and in 1948, President Harry S. Truman barred segregation in the Armed Forces and Government agencies. The Congress, however, did not act on civil rights issues until 1957.
The modern civil rights movement set its roots in the field of education. The NAACP had initiated litigation in the 1930's to end segregation in education. At the beginning of 1954, 17 States and the District of Columbia required segregation in public schools, while three other States permitted localities to adopt the practice. Then, on May 17, 1954, the U.S. Supreme Court announced its unanimous decision in Brown v. Board of Education that segregation in public schools was unconstitutional. In delivering the opinion of the Court, Chief Justice Earl Warren said that "separate education facilities are inherently unequal." A year later, the Court followed with a ruling that the process of public school desegregation must proceed with "all deliberate speed," thus choosing a policy of gradualism rather than requiring desegregation by a fixed date as urged by the Brown plaintiffs through their NAACP attorneys.
The Brown decision marked the beginning of a long struggle, for it was not readily accepted in the South. Segregationist and States rights groups emerged to oppose the goal of integration, and militant organizations such as the White Citizens Councils and the Ku Klux Klans attracted a new following. Violence was resumed. On August 28, 1955, for example, a white mob in Mississippi kidnapped and lynched Emmett Till, a 14-year-old boy from Chicago who had been visiting his relatives.
A New Leader Emerges
Many historians believe the beginning of the modern Black revolt against inequality was marked in Montgomery, Ala. on December 1, 1955. Four Black passengers were asked by the driver of a downtown bus to give up their seats. Rosa Parks, a 42-year-old Black seamstress, refused and was arrested under a local segregation ordinance. In protest, Black leaders organized a boycott of the Montgomery bus system that lasted 382 days, ending only when the U.S. Supreme Court ordered the buses integrated.
The bus boycott was guided by the words of a 27-year-old Baptist minister who emerged as a fresh and dynamic force among Blacks. Preaching the "Christian doctrine of love operating through the Gandhian method of nonviolence," Dr. Martin Luther King, Jr., represented a new leadership. In Montgomery, he demonstrated that non-violent direct action could be used effectively to achieve social justice.
From that time until his death in 1968, Dr. King's life was inextricably interwoven with the events of the civil rights movement.
Dr. King was born in Atlanta, Ga., on January 15, 1929, the son of Baptist minister, Martin Luther King, Sr. and the maternal grandson of another Baptist minister. He enrolled at Atlanta's all-Black Morehouse College at age 15 and, in his junior year, decided to enter the clergy. In 1947, he was ordained a minister at his father's Ebenezer Baptist Church in Atlanta. The following year, he continued his studies at the Crozer Theological Seminary in Chester, Pa. He was elected president of his class in his senior year and was named outstanding student when he graduated first in his class. At Crozer, he became acquainted with the work of Christian social theologians, as well as Mohandas K. Gandhi's doctrine of nonviolent direct action, Satyagraha (Sanskrit for truth-force), and Henry David Thoreau's essay, "On the Duty of Civil Disobedience."
With a fellowship he received to pursue his doctorate, King entered graduate school at Boston University in 1951. His doctoral thesis compared the conceptions of God in the thinking of Paul Tillich and Henry Nelson Wieman. He received his doctorate in the spring of 1955.
In Boston, he met Coretta Scott, a graduate of Antioch College who was attending the New England Conservatory of Music. They were married in June 1953, and in the ensuing years had four children: Yolanda, Martin Luther III, Dexter Scott, and Bernice.
At the beginning of 1954, as he continued work toward his doctorate, Martin Luther King was hired as pastor of the Dexter Avenue Baptist Church in Montgomery, Ala., the city where he was to begin his civil rights career.
As president of the Montgomery Improvement Association (MIA), Dr. King led the bus boycott with the assistance of Montgomery Black leaders E.D. Nixon, a civil rights activist who had worked with A. Philip Randolph's Brotherhood of Sleeping Car Porters, Reverend Ralph David Abernathy, and Reverend E.N. French. At the first meeting of the MIA on December 5, 1955, Dr. King enunciated a principle from which he would never waver: "We will not resort to violence. We will not degrade ourselves with hatred. Love will be returned for hate." In the tradition of Gandhi, leader of the struggle for Indian independence and an advocate of passive resistance, Dr. King urged his followers to forswear violence and to work for ultimate reconciliation with their opponents by returning good for evil.
After mass arrests, threats and physical attacks, including the dynamiting of Dr. King's home, the Montgomery bus boycott ended successfully in December 1956. That month the Southern Regional Council announced that 25 other Southern cities had desegregated their buses either voluntarily or as the result of boycotts.
Despite the successful Montgomery bus boycott, 1956 was also marked by disappointments to the rising hopes of Black Americans. The admission of Autherine Lucy to the University of Alabama in February was met by white mob violence. To avert further disturbances, she was expelled by university officials. That decision was upheld by a Federal district court and the University of Alabama remained segregated until 1963. Also in 1956, 101 members of Congress from the States that had comprised the Confederacy signed the Southern Manifesto, which declared that the school desegregation decisions of the Supreme Court were a "clear abuse of judicial power." Noting that
neither the Constitution nor the 14th amendment mentioned education and that the Brown decision had abruptly reversed precedents established in Plessy v. Ferguson and subsequent cases, the manifesto signers vowed "to use all lawful means to bring about a reversal of this decision which is contrary to the Constitution and to prevent the use of force in its implementation."
A Philosophy of Nonviolence
White resistance notwithstanding, the civil rights movement continued its growth in 1957. Recognizing the need for a mass movement to capitalize on the Montgomery bus boycott, Black leaders formed the Southern Christian Leadership Conference (SCLC) early in the year, and the boycott leader, Dr. Martin Luther King, Jr., was elected its first president. Adopting a nonviolent approach and focusing on the South, the SCLC was dedicated to the integration of Blacks in all aspects of American life.
In May 1957, to commemorate the third anniversary of the Supreme Court's Brown ruling on school desegregation, Dr. King led a prayer pilgrimage in Washington, D.C., the first large-scale Black demonstration in the capital since World War II. In his first national address, Dr. King returned to a theme that had lain dormant for 80 years, the right to vote. "Give us the ballot," he pleaded, "and we will no longer have to worry the Federal Government about our basic rights ...we will quietly and nonviolently, without rancor or bitterness, implement the Supreme Court's decision." Dr. King was on his way to becoming one of the most influential Black leaders of his time, a symbol of the hopes for equality for all Americans.
It was a time of fast-moving events, actions and counteractions, in a continuing conflict. On September 9, 1957, President Dwight D. Eisenhower signed the first Civil Rights Act since 1875. The law markedly enlarged the Federal role in race relations. It established a Civil Rights Commission and a Civil Rights Division in the Department of Justice. Most important, it gave the Attorney General authority to seek injunctions against obstruction of voting rights.
That same month, in Little Rock, Ark., violent rioting erupted over the integration of Central High School. Nine Black students were successfully enrolled, but not before 1,000 paratroopers and 10,000 National Guardsmen were sent into the beleaguered city. The appearance of Federal troops in Little Rock brought back unpleasant memories of Reconstruction, and the price of progress was a polarization of southern attitudes. Meanwhile, as Dr. King continued to carry the civil rights banner, he became the victim of a near fatal assault on September 20, 1957. As he was autographing copies of his first book, "Stride Toward Freedom," in a Harlem department store, a deranged Black woman, Izola Curry, stabbed him with an 8-inch letter opener.
Though the weapon penetrated near his heart, Dr. King recovered after 2 weeks of hospitalization.
1960: The Year of the Sit-ins
Civil rights activism intensified in 1960 the year of the sit-ins. On February 1, 1960, four Black students dedicated to nonviolent direct
action sat at the lunch counter of a Greensboro, N.C., Woolworth's store. Though they were refused service, the students sat at the counter until the store closed, and each succeeding day they returned with more students. The sit-in movement spread to cities in Virginia, Maryland, South Carolina, Tennessee, Alabama, Kentucky, and Florida. Recognizing the need for organization of this new movement, the SCLC provided the impetus for the Student Nonviolent Coordinating Committee (SNCC) in April 1960.
The sit-ins that continued throughout the year became a successful means of protest. By the end of 1960, Blacks were being served at lunch counters in hundreds of southern stores.
Inevitably, there was white resistance. As the sit-ins set the pace of a campaign to open up public facilities of all sorts, there were thousands of arrests and occasional outbreaks of violence. Dr. King was arrested with other demonstrators at an Atlanta, Ga., department store sit-in in October 1960. Trespass charges were dropped against him at his trial, but he was sentenced to 4 months hard labor at the Reidsville State Prison Farm on the pretext that he had violated probation for an earlier minor traffic offense. National concern for Dr. King's safety prompted the intercession of Democratic Presidential candidate John F. Kennedy, which led to the civil rights leader's release. Some observers believed this action contributed to Kennedy's narrow election victory over Vice President Richard M. Nixon a week later by attracting Black support.
Violence increased with attempts to integrate the interstate transportation system in 1961, the year of the freedom rides. They began in May when members of CORE boarded two buses in Washington, D.C., and set out for New Orleans, determined to test southern segregation laws on buses as well as in terminals en route. Trouble broke out when the buses reached Alabama. One bus was burned and stoned by whites in Anniston, and, in Birmingham, protesters on the second bus were brutally beaten by a mob awaiting their arrival. Another group of students left Atlanta, Ga., for Montgomery, Ala. the following week. Attorney General Robert F. Kennedy sent 500 Federal marshals to protect them, but the students arrived before the marshals and were savagely beaten. The next evening an angry throng of whites surrounded a church where Dr. King was scheduled to speak. The marshals and federalized National Guard troops had to rescue the congregation and Dr. King from the mob. Although the freedom riders met with little violence in Mississippi, they did have to reckon with an unsympathetic legal system. Over 300 demonstrators were arrested for breach of the peace and for disobeying police orders to disperse in segregated Mississippi terminals.
In response to the attacks on freedom riders, Attorney General Kennedy petitioned the Interstate Commerce Commission (ICC) to adopt stricter regulations against segregation. On September 22, 1961, the ICC announced new rules prohibiting segregation on interstate buses and in terminals.
Across-the-board desegregation of all public facilities in Albany, Ga., was the focus of a campaign led by Dr. King from late 1961 through the summer of 1962. The city reacted by arresting over 1,100 demonstrators during the campaign, including Dr. King and his
colleague, Reverend Abernathy. City officials stubbornly refused to confer with Black leaders and steadfastly rejected proposals for desegregation. By September 1962, public parks, pools, and libraries had been closed or sold to white business groups. The Albany campaign received national attention, but it failed to crack the southern resistance symbolized by the city. From the Albany defeat Dr. King learned that the scattergun approach of simultaneously attacking all aspects of segregation was ineffective.
On the other hand, the admission of the first Black student to the all-white University of Mississippi in the fall of 1962 marked a significant integrationist victory. James Meredith, an Air Force veteran, had been enrolled at Jackson State College when he decided to transfer to "Ole Miss." With the assistance of the NAACP, he filed suit when he was rejected. After 16 months of litigation, the Fifth Circuit Court of Appeals ruled that he had been turned down solely because of his race and ordered that he be accepted. Outright obstruction by State officials led the court to order that Mississippi's Gov. Ross Barnett and Lt. Gov. Paul Johnson pay fines unless they stop interfering with its ruling. On October 1, 1962, 320 Federal marshals arrived at Oxford to escort Meredith to his dormitory. This action set off a riot that left 2 persons killed and 375 injured before it was quelled by Federal troops. When the tear gas cleared, Meredith was the first Black student to enter "Ole Miss." Despite Governor Barnett's vow to continue to fight his enrollment, Meredith graduated in August 1963.
1963: A Year of Triumph and Despair
Dr. King led an all-out attack in the spring of 1963 on racial discrimination in Birmingham, Ala., which he described as "the most segregated city in the United States." Civil rights activists sought removal of racial restrictions in downtown snack bars, restrooms and stores, as well as nondiscriminatory hiring practices and the formation of a biracial committee to negotiate integration. Sit-ins, picket lines and parades were met by the police forces of Eugene "Bull" Connor, commissioner of public safety, with hundreds of arrests on charges of demonstrating without a permit, loitering and trespassing.
On Good Friday, April 12, 1963, Dr. King, Reverend Abernathy and Rev. Fred Shuttlesworth were arrested for leading a demonstration in defiance of an injunction obtained by Bull Connor. Dr. King was placed in solitary confinement and refused access to counsel. During his incarceration, he penned his "Letter from the Birmingham Jail," a response to a statement by eight leading local white clergymen-- Protestant, Catholic, and Jewish--who had denounced him as an outside agitator and urged Blacks to withdraw their support for his crusade. In this eloquent statement, Dr. King set forth his philosophy of nonviolence and enumerated the steps that preceded the Gandhian civil disobedience in Birmingham. Specifically citing southern segregation laws, he wrote that any law that degraded people was unjust and must be resisted. Nonviolent direct action, Dr. King explained, sought to foster tension and dramatize an issue "so it can no longer be ignored."
Dr. King was released from jail on April 20, 1963. The Birmingham demonstrations continued. On May 2, 500 Blacks, most of them high school students, were arrested and jailed. The next day, a group of demonstrators was bombarded with brickbats and bottles by onlookers while another cluster of 2,500 protesters was met by the forces of Police Commissioner Connor, with his snarling dogs and high-pressure firehoses.
Worldwide attention was being focused on the plight of Blacks whose reasonable demands were being met by the unbridled brutality of the Birmingham police. Senator Wayne Morse of Oregon said Birmingham "would disgrace a Union of South Africa or a Portuguese Angola." The outcry led to negotiations with the city, and Dr. King suspended his campaign on May 8. Two days later, an agreement was reached to desegregate lunch counters, restrooms, fitting rooms, and drinking fountains in department stores and to promote Blacks over a 60-day period. The following day, however, the bombings of a desegregated hotel and the home of Dr. King's brother, Rev. A. D. King, led to a disturbance by hundreds of Blacks that lasted until State troopers arrived to assist local police. Calm was restored. Dr. King was considered victorious because of the attention he had attracted to racial injustice. One by one, public facilities in Birmingham were opened to Blacks.
Birmingham became a rallying cry for civil rights activists in hundreds of cities in the summer of 1963. Marches were held in Selma, Ala., Albany, Ga., Cambridge, Md., Raleigh and Greensboro, N.C., Nashville and Clinton, Tenn., Shreveport, La., Jackson and Philadelphia, Miss., as well as in New York and Chicago.
This period was also one of tragedy. On June 19, 1963, the day after
President Kennedy's dramatic call for comprehensive civil rights legislation,
Medgar Evers, NAACP field secretary for Mississippi, was shot
to death in front of his Jackson home. Evers had been instrumental in
James Meredith's efforts to enter the University of Mississippi, and a
month before his death had launched an antisegregation drive in Jackson.
Byron de la Beckwith, a fertilizer salesman, was charged with
the murder and tried twice; both trials ended in hung juries. In
September 1963, attention reverted to Birmingham, Ala., when the
16th Street Baptist Church was bombed, killing four Black girls,
aged 11 to 14, in their Sunday school class. The tragedy was compounded
by the deaths of two Black youths, killed later that day in an
outburst of violence that followed the bombing.
The climactic point of the campaign for Black equality came on August 28, 1963, when Dr. King led 250,000 followers in the march on Washington, a nonviolent demonstration of solidarity engineered by A. Philip Randolph and Bayard Rustin to dramatize Black discontent and demand an open, desegregated society with equal justice for all citizens regardless of race. A goal of the march was passage of a comprehensive civil rights bill to insure integrated education, equal access to public accommodations, protection of voting rights and nondiscriminatory employment practices. In his address, acclaimed as the most memorable moment of the day, Dr. King recounted his dream for an integrated society:
I have a dream that one day this Nation will rise up, live out the true meaning of its creed: "We hold these truths to be self-evident, that all men are created equal." I have a dream that one day on the red hills of Georgia sons of former slaves and the sons of former slaveowners will be able to sit down together at the table of brotherhood. I have a dream that one day even the State of Mississippi, a State sweltering with the heat of injustice ...will be transformed into an oasis of freedom and justice. I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin but by the content of their character.
Dr. King pledged to continue to fight for freedom and concluded:
When we allow freedom to ring ...from every town and every hamlet, from every State and every city, we will be able to speed up that day when all of God's children, Black men and white men, Jews and Gentiles, Protestants and Catholics will be able to join hands and sing in the words of the old Negro spiritual, "Free at last! Free at last! Great God All Mighty, we are free at last!"
The march provided new impetus to the civil rights movement and helped solidify the recognition of Dr. King as one of the most important spokesmen for the Black cause.
Within weeks of President Kennedy's assassination on November 22, 1963, his successor, President Lyndon B. Johnson, asked the Congress to end its deadlock and submit strong civil rights legislation for his approval. Congress responded by passing the Civil Rights Act of 1964, which contained provisions that: Guaranteed Blacks the right to vote; guaranteed access to public accommodations, such as restaurants, hotels, and amusement areas; authorized the Federal Government to sue to desegregate public facilities, including schools; mandated nondiscrimination in Federal programs; and required equal employment opportunity. In addition, on February 5, 1964, poll taxes, a device that had been used to prevent Blacks from voting, were barred with the adoption of the 24th amendment.
CORE and SNCC recruited 1,100 northern college students in a
drive to register on the voting rolls as many of Mississippi's 900,000
Blacks as possible in the freedom summer voter registration campaign
of 1964. The campaign came to the forefront of public attention
on August 4 when the bodies of three civil rights workers-- James E.
Chaney, Andrew Goodman, and Michael Schwerner--were found
buried in a dam near Philadelphia, Miss. The three men, missing since
June 21, had been shot to death. Eighteen whites, including several
police officers, were arrested and charged with conspiracy to deprive
the victims of their civil rights. Dismissed by Federal District Court
Judge W. Harold Cox, the charges were reinstated in 1968 after the
U.S. Supreme Court decided that the Federal Government could prosecute
State officials, as well as private persons who conspire with them,
who deprive persons of their constitutional rights.
The year 1964 also marked an important personal achievement for Dr. King. On December 10, he was awarded the Nobel Peace Prize in
Oslo, Norway. At age 35, he was the youngest recipient of the award in history and the second Black American after Dr. Ralph J. Bunche, the 1950 award winner. Not only was the award a recognition of Dr. King's role in the nonviolent struggle for civil rights in the United States, but to many it signified official international recognition of the Black protest movement.
In 1965, civil rights advocates, led by Dr. King, focused their attention on Black voting rights. At least two-thirds of Alabama's eligible Black voters were not registered at the beginning of the year. In Selma, Ala., on January 2, 1965, Dr. King announced a voter registration drive centering on that city, an attempt to dramatize the need for a Federal voting rights law. The violence directed against demonstrators in Selma, along with harassment by State and local authorities, aroused sentiment for such legislation. In February, Jimmy Lee Jackson, a civil rights worker from Perry County, Ala., became the first martyr of the campaign, when he was killed by gunfire in a clash between demonstrators and State troopers. Dr. King organized, but did not lead an initial march from Selma to the State capital, Montgomery, on March 7. The demonstrators were turned back just outside Selma by State troopers with nightsticks, tear gas, and bull whips. On March 9, 1,500 Black and white marchers, this time led by Dr. King, made a second attempt to reach Montgomery, despite a Federal court injunction. They were again met by a phalanx of State troopers just outside Selma. Rather than force a confrontation, Dr. King asked his followers to kneel in prayer and then instructed them to return to Selma. His caution cost him the support of many young militants who already mocked him with the title, "De Lawd." That evening in Selma, three white ministers were attacked and brutally beaten by white thugs. Rev. James Reeb, a Unitarian pastor from Boston, died 2 days later as a result of his injuries.
On March 15, President Johnson addressed a joint session of Congress to propose enactment of a strong voting rights bill. In one of the most memorable speeches of his Presidency, Johnson said:
At times history and fate meet at a single time in a single place to shape a turning point in man's unending search for freedom. So it was at Lexington and Concord. So it was last week in Selma, Ala.
In Alabama the twice-aborted march from Selma to Montgomery began for a third time on March 21, led by two Black Nobel Peace Prize winners, Dr. King and Dr. Bunche. On March 25, when the civil rights marchers reached Montgomery, their ranks had swelled to 50,000. In an impassioned address on the statehouse grounds, Dr. King noted that the Black protest movement was recognizing gains and no amount of white terrorism would stop it. He said:
...I know some of you are asking today, "How long will it take?" I come to say to you this afternoon, however difficult the moment, however frustrating the hour, it will not be long, because truth pressed to earth will rise again. How long? Not long, because no lie can live forever. How long? Not long, because you will reap what you sow. How long? Not long, because the arc of the moral universe is long but it bends toward justice.
While the march was considered a success, the tragedy that had plagued it from the outset continued. A civil rights transportation volunteer, Viola Liuzzo of Detroit, was shot to death as she drove a marcher home to Selma. Four Ku Klux Klan members were arrested for her murder, three of whom were eventually convicted of violating Mrs. Liuzzo's civil rights and sentenced to 10 years in prison.
The Selma campaign led to the passage of the Voting Rights Act, signed into law by President Johnson on August 6, 1965. The act provided for direct action through use of Federal examiners to register voters turned away by local officials. The Department of Justice moved swiftly to suspend voter qualification devices such as literacy tests in several Southern States, and within 3 weeks of the law's enactment, Johnson announced that over 27,000 Blacks had been registered by Federal examiners in three Southern States.
Divisions in the ranks of Black Americans became painfully apparent
in 1965. Militants labeled Dr. King's nonviolence a tool of the white
power structure. The February 21 assassination of Malcolm X, a former
leader of the Black Muslims who had called for Black separation,
underscored growing problems among Blacks. Three Black men were
arrested for the Harlem shooting of Malcolm X.
In early 1965, Dr. King suggested that the SCLC wage a campaign in northern cities for better housing for Blacks and nondiscriminatory employment practices. He spoke several times in the North. That summer he attacked patterns of de facto segregation in Chicago, and led a number of marches in predominantly Black neighborhoods of that city. It was also in 1965 that he first indicated a nexus between Federal Government spending for the Vietnam war and cuts in Federal assistance to the poor.
The euphoria over the August 6, 1965, signing of the Voting Rights Act subsided a week later when the Watts section of Los Angeles exploded in the Nation's worst race riot since 1943. It lasted 6 days and left 35 dead, 900 injured, over 3,500 arrested and $46 million of property damage. Dr. King received a mixed welcome in Watts, as he preached nonviolence in the wake of the tragic disturbance. He urged massive Federal assistance for the northern urban poor who suffered from economic discrimination and de facto segregation, the underlying causes of the Los Angeles violence.
The Watts riot demonstrated the depth of the urban race problems in the North. At the beginning of 1966, Dr. King launched a campaign against discrimination in Chicago, focusing his attack on substandard and segregated housing. He moved to a Chicago slum tenement in January and promised to organize tenants and lead a rent strike if landlords did not improve living conditions in the ghetto. Mayor Richard Daley met with Black leaders several times, but he took no concrete action to promote better housing or to implement nondiscriminatory employment practices. Violence against demonstrators plagued rallies and marches led by Dr. King in the spring and summer of 1966. At the end of July, he pressed his drive for better housing into Chicago's all-white neighborhoods. Demonstrators were jeered and attacked during these marches, and Dr. King himself was stoned in a parade through the Gage Park section on August 5. Although he was stunned by the vehement reaction of northern whites to civil rights activities, Dr. King
planned a march through the all-white suburb of Cicero because demands for better housing were not acknowledged by the city. He canceled the Cicero protest, however, when the city administration and Chicago business leaders agreed to meet with civil rights leaders. The city officials and Black leaders signed a summit agreement that manifested a commitment to open housing. Though Dr. King considered the agreement a victory and moderate Black leaders saw it as setting new precedent by forcing the mayor to the conference table, restive Black militants criticized it as a middle class sellout. The agreement ultimately had little effect on the plight of Chicago Blacks, and Dr. King's campaign was defeated by the combination of Mayor Richard Daley's intransigence and the complexities of northern racism. A positive byproduct of the effort was the SCLC's Operation Bread Basket that attacked economic ills and attempted to create new jobs for Blacks.
During 1966, the Black protest movement crumbled into several factions. SNCC, led by Stokely Carmichael, and CORE, under Floyd McKissick, adopted the slogan "Black Power," symbolizing radicalization of the movement. The term dramatically came to the attention of the public during the Meredith march in June. On June 6, 1966, James Meredith had been shot and wounded shortly after he began a 220-mile "March Against Fear" from Memphis, Tenn., to Jackson, Miss. He had hoped to embolden Blacks to register and vote, as well as to demonstrate the right of Blacks to move freely in the South. On the day after the assassination attempt, the leaders of five major civil rights organizations, Dr. King of the SCLC; Roy Wilkins, NAACP; Whitney Young, Jr., National Urban League; Floyd McKissick, CORE; and Stokely Carmichael, SNCC, converged in Memphis to pick up Meredith's march. Dr. King attempted to walk the line between the militancy of SNCC and CORE and the moderate tactics of the NAACP and the Urban League. During the 3-week Meredith march, however, the differing views of King and Carmichael became increasingly apparent. The SCLC president continued to advocate nonviolence, cooperation with whites and racial integration, while Carmichael urged Blacks to resist their white "oppressors" and "seize power."
The marchers reached their destination, Jackson, on June 26. While Meredith and King addressed the marchers, it was Carmichael's plea for Blacks to build a power structure "so strong that we will bring them [whites] to their knees every time they mess with us" that attracted the most attention. In July 1966, CORE adopted "Black Power" rather than integration as its goal. The NAACP disassociated itself from the "Black Power" doctrine.
Urban riots in 1966 by angry and frustrated Blacks did not compare
to the magnitude of the Watts riot a year earlier, but violence spread
to more cities, 43 for the year, including Washington, D.C., Baltimore, Dayton,
St. Louis, Brooklyn, Cleveland, Milwaukee, and
Atlanta. By the end of the summer, 7 persons were dead, over 400
injured, 3,000 arrested; property damage was estimated at over $5 million.
The year 1967 was one of widespread urban violence, sanctioned by some Black militant leaders while abhorred by moderates who saw the uprising as ultimately counterproductive to Black interests. It appeared to some that the phase of the Black protest movement characterized by nonviolent demonstrations led by Dr. King was coming to an end. Many civil rights leaders thought violent upheaval inevitable. In an April 16, 1967, news conference, Dr. King warned that at least 10 cities "could explode in racial violence this summer."
Urban racial violence did plague over 100 cities in 1967. During the spring, minor disturbances had occurred in Omaha, Louisville, Cleveland, Chicago, San Francisco, Wichita, Nashville, and Houston. Then in June, Boston and Tampa experienced serious disorders. The most devastating riot since Watts in 1965 occurred, however, in Newark, from July 12 to 17, 1967, an outburst that resulted in 25 deaths, 1,200 persons injured, and over 1,300 arrested. Later that month, Detroit was the site of the worst urban race riot of the decade, one that left 43 dead, over 2,000 injured and more than 3,800 arrested. Rioting continued around the country, with outbreaks in Phoenix, Washington, D.C., and New Haven, among other cities. According to a report of the Senate Permanent Subcommittee on Investigations released in November 1967, 75 major riots occurred in that year, compared with 21 in 1966; 83 were killed in 1967, compared with 11 in 1966 and 36 in 1965.
On July 27, 1967, President Johnson established the National Advisory Commission on Civil Disorders, chaired by Illinois Gov. Otto Kerner, to investigate the origins of the disturbances and to make recommendations to prevent or contain such outbursts. On July 26, Dr. King, with Roy Wilkins, Whitney Young, and A. Philip Randolph, issued a statement from NAACP headquarters calling on Blacks to refrain from rioting and urging them to work toward improving their situation through peaceful means.
Violence flared early in 1968 as students at South Carolina State College, on February 5, organized a protest against segregation at a local bowling alley. Following the arrests of several demonstrators on trespassing charges, a clash between students and police left eight injured. On February 8, renewed conflicts on the campus led to the shooting deaths of three Black students. The bowling alley was ultimately integrated, but only after the National Guard was called in.
Still, sporadic disruptions continued.
On February 29, a jolting summary of the final report of the National Advisory Commission on Civil Disorders was made public. The Commission found that the urban riots of 1967 were not the result of any organized conspiracy, as fearful whites had charged. Rather, it concluded that the United States was "moving toward two separate societies, one Black, one white--separate and unequal." The report warned that frustration and resentment resulting from brutalizing inequality and white racism were fostering violence by Blacks. The Commission suggested that the Nation attack the root of the problems that led to violence through a massive financial commitment to programs designed to improve housing, education, and employment opportunities. This advice was significant because it came not from militants, but from moderates such as Illinois Governor and Commission Chairman Kerner, New York City Mayor and Commission Vice Chairman John V. Lindsay, NAACP executive board chairman Roy Wilkins and Senator Edward W. Brooke of Massachusetts. In the conclusion of its report, the Commission quoted the testimony of social psychologist Dr. Kenneth B. Clark, who referred to the reports of earlier violence commissions:
I read that report ...of the 1919 riot in Chicago, and it is as if I were reading the report of the investigating committee on the Harlem riot of 1935, the report of the investigating committee of the Harlem riot of 1943, the report of the McCone Commission on the Watts riot.
I must in candor say to you members of this Commission: it is a kind of Alice in Wonderland, with the same moving picture reshown over and over again, the same recommendations, and the same inaction.
Black leaders generally felt vindicated by the report. On March 4,
1968, Dr. King described it as "a physician's warning of approaching
death [of American society] with a prescription to life. The duty of
every American is to administer the remedy without regard for the
cost and without delay."
In December 1967, Dr. King had announced plans for a massive campaign of civil disobedience in Washington to pressure the Federal Government to provide jobs and income for all Americans. In mid-March, he turned his attention from this Poor People's Campaign to a strike of sanitation workers in Memphis, Tenn., and thus began his last peaceful crusade.
The Road to Memphis (3)
A quest for world peace and an end to economic deprivation for all American citizens, regardless of race, were uppermost in Dr. King's mind during the last year of his life, as manifested by his staunch opposition to the Vietnam war and his Poor People's Campaign, an effort designed to dramatize the scourge of poverty in the United States. In March 1968, he interrupted his planning of the Poor People's March on Washington to travel to Memphis, Tenn., where he hoped to organize a nonviolent campaign to assist the poorly paid, mostly Black sanitation workers who were on strike for better working conditions and recognition of their union.
By 1967, American forces in Vietnam had grown to over 500,000, and more than 6,000 Americans had died in the escalating Southeast Asian conflict.(4) Opposition to U.S. involvement in Vietnam had begun to intensify. Dr. King was among those who called for disengagement and peaceful settlement.
The press pointed to Dr. King's address at New York City's Riverside Church on April 4, 1967, as the time when the SCLC president publicly disclosed his opposition to the Vietnam war, even though he had made similar statements and had been urging a negotiated settlement since early 1965. (5) He attacked the foreign policy of the Johnson administration, emphasizing the connection between wasteful military spending and its harmful effect on the poor, as social programs were dropped in favor of Vietnam-related expenditures. He warned that this pattern was an indication of the "approaching spiritual death" of the Nation. Dr. King described the United States as the "greatest purveyor of violence in the world today," and said that the high proportion of fatalities among Black soldiers in Vietnam demonstrated "cruel manipulation of the poor" who bore the
burden of the struggle. On April 15, 1967, at a rally at the United
Nations, he called for a halt to U.S. bombing.
Dr. King was stunned by the vehement reaction to his call for peace, especially from his colleagues in the civil rights movement. For example, Urban League president Whitney Young and NAACP executive director Roy Wilkins strongly condemned King's pacifism.(6) Moderate Black leaders feared that the generally sympathetic Johnson administration would be antagonized by the SCLC president's ministrations, while Dr. King argued that war priorities diverted valuable resources that could be used to improve the condition of America's Blacks. At the same time, his indefatigable belief in nonviolence was increasingly challenged by younger, more militant Blacks who did not renounce the use of violence to achieve their goals.
A King biographer, David L. Lewis, wrote that by early 1967, "the
verdict was that Martin was finished." (7)
In late 1967, in keeping with his belief that the problem of domestic poverty was exacerbated by use of Government funds to finance the war in Vietnam, Dr. King turned his attention to the plight of the poor in America. At an SCLC meeting in Atlanta in December 1967, he presented a plan for a nonviolent demonstration by a racially integrated coalition of the poor, to take place in Washington, D.C., in April 1968. Using creative nonviolence, these ignored Americans would demand an economic bill of rights with the objectives of a guaranteed annual income, employment for the able-bodied, decent housing, and quality education. Dr. King planned that the poor would demonstrate, beginning on April 20, until the Government responded to their demands. He wrote:
We will place the problems of the poor at the seat of the Government of the wealthiest Nation in the history of mankind. If that power refuses to acknowledge its debt to the poor, it will have failed to live up to its promise to insure life, liberty, and the pursuit of happiness to its citizens.
In the face of criticism of his antiwar views by moderate Blacks and
rejection of his tireless devotion to nonviolence by militants, Dr. King
also hoped to use the Poor People's Campaign to broaden his base of
support and buoy the SCLC. In the opinion of Dr. King's closest
associate, Reverend Abernathy, SCLC vice-president-at-large in 1968,
and Dr. King's successor as president of the organization, SCLC
influence had declined since the Selma, Ala. voter registration campaign
in 1965. Stymied in its efforts to deal with the urban racism of
the North, the SCLC had seen a decline in financial contributions after
the 1966 Chicago drive for better housing and nondiscriminatory
employment practices. Abernathy described the SCLC's failure to
implement new policies in Chicago as "the SCLC's Waterloo."
Public sentiment for a negotiated settlement in Vietnam intensified in early 1968, following the bloody Tet offensive during which the National Liberation Front attacked almost every American base in Vietnam and raided the U.S. Embassy in Saigon. Dr. King continued his criticism of the Johnson administration's escalation of U.S. involvement in Southeast Asia. In a March 16, 1968, address to delegates at the California Democratic Council's statewide convention in
Anaheim, he urged that Johnson's nomination be blocked by the Democratic Party that year, charging that the President's obsession with the war in Vietnam was undercutting the civil rights movement.(8) According to one writer, this was Dr. King's first public call for President Johnson's defeat.(9) Although he did not endorse either of the Democratic peace candidates, Senator Eugene J. McCarthy or Senator Robert F. Kennedy, he did praise the civil rights record of each aspirant.
During the weekend of March 16 to 17, 1968, Dr. King told Rev. James Lawson of Memphis, Tenn., that he would be willing to make an exploratory trip to Memphis to speak on behalf of striking sanitation workers. He was expected to appear there on Monday night, March 18, 1968. Reverend Lawson had first contacted Dr. King in late February 1968 in the hope that the SCLC president could assist the garbage workers in pressing their demands, as well as avert further violence between the strikers and the police.
At the heart of the Memphis strike was the issue of racial discrimination. (10) As the result of heavy rains in Memphis on January 31, 1968, Black crews of sanitation workers had been sent home without pay, while white city employees had been allowed to work and received a full day's wage. On the following day, two Black sanitation workers took shelter from the rain in the back of a compressor garbage truck. The truck malfunctioned, and the two were crushed to death. These events were the catalyst for a strike of Memphis sanitation workers, 90 percent of whom were Black; they were protesting the problems faced by the workers: low wages, unsafe working conditions, lack of benefits such as medical protection and racial discrimination on the job. On February 12, 1968, all but 200 of the 1,300 Memphis workers walked off their jobs. The American Federation of State, County and Municipal Employees (AFSCME) supported the strike and demanded a pay raise, recognition of AFSCME as sole bargaining agent, seniority rights, health and hospital insurance, safety controls, a meaningful grievance procedure and other benefits.
Newly elected Memphis Mayor Henry Loeb III rejected the demands,
labeling the strike illegal and refusing to negotiate until the
workers returned to their jobs. Using the slogan "I am a man," Blacks
believed that union representation was tantamount to their recognition
as human beings. The racial issue became a central theme and the
NAACP intervened in the strike.
When the Memphis City Council refused to hear their demands for union recognition on February 23, 1968, the striking workers had responded with their first march. They were ruthlessly dispersed by police indiscriminately using mace and nightsticks. Several marchers were injured. On the following day, the city obtained an injunction against further marches.
Deeply affected by the violence, Black ministers in Memphis, including Lawson, Rev. Samuel B. Kyles, and Rev. H. Ralph Jackson, formed a strike support organization, Community on the Move for Equality (COME) and called for a boycott of downtown stores. Beginning on February 26, COME organized a large number of Black Memphians to support the daily marches that continued for the duration of the strike, and COME leader Lawson invited Dr. Martin Luther King, Jr., to Memphis.
In the midst of organizing his Poor People's Campaign, Dr. King
was reluctant to travel to Memphis when first approached by Lawson
in late February. Rev. Andrew Young, in 1968 the executive vice-president
of SCLC, told the committee that the SCLC staff initially
opposed a King trip to Memphis. Dr. King eventually agreed, however,
to make an initial trip in an attempt to discourage further
violence, rearranging his schedule and flying to Memphis on March 18,
1968. He saw the poorly paid, badly organized, mostly Black garbage
workers as epitomizing the problems of the poor in the United States.
On the evening of March 18, Dr. King gave a well-received address to a throng of 17,000 strikers and their supporters. Encouraged by
his reception, he announced he would head a citywide demonstration
and sympathy strike of other workers on Friday, March 22. As the
result of a record-breaking snowstorm, the march was rescheduled
for Thursday, March 28. In the meantime, efforts to settle the strike
failed as Mayor Loeb tenaciously continued to reject union demands.
At about 11 a.m. on March 28, 2 hours after the march had originally been scheduled to begin, Dr. King arrived at the Clayborn Temple in Memphis to lead the demonstrators. By this time, the impatient and tense crowd of about 6,000 persons had heard rumors that police had used clubs and mace to prevent a group of high school students from joining the demonstration.
The march, led by Dr. King and Reverend Abernathy, began shortly after 11. As it proceeded along Beale Street toward Main, several Black youths broke store windows with signpost clubs. Police, clad in gas masks and riot gear, blocked Main Street. Abernathy and Dr. King were somewhere in the middle of the procession, not at its head, when they heard the shattering of glass. Some teenagers at the rear of the march began breaking windows and looting stores. When violence appeared imminent, Dr. King asked Reverend Lawson to cancel the march. SCLC aides commandeered a private automobile, and Dr. King was hustled away to safety at the Holiday Inn-Rivermont Hotel.
As Lawson pleaded with the marchers to return to Clayborn Temple, police moved toward Main and Beale where youths met them with picket signs and rocks. Tear gas was fired into the mob of young Blacks and stragglers who were unable to make their way back to the starting point. Police dispersed the crowd with nightsticks, mace and finally guns. In the ensuing melee, 60 persons were injured, and Larry Payne, a 16-year-old Black youth, was killed by police gunfire. Much of the violence was attributed to the Invaders, a group of young Black militants. A curfew was ordered following the riot, and Tennessee Gov. Buford Ellington called out 3,500 National Guard troops.
Dr. King was upset and deeply depressed by the bloody march. Never before had demonstrators led by Dr. King perpetrated violence, according to Abernathy. The press excoriated Dr. King for inciting the tragic confrontation, even though he was quick to state that his staff had not planned the march and that it had been poorly monitored. The Memphis debacle was labeled a failure of nonviolent direct action.
Three members of the militant Invaders visited Dr. King on the morning following the violence, Friday, March 29. They acknowledged their role in inciting the disturbance but explained that they merely wanted a meaningful role in the strike. Dr. King said he would do what he could, but stated emphatically that he could not support a group that condoned violence. At a press conference later that morning, he announced that he would return to Memphis the following
week to demonstrate that he could lead a peaceful march. (11) He and
Abernathy then left Memphis for Atlanta at 3 p.m. Both Jesse
Jackson and Andrew Young, members of the SCLC executive board
in 1968, told the committee that they believed Dr. King would
not have returned to Memphis if the March 28 demonstration had
been nonviolent. Following the Memphis incident, critics, including
civil rights leaders such as Roy Wilkins of the NAACP, were doubtful
that Dr. King could control a demonstration and asked that he cancel
the Poor People's Campaign to avoid another bloody eruption.
On Saturday, March 30, 1968, in Atlanta, Dr. King, along with the SCLC executive staff, including Abernathy, Young, Jackson, James Bevel, Walter Fauntroy, and Hosea Williams, decided it was crucial to resolve the Memphis dispute before marching on to Washington with the Poor People's Campaign. Abernathy said Dr. King was "very delighted" by this plan, which would allow him to prove the efficacy of nonviolence. The next day, Dr. King preached at Washington's National Cathedral, urging human rights in the United States and withdrawal from Vietnam. He mentioned the Poor People's march and promised an orderly, nonviolent demonstration. That evening, President Johnson announced his decision not to seek reelection in 1968.
On Monday, April 1, an entourage of SCLC executive staff members arrived in Memphis to lay the groundwork for a peaceful demonstration in support of the striking garbage workers, preparation that regrettably had been ignored before the last march. Memphis was the focus of national attention the next day as hundreds of Blacks attended the funeral of riot victim Larry Payne.
Dr. King, with Abernathy and administrative assistant Bernard Scott Lee, arrived in Memphis on Wednesday, April 3. That morning their flight had been delayed in Atlanta for more than an hour by an extensive search for a bomb following a threat against Dr. King. Solomon Jones, a local mortuary employee who served as Dr. King's chauffeur during his Memphis visits, took Dr. King and Abernathy from the airport to the Lorraine Motel. Dr. King's April 3 return visit to Memphis had received heavy publicity. It was common knowledge that he would be staying at the Lorraine, and at least one radio station announced that he was booked in room number 306, according to Kyles.
On the morning of April 3, U.S. District Court Judge Bailey Brown issued a temporary restraining order against the SCLC-sponsored demonstration that was originally scheduled to occur on Friday, April 5. Dr. King was determined to lead the march despite the injunction, and the planned protest became a major attraction for Blacks and union leaders.
Tornado warnings were broadcast in Memphis during the afternoon of April 3, and heavy rain fell on the city that night. Despite the inclement weather, 2,000 persons gathered that evening at the Mason Temple Church and awaited Dr. King, who was scheduled to speak there. King had asked Reverend Abernathy to talk in his place, but when Abernathy saw the enthusiastic crowd waiting to hear the SCLC president, he telephoned Dr. King and urged him to give the address. King agreed to go to Mason Temple, where he gave one of the most stirring speeches of his career, the last public address of his life.
After alluding to the bomb scare that morning and other threats
against him, Dr. King explained his return visit to Memphis despite
such intimidation. Ambassador Young later remarked to the committee
that the address was "almost morbid," and Abernathy noted
that his friend appeared particularly nervous and anxious.
Dr. King concluded the speech with a reference to his own death:
...Well, I don't know what will happen now. We've got some difficult days ahead. But it really doesn't matter to me now, because I've been to the mountaintop. I won't mind. Like anybody, I'd like to live a long life. Longevity has its place but I'm not concerned about that now. I just want to do God's will and He's allowed me to go up to the mountain. And I've looked over. And I've seen the Promised Land. So I'm happy tonight. I'm not worried about anything. I'm not fearing any man. "Mine eyes have seen the glory of the coming of the Lord."
After the talk, Dr. King and Young had dinner at the home of Judge Ben Hooks, a Memphis Black leader. Later that evening, Dr. King's brother, Rev. A.D. King, arrived in Memphis from his home in Louisville, Ky. He registered at the Lorraine Motel at 1 a.m. on April 4. Dr. King, who had not expected his brother in Memphis, visited with him until almost 4 a.m.
The Last Moments: Memphis, Tenn., April 4, 1968
Dr. King spent the last day of his life, Thursday, April 4, 1968, at
the Lorraine Motel. Walter Lane Bailey, owner of the Lorraine, later
recalled that the usually businesslike SCLC president was particularly
jovial that day, "teasing and cutting up."
At an SCLC staff meeting that morning, the march, planned for the next day, was postponed until the following Monday, April 8. In addition, that morning, SCLC general counsel Chauncey Eskridge appeared before Judge Bailey Brown in Federal court and argued that the city's injunction against the proposed demonstration should be lifted. In the meantime, four members of the Invaders presented a series of demands to Dr. King, including one for several thousand dollars. He refused to entertain their demands. After the men left, he told a group of executive board members that he would not tolerate advocates of violence on his staff and was angry that two Invaders had been assigned to work with the SCLC.
At about 1 p.m., Dr. King and Reverend Abernathy had a lunch of fried catfish at the motel, then Abernathy went to his room to take a nap, while Dr. King visited his brother in his room.
At about 4 p.m. on the afternoon of April 4, Abernathy was awakened from his nap by the telephone in his motel room. He answered, and Dr. King asked him to come to his brother's room, No. 201, so they could talk.
When Abernathy reached A.D.'s room, Dr. King told him that he and A.D. had called Atlanta and had spoken with their mother, who was pleased that her sons could get together in Memphis. He also said that they were all invited to the Kyles home for dinner. At King's direction, Abernathy called Mrs. Kyles to find out what she would be
serving, and she said she would have a good dinner of prime rib roast and soul food such as chitterlings, greens, pig's feet and black-eyed peas.
At about 5 p.m., according to Abernathy, he and Dr. King returned to room 306 to shave and dress for dinner. He recalled Dr. King's use of an acrid, sulfurous depilatory to remove his heavy beard, part of his daily shaving ritual. As they were preparing to leave, Abernathy mentioned that he would not be able to attend the poor people's march in Washington in April because he had planned a revival at his West Hunter Street Baptist Church in Atlanta for that same day. Dr. King told Abernathy he would not consider going to Washington without him and attempted to make arrangements for someone else to handle the Atlanta revival. He called Rev. Nutrell Long in New Orleans but was unable to reach him.
Dr. King then told Abernathy to go to the West Hunter Street Church and tell his congregation that,
...you have a greater revival, you have a revival where you are going to revive the soul of this Nation; where you are going to cause America to feed the hungry, to have concern for those who are downtrodden, and disinherited; you have a revival where you are going to cause America to stop denying necessities to the masses ....
Abernathy agreed to go to Washington with Dr. King.
At about 5:30 p.m., Kyles went to room 306 and urged Dr. King
and Abernathy to hurry so they would get to dinner on time. "OK,
Doc, it's time to go," he urged. Kyles had arrived at the Lorraine at
about 4 p.m. and had run into the Bread Basket Band, an SCLC singing
group. He had been singing some hymns and movement anthems
with them until shortly after 5 p.m. Dr. King assured Kyles that he
had telephoned the preacher's home and that Mrs. Kyles had said
dinner was not until 6. "We are not going to mess up her program,"
Dr. King insisted.
When he finished dressing, Dr. King asked Kyles if his tie matched his suit. He was in a good mood, according to Kyles, who told the committee that Dr. King teased him about dinner, saying he once had been to a preacher's house for ham and Kool-Aid, and the ham was cold. "I don't want to go to your house for cold food."
As Dr. King adjusted his tie, he and Kyles walked onto the balcony outside room 306. The room overlooked a courtyard parking lot and swimming pool. The two men faced west, toward the backs of several rundown buildings on Mulberry Street. Dr. King greeted some of the people in the courtyard below, and Kyles said hello to SCLC attorney Eskridge who had been in Federal court most of the day. Eskridge was challenging the injunction against the SCLC's proposed Monday march, and the court had decided to permit a demonstration, though it restricted the number of marchers and the route. After court had adjourned at 3 p.m., Eskridge went with Young to the Lorraine where they saw Dr. King in A.D.'s room and informed him of the ruling. At that time, Dr. King invited Eskridge to join him for dinner at the home of Reverend Kyles. Thus, Eskridge was standing in the Lorraine's courtyard parking lot shortly before 6 p.m., awaiting Dr. King's departure for dinner. Dr. King, leaning against the iron railing of the balcony, called to Eskridge and asked that he tell Jesse Jackson,
a member of the SCLC's Chicago chapter, to come to dinner with him.
Eskridge found Jackson, who was also in the courtyard, and invited
him to dinner, suggesting that he change into something other than
the turtleneck he was wearing.
Rev. James Orange of the SCLC advance team and James Bevel were also in the courtyard. Both had been assigned by the SCLC staff to work in Memphis with the Invaders in an effort to get the young militants to cool down. Orange had just arrived at the Lorraine with Marrell McCullough, a Memphis Police Department undercover officer. Orange and Bevel wrestled playfully in the courtyard. Dr. King spotted them and shouted to Bevel: "Don't let him hurt you."
Dr. King's chauffeur, Solomon Jones, was standing next to the funeral home limousine, which he had parked in front of room 207, below room 306. Jones had been parked in front of the Lorraine since 8:30 a.m. that morning, and he later recalled that this was the first time Dr. King had stepped out that day. Dr. King told Jones to get the car ready for their trip to Kyles' home, and Jones urged him to bring a top coat because it was chilly that evening. "Solomon, you really know how to take good care of me," Dr. King responded.
Dr. King's administrative assistant, Bernard Lee, along with Andrew Young and Hosea Williams, were also talking in the Lorraine parking lot, waiting for Dr. King to leave for dinner. Young recalled that Jones said, "I think you need a coat" to Dr. King. Ben Branch, leader of the Bread Basket Band, was also there, with Jesse Jackson. Dr. King called down to Branch, "Ben, make sure you play 'Precious Lord, Take My Hand' at the meeting tonight. Sing it real pretty." "OK, Doc, I will," Branch promised.
Meanwhile, in room 306, Abernathy recalled that at some point shortly before 6 p.m., he and Dr. King put on their coats and were about to leave the motel. Abernathy hesitated and said, "Wait just a moment. Let me put on some aftershave lotion."
According to Abernathy, Dr. King replied, "OK, I'll just stand right here on the balcony."
Kyles recalled that Dr. King asked Abernathy to get his topcoat and then called to Jackson, "Jesse, I want you to go to dinner with us this evening," but urged him not to bring the entire Bread Basket Band. Kyles chided Dr. King, "Doc, Jesse had arranged that even before you had." Kyles then stood on the balcony with Dr. King for a moment, finally saying, "Come on. It's time to go." Kyles turned and walked away to go down to his car. After a few steps, Kyles called to lawyer Eskridge in the courtyard below. "Chauncey, are you going with me? I'm going to get the car."
At 6:01 p.m., as Dr. King stood behind the iron balcony railing in front of room 306, the report of a high-powered rifle cracked the air. A slug tore into the right side of his face, violently throwing him backward.
At the mirror in room 306, Abernathy poured some cologne into his hands. As he lifted the lotion to his face, he heard what sounded like a "firecracker." He jumped, looked out the door to the balcony and saw that Dr. King had fallen backward. Only his feet were visible, one foot protruding through the ironwork of the balcony railing. According to Abernathy, the bullet was so powerful it twisted Dr. King's body so that he fell diagonally backward. As Abernathy rushed out
to aid his dying friend, he heard the cries and groans of people in the courtyard below.
Just below the balcony, Jones recalled that Young and Bevel shoved him to the ground just after the firecracker sound. He looked up and saw Abernathy come out of the room and then realized that the prone Dr. King had been shot. Lee, who had been talking with Young and Bevel, took cover behind a car and then noticed Dr. King's feet protruding through the balcony railing.
Memphis undercover policeman McCullough recalled that immediately before he heard the shot, he saw Dr. King alone on the balcony outside room 306, facing a row of dilapidated buildings on Mulberry Street. As he turned away from Dr. King and began to walk toward his car, McCullough, an Army veteran, heard an explosive sound, which he assumed was a gunshot. He looked back and saw Dr. King grasp his throat and fall backward. According to McCullough's account, he bolted up the balcony steps as others in the courtyard hit the ground. When he got to Dr. King's prone figure, the massive face wound was bleeding profusely and a sulfurous odor like gunpowder, perhaps Dr. King's depilatory, permeated the air. McCullough took a towel from a housekeeping tray and tried to stem the flow of blood.
Eskridge had heard a "zing" and looked up toward the balcony. He saw that Dr. King was down, and as Abernathy walked out onto the balcony, Eskridge heard him cry out "Oh my God, Martin's been shot." A woman screamed.
Abernathy recalled that when he walked out on the balcony, he had to step over his mortally wounded friend.
...the bullet had entered his right cheek and I patted his left cheek, consoled him, and got his attention by saying, "This is Ralph, this is Ralph, don't be afraid."
Kyles, who had started to walk toward his car, ran back to room
306. Young leaped up the stairs from the courtyard to Dr. King, whom
he found lying face up, rapidly losing blood from the wound. Young
checked Dr. King's pulse and, as Abernathy recalled, said, "Ralph, it's all over."
"Don't say that, don't say that," Abernathy responded.
Kyles ran into room 306. Abernathy urged him to call an ambulance. Kyles tried to make the call, but was unable to get through to the motel switchboard.
Lee, Jackson, and Williams had followed Young up the steps from the courtyard to room 306. Dr. King's still head lay in a pool of blood. Abernathy, kneeling over his friend, tried desperately to save Dr. King's life. Several of the men on that balcony pointed in the direction of the shot. Frozen in a picture taken by photographer Joseph Louw, they were aiming their index fingers across Mulberry Street and northwest of room 306.
An ambulance arrived at the Lorraine about 5 minutes after Dr. King had been shot, according to Abernathy. By this time, police officers "cluttered the courtyard." Abernathy accompanied the unconscious Dr. King to the emergency room of St. Joseph Hospital. The 39-year-old civil rights leader described by Abernathy as "the most peaceful warrior of the 20th Century," was pronounced dead at 7:05 p.m., April 4, 1968.
Bibliographic note: Web version based on the Report of the Select Committee on Assassinations of the U.S. House of Representatives, Washington, DC: United States Government Printing Office, 1979. 1 volume, 686 pages. The formatting of this Web version may differ from the original.
The Panic of 1837 set off the most severe depression experienced by the United States up to that point. Chief among the depression’s causes was a wave of land speculation, fueled by cheap and easy credit. Across the country, unemployment rose, businesses failed, and bankruptcy became commonplace. During the five years following the Panic of 1837, 343 of the nation's 850 banks went out of business entirely, with an additional 62 suffering partial failure.
President Andrew Jackson’s economic policies often are blamed for creating the conditions that caused the Panic of 1837. In 1832, Jackson, who mistrusted the National Bank and considered it unconstitutional, vetoed the renewal of the bank’s charter. He also withdrew all federal funds, depositing them instead in state and private banks.
As a result, credit was easily available from these institutions. State-funded and privately funded projects such as canals and rail lines encouraged westward expansion. Speculators were quick to buy cheap government property, hoping to sell it for a huge profit as expansion and infrastructure drove up property values. Businesses also relied heavily on credit, often using earnings to fund high return speculative investments rather than quickly paying down loans.
Banks were able to supply this cheap credit partly through the use of bank notes, money that they printed themselves. Foreign investors hoped to take advantage of the United States' boom, adding additional capital to the economy. With high levels of currency circulating, inflation was inevitable.
Speculation drove property sales to record highs. By 1837, land offices were reporting sales 10 times greater than in 1830. Hoping to curtail this land rush, Jackson issued the Specie Circular in the summer of 1836, requiring that specie — gold and silver currency — be used for all public land sales. State and private banks did not have sufficient specie funds, typically using bank notes for loans. With credit supplies suddenly cut off, many buyers defaulted on payments, the property market dried up and the Panic of 1837 was underway.
Foreign investors called in debts, refusing to accept U.S. currency. Banks, already overextended, quickly saw their reserves depleted. Depositors attempted to withdraw funds, resulting in bank runs. During the Panic of 1837, paper money became worthless as banks refused to exchange it for hard specie. Widespread business failure, bankruptcy and double-digit unemployment resulted.
When Martin Van Buren took office as president in March 1837, the panic was just beginning to take hold. By the end of June, more than 250 businesses had failed in New York alone. In September, Van Buren called a special session of Congress, demanding a national treasury system designed to make banks more accountable. Despite political efforts, the effects of the Panic of 1837 were felt for many years.
A perforated (torn) eardrum is not usually serious and often heals on its own without any complications. However, complications such as hearing loss or infection of the middle ear sometimes occur. A small procedure to repair a perforated eardrum is an option if it does not heal by itself, especially if you have hearing loss.
What is the eardrum and how do we hear?
The eardrum (also called the tympanic membrane) is a thin skin-like structure in the ear. It lies between the outer and middle ear.
The ear is divided into three parts - the outer, middle, and inner ear. Sound waves come into the outer (external) ear and hit the eardrum, causing the eardrum to vibrate.
Behind the eardrum are three tiny bones (ossicles). The vibrations pass from the eardrum to these middle ear bones. The bones then transmit the vibrations to the cochlea in the inner ear. The cochlea converts the vibrations to sound signals which are sent down a nerve to the brain, which we 'hear'.
The middle ear behind the eardrum is normally filled with air. The middle ear is connected to the back of the nose by the Eustachian tube. This allows air in and out of the middle ear.
What is a perforated eardrum and what problems can it cause?
A perforated eardrum is a hole or tear that has developed in the eardrum. It can affect hearing. However, the extent of hearing loss can vary greatly. For example, tiny perforations may only cause minimal loss of hearing. Larger perforations may affect hearing more severely. Also, if the ossicles are damaged in addition to the eardrum then the hearing loss would be much greater than, say, a small perforation which is not close to the ossicles.
Also, with a perforation, you are at greater risk of developing an ear infection. This is because the eardrum acts as a barrier to bacteria and other germs that may get into the middle ear.
What can cause a perforated eardrum?
- Infections of the middle ear, which can damage the eardrum. In this situation you often have a discharge from the ear as pus runs out from the middle ear.
- Direct injury to the ear. For example, a punch to the ear.
- A sudden loud noise. For example, from a nearby explosion. The shock waves and sudden sound waves can perforate the eardrum. This is often the most severe type of perforation and can lead to severe hearing loss and tinnitus (ringing in the ears).
- Barotrauma. This is when you suddenly have a change in air pressure and there is a sharp difference in the pressure of air outside the ear and in the middle ear. For example, when descending in an aircraft. Pain in the ear due to a tense eardrum is common during altitude changes. However, a perforated eardrum only happens rarely in extreme cases. See separate leaflet called 'Barotrauma of the Ear'.
- Poking objects into the ear. This can sometimes damage the eardrum.
- Grommets. These are tiny tubes that are placed through the eardrum. They are used to treat glue ear as they allow any mucus that is trapped in the middle ear to drain out from the ear. When a grommet falls out, there is a tiny perforation in the eardrum (that usually soon heals).
How is a perforated eardrum diagnosed?
A doctor can usually diagnose a perforated eardrum simply by looking into the ear with a special torch called an otoscope. However, sometimes it is difficult to see the eardrum if there is a lot of inflammation, wax or infection present in the ear.
What is the treatment for a perforated eardrum?
No treatment is needed in most cases
A perforated eardrum will usually heal by itself within 6-8 weeks. It is a skin-like structure and, like skin that is cut, it will usually heal. In some cases, a doctor may prescribe antibiotics if there is an infection or risk of infection developing in the middle ear whilst the eardrum is healing.
It is best to avoid water getting into the ear whilst it is healing. For example, your doctor may advise that you put some cotton wool or similar material into your outer ear whilst showering or washing your hair. It is best not to swim until the eardrum has healed.
Occasionally, a perforated eardrum gets infected and needs antibiotics. Some ear drops can potentially damage the nerve supply to the ear so your doctor will select a type that does not have this risk, or will give you medication by mouth.
Surgical treatment is sometimes considered
A small procedure is an option to treat a perforated drum that does not heal by itself. There are various techniques ranging from placing some chemicals next to the torn part of the drum to encourage healing, to an operation called tympanoplasty to repair the eardrum. Tympanoplasty is usually successful in fixing the perforation, and improving hearing.
However, not all people with an unhealed perforation need treatment. Many people have a small permanent perforation with no symptoms or significant hearing loss. Treatment is mainly considered if there is hearing loss as this may improve if the perforation is fixed. Also, swimmers may prefer to have a perforation repaired as getting water in the middle ear can increase the risk of having an ear infection.
If you have a perforation that has not healed by itself, an ear specialist will advise on the pros and cons of treatment versus leaving it alone.
Further reading & references
- Howard M; Middle Ear, Tympanic Membrane, Perforations, eMedicine, Sept 2009
- Ribeiro JC, Rui C, Natercia S, et al; Tympanoplasty in children: A review of 91 cases. Auris Nasus Larynx. 2010 Jun 17.
- Sarkar S, Roychoudhury A, Roychaudhuri BK; Tympanoplasty in children. Eur Arch Otorhinolaryngol. 2009 May;266(5):627-33. Epub 2009 Jan 22.
|Original Author: Dr Tim Kenny||Current Version: Dr Laurence Knott|
|Last Checked: 27/07/2010||Document ID: 4783 Version: 38||© EMIS|
Disclaimer: This article is for information only and should not be used for the diagnosis or treatment of medical conditions. EMIS has used all reasonable care in compiling the information but makes no warranty as to its accuracy. Consult a doctor or other health care professional for diagnosis and treatment of medical conditions. For details see our conditions. | http://www.patient.co.uk/health/Perforated-Eardrum.htm | 13
33 | Wikijunior:United States Charters of Freedom/Constitution
The United States Constitution is the supreme law of the United States of America. When nine of the then thirteen states ratified the document, it marked the creation of a union of sovereign states, and of a federal government to operate that union. It replaced the weaker, less well-defined union that existed under the Articles of Confederation and took effect on March 4, 1789. The handwritten copy signed by the delegates to the Convention is on display in the National Archives in Washington, D.C. It is the second of the three Charters of Freedom along with the Declaration of Independence and the Bill of Rights.
During the Revolutionary War, the thirteen states first formed a weak central government—with the Congress being its only component—under the Articles of Confederation. Congress lacked any power to impose taxes, and, because there was no national executive or judiciary, it relied on state authorities, who were often uncooperative, to enforce all its acts. It also had no authority to override tax laws and tariffs between states. The Articles required unanimous consent from all the states before they could be amended, and the states took the central government so lightly that their representatives were often absent. For lack of a quorum, Congress was frequently blocked from making even moderate changes.
The Confederation Congress endorsed the plan to revise the Articles of Confederation on February 21, 1787. Twelve states, Rhode Island being the only exception, accepted this invitation and sent delegates to convene in May 1787. The decision was made to draft a new fundamental design of government, which eventually stipulated that only 9 of the 13 states would have to ratify it for the new government to go into effect. These actions were criticized by some as exceeding the convention's mandate and existing law. However, Congress, noting dissatisfaction with the Articles of Confederation government, unanimously agreed to submit the proposal to the states despite what some perceived as exceeded terms of reference. On September 17, 1787, the Constitution was completed in Philadelphia, followed by a speech given by Benjamin Franklin. In it he said that he was not completely satisfied with the document, but that perfection would never fully be achieved; he accepted it as it was and urged all those against ratification to do the same. The new government it prescribed came into existence on March 4, 1789, after fierce fights over ratification in many of the states.
Text of the Constitution
The text of the Constitution can be divided into nine sections: the preamble, seven articles, and the conclusion. (Note that the preamble and conclusion headings are not part of the text of the document itself, though the articles do carry headings labeled Article I through Article VII.)
We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.
All legislative Powers herein granted shall be vested in a Congress of the United States, which shall consist of a Senate and House of Representatives.
The House of Representatives shall be composed of Members chosen every second Year by the People of the several States, and the Electors in each State shall have the Qualifications requisite for Electors of the most numerous Branch of the State Legislature.
No Person shall be a Representative who shall not have attained to the Age of twenty five Years, and been seven Years a Citizen of the United States, and who shall not, when elected, be an Inhabitant of that State in which he shall be chosen.
Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers, which shall be determined by adding to the whole Number of free Persons, including those bound to Service for a Term of Years, and excluding Indians not taxed, three fifths of all other Persons. The actual Enumeration shall be made within three Years after the first Meeting of the Congress of the United States, and within every subsequent Term of ten Years, in such Manner as they shall by Law direct. The Number of Representatives shall not exceed one for every thirty Thousand, but each State shall have at Least one Representative; and until such enumeration shall be made, the State of New Hampshire shall be entitled to chuse three, Massachusetts eight, Rhode Island and Providence Plantations one, Connecticut five, New York six, New Jersey four, Pennsylvania eight, Delaware one, Maryland six, Virginia ten, North Carolina five, South Carolina five and Georgia three.
When vacancies happen in the Representation from any State, the Executive Authority thereof shall issue Writs of Election to fill such Vacancies.
The House of Representatives shall chuse their Speaker and other Officers; and shall have the sole Power of Impeachment.
The Senate of the United States shall be composed of two Senators from each State, chosen by the Legislature thereof, for six Years; and each Senator shall have one Vote.
Immediately after they shall be assembled in Consequence of the first Election, they shall be divided as equally as may be into three Classes. The Seats of the Senators of the first Class shall be vacated at the Expiration of the second Year, of the second Class at the Expiration of the fourth Year, and of the third Class at the Expiration of the sixth Year, so that one third may be chosen every second Year; and if Vacancies happen by Resignation, or otherwise, during the Recess of the Legislature of any State, the Executive thereof may make temporary Appointments until the next Meeting of the Legislature, which shall then fill such Vacancies.
No Person shall be a Senator who shall not have attained to the Age of thirty Years, and been nine Years a Citizen of the United States, and who shall not, when elected, be an Inhabitant of that State for which he shall be chosen.
The Vice President of the United States shall be President of the Senate, but shall have no Vote, unless they be equally divided.
The Senate shall chuse their other Officers, and also a President pro tempore, in the absence of the Vice President, or when he shall exercise the Office of President of the United States.
The Senate shall have the sole Power to try all Impeachments. When sitting for that Purpose, they shall be on Oath or Affirmation. When the President of the United States is tried, the Chief Justice shall preside: And no Person shall be convicted without the Concurrence of two thirds of the Members present.
Judgment in Cases of Impeachment shall not extend further than to removal from Office, and disqualification to hold and enjoy any Office of honor, Trust or Profit under the United States: but the Party convicted shall nevertheless be liable and subject to Indictment, Trial, Judgment and Punishment, according to Law.
The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof; but the Congress may at any time by Law make or alter such Regulations, except as to the Place of choosing Senators.
The Congress shall assemble at least once in every Year, and such Meeting shall be on the first Monday in December, unless they shall by Law appoint a different Day.
Each House shall be the Judge of the Elections, Returns and Qualifications of its own Members, and a Majority of each shall constitute a Quorum to do Business; but a smaller number may adjourn from day to day, and may be authorized to compel the Attendance of absent Members, in such Manner, and under such Penalties as each House may provide.
Each House may determine the Rules of its Proceedings, punish its Members for disorderly Behavior, and, with the Concurrence of two-thirds, expel a Member.
Each House shall keep a Journal of its Proceedings, and from time to time publish the same, excepting such Parts as may in their Judgment require Secrecy; and the Yeas and Nays of the Members of either House on any question shall, at the Desire of one fifth of those Present, be entered on the Journal.
Neither House, during the Session of Congress, shall, without the Consent of the other, adjourn for more than three days, nor to any other Place than that in which the two Houses shall be sitting.
The Senators and Representatives shall receive a Compensation for their Services, to be ascertained by Law, and paid out of the Treasury of the United States. They shall in all Cases, except Treason, Felony and Breach of the Peace, be privileged from Arrest during their Attendance at the Session of their respective Houses, and in going to and returning from the same; and for any Speech or Debate in either House, they shall not be questioned in any other Place.
No Senator or Representative shall, during the Time for which he was elected, be appointed to any civil Office under the Authority of the United States which shall have been created, or the Emoluments whereof shall have been increased during such time; and no Person holding any Office under the United States, shall be a Member of either House during his Continuance in Office.
All bills for raising Revenue shall originate in the House of Representatives; but the Senate may propose or concur with Amendments as on other Bills.
Every Bill which shall have passed the House of Representatives and the Senate, shall, before it become a Law, be presented to the President of the United States; If he approve he shall sign it, but if not he shall return it, with his Objections to that House in which it shall have originated, who shall enter the Objections at large on their Journal, and proceed to reconsider it. If after such Reconsideration two thirds of that House shall agree to pass the Bill, it shall be sent, together with the Objections, to the other House, by which it shall likewise be reconsidered, and if approved by two thirds of that House, it shall become a Law. But in all such Cases the Votes of both Houses shall be determined by Yeas and Nays, and the Names of the Persons voting for and against the Bill shall be entered on the Journal of each House respectively. If any Bill shall not be returned by the President within ten Days (Sundays excepted) after it shall have been presented to him, the Same shall be a Law, in like Manner as if he had signed it, unless the Congress by their Adjournment prevent its Return, in which Case it shall not be a Law.
Every Order, Resolution, or Vote to which the Concurrence of the Senate and House of Representatives may be necessary (except on a question of Adjournment) shall be presented to the President of the United States; and before the Same shall take Effect, shall be approved by him, or being disapproved by him, shall be repassed by two thirds of the Senate and House of Representatives, according to the Rules and Limitations prescribed in the Case of a Bill.
The Congress shall have power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States; but all Duties, Imposts and Excises shall be uniform throughout the United States;
To borrow money on the credit of the United States;
To regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes;
To establish a uniform Rule of Naturalization, and uniform Laws on the subject of Bankruptcies throughout the United States;
To coin Money, regulate the Value thereof, and of foreign Coin, and fix the Standard of Weights and Measures;
To provide for the Punishment of counterfeiting the Securities and current Coin of the United States;
To establish Post Offices and Post Roads;
To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;
To constitute Tribunals inferior to the supreme Court;
To define and punish Piracies and Felonies committed on the high Seas, and Offenses against the Law of Nations;
To declare War, grant Letters of Marque and Reprisal, and make Rules concerning Captures on Land and Water;
To raise and support Armies, but no Appropriation of Money to that Use shall be for a longer Term than two Years;
To provide and maintain a Navy;
To make Rules for the Government and Regulation of the land and naval Forces;
To provide for calling forth the Militia to execute the Laws of the Union, suppress Insurrections and repel Invasions;
To provide for organizing, arming, and disciplining the Militia, and for governing such Part of them as may be employed in the Service of the United States, reserving to the States respectively, the Appointment of the Officers, and the Authority of training the Militia according to the discipline prescribed by Congress;
To exercise exclusive Legislation in all Cases whatsoever, over such District (not exceeding ten Miles square) as may, by Cession of particular States, and the acceptance of Congress, become the Seat of the Government of the United States, and to exercise like Authority over all Places purchased by the Consent of the Legislature of the State in which the Same shall be, for the Erection of Forts, Magazines, Arsenals, dock-Yards, and other needful Buildings; And
To make all Laws which shall be necessary and proper for carrying into Execution the foregoing Powers, and all other Powers vested by this Constitution in the Government of the United States, or in any Department or Officer thereof.
The Migration or Importation of such Persons as any of the States now existing shall think proper to admit, shall not be prohibited by the Congress prior to the Year one thousand eight hundred and eight, but a tax or duty may be imposed on such Importation, not exceeding ten dollars for each Person.
The privilege of the Writ of Habeas Corpus shall not be suspended, unless when in Cases of Rebellion or Invasion the public Safety may require it.
No Bill of Attainder or ex post facto Law shall be passed.
No capitation, or other direct, Tax shall be laid, unless in Proportion to the Census or Enumeration herein before directed to be taken.
No Tax or Duty shall be laid on Articles exported from any State.
No Preference shall be given by any Regulation of Commerce or Revenue to the Ports of one State over those of another: nor shall Vessels bound to, or from, one State, be obliged to enter, clear, or pay Duties in another.
No Money shall be drawn from the Treasury, but in Consequence of Appropriations made by Law; and a regular Statement and Account of the Receipts and Expenditures of all public Money shall be published from time to time.
No Title of Nobility shall be granted by the United States: And no Person holding any Office of Profit or Trust under them, shall, without the Consent of the Congress, accept of any present, Emolument, Office, or Title, of any kind whatever, from any King, Prince or foreign State.
No State shall enter into any Treaty, Alliance, or Confederation; grant Letters of Marque and Reprisal; coin Money; emit Bills of Credit; make any Thing but gold and silver Coin a Tender in Payment of Debts; pass any Bill of Attainder, ex post facto Law, or Law impairing the Obligation of Contracts, or grant any Title of Nobility.
No State shall, without the Consent of the Congress, lay any Imposts or Duties on Imports or Exports, except what may be absolutely necessary for executing it's inspection Laws: and the net Produce of all Duties and Imposts, laid by any State on Imports or Exports, shall be for the Use of the Treasury of the United States; and all such Laws shall be subject to the Revision and Controul of the Congress.
No State shall, without the Consent of Congress, lay any duty of Tonnage, keep Troops, or Ships of War in time of Peace, enter into any Agreement or Compact with another State, or with a foreign Power, or engage in War, unless actually invaded, or in such imminent Danger as will not admit of delay.
The executive Power shall be vested in a President of the United States of America. He shall hold his Office during the Term of four Years, and, together with the Vice-President chosen for the same Term, be elected, as follows:
Each State shall appoint, in such Manner as the Legislature thereof may direct, a Number of Electors, equal to the whole Number of Senators and Representatives to which the State may be entitled in the Congress: but no Senator or Representative, or Person holding an Office of Trust or Profit under the United States, shall be appointed an Elector.
The Electors shall meet in their respective States, and vote by Ballot for two persons, of whom one at least shall not be an Inhabitant of the same State with themselves. And they shall make a List of all the Persons voted for, and of the Number of Votes for each; which List they shall sign and certify, and transmit sealed to the Seat of the Government of the United States, directed to the President of the Senate. The President of the Senate shall, in the Presence of the Senate and House of Representatives, open all the Certificates, and the Votes shall then be counted. The Person having the greatest Number of Votes shall be the President, if such Number be a Majority of the whole Number of Electors appointed; and if there be more than one who have such Majority, and have an equal Number of Votes, then the House of Representatives shall immediately chuse by Ballot one of them for President; and if no Person have a Majority, then from the five highest on the List the said House shall in like Manner chuse the President. But in chusing the President, the Votes shall be taken by States, the Representation from each State having one Vote; a quorum for this Purpose shall consist of a Member or Members from two-thirds of the States, and a Majority of all the States shall be necessary to a Choice. In every Case, after the Choice of the President, the Person having the greatest Number of Votes of the Electors shall be the Vice President. But if there should remain two or more who have equal Votes, the Senate shall chuse from them by Ballot the Vice-President.
The Congress may determine the Time of chusing the Electors, and the Day on which they shall give their Votes; which Day shall be the same throughout the United States.
No person except a natural born Citizen, or a Citizen of the United States, at the time of the Adoption of this Constitution, shall be eligible to the Office of President; neither shall any Person be eligible to that Office who shall not have attained to the Age of thirty-five Years, and been fourteen Years a Resident within the United States.
In Case of the Removal of the President from Office, or of his Death, Resignation, or Inability to discharge the Powers and Duties of the said Office, the same shall devolve on the Vice President, and the Congress may by Law provide for the Case of Removal, Death, Resignation or Inability, both of the President and Vice President, declaring what Officer shall then act as President, and such Officer shall act accordingly, until the Disability be removed, or a President shall be elected.
The President shall, at stated Times, receive for his Services, a Compensation, which shall neither be increased nor diminished during the Period for which he shall have been elected, and he shall not receive within that Period any other Emolument from the United States, or any of them.
Before he enter on the Execution of his Office, he shall take the following Oath or Affirmation:
"I do solemnly swear (or affirm) that I will faithfully execute the Office of President of the United States, and will to the best of my Ability, preserve, protect and defend the Constitution of the United States."
The President shall be Commander in Chief of the Army and Navy of the United States, and of the Militia of the several States, when called into the actual Service of the United States; he may require the Opinion, in writing, of the principal Officer in each of the executive Departments, upon any subject relating to the Duties of their respective Offices, and he shall have power to Grant Reprieves and Pardons for Offenses against the United States, except in Cases of Impeachment.
He shall have Power, by and with the Advice and Consent of the Senate, to make Treaties, provided two thirds of the Senators present concur; and he shall nominate, and by and with the Advice and Consent of the Senate, shall appoint Ambassadors, other public Ministers and Consuls, Judges of the supreme Court, and all other Officers of the United States, whose Appointments are not herein otherwise provided for, and which shall be established by Law: but the Congress may by Law vest the Appointment of such inferior Officers, as they think proper, in the President alone, in the Courts of Law, or in the Heads of Departments.
The President shall have power to fill up all Vacancies that may happen during the Recess of the Senate, by granting Commissions which shall expire at the End of their next Session.
He shall from time to time give to the Congress Information of the State of the Union, and recommend to their Consideration such Measures as he shall judge necessary and expedient; he may, on extraordinary Occasions, convene both Houses, or either of them, and in Case of Disagreement between them, with Respect to the Time of Adjournment, he may adjourn them to such Time as he shall think proper; he shall receive Ambassadors and other public Ministers; he shall take Care that the Laws be faithfully executed, and shall Commission all the Officers of the United States.
The President, Vice President and all civil Officers of the United States, shall be removed from Office on Impeachment for, and Conviction of, Treason, Bribery, or other high Crimes and Misdemeanors.
The judicial Power of the United States, shall be vested in one supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish. The Judges, both of the supreme and inferior Courts, shall hold their Offices during good Behavior, and shall, at stated Times, receive for their Services a Compensation which shall not be diminished during their Continuance in Office.
The judicial Power shall extend to all Cases, in Law and Equity, arising under this Constitution, the Laws of the United States, and Treaties made, or which shall be made, under their Authority; to all Cases affecting Ambassadors, other public Ministers and Consuls; to all Cases of admiralty and maritime Jurisdiction; to Controversies to which the United States shall be a Party; to Controversies between two or more States; between a State and Citizens of another State; between Citizens of different States; between Citizens of the same State claiming Lands under Grants of different States, and between a State, or the Citizens thereof, and foreign States, Citizens or Subjects.
In all Cases affecting Ambassadors, other public Ministers and Consuls, and those in which a State shall be Party, the supreme Court shall have original Jurisdiction. In all the other Cases before mentioned, the supreme Court shall have appellate Jurisdiction, both as to Law and Fact, with such Exceptions, and under such Regulations as the Congress shall make.
Trial of all Crimes, except in Cases of Impeachment, shall be by Jury; and such Trial shall be held in the State where the said Crimes shall have been committed; but when not committed within any State, the Trial shall be at such Place or Places as the Congress may by Law have directed.
Treason against the United States, shall consist only in levying War against them, or in adhering to their Enemies, giving them Aid and Comfort. No Person shall be convicted of Treason unless on the Testimony of two Witnesses to the same overt Act, or on Confession in open Court.
The Congress shall have power to declare the Punishment of Treason, but no Attainder of Treason shall work Corruption of Blood, or Forfeiture except during the Life of the Person attainted.
Full Faith and Credit shall be given in each State to the public Acts, Records, and judicial Proceedings of every other State. And the Congress may by general Laws prescribe the Manner in which such Acts, Records and Proceedings shall be proved, and the Effect thereof.
The Citizens of each State shall be entitled to all Privileges and Immunities of Citizens in the several States.
A Person charged in any State with Treason, Felony, or other Crime, who shall flee from Justice, and be found in another State, shall on demand of the executive Authority of the State from which he fled, be delivered up, to be removed to the State having Jurisdiction of the Crime.
No Person held to Service or Labour in one State, under the Laws thereof, escaping into another, shall, in Consequence of any Law or Regulation therein, be discharged from such Service or Labour, But shall be delivered up on Claim of the Party to whom such Service or Labour may be due.
New States may be admitted by the Congress into this Union; but no new States shall be formed or erected within the Jurisdiction of any other State; nor any State be formed by the Junction of two or more States, or parts of States, without the Consent of the Legislatures of the States concerned as well as of the Congress.
The Congress shall have power to dispose of and make all needful Rules and Regulations respecting the Territory or other Property belonging to the United States; and nothing in this Constitution shall be so construed as to Prejudice any Claims of the United States, or of any particular State.
The United States shall guarantee to every State in this Union a Republican Form of Government, and shall protect each of them against Invasion; and on Application of the Legislature, or of the Executive (when the Legislature cannot be convened) against domestic Violence.
The Congress, whenever two thirds of both Houses shall deem it necessary, shall propose Amendments to this Constitution, or, on the Application of the Legislatures of two thirds of the several States, shall call a Convention for proposing Amendments, which, in either Case, shall be valid to all Intents and Purposes, as part of this Constitution, when ratified by the Legislatures of three fourths of the several States, or by Conventions in three fourths thereof, as the one or the other Mode of Ratification may be proposed by the Congress; Provided that no Amendment which may be made prior to the Year One thousand eight hundred and eight shall in any Manner affect the first and fourth Clauses in the Ninth Section of the first Article; and that no State, without its Consent, shall be deprived of its equal Suffrage in the Senate.
All Debts contracted and Engagements entered into, before the Adoption of this Constitution, shall be as valid against the United States under this Constitution, as under the Confederation.
This Constitution, and the Laws of the United States which shall be made in Pursuance thereof; and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land; and the Judges in every State shall be bound thereby, any Thing in the Constitution or Laws of any State to the Contrary notwithstanding.
The Senators and Representatives before mentioned, and the Members of the several State Legislatures, and all executive and judicial Officers, both of the United States and of the several States, shall be bound by Oath or Affirmation, to support this Constitution; but no religious Test shall ever be required as a Qualification to any Office or public Trust under the United States.
The Ratification of the Conventions of nine States, shall be sufficient for the Establishment of this Constitution between the States so ratifying the Same.
Done in Convention by the Unanimous Consent of the States present the Seventeenth Day of September in the Year of our Lord one thousand seven hundred and Eighty seven and of the Independence of the United States of America the Twelfth. In Witness whereof We have hereunto subscribed our Names.
- Virginia: George Washington
- Delaware: George Read, Gunning Bedford, Jr., John Dickinson, Richard Bassett, Jacob Broom
- Maryland: James McHenry, Daniel of St. Thomas Jenifer, Daniel Carroll
- Virginia: John Blair, James Madison Jr.
- North Carolina: William Blount, Richard Dobbs Spaight, Hugh Williamson
- South Carolina: John Rutledge, Charles Cotesworth Pinckney, Charles Pinckney, Pierce Butler
- Georgia: William Few, Abraham Baldwin
- New Hampshire: John Langdon, Nicholas Gilman
- Massachusetts: Nathaniel Gorham, Rufus King
- Connecticut: William Samuel Johnson, Roger Sherman
- New York: Alexander Hamilton
- New Jersey: William Livingston, David Brearly, William Paterson, Jonathan Dayton
- Pennsylvania: Benjamin Franklin, Thomas Mifflin, Robert Morris, George Clymer, Thomas Fitzsimons, Jared Ingersoll, James Wilson, Gouverneur Morris
Analysis of the text
This section describes each section of the Constitution.
Analysis of the Preamble
The Preamble neither grants any powers nor inhibits any actions; it only explains the rationale behind the Constitution. The Preamble, especially the first three words ("We the people"), is one of the most quoted and referenced sections of the Constitution.
Analysis of Article I
Article I establishes the legislative branch of government, U.S. Congress, which includes the House of Representatives and the Senate. The Article establishes the manner of election and qualifications of members of each House. In addition, it provides for free debate in congress and limits self-serving behavior of congressmen, outlines legislative procedure and indicates the powers of the legislative branch. Finally, it establishes limits on federal and state legislative power.
Analysis of Article II
Article II describes the presidency (the executive branch): procedures for the selection of the president, qualifications for office, the oath to be affirmed and the powers and duties of the office. It also provides for the office of Vice President of the United States, and specifies that the Vice President succeeds to the presidency if the President is incapacitated, dies, or resigns, although whether this succession was on an acting or permanent basis was unclear until the passage of the 25th Amendment.
Article II also provides for the impeachment and removal from office of civil officers.
Analysis of Article III
Article III describes the court system (the judicial branch), including the Supreme Court. The article requires that there be one court called the Supreme Court; Congress, at its discretion, can create lower courts, whose judgments and orders are reviewable by the Supreme Court. Article Three also requires trial by jury in all criminal cases, defines the crime of treason, and charges Congress with providing for a punishment for it, while imposing limits on that punishment.
Analysis of Article IV
Article IV describes the relationship between the states and the Federal government, and amongst the states. For instance, it requires states to give "full faith and credit" to the public acts, records and court proceedings of the other states. Congress is permitted to regulate the manner in which proof of such acts, records or proceedings may be admitted. The "privileges and immunities" clause prohibits state governments from discriminating against citizens of other states in favor of resident citizens (e.g., having tougher penalties for residents of Ohio convicted of crimes within Arizona). It also establishes extradition between the states, as well as laying down a legal basis for freedom of movement and travel amongst the states. Today, this provision is sometimes taken for granted, especially by citizens who live near state borders; but in the days of the Articles of Confederation, crossing state lines was often a much more arduous (and costly) process. Article IV also provides for the creation and admission of new states. The Territorial Clause gives Congress the power to make rules for disposing of Federal property and governing non-state territories of the United States. Finally, the fourth section of Article IV requires the United States to guarantee to each state a republican form of government, and to protect the states from invasion and violence.
Analysis of Article V
Article V describes the process necessary to amend the Constitution. It establishes two methods of proposing amendments: by Congress or by a national convention requested by the states. Under the first method, Congress can propose an amendment by a two-thirds vote (of a quorum, not necessarily of the entire body) of the Senate and of the House of Representatives. Under the second method, two-thirds of the state legislatures may convene and "apply" to Congress to hold a national convention, whereupon Congress must call such a convention for the purpose of considering amendments. Thus far, only the first method (proposal by Congress) has been used.
Once proposed—whether submitted by a national convention or by Congress—amendments must then be ratified by three-fourths of the states to take effect. Article V gives Congress the option of requiring ratification by state legislatures or by special conventions assembled in the states. The convention method of ratification has been used only once (to approve the 21st Amendment). Article V currently places only one limitation on the amending power—that no amendment can deprive a state of its equal representation in the Senate without that state's consent.
Analysis of Article VI
Article VI establishes the Constitution, and the laws and treaties of the United States made in accordance with it, to be the supreme law of the land. It also validates national debt created under the Articles of Confederation and requires that all legislators, federal officers, and judges take oaths to support the Constitution.
Analysis of Article VII
Article VII sets forth the requirements for ratification of the Constitution. The Constitution would not take effect until at least nine states had ratified the Constitution in state conventions specially convened for that purpose. New Hampshire became that ninth state on June 21, 1788. Once the Congress of the Confederation received word of New Hampshire's ratification, it set a timetable for the start of operations under the Constitution, and, on March 4, 1789, the government under the Constitution began operations.
The Constitution was ratified by the states in the following order:
|#||Date||State||Votes for||Votes against||Percent in favor|
|1||December 7, 1787||Delaware||30||0||100%|
|2||December 12, 1787||Pennsylvania||46||23||67%|
|3||December 18, 1787||New Jersey||38||0||100%|
|4||January 2, 1788||Georgia||26||0||100%|
|5||January 9, 1788||Connecticut||128||40||76%|
|6||February 6, 1788||Massachusetts||187||168||53%|
|7||April 28, 1788||Maryland||63||11||85%|
|8||May 23, 1788||South Carolina||149||73||67%|
|9||June 21, 1788||New Hampshire||57||47||55%|
|10||June 25, 1788||Virginia||89||79||53%|
|11||July 26, 1788||New York||30||27||53%|
|12||November 21, 1789||North Carolina||194||77||72%|
|13||May 29, 1790||Rhode Island||34||32||52%|
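The "percent in favor" figures in the table above can be re-derived from the vote counts. The short Python check below takes the vote totals directly from the table; rounding to the nearest whole percentage is an assumption about how the original column was computed.

```python
# Vote totals copied from the ratification table above.
ratifications = [
    ("Delaware", 30, 0), ("Pennsylvania", 46, 23), ("New Jersey", 38, 0),
    ("Georgia", 26, 0), ("Connecticut", 128, 40), ("Massachusetts", 187, 168),
    ("Maryland", 63, 11), ("South Carolina", 149, 73), ("New Hampshire", 57, 47),
    ("Virginia", 89, 79), ("New York", 30, 27), ("North Carolina", 194, 77),
    ("Rhode Island", 34, 32),
]

for state, votes_for, votes_against in ratifications:
    # Share of convention delegates voting in favor, rounded to a whole percent.
    percent = round(100 * votes_for / (votes_for + votes_against))
    print(f"{state:15s} {votes_for:4d} for, {votes_against:4d} against -> {percent}% in favor")
```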
Analysis of the Conclusion
The Conclusion ends the Constitution; it records the date of the document's completion and contains the delegates' signatures.
Provisions for amendments
The authors of the Constitution were aware that changes would be necessary from time to time if the Constitution was to endure and cope with the effects of the growth of the nation. However, they were also conscious that such change should not be too easy, lest it permit ill-conceived and hastily passed amendments. At the same time, they wanted to ensure that an overly rigid amendment process could not block action desired by the vast majority of the population. Their solution was to devise a dual process by which the Constitution could be altered.
The first option begins in Congress, which may propose an amendment by a two-thirds vote of the members of each house. The second option is for the legislatures of two-thirds of the several states to "apply" to Congress to call a national convention, whereupon Congress must call such a convention for the purpose of considering amendments. Although state legislatures have occasionally requested the calling of a convention, no such request has yet received the concurrence required. To date (as of mid-2006), only the first method (proposal by Congress) has been used.
In either case, an amendment must have the approval of the legislatures or of smaller ratifying conventions within three-fourths of the states before becoming a part of the Constitution. All amendments except one have been submitted to the state legislatures for ratification; only the 21st amendment was ratified by individual conventions in the states.
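As a rough illustration of the arithmetic involved, the sketch below computes these thresholds using present-day chamber and state counts (435 representatives, 100 senators, 50 states). These counts, and the simplification of treating "two-thirds" as two-thirds of the full membership rather than of a quorum, are assumptions made for illustration and are not taken from the text.

```python
import math

def two_thirds(n):
    # Smallest whole number that is at least two-thirds of n.
    return math.ceil(2 * n / 3)

def three_fourths(n):
    # Smallest whole number that is at least three-fourths of n.
    return math.ceil(3 * n / 4)

print("Proposal by Congress needs at least:")
print(f"  {two_thirds(435)} of 435 House members and {two_thirds(100)} of 100 Senators")
print("Proposal by convention needs applications from at least:")
print(f"  {two_thirds(50)} of 50 state legislatures")
print("Ratification needs approval by at least:")
print(f"  {three_fourths(50)} of 50 states")
```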
Unlike most constitutions, amendments to the United States Constitution are appended to the existing body of the text, rather than being revisions of or insertions into the main articles. There is no provision for expunging from the text obsolete or rescinded provisions.
Some people feel that demographic changes in the U.S.—specifically the great disparity in population between states—have made the Constitution too difficult to amend, with states representing as little as 4% of the population theoretically able to block an amendment desired by over 90% of Americans; others feel that it is unlikely that such an extreme result would occur. However, any proposals to change this would necessarily involve amending the Constitution itself.
Congressional legislation, passed to implement provisions of the Constitution or to adapt those implementations to changing conditions, also broadens and, in subtle ways, changes the meanings given to the words of the Constitution. Up to a point, the rules and regulations of the many agencies of the federal government have a similar effect. In case of objection, the test in both cases is whether, in the opinion of the courts, such legislation and rules conform with the meanings given to the words of the Constitution.
The Constitution has a total of 27 amendments. The first ten, written in the Bill of Rights, another Charter of Freedom, were ratified simultaneously. The following seventeen were ratified separately.
The Bill of Rights (1-10)
The Bill of Rights comprises the first ten amendments to the Constitution. Those amendments were adopted between 1789 and 1791, and all relate to limiting the power of the federal government. They were added in response to criticisms of the Constitution by the state ratification conventions and by prominent individuals such as Thomas Jefferson. These critics argued that without further restraints, the strong central government would become tyrannical. The amendments were proposed by Congress as part of a block of twelve in September 1789. By December 1791 a sufficient number of states had ratified ten of the twelve proposals, and the Bill of Rights became part of the Constitution. One of the failed proposals has yet to be ratified, while the other became the 27th amendment in 1992.
Subsequent amendments (11–27)
The 17 additional amendments to the United States Constitution ratified after the Bill of Rights cover many subjects. The majority of these later amendments stem from continued efforts to expand individual civil or political liberties, while a few are concerned with modifying the basic governmental structure drafted in Philadelphia in 1787. Although 17 amendments have been ratified after the Bill of Rights, only 16 of them are currently in effect, because Amendment XXI supersedes Amendment XVIII.
Over 10,000 Constitutional amendments have been introduced in Congress since 1789; in a typical Congressional year in the last several decades, between 100 and 200 are offered. Most of these concepts never get out of Congressional committee, much less get proposed by the Congress for ratification. Backers of some amendments have attempted the alternative, and thus far never-utilized, method mentioned in Article V.
Amendment XVIII is the only amendment to be directly and specifically repealed by another (Amendment XXI). The episode highlighted the importance of proposing and ratifying only the most important, and least evanescent, of amendments.
Of the thirty-three amendments that have been proposed by Congress, six have failed ratification by the required three-quarters of the state legislatures—and four of those six are still technically pending before state lawmakers. Starting with the 18th amendment, each proposed amendment (except for the 19th Amendment and for the still-pending Child Labor Amendment of 1924) has specified a deadline for passage. The following are the unratified amendments:
- The Congressional Apportionment Amendment proposed by the 1st Congress on September 25, 1789, defined a formula for how many members there would be in the United States House of Representatives after each decennial census. Ratified by eleven states, the last being Kentucky in June 1792 (Kentucky's initial month of statehood), this amendment contains no expiration date for ratification. In principle it may yet be ratified, though as written it became moot when the population of the United States reached ten million.
- The so-called missing thirteenth amendment, or "Titles of Nobility Amendment" (TONA), proposed by the 11th Congress on May 1, 1810, would have ended the citizenship of any American accepting "any Title of Nobility or Honour" from any foreign power. Some scholars maintain that the amendment was actually ratified by the legislatures of enough states, and that a conspiracy has suppressed it. Known to have been ratified by lawmakers in twelve states, the last in 1812, this amendment contains no expiration date for ratification. It may yet be ratified.
- The Corwin amendment, proposed by the 36th Congress on March 2, 1861, would have forbidden any attempt to subsequently amend the Constitution to empower the Federal government to "abolish or interfere" with the "domestic institutions" of the states (a delicate way of referring to slavery). It was ratified by only Ohio and Maryland lawmakers before the outbreak of the Civil War. Illinois lawmakers—sitting as a state constitutional convention at the time—likewise approved it, but that action is of questionable validity. The proposed amendment contains no expiration date for ratification and may yet be ratified. However, adoption of the 13th, 14th, and 15th Amendments after the Civil War likely means that the amendment would be ineffective if adopted.
- A child labor amendment proposed by the 68th Congress on June 2, 1924, which stipulates: "The Congress shall have power to limit, regulate, and prohibit the labor of persons under eighteen years of age." This amendment is now moot, since subsequent federal child labor laws have uniformly been upheld as a valid exercise of Congress' powers under the commerce clause. This amendment contains no expiration date for ratification. It may yet be ratified.
Properly placed in a separate category from the other four constitutional amendments that Congress proposed to the states, but which not enough states have approved, are the following two offerings which—due to deadlines—are no longer subject to ratification.
- The Equal Rights Amendment, or ERA, which reads in pertinent part "Equality of rights under the law shall not be denied or abridged by the United States or by any state on account of sex." Proposed by the 92nd Congress on March 22, 1972, it was ratified by the legislatures of 35 states, and expired on either March 22, 1979, or on June 30, 1982, depending upon one's view of a controversial three-year extension of the ratification deadline by the 95th Congress in 1978. Of the 35 states ratifying it, four later rescinded their ratifications prior to the extended ratification period, which commenced March 23, 1979, and a fifth—while not going so far as to actually rescind its earlier ratification—adopted a resolution stipulating that its approval would not extend beyond March 22, 1979. There continues to be diversity of opinion as to whether such reversals are valid; no court has ruled on the question, including the Supreme Court. But a precedent against the validity of rescission was established during the ratification process of the 14th Amendment, when Ohio and New Jersey rescinded their earlier approvals yet were counted as ratifying states when the 14th Amendment was ultimately proclaimed part of the Constitution in 1868.
- The District of Columbia Voting Rights Amendment was proposed by the 95th Congress on August 22, 1978. Had it been ratified, it would have granted to Washington, D.C., two Senators and at least one member of the House of Representatives as though the District of Columbia were a state. Ratified by the legislatures of only 16 states—less than half of the required 38—the proposed amendment expired on August 22, 1985.
Proposals for amendments
There are currently only a few proposals for amendments which have entered mainstream political debate. These include the proposed Federal Marriage Amendment, the Balanced Budget Amendment, and the Flag-Burning Amendment.
Here are some questions to answer. If you are stumped and need the answer, just click and drag your mouse over the space next to the question; the answer will show up. The answers to all the questions except question 12 are located in this article. (Question 12 has no right or wrong answer.)
- What document did the Constitution replace? The Articles of Confederation
- Which one of the 13 states did not send a delegate to sign the Constitution? Rhode Island
- How many of the 13 states were needed to ratify this document in order for it to become functional? 9
- Who was nicknamed the "Father of the Constitution"? James Madison
- How many articles are there in the Constitution? 7
- How many delegates signed the Constitution? 39
- Which article describes how to amend the Constitution? Article V
- How many ways are there to amend the Constitution? 2
- How many times has the Constitution been amended? 27
- In which document were the first ten amendments to the Constitution written? The Bill of Rights
- Which document influenced the Constitution? The Magna Carta
- Which article do you think is the most significant and why? Answers may vary | http://en.m.wikibooks.org/wiki/Wikijunior:United_States_Charters_of_Freedom/Constitution | 13 |
14 | Some economists have put forward the Optimum Theory of Population to replace the Malthusian Theory. Prof. Sidgwick gave the basic idea of optimum population in his book 'Principles of Political Economy'. It was further developed by Prof. Cannan and refined and polished by Prof. Robbins, Dalton and Carr-Saunders.
Numbers alone do not show whether a country is under- or over-populated. A particular density of population per square mile might be too large for one country but not for another. The production of a commodity requires the employment of a number of factors of production, of which labour is only one. These have to be combined in a certain proportion in order to obtain the maximum output. Hence, according to J.L. Hanson, “The amount of labour which, combined with the other factors of production, yields the maximum output is the optimum population for that particular country”.
Expressed another way, optimum population means the ideal number of people that a country should have, considering its resources. When a country's population is neither too big nor too small, but just the size its resources can best support, it is called the optimum population. Hence, the optimum population can be defined as the one at which output per capita is highest. According to R.G. Lipsey and C. Harbury, “The optimum population is one that maximizes per capita national income.” What the optimum population is for any country depends on its natural resources and its stock of capital.
When the population is below the optimum, it is a case of under-population. In other words, if the population of a country falls short of the optimum, it can be considered under-populated. The number of people is insufficient for the fuller utilization of natural and capital resources. In such a situation, an increase in population will lead to an increase in output per capita. This is because a larger population enlarges the labour force, makes greater division of labour possible, allows fuller utilization of natural and capital resources, and widens the market for products. Finally a point is reached at which output per capita is highest.
If the population of a country exceeds the optimum, it is clearly over-populated. Thus a country poor in natural resources and lacking capital might be economically over-populated, although in terms of numbers its population is small. The resources will be insufficient to provide employment to all. Output per capita will diminish, the standard of living will fall, and malnutrition and disease will engulf the people.
The table below illustrates the optimum theory of population.
|Population (In Crore)||Per Capita Income (In Rupees)||Size of Population|
The table shows that in the beginning, when population increases from 1 crore to 2 crore, per capita income also increases. This is the case of under-population, when the population of the country is insufficient to exploit all the resources available in the country. When the population is 3 crore, per capita income is highest. But when the population exceeds 3 crore and increases to 4 and 5 crore, per capita income begins to decrease. This is the case of over-population, when the resources are insufficient to provide employment to all. So the population of 3 crore, at which per capita income or output per capita is highest, is the optimum population.
It is thus evident that both under-population and over-population are undesirable. The concepts of optimum population, under-population and over-population are represented in the figure below:
In the figure, the OX axis represents the size of population and the OY axis represents national income. MP is the marginal product curve. In the beginning, as the nation’s population increases, each new citizen adds more to total output than did each previous citizen. Thus the marginal contribution of additional citizens to national income increases. As the population goes on increasing, however, all of the opportunities for improving the division of labour and for exploiting scale economies will eventually be exhausted. So further new inhabitants will add less to total production than did each previous addition to population. Now the marginal product of further additions to population will fall. In the figure, falling marginal product sets in after the population has reached N1.
AP is the per capita output curve, or average product curve. The AP curve first slopes upward, reaches a maximum and then slopes downward. This means that in the beginning, as the population increases, output per capita increases until N2 is reached. Eventually, the falling marginal product of new inhabitants will cause a fall in the average product of the whole population. In the figure, the average product per person begins to fall when the population exceeds N2. The population that maximizes output per person is N2.
At the N2 level of population, output per capita is highest and is equal to N2F; F is the maximum point. Therefore N2 is the optimum population. If the population of a country is less than N2, it is under-populated, and if more than N2, it is over-populated. If the population increases beyond N2, output per capita begins to decline.
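The shapes of the MP and AP curves described above can also be illustrated numerically. The short Python sketch below uses a purely hypothetical cubic production function (it is not taken from the article) chosen so that marginal product peaks before average product, mirroring N1 and N2 in the figure.

```python
# Hypothetical total output (in arbitrary units) for a population of n crore.
# The cubic form is an assumption made only for illustration.
def total_output(n):
    return 60 * n**2 - 4 * n**3

def average_product(n):
    """Output per capita: total output divided by population."""
    return total_output(n) / n

def marginal_product(n, dn=1e-6):
    """Approximate extra output contributed by a small addition to population."""
    return (total_output(n + dn) - total_output(n)) / dn

populations = [n / 10 for n in range(10, 150)]       # 1.0 to 14.9 crore
optimum = max(populations, key=average_product)      # N2 in the figure
peak_mp = max(populations, key=marginal_product)     # N1 in the figure

print(f"Marginal product peaks at about {peak_mp:.1f} crore (N1)")
print(f"Per capita output peaks at about {optimum:.1f} crore (N2, the optimum)")
print(f"Maximum output per capita is {average_product(optimum):.1f}")
```

With this particular function the marginal product turns down before the average product does, which is exactly the relationship between N1 and N2 described above.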
Change in Optimum Population
The optimum population, however, is not fixed over a period of time, for conditions are liable to change. What was formerly the optimum may cease to be so under changed conditions of production. If it were possible to increase the supply of other factors proportionately to the increase in population, the optimum might be raised. Optimum population is relative to resources and technology. So if there is an increase in the capital stock, natural resources or the level of technology, there will be an upward shift in the average and marginal product curves. Per capita output will increase. This means that the level of the optimum population will also increase. This is represented in the figure below:
As shown in the figure, with given resources and technology, the per capita output curve is AP. Here the optimum population is N and the maximum output per capita is NF.
When capital and natural resources increase, or there is an improvement in technology, the per capita output curve shifts upward to AP1. Now the optimum population is N1 and the maximum output per capita is N1F1. Here both the level of optimum population and the maximum output per capita are higher than before.
Likewise, a further increase in resources and technology will shift the per capita output curve upward to AP2. Now the level of optimum population and the maximum per capita output are N2 and N2F2 respectively, which are again higher than before.
Dalton has given a formula to measure the extent to which the actual population of a country deviates from the optimum population. The extent of deviation is called maladjustment. The formula is:
M = (A - O) / O
where:
M = Maladjustment
A = Actual population
O = Optimum population
If M is positive, the country is over-populated and if M negative, the country is under-populated. This can be illustrated by the help of an example. Suppose that the actual population of a country is 2 crore, and its optimum population is 1.5 crore, then,
The country is over-populated to the extent of 0.3333
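For illustration only, Dalton's formula can be computed directly; the sketch below simply reuses the 2 crore and 1.5 crore figures from the example above.
```python
def maladjustment(actual, optimum):
    """Dalton's maladjustment: M = (A - O) / O."""
    return (actual - optimum) / optimum

m = maladjustment(actual=2.0, optimum=1.5)   # populations in crore
print(f"M = {m:.4f}")                        # M = 0.3333 -> over-populated by about a third
```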
Limitations (Criticisms) of Optimum Theory
The Optimum Theory of population has the following limitations:
- No Political Consideration
The optimum theory is based on economic considerations only; it does not take political considerations into account. From a political viewpoint, a large population may be a blessing in disguise, since more people can be useful in times of war. It is no wonder that some countries encourage population increase essentially for political reasons.
- Imaginary Concept
The optimum population is an imaginary concept, because a small population cannot be increased abruptly, nor can a large population be reduced instantly. Hence, it is difficult to achieve the optimum population.
- Optimum Population is not Rigid
The optimum population is not rigid; it is flexible. The level of optimum population increases with an increase in resources and technology, and likewise decreases with a decrease in resources and technology. It is, therefore, difficult to determine what the optimum is.
The scientific method is a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry must be based on empirical and measurable evidence subject to specific principles of reasoning. The Oxford English Dictionary defines the scientific method as: "a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses."
The chief characteristic which distinguishes the scientific method from other methods of acquiring knowledge is that scientists seek to let reality speak for itself, supporting a theory when a theory's predictions are confirmed and challenging a theory when its predictions prove false. Although procedures vary from one field of inquiry to another, identifiable features distinguish scientific inquiry from other methods of obtaining knowledge. Scientific researchers propose hypotheses as explanations of phenomena, and design experimental studies to test these hypotheses via predictions which can be derived from them. These steps must be repeatable, to guard against mistake or confusion in any particular experimenter. Theories that encompass wider domains of inquiry may bind many independently derived hypotheses together in a coherent, supportive structure. Theories, in turn, may help form new hypotheses or place groups of hypotheses into context.
Scientific inquiry is generally intended to be as objective as possible in order to reduce biased interpretations of results. Another basic expectation is to document, archive and share all data and methodology so they are available for careful scrutiny by other scientists, giving them the opportunity to verify results by attempting to reproduce them. This practice, called full disclosure, also allows statistical measures of the reliability of these data to be established (when data is sampled or compared to chance).
Scientific method has been practiced in some form for at least one thousand years and is the process by which science is carried out. Because science builds on previous knowledge, it consistently improves our understanding of the world. The scientific method also improves itself in the same way, meaning that it gradually becomes more effective at generating new knowledge. For example, the concept of falsification (first proposed in 1934) reduces confirmation bias by formalizing the attempt to disprove hypotheses rather than prove them.
The overall process involves making conjectures (hypotheses), deriving predictions from them as logical consequences, and then carrying out experiments based on those predictions to determine whether the original conjecture was correct. There are difficulties in a formulaic statement of method, however. Though the scientific method is often presented as a fixed sequence of steps, they are better considered as general principles. Not all steps take place in every scientific inquiry (or to the same degree), and not always in the same order. As noted by William Whewell (1794–1866), "invention, sagacity, [and] genius" are required at every step:
- Formulation of a question: The question can refer to the explanation of a specific observation, as in "Why is the sky blue?", but can also be open-ended, as in "Does sound travel faster in air than in water?" or "How can I design a drug to cure this particular disease?" This stage also involves looking up and evaluating previous evidence from other scientists, including experience. If the answer is already known, a different question that builds on the previous evidence can be posed. When applying the scientific method to scientific research, determining a good question can be very difficult and affects the final outcome of the investigation.
- Hypothesis: An hypothesis is a conjecture, based on the knowledge obtained while formulating the question, that may explain the observed behavior of a part of our universe. The hypothesis might be very specific, e.g., Einstein's equivalence principle or Francis Crick's "DNA makes RNA makes protein", or it might be broad, e.g., unknown species of life dwell in the unexplored depths of the oceans. A statistical hypothesis is a conjecture about some population. For example, the population might be people with a particular disease. The conjecture might be that a new drug will cure the disease in some of those people. Terms commonly associated with statistical hypotheses are null hypothesis and alternative hypothesis. A null hypothesis is the conjecture that the statistical hypothesis is false, e.g., that the new drug does nothing and that any cures are due to chance effects. Researchers normally want to show that the null hypothesis is false. The alternative hypothesis is the desired outcome, e.g., that the drug does better than chance. A final point: a scientific hypothesis must be falsifiable, meaning that one can identify a possible outcome of an experiment that conflicts with predictions deduced from the hypothesis; otherwise, it cannot be meaningfully tested.
- Prediction: This step involves determining the logical consequences of the hypothesis. One or more predictions are then selected for further testing. The less likely that the prediction would be correct simply by coincidence, the stronger evidence it would be if the prediction were fulfilled; evidence is also stronger if the answer to the prediction is not already known, due to the effects of hindsight bias (see also postdiction). Ideally, the prediction must also distinguish the hypothesis from likely alternatives; if two hypotheses make the same prediction, observing the prediction to be correct is not evidence for either one over the other. (These statements about the relative strength of evidence can be mathematically derived using Bayes' Theorem.)
- Testing: This is an investigation of whether the real world behaves as predicted by the hypothesis. Scientists (and other people) test hypotheses by conducting experiments. The purpose of an experiment is to determine whether observations of the real world agree with or conflict with the predictions derived from an hypothesis. If they agree, confidence in the hypothesis increases; otherwise, it decreases. Agreement does not assure that the hypothesis is true; future experiments may reveal problems. Karl Popper advised scientists to try to falsify hypotheses, i.e., to search for and test those experiments that seem most doubtful. Large numbers of successful confirmations are not convincing if they arise from experiments that avoid risk. Experiments should be designed to minimize possible errors, especially through the use of appropriate scientific controls. For example, tests of medical treatments are commonly run as double-blind tests. Test personnel, who might unwittingly reveal to test subjects which samples are the desired test drugs and which are placebos, are kept ignorant of which are which. Such hints can bias the responses of the test subjects. Failure of an experiment does not necessarily mean the hypothesis is false. Experiments always depend on several hypotheses, e.g., that the test equipment is working properly, and a failure may be a failure of one of the auxiliary hypotheses. (See the Duhem-Quine thesis.) Experiments can be conducted in a college lab, on a kitchen table, at CERN's Large Hadron Collider, at the bottom of an ocean, on Mars (using one of the working rovers), and so on. Astronomers do experiments, searching for planets around distant stars. Finally, most individual experiments address highly specific topics for reasons of practicality. As a result, evidence about broader topics is usually accumulated gradually.
- Analysis: This involves determining what the results of the experiment show and deciding on the next actions to take. The predictions of the hypothesis are compared to those of the null hypothesis, to determine which is better able to explain the data. In cases where an experiment is repeated many times, a statistical analysis such as a chi-squared test may be required. If the evidence has falsified the hypothesis, a new hypothesis is required; if the experiment supports the hypothesis but the evidence is not strong enough for high confidence, other predictions from the hypothesis must be tested. Once a hypothesis is strongly supported by evidence, a new question can be asked to provide further insight on the same topic. Evidence from other scientists and experience are frequently incorporated at any stage in the process. Many iterations may be required to gather sufficient evidence to answer a question with confidence, or to build up many answers to highly specific questions in order to answer a single broader question.
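As a minimal, hypothetical illustration of the analysis step (the patient counts and the 50 per cent chance baseline below are invented for the sketch, not taken from this article), a chi-squared test compares observed outcomes with those expected under the null hypothesis:
```python
import math

# Invented trial data: 100 patients receive the drug and 60 recover.
# Null hypothesis: recovery is a 50/50 chance event, so we expect 50/50.
observed = [60, 40]   # recovered, not recovered
expected = [50, 50]   # counts expected under the null hypothesis

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# For one degree of freedom, the chi-squared survival function can be written
# with the complementary error function.
p_value = math.erfc(math.sqrt(chi2 / 2))

print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")
# chi-squared = 4.00, p ~ 0.046: modest evidence against the null at the 5% level.
```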
This model underlies the scientific revolution. One thousand years ago, Alhazen demonstrated the importance of forming questions and subsequently testing them, an approach which was advocated by Galileo in 1638 with the publication of Two New Sciences. The current method is based on a hypothetico-deductive model formulated in the 20th century, although it has undergone significant revision since first proposed (for a more formal discussion, see below).
The basic elements of the scientific method are illustrated by the example, developed in stages throughout this article, of the discovery of the structure of DNA.
The discovery became the starting point for many further studies involving the genetic material, such as the field of molecular genetics, and it was awarded the Nobel Prize in 1962. Each step of the example is examined in more detail later in the article.
The scientific method also includes other components required even when all the iterations of the steps above have been completed:
- Replication: If an experiment cannot be repeated to produce the same results, this implies that the original results were in error. As a result, it is common for a single experiment to be performed multiple times, especially when there are uncontrolled variables or other indications of experimental error. For significant or surprising results, other scientists may also attempt to replicate the results for themselves, especially if those results would be important to their own work.
- External review: The process of peer review involves evaluation of the experiment by experts, who give their opinions anonymously to allow them to give unbiased criticism. It does not certify correctness of the results, only that the experiments themselves were sound (based on the description supplied by the experimenter). If the work passes peer review, which may require new experiments requested by the reviewers, it will be published in a peer-reviewed scientific journal. The specific journal that publishes the results indicates the perceived quality of the work.
- Data recording and sharing: Scientists must record all data very precisely in order to reduce their own bias and aid in replication by others, a requirement first promoted by Ludwik Fleck (1896–1961) and others. They must supply this data to other scientists who wish to replicate any results, extending to the sharing of any experimental samples that may be difficult to obtain.
The goal of a scientific inquiry is to obtain knowledge in the form of testable explanations that can predict the results of future experiments. This allows scientists to gain an understanding of reality, and later use that understanding to intervene in its causal mechanisms (such as to cure disease). The better an explanation is at making predictions, the more useful it is, and the more likely it is to be correct. The most successful explanations, which explain and make accurate predictions in a wide range of circumstances, are called scientific theories.
Most experimental results do not result in large changes in human understanding; improvements in theoretical scientific understanding are usually the result of a gradual synthesis of the results of different experiments, by various researchers, across different domains of science. Scientific models vary in the extent to which they have been experimentally tested and for how long, and in their acceptance in the scientific community. In general, explanations become accepted by a scientific community as evidence in favor is presented, and as presumptions that are inconsistent with the evidence are falsified.
Properties of scientific inquiry
Scientific knowledge is closely tied to empirical findings, and always remains subject to falsification if new experimental observation incompatible with it is found. That is, no theory can ever be considered completely certain, since new evidence falsifying it might be discovered. If such evidence is found, a new theory may be proposed, or (more commonly) it is found that minor modifications to the previous theory are sufficient to explain the new evidence. The strength of a theory is related to how long it has persisted without falsification of its core principles.
Confirmed theories are also subject to subsumption by more accurate theories. For example, thousands of years of scientific observations of the planets were explained almost perfectly by Newton's laws. However, these laws were then determined to be special cases of a more general theory (relativity), which explained both the (previously unexplained) exceptions to Newton's laws as well as predicting and explaining other observations such as the deflection of light by gravity. Thus independent, unconnected, scientific observations can be connected to each other, unified by principles of increasing explanatory power.
Since every new theory must explain even more than the previous one, any successor theory capable of subsuming it must meet an even higher standard, explaining both the larger, unified body of observations explained by the previous theory and unifying that with even more observations. In other words, as scientific knowledge becomes more accurate with time, it becomes increasingly harder to produce a more successful theory, simply because of the great success of the theories that already exist. For example, the Theory of Evolution explains the diversity of life on Earth, how species adapt to their environments, and many other patterns observed in the natural world; its most recent major modification was unification with genetics to form the modern evolutionary synthesis. In subsequent modifications, it has also subsumed aspects of many other fields such as biochemistry and molecular biology.
Beliefs and biases
Scientific methodology directs that hypotheses be tested in controlled conditions which can be reproduced by others. The scientific community's pursuit of experimental control and reproducibility diminishes the effects of cognitive biases.
For example, pre-existing beliefs can alter the interpretation of results, as in confirmation bias; this is a heuristic that leads a person with a particular belief to see things as reinforcing their belief, even if another observer might disagree (in other words, people tend to observe what they expect to observe).
A historical example is the conjecture that the legs of a galloping horse are splayed at the point when none of the horse's legs touches the ground, to the point of this image being included in paintings by its supporters. However, the first stop-action pictures of a horse's gallop by Eadweard Muybridge showed this to be false, and that the legs are instead gathered together.
In contrast to the requirement for scientific knowledge to correspond to reality, beliefs based on myth or stories can be believed and acted upon irrespective of truth, often taking advantage of the narrative fallacy that when narrative is constructed its elements become easier to believe. Myths intended to be taken as true must have their elements assumed a priori, while science requires testing and validation a posteriori before ideas are accepted.
Elements of the scientific method
There are different ways of outlining the basic method used for scientific inquiry. The scientific community and philosophers of science generally agree on the following classification of method components. These methodological elements and organization of procedures tend to be more characteristic of natural sciences than social sciences. Nonetheless, the cycle of formulating hypotheses, testing and analyzing the results, and formulating new hypotheses, will resemble the cycle described below.
- Four essential elements of the scientific method are iterations, recursions, interleavings, or orderings of the following:
- Characterizations (observations, definitions, and measurements of the subject of inquiry)
- Hypotheses (theoretical, hypothetical explanations of observations and measurements of the subject)
- Predictions (reasoning including logical deduction from the hypothesis or theory)
- Experiments (tests of all of the above)
Each element of the scientific method is subject to peer review for possible mistakes. These activities do not describe all that scientists do (see below) but apply mostly to experimental sciences (e.g., physics, chemistry, and biology). The elements above are often taught in the educational system as "the scientific method".
The scientific method is not a single recipe: it requires intelligence, imagination, and creativity. In this sense, it is not a mindless set of standards and procedures to follow, but is rather an ongoing cycle, constantly developing more useful, accurate and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton's Principia. On the contrary, if the astronomically large, the vanishingly small, and the extremely fast are removed from Einstein's theories — all phenomena Newton could not have observed — Newton's equations are what remain. Einstein's theories are expansions and refinements of Newton's theories and, thus, increase our confidence in Newton's work.
A linearized, pragmatic scheme of the four points above is sometimes offered as a guideline for proceeding:
- Define a question
- Gather information and resources (observe)
- Form an explanatory hypothesis
- Test the hypothesis by performing an experiment and collecting data in a reproducible manner
- Analyze the data
- Interpret the data and draw conclusions that serve as a starting point for new hypotheses
- Publish results
- Retest (frequently done by other scientists)
The iterative cycle inherent in this step-by-step method goes from point 3 to point 6 and then back to point 3 again.
While this schema outlines a typical hypothesis/testing method, it should also be noted that a number of philosophers, historians and sociologists of science (perhaps most notably Paul Feyerabend) claim that such descriptions of scientific method have little relation to the ways science is actually practiced.
- Operation - Some action done to the system being investigated
- Observation - What happens when the operation is done to the system
- Model - A fact, hypothesis, theory, or the phenomenon itself at a certain moment
- Utility Function - A measure of the usefulness of the model for explaining, predicting, and controlling, and of the cost of using it. One element of any scientific utility function is the refutability of the model. Another is its simplicity, based on the Principle of Parsimony, more commonly known as Occam's Razor.
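One common way such a trade-off between explanatory fit and simplicity is formalized, though it is not named in this article, is an information criterion such as Akaike's AIC; the sketch below uses invented numbers.
```python
# AIC = 2k - 2 ln(L): k = number of free parameters, L = maximized likelihood.
# Lower is better; the 2k term penalizes complexity (a rough stand-in for parsimony).
def aic(num_parameters, log_likelihood):
    return 2 * num_parameters - 2 * log_likelihood

# Invented example: a more complex model fits slightly better but is penalized more.
print(aic(num_parameters=2, log_likelihood=-100.0))  # 204.0 (simpler model)
print(aic(num_parameters=6, log_likelihood=-98.5))   # 209.0 (more complex model)
```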
The scientific method depends upon increasingly sophisticated characterizations of the subjects of investigation. (The subjects can also be called unsolved problems or the unknowns.) For example, Benjamin Franklin conjectured, correctly, that St. Elmo's fire was electrical in nature, but it has taken a long series of experiments and theoretical changes to establish this. While seeking the pertinent properties of the subjects, careful thought may also entail some definitions and observations; the observations often demand careful measurements and/or counting.
The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference between pseudo-sciences, such as alchemy, and science, such as chemistry or biology. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such as correlation and regression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. The measurements often require specialized scientific instruments such as thermometers, spectroscopes, particle accelerators, or voltmeters, and the progress of a scientific field is usually intimately tied to their invention and improvement.
|"I am not accustomed to saying anything with certainty after only one or two observations."—Andreas Vesalius (1546)|
Measurements in scientific work are also usually accompanied by estimates of their uncertainty. The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to data collection limitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon the sampling method used and the number of samples taken.
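A minimal sketch of the simplest case described above, estimating a quantity and its uncertainty from repeated measurements; the readings are invented for the example.
```python
import math
import statistics

readings = [9.81, 9.79, 9.83, 9.80, 9.82, 9.78, 9.84]   # invented repeated measurements

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)             # sample standard deviation
sem = stdev / math.sqrt(len(readings))         # standard error of the mean

print(f"estimate = {mean:.3f} +/- {sem:.3f}")
```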
Measurements demand the use of operational definitions of relevant quantities. That is, a scientific quantity is described or defined by how it is measured, as opposed to some more vague, inexact or "idealized" definition. For example, electrical current, measured in amperes, may be operationally defined in terms of the mass of silver deposited in a certain time on an electrode in an electrochemical device that is described in some detail. The operational definition of a thing often relies on comparisons with standards: the operational definition of "mass" ultimately relies on the use of an artifact, such as a particular kilogram of platinum-iridium kept in a laboratory in France.
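As a rough numerical sketch of the electrochemical definition mentioned above (using standard values for Faraday's constant and the molar mass of silver, which are not given in the article), Faraday's law ties the deposited mass directly to the charge passed:
```python
FARADAY = 96485.0   # coulombs per mole of electrons
M_SILVER = 107.87   # grams per mole
Z = 1               # electrons transferred per silver ion

def silver_deposited_grams(current_amperes, seconds):
    # Faraday's law of electrolysis: m = (I * t / F) * (M / z)
    return current_amperes * seconds / FARADAY * M_SILVER / Z

# One ampere flowing for one second deposits roughly 1.118 mg of silver,
# which is how the historical "international ampere" was defined.
print(f"{silver_deposited_grams(1.0, 1.0) * 1000:.3f} mg")
```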
The scientific definition of a term sometimes differs substantially from its natural language usage. For example, mass and weight overlap in meaning in common discourse, but have distinct meanings in mechanics. Scientific quantities are often characterized by their units of measure which can later be described in terms of conventional physical units when communicating the work.
New theories are sometimes developed after realizing certain terms have not previously been sufficiently clearly defined. For example, Albert Einstein's first paper on relativity begins by defining simultaneity and the means for determining length. These ideas were skipped over by Isaac Newton with, "I do not define time, space, place and motion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations. Francis Crick cautions us that when characterizing a subject, however, it can be premature to define something when it remains ill-understood. In Crick's study of consciousness, he actually found it easier to study awareness in the visual system, rather than to study free will, for example. His cautionary example was the gene; the gene was much more poorly understood before Watson and Crick's pioneering discovery of the structure of DNA; it would have been counterproductive to spend much time on the definition of the gene, before them.
The history of the discovery of the structure of DNA is a classic example of the elements of the scientific method: in 1950 it was known that genetic inheritance had a mathematical description, starting with the studies of Gregor Mendel, and that DNA contained genetic information (Oswald Avery's transforming principle). But the mechanism of storing genetic information (i.e., genes) in DNA was unclear. Researchers in Bragg's laboratory at Cambridge University made X-ray diffraction pictures of various molecules, starting with crystals of salt, and proceeding to more complicated substances. Using clues painstakingly assembled over decades, beginning with its chemical composition, it was determined that it should be possible to characterize the physical structure of DNA, and the X-ray images would be the vehicle.
Another example: precession of Mercury
The characterization element can require extended and extensive study, even centuries. It took thousands of years of measurements, from the Chaldean, Indian, Persian, Greek, Arabic and European astronomers, to fully record the motion of planet Earth. Newton was able to incorporate those measurements into the consequences of his laws of motion. But the perihelion of the planet Mercury's orbit exhibits a precession that cannot be fully explained by Newton's laws of motion, though it took quite some time to realize this. The observed difference between Newtonian theory and observation for Mercury's precession was one of the things that occurred to Einstein as a possible early test of his theory of General Relativity. His relativistic calculations matched observation much more closely than did Newtonian theory: the difference is approximately 43 arc-seconds per century.
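The 43 arc-second figure can be reproduced with the standard general-relativistic perihelion-shift formula; the sketch below uses approximate textbook constants that are not given in the article.
```python
import math

GM_SUN = 1.327e20     # m^3 / s^2 (gravitational parameter of the Sun)
C = 2.998e8           # m / s
A = 5.791e10          # semi-major axis of Mercury's orbit, m
E = 0.2056            # orbital eccentricity
PERIOD_DAYS = 87.97   # orbital period

# GR perihelion shift per orbit: 6 * pi * G * M / (a * (1 - e^2) * c^2)
shift_per_orbit = 6 * math.pi * GM_SUN / (A * (1 - E ** 2) * C ** 2)   # radians

orbits_per_century = 36525 / PERIOD_DAYS
arcsec_per_century = shift_per_orbit * orbits_per_century * 206264.8

print(f"~{arcsec_per_century:.0f} arc-seconds per century")   # ~43
```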
An hypothesis is a suggested explanation of a phenomenon, or alternately a reasoned proposal suggesting a possible correlation between or among a set of phenomena.
Normally hypotheses have the form of a mathematical model. Sometimes, but not always, they can also be formulated as existential statements, stating that some particular instance of the phenomenon being studied has some characteristic, or as causal explanations, which have the general form of universal statements, stating that every instance of the phenomenon has a particular characteristic.
Scientists are free to use whatever resources they have — their own creativity, ideas from other fields, induction, Bayesian inference, and so on — to imagine possible explanations for a phenomenon under study. Charles Sanders Peirce, borrowing a page from Aristotle (Prior Analytics, 2.25) described the incipient stages of inquiry, instigated by the "irritation of doubt" to venture a plausible guess, as abductive reasoning. The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea. Michael Polanyi made such creativity the centerpiece of his discussion of methodology.
William Glen observes that
- the success of a hypothesis, or its service to science, lies not simply in its perceived "truth", or power to displace, subsume or reduce a predecessor idea, but perhaps more in its ability to stimulate the research that will illuminate … bald suppositions and areas of vagueness.
In general scientists tend to look for theories that are "elegant" or "beautiful". In contrast to the usual English use of these terms, they here refer to a theory in accordance with the known facts, which is nevertheless relatively simple and easy to handle. Occam's Razor serves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses.
Linus Pauling proposed that DNA might be a triple helix. This hypothesis was also considered by Francis Crick and James D. Watson but discarded. When Watson and Crick learned of Pauling's hypothesis, they understood from existing data that Pauling was wrong and that Pauling would soon admit his difficulties with that structure. So the race was on to figure out the correct structure (except that Pauling did not realize at the time that he was in a race; see the section on DNA predictions below).
Predictions from the hypothesis
Any useful hypothesis will enable predictions, by reasoning including deductive reasoning. It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and deal only with probabilities.
It is essential that the outcome of testing such a prediction be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should have already been considered while formulating the hypothesis.
If the predictions are not accessible by observation or experience, the hypothesis is not yet testable and so will remain to that extent unscientific in a strict sense. A new technology or theory might make the necessary experiments feasible. Thus, much scientifically based speculation might convince one (or many) that the hypothesis that other intelligent species exist is true. But since there is no experiment now known which can test this hypothesis, science itself can have little to say about the possibility. In the future, some new technique might lead to an experimental test and the speculation would then become part of accepted science.
James D. Watson, Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be 'x shaped'. This prediction followed from the work of Cochran, Crick and Vand (and independently by Stokes). The Cochran-Crick-Vand-Stokes theorem provided a mathematical explanation for the empirical observation that diffraction from helical structures produces x shaped patterns.
In their first paper, Watson and Crick also noted that the double helix structure they proposed provided a simple mechanism for DNA replication, writing "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material".
Another example: general relativity
Einstein's theory of General Relativity makes several specific predictions about the observable structure of space-time, such as that light bends in a gravitational field, and that the amount of bending depends in a precise way on the strength of that gravitational field. Arthur Eddington's observations made during a 1919 solar eclipse supported General Relativity rather than Newtonian gravitation.
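The bending Eddington looked for can likewise be estimated from the standard deflection formula for a light ray grazing the Sun; the constants below are approximate textbook values, not taken from the article.
```python
GM_SUN = 1.327e20   # m^3 / s^2
C = 2.998e8         # m / s
R_SUN = 6.957e8     # m (solar radius)

# Deflection of a ray grazing the Sun: alpha = 4 * G * M / (c^2 * R)
alpha_radians = 4 * GM_SUN / (C ** 2 * R_SUN)
print(f"{alpha_radians * 206264.8:.2f} arc-seconds")   # ~1.75, twice the Newtonian value
```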
Once predictions are made, they can be sought by experiments. If the test results contradict the predictions, the hypotheses which entailed them are called into question and become less tenable. Sometimes the experiments are conducted incorrectly or are not very well designed, when compared to a crucial experiment. If the experimental results confirm the predictions, then the hypotheses are considered more likely to be correct, but might still be wrong and continue to be subject to further testing. The experimental control is a technique for dealing with observational error. This technique uses the contrast between multiple samples (or observations) under differing conditions to see what varies or what remains the same. We vary the conditions for each measurement, to help isolate what has changed. Mill's canons can then help us figure out what the important factor is. Factor analysis is one technique for discovering the important factor in an effect.
Depending on the predictions, the experiments can have different shapes. It could be a classical experiment in a laboratory setting, a double-blind study or an archaeological excavation. Even taking a plane from New York to Paris is an experiment which tests the aerodynamical hypotheses used for constructing the plane.
Scientists assume an attitude of openness and accountability on the part of those conducting an experiment. Detailed record keeping is essential, to aid in recording and reporting on the experimental results, and supports the effectiveness and integrity of the procedure. They will also assist in reproducing the experimental results, likely by others. Traces of this approach can be seen in the work of Hipparchus (190-120 BCE), when determining a value for the precession of the Earth, while controlled experiments can be seen in the works of Jābir ibn Hayyān (721-815 CE), al-Battani (853–929) and Alhazen (965-1039).
Watson and Crick showed an initial (and incorrect) proposal for the structure of DNA to a team from King's College London - Rosalind Franklin, Maurice Wilkins, and Raymond Gosling. Franklin immediately spotted the flaws, which concerned the water content. Later Watson saw Franklin's detailed X-ray diffraction images, which showed an X-shape, and was able to confirm that the structure was helical. This rekindled Watson and Crick's model building and led to the correct structure.
Evaluation and improvement
The scientific method is iterative. At any stage it is possible to refine its accuracy and precision, so that some consideration will lead the scientist to repeat an earlier part of the process. Failure to develop an interesting hypothesis may lead a scientist to re-define the subject under consideration. Failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject. Failure of an experiment to produce interesting results may lead a scientist to reconsider the experimental method, the hypothesis, or the definition of the subject.
Other scientists may start their own research and enter the process at any stage. They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. Often the experiment is not done by the person who made the prediction, and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility.
After considerable fruitless experimentation, being discouraged by their superior from continuing, and numerous false starts, Watson and Crick were able to infer the essential structure of DNA by concrete modeling of the physical shapes of the nucleotides which comprise it. They were guided by the bond lengths which had been deduced by Linus Pauling and by Rosalind Franklin's X-ray diffraction images.
Science is a social enterprise, and scientific work tends to be accepted by the scientific community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. Researchers have given their lives for this vision; Georg Wilhelm Richmann was killed by ball lightning (1753) when attempting to replicate the 1752 kite-flying experiment of Benjamin Franklin.
To protect against bad science and fraudulent data, government research-granting agencies such as the National Science Foundation, and science journals including Nature and Science, have a policy that researchers must archive their data and methods so other researchers can test the data and methods and build on the research that has gone before. Scientific data archiving can be done at a number of national archives in the U.S. or in the World Data Center.
Models of scientific inquiry
The classical model of scientific inquiry derives from Aristotle, who distinguished the forms of approximate and exact reasoning, set out the threefold scheme of abductive, deductive, and inductive inference, and also treated the compound forms such as reasoning by analogy.
In 1877, Charles Sanders Peirce (pronounced like "purse"; 1839–1914) characterized inquiry in general not as the pursuit of truth per se but as the struggle to move from irritating, inhibitory doubts born of surprises, disagreements, and the like, and to reach a secure belief, belief being that on which one is prepared to act. He framed scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal or hyperbolic doubt, which he held to be fruitless. He outlined four methods of settling opinion, ordered from least to most successful:
- The method of tenacity (policy of sticking to initial belief) — which brings comforts and decisiveness but leads to trying to ignore contrary information and others' views as if truth were intrinsically private, not public. It goes against the social impulse and easily falters since one may well notice when another's opinion is as good as one's own initial opinion. Its successes can shine but tend to be transitory.
- The method of authority — which overcomes disagreements but sometimes brutally. Its successes can be majestic and long-lived, but it cannot operate thoroughly enough to suppress doubts indefinitely, especially when people learn of other societies present and past.
- The method of the a priori — which promotes conformity less brutally but fosters opinions as something like tastes, arising in conversation and comparisons of perspectives in terms of "what is agreeable to reason." Thereby it depends on fashion in paradigms and goes in circles over time. It is more intellectual and respectable but, like the first two methods, sustains accidental and capricious beliefs, destining some minds to doubt it.
- The scientific method — the method wherein inquiry regards itself as fallible and purposely tests itself and criticizes, corrects, and improves itself.
Peirce held that slow, stumbling ratiocination can be dangerously inferior to instinct and traditional sentiment in practical matters, and that the scientific method is best suited to theoretical research, which in turn should not be trammeled by the other methods and practical ends; reason's "first rule" is that, in order to learn, one must desire to learn and, as a corollary, must not block the way of inquiry. The scientific method excels the others by being deliberately designed to arrive — eventually — at the most secure beliefs, upon which the most successful practices can be based. Starting from the idea that people seek not truth per se but instead to subdue irritating, inhibitory doubt, Peirce showed how, through the struggle, some can come to submit to truth for the sake of belief's integrity, seek as truth the guidance of potential practice correctly to its given goal, and wed themselves to the scientific method.
For Peirce, rational inquiry implies presuppositions about truth and the real; to reason is to presuppose (and at least to hope), as a principle of the reasoner's self-regulation, that the real is discoverable and independent of our vagaries of opinion. In that vein he defined truth as the correspondence of a sign (in particular, a proposition) to its object and, pragmatically, not as actual consensus of some definite, finite community (such that to inquire would be to poll the experts), but instead as that final opinion which all investigators would reach sooner or later but still inevitably, if they were to push investigation far enough, even when they start from different points. In tandem he defined the real as a true sign's object (be that object a possibility or quality, or an actuality or brute fact, or a necessity or norm or law), which is what it is independently of any finite community's opinion and, pragmatically, depends only on the final opinion destined in a sufficient investigation. That is a destination as far, or near, as the truth itself to you or me or the given finite community. Thus his theory of inquiry boils down to "Do the science." Those conceptions of truth and the real involve the idea of a community both without definite limits (and thus potentially self-correcting as far as needed) and capable of definite increase of knowledge. As inference, "logic is rooted in the social principle" since it depends on a standpoint that is, in a sense, unlimited.
Paying special attention to the generation of explanations, Peirce outlined the scientific method as a coordination of three kinds of inference in a purposeful cycle aimed at settling doubts, as follows (in §III–IV in "A Neglected Argument" except as otherwise noted):
1. Abduction (or retroduction). Guessing, inference to explanatory hypotheses for selection of those best worth trying. From abduction, Peirce distinguishes induction as inferring, on the basis of tests, the proportion of truth in the hypothesis. Every inquiry, whether into ideas, brute facts, or norms and laws, arises from surprising observations in one or more of those realms (and for example at any stage of an inquiry already underway). All explanatory content of theories comes from abduction, which guesses a new or outside idea so as to account in a simple, economical way for a surprising or complicative phenomenon. Oftenest, even a well-prepared mind guesses wrong. But the modicum of success of our guesses far exceeds that of sheer luck and seems born of attunement to nature by instincts developed or inherent, especially insofar as best guesses are optimally plausible and simple in the sense, said Peirce, of the "facile and natural", as by Galileo's natural light of reason and as distinct from "logical simplicity". Abduction is the most fertile but least secure mode of inference. Its general rationale is inductive: it succeeds often enough and, without it, there is no hope of sufficiently expediting inquiry (often multi-generational) toward new truths. Coordinative method leads from abducing a plausible hypothesis to judging it for its testability and for how its trial would economize inquiry itself. Peirce calls his pragmatism "the logic of abduction". His pragmatic maxim is: "Consider what effects that might conceivably have practical bearings you conceive the objects of your conception to have. Then, your conception of those effects is the whole of your conception of the object". His pragmatism is a method of reducing conceptual confusions fruitfully by equating the meaning of any conception with the conceivable practical implications of its object's conceived effects — a method of experimentational mental reflection hospitable to forming hypotheses and conducive to testing them. It favors efficiency. The hypothesis, being insecure, needs to have practical implications leading at least to mental tests and, in science, lending themselves to scientific tests. A simple but unlikely guess, if uncostly to test for falsity, may belong first in line for testing. A guess is intrinsically worth testing if it has instinctive plausibility or reasoned objective probability, while subjective likelihood, though reasoned, can be misleadingly seductive. Guesses can be chosen for trial strategically, for their caution (for which Peirce gave as example the game of Twenty Questions), breadth, and incomplexity. One can hope to discover only that which time would reveal through a learner's sufficient experience anyway, so the point is to expedite it; the economy of research is what demands the leap, so to speak, of abduction and governs its art.
2. Deduction. Two stages:
- i. Explication. Unclearly premissed, but deductive, analysis of the hypothesis in order to render its parts as clear as possible.
- ii. Demonstration: Deductive Argumentation, Euclidean in procedure. Explicit deduction of hypothesis's consequences as predictions, for induction to test, about evidence to be found. Corollarial or, if needed, Theorematic.
3. Induction. The long-run validity of the rule of induction is deducible from the principle (presuppositional to reasoning in general) that the real is only the object of the final opinion to which adequate investigation would lead; anything to which no such process would ever lead would not be real. Induction involving ongoing tests or observations follows a method which, sufficiently persisted in, will diminish its error below any predesignate degree. Three stages:
- i. Classification. Unclearly premissed, but inductive, classing of objects of experience under general ideas.
- ii. Probation: direct Inductive Argumentation. Crude (the enumeration of instances) or Gradual (new estimate of proportion of truth in the hypothesis after each test). Gradual Induction is Qualitative or Quantitative; if Qualitative, then dependent on weightings of qualities or characters; if Quantitative, then dependent on measurements, or on statistics, or on countings.
- iii. Sentential Induction. "...which, by Inductive reasonings, appraises the different Probations singly, then their combinations, then makes self-appraisal of these very appraisals themselves, and passes final judgment on the whole result".
Many subspecialties of applied logic and computer science, such as artificial intelligence, machine learning, computational learning theory, inferential statistics, and knowledge representation, are concerned with setting out computational, logical, and statistical frameworks for the various types of inference involved in scientific inquiry. In particular, they contribute hypothesis formation, logical deduction, and empirical testing. Some of these applications draw on measures of complexity from algorithmic information theory to guide the making of predictions from prior distributions of experience, for example, see the complexity measure called the speed prior from which a computable strategy for optimal inductive reasoning can be derived.
Communication and community
Frequently the scientific method is employed not only by a single person, but also by several people cooperating directly or indirectly. Such cooperation can be regarded as one of the defining elements of a scientific community. Various techniques have been developed to ensure the integrity of scientific methodology within such an environment.
Peer review evaluation
Scientific journals use a process of peer review, in which scientists' manuscripts are submitted by editors of scientific journals to (usually one to three) fellow (usually anonymous) scientists familiar with the field for evaluation. The referees may or may not recommend publication, or may recommend publication with suggested modifications, or, sometimes, publication in another journal. This serves to keep the scientific literature free of unscientific or pseudoscientific work, to help cut down on obvious errors, and generally otherwise to improve the quality of the material. The peer review process can have limitations when considering research outside the conventional scientific paradigm: problems of "groupthink" can interfere with open and fair deliberation of some new research.
Documentation and replication
Sometimes experimenters may make systematic errors during their experiments, unconsciously veer from scientific method (Pathological science) for various reasons, or, in rare cases, deliberately report false results. Consequently, it is a common practice for other scientists to attempt to repeat the experiments in order to duplicate the results, thus further validating the hypothesis.
As a result, researchers are expected to practice scientific data archiving in compliance with the policies of government funding agencies and scientific journals. Detailed records of their experimental procedures, raw data, statistical analyses and source code are preserved in order to provide evidence of the effectiveness and integrity of the procedure and assist in reproduction. These procedural records may also assist in the conception of new experiments to test the hypothesis, and may prove useful to engineers who might examine the potential practical applications of a discovery.
When additional information is needed before a study can be reproduced, the author of the study is expected to provide it promptly. If the author refuses to share data, appeals can be made to the journal editors who published the study or to the institution which funded the research.
Since it is impossible for a scientist to record everything that took place in an experiment, facts selected for their apparent relevance are reported. This may lead, unavoidably, to problems later if some supposedly irrelevant feature is questioned. For example, Heinrich Hertz did not report the size of the room used to test Maxwell's equations, which later turned out to account for a small deviation in the results. The problem is that parts of the theory itself need to be assumed in order to select and report the experimental conditions. The observations are hence sometimes described as being 'theory-laden'.
Dimensions of practice
The primary constraints on contemporary science are:
- Publication, i.e. Peer review
- Resources (mostly funding)
It has not always been like this: in the old days of the "gentleman scientist", funding (and to a lesser extent publication) were far weaker constraints.
Both of these constraints indirectly require scientific method — work that violates the constraints will be difficult to publish and difficult to get funded. Journals require submitted papers to conform to "good scientific practice" and this is mostly enforced by peer review. Originality, importance and interest are more important - see for example the author guidelines for Nature.
Philosophy and sociology of science
Philosophy of science looks at the underpinning logic of the scientific method, at what separates science from non-science, and the ethic that is implicit in science. There are basic assumptions derived from philosophy that form the base of the scientific method - namely, that reality is objective and consistent, that humans have the capacity to perceive reality accurately, and that rational explanations exist for elements of the real world. These assumptions from methodological naturalism form the basis on which science is grounded. Logical Positivist, empiricist, falsificationist, and other theories have claimed to give a definitive account of the logic of science, but each has in turn been criticized.
Thomas Kuhn examined the history of science in his The Structure of Scientific Revolutions, and found that the actual method used by scientists differed dramatically from the then-espoused method. His observations of science practice are essentially sociological and do not speak to how science is or can be practiced in other times and other cultures.
Norwood Russell Hanson, Imre Lakatos and Thomas Kuhn have done extensive work on the "theory laden" character of observation. Hanson (1958) first coined the term for the idea that all observation is dependent on the conceptual framework of the observer, using the concept of gestalt to show how preconceptions can affect both observation and description. He opens Chapter 1 with a discussion of the Golgi bodies and their initial rejection as an artefact of staining technique, and a discussion of Brahe and Kepler observing the dawn and seeing a "different" sun rise despite the same physiological phenomenon. Kuhn and Feyerabend acknowledge the pioneering significance of his work.
Kuhn (1961) said the scientist generally has a theory in mind before designing and undertaking experiments so as to make empirical observations, and that the "route from theory to measurement can almost never be traveled backward". This implies that the way in which theory is tested is dictated by the nature of the theory itself, which led Kuhn (1961, p. 166) to argue that "once it has been adopted by a profession ... no theory is recognized to be testable by any quantitative tests that it has not already passed".
Paul Feyerabend similarly examined the history of science, and was led to deny that science is genuinely a methodological process. In his book Against Method he argues that scientific progress is not the result of applying any particular method. In essence, he says that for any specific method or norm of science, one can find a historic episode where violating it has contributed to the progress of science. Thus, if believers in scientific method wish to express a single universally valid rule, Feyerabend jokingly suggests, it should be 'anything goes'. Criticisms such as his led to the strong programme, a radical approach to the sociology of science.
The postmodernist critiques of science have themselves been the subject of intense controversy. This ongoing debate, known as the science wars, is the result of conflicting values and assumptions between the postmodernist and realist camps. Whereas postmodernists assert that scientific knowledge is simply another discourse (note that this term has special meaning in this context) and not representative of any form of fundamental truth, realists in the scientific community maintain that scientific knowledge does reveal real and fundamental truths about reality. Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate method of deriving truth.
Role of chance in discovery
Somewhere between 33% and 50% of all scientific discoveries are estimated to have been stumbled upon, rather than sought out. This may explain why scientists so often express that they were lucky. Louis Pasteur is credited with the famous saying that "Luck favours the prepared mind", but some psychologists have begun to study what it means to be 'prepared for luck' in the scientific context. Research is showing that scientists are taught various heuristics that tend to harness chance and the unexpected. This is what professor of economics Nassim Nicholas Taleb calls "Anti-fragility"; while some systems of investigation are fragile in the face of human error, human bias, and randomness, the scientific method is more than resistant or tough - it actually benefits from such randomness in many ways (it is anti-fragile). Taleb believes that the more anti-fragile the system, the more it will flourish in the real world.
Psychologist Kevin Dunbar says the process of discovery often starts with researchers finding bugs in their experiments. These unexpected results lead researchers to try and fix what they think is an error in their method. Eventually, the researcher decides the error is too persistent and systematic to be a coincidence. The highly controlled, cautious and curious aspects of the scientific method are thus what make it well suited for identifying such persistent systematic errors. At this point, the researcher will begin to think of theoretical explanations for the error, often seeking the help of colleagues across different domains of expertise.
The development of the scientific method is inseparable from the history of science itself. Ancient Egyptian documents describe empirical methods in astronomy, mathematics, and medicine. The ancient Greek philosopher Thales in the 6th century BC refused to accept supernatural, religious or mythological explanations for natural phenomena, proclaiming that every event had a natural cause. The development of deductive reasoning by Plato was an important step towards the scientific method. Empiricism seems to have been formalized by Aristotle, who believed that universal truths could be reached via induction.
There are hints of experimental methods from the Classical world (e.g., those reported by Archimedes in a report recovered early in the 20th century from an overwritten manuscript), but the first clear instances of an experimental scientific method seem to have been developed by Islamic scientists who introduced the use of experimentation and quantification within a generally empirical orientation. For example, Alhazen performed optical and physiological experiments, reported in his manifold works, the most famous being Book of Optics (1021).
By the late 15th century, the physician-scholar Niccolò Leoniceno was finding errors in Pliny's Natural History. As a physician, Leoniceno was concerned about these botanical errors propagating to the materia medica on which medicines were based. To counter this, a botanical garden was established at Orto botanico di Padova, University of Padua (in use for teaching by 1546), in order that medical students might have empirical access to the plants of a pharmacopoeia. The philosopher and physician Francisco Sanches was led by his medical training at Rome, 1571–73, and by the philosophical skepticism recently placed in the European mainstream by the publication of Sextus Empiricus' "Outlines of Pyrrhonism", to search for a true method of knowing (modus sciendi), as nothing clear can be known by the methods of Aristotle and his followers — for example, syllogism fails upon circular reasoning. Following the physician Galen's method of medicine, Sanches lists the methods of judgement and experience, which are faulty in the wrong hands, and we are left with the bleak statement That Nothing is Known (1581). This challenge was taken up by René Descartes in the next generation (1637), but at the least, Sanches warns us that we ought to refrain from the methods, summaries, and commentaries on Aristotle, if we seek scientific knowledge. In this, he is echoed by Francis Bacon, also influenced by skepticism; Sanches cites the humanist Juan Luis Vives who sought a better educational system, as well as a statement of human rights as a pathway for improvement of the lot of the poor.
The modern scientific method crystallized no later than in the 17th and 18th centuries. In his work Novum Organum (1620) — a reference to Aristotle's Organon — Francis Bacon outlined a new system of logic to improve upon the old philosophical process of syllogism. Then, in 1637, René Descartes established the framework for scientific method's guiding principles in his treatise, Discourse on Method. The writings of Alhazen, Bacon and Descartes are considered critical in the historical development of the modern scientific method, as are those of John Stuart Mill.
Grosseteste was "the principal figure" in bringing about "a more adequate method of scientific inquiry" by which "medieval scientists were able eventually to outstrip their ancient European and Muslim teachers" (Dales 1973:62). ... His thinking influenced Roger Bacon, who spread Grosseteste's ideas from Oxford to the University of Paris during a visit there in the 1240s. From the prestigious universities in Oxford and Paris, the new experimental science spread rapidly throughout the medieval universities: "And so it went to Galileo, William Gilbert, Francis Bacon, William Harvey, Descartes, Robert Hooke, Newton, Leibniz, and the world of the seventeenth century" (Crombie 1962:15). So it went to us also. —Hugh G. Gauch, 2003.
In the late 19th century, Charles Sanders Peirce proposed a schema that would turn out to have considerable influence in the development of current scientific methodology generally. Peirce accelerated the progress on several fronts. Firstly, speaking in a broader context in "How to Make Our Ideas Clear" (1878), Peirce outlined an objectively verifiable method to test the truth of putative knowledge in a way that goes beyond mere foundational alternatives, focusing upon both deduction and induction. He thus placed induction and deduction in a complementary rather than competitive context (the latter of which had been the primary trend at least since David Hume, who wrote in the mid-to-late 18th century). Secondly, and of more direct importance to modern method, Peirce put forth the basic schema for hypothesis testing that continues to prevail today. Extracting the theory of inquiry from its raw materials in classical logic, he refined it in parallel with the early development of symbolic logic to address the then-current problems in scientific reasoning. Peirce examined and articulated the three fundamental modes of reasoning that, as discussed above in this article, play a role in inquiry today, the processes that are currently known as abductive, deductive, and inductive inference. Thirdly, he played a major role in the progress of symbolic logic itself — indeed this was his primary specialty.
Beginning in the 1930s, Karl Popper argued that there is no such thing as inductive reasoning. All inferences ever made, including in science, are purely deductive according to this view. Accordingly, he claimed that the empirical character of science has nothing to do with induction—but with the deductive property of falsifiability that scientific hypotheses have. Contrasting his views with inductivism and positivism, he even denied the existence of the scientific method: "(1) There is no method of discovering a scientific theory (2) There is no method for ascertaining the truth of a scientific hypothesis, i.e., no method of verification; (3) There is no method for ascertaining whether a hypothesis is 'probable', or probably true". Instead, he held that there is only one universal method, a method not particular to science: the negative method of criticism, known colloquially as trial and error. It covers not only all products of the human mind, including science, mathematics, philosophy, art and so on, but also the evolution of life. Following Peirce and others, Popper argued that science is fallible and has no authority. In contrast to empiricist-inductivist views, he welcomed metaphysics and philosophical discussion and even gave qualified support to myths and pseudosciences. Popper's view has become known as critical rationalism.
Although science in a broad sense existed before the modern era, and in many historical civilizations (as described above), modern science is so distinct in its approach and successful in its results that it now defines what science is in the strictest sense of the term.
Relationship with mathematics
Science is the process of gathering, comparing, and evaluating proposed models against observables. A model can be a simulation, mathematical or chemical formula, or set of proposed steps. Science is like mathematics in that researchers in both disciplines can clearly distinguish what is known from what is unknown at each stage of discovery. Models, in both science and mathematics, need to be internally consistent and also ought to be falsifiable (capable of disproof). In mathematics, a statement need not yet be proven; at such a stage, that statement would be called a conjecture. But when a statement has attained mathematical proof, that statement gains a kind of immortality which is highly prized by mathematicians, and for which some mathematicians devote their lives.
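To make the idea of evaluating proposed models against observables concrete, the sketch below accepts a candidate model only while every prediction stays within the stated measurement uncertainty. It is not part of the original article; the free-fall model, the data points, and the tolerances are all invented for illustration.

```python
# A minimal sketch of "evaluating a proposed model against observables".
# The model, data, and tolerances below are invented for illustration only.

def model_free_fall(t):
    """Proposed model: distance fallen (metres) after t seconds, ignoring air resistance."""
    g = 9.81  # assumed gravitational acceleration in m/s^2
    return 0.5 * g * t ** 2

# Hypothetical observations: (time in s, measured distance in m, measurement uncertainty in m)
observations = [(1.0, 4.9, 0.2), (2.0, 19.4, 0.4), (3.0, 44.5, 0.9)]

def model_survives(model, data):
    """Return False as soon as any prediction misses an observation by more than
    its stated uncertainty -- a crude, all-or-nothing falsification check."""
    return all(abs(model(t) - measured) <= tolerance
               for t, measured, tolerance in data)

print(model_survives(model_free_fall, observations))  # True: not falsified by these data
```

A real comparison would use proper statistical tests rather than a simple tolerance check, but the structure is the same: a model, a set of observations, and an explicit criterion under which the model fails.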
Mathematical work and scientific work can inspire each other. For example, the technical concept of time arose in science, and timelessness was a hallmark of a mathematical topic. But today, the Poincaré conjecture has been proven using time as a mathematical concept in which objects can flow (see Ricci flow).
Nevertheless, the connection between mathematics and reality (and so science to the extent it describes reality) remains obscure. Eugene Wigner's paper, The Unreasonable Effectiveness of Mathematics in the Natural Sciences, is a very well known account of the issue from a Nobel Prize-winning physicist. In fact, some observers (including some well known mathematicians such as Gregory Chaitin, and others such as Lakoff and Núñez) have suggested that mathematics is the result of practitioner bias and human limitation (including cultural ones), somewhat like the post-modernist view of science.
George Pólya's work on problem solving, the construction of mathematical proofs, and heuristics shows that the mathematical method and the scientific method differ in detail, while nevertheless resembling each other in their use of iterative or recursive steps.
     Mathematical method      Scientific method
 1   Understanding            Characterization from experience and observation
 2   Analysis                 Hypothesis: a proposed explanation
 3   Synthesis                Deduction: prediction from the hypothesis
 4   Review/Extend            Test and experiment
In Pólya's view, understanding involves restating unfamiliar definitions in your own words, resorting to geometrical figures, and questioning what we know and do not know already; analysis, which Pólya takes from Pappus, involves free and heuristic construction of plausible arguments, working backward from the goal, and devising a plan for constructing the proof; synthesis is the strict Euclidean exposition of step-by-step details of the proof; review involves reconsidering and re-examining the result and the path taken to it.
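As a concrete, if artificial, illustration of those four steps, the sketch below applies them to the claim that the sum of the first n odd numbers equals n squared. It is not drawn from Pólya; the conjecture and the code are merely an example, and only the final Review/Test step is automated, with the earlier steps summarized in comments.

```python
# An illustrative walk-through of the four steps applied to a simple conjecture:
# "the sum of the first n odd numbers equals n squared". The conjecture and the
# code are chosen only as an example; only the Review/Test step is automated.

# 1. Understanding / Characterization: restate the claim precisely.
#    For every positive integer n:  1 + 3 + 5 + ... + (2n - 1) == n**2
# 2. Analysis / Hypothesis: working backward, each new odd term (2n - 1) should
#    extend the previous square, since n**2 - (n - 1)**2 == 2n - 1.
# 3. Synthesis / Deduction: that identity yields an induction proof, and it also
#    predicts that the equality must hold for every n we care to check.
# 4. Review (or Test and experiment): check the prediction against concrete cases.

def sum_of_first_n_odds(n):
    """Sum 1 + 3 + ... + (2n - 1) directly, without using the conjectured formula."""
    return sum(2 * k - 1 for k in range(1, n + 1))

assert all(sum_of_first_n_odds(n) == n ** 2 for n in range(1, 1001))
print("Conjecture survives testing for n = 1 to 1000")
```

The passing tests do not prove the conjecture; the induction argument sketched in step 3 does, and that difference is exactly where the two columns of the table part ways.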
Problems and issues
History, philosophy, sociology
- Goldhaber & Nieto 2010, p. 940
- " Rules for the study of natural philosophy", Newton 1999, pp. 794–6, from Book 3, The System of the World.
- Oxford English Dictionary - entry for scientific.
- "How does light travel through transparent bodies? Light travels through transparent bodies in straight lines only.... We have explained this exhaustively in our Book of Optics. But let us now mention something to prove this convincingly: the fact that light travels in straight lines is clearly observed in the lights which enter into dark rooms through holes.... [T]he entering light will be clearly observable in the dust which fills the air. —Alhazen, translated into English from German by M. Schwarz, from "Abhandlung über das Licht", J. Baarmann (ed. 1882) Zeitschrift der Deutschen Morgenländischen Gesellschaft Vol 36 as quoted in Sambursky 1974, p. 136.
- He demonstrated his conjecture that "light travels through transparent bodies in straight lines only" by placing a straight stick or a taut thread next to the light beam, as quoted in Sambursky 1974, p. 136 to prove that light travels in a straight line.
- David Hockney, (2001, 2006) in Secret Knowledge: rediscovering the lost techniques of the old masters ISBN 0-14-200512-6 (expanded edition) cites Alhazen several times as the likely source for the portraiture technique using the camera obscura, which Hockney rediscovered with the aid of an optical suggestion from Charles M. Falco. Kitab al-Manazir, which is Alhazen's Book of Optics, at that time denoted Opticae Thesaurus, Alhazen Arabis, was translated from Arabic into Latin for European use as early as 1270. Hockney cites Friedrich Risner's 1572 Basle edition of Opticae Thesaurus. Hockney quotes Alhazen as the first clear description of the camera obscura in Hockney, p. 240.
- Morris Kline (1985) Mathematics for the nonmathematician. Courier Dover Publications. p. 284. ISBN 0-486-24823-2
- Shapere, Dudley (1974). Galileo: A Philosophical Study. University of Chicago Press. ISBN 0-226-75007-8.
- Peirce, C. S., Collected Papers v. 1, paragraph 74.
- " The thesis of this book, as set forth in Chapter One, is that there are general principles applicable to all the sciences." __ Gauch 2003, p. xv
- Peirce (1877), "The Fixation of Belief", Popular Science Monthly, v. 12, pp. 1–15. Reprinted often, including (Collected Papers of Charles Sanders Peirce v. 5, paragraphs 358–87), (The Essential Peirce, v. 1, pp. 109–23). Peirce.org Eprint. Wikisource Eprint.
- Gauch 2003, p. 1: This is the principle of noncontradiction.
- Peirce, C. S., Collected Papers v. 5, in paragraph 582, from 1898:
... [rational] inquiry of every type, fully carried out, has the vital power of self-correction and of growth. This is a property so deeply saturating its inmost nature that it may truly be said that there is but one thing needful for learning the truth, and that is a hearty and active desire to learn what is true.
- Taleb contributes a brief description of anti-fragility, http://www.edge.org/q2011/q11_3.html
- Karl R. Popper (1963), 'The Logic of Scientific Discovery', pp. 17-20, 249-252, 437-438, and elsewhere.
- Leon Lederman, for teaching physics first, illustrates how to avoid confirmation bias: Ian Shelton, in Chile, was initially skeptical that supernova 1987a was real, but possibly an artifact of instrumentation (null hypothesis), so he went outside and disproved his null hypothesis by observing SN 1987a with the naked eye. The Kamiokande experiment, in Japan, independently observed neutrinos from SN 1987a at the same time.
- Peirce (1908), "A Neglected Argument for the Reality of God", Hibbert Journal v. 7, pp. 90-112. s:A Neglected Argument for the Reality of God with added notes. Reprinted with previously unpublished part, Collected Papers v. 6, paragraphs 452-85, The Essential Peirce v. 2, pp. 434-50, and elsewhere.
- Gauch 2003, p. 3
- History of Inductive Science (1837), and in Philosophy of Inductive Science (1840)
- Schuster and Powers (2005), Translational and Experimental Clinical Research, Ch. 1. This chapter also discusses the different types of research questions and how they are produced.
- This phrasing is attributed to Marshall Nirenberg.
- Karl R. Popper, Conjectures and Refutations: The Growth of Scientific Knowledge, Routledge, 2003 ISBN 0-415-28594-1
- Lindberg 2007, pp. 2–3: "There is a danger that must be avoided. ... If we wish to do justice to the historical enterprise, we must take the past for what it was. And that means we must resist the temptation to scour the past for examples or precursors of modern science. ...My concern will be with the beginnings of scientific theories, the methods by which they were formulated, and the uses to which they were put; ... "
- Galilei, Galileo (M.D.C.XXXVIII), Discorsi e Dimonstrazioni Matematiche, intorno a due nuoue scienze, Leida: Apresso gli Elsevirri, ISBN 0-486-60099-8, Dover reprint of the 1914 Macmillan translation by Henry Crew and Alfonso de Salvio of Two New Sciences, Galileo Galilei Linceo (1638). Additional publication information is from the collection of first editions of the Library of Congress surveyed by Bruno 1989, pp. 261–264.
- Godfrey-Smith 2003 p. 236.
- October 1951, as noted in McElheny 2004, p. 40:"That's what a helix should look like!" Crick exclaimed in delight (This is the Cochran-Crick-Vand-Stokes theory of the transform of a helix).
- June 1952, as noted in McElheny 2004, p. 43: Watson had succeeded in getting X-ray pictures of TMV showing a diffraction pattern consistent with the transform of a helix.
- Watson did enough work on Tobacco mosaic virus to produce the diffraction pattern for a helix, per Crick's work on the transform of a helix. pp. 137-138, Horace Freeland Judson (1979) The Eighth Day of Creation ISBN 0-671-22540-5
- — Cochran W, Crick FHC and Vand V. (1952) "The Structure of Synthetic Polypeptides. I. The Transform of Atoms on a Helix", Acta Cryst., 5, 581-586.
- Friday, January 30, 1953. Tea time, as noted in McElheny 2004, p. 52: Franklin confronts Watson and his paper - "Of course it [Pauling's pre-print] is wrong. DNA is not a helix." However, Watson then visits Wilkins' office, sees photo 51, and immediately recognizes the diffraction pattern of a helical structure. But additional questions remained, requiring additional iterations of their research. For example, the number of strands in the backbone of the helix (Crick suspected 2 strands, but cautioned Watson to examine that more critically), the location of the base pairs (inside the backbone or outside the backbone), etc. One key point was that they realized that the quickest way to reach a result was not to continue a mathematical analysis, but to build a physical model.
- "The instant I saw the picture my mouth fell open and my pulse began to race." —Watson 1968, p. 167 Page 168 shows the X-shaped pattern of the B-form of DNA, clearly indicating crucial details of its helical structure to Watson and Crick.
- McElheny 2004 p.52 dates the Franklin-Watson confrontation as Friday, January 30, 1953. Later that evening, Watson urges Wilkins to begin model-building immediately. But Wilkins agrees to do so only after Franklin's departure.
- Saturday, February 28, 1953, as noted in McElheny 2004, pp. 57–59: Watson found the base pairing mechanism which explained Chargaff's rules using his cardboard models.
- Fleck 1979, pp. xxvii-xxviii
- "NIH Data Sharing Policy."
- Stanovich, Keith E. (2007). How to Think Straight About Psychology. Boston: Pearson Education. pg 123
- Brody 1993, pp. 44–45
- Hall, B. K.; Hallgrímsson, B., eds. (2008). Strickberger's Evolution (4th ed.). Jones & Bartlett. p. 762. ISBN 0-7637-0066-5.
- Cracraft, J.; Donoghue, M. J., eds. (2005). Assembling the tree of life. Oxford University Press. p. 592. ISBN 0-19-517234-5.
- Needham & Wang 1954 p.166 shows how the 'flying gallop' image propagated from China to the West.
- "A myth is a belief given uncritical acceptance by members of a group ..." —Weiss, Business Ethics p. 15, as cited by Ronald R. Sims (2003) Ethics and corporate social responsibility: why giants fall p.21
- Imre Lakatos (1976), Proofs and Refutations. Taleb 2007, p. 72 lists ways to avoid narrative fallacy and confirmation bias.
- For more on the narrative fallacy, see also Fleck 1979, p. 27: "Words and ideas are originally phonetic and mental equivalences of the experiences coinciding with them. ... Such proto-ideas are at first always too broad and insufficiently specialized. ... Once a structurally complete and closed system of opinions consisting of many details and relations has been formed, it offers enduring resistance to anything that contradicts it."
- "Invariably one came up against fundamental physical limits to the accuracy of measurement. ... The art of physical measurement seemed to be a matter of compromise, of choosing between reciprocally related uncertainties. ... Multiplying together the conjugate pairs of uncertainty limits mentioned, however, I found that they formed invariant products of not one but two distinct kinds. ... The first group of limits were calculable a priori from a specification of the instrument. The second group could be calculated only a posteriori from a specification of what was done with the instrument. ... In the first case each unit [of information] would add one additional dimension (conceptual category), whereas in the second each unit would add one additional atomic fact.", —Pages 1-4: MacKay, Donald M. (1969), Information, Mechanism, and Meaning, Cambridge, MA: MIT Press, ISBN 0-262-63-032-X
- See the hypothetico-deductive method, for example, Godfrey-Smith 2003, p. 236.
- Jevons 1874, pp. 265–6.
- pp. 65, 73, 92, 398 —Andrew J. Galambos, Sic Itur ad Astra ISBN 0-88078-004-5 (AJG learned scientific method from Felix Ehrenhaft)
- Galileo 1638, pp. v-xii,1–300
- Brody 1993, pp. 10–24 calls this the "epistemic cycle": "The epistemic cycle starts from an initial model; iterations of the cycle then improve the model until an adequate fit is achieved."
- Iteration example: Chaldean astronomers such as Kidinnu compiled astronomical data. Hipparchus was to use this data to calculate the precession of the Earth's axis. Fifteen hundred years after Kidinnu, Al-Batani, born in what is now Turkey, would use the collected data and improve Hipparchus' value for the precession of the Earth's axis. Al-Batani's value, 54.5 arc-seconds per year, compares well to the current value of 49.8 arc-seconds per year (26,000 years for Earth's axis to round the circle of nutation).
- Recursion example: the Earth is itself a magnet, with its own North and South Poles William Gilbert (in Latin 1600) De Magnete, or On Magnetism and Magnetic Bodies. Translated from Latin to English, selection by Moulton & Schifferes 1960, pp. 113–117. Gilbert created a terrella, a lodestone ground into a spherical shape, which served as Gilbert's model for the Earth itself, as noted in Bruno 1989, p. 277.
- "The foundation of general physics ... is experience. These ... everyday experiences we do not discover without deliberately directing our attention to them. Collecting information about these is observation." —Hans Christian Ørsted("First Introduction to General Physics" ¶13, part of a series of public lectures at the University of Copenhagen. Copenhagen 1811, in Danish, printed by Johan Frederik Schulz. In Kirstine Meyer's 1920 edition of Ørsted's works, vol.III pp. 151-190. ) "First Introduction to Physics: the Spirit, Meaning, and Goal of Natural Science". Reprinted in German in 1822, Schweigger's Journal für Chemie und Physik 36, pp.458-488, as translated in Ørsted 1997, p. 292
- "When it is not clear under which law of nature an effect or class of effect belongs, we try to fill this gap by means of a guess. Such guesses have been given the name conjectures or hypotheses." —Hans Christian Ørsted(1811) "First Introduction to General Physics" as translated in Ørsted 1997, p. 297.
- "In general we look for a new law by the following process. First we guess it. ...", —Feynman 1965, p. 156
- "... the statement of a law - A depends on B - always transcends experience."—Born 1949, p. 6
- "The student of nature ... regards as his property the experiences which the mathematician can only borrow. This is why he deduces theorems directly from the nature of an effect while the mathematician only arrives at them circuitously." —Hans Christian Ørsted(1811) "First Introduction to General Physics" ¶17. as translated in Ørsted 1997, p. 297.
- Salviati speaks: "I greatly doubt that Aristotle ever tested by experiment whether it be true that two stones, one weighing ten times as much as the other, if allowed to fall, at the same instant, from a height of, say, 100 cubits, would so differ in speed that when the heavier had reached the ground, the other would not have fallen more than 10 cubits." Two New Sciences (1638) —Galileo 1638, pp. 61–62. A more extended quotation is referenced by Moulton & Schifferes 1960, pp. 80–81.
- In the inquiry-based education paradigm, the stage of "characterization, observation, definition, …" is more briefly summed up under the rubric of a Question
- "To raise new questions, new possibilities, to regard old problems from a new angle, requires creative imagination and marks real advance in science." —Einstein & Infeld 1938, p. 92.
- Crawford S, Stucki L (1990), "Peer review and the changing research record", "J Am Soc Info Science", vol. 41, pp 223-228
- See, e.g., Gauch 2003, esp. chapters 5-8
- Cartwright, Nancy (1983), How the Laws of Physics Lie. Oxford: Oxford University Press. ISBN 0-19-824704-4
- Andreas Vesalius, Epistola, Rationem, Modumque Propinandi Radicis Chynae Decocti (1546), 141. Quoted and translated in C.D. O'Malley, Andreas Vesalius of Brussels, (1964), 116. As quoted by Bynum & Porter 2005, p. 597.
- Crick, Francis (1994), The Astonishing Hypothesis ISBN 0-684-19431-7 p.20
- McElheny 2004 p.34
- Glen 1994, pp. 37–38.
- "The structure that we propose is a three-chain structure, each chain being a helix" — Linus Pauling, as quoted on p. 157 by Horace Freeland Judson (1979), The Eighth Day of Creation ISBN 0-671-22540-5
- McElheny 2004, pp. 49–50: January 28, 1953 - Watson read Pauling's pre-print, and realized that in Pauling's model, DNA's phosphate groups had to be un-ionized. But DNA is an acid, which contradicts Pauling's model.
- June 1952, as noted in McElheny 2004, p. 43: Watson had succeeded in getting X-ray pictures of TMV showing a diffraction pattern consistent with the transform of a helix.
- McElheny 2004 p.68: Nature April 25, 1953.
- In March 1917, the Royal Astronomical Society announced that on May 29, 1919, the occasion of a total eclipse of the sun would afford favorable conditions for testing Einstein's General theory of relativity. One expedition, to Sobral, Ceará, Brazil, and Eddington's expedition to the island of Principe yielded a set of photographs, which, when compared to photographs taken at Sobral and at Greenwich Observatory showed that the deviation of light was measured to be 1.69 arc-seconds, as compared to Einstein's desk prediction of 1.75 arc-seconds. — Antonina Vallentin (1954), Einstein, as quoted by Samuel Rapport and Helen Wright (1965), Physics, New York: Washington Square Press, pp 294-295.
- Mill, John Stuart, "A System of Logic", University Press of the Pacific, Honolulu, 2002, ISBN 1-4102-0252-6.
- al-Battani, De Motu Stellarum translation from Arabic to Latin in 1116, as cited by "Battani, al-" (c.858-929) Encyclopaedia Britannica, 15th. ed. Al-Battani is known for his accurate observations at al-Raqqah in Syria, beginning in 877. His work includes measurement of the annual precession of the equinoxes.
- McElheny 2004 p.53: The weekend (January 31-February 1) after seeing photo 51, Watson informed Bragg of the X-ray diffraction image of DNA in B form. Bragg gave them permission to restart their research on DNA (that is, model building).
- McElheny 2004 p.54: On Sunday February 8, 1953, Maurice Wilkins gave Watson and Crick permission to work on models, as Wilkins would not be building models until Franklin left DNA research.
- McElheny 2004 p.56: Jerry Donohue, on sabbatical from Pauling's lab and visiting Cambridge, advises Watson that textbook form of the base pairs was incorrect for DNA base pairs; rather, the keto form of the base pairs should be used instead. This form allowed the bases' hydrogen bonds to pair 'unlike' with 'unlike', rather than to pair 'like' with 'like', as Watson was inclined to model, on the basis of the textbook statements. On February 27, 1953, Watson was convinced enough to make cardboard models of the nucleotides in their keto form.
- "Suddenly I became aware that an adenine-thymine pair held together by two hydrogen bonds was identical in shape to a guanine-cytosine pair held together by at least two hydrogen bonds. ..." —Watson 1968, pp. 194–197.
- McElheny 2004 p.57 Saturday, February 28, 1953: Watson tried 'like with like' and admitted that these base pairs didn't have hydrogen bonds that line up. But after trying 'unlike with unlike', and getting Jerry Donohue's approval, the base pairs turned out to be identical in shape (as Watson stated in his 1968 Double Helix memoir quoted above). Watson now felt confident enough to inform Crick. (Of course, 'unlike with unlike' increases the number of possible codons, if this scheme were a genetic code.)
- See, e.g., Physics Today, 59(1), p42. Richmann electrocuted in St. Petersburg (1753)
- Aristotle, "Prior Analytics", Hugh Tredennick (trans.), pp. 181-531 in Aristotle, Volume 1, Loeb Classical Library, William Heinemann, London, UK, 1938.
- "What one does not in the least doubt one should not pretend to doubt; but a man should train himself to doubt," said Peirce in a brief intellectual autobiography; see Ketner, Kenneth Laine (2009) "Charles Sanders Peirce: Interdisciplinary Scientist" in The Logic of Interdisciplinarity). Peirce held that actual, genuine doubt originates externally, usually in surprise, but also that it is to be sought and cultivated, "provided only that it be the weighty and noble metal itself, and no counterfeit nor paper substitute"; in "Issues of Pragmaticism", The Monist, v. XV, n. 4, pp. 481-99, see p. 484, and p. 491. (Reprinted in Collected Papers v. 5, paragraphs 438-63, see 443 and 451).
- Peirce (1898), "Philosophy and the Conduct of Life", Lecture 1 of the Cambridge (MA) Conferences Lectures, published in Collected Papers v. 1, paragraphs 616-48 in part and in Reasoning and the Logic of Things, Ketner (ed., intro.) and Putnam (intro., comm.), pp. 105-22, reprinted in Essential Peirce v. 2, pp. 27-41.
- " ... in order to learn, one must desire to learn ..."—Peirce (1899), "F.R.L." [First Rule of Logic], Collected Papers v. 1, paragraphs 135-40, Eprint
- Peirce (1877), "How to Make Our Ideas Clear", Popular Science Monthly, v. 12, pp. 286–302. Reprinted often, including Collected Papers v. 5, paragraphs 388–410, Essential Peirce v. 1, pp. 124–41. ArisbeEprint. Wikisource Eprint.
- Peirce (1868), "Some Consequences of Four Incapacities", Journal of Speculative Philosophy v. 2, n. 3, pp. 140–57. Reprinted Collected Papers v. 5, paragraphs 264–317, The Essential Peirce v. 1, pp. 28–55, and elsewhere. Arisbe Eprint
- Peirce (1878), "The Doctrine of Chances", Popular Science Monthly v. 12, pp. 604-15, see pp. 610-11 via Internet Archive. Reprinted Collected Papers v. 2, paragraphs 645-68, Essential Peirce v. 1, pp. 142-54. "...death makes the number of our risks, the number of our inferences, finite, and so makes their mean result uncertain. The very idea of probability and of reasoning rests on the assumption that this number is indefinitely great. .... ...logicality inexorably requires that our interests shall not be limited. .... Logic is rooted in the social principle."
- Peirce (c. 1906), "PAP (Prolegomena for an Apology to Pragmatism)" (Manuscript 293, not the like-named article), The New Elements of Mathematics (NEM) 4:319-320, see first quote under "Abduction" at Commens Dictionary of Peirce's Terms.
- Peirce, Carnegie application (L75, 1902), New Elements of Mathematics v. 4, pp. 37-38:
For it is not sufficient that a hypothesis should be a justifiable one. Any hypothesis which explains the facts is justified critically. But among justifiable hypotheses we have to select that one which is suitable for being tested by experiment.
- Peirce (1902), Carnegie application, see MS L75.329-330, from Draft D of Memoir 27:
Consequently, to discover is simply to expedite an event that would occur sooner or later, if we had not troubled ourselves to make the discovery. Consequently, the art of discovery is purely a question of economics. The economics of research is, so far as logic is concerned, the leading doctrine with reference to the art of discovery. Consequently, the conduct of abduction, which is chiefly a question of heuretic and is the first question of heuretic, is to be governed by economical considerations.
- Peirce (1903), "Pragmatism — The Logic of Abduction", Collected Papers v. 5, paragraphs 195-205, especially 196. Eprint.
- Peirce, "On the Logic of Drawing Ancient History from Documents", Essential Peirce v. 2, see pp. 107-9. On Twenty Questions, p. 109:
Thus, twenty skillful hypotheses will ascertain what 200,000 stupid ones might fail to do.
- Peirce (1878), "The Probability of Induction", Popular Science Monthly, v. 12, pp. 705-18, see 718 Google Books; 718 via Internet Archive. Reprinted often, including (Collected Papers v. 2, paragraphs 669-93), (The Essential Peirce v. 1, pp. 155-69).
- Peirce (1905 draft "G" of "A Neglected Argument"), "Crude, Quantitative, and Qualitative Induction", Collected Papers v. 2, paragraphs 755–760, see 759. Find under "Induction" at Commens Dictionary of Peirce's Terms.
- . Brown, C. (2005) Overcoming Barriers to Use of Promising Research Among Elite Middle East Policy Groups, Journal of Social Behaviour and Personality, Select Press.
- Hanson, Norwood (1958), Patterns of Discovery, Cambridge University Press, ISBN 0-521-05197-5
- Kuhn 1962, p. 113 ISBN 978-1-4432-5544-8
- Feyerabend, Paul K (1960) "Patterns of Discovery" The Philosophical Review (1960) vol. 69 (2) pp. 247-252
- Kuhn, Thomas S., "The Function of Measurement in Modern Physical Science", ISIS 52(2), 161–193, 1961.
- Feyerabend, Paul K., Against Method, Outline of an Anarchistic Theory of Knowledge, 1st published, 1975. Reprinted, Verso, London, UK, 1978.
- Higher Superstition: The Academic Left and Its Quarrels with Science, The Johns Hopkins University Press, 1997
- Fashionable Nonsense: Postmodern Intellectuals' Abuse of Science, Picador; 1st Picador USA Pbk. Ed edition, 1999
- The Sokal Hoax: The Sham That Shook the Academy, University of Nebraska Press, 2000 ISBN 0-8032-7995-7
- A House Built on Sand: Exposing Postmodernist Myths About Science, Oxford University Press, 2000
- Intellectual Impostures, Economist Books, 2003
- Dunbar, K., & Fugelsang, J. (2005). Causal thinking in science: How scientists and students interpret the unexpected. In M. E. Gorman, R. D. Tweney, D. Gooding & A. Kincannon (Eds.), Scientific and Technical Thinking (pp. 57-79). Mahwah, NJ: Lawrence Erlbaum Associates.
- Oliver, J.E. (1991) Ch2. of The incomplete guide to the art of discovery. New York:NY, Columbia University Press.
- Riccardo Pozzo (2004) The impact of Aristotelianism on modern philosophy. CUA Press. p.41. ISBN 0-8132-1347-9
- The ancient Egyptians observed that the heliacal rising of a certain star, Sothis (Greek for Sopdet (Egyptian), known to the West as Sirius), marked the annual flooding of the Nile river. See Neugebauer, Otto (1969), The Exact Sciences in Antiquity (2nd ed.), Dover Publications, ISBN 978-0-486-22332-2, p. 82, and also the 1911 Britannica, "Egypt".
- The Rhind papyrus lists practical examples in arithmetic and geometry —1911 Britannica, "Egypt".
- The Ebers papyrus lists some of the 'mysteries of the physician', as cited in the 1911 Britannica, "Egypt"
- R. L. Verma (1969). Al-Hazen: father of modern optics.
- Niccolò Leoniceno (1509), De Plinii et aliorum erroribus liber apud Ferrara, as cited by Sanches, Limbrick & Thomson 1988, p. 13
- 'I have sometimes seen a verbose quibbler attempting to persuade some ignorant person that white was black; to which the latter replied, "I do not understand your reasoning, since I have not studied as much as you have; yet I honestly believe that white differs from black. But pray go on refuting me for just as long as you like." '— Sanches, Limbrick & Thomson 1988, p. 276
- Sanches, Limbrick & Thomson 1988, p. 278.
- Bacon, Francis Novum Organum (The New Organon), 1620. Bacon's work described many of the accepted principles, underscoring the importance of empirical results, data gathering and experiment. Encyclopaedia Britannica (1911), "Bacon, Francis" states: [In Novum Organum, we ] "proceed to apply what is perhaps the most valuable part of the Baconian method, the process of exclusion or rejection. This elimination of the non-essential, ..., is the most important of Bacon's contributions to the logic of induction, and that in which, as he repeatedly says, his method differs from all previous philosophies."
- "John Stuart Mill (Stanford Encyclopedia of Philosophy)". plato.stanford.edu. Retrieved 2009-07-31.
- Gauch 2003, pp. 52–53
- George Sampson (1970). The concise Cambridge history of English literature. Cambridge University Press. p.174. ISBN 0-521-09581-6
- Logik der Forschung, new appendices *XVII–*XIX (not yet available in the English edition Logic of scientific discovery)
- Logic of Scientific discovery, p. 20
- Karl Popper: On the non-existence of scientific method. Realism and the Aim of Science (1983)
- Karl Popper: Science: Conjectures and Refutations. Conjectures and Refutations, section VII
- Karl Popper: On knowledge. In search of a better world, section II
- "The historian ... requires a very broad definition of "science" — one that ... will help us to understand the modern scientific enterprise. We need to be broad and inclusive, rather than narrow and exclusive ... and we should expect that the farther back we go [in time] the broader we will need to be." — David Pingree (1992), "Hellenophilia versus the History of Science" Isis 83 554-63, as cited on p.3, David C. Lindberg (2007), The beginnings of Western science: the European Scientific tradition in philosophical, religious, and institutional context, Second ed. Chicago: Univ. of Chicago Press ISBN 978-0-226-48205-7
- "When we are working intensively, we feel keenly the progress of our work; we are elated when our progress is rapid, we are depressed when it is slow." — the mathematician Pólya 1957, p. 131 in the section on 'Modern heuristic'.
- "Philosophy [i.e., physics] is written in this grand book--I mean the universe--which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth." —Galileo Galilei, Il Saggiatore (The Assayer, 1623), as translated by Stillman Drake (1957), Discoveries and Opinions of Galileo pp. 237-8, as quoted by di Francia 1981, p. 10.
- Pólya 1957 2nd ed.
- George Pólya (1954), Mathematics and Plausible Reasoning Volume I: Induction and Analogy in Mathematics,
- George Pólya (1954), Mathematics and Plausible Reasoning Volume II: Patterns of Plausible Reasoning.
- Pólya 1957, p. 142
- Pólya 1957, p. 144
- Mackay 1991 p.100
- See the development, by generations of mathematicians, of Euler's formula for polyhedra as documented by Lakatos, Imre (1976), Proofs and refutations, Cambridge: Cambridge University Press, ISBN 0-521-29038-4
- Born, Max (1949), Natural Philosophy of Cause and Chance, Peter Smith, also published by Dover, 1964. From the Waynflete Lectures, 1948. On the web. N.B.: the web version does not have the 3 addenda by Born, 1950, 1964, in which he notes that all knowledge is subjective. Born then proposes a solution in Appendix 3 (1964)
- Brody, Thomas A. (1993), The Philosophy Behind Physics, Springer Verlag, ISBN 0-387-55914-0. (Luis De La Peña and Peter E. Hodgson, eds.)
- Bruno, Leonard C. (1989), The Landmarks of Science, ISBN 0-8160-2137-6
- Bynum, W.F.; Porter, Roy (2005), Oxford Dictionary of Scientific Quotations, Oxford, ISBN 0-19-858409-1.
- di Francia, G. Toraldo (1981), The Investigation of the Physical World, Cambridge University Press, ISBN 0-521-29925-X.
- Einstein, Albert; Infeld, Leopold (1938), The Evolution of Physics: from early concepts to relativity and quanta, New York: Simon and Schuster, ISBN 0-671-20156-5
- Feynman, Richard (1965), The Character of Physical Law, Cambridge: M.I.T. Press, ISBN 0-262-56003-8.
- Fleck, Ludwik (1979), Genesis and Development of a Scientific Fact, Univ. of Chicago, ISBN 0-226-25325-2. (written in German, 1935, Entstehung und Entwickelung einer wissenschaftlichen Tatsache: Einführung in die Lehre vom Denkstil und Denkkollectiv) English translation, 1979
- Galileo (1638), Two New Sciences, Leiden: Lodewijk Elzevir, ISBN 0-486-60099-8 Translated from Italian to English in 1914 by Henry Crew and Alfonso de Salvio. Introduction by Antonio Favaro. xxv+300 pages, index. New York: Macmillan, with later reprintings by Dover.
- Gauch, Hugh G., Jr. (2003), Scientific Method in Practice, Cambridge University Press, ISBN 0-521-01708-4 435 pages
- Glen, William (ed.) (1994), The Mass-Extinction Debates: How Science Works in a Crisis, Stanford, CA: Stanford University Press, ISBN 0-8047-2285-4.
- Godfrey-Smith, Peter (2003), Theory and Reality: An introduction to the philosophy of science, University of Chicago Press, ISBN 0-226-30063-3.
- Goldhaber, Alfred Scharff; Nieto, Michael Martin (January–March 2010), "Photon and graviton mass limits", Rev. Mod. Phys. (American Physical Society) 82: 939, doi:10.1103/RevModPhys.82.939. pages 939-979.
- Jevons, William Stanley (1874), The Principles of Science: A Treatise on Logic and Scientific Method, Dover Publications, ISBN 1-4304-8775-5. 1877, 1879. Reprinted with a foreword by Ernst Nagel, New York, NY, 1958.
- Kuhn, Thomas S. (1962), The Structure of Scientific Revolutions, Chicago, IL: University of Chicago Press. 2nd edition 1970. 3rd edition 1996.
- Lindberg, David C. (2007), The Beginnings of Western Science, University of Chicago Press 2nd edition 2007.
- Mackay, Alan L. (ed.) (1991), Dictionary of Scientific Quotations, London: IOP Publishing Ltd, ISBN 0-7503-0106-6
- McElheny, Victor K. (2004), Watson & DNA: Making a scientific revolution, Basic Books, ISBN 0-7382-0866-3.
- Moulton, Forest Ray; Schifferes, Justus J. (eds., Second Edition) (1960), The Autobiography of Science, Doubleday.
- Needham, Joseph; Wang, Ling (王玲) (1954), Science and Civilisation in China, 1 Introductory Orientations, Cambridge University Press
- Newton, Isaac (1687, 1713, 1726), Philosophiae Naturalis Principia Mathematica, University of California Press, ISBN 0-520-08817-4, Third edition. From I. Bernard Cohen and Anne Whitman's 1999 translation, 974 pages.
- Ørsted, Hans Christian (1997), Selected Scientific Works of Hans Christian Ørsted, Princeton, ISBN 0-691-04334-5. Translated to English by Karen Jelved, Andrew D. Jackson, and Ole Knudsen, (translators 1997).
- Peirce, C. S. — see Charles Sanders Peirce bibliography.
- Poincaré, Henri (1905), Science and Hypothesis Eprint
- Pólya, George (1957), How to Solve It, Princeton University Press, ISBN 0-691-08097-6
- Popper, Karl R., The Logic of Scientific Discovery, 1934, 1959.
- Sambursky, Shmuel (ed.) (1974), Physical Thought from the Presocratics to the Quantum Physicists, Pica Press, ISBN 0-87663-712-8.
- Sanches, Francisco; Limbrick, Elaine. Introduction, Notes, and Bibliography; Thomson, Douglas F.S. Latin text established, annotated, and translated. (1988), That Nothing is Known, Cambridge: Cambridge University Press, ISBN 0-521-35077-8 Critical edition.
- Taleb, Nassim Nicholas (2007), The Black Swan, Random House, ISBN 978-1-4000-6351-2
- Watson, James D. (1968), The Double Helix, New York: Atheneum, Library of Congress card number 68-16217.
- Bauer, Henry H., Scientific Literacy and the Myth of the Scientific Method, University of Illinois Press, Champaign, IL, 1992
- Beveridge, William I. B., The Art of Scientific Investigation, Heinemann, Melbourne, Australia, 1950.
- Bernstein, Richard J., Beyond Objectivism and Relativism: Science, Hermeneutics, and Praxis, University of Pennsylvania Press, Philadelphia, PA, 1983.
- Brody, Baruch A. and Capaldi, Nicholas, Science: Men, Methods, Goals: A Reader: Methods of Physical Science, W. A. Benjamin, 1968
- Brody, Baruch A., and Grandy, Richard E., Readings in the Philosophy of Science, 2nd edition, Prentice Hall, Englewood Cliffs, NJ, 1989.
- Burks, Arthur W., Chance, Cause, Reason — An Inquiry into the Nature of Scientific Evidence, University of Chicago Press, Chicago, IL, 1977.
- Alan Chalmers. What is this thing called science?. Queensland University Press and Open University Press, 1976.
- Crick, Francis (1988), What Mad Pursuit: A Personal View of Scientific Discovery, New York: Basic Books, ISBN 0-465-09137-7.
- Dewey, John, How We Think, D.C. Heath, Lexington, MA, 1910. Reprinted, Prometheus Books, Buffalo, NY, 1991.
- Earman, John (ed.), Inference, Explanation, and Other Frustrations: Essays in the Philosophy of Science, University of California Press, Berkeley & Los Angeles, CA, 1992.
- Fraassen, Bas C. van, The Scientific Image, Oxford University Press, Oxford, UK, 1980.
- Franklin, James (2009), What Science Knows: And How It Knows It, New York: Encounter Books, ISBN 1-59403-207-6.
- Gadamer, Hans-Georg, Reason in the Age of Science, Frederick G. Lawrence (trans.), MIT Press, Cambridge, MA, 1981.
- Giere, Ronald N. (ed.), Cognitive Models of Science, vol. 15 in 'Minnesota Studies in the Philosophy of Science', University of Minnesota Press, Minneapolis, MN, 1992.
- Hacking, Ian, Representing and Intervening, Introductory Topics in the Philosophy of Natural Science, Cambridge University Press, Cambridge, UK, 1983.
- Heisenberg, Werner, Physics and Beyond, Encounters and Conversations, A.J. Pomerans (trans.), Harper and Row, New York, NY 1971, pp. 63–64.
- Holton, Gerald, Thematic Origins of Scientific Thought, Kepler to Einstein, 1st edition 1973, revised edition, Harvard University Press, Cambridge, MA, 1988.
- Kuhn, Thomas S., The Essential Tension, Selected Studies in Scientific Tradition and Change, University of Chicago Press, Chicago, IL, 1977.
- Latour, Bruno, Science in Action, How to Follow Scientists and Engineers through Society, Harvard University Press, Cambridge, MA, 1987.
- Losee, John, A Historical Introduction to the Philosophy of Science, Oxford University Press, Oxford, UK, 1972. 2nd edition, 1980.
- Maxwell, Nicholas, The Comprehensibility of the Universe: A New Conception of Science, Oxford University Press, Oxford, 1998. Paperback 2003.
- McCarty, Maclyn (1985), The Transforming Principle: Discovering that genes are made of DNA, New York: W. W. Norton, 252 pp., ISBN 0-393-30450-7. Memoir of a researcher in the Avery–MacLeod–McCarty experiment.
- McComas, William F., ed., from The Nature of Science in Science Education, pp. 53–70, Kluwer Academic Publishers, Netherlands, 1998.
- Misak, Cheryl J., Truth and the End of Inquiry, A Peircean Account of Truth, Oxford University Press, Oxford, UK, 1991.
- Piattelli-Palmarini, Massimo (ed.), Language and Learning, The Debate between Jean Piaget and Noam Chomsky, Harvard University Press, Cambridge, MA, 1980.
- Popper, Karl R., Unended Quest, An Intellectual Autobiography, Open Court, La Salle, IL, 1982.
- Putnam, Hilary, Renewing Philosophy, Harvard University Press, Cambridge, MA, 1992.
- Rorty, Richard, Philosophy and the Mirror of Nature, Princeton University Press, Princeton, NJ, 1979.
- Salmon, Wesley C., Four Decades of Scientific Explanation, University of Minnesota Press, Minneapolis, MN, 1990.
- Shimony, Abner, Search for a Naturalistic World View: Vol. 1, Scientific Method and Epistemology, Vol. 2, Natural Science and Metaphysics, Cambridge University Press, Cambridge, UK, 1993.
- Thagard, Paul, Conceptual Revolutions, Princeton University Press, Princeton, NJ, 1992.
- Ziman, John (2000). Real Science: what it is, and what it means. Cambridge, UK: Cambridge University Press.
Wikibooks has a book on the topic of The Scientific Method.
- Scientific method at PhilPapers
- Scientific method at the Indiana Philosophy Ontology Project
- An Introduction to Science: Scientific Thinking and a scientific method by Steven D. Schafersman.
- Introduction to the scientific method at the University of Rochester
- Theory-ladenness by Paul Newall at The Galilean Library
- Lecture on Scientific Method by Greg Anderson
- Using the scientific method for designing science fair projects
- SCIENTIFIC METHODS an online book by Richard D. Jarrard
- Richard Feynman on the Key to Science (one minute, three seconds), from the Cornell Lectures.
- Lectures on the Scientific Method by Nick Josh Karean, Kevin Padian, Michael Shermer and Richard Dawkins
How to Manage Pests
Pests in Gardens and Landscapes
White, flattened, pointed apple buds are overwintering sources of powdery mildew, Podosphaera leucotricha.
Powdery mildew is a common disease on many types of plants. Several powdery mildew fungi cause similar diseases on different plants (such as Podosphaera species on apple and stone fruits; Sphaerotheca species on berries and stone fruits; Erysiphe necator on grapevines, see Table 1). Powdery mildew fungi generally require moist conditions to release overwintering spores and for those spores to germinate and infect a plant. However, no moisture is needed for the fungus to establish itself and grow after infecting the plant. Powdery mildews normally do well in warm, Mediterranean-type climates. Thus powdery mildews are more prevalent than many other diseases in California’s dry summer and fall seasons.
The disease can be serious on woody plants such as grapevines, caneberries, and fruit trees where it attacks new growth including buds, shoots, and flowers as well as leaves. New growth is dwarfed, distorted, and covered with a white, powdery growth. On apple and grape and to a lesser extent apricot, nectarine, and peach, infected young fruits develop weblike, russetted scars. On tree fruits a rough corky spot on the skin will develop where infection occurred. Grapes with a severe infection may also crack or split and fail to grow and expand.
On strawberry, affected leaf edges curl upward. Infected leaves later develop dry, brownish patches along with nondescript patches of white powdery fungus on the lower surface and reddish discoloration on the upper surface. When foliage infections are severe, flowers and fruit may also be infected.
All powdery mildew fungi require living plant tissue to grow. On deciduous perennial hosts such as grapevine, raspberry, and fruit trees, powdery mildew survives from one season to the next in infected buds or as fruiting bodies called chasmothecia, which reside on the bark of cordons, branches, and stems. On strawberry the fungus can survive on leaves that remain on the plants through winter.
Most powdery mildew fungi grow as thin layers of mycelium on the surface of the affected plant part. Spores, which are the primary means of dispersal, make up the bulk of the powdery growth and are produced in chains that can be seen with a hand lens. In contrast, spores of downy mildew grow on branched stalks that look like tiny trees. Also downy mildew colonies are gray instead of white and occur mostly on the lower leaf surface.
Powdery mildew spores are carried by wind to host plants. Although humidity requirements for germination vary, many powdery mildew species can germinate and infect in the absence of water. In fact, spores of some powdery mildew fungi are killed and germination and mycelial growth are inhibited by water on plant surfaces. Moderate temperatures and shade are generally the most favorable conditions for powdery mildew development, since spores and mycelium are sensitive to extreme heat and direct sunlight.
The best method of powdery mildew control is prevention. Avoiding the most susceptible varieties and following good cultural practices will adequately control powdery mildew in many situations. However, where conditions are favorable, susceptible fruit trees and berries may require protection with fungicide sprays. Fungicide applications are most often needed on susceptible varieties of apple and on almost all grape and strawberry varieties.
Where possible, choose resistant varieties that meet your growing requirements and personal preferences. Be aware that control actions will probably be necessary when planting more susceptible varieties.
Apple. The most resistant varieties are Red Delicious and Stayman Winesap. Moderately susceptible varieties include Braeburn, Golden Delicious, Granny Smith, Jonagold, and McIntosh. The most susceptible varieties include Gravenstein, Jonathan, Rome Beauty, and Yellow Newtown.
Caneberries. Blackberry is not affected by powdery mildew. Resistant raspberry varieties include Chief, Marcy, Malling Orion; the variety Logan is immune. Highly susceptible raspberry varieties include Glen Clova, Latham, Ottawa, and Viking.
Cherry. The most susceptible varieties are Bing, Black Tartarian, and Rainier.
Grapevines. Most varieties are susceptible.
Nectarine. Most varieties are susceptible.
Peach. Freestone varieties such as Crest, Flame Crest, Flavor Crest, and O’Henry are less susceptible than varieties such as Elegant Lady, Fairtime, Fay Elberta, and Summerset.
Plum. Some highly susceptible varieties of plum that may need protection are Black Beaut, Gaviota, Kelsey, and Wickson.
Strawberry. Day-neutral (everbearing) varieties such as Fern, Seascape, Sequoia, and Yolo are more susceptible than short-day varieties (those that fruit in May and June only) such as Chandler.
Shade and moderate temperatures favor most powdery mildews. Plant in sunny areas as much as possible, provide good air circulation, and avoid applying excess fertilizer. A good alternative is to use a slow-release fertilizer. Long duration overhead sprinkling may actually reduce active powdery mildew infections because spores are washed off the plant. However, spores can be disseminated in water to new noninfected leaves if watered only briefly.
As new shoots begin to develop on perennial plants, watch closely for the appearance of powdery mildew. Where infection is limited, prune out and bury or discard diseased tissue as soon as it appears. Infected tissue can be recognized because young emerging leaves are deformed or puckered; soon after emergence, infected leaves begin to show white mycelial growth on the leaf surface. This combination of symptoms is characteristic of early-season mildew onset. If powdery mildew has been present during the season on woody species, prune out infected tissue during the dormant season.
Prune grapevines during dormancy and position shoots during the growing season to allow exposure of fruit to sunlight and good air flow through the canopy. Pruning and training are also helpful in controlling Botrytis bunch rot.
Because one common powdery mildew fungus, Sphaerotheca pannosa, often spreads disease from roses to stone fruits, try to avoid planting apricot or plum trees near highly susceptible rose bushes. If roses are nearby and can’t be removed, control powdery mildew infections on them.
On apple trees, look carefully for infected shoots and buds in the dormant season and remove them. Infected buds are flattened or shriveled compared to normal buds. The buds and infected shoots have a thin layer of fuzzy white fungus on their surfaces that usually is easy to see. Where practical, remove and dispose of overwintering leaves on strawberry plants that are infected. If raspberry canes develop powdery mildew, remove the canes down to the roots during the dormant season. Infected canes of berries and grapevines have distinctive weblike russetting. Remove infected prunings from the garden area and destroy them.
Where powdery mildew has been a problem in the past, fungicides may be needed. Fungicides function as protectants, eradicants, or both. A protectant fungicide can only prevent a new infection from occurring, but an eradicant will kill an existing infection. Apply protectant fungicides to highly susceptible plants before the disease appears. Eradicants should be used at the earliest appearance of the disease. Once mildew growth is extensive, control with fungicides becomes more difficult.
Fungicides. Several least-toxic fungicides are available for backyard trees and vines, including horticultural oils, neem oil, jojoba oil, sulfur, and the biological fungicide Serenade. With the exception of the oils, these materials are primarily preventive. Oils work best as eradicants but also work as good protectants. The fungicides listed here are registered for home use. Commercial growers should consult the UC IPM Pest Management Guidelines for fungicides for commercial use.
Oils. To eradicate powdery mildew infections, use a horticultural oil such as Saf-T-Side Spray Oil, Sunspray Ultra-Fine Spray Oil or one of the plant-based oils such as neem oil (such as Green Light Neem Concentrate) or jojoba oil (such as E-rase). Be careful, however, never to apply an oil spray within 2 weeks of a sulfur spray or plants may be injured. Some plants may be more sensitive than others, however, and the interval required between sulfur and oil sprays may be even longer; always consult the fungicide label for any special precautions. Also, oils should never be applied when temperatures are above 90°F or to drought-stressed plants. Horticultural oils and neem and jojoba oils are registered on a wide variety of crops.
Sulfur. Sulfur products have been used to manage powdery mildew for centuries but are only effective when applied before disease symptoms appear. The best sulfur products to use for powdery mildew control in gardens are wettable sulfurs that are specially formulated with surfactants similar to those in dishwashing detergent (such as Safer Garden Fungicide). To avoid injury to the plant or tree, sulfurs should not be applied within 2 weeks of an oil spray, used on any plant when the temperature is near or over 90°F (80°F for caneberries and strawberry), and never applied at any temperature to apricot trees.
Biological Fungicides. Biological fungicides (Serenade) are commercially available beneficial microorganisms formulated into a product that, when sprayed on the plant, inhibit or destroy fungal pathogens. The active ingredient in Serenade is a bacterium, Bacillus subtilis, that helps prevent the powdery mildew from infecting the plant. While this product functions to kill the powdery mildew organism and is nontoxic to people, pets, and beneficial insects, it has not proven to be as effective as the oils or sulfur in controlling this disease.
How to Use. Apply protectant fungicides to susceptible plants before disease develops. Once mildew growth is mild to moderate, it is generally too late for protectant fungicides to control powdery mildew effectively, except on new plant growth. Protectant fungicides are only effective on contact, so applications must provide thorough coverage of all susceptible plant parts. As plants grow and produce new tissue, additional applications may be necessary at 7- to 10-day intervals as long as conditions are conducive to disease growth. On highly susceptible plants, sulfur can be applied early in the season, when temperatures are below 90°F, with a switch to other materials as the season progresses. However, using oil, which is both a protectant and an eradicant, for the early sprays provides the best control.
If mild to moderate powdery mildew symptoms are present, the horticultural oils and plant-based oils such as neem oil and jojoba oil can be used.
Caneberries and Grapevines. Dormant or delayed dormant sulfur sprays can be used as a preventive measure before canes begin to grow in spring. Fungicides registered for use on caneberries include wettable sulfur and oils including neem oil. Don’t apply sulfur when temperatures exceed 90°F.
Strawberry. Treat as soon as symptoms appear. Be sure to spray both upper and lower leaf surfaces. It may help to remove and destroy affected leaves before treating the rest of the planting. Materials registered to control powdery mildew include sulfur and oils. The sulfur treatments also reduce mite populations, but don’t apply sulfur when temperatures exceed 80°F because it damages foliage and fruit.
Apple and Stone Fruit. Sprays are not necessary in many backyard situations. However, if you have had serious powdery mildew damage in past years, treat at 2-week intervals, beginning when buds just start to open (green tip stage), until small, green fruit are present. (Caution: Do not use sulfur on apricot trees.) Sulfur, horticultural oils, neem oil, and Serenade are all registered for powdery mildew on backyard trees.
Grapevines. Powdery mildew is a perennial problem in grapevines. Begin applying treatments when all buds have pushed. Thereafter, repeat at 10-day intervals if disease pressure is high; otherwise, intervals can be extended, particularly when temperatures are above 90°F. Continue until the sugar content of the grapes reaches 12 to 15%, at which point the berries begin to soften, approach ripeness, and are no longer susceptible to infection. You can measure the sugar content with a refractometer, if you have access to one, or you can see whether sample berries sink in a 15% sucrose solution. (Prepare the sucrose solution by dissolving 8-1/2 teaspoons of table sugar in a half cup of warm water, then mixing in enough cold water to make the total volume 1 cup.) Sulfur, horticultural oils, neem oil, jojoba oil, and Serenade are registered for controlling powdery mildew in home vineyards.
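For readers who want to mix a batch of the test solution in a size other than one cup, the sketch below scales the recipe. It is not part of the original guideline; the cup-to-millilitre and teaspoon-to-gram conversions are approximate assumptions, and the 15% is treated as weight per volume.

```python
# A small scaling helper for the sucrose test solution described above.
# Assumed conversions (not from the original guideline): 1 US cup ~ 237 mL and
# 1 level teaspoon of granulated sugar ~ 4.2 g; the 15% is treated as weight/volume.

CUP_ML = 237.0          # assumed millilitres in one US cup
SUGAR_G_PER_TSP = 4.2   # assumed grams of table sugar per level teaspoon

def sugar_for_solution(final_volume_cups, percent=15.0):
    """Return (grams, teaspoons) of sugar so that the finished solution of the
    given volume is roughly `percent` percent sugar by weight per volume."""
    volume_ml = final_volume_cups * CUP_ML
    grams = volume_ml * percent / 100.0
    return grams, grams / SUGAR_G_PER_TSP

grams, teaspoons = sugar_for_solution(1.0)
print(f"about {grams:.0f} g of sugar ({teaspoons:.1f} teaspoons) per cup of finished solution")
# Prints roughly 36 g, or about 8.5 teaspoons, consistent with the recipe above.
```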
Authors: W. D. Gubler, Plant Pathology, UC Davis; S. T. Koike, UC Cooperative Extension, Monterey County
Produced by UC Statewide IPM Program, University of California, Davis, CA 95616
http://ipm.ucdavis.edu/PMG/PESTNOTES/pn7494.html
Spanish conquest of Guatemala
The Spanish conquest of Guatemala was a protracted conflict during the Spanish colonization of the Americas, in which Spanish colonisers gradually incorporated the territory that became the modern country of Guatemala into the colonial Viceroyalty of New Spain. Before the conquest, this territory contained a number of competing Mesoamerican kingdoms, the majority of which were Maya. Many conquistadors viewed the Maya as "infidels" who needed to be forcefully converted and pacified, disregarding the achievements of their civilization. The first contact between the Maya and European explorers came in the early 16th century when a Spanish ship sailing from Panama to Santo Domingo was wrecked on the east coast of the Yucatán Peninsula in 1511. Several Spanish expeditions followed in 1517 and 1519, making landfall on various parts of the Yucatán coast. The Spanish conquest of the Maya was a prolonged affair; the Maya kingdoms resisted integration into the Spanish Empire with such tenacity that their defeat took almost two centuries.
Pedro de Alvarado arrived in Guatemala from the newly conquered Mexico in early 1524, commanding a mixed force of Spanish conquistadors and native allies, mostly from Tlaxcala and Cholula. Geographic features across Guatemala now bear Nahuatl placenames owing to the influence of these Mexican allies, who translated for the Spanish. The Kaqchikel Maya initially allied themselves with the Spanish, but soon rebelled against excessive demands for tribute and did not finally surrender until 1530. In the meantime the other major highland Maya kingdoms had each been defeated in turn by the Spanish and allied warriors from Mexico and already subjugated Maya kingdoms in Guatemala. The Itza Maya and other lowland groups in the Petén Basin were first contacted by Hernán Cortés in 1525, but remained independent and hostile to the encroaching Spanish until 1697, when a concerted Spanish assault led by Martín de Urzúa y Arizmendi finally defeated the last independent Maya kingdom.
Spanish and native tactics and technology differed greatly. The Spanish viewed the taking of prisoners as a hindrance to outright victory, whereas the Maya prioritised the capture of live prisoners and of booty. The indigenous peoples of Guatemala lacked key elements of Old World technology such as a functional wheel, horses, steel, and gunpowder; they were also extremely susceptible to Old World diseases, against which they had no resistance. The Maya preferred raiding and ambush to large-scale warfare, using spears, arrows and wooden swords with inset obsidian blades; the Xinca of the southern coastal plain used poison on their arrows. In response to the use of Spanish cavalry, the highland Maya took to digging pits and lining them with wooden stakes.
Historical sources
The sources describing the Spanish conquest of Guatemala include those written by the Spanish themselves, among them two of four letters written by conquistador Pedro de Alvarado to Hernán Cortés in 1524, describing the initial campaign to subjugate the Guatemalan Highlands. These letters were despatched to Tenochtitlan, addressed to Cortés but with a royal audience in mind; two of these letters are now lost. Gonzalo de Alvarado y Chávez was Pedro de Alvarado's cousin; he accompanied him on his first campaign in Guatemala and in 1525 he became the chief constable of Santiago de los Caballeros de Guatemala, the newly founded Spanish capital. Gonzalo wrote an account that mostly supports that of Pedro de Alvarado. Pedro de Alvarado's brother Jorge wrote another account to the king of Spain that explained it was his own campaign of 1527–1529 that established the Spanish colony. Bernal Díaz del Castillo wrote a lengthy account of the conquest of Mexico and neighbouring regions, the Historia verdadera de la conquista de la Nueva España ("True History of the Conquest of New Spain"); his account of the conquest of Guatemala generally agrees with that of the Alvarados. His account was finished around 1568, some 40 years after the campaigns it describes. Hernán Cortés described his expedition to Honduras in the fifth letter of his Cartas de Relación, in which he details his crossing of what is now Guatemala's Petén Department. Dominican friar Bartolomé de las Casas wrote a highly critical account of the Spanish conquest of the Americas and included accounts of some incidents in Guatemala. The Brevísima Relación de la Destrucción de las Indias ("Short Account of the Destruction of the Indies") was first published in 1552 in Seville.
The Tlaxcalan allies of the Spanish who accompanied them in their invasion of Guatemala wrote their own accounts of the conquest; these included a letter to the Spanish king protesting at their poor treatment once the campaign was over. Other accounts were in the form of questionnaires answered before colonial magistrates to protest and register a claim for recompense. Two pictorial accounts painted in the stylised indigenous pictographic tradition have survived; these are the Lienzo de Quauhquechollan, which was probably painted in Ciudad Vieja in the 1530s, and the Lienzo de Tlaxcala, painted in Tlaxcala.
Accounts of the conquest as seen from the point of view of the defeated highland Maya kingdoms are included in a number of indigenous documents, including the Annals of the Kaqchikels, which includes the Xajil Chronicle describing the history of the Kaqchikel from their mythical creation down through the Spanish conquest and continuing to 1619. A letter from the defeated Tz'utujil Maya nobility of Santiago Atitlán to the Spanish king written in 1571 details the exploitation of the subjugated peoples.
Francisco Antonio de Fuentes y Guzmán was a colonial Guatemalan historian of Spanish descent who wrote La Recordación Florida, also called Historia de Guatemala (History of Guatemala). The book, written in 1690, is regarded as one of the most important works of Guatemalan history and was the first such work written by a criollo author. Field investigation has tended to support the estimates of indigenous population and army sizes given by Fuentes y Guzmán.
Background to the conquest
Christopher Columbus discovered the New World for the Kingdom of Castile and Leon in 1492. Private adventurers thereafter entered into contracts with the Spanish Crown to conquer the newly discovered lands in return for tax revenues and the power to rule. In the first decades after the discovery of the new lands, the Spanish colonised the Caribbean and established a centre of operations on the island of Cuba. They heard rumours of the rich empire of the Aztecs on the mainland to the west and, in 1519, Hernán Cortés set sail with eleven ships to explore the Mexican coast. By August 1521 the Aztec capital of Tenochtitlan had fallen to the Spanish. A single soldier arriving in Mexico in 1520 was carrying smallpox and thus initiated the devastating plagues that swept through the native populations of the Americas. Within three years of the fall of Tenochtitlan the Spanish had conquered a large part of Mexico, extending as far south as the Isthmus of Tehuantepec. The newly conquered territory became New Spain, headed by a viceroy who answered to the king of Spain via the Council of the Indies. Hernán Cortés received reports of rich, populated lands to the south and despatched Pedro de Alvarado to investigate the region.
Preparations for conquest
In the run-up to the announcement that an invasion force was to be sent to Guatemala, ten thousand Nahua warriors had already been assembled by the Aztec emperor Cuauhtémoc to accompany the Spanish expedition. Warriors were ordered to be gathered from each of the Mexica and Tlaxcaltec towns. The native warriors supplied their own weapons, including swords, clubs and bows and arrows. Alvarado's army left Tenochtitlan at the beginning of the dry season, sometime between the second half of November and December 1523. As Alvarado left the Aztec capital, he led about 400 Spaniards and approximately 200 Tlaxcaltec and Cholultec warriors and 100 Mexica, meeting up with the gathered reinforcements on the way. When the army left the Basin of Mexico, it may have included as many as 20,000 native warriors from various kingdoms, although the exact numbers are disputed. By the time the army crossed the Isthmus of Tehuantepec, the massed native warriors included 800 from Tlaxcala, 400 from Huejotzingo and 1,600 from Tepeaca, plus many more from other former Aztec territories. Further Mesoamerican warriors were recruited from the Zapotec and Mixtec provinces, with the addition of more Nahuas from the Aztec garrison in Soconusco.
Guatemala before the conquest
In the early 16th century the territory that now makes up Guatemala was divided into various competing polities, each locked in continual struggle with its neighbours. The most important were the K'iche', the Kaqchikel, the Tz'utujil, the Chajoma, the Mam, the Poqomam and the Pipil. All were Maya groups except for the Pipil, who were a Nahua group related to the Aztecs; the Pipil had a number of small city-states along the Pacific coastal plain of southern Guatemala and El Salvador. The Pipil of Guatemala had their capital at Itzcuintepec. The Xinca were another non-Maya group occupying the southeastern Pacific coastal area. The Maya had never been unified as a single empire, but by the time the Spanish arrived Maya civilization was thousands of years old and had already seen the rise and fall of great cities.
On the eve of the conquest the highlands of Guatemala were dominated by several powerful Maya states. In the centuries preceding the arrival of the Spanish the K'iche' had carved out a small empire covering a large part of the western Guatemalan Highlands and the neighbouring Pacific coastal plain. However, in the late 15th century the Kaqchikel rebelled against their former K'iche' allies and founded a new kingdom to the southeast with Iximche as its capital. In the decades before the Spanish invasion the Kaqchikel kingdom had been steadily eroding the kingdom of the K'iche'. Other highland groups included the Tz'utujil around Lake Atitlán, the Mam in the western highlands and the Poqomam in the eastern highlands.
The kingdom of the Itza was the most powerful polity in the Petén lowlands of northern Guatemala, centred on their capital Nojpetén, on an island in Lake Petén Itzá.[nb 1] The second polity in importance was that of their hostile neighbours, the Kowoj. The Kowoj were located to the east of the Itza, around the eastern lakes: Lake Salpetén, Lake Macanché, Lake Yaxhá and Lake Sacnab. Other groups are less well known and their precise territorial extent and political makeup remains obscure; among them were the Chinamita, the Kejache, the Icaiche, the Lacandon, the Mopan, the Manche Ch'ol and the Yalain. The Kejache occupied an area north of the lake on the route to Campeche, while the Mopan and the Chinamita had their polities in the southeastern Petén. The Manche territory was to the southwest of the Mopan. The Yalain had their territory immediately to the east of Lake Petén Itzá.
Native weapons and tactics
Maya warfare was aimed not so much at the destruction of the enemy as at the seizure of captives and plunder. The Spanish described the weapons of war of the Petén Maya as bows and arrows, fire-sharpened poles, flint-headed spears and two-handed swords crafted from strong wood with the blade fashioned from inset obsidian, similar to the Aztec macuahuitl. Pedro de Alvarado described how the Xinca of the Pacific coast attacked the Spanish with spears, stakes and poisoned arrows. Maya warriors wore body armour in the form of quilted cotton that had been soaked in salt water to toughen it; the resulting armour compared favourably to the steel armour worn by the Spanish. The Maya had historically favoured ambush and raiding, and the use of these tactics against the Spanish proved troublesome for the Europeans. In response to the use of cavalry, the highland Maya took to digging pits on the roads, lining them with fire-hardened stakes and camouflaging them with grass and weeds, a tactic that according to the Kaqchikel killed many horses.
The conquistadors were all volunteers, the majority of whom did not receive a fixed salary but instead a portion of the spoils of victory, in the form of precious metals, land grants and provision of native labour. Many of the Spanish were already experienced soldiers who had previously campaigned in Europe. The initial incursion into Guatemala was led by Pedro de Alvarado, who earned the military title of Adelantado in 1527; he answered to the Spanish crown via Hernán Cortés in Mexico. Other early conquistadors included Pedro de Alvarado's brothers Gómez de Alvarado, Jorge de Alvarado and Gonzalo de Alvarado y Contreras; and his cousins Gonzalo de Alvarado y Chávez, Hernando de Alvarado and Diego de Alvarado. Pedro de Portocarrero was a nobleman who joined the initial invasion. Bernal Díaz del Castillo was a petty nobleman who accompanied Hernán Cortés when he crossed the northern lowlands, and Pedro de Alvarado on his invasion of the highlands. In addition to Spaniards, the invasion force probably included dozens of armed African slaves and freemen.
Spanish weapons and tactics
Spanish weaponry and tactics differed greatly from those of the indigenous peoples of Guatemala. The differences included the Spanish use of crossbows, firearms (including muskets and cannon), war dogs and war horses. Among Mesoamerican peoples the capture of prisoners was a priority, while to the Spanish the taking of prisoners was a hindrance to outright victory. The inhabitants of Guatemala, for all their sophistication, lacked key elements of Old World technology, such as the use of iron and steel and functional wheels. The use of steel swords was perhaps the greatest technological advantage held by the Spanish, although the deployment of cavalry helped them to rout indigenous armies on occasion. The Spanish were sufficiently impressed by the quilted cotton armour of their Maya enemies that they adopted it in preference to their own steel armour. The conquistadors applied a more effective military organisation and strategic awareness than their opponents, allowing them to deploy troops and supplies in a way that increased the Spanish advantage.
In Guatemala the Spanish routinely fielded indigenous allies; at first these were Nahua brought from the recently conquered Mexico, later they also included Maya. It is estimated that for every Spaniard on the field of battle, there were at least 10 native auxiliaries. Sometimes there were as many as 30 indigenous warriors for every Spaniard, and it was the participation of these Mesoamerican allies that was particularly decisive.
The Spanish engaged in a strategy of concentrating native populations in newly founded colonial towns, or reducciones (also known as congregaciones). Native resistance to the new nucleated settlements took the form of the flight of the indigenous inhabitants into inaccessible regions such as mountains and forests.
Impact of Old World diseases
Epidemics accidentally introduced by the Spanish included smallpox, measles and influenza. These diseases, together with typhus and yellow fever, had a major impact on Maya populations. The Old World diseases brought with the Spanish and against which the indigenous New World peoples had no resistance were a deciding factor in the conquest; the diseases crippled armies and decimated populations before battles were even fought. Their introduction was catastrophic in the Americas; it is estimated that 90% of the indigenous population had been eliminated by disease within the first century of European contact.
In 1519 and 1520, before the arrival of the Spanish in the region, a number of epidemics swept through southern Guatemala. At the same time as the Spanish were occupied with the overthrow of the Aztec empire, a devastating plague struck the Kaqchikel capital of Iximche, and the city of Q'umarkaj, capital of the K'iche', may also have suffered from the same epidemic. It is likely that the same combination of smallpox and a pulmonary plague swept across the entire Guatemalan Highlands. Modern knowledge of the impact of these diseases on populations with no prior exposure suggests that 33–50% of the population of the highlands perished. Population levels in the Guatemalan Highlands did not recover to their pre-conquest levels until the middle of the 20th century. In 1666 pestilence or murine typhus swept through what is now the department of Huehuetenango. Smallpox was reported in San Pedro Saloma, in 1795. At the time of the fall of Nojpetén in 1697, there are estimated to have been 60,000 Maya living around Lake Petén Itzá, including a large number of refugees from other areas. It is estimated that 88% of them died during the first ten years of colonial rule owing to a combination of disease and war.
Timeline of the conquest
| Date | Event | Modern department (or Mexican state) |
| --- | --- | --- |
| 1521 | Conquest of Tenochtitlan | Mexico |
| 1522 | Spanish allies scout Soconusco and receive delegations from the K'iche' and Kaqchikel | Chiapas, Mexico |
| 1523 | Pedro de Alvarado arrives in Soconusco | Chiapas, Mexico |
| February – March 1524 | Spanish defeat the K'iche' | Retalhuleu, Suchitepéquez, Quetzaltenango, Totonicapán and El Quiché |
| 8 February 1524 | Battle of Zapotitlán, Spanish victory over the K'iche' | Suchitepéquez |
| 12 February 1524 | First battle of Quetzaltenango results in the death of the K'iche' lord Tecun Uman | Quetzaltenango |
| 18 February 1524 | Second battle of Quetzaltenango | Quetzaltenango |
| March 1524 | Spanish under Pedro de Alvarado raze Q'umarkaj, capital of the K'iche' | El Quiché |
| 14 April 1524 | Spanish enter Iximche and ally themselves with the Kaqchikel | Chimaltenango |
| 18 April 1524 | Spanish defeat the Tz'utujil in battle on the shores of Lake Atitlán | Sololá |
| 9 May 1524 | Pedro de Alvarado defeats the Pipil of Panacal or Panacaltepeque near Izcuintepeque | Escuintla |
| 26 May 1524 | Pedro de Alvarado defeats the Xinca of Atiquipaque | Santa Rosa |
| 27 July 1524 | Iximche declared first colonial capital of Guatemala | Chimaltenango |
| 28 August 1524 | Kaqchikel abandon Iximche and break alliance | Chimaltenango |
| 7 September 1524 | Spanish declare war on the Kaqchikel | Chimaltenango |
| 1525 | The Poqomam capital falls to Pedro de Alvarado | Guatemala |
| 13 March 1525 | Hernán Cortés arrives at Lake Petén Itzá | Petén |
| October 1525 | Zaculeu, capital of the Mam, surrenders to Gonzalo de Alvarado y Contreras after a lengthy siege | Huehuetenango |
| 1526 | Chajoma rebel against the Spanish | Guatemala |
| 1526 | Acasaguastlán given in encomienda to Diego Salvatierra | El Progreso |
| 1526 | Spanish captains sent by Alvarado conquer Chiquimula | Chiquimula |
| 9 February 1526 | Spanish deserters burn Iximche | Chimaltenango |
| 1527 | Spanish abandon their capital at Tecpán Guatemala | Chimaltenango |
| 1529 | San Mateo Ixtatán given in encomienda to Gonzalo de Ovalle | Huehuetenango |
| September 1529 | Spanish routed at Uspantán | El Quiché |
| April 1530 | Rebellion in Chiquimula put down | Chiquimula |
| 9 May 1530 | Kaqchikel surrender to the Spanish | Sacatepéquez |
| December 1530 | Ixil and Uspantek surrender to the Spanish | El Quiché |
| April 1533 | Juan de León y Cardona founds San Marcos and San Pedro Sacatepéquez | San Marcos |
| 1543 | Foundation of Cobán | Alta Verapaz |
| 1549 | First reductions of the Chuj and Q'anjob'al | Huehuetenango |
| 1551 | Corregimiento of San Cristóbal Acasaguastlán established | El Progreso, Zacapa and Baja Verapaz |
| 1555 | Lowland Maya kill Francisco de Vico | Alta Verapaz |
| 1560 | Reduction of Topiltepeque and Lacandon Ch'ol | Alta Verapaz |
| 1618 | Franciscan missionaries arrive at Nojpetén, capital of the Itzá | Petén |
| 1619 | Further missionary expeditions to Nojpetén | Petén |
| 1684 | Reduction of San Mateo Ixtatán and Santa Eulalia | Huehuetenango |
| 29 January 1686 | Melchor Rodríguez Mazariegos leaves Huehuetenango, leading an expedition against the Lacandón | Huehuetenango |
| 1695 | Franciscan friar Andrés de Avendaño attempts to convert the Itzá | Petén |
| 28 February 1695 | Spanish expeditions leave simultaneously from Cobán, San Mateo Ixtatán and Ocosingo against the Lacandón | Alta Verapaz, Huehuetenango and Chiapas |
| 1696 | Andrés de Avendaño forced to flee Nojpetén | Petén |
| 13 March 1697 | Nojpetén falls to the Spanish after a fierce battle | Petén |
Conquest of the highlands
The conquest of the highlands was made difficult by the many independent polities in the region, rather than one powerful enemy to be defeated as was the case in central Mexico. After the Aztec capital Tenochtitlan fell to the Spanish in 1521, the Kaqchikel Maya of Iximche sent envoys to Hernán Cortés to declare their allegiance to the new ruler of Mexico, and the K'iche' Maya of Q'umarkaj may also have sent a delegation. In 1522 Cortés sent Mexican allies to scout the Soconusco region of lowland Chiapas, where they met new delegations from Iximche and Q'umarkaj at Tuxpán; both of the powerful highland Maya kingdoms declared their loyalty to the king of Spain. But Cortés' allies in Soconusco soon informed him that the K'iche' and the Kaqchikel were not loyal, and were instead harassing Spain's allies in the region. Cortés decided to despatch Pedro de Alvarado with 180 cavalry, 300 infantry, crossbows, muskets, 4 cannons, large amounts of ammunition and gunpowder, and thousands of allied Mexican warriors from Tlaxcala, Cholula and other cities in central Mexico; they arrived in Soconusco in 1523. Pedro de Alvarado was infamous for the massacre of Aztec nobles in Tenochtitlan and, according to Bartolomé de las Casas, he committed further atrocities in the conquest of the Maya kingdoms in Guatemala. Some groups remained loyal to the Spanish once they had submitted to the conquest, such as the Tz'utujil and the K'iche' of Quetzaltenango, and provided them with warriors to assist further conquest. Other groups soon rebelled however, and by 1526 numerous rebellions had engulfed the highlands.
Subjugation of the K'iche'
Pedro de Alvarado and his army advanced along the Pacific coast unopposed until they reached the Samalá River in western Guatemala. This region formed a part of the K'iche' kingdom, and a K'iche' army tried unsuccessfully to prevent the Spanish from crossing the river. Once across, the conquistadors ransacked nearby settlements in an effort to terrorise the K'iche'. On 8 February 1524 Alvarado's army fought a battle at Xetulul, called Zapotitlán by his Mexican allies (modern San Francisco Zapotitlán). Although suffering many injuries inflicted by defending K'iche' archers, the Spanish and their allies stormed the town and set up camp in the marketplace. Alvarado then turned to head upriver into the Sierra Madre mountains towards the K'iche' heartlands, crossing the pass into the fertile valley of Quetzaltenango. On 12 February 1524 Alvarado's Mexican allies were ambushed in the pass and driven back by K'iche' warriors but the Spanish cavalry charge that followed was a shock for the K'iche', who had never before seen horses. The cavalry scattered the K'iche' and the army crossed to the city of Xelaju (modern Quetzaltenango) only to find it deserted. Although the common view is that the K'iche' prince Tecun Uman died in the later battle near Olintepeque, the Spanish accounts are clear that at least one and possibly two of the lords of Q'umarkaj died in the fierce battles upon the initial approach to Quetzaltenango. The death of Tecun Uman is said to have taken place in the battle of El Pinar, and local tradition has his death taking place on the Llanos de Urbina (Plains of Urbina), upon the approach to Quetzaltenango near the modern village of Cantel. Pedro de Alvarado, in his third letter to Hernán Cortés, describes the death of one of the four lords of Q'umarkaj upon the approach to Quetzaltenango. The letter was dated 11 April 1524 and was written during his stay at Q'umarkaj. Almost a week later, on 18 February 1524, a K'iche' army confronted the Spanish army in the Quetzaltenango valley and were comprehensively defeated; many K'iche' nobles were among the dead. Such were the numbers of K'iche' dead that Olintepeque was given the name Xequiquel, roughly meaning "bathed in blood". In the early 17th century, the grandson of the K'iche' king informed the alcalde mayor (the highest colonial official at the time) that the K'iche' army that had marched out of Q'umarkaj to confront the invaders numbered 30,000 warriors, a claim that is considered credible by modern scholars. This battle exhausted the K'iche' militarily and they asked for peace and offered tribute, inviting Pedro de Alvarado into their capital Q'umarkaj, which was known as Tecpan Utatlan to the Nahuatl-speaking allies of the Spanish. Alvarado was deeply suspicious of the K'iche' intentions but accepted the offer and marched to Q'umarkaj with his army.
The day after the battle of Olintepeque, the Spanish army arrived at Tzakahá, which submitted peacefully. There the Spanish chaplains Juan Godinez and Juan Díaz conducted a Roman Catholic mass under a makeshift roof; this site was chosen to build the first church in Guatemala, which was dedicated to Concepción La Conquistadora. Tzakahá was renamed as San Luis Salcajá. The first Easter mass held in Guatemala was celebrated in the new church, during which high-ranking natives were baptised.
In March 1524 Pedro de Alvarado entered Q'umarkaj at the invitation of the remaining lords of the K'iche' after their catastrophic defeat, although he feared he was walking into a trap. He encamped on the plain outside the city rather than accepting lodgings inside. Fearing the great number of K'iche' warriors gathered outside the city, and that his cavalry would not be able to manoeuvre in the narrow streets of Q'umarkaj, he invited the leading lords of the city, Oxib-Keh (the ajpop, or king) and Beleheb-Tzy (the ajpop k'amha, or king elect), to visit him in his camp. As soon as they did so, he seized them and kept them as prisoners in his camp. The K'iche' warriors, seeing their lords taken prisoner, attacked the Spaniards' indigenous allies and managed to kill one of the Spanish soldiers. At this point Alvarado decided to have the captured K'iche' lords burnt to death, and then proceeded to burn the entire city. After the destruction of Q'umarkaj and the execution of its rulers, Pedro de Alvarado sent messages to Iximche, capital of the Kaqchikel, proposing an alliance against the remaining K'iche' resistance. Alvarado wrote that the Kaqchikel sent 4,000 warriors to assist him, although the Kaqchikel themselves recorded that they sent only 400.
San Marcos: Province of Tecusitlán and Lacandón
With the capitulation of the K'iche' kingdom, various non-K'iche' peoples under K'iche' dominion also submitted to the Spanish. This included the Mam inhabitants of the area now within the modern department of San Marcos. Quetzaltenango and San Marcos were placed under the command of Juan de León y Cardona, who began the reduction of indigenous populations and the foundation of Spanish towns. The towns of San Marcos and San Pedro Sacatepéquez were founded soon after the conquest of western Guatemala. In 1533 Pedro de Alvarado ordered de León y Cardona to explore and conquer the area around the Tacaná, Tajumulco, Lacandón and San Antonio volcanoes; in colonial times this area was referred to as the Province of Tecusitlán and Lacandón. De León marched to a Maya city named Quezalli by his Nahuatl-speaking allies with a force of fifty Spaniards; his Mexican allies also referred to the city by the name Sacatepequez. De León renamed the city as San Pedro Sacatepéquez in honour of his friar, Pedro de Angulo. The Spanish founded a village nearby at Candacuchex in April that year, renaming it as San Marcos.
Kaqchikel alliance
On 14 April 1524, soon after the defeat of the K'iche', the Spanish were invited into Iximche and were well received by the lords Belehe Qat and Cahi Imox.[nb 3] The Kaqchikel kings provided native soldiers to assist the conquistadors against continuing K'iche' resistance and to help with the defeat of the neighbouring Tz'utuhil kingdom. The Spanish only stayed briefly in Iximche before continuing through Atitlán, Escuintla and Cuscatlán. The Spanish returned to the Kaqchikel capital on 23 July 1524 and on 27 July (1 Q'at in the Kaqchikel calendar) Pedro de Alvarado declared Iximche as the first capital of Guatemala, Santiago de los Caballeros de Guatemala ("St. James of the Knights of Guatemala"). Iximche was called Guatemala by the Spanish, from the Nahuatl Quauhtemallan meaning "forested land". Since the Spanish conquistadors founded their first capital at Iximche, they took the name of the city used by their Nahuatl-speaking Mexican allies and applied it to the new Spanish city and, by extension, to the kingdom. From this comes the modern name of the country. When Pedro de Alvarado moved his army to Iximche, he left the defeated K'iche' kingdom under the command of Juan de León y Cardona. Although de León y Cardona was given command of the western reaches of the new colony, he continued to take an active role in the continuing conquest, including the later assault on the Poqomam capital.
Conquest of the Tz'utujil
The Kaqchikel appear to have entered into an alliance with the Spanish to defeat their enemies, the Tz'utujil, whose capital was Tecpan Atitlan. At the request of the Kaqchikel lords, Pedro de Alvarado sent two Kaqchikel messengers to Tecpan Atitlan; both messengers were killed by the Tz'utujil. When news of the killing of the messengers reached the Spanish at Iximche, the conquistadors marched against the Tz'utujil with their Kaqchikel allies. Pedro de Alvarado left Iximche just 5 days after he had arrived there, with 60 cavalry, 150 Spanish infantry and an unspecified number of Kaqchikel warriors. The Spanish and their allies arrived at the lakeshore after a day's hard march, without encountering any opposition. Seeing the lack of resistance, Alvarado rode ahead with 30 cavalry along the lake shore. Opposite a populated island the Spanish at last encountered hostile Tz'utujil warriors and charged among them, scattering and pursuing them to a narrow causeway across which the surviving Tz'utujil fled. The causeway was too narrow for the horses, so the conquistadors dismounted and crossed to the island before the inhabitants could break the bridges. The rest of Alvarado's army soon reinforced his party and they successfully stormed the island. The surviving Tz'utujil fled into the lake and swam to safety on another island. The Spanish could not pursue the survivors further because 300 canoes sent by the Kaqchikels had not yet arrived. This battle took place on 18 April.
The following day the Spanish entered Tecpan Atitlan but found it deserted. Pedro de Alvarado camped in the centre of the city and sent out scouts to find the enemy. They managed to catch some locals and used them to send messages to the Tz'utujil lords, ordering them to submit to the king of Spain. The Tz'utujil leaders responded by surrendering to Pedro de Alvarado and swearing loyalty to Spain, at which point Alvarado considered them pacified and returned to Iximche. Three days after Pedro de Alvarado returned to Iximche, the lords of the Tz'utujil arrived there to pledge their loyalty and offer tribute to the conquistadors. A short time afterwards a number of lords arrived from the Pacific lowlands to swear allegiance to the king of Spain, although Alvarado did not name them in his letters; they confirmed Kaqchikel reports that further out on the Pacific plain was the kingdom called Izcuintepeque in Nahuatl, or Panatacat in Kaqchikel, whose inhabitants were warlike and hostile towards their neighbours.
Kaqchikel rebellion
Pedro de Alvarado rapidly began to demand gold in tribute from the Kaqchikels, souring the friendship between the two peoples. He demanded that their kings deliver 1000 gold leaves, each worth 15 pesos.[nb 4]
A Kaqchikel priest foretold that the Kaqchikel gods would destroy the Spanish, causing the Kaqchikel people to abandon their city and flee to the forests and hills on 28 August 1524 (7 Ahmak in the Kaqchikel calendar). Ten days later the Spanish declared war on the Kaqchikel. Two years later, on 9 February 1526, a group of sixteen Spanish deserters burnt the palace of the Ahpo Xahil, sacked the temples and kidnapped a priest, acts that the Kaqchikel blamed on Pedro de Alvarado.[nb 5] Conquistador Bernal Díaz del Castillo recounted how in 1526 he returned to Iximche and spent the night in the "old city of Guatemala" together with Luis Marín and other members of Hernán Cortés's expedition to Honduras. He reported that the houses of the city were still in excellent condition; his account was the last description of the city while it was still inhabitable.
The Spanish founded a new town at nearby Tecpán Guatemala; Tecpán is Nahuatl for "palace", thus the name of the new town translated as "the palace among the trees". The Spanish abandoned Tecpán in 1527, because of the continuous Kaqchikel attacks, and moved to the Almolonga Valley to the east, refounding their capital on the site of today's San Miguel Escobar district of Ciudad Vieja, near Antigua Guatemala. The Nahua and Oaxacan allies of the Spanish settled in what is now central Ciudad Vieja, then known as Almolonga (not to be confused with Almolonga near Quetzaltenango); Zapotec and Mixtec allies also settled San Gaspar Vivar about 2 kilometres (1.2 mi) northeast of Almolonga, which they founded in 1530.
The Kaqchikel kept up resistance against the Spanish for a number of years, but on 9 May 1530, exhausted by the warfare that had seen the deaths of their best warriors and the enforced abandonment of their crops, the two kings of the most important clans returned from the wilds. A day later they were joined by many nobles and their families and many more people; they then surrendered at the new Spanish capital at Ciudad Vieja. The former inhabitants of Iximche were dispersed; some were moved to Tecpán, the rest to Sololá and other towns around Lake Atitlán.
Siege of Zaculeu
Although a state of hostilities had existed between the Mam and the K'iche' of Q'umarkaj since the rebellion of the Kaqchikel against their former K'iche' allies prior to European contact, when the conquistadors arrived there was a shift in the political landscape. Pedro de Alvarado described how the Mam king Kayb'il B'alam was received with great honour in Q'umarkaj while he was there. The expedition against Zaculeu was apparently prompted by K'iche' bitterness at their failure to trap the Spanish at Q'umarkaj: the plan to catch the conquistadors in the city had been suggested to them by the Mam king, Kayb'il B'alam, and the resulting execution of the K'iche' kings was viewed as unjust. The K'iche' suggestion of marching on the Mam was quickly taken up by the Spanish.
At the time of the conquest, the main Mam population was situated in Xinabahul (also spelled Chinabjul), now the city of Huehuetenango, but Zaculeu's fortifications led to its use as a refuge during the conquest. The refuge was attacked by Gonzalo de Alvarado y Contreras, brother of conquistador Pedro de Alvarado, in 1525, with 40 Spanish cavalry and 80 Spanish infantry, and some 2,000 Mexican and K'iche' allies. Gonzalo de Alvarado left the Spanish camp at Tecpán Guatemala in July 1525 and marched to the town of Totonicapán, which he used as a supply base. From Totonicapán the expedition headed north to Momostenango, although it was delayed by heavy rains. Momostenango quickly fell to the Spanish after a four-hour battle. The following day Gonzalo de Alvarado marched on Huehuetenango and was confronted by a Mam army of 5,000 warriors from nearby Malacatán (modern Malacatancito). The Mam army advanced across the plain in battle formation and was met by a Spanish cavalry charge that threw them into disarray, with the infantry mopping up those Mam that survived the cavalry. Gonzalo de Alvarado slew the Mam leader Canil Acab with his lance, at which point the Mam army's resistance was broken, and the surviving warriors fled to the hills. Alvarado entered Malacatán unopposed to find it occupied only by the sick and the elderly. Messengers from the community's leaders arrived from the hills and offered their unconditional surrender, which was accepted by Alvarado. The Spanish army rested for a few days, then continued onwards to Huehuetenango only to find it deserted. Kayb'il B'alam had received news of the Spanish advance and had withdrawn to his fortress at Zaculeu. Alvarado sent a message to Zaculeu proposing terms for the peaceful surrender of the Mam king, who chose not to answer.
Zaculeu was defended by Kayb'il B'alam commanding some 6,000 warriors gathered from Huehuetenango, Zaculeu, Cuilco and Ixtahuacán. The fortress was surrounded on three sides by deep ravines and defended by a formidable system of walls and ditches. Gonzalo de Alvarado, although outnumbered two to one, decided to launch an assault on the weaker northern entrance. Mam warriors initially held the northern approaches against the Spanish infantry but fell back before repeated cavalry charges. The Mam defence was reinforced by an estimated 2,000 warriors from within Zaculeu but was unable to push the Spanish back. Kayb'il B'alam, seeing that outright victory on an open battlefield was impossible, withdrew his army back within the safety of the walls. As Alvarado dug in and laid siege to the fortress, an army of approximately 8,000 Mam warriors descended on Zaculeu from the Cuchumatanes mountains to the north, drawn from those towns allied with the city. Alvarado left Antonio de Salazar to supervise the siege and marched north to confront the Mam army. The Mam army was disorganised, and although it was a match for the Spanish and allied foot soldiers, it was vulnerable to the repeated charges of the experienced Spanish cavalry. The relief army was broken and annihilated, allowing Alvarado to return to reinforce the siege. After several months the Mam were reduced to starvation. Kayb'il B'alam finally surrendered the city to the Spanish in the middle of October 1525. When the Spanish entered the city they found 1,800 dead Indians, and the survivors eating the corpses of the dead. After the fall of Zaculeu, a Spanish garrison was established at Huehuetenango under the command of Gonzalo de Solís; Gonzalo de Alvarado returned to Tecpán Guatemala to report his victory to his brother.
Conquest of the Poqomam
In 1525 Pedro de Alvarado sent a small company to conquer Mixco Viejo (Chinautla Viejo), the capital of the Poqomam.[nb 6] At the Spanish approach, the inhabitants remained enclosed in the fortified city. The Spanish attempted an approach from the west through a narrow pass but were forced back with heavy losses. Alvarado himself launched the second assault with 200 Tlaxcalan allies but was also beaten back. The Poqomam then received reinforcements, possibly from Chinautla, and the two armies clashed on open ground outside of the city. The battle was chaotic and lasted for most of the day but was finally decided by the Spanish cavalry, forcing the Poqomam reinforcements to withdraw. The leaders of the reinforcements surrendered to the Spanish three days after their retreat and revealed that the city had a secret entrance in the form of a cave leading up from a nearby river, allowing the inhabitants to come and go.
Armed with the knowledge gained from their prisoners, Alvarado sent 40 men to cover the exit from the cave and launched another assault along the ravine from the west, in single file owing to its narrowness, with crossbowmen alternating with soldiers bearing muskets, each with a companion sheltering him from arrows and stones with a shield. This tactic allowed the Spanish to break through the pass and storm the entrance of the city. The Poqomam warriors fell back in disorder in a chaotic retreat through the city and were hunted down by the victorious conquistadors and their allies. Those who managed to retreat down the neighbouring valley were ambushed by Spanish cavalry who had been posted to block the exit from the cave; the survivors were captured and brought back to the city. The siege had lasted more than a month and, because of the defensive strength of the city, Alvarado ordered it to be burned and moved the inhabitants to the new colonial village of Mixco.
Resettlement of the Chajoma
There are no direct sources describing the conquest of the Chajoma by the Spanish but it appears to have been a drawn-out campaign rather than a rapid victory. The only description of the conquest of the Chajoma is a secondary account appearing in the work of Francisco Antonio de Fuentes y Guzmán in the 17th century, long after the event. After the conquest, the inhabitants of the eastern part of the kingdom were relocated by the conquerors to San Pedro Sacatepéquez, including some of the inhabitants of the archaeological site now known as Mixco Viejo (Jilotepeque Viejo).[nb 6] The rest of the population of Mixco Viejo, together with the inhabitants of the western part of the kingdom, were moved to San Martín Jilotepeque. The Chajoma rebelled against the Spanish in 1526, fighting a battle at Ukub'il, an unidentified site somewhere near the modern towns of San Juan Sacatepéquez and San Pedro Sacatepéquez.
In the colonial period, most of the surviving Chajoma were forcibly settled in the towns of San Juan Sacatepéquez, San Pedro Sacatepéquez and San Martín Jilotepeque as a result of the Spanish policy of congregaciones; the people were moved to whichever of the three towns was closest to their pre-conquest land holdings. Some Iximche Kaqchikels seem also to have been relocated to the same towns. After their relocation some of the Chajoma drifted back to their pre-conquest centres, creating informal settlements and provoking hostilities with the Poqomam of Mixco and Chinautla along the former border between the pre-Columbian kingdoms. Some of these settlements eventually received official recognition, such as San Raimundo near Sacul.
El Progreso and Zacapa
The Spanish colonial corregimiento of San Cristóbal Acasaguastlán was established in 1551 with its seat in the town of that name, now in the eastern portion of the modern department of El Progreso. Owing to the arid climate, Acasaguastlán was one of the few pre-conquest centres of population in the middle Motagua River drainage. The corregimiento covered a broad area that included Cubulco, Rabinal, and Salamá (all in Baja Verapaz), San Agustín de la Real Corona (modern San Agustín Acasaguastlán) and La Magdalena in El Progreso, and Chimalapa, Gualán, Usumatlán and Zacapa, all in the department of Zacapa. Chimalapa, Gualán and Usumatlán were all satellite settlements of Acasaguastlán. San Cristóbal Acasaguastlán and the surrounding area were reduced into colonial settlements by friars of the Dominican Order; at the time of the conquest the area was inhabited by Poqomchi' Maya and by the Nahuatl-speaking Pipil. In the 1520s, immediately after the conquest, the inhabitants paid taxes to the Spanish Crown in the form of cacao, textiles, gold, silver and slaves. Within a few decades taxes were instead paid in beans, cotton and maize. Acasaguastlán was first given in encomienda to the conquistador Diego Salvatierra in 1526.
Chiquimula de la Sierra ("Chiquimula in the Highlands"), occupying the area of the modern department of Chiquimula to the east of the Poqomam and Chajoma, was inhabited by Ch'orti' Maya at the time of the conquest. The first Spanish reconnaissance of this region was carried out in 1524 by an expedition that included Hernando de Chávez, Juan Durán, Bartolomé Becerra and Cristóbal Salvatierra, amongst others. In 1526 three Spanish captains, Juan Pérez Dardón, Sancho de Barahona and Bartolomé Becerra, invaded Chiquimula on the orders of Pedro de Alvarado. The indigenous population soon rebelled against excessive Spanish demands, but the rebellion was quickly put down in April 1530. However, the region was not considered fully conquered until a campaign by Jorge de Bocanegra in 1531–1532 that also took in parts of Jalapa. The afflictions of Old World diseases, war and overwork in the mines and encomiendas took such a heavy toll on the inhabitants of eastern Guatemala that the indigenous population never recovered to its pre-conquest level.
Campaigns in the Cuchumatanes
In the ten years after the fall of Zaculeu various Spanish expeditions crossed into the Sierra de los Cuchumatanes and engaged in the gradual and complex conquest of the Chuj and Q'anjob'al. The Spanish were attracted to the region in the hope of extracting gold, silver and other riches from the mountains but their remoteness, the difficult terrain and relatively low population made their conquest and exploitation extremely difficult. The population of the Cuchumatanes is estimated to have been 260,000 before European contact. By the time the Spanish physically arrived in the region this had collapsed to 150,000 because of the effects of the Old World diseases that had run ahead of them.
Uspantán and the Ixil
After the western portion of the Cuchumatanes fell to the Spanish, the Ixil and Uspantek Maya were sufficiently isolated to evade immediate Spanish attention. The Uspantek and the Ixil were allies and in 1529, four years after the conquest of Huehuetenango, Uspantek warriors were harassing Spanish forces and Uspantán was trying to foment rebellion among the K'iche'. Uspantek activity became sufficiently troublesome that the Spanish decided that military action was necessary. Gaspar Arias, magistrate of Guatemala, penetrated the eastern Cuchumatanes with sixty Spanish infantry and three hundred allied indigenous warriors. By early September he had imposed temporary Spanish authority over the Ixil towns of Chajul and Nebaj. The Spanish army then marched east toward Uspantán itself; Arias then received notice that the acting governor of Guatemala, Francisco de Orduña, had deposed him as magistrate. Arias handed command over to the inexperienced Pedro de Olmos and returned to confront de Orduña. Although his officers advised against it, Olmos launched a disastrous full-scale frontal assault on the city. As soon as the Spanish began their assault they were ambushed from the rear by more than two thousand Uspantek warriors. The Spanish forces were routed with heavy losses; many of their indigenous allies were slain, and many more were captured alive by the Uspantek warriors only to be sacrificed on the altar of their deity Exbalamquen. The survivors who managed to evade capture fought their way back to the Spanish garrison at Q'umarkaj.
A year later Francisco de Castellanos set out from Santiago de los Caballeros de Guatemala (by now relocated to Ciudad Vieja) on another expedition against the Ixil and Uspantek, leading eight corporals, thirty-two cavalry, forty Spanish infantry and several hundred allied indigenous warriors. The expedition rested at Chichicastenango and recruited further forces before marching seven leagues northwards to Sacapulas and climbing the steep southern slopes of the Cuchumatanes. On the upper slopes they clashed with a force of between four and five thousand Ixil warriors from Nebaj and nearby settlements. A lengthy battle followed, during which the Spanish cavalry managed to outflank the Ixil army and force it to retreat to its mountaintop fortress at Nebaj. The Spanish force besieged the city, and their indigenous allies managed to scale the walls, penetrate the stronghold and set it on fire. Many defending Ixil warriors withdrew to fight the fire, which allowed the Spanish to storm the entrance and break the defences. The victorious Spanish rounded up the surviving defenders, and the next day Castellanos ordered them all to be branded as slaves as punishment for their resistance. The inhabitants of Chajul capitulated to the Spanish as soon as news of the battle reached them. The Spanish continued east towards Uspantán, only to find it defended by ten thousand warriors, including forces from Cotzal, Cunén, Sacapulas and Verapaz. The Spaniards were barely able to organise a defence before the defending army attacked. Although the Spanish were heavily outnumbered, the deployment of their cavalry and the firearms of their infantry eventually decided the battle. The Spanish overran Uspantán and again branded all surviving warriors as slaves. The surrounding towns also surrendered, and December 1530 marked the end of the military stage of the conquest of the Cuchumatanes.
Reduction of the Chuj and Q'anjob'al
In 1529 the Chuj city of San Mateo Ixtatán (then known by the name of Ystapalapán) was given in encomienda to the conquistador Gonzalo de Ovalle, a companion of Pedro de Alvarado, together with Santa Eulalia and Jacaltenango. In 1549 the first reduction (reducción in Spanish) of San Mateo Ixtatán took place, overseen by Dominican missionaries; in the same year the Q'anjob'al reducción settlement of Santa Eulalia was founded. Further Q'anjob'al reducciones were in place at San Pedro Soloma, San Juan Ixcoy and San Miguel Acatán by 1560. Q'anjob'al resistance was largely passive, based on withdrawal from the Spanish reducciones to the inaccessible mountains and forests. In 1586 the Mercedarian Order built the first church in Santa Eulalia. The Chuj of San Mateo Ixtatán remained rebellious and resisted Spanish control for longer than their highland neighbours, a resistance made possible by their alliance with the lowland Lacandon to the north. The continued resistance was so determined that the Chuj remained pacified only while the immediate effects of the Spanish expeditions lasted.
In the late 17th century, the Spanish missionary Fray Alonso de León reported that about eighty families in San Mateo Ixtatán did not pay tribute to the Spanish Crown or attend the Roman Catholic mass. He described the inhabitants as quarrelsome and complained that they had built a pagan shrine in the hills among the ruins of pre-Columbian temples, where they burnt incense and offerings and sacrificed turkeys. He reported that every March they built bonfires around wooden crosses about two leagues from the town and set them on fire. Fray de León informed the colonial authorities that the practices of the natives were such that they were Christian in name only. Eventually, Fray de León was chased out of San Mateo Ixtatán by the locals.
In 1684, a council led by Enrique Enríquez de Guzmán, the governor of Guatemala, decided on the reduction of San Mateo Ixtatán and nearby Santa Eulalia, both within the colonial administrative district of the Corregimiento of Huehuetenango.
On 29 January 1686, Captain Melchor Rodríguez Mazariegos, acting under orders from the governor, left Huehuetenango for San Mateo Ixtatán, where he recruited indigenous warriors from the nearby villages, 61 from San Mateo itself. It was believed by the Spanish colonial authorities that the inhabitants of San Mateo Ixtatán were friendly towards the still unconquered and fiercely hostile inhabitants of the Lacandon region, which included parts of what is now the Mexican state of Chiapas and the western part of the Petén Basin. To prevent news of the Spanish advance reaching the inhabitants of the Lacandon area, the governor ordered the capture of three of San Mateo's community leaders, named as Cristóbal Domingo, Alonso Delgado and Gaspar Jorge, and had them sent under guard to be imprisoned in Huehuetenango. The governor himself arrived in San Mateo Ixtatán on 3 February, where Captain Rodríguez Mazariegos was already awaiting him. The governor ordered the captain to remain in the village and use it as a base of operations for penetrating the Lacandon region. The Spanish missionaries Fray de Rivas and Fray Pedro de la Concepción also remained in the town. Governor Enriquez de Guzmán subsequently left San Mateo Ixtatán for Comitán in Chiapas, to enter the Lacandon region via Ocosingo.
In 1695, a three-way invasion of the Lacandon was launched simultaneously from San Mateo Ixtatán, Cobán and Ocosingo. Captain Rodriguez Mazariegos, accompanied by Fray de Rivas and 6 other missionaries together with 50 Spanish soldiers, left Huehuetenango for San Mateo Ixtatán. Following the same route used in 1686, they managed on the way to recruit 200 indigenous Maya warriors from Santa Eulalia, San Juan Solomá and San Mateo itself. On 28 February 1695, all three groups left their respective bases of operations to conquer the Lacandon. The San Mateo group headed northeast into the Lacandon Jungle.
Pacific lowlands: Pipil and Xinca
Before the arrival of the Spanish, the western portion of the Pacific plain was dominated by the K'iche' and Kaqchikel states, while the eastern portion was occupied by the Pipil and the Xinca. The Pipil inhabited the area of the modern department of Escuintla and a part of Jutiapa; the main Xinca territory lay to the east of the main Pipil population in what is now Santa Rosa department; there were also Xinca in Jutiapa.
In the half century preceding the arrival of the Spanish, the Kaqchikel were frequently at war with the Pipil of Izcuintepeque (modern Escuintla). By March 1524 the K'iche had been defeated, followed by a Spanish alliance with the Kaqchikel in April of the same year. On 8 May 1524, soon after his arrival in Iximche and immediately following his subsequent conquest of the Tz'utujil around Lake Atitlán, Pedro de Alvarado continued southwards to the Pacific coastal plain with an army numbering approximately 6000,[nb 7] where he defeated the Pipil of Panacal or Panacaltepeque (called Panatacat in the Annals of the Kaqchikels) near Izcuintepeque on 9 May. Alvarado described the terrain approaching the town as very difficult, covered with dense vegetation and swampland that made the use of cavalry impossible; instead he sent men with crossbows ahead. The Pipil withdrew their scouts because of the heavy rain, believing that the Spanish and their allies would not be able to reach the town that day. However, Pedro de Alvarado pressed ahead and when the Spanish entered the town the defenders were completely unprepared, with the Pipil warriors indoors sheltering from the torrential rain. In the battle that ensued, the Spanish and their indigenous allies suffered minor losses but the Pipil were able to flee into the forest, sheltered from Spanish pursuit by the weather and the vegetation. Pedro de Alvarado ordered the town to be burnt and sent messengers to the Pipil lords demanding their surrender, otherwise he would lay waste to their lands. According to Alvarado's letter to Cortés, the Pipil came back to the town and submitted to him, accepting the king of Spain as their overlord. The Spanish force camped in the captured town for eight days. A few years later, in 1529, Pedro de Alvarado was accused of using excessive brutality in his conquest of Izcuintepeque, amongst other atrocities.
In Guazacapán, now a municipality in Santa Rosa, Pedro de Alvarado described his encounter with people who were neither Maya nor Pipil and who spoke a different language altogether; these people were probably Xinca. At this point Alvarado's force consisted of 250 Spanish infantry accompanied by 6,000 indigenous allies, mostly Kaqchikel and Cholutec. Alvarado and his army defeated and occupied the most important Xinca city, named as Atiquipaque, usually considered to be in the Taxisco area. The defending warriors were described by Alvarado as engaging in fierce hand-to-hand combat using spears, stakes and poisoned arrows. The battle took place on 26 May 1524 and resulted in a significant reduction of the Xinca population. Alvarado's army continued eastwards from Atiquipaque, seizing several more Xinca cities. Tacuilula feigned a peaceful reception, only to raise arms unsuccessfully against the conquistadors within an hour of their arrival. Taxisco and Nancintla fell soon afterwards. Because Alvarado and his allies could not understand the Xinca language, Alvarado took extra precautions on the march eastward by strengthening his vanguard and rearguard with ten cavalry apiece. In spite of these precautions the baggage train was ambushed by a Xinca army soon after leaving Taxisco. Many indigenous allies were killed and most of the baggage was lost, including all the crossbows and ironwork for the horses. This was a serious setback, and Alvarado camped his army in Nancintla for eight days, during which time he sent two expeditions against the attacking army. Jorge de Alvarado led the first attempt with thirty to forty cavalry and, although they routed the enemy, they were unable to retrieve any of the lost baggage, much of which had been destroyed by the Xinca for use as trophies. Pedro de Portocarrero led the second attempt with a large infantry detachment but was unable to engage the enemy because of the difficult mountain terrain, so he returned to Nancintla. Alvarado sent out Xinca messengers to make contact with the enemy, but they failed to return. Messengers from the city of Pazaco, in the modern department of Jutiapa, offered peace to the conquistadors, but when Alvarado arrived there the next day the inhabitants were preparing for war. Alvarado's troops encountered a sizeable force of gathered warriors and quickly routed them through the city's streets. From Pazaco Alvarado crossed the Río Paz and entered what is now El Salvador.
Northern lowlands
The Contact Period in Guatemala's northern Petén lowlands lasted from 1525 through to 1700. Superior Spanish weaponry and the use of cavalry, although decisive in the northern Yucatán, were ill-suited to warfare in the dense forests of lowland Guatemala.
Cortés in Petén
In 1525, after the Spanish conquest of the Aztec Empire, Hernán Cortés led an expedition to Honduras over land, cutting across the Itza kingdom in what is now the northern Petén Department of Guatemala. His aim was to subdue the rebellious Cristóbal de Olid, whom he had sent to conquer Honduras, but Cristóbal de Olid had set himself up independently on his arrival in that territory. Cortés had 140 Spanish soldiers, 93 of them mounted, 3,000 Mexican warriors, 150 horses, a herd of pigs, artillery, munitions and other supplies. He also had with him 600 Chontal Maya carriers from Acalan. They arrived at the north shore of Lake Petén Itzá on 13 March 1525.
Cortés accepted an invitation from Aj Kan Ek', the king of the Itza, to visit Nojpetén (also known as Tayasal), and crossed to the Maya city with 20 Spanish soldiers while the rest of his army continued around the lake to meet him on the south shore. On his departure from Nojpetén, Cortés left behind a cross and a lame horse. The Spanish did not officially contact the Itza again until the arrival of Franciscan priests in 1618, when Cortés' cross was said to still be standing at Nojpetén. From the lake, Cortés continued south along the western slopes of the Maya Mountains, a particularly arduous journey that took 12 days to cover 32 kilometres (20 mi), during which he lost more than two-thirds of his horses. When he came to a river swollen with the constant torrential rains that had been falling during the expedition, Cortés turned upstream to the Gracias a Dios rapids, which took two days to cross and cost him more horses.
On 15 April 1525 the expedition arrived at the Maya village of Tenciz. With local guides they headed into the hills north of Lake Izabal, where their guides abandoned them to their fate. The expedition became lost in the hills and came close to starvation before they captured a Maya boy who led them out to safety. Cortés found a village on the shore of Lake Izabal, perhaps Xocolo. He crossed the Dulce River to the settlement of Nito, somewhere on the Amatique Bay, with about a dozen companions, and waited there for the rest of his army to regroup over the course of the next week. By this time the remnants of the expedition had been reduced to a few hundred; Cortés succeeded in contacting the Spaniards he was searching for, only to find that Cristóbal de Olid's own officers had already put down his rebellion. Cortés then returned to Mexico by sea.
Land of War: Verapaz
By 1537 the area immediately north of the new colony of Guatemala was being referred to as the Tierra de Guerra ("Land of War").[nb 8] Paradoxically, it was simultaneously known as Verapaz ("True Peace"). The Land of War described an area that was undergoing conquest; it was a region of dense forest that was difficult for the Spanish to penetrate militarily. Whenever the Spanish located a centre of population in this region, the inhabitants were moved and concentrated in a new colonial settlement near the edge of the jungle where the Spanish could more easily control them. This strategy resulted in the gradual depopulation of the forest, simultaneously converting it into a wilderness refuge for those fleeing Spanish domination, both individual refugees and entire communities, especially those congregaciones that were remote from centres of colonial authority. The Land of War, from the 16th century through to the start of the 18th century, included a vast area from Sacapulas in the west to Nito on the Caribbean coast, extending northwards from Rabinal and Salamá, and was an intermediate zone between the highlands and the northern lowlands. It included the modern departments of Baja Verapaz and Alta Verapaz, Izabal and Petén, as well as the eastern part of El Quiché and a part of the Mexican state of Chiapas. The western portion of this area was the territory of the Q'eqchi' Maya.
Pedro Orozco,[nb 9] the leader of the Sacatepéquez Mam of San Marcos department, lent willing help to the Dominicans in their campaign to peacefully subject the inhabitants of Verapaz. On 1 May 1543 Carlos V rewarded the Sacatepéquez Mam by issuing a royal order promising never to give them in encomienda.
Dominican friar Bartolomé de las Casas arrived in the colony of Guatemala in 1537 and immediately campaigned to replace violent military conquest with peaceful missionary work. Las Casas offered to achieve the conquest of the Land of War through the preaching of the Catholic faith. It was the Dominicans who promoted the use of the name Verapaz instead of the Land of War. Because the land had proved impossible to conquer by military means, the governor of Guatemala, Alonso de Maldonado, agreed to sign a contract promising not to establish any new encomiendas in the area should Las Casas' strategy succeed. Las Casas and a group of Dominican friars established themselves in Rabinal, Sacapulas and Cobán, and managed to convert several native chiefs using a strategy of teaching Christian songs to Christian Indian merchants who then ventured into the area.
In this way they congregated a group of Christian Indians in the location of what is now the town of Rabinal. Las Casas became instrumental in the introduction of the New Laws in 1542, established by the Spanish Crown to control the excesses of the conquistadors and colonists against the indigenous inhabitants of the Americas. As a result the Dominicans met substantial resistance from the Spanish colonists, who saw their own interests threatened by the New Laws; this distracted the Dominicans from their efforts to establish peaceful control over the Land of War.
In 1543 the new colonial reducción of Santo Domingo de Cobán was founded at Chi Mon'a to house the relocated Q'eqchi' from Chichen, Xucaneb and Al Run Tax Aj. Santo Tomás Apóstol was founded nearby the same year at Chi Nim Xol; in 1560 it was used as a reducción to resettle Ch'ol communities from Topiltepeque and Lacandon in the Usumacinta Valley. In 1555 the Acala and their Lacandon allies killed the Spanish friar Domingo de Vico. De Vico had established a small church among the inhabitants of San Marcos, a region that lay between the territories of the Lacandon and the Manche Ch'ol (an area unrelated to the department of San Marcos). De Vico had offended the local ruler by repeatedly scolding him for taking several wives. The indigenous leader shot the friar through the throat with an arrow; the angry natives then seized him, cut open his chest and extracted his heart. His corpse was then decapitated; the natives carried off his head, which was never recovered by the Spanish. In response a punitive expedition was launched, headed by Juan Matalbatz, a Q'eqchi' leader from Chamelco; the independent Indians captured by the Q'eqchi' expedition were taken back to Cobán and resettled in Santo Tomás Apóstol.
Lake Izabal and the lower Motagua River
The Dominicans established themselves in Xocolo on the shore of Lake Izabal in the mid-16th century. Xocolo became infamous among the Dominican missionaries for the practice of witchcraft by its inhabitants. By 1574 it was the most important staging post for European expeditions into the interior, and it remained important in that role as late as 1630; it was abandoned in 1631.
In 1598 Alfonso Criado de Castilla became governor of the Captaincy General of Guatemala. Owing to the poor state of Puerto Caballos on the Honduran coast and its exposure to repeated pirate raids he sent a pilot to scout Lake Izabal. As a result of the survey, and after royal permission was granted, Criado de Castilla ordered the construction of a new port, named Santo Tomás de Castilla, at a favourable spot on the Amatique Bay not far from the lake. Work then began on building a highway from the port to the new capital of the colony, modern Antigua Guatemala, following the Motagua Valley into the highlands. Indigenous guides scouting the route from the highlands would not proceed further downriver than three leagues below Quiriguá, because the area was inhabited by the hostile Toquegua.
The leaders of Xocolo and Amatique, backed by the threat of Spanish action, persuaded a community of 190 Toquegua to settle on the Amatique coast in April 1604. The new settlement immediately suffered a drop in population; although some sources reported the Amatique Toquegua extinct before 1613, Mercedarian friars were still attending to them in 1625. In 1628 the towns of the Manche Ch'ol were placed under the administration of the governor of Verapaz, with Francisco Morán as their ecclesiastical head. Morán favoured a more robust approach to the conversion of the Manche and moved Spanish soldiers into the region to protect against raids from the Itza to the north. The new Spanish garrison, in an area that had not previously seen a heavy Spanish military presence, provoked the Manche to revolt, which was followed by the abandonment of the indigenous settlements. By 1699 the neighbouring Toquegua no longer existed as a separate people, owing to a combination of high mortality and intermarriage with the Amatique Indians. At around this time the Spanish decided on the reduction of the independent (or "wild", from the Spanish point of view) Mopan Maya living to the north of Lake Izabal. The north shore of the lake, although fertile, was by then largely depopulated, so the Spanish planned to bring the Mopan out of the forests to the north into an area where they could be more easily controlled.
During the campaign to conquer the Itza of Petén, the Spanish sent expeditions to harass and relocate the Mopan north of Lake Izabal and the Ch'ol Maya of the Amatique forests to the east. They were resettled in the Colonial reducción of San Antonio de las Bodegas on the south shore of the lake and in San Pedro de Amatique. By the latter half of the 18th century the indigenous population of these towns had disappeared; the local inhabitants now consisted entirely of Spaniards, mulattos and others of mixed race, all associated with the Castillo de San Felipe de Lara fort guarding the entrance to Lake Izabal. The main cause of the drastic depopulation of Lake Izabal and the Motagua Delta was the constant slave raids by the Miskito Sambu of the Caribbean coast that effectively ended the Maya population of the region; the captured Maya were sold into slavery in the British colony of Jamaica.
Conquest of Petén
From 1527 onwards the Spanish were increasingly active in the Yucatán Peninsula, establishing a number of colonies and towns by 1544, including Campeche and Valladolid in what is now Mexico. The Spanish impact on the northern Maya, encompassing invasion, epidemic diseases and the export of up to 50,000 Maya slaves, caused many Maya to flee southwards to join the Itza around Lake Petén Itzá, within the modern borders of Guatemala. The Spanish were aware that the Itza Maya had become the centre of anti-Spanish resistance and engaged in a policy of encircling their kingdom and cutting their trade routes over the course of almost two hundred years. The Itza resisted this steady encroachment by recruiting their neighbours as allies against the slow Spanish advance.
Dominican missionaries were active in Verapaz and the southern Petén from the late 16th century through the 17th century, attempting non-violent conversion with limited success. In the 17th century the Franciscans came to the conclusion that the pacification and Christian conversion of the Maya would not be possible as long as the Itza held out at Lake Petén Itzá. The constant flow of escapees fleeing the Spanish-held territories to find refuge with the Itza was a drain on the encomiendas. Fray Bartolomé de Fuensalida visited Nojpetén in 1618 and 1619. The Franciscan missionaries attempted to use their own reinterpretation of the k'atun prophecies when they visited Nojpetén at this time, to convince the current Aj Kan Ek' and his Maya priesthood that the time for conversion had come. But the Itza priesthood interpreted the prophecies differently, and the missionaries were fortunate to escape with their lives. In 1695 the colonial authorities decided to connect the province of Guatemala with Yucatán, and Guatemalan soldiers conquered a number of Ch'ol communities, the most important being Sakb'ajlan on the Lacantún River in eastern Chiapas, now in Mexico, which was renamed as Nuestra Señora de Dolores, or Dolores del Lakandon. The Franciscan friar Andrés de Avendaño oversaw a second attempt to overcome the Itza in 1695, convincing the Itza king that the K'atun 8 Ajaw, a twenty-year Maya calendrical cycle beginning in 1696 or 1697, was the right time for the Itza to finally embrace Christianity and to accept the king of Spain as overlord. However the Itza had local Maya enemies who resisted this conversion, and in 1696 Avendaño was fortunate to escape with his life. The Itza's continued resistance had become a major embarrassment for the Spanish colonial authorities, and soldiers were despatched from Campeche to take Nojpetén once and for all.
Fall of Nojpetén
Martín de Urzúa y Arizmendi arrived on the western shore of Lake Petén Itzá with his soldiers in February 1697, and once there built a galeota, a large and heavily armed oar-powered attack boat. The Itza capital fell in a bloody waterborne assault on 13 March 1697. The Spanish bombardment caused heavy loss of life on the island; many Itza Maya who fled by swimming across the lake were killed in the water. After the battle the surviving defenders melted away into the forests, leaving the Spanish to occupy an abandoned Maya town. The Itza and Kowoj kings (Ajaw Kan Ek' and Aj Kowoj) were soon captured, together with other Maya nobles and their families. With Nojpetén safely in the hands of the Spanish, Urzúa returned to Campeche; he left a small garrison on the island, isolated amongst the hostile Itza and Kowoj who still dominated the mainland. Nojpetén was renamed by the Spanish as Nuestra Señora de los Remedios y San Pablo, Laguna del Itza ("Our Lady of Remedy and Saint Paul, Lake of the Itza"). The garrison was reinforced in 1699 by a military expedition from Guatemala, accompanied by mixed-race ladino civilians who came to found their own town around the military camp. The settlers brought disease with them, which killed many soldiers and colonists and swept through the indigenous population. The Guatemalans stayed just three months before returning to Santiago de los Caballeros de Guatemala, taking the captive Itza king with them, together with his son and two of his cousins. The cousins died on the long journey to the colonial capital; Ajaw Kan Ek' and his son spent the rest of their lives under house arrest in the capital.
Final years of conquest
In the late 17th century the small population of Ch'ol Maya in southern Petén and Belize was forcibly removed to Alta Verapaz, where the people were absorbed into the Q'eqchi' population. The Ch'ol of the Lacandon Jungle were resettled in Huehuetenango in the early 18th century. Catholic priests from Yucatán founded several mission towns around Lake Petén Itzá in 1702–1703. Surviving Itza and Kowoj were resettled in the new colonial towns by a mixture of persuasion and force. Kowoj and Itza leaders in these mission towns rebelled against their Spanish overlords in 1704, but although well-planned, the rebellion was quickly crushed. Its leaders were executed and most of the mission towns were abandoned. By 1708 only about 6,000 Maya remained in central Petén, compared to ten times that number in 1697. Although disease was responsible for the majority of deaths, Spanish expeditions and internecine warfare between indigenous groups also played their part.
Legacy of the Spanish conquest
The initial shock of the Spanish conquest was followed by decades of heavy exploitation of the indigenous peoples, allies and foes alike. Over the following two hundred years colonial rule gradually imposed Spanish cultural standards on the subjugated peoples. The Spanish reducciones created new nucleated settlements laid out in a grid pattern in the Spanish style, with a central plaza, a church and the town hall housing the civil government, known as the ayuntamiento. This style of settlement can still be seen in the villages and towns of the area. The civil government was either run directly by the Spanish and their descendants (the Ladinos) or was tightly controlled by them. The introduction of Catholicism was the main vehicle for cultural change, and resulted in religious syncretism. Old World cultural elements came to be thoroughly adopted by Maya groups, an example being the marimba, a musical instrument of African origin. The greatest change was the sweeping aside of the pre-Columbian economic order and its replacement by European technology and livestock; this included the introduction of iron and steel tools to replace Neolithic tools, and of cattle, pigs and chickens that largely replaced the consumption of game. New crops were also introduced; however, sugarcane and coffee led to plantations that economically exploited native labour. Sixty per cent of the modern population of Guatemala is estimated to be Maya, concentrated in the central and western highlands. The eastern portion of the country was the object of intense Spanish migration and hispanicization. Guatemalan society is divided into a class system largely based on race, with Maya peasants and artisans at the bottom, mixed-race Ladino salaried workers and bureaucrats forming the middle and lower classes, and the creole elite of pure European ancestry above them. Some indigenous elites, such as the Xajil, did manage to maintain a level of status into the colonial period; a prominent Kaqchikel noble family, they chronicled the history of their region.
Notes
- While most sources accept the modern town of Flores on Lake Petén Itzá as the location of Nojpetén/Tayasal, Arlen Chase argued that this identification is incorrect and that descriptions of Nojpetén correspond better to the archaeological site of Topoxte on Lake Yaxha. Chase 1976. See also the detailed rebuttal by Jones, Rice and Rice 1981.
- In the original this reads: ...por servir a Dios y a Su Majestad, e dar luz a los questaban en tinieblas, y también por haber riquezas, que todos los hombres comúnmente venimos a buscar. "(...those who died) to serve God and His Majesty, and to bring light to those who were in darkness, and also because there were riches, that all of us came in search of." Díaz del Castillo 1632, 2004, p. 720. Chapter CCX: De otras cosas y proyectos que se han seguido de nuestras ilustres conquistas y trabajos "Of other things and projects that have come about from our illustrious conquests and labours".
- Recinos places all these dates two days earlier (e.g. the Spanish arrival at Iximche on 12 April rather than 14 April) based on vague dating in Spanish primary records. Schele and Fahsen calculated all dates on the more securely dated Kaqchikel annals, where equivalent dates are often given in both the Kaqchikel and Spanish calendars. The Schele and Fahsen dates are used in this section. Schele & Mathews 1999, p. 386. n. 15.
- A peso was a Spanish coin. One peso was worth eight reales (the source of the term "pieces of eight") or two tostones. During the conquest, a peso contained 4.6 grams (0.16 oz) of gold. Lovell 2005, p. 223. Recinos 1952, 1986, p. 52. n. 25.
- Recinos 1998, p. 19. gives sixty deserters.
- The location of the historical city of Mixco Viejo has been the source of some confusion. The archaeological site now known as Mixco Viejo has been proven to be Jilotepeque Viejo, the capital of the Chajoma. The Mixco Viejo of colonial records has now been associated with the archaeological site of Chinautla Viejo, much closer to modern Mixco. Carmack 2001a, pp. 151, 158.
- Most of these were native allies.
- The colony of Guatemala at this time consisted only of the highlands and Pacific plain. Lovell et al 1984, p. 460.
- His baptismal name.
References
- Alvarado, Pedro de (1524, 2007). "Pedro de Alvarado's letters to Hernando Cortés, 1524". In Matthew Restall and Florine Asselbergs. Invading Guatemala: Spanish, Nahua, and Maya Accounts of the Conquest Wars. University Park, Pennsylvania, US: Pennsylvania State University Press. pp. 23–47. ISBN 978-0-271-02758-6. OCLC 165478850.
- Batres, Carlos A. (2009). "Tracing the "Enigmatic" Late Postclassic Nahua-Pipil (A.D. 1200–1500): Archaeological Study of Guatemalan South Pacific Coast". Carbondale, Illinois, US: Southern Illinois University Carbondale. Retrieved 2011-10-02.
- Calderón Cruz, Silvia Josefina (1994). "Historia y Evolución del Curato de San Pedro Sacatepéquez San Marcos, desde su origen hasta 1848" (PDF). Guatemala City, Guatemala: Universidad Francisco Marroquín, Facultad de Humanidades, Departamento de Historia. Retrieved 2012-09-28. (Spanish)
- Carmack, Robert M. (2001a). Kik'aslemaal le K'iche'aab': Historia Social de los K'iche's. Guatemala City, Guatemala: Cholsamaj. ISBN 99922-56-19-2. OCLC 47220876. (Spanish)
- Carmack, Robert M. (2001b). Kik'ulmatajem le K'iche'aab': Evolución del Reino K'iche'. Guatemala City, Guatemala: Cholsamaj. ISBN 99922-56-22-2. OCLC 253481949. (Spanish)
- Caso Barrera, Laura; and Mario Aliphat (2007). "Relaciones de Verapaz y las Tierras Bajas Mayas Centrales en el siglo XVII" (PDF). XX Simposio de Investigaciones Arqueológicas en Guatemala, 2006 (edited by J.P. Laporte, B. Arroyo and H. Mejía) (Guatemala City, Guatemala: Museo Nacional de Arqueología y Etnología): 48–58. Retrieved 2012-01-22. (Spanish)
- Castro Ramos, Xochitl Anaité (2003). "El Santo Ángel. Estudio antropológico sobre una santa popular guatemalteca: aldea El Trapiche, municipio de El Adelanto, departamento de Jutiapa" (PDF). Guatemala City, Guatemala: Escuela de Historia, Área de Antropología, Universidad de San Carlos de Guatemala. Retrieved 2012-01-25. (Spanish)
- Chase, Arlen F. (April 1976). "Topoxte and Tayasal: Ethnohistory in Archaeology" (PDF). American Antiquity (Washington, D.C., USA: Society for American Archaeology) 41 (2): 154–167. ISSN 0002-7316. OCLC 482285289. Retrieved 2012-12-02.
- Coe, Michael D. (1999). The Maya. Ancient peoples and places series (6th ed.). London, UK and New York, US: Thames & Hudson. ISBN 0-500-28066-5. OCLC 59432778.
- Coe, Michael D.; with Rex Koontz (2002). Mexico: from the Olmecs to the Aztecs (5th ed.). London, UK and New York, US: Thames & Hudson. ISBN 0-500-28346-X. OCLC 50131575.
- Cornejo Sam, Mariano. Q'antel (Cantel): Patrimonio cultural-histórico del pueblo de Nuestra Señora de la Asunción Cantel: Tzion'elil echba'l kech aj kntelab "Tierra de Viento y Neblina". Quetzaltenango, Guatemala. (Spanish)
- Cortés, Hernán (1844, 2005). Manuel Alcalá, ed. Cartas de Relación. Mexico City, Mexico: Editorial Porrúa. ISBN 970-07-5830-3. OCLC 229414632. (Spanish)
- Dary Fuentes, Claudia (2008). Ethnic Identity, Community Organization and Social Experience in Eastern Guatemala: The Case of Santa María Xalapán. Albany, New York, US: ProQuest/College of Arts and Sciences, Department of Anthropology: University at Albany, State University of New York. ISBN 978-0-549-74811-3. OCLC 352928170. (Spanish)
- de Las Casas, Bartolomé (1552, 1992). Nigel Griffin, ed. A Short Account of the Destruction of the Indies. London, UK and New York, US: Penguin Books. ISBN 0-14-044562-5. OCLC 26198156.
- de Las Casas, Bartolomé (1552, 1997). Olga Camps, ed. Brevísima Relación de la Destrucción de las Indias. Mexico City, Mexico: Distribuciones Fontamara, S.A. ISBN 968-476-013-2. OCLC 32265767. (Spanish)
- de León Soto, Miguel Ángel (2010). La Notable Historia de Tzalcahá, Quetzaltenango, y del Occidente de Guatemala. Guatemala City, Guatemala: Centro Editorial Vile. OCLC 728291450. (Spanish)
- del Águila Flores, Patricia (2007). "Zaculeu: Ciudad Postclásica en las Tierras Altas Mayas de Guatemala" (PDF). Guatemala City, Guatemala: Ministerio de Cultura y Deportes. OCLC 277021068. Archived from the original on 2011-07-21. Retrieved 2011-08-06. (Spanish)
- Díaz del Castillo, Bernal (1632, 2005). Historia verdadera de la conquista de la Nueva España. Mexico City, Mexico: Editores Mexicanos Unidos, S.A. ISBN 968-15-0863-7. OCLC 34997012. (Spanish)
- Drew, David (1999). The Lost Chronicles of the Maya Kings. London, UK: Weidenfeld & Nicolson. ISBN 0-297-81699-3. OCLC 43401096.
- Feldman, Lawrence H. (1998). Motagua Colonial. Raleigh, North Carolina, US: Boson Books. ISBN 1-886420-51-3. OCLC 82561350.
- Feldman, Lawrence H. (2000). Lost Shores, Forgotten Peoples: Spanish Explorations of the South East Maya Lowlands. Durham, North Carolina, US: Duke University Press. ISBN 0-8223-2624-8. OCLC 254438823.
- Fowler, William R. Jr. (Winter 1985). "Ethnohistoric Sources on the Pipil-Nicarao of Central America: A Critical Analysis". Ethnohistory (Duke University Press) 32 (1): 37–62. ISSN 0014-1801. JSTOR 482092. OCLC 478130795.
- Fox, John W. (August 1981). "The Late Postclassic Eastern Frontier of Mesoamerica: Cultural Innovation Along the Periphery". Current Anthropology (The University of Chicago Press on behalf of Wenner-Gren Foundation for Anthropological Research) 22 (4): 321–346. ISSN 0011-3204. JSTOR 2742225. OCLC 4644864425.
- Fuentes y Guzman, Francisco Antonio de; with Justo Zaragoza (notes and illustrations) (1882). Luis Navarro, ed. Historia de Guatemala o Recordación Florida I. Madrid, Spain: Biblioteca de los Americanistas. OCLC 699103660. (Spanish)
- Gall, Francis (July to December 1967). "Los Gonzalo de Alvarado, Conquistadores de Guatemala". Anales de la Sociedad de Geografía e Historia (Guatemala City, Guatemala: Sociedad de Geografía e Historia de Guatemala) XL. OCLC 72773975. (Spanish)
- Guillemín, Jorge F. (1965). Iximché: Capital del Antiguo Reino Cakchiquel. Guatemala: Tipografía Nacional de Guatemala. OCLC 1498320. (Spanish)
- Guillemin, George F. (Winter 1967). "The Ancient Cakchiquel Capital of Iximche". Expedition (University of Pennsylvania Museum of Archaeology and Anthropology): 22–35. ISSN 0014-4738. OCLC 1568625. Retrieved 2011-09-12.
- Hill, Robert M. II (1996). "Eastern Chajoma (Cakchiquel) Political Geography: Ethnohistorical and archaeological contributions to the study of a Late Postclassic highland Maya polity". Ancient Mesoamerica (New York, US: Cambridge University Press) 7: 63–87. ISSN 0956-5361. OCLC 88113844.
- Hill, Robert M. II (June 1998). "Los Otros Kaqchikeles: Los Chajomá Vinak". Mesoamérica (Antigua Guatemala, Guatemala: El Centro de Investigaciones Regionales de Mesoamérica (CIRMA) in conjunction with Plumsock Mesoamerican Studies, South Woodstock, VT) 35: 229–254. ISSN 0252-9963. OCLC 7141215. (Spanish)
- Hinz, Eike (2008, 2010). Existence and Identity: Reconciliation and Self-organization through Q'anjob'al Maya Divination (PDF). Hamburg, Germany and Norderstedt, Germany: Universität Hamburg. ISBN 978-3-8334-8731-6. OCLC 299685808. Retrieved 2011-09-25.
- INFORPRESSCA (June 2011). "Reseña Historia del Municipio de San Mateo Ixtatán, Huehuetenango". Guatemala. Archived from the original on 2011-06-07. Retrieved 2011-09-06.
- ITMB Publishing Ltd. (1998). Guatemala (Map). 1:500000. International Travel Maps (3rd ed.). ISBN 0-921463-64-2. OCLC 421536238.
- Jiménez, Ajb'ee (2006). "Qnaab'ila b'ix Qna'b'ila, Our thoughts and our feelings: Maya-Mam women's struggles in San Ildefonso Ixtahuacán" (PDF). University of Texas at Austin. Retrieved 2011-09-04.
- Jones, Grant D.; Don S. Rice and Prudence M. Rice (July 1981). "The Location of Tayasal: A Reconsideration in Light of Peten Maya Ethnohistory and Archaeology" (PDF). American Antiquity (Washington, D.C., USA: Society for American Archaeology) 46 (6). ISSN 0002-7316. JSTOR 280599. OCLC 482285289. Retrieved 2012-12-02. (subscription required)
- Jones, Grant D. (2000). "The Lowland Maya, from the Conquest to the Present". In Richard E.W. Adams and Murdo J. Macleod (eds.). The Cambridge History of the Native Peoples of the Americas, Vol. II: Mesoamerica, part 2. Cambridge, UK: Cambridge University Press. pp. 346–391. ISBN 0-521-65204-9. OCLC 33359444.
- Jones, Grant D. (2009). "The Kowoj in Ethnohistorical Perspective". In Prudence M. Rice and Don S. Rice (eds.). The Kowoj: identity, migration, and geopolitics in late postclassic Petén, Guatemala. Boulder, Colorado, US: University Press of Colorado. pp. 55–69. ISBN 978-0-87081-930-8. OCLC 225875268.
- Josserand, J. Kathryn; and Nicholas A. Hopkins (2001). "Chol Ritual Language" (PDF). Los Angeles, California, US: FAMSI (Foundation for the Advancement of Mesoamerican Studies). Retrieved 2012-01-30.
- Lara Figueroa, Celso A. (2000). "Introducción". Recordación Florida: Primera Parte: Libros Primero y Segundo. Ayer y Hoy (3rd ed.). Guatemala: Editorial Artemis-Edinter. ISBN 84-89452-66-0. (Spanish)
- Lehmann, Henri (1968). Guide to the Ruins of Mixco Viejo. Andrew McIntyre and Edwin Kuh. Guatemala: Piedra Santa. OCLC 716195862.
- Letona Zuleta, José Vinicio; Carlos Camacho Nassar and Juan Antonio Fernández Gamarro. "Las tierras comunales xincas de Guatemala". In Carlos Camacho Nassar. Tierra, identidad y conflicto en Guatemala. Guatemala: Facultad Latinoamericana de Ciencias Sociales (FLACSO); Misión de Verificación de las Naciones Unidas en Guatemala (MINUGUA); Dependencia Presidencial de Asistencia Legal y Resolución de Conflictos sobre la Tierra (CONTIERRA). ISBN 978-99922-66-84-7. OCLC 54679387. (Spanish)
- Limón Aguirre, Fernando (2008). "La ciudadanía del pueblo chuj en México: Una dialéctica negativa de identidades". San Cristóbal de Las Casas, Mexico: El Colegio de la Frontera Sur – Unidad San Cristóbal de Las Casas. Retrieved 2011-09-15. (Spanish)
- Lovell, W. George; Christopher H. Lutz and William R. Swezey (April 1984). "The Indian Population of Southern Guatemala, 1549-1551: An Analysis of López de Cerrato's Tasaciones de Tributos". The Americas (Academy of American Franciscan History) 40 (4): 459–477. JSTOR 980856. Retrieved 2012-11-07. (subscription required)
- Lovell, W. George (1988). "Surviving Conquest: The Maya of Guatemala in Historical Perspective" (PDF). Latin American Research Review (Pittsburgh, Pennsylvania: The Latin American Studies Association) 23 (2): 25–57. Retrieved 2012-09-27.
- Lovell, W. George (2000). "The Highland Maya". In Richard E.W. Adams and Murdo J. Macleod (eds.). The Cambridge History of the Native Peoples of the Americas, Vol. II: Mesoamerica, part 2. Cambridge, UK: Cambridge University Press. pp. 392–444. ISBN 0-521-65204-9. OCLC 33359444.
- Lovell, W. George (2005). Conquest and Survival in Colonial Guatemala: A Historical Geography of the Cuchumatán Highlands, 1500–1821 (3rd ed.). Montreal, Canada: McGill-Queen's University Press. ISBN 0-7735-2741-9. OCLC 58051691.
- Lutz, Christopher H. (1997). Santiago de Guatemala, 1541–1773: City, Caste, and the Colonial Experience. University of Oklahoma Press. ISBN 0-8061-2597-7. OCLC 29548140.
- Matthew, Laura E. (2012). Memories of Conquest: Becoming Mexicano in Colonial Guatemala (hardback ). First Peoples. Chapel Hill, North Carolina, USA: University of North Carolina Press. ISBN 978-0-8078-3537-1. OCLC 752286995.
- Mendoza Asencio, Hilda Johanna (2011). "Módulo pedagógico para desarrollo turístico dirigido a docentes y estudiantes del Instituto Mixto de Educación Básica por Cooperativa de Enseñanza, Pasaco, Jutiapa" (PDF). Universidad de San Carlos de Guatemala, Facultad de Humanidades. Retrieved 2012-09-24. (Spanish)
- MINEDUC (2001). Eleuterio Cahuec del Valle, ed. Historia y Memorias de la Comunidad Étnica Chuj II (Versión escolar ed.). Guatemala: Universidad Rafael Landívar/UNICEF/FODIGUA. OCLC 741355513. (Spanish)
- Municipalidad de San Cristóbal Acasaguastlán (2011). "Historia del Municipio". Municipalidad de San Cristóbal Acasaguastlán. Retrieved 2012-09-24. (Spanish)
- Ortiz Flores, Walter Agustin (2008). "Segundo Asiento Oficial de la Ciudad según Acta". Ciudad Vieja Sacatepéquez, Guatemala: www.miciudadvieja.com. Retrieved 2011-10-25. (Spanish)
- Phillips, Charles (2006, 2007). The Complete Illustrated History of the Aztecs & Maya: The definitive chronicle of the ancient peoples of Central America & Mexico – including the Aztec, Maya, Olmec, Mixtec, Toltec & Zapotec. London, UK: Anness Publishing Ltd. ISBN 1-84681-197-X. OCLC 642211652.
- Polo Sifontes, Francis (1981). "Título de Alotenango, 1565: Clave para ubicar geograficamente la antigua Itzcuintepec pipil". In Francis Polo Sifontes and Celso A. Lara Figueroa. Antropología e Historia de Guatemala (Guatemala City, Guatemala: Dirección General de Antropología e Historia de Guatemala, Ministerio de Educación). 3, II Epoca: 109–129. OCLC 605015816. (Spanish)
- Polo Sifontes, Francis (1986). Los Cakchiqueles en la Conquista de Guatemala. Guatemala: CENALTEX. OCLC 82712257. (Spanish)
- Pons Sáez, Nuria (1997). La Conquista del Lacandón. Mexico: Universidad Nacional Autónoma de México. ISBN 968-36-6150-5. OCLC 40857165. (Spanish)
- Putzeys, Ivonne; and Sheila Flores (2007). "Excavaciones arqueológicas en la Iglesia de la Santísima Trinidad de Chiquimula de la Sierra: Rescate del nombre y el prestigio de una iglesia olvidada" (PDF). XX Simposio de Arqueología en Guatemala, 2006 (edited by J.P. Laporte, B. Arroyo and H. Mejía) (Guatemala City, Guatemala: Museo Nacional de Arqueología y Etnología): 1473–1490. Retrieved 2012-01-24. (Spanish)
- Recinos, Adrian (1952, 1986). Pedro de Alvarado: Conquistador de México y Guatemala (2nd ed.). Guatemala: CENALTEX Centro Nacional de Libros de Texto y Material Didáctico "José de Pineda Ibarra". OCLC 243309954. (Spanish)
- Recinos, Adrian (1998). Memorial de Solalá, Anales de los Kaqchikeles; Título de los Señores de Totonicapán. Guatemala: Piedra Santa. ISBN 84-8377-006-7. OCLC 25476196. (Spanish)
- Restall, Matthew; and Florine Asselbergs (2007). Invading Guatemala: Spanish, Nahua, and Maya Accounts of the Conquest Wars. University Park, Pennsylvania, US: Pennsylvania State University Press. ISBN 978-0-271-02758-6. OCLC 165478850.
- Rice, Prudence M. (2009). "Who were the Kowoj?". In Prudence M. Rice and Don S. Rice (eds.). The Kowoj: identity, migration, and geopolitics in late postclassic Petén, Guatemala. Boulder, Colorado, US: University Press of Colorado. pp. 17–19. ISBN 978-0-87081-930-8. OCLC 225875268.
- Rice, Prudence M.; and Don S. Rice (2009). "Introduction to the Kowoj and their Petén Neighbors". In Prudence M. Rice and Don S. Rice (eds.). The Kowoj: identity, migration, and geopolitics in late postclassic Petén, Guatemala. Boulder, Colorado, US: University Press of Colorado. pp. 3–15. ISBN 978-0-87081-930-8. OCLC 225875268.
- Rice, Prudence M.; Don S. Rice, Timothy W. Pugh and Rómulo Sánchez Polo (2009). "Defensive Architecture and the Context of Warfare at Zacpetén". In Prudence M. Rice and Don S. Rice (eds.). The Kowoj: identity, migration, and geopolitics in late postclassic Petén, Guatemala. Boulder, Colorado, US: University Press of Colorado. pp. 123–140. ISBN 978-0-87081-930-8. OCLC 225875268.
- Salazar, Gabriel (1620, 2000). "Geography of the Lowlands: Gabriel Salazar, 1620". In Lawrence H. Feldman. Lost Shores, Forgotten Peoples: Spanish Explorations of the South East Maya Lowlands. Durham, North Carolina, US: Duke University Press. pp. 21–54. ISBN 0-8223-2624-8. OCLC 254438823.
- Schele, Linda; and Peter Mathews (1999). The Code of Kings: The language of seven Maya temples and tombs. New York, US: Simon & Schuster. ISBN 978-0-684-85209-6. OCLC 41423034.
- SEGEPLAN (2010). "Plan de Desarrollo San Agustín Acasaguastlán El Progreso 2011-2025" (PDF). Guatemala City, Guatemala: SEGEPLAN. Retrieved 2012-09-26. (Spanish)
- Sharer, Robert J.; with Loa P. Traxler (2006). The Ancient Maya (6th ed.). Stanford, California, US: Stanford University Press. ISBN 0-8047-4817-9. OCLC 57577446.
- Smith, Carol A. (1997). "Race/Class/Gender Ideology in Guatemala: Modern and Anti-Modern Forms". In Brackette Williams. Women Out of Place: The Gender of Agency, the Race of Nationality. New York, US and London, UK: Routledge. pp. 50–78. ISBN 0-415-91496-5. OCLC 60185223.
- Smith, Michael E. (1996, 2003). The Aztecs (2nd ed.). Malden, Massachusetts, US and Oxford, UK: Blackwell Publishing. ISBN 978-0-631-23016-8. OCLC 59452395.
- Veblen, Thomas T. (December 1977). "Native Population Decline in Totonicapan, Guatemala". Annals of the Association of American Geographers (Taylor & Francis, on behalf of the Association of American Geographers) 67 (4): 484–499. Retrieved 2012-09-27. (subscription required)
- Wagner, Henry Raup; and Helen Rand Parish (1967). The Life and Writings of Bartolomé de Las Casas. University of New Mexico Press. OCLC 427169.
- Webster, David L. (2002). The Fall of the Ancient Maya: Solving the Mystery of the Maya Collapse. London, UK: Thames & Hudson. ISBN 0-500-05113-5. OCLC 48753878.
Further reading
- Kramer, Wendy; W. George Lovell and Christopher H. Lutz (1990). "Encomienda and Settlement: Towards a Historical Geography of Early Colonial Guatemala" 16. Austin, Texas, USA: University of Texas Press. pp. 67–72. ISSN 1054-3074. JSTOR 25765724. OCLC 4897324685. (subscription required)
Anschluss
The events of March 12, 1938, marked the culmination of historical cross-national pressures to unify the German populations of Austria and Germany under one nation. However, the 1938 Anschluss, regardless of its popularity, was forcibly enacted by Germany. Earlier, Nazi Germany had provided support for the Austrian National Socialist Party in its bid to seize power from Austria's Austrofascist leadership. Fully committed to remaining independent but under growing pressure, the chancellor of Austria, Kurt Schuschnigg, attempted to hold a plebiscite.
Although he expected Austria to vote in favor of maintaining autonomy, a well-planned internal overthrow of Austria's state institutions in Vienna by the Austrian Nazi Party took place on March 11, before the vote could be held. With power quickly transferred to Germany, Wehrmacht troops entered Austria to enforce the Anschluss. The Nazis held a plebiscite within the following month, in which they received 99.73 percent of the vote. No fighting ever took place, and the strongest voices against the annexation, particularly Fascist Italy, France and the United Kingdom—the Stresa Front—were either powerless to stop it or, in the case of Italy, appeased. The Allies were, on paper, committed to upholding the terms of the Treaty of Versailles, which specifically prohibited the union of Austria and Germany.
Nevertheless, the Anschluss was among the first major steps in Adolf Hitler's long-desired creation of an empire including German-speaking lands and territories Germany had lost after World War I. Already prior to the 1938 annexation, the Rhineland had been remilitarized and the Saar region had been returned to Germany after 15 years of occupation. After the Anschluss, the predominantly German Sudetenland of Czechoslovakia was taken, with the rest of the country becoming a German protectorate in 1939. That same year Memelland was returned by Lithuania, the final territorial acquisition before the invasion of Poland, which prompted World War II.
Austria ceased to exist as a fully independent nation and did not regain full sovereignty until 1955. A preliminary Austrian government was reinstated on April 27, 1945, and was legally recognized by the Allies in the following months.
Situation before the Anschluss
The idea of grouping all Germans into one state had been the subject of inconclusive debate since the end of the Holy Roman Empire in 1806. Prior to 1866, it was generally thought that the unification of the Germans could only succeed under Austrian leadership, but the rise of Prussia was largely unpredicted. This created a rivalry between the two that made unification through a Großdeutschland solution impossible. Also, due to the multi-ethnic composition of the Austro-Hungarian Empire centralized in Vienna, many rejected this notion; it was unthinkable that Austria would give up her "non-German" territories, let alone submit to Prussia. Nevertheless, a series of wars, including the Austro-Prussian War, led to the expulsion of Austria from German affairs, allowing for the creation of the Norddeutscher Bund (North German Confederation) and the consolidation of the German states under Prussia, enabling the creation of the German Empire in 1871. Otto von Bismarck played a fundamental role in this process, with the end result representing a Kleindeutschland solution that did not include the German-speaking parts of Austria-Hungary. When the latter broke up in 1918, many German-speaking Austrians hoped to join with Germany in the realignment of Europe, but the Treaty of Versailles (1919) and the Treaty of Saint-Germain of 1919 explicitly vetoed the inclusion of Austria within a German state, because France and Britain feared the power of a larger Germany and had already begun to disempower the existing one. Austrian particularism, especially among the nobility, also played an important role: Austria was Roman Catholic, while Germany was dominated, especially in government, more by Protestants.
In the early 1930s, popular support for union with Germany remained overwhelming, and the Austrian government looked to a possible customs union with Germany in 1931. However Hitler's and the Nazis' rise to power in Germany left the Austrian government with little enthusiasm for such formal ties. Hitler, born in Austria, had promoted an "all-German Reich" from the early beginnings of his leadership in the NSDAP and had publicly stated as early as 1924 in Mein Kampf that he would attempt a union, by force if necessary.
Austria shared the economic turbulence of post-1929 Europe, with a high unemployment rate and unstable commerce and industry. As with its northern and southern neighbors, these uncertain conditions made the young democracy vulnerable. The First Republic, dominated from the late 1920s by the Catholic nationalist Christian Social Party (CS), gradually disintegrated from 1933 (including the dissolution of parliament and a ban on the Austrian National Socialists) to 1934 (with the Austrian Civil War in February and a ban on all remaining parties except the CS). This evolved into a pseudo-fascist, corporatist model of one-party government which combined the CS and the paramilitary Heimwehr with absolute state domination of labor relations and no freedom of the press. Power was centralized in the office of the Chancellor, who was empowered to rule by decree. The predominance of the Christian Social Party (whose economic policies were based on the papal encyclical Rerum novarum) was a purely Austrian phenomenon based on Austria's national identity, which had strong Catholic elements that were incorporated into the movement by way of clerical authoritarian tendencies not to be found in Nazism. Both Engelbert Dollfuss and his successor Kurt Schuschnigg turned to Austria's other fascist neighbor, Italy, for inspiration and support. Indeed, the statist corporatism often referred to as Austrofascism bore more resemblance to Italian Fascism than to German National Socialism. Benito Mussolini was able to support the independent aspirations of the Austrian dictatorship until his need for German support in Ethiopia forced him into a client relationship with Berlin that began with the Berlin-Rome Axis of 1936.
When Chancellor Dollfuss was assassinated by Austrian Nazis on July 25, 1934, in a failed coup, a second civil conflict within only one year followed, lasting until August 1934. Afterwards, many leading Austrian Nazis fled to Germany and continued to coordinate their actions from there, while the remaining Austrian Nazis began to carry out terrorist attacks against Austrian governmental institutions (causing a death toll of more than 800 between 1934 and 1938). Dollfuss' successor Schuschnigg, who followed the political course of Dollfuss, took drastic action against the Nazis, including rounding up Nazis (but also Social Democrats) in internment camps.
The Anschluss of 1938
Hitler's first moves
In early 1938, Hitler had consolidated his power in Germany and was ready to fulfill his long-planned expansion. After a lengthy period of pressure by Germany, Hitler met Schuschnigg on February 12, 1938, in Berchtesgaden (Bavaria), instructing him to lift the ban on political parties, reinstate full party freedoms, release all imprisoned members of the Nazi party and let them participate in the government. Otherwise, he would take military action. Schuschnigg complied with Hitler's demands, appointing Arthur Seyss-Inquart, a Nazi lawyer, as Interior Minister and another Nazi, Edmund Glaise-Horstenau, as a minister without portfolio.
Before the February meeting, Schuschnigg was already under considerable pressure from Germany, which demanded the removal of the chief of staff of the Austrian Army, Alfred Jansa, from his position in January 1938. Jansa and his staff had developed a scenario for Austria's defense against a German attack, a situation Hitler wanted to avoid at all costs. Schuschnigg subsequently complied with the demand.
During the following weeks, Schuschnigg realized that his newly appointed ministers were working to take over his authority. Schuschnigg tried to gather support throughout Austria and inflame patriotism among the people. For the first time since February 12, 1934 (the time of the Austrian Civil War), socialists and communists could legally appear in public again. The communists announced their unconditional support for the Austrian government, understandable in light of Nazi pressure on Austria. The socialists demanded further concessions from Schuschnigg before they were willing to side with him.
Schuschnigg announces a referendum
On March 9, as a last resort to preserve Austria's independence, Schuschnigg scheduled a plebiscite on the independence of Austria for March 13. To secure a large majority in the referendum, Schuschnigg set the minimum voting age at 24 in order to exclude younger voters who largely sympathized with Nazi ideology. Holding a referendum was a highly risky gamble for Schuschnigg, and, on the next day, it became apparent that Hitler would not simply stand by while Austria declared its independence by public vote. Hitler declared that the plebiscite would be subject to major fraud and that Germany would not accept it. In addition, the German Ministry of Propaganda issued press reports that riots had broken out in Austria and that large parts of the Austrian population were calling for German troops to restore order. Schuschnigg immediately publicly replied that the reports of riots were nothing but lies.
Hitler sent an ultimatum to Schuschnigg on March 11, demanding that he hand over all power to the Austrian National Socialists or face an invasion. The ultimatum was set to expire at noon, but was extended by two hours. However, without waiting for an answer, Hitler had already signed the order to send troops into Austria at one o'clock, issuing it to Hermann Göring only hours later.
Schuschnigg desperately sought support for Austrian independence in the hours following the ultimatum, but, realizing that neither France nor the United Kingdom was willing to take steps, he resigned as Chancellor that evening. In the radio broadcast in which he announced his resignation, he argued that he accepted the changes and allowed the Nazis to take over the government in order to avoid bloodshed. Meanwhile, Austrian President Wilhelm Miklas refused to appoint Seyss-Inquart Chancellor and asked other Austrian politicians, such as Michael Skubl and Sigismund Schilhawsky, to assume the office. However, the Nazis were well organized. Within hours they managed to take control of many parts of Vienna, including the Ministry of Internal Affairs (which controlled the police). As Miklas continued to refuse to appoint a Nazi government and Seyss-Inquart still could not send a telegram in the name of the Austrian government requesting German troops to restore order, Hitler became furious. At about 10 P.M., well after Hitler had signed and issued the order for the invasion, Göring and Hitler gave up on waiting and published a forged telegram containing a request by the Austrian Government for German troops to enter Austria. Around midnight, after nearly all critical offices and buildings in Vienna had fallen into Nazi hands and the main political party members of the old government had been arrested, Miklas finally conceded, appointing Seyss-Inquart Chancellor.
German troops march into Austria
On the morning of March 12, the 8th Army of the German Wehrmacht crossed the German-Austrian border. It faced no resistance from the Austrian Army; on the contrary, the German troops were greeted by cheering Austrians. Although the invading forces were badly organized and coordination between the units was poor, it mattered little because no fighting took place. It did, however, serve as a warning to German commanders in future military operations, such as the invasion of Czechoslovakia.
Hitler's car crossed the border in the afternoon at Braunau am Inn, his birthplace. In the evening, he arrived at Linz and was given an enthusiastic welcome in the city hall. The atmosphere was so intense that Göring, in a telephone call that evening, stated: "There is unbelievable jubilation in Austria. We ourselves did not think that sympathies would be so intense."
Hitler's further travel through Austria changed into a triumphal tour that climaxed in Vienna, when around 200,000 Austrians gathered on the Heldenplatz (Square of Heroes) to hear Hitler proclaim the Austrian Anschluss. Hitler later commented: "Certain foreign newspapers have said that we fell on Austria with brutal methods. I can only say: even in death they cannot stop lying. I have in the course of my political struggle won much love from my people, but when I crossed the former frontier (into Austria) there met me such a stream of love as I have never experienced. Not as tyrants have we come, but as liberators."
The Anschluss was given immediate effect by legislative act on 13 March, subject to ratification by a plebiscite. Austria became the province of Ostmark, and Seyss-Inquart was appointed Governor. The plebiscite was held on 10 April and officially recorded support from 99.73 percent of the voters. While historians concur that the result itself was not manipulated, the voting process was neither free nor secret. Officials were present directly beside the voting booths and received the ballots by hand (in contrast to a secret vote, in which the ballot is placed in a sealed box). In addition, Hitler's brutal methods of crushing any opposition had been implemented immediately in the weeks preceding the referendum. Even before the first German soldier crossed the border, Heinrich Himmler and a few SS officers landed in Vienna to arrest prominent representatives of the First Republic such as Richard Schmitz, Leopold Figl, Friedrich Hillegeist, and Franz Olah. During the weeks following the Anschluss (and before the plebiscite), Social Democrats, Communists, and other potential political dissenters, as well as Jews, were rounded up and either imprisoned or sent to concentration camps. Within only a few days of March 12, 70,000 people had been arrested. The referendum itself was subject to large-scale propaganda and to the abrogation of the voting rights of around 400,000 people (nearly 10 percent of the eligible voting population), mainly former members of left-wing parties and Jews. In some remote areas of Austria, Schuschnigg's referendum on Austrian independence was nevertheless held on March 13 as originally planned, despite the Wehrmacht's presence (it took up to three days to occupy every part of the country). In the village of Innervillgraten, for instance, a majority of 95 percent voted for Austria's independence.
Austria remained part of the Third Reich until the end of World War II, when a preliminary Austrian government declared the Anschluss "null und nichtig" (null and void) on April 27, 1945. After the war, Allied-occupied Austria was recognized and treated as a separate country, but it was not restored to sovereignty until the Austrian State Treaty and the Austrian Declaration of Neutrality, both of 1955. The delay was largely due to the rapid development of the Cold War and disputes between the Soviet Union and its former allies over Austria's foreign policy.
Reactions and consequences of the Anschluss
The picture of Austria in the first days of its existence in the Third Reich is one of contradictions: at one and the same time, Hitler's terror regime began to tighten its grip in every area of society, beginning with mass arrests and thousands of Austrians attempting to flee in every direction; yet Austrians could be seen cheering and welcoming German troops entering Austrian territory. Many Austrian political figures did not hesitate to announce their support of the Anschluss and their relief that it happened without violence.
Cardinal Theodor Innitzer (a political figure of the CS) declared as early as March 12: "The Viennese Catholics should thank the Lord for the bloodless way this great political change has occurred, and they should pray for a great future for Austria. Needless to say, everyone should obey the orders of the new institutions." The other Austrian bishops followed suit some days later. Vatican Radio, however, immediately broadcast a vehement denunciation of the German action, and Cardinal Pacelli, the Vatican Secretary of State, ordered Innitzer to report to Rome. Before meeting with the pope, Innitzer met with Pacelli, who had been outraged by Innitzer's statement. He made it clear that Innitzer needed to retract; he was made to sign a new statement, issued on behalf of all the Austrian bishops, which provided: “The solemn declaration of the Austrian bishops … was clearly not intended to be an approval of something that was not and is not compatible with God's law”. The Vatican newspaper also reported that the bishop's earlier statement had been issued without the approval from Rome.
Robert Kauer, President of the Protestants in Austria, greeted Hitler on March 13, as "saviour of the 350,000 German Protestants in Austria and liberator from a five-year hardship." Even Karl Renner, the most famous Social Democrat of the First Republic, announced his support for the Anschluss and appealed to all Austrians to vote in favor of it on April 10.
The international response to Germany's expansion may be described as moderate. In London, The Times commented that 200 years earlier Scotland had joined England as well and that this event would not really differ much. On March 14, the British Prime Minister Neville Chamberlain noted in the House of Commons:
His Majesty's Government have throughout been in the closest touch with the situation. The Foreign Secretary saw the German Foreign Minister on the 10th of March and addressed to him a grave warning on the Austrian situation and upon what appeared to be the policy of the German Government in regard to it…. Late on the 11th of March our Ambassador in Berlin registered a protest in strong terms with the German Government against such use of coercion, backed by force, against an independent State in order to create a situation incompatible with its national independence.
However the speech concluded:
I imagine that according to the temperament of the individual the events which are in our minds to-day will be the cause of regret, of sorrow, perhaps of indignation. They cannot be regarded by His Majesty's Government with indifference or equanimity. They are bound to have effects which cannot yet be measured. The immediate result must be to intensify the sense of uncertainty and insecurity in Europe. Unfortunately, while the policy of appeasement would lead to a relaxation of the economic pressure under which many countries are suffering to-day, what has just occurred must inevitably retard economic recovery and, indeed, increased care will be required to ensure that marked deterioration does not set in. This is not a moment for hasty decisions or for careless words. We must consider the new situation quickly, but with cool judgement…. As regards our defence programmes, we have always made it clear that they were flexible and that they would have to be reviewed from time to time in the light of any development in the international situation. It would be idle to pretend that recent events do not constitute a change of the kind that we had in mind. Accordingly we have decided to make a fresh review, and in due course we shall announce what further steps we may think it necessary to take.
The modest response to the Anschluss was the first major consequence of the strategy of appeasement that characterized British foreign policy in the pre-war period. The international reaction to the events of March 12, 1938, led Hitler to conclude that he could use even more aggressive tactics in his roadmap to expand the Third Reich, as he would later in annexing the Sudetenland. The relatively bloodless Anschluss helped pave the way for the Munich Agreement of September 1938 and the annexation of Czechoslovakia in 1939, because it reinforced the view in Britain that appeasement was the right way to deal with Hitler's Germany.
Legacy of the 1938 Anschluss
The appeal of Nazism to Austrians
Despite the subversion of Austrian political processes by Hitler's sympathizers and associates, Austrian acceptance of direct government by Hitler's Germany is a very different phenomenon from the administration of other collaborationist countries.
With the break-up of the Austro-Hungarian monarchy in 1918, popular opinion favored unification with Germany, fueled by the concept of Grossdeutschland. Although such a union was forbidden by the Treaty of St. Germain, by which the newly formed Austrian republic was bound, the idea nonetheless held some appeal for Austrians. This prohibition stood in stark contrast to the principle of self-determination that governed the Versailles talks, as did the inclusion of the Sudetenland, a German-populated area of the former Austro-Hungarian province of Bohemia (whose population favored joining German-speaking Austria), in the newly formed Czechoslovak republic; both decisions gave rise to revisionist sentiment. This laid the grounds for the general willingness of the populations of both Austria and the Sudetenland to be included in the Third Reich, as well as for the relative acceptance of the Western governments, which made little protest until March 1939, when the irredentist argument lost its value following the annexation of the rest of Czech-speaking Bohemia, along with Moravia and Czech Silesia.
The small Republic of Austria was seen by many of its citizens as economically nonviable, a feeling that was exacerbated by the Depression of the 1930s. In contrast, the Nazi dictatorship appeared to have found a solution to the economic crisis of the 1930s. Furthermore, the break-up had thrown Austria into a crisis of identity, and many Austrians, of both the left and the right, felt that Austria should be part of a larger German nation.
Politically, Austria had not had the time to develop a strongly democratic society able to resist the onslaught of totalitarianism. The final version of the First Republic's constitution had lasted only from 1929 to 1933. The First Republic was riven by violent strife between the different political camps; the Christian Social Party was complicit in the murder of large numbers of adherents of the decidedly left-wing Social Democratic Party by the police during the July Revolt of 1927. In fact, with the end of democracy in 1933 and the establishment of Austrofascism, Austria had already purged its democratic institutions and instituted a dictatorship long before the Anschluss. There is thus little that radically distinguishes the institutions of at least the post-1934 Austrian government before and after March 12, 1938.
The members of the leading Christian Social Party were fervent Catholics, but not particularly anti-Semitic. For instance, Jews were not prohibited from exercising any profession, in sharp contrast to the Third Reich. Many prominent Austrian scientists, professors, and lawyers at the time were Jewish; in fact Vienna, with its Jewish population of about 200,000, was considered a safe haven from 1933 to 1938 by many Jews who fled Nazi Germany. However, the Nazis' anti-Semitism found fertile soil in Austria. Anti-Semitic elements had emerged as a force in Austrian politics in the late nineteenth century, with the rise in prominence of figures such as Georg Ritter von Schönerer and Karl Lueger (who had influenced the young Hitler) and, in the 1930s, anti-Semitism was rampant, as Jews were a convenient scapegoat for economic problems.
In addition to the economic appeal of the Anschluss, the popular underpinning of Nazi politics as a total art form (the refinement of film propaganda exemplified by Riefenstahl's Triumph of the Will and the mythological aestheticism of a broadly conceived national destiny of the German people within a "Thousand-Year Reich") gave the Nazis a massive advantage in advancing their claims to power. Moreover, Austrofascism was far less grand in its appeal than the choice between Stalin and Hitler to which many European intellectuals of the time believed themselves reduced by the end of the decade. Austria had effectively no alternative view of its historical mission when the choice was upon it. In spite of Dollfuss' and Schuschnigg's hostility to Nazi political ambitions, the Nazis succeeded in convincing many Austrians to accept what they viewed as the historical destiny of the German people rather than continue as a distinct sovereign state.
The Second Republic
The Moscow Declaration
The governments of the United Kingdom, the Soviet Union and the United States of America are agreed that Austria, the first free country to fall a victim to Hitlerite aggression, shall be liberated from German domination.
They regard the annexation imposed on Austria by Germany on 15 March 1938, as null and void. They consider themselves as in no way bound by any changes effected in Austria since that date. They declare that they wish to see re-established a free and independent Austria and thereby to open the way for the Austrian people themselves, as well as those neighbouring States which will be faced with similar problems, to find that political and economic security which is the only basis for lasting peace.
Austria is reminded, however, that she has a responsibility, which she cannot evade, for participation in the war at the side of Hitlerite Germany, and that in the final settlement account will inevitably be taken of her own contribution to her liberation.
To judge from the last paragraph and from subsequent determinations at the Nuremberg Trials, the Declaration was intended above all to serve as propaganda aimed at stirring Austrian resistance (although some Austrians are counted as Righteous Among the Nations, there was never an effective Austrian armed resistance of the sort found in other countries under German occupation), though the exact text of the declaration is said to have a somewhat complex drafting history. At Nuremberg, Arthur Seyss-Inquart and Franz von Papen, in particular, were both indicted under count one (conspiracy to commit crimes against peace) specifically for their activities in support of the Austrian Nazi Party and the Anschluss, but neither was convicted on this count. In acquitting von Papen, the court noted that his actions were, in its view, political immoralities but not crimes under its charter. Seyss-Inquart was convicted of other serious war crimes, most of which took place in Poland and the Netherlands, and was sentenced to death.
Austrian identity and the "victim theory"
After World War II, many Austrians sought comfort in the myth of Austria as "the Nazis' first victim." Although the Nazi party was promptly banned, Austria did not have the same thorough process of de-Nazification at the top of government which was imposed on Germany for a time. Lacking outside pressure for political reform, factions of Austrian society tried for a long time to advance the view that the Anschluss was only an annexation at the point of a bayonet.
Policy of neutrality
This view of the events of 1938 had deep roots in the ten years of Allied occupation and the struggle to regain Austrian sovereignty. The "victim theory" played an essential role in the negotiations on the Austrian State Treaty with the Soviets; by pointing to the Moscow Declaration, Austrian politicians relied heavily on it to achieve a solution for Austria different from Germany's division into East and West. The State Treaty, along with the subsequent Austrian declaration of permanent neutrality, marked important milestones in the solidification of Austria's independent national identity over the following decades.
As Austrian politicians of the Left and Right attempted to reconcile their differences in order to avoid the violent conflict that had dominated the First Republic, discussions of both Austrian Nazism and Austria's role during the Nazi era were largely avoided. Still, the Austrian People's Party (ÖVP) has advanced, and still advances, the argument that the establishment of the Dollfuss dictatorship was necessary in order to maintain Austrian independence, while the Austrian Social Democratic Party (SPÖ) argues that the Dollfuss dictatorship stripped the country of the democratic resources necessary to repel Hitler; yet this ignores the fact that Hitler himself was a native of Austria.
Confronting the past
For decades, the victim theory established in the Austrian mind remained largely undisputed. The Austrian public was only rarely forced to confront the legacy of the Third Reich (most notably during the events of 1965 concerning Taras Borodajkewycz, a professor of economic history notorious for anti-Semitic remarks, when Ernst Kirchweger, a concentration camp survivor, was killed by a right-wing protester during riots). It was not until the 1980s that Austrians were finally confronted with their past on a massive scale. The main catalyst for the start of a Vergangenheitsbewältigung was the so-called Waldheim affair. When it was alleged during the 1986 presidential election campaign that the successful candidate and former UN Secretary-General Kurt Waldheim had been a member of the Nazi party and of the infamous Sturmabteilung (SA) (he was later absolved of direct involvement in war crimes), the Austrian reply was that such scrutiny was an unwelcome intervention in the country's internal affairs. Despite the politicians' reactions to international criticism of Waldheim, the Waldheim affair started the first serious major discussion of Austria's past and the Anschluss.
Another main factor in Austria coming to terms with the past in the 1980s was Jörg Haider and the rise of the Freedom Party of Austria (FPÖ). The party had combined elements of the pan-German right with free-market liberalism since its founding in 1955, but after Haider had ascended to the party chairmanship in 1986, the liberal elements became increasingly marginalized while Haider began to openly use nationalist and anti-immigrant rhetoric. He was often criticized for tactics such as the völkisch (ethnic) definition of national interest ("Austria for Austrians") and his apologism for Austria's past, notably calling members of the Waffen-SS "men of honor." Following an enormous electoral rise in the 1990s, peaking in the legislative election of 1999, the FPÖ, now purged of its liberal elements, entered a coalition with the Austrian People's Party (ÖVP) led by Wolfgang Schüssel, that met with international condemnation in 2000. This coalition triggered the regular Donnerstagsdemonstrationen (Thursday demonstrations) in protest against the government, which took place on the Heldenplatz, where Hitler had greeted the masses during the Anschluss. Haider's tactics and rhetoric, which were often criticized as sympathetic to Nazism, again forced Austrians to reconsider their relationship to the past.
But Jörg Haider was not alone in making questionable remarks about Austria's past. His coalition partner and current Chancellor, Wolfgang Schüssel, stated in an interview with the Jerusalem Post as late as 2000 that Austria was the first victim of Hitler's Germany.
Thomas Bernhard's last play, Heldenplatz, which attacked the simplistic victim theory and the era of Austrofascism, was highly controversial even before it appeared on stage in 1988, 50 years after Hitler's visit. Bernhard's achievement was to make the elimination of references to Hitler's reception in Vienna emblematic of Austrian attempts to claim their history and culture under questionable criteria. Many politicians from all political factions called Bernhard a Nestbeschmutzer (a person who damages the reputation of his own country) and openly demanded that the play not be staged in Vienna's Burgtheater. Kurt Waldheim, who was at that time still Austrian president, called the play a crude insult to the Austrian people.
The Historical Commission and outstanding legal issues
In the context of the postwar Federal Republic of Germany, the Vergangenheitsbewältigung ("struggle to come to terms with the past") has been partially institutionalized, variably in literary, cultural, political, and educational contexts (its development and difficulties have not been trivial; see, for example, the Historikerstreit). Austria formed a Historikerkommission ("Historians' Commission" or "Historical Commission") in 1998 with a mandate to review Austria's role in the Nazi expropriation of Jewish property from a scholarly rather than legal perspective, partly in response to continuing criticism of its handling of property claims. Its membership was based on recommendations from various quarters, including Simon Wiesenthal and Yad Vashem. The Commission delivered its report in 2003. The noted Holocaust historian Raul Hilberg refused to participate in the Commission and in an interview stated his strenuous objections with reference to larger questions about Austrian culpability and liability, contrasting what he regarded as the relative inattention to Austria with the settlement governing the Swiss bank holdings of those who died or were displaced by the Holocaust:
I personally would like to know why the WJC World Jewish Congress has hardly put any pressure on Austria, even as leading Nazis and SS leaders were Austrians, Hitler included... Immediately after the war, the US wanted to make the Russians withdraw from Austria, and the Russians wanted to keep Austria neutral, therefore there was a common interest to grant Austria victim status. And later Austria could cry poor - though its per capita income is as high as Germany's. And, most importantly, the Austrian PR machinery works better. Austria has the opera ball, the imperial castle, Mozartkugeln [a chocolate]. Americans like that. And Austrians invest and export relatively little to the US, therefore they are less vulnerable to blackmail. In the meantime, they set up a commission in Austria to clarify what happened to Jewish property. Victor Klima, the former chancellor, has asked me to join. My father fought for Austria in the First World War and in 1939 he was kicked out of Austria. After the war they offered him ten dollars per month as compensation. For this reason I told Klima, no thank you, this makes me sick.
The Simon Wiesenthal Center continues to criticize Austria (as recently as June 2005) for its alleged historical and ongoing unwillingness aggressively to pursue investigations and trials against Nazis for war crimes and crimes against humanity from the 1970s onwards. Its 2001 report offered the following characterization:
Given the extensive participation of numerous Austrians, including at the highest levels, in the implementation of the Final Solution and other Nazi crimes, Austria should have been a leader in the prosecution of Holocaust perpetrators over the course of the past four decades, as has been the case in Germany. Unfortunately relatively little has been achieved by the Austrian authorities in this regard and in fact, with the exception of the case of Dr. Heinrich Gross which was suspended this year under highly suspicious circumstances (he claimed to be medically unfit, but outside the court proved to be healthy) not a single Nazi war crimes prosecution has been conducted in Austria since the mid-seventies.
In 2003, the Center launched a worldwide effort named "Operation: Last Chance" in order to collect further information about those Nazis still alive who are potentially subject to prosecution. Although reports issued shortly thereafter credited Austria with initiating large-scale investigations, one recent case has drawn criticism of the Austrian authorities: the Center placed the 92-year-old Croatian Milivoj Asner on its 2005 top-ten list. Asner had fled to Austria in 2004 after Croatia announced that it would open an investigation into war crimes in which he may have been involved. In response to objections over Asner's continued freedom, Austria's federal government has deferred either to extradition requests from Croatia or to prosecutorial action from Klagenfurt, neither of which appears forthcoming (as of June 2005). Extradition is not an option in any case, since Asner also holds Austrian citizenship, having lived in the country from 1946 to 1991.
- ↑ Until the German spelling reform of 1996, Anschluss was written Anschluß in the countries subject to the reform. (See also the article on ß.) In English-language typography and style conventions, "ß" has commonly been transliterated as "ss"; since the 1996 reform, the spelling Anschluss has also been the standard form in German.
- ↑ 1938: Austria, MSN Encarta. accessed 10 June 2005.
- ↑ "Österreichs Weg zum Anschluss im März 1938," Wiener Zeitung, 25 May 1998 (detailed article the on the events of the Anschluss, in German).
- ↑ "Österreichs Weg zum Anschluss im März 1938," Wiener Zeitung, 25 May 1998.
- ↑ Anschluss, Spartacus Schoolnet (reactions on the Anschluss).
- ↑ "Die propagandistische Vorbereitung der Volksabstimmung," Austrian Resistance Archive, Vienna, 1988, accessed 10 June 2005.
- ↑ "Die propagandistische Vorbereitung der Volksabstimmung," Austrian Resistance Archive, Vienna, 1988, accessed 10 June 2005.
- ↑ See note 2 above.
- ↑ See note 2 above.
- ↑ Neville Chamberlain, Statement of the Prime Minister in the House of Commons, 14 March 1938.
- ↑ Moscow Conference: Joint Four-Nation Declaration, October 1943 (full text of the Moscow Memorandum).
- ↑ Gerald Stourzh, Waldheim's Austria, The New York Review of Books 34 (3) (February 1987).
- ↑ Judgment, The Defendants: Seyss-Inquart, The Nizkor Project.
- ↑ The Defendants: Von Papen, The Nizkor Project.
- ↑ Short note on Schüssel's interview in the Jerusalem Post (in German), Salzburger Nachrichten, 11 November 2000.
- ↑ Thomas Bernhard, Books and Writers (article on Bernhard with a short section on Heldenplatz).
- ↑ Austrian Historical Commission.
- ↑ Press statement on the report of the Austrian Historical Commission, Austrian Press and Information Service, 28 February 2003.
- ↑ Hilberg interview with the Berliner Zeitung, as quoted by Norman Finkelstein's web site.
- ↑ Efraim Zuroff, Worldwide Investigation and Prosecution of Nazi War Criminals, 2001–2002, Simon Wiesenthal Center, Jerusalem (April 2002).
- ↑ Take action against Nazi war criminal Milivoj Asner, World Jewish Congress, 19 November 2004.
- ↑ Mutmaßlicher Kriegsverbrecher Asner wird nicht an Zagreb ausgeliefert, Der Standard, September 23, 2005.
- Bukey, Evan Burr (1986). Hitler's Hometown: Linz, Austria, 1908-1945. Indiana University Press. ISBN 0253328330.
- Parkinson, F. (ed.) (1989). Conquering the Past: Austrian Nazism Yesterday and Today. Wayne State University Press. ISBN 0814320546.
- Pauley, Bruce F. (1981). Hitler and the Forgotten Nazis: A History of Austrian National Socialism. University of North Carolina Press. ISBN 0807814563.
- Scheuch, Manfred (2005). Der Weg zum Heldenplatz: eine Geschichte der österreichischen Diktatur. 1933-1938. ISBN 3825877124.
- Schuschnigg, Kurt (1971). The brutal takeover: The Austrian ex-Chancellor's account of the Anschluss of Austria by Hitler. Weidenfeld and Nicolson. ISBN 0297003216.
- Stuckel, Eva-Maria (2001). Österreich, Monarchie, Operette, und Anschluss: Antisemitismus, Faschismus, und Nationalsozialismus im Fadenkreuz von Ingeborg Bachmann und Elias Canetti.
Electronic articles and journals
- "Österreichs Weg zum Anschluss im März 1938," Wiener Zeitung, 25 May 1998 (detailed article on the events of the Anschluss, in German).
- "Die propagandistische Vorbereitung der Volksabstimmung," Austrian Resistance Archive, Vienna, 1988. Accessed 10 June 2005.
- 1938: Austria, MSN Encarta. Accessed 10 June 2005.
- Buchner, A. "The Crisis Year of 1934: From the Destruction of the Socialist Lager to the National Socialist Coup Attempt." Accessed 10 June 2005.
All links retrieved October 15, 2012.
- Austrian Historical Commission.
- Documentation Centre of Austrian Resistance (DÖW).
- Exchange in the New York Review of Books between Gerald Stourzh and Gordon Craig over the latter's review, "Waldheim's Austria"
- Full text of the Moscow Declaration
- Simon Wiesenthal Center Retrieved April 26, 2007.
- Time magazine coverage of the events of the Anschluss
- Pictures of Adolf Hitler in Vienna
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license, which may reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. The history of earlier contributions by wikipedians is accessible to researchers.
Note: Some restrictions may apply to use of individual images which are separately licensed.
The Vietnam War was the legacy of France's failure to suppress nationalist forces in Indochina as it struggled to restore its colonial dominion after World War II. Led by Ho Chi Minh, a Communist-dominated revolutionary movement—the Viet Minh—waged a political and military struggle for Vietnamese independence that frustrated the efforts of the French and resulted ultimately in their ouster from the region.
The U.S. Army's first encounters with Ho Chi Minh were brief and sympathetic. During World War II, Ho's anti-Japanese resistance fighters helped to rescue downed American pilots and furnished information on Japanese forces in Indochina. U.S. Army officers stood at Ho's side in August 1945 as he basked in the short-lived satisfaction of declaring Vietnam's independence. Five years later, however, in an international climate tense with ideological and military confrontation between Communist and non-Communist powers, Army advisers of the newly formed U.S. Military Assistance Advisory Group (MAAG), Indochina, were aiding France against the Viet Minh. With combat raging in Korea and mainland China recently fallen to the Communists, the war in Indochina now appeared to Americans as one more pressure point to be contained on a wide arc of Communist expansion in Asia. By underwriting French military efforts in Southeast Asia, the United States enabled France to sustain its economic recovery and to contribute, through the North Atlantic Treaty Organization (NATO), to the collective defense of western Europe.
Provided with aircraft, artillery, tanks, vehicles, weapons, and other equipment and supplies—a small portion of which they distributed to an anti-Communist Vietnamese army they had organized—the French did not fail for want of equipment. Instead, they put American aid at the service of a flawed strategy that sought to defeat the elusive Viet Minh in set-piece battles, but neglected to cultivate the loyalty and support of the Vietnamese people. Too few in number to provide more than a veneer of security in most rural areas, the French were unable to suppress the guerrillas or to prevent the underground Communist shadow government from reappearing whenever French forces left one area to fight elsewhere.
The battle of Dien Bien Phu epitomized the shortcomings of French strategy. Located near the Laotian border in a rugged valley of remote northwestern Vietnam, Dien Bien Phu was not a congenial place to fight. Far inland from coastal supply bases and with roads vulnerable to the Viet Minh, the base depended almost entirely on air support. The French, expecting the Viet Minh to invade Laos, occupied Dien Bien Phu in November 1953 in order to force a battle. Yet they had little to gain from an engagement. Victory at Dien Bien Phu would not have ended the war; even if defeated, the Viet Minh would have retired to their mountain redoubts. And no French victory at Dien Bien Phu would have reduced Communist control over large segments of the population. On the other hand, the French had much to lose, in manpower, equipment, and prestige.
Their position was in a valley, surrounded by high ground that the Viet Minh quickly fortified. While bombarding the besieged garrison with artillery and mortars, the attackers tunneled closer to the French positions. Supply aircraft that successfully ran the gauntlet of intense antiaircraft fire risked destruction on the ground from Viet Minh artillery. Eventually, supplies and ammunition could be delivered to the defenders only by parachute drop. As the situation became critical, France asked the United States to intervene. Believing that the French position was untenable and that even massive American air attacks using small nuclear bombs would be futile, General Matthew B. Ridgway, the Army Chief of Staff, helped to convince President Dwight D. Eisenhower not to aid them. Ridgway also opposed the use of U.S. ground forces, arguing that such an effort would severely strain the Army and possibly lead to a wider war in Asia.
The fall of Dien Bien Phu on 7 May 1954, as peace negotiations were about to start in Geneva, hastened France's disengagement from Indochina. On 20 July, France and the Viet Minh agreed to end hostilities and to divide Vietnam temporarily into two zones at the 17th parallel. (Map 47) In the North, the Viet Minh established a Communist government, with its capital at Hanoi. French forces withdrew to the South, and hundreds of thousands of civilians, most of whom were Roman Catholics, accompanied them. The question of unification was left to be decided by an election scheduled for 1956.
The Emergence of South Vietnam
As the Viet Minh consolidated control in the North, Ngo Dinh Diem, a Roman Catholic of mandarin background, sought to assert his authority over the chaotic conditions in the South in hopes of establishing an anti-Communist state. A onetime minister in the French colonial administration, Diem enjoyed a reputation for honesty. He had resigned his office in 1933 and had taken no part in the tumultuous events that swept over Vietnam after the war. Diem returned to Saigon in the summer of 1954 as premier with no political following except his family and a few Americans. His authority was challenged, first by the independent Hoa Hao and Cao Dai religious sects and then by the Binh Xuyen, an organization of gangsters that controlled Saigon's gambling dens and brothels and had strong influence with the police. Rallying an army, Diem defeated the sects and gained their grudging allegiance. Remnants of their forces, however, fled to the jungle to continue their resistance, and some, at a later date, became the nucleus of Communist guerrilla units.
Diem was also challenged by members of his own army, where French influence persisted among the highest ranking officers. But he weathered the threat of an army coup, dispelling American doubts about his ability to survive in the jungle of Vietnamese politics. For the next few years, the United States commitment to defend South Vietnam's independence was synonymous with support for Diem. Americans now provided advice and support to the Army of the Republic of Vietnam (ARVN); at Diem's request, they replaced French advisers throughout his nation's military establishment.
As the American role in South Vietnam was growing, U.S. defense policy was undergoing review. Officials in the Eisenhower administration believed that wars like those in Korea and Vietnam were too costly and ought to be avoided in the future. "Never again" was the rallying cry of those who opposed sending U.S. ground forces to fight a conventional war in Asia. Instead, the Eisenhower administration relied on the threat or use of massive nuclear retaliation to deter or, if necessary, to defeat the armies of the Soviet Union or Communist China. The New Look, as this policy was called, emphasized nuclear air power at the expense of conventional ground forces. If deterrence failed, planners envisioned the next war as a short, violent nuclear conflict of a few days' duration, conducted with forces in being. Ground forces were relegated to a minor role, and mobilization was regarded as an unnecessary luxury. In consequence, the Army's share of the defense budget decreased, the modernization of its forces was delayed, and its strength was reduced by 40 percent—from 1,404,598 in 1954 to 861,964 in 1956.
A strategy dependent on one form of military power, the New Look was sharply criticized by soldiers and academics alike. Unless the United States was willing to risk destruction, critics argued, the threat of massive nuclear retaliation had little credibility. General Ridgway and his successor, General Maxwell D. Taylor, were vocal opponents. Both advocated balanced forces to enable the United States to cope realistically with a variety of military contingencies. The events of the late 1950's appeared to support their demand for flexibility. The United States intervened in Lebanon in 1958 to restore political stability there. Two years later an American military show of force in the Straits of Taiwan helped to dampen tensions between Communist China and the Nationalist Chinese Government on Formosa. Both contingencies underlined the importance of avoiding any fixed concept of war.
Advocates of the flexible response doctrine foresaw a meaningful role for the Army as part of a more credible deterrent and as a means of intervening, when necessary, in limited and small wars. They wished to strengthen both conventional and unconventional forces; to improve strategic and tactical mobility; and to maintain troops and equipment at forward bases, close to likely areas of conflict. They placed a premium on highly responsive command and control, to allow a close meshing of military actions with political goals. The same reformers were deeply interested in the conduct of brushfire wars, especially among the underdeveloped nations. In the so-called third world, competing cold war ideologies and festering nationalistic, religious, and social conflicts interacted with the disruptive forces of modernization to create the preconditions for open hostilities. Southeast Asia was one of several such areas identified by the Army. Here the United States' central concern was the threat of North Vietnamese and perhaps Chinese aggression against South Vietnam and other non-Communist states.
The United States took the lead in forming a regional defense pact, the Southeast Asia Treaty Organization (SEATO), signaling its commitment to contain Communist encroachment in the region. Meanwhile the 342 American advisers of MAAG, Vietnam (which replaced MAAG, Indochina, in 1955), trained and organized Diem's fledgling army to resist an invasion from the North. Three MAAG chiefs—Lt. Gens. John W. O'Daniel, Samuel T. Williams, and Lionel C. McGarr—reorganized South Vietnam's light mobile infantry groups into infantry divisions, compatible in design and mission with U.S. defense plans. The South Vietnamese Army, with a strength of about 150,000, was equipped with standard Army equipment and given the mission of delaying the advance of any invasion force until the arrival of American reinforcements. The residual influence of the army's earlier French training, however, lingered in both leadership and tactics. The South Vietnamese had little or no practical experience in administration and the higher staff functions, from which the French had excluded them.
The MAAG's training and reorganization work was often interrupted by Diem's use of his army to conduct "pacification" campaigns to root out stay-behind Viet Minh cadre. Hence responsibility for most internal security was transferred to poorly trained and ill-equipped paramilitary forces, the Civil Guard and Self-Defense Corps, which numbered about 75,000. For the most part, the Viet Minh in the South avoided armed action and subscribed to a political action program in anticipation of Vietnam-wide elections in 1956, as stipulated by the Geneva Accords. But Diem, supported by the United States, refused to hold elections, claiming that undemocratic conditions in the North precluded a fair contest. (Some observers thought Ho Chi Minh sufficiently popular in the South to defeat Diem.) Buoyed by his own election as President in 1955 and by the adulation of his American supporters, Diem saw his political strength rise to its apex. While making some political and economic reforms, he pressed his attacks hard on political opponents and former Viet Minh, many of whom were not Communists at all but patriots who had joined the movement to fight for Vietnamese independence.
By 1957 Diem's harsh measures had so weakened the Viet Minh that Communist leaders in the South feared for the movement's survival there. The southerners urged their colleagues in the North to sanction a new armed struggle in South Vietnam. For self-protection, some Viet Minh had fled to secret bases to hide and form small units. Others joined renegade elements of the former sect armies. From bases in the mangrove swamps of the Mekong Delta, in the Plain of Reeds near the Cambodian border, and in the jungle of War Zones C and D northwest of Saigon, the Communists began to rebuild their armed forces, to re-establish an underground political network, and to carry out propaganda, harassment, and terrorist activities. As reforms faltered and Diem became more dictatorial, the ranks of the rebels swelled with the politically disaffected.
The Rise of the Viet Cong
The insurgents, now called the Viet Cong, had organized several companies and a few battalions by 1959, the majority in the Delta and the provinces around Saigon. As Viet Cong military strength increased, attacks against the paramilitary forces, and occasionally against the South Vietnamese Army, became more frequent. Many were conducted to obtain equipment, arms, and ammunition, but all were hailed by the guerrillas as evidence of the government's inability to protect its citizens. Political agitation and military activity also quickened in the Central Highlands, where Viet Cong agents recruited among the Montagnard tribes. In 1959, after assessing conditions in the South, the leaders in Hanoi agreed to resume the armed struggle, giving it equal weight with political efforts to undermine Diem and reunify Vietnam. To attract the growing number of anti-Communists opposed to Diem, as well as to provide a democratic facade for administering the party's policies in areas controlled by the Viet Cong, Hanoi in December 1960 created the National Liberation Front of South Vietnam.
The revival of guerrilla warfare in the South found the advisory group, the South Vietnamese Army, and Diem's government ill prepared to wage an effective campaign. In their efforts to train and strengthen Diem's army, U.S. advisers had concentrated on meeting the threat of a conventional North Vietnamese invasion. The ARVN's earlier antiguerrilla campaigns, while seemingly successful, had been carried out against a weak and dormant insurgency. The Civil Guard and Self-Defense Corps, which bore the brunt of the Viet Cong's attacks, were not under the MAAG's purview and proved unable to cope with the audacious Viet Cong. Diem's regime, while stressing military activities, neglected political, social, and economic reforms. American officials disagreed over the seriousness of the guerrilla threat, the priority to be accorded political or military measures, and the need for special counterguerrilla training for the South Vietnamese Army. Only a handful of the MAAG's advisers had personal experience in counterinsurgency warfare.
Yet the U.S. Army was not a stranger to such conflict. Americans had fought insurgents in the Philippines at the turn of the century, conducted a guerrilla campaign in Burma during World War II, helped the Greek and Philippine Governments to subdue Communist insurgencies after the war, and studied the French failure in Indochina and the British success in Malaya. The Army did not, however, have a comprehensive doctrine for dealing with insurgency. For the most part, insurgent warfare was equated with the type of guerrilla or partisan struggles carried out during World War II behind enemy lines in support of conventional operations. This viewpoint reduced antiguerrilla warfare to providing security against enemy partisans operating behind friendly lines.
Almost totally lacking was an appreciation of the political and social dimensions of insurgency and its role in the larger framework of revolutionary war. Insurgency meant above all a contest for political legitimacy and power—a struggle between contending political cultures over the organization of society. Most of the Army advisers and Special Forces who were sent to South Vietnam in the early 1960's were poorly prepared to wage such a struggle. A victory for counterinsurgency in South Vietnam would require Diem's government not only to outfight the guerrillas, but to compete successfully with their efforts to organize the population in support of the government's cause.
The Viet Cong thrived on their access to and control of the people, who formed the most important part of their support base. The population provided both economic and manpower resources to sustain and expand the insurgency; the people of the villages served the guerrillas as their first line of resistance against government intrusion into their "liberated zones" and bases. By comparison with their political effort, the strictly military aims of the Viet Cong were secondary. The insurgents hoped not to destroy government forces—although they did so when weaker elements could be isolated and defeated—but by limited actions to extend their influence over the population. By mobilizing the population, the Viet Cong compensated for their numerical and material disadvantages. The rule of thumb that ten soldiers were needed to defeat one guerrilla reflected the insurgents' political support rather than their military superiority. For the Saigon government, the task of isolating the Viet Cong from the population was difficult under any circumstances and impossible to achieve by force alone.
Viet Cong military forces varied from hamlet and village guerrillas, who were farmers by day and fighters by night, to full-time professional soldiers. Organized into squads and platoons, part-time guerrillas had several military functions. They gathered intelligence, passing it on to district or provincial authorities; they proselytized, propagandized, recruited, and provided security for local cadres. They reconnoitered the battlefield, served as porters and guides, created diversions, evacuated wounded, and retrieved weapons. Their very presence and watchfulness in a hamlet or village inhibited the population from aiding the government.
By contrast, the local and main force units consisted of full-time soldiers, most often recruited from the area where the unit operated. Forming companies and battalions, local forces were attached to a village, district, or provincial headquarters. Often they formed the protective shield behind which a Communist Party cadre established its political infrastructure and organized new guerrilla elements at the hamlet and village levels. As the link between guerrilla and main force units, local forces served as a reaction force for the former and as a pool of replacements and reinforcements for the latter. Having limited offensive capability, local forces usually attacked poorly defended, isolated outposts or weaker paramilitary forces, often at night and by ambush. Main force units were organized as battalions, regiments, and—as the insurgency matured—divisions. Subordinate to provincial, regional, and higher commands, such units were the strongest, most mobile, and most offensive-minded of the Viet Cong forces; their mission often was to attack and defeat a specific South Vietnamese unit.
Missions were assigned and approved by a political officer who, in most cases, was superior to the unit's military commander. Party policy, military discipline, and unit cohesion were inculcated and reinforced by three-man party cells in every unit. Among the insurgents, war was always the servant of policy.
As the Viet Cong's control over the population increased, their military forces grew in number and size. Squads and platoons became companies, companies formed battalions, and battalions were organized into regiments. This process of creating and enlarging units continued as long as the Viet Cong had a base of support among the population. After 1959, however, infiltrators from the North also became important. Hanoi activated a special military transportation unit to control overland infiltration along the Ho Chi Minh Trail through Laos and Cambodia. Then a special naval unit was set up to conduct sea infiltration. At first, the infiltrators were southern-born Viet Minh soldiers who had regrouped north after the French Indochina War. Each year until 1964, thousands returned south to join or to form Viet Cong units, usually in the areas where they had originated. Such men served as experienced military or political cadres, as technicians, or as rank-and-file combatants wherever local recruitment was difficult.
When the pool of about 80,000 so-called regroupees ran dry, Hanoi began sending native North Vietnamese soldiers as individual replacements and reinforcements. In 1964 the Communists started to introduce entire North Vietnamese Army (NVA) units into the South. Among the infiltrators were senior cadres, who manned the expanding Viet Cong command system— regional headquarters, interprovincial commands, and the Central Office for South Vietnam (COSVN), the supreme military and political headquarters. As the southern branch of the Vietnamese Communist Party, COSVN was directly subordinate to the Central Committee in Hanoi. Its senior commanders were high-ranking officers of North Vietnam's Army. To equip the growing number of Viet Cong forces in the South, the insurgents continued to rely heavily on arms and supplies captured from South Vietnamese forces. But, increasingly, large numbers of weapons, ammunition, and other equipment arrived from the North, nearly all supplied by the Sino-Soviet bloc.
From a strength of approximately 5,000 at the start of 1959, the Viet Cong's ranks grew to about 100,000 at the end of 1964. The number of infiltrators alone during that period was estimated at 41,000. The growth of the insurgency reflected not only North Vietnam's skill in infiltrating men and weapons, but South Vietnam's inability to control its porous borders, Diem's failure to develop a credible pacification program to reduce Viet Cong influence in the countryside, and the South Vietnamese Army's difficulties in reducing long-standing Viet Cong bases and secret zones. Such areas not only facilitated infiltration, but were staging areas for operations; they contained training camps, hospitals, depots, workshops, and command centers. Many bases were in remote areas seldom visited by the army, such as the U Minh Forest or the Plain of Reeds. But others existed in the heart of populated areas, in the "liberated zones." There Viet Cong forces, dispersed among hamlets and villages, drew support from the local economy. From such centers the Viet Cong expanded their influence into adjacent areas that were nominally under Saigon's control.
A New President Takes Charge
Soon after John F. Kennedy became President in 1961, he sharply increased military and economic aid to South Vietnam to help Diem defeat the growing insurgency. For Kennedy, insurgencies (or "wars of national liberation" in the parlance of Communist leaders) were a challenge to international security every bit as serious as nuclear war. The administration's approach to both extremes of conflict rested on the precepts of the flexible response. Regarded as a form of "sub-limited" or small war, insurgency was treated largely as a military problem—conventional war writ small—and hence susceptible to resolution by timely and appropriate military action. Kennedy's success in applying calculated military pressures to compel the Soviet Union to remove its offensive missiles from Cuba in 1962 reinforced the administration's disposition to deal with other international crises, including the conflict in Vietnam, in a similar manner.
Though an advance over the New Look, his policy also had limitations. Long-term strategic planning tended to be sacrificed to short-term crisis management. Planners were all too apt to assume that all belligerents were rational and that the foe subscribed as they did to the seductive logic of the flexible response. Hoping to give the South Vietnamese a margin for success, Kennedy periodically authorized additional military aid and support between 1961 and November 1963, when he was assassinated. But potential benefits were nullified by the absence of a clear doctrine and a coherent operational strategy for the conduct of counterinsurgency, and by chronic military and political shortcomings on the part of the South Vietnamese.
The U.S. Army played a major role in Kennedy's "beef up" of the American advisory and support efforts in South Vietnam. In turn, that role was made possible in large measure by Kennedy's determination to increase the strength and capabilities of Army forces for both conventional and unconventional operations. Between 1961 and 1964 the Army's strength rose from about 850,000 to nearly a million men, and the number of combat divisions grew from eleven to sixteen. These increases were backed up by an ambitious program to modernize Army equipment and, by stockpiling supplies and equipment at forward bases, to increase the deployability and readiness of Army combat forces. The build-up, however, did not prevent the call-up of 120,000 Reservists to active duty in the summer of 1961, a few months after Kennedy assumed office. Facing renewed Soviet threats to force the Western Powers out of Berlin, Kennedy mobilized the Army to reinforce NATO, if need be. But the mobilization revealed serious shortcomings in Reserve readiness and produced a swell of criticism and complaints from Congress and Reservists alike. Although Kennedy sought to remedy the deficiencies that were exposed and set in motion plans to reorganize the Reserves, the unhappy experience of the Berlin Crisis was fresh in the minds of national leaders when they faced the prospect of war in Vietnam a few years later.
Facing trouble spots in Latin America, Africa, and Southeast Asia, Kennedy took a keen interest in the U.S. Army's Special Forces, believing that their skills in unconventional warfare were well suited to countering insurgency. During his first year in office, he increased the strength of the Special Forces from about 1,500 to 9,000 and authorized them to wear a distinctive green beret. In the same year he greatly enlarged their role in South Vietnam. First under the auspices of the Central Intelligence Agency and then under a military commander, the Special Forces organized the highland tribes into the Civilian Irregular Defense Group (CIDG) and in time sought to recruit other ethnic groups and sects in the South as well. To this scheme, underwritten almost entirely by the United States, Diem gave only tepid support. Indeed, the civilian irregulars drew strength from groups traditionally hostile to Saigon. Treated with disdain by the lowland Vietnamese, the Montagnards developed close, trusting relations with their Army advisers. Special Forces detachment commanders frequently were the real leaders of CIDG units. This strong mutual bond of loyalty between adviser and highlander benefited operations, but some tribal leaders sought to exploit the special relationship to advance Montagnard political autonomy. On occasion, Special Forces advisers found themselves in the awkward position of mediating between militant Montagnards and South Vietnamese officials who were suspicious and wary of the Americans' sympathy for the highlanders.
Through a village self-defense and development program, the Special Forces aimed initially to create a military and political buffer to the growing Viet Cong influence in the Central Highlands. Within a few years, approximately 60,000 highlanders had enlisted in the CIDG program. As their participation increased, so too did the range of Special Forces activities. In addition to village defense programs, the Green Berets sponsored offensive guerrilla activities and border surveillance and control measures. To detect and impede the Viet Cong, camps were established astride infiltration corridors and near enemy base areas, especially along the Cambodian and Laotian borders. But the camps themselves were vulnerable to enemy attack and, despite their presence, infiltration continued. At times, border control diverted tribal units from village defense, the original heart of the CIDG program.
By 1965, as the military situation in the highlands worsened, many CIDG units had changed their character and begun to engage in quasi-conventional military operations. In some instances, irregulars under the leadership of Army Special Forces stood up to crack enemy regiments, offering much of the military resistance to enemy efforts to dominate the highlands. Yet the Special Forces—despite their efforts in South Vietnam and in Laos, where their teams helped to train and advise anti-Communist Laotian forces in the early 1960s—did not provide an antidote to the virulent insurgency in Vietnam. Long-standing animosities between Montagnard and Vietnamese prevented close, continuing co-operation between the South Vietnamese Army and the irregulars. Long on promises but short on action to improve the lot of the Montagnards, successive South Vietnamese regimes failed to win the loyalty of the tribesmen. And the Special Forces usually operated in areas that were remote from the main Viet Cong threat to the heavily populated and economically important Delta and coastal regions of the country.
Besides the Special Forces, the Army's most important contribution to the fight was the helicopter. Neither Kennedy nor the Army anticipated the rapid growth of aviation in South Vietnam when the first helicopter transportation company arrived in December 1961. Within three years, however, each of South Vietnam's divisions and corps was supported by Army helicopters, with the faster, more reliable and versatile UH-1 (Huey) replacing the older CH-21. In addition to transporting men and supplies, helicopters were used to reconnoiter, to evacuate wounded, and to provide command and control. The Vietnam conflict became the crucible in which Army airmobile and air assault tactics evolved. As armament was added—first machine gun-wielding door-gunners, and later rockets and mini-guns—armed helicopters began to protect troop carriers against antiaircraft fire, to suppress enemy fire around landing zones during air assaults, and to deliver fire support to troops on the ground.
Army fixed-wing aircraft also flourished. Equipped with a variety of detection devices, the OV-1 Mohawk conducted day and night surveillance of Viet Cong bases and trails. The Caribou, with its sturdy frame and ability to land and take off on short, unimproved airfields, proved ideal to supply remote camps.
Army aviation revived old disagreements with the Air Force over the roles and missions of the two services and the adequacy of Air Force close air
support. The expansion of the Army's own "air force" nevertheless continued, abetted by the Kennedy administration's interest in extending airmobility to all types of land warfare, from counterinsurgency to the nuclear battlefield. Secretary of Defense Robert S. McNamara himself encouraged the Army to test an experimental air assault division. During 1963 and 1964 the Army demonstrated that helicopters could successfully replace ground vehicles for mobility and provide fire support in lieu of ground artillery. The result was the creation in 1965 of the 1st Cavalry Division (Airmobile)—the first such unit in the Army. In South Vietnam the helicopter's effect on organization and operations was as sweeping as the influence of mechanized forces in World War II. Many of the operational concepts of airmobility, rooted in cavalry doctrine and operations, were pioneered by helicopter units between 1961 and 1964, and later adopted by the new airmobile division and by all Army combat units that fought in South Vietnam.
In addition to Army Special Forces and helicopters, Kennedy greatly expanded the entire American advisory effort. Advisers were placed at the sector (provincial) level and were permanently assigned to infantry battalions and certain lower echelon combat units; additional intelligence advisers were
sent to South Vietnam. Wide use was made of temporary training teams in psychological warfare, civic action, engineering, and a variety of logistical functions. With the expansion of the advisory and support efforts came demands for better communications, intelligence, and medical, logistical, and administrative support, all of which the Army provided from its active forces, drawing upon skilled men and units from U.S.-based forces. The result was a slow, steady erosion of its capacity to meet worldwide contingency obligations. But if Vietnam depleted the Army, it also provided certain advantages. The war was a laboratory in which to test and evaluate new equipment and techniques applicable to counterinsurgency—among others, the use of chemical defoliants and herbicides, both to remove the jungle canopy that gave cover to the guerrillas and to destroy their crops. As the activities of all the services expanded, U.S. military strength in South Vietnam increased from under 700 at the start of 1960 to almost 24,000 by the end of 1964. Of these, 15,000 were Army and a little over 2,000 were Army advisers.
Changes in American command arrangements attested to the growing
commitment. In February 1962 the Joint Chiefs of Staff established the United States Military Assistance Command, Vietnam (USMACV), in Saigon as the senior American military headquarters in South Vietnam, and appointed General Paul D. Harkins as commander (COMUSMACV). Harkins reported to the Commander in Chief, Pacific (CINCPAC), in Hawaii, but because of high-level interest in South Vietnam, enjoyed special access to military and civilian leaders in Washington as well. Soon MACV moved into the advisory effort hitherto directed by the Military Assistance Advisory Group. To simplify the advisory chain of command, the latter was disestablished in May 1964, and MACV took direct control. As the senior Army commander in South Vietnam, the MACV commander also commanded Army support units; for day-to-day operations, however, control of such units was vested in the corps and division senior advisers. For administrative and logistical support Army units looked to the U.S. Army Support Group, Vietnam (later the U.S. Army Support Command), which was established in mid-1962.
Though command arrangements worked tolerably well, complaints were heard in and out of the Army. Some officials pressed for a separate Army component commander, who would be responsible both for operations and for logistical support—an arrangement enjoyed by other services in South Vietnam. Airmen tended to believe that an Army command already existed, disguised as MACV. They believed that General Harkins, though a joint commander, favored the Army in the bitter interservice rivalry over the roles and missions of aviation in South Vietnam. Some critics thought his span of control excessive, for Harkins' responsibility extended to Thailand, where Army combat units had deployed in 1962, aiming to overawe Communist forces in neighboring Laos. The Army undertook several logistical projects in Thailand, and Army engineers, signalmen, and other support forces remained there after combat forces withdrew in the fall of 1962.
While the Americans strengthened their position in South Vietnam and Thailand, the Communists tightened their grip in Laos. In 1962 agreements on that small, land-locked nation were signed in Geneva requiring all foreign military forces to leave Laos. American advisers, including hundreds of Special Forces, departed. But the agreements were not honored by North Vietnam. Its army, together with Laotian Communist forces, consolidated their hold on areas adjacent to both North and South Vietnam through which passed the network of jungle roads called the Ho Chi Minh Trail. As a result, it became easier to move supplies south to support the Viet Cong in the face of the new dangers embodied in U.S. advisers, weapons, and tactics.
At first the enhanced mobility and firepower afforded the South Vietnamese Army by helicopters, armored personnel carriers, and close air support surprised and overwhelmed the Viet Cong. Saigon's forces reacted more quickly to insurgent attacks and penetrated many Viet Cong areas. Even more threatening to the insurgents was Diem's strategic hamlet program, launched in late 1961. Diem and his brother Ngo Dinh Nhu, an ardent sponsor of the program, hoped to create thousands of new, fortified villages, often by moving peasants from their existing homes. Hamlet construction and defense were the responsibility of the new residents, with paramilitary and ARVN forces providing initial security while the peasants were recruited and organized. As security improved, Diem and Nhu hoped to enact social, economic, and political reforms which, when fully carried out, would constitute Saigon's revolutionary response to Viet Cong promises of social and economic betterment. If successful, the program might destroy the insurgency by separating and protecting the rural population from the Viet Cong, threatening the rebellion's base of support.
By early 1963, however, the Viet Cong had learned to cope with the army's new weapons and more aggressive tactics and had begun a campaign to eliminate the strategic hamlets. The insurgents became adept at countering helicopters and slow-flying aircraft and learned the vulnerabilities of armored personnel carriers. In addition, their excellent intelligence, combined with the predictability of ARVN's tactics and pattern of operations, enabled the Viet Cong to evade or ambush government forces. The new weapons the United States had provided the South Vietnamese did not compensate for the stifling influence of poor leadership, dubious tactics, and inexperience. The much publicized defeat of government forces at the Delta village of Ap Bac in January 1963 demonstrated both the Viet Cong's skill in countering ARVN's new capabilities and the latter's inherent weaknesses. Faulty intelligence, poorly planned and executed fire support, and overcautious leadership contributed to the outcome. But Ap Bac's significance transcended a single battle. The defeat was a portent of things to come. Now able to challenge ARVN units of equal strength in quasi-conventional battles, the Viet Cong were moving into a more intense stage of revolutionary war.
As the Viet Cong became stronger and bolder, the South Vietnamese Army became more cautious and less offensive-minded. Government forces became reluctant to respond to Viet Cong depredations in the countryside, avoided night operations, and resorted to ponderous sweeps against vague military objectives, rarely making contact with their enemies. Meanwhile,
the Viet Cong concentrated on destroying strategic hamlets, showing that they considered the settlements, rather than ARVN forces, the greater danger to the insurgency. Poorly defended hamlets and outposts were overrun or subverted by enemy agents who infiltrated with peasants arriving from the countryside.
The Viet Cong's campaign was aided by Saigon's failures. The government built too many hamlets to defend. Hamlet militia varied from those who were poorly trained and armed to those who were not trained or armed at all. Fearing that weapons given to the militia would fall to the Viet Cong, local officials often withheld arms. Forced relocation, use of forced peasant labor to construct hamlets, and tardy payment of compensation for relocation were but a few reasons why peasants turned against the program. Few meaningful reforms took place. Accurate information on the program's true condition and on the decline in rural security was hidden from Diem by officials eager to please him with reports of progress. False statistics and reports misled U.S. officials, too, about the progress of the counterinsurgency effort.
If the decline in rural security was not always apparent to Americans, the lack of enlightened political leadership on the part of Diem was all too obvious. Diem habitually interfered in military matters—bypassing the chain of command to order operations, forbidding commanders to take casualties, and appointing military leaders on the basis of political loyalty rather than competence. Many military and civilian appointees, especially province and district chiefs, were dishonest and put career and fortune above the national interest. When Buddhist opposition to certain policies erupted into violent antigovernment demonstrations in 1963, Diem's uncompromising stance and use of military force to suppress the demonstrators caused some generals to decide that the President was a liability in the fight against the Viet Cong. On 1 November, with American encouragement, a group of reform-minded generals ousted Diem, who was murdered along with his brother.
Political turmoil followed the coup. Emboldened, the insurgents stepped up operations and increased their control over many rural areas. North Vietnam's leaders decided to intensify the armed struggle, aiming to demoralize the South Vietnamese Army and further undermine political authority in the South. As Viet Cong military activity quickened, regular North Vietnamese Army units began to train for possible intervention in the war. Men and equipment continued to flow down the Ho Chi Minh Trail, with North Vietnamese conscripts replacing the dwindling pool of southerners who had belonged to the Viet Minh.
Setting the Stage for Confrontation
The critical state of rural security that came to light after Diem's death again prompted the United States to expand its military aid to Saigon. General Harkins and his successor General William C. Westmoreland urgently strove to revitalize pacification and counterinsurgency. Army advisers helped their Vietnamese counterparts to revise national and provincial pacification plans. They retained the concept of fortified hamlets as the heart of a new national counterinsurgency program, but corrected the old abuses, at least in theory. To help implement the program, Army advisers were assigned to the subsector (district) level for the first time, becoming more intimately involved in local pacification efforts and in paramilitary operations. Additional advisers were assigned to units and training centers, especially those of the Regional and Popular Forces (formerly called the Civil Guard and Self-Defense Corps). All Army activities, from aviation support to Special Forces, were strengthened in a concerted effort to undo the effects of years of Diem's mismanagement.
At the same time, American officials in Washington, Hawaii, and Saigon began to explore ways to increase military pressure against North Vietnam. In 1964 the South Vietnamese launched covert raids under MACV's auspices. Some military leaders, however, believed that only direct air strikes against North Vietnam would induce a change in Hanoi's policies by demonstrating American determination to defend South Vietnam's independence. Air strike plans ranged from immediate massive bombardment of military and industrial targets to gradually intensifying attacks spanning several months.
The interest in using air power reflected lingering sentiment in the United States against involving American ground forces once again in a land war on the Asian continent. Many of President Lyndon B. Johnson's advisers—among them General Maxwell D. Taylor, who was appointed Ambassador to Saigon in mid-1964—believed that a carefully calibrated air campaign would be the most effective means of exerting pressure against the North and, at the same time, the method least likely to provoke intervention by China. Taylor thought conventional Army ground forces ill suited to engage in day-to-day counterinsurgency operations against the Viet Cong in hamlets and villages. Ground forces might, however, be used to protect vital air bases in the South and to repel any North Vietnamese attack across the demilitarized zone, which separated North from South Vietnam. Together, a more vigorous counterinsurgency effort in the South and military pressure against the North might buy time for Saigon to put its political house in order, boost flagging military and civilian morale, and strengthen its military
position in the event of a negotiated peace. Taylor and Westmoreland, the senior U.S. officials in South Vietnam, agreed that Hanoi was unlikely to change its course unless convinced that it could not succeed in the South. Both recognized that air strikes were neither a panacea nor a substitute for military efforts in the South.
As each side undertook more provocative military actions, the likelihood of a direct military confrontation between North Vietnam and the United States increased. The crisis came in early August 1964 in the international waters of the Gulf of Tonkin. North Vietnamese patrol boats attacked U.S. naval vessels engaged in surveillance of North Vietnam's coastal defenses. The Americans promptly launched retaliatory air strikes. At the request of President Johnson, Congress overwhelmingly passed the Southeast Asia Resolution—the so-called Gulf of Tonkin Resolution—authorizing all actions necessary to protect American forces and to provide for the defense of the nation's allies in Southeast Asia. Considered by some in the administration as the equivalent of a declaration of war, this broad grant of authority encouraged Johnson to expand American military efforts within South Vietnam, against North Vietnam, and in Southeast Asia at large.
By late 1964, both sides were poised to increase their stake in the war. Regular NVA units had begun moving south and stood at the Laotian frontier, on the threshold of crossing into South Vietnam's Central Highlands. U.S. air and naval forces stood ready to renew their attacks. On 7 February 1965, Communist forces attacked an American compound in Pleiku in the Central Highlands and a few days later bombed American quarters in Qui Nhon. The United States promptly bombed military targets in the North. A few weeks later, President Johnson approved ROLLING THUNDER, a campaign of sustained, direct air strikes of progressively increasing strength against military and industrial targets in North Vietnam. Signs of intensifying conflict appeared in South Vietnam as well. Strengthening their forces at all echelons, from village guerrillas to main force regiments, the Viet Cong quickened military activity in late 1964 and in the first half of 1965. At Binh Gia, a village forty miles east of Saigon in Phuoc Tuy Province, a multiregimental Viet Cong force—possibly the 1st Viet Cong Infantry Division—fought and defeated several South Vietnamese battalions.
Throughout the spring the Viet Cong sought to disrupt pacification and oust the government from many rural areas. The insurgents made deep inroads in the central coastal provinces and withstood government efforts to reduce their influence in the Delta and in the critical provinces around Saigon. Committed to static defense of key towns and bases, government forces were unable or unwilling to respond to attacks against rural communities. In late spring and early summer, strong Communist forces sought a major military victory over the South Vietnamese Army by attacking border posts and highland camps. The enemy also hoped to draw government forces from populated areas, to weaken pacification further. By whipsawing war-weary ARVN forces between coast and highland and by inflicting a series of damaging defeats against regular units, the enemy hoped to undermine military morale and popular confidence in the Saigon government. And by accelerating the dissolution of government military forces, already racked by high desertions and casualties, the Communists hoped to compel the South Vietnamese to abandon the battlefield and seek an all-Vietnamese political settlement that would force the United States to leave South Vietnam.
By the summer of 1965, the Viet Cong, strengthened by several recently infiltrated NVA regiments, had gained the upper hand over government forces in some areas of South Vietnam. With U.S. close air support and the aid of Army helicopter gunships, Saigon's forces repelled many enemy attacks, but suffered heavy casualties. Elsewhere highland camps and border outposts had to be abandoned. ARVN's cumulative losses from battle deaths and desertions amounted to nearly a battalion a week. Saigon was hard pressed to find men to replenish these heavy losses and completely unable to match the growth of Communist forces from local recruitment and infiltration. Some American officials doubted whether the South Vietnamese could hold out until ROLLING THUNDER created pressures sufficiently strong to convince North Vietnam's leaders to reduce the level of combat in the South. General Westmoreland and others believed that U.S. ground forces were needed to stave off an irrevocable shift of the military and political balance in favor of the enemy.
For a variety of diplomatic, political, and military reasons, President Johnson approached with great caution any commitment of large ground combat forces to South Vietnam. Yet preparations had been under way for some time. In early March 1965, a few days after ROLLING THUNDER began, American marines went ashore in South Vietnam to protect the large airfield at Da Nang—a defensive security mission. Even as they landed, General Harold K. Johnson, Chief of Staff of the Army, was in South Vietnam to assess the situation. Upon returning to Washington, he recommended a substantial increase in American military assistance, including several combat divisions. He wanted U.S. forces either to interdict the Laotian panhandle to stop infiltration or to counter a growing enemy threat in the central and northern provinces.
But President Johnson sanctioned only the dispatch of additional marines to increase security at Da Nang and to secure other coastal enclaves. He also
authorized the Army to begin deploying nearly 20,000 logistical troops, the main body of the 1st Logistical Command, to Southeast Asia. (Westmoreland had requested such a command in late 1964.) At the same time, the President modified the marines' mission to allow them to conduct offensive operations close to their bases. A few weeks later, to protect American bases in the vicinity of Saigon, Johnson approved sending the first Army combat unit, the 173d Airborne Brigade (Separate), to South Vietnam. Arriving from Okinawa in early May, the brigade moved quickly to secure the air base at Bien Hoa, just northeast of Saigon. With its arrival, U.S. military strength in South Vietnam passed 50,000. Despite added numbers and expanded missions, American ground forces had yet to engage the enemy in full-scale combat.
Indeed, the question of how best to use large numbers of American ground forces was still unresolved on the eve of their deployment. Focusing on population security and pacification, some planners saw U.S. combat forces concentrating their efforts in coastal enclaves and around key urban centers and bases. Under this plan, such forces would provide a security shield behind which the Vietnamese could expand the pacification zone; when required, American combat units would venture beyond their enclaves as mobile reaction forces.
This concept, largely defensive in nature, reflected the pattern established by the first Army combat units to enter South Vietnam. But the mobility and offensive firepower of U.S. ground units suggested their use in remote, sparsely populated regions to seek out and engage main force enemy units as they infiltrated into South Vietnam or emerged from their secret bases. While secure coastal logistical enclaves and base camps still would be required, the weight of the military effort would be focused on the destruction of enemy military units. Yet even in this alternative, American units would serve indirectly as a shield for pacification activities in the more heavily populated lowlands and Delta. A third proposal had particular appeal to General Johnson. He wished to employ U.S. and allied ground forces across the Laotian panhandle to interdict enemy infiltration into South Vietnam. Here was a more direct and effective way to stop infiltration than the use of air power. Encumbered by military and political problems, the idea was revived periodically but always rejected. The pattern of deployment that actually developed in South Vietnam was a compromise between the first two concepts.
For any type of operations, secure logistical enclaves at deep-water ports (Cam Ranh Bay, Nha Trang, Qui Nhon, for example) were a military necessity. In such areas combat units arrived and bases developed for regional
logistical complexes to support the troops. As the administration neared a decision on combat deployment, the Army began to identify and ready units for movement overseas and to prepare mobilization plans for Selected Reserve forces. The dispatch of Army units to the Dominican Republic in May 1965 to forestall a leftist take-over caused only minor adjustments to the build-up plans. The episode nevertheless showed how unexpected demands elsewhere in the world could deplete the strategic reserve, and it underscored the importance of mobilization if the Army was to meet worldwide contingencies and supply trained combat units to Westmoreland as well.
The prospect of deploying American ground forces also revived discussions of allied command arrangements. For a time, Westmoreland considered placing South Vietnamese and American forces under a single commander, an arrangement similar to that of U.S. and South Korean forces during the Korean War. In the face of South Vietnamese opposition, however, the idea was dropped. Arrangements with other allies were varied. Americans in South Vietnam were joined by combat units from Australia, New Zealand, South Korea, Thailand, and by noncombat elements from several other nations. Westmoreland entered into separate agreements with each commander in turn; the compacts ensured close co-operation with MACV, but fell short of giving Westmoreland command over the allied forces.
While diversity marked these arrangements, Westmoreland strove for unity within the American build-up. As forces began to deploy to South Vietnam, the Army again sought to elevate the U.S. Army, Vietnam (USARV), to a full-fledged Army component command with responsibility for combat operations. But Westmoreland successfully warded off the challenge to his dual role as unified commander of MACV and Army commander. For the remainder of the war, USARV performed solely in a logistical and administrative capacity; unlike MACV's air and naval component commands, the Army component did not exercise operational control over combat forces, special forces, or field advisers. However, through its logistical, engineer, signal, medical, military police, and aviation commands all established in the course of the build-up, USARV commanded and managed a support base of unprecedented size and scope.
Despite this victory, unity of command over the ground war in South Vietnam eluded Westmoreland, as did over-all control of U.S. military operations in support of the war. Most air and naval operations outside of South Vietnam, including ROLLING THUNDER, were carried out by the Commander in Chief, Pacific, and his air and naval commanders from his headquarters thousands of miles away in Hawaii. This patchwork of command arrangements contributed to the lack of a unified strategy, the fragmentation
of operations, and the pursuit of parochial service interests to the detriment of the war effort. No single American commander had complete authority or responsibility to fashion an over-all strategy or to co-ordinate all military aspects of the war in Southeast Asia. Furthermore, Westmoreland labored under a variety of political and operational constraints on the use of the combat forces he did command. Like the Korean War, the struggle in South Vietnam was complicated by enemy sanctuaries and by geographical and political restrictions on allied operations. Ground forces were barred from operating across South Vietnam's borders into Cambodia, Laos, or North Vietnam, although the border areas of those countries were vital to the enemy's war effort. These factors narrowed Westmoreland's freedom of action and detracted from his efforts to make effective use of American military power.
Groundwork for Combat: Build-up and Strategy
On 28 July 1965, President Johnson announced plans to deploy additional combat units and to increase American military strength in South Vietnam to 175,000 by year's end. The Army already was preparing hundreds of units for duty in Southeast Asia, among them the newly activated 1st Cavalry Division (Airmobile). Other combat units—the 1st Brigade, 101st Airborne Division, and all three brigades of the 1st Infantry Division—were either ready to go or already on their way to Vietnam. Together with hundreds of support and logistical units, these combat units constituted the first phase of the build-up during the summer and fall of 1965.
At the same time, President Johnson decided not to mobilize any Reserve units. The President's decision profoundly affected the manner in which the Army supported and sustained the build-up. To meet the call for additional combat forces and to obtain manpower to enlarge its training base and to maintain a pool for rotation and replacement of soldiers in South Vietnam, the Army had to increase its active strength, over the next three years, to nearly 1.5 million men. Necessarily, it relied on larger draft calls and voluntary enlistments, supplementing them with heavy drawdowns of experienced soldiers from units in Europe and South Korea and extensions of some tours of duty to retain specialists, technicians, and cadres who could train recruits or round out deploying units. Combat units assigned to the strategic reserve were used to meet a large portion of MACV's force requirements, and Reservists were not available to replace them. Mobilization could have eased the additional burden of providing noncommissioned officers (NCO's) and officers to man the Army's growing training bases.
As matters stood, the personnel turbulence caused by competing demands for the Army's limited manpower was intensified by a one-year tour of duty in South Vietnam. A large number of men was needed to sustain the rotational base, often necessitating the quick return to Vietnam of men with critical skills. The heightened demand for leaders led to accelerated training programs and the lowering of standards for NCO's and junior officers. Moreover, the one-year tour deprived units in South Vietnam of experienced leadership. In time, the infusion of less-seasoned NCO's and officers contributed to a host of morale problems that afflicted some Army units. At a deeper level, the administration's decision against calling the Reserves to active duty sent the wrong signal to friends and enemies alike, implying that the nation lacked the resolution to support an effort of the magnitude needed to achieve American objectives in South Vietnam.
Hence the Army began to organize additional combat units. Three light infantry brigades were activated, and the 9th Infantry Division was reactivated. In the meantime the 4th and 25th Infantry Divisions were alerted for deployment to South Vietnam. With the exception of a brigade of the 25th, all of the combat units activated and alerted during the second half of 1965 deployed to South Vietnam during 1966 and 1967. By the end of 1965, U.S. military strength in South Vietnam had reached 184,000; a year later it stood at 385,000; and by the end of 1967 it approached 490,000. Army personnel accounted for nearly two-thirds of the total. Of the Army's eighteen divisions, at the end of 1967, seven were serving in South Vietnam.
Facing a deteriorating military situation, Westmoreland in the summer of 1965 planned to use his combat units to blunt the enemy's spring-summer offensive. As they arrived in the country, Westmoreland moved them into a defensive arc around Saigon and secured bases for the arrival of subsequent units. His initial aim was defensive—to stop losing the war and to build a structure that could support a later transition to an offensive campaign. As additional troops poured in, Westmoreland planned to seek out and defeat major enemy forces. Throughout both phases, the South Vietnamese, relieved of major combat tasks, were to refurbish their forces and conduct an aggressive pacification program behind the American shield. In a third and final stage, as enemy main force units were driven into their secret zones and bases, Westmoreland hoped to achieve victory by destroying those sanctuaries and shifting the weight of the military effort to pacification, thereby at last subduing the Viet Cong throughout rural South Vietnam.
The fulfillment of this concept rested not only on the success of American efforts to find and defeat enemy forces, but on the success of Saigon's
pacification program. In June 1965 the last in a series of coups that followed Diem's overthrow brought in a military junta headed by Lt. Gen. Nguyen Van Thieu as Chief of State and Air Vice Marshal Nguyen Cao Ky as Prime Minister. The new government provided the political stability requisite for successful pacification. Success hinged also on the ability of the U.S. air campaign against the North to reduce the infiltration of men and material, dampening the intensity of combat in the South and inducing Communist leaders in Hanoi to alter their long-term strategic goals. Should any strand of this threefold strategy—the campaign against Communist forces in the South, Saigon's pacification program, and the air war in the North—falter, Westmoreland's prospects would become poorer. Yet he was directly responsible for only one element, the U.S. military effort in the South. To a lesser degree, through American advice and assistance to the South Vietnamese forces, he also influenced Saigon's efforts to suppress the Viet Cong and to carry out pacification.
Army Operations in III and IV Corps, 1965-1967
Centered on the defense of Saigon, Westmoreland's concept of operations in the III Corps area had a clarity of design and purpose that was not always apparent elsewhere in South Vietnam. (Map 48) Nearly two years would pass before U.S. forces could maintain a security belt around the capital and at the same time attack the enemy's bases. But Westmoreland's ultimate aims and the difficulties he would encounter were both foreshadowed by the initial combat operations in the summer and fall of 1965.
Joined by newly arrived Australian infantrymen, the 173d Airborne Brigade during June began operations in War Zone D, a longtime enemy base north of Saigon. Though diverted several times to other tasks, the brigade gained experience in conducting heliborne assaults and accustomed itself to the rigors of jungle operations. It also established a pattern of operations that was to grow all too familiar. Airmobile assaults, often in the wake of B-52 air strikes, were followed by extensive patrolling, episodic contact with the Viet Cong, and withdrawal after a few days' stay in the enemy's territory. In early November the airborne soldiers uncovered evidence of the enemy's recent and hasty departure—abandoned camps, recently vacated tunnels, and caches of food and supplies. However, the Viet Cong, by observing the brigade, began to formulate plans for dealing with the Americans.
On 8 November, moving deeper into War Zone D, the brigade encountered the first significant resistance. A multibattalion Viet Cong force attacked at close quarters and forced the Americans into a tight defensive perimeter. Hand-to-hand combat ensued as the enemy tried to "hug" American soldiers to prevent the delivery of supporting air and artillery fire. Unable to prepare a landing zone to receive reinforcements or to evacuate casualties, the beleaguered Americans withstood repeated enemy assaults. At nightfall the Viet Cong ceased their attack and withdrew under cover of darkness. Next morning, when reinforcements arrived, the brigade pursued the enemy, finding evidence that he had suffered heavy casualties. Such operations inflicted losses but failed either to destroy the enemy's base or to prevent him from returning to it later on.
Like the airborne brigade, the 1st Infantry Division initially divided its efforts. In addition to securing its base camps north of Saigon, the division helped South Vietnamese forces clear an area west of the capital in the vicinity of Cu Chi in Hau Nghia Province. Reacting to reports of enemy troop concentrations, units of the division launched a series of operations in the fall of 1965 and early 1966 that entailed quick forays into the Ho Bo and Boi Loi woods, the Michelin Rubber Plantation, the Rung Sat swamp, and War Zones C and D. In Operation MASTIFF, for example, the division sought to disrupt Viet Cong infiltration routes between War Zones C and D that crossed the Boi Loi woods in Tay Ninh Province, an area that had not been penetrated by government forces for several years.
But defense of Saigon was the first duty of the "Big Red One" as well as of the 25th Infantry Division, which arrived in the spring of 1966. The 1st Division took up a position protecting the northern approaches, blocking Route 13 from the Cambodian border. The 25th guarded the western approaches, chiefly Route 1 and the Saigon River. The two brigades of the 25th Division served also as a buffer between Saigon and the enemy's base areas in Tay Ninh Province. Westmoreland hoped, however, that the 25th Division would loosen the insurgents' tenacious hold on Hau Nghia Province as well. Here American soldiers found to their amazement that the division's camp at Cu Chi had been constructed atop an extensive Viet Cong tunnel complex. Extending over an area of several miles, this subterranean network, one of several in the region, contained hospitals, command centers, and storage sites. The complex, though partially destroyed by Army "tunnel rats," was never completely eliminated and lasted for the duration of the war. The 25th Division worked closely with South Vietnamese Army and paramilitary forces throughout 1966 and 1967 to foster pacification in Hau Nghia and to secure its own base. But suppressing insurgency in Hau Nghia proved as difficult as eradicating the tunnels at Cu Chi.
As the number of Army combat units in Vietnam grew larger, Westmoreland established two corps-size commands, I Field Force in the II Corps area and II Field Force in the III Corps area. Reporting directly to the
MACV commander, the field force commander was the senior Army tactical commander in his area and the senior U.S. adviser to ARVN forces there. Working closely with his South Vietnamese counterpart, he co-ordinated ARVN and American operations by establishing territorial priorities for combat and pacification efforts. Through his deputy senior adviser, a position established in 1967, the field force commander was able to keep abreast both of the activities of U.S. sector (province) and subsector (district) advisers and of the progress of Saigon's pacification efforts. A similar arrangement was set up in I Corps, where the commander of the III Marine Amphibious Force was the equivalent of a field force commander. Only in IV Corps, in the Mekong Delta where few American combat units served, did Westmoreland choose not to establish a corps-size command. There the senior U.S. adviser served as COMUSMACV's representative; he commanded Army advisory and support units, but no combat units.
Although Army commanders in III Corps were eager to seek out and engage enemy main force units in their strongholds along the Cambodian border, operations at first were devoted to base and area security and to clearing and rehabilitating roads. The 1st Infantry Division's first major encounter with the Viet Cong occurred in November as division elements carried out a routine road security operation along Route 13, in the vicinity of the village of Bau Bang. Trapping convoys along Route 13 had long been a profitable Viet Cong tactic. Ambushed by a large, well-entrenched enemy force, division troops reacted aggressively and mounted a successful counterattack. But the road was by no means secured; close to enemy bases, the Cambodian border, and Saigon, Route 13 would be the site of several major battles in years to come.
Roads were a major concern of U.S. commanders. In some operations, infantrymen provided security as Army engineers improved neglected routes. Defoliants and the Rome plow—a bulldozer modified with sharp front blades—removed from the sides of important highways the jungle growth that provided cover for Viet Cong ambushes. Road-clearing operations also contributed to pacification by providing peasants with secure access to local markets. In III Corps, with its important road network radiating from Saigon, ground mobility was as essential as airmobility for the conduct of military operations. Lacking as many helicopters as the airmobile division, the 1st and 25th Infantry Divisions, like all Army units in South Vietnam, strained the resources of their own aviation support units and of other Army aviation units providing area support to obtain the maximum airmobile capacity for each operation. Nevertheless, on many occasions the Army found itself road bound.
Road and convoy security was also the original justification for introducing Army mechanized and armor units into South Vietnam in 1966. At first Westmoreland was reluctant to bring heavy mechanized equipment into South Vietnam, for it seemed ill suited either to counterinsurgency operations or to operations during the monsoon season, when all but a few roads were impassable. Armor advocates pressed Westmoreland to reconsider his policy. Operation CIRCLE PINES, carried out by elements of the 25th Infantry Division in the spring of 1966, successfully combined an infantry force and an armor battalion. This experience, together with new studies indicating a greater potential for mechanized forces, led Westmoreland to reverse his original policy and request deployment of the 11th Armored Cavalry Regiment, with its full complement of tanks, to Vietnam.
Arriving in III Corps in the last half of 1966, the regiment set up base at Xuan Loc, on Route 1 northeast of Saigon in Long Khanh Province. In addition to assuming an area support mission and strengthening the eastern approaches to Saigon as part of Westmoreland's security belt around the capital, squadrons of the regiment supported Army units throughout the corps zone, often "homesteading" with other brigades or divisions.
Route security, however, was only the first step in carving out a larger role for Army mechanized forces. Facing an enemy who employed no armor, American mechanized units, often in conjunction with airmobile assaults, acted both as blocking or holding forces and as assault or reaction forces, where terrain permitted. "Jungle bashing," as offensive armor operations were sometimes called, had its uses but also its limitations. The intimidating presence of tanks and personnel carriers was often nullified by their cumbersomeness and noise, which alerted the enemy to an impending attack. The Viet Cong also took countermeasures to immobilize tracked vehicles. Crude tank traps, locally manufactured mines (often made of plastic to thwart discovery by metal detectors), and well-aimed rocket or recoilless rifle rounds could disable a tank or personnel carrier. Together with the dust and tropical humidity, such weapons placed a heavy burden on Army maintenance units. Yet mechanized units brought the allies enhanced mobility and firepower and often were essential to counter ambushes or destroy an enemy force protected by bunkers.
As Army strength increased in III Corps, Westmoreland encouraged his units to operate farther afield. In early 1966 intelligence reports indicated that enemy strength and activity were increasing in many of his base areas. In two operations during the early spring of 1966, units of the 1st and 25th Divisions discovered Viet Cong training camps and supply dumps, some of the sites honeycombed with tunnels. But they failed to engage major enemy forces. As Army units made the deepest penetration of War Zone C since 1961, all signs pointed to the foe's hasty withdrawal into Cambodia. An airmobile raid failed to locate the enemy's command center, COSVN. (COSVN, in fact, was fragmented among several sites in Tay Ninh Province and in nearby Cambodia.) Like the 173d Airborne Brigade's operations, the new attacks had no lasting effects.
By May 1966 an ominous build-up of enemy forces, among them NVA regiments that had infiltrated south, was detected in Phuoc Long and Binh Long Provinces in northern III Corps. U.S. commanders viewed the build-up as a portent of the enemy's spring offensive, plans for which included an attack on the district town of Loc Ninh and on a nearby Special Forces camp. The 1st Division responded, sending a brigade to secure Route 13. But the threat to Loc Ninh heightened in early June, when regiments of the 9th Viet Cong Division took up positions around the town. The arrival of American reinforcements apparently prevented an assault. About a week later, however, an enemy regiment was spotted in fortified positions in a rubber plantation adjacent to Loc Ninh. Battered by massive air and artillery strikes, the regiment was dislodged and its position overrun, ending the
threat. Americans recorded other successes, trapping Viet Cong ambushers in a counterambush, securing Loc Ninh, and spoiling the enemy's spring offensive. But if the enemy still underestimated the mobility and firepower that U.S. commanders could bring to bear, he had learned how easily Americans could be lured away from their base camps.
By the summer of 1966 Westmoreland believed he had stopped the losing trend of a year earlier and could begin the second phase of his general campaign strategy. This entailed aggressive operations to search out and destroy enemy main force units, in addition to continued efforts to improve security in the populated areas of III Corps. In Operation ATTLEBORO he sent the 196th Infantry Brigade and the 3d Brigade, 4th Infantry Division, to Tay Ninh Province to bolster the security of the province seat. Westmoreland's challenge prompted COSVN to send the 9th Viet Cong Division on a "countersweep," the enemy's term for operations to counter allied search and destroy tactics. Moving deeper into the enemy's stronghold, the recently arrived and inexperienced 196th Infantry Brigade sparred with the enemy. Then an intense battle erupted, as elements of the brigade were isolated and surprised by a large enemy force. Operation ATTLEBORO quickly grew to a multidivision struggle as American commanders sought to maintain contact with the Viet Cong and to aid their own surrounded forces. Within a matter of days, elements of the 1st and 25th Divisions, the 173d Airborne Brigade, and the 11th Armored Cavalry Regiment had converged on War Zone C. Control of ATTLEBORO passed in turn from the 25th to the 1st Division and finally to the II Field Force, making it the first Army operation in South Vietnam to be controlled by a corps-size headquarters. With over 22,000 U.S. troops participating, the battle had become the largest of the war. Yet combat occurred most often at the platoon and company levels, usually at night. As the number of American troops increased, the 9th Viet Cong Division shied away, withdrawing across the Cambodian border. Then Army forces departed, leaving to the Special Forces the task of detecting the enemy's inevitable return.
As the threat along the border abated, Westmoreland turned his attention to the enemy's secret zones near Saigon, among them the so-called Iron Triangle in Binh Duong Province. Harboring the headquarters of Military Region IV, the Communist command that directed military and terrorist activity in and around the capital, this stronghold had gone undisturbed for several years. Westmoreland hoped to find the command center, disrupt Viet Cong activity in the capital region, and allow South Vietnamese forces to accelerate pacification and uproot the stubborn Viet Cong political organization that flourished in many villages and hamlets.
Operation CEDAR FALLS began on 8 January 1967 with the objectives of destroying the headquarters, interdicting the movement of enemy forces into the major war zones in III Corps, and defeating Viet Cong units encamped there. Like ATTLEBORO before it, CEDAR FALLS tapped the manpower and resources of nearly every major Army unit in the corps area. A series of preliminary maneuvers brought Army units into position. Several air assaults sealed off the Iron Triangle, exploiting the natural barriers of the rivers that formed two of its boundaries. Then American units began a series of sweeps to push the enemy toward the blocking forces. At the village of Ben Suc, long under the sway of the insurgents, sixty helicopters descended into seven landing zones in less than a minute. Ben Suc was surrounded, its entire population evacuated, and the village and its tunnel complex destroyed. But insurgent forces had fled before the heliborne assault. As CEDAR FALLS progressed, U.S. troops destroyed hundreds of enemy fortifications, captured large quantities of supplies and food, and evacuated other hamlets. Contact with the enemy was fleeting. Most of the Viet Cong, including the high-level cadre of the regional command, had escaped, sometimes infiltrating through allied lines.
By the time Army units left the Iron Triangle, MACV had already received reports that Viet Cong and NVA regiments were returning to War Zone C in preparation for a spring offensive. This time Westmoreland hoped to prevent Communist forces from escaping into Cambodia, as they had done in ATTLEBORO. From forward field positions established during earlier operations, elements of the 25th and 1st Divisions, the 196th Infantry Brigade, and the 11th Armored Cavalry Regiment launched JUNCTION CITY, moving rapidly to establish a cordon around the war zone and to begin a new sweep of the base area. As airmobile and mechanized units moved into positions on the morning of 21 February 1967, elements of the 173d Airborne Brigade made the only parachute drop of the Vietnam War—and the first combat airborne assault since the Korean War—to establish a blocking position near the Cambodian border. Then other U.S. units entered the horseshoe-shaped area of operations through its open end.
Despite the emphasis on speed and surprise, Army units did not encounter many enemy troops at the outset. As the operation entered its second phase, however, American forces concentrated their efforts in the eastern portion of War Zone C, close to Route 13. Here several violent battles erupted, as Communist forces tried to isolate and defeat individual units and possibly also to screen the retreat of their comrades into Cambodia. On 19 March a mechanized unit of the 9th Infantry Division was attacked and nearly overrun along Route 13 near the battered village of Bau Bang. The combined firepower of armored cavalry, supporting artillery, and close air support finally caused the enemy to break contact. A few days later, at Fire Support Base GOLD, in the vicinity of Suoi Tre, an infantry and an artillery battalion of the 3d Brigade, 4th Infantry Division, engaged the 272d Viet Cong Regiment. Behind an intense, walking mortar barrage, enemy troops breached GOLD's defensive perimeter and rushed into the base. Man-to-man combat ensued. A complete disaster was averted when Army artillerymen lowered their howitzers and fired Beehive artillery rounds, which contained hundreds of dartlike projectiles, directly into the oncoming enemy. The last major encounter with enemy troops during JUNCTION CITY occurred at the end of March, when elements of two Viet Cong regiments, the 271st and the 70th (the latter directly subordinate to COSVN), attacked a battalion of the 1st Infantry Division in a night defensive position deep in War Zone C, near the Cambodian border. The lopsided casualties—over 600 enemy killed in contrast to 10 Americans—forcefully illustrated once again the U.S. ability to call in overwhelmingly superior fire support by artillery, armed helicopters, and tactical aircraft.
Thereafter, JUNCTION CITY became a pale shadow of the multidivision effort it had been at its outset. Most Army units were withdrawn, either to return to their bases or to participate in other operations. The 196th Infantry Brigade was transferred to I Corps to help replace Marine forces sent north to meet a growing enemy threat near the demilitarized zone. Contacts with enemy forces in this final phase were meager. Again a planned Viet Cong offensive had been aborted; the enemy himself escaped, though not unscathed.
In the wake of JUNCTION CITY, MACV's attention reverted to the still critical security conditions around Saigon. The 1st Infantry Division returned to War Zone D to search for the 271st Viet Cong Regiment and to disrupt the insurgents' lines of communications between War Zones C and D. Despite two major contacts, the main body of the regiment eluded its American pursuers. Army units again returned to the Iron Triangle between April and July 1967, after enemy forces were detected in their old stronghold. Supplies and documents were found in quantities even larger than those discovered in CEDAR FALLS. Once again, however, encounters with the Communists were fleeting. The enemy's reappearance in the Iron Triangle and War Zone D, combined with rocket and mortar attacks on U.S. bases around Saigon, heightened Westmoreland's concern about the security of the capital. When the 1st Infantry Division's base at Phuoc Vinh and the Bien Hoa Air Base were attacked in mid-1967, the division mounted operations into the Ong Dong jungle and the Vinh Loi woods. Other operations
swept the jungles and villages of Bien Hoa Province and sought once again to support pacification in Hau Nghia Province.
These actions pointed up a basic problem. The large, multidivision operations into the enemy's war zones produced some benefits for the pacification campaign; by keeping enemy main force regiments at bay, Westmoreland impeded their access to heavily populated areas and prevented them from reinforcing Viet Cong provincial and district forces. Yet when American units were shifted to the border, the local Viet Cong units gained a measure of relief. Westmoreland faced a strategic dilemma: he could not afford to keep substantial forces away from their bases for more than a few months at a time without jeopardizing local security. Unless he received additional forces, Westmoreland would always be torn between two operational imperatives. By the summer of 1967, MACV's likelihood of receiving more combat troops, beyond those scheduled to deploy during the latter half of the year and in early 1968, had become remote. In Washington the administration turned down his request for an additional 200,000 men.
Meanwhile, however, the 9th Infantry Division and the 199th Infantry Brigade arrived in South Vietnam. Westmoreland stationed the brigade at Bien Hoa, where it embarked on FAIRFAX, a year-long operation in which it worked closely with a South Vietnamese ranger group to improve security in Gia Dinh Province, which surrounded the capital. Units of the brigade "paired off" with South Vietnamese rangers and, working closely with paramilitary and police forces, sought to uproot the very active Viet Cong local forces and destroy the enemy's political infrastructure. Typical activities included ambushes by combined forces; cordon and search operations in villages and hamlets, often in conjunction with the Vietnamese police; psychological and civic action operations; surprise road blocks to search for contraband and Viet Cong supporters; and training programs to develop proficient military and local self-defense capabilities.
Likewise, the 9th Infantry Division set up bases east and south of Saigon. One brigade deployed to Bear Cat; another set up camp at Tan An in Long An Province, south of Saigon, where it sought to secure portions of Route 4, an important north-south highway connecting Saigon with the rice-rich lower Delta. Further south, the 2d Brigade, 9th Infantry Division, established its base at Dong Tam in Dinh Tuong Province in IV Corps. Located in the midst of rice paddies and swamps, Dong Tam was created by Army engineers with sand dredged from the My Tho River. From this 600-acre base, the brigade began a series of riverine operations unique to the Army's experience in South Vietnam.
To patrol and fight in the inundated marshlands and rice paddies and along the numerous canals and waterways crossing the Delta, the Army
modernized the concept of riverine warfare employed during the Civil War by Union forces on the Mississippi River and by the French during the Indochina War. The Mobile Riverine Force utilized a joint Army-Navy task force controlled by a ground commander. In contrast to amphibious operations, where control reverts to the ground commander only after the force is ashore, riverine warfare was an extension of land combat, with infantry units traveling by water rather than by trucks or tracked vehicles. Aided by a Navy river support squadron and river assault squadron, infantrymen were housed on barracks ships and supported by gunships or fire support boats called monitors. Howitzers and mortars mounted on barges provided artillery support. The 2d Brigade, 9th Infantry Division, began operations against the Cam Son Secret Zone, approximately 10 miles west of Dong Tam, in May 1967.
Meanwhile, the war of main force units along the borders waxed and waned in relation to seasonal weather cycles, which affected the enemy's pattern of logistical activity, his ability to infiltrate men and supplies from North Vietnam, and his penchant for meticulous preparation of the battlefield. By the fall of 1967, enemy activity had increased again in the base areas, and sizable forces began appearing along South Vietnam's border from the demilitarized zone to III Corps. By the year's end, American forces had returned to War Zone C to screen the Cambodian border and prevent Communist forces from re-entering South Vietnam. Units of the 25th Infantry Division that had been conducting operations in the vicinity of Saigon moved to the border. Elements of the 1st Infantry Division had resumed road-clearing operations along Route 13, but the division soon faced another major enemy effort to capture Loc Ninh. On 29 October Viet Cong units assaulted the CIDG camp and the district command post, breaching the defense perimeter. Intense air and artillery fire prevented its complete loss. Within a few hours, South Vietnamese and U.S. reinforcements reached Loc Ninh, their arrival made possible by the enemy's failure to capture the local airstrip.
When the build-up ended, ten Army battalions were positioned within Loc Ninh and between the town and the Cambodian border. During the next two days allied units warded off repeated enemy attacks as Communist forces desperately tried to score a victory. Tactical air support and artillery fire prevented the enemy from massing, though he outnumbered allied forces by about ten to one. At the end of a ten-day battle, over 800 enemy were left on the battlefield, while allied deaths numbered only 50. Some 452 close air support sorties, 8 B-52 bomber strikes, and 30,125 rounds of artillery had been directed at the enemy. Once again, Loc Ninh had served as a lightning rod to attract U.S. forces to the border. The pattern of two wars—one in the villages, one on the border—continued without decision.
Army Operations in II and I Corps, 1965-1967
Spearheaded by at least three NVA regiments, Communist forces mounted a strong offensive in South Vietnam's Central Highlands during the summer of 1965, overrunning border camps and besieging some district towns. Here the enemy threatened to cut the nation in two. To meet the danger, Westmoreland proposed to introduce the newly organized Army airmobile division, the 1st Cavalry Division, with its large contingent of helicopters, directly into the highlands. Some of his superiors in Hawaii and Washington opposed this plan, preferring to secure coastal bases. Though Westmoreland contended that enclave security made poor use of U.S. mobility and offensive firepower, he was unable to overcome the fear of an American Dien Bien Phu, if a unit in the highlands should be isolated and cut off from the sea.
In the end, the deployment of Army forces to II Corps reflected a compromise. As additional American and South Korean forces arrived during 1965 and 1966, they often reinforced South Vietnamese efforts to secure coastal enclaves, usually centered on the most important cities and ports. (Map 49) At Phan Thiet, Tuy Hoa, Qui Nhon, Nha Trang, and Cam Ranh Bay, allied forces provided area security, not only protecting the ports and logistical complexes that developed in many of these locations, but also assisting Saigon's forces to expand the pacified zone that extended from the urban cores to the countryside.
Here, as in III Corps, Westmoreland addressed two enemy threats. Local insurgents menaced populated areas along the coastal plain, while enemy main force units intermittently pushed forward in the western highlands. Between the two regions stretched the Piedmont, a transitional area in whose lush valleys lived many South Vietnamese. In the Piedmont's craggy hills and jungle-covered uplands, local and main force Viet Cong units had long flourished by exacting food and taxes from the lowland population through a well-entrenched shadow government. Although the enemy's bases in the Piedmont did not have the notoriety of the secret zones near Saigon, they served similar purposes, harboring units, command centers, and training and logistical facilities. Extensions of the Ho Chi Minh Trail ran from the highlands through the Piedmont to the coast, facilitating the movement of enemy units and supplies from province to province. To be effective, allied operations on the coast had to uproot local units living amid the population and to eradicate the enemy base areas in the Piedmont, together with the main force units that supported the village and hamlet guerrillas.
Despite their sparse population and limited economic resources, the highlands had a strategic importance equal to and perhaps greater than the
coastal plain. Around the key highland towns—Pleiku, Kontum, Ban Me Thuot, and Da Lat—South Vietnamese and U.S. forces had created enclaves. Allied forces protected the few roads that traversed the highlands, screened the border, and reinforced outposts and Montagnard settlements from which the irregulars and Army Special Forces sought to detect enemy cross-border movements and to strengthen tribal resistance to the Communists. Such border posts and tribal camps, rather than major towns, most often were the object of enemy attacks. Combined with road interdiction, such attacks enabled the Communists to disperse the limited number of defenders and to discourage the maintenance of outposts.
Such actions served a larger strategic objective. The enemy planned to develop the highlands into a major base area from which to mount or support operations in other areas. A Communist-dominated highlands would be a strategic fulcrum, enabling the enemy to shift the weight of his operations to any part of South Vietnam. The highlands also formed a "killing zone" where Communist forces could mass. Challenging American forces had become the principal objective of leaders in Hanoi, who saw their plans to undermine Saigon's military resistance thwarted by U.S. intervention. Salient victories against Americans, they believed, might deter a further build-up and weaken Washington's resolve to continue the war.
The 1st Cavalry Division (Airmobile) moved with its 435 helicopters into this hornet's nest in September 1965, establishing its main base at An Khe, a government stronghold on Route 19, halfway between the coastal port of Qui Nhon and the highland city of Pleiku. The location was strategic: at An Khe the division could help to keep open the vital east-west road from the coast to the highlands and could pivot between the highlands and the coastal districts, where the Viet Cong had made deep inroads. Meanwhile, the 1st Brigade, 101st Airborne Division, had begun operations in the rugged Song Con valley, about 18 miles northeast of An Khe. Here, on 15 September, one battalion ran into heavy fire from an enemy force in the tree line around its landing zone. Four helicopters were lost and three company commanders killed; reinforcements could not land because of the intense enemy fire. With the fight at close quarters, the Americans were unable to call in close air support, armed gunships, and artillery fire, except at the risk of their own lives. But as the enemy pressed them back, supporting fires were placed almost on top of the contending forces. At dusk the fighting subsided; as the Americans steeled themselves for a night attack, the enemy, hard hit by almost 100 air strikes and 11,000 rounds of artillery, slipped away. Inspection of the battlefield revealed that the Americans had unwittingly landed in the midst of a heavily bunkered enemy base.
The fight had many hallmarks of highland battles that were to come. Americans had little information about enemy forces or the area of operations; the enemy could "hug" Army units to nullify their massive advantage in firepower. In compensation, the enemy underestimated the accuracy of such fire and the willingness of U.S. commanders to call it in even when fighting at close quarters. Finally, enemy forces when pressed too hard could usually escape, and pursuit, as a rule, was futile.
Less than a month later the newly arrived airmobile division received its own baptism of combat. The North Vietnamese Army attacked a Special Forces camp at Plei Me; when it was repulsed, Westmoreland directed the division to launch an offensive to locate and destroy enemy regiments that had been identified in the vicinity of the camp. The result was the battle of the Ia Drang valley, named for a small river that flowed through the area of operations. For thirty-five days the division pursued and fought the 32d, 33d, and 66th North Vietnamese Regiments, until the enemy, suffering heavy casualties, returned to his bases in Cambodia.
With scout platoons of its air cavalry squadron covering front and flanks, each battalion of the division's 1st Brigade established company bases from which patrols searched for enemy forces. For several days neither ground patrols nor aero-scouts found any trace, but on 4 November the scouts spotted a regimental aid station several miles west of Plei Me. Quick reacting aerorifle platoons converged on the site. Hovering above, the airborne scouts detected an enemy battalion nearby and attacked from UH-1B gunships with aerial rockets and machine guns. Operating beyond the range of their ground artillery, Army units engaged the enemy in an intense firefight. Again enemy troops "hugged" American forces, then broke contact as reinforcements began to arrive.
The search for the main body of the enemy continued for the next few days, with Army units concentrating their efforts in the vicinity of the Chu Pong Massif, a mountain near the Cambodian border that was believed to be an enemy base. Communist forces were given little rest, as patrols harried and ambushed them. The enemy attacked an American patrol base, Landing Zone MARY, at night, but was repulsed by the first night air assault into a defensive perimeter under fire, accompanied by aerial rocket fire.
The heaviest fighting was yet to come. As the division began the second stage of its campaign, enemy forces began to move out of the Chu Pong base. Units of the 1st Cavalry Division advanced to establish artillery bases and landing zones at the base of the mountain. Landing Zone X-RAY was one of several U.S. positions vulnerable to attack by the enemy forces that occupied the surrounding high ground. Here on 14 November began fighting that
pitted three battalions against elements of two NVA regiments. Withstanding repeated mortar attacks and infantry assaults, the Americans used every means of firepower available to them—the division's own gunships, massive artillery bombardment, hundreds of strafing and bombing attacks by tactical aircraft, and earth-shaking bombs dropped by B-52 bombers from Guam—to turn back a determined enemy. The Communists lost 600 dead, the Americans 79.
Although badly hurt, the enemy did not leave the Ia Drang valley. Elements of the 66th North Vietnamese Regiment moving east toward Plei Me encountered an American battalion on 17 November, a few miles north of X-RAY. The fight that resulted was a gory reminder of the North Vietnamese mastery of the ambush. The Communists quickly snared three U.S. companies in their net. As the trapped units struggled for survival, nearly all semblance of organized combat disappeared in the confusion and mayhem. Neither reinforcements nor effective firepower could be brought in. At times combat was reduced to valiant efforts by individuals and small units to avert annihilation. When the fighting ended that night, 60 percent of the Americans were casualties, and almost one of every three soldiers in the battalion had been killed.
Lauded as the first major American triumph of the Vietnam War, the battle of the Ia Drang valley was in truth a costly and problematic victory. The airmobile division, committed to combat less than a month after it arrived in-country, relentlessly pursued the enemy for thirty-five days over difficult terrain and defeated three NVA regiments. In part, its achievements underlined the flexibility that Army divisions had gained in the early 1960's under the Reorganization Objective Army Division (ROAD) concept. Replacing the pentomic division with its five lightly armed battle groups, the ROAD division, organized around three brigades, facilitated the creation of brigade and battalion task forces tailored to respond and fight in a variety of military situations. The newly organized division reflected the Army's embrace of the concept of flexible response and proved eminently suitable for operations in Vietnam. The helicopter was given great credit as well. Nearly every aspect of the division's operations was enhanced by its airmobile capacity. Artillery batteries were moved sixty-seven times by helicopter. Intelligence, medical, and all manner of logistical support benefited as well from the speed and flexibility provided by helicopters. Despite the fluidity of the tactical situation, airmobile command and control procedures enabled the division to move and to keep track of its units over a large area, and to accommodate the frequent and rapid changes in command arrangements as units were moved from one headquarters to another.
Yet for all the advantages that the division accrued from airmobility, its performance was not without blemish. Though the conduct of division-size airmobile operations proved tactically sound, two major engagements stemmed from the enemy's initiative in attacking vulnerable American units. On several occasions massive air and artillery support provided the margin of victory (if not survival). Above all, the division's logistical self-sufficiency fell short of expectations. It could support only one brigade in combat at a time, for prolonged and intense operations consumed more fuel and ammunition than the division's helicopters and fixed-wing Caribou aircraft could supply. Air Force tactical airlift became necessary for resupply. Moreover, in addition to combat losses and damage, the division's helicopters suffered from heavy use and from the heat, humidity, and dust of Vietnam, taxing its maintenance capacity. Human attrition was also high; hundreds of soldiers, the equivalent of almost a battalion, fell victim to a resistant strain of malaria peculiar to Vietnam's highlands.
Westmoreland's satisfaction in blunting the enemy's offensive was tempered by concern that enemy forces might re-enter South Vietnam and resume their offensive while the airmobile division recuperated at the end of November and during most of December. He thus requested immediate reinforcements from the Army's 25th Infantry Division, based in Hawaii and scheduled to deploy to South Vietnam in the spring of 1966. By the end of 1965, the division's 3d Brigade had been airlifted to the highlands and, within a month of its arrival, had joined elements of the 1st Cavalry Division to launch a series of operations to screen the border. Army units did not detect any major enemy forces trying to cross from Cambodia into South Vietnam. Each operation, however, killed hundreds of enemy soldiers and refined airmobile techniques, as Army units learned to cope with the vast territorial expanse and difficult terrain of the highlands.
In Operation MATADOR, for example, air strikes were used to blast holes in the forests, enabling helicopters to bring in heavy engineer equipment to construct new landing zones for use in future operations. Operation LINCOLN, a search and destroy operation on the Chu Pong Massif, featured combined armor and airmobile operations; air cavalry scouts guided armored vehicles of the 3d Brigade, 25th Infantry Division, as they operated in a lightly wooded area near Pleiku City. Also in LINCOLN, Army engineers, using heli-lifted equipment, in two days cleared and constructed a runway to handle C-130 air transports in an area inaccessible by road.
Despite the relative calm that followed the Ia Drang fighting, the North Vietnamese left no doubt of their intent to continue infiltration and to challenge American forces along the highland border. In February 1966
enemy forces overran the Special Forces camp at A Shau, in the remote northwest corner of I Corps. The loss of the camp had long-term consequences, enabling the enemy to make the A Shau valley a major logistical base and staging area for forces infiltrating into the Piedmont and coastal areas. The loss also highlighted certain differences between operational concepts of the Army and the marines. Concentrating their efforts in the coastal districts of I Corps and lacking the more extensive helicopter support enjoyed by Army units, the marines avoided operations in the highlands. On the other hand, Army commanders in II Corps sought to engage the enemy as close to the border as possible and were quick to respond to threats to Special Forces camps in the highlands. Operations near the border were essential to Westmoreland's efforts to keep main force enemy units as far as possible from heavily populated areas.
For Hanoi's strategists, however, a reciprocal relation existed between highlands and coastal regions. Here, as in the south, the enemy directed his efforts to preserving his own influence among the population near the coast, from which he derived considerable support. At the same time, he maintained a constant military threat in the highlands to divert allied forces from
efforts at pacification. Like the chronic shifting of units from the neighborhood of Saigon to the war zones in III Corps, the frequent movement of American units between coast and border in II Corps reflected the Communist desire to relieve allied military pressure whenever guerrilla and local forces were endangered. In its broad outlines, Hanoi's strategy to cope with U.S. forces was the same employed by the Viet Minh against the French and by Communist forces in 1964 and 1965 against the South Vietnamese Army. Whether it would be equally successful remained to be seen.
The airmobile division spent the better part of the next two years fighting Viet Cong and NVA main force units in the coastal plain and Piedmont valleys of Binh Dinh Province. Here the enemy had deep roots, while pacification efforts were almost dead. Starting in early 1966, the 1st Cavalry Division embarked on a series of operations against the 2d Viet Cong and the 18th and 22d North Vietnamese Regiments of the 3d North Vietnamese Division (the Yellow Star Division). For the most part, the 1st Cavalry Division operated in the Bong Son plain and the adjacent hills, from which enemy units reinforced the hamlet and village guerrillas who gathered in taxes, food, and recruits. As in the highlands, the division exploited its airmobility, using helicopters to establish positions in the upper reaches of the valleys. It sought to flush the enemy from his hiding places and drive him toward the coast, where American, South Vietnamese, and South Korean forces held blocking positions. When trapped, the enemy was attacked by ground, naval, and air fire. The scheme was a new version of an old tactical concept, the "hammer and anvil," with the coastal plain and the natural barrier formed by the South China Sea forming the anvil or killing zone. Collectively the operations became known as the Binh Dinh Pacification Campaign.
For forty-two days elements of the airmobile division scoured the An Lao and Kim Son valleys, pursuing enemy units that had been surprised and routed from the Bong Son plain. Meanwhile, Marine forces in neighboring Quang Ngai Province in southern I Corps sought to bar the enemy's escape routes to the north. The enemy units evaded the Americans, but thousands of civilians fled from the Viet Cong-dominated valleys to government-controlled areas. Although the influx of refugees taxed the government's already strained relief services, the exodus of peasants weakened the Viet Cong's infrastructure and aimed a psychological blow at the enemy's prestige. The Communists had failed either to confront the Americans or to protect the population over which they had gained control.
Failing to locate the fleeing enemy in the An Lao valley, units of the airmobile division assaulted another enemy base area, a group of valleys and ridges southwest of the Bong Son plain known as the Crow's Foot or the Eagle's Claw. Here some Army units sought to dislodge the enemy from his
upland bases while others established blocking positions at the "toe" of each valley, where it found outlet to the plain. In six weeks over 1,300 enemy soldiers were killed. Enemy forces in northern Binh Dinh Province were temporarily thrown off balance. Beyond this, the long-term effects of the operation were unclear. The 1st Cavalry Division did not stay in one area long enough to exploit its success. Whether the Saigon government could marshal its forces effectively to provide local security and to reassert its political control remained to be seen.
Later operations continued to harass an elusive foe. Launching a new attack without the extensive preparatory reconnaissance that often alerted the enemy, Army units again surprised him in the Bong Son area but soon lost contact. The next move was against an enemy build-up in the vicinity of the Vinh Thanh Special Forces Camp. Here the Green Berets watched the "Oregon Trail," an enemy infiltration corridor that passed through the Vinh Thanh valley from the highlands to the coast. Forestalling the attack, Army units remained in the area, where they conducted numerous patrols and made frequent contact with the enemy. (One U.S. company came close to being overrun in a ferocious firefight.) But again the action had little enduring effect, except to increase the enemy's caution by demonstrating the airmobile division's agility in responding to a threat.
After a brief interlude in the highlands, the division returned to Binh Dinh Province in September 1966. Conditions in the Bong Son area differed little from those the division had first encountered. For the most part, the Viet Cong rather than the Saigon government had been successful in reasserting their authority, and pacification was at a standstill. The division devoted most of its resources for the remainder of 1966 and throughout 1967 to supporting renewed efforts at pacification. In the fall of 1966, for the first time in a year, all three of the division's brigades were reunited and operating in Binh Dinh Province. Although elements of the division were occasionally transferred to the highlands as the threat there waxed and waned, the general movement of forces was toward the north. Army units increasingly were sent to southern I Corps during 1967, replacing Marine units in operations similar to those in Binh Dinh Province.
In one such operation the familiar pattern of hammer and anvil was tried anew, with some success. The 1st Cavalry Division opened with a multibattalion air assault in an upland valley to flush the enemy toward the coast, where allied ground and naval forces were prepared to bar his escape. Enemy forces had recently left their mountain bases to plunder the rice harvest and to harass South Vietnamese forces providing security for provincial elections. These units were caught with their backs to the sea. For most of October, allied forces sought to destroy the main body of a Communist regiment
isolated on the coast and to seize an enemy base in the nearby Phu Cat Mountain. The first phase consisted of several sharp combat actions near the coastal hamlet of Hoa Hoi. With South Vietnamese and U.S. naval forces blocking an escape by sea, the encircled enemy fought desperately to return to the safety of his bases in the upland valleys. His plight was compounded when floods forced his troops out of their hiding places and exposed them to attacks. After heavy losses, remnants of the regiment divided into small parties that escaped through allied lines. As contacts with the enemy diminished on the coast, American efforts shifted inland, with several sharp engagements occurring when enemy forces tried to delay pursuit or to divert the allies from entering base areas. By the end of October, as the Communists retreated north and west, the running fight had accounted for over 2,000 enemy killed. Large caches of supplies, equipment, and food were uncovered, and the Viet Cong's shadow government in some coastal hamlets and villages was severely damaged, some hamlets reverting to government control for the first time in several years.
Similar operations continued through 1967 and into early 1968. In addition to offensive operations against enemy main forces, Army units in Binh Dinh worked in close co-ordination with South Vietnamese police, Regional and Popular Forces, and the South Vietnamese Army to help the Saigon government gain a foothold in villages and hamlets dominated or contested by the Communists. The 1st Cavalry Division adopted a number of techniques in support of pacification. Army units frequently participated in cordon and search operations: airmobile forces seized positions around a hamlet or village at dawn to prevent the escape of local forces or cadres, while South Vietnamese authorities undertook a methodical house-to-house search. The Vietnamese checked the legal status of residents, took a census, and interrogated suspected Viet Cong to obtain more information about the enemy's local political and military apparatus. At the same time, allied forces engaged in a variety of civic action and psychological operations; specially trained pacification cadres established the rudiments of local government and provided various social and economic services. At other times, the division might participate in "checkpoint and snatch" operations, establishing surprise roadblocks and inspecting traffic on roads frequented by the insurgents.
Although much weakened by such methods, enemy forces found opportunities to attack American units. They aimed both to win a military victory and to remind the local populace of their presence and power. An attack on Landing Zone BIRD, an artillery base on the Bong Son plain, was one such example. Taking advantage of the Christmas truce of 1966, enemy units moved into position and mounted a ferocious attack as soon as the truce ended. Although portions of the base were overrun, the onslaught was
checked when artillerymen lowered their guns and fired Beehive antipersonnel rounds directly into the waves of oncoming enemy troops. Likewise, several sharp firefights occurred immediately after the 1967 Tet truce, when the enemy took advantage of the cease-fire to move back among the population. This time units of the 1st Cavalry Division forced the enemy to leave the coastal communities and seek refuge in the Piedmont. As the enemy moved across the boundary into southern I Corps, so too did units of the airmobile division. About a month later, the 3d Brigade, 25th Infantry Division, also moved to southern I Corps. Throughout the remainder of 1967, other Army units transferred either to I Corps to reinforce the marines or to the highlands to meet renewed enemy threats. As the strength of American units committed to the Binh Dinh Pacification Campaign decreased during late 1967 and early 1968, enemy activity in the province quickened as the Viet Cong sought to reconstitute their weakened military forces and to regain a position of influence among the local population.
In many respects, the Binh Dinh campaign was a microcosm of Westmoreland's over-all campaign strategy. It showed clearly the intimate relation between the war against enemy main force units and the fight for pacification waged by the South Vietnamese, and it demonstrated the effectiveness of the airmobile concept. After two years of persistent pursuit of the NVA's Yellow Star Division, the 1st Cavalry Division had reduced the combat effectiveness of each of its three regiments. By the end of 1967, the threat to Binh Dinh Province posed by enemy main force units had been markedly reduced. The airmobile division's operations against the 3d North Vietnamese Division, as well as its frequent role in operations directly in support of pacification, had weakened local guerrilla forces and created an environment favorable to pacification.
The campaign in Binh Dinh also exposed the vulnerabilities of Westmoreland's campaign strategy. Despite repeated defeats at the hands of the Americans, the three NVA regiments still existed. They contrived to find respite and a measure of rehabilitation, building their strength anew with recruits filtering down from the North, with others found in-country, and with Viet Cong units consolidated into their ranks. Although much weakened, Communist forces persistently returned to areas cleared by the 1st Cavalry Division. Even more threatening to the allied cause, Saigon's pacification efforts languished as South Vietnamese forces failed in many instances to provide security to the villages and effective police action to root out local Viet Cong cadres. And the government, dealing with a population already skeptical, failed to grant the political, social, and economic benefits it had promised.
The Highlands: Progress or Stalemate?
Moreover, the allies could not concentrate their efforts everywhere as they had in strategic Binh Dinh. The expanse of the highlands compelled Army operations there to be carried out with economy of force. During 1966 and 1967, the Americans engaged in a constant search for tactical concepts and techniques to maximize their advantages of firepower and mobility and to compensate for the constraints of time, distance, difficult terrain, and an inviolable border. Here the war was fought primarily to prevent the incursion of NVA units into South Vietnam and to erode their combat strength. In the highlands, each side pursued a strategy of military confrontation, seeking to weaken the fighting forces and will of its opponent through attrition. Each sought military victories to convince opposing leaders of the futility of continuing the contest. For the North Vietnamese, however, confrontation in the highlands had the additional purpose of relieving allied pressure in other areas, where pacification jeopardized their hold on the rural population. Of all the factors influencing operations in the highlands, the most significant may well have been the strength and success of pacification elsewhere.
For Americans, the most difficult problem was to locate the enemy. Yet Communist strategists sometimes created threats to draw in the Americans.
Recurrent menaces to Special Forces camps reflected the enemy's seasonal cycle of operations, his desire to harass and eliminate such camps, and his hope of luring allied forces into situations where he held the military advantages. Thus Army operations in the highlands during 1966 and 1967 were characterized by wide-ranging, often futile searches, punctuated by sporadic but intense battles fought usually at the enemy's initiative.
For the first few months of 1966, the Communists lay low. In May, however, a significant concentration of enemy forces appeared in Pleiku and Kontum Provinces. The 1st Brigade, 101st Airborne Division, the reserve of I Field Force, was summoned to Pleiku and subsequently moved to Dak To, a CIDG camp in northern Kontum Province, to assist a besieged South Vietnamese force at the nearby government post at Toumorong. Although the 24th North Vietnamese Regiment had surrounded Toumorong, allied forces secured the road to Dak To and evacuated the government troops, leaving one battalion of the 101st inside the abandoned camp and one company in an exposed defensive position in the jungle a short distance beyond. On the night of 6 June a large North Vietnamese force launched repeated assaults on this lone company. Facing disaster, the commander called in air strikes on his own position to stop the enemy's human-wave attacks. Relief arrived the next morning, as additional elements of the brigade were heli-lifted to the battlefield to pursue and trap the North Vietnamese. Fighting to close off the enemy's escape routes, the Americans called in renewed air strikes, including B-52's. By 20 June enemy resistance had ended, and the North Vietnamese regiment that had begun the fighting escaped to the safety of its Laotian base, leaving its dead behind.
Although the enemy's push in Kontum Province was blunted, the siege of Toumorong was only one aspect of his summer offensive in the highlands. Suspecting that NVA forces meant to return to the Ia Drang, Westmoreland sent the 3d Brigade, 25th Infantry Division, back into the valley in May. Dividing the area into "checkerboard" squares, the brigade methodically searched each square. Small patrols set ambushes and operated for several days without resupply to avoid having helicopters reveal their location. After several days in one square, the patrols leapfrogged by helicopter to another. Though the Americans made only light, sporadic contacts, the cumulative toll of enemy killed was equal to that of many short, violent battles. One significant contact was made in late May near the Chu Pong Massif. A running battle ensued, as the enemy again sought safety in Cambodia. Westmoreland now appealed to Washington for permission to maneuver Army units behind the enemy, possibly into Cambodian territory. But officials refused, fearing international repercussions, and the NVA sanctuary remained inviolate.
Yet the operation confirmed that sizable enemy forces had returned to South Vietnam and, as in the fall of 1965, were threatening the outposts at Plei Me and Duc Co. To meet the renewed threat, I Field Force sent additional Army units to Pleiku Province and launched a new operation under the 1st Cavalry Division. The action followed the now familiar pattern of extensive heli-lifts, establishment of patrol bases, and intermittent contact with an enemy who usually avoided American forces. When the Communists elected to fight, they preferred to occupy high ground; dislodging them from hilltop bunkers was a difficult task, requiring massive air and artillery support. By the time the enemy left Pleiku again at the end of August, his forces had incurred nearly 500 deaths.
Border battles continued, however, and some were sharp. When enemy forces appeared in strength around a CIDG camp at Plei Djering in October, elements of the 4th Infantry and 1st Cavalry Divisions rapidly reinforced the camp, clashing with the enemy in firefights during October and November. As North Vietnamese forces began to withdraw through the Plei Trap valley, the 1st Brigade, 101st Airborne Division, was airlifted from Phu Yen to northern Kontum to try to block their escape, but failed to trap them before they reached the border. Army operations in the highlands were continued by the 4th Infantry Division. In addition to screening the border to detect infiltration, the division constructed a new road between Pleiku and the highland outpost at Plei Djering and helped the Saigon government resettle thousands of Montagnards in secure camps. Contact with the enemy generally was light, the heaviest occurring in mid-February 1967, in an area west of the Nam Sathay River near the Cambodian border, when Communist forces unsuccessfully tried to overrun several American fire bases. Despite infrequent contacts, however, 4th Division troops killed 700 enemy over a period of three months.
In I Corps as well, the enemy seemed intent on dispersing American forces to the border regions. Heightened activity along the demilitarized zone drew marines from southern I Corps. To replace them, Army units were transferred from III and II Corps to the area vacated by the marines, among them the 196th Infantry Brigade, which was pulled out of Operation JUNCTION CITY, and the 3d Brigade, 25th Infantry Division, which had been operating in the II Corps Zone. Together with the 1st Brigade, 101st Airborne Division, these units formed Task Force OREGON, activated on 12 April 1967 and placed under the operational control of the III Marine Amphibious Force. Army infantry units were now operating in all four of South Vietnam's corps areas.
Once at Chu Lai, the Army forces supported an extensive South Vietnamese pacification effort in Quang Tin Province. To the north, along the demilitarized zone, Army heavy artillery engaged in almost daily duels with NVA guns to the north. In Quang Tri Province, the marines fought a hard twelve-day battle to prevent NVA forces from dominating the hills surrounding Khe Sanh. The enemy's heightened military activity along the demilitarized zone, which included frontal attacks across it, prompted American officials to begin construction of a barrier consisting of highly sophisticated electronic and acoustical sensors and strong point defenses manned by allied forces. Known as the McNamara Line, after Secretary of Defense Robert S. McNamara, who vigorously promoted the concept, the barrier was to extend across South Vietnam and eventually into Laos. Westmoreland was not enthusiastic about the project, for he hesitated to commit large numbers of troops to man the strongpoints and doubted that the barrier would prevent the enemy from breaching the demilitarized zone. Hence the McNamara Line was never completed.
Throughout the summer of 1967, Marine forces endured some of the most intense enemy artillery barrages of the war and fought several battles with NVA units that infiltrated across the 17th parallel. Their stubborn defense, supported by massive counterbattery fire, naval gunfire, and air attacks, ended the enemy's offensive in northern I Corps, but not before Westmoreland had to divert additional Army units as reinforcements. A brigade of the 1st Cavalry Division and South Korean units were deployed to southern I Corps to replace additional marines who had been shifted further north. The depth of the Army's commitment in I Corps was shown by Task Force OREGON'S reorganization as the 23d Infantry Division (Americal). The only Army division to be formed in South Vietnam, its name echoed a famous division of World War II that had also been organized in the Pacific. If the enemy's aim was to draw American forces to the north, he evidently was succeeding.
Even as Westmoreland shifted allied forces from II Corps to I Corps, fighting intensified in the highlands. After Army units made several contacts with enemy forces during May and June, Westmoreland moved the 173d Airborne Brigade from III Corps to II Corps to serve as the I Field Force's strategic reserve. Within a few days, however, the brigade was committed to an effort to forestall enemy attacks against the CIDG camps of Dak To, Dak Seang, and Dak Pek in northern Kontum Province. Under the control of the 4th Infantry Division, the operation continued throughout the summer until the enemy threat abated. A few months later, however, reconnaissance patrols in the vicinity of Dak To detected a rapid and substantial build-up of enemy
forces in regimental strength. Believing an attack to be imminent, 4th Infantry Division forces reinforced the garrison. In turn, the 173d Airborne Brigade returned to the highlands, arriving on 2 November. From 3 to 15 November enemy forces estimated to number 12,000 probed, harassed, and attacked American and South Vietnamese positions along the ridges and hills surrounding the camp. As the attacks grew stronger, more U.S. and South Vietnamese reinforcements were sent, including two battalions from the airmobile division and six ARVN battalions. By mid-November allied strength approached 8,000.
Despite daily air and artillery bombardments of their positions, the North Vietnamese launched two attacks against Dak To on 15 November, destroying two C-130 aircraft and causing severe damage to the camp's ammunition dump. Allied forces strove to dislodge the enemy from the surrounding hills, but the North Vietnamese held fast in fortified positions. The center of enemy resistance was Hill 875; here, two battalions of the 173d Airborne Brigade made a slow and painful ascent against determined resistance and under grueling physical conditions, fighting for every foot of ground. Enemy fire was so intense and accurate that at times the Americans were unable to bring in reinforcements by helicopter or to provide fire support. In fighting that resembled the hill battles of the final stage of the Korean War, the confusion at Dak To pitted soldier against soldier in classic infantry battle. In desperation, beleaguered U.S. commanders on Hill 875 called in artillery and even B-52 air strikes at perilously close range to their own positions. On 17 November American forces at last gained control of Hill 875.
The battle of Dak To was the longest and most violent in the highlands since the battle of the Ia Drang two years before. Enemy casualties numbered in the thousands, with an estimated 1,400 killed. Americans had suffered too. Approximately one-fifth of the 173d Airborne Brigade had become casualties, with 174 killed, 642 wounded, and 17 missing in action. If the battle of the Ia Drang exemplified airmobility in all its versatility, the battle of Dak To, with the arduous ascent of Hill 875, epitomized infantry combat at its most basic and the crushing effect of supporting air power.
Yet Dak To was only one of several border battles in the waning months of 1967. At Song Be and Loc Ninh in III Corps, and all along the northern border of I Corps, the enemy exposed his positions in order to confront U.S. forces in heavy fighting. By the end of 1967 the 1st Infantry Division had again concentrated near the Cambodian border, and the 25th Infantry Division had returned to War Zone C. The enemy's threat in I Corps caused Westmoreland to disperse more Army units. In the vacuum left by their
departure, local Viet Cong sought to reconstitute their forces and to reassert their control over the rural population. In turn, Viet Cong revival often was a prelude to the resurgence of Communist military activity at the district and village level. Hard pressed to find additional Army units to shift from III Corps and II Corps to I Corps, Westmoreland asked the Army to accelerate deployment of two remaining brigades of the 101st Airborne Division from the United States. Arriving in December 1967, the brigades were added to the growing number of Army units operating in the northern provinces.
While allied forces were under pressure, the border battles of 1967 also led to a reassessment of strategy in Hanoi. Undeviating in their long-term aim of unification, the leaders of North Vietnam recognized that their strategy of military confrontation had failed to stop the American military buildup in the South or to reduce U.S. military pressure on the North. The enemy's regular and main force units had failed to inflict a salient military defeat on American forces. Although the North Vietnamese Army maintained the tactical initiative, Westmoreland had kept its units at bay and in some areas, like Binh Dinh Province, diminished their influence on the contest for control of the rural population. Many Communist military leaders perceived the war to be a stalemate and thought that continuing on their present course would bring diminishing returns, especially if their local forces were drastically weakened.
On the other side, Westmoreland could rightly point to some modest progress in improving South Vietnam's security and to punishing defeats inflicted on several NVA regiments and divisions. Yet none of his successes were sufficient to turn the tide of the war. The Communists had matched the build-up of American combat forces, the number of enemy divisions in the South increasing from one in early 1965 to nine at the start of 1968. Against 320 allied combat battalions, the North Vietnamese and Viet Cong could marshal 240. Despite heavy air attacks against enemy lines of infiltration, the flow of men from the North had continued unabated, even increasing toward the end of 1967.
Although the Military Assistance Command had succeeded in warding off defeat in 1965 and had gained valuable time for the South Vietnamese to concentrate their political and military resources on pacification, security in many areas of South Vietnam had improved little. Americans noted that the Viet Cong, in one district within artillery range of Saigon, rarely had any unit as large as a company. Yet, relying on booby traps, mines, and local guerrillas, they tied up over 6,000 American and South Vietnamese troops. More and more, success in the South seemed to depend not only on Westmoreland's ability to hold off and weaken enemy main force units, but on the
equally important efforts of the South Vietnamese Army, the Regional and the Popular Forces, and a variety of paramilitary and police forces to pacify the countryside. Writing to President Johnson in the spring of 1967, outgoing Ambassador Henry Cabot Lodge warned that if the South Vietnamese "dribble along and do not take advantage of the success which MACV has achieved against the main force and the Army of North Viet-Nam, we must expect that the enemy will lick his wounds, pull himself together and make another attack in '68." Westmoreland's achievements, he added, would be "judged not so much on the brilliant performance of the U.S. troops as on the success in getting ARVN, RF and PF quickly to function as a first-class . . . counter-guerrilla force." Meanwhile the war appeared to be in a state of equilibrium. Only an extraordinary effort by one side or the other could bring a decision.
The Tet Offensive
The Tet offensive marked a unique stage in the evolution of North Vietnam's People's War. Hanoi's solution to the stalemate in the South was the product of several factors. North Vietnam's large unit war was unequal to the task of defeating American combat units. South Vietnam was becoming politically and militarily stronger, while the Viet Cong's grip over the rural population eroded. Hanoi's leaders suspected that the United States, frustrated by the slow pace of progress, might intensify its military operations against the North. (Indeed, Westmoreland had broached plans for an invasion of the North when he appealed for additional forces in 1967.) The Tet offensive was a brilliant stroke of strategy by Hanoi, designed to change the arena of war from the battlefield to the negotiating table, and from a strategy of military confrontation to one of talking and fighting.
Communist plans called for violent, widespread, simultaneous military actions in rural and urban areas throughout the South—a general offensive. But as always, military action was subordinate to a larger political goal. By focusing attacks on South Vietnamese units and facilities, Hanoi sought to undermine the morale and will of Saigon's forces. Through a collapse of military resistance, the North Vietnamese hoped to subvert public confidence in the government's ability to provide security, triggering a crescendo of popular protest to halt the fighting and force a political accommodation. In short, they aimed at a general uprising.
Hanoi's generals, however, were not completely confident that the general offensive would succeed. Viet Cong forces, hastily reinforced with new recruits and part-time guerrillas, bore the brunt. Except in the northern
provinces, the North Vietnamese Army stayed on the sidelines, poised to exploit success. While hoping to spur negotiations, Communist leaders probably had the more modest goals of reasserting Viet Cong influence and undermining Saigon's authority so as to cast doubt on its credibility as the United States' ally. In this respect, the offensive was directed toward the United States and sought to weaken American confidence in the Saigon government, discredit Westmoreland's claims of progress, and strengthen American antiwar sentiment. Here again, the larger purpose was to bring the United States to the negotiating table and hasten American disengagement from Vietnam.
The Tet offensive began quietly in mid-January 1968 in the remote northwest corner of South Vietnam. Elements of three NVA divisions began to mass near the Marine base at Khe Sanh. At first the ominous proportions of the build-up led the Military Assistance Command to expect a major offensive in the northern provinces. To some observers the situation at Khe Sanh resembled Dien Bien Phu, the isolated garrison where the Viet Minh had defeated French forces in 1954. Khe Sanh, however, was a diversion, an attempt to entice Westmoreland to defend yet another border post by withdrawing forces from the populated areas of the South.
While pressure around Khe Sanh increased, 85,000 Communist troops prepared for the Tet offensive. Since the fall of 1967, the enemy had been infiltrating arms, ammunition, and men, including entire units, into Saigon and other cities and towns. Most of these meticulous preparations went undetected, although MACV received warnings of a major enemy action to take place in early 1968. The command did pull some Army units closer to Saigon just before the attack. However, concern over the critical situation at Khe Sanh and preparations for the Tet holiday festivities preoccupied most Americans and South Vietnamese. Even when Communist forces prematurely attacked Kontum, Qui Nhon, Da Nang, and other towns in the northern and central provinces on 29 January, Americans were unprepared for what followed.
On 31 January combat erupted throughout the entire country. Thirty-six of 44 provincial capitals and 64 of 242 district towns were attacked, as well as 5 of South Vietnam's 6 autonomous cities, among them Hue and Saigon. Once the shock and confusion wore off, most attacks were crushed in a few days. During those few days, however, the fighting was some of the most violent ever seen in the South or experienced by many ARVN units. Though the South Vietnamese were the main target, American units were swept into the turmoil. All Army units in the vicinity of Saigon helped to repel Viet Cong attacks there and at the nearby logistical base of Long Binh. In some American compounds, cooks, radiomen, and clerks took up arms in their own defense. Military police units helped root the Viet Cong out of Saigon, and Army helicopter gunships were in the air almost continuously, assisting the allied forces.
The most tenacious combat occurred in Hue, the ancient capital of Vietnam, where the 1st Cavalry and 101st Airborne Divisions, together with marines and South Vietnamese forces, participated in the only extended urban combat of the war. Hue had a tradition of Buddhist activism, with overtones of neutralism, separatism, and anti-Americanism, and Hanoi's strategists thought that here if anywhere the general offensive-general uprising might gain a political foothold. Hence they threw North Vietnamese regulars into the battle, indicating that the stakes at Hue were higher than elsewhere in the South. House-to-house and street-to-street fighting caused enormous destruction, necessitating massive reconstruction and community assistance programs after the battle. The allies took three weeks to recapture the city. The slow, hard-won gains of 1967 vanished overnight as South Vietnamese and Marine forces were pulled out of the countryside to reinforce the city.
Yet throughout the country the South Vietnamese forces acquitted themselves well, despite high casualties and many desertions. Instead of weakening under the shock of the attacks, civilian support for the Thieu government coalesced. Many Vietnamese for whom the war had been an unpleasant abstraction were outraged. Capitalizing on the new feeling, South Vietnam's leaders for the first time dared to enact general mobilization. The change from grudging toleration of the Viet Cong to active resistance provided an opportunity to create new local defense organizations and to attack the Communist infrastructure. Spurred by American advisers, the Vietnamese began to revitalize pacification. Most important, the Viet Cong suffered a major military defeat, losing thousands of experienced combatants and seasoned political cadres, seriously weakening the insurgent base in the South.
Americans at home saw a different picture. Dramatic images of the Viet
Cong storming the American Embassy in the heart of Saigon and the North Vietnamese Army clinging tenaciously to Hue obscured Westmoreland's assertion that the enemy had been defeated. Claims of progress in the war, already greeted with skepticism, lost more credibility in both public and official circles. The psychological jolt to President Johnson's Vietnam policy was redoubled when the military requested an additional 206,000 troops. Most were intended to reconstitute the strategic reserve in the United States, exhausted by Westmoreland's appeals for combat units between 1965 and 1967. But the magnitude of the new request, at a time when almost a half-million U.S. troops were already in Vietnam, cast doubts on the conduct of the war and prompted a reassessment of American policy and strategy.
Without mobilization, the United States was overcommitted. The Army could send few additional combat units to Vietnam without making deep inroads on forces destined for NATO or South Korea. The dwindling strategic reserve left Johnson with fewer options in the spring of 1968 than in the summer of 1965. His problems were underscored by heightened international tensions when North Korea captured an American naval vessel, the USS Pueblo, a week before the Tet offensive; by Soviet armed intervention in Czechoslovakia in the summer of 1968; and by chronic crises in the Mideast. In addition, Army units in the United States were often needed between 1965 and 1968 to enforce federal civil rights legislation and to restore public order in the wake of civil disturbances.
Again, as in 1967, Johnson refused to sanction a major troop levy, but he did give Westmoreland some modest reinforcements to bolster the northern provinces. Again tapping the strategic reserve, the Army sent him the 3d Brigade, 82d Airborne Division, and the 1st Brigade, 5th Infantry Division (Mechanized)—the last Army combat units to deploy to South Vietnam. In addition, the President called to active duty a small number of Reserve units, totaling some 40,000 men, for duty in Southeast Asia and South Korea, the only use of Reserves during the Vietnam War. For Westmoreland, Johnson's decision meant that future operations would have to make the best possible use of American forces, and that the South Vietnamese Army would have to shoulder a larger share of the war effort. The President also curtailed air strikes against North Vietnam to spur negotiations. Finally, on 31 March Johnson announced his decision not to seek reelection in order to give his full attention to the goal of resolving the conflict. Hanoi had suffered a military defeat, but had won a political and diplomatic victory by shifting American policy toward disengagement.
For the Army the new policy meant a difficult time. In South Vietnam, as in the United States, its forces were stretched thin. The Tet offensive had
concentrated a large portion of the combat forces in I Corps, once a Marine preserve. A new command, the XXIV Corps, had to be activated at Da Nang, and Army logistical support, previously confined to the three southern corps zones, extended to the five northern provinces as well. While Army units reinforced Hue and the demilitarized zone, the marines at Khe Sanh held fast. Enemy pressure on the besieged base increased daily, but the North Vietnamese refrained from an all-out attack, still hoping to divert American forces from Hue. Recognizing that he could ill afford Khe Sanh's defense, Westmoreland decided to subject the enemy to the heaviest air and artillery bombardment of the war. His tactical gamble succeeded; the enemy withdrew, and the Communist offensive slackened.
The enemy nevertheless persisted in his effort to weaken the Saigon government, launching nationwide "mini-Tet" offensives in May and August. Pockets of heavy fighting occurred throughout the south, and Viet Cong forces again tried to infiltrate into Saigon—the last gasps of the general offensive-general uprising. Thereafter enemy forces generally dispersed and avoided contact with Americans. In turn, the allies withdrew from Khe Sanh itself in the summer of 1968. Its abandonment signaled the demise of the McNamara Line and further postponement of MACV's hopes for large-scale American cross-border operations. For the remainder of 1968, Army units in I Corps were content to help restore security around Hue and other coastal areas, working closely with the marines and the South Vietnamese in support of pacification. North Vietnamese and Viet Cong forces generally avoided offensive operations. As armistice negotiations began in Paris, both sides prepared to enter a new phase of the war.
The last phase of American involvement in South Vietnam was carried out under a broad policy called Vietnamization. Its main goal was to create strong, largely self-reliant South Vietnamese military forces, an objective consistent with that espoused by U.S. advisers as early as the 1950's. But Vietnamization also meant the withdrawal of a half-million American soldiers. Past efforts to strengthen and modernize South Vietnam's Army had proceeded at a measured pace, without the pressure of diminishing American support, large-scale combat, or the presence of formidable North Vietnamese forces in the South. Vietnamization entailed three overlapping phases: redeployment of American forces and the assumption of their combat role by the South Vietnamese; improvement of ARVN's combat and support capabilities, especially firepower and mobility; and replacement of
the Military Assistance Command by an American advisory group. Vietnamization had the added dimension of fostering political, social, and economic reforms to create a vibrant South Vietnamese state based on popular participation in national political life. Such reforms, however, depended on progress in the pacification program, which never had a clearly fixed timetable.
The task of carrying out the military aspects of Vietnamization fell to General Creighton W. Abrams, who succeeded General Westmoreland as MACV commander in mid-1968, when the latter returned to the United States to become Chief of Staff of the Army. Although he had the aura of a blunt, hard-talking, World War II tank commander, Abrams had spent two years as Westmoreland's deputy, working closely with South Vietnamese commanders. Like Westmoreland before him, Abrams viewed the military situation after Tet as an opportunity to make gains in pacifying rural areas and to reduce the strength of Communist forces in the South. Until the weakened Viet Cong forces could be rebuilt or replaced with NVA forces, both guerrilla and regular Communist forces had adopted a defensive posture. Nevertheless, 90,000 NVA forces were in the South, or in border sanctuaries, waiting to resume the offensive at a propitious time.
Abrams still had strong American forces; indeed, they reached their peak strength of 543,000 in March 1969. But he was also under pressure from Washington to minimize casualties and to conduct operations with an eye toward leaving the South Vietnamese in the strongest possible military position when U.S. forces withdrew. With these considerations in mind, Abrams decided to disrupt and destroy the enemy's bases, especially those near the border, to prevent their use as staging areas for offensive operations. His primary objective was the enemy's logistical support system rather than enemy main combat forces. At the same time, to enhance Saigon's pacification efforts and improve local security, Abrams intended to emphasize small unit operations, with extensive patrolling and ambushes, aiming to reduce the enemy's base of support among the rural population.
To the greatest extent possible, he planned to improve ARVN's performance by conducting combined operations with American combat units. As the South Vietnamese Army assumed the lion's share of combat, it was expected to shift operations to the border and to assume a role similar to that performed by U.S. forces between 1965 and 1969. The Regional and Popular Forces, in turn, were to take over ARVN's role in area security and pacification support, while the newly organized People's Self-Defense Force took on the task of village and hamlet defense. Stressing the close connection between combat and pacification operations, the need for co-operation between American and South Vietnamese forces, and the importance of co-ordinating all echelons of Saigon's armed forces, Abrams propounded a "one war" concept.
Yet even in his emphasis on combined operations and American support of pacification, Abrams' strategy had strong elements of continuity with Westmoreland's. For the first, operations in War Zones C and D in 1967 and the thrust into the A Shau valley in 1968 were ample precedents. Again, Westmoreland had laid the foundation for a more extensive U.S. role in pacification in 1967 by establishing Civil Operations Rural Development Support (CORDS). Under CORDS, the Military Assistance Command took charge of all American activities, military and civilian, in support of pacification. Abrams' contribution was to enlarge the Army's role. Under him, the U.S. advisory effort at provincial and district levels grew as the territorial forces gained in importance, and additional advisers were assigned to the Phoenix program, a concerted effort to eliminate the Communist political apparatus. Numerous mobile advisory teams helped the South Vietnamese Army and paramilitary forces to become adept in a variety of combat and support functions.
Despite all efforts, many Americans doubted whether Saigon's armed forces could successfully play their enlarged role under Vietnamization. Earlier counterinsurgency efforts had languished under less demanding circumstances, and Saigon's forces continued to be plagued with high desertions,
spotty morale, and shortages of high quality leaders. Like the French before them, U.S. advisers had assumed a major role in providing and co-ordinating logistical and firepower support, leaving the Vietnamese inexperienced in the conduct of large combined-arms operations. Despite the Viet Cong's weakened condition, South Vietnamese forces also continued to incur high casualties.
Similarly, pacification registered ostensible gains in rural security and other measures of progress, but such improvements often obscured its failure to establish deep roots. The Phoenix program, despite its success in seizing low-level cadres, rarely caught hard-core, high-level party officials, many of whom survived, as they had in the mid-1950's, by taking more stringent security measures. Furthermore, the program was abused by some South Vietnamese officials, who used it as a vehicle for personal vendettas. Saigon's efforts at political, social, and economic reform likewise were susceptible to corruption, venality, and nepotism. Temporary social and economic benefits for the peasantry rested on an uncertain foundation of continued American aid, as did South Vietnam's entire economy and war effort.
Influencing all parts of the struggle was a new defense policy enunciated by Richard M. Nixon, who became President in January 1969. The "Nixon Doctrine" harkened back to the precepts of the New Look, placing greater reliance on nuclear retaliation, encouraging allies to accept a larger share of their own defense burden, and barring the use of U.S. ground forces in limited wars in Asia, unless vital national interests were at stake. Under this policy, American ground forces in South Vietnam, once withdrawn, were unlikely to return. For President Thieu in Saigon, the future was inauspicious. For the time being, large numbers of American forces were still present to bolster his country's war effort; what would happen when they departed, no one knew.
Military Operations, 1968-1969
Vietnamization began in earnest when two brigades of the U.S. Army's 9th Infantry Division left South Vietnam in July 1969, making the South Vietnamese Army responsible for securing the southern approaches to Saigon. The protective area that Westmoreland had developed around the capital was still intact. Allied forces engaged in a corps-wide counteroffensive to locate and destroy remnants of the enemy units that had participated in the Tet offensive, combining thousands of small unit operations, frequent sweeps through enemy bases, and persistent screening of the Cambodian border to prevent enemy main force units from returning. As the Military
Assistance Command anticipated, the Communists launched a Tet offensive in 1969, but a much weaker one than a year earlier. Allied forces easily suppressed the outbreaks. Meanwhile, in critical areas around Saigon pacification had begun to take hold. Such signs of progress probably resulted mainly from the attrition of Viet Cong forces during Tet 1968. But the vigilant screening of the border contributed to the enemy's difficulty in reaching and helping local insurgent forces.
Yet Saigon was not impregnable. With increasing frequency, enemy sappers penetrated close enough to launch powerful rocket attacks against the capital. Such incidents terrorized civilians, caused military casualties, and were a violent reminder of the government's inability to protect the population. Sometimes simultaneous attacks were conducted throughout the country. An economy-of-force measure, the attacks brought little risk to the enemy and compelled allied forces to suspend other tasks while they cleared the "rocket belts" around every major urban center and base in the country.
In the Central Highlands the war of attrition continued. Until its redeployment in 1970, the Army protected major highland population centers and kept open important interior roads. Special Forces worked with the tribal highlanders to detect infiltration and harass enemy secret zones. As in the past, highland camps and outposts were a magnet for enemy attacks, meant to lure reaction forces into an ambush or to divert the allies from operations elsewhere. Ben Het in Kontum Province was besieged from March to July of 1969. Other bases—Thien Phuoc and Thuong Duc in I Corps; Bu Prang, Dak Seang, and Dak Pek in II Corps; and Katum, Bu Dop, and Tong Le Chon in III Corps—were attacked because of their proximity to Communist strongholds and infiltration routes. In some cases camps had to be abandoned, but in most the attackers were repulsed. By the time the 5th Special Forces Group left South Vietnam in March 1971, all CIDG units had been converted to Regional Forces or absorbed by the South Vietnamese Rangers. The departure of the Green Berets brought an end to any significant Army role in the highlands.
Following the withdrawal of the 4th and 9th Divisions, Army units concentrated around Saigon and in the northern provinces. Operating in Quang Ngai, Quang Tin, and Quang Nam Provinces, the 23d Infantry Division (Americal) conducted a series of operations in 1968 and 1969 to secure and pacify the heavily populated coastal plain of southern I Corps. Along the demilitarized zone, the 1st Brigade, 5th Infantry Division (Mechanized), helped marines and South Vietnamese forces to screen the zone and to secure the northern coastal region, including a stretch of highway, the "street without joy," that was notorious from the time of the French. The
101st Airborne Division (converted to the Army's second airmobile division in 1968) divided its attention between the defense of Hue and forays into the enemy's base in the A Shau valley.
Since the 1968 Tet offensive, the Communists had restocked the A Shau valley with ammunition, rice, and equipment. The logistical build-up pointed to a possible NVA offensive in early 1969. In quick succession, Army operations were launched in the familiar pattern: air assaults, establishment of fire support bases, and exploration of the lowlands and surrounding hills to locate enemy forces and supplies. This time the Army met stiff enemy resistance, especially from antiaircraft guns. The North Vietnamese had expected the American forces and now planned to hold their ground.
On 11 May 1969, a battalion of the 101st Airborne Division climbing Hill 937 found the 28th North Vietnamese Regiment waiting for it. The struggle for "Hamburger Hill" raged for ten days and became one of the war's fiercest and most controversial battles. Entrenched in tiers of fortified bunkers with well-prepared fields of fire, the enemy forces withstood repeated attempts to dislodge them. Supported by intense artillery and air strikes, Americans made a slow, tortuous climb, fighting hand to hand. By the time Hill 937 was taken, three Army battalions and an ARVN regiment had been committed to the battle. Victory, however, was ambiguous as well as costly; the hill itself had no strategic or tactical importance and was abandoned soon after its capture. Critics charged that the battle wasted American lives and exemplified the irrelevance of U.S. tactics in Vietnam. Defending the operation, the commander of the 101st acknowledged that the hill's only significance was that the enemy occupied it. "My mission," he said, "was to destroy enemy forces and installations. We found the enemy on Hill 937, and that is where we fought them."
About one month later the 101st left the A Shau valley, and the North Vietnamese were free to use it again. American plans to return in the summer of 1970 came to nothing when enemy pressure forced the abandonment of two fire support bases needed for operations there. The loss of Fire Support Base O'REILLY, only eleven miles from Hue, was an ominous sign that enemy forces had reoccupied the A Shau and were seeking to dominate the valleys leading to the coastal plain. Until it redeployed in 1971, the 101st Airborne, with the marines and South Vietnamese forces, now devoted most of its efforts to protecting Hue. The operations against the A Shau had achieved no more than Westmoreland's large search and destroy operations in 1967. As soon as the allies left, the enemy reclaimed his traditional bases.
The futility of such operations was mirrored in events on the coastal plain. Here the 23d Infantry Division fought in an area where the population
had long been sympathetic to the Viet Cong. As in other areas, pacification in southern I Corps seemed to improve after the 1968 Tet offensive, though enemy units still dominated the Piedmont and continued to challenge American and South Vietnamese forces on the coast. Operations against them proved to be slow, frustrating exercises in warding off NVA and Viet Cong main force units while enduring harassment from local guerrillas and the hostile population. Except during spasms of intense combat, as in the summer of 1969 when the Americal Division confronted the 1st North Vietnamese Regiment, most U.S. casualties were caused by snipers, mines, and booby traps. Villages populated by old men, women, and children were as dangerous as the elusive enemy main force units. Operating in such conditions day after day induced a climate of fear and hate among the Americans. The already thin line between civilian and combatant was easily blurred and violated. In the hamlet of My Lai, elements of the Americal Division killed about two hundred civilians in the spring of 1968. Although only one member of the division was tried and found guilty of war crimes, the repercussions of the atrocity were felt throughout the Army. However rare, such acts undid the benefit of countless hours of civic action by Army units and individual soldiers and raised unsettling questions about the conduct of the war.
What happened at My Lai could have occurred in any Army unit in Vietnam in the late 1960's and early 1970's. War crimes were born of a sense of frustration that also contributed to a host of morale and discipline problems, among enlisted men and officers alike. As American forces were withdrawn by a government eager to escape the war, the lack of a clear military objective contributed to a weakened sense of mission and a slackening of discipline. The short-timer syndrome, the reluctance to take risks in combat toward the end of a soldier's one-year tour, was compounded by the "last-casualty" syndrome. Knowing that all U.S. troops would soon leave Vietnam, no soldier wanted to be the last to die. Meanwhile, in the United States harsh criticism of the war, the military, and traditional military values had become widespread. Heightened individualism, growing permissiveness, and a weakening of traditional bonds of authority pervaded American society and affected the Army's rank and file. The Army grappled with problems of drug abuse, racial tensions, weakened discipline, and lapses of leadership. While outright refusals to fight were few in number, incidents of "fragging"— murderous attacks on officers and noncoms—occurred frequently enough to compel commands to institute a host of new security measures within their cantonments. All these problems were symptoms of larger social and political forces and underlined a growing disenchantment with the war among soldiers in the field.
As the Army prepared to leave Vietnam, lassitude and war-weariness at times resulted in tragedy, as at Fire Support Base MARY ANN in 1971. There soldiers of the Americal Division, soon to go home, relaxed their security and were overrun by a North Vietnamese force. Such incidents reflected a decline in the quality of leadership among both noncommissioned and commissioned officers. Lowered standards, abbreviated training, and accelerated promotions to meet the high demand for noncommissioned and junior officers often resulted in the assignment of squad, platoon, and company leaders with less combat experience than the troops they led. Careerism and ticket-punching in officer assignments, false reporting and inflated body counts, and revelations of scandal and corruption all raised disquieting questions about the professional ethics of Army leadership. Critics indicted the tactics and techniques used by the Army in Vietnam, noting that airmobility, for example, tended to distance troops from the population they were sent to protect and that commanders aloft in their command and control helicopters were at a psychological and physical distance from the soldiers they were supposed to lead.
With most U.S. combat units slated to leave South Vietnam during 1970 and 1971, time was a critical factor for the success of Vietnamization and pacification. Neither program could thrive if Saigon's forces were distracted by enemy offensives launched from bases in Laos or Cambodia. While Abrams' logistical offensive temporarily reduced the level of enemy activity in the South, bases outside South Vietnam had been inviolable to allied ground forces. Harboring enemy forces, command facilities, and logistical depots, the Cambodian and Laotian bases threatened the fragile progress made in the South since Tet 1968. To the Nixon administration, Abrams' plans to violate the Communist sanctuaries had the special appeal of gaining more time for Vietnamization and of compensating for the bombing halt over North Vietnam.
Because of their proximity to Saigon, the bases in Cambodia received first priority. Planning for the cross-border attack occurred at a critical time in Cambodia. In early 1970 Cambodia's neutralist leader, Prince Norodom Sihanouk, was overthrown by his pro-Western Defense Minister, General Lon Nol. Among Lon Nol's first actions was closing the port of Sihanoukville to supplies destined for Communist forces in the border bases and in South Vietnam. He also demanded that Communist forces leave Cambodia and accepted Saigon's offer to apply pressure against those located near the
border. A few weeks earlier, American B-52 bombers had begun in secret to bomb enemy bases in Cambodia. By late April, South Vietnamese military units, accompanied by American advisers, had mounted large-scale ground operations across the border.
On 1 May 1970, units of the 1st Cavalry Division, the 25th Infantry Division, and the 11th Armored Cavalry followed. Cambodia became a new battlefield of the Vietnam War. Cutting a broad swath through the enemy's Cambodian bases, Army units discovered large, sprawling, well-stocked storage sites, training camps, and hospitals, all recently occupied. What Americans did not find were large enemy forces or COSVN headquarters. Only small delaying forces offered sporadic resistance, while main force units retreated to northeastern Cambodia. Meanwhile the expansion of the war produced violent demonstrations in the United States. In response to the public outcry, Nixon imposed a geographical and time limit on operations in Cambodia, enabling the enemy to stay beyond reach. At the end of June, one day short of the sixty days allotted to the operation, all advisers accompanying the South Vietnamese and all U.S. Army units had left Cambodia.
Political and military events in Cambodia triggered changes in the war as profound as those engendered by the Tet offensive. From a quiescent "sideshow" of the war, Cambodia became an arena for the major belligerents. Military activity increased in northern Cambodia and southern Laos as Hanoi established new infiltration routes and bases to replace those lost during the incursion. Hanoi made clear that it regarded all Indochina as a single theater of operations. Cambodia itself was engulfed in a virulent civil war.
As U.S. Army units withdrew, the South Vietnamese Army found itself in a race against Communist forces to secure the Cambodian capital of Phnom Penh. Americans provided Saigon's overextended forces air and logistical support to enable them to stabilize the situation there. The time to strengthen Vietnamization gained by the incursion now had to be weighed in the balance against ARVN's new commitment in Cambodia. To the extent that South Vietnam's forces bolstered Lon Nol's regime, they were unable to contribute to pacification and rural security in their own country. Moreover, the South Vietnamese performance in Cambodia was mixed. When working closely with American advisers, the army acquitted itself well. But when forced to rely on its own resources, the army revealed its inexperience and limitations in attempting to plan and execute large operations.
Despite ARVN's equivocal performance, less than a year later the Americans pressed the South Vietnamese to launch a second cross-border operation, this time into Laos. Although U.S. air, artillery, and logistical support
would be provided, this time Army advisers would not accompany South Vietnamese forces. The Americans' enthusiasm for the operation exceeded that of their allies. Anticipating high casualties, South Vietnam's leaders were reluctant to involve their army once more in extended operations outside their country. But American intelligence had detected a North Vietnamese build-up in the vicinity of Tchepone, a logistical center on the Ho Chi Minh Trail approximately 25 miles west of the South Vietnamese border in Laos. The Military Assistance Command regarded the build-up as a prelude to an NVA spring offensive in the northern provinces. Like the Cambodian incursion, the Laotian invasion was justified as benefiting Vietnamization, but with the added bonuses of spoiling a prospective offensive and cutting the Ho Chi Minh Trail.
In preparation for the operation, Army helicopters and artillery were moved to the vicinity of the abandoned base at Khe Sanh. The 101st Airborne Division conducted a feint toward the A Shau valley to conceal the true objective. On 8 February 1971, spearheaded by tanks and with airmobile units leapfrogging ahead to establish fire support bases in Laos, a South Vietnamese mechanized column advanced down Highway 9 toward Tchepone. Operation LAM SON 719 had begun.
The North Vietnamese were not deceived. South Vietnamese forces numbering about 25,000 became bogged down by heavy enemy resistance and bad weather. The drive toward Tchepone stalled. Facing the South Vietnamese were elements of five NVA divisions, as well as a tank regiment, an artillery regiment, and at least nineteen antiaircraft battalions. After a delay of several days, South Vietnamese forces air-assaulted into the heavily bombed town of Tchepone. By that time, the North Vietnamese had counterattacked with Soviet-built T54 and T55 tanks, heavy artillery, and infantry. They struck the rear of the South Vietnamese forces strung out on Highway 9, blocking their main avenue of withdrawal. Enemy forces also overwhelmed several South Vietnamese fire support bases, depriving ARVN units of desperately needed flank protection. The South Vietnamese also lacked antitank weapons to counter the North Vietnamese armor that appeared on the Laotian jungle trails. The result was near-disaster. Army helicopter pilots trying to rescue South Vietnamese soldiers from their besieged hilltop fire bases encountered intense antiaircraft fire. Panic ensued when some South Vietnamese units ran out of ammunition. In some units all semblance of an orderly withdrawal vanished as desperate South Vietnamese soldiers pushed the wounded off evacuation helicopters or clung to helicopter skids to reach safety. Eventually, ARVN forces punched their way out of Laos, but only after paying a heavy price.
That the South Vietnamese Army had reached its objective of Tchepone was of little consequence. Its stay there was brief and the supply caches it discovered disappointingly small. Saigon's forces had failed to sever the Ho Chi Minh Trail; infiltration reportedly increased during LAM SON 719, as the North Vietnamese shifted traffic to roads and trails further to the west in Laos. In addition to losing nearly 2,000 men, the South Vietnamese lost large amounts of equipment during their disorderly withdrawal, and the U.S. Army lost 107 helicopters, the highest number in any one operation of the war. Supporters pointed to heavy enemy casualties and argued that equipment losses were reasonable, given the large number of helicopters used to support LAM SON 719. The battle nevertheless raised disturbing questions among Army officials about the vulnerability of helicopters in mid- or high-intensity conflict. What was the future of airmobility in any war where the enemy possessed a significant antiaircraft capability?
LAM SON 719 proved to be a less ambiguous test of Vietnamization than the Cambodian incursion. The South Vietnamese Army did not perform well in Laos. Reflecting on the operation, General Ngo Quang Truong, the commander of I Corps, noted ARVN's chronic weakness in planning for and
co-ordinating combat support. He also noted that from the battalion to the division level, the army had become dependent on U.S. advisers. At the highest levels of command, he added, "the need for advisers was more acutely felt in two specific areas: planning and leadership. The basic weakness of ARVN units at regimental and sometimes division level in those areas," he continued, "seriously affected the performance of subordinate units." LAM SON 719 scored one success, forestalling a Communist spring offensive in the northern provinces; in other respects, it was a failure and an ill omen for the future.
Withdrawal: The Final Battles
As the Americans withdrew, South Vietnam's combat capability declined. The United States furnished its allies the heavier M48 tank to match the NVA's T54 tank and heavier artillery to counter North Vietnamese 130mm. guns, though past experience suggested that additional arms and equipment could not compensate for poor skills and mediocre leadership. In fact, the weapons and equipment were insufficient to offset the reduction in U.S. combat strength. In mid-1968, for example, an aggregate of fifty-six allied combat battalions were present in South Vietnam's two northern provinces; in 1972, after the departure of most American units, only thirty battalions were in the same area. Artillery strength in the northern region declined from approximately 400 guns to 169 in the same period, and ammunition supply rates fell off as well. Similar reductions took place throughout South Vietnam, causing decreases in mobility, firepower, intelligence support, and air support. Five thousand American helicopters were replaced by about 500. American specialties—B-52 strikes, photo reconnaissance, and the use of sensors and other means of target acquisition—were drastically curtailed.
Such losses were all the more serious because operations in Cambodia and Laos had illustrated how deeply ingrained in the South Vietnamese Army the American style of warfare had become. Nearly two decades of U.S. military involvement were exacting an unexpected price. As one ARVN division commander commented, "Trained as they were through combined action with US units, the [South Vietnamese] unit commander was used to the employment of massive firepower." That habit, he added, "was hard to relinquish."
By November 1971, when the 101st Airborne Division withdrew from the South, Hanoi was planning its 1972 spring offensive. With ARVN's combat capacity diminished and nearly all U.S. combat troops gone, North Vietnam sensed an opportunity to demonstrate the failure of Vietnamization, hasten
ARVN's collapse, and revive the stalled peace talks. In its broad outlines and goals, the 1972 offensive resembled Tet 1968, except that the North Vietnamese Army, instead of the Viet Cong, bore the major burden of combat. The Nguyen-Hue offensive, or Easter offensive, began on 30 March 1972. Total U.S. military strength in South Vietnam was about 95,000, of which only 6,000 were combat troops, and the task of countering the offensive on the ground fell almost exclusively to the South Vietnamese.
Attacking on three fronts, the North Vietnamese Army poured across the demilitarized zone and out of Laos to capture Quang Tri, South Vietnam's northernmost province. In the Central Highlands, enemy units moved into Kontum Province, forcing Saigon to relinquish several border posts before government forces contained the offensive. On 2 April, Viet Cong and North Vietnamese forces struck Loc Ninh, just south of the Cambodian border on Highway 13, and advanced south to An Loc along one of the main invasion routes toward Saigon. A two-month-long battle ensued, until enemy units were driven from An Loc and forced to disperse to bases in Cambodia. By late summer the Easter offensive had run its course; the South Vietnamese, in a slow, cautious counteroffensive, recaptured Quang Tri City and most of the lost province. But the margin of victory or defeat often was supplied by the massive supporting firepower provided by U.S. air and naval forces.
The tactics of the war were changing. Communist forces now made extensive use of armor and artillery. Among the new weapons in the enemy's arsenal was the Soviet SA-7 hand-held antiaircraft missile, which posed a threat to slow-flying tactical aircraft and helicopters. On the other hand, the Army's attack helicopter, the Cobra, outfitted with TOW antitank missiles, proved effective against NVA armor at stand-off range. In their antitank role, Army attack helicopters were crucial to ARVN's success at An Loc, suggesting a larger role for helicopters in the future as part of a combined arms team in conventional combat.
Vietnamization continued to show mixed results. The benefits of the South Vietnamese Army's newly acquired mobility and firepower were dissipated as it became responsible for securing areas vacated by American forces. Improvements of territorial and paramilitary troops were offset as they became increasingly vulnerable to attack by superior North Vietnamese forces. Insurgency was also reviving. Though their progress was less spectacular than the blitzkrieg-like invasion of the South, North Vietnamese forces entered the Delta in thousands between 1969 and 1973 to replace the Viet Cong—one estimate suggested a tenfold increase in NVA strength, from 3,000 to 30,000, in this period. Here the fighting resembled that of the early
1960's, as enemy forces attacked lightly defended outposts and hamlets to regain control over the rural population in anticipation of a cease-fire. The strength of the People's Self-Defense Force, Saigon's first line of hamlet and village defense, after steady increases in 1969 and 1970, began to decline after 1971, also suggesting a revival of the insurgency in the countryside. Pursuing a strategy used successfully in the past, the North Vietnamese forced ARVN troops to the borders, exposing the countryside and leaving its protection in the hands of weaker forces.
Such unfavorable signs, however, did not disturb South Vietnam's leaders as long as they could count on continued United States air and naval support. Nixon's resumption of the bombing of North Vietnam during the Easter offensive and, for the first time, his mining of North Vietnamese ports encouraged this expectation, as did the intense American bombing of Hanoi and Haiphong in late 1972. But such pressure was intended, at least in part, to force North Vietnam to sign an armistice. If Thieu was encouraged by the display of U.S. military muscle, the course of negotiations could only have been a source of discouragement. Hanoi dropped an earlier demand for Thieu's removal, but the United States gave up its insistence on Hanoi's withdrawal of its troops from the South. In early 1973 the United States, North and South Vietnam, and the Viet Cong signed an armistice that promised a cease-fire and national reconciliation. In fact, fighting continued, but the Military Assistance Command was dissolved, remaining U.S. forces withdrawn, and American military action in South Vietnam terminated. Perhaps most important of all, American advisers, still in many respects the backbone of ARVN's command structure, were withdrawn.
Between 1973 and 1975 South Vietnam's military security further declined through a combination of old and new factors. Plagued by poor maintenance and shortages of spare parts, much of the equipment provided Saigon's forces under Vietnamization became inoperable. A rise in fuel prices stemming from a worldwide oil crisis further restricted ARVN's use of vehicles and aircraft. South Vietnamese forces in many areas of the country were on the defensive, confined to protecting key towns and installations. Seeking to preserve its diminishing assets, the South Vietnamese Army became garrison bound and either reluctant or unable to react to a growing number of guerrilla attacks that eroded rural security. Congressionally mandated reductions in U.S. aid further reduced the delivery of repair parts, fuel, and ammunition. American military activities in Cambodia and Laos, which had continued after the cease-fire in South Vietnam went into effect, ended in 1973 when Congress cut off funds. Complaining of this austerity, President Thieu noted that he had to fight a "poor man's war." Vietnamization's legacy
was that South Vietnam had to do more with less.
In 1975 North Vietnam's leaders began planning for a new offensive, still uncertain whether the United States would resume bombing or once again intervene in the South. When their forces overran Phuoc Long Province, north of Saigon, without any American military reaction, they decided to proceed with a major offensive in the Central Highlands. Neither President Nixon, weakened by the Watergate scandal and forced to resign, nor his successor, Gerald Ford, was prepared to challenge Congress by resuming U.S. military activity in Southeast Asia. The will of Congress seemed to reflect the mood of an American public weary of the long and inconclusive war.
What had started as a limited offensive in the highlands to draw off forces from populated areas now became an all-out effort to conquer South Vietnam. Thieu, desiring to husband his military assets, decided to retreat rather than to reinforce the highlands. The result was panic among his troops and a mass exodus toward the coast. As Hanoi's forces spilled out of the highlands, they cut off South Vietnamese defenders in the northern provinces from the rest of the country. Other NVA units now crossed the demilitarized zone, quickly overrunning Hue and Da Nang, and signaling the collapse of South Vietnamese resistance in the north. Hurriedly established defense lines around Saigon could not hold back the inexorable enemy offensive against the capital. As South Vietnamese leaders waited in vain for American assistance, Saigon fell to the Communists on 29 April 1975.
The Post-Vietnam Army
Saigon's fall was a bitter end to the long American effort to sustain South Vietnam. Ranging from advice and support to direct participation in combat and involving nearly three million U.S. servicemen, the effort failed to stop Communist leaders from reaching their goal of unifying a divided nation. South Vietnam's military defeat tended to obscure the crucial inability of this massive military enterprise to compensate for Saigon's political shortcomings. Over a span of nearly two decades, a series of regimes failed to mobilize fully and effectively their nation's political, social, and economic resources to foster a popular base of support. North Vietnamese main force units ended the war, but local insurgency among the people of the South made that outcome possible and perhaps inevitable.
The U.S. Army paid a high price for its long involvement in South Vietnam. American military deaths exceeded 58,000, and of these about two-thirds were soldiers. The majority of the dead were low-ranking enlisted
men (E-2 and E-3), young men twenty-three years old or younger, of whom approximately 13 percent were black. Most deaths were caused by small-arms fire and gunshot, but a significant portion, almost 30 percent, stemmed from mines, booby traps, and grenades. Artillery, rockets, and bombs accounted for only a small portion of the total fatalities.
If not for the unprecedented medical care that the Army provided in South Vietnam, the death toll would have been higher yet. Nearly 300,000 Americans were wounded, of whom half required hospitalization. The lives of many seriously injured men, who would have become fatalities in earlier wars, were saved by rapid helicopter evacuation direct to hospitals close to the combat zone. Here, relatively secure from air and ground attack, usually unencumbered by mass casualties, and with access to an uninterrupted supply of whole blood, Army doctors and nurses availed themselves of the latest medical technology to save thousands of lives. As one medical officer pointed out, the Army was able to adopt a "civilian philosophy of casualty triage" in the combat zone that directed the "major effort first to the most seriously injured." But some who served in South Vietnam suffered more insidious damage from the adverse psychological effects of combat or the long-term effects of exposure to chemical agents. More than a decade after the end of the war, 1,761 American soldiers remain listed as missing in action.
The war-ravaged Vietnamese, north and south, incurred the greatest losses. South Vietnamese military deaths exceeded 200,000. War-related civilian deaths in the South approached a half-million, while the injured and maimed numbered many more. Accurate estimates of enemy casualties run afoul of the difficulty in distinguishing between civilians and combatants, imprecise body counts, and the difficulty of verifying casualties in areas controlled by the enemy. Nevertheless, nearly a million Viet Cong and North Vietnamese soldiers are believed to have perished in combat through the spring of 1975.
For the U.S. Army the scars of the war ran even deeper than the grim statistics showed. Given its long association with South Vietnam's fortunes, the Army could not escape being tarnished by its ally's fall. The loss compounded already unsettling questions about the Army's role in Southeast Asia, about the soundness of its advice to the South Vietnamese, about its understanding of the nature of the war, about the appropriateness of its strategy and tactics, and about the adequacy of the counsel provided by Army leaders to national decision makers. Marked by ambiguous military objectives, defensive strategy, lack of tactical initiative, ponderous tactics, and untidy command arrangements, the struggle in Vietnam seemed to violate most of the time-honored principles of war. Many officers sought to erase
Vietnam from the Army's corporate memory, feeling uncomfortable with the ignominy of failure or believing that the lessons and experience of the war were of little use to the post-Vietnam Army. Although a generation of officers, including many of the Army's future leaders, cut their combat teeth in Vietnam, many regretted that the Army's reputation, integrity, and professionalism had been tainted in the service of a flawed strategy and a dubious ally.
Even before South Vietnam fell, Army strategists turned their attention to what seemed to them to be the Army's more enduring and central mission—the defense of western Europe. Ending a decade of neglect of its forces there, the Army began to strengthen and modernize its NATO contingent. Army planners doubted that in any future European war they would enjoy the luxury of a gradual, sustained mobilization, or unchallenged control of air and sea lines of communication, or access to support facilities close to the battlefield. France's decision in 1966 to withdraw from NATO's integrated military command had already forced the Army to re-evaluate its strategy and support arrangements. The end of the draft in 1972 and the transition to an all-volunteer Army in 1973—a reflection of popular dissatisfaction with the Vietnam War—added to the unlikelihood of another war similar to Vietnam and made it seem more than ever an anomaly.
Instead, Army planners faced a possible future conflict that would begin with little or no warning and confront allied forces-in-being with a numerically superior foe. Combat in such a war was likely to be violent and sustained, entailing deep thrusts by armored forces, intense artillery and counterbattery fire, and a fluid battlefield with a high degree of mobility. Army doctrine to fight this war, codified in 1976 in FM (Field Manual) 100-5, Operations, barely acknowledged the decade of Army combat in Vietnam. The new doctrine of "active defense" drew heavily on the experience of armored operations in World War II and recent fighting in the Middle East between Arab and Israeli forces. From a study of about 1,000 armored battles, Army planners deduced that an outnumbered defender could force a superior enemy to concentrate his forces and reveal his intentions, and thus bring to bear in the all-important initial phase of the battle sufficient forces and firepower in the critical area to defeat his main attack. The conversion of the 1st Cavalry Division, the unit that exemplified combat operations in South Vietnam, from an airmobile division to a new triple capabilities (TRICAP) division symbolized the post-Vietnam Army's reorientation toward combat in Europe. Infused with additional mechanized and artillery forces to give it greater flexibility and firepower, the division's triple capabilities—armor, airmobility, and air cavalry—better suited it to carry out the tactical concepts
of FM 100-5 than its previous configuration.
Yet the Army did not totally ignore its Vietnam experience. U.S. armor and artillery forces had gained valuable experience there in co-ordinating operations with airmobile forces. Although some in the military questioned whether helicopters could operate in mid-intensity conflict, Army doctrine rested heavily on concepts of airmobility that had evolved during Vietnam. Helicopters were still expected to move forces from one sector of the battlefield to another, to carry out reconnaissance and surveillance, to provide aerial fire support, and to serve as antitank weapons systems. In many respects, the role contemplated for helicopters in the post-Vietnam Army harkened back to concepts of airmobility originally formulated for the atomic battlefield of the early 1960's, but modified by combat in Vietnam. Like the Army of the Vietnam era, the postwar Army continued a common hallmark of the American military tradition by emphasizing technology and firepower over manpower.
The Army's new operational doctrine had its share of critics. Stressing tactical operations of units below the division, the doctrine of FM 100-5 neglected the role of larger Army echelons. Recognition of this deficiency led to a revival of interest in the role of divisions, corps, and armies in the gray area between grand strategy and tactics. But some strategists warned that the Army seemed to be preparing for the war it was least likely to fight. Like the strategists of the New Look in the 1950's, they viewed an attack on Army forces in Europe as a mere trip wire that would ignite a nuclear confrontation between the superpowers and thus make the land battle irrelevant. With insurgencies, small wars, subversion, and terrorism flourishing throughout Asia, Africa, and Latin America, others believed that the Army would sooner or later find itself once again engaged in conflicts that closely resembled Vietnam.
Ten years after the loss of South Vietnam, the U.S. Army's major overseas commitments remained anchored in NATO and South Korea. International realities still compelled it to prepare for a variety of contingencies. In addition to organizing divisions to fight in Europe, the Army revived its old interest in light infantry divisions. By the mid-1980's two such divisions, the 10th Mountain Division and the 6th Infantry Division (Light), had been activated, giving the Army once again a total of eighteen divisions. Lower active-duty strength required many divisions to be fleshed out by Reserve Components before they could be committed to combat. Nevertheless, the Army viewed its new divisions as suitable for use in a rapid deployment force to reinforce NATO or world trouble spots. Although their strength was drastically reduced following the Vietnam War, Special Forces continued to
be called upon to advise and train anti-Communist military forces in Latin America and elsewhere and to participate in a variety of special activities to counter terrorism. Operations like the abortive attempt to rescue American hostages in Iran and the successful operation to prevent a Communist takeover of the Caribbean island of Grenada attested to the Army's continuing need for both rapidly deployable and special-purpose forces. The realities of a complex world reinforced the pervasive influence of flexible response on the U.S. national security policy. Many other missions fell under the doctrinal umbrella of low-intensity conflict, a vague and faddish term that became popular in the 1980's as counterinsurgency had two decades earlier. The relevance of Vietnam to low-intensity conflict remains an open question.
Nevertheless, by the 1980's the conduct and lessons of the war in Vietnam had again become the subject of lively debate in the Army. Reassessments of its role tend to center around the issue of whether the Army should have devoted more effort to pacification or to defeating the conventional military threat posed by North Vietnam. These issues stem from the ambiguities of the war and the paradox of the Army's experience. Reliance on massive firepower and technological superiority and the ability to marshal vast logistical resources have been hallmarks of the American military tradition. Tactics have often seemed to exist apart from larger issues, strategies, and objectives. Yet in Vietnam the Army experienced tactical success and strategic failure. The rediscovery of the Vietnam War suggests that its most important legacy may be the lesson that unique historical, political, cultural, and social factors always impinge on the military. Strategic and tactical success rests not only on military progress but on correctly analyzing the nature of the particular conflict, understanding the enemy's strategy, and realistically assessing the strengths and weaknesses of allies. A new humility and a new sophistication may form the best parts of the complex heritage left the Army by the long, bitter war in Vietnam.
Kindergarten students studying the life, work, and philosophy of César E.
Chávez will learn that being a good citizen involves acting in certain ways,
and that the personal qualities that Chávez possessed reflect good civic behavior.
They will also have the opportunity to learn about the work that people must do to
grow food, to harvest the crops, and to transport the food to locations for people
to buy. Kindergarten students will learn about César E. Chávez,
the man for whom California named a holiday.
Kindergarten: History-Social Science Framework
Students in kindergarten begin their formal education by learning to understand the
character traits that are necessary for good civic behavior. They will listen to
stories of times past and about men and women who have made a difference. They will
learn how it might have been to live in other times and places and how their lives
would have been different. They will observe different ways people lived in earlier
days; for example, getting water from a well or growing their food. (Pp. 27-29)
Kindergarten: History-Social Science Standards
Standard K.1 Students understand that being a good citizen involves acting in certain ways.
K.1.2 Learn examples of honesty, courage, determination, individual
responsibility, and patriotism in American and world history from stories and folklore.
Standard K.3 Students match simple descriptions of work that people do and the
names of related jobs at school, in the community, and from historical accounts.
Standard K.6 Students understand that history relates to events, people, and
places of other times.
K.6.1 Identify the purpose of, and the people and events honored in,
commemorative holidays, including the human struggles that were the basis for
the events (e.g., Thanksgiving, Independence Day, Washington’s and
Lincoln’s Birthdays, Martin Luther King Jr. Day, Labor Day, Columbus Day).
K.6.3 Understand how people lived in earlier times and how their lives
would be different today (e.g., water from a well, growing food, making clothing,
having fun, forming organizations, living by rules and laws).
César E. Chávez: An American Hero
- Students will be able to identify a photograph or
portrait of César E. Chávez
and orally state one reason why he is an American hero.
César E. Chávez and His Family
- Students will be able to state what César E. Chávez learned from his
mother, father and grandmother.
César E. Chávez and the Community
- Students will be able to describe and model good citizenship.
Students will be able to state why César E. Chávez was a good citizen.
César E. Chávez Making Change
- Students will be able to explain how César E. Chávez helped farm workers.
Students will be able to explain how the life of a farm worker changed
because of Chávez’s work.
Students will be able to show how nonviolent action can lead to a peaceful resolution.
The Memory of César E. Chávez
- Students will plan and complete a service learning project.
Grade One: A Child’s Place in Time and Space
Students studying the life, work, and philosophy of César E. Chávez in
grade one will learn how he worked to resolve problems peaceably. By examining the
life of Chávez, they will understand how his cultural experiences influenced
his politics, family life, education, philosophy, and recreational activities.
Grade One: History-Social Science Framework (Revised 2000)
Students in grade one will learn more about the world they live in and their responsibility
to other people. They will be ready to develop a deeper understanding of cultural
diversity and to appreciate the many people from various backgrounds. They will gain
a beginning understanding of economics and how goods and services are exchanged for
money. They will be ready to examine their neighborhood's many geographic and economic
connections to the larger world. Students will hear stories to discover the many ways
in which people, families, and cultural groups are alike as well as those ways in which
they differ. (Pp. 32-34)
Grade One: History-Social Science Standards
Standard 1.2 Students compare and contrast the absolute and relative locations of
places and people, and describe the physical and/or human characteristics of places.
1.2.4 Describe how location, weather, and physical environment affect
the way people live, including the effects on their food, clothing, shelter,
transportation, and recreation.
Standard 1.4 Students compare and contrast everyday life in different times and
places around the world and recognize that some aspects of people, places, and
things change over time while others stay the same.
1.4.3 Recognize similarities and differences of earlier generations
in such areas as work (inside and outside the home), dress, manners, stories,
games, festivals, drawing from biographies, oral histories, and folklore.
Standard 1.5 Students describe the human characteristics of familiar places and
the varied backgrounds of American citizens and residents in those places.
1.5.2 Understand the ways in which Native Americans and immigrants
have helped define Californian and American culture.
Standard 1.6 Students understand basic economic concepts and the role of
individual choice in a free-market economy.
1.6.2 Identify the specialized work that people do to manufacture,
transport, and market goods and services, and the contributions of those who
work in the home.
- Lesson 1
- Students will be able to explain how a nonviolent action can cause change in the
cycle of a free-market economy. Students will be able to create imagery that conveys
a nonviolent message of change.
- Lesson 2
Chávez Time Line
- Students will be able to put a time line in order. They will identify factors
of change and similarities and differences in the life of Chávez.
Students will be able to understand how Chávez changed the world he lived in.
- Lesson 3
- Students will be able to see how a cultural song can evoke visions and emotions.
Students will be able to identify reasons that Chávez would play this song at meetings.
- Lesson 4
Farm Worker Inspired Poetry
- Students will be able to recognize the role farm workers play in the marketing of goods.
Students will be able to use words to evoke the feelings of this particular group of people.
- Lesson 5
Similarities and Differences
- Students will be able to identify the similarities and differences in the life of Chávez.
Students and the teacher will pick out similarities and differences to create a list.
- Lesson 6
Violence Versus Nonviolence
- Students will see how violence does not achieve a given goal while nonviolence
and unity do. Students will be able to explain why they should choose nonviolence over violence.
Grade Two: People Who Make a Difference
Students studying César E. Chávez will learn about his role in
improving the lives of farm workers. They will learn about Chávez as a
family man, as a husband, as a father and grandfather. They will learn about
the role that religion played in Chávez’s life. They will learn
about his role as an organizer, a labor leader, and as an environmentalist.
Most importantly, they will learn about him as a civil rights leader and as
an advocate for social justice and nonviolence.
Grade Two: History-Social Science Framework
Students in grade two will learn about people who made a difference in the past.
They will learn about those who supply the goods and services that are necessary
for daily life. Their studies will emphasize those who supply food: the people who
grow and harvest it on vegetable farms and in fruit orchards, and the processors and
distributors who move food from farm to market. Students will also learn to
use maps that extend to regions beyond their neighborhood to the farmlands and
to the places where people work to produce their food. They will also learn to
explore geographic questions such as: How does climate affect the crops that a
farmer can grow? Why are some areas more fertile than others? Why is water
such an important resource for farmers?
They will also understand and appreciate the many ways that parents, grandparents,
and ancestors have made a difference. This will help them develop a beginning
sense of history. Teachers will ask students: Where did the family come from?
What was it like to live there? Who was in the family then? Do photos or
letters from that time still exist? Reading literature helps children acquire
deeper insights into the cultures from which families came; the stories, games,
and festivals parents or grandparents might have enjoyed as children; the work
that children as well as their families were expected to perform; their religious
practices; and the dress, manners, and morals expected of family members at that time.
Comparisons will be drawn with children’s lives today to discover how many of these
family traditions, practices, and values have carried forward to the present and
what kinds of changes have occurred.
They will also learn about those extraordinary men and women who have made a
difference in our national life and in the larger world community. Children
will meet those men and women whose contributions can be appreciated by seven-year-olds
and whose achievements have directly or indirectly touched their lives or the
lives of others. They will learn about leaders from all walks of life who have
helped to solve community problems, worked for better schools, or improved
living conditions and the lifelong opportunities for workers, families, women,
and children. They will learn about those who have been honored locally for
the special courage, responsibility, and concern they have displayed in
contributing to the safety, welfare, and happiness of others. (Pp. 38-41)
Grade Two History-Social Science Standards
Standard 2.2 Students demonstrate map skills by describing the absolute and
relative locations of people, places, and environments.
2.2.4 Compare and contrast basic land use in urban, suburban, and
rural environments in California.
Standard 2.4 Students understand basic economic concepts and their individual
roles in the economy and demonstrate basic economic reasoning skills.
2.4.1 Describe food production and consumption long ago and today,
including the roles of farmers, processors, distributors, weather, and land
and water resources.
2.4.3 Understand how limits on resources affect production and
consumption (what to produce and what to consume).
Standard 2.5 Students understand the importance of individual action and
character and explain how heroes from long ago and the recent past have made
a difference in others' lives (e.g., from biographies of Abraham Lincoln,
Louis Pasteur, Sitting Bull, George Washington Carver, Marie Curie, Albert Einstein,
Golda Meir, Jackie Robinson, Sally Ride).
The Food We Eat
- Students will be able to name the major crops grown in California's Central Valley.
Students will list where the products of the Central Valley are exported.
Students will trace one of California's major crops from the farm to the market.
- Students will be able to state why farm workers are an
important part of the farm economy.
Students will be able to explain the crop cycle in California
and why farm workers are migrant workers.
Students will state that César E. Chávez was an important
migrant farm worker.
- Students will explain the reasons that farmers need to use pesticides.
Students will list the dangers of using pesticides.
Students will understand that limits on resources affect production and consumption.
Students will understand that the general public does not
want to eat products that are harmful.
The Importance of Farm Owners and Farm Workers
- Students will be able to state why farm owners are an important part of the farm economy.
Students will be able to state why farm workers are an
important part of the farm economy. Students will state why César E.
Chávez became interested in improving the lives of farm workers.
Grade Three: Continuity and Change
Students in grade three studying César E. Chávez will learn about his
relationship with immigrants. Students will learn about Chávez’s
work with Fred Ross, as well as his work in his own local community. They will
learn about Chávez’s work as a civil rights leader and the
connection between his ideas and his actions and behavior. Students will learn
how César E. Chávez was taught to organize people to solve their
problems and to fight for justice.
Grade Three: History-Social Science Framework
Students in grade three will continue their study of community by examining continuity
and change. They will differentiate between major landforms and landscapes.
They will consider the impact of new groups of people on those that came before.
They will use historical photographs to observe the changes in the ways families
lived and worked. They will have opportunities to role-play being an immigrant
today and long ago; discover how newcomers, including children, have earned their
living, now and long ago; and analyze why such occupations have changed over time.
They will compare the past to changes underway today. (How do people today earn a
living? How are people working to protect their region’s natural resources?
How do people in this community work to influence public policy and participate
in resolving local issues that are important to children and their families?)
Children will listen to biographies of the nation’s heroes and of those who
took the risk of new and controversial ideas and opened new opportunities for many.
These stories will help children to understand today’s great movement of immigrants
into California as part of the continuing history of their nation. (Pp. 44-47)
Grade Three: History-Social Science Standards
Standard 3.1 Students describe the physical and human geography and use maps,
tables, graphs, photographs, and charts to organize information about people,
places, and environments in a spatial context.
3.1.1 Identify geographical features in their local region (e.g.,
deserts, mountains, valleys, hills, coastal areas, oceans, lakes).
3.1.2 Trace the ways in which people have used the resources of the
local region and modified the physical environment (e.g., a dam constructed
upstream changed a river or coastline).
Standard 3.4 Students understand the role of rules and laws in our daily lives
and the basic structure of the U.S. Government.
3.4.2 Discuss the importance of public virtue and the role of citizens,
including how to participate in a classroom, in the community, and in civic life.
3.4.6 Describe the lives of American heroes who took risks to secure our
freedoms (e.g., Anne Hutchinson, Benjamin Franklin, Thomas Jefferson, Abraham Lincoln,
Frederick Douglass, Harriet Tubman, Martin Luther King, Jr.).
Standard 3.5 Students demonstrate basic economic reasoning skills and an
understanding of the economy of the local region.
3.5.1 Describe the ways in which local producers have used and are
using natural resources, human resources, and capital resources to produce goods
and services in the past and the present.
- Lesson 1
We Depend on the Land: Agriculture in California
- Students will be able to color and label a map of California by geographic region.
Students will be able to identify the agricultural regions of California
and describe why they are conducive to farming.
Students will be able to graph data of California’s major crops
and write/say fact statements about the state's agriculture.
Students will write/say sentences comparing and contrasting agriculture
now and in California’s past.
- Lesson 2
César E. Chávez: An American Hero
- Students will be able to create and interpret a time line of César’s life.
Students will be able to state and analyze several causes and effects of
César’s actions to help migrant farm workers.
Students will be able to compare and contrast conditions of migrant workers now
and in the past.
- Lesson 3
Understanding a Democratic Society
- Students will be able to state examples of local laws and explain why they exist.
Students will be able to describe who makes the laws in a democratic society.
Students will be able to distinguish between the roles of local, state and federal government.
- Lesson 4
César E. Chávez: An Instrument of Change in a Democracy
- Students will be able to describe a day in the life of a migrant farm worker in the 1960s.
Students will be able to summarize why Chávez was an important person in American history.
Students will be able to state the nonviolent methods used by Chávez and explain their effects.
Students will be able to develop an action plan to solve a community problem.
- Lesson 5
Agriculture and the Economy
- The students will be able to demonstrate an understanding of the impact that
farm workers have on the economy of the state and the country. | http://chavez.cde.ca.gov/ModelCurriculum/Teachers/Lessons_K-3.aspx | 13 |
16 | The Punic Wars were a series of three wars fought between Rome and Carthage from 264 BC to 146 BC. At the time, they were probably the largest wars that had ever taken place, much like today's World Wars. The term Punic comes from the Latin word Punicus (or Poenicus), meaning "Carthaginian", with reference to the Carthaginians' Phoenician ancestry.
The main cause of the Punic Wars was the clash of interests between the existing Carthaginian Empire and the expanding Roman Republic. The Romans were initially interested in expansion via Sicily (which at that time was a cultural melting pot), part of which lay under Carthaginian control. At the start of the First Punic War, Carthage was the dominant power of the Western Mediterranean, with an extensive maritime empire, while Rome was the rapidly ascending power in Italy, but lacked the naval power of Carthage.
By the end of the third war, after more than a hundred years and the loss of many hundreds of thousands of soldiers from both sides, Rome had conquered Carthage's empire and completely destroyed the city, becoming the most powerful state of the Western Mediterranean. With the end of the Macedonian wars - which ran concurrently with the Punic Wars - and the defeat of the Seleucid King Antiochus III the Great in the Roman–Syrian War (Treaty of Apamea, 188 BC) in the eastern sea, Rome emerged as the dominant Mediterranean power and one of the most powerful cities in classical antiquity. The Roman victories over Carthage in these wars gave Rome a preeminent status it would retain until the 5th century AD.
During the mid-3rd century BC, Carthage was a large city located on the coast of modern Tunisia. Founded by the Phoenicians in the mid-9th century BC, it was a powerful thalassocratic city-state with a vast commercial network. Of the great city-states in the western Mediterranean, only Rome rivaled it in power, wealth, and population.
While Carthage's navy was the largest in the ancient world at the time, it did not maintain a large, permanent, standing army. Instead, Carthage relied mostly on mercenaries, especially the indigenous Numidian Berbers, to fight its wars. However, most of the officers who commanded the armies were Carthaginian citizens. The Carthaginians were famed for their abilities as sailors, and unlike their armies, many Carthaginians from the lower classes served in their navy, which provided them with a stable income and career.
In 200 BC the Roman Republic had gained control of the Italian peninsula south of the Po river. Unlike Carthage, Rome had large disciplined armed forces. On the other hand, at the start of the First Punic War the Romans had no navy, and were thus at a disadvantage until they began to construct their own large fleets during the war.
The First Punic War was primarily fought in Sicily and at sea, and both states suffered heavily. It was the first of three major wars between the two powers for supremacy in the Mediterranean Sea. After 23 years of fighting, Rome emerged the victor, receiving Sicily as spoils and imposing heavy conditions upon Carthage as the price for peace; Sardinia and Corsica were seized by Rome a few years later, in the aftermath of the Mercenary War. The conflict was called the "Punic War" because Rome's name for the Carthaginians was Punici (older Phoenici, due to their Phoenician ancestry).
In the middle of the 3rd century BC, the power of Rome was growing. Following centuries of rebellions and disturbances, the whole of the Italian peninsula was firmly in Roman hands. All enemies - such as the Latin League or the Samnites - had been overcome, and the invasion of Pyrrhus of Epirus had been repelled.
The Romans had enormous confidence in their political system and their military. Across the Tyrrhenian Sea and the Strait of Sicily, Carthage was already an established naval and commercial power, controlling most of the Mediterranean maritime trade routes. Originally a Phoenician colony, the city had become the center of a wide commercial empire reaching along the North African coast to as far as Iberia.
In 288 BC, the Mamertines, a group of Italian mercenaries, occupied the city of Messina in the northeastern tip of Sicily, killing all the men and taking the women as their wives. From this base, they ravaged the countryside and became a problem for the independent city of Syracuse. When Hiero II, tyrant of Syracuse, came to power in 265 BC, he decided to take definitive action against the Mamertines and besieged Messina.
The Mamertines then appealed for help simultaneously to Rome and Carthage. At first, the Romans did not wish to come to the aid of soldiers who had unjustly stolen a city from its rightful possessors. Moreover, Rome had recently dealt with an insurrection of mercenaries following the defeat of Pyrrhus of Epirus (Rhegium, 271 BC) and was probably reluctant to help this faction now, so Carthage was the first city to respond to the plea and send troops to the area.
Most likely unwilling to see Carthaginian power spread further over Sicily and get too close to Italy, Rome responded by entering into an alliance with the Mamertines.
In 264 BC, Roman troops were deployed to Sicily (the first time a Roman army acted outside the Italian peninsula) and forced a reluctant Syracuse to join their alliance. Soon enough the only parties in the dispute were Rome and Carthage and the conflict evolved into a struggle for the possession of Sicily.
As Sicily was a hilly island, with geographical obstacles and a terrain where lines of communication were difficult to maintain, land warfare played a secondary role in the First Punic War. Land operations were mostly confined to small-scale raids and skirmishes between the armies, with hardly any pitched battles. Sieges and land blockades were the most common operations for the regular army. The main targets of blockading were the important naval ports, since neither of the belligerent parties was based in Sicily and both needed a continuous supply of reinforcements and communication with the mainland.
Despite these general considerations, at least two large-scale land campaigns were fought during the First Punic War. In 262 BC, Rome besieged the city of Agrigentum, an operation that involved both consular armies - a total of four Roman legions - and took several months to resolve. The garrison of Agrigentum managed to call for reinforcements and a Carthaginian relief force commanded by Hanno came to the rescue. With the supplies from Syracuse cut, the Romans found themselves also besieged and constructed a line of circumvallation. After a few skirmishes, the battle of Agrigentum was fought and won by Rome, and the city fell. Inspired by this victory, Rome attempted (256/255 BC) another large-scale land operation, this time with different results.
Following several naval battles, Rome was aiming for a quick end to the war and decided to invade the Carthaginian colonies in Africa to force the enemy to accept terms. A major fleet was built, comprising both transports for the army and its equipment and warships for their protection. Carthage tried to intervene but was defeated in the battle of Cape Ecnomus.
As a result, the Roman army commanded by Marcus Atilius Regulus landed in Africa and started to ravage the Carthaginian countryside. At first Regulus was victorious, winning the battle of Adys and forcing Carthage to sue for peace. The terms were so heavy that negotiations failed and, in response, the Carthaginians hired Xanthippus, a Spartan mercenary, to reorganize the army. Xanthippus managed to cut off the Roman army from its base by re-establishing Carthaginian naval supremacy, then defeated and captured Regulus at the battle of Tunis.
Towards the end of the conflict (249 BC), Carthage sent general Hamilcar Barca (Hannibal's father) to Sicily. Hamilcar managed to gain control of most of inland Sicily; in desperation, the Romans appointed a dictator to resolve the situation. Nevertheless, Carthaginian success in Sicily was secondary to the progress of the war at sea; Hamilcar remaining undefeated in Sicily became irrelevant following the Roman naval victory at the battle of the Aegates Islands in 241 BC.
Due to the difficulty of operating in Sicily, most warfare of the First Punic War was fought at sea, including the most decisive battles. Moreover, naval warfare permitted an efficient blockade of enemy ports, and consequently of reinforcement and supply for the inland troops. Both sides of the conflict had publicly funded fleets. This fact compromised Carthage and Rome's finances and eventually decided the course of the war.
At the beginning of the First Punic War, Rome had virtually no experience in naval warfare, whereas Carthage had a great deal of experience on the seas thanks to its sea-based trade. Nevertheless, the Republic soon understood the importance of Mediterranean control in the outcome of the conflict.The first large fleet was constructed after the victory of Agrigentum in 261 BC. Since Rome lacked naval technology, the design of the warships was copied in a straightforward manner from captured Carthaginian triremes and quinqueremes.
Perhaps in order to compensate for the lack of experience, and to make use of standard land military tactics on sea, the Romans equipped their new ships with a special boarding device, the corvus. The new weapon's efficiency was first proved in the battle of Mylae, the first Roman naval victory, and continued to prove its value in the following years, especially in the huge Battle of Ecnomus. The addition of the corvus forced Carthage to review its military tactics, and since the city had difficulty in doing so, Rome had the naval advantage. Later, as Roman experience in naval warfare grew, the corvus device was abandoned due to its impact on the navigability of the war vessels.
Despite the Roman victories at sea, the Republic was the side that lost the most ships and crews during the war, largely due to storms. On at least two occasions (255 and 253 BC) whole fleets were destroyed in bad weather. The weight of the corvus on the prows of the ships is thought to have been largely responsible for these disasters. Towards the end of the war Carthage ruled the seas, as Rome was unwilling to finance the construction of yet another expensive fleet. The Romans did, however, eventually build another fleet, paid for with donations from wealthy citizens.
The First Punic War was decided in the naval battle of the Aegates Islands (March 10, 241 BC), where the new Roman fleet under consul Gaius Lutatius Catulus scored a victory. Carthage lost most of its fleet and was economically incapable of funding another or of finding manpower for the crews. With no fleet, Hamilcar Barca was cut off from Carthage and forced to surrender.
Rome won the First Punic War after 23 years of conflict and in the end replaced Carthage as the dominant naval power of the Mediterranean. In the aftermath of the war, both states were financially and demographically exhausted. To determine the final borders of their territories, they drew what they considered a straight line across the Mediterranean. Hispania, Corsica, Sardinia and Africa remained Carthaginian.
All that was north of that line was signed over to Rome. Rome's victory was greatly influenced by its persistent refusal to admit defeat and its acceptance of nothing less than total victory. Moreover, the Republic's ability to attract private investment in the war effort by playing on its citizens' patriotism to fund ships and crews was one of the deciding factors of the war, particularly when contrasted with the Carthaginian nobility's apparent unwillingness to risk their fortunes for the common good. The end of the First Punic War also resulted in the official birth of the Roman navy, further encouraging Roman expansion.
According to Polybius there had been several trade agreements between Rome and Carthage, even a mutual alliance against king Pyrrhus of Epirus. When Rome and Carthage made peace in 241 BC, Rome secured the release of all 8,000 prisoners of war without ransom and, furthermore, received a considerable amount of silver as a war indemnity. However, Carthage refused to deliver to Rome the Roman deserters serving among their troops. A first issue for dispute was that the initial treaty, agreed upon by Hamilcar Barca and the Roman commander in Sicily, had a clause stipulating that the Roman popular assembly had to accept the treaty in order for it to be valid. The assembly not only rejected the treaty but increased the indemnity Carthage had to pay.
Carthage had a liquidity problem and attempted to gain financial help from Egypt, a mutual ally of Rome and Carthage, but failed. This resulted in delay of payments owed to the mercenary troops that had served Carthage in Sicily, leading to a climate of mutual mistrust and, finally, a revolt supported by the Libyan natives, known as the Mercenary War (240–238 BC). During this war, Rome and Syracuse both aided Carthage, although traders from Italy seem to have done business with the insurgents. Some of them were caught and punished by Carthage, aggravating the political climate which had started to improve in recognition of the old alliance and treaties.
During the uprising in the Punic mainland, the mercenary troops in Corsica and Sardinia toppled Punic rule and briefly established their own, but were expelled by a native uprising. After securing aid from Rome, the exiled mercenaries then regained authority on the island of Sardinia. For several years a brutal campaign was fought to quell the insurgent natives. Like many Sicilians, these natives would ultimately rise again in support of Carthage during the Second Punic War.
Eventually, Rome annexed Corsica and Sardinia by revisiting the terms of the treaty that ended the first Punic War. As Carthage was under siege and engaged in a difficult civil war, they begrudgingly accepted the loss of these islands and the subsequent Roman conditions for ongoing peace, which also increased the war indemnity levied against Carthage after the first Punic War. This eventually plunged relations between the two powers to a new low point.
After Carthage emerged victorious from the Mercenary War there were two opposing factions: the reformist party was led by Hamilcar Barca while the other, more conservative, faction was represented by Hanno the Great and the old Carthaginian aristocracy. Hamilcar had led the initial Carthaginian peace negotiations and was blamed for the clause that allowed the Roman popular assembly to increase the war indemnity and annex Corsica and Sardinia, but his superlative generalship was instrumental in enabling Carthage to ultimately quell the mercenary uprising, ironically fought against many of the same mercenary troops he had trained. Hamilcar ultimately left Carthage for the Iberian peninsula where he captured rich silver mines and subdued many tribes who fortified his army with levies of native troops.
Hanno had lost many elephants and soldiers when he became complacent after a victory in the Mercenary War. Further, when he and Hamilcar were supreme commanders of Carthage's field armies, the soldiers had supported Hamilcar when the two men's personalities clashed. On the other hand, Hanno was responsible for the greatest territorial expansion of Carthage's hinterland during his rule as strategus and wanted to continue such expansion. However, the Numidian king of the relevant area was now a son-in-law of Hamilcar and had supported Carthage during a crucial moment in the Mercenary War. While Hamilcar was able to obtain the resources for his aim, the Numidians in the Atlas Mountains were not conquered, as Hanno had suggested, but became vassals of Carthage.
The Iberian conquest was begun by Hamilcar Barca and his other son-in-law, Hasdrubal the Fair, who ruled relatively independently of Carthage and signed the Ebro treaty with Rome. Hamilcar died in battle in 228 BC. Around this time, Hasdrubal became Carthaginian commander in Iberia (229 BC). He maintained this post for some eight years until 221 BC.
Soon the Romans became aware of a burgeoning alliance between Carthage and the Celts of the Po river valley in northern Italy. The latter were amassing forces to invade Italy, presumably with Carthaginian backing. Thus, the Romans preemptively invaded the Po region in 225 BC. By 220 BC, the Romans had annexed the area as Gallia Cisalpina.
Hasdrubal was assassinated around the same time (221 BC), bringing Hannibal to the fore. It seems that, having apparently dealt with the threat of a Gaulo-Carthaginian invasion of Italy (and perhaps with the original Carthaginian commander killed), the Romans lulled themselves into a false sense of security. Thus, Hannibal took the Romans by surprise a mere two years later (218 BC), simply by reviving and adapting the original Gaulo-Carthaginian invasion plan of his brother-in-law Hasdrubal.
After Hasdrubal was killed by a Celtic assassin, Hamilcar's young sons took over, with Hannibal becoming the strategus of Iberia, although this decision was not undisputed in Carthage. The output of the Iberian silver mines allowed for the financing of a standing army and the payment of the war indemnity to Rome. The mines also served as a tool for political influence, creating a faction within Carthage's government that was called the Barcino.
In 219 BC Hannibal attacked the town of Saguntum, which stood under the special protection of Rome. According to Roman tradition, Hannibal had been made to swear by his father never to be a friend of Rome, and he certainly did not take a conciliatory attitude when the Romans berated him for crossing the river Iberus (Ebro) which Carthage was bound by treaty not to cross. Hannibal did not cross the Ebro River (Saguntum was near modern Valencia - well south of the river) in arms, and the Saguntines provoked his attack by attacking their neighboring tribes who were Carthaginian protectorates and by massacring pro-Punic factions in their city. Rome had no legal protection pact with any tribe south of the Ebro River. Nonetheless, they asked Carthage to hand Hannibal over, and when the Carthaginian oligarchy refused, Rome declared war on Carthage.
The 'Barcid Empire' consisted of the Punic territories in Iberia. According to the historian Pedro Barcelo, it can be described as a private military-economic hegemony backed by the two independent powers, Carthage and Gades. These shared the profits of the silver mines in southern Iberia with the Barca family and closely followed Hellenistic diplomatic customs. Gades played a supporting role in this field, but Hannibal visited the local temple to conduct ceremonies before launching his campaign against Rome. The Barcid Empire was strongly influenced by the Hellenistic kingdoms of the time; for example, unlike Carthage, it minted silver coins during its short existence.
Depiction of Hannibal and his army crossing the Alps during the Second Punic War.
The Second Punic War, also referred to as the Hannibalic War, the War Against Hannibal (by the Romans), or the Carthaginian War, lasted from 218 to 201 BC and involved combatants in the western and eastern Mediterranean. This was the second major war between Carthage and the Roman Republic, with the crucial participation of Numidian-Berber armies and tribes on both sides. The two states fought three major conflicts against each other over the course of their existence. They are called the "Punic Wars" because Rome's name for the Carthaginians was Punici, due to their Phoenician ancestry.
The war is marked by Hannibal's surprising overland journey and his costly crossing of the Alps, followed by his reinforcement by Gaulish allies and crushing victories over Roman armies in the battle of the Trebia and the giant ambush at Trasimene. Against his skill on the battlefield the Romans deployed the Fabian strategy. But because of the increasing unpopularity of this approach, the Romans resorted to a further major field battle. The result was the Roman defeat at Cannae.
In consequence many Roman allies went over to Carthage, prolonging the war in Italy for over a decade, during which more Roman armies were destroyed on the battlefield. Despite these setbacks, the Roman forces were more capable in siegecraft than the Carthaginians and recaptured all the major cities that had joined the enemy, as well as defeating a Carthaginian attempt to reinforce Hannibal at the battle of the Metaurus.
In the meantime in Iberia, which served as the main source of manpower for the Carthaginian army, a second Roman expedition under Publius Cornelius Scipio Africanus Major took New Carthage by assault and ended Carthaginian rule over Iberia in the battle of Ilipa. The final showdown was the battle of Zama in Africa between Scipio Africanus and Hannibal, resulting in the latter's defeat and the imposition of harsh peace conditions on Carthage, which ceased to be a major power and became a Roman client-state.
All battles mentioned in the introduction are ranked among the most costly traditional battles of human history; in addition there were a few successful ambushes of armies that also ended in their annihilation.
After assaulting Saguntum, Hannibal surprised the Romans in 218 BC by leading the Iberians and three dozen elephants through the Alps. Although Hannibal surprised the Romans and thoroughly beat them on the battlefields of Italy, he lost his only siege engines and most of his elephants to the cold temperatures and icy mountain paths. In the end he was able to defeat the Romans in the field, but not to take the strategically crucial city of Rome itself, which left him unable to win the war.
Hannibal defeated the Roman legions in several major engagements, including the Battle of the Trebia, the Battle of Lake Trasimene and most famously at the Battle of Cannae, but his long-term strategy failed. Lacking siege engines and sufficient manpower to take the city of Rome itself, he had planned to turn the Italian allies against Rome and starve the city out through a siege. However, with the exception of a few of the southern city-states, the majority of the Roman allies remained loyal and continued to fight alongside Rome, despite Hannibal's near-invincible army devastating the Italian countryside. Rome also exhibited an impressive ability to draft army after army of conscripts after each crushing defeat by Hannibal, allowing them to recover from the defeats at Cannae and elsewhere and keep Hannibal cut off from aid.
More importantly, Hannibal never successfully received any significant reinforcements from Carthage. Despite his many pleas, Carthage only ever sent reinforcements successfully to Hispania. This lack of reinforcements prevented Hannibal from decisively ending the conflict by conquering Rome through force of arms.
The Roman army under Quintus Fabius Maximus intentionally deprived Hannibal of open battle, while making it difficult for Hannibal to forage for supplies. Nevertheless, Rome was also incapable of bringing the conflict in the Italian theatre to a decisive close. Not only was Rome contending with Hannibal in Italy and his brother Hasdrubal in Hispania, but it had also embroiled itself in yet another foreign war, the first of its Macedonian Wars, against Carthage's ally Philip V, at the same time. Hannibal's use of war elephants, most famously in his crossing of the Alps, became one of the most celebrated features of the campaign.
Because of Hannibal's inability to take strategically important Italian cities, the general loyalty the Italian allies showed to Rome, and Rome's own inability to counter Hannibal as a master general, Hannibal's campaign continued inconclusively in Italy for some fifteen years. Though he managed to sustain his army for those years, he did so only by ravaging farm lands to keep it supplied, which bred anger among Rome's subject states. Realizing that Hannibal's army was quickly outrunning its supply lines, Rome used its command of the sea to take countermeasures against Hannibal's home base in Africa and stopped the flow of supplies. Hannibal quickly turned back and rushed to the defense of his homeland, but was soundly defeated in the Battle of Zama.
In Hispania, a young Roman commander, Publius Cornelius Scipio (later to be given the agnomen Africanus because of his feats during this war), eventually defeated the larger but divided Carthaginian forces under Hasdrubal and two other Carthaginian generals. Abandoning Hispania, Hasdrubal moved to bring his mercenary army into Italy to reinforce Hannibal.
The Third Punic War was fought between Carthage and the Roman Republic from 149 BC to 146 BC. This was the last in a series of three wars.
In the years between the Second and Third Punic Wars, Rome was engaged in the conquest of the Hellenistic empires to the east and ruthlessly suppressing the Iberian people in the west, although they had been essential to the Roman success in the Second Punic War.
Carthage, stripped of allies and territory (Sicily, Hispania), was suffering under an indemnity of 200 silver talents to be paid every year for 50 years, an enormous sum.
The Romans still harboured a bitter hatred for Carthage, which had nearly destroyed them in the Second Punic War. Sentiments ran so strong that the powerful statesman Cato the Elder stated during a discussion of Carthage's fate, ceterum censeo Carthaginem delendam esse ("Besides which, I think that Carthage must be destroyed").
Meanwhile, Carthage had regained much of its prosperity through trade, further alarming Rome that a revived Carthage could again threaten them with war. The peace treaty at the end of the Second Punic War required that all border disputes involving Carthage be arbitrated by the Roman Senate and required Carthage to get explicit Roman approval before arming its citizens, or hiring a mercenary force.
As a result, in the fifty intervening years between the Second and Third wars Carthage had to take all border disputes with Rome's ally Numidia to the Senate, where they were decided almost exclusively in Numidian favor.
In 151 BC, however, the Carthaginian debt to Rome was fully repaid. In Hellenic eyes this meant that the treaty had now expired, though not according to the Romans, who instead viewed the treaty as a permanent declaration of Carthaginian subordination to Rome, akin to the Roman treaties with her Italian allies. Numidia then launched another border raid on Carthaginian soil, and in response Carthage launched a military expedition to repel the Numidian invaders.
As a result, Carthage suffered a humiliating military defeat and was charged with another fifty-year debt, this time to Numidia. Immediately thereafter, however, Rome showed its displeasure with Carthage's decision to wage war against her neighbor without Roman consent, and told her that in order to avoid a war she had to "satisfy the Roman People." The Roman Senate then began gathering an army.
After Utica defected to Rome in 149 BC, Rome declared war against Carthage. The Carthaginians made a series of attempts to negotiate with Rome, and received a promise that if three hundred children of well-born Carthaginians were sent as hostages to Rome the Carthaginians would keep the rights to their land and self-governance.
Even after this was done, however, the Romans landed an army at Utica, where the consuls demanded that Carthage hand over all weapons and armor. After those had been handed over, Rome additionally demanded that the Carthaginians move at least ten miles inland, while the city of Carthage itself was to be burned. When the Carthaginians learned of this they abandoned negotiations and the city was immediately besieged, beginning the Third Punic War.
The Carthaginians endured the siege from 149 BC to 146 BC, when Scipio Aemilianus took the city by storm. Many Carthaginians died from starvation during the latter part of the siege, while many others died in the final six days of fighting. When the war ended, the remaining 50,000 Carthaginians (perhaps a tenth of the original pre-war population) were sold into slavery.
The city was systematically burned for somewhere between 10 and 17 days. Then the city walls, its buildings and its harbor were utterly destroyed and the surrounding territory was supposedly sown with salt to ensure that nothing would grow there again. The sowing may have been merely a symbolic curse against Rome's defeated enemy, or the account may be entirely invented; it does not appear in the records of the war, and historians today dispute whether it actually happened.
| http://www.crystalinks.com/punicwars.html | 13
37 | A hearing impairment or hearing loss is a full or partial decrease in the ability to detect or understand sounds. Caused by a wide range of biological and environmental factors, loss of hearing can happen to any organism that perceives sound.
Sound waves vary in amplitude and in frequency. Amplitude is the sound wave's peak pressure variation. Frequency is the number of cycles per second of a sinusoidal component of a sound wave. Loss of the ability to detect some frequencies, or to detect low-amplitude sounds that an organism naturally detects, is a hearing impairment.
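For readers who prefer a formula, a pure tone (a single sinusoidal component) can be written as

\[ p(t) = A \sin(2\pi f t) \]

where p(t) is the instantaneous sound pressure, A is the amplitude (the peak pressure variation described above), and f is the frequency in cycles per second (hertz). This notation is a standard convention added here for clarity rather than something taken from the original text; real sounds are combinations of such components at different amplitudes and frequencies.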
Hearing sensitivity is indicated by the quietest sound that an individual can detect, called the hearing threshold. In the case of people and some animals, this threshold can be accurately measured by a behavioral audiogram. A record is made of the quietest sound that consistently prompts a response from the listener. The test is carried out for sounds of different frequencies. There are also electro-physiological tests that can be performed without requiring a behavioral response.
Normal hearing thresholds are not the same for all frequencies in any species of animal. If different frequencies of sound are played at the same amplitude, some will be loud, and others quiet or even completely inaudible. Generally, if the gain or amplitude is increased, a sound is more likely to be perceived. Ordinarily, when animals use sound to communicate, hearing in that type of animal is most sensitive for the frequencies produced by calls, or, in the case of humans, speech. This tuning of hearing exists at many levels of the auditory system, all the way from the physical characteristics of the ear to the nerves and tracts that convey the nerve impulses to the auditory portion of the brain.
A hearing impairment exists when an individual is not sensitive to the sounds normally heard by its kind. In human beings, the term hearing impairment is usually reserved for people who have relative insensitivity to sound in the speech frequencies. The severity of a hearing impairment is categorized according to how much louder a sound must be made over the usual levels before the listener can detect it. In profound deafness, even the loudest sounds that can be produced by the instrument used to measure hearing (audiometer) may not be detected.
There is another aspect to hearing that involves the quality of a sound rather than amplitude. In people, that aspect is usually measured by tests of speech discrimination. Basically, these tests require that the sound is not only detected but understood. There are very rare types of hearing impairments which affect discrimination alone.
Hearing impairment comes from different biologic causes. Most commonly, the ear is the affected part of the body.
Conductive hearing loss occurs when sound is not normally conducted through the outer or middle ear or both. Since sound can be picked up by a normally sensitive inner ear even if the ear canal, ear drum, and ear ossicles are not working, conductive hearing loss is often only mild and is never worse than a moderate impairment. Hearing thresholds will not rise above 55-60 dB from outer or middle ear problems alone. Generally, with pure conductive hearing loss, the quality of hearing (speech discrimination) is good, as long as the sound is amplified loud enough to be easily heard.
A conductive loss can be caused by any of the following:
- Ear canal obstruction
- Middle ear abnormalities: tympanic membrane, ossicles
- Inner ear abnormalities: superior canal dehiscence syndrome
A sensorineural hearing loss is due to insensitivity of the inner ear (the cochlea) or to impairment of function in the auditory nervous system. It can be mild, moderate, severe, or profound, to the point of total deafness. It is classified as a disability under the ADA, and a person who is unable to work because of it may be eligible for disability payments.
The great majority of human sensorineural hearing loss is caused by abnormalities in the hair cells of the organ of Corti in the cochlea. There are also very unusual sensorineural hearing impairments that involve the VIIIth cranial nerve (the vestibulocochlear nerve) or the auditory portions of the brain. In the rarest of these sorts of hearing loss, only the auditory centers of the brain are affected. In this situation, called central hearing loss, sounds may be heard at normal thresholds, but the quality of the perceived sound is so poor that speech cannot be understood.
Most sensory hearing loss is due to poor hair cell function. The hair cells may be abnormal at birth, or damaged during the lifetime of an individual. There are both external causes of damage, like noise trauma and infection, and intrinsic abnormalities, like deafness genes.
Sensorineural hearing loss that results from abnormalities of the central auditory system in the brain is called Central Hearing Impairment. Since the auditory pathways cross back and forth on both sides of the brain, deafness from a central cause is unusual.
Typical causes are discussed in the following subsections.
Populations of people living near airports or freeways are exposed to levels of noise typically in the 65 to 75 dB(A) range. If lifestyles include significant outdoor or open window conditions, these exposures over time can degrade hearing. The U.S. EPA and various states have set noise standards to protect people from these adverse health risks. The EPA has identified the level of 70 dB(A) for 24 hour exposure as the level necessary to protect the public from hearing loss and other disruptive effects from noise, such as sleep disturbance, stress-related problems, learning detriment, etc. (EPA, 1974).
Noise-Induced Hearing Loss (NIHL) typically is centered at 3000, 4000, or 6000 Hz. As noise damage progresses, damage starts affecting lower and higher frequencies. On an audiogram, the resulting configuration has a distinctive notch, sometimes referred to as a "noise notch." As aging and other effects contribute to higher frequency loss (6-8 kHz on an audiogram), this notch may be obscured and entirely disappear.
Louder sounds cause damage in a shorter period of time. Estimation of a "safe" duration of exposure is possible using an exchange rate of 3 dB. As 3 dB represents a doubling of intensity of sound, duration of exposure must be cut in half to maintain the same energy dose. For example, the "safe" daily exposure amount at 85 dB(A), known as an exposure action value, is 8 hours, while the "safe" exposure at 91 dB(A) is only 2 hours (National Institute for Occupational Safety and Health, 1998). Note that for some people, sound may be damaging at even lower levels than 85 dB(A). Exposures to other ototoxins (such as pesticides, some medications including chemotherapy, solvents, etc.) can lead to greater susceptibility to noise damage, as well as causing their own damage. This is called a synergistic interaction.
Some American health and safety agencies (such as OSHA and MSHA), use an exchange rate of 5 dB. While this exchange rate is simpler to use, it drastically underestimates the damage caused by very loud noise. For example, at 115 dB, a 3 dB exchange rate would limit exposure to about half a minute; the 5 dB exchange rate allows 15 minutes.
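The arithmetic behind these exchange-rate rules is easy to make explicit. The short sketch below is a minimal illustration only, assuming the commonly cited reference criteria of 85 dB(A) for 8 hours with a 3 dB exchange rate (as NIOSH recommends) and 90 dB(A) for 8 hours with a 5 dB exchange rate (as OSHA uses); the 90 dB(A) reference point is an assumption not stated above, and the function name and output are illustrative, not regulatory guidance.

```python
# Minimal sketch: permissible noise-exposure time under an exchange-rate rule.
# Assumes reference criteria of 85 dB(A)/8 h (3 dB rule, NIOSH) and
# 90 dB(A)/8 h (5 dB rule, OSHA); consult the actual standards before relying on this.

def permissible_hours(level_dba, ref_level_dba, ref_hours, exchange_rate_db):
    """Halve the allowed duration for every exchange-rate step above the reference level."""
    return ref_hours / 2 ** ((level_dba - ref_level_dba) / exchange_rate_db)

if __name__ == "__main__":
    for level in (85, 91, 100, 115):
        niosh = permissible_hours(level, 85, 8, 3)   # 3 dB exchange rate
        osha = permissible_hours(level, 90, 8, 5)    # 5 dB exchange rate
        print(f"{level} dB(A): {niosh * 60:7.1f} min (3 dB rule) | {osha * 60:7.1f} min (5 dB rule)")
```

Run as-is, the sketch reproduces the figures quoted above: about 2 hours at 91 dB(A) under the 3 dB rule, and, at 115 dB(A), roughly half a minute under the 3 dB rule versus 15 minutes under the 5 dB rule.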
While OSHA, MSHA, and FRA provide guidelines to limit noise exposure on the job, there is essentially no regulation or enforcement of sound output for recreational sources and environments, such as sports arenas, musical venues, bars, etc. This lack of regulation resulted from the defunding of ONAC, the EPA's Office of Noise Abatement and Control, in the early 1980s. ONAC was established in 1972 by the Noise Control Act and charged with working to assess and reduce environmental noise. Although the Office still exists, it has not been assigned new funding.
Most people in the United States are unaware of the presence of environmental sound at damaging levels, or of the level at which sound becomes harmful. Common sources of damaging noise levels include car stereos, children's toys, transportation, crowds, lawn and maintenance equipment, power tools, gun use, and even hair dryers. Noise damage is cumulative; all sources of damage must be considered to assess risk. If one is exposed to loud sound (including music) at high levels or for extended durations (85 dB(A) or greater), then hearing impairment can occur. Sound levels increase with proximity; as the source is brought closer to the ear, the sound level increases. This is why music is more likely to cause damage at the same output when listened to through headphones, as the headphones are in closer proximity to the ear drum than a loudspeaker. With the invention of in-ear headphones, these dangers are increased.
Hearing loss can be inherited. Both dominant and recessive genes exist which can cause mild to profound impairment. If a family has a dominant gene for deafness, it will persist across generations because it will manifest itself in the offspring even if it is inherited from only one parent. If a family has genetic hearing impairment caused by a recessive gene, it will not always be apparent, as it has to be passed on to offspring from both parents. Dominant and recessive hearing impairment can be syndromic or nonsyndromic. Recent gene mapping has identified dozens of nonsyndromic dominant (DFNA#) and recessive (DFNB#) forms of deafness.
Some medications cause irreversible damage to the ear, and are limited in their use for this reason. The most important group is the aminoglycosides (main member gentamicin).
Various other medications may reversibly affect hearing. This includes some diuretics, aspirin and NSAIDs, and macrolide antibiotics.
Extremely heavy hydrocodone (Vicodin or Lorcet) abuse is known to cause hearing impairment. Commentators have speculated that radio talk show host Rush Limbaugh's hearing loss was at least in part caused by his admitted addiction to narcotic pain killers, in particular Vicodin and OxyContin.
The quietest sound one can hear at different frequencies is plotted on an audiogram to reflect one's ability to hear at different frequencies. The range of normal human hearing (from the softest audible sound to the loudest comfortable sound) is so great that the audiogram must be plotted using a logarithmic scale. This large normal range, and the different amounts of hearing loss at different frequencies, make it virtually impossible to accurately describe the amount of hearing loss in simple terms such as percentages or the rankings above.
Measuring hearing loss as a percentage is of debatable usefulness, and has been compared to measuring weight in inches. In specific legal situations, though, where decibels of loss are converted via a recognized legal formula, one can derive a standardized "percentage of hearing loss" that is suitable for legal purposes only.
Another method for determining hearing loss, is the Hearing in Noise Test (HINT). HINT technology was developed by the House Ear Institute, and is intended to measure an ability to understand speech in quiet and noisy environments. Unlike pure-tone tests, where only one ear is tested at a time, HINT evaluates hearing using both ears simultaneously (binaural), as binaural hearing is essential for communication in noisy environments, and for sound localization.
The age at which the hearing impairment develops is crucial to spoken language acquisition. Post-lingual hearing impairments are far more common than pre-lingual impairments.
If the hearing loss occurs at a young age, interference with the acquisition of spoken language and social skills may occur. Hearing aids, which amplify the incoming sound, may alleviate some of the problems caused by hearing impairment, but are often insufficient. Cochlear implants artificially stimulate the VIIIth Nerve by providing an electric impulse substitution for the firing of hair cells. Cochlear implants are not only expensive, but require sophisticated programming in conjunction with patient training for effectiveness. People who have hearing impairments, especially those who develop a hearing problem in childhood or old age, require support and technical adaptations as part of the rehabilitation process.
The phrase hard of hearing, normally used as an adjective or adverb, can also be used as a noun, referring to people with hearing impairment as the hard of hearing. People who consider themselves culturally deaf prefer the terms "hard of hearing" or "deaf" and perceive "hearing impaired" as an insult.
Hearing impaired persons with partial loss of hearing may find that the quality of their hearing varies from day to day or from one situation to another, or that it does not vary at all. They may also, to a greater or lesser extent, depend on both hearing aids and lip-reading. Even if they are not always aware of it, they do acknowledge that it is important to see the speaker's face in conversation.
Many people with hearing loss have better hearing in the lower frequency ranges (low tones), and cannot hear as well or at all in the higher frequencies. Some people may merely find it difficult to differentiate between words that begin with consonantal sounds such as the fricatives or sibilants, z, or th, or the plosives d, t, b, or p. They may be unable to hear thin, high-pitched or metallic noises, such as birds chirping or singing, clocks ticking, etc. Often, they are able to hear and understand men's voices better than women's.
Others will find their condition much worse if circumstances in their immediate environment affect the way they are able to use their hearing aids or prevent them from employing their speech-reading skills. A room with a high ceiling and a lot of reverberation will adversely affect the sound of a speaker's voice. The position of the listener also matters: seated at a right angle to the speaker at a long seminar table, a listener may be able to hear only with one ear, perhaps the less effective one. Difficulties can also arise for a listener trying to lip-read if the speaker is sitting with his back to the light source, obscuring his face. A rule of thumb is that bright lighting is to the hearing-impaired what noise is to the hearing: a source of distraction.
The speaker's accent; the topic under discussion, possibly with many unfamiliar words; the softness of his voice; possibly a speech impediment; a habit of holding a hand in front of his mouth or turning his face away at times: all of these cause problems for the hard of hearing, especially when they have to rely on lip-reading. The rustling of papers and the turning of notebook pages are precisely the noises that hearing aids pick up first.
Noisy situations are especially difficult, because hearing loss affects not only the ability to hear sounds, but also the ability to localize and filter out background noise.
In children, hearing loss can lead to social isolation for several reasons. First, the child experiences delayed social development that is in large part tied to delayed language acquisition. It is also directly tied to their inability to pick up auditory social cues. This can result in a deaf person becoming generally irritable. A child who uses sign language, or identifies with the deaf sub-culture does not generally experience this isolation, particularly if he/she attends a school for the deaf, but may conversely experience isolation from his parents if they do not know sign language. A child who is exclusively or predominantly oral (using speech for communication) can experience social isolation from his or her hearing peers, particularly if no one takes the time to explicitly teach her social skills that other children acquire independently by virtue of having normal hearing. Finally, a child who has a severe impairment and uses some sign language may be rejected by his or her deaf peers, because of an understandable hesitation in abandoning the use of existent verbal and speech-reading skills. Some in the deaf community can view this as a rejection of their own culture and its mores, and therefore will reject the individual preemptively.
Many relationships have suffered because of the anger that occurs when there is general miscommunication between family members. Generally, it is not only the person with a hearing disability who feels isolated, but also others around them, who feel they are not being "heard" or paid attention to, especially when the hearing loss has been gradual. Many people opt not to use hearing aids for fear of looking old, since hearing loss is usually associated with old age, which in some societies is equated with ineffectiveness. Family members may then feel as if their partner with hearing loss does not care about them enough to make changes that would reduce the disability and make communication easier.
Many hearing impaired individuals use certain assistive devices in their daily lives. Individuals can communicate by telephone using telecommunications devices for the deaf (TDD). This device looks like a typewriter or word processor and transmits typed text over the telephone. Other names in common use are textphone and minicom. A videophone can be used for distance communication using sign language. In 2004, mobile textphone devices came onto the market for the first time allowing simultaneous two way text communication. In the U.S., the UK, the Netherlands and many other western countries there are Telecommunications Relay Services so that a hearing impaired person can communicate over the phone with a hearing person via a human translator. Wireless, internet and mobile phone/SMS text messaging are beginning to take over the role of the TDD. Other assistive devices include those that use flashing lights to signal events such as a ringing telephone, a doorbell, or a fire alarm. Video conferencing is also a new technology that permits signed conversations as well as permitting an ASL-English interpreter to voice and sign conversations between a hearing impaired and hearing person, negating the need to use a TTY or computer keyboard. In addition, there are many new Telecommunications Relay Service technologies including IP Relay and captioned telephone.
| http://www.reference.com/browse/conductive+hearing+impairment | 13
38 | Early Exploration and French Control
The Spaniards were the first Europeans to explore Louisiana, beginning with the expedition led by Hernando de Soto in 1542. De Soto's expedition brought European diseases to the area, and these diseases spread to the Native Americans; after the Spaniards left, the Native American population declined drastically.
For nearly 150 years afterward, Europeans largely ignored Louisiana, until the French explorer René-Robert Cavelier, Sieur de la Salle, traveled down the Mississippi River in 1682 and claimed all regions drained by it for France. Since the reigning French monarch was Louis XIV, La Salle named the new territory Louisiane in his honor.
Several French settlements and forts were built in the Mississippi Valley and along the Gulf Coast. The first permanent European settlement in the present-day state was Natchitoches, founded in 1714. Four years later, the French founded the city of Nouvelle-Orléans, or New Orleans, to protect the lower Mississippi River from Great Britain and Spain. New Orleans became the capital of Louisiana in 1722.
Life was not easy for the Louisianans. Because of the fighting between Great Britain and France during the War of the Spanish Succession, France was cut off from its colony for years at a time. In 1712, a wealthy Frenchman named Antoine Crozat was given control of Louisiana; under his leadership, the population of the colony remained very small.
Antoine Crozat lost control of Louisiana five years later to the Compagnie d'Occident, or Company of the West, headed by a Scotsman named John Law. Law had established the French national bank, and the bank invested heavily in the Company of the West. Since Louisiana was the largest asset the company controlled, Law had to develop it quickly in order to keep the French public confident in the new bank. He headed a promotional campaign that attracted thousands of settlers to the area. Many of these settlers were convicts forced to move to Louisiana; others were indentured servants who had been promised freedom if they lived in Louisiana for a specified amount of time. It is estimated that 7,100 Europeans arrived in Louisiana between 1717 and 1721. In addition to the European settlers, 3,000 African slaves were brought to the colony by the Compagnie du Sénégal, which held a monopoly on the French slave trade.
The settlers who arrived in Louisiana had been promised quick, easy profits requiring little to no effort or investment. Much to their dismay, life in Louisiana was not easy at all. The colonial government quickly became overwhelmed, and many people died for lack of food, shelter, and clothing. The only reason many settlers remained in Louisiana was that they simply could not afford to sail back to Europe. Most immigrants became farmers, growing only enough food to sustain their families; occasionally, indigo or tobacco was grown for export.
In the end, the Mississippi Bubble created by John Law burst in 1720, as French citizens heard about the abysmal conditions in Louisiana. Even so, the Company of the West was still allowed to run the colony. In 1731, Louisiana was returned to the control of the French monarchy because of the fighting with the Natchez tribe, who lived along the east bank of the Mississippi.
Spain Gains Possession
Although Louisiana was a French colony, the monarchy found it to be little more than an economic burden. Moreover, the British had conquered the French Canadian colonies during the French and Indian War, so any strategic value Louisiana once had was gone. Near the end of the war, in 1762, France and Spain signed a secret treaty known as the Treaty of Fontainebleau, under which Spain entered the war as an ally of the French and, in exchange, gained ownership of Louisiana. However, Great Britain won the war in 1763 and took control of nearly all portions of Louisiana east of the Mississippi River. The western portion passed to Spanish control, as did the Ile d'Orléans (Isle of Orleans), the region surrounding New Orleans.
The primarily French population was not happy being ruled by the Spanish. The first Spanish governor, Antonio de Ulloa, arrived in New Orleans in 1766 and tried to rule the colony more harshly than had been done before. This angered the colonists, who rebelled in 1768 and drove Ulloa from Louisiana. Spanish control was restored in 1769, when General Alejandro O'Reilly became the colonial governor.
With the government established by O'Reilly, Spain ruled Louisiana for the next 34 years. However, the population remained primarily French. Spain tried to bring in more Spanish colonists, but most of the immigrants during this period were French-speaking refugees from conflicts in the West Indies, France, and Canada. The two most important waves of immigrants were the refugees from the slave uprising in Saint-Domingue (Haiti) and the Acadians from eastern Canada. The Acadians settled primarily in the wilderness west of New Orleans, where they became known as the Cajuns, and they quickly became the main cultural group in rural south Louisiana.
During the American Revolution, Louisiana played a small role. Since Great Britain was Spain's biggest rival in the New World, the Spanish colonists in New Orleans supplied the American colonies with weapons, ammunition, and other supplies. Spain officially declared war on Britain in 1779 and sent a militia from Louisiana to capture the British settlements in West Florida. When the 1783 Treaty of Paris ended the Revolutionary War, Spain gained control of both West and East Florida.
New Owners and Statehood
After the American Revolution, Louisiana began to prosper. The location of New Orleans along the Mississippi made it the gateway to the interior of the North American continent. However, Spain gave Louisiana back to France in yet another secret treaty in 1800, while keeping control of West Florida. Three years later, France sold Louisiana, along with a vast amount of other territory, to the United States in what became known as the Louisiana Purchase.
After acquiring the new territory, the United States split the Louisiana Purchase in two. The northern division, which included the land north of the 33rd parallel, became known as the Territory of Louisiana. The southern section, which included all of the land in modern Louisiana except the West Florida region, was called the Territory of Orleans. The first territorial governor of the Territory of Orleans was William C. C. Claiborne, who had the difficult task of bringing the unfamiliar concept of democracy to the territory. Claiborne also presided over a second wave of refugees from Saint-Domingue in 1809; in six months, the new immigrants doubled the population of New Orleans. Most historians believe that it was this wave of immigrants that helped preserve the French character of the city.
Americans had been settling in West Florida, and in 1810 they declared independence from Spain. Claiborne was given control of West Florida, and soon the land west of the Pearl River was annexed to the Territory of Orleans. The Territory of Orleans entered the Union on April 30, 1812, as the 18th state. The capital remained at New Orleans, and Claiborne continued as governor.
Soon after Louisiana achieved statehood, the United States entered the War of 1812. Near the end of the war, the British planned to seize several key points along the Gulf Coast and the lower Mississippi. On January 8, 1815, British troops attacked New Orleans. Major General Andrew Jackson led the defense, and the United States won a decisive victory, two weeks after a peace treaty had already been signed to end the war. The Battle of New Orleans was not pointless, however; Great Britain most likely would not have ratified the treaty had Jackson failed.
Agriculture and the Population Take Off
By 1820, the population of Louisiana had risen to 153,400 settlers, most of whom were whites from other parts of the South. However, the upper Red River Valley remained mostly unsettled, because the gigantic logjam known as the Great Raft made the river unnavigable. By the 1830's, the Great Raft had been cleared, and settlement got under way. By 1860, the population had shot up to 708,000 people, about half of whom were African slaves.
In the early-to-mid 1800's, the two main crops grown by Louisiana farmers were sugarcane and cotton. Sugar was more profitable, but cotton required less labor. Soon, cotton was the choice crop of both plantation owners and small farmers, particularly those in the Mississippi Valley. The third major crop in Louisiana was rice, which had originally been grown to feed slaves. The rice market expanded as irrigation did, and rice became a very popular crop for immigrants from the Midwest.
Thanks to the success of agriculture in Louisiana, New Orleans became one of the largest commercial cities in the United States. Immigrants also came to New Orleans in droves; it ranked second among arrival ports for immigrants from 1830 to 1860. In 1820, New Orleans became the largest city in the South, with its population of 27,180 citizens edging out Charleston, South Carolina. Forty years later, the official population had swelled to 168,675. Had it not been for the large number of yellow fever outbreaks, this number would have been much higher.
By the 1840's, the rest of Louisiana felt that New Orleans had too much economic and political power. Under that pressure, lawmakers moved the state capital from New Orleans to Baton Rouge in 1849, where it remains today.
Slavery, Secession, and the Civil War
One of the biggest political issues in the United States at this time was the slavery debate. Northerners pressed for abolition on both moral and economic grounds, since free white laborers could not compete with the unpaid labor of slaves. Meanwhile, the Southern states, particularly those of the Deep South, believed that Southern agriculture depended on slavery and that the abolitionist movement was nothing more than an attempt by the North to control the national economy. With the abolitionist movement rising and the South's congressional power waning, many Southerners began to talk of secession as the only way to protect what they dubbed "Southern Rights."
In 1860, Abraham Lincoln won the presidential election despite not appearing on the ballot in several Southern states. This triggered a wave of secessions, and Louisiana was the sixth state to leave. The seceding states quickly formed the Confederate States of America and, after attacking Fort Sumter in Charleston Harbor, started the American Civil War.
Because Louisiana lay in the southwestern corner of the Confederacy, the state missed most of the fighting in the early parts of the war. To defend New Orleans, the South's main supply center, the Confederates built forts along the Mississippi River. In 1862, David Farragut led a fleet of Union ships up the river, slipped past the forts, and sailed into New Orleans without a fight.
After taking New Orleans, Farragut continued up the river and captured Baton Rouge, forcing the state government to flee to Opelousas. When Opelousas fell, Shreveport became the new capital. The government did not return to Baton Rouge until 1882, seventeen years after the war had ended and several years after the end of Reconstruction.
As soon as the Union captured New Orleans, the city was made the capital of all Louisianan territory held by the North. Martial law was declared, and Major General Benjamin F. Butler was placed in charge. Butler's rule was widely seen as corrupt and heavy-handed, earning him the nickname "Beast Butler" before the Union removed him from command.
In 1864, a civil government was established under the terms of the Proclamation of Amnesty and Reconstruction, and under this government the first Louisianan state constitution banning slavery was drafted and adopted. At the end of the war, ex-Confederates controlled the government, which led to the passing of the Black Codes and the disenfranchisement of black citizens. In these ways, blacks found their freedoms and liberties severely limited.
The federal government soon took a hand in reconstructing Louisiana, prompted by a race riot in New Orleans in 1866. In 1867, despite President Andrew Johnson's veto, Congress passed the Reconstruction Acts. These acts restored military rule over ten of the eleven states that had seceded and required congressional approval of a new state constitution before a state could be readmitted to the Union. In 1868, a new state constitution was drafted, promising voting rights to all adult males and full civil rights to blacks, while leaving many ex-Confederates unable to vote. Although whites were the majority in the state, many neglected to register or vote, so the constitution was approved on June 25 by a largely black electorate.
Republicans controlled Louisiana, just as they controlled all the former Confederate states. The Republicans running the state were mostly either white Southerners who had supported the Union, known as scalawags, or white newcomers from the North, known as carpetbaggers. Many blacks also achieved political power during Reconstruction. Across the South the list included U.S. Senator Blanche K. Bruce and U.S. Congressmen Jefferson Long and Joseph H. Rainey; in Louisiana it included governor P. B. S. Pinchback, lieutenant governor Oscar J. Dunn, and state treasurer Antoine Dubuclet.
During Reconstruction, Republican rule was challenged by many white Louisianans. Groups like the White League, the Knights of the White Camelia, and the Ku Klux Klan sprang up, burning down black homes and lynching any blacks thought to be "dangerous." The worst of these groups was the White League, which routinely assassinated Republican officials and forced black workers out of their homes in droves. The White League's activities culminated in the Battle of Liberty Place in 1874, when 3,500 of its members took over the New Orleans arsenal, statehouse, and city hall and were dislodged only by the arrival of federal troops. As a result, a federal army occupied Louisiana until the end of Reconstruction.
By the early 1870's, Republican control of Louisiana politics was slipping; the white supremacist organizations had become very effective at intimidating Republicans, particularly blacks, into not voting. In addition, many ex-Confederates were re-enfranchised, thanks to pardons from both Congress and the president. The Louisiana gubernatorial election of 1876 resulted in a virtual tie between the Democratic candidate, Francis R. T. Nicholls, and the Republican, Stephen B. Packard; both men claimed victory and refused to yield the office to the other. To make matters more dramatic, 1876 was also a federal election year, and Louisiana's electoral votes were contested between the Democrat Samuel J. Tilden and the Republican Rutherford B. Hayes. Hayes needed Louisiana's votes to win, along with the electoral votes of South Carolina and Florida.
Although there are no official records of an agreement between the parties, many historians agree that there must have been some sort of bargain between the Southern Democrats and the Republicans. As it turned out, the Democrats did not contest Hayes' claim to the electoral votes, and the Republicans conceded the governorship of Louisiana to Nicholls. Rutherford B. Hayes became president the following year and, upon taking office, removed the federal troops from Louisiana, officially ending Reconstruction.
With Nicholls in office, Louisiana became a one-party state, with the Democrats ruling until 1980, 103 years after the end of Reconstruction. The Democrats were aided in maintaining their power by a new state constitution, which disenfranchised many blacks by requiring literacy tests, property requirements, and poll taxes in order to vote.
The Economy Turns Sour
Since most of the freed slaves did not have the means to purchase their own land, they were forced to farm land owned by others. When the Civil War ended, most of the prewar owners still controlled their land, but many Louisiana farmers and plantation owners later lost their holdings to economic depressions and labor disputes. Much of this farmland was purchased at public auction by Northerners, who then set up the sharecropping system: the owner loaned tenants the money to start farming his land in exchange for a share of the tenant's crop once the initial loan was repaid. Sharecroppers were both black and white, and they usually were unable to earn enough money to escape the sharecropping cycle.
Agricultural yields had not been hurt by sharecropping, but prices had. Although Louisiana was producing as much sugar, rice, and cotton as it had during the antebellum era, prices remained dismally low for decades to come. It was not uncommon for a farmer to be unable to repay debts to his landlord or the bank, and farming methods could not be improved for lack of money to purchase better equipment. Soon most of Louisiana's farmers, particularly the sharecroppers, were mired in poverty.
Louisiana was not the only place where times were hard for farmers. Nationwide, farmers were being hurt by a combination of low crop prices, high railroad shipping fees, and debts to banks. By the 1880's, groups such as the Grange and the Farmers' Alliance had sprung up in the Midwest. When these groups expanded into a political party, known as the People's Party, white Democrats in Louisiana began to feel threatened. The populist movement was especially unnerving for Democrats because the Populists were willing to reach out to black farmers. Populism in Louisiana was at its strongest in 1896, when the People's Party candidate for governor, John Pharr, lost the governorship because of rampant vote fraud run by the Democrats. Twenty percent of the nation's lynchings that year occurred in Louisiana, mostly targeting supporters of the Populists. By the turn of the century, the populist movement had been so demoralized that the People's Party simply died out.
Although the Civil War had virtually stopped all traffic along the Mississippi River, New Orleans gradually became one of the top ports in the nation. Jetties were constructed at the mouth of the Mississippi to deepen the channel and allow easier access for large oceangoing ships, and railroads were built, improving New Orleans' connection with the rest of the U.S. Commerce in New Orleans improved even more in 1914, when the Panama Canal opened and allowed an increase in trade with Latin America.
In 1900, approximately one-fifth of Louisiana's population lived in New Orleans; the rest of the state was mostly rural. The economy of Louisiana improved over the next decade, as deposits of first oil, and then natural gas, were found throughout the state. Soon industry in northern Louisiana took off, with Shreveport leading the way. In the late 30's, more oil was found offshore, and Louisiana became an important supplier of oil to the rest of the nation. The economy was further helped by the discovery of salt and sulfur deposits in southern Louisiana.
Although the mineral industry in Louisiana was flourishing, the farmers still suffered. Some improvement came during World War I, when cotton was in high demand for military uniforms, but after the armistice cotton prices fell as quickly as they had risen. The effects of this recession lasted well into the 20's.
The Kingfish and Politics
The discontent among farmers allowed the political machine of Huey P. Long to come into being. Long, known as the Kingfish, had a blunt manner that appealed to poor farmers in the rural parishes. He claimed to stand for laborers and farmers, and he denounced large companies like the Standard Oil Company. In 1928, Long was elected governor of Louisiana; two years later, he was elected to the U.S. Senate. After winning the Senate seat, however, Long did not go to Washington for another two years, until a hand-picked successor could replace him as governor. Up until his assassination in 1935, the Kingfish controlled the Louisiana government with an iron fist.
Part of Long's appeal was due to the large number of public works projects started in Louisiana, most of them designed to soften the blow of the Great Depression. However, it turned out that Huey Long had been blocking federal relief in order to further his own interests; after his death, the increased flow of federal money significantly eased the Depression in Louisiana.
Huey Long managed to control Louisianan politics well after his death, thanks to his brother, Earl K. Long, and his son, Russell Long. Up until 1960, the race for governor in Louisiana was decided in the Democratic primaries between a candidate chosen by the Long machine and one backed by the anti-Long faction. Candidates supported by the Longs were frequently corrupt populists who favored heavy state spending; their opponents were progressives who often ran on a platform of fiscal conservatism and integrity.
Louisianan industry boomed during World War II, thanks to the demand for the minerals found in the state, and many chemical and petrochemical plants were built along the Gulf Coast. Some farmers gave up farming and took jobs in industrial centers such as Lake Charles and Baton Rouge. Others left the state, migrating to large cities such as Oakland, California, and Chicago, Illinois, while still others, primarily Cajuns, moved to Texas, where they found jobs in the refineries and shipyards of Port Arthur, Orange, and Beaumont.
Civil Rights and the Present Day
Racial segregation had been mandated by law in all Louisiana public schools since 1898. In 1954, however, the U.S. Supreme Court decided in Brown v. Board of Education of Topeka, Kansas that the "separate but equal" doctrine had no place in education and required the integration of all schools. The Louisiana state legislature quickly passed a series of measures intended to preserve segregation, but the federal courts just as quickly declared these laws unconstitutional. In 1960, the desegregation of Louisiana's elementary schools began in New Orleans. Two years later, the archbishop of New Orleans, after excommunicating several opponents, ordered the desegregation of Roman Catholic schools. Public high schools in the large cities began desegregating in 1963, and within two years practically all of Louisiana's public schools had been integrated. The civil rights issue remained the political hot topic for the rest of the decade.
By the beginning of the 1970's, the commotion over the civil rights movement had died down. Supporters of white supremacy were outnumbered by younger voters looking for improvement of Louisiana's economy, less corruption in government, and harmony between the races. The gubernatorial election of 1971 looked like a return of the Democratic primaries of the Long era, pitting the winning Democratic populist, Edwin W. Edwards, against the reform-minded Republican David C. Treen. Practically all of the elections until 1995 had this flavor.
Louisiana enjoyed an enormous economic boom during the first two terms of Edwards' governorship, thanks to the prosperity of the oil industry. Soon the Louisiana government had tied nearly all of the state's finances to oil: taxes on the oil industry and oil royalties became the two largest sources of government income, and revenues from oil briefly exceeded government spending. Edwards used the surplus to create one of the largest state bureaucracies in the nation. In 1979, the governorship passed to David C. Treen, who once again ran on a reform platform. However, much of the state legislature remained pro-Edwards, and Treen found it impossible to get cooperation. Four years later, Treen lost the election to Edwards. In 1985, Edwards was indicted on racketeering and fraud charges in a hospital construction scandal; the first trial ended in a hung jury, and at the second a jury acquitted him.
Along with the Edwards trials, Louisiana was dealing with a major economic crisis. The price of oil had been declining gradually, and in December 1985 the petroleum-based economy of southern Louisiana collapsed. With voters fed up with Edwards, he lost the 1987 governor's race to Charles E. Roemer.
The Roemer administration was unable to get its programs past the legislature, which was still dominated by Edwards loyalists. Roemer called a special session of the Louisiana legislature in 1988 to enact a large fiscal reform. Despite the support of the state's television stations, newspapers, and businesses, and despite a billion-dollar state debt, Roemer's program was defeated by a large margin. The next year, the state legislature rejected a similar bill.
According to the Edwards faction, gambling was the way to revive the Louisiana economy. Although there was massive grassroots resistance, the gambling advocates won anyway, which bred a statewide feeling of resentment among Louisiana's voters. That discontent only grew with an increase in taxes. In 1990, the voters' frustration was evident in the surprisingly strong senatorial campaign of former Ku Klux Klansman David Duke, and the so-called voters' rebellion gained further strength when Edwin Edwards defeated Duke in the 1991 gubernatorial race.
In 1995, Edwards chose not to run for governor again. The FBI had accused the Louisiana legislature of corruption, and voters saw state politicians as indifferent to the state's problems. These factors, coupled with the rampant growth of the gambling industry, enabled the Republican candidate, Mike Foster, to win the election. Foster denounced gambling often and worked to remove corrupt members of the Edwards faction from the state legislature. He apparently struck a chord with Louisiana voters, because he was reelected by a wide margin in 1999.
In 2000, former governor Edwin Edwards was convicted, along with his son Stephen and three other associates, of extortion, racketeering, and conspiracy in the awarding of casino licenses. Edwards announced plans to appeal. | http://everything2.com/title/louisiana | 13
20 | Late Middle Ages
The Late Middle Ages is a term used by historians to describe European history in the period of the 14th and 15th centuries (c. 1300–1499). The Late Middle Ages were preceded by the High Middle Ages, and followed by the Early Modern era (Renaissance).
Around 1300, centuries of European prosperity and growth came to a halt. A series of famines and plagues, such as the Great Famine of 1315–1317 and the Black Death, reduced the population by as much as half according to some estimates. Along with depopulation came social unrest and endemic warfare. France and England experienced serious peasant risings: the Jacquerie, the Peasants' Revolt, and the Hundred Years' War. To add to the many problems of the period, the unity of the Catholic Church was shattered by the Great Schism. Collectively these events are sometimes called the Crisis of the Late Middle Ages.
Despite these crises, the 14th century was also a time of great progress within the arts and sciences. A renewed interest in ancient Greek and Roman texts led to what has later been termed the Italian Renaissance. The absorption of Latin texts had started in the twelfth-century Renaissance through contact with Arabs during the Crusades, but the availability of important Greek texts accelerated with the capture of Constantinople by the Ottoman Turks, when many Byzantine scholars had to seek refuge in the West, particularly Italy.
Combined with this influx of classical ideas was the invention of printing, which facilitated dissemination of the printed word and democratized learning. These two things would later lead to the Protestant Reformation. Toward the end of the period, an era of discovery began (the Age of Discovery). The growth of the Ottoman Empire, culminating in the fall of Constantinople in 1453, cut off trading possibilities with the east, and Europeans were forced to seek new trading routes, as with Columbus's voyage to the Americas in 1492 and Vasco da Gama's voyage around Africa to India in 1498. Their discoveries strengthened the economy and power of European nations.
The changes brought about by these developments have caused many scholars to see it as leading to the end of the Middle Ages, and the beginning of the modern world. However, the division will always be a somewhat artificial one for other scholars, who argue that since ancient learning was never entirely absent from European society, there is a certain continuity between the Classical and the Modern age. Some historians, particularly in Italy, prefer not to speak of the Late Middle Ages at all, but rather see the 14th century Renaissance as a direct transition to the Modern Era.
The limits of Christian Europe were still being defined in the fourteenth and fifteenth centuries. While the Grand Duchy of Moscow was beginning to repel the Mongols, and the Iberian kingdoms completed the Reconquista of the peninsula and turned their attention outwards, the Balkans fell under the dominance of the Ottoman Empire.
Meanwhile, the remaining nations of the continent were locked in almost constant international or internal conflict.
The situation gradually led to the consolidation of central authority, and the emergence of the nation state.
[Brady et al., p. xvii; Jones, p. 21.] The financial demands of war necessitated higher levels of taxation, resulting in the emergence of representative bodies – most notably the English Parliament. The growth of secular authority was further aided by the decline of the papacy with the Great Schism, and the coming of the Protestant Revolution.
:''Main articles: Denmark, Norway, Sweden''
After the failed union of Sweden and Norway of 1319–1365, the pan-Scandinavian Kalmar Union was instituted in 1397. The Swedes were reluctant members of the Danish-dominated union from the start. In an attempt to subdue the Swedes, King Christian II of Denmark had large numbers of the Swedish aristocracy killed in the Stockholm Bloodbath of 1520. Yet this measure only led to further hostilities, and Sweden broke away for good in 1523. Norway, on the other hand, became an inferior party of the union, and remained united with Denmark until 1814.
Iceland benefited from its relative isolation, and was the only Scandinavian country not struck by the Black Death. Meanwhile, the Norwegian colony on Greenland died out, probably under extreme weather conditions in the 15th century. These conditions might have been the effect of the Little Ice Age.
The death of Alexander III of Scotland in 1286 threw the country into a succession crisis, and the English king, Edward I, was brought in to arbitrate. When Edward claimed overlordship over Scotland, this led to the Wars of Scottish Independence. The English were eventually defeated, and the Scots were able to develop a stronger state under the Stewarts.
From 1337, England's attention was largely directed towards France in the Hundred Years' War. Henry V’s victory at the Battle of Agincourt in 1415 briefly paved the way for a unification of the two kingdoms, but his son Henry VI soon squandered all previous gains. The loss of France led to discontent at home, and almost immediately upon the end of the war in 1453, followed the dynastic struggles of the Wars of the Roses (c. 1455–1485), involving the rival dynasties of Lancaster and York.
The war ended in the accession of Henry VII of the Tudor family, who could continue the work started by the Yorkist kings of building a strong, centralized monarchy. While England's attention was thus directed elsewhere, the Hiberno-Norman lords in Ireland were becoming gradually more assimilated into Irish society, and the island was allowed to develop virtual independence under English overlordship.
:''Main articles: France, Burgundy, Burgundian Netherlands''
The French House of Valois, which followed the House of Capet in 1328, was at its outset virtually marginalized in its own country, first by the English invading forces of the Hundred Years' War, later by the powerful Duchy of Burgundy. The appearance of Joan of Arc on the scene changed the course of war in favour of the French, and the initiative was carried further by King Louis XI.
Meanwhile Charles the Bold, Duke of Burgundy, met resistance in his attempts to consolidate his possessions, particularly from the Swiss Confederation formed in 1291. When Charles was killed in the Burgundian Wars at the Battle of Nancy in 1477, the Duchy of Burgundy was reclaimed by France. At the same time, the County of Burgundy and the wealthy Burgundian Netherlands came into the Holy Roman Empire under Habsburg control, setting up conflict for centuries to come.
:''Main articles: Germany'', ''Hungary'', ''Poland'', ''Lithuania''
Bohemia prospered in the fourteenth century, and the Golden Bull of 1356 made the king of Bohemia first among the imperial electors, but the Hussite revolution threw the country into crisis. The Holy Roman Empire passed to the Habsburgs in 1438, where it remained until the Empire's dissolution in 1806. Yet in spite of the extensive territories held by the Habsburgs, the Empire itself remained fragmented, and much real power and influence lay with the individual principalities. Also financial institutions, such as the Hanseatic League and the Fugger family, held great power, both on an economic and a political level.
The kingdom of Hungary experienced a golden age during the fourteenth century. In particular, the reigns of the Angevin kings Charles Robert (1308–42) and his son Louis I (1342–82) were marked by greatness. The country grew wealthy as the main European supplier of gold and silver. Meanwhile, Poland's attention was turned eastwards, as the union with Lithuania created an enormous entity in the region. The union, and the conversion of Lithuania, also marked the end of paganism in Europe.
The thirteenth century had seen the fall of the state of Kievan Rus', in the face of the Mongol invasion. In its place would eventually emerge the Grand Duchy of Moscow, which won a great victory against the Golden Horde at the Battle of Kulikovo in 1380. The victory did not end Tartar rule in the region, however, and its immediate beneficiary was Lithuania, which extended its influence eastwards.
It was under the reign of Ivan III, the Great (1462–1505), that Moscow finally became a major regional power, and the annexation of the vast Republic of Novgorod in 1478 laid the foundations for a Russian national state. After the Fall of Constantinople in 1453 the Russian princes started to see themselves as the heirs of the Byzantine Empire. They eventually took on the imperial title of Tsar, and Moscow was described as the Third Rome.
Byzantine Empire and the Balkans
:''Main articles: Byzantine Empire, Bulgaria, Serbia''
The Byzantine Empire had for a long time dominated the eastern Mediterranean in politics and culture. By the fourteenth century, however, it had almost entirely collapsed into a tributary state of the Ottoman Empire, centred on the city of Constantinople and a few enclaves in Greece. With the Fall of Constantinople in 1453, the Byzantine Empire was permanently extinguished.
The Bulgarian Empire was in decline by the fourteenth century, and the ascendancy of Serbia was marked by the Serbian victory over the Bulgarians in the Battle of Velbazhd in 1330. By 1346, the Serbian king Stefan Dušan had been proclaimed emperor. Yet Serbian dominance was short-lived; the Serb armies were defeated by the Ottomans at the Battle of Kosovo in 1389, where most of the Serbian nobility were killed and the country became a part of the Ottoman empire, like Bulgaria before it. By the end of the medieval period, the entire Balkan peninsula was annexed by, or became vassals to, the Ottomans.
:''Main articles: Italy'', ''Spain'', ''Portugal''
Avignon was the seat of the papacy from 1309 to 1376. With the return of the Pope to Rome in 1378, the Papal State developed into a major secular power, culminating in the morally corrupt papacy of Alexander VI. Florence grew to prominence amongst the Italian city-states through financial business, and the dominant Medici family became important promoters of the Renaissance through their patronage of the arts. Also other city states in northern Italy expanded their territories and consolidated their power, primarily Milan and Venice. The War of the Sicilian Vespers had by the early fourteenth century divided southern Italy into an Aragon Kingdom of Sicily and an Anjou Kingdom of Naples. In 1442, the two kingdoms were effectively united under Aragonese control.
The 1469 marriage of Isabella of Castile and Ferdinand II of Aragon and the 1479 death of John II of Aragon led to the creation of modern-day Spain. In 1492, Granada was captured from the Moors, thereby completing the Reconquista. Portugal had during the fifteenth century – particularly under Henry the Navigator – gradually explored the coast of Africa, and in 1498, Vasco da Gama found the sea route to India. The Spanish monarchs met the Portuguese challenge by financing Columbus's attempt to find the western sea route to India, leading to the discovery of America in the same year as the capture of Granada.
Around 1300–1350 the Medieval Warm Period gave way to the Little Ice Age. The colder climate resulted in agricultural crises, the first of which is known as the Great Famine of 1315-1317. The demographic consequences of this famine, however, were not as severe as those of the plagues of the later century, particularly the Black Death. Estimates of the death rate caused by this epidemic range from one third to as much as sixty percent. By around 1420, the accumulated effect of recurring plagues and famines had reduced the population of Europe to perhaps no more than a third of what it was a century earlier. The effects of natural disasters were exacerbated by armed conflicts; this was particularly the case in France during the Hundred Years' War.
As the European population was severely reduced, land became more plentiful for the survivors, and labour consequently more expensive. Attempts by landowners to forcibly reduce wages, such as the English 1351 Statute of Laborers, were doomed to fail. These efforts resulted in nothing more than fostering resentment among the peasantry, leading to rebellions such as the French Jacquerie in 1358 and the English Peasants' Revolt in 1381. The long-term effect was the virtual end of serfdom in Western Europe. In Eastern Europe, on the other hand, landowners were able to exploit the situation to force the peasantry into even more repressive bondage.
Up until the mid-fourteenth century, Europe had experienced a steadily increasing urbanisation. Cities were of course also decimated by the Black Death, but the urban areas' role as centres of learning, commerce and government ensured continued growth. By 1500 Venice, Milan, Naples, Paris and Constantinople probably had more than 100,000 inhabitants.
[Allmand (1998), p. 125] Twenty-two other cities were larger than 40,000; most of these were to be found in Italy and the Iberian peninsula, but there were also some in France, the Empire, the Low Countries plus London in England.
The upheavals caused by the Black Death left certain minority groups particularly vulnerable, especially the Jews. The calamities were often blamed on this group, and anti-Jewish pogroms were carried out all over Europe; in February 1349, 2,000 Jews were murdered in Strasbourg. The state also discriminated against the Jews: as monarchs gave in to the demands of the people, the Jews were expelled from England in 1290, from France in 1306, from Spain in 1492 and from Portugal in 1497.
While the Jews were suffering persecution, one group that probably experienced increased empowerment in the Late Middle Ages was women. The great social changes of the period opened up new possibilities for women in the fields of commerce, learning and religion.
[Klapisch-Zuber, p. 268.] Yet at the same time, women were also vulnerable to incrimination and persecution, as belief in witchcraft increased.
Through battles such as Courtrai (1302), Bannockburn (1314), and Morgarten (1315), it became clear to the great territorial princes of Europe that the military advantage of the feudal cavalry was lost, and that a well-equipped infantry was preferable. Through the Welsh Wars the English became acquainted with, and adopted, the highly efficient longbow. Once properly managed, this weapon gave them a great advantage over the French in the Hundred Years' War.
The introduction of gunpowder affected the conduct of war significantly. Though employed by the English as early as the Battle of Crécy in 1346, firearms initially had little effect in the field of battle. It was through the use of cannons as siege weapons that major change was brought about; the new methods would eventually change the architectural structure of fortifications.
Changes also took place within the recruitment and composition of armies. The use of the national or feudal levy was gradually replaced by paid troops of domestic retinues or foreign mercenaries. The practice was associated with Edward III of England and the condottieri of the Italian city-states. All over Europe, Swiss soldiers were in particularly high demand. At the same time, the period also saw the emergence of the first permanent armies. It was in Valois France, under the heavy demands of the Hundred Years' War, that the armed forces gradually assumed a permanent nature.
Parallel to the military developments, an increasingly elaborate chivalric code of conduct for the warrior class also emerged. This new-found ethos can be seen as a response to the diminishing military role of the aristocracy, and it gradually became almost entirely detached from its military origin. The spirit of chivalry was given expression through the new (secular) type of chivalric orders: the first of these was the Order of St. George, founded by Charles I of Hungary in 1325, and the best known was probably the English Order of the Garter, founded by Edward III in 1348.
The Great Schism
The French crown's increasing dominance over the Papacy culminated in the transference of the Holy See to Avignon in 1309. When the Pope returned to Rome in 1377, this led to the election of different popes in Avignon and Rome, resulting in the Great Schism (1378–1417). The Schism divided Europe along political lines; while France, her ally Scotland and the Spanish kingdoms supported the Avignon Papacy, France's enemy England stood behind the Pope in Rome, together with Portugal, Scandinavia and most of the German princes.
At the Council of Constance (1414–1418), the Papacy was once more united in Rome. Even though the unity of the Western Church was to last for another hundred years, and though the Papacy was to experience greater material prosperity than ever before, the Great Schism had done irreparable damage. The internal struggles within the Church had impaired her claim to universal rule, and promoted anti-clericalism among the people and their rulers, paving the way for reform movements.
Though the Catholic Church had long fought against heretical movements, in the Late Middle Ages it began to experience demands for reform from within. The first of these came from the Oxford professor John Wycliffe in England. Wycliffe held that the Bible should be the only authority in religious questions, and he spoke out against transubstantiation, celibacy and indulgences. In spite of influential supporters among the English aristocracy, such as John of Gaunt, the movement was not allowed to survive. Though Wycliffe himself was left unmolested, his supporters, the Lollards, were eventually suppressed in England.
Richard II of England's marriage to Anne of Bohemia established contacts between the two nations and brought Lollard ideas to this part of Europe. The teachings of the Czech priest Jan Hus were based on those of John Wycliffe, yet his followers, the Hussites, were to have a much greater political impact than the Lollards. Hus gained a great following in Bohemia, and in 1414 he was requested to appear at the Council of Constance to defend his cause. When he was burned as a heretic in 1415, it caused a popular uprising in the Czech lands. The subsequent Hussite Wars fell apart due to internal quarrels and did not result in religious or national independence for the Czechs, but both the Catholic Church and the German element within the country were weakened.
Though technically outside the time-period of the Middle Ages, the Protestant Reformation of Martin Luther ended the unity of the Western Church – one of the distinguishing characteristics of the medieval period.
Luther, a German monk, started the Reformation by the posting of the 95 theses on the castle church of Wittenberg on October 31, 1517. The immediate provocation behind the act was Pope Leo X’s renewing the indulgence for the building of the new St. Peter's Basilica in 1514. Luther was challenged to recant his heresy at the Diet of Worms in 1521. When he refused, he was placed under the ban of the Empire by Charles V. Receiving the protection of Frederick the Wise, he was then able to translate the Bible into German.
To many secular rulers, the Protestant reformation was a welcome opportunity to expand their wealth and influence. The Catholic Church met the challenges of the reforming movements with what has been called the Catholic or Counter-Reformation. Europe became split into a northern Protestant and a southern Catholic part, resulting in the Religious Wars of the 16th and 17th centuries.
Trade and commerce
The increasingly dominant position of the Ottoman Empire in the eastern Mediterranean presented an impediment to trade for the Christian nations of the west, who in turn started looking for alternatives. Portuguese and Spanish explorers found new trade routes – south of Africa to India, and across the Atlantic Ocean to America. As Genoese and Venetian merchants opened up direct sea routes with Flanders, the Champagne fairs lost much of their importance.
At the same time, English wool exports shifted from raw wool to processed cloth, resulting in losses for the cloth manufacturers of the Low Countries. In the Baltic and North Sea, the Hanseatic League reached the peak of its power in the fourteenth century, but began to decline in the fifteenth.
In the late thirteenth and early fourteenth centuries, a process took place – primarily in Italy but partly also in the Empire – that historians have termed a 'commercial revolution'. Among the innovations of the period were new forms of partnership and the issuing of insurance, both of which reduced the risk of commercial ventures; the bill of exchange and other forms of credit, which circumvented the canonical prohibitions on usury and eliminated the dangers of carrying bullion; and new forms of accounting, in particular double-entry bookkeeping, which allowed for better oversight and accuracy.
With the financial expansion, trading rights became more jealously guarded by the commercial elite. Towns saw the growing power of guilds, while on a national level special companies would be granted monopolies on particular trades, like the English wool Staple. The beneficiaries of these developments would accumulate immense wealth. Families like the Fuggers in Germany, the Medicis in Italy, the de la Poles in England, and individuals like Jacques Coeur in France would help finance the wars of kings, and achieve great political influence in the process.
Though there is no doubt that the demographic crisis of the fourteenth century caused a dramatic fall in production and commerce in ''absolute'' terms, there has been a vigorous historical debate over whether the decline was greater than the fall in population. While the older orthodoxy held that the artistic output of the Renaissance was a result of greater opulence, more recent studies have suggested that there might have been a so-called 'depression of the Renaissance'. In spite of convincing arguments for the case, the statistical evidence is simply too incomplete for a definite conclusion to be made.
Arts and sciences
In the fourteenth century, the predominant academic trend of scholasticism was challenged by the humanist movement. Though primarily an attempt to revitalise the classical languages, the movement also led to innovations within the fields of science, art and literature, helped on by impulses from Byzantine scholars who had to seek refuge in the west after the Fall of Constantinople in 1453.
In science, classical authorities like Aristotle were challenged for the first time since antiquity. Within the arts, humanism took the form of the Renaissance. Though the fifteenth-century Renaissance was a highly localised phenomenon – limited mostly to the city states of northern Italy – artistic developments were taking place also further north, particularly in the Netherlands.
Philosophy, science and technology
The predominant school of thought in the thirteenth century was the Thomistic reconciliation of the teachings of Aristotle with Christian theology. The Condemnation of 1277, enacted at the University of Paris, placed restrictions on ideas that could be interpreted as heretical; restrictions that had implication for Aristotelian thought.
An alternative was presented by William of Ockham, who insisted that the world of reason and the world of faith had to be kept apart. Ockham introduced the principle of parsimony – or Occam's razor – whereby a simple theory is preferred to a more complex one, and speculation on unobservable phenomena is avoided.
This new approach liberated scientific speculation from the dogmatic restraints of Aristotelian science, and paved the way for new approaches. Particularly within the field of theories of motion great advances were made, when such scholars as Jean Buridan, Nicole Oresme and the Oxford Calculators challenged the work of Aristotle. Buridan developed the theory of ''impetus'' as the cause of the motion of projectiles, which was an important step towards the modern concept of inertia. The works of these scholars anticipated the heliocentric worldview of Nicolaus Copernicus.
Certain technological inventions of the period – whether of Arab or Chinese origin, or unique European innovations – were to have great influence on political and social developments, in particular gunpowder, the printing press and the compass. The introduction of gunpowder to the field of battle affected not only military organisation, but helped advance the nation state. Gutenberg's movable type printing press made possible not only the Reformation, but also a dissemination of knowledge that would lead to a gradually more egalitarian society. The compass, along with other innovations such as the cross-staff, the mariner's astrolabe, and advances in shipbuilding, enabled the navigation of the World Oceans, and the early phases of colonialism. Other inventions had a greater impact on everyday life, such as eyeglasses and the weight-driven clock.
Visual arts and architecture
A precursor to Renaissance art can be seen already in the early fourteenth-century works of Giotto. Giotto was the first painter since antiquity to attempt the representation of a three-dimensional reality, and to endow his characters with true human emotions. The most important developments, however, came in fifteenth-century Florence. The affluence of the merchant class allowed extensive patronage of the arts, and foremost among the patrons were the Medici.
The period saw several important technical innovations, like the principle of linear perspective found in the work of Masaccio, and later described by Brunelleschi. Greater realism was also achieved through the scientific study of anatomy, championed by artists like Donatello. This can be seen particularly well in his sculptures, inspired by the study of classical models. As the centre of the movement shifted to Rome, the period culminated in the High Renaissance masters da Vinci, Michelangelo and Raphael.
The ideas of the Italian Renaissance were slow to cross the Alps into northern Europe, but important artistic innovations were made also in the Low Countries. Though not – as previously believed – the inventor of oil painting, Jan van Eyck was a champion of the new medium, and used it to create works of great realism and minute detail. The two cultures influenced each other and learned from each other, but painting in the Netherlands remained more focused on textures and surfaces than the idealised compositions of Italy.
In northern European countries gothic architecture remained the norm, and the gothic cathedral was further elaborated. In Italy, on the other hand, architecture took a different direction, also here inspired by classical ideals. The crowning work of the period was the Santa Maria del Fiore in Florence, with Giotto's clock tower, Ghiberti's baptistery gates, and Brunelleschi's cathedral dome of unprecedented proportions.
The most important development of late medieval literature was the ascendancy of the vernacular languages. The vernacular had been in use in France and England since the eleventh century, where the most popular genres had been the chanson de geste, troubadour lyrics and romantic epics, or the romance. Though Italy was later in evolving a native literature in the vernacular language, it was here that the most important developments of the period were to come.
Dante Alighieri's ''Divine Comedy'', written in the early fourteenth century, merged a medieval world view with classical ideals. Another promoter of the Italian language was Boccaccio with his ''Decameron''. The application of the vernacular did not entail a rejection of Latin, and both Dante and Boccaccio wrote prolifically in Latin as well as Italian, as would Petrarch later (whose ''Canzoniere'' also promoted the vernacular and whose contents are considered the first modern lyric poems). Together the three poets established the Tuscan dialect as the norm for the modern Italian language.
The new literary style spread rapidly, and in France influenced such writers as Eustache Deschamps and Guillaume de Machaut. In England, Geoffrey Chaucer helped establish English as a literary language with his ''Canterbury Tales'', whose tales of everyday life were heavily influenced by Boccaccio. The spread of vernacular literature eventually reached as far as Bohemia, and the Baltic, Slavic and Byzantine worlds.
Music was an important part of both secular and spiritual culture, and in the universities it made up part of the ''quadrivium'' of the liberal arts. From the early thirteenth century, the dominant sacred musical form had been the motet: a composition with text in several parts. From the 1330s onwards there emerged the polyphonic style, a more complex fusion of independent voices. Polyphony had been common in the secular music of the Provençal troubadours. Many of these had fallen victim to the thirteenth-century Albigensian Crusade, but their influence reached the papal court at Avignon.
The main representatives of the new style, often referred to as ''ars nova'' as opposed to the ''ars antiqua'', were the composers Philippe de Vitry and Guillaume de Machaut. In Italy, where the Provençal troubadours had also found refuge, the corresponding period goes under the name of trecento, and the leading composers were Giovanni da Cascia, Jacopo da Bologna and Francesco Landini.
For eighteenth-century historians studying the fourteenth and fifteenth centuries, the central theme was the Renaissance, with its rediscovery of ancient learning and the emergence of an individual spirit. This was a process centred on Italy, where, in the words of Jacob Burckhardt: "Man became a spiritual individual and recognized himself as such" (''The Civilization of the Renaissance in Italy'', 1860). This proposition was later challenged, and it was argued that the twelfth century was a period of greater cultural achievement.
As economic and demographic methods were applied to the study of history, the trend was increasingly to see the late Middle Ages as a period of recession and crisis. Belgian historian Henri Pirenne introduced the now common subdivision of Early, High and Late Middle Ages in the years around World War I. Yet it was his Dutch colleague Johan Huizinga who was primarily responsible for popularising the pessimistic view of the Late Middle Ages, with his book ''The Autumn of the Middle Ages'' (1919). To Huizinga, whose research focused on France and the Low Countries rather than Italy, despair and decline were the main themes, not rebirth.
[Allmand, p. 299; Cantor, p. 530.]
Modern historiography on the period has reached a consensus between the two extremes of innovation and crisis.
It is now generally acknowledged that conditions were vastly different north and south of the Alps, and "Late Middle Ages" is often avoided entirely within Italian historiography. The term "Renaissance" is still considered useful for describing certain intellectual, cultural or artistic developments, but not as the defining feature of an entire European historical epoch. [Brady ''et al.'', p. xvii.] The period from the early fourteenth century up until – sometimes including – the sixteenth century, is rather seen as characterised by other trends: demographic and economic decline followed by recovery, the end of western religious unity and the subsequent emergence of the nation state, and the expansion of European influence onto the rest of the world.
The IMF In Action: Why Do We Need the IMF?
In this activity, students take the role of a cell phone salesperson in a mythical country. They are asked to choose between alternative solutions in a series of scenarios about trade issues. In the course of their decision-making, they learn how the IMF provides assistance to countries with currency problems.
Intended audience: 14-18 year olds (9-12 graders, US) studying Social Studies and Economics in school.
National Economics Content Standards
Standard 10 — Role of Economic Institutions
Institutions evolve in market economies to help individuals and groups accomplish their goals. Banks, labor unions, corporations, legal systems, and not-for-profit organizations are examples of important institutions. A different kind of institution, clearly defined and well-enforced property rights, is essential to a market economy.
Standard 11 — Role of Money
Money makes it easier to trade, borrow, save, invest and compare the value of goods and services.
- Ask students who have visited other countries what kinds of currencies they used. List the names of the other currencies on the board.
- Divide the class into groups of 4 students. Give each group daily exchange rate tables (found in the business section of most major newspapers) for at least 5 days. Make enough copies so that each group has one table from each day. Have students compare the exchange rate of one foreign currency for the five days. (For example, how many Japanese Yen were needed to buy one U.S. dollar on Monday, Tuesday, Wednesday, Thursday and Friday?) Guide students to recognize that exchange rates fluctuate from day to day.
- Explain that exchange rates change as the demand for a currency changes. Demand for a currency increases when the demand for goods and services from that country increases. If a country is producing cars that lots of people want to buy, then demand for that country's currency will increase, and the value of the currency will rise.
Divide the class into two groups, Country R and Country B. Distribute red construction paper strips randomly to the students in Country R and blue construction paper strips randomly to the students in Country B. Country R produces lima beans and charges 5 R per bean; Country B produces elbow macaroni and charges 10 B per noodle.
Set up one desk on each side of the class that will be the "sales center" for that country, and give Country R a container of lima beans and Country B a container of elbow macaroni. Consumers in Country B want to buy beans, but they have no Red currency. Consumers in Country R want to buy macaroni, but they have no Blue currency. An exchange service is needed.
Set up an exchange "bank" at one desk, and choose a student to be the "banker." Give the banker a stack of R and a stack of B currency. The rate of exchange is 1R = 2B. Students can now exchange their currency and purchase beans or macaroni.
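For teachers who want to show the arithmetic behind the banker's table, the short sketch below lays it out in Python. It is only illustrative: the helper names are invented here, while the 1R = 2B rate and the bean and macaroni prices come from the activity above.

    # Classroom exchange arithmetic (illustrative sketch; names are invented).
    RATE_B_PER_R = 2.0           # the banker's rate: 1 R = 2 B

    def r_to_b(amount_r):
        """How many B the banker hands over for a given amount of R."""
        return amount_r * RATE_B_PER_R

    def b_to_r(amount_b):
        """How many R the banker hands over for a given amount of B."""
        return amount_b / RATE_B_PER_R

    BEAN_PRICE_R = 5             # Country R sells lima beans at 5 R each
    MACARONI_PRICE_B = 10        # Country B sells macaroni at 10 B per noodle

    # A Country B consumer must give up 10 B to obtain the 5 R one bean costs.
    b_cost_of_a_bean = r_to_b(BEAN_PRICE_R)        # 10.0
    # A Country R consumer must give up 5 R to obtain the 10 B one noodle costs.
    r_cost_of_a_noodle = b_to_r(MACARONI_PRICE_B)  # 5.0

    print(b_cost_of_a_bean, r_cost_of_a_noodle)

Students can check the printed numbers against the trades they actually made at the classroom bank.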
Ask: "What would happen to the demand for beans if a medical report said eating beans would add ten years to a person's life?" (Demand would increase.)
"What would happen to the demand for B currency?" (It too would increase.) "What would happen to the exchange rate of B currency relative to R
currency?" (It would take more R to buy B.)
Explain that in the scenario above, R currency has appreciated relative to B. B currency has depreciated relative to R.
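A numerical follow-up can make the appreciation concrete. The sketch below is again only illustrative and assumes a hypothetical shift in the rate from 1R = 2B to 1R = 3B after the medical report raises demand for beans (and therefore for R).

    # Illustrative only: when more B is needed to buy 1 R, R has appreciated.
    BEAN_PRICE_R = 5                       # beans are still priced in R

    def cost_in_b(price_r, b_per_r):
        """What a Country B consumer pays, in B, for a good priced in R."""
        return price_r * b_per_r

    cost_before = cost_in_b(BEAN_PRICE_R, 2)   # 10 B at the old rate (1R = 2B)
    cost_after = cost_in_b(BEAN_PRICE_R, 3)    # 15 B at the assumed new rate (1R = 3B)

    # R has appreciated relative to B and B has depreciated relative to R:
    # the same 5 R bean now costs Country B consumers 15 B instead of 10 B.
    print(cost_before, cost_after)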
Current Currencies. This link presents a lesson plan about money and currency exchange rates. It includes a teacher's plan and a student how-to guide.
Online Extension Activities:
Choose 8 students who have finished their assigned work and have them, in pairs, evaluate the following lessons and present them to the class.
Globalization Comes to the Table. This lesson deals with globalization, with special emphasis on food production.
Currency Exchange and the Balance of Trade. This lesson deals with exchange rates and world currencies.
The era of piracy in the Caribbean began in the 16th century and died out in the 1830s after the navies of the nations of Western Europe and North America with colonies in the Caribbean began combating pirates. The period during which pirates were most successful was from the 1660s to 1730s. Piracy flourished in the Caribbean because of the existence of pirate seaports such as Port Royal in Jamaica, Tortuga, and Nassau in the Bahamas.
The causes of piracy
Piracy in the Caribbean first resulted from the larcenous activities of groups of European sailors, mostly English, Dutch and French, who were marooned or shipwrecked in the Caribbean. They were called buccaneers, from the French "boucaner" (to smoke meat) on a "boucan" (a wooden frame set over a fire). By setting up smoky fires and boucans with the prepared meat of feral cattle, these castaways could get a ship to draw near for trading, at which time the buccaneers could seize the ship. The buccaneers were later chased off the islands they lived upon by colonial powers and had to seek a new life at sea, where they continued their raiding of other ships. There they created lucrative but illegitimate opportunities for common seamen to attack European merchant ships (especially Spanish fleets sailing from the Caribbean to Europe) and seize their valuable cargo, a practice that began in the 16th century. Piracy was sometimes given "legal" status by the colonial powers, especially France under King Francis I (r. 1515–1547), in the hope of weakening the sea trade of their rivals who established a mare clausum policy in the Atlantic and Indian Oceans. This "legal" form of piracy is known as privateering. From 1520 to 1560, French privateers were alone in their fight against the Crown of Spain and the vast commerce of the Spanish Empire in the New World. They were later joined by English and Dutch privateers. The following quote by a Welsh pirate shows the motivations for piracy in the 18th century Caribbean:
“In an honest Service, there is thin Commons, low Wages, and hard Labour; in this, Plenty and Satiety, Pleasure and Ease, Liberty and Power; and who would not balance Creditor on this Side, when all the Hazard that is run for it, at worst, is only a sower Look or two at choaking. No, a merry Life and a short one shall be my Motto.”
—Pirate Captain Bartholomew Roberts
The Caribbean had become a center of European trade and colonization after Columbus' discovery of the New World for Spain in 1492. In the 1494 Treaty of Tordesillas the non-European world had been divided between the Spanish and the Portuguese along a north-south line 370 leagues west of the Cape Verde islands. This gave Spain control of the Americas, a position the Spaniards also grounded in the equally unenforceable papal bull of 1493 (the Inter caetera). On the Spanish Main, the key early settlements were Cartagena in present-day Colombia, Porto Bello and Panama City on the Isthmus of Panama, Santiago on the southeastern coast of Cuba, and Santo Domingo on the island of Hispaniola. In the 16th century, the Spanish were mining staggering amounts of silver bullion from the mines of Zacatecas in New Spain (Mexico) and Potosí in Peru (actually now located in Bolivia). The huge Spanish silver shipments from the New World to the Old attracted pirates and French privateers like François Leclerc or Jean Fleury, both in the Caribbean and across the Atlantic, all along the route from the Caribbean to Seville.
To combat this constant danger, in the 1560s the Spanish adopted a convoy system. A treasure fleet or flota would sail annually from Seville (and later from Cádiz) in Spain, carrying passengers, troops, and European manufactured goods to the Spanish colonies of the New World. This cargo, though profitable, was really just a form of ballast for the fleet as its true purpose was to transport the year's worth of silver to Europe. The first stage in the journey was the transport of all that silver from the mines in Peru and New Spain in a mule convoy called the Silver Train to a major Spanish port, usually on the Isthmus of Panama or from Veracruz in New Spain. The flota would meet up with the Silver Train, offload its cargo of manufactured goods to waiting colonial merchants and then transfer the precious cargo of gold and silver (in bullion or coin form) into its holds. This made the returning Spanish treasure fleet a tempting target, although pirates were more likely to shadow the fleet to attack stragglers than try and seize the well-guarded main vessels. The classic route for the treasure fleet in the Caribbean was through the Lesser Antilles to the ports along the Spanish Main on the coast of Central America and New Spain, then northwards into the Yucatán Channel to catch the westerly winds back to Europe.
The Dutch United Provinces of the Netherlands and England, both defenders of Protestantism, were defiantly opposed to Catholic Spain (the greatest power of Christendom in the 16th century) by the 1560s, while the French government was seeking to expand its colonial holdings in the New World now that Spain had proven they could be extremely profitable. It was the French who had established the first non-Spanish settlement in the Caribbean when they had founded Fort Caroline near what is now Jacksonville, Florida in 1564, although the settlement was soon wiped out by a Spanish attack from the larger colony of Saint Augustine. Aided by their governments, English, French and Dutch traders and colonists utterly ignored the unenforceable line drawn by the Treaty of Tordesillas to invade Spanish colonial territory even in times of peace between their nations in Europe, which gave rise to the famed 16th century phrase: "No peace beyond the line."
The Spanish, despite being the wealthiest state in Christendom at the time, could not afford a sufficient military presence to control such a vast area of ocean or enforce their exclusionary, mercantilist trading laws. These laws allowed only Spanish merchants to trade with the colonists of the Spanish Empire in the Americas. This legal arrangement allowed for constant smuggling to break the Spanish trading laws and new attempts at Caribbean colonization in peacetime by England, France and the Netherlands. Whenever a war was declared in Europe between the Great Powers the result was always widespread piracy and privateering throughout the Caribbean.
The Anglo-Spanish War in 1585–1604 was partly due to trade disputes in the New World. A focus on extracting mineral and agricultural wealth from the New World rather than building productive, self-sustaining settlements in its colonies; inflation fueled in part by the massive shipments of silver and gold to Western Europe; endless rounds of expensive wars in Europe; an aristocracy that belittled commercial opportunities as beneath them; and an inefficient system of tolls and tariffs that hampered industry all contributed to Spain's decline from power during the 17th century. However, very profitable trade continued between Spain and its colonies, and the Spanish overseas empire continued to expand until the early 19th century.
Meanwhile, in the Caribbean, the arrival of European diseases with Columbus had reduced the local American Indian populations; the native population of New Spain fell as much as 90% from its original numbers in the 16th century. This loss of native population led Spain to increasingly rely on African slave labor to run Spanish America's colonies, plantations and mines and the trans-Atlantic slave trade offered new sources of profit for English, Dutch and French traders who wanted to violate the Spanish mercantilist laws—and did so, with impunity. But the relative emptiness of the Caribbean also made it an inviting place for England, France and the Netherlands to set up colonies of their own, especially as gold and silver became less important as commodities to be seized and were replaced by tobacco and sugar as cash crops that could make men very rich.
As Spain's military might in Europe weakened, the Spanish trading laws in the New World were violated with greater frequency by the merchants of other nations. The Spanish port on the island of Trinidad off the northern coast of South America, permanently settled only in 1592, became a major point of contact between all the nations with a presence in the Caribbean.
The early seventeenth century, 1600–1660
Changes in demographics
In the early 17th century, expensive fortifications and the size of the colonial garrisons at the major Spanish ports increased to deal with the enlarged presence of Spain's competitors in the Caribbean, but the treasure fleet's silver shipments and the number of Spanish-owned merchant ships operating in the region declined. Additional problems came from shortage of food supplies because of the lack of people to work farms. The number of European-born Spaniards in the New World or Spaniards of pure blood who had been born in New Spain, known as peninsulares and creoles, respectively, in the Spanish caste system, totaled no more than 250,000 people in 1600. Few Spanish colonists in the New World served as the productive members of society who grew crops or manufactured goods—they all wanted to pursue lives of aristocratic luxury in their haciendas as the masters of great plantations growing food, tobacco or sugar, with African or Indian slaves to serve them and do all of the real labor. Later settlements in the Caribbean islands by other European powers also relied on the labour of non-European workers, namely African slaves.
At the same time, England and France were powers on the rise in 17th century Europe as they mastered their own internal religious schisms between Catholic and Protestant and the resulting societal peace allowed their economies to rapidly expand. England especially began to turn its people's maritime skills into the basis of commercial prosperity. English and French kings of the early 17th century—James I (r. 1603–1625) and Henry IV (r. 1598–1610), respectively, each sought more peaceful relations with Habsburg Spain in an attempt to decrease the financial costs of the ongoing wars. Although the onset of peace in 1604 reduced the opportunities for both piracy and privateering against Spain's colonies, neither monarch discouraged his nation from trying to plant new colonies in the New World and break the Spanish monopoly on the Western Hemisphere. The reputed riches, pleasant climate and the general emptiness of the Americas all beckoned to those eager to make their fortunes and a large assortment of Frenchmen and Englishmen began new colonial ventures during the early 17th century, both in North America, which lay basically empty of European settlement north of Mexico, and in the Caribbean, where Spain remained the dominant power until late in the century.
As for the Dutch Netherlands, after decades of rebellion against Spain fueled by both Dutch nationalism and their staunch Protestantism, independence had been gained in all but name (and that too would eventually come with the Treaty of Westphalia in 1648). The Netherlands had become Europe's economic powerhouse. With new, innovative ship designs like the fluyt (a cargo vessel able to be operated with a small crew and enter relatively inaccessible ports) rolling out of the ship yards in Amsterdam and Rotterdam, new capitalist economic arrangements like the joint-stock company taking root and the military reprieve provided by the Twelve Year Truce with the Spanish (1609–1621), Dutch commercial interests were expanding explosively across the globe, but particularly in the New World and East Asia. However, in the early 17th century, the most powerful Dutch companies, like the Dutch East India Company, were most interested in developing operations in the East Indies (Indonesia) and Japan, and left the West Indies to smaller, more independent Dutch operators.
In the early 17th century, the Spanish colonies of Cartagena, Havana, Santiago de Cuba, San Juan, Panama City, Maracaibo, and Santo Domingo were among the most important settlements of the Spanish West Indies. Each possessed a large population and a self-sustaining economy, and was well-protected by Spanish defenders. These Spanish settlements were generally unwilling to deal with traders from the other European states because of the strict enforcement of Spain's mercantilist laws pursued by the large Spanish garrisons. In these cities European manufactured goods could command premium prices for sale to the colonists, while the trade goods of the New World—tobacco, chocolate and other raw materials, were shipped back to Europe.
By 1600, Porto Bello had replaced Nombre de Dios (where Sir Francis Drake had first attacked a Spanish settlement) as the Isthmus of Panama's Caribbean port for the Spanish Silver Train and the annual treasure fleet. Veracruz, the only port city open to trans-Atlantic trade in New Spain, continued to serve the vast interior of New Spain as its window on the Caribbean. By the 17th century, the majority of the towns along the Spanish Main and in Central America had become self-sustaining. The smaller towns of the Main grew tobacco and also welcomed foreign smugglers who avoided the Spanish mercantilist laws. The underpopulated inland regions of Hispaniola were another area where tobacco smugglers in particular were welcome to ply their trade.
The Spanish-ruled island of Trinidad was already a wide-open port open to the ships and seamen of every nation in the region at the start of the 17th century, and was a particular favorite for smugglers who dealt in tobacco and European manufactured goods. Local Caribbean smugglers sold their tobacco or sugar for decent prices and then bought manufactured goods from the trans-Atlantic traders in large quantities to be dispersed among the colonists of the West Indies and the Spanish Main who were eager for a little touch of home. The Spanish governor of Trinidad, who both lacked strong harbor fortifications and possessed only a laughably small garrison of Spanish troops, could do little but take lucrative bribes from English, French and Dutch smugglers and look the other way—or risk being overthrown and replaced by his own people with a more pliable administrator.
The English had established an early colony known as Virginia in 1607 and one on the island of Barbados in the West Indies in 1625, although this small settlement's people faced considerable dangers from the local Carib Indians (reputed to be cannibals) for some time after its founding. The two early colonies needed regular imports from England, chiefly of food and woollen textiles. The main early exports back to England included sugar, tobacco, and tropical foodstuffs. No large tobacco plantations or even truly organized defenses were established by the English in their Caribbean settlements at first, and it would take time for London to realize just how valuable its possessions in the Caribbean could prove to be. Eventually, African slaves would be purchased through the slave trade. They would work the colonies and fuel Europe's tobacco, rice and sugar supply; by 1698 England had the largest slave exports of any imperial power. Barbados, the first truly successful English colony in the West Indies, grew fast as the 17th century wore on, and by 1698 Jamaica would be England’s biggest colony to employ slave labor. Increasingly, English ships chose to use Barbados as their primary home port in the Caribbean. Like Trinidad, merchants in the trans-Atlantic trade who based themselves on Barbados always paid good money for tobacco and sugar. Both of these commodities remained the key cash crops of this period and fueled the growth of the American Southern Colonies as well as their counterparts in the Caribbean.
After the destruction of Fort Caroline by the Spanish, the French made no further colonization attempts in the Caribbean for several decades as France was convulsed by its own Catholic-Protestant religious divide during the late 16th century Wars of Religion. However, old French privateering anchorages with small "tent camp" towns could be found during the early 17th century in the Bahamas. These settlements provided little more than a place for ships and their crews to take on some fresh water and food and perhaps have a dalliance with the local camp followers, all of which would have been quite expensive.
From 1630 to 1654, Dutch merchants held a port in Brazil known as Recife, originally founded by the Portuguese in 1548. In 1630 the Dutch had decided to invade several sugar-producing cities in Portuguese-controlled Brazil, including Salvador and Natal. They took control of Recife and Olinda, making Recife the new capital of the territory of Dutch Brazil and renaming the city Mauritsstad. During this period, Mauritsstad became one of the most cosmopolitan cities in the world. Unlike the Portuguese, the Dutch did not prohibit Judaism, and the first Jewish community and the first synagogue in the Americas - the Kahal Zur Israel Synagogue - were founded in the city.
The inhabitants fought on their own to expel the Dutch in 1654, being helped by the involvement of the Dutch in the First Anglo-Dutch War. This was known as the Insurreição Pernambucana (Pernambucan Insurrection). Most of the Jews fled to Amsterdam; others fled to North America, starting the first Jewish community of New Amsterdam (now known as New York City). The Dutch spent most of their time trading in smuggled goods with the smaller Spanish colonies. Trinidad was the unofficial home port for Dutch traders and privateers in the New World early in the 17th century before they established their own colonies in the region in the 1620s and 1630s. As usual, Trinidad's ineffective Spanish governor was helpless to stop the Dutch from using his port and instead he usually accepted their lucrative bribes.
The first third of the 17th century in the Caribbean was defined by the outbreak of the savage and destructive Thirty Years' War in Europe (1618–1648) that represented both the culmination of the Protestant-Catholic conflict of the Reformation and the final showdown between Habsburg Spain and Bourbon France. The war was mostly fought in Germany, where one-third to one-half of the population would eventually be lost to the strains of the conflict, but it had some effect in the New World as well. The Spanish presence in the Caribbean began to decline at a faster rate, becoming more dependent on African slave labor. The Spanish military presence in the New World also declined as Madrid shifted more of its resources to the Old World in the Habsburgs' apocalyptic fight with almost every Protestant state in Europe. This need for Spanish resources in Europe accelerated the decay of the Spanish Empire in the Americas. The settlements of the Spanish Main and the Spanish West Indies became financially weaker and were garrisoned with a much smaller number of troops as their home countries were more consumed with happenings back in Europe. The Spanish Empire's economy remained stagnant and the Spanish colonies' plantations, ranches and mines became totally dependent upon slave labor imported from West Africa. With Spain no longer able to maintain its military control effectively over the Caribbean, the other Western European states finally began to move in and set up permanent settlements of their own, ending the Spanish monopoly over the control of the New World.
Even as the Dutch Netherlands were forced to renew their struggle against Spain for independence as part of the Thirty Years' War (the entire rebellion against the Spanish Habsburgs was called the Eighty Years War in the Low Countries), the Dutch Republic had become the world's leader in mercantile shipping and commercial capitalism and Dutch companies finally turned their attention to the West Indies in the 17th century. The renewed war with Spain with the end of the truce offered many opportunities for the successful Dutch joint-stock companies to finance military expeditions against the Spanish Empire. The old English and French privateering anchorages from the 16th century in the Caribbean now swarmed anew with Dutch warships.
In England, a new round of colonial ventures in the New World was fueled by declining economic opportunities at home and growing religious intolerance for more radical Protestants (like the Puritans) who rejected the compromise Protestant theology of the established Church of England. After the demise of the Saint Lucia and Grenada colonies soon after their establishment, and the near-extinction of the English settlement of Jamestown in Virginia, new and stronger colonies were established by the English in the first half of the 17th century, at Plymouth, Boston, Barbados, the West Indian islands of Saint Kitts and Nevis and Providence Island. These colonies would all persevere to become centers of English civilization in the New World.
For France, now ruled by the Bourbon King Louis XIII (r. 1610–1642) and his able minister Cardinal Richelieu, religious civil war had been reignited between French Catholics and Protestants (called Huguenots). Throughout the 1620s, French Huguenots fled France and founded colonies in the New World much like their English counterparts. Then, in 1636, to decrease the power of the Habsburg dynasty who ruled Spain and the Holy Roman Empire on France's eastern border, France entered the cataclysm in Germany—on the Protestants' side.
Many of the cities on the Spanish Main in the first third of the 17th century were self-sustaining but few had yet achieved any prosperity. The more backward settlements in Jamaica and Hispaniola were primarily places for ships to take on food and fresh water. Spanish Trinidad remained a popular smuggling port where European goods were plentiful and fairly cheap, and good prices were paid by its European merchants for tobacco or sugar.
The English colonies on Saint Kitts and Nevis, founded in 1623, would prove to become wealthy sugar-growing settlements in time. Another new English venture, the Providence Island colony on what is now Providencia Island off the malaria ridden Mosquito Coast of Nicaragua, deep in the heart of the Spanish Empire, had become the premier base for English privateers and other pirates raiding the Spanish Main.
On the shared Anglo-French island of Saint Christophe (called "Saint Kitts" by the English) the French had the upper hand. The French settlers on Saint Christophe were mostly Catholics, while the unsanctioned but growing French colonial presence in northwest Hispaniola (the future nation of Haiti) was largely made up of French Protestants who had settled there without Spain's permission to escape Catholic persecution back home. France cared little what happened to the troublesome Huguenots, but the colonization of western Hispaniola allowed the French to both rid themselves of their religious minority and strike a blow against Spain—an excellent bargain, from the French Crown's point of view. The ambitious Huguenots had also claimed the island of Tortuga off the northwest coast of Hispaniola and had established the settlement of Petit-Goâve on the island itself. Tortuga in particular was to become a pirate and privateer haven and was beloved of smugglers of all nationalities—after all, even the creation of the settlement had been illegal.
Dutch colonies in the Caribbean remained rare until the second third of the 17th century. Along with the traditional privateering anchorages in the Bahamas and Florida, the Dutch West India Company settled a "factory" (commercial town) at New Amsterdam on the North American mainland in 1626 and at Curaçao in 1634, an island positioned right in the center of the Caribbean off the northern coast of Venezuela that was perfectly positioned to become a major maritime crossroads.
The seventeenth century crisis and colonial repercussions
The mid-17th century in the Caribbean was again shaped by events in far-off Europe. For the Dutch Netherlands, France, Spain and the Holy Roman Empire, the Thirty Years War being fought in Germany, the last great religious war in Europe, had degenerated into an outbreak of famine, plague and starvation that managed to kill off one-third to one-half of the population of Germany. England, having avoided any entanglement in the European mainland's wars, had fallen victim to its own ruinous civil war that resulted in the short but brutal Puritan military dictatorship (1649–1660) of the Lord Protector Oliver Cromwell and his Roundhead armies. Of all the European Great Powers, Spain was in the worst shape economically and militarily as the Thirty Years War concluded in 1648. Economic conditions had become so poor for the Spanish by the middle of the 17th century that a major rebellion began against the bankrupt and ineffective Habsburg government of King Philip IV (r. 1625–1665) that was eventually put down only with bloody reprisals by the Spanish Crown. This did not make poor Philip IV more popular.
But disasters in the Old World bred opportunities in the New World. The Spanish Empire's colonies were badly neglected from the middle of the 17th century because of Spain's many woes. Freebooters and privateers, experienced after decades of European warfare, pillaged and plundered the almost defenseless Spanish settlements with ease and with little interference from the European governments back home who were too worried about their own problems at home to turn much attention to their New World colonies. The non-Spanish colonies were growing and expanding across the Caribbean, fueled by a great increase in immigration as people fled from the chaos and lack of economic opportunity in Europe. While most of these new immigrants settled into the West Indies' expanding plantation economy, others took to the life of the buccaneer. Meanwhile, the Dutch, at last independent of Spain when the 1648 Treaty of Westphalia ended their own Eighty Years War (1568–1648) with the Habsburgs, made a fortune carrying the European trade goods needed by these new colonies. Peaceful trading was not as profitable as privateering, but it was a safer business.
By the later half of the 17th century, Barbados had become the unofficial capital of the English West Indies before this position was claimed by Jamaica later in the century. Barbados was a merchant's dream port in this period. European goods were freely available, the island's sugar crop sold for premium prices, and the island's English governor rarely sought to enforce any type of mercantilist regulations. The English colonies at Saint Kitts and Nevis were economically strong and now well-populated as the demand for sugar in Europe increasingly drove their plantation-based economies. The English had also expanded their dominion in the Caribbean and settled several new islands, including Bermuda in 1612, Antigua and Montserrat in 1632, and Eleuthera in the Bahamas in 1648, though these settlements began like all the others as relatively tiny communities that were not economically self-sufficient.
The French also founded major new colonies on the sugar-growing islands of Guadeloupe in 1634 and Martinique in 1635 in the Lesser Antilles. However, the heart of French activity in the Caribbean in the 17th century remained Tortuga, the fortified island haven off the coast of Hispaniola for privateers, buccaneers and outright pirates. The main French colony on the rest of Hispaniola remained the settlement of Petit-Goâve, which was the French toehold that would develop into the modern state of Haiti. French privateers still used the tent city anchorages in the Florida Keys to plunder the Spaniards' shipping in the Florida Channel, as well as to raid the shipping that plied the sealanes off the northern coast of Cuba.
For the Dutch in the 17th century Caribbean, the island of Curaçao was the equivalent of England's port at Barbados. This large, rich, well-defended free port, open to the ships of all the European states, offered good prices for sugar that was re-exported to Europe and also sold large quantities of manufactured goods in return to the colonists of every nation in the New World. A second Dutch-controlled free port had also developed on the island of Sint Eustatius, which was settled in 1636. The constant back-and-forth warfare between the Dutch and the English for possession of it in the 1660s later damaged the island's economy and desirability as a port. The Dutch also had set up a settlement on the island of Saint Martin which became another haven for Dutch sugar planters and their African slave labor. In 1648, the Dutch agreed to divide the prosperous island in half with the French.
The Golden Age of Piracy, 1660–1726
The late 17th and early 18th centuries (particularly the years 1716 to 1726) are often considered the "Golden Age of Piracy" in the Caribbean, and pirate ports experienced rapid growth in and around the Atlantic and Indian Oceans. During this period approximately 2,400 men were active as pirates. The military power of the Spanish Empire in the New World started to decline when King Philip IV of Spain was succeeded by King Charles II (r. 1665–1700), who in 1665 became the last Habsburg king of Spain at the age of four. While Spanish America in the late 17th century had little military protection as Spain entered a phase of decline as a Great Power, its economy also suffered less interference from the Spanish Crown's mercantilist policies. This lack of interference, combined with a surge in output from the silver mines due to the increased availability of slave labor (the demand for sugar increased the number of slaves brought to the Caribbean), began a resurgence in the fortunes of Spanish America.
England, France and the Dutch Netherlands had all become New World colonial powerhouses in their own right by 1660. Worried by the Dutch Republic's intense commercial success since the signing of the Treaty of Westphalia, England launched a trade war with the Dutch. The English Parliament passed the first of its own mercantilist Navigation Acts (1651) and the Staple Act (1663) that required that English colonial goods be carried only in English ships and legislated limits on trade between the English colonies and foreigners. These laws were aimed at ruining the Dutch merchants whose livelihoods depended on free trade. This trade war would lead to three outright Anglo-Dutch Wars over the course of the next twenty-five years. Meanwhile, King Louis XIV of France (r. 1642–1715) had finally assumed his majority with the death of his regent mother Queen Anne of Austria's chief minister, Cardinal Mazarin, in 1661. The "Sun King's" aggressive foreign policy was aimed at expanding France's eastern border with the Holy Roman Empire and led to constant warfare against shifting alliances that included England, the Dutch Republic, the various German states and Spain. In short, Europe was consumed in the final decades of the 17th century by nearly constant dynastic intrigue and warfare—an opportune time for pirates and privateers to engage in their bloody trade.
In the Caribbean, this political environment led colonial governors to face new threats from every direction. The Dutch sugar island of Sint Eustatius changed ownership ten times between 1664 and 1674 as the English and Dutch dueled for supremacy. Consumed with the various wars in Europe, the mother countries provided few further military reinforcements to their colonies, so the colonial governors of the Caribbean increasingly made use of buccaneers as mercenaries and privateers to guard their colonies or carry the fight to their mother country's current enemy. Surprisingly (or not), these undisciplined and greedy dogs of war often proved difficult for their sponsors to control.
By the late 17th century, the great Spanish towns of the Caribbean had begun to prosper and Spain also began to make a slow, fitful recovery, but remained poorly defended militarily because of Spain's problems and so were sometimes easy prey for pirates and privateers. The English presence continued to expand in the Caribbean as England itself was rising toward great power status in Europe. Captured from Spain in 1655, the island of Jamaica had been taken over by England and its chief settlement of Port Royal had become a new English buccaneer haven in the midst of the Spanish Empire. Jamaica was slowly transformed, along with Saint Kitts, into the heart of the English presence in the Caribbean. At the same time the French Lesser Antilles colonies of Guadeloupe and Martinique remained the main centers of French power in the Caribbean, as well as among the richest French possessions because of their increasingly profitable sugar plantations. The French also maintained privateering strongholds around western Hispaniola, at their traditional pirate port of Tortuga, and their Hispaniolan capital of Petit-Goâve. The French further expanded their settlements on the western half of Hispaniola and founded Léogâne and Port-de-Paix, even as sugar plantations became the primary industry for the French colonies of the Caribbean.
At the start of the 18th century, Europe remained riven by warfare and constant diplomatic intrigue. France was still the dominant power but now had to contend with a new rival, England (Great Britain after 1707), which emerged as a great power at sea and on land during the War of the Spanish Succession. But the depredations of the pirates and buccaneers in the Americas in the latter half of the 17th century and of similar mercenaries in Germany during the Thirty Years War had taught the rulers and military leaders of Europe that those who fought for profit rather than for King and Country could often ruin the local economy of the region they plundered, in this case the entire Caribbean. At the same time, the constant warfare had led the Great Powers to develop larger standing armies and bigger navies to meet the demands of global colonial warfare. By 1700 the European states had enough troops and ships at their disposal to begin better protecting the important colonies in the West Indies and in the Americas without relying on the aid of privateers. This spelled the doom of privateering and the easy (and nicely legal) life it provided for the buccaneer. Though Spain remained a weak power for the rest of the colonial period, pirates in large numbers generally disappeared after 1730, chased from the seas by a new English Royal Navy squadron based at Port Royal, Jamaica and a smaller group of Spanish privateers sailing from the Spanish Main known as the Guarda Costa (Coast Guard in English). With regular military forces now on-station in the West Indies, letters of marque were harder and harder to obtain.
Economically, the late 17th and early 18th centuries were a time of growing wealth and trade for all the nations who controlled territory in the Caribbean. Although some piracy would persist until the mid-18th century, the path to wealth in the Caribbean in the future lay through peaceful trade, the growing of tobacco, rice and sugar, and smuggling to avoid the British Navigation Acts and Spanish mercantilist laws. By the 18th century the Bahamas had become the new colonial frontier for the British. The port of Nassau became one of the last pirate havens. A small British colony had even sprung up in former Spanish territory at Belize in Honduras that had been founded by an English pirate in 1638. France's colonial empire in the Caribbean had not grown substantially by the start of the 18th century. The sugar islands of Guadeloupe and Martinique remained the twin economic capitals of the French Lesser Antilles, and were now equal in population and prosperity to the largest of England's Caribbean colonies. Tortuga had begun to decline in importance, but France's Hispaniolan settlements were becoming major importers of African slaves as French sugar plantations spread across the western coast of that island, forming the nucleus of the modern nation of Haiti.
The end of an era
The decline of piracy in the Caribbean paralleled the decline of the use of mercenaries and the rise of national armies in Europe. Following the end of the Thirty Years' War the direct power of the state in Europe expanded. Armies were systematized and brought under direct state control; the Western European states' navies were expanded and their mission was extended to cover combating piracy. The elimination of piracy from European waters spread to the Caribbean in the 18th century, to West Africa and North America by the 1710s, and by the 1720s even the Indian Ocean was a difficult location for pirates to operate in.
After 1720, piracy in the classic sense became extremely rare in the Caribbean as European military and naval forces, especially those of the Royal Navy, became too widespread and active for any pirate to pursue an effective career for long. By 1718, the British Royal Navy had approximately 124 vessels, rising to 214 by 1815, a large increase from the two vessels England had possessed in 1670. British Royal Navy warships tirelessly hunted down pirate vessels, and almost always won these engagements. Pirates caught by British forces had to be convicted on the testimony of witnesses and other hard evidence before they could be transferred to England. This was a lengthy and expensive process, so to speed it up seven commissioners drawn from colonial and naval officers were appointed to try all piracy-related cases. These new and faster ‘trials’ provided no legal representation for the pirates, and ultimately led in this era to the execution of some 600 pirates, roughly 10 percent of those active at the time in the Caribbean region. Piracy saw a brief resurgence between the end of the War of the Spanish Succession in 1713 and around 1720, as many unemployed seafarers took to piracy as a way to make ends meet when a surplus of sailors after the war led to a decline in wages and working conditions. At the same time, one of the terms of the Treaty of Utrecht that ended the war gave to Great Britain's Royal African Company and other British slavers a thirty-year asiento, or contract, to furnish African slaves to the Spanish colonies, providing British merchants and smugglers potential inroads into the traditionally closed Spanish markets in America and leading to an economic revival for the whole region. This revived Caribbean trade provided rich new pickings for a wave of piracy. Also contributing to the increase of Caribbean piracy at this time was Spain's breakup of the English logwood settlement at Campeche and the attractions of a freshly sunken silver fleet off the southern Bahamas in 1715.
This early 18th century resurgence of piracy lasted only until the Royal Navy and the Spanish Guardacosta's presence in the Caribbean were enlarged to deal with the threat. Also crucial to the end of this era of piracy was the loss of the pirates' last Caribbean safe haven at Nassau. It is in this period that the popular Pirates of the Caribbean film series produced by the Walt Disney Company is loosely set.
The famous pirates of the early 18th century were a completely illegal remnant of a golden buccaneering age, and their choices were limited to quick retirement or eventual capture. Contrast this with the earlier example of Henry Morgan, who for his privateering efforts was knighted by the English Crown and appointed the lieutenant governor of Jamaica.
In the early 19th century, piracy along the East and Gulf Coasts of North America as well as in the Caribbean increased again. Jean Lafitte was probably the greatest pirate/privateer of the time, operating in the Caribbean and in American waters from his havens in Texas and Louisiana during the 1810s. But the records of the US Navy indicate that hundreds of pirate attacks occurred in American and Caribbean waters between the years of 1820 and 1835. The Latin American Wars of Independence led to widespread use of privateers both by Spain and by the revolutionary governments of Mexico, Colombia, and other newly independent Latin American countries. These privateers were rarely scrupulous about adhering to the terms of their letters of marque even during the Wars of Independence, and continued to plague the Caribbean as outright pirates long after those conflicts ended.
About the time of the Mexican-American War in 1846, the United States Navy had grown strong and numerous enough to eliminate the pirate threat in the West Indies. By the 1830s, ships had begun to convert to steam propulsion, so the Age of Sail and the classical idea of pirates in the Caribbean ended. Privateering, similar to piracy, continued as an asset in war for a few more decades and proved to be of some importance during the naval campaigns of the American Civil War.
Privateering would remain a tool of European states, and even of the newborn United States, until the mid-19th century's Declaration of Paris. But letters of marque were given out much more sparingly by governments and were terminated as soon as conflicts ended. The idea of "no peace beyond the Line" was a relic that had no meaning by the more settled late 18th and early 19th centuries.
The Rules of Piracy
Aboard a pirate vessel things were fairly democratic, governed by “codes of conduct” that even reflect modern laws. Some of these rules imposed a dress code, banned women, and, on some ships, prohibited smoking. The rules, the punishments for breaking them, and even the living arrangements would be decided amongst everyone going on the ship before departure, a sharp contrast to the authoritarianism of the Royal Navy. In further contrast to the society of Britain’s colonies, on board a pirate vessel racial divisions were usually unknown, and in some instances pirates of African descent even served as ships' captains. Before the ship left the dock, each man also had to swear an oath not to betray any of his crewmates and to sign what was known as the ship's articles, which determined the percentage of the profit each crew member would receive. Disagreements amongst pirate crew members were settled by fighting until first blood or, in more serious cases, by abandoning the offender on an uninhabited island, whipping him 39 times, or even execution by firearm. Despite popular belief, however, walking the plank into the open ocean was never actually used as a way of settling disputes amongst pirates. There was, however, a division of power on a pirate crew between the captain, the quartermaster, the governing council for the vessel, and the regular crewmen; but in battle the pirate captain always retained ultimate decision-making power to ensure an orderly chain of command. When it came time to split the captured wealth into shares, profits were normally allotted by rank as follows: captain (5-6 shares), senior positions like the quartermaster (2 shares), crewmen (1 share), and junior positions (1/2 a share).
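As a rough illustration of how such a share scheme divides a prize, the sketch below computes the value of one share for a hypothetical crew; the crew sizes and the prize value are invented, and the captain is taken at 6 shares from the 5-6 range given above.

    # Illustrative only: dividing a captured prize under the share scheme above.
    SHARES_BY_RANK = {
        "captain": 6.0,        # 5-6 shares in the scheme; 6 assumed here
        "quartermaster": 2.0,  # senior position
        "crewman": 1.0,
        "junior": 0.5,
    }
    crew_counts = {"captain": 1, "quartermaster": 1, "crewman": 60, "junior": 10}  # hypothetical crew
    prize_value = 10_000.0     # hypothetical value of the captured cargo

    total_shares = sum(SHARES_BY_RANK[rank] * n for rank, n in crew_counts.items())  # 73 shares
    one_share = prize_value / total_shares                                           # about 137 per share

    # Payout per individual of each rank.
    payouts = {rank: SHARES_BY_RANK[rank] * one_share for rank in crew_counts}
    print(round(one_share, 2), {rank: round(v, 2) for rank, v in payouts.items()})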
Famous Caribbean pirates
Born in Vatteville and financed by shipowner Jean Ango, French privateer Jean Fleury was Spain's nemesis. In 1522, he captured seven Spanish vessels. One year later most of Montezuma's Aztec treasure fell into his hands after he captured two of the three galleons in which Cortez shipped the fabled booty back to Spain. He was captured in 1527 and executed by order of Holy Roman Emperor Charles V.
François Le Clerc
François Le Clerc also nicknamed "Jambe de bois" ("Pie de Palo", "wooden leg") was a formidable privateer, ennobled by Henri II in 1551. In 1552, Le Clerc ransacked Porto Santo. One year later, he mustered one thousand men and caused havoc in the Caribbean with his lieutenants Jacques de Sores and Robert Blondel. They pillaged and burned down the seaport of Santo Domingo, and ransacked Las Palmas in the Canary Islands on his way back to France. He led another expedition in 1554 and plundered Santiago de Cuba.
Blackbeard
He was born about 1680 in England as Edward Thatch, Teach, or Drummond, and operated off the east coast of North America, particularly pirating in the Bahamas, and had a base in North Carolina in the period 1714–1718. Noted as much for his outlandish appearance as for his piratical success, in combat Blackbeard placed burning slow-match (a type of slow-burning fuse used to set off cannon) under his hat; with his face wreathed in fire and smoke, his victims claimed he resembled a fiendish apparition from Hell. Blackbeard's ship was the two hundred ton, forty-gun frigate he named the Queen Anne's Revenge.
Blackbeard met his end at the hands of a British Royal Navy force specifically sent out to capture him. After an extremely bloody boarding action, the British commanding officer of the force, Lieutenant Robert Maynard, killed him with the help of his crew. According to legend, Blackbeard suffered a total of five bullet wounds and twenty slashes with a cutlass before he finally died off the coast of Ocracoke, North Carolina.
Henry Morgan, a Welshman, was one of the most destructive pirate captains of the 17th century. Although Morgan always considered himself a privateer rather than a pirate, several of his attacks had no real legal justification and are considered piracy. One of Captain Morgan’s “30-cannon oak ships,” thought to have aided the buccaneer in his ventures, was recently found off the coast of what is now the nation of Haiti. Port Royal, Jamaica, served as Morgan's Caribbean headquarters. A bold, ruthless and daring man, Morgan fought England's enemies for thirty years, and became a very wealthy man in the course of his adventures. Morgan's most famous exploit came in late 1670 when he led 1700 buccaneers up the pestilential Chagres River and then through the Central American jungle to attack and capture the "impregnable" city of Panama. Morgan's men burnt the city to the ground, and the inhabitants were either killed or forced to flee. Although the burning of Panama City did not mean any great financial gain for Morgan, it was a deep blow to Spanish power and pride in the Caribbean and Morgan became the hero of the hour in England. At the height of his career, Morgan had been made a titled nobleman by the English Crown and lived on an enormous sugar plantation in Jamaica, as lieutenant governor. Morgan died in his bed, rich and respected—something rarely achieved by pirates in his day or any other.
Bartholomew Roberts, or Black Bart, sank, or captured and pillaged, some 400 ships, and like most pirate captains of the time he cut a flamboyant figure while doing it. He started his freebooting career in the Gulf of Guinea in February 1719 when Howell Davis' pirates captured his ship and he proceeded to join them. Rising to captain, he quickly came to the Caribbean and plagued the area until 1722. He commanded a number of large, powerfully armed ships, all of which he named Fortune, Good Fortune, or Royal Fortune. Aboard his vessels the political atmosphere was a participatory form of democracy, with a rule that everyone aboard had to vote on issues that arose. Efforts by the governors of Barbados and Martinique to capture him only provoked his anger; when he found the governor of Martinique aboard a newly captured vessel, Roberts hanged the man from a yardarm. Roberts returned to Africa in February 1722, where he met his death in a naval battle in which his crew was captured.
Stede Bonnet
Probably the least qualified pirate captain ever to sail the Caribbean, Stede Bonnet was a sugar planter who knew nothing about sailing. He started his piracies in 1717 by buying an armed sloop on Barbados and recruiting a pirate crew for wages, possibly to escape from his wife. He lost his command to Blackbeard and sailed with him as his associate. Although Bonnet briefly regained his captaincy, he was captured in 1718 by a privateering vessel employed by South Carolina.
Charles Vane, like many early 18th century pirates, operated out of Nassau in the Bahamas. He was the only pirate captain to resist Woodes Rogers when Rogers asserted his governorship over Nassau in 1718, attacking Rogers' squadron with a fire ship and shooting his way out of the harbor rather than accept the new governor's royal pardon. Vane's quartermaster was Calico Jack Rackham, who deposed Vane from the captaincy. Vane started a new pirate crew, but he was captured and hanged in Jamaica in 1720.
Edward - or Ned - Low was notorious as one of the most brutal and vicious pirates. Originally from London, he started as a lieutenant to George Lowther, before striking out on his own. His career as a pirate lasted just three years, during which he captured over 100 ships, and he and his crew murdered, tortured and maimed hundreds of people. After his own crew mutinied in 1724 when Low murdered a sleeping subordinate, he was picked up by a French vessel, whose crew hanged him on Martinique.
Anne Bonny and Mary Read
Anne Bonny and Mary Read were two of the few infamous women pirates of the 18th century; both spent their brief sea-roving careers under the command of Calico Jack Rackham. They are noted chiefly for their gender, highly unusual for pirates, which helped to sensationalize their October 1720 trial in Jamaica. They gained further notoriety for their ruthlessness—they are known to have spoken in favor of murdering witnesses in the crew's counsels—and for fighting the boarders of Rackham’s vessel while he and his crew members were drunk and hiding below deck. The capstone to their legend is that all the crew, including Rackham, Anne and Mary, were tried at Spanish Town, close to Port Royal. Rackham and his crew were hanged; but when the judge sentenced Anne and Mary to death he asked if they had anything to say. "Milord, we plead our bellies," they replied, meaning that they declared themselves pregnant. The judge immediately postponed their death sentence because no English court had the authority to kill an unborn child. Read died in prison of fever before the birth of the child. There is no record of Anne being executed; it was rumored her wealthy father had paid a ransom and taken her home, while other stories hold that she returned to piracy or became a nun.
In the Caribbean the use of privateers was especially popular for what amounted to legal and state-ordered piracy. The cost of maintaining a fleet to defend the colonies was beyond the means of national governments of the 16th and 17th centuries. Private vessels would be commissioned into a 'navy' with a letter of marque, paid with a substantial share of whatever they could capture from enemy ships and settlements, the rest going to the crown. These ships would operate independently or as a fleet, and if successful the rewards could be great—when Jean Fleury and his men captured Cortes' vessels in 1523, they found the incredible Aztec treasure that they were allowed to keep. Later, when Francis Drake captured the Spanish Silver Train at Nombre de Dios (Panama's Caribbean port at the time) in 1573 his crews were rich for life. This was repeated by Piet Hein in 1628, who made a profit of 12 million guilders for the Dutch West India Company. This substantial profit made privateering something of a regular line of business; wealthy businessmen or nobles would be quite willing to finance this legitimized piracy in return for a share. The sale of captured goods was a boost to colonial economies as well. The main imperial powers operating in the region at this time were France, England, Spain, the Netherlands and Portugal. Privateers from each country were ordered to attack the other countries' vessels, especially those of Spain, which was a shared enemy of the other powers. By the seventeenth century piracy and privateering had become less acceptable, especially because many privateers turned into full-blown pirates so that they would not have to hand over part of their profit to their country of employment. Corruption led to the removal of many officials over the years, including Governor Nicholas Trott and Governor Benjamin Fletcher. One way that governments found and disposed of active pirates and corrupt privateers was through the use of “pirate hunters”, who were rewarded with all or at least most of the wealth they found aboard pirate vessels, along with a set bounty. The most renowned pirate hunter was Captain William Kidd, who reached the peak of his legal career in 1695 but later saw the benefits of piracy and turned to it himself.
Pirates involved specifically in the Caribbean were called buccaneers. Roughly speaking, they arrived in the 1630s and remained until the effective end of piracy in the 1730s. The original buccaneers were setter’s that were deprived of their land by “Spanish authorities” and eventually were picked up by white settlers. The word "buccaneer" is actually from the French boucaner, meaning "to smoke meat", from the hunters of wild oxen curing meat over an open fire. They transferred the skills which kept them alive into piracy. They operated with the partial support of the non-Spanish colonies and until the 18th century their activities were legal, or partially legal and there were irregular amnesties from all nations. For the most part buccaneers attacked other vessel and ransacked settlements owned by the Spanish.
Traditionally buccaneers had a number of peculiarities. Their crews operated as a democracy: the captain was elected by the crew and they could vote to replace him. The captain had to be a leader and a fighter—in combat he was expected to be fighting with his men, not directing operations from a distance.
Spoils were evenly divided into shares; when the officers had a greater number of shares, it was because they took greater risks or had special skills. Often the crews would sail without wages—"on account"—and the spoils would be built up over a course of months before being divided. There was a strong esprit de corps among pirates. This allowed them to win sea battles: they typically outmanned trade vessels by a large ratio. There was also for some time a social insurance system, guaranteeing money or gold for battle wounds at a worked-out scale.
The romantic notion of pirates burying treasure on isolated islands and wearing gaudy clothes had some basis in fact. Most pirate wealth was accumulated by selling of chandlery items: ropes, sails, and block and tackle stripped from captured ships.
One undemocratic aspect of the buccaneers was that sometimes they would force specialists like carpenters or surgeons to sail with them for some time, though they were released when no longer needed (if they had not volunteered to join by that time). Note also that a typical poor man had few other promising career choices at the time apart from joining the pirates. According to reputation, the pirates' egalitarianism led them to liberate slaves when taking over slave ships. However there are several accounts of pirates selling slaves captured on slave ships, sometimes after they had helped man the pirates' own vessels.
In combat they were considered ferocious and were reputed to be experts with flintlock weapons (invented in 1615), but these were so unreliable that they were not in widespread military use before the 1670s.
Many slaves, primarily from places in Africa were being exported to colonies in the Caribbean for slave labour in implantations. Out of the people that were forced into slavery and shipped off to colonies, 9 to 32 percent were children from the years 1673-1798 (this number only considers Great Britain’s exports). While on the average 12 week journey to the colonies the new slaves endured ghastly living condition that included: too small of space to even be able to stand, the temperatures were hot, diet was poor, and disease and death flourished. Before many slaves became slave they were already victims and/or prisoners of civil war. many aspects of being a slave overall increased the allure of the pirating lifestyle. During the 17th and 18th century, pirating was at a height and its symbolic interpretation of freedom peaked. This abstract ideal was something that was very appealing to the slaves and victims of imperialism. Although the main European powers did not want slaves to find out about the opportunity for freedom that piracy offered, still “...30 percent of the 5000 or more pirates who were active between 1715 and 1725 were of African heritage.” Along with the opportunity of a new life and freedom, the indigenous people of Africa experience ahead of its time equality when individuals joined pirating communities. In which many slave turn pirates “secured” a position of leadership or prestige on pirating vessels, like that of Captain. One of the main areas of origin for many slaves was Madagascar; one of the largest importers of slaves to American colonies such as in Jamaica and Barbados was Great Britain.
Roberto Cofresí—a 19th-century pirate
Roberto Cofresí, better known as "El Pirata Cofresí", became interested in sailing at a young age. By the time he reached adulthood there were some political and economic difficulties in Puerto Rico, which at the time was a colony of Spain. Influenced by this situation he decided to become a pirate in 1818. Cofresí commanded several assaults against cargo vessels focusing on those that were responsible for exporting gold. During this time he focused his attention on boats from the United States and the local Spanish government ignored several of these actions. On March 2, 1825, Cofresí engaged the USS Grampus and a flotilla of ships led by Capt. John D. Sloat in battle. He eventually abandoned his ship and tried to escape by land before being captured. After being imprisoned he was sent to San Juan, Puerto Rico, where a brief military trial found him guilty and on March 29, 1825, he and other members of his crew were executed by a firing squad. After his death his life was used as inspiration for several stories and myths, which served as the basis for books and other media.
Boysie Singh—a 20th century pirate
John Boysie Singh, usually known as "the Rajah", "Boysie" or "Boysie Singh", was born on 5 April 1908 in Woodbrook, Port of Spain, Trinidad, and finally hanged in Port of Spain in 1957 for the murder of his niece, Hattie Werk.
He had a long and successful career as a gangster and gambler before turning to piracy and murder. For almost ten years, from 1947 until 1956 he and his gang terrorized the waters between Trinidad and Venezuela. They were responsible for the deaths of approximately 400 people. They would promise to ferry people from Trinidad to Venezuela but en route he would rob his victims at gunpoint, kill them and dump them into the sea.
Boysie was well-known to people in Trinidad and Tobago. He had successfully beaten a charge of breaking and entering which nearly resulted in his deportation before he was finally executed after losing his third case - for the murder of his niece. He was held in awe and dread by most of the population and was frequently seen strolling grandly about Port of Spain in the early 1950s wearing bright, stylish clothes. Mothers and nannies would warn their charges: "Behave yourself, man, or Boysie goyn getchu, oui!"
Piracy in popular culture
- Many silent films of pirates, especially starring Douglas Fairbanks, such as The Black Pirate
- Captain Blood (1935)
- Treasure Island
- Return to Treasure Island
- Swashbuckler (1976)
- Cutthroat Island
- Pirates of the Caribbean films
- Pirates: The Blood Brothers (Caraibi)
- Muppet Treasure Island
- Nate and Hayes, also known as Savage Islands
- Yellowbeard (1983)
- A General History of the Pyrates by Charles Johnson, the prime source for the biographies of many well known pirates, giving an almost mythical status to the more colorful characters, such as the infamous English pirates Blackbeard and Calico Jack, and influenced pirate literature that followed.
- Treasure Island by Robert Louis Stevenson—a novel with a huge influence on pirates in the public imagination, particularly in the character of the quintessential pirate, Long John Silver
- Captain Blood by Rafael Sabatini, a novel chronicling the adventures of Peter Blood, M.D., wrongly convicted of aiding Monmouth's Rebellion and turned pirate during the reign of James II.
- The Black Corsair (Il Corsaro Nero, 1898) by Emilio Salgari and its 4 sequels.
- "Pirates!" by Celia Rees a novel about young Nancy and her half sister Minerva who find themselves hunted by the authorities and are rescued by pirates.
- The Princess Bride by William Goldman
- On Stranger Tides by Tim Powers - pirates, voodoo, zombies, and the Fountain of Youth.
- Empire of Blue Water by Stephan Talty - The story of Captain Morgan and the real pirates of the Caribbean.
- Pirate Latitudes - a posthumous novel by Michael Crichton
- In the Time Machine series, the fourth book, Sail with Pirates, had the protagonist searching for a treasure ship that sank in the Caribbean and having to defeat the pirates of the region.
- "To Catch A Pirate" by Jade Parker
- The Pyrates by George MacDonald Fraser—a comedic novel tracing the adventures of Captain Benjamin Avery (RN) multiple damsels in distress, and the six captains who lead the infamous Coast Brotherhood (Calico Jack Rackham, Black Bilbo, Firebeard, Happy Dan Pew, Akbar the Terrible and Sheba the She-Wolf).
- Pirates of the Spanish Main, a tabletop game
- The theme park attraction: Pirates of the Caribbean.
- Pirates of the Caribbean
- Piracy in the British Virgin Islands
- Jolly Roger, the traditional pirate flag
- Pirate code of the Brethren
- Piracy in Somalia
- Piracy in the Atlantic World
- Campo-Flores/ Arian, “Yar, Mate! Swashbuckler Tours!,” Newsweek 180, no. 6 (2002): 58.
- Smith, Simon. "Piracy in early British America." History Today 46, no. 5 (May 1996): 29.
- Types of Pirates:The Buccaneers
- Bartolome de Las Casas, The Devastation of the Indies: A Brief Account (1542)
- Morgan, Kenneth. “Symbiosis: Trade and the British Empire.” BBC. Accessed February 17, 2011. http://www.bbc.co.uk/history/british/empire_seapower/trade_empire_01.shtml.
- "Recife," Columbia Electronic Encyclopedia, 6Th Edition (2011): 1.
- Boot, Max (2009). "Pirates, Then and Now". Foreign Affairs 88 (4): 94–107.
- "The real Pirates of the Caribbean." USA Today Magazine 137, no. 2764 (January 2009): 42-47.
- Leeson/ Peter "Democrats of the Caribbean," Atlantic Monthly (10727825) 300, no. 3 (2007): 39.
- “Pirate Shipwreck,” Maclean’s 114, no. 30 (2001): 12.
- Highleyman/ Liz. "Who Were Anne Bonny and Mary Read?," Lesbian News 32, no. 11 (2007): 18.
- Teelucksingh, Jerome. "The ‘invisible child’ in British West Indian slavery." Slavery & Abolition 27, no. 2
- Farley/ Christopher, “The Black faces beneath black flags,” New York Amsterdam News, July 7, 2005.
- Bialuschewski, Arne, “PIRATES, SLAVERS, AND THE INDIGENOUS POPULATION
- Luis R. Negrón Hernández, Jr. "Roberto Cofresí: El pirata caborojeño" (in Spanish). Retrieved 2007-05-25.
- Derek Bickerton. The Murders of Boysie Singh: Robber, Arsonist, Pirate, Mass-Murderer, Vice and Gambling King of Trinidad. Arthur Barker Limited, London. (1962).
- 2004 vs. 2007 global piracy summary, The Economist, published 23 Apr 2008, accessed 2008-04-28.
- The Golden Age of Piracy, and its origins in class struggle - on peopleshistory.co.uk
- Pirates of the Caribbean, In Fact and Fiction- from BlindKat Publishers | http://pediaview.com/openpedia/Piracy_in_the_Caribbean | 13 |
103 | The Gini coefficient (also known as the Gini index or Gini ratio) is a measure of statistical dispersion developed by the Italian statistician and sociologist Corrado Gini and published in his 1912 paper "Variability and Mutability" (Italian: Variabilità e mutabilità).
The Gini coefficient measures the inequality among values of a frequency distribution (for example levels of income). A Gini coefficient of zero expresses perfect equality, where all values are the same (for example, where everyone has an exactly equal income). A Gini coefficient of one (100 on the percentile scale) expresses maximal inequality among values (for example where only one person has all the income). However, a value greater than one may occur if some persons have negative income or wealth. For larger groups, values close to or above 1 are very unlikely in practice however.
Gini coefficient is commonly used as a measure of inequality of income or wealth. For OECD countries, in the late 2000s, considering the effect of taxes and transfer payments, the income Gini coefficient ranged between 0.24 to 0.49, with Slovenia the lowest and Chile the highest. The countries in Africa had the highest pre-tax Gini coefficients in 2008–2009, with South Africa the world's highest at 0.7. The global income inequality Gini coefficient in 2005, for all human beings taken together, has been estimated to be between 0.61 and 0.68 by various sources.
There are some issues in interpreting a Gini coefficient. The same value may result from many different distribution curves. The demographic structure should be taken into account. Countries with an aging population, or with a baby boom, experience an increasing pre-tax Gini coefficient even if real income distribution for working adults remain constant. Scholars have devised over a dozen variants of the Gini coefficient.
The Gini coefficient is usually defined mathematically based on the Lorenz curve, which plots the proportion of the total income of the population (y axis) that is cumulatively earned by the bottom x% of the population (see diagram). The line at 45 degrees thus represents perfect equality of incomes. The Gini coefficient can then be thought of as the ratio of the area that lies between the line of equality and the Lorenz curve (marked A in the diagram) over the total area under the line of equality (marked A and B in the diagram); i.e., G = A / (A + B).
If all people have non-negative income (or wealth, as the case may be), the Gini coefficient can theoretically range from 0 to 1; it is sometimes expressed as a percentage ranging between 0 and 100. In practice, both extreme values are not quite reached. If negative values are possible (such as the negative wealth of people with debts), then the Gini coefficient could theoretically be more than 1. Normally the mean (or total) is assumed positive, which rules out a Gini coefficient less than zero.
A low Gini coefficient indicates a more equal distribution, with 0 corresponding to complete equality, while higher Gini coefficients indicate more unequal distribution, with 1 corresponding to complete inequality. When used as a measure of income inequality, the most unequal society (assuming no negative incomes) will be one in which a single person receives 100% of the total income and the remaining people receive none (G = 1−1/N); and the most equal society will be one in which every person receives the same income (G = 0).
An alternative approach would be to consider the Gini coefficient as half of the relative mean difference, which is a mathematical equivalence. The mean difference is the average absolute difference between two items selected randomly from a population, and the relative mean difference is the mean difference divided by the average, to normalize for scale.
The Gini index is defined as a ratio of the areas on the Lorenz curve diagram. If the area between the line of perfect equality and the Lorenz curve is A, and the area under the Lorenz curve is B, then the Gini index is A / (A + B). Since A + B = 0.5, the Gini index is G = 2 * A or G = 1 – 2 B.
If the Lorenz curve is represented by the function Y = L (X), the value of B can be found with integration and:
In some cases, this equation can be applied to calculate the Gini coefficient without direct reference to the Lorenz curve. For example (taking y to mean the income or wealth of a person or household):
- For a population uniform on the values yi, i = 1 to n, indexed in non-decreasing order (yi ≤ yi+1):
- This may be simplified to:
- This formula actually applies to any real population, since each person can be assigned his or her own yi.
- For a discrete probability function f(y), where yi, i = 1 to n, are the points with nonzero probabilities and which are indexed in increasing order (yi < yi+1):
- For a cumulative distribution function F(y) that has a mean μ and is zero for all negative values of y:
- (This formula can be applied when there are negative values if the integration is taken from minus infinity to plus infinity.)
- Since the Gini coefficient is half the relative mean difference, it can also be calculated using formulas for the relative mean difference. For a random sample S consisting of values yi, i = 1 to n, that are indexed in non-decreasing order (yi ≤ yi+1), the statistic:
- is a consistent estimator of the population Gini coefficient, but is not, in general, unbiased. Like G, G (S) has a simpler form:
There does not exist a sample statistic that is in general an unbiased estimator of the population Gini coefficient, like the relative mean difference.
For some functional forms, the Gini index can be calculated explicitly. For example, if y follows a lognormal distribution with the standard deviation of logs equal to , then where is the cumulative distribution function of the standard normal distribution.
Sometimes the entire Lorenz curve is not known, and only values at certain intervals are given. In that case, the Gini coefficient can be approximated by using various techniques for interpolating the missing values of the Lorenz curve. If (Xk, Yk) are the known points on the Lorenz curve, with the Xk indexed in increasing order (Xk – 1 < Xk), so that:
- Xk is the cumulated proportion of the population variable, for k = 0,...,n, with X0 = 0, Xn = 1.
- Yk is the cumulated proportion of the income variable, for k = 0,...,n, with Y0 = 0, Yn = 1.
- Yk should be indexed in non-decreasing order (Yk > Yk – 1)
If the Lorenz curve is approximated on each interval as a line between consecutive points, then the area B can be approximated with trapezoids and:
is the resulting approximation for G. More accurate results can be obtained using other methods to approximate the area B, such as approximating the Lorenz curve with a quadratic function across pairs of intervals, or building an appropriately smooth approximation to the underlying distribution function that matches the known data. If the population mean and boundary values for each interval are also known, these can also often be used to improve the accuracy of the approximation.
The Gini coefficient calculated from a sample is a statistic and its standard error, or confidence intervals for the population Gini coefficient, should be reported. These can be calculated using bootstrap techniques but those proposed have been mathematically complicated and computationally onerous even in an era of fast computers. Ogwang (2000) made the process more efficient by setting up a “trick regression model” in which the incomes in the sample are ranked with the lowest income being allocated rank 1. The model then expresses the rank (dependent variable) as the sum of a constant A and a normal error term whose variance is inversely proportional to yk;
Ogwang showed that G can be expressed as a function of the weighted least squares estimate of the constant A and that this can be used to speed up the calculation of the jackknife estimate for the standard error. Giles (2004) argued that the standard error of the estimate of A can be used to derive that of the estimate of G directly without using a jackknife at all. This method only requires the use of ordinary least squares regression after ordering the sample data. The results compare favorably with the estimates from the jackknife with agreement improving with increasing sample size. The paper describing this method can be found here: http://web.uvic.ca/econ/ewp0202.pdf
However it has since been argued that this is dependent on the model’s assumptions about the error distributions (Ogwang 2004) and the independence of error terms (Reza & Gastwirth 2006) and that these assumptions are often not valid for real data sets. It may therefore be better to stick with jackknife methods such as those proposed by Yitzhaki (1991) and Karagiannis and Kovacevic (2000). The debate continues.
where u is mean income of the population, Pi is the income rank P of person i, with income X, such that the richest person receives a rank of 1 and the poorest a rank of N. This effectively gives higher weight to poorer people in the income distribution, which allows the Gini to meet the Transfer Principle. Note that the Deaton formulation rescales the coefficient so that its value is 1 if all the are zero except one.
Gini coefficients of representative income distributions
|Income Distribution Function||Gini Coefficient (rounded)|
|y = 1 for all x||0.0|
|y = x⅓||0.143|
|y = x½||0.200|
|y = x + b (b = 10% of max income)||0.273|
|y = x + b (b = 5% of max income)||0.302|
|y = x||0.333|
|y = x2||0.500|
|y = x3||0.600|
Given the normalization of both the cumulative population and the cumulative share of income used to calculate the Gini coefficient, the measure is not overly sensitive to the specifics of the income distribution, but rather only on how incomes vary relative to the other members of a population. The exception to this is in the redistribution of wealth resulting in a minimum income for all people. When the population is sorted, if their income distribution were to approximate a well known function, then some representative values could be calculated. Some representative values of the Gini coefficient for income distributions approximated by some simple functions are tabulated below.
While the income distribution of any particular country need not follow such simple functions, these functions give a qualitative understanding of the income distribution in a nation given the Gini coefficient. The effects of minimum income policy due to redistribution can be seen in the linear relationships above.
Generalized inequality index
The Gini coefficient and other standard inequality indices reduce to a common form. Perfect equality—the absence of inequality—exists when and only when the inequality ratio, , equals 1 for all j units in some population (for example, there is perfect income equality when everyone’s income equals the mean income , so that for everyone). Measures of inequality, then, are measures of the average deviations of the from 1; the greater the average deviation, the greater the inequality. Based on these observations the inequality indices have this common form:
where pj weights the units by their population share, and f(rj) is a function of the deviation of each unit’s rj from 1, the point of equality. The insight of this generalised inequality index is that inequality indices differ because they employ different functions of the distance of the inequality ratios (the rj) from 1.
Gini coefficient of income distributions
Gini coefficients of income are calculated on market income as well as disposable income basis. The Gini coefficient on market income – sometimes referred to as pre-tax Gini index – is calculated on income before taxes and transfers, and it measures inequality in income without considering the effect of taxes and social spending already in place in a country. The Gini coefficient on disposable income – sometimes referred to as after-tax Gini index – is calculated on income after taxes and transfers, and it measures inequality in income after considering the effect of taxes and social spending already in place in a country.
The difference in Gini indices between OECD countries, on after-taxes and transfers basis, is significantly narrower.[page needed] For OECD countries, over 2008–2009 period, Gini coefficient on pre-taxes and transfers basis for total population ranged between 0.34 to 0.53, with South Korea the lowest and Italy the highest. Gini coefficient on after-taxes and transfers basis for total population ranged between 0.25 to 0.48, with Denmark the lowest and Mexico the highest. For United States, the country with the largest population in OECD countries, the pre-tax Gini index was 0.49, and after-tax Gini index was 0.38, in 2008–2009. The OECD averages for total population in OECD countries was 0.46 for pre-tax income Gini index and 0.31 for after-tax income Gini Index. Taxes and social spending that were in place in 2008–2009 period in OECD countries significantly lowered effective income inequality, and in general, "European countries — especially Nordic and Continental welfare states — achieve lower levels of income inequality than other countries."
Using the Gini can help quantify differences in welfare and compensation policies and philosophies. However it should be borne in mind that the Gini coefficient can be misleading when used to make political comparisons between large and small countries or those with different immigration policies (see limitations of Gini coefficient section).
The Gini index for the entire world has been estimated by various parties to be between 0.61 and 0.68. The graph shows the values expressed as a percentage, in their historical development for a number of countries.
US income Gini indices over time
|Gini indexes – before and after taxes between 1980 and 2010|
Taxes and social spending in most countries have significant moderating effect on income inequality Gini indices.
For the late 2000s, the United States had the 4th highest measure of income inequality out of the 34 OECD countries measured, after taxes and transfers had been taken into account. The table below presents the Gini indices for household income, without including the effect of taxes and transfers, for the United States at various times, according to the US Census Bureau. The Gini values are a national composite, with significant variations in Gini between the states. The states of Utah, Alaska and Wyoming have a pre-tax income inequality Gini coefficient that is 10% lower than the U.S. average, while Washington D.C. and Puerto Rico 10% higher. After including the effects of federal and state taxes, the U.S. Federal Reserve estimates 34 states in the USA have a Gini coefficient between 0.30 and 0.35, with the state of Maine the lowest. At the county and municipality levels, the pre-tax Gini index ranged from 0.21 to 0.65 in 2010 across the United States, according to Census Bureau estimates.
|1967||0.397||(first year reported)|
Regional income Gini indices
According to UNICEF, Latin America and the Caribbean region had the highest net income Gini index in the world at 48.3, on unweighted average basis in 2008. The remaining regional averages were: sub-Saharan Africa (44.2), Asia (40.4), Middle East and North Africa (39.2), Eastern Europe and Central Asia (35.4), and High-income Countries (30.9). Using the same method, the United States is claimed to have a Gini index of 36, while South Africa had the highest income Gini index score of 67.8.
World income Gini index since 1800s
The table below presents the estimated world income Gini index over the last 200 years, as calculated by Milanovic. Taking income distribution of all human beings, the worldwide income inequality has been constantly increasing since the early 19th century. There was a steady increase in global income inequality Gini score from 1820 to 2002, with a significant increase between 1980 and 2002. This trend appears to have peaked and begun a reversal with rapid economic growth in emerging economies, particularly in the large populations of BRIC countries.
|Year||World Gini index|
If we consider the population size of every country, which is a more accurate method, the world Gini index has been falling since the early 1960s. In 1962 it was 0.57, in 2000 0.50.
Gini coefficient is widely used in fields as diverse as sociology, economics, health science, ecology, engineering and agriculture. For example, in social sciences and economics, in addition to income Gini coefficients, scholars have published education Gini coefficients and opportunity Gini coefficients.
Gini coefficient of education
Education Gini index estimates the inequality in education for a given population. It is used to discern trends in social development through educational attainment over time. From a study of 85 countries, Thomas et al. estimate Mali had the highest education Gini index of 0.92 in 1990 (implying very high inequality in education attainment across the population), while the United States had the lowest education inequality Gini index of 0.14. Between 1960 and 1990, South Korea, China and India had the fastest drop in education inequality Gini Index. They also claim education Gini index for the United States slightly increased over the 1980 – 1990 period.
Gini coefficient of opportunity
Similar in concept to income Gini coefficient, opportunity Gini coefficient measures inequality of opportunity. The concept builds on Amartya Sen's suggestion that inequality coefficients of social development should be premised on the process of enlarging people’s choices and enhancing their capabilities, rather than process of reducing income inequality. Kovacevic in a review of opportunity Gini coefficient explains that the coefficient estimates how well a society enables its citizens to achieve success in life where the success is based on a person’s choices, efforts and talents, not his background defined by a set of predetermined circumstances at birth, such as, gender, race, place of birth, parent's income and circumstances beyond the control of that individual.
Gini coefficients and income mobility
In 1978, A. Shorrocks introduced a measure based on income Gini coefficients to estimate income mobility. This measure, generalized by Maasoumi and Zandvakili, is now generally referred to as Shorrocks index, sometimes as Shorrocks mobility index or Shorrocks rigidity index. It attempts to estimate whether the income inequality Gini coefficient is permanent or temporary, and to what extent a country or region enables economic mobility to its people so that they can move from one (e.g. bottom 20%) income quantile to another (e.g. middle 20%) over time. In other words, Shorrocks index compares inequality of short-term earnings such as annual income of households, to inequality of long-term earnings such as 5-year or 10-year total income for same households.
Shorrocks index is calculated in number of different ways, a common approach being from the ratio of income Gini coefficients between short-term and long-term for the same region or country.
A 2010 study using social security income data for the United States since 1937 and Gini-based Shorrocks indexes concludes that its income mobility has had a complicated history, primarily due to mass influx of women into the country's labor force after World War II. Income inequality and income mobility trends have been different for men and women workers between 1937 and the 2000s. When men and women are considered together, the Gini coefficient-based Shorrocks Index trends imply long-term income inequality has been substantially reduced among all workers, in recent decades for the United States. Other scholars, using just 1990s data or other short periods have come to different conclusions. For example, Sastre and Ayala, conclude from their study of income Gini coefficient data between 1993 and 1998 for six developed economies, that France had the least income mobility, Italy the highest, and the United States and Germany intermediate levels of income mobility over those 5 years.
Features of Gini coefficient
Gini coefficient has features that make it useful as a measure of dispersion in a population, and inequalities in particular. It is a ratio analysis method making it easier to interpret. It also avoids references to a statistical average or position unrepresentative of most of the population, such as per capita income or gross domestic product. For a given time interval, Gini coefficient can therefore be used to compare diverse countries and different regions or groups within a country; for example states, counties, urban versus rural areas, gender and ethnic groups. Gini coefficients can be used to compare income distribution over time, thus it is possible to see if inequality is increasing or decreasing independent of absolute incomes.
- Anonymity: it does not matter who the high and low earners are.
- Scale independence: the Gini coefficient does not consider the size of the economy, the way it is measured, or whether it is a rich or poor country on average.
- Population independence: it does not matter how large the population of the country is.
- Transfer principle: if income (less than the difference), is transferred from a rich person to a poor person the resulting distribution is more equal.
Limitations of Gini coefficient
The Gini coefficient is a relative measure. Its proper use and interpretation is controversial. Mellor explains it is possible for the Gini coefficient of a developing country to rise (due to increasing inequality of income) while the number of people in absolute poverty decreases. This is because the Gini coefficient measures relative, not absolute, wealth. Kwok concludes that changing income inequality, measured by Gini coefficients, can be due to structural changes in a society such as growing population (baby booms, aging populations, increased divorce rates, extended family households splitting into nuclear families, emigration, immigration) and income mobility. Gini coefficients are simple, and this simplicity can lead to oversights and can confuse the comparison of different populations; for example, while both Bangladesh (per capita income of $1,693) and the Netherlands (per capita income of $42,183) had an income Gini index of 0.31 in 2010, the quality of life, economic opportunity and absolute income in these countries are very different, i.e. countries may have identical Gini coefficients, but differ greatly in wealth. Basic necessities may be available to all in a developed economy, while in an undeveloped economy with the same Gini coefficient, basic necessities may be unavailable to most or unequally available, due to lower absolute wealth.
- Different income distributions with the same Gini coefficient
Even when the total income of a population is the same, in certain situations two countries with different income distributions can have the same Gini index (e.g. cases when income Lorenz Curves cross). Table A illustrates one such situation. Both countries have a Gini index of 0.2, but the average income distributions for household groups are different. As another example, in a population where the lowest 50% of individuals have no income and the other 50% have equal income, the Gini coefficient is 0.5; whereas for another population where the lowest 75% of people have 25% of income and the top 25% have 75% of the income, the Gini index is also 0.5. Economies with similar incomes and Gini coefficients can have very different income distributions. Bellù and Liberati claim that to rank income inequality between two different populations based on their Gini indices is sometimes not possible, or misleading.
- Extreme wealth inequality, yet low income Gini coefficient
A Gini index does not contain information about absolute national or personal incomes. Populations can have very low income Gini indices, yet simultaneously very high wealth Gini index. By measuring inequality in income, the Gini ignores the differential efficiency of use of household income. By ignoring wealth (except as it contributes to income) the Gini can create the appearance of inequality when the people compared are at different stages in their life. Wealthy countries such as Sweden can show a low Gini coefficient for disposable income of 0.31 thereby appearing equal, yet have very high Gini coefficient for wealth of 0.79 to 0.86 thereby suggesting an extremely unequal wealth distribution in its society. These factors are not assessed in income-based Gini.
|1||20,000||1 & 2||50,000|
|3||40,000||3 & 4||90,000|
|5||60,000||5 & 6||130,000|
|7||80,000||7 & 8||170000|
|9||120,000||9 & 10||270000|
- Small sample bias – sparsely populated regions more likely to have low Gini coefficient
Gini index has a downward-bias for small populations. Counties or states or countries with small populations and less diverse economies will tend to report small Gini coefficients. For economically diverse large population groups, a much higher coefficient is expected than for each of its regions. Taking world economy as one, and income distribution for all human beings, for example, different scholars estimate global Gini index to range between 0.61 and 0.68. As with other inequality coefficients, the Gini coefficient is influenced by the granularity of the measurements. For example, five 20% quantiles (low granularity) will usually yield a lower Gini coefficient than twenty 5% quantiles (high granularity) for the same distribution. Philippe Monfort has shown that using inconsistent or unspecified granularity limits the usefulness of Gini coefficient measurements.
The Gini coefficient measure gives different results when applied to individuals instead of households, for the same economy and same income distributions. If household data is used, the measured value of income Gini depends on how the household is defined. When different populations are not measured with consistent definitions, comparison is not meaningful.
Deininger and Squire (1996) show that income Gini coefficient based on individual income, rather than household income, are different. For United States, for example, they find that individual income-based Gini index was 0.35, while for France they report individual income-based Gini index to be 0.43. According to their individual focussed method, in the 108 countries they studied, South Africa had the world's highest Gini index at 0.62, Malaysia had Asia's highest Gini index at 0.5, Brazil the highest at 0.57 in Latin America and Caribbean region, and Turkey the highest at 0.5 in OECD countries.
(in 2010 adjusted dollars)
| % of Population
| % of Population
|$15,000 – $24,999||11.9%||12.0%|
|$25,000 – $34,999||12.1%||10.9%|
|$35,000 – $49,999||15.4%||13.9%|
|$50,000 – $74,999||22.1%||17.7%|
|$75,000 – $99,999||12.4%||11.4%|
|$100,000 – $149,999||8.3%||12.1%|
|$150,000 – $199,999||2.0%||4.5%|
|$200,000 and over||1.2%||3.9%|
|United State's Gini
on pre-tax basis
- Gini coefficient is unable to discern the effects of structural changes in populations
Expanding on the importance of life-span measures, the Gini coefficient as a point-estimate of equality at a certain time, ignores life-span changes in income. Typically, increases in the proportion of young or old members of a society will drive apparent changes in equality, simply because people generally have lower incomes and wealth when they are young than when they are old. Because of this, factors such as age distribution within a population and mobility within income classes can create the appearance of inequality when none exist taking into account demographic effects. Thus a given economy may have a higher Gini coefficient at any one point in time compared to another, while the Gini coefficient calculated over individuals' lifetime income is actually lower than the apparently more equal (at a given point in time) economy's. Essentially, what matters is not just inequality in any particular year, but the composition of the distribution over time.
Kwok claims income Gini index for Hong Kong has been high (0.434 in 2010), in part because of structural changes in its population. Over recent decades, Hong Kong has witnessed increasing numbers of small households, elderly households and elderly living alone. The combined income is now split into more households. Many old people are living separately from their children in Hong Kong. These social changes have caused substantial changes in household income distribution. Income Gini coefficient, claims Kwok, does not discern these structural changes in its society. Household money income distribution for the United States, summarized in Table C of this section, confirms that this issue is not limited to just Hong Kong. According to the US Census Bureau, between 1979 and 2010, the population of United States experienced structural changes in overall households, the income for all income brackets increased in inflation-adjusted terms, household income distributions shifted into higher income brackets over time, while the income Gini coefficient increased.
Another limitation of Gini coefficient is that it is not a proper measure of egalitarianism, as it is only measures income dispersion. For example, if two equally egalitarian countries pursue different immigration policies, the country accepting a higher proportion of low-income or impoverished migrants will report a higher Gini coefficient and therefore may appear to exhibit more income inequality.
- Gini coefficient falls yet the poor gets poorer, Gini coefficient rises yet everyone getting richer
|Income bracket||Year 1
|20% – 40%||1,000||1,200||500|
|40% – 60%||2,000||2,200||1,000|
|60% – 80%||5,000||5,500||2,000|
Arnold describes one limitation of Gini coefficient to be income distribution situations where it misleads. The income of poorest fifth of households can be lower when Gini coefficient is lower, than when the poorest income bracket is earning a larger percentage of all income. Table D illustrates this case, where the lowest income bracket has an average household market income of $500 per year at Gini index of 0.51, and zero income at Gini index of 0.48. This is counter-intuitive and Gini coefficient cannot tell what is happening to each income bracket or the absolute income, cautions Arnold.
Feldstein similarly explains one limitation of Gini coefficient as its focus on relative income distribution, rather than real levels of poverty and prosperity in society. He claims Gini coefficient analysis is limited because in many situations it intuitively implies inequality that violate the so-called Pareto improvement principle.
The Pareto improvement principle, named after the Italian economist Vilfredo Pareto, states that a social, economic or income change is good if it makes one or more people better off without making anyone else worse off. Gini coefficient can rise if some or all income brackets experience a rising income. Feldstein’s explanation is summarized in Table D. The table shows that in a growing economy, consistent with Pareto improvement principle, where income of every segment of the population has increased, from one year to next, the income inequality Gini coefficient can rise too. In contrast, in another economy, if everyone gets poorer and is worse off, income inequality is less and Gini coefficient lower.
- Inability to value benefits and income from informal economy affects Gini coefficient accuracy
Some countries distribute benefits that are difficult to value. Countries that provide subsidized housing, medical care, education or other such services are difficult to value objectively, as it depends on quality and extent of the benefit. In absence of free markets, valuing these income transfers as household income is subjective. The theoretical model of Gini coefficient is limited to accepting correct or incorrect subjective assumptions.
In subsistence-driven and informal economies, people may have significant income in other forms than money, for example through subsistence farming or bartering. These income tend to accrue to the segment of population that is below-poverty line or very poor, in emerging and transitional economy countries such as those in sub-Saharan Africa, Latin America, Asia and Eastern Europe. Informal economy accounts for over half of global employment and as much as 90 per cent of employment in some of the poorer sub-Saharan countries with high official Gini inequality coefficients. Schneider et al., in their 2010 study of 162 countries, report about 31.2%, or about $20 trillion, of world's GDP is informal. In developing countries, the informal economy predominates for all income brackets except for the richer, urban upper income bracket populations. Even in developed economies, between 8% (United States) to 27% (Italy) of each nation's GDP is informal, and resulting informal income predominates as a livelihood activity for those in the lowest income brackets. The value and distribution of the incomes from informal or underground economy is difficult to quantify, making true income Gini coefficients estimates difficult. Different assumptions and quantifications of these incomes will yield different Gini coefficients.
Gini has some mathematical limitations as well. It is not additive and different sets of people cannot be averaged to obtain the Gini coefficient of all the people in the sets.
Alternatives to Gini coefficient
Given the limitations of Gini coefficient, other statistical methods are used in combination or as an alternative measure of population dispersity. For example, entropy measures are frequently used (e.g. the Theil Index and the Atkinson index). These measures attempt to compare the distribution of resources by intelligent agents in the market with a maximum entropy random distribution, which would occur if these agents acted like non-intelligent particles in a closed system following the laws of statistical physics.
Relation to other statistical measures
Gini coefficient closely related to the AUC (Area Under receiver operating characteristic Curve) measure of performance. The relation follows the formula Gini coefficient is also closely related to Mann–Whitney U.
In certain fields such as ecology, Simpson's index is used, which is related to Gini. Simpson index scales as mirror opposite to Gini; that is, with increasing diversity Simpson index takes a smaller value (0 means maximum, 1 means minimum heterogeneity per classic Simpson index). Simpson index is sometimes transformed by subtracting the observed value from the maximum possible value of 1, and then it is known as Gini-Simpson Index.
Other uses
Although the Gini coefficient is most popular in economics, it can in theory be applied in any field of science that studies a distribution. For example, in ecology the Gini coefficient has been used as a measure of biodiversity, where the cumulative proportion of species is plotted against cumulative proportion of individuals. In health, it has been used as a measure of the inequality of health related quality of life in a population. In education, it has been used as a measure of the inequality of universities. In chemistry it has been used to express the selectivity of protein kinase inhibitors against a panel of kinases. In engineering, it has been used to evaluate the fairness achieved by Internet routers in scheduling packet transmissions from different flows of traffic. In statistics, building decision trees, it is used to measure the purity of possible child nodes, with the aim of maximising the average purity of two child nodes when splitting, and it has been compared with other equality measures.
The discriminatory power refers to a credit risk model's ability to differentiate between defaulting and non-defaulting clients. The formula , in calculation section above, may be used for the final model and also at individual model factor level, to quantify the discriminatory power of individual factors. It is related to accuracy ratio in population assessment models.
See also
Notes and references
- Gini, C. (1912). "Italian: Variabilità e mutabilità" (Variability and Mutability', C. Cuppini, Bologna, 156 pages. Reprinted in Memorie di metodologica statistica (Ed. Pizetti E, Salvemini, T). Rome: Libreria Eredi Virgilio Veschi (1955).
- Gini, C. (1909). "Concentration and dependency ratios" (in Italian). English translation in Rivista di Politica Economica, 87 (1997), 769–789.
- "Current Population Survey (CPS) – Definitions and Explanations". US Census Bureau.
- Note: Gini coefficient becomes 1, only in a large population where one person has all the income. In the special case of just two people, where one has no income and the other has all the income, the Gini coefficient is 0.5. For 5 people set, where 4 have no income and the fifth has all the income, the Gini coefficient is 0.8. See: FAO, United Nations – Inequality Analysis, The Gini Index Module (PDF format), fao.org.
- Sadras, V. O.; Bongiovanni, R. (2004). "Use of Lorenz curves and Gini coefficients to assess yield inequality within paddocks". Field Crops Research 90 (2–3): 303–310. doi:10.1016/j.fcr.2004.04.003.
- Gini, C. (1936). "On the Measure of Concentration with Special Reference to Income and Statistics", Colorado College Publication, General Series No. 208, 73–79.
- "Income distribution – Inequality: Income distribution – Inequality – Country tables". OECD. 2012.
- "South Africa Overview". The World Bank. 2011.
- Ali, Mwabu and Gesami (March 2002). "Poverty reduction in Africa: Challenges and policy options" (PDF). African Economic Research Consortium, Nairobi.
- Evan Hillebrand (June 2009). "Poverty, Growth, and Inequality over the Next 50 Years" (PDF). FAO, United Nations – Economic and Social Development Department.
- "The Real Wealth of Nations: Pathways to Human Development, 2010". United Nations Development Program. 2011. pp. 72–74. ISBN 9780230284456.
- Shlomo Yitzhaki (1998). "More than a Dozen Alternative Ways of Spelling Gini". Economic Inequality 8: 13–30.
- Myung Jae Sung (August 2010). Population Aging, Mobility of Quarterly Incomes, and Annual Income Inequality: Theoretical Discussion and Empirical Findings.
- Blomquist, N. (1981). "A comparison of distributions of annual and lifetime income: Sweden around 1970". Review of Income and Wealth 27 (3): 243–264. doi:10.1111/j.1475-4991.1981.tb00227.x.
- "Gini Coefficient". Wolfram Mathworld.
- Firebaugh, Glenn (1999). "Empirics of World Income Inequality". American Journal of Sociology 104 (6): 1597–1630. doi:10.1086/210218.. See also ——— (2003). "Inequality: What it is and how it is measured". The New Geography of Global Income Inequality. Cambridge, MA: Harvard University Press. ISBN 0-674-01067-1.
- N. C. Kakwani (April 1977). "Applications of Lorenz Curves in Economic Analysis". Econometrica 45 (3): 719–728. doi:10.2307/1911684. JSTOR 1911684.
- Chu, Davoodi, Gupta (March 2000). "Income Distribution and Tax and Government Social Spending Policies in Developing Countries". International Monetary Fund.
- "Monitoring quality of life in Europe – Gini index". Eurofound. 26 August 2009
- Chen Wang, Koen Caminada, and Kees Goudswaard (July–September 2012). "The redistributive effect of social transfer programmes and taxes: A decomposition across countries". International Social Security Review 65 (3): 27–48. doi:10.1111/j.1468-246X.2012.01435.x.
- Bob Sutcliffe (April 2007). "Postscript to the article ‘World inequality and globalization’ (Oxford Review of Economic Policy, Spring 2004)". Retrieved 2007-12-13
- Income distribution – Inequality. Gini coefficient after taxes and transfers. OECD. StatExtracts. Retrieved: 24 December 2012.
- "A brief look at post-war U.S. Income Inequality". United States Census Bureau. 1996.
- "Table 3. Income Distribution Measures Using Money Income and Equivalence-Adjusted Income: 2007 and 2008". Income, Poverty, and Health Insurance Coverage in the United States: 2008. United States Census Bureau. p. 17.
- "Income, Poverty and Health Insurance Coverage in the United States: 2009". Newsroom. United States Census Bureau.
- "Income, Poverty and Health Insurance Coverage in the United States: 2011". Newsroom. United States Census Bureau. September 12, 2012. Retrieved January 23, 2013.
- Daniel H. Cooper, Byron F. Lutz, and Michael G. Palumbo (September 22, 2011). "Quantifying the Role of Federal and State Taxes in Mitigating Income Inequality". Federal Reserve, Boston, United States.
- Adam Bee (February 2012). "Household Income Inequality Within U.S. Counties: 2006–2010". Census Bureau, U.S. Department of Commerce.
- Isabel Ortiz and Matthew Cummins (April 2011). "Global Inequality: Beyond the Bottom Billion". UNICEF. p. 26.
- Berg, Andrew G.; Ostry, Jonathan D. (2011). "Equality and Efficiency". Finance and Development (International Monetary Fund) 48 (3). Retrieved September 10, 2012.
- Branko Milanovic (September 2011). "More or Less". Finance & Development (International Monetary Fund) 48 (3).
- Albert Berry and John Serieux (September 2006). "Riding the Elephants: The Evolution of World Economic Growth and Income Distribution at the End of the Twentieth Century (1980–2000)". United Nations (DESA Working Paper No. 27).
- Thomas, Wang, Fan (January 2001). "Measuring education inequality – Gini coefficients of education". The World Bank.
- John E. Roemer (September 2006). "ECONOMIC DEVELOPMENT AS OPPORTUNITY EQUALIZATION". Yale University.
- John Weymark (2003). "Generalized Gini Indices of Equality of Opportunity". Journal of Economic Inequality 1 (1): 5–24. doi:10.1023/A:1023923807503.
- Milorad Kovacevic (November 2010). "Measurement of Inequality in Human Development – A Review". United Nations Development Program.
- Anthony Atkinson (1999). "The contributions of Amartya Sen to Welfare Economics". Scand. J. Of Economics 101 (2): 173–190. doi:10.1111/1467-9442.00151.
http://en.wikipedia.org/wiki/Gini_index | 13
31 | The modern Democratic Party in
North Carolina arose out of opposition to so-called "radical"
reconstruction efforts led by the Republican-controlled federal
government in the 1860s and 1870s. The Conservative Party, a coalition
of former Democrats and Whigs who opposed federal intervention in state
affairs, won control of the General Assembly in 1870 and began to
reverse some of the laws and policies established by the
Reconstruction-era Republicans. In 1876 the Conservatives changed their
name to Democrats and popular Civil War governor Zebulon Vance was
returned to the state's highest office. In the eyes of many white North
Carolinians, the state had been "redeemed."
Under the encouragement of the
Democrats, whose policies aided business interests, the state began a
rapid process of industrialization. Textile mills were built throughout
the Piedmont, and the state's tobacco and furniture industries grew as well.
Sources: William S. Powell, North
Carolina through Four Centuries. Chapel Hill: University of
North Carolina Press, 1989
The right to vote is called the
franchise, thus disfranchisement is the removal of voting rights. This
became an issue during the campaign of 1898 when Democratic leaders
suggested that the only sure way to prevent "negro domination" in North
Carolina -- especially in parts of the state where African Americans
outnumbered whites -- was first to return the Democrats to power and
then to pass legislation effectively preventing African Americans from
voting. The Fifteenth Amendment to the Constitution made it impossible
for the states to deny the vote to African Americans outright, but many
Southern states, beginning with Mississippi in 1890, enacted laws that
worked to prevent most African Americans from voting. These new voting
requirements included a poll tax and a literacy test and gave greater
authority to local election officials. In the 1898 campaign in North
Carolina, the Populists warned that if the Democrats went through with
their plan for disfranchisement, they would pass laws that, by
extension, also denied the vote to poor whites.
After the Democratic victory, the
new North Carolina legislature began work on an amendment to the
Constitution regarding voting laws. The new voting restrictions were adopted in 1900,
and would significantly decrease participation by African Americans in
statewide elections for decades to come.
Sources: Michael Perman, Struggle for Mastery: Disfranchisement in
the South, 1888-1908. Chapel Hill: UNC Press, 2001. See
chapter 8: "Defeating Fusion II: North Carolina, 1898-1900."
The depression of the 1880s hit
small farmers especially hard. Farmers in the midwest formed
organizations to advocate for reform of monetary policies and to
attempt to curb the influence of big business. The largest of these
organizations was the National Farmers Alliance, which incorporated
many statewide groups, including the North Carolina Farmers
Association. North Carolinian Leonidas LaFayette Polk, former state
secretary of agriculture and editor of the Progressive Farmer,
was a national leader in the Alliance until his death in 1892.
The Farmers Alliance first
advocated reform within the Democratic Party, but when the Democrats
proved reluctant to change their business-friendly policies, many
"Alliancemen" left in favor of a new party, the People's, or, Populist
Party. However, as it became clear in North Carolina in 1898, many
Southern farmers who had supported the platform of the Populists would
soon return to the Democratic Party. At the dawn of the twentieth
century, the Farmers Alliance had lost a great deal of its influence
and the Populist Party no longer posed a serious challenge.
Sources: William S. Powell, North Carolina through Four Centuries.
Chapel Hill: University of North Carolina Press, 1989; James Truslow Adams, ed., Dictionary of
American History. Second Edition. New York: Scribner, 1940.
Frustrated by Democratic
domination of nearly every election since 1876, the Republican and
Populist parties decided to combine forces in an effort to gain control
of the state government. The coalition was dubbed "fusion" by the
Democratic press. Instead of running competing candidates on separate
tickets, state Republican and Populist leaders divided the offices and
ran on a single ticket. The parties first combined in 1894,
successfully taking control of the state legislature. They joined
forces again in 1896, claiming control of the legislature and several
prominent offices in each election. Populist spokesman Marion Butler
was elected to the U.S. Senate in 1894, while Republican leader Daniel
Russell was elected governor in 1896. Similar attempts at fusion were
made in other Southern states, but nowhere was it as successful as in North Carolina.
Sources: William S. Powell, North Carolina
through Four Centuries. Chapel Hill: University of North
Carolina Press, 1989; Helen G. Edmonds, The Negro and Fusion
Politics in North Carolina, 1894-1901. Chapel Hill: UNC Press, 1951.
News and Observer
By 1898, the Raleigh News
and Observer was the self-proclaimed "largest daily in
North Carolina." Under the editorship of staunch Democrat Josephus
Daniels, the paper was strongly Democratic, and became the closest
thing to an official party organ.
Daniels was involved in the
Democrats' 1898 campaign from the beginning, working with Furnifold
Simmons and other party leaders to formulate strategy. Daniels wrote
later that "The News and Observer was the
printed voice of the campaign." In the months leading up to the
November election, the News and Observer
hammered away at Republican and Populist leaders and maintained the
party's steady cry of white supremacy. Daniels wrote,
. . . The News and
Observer was relied upon to carry the Democratic message
and to be the militant voice of White Supremacy, and it did not fail in
what was expected, sometimes going to extremes in its partisanship. Its
correspondents visited every town where the Fusionists were in control
and presented column after column day by day of stories of every Negro
in office and every peculation, every private delinquency of a Fusion
office-holder. (Editor in Politics, p. 295.)
One of the most effective tools of
the campaign was the paper's use of editorial cartoons, which usually
ran on the front page. Daniels and cartoonist Norman Jennett came up
with the topics, which frequently ridiculed Governor Daniel Russell and
North Carolina's African American politicians. At a party celebrating
the Democratic victory, a motion was passed to thank the News
and Observer for its leadership throughout the campaign.
Sources: Alf Pratte, "Daniels,
Josephus." In Dictionary of National Biography,
vol. 6. New York: Oxford University Press, 1999; Josephus Daniels, Editor
in Politics. Chapel Hill: UNC Press, 1941.
The Populist Party, sometimes
called the People's Party, grew out of the national Farmers Alliance,
an organization of small farmers. The Farmers Alliance favored monetary
reform (especially the free coinage of silver), low-interest loans, and
fair trade. The Alliance originally advocated for reform within the
Democratic Party. Failing that, the "alliancemen" ventured into
politics on their own.
The Populists first received a
significant number of votes in North Carolina in 1892. In that
election, the votes received statewide by the Populists and Republicans
were, when combined, greater than those received by the Democrats. The
voters sent a clear signal that the Democratic Party was no longer the
party of the majority.
Despite clear policy differences,
especially in the area of monetary reform, the Populists and
Republicans joined together in the 1894 campaign, splitting the offices
on the ballot in a fusion agreement. It worked, with the new fusion
government taking control of the legislature and winning again in the
election of 1896.
In the election of 1898, the
Democrats effectively reclaimed many of the conservative white voters
who had fled to the Populists. The incessant and effective white
supremacy campaign by the Democrats overwhelmed the opposition. By the
closing months of the campaign, even the Populists were coming over to
the side of white supremacy, publishing racist editorials and cartoons
in their newspapers, with the only difference being that the Populists
accused the Democrats of placing African Americans in state office.
Although the Populists again fused
with the Republicans in 1900, the election of 1898 had effectively
destroyed the party. By the early twentieth century it ceased to be a
viable third party, both in North Carolina and nationwide, and the
two-party system was firmly established.
Sources: Helen G. Edmonds, The Negro and Fusion Politics in North Carolina,
1894-1901. Chapel Hill: UNC Press, 1951.
The Progressive Farmer
was founded in 1886 by Leonidas LaFayette Polk, the former secretary of
agriculture for North Carolina and a leading advocate on behalf of
farmers. The paper contained practical advice for farmers and some
discussion of statewide political issues. In 1887, the paper became the
official organ of the North Carolina State Farmers Alliance. As the
Alliance became more politically active, so too did the Progressive Farmer.
After Polk's death in 1892, the
paper continued to be published, openly supporting the Populist
candidates in the statewide elections of 1894 and 1896. In 1898, under
the leadership of editor Clarence Poe, the Progressive
Farmer initially stayed away from the contentious election
before finally being drawn in late in the campaign. In October, the
paper published an election supplement, which attacked the Democrats
for ignoring pressing issues to concentrate solely on race. The
supplement included several racist cartoons, which accused the
Democrats of placing African Americans in statewide office.
Although the Populist party faded
after the elections of 1898 and 1900, the Progressive Farmer,
returning to issues of more practical use to farmers, thrived. The
paper changed hands several times, but remained a mainstay in farming
households and continues to be published today.
Sources: William D. Poe, Jr., "The Progressive
Farmer, 1886-1903." M.A. Thesis, University of South
Toward the end of the 1898
campaign, especially in southeastern North Carolina, groups of men in
red shirts appeared at rallies and rode on horseback and in wagons
through African American neighborhoods, brandishing shotguns at the
terrified onlookers. Josephus Daniels suggests that the Red Shirts were
the idea of South Carolina Senator Ben Tillman, who had used them in a
campaign as early as 1876. Tillman made several speeches in North
Carolina in the 1898 campaign and was usually accompanied by Red
Shirts. Although there were several acts of violence against African
Americans attributed to the Red Shirts, it was thought that their
menacing presence alone was enough to intimidate potential Populist or
Republican voters. Daniels wrote, "If you have never seen three hundred
red-shirted men towards sunset with the sky red and the red shirts
seeming to blend with the sky, you cannot conceive the impression it
Sources: Josephus Daniels, Editor in Politics.
Chapel Hill: UNC Press, 1941; Hugh T. Lefler, ed., North
Carolina History Told by Contemporaries. Chapel Hill: UNC Press.
The Republican Party in North
Carolina was formed in 1867, primarily by former unionists and men who
had recently relocated to the state. William Woods Holden, editor of
the North Carolina Standard and later governor,
was the most prominent spokesman of the party. From its inception, the
party welcomed African Americans.
As former Confederates were
pardoned and allowed to vote and participate in government again, they
gave strength to the new Conservative party, later renamed the
Democratic Party. Taking advantage of popular opposition to the
reconstruction policies of the Republican-led federal government, the
Democrats regained control of the North Carolina legislature in 1870
and the governor's office in 1876.
Republican candidates continued to
receive a significant number of votes in the 1880s and early 1890s, but
they were never able to achieve a majority. The Republicans finally
found success at the ballot box by running a joint campaign with the
upstart Populist Party in 1894. Although the Republicans and Populists
had differences, most notably in their views on monetary policy -- the
Populists favored the free coinage of silver as a means of inflating
the currency while the Republicans remained committed to the gold
standard -- they found enough common ground to work together. After the
new fusion government took office, they successfully reformed election
laws, which made it easier for people to vote, and returned county
government to local control.
The fusion ticket won again in
1896, with Republican Daniel Russell elected as governor, but when the
1898 election neared, the relations between the Populists and the
Republicans began to sour. Having already achieved many of the goals
they set in 1894, they were having a harder time finding areas in which
to cooperate. The two parties finally agreed to run together in 1898,
each acknowledging that this was the only chance they had to beat the Democrats.
In the 1898 campaign, the
Republicans ran largely on their past successes and appealed to the
patriotism of voters in asking them to support the party of President
William McKinley, especially with the nation at war with Spain.
However, from the beginnings of the campaign, the Republicans were
forced into a defensive posture. The Democrats were relentless in their
cries of Republican corruption and "negro domination" and the
Republicans were never able to get around these accusations and raise
issues of their own. After Daniel Russell's term expired in 1901, it
would be 71 years before another Republican was elected governor in North Carolina.
Sources: William S. Powell, North Carolina
through Four Centuries. Chapel Hill: University of North
Carolina Press, 1989; Jeffrey J. Crow and Robert F. Durden, Maverick
Republican in the Old North State: A Political Biography of Daniel L.
Russell. Baton Rouge: LSU Press, 1977.
The coinage of silver became a
major political issue in the decades following the Civil War. The
silver dollar had slowly fallen out of circulation and gold had become
the dominant form of sound currency. As the country continued to expand
to the west, new mining ventures uncovered increased quantities of
silver. Many southern and western farmers, who had been hit hard by the
uncertain economy beginning in the 1870s, came to believe that the
federal government's reliance on gold currency was a key cause of the
farmers' financial problems. Farmers and their advocates began to argue
that more expanded coinage of silver and a return to full bimetallism
would increase the value of silver and result in a more equitable
distribution of wealth.
The cry of "free silver" was taken
up by the national Farmers Alliance and, later, the Populist Party.
Calling for the unlimited coinage of silver at a ratio of 16:1 to that
of gold (a ratio originally set by the federal government in 1830), the
Populists argued that an expansion of the currency system would result
in increased (but controlled) inflation, which they said would improve
the economic standing of small farmers by raising crop prices and
reducing the value of their debt. The cause of "free silver" was
opposed by the Republican Party, which counted eastern businessmen and
bankers as its key constituents.
The Republicans and Populists in
North Carolina did not come to an agreement on the silver issue when
they ran together in the elections of 1894 and 1896. However, the
Populists' continued discomfort with the "gold-bug" Republicans was one
of the reasons the party originally tried to fuse with the Democrats in
1898. Silver was also an important campaign issue for the Democrats. In
the rally that opened the campaign, the two main issues were announced
as the "white man and the white metal."
Sources: Lawrence Goodwyn, Democratic
Promise: The Populist Moment in America New York: Oxford
University Press, 1976; Marshall Gramm and Phil Gramm, "The Free Silver
Movement in America: A Reinterpretation." Journal of
Economic History 64, no. 4 (2004): 1108-1129; James Turner,
"Understanding the Populists." The Journal of American History 67, no.
2 (1980): 354-373.
The Wilmington Daily
Record , an African American newspaper, was founded in
1892, but didn't have much success until Alex Manly took over as editor
in 1895. By 1898 it was billing itself as "The Only Negro Daily in the
World." The paper catered to the local African American community, but
in every aspect it resembled the white papers of the time, with a
social column, advice to readers, editorials, and articles reprinted
from other newspapers. The editorial stance of the paper was solidly
Republican, and Manly favored fusion with the Populists as the most
effective way of advancing the interests of African Americans in North Carolina.
On 18 August 1898, the Daily
Record published an editorial in response to an article by
a Georgia woman who suggested the widespread lynching of African
Americans in order to protect white women. The unsigned editorial --
most often attributed to editor Alex Manly, though possibly written by
associate editor William L. Jeffries -- suggested that relationships
between African American men and white women were far more common than
most whites were willing to admit. Indeed, "Meetings of this kind go on
for some time until the woman's infatuation or the man's boldness bring
attention to them and the man is lynched for rape."
The editorial went unnoticed by
the white press for a couple of days before it was discovered and
reprinted in the Wilmington Star. The Democratic
press rose in an uproar, printing excerpts in papers across the state.
In the Raleigh News and Observer, the editorial
appeared under the headline "Vile and Villanous."
When Wilmington erupted in
violence after the election in November 1898, the office of the
Wilmington Daily Record was one of the targets
of the angry white mob.
Sources: Helen G. Edmonds, The Negro and Fusion
Politics in North Carolina, 1894-1901. Chapel Hill: UNC
Press, 1951; Robert H. Wooley, "Race and Politics: The
Evolution of the White Supremacy Campaign of 1898 in North Carolina."
Ph. D. Dissertation, University of North Carolina at Chapel Hill, 1977. | http://www.lib.unc.edu/ncc/1898/glossary.html | 13 |
17 |
Evolution is the change in the inherited characteristics of biological populations over successive generations. Evolutionary processes give rise to diversity at every level of biological organisation, including species, individual organisms and molecules such as DNA and proteins.
All life on earth is descended from a last universal ancestor that lived approximately 3.8 billion years ago. Repeated speciation and the divergence of life can be inferred from shared sets of biochemical and morphological traits, or by shared DNA sequences. These homologous traits and sequences are more similar among species that share a more recent common ancestor, and can be used to reconstruct evolutionary histories, using both existing species and the fossil record. Existing patterns of biodiversity have been shaped both by speciation and by extinction.
Charles Darwin was the first to formulate a scientific argument for the theory of evolution by means of natural selection. Evolution by natural selection is a process that is inferred from three facts about populations: 1) more offspring are produced than can possibly survive, 2) traits vary among individuals, leading to different rates of survival and reproduction, and 3) trait differences are heritable. Thus, when members of a population die they are replaced by the progeny of parents that were better adapted to survive and reproduce in the environment in which natural selection took place. This process creates and preserves traits that are seemingly fitted for the functional roles they perform. Natural selection is the only known cause of adaptation, but not the only known cause of evolution. Other, nonadaptive causes of evolution include mutation and genetic drift.
In the early 20th century, genetics was integrated with Darwin's theory of evolution by natural selection through the discipline of population genetics. The importance of natural selection as a cause of evolution was accepted into other branches of biology. Moreover, previously held notions about evolution, such as orthogenesis and "progress" became obsolete. Scientists continue to study various aspects of evolution by forming and testing hypotheses, constructing scientific theories, using observational data, and performing experiments in both the field and the laboratory. Biologists agree that descent with modification is one of the most reliably established facts in science. Discoveries in evolutionary biology have made a significant impact not just within the traditional branches of biology, but also in other academic disciplines (e.g., anthropology and psychology) and on society at large.
The proposal that one type of animal could descend from an animal of another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles. In contrast to these materialistic views, Aristotle understood all natural things, not only living things, as being imperfect actualisations of different fixed natural possibilities, known as "forms", "ideas", or (in Latin translations) "species". This was part of his teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. The Roman poet and philosopher Titus Lucretius Carus proposed the possibility of evolutionary changes of organisms. Variations of Aristotle's teleological view became the standard understanding of the Middle Ages, and were integrated into Christian learning, but Aristotle did not demand that real types of animals corresponded one-for-one with exact metaphysical forms, and specifically gave examples of how new types of living things could come to be. Leonardo da Vinci simply wrote, "Motion is the cause of all life".
In the 17th century the new method of modern science rejected Aristotle's approach, and sought explanations of natural phenomena in terms of laws of nature which were the same for all visible things, and did not need to assume any fixed natural categories, nor any divine cosmic order. But this new approach was slow to take root in the biological sciences, which became the last bastion of the concept of fixed natural types. John Ray used one of the previously more general terms for fixed natural types, "species", to apply to animal and plant types, but unlike Aristotle he strictly identified each type of living thing as a species, and proposed that each species can be defined by the features that perpetuate themselves each generation. These species were designed by God, but showed differences caused by local conditions. The biological classification introduced by Carolus Linnaeus in 1735 also viewed species as fixed according to a divine plan.
Other naturalists of this time speculated on evolutionary change of species over time according to natural laws. Maupertuis wrote in 1751 of natural modifications occurring during reproduction and accumulating over many generations to produce new species. Buffon suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single micro-organism (or "filament"). The first full-fledged evolutionary scheme was Lamarck's "transmutation" theory of 1809, which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and that on a local level these lineages adapted to the environment by inheriting changes caused by use or disuse in parents. (The latter process was later called Lamarckism.) These ideas were condemned by established naturalists as speculation lacking empirical support. In particular Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into a natural theology which proposed complex adaptations as evidence of divine design, and was admired by Charles Darwin.
The critical break from the concept of fixed species in biology began with the theory of evolution by natural selection, which was formulated by Charles Darwin. Partly influenced by An Essay on the Principle of Population by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" where favorable variations could prevail as others perished. Each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of animals and plants from a common ancestry through the working of natural laws working the same for all types of thing. Darwin was developing his theory of "natural selection" from 1838 onwards until Alfred Russel Wallace sent him a similar theory in 1858. Both men presented their separate papers to the Linnean Society of London. At the end of 1859, Darwin's publication of On the Origin of Species explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwinian evolution. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry. Some were disturbed by this since it implied that humans did not have a special place in the universe.
Precise mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis. In 1865 Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory. August Weismann made the important distinction between germ cells (sperm and eggs) and somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and when expressed they could move into the cytoplasm to change the cell's structure. De Vries was also one of the researchers who made Mendel's work well-known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline. To explain how new variants originate, De Vries developed a mutation theory that led to a temporary rift between those who accepted Darwinian evolution and biometricians who allied with de Vries. At the turn of the 20th century, pioneers in the field of population genetics, such as J.B.S. Haldane, Sewall Wright, and Ronald Fisher, set the foundations of evolution onto a robust statistical philosophy. The false contradiction between Darwin's theory, genetic mutations, and Mendelian inheritance was thus reconciled.
In the 1920s and 1930s a modern evolutionary synthesis connected natural selection, mutation theory, and Mendelian inheritance into a unified theory that applied generally to any branch of biology. The modern synthesis was able to explain patterns observed across species in populations, through fossil transitions in palaeontology, and even complex cellular mechanisms in developmental biology. The publication of the structure of DNA by James Watson and Francis Crick in 1953 demonstrated a physical basis for inheritance. Molecular biology improved our understanding of the relationship between genotype and phenotype. Advancements were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees. In 1973, evolutionary biologist Theodosius Dobzhansky penned that "nothing in biology makes sense except in the light of evolution", because it has brought to light the relations of what first seemed disjointed facts in natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet.
Since then, the modern synthesis has been further extended to explain biological phenomena across the full and integrative scale of the biological hierarchy, from genes to species. This extension has been dubbed "eco-evo-devo".
Evolution in organisms occurs through changes in heritable traits – particular characteristics of an organism. In humans, for example, eye colour is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents. Inherited traits are controlled by genes and the complete set of genes within an organism's genome is called its genotype.
The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. These traits come from the interaction of its genotype with the environment. As a result, many aspects of an organism's phenotype are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. However, some people tan more easily than others, due to differences in their genotype; a striking example is people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn.
Heritable traits are passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long polymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, the long strands of DNA form condensed structures called chromosomes. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are more complex and are controlled by multiple interacting genes.
Recent findings have confirmed important examples of heritable changes that cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalization. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Descendants inherit genes plus environmental characteristics generated by the ecological actions of ancestors. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis.
An individual organism's phenotype results from both its genotype and the influence from the environment it has lived in. A substantial part of the variation in phenotypes in a population is caused by the differences between their genotypes. The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. The frequency of one particular allele will increase or decrease relative to other forms of that gene. Variation disappears when a new allele reaches the point of fixation, when it either disappears from the population or replaces the ancestral allele entirely.
Natural selection will only cause evolution if there is enough genetic variation in a population. Before the discovery of Mendelian genetics, one common hypothesis was blending inheritance. But with blending inheritance, genetic variance would be rapidly lost, making evolution by natural selection implausible. The Hardy-Weinberg principle provides the solution to how variation is maintained in a population with Mendelian inheritance. The frequencies of alleles (variations in a gene) will remain constant in the absence of selection, mutation, migration and genetic drift.
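As a minimal sketch of the Hardy-Weinberg principle described above, the short Python snippet below (the starting allele frequency of 0.3 and the function name are illustrative choices, not values from the article) computes genotype frequencies for a two-allele locus under random mating and confirms that the allele frequency is unchanged in the next generation.

```python
def hardy_weinberg(p):
    """Genotype frequencies at a two-allele locus with allele frequencies p and q = 1 - p,
    assuming random mating and no selection, mutation, migration or drift."""
    q = 1.0 - p
    genotypes = {"AA": p * p, "Aa": 2 * p * q, "aa": q * q}
    # Allele frequency in the next generation: all of AA's copies plus half of Aa's.
    p_next = genotypes["AA"] + 0.5 * genotypes["Aa"]
    return genotypes, p_next

genotypes, p_next = hardy_weinberg(0.3)
print(genotypes)  # AA ~ 0.09, Aa ~ 0.42, aa ~ 0.49
print(p_next)     # 0.3 again (up to floating-point error): the frequency does not change
```

Introducing selection, mutation, migration or drift would break this constancy, which is exactly why the principle is useful as a null model.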
Variation comes from mutations in genetic material, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is identical in all individuals of that species. However, even relatively small differences in genotype can lead to dramatic differences in phenotype: for example, chimpanzees and humans differ in only about 5% of their genomes.
Mutations are changes in the DNA sequence of a cell's genome. When mutations occur, they can either have no effect, alter the product of a gene, or prevent the gene from functioning. Based on studies in the fly Drosophila melanogaster, it has been suggested that if a mutation changes a protein produced by a gene, this will probably be harmful, with about 70% of these mutations having damaging effects, and the remainder being either neutral or weakly beneficial.
Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome. Extra copies of genes are a major source of the raw material needed for new genes to evolve. This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors. For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene.
New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because it increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function. Other types of mutations can even generate entirely new genes from previously noncoding DNA.
The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions. When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions. For example, polyketide synthases are large enzymes that make antibiotics; they contain up to one hundred independent domains that each catalyze one step in the overall process, like a step in an assembly line.
In asexual organisms, genes are inherited together, or linked, as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes. Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles. Sex usually increases genetic variation and may increase the rate of evolution.
Gene flow is the exchange of genes between populations and between species. It can therefore be a source of variation that is new to a population or to a species. Gene flow can be caused by the movement of individuals between separate populations of organisms, as might be caused by the movement of mice between inland and coastal populations, or the movement of pollen between heavy metal tolerant and heavy metal sensitive populations of grasses.
Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance, as when one bacterium acquires resistance genes it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean beetle Callosobruchus chinensis has occurred. An example of larger-scale transfers is the eukaryotic bdelloid rotifers, which have received a range of genes from bacteria, fungi and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains.
Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea.
From a Neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms. For example, evolution occurs when the allele for black colour in a population of moths becomes more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, genetic hitchhiking, mutation and gene flow.
Evolution by means of natural selection is the process by which genetic mutations that enhance reproduction become and remain more common in successive generations of a population. It has often been called a "self-evident" mechanism because it necessarily follows from three simple facts:
- more offspring are produced than can possibly survive,
- traits vary among individuals, leading to different rates of survival and reproduction, and
- trait differences are heritable.
These conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors pass these advantageous traits on, while traits that do not confer an advantage are not passed on to the next generation.
The central concept of natural selection is the evolutionary fitness of an organism. Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation. However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes. For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness.
If an allele increases fitness more than the other alleles of that gene, then with each generation this allele will become more common within the population. These traits are said to be "selected for". Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in this allele becoming rarer; it is "selected against". Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful. However, even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form (see Dollo's law).
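To make the idea of an allele being "selected for" concrete, here is a deliberately simple, deterministic one-locus haploid model; the fitness values and starting frequency are invented for illustration, and real populations are diploid, finite and noisier. Each generation, the frequency of allele A is reweighted by its fitness relative to the population mean.

```python
def select(p, w_a_allele, w_other, generations):
    """Deterministic one-locus haploid selection: each generation, the frequency of
    allele A is reweighted by its fitness relative to the population mean fitness."""
    history = [p]
    for _ in range(generations):
        mean_fitness = p * w_a_allele + (1.0 - p) * w_other
        p = p * w_a_allele / mean_fitness
        history.append(p)
    return history

# Allele A starts rare (1%) but carries a 5% relative fitness advantage.
trajectory = select(p=0.01, w_a_allele=1.05, w_other=1.00, generations=200)
print(round(trajectory[-1], 3))  # about 0.994: allele A is close to fixation
```

Even a 5% fitness advantage drives an initially rare allele close to fixation within a few hundred generations in this idealised setting.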
Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time — for example, organisms slowly getting taller. Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilizing selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity. This would, for example, cause organisms to slowly become all the same height.
A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates. Traits that evolved through sexual selection are particularly prominent in males of some animal species, despite traits such as cumbersome antlers, mating calls or bright colours that attract predators, decreasing the survival of individual males. This survival disadvantage is balanced by higher reproductive success in males that show these hard to fake, sexually selected traits.
Natural selection most generally makes nature the measure against which individuals and individual traits are more or less likely to survive. "Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity and material cycles (i.e., exchange of materials between living and nonliving parts) within the system." Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection.
Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species. Selection can act at multiple levels simultaneously. An example of selection occurring below the level of the individual organism are genes called transposons, which can replicate and spread throughout a genome. Selection at a level above the individual, such as group selection, may allow the evolution of co-operation, as discussed below.
In addition to being a major source of variation, mutation may also function as a mechanism of evolution when there are different probabilities at the molecular level for different mutations to occur, a process known as mutation bias. If two genotypes, for example one with the nucleotide G and another with the nucleotide A in the same position, have the same fitness, but mutation from G to A happens more often than mutation from A to G, then genotypes with A will tend to evolve. Different insertion vs. deletion mutation biases in different taxa can lead to the evolution of different genome sizes. Developmental or mutational biases have also been observed in morphological evolution. For example, according to the phenotype-first theory of evolution, mutations can eventually cause the genetic assimilation of traits that were previously induced by the environment.
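A small numerical sketch can illustrate the G-versus-A example above. Assuming equal fitness for both genotypes and treating mutation as the only force acting (the mutation rates below are invented round numbers, much higher than realistic rates so the toy converges quickly), the allele frequency settles at the mutation-pressure equilibrium u / (u + v).

```python
def mutation_pressure(p_a, u, v, generations):
    """Allele frequency change from mutation alone: u is the per-generation rate of
    G -> A mutation, v the rate of A -> G, with both genotypes equally fit."""
    for _ in range(generations):
        p_a = p_a * (1.0 - v) + (1.0 - p_a) * u
    return p_a

# If G -> A mutations happen ten times more often than the reverse (rates invented),
# the population ends up mostly A even though neither allele is fitter.
print(round(mutation_pressure(p_a=0.0, u=1e-4, v=1e-5, generations=200_000), 3))
# about 0.909, the mutation-pressure equilibrium u / (u + v)
```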
Mutation bias effects are superimposed on other processes. If selection would favor either one out of two mutations, but there is no extra advantage to having both, then the mutation that occurs the most frequently is the one that is most likely to become fixed in a population. Mutations leading to the loss of function of a gene are much more common than mutations that produce a new, fully functional gene. Most loss of function mutations are selected against. But when selection is weak, mutation bias towards loss of function can affect evolution. For example, pigments are no longer useful when animals live in the darkness of caves, and tend to be lost. This kind of loss of function can occur because of mutation bias, and/or because the function had a cost, and once the benefit of the function disappeared, natural selection leads to the loss. Loss of sporulation ability in a bacterium during laboratory evolution appears to have been caused by mutation bias, rather than natural selection against the cost of maintaining sporulation ability. When there is no selection for loss of function, the speed at which loss evolves depends more on the mutation rate than it does on the effective population size, indicating that it is driven more by mutation bias than by genetic drift.
Genetic drift is the change in allele frequency from one generation to the next that occurs because alleles are subject to sampling error. As a result, when selective forces are absent or relatively weak, allele frequencies tend to "drift" upward or downward randomly (in a random walk). This drift halts when an allele eventually becomes fixed, either by disappearing from the population, or replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that began with the same genetic structure to drift apart into two divergent populations with different sets of alleles.
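The random walk described above is easy to simulate. The sketch below assumes a textbook Wright-Fisher model of a single neutral locus (the population size, starting frequency and seed are arbitrary choices for illustration): each generation, all 2N gene copies are drawn from the current allele frequency, and sampling error alone eventually fixes or eliminates the allele.

```python
import random

def wright_fisher_drift(p, n_individuals, max_generations, seed=1):
    """Neutral drift at one diploid locus: each generation, every one of the 2N gene
    copies is drawn independently from the current allele frequency (no selection)."""
    rng = random.Random(seed)
    copies = 2 * n_individuals
    for generation in range(max_generations):
        if p in (0.0, 1.0):  # the allele has been lost or fixed purely by chance
            return p, generation
        count = sum(rng.random() < p for _ in range(copies))
        p = count / copies
    return p, max_generations

# Returns (final frequency, generations elapsed); the outcome varies with the seed.
print(wright_fisher_drift(p=0.5, n_individuals=50, max_generations=10_000))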
It is usually difficult to measure the relative importance of selection and neutral processes, including drift. The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research.
The neutral theory of molecular evolution proposed that most evolutionary changes are the result of the fixation of neutral mutations by genetic drift. Hence, in this model, most genetic changes in a population are the result of constant mutation pressure and genetic drift. This form of the neutral theory is now largely abandoned, since it does not seem to fit the genetic variation seen in nature. However, a more recent and better-supported version of this model is the nearly neutral theory, where a mutation that would be neutral in a small population is not necessarily neutral in a large population. Other alternative theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft.
The time for a neutral allele to become fixed by genetic drift depends on population size, with fixation occurring more rapidly in smaller populations. What matters is not the raw number of individuals in a population, but rather a measure known as the effective population size. The effective population size is usually smaller than the total population since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is the smallest. The effective population size may not be the same for every gene in the same population.
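One standard way to see why the effective size is usually smaller than the census size is the harmonic-mean approximation for a population whose numbers fluctuate between generations; this is a common textbook result rather than a formula given in the article, and the census figures below are invented.

```python
def effective_size_fluctuating(census_sizes):
    """Harmonic-mean approximation of effective population size for a population whose
    census size fluctuates from generation to generation."""
    t = len(census_sizes)
    return t / sum(1.0 / n for n in census_sizes)

# A one-generation bottleneck of 10 individuals dominates two generations of 1,000.
print(round(effective_size_fluctuating([1000, 10, 1000]), 1))  # about 29.4
```

A single bottleneck generation of 10 individuals drags the effective size down to about 29, even though the population spends most of its time at 1,000.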
Recombination allows alleles on the same strand of DNA to become separated. However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other and genes that are close together tend to be inherited together, a phenomenon known as linkage. This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium. A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft. Genetic draft caused by the fact that some neutral genes are genetically linked to others that are under selection can be partially captured by an appropriate effective population size.
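The comparison of "how often two alleles occur together on a single chromosome compared to expectations" has a standard quantitative form, the linkage disequilibrium coefficient D. The sketch below uses hypothetical haplotype counts (the numbers are made up) to compute it.

```python
def linkage_disequilibrium(haplotype_counts):
    """D = observed frequency of the AB haplotype minus the frequency expected if
    alleles A and B were associated at random (p_A * p_B)."""
    total = sum(haplotype_counts.values())
    p_ab = haplotype_counts["AB"] / total
    p_a = (haplotype_counts["AB"] + haplotype_counts["Ab"]) / total
    p_b = (haplotype_counts["AB"] + haplotype_counts["aB"]) / total
    return p_ab - p_a * p_b

# Hypothetical counts from a sample of 200 chromosomes.
counts = {"AB": 90, "Ab": 10, "aB": 10, "ab": 90}
print(linkage_disequilibrium(counts))  # 0.2: A and B co-occur far more often than chance
```

D = 0 would mean the alleles at the two loci are associated at random; the strongly positive value here reflects the kind of association that genetic hitchhiking exploits.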
Gene flow is the exchange of genes between populations and between species. The presence or absence of gene flow fundamentally changes the course of evolution. Due to the complexity of organisms, any two completely isolated populations will eventually evolve genetic incompatibilities through neutral processes, as in the Bateson-Dobzhansky-Muller model, even if both populations remain essentially identical in terms of their adaptation to the environment.
If genetic differentiation between populations develops, gene flow between populations can introduce traits or alleles which are disadvantageous in the local population, and this may lead organisms within these populations to evolve mechanisms that prevent mating with genetically distant populations, eventually resulting in the appearance of new species. Thus, exchange of genetic information between individuals is fundamentally important for the development of the biological species concept (BSC).
During the development of the modern synthesis, Sewall Wright developed his shifting balance theory, which held that gene flow between partially isolated populations was an important aspect of adaptive evolution. However, there has recently been substantial criticism of the importance of the shifting balance theory.
Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by co-operating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed.
These outcomes of evolution are sometimes divided into macroevolution, which is evolution that occurs at or above the level of species, such as extinction and speciation, and microevolution, which refers to smaller evolutionary changes, such as adaptations, within a species or population. In general, macroevolution is regarded as the outcome of long periods of microevolution. Thus, the distinction between micro- and macroevolution is not a fundamental one – the difference is simply the time involved. However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels – with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction.
A common misconception is that evolution has goals or long-term plans; realistically however, evolution has no long-term goal and does not necessarily produce greater complexity. Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing and simple forms of life still remain more common in the biosphere. For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size, and constitute the vast majority of Earth's biodiversity. Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable. Indeed, the evolution of microorganisms is particularly important to modern evolutionary research, since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time.
Adaptation is the process that makes organisms better suited to their habitat. Also, the term adaptation may refer to a trait that is important for an organism's survival, such as the adaptation of horses' teeth to the grinding of grass. By using the term adaptation for the evolutionary process and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection. The following definitions are due to Theodosius Dobzhansky: adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats; adaptedness is the state of being adapted, the degree to which an organism is able to live and reproduce in a given set of habitats; and an adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing.
Adaptation may cause either the gain of a new feature, or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance by both modifying the target of the drug, or increasing the activity of transporters that pump the drug out of the cell. Other striking examples are the bacteria Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment, Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing, and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol. An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability).
Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mice feet and primate hands, due to the descent of all these structures from a common mammalian ancestor. However, since all living organisms are related to some extent, even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology.
During evolution, some structures may lose their original function and become vestigial structures. Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes, the non-functional remains of eyes in blind cave-dwelling fish, wings in flightless birds, and the presence of hip bones in whales and snakes. Examples of vestigial structures in humans include wisdom teeth, the coccyx, the vermiform appendix, and other behavioural vestiges such as goose bumps and primitive reflexes.
However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function, but which coincidentally became somewhat useful for some other function in the process. One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree—an exaptation. Within cells, molecular machines such as the bacterial flagella and protein sorting machinery evolved by the recruitment of several pre-existing proteins that previously had different functions. Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes.
A critical principle of ecology is that of competitive exclusion: no two species can occupy the same niche in the same environment for a long time. Consequently, natural selection will tend to force species to adapt to different ecological niches. This may mean that, for example, two species of cichlid fish adapt to live in different habitats, which will minimise the competition between them for food.
An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations. This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features. These studies have shown that evolution can alter development to produce new structures, such as embryonic bone structures that develop into the jaw in other animals instead forming part of the middle ear in mammals. It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles. It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes.
Interactions between organisms can produce both conflict and co-operation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called co-evolution. An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake.
Not all co-evolved interactions between species involve conflict. Many cases of mutually beneficial interactions have evolved. For instance, an extreme cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil. This is a reciprocal relationship as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system.
Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer.
Such cooperation within species may have evolved through the process of kin selection, which is where one organism acts to help raise a relative's offspring. This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on. Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms.
There are multiple ways to define the concept of "species". The choice of definition is dependent on the particularities of the species concerned. For example, some species concepts apply more readily toward sexually reproducing organisms while others lend themselves better toward asexual organisms. Despite the diversity of various species concepts, these various concepts can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic. The biological species concept (BSC) is a classic example of the interbreeding approach. Defined by Ernst Mayr in 1942, the BSC states that "species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups". Despite its wide and long-term use, the BSC like others is not without controversy, for example because these concepts cannot be applied to prokaryotes, and this is called the species problem. Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species.
Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading the new genetic variants also to the other populations. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example.
Speciation has been observed multiple times both under controlled laboratory conditions and in nature. In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four mechanisms for speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms. As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed.
The second mechanism of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect causes rapid speciation after an increase in inbreeding increases selection on homozygotes, leading to rapid genetic change.
The third mechanism of speciation is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations. Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localised metal pollution from mines. Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance.
Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population. Generally, sympatric speciation in animals requires the evolution of both genetic differences and non-random mating, to allow reproductive isolation to evolve.
One type of sympatric speciation involves cross-breeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. However, it is more common in plants because plants often double their number of chromosomes, to form polyploids. This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already. An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa cross-bred to give the new species Arabidopsis suecica. This happened about 20,000 years ago, and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process. Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms.
Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged. In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population and the organisms undergoing speciation and rapid evolution are found in small populations or geographically restricted habitats and therefore rarely being preserved as fossils.
Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction. Nearly all animal and plant species that have lived on Earth are now extinct, and extinction appears to be the ultimate fate of all species. These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events. The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs went extinct, is the most well-known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of species driven to extinction. The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1000 times greater than the background rate and up to 30% of current species may be extinct by the mid 21st century. Human activities are now the primary cause of the ongoing extinction event; global warming may further accelerate it in the future.
The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered. The causes of the continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (competitive exclusion). If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction. The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors.
Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed. The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions. The beginning of life may have included self-replicating molecules such as RNA and the assembly of simple cells.
All organisms on Earth are descended from a common ancestor or ancestral gene pool. Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events. The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with no clear purpose resemble functional ancestral traits and finally, that organisms can be classified using these similarities into a hierarchy of nested groups – similar to a family tree. However, modern research has suggested that, due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree since some genes have spread independently between distantly related species.
Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record. By comparing the anatomies of both modern and extinct species, paleontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry.
More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids. The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations. For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes and analyzing the few areas where they differ helps shed light on when the common ancestor of these species existed.
Prokaryotes inhabited the Earth from approximately 3–4 billion years ago. No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years. The eukaryotic cells emerged between 1.6 – 2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis. The engulfed bacteria and the host cell then underwent co-evolution, with the bacteria evolving into either mitochondria or hydrogenosomes. Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants.
The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until about 610 million years ago when multicellular organisms began to appear in the oceans in the Ediacaran period. The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria.
Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct. Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis.
About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals. Insects were particularly successful and even today make up the majority of animal species. Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from "reptile"-like lineages), mammals around 129 million years ago, homininae around 10 million years ago and modern humans around 250,000 years ago. However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes.
Concepts and models used in evolutionary biology, such as natural selection, have many applications.
Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals. More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. In repeated rounds of mutation and selection proteins with valuable properties have evolved, for example modified enzymes and new antibodies, in a process called directed evolution.
Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders. For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves. This helped identify genes required for vision and pigmentation.
In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with the simulation of artificial selection. Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s. He used evolution strategies to solve complex engineering problems. Genetic algorithms in particular became popular through the writing of John Holland. Practical applications also include automatic evolution of computer programs. Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems.
In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists. However, evolution remains a contentious concept for some theists.
While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution. As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that humans share common ancestry with apes and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals. In some countries, notably the United States, these tensions between science and religion have fuelled the current creation-evolution controversy, a religious conflict focusing on politics and public education. While other scientific fields such as cosmology and Earth science also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists.
The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design, to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case.
| http://www.mashpedia.com/Evolution | 13
19 | 1. All combinations of goods and services that provide the same utility are identified by the: A) law of constant marginal utility. B) law of increasing marginal utility. C) law of diminishing marginal utility. D) indifference curve. E) elasticity of demand.
2. The budget set defines the combinations of good X and Y that A) maximizes supplier's profit. B) maximizes consumer's utility. C) are affordable if all income is spent. D) are desirable to the consumer. E) are affordable to the consumer.
3. An increase in money income shifts the consumer's: A) budget line to the right. B) budget line to the left. C) indifference curves to the left. D) indifference curves to the right. E) marginal utilities per dollar spent.
4. A fall in the price of a good increases the real income or purchasing power of consumers so that they are able to buy more of the product. This statement best describes: A) the income effect. B) the substitution effect. C) a complementary good. D) a luxury good. E) an inferior good.
5. If the price of a product falls, that product becomes cheaper and people will want to purchase more of it in place of other goods. This statement best describes: A) the income effect. B) the substitution effect. C) a complementary good. D) a luxury good. E) an inferior good.
6. Suppose you spend your budget only on milk and bread. If both are normal goods and the price of milk increases, the quantity of bread you choose to buy will change. How? A) It will either increase or decrease, or something. B) The substitution effect suggests more will be purchased, but the income effect suggests less will be purchased. C) The income effect suggests more will be purchased, but the substitution effect suggests less will be purchased. D) Both the income and substitution effects suggest that more will be purchased. E) Both the income and substitution effects suggest that less will be purchased.
7. Given limited budgets, consumers obtain the most satisfaction if they purchase goods and services that: A) provide the highest level of marginal utility. B) are hi tech or are of the best quality. C) provide the highest level of marginal utility per dollar spent (best bang for your dollar). D) cost the least. E) cost the most, so they can show off.
8. If the quantity of X is measured on the horizontal axis and the quantity of Y is measured on the vertical axis, the slope of the budget constraint will decrease if the: A) price of Y increases. B) price of Y decreases. C) marginal utility of X decreases. D) income decreases. E) income increases.
9. What is the maximum amount of good Y that can be purchased if X and Y are the only two goods available for purchase and Px = $10, Py = $20, X = 20, and M = 400? A) 20. B) 15. C) 10. D) 5. E) 0.
10. Suppose a consumer with an income of $100 who is faced with Px = 1 and Py = 1/2. What is the market rate of substitution between good X (horizontal axis) and good Y (vertical axis)? A) 0.50. B) -1.0. C) -2.0. D) -4.0. E) -0.50
11. The difference between a price decrease and an increase in income is that A) A price decrease leaves real income unchanged while an increase in income increases real income. B) A price decrease decreases real income while an increase in income increases real income. C) A price decrease does not affect the consumption of other goods while an increase in income does. D) An increase in income does not affect the slope of the budget line while a decrease in price does change the slope. E) There is no difference.
12. Joe Schmoe prefers a three pack of soda to a six-pack. What properties does this preference violate? A) More is better. B) Diminishing MRS. C) Transitivity. D) Completeness. E) Diminishing marginal utility.
13. A situation where a consumer says he does not know his preference ordering for bundles X and Y would violate the property of: A) more is better. B) completeness. C) substitutability. D) complementarity. E) jointness.
14. Sam Voter prefers Ronald to Joe, Joe to Gary, and Gary to Ronald. Sam's preferences A) are not complete. B) are not transitive. C) indicate that he is a liberal. D) are consistent with our assumptions about consumer behavior. E) are irrational.
15. By the property of "more is better" and transitivity, indifference curves A) may overlap one another. B) do not intersect one another. C) can intersect one another only twice. D) can intersect one another only once. E) are straight lines.
16. Given that income is $200 and the price of good Y is $40. What is the vertical intercept of the budget line? A) 5. B) 1/5. C) 20. D) 400. E) 8,000.
17. The equilibrium consumption bundle is A) the cheapest bundle of goods money can buy. B) the bundle where the budget line and the indifference curve meet. C) the affordable bundle that yields the greatest satisfaction to the consumer. D) any affordable bundle in the budget set. E) any bundle that is the farthest from the origin.
18. At the point of consumer equilibrium the slope of the budget line is equal to the: A) indifference curve. B) consumer preference. C) market rate of substitution. D) marginal rate of substitution. E) the slope of the demand curve.
19. If you wish to open a store and you do not like risk, it would be wise to sell: A) only normal goods. B) all inferior goods. C) a mix of normal and inferior goods. D) luxury goods. E) high tech goods.
20. The total earnings of a worker are represented by E = 100 + $10(24 - L), where E is earnings and L is the number of hours of leisure. How much will the worker earn if he takes 14 hours of leisure per day? A) $200. B) $100. C) $240. D) $150. E) $140.
21. Suppose earnings are given by E = $60 + $7(24 - L), where E is earnings and L is the hours of leisure. How much is this person working if their daily earnings are $116? A) 6 hours. B) 8 hours. C) 12 hours. D) 16 hours. E) 18 hours.
Level/Year: college. Subject: economics
| http://www.justanswer.com/homework/35v7e-1-combinations-goods-services-provide.html | 13
15 | Module 11: Air
Air pollution is a major environmental risk to health and is estimated to cause approximately 2 million deaths worldwide per year. Exposure to air pollutants is largely beyond the control of individuals and requires action by public authorities at the national, regional, and international levels. The WHO Air Quality Guidelines contain up-to-date assessments of the health effects of air pollution, and the guidelines recommend targets for air quality at which the health risks will be significantly reduced. By reducing particulate matter (PM10) pollution from 70 to 20 micrograms per cubic meter, we can reduce air quality-related deaths by around 15%. By reducing air pollution levels, we can help countries reduce the global burden of disease from respiratory infections, heart disease, and lung cancer. The WHO guidelines provide temporary targets for countries that still have very high levels of air pollution. The targets call for a maximum of three days a year with up to 150 micrograms of PM10 per cubic meter (for short term peaks of air pollution), and 70 micrograms per cubic meter for long term exposures to PM10.(1)
Wood Burning Stoves(2)
Approximately 50% of people, almost all in developing countries, rely on coal and biomass in the form of wood and crop residues for domestic energy. These materials are typically burned in simple stoves with very incomplete combustion. Consequently, women and children are exposed to high levels of indoor air pollution every day, which increases the risk of chronic obstructive pulmonary disease and of acute respiratory infection in childhood. It is the greatest cause of death among children under 5 years of age in developing countries. Evidence also shows associations with low birth weight, increased infant and perinatal mortality, pulmonary tuberculosis, nasopharyngeal and laryngeal cancer, and cataract. Indoor air pollution is a major global public health threat requiring increased efforts in the areas of research and policy-making. There should be more research on its health effects, particularly in relation to respiratory infections.
Poverty is one of the main barriers to the adoption of cleaner fuels. Wood is the most common example of biomass fuel, but the use of animal dung and crop residues is also widespread. Many of the substances in biomass smoke can harm human health. The most toxic byproducts are particles, carbon monoxide, nitrous oxides, sulphur oxides (principally from coal), formaldehyde, and polycyclic organic matter. The majority of households in developing countries burn biomass fuels in open fireplaces, or in a poorly functioning earth or metal stove. Combustion is very incomplete in most of these stoves, resulting in substantial emissions which, in the presence of poor ventilation, produce very high levels of indoor pollution.
Exposure can be reduced by using improved stoves, better housing, cleaner fuels, and behavioral changes. Cleaner fuels, especially liquefied petroleum gas, offer the best long-term option in terms of reducing pollution and protecting the environment, but most poor communities using biomass are unlikely to be able to make the transition to these types of cleaner fuels. The use of improved biomass stoves has given varying results and has often been unsuccessful. Until recently, the main emphasis of stove programs has been to reduce the use of wood, and consequently there has been relatively little evaluation of reductions in exposure. Other factors such as seasonal energy requirements and cultural beliefs are also important considerations.(3)
More than half of the world’s population relies on dung, wood, crop waste or coal to meet their most basic energy needs. Cooking and heating with such solid fuels on open fires or stoves without chimneys leads to indoor air pollution. This indoor smoke contains a range of health-damaging pollutants, including small soot or dust particles that are able to penetrate deep into the lungs. People in developing countries are commonly exposed to very high levels of pollution for 3-7 hours daily over many years. During the winter in cold and mountainous areas, exposure may occur over a substantial portion of each 24-hour period. Because of their customary involvement in cooking, women’s exposure is much higher than men’s. Young children are often carried on their mothers’ backs while cooking is in progress, and they therefore spend many hours breathing smoke. Pollution attributable to the use of biomass fuel also causes eye irritation and may cause cataract. (4)
One study found that less than 20% of homes in poor areas of northeastern Brazil and central Mexico were safe for living. Wood smoke contains many chemical products such as carcinogens, carbon monoxide, and hydrocarbons that negatively impact human health.(5) “In fact, nearly half the world's population prepares meals with wood or wood-replacement fuels on primitive stoves without chimneys.”(6) The consequences of indoor air pollution are revealed in a study conducted in Nepal and India which examined the association between cooking with unflued indoor stoves and the development of cataract. This study found that the use of solid fuel indoor stoves is associated with increased risk of developing cataract in women.(7)
Outdoor Air Pollution
Outdoor air pollution results largely from the combustion of fossil fuels for transport, power generation, and other human activities. Combustion processes produce a mixture of pollutants that comprise both primary emissions, such as diesel soot particles and lead, and the products of atmospheric transformation, such as ozone and sulfate particles that form from the burning of sulfur-containing fuel. Urban air pollution is primarily generated by transportation vehicles and energy production, and this form of pollution kills an estimated 1.2 million people annually. Today, many developing world cities face very severe levels of urban air pollution.(8)
Smoking impacts many forms of human health, including increased risk for lung disease and heart disease. It has also been shown that exposure to cigarette smoke can contribute to the development of cataracts later in life. One third of all women and two thirds of men in India use tobacco in some form, such as smoking tobacco in cigarettes. This statistic is cause for concern since smoking is reported to be a risk factor in eye diseases such as cataract, age-related macular degeneration, and glaucoma.(9) Studies show that cigarettes contribute to the formation of cataracts in two ways. First, free radicals present in tobacco smoke attack the eye directly, potentially damaging lens proteins and the fiber cell membrane in the lens.(10),(11) Second, smoking reduces the body's levels of antioxidants and certain enzymes which may help remove damaged protein from the lens.(12),(13) Over time, this damage can double or triple the risk of developing cataracts versus a non-smoker from a similar background.(14)
Replacing unflued stoves with flued stoves would greatly improve ventilation. Cooking in an unventilated kitchen doubles the risk of cataract compared with cooking in a fully or partially ventilated kitchen. Allowing smoke to exit the house, and the use of cleaner fuels, are ideal options. It is best to replace the solid-fuel stoves with stoves that use liquid fuel or gas. Since resources are limited in developing countries, vented solid-fuel stoves may be a more economic solution.(15)
Twelve studies have assessed the risk for developing cataract in ex-smokers. Ex-smokers have a reduced risk for developing cataract compared with current smokers. However, the risk is still greater than for individuals who have never smoked. Also, the greater the intensity of previous smoking, the longer it takes for the increased risk to decline. However, cessation does ultimately decrease the risk. Education and public health campaigns aimed at increasing awareness of the dangers of smoking may help to curb rates in the developing world.(16)
(1) “Air quality and health.” WHO. (August 2008). Accessed 10 July 2010. <http://www.who.int/mediacentre/factsheets/fs313/en/index.html>.
(2) Bruce, N., Perez-Padilla, R., and Albalak, R. “Indoor air pollution in developing countries: a major environmental and public health challenge.” Bulletin of the World Health Organization. (2000). 78(9). Accessed 8 July 2010. <http://www.who.int/bulletin/archives/78(9)1078.pdf>.
(7) Pokhrel AK, Smith KR, Khalakdina A, Deuja A, Bates MN. “Case-control study of indoor cooking smoke exposure and cataract in Nepal and India.” Int J Epidemiol. (2005). 34:702–708
(8) Bruce, N., Perez-Padilla, R., and Albalak, R. “Indoor air pollution in developing countries: a major environmental and public health challenge.” Bulletin of the World Health Organization. (2000). 78(9). Accessed 8 July 2010. <http://www.who.int/bulletin/archives/78(9)1078.pdf>.
(9) World Health Organization. Tobacco or health, a global status report. Geneva: WHO, 1997.
(10) McCarty CA, Nanjan MB, Taylor HR. “Attributable risk estimates for cataract to prioritize medical and public health action.” Invest Ophthalmol Vis Sci. (2000). 41:3720–25.
(11) Van Heyningen R, Pirie A. “Naphthalene cataract in pigmented and albino rabbits.” Exp Eye Res.1976. 22:393–94.
(12) Shalini VK, Luthra M, Srinivas L et al. “Oxidative damage to the eye lens caused by cigarette smoke and fuel smoke condensates” Indian J Biochem Biophys.(1994). 31:261–66.
(13) Wegener A, Kaegler M, Stinn W. “Frequency and nature of spontaneous age-related eye lesions observed in a 2-year inhalation toxicity study in rats.” Ophthalmic Res.(2002). 34:281–87.
(14) Kelly SP, Thornton J, Edwards R, et al. “Smoking and cataract: review of causal association.” J Cataract Refract Surg. (2005). 31:2395–404.
(15) “Module 12: Smoke Exposure and Cataract.” Accessed 10 July 2010. <http://www.uniteforsight.org/community-eye-health-course/module12>.
(16) Kelly SP, Thornton J, Edwards R, et al. “Smoking and cataract: review of causal association.” J Cataract Refract Surg. (2005). 31:2395–404. | http://www.uniteforsight.org/environmental-health/module11 | 13 |
39 | 4.1.1 Dimensional and Temporal Scale Factors
In Section 2 the properties of fission chain reactions were described using two simplified mathematical models: the discrete step chain reaction, and the more accurate continuous chain reaction model. A more detailed discussion of fission weapon design is aided by introducing more carefully defined means of quantifying the dimensions and time scales involved in fission explosions. These scale factors make it easier to analyze time-dependent neutron multiplication in systems of varying composition and geometry.
These scale factors are based on an elaboration of the continuous chain reaction model. It uses the concept of the "average neutron collision" which combines the scattering, fission, and absorption cross sections, with the total number of neutrons emitted per fission, to create a single figure of merit which can be used for comparing different assemblies.
The basic idea is this: when a neutron interacts with an atom, we can think of the interaction as consisting of two steps: first, the incoming neutron is removed by the collision; second, some number of neutrons (possibly zero) are emitted from the collision.
If the interaction is ordinary neutron capture, then no neutron is emitted from the collision. If the interaction is a scattering event, then one neutron is emitted. If the interaction is a fission event, then the average number of neutrons produced per fission is emitted (this average number is often designated by nu). By combining these we get the average number of neutrons produced per collision (also called the number of secondaries), designated by c:
Eq. 4.1.1-1 c = (cross_scatter + cross_fission*avg_n_per_fission)/cross_total
the total cross section, cross_total, is equal to:
Eq. 4.1.1-2 cross_total = cross_scatter + cross_fission + cross_absorb
The total neutron mean free path, the average distance a neutron will travel before undergoing a collision, is given by:
Eq. 4.1.1-3 MFP = 1/(cross_total * N)
where N is the number of atoms per unit volume, determined by the density.
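As a concrete check on Eq. 4.1.1-3, the short Python sketch below (not part of the original text) computes N from the material density and atomic weight and then evaluates the mean free path. The cross section and density are the Pu-239 values quoted in Table 4.1.2-1 further down, so the result should come out near the 2.54 cm listed there.

  # Illustrative evaluation of Eq. 4.1.1-3 using the Pu-239 values from Table 4.1.2-1 below:
  # sigma_total = 7.9 barns, density = 19.8 g/cm^3, atomic weight taken as 239 g/mol.
  AVOGADRO = 6.022e23          # atoms per mole
  BARN = 1.0e-24               # cm^2

  def mean_free_path(sigma_total_barns, density_g_cm3, atomic_weight):
      n_atoms = density_g_cm3 * AVOGADRO / atomic_weight       # atoms per cm^3
      return 1.0 / (sigma_total_barns * BARN * n_atoms)        # cm

  print(mean_free_path(7.9, 19.8, 239.0))   # ~2.5 cm, matching the table's total_MFP for Pu-239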
In computing the effective reactivity of a system we must also take into account the rate at which neutrons are lost by escape from the system. This rate is measured by the number of neutrons lost per collision. For a given geometry, the rate is determined by the size of the system in MFPs. Put another way, for a given geometry and degree of reactivity, the size of the system, as measured in MFPs, is determined only by the parameter c.
An indication of the effect of c on the size of a critical assembly can be gained by the following table of critical radii (in MFPs) for bare (unreflected) spheres:
Table 4.1.1-1. Critical Radius (r_c) vs Number of Secondaries (c)
  c value     r_c (crit. radius in MFP)
  1.0         infinite
  1.02        12.027
  1.05         7.277
  1.10         4.873
  1.20         3.172
  1.40         1.985
  1.60         1.476
If the composition, geometry, and reactivity of a system are specified then the size of a system in MFPs is fixed. From Eq. 4.1.1-3 we can see that the physical size or scale of the system (measured in centimeters, say) is inversely proportional to its density. Since the mass of the system is equal to volume*density, and volume varies with the cube of the radius, we can immediately derive the following scaling law:
Eq. 4.1.1-4 mcrit_c = mcrit_0/(rho/rho_0)^2 = mcrit_0/C^2
That is, the critical mass of a system is inversely proportional to the square of the density. C is the degree of compression (density ratio). This scaling law applies to bare cores; it also applies to cores with a surrounding reflector, if the reflector density has an identical degree of compression. This is usually not the case in real weapon designs, a higher degree of compression generally being achieved in the core than in the reflector.
An approximate relationship for this is:
Eq. 4.1.1-5 mcrit_c = mcrit_0/(C_c^1.2 * C_r^0.8)
where C_c is the compression of the core, and C_r is the compression of the reflector. Note that when C_c = C_r, then this is identical to Eq. 4.1.1-4. For most implosion weapon designs (since C_c > C_r) we can use the approximate relationship:
Eq. 4.1.1-6 mcrit_c = mcrit_0/C_c^1.7
These same considerations are also valid for any other specified degree of reactivity, not just critical cores.
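The scaling laws above are easy to tabulate. The Python sketch below is illustrative only: the baseline critical mass is a placeholder (1.0 in arbitrary units) and the compression factors are arbitrary example values; it simply evaluates Eq. 4.1.1-4, Eq. 4.1.1-5 and the approximation of Eq. 4.1.1-6.

  # Illustrative evaluation of the compression scaling laws (Eqs. 4.1.1-4, -5 and -6).
  # The baseline critical mass and the compression factors below are placeholder values.
  def mcrit_uniform(mcrit_0, c):
      """Eq. 4.1.1-4: core and reflector compressed by the same factor C."""
      return mcrit_0 / c**2

  def mcrit_differential(mcrit_0, c_core, c_refl):
      """Eq. 4.1.1-5: core and reflector compressed by different factors."""
      return mcrit_0 / (c_core**1.2 * c_refl**0.8)

  def mcrit_implosion_approx(mcrit_0, c_core):
      """Eq. 4.1.1-6: approximation for typical implosion designs (C_c > C_r)."""
      return mcrit_0 / c_core**1.7

  m0 = 1.0   # critical mass at normal density, arbitrary units
  for c_core in (1.5, 2.0, 3.0):
      c_refl = 0.5 * (1.0 + c_core)   # placeholder: reflector compressed less than the core
      print(f"C_core={c_core}: uniform {mcrit_uniform(m0, c_core):.3f}, "
            f"differential {mcrit_differential(m0, c_core, c_refl):.3f}, "
            f"approx {mcrit_implosion_approx(m0, c_core):.3f}")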
Fission explosives depend on a very rapid release of energy. We are thus very interested in measuring the rate of the fission reaction. This is done using a quantity called the effective multiplication rate or "alpha". The neutron population at time t is given by:
Eq. 4.1.1-7 N_t = N_0*e^(alpha*t)
Alpha thus has units of 1/t, and the neutron population will increase by a factor of e (2.71...) in a time interval equal to 1/alpha. This interval is known as the "time constant" (or "e-folding time") of the system, t_c. The more familiar concept of "doubling time" is related to alpha and the time constant simply by:
Eq. 4.1.1-8 doubling_time = (ln 2)/alpha = (ln 2)*t_c
Alpha is often more convenient than t_c or doubling times since its value is bounded and continuous: zero at criticality; positive for supercritical systems; and negative for subcritical systems. The time constant goes to infinity at criticality. The term "time constant" seems unsatisfactory for this discussion though since it is hardly constant, t_c continually changes during reactivity insertion and disassembly. Therefore I will henceforth refer to the quantity 1/alpha as the "multiplication interval".
Alpha is determined by the reactivity (c and the probability of escape), and the length of time it takes an average neutron (for a suitably defined average) to traverse an MFP. If we assume no losses from the system then alpha can be calculated by:
Eq. 4.1.1-9 alpha = (1/tau)*(c - 1) = (v_n/total_MFP)*(c - 1)
where tau is the average neutron lifetime between collisions; and v_n is the average neutron velocity (which is 2.0x10^9 cm/sec for a 2 MeV neutron, the average fission spectrum energy). The "no losses" assumption is an idealization. It provides an upper bound for reaction rates, and provides a good indication of the relative reaction rates in different materials. For very large assemblies, consisting of many critical masses, neutron losses may actually become negligible and approach the alphas given below.
The factor c - 1 used above is the "neutron number", it represents the average neutron excess per collision. In real systems there is always some leakage, when this leakage is taken in account we get the "effective neutron number" which is always less than c - 1. When the effective neutron number is zero the system is exactly critical.
4.1.2 Nuclear Properties of Fissile Materials
The actual value of alpha at a given density is the result of many interacting factors: the relative neutron density and cross section values as a function of neutron energy, weighted by neutron velocity, which in turn is determined by the fission neutron energy spectrum modified by the effects of both moderation and inelastic scattering.
Ideally the value of alpha should be determined by "integral experiments", that is, measured directly in the fissile material where all of these effects will occur naturally. Calculating tau and alpha from differential cross section measurements, adjusted neutron spectrums, etc. is fraught with potential error.
In the table below I give some illustrative values of c, total cross section, total mean free path lengths for the principal fissionable materials (at 1 MeV), and the alphas at maximum uncompressed densities. Compression to above normal density (achievable factors range up to 3 or so in weapons) reduce the MFPs, alphas (and the physical dimensions of the system) proportionately.
Table 4.1.2-1 Fissile Material Properties
  Isotope   c      cross_total  total_MFP  density  alpha         t_double
                   (barns)      (cm)                (1/microsec)  (nanosec)
  U-233     1.43   6.5          3.15       18.9     273           2.54
  U-235     1.27   6.8          3.04       18.9     178           3.90
  Pu-239    1.40   7.9          2.54       19.8     315           2.20
Values of c and total MFP can be easily calculated for mixtures of materials as well. In real fission weapons (unboosted) effective values for alpha are typically in the range 25-250 (doubling times of 2.8 to 28 nanoseconds).
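The alpha and doubling-time columns of Table 4.1.2-1 follow directly from Eq. 4.1.1-8 and Eq. 4.1.1-9. The Python sketch below (illustrative, not part of the original text) recomputes them from the c and total_MFP columns, using the 2 MeV average neutron velocity quoted earlier; like the table, it assumes the idealized no-loss case.

  # Recompute the alpha and doubling-time columns of Table 4.1.2-1 from Eq. 4.1.1-9
  # and Eq. 4.1.1-8 (illustrative sketch; no-leakage idealization, as in the table).
  import math

  V_N = 2.0e9   # cm/s, average fission-spectrum neutron velocity quoted above

  materials = {              # name: (c, total_MFP in cm) from Table 4.1.2-1
      "U-233":  (1.43, 3.15),
      "U-235":  (1.27, 3.04),
      "Pu-239": (1.40, 2.54),
  }

  for name, (c, mfp) in materials.items():
      alpha = (V_N / mfp) * (c - 1.0)            # Eq. 4.1.1-9, per second
      t_double = math.log(2.0) / alpha           # Eq. 4.1.1-8, seconds
      print(f"{name}: alpha ~ {alpha/1e6:.0f}/microsec, doubling time ~ {t_double*1e9:.2f} ns")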
All nations interested in nuclear weapons technology have performed integral experiments to measure alpha, but published data is sparse and in general is limited to the immediate region of criticality. Collecting data for systems at high densities requires extremely difficult high explosive experiments, and data for high alpha systems can only be done in actual nuclear weapon tests.
Some integral alpha data is available for systems near prompt critical. The most convenient measurements are of the negative alpha value for fast neutron chain reactions at delayed criticality. Since at prompt critical alpha is exactly zero, the ratio of the magnitude of this delayed critical measurement to the fraction of fission neutrons that are delayed allows the alpha value to be calculated. These were the only sort of alpha measurements available to the Manhattan Project for the design of the first atomic bombs.
The most informative values are from the Godiva and Jezebel unreflected reactor experiments. These two systems used bare metal weapon grade cores, so the properties of weapons material was being measured directly. Godiva consisted of oralloy (93.71 wt% U-235, 5.24 wt% U-238, 1.05 wt% U-234), Jezebel of weapon-grade delta-phase plutonium alloy (94.134 wt% Pu-239, 4.848 wt% Pu-240, 1.018 wt% gallium)
Table 4.1.2-2 Properties of Bare Critical Metal Assemblies
Mass, Density, and Measured Alpha are at Delayed Critical (D.C.)
  Assembly  Material  Mass   Density  Meas. Alpha   Del. Neutron  Calc. Alpha
  Name                kg              (1/microsec)  Fraction      (1/microsec)
  Godiva    Oralloy   52.25  18.71    -1.35         0.0068        199
  Jezebel   WG-Pu     16.45  15.818   -0.66         0.0023        287
The calculated values of alpha from the Godiva and Jezebel experiments are reasonably close to those calculated above from 1 MeV cross section data. Adjusting for density, we get 176/microsecond for U-235 (1 MeV data) vs 199/microsecond (experimental), and for plutonium 252/microsecond vs 287/microsecond.
The effective value of alpha (the actual multiplication rate), taking into account neutron leakage, varies with the size of the system. If the system radius R = r_c, then it is exactly one critical mass (m = M_crit), and alpha is zero. The more critical masses present, the closer alpha comes to the limiting value. This can be estimated from the relation:
Eq. 4.1.2-1 alpha_eff = alpha_max*[1 - (r_c/R)^2] = alpha_max*[1 - (M_crit/m)^(2/3)]
Notice that using the two tables above we can immediately estimate the critical mass of a bare plutonium sphere:
mass_crit = [(2*1.985*2.54 cm)^3]*(Pi/6)*19.8 g/cm^3 = 10,600 grams
The published figure is usually given as 10.5 kg.
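The same back-of-envelope estimate, together with the leakage correction of Eq. 4.1.2-1, can be written out as a short Python sketch. It is illustrative only and uses no information beyond the numbers already quoted in Tables 4.1.1-1 and 4.1.2-1 above; the set of critical-mass multiples is an arbitrary choice.

  # Reproduce the bare-sphere plutonium estimate above and evaluate Eq. 4.1.2-1
  # (illustrative; uses only values already quoted in Tables 4.1.1-1 and 4.1.2-1).
  import math

  R_C_MFP = 1.985      # bare critical radius in mean free paths for c = 1.40 (Table 4.1.1-1)
  MFP_CM = 2.54        # total mean free path of Pu-239, cm (Table 4.1.2-1)
  DENSITY = 19.8       # g/cm^3 (Table 4.1.2-1)
  ALPHA_MAX = 315e6    # 1/s, no-leakage alpha for Pu-239 (Table 4.1.2-1)

  r_c = R_C_MFP * MFP_CM                                   # critical radius, cm
  m_crit = (4.0 / 3.0) * math.pi * r_c**3 * DENSITY        # sphere volume times density, grams
  print(f"bare critical mass ~ {m_crit/1000:.1f} kg")      # ~10.6 kg, as in the text

  for multiples in (1.0, 1.5, 2.0, 3.0):
      alpha_eff = ALPHA_MAX * (1.0 - multiples**(-2.0 / 3.0))   # Eq. 4.1.2-1
      print(f"{multiples} critical masses: alpha_eff ~ {alpha_eff/1e6:.0f}/microsec")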
4.1.3 Distribution of Neutron Flux and Energy in the Core
Since neutron leakage occurs at the surface of a critical or supercritical core, the strength of the neutron flux is not constant throughout the core. Since the rate of energy release at any point in the core is proportional to the flux at that point, this also affects the energy density throughout the core. This is a matter of some significance, since it influences weapon efficiency and the course of events in terminating the divergent fission chain reaction.
4.1.3.1 Flux Distribution in the Core
For a bare (unreflected) critical spherical system, the flux distribution is given by:
Eq. 4.1.3.1-1 flux(r) = max_flux * Sin(Pi*r/(r_c + 0.71*MFP))/(Pi*r/(r_c + 0.71*MFP))
(using the diffusion approximation) where Sin takes radians as an argument.
If we measure r in MFPs, then by referring to Table 4.1.1-1 we can relate the flux distribution to the parameter c. Computing the ratio between the flux at the surface of the critical system, and the maximum flux (in the center) we find:
Table 4.1.3-1 Relative Flux at Surface

c value   flux(r_c)/max_flux
1.0       0.0 (at the limit)
1.02      0.0587
1.05      0.0963
1.10      0.1419
1.20      0.2117
1.40      0.3182
1.60      0.4018
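Entries like these can be reproduced from Eq. 4.1.3.1-1 once the bare critical radius in MFPs is known for a given c (that relation comes from Table 4.1.1-1, referenced above but not repeated here). A minimal Python sketch, using 1.985 MFPs for c = 1.40 as a worked case:

import math

def relative_surface_flux(r_c_in_mfp):
    """Relative flux at the surface of a bare critical sphere
    (Eq. 4.1.3.1-1, diffusion approximation), radius measured in MFPs."""
    x = math.pi * r_c_in_mfp / (r_c_in_mfp + 0.71)
    return math.sin(x) / x

# For c = 1.40 the bare critical radius is about 1.985 MFPs, which should
# reproduce the 0.318 entry in the c = 1.40 row of Table 4.1.3-1.
print(f"flux(r_c)/max_flux = {relative_surface_flux(1.985):.4f}")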
This shows that as c increases, the flux distribution becomes flatter with less drop in the flux near the surface.
The flux distribution function above applies only to bare critical systems. If the system is supercritical, then the flux distribution becomes flatter, since neutron production over-balances loss. The greater the value of alpha for the system, the flatter it becomes. The addition of a neutron reflector also flattens the distribution, even for the same degree of reactivity. The flux distribution function is useful though, since the maximum rate of fission occurs at the moment when the core passes through second criticality (on the way to disassembling, see below).
4.1.3.2 Energy Distribution in the Core
As long as the geometry doesn't change, the relative flux distribution remains the same throughout the fission process. The fission reaction rate at any point in the core is proportional to the flux. The net burnup of fissile material (and total energy release) is determined by the reaction rate integrated over time.
This indicates that the degree of burnup (the efficiency of utilization) varies throughout the core. The outer layers of material will be fissioned less efficiently than the material near the center. The steeper the drop off in flux the greater this effect will be. We can thus expect less efficient utilization of fissile material in small cores, and in materials with low values of c. From the relatively low value of c for U-235 compared to U-233 and Pu-239, we can expect that U-235 will be used less efficiently. This is observed in pure fission tests, the difference being about 15% in nominal yield (20 kt) pure fission designs.
The energy density (energy content per unit volume) in any region of the core is determined not only by the total energy produced in that region, but also by the flow of heat in to and out from the region.
The energy present in the core rises by a factor of e (2.71...) every multiplication interval (neglecting any losses from the surface). Nearly all of the energy present has thus been produced in the last one or two multiplication intervals, which in a high alpha system is a very short period of time (10 nanoseconds or less). There is not much time for heat flow to significantly alter this energy distribution.
Close to the end point of the fission process, the energy density in the core is so high that significant flow can occur. Since most of the energy is present as a photon gas the dominant mechanism is radiation (photon) heat transport, although electron kinetic heat transport may be significant as well. This heat flow can be modelled by the diffusion approximation just like neutron transport, but in this case estimating the photon mean free path (the opacity of the material) is quite difficult. A rough magnitude estimate for the photon MFP is a few millimeters.
The major effect of energy flow is the loss of energy from a layer about 1 photon mean free path thick (referred to as one optical thickness) at the surface of the core. In a bare core this cooling can be quite dramatic, but the presence of a high-Z tamper (which absorbs and re-emits energy) greatly reduces this cooling. Losses also occur deeper in the core, but below a few photon MFPs they become negligible. Otherwise, there is a significant shift of energy out of the center of the core that tends to flatten the energy distribution.
The energy density determines the temperature and pressure in the core, so there is also a variation in these parameters. Since the temperature in radiation dominated matter varies with the fourth power of the energy density, the temperature distribution is rather flat (except near the surface perhaps). The pressure is proportional to the energy density, so it varies in similar degree.
4.1.4 History of a Fission Explosion
To clarify the issues governing fission weapon design it is very helpful to understand the sequence of events that occurs in every fission explosion. The final event in the process - disassembly - is especially important since it terminates the fission energy release and thus determines the efficiency of the bomb.
4.1.4.1 Sequence of Events
Several distinct physical states can be identified during the detonation of a fission bomb. In each of these states a different set of physical processes dominates.
4.1.4.1.1 Initial State
Before the process that leads to a fission explosion is initiated, the fissile material is in a subcritical configuration. Reactivity insertion begins by increasing the average density of the configuration in some way.
4.1.4.1.2 Delayed Criticality
When the density has increased just to the point that a neutron population in the mass is self-sustaining, the state of delayed criticality has been achieved. Although nearly all neutrons produced by fission are emitted as soon as the atom splits (within 10^-14 sec or so), a very small proportion of neutrons (0.65% for U-235, 0.25% for Pu-239) are emitted by fission fragments with delays of up to a few minutes. In delayed criticality these neutrons are required to maintain the chain reaction. These long delays mean that power level changes can only occur slowly. All nuclear reactors operate in a state of delayed criticality. Due to the slowness of neutron multiplication in this state it is of no significance in nuclear explosions, although it is important for weapon safety considerations.
4.1.4.1.3 Prompt Criticality
When reactivity increases to the point that prompt neutrons alone are sufficient to maintain the chain reaction then the state of prompt criticality has been reached. Rapid multiplication can occur after this point. In bomb design the term "criticality" usually is intended to mean "prompt criticality". For our purposes we can take the value of alpha as being zero at this point. The reactivity change required to move from delayed to prompt criticality is quite small (for plutonium the prompt and delayed critical mass difference is only 0.80%, for U-235 it is 2.4%), so in practice the distinction is unimportant. Passage through prompt criticality into the supercritical state is also termed "first criticality".
4.1.4.1.4 Supercritical Reactivity Insertion
The insertion time of a supercritical system is measured from the point of prompt criticality, when the divergent chain reaction begins. During this phase the reactivity climbs, along with the value of alpha, as the density of the core continues to increase. Any insertion system will have some maximum degree of reactivity which marks the end of the insertion phase. This phase may be terminated by reaching a plateau value, by passing the point of maximum reactivity and beginning to spontaneously deinsert, or by undergoing explosive disassembly.
4.1.4.1.5 Exponential Multiplication
This phase may overlap supercritical insertion to any degree. Any neutrons introduced into the core after prompt criticality will initiate a rapid divergent chain reaction that increases in power exponentially with time, the rate being determined by alpha. If exponential multiplication begins before maximum reactivity, and insertion is sufficiently fast, there may be significant increases in alpha during the course of the chain reaction. Throughout the exponential multiplication phase the cumulative energy released remains too small to disrupt the supercritical geometry on the time scale of the reaction. Exponential multiplication is always terminated by explosive disassembly. The elapsed time from neutron injection in the supercritical state to the beginning of explosive disassembly is called the "incubation time".
4.1.4.1.6 Explosive Disassembly
The bomb core is disassembled by a combination of internal expansion that accelerates all portions of the core outward, and the "blow-off" or escape of material from the surface, which generates a rarefaction wave propagating inward from the surface. The drop in density throughout the core, and the more rapid loss of material at the surface, cause the neutron leakage in the core to increase and the effective value of alpha to decline.
The speed of both the internal expansion and surface escape processes is proportional to the local speed of sound in the core. Thus disassembly occurs when the time it takes sound to traverse a significant fraction of the core radius becomes comparable to the time constant of the chain reaction. Since the speed of sound is determined by the energy density in the core, there is a direct relationship between the value of alpha at the time of disassembly and the amount of energy released: the faster the chain reaction, the more efficient the explosion.
As long as the value of alpha is positive (the core is supercritical) the fission rate continues to increase. Thus the peak power (energy production rate) occurs at the point where the core drops back to criticality (this point is called "second criticality"). Although this terminates the divergent chain reaction and the exponential increase in energy output, it does not mean that significant power output has ended. A convergent chain reaction continues the release of energy at a significant, though rapidly declining, rate for a short time afterward. 30% or more of the total energy release typically occurs after the core has become sub-critical.
4.1.4.2 The Disassembly Process
The internal expansion of the core is caused by the existence of an internal pressure gradient. The escape of material from the surface is caused by an abrupt drop in pressure near the surface, allowing material to expand outward very rapidly. Both of these features are present in every fission bomb, but the degree to which each contributes to disassembly varies.
Consider a spherical core with internal pressure declining from the center towards the surface. At any radius r within the core the pressure gradient is dP/dR. Now consider a shell of material centered at r, that is sufficiently thin so that the slope of the pressure gradient does not change appreciably across it. The mass of the shell is determined by its area, density, and thickness:
m = thickness * area * density
The outward force exerted on the shell is determined by the pressure difference across the shell and the shell area:
F = dP/dR * thickness * area
From Newton's second law of motion we know that acceleration is related to force and mass by:
a = F/m so: a = (dP/dR * thickness * area)/(thickness * area * density) = (dP/dR)/density
If density is constant in the core, then the outward acceleration at any point is proportional to the pressure gradient; the steeper the gradient, the greater the acceleration. The kinetic energy acquired comes at the expense of the internal energy of the expanding material.
The limiting case of a steep pressure gradient is a sudden drop to zero. In this case the acceleration is infinite, the internal energy of the material is completely converted to kinetic energy instantaneously and it expands outwards at constant velocity (escape velocity). The edge of the pressure drop propagates back into the material as a rarefaction wave at the local speed of sound. The pressure at the leading edge of the expanding material (moving in the opposite direction at escape velocity) is zero. The pressure discontinuity thus immediately changes into a continuous pressure change of steadily diminishing slope. See the earlier section on Release Waves for more discussion of this process.
In a bare core, thermal radiation from the surface causes a large energy loss in a surface layer about one optical thickness deep. Since energy lost from the core by thermal radiation cannot contribute to expansion, this has the effect of delaying disassembly. It does create a very steep pressure gradient in the layer however, and a correspondingly high outward acceleration. Deeper in the core, the pressure gradient is much flatter and the acceleration is lower. After the surface layer has expanded outward by a few times its original thickness, it has acquired considerable velocity, and the surface pressure drop rarefaction has propagated a significant distance back into the core. At this point the pressure and density profile of the core closely resembles the early stages of expansion from an instantaneous pressure drop, the development of the profile having been delayed slightly by the time it took the surface to accelerate to near escape velocity.
A bomb core will typically be surrounded by a high-Z tamper. A layer of tamper (about one optical thickness deep) absorbs the thermal radiation emitted by the core and is heated by it. As its temperature increases, this layer begins to radiate energy back to the core, reducing the core's energy loss. In addition, the heating also generates considerable pressure in the tamper layer. The combined effect of reduced core surface cooling, and this external pressure is to create a much more gradual pressure drop in the outer layer of the core and a correspondingly reduced acceleration.
The expanding core and heated tamper layer creates a shock wave in the rest of the tamper. This has important consequences for the disassembly process. The rarefaction wave velocity is not affected by the presence of the tamper, but the rate at which the density drops after arrival of the rarefaction wave is strongly affected. The rate of density drop is determined by the limiting outward expansion velocity, this is in turn determined by the shock velocity in the tamper. The denser the tamper the slower the shock, and the slower the density decrease behind the rarefaction wave. In any case the shock velocity in the tamper is much slower than the escape velocity of expansion into a vacuum. The disassembly of a tamped core thus more closely resembles one dominated by internal expansion rather than surface escape.
4.1.4.3 Post Disassembly Expansion
The expanding core creates a radiation dominated shock wave in the tamper that compresses it by at least a factor of 7, and perhaps as high as 16 due to ionization effects. This pileup of high density material at the shock front is called the "snow plow" effect. By the time this shock has moved a few centimeters into the tamper, the rarefaction wave will have reached the center of the core and the entire core will be expanding outward uniformly.
The basic structure of the early fireball has now developed, consisting of a thin highly compressed shell just behind the shock front containing nearly all of the mass that has been shocked and heated so far. This shell travels outward at nearly the same velocity as the shock front. The volume inside this shell is a region of very low density. Temperature and pressure behind the shock front is essentially uniform though since nearly all of the energy present is contained in the radiation field (i.e. it exists as a photon gas). Since the shock wave is radiation dominated, the front does not contain an abrupt pressure jump. Instead there is a transition zone with a thickness about equal to the radiation mean free path in the high-Z tamper material (typically a few millimeters). In this zone the temperature and pressure climb steadily to their final value.
This overall explosion structure remains the same as the shock expands outward until it reaches a layer of low-Z material (a beryllium reflector, or the high explosive).
The transition zone marking the shock front remains thin as long as the shock is travelling through opaque high-Z material. Low-Z material becomes completely ionized as it is heated, and once it is completely ionized it is nearly transparent to radiation and is no longer efficiently heated. When the shock front emerges at the boundary of the high-Z tamper and the low-Z material, it splits into two regions. A radiation driven shock front moves quickly away from the high-Z surface, bleaching the low-Z material to transparency. This faster shock front only creates a partial transition to the final temperature and pressure. The transition is completed by a second shock, this one a classical mechanical shock, driven by the opaque material.
4.1.5 Fission Weapon Efficiency
Fundamental to analyzing the design of fission bombs is understanding the factors that influence the efficiency of the explosion - the percentage of fissile material actually fissioned. The efficiency and the amount of fissile material present determine the amount of energy released by the explosion - the bomb's yield.
I have organized my discussion of design principles around the issue of efficiency since it is the most important design characteristic of any fission device. Any weapon designer must have a firm grasp on the expected efficiency in order to make successful yield predictions, and a firm grasp on the factors affecting efficiency is required to make design tradeoffs.
In the discussion below (and in later subsections as well) I assume that the system under discussion is spherically symmetric, and of homogenous density, unless otherwise stated. Spherical symmetry is the simplest geometry to analyze, and also happens to be the preferred geometry for efficient nuclear weapons.
4.1.5.1 Efficiency Equations
It is intrinsically difficult to accurately predict the performance of a particular design from fundamental physical principles alone. To make good predictions on this basis requires sophisticated computer simulations that include hydrodynamic, radiation, and neutronic effects. Even here it is very valuable to have actual test data to use for calibrating these simulation models.
Nuclear weapon programs have historically relied heavily on extrapolating tested baseline designs using scaling laws like the efficiency equations I discuss below, especially in the early years of development. These equations are derived from idealized models of bomb core behavior and consequently have serious limitations in making absolute efficiency estimates. The predictions of the Theoretical Section at Los Alamos underestimated the yield of the first atomic bomb by a factor of three; an attempt a few years later to recompute the bomb's efficiency using the best models, physical data, and computers available at the time led to a yield overestimate by a factor of two.
From the description of core disassembly given above we can see that two idealizations are possible for deriving convenient efficiency equations.
The basic approach is to model how quickly the core expands to the point of second criticality. To within a constant scaling factor, this fixes the efficiency of the explosion.
In the first modelling approach, the state of second criticality is based on the average density of the entire core. In the second approach, second criticality is based on the surface loss of excess critical masses from a residual core which remains at constant initial density.
The first efficiency equation to be developed was the Bethe-Feynman equation, prepared by Hans Bethe and Richard Feynman at Berkeley in 1942 based on the uniform expansion model. A somewhat different efficiency equation was presented by Robert Serber in early 1943 at Los Alamos, which was also based on uniform expansion but also explicitly included the exponential growth in energy release (which the Bethe-Feynman equation did not). A problem with these derivations is that to keep the resultant formulas relatively simple, they assume that the expanding core remains at essentially constant density during deinsertion, which is only true (even approximately) when the degree of supercriticality is small.
For the purposes of this FAQ I have taken the second approach for deriving an efficiency equation, using the surface escape model. This model has the advantage that the residual core remains at constant density regardless of the degree of supercriticality. Comparing it to the other efficiency equations provides some insight into the sensitivity of the assumptions in the various models.
4.1.5.1.1 The Serber Efficiency Equation Revisited
Let us first consider the factors that affect the efficiency of a homogenous untamped supercritical mass. In this system, disassembly begins as fissile material expands off the core's surface into a vacuum. We make the following simplifying assumptions: the residual core remains homogeneous and at its initial density while surface material escapes; the value of alpha remains constant during disassembly; and the energy density is uniform throughout the core, with no energy lost from its surface.
If r is the initial outer radius, and r_c is the critical radius, then the reaction halts when:
Eq. 4.1.5.1.1-1 Integral[c_s(t) dt] = r - r_c
where c_s(t) is the speed of sound at time t.
If kinetic pressure is negligible compared to radiation pressure (this is true in all but extremely low yield explosions), then:
Eq. 4.1.5.1.1-2 c_s(t) = [(E(t)*gamma)/(3*V*rho)]^0.5
where E(t) is the cumulative energy produced by the reaction, V is the volume of the core, and rho is its density.
We also have:
Eq. 4.1.5.1.1-3 E(t) = (E1/(c - 1)) * e^(alpha*t)
where E1 is a constant that gives the energy yield per fission (E1 = 2.88 x 10^-4 erg/fission). Thus:
Eq. 4.1.5.1.1-4 Eff(t) = E(t)/E_total = (E1/((c - 1)*E_total)) * e^(alpha*t)
where Eff(t) is the efficiency at time t, and E_total is the energy yield at 100% efficiency.
Eq. 4.1.5.1.1-5
r - r_c = Integral[(E(t)*gamma/(3*V*rho))^0.5 dt]
        = (gamma*E1/(3*M*(c-1)))^0.5 * Integral[e^(alpha*t/2) dt]
        = (gamma*E1/(3*M*(c-1)))^0.5 * (2/alpha) * e^(alpha*t/2)
where M is the fissile mass.
Rearranging and squaring we get:
Eq. 4.1.5.1.1-6 e^(alpha*t) = (r - r_c)^2 * ((3M*(c-1))/(gamma*E1)) * (alpha^2)/4
Substituting into the efficiency equation:
Eq. 4.1.5.1.1-7 Eff(t) = [3*alpha^2 * M * (r - r_c)^2]/(4*gamma*E_total)
If E2 is a constant equal to fission energy/gram in ergs (7.25 x 10^17 erg/g for Pu-239), and gamma is equal to 4/3 for a photon gas, then:
Eq. 4.1.5.1.1-8 Eff(t) = [9*alpha^2 * (r - r_c)^2]/(16*E2)
We can observe at this point that efficiency is determined by the actual value of alpha and the difference between the actual radius of the assembly, and the radius of the mass just sufficient to keep the chain reaction going. Note that it is the values of these parameters WHEN DISASSEMBLY ACTUALLY OCCURS that are relevant.
Now using r = r_c(1 + delta) so that (r - r_c) = delta*r_c, we get:
Eq. 4.1.5.1.1-9 Eff(t) = [9*alpha^2 * delta^2 * r_c^2]/(16*E2)
If we let tau = (total_MFP/v_n) then:
Eq. 4.1.5.1.1-10 alpha_max = (v_n/total_MFP)*(c - 1) = (c - 1)/tau
and
Eq. 4.1.5.1.1-11 alpha_eff = ((c - 1)/tau)*[1 - (1/(1 + delta)^2)]
Eq. 4.1.5.1.1-12
Eff(t) = ((c-1)/tau)^2 * 9/(16*E2) * r_c^2 * delta^2 * [1 - (1/(1 + delta)^2)]^2
       = ((c-1)/tau)^2 * 9/(16*E2) * r_c^2 * [delta - (delta/(1 + delta)^2)]^2
In the range of 0 < delta < 1 (up to 8 critical masses), the expression
[delta - (delta/(1+ delta)^2)]^2
is very close to 0.6*delta^3, giving us:
Eq. 4.1.5.1.1-13 Eff(t) = 0.338*((c-1)/tau)^2 * r_c^2/E2 * delta^3 = 0.338/E2 * alpha_max^2 * r_c^2 * delta^3
This last equation is identical with the equation derived by Robert Serber in the spring of 1943 and published in The Los Alamos Primer, except that his constant is 0.667 (i.e. it gives efficiencies 1.98 times higher). Serber derived his efficiency equation from rough dynamical considerations without using a hydrodynamic model of disassembly and admits that his result is 2-4 times higher than the true value. This is consistent with the above derivation.
Both the equation given above and Serber's equation differ significantly from the Bethe-Feynman equation however, which gives an efficiency relationship of:
Eq. 4.1.5.1.1-14 Eff = (1/((gamma - 1)*E2)) * alpha_max^2 * r_c^2 * (delta*(1 + 3*delta/2)^2)/(1 + delta)
after reformulating to equivalent terms. This is a much more linear relationship between delta and efficiency than the cubic relationship of Serber's equation. Due to the crudeness of all of these derivations, the significance of this difference cannot be assessed at present.
Equation 4.1.5.1.1-13 shows that efficiency is proportional to the square of the maximum multiplication rate of the material and the square of the critical radius (also due to material properties), and to the cube of the fractional radius excess delta.
Extending to larger values, we can approximate it in the range 1 < delta < 3 (up to 64 critical masses), with the expression:
Eq. 4.1.5.1.1-15 Eff(t) = 0.338/E2 * alpha_max^2 * r_c^2 * delta^(7/3)
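A short Python sketch, assuming the bare Pu-239 figures used earlier in this section (alpha_max of about 315 per microsecond, a bare critical radius of about 5.04 cm, and E2 = 7.25x10^17 erg/g), evaluates the exact bracketed expression of Eq. 4.1.5.1.1-12 alongside the 0.6*delta^3 approximation of Eq. 4.1.5.1.1-13. The sample delta values are arbitrary, and as noted later in this section these equations can produce nonphysical efficiencies above 1 at large delta.

E2        = 7.25e17     # erg/g, fission energy content of Pu-239
alpha_max = 315.0e6     # 1/sec (315 per microsecond, Table 4.1.2-1)
r_c       = 5.04        # cm, bare critical radius of Pu-239 estimated earlier

def bracket(delta):
    """Exact expression [delta - delta/(1+delta)^2]^2 from Eq. 4.1.5.1.1-12."""
    return (delta - delta / (1.0 + delta)**2)**2

def eff_exact(delta):
    return (9.0 / (16.0 * E2)) * alpha_max**2 * r_c**2 * bracket(delta)

def eff_approx(delta):
    """Eq. 4.1.5.1.1-13, using the 0.6*delta^3 approximation."""
    return 0.338 / E2 * alpha_max**2 * r_c**2 * delta**3

for delta in (0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"delta={delta:3.1f}  exact={eff_exact(delta):.4f}  "
          f"approx={eff_approx(delta):.4f}")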
4.1.5.1.2 The Density Dependent Efficiency Equation
The efficiency equations given above leave something to be desired for evaluating fission weapon designs. I have included them to assist in making comparisons with the available literature, but I will recast the efficiency relationship in a different form below.
The choice of fissile materials available to a weapon designer is quite limited, and the nuclear and physical properties of these materials are fixed. It is desirable then to separate these factors from the factors that a designer can influence - namely, the mass of material present, and the density achieved. The density is of particular interest since it is the only factor that changes in a given design during insertion. Understanding how efficiency changes with density is essential to understanding the problem of predetonation for example.
Returning to Eq. 4.1.5.1.1-8:
Eff(t) = [9*alpha^2 * (r - r_c)^2]/(16*E2)
we want to reformulate it so that it consists of two parts, one that does not depend on density, and one that depends only on density.
Let the composition and mass of the system be fixed. We will normalize the radius and density so that they are expressed relative to the system's critical state. If rho_crit and r_crit are the values for density and radius of the critical state, and rho_rel and r_rel are the values of the system that we want to evaluate:
Eq. 4.1.5.1.2-1 rho_rel = rho_actual/rho_crit
and
Eq. 4.1.5.1.2-2 r_rel = r_actual/r_crit
When the system is exactly critical, rho_rel = 1 and r_rel = 1. Of course we are interested in states where rho_rel > 1, and r_rel < 1. We can relate r_rel to rho_rel:
Eq. 4.1.5.1.2-3 r_rel = (1/rho_rel)^(1/3) * r_crit
Using this notation, and letting alpha_max_c be the value of alpha_max at the critical state density, we can write:
alpha = alpha_max_c * rho_rel * (1 - (r_c/r_rel)^2)
In this case r_c refers to the effective critical radius at density rho_rel not rho_crit; that is, r_c IS NOT r_crit. Instead it is equal to r_crit/rho_rel. Using this, and the relation for r_rel above, we can eliminate r_crit:
Eq. 4.1.5.1.2-4 alpha = alpha_max_c * rho_rel * (1 - ((1/rho_rel)/(1/rho_rel)^(1/3))^2) = alpha_max_c * rho_rel * (1 - (rho_rel)^(-4/3))
Substituting into the efficiency equation:
Eq. 4.1.5.1.2-5 Eff = (9/(16*E2)) * alpha^2 * (r_rel - r_c)^2
Eq. 4.1.5.1.2-6 Eff = (9/(16*E2))*(alpha_max_c*rho_rel*(1 - (rho_rel)^(-4/3)))^2 * (r_rel - r_c)^2
Splitting constant and density dependent factors between two lines:
Eq. 4.1.5.1.2-7
Eff = (9/(16*E2)) * alpha_max_c^2
      * rho_rel^2 * (1 - (rho_rel)^(-4/3))^2 * (r_rel - r_c)^2
We can eliminate r_rel and r_c, replacing them with expressions of rho_rel and r_crit:
Eq. 4.1.5.1.2-8 r_rel - r_c = ((1/rho_rel)^(1/3) * r_crit) - (r_crit/rho_rel) = ((1/rho_rel)^(1/3) - (1/rho_rel)) * r_crit
Eq. 4.1.5.1.2-9 Eff = (9/(16*E2)) * alpha_max_c^2 * r_crit^2 * rho_rel^2 * (1-(rho_rel)^(-4/3))^2 * ((1/rho_rel)^(1/3)-(1/rho_rel))^2
Recall that rho_rel, the relative density, is not generally the compression ratio compared to normal density. This is true only if the amount of fissile material in the system is exactly one critical mass at normal density (as was approximately true in the Fat Man bomb). For "sub-crit" systems, rho_rel is smaller than the actual compression of the material since compressive work is required to raise the initial sub-critical system to the critical state. For a system consisting of more than one critical mass (at normal density), rho_rel is higher than the actual compression.
By looking in turn at each of the density dependent terms we can gain insight into the significance of the efficiency equation. First note that alpha_max_c is a fundamental property of the fissile material and does not change, even though it is system dependent (being normalized to the critical density of the system).
The term (rho_rel^2) is introduced by the reduction of the MFP with increasing density and contributes to enhanced efficiency at all values of rho_rel.
The term (1-(rho_rel)^(-4/3))^2 represents the effect of neutron leakage. At rho_rel=1 the value is 0. It has a limiting value of 1 when rho_rel is high, i.e. no leakage occurs. As this term approaches one, and leakage becomes insignificant, it ceases to be a significant contributor to further efficiency enhancement.
The term ((1/rho_rel)^(1/3)-(1/rho_rel))^2 describes the distance the rarefaction wave must travel to shut down the reaction. At rho_rel=1 it is 0. It initially increases rapidly, but soon slows down and reaches a maximum at about rho_rel = 5.196. Thereafter it declines slowly. This signifies the fact that once the critical radius of the system at rho_rel is small compared to the physical radius, no further efficiency gain is obtained from this source. Instead further increases in density simply reduce the scale of the system, allowing faster disassembly.
We can provide some approximations for the efficiency equation to make the overall effect of density more apparent.
In the range of 1 < rho_rel < 2 it is approximately:
Eq. 4.1.5.1.2-10 Eff = (9/(16*E2)) * alpha_max_c^2 * r_crit^2 * ((rho_rel - 1)^3)/8
In the range of 2 < rho_rel < 4.5 it is approximately:
Eq. 4.1.5.1.2-11 Eff = (9/(16*E2)) * alpha_max_c^2 * r_crit^2 * ((rho_rel - 1)^(2.333))/8
In the range of 4 < rho_rel < 8 it is approximately:
Eq. 4.1.5.1.2-12 Eff = (9/(16*E2)) * alpha_max_c^2 * r_crit^2 * ((rho_rel - 1)^(1.8))/5
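The behavior described above is easy to explore numerically. The following Python sketch isolates the density-dependent factor of Eq. 4.1.5.1.2-9 and compares it with the piecewise approximations of Eqs. 4.1.5.1.2-10 through -12; the sample rho_rel values are arbitrary, and alpha_max_c, r_crit, and E2 are left as parameters (the example call at the end assumes a bare core that is exactly one critical mass at normal density, so the Pu-239 figures used earlier apply).

def density_factor(rho_rel):
    """Density-dependent part of Eq. 4.1.5.1.2-9."""
    return (rho_rel**2
            * (1.0 - rho_rel**(-4.0/3.0))**2
            * (rho_rel**(-1.0/3.0) - rho_rel**(-1.0))**2)

def density_factor_approx(rho_rel):
    """Piecewise approximations of Eqs. 4.1.5.1.2-10 to -12."""
    if rho_rel < 2.0:
        return (rho_rel - 1.0)**3 / 8.0
    elif rho_rel < 4.5:
        return (rho_rel - 1.0)**2.333 / 8.0
    else:
        return (rho_rel - 1.0)**1.8 / 5.0

def efficiency(rho_rel, alpha_max_c, r_crit, E2=7.25e17):
    """Eq. 4.1.5.1.2-9; alpha_max_c in 1/sec, r_crit in cm, E2 in erg/g."""
    return (9.0 / (16.0 * E2)) * alpha_max_c**2 * r_crit**2 * density_factor(rho_rel)

for rho_rel in (1.2, 1.5, 2.0, 3.0, 5.0, 8.0):
    print(f"rho_rel={rho_rel:3.1f}  exact={density_factor(rho_rel):.4f}  "
          f"approx={density_factor_approx(rho_rel):.4f}")

# Example (one bare critical mass of Pu-239 at normal density, then compressed):
print(f"Eff at rho_rel=1.5: {efficiency(1.5, 3.15e8, 5.04):.3f}")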
4.1.5.1.3 The Mass and Density Dependent Efficiency Equation
The maximum degree of compression above normal density that is achievable is limited by technology. It is of interest then to consider how the amount of material present affects efficiency at a given level of compression, since it is the other major parameter that a designer can manipulate.
To examine this we would like to reintroduce an explicit term for mass. To do this we renormalize the equation to a fixed standard density rho_0 (the uncompressed density of the fissile material), and use rho_0 and the corresponding value of the critical mass M_c to replace the scale parameter r_crit. Thus:
Eqs. 4.1.5.1.3-1 through 4.1.5.1.3-5
alpha_max_crit = alpha_max_0 * (rho_crit/rho_0)
m_rel = m/M_c
rho_crit = rho_0/m_rel^(1/2)
rho_rel = rho/rho_crit = (rho/rho_0)*m_rel^(1/2)
r_crit = ((m/rho_crit)*(3/(2Pi)))^(1/3)
       = (m*m_rel^(1/2)/rho_0)^(1/3) * (3/(2Pi))^(1/3)
       = (m^(3/2)/(M_c^(1/2) * rho_0))^(1/3) * (3/(2Pi))^(1/3)
       = m^(1/2) * (M_c^(1/2) * rho_0)^(-1/3) * (3/(2Pi))^(1/3)
Assuming the density rho >= rho_crit, we get:
Eq. 4.1.5.1.3-6 Eff = (9/(16*E2))*(3/(2Pi))^(2/3) * alpha_max_0^2 * (rho_crit/rho_0)^2 * (rho/rho_crit)^2 * (m^(1/2) * (M_c^(1/2) * rho_0)^(-1/3))^2 * (1-((rho_0/rho)^(4/3) * m_rel^(-2/3)))^2 * (((rho_0/rho)^(1/3) * m_rel^(-1/6)) - ((rho_0/rho) * m_rel^(-1/2)))^2
Eq. 4.1.5.1.3-7 Eff = (9/(16*E2))*(3/(2Pi))^(2/3) * alpha_max_0^2 * (rho/rho_0)^2 * m/(M_c^(1/3) * rho_0^(2/3)) * (1-((rho_0/rho)^(4/3) * m_rel^(-2/3)))^2 * m_rel^(-1) * (((rho_0 * m_rel)/rho)^(1/3) - (rho_0/rho))^2
Eq. 4.1.5.1.3-8 Eff = (9/(16*E2))*(3/(2Pi))^(2/3) * alpha_max_0^2 * m/(M_c^(1/3)) * (M_c/m) * (rho^2)/(rho_0^(8/3)) * (1 - ((rho_0/rho)^(4/3) * m_rel^(-2/3)))^2 * (((rho_0 * m_rel)/rho)^(1/3) - (rho_0/rho))^2
Eq. 4.1.5.1.3-9
Eff = (9/(16*E2))*(3/(2Pi))^(2/3) * alpha_max_0^2 * M_c^(2/3)
      * (rho/(rho_0^(4/3)))^2 * (1 - ((rho_0/rho)^(4/3) * m_rel^(-2/3)))^2
      * (((rho_0 * m_rel)/rho)^(1/3) - (rho_0/rho))^2
The first line of this equation consists entirely of constants, some of them fixed by the choice of material and reference density. From the next two lines it is clear that the density dependency is the same. The effect of increasing the mass of the system is to modestly reduce leakage and retard disassembly.
4.1.5.1.4 The Mass Dependent Efficiency Equation
It is useful to also have an equation that considers only the effect of mass. Including this as the only variable allows presenting a simplified form that makes the effect of varying the mass in a particular design easier to visualize. Also in gun-type designs no compression occurs, so the chief method of manipulating yield is by varying the mass of fissile material present.
Taking the mass and density dependent equation, we can set the density to a fixed nominal value, rho, and then simplify. Let rho = rho_0:
Eq. 4.1.5.1.4-1
Eff = (9/(16*E2))*(3/(2Pi))^(2/3) * alpha_max_0^2 * M_c^(2/3) * (rho_0/(rho_0^(4/3)))^2 * (1 - ((rho_0/rho_0)^(4/3) * m_rel^(-2/3)))^2 * (((rho_0 * m_rel)/rho_0)^(1/3) - (rho_0/rho_0))^2
    = (9/(16*E2))*(3/(2Pi))^(2/3) * alpha_max_0^2 * M_c^(2/3) * rho_0^(-2/3) * (1 - m_rel^(-2/3))^2 * ((m_rel)^(1/3) - 1)^2
Since M_c/rho_0 is the volume of a critical assembly (m_rel = 1):
Eq. 4.1.5.1.4-2 Eff = (9/(16*E2))*(3/(2Pi))^(2/3) * alpha_max_0^2 * vol_crit^(2/3) * (1 - m_rel^(-2/3))^2 * ((m_rel)^(1/3) - 1)^2
Eq. 4.1.5.1.4-3 Eff = (9/(16*E2))*(2^(2/3)) * alpha_max_0^2 * r_crit^2 * (1 - m_rel^(-2/3))^2 * ((m_rel)^(1/3) - 1)^2
Again the top line consists of numeric and material constants, the second of mass dependent terms. This equation shows that efficiency is zero when m_rel = 1, as expected. Efficiency is negligible when m_rel < 1.05, similar to the power of conventional explosives. It climbs very quickly however, increasing by a factor of 400 or so between 1.05 and 1.5, where efficiency becomes significant. The Little Boy bomb had m_rel = 2.4. If its fissile content had been increased by a mere 16%, its yield would have increased by 75% (whether this could be done while maintaining a safe criticality margin is a different matter).
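A small Python sketch of the mass-dependent factor of Eq. 4.1.5.1.4-3 (with relative yield taken as that factor times the fissile mass) shows how steeply performance climbs just above one critical mass. The sample m_rel values are arbitrary, and the constant prefactor is omitted since only relative behavior is of interest here.

def mass_factor(m_rel):
    """Mass-dependent part of Eq. 4.1.5.1.4-3 (uncompressed gun-type assembly)."""
    if m_rel <= 1.0:
        return 0.0
    return (1.0 - m_rel**(-2.0/3.0))**2 * (m_rel**(1.0/3.0) - 1.0)**2

# Relative yield scales as efficiency times fissile mass, i.e. mass_factor * m_rel.
for m_rel in (1.05, 1.5, 2.0, 2.4, 2.8):
    print(f"m_rel={m_rel:4.2f}  eff_factor={mass_factor(m_rel):.6f}  "
          f"relative_yield={mass_factor(m_rel) * m_rel:.6f}")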
4.1.5.1.5 Limitations of the Efficiency Equations
These formulas provide good scaling laws, and a rough means to calculate efficiency. But we should return to the simplifying assumptions made earlier to understand their limitations.
It is obvious that alpha is not constant during disassembly. As material blows off, the size of the core and the value of alpha both decrease, which has a negative effect on efficiency. This is the most important factor not accounted for, and results in a lower effective coefficient in the efficiency equation.
The assumptions of uniform temperature and no energy loss are also not really true. The energy production rate in any region of the core is proportional to the neutron flux density. This density is highest in the center and lowest at the surface (although not dramatically so). Furthermore, the high radiation energy density in the core corresponds to a high radiation loss rate from the surface. Based on the Stefan-Boltzmann law it would seem that the loss rate from a bare core could eventually match the energy production rate. This doesn't really occur because of the high opacity of ionized high-Z material; thermal energy from inside the core cannot readily reach the surface. But by the same token, the surface can cool dramatically. Since core expansion starts at the surface, and the rate is determined by temperature, this surface cooling can significantly retard disassembly.
When scaling from known designs, most of these issues have little significance since the deviations from the theoretical model used for the derivations affect both systems similarly.
The efficiency equations also break down at very small yields. To eliminate gamma from the equations I assumed that the core was radiation dominated at the time of disassembly. When yields drop to the low hundreds of tons and below, the value of gamma approximates that of a perfect gas, which changes only the constant term in the equations, reducing efficiency by 20%. When yields drop to the ton range then the properties of condensed matter (like physical strength, heat of vaporization, etc.) become apparent. This tends to increase the energy release since these properties resist the expansion effects.
There is another factor that imposes an effective upper limit on efficiency regardless of other attempts to enhance yield. This is the decrease in fissile content of the core. The alert reader may have noticed that it is possible to calculate efficiencies that are greater than 1 using the equations. This is because energy release is represented as an exponentially increasing function of time without regard for the amount of energy actually present in the fissile material. At some point, the fact that the fission process depletes the fissile material present must have an effect on the progress of the chain reaction.
The limiting factor here is due to the dilution of the fissile material by the fission products. Most isotopes have roughly the same absorption cross section for fast neutrons, a few barns. The core initially consists of fissile material, but as the chain reaction proceeds each fission event replaces one fissile nucleus with two fission product nuclei. When 50% of the material has fissioned, for every 100 initial fissile atoms there are now 50 remaining, and 100 non-fissile atoms, i.e. the fissile content has declined to only 33%. This parasitic absorption will eventually extinguish the reaction entirely, regardless of what yield enhancement techniques are used (generally at an efficiency substantially below 50%).
4.1.5.2 Effect of Tampers and Reflectors on Efficiency
So far I have been explicitly assuming a bare fissile mass for efficiency estimation. Of course, most designs surround the core with layers of material intended to scatter escaping neutrons back into the fissile mass, or to retard the hydrodynamic expansion.
I use the term "reflector" to refer to the neutron scattering properties of the surrounding material, and "tamper" to refer to the effect on hydrodynamic expansion. The distinction is logical because the two effects are fundamentally unrelated, and because the term tamper was borrowed from explosive blasting technique where it refers only to the containment of the blast. This distinction is not usually made in US weapons programs, from the Manhattan Project on. The custom is to use "tamper" to refer to both effects, although "neutronic tamper" and "reflector" are used if the neutron reflection effect alone is intended.
In the bare core, the fissile material that has been reached by the inward moving rarefaction wave expands outward very rapidly. In radiation dominated matter, expansion into a vacuum reaches a limiting speed of six times the local speed of sound in the material (this is the velocity at the outer surface of the expanding sphere of material). The density of matter behind the rarefaction front (which moves toward the center of the core) thus drops very rapidly and is almost immediately lost to the fission reaction.
If a layer of dense material surrounds the core then something very different occurs. The fissile material is not expanding into a vacuum, instead it has to compress and accelerate matter ahead of it. That is, it creates a shock wave. The expansion velocity of the core is then limited to the velocity of accelerated material behind the expanding shock front, which is close to the shock velocity itself. If the tamper and fissile core have similar densities, then this expansion velocity is similar to the speed of sound in the core and only 1/6 as fast as the unimpeded expansion velocity.
This confining effect means that the drop in alpha as disassembly proceeds is not nearly as abrupt as in a vacuum. It thus reduces the importance of the inaccurate assumption of constant alpha used in deriving the efficiency equation.
Another important effect is caused by the radiation cooling of the core. In a vacuum this energy is lost to free space. An opaque tamper absorbs this energy, and a layer of material one mean free path thick is heated to nearly the temperature and pressure of the core. The expansion shock wave then arises not at the surface of the core, but some distance away in the tamper (on the order of a few millimeters). A rarefaction wave must then propagate back to the surface of the core before its expansion even begins. In effect, this increases the size of the expansion distance term ((1/rho_rel)^(1/3)-(1/rho_rel))^2 in the efficiency equation.
In a bare core, any neutron that reaches the surface of the core is lost forever to the reaction. A reflector scatters the neutrons, a process that causes some fraction of them to eventually reenter the fissile mass (usually after being scattered several times). Its effect on efficiency then can be described simply by reducing the neutron leakage term (rho_rel)^(-4/3) by a constant factor, or by reducing the reference density critical mass terms.
The leakage or critical mass adjustments must take into account time absorption effects. This means that leakage cannot simply be reduced by the probability of a lost neutron eventually returning, and the reflected critical mass cannot be based simply on the steady state criticality value. For example when an efficiently reflected assembly is only slightly supercritical, then multiplication is dependent mostly (or entirely) on the reflected neutrons that reenter the core. On average each of these neutrons spends quite a lot of time outside the core before being scattered back in. The relevant value for alpha_max in this system is not the value for the fissile material, but is instead:
alpha_max = 1/(average neutron life outside of core)
The average neutron lifetime outside the core is likely to be at least an order of magnitude longer than the neutron lifetime within the core material, so this effective alpha_max is correspondingly smaller than the alpha_max value of the fissile material alone.
4.1.5.3 Predetonation

An optimally efficient fission explosion requires that the explosive disassembly of the core occur when the neutron multiplication rate (designated alpha) is at a maximum. Ideally the bomb will be designed to compress the core to this state (or close to it) before injecting neutrons to initiate the chain reaction. If neutrons enter the mass after criticality, but before this ideal time, the result is predetonation (or preinitiation): disassembly at a sub-optimal multiplication rate, producing a reduced yield.
How significant this problem is depends on the reactivity insertion rate. Something like 45 multiplication intervals must elapse before really significant amounts of energy are released. Prior to this point predetonation is not possible. The number of these intervals that occur during a period of time is obtained by integrating alpha over the period. When alpha is effectively constant it is simply alpha*t.
During insertion, alpha is not constant. When insertion begins its value is zero. If a neutron is injected early in insertion and insertion is slow, we can accumulate 45 multiplication intervals when alpha is still quite low. In this case a dramatic reduction in yield will occur. On the other hand, if it were possible for insertion to be so fast that full insertion is achieved before accumulating enough multiplication intervals to disassemble the bomb then no predetonation problem would exist.
To evaluate this problem, consider a critical system with initial radius r_0 undergoing uniform spherical compression, with the radius decreasing at a constant rate v. Then alpha is:
Eq. 4.1.5.3-1 alpha = alpha_max_0 * ((r_0/(r_0 - v*t))^3 - ((r_0 - v*t)/r_0))
Integrating, we obtain:
Eq. 4.1.5.3-2 Int[alpha] = alpha_max_0*(r_0^3/(2*v*(r_0 - v*t)^2) - (t - (v*t^2)/(2*r_0)))
This allows us to compute the number of elapsed multiplication intervals between any two times t_1 and t_2.
For example, consider a system with the following parameters: a critical radius r_0 = 4.5 cm, a radial implosion velocity v = 2.5x10^5 cm/sec, and alpha_max_0 = 2.8x10^8/sec. Figure 4.1.5.3-1 shows the accumulation of elapsed neutron multiplication intervals (Y axis) as implosion proceeds (seconds on X axis).
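The curve in this figure can be reproduced from the antiderivative of Eq. 4.1.5.3-2; the short Python sketch below uses the example parameters just given, and its value at 1.25 microseconds is consistent with the roughly 52 intervals quoted further on.

r0        = 4.5       # cm, radius at prompt criticality
v         = 2.5e5     # cm/sec, radial implosion velocity
alpha_max = 2.8e8     # 1/sec at the critical-state density

def antiderivative(t):
    """Antiderivative of alpha(t), from Eq. 4.1.5.3-2 (t in seconds)."""
    return alpha_max * (r0**3 / (2.0 * v * (r0 - v * t)**2)
                        - (t - v * t**2 / (2.0 * r0)))

def elapsed_intervals(t1, t2):
    """Multiplication intervals accumulated between times t1 and t2."""
    return antiderivative(t2) - antiderivative(t1)

for t in (0.5e-6, 1.0e-6, 1.25e-6):
    print(f"t = {t*1e6:.2f} microsec: {elapsed_intervals(0.0, t):.1f} intervals")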
Recall that disassembly occurs when the speed of sound, c_s, integrated over the life of the chain reaction is equal to r - r_c, the difference between the outer radius and the critical radius. Since c_s is proportional to the square root of the energy released, it increases by a factor of e every 2 multiplication intervals. Disassembly thus occurs quite abruptly, effectively occurring over a period of two multiplication intervals. The condition for disassembly is thus:
Eq. 4.1.5.3-3 r(t) - r_c(t) = 2*c_s(t)/alpha(t) for some time t.
Since r - r_c is a polynomial function, and c_s is a transcendental (exponential) function, no closed form means of calculating t is possible. However these functions are monotonically increasing in the range of values of interest so numeric and graphical techniques can easily determine when the disassembly condition occurs. The value of alpha at that point then determines efficiency.
Taking our previous example (r_0 = 4.5 cm, v = 2.5x10^5 cm/sec, alpha_max_0 = 2.8x10^8/sec) we can plot the net implosion distance (r - r_c) and the integrated expansion distance (2*c_s/alpha) against the implosion time. This is shown in the log plot in Figure 4.1.5.3-2 for the period between 1 and 1.3 microseconds. Distance is in centimeters (Y axis) and time is in seconds (X axis). If a neutron is present at the beginning of insertion, we see that the disassembly condition occurs at t = 1.25x10^-6 sec. At this point 52 multiplication intervals have elapsed, and the effective value of alpha is 8.6x10^7/sec. The corresponding yield is about 0.5 kt.
The parameters above approximately describe the Fat Man bomb. This shows that even in the worst case, neutrons being present at the moment of criticality, quite a substantial yield would have been created. Predetonation does not necessarily result in an insignificant fizzle. It is not feasible though to make a high explosive driven implosion system fast enough to completely defeat predetonation through insertion speed alone (radiation driven implosion and fusion boosting offer means of overcoming it however).
The likelihood of predetonation occurring depends on the neutron background, the average rate at which neutron injection events occur. I use the term "neutron injection event" instead of simply talking about neutrons for a specific reason: the major source of neutrons in a fission device is spontaneous fission of the fissile material itself (or of contaminating isotopes). Each spontaneous fission produces an average of 2-3 neutrons (depending on the isotope). However, these neutrons are all released at the same moment, and thus either a fission chain reaction is initiated at that moment, or they all very quickly disappear. Each fission is thus a single injection event; neutrons from other sources are uncorrelated and so count as individual injection events.
Now neutron injection during insertion is not guaranteed to initiate a divergent chain reaction. At criticality (alpha equals zero), each fission generates on average one fission in the next generation. Since each fission produces nu neutrons (nu is in the range of 2-3 neutrons, 2.9 for Pu-239), this means that each individual neutron has only a 1/nu chance of causing a new fission. At positive values of alpha the odds are better of course, but clearly we must then consider the probability that each injection actually succeeds in creating a divergent chain reaction. This probability is dependent on alpha, but since non-fission capture is a significant possibility in any fissile system, it does not truly converge to 1 regardless of how high alpha is (although with plutonium it comes close).
Near criticality the probability of starting a chain reaction (P_chain) for a single neutron is thus about 34% for plutonium, and 40% for U-235. Since spontaneous fission injects multiple neutrons, the P_chain for this injection event is high, about 70% for both Pu-239 and U-235.
If the average interval between neutron injection events is R_inj, then the probability of initiating a chain reaction during an insertion time of length T is given by the Poisson relation:
Eq. 4.1.5.3-4 P_init = 1 - e^((-T/R_inj)*P_chain)
If T is much smaller than R_inj then this equation reduces approximately to P_init = (T/R_inj)*P_chain.
When T is much smaller than R_inj predetonation is unlikely, and the yield of the fission bomb (which will be the optimum yield) can be predicted with high confidence. As the ratio T/R_inj becomes larger, yield variability increases. When (T/R_inj)*P_chain is equal to ln 2 (0.693...) the probabilities of predetonation and of no predetonation are equal, although when predetonation occurs close to full assembly the yield reduction is small. As T/R_inj continues to increase, predetonation becomes virtually certain. With a large enough value of T/R_inj the yield becomes predictable again, but this time it is the minimum yield that results when neutrons are present at the beginning of insertion. For an implosion bomb a typical spread between the optimum and minimum yields is something like 40:1.
In the Fat Man bomb the neutron source consisted of about 60 g of Pu-240, which produced an average of one fission every 37 microseconds. The probability of predetonation was 12% (from a declassified Oppenheimer memo); assuming an average P_chain of 0.7, we can estimate the insertion time at 6.7 microseconds, or 4.7 microseconds if P_chain was close to 1. The chance of a large yield reduction was much smaller than this however: there was a 6% chance of a yield < 5 kt, and only a 2% chance of a yield < 1 kt. As we have seen, in no case would the yield have been smaller than 0.5 kt or so.
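These figures can be checked directly against Eq. 4.1.5.3-4 with a few lines of Python; the inputs below are the ones quoted in this paragraph, and the function name is arbitrary.

import math

def p_init(T, R_inj, p_chain):
    """Probability of chain-reaction initiation during insertion time T,
    where R_inj is the mean interval between injection events (Eq. 4.1.5.3-4)."""
    return 1.0 - math.exp(-(T / R_inj) * p_chain)

R_inj = 37.0          # microseconds between spontaneous fissions (60 g of Pu-240)
for T, p_chain in ((6.7, 0.7), (4.7, 1.0)):
    print(f"T = {T} us, P_chain = {p_chain}: "
          f"predetonation probability = {p_init(T, R_inj, p_chain):.1%}")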
Spontaneous fission is not the only cause for concern, since neutrons can enter the weapon from outside. Natural neutron sources are not cause for concern, but in a combat situation very powerful sources of neutrons may be encountered - other nuclear weapons.
One kiloton of fission yield produces a truly astronomical number of excess neutrons - about 3x10^24, with a fluence of 1.5x10^10 neutrons/cm^2 500 m away. A kiloton of fusion yields 3-4 times as many. The fission reaction itself emits all of its neutrons in less than a microsecond, but due to moderation these neutrons arrive at distant locations over a much longer period of time. Most of them arrive in a pulse lasting a millisecond, but thermal neutrons can continue to arrive for much longer periods of time. This is not the whole problem though. Additional neutrons called "delayed neutrons" continue to be emitted for about a minute from the excited fission products. These amount to only 1% or so of the prompt neutrons, but this is still an average arrival rate of 2.5x10^6 neutrons/cm^2-sec for a kiloton of fission at 500 m. With weapons sensitive to predetonation, careful spacing of explosions in distance and time may be necessary. Neutron hardening - lining the bomb with moderating and neutron absorbing materials - may be necessary to hold predetonation problems to a tolerable level (it is virtually impossible to eliminate it entirely in this way).
4.1.6 Methods of Core Assembly
The principal problem in fission weapon design is how to rapidly assemble or compress the fissile material from a subcritical state to a supercritical one. Methods of doing this can be classified in two ways:
Subsonic assembly means that shock waves are not involved. Assembly is performed by adiabatic compression, or by continuous acceleration. As a practical matter, only one subsonic assembly scheme needs to be considered: gun assembly.
Supersonic assembly means that shock waves are involved. Shock waves cause instantaneous acceleration, and naturally arise whenever the very large forces required for extremely rapid assembly occur. They are thus the natural tools to use for assembly. Shocks are normally created by using high explosives, or by collisions between high velocity bodies (which have in turn been accelerated by high explosive shocks). The term "implosion" is generally synonymous with supersonic assembly. Most fission weapons have been designed with assembly schemes of this type.
Assembly may be performed by compressing the core along one, two, or three axes. One-D compression is used in guns, and plane shock wave compression schemes. Two and three-D compression are known as cylindrical implosion and spherical implosion respectively. Plane shock wave assembly might logically be called "linear Implosion", but this term has been usurped (in the US at any rate) by a variant on cylindrical implosion (see below). The basic principles involved with these approaches are discussed in detail in Section 3.7, Principles of Implosion.
To the approaches just mentioned, we might add some more difficult-to-classify hybrid schemes such as: "pseudo-spherical implosion", where the mass is compressed into a roughly spherical form by convergent shock waves of more complex form; and "linear implosion", where a compressive shock wave travels along a cylindrical body (or other axially symmetric form - like an ellipsoid), successively squeezing it from one end to the other (or from both ends towards the middle). Schemes of this sort may be used where high efficiency is not called for, and difficult design constraints are involved, such as severe size or mass limitations. Hybrid combinations of gun and implosion are also possible - firing a bullet into an assembly that is also compressed.
The number of axes of assembly naturally affect the overall shape of the bomb. One-D assembly methods naturally tend to produce long, thin weapon designs; 2-D methods lead to disk-shaped or short cylindrical systems; and 3-D methods lead to spherical designs.
The subsections detailing assembly methods are divided in gun assembly (subsonic assembly) and implosion assembly (supersonic assembly). Even though it superficially resembles gun assembly, linear implosion is discussed in the implosion section since it actually has much more in common with other shock compression approaches.
The performance of an assembly method can be evaluated by two key metrics: the total insertion time and the degree of compression. Total insertion time (and the related insertion rate) is principally important for its role in minimizing the probability of predetonation. The degree of compression determines the efficiency of the bomb, the chief criteria of bomb performance. Short insertion times and high compression are usually associated since the large forces needed to produce one also tend to cause the other.
4.1.6.1 Gun Assembly
This was the first technique to be seriously proposed for creating fission explosions, and the first to be successfully developed. The first nuclear weapon to be used in war was the gun-type bomb called Little Boy, dropped on Hiroshima. Basic gun assembly is very simple in both concept and execution. The supercritical assembly is divided into two pieces, each of which is subcritical. One of these, the projectile, is propelled into the other, called the target, by the pressure of propellant combustion gases in a gun barrel. Since artillery technology is very well developed, there are really no significant technical problems involved with designing or manufacturing the assembly system.
The simple single-gun design (one target, one projectile) imposes limits on weapon mass, efficiency, and yield that can be substantially improved upon by using a "double-gun" design with two projectiles fired at each other. These two approaches are discussed in separate sections below. Even more sophisticated "complex" guns, which combine double guns with implosion, are discussed under Hybrid Assembly Techniques.
Gun designs may be used for several applications. They are very simple, and may be used when development resources are scarce or extreme reliability is called for. Gun designs are natural where weapons can be relatively long and heavy, but weapon diameter is severely limited - such as nuclear artillery shells (which are "gun type" weapons in two senses!) or earth penetrating "bunker busters" (here the characteristics of a gun tube - long, narrow, heavy, and strong - are ideal).
Single guns are used where designs are highly conservative (early US weapon, the South African fission weapon), or where the inherent penalties of the design are not a problem (bunker busters perhaps). Double guns are probably the most widely used gun approach (in atomic artillery shells for example).
22.214.171.124.1 Single Gun Systems
We might conclude that a practical limit for simple gun assembly (using a single gun) is a bit less than 2 critical masses, reasoning as follows: each piece must be less than 1 critical mass, if we have two pieces then after they are joined the sum must be less than 2 critical masses.
Actually we can do much better than this. If we hollow out a supercritical assembly by removing a chunk from the center like an apple core, we reduce its effective density. Since the critical mass of a system is inversely proportional to the square of the density, we have increased the critical mass remaining material (which we shall call the target) while simultaneously reducing its actual mass. The piece that was removed (which will be called the bullet) must still be a bit less than one critical mass since it is solid. Using this reasoning, letting the bullet have the limiting value of one full critical mass, and assuming the neutron savings from reflection is the same for both pieces (a poor assumption for which correction must be made) we have:
Eq. 126.96.36.199.1-1 M_c/((M - M_c)/M)^2 = M - M_c
where M is the total mass of the assembly, and M_c is the standard critical mass. The solution of this cubic equation is approximately M = 3.15 M_c. In other words, with simple gun assembly we can achieve an assembly of no more than 3.15 critical masses. Of course a practical system must include a safety factor, and reduce the ratio to a smaller value than this.
The weapon designer will undoubtedly surround the target assembly with a very good neutron reflector. The bullet will not be surrounded by this reflector until it is fired into the target, its effective critical mass limit is higher, allowing a larger final assembly than the 3.15 M_c calculated above.
Looking at U-235 critical mass tables for various candidate reflectors we can estimate the achievable critical mass ratios taking into account differential reflector efficiency. A steel gun barrel is actually a fairly good neutron reflector, but it will be thinner and less effective than the target reflector. M_c for U-235 (93.5% enrichment) reflected by 10.16 cm of tungsten carbide (the reflector material used in Little Boy) is 16.5 kg, when reflected by 5.08 cm of iron it is 29.3 kg (the steel gun barrel of Little Boy was an average of 6 cm thick). This is a ratio of 1.78, and is probably close to the achievable limit (a beryllium reflector might push it to 2). Revising Eq. 188.8.131.52.1-1 we get:
Eq. 184.108.40.206.1-2 M_c/((M - (1.78 M_c))/M)^2 = M - (1.78 M_c)
which has a solution of M = 4.51 M_c. If a critical mass ratio of 2 is used for beryllium, then M = 4.88 M_c. This provides an upper bound on the performance of simple gun-type weapons.
Some additional improvement can be had by adding fast neutron absorbers to the system, either natural boron, or boron enriched in B-10. A boron-containing sabot (collar) around the bullet will suppress the effect of neutron reflection from the barrel, and a boron insert in the target will absorb neutrons internally thereby raising the critical mass. In this approach the system would be designed so that the sabot is stripped of the bullet as it enters the target, and the insert is driven out of the target by the bullet. This system was apparently used in the Little Boy weapon.
Using the M_c for 93.5% enriched U-235, the ratio M/M_c for Little Boy was (64 kg)/(16.5 kg) = 3.88, well within the limit of 4.51 (ignoring the hard-to-estimate effects of the boron abosrbers). It appears then that the Little Boy design (completed some six months before the required enriched uranium was available) was developed with the use of >90% enrichment uranium in mind. The actual fissile load used in the weapon was only 80% enriched however, with a corresponding WC reflected critical mass of 26.5 kg, providing an actual ratio of 64/26.5 = 2.4.
The mass-dependent efficiency equation shows that it is desirable to assembly as many critical masses as possible. Applying this equation to Little Boy (and ignoring the equation's limitations in the very low yield range) we can examine the effect of varying the amount of fissile material present:
1.05 80 kg 1.1 1.2 tons 1.2 17 tons 1.3 78 tons 1.4 220 tons 1.5 490 tons 1.6 930 tons 1.8 2.5 kt 2.0 5.2 kt 2.25 10.5 kt 2.40 15.0 kt LITTLE BOY 2.5 18.6 kt 2.75 29.6 kt 3.0 44 kt 3.1
If its fissile content had been increased by a mere 25%, its yield would have tripled.
The explosive efficiency of Little Boy was 0.23 kt/kg of fissile material (1.3%), compared to 2.8 kt/kg (16%) for Fat Man (both are adjusted to account for the yield contribution from tamper fast fission). Use of 93.5% U-235 would have at least doubled Little Boy yield and efficiency, but it would still have remained disappointing compared to the yields achievable using implosion and the same quantity of fissile material.
220.127.116.11.2 Double Gun Systems
Significant weight savings a possible by using a "double-gun" - firing two projectiles at each other to achieve the same insertion velocity. With all other factors being the same (gun length, projectile mass, materials, etc.) the mass of a gun varies with the fourth power of velocity (doubling velocity requires quadrupling pressure, quadrupling barrel thickness increases mass sixteen-fold). By using two projectiles the required velocity is cut by half, and so is the projectile mass (for each gun). On the other hand, to keep the same total gun length though, the projectile must be accelerated in half the distance, and of course there are now two guns. The net effect is to cut the required mass by a factor of eight. The mass of the breech block (which seals the end of the gun) reduces this weight saving somewhat, and of course there is the offsetting added complexity.
A double gun can improve on the achievable assembled mass size since the projectile mass is divided into two sub-critical pieces, each of which can be up to one critical mass in size. Modifying Eq. 18.104.22.168.1-1 we get:
Eq. 22.214.171.124.1-3 M_c/((M - 2M_c)/M)^2 = M - 2M_c
with a solution of M = 4.88 M_c.
Taking into account the effect of differential reflector efficiency we get mass ratios of ratios of 3.56 (tungsten carbide) and 4 (beryllium) which give assembled mass size limits of M = 7.34 M_c and M = 8 M_c respectively.
Another variant of the double gun concept is to still only have two fissile masses - a hollow mass and a cylindrical core as in the single gun - but to drive them both together with propellant. One possible design would be to use a constant diameter gun bore equal to the target diameter, with the smaller diameter core being mounted in a sabot. In this design the target mass would probably be heavier than the core/sabot system, so one end of the barrel might be reinforced to take higher pressures. Another more unusual approach would be to fire the target assembly down an annular (ring shaped) bore. This design appears to have been used in the U.S. W-33 atomic artillery shell, which is reported to have had an annular bore.
These larger assembled masses give significantly more efficient bombs, but also require large amounts of fissile material to achieve them. And since there is no compression of the fissile material, the large efficiency gains obtainable through implosive compression is lost. These shortcomings can be offset somewhat using fusion boosting, but gun designs are inherently less efficient than implosion designs when comparing equal fissile masses or yields.
126.96.36.199.3 Weapon Design and Insertion Speed
In addition to the efficiency and yield limitations, gun assembly has some other significant shortcomings:
First, guns tend to be long and heavy. There must be sufficient acceleration distance in the gun tube before the projectile begins insertion. Increasing the gas pressure in the gun can shorten this distance, but requires a heavier tube.
Second, gun assembly is slow. Since it desirable to keep the weight and length of the weapon down, practical insertion velocities are limited to velocities below 1000 m/sec (usually far below). The diameter of a core is on the order of 15 cm, so the insertion time must be at least a 150 microseconds or so.
In fact, achievable insertion times are much longer than this. Taking into account only the physical insertion of the projectile into the core underestimates the insertion problem. As previously indicated, to maximize efficiency both pieces of the core must be fairly close to criticality by themselves. This means that a critical configuration will be achieved before the projectile actually reaches the target. The greater the mass of fissile material in the weapon, the worse this problem becomes. With greater insertion distances, higher insertion velocities are required to hold the probability of predetonation to a specified value. This in turn requires greater accelerations or acceleration distances, further increasing the mass and length of the weapon.
In Little Boy a critical configuration was reached when the projectile and target were still 25 cm apart. The insertion velocity was 300 m/sec, giving an overall insertion time of 1.35 milliseconds.
Long insertion times like this place some serious constraints on the materials that can be used in the bomb since it is essential to keep neutron background levels very low. Plutonium is excluded entirely, only U-235 and U-233 may be used. Certain designs may be somewhat sensitive to the isotopic composition of the uranium also. High percentages of even-numbered isotopes may make the probability of predetonation unacceptably high.
The 64 kg of uranium in Little Boy had an isotopic purity of about 80% U-235. The 12.8 kg of U-238 and U-234 produced a neutron background of around 1 fission/14 milliseconds, giving Little Boy a predetonation probability of 8-9%. In contrast to the Fat Man bomb, predetonation in a Little Boy type bomb would result in a negligible yield in nearly every case.
The predetonation problem also prevents the use of a U-238 tamper/reflector around the core. A useful amount of U-238 (200 kg or so) would produce a fission background of 1 fission/0.9 milliseconds.
Gun-type weapons are obviously very sensitive to predetonation from other battlefield nuclear explosions. Without hardening, gun weapons cannot be used within a few of kilometers of a previous explosion for at least a minute or two.
Attempting to push close to the mass limit is risky also. The closer the two masses are to criticality, the smaller the margin of safety in the weapon, and the easier it is to cause accidental criticality. This can occur if a violent impact dislodges the projectile, allowing it to travel toward the target. It can also occur if water leaks into the weapon, acting as a moderator and rendering the system critical (in this case though a high yield explosion could not occur).
Due to the complicated geometry, calculating where criticality is achieved in the projectile's travel down the barrel is extremely difficult, as is calculating the effective value of alpha vs time as insertion continues. Elaborate computation intensive Monte Carlo techniques are required. In the development of Little Boy these things had to be extrapolated from measurements made in scale models.
Once insertion is completed, neutrons need to be introduced to begin the chain reaction. One route to doing this is to use a highly reliable "modulated" neutron initiator, an initiator that releases neutrons only when triggered. The sophisticated neutron pulse tubes used in modern weapons are one possibility. The Manhattan Project developed a simple beryllium/polonium 210 initiator named "Abner" that brought the two materials together when struck by the projectile.
If neutron injection is reliable, then the weapon designer does not need to worry about stopping the projectile. The entire nuclear reaction will be completed before the projectile travels a significant distance. On the other hand, if the projectile can be brought to rest in the target without recoiling back then an initiator is not even strictly necessary. Eventually the neutron background will start the reaction unaided.
A target designed to stop the projectile once insertion is complete is called a "blind target". The Little Boy bomb had a blind target design. The deformation expansion of the projectile when it impacted on the stop plate of the massive steel target holder guaranteed that it would lodge firmly in place. Other designs might add locking rings or other retention devices. Because of the use of a blind target design, Little Boy would have exploded successfully without the Abner initiators. Oppenheimer only decided to include the initiators in the bomb fairly late in the preparation process. Even without Abner, the probability that Little Boy would have failed to explode within 200 milliseconds was only 0.15%; a delay as long as one second was vanishingly small - 10^-14.
Atomic artillery shells have tended to be gun-type systems, since it is relatively easy to make a small diameter, small volume package this way (at the expense of large amounts of U-235). Airbursts are the preferred mode of detonation for battlefield atomic weapons which, for an artillery shell travelling downward at several hundred meters per second, means that initiation must occur at a precise time. Gun-type atomic artillery shells always include polonium/beryllium initiators to ensure this.
188.8.131.52 Implosion Assembly
High explosive driven implosion assembly uses the ability of shock waves to instantaneously compress and accelerate material to high velocities. This allows compact designs to rapidly compress fissile material to densities much higher than normal on a time scale of microseconds, leading to efficient and powerful explosions. The speed of implosion is typically several hundred times faster than gun assembly (e.g. 2-3 microseconds vs. 1 millisecond). Densities twice the normal maximum value can be reached, and advanced designs may be able to do substantially better than this (compressions of three and four fold are often claimed in the unclassified literature, but these seem exaggerated). Weapon efficiency is typically an order of magnitude better than gun designs.
The design of an implosion bomb can be divided into two parts:
The high explosive system may be essentially unconfined (like that in the Fat Man bomb), but increased explosive efficiency can be obtained by placing a massive tamper around the explosive. The system then acts like a piston turned inside out, the explosive gases are trapped between the outer tamper and the inner implosion hardware, which is driven inward as the gases expand. The added mass of the tamper is no doubt greater than the explosive savings, but if the tamper is required anyway (for radiation confinement, say) then it adds to the compactness of the design.
If you have not consulted Section 3.7 Principles of Implosion, it may be a good idea to do so.
184.108.40.206.1 Energy Required for Compression As explained in Section 3.4 Hydrodynamics, shock compression dissipates energy in three ways:
Only the first of these is ultimately desirable for implosion, although depending on the system design some or all of the kinetic energy may be reclaimable as compressive work. The energy expended in entropic heating is not only lost, but also makes the material more resistant to further compression.
Shock compression always dissipates some energy as heat, and is less efficient than gentle isentropic (constant entropy) compression. Examining the pressure and total energy required for isentropic compression thus provides a lower bound on the work required to reach a given density.
Below are curves for the energy required for isentropic and shock compression of uranium up to a compression factor of 3. For shock compression only the energy the appears as internal energy (compression and heating) are included, kinetic energy is ignored.
The energy expenditure figures on the X axis are in ergs/cm^3 of uncompressed uranium, the y axis gives the relative volume change (V/V_0). Shock compression, being less efficient, is the upper curve. It can be seen that as compression factors rise above 1.5 (a V/V_0 ratio of 0.67), the amount of work required for shock compression compared to isentropic compression rises rapidly. The kink in the shock compression curve at V/V_0 of 0.5 is not a real phenomenon, it is due to the transition from experimental data to a theoretical Thomas-Fermi EOS.
It is interesting to note that to double the density of one cubic centimeter of uranium (18.9 grams) 1.7 x 10^12 ergs is required for shock compression. This is the amount of energy found in 40 grams of TNT, about twice the weight of the uranium. The efficiency of an implosion system at transferring high explosive energy to the core is generally not better than 30%, and may be worse (possibly much worse if the design is inefficient). This allows us the make a good estimate of the amount of explosive required to compress a given amount of uranium or plutonium to high density (a minimum of 6 times the mass of the fissile material for a compression factor of 2).
These curves also show that very high shock compressions (four and above) are so energetically expensive as to be infeasible. To achieve a factor of only 3, 7.1x10^11 ergs/g of uranium is required. Factoring implosion efficiency (30%), the high explosive (if it is TNT) must have a mass 56 times that of the material being compressed. Reports in the unclassified literature of compressions of four and higher can thus be safely discounted.
Compression figures for plutonium are classified above 30 kilobars, but there is every reason to believe that they are not much different from that of uranium. Although there are large density variations from element to element at low pressure, the low density elements are also the most compressible, so that at high pressures (several megabars) the plot of density vs atomic number becomes a fairly smooth function. This implies that what differences there may be in behavior between U and Pu at low pressure will tend to disappear in the high pressure region.
Actually, even in the low pressure region the available information shows that the difference in behavior isn't all that great, despite the astonishingly large number of phases (six) and bizarre behavior exhibited by plutonium at atmospheric pressure. The highest density phases of both metals have nearly identical atomic volumes at room pressure, and the number of phases of both metals drops rapidly with increasing pressure, with only two phases existing for both metals above 30 kilobars. The lowest density phase of plutonium, the delta phase, in particular disappears very rapidly. The amount of energy expended in compression at these low pressures is trivial. The compression data for uranium is thus a good substitute for plutonium, especially at high pressures and high compressions.
The shock and isentropic pressures required corresponding to the compression energy curves are shown below. The pressures shown on the X axis are in kilobars, the y axis gives the relative volume change (V/V_0).
Since the compression energies of interest vary by many orders of magnitude over compressions ranging up to 3, it is often more convenient to look at logarithmic plots or energy. Figure 220.127.116.11.1-3, below, gives the isentropic curve from 10^7 ergs/cm^3 to 10^12 ergs/cm^3. Since the energy for shock compression is virtually identical to the isentropic value at small compressions, the curve for shock compression is given for compression energies of 10^10 erg/cm^3 (V/V_0 ~ 0.9)
18.104.22.168.2 Shock Wave Generation Systems
The only practical means of generating shock waves in weapons is through the use of high explosives. When suitably initiated, these energetic materials support detonation waves: a self-sustaining shock wave that triggers energy releasing chemical reactions, and is driven by the expanding gases that are produced by these reactions.
Normally a high explosive is initiated at a single point. The detonation propagates as a convex detonation wave, with a more or less spherical surface, from that point.
To drive an implosion, a divergent detonation wave must be converted into a convergent one (or a planar one for linear implosion). Three approaches can be identified for doing this.
22.214.171.124.2.1 Multiple Initiation Points
In this approach, the high explosive is initiated simultaneously by a large number of detonators all over its surface. The idea is that if enough detonation points exist, then it will approximate the simultaneous initiation of the entire surface, producing an appropriately shaped shock from the outset.
The problem with this approach is that colliding shock waves do not tend to "smooth out", rather the reverse happens. A high pressure region forms at the intersection of the waves, leading to high velocity jets that outrun the detonation waves and disrupting the hoped for symmetry.
The multiple detonation point approach was the first one tried at Los Alamos during the Manhattan Project to build a spherical implosion bomb. Attempts were made to suppress the jetting phenomenon by constantly increasing the number of points, or by inserting inert spacers at the collision points to suppress the jets. The problems were not successfully worked out at the time.
Since the war this approach has been used with reasonable success in laboratory megagauss field experiments employing the simpler cylindrical geometry. There is also evidence of continuing US interest in this approach. It is not clear whether this technique has been successfully adapted for use in weapons.
126.96.36.199.2.2 Explosive Lenses
The basic idea here is to use the principle of refraction to shape a detonation wave, just as it is used in optics to shape a light wave.
Optical lenses use combinations of materials in which light travels at different speeds. This difference in speed gives rise to the refractive index, which bends the wave when it crosses the boundary between materials.
Explosive lenses use materials that transmit detonation or shock waves at different speeds. The original scheme used a hollow cone of an explosive with a high detonation velocity, and an inner cone of an explosive with a low velocity. The detonator initiates the high velocity explosive at the apex of the cone. A high velocity detonation wave then travels down the surface of the hollow cone, initiating the inner explosive as it goes by. The low velocity detonation wave lags behind, causing the formation of a concave (or planar) detonation wave.
With any given combination of explosives, the curvature of the wave produced is determined by the apex angle of the lens. The narrower the angle, the greater the curvature. However, for a given lens base area the narrower angle, the taller the lens, and the greater its volume. Both of these are undesirable in weapons, since volume and mass are at a premium.
To create a spherical implosion wave, a number of inward facing lenses need to be arranged on the surface of a sphere so that the convergent spherical segments that each produces merge into one wave. There is substantial advantage in using a large number of lenses. Having many lenses means that each lens has a small base area, and needs to produce a wave with a smaller curvature, both of which reduce the thickness of the lens layer. A more symmetrical implosion can probably be achieved with more lenses also.
It is important to have the lens detonation points (and optical axes) spaced as regularly as possible to minimize irregularities, and to make the height of each lens identical. The largest number of points that can be spaced equidistantly from their neighbors on the surface of a sphere is 20 - corresponding to the 20 triangular facets of an icosahedron (imagine the sphere encased in a circumscribed polyhedron, with each facet touching the sphere at one point). The next largest number is 12 - corresponding to the 12 pentagonal facets of the dodecahedron.
12 lenses, even 20 lenses, is an undesirably small number (although some implosion systems have used the 20 point icosahedral layout). A close approximation to strict regularity can be achieved with more points by interleaving a dodecahedron and icosahedron to produce a polyhedron tiled with hexagonal and pentagonal facets, 20 hexagons and 12 pentagons, for a total of 32 points. This pattern is the same familiar one found on a soccer ball, and was used as the original implosion system lens layout in Gadget, and other early US nuclear weapons.
Designs with 40, 60, 72, and 92 lenses have also been used (although these do not rely on Platonic solids for providing the layout pattern). More lenses lead to a thinner, less massive explosive lens shell, and greater implosion uniformity. The penalty for more lenses is more fabrication effort, and a more powerful and complex initiation system (not a trivial problem originally, but greatly simplified by modern pulse power technology). A simple implosion system could be very massive. The 32 point systems used in early US nuclear weapons had an external diameter of 1.4 m and weighed over 2000 kg. Current systems may be less than 30 cm, and weigh as little as 20 kg, but probably do not follow the same design approach as earlier weapons.
To a degree these multi-lens systems all suffer from the same shortcoming as the basic multi-point detonation approach: strict uniformity of the spherical implosion wave is unachievable. The detonation wave spreads out radially from each detonation point, so each wave produces a circular segment of a spherical wave. If you consider an icosahedron or a "soccer ball", you can see that when circles are inscribed in each of the regular polygons they touch each of their neighbor circles at one point. This marks the moment when the individual wavelets start to merge into a single wave. The gaps left between the inscribed circles however are irregular areas where distortions are bound to arise as the wave edges spread into them, possibly even leading to jetting.
Since the shock wave created by the lens exits from it at the velocity of the slow (and relatively weak) explosive, it desirable to have a layer of powerful explosive inside the lens system (perhaps the same one used as the fast lens component). This layer provides most of the driving force for the implosion, for the most part the lens system (which may well be much more massive) simply provides a mechanism for spherical initiation.
Ideally, the best combination of explosives is the fastest and slowest that are available. This provides the greatest possible refractive index, and thus bending effect, and allows using a wider lens angle. The fastest and slowest explosives generally known are HMX (octogen) and baratol respectively. HMX has a detonation velocity of 9110 m/sec (at a pressed density of 1.89), the dense explosive baratol (76% barium nitrate/24% TNT) has a velocity of 4870 m/sec (cast density 2.55). Explosives with slightly slower detonation velocities include the even denser plumbatol - 4850 m/sec (cast density 2.89) for a composition of 70% lead nitrate/30% TNT; and the relatively light boracitol - 4860 m/sec (cast density 1.55) for a composition of 60% boric acid/40% TNT. Mixtures of TNT with glass or plastic microspheres have proven to be an effective, light weight, and economical slow explosive in recent unclassified explosive lens work (I don't have data on their velocities though).
During WWII Los Alamos developed lenses using combination of Composition B (or Comp B) for the fast explosive (detonation velocity of 7920 m/sec, at a cast density 1.72), and baratol for the slow explosive.
Later systems have used the very fast HMX as a fast explosive, often as a plastic bonded mixture consisting almost entirely of HMX. Plumbatol, a denser and slightly slower explosive, may have been used in some later lens system designs. Boracitol is definitely known to have been used, probably in thermonuclear weapon triggers and perhaps in other types of weapons as well.
The idea of explosives lenses appears to have originated with M. J. Poole of the Explosives Research Committee in England. In 1942 he prepared a report describing a two-dimensional arrangement of explosives (RDX and baratol) to create a plane detonation wave. This idea was brought to Los Alamos in May 1944 by James Tuck, where he expanded it by suggesting a 3-D lens for creating a spherical implosion wave as a solution to making an implosion bomb. A practical lens design was proposed separately by Elizabeth Boggs of the US Explosives Research Laboratory, and by Johann Von Neumann. The Boggs proposal was the earlier of the two, although it was Von Neumann's proposal who directly influenced the Manhattan Project.
The task of developing a successful spherical implosion wave system is extremely difficult. Although the concept involved is simple, actually designing a lens is not trivial. The detonation wave velocity is affected by events occurring some distance behind the front. When the wave crosses from the fast explosive into the slow explosive it does not instantly assume the steady state detonation velocity of the slow explosive. Unlike the analogy with light, the velocity change is gradual and occurs over a significant distance. Since energy can be lost through the surface of the lens, thus reducing the fast wave velocity, the test environment of the lens also affects its performance. The behavior of a lens can only be calculated using sophisticated 2 and 3-D hydrodynamic computer codes that have been validated against experimental data.
Practical lens development generally requires a combination of experimentation, requiring precision explosive manufacture and sophisticated instruments to measure shock wave shape and arrival times, and numerical modelling (computer simulation) to extrapolate from test results. An iterative design, test, and redesign cycle allows the development of efficient, high-performance lenses.
During the Manhattan Project, due to the primitive state of computers and high explosive science and instrumentation, lenses could only be designed by trial and error (guided to some extent by scaling laws deduced from previous experiments). This required the detonation of over 20,000 test lens (and for each one tested, several were fabricated and rejected). When successful sub-scale implosion systems were scaled up to full size, it was discovered that the lenses had to be redesigned.
Assembling the lenses into a complete implosion system aggravates the design and development problems. To avoid shock wave collisions that disrupt symmetry, the surfaces of the lenses need to be aligned very accurately. In a spherical system, the implosion wave that is created is completely hidden by the layer of detonating explosive. The chief region of interest is a small region in the center with perhaps < 0.1% the volume of the whole system. Very expensive diagnostic equipment and difficult experiments are required to study the implosion process, or even to verify that it works at all. Hemispherical tests can be quite useful though to validate lens systems before full spherical testing.
188.8.131.52.2.3 Advanced Wave Shaping Techniques
The conical lens design used by the Manhattan Project and early U.S. nuclear weapons is not the only lens design possible, or even the best. It had the crucial advantage of being simple in form (eliminating the need to design or fabricate complex shapes), and of having a single design variable - the cone apex angle. This made it possible to devise workable lenses with the crude methods then available. Other geometric arrangements of materials that transmit shocks slowly can be used to shape a convex shock into a concave one.
The shock slowing component of a lens, such as the inner cone of a conical explosive lens, does not really need to be another explosive. An inert substance that transmits a shock more slowly than the fast explosive detonation wave will also work. The great range of materials available that are not explosives gives much greater design flexibility. An additional (potential) advantage is that shock waves attenuate as they travel through non-explosive materials, and slow down. This can make lens design more complex, since this attenuation must be taken into account, but the reduced velocity can also lead to a more compact lens. Care must be taken though to insure that the attenuated shock remains strong enough to initiate the inner explosive layer.
By consulting the equation for shock velocity we can see that a high compressibility (low value of gamma) and a high density both lead to low shock wave velocities. An ideal material would be a highly compressible material of relatively high density. This describes an unusual class of filled plastic foams that have been developed at the Allied-Signal Kansas City Plant (the primary supplier of non-nuclear components for US nuclear weapons). It is quite possible that these foams were developed for use as wave shaping materials.
By extending the idea of custom tailoring the density and compressibility of materials, we can imagine that different arrangements of materials of varying properties can be used to reshape shock waves in a variety of ways.
Inserting low density materials, like solid or foam plastics, into explosives can also inhibit detonation propagation and allow the designer to "fold" the path the detonation wave must take. If suitable detonation inhibiting bodies are arranged in a grid inside a cone of high explosive, the same effect as the high explosive lens can be obtained with a lower lens density and with a larger apex angle.
French researchers have described advanced lens systems using alternating layers of explosive and inert material. This creates an anisotropic detonation velocity in the system, very slow across the layers, but fast along the them. A compact lens for producing spherically curved waves has been demonstrated using a cylindrical version of this system, with a slow explosive between the inert layers, and a curved "nose cone"-like surface covered by fast explosive.
It is possible to completely and uniformly cover a sphere with circles if the number of lenses (and circles) is less than or equal to two. A single lens capable of bending a single detonation wave into a complete spherically convergent wave can, in principle, be made so that the resulting wave is entirely uniform. This extends the principle of the explosive lens to its most extreme form. It is also possible to use two lenses, each covering a hemisphere, which meet at the equator of the sphere and can smoothly join two hemispherical implosion waves.
The single point detonation system is illustrated below. This idea makes use of a cardioid-like logarithmic spiral:
fffffff fssssssssssf fsssssssssssssf fCsssssssssssssfD <- Detonator fsssssssssssssf fssssssssssf f = fast explosive fffffff s = slow explosive C = core
This not a very practical design as given. The thickness of the slow explosive on the detonator side would have to be considerable to achieve the necessary bending. Inserting detonation path folding spacers in the explosive could also dramatically reduce the size (but making manufacturing extremely difficult). A variation on this using the French layered explosive approach has also been proposed.
It is unlikely that a slow explosive would really be used for the inner slow lens component, since the velocity differential is not that great. The high degree of shock bending required strongly encourages using something that transmits shocks as slowly as possible such as an advanced inert material.
Such an implosion system would be extremely difficult to design and possibly to manufacture. The continuously varying 3-D surfaces would require considerable experimentation to perfect, and the surfaces would be a nightmare to machine. Once an acceptable shape were developed, and suitable molds or dies were made, the actual manufacture might be quite easy, requiring only pressing of explosives and plastics into molds, or forming metal sheet in a die. The system would remain quite intolerant in any imperfections in dimensions or material however.
The difficulty in making compact and light implosion systems can be judged by the US progress in developing them. The initial Fat Man implosion system had a diameter of almost 60 inches. A significantly smaller system (30 inches) was not tested until 1951, a 22 inch system in mid-1952, and a 16 inch system in 1955. By 1955 a decade had passed since the invention of nuclear weapons, and hundreds of billions of dollars (in today's money) had been spent on developing and producing bombs and bomb delivery systems. These later systems must have used some advanced wave shaping technologies, which have remained highly classified. Clearly developing them is not an easy task (although the difficulty may be conceptual as much as technological).
184.108.40.206.2.4 Cylindrical and Planar Shock Techniques
Cylindrical and planar shock waves can be generated using the techniques previously described, making allowances for the geometry differences. A cylindrical shock can be created using the 2-D analog of the explosive lens, a wedge shaped lens with the same cross section as the conical version. A planar shock is simply a shaped shock with zero curvature.
A complete cylindrical implosion would require several parallel wedge-shaped explosive lenses arranged around the cylinder axis to form a star shape. To make the implosion truly cylindrical (as opposed to conical) it is necessary to detonate each of these lenses along the entire apex of the wedge simultaneously. This can be done by using a lens made out of sheets of high explosive (supported by a suitable backing) to create a plane shock. The edge of this sheet lens would join the apex of the wedge. This sheet lens need not extend out radially, it can join at an angle so that it folds into the space between the star points.
Some special techniques are also available based on the peculiar characteristics of the 1-D and 2-D geometries. The basic principle for these techniques is the "flying plate line charge", illustrated below.
A metal plate is covered on one side with a sheet of explosive. It is detonated on one edge, and the detonation wave travels across the plate. As it does so the detonation accelerates the plate, driving it to the right. After the explosive has completely detonated the flying plate will be flat again. The angle between the original stationary plate and the flying plate is determined by the ratio between the detonation velocity, and the velocity of the accelerated plate. When this high velocity plate strikes the secondary explosive charge the shock will detonate it, creating a planar detonation.
As described above, the system doesn't quite work. A single detonator will actually create a circular detonation front in the explosive sheet, expanding from the initiation point. This can be overcome by first using a long, narrow flying plate (a flying strip if you will) to detonate the edge of wide plate. This wide plate can then be used to initiate the planar detonation.
The flying strip approach can also be used to detonate the cylindrical lens system described above in place of the sheet lens.
The flying plate scheme can be easily extended to create cylindrical detonations.
This is a cross section view of a hollow truncated cone covered by a layer of explosives. The wide end of the cone is joined to a sheet of explosives with a detonator in the center.
The single detonator located on the axis causes an expanding circular detonation in the explosive sheet. When the shock wave reaches the perimeter, it continues travelling along the surface of the cone. The cone collapses starting at the wide end. The angle of the cone is such that a cylindrical flying plate is created that initiates a cylindrical detonation in the secondary explosive.
Flying plate systems are much easier to develop than explosive lenses. Instrumentation for observing their behavior is relatively simple. Multiple contact pins and an oscilloscope can easily measure plate motion, and well established spark gap photography can image the plate effectively.
The choice of explosives in an implosion system is driven by the desire for high performance, safety, ease of fabrication, or sometimes by special properties like the slow detonation velocity needed in explosive lenses.
The desire for high performance leads to the selection of very energetic explosives that have very high detonation velocities and pressures (these three things are closely correlated). The highest performance commonly known explosive is HMX. Using HMX as the main explosive will provide the greatest compression. HMX was widely used in US weapons from the late fifties on into the 1970s, often in a formula called PBX-9404 (although this particular formulation proved to have particularly serious safety problems - causing eight fatalities in a six month period in 1959 among personnel fabricating the explosive). HMX is known to be the principal explosive in many Soviet weapon designs since Russia is selling the explosive extracted from decommissioned warheads for commercial use. The chemically related RDX is a close second in power. It was the principal explosive used in most early US designs, in the form of a castable mixture called Composition B.
In recent years the US has become increasingly concerned with weapon safety, following some prominent accidents in which HE detonation caused widespread plutonium contamination and in the wake of repeated fatal explosions during fabrication. Many of the high energy explosives used, such as RDX and HMX, are rather sensitive to shock and heat. While normally an impact on the order of 100 ft/sec is required to detonate one these explosives, if a sliding or friction-producing impact occurs then these explosives can be set off by an impact as slow as 10 ft/sec (this requires only a drop of 18 inches)! This has led to the use of explosives that are insensitive to shock or fire. Insensitive explosives are all based on TATB, the chemical cousin DATB lacks this marked insensitivity. These explosives have very unusual reaction rate properties that make them extremely insensitive to shock, impact, or heat. TATB is reasonably powerful, being only a little less powerful than Comp B. A composition known as PBX-9504 has been developed that adds 15% HMX to a TATB mixture, creating a compromise between added power and added sensitivity.
Another very strong explosive called PETN has not been used much (or at all) as a main explosive in nuclear weapons due to its sensitivity, although it used in detonators.
Fabricating explosives for implosion systems is a demanding task, requiring rigid quality control. Many explosive components have complex shapes, most require tight dimensional tolerances, and all require a highly uniform product. Velocity variations cannot be greater than a few percent. Achieving such uniformity means carefully controlling such factors as composition, purity, particle size, crystal structure, curing time and curing temperature.
Casting was the first method used for manufacturing implosion components since a very homogenous product can be produced in fairly complex shapes. Unfortunately the most desirable explosives do not melt, which makes casting of the pure explosive impossible. The original solution adopted by the US to this problem was to use castable mixtures of the desired explosive and TNT. TNT is the natural choice for this, being the only reasonably powerful, easily melted explosive available. Composition B, the first explosive used, typically consisted of 63% RDX, 36% TNT, and 1% wax (cyclotol, a mixture with a higher proportion of RDX to TNT, was used later). Great care must be taken to ensure that the slurry of solid explosive and melted TNT is uniform since settling occurs. Considerable attention must be paid to controlling the particle size of the solid explosive, and to monitoring the casting, cooling, and curing processes. Mold making is also a challenging task, requiring considerable experimentation at Los Alamos before an acceptable product could be made.
Pressing is a traditional way of manufacturing explosives products, but its inability to make complex shapes, and problems with density variations and voids prevented its use during WWII. Plastic explosives (that is - soft, pliable explosives) can be pressed into uniform complex shapes quite easily, but their lack of strength make them unattractive in practical weapon designs.
During the forties and fifties advances in polymer technology led to the creation of PBXs (plastic bonded explosives). These explosives use a polymer binder that sets during or after fabrication to make a rigid mass. The first PBX was developed at Los Alamos in 1947, an RDX-polystyrene formulation later designated PBX 9205. Some early work used epoxy binders that harden after fabrication through chemical reactions, but current plastic binders are thermosetting resins (possibly in combination with a plasticizer). Explosive granules are coated with the plastic binder and formed by pressing, usually followed by machining of the billet.
The desire for maximum explosive energy has led to the selection of polymers and plasticizers that actively participate in the explosion, releasing energy through chemical reactions. Emphasis on this has led to undesirable side effects - like sensitization of the main explosive (as occurred with PBX-9404), or poor stability. In the 1970s the W-68 warhead, the comprising large part of the U.S. submarine warhead inventory, developed problems due to decomposition of the LX-09 PBX being used, requiring the rebuilding of 3,200 warheads. LX-09 also exhibited sensitivity problems similar to PBX-9404, in 1977 three men were killed at the Pantex plant in Amarillo from a LX-09 billet explosion.
Normally the explosive and polymer binder are processed together to form a granulated material called a molding powder. This powder is formed using hot pressing - either isostatic (hydrostatic) or hydraulic presses, using evaluated molds (1 mm pressure is typical). The formed material may represent the final component, but normally additional machining to final specifications is required.
PBXs contain a higher proportion of the desired explosive, possess greater structural strength, and also don't melt. These last two properties make them easier to machine to final dimensions. Plastic bonding is very important in insensitive high explosives (IHEs), since mixing the insensitive explosives with the more sensitive TNT would defeat the purpose of using them.
PBX was first used in a full-scale nuclear detonation during the Redwing Blackfoot shot in June 1956. PBXs have replaced melt castable explosives in all US weapons. The PBX compositions that have been used by the U.S. include PBX-9404, PBX-9010, PBX-9011, PBX-9501, LX-04, LX-07, LX-09, LX-10, LX-11. Insensitive PBXs used are PBX-9502 and LX-17.
Table 220.127.116.11.2.5-1. Basic Properties Of Explosives Used In Us Nuclear Weapons EXPLOSIVE DETONATION DENSITY SENSITIVITY VELOCITY PRESSURE m/sec kilobars HMX 9110 390 1.89/pressed Moderate LX-10 8820 375 1.86/pressed Moderate LX-09 8810 377 1.84/pressed Moderate PBX-9404 8800 375 1.84/pressed Moderate RDX 8700 338 1.77/pressed Moderate PETN 8260 335 1.76/pressed High Cyclotol 8035 - 1.71/cast Low Comp B 63/36 7920 295 1.72/cast Low TATB 7760 291 1.88/pressed Very Low PBX-9502 7720 - 1.90/pressed Very Low DATB 7520 259 1.79/pressed Low HNS 7000 200 1.70/pressed Low TNT 6640 210 1.56/cast Low Baratol 76/24 4870 140 2.55/cast Moderate Boracitol 60/40 4860 - 1.55/cast Low Plumbatol 70/30 4850 - 2.89/cast Moderate
18.104.22.168.2.6 Detonation Systems
Creating a symmetric implosion wave requires close synchronization in firing the detonators. Tolerances on the order of 100 nanoseconds are required.
Conventional detonators rely on electrically heating a wire, which causes a small quantity of a sensitive primary explosive to detonate (lead azide, mercury fulminate, etc.). The primary usually then initiates a secondary explosive, like PETN or tetryl, which fires the main charge.
The process of resistively heating the wire, followed by heat conduction to the primary explosive until it reaches detonation temperature requires a few milliseconds, with correspondingly large timing errors. Conventional detonators thus lack the necessary precision for firing an implosion system.
One approach to reducing the duration of action of the detonator is to send a sudden, powerful surge of current through a very fine wire (made of gold or platinum), heating it to the point of vaporization. This technique, called an exploding wire or exploding bridge wire (EBW) detonator, was invented by Luis Alvarez at Los Alamos during the Manhattan Project. Current surge rise times of a fraction of a microsecond are feasible, with a spread in detonation times of a few nanoseconds.
An exploding wire detonator can be used to initiate a primary explosive (usually lead azide), as in a conventional detonator. But if the current surge is energetic enough, then the exploding wire can directly initiate a less sensitive booster explosive (usually PETN). The advantage of doing this is that the detonation system is extremely safe from accidental activation by heat, stray currents, or static electricity. Only very powerful, very fast current surges can fire the detonators. This type of exploding wire detonator is one of the safest types of detonators known. The disadvantage is the need to supply those very powerful, very fast current surges. A typical EBW requires 5 KV, with a peak current of at least 500-1000 amps. A few kiloamps is more typical of most EBW detonators, but a multi-EBW system would probably try to minimize the required current. With sufficient care in detonator design and construction, inherent detonator accuracies of better than 10 nanoseconds are achievable.
Since WWII, a number of detonator designs based on exploding foils have been developed. Exploding foil detonators could be used to fire the booster explosive directly, as in EBW detonators, but generally this implies the use of different concept called a "slapper" detonator. This idea (developed at Lawrence Livermore) uses the expanding foil plasma to drive another thin foil or plastic film to high velocities, which initiates the explosive by impacting the surface. Normally the driving energy is provided entirely by heating of the foil plasma from the current passing through it, but more sophisticated designs may use a "back strap" to create a magnetic field that drives the plasma forward. Slappers are fairly efficient at converting electrical energy into flyer kinetic energy, it is not hard to achieve 25-30% energy transfer.
A typical slapper detonator consists of an explosive pellet pressed to a high density for maximum strength (plastic bonded explosives can also be used). Next to the explosive pellet is an insulation disk with a hole in the center which is set against the explosive pellet. An insulating "flyer" film, such as Kapton or Mylar with a metal foil etched to one side is placed against the disk. A necked down section of the etched foil acts as the bridgewire. The high current firing pulse causes vaporization of the necked down section of the foil. This then shears the insulated flyer which accelerates down the barrel of the disk and impacts the explosive pellet. This impact energy transmits a shock wave into the explosive causing it to detonate.
Another possible advantage of a slapper detonator is the ability to initiate an area of explosive surface rather than a point. This may make compact implosion systems easier to design.
This system has several advantages over the EBW detonator. These include:
Exploding wire detonators were used in the first atomic device, but have since been replaced in the U.S. arsenal by foil slappers, and very probably in all other arsenals as well. Due to the ability of slapper detonators to use insensitive primary explosives, these are almost certainly used with all insensitive high explosive equipped warheads (unless supplanted by an even more advanced technology - like laser detonators).
More recently laser detonating systems have been developed. These use a high power solid state laser to deliver sufficient energy in the form of a short optical pulse to initiate a primary or booster explosive. The laser energy is conducted to the detonator by a fiber optic cable. This is a safe detonator system, but the laser and its power supply is relatively heavy. A typical system might use a 1 W solid state laser to fire a single detonator. It is not known if this system has been used in any nuclear weapons.
Another fast detonator is the spark gap detonator. This uses a high voltage (approx. 5 KV) spark across a narrow gap to initiate the primary explosive. If a suitably sensitive primary explosive is used (lead azide, or the especially sensitive lead styphnate) then the current required is quite small, and a modest capacitor can supply sufficient power (10-100 millijoules per detonator). The chief disadvantage of this detonator design is that it is one of the least safe known. Static charges, or other induced currents, can very easily fire a spark gap detonator. For this reason they have probably never been used in deployed nuclear weapons.
Detonation systems require a reasonably compact and light high speed pulse power supply. To achieve accurate timing and fast response requires a powerful power source capable of extremely fast discharge, as well as fast, accurate, and reliable switching components, and close attention to managing the inductance of the entire system.
The normal method of providing the power for an EBW multi-detonator system is to discharge a high capacitance, high voltage, low inductance capacitor. Voltage range is several kilovolts, 5 KV is typical. Silicone oil filled capacitors using Kraft paper, polypropylene, or Mylar dielectrics are suitable types, as are ceramic-type capacitors. Compact power supplies for charging capacitors are readily available.
The capacitor must be matched with a switch that can handle high voltages and currents, and transition from a safe non-conducting state to a fully conducting one rapidly without adding undue inductance to the circuit. A variety of technologies are available: triggered spark gaps, krytrons, thyratrons, and explosive switches are some that could be used.
The current rise time of the firing pulse can actually be much longer than the required timing accuracy since the firing of an EBW detonator is basically determined by achieving a threshold current. As long as the current rise is synchronous for all detonators, they will fire simultaneously. Still a rise time of no more than 2-3 microseconds is desirable.
The capacitance required for a 5 KV EBW is on the order of 1 microfarad per detonator. A 32 detonator system (like Fat Man) thus requires at least 32 mF and to produce a 32 kA current surge. For a rise time of 3 microseconds this requires no more than 100 nanohenries of total inductance. A modern plastic cased capacitor of 40 microfarads, rated at 5 KV, with 100 nanohenries of inductance weighs about 4 kg.
Triggered spark gaps are sealed devices filled with high pressure air, argon, or SF6. A non-conducting gap between electrodes is closed by applying a triggering potential to a wire or grid in the gap. Compact versions of these devices are typically rated at 20-100 KV, and 50-150 kiloamps. The triggering potential is typically one-half to one-third the maximum voltage, with switch current rise times of 10-100 nanoseconds.
Krytrons are a type of cold cathode trigger discharge tube. Krytrons are small gas filled tubes. Some contain a small quantity of Ni-63, weak beta emitter (92 yr half-life, 63 KeV) that keeps the gas in a slightly ionized state. Applying a trigger voltage causes an ionization cascade to close the switch. These devices have maximum voltage ratings from 3 to 10 KV, but peak current rating of only 300-3000 amps making them unsuitable for directly firing multiple EBW detonators. They are small (2 cm long), rugged, and accurate (jitter 20-40 nanoseconds) however, and are triggered by voltages of only 200-300 V. They are very convenient then for triggering other high current devices, like spark gaps, by discharging through a pulse current transformer (they can, in turn, be conveniently triggered using a small capacitor, pulse transformer, and a thyristor). Krytrons are used commercially in powerful xenon flash lamp systems, among other uses. Krytrons have faster response times than other types of trigger discharge tubes. A vacuum tube relative of the krytron, the sprytron, is very similar and has very high radiation resistance. It is probably the sprytron that is actually used in U.S. nuclear weapons. The only manufacturer of krytrons and sprytrons is EG&G, the same company that provided the spark gap cascades for Gadget, Fat Man, and other early atomic weapons.
Other switching techniques that have been developed are explosive switches, and various other vacuum or gas-filled tube devices like hydrogen thyratrons and arc discharge tubes. An explosive switch uses the shock wave from an explosive charge to break down a dielectric layer between metal plates. Both this technique and the thyratron were under development at Los Alamos at the end of WWII.
Detonators are wired in parallel for reliability and to minimize inductance. For additional reliability, redundant detonation circuits may be used. In the Fat Man bomb the detonators were wired in parallel in spark gap triggered circuits. There were four detonating circuits, any two of which provided sufficient power for all 32 detonators. Each detonator was wired to two different circuits so that the failure of any one detonator circuit (and up to two of them) would not have affected the implosion. The whole system was fired by a spark gap cascade - the trigger spark gap supplied a current surge to fire the four main circuits simultaneously.
With sufficient care timing accuracies of 10 nanoseconds are achievable, which is probably better than practical implosion systems require (100 nanosecond accuracy is more typical).
Although the types of switches and capacitors mentioned here are, for the most part, available from many commercial sources and have many commercial uses, they are nonetheless subject to dual use export controls. Attempts to export of krytrons illegally has been especially well publicized over the years, but they are not the only such devices suitable for these applications.
The detonator bridge wire used in EBWs is typically made of high purity gold or platinum, 20-50 microns wide and about 1 mm long. PETN is invariably used as the explosive, possibly with a tetryl booster charge. Slapper detonators use metal foils (usually aluminum, but gold foil would work well also) deposited on a thin plastic film (usually Kapton). A wider variety of primary explosives can be used. PETN or HMX may have been used in slappers used in earlier weapon systems, but weapons using IHE probably use the highly heat stable HNS.
A possible substitute for a capacitor bank in a detonation system is an explosive generator, also called a flux compression generator (FCG). This consists of a primary coil that is energized to create a strong magnetic field by a capacitor discharge. At the moment of maximum field strength an explosive charge drives a conducting plate into the field, rapidly compressing it. The rising magnetic field induces a powerful high voltage current in a secondary coil. Any of the switching technologies mentioned above can then be used to switch the load to the detonating system. A substantial fraction of the chemical energy of the explosive can be converted to electrical power in this way.
FCGs can potentially provide ample power for detonators and external neutron initiators at a very modest weight. Extensive research on these generators has been conducted at Los Alamos and Lawrence Livermore, and they are known to have been incorporated into actual weapon designs (possibly the Mk12, which had 92 initiation points).
22.214.171.124.3 Implosion Hardware Designs
Once created implosion shocks can be used to drive different implosion hardware systems. By implosion hardware, I mean systems of materials that are inert from the viewpoint of chemical energy release: the fissile material itself, and any reflectors, tampers, pushers, drivers, buffers, etc.
One approach to designing an implosion hardware system is to simply use the direct compression of the explosive generated shock wave to accomplish the desired reactivity insertion. This is the "solid pit design" used in Gadget and Fat Man.
A variety of other designs make use of high velocity collisions to generate the compressive shocks for reactivity insertion. These velocities of course are obtained from the energy provided by the high explosive shocks.
126.96.36.199.3.1 Solid Pit Designs
Since shock waves inherently compress the material through which they pass, an obvious way of using the implosion wave is simply to let it pass through the fissile core, compressing it as it converges on the center. This technique can (and has) been used successfully, but it has some inherent problems not all of which can be remedied.
First, the detonation pressure of available explosives (limit 400 kilobars) is not high enough for much compression. A 25% density increase is all that can be obtained in uranium at this pressure, delta-phase plutonium can reach 50% due to the low pressure delta->alpha phase transformation. This pressure can be augmented in two ways: by reflecting the shock at high impedance interfaces, and by convergence.
Since the fissile material is about an order of magnitude denser than the explosive itself, the first phenomenon is certain to occur to some extent. It can be augmented by inserting one or more layers of materials of increasing density between the explosive and the dense tamper and fissile material in the center. As a limit, shock pressure can double when reflected at an interface. To approach this limit the density increase must be large, which means that no more than 2 or 3 intermediate layers can be used.
The second phenomenon, shock convergence, is limited by the ratio of the fissile core radius to the outer radius of the implosion hardware. The intensification is approximately proportional to this ratio. A large intensification thus implies a large diameter system - which is bulky and heavy.
Another problem with the solid pit design is the existence of the Taylor wave, the sharp drop in pressure with increasing distance behind the detonation front. This creates a ramp-shaped shock profile: a sudden jump to the peak shock pressure, followed by a slope down to zero pressure a short distance behind the shock front. Shock convergence actually steepens the Taylor wave since the front is augmented by convergence to a greater degree than the material behind the front (which is at a larger radius). If the Taylor wave is not suppressed, by the time the shock reaches the center of the fissile mass, the outer portions may have already expanded back to their original density.
The use of intermediate density "pusher" layers between the explosive and the tamper helps suppress or flatten the Taylor wave. The reflected high pressure shock reinforces the pressure behind the shock front so that instead of declining to zero pressure, it declines to a pressure equal to the pressure jump at the reflection interface. That is, if P is the initial shock pressure, and P -> 0 indicates a drop from P to zero through the Taylor wave, then the reflection augments both by p:(P + p) -> (0 + p).
The Gadget/Fat Man design had an intermediate aluminum pusher between the explosive and the uranium tamper, and had a convergence factor of about 5. As a rough estimate, one can conclude that the 300 kilobar pressure of Composition B could be augmented by a factor of 4 by shock reflection (doubling at the HE/Al interface, and the Al/U interface), and a factor of 5 by convergence, leading to a shock pressure of 6 megabars at the plutonium core. Assuming an alpha phase plutonium equation of state similar to that of uranium this leads to a compression of a bit less than 2, which when combined with the phase transformation from delta to alpha gives a maximum density increase of about 2.5. The effective compression may have been significantly less than this, but it is generally consistent with the observed yield of the devices.
188.8.131.52.3.2 Levitated Core Designs
In the solid pit design, the Taylor wave is reduced but not eliminated. Also, the kinetic energy imparted by the convergent shock is not efficiently utilized. It would be preferable to achieve uniform compression throughout the fissile core and tamper, and to be able to make use of the full kinetic energy in compressing the material (bringing the inward motion of material in the core to a halt at the moment of maximum compression).
This can be accomplished by using a shell, or hollow core, instead of a solid one (see Section 3.7.4 Collapsing Shells). The shell usually consists of an outer layer of tamper material, and an inner layer of fissile material. When the implosion wave arrives at the inner surface of the shell, the pressure drops to zero and an unloading wave is created. The shock compressed material (which has also been accelerated inward) expands inward to zero pressure, converting the compression energy into even greater inward directed motion (approximately doubling it). In this way energy loss by the outward expansion of material in the Taylor wave region is minimized.
Simply allowing this fast imploding hollow shell to collapse completely would achieve substantial compression. In practice this is never done. It is more efficient to allow the collapsing shell to collide with a motionless body in the center (the "levitated core"), the collision creating two shock waves - one moving inward to the center of the stationary levitated core (accelerating it inward), and one moving outward through the imploding shell (decelerating it). The pressure between these two shocks is initially constant so that when the converging shock reaches the center of the core, the region extending from the center out to the location of expanding shock has achieved reasonably even and efficient compression.
I use word "reasonably" because the picture is a bit more complicated than just described. First, by the time the shell impacts the levitated core it has acquired the character of a thick collapsing shell. The inner surface will be moving faster than the outer surface, and a region close to the inner surface will be somewhat compressed. Second, the inward and outward moving shocks do not move at constant speed. The inward moving shock is a classical converging shock with a shock velocity that accelerates and strengthens all the way to the center. The outward moving shock is a diverging or expanding shock that slows down and weakens.
In the classical converging shock region (the levitated core, and the innermost layer of the colliding shell) high compression is achieved and the material is brought to a halt when the shock reaches the center. In the outer diverging region, only about half of the implosion velocity is lost when the diverging shock compresses and decelerates it, and there is insufficient time for inward flow to bring it to a halt before the converging shock reaches the center. Thus the outer region is still collapsing (slowly) when the inner shock reaches complete convergence (assuming that the outer shock has not yet reached the surface of the pit (tamper shell plus core) and initiated an inward moving release wave).
Immediately after the converging shock reaches the center, the shock rebound begins. This is an outward moving shock that accelerates material away from the center, creating an expanding low density region surrounded by a layer compressed to an even greater degree than in the initial implosion. Once the rebound shock expands to a given radius the average density of the volume within that radius falls rapidly.
For a radius well outside the classical converging shock region, the true average density may continue to increase due to the continuing collapse of the outer regions until the rebound shock arrives. The structure of the shell/core system at the time of rebound shock arrival is actually hollow - a low density region in the center with a highly compressed shell, but the average density is at a maximum. Whether this configuration is acceptable or not depends on the weapon design, it may be acceptable in a homogenous un-boosted core but will not be acceptable in a boosted or a composite core design where high density at the center is desired.
Since the divergence of the outward shock is not great, and it is offset somewhat by the slower collapse velocity of the outer surface of the thick shell, we can treat it approximately as a constant speed shock traversing the impacting shell. The converging shock can be treated by the classical model (see Section 3.7.3 Convergent Shocks). This allows us to estimate the minimum shell/levitated core mass ratio for efficient compression, the case in which the shock reaches the surface of the shell, and the center simultaneously.
If the shell and levitated core have identical densities and compressibilities, then the two shocks will have the same initial velocity (the velocity change behind the shock front in both cases will be exactly half the impact velocity). If the shell has thickness r_shell, then the shock will traverse the shell in time:
Eq. 184.108.40.206.3.2-1 t_shell = r_shell/v
If the levitated core has radius r_lcore, the shock will reach the center in time:
Eq. 220.127.116.11.3.2-2 t_lcore = (r_lcore/v)*alpha
Alpha is this case is the convergent shock scaling parameter (see Section 3.7.3). For a spherical implosion, and a gamma of 3 (approximately correct for most condensed matter, and for uranium and plutonium in particular), alpha is equal to 0.638 (the exact value will be somewhat higher than this).
Since we want t_shell = t_lcore:
Eq. 18.104.22.168.3.2-3 r_shell = alpha * r_lcore = 0.638 r_lcore
That is, the thickness of the shell is smaller than the radius of the core by a factor of 0.638. But since volume is proportional to the cube of the radius:
Eq. 22.214.171.124.3.2-4 m_shell = density*(4*Pi/3)*[(r_shell + r_lcore)^3 - r_lcore^3] and Eq. 126.96.36.199.3.2-5 m_lcore = density*(4*Pi/3)*r_lcore^3
This gives us the mass ratio:
Eq. 188.8.131.52.3.2-6 m_shell/m_lcore = ((1.638)^3 - 1^3)/1^3 = 3.4
Thus we want the impacting shell to have at least 3.4 times as much mass as the levitated core. The ratio used may be considerably larger.
Now it is important to realize that in principle the shell/levitated core mass ratio is unrelated to the tamper/fissile material mass ratio. The boundary between tamper and fissile material can be located in the shell (i.e. the shell is partly tamper and partly fissile, the levitated core entirely fissile), it can be located between the shell and core (i.e. the shell is tamper and the core is fissile), or it can be located in the core (i.e. the shell is tamper, and the core is partly tamper and partly fissile). The tamper/fissile material ratio is determined by neutron conservation, hydrodynamic confinement, and critical mass considerations.
It appears however that the initial practice of the US (starting with the Mk4 design and the Sandstone test series) was to design levitated core weapons so that the shell was the uranium tamper, and the levitated portion was a solid fissile core. The mass of the tamper would have been similar to that used in the Gadget (115 kg), a large enough mass to allow the use of different pit sizes and compositions while ensuring sufficient driver mass. These early pure fission bombs were designed to use a variety of pits to produce different yields, and to allow the composition (U-235/Pu-239 ratio) to be varied to match the actual production schedules of these materials.
Levitation is achieved by having some sort of support structure that will not disrupt the implosion symmetry. The most widely used approach seems to be the use of truncated hollow cones (or conically tapered thin walled tubes if you prefer), usually made out of aluminum. Six of these are used, pairs on opposite sides of the levitated core for each axis of motion. Supporting wires (presumably under tension) have also been used.
The levitated core of the Hurricane device (the first British test) used "caltrops" (probably six of them) for support. A caltrops is a four pronged device originally used in the Middle Ages as an obstacle against soldiers and horses, and more recently against vehicle tires. Each of the prongs can be thought of as the vertex of a tetrahedron, with the point where they all join as the tetrahedral center. A caltrops has the property that no matter how you drop it, three of the prongs forms a tripod with the fourth prong pointed straight up. Dimples on the core might be used to seat the support prongs securely.
Another possibility is to use a strong light weight foam to fill the gap between shell and core (such foams have been produced at the Allied-Signal Kansas City Plant). A significant problem with using a foam support is that plastic foams are usually excellent thermal insulators, which could cause severe problems from self-heating in a plutonium levitated core.
A serious problem with hollow shell designs is the tensile stress generated by the Taylor wave (see Section 184.108.40.206.2 Free Surface Release Waves in Solids). As the release wave moves out from the inner shell surface, it encounters declining pressure due to the Taylor wave. The "velocity doubling" effect generates a pressure drop equal in magnitude to the shock peak pressure. If the pressure that the release wave encounters is below this pressure, a negative pressure (tension) is created (you can think of this as the faster moving part of the plate pulling the slower part along). This tensile stress builds up the farther back the release wave travels. If it exceeds the strength of the material it will fracture or "spall". This can cause the entire inner layer of material to peel off, or it may simply create a void. A new release wave will begin at the spall surface.
Spalling disrupts implosion symmetry and can also ruin the desired collision timing. It was primarily fears concerning spalling effects that prevented the use of levitated core designs in the first implosion bombs.
One approach to dealing with spalling is simply to make sure that excessive tensile stresses do not appear in the design. This requires strong materials, and at least one of the following:
Another approach is to adopt the "if you can't beat'em, join'em" strategy. Instead of trying to prevent separation in the shell, accommodation for the phenomenon is included in the design. This can be done by constructing the shell from separate layers. When the release wave reaches the boundary between shell layers (and tensile stress exists at that point), the inner layer will fly off the outer layer, and a new release wave will begin. This will create a series of imploding shells, separated by gaps.
As each shell layer converges toward the center, the inner surface will accelerate while the outer surface will decelerate. This will tend to bring the layers back together. If they do not rejoin before impact occurs with the core, a complicated arrangement of shocks may develop. The design possibilities for using these multiple shocks will not be considered here.
The concept of the levitated core and colliding shells can be extended to multiple levitation - having one collapsing shell collide with a second, which then collides with the levitated core. The outer shell, due to the concentration of momentum in its inner surface and the effects of elastic collision, could enhance the the velocity of the inner shell. This idea requires a large diameter system to be practical. It is possible that the "Type D" pit (that is, the hardware located between the explosive and fissile core) developed in the early fifties for the 60 inch diameter HE assemblies then in the US arsenal was such a system. It considerably increased explosive yields with identical cores.
It seems almost certain that the most efficient kiloton range pure fission bomb ever tested - the Hamlet device detonated in Upshot-Knothole Harry (19 May 1953) - used multiple levitation. It was described as being the first "hollow core" device, presumably the use of a fissile core that itself was an outer shell and an inner levitated core. A TX-13D bomb assembly (a 60 inch implosion system using a Type D pit) was used with the core. The yield was 37 kt.
220.127.116.11.3.3 Thin Shell (Flying Plate) Designs
Thin shell, or flying plate designs, take the hollow core idea to an extreme. In these designs a very thin, but relatively large diameter shell is driven inward by the implosion system. As with the regular hollow core design, a levitated core in the center is used.
The advantages of a flying plate design are: a greatly increased efficiency in the utilization of high explosive energy; and a higher collision speed - leading to faster insertion and greater compression for a given amount of explosive. Thin shell flying plate designs are standard now in the arsenals of the nuclear weapon states.
A thin plate, a few millimeters thick, is thinner than the Taylor wave of an explosive shock. The shock acceleration, followed by full release, is completed before the Taylor wave causes a significant pressure drop. The maximum initial shock acceleration is thus achieved.
Even greater energy transfer than this occurs however. When the release wave reaches the plate/explosive interface (completing the expansion and velocity doubling of the plate), a rarefaction wave propagates into the explosive gases. The gases expand, converting their internal energy into kinetic energy, and launching a new (but weaker) shock into the plate. A cyclic process thus develops in which a series of shocks of diminishing magnitude accelerate the plate to higher and higher velocities. If viewed from the inner surface, the observer would see a succession of velocity jumps of diminishing size and at lengthening intervals. The plate continues to accelerate over a distance of a few centimeters.
The maximum velocity achievable by this means can approach the escape velocity of the explosive gases, which is 8.5 km/sec for Comp B. Velocities up to 8 km/sec have been reported using HMX-based explosives. This can be compared to the implosion velocity of the plutonium pit in the Gadget/Fat Man design, which was some 2 km/sec.
Optimum performance is found when a small gap (a few mm) separates the high explosive from the plate. Among other things, this gap reduces the strength of the Taylor wave. The gap may be an air space, but it is usually filled with a low impedance material (like a plastic).
The mass ratio between the explosive and the plate largely determines the system performance. For reasonable efficiency it is important to have a ratio r of at least 1 (HE mass/plate mass). At r=1 about 30% of the chemical energy in the explosive is transferred to the plate. Below r=1, the efficiency drops off rapidly. Efficiency reaches a maximum at r=2, when 35% of the energy is transferred.
Since a higher mass ratio means more energy available, the actual final velocity and energy in the plate increases monotonically with r, as shown in the table below. Higher values of r also cause the plate to approach its limiting value with somewhat shorter travel distances.
|Table 18.104.22.168.3.3-1. Flying Plate Drive Efficiency|
|Plate/HE Mass Ratio (R)||Energy Fraction Transferred||Relative Velocity||Plate/Detonation Velocity Ratio|
By the time the flying plate converges from a radius of 10-20 cm to collide with the levitated core, it is no longer a thin shell. The velocity difference that is inherent in thick shell collapse leads to a collision velocity of the inner surface that is higher than the average plate velocity. Collision velocities of experimental uranium systems of 8.5 km/sec have been reported.
The flying plate can be used in a variety of ways. It can be the collapsing shell of a levitated core design. Or it can be used as a driver which collides with, and transfers energy to a shell, which then implodes on to a levitated core.
22.214.171.124.3.4 Shock Buffers
Powerful shock waves can dissipate significant amounts of energy in entropic heating. Energy that contributes to entropy increase is lost to compression. This problem can be overcome by using a shock buffer.
A shock buffer is a layer of low impedance (i.e. low density) material that separates two denser layers. When a shock is driven into the buffer from one of the dense layers, a weaker shock of low pressure (but higher velocity) is created (see 126.96.36.199.3 Shock Waves at a Low Impedance Boundary). This shock is reflected at the opposite interface, driving a shock of increased pressure into the second dense layer. This shock is still weaker than the original shock however, and dissipates much less entropy.
A series of shock reflections ensue in the buffer, each one increases the pressure in the buffer, but by diminishing amounts (the pressure of the original shock is the limiting value). A series of shocks is driven into the second dense material, each successive shock creating a pressure jump of diminishing magnitude.
The shock buffer thus effectively splits the original powerful shock into a series of weaker ones, essentially eliminating entropic heating. The first two shocks produced account for most of the compression.
The following shocks tend to overtake the leading ones since they are travelling through compressed and accelerated material. Ideally, the shock sequence should be timed so that they all converge at the center of the system. The thickness of the buffer is selected so that this ideal is approached as closely as possible. The usual thickness is probably a few millimeters.
The buffer can be employed to cushion a plate collision also. In this case, the reflected shocks gradually decelerate the impactor (driver plate), and accelerate the driven plate, without dissipating heat. This converts a largely inelastic supersonic collision into an elastic one. If the mass of the driven plate is substantially lower than the mass of the driver, it can be accelerated to greater velocities than the original driver velocity. In principle an elastic collision can boost the driven plate two as much as twice the velocity of the driver (if the driver/driven plate mass ratio is very large).
In practice this technique can transfer 65-80% of the driver energy to the driven plate, and provide driven plate velocities that are 50% greater than the driver velocity (or more). Since the explosive/plate mass ratio required for direct explosive drive increases very rapidly for velocities above 50% of the detonation velocity, the buffered plate collision method is the most efficient one for achieving velocities above this.
In an weapon implosion design a thin uranium or tungsten shell would probably be used as a driver.
Two likely low density materials for use as buffers are graphite and beryllium. Beryllium is an excellent neutron reflector which is commonly used in nuclear weapon designs for this reason. It thus may be a convenient shock buffer material that does double duty. Graphite is also a good neutron reflector. From information on manufacturing processes used at the Y-12 Plant at Oak Ridge, and the Allied-Signal Kansas City Plant, it is known that thin layers of graphite are used in the construction of nuclear weapons. The use of graphite as a shock buffer is a likely reason.
188.8.131.52.3.5 Cylindrical Implosion
The discussion of implosion has implicitly assumed a spherically symmetric implosion since this geometry is the simplest, and also the most efficient and widely used. Few changes are needed though to translate the discussion above to cylindrical geometry.
The changes required all relate to the differences in shock convergence in cylindrical geometry. There is a much lower degree of energy focusing during shock convergence, resulting in lower pressure increase for the same convergence ratio (reference radius/inner radius). A cylindrical solid core system would thus be much less effective in generating high pressures and compressions.
For a levitated core design, the shell/levitated core mass ratio must be recalculated. The appropriate value for alpha is 0.775 in this case, but the volume only increases by r^2, so:
Eq. 184.108.40.206.3.4-1 m_shell/m_lcore = ((1.775)^2 - 1^2)/1^2 = 2.15
The possibility of producing cylindrical implosion by methods that do not work for spherical geometries deserves some comment however. The flying plate line charge systems described above (220.127.116.11.2.4 Cylindrical and Planar Shock Techniques) for initiating a cylindrical implosion shocks in high explosives can be used to drive flying plates directly. Such a single-stage system would probably not be capable of generating as fast an implosion as a two stage system; one in which the first plate initiates a convergent detonation which then drives a second flying plate. A single stage system would be simpler to develop and build, and potentially lighter and more compact however.
Cylindrical implosion systems are easier to develop that spherical ones. This largely because they are easier to observe. Axial access to the system is available during the implosion, allowing photographic and electronic observation and measurement. Cylindrical test systems were used to develop the implosion lens technology at Los Alamos that was later applied to the spherical bomb design.
18.104.22.168.3.6 Planar Implosion
Planar implosion superficially resembles the gun assembly method - one body is propelled toward another to achieve assembly. The physics of the assembly process is completely different however, with shock compression replacing physical insertion. The planar implosion process is some two orders of magnitude faster than gun assembly, and can be used with materials with high neutron background (i.e. plutonium).
By analogy with spherical and cylindrical implosion, the natural name for this technique might be "linear implosion". This name is used for a different approach discussed below in Hybrid Assembly Techniques.
Most of the comments made above about implosion still apply after a fashion, but some ideas, like the levitated core, have little significance in this geometry. Planar implosion is attractive where a cylindrical system with a severe radius constraint exists.
Shock wave lenses for planar implosion are much easier to develop than in other geometries. A plane wave lens is used by itself, not as part of a multi-lens system. It is much easier to observe and measure the flat shock front, than the curved shocks in convergent systems. Finally, flat shocks fronts are stable while convergent ones are not. Although they tend to bend back at the edges due to energy loss, plane shock fronts actually tend to flatten out by themselves if irregularities occur.
22.214.171.124 Hybrid Assembly Techniques
For special applications, assembly techniques that do not fit neatly in the previously discussed categories may be used.
126.96.36.199.1 Complex Guns
Additional improvements in gun system performance are possible by combining implosion with gun assembly. The implosion system here would be a very weak one - a layer of explosive to collapse a ring of fissile material or dense tamper on to the gun assembled core. This would allow further increases in the amount of fissile material used, and generate modest efficiency gains through small compression factors. A significant increase in insertion speed is also possible, which may be important where battlefield neutron sources may cause predetonation (this may make the technique especially attractive for artillery shell use). Complex gun approaches have reportedly been used in Soviet artillery shell designs.
188.8.131.52.2 Linear Implosion
In weapons with severe size (especially radius) and mass constraints (like artillery shells) some technique other than gun assembly may be desired. For example, plutonium cannot be used in guns at all so a plutonium fueled artillery shell requires some other approach.
A low density, non-spherical, fissile mass can be squeezed and deformed into a supercritical configuration by high explosives without using neat, symmetric implosion designs. The technique of linear implosion, developed at LLNL, apparently accomplishes this by embedding an elliptical or football shaped mass in a cylinder of explosive, which is then initiated at each end. The detonation wave travels along the cylinder, deforming the fissile mass into a spherical form. Extensive experimentation is likely to be required to develop this into a usable technique.
Three physical phenomenon may contribute to reactivity insertion:
Since the detonation generated pressure are transient, and affect different parts of the mass at different times, compression to greater than normal densities do not occur. The reactivity insertion then is likely to be rather small, and weapon efficiency quite low (which can be offset by boosting). The use of metastable delta-phase plutonium alloys is especially attractive in this type of design. A rather weak impulse is sufficient to irreversibly collapse it into the alpha phase, giving a density increase of 23%.
The supercritical mass formed by linear implosion is stable - it does not disassemble or expand once the implosion is completed. This relieves the requirement for a modulated neutron initiator, since spontaneous fission (or a calibrated continuous neutron source) can assure detonation. If desired, a low intensity initiator of the polonium/beryllium type can no doubt be used.
Special initiation patterns may be advantageous in this design, such as annual initiation - where the HE cylinder is initiated along the rim of each end to create a convergent shock wave propagating up the cylinder.
4.1.7 Nuclear Design Principles
The design of the nuclear systems of fission weapons naturally divides into several areas - fissionable materials, core compositions, reflectors, tampers, and neutron initiating techniques.
184.108.40.206 Fissile Materials
In the nuclear weapons community a distinction is made between "fissile" and "fissionable". Fissile means a material that can be induced to fission by neutrons of energy - fast or slow. These materials always have fairly high average cross sections for the fission spectrum neutrons of interest in fission explosive devices. Fissionable simply means that the material can be induced to fission by neutrons of a sufficiently high energy. As examples, U-235 is fissile, but U-238 is only fissionable.
There are three principal fissile isotopes available for designing nuclear explosives: U-235, Pu-239, and U-233. There are other fissile isotopes that can be used in principle, but various factors (like cost, or half-life, or critical mass size) that prevent them from being serious candidates. Of course none of the fissile isotopes mentioned above is actually available in pure form. All actual fissile materials are a mixture of various isotopes, the proportion of different isotopes can have important consequences in weapon design.
The discussion of these materials will be limited here to the key nuclear properties of isotope mixtures commonly available for use in weapons. The reader is advised to turn to Section 6 - Nuclear Materials for more lengthy and detailed discussions of isotopes, and material properties. See also Table 4.1.2-1 for comparative nuclear properties for the three isotopes.
220.127.116.11.1 Highly Enriched Uranium (HEU)
Highly enriched uranium (HEU) is produced by processing natural uranium with isotopic separation techniques. Natural uranium consists of 99.2836% U-238, 0.7110% U-235, and 0.0054% U-234 (by mass). Enrichment processes increase the proportion of light isotopes (U-235 and U-234) to heavy ones (U-238). Enriched uranium thus contains a higher percentage of U-235 (and U-234) than natural uranium, but all three isotopes are always present in significant concentrations. The term "HEU" usually refers to uranium with a U-235 of 20% or more. Uranium known to have been used in fission weapon designs ranges in enrichment from 80-93.5%. In the US uranium with enrichment around 93.5% is sometimes called Oralloy (abbreviated Oy) for historical reasons (Oralloy, or Oak Ridge ALLOY, was a WWII codename for weapons grade HEU). As much as half of the US weapon stockpile HEU has an enrichment in the range of 20-80%. This material is probably used in thermonuclear weapon designs.
The techniques which have actually been used for producing HEU are gaseous diffusion, gas centrifuges, electromagnetic enrichment (Calutrons), and aerodynamic (nozzle/vortex) enrichment. Other enrichment processes have been used, some even as part of an overall enrichment system that produced weapons grade HEU, but none are suitable for the producing the highly enriched product. The original HEU production process used by the Manhattan Project relied on Calutrons, these were discontinued at the end of 1946. From that time on the dominant production process for HEU throughout the world has been gaseous diffusion. The vast majority of the HEU that has been produced to date, and nearly all that has been used in weapons, has been produced through gaseous diffusion. Although it is enormously more energy efficient, the only countries to have built or used HEU production facilities using gas centrifuges has been the Soviet Union, Pakistan, and The United Kingdom. Pakistan's production has been very small, the United Kingdom apparently has never operated there facility for HEU production.
High enrichment is important for reducing the required weapon critical mass, and for boosting the maximum alpha value for the material. The effect of enrichment on critical mass can be seen in the following table:
Figure 18.104.22.168.1. Uranium Critical Masses for Various Enrichments and Reflectors total kg/U-235 content kg (density = 18.9) Enrichment Reflector (% U-235) None Nat. U Be 10 cm 10 cm 93.5 48.0/44.5 18.4/17.2 14.1/13.5 90.0 53.8/48.4 20.8/18.7 15.5/14.0 80.0 68. /54.4 26.5/21.2 19.3/15.4 70.0 86. /60.2 33. /23.1 24.1/16.9 60.0 120 /72. 45. /27. 32. /19.2 50.0 170 /85. 65. /33. 45. /23. 40.0 250 /100 100 /40. 70. /28. 30.0 440 /132 190 /57. 130 /39. 20.0 800 /160 370 /74 245 /49.
The total critical mass, and the critical mass of contained U-235 are both shown. The increase in critical mass with lower enrichment is of course less pronounced when calculated by U-235 content. Even with equivalent critical masses present, lower enrichment reduces yield per kg of U-235 by reducing the maximum alpha. This is due to the non-fission neutron capture cross section of U-238, and the softening of the neutron spectrum through inelastic scattering (see the discussion of U-238 as a neutron reflector below for more details about this).
U-238 has a spontaneous fission rate that is 35 times higher than U-235. It thus accounts for essentially all neutron emissions from even the most highly enriched HEU. The spontaneous fission rate in uranium (SF/kg-sec) of varying enrichment can be calculated by:
SF Rate = (fraction U-235)*0.16 + (1 - (fraction U-235))*5.5
For 93.5% HEU this rate (0.5 n/sec-kg) is low enough that large amounts can be used in weapon designs without concern for predetonation. If used in the Little Boy design (which actually used 80% enriched uranium, however) it would produce only one neutron every 31 milliseconds on average. No problem exists for any design up to the limiting size of gun-type weapons. 50% HEU on the other hand would be difficult to use in a gun-type weapon. A beryllium reflector would minimize the mass (and thus the amount of U-238 present), but to have a reasonable amount HEU present (e.g. 2.5 critical masses) would produce one neutron every 3.2 millisecs, making predetonation a significant prospect. The rate is never high enough though to make a significant difference for implosion assembly.
Plutonium is produced by neutron bombardment of U-238, which captures a neutron to form U-239. The U-239 then decays into neptunium-239, which decays in turn to form Pu-239. Since the vast majority of nuclear reactors use low enriched uranium fuel (< 20% U-235, 3-4% typically for commercial reactors), they also contain large amounts of U-238. Plutonium production is thus an inevitable consequence of operation in most reactors.
Pu-239 is the principal isotope produced, and is the most desired isotope for use in weapons or as a nuclear fuel. Multiple captures and other side reactions invariably produce an isotope mixture however. The principal contaminating isotope is always Pu-240, formed by non-fission neutron capture by Pu-239. The exposure of U-238 to neutron irradiation is measured by the fuel "burn-up", the number of megawatt-days (thermal) per tonne of fuel. The higher the burn-up, the greater the percentage of contaminating isotopes. Weapon production reactors use fuel burn-ups of 600-1000 MWD/tonne, light water power reactors have a typical design burn-up of 33000 MWD/tonne, and have been pushed to 45000 MWD/tonne by using higher enrichment fuel.
Plutonium is commonly divided into categories based on the Pu-240 content:
The first US plutonium weapon (Fat Man) used plutonium with a Pu-240 content of only 0.9%, largely due to the hurried production schedule (only 100 MWD/tonne irradiations were used to get the plutonium out of the pile and into bombs quickly). Modern US nuclear weapons use weapons grade plutonium with a nominal 6.5% Pu-240 content. A lower Pu-240 content is not necessary for correct weapon functioning and increases the cost. The US has produced low-burnup supergrade plutonium to blend with higher burn-up feedstocks to produce weapons grade material. Plutonium produced in power reactors varies in composition, but its isotope profile remains broadly similar. If U-238 is exposed to extremely high burn-ups as in some fast breeder reactor designs (100,000 MWD/tonne), or if plutonium is separated from spent fuel and used as fuel in other reactors, it tends toward an equilibrium composition.
Representative plutonium compositions are:
Pu-238 Pu-239 Pu-240 Pu-241 Pu-242 Weapon Grade 0.0% 93.6% 5.8% 0.6% 0.0% 0.0% 92.8% 6.5% 0.7% 0.0% Reactor grade 2.0% 61.0% 24.0% 10.0% 3.0% Equilibrium 4.0% 32.0% 34.0% 15.0% 15.0%
These isotopes do not decay at the same rate, so the isotopic composition of plutonium changes with time (this is also true of HEU, but the decay process there is so slow as to be unimportant). The shortest lived isotopes found in weapon, fuel, or reactor grade plutonium in significant quantities are Pu-241 (13.2 yr) and Pu-238 (86.4 yr). The other isotopes have half-lives in the thousands of years and thus undergo little change over a human lifespan. The decay of Pu-241 (to americium-241) is of particular significance in weapons, since weapons grade plutonium contains no Pu-238 to speak of.
To understand the significance of these composition variations, we need to look at two principal factors: the critical mass size, and the spontaneous fission rate. An additional factor, decay self-heating, will be considered but is much less important.
Below are the estimated bare (unreflected) critical masses (kg) for spheres of pure plutonium isotopes in the alpha phase (and americium-241, since it is formed in weapons grade plutonium):
Pu-238 9 kg Pu-239 10 kg Pu-240 40 kg Pu-241 12 kg Pu-242 90 kg Am-241 114 kg
The most striking thing about this table is that they all have critical masses! In contrast U-238 (or natural uranium, or even LEU) has no critical mass since it is incapable of supporting a fast fission chain reaction. This means that regardless of isotopic composition, plutonium will produce a nuclear explosion if it can be assembled into a supercritical mass fast enough.
Next observe that the critical masses for Pu-239 and Pu-241 are nearly the same, while the critical masses for Pu-240 and 242 are both several times higher. Because of this disparity, Pu-239 and Pu-241 tend to dominate the fissionability of any mixture, and it is commonplace in the literature to talk about these two isotopes as "fissile", while Pu-240 and 242 are termed "non-fissile". However it is not really true that 240 and 242 are non-fissile, which has an important consequence (shown in the table below):
Figure 22.214.171.124.2 Critical Masses for Plutonium of Various Compositions total kg/Pu-239 content kg), density = 19.4 Isotopic Composition Reflector atomic % None 10 cm nat. U 239 240 100% 0% 10.5/10.5 4.4/4.4 90% 10% 11.5/10.3 4.8/4.3 80% 20% 12.6/10.0 5.4/4.3 70% 30% 13.9/ 9.7 6.1/4.3 60% 40% 15.4/ 9.2 7.0/4.2 50% 50% 17.2/ 8.6 8.0/4.0 40% 60% 20.0/ 8.0 9.2/3.7 20% 80% 28.4/ 5.7 13. /2.6 0% 100% 40. / 0.0 20. /0.0
We can see that while the critical mass increases with declining "fissile" isotope content, the mass of Pu-239 present in each critical system diminishes. This is the exact opposite of the effect of isotopic dilution in uranium. In the range of isotopic compositions encountered in normal reactor produced plutonium, the content of Pu-239 in the reflected critical assemblies scarcely change at all. Thus regardless of isotopic composition, we can estimate the approximate critical mass based solely on the quantities of Pu-239, Pu-241 (and Pu-238) in the assembly.
Pu-242, having a higher critical mass, is a more effective diluent but it is only a minor constituent compared to Pu-240 in most isotopic mixtures. Even if Pu-242 is considered as the main diluent, the picture remains broadly similar.
The reason a relatively low concentration of Pu-240 is tolerable in weapon grade plutonium is due to the emission of neutrons through spontaneous fission. A high performance fission weapon is designed to initiate the fission reaction close to the maximum possible compression achievable by the implosion system, and predetonation must be avoided. The fastest achievable insertion rate is probably about 1 microsecond, it was 4.7 microseconds in Fat Man, and many designs will fall somewhere in the middle of this range.
We can calculate the spontaneous fission rate in a mass of plutonium with the following formula:
SF Rate (SF/kg-sec) = (%Pu-238)*1.3x10^4 + (%Pu-239)*1.01x10^-1 + (%Pu-240)*4.52x10^3 + (%Pu-242)*8.1x10^3
For the 6.2 kg of plutonium (about 1% Pu-240) in Fat Man this is about 25,000 fissions/sec (or one every 40 microseconds). A weapon made with 4.5 kg of 6.5% Pu-240 weapon grade plutonium undergoes fission at a rate of 132,000 fission/sec (one every 7.6 microseconds). In an advanced design the window of vulnerability, in which a neutron injection will substantially reduce yield, might be as small as 0.5 microseconds, in this case weapon grade plutonium would produce only a 7% chance of substandard yield.
Even the plutonium found in the discharged fuel of light water power reactors can be used in weapons however. With a composition of 2% Pu-238, 61% Pu-239, 24% Pu-240, 10% Pu-241, and 3% Pu-242 we can calculate a fission rate of 159,000 fissions/kg-sec. If 6-7 kg were required in a design, then the average rate would be about 1 fission/microsecond. A fast insertion would have a significant chance of no predetonation at all, and would produce a substantial yield (a few kt) even in a worst case.
The US actually tested a nuclear device made from plutonium with a Pu-240 content of >19% in 1962. The yield was less than 20 kt. Although this was first made public in 1977, the exact amount of Pu-240, yield, and the date of the test are still classified.
Plutonium produces a substantial amount of heat from radioactive decay. This amounts to 2.4 W/kg in weapon grade plutonium, and 14.5 W/kg in reactor grade plutonium. This can make plutonium much warmer than the surrounding environment, and consideration of this heating effect must be taken into account in weapon design to ensure that deleterious temperatures aren't reached under any envisioned operating conditions. Thin shell designs are naturally resistant to these effects however, due to the large surface area of the thin plutonium shell. It can cause problems in levitated cores though, since the pit will have little thermal contact with surrounding materials.
Self heating can be calculated from the following formula:
Q (W/kg) = (%Pu-238)*5.67 + (%Pu-239)*0.019 + (%Pu-240)*0.07 + (%Pu-241)*0.034 + (%Pu-242)*0.0015 + (%Am-241)*1.06
The extremely weak decay energy of Pu-241 produces little heating considering the very short half-life, but Pu-241 decay does alter the isotopic and chemical composition substantially over a course of several years. Half of it decays over 13.2 years, giving rise to americium-241. This is a short half-life radioisotope with energetic decay. As Pu-241 is converted into americium significant increases in self-heating increases and radiotoxicity occur; a very slight (and probably insignificant) decline in reactivity also occurs.
Perhaps most important consequence of americium buildup is its effect on the alloy composition. Americium is one of the elements that can serve as an alloying agent to stabilize plutonium in the delta phase. Since alloying agents for this purpose are usually present to the extent of about 3% (atomic) in plutonium, a 0.6% addition of a new alloying agent (americium) is a significant composition change. This is not a serious problem with weapon grade plutonium, although it does have to be taken into account when selecting the alloy. In reactor grade plutonium the effect is quite pronounced since the decay of Pu-241 can add 10% americium to the alloy over a couple of decades. This would undoubtedly have important effects on alloy density and strength.
When refurbishing nuclear weapons it has been routine practice to extract americium from the plutonium and refabricate the pit. This is apparently not essential. The US is currently not refabricating weapon pits, and won't in significant numbers for several more years. Since weapon grade plutonium production has been shut down in the US, Russia, the UK, and France, the remaining supply of this material will become essentially free of Pu-241 (and Am-241 after reprocessing) over the next few decades.
126.96.36.199.2.1 Plutonium Oxide
Any sophisticated weapon design would use plutonium in the form of a metal, probably an alloy. The possibility of using plutonium (di)oxide (PuO2) in a bomb design is of interest because the bulk of the separated plutonium existing worldwide is in this form. A terrorist group stealing plutonium from a repository might seek to use the oxide directly in a weapon.
Plutonium oxide is a bulky green powder as usually prepared. Its color may range from yellow to brown however. Oxygen has an extremely small neutron cross section, so plutonium oxide behaves essentially like a low density form of elemental plutonium. The maximum (crystal) density for plutonium oxide is 11.45, but the bulk powder is usually much less dense. A loose, unconsolidated powder might have a density of only 3-4. When compacted under pressure, substantially higher densities are achievable, perhaps 5-6 depending on pressure used. When compacted under very high pressure and sintered the oxide can reach densities of 9.7-10.0
The critical mass of reactor grade plutonium is about 13.9 kg (unreflected), or 6.1 kg (10 cm nat. U) at a density of 19.4. A powder compact with a density of 8 would thus have a critical mass that is (19.4/8)^2 time higher: 82 kg (unreflected) and 36 kg (reflected), not counting the weight of the oxygen (which adds another 14%). If compressed to crystal density these values drop to 40 kg and 17.5 kg.
Uranium-233 is the same chemical element as U-235, but its nuclear properties are more closely akin to plutonium. Like plutonium it is an artificial isotope that must be bred in a nuclear reactor. Its critical mass is lower than U-235, and its material alpha value is higher, both are close to those of Pu-239. Its half-life and bulk radioactivity are much closer to those of Pu-239 than U-235 also.
U-233 has been studied as a possible weapons material since the early days of the Manhattan Project. It is attractive in designs where small amounts of efficient material are desirable, but the spontaneous fission rate of plutonium is a liability, such as small, compact fission weapons with low performance (and thus light weight) assembly systems. It does not seem to have been used much, if at all, in actual weapons by the US. It has been employed in many US tests however, possibly indicating its use in deployed weapons.
The reason for this is the difficulty of manufacture. It must be made by costly irradiation in reactors, but unlike plutonium, its fertile isotope (thorium-232) is not naturally part of uranium fuel. To produce significant quantities of U-233, a special production reactor is required that burns concentrated fissile material for fuel - either plutonium or moderately to highly enriched uranium. This further increases cost and inconvenience, making it more expensive even than plutonium (which also has the advantage of a substantially lower critical mass). Significant resources have been devoted to U-233 production in the US however. In the fifties, up to three breeder reactors were loaded with thorium at Savannah River for U-233 production, and a pilot-scale "Thorex" separation plant was built.
U-233 has some advantages over plutonium, principally its lower neutron emission background. Like other odd numbered fissile isotopes U-233 does not readily undergo spontaneous fission, also important is the fact that the adjacent even numbered isotopes have relatively low fission rates as well. The principal isotopic contaminants for U-233 is U-232, which is produced by an n,2n reaction during breeding. U-232 has a spontaneous fission rate almost 1000 times lower than Pu-240, and is normally present at much lower concentrations.
If appropriate precautions are taken to use low Th-230 containing thorium, and an appropriate breeding blanket/reactor design is used, then weapons-grade U-233 can be produced with U-232 levels of around 5 parts per million (0.0005%). Above 50 ppm (0.005%) of U-232 is considered low grade.
Due to the short half-life of U-232 (68.9 years) the alpha particle emission of normal U-233 is quite high, perhaps 3-6 times higher than in weapons grade plutonium. This makes alpha->n reactions involving light element impurities in the U-233 a possible issue. Even with low grade U-233, and very low chemical purity uranium the emission levels are not comparable to emissions of Pu-240 in weapon grade plutonium, but they may be high enough to preclude using impure U-233 in a gun assembly weapon. If purity levels of 1 ppm or better are maintained for key light elements (achievable back in the 1940s, and certainly readily obtainable today), then any normal isotopic grade of U-233 can be used in gun designs as well.
Although the U-232 contaminant produces significant amount of self-heating (718 W/kg), it is presnt to small a concentration to have a significant effect. A bare critical mass of low grade U-233 (16 kg) would emit 5.06 watts, 11% of it due to U-232 heating.
Potentially a more serious problem is due to the decay chain of U-232. It leads to a series of short-lived isotopes, some of which put out powerful gamma emissions. These emissions increase over a period of a couple of years after the U-233 is refined due to the accumulation of the longest lived intermediary, Th-228. A 10 kg sphere of weapons grade U-233 (5 ppm U-232) could be expected to reach 11 millirem/hr at 1 meter after 1 month, 0.11 rem/hr after 1 year, and 0.20 rem/hr after 2 years. Glove-box handling of such components, as is typical of weapons assembly and disassembly work, would quickly create worker safety problems. An annual 5 rem exposure limit would be exceeded with less than 25 hours of assembly work if 2-year old U-233 were used. Even 1 month old material would require limiting assembly duties to less than 10 hours per week.
Typical critical mass values for U-233 (98.25%, density 18.6) are:
Reflector None Nat. U Be 5.3 cm 10 cm 4.2 cm Mass(kg) 16 7.6 5.7 7.6
Self heating can be calculated from the following formula:
Q (W/kg) = (%U-232)*7.18 + (%U-233)*0.0027 + (%U-234)*0.0018
188.8.131.52 Composite Cores
If more than one type of fissile material is available (e.g. U-235 and plutonium, or U-235 and U-233) an attractive design option is to combine them within a single core design. This eliminates the need for multiple weapon designs, can provide synergistic benefits from the properties of the two materials, and result in optimal use of the total weapon-grade fissile material inventory.
U-235 is produced by isotope enrichment and is generally much cheaper than the reactor-bred Pu-239 or U-233 (typically 3-5 times cheaper). The latter two materials have higher maximum alpha values, making them more efficient nuclear explosives, and lower critical masses. Plutonium has the undesirable property of having a high neutron emission rate (causing predetonation). U-233 has the undesirable property of having a high gamma emission rate (causing health concerns).
By combining U-235 with Pu-239, or U-235 with U-233, the efficiency of the U-235 is increased, and the required mass for the core is reduced compared to pure U-235. On the other hand, the neutron or gamma emission rates are reduced compared to pure plutonium or U-233 cores, and are significantly cheaper as well.
When a higher alpha material is used with a lower alpha material, the high alpha material is always placed in the center. Two reasons can be given for this. First, the greatest overall alpha for the core is achieved if the high alpha material (with the fastest neutron multiplication rate) is placed where the neutron flux is highest (i.e. in the center). Second, the neutron leakage from the core is determined by the radius of the core as measured in mean free paths. By concentrating the material with the shortest MFP in a small volume in the center, the "size" of the core in MFPs is maximized, and neutron leakage minimized.
Composite cores can be used in any type of implosion system (solid core, levitated core, etc.). The ratio of plutonium to HEU used has generally been dictated by the relative inventories or production rates of the two materials. These designs have largely dropped out of use in the US (and probably Soviet/Russian) arsenal as low weight thermonuclear weapon designs came to dominate the stockpile.
184.108.40.206 Tampers and Reflectors
Although the term "tamper" has long been used to refer to both the effects of hydrodynamic confinement, and neutron reflection, I am careful to distinguish between these effects. I use the term "tamper" to refer exclusively to the confinement of the expanding fissile mass. I use "reflector" to describe the enhancement of neutron conservation through back-scattering into the fissile core. One material may perform both functions, but the physical phenomenon are unrelated, and the material properties responsible for the two effects are largely distinct. In some designs one or the other function may be mostly absent, and in other designs different materials may be used to provide most of each benefit.
Since the efficiency of a fission device is critically dependent on the rate of neutron multiplication, the effect of neutron conservation due to a reflector is generally more important than the inertial confinement effect of a tamper in maximizing device efficiency.
220.127.116.11.1 Tampers Tamping is provided by a layer adjacent to the fissile mass. This layer dramatically reduces the rate at which the heated core material can expand by limiting its velocity to that of a high pressure shock wave (a six-fold reduction compared to the rate at which it could expand into a vacuum).
Two physical properties are required to accomplish this: high mass density, and optical opacity to the thermal radiation emitted by core. High mass density requires a high atomic mass, and a high atomic density. Since high atomic mass is closely correlated to high atomic number, and high atomic number confers optical opacity to the soft X-ray spectrum of the hot core, the second requirement is automatically taken care of.
An additional tamping effect is obtained from the fact that a layer of tamper about one optical thickness (x-ray mean free path) deep becomes heated to temperatures comparable to the bomb core. The hydrodynamic expansion thus begins at the boundary of this layer, not the actual core/tamper boundary. This increases the distance the rarefaction wave must travel to cause significant disassembly.
To be effective, a tamper must be in direct contact with the fissile core surface. The thickness of the tamper need not be very large though. The shock travels outward at about the same speed as the rarefaction wave travelling inward. This means that if the tamper thickness is equal to the radius of the core, then by the time the shock reaches the surface of the tamper, all of the core will be expanding and no more tamping effect can be obtained. Since an implosion compressed bomb core is on the order of 3 cm (for Pu-239 or U-233), a tamper thickness of 3 cm is usually plenty.
In selecting a tamper, some consideration must be given to the phenomenon of Rayleigh-Taylor instability (see Section 3.8). During the period of inward flow following the passage of a convergent shock wave, instability can arise if the tamper is less dense than the fissile core. This is affected by the pressure gradient, length of time of implosion, implosion symmetry, the initial smoothness of the tamper/core interface, and the density difference.
The ideal tamper would the densest available material. The ten densest elements are (in descending order):
Osmium 22.57 Iridium 22.42 Platinum 21.45 Rhenium 21.02 Neptunium 20.02 Plutonium 19.84 Gold 19.3 Tungsten 19.3 Uranium 18.95 Tantalum 16.65
Although the precious metals osmium, iridium, platinum, or gold might seem to be too valuable to seriously consider blowing up, they are actually much cheaper than the fissile materials used in weapon construction. The cost of weapon-grade fissile material is inherently high. The US is currently buying surplus HEU from Russia for US$24/g, weapon grade plutonium is said to be valued 5 times higher. In the late 1940s U-235 cost $150/g in then-year dollars (worth several times current dollars)! If the precious metals actually had unique capabilities for enhancing the efficiency of fissile material, it might indeed be cost effective to employ them. No one is known to have actually used any of these materials as a fission tamper however.
Rhenium is much cheaper than the precious metals, and is a serious contender for a tamper material. Neptunium is a transuranic that is no cheaper than plutonium, and is actually a candidate fissile material itself. It is thus not qualified to be considered a tamper, nor is the costly and fissile plutonium. Gold would not be seriously considered as a tamper since tungsten has identical density but is much cheaper (it has been used as a fusion tamper however). Natural and depleted uranium (DU) has been widely used as a tamper due in large part to valuable nuclear properties (discussed below). The cheapness of DU (effectively free) certainly doesn't hurt.
Tungsten carbide (WC), with a maximum density of 15.63 (14.7 is more typical of fabricated pieces), is not an outstanding tamper material, but it is high enough to merit consideration as a combined tamper/reflector material since it is a very good reflector.
In comparison two other elements normally though of as being dense do not measure up: mercury (13.54), and lead (11.35). Lead has been used as a fusion tamper in radiation implosion designs though, either as the pure element or as a lead-bismuth alloy.
The usefulness of a material as a reflector is principally determined by its mean free path for scattering. The shorter this value, the better the reflector.
To see the importance of a short MFP, consider the typical geometry of a bomb - a spherical fissile core, with radius r_core, surrounded by a spherical reflector. The average distance from the center of the assembly at which an escaping neutron is first scattered is r_core + MFP. If the scattering MFP for a reflector is comparable to r_core, the reflector volume in which scattering occurs is much larger than the volume of the core. The direction of scattering is essentially random, so under these conditions a scattered neutron is unlikely to reenter the core. Most that eventually do reenter will have scattered several times, traversing a distance that is a multiple of the MFP value. Reducing the value of MFP will considerably reduce the volume in which scattering occurs, and thus increase the likelihood that a neutron will reenter, and reduce the average path it will traverse before doing so.
Since the neutron population in the core is increasing very fast, approximately doubling in the time it takes a neutron to traverse one MFP, the importance of an average reflected neutron to the chain reaction is greatly diluted by the "time absorption" effect. It represents an older and thus less numerous neutron generation, which has been overwhelmed by more recent generations. This effect can be represented mathematically by including in the reflector a fictitious absorber whose absorption cross section is inversely proportional to the neutron velocity. Due to time absorption, as well as the effects of geometry, effectiveness of a reflector thus drops very rapidly with increasing MFP.
For a constant MFP, increasing reflector thickness also has a point of diminishing returns. Most of the benefit in critical mass reduction occurs with a reflector thickness of one 1 MFP. With 2 MFPs of reflector, the critical mass has usually dropped to within a few percent of its value for an infinitely thick reflector. Time absorption also causes the benefits of a reflector to drop off rapidly with thicknesses exceeding about one MFP. A very thick reflector offers few benefits over a relatively thin one.
Experimental data showing the variation of critical mass with reflector thickness can be misleading for evaluating reflector performance in weapons since critical systems are non-multiplying (alpha = 0). These experiments are useful when the reflector is relatively thin (a few centimeters), but thick reflector data is not meaningful. For example, consider the following critical mass data for beryllium reflected plutonium:
Table 18.104.22.168.2-1. Beryllium-Plutonium Reflector Savings Beryllium Alpha Phase Pu Critical Mass (d = 19.25) Thickness (cm) (kg) 0.00 10.47 5.22 5.43 8.17 4.66 13.0 3.93 21.0 3.22 32.0 2.47
The very low critical mass with a 32 cm reflector is meaningless in a high alpha system, it would behave instead as if the reflector were much thinner (and critical mass correspondingly higher). Little or no benefit is gained for reflectors thicker than 10 cm. Even a 10 cm reflector may offer slight advantage over one substantially thinner.
[Note: The table above, combined with the 2 MFP rule for reflector effectiveness, might lead one to conclude that beryllium's MFP must be in the order of 16 cm. This is not true. Much of the benefit of very thick beryllium reflectors is due to its properties as a moderator, slowing down neutrons so that they are more effective in causing fission. This moderation effect is useless in a bomb since the effects of time absorption are severe for moderated neutrons.]
In the Fat Man bomb, the U-238 reflector was 7 cm thick since a thicker one would have been of no value. In assemblies with a low alpha, additional reflectivity benefits are seen with uranium reflectors exceeding 10 cm thick. To reduce the neutron travel time it is also important for the neutron reflector to be in close proximity to the fissile core, preferably in direct contact with it.
Since MFP decreases when the reflector is compressed, it is very beneficial to compress the reflector along with the fissile core.
Many elements have similar scattering microscopic cross sections for fission spectrum neutrons (2.5 - 3.5 barns). Consequently the MFP tends to correlate with atomic density. Some materials (uranium and tungsten for example) have unusually high scattering cross sections that compensate for a low atomic density.
The parameter c (the average number of secondaries per collision) is also significant. This is the same c mentioned earlier in connection with the alpha of fissile materials. In reflector materials the effective value of c over the spectrum of neutrons present is always less than 1. Only two reflector materials produce significant neutron multiplication: U-238 (from fast fission) and beryllium (from the Be-9 + n -> 2n + Be-8 reaction). Neutron multiplication in U-238 becomes significant when the neutron energy is above 1.5 MeV (about 40% of all fission neutrons), but a neutron energy of 4 MeV is necessary in beryllium. Further, U-238 produces more neutrons per reaction on average (2.5 vs 2). For fission spectrum neutrons this gives U-238 a value of c = 1.05, and Be a value of c = 1.03. Remember, this if for fission spectrum neutrons, i.e. neutron undergoing their first collision! The effective value is lower though since after one or more collisions the energy spectrum changes.
Each uranium fast fission neutron is considerably more significant in augmenting the chain reaction in the core, compared to beryllium multiplied neutrons, due to the higher energy of fast fission neutrons. U-238 fast fission is an energy producing reaction, and generates neutrons with an average energy of 2 MeV. The beryllium multiplication reaction absorbs energy (1.665 MeV per reaction) and thus produces slow, low energy neutrons for whom time absorption is especially severe. The energy produced by U-238 fast fission can also significantly augment the yield of a fission bomb. It is estimated that 20% of the yield of the Gadget/Fat Man design came from fast fission of the natural uranium tamper.
Both beryllium and uranium have negative characteristics in that they tend to reduce the energy of scattered neutrons (and reduce the effective value of c below 1). In beryllium this is due to moderation - the transfer of energy from the neutron to an atomic nucleus through elastic scattering. In uranium it is due to inelastic scattering.
22.214.171.124.2.1 Moderation and Inelastic Scattering
The energy loss with moderation is a proportional one - each collision robs the neutron of the same average fraction of its remaining energy. This fraction is determined by the atomic weight of the nucleus:
E_collision/E_initial = Exp(-epsilon)
the constant epsilon being calculated from:
epsilon = 1 + ((A - 1)^2 * ln((A - 1)/(A + 1))/(2*A) )
where A is the atomic number. The equation is undefined when A=1, but taking the limit as it approaches 1 gives the value for light hydrogen which is epsilon=1. If A is larger than 5 or so then it can be approximated by:
epsilon ~= 2/(A + 2/3).
Epsilon values for some light isotopes of interest are:
A Isotopes Epsilon 1 H 1.000 2 D 0.725 3 T, He-3 0.538 4 He-4 0.425 6 Li-6 0.299 7 Li-7 0.260 9 Be-9 0.207 10 B-10 0.187 12 C-12 0.158
Since epsilon is close to zero when A is large, we can easily see that moderation is significant only for light atoms. The atomic weight of beryllium (9) is light enough to make this effect significant.
The average number of collisions n required to reduce a neutron of energy
E_initial to E_final can be expressed by: n = (1/epsilon) * ln(E_initial/E_final)
Since A=9 for beryllium, it takes 3.35 collisions to reduce neutron energy by half. The average number of collisions for a neutron reentering the fissile mass will likely be substantially higher than this, unless the reflector is thin (in which case most of the neutrons will escape without reflection). For comparison carbon (A=12) takes 4.39 collisions to achieve similar moderation, iron (A=56) takes 19.6, and U-238 takes 165.
Clearly heavy atoms do not cause significant moderation. However they can experience another phenomenon called inelastic scattering that also absorbs energy from neutrons. In inelastic scattering, the collision excites the nucleus into a higher energy state, stealing the energy from the neutron. The excited nucleus quickly drops back to its ground state, producing an x-ray. Inelastic scattering is mostly important only in very heavy nuclei that have many excitation states (like tungsten and uranium). The effect drops off rapidly with atomic mass.
In balance, the energy loss by moderation in beryllium is more serious than the energy loss by inelastic scattering in uranium. This is partly due to the fact that every elastic collision reduces neutron energy, while only some collisions produce inelastic scattering.
126.96.36.199.2.2 Comparison of Reflector Materials
Below is a list of candidate materials, and their atomic densities. The list includes the six highest atomic density pure elements (C - in two allotropic forms, Be, Ni, Co, Fe, and Cu), and a number of compounds that are notable for having high atomic densities. Atomic densities for the major tampers materials are also shown.
Table 188.8.131.52.2.2-1. Candidate Reflector Materials Cross sections and MFPs are for fission spectrum neutrons Reflector Material At. Density Avg. Cross. MFP moles/cm^3 barns cm Carbon (C,diamond) 0.292 2.37 2.40 Beryllium Oxide (BeO) 0.241 2.79 2.47 Beryllium (Be) 0.205 2.83 2.86 Beryllium Carbide (BeC) 0.190 2.60 3.36 Carbon (C, graphite) 0.188 2.37 3.73 Water (H2O) 0.167 3.54 2.81 Nickel (Ni) 0.152 3.84 2.85 Tungsten Carbide (WC) 0.150 4.55 2.43 Cobalt (Co) 0.148 3.68 3.05 Iron (Fe) 0.141 3.66 3.22 Copper (Cu) 0.141 3.65 3.23 ... Osmium (Os) 0.118 Iridium (Ir) 0.117 Rhenium (Re) 0.110 Platinum (Pt) 0.110 Tungsten (W) 0.105 6.73 2.35 Gold (Au) 0.098 Plutonium (Pu) 0.083 Uranium (U) 0.080 7.79 2.66 Mercury (Hg) 0.068 Lead (Pb) 0.055
From this list it can be seen that the highest atomic density materials consist of light elements. Some compounds achieve higher atomic densities than pure elements by packing together atoms of different sizes. Thus BeO is denser (in both mass and moles/cm^3) than Be, and WC is denser than W (only in moles/cm^3).
Using critical mass data, some of these materials can be ordered by reflector efficiency. In the ordering below X > Y means X is a better reflector than Y, and (X > Y) means that though X is better than Y, the difference is so slight that they are nearly equal (MFPs are shown below each material):
Be > (BeO > WC) > U > W > Cu > H2O > (Graphite > Fe) 2.86 2.47 2.28 2.66 2.43 3.23 2.82 3.73 3.22
From this the general trend of lower MFPs for better reflectors is visible, but is not extremely strong. The effects of neutron multiplication and moderation are largely responsible. As noted earlier this ranking, made using critical assemblies, tends to overvalue beryllium somewhat with respect to use in weapons. Nonetheless beryllium is still by and large the best reflector, especially when low mass is desirable. Uranium and tungsten carbide are the best compromise reflector/tampers.
Carbon is a fairly good neutron reflector. It has the disadvantage of being a light element that moderates neutrons, but being heavier than beryllium (At Wt 12 vs 9) it moderates somewhat less. When used as a shock buffer, additional significant benefits from neutron reflection can be obtained. The singularly high atomic density and short MFP for diamond makes it an interesting material. Before dismissing the possibility out of hand as ridiculous, given its cost, it should be noted that synthetic industrial diamond cost only $2500/kg, far less than the fissile material used in the core. It can also be formed into high density compacts.
Iron is a surprisingly good reflector, though not good enough to be considered for this use in sophisticated designs. It may be important due to its use as a structural material - as in the casing of a nuclear artillery shell, or the barrel of gun-type weapon.
With a 4.6 cm radius core the following reflector thicknesses have been found to be equally effective:
Be 4.2 cm U 5.3 cm W 5.8 cm Graphite 10. Cm
Viewed from the other perspective (variation in critical mass with identical thicknesses of different materials) we get:
Table 184.108.40.206.2.2-2. Critical Mass for 93.5% U-235 (kg) Material Reflector Thicknesses 2.54 cm 5.08 cm 10.16 cm Be 29.2 20.8 14.1 BeO - 21.3 15.5 WC - 21.3 16.5 U 30.8 23.5 18.4 W 31.2 24.1 19.4 H2O 24 22.9 Cu 32.4 25.4 20.7 Graphite 35.5 29.5 24.2 Fe 36.0 29.5 25.3
Below is a plot showing the change of Oralloy critical mass with reflector thickness graphically (taken from LA-10860-MS, Critical Dimensions of Systems Containing 235-U, 239-Pu, and 233-U; 1986 Rev.):
The variation of plutonium and U-233 critical masses with reflector thickness can be determined using the chart below (also taken from LA-10860-MS) with the above chart for Oralloy:
The variation of critical mass with reflector thickness is sometimes also expressed in terms of reflector savings, the reduction in critical radius for a given reflector thickness:
Table 220.127.116.11.2.2-3. Reflector Savings (cm) for Various Reflector/Fissile Material Combinations Fissile Material 93.5% U-235 Plutonium Reflector Reflector Thicknesses (cm) Reflector Thicknesses (cm) Material 1.27 2.54 5.08 10.16 1.27 2.54 5.08 10.16 Be 0.90 1.46 2.14 2.94 0.73 1.11 1.51 1.97 U 0.81 1.31 1.87 2.40 0.66 1.01 1.36 1.66 W 0.82 1.29 1.82 2.29 0.67 1.00 1.33 1.59 Fe 0.59 0.92 1.36 1.70 0.50 0.74 1.04 1.25
18.104.22.168.3 Combined Tamper/Reflector Systems
In most weapon designs, both the benefits of tamping and neutron reflection are desired. Two design options are available:
Designs for relatively heavy implosion bombs typically use U-238 (as natural or depleted uranium) as a compromise material. It is very good to excellent in both respects, and boosts yield as well. The Gadget/Fat Man design used a 120 kg natural uranium tamper (7 cm thick). All of the early U.S. implosion designs used uranium as a tamper/reflector. The spontaneous fission rate in U-238 precludes its use in gun-type designs.
The Little Boy weapon used tungsten carbide as a compromise material. Its density is fairly high, and it is an excellent neutron reflector (second only to beryllium among practical reflector materials). It is less dense than the uranium core, but since the Little Boy core was not compressed, Rayleigh-Taylor instability was not a factor in design. Tungsten metal was used in the South African gun-type weapons, this choice places greater emphasis on tamping over reflection, compared to tungsten carbide. It is interesting to note the dual-use restrictions placed on tungsten alloys and carbide:
Parts made of tungsten, tungsten carbide, or tungsten alloys (>90% tungsten) having a mass >20 kg and a hollow cylindrical symmetry (including cylinder segments) with an inside diameter greater than 10cm but less than 30 cm.
This is clearly based on its use as a reflector in gun-type weapons.
Beryllium is used as a reflector in modern light weight fission warheads, and thermonuclear triggers. It has special value for triggers since it is essentially transparent to thermal radiation emitted by the core. It is a very efficient reflector for its mass, the best available. But due to its extremely low mass density, it is nearly useless as a tamper. In boosted designs tamping may be unnecessary, but it is also possible to insert a (thin) tamper layer between the core and beryllium reflector). The n,2n reaction is also useful in boosted designs, since that fraction of fusion neutrons that escape the core without capture or substantial scatter still retain enough energy to release reasonably energetic neutrons in the reflector. Beryllium has relatively high compressibility, which may also add to its effectiveness as a reflector.
It is also interesting to note that the Allied-Signal Kansas City Plant has developed a capability for depositing tungsten-rhenium films up to 4 mm thick. This would be a nearly ideal material and thickness for a tamper in a beryllium reflected flying plate implosion design. By alloying rhenium with tungsten, the density of the tungsten can be increased (so that it matches or exceeds the density of alpha phase plutonium), and the ductility and workability of tungsten is improved. Notable confirmation of this comes form the 31 kt Schooner cratering test in 1968 (part of the Plowshare program). Some of the most prominent radionuclides in the debris cloud were radioactive isotopes of tungsten and rhenium.
It is also possible that uranium foils known to have been manufactured for weapons were used as tampers in flying plate designs.
4.1.8 Fission Initiation Techniques
Once a supercritical mass is assembled, neutrons must be injected to start the chain reaction.
This is not really a problem for a gun type weapon, since the design allows the supercritical mass to remain in the fully assembled state indefinitely. Eventually a neutron from the prevailing background is certain to cause a full yield explosion.
It is a major problem in an implosion bomb since the interval during which the bomb is near optimum criticality is quite short - both in absolute length (less than a microsecond), and also as a proportion of the time the bomb is in a critical state.
The first technique to be seriously considered for use in a weapon was simply to include a continuous neutron emitter, either a material with a high spontaneous fission rate, or an alpha emitter that knocks neutrons loose from beryllium mixed with it. Such an emitter produces neutrons randomly, but with a specific average rate. This inevitably creates a random distribution in initiation time and yield (called stochastic initiation). By tuning the average emission rate a balance between pre and post detonation can be achieved so that a high probability of a reasonably powerful (but uncertain) yield can be achieved. This idea was proposed for the Fat Man bomb at an early stage of development.
A far superior idea is to use a modulated neutron initiator - a neutron emitter or neutron generator that can be turned on at a specific time. This is a much more difficult approach to develop, regardless of the technique used. Modulated initiators can be either internal designs, which are placed inside the fissile pit and activated by the implosion wave, or external designs which are placed outside the fission assembly.
It should be noted that it is very desirable for an initiator to emit at least several neutrons during the optimum period, since a single neutron may be captured without causing fission. If a large number can be generated then the total length of the chain reaction can be significantly shortened. A pulse of 1 million neutrons could cut the total reaction length by 25% or so (approx. 100 nanoseconds), which may be useful for ensuring optimal efficiency.
22.214.171.124 Modulated Beryllium/Polonium Initiators
This general type of initiator was used in all of the early bomb designs. The fundamental idea is to trigger the generation of neutrons at the selected moment by mixing a strong alpha emitter with the element beryllium. About 1 time out of 30 million, when an alpha particle collides with a beryllium atom a neutron is knocked loose.
The key difficulty here is keeping the alpha emitter out of contact with the beryllium, and then achieving sufficiently rapid and complete mixing that a precisely timed burst of neutrons is emitted.
The very short range of alpha particles in solid matter (a few tens of microns) would make the first requirement relatively easy to achieve, except for one thing. Most strong alpha emitters also emit gamma rays, which penetrate many centimeters of solid matter and also occasionally knock loose neutrons. Finding a radioisotope with sufficiently low gamma emissions greatly restricts the range of choices. A suitable radioisotope must also have a relatively short half-life (no more than a few decades) so sufficient activity can be provided by a small amount, and be reasonably economical to produce.
One isotope appears to be the clear favorite when all these factors are considered: polonium 210. Although other alpha emitters have been considered, all radioisotope based modulated initiators appear to have used Po-210 as the alpha source. This isotope has a half-life of only 138.39 days though. On the one hand, this means a strong emitter alpha source can be quite small (50 curies, which emits 1.85 x 10^12 alphas/sec, weighs only 11 mg). On the other, the Po-210 disappears quickly and must be constantly replenished to maintain a standing arsenal. Polonium-208 and actinium-227 have also been considered for this role.
The second requirement: carefully timed, fast, efficient mixing, needs very clever designs for implosion weapons. After considering several proposals, a neutron initiator called "Urchin" or "screwball" was selected by Los Alamos for Gadget/Fat Man. All of the designs considered were based on placing the initiator at the center of the fissile mass, and using the arrival of the convergent shock to drive the mixing process. This insured that the entire mass was highly compressed (although perhaps not optimally compressed), and placed the initiator where the neutrons emitted would be most effective.
The Urchin was a sphere consisting of a hollow beryllium shell, with a solid spherical beryllium pellet nested inside. The polonium was deposited in layer between the shell and the pellet. Both the shell and the pellet were coated with a thin metal film to prevent the polonium (or its alpha particles) from reaching the beryllium. The mixing was brought about by using the Munroe Effect (also called the shaped charge, or hollow charge, effect): shock waves collide, powerful high velocity jets are formed. This effect was created by cutting parallel wedge-shaped groves in the inner surface of the shell. When the implosion shock collapsed these grooves, sheet-like beryllium jets would erupt through the polonium layer, and cause violent turbulence that would quickly mix the polonium and beryllium together.
By placing the small mass of polonium as a layer trapped between two relatively large masses of beryllium, the Urchin designers were hedging their bets. Even if the Monroe effect did not work as advertised, any mixing process or turbulence present would likely disrupt the carefully isolated polonium layer and cause it to mix.
The whole initiator weighed about 7 grams. The outer shell was 2 cm wide and 0.6 cm thick, the solid inner sphere was 0.8 cm wide. 15 parallel wedge-shaped grooves, each 2.09 mm deep, were cut into the inner surface of the shell. Both the shell and the inner pellet were formed by hot pressing in a nickel carbonyl atmosphere, which deposited a nickel layer on the surfaces. The surfaces of the shell and central sphere were also coated with 0.1 mm of gold. Combined with the nickel layer, the gold film provided a barrier between the polonium and the beryllium.
50 curies polonium-210 (11 mg) was deposited on the grooves inside the shell and on the central sphere. This much polonium produces a thermal output of 0.1 watts, causing very noticeable warming in such a small object. Post war studies showed that no more than 10 curies still provided an acceptable initiation effect, allowing the manufacture of initiators that remained usable for up to a year.
Other designs for generating mixing have been considered. One design considered during or shortly after WWII used a spherical shell whose interior surface was covered with conical indentations. The shell was coated with a metal film, and polonium was deposited on the interior surface as in the Urchin design. In this design the cavity inside the hollow shell was empty, there was no central pellet. The principal advantage here is that the initiator could be made smaller while still being reliable. A shortcoming of the Urchin was that the Munroe effect is less robust in linear geometry. The formation of a jet when a wedge collapses depends on the apex angle and other factors, and could conceivably fail (its use may have been due to the more thorough study given the linear geometry by Fuchs during the war). The jet effect is quite robust in conical geometry however, the collapse of the conical pits producing high velocity jets of beryllium metal squirting into the cavity under nearly all conditions. Pyrimidal pits provide similar advantages, and have been used in hollow and central sphere equipped initiators.
The smaller TOM initiator (about 1 cm) that replaced the Urchin was probably based on the hollow conical pit (or tetrahedral pit) design. This design was proposed for use in 1948, but not put into production until January 1950 by Los Alamos. It was first tested (in a weapon test) in May 1951. One advantage of the TOM initiator was more efficient use of the polonium (more neutrons per gram of Po-210).
One sophisticated design that was developed and patented by Klaus Fuchs and Rubby Sherr during the Manhattan project was based on using the outgoing implosion rebound, rather than the incoming converging shock to accomplish mixing. This slight delay in initiation thus achieved was expected to allow significantly more compression to occur.
If internal initiators are used in fusion-boosted designs it is essential that they be quite small, the smaller the better (external initiation is best).
In gun-type weapons initiators are not strictly required, but may be desirable if the detonation time of the weapon needs to be precisely controlled. A low intensity polonium source can be used in this case, as can a simple system to bring the source and beryllium into contact upon impact by the bullet (like driving a beryllium foil coated piston into a sleeve coated with polonium).
126.96.36.199 External Neutron Initiators (ENIs)
These devices (sometimes called "neutron generators") rely on a miniature linear particle accelerator called a "pulse neutron tube" which collides deuterium and tritium nuclei together to generate high energy neutrons through a fusion reaction. The tube is an evacuated tube a few centimeters long with an ion source at one end, and an ion target at the other. The target contains one of the hydrogen isotopes adsorbed on its surface as a metal hydride (which isotope it is varies with the design).
When a current surge is applied to the ion source, an electrical arc creates a dense plasma of hydrogen isotope ions. This cloud of ions is then extracted from the source, and accelerated to an energy of 100-170 KeV by the potential gradient created by a high voltage acceleration electrode. Slamming into the target, a certain percentage of them fuse to release a burst of 14.1 MeV neutrons. These neutrons do not form a beam, they are emitted isotropically.
Early pulse neutron tubes used titanium hydride targets, but superior performance is obtained by using scandium hydride which is standard in current designs.
A representative tube design is the unclassified Milli-Second Pulse (MSP) tube developed at Sandia. It has a scandium tritide target, containing 7 curies of tritium as 5.85 mg of ScT2 deposited on a 9.9 cm^2 molybdenum backing. A 0.19-0.25 amp deuteron beam current produces about 4-5 x 10^7 neutrons/amp-microsecond in a 1.2 millisecond pulse with accelerator voltages of 130-150 KeV for a total of 1.2 x 10^10 neutrons per pulse. For comparison the classified Sandia model TC-655, which was developed for nuclear weapons, produced a nominal 3 x 10^9 neutron pulse.
A variety of ion source designs can be used. The MSP tube used a high current arc between a scandium deuteride cathode and an anode to vaporize and ionize deuterium. Other designs (like the duoplasmotron) may use an arc to ionize a hydrogen gas feed. The ion output current limits the intensity of the neutron pulse. Public domain ion source designs typically have a ion current limit of several amps. If we assume that the TC-655 achieved a 10 amp current from its ion source (the design of which is classified) then we can estimate an emission rate of up to 5 x 10^8 neutrons/microsecond in a pulse 6 microseconds long.
It is misleading though to think of a neutron tube as producing all its neutrons in a sudden burst. From the perspective of the fission process in a bomb core, it is not sudden at all. A typical core alpha is 100-400/microsecond, with corresponding neutron multiplication intervals of 2.5-10 nanoseconds. Any neutrons that enter the core in one multiplication interval will increase by a factor of e (2.7...) in the next, overwhelming the external neutron flux. From this point on, the fission process will proceed on its course unaffected by the ENI. Only the neutrons that enter the core during a single multiplication interval really count, and they count only insofar as they determine the time that the exponential chain reaction begins. Clearly, the vast majority of the neutrons in a 6 microsecond pulse are utterly irrelevant. The important factor in determining how effective an ENI is in precisely controlling the start of the chain reaction is the beam current intensity and how sharply and precisely it can be turned on. These are the design parameters that should be optimized in a weapon tube.
Note that only a small fraction of the neutrons generated will actually get into the core. If we assume a compressed core diameter of 6 cm, and a target-to-core distance of 30 cm (remember, it has to be safety outside the implosion system!), then only about 3% of the neutron flux will enter the core - an arrival rate of 15,000 neutrons/nanosecond using a 10 amp ion source. This many neutrons will significantly accelerate the chain reaction, cutting it by some 15 multiplication intervals.
The ENI does not have to be placed near the actual fission assembly. Since warhead dimensions are typically no more than 1-2 meters it can be placed virtually anywhere in the weapon, as long as there isn't a thick layer of moderating material (plastic, hydrocarbon fuel, etc.) between the ENI and the fission core.
The power supply required to drive a pulse tube has many similarities to the EBW pulse power supply. A pulse of a few hundred volts at a few hundred amps is needed to drive the ion source, and a 130-170 KeV pulse of several amps is required to extract the ions and accelerate the beam. This high voltage pulse controls actual neutron production and should thus have as fast an onset time as possible. This high voltage pulse can be supplied by discharging a capacitor of several KV through a pulse current transformer.
Pulse neutron tubes have been available commercially for decades (in non-miniaturized form) for use as a laboratory neutron source, or for non-destructive testing.
An additional type of ENI, not based on fusion reactions, has been successfully tested but apparently never deployed. This is the use of a compact betatron, a type of electron accelerator, to produce energetic photons (several MeV). These photons cause photon induced fission, and photon -> neutron reactions directly in the core.
188.8.131.52 Internal Tritium/Deuterium Initiators
Another approach to making an internal neutron initiator is to harness the high temperatures and densities achieved near the center of an implosion to trigger D+T fusion reactions. A few tenths of a gram of each isotope is placed in a small high pressure sphere at the center of the core in this scheme.
The number of actual fusions produced is small, but it may seem surprising that any could occur at all. The occurrence of fusion during a collision between two nuclei is a statistical process. The probability of it occurring on a given collision depends on the collision velocity. The velocity of the nuclei is in turn a statistical process which depends upon the temperature. The hydrogen plasma is in thermal equilibrium with a mean temperature of a few hundred thousand degrees K, but the Maxwellian energy distribution means that a very small number of ions is travelling at velocities very much higher than average. Given the very large number of ions present, a significant fusion rate results. Only a few fusions are actually necessary for reliable initiation after all.
The main attraction of this scheme is that the half-life of tritium (12.3 years) is much longer than Po-210, so the initiator can be stored ready-to-use for long periods of time. The system is also physically simpler, and more compact than ENIs. It is not clear whether this type of initiator has actually been used in weapon designs.
Like any munition, the development of a fission weapon will require a variety of tests. These include component tests, and perhaps tests of the complete weapon. Tests of components like the firing system, detonators, etc. are similar to the requirements of non-nuclear munitions and need no comment. Even conservative gun assembly designs will normally require proof testing of the gun/propellant combination to verify the internal ballistics.
In addition to these routine types of tests, fission weapon development requires (or at least benefits greatly) from certain types of test that are unique to nuclear weapons. These include nuclear tests, by which I mean tests of the nuclear properties of materials and designs, not nuclear explosions (although an actual explosion of substantial yield is one possible type of nuclear test). Implosion designs, by which I mean any design using shock waves for core assembly, also call for hydrodynamic tests - tests of materials under the extreme conditions of shock compression. Combined nuclear and hydrodynamic tests, called hydronuclear tests, provide a more direct way of developing data for weapon design, evaluating design concepts, or evaluating actual designs. "Hydronuclear" is a somewhat vague term. Hydronuclear tests can mean shock compression experiments that create sub-critical conditions, or supercritical conditions with yields ranging from negligible all the way up to a substantial fraction of full weapon yield. Tests of negligible yield are often called "zero yield tests", although this is also not a precise term. Generally it is taken to mean a test in which the nuclear energy release is small compared to the conventional explosive energy used for assembly - a few kg of HE equivalent for example. However even in sub-critical tests the nuclear energy release is not actually "zero". It appears that the Comprehensive Test Ban Treaty (CTBT) now being negotiated in Geneva will use a "no-criticality" standard for defining legal experiments with high explosives and fissile material.
184.108.40.206 Nuclear Tests
A variety of nuclear tests are of interest for collecting design data. Since the performance of nuclear weapons is the combined effect of many individual nuclear properties, the most desirable measurements for weapon design purposes are "integral experiments" - experiments that directly measure overall weapon design parameters that combine many different effects.
Critical mass experiments determine the quantity of fissile material required criticality with a variety of fissile material compositions and densities, in various geometries, and with various reflector systems. These provide a basic reference for evaluating nuclear computer codes, estimating material requirements for weapons, and (extremely important) for doing safety evaluations. The closer the critical mass experiment resembles actual weapon configurations, the more useful it is. A considerable amount of critical mass data has been published openly which makes it possible to perform reasonably good "first cut" weapon design evaluations using scaling laws (like the efficiency equations). Any weapon development program will want to perform criticality tests of systems closely resembling actual proposed designs, differing only in the amount of fissile material present.
Critical mass values can be predicted with good accuracy by extrapolation by taking neutron multiplication measurements in a succession of sub-critical tests using increasing quantities of fissile materials. Such tests can be conducted safety in the laboratory without special protective equipment since each successive test allows progressive refinements of critical mass estimates, and allows the calculation of safe masses for the next test. Tests intended to closely approach or reach criticality must be conducted under stringent safety conditions however. Even a very slight degree of criticality in an unmoderated system can produce a deadly radiation flux in seconds. Accidents during critical mass experiments killed two researchers at Los Alamos in 1945 and 1946 (Harry Daghlian and Louis Slotin) before manual experiments were banned there.
Basic critical mass tests are basically non-multiplying and do not measure alpha, the extremely important fast neutron multiplication parameter. Direct measurements of this require establishing systems with significant levels of supercriticality capable of creating rapid increases in neutron populations.
A variety of laboratory tests can be used for this. All of them depend on creating a supercritical state that persists for a very short period of time (milliseconds to microseconds) to prevent melt-down (or worse). Such experiments necessarily produce large neutron fluxes, and thus must be conducted under remote control.
One type of experiment creates a transient supercritical state by propelling a small fissile mass though a larger slightly sub-critical mass. The supercritical state exists while the small mass is inserted, and terminates when the mass exits the other side. Examples of this type of experiment are the "Dragon" experiments conducted at Los Alamos in early 1945, in which a fissile mass was dropped through a hole bored in a subcritical assembly (so-called because it was like "tickling the tail of a dragon"). Shorter assembly times (and thus higher multiplication rates) can be investigated by using a gun instead of gravity to accelerate the fissile projectile. This approach obviously extends naturally to evaluating a full-up gun weapon design with only the amount of fissile material in the bullet or target differing from the actual deployment weapon. This type of test was actually used by South Africa for evaluating its gun assembly weapon (using a test device named "Melba"). These experiments can explore assembly durations in the range of 0.1-10 milliseconds.
A second type of experiment achieves an even higher multiplication rate under controlled conditions by using the thermal expansion of the core to shut down the reaction. This is called a fast neutron pulse reactor. A solid core of fissile material is assembled that is slightly supercritical at room temperature, but is kept subcritical by the presence of a control rod, by removing a section of reflector, or by controlling the insertion of fissile material. When the rod is removed (or reflector is inserted), it becomes supercritical and rapidly heats up. The expansion of the material at sonic velocity, mediated by an acoustic wave, shuts down the reaction in a matter of microseconds. Assembly durations of 5-500 microseconds can be investigated. Examples of this type of experiment are a series of fast pulsed reactors operated by the US during the late forties and early fifties: the bare uranium core Godiva, the bare plutonium core Jezebel, and the reflected uranium and plutonium assemblies Topsy and Popsy.
These fast metal assemblies can also be used to collect multiplication data at the border of criticality by adjusting their density in various ways. All of the mentioned US assemblies have been used to measure multiplication rates by studying the change in rates with density in the region between delayed and prompt criticality. These measurements can be extrapolated to estimate the maximum values of the materials. Although little data on alpha values for weapons-usable material have been published in general, results of these types of experiments are available.
220.127.116.11 Hydrodynamic Tests
Hydrodynamic tests can evaluate shock compression techniques and designs, and collect data on the properties of nuclear materials under shock compression conditions. The latter sort of test requires conducting shock experiments with actual nuclear materials of course.
This is not much of a problem from a safety point of view for uranium since comparatively non-toxic and nuclearly inert natural or depleted uranium is available. Hydrodynamic tests on complete implosion weapon designs can be conducted for uranium weapons simply by substituting natural uranium or DU for the actual U-235 or U-233.
This is not true for plutonium. There is no non-toxic, non-fissile form of plutonium. The radiotoxicity of plutonium make hydrodynamic tests much more hazardous to perform and care to avoid criticality is essential. It is interesting to note that a considerable amount of high-pressure shock equation of state data has been published for uranium, but very little or none has been for plutonium. Uranium can be used as a plutonium substitute to some extent, but the unique and bizarre physical state diagram of plutonium limits this to some extent. This is especially true in situations were very accurate EOS knowledge is required. The small safety margins involved in creating one-point safe sealed pit weapons, and in preparing for hydronuclear tests, places a premium on precise knowledge of plutonium behavior.
Measurements in weapon-type implosion systems are very difficult to make since they must be taken through the layer of expanding explosion gases. Flying plate systems are widely used for collecting equation of state data ranging up to fairly high shock pressures (several megabars). Advanced weapon programs typically use sophisticated instruments like light gas guns to generate very high pressure shock data.
Even with natural uranium or DU, full scale hydrodynamic test of weapon designs will require special test facilities including heavily reinforced test cells, with provision for instrumentation . The cells will unavoidably remain contaminated with detectable levels of uranium, showing the nature of tests that have been conducted there.
18.104.22.168 Hydronuclear Tests
Hydronuclear tests are the ultimate in integral experiments, since they combine the full range of hydrodynamic and nuclear effects. Although implosion weapons (i.e. Fat Man) have been successfully developed without any tests of this kind, a weapon development program is likely to regard such tests as highly desirable.
In hydronuclear tests of a candidate weapon design data on both the rate of increase of alpha during compression, and the maximum alpha value achieved can be collected. The first type of data is useful to determine the ideal moment of initiation for maximum efficiency, the second for determining how efficient the weapon will be.
The influence of time absorption and other effects dependent on neutron energy (fission cross sections, moderation, inelastic scattering, etc.) changes with effective multiplication rate. This encourages weapon developers to conduct tests at very high multiplication rates to collect good data for weapon performance prediction. Since weapon efficiency and yield are dependent primarily on the effective multiplication rate, this means tests with large releases of nuclear energy. Prohibiting tests with substantial nuclear energy yields (tens or hundreds of tons) may not prevent a nation from developing fission weapons, but it does at least restrict its ability to predict weapon yield.
A serious problem with hydronuclear tests is predicting what is going to happen in advance. On one hand, it is obvious that if one can predict exactly what will happen, then there is no need for the test at all. On the other hand, not being able to estimate the effects reasonably well in advance makes conducting the test extremely difficult, even perilous.
The reason for this should be clear from the efficiency equations. Since at low degrees of supercriticality efficiency and yield scale as (rho - 1)^3, fairly small variations in compression cause fairly large variations in yield. For example, if a two-fold compression factor is intended in create a supercritical density of rho = 1.02 (and a yield of say, 50 kg), then a 5% variation in compression could cause a result ranging from a complete failure to approach criticality, to a 45-fold overshoot (2.2 tonnes). Since designing suitable instrumentation requires having a fairly good knowledge about the range of conditions to be measured, the first would result in no data been collected. The second could destroy the test facility (and also result in no data being collected!). Actual US tests have been known to overshoot target yields of kilograms, producing yields in the tens and even hundreds of tons. | http://nuclearweaponarchive.org/Nwfaq/Nfaq4-1.html | 13 |
26 | After the Tudors, England went through a period of massive turmoil under the new Royal Household of the Stuarts.
Throughout the 17th Century, the Royal Household (the Crown) and Government (Parliament) were at war, both figuratively and literally.
The 1640s saw a massive revolution in the country, with the execution of King Charles I, and was followed by another in 1688.
This tension between the Crown and Parliament was only resolved at the end of the century and led specifically to the creation of the Bank of England and many other financial reforms, most of them under the leadership of the hedonistic King, Charles II.
For example, Charles’s household laid the core tenets of today’s Treasury, Exchequer and Central Bank operations.
The core of these changes were around the separation of the Treasury from the Exchequer.
The Exchequer was established in England in the 12th century during the reign of Henry I. Its original purpose was for collecting and issuing money, and to audit money paid to the Crown. Part of the Exchequer was the Treasury, the place used to keep the King’s treasures.
The Exchequer gradually took on other functions, such as the collection of taxes, and acted as a court of law to decide what was legally owed to the Crown, and was named after the chequered cloth on the table where the treasurer inspected the accounts of the sheriffs, the men responsible for the king's interests in the counties.
For most of the medieval and Tudor period, the office of the Treasurer was part of the Exchequer but it became clear during their period that the Exchequer wasn’t very good at its job.
For example, England’s war with France had led to a deficit of £30,000 in 1433, the equivalent of over £100 billion today, which caused the collapse of many Italian banks.
You would think the banks would learn their lesson but, by the 1670s, the Royal Household was in dire straits once again.
This was down to the Stuarts – the Royal Household for England, Scotland and Ireland during the 17th century – undergoing a time of extremes.
In particular, England’s Civil War (1642-1651), the Great Plague (1663) and the Great Fire of London (1666) had left the King’s finances in a poor state.
This dire situation with the King’s finances is easily illustrated by the fact that the English Navy was so seriously underfunded that its flagship, the Royal Charles, was seized by the Dutch in 1667.
Arrival of the English Flagship Royal Charles, painting by Jeronymus van Diest II from the Rijksmuseum Amsterdam (note the Dutch flag at the rear and English flag flying upside down from the main mast)
Charles decided to do something about it and appointed a Commission to replace the Treasurer.
George Downing, who built Downing Street, was appointed Secretary to that Commission and was instrumental in major reforms which led to the Treasury breaking away from the Exchequer.
This was in June 1667, when it was determined that all money voted for by Parliament must also have specific Treasury approval before it can be given. This rule still holds today, and is the reason why Parliament, the Chancellor and the Treasury release an annual budget statement.
It was during the Stuarts reign that goldsmiths ran England’s banking system.
As mentioned previously, it was during the Tudor period that goldsmiths changed from passive deposit takers, offering physical security for the wealth of their clients, to offering credit on the security of the assets at this disposal.
By the mid-17th century, the goldsmith bankers had a virtual monopoly on banking and had made extensive loans to King Charles II.
In 1672 however, Charles II had become so heavily indebted that he could not pay, and announcement that he would suspend repayments of loans to his bankers for a year.
This resulted in five of the leading goldsmith banks going bust.
There was obviously unease about the situation, with more talk of Civil War, unless some form of guarantee was put into play for Royal, or rather governmental loans.
This was when George Downing – who was looking after the Treasury separating from the Exchequer, working with several other people including Isaac Newton who was Master of the Mint – introduced the idea of raising money by selling marketable Treasury orders with a guaranteed repayment date.
Today, these are known as government bonds and eased the pressure a little on the Royal office for funding the Anglo-Dutch war.
It didn’t solve the situation however, as the King once again defaulted on loans with this speech given to his bankers on February 8th 1676:
I have always resolved to show myself an honest man in paying my debts, and its not my fault that I cannot repay you with the same frankness you used ... in lending it, but having been so honestly dealt with by you, I will pay it as well as possibly I can. And to that purpose tomorrow will pass an Order in Council for the Settling ye Interest of it, till I can pay the principall and according to your owne desires too, upon the Excise.
Speech plate as shown in the Bank of England Museum
Some say that his debt difficulties weren't just due to war, as the King regularly dipped into the country's purse to fund his hedonism.
The financial issues, combined with disagreement over foreign and religious policies, meant that the Crown and Parliament had more arguments for years to come until Parliament was dissolved in 1681 and King Charles ruled alone.
These actions led to Revolution once more, with William and Mary taking the throne in 1688.
The new monarchs represented a far more stable political environment, but their finances were still deteriorating badly due to war with France.
By 1694, King William III was spending over £2½ million a year on the army alone, and the government's finances were in a desperate state.
Times were so desperate they launched a lottery to raise £1 million, with large cash prizes on offer.
Lottery tickets dated 1769, as shown in the Bank of England Museum. The tickets are signed by John Bridger, Chief Cashier at the Bank of England. Lotteries were used regularly from the late 17th century to the 19th century as a fast way of raising money to meet state expenditure; the Bank acted as registrar and received subscriptions on behalf of the government.
It was in this moment that the Bank of England was born.
More about that in the next part of How the City Developed.
Previous entries include:
- Part One: The Romans
- Part Two: The Vikings
- Part Three: Medieval Times
- Part Four: The Tudors
- Part Five: The Stuarts
- Part Six: The Bank of England
- Part Seven: Lloyd's of London
- Part Eight: The London Stock Exchange
- Part Nine: The 1700s
- Part Ten: The Victorians
- Part Eleven: World Wars
- Part Twelve: After World War II
- Part Thirteen: The Big Bang
- Part Fourteen: Crisis
In the years after statehood Maine grew rapidly as markets opened for its farm, forest, and mineral products. At a time when industrial production depended on hand-labor, Maine enjoyed rapid population growth, and in an age of seaborne commerce, it boasted some of the best deep-water harbors in the world. In a material culture built of wood, Maine's 17 million acres of forest stood within a few days sail of any port in the East, and in a time when water turbines drove American industry, Maine had the most powerful rivers east of the Mississippi. The times could hardly have been more propitious for Maine's economic ascent.
Improving the Land
Like the rest of America, Maine was an agrarian society. After 1820 farming spread into the fertile central lowlands and northward into the lime-rich soils of the lower Aroostook Valley, while the St. John River region, settled by Acadian farm families in the 1780s, grew weather-hardened crops of buckwheat and potatoes.
Maine farms were typically small, family-run operations, averaging around 100 acres, and they faced formidable natural obstacles, including geographical isolation, thin soils, dense forests, and unpredictable weather. Except for Aroostook County's potatoes, farmers found no great staple crop for export, and accordingly they devoted a significant amount of time to subsistence production.
They grew a variety of grains along with potatoes, corn, fruits, and vegetables, and raised poultry, cattle, and sheep. After the fall harvest men produced hand-crafted items like clocks, buggy whips, furniture, horse collars, barrels, and shingles, and women made brooms, baskets, and palm-leaf hats and wove cloth or took in cut fabric or leather to sew into clothes or shoes.
"Mixed husbandry," as this approach to farming was called, was a response to Maine's small, easily saturated markets and to the great risk in raising crops in Maine; if one source of income failed, another took its place. Workers in other areas – fishing, lumbering, and more – use a similar tactic of taking on a variety of jobs to ensure economic viability.
With subsistence as a primary goal, the farm family focused on the long winter months when humans and livestock lived off the bounty of the previous season's work. Households were fortified with bushels of potatoes, oats, wheat, buckwheat, and corn; barrels of salt pork, corned beef, and sausage; bins of vegetable and root crops, crocks of butter, loaves of maple sugar, rounds of cheese, and jars of preserves.
Winter dominated the farmers' psychology, as Robert P. Tristram Coffin noted in his poem "This is my country":
These are my people, saving of emotion,
With their eyes dipped in the Winter ocean
The lonely, patient ones, whose speech comes slow,
Whose bodies always lean toward the blow,
The enduring and the clean, the tough and the clear,
Who live where Winter is the word for year.
Women's work was central to this subsistence-based system. Mothers and daughters ran the farm in winter when husbands and sons worked in the woods, and they exchanged various products and skills with neighbors to supplement the family's harvest. They extended the bounty of one season through the next by processing meat, grains, and produce, and they nurtured the farm's primary labor force, instilling the strong work habits so vital to the farm's success.
While husbands and sons worked in the fields, barns, and woodlots, wives and daughters made meals, milked cows, churned butter, fed livestock and poultry, carried wood, tended smaller children, mended grain sacks, washed and ironed clothes, cleaned milk pans, tilled the garden, gathered fruit, and, when time permitted, cleaned the house.
To augment their self-sufficiency and their limited market purchases, men and women bartered skills like blacksmithing, candlemaking, weaving, dressmaking, health care, and carpentry with neighbors; they shared machinery, exchanged use of pastures, borrowed tools, and stood by others in birth, sickness, or death.
These patterns of work and trade shaped a unique rural culture for Maine. Strong inter-generational bonds gave Maine farming a conservative cast, as sons and daughters followed the practices set down by fathers and mothers, and the intense interaction among neighbors and extended kin gave this rural culture a close-knit and somewhat tribal character. Suspicion of outside influences left farmers slow to innovate. Mixed husbandry also inspired a distinctive form of architecture in which farm buildings were connected, house and ell to shed and barn.
While most Mainers were engaged in agriculture, several industries grabbed the attention of the state and nation after 1820. In fact, America's public buildings were made of Maine granite and its houses of Penobscot pine and Brewer brick, cemented over and plastered with lime from Rockland and Rockport and roofed with slate from Monson and Brownville or cedar shingles from the wetlands north of Bangor.
Maine's combination of natural resources and geography put it in position to make a large contribution to feeding and housing the nation and carrying its goods in the early 19th century.
Maine used its abundant natural resources in a number of ways. Tanneries, which utilized the state's abundant hemlock stands for bark extract, dotted the central part of the state, and mixed forests of oak, pine, spruce, and tamarack made Maine the nation's premier ship builder. Ice production, which peaked in the second half of the century, illustrates the windfall nature of these staples industries, wherein a relatively small investment brought vast rewards from seemingly inexhaustible resources.
Granite was another semiprocessed raw material exported in great quantities from Maine. Quarries, particularly those on the islands and the peninsulas of Penobscot Bay, were well positioned for cheap shipment by sea, and good-quality stone lay near the surface, thanks to glacial scouring.
Once the base of a gigantic mountain range, Maine granite was so superior in durability, polish, and color that it was marketed as far west as Denver and San Francisco.
Maine became the nation's leading lumber producer based on an abundance of white pine and a complex of environmental conditions that offered cheap transportation to mills and markets. Rivers flowing out of Maine's relatively flat western tablelands presented few obstacles to impede log drives, and Maine's granite bedrock channeled rainwater and snow-melt directly into these streams, providing a forceful spring "freshet" to push the logs through to the mills.
Nature provided cheap transportation for the loggers, but it also introduced an element of risk known to few other industries. Snowfall provided a friction-free hauling surface to move logs to the rivers, but some seasons brought too little snow and some too much, causing horses to founder on the roads.
Snowfall provided water to drive logs to the mills, but if the snow lingered in the spring, the drive was delayed, and if it melted too fast, logs were stranded in the upper branches. Absent perfect weather, log jams were inevitable, sometimes bringing financial disaster and considerable threat to life.
Strong markets in the expanding seaboard cities pushed the frontier of lumbering activity from the Piscataqua to the Kennebec Valley by 1800, and north to the Penobscot headwaters in the 1840s. This moving frontier left behind scores of inland towns founded on the promise of lumbering profits; sawmills provided off-season jobs for farmers, and woods operations consumed the farmers' hay, oats, beans, and potatoes.
Lumber shipments provided capital for these isolated communities, allowing mill owners to diversify into grain-processing, wool-carding, tanning, and metal-forging, and edging these communities into the industrial age.
But, as the industry moved north of the Penobscot waters, tensions arose between Maine and New Brunswick over the contested boundary between the two. The dispute erupted into the "Aroostook War" and was finally settled in 1842 with negotiation of the Webster-Ashburton Treaty. With the border established, the lumber industry moved further north and into the western highlands.
Where early operations involved hundreds of smaller companies, in the 1830s a few lumber barons like Abner Coburn, Samuel Veazie, Ira Wadleigh, and Rufus Dwinel bought up whole townships, constructed sawmills, and vied for control of the resource.
Companies often competed over construction of canals and dams, sometimes attempting to redirect water to serve their needs. Lumbering also created boom towns, such as Bangor, for a time the world's greatest lumber shipping port.
In these flush decades, lumber production shaped Maine's politics in ways that sometimes hindered further economic growth. Low timberland taxes frustrated attempts to use state resources to encourage other industries. Jealous of their prerogatives, Bangor's merchants and lumbermen allied with rural Jacksonians to block state aid for railroad, canal, road, and waterpower projects aimed at benefitting inland farms and industry.
This conservative axis gave way only gradually in the second half of the century, and its continuing influence was responsible for Maine's contradictory and divided approach toward outside capital and industrial development.
While Portland financed new harbor facilities, the Cumberland and Oxford Canal, and the Atlantic and St. Lawrence Railroad, Bangor invested narrowly in sawmills, timberland, and ships to carry their lumber. In 1856 the Board of Agriculture polled the state's farmers and found that four-fifths considered lumbering an impediment to agricultural modernization.
Timberland owners controlled land, shut out settlers, and discouraged market roads, fearing higher taxes. Their sawmills provided employment, but the work was seasonal and the companies' expectation that farm families would provide their own food depressed wages.
In the long run, seasonal work in the woods and mills anchored people to lands that should not have been cultivated, perpetuating a cycle of low wages, indifferent farming, and rural poverty. For many, the only alternative was out-migration.
The ice, granite, lime, slate, and fishing industries created huge fortunes for those who mastered the art of turning Maine's natural resources into liquid assets. But in the long run these assets became too liquid, passing easily out of the state when opportunities arose elsewhere.
Maine Goes Global
In the middle of the 1800s, Maine stood at the juncture of two great streams of commerce: the transatlantic trade with Europe, and the long-shore links between northern and southern states and the West Indies. Maine built the ships that carried this trade and provided the crews that made America the world's premier trading nation. At its peak, Maine shipyards produced more than one-third of the nation's shipping, including some of the finest square-rigged vessels ever built.
Maine's shipyards were well positioned to benefit from America's burgeoning seaborne trade. Numerous sheltered harbors offered sloping beaches suited to sliding finished ships off the ways, rivers carried ocean-going vessels deep into the interior, and forests supplied a diverse array of timber to meet all ship-building needs. No less important, Maine's maritime skills were honed by a seafaring tradition going back to colonial times. These advantages gave Maine's seacoast towns a cosmopolitan air; villagers made friends around the world and knew intimately the goings-on in exotic places like Singapore and Sao Paulo.
Responding to the rise of cotton textile mills in England and New England, Maine shipyards produced deep-hulled square-rigged ships to transport cotton from the South. Expansion of the China trade in the 1840s created a market for clipper ships, capable of quick passage over vast expanses of open ocean.
Sacrificing cargo space and seaworthiness, these sharp-bowed, narrow-beam vessels emphasized speed to carry low-bulk, high-value items and outrun pirates in the South Pacific. They traded opium for Chinese jade, silks, porcelain, brocades, and tea and brought coffee, spices, and other exotics from the South Pacific.
During the gold rush, clippers transported prospectors and equipment to San Francisco and carried mail and passengers in the transatlantic packet service. These maritime activities shaped the culture of the coast. Sea shanties enlivened Maine lore with stories of lost vessels and rapid crossings, and seaborne superstitions made their way landward.
Small-boat building flourished along the coast, each locale producing its own distinctive design. With men away on voyages, women sustained these coastal communities, building networks of support to compensate for the difficulties families faced in this dangerous occupation. Marriage links forged shipbuilding and sailing dynasties that pooled capital and shared risks; sons-in-law became mates and masters in family ships, and daughters inherited vessel shares and combined them with those of their husbands.
In Searsport, a major seafaring center, about half the wives went to sea with their captain-husbands, sometimes bearing, raising, and educating their children at sea. Maria Whall Waterhouse took command of the S.F. Hersey in Melbourne when her husband died, and according to legend faced down a mutiny with the aid of her late husband's two pistols and the ship's cook.
Maine's long, indented coast also gave rise to a vigorous fishing industry. The Gulf of Maine was among the world's most productive fisheries, benefitting from a rich mix of nutrients from the Labrador Current and Gulf Stream and from extensive breeding grounds in the bays and estuaries along the coast.
Baiting and hauling lines and cleaning and curing the catch was a round-the-clock business, but by law, crews were awarded equal shares in the profits, because the U.S. Treasury provided subsidies in order to foster seafaring skills for the Navy. Because crews were often members of an extended family, fishing was more democratic than seafaring industries like shipping and whaling.
Maine, close to the great cod fisheries on the Grand and Georges banks, focused on cod exports. Salted cod was marketed among urban immigrants and slaves on sugar, rice, and cotton plantations. At its peak, Maine provided one-fifth of the fish product produced in America, a vital source of protein for the nation.
When competition from larger ports made cod fishing less profitable, Maine fishermen turned to mackerel and menhaden, which arrived in huge schools each spring and were rendered as oil or fertilizer in small factories along the coast. Those markets declined in the 1880s, replaced largely by herring, which was caught in brush weirs along the coast.
Herring were smoked, pickled, used as bait, and, in the 1870s, canned in large processing factories as sardines. Capital investments were low, requiring only a few dories to dip herring out of the weirs and a small schooner to carry them to the canneries.
An Industrial Alternative
Maine's cultural identity was shaped by rural pursuits like lumbering, quarrying, and fishing, but in the first half of the century another landscape was shaping up in cities like Saco, Portland, and Lewiston. Maine experienced the Industrial Revolution in a variety of ways, from home and backyard shops producing tinware, cloth, candlesticks, spoons, chairs, and other items to some of the largest textile factories in New England.
As in other industrializing parts of the Northeast, when textile mills began turning out spun thread, women wove it into cloth in their homes, and when the mills began producing finished cloth, homebound women sewed it into ready-made clothing. From these amazingly productive homes, sheds, and dooryards came barrel and box staves, wheels and wagons, shoes, straw hats, boots, and myriad other "industrial" products.
Maine's advantage was its huge waterpower potential and its nearby port facilities. Boston supplied the capital and Maine the industrial energy. In 1826, two years after it built the large textile mills at Lowell, Massachusetts, the Boston Manufacturing Company completed a mill twice the size of Lowell at Saco Falls.
The Saco Manufacturing Company, by far the largest single cotton mill in the country, burned in 1830 and was rebuilt on a much smaller scale, but from these beginnings the industry spread into central Maine. The power of the Saco River was matched by that of the Androscoggin.
Situated at the Falls of the Androscoggin, Lewiston was destined to become Maine's largest textile center. In 1845 several local investors incorporated the Lewiston Falls Cotton Mill Company, dammed the river, dug canals, and two years later sold their investment to Benjamin E. Bates, a member of the Boston Associates.
Bates opened a new mill in 1850 and added another in 1854. During the Civil War he expanded again, sending agents out into the countryside to recruit women. The company's boarding houses along Canal Street were pleasant, inexpensive, and strictly monitored, and, as in Lowell and other textile centers, Yankee women lived and labored until Irish immigrants replaced them.
Families from Ireland had been settling in Maine since colonial times, but the potato famine of 1845-1851 triggered a dramatic increase in migration. Impoverished and debilitated, immigrants arrived in the state's expanding industrial cities from the Maritimes and Quebec. French-Canadians faced a similar, although less drastic agricultural crisis after mid-century, and began moving to Maine in large numbers in the 1870s, eager to trade the uncertainties of marginal farming for the security of a weekly paycheck.
In both cases entire families worked – the men as day laborers digging canals and foundations and railroad grades, and the women and children in the mills. Both groups lived in segregated neighborhoods, often on cheap land near the factories or warehouses, where crowding and sanitation problems brought outbreaks of cholera and typhoid. And, both groups were subject to nativist hostility.
In addition to these new textile cities, Portland expanded its industrial output in the first half of the century, based on its rapid population growth and profits from the West India trade. Ships carrying fish, produce, livestock, and box shooks to the Caribbean returned with sugar and molasses. Portland capitalized on this trade by building sugar refineries and rum distilleries.
The city's infrastructure was geared to the West India trade, but in the 1840s Portland lawyer John A. Poor helped the city diversify by promoting a rail link to Montreal, which became landlocked when the St. Lawrence froze over each winter.
Because Portland was 100 miles closer to Liverpool than was Boston, Portland enjoyed advantages in the transatlantic trade in Canadian timber, agricultural, and mining products. The Atlantic and St. Lawrence Railway was completed in 1854. The rising tide of Canadian staples stimulated development of wharves, piers, stockyards, grain elevators, coal facilities, warehouses, and shipyards, transforming Portland into a major western Atlantic shipping point.
Beginning with these profitable enterprises, business leaders diversified into other forms of manufacturing, including, at one point, railroad locomotives. While no single enterprise was as spectacular as Lewiston's huge textile mills, Portland's smaller and more diversified industries made it the largest manufacturing center in the state.
John A. Poor's audacious railroad schemes – the Atlantic and St. Lawrence and later the European & North American Railway from Bangor to St. John – convinced many that Maine's development hinged on a mix of local resources, outside capital, and good transportation. At mid-century Maine's rail system was consolidated as the Maine Central Railroad, and in the 1890s the Bangor and Aroostook stretched this system into the productive potato and lumber region of eastern Aroostook County and the St. John Valley.
Transportation, capital, waterpower, and natural resources held great promise for Maine's Industrial Revolution, but there were also constraints: a labor force scattered through the upland farm towns and coastal villages or looking for new opportunities beyond the state's borders, and an economic and political system locked in the embrace of the old staple industries.
Still, Maine's prestige as the nation's supplier of fish, textiles, and construction materials gave its political representatives prominent standing in Washington as America approached a critical test of nationhood in 1861.
Women's roles in the World Wars
During the twentieth century, women of the world became indispensable to the war efforts. In many countries female participation in the First World War came to be seen as a necessity, as unprecedented numbers of men were wounded and killed. In the Second World War, the need for women arose again. Whether on the home front or the front lines, for civilian or enlisted women, the World Wars started a new era of opportunities for women to contribute in war and be recognized for efforts outside of the home.
Women's role before World War I
Before the First World War, the traditional female role in western countries was confined to the domestic sphere, though not necessarily to their own homes, and to certain types of jobs.
In Great Britain for example, just before World War I, of the approximately 24 million adult women, around 1.7 million worked in domestic service, 800,000 worked in the textile manufacturing industry, 600,000 worked in the clothing trades, 500,000 worked in commerce, and 260,000 worked in local and national government, including teaching. The British textile and clothing trades, in particular, employed far more women than men and were regarded as 'women's work'.
While some women managed to enter the traditionally male career paths, women, for the most part, were expected to be primarily involved in "duties at home" and "women's work". Before 1914, only a few countries, including New Zealand, Australia, and several Scandinavian nations, had given women the right to vote (see Women's suffrage), but otherwise, women were minimally involved in the political process.
The two world wars hinged as much on industrial production as they did on battlefield clashes. With millions of men away fighting and with the inevitable casualties, there was a severe shortage of labour in a range of industries, from rural and farm work to urban office jobs.
During both World War I and World War II, women were called on, by necessity, to do work and take on roles that were outside their traditional gender expectations. In Great Britain this was known as a process of "Dilution" and was strongly contested by the trade unions, particularly in the engineering and ship building industries. For the duration of both World Wars, women did take on jobs traditionally regarded as skilled "men's work". However, in accordance with the agreement negotiated with the trade unions, women undertaking jobs covered by the Dilution agreement lost their jobs at the end of the First World War.
World War I
Home front
By 1914 nearly 5.09 million of the 23.8 million women in Britain were working. Thousands worked in munitions factories (see Canary girl), offices and the large hangars used to build aircraft. Women were also involved in knitting socks for the soldiers at the front and in other voluntary work, but as a matter of survival many had to take paid employment for the sake of their families. Many women volunteered with the Red Cross, encouraged the sale of war bonds or planted "victory gardens".
Not only did women keep "the home fires burning", they also took on voluntary and paid employment that was diverse in scope and showed that women were highly capable in many fields of endeavor. There is little doubt that this expanded view of the role of women in society changed the outlook on what women could do and their place in the workforce. Although women were still paid less than men, the gap began to narrow, with women now earning around two-thirds of typical male pay. However, the extent of this change is open to historical debate. In part because of female participation in the war effort, Canada, the USA, Great Britain, and a number of European countries extended suffrage to women in the years after the First World War.
British historians no longer emphasize the granting of woman suffrage as a reward for women's participation in war work. Pugh (1974) argues that enfranchising soldiers primarily and women secondarily was decided by senior politicians in 1916. In the absence of major women's groups demanding equal suffrage, the government's conference recommended limited, age-restricted women's suffrage. The suffragettes had been weakened, Pugh argues, by repeated failures before 1914 and by the disorganizing effects of war mobilization; therefore they quietly accepted these restrictions, which were approved in 1918 by a majority of the War Ministry and each political party in Parliament. More generally, Searle (2004) argues that the British debate was essentially over by the 1890s, and that granting the suffrage in 1918 was mostly a byproduct of giving the vote to male soldiers. Women in Britain finally achieved suffrage on the same terms as men in 1928.
Canadian Women during World War I
During World War One, there was virtually no female presence in the Canadian armed forces, with the exception of the 3,141 nurses serving both overseas and on the home front. Of these women, 328 were decorated by King George V, and 46 gave their lives in the line of duty. Even though a number of these women received decorations for their efforts, many high-ranking military personnel still felt that they were unfit for the job. One notable adversary of the effort was Col. Guy Carleton Jones, who stated that "Active service work is extremely severe, and a large portion of R.N.'s are totally unfit for it, mentally or physically." Although the Great War had not officially been opened up to women, they felt its pressures at home. There was a gap in employment when the men enlisted, and many women strove to fill this void while keeping up with their responsibilities at home. When war broke out, Laura Gamble enlisted in the Canadian Army Medical Corps because she knew that her experience in a Toronto hospital would be an asset to the war effort. Canadian nurses were the only nurses of the Allied armies who held the rank of officers. Gamble was presented with a Royal Red Cross, 2nd Class medal for her show of "greatest possible tact and extreme devotion to duty," awarded at Buckingham Palace during a special ceremony for Canadian nurses. Health care practitioners had to deal with medical anomalies they had never seen before the First World War. The chlorine gas used by the Germans caused injuries for which treatment protocols had not yet been developed, and the only treatment that soothed the Canadian soldiers affected by the gas was the constant care they received from the nurses. Canadian nurses were especially well known for their kindness.
Canadians had expected that women would feel sympathetic to the war effort, but the idea that they would contribute in such a physical way seemed absurd to most. Because of the support that women had shown from the beginning of the war, however, people began to see their value to the war effort. In May 1918, a meeting was held to discuss the possible creation of the Canadian Women's Corps. In September, the motion was approved, but the project was pushed aside because the war's end was in sight.
On the Canadian home front, there were many ways in which women could participate in the war effort. Lois Allan joined the Farm Services Corps in 1918 to replace the men who were sent to the front. Allan was placed at E.B. Smith and Sons, where she hulled strawberries for jam. Jobs opened up at factories as well, as industrial production increased. Work days for these women consisted of ten to twelve hours, six days a week. Because the days consisted of long, monotonous work, many women made up parodies of popular songs to get through the day and boost morale. Depending on the area of Canada, some women were given a choice of sleeping in either barracks or tents at the factory or farm where they were employed. According to a brochure issued by the Canadian Department of Public Works, there were several areas in which it was appropriate for women to work. These were:
- On fruit or vegetable farms.
- In the camps to cook for workers.
- On mixed and dairy farms.
- In the farmhouse to help feed those who are raising the crops.
- In canneries, to preserve the fruit and vegetables.
- To take charge of milk routes.
In addition, many women were involved in charitable organizations such as the Ottawa Women's Canadian Club, which helped provide for the needs of soldiers, the families of soldiers and the victims of war. Women were deemed 'soldiers on the home front', encouraged to use less of nearly everything and to be frugal in order to save supplies for the war effort.
British Women during World War I
During World War I, many women were able to participate on the home front in support of the men who had gone out to fight. They worked as nurses, teachers, textile and clothing makers and coal workers, but the largest area in which women worked was the munitions factories. War work spanned a range of trades that supplied the men at the front, including tailoring, the metal trades, chemicals and explosives, the food trades, hosiery, and the woollen and worsted industries. Women joined the munitions factories and other parts of the war effort for a mix of reasons: a sense of patriotism in working to help their fathers, brothers and husbands at the front, and wages that were double what they had previously earned (although still less than a man's). The women working in these munitions factories were called munitionettes, and their work was long, tiring and exhausting, as well as dangerous and hazardous to their health.
The women working in munitions factories came mainly from lower-class families and were between 18 and 30 years old. Much of their work consisted of making gun shells, explosives, aircraft and other materials that supplied the front. It was dangerous and repetitive work: they were constantly surrounded by toxic fumes and handled hazardous machinery and explosives. They were given little training, yet were expected to work quickly and efficiently so the weapons could be shipped off to the men at war. Production was divided among different groups, each with its own particular job, such as filling the shells with cordite or assembling the fuses. The work demanded constant care, because explosions and unexpected gunfire were always possible, putting the workers and those around them at risk.
The work was not only stressful and dangerous; the sheer hours the women worked added to its difficulty. They worked long twelve-hour shifts, six or seven days a week, and at times were expected to work overnight. These long days in the factories were hard on the women's home lives, especially for those with children who were still expected to fulfil their domestic duties. It amounted to double work: labouring in the factory all day and then going home to maintain the house. The women got very little sleep, and the exhaustion, combined with the harm done by the factory chemicals, took a toll on their health.
The factories across Britain in which women worked were often unheated, deafeningly noisy, and full of noxious fumes and other dangers; the conditions were hardly good for their health. The factories also had very little ventilation, so chemicals and fumes could not escape, creating a toxic environment. Because explosives and ammunition rely on chemical reactions, the women handled many hazardous materials, and exposure to these chemicals without proper protection increased their chances of illness.
Working enclosed among the chemicals, common complaints included drowsiness, headaches, eczema, loss of appetite, cyanosis, shortness of breath, vomiting, anaemia, palpitations, bile-stained urine, constipation, a rapid weak pulse, pains in the limbs, jaundice and mercury poisoning. In the book On Her Their Lives Depend: Munitions Workers in the Great War there is a picture of a firewoman carrying out of a building a munitions worker who had passed out from the fumes and smoky conditions in which she had been working; this kind of reaction was common, as there was very little ventilation or fresh air. Jaundice was caused by exposure to TNT and the other chemicals used in making explosives. Along with its other effects, jaundice turns the skin a yellowish hue, and this yellowing gave rise to the nickname "canary girls" for women working in munitions factories. Another discoloration found among factory workers was cyanosis, an ashen grey and livid colour of the lips. The risk did not stop at the factory gate: the women went home at night with the chemicals still on them, putting their families at risk of health problems as well, especially in the case of women who were pregnant or breastfeeding.
Along with the health issues there were more obvious dangers: shells exploding or firearms discharging when they were not supposed to, accidents from which many women died. The women had to be very careful that nothing entered the shells and explosives that was not supposed to, because even a small amount of dirt mixed with the chemicals could set off a reaction and harm everyone working in the factory. They had to work carefully and hard in the knowledge that a slip of the hand when drilling into a shell, or the simple misplacement of a fuse, could have drastic and deadly consequences. The munitionettes were brave and hard-working; they knew their lives were in danger, yet they worked through the illness and the dangers to do their part in the war, expanding the role of women in society and proving that they were capable of doing a man's work.
British World War One Poster Campaign
Propaganda in the form of visual posters enticing women to join factory work in World War One did not represent the dangerous aspects of female wartime labour conditions. The posters painted an appealing rather than accurate picture for women who joined the workforce and did their part in the war. One poster, designed to persuade women to send their men into the armed forces, presents a romantic scene in which a woman looks out of an open window into nature as the soldiers march off to war. It trades on sentimental, romantic appeal, when in reality many women endured extreme hardship when their husbands enlisted. It was this narrative of a false reality, conveyed through visual propaganda, that aimed to motivate the war effort. The Edwardian social construction of gender held that women should be passive, emotional, morally virtuous and domestically responsible, while men were expected to be active, intelligent, and to provide for their families; it was this idea of gender roles that poster propaganda aimed to reverse. In one poster, titled "These Women Are Doing Their Bit", a woman is shown making a sacrifice by joining the munitions factories while the men are at the front. She is depicted as cheerful and beautiful, reassuring viewers that her patriotic duty will not reduce her femininity. The posters make no reference to highly explosive chemicals or to illnesses caused by harsh working environments. These persuasive images of idealized female figures and idyllic settings were designed to solicit female involvement in the war and greatly influenced the idea of appropriate feminine behaviour in wartime Britain. As a result, many women left their domestic lives to join munitions work, enticed by what they thought were better living conditions, patriotic duty and high pay. According to Hupfer, the female role in the social sphere expanded as women joined previously male-dominated and hazardous occupations (325). Hupfer remarks that once the war was over, attitudes about women's capabilities sank back into the previously idealized roles of women and men. Women returned to their duties in the home as they lost their jobs to returning soldiers, and female labour statistics decreased to pre-war levels. Not until 1939 would the role of women expand once again.
Military service
Nursing became almost the only area of female contribution that involved being at the front and experiencing the war. In Britain the Queen Alexandra's Royal Army Nursing Corps, First Aid Nursing Yeomanry and Voluntary Aid Detachment were all started before World War I. The VADs were not allowed in the front line until 1915.
More than 12,000 women enlisted in the United States Navy and Marine Corps during the First World War. About 400 of them died in that war.
Over 2,800 women served with the Royal Canadian Army Medical Corps during the First World War and it was during that era that the role of Canadian women in the military first extended beyond nursing. Women were given paramilitary training in small arms, drill, first aid and vehicle maintenance in case they were needed as home guards. Forty-three women in the Canadian military died during WWI.
The only belligerent to deploy female combat troops in substantial numbers was the Russian Provisional Government in 1917. Its few "Women's Battalions" fought well, but failed to provide the propaganda value expected of them and were disbanded before the end of the year. In the later Russian Civil War, the Bolsheviks would also employ women infantry.
World War II
World War II involved global conflict on an unprecedented scale; the absolute urgency of mobilizing the entire population made the expansion of the role of women inevitable. The hard skilled labor of women was symbolized in the United States by the concept of Rosie the Riveter, a woman factory laborer performing what was previously considered man's work.
With this expanded horizon of opportunity and confidence, and with the extended skill base that many women could now give to paid and voluntary employment, women's roles in World War II were even more extensive than in the First World War. By 1945, more than 2.2 million women were working in the war industries, building ships, aircraft, vehicles, and weaponry. Women also worked in factories, munitions plants and farms, and also drove trucks, provided logistic support for soldiers and entered professional areas of work that were previously the preserve of men. In the Allied countries thousands of women enlisted as nurses serving on the front lines. Thousands of others joined defensive militias at home and there was a great increase in the number of women serving in the military itself, particularly in the Red Army (see below).
In the World War Two era, approximately 400,000 U.S. women served with the armed forces and more than 460 — some sources say the figure is closer to 543 — lost their lives as a result of the war, including 16 from enemy fire. Women became officially recognized as a permanent part of the armed forces with the passing of the Women's Armed Services Integration Act of 1948.
Several hundred thousand women served in combat roles, especially in anti-aircraft units. The U.S. decided not to use women in combat because public opinion would not tolerate it.
Women in the Workplace
When Britain went to war, previously forbidden job opportunities opened up for women, who were called into the factories to create the weapons used on the battlefield. Women took on the responsibility of managing the home and also became the heroines of the home front. According to Carruthers, this industrial employment significantly raised women's self-esteem, as it allowed them to realise their full potential and do their part in the war. During the war the normative role of "housewife" was transformed into a patriotic duty; as Carruthers put it, the housewife became a heroine in the defeat of Hitler (235). The shift of women from domestic roles into masculine and dangerous jobs made for important changes in workplace structure and society. During the Second World War, society held specific ideals about the jobs in which women and men should participate, and when women began to enter the munitions industries and other work previously dominated by men, that segregation began to diminish. Increasing numbers of women were brought into industrial jobs between 1940 and 1943: according to the Ministry of Labour, women's share of industrial employment rose from 19.75 per cent to 27 per cent between 1938 and 1945. It was extremely difficult for women to spend their days in factories and then come home to domestic chores and care-giving, and as a result many women were unable to keep their jobs. Britain faced a labour shortage in which an estimated 1.5 million people were needed for the armed forces and an additional 775,000 for munitions and other services in 1942. It was during this 'labour famine' that propaganda aimed to press people into joining the labour force and doing their bit in the war. Women were a prime target of the various forms of propaganda because they were paid substantially less than men: even when women filled the same jobs, at the same skill level, that men had previously held, they were still paid significantly less because of their gender. In the engineering industry alone, skilled and semi-skilled female workers increased from 75 per cent to 85 per cent between 1940 and 1942. According to Gazeley, even though women were paid less than men, their engagement in war work and in jobs previously reserved for men reduced the growth of industrial segregation.
In Britain, women were essential to the war effort, in both civilian and military roles. The contribution of civilian men and women to the British war effort was acknowledged with the use of the term "Home Front" to describe the battles being fought on a domestic level through rationing, recycling, and war work, such as in munitions factories and on farms. Men were thus released into the military. Many women served with the Women's Auxiliary Fire Service, the Women's Auxiliary Police Corps and the Air Raid Precautions (later Civil Defence) services. Others did voluntary welfare work with the Women's Voluntary Service for Civil Defence and the Salvation Army.
Women were "drafted" in the sense that they were conscripted into war work by the Ministry of Labour, including non-combat jobs in the military, such as the Women's Royal Naval Service (WRNS or "Wrens"), the Women's Auxiliary Air Force (WAAF or "Waffs") and the Auxiliary Territorial Service (ATS). Auxiliary services such as the Air Transport Auxiliary also recruited women. In the early stages of the war such services relied exclusively on volunteers; however, by 1941 conscription was extended to women for the first time in British history, and around 600,000 women were recruited into these three organizations. In these organizations women performed a wide range of jobs in support of the Army, Royal Air Force (RAF) and Royal Navy both overseas and at home. These jobs ranged from feminine roles like cook, clerk and telephonist to more masculine duties like mechanic, armourer and anti-aircraft instrument operator. British women were not drafted into combat units, but could volunteer for combat duty in anti-aircraft units, which shot down German planes and V-1 missiles. Civilian women joined the Special Operations Executive (SOE), which used them in high-danger roles as secret agents and underground radio operators in Nazi-occupied Europe.
Propaganda and British Women's Patriotic Role
Propaganda aimed at British women during the war sought to communicate to the housewife that, while keeping her domestic role, she must also take on a political role of patriotic duty. It was meant to eliminate any conflict between personal and political roles and to make a heroine of the woman worker. In effect, propaganda asked women to redefine their personal and domestic ideals of womanhood and to go against the roles that had been instilled in them. The government struggled to get women to respond to posters and other forms of propaganda. One attempt to recruit women into the labour force was the short film My Father's Daughter. In this propaganda film a wealthy factory owner's daughter begs to do her part in the war, but her father holds the stereotypical belief that women are meant to be caretakers and are incapable of such heavy work. When a foreman reveals that one of the most valuable and efficient workers in the factory is the daughter, the father's prejudices are dispelled. The encouraging message of the film is, "There's Not Much Women Can't Do."
Common Roles for Women
The most common role for women in active service was that of searchlight operator; all of the members of the 93rd Searchlight Regiment were women (Harris). Despite the limits on their roles, there was a great deal of respect between the men and women in the mixed batteries. One report states, "Many men were amazed that women could make adequate gunners despite their excitable temperament, lack of technical instincts, their lack of interest in aeroplanes and their physical weaknesses". While women still faced discrimination from some of the older soldiers and officers, who did not like women "playing with their guns", they were given rifle practice and taught to use anti-aircraft guns while serving in their batteries. They were told that this was in case the Germans invaded; however, had that ever happened, they would have been evacuated immediately.
Three-quarters of the women who entered the wartime forces were volunteers, compared with less than a third of the men. Single or married women were eligible to volunteer for the WAAF, ATS or WRNS and were required to serve throughout Britain, as well as overseas if needed; however, the age limits set by the services varied. Generally women between 17 and 43 could volunteer, and those under 18 required parental consent. After applying, applicants had to fulfil other requirements, including an interview and medical examination; if they were deemed fit to serve they were enrolled for the duration of the war. The WRNS was the only service that offered an immobile branch, which allowed women to live in their own homes and work in the local naval establishment. The WRNS was the smallest of the three organizations and as a result was very selective with its candidates. Of the three, the WAAF was the most preferred choice, with the WRNS second. The ATS was the largest of the three organizations and the least favoured among women, because it accepted those who were unable to get into the other forces. The ATS had also developed a reputation for promiscuity and poor living conditions, and many women found the khaki uniform unappealing, which led them to favour the WRNS and WAAF over the ATS.
Women's Limitations
Women were limited in their roles: they were allowed to do almost anything except fire the guns, which meant they never got to capitalize on the training they received. This was the most common distinction between men and women during the war: women went through the same military training, lived in the same conditions and did almost the same jobs as men, yet were restricted from actually killing anyone. This small but important distinction meant that women were not eligible for any of the medals for valour or bravery, because these were awarded only for "active operations against enemy in the field", in which women could not take part. Women were also set apart by the titles by which they were addressed in the army: corporals were known as bombardiers and privates as gunners. They were also required to wear their designations differently on their uniforms, further distinguishing them from their male counterparts. Discipline differed as well: a woman could not be court-martialled unless she herself chose to be, and servicewomen were under the authority of the women officers of the ATS rather than the male officers they served directly under, which made any disciplinary action difficult.
Opportunities to Enlist
Despite these distinctions from men, women were eager to volunteer. Many servicewomen came from restricted backgrounds and found the army liberating; others volunteered to escape unhappy homes or marriages, or to have a more stimulating job. The overwhelming reason for joining the army, though, was patriotism. As in World War I, England was gripped by patriotic fervour throughout World War II to protect the island from foreign invasion, and women, for the first time, were given the opportunity to help in their native land's defence, which accounts for the high number of female volunteers at the beginning of the war. Even Princess Elizabeth joined to do her part in the protection of the country, serving as a driver under the name Second Subaltern Elizabeth Windsor. Despite the overwhelming response to the call for female volunteers, some women refused to join the forces; many were unwilling to give up their civilian jobs, and others had men in their lives who were unwilling to let them go (Crang 384). Others felt that war was still a man's job and not something women should be involved in. Like the men's forces, the women's forces were mostly volunteer throughout the war, and when women's conscription did come into effect it was highly limited. For example, married women were exempt from any obligation to serve unless they chose to do so, and those who were called up could opt to serve in civil defence (the home front).
During the war approximately 487,000 women volunteered for the women's services: 80,000 for the WRNS, 185,000 for the WAAF and 222,000 for the ATS. By 1941 the demands of wartime industry called for the women's services to be expanded so that more men could be relieved of their previous positions and take on more active roles on the battlefield. Of all the women's services, the ATS needed the greatest number of new applicants; however, due to its lack of popularity, it was unable to gain the estimated 100,000 new volunteers needed. To try to change women's opinions of the ATS, living conditions were improved and a new, more flattering uniform was introduced. In 1941 the Registration for Employment Order was introduced in the hope of getting more women enrolled. This order could not force women to join the forces, but it required women aged 20–30 to seek employment through labour exchanges and to provide information on their current employment and family situations. Those deemed eligible were persuaded into war industry, because the Ministry of Labour did not have the power to compel them. Propaganda was also used to draw women into the women's services. By the end of 1941 the ATS had gained only 58,000 new workers, falling short of expectations. Ernest Bevin then called for conscription, and in late 1941, with the National Service Act, it became compulsory for women aged 20–30 to join military service. Married women were exempt from conscription, but those who were eligible had the option of working in war industry or civil defence if they did not want to join one of the women's services. Women were able to request which force they wished to join, but most were put into the ATS because of its need for new applicants. The National Service Act was not repealed until 1949, but by 1944 women were no longer being called up for service, because relying on volunteers was thought sufficient to complete the required tasks during the final stages of the war.
Women also played an important role in British industrial production during the war, in areas such as metals, chemicals, munitions, shipbuilding and engineering. At the beginning of the war in 1939 women made up 17.8% of employment in these industries, and by 1943 they made up 38.2%. With the start of the war there was an urgent need to expand the country's labour force, and women were seen as a source of factory labour. Before the war women in industrial production worked almost exclusively on assembly, which was seen as cheap and undemanding work, but during the war women were needed in other parts of the production process previously done by men, such as lathe operation. The Ministry of Labour created training centres that gave an introduction to the engineering process, and by 1941 women were admitted as the importance of the engineering industry grew and it became a large source of female employment. Areas such as aircraft manufacture, light and heavy general engineering and motor vehicle manufacturing all saw an increase in female employment during the war. Aircraft production saw the largest rise, with female employment increasing from 7% in 1935 to 40% in 1944. At the start of the war men who were already in engineering were prevented from going to war because engineering was seen as an industry vital to war production, but in 1940 more female workers were needed to supply the labour for factory expansion. By 1941, with the shortage of skilled labour, the Essential Work Order was introduced, which required all skilled workers to register and prevented workers from quitting jobs deemed essential to the war effort without agreement from a National Service Officer. The Registration for Employment Order in 1941 and the Women of Employment Order in 1942 also attempted to bring more women into the workforce. The Women of Employment Order required women aged 18–45 to register with labour exchanges, and by 1943 the maximum age was raised to 50, which brought an additional 20,000 women into the workforce. Aircraft production was given the top labour priority and many women were diverted into it, some even being transferred from agricultural production.
Sky Spying
One of the most important roles women occupied within the forces during the war was interpreting aerial photographs taken by British spy planes over occupied Europe. There was a degree of equality in this work not found anywhere else during the war: women were considered equal to men in the field. Women played an important role in the planning of D-Day in this capacity: they analysed the photographs of the Normandy coast and helped decide which beaches and sectors the troops would land on. Women photo analysts also participated in the biggest intelligence coup of the war: the discovery of the German V-1 flying bomb. The participation of women allowed many of these bombs to be destroyed.
Although many women were doing jobs that men had previously done, there were still pay distinctions between the two sexes. Equal pay was rarely achieved, as employers wanted to keep labour costs down. Skilled work was often broken down into smaller tasks, labelled skilled or semi-skilled, and then paid according to women's pay rates. Women who were judged to be doing 'men's work' were paid more than women who were thought to be doing 'women's work', and employers' definitions of this varied regionally. Women's wages moved closer to those of their male counterparts; however, despite the government's expressed intentions, women continued to be paid less than men for equivalent work and were segregated in terms of job description, status, and the hours they put in. In 1940 Ernest Bevin persuaded engineering employers and unions to give women equal pay to men, since they were taking on the same tasks that men previously had; this became the Extended Employment of Women Agreement. Generally, pay increases depended on the industry: industries that were dominated by women before the war, like textiles and clothing, saw no changes in pay. However, the gap between male and female earnings narrowed by 20–24% in metals, engineering and vehicle building and by 10–13% in chemicals, which were all deemed important to the war effort. Overtime hours also differed, with women getting 2–3 hours a week and men 9–10. Women's hours were still regulated because of their responsibilities for taking care of family and household.
British Women Postwar
Postwar, women returned to many of the mundane jobs they had occupied before the war started. Where the army had once represented an escape from domestic life and a measure of liberty, it now returned to the male-dominated field it had been before the war. Women who had served in the batteries as gunners and searchlight operators were suddenly relegated to work as secretaries and clerks, taking away any opportunity these women may have had to capitalize on their training. "Demob was a big disappointment to a lot of us. It was an awful and wonderful war. I wouldn't have missed it for anything; some of the friends we made were forever," one woman recounted after being dismissed from service to return to her normal job. Married women were released from service sooner at the end of the war, so they could return home before their husbands to ensure the home was ready when he returned from the front. Despite being largely unrecognized for their wartime efforts in the forces, the participation of women in World War II allowed for the founding of permanent women's forces. Britain instituted these permanent forces in 1949, and the Women's Voluntary Services are still a standing reserve force today.
When war began to look unavoidable in the late 1930s, Canadian women felt obligated to help the fight. In October 1938, the Women's Volunteer Service was established in Victoria, BC. Soon, all the provinces and territories followed suit and similar volunteer groups emerged. "Husbands, brothers, fathers, boyfriends were all joining up, doing something to help win the war. Surely women could help as well!" In addition to the Red Cross, several volunteer corps had modelled themselves after auxiliary groups from Britain. These corps had uniforms and marching drills, and a few had rifle training. It soon became clear that a unified governing system would benefit the corps. The volunteers in British Columbia donated two dollars each to pay the expenses so that a representative could talk to politicians in Ottawa. Although all of the politicians appeared sympathetic to the cause, the idea was considered 'premature' in terms of national necessity.
In June 1941, the Canadian Women’s Army Corps was established. The women who enlisted would take over roles such as:
- Drivers of light mechanical transport vehicles
- Cooks in hospitals and messes
- Clerks, typists, and stenographers at camps and training centres
- Telephone operators and messengers
- Canteen helpers
On July 2, 1942, women were given permission to enlist in what would be known as the Canadian Women's Auxiliary Air Force. Lastly, the Royal Canadian Navy created the Women's Royal Canadian Naval Service, or WRENS. The WRENS were the only corps officially part of their sanctioning body as a women's division. This led to bureaucratic issues that were most easily solved by absorbing the civilian corps governed by military organizations into women's divisions as soldiers. According to the RCAF, the requirements for an enlisted woman were as follows:
- Must be at least 18 years of age, and younger than 41 years of age
- Must be of medical category A4B (equivalent of A1)
- Must be at least 5 feet tall, and fall within the appropriate weight range for her height, neither too far above nor below the standard
- Must have a minimum education of entrance into high school
- Be able to pass the appropriate trades test
- Be of good character with no record of conviction for an indictable offence
Women would not be considered for enlistment if they were married and had children dependent on them. Training centres were required for all of the new recruits. They could not be sent to the existing centres, as it was necessary that they be separated from male recruits. The Canadian Women's Army Corps set up centres in Vermilion, AB and Kitchener, ON. Ottawa, ON and Toronto, ON were the locations of the training centres for the Canadian Women's Auxiliary Air Force. The WRENS were outfitted in Galt, ON. Each service tried to make itself as appealing as possible to the women joining, for they all wanted them. In reality, the women went where their fathers, brothers and boyfriends were. Women had numerous reasons for wanting to join the effort, whether they had a father, husband, or brother in the forces, or simply felt a patriotic duty to help. One woman blatantly exclaimed that she could not wait to turn eighteen to enlist, because she had fantasies of assassinating Hitler. Many women aged 16 or 17 lied about their age in order to enlist. The United States would only allow women to join if they were at least twenty-one, so for its young female citizens Canada was the logical option. Recruitment for the different branches of the Canadian Forces was set up in places like Boston and New York. Modifications were made for girls with US citizenship, who had their records marked, "Oath of allegiance not taken by virtue of being a citizen of The United States of America."
Women were obligated to conform to the same enlistment requirements as men. They had to undergo medical examinations and meet fitness requirements, as well as complete training in certain trades depending on the branch of the armed forces they wanted to be a part of. Enlisted women were issued entire uniforms minus the undergarments, for which they received a quarterly allowance.
To be an enlisted woman during the creation stages was not easy. Besides the fact that everyone was learning as they went, the women did not receive the support they needed from the male recruits. To begin with, women were initially paid two-thirds of what a man at the same level would make. As the war progressed, military leaders began to see the substantial impact the women could make; in many cases the women had outperformed their male counterparts. This was taken into account, and the women received a raise to four-fifths of the wages of a man. A female doctor, however, would receive equal financial compensation to her male counterpart. The negative reaction of men towards the female recruits was addressed in propaganda films. Proudly She Marches and Wings on Her Shoulder were made to show the acceptance of female recruits, while showing the men that although the women were taking jobs traditionally intended for men, they would be able to retain their femininity.
Other problems faced early on by these women were racial in nature. An officer of the CWAC had to write to her superiors asking whether there would be any objection to enlisting a girl of "Indian nationality". Because of Canada's large population of immigrants, German women also enlisted, creating great animosity between recruits. The biggest difficulty, however, was the French-Canadian population. In a document dated 25 November 1941, it was declared that enlisted women should 'unofficially' speak English. However, seeing the large number of capable women that this left out, a School of English was established for recruits in mid-1942.
Once in training, some women felt that they had made a mistake. Several women cracked under the pressure and were hospitalized. Other women felt the need to escape, and simply ran away. The easiest and fastest ticket home, however, was pregnancy. Women who found out that they were expecting were given a special, quickly executed discharge.
The women who successfully graduated from training had to find ways to entertain themselves to keep morale up. Softball, badminton, tennis, and hockey were among popular pastimes for recruits.
Religion was a personal matter for the recruits. A minister of sorts was usually on site for services. For Jewish girls, it was customary that they be able to get back to their barracks by sundown on the Sabbath and holidays; a rabbi would be made available if possible.
At the beginning of the war, 600,000 women in Canada held permanent jobs in the private sector; by the peak in 1943, 1.2 million women had jobs. Women quickly gained a good reputation for their mechanical dexterity and fine precision, due to their smaller stature. At home, women could work as:
- Cafeteria workers
- Loggers or lumberjills
- Munitions workers
Women also had to keep their homes together while the men were away. “An Alberta mother of nine boys, all away at either war or factory jobs – drove the tractor, plowed the fields, put up hay, and hauled grain to the elevators, along with tending her garden, raising chickens, pigs, and turkeys, and canned hundreds of jars of fruits and vegetables.”
In addition to physical jobs, women were also asked to cut back and ration. Silk and nylon were used for the war efforts, creating a shortage of stockings. Many women actually painted lines down the back of their legs to create the illusion of wearing the fashionable stockings of the time.
Much as in the United Kingdom, Finnish women took part in defence work: nursing, air raid signaling, rationing and the hospitalization of the wounded. Their organization was called Lotta Svärd, in which voluntary women took part in auxiliary work for the armed forces to help those fighting on the front. Lotta Svärd was one of the largest voluntary groups of World War II, if not the largest. They never fired guns (a rule among the Lottas).
The Third Reich had many roles for women, including combat. The SS-Helferinnen were regarded as part of the SS if they had undergone training at a Reichsschule SS but all other female workers were regarded as being contracted to the SS and chosen largely from concentration camps. Women also served in auxiliary units in the navy (Kriegshelferinnen), air force (Luftnachrichtenhelferinnen) and army (Nachrichtenhelferin).
In 1944–45 more than 500,000 women were volunteer uniformed auxiliaries in the German armed forces (Wehrmacht). About the same number served in civil aerial defense, 400,000 volunteered as nurses, and many more replaced drafted men in the wartime economy. In the Luftwaffe they served in combat roles helping to operate the anti-aircraft systems that shot down Allied bombers. By 1945, German women were holding 85% of the billets as clericals, accountants, interpreters, laboratory workers, and administrative workers, together with half of the clerical and junior administrative posts in high-level field headquarters.
Germany had a very large and well organized nursing service, with four main organizations: one for Catholics, one for Protestants, the secular DRK (Red Cross) and the "Brown Nurses" for committed Nazi women. Military nursing was primarily handled by the DRK, which came under partial Nazi control. Frontline medical services were provided by male medics and doctors. Red Cross nurses served widely within the military medical services, staffing the hospitals that perforce were close to the front lines and at risk of bombing attacks. Two dozen were awarded the Iron Cross for heroism under fire. The brief historiography focuses on the dilemmas of Brown Nurses forced to look the other way while their incapacitated patients were murdered.
Hundreds of women auxiliaries (Aufseherin) served the SS in the camps, the majority of whom were at Ravensbrück. In Germany women also worked, and were told by Hitler to produce more pure Aryan children to fight in future wars.
The Italian Social Republic had similar roles for women. In 1944 the Women's Auxiliary Service (Servizio Ausiliario Femminile) was regarded as part of the RSI military formations. Its commander was Brigadier General Piera Gatteschi Fondelli.
In occupied Poland, as elsewhere, women played a major role in the resistance movement, putting them in the front line. Their most important role was as couriers: carrying messages between cells of the resistance movement, distributing news broadsheets, and operating clandestine printing presses. During partisan attacks on Nazi forces and installations they served as scouts.
During the Warsaw Rising of 1944, female members of the Home Army were couriers and medics, but many carried weapons and took part in the fighting. Among the more notable women of the Home Army was Wanda Gertz, who created and commanded DYSK (the women's sabotage unit). For her bravery in these activities and later in the Warsaw Uprising she was awarded Poland's highest decorations: the Virtuti Militari and Polonia Restituta. One of the articles of the capitulation was that the German Army recognized the women as full members of the armed forces and had to set up separate prisoner-of-war camps to hold over 2,000 women prisoners of war.
Soviet Union
The Soviet Union mobilized women at an early stage of the war, integrating them into the main army units, and not using the "auxiliary" status. Some 800,000 women served, most of whom were in front-line duty units. About 300,000 served in anti-aircraft units and performed all functions in the batteries—including firing the guns. A small number were combat flyers in the Air Force.
Tito's Yugoslav National Liberation Movement claimed 6,000,000 civilian supporters; its two million women formed the Antifascist Front of Women (AFŽ), in which the revolutionary coexisted with the traditional. The AFŽ managed schools, hospitals and even local governments. About 100,000 women served with 600,000 men in Tito's Yugoslav National Liberation Army. It stressed its dedication to women's rights and gender equality and used the imagery of traditional folklore heroines to attract and legitimize the partizanka. After the war women were relegated to traditional gender roles, but Yugoslavia is unique in that its historians paid extensive attention to women's roles in the resistance, until the country broke up in the 1990s. Then the memory of the women soldiers faded away.
United States of America
More than 60,000 Army nurses (all military nurses were women at the time) served stateside and overseas during World War II. They were kept far from combat but 67 were captured by the Japanese in the Philippines in 1942 and were held as POWs for over two and a half years. One Army flight nurse was aboard an aircraft that was shot down behind enemy lines in Germany in 1944. She was held as a POW for four months. In 1943 Dr. Margaret Craighill became the first female doctor to become a commissioned officer in the United States Army Medical Corps.
The Army established the Women's Army Auxiliary Corps (WAAC) in 1942. WAACs served overseas in North Africa in 1942. The WAAC, however, never accomplished its goal of making available to "the national defense the knowledge, skill, and special training of the women of the nation." In 1942, Charity Adams (Earley) became the first black female commissioned officer in the WAAC. The WAAC was converted to the Women's Army Corps (WAC) in 1943, and recognized as an official part of the regular army. More than 150,000 women served as WACs during the war, and thousands were sent to the European and Pacific theaters; in 1944 WACs landed in Normandy after D-Day and served in Australia, New Guinea, and the Philippines in the Pacific. In 1945 the 6888th Central Postal Directory Battalion (the only all African-American, all-female battalion during World War II) worked in England and France, making it the first black female battalion to travel overseas. The battalion was commanded by MAJ Charity Adams Earley and was composed of 30 officers and 800 enlisted women. WWII black recruitment was limited to 10 percent for the WAAC/WAC, matching the percentage of African-Americans in the US population at the time. For the most part, Army policy reflected segregation policy. Enlisted basic training was segregated for training, living and dining. At enlisted specialists' schools and officer training, living quarters were segregated but training and dining were integrated. A total of 6,520 African-American women served during the war.
Asian-Pacific-American women first entered military service during World War II. The Women's Army Corps (WAC) recruited 50 Japanese-American and Chinese-American women and sent them to the Military Intelligence Service Language School at Fort Snelling, Minnesota, for training as military translators. Of these women, 21 were assigned to the Pacific Military Intelligence Research Section at Camp Ritchie, Maryland. There they worked with captured Japanese documents, extracting information pertaining to military plans, as well as political and economic information that impacted Japan's ability to conduct the war. Other WAC translators were assigned jobs helping the US Army interface with its Chinese allies. In 1943, the Women's Army Corps recruited a unit of Chinese-American women to serve with the Army Air Forces as "Air WACs." The Army lowered the height and weight requirements for the women of this particular unit, referred to as the "Madame Chiang Kai-Shek Air WAC unit." The first two women to enlist in the unit were Hazel (Toy) Nakashima and Jit Wong, both of California. Air WACs served in a large variety of jobs, including aerial photo interpretation, air traffic control, and weather forecasting.
More than 14,000 Navy nurses served stateside, overseas on hospital ships and as flight nurses during the war. Five Navy nurses were captured by the Japanese on the island of Guam and held as POWs for five months before being exchanged. A second group of eleven Navy nurses were captured in the Philippines and held for 37 months. (During the Japanese occupation of the Philippines, some Filipino-American women smuggled food and medicine to American prisoners of war (POWs) and carried information on Japanese deployments to Filipino and American forces working to sabotage the Japanese Army.) The Navy also recruited women into its Navy Women's Reserve, called Women Accepted for Volunteer Emergency Service (WAVES), starting in 1942. Before the war was over, 84,000 WAVES filled shore billets in a large variety of jobs in communications, intelligence, supply, medicine, and administration. The Navy refused to accept Japanese-American women throughout World War II. USS HIGBEE (DD-806), a GEARING-class destroyer, was the first warship named for a woman to take part in combat operations. Lenah S. Higbee, the ship's namesake, was the Superintendent of the Navy Nurse Corps from 1911 until 1922.
The Marine Corps created the Marine Corps Women's Reserve in 1943. That year, the first female officer of the United States Marine Corps was commissioned; the first detachment of female marines was sent to Hawaii for duty in 1945. The first director of the Marine Corps Women's Reserve was Mrs. Ruth Cheney Streeter from Morristown, New Jersey. Captain Anne Lentz was its first commissioned officer and Private Lucille McClarren its first enlisted woman; both joined in 1943. Marine women served stateside as clerks, cooks, mechanics, drivers, and in a variety of other positions. By the end of World War II, 85% of the enlisted personnel assigned to Headquarters U.S. Marine Corps were women.
In 1941 the first civilian women were hired by the Coast Guard to serve in secretarial and clerical positions. In 1942 the Coast Guard established their Women's Reserve known as the SPARs (after the motto Semper Paratus - Always Ready). YN3 Dorothy Tuttle became the first SPAR enlistee when she enlisted in the Coast Guard Women's Reserve on 7 December 1942. LCDR Dorothy Stratton transferred from the Navy to serve as the director of the SPARs. The first five African-American women entered the SPARs in 1945: Olivia Hooker, D. Winifred Byrd, Julia Mosley, Yvonne Cumberbatch, and Aileen Cooke. Also in 1945, SPAR Marjorie Bell Stewart was awarded the Silver Lifesaving Medal by CAPT Dorothy Stratton, becoming the first SPAR to receive the award. SPARs were assigned stateside and served as storekeepers, clerks, photographers, pharmacist's mates, cooks, and in numerous other jobs. More than 11,000 SPARs served during World War II.
In 1943, the US Public Health Service established the Cadet Nurse Corps which trained some 125,000 women for possible military service.
In all, 350,000 American women served in the U.S. military during World War II and 16 were killed in action. World War II also marked racial milestones for women in the military such as Carmen Contreras-Bozak, who became the first Hispanic to join the WAC, serving in Algiers under General Dwight D. Eisenhower, and Minnie Spotted-Wolf, the first Native American woman to enlist in the United States Marines.
The Women Airforce Service Pilots (WASP), created in 1943, were civilians who flew stateside missions chiefly to ferry planes when male pilots were in short supply. They were the first women to fly American military aircraft. Accidents killed 38. The WASP was disbanded in 1944 when enough male veterans were available.
American Home Front
U.S. women also performed many kinds of non-military service in organizations such as the Office of Strategic Services (OSS), American Red Cross, and the United Service Organizations (USO). Nineteen million American women filled out the home front labor force, not only as "Rosie the Riveters" in war factory jobs, but in transportation, agricultural, and office work of every variety. Women joined the federal government in massive numbers during World War II. Nearly a million "government girls" were recruited for war work. In addition, women volunteers aided the war effort by planting victory gardens, canning produce, selling war bonds, donating blood, salvaging needed commodities and sending care packages.
By the end of World War I, twenty-four percent of workers in aviation plants, mainly located along the coasts of the United States, were women, and yet this percentage was easily surpassed by the beginning of World War II. Mary Anderson, director of the Women’s Bureau, reported in January 1942 that about 2,800,000 women “are now engaged in war work, and that their numbers are expected to double by the end of this year.”
The skills women had acquired through their daily chores proved to be very useful in helping them acquire new skill sets towards the war effort. For example, the pop culture phenomenon of "Rosie the Riveter" made riveting one of the most widely known jobs. Experts speculate women were so successful at riveting because it so closely resembled sewing (assembling and seaming together a garment). However, riveting was only one of many jobs that women were learning and mastering as the aviation industry was developing. As Glenn Martin, a co-founder of Martin Marietta, told a reporter: “we have women helping design our planes in the Engineering Departments, building them on the production line, [and] operating almost every conceivable type of machinery, from rivet guns to giant stamp presses”.
It is true that some women chose more traditional female jobs, such as sewing aircraft upholstery or painting radium on tiny instrument markings so that pilots could see the instrument panel in the dark. And yet many others, perhaps more adventurous, chose to run massive hydraulic presses that cut metal parts, while others used cranes to move bulky plane parts from one end of the factory to the other. There were even women inspectors to ensure any necessary adjustments were made before the planes were flown out to war, often by female pilots. The majority of the planes they built were either large bombers or small fighters.
Although at first most Americans were reluctant to allow women into traditionally male jobs, women proved that they could not only do the job but in some instances do it better than their male counterparts. For example, women in general paid more attention to detail; as the foreman of California Consolidated Aircraft once told the Saturday Evening Post, “Nothing gets by them unless it’s right.”
See also
- Air Transport Auxiliary (UK)
- Australian Women's Army Service (World War II)
- Australian Women's Land Army
- Canadian Women's Army Corps – known as "CWACs"
- Dorothy Lawrence – British reporter who posed as a man in the First World War
- Female guards in Nazi concentration camps
- First Aid Nursing Yeomanry (UK) – known as "FANYs"
- History of women in the military
- List of uprisings led by women
- Ochotnicza Legia Kobiet (Poland, 1918), and the later Przysposobienie Wojskowe Kobiet (1920s-1930s)
- SPARS (USA)
- Wojskowa Służba Kobiet of the Polish resistance, the Home Army
- Women Accepted for Volunteer Emergency Service (USA) – known as "WAVES"
- Women Airforce Service Pilots (USA) – known as "WASPs"
- Women in the Russian and Soviet military
- Women's Army Corps (USA) – known as "WACs"
- Women's Auxiliary Air Force (UK)
- Women's Auxiliary Service (Poland) – its members known as "Pestki" (after PSK, Pomocnicza Służba Kobiet)
- Women's Auxiliary Territorial Service (UK) (in which Princess Elizabeth, now Queen Elizabeth II, was enlisted)
- Women's Land Army (UK) – known as "Land girls"
- Woman's Land Army of America
- Women's Royal Army Corps (UK)
- Women's Royal Australian Naval Service (Australia) – known as "WRANS"
- Women's Royal Canadian Naval Service (Canada) – also known as "Wrens"
- Women's Royal Naval Service (UK) – known as "Wrens"
- Adams, R.J.Q. (1978). Arms and the Wizard. Lloyd George and the Ministry of Munitions 1915 - 1916, London: Cassell & Co Ltd. ISBN 0-304-29916-2. Particularly, Chapter 8: The Women's Part.
- Martin D. Pugh, "Politicians and the Woman's Vote 1914-1918," History, October 1974, Vol. 59 Issue 197, pp 358-374
- G.R. Searle, A New England? Peace and war, 1886-1918 (2004) p 791
- Gossage, Carolyn. ‘’Greatcoats and Glamour Boots’’. (Toronto:Dundurn Press Limited, 1991)
- Library and Archives Canada, “Canada and the First World War: We Were There,” Government of Canada, 7 November 2008, www.collectionscanada.gc.ca/firstworldwar/025005-2500-e.html
- Library and Archives Canada, “Canada and the First World War: We Were There,” Government of Canada, 7 November 2008, www.collectionscanada.gc.ca/firstworldwar/025005-2100-e.html#d
- Canada, Department of Public Works, Women’s Work on the Land, (Ontario, Tracks and Labour Branch) www.collectionscanada.gc.ca/firstworldwar/025005-2100.005.07-e.html
- Abbott, Edith. “The War and Women’s Work in England” Journal of Political Economy (University of Chicago Press) 25. 7 (July, 1917): 656. JSTOR. Web. 19th February 2013.
- Crisp, Helen. “Women in Munitions.” The Australian Quarterly (Australian Institute of Policy and Science) 13. 3 (September. 1941): 71. JSTOR. Web. 19th February 2013.
- Woollacott, Angela. “Women Munitions Makers, War and Citizenship.” Peace Review 8. 3(September 1996): 374. ProQuest. Web. 19 February 2013.
- “Health of Munitions Workers.” The British Medical Journal. (BMJ Publishing Group) 1.2883 (April 1st, 1916): 488. JSTOR. Web. 19 February 2013.
- Ferris, Helen Josephine. “Chapter XIV: Club Work in War Time- Over There.” Girls Clubs: Their Organization and Management, A Manual for Workers. New York: E.P Dutton, 1918. 327. Women and Social Movements in the United States. Web. February 19th 2013.
- Hupfer, Maureen. “A Pluralistic Approach to Visual Communication: Reviewing Rhetoric and Representation in World War I Posters”. University of Alberta. Advances in Consumer Research. (1997): 322-26.
- CBC News http://www.cbc.ca/news/background/military-international/
- "Women in the Canadian military". CBC News. 30 May 2006.
- Reese, Roger R. (2000). The Soviet military experience: a history of the Soviet Army, 1917–1991. Routledge. p. 17. ISBN 0-415-21719-9.
- D'Ann Campbell, "Women in Combat: The World War Two Experience in the United States, Great Britain, Germany, and the Soviet Union," Journal of Military History (April 1993), 57:301-323 online edition
- Carruthers, Susan L. "'Manning the Factories': Propaganda and Policy on the Employment of Women, 1939-1947." History 75.244 (1990): 232-56. Web.
- Gazeley, Ian. “Women’s pay in British Industry during the second world War.” Economic History Review. (2008): 651-671. Web.
- Shelford Bidwell, The Women's Royal Army Corps (London, 1977)
- Crang, Jeremy (2008). "Come into the Army, Maud': Women, Military Conscription, and the Markham Inquiry". Defence Studies 8: 381–395. EBSCOhost.
- See Campbell 1993
- Frederick Arthur Pile, Ack-Ack (London, 1949),
- Nigel West, Secret War: Story of S.O.E. (1993)
- Gingrich, Nadine. “"Every Man Who Dies, Dies for You and Me. See You Be Worthy": The Image of the Hero as Rhetorical Motivation in Unofficial War Propaganda, 1914-1918” War, Literature & the Arts: An International Journal of the Humanities. November 1, (2005): 108-117. Web.
- De Groot, Gerard J. "`I Love the Scent of Cordite in Your Hair': Gender Dynamics in Mixed Anti-Craft Batteries". History 82.265 (1997): 73-92. Web.
- Crang, Jeremy A. "'Come into the Army, Maud': Women, Military Conscription, and the Markham Inquiry." Defence Studies 8.3 (2008): 381-95. Web.
- Little, Stephen (2011). "Shadow Factories, Shallow Skills? an Analysis of Work Organisation in the Aircraft Industry in the Second World War". Labor History 52: 193–216 EBSCOhost.
- Gazeley, Ian (2008). "Women's Pay in British Industry during the Second World War". Economic History Review 61: 651–671 EBSCOhost.
- Hart, Robert (2007). "Women Doing Men’s Work and Women Doing Women’s Work: Female Work and Pay in British Wartime Engineering". Explorations in Economic History 44: 114–130 EBSCOhost.
- Bruley, Sue (2003). "A New Perspective on Women Workers in the Second World War: The Industrial Diary of Kathleen Church-Bliss and Elsie Whiteman". Labour History Review 68: 217–234 EBSCOhost.
- Downing, Taylor. "Spying from the Sky." History Today 61.11 (2011): 10-16. Web.
- Harris, Carol. "Women Under Fire in World War Two". February 17, 2011.Web. February 17, 2013 .
- "1942 Timeline". WW2DB. Retrieved 2011-02-09.
- Veterans Affairs Canada, “Women at War,” Government of Canada,19 October 2012, www.veterans.gc.ca/eng/history/secondwar/fact_sheets/women
- Charles Leonard Lundin, Finland in the Second World War (Indiana UP, 1957)
- Gordon Williamson, World War II German Women's Auxiliary Services (2003).
- Karen Hagemann, "Mobilizing Women for War: The History, Historiography, and Memory of German Women's War Service in the Two World Wars," Journal of Military History (2011) 75#4 pp 1055-1094
- Campbell, D'Ann (April 1993). "Women in Combat: The World War Two Experience in the United States, Great Britain, Germany, and the Soviet Union". Journal of Military History 57: 301–323.
- Bronwyn Rebekah McFarland-Icke, Nurses in Nazi Germany (1999)
- Leila J. Rupp, Mobilizing Women For War: German and American Propaganda, 1939-1945 (1979)
- Ney-Krwawicz, Marek. "Women Soldiers of the Polish Home Army". Polishresistance-ak.org. Retrieved 2013-01-07.
- (Russian)"Kalugina Klavdiya Yefremovna". Iremember.ru. Retrieved 2011-01-10.
- Bernard A. Cook (2006). "Women and war: a historical encyclopedia from antiquity to the present". ABC-CLIO. p.546. ISBN 1-85109-770-8
- Campbell 1993
- K. Jean Cottam, "Soviet Women in Combat in World War II: The Ground Forces and the Navy," International Journal of Women's Studies, 3, no. 4 (1980): 345-57;
- K. Jean Cottam, Soviet Airwomen in Combat in World War II (Manhattan, KS: Military Affairs/Aerospace Historian Publishing, 1983)
- Barbara Jancar, "Women in the Yugoslav National Liberation Movement: An Overview," Studies in Comparative Communism (1981) 14#2 pp 143-164.
- Vesna Drapac, "Resistance and the Politics of Daily Life in Hitler's Europe: The Case of Yugoslavia in a Comparative Perspective," Aspasia 2009 3: 55-78
- Barbara Jancar-Webster, Women and Revolution in Yugoslavia 1941-1945 (1990)
- "Resources-Historical Frequently Asked Questions". Women In Military Service For America Memorial Foundation. Retrieved 2013-01-07.
- Mary T. Sarnecky, A History of the U.S. Army Nurse Corps (1999)
- Windsor, Laura Lynn (2002). "Craighill, Margaret D.". Women in Medicine: An Encyclopedia. ABC-CLIO. p. 49. ISBN 978-1-57607-392-6.
- Stremlow, Mary V. Free a Marine to Fight: Women Marines in World War II. Reprint, illustrated ed. DIANE, 1996. Google Book Search. 23 April 2009 <http://books.google.com/books?id=lA8DkWs_FXgC&printsec=frontcover>
- "Claiming Their Citizenship: African American Women From 1624-2009". National Women's History Museum. Retrieved 2013-01-07.
- "Black America Web". Black America Web. Retrieved 2013-01-07.[dead link]
- "Celebrating the Legacy: African-American Women Serving in Our Nation's Defense". Women In Military Service For America Memorial Foundation. Retrieved 2013-01-07.
- "Asian-Pacific-American Servicewomen in Defense of a Nation". Women In Military Service For America Memorial Foundation. Retrieved 2013-01-07.
- Bonar, Nancy Yockey (November 16, 2010). "All-Aboard! Navy Welcomes Women to Submarine Fleet". On Patrol. Retrieved 2013-01-07.
- Jean Ebbert and Mary-Beth Hall, Crossed Currents: Navy Women in a Century of Change (1999)
- "History & Firsts". Navy Personnel Command. Retrieved 2013-01-07.
- "Women & the U.S. Coast Guard: Moments in History". United States Coast Guard. Retrieved 2013-01-07.
- "Highlights in the History of Military Women". Women in Military Service for America Memorial Foundation. Retrieved 2013-01-07.
- Molly Merryman, Clipped Wings: The Rise and Fall of the Women Airforce Service Pilots (WASPS) of World War II (2001)
- Adams, Frank S. “Women in Democracy’s Arsenal,” New York Times, October 19, 1941.
- “About 3,000,000 Women Now in War Work.” Science News Letter, January 16, 1943.
- Weatherford, Doris. American Women during World War II. New York:Routledge, 2010. p12
- Bradley, La Verne. “Women at Work.” National Geographic, August 1944.
- Weatherford, Doris. American Women during World War II. New York:Routledge, 2010, p.12
- Weatherford, Doris. American Women during World War II. New York:Routledge, 2010, p.14
Women on the homefront
- Beauman, Katharine Bentley. Green Sleeves: The Story of WVS/WRVS (London: Seeley, Service & Co. Ltd., 1977)
- Calder, Angus. The People's War: Britain 1939-45 (1969)
- Campbell, D'Ann. Women at War With America: Private Lives in a Patriotic Era (1984)
- Cook, Bernard A. Women and war: a historical encyclopedia from antiquity to the present (2006)
- Costello, John. Love, Sex, and War: Changing Values, 1939-1945 (1985). US title: Virtue under Fire: How World War II Changed Our Social and Sexual Attitudes
- Darian-Smith, Kate. On the Home Front: Melbourne in Wartime, 1939-1945. Australia: Oxford UP, 1990.
- Gildea, Robert. Marianne in Chains: Daily Life in the Heart of France During the German Occupation (2004)
- Maurine W. Greenwald. Women, War, and Work: The Impact of World War I on Women Workers in the United States (1990)
- Hagemann, Karen and Stefanie Schüler-Springorum; Home/Front: The Military, War, and Gender in Twentieth-Century Germany. Berg, 2002.
- Harris, Carol (2000). Women at War 1939-1945: The Home Front. Stroud: Sutton Publishing Limited. ISBN 0-7509-2536-1.
- Havens, Thomas R. "Women and War in Japan, 1937-1945." American Historical Review 80 (1975): 913-934. online in JSTOR.
- Higonnet, Margaret R., et al., eds. Behind the Lines: Gender and the Two World Wars. Yale UP, 1987.
- Marwick, Arthur. War and Social Change in the Twentieth Century: A Comparative Study of Britain, France, Germany, Russia, and the United States. 1974.
- Noakes, J. (ed.), The Civilian in War: The Home Front in Europe, Japan and the U.S.A. in World War II. Exeter: Exeter University Press. 1992.
- Pierson, Ruth Roach. They're Still Women After All: The Second World War and Canadian Womanhood. Toronto: McClelland and Stewart, 1986.
- Regis, Margaret. When Our Mothers Went to War: An Illustrated History of Women in World War II. Seattle: NavPublishing. (2008) ISBN 978-1-879932-05-0.
- Wightman, Clare (1999). More than Munitions: Women, Work and the Engineering Industries 1900-1950. London: Addison Wesley Longman limited. ISBN 0-582-41435-0.
- Williams, Mari. A. (2002). A Forgotten Army: Female Munitions Workers of South Wales, 1939-1945. Cardiff: University of Wales Press. ISBN 0-7083-1726-X.
- "Government Girls of World War II" 2004 film by Leslie Sewell
Women in military service
- Bidwell, Shelford. The Women's Royal Army Corps (London, 1977),
- Campbell, D'Ann. "Women in Combat: The World War Two Experience in the United States, Great Britain, Germany, and the Soviet Union" Journal of Military History (April 1993), 57:301-323. online edition
- Campbell, D'Ann. Women at War With America: Private Lives in a Patriotic Era (1984) ch 1-2
- Campbell, D'Ann. "Women in Uniform: The World War II Experiment," Military Affairs, Vol. 51, No. 3, Fiftieth Year—1937-1987 (July, 1987), pp. 137–139 in JSTOR
- Cottam, K. Jean, ed. The Golden-Tressed Soldier (Manhattan, KS, Military Affairs/Aerospace Historian Publishing, 1983) on Soviet women
- Cottam, K. Jean. Soviet Airwomen in Combat in World War II (Manhattan, KS: Military Affairs/Aerospace Historian Publishing, 1983)
- Cottam, K. Jean. "Soviet Women in Combat in World War II: The Ground Forces and the Navy," International Journal of Women's Studies, 3, no. 4 (1980): 345-57
- DeGroot G.J. "Whose Finger on the Trigger? Mixed Anti-Aircraft Batteries and the Female Combat Taboo," War in History, Volume 4, Number 4, December 1997, pp. 434–453(20)
- Dombrowski, Nicole Ann. Women and War in the Twentieth Century: Enlisted With Or Without Consent (1999)
- Hagemann, Karen, "Mobilizing Women for War: The History, Historiography, and Memory of German Women’s War Service in the Two World Wars," Journal of Military History 75:3 (2011): 1055-1093
- Krylova, Anna. Soviet Women in Combat: A History of Violence on the Eastern Front (2010) excerpt and text search
- Pennington, Reina. Wings, Women, and War: Soviet Airwomen in World War II Combat (2007) excerpt and text search ISBN 0-7006-1145-2
- Saywell, Shelley. Women in War (Toronto, 1985);
- Seidler, Franz W. Frauen zu den Waffen—Marketenderinnen, Helferinnen Soldatinnen ["Women to Arms: Sutlers, Volunteers, Female Soldiers"] (Koblenz, Bonn: Wehr & Wissen, 1978)
- Stoff, Laurie S. They Fought for the Motherland: Russia's Women Soldiers in World War I And the Revolution (2006)
- Treadwell, Mattie. The Women's Army Corps (1954)
- Tuten, "Jeff M. Germany and the World Wars," in Nancy Loring Goldman, ed. Female Combatants or Non-Combatants? (1982)
- Women of World War I The Women of World War I (from the book "War and Gender").
- Railwaywomen in Wartime British women's work on the railways in both world wars - photos and text - free information.
- WWII US women's service organizations — History and uniforms in color (WAAC/WAC, WAVES, ANC, NNC, USMCWR, PHS, SPARS, ARC and WASP)
- The U.S. Army Nurse Corps a publication of the United States Army Center of Military History
- Women soldiers in Polish Home Army
- Women in World War II Fact Sheet Statistics on the many roles of American women in World War II
Meningitis is defined as an inflammation of the membranes and cerebrospinal fluid that encase and bathe the brain and spinal cord. It is a serious disease which can be life-threatening and may result in permanent complications if not diagnosed and treated early. The pathogenic development of the disease suggests that meningitis can be broadly categorized into three main types [1]. Bacterial meningitis is rare but more serious, and can be life-threatening if not treated immediately. Fungal meningitis is typically diagnosed in patients with pre-existing conditions that weaken the immune system, such as those living with lupus or HIV. Viral meningitis is caused by a virus (acute or chronic), is more common, but is far less serious, and those who are diagnosed usually make a full recovery.
Symptoms of meningitis amongst children can appear very quickly or may take several days to become apparent, and include: fever; irritability; headache; photophobia (eye sensitivity to light); stiff neck; skin rashes; jaundice; inability to feed; high-pitched cry; lethargy; and seizures. Early diagnosis and timely intervention are the most effective ways of preventing negative outcomes associated with the disease.
Whilst meningitis cases affect all age demographics, the World Health Organisation has observed the highest rates of infection in young children [2]. For example, bacterial meningitis predominantly affects younger children, and most cases of viral meningitis occur in children under the age of five years [3]. Epidemiological studies suggest rates of about two to ten cases per 10,000 live births, with children particularly vulnerable to meningitis between the ages of 3 months and 3 years [4]. Fatality rates vary from as low as 2% for infants to 20–30% for neonates and adults. Since the mid-1980s, as a result of the protection offered by current vaccines and an increased understanding of the mechanisms of the disease [5], the median age at which bacterial meningitis is diagnosed has shifted from 15 months to 25 years. Geographically, meningitis epidemics have been experienced in various parts of the world, with research suggesting that climate might be a contributory risk factor in the spread of the disease [6].
In addition to the symptomatic development and epidemiological spread of the disease, there are other known risk factors associated with meningitis, which include social, environmental and economic determinants. Although most cases are isolated, the disease can spread amongst people living in close social proximity, and outbreaks have occurred in areas where there is a higher degree of social interaction or overcrowding [7], which promotes exposure and transmission. The research indicates that meningitis is more prevalent in poorer areas than in affluent areas, suggesting that there is also a strong socio-economic component to the development of the disease [8]. Indeed, the risk of invasive meningococcal disease (the leading cause of bacterial meningitis) in children is strongly influenced by unfavorable socioeconomic conditions [9]. Increased levels of poverty are also linked to identified barriers in terms of geography, income, and socio-cultural differences. Research has found that presenting for treatment and early management of the disease is compounded by issues related to geography (access to a medical facility), income (cost of healthcare), or cultural differences (attitudes towards illness and disease), which prevent lower socio-economic groups from receiving treatment, increasing the risk of adverse outcomes. Others have suggested that improvements in access to healthcare and earlier treatment are more likely to reduce the rate of mortality from meningitis [10].
Physicians are confronted with a broad range of symptoms and risk factors which they need to take into account when assessing a patient with possible meningitis, and when establishing the consequences of various treatment options. The ways in which these symptoms and risk factors inter-relate and how they are identified by healthcare professionals are integral to improving outcomes from the disease.
At a macro level, a number of studies have shown that the diagnosis and treatment management of meningitis is a complex and challenging problem for government and healthcare agencies, requiring novel approaches to its management and intervention [11]. This has involved the application of modelling approaches for diagnosis and treatment. Public health experts working at the health protection agencies have developed a model to determine whether suspected meningitis is bacterial or viral in origin. Clinical prediction rules have also been used to develop bacterial meningitis scores that classify patients according to risk of contraction [15]. Some diagnostic decision rules for the management of children with meningeal signs have also been proposed to assist in timely diagnosis and decision-making [13]. Diagnostic scores have been constructed to predict disease outcomes and have been applied to successfully identify at-risk patients [17]. Based on literature studies, the symptoms, clinical features and microbiological (laboratory) examinations are the principal factors contributing to the accurate diagnosis and risk assessment of meningitis.
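To make the structure of such clinical prediction rules concrete, the sketch below implements a purely hypothetical rule-based score in Python. This excerpt does not reproduce the actual published scores it cites, so every feature, weight, and cut-off here is an invented assumption used only to illustrate the general form: binary clinical and laboratory criteria are weighted, summed, and mapped onto coarse risk bands.

```python
# Hypothetical rule-based meningitis risk score (illustrative only).
# Features, weights, and cut-offs are assumptions, not the published
# bacterial meningitis scores or decision rules cited in the text.

from dataclasses import dataclass


@dataclass
class Findings:
    fever: bool
    stiff_neck: bool
    seizures: bool
    csf_white_cells_per_ul: float    # laboratory examination
    csf_protein_mg_dl: float         # laboratory examination
    overcrowded_household: bool      # socio-economic risk factor


def risk_score(f: Findings) -> int:
    """Sum simple weighted criteria into an integer risk score."""
    score = 0
    score += 1 if f.fever else 0
    score += 1 if f.stiff_neck else 0
    score += 2 if f.seizures else 0
    score += 2 if f.csf_white_cells_per_ul >= 1000 else 0
    score += 1 if f.csf_protein_mg_dl >= 80 else 0
    score += 1 if f.overcrowded_household else 0
    return score


def classify(score: int) -> str:
    """Map the score onto coarse risk bands (cut-offs are assumptions)."""
    if score <= 1:
        return "low risk - likely viral, observe"
    if score <= 4:
        return "moderate risk - further laboratory work-up"
    return "high risk - manage as suspected bacterial meningitis"


if __name__ == "__main__":
    patient = Findings(fever=True, stiff_neck=True, seizures=False,
                       csf_white_cells_per_ul=1200.0,
                       csf_protein_mg_dl=95.0,
                       overcrowded_household=True)
    s = risk_score(patient)
    print(f"score={s}: {classify(s)}")
```

In practice such rules are derived from and validated on clinical cohorts; the point of the sketch is only the shape of the computation that the cited scoring systems share.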
In this paper, we propose a modelling approach to understanding meningitis which focuses on capturing the various symptoms associated with the disease and incorporating specific risk factors, such as socio-economic determinants, derived from expert knowledge provided by physicians. This work models the complex problem of meningitis diagnosis and severity assessment using fuzzy cognitive mapping (FCM), an effective knowledge representation and modelling technique [19]. Through the proposed technique, the paper will develop and validate a simple tool to predict the likelihood of viral or bacterial meningitis in younger infants and children.
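As a rough illustration of how an FCM-based tool of this kind might operate, the sketch below runs a standard FCM inference loop: concepts (symptoms, a risk factor, and two diagnosis outputs) are nodes, a signed weight matrix encodes expert-assigned causal influences, and concept activations are updated iteratively through a sigmoid threshold function until they stabilise. The concept list and all weights are invented for illustration; the map actually elicited from physicians in the study is not given in this excerpt.

```python
# Minimal fuzzy cognitive map (FCM) inference sketch (illustrative only).
# The concepts and weights are assumptions, not the expert map used in the study.

import numpy as np

CONCEPTS = ["fever", "stiff_neck", "seizures", "overcrowding",
            "bacterial_meningitis", "viral_meningitis"]

# W[i, j] = causal influence of concept i on concept j, in [-1, 1].
W = np.array([
    # fever stiff  seiz  crowd  bact  viral
    [0.0,  0.0,  0.0,  0.0,  0.4,  0.5],   # fever
    [0.0,  0.0,  0.0,  0.0,  0.6,  0.3],   # stiff_neck
    [0.0,  0.0,  0.0,  0.0,  0.7,  0.1],   # seizures
    [0.0,  0.0,  0.0,  0.0,  0.3,  0.2],   # overcrowding
    [0.0,  0.0,  0.0,  0.0,  0.0,  0.0],   # bacterial_meningitis (output)
    [0.0,  0.0,  0.0,  0.0,  0.0,  0.0],   # viral_meningitis (output)
])


def sigmoid(x: np.ndarray, lam: float = 2.0) -> np.ndarray:
    """Squash activations into (0, 1); lam controls the steepness."""
    return 1.0 / (1.0 + np.exp(-lam * x))


def fcm_infer(initial: np.ndarray, steps: int = 50, tol: float = 1e-4) -> np.ndarray:
    """Iterate A(t+1) = f(A(t) + A(t) @ W) until the activations stabilise."""
    a = initial.astype(float)
    for _ in range(steps):
        new_a = sigmoid(a + a @ W)
        if np.max(np.abs(new_a - a)) < tol:
            return new_a
        a = new_a
    return a


if __name__ == "__main__":
    # A child presenting with fever and stiff neck, no seizures,
    # from an overcrowded household.
    a0 = np.array([1.0, 1.0, 0.0, 1.0, 0.0, 0.0])
    for name, value in zip(CONCEPTS, fcm_infer(a0)):
        print(f"{name:22s} {value:.2f}")
```

Keeping the two diagnosis concepts as pure 'sink' nodes (no outgoing weights) makes their converged activations directly readable as relative support for a bacterial versus a viral picture; in a real application the weights would come from the physicians' elicited knowledge and would be validated against case data.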
The main scope of this work is the construction of a knowledge-based tool for modelling meningitis diagnosis for children living in semi-urban areas of India. The meningitis diagnostic procedure typically involves close interaction between the biologist, the pathologist and the paediatrician, and involves extracting and analyzing blood samples from the patient. The diagnosis of meningitis is more challenging within semi-urban areas of Indian cities given the lack of healthcare infrastructure, poor co-ordination between healthcare agencies and professionals, and the shortage of qualified physicians, all of which potentially delay identification of the disease. Moreover, the average costs of laboratory tests, and the potentially long hospital stays that result, make treatment expensive and unaffordable for the majority of patients living within developing countries [20].
A decision-making tool to assist in the diagnosis of meningitis offers healthcare professionals the potential to arrive at a decision sooner, and alleviates the cost burden to the patient if laboratory tests and hospital stays are not required. No previous research has explored FCM methodology for assessing and diagnosing meningitis. The tool proposed in this research is designed to aid paediatricians who are responsible for clinical decision-making regarding the treatment of children with meningitis, which involves diagnosing the disease and its severity and deciding on the most appropriate treatment.
This paper is structured into five sections. The Methods section briefly describes the principal aspects of FCM formalization and the construction of a tool to support the diagnosis of meningitis. The Results section describes the accuracy of the tool in predicting the diagnosis of meningitis. Finally, the Discussion and Conclusions emerging from the study are presented.
Finance is based on economics. Therefore, to properly understand financial markets and their behavior, one must first understand economics. Economics at its core is concerned with the production, distribution, trade and consumption of goods and services. To put this in human terms, we can say that economics is the science that arises out of the interplay between limited resources and unlimited human wants and needs.
There are two basic ways to view economics. There is the broad and distant view, which attempts to view things in aggregate for a society at large. We call this view “Macroeconomics”. Macroeconomics is concerned with the status of the economy as a whole. Thus, it looks at the overall employment of a general population or the overall income of a nation, as opposed to a more focused view of a population segment or specific industry. This view is helpful because it is only by this kind of analysis that we can see the general trends which a society or nation is following. Macroeconomic theory and analysis are employed most often by governments and institutions, which have a responsibility to make policies and decisions that affect the economy as a whole.
Some terms you may have heard of which concern themselves with the macroeconomic view of the economy are Gross National Product, Inflation, Consumer Price Index and Fiscal Policy. The meaning of each of these is listed below.
Gross National Product – This is the most common measure of economic productivity for an aggregate population. GNP is defined as the total value of all goods and services produced in final form during a specific period of time (usually 1 year).
Inflation – Inflation is defined as a condition of generally increasing prices. The term used for measuring these prices can vary according to the desires of the individual, government or institution doing the evaluation.
Consumer Price Index – The CPI is a measure of how much prices have increased or decreased as compared to a baseline year’s prices (a simple worked example follows these definitions). The prices used in arriving at this figure are for standard goods and services determined by the evaluator. Thus, the CPI for the United States might vary greatly as compared to the CPI for a country from the Middle East.
Fiscal Policy – Fiscal Policy is essentially the manner in which a government achieves economic objectives through government spending and taxation. Fiscal policy is the alternative to Monetary Policy.
Monetary Policy – Monetary Policy is essentially the practice of a government managing the supply of money to achieve economic objectives. The United States uses the Federal Reserve System to either increase or decrease the supply of money, which in turn affects the overall economic environment as a whole.
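As a minimal worked example of the CPI and inflation definitions above, the sketch below computes a price index for an invented two-good basket relative to a base year, and the implied inflation rate between two years. The basket contents, quantities, and prices are assumptions chosen purely for illustration (Python).

```python
# Toy Consumer Price Index calculation for an invented fixed basket.
# Goods, quantities, and prices are illustrative assumptions only.

BASKET = {"bread": 52, "fuel": 120}  # fixed quantities bought per year

PRICES = {
    2020: {"bread": 2.50, "fuel": 3.00},  # base year
    2021: {"bread": 2.65, "fuel": 3.40},
}


def basket_cost(prices: dict) -> float:
    """Cost of the fixed basket at the given prices."""
    return sum(qty * prices[good] for good, qty in BASKET.items())


def cpi(year: int, base_year: int = 2020) -> float:
    """Index of the basket's cost relative to the base year (base = 100)."""
    return 100.0 * basket_cost(PRICES[year]) / basket_cost(PRICES[base_year])


def inflation_rate(year: int, prev_year: int) -> float:
    """Percentage change in the index between two years."""
    return 100.0 * (cpi(year) - cpi(prev_year)) / cpi(prev_year)


if __name__ == "__main__":
    print(f"CPI 2020: {cpi(2020):.1f}")   # 100.0 by construction
    print(f"CPI 2021: {cpi(2021):.1f}")   # about 111.4 with these prices
    print(f"Inflation 2020 to 2021: {inflation_rate(2021, 2020):.1f}%")
```

The same logic scales to the real CPI: a statistical agency fixes a much larger basket, tracks its cost over time, and reports the index relative to a chosen base period; the inflation rate is simply the percentage change in that index.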
The principles of Macroeconomics are important in analyzing and understanding longer-term trends and aggregate market behavior. Therefore, for the individual managing his own portfolio it may be helpful to know the current fiscal policy and how it may affect the value of any government bond holdings. One of the ways the government will manage fiscal policy is to buy back these bonds or issue more depending on their objective. This is just one example of the way in which Macroeconomics affects the individual investor.
Nationalism is an ideology that holds that a nation is the fundamental unit for human social life, and takes precedence over any other social and political principles. Nationalism makes certain political claims based upon this belief: above all, the claim that the nation is the only legitimate basis for the state, that each nation is entitled to its own state, and that the borders of the state should be congruent with the borders of the nation. Nationalism refers to both a political doctrine and any collective action by political and social movements on behalf of specific nations. Nationalism has had an enormous influence upon world history, since the nation-state has become the dominant form of state organization. Most of the world's population now lives in states which are, at least nominally, nation-states. Historians also use the term 'nationalism' to refer to this historical transition, and to the emergence of nationalist ideology and movements.
Principles of Nationalism
This section sets out the components of nationalist ideology as seen by nationalists themselves. (Academic theories of nationalism are sceptical of some of these beliefs and principles, see below).
Nationalism is a form of universalism when it makes universal claims about how the world should be organised, but it is particularistic with regard to individual nations. The combination of the two is characteristic of the ideology, for instance in these assertions:
- "in a nation-state, the language of the nation should be the official language, and all citizens should speak it, and not a foreign language."
- "the official language of Denmark should be Danish, and all Danish citizens should speak it."
The universalistic principles bring nationalism into conflict with competing forms of universalism, the particularistic principles bring specific nationalist movements into conflict with rival nationalisms - for instance, the Danish-German tensions over their reciprocal linguistic minorities.
The starting point of nationalism is the existence of nations, which it takes as a given. Nations are typically seen as entities with a long history: most nationalists do not believe a nation can be created artificially. Nationalist movements see themselves as the representative of an existing, centuries-old nation. However, some theories of nationalism imply the reverse order - that the nationalist movements created the sense of national identity, and then a political unit corresponding to it, or that an existing state promoted a 'national' identity for itself.
Nationalists see nations as an inclusive categorisation of human beings - assigning every individual to one specific nation. In fact, nationalism sees most human activity as national in character. Nations have national symbols, a national culture, a national music and national literature; national folklore, a national mythology and - in some cases - even a national religion. Individuals share national values and a national identity, admire the national hero, eat the national dish and play the national sport.
Nationalists define individual nations on the basis of certain criteria, which distinguish one nation from another; and determine who is a member of each nation. These criteria typically include a shared language, culture, and/or shared values which are predominantly represented within a specific ethnic group. National identity refers both to these defining criteria, and to the shared heritage of each group. Membership in a nation is usually involuntary and determined by birth. Individual nationalisms vary in their degree of internal uniformity: some are monolithic, and tolerate little variance from the national norms. Academic nationalism theory emphasises that national identity is contested, reflecting differences in region, class, gender, and language or dialect. A recent development is the idea of a national core culture, in Germany the Leitkultur, which emphasises a minimal set of non-negotiable values: this is primarily a strategy of cultural assimilation in response to immigration.
Nationalism has a strong territorial component, with an inclusive categorisation of territory corresponding to the categorisation of individuals. For each nation, there is a territory which is uniquely associated with it, the national homeland, and together they account for most habitable land. This is reflected in the geopolitical claims of nationalism, which seeks to order the world as a series of nation-states, each based on the national homeland of its respective nation. Territorial claims characterise the politics of nationalist movements. Established nation-states also make an implicit territorial claim, to secure their own continued existence: sometimes it is specified in the national constitution. In the nationalist view, each nation has a moral entitlement to a sovereign state: this is usually taken as a given.
The nation-state is intended to guarantee the existence of a nation, to preserve its distinct identity, and to provide a territory where the national culture and ethos are dominant - nationalism is also a philosophy of the state. It sees a nation-state as a necessity for each nation: secessionist national movements often complain about their second-class status as a minority within another nation. This specific view of the duties of the state influenced the introduction of national education systems, often teaching a standard curriculum, national cultural policy, and national language policy. In turn, nation-states appeal to a national cultural-historical mythos to justify their existence, and to confer political legitimacy - acquiescence of the population in the authority of the government.
Nationalists recognise that 'non-national' states exist and existed, but do not see them as a legitimate form of state. The struggles of early nationalist movements were often directed against such non-national states, specifically multi-ethnic empires such as Austria-Hungary and the Ottoman Empire. Most multi-ethnic empires have disappeared, but some secessionist movements see Russia and China as comparable non-national, imperial states. At least one modern state is clearly not a nation-state: the Vatican City exists solely to provide a sovereign territorial unit for the Catholic Church.
Nationalism as ideology includes ethical principles: that the moral duties of individuals to fellow members of the nation override those to non-members. Nationalism claims that national loyalty, in case of conflict, overrides local loyalties, and all other loyalties to family, friends, profession, religion, or class.
Theory of nationalism
Background and problems
Specific examples of nationalism are extremely diverse, the issues are emotional, and the conflicts often bloody. The theory of nationalism has always been complicated by this background, and by the intrusion of nationalist ideology into the theory. There are also national differences in the theory of nationalism, since people define nationalism on the basis of their local experience. Theory (and media coverage) may overemphasise conflicting nationalist movements, ethnic tension, and war - diverting attention from general theoretical issues; for instance, the characteristics of nation-states.
Nationalist movements are surrounded by other nationalist movements and nations, and this may colour their version of nationalism. It may focus purely on self-determination, and ignore other nations. When conflicts arise, however, ideological attacks upon the identity and legitimacy of the 'enemy' nationalism may become the focus. In the Israeli-Palestinian conflict, for instance, both sides have claimed that the other is not a 'real' nation, and therefore has no right to a state. Jingoism and chauvinism make exaggerated claims about the superiority of one nation over another. National stereotypes are also common, and are usually insulting. This kind of negative nationalism, directed at other nations, is certainly a nationalist phenomenon, but not a sufficient basis for a general theory of nationalism.
Issues in nationalism theory
The first studies of nationalism were generally historical accounts of nationalist movements. At the end of the 19th century, Marxists and other socialists produced political analyses that were critical of the nationalist movements then active in central and eastern Europe. Most sociological theories of nationalism date from after the Second World War. Some nationalism theory is about issues which concern nationalists themselves, such as who belongs to the nation and who does not, as well as the precise meaning of 'belonging'.
Origin of nations and nationalism
Recent general theory has looked at underlying issues, and above all the question of which came first, nations or nationalism. Nationalist activists see themselves as representing a pre-existing nation, and the primordialist theory of nationalism agrees. It sees nations, or at least ethnic groups, as a social reality dating back twenty thousand years.
The modernist theories imply that until around 1800, almost no-one had more than local loyalties. National identity and unity were originally imposed from above, by European states, because they were necessary to modernise the economy and society. In this theory, nationalist conflicts are an unintended side-effect. For example, Ernest Gellner argued that nations are a by-product of industrialization, which required a large, literate and culturally homogeneous population. According to Charles Tilly, states promoted nationalism in order to secure popular consent to conscription into large modern armies, and to the taxation necessary to maintain such armies. According to the modernist view, the first true nation-state was created by the French Revolution, though the tendencies had existed since the beginning of the Modern Era. In addition to this top-down nationalism, there were also cases of bottom-up nationalism, such as German Romantic nationalism, which materialized in the resistance against Napoleon.
More recent theorists of nationalism emphasise that nations are a socially constructed phenomenon. Benedict Anderson, for example, described nations as "imagined communities". Gellner comments: "Nationalism is not the awakening of nations to self-consciousness: it invents nations where they do not exist." (Anderson and Gellner deploy terms such as 'imagined' and 'invent' in a neutral, descriptive manner. The use of these terms in this context is not intended to imply that nations are fictional or fantastic.) Modernisation theorists see such things as the printing press and capitalism as necessary conditions for nationalism.
Anthony D. Smith proposed a synthesis of primordialist and modernist views. According to Smith, the preconditions for the formation of a nation are as follows:
- A fixed homeland (current or historical)
- High autonomy
- Hostile surroundings
- Memories of battles
- Sacred centres
- Languages and scripts
- Special customs
- Historical records and thinking
Those preconditions may create a powerful common mythology. Therefore, the mythic homeland is in reality more important for the national identity than the actual territory occupied by the nation. Smith also posits that nations are formed through the inclusion of the whole populace (not just elites), the constitution of legal and political institutions, nationalist ideology, international recognition and the drawing up of borders.
Theoretical literature on nationalism
There is a large amount of theoretical and empirical literature on nationalism. The following is a minimal selection, and a series of capsule summaries that do not do justice to the range of views expressed.
- Anderson, Benedict. 1991. Imagined Communities. 2nd ed. London: Verso. Anderson argues that nations are imagined political communities, and are imagined to be limited and sovereign. Their development is due to the decline of other types of imagined community, especially in the face of capitalist production of print media.
- Armstrong, John. 1982. Nations Before Nationalism. Armstrong traces the development of national identities from origins in antiquity and the medieval world.
- Breuilly, John. 1992. Nationalism and the State. 2nd ed. Manchester: Manchester University Press. This approach focuses on the politics of nationalism, in particular on nationalism as a response to the imperatives of the modern state. It employs the mode of comparative history to study a large number of different cases of nationalism.
- Gellner, Ernest. 1983. Nations and Nationalism. Oxford: Blackwell. This work links nationalism to the homogenising imperatives of industrial society and the reactions of minority cultures to those imperatives.
- Greenfeld, Liah. 1992. Nationalism: Five Roads to Modernity. Cambridge: Harvard University Press. Greenfeld argues that nationalism existed at an earlier age than previously thought: as early as the sixteenth century in the case of England.
- Hechter, Michael. 1975. Internal Colonialism. London: Routledge and Kegan Paul. Hechter attributes nationalism in the "Celtic fringe" of Britain and Ireland to the reinforcing divisions of culture and the division of labour.
- Hobsbawm, Eric, and Ranger, Terence, eds. 1983. The Invention of Tradition. Cambridge: Cambridge University Press. This collection of essays, especially Hobsbawm's introduction and chapter on turn-of-the-century Europe, argues that the nation is a prominent type of invented tradition.
- Kedourie, Elie. 1960. Nationalism. London: Hutchinson. Kedourie focuses on the role of disaffected German intellectuals in developing the doctrine of nationalism at the beginning of the nineteenth century from Kant's idea of the autonomy of the will and Herder's belief in the primacy of linguistic communities in establishing modes of thought.
- Kedourie, Elie, ed. 1971. Nationalism in Asia and Africa. London: Weidenfeld and Nicolson. Kedourie's introduction to this volume of nationalist texts extends his analysis in his earlier work to the efforts of intellectuals in colonial states.
- Nairn, Tom. 1977. The Break-up of Britain. London: New Left Books. Marxist historian Nairn traces nationalism to the confrontation of colonialism, which leaves indigenous elites without recourse to any resources but their own population.
- Smith, Anthony D. 1986. The Ethnic Origins of Nations. Oxford: Blackwell. Smith traces modern nations and nationalism to pre-modern ethnic sources, arguing for the existence of an "ethnic core" in modern nations.
Historical evolution of nationalism
Prior to 1900
Most theories of nationalism assume a European origin of the nation-state. The modern state is often seen as emerging with the Treaty of Westphalia in 1648, though this view is disputed. This treaty created the 'Westphalian system' of states, which recognised each other's sovereignty and territory. Some of the signatories, such as the Dutch Republic, qualify as a nation-state, but in 1648 most states in Europe were still non-national.
Many, but not all, see the major transition to nation-states as originating in the late 18th and 19th centuries. Beginning with romantic nationalism, nationalist movements arose throughout Europe, a process accelerated by the French Revolution and the conquests of Napoleon Bonaparte. Some of these movements were separatist, directed against large empires: an early example is the Greek Revolution (1821-1829). Others sought to unify a divided or fragmented territory, as in the Italian unification under the rule of Piedmont-Sardinia. These movements promoted a national identity and culture: in the 1848 Revolutions in Europe they were often associated with liberal demands. By the end of the 19th century most people accepted that Europe was divided into nations, and personally identified with one of these nations. The collapse of the Austro-Hungarian Empire and the Ottoman Empire after the First World War accelerated the formation of nation-states.
According to the standard view, before the 19th century people had local, regional, or religious loyalties, but no idea of nationhood. The typical state in Europe was a dynastic state, ruled by a royal house: if there were any loyalties above regional level, they were owed to the king and the ruling house. Dynastic states could acquire territory by royal marriage, and lose it by division of inheritance - which is now seen as absurd. Nationalism introduced the idea that each nation has a specific territory, and that beyond this point the claims of other nations apply. Nation-states, in principle, do not seek to conquer territory. However, nationalist movements rarely agreed on where the border should be. As the nationalist movements grew, they introduced new territorial disputes in Europe.
Nationalism also determined the political life of 19th century Europe. Where the nation was part of an empire, the national liberation struggle was also a struggle against older autocratic regimes, and nationalism was allied with liberal anti-monarchical movements. Where the nation-state was a consolidation of an older monarchy, as in Spain, nationalism was itself conservative and monarchical. Most nationalist movements began in opposition to the existing order, but by the 20th century, there were regimes which primarily identified themselves as nationalist.
The standard theory of the 19th-century origin of nation-states is disputed. One problem with it is that the South American independence struggles and the American Revolution (American War of Independence) predate most European nationalist movements. Some countries, such as the Netherlands and England, seem to have had a clear national identity well before the 19th century.
20th Century nationalism
By the end of the 19th century, nationalist ideas had begun to spread to Asia. In India, nationalism began to encourage calls for the end of British rule. The 20th century nationalist movement in India is generally associated with Mahatma Gandhi, although many other leaders were involved as well. In China, nationalism influenced the 1911 Revolution. In Japan, nationalism and Japanese "exceptionalism" influenced Japanese imperialism.
World War I led to the creation of new nation-states in Europe. This was encouraged by the United States, which rejected the legitimacy of the former multi-ethnic empires (see Wilsonianism). France, which sought to isolate Germany and Austria, also encouraged the creation of potential client states. The Ottoman Empire and the Austro-Hungarian Empire disintegrated. The Versailles Treaty, based upon US President Wilson's Fourteen Points, partially confirmed the division into new nation-states. In the Middle East, the Arab Revolt did not lead to new independent states: the victorious western powers secured League of Nations mandates for Iraq, Lebanon, Palestine (including Transjordan), and Syria. The Turkish War of Independence (1919-1923) created a new nation-state from the core of the Ottoman Empire. In the east of Europe, the Russian Empire had collapsed as a result of the Russian Revolution of 1917. The Anglo-Irish War led to the partition of Ireland into the Irish Free State and Northern Ireland.
However, multi-nation and multi-ethnic states survived in Europe, and two new ones emerged: Czechoslovakia (where the more prosperous Czech half dominated) and the Kingdom of Yugoslavia (dominated by Serbia). In the interwar period, the extreme nationalist movements of fascism and Nazism came to power in Italy and Germany respectively, and similar groups took over several other European countries during the late 1930s. This new wave of nationalism had powerful racist undertones, and it culminated in World War II and the Holocaust.
The horrors of World War II discredited militant nationalism as an ideology, but scarcely altered the division of Europe into nation-states. Outside Europe, the war initiated a new wave of nation-state formation, through the independence of African and Asian nations from European colonial empires. The most dramatic decolonisation began in the late 1950s in Africa, which was transformed from a collection of European colonies into a continent of nation-states. Few of them corresponded to the ideal nation-state (one nation, one language, one culture), but most still exist. Ironically, the one that best met those criteria, Somalia, disintegrated. The Algerian War of Independence was the bloodiest of the decolonisation wars in Africa; some decolonisations were peaceful. Rhodesia and the Portuguese colonies of Mozambique and Angola delayed decolonisation for a time.
The collapse of the Soviet Union led to an unexpected revival of national movements in Europe around 1990. Its constituent republics - Belarus, Ukraine, Moldova, Kazakhstan, Turkmenistan, Uzbekistan, Tajikistan, Kyrgyzstan, Armenia, Azerbaijan, Georgia, Latvia, Estonia and Lithuania - became independent, for the second time in modern history in the case of the Baltic states. The second Yugoslavia broke up into nation-states, some with predecessor states such as the Nazi-oriented Independent State of Croatia, some as new sovereign states. Within established nation-states, there are many secessionist movements, some of them seeking the creation of a new sovereign state, for instance in Quebec. The unresolved status of Northern Ireland led to protracted violence known as The Troubles, but without changes in the border.
In the second half of the 20th century, some trends emerged which might indicate a weakening of the nation-state and nationalism. The European Union is widely seen as transferring power from the national level to both sub-national and supra-national levels. Critics of globalization often appeal to feelings of national identity, culture, and sovereignty. Free trade agreements, such as NAFTA and the GATT, and the increasing internationalisation of trade markets, are seen as damaging to the national economy, and have led to a revival of economic nationalism. Protest movements vehemently oppose what they see as the negative aspects of globalization (see Anti-globalisation).
Not all anti-globalists are nationalists, but nationalism continues to assert itself in response to those trends. Nationalist parties continue to do well in elections, and most people continue to have a strong sense of attachment to their nationality. Moreover, globalism and European federalism are not always opposed to nationalism. For example, theorists of Chinese nationalism within the People's Republic of China have articulated the idea that China's national power is substantially enhanced, rather than being reduced, by engaging in international trade and multinational organizations. For a time sub-national groups such as Catalan autonomists and Welsh nationalists supported a stronger European Union in the hope that a Europe of the regions would limit the power of the present nation-states. However, with Euroscepticism now widespread in the EU, this transformation is no longer on its political agenda.
Language and Nationalism
A common language has been a defining characteristic of the nation, and an ideal for nationalists. For example, in France before the French Revolution, mutually incomprehensible regional languages such as Breton and Occitan were spoken. Standard French was also spoken in large parts of the country and had been the language of administration, but after the Revolution it was imposed as the national language in non-French-speaking regions. For instance, in Brittany, Celtic names were forbidden. The formation of nation-states, and their consolidation after independence, is generally accompanied by policies to restrict, replace, or abandon minority languages. This accelerates the tendency noted in sociolinguistic research that high-status languages displace low-status languages. See also: Language policy in France.
Some theorists believe that nationalism became pronounced in the 19th century simply because language became a more important unifier due to increased literacy. With more people reading newspapers, books, pamphlets and so on, which had become increasingly widely available since the spread of the printing press, it became possible for the first time to develop a broader cultural attachment beyond the local community. At the same time, standard languages solidified, breaking down old dialects and excluding those from completely different language groups.
The United States, a country which historically welcomed immigrants of varying nationality, has what can be seen as a pattern of discrimination against languages other than English. Prominent examples are the German language, which was nearly eradicated during World War I, and French and Italian, which have nearly disappeared from everyday life. Today Spanish is a second language across a large portion of the country. Some politicians, such as Pat Buchanan, have consciously opposed the rise of Spanish as a second American language, for fear that it would undermine unity in the American national character.
In the Arab World during the colonial period, the Turkish, French, Spanish and English languages were often imposed, although the intensity of imposition varied widely. When the colonial period ended (mostly after World War Two), a process of "Arabisation" began, reviving Arabic to unify the new states and to facilitate a broader Arab identity motivated by Pan-Arabism. Countries such as Algeria and Western Sahara underwent large-scale Arabisation, changing from French and Spanish respectively to Arabic.
However, within the Arab World, some nationalistic attempts were made to emancipate a domestic vernacular and treat classical Arabic as a formal foreign language, since it was often incomprehensible to the non-literate population of nominally Arab countries, which were politically - but not necessarily linguistically, culturally or ethnically - Arabized. These policies were first promoted in Egypt in the early 20th century by the Egyptian scholar and nationalist Ahmad Lutfi al-Sayyid, who called for the formalization of the Egyptian vernacular as the native language of the Egyptian people.
Similar attempts to emphasise minority languages completely independent of Arabic were made by the Nubians, speakers of Nobiin, who are split between Egypt and Sudan, and - relatively more successfully - by the Imazighen (commonly known as Berbers) in Morocco.
Types of nationalism
Nationalism may manifest itself as part of official state ideology or as a popular (non-state) movement and may be expressed along civic, ethnic, cultural, religious or ideological lines. These self-definitions of the nation are used to classify types of nationalism. However such categories are not mutually exclusive and many nationalist movements combine some or all of these elements to varying degrees. Nationalist movements can also be classified by other criteria, such as scale and location.
Some political theorists make the case that any distinction between forms of nationalism is false. In all forms of nationalism, the populations believe that they share some kind of common culture, and culture can never be wholly separated from ethnicity. The United States, for example, has "God" on its coinage and in its Pledge of Allegiance, and designates official holidays which are seen by some to promote cultural biases. The United States has an ethnic theory of being American (nativism), and, for a short period in the 20th Century, had a committee to investigate Un-American Activities.
Civic nationalism (or civil nationalism) is the form of nationalism in which the state derives political legitimacy from the active participation of its citizenry, from the degree to which it represents the "will of the people". It is often seen as originating with Jean-Jacques Rousseau and especially the social contract theories which take their name from his 1762 book The Social Contract. Civic nationalism lies within the traditions of rationalism and liberalism, but as a form of nationalism it is contrasted with ethnic nationalism. Membership of the civic nation is considered voluntary. Civic-national ideals influenced the development of representative democracy in countries such as the United States and France.
Ethnic nationalism, or ethnonationalism, defines the nation in terms of ethnicity, which always includes some element of descent from previous generations. It also includes ideas of a culture shared between members of the group and with their ancestors, and usually a shared language. Membership in the nation is hereditary. The state derives political legitimacy from its status as homeland of the ethnic group, and from its function to protect the national group and facilitate its cultural and social life, as a group. Ideas of ethnicity are very old, but modern ethnic nationalism was heavily influenced by Johann Gottfried von Herder, who promoted the concept of the Volk, and Johann Gottlieb Fichte. Ethnic nationalism is now the dominant form, and is often simply referred to as "nationalism". Note that the theorist Anthony Smith uses the term 'ethnic nationalism' for non-Western concepts of nationalism, as opposed to Western views of a nation defined by its geographical territory. (The term "ethnonationalism" is generally used only in reference to nationalists who espouse an explicit ideology along these lines; "ethnic nationalism" is the more generic term, and used for nationalists who hold these beliefs in an informal, instinctive, or unsystematic way. The pejorative form of both is "ethnocentric nationalism" or "tribal nationalism," though "tribal nationalism" can have a non-pejorative meaning when discussing African, Native American, or other nationalisms that openly assert a tribal identity.)
Romantic nationalism (also organic nationalism, identity nationalism) is the form of ethnic nationalism in which the state derives political legitimacy as a natural ("organic") consequence and expression of the nation, or race. It reflected the ideals of Romanticism and was opposed to Enlightenment rationalism. Romantic nationalism emphasised a historical ethnic culture which meets the Romantic Ideal; folklore developed as a Romantic nationalist concept. The Brothers Grimm were inspired by Herder's writings to create an idealised collection of tales which they labeled as ethnically German. Historian Jules Michelet exemplifies French romantic-nationalist history.
Cultural nationalism defines the nation by shared culture. Membership in the nation is neither entirely voluntary (you cannot instantly acquire a culture), nor hereditary (children of members may be considered foreigners if they grew up in another culture). Chinese nationalism is one example of cultural nationalism, partly because of the many national minorities in China. (The 'Chinese nationalists' include those on Taiwan who reject the mainland Chinese government but claim the mainland Chinese state).
Liberal nationalism is a kind of nationalism defended recently by political philosophers who believe that there can be a non-xenophobic form of nationalism compatible with liberal values of freedom, tolerance, equality, and individual rights (Tamir 1993; Kymlicka 1995; Miller 1995). Ernest Renan (1882) and John Stuart Mill (1861) are often thought to be early liberal nationalists. Liberal nationalists often defend the value of national identity by saying that individuals need a national identity in order to lead meaningful, autonomous lives (Kymlicka 1995; for criticism see Patten 1999) and that liberal democratic polities need national identity in order to function properly (Miller 1995; for criticism see Abizadeh 2002, 2004).
State nationalism is a variant on civic nationalism, very often combined with ethnic nationalism. It implies that the nation is a community of those who contribute to the maintenance and strength of the state, and that the individual exists to contribute to this goal. Italian fascism is the best example, epitomised in this slogan of Mussolini: "Tutto nello Stato, niente al di fuori dello Stato, nulla contro lo Stato." ("Everything in the State, nothing outside the State, nothing against the State"). It is no surprise that this conflicts with liberal ideals of individual liberty, and with liberal-democratic principles. The revolutionary (liberal) Jacobin creation of a unitary and centralist French state is often seen as the original version of state nationalism. Franquist Spain, and contemporary Turkish nationalism are later examples of state nationalism.
However, the term "state nationalism" is often used in conflicts between nationalisms, and especially where a secessionist movement confronts an established nation state. The secessionists speak of state nationalism to discredit the legitimacy of the larger state, since state nationalism is perceived as less authentic and less democratic. Flemish separatists speak of Belgian nationalism as a state nationalism. Basque separatists and Corsican separatists refer to Spain and France, respectively, in this way. There are no undisputed external criteria to assess which side is right, and the result is usually that the population is divided by conflicting appeals to its loyalty and patriotism.
Religious nationalism defines the nation in terms of shared religion. If the state derives political legitimacy from adherence to religious doctrines, then it is more of a theocracy than a nation-state. In practice, many ethnic and cultural nationalisms are in some ways religious in character. The religion is a marker of group identity, rather than the motivation for nationalist claims. Irish nationalism is associated with Roman Catholicism, and most Irish nationalist leaders of the last 100 years have been Catholic, but many of the early (18th century) nationalists were Protestant. Irish nationalism never centred on theological distinctions like transubstantiation, the status of the Virgin Mary, or the primacy of the Pope, but for some Protestants in Northern Ireland, these pre-Reformation doctrines are indeed part of Irish culture. Similarly, although Religious Zionism exists and influences many, the mainstream of Zionism is more secular in nature, and based on culture and Jewish ethnicity. Since the partition of British India, Indian nationalism has been associated with Hinduism. In modern India, a contemporary form of Hindu nationalism, or Hindutva, has been prominent among many followers of the Bharatiya Janata Party and Rashtriya Swayamsevak Sangh. Religious nationalism characterized by communal adherence to Eastern Orthodoxy and to national Orthodox Churches is still prevalent in many states of Eastern Europe and in Russia.
Pan-nationalism is usually an ethnic and cultural nationalism, but the 'nation' is itself a cluster of related ethnic groups and cultures, such as Turkic peoples. Occasionally pan-nationalism is applied to mono-ethnic nationalism, when the national group is dispersed over a wide area and several states - as in Pan-Germanism.
Diaspora nationalism (or, as Benedict Anderson terms it, "long-distance nationalism") generally refers to nationalist feeling among a diaspora, such as the Irish in the United States, the Lebanese in the Americas and Africa, and the Armenians in Europe and the United States. Anderson states that this sort of nationalism acts as a "phantom bedrock" for people who want to experience a national connection, but who do not actually want to leave their diaspora community. The essential difference between pan-nationalism and diaspora nationalism is that members of a diaspora, by definition, are no longer resident in their national or ethnic homeland. In the specific case of Zionism, the national movement advocates migration to the claimed national homeland, which would - if fully realised - end the diaspora.
Nationalism within a nation
With the establishment of a nation-state, the primary goal of any nationalist movement has been achieved. However, nationalism does not disappear but remains a political force within the nation, and inspires political parties and movements. The terms 'nationalist' and 'nationalist politician' are often used to describe these movements; 'nationalistic' would be more accurate. Nationalists in this sense typically campaign for:
- strengthening national unity, including campaigns for national salvation in times of crisis.
- emphasising the national identity and rejecting foreign influences, influenced by cultural conservatism and in extreme cases, xenophobia.
- limiting non-national populations on the national territory, especially by limiting immigration and, in extreme cases, by ethnic cleansing.
- annexing territory which is considered part of the national homeland. This is called irredentism, from the Italian movement Italia irredenta.
- economic nationalism, which is the promotion of the national interest in economic policy, especially through protectionism and in opposition to free trade policies.
The term 'nationalism' is also used by extension, or as a metaphor, to describe movements which promote a group identity of some kind. This use is especially common in the United States, and includes black nationalism and white nationalism in a cultural sense. They may overlap with nationalism in the classic sense, including black secessionist movements and pan-Africanism.
Nationalists obviously have a positive attitude toward their own nation, although this is not a definition of nationalism. The emotional appeal of nationalism is visible even in established and stable nation-states. The social psychology of nations includes national identity (the individual’s sense of belonging to a group), and national pride (self-association with the success of the group). National pride is related to the cultural influence of the nation, and its economic and political strength - although they may be exaggerated. However the most important factor is that the emotions are shared: nationalism in sport includes the shared disappointment if the national team loses.
The emotions can be purely negative: a shared sense of threat can unify the nation. However, dramatic events, such as defeat in war, can qualitatively affect national identity and attitudes to non-national groups. The defeat of Germany in World War I, and the perceived humiliation by the Treaty of Versailles, economic crisis and hyperinflation, created a climate for xenophobia, revanchism, and the rise of Nazism. The solid bourgeois patriotism of the pre-1914 years, with the Kaiser as national father-figure, was no longer relevant.
Nationalism and extremism
Although nationalism influences many aspects of life in stable nation-states, its presence is often invisible, since the nation-state is taken for granted. Michael Billig speaks of banal nationalism, the everyday, less visible forms of nationalism, which shape the minds of a nation's inhabitants on a day-to-day basis. Attention concentrates on extreme aspects, and on nationalism in unstable regions. 'Nationalist' may be used as a derogatory label for political parties, or parties may use it themselves as a euphemism for xenophobia, even if their policies are no more specifically nationalist than those of other parties in the same country. In Europe, some 'nationalist' anti-immigrant parties have a large electorate, and are represented in parliament. Smaller but highly visible groups, such as far-right skinheads, also self-identify as 'nationalist', although it may be a euphemism for neo-Nazis or white supremacists. Activists in other countries are often referred to as ultra-nationalists, with a clearly pejorative meaning. See also chauvinism and jingoism.
Nationalism is a component of other political ideologies, and in its extreme form, of fascism. However, it is not accurate to describe fascism simply as a more extreme form of nationalism, nor non-extreme nationalism as a lesser form of fascism. Fascism in the general sense, and the Italian original, were marked by a strong combination of ethnic nationalism and state nationalism, often combined with a form of economic and ethical socialism. That was certainly evident in Nazism. However, the geopolitical aspirations of Adolf Hitler are probably better described as imperialist, and Nazi Germany ultimately ruled over vast areas where there was no historic German presence. The Nazi state was so different from the typical European nation-state that it was sui generis (in a category of its own).
Nationalism does not necessarily imply a belief in the superiority of one nation over others, but in practice some (but not all) nationalists do think that way about their own nation. Occasionally they believe another nation can serve as an example for their own nation, see Anglophilia. There is a specific racial nationalism which can be considered an ethnic nationalism, but some form of racism can be found within almost all nationalist movements. It is usually directed at neighbouring nations and ethnic groups.
Racism was also a feature of colonialist ideologies, which were especially strong at the end of the 19th century. Strictly speaking, overseas colonies conflict with the principles of the nation-state, since they are not part of the historic homeland of the nation, and their inhabitants clearly do not belong to the same ethnic group, speak its language, or share its culture. In practice, nationalists sometimes combined a belief in self-determination in Europe, with colonisation in Africa or Asia.
Explicit biological race theory was influential from the end of the 19th century. Nationalist and fascist movements in the first half of the 20th century often appealed to these theories. The Nazi ideology was probably the most comprehensively racial ideology in history, and race influenced all aspects of policy in Nazi Germany.
Nevertheless, racism continues to be an influence on nationalism. Ethnic cleansing is often seen as both a nationalist and a racist phenomenon. It is part of nationalist logic that the state is reserved for one nation, but not all nation-states expel their minorities. The best known recent examples of ethnic cleansing are those during the wars of Yugoslav secession in the 1990s. Other examples seen as related to racism include the removal of ethnic Germans from the Volga Republic during the Second World War, and the Armenian Genocide in the Ottoman Empire in 1915.
Opposition and critique
Nationalism is an extremely assertive ideology, which makes far-reaching demands, including the disappearance of entire states. It is not surprising that it has attracted vehement opposition. Much of the early opposition to nationalism was related to its geopolitical ideal of a separate state for every nation. The classic nationalist movements of the 19th century rejected the very existence of the multi-ethnic empires in Europe. This resulted in severe repression by the (generally autocratic) governments of those empires. That tradition of secessionism, repression, and violence continues, although by now a large nation typically confronts a smaller nation. Even in that early stage, however, there was an ideological critique of nationalism. That has developed into several forms of anti-nationalism in the western world. The Islamic revival of the 20th century also produced an Islamic critique of the nation-state.
In the liberal political tradition there is widespread criticism of ‘nationalism’ as a dangerous force and a cause of conflict and war between nation-states. Liberals do not generally dispute the existence of nation-states. The liberal critique also emphasises individual freedom as opposed to national identity, which is by definition collective (see collectivism).
The pacifist critique of nationalism also concentrates on the violence of nationalist movements, the associated militarism, and on conflicts between nations inspired by jingoism or chauvinism. National symbols and patriotic assertiveness are in some countries discredited by their historical link with past wars, especially in Germany.
The anti-racist critique of nationalism concentrates on the attitudes to other nations, and especially on the doctrine that the nation-state exists for one national group, to the exclusion of others. It emphasises the chauvinism and xenophobia of many nationalisms.
Political movements of the left have often been suspicious of nationalism, again without necessarily seeking the disappearance of the existing nation-states. Marxism has been ambiguous towards the nation-state, and in the late 19th century some Marxist theorists rejected it completely. For some Marxists the world revolution implied a global state (or global absence of state); for others it meant that each nation-state had its own revolution. A significant event in this context was the failure of the social-democratic and socialist movements in Europe to mobilise a cross-border workers' opposition to World War I. At present most, but certainly not all, left-wing groups accept the nation-state, and see it as the political arena for their activities.
In the Western world the most comprehensive current ideological alternative to nationalism is cosmopolitanism. Ethical cosmopolitanism rejects one of the basic ethical principles of nationalism: that humans owe more duties to a fellow member of the nation, than to a non-member. It rejects such important nationalist values as national identity and national loyalty. However there is also a political cosmopolitanism, which has a geopolitical programme to match that of nationalism: it seeks some form of world state, with a world government. Very few people openly and explicitly support the establishment of a global state, but political cosmopolitanism has influenced the development of international criminal law, and the erosion of the status of national sovereignty. In turn, nationalists are deeply suspicious of cosmopolitan attitudes, which they equate with treason and betrayal.
While internationalism in the cosmopolitan context by definition implies cooperation among nations, and therefore the existence of nations, proletarian internationalism is different, in that it calls for the international working class to follow its brethren in other countries irrespective of the activities or pressures of the national government of a particular sector of that class. Meanwhile, anarchism rejects nation-states on the basis of self-determination of the majority social class, and thus rejects nationalism. Instead of nations, anarchists usually advocate the creation of cooperative societies based on free association and mutual aid without regard to ethnicity or race.
Islamism and Nationalism
Some radical Islamists reject the existence of any state on any basis other than the Islamic caliphate. For them, the unity of Islam means that there can be only one government on Earth, in the form usually titled the caliphate (khilafah). It is not a state in the usual Western sense, but all existing states are incompatible with this ideal, including the Islamic nation-states with Islam as the official religion. Only a minority of Islamists take this view, but insofar as Al-Qaeda has an ideology, it includes the goal of the caliphate. The Ba'ath Party and related groups have historically offered a secular Arab Nationalist opposition to Islamism in Arab countries.
As a universal religion, Islam is nominally opposed to any categorisation of people not based on one's beliefs. Islam promotes a strong feeling of community among all Muslims, who collectively constitute the Ummah. The word "Ummah" is often incorrectly translated into English as "Islamic nation" but it is not a nation in this sense and not a synonym of 'caliphate', although the idea is associated with the historic caliphates. There is no doubt that many Muslims do strongly identify with the religious community, probably more so than Christians. The confusion may arise because in other cases it does translate to the English word "nation", as in the Arabic name of the United Nations,الأمم المتحدة, Al Umam al Mutahidah. Shared observances such as the holy month of Ramadan and the Hajj (the pilgrimage to Mecca), contribute to this common Muslim identification. The Nation of Islam in the United States has been criticised by some Muslims, who find the comparison between Islam and an earthly nation offensive.
See also
- Cultural identity
- Ethnic autonomous regions
- List of active autonomist and secessionist movements
- List of historical autonomist and secessionist movements
- List of historical effects of nationalism
- List of nationalistic musical pieces
- List of nationalist conflicts and organizations
- List of prominent figures in nationalism
- Historiography and nationalism
- Identity politics
- National flag
- National liberation movements
- National mysticism
- National personification
- National romanticism
- National Socialism or Nazism
- Nationalism and sport
- The Stanford Encyclopedia of Philosophy entry
- Internet Modern History Sourcebook: Nationalism — Resources
- The Nationalism Project is the world's most comprehensive English-language website on nationalism.
- Nation and Nationalism (2 parts)
- Animated map of German Unification
- What is a Nation? - Nadesan Satyendra
- Religious Nationalism and Human Rights, David Little, United States Institute of Peace, also briefly discusses history of nationalism
- Alfred Verdross and Othmar Spann: German Romantic Nationalism, National Socialism and International Law, Anthony Carty, European Journal of International Law.
- Johann Gottfried Herder (1784): Materials for the Philosophy of the History of Mankind
- The Prohibition of Nationalism in Islam
- Notes on Nationalism Essay by George Orwell
- The Sabanci University: School of Languages Podcasts: Nationalism (Part 1) and Theories of Nationalism (Part 2)
- America's New Nationalism Book review of Anatol Lieven's book, America Right or Wrong: An Anatomy of American Nationalism, published in The American Conservative
- ↑ "Nationalism I would define as an ideology claiming that a given human population has a natural solidarity based on shared history and a common destiny. This collective identity as a historically constituted “people” crucially entails the right to constitute an independent or autonomous political community. The idea of nationalism takes form historically in tandem with the doctrine of popular sovereignty: that the ultimate source of authority lies in the people, not the ruler or government. The foregoing definition of nationalism will be found in any classic text with minor variations." M. Crawford Young, 2004. Revisiting nationalism and ethnicity in Africa. UCLA International Institute, James S. Coleman Memorial Lecture Series. Or: Handler, Richard. "Nationalism is an ideology about individuated being. It is an ideology concerned with boundedness, continuity, and homogeneity encompassing diversity. It is an ideology in which social reality, conceived in terms of nationhood, is endowed with the reality of natural things." Nationalism and the Politics of Culture in Quebec. New Directions in Antropological Writing: History, Poetics, Cultural Criticism, ed. George E.; Clifford Marcus, James. Madison: The University of Wisconsin Press, 1988. Passage online at . Specifically on the issue: M. Freeden, 1998. Is Nationalism a Distinct Ideology? Political Studies, Volume 46, Number 4, September 1998, pp. 748-765(18).
- ↑ Gellner, Ernest. 1983. Nations and Nationalism. Ithaca: Cornell University Press.
- ↑ Gellner, Ernest. 1983. Nations and Nationalism. Ithaca: Cornell University Press.
- ↑ Hechter, Michael. 2001. Containing Nationalism. ISBN 0-19-924751-X .
- ↑ Gellner, Ernest. 1983. Nations and Nationalism. Ithaca: Cornell University Press.
- ↑ Tilly, Charles. 1990. Coercion, Capital and European States AD 990-1992. Cambridge, MA: Basil Blackwell.
- Abizadeh, Arash. 2002. "Does Liberal Democracy Presuppose a Cultural Nation? Four Arguments." American Political Science Review 96(3): 495-509.
- Abizadeh, Arash. 2004. "Liberal Nationalist versus Postnational Social Integration." Nations and Nationalism 10(3): 231-250.
- Anderson, Benedict. 1991. Imagined Communities. ISBN 0-86091-329-5 .
- Anderson, Benedict. 1998. The Spectre of Comparison: Nationalism, Southeast Asia and the World. London: Verso. ISBN 1-85984-184-8 .
- Balakrishnan, Gopal, ed. 1996. Mapping the Nation. London: Verso. ISBN 1-85984-960-1 .
- Billig, Michael. 1995. Banal Nationalism. London: Sage. ISBN 0-8039-7525-2 .
- Blattberg, Charles. 2006. "Secular Nationhood? The Importance of Language in the Life of Nations." Nations and Nationalism 12(4): 597-612.
- Breuilly, John. 1994. Nationalism and the State. 2nd ed. Chicago: Chicago University Press. ISBN 0-226-07414-5 .
- Brubaker, Rogers. 1996. Nationalism Reframed: Nationhood and the National Question in the New Europe. Cambridge University Press. ISBN 0-521-57224-X .
- Calhoun, Craig. 1993. "Nationalism and Ethnicity." Annual Review of Sociology 19: 211-239.
- Canovan, Margaret. 1996. Nationhood and Political Theory. Cheltenham, UK: Edward Elgar. ISBN 1-85278-852-6 .
- Fitzgerald, Francis. 1972. Fire in the Lake: The Vietnamese and the Americans in Vietnam. Boston: Back Bay Books. ISBN 0-316-15919-0 .
- Freeden, Michael. 1998. "Is Nationalism a Distinct Ideology?" Political Studies 46: 748-765.
- Geary, Patrick J. 2002. The Myth of Nations: The Medieval Origins of Europe. Princeton University Press. ISBN 0-691-11481-1 .
- Gellner, Ernest. 1983. Nations and Nationalism. Ithaca: Cornell University Press. ISBN 0-8014-1662-0 .
- Greenfeld, Liah. 1992. Nationalism: Five Roads to Modernity Cambridge: Harvard University Press. ISBN 0-674-60319-2 .
- Hobsbawm, Eric J. 1992. Nations and Nationalism Since 1780: Programme, Myth, Reality. 2nd ed. Cambridge University Press. ISBN 0-521-43961-2 .
- Juergensmeyer, Mark. 1993. The New Cold War: Religious Nationalism Confronts the Secular State. Berkeley: University of California Press. ISBN 0-520-08651-1 .
- Kymlicka, Will. 1995. Multicultural Citizenship. Oxford University Press. ISBN 0-19-827949-3 .
- McKim, Robert, and Jeff McMahan. 1997. The Morality of Nationalism. Oxford University Press. ISBN 0-19-510391-2 .
- Mill, John Stuart. 1861. Considerations on Representative Government.
- Miller, David. 1995. On Nationality. Oxford University Press. ISBN 0-19-828047-5 .
- Patten, Alan. 1999. "The Autonomy Argument for Liberal Nationalism." Nations and Nationalism. 5(1): 1-17.
- Renan, Ernest. 1882. "Qu'est-ce qu'une nation?"
- Smith, Anthony D. 1986. The Ethnic Origins of Nations London: Basil Blackwell. pp 6–18. ISBN 0-631-15205-9 .
- Tamir, Yael. 1993. Liberal Nationalism. Princeton University Press. ISBN 0-691-07893-9 .
|This page uses Creative Commons Licensed content from Wikipedia (view authors).| | http://psychology.wikia.com/wiki/Nationalism | 13 |
38 |
Before South Africa's vast mineral wealth was discovered in the late nineteenth century, there was a general belief that southern Africa was almost devoid of the riches that had drawn Europeans to the rest of the continent. South Africa had no known gold deposits such as those the Portuguese had sought in West Africa in the fifteenth century. The region did not attract many slave traders, in part because local populations were sparsely settled. Valuable crops such as palm oil, rubber, and cocoa, which were found elsewhere on the continent, were absent. Although the local economy was rich in some areas--based on mixed farming and herding--only ivory was traded to any extent. Most local products were not sought for large-scale consumption in Europe.
Instead, Europeans first settled southern Africa to resupply their trading expeditions bound for other parts of the world (see Origins of Settlement, ch. 1). In 1652 the Dutch East India Company settled a few employees at a small fort at present-day Cape Town and ordered them to provide fresh food for the company's ships that rounded the Cape on their way to East Africa and Asia. This nucleus of European settlement quickly spread outward from the fort, first to trade with the local Khoikhoi hunting populations and later to seize their land for European farmers. Smallpox epidemics swept the area in the late eighteenth century, and Europeans who had come to rely on Khoikhoi labor enslaved many of the survivors of the epidemics.
By the early nineteenth century, when the Cape settlement came under British rule, 26,000 Dutch farmers had settled the area from Stellenbosch to the Great Fish River (see fig. 7). In 1820 the British government sponsored 5,000 more settlers who also established large cattle ranches, relying on African labor. But the European immigrants, like earlier arrivals in the area, engaged primarily in subsistence farming and produced little for export.
The discovery of diamonds in 1869 and of gold in 1886 revolutionized the economy. European investment flowed in; by the end of the nineteenth century, it was equivalent to all European investment in the rest of Africa. International banks and private lenders increased cash and credit available to local farmers, miners, and prospectors, and they, in turn, placed growing demands for land and labor on the local African populations. The Europeans resorted to violence to defend their economic interests, sometimes clashing with those who refused to relinquish their freedom or their land. Eventually, as the best land became scarce, groups of settlers clashed with one another, and rival Dutch and British populations fought for control over the land (see Industrialization and Imperialism, 1870-1910, ch. 1).
South Africa was drawn into the international economy through its exports, primarily diamonds and gold, and through its own increasing demand for a variety of agricultural imports. The cycle of economic growth was stimulated by the continual expansion of the mining industry, and with newfound wealth, consumer demand fueled higher levels of trade.
In the first half of the twentieth century, government economic policies were designed to meet local consumer demand and to reduce the nation's reliance on its mining sector by providing incentives for farming and for establishing manufacturing enterprises. But the government also saw its role as helping to defend white farmers and businessmen from African competition. In 1913 the Natives Land Act reserved most of the land for white ownership, forcing many black farmers to work as wage laborers on land they had previously owned. When the act was amended in 1936, black land ownership was restricted to 13 percent of the country, much of it heavily eroded.
White farmers received other privileges, such as loans from a government Land Bank (created in 1912), labor law protection, and crop subsidies. Marketing boards, which were established to stabilize production of many crops, paid more for produce from white farmers than for produce from black farmers. All farm activity suffered from the cyclical droughts that swept the subcontinent, but white farmers received greater government protection against economic losses.
During the 1920s, to encourage the fledgling manufacturing industries, the government established state corporations to provide inexpensive electricity and steel for industrial use, and it imposed import tariffs to protect local manufacturers. Again black entrepreneurs were discouraged, and new laws limited the rights of black workers, creating a large pool of low-cost industrial labor. By the end of the 1930s, the growing number of state-owned enterprises dominated the manufacturing sector, and black entrepreneurs continued to be pressured to remain outside the formal economy.
Manufacturing experienced new growth during and after World War II. Many of the conditions necessary for economic expansion had been present before the war--cities were growing, agriculture was being consolidated into large farms with greater emphasis on commercial production, and mine owners and shareholders had begun to diversify their investments into other sectors. As the war ended, local consumer demand rose to new highs, and with strong government support--and international competitors at bay--local agriculture and manufacturing began to expand.
The government increased its role in the economy, especially in manufacturing, during the 1950s and the 1960s. It also initiated large-scale programs to promote the commercial cultivation of corn and wheat. Government investments through the state-owned Industrial Development Corporation (IDC) helped to establish local textile and pulp and paper industries, as well as state corporations to produce fertilizers, chemicals, oil, and armaments. Both manufacturing and agricultural production expanded rapidly, and by 1970 manufacturing output exceeded that of mining.
Despite the appearance of self-sustaining economic growth during the postwar period, the country's economy continued to be susceptible to its historical limitations: recurrent drought, overreliance on gold exports, and the costs and consequences of the use of disenfranchised labor. While commercial agriculture developed into an important source of export revenue, production plummeted during two major droughts, from 1960 to 1966 and from 1981 to 1985. Gold continued to be the most important export and revenue earner; yet, as the price of gold fluctuated, especially during the 1980s, South Africa's exchange rate and ability to import goods suffered.
Manufacturing, in particular, was seriously affected by downswings in the price of gold, in part because it relied on imported machinery and capital. Some capital-intensive industries were able to expand, but only with massive foreign loans. As a result, many industries were insulated from the rising labor militancy, especially among black workers, which sparked disputes and slowed productivity in the late 1980s. As black labor increasingly voiced its frustrations, and foreign banks cut short their loans because of mounting instability, even capital-intensive industries felt the impact of apartheid on profits.
The economy was in recession from March 1989 through most of 1993, largely in response to worldwide economic conditions and the long-term effects of apartheid. It registered only negligible, or negative, growth in most quarters. High inflation had become chronic, driving up costs in all sectors. Living standards of the majority of black citizens either fell or remained dangerously low, while those of many whites also began to decline. Economic growth continued to depend on decent world prices for gold and on the availability of foreign loans. Even as some sectors of the economy began to recover in late 1993, intense violence and political uncertainty in the face of reform slowed overall growth through 1994.
Source: U.S. Library of Congress | http://countrystudies.us/south-africa/60.htm | 13 |
18 | According to Greek mythology, the creation of the olive tree was the result of a contest held between Athena and Poseidon.
Poseidon, the god of the sea, and Athena, the goddess of wisdom, held a contest in which the winner would become protector of a newly built city in Attica. The city would be named after the god who gave the citizens the most precious gift. Poseidon struck a rock with his trident, and as water began to rush out of the rock, out ran a horse. Next, Athena struck the rock with her spear and the first olive tree appeared at the gates of the Acropolis. Considering her gift more valuable, the residents of the new city declared Athena the victor and themselves Athenians for life. To this day, an olive tree still stands where this event took place. It was also believed that the Greek gods were born under the branches of the olive tree.
The first Olympics were held in 776 BC, and the olive tree played a crucial role in the event. The first Olympic torch was a burning olive branch, and the Olympic winners were awarded a crown woven from olive branches, which symbolized peace and a truce to any hostility. Olive oil was also awarded to the winners of the Panathenaic Games. The olive branch continues to be seen today as a symbol of peace and friendship.
Olives and olive oil have significance in Christianity as well. In the Book of Genesis, an olive branch was returned to Noah by a dove, signaling the end of the flood. Noah recognized this as a sign of peace to come. In the Book of Exodus, the Lord tells Moses how to make an anointing oil of spices and olive oil. In ancient Greece, olive oil was likewise used as an anointing oil during the consecration of kings and priests.
Evolution of Olive Oil
Greece was the first civilization to be involved in the full-scale cultivation of olives, and production of olive oil in Greece has spanned millennia. Scientific evidence suggests that olive trees grew wild on the island of Crete as early as 3500 BC and that systematic cultivation and exportation of the oil began as early as 2000 BC. The ancient Greek philosopher Aristotle further developed the cultivation of the olive into a science.
Olive oil was largely responsible for establishing Greece’s early commercial success. With the expansion of the Greek colonies, knowledge of olive tree cultivation and olive oil extraction techniques spread throughout the Mediterranean, from Italy to northern Africa. The olive became increasingly important to both Greek culture and the Greek economy. Homer even referred to olive oil as “liquid gold” in The Odyssey. Greece remains the world’s largest exporter of extra virgin olive oil; in fact, many Italian and Spanish producers add Greek extra virgin olive oil to their own products to enhance their color and flavor. Seventy percent (70%) of total Greek production is extra virgin olive oil of the highest quality.
Overall, the olive tree is a very resilient plant. It thrives in dry climates and can tolerate both droughts and high winds. Olive trees do, however, require very warm temperatures and cannot endure cold below 10° F. The olive therefore prospers in Greece – and the Mediterranean region generally – with its mild, rainy winters and long, hot, dry summers. The region has an abundance of sunshine, nurturing soil conditions, gentle sea breezes, a temperate climate and a year-round growing season. Greece’s proximity to the sea allows its olive trees to produce up to 20 times more fruit than those planted inland. This is why the Mediterranean is responsible for 98% of the world’s olive oil harvest. | http://www.terramedi.com/history.html | 13
15 | A History of the Zipper from Novelty to Ubiquity
It’s not an exaggeration to say it’s as common as blue jeans. In fact, it’s really as common as pants. And while the button fly still has its fans, the zipper is truly the fastener of choice among wearers of pants, dresses, shirts, jackets, and any other article of clothing that needs a good, reliable close. Zippers, not limited in use to clothing, are among the defining inventions that resulted from the frustrated exclamation, “There’s got to be a better way!” But better than what? The history of the zipper explains how the human mind once again rose to the occasion and conquered inconvenience—but, surprisingly, not until its manufacturers proved its convenience and trends shifted in its favor.
Phase One: Judson’s Early “Clasp Lockers” and “Slide Fasteners”
[Image: Fashionable nineteenth-century high-button shoes required a great deal of effort to put on.]
The history of the zipper began in the United States with nineteenth-century “high-button” boots that were practical for outdoor movement in dirty environments. High-button boots were also in fashion. Many of these boots exceeded 20 individual buttons, a challenge for even the nimblest of fingers, with or without a button hook to aid in the process (Petroski). Near the end of the nineteenth century, Chicago inventor Whitcomb L. Judson began patenting what he intended to be an easier solution for securing high-button boots in a way more fashionable and secure than laces—but intention was no substitute for usefulness.
It was 1891 when Judson took his “Clasp Locker or Unlocker” to the U.S. patent office. It was a basic assembly of hooks and attachments, in reality not a great improvement to the hook-and-eye system. It was based upon the premise of an automatic engagement of components by virtue of a “guide.” It was the guiding apparatus that would evolve into the component that would eventually render the invention a success. Unfortunately, the assembly was still terribly complex and likely not very functional. The same would be true of the next design that was submitted nearly two years later (Friedel).
And, amazingly, the same would be true for the next several versions over 20 years of trial-and-error. The designs were just too complex and didn’t work all that well. But that didn’t stop the designer and his promoters from shopping their invention around the town. Indeed, under the direction of the Harry L. Earle Manufacturing Company, the growing team managed to find investors in a product ingenious in principle, but uninspiring in practice. Investors were found east of Chicago, in Ohio, Pennsylvania and, eventually, New Jersey, but many didn’t stay around after the initial failures.
During the financial panic of 1893, an invention in the hands of a less steady investor would have been doomed to failure. But Colonel Lewis Walker of the Pennsylvania National Guard, a successful lawyer and businessman, apparently had high aspirations to be a successful investor and saw something in Whitcomb L. Judson—or, perhaps, he was a fan of high-button boots and yearned for an improved system of closure.
Whatever Walker saw in Judson was important because Walker was able to look past his failed investment in a previous Judson invention. With the confidence of Walker (and his continued capital investment), the mechanical engineer could develop his invention. The clasp locker had a future, but it wasn’t yet as a zipper. It was a fastener. And in 1894, Judson's team founded the Universal Fastener Company in Chicago (Friedel).
Unfortunately, the design raised many questions, and its shortcomings became apparent every time the team attempted to stretch the intent of the invention to applications other than shoes, such as to corsets. Judson invented solutions that seemed only to complicate the original design. Meanwhile, Judson went to work on an impractical machine to manufacture the impractical fastener, eventually enlisting the technical help of machinists from a Connecticut-based manufacturing company, including Peter Aronson. Through the last years of the nineteenth century, Walker stayed invested and Earle found new investors on the Eastern seaboard.
The Universal Fastener Company had evolved into the Fastener Manufacturing and Machine Company, which evolved into the Automatic Hook and Eye Company of Hoboken, New Jersey, where the operation moved in 1904. That year, Judson developed the “C-curity,” a new hook-and-eye design that was also easily attachable to garments via a cloth tape. The C-curity was actually rather marketable as an idea, and the team went to work advertising the invention as a secure fastener, in particular for women’s skirts. Contrary to the claims, however, the device was not particularly secure, was bulky in design, and was not easy to repair in the all-but-certain event of a failure (Petroski).
Phase Two: Sundback’s “Plako” and “Hookless” Fasteners
Earle and Judson dropped out of the picture at this point in the Hoboken company’s story, but Walker remained committed. He brought on family from Meadville, Pennsylvania, as well as a Swedish-born engineer named Otto Frederick Gideon Sundback. Sundback was trained in electrical engineering in Germany before emigrating to the United States, where he eventually landed a job at Westinghouse Electric Corporation (Friedel, Petroski).
Sundback interviewed with Aronson at the struggling Hoboken company and was convinced to leave Westinghouse to help fix the technical defects in the machinery and the fastener itself. Sundback, in short, took on the monumental task of rendering Judson’s invention useful. As with any question or dilemma, fresh eyes and a new perspective proved precisely what the bulky and ineffective fastener needed. Sundback first perfected the C-curity under the name of Plako, later patented in 1913. But even a perfect C-curity wasn’t ideal. It was, however, enough to lure new money. It took a combination of in-kind work and incredible dedication to keep investors, of both money and raw materials, committed to Sundback’s work.
As Sundback’s personal life collapsed around him, he apparently became more focused on the task of rendering these troubling fasteners a success. He scrapped the antiquated hook-and-eye and endeavored to build something almost completely new. What he created was a device that used spring clip jaws that “clamped around a beaded edge of the tape on the other side” (Sundback quoted in Petroski). Sundback submitted the patent for his “Hookless No. 1” in 1912, which would be approved five years later. During that same period, the early patents were expiring, but Walker saw great potential in Sundback’s improvements. He reorganized investments in the Hookless Fastener Company and moved the operation to a barn in Meadville. There, Gideon Sundback developed his Hookless No.2, (a.k.a. the “Hookless Hooker”) and, eventually, the zipper (Friedel).
Marketing the Early Zippers: Over 20 Years of Determination
[Image: Gideon Sundback’s “Hookless” fastener designs and patents helped launch the modern zipper industry.]
The brilliance of Sundback’s zipper design was twofold. First, it was machine-creatable, which meant mass production (though the new machine would be several months in the making). Second, it completely abandoned the hook in favor of what Sundback described as “nested, cup-shaped members” or “interlocking scoops” conjoined by the slide. But what makes the Hookless No. 2 so brilliant is its clever use of interlocking components that are totally dependent upon the slide sequence and do not function independently, like individual hooks and eyes. The device’s brilliance, however, was no match for a fickle market, which initially took little interest in the “novelty” from an aesthetic standpoint. After more than 20 years of development, the fully functional Hookless No. 2 would spend another 20 years in search of buyers (Friedel).
The first Hookless No. 2 fasteners were sold in 1914. Walker’s son managed to find a buyer for four fasteners. He sold all four for one dollar. McCreery’s department store in Pittsburgh saw great potential for the fasteners in women’s garments, small tweaks were made to strengthen the fastener, and the machine they developed produced over 1,500 flawless fasteners a day—but there was simply no demand. When raw materials were diverted during World War I, the company had a stock of fasteners from which they made money belts that were very popular among soldiers in the war. Additional military applications meant the company had access to the raw materials they needed but, more importantly, the “handy novelty” was exposed to thousands of military personnel (Friedel).
Additional campaigns were launched to boost the fastener’s image among manufacturers of everyday garments, but it still lacked the branding and trendiness it needed. The name “zipper” finally emerged thanks to the advertisers at B.F. Goodrich Company, whose Akron, Ohio, plant developed rubber galoshes originally called the “Mystik Boot.” The Hookless Fastener Company made a few more improvements at Goodrich’s request, and soon an order was placed that exceeded Hookless’ manufacturing capacity. In 1923, Goodrich adopted and trademarked the name “Zipper Boot” due to the more dramatic sense the word “zip” created. Meanwhile, the Meadville company stepped up production and chose the name “Talon” for its product (and in 1937 the company), believing the name epitomized their product’s “positive qualities” (Petroski). Goodrich purchased millions of the Hookless Company’s Talons, and the company would go on to produce 20 million in a single year by 1930. Though originally a trademark of Goodrich, zipper came into common usage as the generic term, relegating names like slide fastener and Talon to obscurity.
The vast array of applications for zippers in the 1930s established the device’s usefulness, adding to its consumer base from the military and Zipper Boot fans. By the middle of that decade, the zipper finally emerged from obscure novelty in clothing applications to become fully trendy, thanks to big-name designers who incorporated zippers into their new lines and aggressively marketed them. Both men’s and women’s clothing manufacturers embraced the device. A campaign to end “gap-osis,” the symptom that long plagued people using fasteners of all kinds, finally sealed the device’s success (Petroski). In 1939, Talon, Inc., was joined by many other zipper manufacturers in producing some 300 million zippers, propelling the device into ubiquity. A decade later, over a billion zippers were being produced (Friedel).
Today's Global Zipper Industry
At about the same time the Hookless Company was breaking through in America, versions of slide fasteners emerged in the international market, starting with Japan as early as 1927. Many of Talon’s original patents expired in the 1950s. At that time, the Japanese manufacturer YKK was well established, and founder Tadao Yoshida had personally developed most of the components needed to operate his facility independently of the supply chain. The virtually self-sustaining enterprise then added ideas from Talon and began to look for ways and places to grow.
Yoshida’s drive and attention to detail helped YKK dominate the Japanese zipper market by the 1960s. To get around extensive barriers to trade, Yoshida began setting up shop outside of Japan—one of the early Japanese companies to do so. The U.S. wing of the business would fall into the hands of his son, who had integrated into American society while earning an M.B.A. there. Yoshida’s business model was called the “virtuous cycle,” and was a principles-based mission that drove efficiency and quality while keeping prices low even as the company grew and integrated into more and more markets (Fulford).
[Image: Modern zippers are very versatile and come in a variety of sizes and materials.]
The story of YKK is important in the history of zippers because the company now dominates the global market, accounting for about half of all zippers produced in the world. But YKK has also innovated software and machinery that have perfected the manufacture of zippers of all kinds, building upon patents and continuously improving designs. YKK used DuPont nylon, for example, to manufacture a lightweight yet durable alternative to the metal zipper as early as the 1960s—and by the 1970s, Levi Strauss began substituting YKK zippers for button flies on its denim jeans. YKK also had the first zipper on the moon. YKK has proven to be an extremely adaptable company that deftly moved into markets and quickly found its niche, while the descendant of Talon, Inc., proved less malleable and gradually lost market share.
There are other major players in the multi-billion-dollar zipper market, many of which are supplied by one of hundreds of zipper manufacturers in China (Fulford). As long as zippers aren’t replaced by a better fastener, their function will remain indispensable in hundreds of industries. Meanwhile, companies like IDEAL Fastener Corporation have innovated zippers manufactured from recycled materials, saving raw materials and fuel in production (Apparel Magazine).
Perhaps even more noteworthy is YKK’s release of a new hook-and-eye line of zippers designed to prevent fastener separation in tight and tightly worn garments (Apparel Magazine). The zipper, especially non-metal versions, is in fact still susceptible to separation, especially during fastening or unfastening, though certainly not to the extent of the early C-curity. The hook-and-eye seems to reincorporate an element of the original patent to the benefit of the current model. The product is just one of thousands of colors, sizes, and variations of something that, at its core, is basically the same thing. The zipper is an ingenious but maddeningly difficult-to-perfect device that works better than most devices to prevent “gap-osis” and ultimately allows an immeasurable number of things to effortlessly open and close—making it, in many ways, the better way.
-- Posted March 10, 2011
Apparel Magazine. Vol 48.11 (July 2007): 46.
----. Vol. 49.2 (October 2007): 36.
Friedel, Robert. “The History of the Zipper.” American Heritage of Invention & Technology. Vol 10.1 (July 1994): 8-16.
Fulford, Benjamin. “Zipping Up the World.” Forbes Global. Vol. 6.22 (Nov. 24, 2003).
Petroski, Henry. 1992. The Evolution of Useful Things. New York: Alfred A. Knopf. | http://www.randomhistory.com/zipper-history.html | 13
24 | Procedural History
After the horrors of World War II, a broad consensus emerged at the worldwide level demanding that the individual human being be placed under the protection of the international community. As particularly the atrocities committed against specific ethnic groups had shown, national governments could gravely fail in their duty to ensure the life and the liberty of their citizens. Some had even become murderous institutions. Never again, it was resolved, should a holocaust occur. Accordingly, since the lesson learned was that protective mechanisms at the domestic level alone did not provide sufficiently stable safeguards, it became almost self-evident to entrust the planned new world organization with the role of guarantor of human rights on a universal scale. At the San Francisco Conference in 1945, some Latin American countries requested that a full code of human rights be included in the Charter of the United Nations itself. Since such an initiative required careful preparation, their motions could not be successful at that stage. Nonetheless, human rights were embraced as a matter of principle. The Charter contains references to human rights in the Preamble, among the purposes of the Organization (Article 1) and in several other provisions (Articles 13, 55, 62 and 68). Immediately after the actual setting up of the institutional machinery provided for by the Charter, the new Commission on Human Rights began its work on the creation of an International Bill of Rights. In a first step, the Universal Declaration of Human Rights was drafted, which the General Assembly adopted on 10 December 1948.
In order to make human rights an instrument effectively shaping the lives of individuals and nations, more than just a political proclamation was needed. Hence, from the very outset there was general agreement to the effect that the substance of the Universal Declaration should be translated into the hard legal form of an international treaty. The General Assembly reaffirmed the necessity of complementing, as had already been done in the Universal Declaration, traditional civil and political rights with economic, social and cultural rights, since both classes of rights were “interconnected and interdependent” (see section E of resolution 421 (V) of 4 December 1950). The only question was whether, following the concept of the unity of all human rights, the new conventional rights should be encompassed in one international instrument or whether, on account of their different specificities, they should be arranged according to those specificities. Western nations in particular claimed that the implementation process could not be identical, economic and social rights partaking more of the nature of goals to be attained, whereas civil and political rights had to be respected strictly and without any reservations. It is this latter view that eventually prevailed. By resolution 543 (VI) of 5 February 1952, the General Assembly directed the Commission on Human Rights to prepare, instead of just one Covenant, two draft treaties: a Covenant setting forth civil and political rights and a parallel Covenant providing for economic, social and cultural rights. The Commission completed its work in 1954. Yet it took many years before eventually the political climate was ripe for the adoption of these two ambitious texts. While both the Western and the Socialist States were still not fully convinced of their usefulness, it was eventually pressure brought to bear upon them from Third World countries which prompted them to approve the outcome of the protracted negotiating process. Accordingly, on 16 December 1966, the two Covenants were adopted by the General Assembly unanimously, without any abstentions (resolution 2200 A (XXI)). Since that time, the two comprehensive human rights instruments of the United Nations have sailed on different courses. However, contrary to many pessimistic expectations, they have mostly been ratified simultaneously. The difference in the circle of States parties is small. As of June 2008, the International Covenant on Civil and Political Rights (ICCPR) comprises 161 States parties, whereas the International Covenant on Economic, Social and Cultural Rights (ICESCR) holds second place with 158 ratifications. The Russian Federation, for instance, is a party to both Covenants, while the United States has left aside the ICESCR, and China, on the other hand, has not found it convenient to ratify the ICCPR. In general, however, the lacunae include only a small part of the world population. True universality is within reach.
The ICCPR comprises all of the traditional human rights as they are known from historic documents such as the First Ten Amendments to the Constitution of the United States (1789/1791) and the French Déclaration des droits de l’homme et du citoyen (1789). However, in perfect harmony with its sister instrument, Part I starts out with the right of self-determination, which is considered to be the foundational stone of all human rights (article 1). Part II (articles 2 to 5) contains a number of general principles that apply across the board, among them in particular the prohibition on discrimination. Part III enunciates an extended list of rights, the first of which is the right to life (article 6). Article 7 establishes a ban on torture or other cruel, inhuman or degrading treatment or punishment, and article 8 declares slavery and forced or compulsory labour unlawful. Well-balanced guarantees of habeas corpus are set forth in article 9, and article 10 establishes the complementary proviso that all persons deprived of their liberty shall be treated with humanity.
Freedom of movement, including the freedom to leave any country, has found its regulation in article 12. Aliens, who do not enjoy a stable right of sojourn, must as a minimum be granted due process in case their expulsion is envisaged (article 13). Fair trial, the scope ratione materiae of which is confined to criminal prosecution and to civil suits at law, has its seat in articles 14 and 15. Privacy, the family, the home or the correspondence of a person are placed under the protection of article 17, and the social activities of human beings enjoy the safeguards of article 18 (freedom of thought, conscience and religion), article 19 (freedom of expression), article 21 (freedom of assembly), and article 22 (freedom of association). Going beyond the classic dimension of protection against interference by State authorities, articles 23 and 24 proclaim that the family and the child are entitled to protection by society and the State.
Article 25 establishes the right for everyone to take part in the running of the public affairs of his/her country. With this provision, the ICCPR makes clear that State authorities require some sort of democratic legitimacy. Finally, article 27 recognizes an individual right of members of ethnic, religious or linguistic minorities to engage in the cultural activities characteristic of such minorities. No collective political rights are provided for; minorities as such have not been endowed with any rights of political autonomy.
Article 26 establishes a clause on equality and non-discrimination which seemingly stands in contrast to article 2, paragraph 1, the introductory non-discrimination clause, which is ancillary in nature, being applicable only in conjunction with one of the other substantive rights. The Human Rights Committee, the organ entrusted with monitoring compliance by States with their obligations under the ICCPR, has interpreted article 26 as setting forth a general ban on discrimination, without any regard for the field of life concerned. To date, this extension of the scope ratione materiae of article 26 remains contested.
The Human Rights Committee is the principal actor at the international level mandated to enforce the rights enunciated in the ICCPR. The instruments put at its disposal for that purpose are of limited scope, however. States are required to submit at regular intervals reports which are carefully scrutinized; at the end of that process, the Committee summarizes its assessment of the prevailing human rights situation by noting in particular its concerns in open and straightforward language without any diplomatic inhibitions. Such concluding observations are not legally binding. Similarly, the final views which the Committee delivers after having examined an individual communication under the [First] Optional Protocol to the ICCPR lack any binding legal force. Of course, States are expected to live up in good faith to the views addressed to them by the Committee. If they simply brushed such recommendations aside, the whole procedure would make no sense. In addition, by formulating “general comments”, the Committee has opened up a new window of activity. Through such “general comments”, it explains the scope and meaning of the provisions of the ICCPR and clarifies general issues as they arise in the process of implementation.
It is at the national level that the ICCPR has exerted its greatest impact. When today anywhere in the world a national constitution is framed, the ICCPR serves as the natural yardstick for the drafting of a section on fundamental rights. In most countries, the ICCPR has been made part and parcel of the national legal order although there is no general rule of international law that would enjoin States to embrace a specific method of implementation. Thus, the United States has made a declaration according to which the ICCPR is not self-executing within its domestic legal system. In some countries, administrative authorities and the courts are specifically enjoined to follow the applicable international guarantees when interpreting the national constitution (e.g., article 10, paragraph 2 of the Spanish Constitution). In other countries, the ICCPR has even been given the legal force of a provision of constitutional or quasi-constitutional rank (e.g., article 15, paragraph 4, of the Constitution of the Russian Federation). These legal techniques are not automatically successful, since, as a rule, national judges are not very familiar with the guarantees laid down in international human rights instruments and are more often than not reluctant to accord them precedence over the applicable national laws and regulations.
The Commission on Human Rights held its first session from 27 January to 10 February 1947, at which a drafting committee, consisting of seven Member States, was established. At its first session, held from 9 to 25 June 1947, the Drafting Committee of the Commission decided to prepare two documents: a preliminary draft of a declaration or manifesto setting forth general principles of human rights; and a draft outline of a convention on those matters which the Committee felt could be formulated as binding obligations. The report of the Drafting Committee (E/CN.4/21) was submitted to the Commission on Human Rights for consideration at its second session, held in December 1947. The Commission endorsed the recommendation by the Drafting Committee to draft two separate documents, as many Governments were prepared to accept a declaration if it were to precede and not replace a convention. Efforts were consequently concentrated on a draft declaration, leading to the adoption of the Universal Declaration of Human Rights by resolution 217 A (III) of 10 December 1948 (see Universal Declaration of Human Rights). In the same resolution, the General Assembly requested the Economic and Social Council to ask the Commission on Human Rights to continue to give priority in its work to the preparation of a draft covenant on human rights and draft measures on its implementation (resolution 217 E (III)). The Economic and Social Council transmitted this resolution of the General Assembly to the Commission on Human Rights by resolution 191 (VIII) of 9 February 1949.
A first draft convention was prepared by the Commission on Human Rights during its sixth session, in 1950, and a report was submitted to the Economic and Social Council for consideration at its eleventh session (E/1618 and Corr. 1 and Add. 1). In addition, the Council had before it two reports which the Commission had requested the Secretary-General to prepare (E/1721 and Corr. 1, and E/1732), dealing with federal and colonial clauses, and the possibility for the proposed Human Rights Committee to seek advisory opinions from the International Court of Justice. In resolution 303 I (XI) of 9 August 1950, the Council concluded that further progress could not be made until policy decisions were taken by the General Assembly on certain matters, including the general adequacy of the first draft and the articles relating to its implementation, the desirability of including articles on economic, social and cultural rights, and the desirability of including special articles relating to federal states and to Non-Self-Governing and Trust Territories. The General Assembly considered these topics at its fifth session, and adopted resolution 421 (V) of 4 December 1950 deciding that the covenant should include economic, social and cultural rights as well as a clause with regard to its territorial application, and that the draft articles proposed by the Commission on Human Rights should be revised and additional rights be added. Furthermore, the Commission was asked to consider provisions relating to federal states and petitions with regard to alleged violations of the Covenant. The resolution was transmitted to the Commission on Human Rights by the Economic and Social Council by resolution 349 (XII) of 23 February 1951.
At its seventh session, in 1951, the Commission on Human Rights, assisted by representatives of the International Labour Organization (ILO), the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the World Health Organization (WHO), completed its draft on economic, social and cultural rights (see report of the Commission, E/1681, and Corr. 1, Corr. 2 (French only), Corr. 3 and Corr. 4 (Spanish only)). The report was submitted to the Economic and Social Council, which discussed the draft articles and measures for its implementation at its session of the same year. In view of the discussions, by resolution 384 (XIII) of 29 August 1951, the Council invited the General Assembly to reconsider its decision to include in one covenant provisions on both economic, social and cultural rights, and civil and political rights. At the sixth session of the General Assembly, in 1951, the question of the Draft Covenant on Human Rights and measures of implementation was discussed at forty meetings of the Third (Social, Humanitarian and Cultural) Committee and subsequently at two plenary meetings of the General Assembly. After continued discussions in plenary, the General Assembly requested, in resolution 543 (VI) of 5 February 1952, contrary to its previous decision, that the Commission on Human Rights draft two separate Covenants, to be submitted simultaneously for consideration by the General Assembly. As further requested by the General Assembly in resolution 549 (VI) of 5 February 1952, the Economic and Social Council held a special session on 24 March 1952, and transmitted the above recommendations to the Commission on Human Rights.
The Commission on Human Rights continued its work on the preparation of the two draft covenants at its eighth and ninth sessions, but was not able, in the available time, to carry out the instructions of the General Assembly. At its tenth session, in 1954, it however completed the two draft covenants (see the report of the Commission, E/2573). Without dealing with the substance of the drafts, the Economic and Social Council adopted resolution 545 B (XVIII) of 29 July 1954, transmitting the report of the Commission to the General Assembly. At the ninth session of the General Assembly, in 1954, the item was again allocated to the Third Committee which began a first reading of the draft covenants.
Preparation of the draft covenants continued in the Third Committee during the tenth to the seventeenth sessions of the General Assembly, from 1955 to 1962. In 1963, the final substantive articles were adopted (see the report of the Third Committee to the General Assembly, A/5655). On 12 December 1963, the General Assembly invited all Governments to consider the text of the articles adopted by the Third Committee and decided to make a special effort to adopt the entire texts, including the final clauses, of the draft covenants at its nineteenth session, the following year (resolution 1960 (XVIII)). Owing to the special circumstances prevailing then, work on the covenants could not however be continued in 1964 and, at the twentieth session, in 1965, the General Assembly decided to defer the topic due to its heavy agenda (resolution 2080 (XX) of 20 December 1965). At the twenty-first session, in 1966, the Third Committee completed the drafting of the covenants, adopting final clauses and articles relating to measures of implementation. The two draft Covenants and the Optional Protocol to the Covenant on Civil and Political Rights were submitted to the General Assembly (see the report of the Third Committee to the General Assembly, A/6564). After discussions in plenary, the General Assembly unanimously adopted the recommendation of the Third Committee in resolution 2200 A (XXI) of 16 December 1966, the three instruments being annexed thereto. In separate votes, the General Assembly adopted the Covenant on Economic, Social and Cultural Rights, with a vote of 105 to 0, the Covenant on Civil and Political Rights, with a vote of 106 to 0, and the Optional Protocol to the Covenant on Civil and Political Rights, with a vote of 66 to 2, with 38 abstentions.
The three instruments were opened for signature on 16 December 1966. In accordance with their respective provisions, the International Covenant on Economic, Social and Cultural Rights entered into force on 3 January 1976 and the International Covenant on Civil and Political Rights, together with its Optional Protocol, entered into force on 23 March 1976.
Selected preparatory documents
(in chronological order)
Report of the first session of the Drafting Committee of the Commission on Human Rights, held from 9 to 25 June 1947 (E/CN.4/21, 1947)
Report of the third session of the Commission on Human Rights, held from 24 May to 18 June 1948 (E/800, 28 June 1948)
Economic and Social Council resolution 151 (VII) of 26 August 1948 (Report of the third session of the Commission on Human Rights)
General Assembly resolution 217 (III) of 10 December 1948
Economic and Social Council resolution 191 (VIII) of 9 February 1949 (General Assembly resolution 217 (III) regarding human rights)
Report of the Commission on Human Rights, Sixth session, held from 27 March to 19 May 1950 (E/1681 and Corr. 1 and Add. 1, 29 May 1950)
Economic and Social Council resolution 303 I (XI) of 9 August 1950 (Report of the Commission on Human Rights (sixth session))
General Assembly, Verbatim records of meeting No. 317 held on 4 December 1950
General Assembly resolution 421 E (V) of 4 December 1950 (Draft International Covenant on Human Rights and measures of implementation: future work of the Commission on Human Rights)
Economic and Social Council resolution 349 (XII) of 23 February 1951 (Draft International Covenant on Human Rights and measures of implementation: future work of the Commission on Human Rights)
Report of the seventh session of the Commission on Human Rights, held from 16 April to 19 May 1951 (E/1992, 24 May 1951, and Corr. 1, Corr. 2 (in French only), Corr. 3 (in Spanish only) and Corr. 4, 1951)
Economic and Social Council resolution 384 (XIII) of 29 August 1951 (Report of the Commission on Human Rights (seventh session))
General Assembly, Verbatim records of meetings Nos. 374 and 375 of 4 and 5 February 1952
General Assembly resolution 543 (VI) of 5 February 1952 (Preparation of two Draft International Covenants on Human Rights)
Report of the Commission on Human Rights, tenth session (E/2573, 1954).
Economic and Social Council resolution 545 B (XVIII) of 29 July 1954 (Report of the Commission on Human Rights (tenth session) – Draft International Covenants on Human Rights)
Report of the Third Committee to the General Assembly (A/2808 and Corr. 1, 29 November 1954 and 3 December 1954)
Report of the Third Committee to the General Assembly (A/3077, 8 December 1955)
Report of the Third Committee to the General Assembly (A/3525, 9 February 1957)
Report of the Third Committee to the General Assembly (A/3764 and Add. 1, 5 December 1957 and 10 December 1957)
Report of the Third Committee to the General Assembly (A/4045, 9 December 1958)
Report of the Third Committee to the General Assembly (A/4299 and Corr. 1, 3 December 1959 and 8 December 1959)
Report of the Third Committee to the General Assembly (A/4625, 8 December 1960)
Report of the Third Committee to the General Assembly (A/5000, 5 December 1961)
Report of the Third Committee to the General Assembly (A/5365, 17 December 1962)
Note by Secretary-General, “Text of articles adopted by the Third Committee at the tenth to seventeenth sessions of the General Assembly” (A/C.3./L.1062, 1963)
Third Committee of the General Assembly, Summary records of meetings Nos. 1256 to 1269, held from 7 November to 19 November 1963, and Nos. 1273 to 1279, held from 27 November to 4 December 1963 (A/C.3/PV.1256-1269 and 1273-1279).
Report of the Third Committee to the General Assembly (A/5655, 28 October 1963)
General Assembly resolution 1960 (XVIII) of 12 December 1963 (Draft International Covenants on Human Rights)
General Assembly resolution 2080 (XX) of 20 December 1965 (Draft International Covenants on Human Rights)
Note by the Secretary-General, “Draft International Covenants on Human Rights” (A/6342, 19 July 1966)
Third Committee of the General Assembly, Summary records of meetings Nos. 1395 to 1441, 1446, and 1451 to 1456, held respectively from 14 October to 1 December 1966, on 2 December 1966 and from 7 December to 12 December 1966 (A/C.3/PV.1395-1441, 1446 and 1451-1456)
General Assembly, Verbatim records of meetings Nos. 1495 and 1496 held on 16 December 1966 (A/PV.1495 and 1496)
General Assembly resolution 2200 A (XXI) of 16 December 1966 (International Covenant on Economic, Social and Cultural Rights, International Covenant on Civil and Political Rights and Optional Protocol to the International Covenant on Civil and Political Rights)
The Covenant entered into force on 23 March 1976. For the current participation status of the Covenant, as well as information and relevant texts of related treaty actions, such as reservations, declarations, objections, denunciations and notifications, see:
The Status of Multilateral Treaties Deposited with the Secretary-General | http://untreaty.un.org/cod/avl/ha/iccpr/iccpr.html | 13 |
19 | Frankfurt am Main
Frankfurt am Main [ˈfraŋkfʊrt] is the largest city in the German state of Hesse and the fifth largest city of Germany. Situated on the Main river, it has a population of approximately 650,000 (but about 5 million in its metropolitan area).
Among English speakers it is commonly known as simply "Frankfurt", though Germans more frequently call it by its full name in order to distinguish it from the other Frankfurt in Germany, Frankfurt an der Oder. It was once called Frankfort-on-the-Main in English: a direct translation of Frankfurt am Main.
The three pillars of Frankfurt's economy are finance, trade fairs, and transport; it is the transport hub of Germany. Frankfurt has been Germany's financial capital for centuries, and is the richest city in Europe. The Frankfurt Stock Exchange is Germany's largest, the site of 85% of Germany's turnover in stocks, and one of the world's biggest. Frankfurt is also the home of the European Central Bank and the German Bundesbank, as well as a large number of big commercial banks, notably Deutsche Bank, Dresdner Bank and Commerzbank. Many large trade fairs also call Frankfurt home, notably Messe Frankfurt.
Frankfurt is also the German capital that never was. The city has been at the political center of Germany for centuries. From 855 to 1792, Frankfurt was the electoral city for the German emperors of the Holy Roman Empire of the German Nation, which ultimately failed to establish a central government over all of Germany. In 1848/49 the city on the River Main was the revolutionary capital and the seat of the first democratically elected German parliament. The revolution failed, and it was not until after 1945 that Frankfurt lost the status of West German capital, by only one vote, to Bonn (near Cologne), which was chosen mainly because of its proximity to the home of the first West German chancellor, Konrad Adenauer. After the Berlin Wall came down in 1989, Berlin was the obvious choice for the German capital.
During World War II, Frankfurt was heavily bombed, though the city recovered relatively quickly.
Frankfurt is often called "Bankfurt" or "Mainhattan" (derived from the local Main River). It is one of only three European cities that have a significant number of high-rise skyscrapers. With 9 skyscrapers taller than 150 meters (492 feet) in 2004, Frankfurt is second behind Paris (La Défense and Montparnasse: 12 skyscrapers taller than 150 meters, not counting the Eiffel Tower), but ahead of London (Canary Wharf and City: 8 skyscrapers taller than 150 meters). The city of Frankfurt contains the tallest skyscraper in Europe, the Commerzbank Tower. In Germany, only Frankfurt and Düsseldorf have high-rise skyscrapers.
Frankfurt is renowned for its finance industry, on a par with London and Paris, as well as for its central location in Western Europe, surrounded by the most populous areas of Europe. It has a first-class infrastructure and a major international airport: Frankfurt International Airport. It is the second- or third-busiest in Europe, depending on the data used. Passenger traffic in 2003 was 48,351,664, second in Europe behind London Heathrow Airport (63,487,136) and almost in a tie with Paris Charles de Gaulle Airport (48,220,436).
Frankfurt is also home to many cultural and educational institutions, among them Johann Wolfgang Goethe-Universität, its university, and many museums, most of them lined up along the Main river on the Museumsufer (museum embankment), and a large botanical garden, the Palmengarten. The best known museums are the Städelsches Kunstinstitut und Städtische Galerie, called Städel, and the Naturmuseum Senckenberg. The Museum für moderne Kunst (Museum of Modern Art) and Schirn Kunsthalle (Schirn Art Gallery) are also notable.
In the area of the Römer, Roman settlements were established, probably in the first century, and some artefacts from them remain. The city district Bonames also has a name probably dating back to Roman times; Bonames is thought to be derived from bona me(n)sa. Nida (Heddernheim) was a Roman civitas capital.
The name of Frankfurt on the Main river is derived from the Franconofurt of the Germanic tribe of the Franks; Furt (cf. English ford) denotes a low point of passage across a stream or river. Alemanni and Franks lived there, and by 794 Charlemagne presided over an imperial assembly and church synod at which Franconofurd (-furt, -vurd) is first mentioned. However, since frank is also an old German word for frei (meaning "free"), Frankfurt was a "free ford," an opportunity to cross the river Main without paying a bridge toll.
In the Holy Roman Empire, Frankfurt was one of the most important cities. From 855 the German kings and emperors were elected in Frankfurt and then crowned in Aachen. From 1562 the kings and emperors were also crowned in Frankfurt, Maximilian II being the first. This tradition ended in 1792, when Franz II was elected; he was crowned, deliberately, on Bastille Day, 14 July, the anniversary of the storming of the Bastille. The elections and coronations took place in the cathedral of St. Bartholomäus, known as the Kaiserdom (Emperor's Cathedral), or in its predecessors.
The Frankfurter Messe (Frankfurt trade fair) was first mentioned in 1150. In 1240, Emperor Friedrich II granted an Imperial privilege to its visitors, meaning they would be protected by the Empire. Book trade fairs have been held in Frankfurt since 1478; the Frankfurter Buchmesse is still the most important in Germany and, some might say, the world.
Frankfurt managed to remain neutral during the Thirty Years' War, but it suffered nonetheless from the plague that was brought to the city by refugees. After the end of the war Frankfurt regained its wealth.
In the Napoleonic Wars Frankfurt was occupied or cannonaded several times by French troops. The Grand Duchy of Frankfurt, a vassal state of France, was a short episode that lasted only from 1810 to 1813. The Congress of Vienna dissolved this entity, and Frankfurt entered the newly founded German Confederation as a free city. It became the seat of the Bundestag, the parliament of the German Confederation.
After the ill-fated revolution of 1848, Frankfurt was home to the first German National Assembly (Nationalversammlung), which resided in St. Paul's Church (Paulskirche) (see German Confederation for details) and was opened on May 18th, 1848. The institution failed in 1849 when the Prussian king declared that he would not accept "a crown from the gutter". During its year of existence, the assembly had developed a common constitution for a unified Germany with the Prussian king as its monarch.
Frankfurt lost its independence in 1866. The Austro-Prussian War was over, and Prussia annexed several smaller states, among them the city of Frankfurt. The Prussian administration incorporated Frankfurt into its province of Hesse-Nassau. The formerly independent towns of Bornheim and Bockenheim were incorporated in 1890.
In 1914, the citizens of Frankfurt founded the University of Frankfurt, later called Johann Wolfgang Goethe University. It is the only university in Germany founded by civic initiative, and it is today one of Germany's largest.
During the Nazi era the synagogues of Frankfurt were destroyed. The city of Frankfurt was severely bombed in World War II. After the end of the war Frankfurt became a part of the newly founded state of Hessen. Frankfurt was the original choice for the capital of West Germany; the city even went as far as constructing a new parliament building, which was never used for its intended purpose and now serves as a TV studio. In the end, Konrad Adenauer (the first post-war chancellor) preferred the tiny city of Bonn, for the most part because it was close to his home, but also for another reason: many other prominent politicians opposed the choice of Frankfurt out of concern that Frankfurt, one of the largest German cities and a former center of the old German-dominated Holy Roman Empire, would be accepted as a "permanent" capital of the federal republic, thereby weakening the West German population's support for reunification and for the eventual return of the capital to Berlin.
Frankfurt is twinned with
- Toronto, Canada
- Birmingham, England
- Budapest, Hungary
- Kraków, Poland
- Granada, Nicaragua
- Guangzhou, China
- Lyon, France
- Milan, Italy
- Prague, Czech Republic
People born in Frankfurt
- Charles the Bald
- Johann Wolfgang von Goethe
- Bettina von Arnim
- Otto Hahn
- Erich Fromm
- Theodor Adorno
- Anne Frank
- Martin Lawrence
- Mayer Amschel Rothschild
- Adolf Schreyer
The Cathedral of Saint Bartholomeus (Dom Sankt Bartholomäus) is a Gothic construction built in the 14th and 15th centuries on the foundations of an earlier church from the Merovingian period. It is the main church of Frankfurt. From 1356 on, the kings of the Holy Roman Empire were elected in this church, and from 1562 to 1792 the emperors were crowned here.
Since the 18th century Saint Bartholomeus has been called "the cathedral" by the people although it has never been a bishop's seat. In 1867 the cathedral was destroyed by a fire and rebuilt in its present style. The height of the cathedral is 95 m.
The name of the town hall, the Römer, means "Roman". It is in fact nine houses, which were acquired by the city council in 1405 from a wealthy merchant family. The middle house became the town hall and was later connected with the neighbouring buildings. On the upper floor is the Kaisersaal ("Emperor's Hall"), where the newly crowned emperors held their banquets.
The Römer was destroyed in World War II, but rebuilt afterwards.
Saint Paul's Church
Saint Paul's Church (Paulskirche) is a rather new church. It was established in 1789 as a Protestant church but not finished until 1833. Its importance has its root in the Frankfurt Parliament, which was held here in 1848/49 in order to develop a constitution for a united Germany. The institution failed because the monarchs of Prussia and Austria did not want to lose power; in 1849 Prussian troops ended the democratic experiment by force of arms, and the parliament was dissolved. Afterwards the building was used for church services again.
Saint Paul's was completely destroyed in World War II but quickly rebuilt. Today it is no longer used as a sacral building but for exhibitions. In 1963 US president John F. Kennedy made a speech in Saint Paul's during his visit to Frankfurt.
The famous opera house of Frankfurt (Alte Oper) was built in 1880 by the architect Richard Lucae. It was one of the major opera houses of Germany until its destruction in World War II. It was not until 1981 that the old opera house was eventually rebuilt and reopened. Today it is a concert hall, while operas are performed in a building from 1951.
- Henninger Turm (silo with observation deck, unfortunately closed for visitors)
- Europaturm (unfortunately closed for visitors)
- www.frankfurtm.de - photos and pictures
- Architecture of Frankfurt
- City's own website
- City Panoramas - Panoramic Views of Frankfurt's Highlights
- University of Frankfurt
- University of Applied Sciences
- European Central Bank
- German Stock Exchange (Deutsche Börse)
- Frankfurt Trade Fair (Messe Frankfurt)
- Frankfurt Book Fair (Frankfurter Buchmesse)
- Städelsches Kunstinstitut und Städtische Galerie
- Naturmuseum Senckenberg
- Schirn Kunsthalle
- Frankfurt travel guide at Wikitravel
- www.frankfurt360.de 360°-Panoramas - In- and Outdoorpanoramas at Day- and Nighttime, in fullscreen and with sound
- Roman history (in German)
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Frankfurt_am_Main | 13
17 | A rise in sea level, necessarily, begins slowly. Massive ice sheets must be softened and weakened before rapid disintegration and melting occurs and the sea level rises. It may require as much as a few centuries to produce most of the long-term response. But the inertia of ice sheets is not our ally against the effects of global warming. The Earth’s history reveals cases in which sea level, once ice sheets began to collapse, rose one meter (1.1 yards) every twenty years for centuries. That would be a calamity for hundreds of cities around the world, most of them far larger than New Orleans. Devastation from a rising sea occurs as the result of local storms which can be expected to cause repeated retreats from transitory shorelines and rebuilding away from them.
Satellite images and other data have revealed the initial response of ice sheets to global warming. The area on Greenland in which summer melting of ice took place increased more than 50 percent during the last twenty-five years. Meltwater descends through crevasses to the ice sheet base, where it provides lubrication that increases the movement of the ice sheet and the discharge of giant icebergs into the ocean. The volume of icebergs from Greenland has doubled in the last ten years. Seismic stations reveal a shocking increase in “icequakes” on Greenland, caused by a portion of an ice sheet lurching forward and grinding to a halt. The annual number of these icequakes registering 4.6 or greater on the Richter scale doubled from 7 in 1993 to 14 in the late 1990s; it doubled again by 2005. A satellite that measures minute changes in Earth’s gravitational field found the mass of Greenland to have decreased by 50 cubic miles of ice in 2005. West Antarctica’s mass decreased by a similar amount.
The effect of this loss of ice on the global sea level is small, so far, but it is accelerating. The likelihood of the sudden collapse of ice sheets increases as global warming continues. For example, wet ice is darker, absorbing more sunlight, which increases the melting rate of the ice. Also, the warming ocean melts the offshore accumulations of ice—”ice shelves”—that form a barrier between the ice sheets and the ocean. As the ice shelves melt, more icebergs are discharged from the ice sheets into the ocean. And as the ice sheet discharges more icebergs into the ocean and loses mass, its surface sinks to a lower level where the temperature is warmer, causing it to melt faster.
The business-as-usual scenario, with five degrees Fahrenheit global warming and ten degrees Fahrenheit at the ice sheets, certainly would cause the disintegration of ice sheets. The only question is when the collapse of these sheets would begin. The business-as-usual scenario, which could lead to an eventual sea level rise of eighty feet, with twenty feet or more per century, could produce global chaos, leaving fewer resources with which to mitigate the change in climate. The alternative scenario, with global warming under two degrees Fahrenheit, still produces a significant rise in the sea level, but its slower rate, probably less than a few feet per century, would allow time to develop strategies that would adapt to, and mitigate, the rise in the sea level.
Both the Department of Energy and some fossil fuel companies insist that continued growth of fossil fuel use and of CO2 emissions are facts that cannot be altered to any great extent. Their prophecies become self-fulfilling, with the help of government subsidies and intensive efforts by special interest groups to prevent the public from becoming well-informed.
In reality, an alternative scenario is possible and makes sense for other reasons, especially in the US, which has become an importer of energy, hemorrhaging wealth to foreign nations in order to pay for it. In response to oil shortages and price rises in the 1970s, the US slowed its growth in energy use mainly by requiring an increase from thirteen to twenty-four miles per gallon in the standard of auto efficiency. Economic growth was decoupled from growth in the use of fossil fuels and the gains in efficiency were felt worldwide. Global growth of CO2 emissions slowed from more than 4 percent each year to between 1 and 2 percent growth each year.
This slower growth rate in fossil fuel use was maintained despite lower energy prices. The US is still only half as efficient in its use of energy as Western Europe, i.e., the US emits twice as much CO2 to produce a unit of GNP, partly because Europe encourages efficiency by fossil fuel taxes. China and India, using older technologies, are less energy-efficient than the US and have a higher rate of CO2 emissions.
Available technologies would allow great improvement of energy efficiency, even in Europe. Economists agree that the potential could be achieved most effectively by a tax on carbon emissions, although strong political leadership would be needed to persuasively explain the case for such a tax to the public. The tax could be revenue-neutral, i.e., it could also provide for tax credits or tax decreases for the public generally, leaving government revenue unchanged; and it should be introduced gradually. The consumer who makes a special effort to save energy could gain, benefiting from the tax credit or decrease while buying less fuel; the well-to-do consumer who insisted on having three Hummers would pay for his own excesses.
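As a rough illustration of the revenue-neutral idea described above, here is a minimal sketch in Python; the tax rate and the household emission figures are hypothetical illustration values, not taken from the article.

```python
# Minimal sketch of a revenue-neutral carbon tax with a uniform rebate.
# The tax rate and household emission figures are hypothetical
# illustration values, NOT taken from the article.

TAX_PER_TON = 20.0  # assumed tax, in dollars per ton of CO2

# assumed annual household emissions, in tons of CO2
households = {"frugal": 5.0, "average": 10.0, "three_hummers": 25.0}

revenue = sum(tons * TAX_PER_TON for tons in households.values())
rebate = revenue / len(households)   # the same rebate goes to everyone

for name, tons in households.items():
    net = rebate - tons * TAX_PER_TON   # positive means the household gains
    print(f"{name}: pays {tons * TAX_PER_TON:.0f}, gets back {rebate:.0f}, net {net:+.0f}")
```

Total rebates equal total tax collected, so government revenue is unchanged; the household that uses less fuel comes out ahead, while the heaviest user pays the most, which is the incentive the article describes.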
Achieving a decline in CO2 emissions faces two major obstacles: the huge number of vehicles that are inefficient in their use of fuel and the continuing CO2 emissions from power plants. Auto makers oppose efficiency standards and prominently advertise their heaviest and most powerful vehicles, which yield the greatest short-term profits. Coal companies want new coal-fired power plants to be built soon, thus assuring long-term profits.
The California legislature has passed a regulation requiring a 30 percent reduction in automobile greenhouse gas emissions by 2016. If adopted nationwide, this regulation would save more than $150 billion annually in oil imports. In thirty-five years it would save seven times the amount of oil estimated by the US Geological Survey to exist in the Arctic National Wildlife Refuge. By fighting it in court, automakers and the Bush administration have stymied the California law, which many other states stand ready to adopt. Further reductions of emissions would be possible by means of technologies now being developed. For example, new hybrid cars with larger batteries and the ability to plug into wall outlets will soon be available; and cars whose bodies are made of a lightweight carbon composite would get better mileage.
If power plants are to achieve the goals of the alternative scenario, construction of new coal-fired power plants should be delayed until the technology needed to capture and sequester their CO2 emissions is available. In the interim, new electricity requirements should be met by the use of renewable energies such as wind power as well as by nuclear power and other sources that do not produce CO2. Much could be done to limit emissions by improving the standards of fuel efficiency in buildings, lighting, and appliances. Such improvements are entirely possible, but strong leadership would be required to bring them about. The most effective action, as I have indicated, would be a slowly increasing carbon tax, which could be revenue-neutral or would cover a portion of the costs of mitigating climate change.
The alternative scenario I have been referring to has been designed to be consistent with the Kyoto Protocol, i.e., with a world in which emissions from developed countries would decrease slowly early in this century and the developing countries would get help to adopt “clean” energy technologies that would limit the growth of their emissions. Delays in that approach—especially US refusal both to participate in Kyoto and to improve vehicle and power plant efficiencies—and the rapid growth in the use of dirty technologies have resulted in an increase of 2 percent per year in global CO2 emissions during the past ten years. If such growth continues for another decade, emissions in 2015 will be 35 percent greater than they were in 2000, making it impractical to achieve results close to the alternative scenario.
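The 35 percent figure quoted above follows from simple compound growth. A minimal sketch in Python, assuming a constant 2 percent annual growth rate over the fifteen years from 2000 to 2015:

```python
# Compound growth of CO2 emissions at an assumed constant 2 percent
# per year from 2000 to 2015 (15 years).
growth_rate = 0.02
years = 2015 - 2000

factor = (1 + growth_rate) ** years
print(f"Multiplier after {years} years: {factor:.3f}")        # about 1.346
print(f"Increase over the 2000 level: {(factor - 1):.0%}")    # about 35%
```

The same arithmetic shows why even one or two percentage points of annual growth compound into a large difference over a decade or two.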
The situation is critical, because of the clear difference between the two scenarios I have projected. Further global warming can be kept within limits (under two degrees Fahrenheit) only by means of simultaneous slowdown of CO2 emissions and absolute reduction of the principal non-CO2 agents of global warming, particularly emissions of methane gas. Such methane emissions are not only the second-largest human contribution to climate change but also the main cause of an increase in ozone—the third-largest human-produced greenhouse gas—in the troposphere, the lowest part of the Earth's atmosphere. Practical methods can be used to reduce human sources of methane emission, for example, at coal mines, landfills, and waste management facilities. However, the question is whether these reductions will be overwhelmed by the release of frozen methane hydrates—the ice-like crystals in which large deposits of methane are trapped—if permafrost melts.
If both the slowdown in CO2 emissions and reductions in non-CO2 emissions called for by the alternative scenario are achieved, release of “frozen methane” should be moderate, judging from prior interglacial periods that were warmer than today by one or two degrees Fahrenheit. But if CO2 emissions are not limited and further warming reaches three or four degrees Fahrenheit, all bets are off. Indeed, there is evidence that greater warming could release substantial amounts of methane in the Arctic. Much of the ten-degree Fahrenheit global warming that caused mass extinctions, such as the one at the Paleocene-Eocene boundary, appears to have been caused by release of “frozen methane.” Those releases of methane may have taken place over centuries or millennia, but release of even a significant fraction of the methane during this century could accelerate global warming, preventing achievement of the alternative scenario and possibly causing ice sheet disintegration and further long-term methane release that are out of our control.
Any responsible assessment of environmental impact must conclude that further global warming exceeding two degrees Fahrenheit will be dangerous. Yet because of the global warming already bound to take place as a result of the continuing long-term effects of greenhouse gases and the energy systems now in use, the two-degree Fahrenheit limit will be exceeded unless a change in direction can begin during the current decade. Unless this fact is widely communicated, and decision-makers are responsive, it will soon be impossible to avoid climate change with far-ranging undesirable consequences. We have reached a critical tipping point.
The public can act as our planet’s keeper, as has been shown in the past. The first human-made atmospheric crisis emerged in 1974, when the chemists Sherry Rowland and Mario Molina reported that chlorofluorocarbons (CFCs) might destroy the stratospheric ozone layer that protects animal and plant life from the sun’s harmful ultraviolet rays. How narrowly we escaped disaster was not realized until years later. | http://www.nybooks.com/articles/archives/2006/jul/13/the-threat-to-the-planet/?page=2 | 13 |
19 | - Understand benefits and costs to consider when making a spending decision
- Understand the basic concept of credit
- Use addition, subtraction, multiplication, and division (with whole numbers, fractions, decimals and/or percents, mixed numbers) to solve real-world math problems that will help them understand the process of making spending decisions and the concept of credit
Chart paper or chalkboard, marker or chalk, Student Magazine Pages 6-7: Spending Smarts (PDF), pencils
Discuss with students: Who has ever made a decision about spending money? What did you consider when you made your decision? What do you think it means when a clerk at a store says, "Would you like to pay with cash or credit?" What is a credit card? Write student answers on the board or on chart paper.
- Remind students that making a purchase requires making a decision. In this activity, they'll explore some of the steps to go through when deciding how to spend money. Students will find out the differences between paying with cash and paying with credit.
- First, explain that when people pay with cash, they pay up front for an item. If they pay with a credit card, they are taking out a loan and promising to pay it back.
- Call on volunteers for each step of the following decision-making process:
- Student 1-Identify the Problem: "I would like to buy a book for $10. I'm not sure whether I should pay with cash or credit."
- Student 2-List the Choices: "You can pay $10 cash and take the book today. Or you can pay with credit, paying nothing today and take the book, but promise to pay the money back in 30 days."
- Student 3-What Is Your Goal?: "The goal is to buy the book today for $10."
- Student 4-Evaluate Your Choices: "Think about the pros and cons. With cash, you can pay now, take the book now, and owe nothing more, ever. With credit, you don't have to pay any money now and can take the book today. In 30 days, you'll have the option to pay back the $10, or pay as little as $5 and owe the rest later, with an additional fee called interest."
- Do the Math: Lead students through the math behind these two choices, comparing the total cost of paying with cash versus paying with credit (see the worked sketch after this list).
- Student 5-Make a Decision: "Based on the pros and cons of the two choices, what decision should be made? Remember the original goal."
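The following is a minimal sketch, in Python, of the cash-versus-credit math behind the steps above. The $10 price, the 30-day deadline, and the $5 minimum payment come from the dialogue; the interest charge on an unpaid balance is a hypothetical illustration value, since the lesson's original comparison chart is not reproduced here.

```python
# Minimal sketch of the cash-versus-credit comparison in this lesson.
# BOOK_PRICE and MIN_PAYMENT come from the dialogue above; INTEREST_CHARGE
# is a hypothetical flat fee, NOT a figure taken from the lesson's chart.

BOOK_PRICE = 10.00       # price of the book, in dollars
MIN_PAYMENT = 5.00       # smallest payment allowed after 30 days
INTEREST_CHARGE = 1.50   # assumed fee added to any unpaid balance

def cash_total(price):
    """Pay the full price up front; nothing more is ever owed."""
    return price

def credit_total(price, paid_in_30_days):
    """Pay nothing today and take the book; settle up in 30 days.

    Paying the full price by the deadline costs the same as cash.
    Paying only part leaves a balance that still must be paid,
    plus the interest charge.
    """
    if paid_in_30_days >= price:
        return price
    remaining = price - paid_in_30_days
    return paid_in_30_days + remaining + INTEREST_CHARGE

print("Cash today:           ", cash_total(BOOK_PRICE))                 # 10.0
print("Credit, paid in full: ", credit_total(BOOK_PRICE, BOOK_PRICE))   # 10.0
print("Credit, minimum paid: ", credit_total(BOOK_PRICE, MIN_PAYMENT))  # 11.5
```

The comparison makes the lesson's point concrete: credit costs the same as cash only if the balance is repaid in full by the deadline; otherwise the interest charge makes the book more expensive.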
Discuss with students: What are the pros and cons of paying with cash versus credit? What are possible additional costs when one pays with credit? Why does someone have to be careful when he or she has a credit card?
Have students scan newspapers, magazines, and flyers for advertisements that include prices, such as ads for computers, video games, or grocery-store items. Examine ads in class and comparison shop.
Language Arts Extension:
Expository: Writing Situation: When you make a decision about what to buy, you need to think about pros and cons. Directions for Writing: Think about a time when you purchased something. What were the pros and cons you thought about when you made your decision?
Narrative: Writing Situation: People often have an option of paying for something with credit. Directions for Writing: Think about the pros and cons of cash versus credit. Now write a story about something real or imaginary that involves using credit cards.
Have students read Student Magazine Pages 6-7: Spending Smarts (PDF), in class or at home, and complete the problems on page 7.
Student Magazine Answers (PDF) | http://www.scholastic.com/browse/lessonplan.jsp?id=372 | 13 |
30 | Early history traces the development of the Somali people to an Arab sultanate, which was founded in the seventh century A.D. by Koreishite immigrants from Yemen. During the 15th and 16th centuries, Portuguese traders landed in present Somali territory and ruled several coastal towns. The sultan of Oman and Zanzibar subsequently took control of these towns and their surrounding territory.
Somalia's modern history began in the late 19th century, when various European powers began to trade and establish themselves in the area. The British East India Company's desire for unrestricted harbor facilities led to the conclusion of treaties with the sultan of Tajura as early as 1840. It was not until 1886, however, that the British gained control over northern Somalia through treaties with various Somali chiefs who were guaranteed British protection. British objectives centered on safeguarding trade links to the east and securing local sources of food and provisions for its coaling station in Aden. The boundary between Ethiopia and British Somaliland was established in 1897 through treaty negotiations between the British and King Menelik.
During the first two decades of the 20th century, British rule was challenged through persistent attacks led by Mohamed Abdullah. A long series of intermittent engagements and truces ended in 1920 when British warplanes bombed Abdullah's stronghold at Taleex. Although Abdullah was defeated as much by rival Somali factions as by British forces, he was lauded as a popular hero and stands as a major figure of national identity to some Somalis.
In 1885, Italy obtained commercial advantages in the area from the sultan of Zanzibar and in 1889 concluded agreements with the sultans of Obbia and Aluula, who placed their territories under Italy's protection. Between 1897 and 1908, Italy made agreements with the Ethiopians and the British that marked out the boundaries of Italian Somaliland. The Italian Government assumed direct administration, giving the territory colonial status.
Italian occupation gradually extended inland. In 1924, the Jubaland Province of Kenya, including the town and port of Kismayo, was ceded to Italy by the United Kingdom. The subjugation and occupation of the independent sultanates of Obbia and Mijertein, begun in 1925, were completed in 1927. In the late 1920s, Italian and Somali influence expanded into the Ogaden region of eastern Ethiopia. Continuing incursions climaxed in 1935 when Italian forces launched an offensive that led to the capture of Addis Ababa and the Italian annexation of Ethiopia in 1936.
Following Italy's declaration of war on the United Kingdom in June 1940, Italian troops overran British Somaliland and drove out the British garrison. In 1941, British forces began operations against the Italian East African Empire and quickly brought the greater part of Italian Somaliland under British control. From 1941 to 1950, while Somalia was under British military administration, transition toward self-government was begun through the establishment of local courts, planning committees, and the Protectorate Advisory Council. In 1948 Britain turned the Ogaden and neighboring Somali territories over to Ethiopia.
In Article 23 of the 1947 peace treaty, Italy renounced all rights and titles to Italian Somaliland. In accordance with treaty stipulations, on September 15, 1948, the Four Powers referred the question of disposal of former Italian colonies to the UN General Assembly. On November 21, 1949, the General Assembly adopted a resolution recommending that Italian Somaliland be placed under an international trusteeship system for 10 years, with Italy as the administering authority, followed by independence for Italian Somaliland. In 1959, at the request of the Somali Government, the UN General Assembly advanced the date of independence from December 2 to July 1, 1960.
Meanwhile, rapid progress toward self-government was being made in British Somaliland. Elections for the Legislative Assembly were held in February 1960, and one of the first acts of the new legislature was to request that the United Kingdom grant the area independence so that it could be united with Italian Somaliland when the latter became independent. The protectorate became independent on June 26, 1960; five days later, on July 1, it joined Italian Somaliland to form the Somali Republic.
In June 1961, Somalia adopted its first national constitution in a countrywide referendum, which provided for a democratic state with a parliamentary form of government based on European models. During the early post-independence period, political parties reflected clan loyalties, which contributed to a basic split between the regional interests of the former British-controlled north and the Italian-controlled south. There also was substantial conflict between pro-Arab, pan-Somali militants intent on national unification with the Somali-inhabited territories in Ethiopia and Kenya and the "modernists," who wished to give priority to economic and social development and improving relations with other African countries. Gradually, the Somali Youth League, formed under British auspices in 1943, assumed a dominant position and succeeded in cutting across regional and clan loyalties. Under the leadership of Mohamed Ibrahim Egal, prime minister from 1967 to 1969, Somalia greatly improved its relations with Kenya and Ethiopia. The process of party-based constitutional democracy came to an abrupt end, however, on October 21, 1969, when the army and police, led by Maj. Gen. Mohamed Siad Barre, seized power in a bloodless coup.
Following the coup, executive and legislative power was vested in the 20-member Supreme Revolutionary Council (SRC), headed by Maj. Gen. Siad Barre as president. The SRC pursued a course of "scientific socialism" that reflected both ideological and economic dependence on the Soviet Union. The government instituted a national security service, centralized control over information, and initiated a number of grassroots development projects. Perhaps the most impressive success was a crash program that introduced an orthography for the Somali language and brought literacy to a substantial percentage of the population.
The SRC became increasingly radical in foreign affairs, and in 1974, Somalia and the Soviet Union concluded a treaty of friendship and cooperation. As early as 1972, tensions began increasing along the Somali-Ethiopian border; these tensions heightened after the accession to power in Ethiopia in 1973 of the Mengistu Hailemariam regime, which turned increasingly toward the Soviet Union. In the mid-1970s, the Western Somali Liberation Front (WSLF) began guerrilla operations in the Ogaden region of Ethiopia. Fighting increased, and in July 1977, the Somali National Army (SNA) crossed into the Ogaden to support the insurgents. The SNA moved quickly toward Harer, Jijiga, and Dire Dawa, the principal cities of the region. Subsequently, the Soviet Union, Somalia's most important source of arms, embargoed weapons shipments to Somalia. The Soviets switched their full support to Ethiopia, with massive infusions of Soviet arms and 10,000-15,000 Cuban troops. In November 1977, President Siad Barre expelled all Soviet advisers and abrogated the friendship agreement with the U.S.S.R. In March 1978, Somali forces retreated into Somalia; however, the WSLF continues to carry out sporadic but greatly reduced guerrilla activity in the Ogaden. Such activities also were subsequently undertaken by another dissident group, the Ogaden National Liberation Front (ONLF).
Following the 1977 Ogaden war, President Barre looked to the West for international support, military equipment, and economic aid. The United States and other Western countries traditionally were reluctant to provide arms because of the Somali Government's support for insurgency in Ethiopia. In 1978, the United States reopened the U.S. Agency for International Development mission in Somalia. Two years later, an agreement was concluded that gave U.S. forces access to military facilities in Somalia. In the summer of 1982, Ethiopian forces invaded Somalia along the central border, and the United States provided two emergency airlifts to help Somalia defend its territorial integrity.
From 1982 to 1990 the United States viewed Somalia as a partner in defense. Somali officers of the National Armed Forces were trained in U.S. military schools in civilian as well as military subjects. Within Somalia, Siad Barre's regime confronted insurgencies in the northeast and northwest, whose aim was to overthrow his government. By 1988, Siad Barre was openly at war with sectors of his nation. At the President's order, aircraft from the Somali National Air Force bombed the cities in the northwest province, attacking civilian as well as insurgent targets. The warfare in the northwest sped up the decay already evident elsewhere in the republic. Economic crisis, brought on by the cost of anti-insurgency activities, caused further hardship as Siad Barre and his cronies looted the national treasury.
By 1990, the insurgency in the northwest was largely successful. The army dissolved into competing armed groups loyal to former commanders or to clan-tribal leaders. The economy was in shambles, and hundreds of thousands of Somalis fled their homes. In 1991, Siad Barre and forces loyal to him fled the capital; he later died in exile in Nigeria. In the same year, Somaliland declared itself independent of the rest of Somalia, with its capital in Hargeisa. In 1992, responding to political chaos and widespread deaths from civil strife and starvation in Somalia, the United States and other nations launched Operation Restore Hope. Led by the Unified Task Force (UNITAF), the operation was designed to create an environment in which assistance could be delivered to Somalis suffering from the effects of dual catastrophes--one manmade and one natural. UNITAF was followed by the United Nations Operation in Somalia (UNOSOM). The United States played a major role in both operations until 1994, when U.S. forces withdrew.
The prevailing chaos in much of Somalia after 1991 contributed to growing influence by various Islamic groups, including al-Tabliq, al-Islah (supported by Saudi Arabia), and Al-Ittihad Al-Islami (Islamic Unity). These groups, which are among the main non-clan-based forces in Somalia, share the goal of establishing an Islamic state. They differ in their approach; in particular, Al-Ittihad supports the use of violence to achieve that goal and has claimed responsibility for terrorist acts. In the mid-1990s, Al-Ittihad came to dominate territory in Puntland as well as central Somalia near Gedo. It was forcibly expelled from these localities by Puntland forces as well as Ethiopian attacks in the Gedo region. Since that time, Al-Ittihad has adopted a longer term strategy based on integration into local communities and establishment of Islamic schools, courts, and relief centers.
After the attack on the United States of September 11, 2001, Somalia gained greater international attention as a possible base for terrorism--a concern that became the primary element in U.S. policy toward Somalia. The United States and other members of the anti-terrorism coalition examined a variety of short- and long-term measures designed to cope with the threat of terrorism in and emanating from Somalia. Economic sanctions were applied to Al-Ittihad and to the Al-Barakaat group of companies, based in Dubai, which conducted currency exchanges and remittances transfers in Somalia. The United Nations also took an increased interest in Somalia, including proposals for an increased UN presence and for strengthening a 1992 arms embargo.
Somalia has been without a central government since its last president, dictator Mohamed Siad Barre, fled the country in 1991. Subsequent fighting among rival faction leaders resulted in the killing, displacement, and starvation of thousands of persons and led the U.N. to intervene militarily in 1992. Following the U.N. intervention, periodic attempts at national reconciliation were made, but they did not succeed. In September 1999, during a speech before the U.N. General Assembly, Djiboutian President Ismail Omar Guelleh announced an initiative to facilitate reconciliation under the auspices of the Inter-Governmental Authority for Development (IGAD). In March 2000, formal reconciliation efforts began with a series of small focus group meetings of various elements of Somali society in Djibouti. In May 2000, in Arta, Djibouti, delegates representing all clans and a wide spectrum of Somali society were selected to participate in a "Conference for National Peace and Reconciliation in Somalia." More than 900 delegates, including representatives of nongovernmental organizations (NGO's), attended the Conference. The Conference adopted a charter for a 3-year Transitional National Government (TNG) and selected a 245-member Transitional National Assembly (TNA), which included 24 members of Somali minority groups and 25 women. In August 2000, the Assembly elected Abdiqassim Salad Hassan as Transitional President. Ali Khalif Gallayr was named Prime Minister in October 2000, and he appointed the 25-member Cabinet. Administrations in the northwest (Somaliland) and northeast ("Puntland") areas of the country do not recognize the results of the Djibouti Conference, nor do several Mogadishu-based factional leaders. In October the TNA passed a vote of no confidence in the TNG, and Gallayr was dismissed as Prime Minister. In November Abdiqassim appointed Hassan Abshir Farah as the new Prime Minister. Serious interclan fighting continued to occur in parts of the country, notably in the central regions of Hiran and Middle Shabelle, the southern regions of Gedo and Lower Shabelle, and in the Middle Juba and Lower Juba regions. No group controls more than a fraction of the country's territory. There is no national judicial system.
Leaders in the northeast proclaimed the formation of the Puntland state in 1998. Puntland's leader, Abdullahi Yusuf, publicly announced that he did not plan to break away from the remainder of the country, but the Puntland Administration did not participate in the Djibouti Conference or recognize the TNG that emerged from it. In July Yusuf announced his refusal to abide by the Constitution and step down. This led to a confrontation with Chief Justice Yusuf Haji Nur, who claimed interim presidential powers pending elections. In November traditional elders elected Jama Ali Jama as the new Puntland President. Yusuf refused to accept the elders' decision, and in December he seized by force the town of Garowe, reportedly with Ethiopian support. Jama fled to Bosasso. Both Yusuf and Jama continued to claim the presidency, and there were continued efforts to resolve the conflict at year's end 2001. A ban on political parties in Puntland remained in place.
In the northwest, the "Republic of Somaliland" continued to proclaim its independence within the borders of former British Somaliland. Somaliland has sought international recognition since 1991 without success. Somaliland's government includes a parliament, a functioning civil court system, executive departments organized as ministries, six regional governors, and municipal authorities in major towns. During the year 2001, 97 percent of voters in a referendum voted for independence for Somaliland and for a political party system. Presidential and parliamentary elections were scheduled to be held in February 2002; however, President Egal requested and Parliament granted a 1-year extension for the next elections.
Somalia has a long history of internal instability; in some instances, clan feuds have lasted more than a century. Most of this turmoil has been associated with disagreements and factionalism between and among the major branches of the Somali lineage system, which includes pastoral nomads such as the Dir, Daarood, Isaaq, and Hawiye, and agriculturalists such as the Digil and Rahanwayn. In more recent times, these historical animosities have expressed themselves through the emergence of clan-based dissident and insurgent movements. Most of these groups grew to oppose Siad Barre's regime because the president refused to make political reforms, unleashed a reign of terror against the country's citizenry, and concentrated power in the hands of his Mareehaan subclan (the Mareehaan belonged to the Daarood clan). After Siad Barre fled Mogadishu in January 1991, the Somali nation state collapsed, largely along warring clan lines.
In the aftermath of the 1969 coup, the central government acquired control of all legislative, administrative, and judicial functions. The only legally permitted party was the Somali Revolutionary Socialist Party (SRSP). In April 1970, Siad Barre authorized the creation of National Security Courts (NSCs), which shortly thereafter tried approximately sixty people: leaders of the previous government, businessmen, lawyers, and senior military personnel who had failed to support the coup. In September 1970, the Supreme Revolutionary Council (SRC) proclaimed that any person who harmed the nation's unity, peace, or sovereignty could be sentenced to death. The government also promised to punish anyone who spread false propaganda against Siad Barre's regime.
Until the early 1980s, the Siad Barre regime generally shunned capital punishment in favor of imprisonment and reeducation of actual, suspected, or potential opponents. The earlier parliamentary government had been able to hold people without trial up to ninety days during a state of emergency, but the military government removed most legal restrictions on preventive detention. After the coup, a local revolutionary council or the National Security Service (NSS) could detain individuals regarded as dangerous to peace, order, good government, or the aims and spirit of the revolution. Additionally, regional governors could order the search and arrest of persons suspected of a crime or of activities considered threatening to public order and security, and could requisition property or services without compensation. In 1974 the government began to require all civil servants to sign statements of intent to abide by security regulations. Furthermore, any contact between foreigners and Somali citizens had to be reported to the Ministry of Foreign Affairs. By the late 1970s, most Somalis were ignoring this latter regulation.
The Somali government became more repressive after an unsuccessful 1971 coup. Officials maintained that the coup attempt by some SRC members had sought to protect the interests of the trading bourgeoisie and the tribal structure. Many expected that the conspirators would receive clemency. Instead, the government executed them. Many Somalis found this act inconsistent with Islamic principles and as a consequence turned against Siad Barre's regime.
During its first years in power, the SRC sought to bolster nationalism by undermining traditional Somali allegiance to Islamic religious leaders and clan groups. Although it tried to avoid entirely alienating religious leaders, the government restricted their involvement in politics. During the early 1970s, some Islamic leaders affirmed that Islam could never coexist with scientific socialism; however, Siad Barre claimed that the two concepts were compatible because Islam propagated a classless society based on egalitarianism.
In the mid-1970s, the government tried to eliminate a rallying point for opposition by substituting allegiance to the nation for traditional allegiance to family and clan. Toward this end, the authorities stressed individual responsibility for all offenses, thereby undermining the concept of collective responsibility that existed in traditional society and served as the basis of diya-paying groups. The government also abolished traditional clan leadership responsibilities and titles such as sultan and shaykh.
By the late 1980s, it was evident that Siad Barre had failed to create a sense of Somali nationalism. Moreover, he had been unable to destroy the family and clan loyalties that continued to govern the lives of most Somalis. As antigovernment activities escalated, Siad Barre increasingly used force and terror against his opponents. This cycle of violence further isolated his regime, caused dissent within the SNA, and eventually precipitated the collapse of his government.
From 1969 until the mid-1970s, Siad Barre's authoritarian regime enjoyed a degree of popular support, largely because it acted with a decisiveness not displayed by the civilian governments of the 1960s. Even the 1971 coup attempt failed to affect the stability of the government. However, Somalia's defeat in the Ogaden War signaled the beginning of a decline in Siad Barre's popularity that culminated in his January 1991 fall from power.
Before the war, many Somalis had criticized Siad Barre for not trying to reincorporate the Ogaden into Somalia immediately after Ethiopian emperor Haile Selassie's death in 1975. The government was unable to stifle this criticism largely because the Somali claim to the Ogaden had overwhelming national support. The regime's commitment of regular troops to the Ogaden proved highly popular, as did Siad Barre's expulsion of the Soviet advisers, who had been resented by most Somalis. However, Somalia's defeat in the Ogaden War refocused criticism on Siad Barre.
After the spring 1978 retreat toward Hargeysa, Siad Barre met with his generals to discuss the battlefield situation, and ordered the execution of six of them for activities against the state. This action failed to quell SNA discontent over Siad Barre's handling of the war with Ethiopia. On April 9, 1978, a group of military officers (mostly Majeerteen) attempted a coup d'état. Government security forces crushed the plot within hours and subsequently arrested seventy-four suspected conspirators. After a month-long series of trials, the authorities imprisoned thirty-six people associated with the coup and executed another seventeen.
After the war, it was evident that the ruling alliance among the Mareehaan, Ogaden, and Dulbahante clans had been broken. The Ogaden--the clan of Siad Barre's mother, which had the most direct stake in the war--broke with the regime over the president's wartime leadership. To prevent further challenges to his rule, Siad Barre placed members of his own clan in important positions in the government, the armed forces, the security services, and other state agencies.
Throughout the late 1970s, growing discontent with the regime's policies and personalities prompted the defection of numerous government officials and the establishment of several insurgent movements. Because unauthorized political activity was prohibited, these organizations were based abroad. The best known was the Somali Salvation Front (SSF), which operated from Ethiopia. The SSF had absorbed its predecessor, the Somali Democratic Action Front (SODAF), which had been formed in Rome in 1976. Former minister of justice Usmaan Nur Ali led the Majeerteen-based SODAF. Lieutenant Colonel Abdillaahi Yuusuf Ahmad, a survivor of the 1978 coup attempt, commanded the SSF. Other prominent SSF personalities included former minister of education Hasan Ali Mirreh and former ambassador Muse Islan Faarah. The SSF, which received assistance from Ethiopia and Libya, claimed to command a guerrilla force numbering in the thousands. Ethiopia placed a radio transmitter at the SSF's disposal from which Radio Kulmis (unity) beamed anti-Siad Barre invective to listeners in Somalia. Although it launched a low-intensity sabotage campaign in 1981, the SSF lacked the capabilities to sustain effective guerrilla operations against the SNA.
The SSF's weakness derived from its limited potential as a rallying point for opposition to the government. Although the SSF embraced no ideology or political philosophy other than hostility to Siad Barre, its nationalist appeal was undermined by its reliance on Ethiopian support. The SSF claimed to encompass a range of opposition forces, but its leading figures belonged with few exceptions to the Majeerteen clan.
In October 1981, the SSF merged with the radical-left Somali Workers Party (SWP) and the Democratic Front for the Liberation of Somalia (DFLS) to form the Somali Salvation Democratic Front (SSDF). The SWP and DFLS, both based in Aden (then the capital of the People's Democratic Republic of Yemen--South Yemen), had included some former SRSP Central Committee members who faulted Siad Barre for compromising Somalia's revolutionary goals. An eleven-man committee led the SSDF. Yuusuf Ahmad, a former SNA officer and head of the SSF, acted as chairman; former SWP leader Idris Jaama Husseen served as vice chairman; Abdirahman Aidid Ahmad, former chairman of the SRSP Ideology Bureau and founding father of the DFLS, was secretary for information. The SSDF promised to intensify the military and political struggles against the Siad Barre regime, which was said to have destroyed Somali unity and surrendered to United States imperialism. Like the SSF, the SSDF suffered from weak organization, a close identification with its Ethiopian and Libyan benefactors, and its reputation as a Majeerteen party.
Despite its shortcomings, the SSDF played a key role in fighting between Somalia and Ethiopia in the summer of 1982. After an SNA force infiltrated the Ogaden, joined with the WSLF and attacked an Ethiopian army unit outside Shilabo, about 150 kilometers northwest of Beledweyne, Ethiopia retaliated by launching an operation against Somalia. On June 30, 1982, Ethiopian army units, together with SSDF guerrillas, struck at several points along Ethiopia's southern border with Somalia. They crushed the SNA unit in Balumbale and then occupied that village. In August 1982, the Ethiopian/SSDF force took the village of Goldogob, about 50 kilometers northwest of Galcaio. After the United States provided emergency military assistance to Somalia, the Ethiopian attacks ceased. However, the Ethiopian/SSDF units remained in Balumbale and Goldogob, which Addis Ababa maintained were part of Ethiopia that had been liberated by the Ethiopian army. The SSDF disputed the Ethiopian claim, causing a power struggle that eventually resulted in the destruction of the SSDF's leadership.
On October 12, 1985, Ethiopian authorities arrested Ahmad and six of his lieutenants after they repeatedly indicated that Balumbale and Goldogob were part of Somalia. The Ethiopian government justified the arrests by saying that Ahmad had refused to comply with a SSDF Central Committee decision relieving him as chairman. Mahammad Abshir, a party bureaucrat, then assumed command of the SSDF. Under his leadership, the SSDF became militarily moribund, primarily because of poor relations with Addis Ababa. In August 1986, the Ethiopian army attacked SSDF units, then launched a war against the movement, and finally jailed its remaining leaders. For the next several years, the SSDF existed more in name than in fact. In late 1990, however, after Ethiopia released former SSDF leader Ahmad, the movement reemerged as a fighting force in Somalia, albeit to a far lesser degree than in the early 1980s.
In April 1981, a group of Isaaq emigrés living in London formed the Somali National Movement (SNM), which subsequently became the strongest of Somalia's various insurgent movements. According to its spokesmen, the rebels wanted to overthrow Siad Barre's dictatorship. Additionally, the SNM advocated a mixed economy and a neutral foreign policy, rejecting alignment with the Soviet Union or the United States and calling for the dismantling of all foreign military bases in the region. In the late 1980s, the SNM adopted a pro-Western foreign policy and favored United States involvement in a post-Siad Barre Somalia. Other SNM objectives included establishment of a representative democracy that would guarantee human rights and freedom of speech. Eventually, the SNM moved its headquarters from London to Addis Ababa to obtain Ethiopian military assistance, which initially was limited to old Soviet small arms.
In October 1981, the SNM rebels elected Ahmad Mahammad Culaid and Ahmad Ismaaiil Abdi as chairman and secretary general, respectively, of the movement. Culaid had participated in northern Somali politics until 1975, when he went into exile in Djibouti and then in Saudi Arabia. Abdi had been politically active in the city of Burao in the 1950s, and, from 1965 to 1967, had served as the Somali government's minister of planning. After the authorities jailed him in 1971 for antigovernment activities, Abdi left Somalia and lived in East Africa and Saudi Arabia. The rebels also elected an eight-man executive committee to oversee the SNM's military and political activities.
On January 2, 1982, the SNM launched its first military operation against the Somali government. Operating from Ethiopian bases, commando units attacked Mandera Prison near Berbera and freed a group of northern dissidents. According to the SNM, the assault liberated more than 700 political prisoners; subsequent independent estimates indicated that only about a dozen government opponents escaped. At the same time, other commando units raided the Cadaadle armory near Berbera and escaped with an undetermined amount of arms and ammunition.
Mogadishu responded to the SNM attacks by declaring a state of emergency, imposing a curfew, closing gasoline stations to civilian vehicles, banning movement in or out of northern Somalia, and launching a search for the Mandera prisoners (most of whom were never found). On January 8, 1982, the Somali government also closed its border with Djibouti to prevent the rebels from fleeing Somalia. These actions failed to stop SNM military activities.
In October 1982, the SNM tried to increase pressure against the Siad Barre regime by forming a joint military committee with the SSDF. Apart from issuing antigovernment statements, the two insurgent groups started broadcasting from the former Radio Kulmis station, now known as Radio Halgan (struggle). Despite this political cooperation, the SNM and SSDF failed to agree on a common strategy against Mogadishu. As a result, the alliance languished.
In February 1983, Siad Barre visited northern Somalia in a campaign to discredit the SNM. Among other things, he ordered the release of numerous civil servants and businessmen who had been arrested for antigovernment activities, lifted the state of emergency, and announced an amnesty for Somali exiles who wanted to return home. These tactics put the rebels on the political defensive for several months. In November 1983, the SNM Central Committee sought to regain the initiative by holding an emergency meeting to formulate a more aggressive strategy. One outcome was that the military wing--headed by Abdulqaadir Kosar Abdi, formerly of the SNA--assumed control of the Central Committee by ousting the civilian membership from all positions of power. However, in July 1984, at the Fourth SNM Congress, held in Ethiopia, the civilians regained control of the leadership. The delegates also elected Ahmad Mahammad Mahamuud "Silanyo" SNM chairman and reasserted their intention to revive the alliance with the SSDF.
After the Fourth SNM Congress adjourned, military activity in northern Somalia increased. SNM commandos attacked about a dozen government military posts in the vicinity of Hargeysa, Burao, and Berbera. According to the SNM, the SNA responded by shooting 300 people at a demonstration in Burao, sentencing seven youths to death for sedition, and arresting an unknown number of rebel sympathizers. In January 1985, the government executed twenty-eight people in retaliation for antigovernment activity.
Between June 1985 and February 1986, the SNM claimed to have carried out thirty operations against government forces in northern Somalia. In addition, the SNM reported that it had killed 476 government soldiers and wounded 263, and had captured eleven vehicles and had destroyed another twenty-two, while losing only 38 men and two vehicles. Although many independent observers said these figures were exaggerated, SNM operations during the 1985-86 campaign forced Siad Barre to mount an international effort to cut off foreign aid to the rebels. This initiative included reestablishment of diplomatic relations with Libya in exchange for Tripoli's promise to stop supporting the SNM.
Despite efforts to isolate the rebels, the SNM continued military operations in northern Somalia. Between July and September 1987, the SNM initiated approximately thirty attacks, including one on the northern capital, Hargeysa; none of these, however, weakened the government's control of northern Somalia. A more dramatic event occurred when a SNM unit kidnapped a Médecins Sans Frontières medical aid team of ten Frenchmen and one Djiboutian to draw the world's attention to Mogadishu's policy of impressing men from refugee camps into the SNA. After ten days, the SNM released the hostages unconditionally.
Siad Barre responded to these activities by instituting harsh security measures throughout northern Somalia. The government also evicted suspected pro-SNM nomad communities from the Somali-Ethiopian border region. These measures failed to contain the SNM. By February 1988, the rebels had captured three villages around Togochale, a refugee camp near the northwestern Somali-Ethiopian border.
Following the rebel successes of 1987-88, Somali-Ethiopian relations began to improve. On March 19, 1988, Siad Barre and Ethiopian president Mengistu Haile Mariam met in Djibouti to discuss ways of reducing tension between the two countries. Although little was accomplished, the two agreed to hold further talks. At the end of March 1988, the Ethiopian minister of foreign affairs, Berhanu Bayih, arrived in Mogadishu for discussions with a group of Somali officials, headed by General Ahmad Mahamuud Faarah. On April 4, 1988, the two presidents signed a joint communiqué in which they agreed to restore diplomatic relations, exchange prisoners of war, start a mutual withdrawal of troops from the border area, and end subversive activities and hostile propaganda against each other.
Faced with a cutoff of Ethiopian military assistance, the SNM had to prove its ability to operate as an independent organization. Therefore, in late May 1988 SNM units moved out of their Ethiopian base camps and launched a major offensive in northern Somalia. The rebels temporarily occupied the provincial capitals of Burao and Hargeysa. These early successes bolstered the SNM's popular support, as thousands of disaffected Isaaq clan members and SNA deserters joined the rebel ranks.
Over the next few years, the SNM took control of almost all of northwestern Somalia and extended its area of operations about fifty kilometers east of Erigavo. However, the SNM did not gain control of the region's major cities (i.e., Berbera, Hargeysa, Burao, and Boorama), but succeeded only in laying siege to them.
With Ethiopian military assistance no longer a factor, the SNM's success depended on its ability to capture weapons from the SNA. The rebels seized numerous vehicles such as Toyota Land Cruisers from government forces and subsequently equipped them with light and medium weapons such as 12.7mm and 14.5mm machine guns, 106mm recoilless rifles, and BM-21 rocket launchers. The SNM possessed antitank weapons such as Soviet B-10 tubes and RPG-7s. For air defense the rebels operated Soviet 30mm and 23mm guns, several dozen Soviet ZU23 2s, and Czech-made twin-mounted 30mm ZU30 2s. The SNM also maintained a small fleet of armed speed boats that operated from Maydh, fifty kilometers northwest of Erigavo, and Xiis, a little west of Maydh. Small arms included 120mm mortars and various assault rifles, such as AK-47s, M-16s, and G-3s. Despite these armaments, rebel operations, especially against the region's major cities, suffered because of an inadequate logistics system and a lack of artillery, mine-clearing equipment, ammunition, and communications gear.
To weaken Siad Barre's regime further, the SNM encouraged the formation of other clan-based insurgent movements and provided them with political and military support. In particular, the SNM maintained close relations with the United Somali Congress (USC), which was active in central Somalia, and the Somali Patriotic Movement (SPM), which operated in southern Somalia. Both these groups sought to overthrow Siad Barre's regime and establish a democratic form of government.
The USC, a Hawiye organization founded in 1989, had suffered from factionalism based on subclan rivalries since its creation. General Mahammad Faarah Aidid commanded the Habar Gidir clan, and Ali Mahdi Mahammad headed the Abgaal clan. The SPM emerged in March 1989, after a group of Ogaden officers, led by Umar Jess, deserted the SNA and took up arms against Siad Barre. Like the USC, the SPM experienced a division among its ranks. The moderates, under Jess, favored an alliance with the SNM and USC and believed that Somalia should abandon its claims to the Ogaden. SPM hardliners wanted to recapture the Ogaden and favored a stronger military presence along the Somali-Ethiopian border.
On November 19, 1989, the SNM and SPM issued a joint communiqué announcing the adoption of a "unified stance on internal and external political policy." On September 12, 1990, the SNM concluded a similar agreement with the USC. Then, on November 24, 1990, the SNM announced that it had united with the SPM and the USC to pursue a common military strategy against the SNA. Actually, the SNM had concluded the unification agreement with Aidid, which widened the rift between the two USC factions.
By the beginning of 1991, all three of the major rebel organizations had made significant military progress. The SNM had all but taken control of northern Somalia by capturing the towns of Hargeysa, Berbera, Burao, and Erigavo. On January 26, 1991, the USC stormed the presidential palace in Mogadishu, thereby establishing its control over the capital. The SPM succeeded in overrunning several government outposts in southern Somalia.
The SNM-USC-SPM unification agreement failed to last after Siad Barre fled Mogadishu. On January 26, 1991, the USC formed an interim government, which the SNM refused to recognize. On May 18, 1991, the SNM declared the independence of the Republic of Somaliland. The USC interim government opposed this declaration, arguing instead for a unified Somalia. Apart from these political disagreements, fighting broke out between and within the USC and SPM. The SNM also sought to establish its control over northern Somalia by pacifying clans such as the Gadabursi and the Dulbahante. To make matters worse, guerrilla groups proliferated; by late 1991, numerous movements vied for political power, including the United Somali Front (Iise), Somali Democratic Alliance (Gadabursi), United Somali Party (Dulbahante), Somali Democratic Movement (Rahanwayn), and Somali National Front (Mareehaan). The collapse of the nation state system and the emergence of clan-based guerrilla movements and militias that became governing authorities persuaded most Western observers that national reconciliation would be a long and difficult process.
The country's population is estimated to be between 7 and 8 million. The country is very poor with a market-based economy in which most of the work force is employed as subsistence farmers, agro-pastoralists, or pastoralists. The principal exports are livestock and charcoal; there is very little industry. Insecurity and bad weather continued to affect the country's already extremely poor economic situation. A livestock ban, lifted in 2000, was reinstituted by Saudi Arabia because of fears of Rift Valley fever and reportedly because of Saudi political considerations. Livestock is the most important component of the Somali economy, and the ban has harmed further an already devastated economy. The country's economic problems continued to cause serious unemployment and led to pockets of malnutrition in southern areas of the country.
Most Somalis are Sunni Muslims. (Less than 1 percent of ethnic Somalis are Christians.) Loyalty to Islam reinforces distinctions that set Somalis apart from their immediate African neighbors, most of whom are either Christians (particularly the Amhara and others of Ethiopia) or adherents of indigenous African faiths.
The Islamic ideal is a society organized to implement Muslim precepts in which no distinction exists between the secular and the religious spheres. Among Somalis this ideal had been approximated less fully in the north than among some groups in the settled regions of the south where religious leaders were at one time an integral part of the social and political structure. Among nomads, the exigencies of pastoral life gave greater weight to the warrior's role, and religious leaders were expected to remain aloof from political matters.
The role of religious functionaries began to shrink in the 1950s and 1960s as some of their legal and educational powers and responsibilities were transferred to secular authorities. The position of religious leaders changed substantially after the 1969 revolution and the introduction of scientific socialism. Siad Barre insisted that his version of socialism was compatible with Quranic principles, and he condemned atheism. Religious leaders, however, were warned not to meddle in politics.
The new government instituted legal changes that some religious figures saw as contrary to Islamic precepts. The regime reacted sharply to criticism, executing some of the protesters. Subsequently, religious leaders seemed to accommodate themselves to the government.
Somali Islam rendered the world intelligible to Somalis and made their lives more bearable in a harsh land. Amidst the interclan violence that characterized life in the early 1990s, Somalis naturally sought comfort in their faith to make sense of their national disaster. The traditional response of practicing Muslims to social trauma is to explain it in terms of a perceived sin that has caused society to stray from the "straight path of truth" and consequently to receive God's punishment. The way to regain God's favor is to repent collectively and rededicate society in accordance with Allah's divine precepts.
On the basis of these beliefs, a Somali brand of messianic Islamism (sometimes seen as fundamentalism) sprang up to fill the vacuum created by the collapse of the state. In the disintegrated Somali world of early 1992, Islamism appeared to be largely confined to Bender Cassim, a coastal town in Majeerteen country. For instance, a Yugoslav doctor who was a member of a United Nations team sent to aid the wounded was gunned down by masked assailants there in November 1991. Reportedly, the assassins belonged to an underground Islamist movement whose adherents wished to purify the country of "infidel" influence.
The Somali Penal Code, promulgated in early 1962, became effective on April 3, 1964. It was Somalia's first codification of laws designed to protect the individual and to ensure the equitable administration of justice. The basis of the code was the constitutional premise that the law has supremacy over the state and its citizens. The code placed responsibility for determining offenses and punishments on the written law and the judicial system and excluded many penal sanctions formerly observed in unwritten customary law. The authorities who drafted the code, however, did not disregard the people's past reliance on traditional rules and sanctions. The code contained some of the authority expressed by customary law and by Islamic (sharia) religious law.
The penal laws applied to all nationals, foreigners, and stateless persons living in Somalia. Courts ruled out ignorance of the law as a justification for breaking the law or an excuse for committing an offense, but considered extenuations and mitigating factors in individual cases. The penal laws prohibited collective punishment, which was contrary to the traditional sanctions of diya-paying groups. The penal laws stipulated that if the offense constituted a violation of the code, the perpetrator had committed an unlawful act against the state and was subject to its sanctions. Judicial action under the code, however, did not rule out the possibility of additional redress in the form of diya through civil action in the courts. Siad Barre's regime attacked this tolerance of diya, and forbade its practice entirely in 1974.
Under the Somali penal code, to be criminally liable a person must have committed an act or have been guilty of an omission that caused harm or danger to the person or property of another or to the state. Further, the offense must have been committed willfully or as the result of negligence, imprudence, or illegal behavior. Under Somali penal law, the courts assumed the accused to be innocent until proved guilty beyond reasonable doubt. In criminal prosecution, the burden of proof rested with the state.
Penal laws classed offenses as either crimes or contraventions, the latter being legal violations without criminal intent. Death by shooting was the only sentence for serious offenses such as crimes against the state and murder. The penal law usually prescribed maximum and minimum punishments but left the actual sentence to the judge's discretion.
The penal laws comprised three categories. The first dealt with general principles of jurisprudence; the second defined criminal offenses and prescribed specified punishments; the third contained sixty-one articles that regulated contraventions of public order, safety, morality, and health. Penal laws took into consideration the role of punishment in restoring the offender to a useful place in society.
The Criminal Procedure Code governed matters associated with arrest and trial. The code, which conformed to British common law, prescribed the kinds and jurisdictions of criminal courts, identified the functions and responsibilities of judicial officials, outlined the rules of evidence, and regulated the conduct of trials. Normally, a person could be arrested only if caught in the act of committing an offense or upon issuance of a warrant by the proper judicial authority. The code recognized the writ of habeas corpus. Those arrested had the right to appear before a judge within twenty-four hours.
As government opposition proliferated in the late 1970s and early 1980s, the Siad Barre regime increasingly subverted or ignored Somalia's legal system. By the late 1980s, Somalia had become a police state, with citizens often falling afoul of the authorities for solely political reasons. Pressure by international human rights organizations such as Amnesty International and Africa Watch failed to slow Somalia's descent into lawlessness. After Siad Barre fell from power in January 1991, the new authorities promised to restore equity to the country's legal system. Given the many political, economic, and social problems confronting post-Siad Barre Somalia, however, it appeared unlikely that this goal would be achieved soon.
INCIDENCE OF CRIME
Somalia has provided data neither for United Nations nor INTERPOL surveys of crime; however, an estimate of crime is given in the United States State Department's Consular Information Sheet, according to which: "The Department of State warns U.S. citizens against all travel to Somalia. Inter-clan and inter-factional fighting can flare up with little warning, and kidnapping, murder, and other threats to U.S. citizens and other foreigners can occur unpredictably in many regions. While the self-declared "Republic of Somaliland" in northern Somalia has been relatively peaceful, the Sanaag and Sool regions in eastern Somaliland, bordering on Puntland (northeastern Somalia), are subject to insecurity due to potential inter-clan fighting. In addition, the Mogadishu area, the Puntland region in northern Somalia, and the districts of Gedo and Bay (especially the vicinity of Baidoa) in the south have experienced serious fighting in recent months. Territorial control in the Mogadishu area is divided among numerous groups; lines of control are unclear and frequently shift, making movement within this area extremely hazardous. …incidents such as armed banditry and road assaults may occur. In addition, there have been reports of general crime and rock-throwing against aid workers outside of Hargeisa. Civil unrest persists in the rest of the country. U.S. citizens should not travel to areas other than Somaliland.
With the exception of Somaliland, crime is an extension of the general state of insecurity. Serious and violent crimes are very common. Kidnapping and robbery are a particular problem in Mogadishu and other areas in the south.
U.S. citizens are urged to use caution when sailing near the coast of Somalia. Merchant vessels, fishing boats and pleasure craft alike risk seizure and their crews being held for ransom, especially in the waters near the Horn of Africa and the Kenyan border."
At independence, Somalia had four distinct legal traditions: English common law, Italian law, Islamic sharia or religious law, and Somali customary law (traditional rulers and sanctions). The challenge after 1960 was to meld this diverse legal inheritance into one system. During the 1960s, a uniform penal code, a code of criminal court procedures, and a standardized judicial organization were introduced. The Italian system of basing judicial decisions on the application and interpretation of the legal code was retained. The courts were enjoined, however, to apply English common law and doctrines of equity in matters not governed by legislation.
In Italian Somaliland, observance of the sharia had been more common than in British Somaliland, where the application of Islamic law had been limited to cases pertaining to marriage, divorce, family disputes, and inheritance. Qadis (Muslim judges) in British Somaliland also adjudicated customary law in cases such as land tenure disputes and disagreements over the payment of diya or blood compensation. In Italian Somaliland, however, the sharia courts had also settled civil and minor penal matters, and Muslim plaintiffs had a choice of appearing before a secular judge or a qadi. After independence the differences between the two regions were resolved by making the sharia applicable in all civil matters if the dispute arose under that law. Somali customary law was retained for optional application in such matters as land tenure, water and grazing rights, and the payment of diya.
The military junta suspended the constitution of 1961 when it took power in 1969, but it initially respected other sources of law. In 1973 the Siad Barre regime introduced a unified civil code. Its provisions pertaining to inheritance, personal contracts, and water and grazing rights sharply curtailed both the sharia and Somali customary law. Siad Barre's determination to limit the influence of the country's clans was reflected in sections of the code that abolished traditional clan and lineage rights over land, water resources, and grazing. In addition, the new civil code restricted the payment of diya as compensation for death or injury to the victim or close relatives rather than to an entire diya-paying group. A subsequent amendment prohibited the payment of diya entirely.
The attorney general, who was appointed by the minister of justice, was responsible for the observance of the law and prosecution of criminal matters. The attorney general had ten deputies in the capital and several other deputies in the rest of the country. Outside of Mogadishu, the deputies of the attorney general had their offices at the regional and district courts.
Under the Siad Barre regime, several police and intelligence organizations were responsible for maintaining public order, controlling crime, and protecting the government against domestic threats. These included the Somali Police Force (SPF), the People's Militia, the NSS, and a number of other intelligence-gathering operations, most of which were headed by members of the president's family. After Siad Barre's downfall, these units were reorganized or abolished.
The Somali Police Force (SPF) grew out of police forces employed by the British and Italians to maintain peace during the colonial period. Both European powers used Somalis as armed constables in rural areas. Somalis eventually staffed the lower ranks of the police forces, and Europeans served as officers. The colonial forces produced the senior officers and commanders--including Siad Barre--who led the SPF and the army after independence.
In 1884 the British formed an armed constabulary to police the northern coast. In 1910 the British created the Somaliland Coastal Police, and in 1912 they established the Somaliland Camel Constabulary to police the interior. In 1926 the colonial authorities formed the Somaliland Police Force. Commanded by British officers, the force included Somalis in its lower ranks. Armed rural constabulary (illalo) supported this force by bringing offenders to court, guarding prisoners, patrolling townships, and accompanying nomadic tribesmen over grazing areas.
The Italians initially relied on military forces to maintain public order in their colony. In 1914 the authorities established a coastal police and a rural constabulary (gogle) to protect Italian residents. By 1930 this force included about 300 men. After the fascists seized power in Italy, colonial administrators reconstituted the Somali Police Corps into the Corpo Zaptié. Italian carabinieri commanded and trained the new corps, which eventually numbered approximately 800. During Italy's war against Ethiopia, the Corpo Zaptié expanded to about 6,000 men.
In 1941 the British defeated the Italians and formed a British Military Administration (BMA) over both protectorates. The BMA disbanded the Corpo Zaptié and created the Somalia Gendarmerie. By 1943 this force had grown to more than 3,000 men, led by 120 British officers. In 1948 the Somalia Gendarmerie became the Somali Police Force. After the creation of the Italian Trust Territory in 1950, Italian carabinieri officers and Somali personnel from the Somali Police Force formed the Police Corps of Somalia (Corpo di Polizia della Somalia). In 1958 the authorities made the corps an entirely Somali force and changed its name to the Police Force of Somalia (Forze di Polizia della Somalia).
In 1960 the British Somaliland Scouts joined with the Police Corps of Somalia to form a new Somali Police Force, which consisted of about 3,700 men. The authorities also organized approximately 1,000 of the force as the Darawishta Poliska, a mobile group used to keep peace between warring clans in the interior. Since then, the government has considered the SPF a part of the armed forces. It was not a branch of the SNA, however, and did not operate under the army's command structure.
Until abolished in 1976, the Ministry of Interior oversaw the force's national commandant and his central command. After that date, the SPF came under the control of the presidential adviser on security affairs. Each of the country's administrative regions had a police commandant; other commissioned officers maintained law and order in the districts. After 1972 the police outside Mogadishu comprised northern and southern group commands, divisional commands (corresponding to the districts), station commands, and police posts. Regional governors and district commissioners commanded regional and district police elements.
Under the parliamentary regime, police received training and matériel aid from West Germany, Italy, and the United States. Although the government used the police to counterbalance the Soviet-supported army, no police commander opposed the 1969 army coup. During the 1970s, German Democratic Republic (East Germany) security advisers assisted the SPF. After relations with the West improved in the late 1970s, West German and Italian advisers again started training police units.
By the late 1970s, the SPF was carrying out an array of missions, including patrol work, traffic management, criminal investigation, intelligence gathering, and counterinsurgency. The elite mobile police groups consisted of the Darawishta and the Birmadka Poliska (Riot Unit). The Darawishta, a mobile unit that operated in remote areas and along the frontier, participated in the Ogaden War. The Birmadka acted as a crack unit for emergency action and provided honor guards for ceremonial functions.
In 1961 the SPF established an air wing, equipped with Cessna light aircraft and one Douglas DC-3. The unit operated from improvised landing fields near remote police posts. The wing provided assistance to field police units and to the Darawishta through the airlift of supplies and personnel and reconnaissance. During the final days of Siad Barre's regime, the air wing operated two Cessna light aircraft and two DO-28 Skyservants.
Technical and specialized police units included the Tributary Division, the Criminal Investigation Division (CID), the Traffic Division, a communications unit, and a training unit. The CID, which operated throughout the country, handled investigations, fingerprinting, criminal records, immigration matters, and passports. In 1961 the SPF established a women's unit. Personnel assigned to this small unit investigated, inspected, and interrogated female offenders and victims. Policewomen also handled cases that involved female juvenile delinquents, ill or abandoned girls, prostitutes, and child beggars.
Service units of the Somali police included the Gadidka Poliska (Transport Department) and the Health Service. The Police Custodial Corps served as prison guards. In 1971 the SPF created a fifty-man national Fire Brigade. Initially, the Fire Brigade operated in Mogadishu. Later, however, it expanded its activities into other towns, including Chisimayu, Hargeysa, Berbera, Merca, Giohar, and Beledweyne.
Beginning in the early 1970s, police recruits had to be seventeen to twenty-five years of age, of high moral caliber, and physically fit. Upon completion of six months of training at the National Police Academy in Mogadishu, those who passed an examination would serve two years on the force. After the recruits completed this service, the police could request renewal of their contracts. Officer cadets underwent a nine-month training course that emphasized supervision of police field performance. Darawishta members attended a six-month tactical training course; Birmadka personnel received training in public order and riot control. After Siad Barre fled Mogadishu in January 1991, both the Darawishta and Birmadka forces ceased to operate, for all practical purposes.
In August 1972, the government established the People's Militia, known as the Victory Pioneers (Guulwadayaal). Although a wing of the army, the militia worked under the supervision of the Political Bureau of the presidency. After the SRSP's formation in 1976, the militia became part of the party apparatus. Largely because of the need for military reserves, militia membership increased from 2,500 in 1977 to about 10,000 in 1979, and to approximately 20,000 by 1990. After the collapse of Siad Barre's regime, the People's Militia, like other military elements, disintegrated. The militia staffed the government and party orientation centers that were located in every settlement in Somalia. The militia aided in self-help programs, encouraged "revolutionary progress," promoted and defended Somali culture, and fought laziness, misuse of public property, and "reactionary" ideas and actions. Moreover, the militia acted as a law enforcement agency that performed duties such as checking contacts between Somalis and foreigners. The militia also had powers of arrest independent of the police. In rural areas, militiamen formed "vigilance corps" that guarded grazing areas and towns. After Siad Barre fled Mogadishu in January 1991, militia members tended to join one of the insurgent groups or clan militias.
Shortly after Siad Barre seized power, the Soviet Committee of State Security (Komitet Gosudarstvennoi Bezopasnosti--KGB) helped Somalia form the National Security Service (NSS). This organization, which operated outside normal bureaucratic channels, developed into an instrument of domestic surveillance, with powers of arrest and investigation. The NSS monitored the professional and private activities of civil servants and military personnel, and played a role in the promotion and demotion of government officials. As the number of insurgent movements proliferated in the late 1980s, the NSS increased its activities against dissidents, rebel sympathizers, and other government opponents. Until the downfall of Siad Barre's regime, the NSS remained an elite organization staffed by men from the SNA and the police force who had been chosen for their loyalty to the president.
After the withdrawal of the last U.N. peacekeepers in 1995, clan and factional militias, in some cases supplemented by local police forces established with U.N. help in the early 1990s, continued to function with varying degrees of effectiveness. Intervention by Ethiopian troops in 1996 and 1997 helped to maintain order in Gedo region by closing down the training bases of the Islamic group Al'Ittihad Al-Islami (AIAI). In Somaliland more than 60 percent of the budget was allocated to maintaining a militia and police force composed of former troops. In 2000 a Somaliland presidential decree, citing national security concerns in the wake of the conclusion of the Djibouti Conference, delegated special powers to the police and the military.
Also in 2000, the TNG began recruiting for a new 4,000-officer police force to restore order in Mogadishu. The TNG requested former soldiers to register and enroll in training camps to form a national army. At year's end 2001, the TNG had a 3,500-officer police force and a militia of approximately 5,000 persons. During the year 2001, 7,000 former non-TNG militia were demobilized to retrain them for service with the TNG; however, many of the militia members left the demobilization camps after the TNG was unable to pay their salaries for 3 months. At year's end 2001, the TNG was attempting to restore salaries and to continue the demobilization process. During the year 2001, Mogadishu police began to patrol in the TNG-controlled areas of the city.
Police and militia committed numerous human rights abuses throughout the country. Many civilian citizens were killed in factional fighting, especially in Gedo, Hiran, Lower Shabelle, Middle Shabelle, Middle Juba, Lower Juba regions, and in the cities of Mogadishu and Bosasso. Kidnapping remained a problem. There were some reports of the use of torture by Somaliland and Puntland administrations and militias. In Somaliland and Puntland, police used lethal force while disrupting demonstrations. The use of landmines, reportedly by the Rahanwein Resistance Army (RRA), resulted in several deaths.
Political violence and banditry have been endemic since the revolt against Siad Barre, who fled the capital in January 1991. Since that time, tens of thousands of persons, mostly noncombatants, have died in interfactional and interclan fighting. The vast majority of killings throughout the year resulted from clashes between militias or unlawful militia activities; several occurred during land disputes, and a small number involved common criminal activity. The number of killings increased from 2000 as a result of fighting between the following groups: between the RRA and TNG; between the TNG and warlord Muse Sudi in Mogadishu; between warlord Hussein Aideed and the TNG; between Abdullahi Yusuf's forces and those of Jama Ali Jama in Puntland; and between the SRRC and Jubaland Alliance in Kismayo.
Security forces and police killed several persons, and in some instances used lethal force to disperse demonstrators during the year 2001. For example, on February 3, in Bosasso, security forces and police shot and killed 1 woman and injured 11 other persons during a demonstration. On August 23, Somaliland police, who were arresting supporters of elders for protesting actions of President Egal, killed a small child during an exchange of gunfire. On August 28, in Mogadishu, TNG police reportedly killed two young brothers. There were no investigations, and no action was taken against the perpetrators during the year 2001. Unlike in the previous year, Islamic courts did not summarily execute any persons during the year 2001. Killings resulted from conflicts between security and police forces and militias during the year.
There were no known reports of unresolved politically motivated disappearances, although cases easily might have been concealed among the thousands of refugees and displaced persons.
There continued to be reports of kidnappings of aid workers during the year 2001. There were numerous kidnappings by militia groups and armed assailants who demanded ransom for hostages.
The Transitional National Charter, adopted in 2000 but not implemented by year's end 2001, prohibits torture, and the Puntland Charter prohibits torture "unless sentenced by Islamic Shari'a courts in accordance with Islamic law;" however, there were some reports of the use of torture by the Puntland and Somaliland administrations and by warring militiamen against each other or against civilians. Observers believe that many incidents of torture were not reported.
Security forces killed and injured persons while forcibly dispersing demonstrations during the year 2001. Security forces, police, and militias also injured persons during the year, including supporters and members of the TNG.
The Transitional Charter, adopted in 2000 but not implemented by year's end 2001, provides for the sanctity of private property and privacy; however, looting and forced entry into private property continued in Mogadishu, although on a smaller scale than in previous years. The Puntland Charter recognizes the right to private property; however, the authorities did not respect this right on at least one occasion.
Militia members reportedly confiscated persons' possessions as punishment during extortion attempts during the year 2001.
Most properties that were occupied forcibly during militia campaigns in 1992-93, notably in Mogadishu and the Lower Shabelle, remained in the hands of persons other than their prewar owners.
Approximately 300,000 persons, or 4 percent of the population, are internally displaced persons (IDPs) as a result of interfactional and interclan fighting.
In the absence of constitutional or other legal protections, various factions and armed bandits continued to engage in arbitrary detention, including the holding of relief workers.
On February 26, a U.N. Educational, Scientific and Cultural Organization (UNESCO) academic who was in Garowe, Puntland to conduct a seminar, was arrested and charged with distributing antigovernment leaflets; he was released after paying a fine.
On May 22, authorities in Somaliland arrested and detained Suleiman Mohamoud Adan "Gaal" for holding meetings outside of Somaliland with Djibouti President Gelleh and TNG members; on June 5, he was released.
On June 12, warlord Muse Sudi's militia arrested six clan elders for attending a meeting to discuss clan affairs, because he reportedly believed that they were attempting to undermine his authority; the elders were released after several days.
On June 13, the Puntland Administration arrested two intellectuals reportedly for engaging in antigovernment political activities; they were released after a few days.
On August 23, Somaliland President Egal ordered the detention of approximately 10 elders. After fighting between Somaliland authorities and supporters of the elders, four sultans (sub-clan chiefs) and one of their supporters were arrested. On September 3, President Egal ordered their release.
On September 24, the RRA in Burhakaba arrested 11 pro-TNG elders and accused them of fomenting division and dissension within the Rahanwein clan.
Unlike in the previous year, there were no reports that Somaliland authorities detained foreigners for proselytizing. Seven Christian Ethiopians arrested in Somaliland in 1999 for allegedly attempting to proselytize were released at the beginning of the year.
Unlike in previous years, there were no reports that authorities in Somaliland, Puntland, and in areas of the south detained local or foreign journalists.
It was unknown whether persons detained in 2000 were released during the year 2001.
There were no developments in the following arrest cases from 2000: The September arrests of five persons by Somaliland police, and the March detention of five persons by the Puntland region security committee.
There were no developments in the arrests of the following persons arrested by the Somaliland authorities in 2000 for participating in the Djibouti Conference: Sultan Mohamed Abdulkadir, who was arrested in November; Bile Mahmud Qabowsadeh, who was arrested in October; and Abdi Hashi, who was arrested in May.
There were no reports of lengthy pretrial detention in violation of the pre-1991 Penal Code in Somaliland or Puntland.
None of the factions used forced exile.
Over the centuries, the Somalis developed a system of handling disputes or acts of violence, including homicide, as wrongs involving not only the parties immediately concerned but also the clans to which they belonged. The offending party and his group would pay diya to the injured party and his clan. The British and Italians enforced criminal codes based on their own judicial systems in their respective colonies, but did not seriously disrupt the diya-paying system.
After independence the Somali government developed its own laws and procedures, which were largely based on British and Italian legal codes. Somali officials made no attempt to develop a uniquely Somali criminal justice system, although diya-paying arrangements continued.
The military junta that seized power in 1969 changed little of the criminal justice system it inherited. However, the government launched a campaign against diya and the concept of collective responsibility for crimes. This concept is the most distinctly Somali of any in the criminal justice system. The regime instead concentrated on extending the influence of laws introduced by the British and Italians. This increased the government's control over an area of national life previously regulated largely by custom.
The constitution of 1961 had provided for a unified judiciary independent of the executive and the legislature. A 1962 law integrated the courts of northern and southern Somalia into a four-tiered system: the Supreme Court, courts of appeal, regional courts, and district courts. Sharia courts were discontinued although judges were expected to take the sharia into consideration when making decisions. The Siad Barre government did not fundamentally alter this structure; nor had the provisional government made any significant changes as of May 1992.
At the lowest level of the Somali judicial system were the eighty-four district courts, each of which consisted of civil and criminal divisions. The civil division of the district court had jurisdiction over matters requiring the application of the sharia, or customary law, and suits involving claims of up to 3,000 Somali shillings. The criminal division of the district court had jurisdiction over offenses punishable by fines or prison sentences of less than three years.
There were eight regional courts, each consisting of three divisions. The ordinary division had jurisdiction over penal and civil cases considered too serious to be heard by the district courts. The assize division considered only major criminal cases, that is, those concerning crimes punishable by more than ten years' imprisonment. A third division handled cases pertaining to labor legislation. In both the district and regional courts, a single magistrate, assisted by two laymen, heard cases, decided questions of fact, and voted on the guilt or innocence of the accused.
Somalia's next-highest tier of courts consisted of the two courts of appeal. The court of appeals for the southern region sat at Mogadishu, and the northern region's court of appeals sat at Hargeysa. Each court of appeal had two divisions. The ordinary division heard appeals of district court decisions and of decisions of the ordinary division of the regional courts, whereas the assize division was only for appeals from the regional assize courts. A single judge presided over cases in both divisions. Two laymen assisted the judge in the ordinary division, and four laymen assisted the judge in the assize division. The senior judges of the courts of appeal, who were called presidents, administered all the courts in their respective regions.
The Supreme Court, which sat at Mogadishu, had ultimate authority for the uniform interpretation of the law. It heard appeals of decisions and judgments of the lower courts and of actions taken by public attorneys, and settled questions of court jurisdiction. The Supreme Court was composed of a chief justice, who was referred to as the president, a vice president, nine surrogate justices, and four laymen. The president, two other judges, and four laymen constituted a full panel for plenary sessions of the Supreme Court. In ordinary sessions, one judge presided with the assistance of two other judges and two laymen. The president of the Supreme Court decided whether a case was to be handled in plenary or ordinary session, on the basis of the importance of the matter being considered.
Although the military government did not change the basic structure of the court system, it did introduce a major new institution, the National Security Courts (NSCs), which operated outside the ordinary legal system and under the direct control of the executive. These courts, which sat at Mogadishu and the regional capitals, had jurisdiction over serious offenses defined by the government as affecting the security of the state, including offenses against public order and crimes by government officials. The NSC heard a broad range of cases, passing sentences for embezzlement by public officials, murder, political activities against the state, and thefts of government food stocks. A senior military officer was president of each NSC. He was assisted by two other judges, usually also military officers. A special military attorney general prosecuted cases brought before the NSC. No other court, not even the Supreme Court, could review NSC sentences. Appeals of NSC verdicts could be taken only to the president of the republic. Opponents of the Siad Barre regime accused the NSC of sentencing hundreds of people to death for political reasons. In October 1990, Siad Barre announced the abolition of the widely feared and detested courts; as of May 1992, the NSCs had not been reinstituted by the provisional government.
Before the 1969 coup, the Higher Judicial Council had responsibility for the selection, promotion, and discipline of members of the judiciary. The council was chaired by the president of the Supreme Court and included justices of the court, the attorney general, and three members elected by the National Assembly. In 1970 military officers assumed all positions on the Higher Judicial Council. The effect of this change was to make the judiciary accountable to the executive. One of the announced aims of the provisional government after the defeat of Siad Barre was the restoration of judicial independence.
As of year 2001, there is no national judicial system.
The Transitional Charter, adopted in 2000, provides for an independent judiciary and for a High Commission of Justice, a Supreme Court, a Court of Appeal, and courts of first reference; however, the Charter had not been implemented by year's end 2001. Some regions have established local courts that depend on the predominant local clan and associated factions for their authority. The judiciary in most regions relies on traditional and customary law, Shari'a law, the Penal Code of the pre-1991 Siad Barre Government, or some combination of the three. For example, in Bosasso and Afmadow, criminals are turned over to the families of their victims, which then exact blood compensation in keeping with local tradition. Under the system of customary justice, clans often hold entire opposing clans or sub-clans responsible for alleged violations by individuals.
Islamic Shari'a courts, which traditionally ruled in cases of civil and family law but extended their jurisdiction to criminal proceedings in some regions beginning in 1994, ceased to function effectively in the country during the year 2001. The Islamic courts in Mogadishu gradually were absorbed during the year 2001 by the TNG, and the courts in Merka and Beledweyne ceased to function. In Berbera courts apply a combination of Shari'a law and the former Penal Code. In south Mogadishu, a segment of north Mogadishu, the Lower Shabelle, and parts of the Gedo and Hiran regions, court decisions are based on a combination of Shari'a and customary law. Throughout most of the country, customary law forms a basis for court decisions.
In 2000 Somaliland adopted a new Constitution based on democratic principles but continued to use the pre-1991 Penal Code. The Constitution provides for an independent judiciary; however, the judiciary is not independent in practice. A U.N. report issued in 2000 noted a serious lack of trained judges and of legal documentation in Somaliland, which caused problems in the administration of justice. Untrained police and other persons reportedly served as judges. The Puntland Charter implemented in 1998 provides for an independent judiciary; however, the judiciary is not independent in practice. The Puntland Charter also provides for a Supreme Court, courts of appeal, and courts of first reference. In Puntland clan elders resolved the majority of cases using traditional methods; however, those with no clan representation in Puntland were subject to the Administration's judicial system.
The Transitional Charter, which was not implemented by year's end 2001, provides for the right to be represented by an attorney. The right to representation by an attorney and the right to appeal do not exist in those areas that apply traditional and customary judicial practices or Shari'a law. These rights more often are respected in regions that continue to apply the former government's penal code, such as Somaliland and Puntland.
In January more than 50 gunmen attacked an Islamic court in Mogadishu, releasing 48 prisoners and looting the premises; the motivation for the attack remained unknown at year's end 2001.
There were no reports of political prisoners.
The few prisons that existed before 1960 had been established during the British and Italian colonial administrations. By independence these facilities were in poor condition and were inadequately staffed.
After independence the Somali government included in the constitution an article asserting that criminal punishment must not be an obstacle to convicts' moral reeducation. This article also established a prison organization and emphasized prisoner rehabilitation.
The Somali Penal Code of 1962 effectively stipulated the reorganization of the prison system. The code required that prisoners of all ages work during prison confinement. In return for labor on prison farms, construction projects, and roadbuilding, prisoners received a modest salary, which they could spend in prison canteens or retain until their release. The code also outlawed the imprisonment of juveniles with adults.
By 1969 Somalia's prison system included forty-nine facilities, the best-equipped of which was the Central Prison of Mogadishu. During the 1970s, East Germany helped Somalia build four modern prisons. As opposition to Siad Barre's regime intensified, the country's prisons became so crowded that the government used schools, military and police headquarters, and part of the presidential palace as makeshift jails. Despite criticism by several international humanitarian agencies, the Somali government failed to improve the prison system.
As of 2001, prison conditions varied throughout the country; however, in general they remained harsh and, in some cases, life threatening. Conditions at the north Mogadishu prison of the Shari'a court system remained harsh and life threatening. Hareryale, a prison established between north and south Mogadishu, reportedly holds hundreds of prisoners, including children. Conditions at Hareryale are described as overcrowded and poor. Similar conditions exist at Shirkhole prison, an Islamic Court militia-run prison in south Mogadishu, and at the north Mogadishu prison for Abgel clan prisoners run by warlord Muse Sudi. In September the U.N. Secretary General's Independent Expert on Human Rights, Dr. Ghanim Alnajar, visited prisons in Hargeisa and Mogadishu. Alnajar reported that conditions had not improved in the 3 years since his previous visit.
Overcrowding, poor sanitary conditions, a lack of access to adequate health care, and an absence of education and vocational training characterized prisons throughout the country. Tuberculosis was widespread. Abuse by guards reportedly was common in many prisons. Pretrial detainees and political prisoners are held separately from convicted prisoners.
According to an international observer, men and women are housed separately in the Puntland prison in Bosasso; this is the case in other prisons as well. Juveniles frequently are housed with adults in prisons. Custom allows parents to place children in prison without judicial proceedings.
The detainees' clans generally pay the costs of detention. In many areas, prisoners are able to receive food from family members or from relief agencies. Ethnic minorities make up a disproportionately large percentage of the prison population.
The Puntland Administration permits prison visits by independent monitors. Somaliland authorities permit prison visits by independent monitors, and such visits occurred during the year 2001. The Jumale Center for Human Rights visited prisons in Mogadishu during the year 2001.
The Great Depression in the United States began on October 29, 1929, a day known forever after as "Black Tuesday," when the American stock market–which had been roaring steadily upward for almost a decade–crashed, plunging the country into its most severe economic downturn yet. Speculators lost their shirts; banks failed; the nation’s money supply diminished; and companies went bankrupt and began to fire their workers in droves. Meanwhile, President Herbert Hoover urged patience and self-reliance: He thought the crisis was just "a passing incident in our national lives" that it wasn't the federal government's job to try and resolve. By 1932, one of the bleakest years of the Great Depression, at least one-quarter of the American workforce was unemployed. When President Franklin Roosevelt took office in 1933, he acted swiftly to try and stabilize the economy and provide jobs and relief to those who were suffering. Over the next eight years, the government instituted a series of experimental projects and programs, known collectively as the New Deal, that aimed to restore some measure of dignity and prosperity to many Americans. More than that, Roosevelt’s New Deal permanently changed the federal government's relationship to the U.S. populace.
Great Depression Leads to a New Deal for the American People
On March 4, 1933, at the height of the Great Depression, Franklin Roosevelt delivered his first inaugural address before 100,000 people on Washington's Capitol Plaza. "First of all," he said, "let me assert my firm belief that the only thing we have to fear is fear itself." He promised that he would act swiftly to face the "dark realities of the moment" and assured Americans that he would "wage a war against the emergency" just as though "we were in fact invaded by a foreign foe." His speech gave many people confidence that they'd elected a man who was not afraid to take bold steps to solve the nation's problems.
The next day, the new president declared a four-day bank holiday to stop people from withdrawing their money from shaky banks. On March 9, Congress passed Roosevelt's Emergency Banking Act, which reorganized the banks and closed the ones that were insolvent. In his first "fireside chat" three days later, the president urged Americans to put their savings back in the banks, and by the end of the month almost three quarters of them had reopened.
The First Hundred Days
Roosevelt's quest to end the Great Depression was just beginning. Next, he asked Congress to take the first step toward ending Prohibition—one of the more divisive issues of the 1920s—by making it legal once again for Americans to buy beer. (At the end of the year, Congress ratified the 21st Amendment and ended Prohibition for good.) In May, he signed the Tennessee Valley Authority Act into law, enabling the federal government to build dams along the Tennessee River that controlled flooding and generated inexpensive hydroelectric power for the people in the region. That same month, Congress passed a bill that paid commodity farmers (farmers who produced things like wheat, dairy products, tobacco and corn) to leave their fields fallow in order to end agricultural surpluses and boost prices. June’s National Industrial Recovery Act guaranteed that workers would have the right to unionize and bargain collectively for higher wages and better working conditions; it also suspended some antitrust laws and established a federally funded Public Works Administration.
In addition to the Agricultural Adjustment Act, the Tennessee Valley Authority Act, and the National Industrial Recovery Act, Roosevelt had won passage of 12 other major laws, including the Glass-Steagall Banking Bill and the Home Owners’ Loan Act, in his first 100 days in office. Almost every American found something to be pleased about and something to complain about in this motley collection of bills, but it was clear to all that FDR was taking the "direct, vigorous" action that he’d promised in his inaugural address.
The Second New Deal
Despite the best efforts of President Roosevelt and his cabinet, however, the Great Depression continued--the nation’s economy continued to wheeze; unemployment persisted; and people grew angrier and more desperate. So, in the spring of 1935, Roosevelt launched a second, more aggressive series of federal programs, sometimes called the Second New Deal. In April, he created the Works Progress Administration (WPA) to provide jobs for unemployed people. WPA projects weren’t allowed to compete with private industry, so they focused on building things like post offices, bridges, schools, highways and parks. The WPA also gave work to artists, writers, theater directors and musicians. In July 1935, the National Labor Relations Act, also known as the Wagner Act, created the National Labor Relations Board to supervise union elections and prevent businesses from treating their workers unfairly. In August, FDR signed the Social Security Act of 1935, which guaranteed pensions to millions of Americans, set up a system of unemployment insurance and stipulated that the federal government would help care for dependent children and the disabled.
In 1936, while campaigning for a second term, FDR told a roaring crowd at Madison Square Garden that “The forces of ‘organized money’ are unanimous in their hate for me—and I welcome their hatred.” He went on: “I should like to have it said of my first Administration that in it the forces of selfishness and of lust for power met their match, [and] I should like to have it said of my second Administration that in it these forces have met their master.” This FDR had come a long way from his earlier repudiation of class-based politics and was promising a much more aggressive fight against the people who were profiting from the Depression-era troubles of ordinary Americans. He won the election by a landslide.
Still, the Great Depression dragged on. Workers grew more militant: In December 1936, for example, the United Auto Workers started a sit-down strike at a GM plant in Flint, Michigan that lasted for 44 days and spread to some 150,000 autoworkers in 35 cities. By 1937, to the dismay of most corporate leaders, some 8 million workers had joined unions and were loudly demanding their rights.
The End of the New Deal?
Meanwhile, the New Deal itself confronted one political setback after another. Arguing that they represented an unconstitutional extension of federal authority, the conservative majority on the Supreme Court had already invalidated reform initiatives like the NRA and the AAA. In order to protect his programs from further meddling, in 1937 President Roosevelt announced a plan to add enough liberal justices to the Court to neutralize the “obstructionist” conservatives. This “Court-packing” turned out to be unnecessary–soon after they caught wind of the plan, the conservative justices started voting to uphold New Deal projects–but the episode did a good deal of public-relations damage to the administration and gave ammunition to many of the president’s Congressional opponents. That same year, the economy slipped back into a recession when the government reduced its stimulus spending. Despite this seeming vindication of New Deal policies, increasing anti-Roosevelt sentiment made it difficult for him to enact any new programs.
On December 7, 1941, the Japanese bombed Pearl Harbor and the United States entered World War II. The war effort stimulated American industry and, as a result, effectively ended the Great Depression.
The Great Depression and American Politics
From 1933 until 1941, President Roosevelt’s programs and policies did more than just adjust interest rates, tinker with farm subsidies and create short-term make-work programs. They created a brand-new, if tenuous, political coalition that included white working people, African Americans and left-wing intellectuals. These people rarely shared the same interests–at least, they rarely thought they did–but they did share a powerful belief that an interventionist government was good for their families, the economy and the nation. Their coalition has splintered over time, but many of the New Deal programs that bound them together–Social Security, unemployment insurance and federal agricultural subsidies, for instance–are still with us today.
THE CONSTITUTION OF THE CONFEDERATE STATES OF AMERICA
In 1861 six states of the USA seceded from the American union and declared themselves independent. They formed a new, rival country known as the Confederate States of America. In the months that followed, seven more states quickly followed suit, slicing the former United States of America into two clearly divided rival factions.
The Civil War that followed has become a major event of American historical lore. Countless books, reenactments, and commemorative plates have all been churned out in the decades since. But why was the Civil War even fought in the first place? Hell, why did the Confederacy even exist?
Modern-day Confederate apologists insist the South only separated in response to legitimate political grievances, claiming that the South was being unjustly pushed around and oppressed by a tyrannical federal government dominated by northern politicians who had no respect for "states' rights," federalism, and local sovereignty. Everyone else insists the Confederacy was founded for a much less noble reason, namely to ensure slavery could remain legal at a time when much of the country was uniting against the practice.
We can get a good glimpse into the founding principles of the Confederacy by taking an in-depth look at the Confederate constitution, which was approved and came into use by the rebel states on March 11, 1861. The document is largely a word-for-word copy of the United States constitution, but with several key changes. The changes offer the clearest window of insight into precisely how the CSA intended to be different from the USA, and why.
Before we get into a line-by-line comparison, I should point out the general, minor changes that occurred during the revision process:
- All references to the "United States" were changed to the "Confederate States;" references to the "Union" were changed to "Confederacy."
- The CSA constitution's punctuation, capitalization, and in some cases spelling are all updated from 18th Century to 19th Century English standards.
- The CSA constitution numbers its clauses. In most cases, each paragraph from the US constitution is numbered as a single clause, but in some cases the CSA merges multiple clauses into one big one, or breaks up long paragraphs into several smaller ones.
And now the chart. Note that in the CSA column red text indicates new additions to original US clauses.
|We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.||We, the people of the Confederate States, each State acting in its sovereign and independent character, in order to form a permanent federal government, establish justice, insure domestic tranquillity, and secure the blessings of liberty to ourselves and our posterity invoking the favor and guidance of Almighty God do ordain and establish this Constitution for the Confederate States of America.||The Confederacy's preamble more or less deleted any reference to collective interests, presumably because it ostensibly intended to be a country focused more on state independence than any sort of grander, national goal. The CSA does not promise to form a "perfect union," nor does it aspire to provide for the "common defense" or promote the "general welfare." It does, however, explicitly invoke God, so there would be no court challenges about the Pledge of Allegiance in an alternate CSA-won-the-Civil-War world.|
|Section. 1. [legislative branch]||Sec. 1. [legislative branch]|
|All legislative Powers herein granted shall be vested in a Congress of the United States, which shall consist of a Senate and House of Representatives.||All legislative powers herein delegated shall be vested in a Congress of the Confederate States, which shall consist of a Senate and House of Representatives.||Changed the word "granted" to "delegated," which I suppose makes the federal government seem slightly more gentle and conciliatory.|
|Section. 2. [House]||Sec. 2. [House]|
|The House of Representatives shall be composed of Members chosen every second Year by the People of the several States, and the Electors in each State shall have the Qualifications requisite for Electors of the most numerous Branch of the State Legislature.||(1) The House of Representatives shall be composed of members chosen every second year by the people of the several States; and the electors in each State shall be citizens of the Confederate States, and have the qualifications requisite for electors of the most numerous branch of the State Legislature; but no person of foreign birth, not a citizen of the Confederate States, shall be allowed to vote for any officer, civil or political, State or Federal.||The Confederacy explicitly declares that only citizens of the CSA can vote in elections. In the USA the individual states have the power to decide voter eligibility, so already here's one power that the supposedly more pro-"states' rights" Confederacy is actually taking away.|
|No Person shall be a Representative who shall not have attained to the Age of twenty five Years, and been seven Years a Citizen of the United States, and who shall not, when elected, be an Inhabitant of that State in which he shall be chosen.||(2) No person shall be a Representative who shall not have attained the age of twenty-five years, and be a citizen of the Confederate States, and who shall not when elected, be an inhabitant of that State in which he shall be chosen.||Since the CSA was just being created, the Confederacy could not demand that their Representatives be citizens for seven years. The USA could, because at the time their constitution was adopted the US had already existed for almost ten years under the Articles of Confederation.|
|Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers, which shall be determined by adding to the whole Number of free Persons, including those bound to Service for a Term of Years, and excluding Indians not taxed, three fifths of all other Persons. The actual Enumeration shall be made within three Years after the first Meeting of the Congress of the United States, and within every subsequent Term of ten Years, in such Manner as they shall by Law direct. The Number of Representatives shall not exceed one for every thirty Thousand, but each State shall have at Least one Representative; and until such enumeration shall be made, the State of New Hampshire shall be entitled to chuse three, Massachusetts eight, Rhode-Island and Providence Plantations one, Connecticut five, New-York six, New Jersey four, Pennsylvania eight, Delaware one, Maryland six, Virginia ten, North Carolina five, South Carolina five, and Georgia three.||(3) Representatives and direct taxes shall be apportioned among the several States, which may be included within this Confederacy, according to their respective numbers, which shall be determined by adding to the whole number of free persons, including those bound to service for a term of years, and excluding Indians not taxed, three-fifths of all slaves. The actual enumeration shall be made within three years after the first meeting of the Congress of the Confederate States, and within every subsequent term of ten years, in such manner as they shall by law direct. The number of Representatives shall not exceed one for every fifty thousand, but each State shall have at least one Representative; and until such enumeration shall be made, the State of South Carolina shall be entitled to choose six; the State of Georgia ten; the State of Alabama nine; the State of Florida two; the State of Mississippi seven; the State of Louisiana six; and the State of Texas six.||
This is a complicated clause detailing how to measure the population of the states. At the time, the US formally regarded slaves as only counting as "three fifths" of a person, which allowed the non-slave states to be over-represented in the Congress. The CSA kept this rule for some reason, probably to even out representatives among their slave-heavy and slave-light states.
The US constitution also bent over backwards to avoid using the term "slave" or "slavery" in the document, but the pro-slavery CSA apparently didn't have a problem calling a spade a spade.
Lastly, the the CSA appeared to aspire to have a smaller Congress, as Representatives could represent up to 50,000 people, while in the US the max is 30,000 per Congressman.
And obviously the founding states are different.
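To make the apportionment arithmetic concrete, here is a rough, hypothetical illustration; the population figures below are invented for the example, not census data.

```python
# Hypothetical illustration of the apportionment arithmetic above
# (invented figures, not census data).

def apportionment_population(free_persons, slaves, slave_weight=3/5):
    """Population counted toward House seats under a fractional slave-counting rule."""
    return free_persons + slave_weight * slaves

# A made-up state with 300,000 free inhabitants and 400,000 slaves:
counted = apportionment_population(300_000, 400_000)            # 540,000 under the 3/5 rule
counted_full = apportionment_population(300_000, 400_000, 1.0)  # 700,000 if slaves counted in full

# Ceiling on House seats that counted population could support
# ("shall not exceed one for every N" persons):
max_seats_us = int(counted // 30_000)   # 18 seats at the US ratio
max_seats_csa = int(counted // 50_000)  # 10 seats at the CSA ratio

print(counted, counted_full, max_seats_us, max_seats_csa)
```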
|When vacancies happen in the Representation from any State, the Executive Authority thereof shall issue Writs of Election to fill such Vacancies.||(4) When vacancies happen in the representation from any State the executive authority thereof shall issue writs of election to fill such vacancies.||No changes.|
|The House of Representatives shall chuse their Speaker and other Officers; and shall have the sole Power of Impeachment.||(5) The House of Representatives shall choose their Speaker and other officers; and shall have the sole power of impeachment; except that any judicial or other Federal officer, resident and acting solely within the limits of any State, may be impeached by a vote of two-thirds of both branches of the Legislature thereof.||
The CSA gave state legislatures the power to impeach federally-appointed state court judges and other federally-appointed state officials.
This matters because some federal judicial districts at the time (and today) exist entirely within a single state, yet state governments are powerless to control the judges who sit in them because they are federal employees. This change gives (certain) states more power over their presiding federal judges, which in turn blurs the line between federal and state judicial authority.
|Section. 3. [Senate]||Sec. 3. [Senate]|
|The Senate of the United States shall be composed of two Senators from each State, chosen by the Legislature thereof for six Years; and each Senator shall have one Vote.||(1) The Senate of the Confederate States shall be composed of two Senators from each State, chosen for six years by the Legislature thereof, at the regular session next immediately preceding the commencement of the term of service; and each Senator shall have one vote.||
The CSA clarifies that state legislatures will appoint senators at the last session before the Senator's term expires.
This prevented state legislatures from appointing a "reserve" senator to wait in the wings until the incumbent guy left office, as was common in some American states.
|Immediately after they shall be assembled in Consequence of the first Election, they shall be divided as equally as may be into three Classes. The Seats of the Senators of the first Class shall be vacated at the Expiration of the second Year, of the second Class at the Expiration of the fourth Year, and of the third Class at the Expiration of the sixth Year, so that one third may be chosen every second Year; and if Vacancies happen by Resignation, or otherwise, during the Recess of the Legislature of any State, the Executive thereof may make temporary Appointments until the next Meeting of the Legislature, which shall then fill such Vacancies.||(2) Immediately after they shall be assembled, in consequence of the first election, they shall be divided as equally as may be into three classes. The seats of the Senators of the first class shall be vacated at the expiration of the second year; of the second class at the expiration of the fourth year; and of the third class at the expiration of the sixth year; so that one-third may be chosen every second year; and if vacancies happen by resignation, or otherwise, during the recess of the Legislature of any State, the Executive thereof may make temporary appointments until the next meeting of the Legislature, which shall then fill such vacancies.||No changes.|
|No Person shall be a Senator who shall not have attained to the Age of thirty Years, and been nine Years a Citizen of the United States, and who shall not, when elected, be an Inhabitant of that State for which he shall be chosen.||(3) No person shall be a Senator who shall not have attained the age of thirty years, and be a citizen of the Confederate States; and who shall not, when elected, be an inhabitant of the State for which he shall be chosen.||Again, the CSA is too young to demand nine years of citizenship from its senators.|
|The Vice President of the United States shall be President of the Senate, but shall have no Vote, unless they be equally divided.||(4) The Vice President of the Confederate States shall be president of the Senate, but shall have no vote unless they be equally divided.||No changes.|
|The Senate shall chuse their other Officers, and also a President pro tempore, in the Absence of the Vice President, or when he shall exercise the Office of President of the United States.||(5) The Senate shall choose their other officers; and also a president pro tempore in the absence of the Vice President, or when he shall exercise the office of President of the Confederate states.||No changes.|
|The Senate shall have the sole Power to try all Impeachments. When sitting for that Purpose, they shall be on Oath or Affirmation. When the President of the United States is tried, the Chief Justice shall preside: And no Person shall be convicted without the Concurrence of two thirds of the Members present.||(6) The Senate shall have the sole power to try all impeachments. When sitting for that purpose, they shall be on oath or affirmation. When the President of the Confederate States is tried, the Chief Justice shall preside; and no person shall be convicted without the concurrence of two-thirds of the members present.||No changes.|
|Judgment in Cases of Impeachment shall not extend further than to removal from Office, and disqualification to hold and enjoy any Office of honor, Trust or Profit under the United States: but the Party convicted shall nevertheless be liable and subject to Indictment, Trial, Judgment and Punishment, according to Law.||(7) Judgment in cases of impeachment shall not extend further than to removal from office, and disqualification to hold any office of honor, trust, or profit under the Confederate States; but the party convicted shall, nevertheless, be liable and subject to indictment, trial, judgment, and punishment according to law.||No changes.|
|Section. 4.||Sect. 4.|
|The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof; but the Congress may at any time by Law make or alter such Regulations, except as to the Places of chusing Senators.||(1) The times, places, and manner of holding elections for Senators and Representatives shall be prescribed in each State by the Legislature thereof, subject to the provisions of this Constitution; but the Congress may, at any time, by law, make or alter such regulations, except as to the times and places of choosing Senators.||
The CSA adds a disclaimer that the state legislatures are bound by the federal constitution when creating rules for elections to the Senate and House. This ties back to Section 2(1) of the Confederate constitution, which demands that states grant voting rights only to citizens.
The CSA also takes away Congress's power to alter the time of choosing Senators, since the CSA constitution already sets out a specific timeframe for those appointments in Section 3(1).
|The Congress shall assemble at least once in every Year, and such Meeting shall be on the first Monday in December, unless they shall by Law appoint a different Day.||(2) The Congress shall assemble at least once in every year; and such meeting shall be on the first Monday in December, unless they shall, by law, appoint a different day.||No changes.|
|Section. 5.||Sect. 5.|
|Each House shall be the Judge of the Elections, Returns and Qualifications of its own Members, and a Majority of each shall constitute a Quorum to do Business; but a smaller Number may adjourn from day to day, and may be authorized to compel the Attendance of absent Members, in such Manner, and under such Penalties as each House may provide.||(1) Each House shall be the judge of the elections, returns, and qualifications of its own members, and a majority of each shall constitute a quorum to do business; but a smaller number may adjourn from day to day, and may be authorized to compel the attendance of absent members, in such manner and under such penalties as each House may provide.||No changes.|
|Each House may determine the Rules of its Proceedings, punish its Members for disorderly Behaviour, and, with the Concurrence of two thirds, expel a Member.||(2) Each House may determine the rules of its proceedings, punish its members for disorderly behavior, and, with the concurrence of two-thirds of the whole number, expel a member.||No changes.|
|Each House shall keep a Journal of its Proceedings, and from time to time publish the same, excepting such Parts as may in their Judgment require Secrecy; and the Yeas and Nays of the Members of either House on any question shall, at the Desire of one fifth of those Present, be entered on the Journal.||(3) Each House shall keep a journal of its proceedings, and from time to time publish the same, excepting such parts as may in their judgment require secrecy; and the yeas and nays of the members of either House, on any question, shall, at the desire of one-fifth of those present, be entered on the journal.||No changes.|
|Section. 6.||Sect. 6.|
|The Senators and Representatives shall receive a Compensation for their Services, to be ascertained by Law, and paid out of the Treasury of the United States. They shall in all Cases, except Treason, Felony and Breach of the Peace, be privileged from Arrest during their Attendance at the Session of their respective Houses, and in going to and returning from the same; and for any Speech or Debate in either House, they shall not be questioned in any other Place.||(1) The Senators and Representatives shall receive a compensation for their services, to be ascertained by law, and paid out of the Treasury of the Confederate States. They shall, in all cases, except treason, felony, and breach of the peace, be privileged from arrest during their attendance at the session of their respective Houses, and in going to and returning from the same; and for any speech or debate in either House, they shall not be questioned in any other place. No Senator or Representative shall, during the time for which he was elected, be appointed to any civil office under the authority of the Confederate States, which shall have been created, or the emoluments whereof shall have been increased during such time; and no person holding any office under the Confederate States shall be a member of either House during his continuance in office. But Congress may, by law, grant to the principal officer in each of the Executive Departments a seat upon the floor of either House, with the privilege of discussing any measures appertaining to his department.||
Two of the US constitution's original clauses are merged into one big clause here.
Most of the content is unchanged....
|No Senator or Representative shall, during the Time for which he was elected, be appointed to any civil Office under the Authority of the United States, which shall have been created, or the Emoluments whereof shall have been encreased during such time; and no Person holding any Office under the United States, shall be a Member of either House during his Continuance in Office.||
... but the CSA tacks on a bit at the end, which introduces a pseudo-parliamentary reform to the Congress. Under the CSA system, Congress can grant Cabinet Secretaries a seat on the floor of either the House or Senate to discuss measures relating to their departments.
This is actually not that different from what happens today, when Cabinet Secretaries can be summoned to answer questions before a Congressional committee.
|Section. 7.||Sect. 7.|
|All Bills for raising Revenue shall originate in the House of Representatives; but the Senate may propose or concur with Amendments as on other Bills.||(1) All bills for raising revenue shall originate in the House of Representatives; but the Senate may propose or concur with amendments, as on other bills.||No changes.|
|Every Bill which shall have passed the House of Representatives and the Senate, shall, before it become a Law, be presented to the President of the United States: If he approve he shall sign it, but if not he shall return it, with his Objections to that House in which it shall have originated, who shall enter the Objections at large on their Journal, and proceed to reconsider it. If after such Reconsideration two thirds of that House shall agree to pass the Bill, it shall be sent, together with the Objections, to the other House, by which it shall likewise be reconsidered, and if approved by two thirds of that House, it shall become a Law. But in all such Cases the Votes of both Houses shall be determined by yeas and Nays, and the Names of the Persons voting for and against the Bill shall be entered on the Journal of each House respectively. If any Bill shall not be returned by the President within ten Days (Sundays excepted) after it shall have been presented to him, the Same shall be a Law, in like Manner as if he had signed it, unless the Congress by their Adjournment prevent its Return, in which Case it shall not be a Law.||(2) Every bill which shall have passed both Houses, shall, before it becomes a law, be presented to the President of the Confederate States; if he approve, he shall sign it; but if not, he shall return it, with his objections, to that House in which it shall have originated, who shall enter the objections at large on their journal, and proceed to reconsider it. If, after such reconsideration, two-thirds of that House shall agree to pass the bill, it shall be sent, together with the objections, to the other House, by which it shall likewise be reconsidered, and if approved by two-thirds of that House, it shall become a law. But in all such cases, the votes of both Houses shall be determined by yeas and nays, and the names of the persons voting for and against the bill shall be entered on the journal of each House respectively. If any bill shall not be returned by the President within ten days (Sundays excepted) after it shall have been presented to him, the same shall be a law, in like manner as if he had signed it, unless the Congress, by their adjournment, prevent its return; in which case it shall not be a law. The President may approve any appropriation and disapprove any other appropriation in the same bill. In such case he shall, in signing the bill, designate the appropriations disapproved; and shall return a copy of such appropriations, with his objections, to the House in which the bill shall have originated; and the same proceedings shall then be had as in case of other bills disapproved by the President.||
This is the longest clause in the constitution and the Confederates added quite a bit to the end.
The bulk of the clause explains how the Congress can override the President's veto. The Confederates alter this a bit, and give the CSA President the power to approve certain parts of a bill into law, and reject other parts. Today this power is known as a "line-item veto." Many US state governors have such a power, but the American President does not.
|Every Order, Resolution, or Vote to which the Concurrence of the Senate and House of Representatives may be necessary (except on a question of Adjournment) shall be presented to the President of the United States; and before the Same shall take Effect, shall be approved by him, or being disapproved by him, shall be repassed by two thirds of the Senate and House of Representatives, according to the Rules and Limitations prescribed in the Case of a Bill.||(3) Every order, resolution, or vote, to which the concurrence of both Houses may be necessary (except on a question of adjournment) shall be presented to the President of the Confederate States; and before the same shall take effect, shall be approved by him; or, being disapproved by him, shall be repassed by two-thirds of both Houses, according to the rules and limitations prescribed in case of a bill.||One of the few minor meaningless wording changes.|
|Section. 8.||Sec. 8. The Congress shall have power|
|The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States; but all Duties, Imposts and Excises shall be uniform throughout the United States;||(1) To lay and collect taxes, duties, imposts, and excises for revenue, necessary to pay the debts, provide for the common defense, and carry on the Government of the Confederate States; but no bounties shall be granted from the Treasury; nor shall any duties or taxes on importations from foreign nations be laid to promote or foster any branch of industry; and all duties, imposts, and excises shall be uniform throughout the Confederate States.||
In the CSA constitution, Section 8 carries an official heading, "The Congress shall have power," whereas in the original that phrase simply opens the first clause and the section has no title.
The Confederates didn't mention "providing for the common defense" in their constitution's preamble, but they do here. "General welfare" is still omitted, however.
Lastly, the CSA essentially bans trade protectionism by saying that tariffs cannot be imposed on foreign goods for the sole purpose of protecting local industry. It also bans "bounties" from the Treasury, which at the time was the term used to describe government subsidies distributed to offset the costs of managing certain uncompetitive industries.
Southerners had often been priced out of cheaper foreign goods by such Yankee protectionist measures.
|To borrow Money on the credit of the United States;||(2) To borrow money on the credit of the Confederate States.||No changes.|
|To regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes;||(3) To regulate commerce with foreign nations, and among the several States, and with the Indian tribes; but neither this, nor any other clause contained in the Constitution, shall ever be construed to delegate the power to Congress to appropriate money for any internal improvement intended to facilitate commerce; except for the purpose of furnishing lights, beacons, and buoys, and other aids to navigation upon the coasts, and the improvement of harbors and the removing of obstructions in river navigation; in all which cases such duties shall be laid on the navigation facilitated thereby as may be necessary to pay the costs and expenses thereof.||
The Confederates added a ton here.
The changes forbid Congress from spending money to "facilitate commerce." This can be seen as an early attempt to limit the power of big business in politics; Congress was only supposed to fund infrastructure that served the interests of the states and the people, not industry. So probably no corporate bailouts during CSA recessions.
The only exception granted is for harbors and other waterway infrastructure. Sea-based trade was the Confederacy's big hope for financial survival.
|To establish an uniform Rule of Naturalization, and uniform Laws on the subject of Bankruptcies throughout the United States;||(4) To establish uniform laws of naturalization, and uniform laws on the subject of bankruptcies, throughout the Confederate States; but no law of Congress shall discharge any debt contracted before the passage of the same.||Addition: no law of Congress (a bankruptcy law, for instance) can wipe out debts contracted before that law was passed, so no retroactive debt relief.|
|To coin Money, regulate the Value thereof, and of foreign Coin, and fix the Standard of Weights and Measures;||(5) To coin money, regulate the value thereof, and of foreign coin, and fix the standard of weights and measures.||No changes.|
|To provide for the Punishment of counterfeiting the Securities and current Coin of the United States;||(6) To provide for the punishment of counterfeiting the securities and current coin of the Confederate States.||No changes.|
|To establish Post Offices and post Roads;||(7) To establish post offices and post routes; but the expenses of the Post Office Department, after the 1st day of March in the year of our Lord eighteen hundred and sixty-three, shall be paid out of its own revenues.||
The Confederates set a cut-off date, 1 March 1863, after which the Post Office Department had to cover its own expenses out of its own revenues rather than draw on the Treasury.
It's also worth noting that the Confederates use the term "year of our Lord" when referencing dates. The US constitution just says "the year."
|To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;||(8) To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.||No changes.|
|To constitute Tribunals inferior to the supreme Court;||(9) To constitute tribunals inferior to the Supreme Court.||No changes.|
|To define and punish Piracies and Felonies committed on the high Seas, and Offences against the Law of Nations;||(10) To define and punish piracies and felonies committed on the high seas, and offenses against the law of nations.||No changes.|
|To declare War, grant Letters of Marque and Reprisal, and make Rules concerning Captures on Land and Water;||(11) To declare war, grant letters of marque and reprisal, and make rules concerning captures on land and water.||No changes.|
|To raise and support Armies, but no Appropriation of Money to that Use shall be for a longer Term than two Years;||(12) To raise and support armies; but no appropriation of money to that use shall be for a longer term than two years.||No changes.|
|To provide and maintain a Navy;||(13) To provide and maintain a navy.||No changes.|
|To make Rules for the Government and Regulation of the land and naval Forces;||(14) To make rules for the government and regulation of the land and naval forces.||No changes.|
|To provide for calling forth the Militia to execute the Laws of the Union, suppress Insurrections and repel Invasions;||(15) To provide for calling forth the militia to execute the laws of the Confederate States, suppress insurrections, and repel invasions.||No changes. By keeping this clause the CSA essentially gives itself the right to fight its own Civil War someday.|
|To provide for organizing, arming, and disciplining, the Militia, and for governing such Part of them as may be employed in the Service of the United States, reserving to the States respectively, the Appointment of the Officers, and the Authority of training the Militia according to the discipline prescribed by Congress;||(16) To provide for organizing, arming, and disciplining the militia, and for governing such part of them as may be employed in the service of the Confederate States; reserving to the States, respectively, the appointment of the officers, and the authority of training the militia according to the discipline prescribed by Congress.||No changes.|
|To exercise exclusive Legislation in all Cases whatsoever, over such District (not exceeding ten Miles square) as may, by Cession of particular States, and the Acceptance of Congress, become the Seat of the Government of the United States, and to exercise like Authority over all Places purchased by the Consent of the Legislature of the State in which the Same shall be, for the Erection of Forts, Magazines, Arsenals, dock-Yards, and other needful Buildings;--And||(17) To exercise exclusive legislation, in all cases whatsoever, over such district (not exceeding ten miles square) as may, by cession of one or more States and the acceptance of Congress, become the seat of the Government of the Confederate States; and to exercise like authority over all places purchased by the consent of the Legislature of the State in which the same shall be, for the erection of forts, magazines, arsenals, dockyards, and other needful buildings; and||
The Confederacy makes the meaningless clarification that "one or more" states can give up territory to provide the country's capital district.
In practice the Confederacy never carved out such a district: its capital sat first in Montgomery and then in Richmond, both of which remained ordinary state territory.
|To make all Laws which shall be necessary and proper for carrying into Execution the foregoing Powers, and all other Powers vested by this Constitution in the Government of the United States, or in any Department or Officer thereof.||(18) To make all laws which shall be necessary and proper for carrying into execution the foregoing powers, and all other powers vested by this Constitution in the Government of the Confederate States, or in any department or officer thereof.||No changes. It's interesting to note how the Confederacy barely takes away any powers from the federal government.|
|Section. 9.||Sect. 9.|
|The Migration or Importation of such Persons as any of the States now existing shall think proper to admit, shall not be prohibited by the Congress prior to the Year one thousand eight hundred and eight, but a Tax or duty may be imposed on such Importation, not exceeding ten dollars for each Person.||(1) The importation of negroes of the African race from any foreign country other than the slaveholding States or Territories of the United States of America, is hereby forbidden; and Congress is required to pass such laws as shall effectually prevent the same.||
This clause is an updated version of what was originally a time-sensitive article in the US constitution. The original US Section 9(1), in its euphemistic language, stated that Congress could only ban the slave trade after 1808 (and they did).
The Confederate clause 9(1) makes this ban on the slave trade permanent, though slave trading with the US is still permitted.
|(2) Congress shall also have power to prohibit the introduction of slaves from any State not a member of, or Territory not belonging to, this Confederacy.||
This clause was a completely new addition, the first of a few.
It gives Congress the power to ban slave imports from specific US states, should they ever desire to do so. This clause is thus a clever loophole of sorts, in that it allows the CSA to ban slave imports from the US while simultaneously not contradicting clause 1.
|The Privilege of the Writ of Habeas Corpus shall not be suspended, unless when in Cases of Rebellion or Invasion the public Safety may require it.||(3) The privilege of the writ of habeas corpus shall not be suspended, unless when in cases of rebellion or invasion the public safety may require it.||No changes. Though Confederate apologists often bemoan the fact that the Yankee tyrant Lincoln suspended habeas corpus, there was nothing to stop the President of the Confederacy from doing the exact same thing.|
|No Bill of Attainder or ex post facto Law shall be passed.||(4) No bill of attainder, ex post facto law, or law denying or impairing the right of property in negro slaves shall be passed.||The most important clause in the entire CSA constitution: the right to own slaves.|
|No Capitation, or other direct, Tax shall be laid, unless in Proportion to the Census or enumeration herein before directed to be taken.||(5) No capitation or other direct tax shall be laid, unless in proportion to the census or enumeration hereinbefore directed to be taken.||No changes.|
|No Tax or Duty shall be laid on Articles exported from any State.||(6) No tax or duty shall be laid on articles exported from any State, except by a vote of two-thirds of both Houses.||The Confederate Congress gains a power its American counterpart lacks: by a two-thirds vote of both Houses it can lay taxes or duties on goods exported from any state.|
|No Preference shall be given by any Regulation of Commerce or Revenue to the Ports of one State over those of another; nor shall Vessels bound to, or from, one State, be obliged to enter, clear, or pay Duties in another.||(7) No preference shall be given by any regulation of commerce or revenue to the ports of one State over those of another.||The CSA ditches the last sentence of the American clause, thus giving its states the power to tax domestic ships that enter their ports.|
|No Money shall be drawn from the Treasury, but in Consequence of Appropriations made by Law; and a regular Statement and Account of the Receipts and Expenditures of all public Money shall be published from time to time.||(8) No money shall be drawn from the Treasury, but in consequence of appropriations made by law; and a regular statement and account of the receipts and expenditures of all public money shall be published from time to time.||No changes.|
|(9) Congress shall appropriate no money from the Treasury except by a vote of two-thirds of both Houses, taken by yeas and nays, unless it be asked and estimated for by some one of the heads of departments and submitted to Congress by the President; or for the purpose of paying its own expenses and contingencies; or for the payment of claims against the Confederate States, the justice of which shall have been judicially declared by a tribunal for the investigation of claims against the Government, which it is hereby made the duty of Congress to establish.||
The first of two new Confederate clauses that try to impose certain standards of fiscal responsibility on the legislative branch.
The CSA Congress can only appropriate cash (1) by a two-thirds vote of both Houses; (2) when the money is asked and estimated for by one of the heads of departments and submitted by the President; (3) for its own expenses and contingencies; or (4) to pay claims against the Confederate States whose justice has been judicially declared.
The document also demands that the Confederate Congress establish a tribunal to "investigate" the validity of such claims made against the CSA.
|(10) All bills appropriating money shall specify in Federal currency the exact amount of each appropriation and the purposes for which it is made; and Congress shall grant no extra compensation to any public contractor, officer, agent, or servant, after such contract shall have been made or such service rendered.||
Money bills must specify, in federal currency, the exact amount and purpose of every appropriation, and Congress cannot grant a public contractor, officer, agent, or servant a penny more once the contract has been made or the service rendered.
|No Title of Nobility shall be granted by the United States: And no Person holding any Office of Profit or Trust under them, shall, without the Consent of the Congress, accept of any present, Emolument, Office, or Title, of any kind whatever, from any King, Prince, or foreign State.||(11) No title of nobility shall be granted by the Confederate States; and no person holding any office of profit or trust under them shall, without the consent of the Congress, accept of any present, emolument, office, or title of any kind whatever, from any king, prince, or foreign state.||No changes.|
[Amendment I, see note at right]
|(12) Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble and petition the Government for a redress of grievances.||The CSA directly incorporates the Bill of Rights into its constitution, which only makes sense. The original Bill of Rights is just ten amendments tacked onto the end of the US constitution (which are what I am citing here). This part of the CSA constitution includes only the first eight amendments; the last two are included at the very end of the document.|
A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.
|(13) A well-regulated militia being necessary to the security of a free State, the right of the people to keep and bear arms shall not be infringed.||Though there are no changes per se, Second Amendment scholars in the US have long argued over the significance of the punctuation in this clause. The CSA's version gets rid of a few commas, which makes the language closer to what gun control advocates believe the amendment was supposed to say, namely that the right to keep and bear arms only exists if one belongs to a militia.|
No Soldier shall, in time of peace be quartered in any house, without the consent of the Owner, nor in time of war, but in a manner to be prescribed by law.
|(14) No soldier shall, in time of peace, be quartered in any house without the consent of the owner; nor in time of war, but in a manner to be prescribed by law.||No changes.|
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
|(15) The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated; and no warrants shall issue but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched and the persons or things to be seized.||No changes.|
No person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a Grand Jury, except in cases arising in the land or naval forces, or in the Militia, when in actual service in time of War or public danger; nor shall any person be subject for the same offence to be twice put in jeopardy of life or limb; nor shall be compelled in any criminal case to be a witness against himself, nor be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation.
|(16) No person shall be held to answer for a capital or otherwise infamous crime, unless on a presentment or indictment of a grand jury, except in cases arising in the land or naval forces, or in the militia, when in actual service in time of war or public danger; nor shall any person be subject for the same offense to be twice put in jeopardy of life or limb; nor be compelled, in any criminal case, to be a witness against himself; nor be deprived of life, liberty, or property without due process of law; nor shall private property be taken for public use, without just compensation.||No changes.|
In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him; to have compulsory process for obtaining witnesses in his favor, and to have the Assistance of Counsel for his defence.
|(17) In all criminal prosecutions the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him; to have compulsory process for obtaining witnesses in his favor; and to have the assistance of counsel for his defense.||No changes.|
In Suits at common law, where the value in controversy shall exceed twenty dollars, the right of trial by jury shall be preserved, and no fact tried by a jury, shall be otherwise re-examined in any Court of the United States, than according to the rules of the common law.
|(18) In suits at common law, where the value in controversy shall exceed twenty dollars, the right of trial by jury shall be preserved; and no fact so tried by a jury shall be otherwise reexamined in any court of the Confederacy, than according to the rules of common law.||No changes.|
Excessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted.
|(19) Excessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted.||No changes.|
|(20) Every law, or resolution having the force of law, shall relate to but one subject, and that shall be expressed in the title.||The Confederates add this little clause at the end of Section 9. This is quite an interesting addition, as it demands that all laws only relate to "one subject." This would prevent what we see today, where endless riders attached to bills can allow Congress to pass all sorts of extraordinarily complicated laws that regulate about 50 different topics at once.|
|Section. 10.||Sec. 10.|
|No State shall enter into any Treaty, Alliance, or Confederation; grant Letters of Marque and Reprisal; coin Money; emit Bills of Credit; make any Thing but gold and silver Coin a Tender in Payment of Debts; pass any Bill of Attainder, ex post facto Law, or Law impairing the Obligation of Contracts, or grant any Title of Nobility.||(1) No State shall enter into any treaty, alliance, or confederation; grant letters of marque and reprisal; coin money; make anything but gold and silver coin a tender in payment of debts; pass any bill of attainder, or ex post facto law, or law impairing the obligation of contracts; or grant any title of nobility.||The CSA deletes the words "emit bills of credit," thereby allowing its states to issue them. By the standards of the time, this could have given states the right to issue their own paper currency. Today, however, a "bill of credit" is usually just understood to be a government loan of some sort.|
|No State shall, without the Consent of the Congress, lay any Imposts or Duties on Imports or Exports, except what may be absolutely necessary for executing it's inspection Laws: and the net Produce of all Duties and Imposts, laid by any State on Imports or Exports, shall be for the Use of the Treasury of the United States; and all such Laws shall be subject to the Revision and Controul of the Congress.||(2) No State shall, without the consent of the Congress, lay any imposts or duties on imports or exports, except what may be absolutely necessary for executing its inspection laws; and the net produce of all duties and imposts, laid by any State on imports, or exports, shall be for the use of the Treasury of the Confederate States; and all such laws shall be subject to the revision and control of Congress.||No changes.|
|No State shall, without the Consent of Congress, lay any Duty of Tonnage, keep Troops, or Ships of War in time of Peace, enter into any Agreement or Compact with another State, or with a foreign Power, or engage in War, unless actually invaded, or in such imminent Danger as will not admit of delay.||(3) No State shall, without the consent of Congress, lay any duty on tonnage, except on seagoing vessels, for the improvement of its rivers and harbors navigated by the said vessels; but such duties shall not conflict with any treaties of the Confederate States with foreign nations; and any surplus revenue thus derived shall, after making such improvement, be paid into the common treasury. Nor shall any State keep troops or ships of war in time of peace, enter into any agreement or compact with another State, or with a foreign power, or engage in war, unless actually invaded, or in such imminent danger as will not admit of delay. But when any river divides or flows through two or more States they may enter into compacts with each other to improve the navigation thereof.||
The CSA threw a lot of qualifications into this one.
The Confederates were apparently quite eager to raise money by taxing ships that used their waterways, so this clause had to be rewritten to allow that.
The Confederate states also gain the power to make river-related treaties with each other. In the US, the federal government regulates bodies of water that overlap multiple states.
|Section. 1.||Section. 1.|
|The executive Power shall be vested in a President of the United States of America. He shall hold his Office during the Term of four Years, and, together with the Vice President, chosen for the same Term, be elected, as follows:||(1) The executive power shall be vested in a President of the Confederate States of America. He and the Vice President shall hold their offices for the term of six years; but the President shall not be reeligible. The President and Vice President shall be elected as follows:||
The Confederate President can only serve a single, six-year term, unlike the US President, who (at the time) could be re-elected forever.
Interestingly, the Confederate Vice President could be re-elected.
|Each State shall appoint, in such Manner as the Legislature thereof may direct, a Number of Electors, equal to the whole Number of Senators and Representatives to which the State may be entitled in the Congress: but no Senator or Representative, or Person holding an Office of Trust or Profit under the United States, shall be appointed an Elector.||(2) Each State shall appoint, in such manner as the Legislature thereof may direct, a number of electors equal to the whole number of Senators and Representatives to which the State may be entitled in the Congress; but no Senator or Representative or person holding an office of trust or profit under the Confederate States shall be appointed an elector.||No changes.|
|(3) The electors shall meet in their respective States and vote by ballot for President and Vice President, one of whom, at least, shall not be an inhabitant of the same State with themselves; they shall name in their ballots the person voted for as President, and in distinct ballots the person voted for as Vice President, and they shall make distinct lists of all persons voted for as President, and of all persons voted for as Vice President, and of the number of votes for each, which lists they shall sign and certify, and transmit, sealed, to the seat of the Government of the Confederate States, directed to the President of the Senate; the President of the Senate shall, in the presence of the Senate and House of Representatives, open all the certificates, and the votes shall then be counted; the person having the greatest number of votes for President shall be the President, if such number be a majority of the whole number of electors appointed; and if no person have such majority, then from the persons having the highest numbers, not exceeding three, on the list of those voted for as President, the House of Representatives shall choose immediately, by ballot, the President. But in choosing the President the votes shall be taken by States, the representation from each State having one vote; a quorum for this purpose shall consist of a member or members from two-thirds of the States, and a majority of all the States shall be necessary to a choice. And if the House of Representatives shall not choose a President, whenever the right of choice shall devolve upon them, before the 4th day of March next following, then the Vice President shall act as President, as in case of the death, or other constitutional disability of the President.||
The CSA constitution breaks this clause, originally from the US constitution's 12th amendment, into three parts, but it is otherwise unchanged.
|(4) The person having the greatest number of votes as Vice President shall be the Vice President, if such number be a majority of the whole number of electors appointed; and if no person have a majority, then, from the two highest numbers on the list, the Senate shall choose the Vice President; a quorum for the purpose shall consist of two-thirds of the whole number of Senators, and a majority of the whole number shall be necessary to a choice.|
|(5) But no person constitutionally ineligible to the office of President shall be eligible to that of Vice President of the Confederate States.|
|The Congress may determine the Time of chusing the Electors, and the Day on which they shall give their Votes; which Day shall be the same throughout the United States.||(6) The Congress may determine the time of choosing the electors, and the day on which they shall give their votes; which day shall be the same throughout the Confederate States.||No changes.|
|No Person except a natural born Citizen, or a Citizen of the United States, at the time of the Adoption of this Constitution, shall be eligible to the Office of President; neither shall any Person be eligible to that Office who shall not have attained to the Age of thirty five Years, and been fourteen Years a Resident within the United States.||(7) No person except a natural-born citizen of the Confederate States, or a citizen thereof at the time of the adoption of this Constitution, or a citizen thereof born in the United States prior to the 20th of December, 1860, shall be eligible to the office of President; neither shall any person be eligible to that office who shall not have attained the age of thirty-five years, and been fourteen years a resident within the limits of the Confederate States, as they may exist at the time of his election.||Once again, the Confederacy has to create various grandfather clauses since no one had been a citizen of the CSA prior to their constitution's ratification.|
|In Case of the Removal of the President from Office, or of his Death, Resignation, or Inability to discharge the Powers and Duties of the said Office, the Same shall devolve on the Vice President, and the Congress may by Law provide for the Case of Removal, Death, Resignation or Inability, both of the President and Vice President, declaring what Officer shall then act as President, and such Officer shall act accordingly, until the Disability be removed, or a President shall be elected.||(8) In case of the removal of the President from office, or of his death, resignation, or inability to discharge the powers and duties of said office, the same shall devolve on the Vice President; and the Congress may, by law, provide for the case of removal, death, resignation, or inability, both of the President and Vice President, declaring what officer shall then act as President; and such officer shall act accordingly until the disability be removed or a President shall be elected.||No changes.|
|The President shall, at stated Times, receive for his Services, a Compensation, which shall neither be increased nor diminished during the Period for which he shall have been elected, and he shall not receive within that Period any other Emolument from the United States, or any of them.||(9) The President shall, at stated times, receive for his services a compensation, which shall neither be increased nor diminished during the period for which he shall have been elected; and he shall not receive within that period any other emolument from the Confederate States, or any of them.||No changes.|
|Before he enter on the Execution of his Office, he shall take the following Oath or Affirmation:--"I do solemnly swear (or affirm) that I will faithfully execute the Office of President of the United States, and will to the best of my Ability, preserve, protect and defend the Constitution of the United States."||(10) Before he enters on the execution of his office he shall take the following oath or affirmation: "I do solemnly swear (or affirm) that I will faithfully execute the office of President of the Confederate States, and will, to the best of my ability, preserve, protect, and defend the Constitution thereof."||No changes beyond the name of the country.|
|Section. 2.||Sec. 2.|
|The President shall be Commander in Chief of the Army and Navy of the United States, and of the Militia of the several States, when called into the actual Service of the United States; he may require the Opinion, in writing, of the principal Officer in each of the executive Departments, upon any Subject relating to the Duties of their respective Offices, and he shall have Power to grant Reprieves and Pardons for Offences against the United States, except in Cases of Impeachment.||(1) The President shall be Commander-in-Chief of the Army and Navy of the Confederate States, and of the militia of the several States, when called into the actual service of the Confederate States; he may require the opinion, in writing, of the principal officer in each of the Executive Departments, upon any subject relating to the duties of their respective offices; and he shall have power to grant reprieves and pardons for offenses against the Confederate States, except in cases of impeachment.||No changes.|
|He shall have Power, by and with the Advice and Consent of the Senate, to make Treaties, provided two thirds of the Senators present concur; and he shall nominate, and by and with the Advice and Consent of the Senate, shall appoint Ambassadors, other public Ministers and Consuls, Judges of the supreme Court, and all other Officers of the United States, whose Appointments are not herein otherwise provided for, and which shall be established by Law: but the Congress may by Law vest the Appointment of such inferior Officers, as they think proper, in the President alone, in the Courts of Law, or in the Heads of Departments.||2) He shall have power, by and with the advice and consent of the Senate, to make treaties; provided two-thirds of the Senators present concur; and he shall nominate, and by and with the advice and consent of the Senate shall appoint, ambassadors, other public ministers and consuls, judges of the Supreme Court, and all other officers of the Confederate States whose appointments are not herein otherwise provided for, and which shall be established by law; but the Congress may, by law, vest the appointment of such inferior officers, as they think proper, in the President alone, in the courts of law, or in the heads of departments.||No changes.|
|(3) The principal officer in each of the Executive Departments, and all persons connected with the diplomatic service, may be removed from office at the pleasure of the President. All other civil officers of the Executive Departments may be removed at any time by the President, or other appointing power, when their services are unnecessary, or for dishonesty, incapacity, inefficiency, misconduct, or neglect of duty; and when so removed, the removal shall be reported to the Senate, together with the reasons therefor.||The Confederate President is given the power to fire pretty much any civil servant he wishes, from the Cabinet Secretaries on down. He must then inform the Senate of the reasons for the firing. The American President has these powers as well, but they are codified in the various laws establishing the cabinet departments and not in the constitution itself.|
|The President shall have Power to fill up all Vacancies that may happen during the Recess of the Senate, by granting Commissions which shall expire at the End of their next Session.||(4) The President shall have power to fill all vacancies that may happen during the recess of the Senate, by granting commissions which shall expire at the end of their next session; but no person rejected by the Senate shall be reappointed to the same office during their ensuing recess.||The CSA adds an additional check to prevent the President from exploiting recess appointments. If someone is rejected by the Senate, the President cannot weasel around it by just making that person a recess appointment. Bad news, John Bolton.|
|Section. 3.||Sec. 3.|
|He shall from time to time give to the Congress Information of the State of the Union, and recommend to their Consideration such Measures as he shall judge necessary and expedient; he may, on extraordinary Occasions, convene both Houses, or either of them, and in Case of Disagreement between them, with Respect to the Time of Adjournment, he may adjourn them to such Time as he shall think proper; he shall receive Ambassadors and other public Ministers; he shall take Care that the Laws be faithfully executed, and shall Commission all the Officers of the United States.||(1) The President shall, from time to time, give to the Congress information of the state of the Confederacy, and recommend to their consideration such measures as he shall judge necessary and expedient; he may, on extraordinary occasions, convene both Houses, or either of them; and in case of disagreement between them, with respect to the time of adjournment, he may adjourn them to such time as he shall think proper; he shall receive ambassadors and other public ministers; he shall take care that the laws be faithfully executed, and shall commission all the officers of the Confederate States.||The Confederates were kind enough to clarify as to who exactly this mysterious "he" is. I always assumed it was the Emperor of Peru.|
|Section. 4.||Sec. 4.|
|The President, Vice President and all civil Officers of the United States, shall be removed from Office on Impeachment for, and Conviction of, Treason, Bribery, or other high Crimes and Misdemeanors.||The President, Vice President, and all civil officers of the Confederate States, shall be removed from office on impeachment for and conviction of treason, bribery, or other high crimes and misdemeanors.||No changes.|
|Section. 1.||Sect. 1.|
|The judicial Power of the United States shall be vested in one supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish. The Judges, both of the supreme and inferior Courts, shall hold their Offices during good Behaviour, and shall, at stated Times, receive for their Services a Compensation, which shall not be diminished during their Continuance in Office.||Section 1. (1) The judicial power of the Confederate States shall be vested in one Supreme Court, and in such inferior courts as the Congress may, from time to time, ordain and establish. The judges, both of the Supreme and inferior courts, shall hold their offices during good behavior, and shall, at stated times, receive for their services a compensation which shall not be diminished during their continuance in office.||No changes.|
|Section. 2.||Sect. 2.|
|The judicial Power shall extend to all Cases, in Law and Equity, arising under this Constitution, the Laws of the United States, and Treaties made, or which shall be made, under their Authority;--to all Cases affecting Ambassadors, other public Ministers and Consuls;--to all Cases of admiralty and maritime Jurisdiction;--to Controversies to which the United States shall be a Party;--to Controversies between two or more States;-- between a State and Citizens of another State;--between Citizens of different States;--between Citizens of the same State claiming Lands under Grants of different States, and between a State, or the Citizens thereof, and foreign States, Citizens or Subjects.||(1) The judicial power shall extend to all cases arising under this Constitution, the laws of the Confederate States, and treaties made, or which shall be made, under their authority; to all cases affecting ambassadors, other public ministers and consuls; to all cases of admiralty and maritime jurisdiction; to controversies to which the Confederate States shall be a party; to controversies between two or more States; between a State and citizens of another State, where the State is plaintiff; between citizens claiming lands under grants of different States; and between a State or the citizens thereof, and foreign states, citizens, or subjects; but no State shall be sued by a citizen or subject of any foreign state.||
The modifications to this section are based on the 11th amendment to the US constitution, which clarified some issues regarding the federal courts' jurisdiction over the states.
CSA deletes the phrase "in law and equity" from the opening line.
The CSA also clarifies that a State can only enter into a lawsuit with citizens of another state when the state is the plaintiff, a concept designed to prevent states from being sued without their consent, an idea which the US constitution introduced in the 11th amendment.
They also reword the context in which citizens who are claiming multi-state land can sue. Originally the clause specifically says that this power is only available to "citizens of the same state" but the Confederates remove this qualifier, so that any citizen can sue.
Lastly, the CSA notes in this section that foreigners cannot sue the states.
The Judicial power of the United States shall not be construed to extend to any suit in law or equity, commenced or prosecuted against one of the United States by Citizens of another State, or by Citizens or Subjects of any Foreign State
|In all Cases affecting Ambassadors, other public Ministers and Consuls, and those in which a State shall be Party, the supreme Court shall have original Jurisdiction. In all the other Cases before mentioned, the supreme Court shall have appellate Jurisdiction, both as to Law and Fact, with such Exceptions, and under such Regulations as the Congress shall make.||(2) In all cases affecting ambassadors, other public ministers and consuls, and those in which a State shall be a party, the Supreme Court shall have original jurisdiction. In all the other cases before mentioned, the Supreme Court shall have appellate jurisdiction both as to law and fact, with such exceptions and under such regulations as the Congress shall make.||No changes. Federal courts remain the only judicial body allowed to resolve disputes between the states.|
|The Trial of all Crimes, except in Cases of Impeachment, shall be by Jury; and such Trial shall be held in the State where the said Crimes shall have been committed; but when not committed within any State, the Trial shall be at such Place or Places as the Congress may by Law have directed.||(3) The trial of all crimes, except in cases of impeachment, shall be by jury, and such trial shall be held in the State where the said crimes shall have been committed; but when not committed within any State, the trial shall be at such place or places as the Congress may by law have directed.||No changes.|
|Section. 3.||Sect. 3.|
|Treason against the United States, shall consist only in levying War against them, or in adhering to their Enemies, giving them Aid and Comfort. No Person shall be convicted of Treason unless on the Testimony of two Witnesses to the same overt Act, or on Confession in open Court.||(1) Treason against the Confederate States shall consist only in levying war against them, or in adhering to their enemies, giving them aid and comfort. No person shall be convicted of treason unless on the testimony of two witnesses to the same overt act, or on confession in open court.||No changes.|
|The Congress shall have Power to declare the Punishment of Treason, but no Attainder of Treason shall work Corruption of Blood, or Forfeiture except during the Life of the Person attainted.||(2) The Congress shall have power to declare the punishment of treason; but no attainder of treason shall work corruption of blood, or forfeiture, except during the life of the person attainted.||No changes.|
|Section. 1.||Sect. 1.|
|Full Faith and Credit shall be given in each State to the public Acts, Records, and judicial Proceedings of every other State. And the Congress may by general Laws prescribe the Manner in which such Acts, Records and Proceedings shall be proved, and the Effect thereof.||(1) Full faith and credit shall be given in each State to the public acts, records, and judicial proceedings of every other State; and the Congress may, by general laws, prescribe the manner in which such acts, records, and proceedings shall be proved, and the effect thereof.||No changes. The CSA still forces states to recognize the court rulings of other states.|
|Section. 2.||Sect. 2.|
|The Citizens of each State shall be entitled to all Privileges and Immunities of Citizens in the several States.||(1) The citizens of each State shall be entitled to all the privileges and immunities of citizens in the several States; and shall have the right of transit and sojourn in any State of this Confederacy, with their slaves and other property; and the right of property in said slaves shall not be thereby impaired.||Solidifying the right to slavery further, the CSA adds that the government cannot prohibit the rights of individuals to haul their slaves around the country as they so please.|
|A Person charged in any State with Treason, Felony, or other Crime, who shall flee from Justice, and be found in another State, shall on Demand of the executive Authority of the State from which he fled, be delivered up, to be removed to the State having Jurisdiction of the Crime.||(2) A person charged in any State with treason, felony, or other crime against the laws of such State, who shall flee from justice, and be found in another State, shall, on demand of the executive authority of the State from which he fled, be delivered up, to be removed to the State having jurisdiction of the crime.||CSA does a bit of odd meddling with this clause. By adding the qualifier "against the laws of such state" they seem to be implying that only criminals accused of a state offense can be extradited from one state to another. So if a guy committed a federal offense he could presumably not be extradited in this manner.|
|No Person held to Service or Labour in one State, under the Laws thereof, escaping into another, shall, in Consequence of any Law or Regulation therein, be discharged from such Service or Labour, but shall be delivered up on Claim of the Party to whom such Service or Labour may be due.||(3) No slave or other person held to service or labor in any State or Territory of the Confederate States, under the laws thereof, escaping or lawfully carried into another, shall, in consequence of any law or regulation therein, be discharged from such service or labor; but shall be delivered up on claim of the party to whom such slave belongs, or to whom such service or labor may be due.||
In both constitutions, this clause was supposed to prevent slaves from escaping into freedom in another state. It's what was invoked in the infamous Dred Scott case.
The Confederates simply strengthen and clarify the language.
|Section. 3.||Sect. 3.|
|New States may be admitted by the Congress into this Union; but no new State shall be formed or erected within the Jurisdiction of any other State; nor any State be formed by the Junction of two or more States, or Parts of States, without the Consent of the Legislatures of the States concerned as well as of the Congress.||(1) Other States may be admitted into this Confederacy by a vote of two-thirds of the whole House of Representatives and two-thirds of the Senate, the Senate voting by States; but no new State shall be formed or erected within the jurisdiction of any other State, nor any State be formed by the junction of two or more States, or parts of States, without the consent of the Legislatures of the States concerned, as well as of the Congress.||The Confederates make it a bit harder for new states to join their country, by demanding a two-thirds majority vote in both houses of Congress. In the US it just takes a simple majority.|
|The Congress shall have Power to dispose of and make all needful Rules and Regulations respecting the Territory or other Property belonging to the United States; and nothing in this Constitution shall be so construed as to Prejudice any Claims of the United States, or of any particular State.||(2) The Congress shall have power to dispose of and make all needful rules and regulations concerning the property of the Confederate States, including the lands thereof.||
The language in this clause is simplified a bit in the CSA version. In both versions the federal government is given jurisdiction over the physical lands and property possessed by the collective country.
|(3) The Confederate States may acquire new territory; and Congress shall have power to legislate and provide governments for the inhabitants of all territory belonging to the Confederate States, lying without the limits of the several Sates [sic]; and may permit them, at such times, and in such manner as it may by law provide, to form States to be admitted into the Confederacy. In all such territory the institution of negro slavery, as it now exists in the Confederate States, shall be recognized and protected be Congress and by the Territorial government; and the inhabitants of the several Confederate States and Territories shall have the right to take to such Territory any slaves lawfully held by them in any of the States or Territories of the Confederate States.||
Another new clause created for the Confederacy.
Like the United States, the CSA creates two tiers of local self-government in its federal system: territories and states. This clause simply clarifies that slavery is legal in the former as well as the latter, an issue that had often been debated in the antebellum United States.
|The United States shall guarantee to every State in this Union a Republican Form of Government, and shall protect each of them against Invasion; and on Application of the Legislature, or of the Executive (when the Legislature cannot be convened), against domestic Violence.||(4) The Confederate States shall guarantee to every State that now is, or hereafter may become, a member of this Confederacy, a republican form of government; and shall protect each of them against invasion; and on application of the Legislature or of the Executive (when the Legislature is not in session) against domestic violence.||No changes. The federal government retains the right to deploy troops to states when asked.|
|Section. 1.||Sect. 1.|
|The Congress, whenever two thirds of both Houses shall deem it necessary, shall propose Amendments to this Constitution, or, on the Application of the Legislatures of two thirds of the several States, shall call a Convention for proposing Amendments, which, in either Case, shall be valid to all Intents and Purposes, as Part of this Constitution, when ratified by the Legislatures of three fourths of the several States, or by Conventions in three fourths thereof, as the one or the other Mode of Ratification may be proposed by the Congress; Provided that no Amendment which may be made prior to the Year One thousand eight hundred and eight shall in any Manner affect the first and fourth Clauses in the Ninth Section of the first Article; and that no State, without its Consent, shall be deprived of its equal Suffrage in the Senate.||(1) Upon the demand of any three States, legally assembled in their several conventions, the Congress shall summon a convention of all the States, to take into consideration such amendments to the Constitution as the said States shall concur in suggesting at the time when the said demand is made; and should any of the proposed amendments to the Constitution be agreed on by the said convention voting by States and the same be ratified by the Legislatures of two-thirds of the several States, or by conventions in two-thirds thereof as the one or the other mode of ratification may be proposed by the general convention they shall thenceforward form a part of this Constitution. But no State shall, without its consent, be deprived of its equal representation in the Senate.||
The CSA method for making constitutional amendments is a bit different, but keeps the general spirit intact.
The biggest difference is that in the Confederacy the Congress has no role in passing amendments. It's all done by the state legislatures single-handedly.
In the CSA system it only takes three states to summon a constitutional convention, whereas in the US it takes the request of "two-thirds" of them. Likewise, in the CSA it only takes two-thirds of the states to ratify an amendment, while in the US it takes three-fourths.
Lastly, the CSA changes the final rule. In the US a state cannot be deprived of its equal suffrage in the Senate, but under the Confederacy it cannot be denied equal representation. So, theoretically the CSA could pass an amendment taking away Texas' right to vote in the Senate, so long as that amendment didn't take away their two Senators.
|Section. 1.||Sect. 1.|
|1. The Government established by this Constitution is the successor of the Provisional Government of the Confederate States of America, and all the laws passed by the latter shall continue in force until the same shall be repealed or modified; and all the officers appointed by the same shall remain in office until their successors are appointed and qualified, or the offices abolished.||The CSA indicates it has legal continuity with its previous provisional government.|
|All Debts contracted and Engagements entered into, before the Adoption of this Constitution, shall be as valid against the United States under this Constitution, as under the Confederation.||2. All debts contracted and engagements entered into before the adoption of this Constitution shall be as valid against the Confederate States under this Constitution, as under the Provisional Government.||No changes.|
|This Constitution, and the Laws of the United States which shall be made in Pursuance thereof; and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land; and the Judges in every State shall be bound thereby, any Thing in the Constitution or Laws of any State to the Contrary notwithstanding.||3. This Constitution, and the laws of the Confederate States made in pursuance thereof, and all treaties made, or which shall be made, under the authority of the Confederate States, shall be the supreme law of the land; and the judges in every State shall be bound thereby, anything in the constitution or laws of any State to the contrary notwithstanding.||No changes, except that the CSA inexplicably gets rid of the words "which shall be" in the first sentence.|
|The Senators and Representatives before mentioned, and the Members of the several State Legislatures, and all executive and judicial Officers, both of the United States and of the several States, shall be bound by Oath or Affirmation, to support this Constitution; but no religious Test shall ever be required as a Qualification to any Office or public Trust under the United States.||4. The Senators and Representatives before mentioned, and the members of the several State Legislatures, and all executive and judicial officers, both of the Confederate States and of the several States, shall be bound by oath or affirmation to support this Constitution; but no religious test shall ever be required as a qualification to any office or public trust under the Confederate States.||No changes.|
|The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.||5. The enumeration, in the Constitution, of certain rights shall not be construed to deny or disparage others retained by the people of the several States.||The last two amendments from the US Bill of Rights are incorporated into the end of the CSA constitution.|
|The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.||6. The powers not delegated to the Confederate States by the Constitution, nor prohibited by it to the States, are reserved to the States, respectively, or to the people thereof.||No changes.|
|Sect. 1.||Sect. 1.|
|The Ratification of the Conventions of nine States, shall be sufficient for the Establishment of this Constitution between the States so ratifying the Same.||1. The ratification of the conventions of five States shall be sufficient for the establishment of this Constitution between the States so ratifying the same.||Both constitutions' Article VII has to do with how the constitution is adopted.|
|2. When five States shall have ratified this Constitution, in the manner before specified, the Congress under the Provisional Constitution shall prescribe the time for holding the election of President and Vice President; and for the meeting of the Electoral College; and for counting the votes, and inaugurating the President. They shall, also, prescribe the time for holding the first election of members of Congress under this Constitution, and the time for assembling the same. Until the assembling of such Congress, the Congress under the Provisional Constitution shall continue to exercise the legislative powers granted them; not extending beyond the time limited by the Constitution of the Provisional Government.||The CSA established a provisional constitution immediately after its founding. That document remained in force until the interim Congress set elections for a permanent Congress and President; those elections were held in November 1861, and the permanent government took office in February 1862.|
Overall, the CSA constitution does not radically alter the federal system that was set up under the United States constitution. It is thus very debatable whether the CSA was a significantly more pro-"states' rights" country (as supporters claim) in any meaningful sense. At least three states' rights are explicitly taken away: the freedom of states to grant voting rights to non-citizens, the freedom of states to outlaw slavery within their borders, and the freedom of states to trade freely with each other.
States gain only four minor rights under the Confederate system: the power to enter into treaties with other states to regulate waterways, the power to tax foreign and domestic ships that use their waterways, the power to impeach federally appointed state officials, and the power to distribute "bills of credit." When people champion the cause of reclaiming state power from the feds, are matters like these at the tops of their lists of priorities?
As previously noted, the CSA constitution does not modify many of the most controversial (from a states' rights perspective) clauses of the American constitution, including the "Supremacy" clause (6-1-3), the "Commerce" clause (1-8-3) and the "Necessary and Proper" clause (1-8-18). Nor does the CSA take away the federal government's right to suspend habeas corpus or "suppress insurrections."
As far as slave-owning rights go, however, the document is much more effective. Indeed, the CSA constitution seems to barely stop short of making owning slaves mandatory. Four different clauses entrench the legality of slavery in a number of different ways, and together they virtually guarantee that any sort of future anti-slavery law or policy will be unconstitutional. People can claim the Civil War was "not about slavery" until the cows come home, but the fact remains that anyone who fought for the Confederacy was fighting for a country in which a universal right to own slaves was one of the most entrenched laws of the land.
In the end, however, many of the most interesting changes introduced in the CSA constitution have nothing to do with federalism or slavery at all. The President's term limit and line-item veto, along with the various fiscal restraints, and the ability of cabinet members to answer questions on the floor of Congress are all innovative, neutral ideas whose merits may still be worth pondering today.
What Is the Human Genome?
The complete supply of DNA--all the genes and spaces in between--in all the chromosomes of a species is called its genome. Except for red blood cells, which have no nucleus, the human genome is located in the nucleus of every cell in the body. There it is organized into 46 very large molecules called chromosomes; 44 are called autosomes and 2 are called the sex chromosomes.
An international collaboration known as the Human Genome Project has identified every chemical base in the human genome and has discovered that there are about 25,000 genes present.
Cancer genomics is the study of the human cancer genome. It is a search within "cancer families" and patients for the full collection of genes and mutations--both inherited and sporadic--that contribute to the development of a cancer cell and its progression from a localized cancer to one that grows uncontrolled and metastasizes (spreads throughout the body).
A Sample Human Genome
A human karyotype is a display of an individual's genome. It shows all the chromosomes present in an individual after they have been stained and arranged in pairs called homologs. A karyotype that contains both an X and a Y chromosome is a male karyotype.
The centromere of a chromosome is the region that separates the two arms. The arm above the centromere, which is shorter, is called the p arm, while the longer arm is the q arm.
Genes: Keepers of the Code
The 25,000 genes scattered throughout the human chromosomes comprise only about 3 percent of the total genome. These genes hold information critical to all human life. While all the component bases in a gene are copied as information leaves the nucleus, not all this information is kept. This is because within a gene there are both coding and noncoding stretches of bases. For example, in split genes, coding sections called exons supply the genetic instructions that are copied to direct protein building. These sections are preserved, but other noncoding sections within the gene, called introns, are rapidly removed and degraded.
Close to each gene is a "regulatory" sequence of DNA, which is able to turn the gene "on" or "off." Farther away, there are enhancer regions, which can speed up a gene's activity.
The massive DNA molecules known as chromosomes also have many noncoding regions located outside the genes. These contain large stretches of repetitive sequences. Some of the sequences in these locations are involved in the regulation of gene expression, and others simply act as spacers. Still other regions have functions as yet undiscovered.
Genes to mRNA to Proteins
When a gene "switches on," it eventually makes a protein, but it does not do so directly. First, the gene codes for an intermediary molecule called mRNA. To transfer a gene's information from DNA to mRNA, base pairing is used. However, there is one change: An adenine base (A) in the DNA matches with a new base called uracil (U) in the mRNA. This difference helps to distinguish mRNA from DNA.
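The base-pairing rule can be illustrated with a short sketch. The following code is not from the source article; the DNA sequence is invented, and strand directionality is ignored for simplicity. It simply applies the pairing described above (A to U, T to A, G to C, C to G).

```python
# Minimal illustration (not from the article): building an mRNA string from a
# DNA template strand using the base-pairing rule described above.
# The sequence is invented and 5'/3' directionality is ignored for simplicity.

DNA_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_strand: str) -> str:
    """Pair each DNA base with its mRNA partner (A->U, T->A, G->C, C->G)."""
    return "".join(DNA_TO_MRNA[base] for base in template_strand)

print(transcribe("TACGGCATTACT"))   # AUGCCGUAAUGA
```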
mRNA travels from the nucleus into the cytoplasm to cell organelles called ribosomes. There it directs the assembly of amino acids that fold into a unique protein.
RNA Processing Before Translation
Before mRNA leaves the nucleus, it undergoes further processing. The regions not involved in building proteins, called introns, are cut out of the message. The mature RNA that arrives at a ribosome contains only exons that will be used to build a protein in a process called translation.
The translation of base sequences from DNA to protein is dependent on the nucleotide triplets in mRNA. Each mRNA triplet of nucleotides, called a codon, codes for a single amino acid, and, ultimately, a string of amino acids makes up a protein. Since mRNA is built from only four nucleotide bases, read three at a time, 64 (4 × 4 × 4) possible codons are available to code for 20 amino acids. So there is great redundancy. There are 60 mRNA triplets for 19 amino acids, 3 triplets for "stop," and 1 triplet to call for methionine, the 20th amino acid, which also signals "start." Most amino acids are coded for by more than one triplet codon. However, each triplet is linked to only one amino acid. (For more information on how genes build proteins, please see Genetic Variation.)
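The 64-codon arithmetic above is easy to check computationally. This sketch is not from the article; the codon assignments shown are a small excerpt of the standard genetic code, included only to make the redundancy concrete.

```python
# Sketch (not from the article): the arithmetic behind codon redundancy.
# Four bases read three at a time give 4**3 = 64 possible codons, yet only
# 20 amino acids (plus stop signals) need to be encoded.
from itertools import product

BASES = "UCAG"
codons = ["".join(triplet) for triplet in product(BASES, repeat=3)]
print(len(codons))   # 64

# Excerpt of the standard codon table, enough to show the redundancy:
# several different codons can specify the same amino acid.
EXCERPT = {
    "UUU": "Phe", "UUC": "Phe",
    "CCU": "Pro", "CCC": "Pro", "CCA": "Pro", "CCG": "Pro",
    "AUG": "Met",   # also serves as the "start" signal
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}
print(sorted(c for c, aa in EXCERPT.items() if aa == "Pro"))
# ['CCA', 'CCC', 'CCG', 'CCU']  -> four codons, one amino acid
```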
All mutations are changes in the normal base sequence of DNA. These changes may occur in either coding or noncoding regions. Mutations may be silent and have no effect on the resulting protein. This is especially true if they occur in noncoding regions of the DNA. But even base pair changes in the coding region may be silent because of the redundancy of the code. For example, a mutation within a codon may occur, yet still call for the same amino acid as was called for earlier.
Mutations may involve a single base change--called a point mutation--or may involve larger sections of DNA through deletions, insertions, or translocations.
Mutations: Somatic and Germline
Most cancers arise from several genetic mutations that accumulate in cells of the body over a person's lifespan. These are called somatic mutations, and the genes involved are usually located on autosomes (non-sex chromosomes). Cancer may also have a germline mutation component, meaning that the mutations occur in germ cells, better known as the ovum or sperm. Germline mutations may occur de novo (for the first time) or be inherited from parents' germ cells. Examples of germline mutations linked to cancer are those that occur in cancer susceptibility genes, increasing a person's risk for the disease.
Tumors Are Clonal
Each cell, when it divides, generates two identical new ones. So, when a cell acquires a mutation, it passes that mutation on to its progeny during cell growth and division. Because cells with cancer-linked mutations tend to proliferate more than normal cells, cellular candidates for additional mutations grow in number. Mutations continue to accumulate and are copied to descendant cells. If one cell finally acquires enough mutations to become cancerous, subsequent cancer cells will be derived from that one single transformed cell. So all tumors are clonal, which means that they originate from a single parent cell, whether that first mutant cell was of germline or somatic origin.
The majority of human cancers result from an accumulation of somatic mutations. Somatic mutations are not passed on to the next generation. An 80-year cancer-free lifespan is no small accomplishment. It requires as many as 10 million billion body cells to copy themselves correctly. It is easy to see how random errors can occur. These changes are acquired during a person's lifetime from exposures to carcinogens and other mutagens, or from random unrepaired errors that occur during routine cell growth and division. Occasionally, one of these somatic mutations alters the function of some critical genes, providing a growth advantage to the cell in which it has occurred. A clone then arises from that single cell.
Mitosis and Somatic Mutations
Normal human cells with a nucleus having 23 pairs of chromosomes are called diploid or 2N to indicate these homolog pairs. During the cell growth cycle for body (somatic) cells, the DNA of all 23 pairs--46 chromosomes--copies itself (4N). When the cell next divides by a process called mitosis, each daughter cell ends up with 23 pairs or 2N, a complete set of chromosomes. If a mutation occurs during the process of mitosis, only the offspring of the mutated somatic cell will have the alteration present.
Meiosis and Germline Mutations
Unlike other human body cells, maturing germ cells, like ova or sperm, must cut their chromosome number from 46 to 23, from 2N to N. They do this through two specialized cell divisions in a process called meiosis. After meiosis is complete, each germ cell has only one-half of the 44 original body chromosomes (or autosomes) plus either an X or a Y sex chromosome.
Recombination: Crossing Over
Early in meiosis, each chromosome pair copies itself. These homologs are all attached at the centromere and are wound very tightly around one another. Right before duplicate sets of homologs pull apart and move toward a different end of the cell to complete the first division, recombination can occur, as the intertwined genetic material separates. Then, later in meiosis, a second division occurs, and even the chromosomes within a homolog move apart, leaving only a haploid number (n) in each ovum or sperm. If mutations occur during meiosis, either in the ova or sperm, these will be germline mutations.
If mutated ova or sperm then go on to fertilization, their germline mutations will pass to every somatic cell in the new individual.
De Novo Mutations
Inherited mutations had to start somewhere, and that somewhere is a de novo mutation. A de novo mutation is a new mutation that occurs in a germ cell and is then passed on to an offspring. All germline mutations started as a de novo mutation in some ancestor. De novo mutations are common in a few inherited cancer susceptibility syndromes.
Point mutations, single base changes in DNA sequences, are the most common type of alteration in DNA. They can have varying effects on the resulting protein.
A missense point mutation substitutes one nucleotide for a different one, but leaves the rest of the code intact. The impact of these point mutations depends on the specific amino acid that is changed and the protein sequence that results. If the change is critical to the protein's catalytic site or to its folding, damage may be severe.
Nonsense mutations are point mutations that change an amino acid codon to one of the three stop codons, which results in premature termination of the protein. Nonsense mutations may be caused by single base pair substitutions or by frameshift mutations.
Another type of mutation that can occur is a frameshift mutation. When a gene is copied, the action begins in the nucleus. There an mRNA strand copies the DNA strand exactly. It codes for a protein precisely, leaving no gaps or spaces separating the triplets. This set of connected triplets is called the reading frame. A frameshift mutation is caused by the addition or loss of a nucleotide, or nucleotides, in a number that is not a multiple of three. This alters the content of every triplet codon that follows in the reading frame. Frameshift mutations usually result in a shortened, abnormal, or nonfunctional protein, and they can create an early STOP codon downstream. If the number of added or missing base pairs is a multiple of three, the reading frame is preserved; whole amino acids are added or removed instead, and the effect on the protein's function depends on the extent of these alterations.
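A short sketch can make the reading-frame idea concrete. The sequences below are invented for illustration and are not from the article; the point is only that a one-base insertion scrambles every downstream codon, while a three-base insertion preserves the frame.

```python
# Sketch (not from the article): how insertions affect the reading frame.
# The mRNA strings are invented; only the codon boundaries matter here.

def codons(mrna: str) -> list[str]:
    """Split an mRNA string into consecutive triplet codons (frame 0)."""
    usable = len(mrna) - len(mrna) % 3
    return [mrna[i:i + 3] for i in range(0, usable, 3)]

original = "AUGGCUGAAUUC"
print(codons(original))        # ['AUG', 'GCU', 'GAA', 'UUC']

# One inserted base: every downstream codon changes, and a premature
# stop codon (UGA) appears.
frameshift = "AUG" + "C" + "GCUGAAUUC"
print(codons(frameshift))      # ['AUG', 'CGC', 'UGA', 'AUU']

# Three inserted bases: one codon is added, but the frame is preserved.
in_frame = "AUG" + "CCC" + "GCUGAAUUC"
print(codons(in_frame))        # ['AUG', 'CCC', 'GCU', 'GAA', 'UUC']
```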
Splice-site mutations occur within genes in the noncoding regions (introns) just next to the coding regions (exons). They can have profound effects on the resulting protein, which may lead to disease. Before mRNA leaves the nucleus, the introns are removed and the exons are joined together. This process is called splicing. Splicing is controlled by specific intron sequences, called splice-donor and splice-acceptor sequences, which flank the exons. Mutations in these sequences may lead to retention of large segments of intronic DNA by the mRNA, or to entire exons being spliced out of the mRNA. These changes could result in production of a nonfunctional protein.
Although mutations in the noncoding region are generally silent, that is not always the case. Some of the most important regulatory regions are in the 5' noncoding flanking region of the gene. Promoter sequences that regulate the gene are located there. Also, enhancer sequences that regulate the rate of gene activity are in noncoding regions a considerable distance from the gene. And gene repressor regions, which negatively regulate gene activity, also exist. Mutations in any of these regions can change the rate of protein production.
Her2 protein expression is a good example of how gene amplification can have a regulatory impact upon a tumor's growth. In breast cancer, overexpression of Her2 protein results from gene amplification in chromosome 17. This increase in production of growth-signaling molecules speeds up the rate of the cancer's progress.
SNPs: Frequently Occurring Genetic Variants
There are over a million single nucleotide polymorphisms (SNPs) in the human genome. SNPs are specific sites within a human genome at which some individuals will have one nucleotide present while other individuals will have a different one. SNPs begin their existence as point mutations, and they eventually become established in a population. This substitution must occur in a significant proportion (more than 1 percent) of a large population for it to be called a SNP. Here is an example: In the DNA sequence TAGC, a SNP occurs when the G base changes to a C, and the sequence becomes TACC. When SNPs occur within a gene, the protein that results usually remains somewhat functional.
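The TAGC-to-TACC example can be generalized into a tiny comparison function. This is an illustrative sketch, not from the article; note that a real SNP call also requires the variant to appear in more than 1 percent of a population, which a pairwise comparison alone cannot establish.

```python
# Sketch (not from the article): locating single-base differences between two
# aligned sequences, using the TAGC -> TACC example from the text.
# Note: a difference only counts as a SNP if it is common in a population
# (more than 1 percent), which this pairwise check cannot determine by itself.

def single_base_differences(seq_a: str, seq_b: str) -> list[tuple[int, str, str]]:
    """Return (position, base_in_a, base_in_b) for every mismatched position."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned and of equal length")
    return [(i, a, b) for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]

print(single_base_differences("TAGC", "TACC"))   # [(2, 'G', 'C')]
```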
Large Deletions or Insertions
Large deletions or insertions in a chromosome also may lead to cancer. These may occur during mitosis or during recombination in meiosis. Translocations occur when segments of one chromosome break off and fuse to a different chromosome, without any loss of genetic material. Many of these have been found to enable tumor development. Inversions are mutations that arise when two breaks occur in a chromosome and the piece is reinserted in reversed order. Other chromosomal abnormalities include nondisjunction, the failure of the homologs (chromosome pairs) to separate as new cells divide.
Example: Translocation of Bcr-Abl Genes
In chronic myelogenous leukemia, a translocation occurs between chromosomes 9 and 22. This rearrangement of genomic material creates a fusion gene called Bcr-Abl that produces a protein (tyrosine kinase) thought to promote the development of leukemia. The drug Gleevec blocks the activation of the Bcr-Abl protein.
Cancer-associated mutations, whether somatic or germline, whether point mutations or large deletions, alter key proteins and their functions in the human biosystem. A wide variety of mutations seems to be involved. Even mutations in noncoding regions, such as in promoters, enhancers, or negative regulatory regions, can result in under- or overexpression of proteins needed for normalcy. Other mutations may cause production of important checkpoint proteins to malfunction. Collectively, these mutations conspire to change a genome from normal to cancerous.
Genotypes and Phenotypes
Cancer may start as a new genotype, that is, as a change in the genetic makeup of a person, but it ultimately produces a new phenotype as well. A phenotype is the physical manifestation of a genotype in the form of a trait or disease. Cancer is known for its ever-changing genotypes and phenotypes.
All genotypes are not created equal in their influence on phenotype. Genes come in many varieties called alleles, and some are more dominant than others. In a pair of alleles, the effect of a dominant allele prevails over the effect of a recessive allele. And the effects of a recessive allele become apparent only if the dominant allele becomes inactivated or lost.
Same Allele, Different Locus, Different Phenotype
Different mutations in the same gene can result in different phenotypes. A good example is the RET proto-oncogene. Germline mutations of RET lead to multiple endocrine neoplasia (MEN) type 2. The disease produced varies depending on where in the RET gene the germline mutation sits, so the phenotype may be MEN-2A, MEN-2B, or familial medullary thyroid cancer.
Different Locus, Different Allele, Same Phenotype
Many cancer susceptibility syndromes are genetically heterogeneous (a mixture), which means that different mutations (genotypes) can be expressed as the same phenotype (e.g., cancer). These different mutations may be located within the same gene but at different locations (allelic heterogeneity) or on different genes altogether (locus heterogeneity). For example, hereditary breast and ovarian cancer susceptibility has both locus and allelic heterogeneity. More than 500 different mutations have been identified that can occur in the BRCA1 gene on chromosome 17 and increase a woman's risk for breast cancer. And more than 300 mutations scattered throughout the BRCA2 gene on chromosome 13 are associated with hereditary breast and ovarian cancer susceptibility.
Sometimes one person with a dominant allele will express a trait, yet that same genotype in another person will remain silent. This is an example of differences in penetrance. In classic Mendelian genetics, if an individual carries a dominant allele, the trait will be expressed (genotype = phenotype). However, if all carriers of a certain dominant allele in a population do not express the trait (same genotypes/different phenotypes), the gene is said to have incomplete penetrance.
Factors Influencing Penetrance
Modifier genes affect the expression of some alleles, which may increase or decrease the penetrance of a germline mutation such as an altered cancer susceptibility allele. Penetrance may also be affected by mutations in DNA damage response genes, whose normal function is to recognize and repair genetic damage. If repair malfunctions, mutations may accumulate in other genes, increasing the likelihood that a given cell will progress to cancer.
Penetrance is usually age related, meaning that the trait is not expressed in most carriers at birth but occurs with increased frequency as the carriers get older. For example, germline mutations in mismatch repair genes associated with hereditary nonpolyposis colorectal cancer (HNPCC) are incompletely penetrant. So not all individuals who carry these mutations will get colorectal cancer, but the risk increases as individuals age. About 20 percent of carriers will never develop colorectal cancer.
Epigenetic Factors and Penetrance
Epigenetic factors are mechanisms outside the gene such as a cell's exposure to carcinogens or hormones, or genetic variations that modify a gene or its protein by methylation, demethylation, phosphorylation, or dephosphorylation. These factors can alter what is ultimately expressed; they can change a phenotype. For example, hormone and reproductive factors may influence the penetrance of certain cancer-linked mutations. Breast and ovarian cancer are more likely to occur in women with early menarche, late menopause, and a first child after age 30 (or no children at all). These factors are believed to be linked to a woman's exposure to estrogen and progesterone and their effects on cell differentiation in the breast that occur during pregnancy.
In cancer, both the genotype and the phenotype change over time. Epigenetic factors play a key role in these changes.
Epigenetic Example: Methylation Alters Gene Expression
Methylation of the genome can render areas silent. There are two types of methylation that occur. Maintenance methylation adds methyl groups to newly synthesized strands of DNA at spots opposite methylated sites on the parent strand. This activity makes sure that daughter molecules of DNA maintain a methylation pattern after cell division. There is also de novo methylation, which can add methyl groups to totally new positions and change the pattern in a localized region of the genome.
Genes that must be expressed in all tissues have unmethylated regions, called CpG islands, located upstream. On the other hand, genes that must be turned off in differentiated tissues have these islands methylated. This allows a histone deacetylase complex nicknamed HDAC to bind, compress the shape of the genomic material, and inactivate the gene.
Imprinting Alters Gene Expression
Genomic imprinting is an uncommon event in human genomes that occurs when only one of a pair of genes present on homologous chromosomes is expressed because the other has been silenced by methylation. Thirty genes in humans display such imprinting. Curiously, for specific genes, the maternal copy is the one chosen to be silenced; for others, the paternal copy is selected.
Carrier frequency describes the prevalence in a given population of germline mutations in a specific gene. A mutation carrier is sometimes called a heterozygote because two different alleles are present at a given locus--one with a germline mutation and one normal allele. If, for example, 2 out of 10 individuals carry a mutated allele at a particular gene locus, the carrier frequency is 20 percent.
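The 20 percent figure is simply a ratio, as the following small helper shows. This sketch is not from the article.

```python
# Sketch (not from the article): carrier frequency as a simple ratio,
# reproducing the 2-in-10 example above.

def carrier_frequency(carriers: int, sample_size: int) -> float:
    """Fraction of sampled individuals who carry one mutated allele."""
    return carriers / sample_size

print(f"{carrier_frequency(2, 10):.0%}")   # 20%
```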
Prevalence and Founder Effect
Some populations have a higher prevalence of specific cancer-associated alleles than others. This may result from a founder effect, which occurs when a population undergoes rapid shrinkage and then expansion in an isolated setting. In a population that is geographically or reproductively isolated, an individual called a founder carries or develops a germline mutation that is rare in the general population.
Example: Founder Effect in Ashkenazi Jewish Population
Because of reproductive isolation, later generations of an isolated population will have a higher frequency of a mutation than the original population. For example, Ashkenazi Jews were segregated from the rest of the population and lived in separate communities for hundreds of years. Today, about one percent of the Ashkenazi Jewish population carries a 185delAG mutation in BRCA1, which places them at higher than average risk for breast and ovarian cancer.
Mutations in Cancer Susceptibility Genes: BRCA1
Here is an example of the mutations seen in the BRCA1 breast cancer susceptibility gene.
Individuals who inherit these cancer-predisposing germline mutations carry their mutated alleles in every cell in their bodies.
Mutations in Cancer Susceptibility Genes: BRCA2
Here is an example of the mutations seen in the BRCA2 breast cancer susceptibility gene.
Inheriting these mutated alleles greatly increases a person's lifetime risk for developing cancer. This may explain why cancers linked to germline mutations in susceptibility genes often occur at an earlier age and in multiple sites.
Autosomal Dominant Inheritance
Most hereditary cancer syndromes are inherited in autosomal dominant fashion.
Dominant inheritance occurs when only one copy of an allele is required for a particular trait to be expressed (phenotype). In autosomal dominant inheritance, multiple generations express the traits, with no skipped generations (assuming complete penetrance).
Examples of Dominantly Inherited Cancer Syndromes
Hereditary cancer syndromes are relatively uncommon, accounting for only about 5 to 10 percent of all cancers. Nevertheless, as many as 50,000 cancers newly diagnosed in the U.S. each year are associated with a hereditary syndrome.
Cancer Susceptibility: Incomplete Penetrance and Phenocopies
Individuals who inherit cancer susceptibility mutations inherit a predisposition to cancer, not cancer itself. Some mutation carriers inherit their predisposing genotypes in an autosomal dominant fashion, yet they do not develop cancer, indicating that their altered genes are incompletely penetrant. A somatic mutation in a second allele is required for cancer to develop.
Further confusing the situation is the fact that sporadic forms of cancer may also occur in families along with a hereditary cancer syndrome. These cases of sporadic cancer are called phenocopies because their phenotype is similar to that of the affected mutation carriers, but their genotype is different. Genetic testing may determine if the cancer is hereditary or sporadic in nature.
Example: BRCA1-Linked Hereditary Breast and Ovarian Cancer
In this pedigree of a family with a BRCA1 mutation, numbers below each person indicate age at first cancer diagnosis, if affected; age at death, if deceased; and age at interview, if alive. This family exemplifies several hallmarks of hereditary breast and ovarian cancer. The fact that the mutation is passed on by autosomal dominant transmission is evident in that approximately 50 percent of family members in each generation carry the mutation. Notice that an unaffected father passes the mutation to his affected daughter, showing that transmission of the BRCA1 mutation can occur through either parent. Note the high penetrance of the disease and early age at onset. The penetrance is incomplete, though high, as shown by one female carrier who lives to age 86 and another who lives to age 92 without a diagnosis of breast or ovarian cancer.
Example: BRCA2-Linked Hereditary Breast Cancer
In this pedigree, it again becomes clear that most mutations in cancer susceptibility genes are germline and pass as dominant traits with incomplete penetrance. Note the female carrier who lives to age 83 and dies of natural causes even though her mother was affected by breast cancer and died at age 48. Also note two males: one diagnosed with breast cancer and the other with prostate cancer. Men who inherit an abnormal BRCA2 gene have an increased risk (80 times the lifetime risk of men without the mutation) for male breast cancer. They also are three to seven times more likely than men without the mutation to develop prostate cancer.
Autosomal Recessive Inheritance
In autosomal recessive inheritance, two copies of the allele are required for the trait to be expressed. Carriers of one disease allele will not develop the illness, and several generations may be unaffected, leading to the appearance of skipped generations. Males and females are equally affected. If both parents carry one copy of the recessive allele, one in four offspring, on average, will express the trait.
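The one-in-four figure follows from a simple Punnett-square count. The sketch below is not from the article; "A" stands for the normal allele and "a" for the recessive disease allele, labels chosen only for illustration.

```python
# Sketch (not from the article): Punnett-square count for two carrier parents
# ("Aa" x "Aa"), reproducing the one-in-four figure for affected offspring.
from collections import Counter
from itertools import product

mother, father = "Aa", "Aa"
offspring = Counter("".join(sorted(pair)) for pair in product(mother, father))
print(offspring)                   # Counter({'Aa': 2, 'AA': 1, 'aa': 1})

affected = offspring["aa"] / sum(offspring.values())
print(affected)                    # 0.25, i.e. one in four on average
```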
Some Recessively Inherited Cancer Syndromes
In X-linked inheritance, the gene of interest is on the X chromosome, not on an autosome. Because females have 2 X chromosomes, they must inherit two copies of the disease allele to express the disease phenotype. Females with only one mutated allele are carriers.
Males are more frequently affected because they only have one X chromosome and need only one allele mutated to express a disease phenotype. All males who inherit a copy of the abnormal X chromosome are affected by the disease (assuming 100 percent penetrance).
Other Genetic Conditions Linked to Increased Cancer Risk
Hereditary susceptibility to breast cancer occurs in several other rare genetic conditions. Breast cancer is the most common adult manifestation of Li-Fraumeni syndrome, a multiple cancer syndrome caused by germline mutations in the TP53 gene. Breast cancer also is the most frequent malignancy diagnosed in Cowden syndrome, a condition with germline mutations in the PTEN gene. Both benign and malignant breast tumors occur in Muir-Torre syndrome, a condition related to hereditary nonpolyposis colon cancer (HNPCC), characterized by germline mutations in the DNA mismatch repair genes MSH2 and MLH1. Patients with Peutz-Jeghers syndrome display abnormal pigmentation and gastrointestinal polyps, and women with the syndrome are at increased risk for early-onset, bilateral breast cancer.
Normal Cell Growth: The Cell Cycle
The cell cycle is a critical process that a cell undergoes in order to copy itself exactly. Most cancers have mutations in the signals that regulate the cell's cycle of growth and division. Normal cell division is required for the generation of new cells during development and for the replacement of old cells as they die.
Most cells remain in interphase, the period between cell divisions, for at least 90 percent of the cell cycle. The first part of the interphase is called G1 (for first gap), followed by the S phase (for DNA synthesis), then G2 (for second gap). During G1, there is rapid growth and metabolic activity, including synthesis of RNA and proteins. Cell growth continues during the S phase, and DNA is replicated. In G2, the cell continues to grow and prepares for cell division. Cell division (mitosis) is referred to as the M phase. Cells that do not divide for long periods do not replicate their DNA and are considered to be in G0.
In normal cells, tumor suppressor genes act as braking signals during G1 to stop or slow the cell cycle before S phase. DNA repair genes are active throughout the cell cycle, particularly during G2 after DNA replication and before the chromosomes prepare for mitosis.
Abnormal Cell Growth: Oncogenes
Most cancers have mutations in proto-oncogenes, the normal genes involved in the regulation of controlled cell growth. These genes encode proteins that function as growth factors, growth factor receptors, signal-relaying molecules, and nuclear transcription factors (proteins that bind to genes to start transcription). When a proto-oncogene is mutated or overexpressed, it is called an oncogene and results in unregulated cell growth and transformation. At the cellular level, only one mutation in a single allele is enough to trigger an oncogenic role in cancer development. The chance that such a mutation will occur increases as a person ages.
Tumor Suppressor Genes
Most cancer susceptibility genes are tumor suppressor genes. Tumor suppressor genes are just one type of the many genes malfunctioning in cancer. These genes, under normal circumstances, suppress cell growth. Some do so by encoding transcription factors for other genes needed to slow growth. For example, the protein product of the suppressor gene TP53 is called p53 protein. It binds directly to DNA and leads to the expression of genes that inhibit cell growth or trigger cell death. Other tumor suppressor genes code for proteins that help control the cell cycle.
Mutations in Tumor Suppressor Genes
Both copies of a tumor suppressor gene must be lost or mutated for cancer to occur. A person who carries a germline mutation in a tumor suppressor gene has only one functional copy of the gene in all cells. For this person, loss or mutation of the second copy of the gene in any of these cells can lead to cancer.
In 1971, Dr. Alfred Knudson proposed the two-hit hypothesis to explain the early onset at multiple sites in the body of an inherited form of cancer called hereditary retinoblastoma. Inheriting one germline copy of a damaged gene present in every cell in the body was not sufficient to enable this cancer to develop. A second hit (or loss) to the good copy in the gene pair could occur somatically, though, producing cancer. This hypothesis predicted that the chances for a germline mutation carrier to get a second somatic mutation at any of multiple sites in his/her body cells were much greater than the chances for a noncarrier to get two hits in the same cell.
Tumor suppressors act recessively at the phenotypic level (both alleles must be mutated or lost for cancer to develop), but the "first hit" germline mutation at the genotypic level is actually inherited in an autosomal dominant fashion.
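The force of the two-hit argument is easiest to see with a toy probability comparison. The numbers below are invented purely for illustration and are not from the article; only the relative magnitudes matter.

```python
# Toy comparison (not from the article): why a germline carrier is far more
# likely to lose both copies of a tumor suppressor in some cell than a
# noncarrier is. The per-allele probability p is an invented, illustrative value.

p = 1e-6   # assumed chance that a given allele is knocked out in a given cell

prob_carrier_cell = p          # carrier: only the remaining good copy must be hit
prob_noncarrier_cell = p * p   # noncarrier: both copies must be hit in the same cell

print(prob_carrier_cell / prob_noncarrier_cell)   # about 1/p, i.e. a millionfold difference
```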
Loss of Heterozygosity
In hereditary cancer syndromes, individuals are called heterozygous (having one or more dissimilar gene pairs) because they start life with a germline mutation in one of the alleles linked to cancer susceptibility, but it is balanced by a normal counterpart. These individuals are predisposed to cancer because all their cells have already sustained the first hit to cancer-linked genes. If the critically needed normal suppressor gene that balances this germline mutation is lost at some time during an individual's life, a condition called loss of heterozygosity (LOH) occurs.
There are several ways a cell can suffer loss of heterozygosity. An entire chromosome containing a normal allele may be lost due to failure of the chromosomes to segregate properly at mitosis (nondisjunction). Alternatively, an unbalanced exchange of genetic material can occur in a process called translocation, resulting in loss of a chromosomal region containing the normal gene. Sometimes when a normal gene is lost, a reduplication of the remaining chromosome with an abnormal gene occurs, leaving the cell with two abnormal gene copies. Normal genes may also be lost during normal mitotic recombination events or as a consequence of a point mutation in the second allele, leading to inactivation of the normal counterpart.
Some mutations linked to cancer appear to involve a failure of one or many of the cell's repair systems. One example of such error involves DNA mismatch repair. After DNA copies itself, proteins from mismatch repair genes act as proofreaders to identify and correct mismatches. If a loss or mutation occurs in the mismatch repair genes, sporadic mutations will more likely accumulate. Other errors in repair may involve incorrect cutting out of bases--or whole nucleotides--as repair proteins try to fix DNA after bulky molecules, such as the carcinogens in cigarettes, have attached. This is faulty excision repair. Sometimes both strands of DNA suffer breaks at the same time, and faulty recombinational repair occurs. Any of these mistakes may enable mutations to persist, get copied, and eventually contribute to cancer's development.
Cancer Susceptibility: Much Still Unknown
Much remains elusive in our understanding of cancer susceptibility. Breast cancer is a good example of how incomplete a picture we have.
Most women with a family history of breast cancer DO NOT carry germline mutations in the single highly penetrant cancer susceptibility genes, yet familial clusters continue to appear with each new generation.
About 5 to 10 percent of breast cancer cases are linked to germline mutations in single, highly penetrant cancer susceptibility genes such as BRCA1 and BRCA2. Strong genetic predisposition and cancer susceptibility in these families is passed down in an autosomal dominant fashion.
Another 15 to 20 percent of breast cancers, however, are associated with some family history but no evidence of such autosomal dominant transmission. These cases are not well understood. Possibly environmental or multiple gene interactions contribute to very low penetrance of susceptibility genes, or possibly yet undiscovered mutations are involved.
Epigenetic Changes: Much Still Unknown
Much remains unknown about the role of epigenetic factors and cancer. Epigenetic changes are reversible modifications to genes or proteins that occur in the tumor and its microenvironment. Epigenetic modifier molecules have been observed making tumor-friendly, nonmutational changes in an already confused biosystem. For example, by heavily methylating genes or promoter regions, gene activity critical to counteract a tumor's drive toward metastasis gets turned off. Or noncoding ribonucleic acids meddle in epigenetic fashion, interfering with a cell's regulation of growth or attempt to repair damage.
Other Cancer-Associated Mutations: Much Still Unknown
In addition to oncogenes and tumor suppressor genes, most cancers acquire several other key mutations that enable cancer to progress. While researchers don't yet know all the mutations involved, they have organized them in terms of their activities in support of tumor growth and metastasis. In addition to the contributions of oncogenes and mutated suppressor genes, additional genomic mutations enable the invasion of neighboring tissue, evasion of immune system detection, recruitment of a new blood supply, dissemination and targeting of new sites, and the penetration and reinvasion through new blood and tissue layers. Over time, successful metastasis occurs.
A Daunting Challenge
A comprehensive analysis of the cancer genome remains a daunting challenge. There is no single technology at present that will detect all the types of abnormality--deletions, rearrangements, point mutations, frameshift insertions, amplifications, imprinting, and epigenetic changes--implicated in cancer. Microarrays and gene chip analysis, however, are beginning to unveil some key genomic drivers. (Please see Molecular Diagnostics for more information.)
Many clinical trials now include genomic profiles of cancer patients as prognostic and diagnostic indicators. Genomic profiles are even used to monitor where and how the cancer genome has been hit during molecularly targeted therapies. Mining and sharing all this data should eventually help oncologists to better integrate the genotypic and phenotypic changes that occur in a biosystem during cancer's progression. This knowledge will be used to bring earlier and better interventions to cancer patients. | http://www.cancer.gov/cancertopics/understandingcancer/cancergenomics/page17/AllPages/Print | 13 |
Analyzing the Stylistic Choices of Political Cartoonists
|Grades||9 – 12|
|Lesson Plan Type||Standard Lesson|
|Estimated Time||Five 50-minute sessions|
Students will:
- explore basic information about political cartoonists' techniques.
- analyze a cartoonist's techniques.
- write guidelines that explain how to analyze a cartoonist's work.
- participate in peer review of one another's guidelines.
- revise and polish drafts of their work.
- Display a political cartoon that you have chosen as a class example using an overhead projector or pass out copies of the cartoon. Alternatively, use the Analyzing a Political Cartoon: "Settin' on a Rail" to explore an historical political cartoon with the class.
- Ask students to respond to the cartoon, noting anything that stands out and any questions that they have.
- Explain that the class will be exploring political cartoons in more detail.
- Use the Comic Vocabulary Interactive to identify the parts of cartoons, or allow students to explore the interactive independently. If computers are not available, use the Comic Vocabulary Definitions sheets on Text, Layout and Design, and Angles.
- Begin with the Text Vocabulary, and have students apply the vocabulary from the interactive or definition sheets to the political cartoon that the class has been exploring. Ask students to expand the list as necessary to include any additional ways that the cartoonist has used text in the example cartoon.
- Move to the Layout and Design terms and the Angles terms, and encourage students to consider why the cartoonists have used the techniques that they have and how the different elements work together to communicate a message.
- To give students additional practice, arrange the class in small groups and give each group one or more additional political cartoons.
- To organize students’ analysis, pass out the Political Cartoon Analysis Sheet, and have groups take notes on the different characteristics of the cartoon(s) they are analyzing. Encourage groups to discuss why the cartoonists have used the techniques that they have in the cartoons that they are analyzing.
- Once groups have completed their analysis, gather the class and have each group present their observations to the class.
- If desired, have students read Cartoon Analysis Guide for additional background information.
- Briefly review the comic terms from the previous class and, if students read the piece, discuss Cartoon Analysis Guide and how the information applies to the political cartoons analyzed in the previous session.
- Pass out and explain the Political Cartoon Analysis Assignment that students will complete independently and the Political Cartoon Analysis Rubric, which outlines the expectations for the project.
- Detail the technology and resources that the class will use as they work on the project:
| If students will be working with cartoons from printed newspapers | If students will be working with cartoons from online cartoon archives |
|---|---|
| Explain what newspapers students can use and where the newspapers can be accessed. | Explain what online sites and cartoonists students can use. |
| Discuss how students can make copies of the cartoons that they will be studying (e.g., photocopies, scanning). | Demonstrate how to save a copy of the image files or take a screen shot of the images. |
| Emphasize the importance of backup copies, as well as copies to trim and use as illustrations for the guidelines. If students are working with scanned copies, talk about the save-as command (see right column). | Emphasize the importance of backup file copies and paper copies of the images and how to use the Save-As command to ensure that students do not overwrite the original image files when creating illustrations. |
- Discuss copyright and documentation issues, going over the importance of including complete citations for all cartoons that are used in the analysis guidelines that students write.
- Point to the details on documenting cartoons in your class textbook, or use the details and examples from Comic Art in Scholarly Writing: A Citation Guide.
- If there are any guidelines that students should use while searching for their cartoons (e.g., topics that are inappropriate for the classroom), discuss these issues and explain what students should do if they happen upon such materials accidentally.
- Pass out additional copies of the Political Cartoon Analysis Assignment and copies of the Political Cartoon Comparison Sheet for students to use as they analyze the work of the cartoonists that they have chosen.
- Give students the remainder of the class session to find and begin analyzing cartoons.
- Draw the class together with approximately five minutes remaining, and invite students to share any observations they have made so far. If students are hesitant to share, ask some leading questions about the techniques that political cartoonists use. For instance, ask, “Which design and layout techniques seem most relevant to the cartoons that you have found?” Because most political cartoons today are only one panel, gutters and splash panels are largely irrelevant; however, students can still look for the use of borders and open panels in these works.
- Review the Political Cartoon Analysis Assignment and Rubric. Answer any questions that students have about the project.
- Allow students to work independently on their analysis during the session.
- Provide mini-lessons as needed on analytical (e.g., how to determine the difference between close-up and extreme close-up) and/or technical topics (e.g., how to insert an image file in a Microsoft Word file).
- Ask students to have a complete draft of their guidelines and copies of their political cartoons for peer review during the next class session. Students can continue work on their guidelines for homework if necessary.
- Explain that since the class will be doing peer review of one another’s guidelines, students will exchange one cartoon and the guidelines. Each student will read the guidelines and consider how well those details help them analyze the cartoon. After this process, students will complete the questions on the Political Cartoon Analysis Peer Review. This process may be slightly different from the typical peer review that the class completes, so ensure that students understand the process before students exchange their work.
- Organize the exchange of cartoons and guidelines, and ask students to use the guidelines to analyze the cartoon. If desired, students can take notes on their analysis to return to the author of the guidelines as well.
- As students complete their reading and analysis, give them copies of the Political Cartoon Analysis Peer Review. Students can complete this process at their own pace, picking up the peer review form once their analysis is complete.
- Circulate through the classroom as students work, providing support and feedback.
- As students complete their peer review sheets, have them return the guidelines to the author. Students can work on their own revisions until the entire class has completed the peer review process.
- Once the class has completed peer review, draw attention to the relationship between the questions on the Peer Review form and the Rubric. Point to the underlined words on questions 2 through 5 and their connection to the headings on the Rubric.
- Answer any questions that students have about revising their guidelines, and allow students to work on their revisions during any remaining class time.
- Ask students to have polished copies of their guidelines and the cartoons ready to submit at the beginning of the next session.
- If desired, ask students to choose at least one cartoon to discuss and share with others in class.
- Arrange students in small groups.
- Ask each student to share at least one cartoon and describe the techniques that the cartoonist uses.
- Circulate among students as they work, providing support and feedback.
- Ask each group to choose one cartoon to share with the whole class.
- Gather students together and ask each group to present their choice.
- Encourage students to compare the techniques that the different cartoonists use.
- If time allows, students can complete a final proofreading of their guidelines, or have students exchange papers and proofread each other’s work. Ask students to make any corrections.
- Collect the guidelines and related cartoons.
- Rather than focusing on political cartoons, complete a similar exploration and analysis of graphic novels or comic strips.
- For an in-depth study of a particular political cartoon and its historical and geographical context, complete the ReadWriteThink lesson plan Analyzing the Purpose and Meaning of Political Cartoons or the ArtsEdge lesson plan Drawing Political Cartoons.
Review the work that students complete during this lesson on an ongoing basis for thoroughness and completeness. While students are working on these projects, talk to them and observe their work and the connections they make to the political cartoons. Grade polished drafts with the Political Cartoon Analysis Rubric.
NCLB mandated that states and districts adopt programs and policies supported by scientifically based research. Drawing upon research and an extensive collection of evidence from multiple sources, the Common Core State Standards were developed to reflect the knowledge and skills that young people need for success in college and careers. Those standards affect teachers in several ways, including by guiding them "toward curricula and teaching strategies that will give students a deep understanding of the subject and the skills they need to apply their knowledge" (Common Core State Standards Initiative, FAQ section). For many teachers, the standards require changes in how mathematics is taught; thus, they will influence the instructional strategies that educators use. In a standards-based classroom, four instructional strategies are key:
Math Methodology is a three-part series on instruction, assessment, and curriculum. Each section contains relevant essays and resources:
Part 1: Math Methodology: Instruction
The Instruction Essay (Page 1 of 3) contains the following subsections:
The Instruction Essay (Page 2 of 3) contains the following subsections:
The Instruction Essay (Page 3 of 3) addresses the needs of students with math difficulties and contains the following subsections:
Math Methodology Instruction Resources also includes resources for special needs students (e.g., hearing and visually impaired, learning disabilities, English language learners).
What does it mean to be mathematically literate and proficient?
According to the National Research Council (2012), "Deeper learning is the process through which a person becomes capable of taking what was learned in one situation and applying it to new situations – in other words, learning for “transfer.” Through deeper learning, students develop expertise in a particular discipline or subject area" (p. 1). As mathematics educators, we want our learners ultimately to be mathematically literate and proficient in mathematics. To achieve this, educators will need to focus on deeper learning and learning for understanding.
Volker Ulm (2011) noted that mathematical literacy involves several competencies:
Developing proficiency, as the National Research Council (2001) pointed out, embodies "expertise, competence, knowledge, and facility in mathematics" and the term mathematical proficiency entails what is "necessary for anyone to learn mathematics successfully" (p. 116). It has five interwoven and interdependent strands:
conceptual understanding—comprehension of mathematical concepts, operations, and relations
procedural fluency—skill in carrying out procedures flexibly, accurately, efficiently, and appropriately
strategic competence—ability to formulate, represent, and solve mathematical problems
adaptive reasoning—capacity for logical thought, reflection, explanation, and justification
productive disposition—habitual inclination to see mathematics as sensible, useful, and worthwhile, coupled with a belief in diligence and one’s own efficacy. (National Research Council, 2001, p. 116)
However, becoming mathematically literate and proficient are ongoing processes. Writing in IAE-pedia, David Morsund and Dick Ricketts (2010) noted that becoming proficient is a matter of developing math maturity, which certainly varies among students and which involves how well they learn and understand the math, how well they can apply their knowledge and skills in a variety of math-related problem-solving situations, and how well they retain what they have learned over the long term.
While teachers have a role to play in helping students to develop understanding, students also have a role to play in the process, which cannot be overlooked. They must have intrinsic motivation, as in Eric Booth's (2013) words: "Learning can be transformed into understanding only with intrinsic motivation. Learners must make an internal shift; they must choose to invest themselves to truly learn and understand" (p. 23). This kind of motivation involves fulfilling their need for creative engagement, which is where the teacher's role in the design of instruction comes into play.
But why focus on understanding?
Consider the following examples of students' reasoning and misconceptions. Have you ever heard students say (or have you as the teacher said), "To multiply by 10, just add a zero after the number"? Or, "The product of two numbers is always bigger than either one"? How about, "The number with the most digits is the biggest"? Each of these breaks down quickly: 2.5 × 10 is 25, not 2.50; 8 × 0.5 is 4, which is smaller than 8; and 0.987 has more digits than 1.2 but is smaller. Teachers Magazine, with the help of Tim Coulson of the National Numeracy Strategy in England, provided 10 such Maths misconceptions (2006) and suggestions for correcting them. That article sets the tone for the need to teach mathematics right the first time with a focus on understanding.
A focus on understanding is among six key instructional shifts for implementing the Common Core State Standards. Certainly, it is an element of proficiency and literacy. While fluency is among those shifts, with students being "expected to have speed and accuracy with simple calculations," for deep understanding, teachers will be expected to "teach more than “how to get the answer” and instead support students’ ability to access concepts from a number of perspectives so that students are able to see math as more than a set of mnemonics or discrete procedures." Further, teachers will need to ensure that students "demonstrate deep conceptual understanding of core math concepts by applying them to new situations, as well as writing and speaking about their understanding" (EngageNY, 2011).
This is not to negate the role of some memorization in mathematics. Morsund and Ricketts (2010) also noted, "It is well recognized that some rote memory learning is quite important in math education. However, most of this rote learning suffers from a lack of long term retention and from the learner’s inability to transfer this learning to new, challenging problem situation both within the discipline of math and to math-related problem situations outside the discipline of math. Thus, math education (as well as education in other disciplines) has moved in the direction of placing much more emphasis on learning for understanding. There is substantial emphasis on learning some “big ideas” that will last a lifetime" (section 1.1: Math Maturity, para. 1-2).
What does literacy look like in the mathematics classroom?
A central strategy for developing mathematical literacy is "enabling students to find their own independent approaches to learning" (Ulm, 2011, p. 5). According to the Ohio Department of Education (2012), there are multiple ways for developing literacy in the mathematics classroom:
Carpenter, Blanton, Cobb, Franke, Kaput, and McClain (2004) proposed that "there are four related forms of mental activity from which mathematical and scientific understanding emerges: (a) constructing relationships, (b) extending and applying mathematical and scientific knowledge, (c) justifying and explaining generalizations and procedures, and (d) developing a sense of identity related to taking responsibility for making sense of mathematical and scientific knowledge" (pp. 2-3). "Placing students' reasoning at the center of instructional decision making... represents a fundamental challenge to core educational practice" (p. 14).
According to Steve Leinwand and Steve Fleishman (2004), since the 1980s research results "consistently point to the importance of using relational practices for teaching mathematics" (p. 88). Such practices involve explaining, reasoning, and relying on multiple representations that help students develop their own understanding of content. Unfortunately, much instruction begins with instrumental practices involving memorizing and routinely applying procedures and formulas. "In existing research, students who learn rules before they learn concepts tend to score lower than do students who learn concepts first" (p. 88).
The importance of addressing misconceptions using relational practices and multiple representations was made clear when a teacher recently voiced concern about being unable to convince a beginning algebra student that (A + B)² was not A² + B². The following visual helped clarify (A + B)(A + B) = A² + 2AB + B².
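The figure itself is not reproduced here; the LaTeX sketch below is added only to spell out the area-model reasoning behind such a visual: a square with side A + B splits into four regions whose areas sum to the product.

```latex
\documentclass{article}
\begin{document}
% Area model: a square with side (A + B) contains an A-by-A square,
% two A-by-B rectangles, and a B-by-B square, so the total area is
% the sum of the four partial areas.
\[
(A + B)(A + B) = A^{2} + AB + BA + B^{2} = A^{2} + 2AB + B^{2}
\]
\end{document}
```

The middle step makes visible exactly what the student's guess of A² + B² leaves out: the two AB rectangles.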
This same discussion brought up a comparison to using such a visual to understand the typical multiplication algorithm, in which students have been taught to "leave off the zeroes and move each successive row of digits one place to the left when multiplying." Students often have no idea as to why they are doing that. Consider the multiplication problem 31 x 25 and how the distributive property plays a role in the algorithm:
The visual suggests that 31 x 25 = (30 + 1)(20 + 5) = (30 x 20) + (30 x 5)+ (1 x 20) + (1 x 5) and that there will be four values (600 + 150 + 20 + 5) to add together after the products are found. As addition can be done in any order, the above might make the transition to the traditional vertical presentation of the algorithm easier to understand, as in the following illustration:
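The article's original illustration is not reproduced here; the following LaTeX sketch, added for illustration, lays out the same four partial products and then regroups them into the two rows of the traditional vertical algorithm, showing what the "dropped zero and shifted row" shortcut actually stands for.

```latex
\documentclass{article}
\begin{document}
% Four partial products from the distributive property:
\[
31 \times 25 = (30 + 1)(20 + 5) = 600 + 150 + 20 + 5 = 775
\]
% The traditional algorithm collects the same partial products into two rows:
% 155 = 150 + 5 (that is, 31 x 5) and 620 = 600 + 20 (that is, 31 x 20).
\[
\begin{array}{r}
31 \\
\times \; 25 \\
\hline
155 \\
620 \\
\hline
775
\end{array}
\]
\end{document}
```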
What are the avenues to understanding?
What goes on in the classroom on a daily basis and over the course of a unit of instruction is key to processing information for understanding. Robert Marzano (2009) identified five avenues to understanding: chunking information into small bites, scaffolding, interacting, pacing, and monitoring. Of those, scaffolding is key to the entire process, as it involves the content of those chunks and their presentation in a logical order. After presenting a chunk of reasonable length, it is important for teachers to pause and allow students to interact with each other. A high rate of interaction among learners is a necessary component for understanding. Monitoring enables teachers to determine if a chunk has been understood before moving on. Pacing, how fast or slow to move through chunks, is not easily pre-determined. It depends on being able to read students' understanding and engagement with the content.
Within the classroom, how teaching is organized also matters. Spacing out learning over time with review and quizzing helps learners retain information over the course of the school year and beyond. According to research, such spacing and exposure to concepts and facts should occur on at least two occasions, separated by several weeks or months. Students will learn more when teachers alternate their demonstration of a worked problem with a similar problem that students do for practice. This helps students to learn problem solving strategies, enables them to transfer those strategies more easily, and to solve problems faster. Student learning is improved if teachers connect abstract ideas and concrete contexts via stories, simulations, hands-on activities, visual representations, real-world problem solving, and so on. Teachers can also enhance learning by using higher order questioning and providing opportunities for students to develop explanations. This ranges from creating units of study that provoke question-asking and discussion to simply having students explain their thinking after solving a problem (Pashler, Bain, Bottge, et al., 2007).
Marzano, Pickering, and Pollock (2001) included nine research-based instructional strategies that have a high probability of enhancing student achievement for all students in all subject areas at all grade levels. The authors caution, however, that instructional strategies are only tools and "should not be expected to work equally well in all situations." They are grouped together into three categories, suggested by Pitler, Hubbell, Kuhn, and Malenoski (2007).
Strategies that provide evidence of learning:
Setting objectives and providing feedback--set a unit goal and help students personalize that goal; use contracts to outline specific goals students should attain and grade they will receive if they meet those goals; use rubrics to help with feedback; provide timely, specific, and corrective feedback; consider letting students lead some feedback sessions.
Reinforcing effort and providing recognition--you might have students keep a weekly log of efforts and achievements with periodic reflections of those. They might even mathematically analyze their data. Find ways to personalize recognition, such as giving individualized awards for accomplishments.
Strategies that help students acquire and integrate learning:
Cues, questions, and advance organizers--these should be highly analytical, should focus on what is important, and are most effective when used before a learning experience.
Nonlinguistic representation--incorporate words and images using symbols to show relationships; use physical models and physical movement to represent information
Summarizing and note taking--provide guidelines for creating a summary; give time to students to review and revise notes; use a consistent format when note taking
Cooperative learning--consider common experiences or interests; vary group sizes and objectives. Core components include positive interdependence, group processing, appropriate use of social skills, face-to-face interaction, and individual and group accountability.
Note: Reinforcing effort from the first category also fits into this category to help students.
Strategies that help students practice, review, and apply learning:
Identifying similarities and differences--graphic forms, such as Venn diagrams or charts, are useful
Homework and practice--vary homework by grade level; keep parent involvement to a minimum; provide feedback on all homework; establish a homework policy; be sure students know the purpose of the homework
- Generating and testing hypotheses--a deductive approach (e.g., predict what might happen if ...), rather than an inductive one, works best.
Learn more about how to teach for mathematical literacy.
Each month you can freely download an issue in the series Towards New Teaching in Mathematics from SINUS International (Germany). These are in English and great for middle and high school. Issues 1-8 address:
So what can you do to put research into practice?
Educators should have one goal in mind in everything they do: achievement of learners, which includes their ability to transfer knowledge to new situations. To this end, research-based instructional strategies focusing on deep learning should be used, as suggested by the National Research Council (2012):
According to Douglas Reeves (2006), "Schools that have improved achievement and closed the equity gap engage in holistic accountability, extensive nonfiction writing, frequent common assessments, decisive and immediate interventions, and constructive use of data" (p. 90). Such "accountability includes actions of adults, not merely the scores of students" (p. 83). Among those actions of adults is to assist students with gaining proficiency in a range of their own academic learning skills and behaviors. Writing in relation to the new Common Core State Standards, David Conley (2011) emphasized:
These behaviors include goal setting; study skills, both individually and in groups; self-reflection and the ability to gauge the quality of one's work; persistence with difficult tasks; a belief that effort trumps aptitude; and time-management skills. These behaviors may not be tested directly on common assessments, but without them, students are unlikely to be able to undertake complex learning tasks or take control of their own learning. (p. 20)
Assessments are not just summative, but also formative, occurring at least quarterly with immediate feedback. Beyond a score, feedback contains detailed item and cluster analysis and is used to inform future instruction. While individual classroom teachers might not be able to change student schedules to provide double classes in math or literacy for students in need, they can provide such interventions as supervising homework, breaking projects down into incremental steps, teaching time-management and project-management strategies and study skills, and helping students read the textbook, all of which are among immediate and decisive intervention strategies. Analyzing data in a constructive manner would reveal effective professional practices and lead to discussion of how they might be replicated (Reeves, 2006).
Educators in all instructional settings who put research into practice should apply "The Seven Principles of Good Practice in Undergraduate Education." Such practice emphasizes "active learning, time management, student-faculty contact, prompt feedback, high expectations, diverse learning styles, and cooperation among students" (Garon, 2000, para. 1). However, to reach an entire class, educators need to create an opportunity for full participation and cooperation among students.
Putting research into practice also involves building a community of learners who can dialogue effectively about mathematics, and "do" mathematics. Much depends on the teacher's ability to assist learners with developing thinking skills, which includes incorporating writing and journaling in math classes as a way to demonstrate thinking, and their ability to question, provide feedback, use varied instructional approaches, assist learners with reading math texts and doing homework, and use tools and manipulatives, all of which help concept development. Elaboration of those follows.
Embed Thinking Skills within the Curriculum
Consider learning some basic facts about the brain and the geography of thinking.
Visit The National Institute of Neurological Disorders and Stroke for an introduction to the brain and how it works.
See the table of Thinking and Learning Characteristics of Young People with suggested teaching strategies, presented at PUMUS, the online journal of practical uses of math and science. The table is subdivided into sections for grades K-2, 3-5, and 6-8.
Teaching critical thinking is very hard to do, but there are strategies consistent with research to help learners acquire the ability to think critically. According to Daniel Willingham (2007), a professor of cognitive psychology, "the mental activities that are typically called critical thinking are actually a subset of three types of thinking: reasoning, making judgments and decisions, and problem solving" (p. 11). Studies have revealed that:
First, critical thinking (as well as scientific thinking and other domain-based thinking) is not a skill. There is not a set of critical thinking skills that can be acquired and deployed regardless of context. Second, there are metacognitive strategies that, once learned, make critical thinking more likely. Third, the ability to think critically (to actually do what the metacognitive strategies call for) depends on domain knowledge and practice. (p. 17)
Rupert Wegerif (2002) noted, “[t]he emerging consensus, supported by some research evidence, is that the best way to teach thinking skills is not as a separate subject but through ‘infusing’ thinking skills into the teaching of content areas” (p. 3). In agreement, Willingham (2007) added that when learners "don't have much subject matter knowledge, introducing a concept by drawing on student experiences can help" (p. 18). Further, "Learners need to know what the thinking skills are that they are learning and these need to be explicitly modeled, drawn out and re-applied in different contexts. The evidence also suggests that collaborative learning improves the effectiveness of most activities" (Wegerif, 2002, p. 3).
Not only must the strategies be made explicit, but practice is an essential element. Willingham (2007) suggested:
The first time (or several times) the concept is introduced, explain it with at least two different examples (possibly examples based on students’ experiences ...), label it so as to identify it as a strategy that can be applied in various contexts, and show how it applies to the course content at hand. In future instances, try naming the appropriate critical thinking strategy to see if students remember it and can figure out how it applies to the material under discussion. With still more practice, students may see which strategy applies without a cue from you. (p. 18)
So what are valued thinking skills that might be embedded within a curriculum? Among those are information-processing skills, reasoning skills, enquiry skills, creative thinking skills, and evaluation skills. Wegerif (2002) elaborated on each of those:
Information-processing skills: These enable pupils to locate and collect relevant information, to sort, classify, sequence, compare and contrast, and to analyze part/whole relationships.
Reasoning skills: These enable pupils to give reasons for opinions and actions, to draw inferences and make deductions, to use precise language to explain what they think, and to make judgments and decisions informed by reasons or evidence.
Enquiry skills: These enable pupils to ask relevant questions, to pose and define problems, to plan what to do and how to research, to predict outcomes and anticipate consequences, and to test conclusions and improve ideas.
Creative thinking skills: These enable pupils to generate and extend ideas, to suggest hypotheses, to apply imagination, and to look for alternative innovative outcomes.
Evaluation skills: These enable pupils to evaluate information, to judge the value of what they read, hear and do, to develop criteria for judging the value of their own and others’ work or ideas, and to have confidence in their judgments. (pp. 4-5)
Donald Treffinger (2008) distinguished between creative thinking and critical thinking, stating that effective problem solvers need both, as they are actually complementary. The former is used to generate options and the latter to focus thinking. Each form of thinking has associated guidelines and tools, illustrated in the following table.
Guidelines and Tools for Creative vs. Critical Thinking
| |Creative Thinking|Critical Thinking|
|---|---|---|
|Guidelines|Defer judgment, seek quantity, encourage all possibilities, look for new combinations that might be stronger than any of their parts.|Use affirmative judgment as opposed to being critical; be deliberate--consider the purpose of focusing; consider novelty and not only what has worked in the past; stay on course.|
|Tools|Brainstorming|Hits and Hot Spots--selecting promising options and grouping them in meaningful ways|
| |Force-Fitting--forcing a relationship between two seemingly unrelated ideas|ALoU--acronym for what to consider when refining and developing options: A - Advantages, L - Limitations, o - ways to overcome limitations, U - Unique features|
| |Attribute Listing|PCA or Paired Comparison Analysis--used to rank options or set priorities|
| |SCAMPER--acronym for how to apply a checklist of action words to look for new possibilities: S - Substitute, C - Combine, A - Adapt, M - Magnify or Minify, P - Put to other uses, E - Eliminate, R - Reverse or Rearrange|Sequence: SML--sequence short-, medium-, or long-term actions|
| |Morphological Matrix--identify key parameters of the task|Create Evaluation Matrix--consider all options and possibilities|
Adapted from Treffinger, D. (2008, Summer). Preparing creative and critical thinkers [online]. Educational Leadership, 65(10). Retrieved from http://www.ascd.org/publications/educational_leadership/summer08/vol65/num10/toc.aspx
Research scientists Derek Cabrera and Laura Colosi (Wheeler, 2010) identified yet another approach to teaching thinking skills, the DSRP method, which is tied to four universal patterns that structure knowledge: making Distinctions, organizing Systems of parts and wholes, recognizing Relationships, and taking Perspectives.
DSRP focuses on making teachers and students more metacognitive and can be used in any standards-based curriculum. Cabrera and Colosi believe the system works because it is so simple.
Incorporate Writing and Journaling in Math
As in other curricular areas, writing and journaling in math class helps students to organize and clarify their thoughts and to reflect on their understanding of concepts. Reeves (2006) noted, "The most effective writing is nonfiction--description, analysis, and persuasion with evidence" (p. 85). Writing includes "editing, collaborative scoring, constructive teacher feedback, and rewriting" (p. 84) in all subject areas, including math.
Principles and Standards for School Mathematics (NCTM, 2000) call for students to communicate about mathematics. Writing across the grades preK-12 is encouraged and should enable all students to--
Port Angeles School District (WA) emphasizes writing in math, as illustrated by their Sample Math Questions for the Washington Assessment of Student Learning (WASL) assessments. Problems by grade level (K-8 and High School) presented in the web site are recommended for student use to communicate (in written form) understanding of math content. The series of problems are grouped by number sense, measurement, geometry, algebraic sense, probability and statistics, logic, and problem solving strategies.
Students also need to learn how to revise their writing. Strategies include using graphic organizers to plan writing exercises, writing on every other line so that there is room for revision, and then rereading a response to see if it makes sense and responds to the topic of the exercise. See for example: Graphic Organizers from Enhance Learning with Technology Web site. What are they? Why use them? How to use them? The site includes numerous links on the topic, examples, and software possibilities to assist with the endeavor.
Marilyn Burns (2004) stated that writing assignments fall into four categories: keeping journals, solving math problems, explaining concepts and ideas, and writing about learning processes. Teachers might provide initial statements, prompts, and guidelines for topics of the day for when students write to a journal. Students might write about their reasoning and problem solving process as they solve math problems. They might comment on why their solution makes sense mathematically and as a real-life solution. When explaining a concept or idea, students might also provide an example. Some writing might include commentary about the general nature of the learning activity, such as what they liked the most or least about a learning unit, or their reactions to working alone or in a group. They might show their creative side to develop a game or learning activity, or compose directions for others on how to do one of their own already-completed math activities.
To illustrate Burns' (2004) ideas, Marian Small (2010) suggested providing parallel tasks to learners as a way to differentiate math instruction. Students might choose between two problems, which differ in difficulty. However, regardless of choice, teachers might pose a set of common questions for all students to answer. Such questions focus on common elements. For example, one question might be a reflection on the estimation of the answer to the problem itself before calculating the exact answer. Another might ask students to explain why a particular operation(s) is needed to solve it, or what would happen if one number was changed, or how mental math might be used, or to explain the exact strategy actually used to solve the problem (p. 32). Students might then write answers to such questions in a journal.
Among Burns' (2004) other strategies to incorporate writing in math are to have students discuss their ideas before writing, post useful vocabulary on a class chart, and use students' writing in subsequent instruction. Posting vocabulary reminds students to use the language of math to express their ideas. Above all, students should know that writing supports their learning and helps you to assess their progress. They should share their writing in pairs or small groups so that they can get alternative viewpoints or bring to light conflicting understandings. The latter provides a springboard for further discussion.
Individuals interested in learning more about how to use writing and journaling in math classes might consult the following. You'll also find resources for products to assist with writing in math:
Improve Questioning and Dialogue
The Common Core Standards (2010) for Mathematical Practice include that students "Construct viable arguments and critique the reasoning of others" (Standard 3). In addressing this standard through questioning and dialogue, teachers facilitate interactive participation to promote their students' conceptual understanding and problem solving abilities. As students communicate with others and present their ideas, the discourse process can also help them to "Attend to precision" as they "try to use clear definitions in discussion with others and in their own reasoning" (Standard 6). To improve questioning and dialogue, both the teacher role and the students' role should be considered.
Participating in a mathematical community through discourse is as much a part of learning mathematics as a conceptual understanding of the mathematics itself. As students learn to make and test conjectures, question, agree, or disagree about problems, they are learning the essence of what it means to do mathematics. If all students are to be engaged, teachers must foster classroom discourse by providing a welcoming community, establishing norms, using supporting motivational discourse, and pressing for conceptual understanding. (Stein, 2007, p. 288)
The process of building a community begins with what the teacher says and the way teachers pose questions, as this affects the richness of a discussion. According to Paul and Elder (1997), "The oldest, and still the most powerful, teaching tactic for fostering critical thinking is Socratic teaching. In Socratic teaching we focus on giving students questions, not answers." Mastering the process of Socratic questioning is highly disciplined:
The Socratic questioner acts as the logical equivalent of the inner critical voice which the mind develops when it develops critical thinking abilities. The contributions from the members of the class are like so many thoughts in the mind. All of the thoughts must be dealt with and they must be dealt with carefully and fairly. By following up all answers with further questions, and by selecting questions which advance the discussion, the Socratic questioner forces the class to think in a disciplined, intellectually responsible manner, while yet continually aiding the students by posing facilitating questions. (Paul & Elder, 1997)
Paul and Elder (1997) noted multiple dimensions for questioning and dialogue:
We can question goals and purposes. We can probe into the nature of the question, problem, or issue that is on the floor. We can inquire into whether or not we have relevant data and information. We can consider alternative interpretations of the data and information. We can analyze key concepts and ideas. We can question assumptions being made. We can ask students to trace out the implications and consequences of what they are saying. We can consider alternative points of view. (Paul & Elder, 1997)
However, to promote thinking and understanding for all learners, the effective questioner also needs to "draw as many students as possible into the discussion," and "periodically summarize what has and what has not been dealt with and/or resolved" (Paul & Elder, 1997). Unfortunately, this does not always occur in classrooms. Too often, math teachers tend to look for one right answer, which leads to one of the biggest problems in the art of questioning--teachers do not have an appropriate wait-time between posing the question and getting the answer. Students need time to process the question and reflect on it before answering. When there is insufficient time given, teachers tend to answer their own question, or will call on students who they are relatively certain will have that answer. Thus, the whole class is not involved.
Such discourse is addressed in NCTM's (1991) Professional Standards for Teaching Mathematics. Teachers orchestrate discourse by "posing questions and tasks that elicit, engage, and challenge each student's thinking" (Standard 2). The art of questioning involves knowing when to listen, when to ask students to clarify and justify their ideas, when to take ideas that students present and pursue those in depth, and when and how to convert ideas into math notation. Teachers must decide when to add their own input and when to let students struggle with difficulties, and they must monitor and encourage participation (Standard 2). They enhance discourse with tasks that employ computers, calculators, and other technology; concrete materials used as models; pictures, diagrams, tables, and graphs; invented and conventional terms and symbols; metaphors, analogies, and stories; written hypotheses, explanations and arguments; and oral presentations and dramatizations (Standard 4). NCTM provides a two-part collection of tips: Asking Good Questions and Promoting Discourse.
In his Questioning Toolkit, Jamie McKenzie listed 17 types of questions and elaborated on their role in addressing the essential questions related to a unit of study. Among those are organizing, elaborating, divergent, subsidiary, probing, clarification, strategic, sorting/sifting, hypothetical, planning, unanswerable, and irrelevant (McKenzie, 1997). Unfortunately, too often teachers unskilled in the art of questioning will pose questions that involve "only simple processes like recognition, rote memory, or selective recall to formulate an answer." Such cognitive-memory questions are at the lowest level of Gallagher and Ascher's Questioning Taxonomy: cognitive-memory, convergent, divergent, and evaluative questions (Vogler, 2008, Gallagher and Ascher's Questioning Taxonomy section).
As in the online learning environment, the richest discussions will come from higher order open-ended questions (i.e., divergent or evaluative questions), as opposed to centering or closed-ended questions (i.e., cognitive-memory or convergent questions), and then probing follow-up questions (Muilenburg & Berge, 2000). Open ended questions also better involve the whole class and thus enable teachers to better differentiate instruction. Marian Small (2010) suggested four strategies on how one might do this in the mathematics classroom:
Teachers should use answers to a question to help formulate the next question, enabling questions to build upon each other. Kenneth Vogler (2008) suggested how sequencing and patterns can be accomplished:
Extending and Lifting--involves asking a series of questions (extending) at the same cognitive level, then asking a question at the next higher level (lifting).
Circular Path--ask an initial question (this one perhaps was not answered) followed by a series of questions leading back to the first one.
Same Path--all questions are asked at the same level, typically at a lower level (e.g., a series of "what is ..." questions).
Narrow to Broad--lower-level, specific questions are followed by higher-level, general questions.
Broad to Narrow--lower-level, general questions are followed by higher-level, specific questions.
A Backbone of Questions with Relevant Digressions--the series of questions relate to the topic of discussion, rather than focus on a particular cognitive level.
Likewise, students have a role in discourse. The art of questioning can be introduced to them as early as Kindergarten. They, too, must listen, initiate questions and problems, and respond to others; use a variety of tools to explore examples and counterexamples; and convince themselves and others of the validity of representations, solutions, conjectures, and answers. They must rely on evidence and argument to determine validity (NCTM, 1991, Teaching Standard 3).
New teachers, and some of us veterans, might have difficulty in getting students to discuss mathematics in class. You will find helpful suggestions for discussion in How to Get Students to Talk in Class from Stanford University's Center for Teaching and Learning. Among those are to decentralize responses to you as teacher by encouraging learners to direct them specifically to others in the class, share discussion authority with student facilitators, ask open-ended questions, give students time to think and perhaps brainstorm answers to questions with a classmate, be encouraging to those who take risks to answer even if the answer was incorrect, use strategic body language, take notes on student responses to help summarize views later or keep discussion moving, and use active learning strategies.
Consider also the role that new technology tools can play in increasing dialogue about mathematics. These might take the form of wikis, podcasts, blogs, or voting options (known as clickers) that often come with interactive whiteboards. Students might use their classroom wiki to create their own textbook with group understandings of various topics, or for collaborative problem solving, projects, applications of math in everyday life, and so on. They might create podcasts in which they vocalize understandings individually or as a group to share with others. For more on the pedagogic value of podcasts and wikis, see Wiki Pedagogy by Renée Fountain. Blogs would be useful for monitoring individual contributions of learners in discussion on a variety of topics. Their commentaries are revealed in reverse chronological order (i.e., the most recent is listed first). Marzano (2009) noted that whiteboard voting technologies "allow students to electronically cast their vote regarding the correct answer to a question. Their responses are immediately displayed on a pie chart or bar graph, enabling teacher and students to discuss the different perceptions of the correct answer" (p. 87).
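To make the aggregation step concrete, here is a minimal Python sketch of what a response system does behind the scenes: tally the votes and render a simple chart. The sample responses and the text-based bar chart are invented for illustration; actual clicker software collects the responses from students' devices and draws the pie or bar graph automatically.

```python
from collections import Counter

# Hypothetical responses to one multiple-choice question (A-D); real systems
# gather these from students' handsets rather than from a hard-coded list.
responses = ["A", "C", "B", "C", "C", "D", "A", "C", "B", "C"]

tally = Counter(responses)   # aggregate the votes per answer choice
total = sum(tally.values())

# Display the distribution as a simple text bar chart with percentages,
# analogous to the graph a whiteboard system would project for discussion.
for choice in sorted(tally):
    count = tally[choice]
    print(f"{choice}: {'#' * count:<10} {count} ({count / total:.0%})")
```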
For more on podcasts and blogs for learning, read articles by Patricia Deubel (2007): Podcasts: Where's the learning? and Moderating and ethics for the classroom instructional blog.
Another key to successful instruction is effective feedback and reinforcement. However, strictly speaking, feedback is not advice, praise, grades or evaluation, as "none of these provide the descriptive information that students need" about their efforts to reach a goal (Wiggins, 2012, p. 11).
Feedback should be clearly understood, timely, immediately useable by students, consistent, comprehensive, supportive, and valued (Garon, 2000). "When anyone is trying to learn, feedback about the effort has three elements: recognition of the desired goal, evidence about present position, and some understanding of a way to close the gap between the two" (Sadler, in Black & Wiliam, 1998, Self Assessment by Pupils section).
Jan Chappuis (2012) provided the following five characteristics of effective feedback:
Everyone makes mistakes. That is, sometimes we do things that are uncharacteristic of work we might have done in the past and which we might be able to correct ourselves through greater attention. So, in providing corrective feedback, we should focus on true errors, rather than pointing out all mistakes. True errors "occur because of a lack of knowledge" and fall into four broad categories, according to Douglas Fisher and Nancy Frey (2012):
David Nicol and Debra Macfarlane-Dick (n.d.) provided additional principles of good feedback, which are drawn from their formative assessment model and review of research literature:
Use Varied Instructional Approaches
Putting research into practice involves teaching for understanding by using a variety of instructional approaches. James Hiebert and Douglas Grouws (2009) stated that "conceptual understanding--the construction of meaningful relationships among mathematical facts, procedures, and ideas; and skill efficiency--the rapid, smooth, and accurate execution of mathematical procedures" are "central to mathematics learning and have often competed for attention" (p. 10). While teachers might wrestle with selecting effective instructional methods for increasing learning, an important point to remember is that "particular methods are not, in general, effective or ineffective. Instructional methods are effective for something" (p. 10). The key is to "balance these two approaches, with a heavier emphasis on conceptual understanding" (p. 11).
Teachers also need to remember that varying instructional approaches is part of differentiated instruction. The Rochester Institute of Technology (2009) noted how a mix of strategies might benefit visual, auditory, and kinesthetic learners. Visual learners appreciate lessons with graphics, illustrations, and demonstrations. Auditory learners might learn best from lectures and discussions. Kinesthetic learners process new information best when it can be touched or manipulated; thus, for this group of learners, written assignments, note taking, examination of objects, and participation in activities are valued strategies to consider.
Teachers might question if their approach should be more teacher-centered or more student-directed. The National Mathematics Advisory Panel (2008) noted, "High-quality research does not support the exclusive use of either approach" (p. 45). The terms themselves are not uniquely defined with "teacher-directed instruction ranging from highly scripted direct instruction approaches to interactive lecture styles, and with student-centered instruction ranging from students having primary responsibility for their own mathematics learning to highly structured cooperative groups" (p. 45). Ball, Ferrini-Mundy, Kilpatrick, Milgram, Schmid, and Schaar (2005) expressed:
Students can learn effectively via a mixture of direct instruction, structured investigation, and open exploration. Decisions about what is better taught through direct instruction and what might be better taught by structuring explorations for students should be made on the basis of the particular mathematics, the goals for learning, and the students' present skills and knowledge. For example, mathematical conventions and definitions should not be taught by pure discovery. Correct mathematical understanding and conclusions are the responsibility of the teacher. Making good decisions about the appropriate pedagogy to use depends on teachers having solid knowledge of the subject. (Areas of Agreement section)
Teachers should exercise caution if students are to use a discovery approach to learning. Discovery learning is a form of partially guided instruction. Partially guided instruction is known by other names, including "problem-based learning, inquiry learning, experiential learning, and constructivist learning" (Clark, Kirschner, & Sweller, 2012, p. 7).
According to Alfieri, Brooks, Aldrich, and Tenenbaum (2011), a review of literature would suggest that "discovery learning occurs whenever the learner is not provided with the target information or conceptual understanding and must find it independently and with only the provided materials" (p. 3). The extent that assistance is provided would depend on the difficulty students might have in discovering target information. Findings in their 2011 meta-analysis of 580 comparisons of discovery learning (unassisted and assisted) and direct instruction suggested that generally "unassisted discovery does not benefit learners, whereas feedback, worked examples, scaffolding, and elicited explanations do" (p. 1). Thus, Alfieri et al. indicated the following implications for teaching:
Although direct teaching is better than unassisted discovery, providing learners with worked examples or timely feedback is preferable. ... Furthermore, [their meta-analysis suggested] teaching practices should employ scaffolded tasks that have support in place as learners attempt to reach some objective, and/or activities that require learners to explain their own ideas. The benefits of feedback, worked examples, scaffolding, and elicited explanation can be understood to be part of a more general need for learners to be redirected, to some extent, when they are mis-constructing. Feedback, scaffolding, and elicited explanations do so in more obvious ways through an interaction with the instructor, but worked examples help lead learners through problem sets in their entireties and perhaps help to promote accurate constructions as a result. (p. 12)
Richard Clark, Paul Kirschner, and John Sweller (2012) further put to rest the debate on the use of partially guided instruction. After a half century of such advocacy, "Evidence from controlled, experimental studies (a.k.a. "gold standard") almost uniformly supports full and explicit instructional guidance" (p. 11). Elaborating, they revealed:
Decades of research clearly demonstrate that for novices (comprising virtually all students), direct, explicit instruction is more effective and more efficient than partial guidance. So, when teaching new content and skills to novices, teachers are more effective when they provide explicit guidance accompanied by practice and feedback, not when they require students to discover many aspects of what they must learn. ... this does not mean direct, expository instruction all day every day. Small group and independent problems and projects can be effective--not as vehicles for making discovery, but as a means of practicing recently learned content and skills. ... Teachers providing explicit instructional guidance fully explain the concepts and skills that students are required to learn. Guidance can be provided through a variety of media, such as lectures, modeling, videos, computer-based presentations, and realistic demonstrations. It can also include class discussions and activities. (p. 6)
Using instructional approaches such as "problem-based learning, scientific experimentation, historical investigation, Socratic seminar, research projects, problem solving, concept attainment, simulations, debates, and producing authentic products and performances" (Tomlinson & McTighe, 2006, p. 110) will help uncover the BIG ideas related to content that lie below the surface of acquiring basic skills and facts.
When teaching for understanding, a unit or course design incorporates instruction and assessment that reflects six facets of understanding. Students are provided opportunities to explain, interpret, apply, shift perspective, empathize, and self-assess (McTighe & Seif, 2002). Framing the essential or BIG questions in a unit is an important skill for educators to acquire, as these questions offer the organizing focus for a unit. Tomlinson and McTighe (2006) suggested two to five essential questions per unit, which are written at age-appropriate levels and sequenced so that one leads to the next. Students need to understand key vocabulary associated with those questions.
The emphasis on vocabulary development is particularly important for learning mathematics with understanding, especially for students for whom English is a second language. Imagine their possible confusion upon encountering homophones like "pi/pie, plane/plain, rows/rose, sine/sign, sum/some" (Bereskin, Dalrymple, Ingalls, et al., 2005, p. 3). Key vocabulary must be explicitly taught, and reinforced by posting symbols with definitions and examples to clarify meaning. Such learners also benefit from materials presented in their native language, where possible. In TIPS for English Language Learners in Mathematics, Bereskin, Dalrymple, Ingalls, and others from the Ontario (CA) Ministry of Education and their Partnership of School Boards proposed the following types of mathematical activities that help to develop both mathematics and language skills:
In discussing essential principles of effective math instruction for all learners, including learners with disabilities and those at risk of school failure, Karen Smith and Carol Geller (2004) said common attributes that have been identified as positively affecting student learning include:
Notice that Smith and Geller (2004) also noted the importance of feedback. In support of the above attributes, Leinwand and Fleishman (2004) suggested the following to teach for meaning:
Note: For examples on how to use open-ended problem-solving that enables learners to develop their own approaches, read Volker Ulm's (2011) Teaching mathematics - Opening up individual paths to learning.
Need more ideas for instructional strategies?
Visit the Teaching Channel for high-quality, free videos on effective teaching practices, inspiring lesson ideas, and the Common Core State Standards.
Consider using whiteboard technology to improve the quality of your lessons.
Steven Ross and Deborah Lowther (2009) noted several valuable features for improving lesson quality when using interactive whiteboards:
Further, when interactive response systems (known as clickers) are used, teachers can pose questions to students, enabling them to get immediate feedback with answers "instantly aggregated and graphically displayed" (Ross & Lowther, 2009, p. 21). This is the kind of feedback enabling timely review of lessons and student-centered community learning.
Teach Reading the Math Text
Students must be taught how to read a math textbook. Most students, in my experience, have never learned how; they rely heavily on explanations from their teachers and jump right into their homework problems without reading the text. According to Mariana Haynes (2007), "The research is clear that when teachers across content areas help students use reading comprehension strategies (such as summarizing, generating questions, and using semantic and graphic organizers), student learning improves substantially. Studies show that explicitly teaching these strategies requires students to actively process information and connect new learning with prior concepts and experiences" (p. 4).
Reading a math text is different from reading texts in other subject areas. Diana Metsisto (2005), who discusses this issue in depth in Reading in the Mathematics Classroom, stated that math texts contain a greater number of concepts per sentence and paragraph than in texts for other subjects. Reading is complicated by the use of numeric and non-numeric symbols, specialized vocabulary, graphics which must be understood, page layouts that are different from other texts, and topic sentences that often occur at the end of paragraphs instead of at the beginning. The text is often written above the reading level of the intended learner. Some small words when used in a math problem make a big difference in students' understanding of a problem and how it is solved. Metsisto provides reading strategies for math texts.
Here are other resources to consult:
Provide Homework Assistance
The issue of assigning homework is controversial in terms of its purpose, what to assign, the amount of time needed to complete it, parental involvement, its actual effect on learning and achievement, and its impact on family life and other valuable activities that occur outside of school hours. To help ensure that homework is completed and appropriate, consider the following research-based homework guidelines provided by Robert Marzano and Debra Pickering (2007, p. 78):
Assign purposeful homework. Legitimate purposes for homework include introducing new content, practicing a skill or process that students can do independently but not fluently, elaborating on information that has been addressed in class to deepen students' knowledge, and providing opportunities to explore topics of their own interest.
[E]nsure that homework is at the appropriate level of difficulty. Students should be able to complete homework assignments independently with relatively high success rates, but they should still find the assignments challenging enough to be interesting.
Involve parents in appropriate ways (for example, as a sounding board to help students summarize what they learned from the homework) without requiring parents to act as teachers or to police students' homework completion.
Carefully monitor the amount of homework assigned so that it is appropriate to students' age levels and does not take too much time away from other home activities. (p. 78).
A rule of thumb for homework might be that "all daily homework assignments combined should take about as long to complete as 10 minutes multiplied by the students' grade level" and "when required reading is included as a type of homework, the 10-minute rule might be increased to 15 minutes" (Cooper, 2007, cited in Marzano & Pickering, 2007, p. 77). Other tips for getting homework done are in Helping Your Students with Homework, a 1998 booklet based on educational research from the U.S. Department of Education.
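As a rough illustration of this rule of thumb (a sketch only; the function name and parameters below are invented for the example, not drawn from the sources cited), the guideline reduces to a one-line calculation in Python:

    def suggested_homework_minutes(grade_level, includes_required_reading=False):
        # Cooper's rule of thumb: about 10 minutes per grade level across all subjects,
        # or about 15 minutes per grade level when required reading is assigned
        minutes_per_grade = 15 if includes_required_reading else 10
        return minutes_per_grade * grade_level

    print(suggested_homework_minutes(7))        # 70 minutes total for a 7th grader
    print(suggested_homework_minutes(7, True))  # 105 minutes when required reading is included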
Classroom teachers might also make learners and their parents aware of the many homework assistance sites available on the Internet, many of which are noted at CT4ME among our Math Resources: Study Skills and Homework Help.
For more on homework, including the issue of differentiated homework, read Homework: A Math Dilemma and What to Do About It (Deubel, 2007).
Use Tools and Manipulatives
Students' thinking and understanding will be enhanced by their use of a variety of tools, such as graphic organizers, thinking maps, calculators, computers, and manipulatives. However, important variables to consider that influence effectiveness of tools and manipulatives (e.g., using graphic organizers) include such things as "grade level, point of implementation, instructional context, and ease of implementation" (Hall & Strangman, 2002, Factors Influencing Effectiveness section). CT4ME has an entire section devoted to math manipulatives, which includes use of calculators. Here I delve more into graphic organizers and thinking maps.
A graphic organizer is defined as "a visual and graphic display that depicts the relationships between facts, terms, and/or ideas within a learning task. Graphic organizers are also sometimes referred to as knowledge maps, concept maps, story maps, cognitive organizers, advance organizers, or concept diagrams" (Hall & Strangman, 2002, Definition section). They are valuable as "a creative alternative to rote memorization"; they "coincide with the brain's style of patterning" and promote this patterning "because material is presented in ways that stimulate students' brains to create meaningful and relevant connections to previously stored memories" (Willis, 2006, Ch. 1, Graphic Organizers section). They are often used in brainstorming and to help learners examine their conceptual understanding of new content.
Graphic organizers might be classified as sequential, relating to a single concept, or relating to multiple concepts. In The Theory Underlying Concept Maps and How to Construct and Use Them, Joseph Novak and Alberto Cañas (2008) stated that concepts within a concept map are "usually enclosed in circles or boxes of some type, and relationships between concepts indicated by a connecting line linking two concepts. Words on the line, referred to as linking words or linking phrases, specify the relationship between the two concepts." Concept is defined as "a perceived regularity in events or objects, or records of events or objects, designated by a label. The label for most concepts is a word, although sometimes we use symbols such as + or %, and sometimes more than one word is used. Propositions are statements about some object or event in the universe, either naturally occurring or constructed. Propositions contain two or more concepts connected using linking words or phrases to form a meaningful statement. Sometimes these are called semantic units, or units of meaning" (Introduction section).
Concept maps are usually developed in "a hierarchical fashion with the most inclusive, most general concepts at the top of the map and the more specific, less general concepts arranged hierarchically below." Cross-links between sub-domains on the concept map should be added, where possible, as these illustrate that learners understand interrelationships between sub-domains in the map. Specific examples illustrating or clarifying a concept can be added to the concept map, but these would not be placed within ovals or boxes, as they are not concepts (Novak & Cañas, 2008, Introduction section). Novak and Cañas presented examples of concept maps developed with CMap Tools from the Institute for Human and Machine Cognition.
Graphic organizers come in many forms. Other common forms include continuum scales, cycles of events, spider maps, Venn diagrams, compare/contrast matrices, and network tree diagrams. A Venn diagram (two or more overlapping circles) could be used to compare and contrast sets, such as in a study of least common multiple and greatest common factor, or classifying geometric shapes. A tree diagram is useful for determining outcomes in a study of probability of events, permutations and combinations. KWL charts are useful for investigations. Note: CT4ME includes KWL charts in our resource booklets for standardized test prep. Educators might also wish to expand the KWL chart to a KWHL chart or the ultimate KWHLAQ chart to better promote 21st century skill development. These acronyms represent the following questions:
As an example, students can generate their own graphic organizer using the following sample instructions, adapted from Willis (2006, Ch. 1):
Student-generated Graphic Organizer
Adapted from J. Willis, Research-based strategies to ignite student learning, (2006, Ch. 1, Graphic Organizers section)
As another example, Metsisto (2005) suggested the Frayer Model and Semantic Feature Analysis Grid. The Frayer Model is used for vocabulary building and is a chart with four quadrants which can hold a definition, some characteristics/facts, examples, and non-examples of the word or concept. The word or concept might be placed at the center of the chart. In Think Literacy: Mathematics Approaches for Grades 7-12, the Ontario Association for Mathematics Education (2004) further elaborates on reading, writing and oral communication strategies and provides a thorough discussion of the Frayer Model.
For an approach similar to the Frayer Model, view the short ASCD video of grade 5 math teacher Malinda Paige using a Words in Context graphic organizer in a geometry lesson for learning vocabulary. Paige linked her lesson to real-world events. Then download the Words in Context graphic organizer for your lessons. Inspiration and Kidspiration software can be used for other graphic organizers, which Rockingham County Public Schools (VA) has made available in multiple subject areas, including for math.
The Semantic Feature Analysis Grid is a matrix or chart to help students to organize common features and to compare and contrast concepts. Spreadsheets are useful to design these kinds of charts.
Learn more by also reading Knowledge Maps: Tools for Building Structure in Mathematics, in which Astrid Brinkmann (2005) discussed the rules for developing mind maps and concept maps and illustrated how they are used to graphically link ideas and concepts in a well-structured form.
The following are graphic organizer web sites to consider:
Graphic Organizers from Education Oasis include multiple types such as cause and effect, compare and contrast, vocabulary development and concept organizers, brainstorming, KWL, and more.
Graphic Organizers from Education Place include about 38 organizers. Learners can use these freely "to structure writing projects, to help in problem solving, decision making, studying, planning research and brainstorming."
Graphic Organizers from Enhance Learning with Technology Web site. What are they? Why use them? How to use them? The site includes numerous links on the topic, examples, and software possibilities to assist with the endeavor.
Graphic Organizers is based on the work of Edwin Ellis, Ph.D., president of Makes Sense Strategies, and features SMARTsheets. The site also includes examples of how these graphic organizers can be used for math, literature, social studies, science, social/behavior. Register for free downloads.
The Graphic Organizer from Graphic.Org shows graphic organizers, concept mapping, and mind mapping examples related to their use: describing, comparing/contrasting, classifying, causal, sequencing, and decision making.
Thinking maps are closely aligned to graphic organizers; however, in the words of David Hyerle, they are "a LANGUAGE of interdependent graphic primitives....teachers and student thrive within the dynamism of eight integrated tools based on thinking patterns. (a simple analogy may be made to complexity of 8 parts of speech and how they are relatively meaningless in isolation, and convey complexity when used together... this also leads to deep, authentic assessment" (personal communication, October 6, 2007). Thinking maps are open-ended, allow students to draw on their own experience, and help them to identify, "organize, synthesize, and communicate patterns of information by using a common visual language. They enable students to explore multiple perspectives and to develop metacognitive strategies for planning, monitoring, and reflecting" (Lipton & Hyerle, n.d., p. 6). The eight maps are discussed and illustrated with student examples at Designs for Thinking. Lipton and Hyerle also described them, which I have adapted for the following table:
Circle map: helps students generate and identify information in context related to a topic written inside the inner circle; the map might be enclosed in a square for its frame of reference.
Tree map: can be used both inductively and deductively for classifying or grouping.
Bubble map: can be used for describing the characteristics, qualities, or attributes of something with adjectives; any number of connecting bubbles can extend from the center.
Double-bubble map: useful for comparing and contrasting.
Flow map: enables students to sequence and order events, directions, cycles, and so on.
Multi-flow map: helps to analyze the causes and effects of an event.
Brace map: useful for identifying part-whole relationships of physical structures.
Bridge map: helps students to interpret analogies and investigate conceptual metaphors.
Adapted from Lipton, L., & Hyerle, D. (n.d.). I see what you mean: Using visual maps to assess student thinking, pp. 2-3. Thinking Foundation. Retrieved from http://www.thinkingfoundation.org/research/journal_articles/journal_articles.html
Overall, Harold Wenglinsky (2004) concluded that "teaching that emphasizes higher-order thinking skills, project based learning, opportunities to solve problems that have multiple solutions, and such hands-on techniques as using manipulatives were all associated with higher performance on the mathematics" National Assessment of Educational Progress among 4th and 8th graders (p. 33). Using such practices to teach for meaning promotes high performance for students at all grade levels. CT4ME has an entire section devoted to Math Manipulatives.
Read the Magic of Math in which Ken Ellis (2007) described Fullerton IV Elementary School's (Roseburg, OR) nationally recognized approach to teaching math and watch the video documentary. Math is embedded throughout the curriculum. Their immersion approach has led to improved test scores. There is a focus on using precise mathematical vocabulary and problem solving in real world contexts. Instructional strategies include a mix of direct instruction, structured investigation, and open exploration. Fullerton is one of 20 Intel Schools of Distinction.
Watch the short video at Edutopia.org: Cooperative Arithmetic: How to Teach Math as a Social Activity. A teacher in Anchorage, Alaska demonstrates how he establishes a cooperative learning environment in an upper-elementary math classroom.
Alfieri, L., Brooks, P. J., Aldrich, N. J., & Tenenbaum, H. R. (2011). Does discovery-based instruction enhance learning? Journal of Educational Psychology, 103(1), 1-18. Retrieved from http://www.cideronline.org/podcasts/pdf/1.pdf
Ball, D. L., Ferrini-Mundy, J., Kilpatrick, J., Milgram, R. J., Schmid, W., & Schaar, R. (2005). Reaching for common ground in K-12 mathematics education. Washington, DC: Mathematics Association of America: MAA Online. Retrieved from http://www.maa.org/common-ground/cg-report2005.html
Bereskin, S., Dalrymple, S., Ingalls, M., et al. (2005). TIPS for English language learners in mathematics. Ontario, CA: Ministry of Education and Partnership in School Boards. Retrieved from http://www.edu.gov.on.ca/eng/studentsuccess/lms/files/ELLMath4All.pdf
Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment [Online]. Phi Delta Kappan, 80(2), 139-144, 146-148. [Note: also see the article at http://blog.discoveryeducation.com/assessment/files/2009/02/blackbox_article.pdf].
Booth, E. (2013). A recipe for artful schooling. Educational Leadership, 70(5), 22-27.
Brinkmann, A. (2005, October 25). Knowledge maps: Tools for building structure in mathematics. International Journal for Mathematics Teaching and Learning. Retrieved from http://www.cimt.plymouth.ac.uk/journal/default.htm
Burns, M. (2004). Writing in math. Educational Leadership, 62(2), 30-33.
Carpenter, T. P., Blanton, M. L., Cobb, P., Franke, M. L., Kaput, J., & McClain, K. (2004). Scaling up innovative practices in mathematics and science: Research report. Madison, WI: National Center for Improving Student Learning and Achievement in Mathematics and Science. Retrieved from http://www.wcer.wisc.edu/NCISLA/publications/reports/NCISLAReport1.pdf
Chappuis, J. (2012). "How am I doing?" Educational Leadership, 70(1), 36-40.
Clark, R., Kirschner, P., & Sweller, J. (2012, Spring). Putting students on the path to learning: The case for fully guided instruction. American Educator, 36(1), 6-11. Retrieved from http://www.aft.org/newspubs/periodicals/ae/index.cfm
Common Core State Standards. (2010). Standards for Mathematical Practice. Retrieved from http://www.corestandards.org/Math/Practice
Conley, D. T. (2011). Building on the common core. Educational Leadership, 68(6), 16-20.
Deubel, P. (2007, October 22). Homework: A math dilemma and what to do about it. T.H.E. Journal. Retrieved from http://thejournal.com/articles/2007/10/22/homework-a-math-dilemma-and-what-to-do-about-it.aspx
Deubel, P. (2007, June 7). Podcasts: Where's the learning? T.H.E. Journal. Retrieved from http://thejournal.com/articles/2007/06/07/podcasts-wheres-the-learning.aspx
Deubel, P. (2007, February 21). Moderating and ethics for the classroom instructional blog. T.H.E. Journal. Retrieved from http://thejournal.com/articles/2007/02/21/moderating-and-ethics-for-the-classroom-instructional-blog.aspx?sc_lang=en
Ellis, K. (2005, November 8). The magic of math. Edutopia Magazine [online]. Retrieved from http://www.edutopia.org/node/1405
EngageNY (2011, August 1). Common core instructional shifts. Retrieved from http://engageny.org/resource/common-core-shifts/
Fisher, D., & Frey, N. (2012, September). Making time for feedback. Educational Leadership, 70(1), 42-47.
Garon, J. (2000, Spring). The seven principles of effective feedback. The Law Teacher, 7(2). Retrieved from http://lawteaching.org/lawteacher/2000spring/sevenprinciples.php
Hall, T., & Strangman, N. (2002). Graphic organizers. Wakefield, MA: National Center on Accessing the General Curriculum. Retrieved from http://www.cast.org/publications/ncac/ncac_go.html
Haynes, M. (2007, April). From state policy to classroom practice: Improving literacy instruction for all students. National Association of State Boards of Education. Available in Resources, Project Pages: Adolescent Literacy: http://www.nasbe.org/
Hiebert, J., & Grouws, D. (2009, Fall). Which instructional methods are most effective for math? Baltimore, MD: Johns Hopkins University, Better: Evidence-based Education, 10-11. Retrieved from http://www.betterevidence.org
Leinwand, S., & Fleishman, S. (2004, September). Teach mathematics right the first time. Educational Leadership, 62(1), 88-89.
Lipton, L., & Hyerle, D. (n.d.). I see what you mean: Using visual maps to assess student thinking. Thinking Foundation. Retrieved from http://www.thinkingfoundation.org/research/journal_articles/journal_articles.html
Marzano, R. (2009, October). Helping students process information. Educational Leadership, 67(2), 86-87.
Marzano, R., & Pickering, D. (2007). The case for and against homework. Educational Leadership, 64(6), 74-79.
Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works. Alexandria, VA: ASCD.
Maths misconceptions (2006, January). Teachers Magazine, (42/Primary). Retrieved from http://www.teachernet.gov.uk/teachers/issue42/primary/features/Mathsmisconceptions/
McKenzie, J. (1997, November/December). A questioning toolkit. From Now On, 7(3). Retrieved from http://www.fno.org/nov97/toolkit.html
McTighe, J., & Seif, E. (2002). Indicators of teaching for understanding. Understanding by Design Exchange.
Metsisto, D. (2005). Reading in the mathematics classroom. In J. M. Kenney, E. Hancewicz, L. Heuer, D. Metsisto, & C. L. Tuttle, Literacy strategies for improving mathematics instruction (chapter 2). Alexandria, VA: ASCD. Retrieved from http://www.ascd.org/publications/books/105137/chapters/Reading_in_the_Mathematics_Classroom.aspx
Morsund, D., & Ricketts, D. (2010). Math maturity. In IAE-pedia [Information Aged Education wiki]. Retrieved February 17, 2010, from http://iae-pedia.org/Math_Maturity
Muilenburg, L., & Berge, Z. (2000). A framework for designing questions for online learning. The American Journal of Distance Education. Retrieved from http://smcm.academia.edu/LinMuilenburg/Papers/440394/A_Framework_for_Designing_Questions_for_Online_Learning
National Council of Teachers of Mathematics (2000). Principles and standards for school mathematics. Reston, VA: Author. Retrieved from http://standards.nctm.org/
National Council of Teachers of Mathematics (1991). Professional Standards for Teaching Mathematics. Reston, VA: Author. Retrieved from http://standards.nctm.org/
National Mathematics Advisory Panel (2008). Foundations for success: The final report of the National Mathematics Advisory Panel. Washington, DC: U.S. Department of Education. Retrieved from http://www.ed.gov/about/bdscomm/list/mathpanel/index.html
National Research Council (2012, July). Education for life and work: Developing transferable knowledge and skills in the 21st century [Report Brief]. J. W. Pellegrino, & M. L. Hilton (Eds.); Committee on Defining Deeper Learning and 21st Century Skills; Center for Education; Division on Behavioral and Social Sciences and Education. Washington, DC: National Academy Press. Retrieved from http://www.nap.edu/catalog.php?record_id=13398
National Research Council (2001). Adding it up: Helping children learn mathematics. J. Kilpatrick, J. Swafford, & B. Findell (Eds.). Mathematics Learning Study Committee, Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press. Retrieved from http://www.nap.edu/catalog.php?record_id=9822
Nichol, D., & Macfarlane-Dick, D. (n.d.). Rethinking formative assessment in HE: A theoretical model and seven principles of good feedback practice. The Higher Education Academy SENLEF Project. Retrieved from http://www.heacademy.ac.uk/806.htm
Novak, J. D., & Cañas, A. J. (2008). The theory underlying concept maps and how to construct them. Technical Report IHMC CmapTools 2006-01 Rev 01-2008. Florida Institute for Human and Machine Cognition. Retrieved from http://cmap.ihmc.us/Publications/ResearchPapers/TheoryUnderlyingConceptMaps.pdf
Ohio Department of Education (2012, March 2). Ohio Mathematics Common Core Standards and Model Curriculum [YouTube video]. Retrieved from http://www.youtube.com/watch?v=0pJ_nI1AuLA
Ontario Association for Mathematics Education (2004). Think literacy: Mathematics approaches grades 7-12. Retrieved from http://oame.on.ca/main/index1.php?lang=en&code=ThinkLit
Pashler, H., Bain, P., Bottge, B., Graesser, A., Koedinger, K., McDaniel, M., & Metcalfe, J. (2007). Organizing instruction and study to improve student learning (NCER 2007-2004). Washington, DC: National Center for Education Research, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/wwc/publications/practiceguides/
Paul, R., & Elder, L. (1997, April). Foundation for critical thinking: Socratic teaching. Retrieved from http://www.criticalthinking.org/pages/socratic-teaching/507
Pitler, H., Hubbell, E. R., Kuhn, M., & Malenoski, K. (2007). Using technology with classroom instruction that works. Alexandria, VA: ASCD.
Reeves, D. (2006). The learning leader: How to focus school improvement for better results. Alexandria, VA: ASCD.
Rochester Institute of Technology (2009). Some characteristics of learners, with teaching implications. Retrieved from http://online.rit.edu/faculty/teaching_strategies/adult_learners.cfm
Ross, S., & Lowther, D. (2009, Fall). Effectively using technology in instruction. Baltimore, MD: Johns Hopkins University, Better: Evidence-based Education, 20-21. Retrieved from http://www.betterevidence.org
Small, M. (2010). Beyond one right answer. Educational Leadership, 68(1), 29-32.
Smith, K., & Geller, C. (2004). Essential principles of effective mathematics instruction: Methods to reach all students. Preventing School Failure, 48(4), 22-29.
Stein, C. (2007). Let's talk: Promoting mathematical discourse in the classroom. Mathematics Teacher, 101(4), 285-289. Retrieved from http://teachingmathforlearning.wikispaces.com/file/view/Let's+Talk+Discourse.pdf
Tomlinson, C., & McTighe, J. (2006). Integrating differentiated instruction with Understanding by Design. Alexandria, VA: ASCD.
Treffinger, D. (2008, Summer). Preparing creative and critical thinkers [online]. Educational Leadership, 65. Retrieved from http://www.ascd.org/publications/educational-leadership/summer08/vol65/num09/Preparing-Creative-and-Critical-Thinkers.aspx
Ulm, V. (2011). Teaching mathematics - Opening up individual paths to learning. In series: Towards New Teaching in Mathematics, Issue 3. Bayreuth, Germany: SINUS International. Retrieved from http://sinus.uni-bayreuth.de/2974/
Vogler, K. (2008, Summer). Asking good questions [online]. Educational Leadership, 65. Retrieved from http://www.ascd.org/publications/educational-leadership/summer08/vol65/num09/Asking-Good-Questions.aspx
Wegerif, R. (2002, September) Literature review in thinking skills, technology and learning. Futurelab Series. Bristol, UK: Futurelab. Retrieved from http://www.scribd.com/doc/12831545/Thinking-Skills-Review
Wenglinsky, H. (2004). Facts or critical thinking skills? What NAEP results say. Educational Leadership, 62(1), 32-35.
Wheeler, G. (2010, August 19). A simple solution to a complex problem. ASCD Express, 5(23). Retrieved from http://www.ascd.org/ascd-express/vol5/523-toc.aspx
Wiggins, G. (2012). 7 keys to effective feedback. Educational Leadership, 70(1), 11-16.
Willingham, D. T. (2007, Summer). Critical thinking: Why is it so hard to teach? American Educator, 8-19. Retrieved from http://www.aft.org/pdfs/americaneducator/summer2007/Crit_Thinking.pdf
Willis, J. (2006). Research-based strategies to ignite student learning: Insights from a neurologist and classroom teacher. Alexandria, VA: ASCD.
A flood commonly is defined as any level of flow that exceeds the natural carrying capacity of a river and that inundates the adjoining low-lying land (i.e., the floodplain), which ordinarily is dry, although some floodplains have large areas of wetlands. Of all the natural hazards, floods are among the most widespread and most ruinous to life and property, particularly as more people live near water.
Floods strike in many forms, including sea surges driven by winds, or tsunamis churned by seismic activity. By far the most frequent, however, and standing in a class by themselves, are the inland, fresh-water floods that are caused by rain, by melting snow and ice, or by the bursting of human-made dams. Floods also differ from one another in several respects, including:
- The height and frequency of the overflow;
- The speed of change in water level;
- Flow velocity; and
- Content of sediment and debris.
A small mountain stream may go over its banks a few feet in a few hours on the average of once in 50 years, carrying rocks and forest debris. In contrast, a large stream in an alluvial delta may rise over adjoining lands as much as 10 feet during several weeks, carrying a large volume of sediment, with this event also occurring on the average of once in 50 years.
Floods commonly are described in three ways:
- The rate at which they rise and fall (e.g., a flash flood may rise in as little as an hour);
- The maximum height of inundation (e.g., 2 feet); and
- The average frequency of occurrence of a flood of a given magnitude (e.g., a 25-year flood, 100-year flood, or 500-year flood); a brief worked example of what these recurrence intervals imply follows this list.
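As a worked example (added here for clarity; the arithmetic is standard probability, not data from this article), a "100-year flood" is a flood with a 1-in-100 chance of being equaled or exceeded in any given year, not a flood that arrives once per century on schedule. Over a 30-year period, the probability of experiencing at least one such flood is 1 - (1 - 1/100)^30, which is approximately 1 - 0.74 = 0.26, or roughly a 26 percent chance.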
Benefits and Costs
The effects of flooding on a floodplain may include the following:
- Any benefits or costs for natural ecosystems, such as soil moisture and sediment deposition;
- Any benefits or costs for societal systems, such as an increase in crop production or benefit to grassland; and
- A long list of societal costs including human life, illness, emergency response, property and crop loss, and interruption of business and social activities.
Because there are no universally accepted monetary criteria for measuring such benefits and costs, it is not practical to compute fundamentally consistent numerical estimates of the effects of flooding in all situations. Criteria vary from place to place, and over time.
In the United States, there are at least eight major types of societal strategies for managing floodplains. Each involves a decision, either planned or unplanned, by a landowner, government entity, or combination of land management agencies regarding the benefits and costs that will be promoted by a given system of land-use management and the land's vulnerability to flooding. Each strategy implicitly involves recognition of the extent to which the various stakeholders involved are vulnerable to flood benefits and losses.
A management practice pursued by many communities around the world is to incorporate a combination of some of the above flood-related measures in plans for guiding urban development. A major part of some of those plans is the designation of areas to receive specified degrees of structural flood protection (e.g., floodwalls) along with prohibition of new urban development in hazardous areas where structural protection measures are not planned.
Maintaining the natural ecosystem for beneficial purposes of wildlife conservation, human recreation, and enhancement of water quality may result in no explicit effort to change the flood regime or to alter vulnerability to benefits and costs from flooding. This strategy is present in some public parks and wildlife preserves and in reserves of land that might, unless retained in their natural state, cause higher flood flows or higher sediment loads downstream.
Some agricultural uses of floodplains may result in net human gains, taking into account the yields of suitable crops and pasture that benefit from average flood flows, and also allowing for occasional losses from large or unseasonal flows. There are many instances of cultivation of floodplains in which the risk to life and property is minimized and the gains in agricultural production are maximized.
Urban Open Spaces.
Numerous urban areas contain small floodplains that are occupied for purposes that are not highly vulnerable to flood loss. For example, city parks, golf courses, playing fields for schools, nature preserves, and lawns for suburban residences may be used when there is no flooding, but avoided at flood time.
Occupied Urban Areas.
More common in urban areas than open spaces are partial or complete residential, commercial, or industrial sections which are vulnerable to flooding and yet not subject to flood control, and in which there is no organized community effort to reduce the risk of loss. The occupants simply bear the loss without any planned response. As an extreme, they may consequently lose all their belongings and resources in a disastrous flood.
Property occupants acting individually and without joining in any community response can adopt one or more management strategies to cope with flood risk. They can set aside reserve funds to offset the possible loss when it occurs. They can attempt to flood-proof their property by a variety of measures, including:
- Elevating machinery and vulnerable equipment above prospective flood elevation;
- Changing the location of family papers and other vulnerable objects;
- Arranging vulnerable property so that it may be moved on short notice out of water's reach; and
- Sandbagging around their home if floodwaters threaten.
In building new structures, property owners can locate floors above the prospective flood level, or design furnaces and other critical features so that they are protected from high waters. This flood-proofing depends on preventive action and on receipt of early warning of imminent flooding.
Community groups may promote and maintain, in cooperation with state and national agencies, systems for the issuance and dissemination of flood warning, and for the provision of emergency services to facilitate evacuation, rescue, and rehabilitation services.
All of the above actions assume there will be a flood, with little human intervention to prevent the flood event itself. But the most attractive set of measures to many communities involves engineering construction to reduce the frequency and magnitude of flooding. This has been a favored strategy for public expenditure where the anticipated construction costs are less than the expected loss reduction.
Construction measures may include:
- Channel enlargement and channel straightening;
- The construction of levees and floodwalls; and
- Flow detention in reservoirs that are either single-purpose or multi-purpose.
Until the mid-twentieth century, insurance against flood losses was rarely available in the United States from private insurance companies, even though it could be purchased in some other countries. This situation changed drastically in the United States in 1968, when the U.S. Congress enacted legislation providing for a federally financed National Flood Insurance Program. Under this program, each community with identified flood problems would be provided with maps of its hazardous areas. The community would then be given the opportunity for its property owners to purchase insurance against flood losses and encouraged to adopt various mitigation measures. Properties already built in flood hazard zones were given subsidized premium rates. Newly constructed properties were to be insured at actuarial rates.
To qualify for participation a community must, among other requirements, regulate land use in the 100-year flood zone. Many maps also show various zones with probability of flooding as infrequently as once in 500 years, on average.
One of the federal task forces recommending initiation of publicly supported flood insurance in 1966 suggested that the effort be experimental because it might prove to be counterproductive. Nevertheless, the authorized
In the United States, the unfolding of public policy with respect to floodplain management has involved three phases of emphasis, summarized as follows.
Up to 1927: Choice of damage-reduction measures was made by individual property owners or by local or federal agencies for scattered structural projects.
1928–1968: Emphasis upon structural measures (channels, levees, and dams), with major funding by the U.S. Army Corps of Engineers and cost-sharing with state and local agencies under the Flood Protection Act of 1936 as amended.
1968–present: Continued structural measures, supplemented by federally subsidized flood insurance on existing structures and more near-actuarial rates on new structures.
In recent years, the Federal Emergency Management Agency (FEMA) has placed increasing emphasis on encouraging communities to adopt and enforce plans that seek to reduce uneconomic flood losses through a combination of structural measures, land-use plans, emergency warning and response plans, and flood loss insurance. FEMA, an independent agency of the federal government that reports directly to the President, is tasked with responding to, planning for, recovering from, and mitigating against disaster. The beginnings of FEMA can be indirectly traced to the Congressional Act of 1803, the first piece of disaster legislation in the United States, and directly to then-president Jimmy Carter's executive order in 1979 that centralized federal emergency functions from throughout the government into a new emergency management agency. Today, FEMA is a 2,500-person agency, supplemented by more than 5,000 standby disaster reservists that can be called upon in times of emergency.
Gilbert F. White
WHAT DO FLOODS COST?
Comprehensive estimates of actual flood losses are subject to great uncertainties. But two recent estimates of recorded flood damage are as follows, broken down into the United States and the rest of the world.
- For the United States: The average annual flood damages from 1929 to 1993 were estimated to decrease very slightly after normalizing for changes in gross national product. Annual flood losses totaled $5.2 billion during the period 1989 to 1998.
- For the World: Floods between 1950 and 2000 were estimated to account for one-third of all economic losses, half of all deaths, and 70 percent of all homelessness caused by extreme natural hazards.
Social Studies Curriculum
This course is designed to teach students economics concepts and principles and to introduce them to important economic institutions. Students will learn to apply economic reasoning to their lives as citizens, consumers, workers, and producers.
Through this course students will:
- Understand the fundamental economic principles and concepts of microeconomics, macroeconomics, and international economics.
- Understand the roles and interaction of the individual, government, and economic institutions in a market economy.
- Develop critical thinking skills and how to apply fundamental economic concepts to their lives and important economic issues.
- Learn and apply measurement concepts and methods such as ratios, percentage, index numbers, averages, charts, graphs, and tables.
- Become economically literate participants in the local, national, and global economies.
Course Standards and Objectives
In Economics, students will:
Standard 1: Scarcity and Choice
- Understand that productive resources are limited. Therefore, people cannot have all the goods and services they want; as a result, they must choose some things and give up others.
- Explain why individuals, governments, and societies experience scarcity.
- Explain why individuals, governments, and societies must choose how to allocate their limited resources.
Standard 2: Opportunity Cost and Trade-offs
- Understand that effective decision making requires comparing the additional costs of alternatives with the additional benefits. Most choices involve doing a little more or a little less of something: few choices are all or nothing decisions.
- Define and give examples of opportunity costs.
- Discuss the production tradeoffs which face societies using the Production Possibilities Frontier.
Standard 3: Economic Systems
- Understand that different methods can be used to allocate goods and services.
- Understand that people acting individually or collectively through government must choose which method to use to allocate different kinds of goods and services.
- Understand that a capitalist economic system is defined by the existence of profits, prices that are determined by the forces of supply and demand, and private property rights.
- Compare and contrast the ways goods and services are allocated differently by traditional, command and market economies.
Standard 4: Economic Incentives
- Understand that people respond predictably to positive and negative incentives.
- Understand that entrepreneurs are people who take risks of organizing productive resources to make goods and services. Profit is an important incentive that leads entrepreneurs to accept the risks of business failures.
- List costs and benefits in particular situations and make predictions given changes in incentives.
- Explain the importance of prices, the incentive of profits and existence of property rights in a market economy.
Standard 5: Economic Institutions
- Understand that institutions evolve in market economies to help individuals and groups accomplish their goals. Banks, labor unions, corporations, legal systems, and not-for-profit organizations are examples of important institutions.
- List ways that private institutions such as banks, unions, or corporations influence resource allocation in a market economy.
- List ways that public institutions such as the Federal Reserve, governmental regulatory agencies, and laws influence resource allocation in a market economy.
- Explain the role of an important institution in making trading easier and describe how this improves social welfare.
Standard 6: Exchange, Money, and Interdependence
- Understand that voluntary exchange occurs only when all participating parties expect to gain. This is true for trade among individuals or organizations within a nation, and usually among individuals or organizations in different nations.
- Explain money's role as a medium of exchange, a store of value, and a standard of value.
- Understand that money makes it easier to trade, borrow, save, invest, and compare the value of goods and services.
- Describe how people gain from the voluntarily exchange of goods and services.
- Identify ways that free trade increases the material standard of living.
Standard 7: Markets and Prices
- Markets exist when buyers and sellers interact. This interaction determines market prices and thereby allocates scarce goods and services.
- Explain how the interaction of buyers and sellers determines price in a particular situation.
- Explain the role of price in allocating resources to different goods & services.
- Compare and contrast other methods for allocating resources with the use of market prices.
Standard 8: Supply and Demand
- Understand that prices send signals and provide incentives to buyers and sellers.
- Understand that when supply or demand changes, market prices adjust, affecting incentives.
- Distinguish between demand and quantity demanded; and supply and quantity supplied.
- Describe the reasons for changes in demand and supply.
- List examples of products with a highly elastic demand.
- Describe the effect of price ceilings and price floors on supply and demand.
- Predict changes in real world markets using supply and demand analysis.
- Create and interpret graphs of supply and demand.
Standard 9: Competition and Market Structure
- Understand that competition among sellers lowers costs and prices and encourages producers to produce more of what consumers are willing and able to buy.
- Understand that competition among buyers increases prices and allocates goods and services to those people who are willing and able to pay the most for them.
- Discuss the element of risk in creating a new business.
- Explain the role of an entrepreneur in the market economy.
- Provide an example and discuss the significance of a particular entrepreneur in US history.
- Define barriers to entry and identify particular real world examples.
- Explain why new firms enter an industry.
- Use supply and demand curves to show the effect when new firms enter an industry.
- List the advantages and disadvantages of proprietorship, partnership, and corporation as types of business organization.
- List the characteristics and give examples of oligopolistic, monopolistic, competitive, and monopolistic competitive industries.
Standard 10: Income Distribution
- Understand that income for most people is determined by the market value of the productive resources they sell.
- Understand that what workers earn depends, primarily, on the market value of what they produce and how productive they are.
- List reasons for differences in income between occupations.
Standard 11: Market Failures
- Understand that market failures occur when there is inadequate competition, lack of access to reliable information, resource immobility, externalities, and the need for public goods. An example of market failure is pollution.
- Define and give examples of positive and negative externalities.
- Show how a particular type of market failure affects the results of market allocation.
- Describe the Tragedy of the Commons and apply the concept to Alaska resources.
Standard 12: Role of Government
- Understand that there is an economic role for government in a market economy whenever the benefits of government policy outweigh the costs.
- Understand that governments often provide schools, transportation, national defense, and address environmental concerns, define and protect property rights, and attempt to make markets more competitive. Government policies also redistribute income.
- Understand that costs of government policies sometimes exceed benefits. This may occur because of incentives facing voters, government officials, and government employees, because of actions by special interest groups that can impose costs on the general public, or because social goals other than economic efficiency are being pursued.
- Explain how governments provide the framework of the market by defining and enforcing property rights.
- List the costs and benefits of a particular governmental function.
- List government functions that limit externalities, provide public goods, redistribute income, and promote competition.
- Define and give examples of public goods.
- Discuss the alternative views about the effects of fiscal policy on the US economy.
Standard 13: Gross Domestic Product
- Understand that Gross Domestic Product (GDP) is a measure of the total dollar amount of final goods and services produced in the domestic economy in one year. It is the sum of personal consumption, government spending, business investment, and net exports (a brief worked example follows this list).
- Define GDP and identify its components.
- Explain the limits of GDP as a measure of social welfare.
- Explain the difference between nominal and real GDP.
- Predict the effects that changes in the quantity and quality of resources, technology, institutions and laws will have on potential GDP.
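As a brief illustration (the figures below are invented for classroom practice, not actual data), GDP can be computed as GDP = C + I + G + (X - M). If personal consumption C = $800 billion, business investment I = $200 billion, government spending G = $250 billion, exports X = $100 billion, and imports M = $150 billion, then GDP = 800 + 200 + 250 + (100 - 150) = $1,200 billion.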
Standard 14: Aggregate Supply and Aggregate Demand
- Understand that a nation's overall level of income, employment, and prices are determined by the interaction of spending and production decisions made by households, firms, government agencies, and others in the economy.
- Describe the circular flow of income.
Standard 15: Unemployment
- Understand that unemployment imposes costs on individuals and nations.
- Understand that the unemployment rate is the number of people who are unemployed expressed as a percentage of the labor force. Significant unemployment implies that the nation is not using its scarce resources as efficiently as possible.
- Describe the changes in unemployment and inflation over the business cycle.
- List the costs of unemployment to individuals and the nation.
- Describe the difference between structural, frictional, and demand deficient (cyclical) unemployment.
Standard 16: Inflation and Deflation
- Understand that inflation is sustained increase in the general level of prices, while deflation is a sustained decrease in the general level of prices.
- Understand that unexpected inflation or deflation imposes costs on many people and benefits on some others because it arbitrarily redistributes purchasing power.
- Understand that price instability can reduce the rate of growth of national living standards because individuals and organizations use resources to protect themselves against the uncertainty of future prices.
- Explain how inflation reduces the value of money, financial assets, and income.
- Identify ways some people benefit from inflation while others lose.
- List factors that lead to a high rate of inflation and methods used to attempt to control it.
Standard 17: Savings and Investment
- Understand that interest rates, adjusted for inflation, rise and fall to balance the amount saved with the amount borrowed, which affects the allocation of scarce resources between present and future uses.
- Explain how changes in interest rates allocate resources between the present and future.
- Identify ways to invest in people and explain how human capital investment increases a nation's income.
Standard 18: Monetary Policy
- Understand that monetary policy influences the overall level of employment, output, and prices.
- Understand that the Federal Reserve System, the nation's central bank, conducts monetary policy in the US.
- Describe the role of central banking and the Federal Reserve as the central bank of the U.S.
- Identify the three means by which the Federal Reserve influences money supply. Explain the effects of changes in the money supply on the economy.
Standard 19: Fiscal Policy
- Understand that fiscal policy (taxation and government spending decisions made by the Executive and Legislative branches) influences the overall levels of employment, output, and prices.
Standard 20: Productivity
- Understand that investment in factories, machinery, new technology, and in the health, education, and training of people can raise future standards of living.
- Describe the ways hiring one more worker contributes to a firm's revenue.
- Describe the economic consequences of a particular new technology.
Standard 21: Economic Growth
- Understand that economic growth is a sustained rise in the production of goods and services.
- Understand that economic growth is the result of an increase in the stock of resources and improvements in the technology and human capital.
- Identify the role of improved technology in the long term growth of the economy.
- Describe the relationship between governmental policy (i.e. changing the tax rate of capital gains) and economic growth.
- Distinguish between economic growth and economic development.
- Explain how economic growth results from the increase in the stock of resources, improvements in technology, increased human capital, and changes in laws, institutions and traditions that promotes efficiency.
Standard 22: Absolute and Comparative Advantage and Barriers to Trade
- Understand that when individuals, regions, and nations specialize in what they can produce at the lowest cost and then trade with others, both production and consumption increase.
- Explain how specialization increases the output for people and nations that trade.
- Define opportunity cost of specialization in specific cases.
- Discuss the difference between absolute and comparative advantage.
- Identify goods and services for which Alaska has comparative advantage.
- Understand that international trade results in increased global interdependence.
Standard 23: Exchange Rates and the Balance of Payments
- Understand that the exchange rate between two nations' currencies is determined by their balance of trade in goods, services, and assets.
- Understand that exchange rates are also affected by expectations regarding price levels in various countries.
- Compare the benefits and costs of fixed or floating exchange rates.
- Calculate the price of US goods in another currency at a given exchange rate (an illustrative conversion follows this list).
- Predict the effect of rapid growth in the US economy on the dollar price of German marks.
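As an illustrative conversion (the exchange rate is hypothetical and chosen only for arithmetic practice): at an exchange rate of 1.50 German marks per U.S. dollar, a good priced at 20 U.S. dollars costs 20 x 1.50 = 30 marks; if the dollar strengthens to 2.00 marks per dollar, the same good costs 40 marks, making U.S. goods more expensive for German buyers.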
Standard 24: International Aspects of Economic Development
- Understand that economic development is a sustained expansion of a nation's standard of living.
- Understand that differences in the level of economic development between nations are determined by each nation's government policies, institutions, and utilization of resources.
The Decibel (dB)
Most individuals have heard the word "decibel" used to describe how loud something is. It's probably no surprise that a 100 decibel (dB) noise would be a lot louder than a 50 dB noise. What may be surprising, however, is that 100 dB isn't twice as loud as 50 dB (60 dB is approximately twice as loud as 50 dB!). And zero dB doesn't mean "no sound." This confuses a lot of people, but a brief explanation of sound and the decibel scale, plus a few analogies (for those who dread algebra), follow.
Any vibrating object creates local changes in atmospheric pressure. These pressure fluctuations travel as waves through the air to our ears, and we experience sound. How rapidly an object vibrates determines its frequency or "pitch." Musicians often use the term "pitch" as a synonym for frequency. Technically, frequency (measured in Hertz or Hz) is what we measure; pitch is what we perceive. The intensity, or what we call "loudness," depends on how great these pressure changes are. Sound pressure level (SPL) is the objective measure of sound intensity, loudness is the perceptual correlate. Sound pressure level is normally expressed in decibels sound pressure level (dB SPL). The reason for not expressing SPL as a unit of pressure (e.g. Pascals) follows.
The metric unit of pressure is the Pascal (Pa). Under optimal conditions, the lowest pressure that can be heard by a person with normal hearing is approximately 0.00002 Pa (= 20 µPa). The loudest sound pressure that most humans can tolerate is about 200 Pascals, which is 10 million times greater than the lowest sound pressure that can be heard. Dealing directly with so wide a range is cumbersome; consequently, the decibel scale was devised to make sound measurements manageable. The decibel scale quantifies sound level by taking the logarithm of the ratio of a sound pressure to a reference pressure and multiplying the result by 20, thus compressing a very wide pressure range into more easily managed numbers. By definition, the reference pressure (0 dB) for the sound pressure level scale is 20 micropascals (µPa). Here’s an example to demonstrate how the decibel scale allows us to compress a wide pressure range to a more manageable range:
The average sound pressure of speech at a distance of 5 ft. is about 0.064 Pa. The SPL, in decibels, is
20 log(0.064 Pa / 0.00002 Pa) = 20 log(3200) = 70.1 dB SPL.
(Note that the pressure of speech at this distance is 3200 times greater than the faintest sound pressure we can detect.)
It is important to note that if we double the pressure we won't double the SPL. If we double the pressure from our previous example, we get
0.064 Pa x 2 = 0.128 Pa. In units of dB, this is
20 log(0.128 Pa / 0.00002 Pa) = 20 log(6400) = 76.1 dB SPL, a 6 dB increase.
Similarly, we can show by calculation (or measurement) that if two rifles differ in SPL by 6 dB, the "louder" rifle is creating twice the sound pressure (in Pascal, not SPL) of the "quieter" rifle.
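To make the arithmetic above easy to check, here is a short Python sketch (added for illustration; it is not part of the original article). It converts a sound pressure in pascals to dB SPL using the 20 µPa reference and reproduces the speech and doubled-pressure figures from the examples above.

    import math

    P_REF = 20e-6  # reference pressure: 20 micropascals, defined as 0 dB SPL

    def spl_from_pressure(pressure_pa):
        # dB SPL = 20 * log10(p / p_ref)
        return 20 * math.log10(pressure_pa / P_REF)

    print(round(spl_from_pressure(0.064), 1))  # speech at 5 ft: about 70.1 dB SPL
    print(round(spl_from_pressure(0.128), 1))  # doubled pressure: about 76.1 dB SPL (a 6 dB increase)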
It is reasonable to assume that doubling the sound pressure would result in a sound that is twice as loud. Unfortunately, this isn't the case. Our perceptual response to intensity (and frequency) isn't linear, so what we perceive as being twice as loud isn't a simple function of sound pressure, or even sound power (see note 3). A very good approximation, however, is that a 10 dB increase in SPL will result in a doubling of "loudness." A rifle producing a SPL of 150 dB is approximately twice as loud as a rifle producing 140 dB. Here’s a real-life analogy to help clarify the decibel (dB) and perceived loudness:
At a distance of 5 ft., the SPL of normal speech is approximately 70 dB SPL (we can easily measure this with a sound level meter—more on this below). If another person starts talking (also 5 ft. away), the SPL doesn't increase to 140 dB (this would be deafening!). What we would measure with the sound level meter is, in fact, more along the lines of a 3 dB increase. Perceptually, two people talking at once is louder than one person talking, but not twice as loud. If we had 10 people talking simultaneously in a room, we'd measure about 80 dB; this is approximately twice as loud as a single talker. We're assuming, of course, that no one is screaming or whispering.
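A sketch of the arithmetic behind the "talkers" analogy (the 70 dB starting level is taken from the example above; the rest is standard acoustics for roughly equal, incoherent sources, whose combined level rises by about 10 log10(N) dB over a single source):

    import math

    def combined_spl(single_source_db, n_sources):
        # Level of n roughly equal, incoherent sources, each measuring single_source_db alone
        return single_source_db + 10 * math.log10(n_sources)

    print(round(combined_spl(70, 2)))   # two talkers: about 73 dB (a 3 dB rise)
    print(round(combined_spl(70, 10)))  # ten talkers: about 80 dB (perceived as roughly twice as loud)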
For high-intensity sounds, a 3 dB change in SPL is quite noticeable. For example, compare the SPL of rifle #7 (7mm) with its cover on (no BOSS) and with its muzzle brake (BOSS) on. The measured difference is (163.6 – 159.5) dB SPL = 4.1 dB. The difference in loudness is quite apparent. With regard to high-intensity sounds, a 1 dB change in SPL is noticeable, even to the untrained ear. (Note: For low-intensity sounds, a 1 dB change is barely discernible; this has to do with the physiology of the human ear.) In summary, a rifle that produces an SPL of 141 dB is slightly "louder" than one producing 140 dB. A rifle producing 143 dB SPL is quite a bit louder, and a rifle producing 150 dB SPL is twice as loud (which is pretty loud considering 140 dB is loud enough). Readers should be aware that a 10-dB increase in SPL is equivalent to roughly a three-fold increase in pressure on the eardrum.
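For reference, the sound-pressure ratio implied by a change in SPL is 10^(change in dB / 20). The short sketch below (added for illustration) checks the figures quoted in this section:

    def pressure_ratio(delta_db):
        # Ratio of sound pressures corresponding to a change of delta_db in SPL
        return 10 ** (delta_db / 20)

    for change in (1, 3, 4.1, 6, 10):
        print(change, "dB ->", round(pressure_ratio(change), 2), "times the pressure")
    # 6 dB is about 2x the pressure; 10 dB is about 3.16x, the "three-fold increase" mentioned above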
Sound can be expressed in terms of power, pressure, or sound pressure level (SPL). Readers are referred to note 3 for an explanation of the relationship between sound power and sound pressure. Measuring sound power directly is difficult because it requires measuring the movement of the individual air molecules. Fortunately, it is relatively easy to measure the sound pressure level (SPL) directly using a sound level meter. Basically, the sound level meter consists of a pressure-sensitive microphone connected to an electronic voltmeter. The microphone converts sound pressure into an analogous electric voltage. Circuitry within the sound level meter (SLM) converts the signal from the microphone to an electrical equivalent of sound pressure level. The meter displays the voltage in units of dB SPL.
The "response time" (a meter-dynamic characteristic) of most sound level meters, even cheap ones, can be switched from "SLOW" to "FAST." In the FAST mode, the sound level meter must accurately respond to a signal of 200 millisecond (0.2 second) duration. Measuring steady-state noise (e.g. noise produced by machinery) is straightforward because the sound's duration is typically greater than 0.2 second. Unfortunately, sound level meters don't accurately measure the SPL of rifles because the duration of the sound (excluding echoes from surrounding hills or trees) is very short. For this reason, impulse precision sound level meters were initially used. But the response time in IMPULSE mode proved to be too slow for accurately measuring the SPL of rifle shots.
BIG NOTE: Sound level meters (SLM) used to perform noise surveys aren't designed to measure peak noise levels; they're designed to accurately measure steady-state noises such as noise produced by machinery. A SLM used for measuring steady-state noise (even in FAST response mode) will not accurately measure the SPL of a rifle shot. A SLM in SLOW response mode will barely detect a rifle shot! As I had discovered during previous firearms testing, a SLM designed to measure short-duration (e.g. 10 millisecond) impulse noises won't accurately measure the peak pressure produced by a muzzle blast.
Readers familiar with sound measurement may question this, so let me explain a scenario I encountered when I first began analyzing muzzle blasts.
During one day of testing, about 15 different rifles were fired. I held the sound level meter close to the shooter’s right ear and positioned myself so as not to interfere with the measurement. Two different sound level meters were used: A Larson-Davis model 800B and a Brüel & Kjær model 2209. The first of 15 rifles tested was a small-bore target rifle. With CCI Mini Group shorts, a reading of 105 dB SPL was obtained (IMPULSE mode). In the FAST mode, a reading of 102 dB was obtained. The lower reading in FAST mode was expected. In the IMPULSE mode, a 117 dB reading was obtained using Super-X Expiditer ammo (same target rifle). So far, the measurements looked reasonable. The next rifle tested was an UltraLight 280 using Winchester Super-X ammo. I measured 136 dB SPL which seemed reasonable compared to the small-bore rifle’s modest SPL. What became an obvious problem (obvious meaning 15 rifles later) was that all of the high-power rifles gave approximately the same reading of 133 dB SPL in IMPULSE mode, and a more compressed reading in FAST mode.
The problem became apparent while testing a .22/250 with and without its muzzle brake (BOSS). Anyone who has fired a rifle (or been in the vicinity of one) with a muzzle brake knows that it's a lot louder than the same rifle/ammo without the muzzle brake. However, my first day of testing using supposedly "objective" measurements fell short of demonstrating this: I measured 133 dB SPL with and without the BOSS!
After a day of testing and suspect results, I made a few phone calls. I spoke with several other people who also perform noise surveys and know the OSHA guidelines for measuring noise in factories, etc. Unfortunately, nobody was able to provide any insight into the problem I was experiencing. One possibility that I had considered was that the sound level meters were "overloading" due to the intensity of the rifle shots. But both the Larson-Davis and Brüel & Kjær meters have built-in overload indicators. I leafed through the Instruction and Applications manual for the B&K sound level meter. A chart showed that the meter reading, even in IMPULSE mode, is (as was suspected) affected by the duration of the signal. The only available option was to measure the peak pressure level (versus sound pressure level) produced by each rifle, or record the rifle shots and observe the waveform on a storage oscilloscope. Both the L-D and B&K sound level meters are precision laboratory instruments and are capable of measuring peak pressures. A storage oscilloscope wasn't available at the original test site, but I was equipped with a portable Sony DAT (digital audio tape) recorder. The rifle shots were recorded with the DAT recorder and subsequently analyzed using a spectrum analyzer and a digital storage oscilloscope (more on this below).
The reason for not using the sound level meters in PEAK (peak pressure) mode from the start was that this created another (but surmountable) problem. The maximum pressure that could be measured was 140 dB (peak pressure) before overload occurred. For small caliber rifles and pistols, this wasn't a problem. But the peak pressure level (PPL) at the shooter's ear exceeded 140 dB with the majority of the firearms tested. The problem, then, was how to determine the PPL at the shooter's ear without overloading the instruments. The only way to do this was to move the sound level meters a sufficient distance from the shooter so that even the loudest rifle wouldn't overload the meter. In order to determine the PPL at the shooter's ear, a "correction factor" would have to be added to the results obtained with the meter some distance from the rifle. It was desirable to get the SLM as close as possible to the rifles in order to maximize the "signal-to-noise ratio." In short, if the meter is too far away, other noises (environmental, wind, etc.) would be nearly as loud as the noise being measured and possibly obscure the measurement. Fortunately, even the quietest rifle shots were loud enough at 50 ft. to be much louder than the ambient, or background, noise.
Fifty feet was chosen because this was the minimum distance from the loudest rifle that wouldn't cause the meters to overload in PEAK mode. A correction factor of 25 dB was determined empirically as follows: A small-bore rifle was fired while the meter was held at the shooter's ear. (Unlike the larger rifles, this didn't overload the meter.) Several shots were fired from the same rifle to ensure measurement repeatability. Next, the sound level meter was placed 50 ft. behind the shooter (and free of obstructions). The same small-bore rifle was fired again. The measured PPL at 50 ft. was 25 dB less than at the shooter's ear; hence the 25 dB correction factor.
Readers who have been able to follow this discussion thus far may question the difference between dB SPL (which is the normal unit for sound measurement) and dB PPL. Peak pressure level measurements and SPL use the same reference pressure of 20 microPascal (= 0 dB). Conventional sound pressure level (SPL) measurements reference sound or noise that has a duration exceeding several cycles of vibration. Peak pressure levels measure the single greatest change in pressure, even though the duration may only be half of a cycle. This is justifiable when measuring the intensity of firearms because the peak pressure can occur within one vibratory cycle, plus the ear can perceive the loudness differences of such noise.
The results using PPL in lieu of SPL correlate well with our perception of loudness. (Note: This is true for rifle shots, but not necessarily all types of noise.) One noteworthy difference in the results obtained using PPL versus SPL comes to mind. During my first day of testing, I measured 132 dB SPL for an M30 Carbine using GI ammo. The Browning .22/250 (with BOSS) gave a reading of 133 dB SPL. This would lead us to believe that the Browning is only 1 dB louder than the M30 and would suggest a discernible, but not large, difference in loudness. Results obtained using peak pressure measurements yielded a much greater difference: the M30 measured 123 dB PPL (at 50 ft.) and the Browning .22/250 measured 137 dB PPL (again, at 50 ft.), a difference of 14 dB. Anyone in the vicinity of the test site would have told you there was a huge difference in loudness between these two rifles. Measurements using PPL revealed this difference; conventional SPL measurements did not.
Two sound level meters, a Brüel & Kjær (B&K) model 2209 impulse precision sound level meter and a Larson-Davis 800B, were used to make recordings of the rifle shots. The output of the B&K meter was connected to the right-channel input of a Sony DAT (digital audio tape) recorder. The Larson-Davis SLM was connected to the left-channel of the recorder. Recordings of the rifle shots were analyzed using a Tektronix model 7854 digital storage oscilloscope and SpectraPLUS (Pioneer Hill) spectrum analyzer software. The maximum pressure level using the analyzer was used primarily to confirm the results obtained using the SLM. Note: Both the L-D and B&K meters were calibrated before testing began. Calibration readings were obtained after we completed our testing to verify that the sound level meters did not incur damage during testing.
Table 1 (click here for pdf file) shows the sound intensity of various firearms using peak pressure level (PPL) measurements. The second column, labeled "dB PPL (SLM)" is the reading obtained directly from the Larson-Davis model 800B SLM (mode = PEAK) plus the 25 dB correction. The right-most column, labeled "Pascals peak (RTA)" is the peak pressure (in Pascals) measured via the spectrum analyzer software.
Here are a few observations: The blast noise emanating from a rifle with a muzzle brake is measurably more intense than the same rifle without a muzzle brake. The .300 Win Mag bolt action (using xxx ammo—see Table 1) measured 7.3 dB more intense with the BOSS than without the BOSS. Note: A 7.3 dB increase in PPL is a 2.3-fold increase in sound pressure, as shown below:
20 log(2.32P / P) = 20 log(2.32) = 20 x 0.365 = 7.3 dB
Similarly, the Browning .22/250 (40 grain) measured 7.8 dB more intense with its BOSS than without it. Other comparisons can be made using Table 1.
Note: When comparing firearm peak pressure levels using Table 1, remember that a 1 dB difference can be heard, a 3 dB increase is “quite a bit” louder, and a 10 dB increase is “twice as loud.”
Noise surveys using sound level meters are performed in workplaces to determine if workers are at risk for hearing loss. Two primary variables dictate guidelines previously set forth by OSHA: The sound’s intensity ("loudness") and the time duration a worker is exposed to noise. According to OSHA guidelines, workers exposed to noise at or below 85 dB SPL(A) are not required to wear hearing protection. Workers exposed to noise levels at 90 dB SPL(A) for up to 8 hours must wear hearing protection. For 95 dB SPL(A), the maximum exposure time is 4 hours; for 100 dB SPL(A) the maximum time is 2 hours—this function of SPL versus time is known as the "5 dB exchange rate." When the noise level increases 5 dB, the maximum safe exposure time is halved. Workers are required to wear hearing protection anytime a noise level exceeds 115 dB SPL(A). But even with these guidelines, approximately 50% of workers could experience some hearing loss at these levels without hearing protection.
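The 5 dB exchange rate described above can be written as a one-line formula: the permissible time is 8 hours, halved for every 5 dB above the 90 dBA criterion. Here is a small Python sketch (mine, not an official OSHA tool) that reproduces the exposure limits quoted in the paragraph.

    def osha_permissible_hours(level_dba, criterion_dba=90.0, exchange_rate_db=5.0):
        # Permissible time halves for every 5 dB above the 8-hour criterion level.
        return 8.0 / (2.0 ** ((level_dba - criterion_dba) / exchange_rate_db))

    for level in (90, 95, 100, 105, 115):
        print(level, "dBA:", osha_permissible_hours(level), "hours")
    # 90 dBA: 8.0, 95 dBA: 4.0, 100 dBA: 2.0, 105 dBA: 1.0, 115 dBA: 0.25 hours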
Hearing loss resulting from blasts (such as those produced by firearms) isn't as well documented as occupational hearing loss. There are individuals who have suffered permanent hearing loss as a result of shooting, but the effect of single, loud-noise events varies from person to person. One common complaint shooters have isn't hearing loss but tinnitus (a "ringing in the ear"). It should be noted that tinnitus frequently accompanies hearing loss resulting from noise exposure.
The most well-known aftereffect of exposure to high-intensity sound is the change in auditory sensitivity. If an individual’s auditory threshold (hearing sensitivity) is measured before and after an exposure, the difference in hearing threshold levels is, by definition, the threshold shift (TS). If the threshold shift later disappears, then it is called a temporary threshold shift (TTS). If the shift does not disappear, the final measured threshold shift is called a permanent threshold shift (PTS).
The most undesirable aftereffect of exposure to high-intensity sound is a PTS. Sound-induced PTS is commonly divided into two categories depending on whether the loss was produced by a single, short exposure at a very high intensity (acoustic trauma) or by repeated longer exposures to noise at more moderate sound pressure levels. It is clear from animal studies that in acoustic trauma the inner ear has been subjected to such stress that its mechanical (or elastic) limit has been exceeded. Various structures of the organ of Corti, including hair cells (the individual receptor cells within the inner ear), may become partly or wholly detached. Additionally, one or more of the several membranes in the cochlea may be ruptured, allowing an intermixture of fluids of different composition, thereby poisoning hair cells that survived the mechanical stress. The end consequence is a pronounced loss of hearing sensitivity at the frequencies correlated with the locus of this destruction.
Less is known about acoustic trauma in humans, although it is not at all rare. Victims of acoustic trauma seldom have had a recent audiogram that would enable the amount of threshold shift to be determined with certainty. And unlike controlled studies utilizing animals, information regarding the exposure level and duration isn’t always known. Finally, differences among people in susceptibility to damage are so great that single cases show that acoustic trauma is possible from a given exposure but not that it is inevitable to everyone exposed. In other words, a given exposure, however it is measured, does not produce the same hearing loss in every ear.
Figure 1 (click here) below shows a comparison of single noise exposures that have been shown to be “without hazard” to the average young healthy ear (circled symbols) and those that have apparently produced 15 dB or more of permanent threshold shift in at least one person (symbols in squares). The dashed line, representing 8 hours of exposure at 100 dB SPL or its energy equivalent (more on this below), divides single exposures that are “probably safe” from those that are capable of causing permanent damage in individuals (ref. 1). Note: Physiological damage to the cochlea by high-intensity sound is not necessarily reflected in a measurable PTS. Evidence from both animal and human studies implies that several hundred of the hair cells that have been presumed to be important in the process of hearing may be destroyed before a change in threshold is measurable (ref. 2).
Four of the exposures labeled in the figure are from the studies of Davis et al. (ref. 3) on the effect of noise: Point D—32 minutes at 130 dB SPL; Point M—1 minute exposure to a 2-kHz tone at 130 dB SPL; Point S—8 minute exposure to a 4-kHz tone at 120 dB SPL. Davis himself received a 30 dB increase in his preexisting high-frequency hearing loss after exposure for 20 minutes to a 500-Hz tone at 140 dB SPL (Point H). Other points in Figure 1 include Point SN (ref. 4)—0.4 second at 153 dB (an exposure designed to be equivalent to the sound produced by the opening of an air bag); Point E—1 minute at 135 dB (ref. 5); Point O—the effect of the ring of a cordless telephone in which the same transducer was used for the ringer as well as voice, and produced a measurable PTS in a small fraction of those individuals exposed (ref. 6); and Point L (ref. 7)—an exposure of “a few seconds” to a tone of about 138 dB SPL that was being used to elicit the acoustic reflex (and produced additional damage in two individuals who already had considerable hearing loss).
The “8 hours at 100 dB SPL energy equivalent” exposure duration for any intensity greater than 100 dB SPL can be calculated using the following equation:
time (seconds) = 2.88 x 10^4 seconds / 10^((SPL - 100 dB)/10).
Similarly, the exposure intensity for any duration less than 8 hours (2.88 x 10^4 seconds) can be calculated using
SPL (dB) = 100 dB + 10 log(2.88 x 10^4 seconds / time in seconds).
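As a quick check of these formulas, here is a small Python sketch (my own, with illustrative names) that reproduces the "safe" exposure times quoted below for rifle #5 with and without its BOSS.

    import math

    T_REF_S = 8 * 3600   # 8 hours in seconds (2.88 x 10^4 s)
    SPL_REF_DB = 100.0   # level of the reference exposure

    def equivalent_time_s(spl_db):
        # exposure time carrying the same energy as 8 hours at 100 dB SPL
        return T_REF_S * 10 ** ((SPL_REF_DB - spl_db) / 10)

    def equivalent_spl_db(time_s):
        # SPL carrying the same energy as 8 hours at 100 dB SPL for a given duration
        return SPL_REF_DB + 10 * math.log10(T_REF_S / time_s)

    print(round(equivalent_time_s(157.5) * 1000, 1))  # ~51.2 ms (rifle #5 without BOSS)
    print(round(equivalent_time_s(165.5) * 1000, 1))  # ~8.1 ms  (rifle #5 with BOSS)
    print(round(equivalent_spl_db(0.051), 1))         # ~157.5 dB for a 51 ms exposure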
Table 1 shows the sound pressure level of each firearm (plus respective ammo and attachments) tested. The “Exposure duration” (the abscissa in Fig. 1) to single rifle shots was measured using a storage oscilloscope for each rifle tested; the results are shown in Table 2 (click here for pdf file). The duration of multiple gunshots is simply the duration of a single shot times the number of shots fired. Superimposing the points whose X-Y coordinates are X = time and Y = SPL onto Fig. 1, we can determine the likelihood of safety or the possibility of damage for each firearm tested. If a point appears above the dashed line in Fig. 1, a permanent threshold shift is possible.
Figure 2 (click here) shows an extension of the plot shown in Fig. 1. The time scale has been modified to show points that are similar to the duration of the blast noise (e.g., 3.5 milliseconds). The line in Fig. 2 still represents the energy equivalent of 8 hours at 100 dB SPL. The duration of each blast was measured using a digital storage oscilloscope; refer to Fig. 3 (click here) for an example. Because of the complexity of each waveform encountered, only the duration of the noise at its maximum compression and rarefaction is shown in Table 2 (click here for pdf file). Residual peaks exceeding 140 dB SPL at times greater than 10 milliseconds can be seen on the oscilloscope printouts, but their contribution to risk was not taken into account. It is interesting to note that the “louder” rifles also have measurably longer blast durations in addition to greater peak pressures; this is a significant observation because it is the combination of duration and intensity that determines the “equivalent energy” of the noise. Rifle #5 with its respective BOSS (see Fig. 3) produces a high enough SPL and a long enough blast duration that three successive rifle blasts would put the shooter at risk for permanent hearing damage resulting from acoustic trauma.
Important Notes: The measurements shown in Table 1 are for the actual sound pressure levels encountered at a distance of one foot away from the shooter's ear. The shooter would encounter additional noise (i.e., a greater SPL) in the ear resulting from closer proximity to the rifle's muzzle plus sound transmitted via bone conduction. Sound transmission via bone conduction is very real: Audiologists routinely use bone conduction to test patients' sensorineural (a.k.a. "nerve") hearing thresholds. A "bone conduction" transducer is placed on the mastoid process (bump behind the ear) to transmit sound to the inner ear via bone conduction. Similarly, a shooter's head placed on the rifle stock allows energy travelling through the rifle stock to be transmitted to the inner ear (this is in addition to the airborne blast noise). In short, the values presented in Table 1 would be the minimum sound pressure levels encountered by the shooter for each of the rifles tested.
Another important consideration regarding risk criteria is the “equivalent energy” theory: The idea that “equivalent energy” will result in the same amount of damage for a given person may be insufficient in determining risk for intensities greater than 150 dB SPL. One study (ref. 8) showed that 30 impulses (or “rounds”) of simulated gunfire at 150 dB SPL peak level created a temporary threshold shift (TTS) in a particular ear, whereas 300 impulses of the same pulse shape at 140 dB SPL (to maintain the same total energy) usually produced no TTS in the same individual. This suggests that 3 impulses at 160 dB SPL would create an even greater TTS for the same individual than 30 impulses at 150 dB SPL, even though the “equivalent energy” is the same. In conclusion, greater sound pressure levels are at least, if not more, damaging than their lower-intensity, but longer duration, “energy equivalent” SPLs.
In addition to studies showing that high-intensity impulse noise affects the cochlea differently than continuous noise does, there are logical reasons why the equivalent energy theorem can't always predict risk for hearing damage. Logically, we know there are levels associated with impulses that are dangerous with just one exposure. But at lower levels, individuals can withstand almost an infinite number of the same "signature" impulses (same waveform, but at a lower intensity) without harm. Also, most of the data supporting the equivalent energy theorem have been large-scale demographic studies. Controlled laboratory studies using animals (e.g., ref. 9) have shown that hearing loss resulting from exposure to impulse noise of equal energy increases with peak level. To reiterate: The danger of hearing loss resulting from a single exposure to high-intensity impulse noise is at least as great as what we would predict using Figures 1 or 2.
Another important question is “What is the risk using the same rifle and ammo (rifle #5 with xxx ammo) without its respective BOSS?” The peak pressure level without the BOSS is 157.5 dB PPL, and the duration of a single blast is 3.5 milliseconds (click here for Fig. 4). The maximum “safe” exposure time for 157.5 dB SPL is 51 milliseconds and is the time equivalent of 15 successive blasts. (Note that the blast duration is shorter for the same rifle/ammo combination without its BOSS.) Of the rifles tested, the rifle/ammo combinations that would put shooters at highest risk for hearing loss are rifles #5 and #9 with their respective BOSS plus high velocity or high-energy ammo. At 165.5 dB SPL (rifle #5), an exposure longer than 8 milliseconds would put the shooter in the “danger zone.” This is slightly greater than the duration of two shots, but less than three rifle shots at the measured duration of 3.5 milliseconds per shot (click here for Fig. 5). As previously stated, the shooter would experience a minimum SPL of 165.5 dB (without hearing protection), and the duration of exposure to a single shot is at least 3.5 milliseconds. It is entirely possible that a single rifle shot at this intensity and duration could cause a temporary or permanent threshold shift! The calculations and measurements provide us with information that can be used in conjunction with prior studies to suggest what is “possible” or “likely” versus what is “unlikely” (but not impossible!!). Similarly, rifle #9 with its BOSS plus high-energy ammo (click here for Fig. 6) measured 164.5 dB SPL and 3.8 milliseconds duration. Again, the shooter would be in the “danger zone” between 2 and 3 shots. Referring to Figures 5 and 6, we see that there's a lot of energy subsequent to the primary compression/rarefaction of these blasts; consequently, the actual duration of exposure is greater than the value used in the above calculations, and the chances of a PTS are increased.
Additional Measurements and Notes
Sound level meters are but one type of instrument used to analyze sound: Other measuring devices are used to measure characteristics of sound other than intensity. One such device is known as a "spectrum analyzer." In general, spectrum analysis refers to a detailed observation of the individual parameters of a signal. In acoustics, the parameters of interest are time, frequency and amplitude. A simple example of spectrum analysis can be demonstrated using a prism. White light shone through a prism reveals that it is actually composed of many colors. In essence, this is spectrum analysis because we see the color spectrum of white light.
Like light, most noise or sounds (other than pure tones) are made up of discrete tones or "colors." More accurately, any sound at a given time can be expressed as the sum of its individual frequency components. To give an example, a piano and violin sound different from one another, even when playing the same note. The reason for this is because each instrument produces harmonics, or overtones, in addition to a fundamental frequency or "pitch." A spectrum analyzer allows us to see the frequencies, or harmonics, that make up such sounds. A violin produces different harmonics than a piano or a flute; consequently, they sound different from each other. The same goes for the human voice: A man and woman talking at the same volume, or sound pressure level, are readily distinguishable. We can also use a spectrum analyzer to study the noise resulting from blasts.
Many of us claim we can distinguish a rifle shot from a pistol or shotgun blast. Clearly, this isn't just a function of loudness, because some rifles are louder than pistols and others aren't. What allows us to distinguish a pistol from a rifle is the relative combination of frequencies that makes up each firearm's characteristic sound. The technique used to generate a plot of a sound's individual frequency components is called the Fourier transform, named for the French mathematician who developed the technique. I used "Fast Fourier Transform" (FFT) spectrum analysis to obtain the results shown. Visual inspection of these plots shows that different firearms do not sound identical, though some sound similar. Though not demonstrated here, other types of blasts, such as a balloon popping, don't sound like firearms.
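For readers who want to experiment with recorded waveforms, here is a minimal numpy sketch of the FFT step (my own illustration, not the SpectraPLUS workflow used in the testing); the synthetic decaying burst is a placeholder, not an actual rifle recording.

    import numpy as np

    def blast_spectrum(samples, sample_rate_hz):
        # One-sided magnitude spectrum via the FFT; dedicated analyzers also apply
        # windowing, averaging, and calibration to absolute pressure units.
        magnitudes = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
        return freqs, magnitudes

    # Placeholder waveform: a 3.5 ms decaying 800 Hz burst sampled at 48 kHz.
    fs = 48_000
    t = np.arange(int(0.0035 * fs)) / fs
    waveform = np.exp(-t / 0.001) * np.sin(2 * np.pi * 800 * t)
    freqs, mags = blast_spectrum(waveform, fs)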
Note 1. The Sony DAT recorder was "calibrated" as follows: An acoustic calibrator was attached to the microphones of the Larson-Davis and B&K SLMs. The calibrator provides a 1 kHz (1000 c.p.s.), 114 dB SPL acoustic signal. At this SPL, the DAT's record level was set at minus 34 dB. This allowed the Sony DAT to record signals as loud as 148 dB SPL without distorting. With a 114 dB input and the record level set to -34 dB, the electrical output was 163 millivolts. The recordings were later analyzed using a digital storage oscilloscope and spectrum analysis software. Here are two examples:
1. 7mm Mauser (no BOSS). Using a storage oscilloscope, a peak voltage of 1017 millivolts was measured. (A storage oscilloscope is needed because the duration of this signal is about 0.003 seconds.) Using 163 millivolts = 114 dB as the reference, we get 20 log(1017 mV / 163 mV) = 20 log(6.24) = 15.9 dB. The 1017 mV signal is 15.9 dB greater than the 114 dB reference; the actual PPL (at 50 ft.) is then 114 dB + 15.9 dB = 129.9 dB, which is exactly what we measured using the SLM in PEAK mode.
2. .270 cal with BOSS; 150 grain. The measured peak voltage was 2865 millivolts: 20 log(2865 mV / 163 mV) = 20 log(17.6) = 24.9 dB. Adding this to 114 dB gives 138.9 dB, which is what we measured using the SLM.
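The same calibration arithmetic can be captured in a few lines of Python (my own sketch; the constants are the values given in Note 1 and the 25 dB correction described earlier, and the function name is illustrative):

    import math

    V_CAL_MV = 163.0      # DAT output with the 114 dB SPL calibration tone applied
    SPL_CAL_DB = 114.0    # calibrator level
    CORRECTION_DB = 25.0  # empirically determined 50 ft. to shooter's-ear correction

    def ppl_from_peak_voltage(peak_mv, at_shooters_ear=False):
        # convert an oscilloscope peak voltage to dB PPL using the calibration reference
        ppl_at_50ft = SPL_CAL_DB + 20 * math.log10(peak_mv / V_CAL_MV)
        return ppl_at_50ft + CORRECTION_DB if at_shooters_ear else ppl_at_50ft

    print(round(ppl_from_peak_voltage(1017), 1))  # 129.9 dB PPL at 50 ft. (7mm Mauser)
    print(round(ppl_from_peak_voltage(2865), 1))  # 138.9 dB PPL at 50 ft. (.270 with BOSS)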
Note 2. The 25 dB correction factor is explained in the text. Distance from the source and air itself account for the attenuation (very high frequencies are absorbed more readily by air because the acoustic energy is dissipated in the form of friction). Spectral analysis indicated that most of the energy produced by the blasts (rifles, shotguns, and pistols) was in the low- to mid-frequency range of audible frequencies. For this reason, the correction factor is accurate for all firearms tested, regardless of their PPL.
Note 3. Relationship between sound pressure and sound power. Sound pressure level (SPL) measurements use a reference pressure of 0.00002 Pascal (Pa). A reference power could be used, but this requires measuring the movement of the individual air molecules. The relationship between power (I) and pressure (P) is
I (power) = P² / Za, where Za is the impedance (mechanical resistance) of air.
What's important to note here is that if we double the pressure P, we get a four-fold increase in power (Za doesn't change). Consequently, if we use power (either acoustical or electrical) as our decibel (dB) reference, we no longer multiply the log of the ratio by 20. Instead, we multiply the log of the ratio by 10. An example demonstrates that our end result in dB is the same.
As previously shown, doubling the pressure gives us a 6 dB increase in SPL. Using the above equation (I = P²/Za), we see that doubling the pressure gives four times the power. Our dB equation using power is
10 log(power ratio). If power increases 4X, we get 10 log(4) = 6 dB.
This is why adding a second loudspeaker to your stereo only adds 3 dB SPL to the overall loudness. Imagine that your stereo is delivering 4 watts to the right loudspeaker. We connect the left channel's loudspeaker, so now we have a total of 8 watts. Using our ubiquitous dB formula, we get
10 log(8 watts / 4 watts) = 10 log(2) = 3 dB increase in SPL.
Remember, 3 dB is a noticeable increase in loudness, but a 10 dB increase is needed for the sound to be perceived as "twice as loud." Incidentally, a 10 dB increase is equivalent to a 10 fold increase in power (not to mention a 3.16 fold increase in pressure). This means 40 watts, not 8 watts, is twice as loud as 4 watts.
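The 10-log-for-power versus 20-log-for-pressure bookkeeping is easy to confirm numerically; the sketch below (mine, illustrative) reproduces the loudspeaker example.

    import math

    def db_from_power_ratio(p2_watts, p1_watts):
        # power (or intensity) ratios use 10 * log10
        return 10 * math.log10(p2_watts / p1_watts)

    def db_from_pressure_ratio(p2_pa, p1_pa):
        # pressure ratios use 20 * log10 (power goes as pressure squared)
        return 20 * math.log10(p2_pa / p1_pa)

    print(round(db_from_power_ratio(8, 4), 1))     # 3.0 dB  (adding the second loudspeaker)
    print(round(db_from_power_ratio(40, 4), 1))    # 10.0 dB (ten times the power: "twice as loud")
    print(round(db_from_pressure_ratio(2, 1), 1))  # 6.0 dB  (double the pressure = 4x the power)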
Eric L. Carmichel, M.S.
ELC Audio Engineering
1. Handbook of Acoustics, Malcolm J. Crocker, Editor-in-Chief. Copyright © 1998 by John Wiley & Sons, Inc. Reference: Chapter 92 by W. Dixon Ward, “Effects of High-Intensity Sound,” Figure 3, page 1203.
2. J. C. Saunders, Y. E. Cohen, and Y. M. Szymko, “The Structural and Functional Consequences of Acoustic Injury in the Cochlea and Peripheral Auditory System: A Five Year Update,” J. Acoust. Soc. Am., Vol. 90, 1991, pp. 136-146.
3. H. Davis, C. T. Morgan, J. E. Hawkins, Jr., R. Galambos, and F. W. Smith, “Temporary Deafness Following Exposure to Loud Tones and Noise,” Acta Otolaryngol., Suppl. 88, 1950.
4. H. C. Sommer and C. W. Nixon, “Primary Components of Simulated Air Bag Noise and Their Relative Effects on Human Hearing,” AMRL-TR-73-52, Aerospace Medical Research Laboratory, Wright-Patterson AFB, Ohio, November 1973.
5. K. M. Eldred, W. J. Gannon, and H. Von Gierke, “Criteria for Short Time Exposure of Personnel to High Intensity Jet Aircraft Noise,” WADC Technical Note 55-355, Wright Air Development Center, U.S. Air Force, Wright-Patterson AFB, Ohio, September 1955.
6. D. J. Orchick, D. R. Schraier, J. J. Shea, Jr., J. R. Emmett, W. H. Moreta, and J. J. Shea III., “Sensorineural Hearing Loss in Cordless Telephone Injury,” Otolaryngol. Head Neck Surg., Vol. 96, 1987, pp. 30-33.
7. T. Lenarz and J. Gülzow, “Akustisches Innenohrtrauma bei Impedanzmessung. Akutes Schalltrauma?” Laryngol. Rhinol., Vol. 62, 1983, pp. 58-61.
8. H. McRobert and W. D. Ward, “Damage Risk Criteria: The Trading Relation Between Intensity and the Number of Nonreverberant Impulses,” J. Acoust. Soc. Am., Vol. 53, 1973, pp. 1297-1300.
9. D. Henderson, R. J. Salvi, and R. P. Hamernik, “Is The Equal Energy Rule Applicable To Impact Noise?” Scandinavian Audiology, Supplement 16, 1982. | http://www.elcaudio.com/decibel.htm | 13 |
23 | The various causes of extinction and the subsequent loss of biodiversity are known as drivers. Direct drivers explicitly influence ecosystem processes, while indirect drivers change the rate at which one or more of the direct drivers affects ecosystem processes. Biodiversity loss drivers include (but are not limited to): environmental stress, large environmental disturbances, extreme environmental conditions, severe limitation of resources, introduction of non-native species, and geographic isolation.
Extinction is the most common way biodiversity is lost or reduced. When a species or group becomes extinct it no longer exists in the biosphere or there are no known members of that species left on Earth. Two examples of extinct species include the dodo bird and passenger pigeon. Endangered species, such as the whooping crane and the Indian elephant, are organisms that are at risk of becoming extinct.
When a species becomes endangered or extinct, more than just that species is affected. A woodpecker, for instance, drills a hole in a tree with its beak in search of insects; other species then begin to use these holes for food storage or as places to nest. If the woodpecker were to go extinct, the various species that rely on its drilled holes would be disadvantaged. In this manner, the disappearance of a single species acts like a ripple in a pond, spreading through the ecosystem and affecting other species in unexpected ways. In general, species extinction primarily disrupts the food chain, leaving the ecosystem at greater risk for further biodiversity loss.
While natural disasters and extreme ecosystems are naturally occurring, humans cause much of the environmental stress which is a direct driver of biodiversity loss. Habitat change or loss, the introduction of non-native species, and overexploitation are thought to be the three most significant ways in which humans can detrimentally affect ecosystem processes. Others include nutrient loading in bodies of water, selective agricultural breeding, and climate change.
Indirect drivers of biodiversity loss are a bit more difficult to comprehend. It might not be as obvious how a growing population or cultural belief could detrimentally influence the rate at which ecosystem processes unravel. However, some argue that larger populations require more land to live on, wealthier populations consume more resources, and advances in certain technologies can lead to a degradation of ecosystems. These factors are thought to help speed up the effects of direct drivers, including habitat loss and overexploitation. Yet, the negative qualities of these indirect drivers of biodiversity loss are not universally accepted; some argue that technological change allows a more efficient use of resources and that cultural beliefs can impart a conservation ethic.
Just as each direct and indirect driver affects biodiversity loss differently, some ecosystems and species are more at risk for extinction than others. Charismatic mega-fauna, such as mountain gorillas, are the subject of intense conservation efforts and research. Other endangered species may not be so lucky: many species of amphibians, insects, and plants are highly endangered but fail to draw the same amount of attention as the charismatic mega-fauna.
Species with small or limited habitats also tend to be prone to endangerment. If a species of insect lives solely on one or two trees within the rainforest, cutting down those two trees will likely cause the insect to become extinct. Conversely, species that require large habitats, like tigers, lynx or jaguars, are often threatened because their habitat needs cause frequent conflict with humans. Similar animals, such as the prairie dog, are also labeled pests and subject to eradication attempts which can also weaken the surrounding ecosystem.
Finally, biodiversity loss and extinction as a result of fragmentation is also a prevalent problem, but scientists know very little about how large ecosystems need to be to sustain viable populations. It is the unknown which makes preventing loss of biodiversity difficult; however, scientists, policymakers, and conservationists are making their best efforts to understand and protect our biological resources while balancing the risks and tradeoffs inherent in environmental decision-making.
Updated by Skyler Treat & Nicole Barone Callahan
Scientific Facts on Ecosystem Change GreenFacts uses the findings and research from the Millennium Ecosystem Assessment, breaking them down into accessible sections summarizing the most critical factors causing biodiversity loss.
Anthropogenic Drivers of Ecosystem Change: An Overview This scientific article by Gerald Nelson et al. in the journal Ecology and Society provides a detailed explanation of anthropogenic drivers of ecosystem change and expands the discussion of indirect drivers found in the Millennium Ecosystem Assessment.
Drivers of Change in Ecosystem Condition and Services Chapter 7 of the Millennium Ecosystem Assessment discusses both direct and indirect drivers of ecosystem change which can lead to biodiversity loss, including the effects of the tourism industry and land use change. The chapter includes data tables and charts for many of the drivers.
Data Viewer and Maps The World Data Center for Biodiversity and Ecology partners with the Millennium Ecosystem Assessment in providing an interactive map which allows viewers to generate additional maps using data from categories like agriculture statistics, climate, population, and global land cover. Data is also available to download for your own use or as part of their Core Data Viewer.
LAWS & TREATIES
The Convention on Biological Diversity Signed by 150 nations at the 1992 Rio Earth Summit, this treaty commits countries to sustainable development, intended to reduce the effects of drivers of biodiversity loss. The official website provides information on the convention, what it means, and how implementation is working.
Endangered Species Act The Endangered Species Act is designed to conserve species in the United States that are considered to be facing extinction. Conservation plans enacted in response to the law seek to protect species from habitat loss and other drivers of biodiversity loss.
Biodiversity Loss - It Will Make You Sick This U.N. Environmental Programme (UNEP) press release promotes the book Sustaining Life which argues that loss of biodiversity causes opportunities for new medical treatments to be lost.
FOR THE CLASSROOM
Biodiversity The Kid's Corner at the Madras School of Economics in India takes a brief look at the causes of biodiversity loss and actions being taken to stop it.
Countdown 2010: Biodiversity in the Classroom Hosted by the World Conservation Union, Countdown 2010 facilitates and encourages action, promotes the importance of the 2010 biodiversity target, and assesses progress. The website describes one school's pledge to conserve and promote biodiversity on their campus as part of the UK's Countdown 2010 Biodiversity Action Plan. A copy of the pledge is also available for download.
What is Biological Diversity? This lesson plan from the Convention on Biological Diversity uses a modified version of musical chairs to demonstrate the effects of extinction on species. [Grades 4-6]
Integrating Conservation Science & Math This professional development workshop offered by Dr. Tom Langen at Clarkson University examines how to integrate science and math in middle school and high school classrooms through the lens of biodiversity. The proposed modules cover many of the direct drivers of biodiversity loss by integrating quantitative field exercises with computer-based lab projects. [Grades 6-12] | http://enviroliteracy.org/subcategory.php/352.html | 13 |
27 | Race is a classification system used to categorize humans into large and distinct populations or groups by heritable phenotypic characteristics, geographic ancestry, physical appearance, ethnicity, and social status. In the early twentieth century the term was often used, in a taxonomic sense, to denote genetically differentiated human populations defined by phenotype. Law enforcement utilizes race in profiling suspects in some countries. These uses of racial categories are frequently criticized for perpetuating an outmoded understanding of human biological variation, and promoting stereotypes. Because in many societies racial groupings correspond closely with patterns of social stratification, for social scientists studying social inequality, race can be a significant variable. As sociological factors, racial categories may in part reflect subjective attributions, self-identities, and social institutions. Accordingly, the racial paradigms employed in different disciplines vary in their emphasis on biological reduction as contrasted with societal construction.
Complications and various definitions of the concept
While biologists sometimes use the concept of race to make distinctions among fuzzy sets of traits, others in the scientific community suggest that the idea of race is often used in a naive or simplistic way. Among humans, race has no taxonomic significance; all living humans belong to the same hominid subspecies, Homo sapiens sapiens. Social conceptions and groupings of races vary over time, involving folk taxonomies that define essential types of individuals based on perceived traits. Scientists consider biological essentialism obsolete, and generally discourage racial explanations for collective differentiation in both physical and behavioral traits.
It has been demonstrated that race has no biological or genetic basis: gross morphological features which have traditionally been defined as races (e.g. skin color) are determined by non-significant and superficial genetic alleles with no link to characteristics such as intelligence, talent, athletic ability, etc. Race has been socially and legally constructed despite the lack of any scientific evidence for dividing humanity into racial baskets with any generalized genetic meaning.
When people define and talk about a particular conception of race, they create a social reality through which social categorization is achieved. In this sense, races are said to be social constructs. These constructs develop within various legal, economic, and sociopolitical contexts, and may be the effect, rather than the cause, of major social situations. While race is understood to be a social construct by many, most scholars agree that race has real material effects in the lives of people through institutionalized practices of preference and discrimination.
Socioeconomic factors, in combination with early but enduring views of race, have led to considerable suffering within disadvantaged racial groups. Racial discrimination often coincides with racist mindsets, whereby the individuals and ideologies of one group come to perceive the members of an outgroup as both racially defined and morally inferior. As a result, racial groups possessing relatively little power often find themselves excluded or oppressed, while hegemonic individuals and institutions are charged with holding racist attitudes. Racism has led to many instances of tragedy, including slavery and genocide. Scholars continue to debate the degrees to which racial categories are biologically warranted and socially constructed, as well as the extent to which the realities of race must be acknowledged in order for society to comprehend and address racism adequately.
In the social sciences theoretical frameworks such as Racial formation theory and Critical race theory investigate implications of race as social construction by exploring how the images, ideas and assumptions of race are expressed in everyday life. A large body of scholarship has traced the relationships between the historical, social production of race in legal and criminal language and their effects on the policing and disproportionate incarceration of certain groups.
Since the second half of the twentieth century the associations of race with the ideologies and theories that grew out of the work of 19th-century anthropologists and physiologists has led to the use of the word race itself becoming problematic. Although still used in general contexts, it is now often replaced by other words which are less emotionally charged, such as populations, people(s), ethnic groups or communities depending on context.
Historical origins of racial classification
Groups of humans have probably always identified themselves as distinct from other groups, but such differences have not always been understood to be natural, immutable and global. These features are the distinguishing features of how the concept of race is used today.
The word "race" was originally used to refer to nations or ethnic groups in general. Marco Polo, for example, describes the Persian race in his 13th-century travels; the current concept of "race" dates back only to the 17th century.
Race and colonialism
The European concept of "race", along with many of the ideas now associated with the term, arose at the time of the scientific revolution, which introduced and privileged the study of natural kinds, and the age of European imperialism and colonization which established political relations between Europeans and peoples with distinct cultural and political traditions. As Europeans encountered people from different parts of the world, they speculated about the physical, social, and cultural differences among various human groups. The rise of the Atlantic slave trade, which gradually displaced an earlier trade in slaves from throughout the world, created a further incentive to categorize human groups in order to justify the subordination of African slaves. Drawing on Classical sources and upon their own internal interactions — for example, the hostility between the English and Irish was a powerful influence on early European thinking about the differences between people — Europeans began to sort themselves and others into groups based on physical appearance, and to attribute to individuals belonging to these groups behaviors and capacities which were claimed to be deeply ingrained. A set of folk beliefs took hold that linked inherited physical differences between groups to inherited intellectual, behavioral, and moral qualities. Similar ideas can be found in other cultures, for example in China, where a concept often translated as "race" was associated with supposed common descent from the Yellow Emperor, and used to stress the unity of ethnic groups in China. Brutal conflicts between ethnic groups have existed throughout history and across the world.
Early taxonomic models
The first post-Classical published classification of humans into distinct races seems to be François Bernier's Nouvelle division de la terre par les différents espèces ou races qui l'habitent ("New division of Earth by the different species or races which inhabit it"), published in 1684. In the 18th century, the differences among human groups became a focus of scientific investigation. But the scientific classification of phenotypic variation was frequently coupled with racist ideas about innate predispositions of different groups, always attributing the most desirable features to the White, European race and arranging the other races along a continuum of progressively undesirable attributes. The 1735 classification of Carolus Linnaeus, inventor of zoological taxonomy, divided the human species Homo sapiens into the continental varieties Europaeus, Asiaticus, Americanus and Afer, each associated with a different humour: sanguine, melancholic, choleric and phlegmatic respectively. Homo sapiens Europaeus was described as active, acute, and adventurous, whereas Homo sapiens Afer was crafty, lazy and careless.
The 1775 treatise "The Natural Varieties of Mankind," by Johann Friedrich Blumenbach proposed five major divisions: the Caucasoid race, Mongoloid race, Ethiopian race (later termed the Negroid race), American Indian race, and Malayan race, but he did not propose any hierarchy among the races. Blumenbach also noted the graded transition in appearances from one group to adjacent groups and suggested that "one variety of mankind does so sensibly pass into the other, that you cannot mark out the limits between them".
From the 17th through the 19th centuries, the merging of folk beliefs about group differences with scientific explanations of those differences produced what one scholar has called an "ideology of race". According to this ideology, races are primordial, natural, enduring and distinct. It was further argued that some groups may be the result of mixture between formerly distinct populations, but that careful study could distinguish the ancestral races that had combined to produce admixed groups. Subsequent influential classifications by Georges Buffon, Petrus Camper and Christoph Meiners all classified "Negros" as inferior to Europeans. In the United States the racial theories of Thomas Jefferson were influential. He saw Africans as inferior to Whites especially in regard to their intellect, and imbued with unnatural sexual appetites, but described Native Americans as equals to whites.
Race and polygenism
In the last two decades of the 18th century polygenism, the belief that different races had evolved separately in each continent and shared no common ancestor, was advocated in England by historian Edward Long and anatomist Charles White, in Germany by ethnographers Christoph Meiners and Georg Forster, and in France by Julien-Joseph Virey, and prominently in the US by Samuel Morton, Josiah Nott and Louis Agassiz. Polygenism was popular and most widespread in the 19th century, culminating in the creation of the Anthropological Society of London during the American civil war, in opposition to the Abolitionist Ethnological Society.
Models of human evolution
In a 1995 article, Leonard Lieberman and Fatimah Jackson suggested that any new support for a biological concept of race will likely come from the study of human evolution. They therefore ask what, if any, implications current models of human evolution may have for any biological conception of race.
Today, all humans are classified as belonging to the species Homo sapiens and sub-species Homo sapiens sapiens. However, this is not the first species of Homininae: the first species of the genus Homo, Homo habilis, is theorized to have evolved in East Africa at least 2 million years ago, and members of this species populated different parts of Africa in a relatively short time. Homo erectus is theorized to have evolved more than 1.8 million years ago, and by 1.5 million years ago had spread throughout Europe and Asia. Virtually all physical anthropologists agree that Homo sapiens evolved out of African Homo erectus (sensu lato) or Homo ergaster. Most anthropologists believe that Homo sapiens evolved in East Africa and then migrated out of Africa, replacing H. erectus populations throughout Europe and Asia (the Out of Africa model). Recent human evolutionary genetics (Jobling, Hurles and Tyler-Smith, 2004) supports this "Out of Africa" model; however, the recent sequencing of the Neanderthal and Denisovan genomes shows some admixture, suggesting interbreeding between early hominid species. These results also show that 40,000 years ago there co-existed at least three major sub-species that may be considered as "races" (or not, see discussion below): Denisovans, Neanderthals and Cro-Magnons. Today, there is only one human species with no sub-species.
"Within" versus "between group variation" Edit
The F(ST), or genetic variation between groups versus within groups, for human races is approximately 0.15. This is ample to satisfy taxonomic significance. The F(ST) for humans and chimpanzees is 0.18. The attempt to claim that F(ST) invalidates the human race concept is known as "Lewontin's Fallacy". However, Witherspoon et al. (2007) concluded that Lewontin's "fallacy" is only a fallacy if one assumes that the populations individuals can be assigned to are "races". They concluded that the ability to assign an individual to a specific population cluster when enough markers are considered is perfectly compatible with the fact that two randomly chosen individuals from different populations or clusters may still be more similar to each other than to a randomly chosen member of their own cluster, while individuals remain traceable to specific regions.
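For readers unfamiliar with the statistic, the sketch below illustrates how F(ST) compares within-group and total heterozygosity for a single biallelic locus; it is purely illustrative (hypothetical allele frequencies, equal population weights, written in Python) and is not drawn from the studies cited here.

    def fst_single_locus(subpop_freqs):
        # Wright's F_ST for one biallelic locus, equally weighted subpopulations:
        # F_ST = (H_T - H_S) / H_T, i.e. the between-group share of total heterozygosity.
        n = len(subpop_freqs)
        p_bar = sum(subpop_freqs) / n
        h_s = sum(2 * p * (1 - p) for p in subpop_freqs) / n
        h_t = 2 * p_bar * (1 - p_bar)
        return (h_t - h_s) / h_t

    # Hypothetical allele frequencies in three populations (illustrative numbers only):
    print(round(fst_single_locus([0.2, 0.5, 0.8]), 2))  # 0.24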
Lieberman and Jackson argued that while advocates of both the Multiregional Model and the Out of Africa Model use the word race and make racial assumptions, none define the term. They conclude that students of human evolution would be better off avoiding the word race, and instead describe genetic differences in terms of populations and clinal gradations.
Subspecies

In the early 20th century, many anthropologists accepted and taught the belief that biologically distinct races were isomorphic with distinct linguistic, cultural, and social groups, while popularly applying that belief to the field of eugenics, in conjunction with a practice that is now called scientific racism. Following the Nazi eugenics program, racial essentialism lost scientific credibility. Race anthropologists were pressured to acknowledge findings coming from studies of culture and population genetics, and to revise their conclusions about the sources of phenotypic variation. A significant number of modern anthropologists and biologists in the West came to view race as an invalid genetic or biological designation.
The first to challenge the concept of race on empirical grounds were anthropologists Franz Boas, who demonstrated phenotypic plasticity due to environmental factors, and Ashley Montagu who relied on evidence from genetics. E. O. Wilson then challenged the concept from the perspective of general animal systematics, and further rejected the claim that "races" were equivalent to "subspecies".
According to Jonathan Marks,
By the 1970s, it had become clear that (1) most human differences were cultural; (2) what was not cultural was principally polymorphic – that is to say, found in diverse groups of people at different frequencies; (3) what was not cultural or polymorphic was principally clinal – that is to say, gradually variable over geography; and (4) what was left – the component of human diversity that was not cultural, polymorphic, or clinal – was very small. A consensus consequently developed among anthropologists and geneticists that race as the previous generation had known it – as largely discrete, geographically distinct, gene pools – did not exist.
In biology the term "race" is used with caution because it can be ambiguous. Generally when it is used it is synonymous with subspecies. For mammals, the taxonomic unit below the species level is usually the subspecies.
Population geneticists have debated whether the concept of population can provide a basis for a new conception of race. In order to do this, a working definition of population must be found. Surprisingly, there is no generally accepted concept of population that biologists use. Although the concept of population is central to ecology, evolutionary biology and conservation biology, most definitions of population rely on qualitative descriptions such as "a group of organisms of the same species occupying a particular space at a particular time." Waples and Gaggiotti identify two broad types of definitions for populations: those that fall into an ecological paradigm, and those that fall into an evolutionary paradigm. Examples of such definitions are:
- Ecological paradigm: A group of individuals of the same species that co-occur in space and time and have an opportunity to interact with each other.
- Evolutionary paradigm: A group of individuals of the same species living in close-enough proximity that any member of the group can potentially mate with any other member.
Morphologically differentiated populations
Traditionally, subspecies are seen as geographically isolated and genetically differentiated populations. That is, "the designation 'subspecies' is used to indicate an objective degree of microevolutionary divergence." One objection to this idea is that it does not specify what degree of differentiation is required. Therefore, any population that is somewhat biologically different could be considered a subspecies, even to the level of a local population. As a result, Templeton has argued that it is necessary to impose a threshold on the level of difference that is required for a population to be designated a subspecies.
This effectively means that populations of organisms must have reached a certain measurable level of difference to be recognised as subspecies. Dean Amadon proposed in 1949 that subspecies would be defined according to the seventy-five percent rule which means that 75% of a population must lie outside 99% of the range of other populations for a given defining morphological character or a set of characters. The seventy-five percent rule still has defenders but other scholars argue that it should be replaced with ninety or ninety-five percent rule.
In 1978, Sewall Wright suggested that human populations that have long inhabited separated parts of the world should, in general, be considered different subspecies by the usual criterion that most individuals of such populations can be allocated correctly by inspection. Wright argued that it does not require a trained anthropologist to classify an array of Englishmen, West Africans, and Chinese with 100% accuracy by features, skin color, and type of hair despite so much variability within each of these groups that every individual can easily be distinguished from every other. However, it is customary to use the term race rather than subspecies for the major subdivisions of the human species as well as for minor ones.
On the other hand, in practice subspecies are often defined by easily observable physical appearance, but there is not necessarily any evolutionary significance to these observed differences, so this form of classification has become less acceptable to evolutionary biologists. Likewise, this typological approach to race is generally regarded as discredited by biologists and anthropologists.
Because of the difficulty in classifying subspecies morphologically, many biologists have found the concept problematic, citing issues such as:
- Visible physical differences do not always correlate with one another, leading to the possibility of different classifications for the same individual organisms.
- Parallel evolution can lead to the appearance of similarities between groups of organisms that are not part of the same species.
- Isolated populations within previously designated subspecies have been found to exist.
- The criteria for classification may be arbitrary if they ignore gradual variation in traits.
Sesardic argues that when several traits are analyzed at the same time, forensic anthropologists can classify a person's race with an accuracy of close to 100% based on only skeletal remains. This is discussed in a later section.
Ancestrally differentiated populations
Cladistics is another method of classification. A clade is a taxonomic group of organisms consisting of a single common ancestor and all the descendants of that ancestor. Every creature produced by sexual reproduction has two immediate lineages, one maternal and one paternal. Whereas Carolus Linnaeus established a taxonomy of living organisms based on anatomical similarities and differences, cladistics seeks to establish a taxonomy—the phylogenetic tree—based on genetic similarities and differences and tracing the process of acquisition of multiple characteristics by single organisms. Some researchers have tried to clarify the idea of race by equating it to the biological idea of the clade. Often mitochondrial DNA or Y chromosome sequences are used to study ancient human migration paths. These single-locus sources of DNA do not recombine and are inherited from a single parent. Individuals from the various continental groups tend to be more similar to one another than to people from other continents, and tracing either mitochondrial DNA or non-recombinant Y-chromosome DNA explains how people in one place may be largely derived from people in some remote location.
Often taxonomists prefer to use phylogenetic analysis to determine whether a population can be considered a subspecies. Phylogenetic analysis relies on the concept of derived characteristics that are not shared between groups, usually applying to populations that are allopatric (geographically separated) and therefore discretely bounded. This would make a subspecies, evolutionarily speaking, a clade – a group with a common evolutionary ancestor population. The smooth gradation of human genetic variation in general rules out the idea that human population groups can be considered monophyletic, as there appears always to have been considerable gene flow between human populations. Rachel Caspari (2003) has argued that clades are by definition monophyletic groups (taxa that include all descendants of a given ancestor), and since no groups currently regarded as races are monophyletic, none of those groups can be clades.
For anthropologists Lieberman and Jackson (1995), however, there are more profound methodological and conceptual problems with using cladistics to support concepts of race. They claim that "the molecular and biochemical proponents of this model explicitly use racial categories in their initial grouping of samples". For example, the large and highly diverse macroethnic groups of East Indians, North Africans, and Europeans are presumptively grouped as Caucasians prior to the analysis of their DNA variation. This is claimed to limit and skew interpretations, obscure other lineage relationships, deemphasize the impact of more immediate clinal environmental factors on genomic diversity, and cloud our understanding of the true patterns of affinity. They argue that however significant the empirical research, these studies use the term race in conceptually imprecise and careless ways. They suggest that the authors of these studies find support for racial distinctions only because they began by assuming the validity of race. "For empirical reasons we prefer to place emphasis on clinal variation, which recognizes the existence of adaptive human hereditary variation and simultaneously stresses that such variation is not found in packages that can be labeled races."
These scientists do not dispute the importance of cladistic research, only its retention of the word race, when reference to populations and clinal gradations are more than adequate to describe the results.
One crucial innovation in reconceptualizing genotypic and phenotypic variation was anthropologist C. Loring Brace's observation that such variation, insofar as it is affected by natural selection, slow migration, or genetic drift, is distributed along geographic gradations or clines. In part this is due to isolation by distance. This point called attention to a problem common to phenotype-based descriptions of races (for example, those based on hair texture and skin color): they ignore a host of other similarities and differences (for example, blood type) that do not correlate highly with the markers for race. Hence anthropologist Frank Livingstone's conclusion that, since clines cross racial boundaries, "there are no races, only clines".
In a response to Livingstone, Theodosius Dobzhansky argued that when talking about race one must be attentive to how the term is being used: "I agree with Dr. Livingstone that if races have to be 'discrete units,' then there are no races, and if 'race' is used as an 'explanation' of the human variability, rather than vice versa, then the explanation is invalid." He further argued that one could use the term race if one distinguished between "race differences" and "the race concept." The former refers to any distinction in gene frequencies between populations; the latter is "a matter of judgment." He further observed that even when there is clinal variation, "Race differences are objectively ascertainable biological phenomena… but it does not follow that racially distinct populations must be given racial (or subspecific) labels." In short, Livingstone and Dobzhansky agree that there are genetic differences among human beings; they also agree that the use of the race concept to classify people, and how the race concept is used, is a matter of social convention. They differ on whether the race concept remains a meaningful and useful social convention.
In 1964, biologists Paul Ehrlich and Holm pointed out cases where two or more clines are distributed discordantly—for example, melanin is distributed in a decreasing pattern from the equator north and south; frequencies for the haplotype for beta-S hemoglobin, on the other hand, radiate out of specific geographical points in Africa. As anthropologists Leonard Lieberman and Fatimah Linda Jackson observed, "Discordant patterns of heterogeneity falsify any description of a population as if it were genotypically or even phenotypically homogeneous".
Patterns such as those seen in human physical and genetic variation as described above have led to the consequence that the number and geographic location of any described races is highly dependent on the importance attributed to, and quantity of, the traits considered. Scientists discovered a skin-lightening mutation that partially accounts for the appearance of light skin in humans (people who migrated out of Africa northward into what is now Europe), which they estimate occurred 20,000 to 50,000 years ago. East Asians owe their relatively light skin to different mutations. On the other hand, the greater the number of traits (or alleles) considered, the more subdivisions of humanity are detected, since traits and gene frequencies do not always correspond to the same geographical location. Or as Ossorio & Duster (2005) put it:
Anthropologists long ago discovered that humans' physical traits vary gradually, with groups that are close geographic neighbors being more similar than groups that are geographically separated. This pattern of variation, known as clinal variation, is also observed for many alleles that vary from one human group to another. Another observation is that traits or alleles that vary from one group to another do not vary at the same rate. This pattern is referred to as nonconcordant variation. Because the variation of physical traits is clinal and nonconcordant, anthropologists of the late 19th and early 20th centuries discovered that the more traits and the more human groups they measured, the fewer discrete differences they observed among races and the more categories they had to create to classify human beings. The number of races observed expanded to the 30s and 50s, and eventually anthropologists concluded that there were no discrete races. Twentieth and 21st century biomedical researchers have discovered this same feature when evaluating human variation at the level of alleles and allele frequencies. Nature has not created four or five distinct, nonoverlapping genetic groups of people.
More recent genetic studies indicate that skin color may change radically over as few as 100 generations, or about 2,500 years, given the influence of the environment.
Serre & Pääbo (2004) argued for smooth, clinal genetic variation in ancestral populations even in regions previously considered racially homogeneous, with the apparent gaps turning out to be artifacts of sampling techniques. Rosenberg et al. (2005) disputed this and argued that using more data showed that there were small discontinuities in the smooth genetic variation for ancestral populations at the location of geographic barriers such as the Sahara, the oceans, and the Himalayas.
Genetically differentiated populations
Another way to look at differences between populations is to measure genetic differences rather than physical differences between groups. Mid-20th century anthropologist William C. Boyd defined race as: "A population which differs significantly from other populations in regard to the frequency of one or more of the genes it possesses. It is an arbitrary matter which, and how many, gene loci we choose to consider as a significant 'constellation'". Leonard Lieberman and Rodney Kirk have pointed out that "the paramount weakness of this statement is that if one gene can distinguish races then the number of races is as numerous as the number of human couples reproducing." Moreover, anthropologist Stephen Molnar has suggested that the discordance of clines inevitably results in a multiplication of races that renders the concept itself useless. The Human Genome Project states "People who have lived in the same geographic region for many generations may have some alleles in common, but no allele will be found in all members of one population and in no members of any other."
Fixation index
Population geneticist Sewall Wright developed one way of measuring genetic differences between populations, known as the fixation index, often abbreviated FST. This statistic is often used in taxonomy to compare differences between any two given populations by measuring the genetic differences among and between populations for individual genes, or for many genes simultaneously. It is often stated that the fixation index for humans is about 0.15. This means that an estimated 85% of the variation measured in the overall human population is found among individuals within the same population, and about 15% of the variation occurs between populations. These estimates imply that two individuals from different populations are often nearly as similar genetically as two individuals from the same population. Richard Lewontin, who affirmed these ratios, thus concluded neither "race" nor "subspecies" were appropriate or useful ways to describe human populations. Others, also noting that group variation was relatively low compared to the variation observed in other mammalian species, agreed the evidence confirmed the absence of natural subdivision of the human population.
Wright himself believed that values above 0.25 represent very great genetic variation and that an FST of 0.15–0.25 represents great variation. However, about 5% of human variation occurs between populations within continents, so FST values between continental groups of humans (or races) as low as 0.1 (or possibly lower) have been found in some studies, suggesting more moderate levels of genetic differentiation. Graves (1996) has countered that FST should not be used as a marker of subspecies status, as the statistic is used to measure the degree of differentiation between populations, although see also Wright (1978).
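As a worked illustration of what the statistic measures, the following sketch computes FST for a single biallelic locus from within-group and total expected heterozygosity. The allele frequencies are invented for the example and are not drawn from any study cited here.

```python
# A worked sketch of Wright's fixation index for a single biallelic locus,
# using the textbook definition FST = (HT - HS) / HT, where HS is the average
# expected heterozygosity within subpopulations and HT is the expected
# heterozygosity of the pooled population. Allele frequencies are illustrative.

def fst_biallelic(freqs, weights=None):
    """freqs: frequency of one allele in each subpopulation."""
    n = len(freqs)
    weights = weights or [1.0 / n] * n                                  # equal-sized groups by default
    p_bar = sum(w * p for w, p in zip(weights, freqs))                  # pooled allele frequency
    h_s = sum(w * 2 * p * (1 - p) for w, p in zip(weights, freqs))      # mean within-group heterozygosity
    h_t = 2 * p_bar * (1 - p_bar)                                       # total heterozygosity
    return (h_t - h_s) / h_t if h_t > 0 else 0.0

# Two hypothetical populations with moderately different allele frequencies:
print(round(fst_biallelic([0.2, 0.5]), 3))   # ~0.099: most of the variation lies within groups
```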
In an ongoing debate, some geneticists argue that race is neither a meaningful concept nor a useful heuristic device, and even that genetic differences among groups are biologically meaningless, because more genetic variation exists within such races than among them, and that racial traits overlap without discrete boundaries.
Jeffrey Long and Rick Kittles give a long critique of the application of FST to human populations in their 2003 paper "Human Genetic Diversity and the Nonexistence of Biological Races". They find that the figure of 85% is misleading because it implies that all human populations contain on average 85% of all genetic diversity. They argue that this does not correctly reflect human population history, because it treats all human groups as independent. A more realistic portrayal of the way human groups are related is to understand that some human groups are parental to others and that these parental groups are paraphyletic with respect to their descendant groups. For example, under the recent African origin theory the human population in Africa is paraphyletic to all other human groups because it represents the ancestral group from which all non-African populations derive; moreover, non-African groups derive from only a small, non-representative sample of this African population. This means that all non-African groups are more closely related to each other and to some African groups (probably east Africans) than they are to others, and further that the migration out of Africa represented a genetic bottleneck, with much of the diversity that existed in Africa not being carried out of Africa by the emigrating groups. On this view, human population movements did not leave all human populations independent; rather, they produced a series of dilutions of diversity the further from Africa a population lives, with each founding event representing a genetic subset of its parental population. Long and Kittles find that, rather than 85% of human genetic diversity existing in all human populations, about 100% of human diversity exists in a single African population, whereas only about 70% of human genetic diversity exists in a population derived from New Guinea. Long and Kittles argued that this still produces a global human population that is genetically homogeneous compared to other mammalian populations.
Cluster analysis
In his 2003 paper, "Human Genetic Diversity: Lewontin's Fallacy", A. W. F. Edwards argued that rather than using a locus-by-locus analysis of variation to derive taxonomy, it is possible to construct a human classification system based on characteristic genetic patterns, or clusters inferred from multilocus genetic data. Geographically based human studies since have shown that such genetic clusters can be derived by analyzing a large number of loci, which can assort sampled individuals into groups analogous to traditional continental racial groups. Joanna Mountain and Neil Risch cautioned that while genetic clusters may one day be shown to correspond to phenotypic variations between groups, such assumptions were premature as the relationship between genes and complex traits remains poorly understood. However, Risch denied that such limitations render the analysis useless: "Perhaps just using someone's actual birth year is not a very good way of measuring age. Does that mean we should throw it out? ... Any category you come up with is going to be imperfect, but that doesn't preclude you from using it or the fact that it has utility."
Early human genetic cluster analysis studies were conducted with samples taken from ancestral population groups living at extreme geographic distances from each other. It was thought that such large geographic distances would maximize the genetic variation between the groups sampled in the analysis and thus maximize the probability of finding cluster patterns unique to each group. In light of the historically recent acceleration of human migration (and correspondingly, human gene flow) on a global scale, further studies were conducted to judge the degree to which genetic cluster analysis can pattern ancestrally identified groups as well as geographically separated groups. One such study looked at a large multiethnic population in the United States, and "detected only modest genetic differentiation between different current geographic locales within each race/ethnicity group. Thus, ancient geographic ancestry, which is highly correlated with self-identified race/ethnicity—as opposed to current residence—is the major determinant of genetic structure in the U.S. population." (Tang et al. 2005)
Witherspoon et al. (2007) have argued that even when individuals can be reliably assigned to specific population groups, it may still be possible for two randomly chosen individuals from different populations/clusters to be more similar to each other than to a randomly chosen member of their own cluster. They found that many thousands of genetic markers had to be used in order for the answer to the question "How often is a pair of individuals from one population genetically more dissimilar than two individuals chosen from two different populations?" to be "never". This assumed three population groups separated by large geographic ranges (European, African and East Asian). The entire world population is much more complex and studying an increasing number of groups would require an increasing number of markers for the same answer. The authors conclude that "caution should be used when using geographic or genetic ancestry to make inferences about individual phenotypes." Witherspoon et al. concluded that, "The fact that, given enough genetic data, individuals can be correctly assigned to their populations of origin is compatible with the observation that most human genetic variation is found within populations, not between them. It is also compatible with our finding that, even when the most distinct populations are considered and hundreds of loci are used, individuals are frequently more similar to members of other populations than to members of their own population."
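The qualitative point about marker counts can be illustrated with a toy simulation, sketched below. The population sizes, allele-frequency shifts, and distance measure are arbitrary assumptions chosen for the illustration; this is not the procedure Witherspoon et al. actually used.

```python
# A toy simulation of the point about marker counts: with few loci, a pair drawn
# from two different populations is often more similar than a pair drawn from the
# same population; with many loci this becomes rare. Population sizes, frequency
# shifts and the distance measure are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_loci, n_per_pop=100, freq_shift=0.15):
    p1 = rng.uniform(0.1, 0.9, n_loci)                                  # allele frequencies, population 1
    p2 = np.clip(p1 + rng.normal(0, freq_shift, n_loci), 0.01, 0.99)    # shifted frequencies, population 2
    g1 = rng.binomial(2, p1, size=(n_per_pop, n_loci))                  # diploid genotypes coded 0/1/2
    g2 = rng.binomial(2, p2, size=(n_per_pop, n_loci))
    return g1, g2

def misordered_fraction(g1, g2, n_trials=2000):
    """How often is a between-population pair closer than a within-population pair?"""
    count = 0
    for _ in range(n_trials):
        a, b = g1[rng.integers(len(g1))], g1[rng.integers(len(g1))]     # two members of population 1
        c = g2[rng.integers(len(g2))]                                   # one member of population 2
        count += np.abs(a - c).sum() < np.abs(a - b).sum()              # simple allele-sharing distance
    return count / n_trials

for n_loci in (10, 100, 1000):
    g1, g2 = simulate(n_loci)
    print(n_loci, round(misordered_fraction(g1, g2), 3))
# The printed fraction shrinks as the number of loci grows.
```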
Anthropologists such as C. Loring Brace, philosopher Jonathan Kaplan and geneticist Joseph Graves have argued that while it is certainly possible to find biological and genetic variation that corresponds roughly to the groupings normally defined as "continental races", this is true for almost all geographically distinct populations. The cluster structure of the genetic data is therefore dependent on the initial hypotheses of the researcher and the populations sampled. When one samples continental groups, the clusters become continental; if one had chosen other sampling patterns, the clustering would be different. Weiss and Fullerton have noted that if one sampled only Icelanders, Mayans and Maoris, three distinct clusters would form, and all other populations could be described as being clinally composed of admixtures of Maori, Icelandic and Mayan genetic materials. Kaplan therefore argues that, seen in this way, both Lewontin and Edwards are right in their arguments. He concludes that while racial groups are characterized by different allele frequencies, this does not mean that racial classification is a natural taxonomy of the human species, because multiple other genetic patterns can be found in human populations that crosscut racial distinctions. In this view, racial groupings are social constructions that also have a biological reality, one which is largely an artefact of how the categories have been constructed.
Biological definitions of race
- Essentialist (Hooton 1926): "A great division of mankind, characterized as a group by the sharing of a certain combination of features, which have been derived from their common descent, and constitute a vague physical background, usually more or less obscured by individual variations, and realized best in a composite picture."
- Taxonomic (Mayr 1969): "A subspecies is an aggregate of phenotypically similar populations of a species, inhabiting a geographic subdivision of the range of a species, and differing taxonomically from other populations of the species."
- Population (Dobzhansky 1970): "Races are genetically distinct Mendelian populations. They are neither individuals nor particular genotypes, they consist of individuals who differ genetically among themselves."
- Lineage (Templeton 1998): "A subspecies (race) is a distinct evolutionary lineage within a species. This definition requires that a subspecies be genetically differentiated due to barriers to genetic exchange that have persisted for long periods of time; that is, the subspecies must have historical continuity in addition to current genetic differentiation."
As anthropologists and other evolutionary scientists have shifted away from the language of race to the term population to talk about genetic differences, historians, cultural anthropologists and other social scientists re-conceptualized the term "race" as a cultural category or social construct—a particular way that some people talk about themselves and others.
Many social scientists have replaced the word race with the word "ethnicity" to refer to self-identifying groups based on beliefs concerning shared culture, ancestry and history. Alongside empirical and conceptual problems with "race", following the Second World War evolutionary and social scientists were acutely aware of how beliefs about race had been used to justify discrimination, apartheid, slavery, and genocide. This questioning gained momentum in the 1960s during the U.S. civil rights movement and the emergence of numerous anti-colonial movements worldwide. They thus came to believe that race itself is a social construct: a concept once thought to correspond to an objective reality but sustained largely because of its social functions.
Craig Venter and Francis Collins of the National Institutes of Health jointly announced the mapping of the human genome in 2000. Upon examining the data from the genome mapping, Venter realized that although the genetic variation within the human species is on the order of 1–3% (instead of the previously assumed 1%), the types of variations do not support the notion of genetically defined races. Venter said, "Race is a social concept. It's not a scientific one. There are no bright lines (that would stand out), if we could compare all the sequenced genomes of everyone on the planet." "When we try to apply science to try to sort out these social differences, it all falls apart."
Stephan Palmié asserted that race "is not a thing but a social relation"; or, in the words of Katya Gibel Mevorach, "a metonym," "a human invention whose criteria for differentiation are neither universal nor fixed but have always been used to manage difference." As such, the use of the term "race" itself must be analyzed. Moreover, they argue that biology will not explain why or how people use the idea of race: History and social relationships will.
Imani Perry, a professor in the Center for African American Studies at Princeton University, has made significant contributions to how we define race in America today. Perry's work focuses on how race is experienced. Perry tells us that race "is produced by social arrangements and political decision making." She explains further, stating that "race is something that happens, rather than something that is. It is dynamic, but it holds no objective truth."
The theory that race is merely a social construct has been challenged by the findings of researchers at the Stanford University School of Medicine, published in the American Journal of Human Genetics as "Genetic Structure, Self-Identified Race/Ethnicity, and Confounding in Case-Control Association Studies". One of the researchers, Neil Risch, noted: "we looked at the correlation between genetic structure [based on microsatellite markers] versus self-description, we found 99.9% concordance between the two. We actually had a higher discordance rate between self-reported sex and markers on the X chromosome! So you could argue that sex is also a problematic category. And there are differences between sex and gender; self-identification may not be correlated with biology perfectly. And there is sexism."
Race and ethnicity
The distinction between race and ethnicity is considered highly problematic. Ethnicity is often assumed to be the cultural identity of a group from a nation state, while race is assumed to be a biological and/or cultural essentialization of a group hierarchy of superiority/inferiority related to their biological constitution. It is assumed that, based on power relations, there exist 'racialized ethnicities' and 'ethnicized races'. Ramón Grosfoguel (University of California, Berkeley) notes that 'racial/ethnic identity' is one concept and that concepts of race and ethnicity cannot be used as separate and autonomous categories.
Compared to the 19th century United States, 20th century Brazil was characterized by a perceived relative absence of sharply defined racial groups. According to anthropologist Marvin Harris, this pattern reflects a different history and different social relations. Basically, race in Brazil was "biologized," but in a way that recognized the difference between ancestry (which determines genotype) and phenotypic differences. There, racial identity was not governed by a rigid descent rule, such as the one-drop rule, as it was in the United States. A Brazilian child was never automatically identified with the racial type of one or both parents, nor were there only a very limited number of categories to choose from.
Over a dozen racial categories would be recognized in conformity with all the possible combinations of hair color, hair texture, eye color, and skin color. These types grade into each other like the colors of the spectrum, and no one category stands significantly isolated from the rest. That is, race referred preferentially to appearance, not heredity. The complexity of racial classifications in Brazil reflects the extent of miscegenation in Brazilian society, a society that remains highly, but not strictly, stratified along color lines. Hence the Brazilian narrative of a perfect "post-racist" country must be met with caution, as sociologist Gilberto Freyre demonstrated in 1933 in Casa Grande e Senzala.
According to a European Union Council Directive, the European Union uses the terms racial origin and ethnic origin synonymously in its documents, and "the use of the term 'racial origin' in this directive does not imply an acceptance of such [racial] theories". Haney López warns that using 'race' as a category within the law tends to legitimize its existence in the popular imagination. In the diverse geographic context of Europe, ethnicity and ethnic origin are arguably more resonant and are less encumbered by the ideological baggage associated with 'race'. In the European context, the historical resonance of 'race' underscores its problematic nature. In some states, it is strongly associated with laws promulgated by the Nazi and Fascist governments in Europe during the 1930s and 1940s. Indeed, in 1996, the European Parliament adopted a resolution stating that "the term should therefore be avoided in all official texts".
The concept of racial origin is inherently problematic, being grounded in the scientifically false notion that human beings can be separated into biologically distinct 'races'. Since all human beings belong to the same species, the ECRI (European Commission against Racism and Intolerance) rejects theories based on the existence of different 'races'. However, in its Recommendation, the ECRI uses this term in order to ensure that those persons who are generally and erroneously perceived as belonging to 'another race' are not excluded from the protection provided for by the legislation. The law claims to reject the existence of 'race', yet penalizes situations where someone is treated less favourably on this ground.
Since the end of the Second World War, France has become an ethnically diverse country. Today, approximately five percent of the French population is non-European and non-white. This does not approach the proportion of non-white citizens in the United States (roughly 15–25%, depending on how Latinos are classified). Nevertheless, it amounts to at least three million people, and has forced the issues of ethnic diversity onto the French policy agenda. France has developed an approach to dealing with ethnic problems that stands in contrast to that of many advanced, industrialized countries. Unlike the United States, Britain, or even the Netherlands, France maintains a "color-blind" model of public policy. This means that it targets virtually no policies directly at racial or ethnic groups. Instead, it uses geographic or class criteria to address issues of social inequalities. It has, however, developed an extensive anti-racist policy repertoire since the early 1970s. Until recently, French policies focused primarily on issues of hate speech (going much further than their American counterparts) and relatively less on issues of discrimination in jobs, housing, and in provision of goods and services.
The immigrants to the Americas came from every region of Europe, Africa, and Asia. They mixed among themselves and with the indigenous inhabitants of the continent. In the United States most people who self-identify as African–American have some European ancestors, while many people who identify as European American have some African or Amerindian ancestors.
Since the early history of the United States, Amerindians, African–Americans, and European Americans have been classified as belonging to different races. Efforts to track mixing between groups led to a proliferation of categories, such as mulatto and octoroon. The criteria for membership in these races diverged in the late 19th century. During Reconstruction, increasing numbers of Americans began to consider anyone with "one drop" of known "Black blood" to be Black, regardless of appearance. By the early 20th century, this notion was made statutory in many states. Amerindians continue to be defined by a certain percentage of "Indian blood" (called blood quantum). To be White one had to have perceived "pure" White ancestry. The one-drop rule or hypodescent rule refers to the convention of defining a person as racially black if he or she has any known African ancestry. This rule meant that those who were of mixed race but had some discernible African ancestry were defined as black. The one-drop rule is specific not only to those with African ancestry but also to the United States, making it a particularly African-American experience.
The term "Hispanic" as an ethnonym emerged in the 20th century with the rise of migration of laborers from American Spanish-speaking countries to the United States. Today, the word "Latino" is often used as a synonym for "Hispanic". The definitions of both terms are non-race specific, and include people who consider themselves to be of distinct races (Black, White, Amerindian, Asian, and mixed groups). However, there is a common misconception in the US that Hispanic/Latino is a race or sometimes even that national origins such as Mexican, Cuban, Colombian, Salvadoran, etc. are races. In contrast to "Latino" or "Hispanic", "Anglo" refers to non-Hispanic White Americans or non-Hispanic European Americans, most of whom speak the English language but are not necessarily of English descent.
Current views across disciplines
In Poland, the race concept was rejected by 25 percent of anthropologists in 2001, although: "Unlike the U.S. anthropologists, Polish anthropologists tend to regard race as a term without taxonomic value, often as a substitute for population."
Lieberman et al. in a 2004 study claimed to "present the currently available information on the status of the concept in the United States, the Spanish language areas, Poland, Europe, Russia, and China. Rejection of race ranges from high to low with the highest rejection occurring among anthropologists in the United States (and Canada). Rejection of race is moderate in Europe, sizeable in Poland and Cuba, and lowest in Russia and China." Methods used in the studies reported included questionnaires and content analysis.
Kaszycka et al. (2009) in 2002–2003 surveyed European anthropologists' opinions toward the biological race concept. Three factors (country of academic education, discipline, and age) were found to be significant in differentiating the replies. Those educated in Western Europe, physical anthropologists, and middle-aged persons rejected race more frequently than those educated in Eastern Europe, people in other branches of science, and those from both younger and older generations. "The survey shows that the views of anthropologists on race are sociopolitically (ideologically) influenced and highly dependent on education."
United States views
One result of debates over the meaning and validity of the concept of race is that the current literature across different disciplines regarding human variation lacks consensus, though within some fields, such as some branches of anthropology, there is strong consensus. Some studies use the word race in its early essentialist taxonomic sense. Many others still use the term race, but use it to mean a population, clade, or haplogroup. Others eschew the concept of race altogether, and use the concept of population as a less problematic unit of analysis.
The concept of biological race has declined significantly in frequency of use in physical anthropology in the United States during the 20th century. A majority of physical anthropologists in the United States have rejected the concept of biological races. Since 1932, an increasing number of college textbooks introducing physical anthropology have rejected race as a valid concept: from 1932 to 1976, only seven out of thirty-two rejected race; from 1975 to 1984, thirteen out of thirty-three rejected race; from 1985 to 1993, thirteen out of nineteen rejected race. According to one academic journal entry, where 78 percent of the articles in the 1931 Journal of Physical Anthropology employed the term race or nearly synonymous terms reflecting a bio-race paradigm, only 36 percent did so in 1965, and just 28 percent did in 1996.
The "Statement on 'Race'" (1998) composed by a select committee of anthropologists and issued by the executive board of the American Anthropological Association as a statement they "believe [...] represents generally the contemporary thinking and scholarly positions of a majority of anthropologists", declares:
"In the United States both scholars and the general public have been conditioned to viewing human races as natural and separate divisions within the human species based on visible physical differences. With the vast expansion of scientific knowledge in this century, however, it has become clear that human populations are not unambiguous, clearly demarcated, biologically distinct groups. Evidence from the analysis of genetics (e.g., DNA) indicates that most physical variation, about 94%, lies within so-called racial groups. Conventional geographic "racial" groupings differ from one another only in about 6% of their genes. This means that there is greater variation within "racial" groups than between them. In neighboring populations there is much overlapping of genes and their phenotypic (physical) expressions. Throughout history whenever different groups have come into contact, they have interbred. The continued sharing of genetic materials has maintained all of humankind as a single species."
"With the vast expansion of scientific knowledge in this century, ... it has become clear that human populations are not unambiguous, clearly demarcated, biologically distinct groups. [...] Given what we know about the capacity of normal humans to achieve and function within any culture, we conclude that present-day inequalities between so-called "racial" groups are not consequences of their biological inheritance but products of historical and contemporary social, economic, educational, and political circumstances."
A survey taken in 1985 (Lieberman et al. 1992) asked 1,200 American scientists whether they disagreed with the following proposition: "There are biological races in the species Homo sapiens." The responses for anthropologists were:
The figure for physical anthropologists at PhD granting departments was slightly higher, rising from 41% to 42%, with 50% agreeing. This survey, however, did not specify any particular definition of race (although it did clearly specify biological race within the species Homo sapiens); it is difficult to say whether those who supported the statement thought of race in taxonomic or population terms.
The same survey, taken in 1999, showed the following changing results for anthropologists:
A line of research conducted by Cartmill (1998), however, seemed to limit the scope of Lieberman’s finding that there was "a significant degree of change in the status of the race concept". Goran Štrkalj has argued that this may be because Lieberman and collaborators had looked at all the members of the American Anthropological Association irrespective of their field of research interest, while Cartmill had looked specifically at biological anthropologists interested in human variation.
According to the 2000 edition of a popular physical anthropology textbook, forensic anthropologists are overwhelmingly in support of the idea of the basic biological reality of human races. Forensic physical anthropologist and professor George W. Gill has said that the idea that race is only skin deep "is simply not true, as any experienced forensic anthropologist will affirm" and "Many morphological features tend to follow geographic boundaries coinciding often with climatic zones. This is not surprising since the selective forces of climate are probably the primary forces of nature that have shaped human races with regard not only to skin color and hair form but also the underlying bony structures of the nose, cheekbones, etc. (For example, more prominent noses humidify air better.)" While he can see good arguments for both sides, the complete denial of the opposing evidence "seems to stem largely from socio-political motivation and not science at all". He also states that many biological anthropologists see races as real yet "not one introductory textbook of physical anthropology even presents that perspective as a possibility. In a case as flagrant as this, we are not dealing with science but rather with blatant, politically motivated censorship".
In partial response to Gill's statement, Professor of Biological Anthropology C. Loring Brace argues that the reason laymen and biological anthropologists can determine the geographic ancestry of an individual can be explained by the fact that biological characteristics are clinally distributed across the planet, and that does not translate into the concept of race. He states that "Well, you may ask, why can't we call those regional patterns "races"? In fact, we can and do, but it does not make them coherent biological entities. "Races" defined in such a way are products of our perceptions. ... We realize that in the extremes of our transit—Moscow to Nairobi, perhaps—there is a major but gradual change in skin color from what we euphemistically call white to black, and that this is related to the latitudinal difference in the intensity of the ultraviolet component of sunlight. What we do not see, however, is the myriad other traits that are distributed in a fashion quite unrelated to the intensity of ultraviolet radiation. Where skin color is concerned, all the northern populations of the Old World are lighter than the long-term inhabitants near the equator. Although Europeans and Chinese are obviously different, in skin color they are closer to each other than either is to equatorial Africans. But if we test the distribution of the widely known ABO blood-group system, then Europeans and Africans are closer to each other than either is to Chinese." "Race" is still sometimes used within forensic anthropology (when analyzing skeletal remains), biomedical research, and race-based medicine. Brace has criticized this practice of forensic anthropologists, who use the controversial concept of "race" out of convention when they should in fact be talking about regional ancestry. He argues that while a forensic anthropologist can determine that skeletal remains come from a person with ancestors in a specific region of Africa, categorizing those remains as "black" invokes a socially constructed category that is only meaningful in the particular context of the United States, and which is not itself scientifically valid.
In the 1985 poll (Lieberman et al. 1992) the results for biologists and developmental psychologists were:
In February 2001, the editors of Archives of Pediatrics and Adolescent Medicine asked "authors to not use race and ethnicity when there is no biological, scientific, or sociological reason for doing so." The editors also stated that "analysis by race and ethnicity has become an analytical knee-jerk reflex." Nature Genetics now asks authors to "explain why they make use of particular ethnic groups or populations, and how classification was achieved."
Lieberman et al. (1992) examined 77 college textbooks in biology and 69 in physical anthropology published between 1932 and 1989. Physical anthropology texts argued that biological races exist until the 1970s, when they began to argue that races do not exist. In contrast, biology textbooks never underwent such a reversal but instead dropped their discussion of race altogether. Morning (2008) looked at high school biology textbooks over the 1952–2002 period and initially found a similar pattern: the share directly discussing race fell from 92% to only 35% in the 1983–92 period. However, this has since increased somewhat, to 43%. More indirect and brief discussions of race in the context of medical disorders have increased from none to 93% of textbooks. In general, the material on race has moved from surface traits to genetics and evolutionary history. The study argues that the textbooks' fundamental message about the existence of races has changed little.
Gissis (2008) examined several important American and British journals in genetics, epidemiology and medicine for their content during the 1946–2003 period. Gissis wrote that "Based upon my findings I argue that the category of race only seemingly disappeared from scientific discourse after World War II and has had a fluctuating yet continuous use during the time span from 1946 to 2003, and has even become more pronounced from the early 1970s on".
A 1994 examination of 32 English sport/exercise science textbooks found that 7 (21.9%) claimed that there are biophysical differences due to race that might explain differences in sports performance, 24 (75%) neither mentioned nor refuted the concept, and 1 (3.1%) expressed caution about the idea.
A 2008 study interviewed 33 health services researchers from differing geographic regions. The researchers recognized the problems with racial and ethnic variables, but the majority still believed these variables were necessary and useful.
A 2010 examination of 18 widely used English anatomy textbooks found that every one relied on the race concept. The study gives examples of how the textbooks claim that anatomical features vary between races.
Researchers have reported differences in the average IQ test scores of various ethnic groups. The interpretation, causes, accuracy and reliability of these differences are highly controversial. Some psychologists, such as Arthur Jensen and Richard Lynn, have argued that such differences are at least partially genetic. Richard Herrnstein and Charles Murray argue that "intelligence is less than completely heritable." Many other researchers in psychology, sociology and anthropology, for example Thomas Sowell, David F. Marks, Jonathan Marks and Richard Nisbett, argue that the differences largely owe to social and economic inequalities. Still others, such as Stephen Jay Gould and Robert Sternberg, have argued that categories such as "race" and "intelligence" are both "folk" constructs rather than well defined scientific concepts, and that since the definitions are largely fluid and susceptible to different cultural constructions, this in turn renders attempts to explain variation of one in terms of the other scientifically invalid.
Political and practical uses
In the United States, policy makers use racially categorized data to identify and address health disparities between racial or ethnic groups. In clinical settings, race has long been considered in the diagnosis and treatment of medical conditions, because some medical conditions are more prevalent in certain racial or ethnic groups than in others. Recent interest in race-based medicine, or race-targeted pharmacogenomics, has been fueled by the proliferation of human genetic data which followed the decoding of the human genome in the early 2000s. There is an active debate among biomedical researchers about the meaning and importance of race in their research. Some researchers strongly support the continued use of racial categorizations in biomedical research and clinical practice. They argue that race may correlate, albeit imperfectly, with the presence of specific genetic variants associated with disease: Insofar as race "provides a sufficiently precise proxy for human genetic variation", the concept may be medically viable. In addition, knowledge of a person's race may provide a cost-effective way to assess susceptibility to genetically influenced medical conditions.
Detractors of race-based medicine acknowledge that race is sometimes useful in clinical medicine, but encourage minimizing its use. They suggest that medical practices should maintain their focus on the individual rather than on an individual's membership in any group. They argue that overemphasizing genetic contributions to health disparities carries various risks such as reinforcing stereotypes, promoting racism or ignoring the contribution of non-genetic factors to health disparities. Some researchers in the field have been accused "of using race as a placeholder during the 'meantime' of pharmacogenomic development". Conversely, it is argued that in the early stages of the field's development, researchers must consider race-related factors if they are to ascertain the clinical potentials of ongoing scholarship.
In an attempt to provide general descriptions that may facilitate the job of law enforcement officers seeking to apprehend suspects, the United States FBI employs the term "race" to summarize the general appearance (skin color, hair texture, eye shape, and other such easily noticed characteristics) of individuals whom they are attempting to apprehend. From the perspective of law enforcement officers, it is generally more important to arrive at a description that will readily suggest the general appearance of an individual than to make a scientifically valid categorization by DNA or other such means. Thus, in addition to assigning a wanted individual to a racial category, such a description will include: height, weight, eye color, scars and other distinguishing characteristics.
British police use a classification based on the ethnic background of British society: W1 (White-British), W2 (White-Irish), W9 (Any other white background); M1 (White and black Caribbean), M2 (White and black African), M3 (White and Asian), M9 (Any other mixed background); A1 (Asian-Indian), A2 (Asian-Pakistani), A3 (Asian-Bangladeshi), A9 (Any other Asian background); B1 (Black Caribbean), B2 (Black African), B3 (Any other black background); O1 (Chinese), O9 (Any other). Some of the characteristics that constitute these groupings are biological and some are learned (cultural, linguistic, etc.) traits that are easy to notice.
In many countries, such as France, the state is legally banned from maintaining data based on race, which often makes the police issue wanted notices to the public that include labels like "dark skin complexion", etc.
In the United States, the practice of racial profiling has been ruled to be both unconstitutional and a violation of civil rights. There is active debate regarding the cause of the marked correlation between recorded crimes, punishments meted out, and the racial composition of the country's population. Many consider de facto racial profiling an example of institutional racism in law enforcement. The history of misusing racial categories to adversely impact one or more groups and/or to offer protection and advantage to another has a clear impact on debate over the legitimate use of known phenotypical or genotypical characteristics tied to the presumed race of both victims and perpetrators by the government.
Mass incarceration in the United States disproportionately impacts African American and Latino communities. Michelle Alexander, author of The New Jim Crow: Mass Incarceration in the Age of Colorblindness (2010), argues that mass incarceration is best understood not only as a system of overcrowded prisons. Mass incarceration is also "the larger web of laws, rules, policies, and customs that control those labeled criminals both in and out of prison." She defines it further as "a system that locks people not only behind actual bars in actual prisons, but also behind virtual bars and virtual walls," illustrating the second-class citizenship that is imposed on a disproportionate number of people of color, specifically African-Americans. She compares mass incarceration to Jim Crow laws, stating that both work as racial caste systems.
DNA cluster analysis to determine racial background has recently been used by some criminal investigators to narrow their search for the identity of both suspects and victims. Proponents of DNA profiling in criminal investigations cite cases where leads based on DNA analysis proved useful, but the practice remains controversial among medical ethicists, defense lawyers and some in law enforcement.
Similarly, forensic anthropologists draw on highly heritable morphological features of human remains (e.g. cranial measurements) to aid in the identification of the body, including in terms of race. In a 1992 article, anthropologist Norman Sauer noted that anthropologists had generally abandoned the concept of race as a valid representation of human biological diversity, except for forensic anthropologists. This led him to ask, "if races don't exist, why are forensic anthropologists so good at identifying them?" He concluded that "the successful assignment of race to a skeletal specimen is not a vindication of the race concept, but rather a prediction that an individual, while alive, was assigned to a particular socially constructed 'racial' category. A specimen may display features that point to African ancestry. In this country that person is likely to have been labeled Black regardless of whether or not such a race actually exists in nature." C. Loring Brace echoed this answer, stating: "The simple answer is that, as members of the society that poses the question, they are inculcated into the social conventions that determine the expected answer. They should also be aware of the biological inaccuracies contained in that "politically correct" answer. Skeletal analysis provides no direct assessment of skin color, but it does allow an accurate estimate of original geographical origins. African, eastern Asian, and European ancestry can be specified with a high degree of accuracy. Africa of course entails "black," but "black" does not entail African."
Commercial determination of ancestry
New research in molecular genetics, and the marketing of genetic identities through the analysis of one's Y chromosome, mtDNA, or autosomal DNA to the general public in the form of "Personalized Genetic Histories" (PGH) has caused debate.
Typically, a consumer of a commercial PGH service sends in a sample of DNA which is analyzed by molecular biologists and is sent a report. Shriver and Kittles remarked:
For many customers of lineage-based tests, there is a lack of understanding that their maternal and paternal lineages do not necessarily represent their entire genetic make-up. For example, an individual might have more than 85% Western European 'genomic' ancestry but still have a West African mtDNA or NRY lineage. Nevertheless, they acknowledge, such stories are increasingly appealing to the general public.
Through these reports, advances in molecular genetics are used to create or confirm stories people have about their social identities. Abu el-Haj argued that genetic lineages, like older notions of race, suggest some idea of biological relatedness, but unlike older notions of race they are not directly connected to claims about human behaviour or character. She said that "postgenomics does seem to be giving race a new lease on life."
Race science was never just about classification. It presupposed a distinctive relationship between "nature" and "culture," understanding the differences in the former to ground and to generate the different kinds of persons ("natural kinds") and the distinctive stages of cultures and civilizations that inhabit the world.
Abu el-Haj argues that genomics and the mapping of lineages and clusters liberates "the new racial science from the older one by disentangling ancestry from culture and capacity." As an example, she refers to recent work by Hammer et al., which aimed to test the claim that present-day Jews are more closely related to one another than to neighbouring non-Jewish populations. Hammer et al. found that the degree of genetic similarity among Jews shifted depending on the locus investigated, and suggested that this was the result of natural selection acting on particular loci. They therefore focused on the non-recombining Y chromosome to "circumvent some of the complications associated with selection".
As another example she points to work by Thomas et al., who sought to distinguish between the Y chromosomes of Jewish priests (Kohanim), (in Judaism, membership in the priesthood is passed on through the father's line) and the Y chromosomes of non-Jews. Abu el-Haj concluded that this new "race science" calls attention to the importance of "ancestry" (narrowly defined, as it does not include all ancestors) in some religions and in popular culture, and people's desire to use science to confirm their claims about ancestry; this "race science," she argues, is fundamentally different from older notions of race that were used to explain differences in human behaviour or social status:
As neutral markers, junk DNA cannot generate cultural, behavioural, or, for that matter, truly biological differences between groups ... mtDNA and Y-chromosome markers relied on in such work are not "traits" or "qualities" in the old racial sense. They do not render some populations more prone to violence, more likely to suffer psychiatric disorders, or for that matter, incapable of being fully integrated – because of their lower evolutionary development – into a European cultural world. Instead, they are "marks," signs of religious beliefs and practices… it is via biological noncoding genetic evidence that one can demonstrate that history itself is shared, that historical traditions are (or might well be) true."
Stephan Palmié has responded to Abu el-Haj's claim that genetic lineages make possible a new, politically, economically, and socially benign notion of race and racial difference by suggesting that efforts to link genetic history and personal identity will inevitably "ground present social arrangements in a time-hallowed past," that is, use biology to explain cultural differences and social inequalities.
One problem with these assignments is admixture. Many people have a varied ancestry. For example, in the United States, most people who self-identify as African American have some European ancestors. In a survey of college students who self-identified as "white" in a northeastern U.S. university, ~30% were estimated to have <90% European ancestry.
On the other hand, there are tests that do not rely on molecular lineages but rather on correlations between allele frequencies; groups of alleles whose frequencies correlate in this way are often called clusters. These tests use informative alleles called ancestry-informative markers (AIMs), which, although shared across all human populations, vary a great deal in frequency between groups of people living in geographically distant parts of the world.
These tests use contemporary people sampled from certain parts of the world as references to determine the likely proportion of ancestry for any given individual. In a recent Public Broadcasting Service (PBS) programme on the subject of genetic ancestry testing, the academic Henry Louis Gates "wasn’t thrilled with the results (it turns out that 50 percent of his ancestors are likely European)". Charles Rotimi, of Howard University's National Human Genome Center, argued in 2003 that "the nature or appearance of genetic clustering (grouping) of people is a function of how populations are sampled, of how criteria for boundaries between clusters are set, and of the level of resolution used", all of which bias the results, and concluded that people should be very cautious about relating genetic lineages or clusters to their own sense of identity.
On the other hand, Rosenberg (2005) argued that if enough genetic markers and subjects are analyzed, then the clusters found are consistent. How many genetic markers a commercial service uses likely varies, although new technology has continually allowed increasing numbers to be analyzed.
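Rosenberg's point about marker counts can be illustrated with a toy simulation. The Python sketch below is purely illustrative: the allele frequencies, population sizes, frequency shift, and marker counts are made-up values chosen for the example, not real genetic data. With only a handful of markers, the clusters found by a standard algorithm line up with the source populations only loosely; with many markers, the agreement becomes nearly perfect.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def simulate(n_markers, n_per_pop=100, freq_shift=0.1):
    """Simulate diploid genotypes (0/1/2 copies of an allele) for two
    populations whose allele frequencies differ by a modest shift."""
    base = rng.uniform(0.2, 0.8, n_markers)          # shared baseline frequencies
    shift = rng.choice([-freq_shift, freq_shift], n_markers)
    freq_a = np.clip(base - shift / 2, 0.01, 0.99)
    freq_b = np.clip(base + shift / 2, 0.01, 0.99)
    genotypes = np.vstack([rng.binomial(2, freq_a, (n_per_pop, n_markers)),
                           rng.binomial(2, freq_b, (n_per_pop, n_markers))])
    labels = np.repeat([0, 1], n_per_pop)            # true population of origin
    return genotypes, labels

for n_markers in (10, 100, 1000):
    X, y = simulate(n_markers)
    pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    # Cluster labels are arbitrary, so count agreement up to relabelling.
    agreement = max(np.mean(pred == y), np.mean(pred != y))
    print(f"{n_markers:>5} markers: {agreement:.0%} agreement with source populations")
```

The caveat Rotimi raises applies equally to this toy example: how sharp the clusters appear depends entirely on how the reference samples were chosen and how far apart their allele frequencies are assumed to be.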
- ^ See: *Lie 2004 *Thompson & Hickey 2005 *Gordon 1964 *AAA 1998 *Palmié 2007 *Mevorach 2007 *Segal 1991 *Bindon 2005
- ^ King 2007: For example, "the association of blacks with poverty and welfare ... is due, not to race per se, but to the link that race has with poverty and its associated disadvantages"–p.75.
- ^ Schaefer 2008: "In many parts of Latin America, racial groupings are based less on the biological physical features and more on an intersection between physical features and social features such as economic class, dress, education, and context. Thus, a more fluid treatment allows for the construction of race as an achieved status rather than an ascribed status as is the case in the United States"
- ^ Graves 2001
- ^ a b Lee et al. 2008: "We caution against making the naive leap to a genetic explanation for group differences in complex traits, especially for human behavioral traits such as IQ scores"
- ^ a b c d Keita et al. 2004
- ^ AAPA 1996: "Pure races, in the sense of genetically homogeneous populations, do not exist in the human species today, nor is there any evidence that they have ever existed in the past." p. 714
- ^ See:
- ^ Sober 2000
- ^ AAA 1998: For example, "Evidence from the analysis of genetics (e.g., DNA) indicates that most physical variation, about 94%, lies within so-called racial groups. Conventional geographic 'racial' groupings differ from one another only in about 6% of their genes. This means that there is greater variation within 'racial' groups than between them."
- ^ Steven A. Ramirez What We Teach When We Teach About Race: The Problem of Law and Pseudo-Economics 54 Journal of Legal Education 365 (2004)
- ^ American Anthropological Association's Statement on "Race" May 17 1998
- ^ American Association of Physical Anthropological, Statement on Biological Aspects of Race101 American Journal Physical Anthropology 569 1996
- ^ Steve Olson, Mapping Human History: Discovering the Past Through Our Genes, Boston, 2002
- ^ Lee 1997
- ^ See: *Blank, Dabady & Citro 2004 *Smaje 1997
- ^ See: *Lee 1997 *Nobles 2000 *Morgan 1975 as cited in Lee 1997, p. 407
- ^ See: *Morgan 1975 as cited in Lee 1997, p. 407 *Smedley 2007 *Sivanandan 2000 *Crenshaw 1988 *Conley 2007 *Winfield 2007: "It was Aristotle who first arranged all animals into a single, graded scale that placed humans at the top as the most perfect iteration. By the late 19th century, the idea that inequality was the basis of natural order, known as the great chain of being, was part of the common lexicon."
- ^ Lee 1997 citing Morgan 1975 and Appiah 1992
- ^ See: *Sivanandan 2000 *Muffoletto 2003 *McNeilly et al. 1996: psychiatric instrument called the "Perceived Racism Scale" "provides a measure of the frequency of exposure to many manifestations of racism ... including individual and institutional"; also assesses motional and behavioral coping responses to racism." *Miles 2000
- ^ Owens & King 1999
- ^ See: *Brace 2000 *Gill 2000 *Lee 1997: "The very naturalness of 'reality' is itself the effect of a particular set of discursive constructions. In this way, discourse does not simply reflect reality, but actually participates in its construction"
- ^ "race". Oxford Dictionaries. April 2010. Oxford University Press. http://oxforddictionaries.com/definition/english/race--2 (accessed July 31, 2012).
- ^ a b Marks 2008, p. 28
- ^ Marco Polo, in the 13th century, writes of the North Persians: "The people are of the Mahometan religion. They are in general a handsome race, especially the women, who, in my opinion, are the most beautiful in the world."; Polo 2007, p. 41
- ^ Smedley 2007
- ^ a b Smedley 1999
- ^ Meltzer 1993
- ^ Takaki 1993
- ^ Banton 1977
- ^ For examples see: :*Lewis 1990 :*Dikötter 1992
- ^ a b c d Race, Ethnicity, and Genetics Working Group (October 2005). "The use of racial, ethnic, and ancestral categories in human genetics research". American Journal of Human Genetics 77 (4): 519–32. DOI:10.1086/491747. PMID 16175499.
- ^ Todorov 1993
- ^ Brace 2005, p. 27
- ^ Slotkin (1965), p. 177.
- ^ a b c Graves 2001, p. 39
- ^ a b Marks 1995
- ^ Graves 2001, pp. 42–43
- ^ Stocking 1968, pp. 38–40
- ^ Desmond & Moore 2009, pp. 332–341
- ^ a b c d e Lieberman & Jackson 1995
- ^ Camilo J. Cela-Conde and Francisco J. Ayala. 2007. Human Evolution Trails from the Past Oxford University Press p. 195
- ^ Lewin, Roger. 2005. Human Evolution an illustrated introduction. Fifth edition. p. 159. Blackwell
- ^ Reich D, Patterson N, Kircher M, et al. (October 2011). "Denisova admixture and the first modern human dispersals into Southeast Asia and Oceania". Am. J. Hum. Genet. 89 (4): 516–28. DOI:10.1016/j.ajhg.2011.09.005. PMID 21944045.
- ^ Human genetic diversity and the nonexistence of biological races, 2009
- ^ Human genetic diversity: Lewontin's fallacy, Edwards, 2003
- ^ * (2007) "Genetic Similarities Within and Between Human Populations". Genetics 176 (1): 351–9. DOI:10.1534/genetics.106.067355. PMID 17339205.
- ^ Witherspoon DJ, Wooding S, Rogers AR, et al. (May 2007). "Genetic Similarities Within and Between Human Populations". Genetics 176 (1): 351–9. DOI:10.1534/genetics.106.067355. PMID 17339205.
- ^ Currell & Cogdell 2006
- ^ Cravens 2010
- ^ See: *Cravens 2010 *Angier 2000 *Amundson 2005 *Reardon 2005
- ^ See: *Smedley 2002 *Boas 1912
- ^ See: *Marks 2002 *Montagu 1941 *Montagu 1942
- ^ Wilson & Brown 1953
- ^ See: *Keita et al. 2004 *Templeton 1998 *Long & Kittles 2003
- ^ Haig et al. 2006
- ^ a b Waples & Gaggiotti 2006
- ^ a b c d e Templeton 1998
- ^ See: *Amadon 1949 *Mayr 1969 *Patten & Unitt 2002
- ^ a b Wright 1978
- ^ See: *Keita et al. 2004 *Templeton 1998
- ^ (2006) "Understanding Race and Human Variation: A Public Education Program". Anthropology News 47 (2): 7. DOI:10.1525/an.2006.47.2.7.
- ^ Brace 1964
- ^ a b Livingstone & Dobzhansky 1962
- ^ Ehrlich & Holm 1964
- ^ Weiss 2005
- ^ Marks 2002
- ^ Krulwich 2009
- ^ Boyd 1950
- ^ Lieberman & Kirk 1997, p. 195
- ^ Molnar 1992
- ^ Human Genome Project 2003
- ^ a b c Graves 2006
- ^ Lewontin 1972
- ^ Keita et al. 2004, Bamshad et al. 2004, Tishkoff & Kidd 2004, Jorde & Wooding 2004
- ^ Wilson et al. 2001, Cooper, Kaufman & Ward 2003 (given in summary by Bamshad et al. 2004, p. 599)
- ^ (Schwartz 2001), (Stephens 2003) (given in summary by Bamshad et al. 2004, p. 599)
- ^ Smedley & Smedley 2005; Helms et al. 2005. Lewontin, for example, argues that there is no biological basis for race on the basis of research indicating that more genetic variation exists within such races than among them (Lewontin 1972).
- ^ Long & Kittles 2003
- ^ Edwards 2003
- ^ See: *Cavalli-Sforza, Menozzi & Piazza 1994 *Bamshad et al. 2004, p. 599 *Tang et al. 2004 *Rosenberg et al. 2005: "If enough markers are used... individuals can be partitioned into genetic clusters that match major geographic subdivisions of the globe."
- ^ Mountain & Risch 2004
- ^ Gitschier 2005
- ^ Witherspoon et al. 2007
- ^ Witherspoon DJ, Wooding S, Rogers AR, et al. (May 2007). "Genetic Similarities Within and Between Human Populations". Genetics 176 (1): 358. DOI:10.1534/genetics.106.067355. PMID 17339205.
- ^ Loring Brace, C. 2005. Race is a four letter word. Oxford University Press.
- ^ Kaplan, Jonathan Michael (January 2011) ‘Race’: What Biology Can Tell Us about a Social Construct. In: Encyclopedia of Life Sciences (ELS). John Wiley & Sons, Ltd: Chichester
- ^ Graves, Joseph. 2001. The Emperor's New Clothes. Rutgers University Press
- ^ Weiss KM and Fullerton SM (2005) Racing around, getting nowhere. Evolutionary Anthropology 14: 165–169
- ^ Gordon 1964
- ^ "New Ideas, New Fuels: Craig Venter at the Oxonian". FORA.tv. 2008-11-03. http://fora.tv/2008/07/30/New_Ideas_New_Fuels_Craig_Venter_at_the_Oxonian#chapter_17. Retrieved 2009-04-18.
- ^ (May 2007) "Genomics, divination, 'racecraft'". American Ethnologist 34: 205–22. DOI:10.1525/ae.2007.34.2.205.
- ^ (2007) "Race, racism, and academic complicity". American Ethnologist 34: 238. DOI:10.1525/ae.2007.34.2.238.
- ^ Imani Perry, More Beautiful and More Terrible: The Embrace and Transcendence of Racial Inequality in the United States (New York, NY: New York University Press, 2011), 23.
- ^ Imani Perry, More Beautiful and More Terrible: The Embrace and Transcendence of Racial Inequality in the United States (New York, NY: New York University Press, 2011), 24.
- ^ Tang H, Quertermous T, Rodriguez B, et al. (February 2005). "Genetic Structure, Self-Identified Race/Ethnicity, and Confounding in Case-Control Association Studies". American Journal of Human Genetics 76 (2): 268–75. DOI:10.1086/427888. PMID 15625622.
- ^ Risch N (July 2005). "The whole side of it--an interview with Neil Risch by Jane Gitschier". PLoS Genetics 1 (1): e14. DOI:10.1371/journal.pgen.0010014. PMID 17411332.
- ^ Grosfoguel, Ramán (September 2004). "Race and Ethnicity or Racialized Ethnicities? Identities within Global Coloniality". Ethnicities 4 (3). DOI:10.1177/1468796804045237. Retrieved on 2012-08-06.
- ^ Harris 1980
- ^ The Council of the European Union Council Directive 2000/43/EC of 29 June 2000 implementing the principle of equal treatment between persons irrespective of racial or ethnic origin
- ^ European Union Directives on the Prohibition of Discrimination Icelandic Human Rights Centre
- ^ Mark Bell Racism and Equality in the European Union Oxford University Press, publication date: 2009, Print ISBN-13: 9780199297849, DOI:10.1093/acprof:oso/9780199297849.001.0001
- ^ Mark Bell Racism and Equality in the European Union Oxford University Press, publication date: 2009, Print ISBN-13: 9780199297849, DOI:10.1093/acprof:oso/9780199297849.001.0001
- ^ Race Policy in France by Erik Bleich, Middlebury College, 2012-05-01
- ^ Sexton, Jared (2008). Amalgamation Schemes. Univ of Minnesota Press.
- ^ Nobles 2000
- ^ "Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity". Office of Management and Budget. 1997-10-30. http://www.whitehouse.gov/omb/fedreg/1997standards.html. Retrieved 2009-03-19. Also: U.S. Census Bureau Guidance on the Presentation and Comparison of Race and Hispanic Origin Data and B03002. HISPANIC OR LATINO ORIGIN BY RACE; 2007 American Community Survey 1-Year Estimates
- ^ (2003) "'Race' Still an Issue for Physical Anthropology? Results of Polish Studies Seen in the Light of the U.S. Findings". American Anthropologist 105: 116–24. DOI:10.1525/aa.2003.105.1.116.
- ^ The race concept in six regions: variation without consensus, Lieberman L, Kaszycka KA, Martinez Fuentes AJ, Yablonsky L, Kirk RC, Strkalj G, Wang Q, Sun L., Coll Antropol. 2004 Dec;28(2):907-21, http://www.ncbi.nlm.nih.gov/pubmed/15666627
- ^ Current Views of European Anthropologists on Race: Influence of Educational and Ideological Background, Katarzyna A. Kaszycka, Goran Štrkalj, Jan Strzalko, American Anthropologist Volume 111, Issue 1, pages 43–56, March 2009, doi:10.1111/j.1548-1433.2009.01076.x
- ^ The decline of race in American physical anthropology Leonard Lieberman, Rodney C. Kirk, Michael Corcoran. 2003. Department of Sociology and Anthropology, Central Michigan University, Mt. Pleasant, MI. 48859, USA
- ^ (2003) "Perishing Paradigm: Race1931-99". American Anthropologist 105: 110. DOI:10.1525/aa.2003.105.1.110.
A following article in the same issue questions the precise rate of decline, but from their opposing perspective agrees that the Negroid/Caucasoid/Mongoloid paradigm has fallen into near-total disfavor: (2003) "Surveying the Race Concept: A Reply to Lieberman, Kirk, and Littlefield". American Anthropologist 105: 114. DOI:10.1525/aa.2003.105.1.114.
- ^ "American Anthropological Association Statement on "Race"". Aaanet.org. 1998-05-17. http://www.aaanet.org/stmts/racepp.htm. Retrieved 2009-04-18.
- ^ Bindon, Jim. University of Alabama. "Post World War II". 2005. August 28, 2006.
- ^ (February 2001) "How "Caucasoids" got such big crania and why they shrank. From Morton to Rushton." (PDF). Current anthropology 42 (1): 69–95. DOI:10.1086/318434. PMID 14992214.
- ^ (2007) "The Status of the Race Concept in Contemporary Biological Anthropology: A Review" (PDF). Anthropologist.
- ^ a b Does race exist? A proponent’s perspective. Gill GW. (2000) PBS. http://www.pbs.org/wgbh/nova/first/gill.html
- ^ http://www.pbs.org/wgbh/nova/first/brace.html
- ^ See: *Gill 2000 *Armelagos & Smay 2000 *Risch et al. 2002 *Bloche 2004
- ^ C. Loring Brace, 1995. "Region Does not Mean "Race"--Reality Versus Convention in Forensic Anthropology," Journal of Forensic Sciences 40 (#2): 29-33.
- ^ Frederick P. Rivara and Laurence Finberg, "Use of the Terms Race and Ethnicity," Archives of Pediatrics & Adolescent Medicine 155, no. 2 (2001): 119. "In future issues of the ARCHIVES, we ask authors to not use race and ethnicity when there is no biological, scientific, or sociological reason for doing so. Race or ethnicity should not be used as explanatory variables, when the underlying constructs are variables that can, and should, be measured directly (eg, educational level of subjects, household income of the families, single vs 2-parent households, employment of parents, owning vs renting one's home, and other measures of socioeconomic status). In contrast, the recent attention on decreasing health disparities uses race and ethnicity not as explanatory variables but as ways of examining the underlying sociocultural reasons for these disparities and appropriately targeting attention and resources on children and adolescents with poorer health. In select issues and questions such as these, use of race and ethnicity is appropriate."
- ^ See program announcement and requests for grant applications at the NIH website, at nih.gov.
- ^ Robert S. Schwartz, "Racial Profiling in Medical Research," The New England Journal of Medicine, 344 (no, 18, May 3, 2001)
- ^ Lieberman, Leonard, Raymond E. Hampton, Alice Littlefield, and Glen Hallead. 1992. "Race in Biology and Anthropology: A Study of College Texts and Professors." Journal of Research in Science Teaching 29 (3): 301–21.
- ^ Reconstructing Race in Science and Society:Biology Textbooks, 1952–2002, Ann Morning, American Journal of Sociology. 2008;114 Suppl:S106-37.
- ^ The presentation of human biological diversity in sport and exercise science textbooks: the example of "race.", Christopher J. Hallinan, Journal of Sport Behavior, March 1994
- ^ The conceptualization and operationalization of race and ethnicity by health services researchers, Susan Moscou, Nursing Inquiry, Volume 15, Issue 2, pages 94–105, June 2008
- ^ Human Biological Variation in Anatomy Textbooks: The Role of Ancestry, Goran Štrkalj and Veli Solyali, Studies on Ethno-Medicine, 4(3): 157-161 (2010)
- ^ Herrnstein & Murray 1996, pp. 413–414
- ^ Gould, S. J. (1981). The Mismeasure of Man. New York: W.W. Norton & Co. passim
- ^ Sternberg, Grigorenko, Kidd (2005). "Intelligence, race, and genetics". American Psychologist 60.
- ^ Office of Minority Health
- ^ a b c Risch et al. 2002
- ^ a b Condit et al. 2003
- ^ Lee et al. 2008
- ^ (2009) "Beyond BiDil: the Expanding Embrace of Race in Biomedical Research and Product Development" (PDF). St. Louis University Journal of Health Law & Policy 3: 61–92. Retrieved on 30 December 2010. ; In 2005, the Food and Drug Administration licensed a drug, BiDil, targeted specifically for the treatment of heart disease in African Americans. The recommendation of the drug for "blacks" is criticized because clinical trials were limited only to self-identified African Americans. It has been conceded by the trial investigators that there is no basis to claim the drug works differently in any other population. However, being approved and marketed to African Americans only, that specificity alone has been used in turn to claim genetic differences.
- ^ In summary, Condit et al. (2003) argues that, in order to predict the clinical success of pharmacogenomic research, scholars must conduct subsidiary research on two fronts: Science, wherein the degree of correspondence between popular and professional racial categories can be assessed; and society at large, through which attitudinal factors moderate the relationship between scientific soundness and societal acceptance. To accept race-as-proxy, then, may be necessary but insufficient to solidify the future of race-based pharmacogenomics.
- ^ Michelle Alexander, The New Jim Crow: Mass Incarceration in the Age of Colorblindness (New York, NY: The New Press, 2010), 13.
- ^ Michelle Alexander, The New Jim Crow: Mass Incarceration in the Age of Colorblindness (New York, NY: The New Press, 2010), 12.
- ^ Abraham 2009
- ^ Willing 2005
- ^ a b Sauer 1992
- ^ Brace CL. 1995. J Forensic Sci. Mar;40(2):171-5. Region does not mean "race"--reality versus convention in forensic anthropology.
- ^ a b Shriver & Kittles 2004
- ^ Thomas MG, Skorecki K, Ben-Ami H, Parfitt T, Bradman N, Goldstein DB (July 1998). "Origins of Old Testament priests". Nature 394 (6689): 138–40. DOI:10.1038/28083. PMID 9671297.
- ^ (2007) "Rethinking genetic genealogy: A response to Stephan Palmié". American Ethnologist 34: 223. DOI:10.1525/ae.2007.34.2.223.
- ^ (2007) "Rejoinder: Genomic moonlighting, Jewish cyborgs, and Peircian abduction". American Ethnologist 34: 245. DOI:10.1525/ae.2007.34.2.245.
- ^ Frank, Reanne. "Back with a Vengeance: the Reemergence of a Biological Conceptualization of Race in Research on Race/Ethnic Disparities in Health". Retrieved on 2009-04-18.
- ^ Rotimi CN (December 2003). "Genetic ancestry tracing and the African identity: a double-edged sword?". Developing World Bioethics 3 (2): 151–8. DOI:10.1046/j.1471-8731.2003.00071.x. PMID 14768647.
- ^ Rosenberg NA, Mahajan S, Ramachandran S, Zhao C, Pritchard JK, Feldman MW (December 2005). "Clines, clusters, and the effect of study design on the inference of human population structure". PLoS Genetics 1 (6): e70. DOI:10.1371/journal.pgen.0010070. PMID 16355252.
- Abraham, Carolyn (2009-04-07). "Molecular eyewitness: DNA gets a human face". The Globe and Mail. http://m.theglobeandmail.com/life/molecular-eyewitness-dna-gets-a-human-face/article888804/?service=mobile&template=shareEmail&tabInside_tab=0&page=1. Retrieved 2011-02-04.
- AAA (1998-05-17). "American Anthropological Association Statement on "Race"". Aaanet.org. http://www.aaanet.org/stmts/racepp.htm. Retrieved 2009-04-18.
- AAPA (1996). "AAPA statement on biological aspects of race". Am J Phys Anthropol 101: 569–570. DOI:10.1002/ajpa.1331010408.
- (1949) "The seventy-five percent rule for subspecies". Condor 51 (6): 250–258. DOI:10.2307/1364805.
- Amundson, Ron (2005). "Disability, Ideology, and Quality of Life: A Bias in Biomedical Ethics", in David T. Wasserman, Robert Samuel Wachbroit, Jerome Edmund Bickenbach: Quality of life and human difference: genetic testing, health care, and disability. Cambridge University Press, 101–24. ISBN 9780521832014.
- Angier, Natalie (2000-08-22). "Do Races Differ? Not Really, DNA Shows". The New York Times. http://www.nytimes.com/library/national/science/082200sci-genetics-race.html. Retrieved 9 August 2010.
- Appiah, Kwame Anthony (1992). In My Father's House: Africa in the Philosophy of Culture. Oxford University Press. ISBN 9780195068528.
- Armelagos, George (2000). "Galileo wept: A critical assessment of the use of race in forensic anthropolopy". Transforming Anthropology 9: 19–29. DOI:10.1525/tran.2000.9.2.19.
- (2003-11-10) "Does Race Exist?". Scientific American Magazine.
- (August 2004) "Deconstructing the relationship between genetics and race". Nat. Rev. Genet. 5 (8): 598–609. DOI:10.1038/nrg1401. PMID 15266342.
- Banton, Michael (1977). The idea of race (paperback), Boulder: Westview Press. ISBN 0891587195.
- (October 1990) "Relationships estimated by isonymy among the Italo-Greco villages of southern Italy". Human Biology 62 (5): 649–63. PMID 2227910.
- Blank, Rebecca M. (2004). "Chapter 2", Measuring racial discrimination, National Research Council (U.S.). Panel on Methods for Assessing Discrimination. National Adademies Press, 317. ISBN 9780309091268.
- Bindon, Jim (August 28, 2006, 2005). "Post World War II". University of Alabama. http://www.as.ua.edu/ant/bindon/ant275/presentations/POST_WWII.PDF#search=%22stanley%20marion%20garn%22.
- (2004) "Race-Based Therapeutics". New England Journal of Medicine 351 (20): 2035–2037. DOI:10.1056/NEJMp048271. PMID 15533852.
- (1912) "Change in Bodily Form of Descendants of Immigrants". American Anthropologist 14: 530–562. DOI:10.1525/aa.1912.14.3.02a00080.
- Boyd, William C. (1950). Genetics and the races of man: an introduction to modern physical anthropology. Boston: Little, Brown and Company.
- Brace, CL (2000). "Does race exist? An antagonist's perspective". Pbs.org. http://www.pbs.org/wgbh/nova/first/brace.html. Retrieved 2010-10-11.
- Brace, CL (2005). Race is a four letter word. Oxford University Press, 326. ISBN 9780195173512.
- (2003) "Attitudinal barriers to delivery of race-targeted pharmacogenomics among informed lay persons". Genetics in Medicine 5 (5): 385–392. DOI:10.1097/01.GIM.0000087990.30961.72. PMID 14501834.
- Conley, D (2007). "Being black, living in the red", in PS Rothenberg: Race, Class, and Gender in the United States, 7th, New York: Worth Publishers, 350–358.
- Cravens, Hamilton (2010). "What's New in Science and Race since the 1930s?: Anthropologists and Racial Essentialism". The Historian 72 (2).
- (1988) "Race, reform, and retrenchment: Transformation and legitimation in antidiscrimination law". Harvard Law Review 101 (7): 1331–1337. DOI:10.2307/1341398.
- (2003) "Race and genomics". N Engl J Med 348 (12): 1166–1170. DOI:10.1056/NEJMsb022863. PMID 12646675.
- (2006) Popular Eugenics: National Efficiency and American Mass Culture in The 1930s. Athens, OH: Ohio University Press. ISBN 082141691X.
- Desmond, Adrian; Moore, James (2009), Darwin's sacred cause: how a hatred of slavery shaped Darwin's views on human evolution, Allen Lane, Penguin Books, pp. 484, ISBN 9781846140358
- Dikötter, Frank (1992). The discourse of race in modern China. Stanford: Stanford University Press. ISBN 9780804719940.
- (1970) Genetics of the Evolutionary Process. New York, NY: Columbia University Press. ISBN 0231028377.
- (2005) "Race and reification in science". Science 307 (5712): 1050–1051. DOI:10.1126/science.1110303. PMID 15718453.
- Edwards, AW (August 2003). "Human genetic diversity: Lewontin's fallacy". Bioessays 25 (8): 798–801. DOI:10.1002/bies.10315. PMID 12879450.
- (1964) "A Biological View of Race", in Ashley Montagu: The Concept of Race. Collier Books, 153–179.
- Gill, G (2000). "Does Race Exist? A proponent's perspective". Pbs.org. http://www.pbs.org/wgbh/nova/first/gill.html. Retrieved 2009-04-18.
- Gitschier, Jane (2005). "The Whole Side of It—An Interview with Neil Risch" 1 (1): e14. DOI:10.1371/journal.pgen.0010014. PMID 17411332.
- Gordon, Milton Myron (1964). Assimilation in American life: the role of race, religion, and national origins. Oxford: Oxford University Press. ISBN 978-0-19-500896-8.
- Graves, Joseph L (2001). The Emperor's New Clothes: Biological Theories of Race at the Millenium. Rutgers University Press.
- Graves, Joseph L. (2006). "What We Know and What We Don't Know: Human Genetic Variation and the Social Construction of Race". Social Science Research Council (SSRC). http://raceandgenomics.ssrc.org/Graves/. Retrieved 2011-01-22.
- (December 2006) "Taxonomic considerations in listing subspecies under the U.S. Endangered Species Act". Conservation Biology 20 (6): 1584–94. DOI:10.1111/j.1523-1739.2006.00530.x. PMID 17181793.
- Harris, Marvin (1980). Patterns of race in the Americas. Westport, Conn: Greenwood Press. ISBN 0-313-22359-9.
- (1996) The Bell Curve: Intelligence and class structure in American life. Simon & Schuster.
- Hooton, Earnest A (22 January 1926). "Methods of Racial Analysis". Science 63 (1621): 75–81. DOI:10.1126/science.63.1621.75.
- Human Genome Project (2003). "Human Genome Project Information: Minorities, Race, and Genomics". U.S. Department of Energy(DOE)-Human Genome Program. http://www.ornl.gov/sci/techresources/Human_Genome/elsi/minorities.shtml.
- (November 2004) "Genetic variation, classification and 'race'". Nat. Genet. 36 (11 Suppl): S28–33. DOI:10.1038/ng1435. PMID 15508000.
- (1997) "The persistence of racial thinking and the myth of racial divergence". Am Anthropol 99: 534–544. DOI:10.1525/aa.1918.104.22.1684.
- (2004) "Conceptualizing human variation". Nature Genetics 36 (S17–S20). DOI:10.1038/ng1455. PMID 15507998.
- King, Desmond (2007). "Making people work: Democratic consequences of workfare", in Beem, Christopher; Mead, Lawrence M.: Welfare Reform and Political Theory. New York: Russell Sage Foundation Publications, 65–81. ISBN 0-87154-588-8.
- Krulwich, Robert (2009-02-02). "Your Family May Once Have Been A Different Color". Morning Edition, National Public Radio. http://www.npr.org/templates/story/story.php?storyId=100057939.
- Lee, Jayne Chong-Soon (1997). "Review essay: Navigating the topology of race", in Gates, E. Nathaniel: Critical Race Theory: Essays on the Social Construction and Reproduction of Race. New York: Garland Pub, 393–426. ISBN 9780815326038.
- (2008) "The ethics of characterizing difference: guiding principles on using racial categories in human genetics". Genome Biol. 9 (7): 404. DOI:10.1186/gb-2008-9-7-404. PMID 18638359.
- (1990) Race and slavery in the Middle East. New York: Oxford University Press. ISBN 0195062833.
- Lie, John (2004). Modern Peoplehood. Cambridge, Mass.: Harvard University Press. ISBN 0674013271.
- (2001) "How "Caucasoids" got such big crania and why they shrank: from Morton to Rushton". Curr Anthropol 42 (1): 69–95. DOI:10.1086/318434. PMID 14992214.
- (1997) "Teaching About Human Variation: An Anthropological Tradition for the Twenty-first Century", in Rice, Patricia; Kottak, Conrad Phillip; White, Jane G.; Richard H. Furlow: The Teaching of Anthropology: Problems, Issues, and Decisions. Mayfield Pub, 381. ISBN 1-55934-711-2.
- (1995) "Race and Three Models of Human Origins". American Anthropologist 97 (2): 231–242. DOI:10.1525/aa.1995.97.2.02a00030.
- (1992) "Race in Biology and Anthropology: A Study of College Texts and Professors". Journal of Research in Science Teaching 29: 301–321. DOI:10.1002/tea.3660290308.
- (1972) "The Apportionment of Human Diversity". Evolutionary Biology 6: 381–397.
- (1962) "On the Non-Existence of Human Races". Current Anthropology 3: 279–281. DOI:10.1086/200290.
- (August 2003) "Human genetic diversity and the nonexistence of biological races". Human Biology 75 (4): 449–71. DOI:10.1353/hub.2003.0058. PMID 14655871. Retrieved on 2009-04-18.
- (1995) Human biodiversity: genes, race, and history. New York: Aldine de Gruyter. ISBN 0-585-39559-4.
- Marks, Jonathan (2002). "Folk Heredity", in Jefferson M. Fish: Race and Intelligence: Separating Science from Myth. Mahwah, NJ: Lawrence Erlbaum Associates. ISBN 0805837574.
- Marks, Jonathan (2008). "Race: Past, present and future. Chapter 1", in Barbara Koenig, Sandra Soo-Jin Lee & Sarah S. Richardson: Revisiting Race in a Genomic Age. Rutgers University Press.
- (1969) Principles of Systematic Zoology. New York, NY: McGraw-Hill. ISBN 0070411433.
- (Winter 2002) "The Biology of Race and the Concept of Equality". Daedalus 31 (1): 89–94.
- (1996) "The perceived racism scale: A multidimensional assessment of the experience of white racism among African Americans" 6 (1–2): 154–166.
- (1993) Slavery: a world history, revised, Cambridge, MA: DaCapo Press. ISBN 0306805367.
- (2007) "Race, racism, and academic complicity". American Ethnologist 34: 238. DOI:10.1525/ae.2007.34.2.238.
- Miles, Robert (2000). "Apropos the idea of race ... again", in Les Back, John Solomos: Theories of race and racism. Psychology Press, 125–143. ISBN 9780415156721.
- Molnar, Stephen (1992). Human variation: races, types, and ethnic groups. Englewood Cliffs, N.J: Prentice Hall. ISBN 0-13-446162-2.
- (1941) "The Concept of Race in The Human Species in the Light of Genetics" (PDF). Journal of Heredity 32 (8): 243–248.
- (1997) Man’s Most Dangerous Myth: The Fallacy of Race (paperback), AltaMira Press. ISBN 0803946481.
- Montagu, Ashley (1962). "The Concept of Race". Retrieved on 26 January 2009.
- Morgan, Edmund S. (1975). American Slavery, American Freedom: The Ordeal of Colonial Virginia. W. W. Norton and Company, Inc..
- Mountain, Joanna L. (2004). "Assessing genetic contributions to phenotypic differences among 'racial' and 'ethnic' groups" (pdf). DOI:10.1038/ng1456.
- Muffoletto, Robert (2003). "Ethics: A discourse of power" 47 (6): 62–66. DOI:10.1007/BF02763286.
- Nobles, Melissa (2000). Shades of citizenship: race and the census in modern politics. Stanford, Calif: Stanford University Press. ISBN 0-8047-4059-3.
- (2005) "Controversies in biomedical, behavioral, and forensic sciences". Am Psychol 60 (1): 115–128. DOI:10.1037/0003-066X.60.1.115. PMID 15641926.
- (1999) "Genomic Views of Human History". Science 286 (5439): 451–453. DOI:10.1126/science.286.5439.451. PMID 10521333.
- (May 2007) "Genomics, divination, 'racecraft'". American Ethnologist 34: 205–22. DOI:10.1525/ae.2007.34.2.205.
- (2002) "Diagnosability versus mean differences of sage sparrow subspecies". Auk 119 (1): 26–35. DOI:[0026:DVMDOS2.0.CO;2 10.1642/0004-8038(2002)119[0026:DVMDOS]2.0.CO;2].
- (March 2000) "Least-inclusive taxonomic unit: a new taxonomic concept for biology". Proceedings. Biological Sciences 267 (1443): 627–30. DOI:10.1098/rspb.2000.1048. PMID 10787169.
- Polo, Marco (2007). "Chapter 21: Of the Country travelled over upon leaving Ormus", The Travels of Marco Polo. Cosimo, Inc, 408. ISBN 9781602068612.
- Race, Ethnicity, and Genetics Working Group (October 2005). "The use of racial, ethnic, and ancestral categories in human genetics research". American Journal of Human Genetics 77 (4): 519–32. DOI:10.1086/491747. PMID 16175499.
- Reardon, Jenny (2005). "Post World-War II Expert Discourses on Race", Race to the finish: identity and governance in an age of genomics. Princeton UP, 17ff. ISBN 9780691118574.
- (2002) "Apportionment of global human genetic diversity based on craniometrics and skin color". Am J Phys Anthropol 118 (4): 393–398. DOI:10.1002/ajpa.10079. PMID 12124919.
- Risch, Neil (2002). "Categorization of humans in biomedical research: genes, race and disease" (PDF). Genome Biology 3 (7): comment2007. DOI:10.1186/gb-2002-3-7-comment2007. PMID 12184798.
- (April 2002) "Patterns of human diversity, within and among continents, inferred from biallelic DNA polymorphisms". Genome Res. 12 (4): 602–12. DOI:10.1101/gr.214902. PMID 11932244.
- (2005) "Clines, Clusters, and the Effect of Study Design on the Inference of Human Population Structure". PLoS Genetics 1 (6): e70. DOI:10.1371/journal.pgen.0010070. PMID 16355252.
- (1992) "Forensic Anthropology and the Concept of Race: If Races Don't Exist, Why are Forensic Anthropologists So Good at Identifying them". Social Science and Medicine 34 (2): 107–111. DOI:10.1016/0277-9536(92)90086-6. PMID 1738862.
- Sesardic, Neven (2010). "Race: A Social Destruction of a Biological Concept" 25 (143). DOI:10.1007/s10539-009-9193-7.
- Segal, Daniel A (1991). "'The European': Allegories of Racial Purity". Anthropology Today 7 (5): 7–9. DOI:10.2307/3032780.
- (September 2004) "Evidence for gradients of human genetic diversity within and among continents". Genome Res. 14 (9): 1679–85. DOI:10.1101/gr.2529604. PMID 15342553.
- Schaefer, Richard T. (ed.) (2008). Encyclopedia of Race, Ethnicity and Society. Sage. ISBN 9781412926942.
- (2004) "Opinion: Genetic ancestry and the search for personalized genetic histories". Nature Reviews Genetics 5 (8): 611–8. DOI:10.1038/nrg1405. PMID 15266343.
- Sivanandan, A (2000). "Apropos the idea of 'race' ... again", in Miles R: Theories of Race and Racism. London: Routledge, 125–143.
- Slotkin, J. S. (1965). "The Eighteenth Century", Readings in early Anthropology. Methuen Publishing, 175–243.
- (1997) "Not just a social construct: Theorising race and ethnicity". Sociology 31 (2): 307–327. DOI:10.1177/0038038597031002007.
- Smedley, A (1999). Race in North America: origin and evolution of a worldview, 2nd, Boulder: Westview Press. ISBN 0813334489.
- Smedley, Audrey (2002). "Science and the Idea of Race: A Brief History", in Jefferson M. Fish: Race and Intelligence: Separating Science from Myth. Mahwah, NJ: Lawrence Erlbaum Associates. ISBN 0805837574.
- (January 2005) "Race as biology is fiction, racism as a social problem is real: Anthropological and historical perspectives on the social construction of race". Am Psychol 60 (1): 16–26. DOI:10.1037/0003-066X.60.1.16. PMID 15641918.
- Smedley, Audrey (2007-March-14-17). "The History of the Idea of Race... and Why It Matters".
- Sober, Elliott (2000). Philosophy of biology, 2nd, Boulder, CO: Westview Press. ISBN 978-0813391267.
- (1982) The leopard's spots: scientific attitudes toward race in America, 1815–1859. University of Chicago Press. ISBN 0226771229.
- Stocking, George W. (1968). Race, Culture and Evolution: Essays in the History of Anthropology. University of Chicago Press, 380. ISBN 9780226774947.
- (1993) A different mirror: a history of multicultural America (paperback), Boston: Little, Brown. ISBN 0316831123.
- (2005) "Genetic Structure, Self-Identified Race/Ethnicity, and Confounding in Case-Control Association Studies". The American Journal of Human Genetics 76 (2): 268–75. DOI:10.1086/427888. PMID 15625622.
- (1998) "Human races: a genetic and evolutionary perspective". Am Anthropol 100: 632–650. DOI:10.1525/aa.1922.214.171.1242.
- Thompson, William (2005). Society in Focus. Boston, MA: Pearson. ISBN 0-205-41365-X.
- (2004) "Implications of biogeography of human populations for 'race' and medicine". Nature Genetics 36 (11 Suppl): S21. DOI:10.1038/ng1438. PMID 15507999.
- (1993) On human diversity. Cambridge, MA: Harvard University Press. ISBN 0674634381.
- (2006) "What is a population? An empirical evaluation of some genetic methods for identifying the number of gene pools and their degree of connectivity". Molecular Ecology 15 (6): 1419–39. DOI:10.1111/j.1365-294X.2006.02890.x. PMID 16629801.
- Weiss, Rick (2005-12-16). "Scientists Find A DNA Change That Accounts For Light Skin". The Washington Post. http://www.washingtonpost.com/wp-dyn/content/article/2005/12/15/AR2005121501728.html.
- Willing, Richard (2005-08-16). "DNA tests offer clues to suspect's race". USA Today. http://www.usatoday.com/news/nation/2005-08-16-dna_x.htm.
- (1953) "The Subspecies Concept and Its Taxonomic Application". Systematic Zoology 2 (3): 97–110. DOI:10.2307/2411818.
- (2001) "Population genetic structure of variable drug response". Nat Genet 29 (3): 265–269. DOI:10.1038/ng761. PMID 11685208.
- Winfield, AG (2007). Eugenics and education in America: Institutionalized racism and the implications of history, ideology, and memory. New York: Peter Lang Publishing, Inc, 45–46.
- (2007) "Genetic Similarities Within and Between Human Populations". Genetics 176 (1): 351–9. DOI:10.1534/genetics.106.067355. PMID 17339205.
- Wright, Sewall (1978). Evolution and the Genetics of Populations. Chicago, Illinois: Univ. Chicago Press.
- von Vacano, Diego. "The Color of Citizenship: Race, Modernity and Latin American/Hispanic Political Thought". Oxford: Oxford University Press, 2011.
- Race: the Power of an Illusion a three part documentary from California Newsreel.
- James, Michael (2008) Race, in the Stanford Encyclopedia of Philosophy.
- Ten Things Everyone Should Know About Race by California Newsreel.
- American Anthropological Association's educational website on race with links for primary school educators and researchers
- Boas's remarks on race to a general audience
- Catchpenny mysteries of ancient Egypt, "What race were the ancient Egyptians?", Larry Orcutt.
- Judy Skatssoon, "New twist on out-of-Africa theory", ABC Science Online, Wednesday, 14 July 2004.
- Racial & Ethnic Distribution of ABO Blood Types – bloodbook.com
- Are White Athletes an Endangered Species? And Why is it Taboo to Talk About It? Discussion of racial differences in athletics
- "Does Race Exist? A proponent's perspective" – Author argues that the evidence from forensic anthropology supports the idea of race.
- "Does Race Exist? An antagonist's perspective" – The author argues that clinal variation undermines the idea of race.
- American Ethnography – The concept of race Ashley Montagu's 1962 article in American Anthropology
- American Ethnography – The genetical theory of race, and anthropological method Ashley Montagu's 1942 American Anthropology article
Official statements and standards
- "The Race Question", UNESCO, 1950
- US Census Bureau: Definition of Race
- American Association of Physical Anthropologists' Statement on Biological Aspects of Race
- "Standards for Maintaining, Collecting, and Presenting Federal Data on Race and Ethnicity", Federal Register 1997
- American Anthropological Association's Statement on Race and RACE: Are we so different? a public education program developed by the American Anthropological Association.
- "Race (human)" article on Encyclopædia Britannica Online.
- The Myth of Race On the lack of scientific basis for the concept of human races (Medicine Magazine, 2007).
- Race – The power of an illusion Online companion to California Newsreel's documentary about race in society, science, and history
- Steven and Hilary Rose, The Guardian, "Why we should give up on race", 9 April 2005
- Times Online, "Gene tests prove that we are all the same under the skin", 27 October 2004.
- Michael J. Bamshad, Steve E. Olson "Does Race Exist?", Scientific American, December 2003
- "Gene Study Identifies 5 Main Human Populations, Linking Them to Geography", Nicholas Wade, NYTimes, December 2002. Covering
- Scientific American Magazine (December 2003 issue), "Does Race Exist?".
- DNA Study published by United Press International showing how 30% of White Americans have at least one Black ancestor
- Yehudi O. Webster Twenty-one Arguments for Abolishing Racial Classification, The Abolitionist Examiner, June 2000
- The Tex(t)-Mex Galleryblog, An updated, online supplement to the University of Texas Press book (2007), Tex(t)-Mex
- Times of India – Article about Asian racism
- South China Morning Post – Going beyond ‘sorry’
- Is Race "Real"? forum organized by the Social Science Research Council, includes 2005 op-ed article by A.M. Leroi from the New York Times advocating biological conceptions of race and responses from scholars in various fields More from Leori with responses
- Richard Dawkins: Race and creation (extract from The Ancestor's Tale: A Pilgrimage to the Dawn of Life) – On race, its usage and a theory of how it evolved. (Prospect Magazine October 2004)
In a tax system and in economics, the tax rate describes the burden ratio (usually expressed as a percentage) at which a business or person is taxed. There are several methods used to present a tax rate: statutory, average, marginal, and effective. These rates can also be presented using different definitions applied to a tax base: inclusive and exclusive.
A statutory tax rate is the legally imposed rate. An income tax could have multiple statutory rates for different income levels, whereas a sales tax may have a flat statutory rate.
- To calculate the average tax rate on an income tax, divide total tax liability by taxable income:
- Let t be the average tax rate.
- Let L be the total tax liability.
- Let I be the taxable income.
- Then t = L / I.
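As a quick sanity check of the formula above, here is a minimal Python sketch; the dollar figures are invented for the example.

```python
def average_tax_rate(tax_liability: float, taxable_income: float) -> float:
    """Average rate t = L / I: total tax divided by taxable income."""
    return tax_liability / taxable_income

# Example: $8,000 of tax on $50,000 of taxable income gives a 16% average rate.
print(average_tax_rate(8_000, 50_000))  # 0.16
```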
A marginal tax rate is the tax rate that applies to the last unit of currency of the tax base (taxable income or spending), and is often applied to the change in one's tax obligation as income rises:
- To calculate the marginal tax rate on an income tax:
- Let m be the marginal tax rate.
- Let ΔL be the change in tax liability.
- Let ΔI be the corresponding change in taxable income.
- Then m = ΔL / ΔI.
For an individual, it can be determined by increasing or decreasing the income earned or spent and calculating the change in taxes payable. An individual's tax bracket is the range of income for which a given marginal tax rate applies. The marginal tax rate may increase or decrease as income or consumption increases, and in most countries the income tax rate is (in principle) progressive. In such cases, the average tax rate will be lower than the marginal tax rate: an individual may have a marginal tax rate of 45% but pay an average tax of half that amount.
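The relationship between the two rates is easiest to see with a worked example. The following Python sketch uses an invented progressive schedule (the brackets and rates are hypothetical, not those of any real jurisdiction) and shows the average rate staying well below the marginal rate.

```python
# Hypothetical progressive schedule: (upper bound of bracket, rate on income in it).
BRACKETS = [(10_000, 0.00), (40_000, 0.20), (float("inf"), 0.45)]

def tax_owed(income: float) -> float:
    """Total tax under the hypothetical bracket schedule."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

def marginal_rate(income: float, step: float = 1.0) -> float:
    """Rate on the next unit of income: change in tax divided by change in income."""
    return (tax_owed(income + step) - tax_owed(income)) / step

income = 100_000
print(tax_owed(income))            # 33000.0 of tax
print(marginal_rate(income))       # ~0.45, the top-bracket rate
print(tax_owed(income) / income)   # 0.33 average rate, below the marginal rate
```

In a flat-tax system without exemptions, by contrast, the marginal and average rates coincide at every income level.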
In a jurisdiction with a flat tax on earnings, every taxpayer pays the same percentage of income, regardless of income or consumption. Some proponents of this system propose to exempt a fixed amount of earnings (such as the first $10,000) from the flat tax.
In economics, marginal tax rates are important because they are one of the factors that determine incentives to increase income; at higher marginal tax rates, some argue, the individual has less incentive to earn more. This is the foundation of the Laffer curve, which claims taxable income decreases as a function of marginal tax rate, and therefore tax revenue begins to decrease after a certain point.
Public discussion of "high taxes" may refer to overall tax rates or marginal taxes.
Marginal tax rates may be published explicitly, together with the corresponding tax brackets, but they can also be derived from published tax tables showing the tax for each income. The rate may be calculated by noting how tax changes with changes in pre-tax income, rather than with taxable income.
Marginal tax rates do not fully describe the impact of taxation. A flat rate poll tax has a marginal rate of zero, but a discontinuity in tax paid can lead to positively or negatively infinite marginal rates at particular points.
The term effective tax rate has significantly different meanings when used in different contexts or by different sources. Generally it means some amount of tax divided by some amount of income or other tax base. In International Accounting Standard 12, it is defined as income tax expense or benefit for accounting purposes divided by accounting profit. In Generally Accepted Accounting Principles (United States), the term is used in official guidance only with respect to determining income tax expense for interim (e.g., quarterly) periods by multiplying accounting income by an "estimated annual effective tax rate," the definition of which rate varies depending on the reporting entity's circumstances. In U.S. income tax law, the term is used in relation to determining whether a foreign income tax on specific types of income exceeds a certain percentage of U.S. tax that would apply on such income if U.S. tax had been applicable to the income. The popular press, Congressional Budget Office, and various think tanks have used the term to mean varying measures of tax divided by varying measures of income, with little consistency in definition. An effective tax rate may incorporate econometric, estimated, or assumed adjustments to actual data, or may be based entirely on assumptions or simulations.
Inclusive and exclusive
Tax rates can be presented differently due to differing definitions of tax base, which can make comparisons between tax systems confusing.
Some tax systems include the taxes owed in the tax base (tax-inclusive), while other tax systems do not include taxes owed as part of the base (tax-exclusive). In the United States, sales taxes are usually quoted exclusively and income taxes are quoted inclusively. Most European countries, which use a value added tax (VAT), include the tax amount when quoting merchandise prices, as do Goods and Services Tax (GST) countries such as Australia and New Zealand. However, those countries still define their tax rates on a tax-exclusive basis.
For direct rate comparisons between exclusive and inclusive taxes, one rate must be manipulated to look like the other. When a tax system imposes taxes primarily on income, the tax base is a household's pre-tax income. The appropriate income tax rate is applied to the tax base to calculate taxes owed. Under this formula, taxes to be paid are included in the base on which the tax rate is imposed. If an individual's gross income is $100 and income tax rate is 20%, taxes owed equals $20.
The income tax is taken "off the top", so the individual is left with $80 in after-tax money. Some tax laws impose taxes on a tax base equal to the pre-tax portion of a good's price. Unlike the income tax example above, these taxes do not include actual taxes owed as part of the base. A good priced at $80 with a 25% exclusive sales tax rate yields $20 in taxes owed. Since the sales tax is added "on the top", the individual pays $20 of tax on $80 of pre-tax goods for a total cost of $100. In either case, the tax base of $100 can be treated as two parts—$80 of after-tax spending money and $20 of taxes owed. A 25% exclusive tax rate approximates a 20% inclusive tax rate after adjustment. By including taxes owed in the tax base, an exclusive tax rate can be directly compared to an inclusive tax rate.
- Inclusive income tax rate comparison to an exclusive sales tax rate:
- Let i be the tax-inclusive income tax rate. For a 20% rate, i = 0.20.
- Let e be the equivalent rate expressed in terms of a sales (tax-exclusive) tax.
- Let p be the price of the good, including the tax.
- The revenue that would go to the government: p × i.
- The revenue remaining for the seller of the good: p × (1 − i).
- To convert the tax, divide the money going to the government by the money the company nets: e = (p × i) / (p × (1 − i)) = i / (1 − i). (A short numerical check of this conversion appears after the table below.)
- Therefore, to adjust any inclusive tax rate to that of an exclusive tax rate, divide the given rate by 1 minus that rate.
- 15% inclusive = 18% exclusive
- 20% inclusive = 25% exclusive
- 25% inclusive = 33% exclusive
- 33% inclusive = 50% exclusive
- 50% inclusive = 100% exclusive
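As referenced above, the conversion is simple to check numerically. Here is a minimal Python sketch (the helper names are my own, not from any library) that reproduces the table:

```python
def inclusive_to_exclusive(i: float) -> float:
    """Convert a tax-inclusive rate to the equivalent tax-exclusive rate: i / (1 - i)."""
    return i / (1.0 - i)

def exclusive_to_inclusive(e: float) -> float:
    """The inverse conversion: e / (1 + e)."""
    return e / (1.0 + e)

# Reproduce the table above, rounding to whole percentages.
for i in (0.15, 0.20, 0.25, 1 / 3, 0.50):
    print(f"{i:.0%} inclusive ≈ {inclusive_to_exclusive(i):.0%} exclusive")

# The $100 example from the text: a 25% exclusive rate corresponds to a 20% inclusive rate.
print(exclusive_to_inclusive(0.25))  # 0.2
```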
See also
- Progressive tax
- Proportional tax
- Regressive tax
- Tax incidence
- List of countries by tax rates
- List of countries by tax revenue as percentage of GDP
- Tax rates of Europe
- Tax exporting
- Capital flight
- "What is the difference between statutory, average, marginal, and effective tax rates?". Americans For Fair Taxation. Retrieved 2007-04-23.
- IAS 12, paragraphs 86.
- ASC 740-270-30-6 through -9.
- See, e.g., 26 CFR 1.904-4(c).
- For example, in CBO tables comparing historical tax rates, "Effective tax rates are calculated by dividing taxes by comprehensive household income," where comprehensive household income "equals pretax cash income plus income from other sources. Pretax cash income is the sum of wages, salaries, self-employment income, rents, taxable and nontaxable interest, dividends, realized capital gains, cash transfer payments, and retirement benefits plus taxes paid by businesses (corporate income taxes and the employer's share of Social Security, Medicare, and federal unemployment insurance payroll taxes) and employee contributions to 401(k) retirement plans. Other sources of income include all in-kind benefits (Medicare, Medicaid, employer-paid health insurance premiums, food stamps, school lunches and breakfasts, housing assistance, and energy assistance). Households with negative income are excluded from the lowest income category but are included in totals." This CBO definition includes in income many items, such as employer share of Social Security tax, not considered income for most purposes. In a different context, CBO uses the term to include total Federal corporate income taxes imputed to individuals based on the assumed level of corporate shareholdings for a class of individuals.
- For example, one study provides the caveat that "The effective tax rate calculations utilize information on the median level of assessment within a given geographical area. While a property is likely to be near the median level of assessment, the actual level of assessment for any given property could be greater or lesser than the median."
- Bachman, Paul; Haughton, Jonathan; Kotlikoff, Laurence J.; Sanchez-Penalver, Alfonso; Tuerck, David G. (2006-11). "Taxing Sales under the FairTax – What Rate Works?". Beacon Hill Institute. Tax Analysts. Retrieved 2007-04-24.
1619--- 17 black men and 3 black women land at Jamestown, Virginia, on August 20th. Possibly the first Africans to arrive in what will later be the U.S., they are accorded the status of indentured servants.
1623 or 1624--- The first black person born in America was William, son of Antoney and Isabell, indentured servants.
1644---11 blacks petitioned the Council of New Netherlands for freedom--the first black legal protest in America. The Council freed them because they had "served the Company 17 or 18 years" and had "long since been promised their freedom."
1760--- Jupiter Hammon, a New York slave, was the first black poet. He wrote An Evening Thought: Salvation by Christ with Penitential Cries.
1770s--- Jean Baptiste Pointe DuSable was the first settler in Chicago.
1770--- Crispus Attucks died in the Boston Massacre.
1773--- Phillis Wheatley was the first black author and first major black poet. She wrote Poems on Various Subjects, Religious and Moral. It was the second book published by an American woman.
1777--- Vermont became the first state to abolish slavery.
1780--- Lemuel Haynes of the Congregational Church was the first black minister certified by a predominantly white denomination.
1787---The first general institution organized and managed by blacks was the Free African Society of Philadelphia. The first black Masonic lodge was African Lodge No. 459 in Boston. James Derham, a former slave, was the first black physician. He bought his freedom and established a large practice among both blacks and whites.
1792--- The first scientific writing by a black person was produced by astronomer and mathematician Benjamin Banneker, writing in his almanac, which was issued annually after 1792.
1804--- Lemuel Haynes was the first black to receive a degree from a U.S. college, an honorary M.A. from Middlebury College.
1810--- The first black insurance company was the American Insurance Company of Philadelphia.
1816--- Richard Allen was the first black bishop, elected at the general convention of the African Methodist Episcopal Church in Philadelphia.
1818--- Frank Johnson became the first black to publish sheet music in the U.S.
1820s--- The first black drama group was the African Company of New York City.
1821--- Thomas L. Jennings was the first African American to receive a patent, issued on March 3rd.
1822--- James Hall graduated from the Medical College of Maine, the first black to graduate from a U.S. medical college.
1823--- Alexander Lucius Twilight was the first black college graduate, who received a bachelor's degree from Middlebury College.
1827--- Freedom's Journal, published in New York City, was the first black newspaper.
1830--- The first black national convention met at Philadelphia's Bethel African Methodist Episcopal Church.
1832--- Maria W. Stewart began an unprecedented public speaking tour at Franklin Hall in Boston. She was the first woman in the U.S. to engage in public political debates.
1834--- Henry Blair of Maryland was the first black inventor to receive a patent. He invented a corn planter.
1836--- Alexander Lucius Twilight was the first black elected to public office (the Vermont legislature).
1837--- Cheyney State Training School in Pennsylvania was the first black college established.
1838--- Mirror of Liberty, published in New York, was the first black magazine.
1843--- Macon B. Allen of Maine was the first black lawyer.
1853--- William Wells Brown, who wrote Clotel: or, The President's Daughter, was the first black novelist.
1854--- John V. DeGrasse was the first black to be admitted to a medical society, the Massachusetts Medical Society.
1858--- William Wells Brown was the first black playwright. He wrote The Escape.
1862--- Mary Jane Patterson was the first black woman to graduate from an American college--Oberlin College.
1863--- The 1st Kansas Colored Volunteer Infantry Regiment was the first African American regiment from a northern state to join the U.S. Army during the Civil War. Sgt. William H. Carney of the 54th Massachusetts Volunteers was the first black to earn the Congressional Medal of Honor. He was 1 of 20 blacks who fought during the Civil War to receive Congressional Medals of Honor, although the honor was not awarded until May 23, 1900.
1864--- Rebecca Lee of Boston was the first black woman physician. The New Orleans Tribune, founded by Dr. Louis C. Roudanez, was the first black daily newspaper.
1865--- John S. Rock of Massachusetts was the first black lawyer admitted to practice before the U.S. Supreme Court. Patrick Francis Healy was the first black to receive a Ph.D.
1866--- Lucy Hobbs was the first black woman to graduate from dental school.
1867--- Robert Tanner Freeman of Harvard University was the first black man to graduate from an American school of dentistry.
1869--- Ebenezer Don Carlos Bassett became the first black diplomat and the first black to receive a major government appointment--he was appointed minister to Haiti by President Grant. John Willis Menard of Louisiana became the first black to speak on the floor of the House when he pleaded his own case concerning the election he had just won when he was denied a seat.
1870--- Hiram Rhodes Revels of Mississippi became the first black U.S. Senator when he was elected to fill the unexpired term of Jefferson Davis. He was the first black in Congress. Joseph R. Rainey was the first black member of the U.S. House of Representatives. Jonathan Jasper Wright was the first black judge. He was elected to the South Carolina Supreme Court. James W. Smith of South Carolina was the first black student at West Point Military Academy.
1872--- P.B.S. Pinchback became the first black governor (Louisiana). John Henry Conyers of South Carolina was the first black student at Annapolis Naval Academy. Charlotte E. Ray was the first black woman lawyer.
1874--- The first black to preside over the House of Representatives was Rep. Joseph H. Rainey of South Carolina. Patrick Francis Healy was inaugurated president of Georgetown University, the oldest Catholic university in the U.S. Healy was the first African-American to head a predominantly white university.
1875--- The first black to serve a full term as a U.S. Senator was Blanche Kelso Bruce of Mississippi. Oliver Lewis became the first black jockey--and the first jockey--to win the Kentucky Derby. Thirteen or fourteen jockeys in the first race were black. James A. Healy was the first black bishop of a predominantly white denomination, the Roman Catholic Church.
1876--- Edward A. Bouchet was the first black to receive a Ph.D. degree from an American university, Yale University.
1877--- Frederick Douglass became the first black to receive a major government appointment in the U.S., the U.S. Marshal of the District of Columbia. Henry O. Flipper was the first black to graduate from West Point.
1878--- Mary Eliza Mahoney enrolled in the New England Hospital Nursing School on March 26th. She became the first professionally trained African-American nurse in the U.S.
1879--- Blanche Kelso Bruce became the first black to preside over the U.S. Senate.
1881--- The first African-American nursing school in the country opened at Spelman College in Atlanta, Georgia.
1884--- John R. Lynch was the first black to preside over a national political convention (Republican). Moses Fleetwood Walker was the first black in major league baseball, a catcher on the Toledo team of the American Association.
1886--- Matthew Henson moved in with his sister Eliza in Washington, D.C. Working as a stock boy for a haberdashery, he met civil engineer Lieutenant Robert Peary and began work with him as a valet. He proved himself more useful as a colleague, going with Peary on his crossings of northern Greenland in 1891-1892 and 1893-1895, lending invaluable support during the explorer's repeated struggles to reach the North Pole. He pushed Peary forward during periods of despair and saved his life on more than one occasion. He was also able to deal with the Inuit, who taught him to drive dogsleds and survive in their world, when the arrogant Peary could not convince them to lift a finger on his behalf.
1888--- Capital Savings Bank of Washington, D.C., was the first black bank.
1890--- George Dixon was the first black world champion in boxing, defeating Nunc Wallace in the 18th round.
1892--- Playing center for Harvard, William H. Lewis was the first black All-American from a major college.
1893--- Dr. Daniel Hale Williams performed the first successful operation on the human heart at Chicago's Provident Hospital.
1896--- Oriental America was the first Broadway production with an all-black company.
1897--- Edwin P. "King" McCabe founded Langston University in Oklahoma, the first African American A & M College.
1898--- A Trip to Coontown was the first black musical comedy produced, directed and managed by blacks. It ran for 3 seasons in New York.
1900--- "Lift Ev'ry Voice and Sing" was first performed.
1901--- Joe Walcott defeated Rube Ferns in 5 rounds to become the first black welterweight champion.
1902--- Joe Gans became the first black lightweight champion by knocking out Frank Erne in the 1st round.
1903--- Maggie Lena Walker founded the Saint Luke Penny Savings Bank, becoming the first black woman to head a bank.
1904--- George Poage was the first black to compete in the Olympics.
1907--- Alain L. Locke was the first black Rhodes scholar.
1908--- Jack Johnson was the first black heavyweight boxing champion. He defeated Tommy Burns.
1909--- Matthew Henson became the first black to reach the North Pole, accompanying Robert Peary. Later Peary downplayed Henson's role in the expedition. Henson wrote a book, A Negro Explorer at the North Pole. A racially-mixed group met at Niagara Falls to organize the NAACP. Then later in the year 300 blacks and whites met in New York City for the first NAACP conference.
1912--- W.C. Handy's "Memphis Blues" was the first published blues number. Bill Foster's comedy, The Railroad Porter was the first black film.
1914--- Sam Lucas was the first black actor in a full-length Hollywood film--he played Tom in Uncle Tom's Cabin.
1915--- The Lincoln Motion Picture Company was the first black movie production company. Ernest E. Just received the first Spingarn Medal for pioneering research on fertilization and cell division.
1917--- Tally Holmes and Lucy Stone were the first black players to win the American Tennis Association championship.
1919--- Fritz Pollard was the first black professional football player. He was also the first black coach--he was a player-coach for the Akron Indians. He coached them to a world professional championship in 1920.
1920--- James Weldon Johnson became the first black secretary of the National Association for the Advancement of Colored People--he was preceded by three white women and two white men.
1921--- In June, aviator Bessie Coleman became the first African American and woman to be licensed as an international pilot. Georgiana Simpson and Sadie M. Alexander were the first black women awarded Ph.D. degrees one day apart.
1923--- The Chipwoman's Fortune was the first Broadway play by a black writer (Willis Richardson).
1924--- DeHart Hubbard was the first black to win an Olympic gold medal. Dixie to Broadway, "the first real revue by Negroes," opened in New York City. Florence Mills starred.
1926--- The First Negro History Week was observed. Tiger Flowers became the first black middleweight champion, defeating Harry Greb in 15 rounds.
1928--- Archibald Motley was the first black artist to have a show at the New Gallery of New York.
1929--- The first feature-length black Hollywood films were Hearts in Dixie and Hallelujah.
1933--- Caterina Jarboro was the first black to perform with an American opera company, the Chicago Opera Company.
1934--- Caterina Jarboro was the first black prima donna of an opera company, performing Aida at the Metropolitan Opera House in New York City.
1936--- Mary McLeod Bethune was the first black woman to receive a major appointment from the U.S. government. She was named Director of Negro Affairs of the National Youth Administration. Jesse Owens defied Hitler's racist predictions and won four gold medals at the Summer Olympics in Berlin.
1938--- Crystal Bird Fauset of Pennsylvania was the first black woman elected to a state legislature.
1939--- Way Down South was the first film with a script by black writers (Langston Hughes and Clarence Muse). Jane Matilda Bolin was the first black woman judge (in New York City). The first full-length black film was Oscar Micheaux's Birthright.
1940--- Hattie McDaniel was the first black to receive an Oscar for her supporting role in Gone With the Wind. Benjamin O. Davis Sr. was the first black general in the regular army. He was appointed by President Franklin Delano Roosevelt. Booker T. Washington was the first black to be pictured on a U.S. postage stamp--the 10-cent stamp.
1943--- W.E.B. Du Bois was the first black admitted to the National Institute of Arts and Letters.
1945--- Nat King Cole was the first black with his own network radio show.
1946--- Kenny Washington of the Los Angeles Rams was the first black player in professional football in the modern era.
1947--- Jackie Robinson was the first black in baseball's major leagues in the modern era. He played for the Brooklyn Dodgers. The first black players in a World Series were Jackie Robinson and Dan Bankhead, who played with the Brooklyn Dodgers against the New York Yankees.
1948--- Alice Coachman was the first black woman to win a gold medal in the Olympics.
1949--- The first black-owned radio station was WERD in Atlanta.
1950--- Edith S. Sampson became the first black named to the U.S. delegation to the United Nations. Charles Cooper was signed by the Boston Celtics and Harlem Globetrotter "Sweetwater" Nat Clifton's contract was purchased by the New York Knicks. They were the first black players in the NBA. Ralph J. Bunche, undersecretary of the U.N., was the first black to be awarded the Nobel Peace Prize. Gwendolyn Brooks was the first black to receive a Pulitzer Prize for poetry. Althea Gibson was the first black tennis player to be accepted in national competition. Arthur Dorrington of the Atlantic City Seagulls was the first black man in organized hockey to suit up.
1951--- Amos 'n' Andy moved to television, the first TV show to have an all-black cast.
1952--- Jackie Robinson was named Director of Communication for NBC, becoming the first black executive of a major radio-TV network.
1953--- Lorraine Williams was the first black to win a nationally recognized tennis title, the junior girls' championship.
1954--- Benjamin O. Davis Jr. was the first black general in the U.S. Air Force.
1955--- E. Frederic Morrow was the first black named to an executive position in the White House. He was appointed administrative aide to President Eisenhower. Marian Anderson was the first black signed by the Metropolitan Opera. She appeared as Ulrica in Verdi's The Masked Ball on January 7th. The Brooklyn Dodgers took to the field, making history as the first team with a majority of black players.
1956--- Nat King Cole was the first black with his own network TV show, The Nat King Cole Show. Althea Gibson was the first black to win a major tennis title--the French Open.
1957--- Charles Sifford was the first black to win a major professional golf tournament (Long Beach Open). Althea Gibson was the first black to win a major U.S. national tennis championship. She was also the first black to win a Wimbledon championship.
1958--- Clifton R. Wharton Sr. was the first black to head a U.S. embassy in Europe. He was minister to Rumania. Althea Gibson was the first black voted female athlete of the year. Ruth Carol Taylor was the first black woman to become a stewardess. Lorraine Hansberry's Raisin in the Sun was the first Broadway play by a black woman to be produced.
1960--- Lorraine Hansberry's Raisin in the Sun was the first Broadway play by a black writer to win the New York Drama Critics Award.
1961--- Robert C. Weaver was the first black to head a major agency of the U.S. government as administrator of the Housing and Home Finance Agency. Ernest Davis of Syracuse was the first black to win the Heisman Memorial Trophy. With a contract for $85,000, Willie Mays was making more money than any other baseball player.
1962--- Jackie Robinson was the first black inducted into the National Baseball Hall of Fame. John "Buck" O'Neil was the first black coach of a major league baseball team, the Chicago Cubs.
1963--- Sidney Poitier was the first black to receive an Academy Award for best actor for his performance in Lilies of the Field.
1964--- Martin Luther King Jr. was the youngest person awarded the Nobel Peace Prize--he was 35. Arthur Ashe was the first African-American to play on the U.S. Davis Cup tennis team.
1965--- Patricia R. Harris took the post of U.S. Ambassador to Belgium, becoming the first African-American U.S. ambassador.
1966--- Robert C. Weaver became the first black cabinet member when appointed by President Johnson to be secretary of the Department of Housing and Urban Development. Andrew F. Brimmer was the first black governor of the Federal Reserve Board. Emmett Ashford was the first black umpire in the major leagues.
1967--- Emlen Tunnell, a defensive back for the New York Giants, was the first black elected to the Football Hall of Fame. Thurgood Marshall became the first black Supreme Court justice.
1968--- Henry Lewis was the first black musical director of an American orchestra, the New Jersey Symphony. Shirley Chisholm was the first black woman in Congress. Moneta J. Sleet Jr. of Ebony magazine was the first black male to receive a Pulitzer Prize for photography.
1970--- Joseph L. Searles III became the first black on the New York Stock Exchange. Cheryl Brown, Miss Iowa, was the first African-American contestant in the nation's most popular beauty pageant.
1971--- Samuel Lee Gravely, Jr. was the first black admiral in the U.S. Navy.
1972--- Shirley Chisholm was the first black woman nominated for president of the U.S. Jerome H. Holland was the first black elected to the board of directors of the New York Stock Exchange. Bob Douglas, owner and coach of the New York Renaissance (which won 88 consecutive games in 1933) was the first black man to be elected to the Basketball Hall of Fame.
1975--- Lee Elder was the first black to play in the Masters Tournament at Augusta, Georgia. The first black-owned TV station was Detroit's WGPR-TV.
1976--- Patricia R. Harris was the first black woman named to the cabinet of a U.S. president. She was appointed secretary of the Department of Housing and Urban Development by Jimmy Carter.
1979--- The first black general in the Marine Corps was Frank E. Peterson, Jr. Hazel Johnson was appointed the first black woman general.
1983--- Guion Steward Bluford, Jr. was the first black in space. Vanessa Williams, Miss New York, was crowned the first black Miss America.
1986--- Navy Lt. Commander Donnie Cochran became the first black pilot to fly with the celebrated Blue Angels precision aerial demonstration team. Debi Thomas was the first black to win a world figure skating championship.
1988--- The Most Reverend Eugene Antonio Marino became the nation's first black Roman Catholic archbishop during an installation mass at the Atlanta Civic Center.
1989--- Oprah Winfrey became the first black to own her own television and film production company, Harpo Studios, Inc.
2002--- Vonetta Flowers was the first African-American to win a gold medal in a Winter Olympics. She won in the women's bobsleigh event on February 19th. Then on March 24th, actress Halle Berry became the first African-American woman to win the Academy Award for best actress for the film Monster's Ball.
(a) What is meant by the terms supply curve, and demand curve, and what is significant about the point where they cross? [5 marks]
(b) What is meant by the terms cost curve, and break even point. Draw a diagram to show the relation between them and the demand curve. How would you attempt to establish a demand curve in practice? [5 marks]
(c) The cost and demand schedules for a particular product are given in the following table. What price should the manufacturer set? [5 marks]
|Volume, k||Unit Cost||Unit Price|
(d) Discuss the economic impact of the advent of the Internet. [5 marks]
(a) A supply curve shows the relationship between the quantity of a good that suppliers in a given market desire to sell at each price, holding other things equal.
A demand curve shows the relationship between the quantity of a good that buyers would purchase at each price, holding other things equal. Normally a demand curve has price on the Y axis and Quantity on the X axis.
The point where the supply curve and the demand curve cross is the stable equilibrium point in the market. If the market is over-supplied, prices fall and the market moves toward equilibrium. If it is under-supplied, prices rise, and again the market moves towards equilibrium. However, because the feedback mechanism is not perfect and involves delays, prices may oscillate about the equilibrium point. If the supply and demand curves are non-linear, this oscillation may be chaotic.
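The delayed-feedback oscillation described above is often illustrated with the cobweb model, in which suppliers react to the previous period's price. The following is a minimal sketch, assuming linear curves with illustrative coefficients that are not figures from the question:

```python
# Cobweb-model sketch of price oscillation about equilibrium.
# Demand: Qd = a - b*P.  Lagged supply: Qs(t) = c + d*P(t-1).
# All coefficients below are illustrative assumptions.

def cobweb(a=20.0, b=1.0, c=2.0, d=0.9, p0=5.0, steps=12):
    prices = [p0]
    for _ in range(steps):
        q_supplied = c + d * prices[-1]    # producers react to last period's price
        p_clearing = (a - q_supplied) / b  # price that clears that quantity
        prices.append(p_clearing)
    return prices

if __name__ == "__main__":
    equilibrium = (20.0 - 2.0) / (1.0 + 0.9)  # P* where Qd = Qs
    for t, p in enumerate(cobweb()):
        print(f"t={t:2d}  price={p:6.2f}  (equilibrium = {equilibrium:.2f})")
```

Because the assumed supply response (d) is smaller than the demand slope (b), the printed prices oscillate but converge on the equilibrium; a steeper supply response makes the oscillation grow, and non-linear curves can make it chaotic.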
(b) A cost curve shows the relationship between the quantity and the cost of goods supplied in a given market. The point at which it crosses the demand curve (if it does) is the break-even point, where goods are supplied to the market at no profit or loss.
It can be very difficult to determine the demand curve of a market in practice, particularly if there are multiple suppliers to the market. Market research, such as test marketing at different prices, or correlation between market surveys offering different features at different prices, may go some way towards establishing the market's price sensitivity. Historical data for goods with fluctuating prices, such as petrol, can be revealing, but other factors may also play a part and must be discounted.
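One way to make the test-marketing approach concrete is to fit a straight line to observed (price, quantity) pairs. The sketch below uses least squares on invented sample data; both the data points and the linear form are assumptions for illustration:

```python
# Least-squares fit of a linear demand curve Q = a + b*P from observed
# (price, quantity) pairs, e.g. gathered by test marketing at several prices.
# The sample data are invented for illustration only.
import numpy as np

prices = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
quantities = np.array([9.8, 8.1, 5.9, 4.2, 1.9])  # units sold at each price

slope, intercept = np.polyfit(prices, quantities, 1)  # slope should come out negative
print(f"Estimated demand curve: Q = {intercept:.2f} {slope:+.2f}*P")
```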
|Volume, k||Unit Cost||Unit Price||Profit/unit||Revenue||Expenditure||Gross Profit|
Gross profit is quadratic in quantity. The problem can be solved algebraically (C = 12 - 0.2*Q; P = 20 - Q), but it is easier to solve numerically by extending the table.
By inspection, gross profit peaks at a quantity of 5, which equates to a price of 15, although profit is not sensitive to small variations about this point.
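The numerical approach can be reproduced directly from the schedules quoted above (unit cost C = 12 - 0.2*Q, unit price P = 20 - Q, with volume Q in thousands). The code below is a sketch of that calculation, rebuilding the table and confirming the peak rather than reproducing the original answer's table:

```python
# Rebuild the answer table from the schedules quoted above:
# unit cost C = 12 - 0.2*Q, unit price P = 20 - Q, volume Q in thousands.

def row(q):
    cost = 12 - 0.2 * q
    price = 20 - q
    revenue = price * q
    expenditure = cost * q
    return cost, price, price - cost, revenue, expenditure, revenue - expenditure

if __name__ == "__main__":
    header = ("Q(k)", "cost", "price", "profit/unit", "revenue", "spend", "gross")
    print(("{:>12}" * len(header)).format(*header))
    for q in range(1, 10):
        print(("{:>12.1f}" * 7).format(q, *row(q)))
    best = max(range(1, 10), key=lambda q: row(q)[5])
    print(f"Gross profit peaks at Q = {best}k, i.e. a unit price of {20 - best}.")
```

Running it shows gross profit (in the same thousands units) climbing to 20 at Q = 5k and falling symmetrically either side (19.2 at Q = 4k and 6k), which is why the answer notes that the result is insensitive to small variations about that point.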
(d) Economic effect of the advent of the Internet.
Large topic, but some of the following points should be noted:
Maybe a 20% productivity gain, although hard to separate from widespread adoption of computers ("The trillion dollar market" effect)
Change from smokestack to information industries
Globalisation of marketplaces; may have tax and nationality implications
Better information, leading to more perfect markets
Development of demand pricing, such as B2B auctions
Barriers to entry low; good for small producers
Bubble effect, akin to the introduction of radio and air transport in the 1920s
Possibly socially divisive between the information rich and the information poor
Indigenous peoples are ethnic minorities who have been marginalized as their historical territories became part of a state. In international or national legislation they are generally defined as having a set of specific rights based on their historical ties to a particular territory, and to their cultural or historical distinctiveness from politically dominant populations. The concept of indigenous people may define them as particularly vulnerable to exploitation, marginalization and oppression by nations or states that may still be in the process of colonialism, or by politically dominant ethnic groups. As a result, a special set of political rights have been set to protect them by international organizations such as the United Nations, the International Labour Organization and the World Bank. The United Nations have issued a Declaration on the Rights of Indigenous Peoples to guide member-state national policies in order to protect the collective rights of indigenous peoples, such as their culture, identity, language, and access to employment, health, education and natural resources. Although no definitive definition of "indigenous peoples" exists, it is estimated that the total population of post-colonial indigenous peoples seeking human rights and discrimination redress ranges from 220 million to 350 million.
Terms and etymologies
The adjective indigenous is derived from a Latin root meaning "native" or "born within". According to its meaning in English, any given people, ethnic group or community may be described as indigenous in reference to some particular region or location to which they trace their traditional tribal land claim. However, during the late twentieth century the term Indigenous people evolved into a legal category, which refers to culturally distinct groups that had been affected by the processes of colonization.
Other terms used to refer to indigenous populations are: aboriginal, native, original, first, and hereditary owners in indigenous law.
The use of the term peoples in association with indigenous derives from the 19th-century anthropological and ethnographic disciplines; the Merriam-Webster Dictionary defines a people as "a body of persons that are united by a common culture, tradition, or sense of kinship, which typically have common language, institutions, and beliefs, and often constitute a politically organized group".
Definition of indigeneity
There is no single, universally accepted definition of the term "indigenous peoples"; however, the four most often invoked elements are:
- a priority in time
- the voluntary perpetuation of cultural distinctiveness
- an experience of subjugation, marginalisation and dispossession
- and self-identification
They form at present non-dominant sectors of society and are determined to preserve, develop and transmit to future generations their ancestral territories, and their ethnic identity, as the basis of their continued existence as peoples, in accordance with their own cultural patterns, social institutions and legal system. This historical continuity may consist of the continuation, for an extended period reaching into the present of one or more of the following factors:
a. Occupation of ancestral lands, or at least of part of them
b. Common ancestry with the original occupants of these lands
c. Culture in general, or in specific manifestations (such as religion, living under a tribal system, membership of an indigenous community, dress, means of livelihood, lifestyle, etc.)
d. Language (whether used as the only language, as mother-tongue, as the habitual means of communication at home or in the family, or as the main, preferred, habitual, general or normal language)
e. Residence in certain parts of the country, or in certain regions of the world
f. Other relevant factors.
On an individual basis, an indigenous person is one who self-identifies as indigenous (group consciousness), and is recognized and accepted by these populations as one of their members (acceptance by the group). This working definition is recognised and employed by international and rights-based non-governmental organizations, as well as among national/sub-national governments themselves. However, the degree to which indigenous peoples' rights and issues are accepted and recognised in practical instruments such as treaties and other binding and non-binding agreements varies, sometimes considerably, from the application of the above definition.
Academics who define indigenous peoples as "living descendants of pre-invasion inhabitants of lands now dominated by others. They are culturally distinct groups that find themselves engulfed by other settler societies born of forces of empire and conquest" have encountered criticism for failing to consider regions and states where indigenous peoples constitute a majority, as in the PRC, Fiji, Bolivia, and Mexico, or where the entire population is indigenous, as in Iceland, Tonga and Papua New Guinea.
Legal definitions
Legal definitions of indigenousness have changed over time to reflect changing perceptions of the peoples concerned and of how indigenousness is conceptualised, for example in Africa:
- 1. from the advent of the colonial rule until decolonisation, the concept was used to refer to all non-European natives on territories conquered and colonised by European powers
- 2. under the early years of the post-colonial era, indigenousness was popularised as a concept referring to non-Europeans in countries where peoples mainly descending from European settlers remained dominant
- 3. the indigenous rights movement was internationalised to cover other (marginalised) groups, in Africa, Asia, Europe and the Pacific
The first attempt for a legal definition was made by the International Labour Organization's The Indigenous and Tribal Populations Convention, 1957 (No. 107).
International organisations
International Labour Organisation
The International Labour Organisation (Convention No. 169, concerning the working rights of Indigenous and Tribal Peoples, 1989) in Article 1 contains a statement of coverage rather than a definition, indicating that the Convention applies to:
- a) tribal peoples in independent countries whose social, cultural and economic conditions distinguish them from other sections of the national community and whose status is regulated wholly or partially by their own customs or traditions or by special laws or regulations;
- b) peoples in independent countries who are regarded as indigenous on account of their descent from the populations which inhabited the country, or a geographical region to which the country belongs, at the time of conquest or colonization or the establishment of present state boundaries and who irrespective of their legal status, retain some or all of their own social, economic, cultural and political institutions.
The World Bank
A description of Indigenous Peoples given by the World Bank (operational directive 4.20, 1991) reads as follows:
- Indigenous Peoples can be identified in particular geographical areas by the presence in varying degrees of the following characteristics:
- a) close attachment to ancestral territories and to the natural resources in these areas;
- b) self-identification and identification by others as members of a distinct cultural group;
- c) an indigenous language, often different from the national language;
- d) presence of customary social and political institutions;
- and e) primarily subsistence-oriented production.
The World Bank's policy for indigenous people states:
Because of the varied and changing contexts in which Indigenous Peoples live and because there is no universally accepted definition of "Indigenous Peoples," this policy does not define the term. Indigenous Peoples may be referred to in different countries by such terms as "indigenous ethnic minorities", "aboriginals", "hill tribes", "minority nationalities", "scheduled tribes", or "tribal groups."
In 1972 the United Nations Working Group on Indigenous Populations (WGIP) accepted as a preliminary definition a formulation put forward by Mr. José R. Martínez-Cobo, Special Rapporteur on Discrimination against Indigenous Populations. This definition has some limitations, because the definition applies mainly to pre-colonial populations, and would likely exclude other isolated or marginal societies. In 1983 the WGIP enlarged this definition (FICN. 41Sub.211983121 Adds. para. 3 79), and in 1986 further added that any individual who identified himself or herself as indigenous and was accepted by the group or the community as one of its members was to be regarded as an indigenous person (E/CN.4/Sub.2/1986/7/Add.4. para.381) as per the Martínez-Cobo working definition. However, the report was based on data gathered from the 37 respondent countries, of which 18 were from the South and Latin America, and three from North America, while not one African country was represented. Because the study terms of reference were concerned with discrimination, those instances where the indigenous populations are not subjected to discrimination because they remain the dominant demographic, or were never subject to colonisation were omitted. From this evolved the more-often cited definition
Indigenous communities, peoples and nations are those which, having a historical continuity with pre-invasion and pre-colonial societies that developed on their territories, consider themselves distinct from other sectors of the societies now prevailing in those territories, or parts of them. They form at present non-dominant sectors of society and are determined to preserve, develop, and transmit to future generations their ancestral territories, and their ethnic identity, as the basis of their continued existence as peoples, in accordance with their own cultural patterns, social institutions and legal systems.
Special Rapporteur to the UN on Indigenous peoples Erica-Irene Daes stated in 1995 that a definition was unnecessary because "historically, indigenous peoples have suffered from definitions imposed by others". Indigenous representatives have also on several occasions expressed the view before the Working Group that
...a definition of the concept of 'indigenous people' is not necessary or desirable. They have stressed the importance of self-determination as an essential component of any definition which might be elaborated by the United Nations System. In addition, a number of other elements were noted by indigenous representatives...Above all and of crucial importance is the historical and ancient connection with lands and territories
The Draft Declaration on the Rights of Indigenous Peoples prepared by the Working Group on Indigenous Populations was adopted on 13 September 2007 by the General Assembly as the United Nations Declaration on the Rights of Indigenous Peoples, which is used to produce a definition of indigenous peoples or populations based on its Annex and 46 Articles. This is because
...there is a perceived erosion of indigenous claims as nothing prevents groups whose indigenousness is resisted — if not resented — by widely recognized groups to equally invoke this identity. Countries, mostly from Africa and Asia, continue to oppose domestic applicability of the concept.
In particular Article 33 is used by many national lawmakers in producing indigenous definitions based on
1. Indigenous peoples have the right to determine their own identity or membership in accordance with their customs and traditions. This does not impair the right of indigenous individuals to obtain citizenship of the States in which they live.
2. Indigenous peoples have the right to determine the structures and to select the membership of their institutions in accordance with their own procedures.
The primary impetus in considering indigenous identity comes from post-colonial movements and from consideration of the historical impact of European imperialism on populations. The first paragraph of the introduction of a report published in 2009 by the Secretariat of the Permanent Forum on Indigenous Issues states
For centuries, since the time of their colonization, conquest or occupation, indigenous peoples have documented histories of resistance, interface or cooperation with states, thus demonstrating their conviction and determination to survive with their distinct sovereign identities. Indeed, indigenous peoples were often recognized as sovereign peoples by states, as witnessed by the hundreds of treaties concluded between indigenous peoples and the governments of the United States, Canada, New Zealand and others.
Another recent publication by the UNPFII includes the following passage with regard to the term "indigenous",
Understanding the term “indigenous”
Considering the diversity of indigenous peoples, an official definition of “indigenous” has not been adopted by any UN-system body. Instead the system has developed a modern understanding of this term based on the following:
- Self- identification as indigenous peoples at the individual level and accepted by the community as their member.
- Historical continuity with pre-colonial and/or pre-settler societies
- Strong link to territories and surrounding natural resources
- Distinct social, economic or political systems
- Distinct language, culture and beliefs
- Form non-dominant groups of society
- Resolve to maintain and reproduce their ancestral environments and systems as distinctive peoples and communities.
According to the International Work Group for Indigenous Affairs, the following organisations currently represent the indigenous peoples rights internationally:
- UN Permanent Forum on Indigenous Issues (UNPFII)
- UN Expert Mechanism on the Rights of Indigenous Peoples
- UN Special Rapporteur on the Rights of Indigenous Peoples
- Universal Periodic Review (UPR)
- UN Framework Convention on Climate Change (UNFCCC)
- UN Convention on Biological Diversity
- African Commission on Human and Peoples' Rights (ACHPR)
- Arctic Council
National definitions
Throughout history, different states have used different terms to designate the groups within their boundaries that are recognized as indigenous peoples according to international legislation. The indigenous peoples also include peoples who are regarded as indigenous based on their descent from the populations which inhabited the country at the time of inroads of non-indigenous religions and cultures or the establishment of present state boundaries, who retain some or all of their own social, economic, cultural and political institutions, but who may have been displaced from their traditional domains or who may have resettled outside their ancestral domains.
The status of the indigenous groups in the subjugated relationship can be characterized in most instances as an effectively marginalized, isolated or minimally participative one, in comparison to majority groups or the nation-state as a whole. Their ability to influence and participate in the external policies that may exercise jurisdiction over their traditional lands and practices is very frequently limited. This situation can persist even in the case where the indigenous population outnumbers that of the other inhabitants of the region or state; the defining notion here is one of separation from decision and regulatory processes that have some, at least titular, influence over aspects of their community and land rights. In a ground-breaking decision involving the Ainu people of Japan, the Japanese courts recognised their claim in law, stating that "If one minority group lived in an area prior to being ruled over by a majority group and preserved its distinct ethnic culture even after being ruled over by the majority group, while another came to live in an area ruled over by a majority after consenting to the majority rule, it must be recognised that it is only natural that the distinct ethnic culture of the former group requires greater consideration."
The presence of external laws, claims and cultural mores either potentially or actually act to variously constrain the practices and observances of an indigenous society. These constraints can be observed even when the indigenous society is regulated largely by its own tradition and custom. They may be purposefully imposed, or arise as unintended consequence of trans-cultural interaction; and have a measurable effect even where countered by other external influences and actions deemed to be beneficial or which serve to promote indigenous rights and interests within the wider community.
Commonwealth of Australia
In the early 1980s, the Commonwealth Department of Aboriginal Affairs proposed a new three-part definition of an Aboriginal or Torres Strait Islander person.
An Aboriginal or Torres Strait Islander is a person of Aboriginal or Torres Strait Islander descent who identifies as an Aboriginal or Torres Strait Islander and is accepted as such by the community in which he [or she] lives.
Indigenous Peoples of the Philippines (Tagalog: Katutubong Tao sa Pilipinas; Cebuano: Lumad or Tumandok; Ilocano: Umili a Tattao iti Filipinas) refers to a group of people or homogenous societies, identified by self-ascription and ascription by others, who have continuously lived as an organized community on communally bounded and defined territory, and who have, under claims of ownership since time immemorial, occupied, possessed and used such territories, sharing common bonds of language, customs, traditions and other distinctive cultural traits, or who have, through inroads of colonization, non-indigenous religions, and cultures, become historically differentiated from the majority of the Filipinos.
According to Russian law, the recognition of ethnic groups as indigenous peoples in Russia is based on
...their lifestyle, livelihoods, ethnic identity and population size. Of these criteria, only population size is relatively straightforward, whereas the others involve substantial subjectivity. In practice, historical administrative categories play an important role in determining which small groups are to be considered indigenous and which are not. According to the restrictions on population size, only groups that number less than fifty thousand people can be considered numerically small indigenous peoples. Hence, notwithstanding their claim to autochthony, larger non-Russian groups, such as the Sakha, Komi or Chechens, are not included in this concept because they count too many members. This is not to say that they are not considered indigenous (korennye) in a broader sense, but rather that this indigenousness does not entail the type of rights bestowed on the smaller peoples in Russia or on indigenous peoples of all sizes internationally.
A composite definition of "indigenous people" can be assembled from the above examples which includes cultural groups (and their continuity or association with a given region, or parts of a region, and who formerly or currently inhabit the region) either:
- before the region's subsequent colonization or annexation, or
- alongside other cultural groups during the formation or reign of a colony or nation-state, or
- independently or largely isolated from the influence of the claimed governance by a nation-state,
- have maintained at least in part their distinct cultural, social/organizational, or linguistic characteristics, and in doing so remain differentiated in some degree from the surrounding populations and dominant culture of the nation-state, and
- are self-identified as indigenous, or those recognized as such by other groups.
Another defining characteristic for an indigenous group is that it has preserved traditional ways of living, such as present or historical reliance upon subsistence-based production (based on pastoral, horticultural and/or hunting and gathering techniques), and a predominantly non-urbanized society. Not all indigenous groups share these characteristics. Indigenous societies may be either settled in a given locale/region or exhibit a nomadic lifestyle across a large territory, but are generally historically associated with a specific territory on which they are dependent. Indigenous societies are found in every inhabited climate zone and continent of the world.
Population and distribution
Indigenous societies range from those who have been significantly exposed to the colonizing or expansionary activities of other societies (such as the Maya peoples of Mexico and Central America) through to those who as yet remain in comparative isolation from any external influence (such as the Sentinelese and Jarawa of the Andaman Islands).
Precise estimates for the total population of the world's Indigenous peoples are very difficult to compile, given the difficulties in identification and the variances and inadequacies of available census data. Recent source estimates range from 300 million to 350 million as of the start of the 21st century. This would equate to just fewer than 6% of the total world population. This includes at least 5000 distinct peoples in over 72 countries.
Contemporary distinct indigenous groups survive in populations ranging from only a few dozen to hundreds of thousands and more. Many indigenous populations have undergone a dramatic decline and even extinction, and remain threatened in many parts of the world. Some have also been assimilated by other populations or have undergone many other changes. In other cases, indigenous populations are undergoing a recovery or expansion in numbers.
Certain indigenous societies survive even though they may no longer inhabit their "traditional" lands, owing to migration, relocation, forced resettlement or having been supplanted by other cultural groups. In many other respects, the transformation of culture of indigenous groups is ongoing, and includes permanent loss of language, loss of lands, encroachment on traditional territories, and disruption in traditional lifeways due to contamination and pollution of waters and lands.
Historical cultures
The migration, expansion and settlement of societies throughout different territories is a universal, almost defining thread which runs through the entire course of human history. Many of the cross-cultural interactions which arose as a result of these historical encounters involved societies which might properly be considered as indigenous, either from their own viewpoint or that of external societies.
Most often, these past encounters between indigenous and "non-indigenous" groups lack contemporary account or description. Any assessment or understanding of impact, result and relation can at best only be surmised, using archaeological, linguistic or other reconstructive means. Where accounts do exist, they frequently originate from the viewpoint of the colonizing, expansionary or nascent state or from rather scarce and fragmented ethnographic sources compiled by those more congenial with indigenous communities and/or representatives thereof.
Classical antiquity
Greek sources of the Classical period acknowledge the prior existence of indigenous people(s), whom they referred to as "Pelasgians". These peoples inhabited lands surrounding the Aegean Sea before the subsequent migrations of the Hellenic ancestors claimed by these authors. The disposition and precise identity of this former group is elusive, and sources such as Homer, Hesiod and Herodotus give varying, partially mythological accounts. However, it is clear that cultures existed whose indigenous characteristics were distinguished by the subsequent Hellenic cultures (and distinct from non-Greek speaking "foreigners", termed "barbarians" by the historical Greeks). Greco-Roman society flourished between 250 BC and 480 AD and commanded successive waves of conquest that gripped much of Europe, North Africa and the Near East. Because existing populations in other parts of Europe at the time of classical antiquity had more in common culturally with the Greco-Roman world, expansion across the European frontier raised fewer of the tensions now associated with indigenous issues. Expansion into Asia, Africa and the Middle East, however, introduced wholly new cultural dynamics, prefiguring what would later overtake the Americas, South East Asia and the Pacific. The encounter with peoples whose cultural customs and appearance differed strikingly from those of the colonizing power is therefore not an idea born of the Medieval period or the Enlightenment.
European expansion and colonialism
The rapid and extensive spread of the various European powers from the early 15th century onwards had a profound impact upon many of the indigenous cultures with whom they came into contact. The exploratory and colonial ventures in the Americas, Africa, Asia and the Pacific often resulted in territorial and cultural conflict, and the intentional or unintentional displacement and devastation of the indigenous populations.
One product of globalization has been a revolt against the forces of cultural uniformity and the appropriation of indigenous peoples' sovereignty by states. Critics suggest that, by removing indigenous people from their land, denying cultural knowledge in state schools to succeeding generations, eliminating use of their languages, and usurping their own normative components of culture, states are "...imposing a gray uniformity on all of humanity, stifling and suppressing the creative cultural energies of those who are most knowledgeable and prescient about the forces of nature." "Those who would destroy their way of life would first have us believe that this task is already accomplished. We now have proof to the contrary, and we have received, with gratitude, the message of harmony and respect for all life brought to us by an ancient people whose culture may still yet be allowed to make a worthy contribution to the world community of nations."
Indigenous peoples by region
Indigenous populations are distributed in regions throughout the globe. The numbers, condition and experience of indigenous groups may vary widely within a given region. A comprehensive survey is further complicated by sometimes contentious membership and identification.
In the post-colonial period, the concept of specific indigenous peoples within the African continent has gained wider acceptance, although not without controversy. The highly diverse and numerous ethnic groups which comprise most modern, independent African states contain within them various peoples whose situation, cultures and pastoralist or hunter-gatherer lifestyles are generally marginalized and set apart from the dominant political and economic structures of the nation. Since the late 20th century these peoples have increasingly sought recognition of their rights as distinct indigenous peoples, in both national and international contexts. Although the vast majority of African peoples can be considered to be indigenous in the sense that they have originated from that continent and middle and south east Asia, in practice identity as an "indigenous people" as per the term's modern application is more restrictive, and certainly not every African ethnic group claims identification under these terms. Groups and communities who do claim this recognition are those who by a variety of historical and environmental circumstances have been placed outside of the dominant state systems, and whose traditional practices and land claims often come into conflict with the objectives and policies promulgated by governments, companies and surrounding dominant societies. Given the extensive and complicated history of human migration within Africa, being the "first peoples in a land" is not a necessary precondition for acceptance as an indigenous people. Rather, indigenous identity relates more to a set of characteristics and practices than priority of arrival. For example, several populations of nomadic peoples such as the Tuareg of the Sahara and Sahel regions now inhabit areas in which they arrived comparatively recently; their claim to indigenous status (endorsed by the African Commission on Human and Peoples' Rights) is based on their marginalization as nomadic peoples in states and territories dominated by sedentary agricultural peoples. The Indigenous Peoples of Africa Co-ordinating Committee (IPACC) is one of the main trans-national network organizations recognized as a representative of African indigenous peoples in dialogues with governments and bodies such as the UN. IPACC identifies several key characteristics associated with indigenous claims in Africa:
- political and economic marginalization rooted in colonialism;
- de facto discrimination based often on the dominance of agricultural peoples in the State system (e.g. lack of access to education and health care by hunters and herders);
- the particularities of culture, identity, economy and territoriality that link hunting and herding peoples to their home environments in deserts and forests (e.g. nomadism, diet, knowledge systems);
- some indigenous peoples, such as the San and Pygmy peoples are physically distinct, which makes them subject to specific forms of discrimination.
With respect to concerns expressed that identifying some groups and not others as indigenous is in itself discriminatory, IPACC states that it:
- "...recognises that all Africans should enjoy equal rights and respect. All of Africa's diversity is to be valued. Particular communities, due to historical and environmental circumstances, have found themselves outside the state-system and underrepresented in governance...This is not to deny other Africans their status; it is to emphasise that affirmative recognition is necessary for hunter-gatherers and herding peoples to ensure their survival."
At an African inter-governmental level, the examination of indigenous rights and concerns is pursued by a sub-commission established under the African Commission on Human and Peoples' Rights (ACHPR), sponsored by the African Union (AU) (successor body to the Organization of African Unity (OAU)). In late 2003 the 53 signatory states of the ACHPR adopted the Report of the African Commission's Working Group on Indigenous Populations/Communities and its recommendations. This report says in part (p. 62):
- ...certain marginalized groups are discriminated in particular ways because of their particular culture, mode of production and marginalized position within the state[; a] form of discrimination that other groups within the state do not suffer from. The call of these marginalized groups to protection of their rights is a legitimate call to alleviate this particular form of discrimination.
The adoption of this report at least notionally subscribed the signatories to the concepts and aims of furthering the identity and rights of African Indigenous peoples. The extent to which individual states are mobilizing to put these recommendations into practice varies enormously, however, and most Indigenous groups continue to agitate for improvements in the areas of land rights, use of natural resources, protection of environment and culture, political recognition and freedom from discrimination.
Indigenous peoples of the American continents are broadly recognized as being those groups and their descendants who inhabited the region before the arrival of European colonizers and settlers (i.e., Pre-Columbian). Indigenous peoples who maintain, or seek to maintain, traditional ways of life are found from the high Arctic north to the southern extremities of Tierra del Fuego.
The impact of European colonization of the Americas on the indigenous communities has been in general quite severe, with many authorities estimating ranges of significant population decline due to the ravages of various genocide campaigns, epidemic diseases (smallpox, measles, etc.), displacement, conflict, compulsory boarding schools, massacres and exploitation. The extent of this impact is the subject of much continuing debate. Several peoples shortly thereafter became extinct, or very nearly so.
All nations in North and South America have populations of indigenous peoples within their borders. In some countries (particularly Latin American), indigenous peoples form a sizable component of the overall national population—in Bolivia they account for an estimated 56%–70% of the total nation, and at least half of the population in Guatemala and the Andean and Amazonian nations of Peru. In English, indigenous peoples are collectively referred to by several different terms which vary by region and include such ethnonyms as Native Americans, Amerindians, and Indians. In Spanish or Portuguese speaking countries one finds the use of terms such as pueblos indígenas, amerindios, povos nativos, povos indígenas, and in Peru, Comunidades Nativas, particularly among Amazonian societies like the Urarina and Matsés.
In Brazil, the term índio (Portuguese pronunciation: [ˈĩdʒi.u] or ˈĩdʒju) is used by most of the population, the media, the indigenous peoples themselves and even the government (FUNAI is an acronym for Fundação Nacional do Índio), although its Spanish equivalent indio is widely considered not to be politically correct and is falling into disuse. Nevertheless, ameríndio (ameˈɾĩdʒi.u or ameˈɾĩdʒju in the standard South American dialects), the Portuguese for Amerindian, is gaining some popularity, though it still sounds odd to many. The widespread, politically neutral term to which Brazilians are accustomed is indígena, ĩˈdʒiʒenɐ (although its literal translation is "indigenous person or peoples from anywhere", it is colloquially intended as a synonym for Amerindian, without need for specifying the continent of the indigenous peoples in question). It has more ethnic meanings than racial ones, and a "Westerner" in Brazil can be an acculturated ameríndio/índio but not an indígena, which officially means indigenous in the narrow sense.
Aboriginal peoples in Canada comprise the First Nations, Inuit and Métis. The descriptors "Indian" and "Eskimo" are falling into disuse in Canada. There are currently over 600 recognized First Nations governments or bands, encompassing 1,172,790 people (2006 census) spread across Canada, with distinctive Aboriginal cultures, languages, art, and music. National Aboriginal Day recognises the cultures and contributions of Aboriginal peoples to the history of Canada.
The Inuit have achieved a degree of administrative autonomy with the creation in 1999 of the territories of Nunavik (in Northern Quebec), Nunatsiavut (in Northern Labrador) and Nunavut, which was until 1999 a part of the Northwest Territories. The self-ruling Danish territory of Greenland is also home to a majority population of indigenous Inuit (about 85%).
In the United States, the combined populations of Native Americans, Inuit and other indigenous designations totalled 2,786,652 (constituting about 1.5% of 2003 US census figures). Some 563 scheduled tribes are recognized at the federal level, and a number of others recognized at the state level.
In Mexico, approximately 6,011,202 (constituting about 6.7% of 2005 Mexican census figures) identify as Indígenas (Spanish for natives or indigenous peoples). In the southern states of Chiapas, Yucatán and Oaxaca they constitute 26.1%, 33.5% and 35.3%, respectively, of the population. In these states several conflicts and episodes of civil war have been conducted, in which the situation and participation of indigenous societies were notable factors (see for example EZLN).
The Amerindians make up 0.4% of Brazil's population, or about 700,000 people. Indigenous peoples are found in the entire territory of Brazil, although the majority of them live in Indian reservations in the North and Center-Western part of the country. On 18 January 2007, FUNAI reported that it had confirmed the presence of 67 different uncontacted tribes in Brazil, up from 40 in 2005. With this addition Brazil has now overtaken the island of New Guinea as the country having the largest number of uncontacted tribes.
Guatemala is 50 to 80% indigenous, depending on whose statistics are used (Nelson, Finger in the Wound 1999)
The vast regions of Asia contain the majority of the world's present-day Indigenous populations, about 70% according to IWGIA figures.
The most substantial populations are in India, which constitutionally recognizes a range of "Scheduled Tribes" within its borders. These various peoples (collectively referred to as Adivasis, or tribal peoples) number about 68 million (1991 census figures, approximately 8% of the total national population). The Nivkh people are an ethnic group indigenous to Sakhalin, with only a few remaining speakers of the Nivkh language, and their fishing culture has been endangered by the development of Sakhalin's oil fields since the 1990s.
Ainu people are an ethnic group indigenous to Hokkaidō, the Kuril Islands, and much of Sakhalin. As Japanese settlement expanded, the Ainu were pushed northward, until by the Meiji period they were confined by the government to a small area in Hokkaidō, in a manner similar to the placing of Native Americans on reservations.
The languages of Taiwanese aborigines have significance in historical linguistics, since in all likelihood Taiwan was the place of origin of the entire Austronesian language family, which spread across Oceania.
There are also indigenous people in Southeast Asia.
There are also indigenous peoples in the Philippines, which Spain and later the United States colonized.
The Assyrians and Marsh Arabs are indigenous to areas of the geocultural region of Mesopotamia, which includes parts of Iraq, Syria, and Turkey. The Lurs also inhabit parts of Iran close to the Iraqi border, in the provinces of Lorestan and Ilam.
The plural form applied to all of the world's indigenous communities is therefore indigenous peoples'. In the 1970s and 1980s some UN member countries initially raised concerns about using the term. The concern was due in part to the use amongst Maoist-oriented groups of a variation invented by Vladimir Lenin, 'Workers and Oppressed Peoples and Nations of the World, Unite!', that would include emancipation of the indigenous communities. This slogan was the rallying cry of the 2nd Comintern congress in 1920, and denoted the anti-Imperialist and anti-Colonialist agenda of the Comintern, and later of the many indigenous Maoist and Communist-leaning liberation movements, although socialist doctrine rejected "indigenous nationalism". China, to comply with this very Maoist doctrine, as recently as 2003 claimed it had no "indigenous peoples".
Since most of Europe in historical times was never colonized by non-European powers with lasting effect (arguably excepting Hungary, Bulgaria, Turkish Thrace, Tatarstan, Kalmykia and islands such as Malta or Cyprus), the vast majority of Europeans could be considered indigenous. However, several widely accepted formulations, which define the term "indigenous peoples" in stricter terms, have been put forward by internationally recognized organizations such as the United Nations, the International Labour Organization and the World Bank. "Indigenous peoples" is used in this article in that narrower sense.
In Europe, present-day recognized indigenous populations are relatively few, mainly confined to the northern and far-eastern reaches of this Eurasian peninsula. Whilst there are various ethnic minorities distributed within European countries, few of these still maintain traditional subsistence cultures and are recognized as indigenous peoples per se. Notable indigenous populations include the Sami people of northern Scandinavia, the Nenets and other Samoyedic peoples of the northern Russian Federation, and the Komi peoples of the western Urals, as well as the Circassians in the North Caucasus.
Many of the present-day Pacific Island nations in the Oceania region were originally populated by Polynesian, Melanesian and Micronesian peoples over the course of thousands of years. European colonial expansion in the Pacific brought many of these under non-indigenous administration. During the 20th century several of these former colonies gained independence and nation-states were formed under local control. However, various peoples have put forward claims for Indigenous recognition where their islands are still under external administration; examples include the Chamorros of Guam and the Northern Marianas, and the Marshallese of the Marshall Islands.
In most parts of Oceania, indigenous peoples outnumber the descendants of colonists; exceptions include Australia, New Zealand and Hawaii. According to the 2001 Australian census, indigenous Australians make up 2.4% of the total population, while in New Zealand 14.6% of the population identify at least partially as indigenous Māori, with slightly more than half (53%) of all Māori residents identifying solely as Māori. The Māori are indigenous to Polynesia and settled New Zealand relatively recently, with the migrations thought to have occurred between 1000 and 1200 CE. Pre-contact Māori were not a single people, and their grouping into tribal (iwi) arrangements has become more formal in recent times. Many Māori tribal leaders signed a treaty with the British, the Treaty of Waitangi, so that the modern geo-political entity of New Zealand was established by partial consent; however, the Māori-language translation of the treaty which they signed is worded ambiguously and does not fully match the English version. The treaty process gave British citizenship to the "native" population, but some British settlers ignored the Treaty of Waitangi, and through illegal acts of colonization and war (though there were also legitimate land sales between Māori and settlers) Māori lost 95% of their land and resources between the 1850s and the 1970s, resulting in the large-scale socio-economic marginalization of the vast majority of Māori. Since the 1970s there has been a cultural renaissance among Māori, and a political drive to assert their treaty rights to land, resources and culture through the Waitangi Tribunal process. This has led to the legal recognition of the Māori language and culture, and to the return of some land, resources and money, so that today Māori businesses have an estimated value of over NZ$14 billion. Māori have also formed an important political party.
The independent state of Papua New Guinea (PNG) has a majority population of indigenous societies, with more than 700 different tribal groups recognized out of a total population of just over 5 million. The PNG Constitution and other Acts identify traditional or custom-based practices and land tenure, and explicitly set out to promote the viability of these traditional societies within the modern state. However, several conflicts and disputes concerning land use and resource rights continue to be observed between indigenous groups, the government and corporate entities.
Rights, issues and concerns
Indigenous peoples confront a diverse range of concerns associated with their status and interaction with other cultural groups, as well as changes in their inhabited environment. Some challenges are specific to particular groups; however, other challenges are commonly experienced. Bartholomew Dean and Jerome Levi (2003) explore why and how the circumstances of indigenous peoples are improving in some places of the world, while their human rights continue to be abused in others. These issues include cultural and linguistic preservation, land rights, ownership and exploitation of natural resources, political determination and autonomy, environmental degradation and incursion, poverty, health, and discrimination.
The interaction between indigenous and non-indigenous societies throughout history has been complex, ranging from outright conflict and subjugation to some degree of mutual benefit and cultural transfer. A particular aspect of anthropological study involves investigation into the ramifications of what is termed first contact, the study of what occurs when two cultures first encounter one another. The situation can be further confused when there is a complicated or contested history of migration and population of a given region, which can give rise to disputes about primacy and ownership of the land and resources.
A reference page devoted to Indigenous Matters on the website of The International Federation of Library Associations and Institutions (IFLA) includes the following passage.
Trask observes that "indigenous peoples are defined in terms of collective aboriginal occupation prior to colonial settlement." She points out an important difference between indigenous history and settler history: settlers can claim a voluntary status, since they chose to relocate to lands where their descendants now claim a legal inheritance. Indigenous peoples have an involuntary status: their physical lives on homeland areas are tied to emergence or other creation stories, and their formal nationalities were imposed upon them by outside governments.
The Bangladesh Government has stated that there are "no Indigenous Peoples in Bangladesh". This has angered the Indigenous Peoples of the Chittagong Hill Tracts, Bangladesh, collectively known as the Jumma (which include the Chakma, Marma, Tripura, Tenchungya, Chak, Pankho, Mru, Murung, Bawm, Lushai, Khyang, Gurkha, Assamese, Santal and Khumi). Experts have protested against this move of the Bangladesh Government and have questioned the Government's definition of the term "Indigenous Peoples". The move is seen by the Indigenous Peoples of Bangladesh as another step by the Government to further erode their already limited rights.
Wherever indigenous cultural identity is asserted, some particular set of societal issues and concerns may be voiced which either arise from (at least in part), or have a particular dimension associated with, their indigenous status. These concerns will often be commonly held or affect other societies also, and are not necessarily experienced uniquely by indigenous groups.
Despite their diversity, Indigenous peoples share common problems and issues in dealing with the prevailing, or invading, society. They are generally concerned that the cultures of Indigenous peoples are being lost and that indigenous peoples suffer both discrimination and pressure to assimilate into their surrounding societies. This is borne out by the fact that the lands and cultures of nearly all of the peoples mentioned in this article are under threat. Notable exceptions are the Sakha and Komi peoples (two of the northern indigenous peoples of Russia), who now control their own autonomous republics within the Russian state, and the Canadian Inuit, who form a majority of the population of the territory of Nunavut (created in 1999).
It is also sometimes argued that it is important for the human species as a whole to preserve as wide a range of cultural diversity as possible, and that the protection of indigenous cultures is vital to this enterprise.
An example of this occurred in 2002, when the Government of Botswana expelled the Kalahari Bushmen, known as the San, from lands on which they had lived for at least twenty thousand years. President Festus Mogae described the Bushmen as "stone age creatures", and a minister for local government, Margaret Nasha, likened public criticism of their eviction to criticism of the culling of elephants. In 2006 the Botswanan High Court ruled that the Bushmen had a right to return to their land in the Central Kalahari Game Reserve.
Health issues
In December 1993, the United Nations General Assembly proclaimed the International Decade of the World's Indigenous People, and requested UN specialized agencies to consider with governments and indigenous people how they can contribute to the success of the Decade of Indigenous People, commencing in December 1994. As a consequence, the World Health Organization, at its Forty-seventh World Health Assembly established a core advisory group of indigenous representatives with special knowledge of the health needs and resources of their communities, thus beginning a long-term commitment to the issue of the health of indigenous peoples.
The WHO notes that "Statistical data on the health status of indigenous peoples is scarce. This is especially notable for indigenous peoples in Africa, Asia and eastern Europe", but snapshots from countries where such statistics are available show that indigenous people are in worse health than the general population, in advanced and developing countries alike: a higher incidence of diabetes in some regions of Australia; a higher prevalence of poor sanitation and lack of safe water among Twa households in Rwanda; a greater prevalence of childbirths without prenatal care among ethnic minorities in Vietnam; suicide rates among Inuit youth in Canada eleven times the national average; and higher infant mortality rates for indigenous peoples everywhere.
International bodies concerned with indigenous peoples' rights
- African Commission on Human and Peoples' Rights (ACHPR)
- United Nations Permanent Forum on Indigenous Issues
- United Nations Expert Mechanism on the Rights of Indigenous Peoples
- United Nations Special Rapporteur on the Situation of Human Rights and Fundamental Freedoms of Indigenous Peoples
- United Nations Working Group on Indigenous Populations (discontinued)
Non-governmental Organizations working for indigenous peoples' rights
Various organizations are devoted to the preservation or study of indigenous peoples. Of these, several have widely recognized credentials to act as an intermediary or representative on behalf of indigenous peoples' groups, in negotiations on indigenous issues with governments and international organizations. These include:
- Center for World Indigenous Studies
- Cultural Survival
- Earth Peoples
- Friends of Peoples Close to Nature (fPcN)
- Incomindios Switzerland
- Indigenous Dialogues
- Indigenous Peoples' Center for Documentation, Research and Information (doCip)
- Indigenous Peoples of Africa Co-ordinating Committee (IPACC)
- International Work Group for Indigenous Affairs (IWGIA)
- Minority Rights Group International
- Netherlands Center for Indigenous Peoples (NCIV)
- Survival International
International Day of the World's Indigenous People
The International Day of the World's Indigenous People falls on 9 August, the date of the first meeting, in 1982, of the United Nations Working Group on Indigenous Populations of the Subcommission on Prevention of Discrimination and Protection of Minorities of the Commission on Human Rights.
The UN General Assembly decided on 23 December 1994 that the International Day of the World's Indigenous People should be observed on 9 August every year during the International Decade of the World's Indigenous People (resolution 49/214). Thereafter, on 20 December 2004, the General Assembly decided to continue observing the International Day of Indigenous People every year during the Second International Decade of the World's Indigenous People (2005–2014) (resolution 59/174).
Knowledge and culture
The preservation and investigation of specialized Indigenous knowledge, particularly in relation to the natural resources with which a society is associated, is a goal both of the Indigenous societies themselves and of outside societies that thereby seek to identify new resources and benefits (for example, partnerships established to research biological extracts from vegetation in the Amazon rainforests).
For some, including Indigenous communities in India, Brazil, and Malaysia and NGOs such as GRAIN and Third World Network, Indigenous peoples have often been victims of biopiracy: the unauthorized use of their natural resources and of their traditional knowledge about those resources, or an unequal sharing of the benefits between them and a patent holder.
Viewpoints
A range of differing viewpoints and attitudes have arisen from the experience and history of contact between Indigenous and "non-indigenous" communities. The cultural, regional and historical contexts in which these viewpoints have developed are complex, and many competing viewpoints exist simultaneously in any given society, albeit promulgated with greater or lesser force depending on the extent of cross-cultural exposure and internal societal change. These views may be noted from both sides of the relationship.
Indigenous viewpoints
Indigenous peoples are increasingly faced with threats to their sovereignty, environment, and access to natural resources. One example is the deforestation of tropical rainforests, which threatens the subsistence lifestyles of many native tribes. Assimilative colonial policies have also resulted in ongoing issues related to aboriginal child protection.
Non-indigenous viewpoints
Indigenous peoples have been labeled primitives, savages, or uncivilized. These terms were common during the height of European colonial expansion, but they persist in modern times. During the 17th century, indigenous peoples were commonly labeled "uncivilized". While artistic pursuits of the time revived the creative elements of classical antiquity, they also regurgitated the xenophobic ideas of that period. Some philosophers, such as Thomas Hobbes, considered indigenous people to be merely "savages", while others are purported to have considered them "noble savages". Those who were close to the Hobbesian view tended to believe themselves to have a duty to civilize and modernize indigenes. Although anthropologists, especially from Europe, once applied these terms to all tribal cultures, the practice has fallen into disfavor as demeaning and, according to anthropologists, inaccurate (see tribe, cultural evolution). Survival International runs a campaign to stamp out media portrayals of indigenous peoples as "primitive" or "savages". Friends of Peoples Close to Nature holds not only that indigenous cultures should be respected as not being inferior, but also that their way of life offers a lesson in sustainability and forms part of the struggle within the "corrupted" western world, from which the threat stems.
After World War I, however, many Europeans came to doubt the morality of the means used to "civilize" peoples. At the same time, the anti-colonial movement, and advocates of indigenous peoples, argued that words such as "civilized" and "savage" were products and tools of colonialism, and argued that colonialism itself was savagely destructive. In the mid 20th century, European attitudes began to shift to the view that indigenous and tribal peoples should have the right to decide for themselves what should happen to their ancient cultures and ancestral lands.
See also
- Collective rights
- Ethnic minority
- Genocide of indigenous peoples
- Human rights
- The Image Expedition
- Indigenous rights
- Indigenous intellectual property
- Intangible cultural heritage
- Indigenous Peoples Climate Change Assessment Initiative
- List of ethnic groups
- Uncontacted peoples
- United Nations Permanent Forum on Indigenous Issues
- Unrepresented Nations and Peoples Organization
- Virgin soil epidemic
References

- Coates 2004:12
- Sanders, Douglas (1999). "Indigenous peoples: Issues of definition". International Journal of Cultural Property 8: 4–13. doi:10.1017/S0940739199770591.
- Bodley 2008:2
- "World Directory of Minorities and Indigenous Peoples – Philippines: Overview, 2007", UNHCR | Refworld.
- Hanihara, T (1992). "Negritos, Australian Aborigines, and the proto-sundadont dental pattern: The basic populations in East Asia". American journal of physical anthropology 88 (2): 183–96. doi:10.1002/ajpa.1330880206. PMID 1605316.
- Klein, Ernest, Dr., A Comprehensive Etymological Dictionary of the English Language, volume I A-K, Elsevier Publishing Company, New York, 1966, p.787
- Mario Blaser, Harvey A. Feit, Glenn McRae, In the Way: Indigenous Peoples, Life Projects, and Development, IDRC, 2004, p.53
- Silke Von Lewinski, Indigenous Heritage and Intellectual Property: Genetic Resources, Traditional Knowledge, and Folklore, Kluwer Law International, 2004, pp.130-131
- Robert K. Hitchcock, Diana Vinding, Indigenous Peoples' Rights in Southern Africa, IWGIA, 2004, p.8, based on the Working Paper by the Chairperson-Rapporteur, Mrs. Erica-Irene A. Daes, on the concept of indigenous people. UN Document E/CN.4/Sub.2/AC.4/1996/2 (unhchr.ch)
- was the first UN Special Rapporteur of the Sub-Commission on Prevention of Discrimination and Protection of Minorities
- Martínez-Cobo (1986/7), paras. 379-382
- S. James Anaya, Indigenous Peoples in International Law, 2nd ed., Oxford University press, 2004, p.3; Professor Anaya teaches Native American Law, and is the third Commission on Human Rights Special Rapporteur on the Human Rights and Fundamental Freedoms of Indigenous People
- Felix Mukwiza Ndahinda,A Contested Legal Framework for Empowerment of 'Marginalized' Communities, Springer, 2011, p.18
- "Operational Policy 4.10 – Indigenous Peoples".
- STUDY OF THE PROBLEM OF DISCRIMINATION AGAINST INDIGENOUS POPULATIONS 30 July 1981, UN EASC
- STUDY OF THE PROBLEM OF DISCRIMINATION AGAINST INDIGENOUS POPULATIONS, p.10, Paragraph 25, 30 July 1981, UN EASC
- United Nations Sub-Commission on Prevention of Discrimination and Protection of Minorities and its Study of the Problem of Discrimination against Indigenous Populations, UN Doc. E./CN.4/Sub.2/1986/7/Add. 4.para 379 (1986)
- Daes, Erica-Irene, Special Rapporteur, Economic and Social Council, Protection of the Heritage of Indigenous Peoples Final Report of the Special Rapporteur, Mrs Erica-Irene Daes, in Conformity with Subcommission Resolution 1993/44 and Decision 1994/105 of the Commission on Human Rights. E/C'N.4/Sub.2/1995/26, p.3.; Daes was the second Special Rapporteur on the Rights of Indigenous Peoples
- Simpson, Tony, Indigenous heritage and self-determination: the cultural and intellectual property rights of indigenous peoples, The Forest Peoples Programme and IWGIA (International Work Group for Indigenous Affairs), 1997 pp.22-23
- established in 1982
- Felix Mukwiza Ndahinda, Indigenousness in Africa: A Contested Legal Framework for Empowerment of 'Marginalized' Communities, Springer, 2011, p.20
- State of the World's Indigenous Peoples, p.1
- State of the World's Indigenous Peoples, Secretariat of Permanent Forum on Indigenous Issues, UN, 2009
- Who are indigenous peoples?
- State of the World's indigenous peoples Permanent Forum on Indigenous Issues, UN, 2007
- The Indigenous World 2011, International Work Group for Indigenous Affairs, Copenhagen 2011, pp.452-543
- Judgment of the Sapporo District Court, Civil Division No. 3, 27 March 1997, in (1999) 38 ILM, p.419
- Department of Aboriginal Affairs, Report on a Review of the Administration of the Working Definition of Aboriginal and Torres Strait Islanders (1981), Commonwealth of Australia, Canberra, cited in J Gardiner-Garden, The Definition of Aboriginality: Research Note 18, 2000–01 (2000), The Parliament of Australia, p.2
- Section 3 of Republic Act 8371 or the Indigenous Peoples Rights Act.
- Elana Wilson Rowe, Russia and the North, University of Ottawa Press, 2009, p.168
- Acharya, Deepak and Shrivastava Anshu (2008): Indigenous Herbal Medicines: Tribal Formulations and Traditional Herbal Practices, Aavishkar Publishers Distributor, Jaipur- India. ISBN 978-81-7910-252-7. p. 440
- WGIP (2001). Indigenous Peoples and the United Nations System. Office of the High Commissioner for Human Rights, United Nations Office at Geneva.
- "Indigenous issues". International Work Group on Indigenous Affairs. Retrieved 5 September 2005.
- Hall, Gillette, and Harry Anthony Patrinos. Indigenous Peoples, Poverty and Human Development in Latin America. New York: Palgrave MacMillan, n.d. Google Scholar. Web. 11 Mar. 2013
- Old World Contacts/Colonists/Canary Islands. Ucalgary.ca (22 June 1999). Retrieved on 2011-10-11.
- Ronald Niezen, Origins of Indigenism: Human Rights & Politics of Identity, University of California Press, 2003, p.2
- Report from the United Nations' International NGO Conference on Discrimination against Indigenous Populations in the Americas, United Nations, 1977, p.21
- Dean, Bartholomew 2009 Urarina Society, Cosmology, and History in Peruvian Amazonia, Gainesville: University Press of Florida ISBN 978-0-8130-3378-5
- "Civilization.ca-Gateway to Aboriginal Heritage-Culture". Canadian Museum of Civilization Corporation. Government of Canada. 12 May 2006. Retrieved 18 September 2009.
- "Inuit Circumpolar Council (Canada)-ICC Charter". Inuit Circumpolar Council > ICC Charter and By-laws > ICC Charter. 2007. Retrieved 18 September 2009.
- "In the Kawaskimhon Aboriginal Moot Court Factum of the Federal Crown Canada" (PDF). Faculty of Law. University of Manitoba. 2007. p. 2. Retrieved 18 September 2009.
- "Words First An Evolving Terminology Relating to Aboriginal Peoples in Canada". Communications Branch of Indian and Northern Affairs Canada. 2004. Retrieved 26 June 2010.
- "Terminology of First Nations, Native, Aboriginal and Metis" (PDF). Aboriginal Infant Development Programs of BC. 2009. Retrieved 26 June 2010.
- "Aboriginal Identity (8), Sex (3) and Age Groups (12) for the Population of Canada, Provinces, Territories, Census Metropolitan Areas and Census Agglomerations, 2006 Census – 20% Sample Data". Census > 2006 Census: Data products > Topic-based tabulations >. Statistics Canada, Government of Canada. 06/12/2008. Retrieved 18 September 2009.
- "Assembly of First Nations - Assembly of First Nations-The Story". Assembly of First Nations. Retrieved 2 October 2009.
- "Civilization.ca-Gateway to Aboriginal Heritage-object". Canadian Museum of Civilization Corporation. 12 May 2006. Retrieved 2 October 2009.
- Brazil urged to protect Indians. BBC News (30 March 2005). Retrieved on 2011-10-11.
- Brazil sees traces of more isolated Amazon tribes. Reuters.com. Retrieved on 2011-10-11.
- "Natives in Russia's far east worry about vanishing fish". The Economic Times (India). Agence France-Presse. 25 February 2009. Retrieved 5 March 2011.
- Recognition at last for Japan's Ainu, BBC NEWS
- Blust, R. (1999), "Subgrouping, circularity and extinction: some issues in Austronesian comparative linguistics" in E. Zeitoun & P.J.K Li, ed., Selected papers from the Eighth International Conference on Austronesian Linguistics. Taipei: Academia Sinica
- Fox, James J. PDF (105 KB). Paper prepared for Symposium Austronesia Pascasarjana Linguististik dan Kajian Budaya. Universitas Udayana, Bali 19–20 August 2004.
- Diamond, Jared M. PDF (107 KB). Nature, Volume 403, February 2000, pp. 709–710
- J. K. Das, Human Rights And Indigenous Peoples, APH Publishing, 2001, pp. 30-31
- Françoise Vergès, Monsters and Revolutionaries: Colonial Family Romance and Métissage, Duke University Press, 1999, p.80
- James Summers, Peoples and International Law: How Nationalism and Self-Determination Shape a Contemporary Law of Nations, Martinus Nijhoff Publishers, 2007, p.245
- temporary rules over parts of Europe by non-European powers include Avar Khaganate (c.560s–800), Al-Andalus (711–1492), Emirate of Sicily (831–1072), the Mongol/Tatar invasions (1223–1480), and Ottoman control of the Balkans (1389–1878)
- Pygmy human remains found on rock islands, Science | The Guardian
- Bartholomew Dean and Jerome Levi (eds.) At the Risk of Being Heard: Indigenous Rights, Identity and Postcolonial States University of Michigan Press (2003)
- Mughal, Muhammad Aurang Zeb. (2012). Brazil. In Native Peoples of the World: An Encyclopedia of Groups, Cultures, and Contemporary Issues. Steven L. Danver (Ed). New York: M. E. Sharpe.
- Mughal, Muhammad Aurang Zeb. (2012). Spain. In Native Peoples of the World: An Encyclopedia of Groups, Cultures, and Contemporary Issues. Steven L. Danver (Ed). New York: M. E. Sharpe.
- Mughal, Muhammad Aurang Zeb. (2012). Tunisia. In Native Peoples of the World: An Encyclopedia of Groups, Cultures, and Contemporary Issues. Steven L. Danver (Ed). New York: M. E. Sharpe.
- Trask, Haunani-Kay. From a Native Daughter: Colonialism and Sovereignty in Hawaii (Honolulu: University of Hawaii Press, 1999), 2nd ed., 33.
- (Last update: 5 October 2012). Retrieved on 2012-12-11.
- No 'indigenous', reiterates Shafique. bdnews24.com (18 June 2011). Retrieved on 2011-10-11.
- Ministry of Chittagong Hill Tracts Affairs. mochta.gov.bd. Retrieved on 2012-03-28.
- INDIGENOUS PEOPLE: Chakma Raja decries non-recognition. bdnews24.com (28 May 2011). Retrieved on 2011-10-11.
- 'Define terms minorities, indigenous'. bdnews24.com (27 May 2011). Retrieved on 2011-10-11.
- Disregarding the Jumma. Himalmag.com. Retrieved on 2011-10-11.
- "afrol News – Botswana govt gets tougher on San tribesmen". Afrol.com. Retrieved 30 June 2010.
- Simpson, John (2 May 2005). "Africa | Bushmen fight for homeland". BBC News. Retrieved 30 June 2010.
- Monbiot, George (21 March 2006). "Who really belongs to another age – bushmen or the House of Lords?". The Guardian (London). Retrieved 5 May 2010.
- "Botswana bushmen ruling accepted". BBC News. 18 December 2006. Retrieved 5 May 2010.
- Brigitte Weidlich Botswana Bushmen win eviction case. namibian.com.na. 14 December 2006
- "RESOLUTIONS AND DECISIONS. WHA47.27 International Decade of the World's Indigenous People. The Forty-seventh World Health Assembly," (PDF). World Health Organization. Retrieved 17 April 2011.
- Hanley, Anthony J. Diabetes in Indigenous Populations, Medscape Today
- Ohenjo, Nyang'ori; Willis, Ruth; Jackson, Dorothy; Nettleton, Clive; Good, Kenneth; Mugarura, Benon (2006). "Health of Indigenous people in Africa". The Lancet 367 (9526): 1937. doi:10.1016/S0140-6736(06)68849-1.
- Health and Ethnic Minorities in Viet Nam, Technical Series No. 1, June 2003, WHO, p. 10
- Facts on Suicide Rates, First Nations and Inuit Health, Health Canada
- "Health of indigenous peoples". Health Topics A to Z. Retrieved 17 April 2011.
- International Day of the World's Indigenous People – 9 August. www.un.org. Retrieved on 2012-03-28.
- See Oliphant v. Suquamish Indian Tribe, 435 U.S. 191 (1978); also see Robert Williams, Like a Loaded Weapon
- Survival International website – About Us/FAQ. Survivalinternational.org. Retrieved on 2012-03-28.
- friends of Peoples close to Nature website – Our Ethos and statement of principles[dead link]
- African Commission on Human and Peoples’ Rights (2003). "Report of the African Commission's Working Group of Experts on Indigenous Populations/Communities". ACHPR & IWGIA.
- Baviskar, Amita (2007). "Indian Indigeneities: Adivasi Engagements with Hindu Nationalism in India". In Marisol de la Cadena & Orin Starn. Indigenous Experience Today. Oxford, UK: Berg Publishers. ISBN 978-1-84520-519-5.
- Bodley, John H. (2008). Victims of Progress (5th. ed.). Plymouth, England: AltaMira Press. ISBN 0-7591-1148-0.
- de la Cadena, Marisol; Orin Starn (eds.) (2007). Indigenous Experience Today. Oxford: Berg Publishers, Wenner-Gren Foundation for Anthropological Research. ISBN 978-1-84520-519-5.
- Clifford, James (2007). "Varieties of Indigenous Experience: Diasporas, Homelands, Sovereignties". In Marisol de la Cadena & Orin Starn. Indigenous Experience today. Oxford, UK: Berg Publishers. ISBN 978-1-84520-519-5.
- Coates, Ken S. (2004). A Global History of Indigenous Peoples: Struggle and Survival. New York: Palgrave MacMillan. ISBN 0-333-92150-X.
- Henriksen, John B. (2001). "Implementation of the Right of Self-Determination of Indigenous Peoples" (PDF). Indigenous Affairs. 3/2001 (PDF ed.) (Copenhagen: International Work Group for Indigenous Affairs). pp. 6–21. ISSN 1024-3283. OCLC 30685615. Retrieved 1 September 2007.
- Hughes, Lotte (2003). The no-nonsense guide to indigenous peoples. Verso. ISBN 1-85984-438-3.
- Howard, Bradley Reed (2003). Indigenous Peoples and the State: The struggle for Native Rights. DeKalb, Illinois: Northern Illinois University Press. ISBN 0-87580-290-7.
- Johansen. Bruce E. (2003). Indigenous Peoples and Environmental Issues: An Encyclopedia. Westport, Connecticut: Greenwood Press. ISBN 978-0-313-32398-0.
- Martinez Cobo, J. (1986/7). "United Nations Working Group on Indigenous Populations". Study of the Problem of Discrimination Against Indigenous Populations. UN Commission on Human Rights.
- Maybury-Lewis, David (1997). Indigenous Peoples, Ethnic Groups and the State. Needham Heights, Massachusetts: Allyn & Bacon. ISBN 0-205-19816-3.
- Merlan, Francesca (2007). "Indigeneity as Relational Identity: The Construction of Australian Land Rights". In Marisol de la Cadena & Orin Starn. Indigenous Experience today. Oxford, UK: Berg Publishers. ISBN 978-1-84520-519-5.
- Pratt, Mary Louise (2007). "Afterword: Indigeneity Today". In Marisol de la Cadena & Orin Starn. Indigenous Experience today. Oxford, UK: Berg Publishers. ISBN 978-1-84520-519-5.
- Tsing, Anna (2007). "Indigenous Voice". In Marisol de la Cadena & Orin Starn. Indigenous Experience today. Oxford, UK: Berg Publishers. ISBN 978-1-84520-519-5.
- IFAD and indigenous peoples (International Fund for Agricultural Development, IFAD)
- IPS Inter Press Service News on indigenous peoples from around the world
- Indigenous Peoples Issues & Resources
Interest

Most of us are familiar with the concept of interest in our daily lives. If we invest $100 at 10% interest for a year, at the end of the year we will have $110. If we wait another year we will have $121. If you thought we should have $120, look again: 10% of $110 is $11, so adding that to what we have gives $121. This phenomenon is called compounding interest, where we get more and more money from an investment as time goes on, even at the same interest rate, because interest accrues on all the money we have now, not just on what we started with.
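As a quick illustration, here is a minimal Python sketch of the compound-interest arithmetic above; the figures ($100 principal, 10% annual rate) are the ones used in the example.

```python
# Compound interest: the rate is applied to the whole current balance each
# year, not just the original principal, so the balance grows faster over time.

def future_value(principal: float, rate: float, years: int) -> float:
    """Future value with annual compounding: FV = PV * (1 + r) ** n."""
    return principal * (1 + rate) ** years

for year in range(3):
    print(year, round(future_value(100, 0.10, year), 2))
# prints: 0 100.0 / 1 110.0 / 2 121.0
```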
Supply and Demand
Consider a good that is being sold in many places across a nation. From the perspective of a consumer, generally a higher price will mean fewer units of the good are sold. Generally a lower price will mean that more units of the good are sold. From the perspective of a producer however, the higher the price is, the more they will want to produce of the good. Again, the opposite is generally true as well; the lower the price, the less they will want to produce the good.
This can be represented by a graph called a supply and demand curve; a typical example, such as those available on Wikimedia Commons, is described below.
Price is indicated on the vertical axis, denoted as “P”. Quantity is on the horizontal axis, denoted as “Q”. The S curve is an example of a supply curve, and the D1 and D2 curves are examples of two slightly different demand curves.
Looking at the supply curve, we can see that an increase in price creates additional supply (the quantity supplied goes up), while along the demand curve an increase in price leads to a decrease in the quantity demanded. The point where the supply and demand curves cross is known as the equilibrium.
This is the point where producers and consumers exchange goods at a price and quantity that represent a balance between the consumer's wish to pay less money and the producer's wish to make more money. Equilibrium is the point where everybody willing to pay the market price has their demand satisfied, and everybody willing to produce at the market price has a buyer for their goods. At any other point along either curve, market forces would drive supply and demand back toward equilibrium. For instance, at a lower price demand would outstrip supply; unsatisfied consumers could offer to pay more to receive goods, and the increased price is an incentive for a profit-seeking producer to produce more, causing supply to increase. In general, market forces will cause the market to move towards the equilibrium point.
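To make the idea of equilibrium concrete, here is a small sketch that solves for the crossing point of a linear supply curve and a linear demand curve. The curve parameters are illustrative assumptions, not values taken from the figure described above.

```python
# Linear supply:  Qs = supply_intercept + supply_slope * P   (upward sloping)
# Linear demand:  Qd = demand_intercept - demand_slope * P   (downward sloping)
# Equilibrium is the price at which Qs == Qd.

def equilibrium(supply_intercept, supply_slope, demand_intercept, demand_slope):
    price = (demand_intercept - supply_intercept) / (supply_slope + demand_slope)
    quantity = supply_intercept + supply_slope * price
    return price, quantity

# Example: Qs = 10 + 2P and Qd = 100 - 4P cross at P* = 15, Q* = 40.
p_star, q_star = equilibrium(10, 2, 100, 4)
print(f"Equilibrium price {p_star:.2f}, quantity {q_star:.2f}")
```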
Specialization

A labourer who hones their skills at one specific task will be more effective at that task than someone who is unskilled. The market encourages people to develop specialized skill sets and trade their labour, or the products of their labour, with others for the things they need or want; alternatively, people could directly satisfy their own needs. The key to specialization is that a specialist can generally perform their work better and more quickly than a non-specialist.
Example: a group of five specialists could trade their respective goods and services with each other. These five specialists would be wealthier (capable of enjoying more free time and better-quality goods and services) than five equally capable people who each worked to provide for all of their own needs without specializing and trading with each other.
This is one of the key benefits of markets and trade: an interdependent network of specialists is capable of dramatically more effective action than would otherwise be possible.
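The following toy calculation illustrates the point; the productivity numbers are my own assumptions chosen only to show the mechanism, not figures from the text.

```python
# Five goods, five people. Each person needs one unit of every good per week.
GOODS = 5
HOURS_PER_UNIT_SPECIALIST = 1.0   # a specialist makes a unit of their own good in 1 hour
HOURS_PER_UNIT_GENERALIST = 3.0   # a non-specialist needs 3 hours per unit of any good

# Self-sufficiency: each person makes all five goods for themselves.
hours_self_sufficient = GOODS * HOURS_PER_UNIT_GENERALIST   # 15 hours per person

# Specialization and trade: each person makes five units of their own good
# (one to keep, four to trade away for the other four goods).
hours_specialized = GOODS * HOURS_PER_UNIT_SPECIALIST       # 5 hours per person

print(hours_self_sufficient, hours_specialized)  # 15.0 vs 5.0 hours per week
```

Under these assumptions the specialists meet the same needs in a third of the time, which is the free time and extra output the example above describes.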
Market

A market is a place where goods and services are exchanged. One might imagine a bustling street full of vendors and customers, or a stock exchange full of people buying and selling stocks. These are physical manifestations of what we call a market, but the definition is not limited to these examples.
Goods and services are exchanged at many levels: we can imagine markets at a local, national and global scale. We can still use those first mental images, but we should be aware that markets can span continents and cross borders, and they manifest themselves in many ways. The most general rules that define the way a market acts are those of supply and demand (see above). Markets are also places where discussions can happen between people and organizations regarding appropriate quantities and prices for their exchange of goods.
Capital

A useful definition of capital is anything that can enhance the ability to do economically useful work. "Economically useful" essentially means anything that has value to human beings. There are many ways to produce value, so it makes sense that there are several types of capital to which we can refer.
Human capital

Human beings who can perform useful work. This includes physical as well as mental work and specialized skills. Investment in improving human capital is generally made through education and training. Accumulating human capital could also mean hiring people who are useful for doing work.
Financial capital

Financial capital is essentially just money. It is not only money that is "in hand", however: it refers to the ability to use money to acquire other forms of capital. In this way the ability to take on debt by borrowing from someone else is a form of financial capital. High-value commodities such as gold are often considered another form of financial capital.
Physical capital

Factories, roads, buildings, and tools are all good examples of physical capital. These are non-human, non-monetary objects that are useful for conducting valuable work.
Social capital

A company that is well regarded by the populace has more social capital than a company that is poorly regarded, all else being equal. Social capital refers to the power of social networks to accomplish work. This could be due to enhanced communication abilities, or it could be simply customer loyalty. There are many forms that social capital can take.
Natural capital

Land, forests, rivers, rainfall, wind, sunlight, animals, and everything else that comprises the natural world is regarded as natural capital. One could think of this as the category that includes everything other than the above forms of capital.
Externalities

In economics, an externality is something that affects people who are not part of a specific economic exchange or interaction but is not accounted for in the price of the transaction in question. Externalities can be positive or negative: positive externalities are good for people not involved in the trade in question, while negative externalities are bad for them. An example of a positive externality is an attractive office building that makes a city seem more prosperous and civilized, so that people enjoy living there more. An example of a negative externality is the pollutants emitted by coal power plants, which can make people and animals ill, damage forests and crops, and damage buildings through acid rain.
Corporations are referred to as "externalizing machines" by critics of the corporate model, a reference to the negative externalities that corporations do not want to pay for. The company that owns a coal power plant will do its best not to pay the full cost of its effects on the environment, health, and infrastructure. Corporations are mandated to produce profits for their shareholders, and in order to serve this mandate effectively they will make sure that negative externalities stay just that: external. If a company internalizes an externality, it shoulders some of the additional cost being incurred by others. Some companies choose to do this for ethical reasons, or to gain additional social capital. Generally it is the role of government to enforce policies that make companies take responsibility for the full cost of their actions. An example of this sort of policy is a government requiring coal power plants to install scrubbers that remove some of the dangerous pollutants from the gases the plants emit.
Purchasing Power Parity
Purchasing Power Parity (PPP) is a term indicating equal buying power. The same standard of living costs different amounts of money in different places in the world. Economists define what is called a "basket of goods" that is available everywhere that markets exist. We can imagine a hypothetical basket that would cost $120 USD in the United States and 100€ in France (these numbers are intended to be illustrative, not real). We can see then that $6 USD has approximately the buying power of 5€ in this example: it would cost more US dollars than euros to obtain the same value of goods. We can therefore adjust our perceptions of income levels accordingly for these countries, since in our example one US dollar does not buy as much in the United States as one euro buys in France.
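A minimal sketch of the basket-of-goods arithmetic above, using the same illustrative $120 and 100€ figures, might look like this; the 30,000€ income in the usage line is likewise just an assumed example.

```python
# The same basket costs $120 in the US and 100 euros in France, so the
# PPP conversion factor implied by the basket is 1.2 USD per euro.

basket_cost_usd = 120.0
basket_cost_eur = 100.0
ppp_usd_per_eur = basket_cost_usd / basket_cost_eur   # 1.2

def eur_income_in_ppp_usd(income_eur: float) -> float:
    """Express a euro income in US dollars of equivalent buying power."""
    return income_eur * ppp_usd_per_eur

print(ppp_usd_per_eur)                          # 1.2
print(round(eur_income_in_ppp_usd(30000), 2))   # 36000.0: a 30,000 euro income buys
                                                # roughly what $36,000 buys in the US
```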
It can be difficult to compare the products in wealthy nations with those in the poorest, as product quality can vary greatly and this is difficult to quantify numerically. This can cause the PPP of poor countries to be overstated. PPP is closely related to currency exchange rates, but it is not the same thing. For more information on the subject, see the Wikipedia article on Purchasing Power Parity. The differences in prices that cause the basket of goods to be more or less expensive can be caused by a myriad of factors that we will not go into here.
Gross Domestic Product
Gross domestic product (GDP) is an economic measure intended to represent the sum of all economic activity in a country, with economic activity measured according to market value. GDP is therefore the sum of all market value delivered in a country, usually presented on a yearly scale. GDP is often given in terms of PPP, since this is a more accurate reflection of buying power; "nominal" GDP is GDP calculated without taking PPP into account. The International Monetary Fund gives the 2009 GDP of the People's Republic of China as 4,908,982 million USD (nominal) and 8,765,240 million USD (PPP). This striking difference is apparently largely due to the fact that China "pegs" its currency at a specific value against the USD; part of the motivation for doing so is to ensure that its largest export market, the United States, can buy goods from it at a consistent price. The effect on nominal versus PPP GDP for China is drastic, with the nominal value being only about 56% of the PPP value.
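As a quick check of the 56% figure quoted above, here is a one-off calculation using the two IMF numbers from the text.

```python
# China's 2009 GDP as quoted above (IMF, millions of USD).
gdp_nominal_musd = 4_908_982
gdp_ppp_musd = 8_765_240

ratio = gdp_nominal_musd / gdp_ppp_musd
print(f"Nominal GDP is {ratio:.0%} of the PPP figure")  # about 56%
```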
Workhouse

In England and Wales a workhouse, colloquially known as a spike, was a place where those unable to support themselves were offered accommodation and employment. The earliest known use of the term dates from 1631, in an account by the mayor of Abingdon reporting that "wee haue erected wthn our borough a workehouse to sett poore people to worke".
The origins of the workhouse can be traced to the Poor Law Act of 1388, which attempted to address the labour shortages following the Black Death in England by restricting the movement of labourers, and ultimately led to the state becoming responsible for the support of the poor. But mass unemployment following the end of the Napoleonic Wars in 1815, the introduction of new technology to replace agricultural workers in particular, and a series of bad harvests, meant that by the early 1830s the established system of poor relief was proving to be unsustainable. The New Poor Law of 1834 attempted to reverse the economic trend by discouraging the provision of relief to anyone who refused to enter a workhouse. Some Poor Law authorities hoped to run workhouses at a profit by utilising the free labour of their inmates, who generally lacked the skills or motivation to compete in the open market. Most were employed on tasks such as breaking stones, bone crushing to produce fertilizer, or picking oakum using a large metal nail known as a spike, perhaps the origin of the workhouse's nickname.
Life in a workhouse was intended to be harsh, to deter the able-bodied poor and to ensure that only the truly destitute would apply. But in areas such as the provision of free medical care and education for children, neither of which was available to the poor in England living outside workhouses until the early 20th century, workhouse inmates were advantaged over the general population, a dilemma that the Poor Law authorities never managed to reconcile.
As the 19th century wore on workhouses increasingly became refuges for the elderly, infirm and sick rather than the able-bodied poor, and in 1929 legislation was passed to allow local authorities to take over workhouse infirmaries as municipal hospitals. Although workhouses were formally abolished by the same legislation in 1930, many continued under their new appellation of Public Assistance Institutions under the control of local authorities. It was not until the National Assistance Act of 1948 that the last vestiges of the Poor Law disappeared, and with them the workhouses.
Medieval to Early Modern period
The Poor Law Act of 1388 was an attempt to address the labour shortage caused by the Black Death, a devastating pandemic that killed about one-third of England's population. The new law fixed wages and restricted the movement of labourers, as it was anticipated that if they were allowed to leave their parishes for higher-paid work elsewhere then wages would inevitably rise. According to historian Derek Fraser, the fear of social disorder following the plague ultimately resulted in the state, and not a "personal Christian charity", becoming responsible for the support of the poor. The resulting laws against vagrancy were the origins of state-funded relief for the poor. From the 16th century onwards a distinction was legally enshrined between those who were able to work but could not, and those who were able to work but would not; between "the genuinely unemployed and the idler". Supporting the destitute was a problem exacerbated by King Henry VIII's Dissolution of the Monasteries, which began in 1536. They had been "a significant source of alms" and provided a good deal of direct and indirect employment. The Poor Relief Act of 1576 went on to establish the principle that if the able-bodied poor needed support then they had to work for it.
The Act for the Relief of the Poor of 1601 made parishes legally responsible for the care of those within their boundaries who, through age or infirmity, were unable to work. The Act essentially classified the poor into one of three groups. It proposed that the able-bodied be offered work in a house of correction (the precursor of the workhouse), where the "persistent idler" was to be punished. It also proposed the construction of housing for the impotent poor, the old and the infirm, although most assistance was granted through a form of poor relief known as outdoor relief – money, food, or other necessities given to those living in their own homes, funded by a local tax on the property of the wealthiest in the parish.
Georgian era
The workhouse system evolved in the 17th century as a way for parishes to reduce the cost to ratepayers of providing poor relief. The first authoritative figure for numbers of workhouses comes in the next century from The Abstract of Returns made by the Overseers of the Poor, which was drawn up following a government survey in 1776. It put the number of parish workhouses in England and Wales at more than 1800 (approximately one parish in seven), with a total capacity of more than 90,000 places. This growth in the number of workhouses was prompted by the Workhouse Test Act of 1723; by obliging anyone seeking poor relief to enter a workhouse and undertake a set amount of work, usually for no pay (a system called indoor relief), the Act helped prevent irresponsible claims on a parish's poor rate. The growth in the number of workhouses was also bolstered by the Relief of the Poor Act 1782, proposed by Thomas Gilbert. Gilbert's Act was intended to allow parishes to share the cost of poor relief by forming unions – known as Gilbert Unions – to build and maintain even larger workhouses to accommodate the elderly and infirm. The able-bodied poor were instead either given outdoor relief or found employment locally. Relatively few Gilbert Unions were set up, but supplementing inadequate wages under the Speenhamland system did become established towards the end of the 18th century. So keen were some Poor Law authorities to cut costs wherever possible that cases were reported of husbands being forced to sell their wives, to avoid them becoming a financial burden on the parish. In one such case in 1814 the wife and child of Henry Cook, who were living in Effingham workhouse, were sold at Croydon market for one shilling; the parish paid for the cost of the journey and a "wedding dinner".
The workhouse is an inconvenient building, with small windows, low rooms and dark staircases. It is surrounded by a high wall, that gives it the appearance of a prison, and prevents free circulation of air. There are 8 or 10 beds in each room, chiefly of flocks, and consequently retentive of all scents and very productive of vermin. The passages are in great want of whitewashing. No regular account is kept of births and deaths, but when smallpox, measles or malignant fevers make their appearance in the house, the mortality is very great. Of 131 inmates in the house, 60 are children.
In lieu of a workhouse some sparsely populated parishes placed homeless paupers into rented accommodation, and provided others with relief in their own homes. Those entering a workhouse might have joined anything from a handful to several hundred other inmates; for instance, between 1782 and 1794 Liverpool's workhouse accommodated 900–1200 indigent men, women and children. The larger workhouses such as the Gressinghall House of Industry generally served a number of communities, in Gressinghall's case 50 parishes. Writing in 1854, Poor Law commissioner George Nicholls viewed many of them as little more than factories:
These workhouses were established, and mainly conducted, with a view to deriving profit from the labour of the inmates, and not as being the safest means of affording relief by at the same time testing the reality of their destitution. The workhouse was in truth at that time a kind of manufactory, carried on at the risk and cost of the poor-rate, employing the worst description of the people, and helping to pauperise the best.
1834 Act
By 1832 the amount spent on poor relief nationally had risen to £7 million a year, more than 10 shillings per head of population, up from £2 million in 1784.[a] The large number of those seeking assistance was pushing the system to "the verge of collapse".[b] The economic downturn following the end of the Napoleonic Wars in the early 19th century resulted in increasing numbers of unemployed. Coupled with developments in agriculture that meant less labour was needed on the land, along with three successive bad harvests beginning in 1828 and the Swing Riots of 1830, "reform of the Old Poor Law was inevitable". Many suspected that the system of poor relief was being widely abused, and in 1832 the government established a Royal Commission to investigate and recommend how relief could best be given to the poor. The result was the establishment of a centralised Poor Law Commission in England and Wales under the Poor Law Amendment Act 1834, also known as the New Poor Law, which discouraged the allocation of outdoor relief to the able-bodied; "all cases were to be 'offered the house', and nothing else". Individual parishes were formed into Poor Law Unions, each of which was to have a union workhouse. More than 500 were built during the next 50 years, two-thirds of them by 1840. In certain parts of the country there was a good deal of resistance to these new buildings, some of it violent, particularly in the industrial north. Many workers lost their jobs during the major economic depression of 1837, and there was a strong feeling that what the unemployed needed was not the workhouse but short-term relief to tide them over. By 1838, 573 Poor Law Unions had been formed in England and Wales, incorporating 13,427 parishes, but it was not until 1868 that unions were established across the entire country, the same year that the New Poor Law was applied to the Gilbert Unions.
Despite the intentions behind the 1834 Act, relief of the poor remained the responsibility of local taxpayers, and there was thus a powerful economic incentive to use loopholes such as sickness in the family to continue with outdoor relief; the weekly cost per person was about half that of providing workhouse accommodation.[c] Outdoor relief was further restricted by the terms of the 1844 Outdoor Relief Prohibitory Order, which aimed to end it altogether for the able-bodied poor. In 1846, of 1.33 million paupers only 199,000 were maintained in workhouses, of whom 82,000 were considered to be able-bodied, leaving an estimated 375,000 of the able-bodied on outdoor relief. Excluding periods of extreme economic distress, it has been estimated that about 6.5 per cent of the British population may have been accommodated in workhouses at any given time.[d]
Early Victorian workhouses
The New Poor Law Commissioners were very critical of existing workhouses, and generally insisted that they be replaced. They complained in particular that "in by far the greater number of cases, it is a large almshouse, in which the young are trained in idleness, ignorance, and vice; the able-bodied maintained in sluggish sensual indolence; the aged and more respectable exposed to all the misery that is incident to dwelling in such a society".
After 1835 many workhouses were constructed with the central buildings surrounded by work and exercise yards enclosed behind brick walls, so-called "pauper bastilles". The commission proposed that all new workhouses should allow for the segregation of paupers into at least four distinct groups, each to be housed separately: the aged and impotent, children, able-bodied males, and able-bodied females. A common layout resembled Jeremy Bentham's prison panopticon, a radial design with four three-storey buildings at its centre set within a rectangular courtyard, the perimeter of which was defined by a three-storey entrance block and single-storey outbuildings, all enclosed by a wall. That basic layout, one of two designed by the architect Sampson Kempthorne (his other design was hexagonal with a segmented interior, sometimes known as the Kempthorne star), allowed for four separate work and exercise yards, one for each class of inmate. Separating the inmates was intended to serve three purposes: to direct treatment to those who most needed it; to deter others from pauperism; and as a physical barrier against illness, physical and mental. The commissioners argued that buildings based on Kempthorne's plans would be symbolic of the recent changes to the provision of poor relief; one assistant commissioner expressed the view that they would be something "the pauper would feel it was utterly impossible to contend against", and "give confidence to the Poor Law Guardians". Another assistant commissioner claimed the new design was intended as a "terror to the able-bodied population", but the architect George Gilbert Scott was critical of what he called "a set of ready-made designs of the meanest possible character". Some critics of the new Poor Law noted the similarities between Kempthorne's plans and model prisons, and doubted that they were merely coincidental. Augustus Pugin compared Kempthorne's hexagonal plan with the "antient poor hoyse", in what Professor Felix Driver calls a "romantic, conservative critique" of the "degeneration of English moral and aesthetic values".
By the 1840s some of the enthusiasm for Kempthorne's designs had waned. With limited space in built-up areas, and concerns over the ventilation of buildings, some unions moved away from panopticon designs. Between 1840 and 1870 about 150 workhouses with separate blocks designed for specific functions were built. Typically, the entrance building contained offices, while the main workhouse building housed the various wards and workrooms, all linked by long corridors designed to improve ventilation and lighting. Where possible, each building was separated by an exercise yard, for the use of a specific category of pauper.
Admission and discharge
Each Poor Law Union employed one or more relieving officers, whose job it was to visit those applying for assistance and assess what relief, if any, they should be given. Any applicants considered to be in need of immediate assistance could be issued with a note admitting them directly to the workhouse. Alternatively they might be offered any necessary money or goods to tide them over until the next meeting of the guardians, who would decide on the appropriate level of support and whether or not the applicants should be assigned to the workhouse.
Workhouses were designed with only a single entrance guarded by a porter, through which inmates and visitors alike had to pass. Near to the entrance were the casual wards for tramps and vagrants and the relieving rooms, where paupers were housed until they had been examined by a medical officer. After being assessed the paupers were separated and allocated to the appropriate ward for their category: boys under 14, able-bodied men between 14 and 60, men over 60, girls under 14, able-bodied women between 14 and 60, and women over 60.[e] Children under the age of two were allowed to remain with their mothers, but by entering a workhouse paupers were considered to have forfeited responsibility for their families. Clothing and personal possessions were taken from them and stored, to be returned on their discharge. After bathing, they were issued with a distinctive uniform: for men it was a striped cotton shirt, jacket and trousers, and a cloth cap. Women were given a blue-and-white striped dress worn underneath a smock. Shoes were also provided. Some workhouses had a separate "foul" or "itch" ward, where inmates diagnosed with skin diseases such as scabies could be detained before entering the workhouse proper.
Conditions in the casual wards were worse than in the relieving rooms and deliberately designed to discourage vagrants, who were considered potential trouble-makers and probably disease-ridden. Vagrants presented themselves at the door, and it was the porter's decision as to whether or not to allocate them a bed for the night in the casual ward. A typical early 19th-century casual ward was a single large room furnished with some kind of bedding and perhaps a bucket in the middle of the floor for sanitation. The bedding on offer could be very basic: the Poor Law authorities in Richmond in the mid-1840s provided only straw and rags, although beds were available for the sick. In return for their night's accommodation vagrants might be expected to undertake a certain amount of work before leaving the next day, such as at Guisborough, where men were required to break stones for three hours and women to pick oakum, two hours before breakfast and one after.
Inmates were free to leave as they wished after giving reasonable notice, generally considered to be three hours, but if a parent discharged him or herself then the children were also discharged, to prevent them from being abandoned. The comic actor Charlie Chaplin, who spent some time with his mother in Lambeth workhouse, records in his autobiography that when he and his half-brother returned to the workhouse after having been sent to a school in Hanwell, he was met at the gate by his mother Hannah, dressed in her own clothes. Desperate to see them again she had discharged herself and the children; they spent the day together playing in Kennington Park and visiting a coffee shop, after which she readmitted them all to the workhouse.
Daily workhouse schedule: Sunday was a day of rest. During the winter months inmates were allowed to rise an hour later and did not start work until 8:00 am.
Some Poor Law authorities hoped that payment for the work undertaken by the inmates would produce a profit for their workhouses, or at least allow them to be self-supporting, but whatever small income could be produced never matched the running costs. Eighteenth-century inmates were poorly managed, and lacked either the inclination or skills to compete effectively with free market industries such as spinning and weaving. Some workhouses operated not as places of employment, but as houses of correction, a role similar to that trialled by a Buckinghamshire magistrate, Matthew Marryott. Between 1714 and 1722 he experimented with using the workhouse as a test of poverty rather than a source of profit, leading to the establishment of a large number of workhouses for that purpose. Nevertheless, local people became concerned about the competition to their businesses from cheap workhouse labour. As late as 1888, for instance, the Firewood Cutters Protection Association was complaining that the livelihood of its members was being threatened by the cheap firewood on offer from the workhouses in the East End of London.
Many inmates were allocated tasks in the workhouse such as caring for the sick or teaching that were beyond their capabilities, but most were employed on "generally pointless" work, such as breaking stones or removing the hemp from telegraph wires. Others picked oakum using a large metal nail known as a spike, which may be the source of the workhouse's nickname. Bone-crushing, useful in the creation of fertilizer, was a task most inmates could perform, until a government inquiry into conditions in the Andover workhouse in 1845 found that starving paupers were reduced to fighting over the rotting bones they were supposed to be grinding, to suck out the marrow. The resulting scandal led to the withdrawal of bone-crushing as an employment for those living in workhouses and the replacement of the Poor Law Commission by the Poor Law Board in 1847. Conditions thereafter were regulated according to a list of rules contained in the 1847 Consolidated General Order, which included guidance on issues such as diet, staff duties, dress, education, discipline and redress of grievances.
Some Poor Law Unions opted to send destitute children to the British colonies, in particular Canada and Australia, where it was hoped the fruits of their labour would contribute to the defence of the empire and enable the colonies to buy more British exports. These emigrants became known as Home Children; the Philanthropic Farm School alone sent more than 1000 boys to the colonies between 1850 and 1871, many of them taken from workhouses. In 1869 Maria Rye and Annie Macpherson, "two spinster ladies of strong resolve", began taking groups of orphans and children from workhouses to Canada, most of whom were taken in by farming families in Ontario. The Canadian government paid a small fee to the ladies for each child delivered, but most of the cost was met by charities or the Poor Law Unions.
As far as possible elderly inmates were expected to undertake the same kind of work as the younger men and women, although concessions were made to their relative frailty. They might alternatively be required to chop firewood, clean the wards, or carry out other domestic tasks.
In 1836 the Poor Law Commission distributed six diets for workhouse inmates, one of which was to be chosen by each Poor Law Union depending on its local circumstances. Although dreary, the food was generally nutritionally adequate, and according to contemporary records was prepared with great care. Issues such as training staff to serve and weigh portions were well understood. The diets included general guidance, as well as schedules for each class of inmate. They were laid out on a weekly rotation, the various meals selected on a daily basis, from a list of foodstuffs. For instance, a breakfast of bread and gruel was followed by dinner, which might consist of cooked meats, pickled pork or bacon with vegetables, potatoes, yeast dumpling, soup and suet, or rice pudding. Supper was normally bread, cheese and broth, and sometimes butter or potatoes.
The larger workhouses had separate dining rooms for males and females; those that did not staggered meal times to avoid any contact between the sexes. Rations provided for the indoor staff were much the same as those for the paupers, although more generous. The master and matron, for instance, received six times the amount of food given to a pauper.
Education and discipline
Education was provided for the children, but workhouse teachers were a particular problem. Poorly paid, without any formal training, and facing large classes of unruly children with little or no interest in their lessons, few stayed in the job for more than a few months. In an effort to force workhouses to offer at least a basic level of education legislation was passed in 1845 requiring that all pauper apprentices should be able to read and sign their own indenture papers. A training college for workhouse teachers was set up at Kneller Hall in Twickenham during the 1840s, but it closed in the 1850s.
Some children were trained in skills valuable to the area. In Shrewsbury, the boys were placed in the workhouse's workshop, while girls were tasked with spinning, making gloves and other jobs "suited to their sex, their ages and abilities". At St Martin in the Fields, children were trained in spinning flax, picking hair and carding wool, before being placed as apprentices. Workhouses also had links with local industry; in Nottingham, children employed in a cotton mill earned about £60 a year for the workhouse. Some parishes advertised for apprenticeships, and were willing to pay any employer prepared to offer them. Such agreements were preferable to supporting children in the workhouse: apprenticed children were not subject to inspection by justices, thereby lowering the chance of punishment for neglect; and apprenticeships were viewed as a better long-term method of teaching skills to children who might otherwise be uninterested in work. Supporting an apprenticed child was also considerably cheaper than the workhouse or outdoor relief. Children often had no say in the matter, which could be arranged without the permission or knowledge of their parents. The supply of labour from workhouse to factory, which remained popular until the 1830s, was sometimes viewed as a form of transportation. While getting parish apprentices from Clerkenwell, Samuel Oldknow's agent reported how some parents came "crying to beg they may have their Children out again". Historian Arthur Redford suggests that the poor may have once shunned factories as "an insidious sort of workhouse".
Discipline was strictly enforced in the workhouse; for minor offences such as swearing or feigning sickness the "disorderly" could have their diet restricted for up to 48 hours. For more serious offences such as insubordination or violent behaviour the "refractory" could be confined for up to 24 hours, and might also have their diet restricted. Girls were punished in the same way as adults, but boys under the age of 14 could be beaten with "a rod or other instrument, such as may have been approved of by the Guardians". The persistently refractory, or anyone bringing "spirituous or fermented liquor" into the workhouse, could be taken before a Justice of the Peace and even jailed. All punishments handed out were recorded in a punishment book, which was examined regularly by the workhouse guardians.
Religion played an important part in workhouse life; prayers were read to the paupers before breakfast and after supper each day. Each Poor Law Union was required to appoint a chaplain to look after the spiritual needs of the workhouse inmates, who was invariably expected to be from the established Church of England. Religious services were generally held in the dining hall, as few early workhouses had a separate chapel. But in some parts of the country, notably Cornwall and the north of England, there were more dissenters than members of the established church; as section 19 of the 1834 Poor Law specifically forbade any regulation forcing an inmate to attend church services "in a Mode contrary to [their] Religious Principles", the commissioners were reluctantly forced to allow non-Anglicans to leave the workhouse on Sundays to attend services elsewhere, so long as they were able to provide a certificate of attendance signed by the officiating minister on their return.
As the 19th century wore on non-conformist ministers increasingly began to conduct services within the workhouse, but Catholic priests were rarely welcomed. A variety of legislation had been introduced during the 17th century to limit the civil rights of Catholics, beginning with the Popish Recusants Act 1605 in the wake of the failed Gunpowder Plot that year. But although almost all restrictions on Catholics in England and Ireland were removed by the Roman Catholic Relief Act 1829, a great deal of anti-Catholic feeling remained. Even in areas with large Catholic populations, such as Liverpool, the appointment of a Catholic chaplain was unthinkable. Some guardians even refused Catholic priests entry to the workhouse.
Management and staffing
Although the commissioners were responsible for the regulatory framework within which the Poor Law Unions operated, each union was run by a locally elected board of guardians, comprising representatives from each of the participating parishes, assisted by six ex officio members. The guardians were usually farmers or tradesmen, and as one of their roles was the contracting out of the supply of goods to the workhouse the position could prove lucrative for them and their friends. Simon Fowler has commented that "it is clear that this [the awarding of contracts] involved much petty corruption, and it was indeed endemic throughout the Poor Law system".
Although the 1834 Act allowed for women to become workhouse guardians provided they met the property requirement, the first female was not elected until 1875. Working-class guardians were not appointed until 1892, when the property requirement was dropped in favour of occupying rented premises worth £5 a year.
Every workhouse had a complement of full-time staff, often referred to as the indoor staff. At their head was the governor or master, who was appointed by the board of guardians. His duties were laid out in a series of orders issued by the Poor Law Commissioners. As well as the overall administration of the workhouse, masters were required to discipline the paupers as necessary and to visit each ward twice daily, at 11 am and 9 pm. Female inmates and children under seven were the responsibility of the matron, as was the general housekeeping. The master and the matron were usually a married couple, charged with running the workhouse "at the minimum cost and maximum efficiency – for the lowest possible wages".
A large workhouse such as Whitechapel, accommodating several thousand paupers, employed a staff of almost 200; the smallest may only have had a porter and perhaps an assistant nurse in addition to the master and matron. A typical workhouse accommodating 225 inmates had a staff of five, which included a part-time chaplain and a part-time medical officer. The low pay meant that many medical officers were young and inexperienced. To add to their difficulties, in most unions they were obliged to pay out of their own pockets for any drugs, dressings or other medical supplies needed to treat their patients.
Later developments and abolition
A second major wave of workhouse construction began in the mid-1860s, the result of a damning report by the Poor Law inspectors on the conditions found in infirmaries in London and the provinces. Of one workhouse in Southwark, London, an inspector observed bluntly that "The workhouse does not meet the requirements of medical science, nor am I able to suggest any arrangements which would in the least enable it to do so". By the middle of the 19th century there was a growing realisation that the purpose of the workhouse was no longer solely or even chiefly to act as a deterrent to the able-bodied poor, and the first generation of buildings was widely considered to be inadequate. About 150 new workhouses were built mainly in London, Lancashire and Yorkshire between 1840 and 1875, in architectural styles that began to adopt Italianate or Elizabethan features, to better fit into their surroundings and present a less intimidating face. One surviving example is the gateway at Ripon, designed somewhat in the style of a medieval almshouse. A major feature of this new generation of buildings is the long corridors with separate wards leading off for men, women, and children.
By 1870 the architectural fashion had moved away from the corridor style in favour of a "pavilion" style based on the military hospitals built during and after the Crimean War, providing light and well-ventilated accommodation. Opened in 1878, the Manchester Union's infirmary comprised seven parallel three-storey pavilions separated by 80-foot (24 m) wide "airing yards"; each pavilion had space for 31 beds, a day room, a nurse's kitchen, and toilets.
As early as 1841 the Poor Law Commissioners were aware of an "insoluble dilemma" posed by the ideology behind the New Poor Law:
If the pauper is always promptly attended by a skilful and well qualified medical practitioner ... if the patient be furnished with all the cordials and stimulants which may promote his recovery: it cannot be denied that his condition in these respects is better than that of the needy and industrious ratepayer who has neither the money nor the influence to secure prompt and careful attendance.
The education of children presented a similar dilemma. It was provided free in the workhouse but had to be paid for by the "merely poor"; free elementary education for all children was not provided in the UK until 1918. Instead of being "less eligible", those living in the workhouse were in certain respects "more eligible" than those living in poverty outside.
By the late 1840s most workhouses outside London and the larger provincial towns housed only "the incapable, elderly and sick". Responsibility for administration of the Poor Law passed to the Local Government Board in 1871, and the emphasis soon shifted from the workhouse as "a receptacle for the helpless poor" to its role in the care of the sick and helpless. The Diseases Prevention Act of 1883 allowed workhouse infirmaries to offer treatment to non-paupers as well as inmates, and by the beginning of the 20th century some infirmaries were even able to operate as private hospitals. By the end of the century only about 20 per cent of those admitted to workhouses were unemployed or destitute, but about 30 per cent of the population over 70 were in workhouses. The introduction of pensions for those aged over 70 in 1908 did not result in a reduction in the number of elderly housed in workhouses, but it did reduce the number of those on outdoor relief by 25 per cent.
A Royal Commission of 1905 reported that workhouses were unsuited to deal with the different categories of resident they had traditionally housed, and recommended that specialised institutions for each class of pauper should be established, in which they could be treated appropriately by properly trained staff. The "deterrent" workhouses were in future to be reserved for "incorrigibles such as drunkards, idlers and tramps". The Local Government Act of 1929 gave local authorities the power to take over workhouse infirmaries as municipal hospitals, although outside London few did so. The workhouse system was abolished in the UK by the same Act on 1 April 1930, but many workhouses, renamed Public Assistance Institutions, continued under the control of local county councils. Even as late as the outbreak of the Second World War in 1939 there were still almost 100,000 people accommodated in the former workhouses, 5,629 of whom were children. It was not until the National Assistance Act of 1948 that the last vestiges of the Poor Law disappeared, and with them the workhouses. Many of the buildings were converted into old folks' homes run by local authorities; slightly more than 50 per cent of local authority accommodation for the elderly was provided in former workhouses in 1960. Southwell workhouse, now a museum, was used to provide temporary accommodation for mothers and children until the early 1990s.
Modern view
The Poor Law was not designed to address the issue of poverty, which was considered to be the inevitable lot for most people; rather it was concerned with pauperism, "the inability of an individual to support himself". Writing in 1806 Patrick Colquhoun commented that:
Poverty ... is a most necessary and indispensable ingredient in society, without which nations and communities could not exist in a state of civilisation. It is the lot of man – it is the source of wealth, since without poverty there would be no labour, and without labour there could be no riches, no refinement, no comfort, and no benefit to those who may be possessed of wealth.
Historian Simon Fowler has argued that workhouses were "largely designed for a pool of able-bodied idlers and shirkers ... However this group hardly existed outside the imagination of a generation of political economists". Workhouse life was intended to be harsh, to deter the able-bodied poor and to ensure that only the truly destitute would apply, a principle known as less eligibility. Writing ten years after its introduction, Friedrich Engels described the motives of the authors of the 1834 New Poor Law as "to force the poor into the Procrustean bed of their preconceived notions. To do this they treated the poor with incredible savagery."
The purpose of workhouse labour was never clear according to historian M. A. Crowther. In the early days of workhouses it was either a punishment or a source of income for the parish, but during the 19th century the idea of work as punishment became increasingly unfashionable. The idea took hold that work should rehabilitate the workhouse inmates for their eventual independence, and that it should therefore be rewarded with no more than the workers' maintenance, otherwise there would be no incentive for them to seek work elsewhere.
Notes
- Britain's gross national income in 1830 was £400 million, of which the £7 million spent on poor relief represents 2 per cent, not a great deal by modern standards according to the historian Trevor May. He further observes that "As poor relief was the only social service provided by the state this might seem to be a small price to pay for saving Britain from the revolution that must have seemed so imminent during the Swing riots."
- It has been estimated that there were 1.5 million paupers in Britain in 1832, about 12 per cent of the population of 13 million.
- In 1860 the weekly cost of maintaining a pauper in a workhouse in the east of England was 3s ½d a week, as opposed to 1s 9d a week for outdoor relief.
- Official twice-yearly headcounts, taken on 1 January and 1 July, suggest that between 2.5 and 4.5 per cent of the population was accommodated in workhouses at any given time.
- Those were the official categories, but some Poor Law Unions further subdivided those in their care, particularly women: prostitutes, "women incapable of getting their own way from syphilis", and "idiotic or weak-minded women with one or more bastard children".
- Higginbotham, Peter, "Introduction", workhouse.org.uk, retrieved 9 April 2010
- Higginbotham (2006), p. 9
- Fraser (2009), p. 39
- Fraser (2009), p. 40
- Higginbotham, Peter, "Parish Workhouses", retrieved 16 October 2011
- Nixon (2011), p. 63
- Fowler (2007), p. 47
- Fowler (2007), p. 28
- May (1987), p. 89
- Gibson (1993), p. 51
- Fowler (2007), p. 18
- Hopkins (1994), pp. 163–164
- Nicholls (1854), p. 18
- Fraser (2009), p. 50
- May (1987), p. 121
- Fowler (2007), p. 103
- Fowler 2007, pp. 14–16
- Knott (1986), p. 51
- Fowler (2007), p. 242
- Fraser (2009), pp. 63–64
- May (1987), p. 124
- Fowler (2007), p. 42
- May (1987), p. 125
- May (1987), pp. 124–125
- Fraser (2009), p. 67
- Fowler (2007), p. 49
- May (1987), pp. 122–123
- May (2011), p. 10
- Fowler (2007), pp. 49–52
- Driver (2004), p. 65
- Driver (2004), p. 59
- Driver (2004), p. 61
- Green (2010), pp. 117–118
- Fowler (2007), pp. 202–203.
- Fowler (2007), p. 57
- Higginbotham (2006), p. 19
- Fowler (2007), p. 59
- Fowler (2007), pp. 160–161.
- Fowler (2007), p. 190
- Higginbotham, Peter, "The Workhouse in Guisborough, Yorkshire, N. Riding", workhouses.org.uk, retrieved 15 October 2011
- Fowler (2007), p. 130
- Fowler (2007), pp. 130–131
- Crowther (1981), p. 27
- Poynter (1969), pp. 15–16
- Fowler (2007), p. 110.
- Fowler (2007), p. 111
- Nicholls (1854), p. 394
- Fowler (2007), pp. 8–9
- Fowler (2007), p. 147
- Fowler (2007), p. 174
- Smith, L.; Thornton, S. J.; Reinarz, J; Williams, A. N. (17 December 2008), "Please, sir, I want some more", British Medical Journal 337: 1450–1451, doi:10.1136/bmj.a2722, retrieved 2 December 2010
- Anon (1836), pp. 56–59
- Fowler (2007), p. 62
- Fowler (2007), p. 79
- Fowler (2007), pp. 134–135
- Fowler (2007), p. 135
- Fowler (2007), p. 134
- Honeyman (2007), pp. 21–23
- Redford (1976), pp. 24–25
- "Instructional Letter Accompanying the Consolidated General Order", workhouses.org.uk, retrieved 14 October 2011
- Jones (1980), p. 90
- Fowler (2007), p. 66
- Higginbotham, Peter, "Religion in Workhouses", workhouses.org.uk, retrieved 21 October 2011
- Levinson (2004), p. 666
- Crowther (1981), p. 130
- "About the Museum", riponmuseums.co.uk, retrieved 2 October 2011
- "Poor Law records 1834–1871", The National Archives, retrieved 3 December 2010
- May (2011), p. 14
- Fowler (2007), p. 33
- Fowler (2007), pp. 75–76
- Fowler (2007), p. 77
- Fowler (2007), p. 75
- Crowther (1981), p. 127
- Fowler (2007), pp. 155–156
- Fowler (2007), p. 48
- May (1987), pp. 144–145
- Fowler (2007), p. 171
- May (2011), p. 19
- Fowler (2007), p. 105
- Fowler (2007), p. 170
- Crowther (1981), p. 54
- May (1987), p. 346
- Means & Smith (1985), p. 155
- Crowther (1981), p. 110
- Longmate (2003), p. 284
- Crowther (1981), p. 112
- Fowler (2007), p. 223
- May (1987), p. 120
- Fowler (2007), p. 14
- May (1987), p. 122
- Fowler (2007), p. 10
- Crowther (1981), p. 197
- Anon (1836), Reports from Commissioners, Fifteen Volumes, (8. Part I), Poor Laws (England), Session 4 February – 20 August 1836, 29, part 1, HMSO
- Crowther, A. C. (1981), The Workhouse System 1834–1929: The History of an English Social Institution, Batsford Academic and Educational, ISBN 0-7134-3671-9
- Driver, Felix (2004), Power and Pauperism, Cambridge University Press, ISBN 0-521-60747-7
- Fowler, Simon (2007), Workhouse: The People: The Places: The Life Behind Closed Doors, The National Archives, ISBN 978-1-905615-28-5
- Fraser, Derek (2009), The Evolution of the British Welfare State (4 ed.), Palgrave Macmillan, ISBN 978-0-230-22466-7
- Gibson, Colin (1993), Dissolving Wedlock, Routledge, ISBN 978-0-415-03226-1
- Green, David R. (2010), Pauper Capital: London and the Poor Law, 1790–1870, Ashgate Publishing, ISBN 0-7546-3008-0
- Higginbotham, Peter (2006), Workhouses of the North, Tempus, ISBN 0-7524-4001-2
- Honeyman, Katrina (2007), Child Workers in England, 1780–1820, Ashgate Publishing, ISBN 978-0-7546-6272-3
- Hopkins, Eric (1994), Childhood Transformed, Manchester University Press, ISBN 0-7190-3867-7
- Jones, Catherine (1980) , Immigration and Social Policy in Britain, Taylor & Francis, ISBN 978-0-422-74680-9
- Knott, John (1986), Popular opposition to the 1834 Poor Law, Taylor & Francis, ISBN 978-0-7099-1532-4
- Levinson, David, ed. (2004), "An Act for the Amendment and better Administration of the Laws relating to the Poor in England and Wales (14th August 1834)", Encyclopedia of Homelessness 2, Sage, pp. 663–692, ISBN 978-0-7619-2751-8
- Longmate, Norman (2003), The Workhouse, Pimlico, ISBN 978-0-7126-0637-0
- May, Trevor (1987), An Economic and Social History of Britain 1760–1970, Longman Group, ISBN 0-582-35281-9
- May, Trevor (2011), The Victorian Workhouse, Shire Publications, ISBN 978-0-74780-355-3
- Means, Robin; Smith, Randall (1985), The Development of Welfare Services for Elderly People, Routledge, ISBN 0-7099-3531-5
- Nicholls, Sir George (1854), A History of the English Poor Law II, John Murray
- Nixon, Cheryl L. (2011), The Orphan in Eighteenth-Century Law and Literature, Ashgate Publishing, ISBN 0-7546-6424-4
- Poynter, J. R. (1969), Society and Pauperism, Routledge and Kegan Paul, ISBN 978-0-8020-1611-9
- Redford, Arthur (1976), Labour Migration in England, 1800–1850 (3rd ed.), Manchester University Press, ISBN 978-0-7190-0636-4
Further reading
- Crompton, Frank (1997), Workhouse Children: Infant and Child Paupers Under the Worcestershire Poor Law, 1780–1871, Sutton Publishing, ISBN 978-0-7509-1429-1
- Downing, J. (1725), An Account of Several Work-houses for Employing and Maintaining the Poor, Joseph Downing
- Higginbotham, Peter (2007), Workhouses of the Midlands, The History Press, ISBN 978-0-7524-4488-8
- Higginbotham, Peter (2012), The Workhouse Encyclopedia, The History Press, ISBN 978-0-7524-7012-2
- Higginbotham, Peter (2008), The Workhouse Cookbook, The History Press, ISBN 978-0-7524-4730-8
- Rogers, Joseph (1889), Reminiscences of a Workhouse Medical Officer, T. F. Unwin
- The Workhouse Website An extensive history of the workhouse
- Horsham Workhouse A site dedicated to the workhouse in Horsham
- Cleveland Street Workhouse A site dedicated to saving the Cleveland Street Workhouse
- Workhouse records on The National Archives' website.
- The poor in Burslem and Wolstanton, Stoke-on-Trent, early 19th century. (c) Stoke-on-Trent City Council 2005.
- High Hall Bainbridge 1809–2007 thedales.org.uk copyright 2004
- Aysgarth Union Workhouse sycamoreclose.com
- Workhouses in the Eastern Essex area of the UK www.essex-family-history.co.uk
- Sources for the study of workhouses in Sheffield, UK Produced by Sheffield City Council's Libraries and Archives
Purchasing power (sometimes retroactively called adjusted for inflation) is the amount of goods or services that can be purchased with a unit of currency. For example, if one had taken one unit of currency to a store in the 1950s, it would probably have been possible to buy a greater number of items than would be possible today, indicating that the currency had a greater purchasing power in the 1950s. Currency can be commodity money, like gold or silver; fiat currency; or a free-floating, market-valued currency like the US dollar. As Adam Smith noted, having money gives one the ability to "command" others' labor, so purchasing power to some extent is power over other people, to the extent that they are willing to trade their labor or goods for money or currency.
If one's monetary income stays the same but the price level increases, the purchasing power of that income falls. Inflation does not always imply falling purchasing power of one's money income, since that income may rise faster than the price level. A higher real income means a higher purchasing power, since real income refers to income adjusted for inflation.
For a price index, its value in the base year is usually normalized to a value of 100. The purchasing power of a unit of currency, say a dollar, in a given year, expressed in dollars of the base year, is 100/P, where P is the price index in that year. So, by definition, the purchasing power of a dollar decreases as the price level rises. The purchasing power in today's money of an amount C of money, t years into the future, can be computed with the formula for the present value:

    purchasing power = C / (1 + i)^t

where in this case i is an assumed future annual inflation rate.
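Both of these quantities can be computed directly. The following is a minimal Python sketch (the function names are invented for this illustration); it assumes a price index normalized to 100 in the base year and a constant annual inflation rate:

```python
def purchasing_power_from_index(price_index: float) -> float:
    """Purchasing power of one currency unit, expressed in base-year units,
    for a price index normalized to 100 in the base year (i.e. 100 / P)."""
    return 100.0 / price_index

def present_purchasing_power(amount: float, inflation_rate: float, years: float) -> float:
    """Today's purchasing power of `amount` received `years` from now,
    assuming a constant annual inflation rate (e.g. 0.03 for 3 per cent)."""
    return amount / (1.0 + inflation_rate) ** years

# With a price index of 125, one unit of currency buys what 0.80 units bought in the base year.
print(purchasing_power_from_index(125))                   # 0.8
# 100 units received 10 years from now, at 3% annual inflation, are worth about 74.41 today.
print(round(present_purchasing_power(100, 0.03, 10), 2))  # 74.41
```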
External links
- MeasuringWorth.com has a calculator with different measures for bringing values in Pound sterling from 1264 to the present and in US Dollars from 1774 up to any year until the present. The Measures of Worth page discusses which would be the most appropriate for different things.
- Purchasing Power Calculator by Fiona Maclachlan, The Wolfram Demonstrations Project.
Telnet is a network protocol used on the Internet or local area networks to provide a bidirectional interactive text-oriented communication facility using a virtual terminal connection. User data is interspersed in-band with Telnet control information in an 8-bit byte oriented data connection over the Transmission Control Protocol (TCP).
Historically, Telnet provided access to a command-line interface (usually, of an operating system) on a remote host. Most network equipment and operating systems with a TCP/IP stack support a Telnet service for remote configuration (including systems based on Windows NT). However, because of serious security issues when using Telnet over an open network such as the Internet, its use for this purpose has waned significantly in favor of SSH.
The term telnet may also refer to the software that implements the client part of the protocol. Telnet client applications are available for virtually all computer platforms. Telnet is also used as a verb: to telnet means to establish a connection with the Telnet protocol, either with a command-line client or with a programmatic interface. For example, a common directive might be: "To change your password, telnet to the server, log in and run the passwd command." Most often, a user will be telnetting to a Unix-like server system or a network device (such as a router) and obtain a login prompt to a command-line text interface or a character-based full-screen manager.
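As one illustration of the programmatic route, the sketch below uses Python's standard telnetlib module (deprecated in recent releases and removed in Python 3.13, so it requires an older interpreter). The host name, credentials and prompt strings are placeholders and vary between systems:

```python
from telnetlib import Telnet  # removed in Python 3.13; modern code typically uses SSH instead

HOST = "server.example.com"   # placeholder host running a Telnet service on port 23

with Telnet(HOST, 23, timeout=10) as tn:
    tn.read_until(b"login: ")      # prompt text varies by system
    tn.write(b"alice\n")
    tn.read_until(b"Password: ")
    tn.write(b"secret\n")          # sent in clear text -- the protocol's core weakness
    tn.write(b"passwd\n")          # run a command in the remote shell
    tn.write(b"exit\n")
    print(tn.read_all().decode("ascii", errors="replace"))
```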
History and standards
Telnet is a client-server protocol, based on a reliable connection-oriented transport. Typically this protocol is used to establish a connection to Transmission Control Protocol (TCP) port number 23, where a Telnet server application (telnetd) is listening. Telnet, however, predates TCP/IP and was originally run over Network Control Program (NCP) protocols.
Before March 5, 1973, Telnet was an ad-hoc protocol with no official definition. Essentially, it used an 8-bit channel to exchange 7-bit ASCII data. Any byte with the high bit set was a special Telnet character. On March 5, 1973, a Telnet protocol standard was defined at UCLA with the publication of two NIC documents: Telnet Protocol Specification, NIC #15372, and Telnet Option Specifications, NIC #15373.
Because of its negotiable-options protocol architecture, many extensions were made to Telnet, some of which have been adopted as Internet standards (IETF documents STD 27 through STD 32). Some extensions have been widely implemented and others are proposed standards on the IETF standards track (see the list of RFCs below).
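At the byte level the negotiation mechanism is simple: commands are introduced by the IAC byte (255), followed by WILL, WONT, DO or DONT and an option code. The sketch below shows one conservative strategy a minimal client might use, refusing every option the peer proposes. The constants follow RFC 854/855, but the parser is deliberately simplified (it ignores subnegotiation) and is not a complete implementation:

```python
IAC, DONT, DO, WONT, WILL = 255, 254, 253, 252, 251   # command codes from RFC 854

def refuse_all_options(raw: bytes, send) -> bytes:
    """Strip Telnet option negotiation from `raw`, refusing every option.

    `send` is a callable used to transmit the refusals (e.g. a socket's sendall).
    Subnegotiation (IAC SB ... IAC SE) and truncated trailing commands are not
    handled; this is a sketch, not a complete implementation.
    """
    data = bytearray()
    i = 0
    while i < len(raw):
        if raw[i] != IAC:
            data.append(raw[i])                      # ordinary data byte
            i += 1
        elif i + 2 < len(raw) and raw[i + 1] in (DO, DONT):
            send(bytes([IAC, WONT, raw[i + 2]]))     # refuse to enable the option
            i += 3
        elif i + 2 < len(raw) and raw[i + 1] in (WILL, WONT):
            send(bytes([IAC, DONT, raw[i + 2]]))     # ask the peer not to use it
            i += 3
        elif i + 1 < len(raw) and raw[i + 1] == IAC:
            data.append(IAC)                         # IAC IAC escapes a literal 255 data byte
            i += 2
        else:
            i += 2                                   # other two-byte commands (NOP, GA, ...)
    return bytes(data)

# Example: the server sends IAC DO ECHO (option 1) followed by the text "ok".
cleaned = refuse_all_options(bytes([IAC, DO, 1]) + b"ok", send=lambda reply: None)
assert cleaned == b"ok"
```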
Security
When Telnet was initially developed in 1969, most users of networked computers were in the computer departments of academic institutions, or at large private and government research facilities. In this environment, security was not nearly as much a concern as it became after the bandwidth explosion of the 1990s. The rise in the number of people with access to the Internet, and by extension the number of people attempting to hack other people's servers, made encrypted alternatives much more necessary.
- Telnet, by default, does not encrypt any data sent over the connection (including passwords), and so it is often practical to eavesdrop on the communications and use the password later for malicious purposes; anybody who has access to a router, switch, hub or gateway located on the network between the two hosts where Telnet is being used can intercept the packets passing by and obtain login, password and whatever else is typed with a packet analyzer.
- Most implementations of Telnet have no authentication that would ensure communication is carried out between the two desired hosts and not intercepted in the middle.
- Several vulnerabilities have been discovered over the years in commonly used Telnet daemons.
These security-related shortcomings have seen the usage of the Telnet protocol drop rapidly, especially on the public Internet, in favor of the Secure Shell (SSH) protocol, first released in 1995. SSH provides much of the functionality of telnet, with the addition of strong encryption to prevent sensitive data such as passwords from being intercepted, and public key authentication, to ensure that the remote computer is actually who it claims to be. As has happened with other early Internet protocols, extensions to the Telnet protocol provide Transport Layer Security (TLS) security and Simple Authentication and Security Layer (SASL) authentication that address the above issues. However, most Telnet implementations do not support these extensions; and there has been relatively little interest in implementing these as SSH is adequate for most purposes.
IBM 5250 or 3270 workstation emulation is supported via custom telnet clients, TN5250/TN3270, and IBM servers. Clients and servers designed to pass IBM 5250 data streams over Telnet generally do support SSL encryption, as SSH does not include 5250 emulation. Under OS/400, port 992 is the default port for secured telnet.
All data octets except \377 are transmitted over the TCP transport as is. Therefore, a Telnet client application may also be used to establish an interactive raw TCP session, and it is commonly believed that such a session, which does not use the IAC (\377, or 255 in decimal), is functionally identical. This is not the case, however, because there are other network virtual terminal (NVT) rules, such as the requirement for a bare carriage return character (CR, ASCII 13) to be followed by a NULL (ASCII 0) character, that distinguish the Telnet protocol from raw TCP sessions. On the other hand, many systems now possess true raw TCP clients, such as netcat or socat on UNIX and PuTTY on Windows, which can also be used to manually "talk" to other services without specialized client software. Nevertheless, Telnet is still sometimes used in debugging network services such as SMTP, IRC, HTTP, FTP or POP3 servers, to issue commands to a server and examine the responses; of all these protocols, only FTP actually uses the Telnet data format.
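The kind of manual debugging just described can also be reproduced with a plain TCP socket, as in this sketch of a minimal HTTP exchange (example.com is a placeholder; unlike a Telnet client, nothing here emits or interprets IAC sequences):

```python
import socket

HOST = "example.com"                      # placeholder host

request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"                                # blank line ends the request headers
)

with socket.create_connection((HOST, 80), timeout=10) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):       # read until the server closes the connection
        response += chunk

print(response.decode("iso-8859-1").split("\r\n")[0])   # status line, e.g. "HTTP/1.1 200 OK"
```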
Another difference between Telnet and a raw TCP session is that Telnet is not 8-bit clean by default. 8-bit mode may be negotiated, but octets with the high bit set may be garbled until this mode is requested, and it will not be requested in a non-Telnet connection. The 8-bit mode (the so-called binary option) is intended for transmitting binary data rather than characters. The standard suggests the interpretation of codes \000–\176 as ASCII, but does not offer any meaning for data octets with the high bit set. There was an attempt to introduce switchable character-encoding support like HTTP has, but nothing is known about its actual software support.
As of mid-2010, the Telnet protocol itself has been mostly superseded for remote login. Telnet is still popular in various application areas:
- Enterprise networks to access host applications, e.g., on IBM Mainframes.
- Administration of network elements, e.g., in configuring routers on a home network, in commissioning, integration and maintenance of core network elements in mobile communication networks, and many industrial control systems.
- MUD games played over the Internet, as well as talkers, MUSHes, MUCKs and MOOs.
- Connection to Bulletin Board Systems.
- Internet game clubs, like the Internet Chess Club, the Free Internet Chess Server and the Internet Go server.
- Connection to Amateur Radio DX clusters.
- Embedded systems.
- Mobile data collection applications where telnet runs over secure networks
- Troubleshooting and testing of basic TCP functionality between IP endpoints, often as a response to editing firewall rules, initial endpoint configuration, or partial connectivity loss.
- RFC 137, TELNET protocol specification
- RFC 139, TELNET protocol specification
- RFC 854, TELNET protocol specification
- RFC 855, TELNET option specifications
- RFC 856, TELNET binary transmission
- RFC 857, TELNET echo option
- RFC 858, TELNET suppress Go Ahead option
- RFC 859, TELNET status option
- RFC 860, TELNET timing mark option
- RFC 861, TELNET extended options - list option
- RFC 885, Telnet end of record option
- RFC 1041, Telnet 3270 regime option
- RFC 1073, Telnet Window Size Option
- RFC 1079, Telnet terminal speed option
- RFC 1091, Telnet terminal-type option
- RFC 1096, Telnet X display location option
- RFC 1116, Telnet Linemode Option
- RFC 1123, Requirements for Internet Hosts - Application and Support
- RFC 1143, The Q Method of Implementing TELNET Option Negotiation
- RFC 1184, Telnet linemode option
- RFC 1205, 5250 Telnet interface
- RFC 1372, Telnet remote flow control option
- RFC 1572, Telnet Environment Option
- RFC 2217, Telnet Com Port Control Option
- RFC 2941, Telnet Authentication Option
- RFC 2942, Telnet Authentication: Kerberos Version 5
- RFC 2943, TELNET Authentication Using DSA
- RFC 2944, Telnet Authentication: SRP
- RFC 2946, Telnet Data Encryption Option
- RFC 4248, The telnet URI Scheme
- RFC 4777, IBM's iSeries Telnet Enhancements
- PuTTY is a free, open-source SSH, Telnet, rlogin, and raw TCP client for Windows, Linux, and Unix.
- AbsoluteTelnet is a telnet client for Windows. It also supports SSH and SFTP.
- RUMBA (Terminal Emulator)
- Line Mode Browser, a command line web browser
- NCSA Telnet
- SecureCRT from Van Dyke Software
- ZOC Terminal
- SyncTERM BBS terminal program supporting Telnet, SSHv2, RLogin, Serial, Windows, *nix, and Mac OS X platforms, X/Y/ZMODEM and various BBS terminal emulations
- PowerTerm InterConnect from Ericom available for Windows, Mac OS X, Linux, Windows CE and supports 35 terminal emulation types including TN3270, TN5250, VT420, Wyse and others with SSH and SSL.
- Rtelnet is a SOCKS client version of Telnet, providing similar functionality of telnet to those hosts which are behind firewall and NAT.
- Telnet Options - The official list of assigned option numbers at iana.org
- Telnet Interactions Described as a Sequence Diagram
- Telnet protocol description, with NVT reference
- Microsoft TechNet:Telnet commands
- TELNET: The Mother of All (Application) Protocols
Colonialism is the establishment, exploitation, maintenance, acquisition and expansion of colonies in one territory by people from another territory. It is a set of unequal relationships between the colonial power and the colony and between the colonists and the indigenous population.
The European colonial period was the era from the 1500s to the mid-1900s when several European powers (particularly, but not exclusively, Portugal, Spain, Britain, the Netherlands and France) established colonies in Asia, Africa, and the Americas. At first the countries followed mercantilist policies designed to strengthen the home economy at the expense of rivals, so the colonies were usually allowed to trade only with the mother country. By the mid-19th century, however, the powerful British Empire gave up mercantilism and trade restrictions and introduced the principle of free trade, with few restrictions or tariffs.
Collins English Dictionary defines colonialism as "the policy and practice of a power in extending control over weaker people or areas."1 The Merriam-Webster Dictionary offers four definitions, including "something characteristic of a colony" and "control by one power over a dependent area or people."2
The 2006 Stanford Encyclopedia of Philosophy "uses the term 'colonialism' to describe the process of European settlement and political control over the rest of the world, including Americas, Australia, and parts of Africa and Asia." It discusses the distinction between colonialism and imperialism and states that "given the difficulty of consistently distinguishing between the two terms, this entry will use colonialism as a broad concept that refers to the project of European political domination from the sixteenth to the twentieth centuries that ended with the national liberation movements of the 1960s."3
In his preface to Jürgen Osterhammel's Colonialism: A Theoretical Overview, Roger Tignor says, "For Osterhammel, the essence of colonialism is the existence of colonies, which are by definition governed differently from other territories such as protectorates or informal spheres of influence."4 In the book, Osterhammel asks, "How can 'colonialism' be defined independently from 'colony?'"5 He settles on a three-sentence definition:
Colonialism is a relationship between an indigenous (or forcibly imported) majority and a minority of foreign invaders. The fundamental decisions affecting the lives of the colonized people are made and implemented by the colonial rulers in pursuit of interests that are often defined in a distant metropolis. Rejecting cultural compromises with the colonized population, the colonizers are convinced of their own superiority and their ordained mandate to rule.6
Historians often distinguish between two overlapping forms of colonialism:
- Settler colonialism involves large-scale immigration, often motivated by religious, political, or economic reasons.
- Exploitation colonialism involves fewer colonists and focuses on access to resources for export, typically to the metropole. This category includes trading posts as well as larger colonies where colonists would constitute much of the political and economic administration, but would rely on indigenous resources for labour and material. Prior to the end of the slave trade and widespread abolition, when indigenous labour was unavailable, slaves were often imported to the Americas, first by the Portuguese Empire, and later by the Spanish, Dutch, French and British.
Plantation colonies would be considered exploitation colonialism; but colonizing powers would utilize either type for different territories depending on various social and economic factors as well as climate and geographic conditions.
Surrogate colonialism involves a settlement project supported by colonial power, in which most of the settlers do not come from the mainstream of the ruling power.
As colonialism often played out in pre-populated areas, sociocultural evolution included the formation of various ethnically hybrid populations. Colonialism gave rise to culturally and ethnically mixed populations such as the mestizos of the Americas, as well as racially-divided populations such as those found in French Algeria or in Southern Rhodesia. In fact, everywhere where colonial powers established a consistent and continued presence hybrid communities existed.
Notable examples in Asia include the Anglo-Burmese, Anglo-Indian, Burgher, Eurasian Singaporean, Filipino mestizo, Kristang and Macanese peoples. In the Dutch East Indies (later Indonesia) the vast majority of "Dutch" settlers were in fact Eurasians known as Indo-Europeans, formally belonging to the European legal class in the colony (see also Indos in Pre-Colonial History and Indos in Colonial History).[7][8]
Activity that could be called colonialism has a long history, starting with the pre-colonial African empires and continuing with the Egyptians, Phoenicians, Greeks and Romans, who all built colonies in antiquity. The word "metropole" comes from the Greek metropolis [Greek: "μητρόπολις"]—"mother city". The word "colony" comes from the Latin colonia—"a place for agriculture". Between the 11th and 18th centuries, the Vietnamese established military colonies south of their original territory and absorbed the territory, in a process known as nam tiến.[9]
Modern colonialism started with the Age of Discovery. Portugal and Spain discovered new lands across the oceans and built trading posts or conquered large extensions of land. For some people, it is this building of colonies across oceans that differentiates colonialism from other types of expansionism. These new lands were divided between the Portuguese Empire and Spanish Empire, first by the papal bull Inter caetera and then by the Treaty of Tordesillas and the Treaty of Zaragoza (1529).
This period is also associated with the Commercial Revolution. The late Middle Ages saw reforms in accountancy and banking in Italy and the eastern Mediterranean. These ideas were adopted and adapted in western Europe to the high risks and rewards associated with colonial ventures.
The 17th century saw the creation of the French colonial empire and the Dutch Empire, as well as the English overseas possessions, which later became the British Empire. It also saw the establishment of a Danish colonial empire and some Swedish overseas colonies.
The spread of colonial empires was reduced in the late 18th and early 19th centuries by the American Revolutionary War and the Latin American wars of independence. However, many new colonies were established after this time, including the German colonial empire and Belgian colonial empire. In the late 19th century, many European powers were involved in the Scramble for Africa.
The Russian Empire, Ottoman Empire and Austrian Empire existed at the same time as the above empires, but did not expand over oceans. Rather, these empires expanded through the more traditional route of conquest of neighbouring territories. There was, though, some Russian colonization of the Americas across the Bering Strait. The Empire of Japan modelled itself on European colonial empires. The United States of America gained overseas territories after the Spanish-American War for which the term "American Empire" was coined.
After the First World War, the victorious allies divided up the German colonial empire and much of the Ottoman Empire between themselves as League of Nations mandates. These territories were divided into three classes according to how quickly it was deemed that they would be ready for independence.10 However, decolonisation outside the Americas lagged until after the Second World War. In 1962 the United Nations set up a Special Committee on Decolonization, often called the Committee of 24, to encourage this process.
Further, dozens of independence movements and global political solidarity projects such as the Non-Aligned Movement were instrumental in the decolonization efforts of former colonies.
The major European empires consisted of the following colonies at the start of World War I (former colonies of the Spanish Empire became independent before 1914 and are not listed; former colonies of other European empires that previously became independent, such as the former French colony Haiti, are not listed):
British colonies and protectorates:
- Anglo-Egyptian Sudan
- Ascension Island
- British East Africa
- British Guiana
- British Honduras
- British Hong Kong
- British Somaliland
- Falkland Islands
- Fiji Island
- Gilbert and Ellice Islands
- The Gambia
- Gold Coast
- British Malaya
- New Zealand
- North Borneo
- Northern Rhodesia
- Sierra Leone
- Southern Rhodesia
- St. Helena
- Trinidad and Tobago
- South Africa
French colonies and protectorates:
- Clipperton Island
- Comoros Islands
- French Guiana
- French Equatorial Africa
- French India (Pondichéry, Chandernagor, Karikal, Mahé and Yanaon)
- French Indochina
- French Polynesia
- French Somaliland
- French Southern and Antarctic Lands
- French West Africa
- La Réunion
- New Caledonia
- Shanghai French Concession (similar concessions in Kouang-Tchéou-Wan, Tientsin, Hankéou)
German Empire colonies:
- Caroline Islands
- German New Guinea
- German East Africa
- German South West Africa
- Gilbert Islands
- Mariana Islands
- Marshall Islands
Portuguese colonies:
- Portuguese Africa
- Portuguese Asia
By 1914, Europeans had migrated to the colonies in the millions. Some intended to remain in the colonies as temporary settlers, mainly as military personnel or on business. Others went to the colonies as immigrants. British citizens were by far the most numerous population to migrate to the colonies: 2.5 million settled in Canada; 1.5 million in Australia; 750,000 in New Zealand; 450,000 in the Union of South Africa; and 200,000 in India. French citizens also migrated in large numbers, mainly to the colonies in the north African Maghreb region: 1.3 million settled in Algeria; 200,000 in Morocco; 100,000 in Tunisia; while only 20,000 migrated to French Indochina. Dutch and German colonies saw relatively scarce European migration, since Dutch and German colonial expansion focused upon commercial goals rather than settlement. Portugal sent 150,000 settlers to Angola, 80,000 to Mozambique, and 20,000 to Goa. During the Spanish Empire, approximately 550,000 Spanish settlers migrated to Latin America.11
The term neocolonialism has been used to refer to a variety of contexts since decolonization that took place after World War II. Generally it does not refer to a type of direct colonization, rather, colonialism by other means. Specifically, neocolonialism refers to the theory that former or existing economic relationships, such as the General Agreement on Tariffs and Trade and the Central American Free Trade Agreement, created by former colonial powers were or are used to maintain control of their former colonies and dependencies after the colonial independence movements of the post–World War II period.
The conquest of vast territories brings multitudes of diverse cultures under the central control of the imperial authorities. From the time of Ancient Greece and Ancient Rome, this fact has been addressed by empires adopting the concept of universalism, and applying it to their imperial policies towards their subjects far from the imperial capital. The capital, the metropole, was the source of ostensibly enlightened policies imposed throughout the distant colonies.
The empire that grew from Athenian conquest spurred the spread of Greek language, religion, science and philosophy throughout the colonies. The Athenians considered their own culture superior to all others. They referred to people speaking foreign languages as barbarians, dismissing foreign languages as inferior mutterings that sounded to Greek ears like "bar-bar".
Romans found efficiency in imposing a universalist policy towards their colonies in many matters. Roman law was imposed on Roman citizens, as well as colonial subjects, throughout the empire. Latin spread as the common language of government and trade, the lingua franca, throughout the Empire. Romans also imposed peace between their diverse foreign subjects, which they described in beneficial terms as the Pax Romana. The use of universal regulation by the Romans marks the emergence of a European concept of universalism and internationalism. Tolerance of other cultures and beliefs has always been secondary to the aims of empires, however. The Roman Empire was tolerant of diverse cultures and religious practises, so long as these did not threaten Roman authority. Napoleon's foreign minister, Charles Maurice de Talleyrand, once remarked: "Empire is the art of putting men in their place".12
Settlers acted as the link between the natives and the imperial hegemony, bridging the geographical, ideological and commercial gap between the colonisers and colonised. Advanced technology made possible the expansion of European states. With tools such as cartography, shipbuilding, navigation, mining and agricultural productivity colonisers had an upper hand. Their awareness of the Earth's surface and abundance of practical skills provided colonisers with a knowledge that, in turn, created power.13
Painter and Jeffrey argue that geography as a discipline was not and is not an objective science; rather, it is based on assumptions about the physical world. Whereas it may have given "The West" an advantage when it came to exploration, it also created zones of racial inferiority. Geographical beliefs such as environmental determinism, the view that some parts of the world are underdeveloped, legitimised colonialism and created notions of skewed evolution.[13] These are now seen as elementary concepts. Political geographers maintain that colonial behavior was reinforced by the physical mapping of the world, visually separating "them" and "us". Geographers are primarily focused on the spaces of colonialism and imperialism, more specifically, the material and symbolic appropriation of space enabling colonialism.[14]
A colony is part of an empire and so colonialism is closely related to imperialism. It is often assumed that colonialism and imperialism are interchangeable; however, Robert Young suggests that imperialism is the concept while colonialism is the practice. Colonialism is based on an imperial outlook, thereby creating a consequential relationship. Through an empire, colonialism is established and capitalism is expanded; on the other hand, a capitalist economy naturally enforces an empire. In the next section Marxists make a case for this mutually reinforcing relationship.
Marxism views colonialism as a form of capitalism, enforcing exploitation and social change. Marx thought that, working within the global capitalist system, colonialism is closely associated with uneven development. It is an "instrument of wholesale destruction, dependency and systematic exploitation producing distorted economies, socio-psychological disorientation, massive poverty and neocolonial dependency."[15] According to some Marxist historians, in all of the colonial countries ruled by Western European countries "the natives were robbed of more than half their natural span of life by undernourishment".[16] Colonies are constructed into modes of production. The search for raw materials and the current search for new investment opportunities is a result of inter-capitalist rivalry for capital accumulation. Lenin regarded colonialism as the root cause of imperialism, as imperialism was distinguished by monopoly capitalism via colonialism; as Lyal S. Sunga explains: "Vladimir Lenin advocated forcefully the principle of self-determination of peoples in his "Theses on the Socialist Revolution and the Right of Nations to Self-Determination" as an integral plank in the programme of socialist internationalism" and he quotes Lenin who contended that "The right of nations to self-determination implies exclusively the right to independence in the political sense, the right to free political separation from the oppressor nation. Specifically, this demand for political democracy implies complete freedom to agitate for secession and for a referendum on secession by the seceding nation."[17] Non-Russian Marxists within the RSFSR and later the USSR, such as Sultan Galiev and Vasyl Shakhrai, meanwhile, between 1918 and 1923 and then after 1929, considered the Soviet regime a renewed version of Russian imperialism and colonialism.
In his critique of colonialism in Africa, the Guyanese historian and political activist Walter Rodney states:
- "The decisiveness of the short period of colonialism and its negative consequences for Africa spring mainly from the fact that Africa lost power. Power is the ultimate determinant in human society, being basic to the relations within any group and between groups. It implies the ability to defend one’s interests and if necessary to impose one’s will by any means available. In relations between peoples, the question of power determines manoeuvrability in bargaining, the extent to which one people respect the interests of another, and eventually the extent to which a people survive as a physical and cultural entity. When one society finds itself forced to relinquish power entirely to another society that in itself is a form of underdevelopment....During the centuries of pre-colonial trade, some control over social political and economic life was retained in Africa, in spite of the disadvantageous commerce with Europeans. That little control over internal matters disappeared under colonialism. Colonialism went much further than trade. It meant a tendency towards direct appropriation by Europeans of the social institutions within Africa. Africans ceased to set indigenous cultural goals and standards, and lost full command of training young members of the society. Those were undoubtedly major steps backwards.... Colonialism was not merely a system of exploitation, but one whose essential purpose was to repatriate the profits to the so-called ‘mother country’. From an African view-point, that amounted to consistent expatriation of surplus produced by African labour out of African resources. It meant the development of Europe as part of the same dialectical process in which Africa was underdeveloped."
“Colonial Africa fell within that part of the international capitalist economy from which surplus was drawn to feed the metropolitan sector. As seen earlier, exploitation of land and labour is essential for human social advance, but only on the assumption that the product is made available within the area where the exploitation takes place.”
Classical liberals, including Adam Smith, Frédéric Bastiat, Richard Cobden, John Bright, Henry Richard, Herbert Spencer, H. R. Fox Bourne, Edward Morel, Josephine Butler, W. J. Fox and William Ewart Gladstone, generally opposed colonialism (as opposed to colonization) and imperialism.20
Adam Smith wrote in The Wealth of Nations that Britain should grant independence to all of its colonies, arguing that this would be economically beneficial for the average British person, although merchants holding mercantilist privileges would lose out.20
The act of colonizing spread and synthesized western social and political ideas of a gender and racial hierarchy to colonized areas, and also elicited the further development of ideas about the gender dichotomy and racial divisions in European society during the colonial era.212223 Popular political practice of the time supported colonial rule by legitimizing European male authority and the supposed inferiority of women and non-Europeans through studies of craniology, comparative anatomy, and phrenology.222324 Biologists, naturalists, anthropologists, and ethnologists of the 1800s focused on the study of colonized indigenous women, as in the case of Georges Cuvier's study of Sarah Baartman.23 Such cases framed a supposed relationship of natural superiority and inferiority between the races on the basis of European naturalists' observations; they gave rise to the perception that African women's anatomy, and especially genitalia, resembled those of mandrills, baboons, and monkeys, thus differentiating colonized Africans from what were viewed as the features of the evolutionarily superior, and thus rightfully authoritarian, European woman.23
In addition to what would now be viewed as pseudo-scientific studies of race, which supported the racially hierarchical and evolutionary ideology of the time, new science-based ideology about gender was also emerging in reaction to the colonial era of European history.22 The idea of female inferiority across all cultures was emerging, grounded in craniology, which led scientists to argue from skull measurements that women's brains were smaller and therefore less developed and less evolutionarily advanced than men's.22 The influence that led to such studies was the establishment of comparative human anatomy, which developed in response to European scientists' delving into the question of biological racial difference.
Thus, non-Europeans and women faced invasive study by colonial powers in the service of scientific ideologies and theories that encouraged the political institution of colonialism.23 Such studies of race and gender coincided with the colonial era and with the introduction of foreign cultures, appearances, and gender roles into the field of vision of European scholars.
Post-colonialism (or post-colonial theory) can refer to a set of theories in philosophy and literature that grapple with the legacy of colonial rule. In this sense, postcolonial literature may be considered a branch of postmodern literature concerned with the political and cultural independence of peoples formerly subjugated in colonial empires. Many practitioners take Edward Said's book Orientalism (1978) as the theory's founding work (although French theorists such as Aimé Césaire and Frantz Fanon made similar claims decades before Said).
Said analysed the works of Balzac, Baudelaire and Lautréamont, arguing that they helped to shape a societal fantasy of European racial superiority. Writers of post-colonial fiction interact with the traditional colonial discourse, but modify or subvert it; for instance by retelling a familiar story from the perspective of an oppressed minor character in the story. Gayatri Chakravorty Spivak's Can the Subaltern Speak? (1988) gave its name to Subaltern Studies.
In A Critique of Postcolonial Reason (1999), Spivak explored how major works of European metaphysics (such as those of Kant and Hegel) not only tend to exclude the subaltern from their discussions, but actively prevent non-Europeans from occupying positions as fully human subjects. Hegel's Phenomenology of Spirit (1807), famous for its explicit ethnocentrism, considers Western civilization as the most accomplished of all, while Kant also allowed some traces of racialism to enter his work.
The impacts of colonization are immense and pervasive.25 Various effects, both immediate and protracted, include the spread of virulent diseases, the establishment of unequal social relations, exploitation, enslavement, medical advances, the creation of new institutions, abolitionism,26 improved infrastructure,27 and technological progress.28 Colonial practices also spur the spread of languages, literature and cultural institutions. The native cultures of the colonized peoples can also have a powerful influence on the imperial country.
Economic expansion has accompanied imperial expansion since ancient times. Greek trade-networks spread throughout the Mediterranean region, while Roman trade expanded with the main goal of directing tribute from the colonized areas towards the Roman metropole. According to Strabo, by the time of emperor Augustus, up to 120 Roman ships would set sail every year from Myos Hormos in Roman Egypt to India.29 With the development of trade routes under the Ottoman Empire,
The Aztec civilization developed into a large empire that, much like the Roman Empire, had the goal of exacting tribute from conquered colonial areas. For the Aztecs, the most important tribute was the acquisition of sacrificial victims for their religious rituals.31
On the other hand, European colonial empires sometimes attempted to channel, restrict and impede trade involving their colonies, funnelling activity through the metropole and taxing accordingly.
European nations entered their imperial projects with the goal of enriching the European metropole. Exploitation of non-Europeans and other Europeans to support imperial goals was acceptable to the colonizers. Two outgrowths of this imperial agenda were slavery and indentured servitude. In the 17th century, nearly two-thirds of English settlers came to North America as indentured servants.32
African slavery had existed long before Europeans discovered it as an exploitable means of creating an inexpensive labour force for the colonies. Europeans brought transportation technology to the practice, carrying large numbers of African slaves to the Americas by sail. Spain and Portugal had brought African slaves to work at African colonies such as Cape Verde and the Azores, and then in Latin America, by the 16th century. The British, French and Dutch joined the slave trade in subsequent centuries. Ultimately, around 11 million Africans were taken to the Caribbean and North and South America as slaves by European colonizers.33
| European empire | Colonial destination | Number of slaves imported33 |
|---|---|---|
| British Empire | British Caribbean | 1,665,000 |
| French Empire | French Caribbean | 1,600,200 |
| Spanish Empire | Latin America | 1,552,100 |
| Dutch Empire | Dutch Caribbean | 500,000 |
| British Empire | British North America | 399,000 |
Abolitionists in Europe and America protested the inhumane treatment of African slaves, which contributed to the abolition of the slave trade in the early 19th century. The labour shortage that resulted inspired European colonizers to develop a new source of labour through a system of indentured servitude. Indentured servants consented to a contract with the European colonizers. Under the contract, the servant would work for an employer for a term of at least a year, while the employer agreed to pay for the servant's voyage to the colony, possibly pay for the return to the country of origin, and pay the employee a wage as well. The employee was "indentured" to the employer because they owed the employer a debt for the cost of their travel to the colony, which they were expected to pay off through their wages. In practice, indentured servants were exploited through terrible working conditions and burdensome debts imposed by employers, with whom the servants had no means of negotiating the debt once they arrived in the colony.
India and China were the largest sources of indentured servants during the colonial era. Indentured servants from India travelled to British colonies in Asia, Africa and the Caribbean, and also to French and Portuguese colonies, while Chinese servants travelled to British and Dutch colonies. Between 1830 and 1930, around 30 million indentured servants migrated from India, and 24 million returned to India. China sent more indentured servants to European colonies, and around the same proportion returned to China.34
Following the Scramble for Africa, an early but secondary focus for most colonial regimes was the suppression of slavery and the slave trade. By the end of the colonial period they had largely succeeded in this aim, though slavery remains active in parts of Africa.26
Imperial expansion follows military conquest in most instances. Imperial armies therefore have a long history of military innovation designed to gain an advantage over the armies of the peoples they aim to conquer. The Greeks developed the phalanx system, which enabled their military units to present themselves to their enemies as a wall, with foot soldiers using shields to cover one another during their advance on the battlefield. Under Philip II of Macedon, they were able to organize thousands of soldiers into a formidable battle force, bringing together carefully trained infantry and cavalry regiments.35 Alexander the Great exploited this military foundation further during his conquests.
The Spanish Empire held a major advantage over Mesoamerican warriors through the use of weapons made of stronger metal, predominantly iron, which was able to shatter the blades of axes used by the Aztec civilization and others. The European development of firearms using gunpowder cemented their military advantage over the peoples they sought to subjugate in the Americas and elsewhere.
The populations of some colonial territories, such as Canada, enjoyed relative peace and prosperity as part of a European power, at least among the majority population; however, minority populations such as First Nations peoples and French Canadians experienced marginalization and resented colonial practices. Francophone residents of Quebec, for example, were vocal in opposing conscription into the armed services to fight on behalf of Britain during World War I, resulting in the Conscription Crisis of 1917. Other European colonies saw much more pronounced conflict between European settlers and the local population. Rebellions broke out in the later decades of the imperial era, such as India's Sepoy Rebellion of 1857.
The territorial boundaries imposed by European colonizers, notably in central Africa and south Asia, defied the existing boundaries of native populations that had previously interacted little with one another. European colonizers disregarded native political and cultural animosities, imposing peace upon people under their military control. Native populations were relocated at the will of the colonial administrators. Once independence from European control was achieved, civil war erupted in some former colonies, as native populations fought to capture territory for their own ethnic, cultural or political group. The Partition of India in 1947, which accompanied independence from Britain, led to communal violence in which around 500,000 people were killed. Fighting erupted between Hindu, Sikh and Muslim communities as they fought for territorial dominance. Muslims sought an independent, partitioned state in which they would not be a religious minority, resulting in the creation of Pakistan.36
In a reversal of the migration patterns experienced during the modern colonial era, post-independence migration followed a route back towards the imperial country. In some cases, this was a movement of settlers of European origin returning to the land of their birth, or to an ancestral birthplace. Some 900,000 French colonists (known as Pieds-Noirs) resettled in France following Algeria's independence in 1962; a significant number of these migrants were also of Algerian descent. Some 800,000 people of Portuguese origin migrated to Portugal after the independence of its former colonies in Africa between 1974 and 1979, and 300,000 settlers of Dutch origin migrated to the Netherlands from the Dutch West Indies after Dutch military control of the colony ended.37
After World War II, 300,000 Dutch nationals from the Dutch East Indies, the majority of them people of Eurasian descent known as Indo-Europeans, repatriated to the Netherlands. A significant number later migrated to the US, Canada, Australia and New Zealand.3839
Global travel and migration in general developed at an increasingly brisk pace throughout the era of European colonial expansion. Citizens of the former colonies of European countries may have a privileged status in some respects with regard to immigration rights when settling in the former European imperial nation. For example, rights to dual citizenship may be generous,40 or larger immigrant quotas may be extended to former colonies.
In some cases, the former European imperial nations continue to foster close political and economic ties with former colonies. The Commonwealth of Nations is an organization that promotes cooperation between and among Britain and its former colonies, the Commonwealth members. A similar organization exists for former colonies of France, the Francophonie; the Community of Portuguese Language Countries plays a similar role for former Portuguese colonies, and the Dutch Language Union is the equivalent for former colonies of the Netherlands.
Migration from former colonies has proven problematic for European countries, where the majority population may express hostility to ethnic minorities who have immigrated from those colonies. Cultural and religious conflicts have erupted in France in recent decades between immigrants from the Maghreb countries of north Africa and the majority population of France. Nonetheless, immigration has changed the ethnic composition of France; by the 1980s, 25% of the total population of "inner Paris" and 14% of the metropolitan region were of foreign origin, mainly Algerian.41
Encounters between explorers and populations in the rest of the world often introduced new diseases, which sometimes caused local epidemics of extraordinary virulence.42 For example, smallpox, measles, malaria, yellow fever, and others were unknown in pre-Columbian America.43
Disease killed the entire native (Guanches) population of the Canary Islands in the 16th century. Half the native population of Hispaniola in 1518 was killed by smallpox. Smallpox also ravaged Mexico in the 1520s, killing 150,000 in Tenochtitlan alone, including the emperor, and Peru in the 1530s, aiding the European conquerors. Measles killed a further two million Mexican natives in the 17th century. In 1618–1619, smallpox wiped out 90% of the Massachusetts Bay Native Americans.44 Smallpox epidemics in 1780–1782 and 1837–1838 brought devastation and drastic depopulation among the Plains Indians.45 Some believe that the death of up to 95% of the Native American population of the New World was caused by Old World diseases.46 Over the centuries, the Europeans had developed high degrees of immunity to these diseases, while the indigenous peoples had no time to build such immunity.47
Smallpox decimated the native population of Australia, killing around 50% of indigenous Australians in the early years of British colonisation.48 It also killed many New Zealand Māori.49 As late as 1848–49, as many as 40,000 out of 150,000 Hawaiians are estimated to have died of measles, whooping cough and influenza. Introduced diseases, notably smallpox, nearly wiped out the native population of Easter Island.50 In 1875, measles killed over 40,000 Fijians, approximately one-third of the population.51 The Ainu population decreased drastically in the 19th century, due in large part to infectious diseases brought by Japanese settlers pouring into Hokkaido.52
Conversely, researchers have concluded that syphilis was carried from the New World to Europe after Columbus's voyages. The findings suggested Europeans could have carried the nonvenereal tropical bacteria home, where the organisms may have mutated into a more deadly form in the different conditions of Europe.53 The disease was more frequently fatal than it is today; syphilis was a major killer in Europe during the Renaissance.54 The first cholera pandemic began in Bengal, then spread across India by 1820. Ten thousand British troops and countless Indians died during this pandemic.55 Between 1736 and 1834 only some 10% of the East India Company's officers survived to take the final voyage home.56 Waldemar Haffkine, who mainly worked in India, is considered the first microbiologist to have developed and used vaccines against cholera and bubonic plague, in the 1890s.
As early as 1803, the Spanish Crown organised a mission (the Balmis expedition) to transport the smallpox vaccine to the Spanish colonies and establish mass vaccination programs there.57 By 1832, the federal government of the United States had established a smallpox vaccination program for Native Americans.58 Under the direction of Mountstuart Elphinstone, a program was launched to propagate smallpox vaccination in India.59 From the beginning of the 20th century onwards, the elimination or control of disease in tropical countries became a driving force for all colonial powers.60 The sleeping sickness epidemic in Africa was arrested by mobile teams systematically screening millions of people at risk.61 In the 20th century, the world saw the biggest increase in its population in human history, as medical advances lowered mortality rates in many countries.62 The world population has grown from 1.6 billion in 1900 to over 7 billion today.
- Africa (see Whites in Africa)
- South Africa (White South African): 9.6% of the population64
- Namibia (White Namibians): 6% of the population, of which most are Afrikaans-speaking, in addition to a German-speaking minority.65
- Réunion estimated to be approx. 25% of the population66
- Zimbabwe (Whites in Zimbabwe)
- Algeria (Pied-noir)67
- Kenya (Whites in Kenya)
- Mauritius (Franco-Mauritian)
- Côte d'Ivoire (French people)69
- Canary Islands (Spaniards), known as Canarians.
- Seychelles (Franco-Seychellois)
- Saint Helena (UK) including Tristan da Cunha (UK): predominantly European.
- Swaziland: 3% of the population71
- Siberia (Russians, Germans and Ukrainians)7273
- Kazakhstan (Russians in Kazakhstan, Germans of Kazakhstan): 30% of the population7475
- Uzbekistan (Russians and other Slavs): 5.5% of the population75
- Kyrgyzstan (Russians and other Slavs): 13.5% of the population757677
- Turkmenistan (Russians and other Slavs): 4% of the population7578
- Tajikistan (Russians and other Slavs)7579
- Hong Kong80
- People's Republic of China (Russians in China)
- Christmas Island: approx. 20% of the population.
- Latin America (see White Latin American)
- Argentina (European Immigration to Argentina): 97% of the population 81
- Bolivia: 15% of the population 82
- Brazil (White Brazilian): 47.3% of the population 83
- Chile (White Chilean): 52.7%-64% of the population.848586
- Colombia (White Colombian): 20% of the population 87
- Costa Rica88
- Cuba (White Cuban): 65% of the population89
- Dominican Republic: 16% of the population 90
- Ecuador: 7% of the population91
- El Salvador: 12% of the population92
- Mexico (White Mexican): 9% or ~17% of the population,9394 with a further 70-80% counted as Mestizos.9596
- Nicaragua: 17% of the population97
- Panama 10% of the population98
- Puerto Rico approx. 80% of the population 99
- Peru (European Peruvian): 15% of the population 100
- Paraguay approx. 20% of the population 101
- Venezuela (White Venezuelan): 42.2% of the population102
- Uruguay: 88% of the population 103
- Rest of the Americas
- Bahamas: 12% of the population104
- Barbados (White Barbadian): 4% of the population105
- Bermuda: 34.1% of the population106
- Canada: 80% of the population 107
- Falkland Islands, mostly of British descent.
- French Guiana: 12% of the population108
- Greenland: 12% of the population109
- Martinique: 5% of the population110
- Saint Barthélemy111
- Trinidad and Tobago: 0.6% of the population112
- United States of America (European American): 72.4% of the population, including Hispanic and Non-Hispanic Whites.
- Oceania (see Europeans in Oceania)
- African independence movements
- Age of discovery
- American Empire
- Chartered companies
- Christianity and colonialism
- Civilising mission
- Cold War
- Colonial cinema
- Colonial Empire
- Colonial wars
- Colonies in antiquity
- Colonial India
- Concession (territory)
- Empire of Liberty
- European colonization of the Americas
- German eastward expansion
- Global Empire
- Historical migration
- Historiography of the British Empire
- Impact and evaluation of colonialism and colonization
- Imperialism in Asia
- Manifest Destiny
- Mongol invasions
- Muslim conquests
- Ottoman wars in Europe
- Right to exist
- Settler colonialism
- Sino-African relations
- Soviet Empire
- Soviet occupations
- Special Committee on Decolonization
- Stranger King (Concept)
- Transmigration program
- Tropical geography
- Turkic migration
- United Nations list of non-self-governing territories
- French colonial flags
- List of French possessions and colonies
- List of Muslim empires and dynasties
- "Colonialism". Collins English Dictionary. HarperCollins. 2011. Retrieved 8 January 2012.
- "Colonialism". Merriam-Webbster. Merriam-Webster. 2010. Retrieved 5 April 2010.
- Margaret Kohn (2006). "Colonialism". Stanford Encyclopedia of Philosophy. Stanford University. Retrieved 5 April 2010.
- Tignor, Roger (2005). Preface to Colonialism: a theoretical overview. Markus Weiner Publishers. p. x. ISBN 1-55876-340-6. Retrieved 5 April 2010.
- Osterhammel, Jürgen (2005). Colonialism: a theoretical overview. trans. Shelley Frisch. Markus Weiner Publishers. p. 15. ISBN 1-55876-340-6. Retrieved 5 April 2010.
- Osterhammel, Jürgen (2005). Colonialism: A Theoretical Overview. trans. Shelley Frisch. Markus Weiner Publishers. p. 16. ISBN 1-55876-340-6. Retrieved 5 April 2010.
- Bosma U., Raben R. Being "Dutch" in the Indies: a history of creolisation and empire, 1500–1920 (University of Michigan, NUS Press, 2008) p. 223. ISBN 9971-69-373-9 Googlebook
- Gouda, Frances Dutch Culture Overseas: Colonial Practice in the Netherlands Indies 1900-1942. (Publisher: Equinox, 2008) ISBN 978-979-3780-62-7. Chapter 5, p. 163.
- The Le Dynasty and Southward Expansion
- "The Trusteeship Council - The mandate system of the League of Nations". Encyclopedia of the Nations. Advameg. 2010. Retrieved 8 August 2010.
- King, Russell (2010). People on the Move: An Atlas of Migration. Berkeley, Los Angeles: University of California Press. pp. 34–5. ISBN 0-520-26151-8.
- Pagden, Anthony (2003). Peoples and Empires. New York: Modern Library. pp. xxiii. ISBN 0-8129-6761-5.
- "Painter, J. & Jeffrey, A., 2009. Political Geography, 2nd ed., Sage. “Imperialism” p. 23 (GIC).
- Gallaher, C. et al., 2008. Key Concepts in Political Geography, Sage Publications Ltd. "Imperialism/Colonialism", p. 5 (GIC).
- Dictionary of Human Geography, "Colonialism"
- The Labour Government 1945-51 by Denis Nowell Pritt
- In the Emerging System of International Criminal Law: Developments and Codification, Brill Publishers (1997) at page 90, Sunga traces the origin of the international movement against colonialism, and relates it to the rise of the right to self-determination in international law.
- Walter Rodney. How Europe Underdeveloped Africa. East African Publishers. pp. 149, 224.
- Henry Schwarz; Sangeeta Ray (2004). A Companion To Postcolonial Studies. John Wiley & Sons. p. 271.
- Liberal Anti-Imperialism, professor Daniel Klein, 1.7.2004
- Stoler, Ann L. (Nov. 1989). "Making Empire Respectable: The Politics of Race and Sexual Morality in 20th-Century Colonial Cultures". American Ethnologist 16 (4): 634–660.
- Fee, Elizabeth (1979). "Nineteenth Century Craniology: The Study of the Female Skull". Bulletin of the History of Medicine 53: 415–53.
- Fausto-Sterling, Anne (2001). "Gender, Race, and Nation: The Comparative Anatomy of "Hottentot" women in Europe, 1815-1817". In Muriel Lederman and Ingrid Bartsch. The Gender and Science Reader (Routledge).
- Stepan, Nancy (1993). Sandra Harding, ed. The "Racial" Economy of Science (3 ed.). Indiana University press. pp. 359–376. ISBN 9780253208101.
- Come Back, Colonialism, All is Forgiven
- Lovejoy, Paul E. (2012). Transformations of Slavery: A History of Slavery in Africa. London: Cambridge University Press.
- Ferguson, Niall (2003). Empire: How Britain Made the Modern World. London: Allen Lane.
- Thong, Tezenlo. "Civilized Colonizers and Barbaric Colonized: Reclaiming Naga Identity by Demythologizing Colonial Portraits", History and Anthropology 23, no. 3 (2012): 375-397.
- "Strabo's Geography Book II Chapter 5 "
- Pagden, Anthony (2003). Peoples and Empires. New York: Modern Library. p. 45. ISBN 0-8129-6761-5.
- Pagden, Anthony (2003). Peoples and Empires. New York: Modern Library. p. 5. ISBN 0-8129-6761-5.
- "White Servitude", by Richard Hofstadter, Montgomery College
- King, Russell (2010). People on the Move: An Atlas of Migration. Berkeley, Los Angeles: University of California Press. p. 24. ISBN 978-0-520-26124-2.
- King, Russell (2010). People on the Move: An Atlas of Migration. Berkeley, Los Angeles: University of California Press. pp. 26–7. ISBN 978-0-520-26124-2.
- Pagden, Anthony (2003). Peoples and Empires. New York: Modern Library. p. 6. ISBN 0-8129-6761-5.
- White, Matthew (2012). The Great Big Book of Horrible Things. London: W.W. Norton & Co. Ltd. p. 427. ISBN 978-0-393-08192-3.
- King, Russell (2010). People on the Move: An Atlas of Migration. Berkeley, Los Angeles: University of California Press. p. 35. ISBN 978-0-520-26124-2.
- Willems, Wim "De uittocht uit Indie (1945–1995), De geschiedenis van Indische Nederlanders" (Publisher: Bert Bakker, Amsterdam, 2001). ISBN 90-351-2361-1
- Crul, Lindo and Lin Pang. Culture, Structure and Beyond, Changing identities and social positions of immigrants and their children (Het Spinhuis Publishers, 1999). ISBN 90-5589-173-8
- "British Nationality Act 1981". The National Archives, United Kingdom. Retrieved February 24, 2012.
- Seljuq, Affan (July 1997). "Cultural Conflicts: North African Immigrants in France". The International Journal of Peace Studies 2, (2). ISSN 1085-7494. Retrieved February 24, 2012.
- Kenneth F. Kiple, ed. The Cambridge Historical Dictionary of Disease (2003).
- Alfred W. Crosby, Jr., The Columbian Exchange: Biological and Cultural Consequences of 1492 (1974)
- Smallpox - The Fight to Eradicate a Global Scourge, David A. Koplow.
- "The first smallpox epidemic on the Canadian Plains: In the fur-traders' words", National Institutes of Health.
- The Story Of... Smallpox – and other Deadly Eurasian Germs.
- Stacy Goodling, "Effects of European Diseases on the Inhabitants of the New World"
- "Smallpox Through History". Archived from the original on 2009-10-31.
- New Zealand Historical Perspective
- How did Easter Island's ancient statues lead to the destruction of an entire ecosystem?, The Independent.
- Fiji School of Medicine
- Meeting the First Inhabitants, TIMEasia.com, 21 August 2000.
- Genetic Study Bolsters Columbus Link to Syphilis, New York Times, January 15, 2008.
- Columbus May Have Brought Syphilis to Europe, LiveScience
- Cholera's seven pandemics. CBC News. December 2, 2008.
- Sahib: The British Soldier in India, 1750-1914 by Richard Holmes.
- Dr. Francisco de Balmis and his Mission of Mercy, Society of Philippine Health History.
- Lewis Cass and the Politics of Disease: The Indian Vaccination Act of 1832.
- Smallpox History - Other histories of smallpox in South Asia.
- Conquest and Disease or Colonialism and Health?, Gresham College | Lectures and Events.
- WHO Media centre (2001). Fact sheet N°259: African trypanosomiasis or sleeping sickness.
- The Origins of African Population Growth, by John Iliffe, The Journal of African History, Vol. 30, No. 1 (1989), pp. 165-169.
- Ethnic groups by country. Statistics (where available) from CIA Factbook.
- South Africa: People: Ethnic Groups. World Factbook of CIA
- Namibia: People: Ethnic Groups. World Factbook of CIA
- "Anthropometric evaluations of body composition of undergraduate students at the University of La Réunion".
- "Former settlers return to Algeria". BBC News. July 29, 2006.
- Botswana: People: Ethnic Groups. World Factbook of CIA
- "Ivory Coast - The Economy". Library of Congress Country Studies.
- Senegal, About 50,000 Europeans (mostly French) and Lebanese reside in Senegal, mainly in the cities.
- Swaziland: People: Ethnic Groups. World Factbook of CIA
- Fiona Hill, Russia — Coming In From the Cold?, The Globalist, 23 February 2004
- "Siberian Germans".
- "Migrant resettlement in the Russian federation: reconstructing 'homes' and 'homelands'". Moya Flynn. (1994). p.15. ISBN 1-84331-117-8
- Robert Greenall, "Russians left behind in Central Asia", BBC News, 23 November 2005.
- Kyrgyzstan: People: Ethnic Groups. World Factbook of CIA
- "The Kyrgyz – Children of Manas.". Petr Kokaisl, Pavla Kokaislova (2009). p. 125. ISBN 80-254-6365-6.
- Turkmenistan: People: Ethnic Groups. World Factbook of CIA
- Tajikistan - Ethnic Groups. Source: U.S. Library of Congress.
- HK Census. "HK Census." Statistical Table. Retrieved on 2007-03-08.
- Argentina: People: Ethnic Groups. World Factbook of CIA
- Bolivia: People: Ethnic Groups. World Factbook of CIA
- Brazil: People: Ethnic Groups. World Factbook of CIA
- Fernández, Francisco Lizcano (2007). Composición Étnica de las Tres Áreas Culturales del Continente Americano al Comienzo del Siglo XXI. UAEM. ISBN 978-970-757-052-8.
- Informe Latinobarómetro 2011, Latinobarómetro (p. 58).
- Genetic epidemiology of single gene defects in Chile.
- Colombia: People: Ethnic Groups. World Factbook of CIA
- "Costa Rica; People; Ethnic groups". CIA World Factbook. Retrieved 2007-11-21. "white (including mestizo) 94%" = 3.9 million whites and mestizos
- "Tabla II.3 Población por color de la piel y grupos de edades, según zona de residencia y sexo". Censo de Población y Viviendas (in Spanish). Oficina Nacional de Estadísticas. 2002. Retrieved 2008-10-13.
- Dominican Republic: People: Ethnic groups. World Factbook of CIA
- "Ecuador: People; Ethnic groups". CIA World Factbook. Retrieved 2007-11-26.
- El Salvador: People: Ethnic Groups. World Factbook of CIA
- "Mexico: People; Ethnic groups". CIA World Factbook. Retrieved 2010-01-24.
- "Mexico: Ethnic Groups". Encyclopædia Britannica.
- Mexico: People: Ethnic Groups. World Factbook of CIA
- Mexico - Britannica Online Encyclopedia
- "Nicaragua: People; Ethnic groups". CIA World Factbook. Retrieved 2007-11-15.
- "Panama; People; Ethnic groups". CIA World Factbook. Retrieved 2007-11-21.
- Puerto Rico: People: Ethnic Groups World Factbook of CIA
- Peru: People: Ethnic Groups. World Factbook of CIA
- 8 LIZCANO
- Resultado Basico del XIV Censo Nacional de Población y Vivienda 2011 (p. 14).
- Uruguay: People: Ethnic Groups. World Factbook of CIA
- Bahamas: People: Ethnic Groups. World Factbook of CIA
- Barbados: People: Ethnic Groups. World Factbook of CIA
- Bermuda: People: Ethnic Groups. World Factbook of CIA
- Canadian Census 2006
- French Guiana: People: Ethnic Groups. World Factbook of CIA
- Martinique: People: Ethnic Groups. World Factbook of CIA
- Fact Sheet on St. Barthélemy
- Trinidad French Creole
- French Polynesia: People: Ethnic Groups. World Factbook of CIA
- American FactFinder - Results
- Brazil: People: Ethnic Groups. World Factbook of CIA
- Cooper, Frederick: Colonialism in Question: Theory, Knowledge, History (2005)
- Getz, Trevor R. and Heather Streets-Salter, eds.: Modern Imperialism and Colonialism: A Global Perspective (2010)
- LeCour Grandmaison, Olivier: Coloniser, Exterminer - Sur la guerre et l'Etat colonial, Fayard, 2005, ISBN 2-213-62316-3
- Lindqvist, Sven: Exterminate All The Brutes, 1992, New Press; Reprint edition (June 1997), ISBN 978-1-56584-359-2
- Nuzzo, Luigi: Colonial Law, European History Online, Mainz: Institute of European History, 2010, retrieved: December 17, 2012.
- Osterhammel, Jürgen: Colonialism: A Theoretical Overview, Princeton, NJ: M. Wiener, 1997.
- Petringa, Maria, Brazza, A Life for Africa (2006), ISBN 978-1-4259-1198-0.
- Stuchtey, Benedikt: Colonialism and Imperialism, 1450-1950, European History Online, Mainz: Institute of European History, 2011, retrieved: July 13, 2011.
- Velychenko, Stephen: The Issue of Russian Colonialism in Ukrainian Thought.Dependency Identity and Development, AB IMPERIO 1 (2002) 323-66 .
- Wendt, Reinhard: European Overseas Rule , European History Online, Mainz: Institute of European History, 2011, retrieved: June 13, 2012.
- Conrad, Joseph, Heart of Darkness, 1899
- Fanon, Frantz, The Wretched of the Earth, Preface by Jean-Paul Sartre. Translated by Constance Farrington. London: Penguin Book, 2001
- Kipling, Rudyard, The White Man's Burden, 1899
- Las Casas, Bartolomé de, A Short Account of the Destruction of the Indies (1542, published in 1552).
- Liberal opposition to colonialism, imperialism and empire (pdf) - by professor Daniel Klein
- Colonialism entry by Margaret Kohn in the Stanford Encyclopedia of Philosophy
- Globalization (and the metaphysics of control in a free market world) - an online video on globalization, colonialism, and control.
Reparations for slavery is a proposal in the U.S. for the federal government to pay reparations, in various forms, to the descendants of slaves for the transatlantic slave trade. There is also a newer movement to secure reparations for Africa and African nations, particularly from Western ex-colonial powers.
The arguments surrounding reparations are based both on the formal discussion of reparations and on the actual land reparations granted to African Americans that were later taken away. In 1865, after the South was defeated in the American Civil War, General William Tecumseh Sherman issued Special Field Orders, No. 15, which set aside tracts of land in the Sea Islands and around Charleston, South Carolina for the exclusive use of Black people who had been enslaved. Around 40,000 freed slaves were settled on 400,000 acres (1,600 km²) in Georgia and South Carolina. However, President Johnson reversed the order after Lincoln was killed, and ownership of the land reverted to Whites. In 1867, Thaddeus Stevens sponsored a bill for the redistribution of land to African Americans, but it was not passed.
When Reconstruction abruptly ended in 1877, rather than the atrocities of slavery being addressed, a deliberate movement of regression and oppression arose in the southern states. Jim Crow laws were passed in the South to reinforce the inequality that slavery had produced. In addition, white terrorist organizations such as the KKK engaged in a massive campaign of intimidation throughout the South in order to keep African Americans in their prescribed social place. For generations this inequality and injustice were rationalized away in court decisions and in public discourse, and reparations dropped off the radar screen.
So give us our 40 acres and a mule.
Much of this information is from http://en.wikipedia.org/wiki/Slavery_reparation
anonymous: All valid points. However, most people in this country can only trace their ancestors here back a few generations, which is not far enough for them to have owned slaves. Ergo, these people should not be made to pay reparations for something they had no part in, since they did not own slaves. If you believe that these people are somehow on the same level as slave owners from further back in the past than their own roots in the US, then please feel free to explain how someone like myself, whose ancestors arrived from Italy about 4 generations ago, would need to pay reparations. Also, I don
anonymous: Oh i
anonymous: Following your line of reasoning the desendants who can trace their history back to African nations that engaged in the common practice of slavery [all of them], should also pay reparations. The question is then who should recieve the money.
Because the forefathers of African Americans [who immigrated here or were taken here] engaged in slavery in Africa and actually provided Europeans with their slaves. The slave trade was widely accepted in Africa and slavery was rampant among ther African tribes themselves.
So the issue is not as black and white as you make it seem. The truth is that no single racial or ethnic group is entirely responsible for the trans Atlantic slave trade. So it
anonymous: As slaves you were nothing but bipedal mules that could be trained to understand English. Be thankful you were given the right to be normal citizens instead of being shipped back to Africa.
Quaker based Agnostic: Put that hand back in your pocket and fill out a job app.
And quoting wiki as a source is as swift as admitting you drink from toilets.
Im not a descendant of slave-owners, slave ship captains, or the KKK. My roots from the period of American slavery lie with the Quakers, and while yes my blood helped escaped slave to a better life I can make no claim to any of that service. And by corollary, none of the people currently in government, the general population, or even the ignorant assbackwards Klan that still exists can or should be held responsible for that time.
Confused: You take your 40 acres and a mule, and see what you can do. Happy plowing.
anonymous: Your insistence on reparations is a manifestation of your deep-seeded feelings of inferiority, and if granted, will only prolong the central beliefs of racism. You demand that someone give you something for whatever sophistical reason simply because you are convinced you cannot earn it yourself.
My ancestors were Scottish coalminers in West Virginia and were horribly exploited for generations by mining companies who kept them from unionizing and securing fair wages. But I spend no time demanding reparations because I am confident that I can make a living on my own, and let the past be the past.
The reparation argument is in character much like a slave begging his master for more food, rather than like the emmanicipated slave who embraces (and is completely responsible for) his freedom.
anonymous: There are economically disadvantaged whites in this country who couldnt afford to go to law school, medical school or dental school because theres no "set-asides" for them, no affirmative action, no NAACP and no generous back pay/front pay and other monetary wards from the EEOC for the white victims of job discrimination such as those who were unjustly denied good jobs due to age, due to being disabled or due to being Jewish. There is no scholarships for poor white people, nothing - zilch. But there are plenty of blacks doing quite well because of those programs - better than the poor whites who have no recourse for getting a chance for good jobs, law school admissions, med school admissions - and the scholarships and grants to pay for it too - due to preferential hiring and college admission practices. Why should those getting favored and preferential treatment over others who are disadvantaged lay claim to even more free money that comes out of everybody elses pockets? If you think you have it so bad as a black descendent of slavery, talk to the African slaves living in the Sudan today, the women and children in Africa subjected to FGM, and the African men whose limbs were amputated for punishment under Sharia Law imposed by radical Muslim African warlords. Black Americans should be grateful they were born here - even if their ancestors were slaves because they certainly have it better here than they would back in their "Motherland". If anything, you owe America a debt of gratitude for the horrors you are being spared today because your ancestors came here enslaved.
Da 1 and only: this is very important.All this information is really good and people can learn about important things on whats been going on in the world
Warrior: Understand this, African Reparations for Slavery is the COMPENSATION FOR UNPAID WAGES that are WELL OVER DUE and to be paid by the Governments that benefited off the backs of African Slaves. It
anonymous: FYI, the average black american earns approximatley 10 xs the income of a black in the african nations that slaves came from. If anything the slave traders should be given the reparations for bringing them to a more wealthy nation.
fighting for justice and truth: the majority of you people that have written on this page so far are the most unfair and racist people I have ever came across!!!! it is a great disappointment to humanity that there are still people in this day an age that can be so blatantly ignorant!!! how can any people of anywhere rightly uphold injustice????? to even suggest that slave traders should be compensated is very sickening and plain evil!!! the reason why your countries are wealthy today is because of the african slaves who unwillingly worked for free, who were forced into hard labour!! and as far as black americans today making big income that is only a small (but somewhat growing) percentage of the population, that may be someone like you see on tv. a lot of us are struggling because of things like racism discimination that you people on here seem to glorify!! as a black i do not want to be a victim and i refuse it but i know that i have been taken advantage of and have experience some straight forward racism but you guys are the worst!! i know that the white race that benefited from slavery has had their advantages being overly rich with blacks inheritance. if there are poor whites today that have slave traders as ancestors, they should know their people at one point had riches, and their circumstances today are a result of their ancestors lack of proper investing... i see it as the tide is turning, whatever you sow that is what youll reap. it is very sad that generational curses effect the offspring generations but it is for those individuals to repent of the sins of the forefathers, enlighten themselves and do the good works of the Lord.
ONE LOVE - LETS GET TOGETHER: I BEEN READING THE COMMENTS HERE - I LAUGH AT THE STUPIDITY OF PEOPLE - IN COURT WHEN AN EMPLOYER (SAY THE SLAVE TRADE GOVERNMENTS) IS PROVEN TO OWE PAY TO AN EMPLOYEE (SAY THE SLAVES OR SLAVE DESCENDANTS) - THE EMPLOYER HAS NO CHOICE BUT TO PAY OR THE EMPLOYER WOULD BE IN VIOLATION OF THE LAW - I SEE IT NO DIFFERENT WITH ANY GOVERNMENT - BLACKS OF SLAVES DESERVE REPARATIONS PERIOD - THE GOVERNMENTS FOR CENTURIES HAVE LIED TO THEM AND INTURN TRY TO BRAINWASH PEOPLE TO THINK THAT EVERYTHING IS EQUAL - THAT BLACKS GET THEIR SHARE FROM BEING ABLE TO STAY IN AMERICA AN HAVE THE OPPORTUNITY TO RUN THE RAT RACE TO GAIN WEALTH - THAT FYI WAS THEIRS TO BEGIN WITH - IT IS GARBAGE AND A LOAD OF PROPAGANDA THAT I HOPE FOR THE SAKE OF OUR NATION THAT MOST PEOPLE DONT REALLY BELIEVE - REPARATE BLACKS SO WE CAN BEGIN TO RIGHT OUR WRONGS AND GET MORE FOCUS ON OTHER ISSUES - PEACE
anonymous: Yeah, give us our 40 mules and an acre.
William Randee: Well, slavery was legal for a time. So how can you persecute people for taking advantage of a legal enterprise?
Hell, I wish I could buy a few slaves today since Im tired of paying some greedy lawn-care company to take care of my lawn. A couple of slaves to clean the house once in a while wouldnt be bad either.
You dont have to be brutal to them its just that theyll work for you till you get tired of them. Unless you get a couple of cute slaves then you can keep them for a while.
Heh,heh, take them on a slave camping trip, or to slave day at disneyland, or buy them a slave burger and fries once and a while.
I know what Im saying a lot of people wont like but thats okay. I can say what I want. -b.r.
anonymous: slavery as a legal enterprise?...that is a joke...so if genocide was legal that wud make it all right?..if u r that immoral that u can not see at least the basics of right from wrong then may God have mercy on ur soul....and yes i do not agree with what ur saying caus ur obviously prejudice an ur silly racist comments shows ur hatred an lack of intelligence...hey think about it, who made it legal at that time but white racist people who were in power...look at when saddam was in power, he put in place horrible rules and regulations that oppressed the people, he murdered many, even families...did that make it right just because it was legal in his country at that time? in the past hitler did the same and even much worse, he literally try to exterminate millions of the jews, killed deformed newborn babies and disabled children, ...did that make it right because it was legal in his country at that time?... according to u, we should not hold these kind of individual people, extremist groups or corrupted governments in account for their wicked acts just because they were just taking advantage of a legal enterprise at that time?.......can u really believe this??.. u can say what u will but ur not right on this one...
Nubian: The whole offer of 40 acres an a mule to me was and still is an insult to the black nation that they brought under bondage. Yes back then 40 acres and a mule had some worth but it still wasnt even enough compared to how much they benefited off our people. Its like profiting $300 dollars and u say u will give the workers a combined total of $3 dollars that they must share amongst themselves. They owed us much greater then that and now the amount has increased. they never paid it and we have never forgotten it.
cassava dumplin: You Know What If In Africa Slavery Was So Much Then Why When the Europeans Came They Did Not Free The Enslaved Africans? Why Did They Ship Them Off To A New World And Continued Enslaving Them There? And Why Did the Europeans Took African Families That Were Not Even Enslaved In Africa And Shippped Them Too And Enslaved Them In A New World? You Want To Know Why? For One Thing They Did Not Come To Make Peace But To Divide And Rule. Africa Was Just Another Conquest To Them. They Hated Our Culture, Our Skin Color, Our Hair Texture, But They Lusted After Our Gold, Rubies, Diamonds And Perls. They Saw Us As Lesser Then Dogs But Valuable As Cattel. There Countries Became Wealthy Because Of Our People. Our People Became Poor Because Of Them. They Did Not Want Our People To Get Compensated Then And They Dont Want Us Who Remain Of Their Bloodline To Get Reparations Now. But It Is Said That There Is A Season For Everything.
Nubian: Well said cassava.
anonymous: My ancestors were Scottish. They were enslaved by the romans and a the vikings for hundreds of years. Im gonna set up a website demanding money for the horrible suffering my "people" endured at the hands of these invaders. I want some money from the Italians for what happened centuries ago!!!! This issue is ridiculous, if anybody should pay reparations its the Africans that sold thier people in the first place. Go demand money from them.
Nubian: You seem to not understand what Reparations are and you also dont know much of anything about African History of Slavery. And your talking about Governments that no longer exist, Vikings and Romans. Were talking about the same Governments that existed then in slavery times, still exist today. These Governments havnt dismanteled and become new countries. So your argument is unfounded.
anonymous: this is a great site to learn about african history of slavery: http://www.africanholocaust.net
anonymous: A Commentary by Oscar L. Beard, Consultant in African Studies 24 May 1999:
"The single most effective White propaganda assertion that continues to make it very difficult for us to reconstruct the African social systems of mutual trust broken down by U.S. Slavery is the statement, unqualified, that, "We sold each other into slavery." Most of us have accepted this statement as true at its face value. It implies that parents sold their children into slavery to Whites, husbands sold their wives, even brothers and sisters selling each other to the Whites. It continues to perpetuate a particularly sinister effluvium of Black character. But deep down in the Black gut, somewhere beneath all the barbecue ribs, gin and whitewashed religions, we know that we are not like this...." can read more here: http://www.africawithin.c om/maafa/did_we_sell.htm there are many great sites that discuss about the real africa and what really occured in african slavery, you just have to look for them if you want to really know the truth. the facts are there not heresy like most are listening to and believing.
anonymous: anybody ever herd of, Caucasians United for Reparations & Emancipation? it is a true blessing to see people of all races united for the right cause, african reparations for slavery: http://www.reparationsthecure.org
Dave Bart: Sure. Alot of the wealthiest families in the South (and those who migrated North afterward) made their money on the backs of your ancestors. Unfortunately, we can
Insightful: Now you are wrong for saying that, you got your information screwed up. See you do not have to go back as far as which plantations our ancestors were enslaved on because that is totally irrelevant. From you can trace back your roots to an actual african slave under the European Government rule, you are entitle to african reparations for slavery, period. And is plain stupid to track down slave owners descendants because you surely can not hold them directly responsible for what there ancestors did despite even if some of them have racist views and would act on them if they could today. The mistake a lot of people that think like you are making is that you guys are putting the blame on the wrong people and on the wrong things.
It is the European Governments that are responsible and even if all those original slave owners were alive today they would have to compensate as well as the entire European Governments whom made it legal. It is not so much the claim to the lands in which our ancestors were enslaved but the many forms of compensation that is due to us. Simple as that.
anonymous: As in other parts of the world Whites enslaved and sold other of their race for centuries in europe. Race was not an issue to those feudal lords. But suddenly when anyone says that the african people did the same, it becomes an insult to their character. Once again race might not have been an issue to the people of africa who are guilty of that same thing. Why should I believe the poeple of africa were special in that they did not practice the trade slaves that looked like them. This character you defend seems to only be racism.
anonymous: The reason I would not want to take reparation money from the Italians for what their ancestors, the Ancient Romans, did is not because their country is no longer called the Roman Empire. It is because they are completely different people with new policies. The current Italian people,who are paying the taxes to the government, are decent people who have never owned slaves, and I would not want to take from them and their families whether their country was still called the Roman Empire or not.
cassava dumplin: Look here, either way you look at it two wrongs does not make it a right. When those whites came to Africa do you honestly believe that they enslaved my people because we were doing the same? They had their own agenda from the start and they did not understand our ways or our culture of life nor did they care. You need to study real African History and you will know that the when the whites came to Africa they enslaved the innocent as well as the guilty because in Africa a lot of the slaves we had were like prisoners usually to the victims, those whom committed crimes like thievery. To understand is to know, and you can not fully know unless you seek the true knowledge of events.
Reasonings: Racism was the cause for African Slavery as well as Greed. The heavy debt that those whites whom controlled the European Governments at that time left is still presently owed and it is not fair to say that it should not be paid because you have to realize that then they wouldnt pay it for the obvious fact that they were racist! So if the European Governments continue not to pay it, its to say they are still racist today and are condoning that practice that came from their forefathers.
GottaLetYallKnow: one thing we gotta remember is that when it comes 2 us Blacks a lot of these peeple love 2 say we are not entitle 2 anything, but lets speak of the jews OR native indians and then thats a different story. is it fair? NO! but life isn
fortwynt: i hate to break it to you, but as lousy as it is, it wasnt a legal "promise" to begin with, and even if it were it was reversed Legally...if you are living now you never saw a day of slavery in your life, save the sort of slavery that ALL people experience, and not to mention only around 6 percent of Americans owned slaves to begin with and that 6 percent didnt include my family...so frankly I dont owe you a penny...while it was sad, the whole slavery mess, those slaves and their immediate families are dead....if Hitlers family were tracked down would they be liable to pay for the misdeeds of him? no. If the slaves would have been given something for their trouble THEN thats one thing I would agree with, but waiting a hundred or more years and then forcing the decendants of those slave owners to pay for something their great great great grandfathers did is just ridiculous...You werent a slave, my family did not own slaves....end of story....besides even if the psychotic notion of reparations could be made reality, who would get them? All Blacks? Even the ones of non-african descent (many), what about the whites with black ancestry? In other words it would never work.
anonymous: I SEE IT ALL AS A NO CHOICE DEAL WHEN THE WHITE EUROPEANS WENT TO AFRICA BECAUSE THE SITUATION WAS THIS: THE WHITES EITHER TOOK WHAT THEY WANTED BY FORCE WHICH CAUSED A LOT OF BLOODSHED OR WE SEMICOOPERATED WITH THEIR BULLYING.
anonymous: If your skin color is white, by that I mean you are dominately caucasian, and you can trace back your ancestry to an actual african slave under european rule then you too should be getting some level of reparations. This is not so much an issue of skin color anymore as it was in the beginning, for there are black asians and black indians, people have mixed heritage now so it would be dealt with on a case by case basis. And another thing we did not wait hundreds of years to claim reparations, that is an outright insult because the fights for African Reparations for Slavery started from the times of Slavery but of course it was ignored and still being ignored.
ANSWERS TO YOUR QUESTIONS: Who should pay out Reparations? All of the European Governments that directly benefited off the backs of African Slaves. Who should get this Reparations? Should have been our African Enslaved Ancestors but now it is the nearest of kin which are the African Descendants of Slaves that can trace back there lineage to actual African Slave Survivors. How should Reparations be paid? In more then one form and lasting for a certain period of time, from debt relief in Africa to free education and so on... that is just a few suggestions on how these monies should be spent; for the greater good towards equality and justice. How should Reparations pay out be done? Well one good way is the European Governments could set up a special private trust for only African Descendants of Slaves that will easily be accessable to us. The trust should be financed by funds drawn annually from the general revenue of all these European Governments for just a certain period of time and a certain set amount that can all be negotiated.
anonymous: If it helps promote emotional healing and social harmony then Im for reparations regardless of whether or not all those benifitting are actually descended from slaves and whether or not you have a legal claim to it because of the actual slavery. A gesture of concern in the form of voluntary slavery reparations might bring closure and reduce emotional disturbances of people who beleive their descendants were mistreated because of african slavory.
anonymous: Does the amount that these governments owe in reparations depend on their current level of wealth?
Nubian: No, see I personal believe it dosnt matter on their current level of wealth that dictates how much they will payout. However a lot of those Governments that were involved in Slavery more are the wealthiest nations today. So as it was said previously it all can be discussed and worked out.
Art Avedisian: Did we all forget that it was the African trible Chiefs that SOLD there people into slavery for gold. What are you people thinking. My ancesters did not have slaves, they were not even in this country during slavery, why should my money be used for this. I think not.
anonymous: Fuck you, no one living did any slave labor, so stop bitching and do some work and earn your money
anonymous: Why not just offer a one ticket back to Africa? That would put everything back the way it was before the ancestors were brought from Africa. You might find the decendants of those who caught and sold your ancestors off still in Africa.
Look to the future: There was a very relevant issue brought up by a Scot that he should claim compensation from the Vikings and the Romans for the enslaving of his ancestors.
Many people countered this argument with the equally relevant point that the Roman Empire no longer exists in its current form, nor the Norwegians or Swedes. The government of these nations is made up of completely different people, with different policies and opinions.
Why is this any different to the case in point? To bring in an even more contemporary example, to use your logic would imply that the Kurds now have the right to ask for compensation from the new Iraqi government.
Surely I don
anonymous: Die in a fucking fire, you stupid fucking niggers.
anonymous: Who should pay reparitions?
How about the black Africans who originally kidnapped and sold the slaves to white slave traders?
anonymous: HEY THE EUROPEAN GOVERNMENTS HAVE MANY BIG OVERDUE DEBTS ESPECIALLY TO BLACKS, LIKE IT OR NOT THIS IS THE TRUTH AND A STRAIGHT UP FACT! WHAT THE EUROPEAN GOVERNMENTS ALWAYS TRYING TO DO IS AVOID AND IGNORE THE ISSUE THROUGH USEAGE OF PROPOGANDA SO THEY DO NOT PAY THEIR BILLS! IT IS NOT FAIR WHAT THESE RACIST GOVERNMENTS DID TO BLACK PEOPLE AND ARE YET TO PAYOUT A CENT! AND REALIZE THAT BLACKS ARE NOT ONLY IN AMERICA BUT ALL OVER THE WORLD SO STOP LOOKING AT JUST LOCAL BUT THE GLOBAL ISSUE OF AFRICAN REPARATIONS! THE REAL REASON WHY THESE RACIST GOVERNMENTS DONT WANT TO PAY IS BECAUSE THERE ARE SO MANY BLACK PEOPLE! IF THEY HAD SUCEEDED IN WIPEING OUT THE MAJORITY OF US LIKE WHAT FOR EXAMPLE THEY DID WITH THE NATIVE INDIANS I BET THEN THEY WOULD PAY BECAUSE THERE WOULD BE A VERY FEW NUMBER OF US LEFT! THE HATRED I SEE IN HERE JUST PROVES THE SLAVE MASTER MENTALITY THE WHITE RACISTS HAVE SO OF COURSE THEY WILL FIGHT BLACK REPARATIONS, THEY HATE US SO THEY DO NOT WANT US TO GET WHAT IS RIGHTFULLY OURS!
array: you people in here talking about blacks sold out blacks do not know or understand black history at all. you try see what blacks did but you did not see what whites did, why is that? why is that everytime they try to point out what the black man did but refuse to acknowledge what the white man did and his part in it? like the white man innocent in what he did to us, that is ridiculous and then try say that we are the cause of us being enslaved that is the most ignorant and insensitive thing I have ever herd. you see what this all is manipulation, they forced and manipulated us into slavery and continue to do so mentally even today, when will it end.
anonymous: This whole argument is so indicative of this insipid compensation culture that is taking over the world right now.
If you look through history, virtually every country owes some other country reparations, using your logic.
In 99% of the cases, no one who was alive at the time, on either side of the claim, are alive today. In my mind, that nullifies the debt. So for exactly the same reason that the Italians don
anonymous: IF THAT IS SO TRUE THEN GO TAKE BACK THE MONIES PAID OUT TO THE NATIVE INDIANS AND THE JEWS AND JAPANESE AND ALL THOSE OTHER GROUPS SINCE THEY DID NOT DESERVE IT BECAUSE THAT IS WHAT YOU ARE BASICALLY SAYING!!!
anonymous: Beat the whites at their own game??? ::laughing:: How can blacks ever win a game when the other opponents always cheating? Honestly we have to stop fool ourselves, it was unfair from the start, blacks had the disadvantage. And as an activist against human suffering I see it as nothing compensating any rightful black person. our governmentes could have done it many years ago so it is better late then never because the situation is getting out of hand. blacks are right it is an injustice.
NO MORE RACISM: The compensation for the Black Holocaust includes all the damages afflicted on the African people and the African continent caused by the European/Arab Governments. African Reparations for Slavery is just apart of that and is a justified cause that regard to race, one should take a stand for it.
anonymous: Reparation has been paid to blacks. Its called welfare and its created generations of people that are solely dependent on the government for their existance. What exactly is another money grab going to fix? Will it stop racism? Will it change anything at all? No, people that feel theyre oppressed will feel that way no matter how much money is dropped into their hand. The fact still remains, you are owed nothing. Paying reparations to the Japanese is different because it was paid to the people directly affected. Let me point this out one more time, YOU WERE NOT A SLAVE. Therefore the government owes you nothing for having enslaved you. To compare the situation blacks have today with slavery is a direct slap in the face to your ancestors. This is nothing but a shameless money grab. Anyone with an ounce of pride would not stand their with their hand out wanting to cash in on someone elses suffering.
anonymous: Lots of valid points, but who would be expected to pay for this? You could print loads of money, make everyone a paper millionaire, the money of course would be worthless and massive unemployment would ensue. Blacks woould be blamed and massive racism would ensue. In a lot of ways reparation has been paid, how much money in the form of aid has fruitlessly been pumped into African countries? Fine, next time they have a disaster or famine charge them the going rate for sorting it out. When Africa needs medicine and technology, charge them the going market rate. Once reparations have been paid, are the next generation then going to get paid as well, and for how long - or at that point do they get told "your daddy got paid reparations but blew it all on a caddilac and crack".
Bob Harris: What a bunch of crap... if anyone is owed reparations its women who for thousands of years were treated as second class citizens. And, Im a guy!
Damned funny request: Good luck finding a mule for every black man, woman and child. And where do you suppose we get all this land from? Invade Canada and plow the tundra?
Then the eskimos will be asking the same thing of your ancestors in 200 years.
AFFER MAYS: I feel that all African Americans that were 18 years of age at the time the Civil Rights act was put into affect should recieve 40 acres anywhere in the U.S., or a couple of Hud homes in desirable zip codes. There is plenty of open land, or Hud homes available to settle this debt. I also feel that the deffinition of the word "NIGGER" should be changed in the dictionary. There are many more niggers in the country of all races than there are dark skinned people. I would like to see all the contributions of African Americans that led to the development of this great country, added to all history books K thru 12. Lastly, I would like a formal nation wide apology from the president of the United States to Afican Americans for all of the horrible things that this government condoned, that has been done to them.
Solas: The American government never keep their side of any bargin- just ask the Native Americans who they slaughtered and stole from.
Kathleen: I just want to say for all the people regardless of your race that think that African americans should get the promise that was going to be given to us if President Lincoln was able to sign that piece of paper you are heartless bastards. For the person that say fuck you niggers, that shows that you are pathetic, envious, and just plain stupid. We as in the Descendants of the slaves deserve what are ancestors are not able to have. You think that just because its 2007 that we are not slaves some kind of way you need to wake the fuck up. We were happy in our native land before we were kidnapped because the white people could not make the USA prosper without us. The white people should thank us for doing what we did even if it was against our will. So should we get our 40 acres and a mule? Hell Yeah we should. So again all the people that are fucking stupid haters on this page go fuck yourself or actually read some african history better yet watch roots stupid asses maybe you will realize that we deserve our fucking 40 acres a mule.
terrell ray: first to the individuals who are out there and you are under anonymous and your throwing that bulshit out there stop it cus your cowards second the state of many of our black males is because of a constant flow of poverty yea ill atmit that there are blacks that are lazy but what the fuck you cant put that on a race and if those people who need the scholar ships and soforth are able to get these reparations cus there is no mor forty acres and a mule you have to pay back wages and the equals to money secondly all this bullshit about how you race was treated when they came to this country GET THE FUCK OUT OF HERE your race made the consious choice to come we were stolen bamboozeled hood winked there has never been an atrocity in america like slavery of african people so shut the fuck up young punk live a little
anonymous: AFRICAN REPARATIONS FOR THE BLACK HOLOCAUST PETITION: www.gopetition.co.uk/online/10151.html
anonymous: Ummm. Who do you think "the government" is? Perhaps you neglected to pay attention that day in civics class. But, umm, YOU are the people who will pay on behalf of the government. One way or the other, the US government is a "pass through" organization. They don
anonymous: Slave reparations are rediculous. You can
Getoveryourselfyouwerenttheonl: Putting your "problems" above others makes me dislike this claim even more. What happened to the First Nation communities was just as devastating and is stil occuring to this day. Remember Residential schools? The death rate at these schools was and still is astounding. What about Reservations, where many still live on today, are often desolate places and many are considered worse than third world countries by the UN. Now let
anonymous: The comment is illegible in Mozilla Firefox due to the sponsored links, dude.
anonymous: The American Recovery and Reinvestment Act of 2009 (Recovery Act) was signed into law by President Obama on February 17th, 2009. It is your 40 acres and a mule. It is an unprecedented effort to jumpstart our economy, create or save millions of jobs, and put a down payment on addressing long-neglected challenges so our country can thrive in the 21st century. The Act is an extraordinary response to a crisis unlike any since the Great Depression, and includes measures to modernize our nations infrastructure, enhance energy independence, expand educational opportunities, preserve and improve affordable health care, provide tax relief, and protect those in greatest need. Read the full bill here.
Copy and paste the following into your web browser for full details on your 40 acres and a mule http://www.usda.gov/wps/portal/?navid=USDA_ARRA
anonymous: Weve given you all your rights back and are trying to treat you as equals. If you want reperations for something thats happened in the past, then everyone will. The descendents of whites dont owe the descendants of blacks anything, because WE didnt enslave YOU. The people demanding reparation dont deserve anything, because they were never slaves.
European/American: If the "African/American" people check the history of this country they will find that there were Black/Americans that owned black slaves. They don
Bug Jr.: I have witnessed here in Ga.African Americans lose there farms because they couldnt ge farm loans like there white neighbors. And in my county the Justice Department found that the black vote did not count in 1999. We were not real citezens in 1999 in Buena Vista Georgia. Court records are in Muscogee county, Columbus Georgia. Justice Department won . What about my votes that didnt count for those years Im suppose to b e a free black man. That is why I go to local Hardware store an they call my boy. We are living in the WHite mans matrix in places like this in Georgia USA. Unless we follow the old Malcolm x By any means , nothing will change out here. Ga republicans refuse to accept President Obama as I honor him. My hero.!
shandreal,johnson: I dont have to go into detail about what happen. America should of paid us to work instead of forced slavery.Our founding fathers had a bad idea on the thought of forcing a people to suffer like that.for the crime of rape today you recieve a life sentence in prison.So why do the government look at this matter like its a joke,JUST PAY US WHAT YOU OWE.We didnt ask for slavery,your government forced us into this, and bad things came out for us(blacks).Now show us you are sorry give us our compensation.
anonymous: I am White slave wantin the same will take polygrapj
anonymous: It easy to say, Just pay the slaves
Te Dotados: Many people believe the above statement is not relevant because the ancestors of those who committed those heinous crimes of slavery took no part in it? Youre wrong and ill give you an example to show why. Take Penn State for example, Sandusky molested a couple of kids but the players of the football team and Joe Paterno suffered the consequences too. Why is that, when it was clear that none of the up to date players had any partaking in what was happening at their school years prior. Nonetheless, both the players and the school suffered repercussions for the actions of a person in their camp. This is the same situation, and although no one prevalent now had anything to do with that situation, there needs to be dues paid. Now to those of you who feel like slavery was slightly beneficial i have a point to make to you too. You should check history more thoroughly before you make half brained suggestions, as when you look at what has transpired in africa, you will notice that it happened because of power hungry Europeans. Now im not bashing every european country and saying that europe as a whole utterly destroyed africa, no, but i am saying that certain europeans did in fact shape africa to what it is today. Africa, as a whole continent, was turned out to greed and hate by colonization of european countries. Its just something you cant deny.
anonymous: I think that we should get what we deserve and what was written in the constitution. If not the 40 acres then the money of what 40 acres is worth
Elements of Drama
by: Christina Sheryl L. Sianghio
Most simply a character is one of the persons who appears in the play, one of the dramatis personae (literally, the persons of the play). In another sense of the term, the treatment of the character is the basic part of the playwright's work. Conventions of the period and the author's personal vision will affect the treatment of character.
Most plays contain major characters and minor characters. The delineation and development of major characters are essential to the play; the conflict between Hamlet and Claudius depends upon the character of each. A minor character like Marcellus serves a specific function, to inform Hamlet of the appearance of his father's ghost. Once that is done, he can depart in peace, for we need not know what sort of person he is or what happens to him. The distinction between major and minor characters is one of degree, as the character of Horatio might illustrate.
The distinction between heroes (or heroines) and villains, between good guys and bad guys, between virtue and vice is useful in dealing with certain types of plays, but in many modern plays (and some not so modern) it is difficult to make. Is Gregers Werle in The Wild Duck, for example, a hero or a villain?
Another common term in drama is protagonist. Etymologically, it means the first contestant. In the Greek drama, where the term arose, all the parts were played by one, two, or three actors (the more actors, the later the play), and the best actor, who got the principal part(s), was the protagonist. The second best actor was called the deuteragonist. Ideally, the term "protagonist" should be used only for the principal character. Several other characters can be defined by their relation to the protagonist. The antagonist is his principal rival in the conflict set forth in the play. A foil is a character who defines certain characteristics in the protagonist by exhibiting opposite traits or the same traits in a greater or lesser degree. A confidant(e) provides a ready ear to which the protagonist can address certain remarks which should be heard by the audience but not by the other characters. In Hamlet, for example, Hamlet is the protagonist, Claudius the antagonist, Laertes and Fortinbras foils (observe the way in which each goes about avenging the death or loss of property of his father), and Horatio the confidant.
Certain writers-- for example, Moliere and Pirandello--use a character type called the raisonneur, whose comments express the voice of reason and also, presumably, of the author. Philinte and the Father are examples of the raisonneur.
Another type of character is the stereotype or stock character, a character who reappears in various forms in many plays. Comedy is particularly a fruitful source of such figures, including the miles gloriosus or boastful soldier (a man who claims great valor but proves to be a coward when tested), the irascible old man (the source of elements in the character of Polonius), the witty servant, the coquette, the prude, the fop, and others. A stock character from another genre is the revenger of Renaissance tragedy. The role of Hamlet demonstrates how such a stereotype is modified by an author to create a great role, combining the stock elements with individual ones.
Sometimes groups of actors work together over a long period in relatively stable companies. In such a situation, individual members of the group develop expertise in roles of a certain type, such as leading man and leading lady (those who play the principal parts), juveniles or ingénues of both sexes (those who specialize as young people), character actors (those who perform mature or eccentric types), and heavies or villains.
The commedia dell'arte, a popular form of the late Middle Ages and early Renaissance, employed actors who had standard lines of business and improvised the particular action in terms of their established characters and a sketchy outline of a plot. Frequently, Pantalone, an older man, generally a physician, was married to a young woman named Columbine. Her lover, Harlequin, was not only younger and more handsome than her husband but also more vigorous sexually. Pantalone's servants, Brighella, Truffaldino, and others, were employed in frustrating or assisting either the lovers in their meetings or the husband in discovering them.
A group of actors who function as a unit, called a chorus, was a characteristic feature of the Greek tragedy. The members of the chorus shared a common identity, such as Asian Bacchantes or old men of Thebes. The choragos (leader of the chorus) sometimes spoke and acted separately. In some of the plays, the chorus participated directly in the action; in others they were restricted to observing the action and commenting on it. The chorus also separated the individual scenes by singing and dancing choral odes, though just what the singing and dancing were like is uncertain. The odes were in strict metrical patterns; sometimes they were direct comments on the action and characters, and at other times they were more general statements and judgments. A chorus in Greek fashion is not common in later plays, although there are instances such as T.S. Eliot's Murder in the Cathedral, in which the Women of Canterbury serve as a chorus.
On occasion a single actor may perform the function of a chorus, as do the aptly named Chorus in Shakespeare's Henry V and the Stage Manager in Thornton Wilder's Our Town. Alfieri in A View from the Bridge functions both as a chorus and a minor character in the action of the play.
The Norton Introduction to Literature (Combined Shorter Edition) Edited by Carl E. Bain, Jerome Beaty & J. Paul Hunter Copyright 1973 by W. W. Norton & Company, Inc. and published simultaneously in Canada by George J. McLeod Limited, Toronto
by: Eduardo M. Tajonera Jr.
The interest generated by the plot varies for different kinds of plays. (See fiction elements on plot for more information regarding plot.) The plot is usually structured with acts and scenes.
Open conflict plays: rely on the suspense of a struggle in which the hero, though perhaps fighting against all odds, is not doomed.
Dramatic thesis: foreshadowing, in the form of ominous hints or symbolic incidents, conditions the audience to expect certain logical developments.
Coincidence: sudden-reversal-of-fortune plays depict climactic ironies or misunderstandings.
Dramatic irony: the fulfillment of a plan, action, or expectation in a surprising way, often opposite of what was intended.
The plot has been called the body of a play and the theme has been called its soul. Most plays have a conflict of some kind between individuals, between man and society, man and some superior force, or man and himself. The events that this conflict provokes make up the plot. One of the first items of interest is the playwright's treatment of the plot and what theme he would draw from it. The same plots have been and will be used many times; it is the treatment that supplies each effort with originality or artistic worth. Shakespeare is said to have borrowed all but one of his stories, but he presented them so much better than any of the previous authors that he is not seriously criticized for the borrowing. The treatment of theme is equally varied.
The same theme or story may be given a very serious or a very light touch. It may be a severe indictment or a tongue-in-cheek attack. It could point up a great lesson or show the same situation as a handicap to progress. The personality, background and social or artistic temperament of the playwright are responsible for the treatment that he gives to his story or theme. We must, therefore, both understand and evaluate these factors.
To endure, a play should have a theme. It is sometimes suggested in the title as in Loyalties, Justice, or Strife, You can't Take It With You, or The Physician in Spite of Himself. At other times it is found in the play itself, as in Craig's Wife when the aunt says to Mrs. Craig, "People who live to themselves are often left to themselves." Sometimes theme is less obvious, necessitating closer study.
If a play has a theme, we should be able to state it in general terms and in a single sentence, even at the risk of oversimplification. The theme of Hamlet is usually stated as the failure of a youth of poetic temperament to cope with circumstances that demand action. The theme of Macbeth is that too much ambition leads to destruction; of A Streetcar Named Desire, that he who strives hardest to find happiness oftentimes finds the least; and of Green Pastures, that even God must change with the universe.
Of course the theme, no matter how fully stated, is not the equivalent of the play. The play is a complex experience, and one must remain open to its manifold suggestions.
As indicated above, the statement of the play in specific terms is the plot presented. Plot and theme should go hand in hand. If the theme is one of nobility, or dignity, the plot must concern events and characters that measure up to that theme. As we analyze many plays, we find that some possess an excellent theme, but are supported by an inconsequential plot. One famous play of this nature, Abie's Irish Rose, held the stage for many years. The theme said: Difference of religion need not hinder a happy marriage. The plot was so thin and both characters and situation so stereotyped that justice was not done to the theme. This weakness was most obvious in the play's revival after twenty years.
Examples of the frequent fault of superior plot and little or no theme come to us in much of the work of our current playwrights. Known for their cleverness in phrasing and timing, and their original, extremely witty conceptions, these plays are often very successful. More often than not, however, they are utterly lacking in a theme or truth that will withstand more than momentary analysis. They are delightful but ephemeral. An audience believes them only while watching in the theatre. Consequently, the authors, although now among our most popular, will not endure as artists, nor are their plays likely to be revived a hundred years hence. They but emphasize more strongly the axiom that a good plot or conflict is needed for transitory success, but a great theme is more likely to assure a play a long life.
Wright, E.A. (1969). A Primer for Playgoers. Englewood Cliffs: Prentice-Hall, Inc., pp. 156-158.
Dialogue provides the substance of a play. Each word uttered by the character furthers the business of the play and contributes to its effect as a whole. Therefore, a sense of DECORUM must be established by the characters, i.e., what is said is appropriate to the role and situation of a character. Also, the exposition of the play often falls on the dialogue of the characters. Remember, exposition establishes the relationships, tensions or conflicts from which later plot developments derive.
Any artificial picture of life must start from the detail of actuality. An audience must be able to recognize it, however changed; we want to check it against experience. Death, for example, is something we cannot know. In Everyman it is represented as embodying some of our feelings about it. So Death is partly humanized, enough, anyway, for us to be able to explore what the dramatist thinks about it.
Conversely, the detail of actuality in realistic drama can be chosen and presented in such a way as to suggest that it stands for more on the stage than it would in life. The Cherry Orchard family, in the excitement of their departure, overlooks their old servant Firs. Placed with striking force at the end of the play, this trivial accident becomes an incisive and major comment on everything the family has done.
So it is with dramatic speech. A snatch of phrase caught in everyday conversation may mean little. Used by an actor on a stage, it can assume general and typical qualities. The context into which it is put can make it pull more than its conversational weight, no matter how simple the words. Consider Othello's bare repetition: 'Put out the light, and then put out the light.' In its context the repetition prefigures precisely the comparison Shakespeare is about to make between the lamp Othello is holding and Desdemona's life and being. Its heavy rhythm suggests the strained tone and obsessed mood of the man, and an almost priestlike attitude behind the twin motions. We begin to see the murder of Desdemona in the larger general terms of a ritualistic sacrifice. Poetry is made of words, which can be in use in more prosaic ways; dramatic speech, with its basis in ordinary conversation, is speech that has had a specific pressure put on it.
Why do words begin to assume general qualities, and why do they become dramatic? Here are two problems on either side of the same coin. The words in both cases depend upon the kind of attention we give them. The artists using them, whether author or actors, force them upon us, and in a variety of ways try to fix the quality of our attention.
If dialogue carefully follows the way we speak in life, as it is likely to go in a naturalistic play, the first step towards understanding how it departs from actuality can be awkward. It is helpful to cease to submit to the pretence for the moment. An apparent reproduction of ordinary conversation will be, in good drama, a construction of words set up to do many jobs that are not immediately obvious. Professor Eric Bentley has written of Ibsen's 'opaque, uninviting sentences':
An Ibsenite sentence often performs four or five functions at once. It sheds light on the character spoken about; it furthers the plot; it functions ironically in conveying to the audience a meaning different from that conveyed to the characters.
It is true that conversation itself can sometimes be taken to do this thing. 'Whatever you think, I'm going to tell him what you said' is a remark which in its context can shed light on the speaker, the person spoken to, and the person spoken about. For a fourth person listening, as a spectator witnesses a play, there may also be an element in it that means something only to himself as observer. In the play the difference lies first in an insistence that the words go somewhere, move towards a predetermined end. It lies in a charge of meaning that will advance the action.
This is argued in a statement in Strindberg's manifesto for the naturalistic theatre. He says of his characters that he has 'permitted their minds to work irregularly as they do in reality, where, during conversation, the cogs of one mind seem more or less haphazardly to engage those of another, and where no topic is fully exhausted.' But he adds that, while the dialogue seems to stray a good deal in the opening scenes, 'it acquires a material that later on is worked over, picked up again, repeated, expounded, and built up like the theme in a musical composition.'
It is a question of economy. The desultory and clumsy talk of real life, with its interruptions, overlapping, indecisions and repetitions, talk without direction, wastes our interest, unless, like the chatter given to Jane Austen's Miss Bates, it hides relevance in irrelevance. It follows that we cannot admire the wit and vitality in Shaw's dialogue and yet ignore the question of its relevance to the action.
When the actor examines the text to prepare his part, he looks for what makes the words different from conversation; that is, he looks for the structural elements of the building, for links of characteristic thought in the character, and so on. He persists till he has shaped in his mind a firm and workable pattern of his part. Now the clues sought by the actor hidden beneath the surface of the dialogue are the playgoer's guides too. The actor and producer Stanislavsky called these clues the 'subtext' of a play.
The subtext is a web of innumerable, varied inner patterns inside a play and a part, woven from 'magic ifs', given circumstances, all sorts of figments of the imagination, inner movements, objects of attention, smaller and greater truths and a belief in them, adaptations, adjustments and other similar elements. It is subtext that makes us say the words we do in a play.
And in another place he says that 'the whole text of the play will be accompanied by a subtextual stream of images, like a moving picture constantly thrown on the screen of our inner vision, to guide us as we speak and act on the stage.' Once we admit that the words must propose and substantiate the play's meaning, we shall find in them more and more of the author's wishes.
For dramatic dialogue has other work to do before it provides a table of words to be spoken. In the absence of the author it must provide a set of unwritten working directives to the actor on how to speak its speeches. And before that, it has to teach him how to think and feel them: the particularity of a play requires this if it is not to be animated by a series of cardboard stereotypes.
Dramatic dialogue works by a number of instinctively agreed codes. Some tell the producer how to arrange the figures on the stage. Others tell him what he should hear as the pattern of sound echoing and contradicting, changing tone, rising and falling. These are directives strongly compelling him to hear the key in which a scene should be played, and the tone and tempo of the melody. Others oblige him to start particular rhythmic movements of emotion flowing between the stage and the audience. He is then left to marry the colour and shape of the stage picture with the music he finds recorded in the text.
Good dialogue works like this and throws out a 'subtextual stream of images'. Even if the limits within which these effects work are narrow, even if the effect lies in the barest or simplest of speeches, we may expect to hear the text humming the tune as it cannot in real life. Dialogue should be read and heard as a dramatic score.
The Elements of Drama by J.L. Styan
Cambridge University Press 1960
The means the playwright employs are determined at least in part by dramatic convention.
Greek: Playwrights of this era often worked with familiar story material, legends about gods and famous families that the audience was familiar with. Since the audience was familiar with certain aspects of these, the playwrights used allusion rather than explicit exposition. In representing action, they often relied on messengers to report off-stage action. For interpretation the Greeks relied on the CHORUS, a body of onlookers, usually citizens or elders, whose comments on the play reflected reactions common to the community. These plays were written in metered verse arranged in elaborate stanzas. This required intense attention from the audience.
English Drama: Minor characters play an important role in providing information and guiding interpretation. The confidant, a friend or servant, listens to the complaints, plans and reminiscences of a major character. Minor characters casually comment among themselves on major characters and plot development. Extended SOLILOQUY enables a major character to reveal his thoughts in much greater detail than in natural dialogue. ASIDES, remarks made to the audience but not heard by those on the stage, are common.
Realism: Toward the end of the nineteenth century, realistic depiction of everyday life entered the genre of drama, though the characters may be unconventional and their thoughts turbulent and fantasy-ridden.
Contemporary: Experimentation seems to be the key word here. A NARRATOR replaces the messenger, the chorus and the confidant. FLASHBACKS often substitute for narration. Many contemporary playwrights have abandoned recognizable setting, chronological sequence and characterization through dialogue.
Genre is a term that describes works of literature according to their shared thematic or structural characteristics. The attempt to classify literature in this way was initiated by Aristotle in the Poetics, where he distinguishes tragedy, epic, and comedy and recognizes even more fundamental distinctions between drama, epic, and lyric poetry. Classical genre theory, established by Aristotle and reinforced by Horace, is regulative and prescriptive, attempting to maintain rigid boundaries that correspond to social differences. Thus, tragedy and epic are concerned exclusively with the affairs of the nobility, comedy with the middle or lower classes.
Modern literary criticism, on the other hand, does not regard genres as dogmatic categories, but rather as aesthetic conventions that guide, but are also led by, writers. The unstable nature of genres does not reduce their effectiveness as tools of critical inquiry, which attempts to discover universal attributes among individual works, and has, since classical times, evolved theories of the novel, ode, elegy, pastoral, satire, and many other kinds of writing.
Manuel L. Ortiz
It is the act or chance of hearing; a reception by a great person; the persons assembled to hear.
Playhouse, script, actors, mise en scène, audience are inseparable parts of the theatre. The concept of drama put forward in this book insists that the audience have an indispensable role to play. While Stanislavsky is right in saying that 'spectators come to the theatre to hear the subtext; they can read the text at home', he is speaking as a man of the nineteenth century. We do not go to the play merely to have the text interpreted and explained by the skills of the director and his actors. We do not go as in a learning situation, but to share in a partnership without which the players cannot work. In his Réflexions sur l'art, Valéry believed that 'a creator is one who makes others create': in art both the artist and the spectator actively cooperate, and the value of the work is dependent on this reciprocity. If in the theatre there is no interaction between stage and audience, the play is dead, bad or non-existent: the audience, like the customer, is always right.
Every man, woman, or child who has expressed an opinion concerning a dramatic performance has, in a sense, proclaimed himself to be a critic. Whether his reaction has been good or bad, his opinion will have some effect on the thinking of those who have heard or read his comment, and what has been said will become a part of the production's history. The statement may have been inadvertent, biased, unfair, without thought or foundation, but once spoken or repeated, it ceases to be just an opinion and is accepted as a fact. Who has not heard, accepted, repeated, and been affected by such generalizations as: "They say it's terrible!" or "They say it's terrific!"
Another type of critic is the more powerful, and frequently only slightly more qualified, individual who is, often for strange and irrelevant reasons, assigned to cover an opening for the school or community paper. He may be completely lacking in the knowledge required of even a beginner in dramatic criticism, but, again, "Anyone can write up a play." Yet the power of the written word takes over, and what this novice writes becomes the accepted authority for many. The hundreds of hours of work by the many persons involved in the production, their personal sacrifices, and their pride in their work, to say nothing of the financial outlay involved, far too often are condemned or praised for the wrong reasons or for no logical reason at all. As a further injustice, what the critic has written, although it is just a single opinion, becomes the only record of the production and so catalogs the event for the future.
It is doubtful if any other business or art is so much a victim of inept, untrained, illogical, and undeserved criticism as is a dramatic performance. Whether the remarks have grown out of prejudice, meager knowledge of the theatre, lack of understanding or sensitivity, momentary admiration or dislike for some individual participant, a poor dinner or disposition, an auditorium too hot or too cold, or any of a hundred incidents that could have occurred during the production itself does not matter. Those whose efforts are being discussed can console themselves only with the fact that criticism, good or bad, is much easier than creation or craftsmanship, for the same reason that work is harder than talk.
Having been a part of the theatre (professional, community, and educational) for more than four decades, we are well aware that criticism of the critics is frequently heard, and that this criticism includes those who write the drama section for the national magazine or the large daily newspaper report on the opening night. This is inevitable, for total agreement on any phase of the theatre is impossible. We live in a world without laws of logic or mathematical formulas to guide us. There are no yardsticks that will give us all the same answer, but there are yardsticks that should be familiar to all of us. In this paper we propose to present and to discuss some of these criteria. If the amateur critics just referred to had been familiar with some basic dramatic principles and had used them honestly, there would be a greater feeling that justice had been done. Any intelligent theatre person knows that each member of the audience views what is before him with different eyes and so sees something different from his neighbor. How each member reacts will be determined by education, age, experience, nationality, maturity, background, temperament, heredity, environment, the rest of the audience, the weather, what he has done or eaten in the past few hours, or his plans for after the performance. This list of imponderables could go on indefinitely. Furthermore, if agreement on any one aspect of a given performance is impossible, then agreement is even more hopeless if different performances of the same play, in the same theatre, and with the same cast, are under discussion; for a different audience makes for a different production.
Eduardo M. Tajonera Jr
The stage creates its effects in spite of, and in part because of, definite physical limitations. Setting and action tend to be suggestive rather than panoramic or colossal. Both setting and action may be little more than hints for the spectator to fill out.
Francis Calangi
Theater Space
Theater can also be discussed in terms of the type of space in which it is produced. Stages and auditoriums have had distinctive forms in every era and in different cultures. New theaters today tend to be flexible and eclectic in design, incorporating elements of several styles; they are known as multiple-use or multiple-form theaters.
A performance, however, need not occur in an architectural structure designed as a theater, or even in a building. The English director Peter Brook talks of creating theater in an "empty space." Many earlier forms of theater were performed in the streets, open spaces, market squares, churches, or rooms or buildings not intended for use as theaters. Much contemporary experimental theater rejects the formal constraints of available theaters and seeks more unusual spaces. In all these "found" theaters, the sense of stage and auditorium is created by the actions of the performers and the natural features of the space. Throughout history, however, most theaters have employed one of three types of stage: end, thrust, and arena. An end stage is a raised platform facing the assembled audience. Frequently, it is placed at one end of a rectangular space. The simplest version of the end stage is the booth or trestle stage, a raised stage with a curtained backdrop and perhaps an awning. This was the stage of the Greek and Roman mimes, the mountebanks and wandering entertainers of the Middle Ages, commedia dell'arte, and popular entertainers into the 20th century. It probably formed the basis of Greek tragic theater and Elizabethan theater as well.
The Proscenium Theater
Since the Renaissance, Western theater has been dominated by an end stage variant called the proscenium theater. The proscenium is the wall separating the stage from the auditorium. The proscenium arch, which may take several shapes, is the opening in that wall through which the audience views the performance. A curtain that either rises or opens to the sides may hang in this space. The proscenium developed in response to the desire to mask scenery, hide scene-changing machinery, and create an offstage space for performers' exits and entrances. The result is to enhance illusion by eliminating all that is not part of the scene and to encourage the audience to imagine that what they cannot see is a continuation of what they can see. Because the proscenium is (or appears to be) an architectural barrier, it creates a sense of distance or separation between the stage and the spectators. The proscenium arch also frames the stage and consequently is often called a peep-show or picture-frame stage.
The Thrust Stage
A thrust stage, sometimes known as three-quarter round, is a platform surrounded on three sides by the audience. This form was used for ancient Greek theater, Elizabethan theater, classical Spanish theater, English Restoration theater, Japanese and Chinese classical theater, and much of Western theater in the 20th century. A thrust may be backed by a wall or be appended to some sort of end stage. The upstage end (back of the stage, farthest from the audience) may have scenery and provisions for entrances and exits, but the thrust itself is usually bare except for a few scenic elements and props.
Because no barrier exists between performers and spectators, the thrust stage generally creates a sense of greater intimacy, as if the performance were occurring in the midst of the auditorium, while still allowing for illusionistic effects through the use of the upstage end and adjacent offstage space.
The Arena Stage
The arena stage, or theater-in-the-round, is a performing space totally surrounded by the auditorium. This arrangement has been tried several times in the 20th century, but its historical precedents are largely in nondramatic forms such as the circus, and it has limited popularity. The necessity of providing equal sight lines for all spectators puts special constraints on the type of scenery used and on the movements of the actors, because at any given time part of the audience will inevitably be viewing a performer's back. Illusion is more difficult to sustain in arena, since in most setups, entrances and exits must be made in full view of the audience, eliminating surprise, if nothing else. Nonetheless, arena, when properly used, can create a sense of intimacy not often possible with other stage arrangements, and, as noted, it is well suited to many nondramatic forms. Furthermore, because of the different scenic demands of arena theater, the large backstage areas associated with prosceniums can be eliminated, thus allowing a more economical use of space.
Variant Forms
One variant form of staging is environmental theater, which has precedents in medieval and folk theater and has been widely used in 20th-century avant-garde theater. It eliminates the single or central stage in favor of surrounding the spectators or sharing the space with them. Stage space and spectator space become indistinguishable. Another popular alternative is the free, or flexible, space, sometimes called a black-box theater because of its most common shape and color. This is an empty space with movable seating units and stage platforms that can be arranged in any configuration for each performance.
The Fixed Architectural Stage
Most stages are raw spaces that the designer can mold to create any desired effect or location; in contrast, the architectural stage has permanent features that create a more formal scenic effect. Typically, ramps, stairs, platforms, archways, and pillars are permanently built into the stage space. Variety in individual settings may be achieved by adding scenic elements. The Stratford Festival Theater in Stratford, Ontario, for example, has a permanent "inner stage"-a platform roughly 3.6 m (12 ft) high-jutting onto the multilevel thrust stage from the upstage wall. Most permanent theaters through the Renaissance, such as the Teatro Olimpico (1580) in Vicenza, Italy, did not use painted or built scenery but relied on similar permanent architectural features that could provide the necessary scenic elements. The No and kabuki stages in Japan are other examples.
Auditoriums
Auditoriums in the 20th century are mostly variants on the fan-shaped auditorium built (1876) by the composer Richard Wagner at his famous opera house in Bayreuth, Germany. These auditoriums are shaped like a hand-held fan and are usually raked (inclined upward from front to back), with staggered seats to provide unobstructed sight lines.
Such auditoriums may be designed with balconies, and some theaters, such as opera houses, have boxes-seats in open or partitioned sections along the sidewalls of the auditorium-a carry-over from baroque theater architecture.
Set Design
In Europe, one person, frequently called a scenographer, designs sets, costumes, and lights; in the U.S. these functions are usually handled by three separate professionals. Set design is the arrangement of theatrical space; the set, or setting, is the visual environment in which a play is performed. Its purpose is to suggest time and place and to create the proper mood or atmosphere. Settings can generally be classified as realistic, abstract, suggestive, or functional.
Stage Facilities
The use and movement of scenery are determined by stage facilities. Relatively standard elements include trapdoors in the stage floor, elevators that can raise or lower stage sections, wagons (rolling platforms) on which scenes may be mounted, and cycloramas-curved canvas or plaster backdrops used as a projection surface or to simulate the sky. Above the stage, especially in a proscenium theater, is the area known as the fly gallery, where lines for flying-that is, raising-unused scenery from the stage are manipulated, and which contains counterweight or hydraulic pipes and lengths of wood, or battens, from which lights and pieces of scenery may be suspended. Other special devices and units can be built as necessary. Although scene painting seems to be a dying art, modern scene shops are well equipped to work with plastics, metals, synthetic fabrics, paper, and other new and industrial products that until recently were not in the realm of theater.
Lighting Design
Lighting design, a more ephemeral art, has two functions: to illuminate the stage and the performers and to create mood and control the focus of the spectators. Stage lighting may be from a direct source such as the sun or a lamp, or it may be indirect, employing reflected light or general illumination. It has four controllable properties: intensity, color, placement on the stage, and movement-the visible changing of the first three properties. These properties are used to achieve visibility, mood, composition (the overall arrangement of light, shadow, and color), and the revelation of form-the appearance of shape and dimensionality of a performer or object as determined by light.
Until the Renaissance, almost all performance was outdoors and therefore lit by the sun, but with indoor performance came the need for lighting instruments. Lighting was first achieved with candles and oil lamps and, in the 19th century, with gas lamps. Although colored filters, reflectors, and mechanical dimming devices were used for effects, lighting served primarily to illuminate the stage. By current standards the stage was fairly dim, which allowed greater illusionism in scenic painting. Gas lighting facilitated greater control, but only the advent of electric lighting in the late 19th century permitted the brightness and control presently available. It also allowed the dimming of the house-lights, plunging the auditorium into darkness for the first time.
Lighting design, however, is not simply aiming the lighting instruments at the stage or bathing the stage in a general wash of light. Audiences usually expect actors to be easily visible at all times and to appear to be three-dimensional. This involves the proper angling of instruments, provision of back and side lighting as well as frontal, and a proper balance of colors.
Two basic types of stage-lighting instruments are employed: floodlights, which illuminate a broad area, and spotlights, which focus light more intensely on a smaller area. Instruments consist of a light source and a series of lenses and shutters in some sort of housing. These generally have a power of 500 to 5000 watts. The instruments are hung from battens and stanchions in front of, over, and at the sides of the stage. In realistic settings, lights may be focused to simulate the direction of the ostensible source, but even in these instances, performers would appear two-dimensional without back and side lighting.
Because so-called white light is normally too harsh for most theater purposes, colored filters called gels are used to soften the light and create a more pleasing effect. White light can be simulated by mixing red, blue, and green light. Most designers attempt to balance "warm" and "cool" colors to create proper shadows and textures. Except for special effects, lighting design generally strives to be unobtrusive; just as in set design, however, the skillful use of color, intensity, and distribution can have a subliminal effect on the spectators' perceptions.
The lighting designer is often responsible for projections. These include still or moving images that substitute for or enhance painted and constructed scenery, create special effects such as stars or moonlight, or provide written legends for the identification of scenes. Images can be projected from the audience side of the stage onto opaque surfaces, or from the rear of the stage onto specially designed rear-projection screens. Similar projections are often used on scrims, semitransparent curtains stretched across the stage. Film and still projection, sometimes referred to as mixed media, was first used extensively by the German director Erwin Piscator in the 1920s and became very popular in the 1960s.
The lights are controlled by a skilled technician called the electrician, who operates a control or dimmer board, so called because a series of "dimmers" controls the intensity of each instrument or group of instruments. The most recent development in lighting technology is the memory board, a computerized control system that stores the information of each light cue or change of lights. The electrician need no longer operate each dimmer individually; by pushing one button, all the lights will change automatically to the preprogrammed intensity and at the desired speed.
Costume Design
A costume is whatever is worn on the performer's body. Costume designers are concerned primarily with clothing and accessories, but are also often responsible for wigs, masks, and makeup. Costumes convey information about the character and aid in setting the tone or mood of the production. Because most acting involves impersonation, most costuming is actual or re-created historical or contemporary dress; as with scenery, however, costumes may also be suggestive or abstract. Until the 19th century, little attention was paid to period or regional accuracy; variations on contemporary dress sufficed. Since then, however, costume designers have paid great attention to authentic period style.
As with the other forms of design, subtle effects can be achieved through choice of color, fabric, cut, texture, and weight or material. Because costume can indicate such things as social class and personality traits, and can even simulate such physical attributes as obesity or a deformity, an actor's work can be significantly eased by its skillful design.
Costume can also function as character signature, notably for such comic characters as Harlequin or the other characters of the commedia dell'arte, Charlie Chaplin's Little Tramp, or circus clowns.
In much Oriental theater, as in classical Greek theater, costume elements are formalized. Based originally on everyday dress, the costumes became standardized and were appropriated for the stage. Colors, designs, and ornamentation all convey meaningful information.
Mask
A special element of costume is the mask. Although rarely used in contemporary Western theater, masks were essential in Greek and Roman drama and the commedia dell'arte and are used in most African and Oriental theater. The masks of tragedy and of comedy, as used in ancient Greek drama, are in fact the universal symbols of the theater. Masks obviate the use of the face for expression and communication and thus render the performer more puppetlike; expression depends solely on voice and gesture. Because the mask's expression is unchanging, the character's fate or final expression is known from the beginning, thereby removing one aspect of suspense. The mask shifts focus from the actor to the character and can thus clarify aspects of theme and plot and give a character a greater universality. Like costumes, the colors and features of the mask, especially in the Orient, indicate symbolically significant aspects of the character. In large theaters masks can also aid in visibility.
Makeup
Makeup may also function as a mask, especially in Oriental theater, where faces may be painted with elaborate colors and images that exaggerate and distort facial features. In Western theater, makeup is used for two purposes: to emphasize and reinforce facial features that might otherwise be lost under bright lights or at a distance and to alter signs of age, skin tone, or nose shape.
The technical aspects of production may be divided into preproduction and run of production. Preproduction technical work is supervised by the technical director in conjunction with the designers. Sets, properties (props), and costumes are made during this phase by crews in the theater shops or, in the case of most commercial theater, in professional studios.
Props are the objects handled by actors or used in dressing the stage-all objects placed or carried on the set that are not costumes or scenery. Whereas real furniture and hand props can be used in many productions, props for period shows, nonrealistic productions, and theatrical shows such as circuses must be built. Like sets, props can be illusionistic-they may be created from papier-mâché or plastic for lightness, exaggerated in size, irregularly shaped, or designed to appear level on a raked stage; they may also be capable of being rolled, collapsed, or folded. The person in charge of props is called the props master or mistress.
Sound and Sound Effects
Sound, if required, is now generally recorded during the preproduction period. From earliest times, most theatrical performances were accompanied by music that, until recently, was produced by live musicians. Since the 1930s, however, use of recorded sound has been a possibility in the theater. Although music is still the most common sound effect, wind, rain, thunder, and animal noises have been essential since the earliest Greek tragedies. Any sound that cannot be created by a performer may be considered a sound effect. Such sounds are most often used for realistic effect (for example, a train rushing by or city sounds outside a window), but they can also assist in the creation of mood or rhythm. Although many sounds can be recorded from actual sources, certain sounds do not record well and seem false when played through electronic equipment on a stage. Elaborate mechanical devices are therefore constructed to simulate these sounds, such as rain or thunder.
Technicians also create special aural and visual effects simulating explosions, fire, lightning, and apparitions and giving the illusion of moving objects or of flying.
Microsoft Encarta 98 Encyclopedia copyright 1993-1997 Microsoft Corporation.
Back to Top
Ma. Criselda De Leon
Conversions, closely examined, will be found to fall into two classes: changes of volition, and changes of sentiment. It was the former class that Dryden had in mind; and, with reference to this class, the principle he indicates remains a sound one. A change of resolve should never be due to mere lapse of time---to the necessity for bringing the curtain down and letting the audience go home. It must always be rendered plausible by some new fact or new motive; some hitherto untried appeal to reason or emotion. This rule, however, is too obvious to require enforcement. It was not quite superfluous so long as the old convention of comedy endured. For a century and a half after Dryden's time, hard-hearted parents were apt to withdraw their opposition to their children's "felicity" for no better reason than that the fifth act was drawing to a close. But this formula is practically obsolete. Changes of will, on the modern stage, are not always adequately motived; but that is because of individual inexpertness, not because of any failure to recognize theoretically the necessity for adequate motivation.
Changes of sentiment are much more important and more difficult to handle. A change of will can always manifest itself in action; but it is very difficult to externalize convincingly a mere change of heart. When the conclusion of a play hinges (as it frequently does) on a conversion of this nature, it becomes a matter of the first moment that it should not merely be asserted but proved. Many a promising play has gone wrong because of the author's neglect, or inability, to comply with this condition.
It has often been observed that of all Ibsen's thoroughly mature works, from A Doll's House to John Gabriel Borkman, The Lady from the Sea is the loosest in texture, the least masterly in construcion. The fact that it leaves this impression on the mind is largely due, I think, to a single fault. The conclusion of the play---Ellida's clinging to Wangel and rejection of the Stranger---depends entirely on a change in Wangel's mental attitude, of which we have no proof whatever beyond his bare assertion. Ellida, in her overwrought mood, is evidently inclining to yield to the uncanny allurement of the Stranger's claim upon her, when Wangel, realizing that her sanityis threatened, says:
WANGEL: It shall not come to that. There is no other way of deliverance for you---at least I see none. And therefore---therefore I---cancel our bargain on the spot. Now you can choose your own path, in full---full freedom.
ELLIDA: (Gazes at him awhile, as if speechless): Is this true---true---what you say? Do you mean it---from your inmost heart?
WANGEL: Yes---from the inmost depths of my tortured heart, I mean it.... Now your own true life can return to its---its right groove again. For now you can choose in freedom; and on your own responsibility, Ellida.
ELLIDA: In freedom---and on my own responsibility? Responsibility? This---this transforms everything.
---and she promptly gives the Stranger his dismissal. Now this is inevitably felt to be a weak conclusion, because it turns entirely on a condition of Wangel's mind of which he gives no positive and convincing evidence. Nothing material is changed by his change of heart. He could not in any case have restrained Ellida by force; or, if the law gave him the abstract right to do so, he certainly never had the slightest intention of exercising it. Psychologically, indeed, the incident is acceptable enough. The saner part of Ellida's will was always on Wangel's side; and a merely verbal undoing of the "bargain" with which she reproached herself might quite naturally suffice to turn the scale decisively in his favour. But what may suffice for Ellida is not enough for the audience. Too much is made to hang upon a verbally announced conversion. The poet ought to have invented some material---or, at the very least, some impressively symbolic---proof of Wangel's change of heart. Had he done so, The Lady from the Sea would assuredly have taken a higher rank among his works.
Let me further illustrate my point by comparing a very small thing with a very great.The late Captain Marshall wrote a "farcical romance" named The Duke of Killiecrankie, in which that nobleman, having been again and again rejected by the Lady Henrietta Addison, kidnapped the obdurate fair one, and imprisoned her in a crag-castle in the Highlands. Having kept her for a week in deferential durance, and shown her that he was not the inefficient nincompoop she had taken him for, he threw open the prison gate, and said to her: "Go! I set you free!" The moment she saw the gate unlocked, and realized that she could indeed go when and where she pleased, she also realized that had the least wish to go, and flung herself into her captor's arms. Here we have Ibsen's situation transposed into the key of fantasy, and provided with the material "guarantee of good faith" which is lacking in The Lady from the Sea. The Duke's change of mind, his will to set the Lady Henrietta free, is visibly demonstrated by the actual opening of the prison gate, so that we believe in it, and believe that she believes in it. The play was a trivial affair, and is deservedly forgotten; but the situation was effective because it obeyed the law that a change of will or of feeling, occurring at a crucial point in a dramatic action, must be certified by some external evidence, on pain of leaving the audience unimpressed.
This is a more important matter than it may at first sight appear. How to bring home to the audience a decisive change of heart is one of the ever-recurring problems of the playwright's craft. In The Lady from the Sea, Ibsen failed to solve it: in Rosmersholm he solved it by heroic measures. The whole catastrophe is determined by Rosmer's inability to accept without proof Rebecca's declaration that Rosmersholm has "ennobled' her, and that she is no longer the same woman whose relentless egoism drove Beata into the mill-race. Rebecca herself puts it to him: "How can you believe me on my bare word after to-day?" There is only one proof she can give---that of "going the way Beata went." She gives it: and Rosmer, who cannot believe her if she lives, and will not survive her if she dies, goes with her to her end. But the cases are not very frequent, fortunately, in which such drastic methods of proof are appropriate or possible. The dramatist must, as a rule, attain his end by less violent means; and often he fails to attain it at all.
A play by Mr. Haddon Chambers, The Awakening, turned on a sudden conversion---the "awakening," in fact, referred to in the title. A professional lady-killer, a noted Don Juan, has been idly making love to a country maiden, whose heart is full of innocent idealisms. She discovers his true character, or, at any rate, his reputation, and is horror-stricken, while practically at the same moment, he "awakens" to the error of his ways, and is seized with a passion for her as single-minded and idealistic as hers for him. But how are the heroine and the audience to be assured of the fact? That is just the difficulty; and the author takes no effectual measures to overcome it. The heroine, of course, is ultimately convinced; but the audience remains skeptical, to the detriment of the desired effect. "Sceptical," perhaps is not quite the right word. The state of mind of a fictitious character is not a subject for actual belief or disbelief. We are bound to accept theoretically what the author tells us; but in this case he has failed to make us intimately feel and know that it is true.In Mr. Alfred Sutro's play The Builder of Bridges, Dorothy Faringay, in her devotion to her forger brother, has conceived the rather disgraceful scheme of making one of his official superiors fall in love with her, in order to induce him to become practically an accomplice in her brother's crime. She succeeds beyond her hopes. Edward Thursfield does fall in love with her, and, at a great sacrifice, replaces the money the brother has stolen. But, in a very powerful peripety-scene in the third act, Thursfield learns that Dorothy has been deliberately beguiling him, while in fact she was engaged to another man. The truth is, however, that she has really come to love Thursfield passionately, and has broken her engagement with the other, for whom she never truly cared. So the author tells us, and so we are willing enough to believe---if he can devise any adequate method of making Thursfield believe it. Mr. Sutro's handling of the difficulty seems to me fairly, but not conspicuously, successful. I cite the case as a typical instance of the problem, a part from the merits or demerits of the solution.
It may be said that the difficulty of bringing home to us the reality of a revulsion of feeling, or radical change of mental attitude, is only a particular case of the playwright's general problem of convincingly externalizing inward conditions and processes. That is true: but the special importance of a conversion which unties the knot and brings the curtain down seemed to render it worthy of special consideration.
Reference:Play-making A Manual of Craftsmanship by William Archer
Back to Top | http://litera1no4.tripod.com/elements.html | 13 |
65 | |Japan Table of Contents
Since the mid-nineteenth century, when the Tokugawa government first opened the country to Western commerce and influence, Japan has gone through two periods of economic development. The first began in earnest in 1868 and extended through World War II; the second began in 1945 and continued into mid-1990s. In both periods, the Japanese opened themselves to Western ideas and influence; experienced revolutionary social, political, and economic changes; and became a world power with carefully developed spheres of influence. During both periods, the Japanese government encouraged economic change by fostering a national revolution from above and by planning and advising in every aspect of society. The national goal each time was to make Japan so powerful and wealthy that its independence would never again be threatened.
In the Meiji period (1868-1912), leaders inaugurated a new Western-based education system for all young people, sent thousands of students to the United States and Europe, and hired more than 3,000 Westerners to teach modern science, mathematics, technology, and foreign languages in Japan. The government also built railroads, improved roads, and inaugurated a land reform program to prepare the country for further development.
To promote industrialization, the government decided that, while it should help private business to allocate resources and to plan, the private sector was best equipped to stimulate economic growth. The greatest role of government was to help provide the economic conditions in which business could flourish. In short, government was to be the guide and business the producer. In the early Meiji period, the government built factories and shipyards that were sold to entrepreneurs at a fraction of their value. Many of these businesses grew rapidly into the larger conglomerates that still dominates much of the business world. Government emerged as chief promoter of private enterprise, enacting a series of probusiness policies, including low corporate taxes.
Before World War II, Japan built an extensive empire that included Taiwan, Korea, Manchuria, and parts of northern China. The Japanese regarded this sphere of influence as a political and economic necessity, preventing foreign states from strangling Japan by blocking its access to raw materials and crucial sea-lanes. Japan's large military force was regarded as essential to the empire's defense. Japan's colonies were lost as a result of World War II, but since then the Japanese have extended their economic influence throughout Asia and beyond. Japan's Constitution, promulgated in 1947, forbids an offensive military force, but Japan still maintained its formidable Self-Defense Forces and ranked third in the world in military spending behind the United States and the Soviet Union in the late 1980s.
Rapid growth and structural change characterized Japan's two periods of economic development since 1868. In the first period, the economy grew only moderately at first and relied heavily on traditional agriculture to finance modern industrial infrastructure. By the time the Russo-Japanese War (1904-5) began, 65 percent of employment and 38 percent of the gross domestic product (GDP) was still based on agriculture, but modern industry had begun to expand substantially. By the late 1920s, manufacturing and mining contributed 23 percent of GDP, compared with 21 percent for all of agriculture. Transportation and communications had developed to sustain heavy industrial development.
In the 1930s, the Japanese economy suffered less from the Great Depression than most of the other industrialized nations, expanding at the rapid rate of 5 percent of GDP per year. Manufacturing and mining came to account for more than 30 percent of GDP, more than twice the value for the agricultural sector. Most industrial growth, however, was geared toward expanding the nation's military power.
World War II wiped out many of the gains Japan had made since 1868. About 40 percent of the nation's industrial plants and infrastructure were destroyed, and production reverted to levels of about fifteen years earlier. The people were shocked by the devastation and swung into action. New factories were equipped with the best modern machines, giving Japan an initial competitive advantage over the victor states, who now had older factories. As Japan's second period of economic development began, millions of former soldiers joined a well-disciplined and highly educated work force to rebuild Japan.
Japan's highly acclaimed postwar education system contributed strongly to the modernizing process. The world's highest literacy rate and high education standards were major reasons for Japan's success in achieving a technologically advanced economy. Japanese schools also encouraged discipline, another benefit in forming an effective work force.
The early postwar years were devoted to rebuilding lost industrial capacity: major investments were made in electric power, coal, iron and steel, and chemical fertilizers. By the mid-1950s, production matched prewar levels. Released from the demands of military-dominated government, the economy not only recovered its lost momentum but also surpassed the growth rates of earlier periods. Between 1953 and 1965, GDP expanded by more than 9 percent per year, manufacturing and mining by 13 percent, construction by 11 percent, and infrastructure by 12 percent. In 1965 these sectors employed more than 41 percent of the labor force, whereas only 26 percent remained in agriculture.
The mid-1960s ushered in a new type of industrial development as the economy opened itself to international competition in some industries and developed heavy and chemical manufactures. Whereas textiles and light manufactures maintained their profitability internationally, other products, such as automobiles, ships, and machine tools, assumed new importance. The value added to manufacturing and mining grew at the rate of 17 percent per year between 1965 and 1970. Growth rates moderated to about 8 percent and evened out between the industrial and service sectors between 1970 and 1973, as retail trade, finance, real estate, information, and other service industries streamlined their operations.
Japan faced a severe economic challenge in the mid-1970s. The world oil crisis in 1973 shocked an economy that had become virtually dependent on foreign petroleum. Japan experienced its first postwar decline in industrial production, together with severe price inflation. The recovery that followed the first oil crisis revived the optimism of most business leaders, but the maintenance of industrial growth in the face of high energy costs required shifts in the industrial structure.
Changing price conditions favored conservation and alternative sources of industrial energy. Although the investment costs were high, many energy-intensive industries successfully reduced their dependence on oil during the late 1970s and 1980s and enhanced their productivity. Advances in microcircuitry and semiconductors in the late 1970s and 1980s also led to new growth industries in consumer electronics and computers and to higher productivity in already established industries. The net result of these adjustments was to increase the energy efficiency of manufacturing and to expand so-called knowledge-intensive industry. The service industries expanded in an increasingly postindustrial economy.
Structural economic changes, however, were unable to check the slowing of economic growth as the economy matured in the late 1970s and 1980s, attaining annual growth rates no better than 4 to 6 percent. But these rates were remarkable in a world of expensive petroleum and in a nation of few domestic resources. Japan's average growth rate of 5 percent in the late 1980s, for example, was far higher than the 3.8 percent growth rate of the United States.
Despite more petroleum price increases in 1979, the strength of the Japanese economy was apparent. It expanded without the double- digit inflation that afflicted other industrial nations and that had bothered Japan itself after the first oil crisis in 1973. Japan experienced slower growth in the mid-1980s, but its demand- sustained economic boom of the late 1980s revived many troubled industries.
Complex economic and institutional factors affected Japan's postwar growth. First, the nation's prewar experience provided several important legacies. The Tokugawa period (1600-1867) bequeathed a vital commercial sector in burgeoning urban centers, a relatively well-educated elite (although one with limited knowledge of European science), a sophisticated government bureaucracy, productive agriculture, a closely unified nation with highly developed financial and marketing systems, and a national infrastructure of roads. The buildup of industry during the Meiji period to the point where Japan could vie for world power was an important prelude to postwar growth and provided a pool of experienced labor following World War II.
Second, and more important, was the level and quality of investment that persisted through the 1980s. Investment in capital equipment, which averaged more than 11 percent of GNP during the prewar period, rose to some 20 percent of GNP during the 1950s and to more than 30 percent in the late 1960s and 1970s. During the economic boom of the late 1980s, the rate still kept to around 20 percent. Japanese businesses imported the latest technologies to develop the industrial base. As a latecomer to modernization, Japan was able to avoid some of the trial and error earlier needed by other nations to develop industrial processes. In the 1970s and 1980s, Japan improved its industrial base through technology licensing, patent purchases, and imitation and improvement of foreign inventions. In the 1980s, industry stepped up its research and development, and many firms became famous for their innovations and creativity.
Japan's labor force contributed significantly to economic growth, not only because of its availability and literacy but also because of its reasonable wage demands. Before and immediately after World War II, the transfer of numerous agricultural workers to modern industry resulted in rising productivity and only moderate wage increases. As population growth slowed and the nation became increasingly industrialized in the mid-1960s, wages rose significantly. But labor union cooperation generally kept salary increases within the range of gains in productivity.
High productivity growth played a key role in postwar economic growth. The highly skilled and educated labor force, extraordinary savings rates and accompanying levels of investment, and the low growth of Japan's labor force were major factors in the high rate of productivity growth.
The nation has also benefited from economies of scale. Although medium-sized and small enterprises generated much of the nation's employment, large facilities were the most productive. Many industrial enterprises consolidated to form larger, more efficient units. Before World War II, large holding companies formed wealth groups, or zaibatsu, which dominated most industry. The zaibatsu were dissolved after the war, but keiretsu--large, modern industrial enterprise groupings-- emerged. The coordination of activities within these groupings and the integration of smaller subcontractors into the groups enhanced industrial efficiency.
Japanese corporations developed strategies that contributed to their immense growth. Growth-oriented corporations that took chances competed successfully. Product diversification became an essential ingredient of the growth patterns of many keiretsu. Japanese companies added plant and human capacity ahead of demand. Seeking market share rather than quick profit was another powerful strategy.
Finally, circumstances beyond Japan's direct control contributed to its success. International conflicts tended to stimulate the Japanese economy until the devastation at the end of World War II. The Russo-Japanese War (1904-5), World War I (1914- 18), the Korean War (1950-53), and the Second Indochina War (1954- 75) brought economic booms to Japan. In addition, benign treatment from the United States after World War II facilitated the nation's reconstruction and growth.
The United States occupation of Japan (1945-52) resulted in the rebuilding of the nation and the creation of a democratic state. United States assistance totaled about US$1.9 billion during the occupation, or about 15 percent of the nation's imports and 4 percent of GNP in that period. About 59 percent of this aid was in the form of food, 15 percent in industrial materials, and 12 percent in transportation equipment. United States grant assistance, however, tapered off quickly in the mid-1950s. United States military procurement from Japan peaked at a level equivalent to 7 percent of Japan's GNP in 1953 and fell below 1 percent after 1960. A variety of United States-sponsored measures during the occupation, such as land reform, contributed to the economy's later performance by increasing competition. In particular, the postwar purge of industrial leaders allowed new talent to rise in the management of the nation's rebuilt industries. Finally, the economy benefited from foreign trade because it was able to expand exports rapidly enough to pay for imports of equipment and technology without falling into debt, as had a number of developing nations in the 1980s.
The consequences of Japan's economic growth were not always positive. Large advanced corporations existed side-by-side with the smaller and technologically less-developed firms, creating a kind of economic dualism in the late twentieth century. Often the smaller firms, which employed more than two-thirds of Japan's workers, worked as subcontractors directly for larger firms, supplying a narrow range of parts and temporary workers. Excellent working conditions, salaries, and benefits, such as permanent employment, were provided by most large firms, but not by the smaller firms. Temporary workers, mostly women, received much smaller salaries and had less job security than permanent workers. Thus, despite the high living standards of many workers in larger firms, Japan in 1990 remained in general a low-wage country whose economic growth was fueled by highly skilled and educated workers who accepted poor salaries, often unsafe working conditions, and poor living standards.
Additionally, Japan's preoccupation with boosting the rate of industrial growth during the 1950s and 1960s led to the relative neglect of consumer services and also to the worsening of industrial pollution. Housing and urban services, such as water and sewage systems, lagged behind the development of industry. Social security benefits, despite considerable improvement in the 1970s and 1980s, still lagged well behind other industrialized nations at the end of the 1980s. Agricultural subsidies and a complex and outmoded distribution system also kept the prices of some essential consumer goods very high by world standards. Industrial growth came at the expense of the environment. Foul air, heavily polluted water, and waste disposal became critical political issues in the 1970s and again in the late 1980s.
The Evolving Occupational Structure
As late as 1955, some 40 percent of the labor force still worked in agriculture, but this figure had declined to 17 percent by 1970 and to 7.2 percent by 1990. The government estimated in the late 1980s that this figure would decline to 4.9 percent by 2000, as Japan imported more and more of its food and small family farms disappeared.
Japan's economic growth in the 1960s and 1970s was based on the rapid expansion of heavy manufacturing in such areas as automobiles, steel, shipbuilding, chemicals, and electronics. The secondary sector (manufacturing, construction, and mining) expanded to 35.6 percent of the work force by 1970. By the late 1970s, however, the Japanese economy began to move away from heavy manufacturing toward a more service-oriented (tertiary sector) base. During the 1980s, jobs in wholesaling, retailing, finance and insurance, real estate, transportation, communications, and government grew rapidly, while secondary-sector employment remained stable. The tertiary sector grew from 47 percent of the work force in 1970 to 59.2 percent in 1990 and was expected to grow to 62 percent by 2000, when the secondary sector will probably employ about one-third of Japan's workers.
Source: U.S. Library of Congress | http://countrystudies.us/japan/98.htm | 13 |
22 | Mass grading for the Panama Canal enlargement project will
entirely destroy this island. source: C.Michael Hogan Habitat destruction is the alteration of a natural habitat to the point that it is rendered unfit to support the species dependent upon it as their home territory. Many organisms previously using the area are displaced or destroyed, reducing biodiversity. Modifying habitats for agriculture is the chief cause of such habitat loss. Other causes of habitat destruction include surface mining, deforestation, slash-and-burn practices and urban development. Habitat destruction is presently ranked as the most significant cause of species extinction worldwide. Additional causes of habitat destruction include acid rain, water pollution, introduction of alien species, overgrazing and overfishing.
A closely related concept is that of habitat fragmentation, where a habitat is separated into fragments that lack effect ecological connectivity, reducing the viability of some of the resident species. The fundamental driver of habitat destruction has been the unprecented human population explosion, which has been a unique event of a single species dominating natural systems of the Earth within the short time span of 10,000 years. The waves of habitat destruction are closely correlated with the numerical expansion of the human population as well as settlement incursions such as the Maori in New Zealand and the Europeans to North America.
The chief proximate causes of habitat destruction are:
- Conversion of natural habitat for agricultural use including crops and grazing activity
- Pollution, especially chemical herbicide and pesticide use, water pollution, air pollution and acid rain
- Urban development and infrastructure, including roads, power plants, desert solar arrays, pipelines and transmission lines
- Timber harvesting and slash-and-burn practices leading to deforestation
- Introduction of alien species
Clearfelling of monoculture alien species conifer forest.
Aberdeenshire, Scotland. Source: C.Michael Hogan Starting in the mid-Holocene and continuing to the present time, agriculture has been the predominant cause of habitat destruction. Conversion of natural habitat to crop production as well as to grazing has eliminated the expanse of much of the Earth's original habitat. For example, in Europe, over 85 percent of all natural habitat has been destroyed, mostly for agricultural practices. In principle, grazing could be consistent with grassland conservation; however, widespread overgrazing practices have resulted in extensive loss of natural habitat.
Pollution compromises and destroys habitats in numerous ways. Acid rain alters the pH of both watercourses as well as soils, thus fundamentally transforming the abiotic integrity of natural habitats. The change in pH levels alters the metabolic capability of both plants and animals, leading to reduced numbers or complete loss of entire species within the affected area. Similarly water pollution can dramatically alter the survival of species within an aquatic habitat. Air pollution impacts include dispersal of oxides of ntirogen and sulfur dioxide, which among other gas contaminants can alter metabolism, fitness and mortality of flora and fauna.
Latvian monocultural agriculture displaced native grassland
and forest: beautiful but ecologically disastrous.
Source: C.Michael Hogan While urban development represents very visible evidence of habitat destruction, it accounts for far less of the net damage compared to agricultural and deforestation causes. One of the prominent effects of this type of destruction is the habitat fragmentation effects of long linear projects, especially roadways that create permanent barriers to habitat continuity.
Any type of deforestation represents habitat destruction; the most significant forms of this destruction are clearcutting and slash-and-burn agriculture. These two practices are responsible for massive habitat losses in such places as Madagascar, Indonesia and Brazil.
Perhaps the most subtle form of habitat destruction results from invasive species, flora or fauna which generally are introduced by humans and crowd out native species. This phenomenon can occur on such a massive scale and progress sufficiently slowly that a fundamental transformation may occur in the form of relatively modest annual steps. An example of this phenomenon is the destruction of most of the California coastal prairie, resulting from introduction of exotic European and Asian grasses when European settlement began in earnest in the mid 1800s.
The fundamental driver: human overpopulation
Hong Kong has displaced virtually all the original ecosystem
where it stands. Source: C.Michael Hogan While there are a number of clearly defined processes leading to destruction of habitat, the underlying cause of all these is the human population explosion. Ironically the majority of the human population growth is situated within the greatest biodiversity hotspots. Specific statistical analysis demonstrates that 87.9 percent of varation in species endangerment can be explained by the single variable of human population density. Some researchers like to further break down the pressure of human overpopulation into components of behavior; in one sense this is a distraction from the fundamental reality of the causation. These behaviors and attributes consist of such descriptors as: (a) lack of family planning; (b) lack of secure property rights; (c) famine; (d) poverty and (e) lax enforcement of environmental statutes.
Prominent consequences of habitat destruction may include local or global extinction of species and thus biodiversity loss. In a anthropocentric context a major consequence is reduction of ecosystem services or loss of economic value of the environment to humans. Specific elements of these losses include: (a) topsoil erosion; (b) reduction in sustainable yields of fisheries, forests and other biotic resources; (c) loss of pollinators; (d) reduction in water quality due to sedimentation; (e) loss of carbon storage; (f) reduction of surface water resources and (g) loss of genetic materials that provide medicinal value. Reduction of usable water resource is compounded by pollution degradation pollution along with reduced retention of freshwater resources as natural soils and detritus are replaced with less pervious soils and even pavement.
Without regard to the inestimable value of species lost and aesthetic degradation, the brute economic toll of habitat destruction is massive. Economic losses to fisheries and agricultural productivity equates easily to hundred of billions of dollars (US) per annum. More significantly, the uprooting of food security for hundreds of millions of people is an intrinsic consequence of the topsoil and pollinator losses. The loss of food security is occurring in the very places that habitat losses are currently greatest, and where population growth is the highest, implying a near certainty of increasing famine and warfare in those regions as food and water conflicts exacerbate.
By habitat type
Extensive loss of all major habitat types has already occurred. Vast percentages of forest, grassland, chaparral, wetlands, desert and tundra have been eliminated by the actions of man.Some of these losses such as forest clearly and wetland filling are quite visible, where other destruction such as grassland loss is much more subtle; a monocultural grain crop may replace a robust biodiverse native prairie, or overgrazing by livestock may remove native forbs and grasses by excessive erosion effects, as well as the frequent effect of importing alien grasses as weed seed within hay brought in from other regions.
Deforestation worldwide amounts to approximately one third of the original forest stands lost, giving full credit to secondary regrowth. In terms of virgin forests destroyed the percentage of deforestation is clearly much higher. Today's chief threat is to tropical forests, since most temperate regions are either lost or in a state of effective conservation. Devastation of tropical forests has been intense in recent decades in Indonesia, Phillipines and more recently Amazonia. While estimates of present loss rates vary, most developing countries in the tropics have an annual rate of loss of their rainforests ranging from 0.4 to 4.7 percent.
Grasslands are even more vulnerable than forests to habitat destruction, since their occurrence is often on easily farmed topography that is inviting for grain cultivation or livestock. In addition, grassland losses are harmful in that their inherent biodiversity is quite high. Replacement of natural grasslands with monoculture cereal crops totally transforms the landscape to effectively produce a biological desert in its lack of biodiversity. Tropical grasslands are generally derivatives of cleared forests, but their Holocene evolution has witnessed remarkable speciation and biodiversity gain as well. Tropical grasslands are endangered by pressures for overgrazing as well as grain production to feed the burgeoning tropical human population. While European grasslands have endured losses for centuries, it has been the last two hundred years that North American grasslands have been severely decimated.
Chaparral losses are also some of the least appreciated, since this biome is not universally respected for its natural beauty and biodiversity. In fact, humans often view these areas in the western USA, Southern Africa or Chilean Mattoral as landscapes not useful for any purpose other than housing developments; this misconstrued use often results in residential areas threatened by both wildfire as well as ensuing slope instability. Beginning in the 1980s in the western USA, the chaparral biome has begun to be understood and protected. Heathlands in the British Isles are European analogs of chaparral; these also have begun to gain understanding and respect within the United Kingdom habitat preservation planning scheme.
Beginning at least 7700 years before present in China and continuing to the present time, coastal wetlands have been under attack by humans. The diverse types of wetland destruction include filling for human habitation, pollution, harvesting of mangroves for charcoal and other types of agriculture, especially rice farming. In the last two centuries there has been extraordinary pressure on coastal wetlands for residential uses to house the human population explosion as well as tourism uses; this development has been intense in the last two centuries in such places as the California coast, Sea of Cortez in Mexico, Panama's Bocas del Toro coast, Antigua and other Caribbean Islands and the Mediterranean coast. The California coastal wetlands have shrunk by 90 percent in 150 years time.
The desert biome is a special case where the inhospitable nature of that landscape has limited human pressures in such disparate areas as Mongolia, the southwestern USA, the Kalahari Desert and the South African Karoo; however, a combination of the human population explosion and wealth have broken some of these barriers, notably in Southern California and Arizona. Besides wanton destruction by suburban sprawl and off-road vehicles, a more modern threat faces some of these deserts in the form of well-intentioned solar power. While superficially environmentally friendly, this form of large scale photovoltaic arrays can harm large tracts of desert lands, and thus is being debated more carefully at the present time. The great irony of the desert biome is the process of desertification, which one might think is leading to greater areas of desert in North Africa and China; however, desertification of drylands actually leads to a biologically depauperate landscape that lacks the inherent values of dryland, scrub or desert.
Some natural events such as volcanic eruptions, hurricanes, flooding, forest fires and other disturbances can cause habitat loss; however, these factors produce a very small percentage of the total habitat loss over the past 10,000 years. Furthermore, these natural events can be viewed as elements of ecological succession, that are part of the evolutionary fabric of speciation. More importantly, natural causes tend to produce relatively minor swaths of destruction compared to the systematic destruction of habitat by human activities. For example, volcanic eruptions from K?lauea, one of the world's most active volcanoes, has covered about four square kilometers of land per annum over the last 27 year period of intense eruption; moreover, much of the land covered three decades ago has been substantially recolonized by pioneer vegetation in the cycle that has built this island of Hawaii. By contrast in the central highlands of Madagascar, over a similar time span, slash-and-burn destruction of previous rainforests decimated over 60,000 square kilometers, with the destruction being irreversible, owing to the subsequent loss of topsoil and soil nutrients. Similarly hurricanes and flooding do not destroy a total habitat, but cause disruption which can be viewed as a natural cycle of nature, which has endured for hundreds of thousands of years within the context of ecosystems which have persisted over that time..
The outlook for halting habitat destruction is not favorable when viewed on a worldwide basis. A number of countries, such as the USA, Canada, Belize, Botswana, Israel, United Kingdom, Sweden, New Zealand and Australia, have advanced efforts for analysis of habitat values and national programs for protection of natural areas. Developing countries including China, Pakistan, Indonesia, Cambodia, Venezuela and most of Africa have substantial deficiencies in food production, and hence are under great pressure to exploit remaining natural areas for subsistence agriculture as well as cash crops. Approximately 98 percent of the usable agricultural area of the Earth has already been developed, so that enormous pressure will exist in the next four decades as the human population is expected to expand by antoher three billion people.
- ^ Stuart L.Pimm and Peter Raven. 2000. Biodiversity: Extinction by numbers. Nature 403: 843-845
- ^ J.K. McKee, P.W. Sciulli, C. D. Fooce, and T. A. Waite. 2003. Forecasting global biodiversity threats associated with human population growth. Biological Conservation 115: 161-164
- ^ Sharon L.Spray and Matthew David Moran. 2006.Tropical deforestation. 193 pages
- ^ Masae Shiyomi and Hiroshi Koizumi. 2001. Structure and function in agroecosystem design and management. 435 pages
- ^ M.Gerardo, E.Perillo, Eric Wolanski, Donald R. Cahoon and Mark M.Brinson. 2009. Coastal wetlands: an integrated ecosystem approach. 941 pages
- ^ Gordon L.Maclean. 1996. Ecophysiology of desert birds. 181 pages
- ^ E.W.Sanderson, M. Jaiteh, M. A. Levy, K. H. Redford, A. V. Wannebo, and G. Woolmer. 2002. The human footprint and the last of the wild. Bioscience 52(10): 891-904. | http://www.eoearth.org/article/Habitat_destruction?topic=49513 | 13 |
17 | When does a shortage—or excess—of water become a disaster rather than a temporary inconvenience? The answer may be obvious in prolonged drought or widespread flooding, but not so clear-cut in communities already at health risk from poverty, poor sanitation and limited coordination of health care and other services. Such communities may be unable to restore normality after a water emergency.
Water disasters can be sudden, as in flooding, or progressive and long lasting, as in drought. This affects both the way the disaster is identified and managed, and the timescale of the health effects. The health effects can be classified as:
The long-term health effects of water disasters are usually due to the lack of prompt restoration of public health services and interventions, with the resulting risk of epidemics and other ill health. The health effects of climate phenomena such as the El Niño Southern Oscillation (ENSO) also tend to develop gradually. The recent ENSO during 1997-8 was particularly severe in its effects: the associated natural disasters affected an estimated 160 million people (WHO, 1999). The slow time scale of droughts means that ill health may not be identified until the drought has persisted for months, affecting food supplies as well as the water needed to maintain health.
Whether a water emergency turns into a disaster depends on whether the community can take effective measures without external assistance. One working definition of a disaster is that it causes at least 10 deaths or results in an appeal for outside assistance (WHO 1999). Whatever the definition, disasters involving water are increasing. In recent decades there has been an increase in the numbers of deaths and the numbers of people affected by weather disasters such as droughts and floods (WHO 1999). Climate change appears to be responsible for at least some of this increase: and while global warming has been acknowledged, the term is misleading because it leaves out the key element of water. Floods are the second most frequent cause of natural disaster, after windstorms. The largest cause of deaths through natural disaster is drought, because of the associated severe food insecurity: examples include the high death rate in Sahelian people in Africa in the early 1970s and mid-1980s - and droughts are not only increasing, but also lasting longer.
Too much water: the health effects of floods
The early health effects of floods include death through drowning and accidents such as falls, electrocution and the effect of landslides. People lose their homes and often also lose their source of food and water. If the drinking water supply and sanitation system is already inadequate, flooding poses a further major health threat. Sanitation is a major problem in all flooded areas, as demonstrated by recent floods in Mexico, Ghana and Mozambique(Box 1). Industrial waste, such as engine oil and refuse dumps, adds to the health risks. In tropical countries, the floodwaters provide an ideal breeding ground for mosquitoes and an increased risk of diseases such as dengue, malaria and Rift Valley Fever. They also displace rodent populations, which may cause human outbreaks of leptospirosis and hantavirus infection. The combined effects of open sewage and reduced opportunities for good personal hygiene also favour the spread of infections causing diarrhoea, such as cholera and gastrointestinal viruses. Flooding in the horn of Africa in 1997, associated with ENSO, caused an upsurge in cholera deaths due to the lethal combination of damage to sanitation and contamination of water supplies (WHO, 1998). During flooding in Bolivia and Peru in the mid 1980s, increases in diarrhoeal diseases and acute respiratory diseases were recorded (WHO, 1999). Prolonged heavy rainfall causes less deaths than floods, but the infection risk is just as high in areas of poor sanitation: cholera showed a marked increase after heavy rains in Tanzania, Kenya, Guinea-Bissau, Chad and Somalia in 1997 (WHO 1998).
Box 1: Flood stories 2000: Mexico, Ghana and Mozambique
Mexico: In June 2000, heavy rains ruptured the wall of an open sewer in Mexico. This forced 6,000 people out of their homes in the low-income areas of Chalco valley. Although emergency shelters were available, many residents camped on their roofs to protect their homes from looting. Residents blamed the spill on the local authorities for failing to install piped sewerage.
In northern Ghana, clean drinking water became scarce three months after severe floods. Water sources had been polluted by tons of untreated human and industrial waste. More than 200 dams, wells and boreholes in the upper West Region were reported to be polluted with sewage and used engine oil. In addition, the floodwater had submerged refuse dumps due to rising river levels. The costs of the flooding raised dramatically due to the need to resettle people in other areas and to rehabilitate the polluted dams in the three northern regions.
The widespread floods in Mozambique in February 2000 made international headlines. Coupled with the lack of access to adequate sanitation and drinking water, nearly 800,000 people were put at increased risk of infectious diseases. The dam management was criticised, for example with claims that water had not been released in time, but it is possible that, with such overwhelming floods, better dam management would have had only slight effects. The key issues were to strengthen existing monitoring and early warning systems, to control settlement of flood plains and promote activities to limit human and economic casualties and a new flood is threatening in 2001.
As the examples in Box 1 show, flooding affects health indirectly through the widespread damage to the infrastructure of a community: its roads, buildings, equipment, drainage, sewerage and water supply systems. For example, during flooding in Peru in 1997/98, nearly a tenth of the health facilities were damaged: in Ecuador during the same period 2% of the hospitals were put out of action by a combination of flooding, mud, damage to sewerage systems and contamination of the drinking water supply. The mental distress of flooding may persist long after the floods have receded, because people have lost their homes, their livelihood and their confidence. The severe flooding in China in 1998 killed more than 5,500 people and left at least 21 million homeless (Kriner, 1999).
Too little water: droughts
“There it overtook me that I fell down for thirst, I was parched, my throat burned, and I said, “This is the taste of death”
Anonymous sufferer of drought in Ancient Egypt
According to an old French proverb, they that are thirsty drink silently. They also die silently: drought is a major cause of death worldwide and accounts for about half of the victims of natural disasters (WHO, 1999). Death is mainly due to lack of food and worsening of pre-existing malnutrition, but also through other pathways (Figure). In hot countries or during heat waves associated with drought, mortality may also be directly related to a combination of heat and water shortage.
Food production, such as the grain harvest, is particularly at risk in arid regions with a seasonal rainfall pattern. Loss of livestock is also a major problem during drought: arid regions, such as in SE Asia are particularly vulnerable (Box 2). Early warning systems such as climate forecasts can help communities to prepare for drought: other important early actions include coordinating the supply of water and any necessary rationing, as in the recent drought in Kenya (Box 3). While famine is the biggest killer in drought, the health effects include increased malaria and forest fires. In Venezuela and Colombia, malaria cases increased by more than a third following dry periods associated with ENSO; a fourfold increase in malaria was documented in south west Sri Lanka during ENSO (WHO, 1999). Drought has a major impact on infection because there is less water available for drinking and for personal hygiene: in addition to increasing diseases such as trachoma and scabies, people are more likely to risk drinking unsafe water and its load of infection. Studies have shown than in times of shortage, people tend to use water for cooking rather than for hygiene (WHO, 1999).
Box 2: Drought in SE Asia
In 2000, loss of livestock due to a serious drought in South Asia and the Near East caused the death of many people. In southern Afghanistan, the entire population (300,000 families) of the Registan desert fled when their water sources dried up. In Pakistan, the drought in Baluchistan and Sindh provinces were reported to be the worst in the country's history. This has led to indirect appeals to India to help battle the drought. Forty years ago Pakistan and India signed the Indus Treaty, to officially recognise that the Indus River is the main source of water for both countries. In Iran, 18 of the counties and 28 provinces also faced a severe drought. The Tigris and Euphrates rivers in Iraq also dropped to about 20% of their average flow.
Box 3: Drought in Kenya
Kenya experienced its worst drought in 40 years in 2000 and the President claimed it put 80% of the Kenyan population at risk. The World Food Programme (WFP) confirmed that 3.3 million people were seriously affected. Acute shortages of food, water and insecurity forced 15 primary schools in Kenya's north-central Samburu district to close, according to the local district education officer. Nairobi City Council had to ration water in the city from May.
With water already in short supply in many countries, it is not surprising that drought can also lead to water being used as a political tool, for example the water bargaining in Central Asian countries (Box 4). Water politics also affect attempts to relieve drought in countries in the throes of civil war, such as Ethiopia (Box 5).
Box: 4 Drought in Central Asia and water as a political tool
Shared water resources in the drought-affected nations of Central Asia have been used to bargain between countries. For example, in 1999 Kyrgyzstan succeeded in getting much needed coal from Kazakhstan after closing down water reservoirs. In 2000, Uzbekistan cut water supplies to Kazakhstan, citing non-payment of debt. Kazakhstan asked Tajikistan to release more water to Uzbekistan, in return for Uzbek electricity. The aim of this exchange was the hope that water flowing to Uzekistan would be likely to also flow to Kazakhstan. Meanwhile, a proposed Chinese water diversion project involving the Ertis (Irtysh) River poses more water problems for Kazahstan, as the river provides the drinking water for the industrial northeast region of the country.
Box 5: Water shortages due to drought and war in Ethiopia
A long-term drought as well as the effects of civil war has afflicted the Somali region of Ethiopia. The drought affected 8.3 million people. Heavy rainfall in April 2000 brought some relief, but the continuing conflict has restricted efforts to sort out the water supplies and sanitation in the region. The aid programme includes the installation of water reservoirs and digging or repairing wells. Emergency repairs have been conducted by UNICEF, associated with the distribution of water treatment chemicals and jerry cans throughout the hardest hit areas. ‘Donor fatigue’ and cynicism about the use of aid is a serious barrier to the international relief efforts to improve water supplies in the region.
Flooding after storms
Severe storms and hurricanes may cause water-related health problems through both excessive precipitation (heavy rains) and disruption of the water and sanitation infrastructure. Hurricane Mitch in 1998 was one of the worst recent natural disasters, estimated to have killed more people than any Atlantic hurricane in the last 200 years (NCDC, 1999). More than 3 million people were left homeless and at least 11,000 dead, with thousands of others reported missing. The storm started in late October, producing enormous amounts of precipitation as it travelled west, caused in part by the mountains of Central America. Floods and mudslides destroyed the entire infrastructure of Honduras and devastated parts of Nicaragua, Guatemala, Belize and El Salvador. As well as loss of many homes, approximately 4.5 million people in Honduras (75% of the population) were left without access to clean drinking water: around 1500 rural water mains were destroyed. Following this major natural disaster there were also critical shortages in medicine and food supplies. Fever and respiratory illness were widespread and in the following months increases in malaria, dengue and cholera were reported (NCDC 1999).
Reducing the effects of water disasters
Although the epidemiology of floods and droughts has been well studied (Noji, 1997), we still know more about the effects than how to prevent disasters occurring. Also, the death toll and immediate effects of sudden disasters tend to be given more emphasis than the gradually unfolding ill health following a disaster, or during a lasting disaster such as drought. Countries can reduce the effects of floods and droughts in two main ways: by being prepared for a disaster, and by disaster mitigation.
The aim of disaster preparation is to be able to reduce the immediate mortality and morbidity with a better prepared, well equipped service. The preparation includes early warning systems for seasonal changes in climate, the ENSO, and risk of flood or drought, such as electronic information systems and satellites that can provide information over large regions and continents. Separate systems are needed to cater for the agricultural sector, cities and people in rural or remote communities. The public health infrastructure is particularly important for the immediate measures needed and for public information on reducing the health risks. Being prepared also means thorough disaster contingency plans, covering emergency housing, repairs, replacement of essential equipment and protection of the most vulnerable people in the community: the sick, the very young and the old. Improvement of water supply and sanitation systems is an important way of reducing the effects of a water disaster: countries with a good infrastructure for drainage and disposal of human waste and adequate water supply facilities have far fewer direct health problems during water-related disasters Sanitary inspections are an important tool to assess the water supply and sanitation facilities and these should be conducted systematically. The logistics of the predicted need for health and social services also need to be laid down in advance, including early warning systems to detect health effects. Planning should be on a regional, national and international level and include planning for climate change: as global warming and its water effects will increase the frequency of water disasters. Finally, public information and education can serve two purposes in preparing for disasters: ensuring early warnings to communities at risk; and giving information about how to conserve water and keep it safe from contamination.
Once a disaster has occurred, or has been identified, all the measures in disaster preparedness will be needed; if they are not in place, outside help will probably be required. At the least, the mitigation efforts must include:
Both disaster preparedness and its mitigation require multisectoral cooperation and joint planning. Both need evaluation after a disaster to reduce the ill effects of later crises. While our world is never likely to be free of water disasters, there is much that can be done to minimise their health effects.
- IRIN, 23 June 2000
- The Nation, 25 May 2000
- Environment News Service, 3 Feb 2000
- African Eye News Service, 21 Feb 2000
- Floods and dam management in Mozambique, New Scientist, 22 April 2000
Noji E, The public health consequences of disasters. Oxford: Oxford University Press, 1997
WHO, Cholera in 1997. Weekly Epidemiological Record, 1998; 73: 201-8
WHO, El Niño and its health impacts. Weekly Epidemiological Record, 1998; 73: 148-152
WHO, El Niño and Health. Task Force on Climate and Health. Geneva: WHO, 1999. WHO/SDE/99.4 | http://www.who.int/water_sanitation_health/hygiene/emergencies/floodrought/en/print.html | 13 |
105 | Balance of payments
The Balance of Payments (BOP) is a measure of all the financial transactions flowing between one country and all other countries during a specific period, usually a quarter or a year. It is also the name of the official record of these transactions. A positive, or favorable, balance of payments is one in which more payments have come in to a country than have gone out. A negative or unfavorable balance means more payments are going out than coming in.
The BOP is a major indicator of a country's status in international trade, and a reflection of its economic well-being or vulnerability. The balance of trade is one component of the balance of payments. It is also a sign of the productiveness of a people and a reflection of whether they are primarily producers or consumers.
Producing nations tend to grow, while consuming nations eventually deplete their resources and collapse as fewer and fewer people are able to access what remains.
Within any country, the BOP record comprises three "accounts": the current account, which includes primarily trade in goods and services (often referred to as the balance of trade), along with earnings on investments; the capital account, including transfers of non-financial capital such as debt forgiveness, gifts and inheritances; and the financial account, essentially trade in such assets as currencies, stocks, bonds, real estate, and gold, among others.
Each of these components is further divided into subcomponents. Thus, for example, the current account comprises trade in merchandise, trade in services (such as tourism and legal services), income receipts such as dividends, and unilateral transfers of money, including direct foreign aid. (To economists, the current account is the difference between exports and capital inflows on the one hand, and imports and capital outflows on the other.)
Likewise, the capital account includes such "transfers" as debt forgiveness, money that migrant workers take home with them when they leave the country or bring with them as they enter the country, and sales and purchases of natural resources. The financial account consists both of assets owned abroad, and of foreign-owned assets within the country.
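As a rough sketch of this three-account structure, the snippet below arranges the accounts and a few of the subcomponents described above into a nested Python dictionary. The grouping follows the article's description; every figure is an invented placeholder, not a real statistic.

```python
# Illustrative only: the three balance-of-payments accounts and some of
# their subcomponents, with invented figures in billions of a
# hypothetical currency.
balance_of_payments = {
    "current_account": {
        "merchandise_trade": -120.0,    # goods exports minus goods imports
        "services_trade": 35.0,         # e.g. tourism, legal services
        "income_receipts": 30.0,        # dividends and other investment income
        "unilateral_transfers": -10.0,  # e.g. foreign aid, remittances
    },
    "capital_account": {
        "debt_forgiveness": 2.0,
        "migrant_transfers": 3.0,
        "natural_resource_sales": 1.0,
    },
    "financial_account": {
        "change_in_foreign_owned_domestic_assets": 160.0,
        "change_in_domestic_owned_foreign_assets": -105.0,
    },
}

def account_balance(subcomponents):
    """Sum the subcomponents of a single account."""
    return sum(subcomponents.values())

for name, parts in balance_of_payments.items():
    print(f"{name}: {account_balance(parts):+.1f}")
```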
In the financial account, if foreign ownership of domestic financial assets has increased more quickly than domestic ownership of foreign assets in a given year, then the domestic country has a financial account surplus. On the other hand, if domestic ownership of foreign financial assets has increased more quickly than foreign ownership of domestic assets, then the domestic country has a financial account deficit. The United States persistently has the largest capital (and financial) account surplus in the world, but as of 2006 it also had a large current account deficit. To a significant extent, this reflects the fact that the United States imports far more than it exports.
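That surplus-or-deficit test is just a comparison of two year-on-year changes; a minimal sketch with invented figures:

```python
# Year-on-year changes in asset holdings, in billions (invented figures).
rise_in_foreign_owned_domestic_assets = 160.0
rise_in_domestic_owned_foreign_assets = 105.0

net = rise_in_foreign_owned_domestic_assets - rise_in_domestic_owned_foreign_assets
if net > 0:
    print(f"Financial account surplus of {net:.1f} billion")
elif net < 0:
    print(f"Financial account deficit of {-net:.1f} billion")
else:
    print("Financial account exactly in balance")
```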
Taken together, the capital and financial accounts consist of "capital transfers, direct investments [in which the investor has a permanent interest], portfolio investments [stocks, bonds, notes and the like] and other forms of investment [financial derivatives, loans, etc.]."
The method of recording these payments explains the "balance." As payments leave or enter a country, perhaps to finance a purchase or to invest in a foreign corporation, each transaction is recorded both as a debit and as a credit, in accordance with the double-entry bookkeeping that is standard business accounting practice. For example, when a country or any of its citizens buys a foreign good, such as furniture, that purchase is treated as an increase in the asset of furniture. By convention, that is recorded as a debit entry in the books of the current account (i.e., on the left side of the ledger). At the same time, the entry is countered, or balanced, by a decrease in the asset of money, which is recorded as a credit entry (on the right side of the ledger) in the capital account.
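A minimal sketch of that double-entry convention, using the furniture example with an invented amount of 5,000. The offsetting credit is placed in the capital account to follow the article's wording; newer IMF manuals would book it in the financial account.

```python
# Debit entries sit on the left side of the ledger, credit entries on
# the right; every transaction is recorded twice so the books balance.
ledger = []

def record(account, side, description, amount):
    """Append one ledger entry (one half of a double entry)."""
    ledger.append({"account": account, "side": side,
                   "description": description, "amount": amount})

# A resident buys 5,000 worth of foreign furniture:
record("current account", "debit",
       "furniture import (increase in the asset of furniture)", 5_000)
# Offsetting entry, following the article's wording:
record("capital account", "credit",
       "payment abroad (decrease in the asset of money)", 5_000)

debits = sum(e["amount"] for e in ledger if e["side"] == "debit")
credits = sum(e["amount"] for e in ledger if e["side"] == "credit")
assert debits == credits  # the two sides always balance
```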
Credits and debits
In brief, according to the International Monetary Fund, a country "records credit entries for (a) exports of goods, provision of services, provision of the factors of production to another economy, and (b) financial items reflecting a reduction in the [country's] external assets or an increase in external liabilities." Likewise, it records debit entries for "(a) imports of goods, acquisition of services, use of production factors provided by another economy, and (b) financial items reflecting an increase in assets or a decrease in liabilities."
Therefore, the current account should always balance, or equal, the sum of the capital and financial accounts. For example, when a country "buys more goods and services than it sells [resulting in] a current account deficit, it must finance the difference by borrowing, or by selling more capital assets than it buys [resulting in] a capital account surplus. A country with a persistent current account deficit is, therefore, effectively exchanging capital assets for goods and services."
In practice, however, perfect balancing is not always the case, given "statistical discrepancies, accounting conventions, and exchange rate movements that change the recorded value of transactions."
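As a rough illustration of that identity and of the discrepancy just mentioned, the sketch below uses invented figures; sign conventions vary between editions of the IMF manual, so this is only meant to show the idea, not a reporting standard.

```python
# Hypothetical annual figures, in billions (invented for illustration).
current_account = -95.0     # deficit: more goods and services bought than sold
capital_account = 5.0       # e.g. debt forgiveness, migrant transfers
financial_account = 92.0    # net sales of capital assets to foreigners

# In principle the capital and financial accounts finance the current
# account, so the three should sum to roughly zero.  The residual is
# what published statistics report as "errors and omissions," i.e. the
# statistical discrepancy mentioned above.
total = current_account + capital_account + financial_account
discrepancy = -total
print(f"Sum of the three accounts: {total:+.1f}")
print(f"Statistical discrepancy:   {discrepancy:+.1f}")
```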
Prices and currency issues
The value of each balance of payments transaction is measured largely by market prices, or the prices actually paid between a buyer and a seller, rather than the price that is officially quoted. Those prices, in turn, are usually recorded in terms of a country's domestic currency. However, for international comparisons, economists use a more stable or solid currency, such as the U.S. dollar.
Currency strength, therefore, is one of several factors influencing a nation's balance of payments, and indeed its overall economy. (Other factors include degree of industrialization, education and skill levels of labor force, stability of government, etc.) For example, if a domestic currency is "over valued [relative to other currencies], the balance of payments would be in deficit, money would be reduced, and deflation would be imposed, bringing in its wake unemployment. On the other hand, if a currency is undervalued, balance of payments surplus would produce inflationary pressure that could change expectations and set in motion a wage explosion that might overshoot the equilibrium."
Data from the balance of payments, along with information from a country's International Investment Position (a record of the nation's stock of outstanding foreign assets and liabilities), are useful as indicators for economic policy makers. For example, a current account deficit, which usually reflects an imbalance between imports and exports, may suggest a policy "directed to increase competitiveness in the global market for local products and/or develop new industries that will produce import substitutes," or a policy focused on currency exchange rates, such as devaluation.
Likewise, a steep current account deficit can lead policy makers to impose tariffs, which effectively slow imports, or to lower interest rates, which enable domestic manufacturers to lower their own prices and so compete better with imports. Other measures suggested by payments imbalances might include restrictive monetary and fiscal policies, or increased borrowing.
IMF Balance of Payments Manual
The Balance of Payments Manual, published by the IMF, provides accounting standards for balance of payments reporting and analysis in many countries. The Bureau of Economic Analysis adheres to this standard.
The sixth edition was released in prepublication form in December 2008. Its title has been amended to Balance of Payments and International Investment Position Manual to reflect that it covers not only transactions, but also the stocks of the related financial assets and liabilities.
The following list of countries and territories by current account balance (CAB) is based on International Monetary Fund data for 2007, obtained from the World Economic Outlook database (October 2008). Numbers for 2008 should become available in April 2009.
| Rank | Country | CAB (USD, bn) |
|------|---------|---------------|
| 1 | People's Republic of China | 371.833 |
| 11 | United Arab Emirates | 39.113 |
| 34 | Trinidad and Tobago | 5.380 |
| 55 | Papua New Guinea | 0.259 |
| 72 | São Tomé and Príncipe | -0.044 |
| 79 | Central African Republic | -0.075 |
| 87 | Saint Vincent and the Grenadines | -0.147 |
| 88 | Saint Kitts and Nevis | -0.150 |
| 94 | Democratic Republic of the Congo | -0.191 |
| 97 | Antigua and Barbuda | -0.211 |
| 135 | Republic of the Congo | -1.479 |
| 142 | Bosnia and Herzegovina | -1.920 |
| 181 | United States | -731.214 |
- Current account
- Capital account
- Balance of trade
- Floating currency
- Capital surplus
- International investment position
- Foreign exchange reserves
- Sovereign wealth fund
- Money supply
- United States public debt
- FRED (Federal Reserve Economic Data)
- Pink Book
- Milton Friedman
- Fedpoint: "Balance of Payments," Federal Reserve Bank of New York. Retrieved May 30, 2009.
- OECD Glossary of Statistical Terms, oecd.org. Retrieved May 30, 2009.
- Fedpoint.
- Economic Report of the President 2006, Chapter 6, "The US Capital Account Surplus," usembassy. Retrieved May 30, 2009.
- IMF, 2007, "Balance of Payments Statistics Yearbook 2007," Part 2, Table A2.
- Michael R. Darby, May 1990, "The Balance of Payments of the United States," U.S. Department of Commerce, Washington, DC, page 26. Retrieved May 30, 2009.
- Norman S. Fieleke, October 1996, Federal Reserve Bank of Boston, What is the Balance of Payments? Retrieved May 30, 2009.
- International Monetary Fund, 1996, Balance of Payments Textbook (IMF, Washington, DC), page 3.
- Fedpoint.
- Fedpoint.
- IMF Balance of Payments Textbook, page 5.
- Robert Mundell, "Exchange Rate Arrangements in the Transition Economies," in Mario I. Blejer and Marko Skreb (eds.), Balance of Payments, Exchange Rates, and Competitiveness in Transition Economies. Boston: Kluwer Academic Publishers, 1999. books.google. Retrieved July 10, 2009.
- "Balance of Payments," June 2008, Bangko Sentral ng Pilipinas (Central Bank of the Republic of the Philippines). Retrieved May 30, 2009.
- "Balance of Payment Policies," Biz.Ed. Retrieved May 30, 2009.
- Current account balance, U.S. dollars, billions, from IMF World Economic Outlook Database, October 2008.
- Current account balance, U.S. dollars, billions, from IMF World Economic Outlook Database, October 2008.
- Anderton, Alain. Economics Third Edition. Causeway Press, 2000. ISBN 9781902796116
- Balance of Payments Statistics Yearbook 2007. Washington, DC: International Monetary Fund, 2007. ISBN 9781589066564
- Bamford, C. G. AS and A Level Economics. Cambridge University Press, 2002. ISBN 9780521007818
- Begg, David, Stanley Fischer, and Rudiger Dornbusch. Economics. McGraw-Hill Education, 2008. ISBN 9780077119669
- Frank, Ellen. "Where Do U.S. Dollars Go When the United States Runs a Trade Deficit?" Dollars & Sense Magazine. Retrieved May 30, 2009.
All links retrieved December 8, 2012.
- Balance of Payments Federal Reserve Bank of New York.
- Sixth Edition of the IMF's Balance of Payments and International Investment Position Manual, December 2008.
| http://www.newworldencyclopedia.org/entry/Balance_of_payments | 13
16 | What is the auditory system?
Figure 10: Diagram of the auditory system
The auditory system performs the functions of hearing. The sense of hearing is a fine-tuned, intricate process. A sound wave is collected by the outer ear and sent through the middle ear into the inner ear, where the auditory hair cells are located. Once the auditory hair cells in the inner ear are stimulated by sound waves, an electrical signal is generated and transmitted from these hair cells to the auditory nerve (also called cranial nerve VIII). From the auditory nerve, this signal is finally sent to the brain and processed. Hearing loss may be present if part of the outer ear, middle ear, inner ear or auditory nerve is damaged or missing.
What hearing abnormalities can be seen in children with PHACE syndrome?
Hearing loss is a relatively new finding associated with PHACE syndrome. The hearing loss is most often unilateral (on one side) and ipsilateral (the same side) to the hemangioma located on a child's face. Radiologic imaging studies have attributed this hearing loss in PHACE syndrome to intracranial hemangiomas affecting various auditory structures (see Intracranial Hemangioma section).
The 3 types of hearing loss associated with PHACE syndrome are conductive, sensorineural and mixed hearing loss.
- Conductive hearing loss occurs when sound is not conducted efficiently through the outer ear canal to the eardrum and the tiny bones, or ossicles, of the middle ear. The most common cause in PHACE syndrome is an intracranial hemangioma that occludes the Eustachian tube (part of the middle ear). Conductive hearing loss usually involves a reduction in sound level or a loss of the ability to hear faint sounds.
- Sensorineural hearing loss occurs when there is damage to the inner ear or to the nerve pathways from the inner ear to the brain. An intracranial hemangioma that affects the auditory nerve can lead to this type of hearing loss in PHACE syndrome. Sensorineural hearing loss is usually permanent.
- Mixed hearing loss is a combination of conductive and sensorineural hearing loss. Although this has been seen in PHACE syndrome, it is thought to be caused by factors other than the PHACE syndrome itself (i.e. coincidence).
How are hearing abnormalities diagnosed and treated?
Children with PHACE syndrome will generally pass their initial newborn hearing screen, but may develop problems during the first year of life. Suspicions of hearing loss should be brought to the attention of the primary care physician. A referral to an otolaryngologist (ear-nose-throat doctor) or audiologist (hearing specialist) may be needed. If a child has hemangiomas on or around the ear and has been diagnosed with PHACE syndrome, a hearing test should be repeated during the first year of life.
If hearing loss is determined to be attributed to an intracranial hemangioma in the child, treatment to minimize the growth of the hemangioma can be started. Assistive devices are also available to improve hearing.
| http://www.chw.org/display/PPF/DocID/48522/Nav/1/router.asp | 13
41 | The terms deficit and debt are sometimes used loosely by politicians. It may even be politically expedient to use the terms sloppily.
If a government spends less than its income, the government is said to be operating at (or "running") a surplus. If a government spends more than it receives, it is running a deficit. If its income and expenditure are equal, the government has a balanced budget.
If a government balances its budget, and it wishes to dispense more goodies to one sector of its voters, it must either increase taxes or reduce the amount it spends on some other sector of its voters. Neither higher taxes nor reduced handouts are electorally popular, so governments have a disincentive to balance their budgets or run a surplus. Most western governments run a deficit in most years.
If, in a year, a government spends a hundred billion dollars more than it takes in, it is said to run a deficit for that year of a hundred billion dollars. This can be expressed as a percentage of the country’s total monetary income (its GNP, or Gross National Product). So, if the country’s GNP for that year was one trillion dollars, a deficit of a hundred billion dollars would be 10%.
The government can fund its deficit by borrowing from people or businesses. It’s fashionable to refer to this as selling debt rather than borrowing money, but both terms mean the same. Or, the government can borrow money from (sell debt to) its central bank, which can create new money to lend to the government (to purchase the debt with). Creating new money is essentially a tax on those with savings, because by increasing the total amount of money in circulation it debases the value of each unit of that money. This is the process of inflation.
Suppose a government runs a deficit of 10% for five years in a row. Ignoring interest and economic growth, the government is now in debt to the tune of 50% of the nation's GNP. This means that if the government were to spend no money, and if it taxed all the income of all the people, the government could pay back its debt in half a year. Of course, debt repayment can never happen at that speed, because people still need a certain amount of money for basics such as food, clothing, housing and energy.
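The arithmetic behind that 50% figure is simple accumulation. The sketch below reproduces it with a hypothetical GNP of 1,000 billion, ignoring interest and economic growth just as the text does.

```python
# A 10%-of-GNP deficit every year for five years, with no interest and
# no GNP growth (the same simplification as in the text).
gnp = 1_000.0           # hypothetical GNP, in billions
deficit_ratio = 0.10    # each year's deficit as a share of GNP
debt = 0.0

for year in range(1, 6):
    debt += deficit_ratio * gnp
    print(f"Year {year}: debt = {debt:.0f} billion ({debt / gnp:.0%} of GNP)")
# After year 5 the debt stands at 50% of GNP.
```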
A government can borrow essentially unlimited sums from its central bank, so there is little motivation for the government to pay back the debt. Many countries routinely “roll over” their government debt as it becomes due for repayment. However, interest must continue to be paid on that debt. As the debt mounts up, the interest payments can become a big drain on the nation’s wealth.
For example, if a country has debts equal to 100% of its GNP (as many countries do), and the average interest rate on that debt is 6%, then the government must pay 6% of the nation’s income each year just to service the debt. If the government taxes, say, half of the income generated by its people, then the interest payments will represent 12% of the government’s budget. This forces a choice between government austerity, higher taxes, profligate further borrowing, or a default on some or all of the debt.
None of these options is appealing, yet governments have been forced to do them many times in the past, and many governments are deep enough in debt that they are today faced with these choices.
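To make the interest arithmetic of the example above explicit, here is the same calculation as a small sketch, with the assumed figures (all hypothetical) spelled out:

```python
# The worked example from the text: debt equal to 100% of GNP, a 6%
# average interest rate, and a government that taxes half of all income.
gnp = 1_000.0                # hypothetical national income, in billions
debt = 1.00 * gnp            # debt at 100% of GNP
interest_rate = 0.06
tax_share = 0.50

interest_bill = interest_rate * debt       # 60 billion per year
government_revenue = tax_share * gnp       # 500 billion per year

print(f"Interest as a share of GNP:        {interest_bill / gnp:.0%}")                 # 6%
print(f"Interest as a share of the budget: {interest_bill / government_revenue:.0%}")  # 12%
```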
There is an additional source of debt which is usually not included in the headline debt figure: debt due to unfunded future liabilities. If the government makes a promise of future pensions, or a promise that it will pay for future medical care for its citizens, those costs must be met in the future (or else the government must default on its promises). If money has not been put aside for future liabilities, these are said to be unfunded, and the amount of those liabilities must be added to the government debt to arrive at a true figure representing the government’s liabilities.
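Purely as an illustration of how unfunded promises change the headline figure, with every number invented:

```python
# Headline debt plus the present value of unfunded future promises.
headline_debt = 800.0         # e.g. 80% of a 1,000-billion GNP
unfunded_pensions = 300.0     # invented present value of pension promises
unfunded_healthcare = 250.0   # invented present value of care promises

true_liabilities = headline_debt + unfunded_pensions + unfunded_healthcare
print(f"Headline debt:          {headline_debt:.0f} billion")
print(f"Liabilities, all told:  {true_liabilities:.0f} billion")
```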
Government debt is an obligation that will fall onto future generations. It is essentially a tax on those who are not yet old enough to vote, and on unborn children, and probably on the children of those as-yet-unborn children. The ability of any government to meet its future obligations depends on the willingness of future generations to be taxed enough that the debt which has been passed on to them can be repaid.
When a politician says “we have reduced the deficit”, it doesn’t mean that the debt is being reduced. It means there is still a deficit, and that the debt is still growing. It’s just that the deficit is a bit smaller than it was last year, and that the debt is growing a little more slowly.
For government debt to be reduced, the government must be “in surplus”. When a politician says “the budget will be in balance within five years”, this means that the national debt will continue to increase over those five years. And after waiting patiently for those five years, don’t be surprised to hear a politician saying “the budget will be in balance within five years”.
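The difference between a shrinking deficit and a shrinking debt shows up clearly in a small simulation. The numbers below are invented; the only point is that the debt keeps growing for as long as any deficit remains.

```python
# A deficit that is "reduced" by 20% a year still adds to the debt
# every single year.
debt = 500.0       # starting debt, in billions (invented)
deficit = 100.0    # this year's deficit, in billions (invented)

for year in range(1, 6):
    debt += deficit                  # any deficit at all increases the debt
    print(f"Year {year}: deficit {deficit:6.1f} -> debt {debt:7.1f}")
    deficit *= 0.8                   # next year's deficit is 20% smaller
# The debt only begins to fall once the budget moves into surplus.
```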
| http://quezi.com/15122 | 13